{
"arxiv_id": "2302.14253",
"language": "en",
"timestamp": "2023-03-01T02:06:57",
"url": "https://arxiv.org/abs/2302.14253",
"yymm": "2302"
}

\section{Introduction}
\label{sec:intro}
Ultracold mixtures of quantum gases are a fascinating tool to study in detail
the intricate physics of many-body problems~\cite{bloch_many-body_2008} with
both high controllability and high precision. The high degree of control
extends to both dimensional control by means of optical
lattices~\cite{bloch_ultracold_2005} and interaction control by use of
Feshbach resonances~\cite{inouye_observation_1998, chin_feshbach_2010}.
Generally speaking, the realm of many-body physics extends far beyond the
field of ultracold atomic physics and serves as an interface to interconnect
the different research disciplines in physics. Amongst many possible examples
the Efimov effect stands out for its broad
applicability~\cite{naidon_efimov_2017}. Originally investigated by Vitaly
Efimov in his work on nuclear three-body problems, the famous prediction of
three-body bound states is now a prime example of universal many-body physics
that can be applied to nearly any field of quantum physics. Physical evidence
for the predicted infinite series of three-body bound states, however,
remained elusive for more than 30 years until first evidence was found in
ultracold quantum gases of Cs~\cite{kraemer_evidence_2006,
zaccanti_observation_2009}. However, due to the large scaling factors involved
in the observation of at least three Efimov states it was not until the advent
of mass-imbalanced mixture experiments that multiple Efimov states could be
observed~\cite{pires_observation_2014, tung_geometric_2014}. These seminal
experiments utilized the large mass-imbalance in the Fermi-Bose
${}^6$Li-${}^{133}$Cs mixture to reduce the scaling factor from $22.7$ to just
$4.9$.
Based on these hallmark experiments, our present work strives to expand the
experimental possibilities towards more ``exotic'' Efimov states involving two
heavy fermions that resonantly couple to a third, light particle. Going beyond
the usual Efimov scenario, these novel trimer states are predicted to only
occur for mass ratios beyond a threshold value of $13.6$ where the scaling
ratio initially diverges~\cite{naidon_efimov_2017}. However, with increasing
mass imbalance this scaling factor quickly reduces to experimentally feasible
values of less than $10$. We here present first steps towards the experimental
realization of these states with a Fermi-Fermi ${}^{167}$Er-${}^6$Li\ mixture that with a
mass ratio of $27.8$ lies well above the critical value of $13.6$ for the
occurrence of these hitherto unobserved Efimov states. In our experiments we
demonstrate successful cooling of the mixture to microkelvin temperatures and
we probe the system for broad interspecies Feshbach resonances. Such broad
Feshbach resonances are required for the necessary fine-tuned control of the
interspecies interactions over several orders of magnitude.
In addition, the realized mass-imbalanced Fermi-Fermi mixture of ${}^{167}$Er-${}^6$Li\
is, together with tunable interactions, promising for realizing a novel
superfluid state with a spatially varying order parameter, known as a
Fulde-Ferrell-Larkin-Ovchinnikov (FFLO)
state~\cite{fulde_superconductivity_1964, larkin_nonuniform_1965,
radzihovsky_imbalanced_2010, wang_enhancement_2017}, and also other new
behaviors like a Lifshitz point~\cite{gubbels_imbalanced_2013,
gubbels_lifshitz_2009}. Quite recently, the theory for a unitary Fermi gas has
been extended to large mass-imbalanced systems~\cite{endo_quatriemes_2022}.
Large mass-imbalanced Fermi-Fermi mixtures can also be useful for quantum
simulation of the Fermi-surface effect~\cite{kondo_diffusion_1984,
kondo_diffusion_1984-1, kondo_muon_1986} which manifests itself in quantum
diffusion behaviors of heavy particles in solids~\cite{storchak_quantum_1998}.
We start with a brief introduction to the experiment and the experimental
procedures of relevance to the present research (Sec.~\ref{sec:experiment}).
Next, the results of our Feshbach resonance search for magnetic fields up to
$800\,\mathrm{G}$ are introduced (Sec.~\ref{sec:results}). We conclude with a
discussion of our findings and some thoughts on possible future
works~(Sec.~\ref{sec:discussion}).
\section{Experiment}
\label{sec:experiment}
\begin{figure*}[tb!]
\centering
\includegraphics{figure1}
\caption{Magnetic field dependent atom loss in a ${}^{167}$Er($m_F = -19/2$)-${}^6$Li($m_F
= 1/2$) mixture. The panels show the fractions of Er atoms (blue) and Li
atoms (red) that, after having been cooled to about $2$ to $3\,\mu\mathrm{K}$,
remain in the optical trap after a holding time of $1000\,\mathrm{ms}$ at the
various magnetic fields. The upper two panels cover the magnetic field
range up to $100\,\mathrm{G}$ with data taken every $0.06\,\mathrm{G}$. The lower two
panels encompass the range from $100$ to about $800\,\mathrm{G}$ at a resolution
of $1\,\mathrm{G}$. (See the main text for a discussion on how the data was
obtained and averaged.) Numerous resonant loss features are observed, most
of them quite narrow at widths below $0.3\,\mathrm{G}$, but some broader
resonances with widths above $1\,\mathrm{G}$ have also been identified.
}
\label{fig:fig1}
\end{figure*}
The experiment is based on the setup already used in our earlier
works~\cite{schafer_feshbach_2022}. However, while our previous efforts
focused on bosonic erbium, some upgrades to the machine have been necessary
for our present work with fermionic ${}^{167}$Er. This is because ${}^{167}$Er\ is known to
experience strong losses in optical traps operating at wavelengths around
$1064\,\mathrm{nm}$ for reasons that were never completely
uncovered~\cite{aikawa_reaching_2014}. An additional far-off resonant trap
(FORT) at $1550\,\mathrm{nm}$ is therefore prepared to circumvent this inconvenience.
Our modified experimental method is hence as follows: Starting as
in~\cite{schafer_feshbach_2022} a triple-species mixture at roughly $100\,\mu\mathrm{K}$
of ${}^{167}$Er, ${}^6$Li\ and ${}^{174}$Yb\ is loaded into our horizontally oriented FORT
(H-FORT) operating at $1064\,\mathrm{nm}$. It is important to note here the addition
of bosonic ytterbium which we will use to sympathetically cool both Er and Li
in the evaporation step that is to follow. Since the magneto-optical trap
(MOT) light is blue-detuned for the Yb atoms in a FORT at $1550\,\mathrm{nm}$ and thus
causes considerable heating and atom loss, we are required to first load all
atoms from the MOT into the $1064\,\mathrm{nm}$ H-FORT from where, once all MOT light
has been extinguished, they are transferred within $30\,\mathrm{ms}$ into the
superimposed $1550\,\mathrm{nm}$ H-FORT. The transfer efficiency for each of the three
species is about 50\%. Forced evaporation proceeds in a crossed FORT
configuration where in addition to the H-FORT beam (waist diameter $50\,\mu\mathrm{m}$)
a vertical FORT beam (V-FORT, waist diameter $240\,\mu\mathrm{m}$) at the same
wavelength is added for a tighter confinement during the evaporation. As the
V-FORT beam is derived from the zeroth-order light of the acousto-optic
modulator that prepares the light for the H-FORT, the V-FORT is initially at
very low power. However, as the power of the H-FORT is gradually reduced
during the evaporation, the V-FORT power can be increased to maintain the
evaporation efficiency during the most important final stages of the
evaporation. During the evaporation the magnetic fields are carefully chosen for
good cooling performance and to maintain the spin polarization of the sample.
The magnetic field is initially set to $1.55\,\mathrm{G}$ and after $4\,\mathrm{s}$ reduced to
$0.4\,\mathrm{G}$. This procedure ensures that the natural spin-polarization of ${}^{167}$Er\
in the lowest $F = 19/2, m_F = -19/2$ magnetic sublevel obtained during the
narrow linewidth MOT is maintained. We further found that with this sequence
${}^6$Li\ is also naturally polarized in its lowest magnetic $F = 1/2, m_F = 1/2$
state, alleviating the need for any active optical pumping. After evaporation
the remaining ${}^{174}$Yb\ atoms are removed from the trap by a short pulse of light
resonant to the ${}^1S_0 \rightarrow {}^3P_1$ transition at $556\,\mathrm{nm}$ leaving
a pure Er-Li sample for the main part of the experiment.
In our first set of experiments we are interested in a general overview of the
${}^{167}$Er-${}^6$Li\ Feshbach resonance structure in the reasonable magnetic field range
of up to about $800\,\mathrm{G}$. For this we choose an evaporation ramp that has a
duration of $7\,\mathrm{s}$ and leaves the mixture in a trap with frequencies of about
$(\omega_x, \omega_y, \omega_z) = 2\pi \times (47, 405, 400)\,\mathrm{Hz}$ where the
$z$-axis is in vertical direction. After evaporation we typically obtain
$20(5) \times 10^3$ Er atoms at a temperature of $2.4(2)\,\mu\mathrm{K}$ and $7(2)
\times 10^3$ Li atoms at $3.0(5)\,\mu\mathrm{K}$. Next, the magnetic field is raised in
$10\,\mathrm{ms}$ to its desired value and the mixture is allowed to interact for
$1000\,\mathrm{ms}$ after which the magnetic field is again lowered and the number of
remaining atoms for both species is measured by standard absorption imaging.
For each magnetic field setting this experiment is repeated three times.
Additionally, at each field control measurements are taken in which either one
of the two species is removed from the optical trap by a pulse of resonant
$583\,\mathrm{nm}$ (Er) or $671\,\mathrm{nm}$ (Li) light as in~\cite{schafer_feshbach_2022}.
These control measurements are repeated two times each. From the
ratio of the averaged values we obtain our main observable: the fraction of
remaining atoms in the trap for each species. The results are summarized in
Fig.~\ref{fig:fig1}. The data can be divided into two parts: Up to $100\,\mathrm{G}$
the magnetic field was scanned with a resolution of $0.06\,\mathrm{G}$ to get a better
understanding of the finer Feshbach resonance structure as well. The remainder of
the data has been taken with a step size of $1\,\mathrm{G}$. This greatly speeds up
the measurement process and allows us to obtain a good overview of the
available broader Feshbach resonances in a reasonable time. It is these
resonances that are most important for the physics of interest here.
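
As an illustration of how this observable is formed (a schematic Python
sketch with hypothetical atom numbers, not our actual analysis code), at
each field value one averages the repeated mixture measurements and divides
by the averaged single-species control:
\begin{verbatim}
import numpy as np

# Hypothetical raw Er atom numbers at one magnetic field value:
# three repetitions with both species present, two controls with
# the Li atoms removed by resonant light.
n_er_mixture = np.array([18e3, 21e3, 19e3])
n_er_control = np.array([24e3, 25e3])

# Main observable: fraction of remaining Er atoms.
fraction_er = n_er_mixture.mean() / n_er_control.mean()
\end{verbatim}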
\section{Results}
\label{sec:results}
\begin{table}[b!]
\caption{List of identified ${}^{167}$Er($m_F = -19/2$)-${}^6$Li($m_F = 1/2$)
interspecies Feshbach resonances obtained from the data of
Fig.~\ref{fig:fig1}. The resonance positions $B_0$ and widths $\Delta B$
are from Lorentzian fits to the data. Note the two
different resolution regimes: Up to $100\,\mathrm{G}$ narrower resonances could
be observed due to the higher measurement resolution, whereas above that
field purposely only broad resonances were detected and listed here. At
$72.3\,\mathrm{G}$ the fit failed to provide a good resonance width estimate due
to the finite measurement resolution.
}
\begin{tabular}{c@{\extracolsep{10mm}}crr@{\extracolsep{10mm}}c}
\hline
\hline
& Resolution & $B_0$ ($\mathrm{G}$) & $\Delta B$ ($\mathrm{G}$) &\\
\hline
& $0.06\,\mathrm{G}$ & $13.1$ & $0.1$ &\\
& & $16.3$ & $0.1$ &\\
& & $17.0$ & $0.1$ &\\
& & $17.5$ & $0.1$ &\\
& & $22.4$ & $0.1$ &\\
& & $33.8$ & $0.2$ &\\
& & $38.1$ & $0.2$ &\\
& & $45.3$ & $0.2$ &\\
& & $69.5$ & $0.2$ &\\
& & $71.2$ & $0.2$ &\\
& & $71.8$ & $0.1$ &\\
& & $72.3$ & --- &\\
& & $72.8$ & $0.1$ &\\
& & $73.4$ & $0.1$ &\\
& & $76.1$ & $0.2$ &\\
& & $77.1$ & $0.1$ &\\
& & $79.9$ & $0.1$ &\\
& & $80.9$ & $0.2$ &\\
& & $86.2$ & $0.3$ &\\
& & $90.6$ & $0.2$ &\\
& & $92.5$ & $0.2$ &\\
\hline
& $1.0\,\mathrm{G}$ & $110.3$ & $1.5$ &\\
& & $256.7$ & $1.0$ &\\
& & $455.0$ & $2.3$ &\\
& & $700.2$ & $1.2$ &\\
& & $705.3$ & $2.6$ &\\
& & $792.7$ & $3.5$ &\\
\hline
\hline
\end{tabular}
\label{tab:resonancelist}
\end{table}
Looking at the results in Fig.~\ref{fig:fig1} one immediately recognizes a
large number of resonant loss features. Focusing first on the upper two panels
that cover the magnetic field range up to $100\,\mathrm{G}$ we identify by eye $21$
resonances. From Lorentzian fits to these resonances we obtain a rough
estimate of their positions $B_0$ and full-widths at half maximum $\Delta B$.
All parameters are listed in the upper part of Tab.~\ref{tab:resonancelist}.
Note that due to the chosen limited resolution of $0.06\,\mathrm{G}$ we do not
expect to have obtained a complete list of resonances. Moreover, all the
uncertainties of $B_0$ and $\Delta B$ are estimated to be at least $0.1\,\mathrm{G}$.
Our primary goal with these first experiments has been to get a good overview
of typical resonance densities, widths, and strengths. In this
respect, generally the spectrum is similarly rich as in our earlier results
with bosonic Er~\cite{schafer_feshbach_2022}. However, with all resonance
widths being well below $0.5\,\mathrm{G}$ it appears unlikely that the observed
resonances would be primary candidates to support the search of novel Efimov
and superfluid states.
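
For reference, the Lorentzian fits used to extract $B_0$ and $\Delta B$ can
be sketched as follows (schematic Python, with synthetic data standing in
for the measured loss fractions):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def lorentzian_dip(B, B0, dB, A, bg):
    # Loss feature: background minus a dip of FWHM dB at B0.
    return bg - A / (1.0 + ((B - B0) / (dB / 2.0)) ** 2)

# Synthetic scan around a 13.1 G resonance of width 0.1 G.
B = np.linspace(12.5, 13.7, 21)
frac = lorentzian_dip(B, 13.1, 0.1, 0.5, 1.0)
frac += 0.02 * np.random.randn(B.size)

popt, _ = curve_fit(lorentzian_dip, B, frac,
                    p0=[13.1, 0.1, 0.5, 1.0])
B0_fit, dB_fit = popt[0], popt[1]  # position, width (FWHM)
\end{verbatim}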
We therefore now turn our attention to the lower two panels of
Fig.~\ref{fig:fig1} for the range $100$ to $800\,\mathrm{G}$. In this coarsely scanned
magnetic field range several narrow resonances are still visible where a
single data point happens to be close enough to such a resonance. We will
ignore such single-data-point events and instead focus on the broader
resonances of which in total $6$ have been observed. They are listed in the
lower part of Tab.~\ref{tab:resonancelist}. Of particular interest are
the resonances at $455$, $705$ and $793\,\mathrm{G}$. Upon closer inspection one
notices that only the two resonances at $455$ and $793\,\mathrm{G}$ show clear losses
for both species. Of these two remaining resonances we find that the higher
one suffers from quite significant background losses also outside of the
immediate vicinity of the resonance. For these reasons we will now focus on
the resonant loss peak at about $455\,\mathrm{G}$, which additionally lies at a
magnetic field strength that is experimentally quite comfortable to work
with.
\begin{figure}[tb!]
\centering
\includegraphics{figure2}
\caption{Results of a detailed measurement of the interspecies Feshbach
resonance at about $455\,\mathrm{G}$. The temperature of the sample is about
$1\,\mu\mathrm{K}$ lower than in the overview measurements of Fig.~\ref{fig:fig1}
and the number of remaining atoms in the trap after only $30\,\mathrm{ms}$ of
interaction time is shown. (See the main text for details.) A coordinated
loss of both Er (top panel, blue points) and Li atoms (bottom panel, red
points) in the mixture is observed while in the single species control
measurements (open circles) no losses are found. The typical statistical
error of the data is indicated in each panel by a single error bar example
(gray). Lorentzian fits to the data on the one hand have their minima at
$455.3\,\mathrm{G}$ and indicate widths of about $2.4\,\mathrm{G}$ but on the other hand
also highlight the pronounced asymmetric lineshapes of the loss
resonances.
}
\label{fig:fig2}
\end{figure}
For a more detailed view of this resonance we now work with a slightly
modified evaporation sequence: The ramp has been extended by $2\,\mathrm{s}$ to now
$9\,\mathrm{s}$ and the final trap frequencies are about $(\omega_x, \omega_y,
\omega_z) = 2\pi \times (49, 285, 272)\,\mathrm{Hz}$. After evaporation we then obtain
a colder sample with $13(2) \times 10^3$ Er atoms at a temperature of
$0.9(1)\,\mu\mathrm{K}$ and $5(1) \times 10^3$ Li atoms at $1.3(3)\,\mu\mathrm{K}$. The magnetic
field range from $450$ to $460\,\mathrm{G}$ is measured in steps of $0.25\,\mathrm{G}$. The
interaction time is reduced to $30\,\mathrm{ms}$. The data are otherwise taken and
averaged as before; this time we directly examine the raw atom numbers
in Fig.~\ref{fig:fig2}. As expected, while in the mixture one observes a
nicely synchronized loss of both species close to resonance, there is no
magnetic field dependence in the single species data. The lineshapes of both
resonances are strongly asymmetric. This is particularly highlighted when
trying to describe them by Lorentzian fits (also included in
Fig.~\ref{fig:fig2}). Still, from the fit minima one can at least deduce the
approximate atom loss for both species. The reduction in the number of atoms
is $(9.5 \pm 1.4) \times 10^3$ for Er and $(4.5 \pm 0.6) \times 10^3$ for Li.
This implies that for every Li atom lost about $(2.1 \pm 0.4)$ Er atoms are
removed from the trap and is consistent with the assumption that the losses
are dominated by Er-Er-Li three-body collisions~\cite{ye_observation_2022}.
\begin{figure}[tb!]
\centering
\includegraphics{figure3}
\caption{
Decay dynamics of the Er-Li mixture at $455.0\,\mathrm{G}$. Panel a) shows
the number of remaining Er and Li atoms (blue and red points) in the trap
for holding times up to $5\,\mathrm{ms}$. The curves are fits to the data of an
Er-Er-Li three-body loss model (see the main text for details). Panel b)
indicates for each holding time the relationship between the remaining Er
and Li atoms. The solid line is a fit of $N_{\rm Er} = a\,N_{\rm Li} + b$
to the data with $a = 1.9(1)$ and $b = -1.5(6)\times10^3$.
}
\label{fig:fig3}
\end{figure}
In Fig.~\ref{fig:fig3} we take a closer look at the typical decay dynamics by
studying the number of remaining atoms, $N_{\rm Er}$ and $N_{\rm Li}$, in the
trap at a magnetic field of $455.0\,\mathrm{G}$ and for holding times up to $5\,\mathrm{ms}$.
In Fig.~\ref{fig:fig3}a) the decay is described by a three-body decay
model~\cite{ye_observation_2022} (solid lines), again assuming Er-Er-Li
three-body collisions only, with a three-body collision rate coefficient $K_3
= 2(1) \times 10^{-22}\,\mathrm{cm}^3\,\mathrm{s}^{-1}$. Finally, Fig.~\ref{fig:fig3}b) further corroborates
the applicability of such a model and also our earlier estimation of the
relative atom loss by directly showing the relationship between $N_{\rm Er}$
and $N_{\rm Li}$ which is well described by a linear fit of slope $1.9(1)$.
This is in good agreement with a complete loss of all particles from the trap
in Er-Er-Li three-body collisions which would lead to an expected slope of
$2.0$.
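
This expectation can be made explicit with the rate equations of a pure
Er-Er-Li three-body loss model (a schematic form, assuming that each loss
event removes two Er atoms and one Li atom from the trap):
\[
\frac{dN_{\rm Er}}{dt} = -2 K_3 \int n_{\rm Er}^2\, n_{\rm Li}\, d^3r,
\qquad
\frac{dN_{\rm Li}}{dt} = -K_3 \int n_{\rm Er}^2\, n_{\rm Li}\, d^3r,
\]
so that $dN_{\rm Er}/dt = 2\, dN_{\rm Li}/dt$ at all times and the remaining
atom numbers fall on a line of slope $2$ in the $N_{\rm Er}$-$N_{\rm Li}$
plane.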
\section{Summary and Prospects}
\label{sec:discussion}
In the present work three major steps towards establishing ultracold
${}^{167}$Er-${}^6$Li\ mixtures as a new platform for the study of many-body physics in
general and Efimov states in particular have been taken: First, we could show
that by means of sympathetic cooling with ${}^{174}$Yb\ this mixture can be brought to
microkelvin temperatures and that for the selected spin states no unexpected
losses occur at the employed magnetic fields. Second, with these minimum
requirements fulfilled we could demonstrate that the mixture indeed supports a
wealth of Feshbach resonances that, however, are still sufficiently separated
to not interfere with each other. Third, while most of the observed resonances
are quite narrow, some of them feature widths of at least $2\,\mathrm{G}$. It
has further to be pointed out that currently we can only estimate the width of
the inelastic collisional loss resonance, and it is known that the width of the
elastic part of the resonance might well be
larger~\cite{ye_observation_2022}.
We have in particular focused on the broad resonance at about $455\,\mathrm{G}$.
There, coordinated losses in both channels have been found and the observed
atom number decay dynamics seems to support an Er-Er-Li loss channel for the
Feshbach resonance. It is exactly this condition of two heavy fermions
interacting with a light particle that in particular motivated the current
work as it brings the study of a new type of Efimov three-body state within
reach. Finally, the shape of the observed resonance is highly asymmetric. This
might be purely caused by the complicated scattering physics
involved~\cite{fouche_quantitative_2019}. It might, however, also at least
partially be caused by additional losses from Efimov
states~\cite{pires_observation_2014, tung_geometric_2014}. More detailed
measurements will be required and are currently planned to clarify this
important question in this promising mixture system.
\section*{Acknowledgments}
This work was supported by the Grant-in-Aid for Scientific Research of JSPS
Grants No.\ JP17H06138, No.\ 18H05405, and No.\ 18H05228, JST CREST Grant No.\
JPMJCR1673 and the Impulsing Paradigm Change through Disruptive Technologies
(ImPACT) program by the Cabinet Office, Government of Japan, and MEXT Quantum
Leap Flagship Program (MEXT Q-LEAP) Grant No.\ JPMXS0118069021 and JST
Moonshot R\&D - MILLENNIA Program (Grant No.\ JPMJMS2269).
\input{167Er6LiFeshbach.bbl}
\end{document}
{
"arxiv_id": "2302.14354",
"language": "en",
"timestamp": "2023-03-01T02:10:41",
"url": "https://arxiv.org/abs/2302.14354",
"yymm": "2302"
}

\section{Introduction}
Two main categories of Cultural Heritage (CH) are tangible and intangible heritage, and Cultural Heritage Buildings (CHBs) fall under the former category. Tangible CHs have universal values which must be physically preserved for future generations as an irreplaceable legacy \citep{Lopez2018,Vecco2010}.
CHBs are indubitably an integral part of the history and culture of human beings. Throughout the years many of these precious buildings have been in danger of damage due to several reasons, namely material deterioration, natural disasters, presence of visitors, vandalism, etc. \citep{Chen1991,Markiewicz2020,Stanco2011}.
Currently, the topic of CH has attracted increasing global attention from scientists and researchers alike, and the scope of its concept is constantly expanding. Most social scientists emphasize its utility in supporting ethnic and national interests, while many others point to its creative and counter-hegemonic aspects \citep{Stanco2011,Brumann2015}.
\subsection{Importance}
Endowed with rich CHBs, Iran ranked 10th among all countries in 2022, with 26 UNESCO World Heritage Sites \citep{UNESCO2022}. Although only 26 of Iran's CH sites are registered with UNESCO, and not all of them are buildings, the number of CHBs in Iran is of the order of thousands, and according to archaeological findings Iranian architecture dates back to 6,000-8,000 B.C. \citep{Hejazi2015}. One of the reasons why Iran has been unsuccessful in registering more CHBs is that most of these buildings have not been preserved correctly, if at all; some are even beyond restoration.
CHBs, which fall under the category of immovable tangible CHs, demand more sophisticated conservation methods since we cannot move them to museums for preservation.
Lack of resources in terms of skilled practitioners, budget, and new technologies are just some of the shortcomings that introduce many problems in the conservation process. As regards the usage of state-of-the-art technologies, Iran as a developing country still uses archaic, and sometimes obsolete, manned methods to preserve these precious treasures of humanity.
From a broader perspective, many CHBs around the world suffer from such problems as well, so the use of artificial intelligence (AI) techniques such as machine learning (ML) and deep learning (DL) is no longer a luxury but a necessity. Using ML and DL, we can move toward unmanned conservation of CHBs, with an increase in accuracy and a decrease in human-induced error.
\subsection{Research Aim}
The aim of this paper was to develop a highly generalized, yet simple, deep learning pipeline for the identification of CHBs in need of preservation, which can be used even in poor countries. We achieved this by making our model as lightweight as possible using a wealth of novel methods, as not all countries have access to expensive resources. This mindset allows for having fewer data and processing power but still reaping satisfying results (\autoref{tab:final_results}).
\subsection{Contribution}
\textbf{Unprecedented in Iran:} To the best of our knowledge, and to our surprise, not a single scientific study had been conducted using ML or DL in the conservation of Iran's CHBs. The body of research outside Iran is not extensive either: according to \citet{Fiorucci2020}, the use of ML in the CH literature has been quite limited in contrast to other fields.
We believe that more research at the intersection of AI and CH can change this situation, pave the way for the prevalence of such techniques in the process of CHB conservation around the world, and accrue many benefits to the CHB literature as well.
\textbf{First-hand Complex Data: }We used first-hand data collected from different sources, as discussed in \autoref{sec:data}. Using first-hand data is important in the sense that our experiment is unprecedented not only in Iran but globally as well, since no known CHB dataset to date \citep{Fiorucci2020} covers the diversity of building types, defect types, and color nuances of both Persian and Islamic architecture like ours.
\textbf{New Combination of Methods: }This paper proposes an automated deep learning pipeline for identifying surface damage on CHBs. With developing countries in mind, we used a combination of state-of-the-art methods to cater to their conservation needs on as small a budget as possible. The final deep learning pipeline, using a pre-trained MobileNet, can run inference on low-cost devices, for instance a budget mobile phone. It combines:
\begin{itemize}
\item Image classification: to decide whether a CHB needs preservation or not.
\item MobileNet: a very lightweight CNN architecture with approximately the same performance as much heavier CNNs (e.g., ResNet and/or Inception).
\item Grad-CAM: to approximately localize the defects.
\item Transfer learning: to reap great results without the need for expensive servers or manpower to take copious images.
\item A valid data augmentation pipeline: allows the model to learn more features from the same data.
\item Compound regularization method: a combination of four regularization methods together, namely augmentation, dropout, L2 regularization, and batch normalization.
\end{itemize}
\section{Related works}
Globally many attempts have been made to use deep learning for damage detection in CHB images.
\citet{Wang2019} used object detection (OD) with the aid of Faster R-CNN based on a ResNet101 CNN to detect damage in images of masonry buildings with bounding boxes.
In another study, \citet{Wang2020} used instance segmentation (IS), by means of a Mask R-CNN model, for damage detection in glazed-tile CHBs, marking damage with a masked colored layer.
An interesting work by \citet{Pathak2021} used Faster R-CNN to detect damage in CHBs, with one major difference from other works: they used point-cloud data instead of photographs as input, rendering the point clouds as images. This increased the versatility of their model, since capturing photogrammetry does not share the limitations of manually taking photos.
Expectedly, damage detection using deep learning is not limited to CHB literacy; for instance, \citet{Perez2021} used OD to detect defects
on the images of modern buildings.
As highly revered as OD and IS are, they have some downsides, namely (1) a time-consuming data labeling process with bounding boxes (for OD) or color annotation (for IS); (2) the need for a huge amount of accurately labeled data; (3) detecting only pre-specified types of defects; and (4) much higher computational complexity, in comparison with image classification.
This is especially important in the case of developing countries (e.g., Iran), where budgets and resources are limited. That is why, despite the prevalence of OD and IS in computer vision, many researchers have opted for the simpler image classification, where each image is given a label as a whole and the position of damage is not delineated.
As an example, \citet{Perez2019} used image classification and CAM layers to classify and localize defects. The downside of their work was not the use of image classification, but using cropped images, which would have been more suitable for object detection rather than image classification.
The usage of image classification and deep learning is not limited to damage detection; other aspects of the CHB literature can benefit from them as well, as was the case with \citet{Llamas2017}, who classified different architectural elements in historical buildings.
In terms of methodology, we followed in the footsteps of \citet{Llamas2017} and \citet{Perez2019} by using image classification over OD and/or IS, although our work differs in the details of methodology and data. Unlike them, we used data augmentation and a combination of four regularization methods, which in our case resulted in a 4-5\% improvement in metrics (\autoref{tab:final_results} and \ref{tab:compare_results}).
\textbf{Research Gap:} To the best of our knowledge, most works combining deep learning and CHBs use either simplistic data or data belonging to a single CHB.
As a result, the final trained models lack the generalization needed for a wide range of buildings in the country of origin. We believe that the data must reflect the variety of real-world data with no editing or cropping; this way the research comes as close as possible to the practical application of deep learning in the conservation of CHBs. Despite being the de facto standard in CV, OD and IS need substantial computational resources to process images and detect damage, making these methods infeasible for developing and/or poor countries with many CHBs (e.g., Iran). Using more lightweight yet sophisticated techniques, we can achieve reasonable results with low-budget, simple devices (e.g., mobile phones).
\section{Materials and Methods}
\subsection{Data} \label{sec:data}
For this experiment, we curated a labeled dataset of approximately 10,500 CHB images. In the following, the data curation process is discussed.
\subsubsection{Data Collection}
The data were gathered from four different sources: (i) the archives of Iran's cultural heritage ministry; (ii) the personal archives of one author (M.B.); (iii) images captured on site by the authors (M.B.) during the research process; and (iv) pictures crawled from the Internet, kept to a minimum since their distribution differed due to heavy edits and effects. Images that did not meet the desired quality were removed to avoid introducing noise into our dataset.
Our collected images proved to be very challenging in terms of complexity, peculiarity, level of detail, and variation in size, color, characteristics, etc. (\autoref{fig:data_samples}).
Regarding the population of data, as it was infeasible to access all the CHBs of Iran or manually photograph them, we took a random but fair approach to increase the richness of the data by sampling a wide variety of buildings in terms of architectural style, color theme, quality, construction period, etc. In the process of collecting data, several criteria were foremost in our minds:
\begin{itemize}
\item \textbf{Locations}: Semnan, Hamedan, Tehran, Ghazvin, etc.;
\item \textbf{Building types}: mosques, shrines, churches, palaces, etc.;
\item \textbf{Styles}: Islamic, Roman, Persian, etc.;
\item \textbf{Defect types}: cracks, deterioration, mold, etc.;
\item \textbf{Color nuances}: images from different times of the day and in different seasons.
\end{itemize}
\subsubsection{Data cleaning and preprocessing} \label{sec:data_clean_preprocess}
A number of preprocessing steps were taken before creating our final datasets:
\begin{enumerate}
\item Cleaning low-quality images, in terms of relevance, corruption, aspect ratio, grayscale, lighting condition, etc. (\autoref{fig:omitted_images}).
\item Fixing the auto-rotation EXIF metadata.
\item Finding a good enough resolution and resizing all images to it (i.e., $224 \times 224$ pixels; see the sketch after this list).
\item Normalizing pixel values to a range of $[-1, 1]$.
\end{enumerate}
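
A minimal sketch of steps 3 and 4, assuming a TensorFlow implementation (the function name is illustrative, not our exact code):
\begin{verbatim}
import tensorflow as tf

def preprocess(image):
    # Resize to the chosen input resolution and
    # scale pixel values from [0, 255] to [-1, 1].
    image = tf.image.resize(image, (224, 224))
    return tf.cast(image, tf.float32) / 127.5 - 1.0
\end{verbatim}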
\subsubsection{Data labeling}
So as not to exacerbate the existing data imbalance, we chose binary classification over multi-class classification.
The negative class (label 0) was used for images without physical defects and the positive class (label 1) for those with them.
To avoid bias in the labeling phase, we had three different highly qualified CHB practitioners label the images individually; the final label of each image was determined by the majority vote of these three labelers.
When it comes to labeling, especially image data, we almost always have to deal with some degree of inconsistency, as different practitioners have different experiences, expertise, criteria, etc. To mitigate this effect we defined some criteria by which each labeler had a more consistent and clear guideline to label the images.
\autoref{fig:diff_size_defects} shows why it was so crucial to have some criteria that distinctly determine what should be considered a defect (e.g., in terms of length or depth).
As regards what types of physical defects were considered in the labeling process, we can enumerate cracks, mold, stains, and deterioration as the most important ones with enough samples in our dataset.
\subsubsection{Creating the datasets} \label{sec:create_dataset}
After cleaning and preprocessing, the data were divided into three mutually exclusive and jointly exhaustive sets, namely train, validation (aka dev), and test (\autoref{fig:data_samples}). To ensure a random but fair division we used a stratified shuffle, which is why each set has approximately the same ratio between the number of images for each label (\autoref{tab:class_distri}).
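Such a stratified split can be sketched with scikit-learn as follows (illustrative placeholder data; the real inputs are the cleaned image files and their labels):
\begin{verbatim}
from sklearn.model_selection import train_test_split

# Placeholders standing in for the 10,528 labeled image paths.
paths = [f"img_{i}.jpg" for i in range(10528)]
labels = [0] * 1432 + [1] * 9096

# ~70/15/15 split, preserving the class ratio in every subset.
tr_p, tmp_p, tr_y, tmp_y = train_test_split(
    paths, labels, test_size=0.30, stratify=labels,
    random_state=42)
va_p, te_p, va_y, te_y = train_test_split(
    tmp_p, tmp_y, test_size=0.50, stratify=tmp_y,
    random_state=42)
\end{verbatim}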
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Figures/Fig_data_sample.jpg}
\caption{A few sample images which show the complexity, diversity and variation of our data.}
\label{fig:data_samples}
\end{figure*}
\begin{table}[h]
\centering
\caption{The distribution of data; both aggregated and for each dataset separately.\label{tab:class_distri}}
\resizebox{0.7\columnwidth}{!}{%
\begin{tabular}{ccccc}
\toprule
\textbf{class/label} & \textbf{Total images} & \textbf{Train set} & \textbf{Validation set} & \textbf{Test set} \\
\midrule
\textbf{negative/0} & 1432 & 1018 (13.8\%) & 207 (12.99\%) & 207 (13.28\%) \\
\textbf{positive/1} & 9096 & 6358 (86.2\%) & 1386 (87.01\%) & 1352 (86.72\%) \\
\midrule
\textbf{Total} & 10528 & 7376 (70.06\%) & 1593 (15.13\%) & 1559 (14.80\%) \\
\bottomrule
\end{tabular}%
}
\end{table}
As is evident in \autoref{tab:class_distri}, the notorious yet prevalent problem of data imbalance can be identified.
As will be discussed in \autoref{sec:evaluation}, we used a weighted loss function to mitigate this problem by a large margin.
\subsection{Convolutional Neural Networks (CNNs)}
Synonymous with unassailable performance when it comes to processing image data, CNNs have been a staple in the field of CV since their introduction in 1989 by LeCun et al. \citep{LeCun1989,fang2020computer}. It was therefore natural to process our CHB images with this type of NN to benefit from all the advantages that CNNs could bring to our models.
Goodfellow et al. \citep{goodfellow2016deep} attribute three main benefits to CNNs: translation equivariance, sparse connections, and parameter sharing. A CNN has fewer learnable parameters than its conventional fully connected (FC) counterpart. This reduction, a product of sparse connections and parameter sharing, enables CNNs to (i) train faster; (ii) be less prone to overfitting and consequently demand less training data; and (iii) work with high-dimensional data (e.g., images) that their FC counterparts cannot handle. CNNs do the onerous work of feature extraction automatically, a task that previously had to be done by hand-engineering features \citep{gu2018recent}.
In this experiment, we used three of the most prestigious CNN architectures which have shown compelling results and interesting loss convergence, namely ResNet \citep{He2015}, Inception \citep{Szegedy2014}, and MobileNet \citep{Howard2017}.
\subsection{Transfer Learning} \label{sec:transfer_learning}
Dealing with several constraints such as the lack of sufficient data and powerful computers, we employed transfer learning (TL) to drastically mitigate these impediments. TL transfers the knowledge a pre-trained model has already learned from a large amount of data to another model \citep{Zhuang2019}.
Generally, TL consists of two main parts. The first part is responsible for customizing the output layer to our problem. The second part fine-tunes the pre-trained model to adapt more to our specific data.
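A minimal Keras sketch of this two-stage procedure, assuming the MobileNet-v2 variant (the head size, dropout rate, and number of unfrozen layers are illustrative choices, not our exact hyperparameters):
\begin{verbatim}
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False,
    weights="imagenet")
base.trainable = False  # stage 1: train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(
        128, activation="relu",
        kernel_regularizer=tf.keras.regularizers.l2(1e-4)),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC()])

# Stage 2: unfreeze some late layers, retrain at a low rate.
base.trainable = True
for layer in base.layers[:-30]:
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="binary_crossentropy", metrics=["accuracy"])
\end{verbatim}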
\subsection{Class Activation Mapping (CAM)}
In spite of the merits of image classification, a notorious drawback lies within: the black-box nature of artificial neural networks (NNs). That is, we do not know whether the model considers pertinent features in an image when deciding its class. That is why researchers came up with a solution named class activation mapping (CAM) \citep{Zhou2015}.
In this experiment we used gradient-weighted class activation maps (Grad-CAM) \citep{Selvaraju2016}, a CAM method that merges the gradients of the final classification score (i.e., the output layer deciding the label of the image) with the output of the final Conv layer of the model to generate a heatmap. The heatmap is then overlaid on the original image to localize the regions that were taken into account when deciding its class/label.
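A condensed sketch of this computation, assuming a functional Keras model in which the final Conv layer can be addressed by name (all names are illustrative):
\begin{verbatim}
import tensorflow as tf

def grad_cam(model, image, last_conv_name):
    # Model returning both the last Conv output and the
    # classification output.
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(last_conv_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, pred = grad_model(image[None, ...])
        score = pred[:, 0]  # sigmoid output (positive class)
    grads = tape.gradient(score, conv_out)
    weights = tf.reduce_mean(grads, axis=(1, 2))  # pooled grads
    cam = tf.einsum("bhwc,bc->bhw", conv_out, weights)
    cam = tf.nn.relu(cam)[0]  # keep only positive evidence
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()
\end{verbatim}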
\subsection{Regularization} \label{sec:regularization}
As one of the salient reasons for overfitting is the lack of enough data, which is ubiquitous in CV, we are always in need of more data. Unfortunately, getting brand-new data is not always possible. A workaround is to use the data we already have to increase the number of valid labeled training examples, hence decreasing overfitting, as the model becomes less capable of naively memorizing the training set \citep{Maharana2022}.
As data augmentation is a staple in CV \citep{Maharana2022}, we almost always opt to use it, and this paper is no exception. \autoref{fig:data_aug} shows the result of our proposed data augmentation pipeline after nine runs on the same image. The data augmentation methods used in this paper are listed in \autoref{tab:data_aug}.
\begin{table}[h]
\centering
\caption{The data augmentation methods used in this paper and their corresponding values.\label{tab:data_aug}}
\resizebox{0.5\columnwidth}{!}{%
\begin{tabular}{cc|cc}
\toprule
\textbf{method} & \textbf{value} & \textbf{method} & \textbf{value} \\
\midrule
random flip & Horizontal & random brightness & 0.05 \\
random rotation & 0.005 & random saturation & 0.6 - 1.2 \\
random crop & 5\% & random contrast & 0.75 - 1.1 \\
random quality & 80 - 100 & random hue & 0.03 \\
\bottomrule
\end{tabular}%
}
\end{table}
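A sketch of such a pipeline built from standard \texttt{tf.image} operations, following the values of \autoref{tab:data_aug} (an illustrative reimplementation; the small random rotation is omitted and pixel values are assumed to lie in $[0, 1]$):
\begin{verbatim}
import tensorflow as tf

def augment(image):
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_brightness(image, max_delta=0.05)
    image = tf.image.random_saturation(image, 0.6, 1.2)
    image = tf.image.random_contrast(image, 0.75, 1.1)
    image = tf.image.random_hue(image, max_delta=0.03)
    image = tf.image.random_jpeg_quality(image, 80, 100)
    # ~5% random crop, then back to the network input size.
    image = tf.image.random_crop(image, size=(213, 213, 3))
    return tf.image.resize(image, (224, 224))
\end{verbatim}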
\begin{figure}[h]
\centering
\includegraphics[width=0.7\columnwidth]{Figures/Fig_data_augmentation.png}
\caption{An example of applying the proposed data augmentation methods on a train image (i.e., nine times). Notice how random, realistic, and valid the augmented versions are.} \label{fig:data_aug}
\end{figure}
Briefly, to decrease overfitting, which is commonplace in DL models due to their high capacity in terms of the number of parameters, a combination of four well-known methods was used, namely L2 regularization \citep{Cortes2012}, dropout \citep{Hinton2012}, batch normalization \citep{Ioffe2015}, and data augmentation \citep{Maharana2022}. The results of this combined approach, as discussed in \autoref{sec:results}, were quite satisfactory and yielded a very small amount of overfitting (i.e., $< 1\%$) for all of our models.
\section{Implementation}
\subsection{Network Architecture}
\autoref{fig:arch} presents the holistic architecture of our proposed method.
To avoid passing new input images through a separate preprocessing pipeline every time, we embedded both the resizing and the normalization functions into our network (i.e., the pink box). This way, once the model has been trained, unknown images need no external processing before prediction.
As alluded to before, in this experiment we made use of several pre-eminent CNN architectures so as not to be biased toward a certain architecture. As a result, four different networks were implemented, namely ResNet50-v2, ResNet152-v2, InceptionResNet-v2, and MobileNet-v2. One main difference is that ResNet50-v2 was trained from scratch with randomly initialized weights, while the other three were pre-trained models accompanied by TL.
The responsibility of the Global Average Pooling layer (i.e., the purple box) was to flatten the output of the last Conv layer into a vector per image, which is the desired input shape for a fully connected (FC) layer.
Before replacing the output of the pre-trained model with a layer of our own, an FC layer (i.e., the light blue box) was added to decrease underfitting; the bigger the network becomes, the less underfitting we experience, but overfitting also increases. A single FC layer proved to provide the desired trade-off, reducing underfitting by a large margin without increasing overfitting too much.
As shown in \autoref{fig:arch}, our model has two outputs. The first (i.e., green box) is responsible for the task of classification, by which each image will be given a label (i.e., negative or positive). The second output on the other hand does the task of localizing the parts by which the model has decided on a certain label for a specific image; this task is done by the Grad-CAM method (i.e., orange box).
\begin{figure*}[h]
\centering
\includegraphics[width=\textwidth]{Figures/Fig_network_architecture.pdf}
\caption{The overall architecture of our proposed model/network. Where the values shown in parenthesis below each layer represent the layer's output shape. The $N$, $n_H^{[L]}$, $n_W^{[L]}$, and $n_C^{[L]}$ refer to the batch size, height, width, and channels of the last layer ($L$) of the embedded CNN model respectively. \label{fig:arch}}
\end{figure*}
\subsection{Evaluation} \label{sec:evaluation}
To evaluate the implemented networks several metrics have been used in an endeavor to meticulously monitor the behavior of the networks at different stages of training. All these metrics will be scrutinized in the following subsections.
\subsubsection{Cost function}
As mentioned in \autoref{sec:create_dataset}, our two classes are imbalanced, which would nudge the model to be biased toward the class with more examples (i.e., the positive class), so we had to tackle this problem. Having decided in favor of the class-weight method due to its numerous merits, we used \autoref{eq:calc_class_weights} to calculate the weight of each class. There is a myriad of ways to calculate the weights, but as we would fine-tune the calculated weights later in the hyperparameter tuning phase, we chose the most widely used:
\begin{equation} \label{eq:calc_class_weights}
\mathlarger{\displaystyle w_c = \frac{n_t}{n_l * n_c}}
\end{equation}
Where $w_c$, $n_t$, $n_l$, and $n_c$ indicate the calculated weight of class $c$, the total number of images in the dataset, the number of classes, and the number of images in class $c$ respectively. These weights then will be used in the cost function of our networks so that the importance of images belonging to the inferior class outweighs that of the superior class, in a way that network will be rewarded or penalized more when it comes to the images of the class with fewer examples in it. The binary cross-entropy cost function was used, and the way it calculates cost before and after applying class weights can be seen in \autoref{eq:cost_without_class_weight} and \ref{eq:cost_with_class_weight} respectively. To make it more concrete the first one is used in validation, test, and prediction while the latter is employed in training time; that is we only care about data imbalance during training which is common sense as the network only updates its internal parameters (e.g., weights) in training time and backpropagation.
\begin{equation} \label{eq:cost_without_class_weight}
L(\hat{y}, y) = -\bigg(y\log(\hat{y}) + (1 - y)\log(1 - \hat{y})\bigg)
\end{equation}
\begin{equation} \label{eq:cost_with_class_weight}
L(\hat{y}, y) = -\bigg(w_1\,y\log(\hat{y}) + w_0\,(1 - y)\log(1 - \hat{y})\bigg)
\end{equation}
Where $y$ refers to the true label and $\hat{y}$ to the predicted label of the given record. Note that since we perform binary classification with a sigmoid activation function in the output layer, $\hat{y}$ is the probability (in $[0, 1]$) of the record belonging to the positive class.
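A small sketch of the weight computation of \autoref{eq:calc_class_weights} and its use, assuming a Keras training loop (counts taken from the training set in \autoref{tab:class_distri}):
\begin{verbatim}
# w_c = n_t / (n_l * n_c), computed on the training set.
n_total, n_labels = 7376, 2
counts = {0: 1018, 1: 6358}
class_weight = {c: n_total / (n_labels * n)
                for c, n in counts.items()}
# -> {0: ~3.62, 1: ~0.58}: the rare negative class
# weighs more.

# Keras applies these weights to the training loss only;
# validation and testing use the unweighted cross-entropy:
# model.fit(train_ds, validation_data=val_ds,
#           class_weight=class_weight)
\end{verbatim}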
\subsubsection{Performance measures and metrics} \label{sec:metrics}
When it comes to the evaluation of our model, several metrics were incorporated to ensure the rigor of our results. Since we suffer from imbalanced data, Accuracy can be quite misleading if the model becomes biased toward the majority class, so to address this issue four more performance measures were used, namely Precision, Recall, F-Score, and AUC. The F-Score is the harmonic mean of Precision and Recall, thus taking both into account to give a balanced score of the two. Mathematically, Accuracy, Precision, Recall, and F-Score are defined as:
\begin{equation} \label{eq:accuracy}
Accuracy = {\frac{TP + TN}{TP + FP + TN + FN}}
\end{equation}
\begin{equation} \label{eq:precision}
Precision = {\frac{TP}{TP + FP}}
\end{equation}
\begin{equation} \label{eq:recall}
Recall = {\frac{TP}{TP + FN}}
\end{equation}
\begin{equation} \label{eq:f1}
F\!\!-\!\!Score = {\frac{2 * Precision * Recall}{Precision + Recall}}
\end{equation}
Where TP, TN, FP, and FN are True Positives, True Negatives, False Positives, and False Negatives, respectively. In this paper FN takes precedence over FP, and thus Recall is more important than Precision, as FN appears in the denominator of Recall (\autoref{eq:recall}); nevertheless, we tried to balance them as much as possible. The reason is that if an image is falsely labeled as positive, in the worst case we lose time; but if an image is falsely labeled as negative, a building in dire need of conservation can be overlooked, which might lead to irredeemable destruction. The area under the ROC curve, abbreviated as AUC, was employed in an endeavor to refrain from creating a model biased toward a certain class; AUC demonstrates the power of the model in distinguishing different classes.
\section{Results} \label{sec:results}
After slogging through the onerous task of training and fine-tuning the hyperparameters several times, we achieved highly satisfactory results (\autoref{tab:final_results}). Note that the training process of the ResNet50-v2 doesn't have the fine-tuning step as we trained it from the ground up and with random initial weights. Considering the lack of enough data and computational power, which were alluded to before, it was of no surprise that the networks trained with TL fared the best.
Among the networks that used TL there is no definite winner, but MobileNet-v2 had the best performance when considering both the performance measures and the computational complexity of training and inference.
MobileNet's lightweight architecture is conducive to faster training and prediction, which is especially important for devices with low computational power such as mobile phones and edge devices, considered de facto equipment for monitoring CHBs \citep{Maksimovic2019}.
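As a pointer to how such a model could reach a budget phone (a deployment step we assume here rather than report), a trained Keras model can be converted to TensorFlow Lite:
\begin{verbatim}
import tensorflow as tf

# `model` stands for the fine-tuned Keras model; a trivial
# placeholder is built here so the snippet is self-contained.
model = tf.keras.Sequential(
    [tf.keras.layers.Dense(1, activation="sigmoid",
                           input_shape=(4,))])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
with open("chb_mobilenetv2.tflite", "wb") as f:
    f.write(converter.convert())
\end{verbatim}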
\begin{table*}[h]
\centering
\caption{Final results, after hyperparameter tuning.\label{tab:final_results}}
\resizebox{.85\textwidth}{!}{%
\begin{tabular}{cccc|ccc|ccc|ccc}
\toprule
\multirow{2}{*}{\textbf{Measure}} & \multicolumn{3}{c}{\textbf{ResNet50V2 \textsuperscript{1}}} & \multicolumn{3}{c}{\textbf{ResNet152V2 \textsuperscript{2}}} & \multicolumn{3}{c}{\textbf{MobileNetV2 \textsuperscript{2, 3}}} & \multicolumn{3}{c}{\textbf{InceptionResNetV2 \textsuperscript{2}}} \\
\cmidrule{2-13}
& \textbf{train} & \textbf{val} & \textbf{test} & \textbf{train} & \textbf{val} & \textbf{test} & \textbf{train} & \textbf{val} & \textbf{test} & \textbf{train} & \textbf{val} & \textbf{test} \\
\midrule
\textbf{Loss} & 0.48 & 0.47 & 0.48 & 0.38 & 0.38 & 0.38 & 0.31 & 0.32 & 0.33 & 0.36 & 0.36 & 0.37 \\
\textbf{Accuracy} & 0.83 & 0.84 & 0.83 & 0.88 & 0.89 & 0.89 & 0.90 & 0.90 & 0.90 & 0.88 & 0.88 & 0.88 \\
\textbf{Precision} & 0.87 & 0.87 & 0.87 & 0.92 & 0.92 & 0.92 & 0.95 & 0.94 & 0.94 & 0.91 & 0.91 & 0.91 \\
\textbf{Recall} & 0.95 & 0.95 & 0.95 & 0.95 & 0.95 & 0.96 & 0.94 & 0.94 & 0.94 & 0.96 & 0.95 & 0.96 \\
\textbf{F-Score} & 0.91 & 0.91 & 0.91 & 0.93 & 0.94 & 0.94 & 0.94 & 0.94 & 0.94 & 0.93 & 0.93 & 0.93 \\
\textbf{AUC} & 0.54 & 0.54 & 0.54 & 0.89 & 0.88 & 0.88 & 0.93 & 0.92 & 0.90 & 0.87 & 0.86 & 0.85 \\
\textbf{TP} & 6040 & 1310 & 1287 & 6056 & 1319 & 1296 & 5961 & 1311 & 1274 & 6082 & 1319 & 1295 \\
\textbf{FP} & 923 & 189 & 192 & 551 & 107 & 114 & 328 & 78 & 76 & 623 & 123 & 135 \\
\textbf{TN} & 95 & 21 & 22 & 467 & 103 & 100 & 690 & 127 & 139 & 395 & 87 & 79 \\
\textbf{FN} & 318 & 73 & 67 & 302 & 64 & 302 & 397 & 77 & 79 & 276 & 64 & 59 \\
\bottomrule
\end{tabular}%
}\\
\begin{flushleft}
\noindent{\footnotesize{\textsuperscript{1} This NN was trained from scratch and with initial random weights.}}\\
\noindent{\footnotesize{\textsuperscript{2} These NNs were trained and fine-tuned using TL.}}\\
\noindent{\footnotesize{\textsuperscript{3} This NN has the best performance among all.}}
\end{flushleft}
\end{table*}
\subsection{Evaluation of MobileNet-v2's Performance}
As mentioned before, and according to \autoref{tab:final_results}, the fine-tuned model built on the pre-trained MobileNet-v2 was the winner among the four networks, so let us scrutinize its performance further. The detailed results of the other networks can be found in \autoref{fig:appendix:resnet50v2_PM}-\ref{fig:appendix:inception_PM}. \autoref{tab:hyperparameters} displays the most important hyperparameters used during the training and fine-tuning of our networks.
The fine-tuned MobileNet-v2 suffers from neither underfitting nor overfitting (\autoref{fig:MobieNetV2_metrics}).
As regards the second output of the fine-tuned MobileNet-v2, the localizations seemed completely relevant and attest to the fact that the model had learned the correct features in the train data (\autoref{fig:cam_layer}).
\begin{figure*}[h]
\centering
\includegraphics[width=0.9\textwidth, trim={0 0 0 6.5cm}, clip]{Figures/Fig_MobileNetV2_PMs_2.png}
\caption{The changes in performance measures reported after each epoch for both the train and validation sets during the training and fine-tuning phases of the MobileNet-v2 network. The green line indicates the epoch at which we started to fine-tune some late layers of the pre-trained model.}
\label{fig:MobieNetV2_metrics}
\end{figure*}
\begin{figure*}[h]
\centering
\includegraphics[width=0.8\textwidth]{Figures/Fig_output_samples.jpg}
\caption{Some samples of the output of Grad-CAM layer of fine-tuned MobileNet-v2 network. The localized defects are shown by a heatmap (from Blue to Red). \label{fig:cam_layer}}
\end{figure*}
The outputs of several Conv layers (aka feature maps) from our fine-tuned MobileNet-v2 network are visualized in \autoref{fig:feature_maps}; we purposefully chose one layer from the beginning, one from the middle, and one from the end of the network to demonstrate that the deeper we go into the network, the more holistic and abstract the detected features become, and vice versa.
\begin{figure*}[h]
\centering
\includegraphics[width=0.9\textwidth]{Figures/Fig_feature_extraction.jpg}
\caption{A few samples (i.e., 8) of feature maps from the beginning (top), mid-section (middle), and end (bottom) of our fine-tuned MobileNet-v2 network. The input image was the same as that of subfigure~c~in~\autoref{fig:cam_layer}. \label{fig:feature_maps}}
\end{figure*}
\section{Discussion}
This work demonstrates the capabilities of DL in the conservation of CHBs by means of damage detection. As we collected a diverse set of intricate CHB images, the trained model is very robust and achieved a minimum of 90\% on all the metrics we used on the test set. Beyond our diverse data, using TL, data augmentation, and three further regularization methods in combination was conducive to reducing overfitting and increasing the generalization power of our model.
The salient reasons our results can be considered good enough are (i) the Bayes error rate and (ii) the values of the performance measures. Although measuring the Bayes error rate is a hard and time-consuming task that was not in the scope of this experiment, we can argue that its value is high: for instance, even a highly skilled CHB practitioner from the south of Iran would have a hard time detecting the defects in CHBs from the north of the country, considering the peculiarities and idiosyncrasies of each building in our dataset.
According to \citet{Mandrekar2010}, values larger than 90\% are considered excellent, so it is safe to say that MobileNet-v2 performed excellently, recording values above 90\% for all of our metrics. Beyond achieving the best performance among our models, MobileNet-v2 is particularly interesting as a faster NN, which matters for real-time damage detection on devices with low computational resources, such as mobile phones or edge devices. Our proposed MobileNet-v2-based model can pave the way for the wide usage of such models at CH sites in Iran and around the world with the fewest possible resources.
To compare our results with similar research, the papers of \citet{Llamas2017} and \citet{Perez2019} were used, as these also employed image classification, CNNs, and TL, just like this experiment. As both papers performed multiclass classification whereas we performed binary classification, we took the average of each metric (e.g., Recall) over all classes (Llamas et al. had ten classes and Perez et al. had four); this way their results become comparable to ours. The comparison of the results on the test set is shown in \autoref{tab:compare_results}.
\begin{table}[h]
\centering
\caption{A comparison between the results of similar studies. The reported values are for the test set.\label{tab:compare_results}}
\resizebox{0.6\columnwidth}{!}{%
\begin{tabular}{lccc}
\toprule
{} & Precision & Recall & F1 Score \\
\midrule
Llamas et al. (ResNet) \protect\citep{Llamas2017} & 0.90 & 0.90 & 0.90 \\
Perez et al. (VGG-16) \protect\citep{Perez2019} & 0.90 & 0.89 & 0.89 \\
Our fine-tuned model (MobileNet-v2) & \textbf{0.94} & \textbf{0.94} & \textbf{0.94} \\
\bottomrule
\end{tabular}%
}
\end{table}
The most important challenges and limitations we faced during this experiment were: (i) the need for more data, a perennial problem in CV; (ii) the lack of suitable computational power; and (iii) inconsistency in labeling due to personal preferences and differences in the labelers' levels of expertise.
\section{Conclusion}
This experiment is concerned with applying novel yet mature methods such as DL and CNNs to make the conservation of CHBs less error-prone and more efficient than manual conservation under direct human supervision. By getting Iran's CHB practitioners, the main beneficiaries of this experiment, to use our proposed models alongside their existing methods, a higher rate of success in detecting physical defects of such buildings can be achieved. We firmly believe that CHB practitioners using DL models, such as the one proposed here, can identify physical defects more often than either could alone, with, hopefully, a lower prospect of CHBs deteriorating in structural health as a result.
In an endeavor to practically demonstrate the utility of DL in the CH literature, we developed a fully fledged DL model that classifies images of CHBs in need of conservation and, moreover, approximately localizes the defects to help CH practitioners identify them in a timely manner, thus speeding up the process of CHB conservation as well as increasing its accuracy.
In spite of all the limitations, we achieved very good results, with scores of at least 94\% for Precision, Recall, and F1-Score, about 4--5\% higher than those of similar works (\autoref{tab:compare_results}).
As regards future work, addressing the limitations we faced can open up a plethora of opportunities in terms of methods and outputs. For instance, with access to a large amount of labeled data and to powerful servers, physical or in the cloud, object detection or instance segmentation would become feasible and could elicit more accurate and user-friendly results from our data. Having gained traction in the past few years, generative adversarial networks (GANs) could also be incorporated into our architecture to propose restorations based on the labels and localizations our proposed model offers.
\nolinenumbers
|
{
"arxiv_id": "2302.14313",
"language": "en",
"timestamp": "2023-03-01T02:09:17",
"url": "https://arxiv.org/abs/2302.14313",
"yymm": "2302"
} | \chapter*{Contents}
\begin{center}
\SingleSpacing
\vskip 35pt
\SingleSpacing
\boolfalse {citerequest}\boolfalse {citetracker}\boolfalse {pagetracker}\boolfalse {backtracker}\relax
\defcounter {refsection}{0}\relax
\contentsline {chapter}{{Acknowledgements}}{iii}{chapter*.1}%
\defcounter {refsection}{0}\relax
\contentsline {chapter}{{Abstract}}{vi}{chapter*.2}%
\defcounter {refsection}{1}\relax
\contentsline {chapter}{{Published Content and Contributions}}{viii}{chapter*.3}%
\defcounter {refsection}{0}\relax
\contentsline {chapter}{Contents}{ix}{section*.4}%
\defcounter {refsection}{0}\relax
\contentsline {chapter}{List of Figures}{xii}{section*.5}%
\defcounter {refsection}{0}\relax
\contentsline {chapter}{List of Tables}{xviii}{section*.6}%
\defcounter {refsection}{0}\relax
\contentsline {chapter}{\chapternumberline {1}Introduction}{1}{chapter.1}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {1.1}Finite temperature algorithms}{3}{section.1.1}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{Direct evaluation of the trace}{4}{equation.1.1.3}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{Imaginary time evolution}{7}{Item.13}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {1.2}Summary of research}{13}{section.1.2}%
\defcounter {refsection}{0}\relax
\contentsline {chapter}{\chapternumberline {2}Finite temperature density matrix embedding theory}{18}{chapter.2}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {2.1}Abstract}{18}{section.2.1}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {2.2}Introduction}{18}{section.2.2}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {2.3}Theory}{20}{section.2.3}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{Ground state DMET}{20}{section.2.3}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{DMET bath construction}{21}{equation.2.3.1}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Embedding Hamiltonian}{23}{equation.2.3.7}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Self-consistency}{24}{equation.2.3.10}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{Ground-state expectation values}{25}{equation.2.3.14}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{Finite temperature DMET}{25}{equation.2.3.15}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Finite temperature bath construction}{26}{equation.2.3.15}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Thermal observables}{29}{equation.2.3.21}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {2.4}Results}{29}{section.2.4}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{Computational details}{29}{section.2.4}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{1D Hubbard model}{30}{section.2.4}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{2D Hubbard model}{33}{figure.caption.15}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {2.5}Conclusions}{39}{section.2.5}%
\defcounter {refsection}{0}\relax
\contentsline {chapter}{\chapternumberline {3}\textit {Ab initio} finite temperature density matrix embedding theory}{41}{chapter.3}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {3.1}Abstract}{41}{section.3.1}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {3.2}Introduction}{41}{section.3.2}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {3.3}\textit {Ab initio} FT-DMET}{43}{section.3.3}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{Orbital localization}{43}{section.3.3}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{Bath truncation and finite temperature bath}{44}{equation.3.3.1}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{Embedding Hamiltonian}{47}{figure.caption.20}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{Impurity solver}{49}{figure.caption.21}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{Thermal observables}{52}{figure.caption.21}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {3.4}Results}{54}{section.3.4}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {3.5}Conclusion}{58}{section.3.5}%
\defcounter {refsection}{0}\relax
\contentsline {chapter}{\chapternumberline {4}Finite temperature complex polarization and metal-insulator transition}{59}{chapter.4}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {4.1}Abstract}{59}{section.4.1}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {4.2}Introduction}{60}{section.4.2}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {4.3}Ground state complex polarization and electron localization }{62}{section.4.3}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{Electron localization}{63}{equation.4.3.5}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{Complex polarization for independent electrons}{65}{equation.4.3.17}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {4.4}Finite temperature complex polarization}{66}{section.4.4}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {4.5}Tight binding model}{69}{section.4.5}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {4.6}Hydrogen chain}{75}{section.4.6}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {4.7}Conclusion}{80}{section.4.7}%
\defcounter {refsection}{0}\relax
\contentsline {chapter}{\chapternumberline {5}Quantum imaginary time evolution and quantum thermal simulation}{81}{chapter.5}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {5.1}Abstract}{81}{section.5.1}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {5.2}Introduction}{81}{section.5.2}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {5.3}Quantum imaginary-time evolution}{83}{section.5.3}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {5.4}Quantum Lanczos algorithm}{88}{section.5.4}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {5.5}Quantum thermal averages}{90}{section.5.5}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {5.6}Results}{92}{section.5.6}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{Benchmarks}{95}{equation.5.6.22}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {5.7}Conclusions}{98}{section.5.7}%
\defcounter {refsection}{0}\relax
\contentsline {appendix}{\chapternumberline {A}Appendix for Chapter~\ref {chp:dmet} and Chapter~\ref {chp:hlatt}}{99}{chapter.A}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {A.1}Proof of the finite temperature bath formula}{99}{section.A.1}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {A.2}Analytic gradient of the cost function for correlation potential fitting in DMET at finite temperature}{101}{section.A.2}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {A.3}Davidson diagonalization}{102}{section.A.3}%
\defcounter {refsection}{0}\relax
\contentsline {appendix}{\chapternumberline {B}Appendix for Chapter~\ref {chp:qite}}{104}{chapter.B}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {B.1}Representing imaginary-time evolution by unitary maps}{104}{section.B.1}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {B.2}Proof of correctness from finite correlation length}{105}{section.B.2}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {B.3}Spreading of correlations}{110}{section.B.3}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {B.4}Parameters used in QVM and QPUs simulations}{112}{section.B.4}%
\defcounter {refsection}{0}\relax
\contentsline {chapter}{Bibliography}{114}{section*.47}%
\end{center}
\listoffigures
\listoftables
\mainmatter
\chapter{Introduction\label{chp:intro}}
We live in an era where computational power is one of the main driving
forces for the development of science and technology. Hardware breakthroughs in
supercomputers, graphics processing units (GPUs), and quantum computers have made
heavy computational tasks possible. Developments in machine learning
algorithms and artificial intelligence have changed the way people live
tremendously. Many new materials and drugs are discovered via computational
simulations, saving hundreds of laboratory hours. We look to
computational power to bring us new knowledge and concepts, as well as to
solve fundamental problems that have remained unclear for decades.
In quantum chemistry and condensed matter physics, those hard problems
include the phase diagram of high-temperature superconductors (HTSC)
~\citep{Dagotto1994,Nikolay2010},
the mechanism of nitrogen fixation~\citep{Hoffman2014,Cherkasov2015},
protein folding~\citep{Englander2014}, etc.
The barrier to efficient simulation of the above problems is usually
either that the system is too large or that the interactions are too complicated.
Strongly correlated systems, unfortunately, face both barriers.
The hallmark of strongly correlated systems is localized orbitals such
as $d$ and $f$ orbitals, where electrons experience strong Coulomb repulsion.
For instance, transition metal compounds usually contain strong correlations
due to the localized $3d$ orbitals.
Strongly correlated materials attract
tremendous interest from both experimental and theoretical researchers
because they exhibit a plethora of exotic phases and behaviors: HTSC,
spintronic materials~\citep{Hirohata202016}, Mott insulators~\citep{Hubbard1963}, etc. These
strongly correlated behaviors have enabled novel applications such as
quantum processing units~\citep{Ladd2010}, superconducting magnets~\citep{Wilson1983,Chester1967},
and magnetic storage~\citep{Comstock2002}. Being able to simulate strongly correlated
problems, and thus understand the physics behind them, has been a key
task for theoretical and computational chemists.
This thesis focuses on developing theoretical and computational approaches
to simulate strongly correlated problems at finite temperature.
While ground state simulations provide basic information on a system,
such as the ground state energy and band gap, finite temperature is where
real-life phase transitions happen. The complexity of a quantum
many-body problem can be described by a quantity called \textit{entanglement}.
In the ground state away from a critical point, the entanglement is bounded
by the area law~\citep{Eisert2010}. However, at finite temperature,
especially low temperature where the quantum effects are not yet fully dissipated
by thermal fluctuations, the area law is no longer valid. One expects
the entanglement strength to decay and the
entanglement length to grow with temperature. The interplay between
entanglement strength and entanglement length determines the complexity of the
system, and one normally expects finite temperature calculations to require
more computational effort than ground state calculations.
The complexity of finite temperature calculations can also be understood
in the ensemble picture. Most of the physical and chemical systems can be
seen as open systems, where the thermodynamic statistics is described by
the grand canonical ensemble. In the grand canonical ensemble picture, both
energy fluctuations and particle number fluctuations are involved.
The system at temperature $T$ is fully described by the density matrix
\begin{equation}
\hat{\rho}(T) = e^{-(\hat{H} - \mu\hat{N})/k_BT},
\end{equation}
where $\hat{H}$
is the Hamiltonian, $\mu$ is the chemical potential, $\hat{N}$ is the
number operator and $k_B \approx 1.38\times 10^{-23} \mathrm{J}\cdot\mathrm{K}^{-1}$ is the Boltzmann constant. The partition function
is defined as the trace of the density matrix:
$\mathcal{Z} = \text{Tr}(\hat{\rho})$. If one chooses the
eigenstates of the Hamiltonian $\hat{H}$ as the basis to perform
the trace summation, each eigenstate would participate in the statistics with
probability
\begin{equation}\label{eq:intro_prob}
P(n, i) = e^{-(\varepsilon_i^n - \mu n)/k_BT}/\mathcal{Z},
\end{equation}
where $\varepsilon_i^n$ is the eigenvalue of the $i$th eigenstate in the
Fock space of $n$ particles. If $\varepsilon_i^n < \mu n$, $P(n,i)$
decreases to $1/\mathcal{N}$ as temperature rises; if
$\varepsilon_i^n > \mu n$, $P(n,i)$ increases to $1/\mathcal{N}$
as temperature rises, where $\mathcal{N}$ is the total number of eigenstates.
At $T=0$, only the ground state is involved;
as one raises the temperature, the contribution from the ground state
drops and excited states enter the ensemble. Eventually at infinite
temperature, all states are equally involved with a probability
$1/\mathcal{N}$. The inclusion of many excited states is the source of
the high complexity of finite temperature simulations. Consider, for instance,
an electronic structure problem with $L$ orbitals, where each orbital
can take one of four states: $|0\rangle$, $|\uparrow\rangle$, $|\downarrow\rangle$,
and $|\uparrow\downarrow\rangle$; the total number of states is then $\mathcal{N} = 4^L$,
which scales exponentially with $L$.
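To make this counting and Eq.~\eqref{eq:intro_prob} concrete, a minimal Python sketch with a hypothetical four-level spectrum (in units where $k_B = 1$) shows the probabilities flattening toward $1/\mathcal{N}$ as the temperature rises:
\begin{verbatim}
import numpy as np

# Hypothetical spectrum: (n, eps) pairs of particle number and energy,
# in units where k_B = 1.
levels = [(0, 0.0), (1, -1.0), (1, 0.5), (2, -0.8)]
mu = -0.2

for T in [0.1, 1.0, 10.0, 100.0]:
    # P(n, i) = exp(-(eps_i^n - mu n)/k_B T) / Z
    w = np.array([np.exp(-(e - mu * n) / T) for n, e in levels])
    P = w / w.sum()
    print(T, np.round(P, 3))  # flattens toward 1/N = 0.25 as T grows
\end{verbatim}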
Despite the high computational cost of finite temperature simulations, there
exists a variety of finite temperature algorithms that can fulfill different
computational tasks.
Section~\ref{sec:ftalgos} presents a detailed review of current finite
temperature algorithms. We hope this review could be helpful to researchers
who are interested in learning about or using finite temperature algorithms.
Section~\ref{sec:sumsec} provides an outline for the rest of the chapters in
this thesis.
\section{Finite temperature algorithms\label{sec:ftalgos}}
At finite temperature $T$, the grand canonical ensemble average of an operator
$\hat{O}$ is evaluated by
\begin{equation}\label{eq:average_intro}
\langle \hat{O}\rangle (T) = \frac{\mathrm{Tr}\left(e^{-(\hat{H}- \mu \hat{N})/k_BT } \hat{O}\right)}
{\mathrm{Tr}\left(e^{-(\hat{H} -\mu\hat{N})/k_BT}\right)}.
\end{equation}
There are generally two approaches to designing a finite temperature algorithm:
(i) directly evaluate the trace in Eq.~\eqref{eq:average_intro} by summing
expectation values over an orthonormal basis; (ii) perform imaginary
time evolution from infinite temperature. Theoretically the two approaches
are both based on Eq.~\eqref{eq:average_intro}, so one could argue that
there is no big difference between them. Technically, however,
the first approach usually involves exact or approximate diagonalization
of the Hamiltonian, while the latter approach does not. In the following,
we discuss the two approaches with some example algorithms.
\subsection{Direct evaluation of the trace}
We first discuss the non-interacting case. For a non-interacting Hamiltonian,
only one-body terms are involved, and the Hamiltonian can be simply written
as an $L\times L$ matrix, where $L$ is the number of orbitals in the system.
For most cases, this $L\times L$ Hamiltonian matrix can be directly
diagonalized, with eigenvalues $\varepsilon_i$ and eigenvectors
$|\phi_i\rangle$ (molecular orbitals, MOs). A direct implementation of
Eq.~\eqref{eq:average_intro} is to construct Slater determinants of
all possible particle numbers and evaluate the traces, where the number
of Slater determinants in the summation scales exponentially with $L$.
Luckily, for non-interacting electrons, the grand canonical density matrix
can be evaluated via the Fermi-Dirac distribution
\begin{equation}\label{eq:fd_intro}
\rho = \frac{1}{1+e^{(H-\mu\mathbb{I})/k_BT}},
\end{equation}
where $\mathbb{I}$ is the identity matrix. The occupation numbers on
MOs are the diagonal terms of the density matrix:
$n_i = 1/(1+e^{(\varepsilon_i - \mu)/k_BT})$. Thus Eq.~\eqref{eq:average_intro}
can be rewritten as
\begin{equation}
\langle \hat{O}\rangle_{NI} (T) = \sum_{ij} \rho_{ij} \langle \phi_j |\hat{O}|\phi_i\rangle,
\end{equation}
where the subscript ``NI'' stands for ``non-interacting''.
Finite temperature Hartree-Fock is an example of the above approach, with the
algorithm summarized in Algorithm~\ref{alg:fthf}.
\begin{algorithm}[h]
\SetAlgoLined
\vspace{0.2em}
\begin{description}[topsep=0pt,itemsep=-1ex,partopsep=1ex,parsep=1ex,leftmargin=*]
\item[] Construct the Fock matrix $F$ from the Hamiltonian.
Define $F'$ as identity.\\
\While{$F\neq F'$}{
\begin{enumerate}[topsep=0pt,itemsep=-1ex,partopsep=1ex,parsep=1ex,leftmargin=*]
\item Store the Fock matrix into $F' = F$;
\item Diagonalize $F$ to get MO energies and coefficients;
\item Calculate the chemical potential $\mu$ by minimizing
$(N_{\text{elec}} - \sum_i n_i)^2$, where $N_{\text{elec}}$ is the
target electron number and $n_i$ is the occupation number of the $i$th MO;
\item Calculate density matrix $\rho$ from Eq.~\eqref{eq:fd_intro} by
substituting $H$ with $F$;
\item Evaluate the new Fock matrix $F$ from the density matrix $\rho$ as in
ground state Hartree-Fock algorithm;
\end{enumerate}
}
\item[] Evaluate thermal observables with converged $\rho$.
\end{description}
\caption{Finite temperature Hartree-Fock algorithm}\label{alg:fthf}
\end{algorithm}
Note that in the above algorithm, the convergence criterion can also be based on the
density matrix or the MO energies.
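As a concrete illustration of steps 3 and 4 of Algorithm~\ref{alg:fthf}, the following minimal Python sketch computes the Fermi-Dirac occupations and locates the chemical potential by bisection, exploiting the fact that $\sum_i n_i$ increases monotonically with $\mu$ (the MO energies below are hypothetical):
\begin{verbatim}
import numpy as np

def fermi_occ(eps, mu, kT):
    # Fermi-Dirac occupations n_i = 1/(1 + exp((eps_i - mu)/kT)).
    return 1.0 / (1.0 + np.exp((eps - mu) / kT))

def find_mu(eps, nelec, kT, lo=-50.0, hi=50.0, tol=1e-10):
    # Bisect on mu until sum_i n_i matches the target electron number
    # (steps 3-4 of Algorithm 1); the sum is monotonic in mu.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if fermi_occ(eps, mid, kT).sum() < nelec:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical MO energies from diagonalizing a converged Fock matrix.
eps = np.array([-2.0, -1.0, 0.3, 1.5])
mu = find_mu(eps, nelec=2.0, kT=0.5)
rho_mo = np.diag(fermi_occ(eps, mu, kT=0.5))  # density matrix, MO basis
\end{verbatim}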
For the interacting case, a naive approach is exact diagonalization
(ED), where all eigenstates of the Hamiltonian $\hat{H}$ are explicitly
calculated and the thermal average of an observable $\hat{O}$ is evaluated
by
\begin{equation}
\langle \hat{O}\rangle (T) = \frac{\sum_{n,i}\langle \psi_i^n | \hat{O}
e^{-(\varepsilon_i^n - \mu n)/k_BT}|\psi_i^n\rangle}
{\sum_{n,i}\langle \psi_i^n | e^{-(\varepsilon_i^n - \mu n)/k_BT}|\psi_i^n\rangle},
\end{equation}
where $|\psi_i^n\rangle$ is the $i$th eigenstate in the Fock space with
$n$ particles. The algorithm of ED is described in
Algorithm~\ref{alg:intro_ed}. The expense of ED scales exponentially
with the number of orbitals $L$, and thus ED is limited to small systems.
For electronic systems with two spins, the maximum $L$ is $\sim 8$; as a result,
hardly any meaningful calculation can be performed with ED alone.
\begin{algorithm}[H]
\SetAlgoLined
\vspace{0.2em}
\begin{description}[topsep=0pt,itemsep=-1ex,partopsep=1ex,parsep=1ex,leftmargin=*]
\item[] $\mathcal{Z} = 0, O = 0$;
\item[] \For{$n_a$ in $[0, L]$}{
\For{$n_b$ in $[0, L]$}{
\begin{enumerate}[topsep=0pt,itemsep=-1ex,partopsep=1ex,parsep=1ex]
\item Construct Hamiltonian $H(n_a, n_b)$;
\item Diagonalize $H(n_a, n_b)$ to get eigenvalues $\varepsilon_i^{n_a, n_b}$ and
eigenstates $\{|\psi_i^{n_a, n_b}\rangle\}$;
\item Evaluate $\mathcal{Z}^{n_a, n_b} = \sum_i e^{-(\varepsilon_i^{n_a, n_b} - \mu (n_a + n_b))/k_BT}$ and $O^{n_a, n_b} = \sum_i e^{-(\varepsilon_i^{n_a, n_b} - \mu (n_a + n_b))/k_BT} \langle \psi_i^{n_a, n_b} | \hat{O} | \psi_i^{n_a, n_b}\rangle$;
\item $\mathcal{Z}$ += $\mathcal{Z}^{n_a, n_b}$ ; $O$ += $O^{n_a, n_b}$;
\end{enumerate}
}
}
\item[] $\langle O \rangle (T) = O/\mathcal{Z}$
\end{description}
\caption{Finite temperature exact diagonalization}\label{alg:intro_ed}
\end{algorithm}
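The following minimal Python sketch implements Algorithm~\ref{alg:intro_ed}, assuming the sector Hamiltonians and operator matrices are supplied as dense arrays (their construction for a concrete model is omitted):
\begin{verbatim}
import numpy as np

def ed_thermal_average(H_sectors, O_sectors, mu, kT):
    # Algorithm 2: H_sectors maps (na, nb) to a dense Hamiltonian in
    # that particle-number sector; O_sectors maps the same keys to the
    # operator matrix in the same basis. Both are hypothetical inputs.
    Z, O_sum = 0.0, 0.0
    for (na, nb), H in H_sectors.items():
        eps, C = np.linalg.eigh(H)                # eigenpairs in sector
        w = np.exp(-(eps - mu * (na + nb)) / kT)  # Boltzmann weights
        # Diagonal matrix elements <psi_i|O|psi_i> for each eigenstate.
        O_diag = np.einsum("ai,ab,bi->i", C, O_sectors[(na, nb)], C)
        Z += w.sum()
        O_sum += (w * O_diag).sum()
    return O_sum / Z  # <O>(T) in the grand canonical ensemble
\end{verbatim}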
One could reduce the computational cost by only including low-lying states
in the ensemble.
Davidson diagonalization~\citep{Davidson1975} and Lanczos algorithm~\citep{Lanczos1950} are two
methods that construct a smaller subspace of the Hilbert space containing
the low-lying states.
In the Lanczos algorithm, starting with a normalized vector $|\phi_0\rangle$, one
could generate a set of orthonormal Lanczos vectors $\{|\phi_m\rangle, m = 0, ..., M\}$ to span the Krylov space $\{|\phi_0\rangle, \hat{H} |\phi_0\rangle, ..., \hat{H}^M |\phi_0\rangle\}$ with the following steps:
\begin{enumerate}
\item Apply $\hat{H}$ to $|\phi_0\rangle$ and split the resulting vector into
$a_0|\phi_0\rangle$ and $b_1 |\phi_1\rangle$ with $|\phi_1\rangle \perp |\phi_0\rangle$
\begin{equation}
\hat{H}|\phi_0\rangle = a_0 |\phi_0\rangle + b_1 |\phi_1\rangle,
\end{equation}
where $a_0 = \langle \phi_0 | \hat{H}|\phi_0\rangle$ and $b_1$ is chosen so
that $|\phi_1\rangle$ is normalized.
\item Iteratively apply $\hat{H}$ to $|\phi_i\rangle, i = 1,...,M$ to get
\begin{equation}
\hat{H}|\phi_i\rangle = b_i |\phi_{i-1}\rangle + a_i |\phi_i\rangle + b_{i+1}
|\phi_{i+1}\rangle,
\end{equation}
where the iteration stops at $i=M$ with $b_{M+1} = 0$ or when $b_i = 0$ with
$i < M$.
\item Construct the matrix representation of the Krylov space Hamiltonian as
\begin{equation}
H' = \begin{bmatrix}
a_0 & b_1 & 0 & \cdots & 0 \\
b_1 & a_1 & b_2 & \cdots & 0\\
0 & b_2 & a_2 & \cdots & 0\\
& & & \ddots & \\
0 & 0 & 0 & \cdots & a_M
\end{bmatrix},
\end{equation}
where we choose $b_i$ to be real numbers.
\item Diagonalize the Krylov Hamiltonian $H'$ to get the eigenvalues and
eigenvectors in the basis of $\{|\phi_i\rangle, i = 0, ..., M\}$.
Note that the
$H'$ is a tridiagonal matrix, and the typical cost to diagonalize an
$M\times M$ symmetric tridiagonal matrix is $\mathcal{O}(M^2)$, while the
cost of diagonalizing a random symmetric $M\times M$ matrix is $\mathcal{O}(M^3)$.
\end{enumerate}
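For reference, the recurrence can be sketched in a few lines of Python; this bare-bones version omits the re-orthogonalization that longer Lanczos runs require in finite-precision arithmetic:
\begin{verbatim}
import numpy as np

def lanczos_eigvals(H, phi0, M):
    # Build the tridiagonal Krylov Hamiltonian H' via the three-term
    # recurrence above, then diagonalize it.
    a, b = [], []
    q = phi0 / np.linalg.norm(phi0)
    q_prev, beta = np.zeros_like(q), 0.0
    for _ in range(M + 1):
        w = H @ q - beta * q_prev
        alpha = q @ w
        w -= alpha * q
        a.append(alpha)
        beta = np.linalg.norm(w)
        if beta < 1e-12:          # invariant subspace found early
            break
        b.append(beta)
        q_prev, q = q, w / beta
    k = len(a)
    Hp = np.diag(a) + np.diag(b[:k - 1], 1) + np.diag(b[:k - 1], -1)
    return np.linalg.eigvalsh(Hp)  # Ritz values approximate eigenvalues
\end{verbatim}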
The quality of the Krylov space depends heavily on the initial state
$|\phi_0\rangle$. For instance, if $|\phi_0\rangle$ has zero overlap
with the ground state, then the leading part of the trace summation at low
temperature is missing and the result is inaccurate. One could
sample initial states and average over the samples to obtain a better approximation. Note that the above routine is for a system with a fixed particle number,
so to fulfill the grand canonical ensemble, one should also sample the
Fock spaces of all possible particle numbers. For low temperature
simulations, sampling particle numbers near the target electron number
is usually enough. We also provide a summary of the Davidson algorithm in
Appendix~\ref{sec:apdx_davidson}.
\subsection{Imaginary time evolution}
The imaginary time evolution operator is defined as
$e^{-\beta \hat{H}}$, where $\beta$ is called the imaginary time. This
approach can be used both in ground state searches and in finite
temperature calculations. In the latter case, $\beta$ has a physical
meaning: the
inverse temperature $\beta = 1/k_B T$. At $\beta = 0$ (infinite temperature),
the density matrix $\hat{\rho}(\beta = 0)$ is proportional to the identity operator
and the
system is maximally mixed.
The evolution of $\hat{\rho}(\beta) = e^{-\beta \hat{H}}$ with respect to $\beta$ is described by the Bloch equation
\begin{equation}\label{eq:bloch_eq_intro}
\frac{\mathrm{d}\hat{\rho}}{\mathrm{d}\beta} = -\hat{H}\hat{\rho}
= -\frac{1}{2}(\hat{H}\hat{\rho} + \hat{\rho}\hat{H}),
\end{equation}
where the last equality uses $[\hat{H}, e^{-\beta \hat{H}}] = 0$. The
solution to Eq.~\eqref{eq:bloch_eq_intro} can also be written in the
symmetrized form
\begin{equation}\label{eq:dm_evolve_intro}
\hat{\rho}(\beta) = e^{-\beta \hat{H}/2} \hat{\rho}(\beta = 0) e^{-\beta \hat{H}/2}.
\end{equation}
Density matrix quantum Monte Carlo (DMQMC)~\citep{Blunt2014,Petras2020} is an example of the above
approach. We introduce an energy shift $\Delta E$ to the original Hamiltonian
$\hat{H}$, and Eq.~\eqref{eq:bloch_eq_intro} turns into
\begin{equation}
\frac{\mathrm{d}\hat{\rho}}{\mathrm{d}\beta}= -\frac{1}{2}(\hat{T}\hat{\rho} + \hat{\rho}\hat{T}),
\end{equation}
where $\hat{T} = \hat{H} - \Delta E\hat{\mathbb{I}}$, and $\Delta E$ is
slowly adjusted to control the population. A similar concept
of $\Delta E$ is also employed in diffusion Monte Carlo (DMC)~\citep{Hammond1994,Foulkes2001}
and full configuration interaction quantum Monte
Carlo (FCIQMC)~\citep{Booth2009,Booth2013}.
The general form of $\hat{\rho}(\beta)$ can be written as a linear combination
\begin{equation}
\hat{\rho}(\beta) = \sum_{ij}\rho_{ij}(\beta) |\psi_i\rangle\langle\psi_j|,
\end{equation}
where $\{|\psi\rangle\}$ forms a complete orthonormal basis of the Hilbert
space. Here we choose $\{|\psi\rangle\}$ to be Slater determinants.
$\{|\psi_i\rangle\langle\psi_j|\}$ forms a basis for operators in this
Hilbert space, denoted as $\{X_{ij}\}$ for simplicity. Here we introduce
the term ``psips''~\citep{Anderson1975,Anderson1976}: each psip resides on a particular basis operator $X_{ij}$ or
site $(i,j)$ with ``charge'' $q_{ij} = \pm 1$. The imaginary time evolution is divided into $N_{\beta}$
tiny steps: $\tau = \beta / N_{\beta}$. At each step, DMQMC
loops over the sample of psips and performs the following steps:
\begin{enumerate}
\item \textbf{Spawning along columns of the density matrix}. Starting from a
psip on site $(i,j)$, calculate the transition probabilities
$\frac{1}{2}|T_{ik}|\tau $ to spawn onto sites $(k,j)$ with $T_{ik}\neq0$
and $i\neq k$. If the spawning attempt is accepted, a psip is born
on site $(k,j)$ with charge $q_{kj} = \mathrm{sign}(T_{ik})q_{ij}$.
\item \textbf{Spawning along rows of the density matrix}. Repeat the above step to
spawn psips from site $(i,j)$ onto sites $(i,k)$.
\item \textbf{Psip replication and death}. Evaluate the diagonal sum $d_{ij} = T_{ii} + T_{jj}$ for site $(i,j)$: if $d_{ij} < 0$, a copy of the psip on
site $(i,j)$ is added to the pool with probability $p_d = \frac{1}{2}
|d_{ij}|\tau$; if $d_{ij} > 0$, the psip on site $(i,j)$ is removed
with probability $p_d$, consistent with the diagonal decay $\mathrm{d}\rho_{ij}/\mathrm{d}\beta \supset -\frac{1}{2}d_{ij}\rho_{ij}$.
\item \textbf{Annihilation}. Pairs of psips on the same site with opposite charges
are removed from the pool.
\end{enumerate}
The distribution of psips generated by repeating the above procedure
$N_{\beta}$ times provides an approximation of the unnormalized density matrix
at $\beta$. The thermal average of an observable $\hat{O}$ is then calculated
by
\begin{equation}
\langle \hat{O}\rangle (\beta) = \frac{\sum_{ij}\bar{q}_{ij}O_{ji}}
{\sum_i \bar{q}_{ii}},
\end{equation}
where $\bar{q}$ is an average of density matrices evaluated from a large
number of repeats of the above imaginary time evolution process.
The main concern with the above approach is the size of the density matrix. The
number of independent elements in the density matrix is
$\sim \mathcal{N}(\mathcal{N}+1)/2$, where $\mathcal{N}$ is the
Hilbert space size, which grows exponentially with the system size.
Even with heavy parallelization, DMQMC still suffers from considerable
computational cost. Moreover, the accuracy of DMQMC worsens as
the temperature is lowered, limiting this method to intermediate
or high temperature calculations.
One could circumvent evolving a density matrix by artificially constructing
an enlarged space in which the density matrix of the original system can
be obtained by partial trace from the pure state solution of the enlarged
system. The above approach is called purification~\citep{Palser1998}. The idea of purification
is the following: suppose a system $\mathcal{S}$ can be bipartitioned into
two smaller systems $\mathcal{A}$ and $\mathcal{B}$; then a state
$|\Psi\rangle$ in $\mathcal{S}$ can be written as
\begin{equation}
|\Psi\rangle = \sum_{ij} c_{ij}|A_i\rangle|B_j\rangle,
\end{equation}
where $\{|A_i\rangle\}$ and $\{|B_i\rangle\}$ are orthonormal bases
of $\mathcal{A}$ and $\mathcal{B}$ respectively, and $\sum_{ij}|c_{ij}|^2 = 1$.
The density matrix of
the total system is $\hat{\rho}_{\mathcal{S}} = |\Psi\rangle\langle\Psi|$,
and the density matrix of $\mathcal{A}$ can be obtained by
\begin{equation}\label{eq:purify_intro}
\begin{split}
\hat{\rho}_{\mathcal{A}} &= \text{Tr}_{\mathcal{B}}\left(\hat{\rho}_{\mathcal{S}}\right) \\
&= \sum_k \langle B_k| \left(\sum_{ij} c_{ij} |A_i\rangle|B_j\rangle \right)
\left(\sum_{i'j'} c^*_{i'j'} \langle A_{i'}|\langle B_{j'}|\right)
| B_k\rangle\\
&= \sum_{ii'} \left(\sum_k c_{ik}c^*_{i'k}\right)|A_i\rangle\langle A_{i'}|\\
&= \sum_{ii'} w_{ii'} |A_i\rangle\langle A_{i'}|.
\end{split}
\end{equation}
Eq.~\eqref{eq:purify_intro} has the form of a density matrix operator, with
matrix elements $w_{ii'}$.
The matrix $\mathbf{w}$ has the following properties: (i) Hermitian;
(ii) diagonal terms $w_{ii} = \sum_{k} |c_{ik}|^2 \geq 0$;
and (iii) $\sum_i w_{ii} = 1$. Based on the above properties, we confirm that
$\mathbf{w}$ is a density matrix.
Given a density matrix $\hat{\rho}_{\mathcal{A}}$ and basis $\{|A_i\rangle\}$,
one could also find a set of $\{|B_i\rangle\}$ to construct a state $|\Psi\rangle$ such that $\hat{\rho}_{\mathcal{A}}$ can be derived from the partial trace
of $|\Psi\rangle\langle\Psi|$ with $\{|B_i\rangle\}$. The above procedure is
called purification. Note that for a system $\mathcal{A}$, there exists
more than one purified state $|\Psi\rangle$, and one can choose a particular
$\{|B_i\rangle\}$ and $|\Psi\rangle$ for convenience. At infinite
temperature, the density matrix of subspace $\mathcal{A}$ can be written as
\begin{equation}
\hat{\rho}_{\mathcal{A}}(\beta = 0) = \frac{1}{N_{\mathcal{A}}}\sum_{i}
|A_i\rangle\langle A_i|,
\end{equation}
where $N_{\mathcal{A}}$ is the size of $\mathcal{A}$. One could introduce
a set of ancillary orbitals $\{|\tilde{A}_i\rangle\}$ which are copies of
$\{|A_i\rangle\}$ and define the purified state as
\begin{equation}
|\Psi(\beta = 0)\rangle = \frac{1}{\sqrt{N_{\mathcal{A}}}}\sum_{i}
|A_i\rangle|\tilde{A}_i\rangle.
\end{equation}
It is easy to prove that $\hat{\rho}_{\mathcal{A}}(\beta = 0) $ can be derived as the partial trace of $|\Psi(\beta = 0)\rangle\langle \Psi(\beta = 0)|$
with $\{|\tilde{A}_i\rangle\}$.
Now one could apply imaginary time evolution onto $|\Psi(\beta = 0)\rangle$
instead of $\hat{\rho}_{\mathcal{A}}(\beta = 0)$,
\begin{equation}
|\Psi(\beta)\rangle \propto e^{-\beta(\hat{H}\otimes \hat{\mathbb{I}})/2}
|\Psi(\beta = 0)\rangle,
\end{equation}
where $\hat{H}$ is the original Hamiltonian on $\mathcal{A}$ and
$\hat{\mathbb{I}}$ is the identity operator on $\tilde{\mathcal{A}}$.
The thermal average of an operator $\hat{O}$ acting on $\mathcal{A}$ is then simply evaluated as
\begin{equation}
\langle \hat{O} \rangle (\beta) = \langle \Psi(\beta)| \hat{O}\otimes \hat{\mathbb{I}} |\Psi(\beta)\rangle,
\end{equation}
with $|\Psi(\beta)\rangle$ normalized.
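The identity above is easy to verify numerically for a small dense Hamiltonian; the following sketch, with a random Hermitian $H$ and observable $\hat{O}$ as hypothetical stand-ins, checks that the evolved purified state reproduces the exact thermal average:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
N, beta = 6, 1.3
H = rng.standard_normal((N, N)); H = 0.5 * (H + H.T)
O = rng.standard_normal((N, N)); O = 0.5 * (O + O.T)

# |Psi(0)> = (1/sqrt(N)) sum_i |A_i>|A~_i>, i.e. vec(I)/sqrt(N).
psi = np.eye(N).reshape(-1) / np.sqrt(N)
# Apply e^{-beta (H x I)/2}, acting on the physical factor only.
psi = np.kron(expm(-0.5 * beta * H), np.eye(N)) @ psi
psi /= np.linalg.norm(psi)

lhs = psi @ np.kron(O, np.eye(N)) @ psi            # <Psi|O x I|Psi>
rhs = np.trace(expm(-beta * H) @ O) / np.trace(expm(-beta * H))
assert np.isclose(lhs, rhs)                        # thermal averages agree
\end{verbatim}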
The most time consuming step in the above procedure is applying the
imaginary time evolution operator to $|\Psi\rangle$. A commonly accepted way to deal with $e^{-\beta \hat{H}}$
is the Trotter-Suzuki decomposition. Again we divide $\beta$ into $N_{\beta}$
tiny steps $\tau = \beta/N_{\beta}$, so that
$e^{-\beta\hat{H}} = \left(e^{-\tau \hat{H}}\right)^{N_{\beta}}$, where
we assume that $\hat{H}$ does not change with temperature.
Suppose $\hat{H}$ can be decomposed as $\hat{H} = \hat{H}_1 + \hat{H}_2
+ \cdots + \hat{H}_n$; then, according to the Trotter-Suzuki approximation,
\begin{equation}
e^{-\tau \hat{H}} = e^{-\tau \hat{H}_1/2}e^{-\tau \hat{H}_2/2} \cdots
e^{-\tau \hat{H}_2/2} e^{-\tau \hat{H}_1/2} + \mathcal{O}(\tau^3).
\end{equation}
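The order of this splitting is easy to check numerically; the following sketch, with random Hermitian $\hat{H}_1$ and $\hat{H}_2$ as hypothetical stand-ins, shows the per-step error shrinking by roughly a factor of $8$ each time $\tau$ is halved:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
H1 = rng.standard_normal((4, 4)); H1 = 0.5 * (H1 + H1.T)
H2 = rng.standard_normal((4, 4)); H2 = 0.5 * (H2 + H2.T)

for tau in [0.1, 0.05, 0.025]:
    exact = expm(-tau * (H1 + H2))
    # Symmetric splitting: error is O(tau^3) per step.
    split = expm(-0.5 * tau * H1) @ expm(-tau * H2) @ expm(-0.5 * tau * H1)
    print(tau, np.linalg.norm(exact - split))
\end{verbatim}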
Another more accurate approach is the 4th order Runge-Kutta (RK4) algorithm, which
is based on solving the differentiation form of the imaginary time evolution
\begin{equation}
\frac{\mathrm{d}|\Psi\rangle}{\mathrm{d}\beta} = -\hat{H} |\Psi\rangle.
\end{equation}
Let $t_m = m\tau$; one update step of the RK4 algorithm is then
\begin{equation}
|\Psi(t_{m+1})\rangle = |\Psi(t_{m})\rangle + \frac{1}{6}\tau
(k_1 + 2k_2 + 2k_3 + k_4),
\end{equation}
with initial condition $t_0 = 0$ and $|\Psi(t_0)\rangle =
|\Psi(\beta=0)\rangle$. $k_i (i=1,2,3,4)$ are defined from the $m$th step
values
\begin{equation}
\begin{split}
k_1 &= -\hat{H} |\Psi(t_{m})\rangle, \\
k_2 &= -\hat{H} \left(|\Psi(t_{m})\rangle + \frac{\tau}{2}k_1 \right),\\
k_3 &= -\hat{H} \left(|\Psi(t_{m})\rangle + \frac{\tau}{2}k_2 \right),\\
k_4 &= -\hat{H} \left(|\Psi(t_{m})\rangle + \tau k_3 \right).\\
\end{split}
\end{equation}
The error of one RK4 iteration scales as $\mathcal{O}(\tau^5)$, and
the accumulated error is $\mathcal{O}(\tau^4)$.
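A single update step can be sketched as follows (a minimal sketch; in practice one renormalizes $|\Psi\rangle$ between steps, since imaginary time evolution does not preserve the norm):
\begin{verbatim}
import numpy as np

def rk4_imag_time_step(H, psi, tau):
    # One RK4 step of d|Psi>/dbeta = -H|Psi>, following the k_i
    # defined above. Renormalize |Psi> between steps in practice.
    k1 = -H @ psi
    k2 = -H @ (psi + 0.5 * tau * k1)
    k3 = -H @ (psi + 0.5 * tau * k2)
    k4 = -H @ (psi + tau * k3)
    return psi + (tau / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
\end{verbatim}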
An example which adopted the purification approach is the finite temperature
density matrix renormalization group (FT-DMRG)~\citep{Feiguin2005} algorithm. The matrix product
state (MPS) is defined with alternating physical and ancillary sites, as shown
in Fig.~\ref{fig:ftdmrg_mps}. The operators are arranged in the same
alternating manner. The imaginary time evolution routine then follows
the same procedure as previously developed time-evolving block decimation
(TEBD)~\citep{Verstraete2004,Vidal2004}.
\begin{figure}
\centering
\justify
\includegraphics[width=1\textwidth]{figures/intro/ancilla-eps-converted-to.pdf}
\caption{Structure of the matrix product states used in the purification approach
of the finite temperature density matrix renormalization group algorithm.
}\label{fig:ftdmrg_mps}
\index{figures}
\end{figure}
In addition to the examples mentioned above, there exist several other finite
temperature algorithms. Minimally entangled typical thermal states (METTS)
algorithm~\citep{White2009,Stoudenmire2010}, which will be discussed in Chapter~\ref{chp:qite}, is
another realization of finite temperature DMRG based on importance sampling.
Compared to the purification approach, METTS requires a smaller bond
dimension, and its statistical error decreases as the temperature is lowered.
However, METTS has only been applied to spin systems, because the original
formulation does not allow the electron number to vary and is thus
limited to the canonical ensemble. One could potentially adapt METTS to
the grand canonical ensemble by sampling the electron numbers or by introducing
a set of initial states that do not preserve the electron number.
Determinantal quantum Monte Carlo (DQMC)~\citep{Blankenbecler1981}
and finite temperature auxiliary field quantum Monte Carlo (FT-AFQMC)~\citep{Liu2018,He2019}
are two other finite temperature algorithms based on importance sampling
of Slater determinants. Both QMC methods utilize the
Hubbard-Stratonovich transformation to map the many-body
imaginary time evolution operator onto single-particle operators
describing free fermions coupled to auxiliary fields. AFQMC applies
a constrained path to alleviate the sign problem, yet the computational
cost of reaching low enough temperatures for large system sizes remains
considerable. Dynamical mean-field theory (DMFT)~\citep{Georges1996,Kotliar2006} is an embedding
method which maps a many-body lattice problem onto a many-body local problem.
Since DMFT works with frequency-dependent quantities, it can be naturally extended to
finite temperature calculations given a finite temperature impurity solver.
As with most embedding methods, DMFT results are affected by
finite size effects, and extrapolation to the thermodynamic limit (TDL) is
needed to remove the artifacts of the finite impurity size. All of the above numerical
algorithms have their pros and cons; one should choose among them
based on the properties of the system and validate the results by
careful benchmarking.
\section{Summary of research}\label{sec:sumsec}
\begin{figure}[t!]
\centering
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=\textwidth]{figures/intro/hring-eps-converted-to.pdf}
\caption{$E_{\text{tot}} = 5E_0$}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=\textwidth]{figures/intro/mole.png}
\caption{$E_{\text{tot}} = \sum_{i=0}^4 E_i$}
\end{subfigure}
\caption{Evaluating the total energy with density matrix embedding theory.
(a) A hydrogen ring composed of $10$ atoms obeying the periodic boundary condition,
and the impurity (supercell) is two adjacent atoms. The total energy equals
the energy of the supercell times the number of supercells.
(b) A single ligand heme molecule is divided into $5$ non-overlapping
fragments, and the DMET energy of each fragment is calculated. The total energy
is the sum of energies from all fragments.
}\label{fig:partition_intro}
\end{figure}
This thesis provides several tools to study the finite temperature behaviors
of strongly correlated materials. First we will introduce the finite temperature density matrix embedding theory (FT-DMET) in Chapter~\ref{chp:dmet}.
FT-DMET, as a thermal extension of ground state DMET (GS-DMET)~\citep{Knizia2012,KniziaJCTC2013,ZhengPRB2016,WoutersJCTC2016},
maps the many-body lattice thermal problem onto an impurity thermal problem.
As in GS-DMET, the system is divided into non-overlapping
fragments, which are defined by a set of local orbitals (LOs).
For periodic systems, the fragments are chosen as supercells and thus all
fragments are equivalent. For systems that do not obey periodicity,
extensive observables are evaluated for each fragment and the total
value of the observable is the sum of the contributions from all fragments.
An illustration of the above two cases is shown in Fig.~\ref{fig:partition_intro}.
Note
that in the latter case, one should be careful when evaluating intensive
properties, which should either be defined for a specific fragment or
evaluated from global extensive properties.
This real-space partition ensures that most of the entanglement is retained
in the fragment for systems with short correlation lengths.
When treating one fragment, we call this fragment the \textit{impurity}
and the rest of the fragments the \textit{environment}.
To further capture
the entanglement between the impurity and the environment, we introduce
a term called \textit{bath}. The bath in DMET is a subspace of the environment that
is directly entangled with the impurity, spanned by a set of basis functions called
bath orbitals. In strongly correlated systems, the correlation is highly
localized, and the entanglement entropy obeys the area law. One could imagine
that the bath orbitals mostly come from sites adjacent to the impurity.
In practice, the bath orbitals are derived from the Schmidt decomposition
of the total system wavefunction, which is initialized as the mean-field
wavefunction and optimized in a bootstrap manner. A nice property of GS-DMET
is that the number of bath orbitals generated from the Schmidt decomposition
is exactly equal to the number of impurity orbitals, under the assumption
that the impurity is much smaller than the environment.
The key issue in going from GS-DMET to FT-DMET is that the Schmidt decomposition
no longer works, since the system cannot be described by a single
wavefunction; the finite temperature state is instead described by the
density matrix of a mixed state. Recall that the Schmidt decomposition
of a wavefunction is equivalent to the singular value decomposition (SVD) of
the corresponding density matrix. In the FT-DMET algorithm, we start from
the mean-field single-particle density matrix $\rho_{0}$, and apply the SVD to the
described in the theory part in Chapter~\ref{chp:dmet}. Note that
since the temperature enlarges the entanglement length, one should expect
more bath orbitals to cover all impurity-environment entanglement than in
GS-DMET. To do so, we continue to apply SVD to the impurity-environment block
of powers of $\rho_0$ to get the rest of the bath orbitals. The algorithm is
benchmarked with one- and two-dimensional Hubbard models, and shows
systematically improved accuracy by increasing bath or impurity size.
In Chapter~\ref{chp:hlatt}, we further extend the FT-DMET algorithm to handle
\textit{ab initio} problems. While model systems can be used to reproduce
some of the behaviors and phases in realistic lattices, being able to perform
\textit{ab initio} simulations is key to achieve a complete understanding
of the materials. There are two technical differences between model systems
and \textit{ab initio} systems: (ii) in most of the model systems,
site basis is used which is naturally localized, while in \textit{ab initio}
systems the Coulomb interaction is of long range and the basis set used
is usually not localized; (ii) in model systems, the two-body interaction
form is very simple, while the two-body interaction in an \textit{ab initio} Hamiltonian
is described by a complicated rank-$4$ matrix. The above two technical difficulties
are universal for all \textit{ab initio} simulations. For \textit{ab initio}
FT-DMET, one also needs to deal with the large embedding space due to the size
of the supercell and the basis set, which requires necessary truncation
to the bath space. Moreover, a finite temperature impurity solver that
can handle \textit{ab initio} Hamiltonian efficiently is also crucial
for any meaningful simulations. In Chapter~\ref{chp:hlatt}, we provide
solutions to the above problems and present the \textit{ab initio} FT-DMET
algorithm. We further use this algorithm to explore properties and
phase transitions of hydrogen lattices.
Chapter~\ref{chp:dmet} and Chapter~\ref{chp:hlatt} present an efficient
numerical tool to simulate both strongly-correlated model systems and
\textit{ab initio} systems. The next question to answer is what
order parameters we can use to capture essential thermal properties
and phase transitions at finite temperature. In Chapter~\ref{chp:cp},
we will study one of the most common but complex phase transitions:
metal-insulator transition (MIT). We argue that, compared to band
structure theory, which is widely used to distinguish metals from insulators,
electron locality is a more universal criterion for detecting the
finite temperature MIT. We further introduce an order parameter named
complex polarization to measure the locality of electrons and provide
a thermofield approach to evaluate finite temperature complex polarization.
The finite temperature complex polarization formulation provides an easy
but well-defined way to characterize MIT in any periodic materials.
In Chapter~\ref{chp:qite}, several quantum algorithms will be introduced
for both ground state and finite temperature simulations on quantum devices.
With the development of quantum computing technology, especially the hardware,
it is foreseeable that certain categories of problems that are difficult for classical
simulation will be solvable with less effort on a quantum device.
The bridge to connect chemical problems and successful quantum simulations
is efficient quantum algorithms for noisy intermediate-scale quantum (NISQ) devices.
Several quantum algorithms have been developed
to carry out quantum chemical simulations in the past decades, including
quantum phase estimation (QPE)~\citep{Farhi_MIT_2000,Kitaev_arxiv_1995} and hybrid quantum-classical variational
algorithms such as quantum approximate optimization algorithm (QAOA)~\citep{Farhi_MIT_2014,Otterbach_arxiv_2017,Moll_QST_2018}
and variational quantum eigensolver (VQE)~\citep{Peruzzo_Nature_2013,McClean_NJP_2016,grimsley2018adapt}. While the above algorithms
have many advantages as advertised, they all require quantum or classical
resources that can easily exceed the capacity of current devices.
In Chapter~\ref{chp:qite}, the key quantum algorithm that will be introduced
is called quantum imaginary time evolution (QITE). As mentioned in
Section~\ref{sec:ftalgos}, imaginary time evolution
is an efficient algorithm to find the ground state. If the initial state is
the identity density matrix at infinite temperature, then one could evaluate
the density matrix and thus the thermal observables at any temperature.
The difficulty in implementing imaginary time evolution on a quantum device is that
the imaginary time evolution operator $e^{-\beta \hat{H}}$ is a non-unitary operator,
while only unitary operations are allowed on a quantum device. We present an
approach to reproduce a non-unitary operator with a rescaled unitary
operation on an enlarged domain. This approach could be flexibly performed
both exactly and approximately, depending on the computational resources
available. The result is systematically improved and converges rapidly
by increasing the size of the unitary domain. The convergence to the ground
state can be further accelerated by the quantum Lanczos algorithm (QLanczos).
QLanczos constructs a Krylov subspace with the intermediate states in
QITE simulation, and then diagonalizes the Hamiltonian in the subspace
representation to get a better approximation of the ground state. Unlike the
classical Lanczos algorithm mentioned in Section~\ref{sec:ftalgos} where
the Krylov subspace is spanned by $\{|\psi_0\rangle, H|\psi_0\rangle, ...,
$H^m|\psi_0\rangle\}$, the Krylov space in QLanczos is spanned by
$\{|\psi_0\rangle, e^{-2\tau \hat{H}}|\psi_0\rangle, ..., e^{-2m\tau \hat{H}}|\psi_0\rangle\}$. The Hamiltonian matrix elements in the quantum Krylov space can be
collected for free from the energy measurements at each step, so no additional measurement is needed.
The third algorithm introduced in Chapter~\ref{chp:qite} is the quantum
minimally entangled typical thermal states (QMETTS) algorithm. While the
first two algorithms (QITE and QLanczos) can be applied to both
ground state and finite temperature calculations, QMETTS is designed
in particular for finite temperature simulations. QMETTS samples a set
of minimally entangled typical thermal states according to the thermal statistics,
by a routine of repeated imaginary time evolution followed by collapse onto
product states. The advantage
of the QMETTS algorithm is that the imaginary time evolution (carried out by
QITE) always starts from a product state, so that the entanglement does not
grow too large even at very low temperature. We present both
classical and quantum simulations on a variety of problems using the above
three quantum algorithms as examples and tests.
\chapter{Finite temperature density matrix embedding theory\label{chp:dmet}}
\section{Abstract}
We describe a formulation of the density matrix embedding theory
at finite temperature. We present a generalization of the
ground-state bath orbital construction that embeds a mean-field finite-temperature density matrix up to a given order in the Hamiltonian, or the Hamiltonian
up to a given order in the density matrix. We assess the performance
of the finite-temperature density matrix embedding on the 1D Hubbard model both at
half-filling and away from it, and the 2D Hubbard model at half-filling,
comparing to exact data where available, as well as results from finite-temperature
density matrix renormalization group,
dynamical mean-field theory,
and dynamical cluster approximations.
The accuracy of finite-temperature
density matrix embedding appears comparable to that of the ground-state theory, with at most a modest increase in bath size,
and competitive with that of cluster dynamical mean-field theory.
\section{\label{sec:intro_dmet}Introduction}
The numerical simulation of strongly correlated electrons is key
to understanding the quantum phases that derive from
electron interactions, ranging from the Mott transition~\citep{MottRMP1968,BullaPRL1999,BelitzRMP1994,QazilbashScience2007} to high
temperature superconductivity~\citep{AndersonScience1987,LakeScience2001,LakeNature2002}. Consequently, many numerical methods have been developed for this task. In the setting of quantum lattice models, quantum
embedding methods~\citep{SunACR2016}, such as dynamical mean-field theory (DMFT)\cite{KotliarRMP2006,GeorgesRMP1996,LichtensteinPRL2001,LichtensteinPRB2000,ZgidJCP2011} and density matrix embedding theory (DMET)\cite{Knizia2012,KniziaJCTC2013,WoutersJCTC2016,ZhengPRB2016,ZhengScience2017,BulikPRB2014,BulikJCP2014},
have proven useful in obtaining insights into complicated quantum phase diagrams.
These methods are based on an approximate mapping from the full interacting quantum lattice to a simpler
self-consistent quantum impurity problem, consisting of a few sites of the original lattice
coupled to an explicit or implicit bath. In this way, they avoid
treating an interacting quantum many-body problem in the thermodynamic limit.
The current work is concerned with the extension of DMET to finite temperatures.
DMET so far has mainly been applied in its ground-state formulation (GS-DMET), where it has achieved some success, particularly in applications to quantum phases where the order is associated with large unit cells~\citep{ZhengPRB2016,ZhengScience2017,ChenPRB2014}.
The ability to treat large unit cells at relatively low cost compared to other quantum embedding methods is due to the
computational formulation of DMET, which is based on modeling
the ground-state impurity density matrix, a time-independent quantity
accessible to a wide variety of efficient quantum many-body methods.
Our formulation of finite-temperature DMET (FT-DMET) is based on the simple structure of GS-DMET,
but includes the possibility to generalize the bath so as to better capture the finite-temperature impurity density matrix.
Bath generalizations have previously been used to extend GS-DMET to the calculation
of spectral functions and other dynamical quantities~\citep{BoothPRB2015,fertitta2019energy}. Analogously to GS-DMET, since one only needs to
compute time-independent observables,
finite-temperature DMET can be paired with the wide variety of quantum impurity solvers which can provide the finite-temperature
density matrix.
We describe the theory of FT-DMET in Section \ref{sec:theory_dmet}. In Section \ref{sec_results_dmet} we
carry out numerical calculations on the 1D and 2D Hubbard models, using exact diagonalization (ED) and the finite-temperature density matrix
renormalization group (FT-DMRG)~\citep{FeiguinPRB2005} as quantum impurity solvers. We benchmark our results against those from the Bethe ansatz in 1D, and
DMFT and the dynamical cluster approximation (DCA) in 2D, and also explore the quantum impurity derived N\'eel transition in the 2D Hubbard model.
We finish with brief conclusions about prospects for the method in \ref{sec:conc_dmet}.
\section{\label{sec:theory_dmet}Theory}
\subsection{Ground state DMET}\label{sec:gsdmet}
In this work, we exclusively discuss DMET in lattice models (rather than for
\emph{ab initio} simulations~\citep{WoutersJCTC2016,KniziaJCTC2013,BulikJCP2014,cui2019efficient}).
As an example of a lattice Hamiltonian, and one that we will use in numerical simulations,
we define the Hubbard model~\citep{HubbardPRS1963,GutzwillerPRL1963},
\begin{equation}\label{eq:theory-hubham}
\hat{H} = -t\sum_{\langle i,j\rangle,\sigma} \hat{a}^{\dagger}_{i\sigma}
\hat{a}_{j\sigma} - \mu \sum_{i,\sigma} \hat{a}^{\dagger}_{i\sigma}
\hat{a}_{i\sigma} + U\sum_{i}\hat{n}_{i\uparrow}\hat{n}_{i\downarrow}
\end{equation}
where $\hat{a}^{\dagger}_{i\sigma}$ creates an electron with spin $\sigma$
on site $i$ and $\hat{a}_{i\sigma}$ annihilates it;
$\hat{n}_{i\sigma} = \hat{a}^{\dagger}_{i\sigma}\hat{a}_{i\sigma}$;
$t$ is the nearest-neighbour (denoted $\langle i,j\rangle$) hopping amplitude, here set to $1$;
$\mu$ is a chemical potential; and $U$ is the on-site repulsion.
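As a point of reference for the embedding construction below, the quadratic part of Eq.~\eqref{eq:theory-hubham} for a single spin channel on a 1D periodic chain can be sketched in a few lines of Python (a hypothetical helper, used only for illustration):
\begin{verbatim}
import numpy as np

def hubbard_h1(L, t=1.0, mu=0.0, pbc=True):
    # Quadratic (hopping plus chemical potential) part of the Hubbard
    # Hamiltonian for one spin channel on a 1D chain of L sites.
    h = np.zeros((L, L))
    for i in range(L - 1):
        h[i, i + 1] = h[i + 1, i] = -t
    if pbc:
        h[0, L - 1] = h[L - 1, 0] = -t  # periodic boundary term
    return h - mu * np.eye(L)
\end{verbatim}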
The general idea behind a quantum embedding method such as DMET
is to approximately solve the interacting problem in the large lattice by dividing the lattice into small
fragments or impurities~\citep{SunACR2016}. (Here we will assume that the impurities are non-overlapping).
The main question is how to treat the coupling and entanglement between the impurities.
In DMET, other fragments around a given impurity are modeled by a set of bath orbitals.
The bath orbitals are constructed to exactly reproduce the entanglement between
the impurity and environment when the full lattice is treated at a mean-field level (the so-called ``low-level'' theory).
The impurity together with its bath orbitals then constitutes a small embedded quantum impurity problem,
which can be solved with a ``high-level'' many-body method. The low-level lattice wavefunction
and the high-level embedded impurity wavefunction are made approximately
consistent, by enforcing self-consistency of the single-particle density matrices
of the impurities and of the lattice. This constraint is implemented by introducing a static correlation potential on
the impurity sites into the low-level theory.
{The correlation potential introduced in DMET is analogous to the
DMFT self-energy. A detailed discussion of the correlation potential
including the comparison to other approaches such as density functional
theory (DFT) can be found in ~\citep{SunACR2016,KniziaJCTC2013}.}
To set the stage for the finite-temperature theory, in the following we briefly
recapitulate some details of the above steps in the GS-DMET formulation. In particular, we discuss
how to extract the bath orbitals, how to construct the embedding Hamiltonian, and how to carry out the
self-consistency between the low-level and high-level methods.
Additional details for the GS-DMET algorithm can be found in several articles~\citep{Knizia2012,ZhengPRB2016,WoutersJCTC2016}, including the review in Ref.~\citep{WoutersJCTC2016}.
\subsubsection{DMET bath construction}
Given a full lattice of $L$ sites, we define the impurity $x$ over $L_x$ sites, the Hilbert space of which
is denoted as $\mathcal{A}^x$ and spanned by a set of orthonormal basis
$\{|A^x_i\rangle\}$. The rest of the lattice is treated as the environment of
impurity $x$, the Hilbert space of which is denoted as $\mathcal{E}^x$
spanned by an orthonormal basis $\{|E^x_i\rangle\}$. The Hilbert space of
the entire lattice $\mathcal{H}$ is the direct product of the two
subsystem Hilbert spaces: $\mathcal{H} = \mathcal{A}^x\otimes \mathcal{E}^x$.
Any state $|\Psi\rangle$ in $\mathcal{H}$ can be written as
\begin{equation}\label{eq:bipartite_dmet}
|\Psi\rangle = \sum_{ij} \psi_{ij} |A^x_i\rangle |E^x_j\rangle,
\end{equation}
where the coefficients $\psi_{ij}$ form a $\dim(\mathcal{A}^x)\times \dim(\mathcal{E}^x)$ matrix.
Absorbing $\psi_{ij}$ into the environment orbitals, one could rewrite
Eq.~\eqref{eq:bipartite_dmet} as
\begin{equation}\label{eq:schmidt_dmet}
\begin{split}
|\Psi\rangle &= \sum_i |A^x_i\rangle \left(\sum_j \psi_{ij} |E^x_j\rangle\right) \\
& = \sum_i |A^x_i\rangle |B^x_i\rangle
\end{split},
\end{equation}
where $|B^x_i\rangle = \sum_j \psi_{ij} |E^x_j\rangle$.
Eq.~\eqref{eq:schmidt_dmet} tells us that the orbitals in $\mathcal{E}^x$
that are entangled to the impurity $x$ are of the same size as the impurity
orbitals. Note that $\{|B^x_i\rangle\}$ are not orthonormal, and the
rest of the environment enters as a separable product state
$|\Psi_{\text{core}}\rangle$
called the ``core contribution''. Let $\{|\tilde{B}^x_i\rangle\}$ denote the
orthonormal states derived from $\{|B^x_i\rangle\}$, then
Eq.~\eqref{eq:schmidt_dmet} can be rewritten as
\begin{equation}\label{eq:emb_core_dmet}
|\Psi\rangle = \left(\sum_i \lambda_i |A^x_i\rangle |\tilde{B}^x_i\rangle\right)
|\Psi_{\text{core}}\rangle.
\end{equation}
The orbitals $\{|\tilde{B}^x_i\rangle\}$ are directly entangled with the
impurity $x$, and thus are called \textit{bath orbitals}. The space spanned
by impurity and bath is called \textit{embedding space}. One can then
derive the embedding state as
\begin{equation}\label{eq:emb_wf_dmet}
|\Psi_{\text{emb}}\rangle = \sum_i \lambda_i |A^x_i\rangle |\tilde{B}^x_i\rangle.
\end{equation}
If $|\Psi\rangle$ is an eigenstate of the Hamiltonian $\hat{H}$ in the full lattice,
then one can prove that $|\Psi_{\text{emb}}\rangle$ is also an eigenstate
of the embedding Hamiltonian $\hat{H}_{\text{emb}}$ defined as the projection
of $\hat{H}$ onto the embedding space. The two eigenvalues are identical.
Therefore, the full lattice problem can be reduced to a smaller embedding
problem.
In practice, the exact bath orbitals are unknown since the many-body
eigenstate $|\Psi\rangle$
is the final target of the calculation. Instead, we construct a set of
approximated bath orbitals from a mean-field ("low-level") wavefunction
$|\Phi\rangle$, which is an eigenstate of a quadratic lattice Hamiltonian $\hat{h}$.
We rewrite $|\Phi\rangle$ according to Eq.~\eqref{eq:emb_core_dmet} and
Eq.~\eqref{eq:emb_wf_dmet} in the form
\begin{align}
\label{eq:theory-mfwf}
|\Phi\rangle = |\Phi_\text{emb}\rangle |\Phi_\text{core}\rangle.
\end{align}
The single-particle density matrix $D^\Phi$ obtained from $|\Phi\rangle$
contains all information on the correlations in $|\Phi\rangle$.
Thus the bath orbitals can be defined from this density matrix.
We consider the impurity-environment block $D^{\Phi}_{\text{imp-env}}$ ($D_{ij}$ for $i \in x, j \notin x$) of dimension $L_x \times (L-L_x)$.
Then taking the thin SVD
\begin{equation}
D^{\Phi}_{\text{imp-env}} = U\lambda B^{\dagger},
\end{equation}
the columns of $B$ specify the bath orbitals in the lattice basis.
The bath space is thus a function of the density matrix, denoted $B(D)$.
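For concreteness, this bath construction amounts to a few lines of dense linear algebra. Below is a minimal sketch in Python/NumPy (the function and variable names are ours, purely illustrative): \texttt{D} is the lattice mean-field density matrix and \texttt{imp\_idx} lists the impurity sites.

\begin{verbatim}
import numpy as np

def gs_bath(D, imp_idx, tol=1e-9):
    # Impurity-environment block of the mean-field 1-RDM (L x L).
    env_idx = [i for i in range(D.shape[0]) if i not in imp_idx]
    D_ie = D[np.ix_(imp_idx, env_idx)]
    # Thin SVD: the right singular vectors span the bath space.
    U, s, Bh = np.linalg.svd(D_ie, full_matrices=False)
    # Keep only directions carrying impurity-environment entanglement.
    return Bh.conj().T[:, s > tol]
\end{verbatim}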
\subsubsection{Embedding Hamiltonian}
After obtaining the bath orbitals, we construct the embedded Hamiltonian of the quantum impurity problem.
In GS-DMET, there are two ways to do so: the
interacting bath formulation and the non-interacting bath formulation. The conceptually simplest approach
is the interacting bath formulation. In this case, we project
the interacting lattice Hamiltonian $\hat{H}$ into the space of
the impurity plus bath orbitals, defined by the projector $\hat{P}$, i.e. the embedded Hamiltonian
is $\hat{H}_\text{emb} = \hat{P}\hat{H}\hat{P}$.
$\hat{H}_\text{emb}$ in general contains non-local interactions involving the bath orbitals, as they are non-local
orbitals in the environment. From the embedded Hamiltonian, we compute the high-level
ground-state impurity wavefunction,
\begin{align}
\hat{H}_\text{emb} |\Psi\rangle = E |\Psi\rangle.
\end{align}
If $\hat{H}$ were itself the quadratic lattice Hamiltonian $\hat{h}$, then
$\Psi = \Phi$ and
\begin{equation}\label{eq:theory-samegs}
\hat{P}\hat{h}\hat{P} |\Phi\rangle = E |\Phi\rangle.
\end{equation}
Another way to write Eq.~(\ref{eq:theory-samegs}) for a mean-field state is
\begin{align}\label{eq:theory-samegs2}
[{P}{h}{P}, {P}{D}^\Phi{P}] = 0,
\end{align}
where $h$ denotes the single-particle Hamiltonian matrix and $P$ is the single-particle
projector into the impurity and bath orbitals.
These conditions imply that the lattice Hamiltonian and the embedded Hamiltonian $\hat{H}_\text{emb}$ share
the same ground-state at the mean-field level, which is the basic approximation in GS-DMET.
In the alternative non-interacting bath formulation, interactions on the bath are
approximated by a quadratic correlation potential (discussed below). This formulation retains the same exact embedding property as the interacting
bath formulation for a quadratic Hamiltonian.
In practice, both formulations give similar results in the Hubbard model~\citep{BulikPRB2014,WuJCP2019}, and the choice between the two
depends on the available impurity solvers; the interacting bath formulation generates non-local two-particle interactions in the bath
that not all numerical implementations can handle.
In this work, we use the interacting bath formulation in the 1D Hubbard model where an ED solver is used.
In the 2D Hubbard model, we use the non-interacting bath formulation, where both ED and FT-DMRG solvers are used.
This latter choice is because the cost of treating non-local interactions in FT-DMRG is relatively high (and we
make the same choice with ED solvers to keep the results strictly comparable).
\subsubsection{Self-consistency}
To maintain self-consistency between the ground-state of the lattice mean-field $|\Phi\rangle$,
and that of the interacting embedded Hamiltonian $|\Psi\rangle$, we introduce
a quadratic correlation potential $\hat{u}$ into $h$, i.e.
\begin{align}
\hat{h} \to \hat{h} + \hat{u},
\end{align}
where $\hat{u}$ is constrained to act on sites in the impurities, i.e. $\hat{u} = \sum_x \hat{u}^x$. To study magnetic order, we choose
the form
\begin{align}
\hat{u}^x = \sum_{ij \in x, \sigma \in \{ \uparrow,\downarrow\}} u^x_{i j\sigma} a^\dag_{i\sigma} a_{j\sigma}.
\end{align}
The coefficients $u^x_{ij\sigma}$ are adjusted to match the density
matrices on the impurity that are evaluated from the low-level wavefunction $|\Phi\rangle$
and from the high-level embedded wavefunction $|\Psi\rangle$. In this work, we
only match the single-particle density matrix elements of the impurity (impurity-only matching~\citep{WoutersJCTC2016}) by minimizing the cost function:
\begin{equation}\label{eq:cost_func_dmet}
f(u) = \sum_{i,j\in \text{imp}}(D_{ij}^{\text{low}} - D_{ij}^{\text{high}})^2,
\end{equation}
where $D^{\text{low}}$ and $D^{\text{high}}$ are single-particle density
matrices of low-level and high-level solutions, respectively. For each
minimization iteration, we assume that the high-level single-particle density
matrix is fixed, and the gradient of Eq.~\eqref{eq:cost_func_dmet} is
\begin{equation}\label{eq:gradient_cost_dmet}
\frac{\mathrm{d}f}{\mathrm{d}u} = \sum_{i,j\in \text{imp}}2(D_{ij}^{\text{low}} - D_{ij}^{\text{high}})
\frac{\mathrm{d}D_{ij}^{\text{low}} }{\mathrm{d}u}.
\end{equation}
For the ground state, Ref.~\citep{WoutersJCTC2016} provides an analytical
approach to evaluate $\frac{\mathrm{d}D_{ij}^{\text{low}} }{\mathrm{d}u}$ using
first-order perturbation theory. At finite temperature, the gradient can also be
evaluated analytically, as shown in the Appendix.
Note also that we will only be considering translationally invariant systems, and thus $\hat{u}^x$ is the same
for all impurities.
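The matching step can be prototyped directly from the equations above. The following is a minimal Python sketch (illustrative names; a black-box numerical optimizer stands in for the analytical gradient just described), assuming a spinless, translationally invariant lattice whose first $n_{\text{imp}}$ sites form the impurity:

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def low_level_dm(h, nelec):
    # Ground-state 1-RDM of a quadratic Hamiltonian: occupy the
    # nelec lowest eigenvectors (spinless notation for brevity).
    e, C = np.linalg.eigh(h)
    Cocc = C[:, :nelec]
    return Cocc @ Cocc.conj().T

def cost(u_flat, h, D_high_imp, n_imp, nelec):
    # Tile the (Hermitian) impurity correlation potential over all
    # translationally equivalent impurities, then evaluate the
    # impurity-only matching cost function.
    u_blk = u_flat.reshape(n_imp, n_imp)
    u_blk = 0.5 * (u_blk + u_blk.T)
    u = np.kron(np.eye(h.shape[0] // n_imp), u_blk)
    D_low = low_level_dm(h + u, nelec)
    diff = D_low[:n_imp, :n_imp] - D_high_imp
    return np.sum(diff**2)

# res = minimize(cost, x0=np.zeros(n_imp * n_imp),
#                args=(h, D_high_imp, n_imp, nelec))
\end{verbatim}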
\subsection{Ground-state expectation values}
\label{sec:gsexpect}
Ground-state expectation values are evaluated from the density matrices of each high-level impurity wavefunction
$|\Psi^x\rangle$. Since there are multiple impurities (in a translationally invariant system, these
are constrained to be identical), an expectation value is typically assembled from
the multiple impurity wavefunctions using a democratic
partitioning~\citep{WoutersJCTC2016}. For example, given two sites $i$, $j$,
where $i$ is part of impurity $x$ and $j$ is part of impurity $y$,
the expectation value of $a^\dag_i a_j$ is
\begin{align}
\langle a^\dag_i a_j \rangle = \frac{1}{2}[\langle \Psi^x | a^\dag_i a_j | \Psi^x\rangle +\langle \Psi^y | a^\dag_i a_j | \Psi^y\rangle].
\end{align}
Note that the \textit{pure bath} components of the high-level wavefunctions, e.g. $\langle \Psi^x | a^\dag_i a_j |\Psi^x\rangle$ for $i, j \notin x$,
are not used in defining the DMET expectation values.
Instead, the democratic partitioning is arranged such that an individual impurity embedding contributes the correct amount to a global expectation value so long as the impurity wavefunction produces
correct expectation values for operators that act on the impurity alone, or on the impurity and bath together.
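A minimal sketch of the democratic partitioning (Python, illustrative names) reads as follows; \texttt{rdm1\_emb[x]} is the high-level one-particle density matrix of impurity $x$ in its embedding basis, and \texttt{C\_emb[x]} holds the embedding orbitals of $x$ expressed on the lattice sites:

\begin{verbatim}
import numpy as np

def democratic_expval(i, j, imp_of, rdm1_emb, C_emb):
    # <a_i^dag a_j> averaged over the embeddings containing i and j.
    # rdm1_emb[x][p, q] = <Psi^x| a_p^dag a_q |Psi^x>;
    # C_emb[x][l, p] = coefficient of lattice site l in embedding
    # orbital p of impurity x.
    impurities = {imp_of[i], imp_of[j]}
    vals = []
    for x in impurities:
        C = C_emb[x]
        # rotate the embedding 1-RDM back to lattice sites i, j
        vals.append(C[i].conj() @ rdm1_emb[x] @ C[j])
    return sum(vals) / len(vals)
\end{verbatim}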
\subsection{Finite temperature DMET}\label{sec:ftdmet}
Our formulation of FT-DMET follows the same rubric as the ground-state theory: a low-level
(mean-field-like) finite-temperature
density matrix is defined for the lattice; this is used to
obtain a set of bath orbitals to define the impurity problem;
a high-level finite-temperature
density matrix is calculated for the embedded impurity; and
self-consistency is carried out between the two via a correlation potential.
The primary difference lies in the properties of the bath, which we focus on below,
as well as in the appearance of quantities such as the entropy, which are
formally defined from many-particle expectation values.
\subsubsection{Finite temperature bath construction}
In GS-DMET bath construction, the bath orbitals are directly defined
from Schmidt decomposition of the full lattice ground state wavefunction as
in Eq.~\eqref{eq:emb_core_dmet}. However, at finite temperature, the state
of an open quantum system (grand canonical ensemble) is described by a
mixed state: the density matrix is described by a linear combination of
pure state density matrices. As a consequence, the Schmidt decomposition
can no longer be used to define bath orbitals. In fact, with non-zero temperature,
the entanglement becomes more delocalized. To capture the entanglement between
the impurity and environment, a larger bath space is needed compared to
that of the ground state. In Fig.~\ref{fig:bath_occ}, we plot the weight of
entanglement with the impurity as a function of distance (in sites) from
the impurity for a $100$-site tight-binding model ($\hat{H} = \sum_{i}
\hat{a}_i^\dag\hat{a}_{i+1} + \text{h.c.}$). As the
temperature rises, sites farther and farther away become entangled with the
impurity, until eventually all sites are uniformly and maximally entangled.
In the ground state
($T = 0$), the weight decreases with distance in an oscillating manner, with
a wavelength of $2$ sites; at $T = 0.15$, the wavelength increases to $6$
sites due to the smearing effect of finite temperature. The increase of the
oscillation wavelength is another example
of the increase of the correlation length with temperature.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{figures/dmet/bath_occ-eps-converted-to.pdf}
\caption{Weight of the entanglement with the impurity on the environment sites.
The weights are evaluated as the squared norms of the projections of the
environment sites onto the bath space. Due to the periodicity of the system,
only half of the environment sites are shown in the figure.
} \label{fig:bath_occ}
\end{figure}
The difficulty of finite temperature bath orbital construction can also be
demonstrated by the commutation relation between the projected
single-particle density matrix and projected Hamiltonian.
In GS-DMET, the bath orbital construction is designed to be exact
if all interactions are treated at the mean-field level, giving rise
to the commuting condition for the projected single-particle density matrix
and projected Hamiltonian in Eq.~(\ref{eq:theory-samegs2}).
At finite temperature, the above commuting condition no longer holds, and
one should expect approximate bath orbitals even at the mean-field level.
In general, we can look for a finite-temperature bath construction that preserves a similar property.
As pointed out in Sec.~\ref{sec:gsexpect}, the DMET embedding is still exact
for single-particle expectation values if the embedded projected single-particle
density matrix produces the correct expectation values in the impurity and impurity-bath
sectors, due to the use of the democratic partitioning. We aim to satisfy this slightly relaxed condition.
The finite temperature single-particle density matrix of a quadratic Hamiltonian $\hat{h}$
is given by the Fermi-Dirac function
\begin{equation}\label{eq:theory-fdfull}
{D}(\beta) = \frac{1}{1 + e^{({h}-\mu)\beta}},
\end{equation}
where $\beta = 1/k_B T$ ($k_B$ is the Boltzmann constant, $T$ is the temperature). In the following, we fix $k_B = 1$, thus
$\beta = 1/T$. If we could find an embedding directly analogous to the ground-state construction, we would obtain a projector ${P}$ such that
the embedded density matrix ${P} {D}{P}$ is the Fermi-Dirac function of the embedded quadratic Hamiltonian
${P} h {P}$, i.e.
\begin{equation}\label{eq:theory-fdemb}
{P} {D} {P} = \frac{1}{1 + e^{({P}{h}{P}-\mu)\beta}}.
\end{equation}
However, unlike in the ground-state theory, the non-linearity of the exponential function means that
Eq.~(\ref{eq:theory-fdemb}) can only be satisfied
exactly if ${P}$ projects back into the full lattice basis. Thus a bath orbital construction at
finite temperature is necessarily always approximate, even for quadratic Hamiltonians.
Nonetheless, one can choose the bath orbitals to reduce the error between the l.h.s. and r.h.s. in
Eq.~(\ref{eq:theory-fdemb}). First, we require that the equality is satisfied only
for the impurity-environment block of $D$, following the relaxed requirements
of the democratic partitioning. Second, we require the equality to
be satisfied only up to a finite order $n$ in $h$, i.e.
\begin{align}
\label{eq:fdemb_nth}
[P D P ]_{ij} = \left[\frac{1}{1 + e^{({P}{h}{P}-\mu)\beta}}\right]_{ij} + O(h^n) \quad i \in x,j \notin x.
\end{align}
Then there is a simple algebraic construction of the bath space as (see Appendix for a proof)
\begin{align}
\label{eq:hbath}
\mathrm{span}\{B(h) \oplus B(h^2) \oplus B(h^3) \oplus \ldots \oplus B(h^n) \},
\end{align}
where $B(h^k)$ is the bath space derived from $h^k$, $k = 1, ..., n$. Note that each order of $h$ adds $L_x$ bath orbitals to the total
impurity plus bath space.
We can alternatively choose the bath to preserve the inverse relationship between the density matrix and the Hamiltonian,
\begin{align}
[P h P ]_{ij} = \left[\mathrm{inverseFD}({P} {D} {P})\right]_{ij} + O(D^n) \quad \text{not both}\ i,j \notin x,
\end{align}
where $\mathrm{inverseFD}$ is the inverse Fermi-Dirac function, and the bath space is
then given as
\begin{align}
\label{eq:dbath}
\mathrm{span}\{B(D) \oplus B(D^2) \oplus B(D^3) \oplus \ldots \oplus B(D^n) \}.
\end{align}
The attraction of this construction is that the lowest order corresponds to the standard GS-DMET bath construction.
The above generalized bath constructions allow for the introduction of an unlimited number of bath sites (so long as the total number
of sites in the embedded problem is less than the lattice size). Increasing the size of the embedded problem by increasing the number of bath
orbitals (hopefully) increases the accuracy of the embedding, but it also
increases the computational cost. However, an alternative way to increase
accuracy is simply to increase the number of impurity sites. Which strategy is better is problem dependent,
and we will assess both in our numerical experiments.
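In code, the generalized construction of Eq.~(\ref{eq:dbath}) simply repeats the ground-state SVD for successive powers of the density matrix and orthonormalizes the union; a minimal Python/NumPy sketch (illustrative names) is:

\begin{verbatim}
import numpy as np

def ft_bath(D, imp_idx, n_order, tol=1e-9):
    # Bath space span{B(D) + B(D^2) + ... + B(D^n)}: one SVD per
    # power of the mean-field density matrix, followed by an
    # orthonormalization of the union.
    L = D.shape[0]
    env_idx = [i for i in range(L) if i not in imp_idx]
    blocks, Dk = [], np.eye(L)
    for _ in range(n_order):
        Dk = Dk @ D                                   # D^k
        _, s, Bh = np.linalg.svd(Dk[np.ix_(imp_idx, env_idx)],
                                 full_matrices=False)
        blocks.append(Bh.conj().T[:, s > tol])
    # Orthonormalize the combined bath space via one more SVD.
    U, s, _ = np.linalg.svd(np.hstack(blocks), full_matrices=False)
    return U[:, s > tol]
\end{verbatim}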
\subsubsection{Thermal observables}
The thermal expectation value of an observable $\hat{O}$ is defined as
\begin{equation}\label{eq:theory-ftexpval}
\langle \hat{O}(\beta)\rangle = \Tr\left[\hat{\rho}(\beta)\hat{O}\right].
\end{equation}
Once $\hat{\rho}(\beta)$ is obtained from
the high-level impurity calculation, for observables based on low-rank (e.g. one- and two-) particle
reduced density matrices, we evaluate Eq.~(\ref{eq:theory-ftexpval}) using the democratic partitioning formula for
expectation values in Sec.~\ref{sec:gsexpect}.
We will also, however, be interested in the entropy per site, which is a many-particle expectation value.
Rather than computing this directly as an expectation value,
we will obtain it by using the thermodynamic relation
$\frac{\dd S}{\dd E} = \beta$, and
\begin{equation}
S(\beta_0) = S(0) + \int_{E(0)}^{E(\beta_0)} \beta(E) \dd E
\end{equation}
where $\beta_0$ is the desired inverse temperature, and $S(0) = \ln 4 \approx 1.386$ is the infinite-temperature entropy per site (four equally probable states per site at half-filling).
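Numerically, this amounts to a trapezoid-rule integration of $\beta$ against the energy samples; a minimal sketch (Python, illustrative names) is:

\begin{verbatim}
import numpy as np

def entropy_per_site(betas, energies, s_inf=np.log(4.0)):
    # Integrate dS = beta dE from beta = 0 (where S = ln 4 per site
    # for the half-filled Hubbard model) using the trapezoid rule.
    # `betas` and `energies` are paired samples ordered from beta = 0.
    S = [s_inf]
    for k in range(1, len(betas)):
        dE = energies[k] - energies[k - 1]
        S.append(S[-1] + 0.5 * (betas[k] + betas[k - 1]) * dE)
    return np.array(S)
\end{verbatim}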
\section{\label{sec_results_dmet}Results}
\subsection{Computational details}
We benchmarked the performance of FT-DMET on the 1D and
2D Hubbard models as a function of $U$ and $\beta$. For the 1D Hubbard model, we compared our FT-DMET results
to exact solutions from the thermal Bethe ansatz~\citep{TakahashiPRB2002}.
For the 2D Hubbard model, the FT-DMET results were compared to DCA and
DMFT
results~\citep{MF_DMFT2d,maier2005,kunes,jarrellPRB,jarrellEPL,LeBlancPRX2015}. We used large DMET mean-field lattices with periodic boundary conditions (240 sites in 1D, $24 \times 24$ sites in 2D).
We used exact diagonalization (ED) and finite temperature DMRG (FT-DMRG)
as impurity solvers.
There are two common ways to carry out FT-DMRG calculations: the purification method~\citep{FeiguinPRB2005} and the
minimally entangled typical thermal states (METTS) method~\citep{StoudenmireNJP2010}.
In this work, we used the purification method implemented with the ITensor
package~\citep{ITensor} as the FT-DMRG impurity solver, as well as to provide the finite lattice reference data in Fig.~\ref{fig:1ddopping}.
In the FT-DMRG solver, the sites were ordered with the impurity sites coming first, followed by the bath sites (an orthonormal
basis for the set of bath sites of different orders was constructed via singular value decomposition, and ordered by
decreasing singular value), with the ancilla sites interleaved between the physical sites.
In the 1D Hubbard model, we used ED exclusively and the interacting bath formulation of DMET, while in the 2D Hubbard model,
we used ED for the 4 impurity, 4 bath
calculations, and FT-DMRG for the 4 impurity, 8 bath calculations, both within the non-interacting bath formulation.
FT-DMRG was carried out using 4th order Runge-Kutta time evolution.
To denote different calculations with different numbers of impurity and bath orbitals, we use the notation $InBm$, where $n$
denotes the number of impurity sites and $m$ the number of bath orbitals.
\subsection{1D Hubbard model}\label{sec:1dhub}
The 1D Hubbard model is an ideal test system for FT-DMET as its thermal properties
can be exactly computed via the thermal Bethe ansatz. We thus use it to assess various choices
within the FT-DMET formalism outlined above.
We first compare the relative performance of the two proposed bath constructions, generated via the Hamiltonian in Eq.~(\ref{eq:hbath}) or
via the density matrix in Eq.~(\ref{eq:dbath}). In Fig.~\ref{fig:hbath-vs-dbath}, we show the
error in the energy per site (measured from the thermal Bethe ansatz) for $U=2, 4$ and half-filling for these two choices. (The behaviour
for other $U$ is similar).
Using 4 bath sites, the absolute error in the energy is comparable to that of the ground-state calculation (which uses 2 bath sites)
over the entire temperature range.
Although the Hamiltonian
construction was motivated by the high temperature expansion of the density matrix, the density matrix construction appears to perform well
at both low and high temperatures.
Consequently, we use the density matrix derived bath in the subsequent calculations.
\begin{figure}
\centering
\justify
\includegraphics[width=1\textwidth]{figures/dmet/eps/fig1_hbathvsdbath-eps-converted-to.pdf}
\caption{Error in energy per site (units of $t$) of FT-DMET
for the 1D Hubbard model at $U=2$ and $U=4$ (2 impurity sites and
half-filling) with bath orbitals generated via the
density matrix $D$ (Eq.~(\ref{eq:dbath})) (blue lines)
or lattice Hamiltonian $h$ (Eq.~(\ref{eq:hbath})) (orange lines)
as a function of inverse temperature $\beta$. The numbers in
parentheses denote the number of bath orbitals.
The grey area denotes the ground state error with 2 impurity orbitals.
}\label{fig:hbath-vs-dbath}
\end{figure}
We next examine the effectiveness of the density matrix bath construction in removing
the finite size error of the impurity. As a first test,
in Fig.~\ref{fig:dmet-vs-ed} we compare the energy error obtained with FT-DMET and $I2B2$ with
a pure ED calculation with 4 impurity sites ($I4$) and periodic (PBC) or antiperiodic (APBC) boundary
conditions, at various $U$ and $\beta$. For weak ($U=2$) to moderate ($U=4$) coupling, FT-DMET shows a significant improvement
over a finite-system calculation with the same number of sites, reducing the error by a factor of $\sim 2-6$ depending on $\beta$, thus
demonstrating the effectiveness of the bath. The maximum FT-DMET energy error is 8.1\%, 6.6\%, and 3.1\% for $U=2$, $4$, and $8$, respectively.
At very strong couplings, the error of the finite system ED with PBC approaches that of FT-DMET. This is because both the finite size error
and the effectiveness of the DMET bath decrease as one approaches the atomic limit.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{figures/dmet/eps/fig2_dmet_vs_ed-eps-converted-to.pdf}
\caption{Percentage error of the FT-DMET (with 2 impurity sites and 2 bath orbitals) energy per site vs. ED (4 sites) on a non-embedded cluster with PBC and APBC boundary conditions
for the 1D Hubbard model at various $U$ and $\beta$.} \label{fig:dmet-vs-ed}
\end{figure}
As a second test, in Fig.~\ref{fig:impvsbath} we compare increasing the number of impurity sites versus increasing
the number of bath orbitals generated in Eq.~(\ref{eq:dbath}) for various $U$ and $\beta$. Although complex behaviour
is seen as a function of $\beta$, we roughly see two patterns. For certain impurity sizes (e.g. $I4$), it can be slightly
more accurate to use a larger impurity with an equal number of bath sites than a smaller impurity with a larger number of bath sites.
(For example, at $U=8$, one can find a range of $\beta$ where $I4B4$ gives a smaller error than $I2B6$.) However, there
are also some impurity sizes which perform very badly; for example, $I3B3$ gives a very large error, because the (short-range) antiferromagnetic
correlations do not properly tile between adjacent impurities when the impurities are of odd size. Thus, due to these
size effects, convergence with impurity size is highly non-monotonic, but increasing the bath size (by including more terms
in Eq.~(\ref{eq:dbath})) is less prone to strong odd-size effects.
The ability to improve the quantum impurity by simply increasing the number of bath sites,
is expected to be particularly relevant in higher-dimensional lattices such as the 2D Hubbard model, where ordinarily to
obtain a sequence of clusters with related shapes,
it is necessary to increase the impurity size by large steps. Nonetheless, convergence with bath size is also not strictly monotonic,
as also illustrated in Fig.~\ref{fig:1dES}, where we see that the error in the $I2B4$ entropy can sometimes be less
than that of $I2B6$ for certain ranges of $\beta$. For the largest embedded problem $I2B6$, the maximum
error in the entropy is $4\times 10^{-3}$ and $2\times 10^{-2}$ for $U =4$
and $8$, respectively.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{figures/dmet/eps/fig3_impvsbath-eps-converted-to.pdf}
\caption{Absolute error of the FT-DMET
energy per site of the 1D Hubbard model at half-filling
as a function of impurity and bath size. $InBm$ denotes $n$ impurity sites and $m$ bath orbitals.
Increasing impurity (blue lines); increasing bath (orange lines). The grey band depicts the ground state error with $2$ impurity sites and $2$ bath orbitals.}
\label{fig:impvsbath}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{figures/dmet/eps/fig4_1dentropy_ws-eps-converted-to.pdf}
\caption{Absolute error of the FT-DMET entropy per
site of the 1D Hubbard model at half-filling
as a function of the number of bath sites.
The right panels show the absolute entropy.
}
\label{fig:1dES}
\end{figure}
The preceding calculations were all carried out at half-filling. Thus,
in Fig.~\ref{fig:1ddopping} we show FT-DMET calculations on the 1D Hubbard model away from half-filling at $U=4$.
We chose to simulate a finite Hubbard chain of 16 sites with PBC in order to readily generate numerically exact reference data
using FT-DMRG (using a maximum bond dimension of $2000$ and an
imaginary time step of $\tau = 0.025$). The agreement between the FT-DMRG energy per site and
that obtained from the thermal Bethe ansatz can be seen at half-filling, corresponding to a chemical potential $\mu=2$.
We see excellent agreement between FT-DMET and FT-DMRG results across the full range
of chemical potentials and different $\beta$, suggesting that FT-DMET works as well for doped systems
as for undoped ones.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{figures/dmet/eps/fig5_dopping-eps-converted-to.pdf}
\caption{Energy per site (units of $t$) of a 16-site
Hubbard chain with periodic boundary conditions at $U=4$ as a function
of the chemical potential $\mu$ at various $\beta$ values.
The difference between the DMRG and DMET ($I2B6$) energies per site is
$0.01-1.4\%$.
Solid lines: DMRG energies; dashed lines: DMET energies; pentagons: Bethe ansatz.}
\label{fig:1ddopping}
\end{figure}
\subsection{2D Hubbard model}\label{sec:2dhub}
The 2D Hubbard model is an essential model of correlation physics in materials.
We first discuss the accuracy of FT-DMET for the energy
of the 2D Hubbard model at half-filling, shown in Fig.~\ref{fig:2dE}. The FT-DMET calculations
are performed on a $2\times 2$ impurity, with 4 bath orbitals ($I4B4$) (green diamond markers)
and 8 bath orbitals ($I4B8$) (red triangular markers). The results are compared
to DCA calculations with clusters of size $34$ (orange circle markers), $72$
(blue square markers)~\citep{LeBlancPRX2015}, and $2\times 2$ (light blue hexagon markers) (computed for this work).
The DCA results with the size $72$ cluster can be considered here to represent the thermodynamic limit.
The
DCA($2\times 2$)
data provides an opportunity to assess the relative contribution of the FT-DMET embedding to the finite size error; in particular, one can compare the
difference between FT-DMET and DCA(72) to the difference between
DCA($2\times2$) and DCA(72).
Overall, we see that the FT-DMET energies with
8 bath orbitals are in good agreement with the DCA(72) energies across the different $U$ values, and that the accuracy is slightly
better on average than that of
DCA($2\times2$).
The maximum error in the $I4B8$ impurity compared to thermodynamic limit extrapolations of the
DCA energy~\citep{LeBlancPRX2015} is found at $U=4$ and is in the range of 1-2\%,
comparable to errors observed in ground-state DMET at this cluster size (e.g. the error in GS-DMET at $U=4$ and $U=8$ is 0.3\% and 1.8\%, respectively).
In the $\beta = 8$ case, the FT-DMET calculations with two different bath sizes give very similar results; at low temperature, the
bath space constructed by the FT procedure is similar to that of the ground state, and the higher order bath sites do not contribute
very relevant degrees of freedom. Thus even the smaller bath achieves good accuracy in the low temperature FT-DMET calculations.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{figures/dmet/eps/fig6_2denergy-eps-converted-to.pdf}
\caption{Energy per site versus $U$ (units of $t$) of the
2D Hubbard model at half-filling with FT-DMET ($2\times2$ cluster with
4 and 8 bath orbitals)
and DCA (34, 72, and $2\times 2$ site clusters).
}\label{fig:2dE}
\end{figure}
\begin{figure}[t!]
\centering
\begin{subfigure}[t]{0.85\textwidth}
\includegraphics[width=\textwidth]{figures/dmet/eps/fig7a_magnetic-eps-converted-to.pdf}
\caption{ }
\end{subfigure}
\begin{subfigure}[t]{0.85\textwidth}
\includegraphics[width=\textwidth]{figures/dmet/eps/fig7b_neel-eps-converted-to.pdf}
\caption{ }
\end{subfigure}
\caption{N\'eel transition for the 2D Hubbard model within quantum impurity simulations.
(a) Antiferromagnetic moment $m$ as a function of $T$ with various $U$ values (units of $t$);
(b) N\'eel temperature $T_N$ calculated with FT-DMET, single-site DMFT and DCA. DMFT data is taken from Ref.~\citep{kunes}, DCA/NCA data for $U=4$
is taken from Ref.~\citep{maier2005}, DCA/QMC data for $U=6$ is
taken from Ref.~\citep{jarrellPRB}, and DCA/QMC data for $U=8$ is
taken from Ref.~\citep{jarrellEPL}.}\label{fig:2dlinemag}
\end{figure}
A central phenomenon in magnetism is the finite-temperature N\'eel transition.
In the thermodynamic limit, the 2D Hubbard model does not exhibit a true N\'eel transition,
but shows a crossover~\citep{mermin}. However, in finite quantum impurity calculations,
the crossover appears as a phase transition at a nonzero N\'eel temperature.
Fig.~\ref{fig:2dlinemag}(a)
shows the antiferromagnetic moment $m$, calculated as $m = \frac{1}{2L_x}\sum_{i=1}^{L_x}|n_{i\uparrow}-n_{i\downarrow}|$, as a function of temperature $T$ for various
$U$ values. As a guide to the eye, we fit the data to a mean-field
magnetization function $m = a\tanh\left(bm/T\right)$,
where $a$ and $b$ are parameters that depend on $U$. The FT-DMET calculations are performed with a $2\times 2$ impurity and $8$ bath orbitals,
using a finite temperature DMRG solver with maximal bond dimension $M = 600$ and time step $\tau = 0.1$. With this, the error in $m$ from the solver
is estimated to be less than $10^{-3}$.
$m$ drops sharply to zero as $T$ is increased,
signaling a N\'eel transition. The N\'eel temperature $T_N$ is taken as the point of intersection
of the mean-field fitted line with the $x$ axis; assuming this form of the curve, the uncertainty in $T_N$ is invisible
on the scale of the plot.
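Note that for the fitted form $m = a\tanh(bm/T)$, the nontrivial solution disappears when the slope $ab/T$ of the linearized map drops below one, so the fit implies $T_N = ab$. A minimal sketch of the fit (Python/SciPy, illustrative names) is:

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def m_of_T(T, a, b):
    # Self-consistent solution of m = a*tanh(b*m/T) by fixed-point
    # iteration; returns 0 above T_N = a*b and the stable nonzero
    # branch below it.
    T = np.atleast_1d(np.asarray(T, dtype=float))
    m = np.full_like(T, a)          # start from the saturated value
    for _ in range(500):
        m = a * np.tanh(b * m / T)
    return m

# (a, b), _ = curve_fit(m_of_T, T_data, m_data, p0=(0.5, 1.0))
# T_N = a * b   # slope a*b/T of the linearized map reaches 1
\end{verbatim}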
The plot of $T_N$ versus $U$ is shown in Fig.~\ref{fig:2dlinemag}(b), showing that
the maximal $T_N$ occurs at $U=6$.
Similar $T_N$ calculations on the 2D Hubbard model with single-site DMFT~\citep{kunes} and DCA~\citep{jarrellPRB,jarrellEPL,maier2005} are
also shown in Fig.~\ref{fig:2dlinemag}(b) for reference. Note that the difference between the DMFT results~\citep{kunes} and the single-site DCA results (formally equivalent to DMFT)~\citep{jarrellPRB,jarrellEPL}
likely arises from the different solvers used.
The behaviour of $T_N$ in our $2\times 2$ FT-DMET calculations
is quite similar to that of the 4-site DCA cluster.
In particular, we see in DCA that the
$T_N$ values obtained from calculations with a single-site cluster ($N_c=1$)
are higher than the $T_N$ values obtained from calculations with a
4-site cluster ($N_c = 4$).
An alternative visualization of the N\'eel transition in FT-DMET is shown in
Fig.~\ref{fig:2dpd4b}. The FT-DMET calculations here were performed with
a $2\times 2$ impurity and $4$ bath orbitals using an ED solver.
Though less quantitatively accurate than the $8$ bath orbital simulations, these FT-DMET calculations still capture
the qualitative behavior of the N\'eel transition. Focusing on the dark blue
region of the phase diagram, one can estimate the maximal $T_N$ to occur near $U\approx 9$, an increase
over the maximal N\'eel temperature using the $8$ bath orbital impurity model. This increase
in the maximal $T_N$ appears similar to that which happens when moving from a 4-site cluster to a 1-site cluster in DCA
in Fig.~\ref{fig:2dlinemag}.
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\textwidth]{figures/dmet/eps/fig8_phasediagram-eps-converted-to.pdf}
\caption{2D Hubbard antiferromagnetic moment (color scale) as a function of $T$ and $U$ (units of $t$) in FT-DMET ($2\times 2$ impurity, 4 bath orbitals.)}\label{fig:2dpd4b}
\end{figure}
\section{\label{sec:conc_dmet}Conclusions}
To summarize, we have introduced a finite temperature formulation
of the density matrix embedding theory (FT-DMET). The finite temperature
formulation inherits most of the basic structure of the ground-state
density matrix embedding theory, but modifies the bath construction
so as to approximately reproduce the mean-field finite-temperature density matrix.
From numerical assessments on the 1D and 2D Hubbard model, we conclude
that the accuracy of FT-DMET is comparable to that of its ground-state counterpart, with
at most a modest increase in size of the embedded problem. From the limited comparisons, it also
appears to be competitive in accuracy with the cluster dynamical mean-field theory for the same sized cluster.
Similarly to ground-state DMET, we expect FT-DMET to be broadly applicable
to a wide range of model and \emph{ab initio} problems of correlated electrons at finite temperature~\citep{ZhengScience2017,cui2019efficient}.
\chapter{\textit{Ab initio} finite temperature density matrix embedding theory\label{chp:hlatt}}
\section{\label{sec:hlatt_abs}Abstract}
This work describes the framework of finite temperature density matrix
embedding theory (FT-DMET) for \textit{ab initio} simulations of solids.
We introduce the implementation details,
including orbital localization, the density fitting
treatment of the two-electron repulsion integrals, bath truncation, lattice-to-embedding
projection, and impurity solvers. We apply this method to study the
thermal properties and phases of hydrogen lattices. We provide the finite
temperature dissociation curve, the paramagnetic-antiferromagnetic transition,
and the metal-insulator transition of the hydrogen chain.
\section{\label{sec:hlatt_intro}Introduction}
The numerical study of the many-electron problem has played a profound
role in understanding electronic behavior in molecules and materials.
A major challenge for current numerical methods is the description of strong
electron correlation, which requires a non-trivial treatment of the
electron-electron interaction beyond the mean-field level. A variety of numerical
algorithms have been developed in the past decades to treat strong electron
correlation, including post-Hartree-Fock quantum chemistry methods such
as CCSD~\citep{Monkhorst1977, Bartlett2007}, DMRG and its multi-dimensional
alternatives~\citep{White1992, Scholl2005, Chan2011,Evenbly2015,Verstraete2008}, the QMC family such as AFQMC~\citep{Ceperley1977,Acioli1994, Foulkes2001}, and embedding
methods such as DMET~\citep{Honma1995, Carlson2011}. During the past decades, noticeable progress
has been made in the study of strongly correlated models such as the one-dimensional
and two-dimensional Hubbard
models~\cite{Lieb1989,White1989, LeBlanc2015}, while \textit{ab initio}
studies of strongly correlated solids remain rare. Compared to model systems, where
the form of the two-electron interaction is usually simple, \textit{ab initio}
Hamiltonians contain much more complicated two-electron terms, whose number scales as $N^4$,
where $N$ is the number of orbitals. This complexity brings higher
computational costs. Therefore, an efficient method that can handle
realistic Hamiltonians accurately is crucial for understanding the
physics of real materials.
The hydrogen lattice is believed
to be the simplest chemical system with a straightforward analog to the
Hubbard model. A thorough comparison between the hydrogen lattice and the Hubbard
model could provide insight into the roles of (i) long-range correlation
and (ii) multi-band effects (with basis sets larger than STO-6G). The
numerical study of the hydrogen chain can be traced back to the 1970s,
with simple theoretical tools such as many-body perturbation
theory (MBPT)~\cite{Ross1976}. The rapid development of numerical algorithms
has made it possible to achieve higher accuracy and thus more reliable
conclusions~\cite{Stella2011, Hachmann2006, Al-Saidi2007, Sinitskiy2010,
NguyenLan2016, Motta2017, Motta2019, Liu2020}.
Motta et al. benchmarked the equation of state\cite{Motta2017} and
explored the ground state properties\cite{Motta2019} of the hydrogen chain
with various popular numerical methods including DMRG and AFQMC. Despite the
numerous ground state simulations, finite temperature studies of hydrogen
lattices are rare, even though they are crucial for
understanding temperature-dependent phase diagrams.
Liu et al. studied the finite temperature behavior of the
hydrogen chain with the minimal basis set (STO-6G), and identified the
signature of the Pomeranchuk effect~\cite{Liu2020}. However, the minimal basis
set hindered the exploration of more interesting phenomena caused by
multi-band effects. A more thorough study
beyond the minimal basis set is needed to reach a quantitative description
of the finite temperature behavior of hydrogen lattices.
In this work, we apply \textit{ab initio} finite temperature density matrix
embedding theory (FT-DMET)~\citep{Sun2020}
algorithm to study metal-insulator and magnetic crossovers in periodic
one-dimensional and two-dimensional hydrogen lattices as a function
of temperature $T$ and H-H bond distance $R$. We also explore how
basis set size influences the shape of the phases by comparing the results
with STO-6G, 6-31G, and CC-PVDZ basis sets. The rest of the article is
organized as follows: in Section~\ref{sec:abinit_ftdmet}, we present the
formulation and implementation details of \textit{ab initio} FT-DMET,
including orbital localization, techniques to reduce the cost of handling the
two-electron repulsion terms, bath truncation, impurity solvers, and thermal
observables. In Section~\ref{sec:res_hlatt}, we demonstrate the \textit{ab initio} FT-DMET algorithm by studies of the dissociation curves and
phase transitions in a one-dimensional periodic hydrogen lattice. We finalize
this article with conclusions in Section~\ref{sec:conc_hlatt}.
\section{\textit{Ab initio} FT-DMET}\label{sec:abinit_ftdmet}
In our previous work\cite{Sun2020}, we introduced the basic formulation
of FT-DMET for lattice models. Going from lattice models to chemical
systems, several practical difficulties arise\cite{Cui2020}:
(i) the definition of the impurity relies on the localization of the orbitals;
(ii) the number of orbitals in the impurity can easily become very large, depending on
the structure of the supercell and the basis set; (iii) manipulating the
two-electron repulsion integrals of a realistic Hamiltonian is usually very expensive; and (iv) an impurity solver that can handle \textit{ab initio}
Hamiltonians efficiently at finite temperature is required.
In the rest of this section, we discuss solutions to the above challenges and
provide implementation details of \textit{ab initio} FT-DMET.
\subsection{Orbital localization}
Since we are dealing with periodic lattices, the whole lattice problem is
described by Bloch (crystal) orbitals in momentum space, so crystal atomic
orbitals (AOs) $\{\phi^{\textbf{k}}_{\mu}(\textbf{r})\}$ are a natural computational basis.
The definition of the impurity,
however, is based on real-space localized orbitals~\citep{Edmiston1963}. Therefore we
define a two-step transformation from the Bloch orbitals to localized orbitals (LOs)
$\{w_i(\mathbf{r})\}$:
\begin{equation}\label{eq:lo_trans}
\begin{split}
w_i^{\mathbf{R}}(\mathbf{r}) =& \frac{1}{\sqrt{N_\mathbf{k}}}\sum_{\mathbf{k}}
e^{-i\mathbf{k}\cdot \mathbf{R}}w_i^{\mathbf{k}}(\mathbf{r}),\\
w_i^{\mathbf{k}}(\mathbf{r}) =& \sum_{\mu} \phi^{\textbf{k}}_{\mu}(\textbf{r})
C_{\mu i}^{\textbf{k}},
\end{split}
\end{equation}
where $C_{\mu i}^{\textbf{k}}$ transforms AOs in momentum space
$\{\phi^{\textbf{k}}_{\mu}(\textbf{r})\}$
into LOs in momentum space $w_i^{\mathbf{k}}(\mathbf{r})$, and
LOs in real space $w_i^{\mathbf{R}}(\mathbf{r})$ are derived by
a Wannier summation over the local crystal orbitals $w_i^{\mathbf{k}}(\mathbf{r})$.
With the \textit{ab initio} periodic system expressed in LOs, one could
choose the impurity to be spanned by the LOs in a single unit cell or
supercell. In the rest of this paper, we choose the impurity to be
the supercell at the lattice origin.
To define the localization coefficients $C_{\mu i}^{\textbf{k}}$ in
Eq.~\eqref{eq:lo_trans}, we use a bottom-up strategy: we transform directly from
the AO computational basis to the LOs. This strategy uses only linear algebra to
produce the LOs, and thus avoids complicated optimizations.
There are several choices of LOs within the bottom-up strategy:
L\"owdin and meta-L\"owdin orbitals~\citep{Lowdin1950}, natural AOs (NAOs)~\citep{Lowdin1956,Reed1985},
and intrinsic AOs (IAOs)~\citep{KniziaJCTC2013}. In this work,
we used the $\mathbf{k}$-space unrestricted Hartree-Fock (KUHF)
method with density fitting in the quantum chemistry package PySCF\cite{PYSCF2017,PYSCF2020} to generate a set of crystal MOs.
Then we applied an adapted
IAO routine to generate a set of crystal IAOs from the crystal MOs with
$\mathbf{k}$-point sampling. The crystal IAOs generated from this routine
are valence orbitals that exactly span the occupied space of the mean-field
calculation. Note that the number of crystal IAOs is equal to the size of minimal basis only. To carry out calculations beyond the minimal basis,
we construct the rest nonvalence orbitals to be projected AOs (PAOs)~\citep{Saebo1993},
orthogonalized with L\"owdin orthogonalization~\citep{Aiken1980}. This IAO+PAO
strategy has been used in previous ground state DMET calculations~\citep{WoutersJCTC2016,Cui2020}.
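A minimal sketch of this workflow (assuming a recent PySCF version in which \texttt{lo.iao.iao} accepts a \texttt{kpts} argument; the geometry, $\mathbf{k}$-mesh, and basis below are purely illustrative) is:

\begin{verbatim}
import numpy as np
from pyscf.pbc import gto, scf
from pyscf import lo

# Hydrogen chain unit cell (illustrative: 2 atoms, R = 1.5 Bohr)
cell = gto.Cell()
cell.unit = 'B'
cell.atom = 'H 0 0 0; H 0 0 1.5'
cell.a = np.diag([10.0, 10.0, 3.0])  # vacuum in x, y; chain along z
cell.basis = '6-31g'
cell.build()

kpts = cell.make_kpts([1, 1, 4])
mf = scf.KUHF(cell, kpts).density_fit()  # KUHF with density fitting
mf.kernel()

# Crystal IAOs from the occupied alpha-spin crystal MOs.
mo_occ = [mo[:, occ > 0]
          for mo, occ in zip(mf.mo_coeff[0], mf.mo_occ[0])]
c_iao = lo.iao.iao(cell, mo_occ, kpts=kpts)  # valence LOs per k
# (A subsequent Lowdin orthogonalization and PAO construction for
#  the nonvalence space would follow, as described in the text.)
\end{verbatim}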
\subsection{Bath truncation and finite temperature bath}
In the standard DMET routine, the bath orbitals used to construct the
embedding space are obtained from the SVD of the mean-field off-diagonal
density matrix between the impurity and the remaining lattice (called environment) $\gamma^{\mathbf{R}\neq\mathbf{0},\mathbf{0}}$
\begin{equation}\label{eq:svd_bath}
\gamma^{\mathbf{R}\neq\mathbf{0},\mathbf{0}} = B^{\mathbf{R}\neq\mathbf{0}}\Lambda V^{\mathbf{0}\dag},
\end{equation}
where $B^{\mathbf{R}}$ defines the coefficients of bath orbitals in the LO
basis. Thus we can construct the projection matrix in real space
\begin{equation}\label{eq:proj_mat_R}
P^{\mathbf{R}} = \begin{pmatrix}
\mathbf{I} & \mathbf{0} \\
\mathbf{0} & \mathbf{B^{R\neq 0}}
\end{pmatrix}.
\end{equation}
The projection in momentum space is derived from Eq.~\eqref{eq:proj_mat_R}
by a Wannier transformation
\begin{equation}
P^{\mathbf{k}} = \sum_{\mathbf{R}}e^{-i \mathbf{k\cdot R}} P^{\mathbf{R}}.
\end{equation}
Note that the projection matrices $P^{\mathbf{R}}$ and $P^{\mathbf{k}}$
are in the LO basis; to obtain the $\mathbf{k}$-space transformation
matrix in the AO basis, one simply multiplies $P^{\mathbf{k}}$ from the left
by $C^{\mathbf{k}}$ from Eq.~\eqref{eq:lo_trans}.
From Eq.~\eqref{eq:svd_bath}, one generates a set of bath orbitals of the same size
as the impurity. This setting is valid and efficient for model systems, where
the embedding space is constructed purely from valence bands.
However, in \textit{ab initio} calculations, low-lying core and high-energy
virtual impurity orbitals do not entangle with the environment, and thus not
with the bath orbitals. In practice, this results in vanishing singular values in
the SVD of Eq.~\eqref{eq:svd_bath}, leading to difficulties in the
convergence of the DMET self-consistency procedure. To overcome this difficulty,
we use the following strategy~\citep{WoutersJCTC2016}: we classify the
impurity orbitals as core, valence, and virtual orbitals, and then take only the
valence columns of the off-diagonal block of the density matrix to construct
the bath orbitals, as illustrated in
Fig.~\ref{fig:svd_valence}. With this strategy, the
number of bath orbitals is equal to the number of valence impurity orbitals,
and thus the number of embedding orbitals is reduced from $2n_{\text{imp}}$ to
$n_{\text{imp}} + n_{\text{val}}$. Note that if a pseudopotential is used
in the calculation, there are no core orbitals.
\begin{figure}[h!]
\centering
\begin{subfigure}[t]{0.8\textwidth}
\includegraphics[width=\textwidth]{figures/hlatt/svd_model.png}
\caption{ }
\end{subfigure}
\begin{subfigure}[t]{0.8\textwidth}
\includegraphics[width=\textwidth]{figures/hlatt/svd_mole.png}
\caption{ }
\end{subfigure}
\caption{Bath orbitals from singular value decomposition (SVD) of the off-diagonal block of the
mean-field density matrix. The whole square represents the mean-field
density matrix of size $N\times N$, with $N$ being the total number of orbitals, and the first $n_{\text{imp}}$ columns/rows of the matrix are orbitals
in the impurity. (a) Standard DMET routine computes the bath orbitals by the
SVD of the off-diagonal block (orange blocks on the left and top, the block
on the left corresponds to Eq.~\eqref{eq:svd_bath}). (b) \textit{Ab initio}
DMET computes the bath orbitals by the SVD of only the valence columns in
the off-diagonal blocks (green block).}\label{fig:svd_valence}
\end{figure}
At finite temperature, electronic occupation numbers of the energy levels
are ruled by the Fermi-Dirac distribution
\begin{equation}\label{eq:fd_hlatt}
f(\varepsilon_i) = \frac{1}{1+e^{\beta(\varepsilon_i-\mu)}},
\end{equation}
where $\varepsilon_i$ is the energy of the $i$th molecular orbital,
$\beta = \frac{1}{T}$ is the inverse temperature (we set the Boltzmann constant $k_B = 1$), and $\mu$ is the chemical potential, i.e. the energy
of the Fermi level in the ground state. When $\beta = \infty$,
Eq.~\eqref{eq:fd_hlatt} reproduces the ground state electron number
distribution: when $\varepsilon_i < \mu$, the occupation number is $1$ (occupied
orbitals), and when $\varepsilon_i > \mu$, the occupation number is $0$
(virtual orbitals). However, when $\beta$ is finite, the
electronic occupation number on virtual orbitals is no longer $0$. The
extreme case is when $\beta = 0$ where all energy levels are uniformly occupied
with occupation number $f = 0.5$. Therefore, the ground state bath
construction described previously is no longer suitable to provide an
accurate embedding Hamiltonian. There are generally two strategies:
(1) include part of the core and virtual orbitals in the off-diagonal block for the SVD;
(2) obtain additional bath orbitals from higher powers of the mean-field
density matrix~\citep{Sun2020}: take the valence columns of the off-diagonal
blocks of $\gamma$, $\gamma^2$, ..., $\gamma^l$ and apply the SVD to each of them
to obtain $l$ sets of bath orbitals, then collect the bath orbitals
together and orthogonalize them to produce the final bath orbitals.
The disadvantage of the first strategy is obvious: as the temperature gets
higher, the Fermi-Dirac curve in Fig.~\ref{fig:fd_curve} gets flatter,
and thus more non-valence orbitals are needed. Compared to the first strategy,
the second strategy generally requires fewer bath orbitals.
For most systems, truncating
$l$ to $2$ or $3$ is already enough for the whole temperature range,
so that the number of embedding orbitals is $n_{\text{imp}} + ln_{\text{val}}$.
Since the number of valence orbitals is much smaller than the number of
non-valence orbitals, DMET with bath orbitals derived from the second
strategy is more economical and stable. In this paper, we adopt the second
strategy for our FT-DMET calculations.
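The finite temperature mean-field density matrix entering these constructions is obtained from Eq.~\eqref{eq:fd_hlatt} with the chemical potential tuned to fix the electron number; a minimal sketch (Python/SciPy, illustrative names, assuming an orthonormal LO basis and $0 < n_{\text{elec}} < n_{\text{orb}}$) is:

\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def thermal_rdm1(fock, beta, nelec):
    # Finite-T mean-field 1-RDM in an orthonormal (LO) basis:
    # D = C f(eps) C^dag, with mu tuned so that Tr D = nelec.
    eps, C = np.linalg.eigh(fock)

    def occupations(mu):
        # 1/(1+e^x) written via tanh for stability at large beta
        return 0.5 * (1.0 - np.tanh(0.5 * beta * (eps - mu)))

    mu = brentq(lambda m: occupations(m).sum() - nelec,
                eps.min() - 10.0 / beta, eps.max() + 10.0 / beta)
    f = occupations(mu)
    return (C * f) @ C.conj().T, mu
\end{verbatim}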
\begin{figure}[h!]
\centering
\includegraphics[width=1.\textwidth]{figures/hlatt/fermi_N30_631g-eps-converted-to.pdf}
\caption{Fermi-Dirac distribution of electrons over the Hartree-Fock molecular
orbitals of an H$_{30}$ chain with the STO-6G basis. $T$ is in units of Hartree.}\label{fig:fd_curve}
\end{figure}
\subsection{Embedding Hamiltonian}
There are two choices for constructing the embedding Hamiltonian: (i)
the interacting bath formalism and (ii) the non-interacting bath formalism
~\citep{WoutersJCTC2016}. We pick the interacting bath formalism to
retain most of the two-body interactions. The embedding Hamiltonian
constructed from the interacting bath formalism has the form
\begin{equation}
H_{\text{emb}} = \sum_{pq}F^{\text{emb}}_{pq}c^{\dag}_pc_q - \mu\sum_{p\in \text{imp}} c^{\dag}_pc_p + \frac{1}{2}\sum_{pqrs}\left(pq|rs\right)c^{\dag}_pc^{\dag}_r
c_sc_q.
\end{equation}
Note that we use $p,q,r,s$ to index embedding orbitals and $i,j,k,l$
to index lattice orbitals. A chemical potential $\mu$ is applied only
to the impurity orbitals, to ensure that the number of electrons on
the impurity remains correct during the DMET self-consistency.
The embedding Fock matrix $F^{\text{emb}}$ is obtained by projecting
the lattice Fock in AOs to the embedding orbitals. Using
$\tilde{P}^{\mathbf{k}} = C^{\mathbf{k}}P^{\mathbf{k}}$ to denote the projection
operator, one computes the embedding Fock matrix by
\begin{equation}\label{eq:fock_rotate}
\tilde{F} = \frac{1}{N_{\mathbf{k}}}\sum_{\mathbf{k}}\tilde{P}^{\mathbf{k}\dagger}
F^{\mathbf{k}} \tilde{P}^{\mathbf{k}}
\end{equation}
where $F^{\mathbf{k}}$ is the lattice Fock matrix in $\mathbf{k}$-space AO basis.
To avoid double counting, we subtract the contribution of the embedding
electron repulsion integrals (ERIs) from $\tilde{F}$
\begin{equation}
F^{\text{emb}}_{pq} = \tilde{F}_{pq} -
\sum_{rs}\left[ \left(pq|rs\right)\gamma^{\text{emb}}_{sr} - \frac{1}{2} \left(pr|sq\right)\gamma^{\text{emb}}_{rs}\right]
\end{equation}
where $\gamma^{\text{emb}}$ is the lattice density matrix rotated to the
embedding space.
The time-consuming part is the construction and projection of the
ERIs into the embedding space. To reduce the cost, we use density
fitting~\citep{Whitten1973,Sun2017} to convert the four-center ERIs to three-center ERIs,
\begin{equation}
\left(\mu\mathbf{k}_{\mu} \nu\mathbf{k}_{\nu}| \kappa\mathbf{k}_{\kappa}\lambda
\mathbf{k}_{\lambda}\right) \approx \sum_{L\mathbf{k}_L} \left(\mu\mathbf{k}_{\mu} \nu\mathbf{k}_{\nu}| L\mathbf{k}_L\right) \left(L\mathbf{k}_L|\kappa\mathbf{k}_{\kappa}\lambda
\mathbf{k}_{\lambda}\right)
\end{equation}
where $L\mathbf{k}_L$ is the auxiliary basis and only three $\mathbf{k}$ indices are
independent due to the conservation of momentum: $\mathbf{k}_{L} = \mathbf{k}_{\mu}
- \mathbf{k}_{\nu} +n\mathbf{q}$ ($n\mathbf{q}$ is the integer multiple of reciprocal lattice vectors). The auxiliary basis
used in this work is a set of chargeless Gaussian crystal orbitals
with the divergent part of the Coulomb term treated in Fourier space~\citep{Sun2017}. Density fitting with the above auxiliary basis is called
Gaussian density fitting (GDF). In practice, we first transform the
three-center ERIs from the lattice orbitals to the embedding orbitals
with cost $\mathcal{O}\left(n_{\mathbf{k}}^2n_L n_{\text{lat}}n_{\text{emb}}^2
+ n_{\mathbf{k}}^2n_L n_{\text{lat}}^2n_{\text{emb}}\right)$; then we convert
the three-center ERIs back to four-center ERIs in the embedding
space with cost $\mathcal{O}\left(n_{\mathbf{k}}n_Ln_{\text{emb}}^4\right)$. The computational
cost is significantly reduced compared to a direct transformation
with cost $\mathcal{O}\left(n_{\mathbf{k}}^3n_{\text{lat}}^5\right)$.
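A minimal single-$\mathbf{k}$-point ($\Gamma$-point) sketch of the two-step transformation (Python/NumPy, illustrative names; real integrals assumed, and the momentum-conservation bookkeeping of the full $\mathbf{k}$-point version is omitted) is:

\begin{verbatim}
import numpy as np

def embed_eri_gdf(Lpq, P):
    # Lpq[L, m, n] holds the three-center integrals (L|mn) in the
    # lattice (LO) basis; P[m, p] is the lattice-to-embedding
    # projector.
    # Step 1: rotate both orbital indices into the embedding space.
    Lhalf = np.einsum('Lmn,mp->Lpn', Lpq, P.conj())
    Lemb = np.einsum('Lpn,nq->Lpq', Lhalf, P)
    # Step 2: contract the auxiliary index to 4-center ERIs (pq|rs).
    return np.einsum('Lpq,Lrs->pqrs', Lemb, Lemb)
\end{verbatim}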
\begin{figure}[t!]
\centering
\begin{subfigure}[t]{0.7\textwidth}
\includegraphics[width=\textwidth]{figures/hlatt/R1.5B20-eps-converted-to.pdf}
\caption{$R = 1.5$ Bohr}
\end{subfigure}
\begin{subfigure}[t]{0.7\textwidth}
\includegraphics[width=\textwidth]{figures/hlatt/R3.0B20-eps-converted-to.pdf}
\caption{$R = 3.0$ Bohr}
\end{subfigure}
\caption{Accuracy test of the FT-DMRG and LT-DMRG solvers against exact diagonalization. The label ``FT($x$)'' stands for the FT-DMRG solver with $\tau = x$, and
the label ``LT($x, y$)'' stands for the LT-DMRG solver with $x$ Davidson roots and
$y$ electron deviations from half-filling for both spins.}\label{fig:dmrg_solvers}
\end{figure}
\subsection{Impurity solver}
An accurate finite temperature algorithm is required as the impurity
solver. In this work, we use in-house implementations of finite temperature
exact diagonalization (FT-ED) and finite temperature density matrix
renormalization group (FT-DMRG) for small and large impurity
problems, respectively. In particular, there are two ways to implement
FT-DMRG: (1) imaginary time evolution in an enlarged Hilbert space,
also known as the purification approach~\citep{Feiguin2005} (referred to as
FT-DMRG); and (2)
using Davidson diagonalization to generate a set of low-energy levels
to be used in the grand canonical statistics (referred to as low temperature
DMRG, LT-DMRG). While FT-DMRG can be used over the whole temperature range,
LT-DMRG is intended specifically for low temperature calculations. Because most
phase transitions happen in the low temperature regime, LT-DMRG can provide
sufficiently accurate calculations at lower cost than FT-DMRG.
Since FT-DMRG is based on imaginary time evolution from inverse
temperature $\beta = 0$, the entanglement grows rapidly as $\beta$ increases,
and at low temperature a bond dimension that is much larger than the
ground state bond dimension is required. Another error source of FT-DMRG
is the imaginary time step $\tau = \beta / N$, where $N$ is the number
of time steps. For the symmetrized Trotter-Suzuki approximation, the local
truncation error is of order $\mathcal{O}(\tau^3)$, while the total
accumulated error is of order $\mathcal{O}(\tau^2)$. If the 4th-order
Runge-Kutta (RK4) method is used, the local truncation error is of
order $\mathcal{O}(\tau^5)$ and the total
accumulated error is of order $\mathcal{O}(\tau^4)$. The FT-DMRG
calculations in this work used the RK4 method.
Note that since
the matrix product state (MPS) truncation is applied at every time step,
too small a $\tau$ leads to a large accumulation of MPS truncation
errors. Therefore, $\tau$ must be chosen as a compromise: not so small
that it accumulates large MPS truncation errors, and not so large that it
introduces large time evolution truncation errors. The error sources of
LT-DMRG are the truncation of the grand canonical summation and the number
of roots in the Davidson diagonalization. Generally, the ground state bond dimension is
enough for the low temperature calculations with LT-DMRG.
An assessment of the accuracy of FT-DMRG and LT-DMRG solvers is shown in
Fig.~\ref{fig:dmrg_solvers}. The embedding system is composed of two impurity
orbitals and two bath orbitals, generated from a $6$-site hydrogen chain
with the STO-6G basis at $R = 1.5$ and $3.0$ Bohr at $\beta = 20$. Exact
diagonalization (ED) is chosen as the exact reference.
In Fig.~\ref{fig:dmrg_solvers}, we examine the role of the imaginary
time step $\tau$ in the FT-DMRG solver, and the roles of the number of
Davidson roots and the size of the truncated grand canonical space in the
LT-DMRG solver. At $\beta = 20$, the smaller $\tau$ (red lines) gives a
smaller error than $\tau = 0.2$ (blue lines), and the FT-DMRG
results converge at $M \sim 300$. The errors of the LT-DMRG solver do not change
much with the bond dimension $M$, so $M=100$ is already enough for
a $4$-site system. The accuracy at $R = 1.5$ Bohr is generally better than at
$R = 3.0$ Bohr, since larger $R$ corresponds to stronger correlation.
At larger $R$, one needs to include a larger number of Davidson roots to
achieve sufficient accuracy with LT-DMRG.
Generally, with a large enough bond dimension $M$, FT-DMRG provides
more accurate results. However, when the embedding system is too large
to use a large $M$, one can consider the LT-DMRG method. In our DMET calculations,
we use the ED solver for embedding problems with $L_{\text{emb}} < 8$ and the
FT-DMRG solver for larger problems.
\subsection{Thermal observables}
In order to identify the metal-insulator transition and the N\'eel transition
and explore the mechanism behind the crossings, we compute the following
order parameters: staggered magnetic moment $m$, double occupancy $D$,
complex polarization $Z$,
spin-spin correlation functions $\mathcal{C}_{ss}$, and
charge-charge correlation functions $\mathcal{C}_{cc}$.
\noindent\emph{Staggered magnetic moment.} The staggered magnetic moment
is calculated as
\begin{equation}\label{eq:hlatt_mag}
m = \frac{1}{N^{\text{imp}}}\sum_{i\in \text{imp}}\frac{|n_{i, \uparrow} - n_{i, \downarrow}|}{2},
\end{equation}
where $N^{\text{imp}}$ is the total number of H atoms in
the impurity (supercell), and $n_{i, \uparrow}$
and $n_{i, \downarrow}$ are electron numbers on $i$th atom with up spin
and down spin, respectively. Note that if one uses the Bohr magneton
$\mu_B = \frac{e\hbar}{2m_e} = \frac{1}{2}$ (in atomic units) as the unit, then the
factor of $2$ in the denominator of Eq.~\eqref{eq:hlatt_mag} is dropped.
To evaluate $n_{i,\uparrow}$ on the $i$th atom, we first
compute the one-particle impurity density matrix for up-spin
in the IAO basis, and then
sum up the diagonal terms that belong to the $i$th atom. For example,
when a split-valence basis such as 6-31G is used, the occupation numbers of the
$1s$ and $2s$ orbitals of atom $i$ sum to the electron density on atom $i$.
$n_{i,\downarrow}$ is evaluated in the same way from the down-spin
one-particle impurity density matrix.
\noindent\emph{Double occupancy.} The double occupancy measures the probability
of two electrons with opposite spins occupying the same hydrogen atom,
calculated by
\begin{equation}\label{eq:docc_hlatt}
D = \frac{1}{N^{\text{imp}}}\sum_{i\in \text{imp}} \langle \hat{n}_{i\uparrow}\hat{n}_{i, \downarrow}\rangle.
\end{equation}
Note that the hat on $\hat{n}_{i\uparrow}$ denotes that it is an operator, not
a number, with $n_{i\uparrow} = \langle \hat{n}_{i\uparrow}\rangle$.
Since there are multiple bands on each atom, we expand $\hat{n}_{i\uparrow}$
as
\begin{equation}
\hat{n}_{i\uparrow} = \sum_w \hat{n}^w_{i\uparrow},
\end{equation}
where $w$ is the index of the bands on the $i$-th atom ( e.g., $1s$, $2s$,
$2p_x, 2p_y, 2p_z, ...$). Therefore, the precise expression of double occupancy
becomes
\begin{equation}
D = \frac{1}{N^{\text{imp}}}\sum_{i\in \text{imp}}\sum_{w, w'\in i}
\langle \hat{n}^{w}_{i\uparrow}\hat{n}^{w'}_{i, \downarrow}\rangle.
\end{equation}
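Given a spin-resolved two-particle density matrix of the impurity, this evaluation is a contraction over the orbital block of each atom; a minimal sketch (Python/NumPy, with an illustrative index convention stated in the comments) is:

\begin{verbatim}
import numpy as np

def double_occupancy(rdm2_aabb, ao_slices):
    # D = (1/N_imp) sum_i sum_{w,w' in i} <n^w_{i,up} n^w'_{i,dn}>,
    # assuming rdm2_aabb[p, q, r, s] =
    #   <a^dag_{p,up} a_{q,up} a^dag_{r,dn} a_{s,dn}>,
    # and ao_slices[i] = (start, stop) of the orbitals on atom i.
    vals = []
    for (p0, p1) in ao_slices:
        block = rdm2_aabb[p0:p1, p0:p1, p0:p1, p0:p1]
        vals.append(np.einsum('wwvv->', block))  # sum_{w,w'} <n^w n^w'>
    return np.mean(vals)
\end{verbatim}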
\noindent\emph{Complex polarization.}
The complex polarization measures the mobility of the electrons, and thus can be
used as an indicator of the metal-insulator transition. The definition of the
complex polarization in the $z$ direction is
\begin{equation}
Z = \langle e^{i\frac{2\pi}{L}\hat{z}} \rangle,
\end{equation}
where $L$ is the chain length and $\hat{z}$ is the position operator
in the $z$ direction. When $|Z| = 0$, the electrons are delocalized and the system is metallic; when
$|Z| = 1$, the electrons are localized and the system is insulating.
At mean-field level, the ground state is a Slater determinant $|\phi\rangle$
of occupied
orbitals, so the complex polarization is evaluated by
\begin{equation}
Z = \langle \phi| e^{i\frac{2\pi}{L}\hat{z}}|\phi \rangle,
\end{equation}
which is equivalent to
\begin{equation}
Z = \text{Det}\left[C_{\text{occ}}^{\dag} e^{i\frac{2\pi}{L}z} C_{\text{occ}}\right],
\end{equation}
where $C_{\text{occ}}$ represents the coefficients of occupied orbitals.
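At the mean-field level this determinant formula is essentially a one-liner in a localized basis where the position operator is diagonal; a minimal sketch (Python/NumPy, illustrative names) is:

\begin{verbatim}
import numpy as np

def complex_polarization(C_occ, z_coords, L):
    # Z = det(C_occ^dag U C_occ) with U = diag(exp(i*2*pi*z/L)),
    # in a localized basis where the position operator is diagonal
    # with coordinates z_coords.
    u = np.exp(1j * 2.0 * np.pi * np.asarray(z_coords) / L)
    M = C_occ.conj().T @ (u[:, None] * C_occ)
    return np.linalg.det(M)
\end{verbatim}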
At finite temperature, we use a thermofield approach~\citep{Harsha2019}
from our recent work (see Chapter~\ref{chp:cp}). We construct the infinite temperature
determinant $|\tilde{\phi}\rangle$ in an enlarged Hilbert space, and thermofield
representations of the Hamiltonian $\tilde{H}$ and position operator $\tilde{z}$.
The finite temperature complex polarization is then evaluated as
\begin{equation}
Z(\beta) = \frac{1}{\mathcal{Z}} \langle \tilde{\phi} | e^{-\beta\tilde{H}}e^{i\frac{2\pi}{L}\tilde{z}} | \tilde{\phi}\rangle,
\end{equation}
where $\mathcal{Z} = \langle \tilde{\phi} | e^{-\beta\tilde{H}} | \tilde{\phi}\rangle$ is the partition function.
\noindent\emph{Spin-spin correlation and charge-charge correlation functions.}
We define the two correlation functions as follows:
\begin{equation}
\begin{split}
\mathcal{C}^{ss}_i =& \langle (\hat{n}_0^{\uparrow} - \hat{n}_0^{\downarrow})
( \hat{n}_i^{\uparrow} - \hat{n}_i^{\downarrow})\rangle
- \langle \hat{n}_0^{\uparrow} - \hat{n}_0^{\downarrow}\rangle
\langle \hat{n}_i^{\uparrow} - \hat{n}_i^{\downarrow}\rangle,\\
\mathcal{C}^{cc}_i =& \langle (\hat{n}_0^{\uparrow} + \hat{n}_0^{\downarrow})
( \hat{n}_i^{\uparrow} + \hat{n}_i^{\downarrow})\rangle
- \langle \hat{n}_0^{\uparrow} + \hat{n}_0^{\downarrow}\rangle
\langle \hat{n}_i^{\uparrow} + \hat{n}_i^{\downarrow}\rangle,
\end{split}
\end{equation}
where $\hat{n}_i^{\sigma}$ is the electron density operator of spin $\sigma$
on site $i$.
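At the mean-field level, these correlators reduce to one-particle density
matrices via Wick's theorem (a correlated solver would instead use the impurity
two-particle density matrix). A minimal sketch under this mean-field
assumption, with one orbital per site per spin:
\begin{verbatim}
import numpy as np

def connected_nn(D, i, j):
    # <n_i n_j> - <n_i><n_j> for one spin channel of a Slater
    # determinant, by Wick's theorem; D[p, q] = <a+_p a_q>.
    return D[i, j] * ((1.0 if i == j else 0.0) - D[j, i])

def correlations(D_up, D_dn, i):
    # Cross-spin expectations factorize for a determinant, so only the
    # same-spin connected terms survive in both correlators.
    css = connected_nn(D_up, 0, i) + connected_nn(D_dn, 0, i)
    ccc = css  # at mean field the ss and cc connected parts coincide
    return css, ccc
\end{verbatim}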
\section{Results}\label{sec:res_hlatt}
In this section, we show preliminary results of FT-DMET calculations
on the hydrogen chain with periodic boundary conditions.
First, we examine the basis set effect on a $22$-atom chain with $2$ atoms
in the impurity. Fig.~\ref{fig:mag_22k} shows the magnetic moment at
both the ground state and $T=0.02$ Hartree calculated by DMET with the STO-6G,
6-31G, and cc-pVDZ basis sets. A paramagnetic-antiferromagnetic
(PM-AFM) transition is observed at both the ground state and $T=0.02$ Hartree.
An interesting behavior of the ground-state magnetic moment is also observed:
with STO-6G, the magnetic moment drops for
$R > 3.0$ Bohr, while with the larger basis sets this drop does not occur.
The likely reason is the loss of entanglement between different sites at
large $R$. At $R = \infty$, the system behaves as $22$ isolated atoms, and one
expects a paramagnetic ground state. With more diffuse orbitals
(e.g., $2s$ and $2p$), however, the entanglement between different
sites decays more slowly as $R$ increases. Note that since the impurity
contains only $2$ atoms, only the entanglement between adjacent sites needs to
be considered; with a larger impurity, a drop of the magnetic moment should
also be expected for the larger basis sets. At $T = 0.02$ (right
panel), the magnetic moment computed with all three basis sets drops
as $R$ increases, as a consequence of thermal dissipation.
\begin{figure}[t!]
\centering
\begin{subfigure}[t]{0.7\textwidth}
\includegraphics[width=\textwidth]{figures/hlatt/hc_mag_gs.png}
\caption{ }
\end{subfigure}
\begin{subfigure}[t]{0.7\textwidth}
\includegraphics[width=\textwidth]{figures/hlatt/hc_mag_b50.png}
\caption{ }
\end{subfigure}
\caption{Magnetic moment of a $22$-atom chain at the ground state (left panel)
and $T=0.02$ Hartree (right panel) with the STO-6G, 6-31G, and cc-pVDZ basis sets.
}\label{fig:mag_22k}
\end{figure}
We further show the double occupancy for the same settings in
Fig.~\ref{fig:docc_11k}. A clear change of the gradient of $D$ as a function
of $R$ is observed at both the ground state and $T=0.02$ Hartree, indicating
a metal-insulator transition. The transition occurs around $R = 1.6\sim 1.8$
Bohr, which agrees with the $R$ of the PM-AFM transition, consistent with a
PM metallic phase and an AFM insulating phase.
\begin{figure}[t!]
\centering
\begin{subfigure}[t]{0.7\textwidth}
\includegraphics[width=\textwidth]{figures/hlatt/hc_docc_gs.png}
\caption{ }
\end{subfigure}
\begin{subfigure}[t]{0.7\textwidth}
\includegraphics[width=\textwidth]{figures/hlatt/hc_docc_b50.png}
\caption{ }
\end{subfigure}
\caption{Double occupancy of a $22$-atom chain at the ground state (left panel)
and $T=0.02$ Hartree (right panel) with the STO-6G, 6-31G, and cc-pVDZ basis sets.
The insets show a sudden change of the gradient of $D$ as a function of
$R$, indicating metal to insulator transition.
}\label{fig:docc_11k}
\end{figure}
Next, we increase the total number of atoms in the hydrogen chain to $50$
to reduce the finite-size effect of the total system.
STO-6G is used as the basis set.
The impurity is composed of two hydrogen atoms and is solved by ED.
We first present the energy calculations and the dissociation curve of the
hydrogen chain in Fig.~\ref{fig:energy_hlatt}, where the
energy per electron at $T=0.05$ is compared to FT-AFQMC~\citep{Liu2020}
results. The AFQMC calculation used the STO-6G basis set and a supercell with $10$ atoms and $5$
$k$ points. The two energy curves predict the same dissociation
trend and equilibrium bond length ($\sim 1.8$ Bohr). However, the DMET energies
are all lower than the AFQMC energies, which could be due to the finite
impurity size.
\begin{figure}
\centering
\includegraphics[width=0.85\textwidth]{figures/hlatt/energy_hlatt-eps-converted-to.pdf}
\caption{Dissociation curve of hydrogen chain at $T=0.05$ Hartree compared to AFQMC. The AFQMC data is extracted from Ref.~\citep{Liu2020}.
} \label{fig:energy_hlatt}
\end{figure}
We then examine the staggered magnetic moment $m$ as a function of the inter-atomic
distance $R$ at the ground state, $T=0.02$, $T=0.05$, and $T=0.1$ in
Fig.~\ref{fig:mag_hlatt_nk25}. Compared to Fig.~\ref{fig:mag_22k}, where
the paramagnetic-antiferromagnetic (PM-AFM) transition happened around
$R = 1.6$ Bohr, we observe the PM-AFM transition in the $R = 1.0\sim 1.5$ Bohr
region for $T < 0.1$, which could be an effect of the finite total system size.
Although $T = 0.02$ is very close to the ground state, the magnetic moment at
$T = 0.02$ drops earlier than the ground-state curve as $R$ increases. This
behavior is due to the thermal dissipation of the magnetic order: as $R$
grows, the atoms are farther apart, so the electron-electron correlation
between different sites weakens and eventually cannot preserve the long-range
AFM order, which leads to the drop of the magnetic order at large $R$ shown in
the figure. Even at a small temperature, spin flips can destroy the long-range
AFM order.
\begin{figure}
\centering
\includegraphics[width=0.85\textwidth]{figures/hlatt/mag_nk25-eps-converted-to.pdf}
\caption{Staggered magnetic moment of hydrogen chain with periodic
boundary condition at ground state, $T=0.02$, $T=0.05$, and $T=0.1$. The unit of $T$ is Hartree.
} \label{fig:mag_hlatt_nk25}
\end{figure}
\section{Conclusion}\label{sec:conc_hlatt}
In this work, we generalized the previously described finite temperature
density matrix embedding theory to \textit{ab initio} solids and
employed the method to study the hydrogen chain problem.
Despite the simplicity of the hydrogen chain compared to other
periodic systems, it exhibits a variety of intriguing behaviors, including
a paramagnetic-antiferromagnetic transition and a metal-insulator transition
at both the ground state and finite temperature. At finite temperature,
we observed thermal dissipation of the magnetic order at large inter-atomic
distance. We further confirmed the stabilizing effect of multi-band basis
sets. Since this work is still in progress, we will next apply
this finite temperature algorithm to a larger set of \textit{ab initio}
solids, including two- and three-dimensional hydrogen
lattices, transition metal oxides, and challenging systems such as
cuprate-based high-temperature superconductors.
\chapter{Finite temperature complex polarization and metal-insulator transition\label{chp:cp}}
\section{Abstract}
The metal-insulator transition is a fundamental yet complicated
topic in condensed matter physics and materials science.
Band structure theory has been widely used to explain why insulators are insulating
and why metals are conducting, but it can fail to describe insulating phases caused
by strong correlation or disorder. Electron locality/mobility serves as
a more general indicator of the metallic and insulating phases: when the system
is metallic, the electrons are delocalized and can flow freely;
when the system is insulating, the electrons are localized. The standard
deviation (or second cumulant moment) of the electron position
is used as the order parameter of electron localization and
is directly related to the complex polarization of the system.
The complex polarization is widely used as an indicator of the
metal-insulator transition at the ground state: when the complex polarization
equals zero, the second cumulant moment of the position diverges and the system
sits in the metallic phase; when the complex polarization is nonzero,
the electrons are localized and the system is insulating.
In this work, we present the finite temperature formulation of the complex
polarization. We also introduce a thermofield approach to compute the
complex polarization with a thermal Slater determinant. We demonstrate how
the finite temperature complex polarization works as an indicator of the
metal-insulator transition at low temperature for a modified tight binding
model and the hydrogen chain system. In the hydrogen chain case, we also
compare the metal-insulator transition with the paramagnetic-antiferromagnetic
transition, the electron population, and the energy gap to study the origins of
the insulating and metallic phases.
\section{Introduction}
A phase transition occurs when a system undergoes a macroscopic change
from one phase to another due to the variation of control parameters
such as temperature, magnetic field, chemical substitution, or pressure.
Near the critical point between the two phases, the physical properties of the
bulk change dramatically with respect to even a minor perturbation of the
control parameters.
The metal-insulator transition (MIT) is among the most common phase transitions,
yet the microscopic cause and the physics behind the phenomenon are nontrivial.
From elementary physics textbooks, we learn that metals conduct when an
electric field is applied, while insulators do not allow electrons to flow freely.
However, this is a rather vague, binary definition, which cannot
answer questions such as: (1) what are the microscopic driving forces
for conductivity? (2) what causes the MIT? and (3) how does one
characterize the MIT?
In the past, the microscopic characterization of insulators and metals was
generally based on band structure theory~\citep{Ashcroft76,Kittel2004},
which describes the motion of a single electron in a periodic solid subject
to the mean-field effect of the other electrons.
According to band structure theory, if the Fermi level sits in a band gap,
the system is insulating; if the Fermi level crosses a band, the
system is conducting. However, band structure theory rests on the
independent-electron approximation and is limited to crystalline
systems; insulating behavior caused by disorder or electron-electron
correlation cannot be captured accurately~\citep{Kohn1964,Anderson1961,Alexandrov1994,Imada1998}. A more general
description uses electron localization to distinguish metals from insulators:
when the electrons are \emph{localized}, the system is insulating, and
when the electrons are \emph{delocalized}, the system is conducting.
A widely accepted approach to evaluating electron localization is based
on the theory of polarization~\citep{Resta1992,KingSmith1993,Vanderbilt1993,Resta1993,
Ortiz1994,Resta1994,Resta1998,Resta1999}, where the macroscopic polarization
is connected to the Berry phase~\citep{Berry1984}. A more direct indicator
of electron localization is the second cumulant moment of the electron
position operator, which describes
the spread of the electrons~\citep{Resta1999,Resta1999PRL,KingSmith1993,
Vanderbilt1993,Resta1993}. A quantity that connects both the macroscopic
polarization and the second cumulant moment is the complex polarization:
its phase is the Berry phase, while the second cumulant moment can be
evaluated from its modulus~\citep{Resta2002,Souza2000,Aebischer2001}.
Moreover, the DC conductivity according to the ground-state
Kubo formula~\citep{Kubo1957} is also related to the modulus of the complex
polarization.
The ground-state complex polarization and its connection to the macroscopic
polarization, electron localization, and DC conductivity have been
thoroughly studied in the past~\citep{Resta1999, Resta1999PRL, Souza2000}.
However, discussion of the finite temperature complex polarization
is rare, despite the significance of this quantity. In this work, we
introduce the formulation of the finite temperature complex polarization
and discuss its relationship with electron localization. We also
present a mean-field level implementation of the finite temperature
complex polarization based on thermofield
theory~\citep{Matsumoto1983,Semenoff1983,Evans1992,Harsha2019}.
In Section~\ref{sec:cplx}, we introduce the ground-state formulation
of the complex polarization and electron localization, first using
the single-particle case as a simplified example and then generalizing
it to the many-body mean-field formulation.
In Section~\ref{sec:ftcplx}, we extend the ground-state formulation
to finite temperature and present a thermofield
approach to evaluate the complex polarization. In Section~\ref{sec:cptb},
we apply the finite temperature formulation to a modified tight binding model
both analytically and numerically, presenting a preliminary analysis of
how the complex polarization provides information about the metal-insulator
transition. In Section~\ref{sec:cphydrogen}, we choose the hydrogen chain as
an example of computing the finite temperature complex polarization for
\textit{ab initio} systems and explore the temperature-induced metal-insulator
transition in the hydrogen chain. We conclude this chapter
with a summary and outlook in Section~\ref{sec:cpconc}.
\section{Ground state complex polarization and electron localization \label{sec:cplx}}
The many-body complex polarization $Z_N^{(\alpha)}$ was first introduced as the
ground state expectation value of certain unitary many-body operators
~\citep{Resta1998}. We start by defining a general form of the unitary
many-body operator
\begin{equation}
\hat{U}(\mathbf{k}) = e^{i\mathbf{k}\cdot \hat{\mathbf{R}}},
\end{equation}
where $\mathbf{k}$ is an arbitrary three-dimensional vector and $\hat{\mathbf{R}}$
is the three-dimensional position operator, with
$\hat{R}^{\alpha}\psi(r_1, r_2, r_3) = r_{\alpha} \psi(r_1, r_2, r_3),
\alpha = 1,2,3$.
We introduce three $\mathbf{k}$ vectors with notation $\mathbf{\kappa}^{(\alpha)}
(\alpha = 1,2,3)$, defined as
\begin{equation}
\kappa^{(\alpha)}_{\beta} = \frac{2\pi}{L}\delta_{\alpha\beta},
\end{equation}
which can be explicitly written as
\begin{equation}
\begin{split}
\kappa^{(1)} &= \left(\frac{2\pi}{L}, 0, 0 \right),\\
\kappa^{(2)} &= \left(0, \frac{2\pi}{L}, 0 \right),\\
\kappa^{(3)} &= \left(0, 0, \frac{2\pi}{L}\right).
\end{split}
\end{equation}
The ground state complex polarization is then defined as the expectation
values of the unitary many-body operators with the above three vectors:
\begin{equation}\label{eq:cplx_gs}
Z_N^{(\alpha)} = \langle \Psi_0 | \hat{U}(\mathbf{\kappa}^{\alpha}) | \Psi_0\rangle,
\end{equation}
where $N$ is the number of electrons and
$|\Psi_0\rangle$ represents the ground state of the system of interest.
The complex polarization $Z_N^{(\alpha)}$ in Eq.~\eqref{eq:cplx_gs}
can be explicitly written as
\begin{equation}\label{eq:phase_cplx}
Z_N^{(\alpha)} = |Z_N^{(\alpha)}|e^{i\gamma^{(\alpha)}_N},
\end{equation}
where $|Z_N^{(\alpha)}|\in [0,1]$ is the modulus of $Z_N^{(\alpha)}$
and $\gamma^{(\alpha)}_N$ is the phase of $Z_N^{(\alpha)}$,
referred to as the Berry phase. In this chapter we will not discuss
the macroscopic polarization, so the phase in Eq.~\eqref{eq:phase_cplx}
will not be considered further. In fact, with a centrosymmetric choice of the
origin, the complex polarization always remains real.
\subsection{Electron localization}
We start by considering one particle in a one-dimensional
potential well. The locality of the particle can be measured by the
quadratic spread, or the second cumulant moment of the position $x$,
defined as
\begin{equation}\label{eq:scm_cp}
\langle \delta x^2 \rangle = \langle \psi | x^2 |\psi\rangle -
\langle \psi | x| \psi\rangle^2,
\end{equation}
where $|\psi\rangle$ is the ground state of the particle in a box.
$\langle \delta x^2 \rangle$ is finite when the state $|\psi\rangle$ is
bounded and diverges for an unbounded state. Let $n(x) = |\psi(x)|^2$
be the electron density, then we can rewrite Eq.~\eqref{eq:scm_cp} as
\begin{equation}
\langle \delta x^2 \rangle = \int_{-\infty}^{\infty} \mathrm{d}x \ x^2 n(x)
- \left(\int_{-\infty}^{\infty} \mathrm{d}x \ x n(x)\right)^2.
\end{equation}
We now assume that $\psi(x)$ is periodic with wavelength $L$
\begin{equation}
\psi(x + mL) = \psi(x),
\end{equation}
where $m$ is an integer. The Fourier transformation of $n(x)$ gives
\begin{equation}
\tilde{n}(k) = \int_{-\infty}^{\infty} \mathrm{d}x \, e^{-i kx} n(x).
\end{equation}
We choose the origin $x_0$ such that $\langle x\rangle = 0$; then
\begin{equation}\label{eq:first_cp}
\int_{-\infty}^{\infty} \mathrm{d}x \ x n(x) =
-i \frac{\mathrm{d}\tilde{n}(k)}{\mathrm{d} k} \Biggr\vert_{k=0} = 0,
\end{equation}
and $\langle \delta x^2 \rangle$ is only evaluated from the average value
of $x^2$
\begin{equation}\label{eq:second_cp}
\langle \delta x^2 \rangle = \int_{-\infty}^{\infty} \mathrm{d}x \ x^2 n(x)
= -\frac{\mathrm{d}^2 \tilde{n}(k)}{\mathrm{d} k^2 }\Biggr\vert_{k=0}.
\end{equation}
Combining Eq.~\eqref{eq:first_cp} and Eq.~\eqref{eq:second_cp}, we can
approximate $\tilde{n}(k)$ with the Taylor expansion up to the second
order
\begin{equation}\label{eq:taylor_cp}
\tilde{n}(k) \approx 1 - \frac{1}{2} \langle \delta x^2 \rangle k^2.
\end{equation}
Now we can write down the single particle complex polarization modified
from Eq.~\eqref{eq:cplx_gs} as
\begin{equation}\label{eq:single_cplx}
z = e^{i\frac{2\pi}{L}x_0} \tilde{n}\left(-\frac{2\pi}{L}\right).
\end{equation}
Combining Eq.~\eqref{eq:taylor_cp} and Eq.~\eqref{eq:single_cplx}, we obtain
the relationship between the complex polarization $z$ and the
second cumulant moment $\langle \delta x^2 \rangle$,
\begin{equation}\label{eq:2nd_cp}
\langle \delta x^2 \rangle \approx 2 \left(\frac{L}{2\pi}\right)^2 (1 - |z|).
\end{equation}
One could also rewrite Eq.~\eqref{eq:taylor_cp} as the exponential form
\begin{equation}\label{eq:exp_cp}
\tilde{n}(k) \approx e^{- \frac{1}{2} \langle \delta x^2 \rangle k^2},
\end{equation}
where we took $- \frac{1}{2} \langle \delta x^2 \rangle k^2$ in
Eq.~\eqref{eq:taylor_cp} as the \textit{first order} of the Taylor
expansion of an exponential instead of the second order. From Eq.~\eqref{eq:exp_cp}, one gets
\begin{equation}\label{eq:log_cp}
\langle \delta x^2 \rangle \approx -2 \left(\frac{L}{2\pi}\right)^2 \log|z|.
\end{equation}
For a localized state, both Eq.~\eqref{eq:2nd_cp} and Eq.~\eqref{eq:log_cp}
go to the same finite limit for large $L$; for a delocalized state,
Eq.~\eqref{eq:log_cp} is preferred since it diverges at $|z| = 0$.
Eq.~\eqref{eq:log_cp} gives us a straightforward relationship between
the complex polarization $z$ and second cumulant moment
$\langle \delta x^2 \rangle$: when the system is insulating with
$0 < |z| \leq 1$, $\langle \delta x^2 \rangle$ is finite and the ground
state is localized; when the
system is metallic with $|z| = 0$, $\langle \delta x^2 \rangle$ diverges
and the ground state is delocalized. Therefore, one could use $z$ as
a direct indicator of the locality of the electrons.
Similarly, the many-body electron localization is defined as
\begin{equation}
\langle \delta x^2 \rangle \approx -\frac{2}{N} \left(\frac{L}{2\pi}\right)^2
\log(|Z_N|),
\end{equation}
where $Z_N$ is the many-body complex polarization.
\subsection{Complex polarization for independent electrons}
When there is no interaction among electrons, the ground state
can be expressed as a Slater determinant $|\Psi_0\rangle$, and
the complex polarization can be written as
\begin{equation}
Z_N^{(\alpha)} = \langle \Psi_0 | \hat{U}(\mathbf{\kappa}^{(\alpha)})| \Psi_0\rangle = \langle \Psi_0 | \Phi_0\rangle,
\end{equation}
where $\hat{U}(\mathbf{\kappa}^{(\alpha)}) = e^{i \mathbf{\kappa}^{(\alpha)} \cdot \mathbf{r}}$ and $| \Phi_0\rangle = \hat{U}(\mathbf{\kappa}^{(\alpha)})| \Psi_0\rangle$.
According to the Thouless theorem~\citep{Thouless1960,Rosensteel1981},
$|\Phi_0\rangle$ is also a determinant composed of orbitals rotated from
the orbitals in $\Psi_0$ as
\begin{equation}
\phi_\mu(\mathbf{r}) = e^{i \mathbf{\kappa}^{(\alpha)} \cdot \mathbf{r}} \psi_{\mu}(\mathbf{r}).
\end{equation}
Therefore, $Z_N^{(\alpha)}$ is equal to the overlap between $|\Psi_0\rangle$
and $|\Phi_0\rangle$. The overlap of two Slater determinants is given
by the determinant of the $N\times N$ overlap matrix $\mathcal{S}^{(\alpha)}$
with elements
\begin{equation}\label{eq:ovlpmat_cp}
\mathcal{S}_{\mu\nu}^{(\alpha)} = \int \mathrm{d} \mathbf{r} \psi_\mu^*(\mathbf{r})
e^{i\mathbf{\kappa}^{(\alpha)}\cdot \mathbf{r}} \psi_\nu(\mathbf{r}),
\end{equation}
where $\psi_\mu(\mathbf{r})$ are occupied orbitals.
The many-body complex polarization is then evaluated as
\begin{equation}\label{eq:cplx_slater}
Z_N^{(\alpha)} = \left(\text{det} \mathcal{S}^{(\alpha)}_{\uparrow}\right)
\left(\text{det} \mathcal{S}^{(\alpha)}_{\downarrow}\right),
\end{equation}
where the indices $\uparrow$ and $\downarrow$ correspond to up and down
spins. Eq.~\eqref{eq:cplx_slater} can be applied in numerical calculations
whenever Slater determinants are available to represent the state of the
system; such methods include Hartree-Fock, density functional theory (DFT),
and Slater-determinant-based quantum Monte Carlo (QMC).
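As a minimal sketch, Eq.~\eqref{eq:cplx_slater} can be evaluated directly from
the occupied orbital coefficients whenever the basis is orthonormal and
diagonalizes the position operator (e.g., a site or real-space grid basis);
the function below is an illustrative stand-in, not a production routine.
\begin{verbatim}
import numpy as np

def complex_polarization(C_occ_up, C_occ_dn, z, L):
    # z: positions of the (orthonormal) basis functions; L: cell length.
    U = np.diag(np.exp(1j * 2 * np.pi * z / L))
    Z = 1.0 + 0.0j
    for C_occ in (C_occ_up, C_occ_dn):
        Z *= np.linalg.det(C_occ.conj().T @ U @ C_occ)  # det(S_sigma)
    return Z
\end{verbatim}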
\section{Finite temperature complex polarization}\label{sec:ftcplx}
At finite temperature, the expectation value (thermal average) of an
operator $\hat{A}$ is evaluated under the grand canonical ensemble
\begin{equation}\label{eq:grand_can_cp}
\begin{split}
\langle \hat{A}\rangle(\beta) &= \frac{1}{\mathcal{Q}}\text{Tr}\left[\hat{A}
\hat{\rho}\right] \\
&= \frac{1}{\mathcal{Q}}\sum_n \langle n | \hat{A}e^{-\beta\hat{H}} |n\rangle,
\end{split}
\end{equation}
where $\beta$ is the inverse temperature, $\hat{H}$ is the Hamiltonian with
the chemical potential,
$\{|n\rangle\}$ forms a set of orthonormal basis,
$\hat{\rho} = e^{-\beta \hat{H}}$ is the density matrix, and
$\mathcal{Q}$ is the partition function defined as
\begin{equation}
\mathcal{Q} = \sum_n \langle n |e^{-\beta\hat{H}} |n\rangle.
\end{equation}
According to the thermofield theory, the ensemble average in
Eq.~\eqref{eq:grand_can_cp} can be expressed as an expectation value
over one state $|\Psi(\beta)\rangle$, known as the \emph{thermofield double
state} or simply \emph{thermal state}
\begin{equation}\label{eq:tfd_av}
\langle \hat{A}\rangle(\beta) = \frac{\langle \Psi(\beta)| \hat{A} |\Psi(\beta)\rangle }{\langle \Psi(\beta)|\Psi(\beta)\rangle}.
\end{equation}
In thermofield theory, a copy of the original Hilbert space $\mathcal{H}$
is introduced as $\tilde{\mathcal{H}}$, known as the auxiliary space. At
infinite temperature ($\beta = 0$), the thermal state is given by
a uniform summation over the orthonormal basis
\begin{equation}
|\Psi(0)\rangle = \sum_n |n\rangle \otimes |\tilde{n}\rangle,
\end{equation}
where $\{|\tilde{n}\rangle\}$ are copies of $\{|n\rangle\}$ in the auxiliary
space.
The thermal state at $\beta$ is then derived by imaginary time evolution
from $|\Psi(0)\rangle$
\begin{equation}
|\Psi(\beta)\rangle = e^{-\beta \hat{H}/2} |\Psi(0)\rangle.
\end{equation}
Note that the Hamiltonian $\hat{H}$ only acts on the original Hilbert space
$\mathcal{H}$. Eq.~\eqref{eq:tfd_av} can be rewritten as
\begin{equation}
\begin{split}
\langle \hat{A}\rangle(\beta) &= \frac{\langle \Psi(0)| e^{-\beta \hat{H}/2}
\hat{A}e^{-\beta \hat{H}/2}| \Psi(0)\rangle }{ \langle \Psi(0)|e^{-\beta \hat{H}}|\Psi(0)\rangle}\\
& = \frac{\langle \Psi(0)|e^{-\beta \hat{H}} \hat{A} | \Psi(0)\rangle }{ \langle \Psi(0)|e^{-\beta \hat{H}}|\Psi(0)\rangle}.
\end{split}
\end{equation}
The complex polarization at temperature $T = 1/\beta$ is thus
\begin{equation}\label{eq:ftcplx_full}
Z_N(\beta) = \frac{\langle \Psi(0)|e^{-\beta \hat{H}} \hat{Z} | \Psi(0)\rangle }{ \langle \Psi(0)|e^{-\beta \hat{H}}|\Psi(0)\rangle},
\end{equation}
where $\hat{Z} = e^{i\frac{2\pi}{L}\hat{x}}$. Note that for simplicity
we dropped the superscript $(\alpha)$ and chose only the $x$ component of
the three-dimensional position operator $\hat{\mathbf{r}}$. This simplification
is valid for a one-dimensional system; for higher-dimensional systems,
$Z_N$ along the other directions can be evaluated in the same manner.
At the mean-field level, the infinite-temperature thermal state $|\Psi(0)\rangle$
can be written as a Slater determinant formed by the following
$2L \times L$ coefficient matrix
\begin{equation}
C_0 =
\begin{bmatrix}
1 & 0 & 0 & \cdots & 0 \\
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
0 & 0 & 0 & \ddots & 0 \\
0 & 0 & 0 & \cdots & 1 \\
1 & 0 & 0 & \cdots & 0 \\
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
0 & 0 & 0 & \ddots & 0 \\
0 & 0 & 0 & \cdots & 1 \\
\end{bmatrix}
= \begin{bmatrix}
\mathbb{I} \\ \mathbb{I}
\end{bmatrix},
\end{equation}
where the first $L$ rows correspond to the physical sites, and the last
$L$ rows correspond to the auxiliary sites. A one-body operator
$\hat{w}$ in $\mathcal{H}$ is rewritten as
\begin{equation}
\bar{\hat{w}} = \hat{w} \oplus 0.
\end{equation}
Under Hartree-Fock approximation, we use the Fock operator $\hat{f}$
as the one-body Hamiltonian, and the matrix form of the
thermal Fock operator $\bar{\hat{f}}$ is
\begin{equation}
\left[\bar{\hat{f}}\right] = \begin{bmatrix}
[\hat{f}] & 0\\
0 & 0
\end{bmatrix}.
\end{equation}
The position operator $\hat{x}$ is also a one-body operator, with
the matrix form as
\begin{equation}
\left[\bar{\hat{x}}\right] = \begin{bmatrix}
[\hat{x}] & 0\\
0 & 0
\end{bmatrix}.
\end{equation}
The thermal density operator $\bar{\hat{\rho}} = e^{-\beta \bar{\hat{f}}}$,
with the matrix form
\begin{equation}
\left[\bar{\hat{\rho}} \right] =
\begin{bmatrix}
\left[e^{-\beta \hat{f}}\right] & 0\\
0 &\mathbb{I}\\
\end{bmatrix}.
\end{equation}
The thermofield expression of the complex polarization operator
$e^{i\frac{2\pi}{L}\hat{x}}$ therefore has the matrix form
\begin{equation}
\left[ \bar{\hat{Z}}\right] =
\begin{bmatrix}
\left[ e^{i\frac{2\pi}{L}\hat{x}}\right] & 0\\
0 & \mathbb{I}
\end{bmatrix}.
\end{equation}
The complex polarization at $\beta$ can be evaluated by a similar
formulation as ground state
\begin{equation}\label{eq:ftcp_cp}
Z_N(\beta) = \frac{\text{det}\left( C_0^{\dag} \left[ \bar{\hat{Z}}\right]
\left[\bar{\hat{\rho}} \right] C_0\right)}
{\text{det}\left( C_0^{\dag} \left[\bar{\hat{\rho}} \right] C_0\right)}.
\end{equation}
At infinite temperature, $\beta = 0$, and $\left[e^{-\beta \hat{f}}\right]
= \mathbb{I}$, leading to $\left[\bar{\hat{\rho}} \right] =
\mathbb{I}\otimes \mathbb{I}$. One could rotate the basis to the eigenstates of $\hat{x}$,
and thus $\hat{Z}$ is diagonal in this basis. Note that $C_0$ and
$\left[e^{-\beta \hat{f}}\right]$ do not change under the rotation.
After rotation, $\left[\bar{\hat{Z}}\right]$ becomes a diagonal matrix having
the form
\begin{equation}
\left[ \bar{\hat{Z}}\right] =
\begin{bmatrix}
z_1 & & & & \\
& z_2 & & & \\
& & \ddots & & \\
& & & z_L& \\
& & & & [\mathbb{I}]
\end{bmatrix},
\end{equation}
where $z_{\mu} = e^{i2\pi x_{\mu} /L}, \mu = 1,\ldots,L$.
The denominator in Eq.~\eqref{eq:ftcp_cp} is then
\begin{equation}
\text{det}\left( C_0^{\dag} C_0\right) = 2^L.
\end{equation}
The numerator in Eq.~\eqref{eq:ftcp_cp} is
\begin{equation}\label{eq:infT_cp}
\text{det}\left( C_0^{\dag} \left[ \bar{\hat{Z}}\right] \left[\bar{\hat{\rho}} \right] C_0\right)
= \text{det}\left([\hat{Z}] + [\mathbb{I}]\right) = \prod_{\mu} \left(z_\mu + 1\right).
\end{equation}
Suppose the basis is chosen to be the site basis, i.e., $x_\mu = \mu$.
When $L$ is even, $z_{L/2} = -1$ is included in the product in
Eq.~\eqref{eq:infT_cp} and the numerator is zero, leading to $Z_N = 0$.
When $L$ is odd and large enough, $z_{(L+1)/2}$ and $z_{(L-1)/2}$ differ from
$-1$ only infinitesimally, so the numerator is $\ll 2^L$ and
$Z_N \rightarrow 0$. Therefore, in the thermodynamic limit,
$Z_N = 0$ at infinite temperature ($\beta = 0$). This observation is
consistent with the intuition that the electrons move freely
at infinite temperature and the second cumulant moment diverges.
\section{Tight binding model}\label{sec:cptb}
The generalized form of a non-interacting Hamiltonian can be written
as
\begin{equation}
\hat{h} = -\sum_{\mu \neq \nu}\left(t_{\mu\nu}\hat{a}^{\dag}_\mu\hat{a}_\nu + \text{h.c.}\right)
+\sum_\mu u_\mu \hat{a}^\dag_\mu \hat{a}_\mu,
\end{equation}
where $t_{\mu\nu}\hat{a}^{\dag}_\mu\hat{a}_\nu$ describes electron hopping from site $\nu$ to site $\mu$, and $u_\mu$ is an on-site potential.
The one-band tight binding model Hamiltonian takes the form
\begin{equation}
\hat{h}_{\text{tb}} = -t\sum_{\langle \mu, \nu\rangle, \sigma} \hat{a}^{\dag}_{\mu,\sigma}\hat{a}_{\nu,\sigma}
+ \text{h.c.},
\end{equation}
where $\langle \mu, \nu\rangle$ indicates nearest-neighbor pairs and $\sigma$
labels the spin degree of freedom. In the following, we focus on the one-dimensional
tight binding model with periodic boundary condition (PBC) and
SU(2) symmetry. The Hamiltonian becomes
\begin{equation}\label{eq:1dtb_cp}
\hat{h}_{\text{tb}} = -t\sum_{\mu} \hat{a}^{\dag}_{\mu} \hat{a}_{\mu+1} + \text{h.c.}
\end{equation}
The eigenstates of Eq.~\eqref{eq:1dtb_cp} can be analytically solved
with the help of Fourier transformation from real space to $k$ space
(momentum space)
\begin{equation}
\begin{split}
\hat{a}_\mu &= \frac{1}{\sqrt{L}}\sum_{k\in\text{BZ}} e^{ik\mu}\hat{c}_k,\\
\hat{c}_k & = \frac{1}{\sqrt{L}} \sum_{\mu} e^{-ik\mu}\hat{a}_\mu.
\end{split}
\end{equation}
It is easy to prove that $\{\hat{c}_k, \hat{c}^\dag_{k'}\} = \delta_{kk'}$, so
$\hat{c}^\dag_{k}$ and $\hat{c}_k$ are creation and annihilation operators in
$k$ space. Eq.~\eqref{eq:1dtb_cp} can be rewritten as
\begin{equation}
\begin{split}
\hat{h}_{\text{tb}} &= -\frac{t}{L} \sum_{\mu} \sum_{k,k'}
e^{-ik\mu} e^{ik'(\mu+1)} \hat{c}^{\dag}_k \hat{c}_{k'} + \text{h.c.} \\
&= -\frac{t}{L} \sum_{k,k'} \left(\sum_{\mu} e^{i(k'-k)\mu}\right) e^{ik'} \hat{c}^{\dag}_k \hat{c}_{k'} + \text{h.c.}\\
&= -t \sum_{k,k'} \delta_{kk'} e^{ik}\hat{c}^{\dag}_k \hat{c}_{k'} + \text{h.c.}\\
&= -2t \sum_k \cos k \hat{c}^{\dag}_k\hat{c}_k.
\end{split}
\end{equation}
Therefore, $\hat{h}_{\text{tb}}$ is diagonal in the basis created by $\hat{c}^{\dag}_k$. For the crystalline case, $\hat{c}^{\dag}_k$ creates an electron in
a Bloch wave
\begin{equation}
\psi_{k}(\mu) = e^{ik\mu} u_{k}(\mu),
\end{equation}
where $\mu = 0, 1, \ldots, L-1$ labels the sites and $k$ the momentum.
$u_{k}(\mu)$ is identical on each site, $u_{k}(\mu + 1) = u_{k}(\mu)$,
and we replace it by the constant $1/\sqrt{L}$ to ensure
that $\psi_{k}(\mu)$ is normalized.
In a one-dimensional chain with $L$ sites, there are $L$ allowed
$k$ values:
\begin{equation}
k_s = \frac{2\pi s}{L}, s = 0, 1, ..., L-1.
\end{equation}
We evaluate the overlap matrix in Eq.~\eqref{eq:ovlpmat_cp} under the
basis $\{\psi_{k_s}(\mu)\}$,
\begin{equation}
\begin{split}
\mathcal{S}_{k_s, k_{s'}} &= \sum_{\mu} \psi^*_{k_s}(\mu)
e^{\frac{i2\pi \mu}{L}} \psi_{k_{s'}}(\mu)\\
&= \frac{1}{L} \sum_{\mu} e^{-i\frac{2\pi}{L}(s-s'-1)\mu}\\
&= \delta_{s, s'+1}.
\end{split}
\end{equation}
Therefore, $\mathcal{S}_{k_s, k_{s'}}$ is nonzero only when $s = s'+1$.
When the lattice is fully occupied, both $s$ and $s'$ run over all the
$L$ values. This means that for any $s$, there exists an occupied orbital
$\psi_{k_{s-1}}$. Therefore, any row or any column of the $\mathcal{S}$
matrix has one and only one nonzero element (equal to $1$). The determinant
of $\mathcal{S}$ is thus nonzero, and $Z_N = 1$, indicating an insulating
state.
When the lattice is not fully occupied, the overlap matrix $\mathcal{S}$
only consists of occupied orbitals, and if one can find an $s$ where
$\psi_{k_{s-1}}$ is unoccupied, then the row corresponding to $\psi_{k_{s}}$
contains only zero elements, leading to $Z_N =
\text{det}\left(\mathcal{S}\right) = 0$. For the half-filling case, whether
$Z_N = 0$ or not depends on the value of $L$. When $L$ is even, there
are two cases: $L = 4m$ and $L = 4m+2$, where $m$ is an integer. The
spectrum of the two cases are shown in Fig.~\ref{fig:dispersion_tb} with
$L=8$ and $L=10$. The cosine curve shows the dispersion relation
between $\varepsilon_k$ and $k$, $\varepsilon_k = -2t\cos k$, and the
circles on the curve correspond to the allowed $k$ values $2\pi n/L,
n = 0, ..., L-1$. Fig.~\ref{fig:dispersion_tb} (a) shows the half-filling
case of $L=8$, with $4$ electrons in the lattice. The solid black
dots are occupied orbitals, while the two circles with stripes are two
degenerate states with the total occupation number equal to $1$.
If we consider the two striped circles together as one occupied orbital, then
for any occupied orbital $s$, the $(s+1)$th orbital is also
occupied or partially occupied. Therefore, when $L = 4m$, $|Z_N| > 0$.
Fig.~\ref{fig:dispersion_tb} (b) tells a different story. With $L=10$,
there are five occupied orbitals shown as solid black dots, and there
are no partially occupied orbitals in this case. Therefore, when $L=4m+2$,
$|Z_N| = 0$.
\begin{figure}[t!]
\centering
\begin{subfigure}[t]{0.85\textwidth}
\includegraphics[width=\textwidth]{figures/cp/dispersion_L8-eps-converted-to.pdf}
\caption{$L = 8$}
\end{subfigure}
\begin{subfigure}[t]{0.85\textwidth}
\includegraphics[width=\textwidth]{figures/cp/dispersion_L10-eps-converted-to.pdf}
\caption{$L = 10$}
\end{subfigure}
\caption{Dispersion relation and energy levels of the half-filled tight binding model for (a) $L = 8$ and (b) $L=10$. Solid black dots are occupied orbitals,
blank circles are unoccupied orbitals, and circles with stripes are partially
occupied orbitals due to degeneracy.}\label{fig:dispersion_tb}
\end{figure}
At finite temperature, we again evaluate the thermal average with
thermal states.
The matrix form of the phase operator $[\hat{Z}]$ in the momentum basis, which follows from the overlap result above, is:
\begin{equation}
\left[\hat{Z}\right] = \begin{bmatrix}
0 & 0 & 0 & \cdots & 0 & 1 \\
1 & 0 & 0 & \cdots & 0 & 0 \\
0 & 1 & 0 & \cdots & 0 & 0 \\
& & & \ddots & & \\
0 & 0 & 0 & \cdots & 1 & 0 \\
\end{bmatrix},
\end{equation}
and the thermal operator of complex polarization has the form
\begin{equation}
\left[\bar{\hat{Z}}\right] = \begin{bmatrix}
[\hat{Z}] & 0 \\
0 & \mathbb{I}
\end{bmatrix}.
\end{equation}
Since the Hamiltonian is diagonal with the basis $\{\psi_{k_{s}}\}$,
the thermal density matrix has the form
\begin{equation}
\left[\bar{\hat{\rho}}\right] = \begin{bmatrix}
\xi_1 & & & & \\
& \xi_2 & & & \\
& & \ddots & & \\
& & & \xi_L & \\
& & & & [\mathbb{I}]
\end{bmatrix}.
\end{equation}
The complex polarization can be evaluated according to
Eq.~\eqref{eq:ftcp_cp},
\begin{equation}\label{eq:tb_ftcp}
Z_N(\beta) = \frac{1-(-1)^L\prod_{\mu}\xi_\mu}{\prod_{\mu}(1+\xi_\mu)}.
\end{equation}
Note that $\xi_\mu =e^{-\beta\varepsilon_\mu} > 0$, so the denominator
of Eq.~\eqref{eq:tb_ftcp} is always greater than zero.
Now let us examine two extreme cases: $\beta \rightarrow \infty$ (zero
temperature) and $\beta\rightarrow 0$ (infinite temperature). At
$\beta \rightarrow \infty$, if the Fermi level is above all bands,
then $\xi_\mu \gg 1$ for all $\mu$, and Eq.~\eqref{eq:tb_ftcp}
is well approximated by
\begin{equation}
|Z_N| \approx \frac{\prod_{\mu}\xi_\mu}{\prod_{\mu}\xi_\mu} = 1.
\end{equation}
Therefore the lattice is an insulator. However, when the Fermi level
is below some bands (unoccupied orbitals), then the $\xi$ values of these
bands $\rightarrow 0$, and the numerator of Eq.~\eqref{eq:tb_ftcp}
$\rightarrow 1$, while the denominator $\prod_{\mu}(1+\xi_\mu)\rightarrow \infty$, so $Z_N\rightarrow 0$, giving a conducting solution. The above low
temperature limit agrees with the previous analysis of ground state
metal-insulator transition of the tight binding model.
At $\beta\rightarrow 0$, all $\xi_\mu\rightarrow 1$, resulting in a
numerator of $0$ ($L$ even) or $2$ ($L$ odd), while the denominator
is $2^L$. Therefore, $Z_N\rightarrow 0$ as $L\rightarrow \infty$, and
the electrons in the tight binding model are delocalized at infinite temperature.
\begin{figure}
\centering
\justify
\includegraphics[width=1\textwidth]{figures/cp/ZN_line-eps-converted-to.pdf}
\caption{Complex polarization of the tight binding model ($L=42$) with staggered potential $u$ at ground state (GS), $T = 0.2$, $T=0.5$, and $T=1.0$, respectively.
}\label{fig:zn_tb}
\index{figures}
\end{figure}
Next we add the staggered potential $u$ onto the original tight binding
model:
\begin{equation}
\hat{h} = -t\sum_{\mu} \hat{a}^{\dag}_{\mu} \hat{a}_{\mu+1} + \text{h.c.}
+ u\sum_{\mu \in\text{odd}} \hat{a}^{\dag}_{\mu}\hat{a}_{\mu},
\end{equation}
where $u > 0$ is applied only to the odd sites. For simplicity, we assume
that $L$ is even. The effect of $u$ is to impose a potential offset on every
other site, which hinders the free flow of electrons.
For the rest of the tight binding calculations, we choose the chain length
$L = 42$, set the Boltzmann constant $k_B = 1$, and use $t$ as the energy unit.
In Fig.~\ref{fig:zn_tb} we show the complex polarization $Z_N$ of the
half-filled tight binding model against
the staggered potential $u$ at the ground state, $T=0.2t$, $T=0.5t$, and $T=1.0t$.
As predicted above, the half-filled ground state of the original tight
binding model ($u=0$) with $L=4m+2$ is metallic, with $Z_N=0$.
As the staggered potential is turned on, the ground-state $Z_N$ grows rapidly
and the system becomes increasingly insulating. As the temperature is raised,
the metallic regime expands within the small-$|u|$ region, and the growth of
$Z_N$ with $|u|$ becomes flatter: the temperature effect smears the sharp
ground-state transition. Note that the curves are symmetric about $u = 0$,
since only the potential difference between adjacent sites affects the state
of the system.
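The curves in Fig.~\ref{fig:zn_tb} can be reproduced from
Eq.~\eqref{eq:ftcp_cp} with a few lines of Python; the sketch below is our own
illustration (in particular, the chemical-potential choice $\mu = u/2$ for half
filling is an assumption based on the two bands being symmetric about $u/2$).
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def Z_N(L, t, u, beta):
    # Staggered tight-binding chain with PBC; mu = u/2 keeps the chain
    # half filled on average (assumption, see lead-in text).
    f = np.zeros((L, L))
    for i in range(L):
        f[i, (i + 1) % L] = f[(i + 1) % L, i] = -t
        if i % 2 == 1:
            f[i, i] = u
    f -= (u / 2.0) * np.eye(L)
    rho = expm(-beta * f)                    # [e^{-beta f}], site basis
    Z = np.diag(np.exp(1j * 2 * np.pi * np.arange(L) / L))
    # Eq. (ftcp) as a ratio of determinants; slogdet avoids overflow.
    s1, ld1 = np.linalg.slogdet(Z @ rho + np.eye(L))   # numerator
    s2, ld2 = np.linalg.slogdet(rho + np.eye(L))       # denominator
    return (s1 / s2) * np.exp(ld1 - ld2)

print(abs(Z_N(42, 1.0, 0.0, 20.0)))   # ~0: metallic at u = 0, L = 4m+2
print(abs(Z_N(42, 1.0, 1.0, 20.0)))   # close to 1: insulating
\end{verbatim}
Since the determinant is basis independent, diagonalizing $f$ and inserting the
eigenvalues $\xi_\mu = e^{-\beta\varepsilon_\mu}$ into Eq.~\eqref{eq:tb_ftcp}
gives the same result at $u=0$.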
\begin{figure}
\centering
\justify
\includegraphics[width=1\textwidth]{figures/cp/tight_binding_phase-eps-converted-to.pdf}
\caption{Phase diagram of the tight binding model ($L=42$) with the staggered potential $u$. The blue area corresponds to $Z_N > 0$ (insulator) and the white area
corresponds to $Z_N = 0$ (metal). The 2D plot is smoothed by Bessel
interpolation. Grid: $20$ points in the $x$-axis and $10$ points in the
$y$-axis.
}\label{fig:one_band_tb_phase}
\index{figures}
\end{figure}
We show the phase diagram of $Z_N$ for the tight binding model with respect
to the staggered potential $u$ and temperature $T$ in
Fig.~\ref{fig:one_band_tb_phase}. We observe a sharp boundary
between the metallic and insulating phases at $u\rightarrow 0_+$ and low
temperature; the boundary becomes blurred at larger $u$, where the transition
temperature is higher. This observation is consistent with the flatter curves
at higher temperature in Fig.~\ref{fig:zn_tb}. We further observe a linear
growth of the transition temperature $T_c$ with $u$ at larger $u$.
Since the transition temperature $T_c$ is directly
related to the gap of the system, we also plot the gap against $u$ in
Fig.~\ref{fig:tb_gap}. The linear dependence of $\Delta_{\text{gap}}$
on the staggered potential $u$ in the large-$u$ region is consistent with the
linear $T_c$-$u$ relationship in Fig.~\ref{fig:one_band_tb_phase}.
For $u$ smaller than $0.1t$, we observe a rather slow growth of
$\Delta_{\text{gap}}$ with $u$, which agrees with the metallic phase at
$u\approx 0$ followed by a sudden appearance of the insulating phase with a
nearly vertical wall.
\begin{figure}
\centering
\justify
\includegraphics[width=1\textwidth]{figures/cp/gap_u-eps-converted-to.pdf}
\caption{Energy gap of the half-filled staggered tight binding model
against the staggered potential $u$. Inset: energy gap for $u \in [0t,0.1t]$.}\label{fig:tb_gap}
\index{figures}
\end{figure}
\section{Hydrogen chain}\label{sec:cphydrogen}
A linear chain of equispaced hydrogen atoms~\citep{Hachmann2006,AlSaidi2007,
Sinitskiy2010,Stella2011,Motta2017,Motta2020} is the simplest \textit{ab initio}
periodic system one can find. Despite the simplicity of its structure,
the phase diagram of the hydrogen chain involves several intertwined phenomena:
the metal-insulator transition (MIT), the paramagnetic-antiferromagnetic (PM-AFM)
transition, and dimerization~\citep{Hubbard1963}. The hydrogen chain has a
structure similar to the one-dimensional
Hubbard model, which has been studied for decades. Compared to the Hubbard
model, where the electron-electron
interactions are short-ranged, the Coulomb interaction in the hydrogen
chain is long-ranged. Moreover, calculations beyond the minimal basis set
(STO-6G) introduce a multi-band effect into the hydrogen chain, which
is absent in one-band model systems.
In the following, we compute the complex polarization at both ground state
and finite temperature for the hydrogen chain system with atoms equally
spaced along the $z$-direction. The H-H bond length $R$ is introduced as
the parameter and adjusted to show different phases. The Hamiltonian of
this problem is
\begin{equation}
\hat{H} = -\frac{1}{2}\sum_{\mu = 1}^N\nabla^2_{\mu} +
\sum_{\mu < \nu}^N \frac{1}{|\mathbf{r}_{\mu} - \mathbf{r}_{\nu}|}
- \sum_{\mu, a}^{N} \frac{1}{|\mathbf{r}_{\mu} - \mathbf{R}_{a}|}
+ \sum_{a < b}^N \frac{1}{|\mathbf{R}_{a} - \mathbf{R}_{b}|},
\end{equation}
where $(\mathbf{r}_1, ..., \mathbf{r}_N)$ are the electron positions in the
Cartesian coordinates, $\mathbf{R}_{a} = aR\hat{\mathbf{e}}_z$ is the
position of the $a$th atom on the $z$-axis. In this work, energies
and temperatures ($k_B=1$) are measured
in Hartree ($me^4/\hbar^2$) and lengths in the Bohr radius $a_B = \hbar^2/(me^2)$.
In one supercell, $30$ hydrogen atoms are included and only the $\Gamma$ point
in the reciprocal space is taken into account. The basis set is 6-31G,
which includes the 1$s$ and 2$s$ orbitals for each atom. We evaluate the
complex polarization $Z_N$, the staggered magnetic moment $m$, the electron
population on the 2$s$ orbital, and the HOMO-LUMO gap of
the above hydrogen chain system
at the ground state and at $T = 0.01, 0.02, 0.03$, and $0.04$ Hartree.
We present the results from unrestricted Hartree-Fock (UHF)
and DFT (GGA/PBE and B3LYP) calculations in Fig.~\ref{fig:hchain_uhf_cp},
Fig.~\ref{fig:hchain_pbe_cp}, and Fig.~\ref{fig:hchain_b3lyp_cp}.
All calculations are performed within the framework of the quantum chemistry
package \texttt{PySCF}~\citep{PYSCF2017,PYSCF2020}.
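As a self-contained illustration of how such a calculation can be set up, the
sketch below uses molecular (open-boundary) \texttt{PySCF} on a short chain as
a stand-in for the periodic 30-atom supercell used in the text; the chain
length, bond distance, and the grid-quadrature evaluation of the
$e^{i2\pi z/L}$ matrix elements are our own illustrative choices.
\begin{verbatim}
import numpy as np
from pyscf import gto, scf, dft

N, R = 10, 2.0                       # illustrative: 10 atoms, R in Bohr
L = N * R                            # effective cell length
mol = gto.M(atom=[('H', (0.0, 0.0, i * R)) for i in range(N)],
            basis='6-31g', unit='Bohr')
mf = scf.UHF(mol).run()

# Matrix elements of exp(i 2 pi z / L) in the AO basis by numerical
# quadrature on a DFT grid (the AOs are real).
grids = dft.gen_grid.Grids(mol)
grids.build()
ao = dft.numint.eval_ao(mol, grids.coords)           # (ngrid, nao)
phase = np.exp(1j * 2 * np.pi * grids.coords[:, 2] / L)
U = ao.T @ (phase[:, None] * grids.weights[:, None] * ao)

Z = 1.0 + 0.0j
for C, nocc in zip(mf.mo_coeff, mol.nelec):          # up and down spins
    C_occ = C[:, :nocc]
    Z *= np.linalg.det(C_occ.conj().T @ U @ C_occ)   # one det per spin
print(abs(Z))
\end{verbatim}
Note that $e^{i2\pi z/L}$ is only strictly meaningful under periodic boundary
conditions, so this open-chain sketch should be read as a template rather than
a reproduction of the figures.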
\begin{figure}[th!]
\centering
\justify
\includegraphics[width=1\textwidth]{figures/cp/hchain_uhf_631g_N30-eps-converted-to.pdf}
\caption{Complex polarization, magnetic moment, population on the 2$s$ orbital,
and energy gap of the hydrogen chain from the unrestricted Hartree-Fock method.
Note that the complex polarization at $T=0.01$ is not presented here due to
numerical overflow.
}
\label{fig:hchain_uhf_cp}
\index{figures}
\end{figure}
\begin{figure}[th!]
\centering
\justify
\includegraphics[width=1\textwidth]{figures/cp/hchain_pbe_631g_N30-eps-converted-to.pdf}
\caption{Complex polarization, magnetic moment, population on 2s orbital,
and energy gap of hydrogen chain from DFT with PBE functional.}
\label{fig:hchain_pbe_cp}
\end{figure}
\begin{figure}[th!]
\centering
\justify
\includegraphics[width=1\textwidth]{figures/cp/hchain_b3lyp_631g_N30-eps-converted-to.pdf}
\caption{Complex polarization, magnetic moment, population on 2s orbital,
and energy gap of hydrogen chain from DFT with B3LYP functional.}
\label{fig:hchain_b3lyp_cp}
\index{figures}
\end{figure}
\begin{table}[hbt!]
\centering
\begin{tabular}{l|ccc}
\hline
$T$/Hartree & Hartree-Fock & PBE & B3LYP\\
\hline
Ground state & $\sim$1.0 & 2.6 & 2.2\\
0.01& 2.8 & 2.8 &2.8 \\
0.02 & 3.4 & 3.4 & 3.4\\
0.03 & 4.0 & 4.0 & 4.0\\
\hline
\end{tabular}
\caption{PM-AFM transition bond length $R$ (in Bohr) at ground state and low temperature.}\label{tab:mag_trans}
\index{tables}
\end{table}
\begin{table}[hbt!]
\centering
\begin{tabular}{l|ccc}
\hline
$T$/Hartree & Hartree-Fock & PBE & B3LYP\\
\hline
Ground state & $\sim$1.0 & 2.6 & 2.2\\
0.01& - & 2.8 &2.8 \\
0.02 & 3.4 & 3.6 & 3.4\\
0.03 & 4.0 & - & 4.0\\
\hline
\end{tabular}
\caption{Metal-insulator transition bond length $R$ (in Bohr) at ground state and low temperature.}\label{tab:cp_trans}
\index{tables}
\end{table}
All three methods predict a metal-insulator transition and a
PM-AFM transition at the ground state and low temperature. The transition
values of $R$ predicted by the above methods are summarized in
Table~\ref{tab:mag_trans} and Table~\ref{tab:cp_trans}. The two
transitions occur nearly simultaneously, which supports
the hypothesis that the insulator in the large-$R$ regime is an
antiferromagnetic (AFM) insulator. As the metal-insulator transition takes
place with increasing $R$, the population of the 2$s$ orbital drops suddenly,
which indicates that the metallic phase in the small-$R$ regime originates
from the crossover between the 1$s$ and 2$s$ bands. Although all three
methods predict the transitions, the behaviors of the order parameters
against $R$ differ considerably between the UHF and PBE calculations. UHF
predicts a much smaller transition $R$ at the ground state ($\sim 1$ Bohr),
while PBE predicts $R_c \sim 2.6$ Bohr. The finite temperature predictions
of $R_c$ for the PM-AFM transition from the two methods are similar, but the
transition behaviors differ: in UHF, the finite temperature curves jump
suddenly from zero onto the ground-state curve, whereas in PBE the finite
temperature curves grow from zero at $R_c$ and reach a peak whose height
decreases with temperature. Moreover, at $T=0.03$ and $R > 4.0$, PBE predicts an
AFM metal ($m > 0$ and $Z_N = 0$). This observation confirms that the
metallic phase is mainly caused by the crossover of the 1$s$ and 2$s$ bands,
and that the existence of AFM order does not necessarily guarantee an insulating
phase. The AFM metal phase is, however, not observed with the other two methods.
The B3LYP results in Fig.~\ref{fig:hchain_b3lyp_cp} are closer to those
from UHF, except that the peaks of $m$ and $Z_N$ drop as the temperature
increases.
\section{Conclusion}\label{sec:cpconc}
In this chapter, we presented the finite temperature formulation of the complex
polarization within thermofield theory. The complex polarization
has a direct relationship with electron localization at the ground state:
when the complex polarization is zero, the electrons are delocalized
and the system is metallic; when the complex polarization is
nonzero, the electrons are localized and the system is insulating.
At finite temperature, the complex polarization can likewise be used as
an indicator of the metal-insulator transition. We applied the thermofield
implementation of the complex polarization to the tight binding model
with a staggered potential $u$, where, as $u$ increases, the electrons
tend to sit on the sites with lower potential and thus become localized.
We observed an increase of the complex polarization with $u$ at both the
ground state and finite temperature. Moreover, we found that the
transition temperature predicted by the complex polarization depends linearly
on the staggered potential $u$ in the intermediate-to-large $u$ regime,
consistent with the linear dependence of the energy gap on $u$
in the same regime. The energy gap,
electron localization, and complex polarization therefore provide consistent
predictions of the metal-insulator transition. We further
studied the metal-insulator and paramagnetic-antiferromagnetic (PM-AFM)
transitions in the hydrogen chain system by computing the complex polarization
and the magnetic moment against temperature and H-H bond length.
Together with the 2$s$ orbital population and the energy gap,
we confirmed that the origin of the metallic phase is the crossover
of the $1s$ and $2s$ (or higher, e.g., $2p$) bands. The antiferromagnetic (AFM)
phase is usually accompanied by the insulating phase, but at finite
temperature PBE predicts an AFM metallic phase, which
indicates that the disappearance of the insulating phase is not necessarily
due to the loss of the AFM order. With the complex polarization proving to be
a good indicator of the metal-insulator transition at both the ground
state and finite temperature, further applications are anticipated to
bring more insight into this intriguing phenomenon.
\chapter{Quantum imaginary time evolution and quantum thermal simulation\label{chp:qite}}
\newcommand{\oper}[1]{\hat{#1}}
\newcommand{\choi}[2]{ \ket{#1} \bra{#2} }
\newcommand{\adj}[1]{ {#1}^\dagger }
\newcommand{\vett}[1]{ {\bf{#1} }}
\newtheorem{theorem}{Theorem}
\section{Abstract}
An efficient way to compute Hamiltonian ground-states on a quantum computer stands to impact
many problems in the physical and computer sciences, ranging from quantum simulation to machine learning. Unfortunately, existing techniques, such
as phase estimation and variational algorithms, display potential disadvantages, such as
requirements for deep circuits with ancillae and high-dimensional optimization. We describe the quantum imaginary time evolution and quantum Lanczos algorithms,
analogs of classical algorithms for ground (and excited) states, but with exponentially reduced space and time requirements per iteration,
and avoiding deep circuits with ancillae and high-dimensional optimization. We further discuss
quantum imaginary time evolution as a natural subroutine to generate Gibbs averages through an analog of minimally entangled typical thermal
states. We implement these algorithms with exact classical emulation as well as in prototype circuits on
the Rigetti quantum virtual machine and Aspen-1 quantum processing unit,
demonstrating the power of quantum elevations of classical algorithms.
\section{Introduction}
An important application for a quantum computer is to compute the ground-state $\Psi$ of a Hamiltonian $\oper{H}$
\cite{Feynman_IJTP_1982,Abrams_PRL_1997,Abrams_PRL_1999}.
This arises in simulations, for example, of the electronic structure of molecules and materials
\cite{Lloyd_Science_1996,Aspuru_Science_2005},
as well as in
optimization when the cost function is encoded in a Hamiltonian.
While efficient ground-state determination
cannot be guaranteed for all Hamiltonians, as this is a QMA-complete
problem
\cite{Kempe_SIAM_2004}, several heuristic quantum algorithms
have been proposed, such as adiabatic state preparation with quantum phase estimation (QPE)
\cite{Farhi_MIT_2000,Kitaev_arxiv_1995} and quantum-classical variational algorithms,
including the quantum approximate optimization algorithm (QAOA)
\cite{Farhi_MIT_2014,Otterbach_arxiv_2017,Moll_QST_2018} and variational quantum eigensolver (VQE)
\cite{Peruzzo_Nature_2013,McClean_NJP_2016,grimsley2018adapt}.
While there have been many advances with these algorithms, they also have potential disadvantages,
especially in the context of near-term quantum computing architectures and limited quantum resources.
For example, phase estimation produces a nearly exact eigenstate, but appears impractical without error correction, while
variational algorithms, although somewhat robust to coherent errors, are limited in accuracy for a fixed variational form,
and involve a high-dimensional noisy classical optimization~\cite{mcclean2018barren}.
In classical simulations, different strategies are employed to numerically determine exact ground-states of Hamiltonians.
One popular approach is imaginary-time evolution,
which expresses the ground-state as the long-time limit of the imaginary-time Schr\"odinger equation
$- \partial_\beta |\Phi(\beta)\rangle
= \oper{H} |\Phi(\beta)\rangle$, $|\Psi\rangle
= \lim_{\beta \to \infty} \frac{|\Phi(\beta)\rangle}{ \| \Phi(\beta) \|}$ (for $\langle \Phi(0) | \Psi \rangle \neq 0$). Unlike variational algorithms with a fixed ansatz, imaginary-time evolution always converges to the ground-state
(as distinguished from imaginary-time ansatz optimization, which can be trapped in local minima
\cite{McArdle_arxiv_2018}).
Another common exact algorithm is the iterative Lanczos algorithm
\cite{Lanczos_somewhere_1950,Arnoldi_somewhere_1951}
and its variations.
The Lanczos iteration constructs the Hamiltonian matrix $\mathbf{H}$ in a successively enlarged Krylov subspace
$\{ |\Phi\rangle, \oper{H} |\Phi\rangle, \oper{H}^2|\Phi\rangle \ldots \}$; diagonalizing $\mathbf{H}$
yields a variational estimate of the ground-state which tends to $|\Psi\rangle$ for a large number
of iterations. For a Hamiltonian on $N$ qubits, the
classical complexity of imaginary time evolution and the Lanczos algorithm
scales as $\sim 2^{\mathcal{O}(N)}$ in space as well as time.
The exponential space comes from storing $\Phi(\beta)$ or the Lanczos vector, while exponential time
comes from the cost of Hamiltonian multiplication $\oper{H} |\Phi\rangle$, as well as, in principle,
though not in practice, the $N$-dependence of the number of propagation steps and propagation time, or number of Lanczos iterations.
Thus it is natural to consider quantum versions of these algorithms that can overcome the exponential bottlenecks.
In this work, we will describe the quantum imaginary time evolution (QITE) and the quantum Lanczos (QLanczos) algorithms
to determine ground-states (as well as excited states in the case of QLanczos) on a quantum computer.
Compared to their classical counterparts, these achieve an exponential reduction in space for a fixed number of propagation
steps or number of iterations, and for a given iteration or time-step offer an exponential reduction in time.
They also offer advantages over existing ground-state quantum algorithms; compared to quantum phase estimation, they do not require
deep circuits,
and compared to variational ground-state algorithms with a fixed ansatz, they are guaranteed to converge to the ground-state, avoiding non-linear
optimization. A crucial component of our algorithms is the efficient implementation of the non-Hermitian operation of an imaginary time step propagation
$e^{-{\Delta \tau} \oper{H} }$ (for small ${\Delta \tau}$), assuming a finite correlation length in the state.
Non-Hermitian operations are not natural on a quantum computer and are usually achieved using ancillae and postselection.
We will describe how to implement imaginary time evolution on a given state, without ancillae or postselection.
The lack of ancillae and complex circuits make QITE and QLanczos potentially
suitable for near-term quantum architectures.
Using the QITE algorithm, we further show how we can sample from thermal (Gibbs) states, also without deep circuits or ancillae
as is usually the case, via a quantum analog of the minimally entangled typical thermal states (QMETTS) algorithm~\cite{White_PRL_2009,Miles_NJP_2010}. We demonstrate the algorithms on spin and fermionic Hamiltonians (short- and long-range spin and Hubbard models, MAXCUT optimization, and dihydrogen
minimal molecular model) using exact classical emulation, and demonstrate proof-of-concept implementations on the
Rigetti quantum virtual machine (QVM) and Aspen-1 quantum processing units (QPUs).
\section{Quantum imaginary-time evolution}
Define a geometric $k$-local Hamiltonian $\oper{H} = \sum_m \oper{h}_m$ (where each term $\oper{h}_m$ acts on at most $k$ neighbouring qubits on an underlying graph)
and a Trotter decomposition of the corresponding imaginary-time evolution,
\begin{align}
e^{-\beta \oper{H}} = (e^{-{\Delta \tau} \oper{h}_1} e^{-{\Delta \tau} \oper{h}_2} \ldots)^n + \mathcal{O}\left( {{\Delta \tau}} \right); \ n= \frac{\beta}{{\Delta \tau}},
\end{align}
applied to a state $|\Psi\rangle$. After a single Trotter step, we have
\begin{align}
|\Psi^\prime \rangle = e^{-{\Delta \tau} \oper{h}_m} |\Psi\rangle.
\end{align}
The basic idea is that the normalized state $|\bar{\Psi}^\prime \rangle = |\Psi^\prime \rangle / \| \Psi^\prime \|$ can be
generated from $|\Psi\rangle$ by a unitary operator $e^{-i {\Delta \tau} \oper{A}[m]}$ (which also depends on imaginary-time step) acting in the neighbourhood
of the qubits acted on by $\oper{h}_m$,
where the Hermitian operator $\oper{A}[m]$ can be determined from tomography of $|\Psi\rangle$ in this neighbourhood up to controllable errors.
This is illustrated by the simple example where $|\Psi\rangle$ is a product state. Then, the squared norm
$c = \| \Psi^\prime \|^2$ can be calculated from the expectation value of $\oper{h}_m $, which requires measurements
over $k$ qubits,
\begin{align}
c = \langle \Psi | e^{-2{\Delta \tau} \oper{h}_m} | \Psi\rangle = 1 - 2{\Delta \tau} \langle \Psi| \oper{h}_ m |\Psi\rangle + \mathcal{O}({\Delta \tau}^2).
\end{align}
Because $|\Psi \rangle$ is a product state, $|\Psi^\prime \rangle$ is obtained by acting with the unitary operator
$e^{-i{\Delta \tau} \oper{A}[m]}$ on the same $k$ qubits.
$\oper{A}[m]$ can be expanded in terms of an operator basis, such as the Pauli basis
$\{ \sigma_i\}$ on $k$ qubits,
\begin{align}
\oper{A}[m] = \sum_{i_1i_2 \ldots i_k} a[m]_{i_1i_2 \ldots i_k} \sigma_{i_1}\sigma_{i_2} \ldots \sigma_{i_k},
\label{eq:aoperator}
\end{align}
where $I$ denotes the multi-index $i_1i_2 \ldots i_k$. Then, up to $\mathcal{O}({\Delta \tau})$, the vector of coefficients $a[m]_{i_1i_2 \ldots i_k}$ can be determined from the linear
system
\begin{equation}\label{eq:lineareq1}
\mathbf{S} \mathbf{a}[m] = \mathbf{b},
\end{equation}
where the elements of $\mathbf{S}$ and $\mathbf{b}$ are expectation values over $k$ qubits of $\Psi$, namely
\begin{align}
S_{i_1i_2 \ldots i_k, i_1'i_2' \ldots i_k'} &= \langle \Psi|\sigma_{i_1}^\dag\sigma_{i_2}^\dag \ldots \sigma_{i_k}^\dag \sigma_{i_1'}\sigma_{i_2'} \ldots \sigma_{i_k'}|\Psi\rangle \notag \\
b_{i_1i_2 \ldots i_k} &= -i \, c^{-\frac{1}{2}} \, \langle \Psi|\sigma_{i_1}^\dag\sigma_{i_2}^\dag \ldots \sigma_{i_k}^\dag \oper{h}_m |\Psi\rangle.
\end{align}
In general, $\mathbf{S}$ will have a null space; to ensure $\mathbf{a}[m]$ is real, we minimize
$\| c^{-1/2}\Psi^\prime -(1-i{\Delta \tau} \oper{A}[m]) \Psi\|$ w.r.t. real variations in $\mathbf{a}[m]$. Note that the solution
is determined from a linear problem, thus there are no local minima.
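As a concrete illustration of this step, the following minimal NumPy sketch (our own illustration, not part of the original algorithm specification) solves the linear problem in the real least-squares sense, given measured values of $\mathbf{S}$ and $\mathbf{b}$:
\begin{verbatim}
import numpy as np

def qite_step_coefficients(S, b):
    """Solve the QITE linear system S a = b in the real least-squares sense.

    S : (n_ops, n_ops) array of measured Pauli-string overlaps
    b : (n_ops,) array of measured right-hand-side expectation values
    """
    # a[m] is required to be real, so we minimize over real variations,
    # which symmetrizes the overlap matrix.
    S_sym = np.real(S + S.T)
    b_re = np.real(b)
    # S generally has a null space; lstsq returns a minimum-norm solution.
    a, *_ = np.linalg.lstsq(S_sym, b_re, rcond=None)
    return a
\end{verbatim}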
In this simple case, the normalized result of the imaginary time evolution step could be represented by a
unitary over $k$ qubits, because $|\Psi\rangle$ had a zero correlation length. After the initial step, this is no longer the case.
However, for a more general $|\Psi\rangle$ with finite correlation length extending over $C$ qubits (meaning
that the correlations between two observables separated by distance $l$ are bounded by $\exp(- l /C )$),
$|\Psi^\prime \rangle$
can be generated by a unitary acting on a domain of width $D:= \log(1/\delta) C$ qubits surrounding
the qubits acted on by $\oper{h}_m$ (this follows from Uhlmann's theorem~\cite{uhlmann}; see Appendix for a proof), with $\delta$ the
approximation error for that time step.
The unitary $e^{-i {\Delta \tau} \oper{A}[m]}$ can then be determined by measurements and solving the least squares problem over $D$ qubits. For example, for a nearest-neighbor local Hamiltonian on a $d$-dimensional square lattice, the number of qubits $D$ on which the unitary acts is bounded by $(2\log(1/\delta) C)^d$.
Because correlations are induced only by the gates applied at previous time steps, the correlation length increases at most with a velocity bounded by
a constant $\alpha_v$ which depends on the geometry of the lattice and the locality of interactions. Consequently, each successive imaginary time step can be simulated by a unitary over an increasingly large neighborhood whose size propagates with velocity bounded by $\alpha_v$ (Fig. 1).
The number of measurements and classical storage at an imaginary time $\beta$ (starting the propagation from a product state) is bounded by $\exp(O((\alpha_v \beta)^d))$ for each unitary update, since each unitary at that level acts on at most $(2 \alpha_{v} \beta)^d$ sites; classical solution of the least squares equation has the same scaling $\exp(O((\alpha_v \beta)^d))$, as does the synthesis and application of the unitary $e^{-i {\Delta \tau} \oper{A}[m]}$. Thus, space and time requirements are bounded by exponentials of $\beta^d$, but are polynomial in $N$ (the polynomial in $N$ comes from the number of terms in $H$ and from the control of the Trotter error).
\begin{figure}[t!]
\centering
\includegraphics[width=1\textwidth]{figures/qite/fig1-eps-converted-to.pdf}
\caption{Quantum imaginary time evolution algorithm and correlation length.
(a) Schematic of the QITE algorithm.
Top: imaginary-time evolution under a geometric $k$-local operator $\hat{h}[m]$
can be reproduced by a unitary operation acting on a group of $D>k$ qubits.
Bottom:
exact imaginary-time evolution starting from a product state requires
unitaries acting on a domain $D$ that grows with $\beta$.
(b,c) Left: mutual information $I(i,j)$ between qubits $i$, $j$ as a function of
distance $d(i,j)$ and imaginary time $\beta$, for a 1D (b) and a 2D (c)
FM transverse-field Ising model, with $h=1.25$ (1D) and $h=3.5$ (2D). The mutual information
is seen to saturate at longer times.
Right: relative error in the energy $\Delta E$ and fidelity
$F= |\langle \Phi(\beta) | \Psi \rangle |^2$ between
the finite-time state $\Phi(\beta)$ and infinite-time state $\Psi$ as a function
of imaginary time. The noise in the 2D fidelity error at large $\beta$ arises from
the approximate nature of the algorithm used. }
\label{fig:1}
\end{figure}
\noindent \emph{Saturation of correlations}.
Note that the correlation volume cannot be larger than $N$. In many physical systems, we expect the correlation volume to
increase with $\beta$ and saturate for $C^d\ll N$ \cite{Hastings_CMP_2005}. As an example, in Fig.~\ref{fig:1} we plot the mutual information between qubits $i$ and $j$ for
the 1D and 2D FM transverse field Ising models computed by tensor network simulation
which shows a monotonic increase and clear saturation. If saturation occurs before the ground-state is attained, the
cost of the algorithm for subsequent time-steps becomes linear in $\beta$,
and exponential in $C^d$. \\
\noindent \emph{Comparison to classical algorithm}. Unlike classical imaginary time evolution, the cost of QITE
is bounded by an exponential in $\beta$, rather than an exponential in $N$. Thus for fixed $\beta$ (and
the same number of Trotter steps),
we achieve an exponential reduction, as a function of $N$, in the space and time costs compared to the classical algorithm. \\
\noindent \emph{Comparison to tensor networks}. If $|\Psi\rangle$ is represented by a tensor network in a classical simulation,
then $e^{-{\Delta \tau} \oper{h}[m]}|\Psi\rangle$ can be obtained directly as a classical tensor network with an increased bond dimension
\cite{VidalTEBD,Schollwock_Annals_2011}.
This bond dimension
increases exponentially with imaginary time $\beta$, thus the storage of the tensors, as well as the
cost of applying the imaginary time step $e^{-{\Delta \tau} \oper{h}[m]}$ to the tensors grows exponentially with $\beta$, similar
to the quantum algorithm. The key distinction is that, other than in one dimension, we cannot guarantee that contracting the resulting
classical tensor network to evaluate observables is efficient; it is a \#P-hard problem
in the worst case in two dimensions (and even in the average case for Gaussian distributed tensors)~\cite{Schuch_PRL_2007,PEPShard2018};
no such problem exists in the quantum algorithm.\\
\noindent \emph{Fermionic Hamiltonians}. For fermions, a non-local mapping to spins (e.g. through the Jordan-Wigner transformation) would
violate the $k$-locality of the Hamiltonian. In principle, this can be bypassed by using a local mapping to spins~\cite{Verstraete_JSM_2005}.
Alternatively, we conjecture that by using
a fermionic unitary, where the Pauli basis in Eq.~\eqref{eq:aoperator} is replaced by the fermionic operator basis
$\{ 1, \oper{a}, \oper{a}^\dag, \oper{a}^\dag \oper{a} \}$,
the area of support for the fermionic unitary grows in the same fashion as the standard unitary for geometric $k$-local
Hamiltonians described above. This can be tested in numerical simulations.\\
\noindent \emph{Long-range Hamiltonians}. Consider a $k$-local Hamiltonian with long-range terms on a lattice, such as
a general pairwise Hamiltonian. Then the action of $e^{-{\Delta \tau} \oper{h}[m]}$, if $\oper{h}[m]$ acts
on qubits $i$ and $j$, can be emulated by a unitary constructed in the neighborhood of $i$ and $j$, over $(2C \log(1/\delta))^k$ sites.\\
\noindent \emph{Inexact time evolution}. Given limited resources, we can choose to measure and construct
the unitary over a reduced number of sites $D' < D(\beta)$. For example, if $D' = 1$, this gives a mean-field
approximation of the imaginary time evolution. While the unitary is no longer an exact
representation of the imaginary time evolution, there is no issue of a local minimum in its construction, although
the energy is no longer guaranteed to decrease in every time step. In this case, one might apply inexact imaginary time evolution
simply until the energy stops decreasing. Alternatively, with limited resources, one may apply the quantum Lanczos algorithm described below.\\
\noindent\emph{Stabilization}.
Sampling noise in the expectation values of the Pauli operators affects the solution of Eq.~\eqref{eq:lineareq1} and can lead to numerical instabilities. We regularize $\mathbf{S}+\mathbf{S}^T$ against such statistical errors by adding a small $\delta$ to its diagonal. To generate the data presented in Fig.~\ref{fig:4} and Fig.~\ref{fig:5} of the main text, we used $\delta=0.01$ for 1-qubit calculations and $\delta=0.1$ for 2-qubit calculations.
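In code, this amounts to a one-line change of the least-squares solve sketched in the main text (again our own sketch, with $\delta$ as quoted above):
\begin{verbatim}
import numpy as np

def regularized_qite_solve(S, b, delta=0.01):
    """Solve the QITE linear system with a diagonal shift for stability."""
    A = np.real(S + S.T) + delta * np.eye(len(b))  # damps the null space
    return np.linalg.solve(A, np.real(b))
\end{verbatim}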
\section{Quantum Lanczos algorithm}
Given the QITE subroutine, we now consider
how to formulate a quantum version of the Lanczos algorithm. A significant practical motivation is that the Lanczos algorithm
typically converges much more quickly than imaginary time evolution, and often in physical simulations only tens of iterations
are needed to converge to good precision. In addition, Lanczos provides a natural way to compute excited states.
In quantum Lanczos, we generate a set of wavefunctions for different imaginary-time projections of
an initial state $| \Psi_T \rangle$, using QITE as a subroutine. The normalized states are
\begin{equation}
|\Phi_l \rangle = \frac{ e^{- l \Delta \tau \hat{H} } | \Psi_T \rangle }{\| e^{- l \Delta \tau \hat{H} } \Psi_T \|}
\equiv n_l \, e^{- l \Delta \tau \hat{H} } | \Psi_T \rangle \quad 0 \leq l < L_\text{max} \quad ,
\end{equation}
where $n_l$ is the normalization constant.
For the exact imaginary-time evolution and $l$, $l^\prime$ both even (or odd) the matrix elements
\begin{equation}
S_{l,l^\prime} = \langle \Phi_l | \Phi_{l^\prime} \rangle
\quad,\quad
H_{l,l^\prime} = \langle \Phi_l | \hat{H} | \Phi_{l^\prime} \rangle
\end{equation}
can be computed in terms of expectation values (i.e. experimentally accessible quantities) only. Indeed, defining
$2r = l+l^\prime$, we have
\begin{equation}
S_{l,l^\prime} = n_l n_{l^\prime} \, \langle \Psi_T | e^{- l \Delta \tau \hat{H} } e^{- l^\prime \Delta \tau \hat{H} } | \Psi_T \rangle
= \frac{n_l n_{l^\prime}}{n_{r}^2} \quad ,
\end{equation}
and similarly
\begin{equation}
H_{l,l^\prime} = n_l n_{l^\prime} \, \langle \Psi_T | e^{- l \Delta \tau \hat{H} } \hat{H} e^{- l^\prime \Delta \tau \hat{H} } | \Psi_T \rangle
= \frac{n_l n_{l^\prime}}{n_{r}^2} \, \langle \Phi_r | \hat{H} | \Phi_r \rangle = S_{l,l^\prime} \, \langle \Phi_r | \hat{H} | \Phi_r \rangle \quad .
\end{equation}
The quantities $n_r$ can be evaluated recursively, since
\begin{equation}
\frac{1}{n^2_{r+1}} = \langle \Psi_T | e^{- (r+1) \Delta \tau \hat{H} } e^{- (r+1) \Delta \tau \hat{H} } | \Psi_T \rangle =
\frac{ \langle \Phi_r | e^{-2 \Delta \tau \hat{H} } | \Phi_r \rangle }{n_r^2} \quad.
\end{equation}
For inexact time evolution, the quantities $n_r$ and $\langle \Phi_r | \hat{H} | \Phi_r \rangle$ can still be used to
approximate $S_{l,l^\prime}$, $H_{l,l^\prime}$.
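To make the bookkeeping concrete, the following sketch (our own illustration, assuming the energies $\langle \Phi_r | \hat{H} | \Phi_r \rangle$ and the ratios $\langle \Phi_r | e^{-2\Delta\tau \hat{H}} | \Phi_r \rangle$ have been measured along the QITE trajectory) assembles $S_{l,l^\prime}$ and $H_{l,l^\prime}$ from the recursion above:
\begin{verbatim}
import numpy as np

def qlanczos_matrices(E, ratio, L_max):
    """Assemble the QLanczos S and H matrices from measured QITE data.

    E[r]     : <Phi_r|H|Phi_r> along the QITE trajectory
    ratio[r] : <Phi_r|exp(-2 dtau H)|Phi_r>, driving the recursion
               1/n_{r+1}^2 = ratio[r] / n_r^2
    Only even l, l' are combined, so that r = (l + l')/2 is an integer.
    """
    inv_n2 = np.empty(L_max)        # inv_n2[r] = 1 / n_r^2
    inv_n2[0] = 1.0                 # |Phi_0> = |Psi_T> is normalized
    for r in range(L_max - 1):
        inv_n2[r + 1] = ratio[r] * inv_n2[r]
    n = 1.0 / np.sqrt(inv_n2)       # normalization constants n_r

    idx = list(range(0, L_max, 2))  # even subsequence of states
    S = np.empty((len(idx), len(idx)))
    H = np.empty_like(S)
    for a, l in enumerate(idx):
        for b, lp in enumerate(idx):
            r = (l + lp) // 2
            S[a, b] = n[l] * n[lp] * inv_n2[r]  # = n_l n_l' / n_r^2
            H[a, b] = S[a, b] * E[r]
    return S, H
\end{verbatim}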
Given these matrices, we then solve the generalized eigenvalue equation $\mathbf{H}\mathbf{x} = E \mathbf{S}\mathbf{x}$ to find an approximation
$| \Phi' \rangle = \sum_l x_l | \Phi_l \rangle$ to the ground state of $\oper{H}$. This eigenvalue equation
can be numerically ill-conditioned, as $S$ can contain small and negative eigenvalues for several reasons: (i)
as $l$ increases the vectors $|\Phi_l \rangle$ become linearly dependent; (ii) simulations have finite
precision and noise; (iii) $S$ and $H$ are computed approximately when inexact time evolution is performed.
To regularize the problem, out of the set of time-evolved states we extract a well-behaved sequence as follows:
(i) start from $|\Phi_\text{last}\rangle = |\Phi_0\rangle$, (ii) add the next $|\Phi_l\rangle$ in the set
of time-evolved states s.t. $|\langle \Phi_l | \Phi_\text{last}\rangle| < s$, where $s$
is a regularization parameter $0<s<1$, (iii) repeat, setting $|\Phi_\text{last}\rangle=|\Phi_l\rangle$ (obtained from (ii)), until
the desired number of vectors is reached.
We then solve the generalized eigenvalue equation $\tilde{\mathbf{H}}\mathbf{x} = E \tilde{\mathbf{S}}\mathbf{x}$ spanned by this regularized sequence,
removing any eigenvalues of $\tilde{\mathbf{S}}$ less than a threshold $\epsilon$.
The QLanczos calculations reported in Fig.~\ref{fig:2} (lower panel) of the main text were stabilized with this algorithm,
in both cases using stabilization parameter $s=0.95$ and $\epsilon = 10^{-14}$. The stabilization parameters used in the QLanczos calculations reported in Fig.~\ref{fig:4} are $s=0.75$ and $\epsilon = 10^{-2}$.
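A compact sketch of this stabilization procedure (our own, with $s$ and $\epsilon$ as above) is:
\begin{verbatim}
import numpy as np

def stabilized_qlanczos_energy(S, H, s=0.95, eps=1e-14):
    """Select a well-conditioned subsequence of states, then solve the
    generalized eigenproblem H x = E S x restricted to that subsequence."""
    keep = [0]                              # start from |Phi_0>
    for l in range(1, S.shape[0]):
        if abs(S[keep[-1], l]) < s:         # overlap below threshold s
            keep.append(l)
    St = S[np.ix_(keep, keep)]
    Ht = H[np.ix_(keep, keep)]
    # Discard eigen-directions of St below eps, then whiten.
    w, V = np.linalg.eigh(St)
    P = V[:, w > eps] / np.sqrt(w[w > eps])
    E = np.linalg.eigvalsh(P.T @ Ht @ P)
    return E[0]                             # ground-state estimate
\end{verbatim}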
We demonstrate the QLanczos algorithm using classical emulation on the 1D Heisenberg Hamiltonian,
as used for the QITE algorithm above in Fig. \ref{fig:2}.
Using exact QITE (large domains) to generate the matrix elements,
quantum Lanczos converges much more rapidly than imaginary time evolution. Using inexact QITE (small domains), QLanczos
still converges faster than QITE and also reaches a lower energy.
We also assess the feasibility of QLanczos in the presence of noise, using emulated noise on the Rigetti QVM as well as
on the Rigetti Aspen-1 QPUs.
In Fig. \ref{fig:4}, we see
that QLanczos also provides more rapid convergence than QITE, both in noisy classical emulation and on the physical device,
for the 1- and 2-qubit models.
\section{Quantum thermal averages}
The QITE subroutine can be used in a range of other algorithms. As one
example, we now discuss how to compute thermal averages of operators, i.e. $\mathrm{Tr}\big[ \oper{O} e^{-\beta \oper{H}} \big]
/ \mathrm{Tr} \big[ e^{-\beta \oper{H}} \big]$, using imaginary time evolution.
Several procedures have been proposed for quantum thermal averaging \cite{Terhal_PRA_2000}, ranging from generating the
finite-temperature state explicitly with the help of ancillae, to a quantum analog of Metropolis sampling \cite{Temme_Nature_2011}
that relies heavily on phase estimation. However, given a method for imaginary time evolution, one can generate thermal averages
of observables without any ancillae or deep circuits. This can be done by adapting to the quantum setting the classical minimally entangled typical thermal
state (METTS) algorithm \cite{White_PRL_2009,Miles_NJP_2010}, which generates a Markov chain from which the thermal average can be sampled.
Consider the thermal average of
an observable $\hat{O}$
\begin{equation}
\langle \hat{O}\rangle = \frac{1}{Z}\mathrm{Tr}[e^{-\beta \hat{H}}\hat{O}]
= \frac{1}{Z}\sum_{i}\langle i|e^{-\beta \hat{H}/2} \hat{O} e^{-\beta \hat{H}/2}|i\rangle
\end{equation}
where $\{|i\rangle\}$ is an orthonormal basis set, and $Z$ is the partition
function. Defining $|\phi_i\rangle = P_i^{-1/2}e^{-\beta \hat{H}/2}|i\rangle$,
we obtain
\begin{equation}\label{eq:thermal_sum}
\langle \hat{O}\rangle = \frac{1}{Z} \sum_i P_i \langle \phi_i|\hat{O}|\phi_i\rangle
\end{equation}
where $P_i = \langle i|e^{-\beta H}|i\rangle$. The summation in Eq.(\ref{eq:thermal_sum}) can be estimated by sampling
$|\phi_i\rangle$ with probability $P_i/Z$, and summing
the sampled $\langle \phi_i|\hat{O}|\phi_i\rangle$.
In standard Metropolis sampling for thermal states, one starts from $|\phi_i\rangle$ and obtains the next state
$|\phi_j\rangle$ by randomly proposing a move and accepting it based on an
acceptance probability. However, rejecting and resetting in the quantum analog of Metropolis~\cite{Temme_Nature_2011} is complicated to
implement on a quantum computer, requiring deep circuits.
The METTS algorithm provides an alternative way to sample
$|\phi_i\rangle$ distributed with probability $P_i/Z$ without this complicated procedure.
The algorithm is as follows
\begin{enumerate}
\item Choose a classical product state (PS) $|i\rangle$.
\item Compute $|\phi_i\rangle = P_i^{-1/2}e^{-\beta H/2}|i\rangle$ and
calculate observables of interest.
\item Collapse $|\phi_i\rangle$ to a new PS $|i'\rangle$ with probability
$p(i\rightarrow i') = |\langle i'|\phi_i\rangle|^2$ and repeat Step 2.
\end{enumerate}
In the above algorithm, $|\phi_i\rangle$ is called a minimally entangled typical
thermal state (METTS).
One can easily show that the set of METTS sampled following the above
procedure has the correct Gibbs distribution~\cite{Stoudenmire2010}.
Generally, $\{|i\rangle\}$ can be any orthonormal basis.
For convenience when implementing METTS on a quantum computer,
$\{|i\rangle\}$ are chosen to be product states.
On a quantum emulator or a quantum computer, the METTS algorithm is carried out as follows:
\begin{enumerate}
\item Prepare a product state $|i\rangle$.
\item Imaginary time evolve $|i\rangle$ with the QITE algorithm to
$|\phi_i\rangle = P_i^{-1/2}e^{-\beta H/2}|i\rangle$, and measure
the desired observables.
\item Collapse $|\phi_i\rangle$ to another product state by measurement.
\end{enumerate}
In practice, to avoid long statistical correlations between samples, we used
the strategy of collapsing METTS onto alternating basis sets~\cite{Stoudenmire2010}.
For instance, for the odd METTS steps, $|\phi_i\rangle$ is collapsed
onto the $X$-basis (assuming a $Z$ computational basis, tensor products of $|+\rangle$ and $|-\rangle$), and for
the even METTS steps, $|\phi_i\rangle$ is collapsed onto the $Z$-basis
(tensor products of $|0\rangle$ and $|1\rangle$). The statistical error is then estimated by block analysis~\cite{Flyvbjerg1989}.
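Schematically, the QMETTS loop then reads as follows (a sketch in which the three helper routines are hypothetical placeholders for the device-level QITE propagation, energy estimation, and projective collapse):
\begin{verbatim}
import numpy as np

def qmetts_chain(qite_evolve, measure_energy, collapse, i0, n_samples):
    """Sketch of the QMETTS Markov chain; the helpers are placeholders.

    qite_evolve(i)     : prepare |i>, apply e^{-beta H/2} via QITE
    measure_energy(phi): estimate <phi|H|phi>
    collapse(phi, b)   : measure |phi> in product basis b ('X' or 'Z')
                         and return the new product-state label
    """
    energies, i = [], i0
    for step in range(n_samples):
        phi = qite_evolve(i)
        energies.append(measure_energy(phi))
        # Alternate collapse bases to reduce autocorrelation in the chain.
        i = collapse(phi, 'X' if step % 2 == 0 else 'Z')
    e = np.asarray(energies)
    return e.mean(), e.std(ddof=1) / np.sqrt(len(e))
\end{verbatim}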
In Fig. \ref{fig:5}a we show the results of quantum METTS (using exact classical emulation) for the thermal average $\langle \oper{H}\rangle$
as a function of inverse temperature $\beta$,
for the 6-site 1D AFM transverse-field Ising model for several temperatures and domain sizes; sufficiently
large $D$ converges to the exact thermal average at each $\beta$; error bars reflect only the finite number of samples in QMETTS.
We also show an implementation of quantum METTS
on the Aspen-1 QPU and QVM with a 1-qubit field model (Fig.~\ref{fig:5}b),
and using the QVM for a 2-qubit AFM transverse field Ising model (Fig.~\ref{fig:5}d);
while the noise introduces additional error
including a systematic shift (Fig.~\ref{fig:5}c), the correct behaviour of the thermal average with temperature is reproduced on the
emulated and actual quantum device.
\section{Results}
To illustrate the QITE algorithm, we have carried out exact classical emulations (assuming perfect
expectation values and perfect gates) for several Hamiltonians: short-range 1D Heisenberg;
1D AFM transverse-field Ising; long-range 1D Heisenberg with spin-spin coupling
$J_{ij} = (|i-j|+1)^{-1}$, $i\neq j$; 1D Hubbard at half-filling (mapped by Jordan-Wigner transformation to a spin model); a 6-qubit MAXCUT
\cite{Farhi_MIT_2014,Otterbach_arxiv_2017,Moll_QST_2018} instance, and a minimal basis 2-qubit dihydrogen molecular Hamiltonian~\cite{OMalley2015}. We
describe the models below.
\noindent\textbf{1D Heisenberg and transverse field Ising model}.
The 1D short-range Heisenberg Hamiltonian is defined as
\begin{align}
\hat{H} =\sum_{\langle ij\rangle} \hat{\mathbf{S}}_i \cdot \hat{\mathbf{S}}_j \quad,
\end{align}
the 1D long-range Heisenberg Hamiltonian as
\begin{align}
\hat{H} =\sum_{i \neq j} \frac{1}{|i-j|+1} \, \hat{\mathbf{S}}_i \cdot \hat{\mathbf{S}}_j \quad,
\end{align}
and the AFM transverse-field Ising Hamiltonian as
\begin{align}
\hat{H} = \sum_{\langle ij\rangle} \hat{{S}}^z_i \hat{{S}}^z_j + \sum_i h \hat{S}^x_i \quad .
\end{align}
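For the small instances considered here, these spin Hamiltonians can be assembled explicitly as dense matrices for exact classical emulation. A minimal NumPy sketch (our own, using $\hat{S}^\alpha = \sigma^\alpha/2$ and open boundaries) is:
\begin{verbatim}
import numpy as np
from functools import reduce

X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
I2 = np.eye(2)

def op_at(op, site, n):
    """Embed a single-qubit operator at `site` of an n-qubit register."""
    return reduce(np.kron, [op if k == site else I2 for k in range(n)])

def afm_tfi(n, h):
    """AFM transverse-field Ising Hamiltonian, with S^a = sigma^a / 2."""
    H = sum(0.25 * op_at(Z, i, n) @ op_at(Z, i + 1, n)
            for i in range(n - 1))
    H = H + sum(0.5 * h * op_at(X, i, n) for i in range(n))
    return H
\end{verbatim}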
\noindent\textbf{1D Hubbard model}.
The 1D Hubbard Hamiltonian is defined as
\begin{align}
\hat{H} = - \sum_{\langle ij \rangle \sigma} a^\dag_{i\sigma} a_{j\sigma} + U \sum_i \hat{n}_{i\uparrow} \hat{n}_{i\downarrow}
\end{align}
where $\hat{n}_{i \sigma} = a^\dag_{i\sigma} a_{i\sigma}$, $\sigma \in \{ \uparrow,\downarrow\}$, and $\langle \cdot \rangle$
denotes summation over nearest-neighbors, here with open-boundary conditions. We label the $n$
lattice sites with an index $i=0 \dots n-1$, and the $2n$ spin-orbital basis functions as $|\varphi_0 \rangle = |0 \uparrow \rangle$,
$|\varphi_1 \rangle = |0 \downarrow \rangle$, $|\varphi_2 \rangle = |1 \uparrow \rangle$,
$|\varphi_3 \rangle = |1 \downarrow \rangle$, $\dots$.
Under Jordan-Wigner transformation, recalling that
\begin{align}
\hat{n}_{p} = \frac{1-Z_p}{2} \quad,\quad
\hat{a}^\dag_p \hat{a}_q + \hat{a}^\dag_q \hat{a}_p = \frac{X_p X_q \prod_{k=q+1}^{p-1} Z_k \left( 1- Z_p Z_q \right)}{2} \quad,
\end{align}
with $p=0 \dots 2n-2$ and $q<p$, the Hamiltonian takes the form
\begin{align}
\hat{H} = - \sum_p \frac{X_{p} X_{p+2} Z_{p+1} \left( 1- Z_{p} Z_{p+2} \right)}{2}
+ U \sum_{i} \frac{(1-Z_{2i}) (1-Z_{2i+1})}{4} + \mu \sum_p \frac{(1-Z_p)}{2} \quad ,
\end{align}
where the last term is a chemical-potential contribution that fixes the filling.
\noindent\textbf{H$_2$ molecule minimal basis model}.
We use the hydrogen molecule minimal basis model at the STO-6G level of theory. This is a common minimal model of hydrogen chains
\cite{hachmann2006multireference,Motta_PRX_2017} and has previously been studied in quantum simulations, for example
in~\cite{OMalley2015}. Given a molecular
geometry (H-H distance $R$) we perform a restricted Hartree-Fock calculation and express the second-quantized Hamiltonian
in the orthonormal basis of RHF molecular orbitals as~\cite{szaboostlund}
\begin{equation}
\label{eq:H2}
\hat{H} = H_0 + \sum_{pq} h_{pq} \hat{a}^\dag_p \hat{a}_q + \frac{1}{2} \sum_{prqs} v_{prqs}
\hat{a}^\dag_p \hat{a}^\dag_q \hat{a}_s \hat{a}_r
\end{equation}
where $a^\dag$, $a$ are fermionic creation and annihilation operators for the molecular orbitals.
The Hamiltonian \eqref{eq:H2} is then encoded by a Bravyi-Kitaev transformation into the 2-qubit operator
\begin{equation}
\hat{H} = g_0 I \otimes I + g_1 Z \otimes I + g_2 I \otimes Z + g_3 Z \otimes Z + g_4 X \otimes X + g_5 Y \otimes Y \quad,
\end{equation}
with coefficients $g_i$ given in Table I of \cite{OMalley2015}.
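Given the tabulated coefficients, the corresponding $4\times 4$ matrix can be assembled directly for exact emulation; a short sketch follows (the numerical values of $g_i$ at each bond length must be taken from Table I of Ref.~\cite{OMalley2015}):
\begin{verbatim}
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1., -1.]).astype(complex)
I2 = np.eye(2, dtype=complex)

def h2_hamiltonian(g):
    """2-qubit H2 Hamiltonian from the six coefficients g[0..5]."""
    terms = [(I2, I2), (Z, I2), (I2, Z), (Z, Z), (X, X), (Y, Y)]
    return sum(gi * np.kron(A, B) for gi, (A, B) in zip(g, terms))
\end{verbatim}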
\noindent\textbf{MAXCUT Hamiltonian}.
The MAXCUT Hamiltonian encodes the solution of the MAXCUT problem.
Given a graph $\Gamma = (V,E)$, where $V$
is a set of vertices and $E \subseteq V \times V$ is a set of links between vertices in $V$, a cut of $\Gamma$ is a subset
$S \subseteq V$ of $V$. The MAXCUT problem consists of finding a cut $S$ that maximizes the number of edges between $S$ and $S^c$ (the complement of $S$).
We denote the number of links in a given cut $S$ as $C(S)$.
The MAXCUT problem can be formulated as a Hamiltonian ground-state problem, by (i) associating a qubit to every vertex in $V$, (ii) associating to every
partition $S$ an element of the computational basis (here assumed to be in the $z$ direction) of the form $| z_0 \dots z_{n-1} \rangle$, where $z_i = 1$ if
$i \in S$ and $z_i = 0$ if $i \in S^c$, and (iii) finding the minimal (most negative) eigenvalue of the $2$-local Hamiltonian
\begin{equation}
\hat{C} = -\sum_{(ij) \in E} \frac{1 - \hat{S}^z_i \hat{S}^z_j}{2} \quad .
\end{equation}
The eigenvalues of $\hat{C}$ are $-C(S)$, i.e. a subset of the numbers $\{ 0,-1, \dots, -|E| \}$.
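For small graphs, the spectrum can simply be enumerated over computational basis states. The sketch below (our own, taking the $\hat{S}^z$ eigenvalues to be $\pm 1$) illustrates this on a hypothetical 4-vertex ring:
\begin{verbatim}
import numpy as np
from itertools import product

def maxcut_diagonal(n, edges):
    """Diagonal of C-hat in the computational basis, using S^z -> +/-1.
    The entry for a bit string is -C(S), minus the number of cut edges."""
    diag = []
    for bits in product([0, 1], repeat=n):
        z = 1 - 2 * np.array(bits)          # z_i = +1 for 0, -1 for 1
        cut = sum((1 - z[i] * z[j]) / 2 for i, j in edges)
        diag.append(-cut)
    return np.array(diag)

# Example: a 4-vertex ring; the maximum cut contains all 4 edges.
print(maxcut_diagonal(4, [(0, 1), (1, 2), (2, 3), (3, 0)]).min())  # -4.0
\end{verbatim}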
To assess the feasibility of implementation on near-term quantum devices, we have also carried out noisy
classical emulation (sampling expectation values and with an error model) using the Rigetti quantum virtual machine (QVM) and a
physical simulation using the Rigetti Aspen-1 QPUs, for a single qubit field model ($2^{-1/2}(X+Z)$)\cite{lamm2018simulation} and a
1D AFM transverse-field Ising model.
We carry out QITE using different fixed domain sizes $D$ for the unitary or fermionic unitary.
For quantum simulations, we used pyQuil, an open source Python library, to write quantum circuits that interface with both Rigetti's quantum virtual machine (QVM) and
the Aspen-1 quantum processing units (QPUs).
pyQuil provides a way to include noise models in the QVM simulations. Readout error can be included in a
high-level API provided in the package and is characterized by $p_{00}$ (the probability of reading $|0\rangle$ given that the qubit
is in state $|0\rangle$) and $p_{11}$ (the probability of reading $|1\rangle$ given that the qubit is in state $|1\rangle$). Readout errors
can be mitigated by estimating the relevant probabilities and correcting the estimated expectation values. We do so by using a high level
API present in pyQuil.
A general noise model can be applied to a gate in the circuit by applying the appropriate Kraus maps. Included in the package is
a high level API that applies the same decoherence error attributed to energy relaxation and dephasing to every gate in the circuit.
This error channel is characterized by the relaxation time $T_{1}$ and coherence time $T_{2}$. We also include in our emulation
our own high-level API that applies the same depolarizing noise channel to every single gate by using the appropriate Kraus maps.
The depolarizing noise is characterized by $p_{1}$, the depolarizing probability for single-qubit gates and $p_{2}$, the
depolarizing probability for two-qubit gates.
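The following sketch indicates how such channels are attached to a program (pyQuil v2-style calls as we recall them; all parameter values are placeholders rather than Aspen-1 calibration data):
\begin{verbatim}
import numpy as np
from pyquil import Program
from pyquil.gates import H, MEASURE

p = Program()
ro = p.declare('ro', 'BIT', 1)
p += H(0)
p += MEASURE(0, ro[0])

# Readout error, characterized by p00 and p11 as described above.
p.define_noisy_readout(0, p00=0.95, p11=0.90)

# Depolarizing channel folded into the H gate: the Kraus operators
# replace the ideal gate, so each Pauli is composed with the H unitary.
p1 = 1e-3
Hu = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
paulis = [np.eye(2), np.array([[0, 1], [1, 0]]),
          np.array([[0, -1j], [1j, 0]]), np.diag([1., -1.])]
weights = [np.sqrt(1 - p1)] + 3 * [np.sqrt(p1 / 3)]
p.define_noisy_gate('H', [0],
                    [w * P @ Hu for w, P in zip(weights, paulis)])
\end{verbatim}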
\subsection{Benchmarks}
\begin{figure}[t!]
\centering
\includegraphics[width=1\textwidth]{figures/qite/fig2-eps-converted-to.pdf}
\caption{Energy calculations with QITE and QLanczos algorithms. Top: QITE energy $E(\beta)$ (a) and fidelity (b) between finite-time state
$\Phi(\beta)$ and exact ground state $\Psi$ as function of imaginary time $\beta$,
for a 1D 10-site Heisenberg model, showing the convergence with increasing unitary domains of $D=2-8$ qubits.
Bottom: QITE (dashed red, dot-dashed green lines) and QLanczos (solid red, solid green lines) energies
as function of imaginary time $\beta$, for a 1D Heisenberg model with $N=20$ qubits, using domains
of $D=2$ (c) and $4$ qubits (d), showing improved convergence of QLanczos over QITE. Black line
is the exact ground-state energy/fidelity.}
\label{fig:2}
\end{figure}
Figs.~\ref{fig:2} and \ref{fig:3} show the energy obtained by QITE as a function of $\beta$ and $D$ for the various models.
As we increase $D$, the asymptotic ($\beta \to \infty$) energies rapidly converge to the exact ground-state. For
small $D$, the inexact QITE tracks the exact QITE for a time until the
correlation length exceeds $D$. Afterwards, it may go down or up. The non-monotonic behavior is strongest
for small domains; in the MAXCUT example, the smallest domain $D=2$ gives an oscillating energy.
In such cases, we consider a reasonable estimate of the ground-state energy to be the point at which the energy stops decreasing. In all
models, increasing $D$ past a maximum value (less than $N$) no longer
affects the asymptotic energy, showing that the correlations have saturated (this is true even in the MAXCUT instance).
\begin{figure}[t!]
\centering
\includegraphics[width=1\textwidth]{figures/qite/fig3-eps-converted-to.pdf}
\caption{QITE energy evaluations. (a) QITE energy $E(\beta)$ as a function of imaginary time $\beta$ for
a 6-site 1D long-range Heisenberg model, for unitary domains $D=2-6$;
(b) a 4-site 1D Hubbard model with $U/t = 1$, for unitary domains $D=2,4$; (d)
the H$_2$ molecule in the STO-6G basis. (c) Probability of MAXCUT detection, $P(C=C_{max})$ as a function of imaginary time $\beta$, for the
$6$-site graph in the panel. Black line is the exact ground-state energy/probability of detection.}
\label{fig:3}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=1\textwidth]{figures/qite/fig4-eps-converted-to.pdf}
\caption{QITE, QLanczos, and QMETTS energies $E(\beta)$ as a function of imaginary time $\beta$ for
1-qubit field model using the QVM and QPU (qubit 14 on Aspen-1) and 2-qubit AFM transverse field Ising model
using the QVM and QPU (qubit 14, 15 on Aspen-1). (a) Ground state energies
for 1-qubit field model using the QVM and QPU (qubit 14 on Aspen-1); (b)
ground state energies for 2-qubit AFM transverse field Ising model
using the QVM and QPU (qubit 14, 15 on Aspen-1); (c) finite temperature
energies for 1-qubit field model using the QVM and QPU (qubit 14 on Aspen-1)
; and (d) finite temperature energies for 2-qubit AFM transverse field Ising model
using the QVM.
Black lines are the exact solutions.}
\label{fig:4}
\end{figure}
Fig.~\ref{fig:4} shows the results of running the QITE algorithm on Rigetti's QVM and Aspen-1 QPUs for the 1- and 2-qubit models.
Encouragingly for near-term simulations, despite sampling errors and other errors such as gate, readout and incoherent errors present in the device, it is possible to converge to a ground-state energy close to the exact energy for the 1-qubit case.
This result reflects a robustness that is sometimes informally observed in imaginary time evolution algorithms, in which the ground state energy is approached even if the imaginary time step is not perfectly implemented. In the 2-qubit case, although the QITE energy converges,
there is a systematic shift, which is reproduced on the QVM using available noise parameters for readout, decoherence and depolarizing noise~\cite{Rigetti} (remaining discrepancies between the emulator and hardware are likely attributable to cross-talk between parallel gates, not included in the noise model). However, reducing decoherence and depolarizing errors in the QVM, or using different sets of
qubits with improved noise characteristics, leads to improved convergence to the exact ground-state energy.
\begin{figure}[t!]
\centering
\includegraphics[width=0.8\textwidth]{figures/qite/fig5-eps-converted-to.pdf}
\caption{Thermal (Gibbs) average $\langle E \rangle$ at temperature $\beta$ from QMETTS for a 1D 6-site
Heisenberg model (exact emulation).}
\label{fig:5}
\end{figure}
\section{Conclusions}
We have introduced quantum analogs
of imaginary time evolution (QITE) and the Lanczos algorithm (QLanczos), which can be carried out without ancillae or deep circuits,
and which achieve exponential reductions in space and time per iteration relative to their classical counterparts.
They provide new quantum routes to approximate ground-states of Hamiltonians in both physical simulations and in optimization
that avoid some of the disadvantages of phase estimation based approaches and variational algorithms. The QLanczos iteration
appears especially powerful if sufficient sampling can be done, as in practice it obtains accurate estimates of ground-states
from only a few iterations, and also provides an estimate of excited states. Additionally, further algorithms that use QITE and QLanczos as subroutines can
be formulated, such as a quantum version of the METTS algorithm to compute thermal averages. Encouragingly, these algorithms
appear useful in conjunction with near-term quantum architectures, and serve to demonstrate the power of quantum elevations of
classical simulation techniques, in the continuing search for quantum supremacy.
\newpage
\begin{appendices}
\chapter{Appendix for Chapter~\ref{chp:dmet} and Chapter~\ref{chp:hlatt}}
\section{Proof of the finite temperature bath formula}
Let $M$ be an arbitrary $N\times N$ full-rank square matrix, and let $Q_k$ be the $Q$ factor from the QR decomposition of the first $n$ columns of $M^k$, i.e., $M^k[:,:n] = Q_k R_k$, for $k = 0, 1, ..., K$. Let $S$ ($|S| < N$) be the space spanned by the columns of $\{Q_0, Q_1, ..., Q_K\}$, and let $P$ be the projector onto $S$. The following equality holds:
\begin{equation}\label{eq:2prove}
P^{\dagger}M^lP[:,:n] = (P^{\dagger}MP)^l[:,:n], \hspace{0.2cm} l \leq K+1
.
\end{equation}
We prove the statement by mathematical induction. First write $M$ in the following form
\begin{equation}
M = \begin{bmatrix}
A & B \\
C & D
\end{bmatrix},
\end{equation}
where $A$ and $B$ form the first $n$ rows of $M$, and $A$ and $C$ form the first $n$
columns of $M$.
The projector has the form
\begin{equation}
P = \begin{bmatrix}
I & 0\\
0 & V
\end{bmatrix},
\end{equation}
where $I$ is the $n\times n$ identity matrix, and $V$ is an $(N-n)\times Kn$ matrix with $Kn < (N-n)$. The columns of $V$ are derived from the QR decompositions of $M^k[n:, :n]$, $k = 1, ..., K$, and then orthogonalized.
We can write $V$ in the form
\begin{equation}
V = \begin{bmatrix} V_1 & V_2 & \cdots & V_K \end{bmatrix}
\end{equation}
where $V_k$ is from the QR decomposition of $M^k[n:, :n]$. $\PP M P$ has the
form
\begin{equation}
\PP M P = \begin{bmatrix}
A & BV\\
\V C & \V DV
\end{bmatrix}.
\end{equation}
The mathematical induction consists of two parts:
(i) We start with $l=2$. The first $n$ columns of $P^{\dagger}M^2P$ and $(P^{\dagger}MP)^2$ are
\begin{equation}
\begin{split}
P^{\dagger}M^2P[:,:n] &= \begin{bmatrix}
A^2 + BC \\ \V CA + \V DC
\end{bmatrix}\\
(P^{\dagger}MP)^2[:,:n] &= \begin{bmatrix}
A^2 + BV\V C \\ \V CA + \V DV\V C
\end{bmatrix}.
\end{split}
\end{equation}
The two are equal when
\begin{equation}\label{eq:VVC}
V\V C = V\V (VR) = V I R = VR = C
\end{equation}
which is true since $V$ contains $Q_1$ from the QR decomposition of $C$. (Note that $\V V = I$, but $V\V \neq I$.)
Therefore, Eq.~(\ref{eq:2prove}) holds for $l=2$ when $K \geq 1$.
(ii) Now let us inspect Eq.~(\ref{eq:2prove}) for the $l$th order, assuming that Eq.~(\ref{eq:2prove}) holds for the $(l-1)$th order, i.e. $\PP M^{l-1}P = (\PP M P)^{l-1}$. Let
\begin{equation}
M^{l-1} = \begin{bmatrix}
W & X \\
Y & Z \\
\end{bmatrix}
\end{equation}
and $M^l = MM^{l-1}$ has the form
\begin{equation}
M^l = \begin{bmatrix}
AW+BY & AX+BZ\\
CW+DY & CX+DZ
\end{bmatrix}
\end{equation}
and
\begin{equation}
\PP M^{l-1}P = (\PP MP)^{l-1} = \begin{bmatrix}
W & XV\\
\V Y & \V Z V
\end{bmatrix}
\end{equation}
One can prove that $CW$ and $C$ share the same $Q$ space from the QR decomposition: let $C = QR$, then $CW = QRW$, where $R$ and $W$ are square matrices; we then perform another QR decomposition of $RW$, $RW = U\tilde{R}$, where $U$ is a unitary matrix, then $CW = \tilde{Q}\tilde{R}$ with $\tilde{Q} = QU$. Therefore, $Q$ and $\tilde{Q}$ span the same space.
The first $n$ columns of $\PP M^lP$ and $(\PP M P)^l$ are
\begin{equation}
\PP M^l P[:,:n] = \begin{bmatrix}
AW + BY \\
\V CW + \V DY
\end{bmatrix},
\end{equation}
\begin{equation}
\begin{split}
(\PP M P)^l[:,:n] =& \left((\PP M P)(\PP M P)^{l-1}\right)[:,:n] \\
=& \begin{bmatrix}
AW + BV\V Y\\
\V CW + \V DV\V Y
\end{bmatrix}.
\end{split}
\end{equation}
Since $V$ contains $V_{l-1}$, which is derived from the QR decomposition of $Y$, we have $V\V Y = Y$ as in Eq.~(\ref{eq:VVC}).
Combining (i) and (ii), we see that Eq.~(\ref{eq:2prove}) holds for the $l$th order with $K\geq l-1$, for all $l$. \QEDB
\section{Analytic gradient of the cost function for correlation potential fitting in DMET at finite temperature}
We rewrite the gradient of the cost function Eq.~\eqref{eq:cost_func_dmet} here
\begin{equation}\label{eq:gradient_cost_apdx}
\frac{\mathrm{d}f}{\mathrm{d}u_{kl}} = \sum_{i,j\in \text{imp}}2(D_{ij}^{\text{low}} - D_{ij}^{\text{high}})
\frac{\mathrm{d}D_{ij}^{\text{low}} }{\mathrm{d}u_{kl}},
\end{equation}
where $D^{\text{low}}$ is the single-particle density matrix from the
mean-field (low-level) calculation, $D^{\text{high}}$ is the high-level
single-particle density matrix, and $u$ is the correlation potential
matrix. The key to evaluating Eq.~\eqref{eq:gradient_cost_apdx} is to
calculate $\frac{\mathrm{d}D_{ij}^{\text{low}} }{\mathrm{d}u_{kl}}$. For simplicity,
we will drop the superscript on $D^{\text{low}}$.
At finite temperature, $D$ is given by
\begin{equation}
D = \frac{1}{1+e^{\beta (h - \mu + \delta u)}},
\end{equation}
where $h$ is the one-body Hamiltonian, $\mu$ is the chemical potential
(Fermi level),
and $\delta u$ is a small perturbation added to the Hamiltonian.
Then $\frac{\mathrm{d}D_{ij}^{\text{low}} }{\mathrm{d}u_{kl}}$ has two parts:
\begin{equation}\label{eq:grad_D_apdx}
\frac{\mathrm{d}D_{ij}(u, \mu(u)) }{\mathrm{d}u_{kl}} =
\frac{\partial D_{ij}}{\partial u_{kl}}\biggr\vert_{\mu}
+ \frac{\partial D_{ij}}{\partial \mu}\frac{\partial\mu}{\partial u_{kl}},
\end{equation}
where the second part comes from the change of Fermi level due to the
change of correlation potential.
The first part of Eq.~\eqref{eq:grad_D_apdx} is evaluated by
\begin{equation}
\frac{\partial D_{ij}}{\partial u_{kl}}
= \sum_{pq} C_{ip}C^*_{kp}K_{pq}C_{lq}C_{jq}^*,
\end{equation}
where $C$ is the molecular orbital (MO) coefficient matrix with $ijkl$ the
site indices and $pq$ the MO indices, and
\begin{equation}
K_{pq} = n_p (1-n_q)\frac{1-e^{\beta(\varepsilon_p-\varepsilon_q)}}{\varepsilon_p-\varepsilon_q},
\end{equation}
where $n_p$ is the occupation number on the $p$th orbital and $ \varepsilon_p$
is the energy of the $p$th orbital. Note that when $\varepsilon_p = \varepsilon_q$,
both the numerator and denominator go to zero and the value of $K_{pq}$
depends on $\beta$. When $\beta = \infty$, $\varepsilon_p = \varepsilon_q$ implies
$n_p = n_q = 0$ or $1$, so $K_{pq} = 0$ and remains bounded.
The second part is evaluated by
\begin{equation}
\begin{split}
\frac{\partial D_{ij}}{\partial \mu} &= \sum_p\beta C_{ip}n_p(1 - n_p)
C^*_{jp}\\
\frac{\partial\mu}{\partial u_{kl}} &= \frac{\sum_p n_p(1 - n_p)
C_{kp}^* C_{lp}}
{\sum_{p}n_p(1-n_p)}.
\end{split}
\end{equation}
The contribution of this part is usually small at low temperature and
becomes non-negligible at higher temperature.
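A vectorized evaluation of the kernel $K_{pq}$, including the degenerate limit discussed above (a sketch assuming the orbital energies $\varepsilon_p$ and occupations $n_p$ are available from the mean-field calculation), is:
\begin{verbatim}
import numpy as np

def response_kernel(eps, n, beta):
    """K_pq = n_p (1 - n_q) (1 - exp(beta (e_p - e_q))) / (e_p - e_q),
    with the degenerate limit K -> -beta n_p (1 - n_q) as e_p -> e_q."""
    de = eps[:, None] - eps[None, :]
    with np.errstate(divide='ignore', invalid='ignore'):
        K = n[:, None] * (1 - n[None, :]) * (1 - np.exp(beta * de)) / de
    # l'Hopital: (1 - e^{beta x}) / x -> -beta as x -> 0.
    deg = np.isclose(de, 0.0)
    K[deg] = (-beta * n[:, None] * (1 - n[None, :]))[deg]
    return K
\end{verbatim}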
\section{Davidson diagonalization}\label{sec:apdx_davidson}
The Davidson diagonalization~\citep{Davidson1975} algorithm is an efficient
way to find the lowest/highest eigenvalues of a Hermitian matrix.
In quantum chemistry, this method is widely used to obtain the ground
state or low-lying excited states. The method constructs a subspace of the
Hilbert space from an initial guess for the ground state,
and diagonalizes the Hamiltonian in this subspace. A preconditioner
is used to stabilize the algorithm and accelerate convergence.
The steps to find the $m$ lowest eigenvectors are listed below:
\begin{enumerate}
\item Select initial guess vectors $\mathbf{v}^i, i = 1,...,n\geq m$ to form a
subspace $\mathcal{S}$.
\item Construct the matrix representation of the Hamiltonian in the
subspace $\mathcal{S}$: $\tilde{H}_{ij} = \mathbf{v}_i^\dag H \mathbf{v}_j$.
\item Diagonalize $\tilde{H}$ to obtain the lowest $m$ eigenvalues and
corresponding eigenvectors, $\tilde{H}\mathbf{x}^p = \lambda_p \mathbf{x}^p$.
The current approximated eigenvectors are $\mathbf{c}_p = \sum_i x^p_i
\mathbf{v}_i$.
\item Starting from the ground state ($p=1$), compute the residual vector
$\mathbf{r}_p = \left(H - \lambda_p\right)\mathbf{c}_p$. If $||\mathbf{r}_p|| < \epsilon$, then move on to the next
excited state ($p\rightarrow p+1$). Otherwise,
compute the rescaled correction vector $\sigma^p_i = r^p_i / \left(\lambda_p
- H_{ii}\right)$.
\item Orthogonalize $\mathbf{\sigma}^p$ against $\mathcal{S}$ and normalize it. Add $\mathbf{\sigma}^p$ to $\mathcal{S}$. If the size of $\mathcal{S}$
exceeds the preset maximum size, discard the earliest vectors.
\item Go back to Step 2 until the algorithm converges.
\end{enumerate}
The above algorithm iteratively finds the lowest $m$ eigenvectors of the
Hamiltonian. Compared to other subspace methods such as the Lanczos algorithm
mentioned in Chapter~\ref{chp:intro}, the Davidson algorithm is more
accurate for both the ground state and low-lying excited states. Note that
when updating an excited state, the already converged ground state might
be perturbed; therefore, in Step 4, we recommend always starting
from the residual of the ground state. To make the algorithm
faster, one can set the ground state aside until
all $m$ eigenvectors are obtained, and then reexamine the residual of the
ground state to make sure it is not perturbed.
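For reference, a compact dense-matrix sketch of the above procedure (our own illustration; production implementations apply $H$ matrix-free) is:
\begin{verbatim}
import numpy as np

def davidson(H, m, tol=1e-8, max_space=30, max_iter=200):
    """Davidson sketch for the m lowest eigenpairs of a Hermitian H."""
    N = H.shape[0]
    V = np.eye(N, m + 2)                     # unit-vector initial guesses
    diag = np.diag(H).copy()
    for _ in range(max_iter):
        V, _ = np.linalg.qr(V)               # orthonormalize subspace S
        Ht = V.conj().T @ H @ V              # subspace Hamiltonian
        lam, x = np.linalg.eigh(Ht)
        C = V @ x[:, :m]                     # current Ritz vectors
        new_dirs = []
        for p in range(m):                   # always recheck from p = 1
            r = H @ C[:, p] - lam[p] * C[:, p]
            if np.linalg.norm(r) > tol:
                # Davidson preconditioner: divide by (lambda_p - H_ii).
                denom = lam[p] - diag
                denom[np.abs(denom) < 1e-12] = 1e-12
                new_dirs.append(r / denom)
        if not new_dirs:
            return lam[:m], C                # all m roots converged
        if V.shape[1] + len(new_dirs) > max_space:
            V = C                            # restart from Ritz vectors
        V = np.hstack([V] + [d[:, None] for d in new_dirs])
    return lam[:m], C
\end{verbatim}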
\chapter{Appendix for Chapter~\ref{chp:qite}}
\section{Representing imaginary-time evolution by unitary maps}
As discussed in the main text, we map the scaled non-unitary action of $e^{-\Delta\tau \hat{h}_m}$
on a state $\Psi$
to that of a unitary $e^{-i\Delta\tau \hat{A}[m] }$, i.e.
\begin{align}
| \Psi^\prime \rangle \equiv c^{-1/2} \, e^{-\Delta \tau \hat{h}_m } |\Psi\rangle = e^{-i\Delta\tau \hat{A}[m]} |\Psi\rangle \quad ,
\end{align}
where $c = \langle \Psi | e^{-2 \Delta \tau \hat{h}_m } |\Psi\rangle$.
$\hat{h}_m$ acts on $k$ geometrically local qubits;
$\hat{A}$ is Hermitian and acts on a domain of $D$ qubits around the support of $\hat{h}_m$,
and is expanded as a sum of Pauli strings acting on the $D$ qubits,
\begin{align}
\hat{A}[m] &= \sum_{i_1i_2 \ldots i_D} a[m]_{i_1i_2 \ldots i_D} \sigma_{i_1}\sigma_{i_2} \ldots \sigma_{i_D} \notag \\
&= \sum_I a[m]_I \sigma_I \label{eq:pauli}
\end{align}
where $I$ denotes the index $i_1i_2 \ldots i_D$. Define $ |\Delta_0\rangle = \frac{| \Psi^\prime \rangle - | \Psi\rangle}{\Delta \tau}$
and $|\Delta\rangle = -i \hat{A}[m] |\Psi\rangle$.
Our goal is to minimize the difference $||\Delta_0 - \Delta||$. If the unitary $e^{-i\Delta\tau \hat{A}[m] }$ is defined over a
sufficiently large domain $D$, then this error can be made arbitrarily small for small $\Delta \tau$. Minimizing for real $a[m]$
corresponds to minimizing the quadratic function $f(a[m])$
\begin{align}
f(a[m]) = f_0 + \sum_I b_I a[m]_I + \sum_{IJ} a[m]_I S_{IJ} a[m]_J
\end{align}
where
\begin{align}
f_0 &= \langle \Delta_0 | \Delta_0 \rangle \quad , \\
S_{IJ} &= \langle \Psi | \sigma^\dag_I \sigma_J | \Psi\rangle \quad ,\\
b_I &= i \, \langle \Psi | \sigma^\dag_I | \Delta_0 \rangle - i \, \langle \Delta_0 | \sigma_I | \Psi \rangle \quad ,
\end{align}
whose minimum is attained at the solution of the linear equation
\begin{align}
\left( \mathbf{S}+\mathbf{S}^T \right) \mathbf{a}[m] = -\mathbf{b} \label{eq:lineareq}
\end{align}
In general, $\mathbf{S}+\mathbf{S}^T$ may have a non-zero null-space. Thus, we solve Eq.~\eqref{eq:lineareq}
either by applying the generalized inverse of $\mathbf{S}+\mathbf{S}^T$ or by an iterative algorithm such as conjugate gradient.
For fermionic Hamiltonians, we replace the Pauli operators in Eq.~(\ref{eq:pauli}) by fermionic field operators.
For a number conserving Hamiltonian, such as the fermionic Hubbard Hamiltonian treated in Fig. 3 in the main text,
we write
\begin{align}
\hat{A}[m] &= \sum_{i_1i_2 \ldots i_D} a[m]_{i_1i_2 \ldots i_D} \hat{f}^\dag_{i_1}\ldots \hat{f}^\dag_{i_{D/2}} \hat{f}_{i_{D/2+1}} \ldots \hat{f}_{i_D}
\end{align}
where $\hat{f}^\dag$, $\hat{f}$ are fermionic creation, annihilation operators respectively.
\section{Proof of correctness from finite correlation length}
Here we present a more detailed analysis of the running time of the algorithm. Consider a $k$-local Hamiltonian
\begin{equation}
H = \sum_{l=1}^m h_l
\end{equation}
acting on a $d$-dimensional lattice with $\Vert h_i \Vert \leq 1$, where $\Vert * \Vert$ is the operator norm. In imaginary time evolution (used e.g. in Quantum Monte-Carlo or in tensor network simulations) one typically applies Trotter formulae to approximate
\begin{equation}
\frac{e^{- \beta H} | \Psi_0 \rangle}{ \Vert e^{- \beta H} | \Psi_0 \rangle \Vert }
\end{equation}
for an initial state $| \Psi_0 \rangle$ (which we assume to be a product state) by
\begin{equation} \label{trotterdecomp}
\frac{ \left ( e^{- \beta h_1 / n} \ldots e^{- \beta h_m / n} \right)^{n} | \Psi_0 \rangle } { \Vert \left ( e^{- \beta h_1 / n} \ldots e^{- \beta h_m / n} \right)^{n} | \Psi_0 \rangle \Vert }.
\end{equation}
This approximation leads to an error which can be made as small as one wishes by increasing the number of time steps $n$.
Let $| \Psi_s \rangle$ be the state (after renormalization) obtained by applying $s$ terms $e^{- \beta h_i / n} $ from $\left( e^{- \beta h_1 / n} \ldots e^{- \beta h_m / n} \right)^{n}$; with this notation $| \Psi_{mn} \rangle$ is the state given by Eq. (\ref{trotterdecomp}). In the QITE algorithm, instead of applying each of the operators $ e^{- \beta h_i / n}$ to $| \Psi_0 \rangle$ (and renormalizing the state), one applies local unitaries $U_s$ which should approximate the action of the original operator. Let $| \Phi_s \rangle$ be the state after $s$ unitaries have been applied.
Let $C$ be an upper bound on the correlation length of $| \Psi_s \rangle$ for every $s$: we assume that for every $s$, and every observables $A$ and $B$ separated by $\text{dist}(A, B)$ sites,
\begin{equation} \label{correlationdecay}
\langle \Psi_s | A \otimes B | \Psi_s \rangle - \langle \Psi_s | A | \Psi_s \rangle \langle \Psi_s | B | \Psi_s \rangle \leq \Vert A \Vert \Vert B \Vert e^{- \text{dist}(A, B) / C}.
\end{equation}
\begin{theorem} \label{bounderrors}
For every $\varepsilon > 0$, there are unitaries $U_s$ each acting on
\begin{equation}
k (2 C)^d \ln^d(2 \sqrt{2} n m \varepsilon^{-1})
\end{equation}
qubits such that
\begin{equation}
\left \Vert | \Psi_{mn} \rangle - | \Phi_{mn} \rangle \right \Vert \leq \varepsilon
\end{equation}
\end{theorem}
\begin{proof}
We have
\begin{eqnarray} \label{boundingerror1}
\left \Vert | \Psi_{s} \rangle -| \Phi_{s} \rangle \right \Vert &=&
\left \Vert | \Psi_{s} \rangle - U_s | \Phi_{s-1} \rangle \right \Vert \nonumber \\
&\leq& \left \Vert | \Psi_{s} \rangle - U_s | \Psi_{s-1} \rangle \right \Vert + \left \Vert | \Psi_{s-1} \rangle - | \Phi_{s-1} \rangle \right \Vert
\end{eqnarray}
To bound the first term we use our assumption that the correlation length of $| \Psi_{s-1} \rangle$ is smaller than $C$. Consider a region $R_{v}$ of all sites that are at most a distance $v$ (in the Manhattan distance on the lattice) of the sites in which $h_{i_s}$ acts. Let $\text{tr}_{\backslash R_v}(| \Psi_s \rangle \langle \Psi_s | )$ be the reduced state on $R_v$, obtained by partial tracing over the complement of $R_v$ in the lattice. Since
\begin{equation}
| \Psi_{s} \rangle = \frac{ e^{-\beta h_{i_s}/n} | \Psi_{s-1} \rangle }{ \Vert e^{ - \beta h_{i_s}/n} | \Psi_{s-1} \rangle \Vert },
\end{equation}
it follows from Eq. (\ref{correlationdecay}) and Lemma 9 of \cite{brandao2015exponential} that
\begin{equation} \label{boundmarginal}
\left \Vert \text{tr}_{\backslash R_v}(| \Psi_s \rangle \langle \Psi_s | ) - \text{tr}_{\backslash R_v}(| \Psi_{s-1} \rangle \langle \Psi_{s-1} | ) \right \Vert_1 \leq \Vert e^{-\beta h_{i_s}/n} \Vert^{-1} e^{- \frac{v}{C}} \leq 2 e^{- \frac{v}{C}},
\end{equation}
where we used that for $n \geq 2\beta$, $\Vert e^{- \beta h_{i_s}/n} \Vert \geq \Vert I - \beta h_{i_s}/n \Vert \geq 1 - \beta/n \geq 1/2$. Above $\Vert * \Vert_1$ is the trace norm.
The key result in our analysis is Uhlmann's theorem (see e.g. Lemmas 11 and 12 of \cite{brandao2015exponential}). It says that two pure states with nearby marginals must be related by a unitary on the purifying system. In more detail, if $| \eta \rangle_{AB}$ and $| \nu \rangle_{AB}$ are two states s.t. $\Vert \eta_A - \nu_A \Vert_1 \leq \delta$, then there exists a unitary $V$ acting on $B$ s.t.
\begin{equation} \label{uhlmannstatement}
\Vert | \eta \rangle_{AB} - (I \otimes V) | \nu \rangle_{AB} \Vert \leq 2 \sqrt{\delta}.
\end{equation}
Applying Uhlmann's theorem to $| \Psi_s \rangle$ and $| \Psi_{s-1} \rangle$, with $B = R_v$, and using Eq. (\ref{boundmarginal}), we find that there exists a unitary $U_s$ acting on $R_{v}$ s.t.
\begin{equation}
\left \Vert | \Psi_{s} \rangle - U_s | \Psi_{s-1} \rangle \right \Vert \leq 2 \sqrt{2} e^{- \frac{v}{2C}},
\end{equation}
which by Eq. (\ref{boundingerror1}) implies
\begin{equation}
\left \Vert | \Psi_{mn} \rangle - | \Phi_{mn} \rangle \right \Vert \leq 2 \sqrt{2} m n e^{- \frac{v}{2C}}.
\end{equation}
Choosing $v = 2 C \ln(2 \sqrt{2} n m \varepsilon^{-1})$ as the width of the support of the approximating unitaries, the error term above is $\varepsilon$. The support of the local unitaries is $k v^d$ qubits (as this is an upper bound on the number of qubits in $R_v$). Therefore each unitary $U_s$ acts on at most
\begin{equation}
k (2 C)^d \ln^d(2 \sqrt{2} n m \varepsilon^{-1})
\end{equation}
qubits.
\end{proof}
\vspace{0.4 cm}
\noindent \textit{Finding $U_s$:} In the algorithm we claim that we can find the unitaries $U_s$ by solving a least-squares problem. This is indeed the case if we can write them as $U_s = e^{i A[s] / n}$ with $A[s]$ a Hamiltonian of constant norm. Then for sufficiently large $n$, $U_s = I + i A[s]/n + O((1/n)^2)$ and we can find $A[s]$ by performing tomography of the reduced state over the region where $U_s$ acts and solving the linear problem given in the main text. Because we apply Uhlmann's Theorem to $ | \Psi_{s-1} \rangle$ and
\begin{equation}
\frac{ e^{- \beta h_{i_s}/n} | \Psi_{s-1} \rangle }{ \Vert e^{ - \beta h_{i_s}/n} | \Psi_{s-1} \rangle \Vert },
\end{equation}
using $e^{ - \beta h_{i_s}/n} = I - \beta h_{i_s}/n + O((1/n)^2)$ and following the proof of Uhlmann's Theorem, we find that the unitary can indeed be taken to be close to the identity, i.e. $U_s$ can be written as $e^{i A[s] / n}$.
\vspace{0.4 cm}
\noindent \textit{Total Running Time:} Theorem \ref{bounderrors} gives an upper bound on the maximum support of the unitaries needed for a Trotter update, while tomography of local reduced density matrices gives a way to find the unitaries. The cost for tomography is quadratic in the dimension of the region, so it scales as $\exp(O( k (2 C)^d \ln^d(2 \sqrt{2} n m \varepsilon^{-1})))$. This is also the cost to solve classically the linear system which gives the associated Hamiltonian $A[s]$ and of finding a circuit decomposition of $U_s = e^{i A[s] / n}$ in terms of
two qubit gates. As this is repeated $mn$ times, for each of the $mn$ terms of the Trotter decomposition, the total running time (of both quantum and classical parts) is
\begin{equation}
mn \exp(O( k (2 C)^d \ln^d(2 \sqrt{2} n m \varepsilon^{-1}))).
\end{equation}
This is exponential in $C^d$, with $C$ the correlation length, and quasi-polynomial in $n$ (the number of Trotter steps) and $m$ (the number of local terms in the Hamiltonian; note that typically $m = O(N)$, with $N$ the number of sites). While this is an exponential improvement over the classical $\exp(O(N))$ scaling, the quasi-polynomial dependence on $m$ is still prohibitive in practice. Below we show how to improve on that.
\vspace{0.4 cm}
\noindent \textit{Local Approximation:} We expect in practice to substantially beat the bound on the support of the unitaries given in Theorem \ref{bounderrors} above. Indeed, if one is only interested in a local approximation of the state (meaning that all the local marginals of $|\Phi_{nm} \rangle$ are close to the ones of $e^{- \beta H} |\Psi_0 \rangle$, but not necessarily the global states), then we expect the support of the unitaries to be independent of the number of terms of the Hamiltonian $m$ (while for global approximation we get a polylogarithmic dependence on $m$).
The scaling with $m$ in the bound comes from the additive accumulation of error from each of the $mn$ steps (Eq. (\ref{boundingerror1})). The assumption of a correlation length $C$ ensures that the errors of replacing each local term in the Trotter decomposition by a unitary do not all add up if one is interested in local observables. Indeed, the contribution of the local error for a region $S$ from the replacement of $e^{- \beta h_{i_s} / n}$ by $U_s$ is $\exp(- l / C)$, with $l$ the distance of the support of $h_{i_s}$ to $S$. Then we can substitute Eq. (\ref{boundmarginal}) by
\begin{equation} \label{localerror term}
\left \Vert \text{tr}_{\backslash S} ( | \Psi_{mn} \rangle \langle \Psi_{mn} | ) - \text{tr}_{\backslash S} ( | \Phi_{mn} \rangle \langle \Phi_{mn} | ) \right \Vert \leq 2\sqrt{2} n (C + |S|) e^{- \frac{v}{2C}},
\end{equation}
with $|S|$ the size of the support of $S$. This gives a bound on the size of the support of the unitaries $U_s$ of
\begin{equation}
k (2 C)^d \ln^d(2 \sqrt{2} n (C + |S|) \varepsilon^{-1})
\end{equation}
Using this improved bound, the total running time becomes
\begin{equation}
mn \exp(O( k (2 C)^d \ln^d(2 \sqrt{2} n (C + |S|) \varepsilon^{-1}) )).
\end{equation}
As $m = O(N)$, we find the scaling with the number of sites $N$ to be linear.
\vspace{0.4 cm}
\noindent \textit{Non-Local Terms:} Suppose the Hamiltonian has a term $h_q$ acting on qubits which are not nearby, e.g. on two sites $i$ and $j$. Then $e^{- \beta h_q /n}$ can still be replaced by a unitary, which only acts on sites $i$ and $j$ and qubits in the neighborhoods of the two sites. This is the case if we assume that the state has a finite correlation length, and the proof is again an application of Uhlmann's theorem (we follow the same argument from the proof of Theorem \ref{bounderrors} but define $R_v$ in that case as the union of the neighborhoods of $i$ and $j$). Note however that the assumption of a finite correlation length might be less natural for models with long range interactions.
\section{Spreading of correlations}
In the main text, we argued that the correlation volume $V$ of the state $e^{-\beta H}|\Psi\rangle$ is bounded
for many physical Hamiltonians and saturates at the ground-state with $V \ll N$ where $N$ is the system size.
To numerically measure correlations, we use the mutual information between two sites, defined as
\begin{align}
I(i,j) = S(i)+S(j) - S(i,j)
\end{align}
where $S(i)$ is the von Neumann entropy of the density matrix of site $i$ ($\rho(i)$) and similarly for $S(j)$, and $S(i,j)$
is the von Neumann entropy of the two-site density matrix for sites $i$ and $j$ ($\rho(i,j)$).
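Given the one- and two-site reduced density matrices, $I(i,j)$ is evaluated directly; a small sketch of this post-processing step (not of the tensor network contraction itself) is:
\begin{verbatim}
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr[rho log rho], with the convention 0 log 0 = 0."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

def mutual_information(rho_i, rho_j, rho_ij):
    """I(i,j) = S(i) + S(j) - S(i,j) from reduced density matrices."""
    return (von_neumann_entropy(rho_i) + von_neumann_entropy(rho_j)
            - von_neumann_entropy(rho_ij))
\end{verbatim}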
To compute the mutual information in Fig. 1 in the main text, we used matrix product state (MPS) and finite projected entangled pair state (PEPS) imaginary time evolution for the spin-$1/2$ 1D and 2D FM transverse field Ising model (TFI)
\begin{align}
H_{TFI} = - \sum_{\langle ij \rangle} \sigma^z_i \sigma^z_j - h \sum_{i} \sigma^x_i
\end{align}
where the sum over $\langle i, j \rangle$ pairs runs over nearest neighbors. We use the parameter $h=1.25$ for the 1D calculation and $h=3.5$ for the 2D calculation, as the ground state is gapped in both cases and the ground-state correlation length is known to be finite.
\noindent \textbf{MPS}.
We performed MPS imaginary time evolution (ITE) on a 1D spin chain with $L=50$ sites with open boundary conditions. We start from an initial state that is a random product state, and perform ITE using time evolution block decimation (TEBD) \cite{vidal2004TEBD,schollwock2011mps} with a first-order Trotter decomposition. In this algorithm, the Hamiltonian is separated into terms operating on even and odd bonds. The operators acting on a single bond are exponentiated exactly. One time step is given by time evolution of odd and even bonds sequentially, giving rise to a Trotter error on the order of the time step $\Delta \tau$. In our calculation, a time step of $\Delta \tau = 0.001$ was used.
We carry out ITE simulations with a maximum bond dimension of $D=80$, but truncate singular values less than 1.0e-8 of the maximum singular value.
In the main text, the ITE results are compared against the ground state obtained via the density matrix renormalization group (DMRG). This should be equivalent to comparing to a long-time ITE ground state. The long-time ITE ($\beta=38.352$) ground state reached an energy per site of -1.455071, while the DMRG ground-state energy per site is -1.455076. The percent error of the nearest-neighbor correlations is on the order of 1.0e-4\% to 1.0e-3\%, and about 1.0e-2\% for correlations between the middle site and the end sites (a distance of 25 sites). The error in fidelity between the two ground states was about 5.0e-4.
\noindent \textbf{PEPS}. We carried out finite PEPS \cite{nishino1996corner,verstraete2004renormalization,
verstraete2006criticality,orus2014practical} imaginary time evolution for the two-dimensional transverse
field Ising model on a lattice
size of $21 \times 31$. The size was chosen to be large enough to see the spread of mutual information in the bulk
without significant effects from the boundary. The mutual information was calculated
along the long (horizontal) axis in the center of the lattice.
The standard Trotterized
imaginary time evolution scheme for PEPS~\cite{VerstraeteITimeReview} was used with a time step $\Delta \tau = 0.001$,
up to imaginary time $\beta = 6.0$, starting from a random product state. To reduce computational cost from the
large lattice size, the PEPS
was defined in a translationally invariant manner with only 2 independent tensors \cite{VerstraeteIPEPS}
updated via the so-called ``simple update'' procedure \cite{XiangSimpleUpdate}.
The simple update has been shown to be sufficiently accurate for capturing correlation
functions (and thus $I(i,j)$) for ground states with relatively short correlation
lengths (compared to criticality) \cite{LubaschAlgos,LubaschUnifying}.
We chose a magnetic field value $h=3.5$ which is
detuned from the critical field ($h \approx 3.044$) but
still maintains a correlation length long enough to see
interesting behaviour.
\noindent \textit{Accuracy:} Even though the simple update procedure was used for the tensor update,
we still needed to contract the $21 \times 31$ PEPS at every imaginary time step $\beta$ for a range of
correlation functions, amounting to a large number of contractions.
To control the computational cost, we limited our bond dimension to $D=5$ and used an optimized
contraction scheme \cite{XiangContract}, with maximum allowed bond dimension
of $\chi = 60$ during the contraction.
Based on converged PEPS ground state correlation functions
with a larger bond dimension of $D=8$, our $D=5$ PEPS yields $I(i,i+r)$ (where $r$ denotes horizontal separation) at large $\beta$ with
a relative error of $\approx 1\%$ for $r=1$--$4$, $5\%$ or less for $r=5$--$8$, and $10\%$ or greater for
$r > 8$. At smaller values of $\beta$ ($< 0.5$) the errors up to $r=8$ are much smaller because
the bond dimension of 5 is able to completely support the smaller correlations (see Fig. 1, main text).
While error analysis on the 2D Heisenberg model \cite{LubaschAlgos} suggests
that errors with respect to $D=\infty$ may be larger, such analysis also confirms that
a $D=5$ PEPS captures the qualitative behaviour of
correlation in the range $r=5-10$ (and beyond).
Aside from the bond dimension error,
the precision of the calculations is governed by $\chi$ and the lattice size. Using the $21 \times 31$
lattice and $\chi = 60$, we were able to converge entries of single-site
density matrices $\rho(i)$ to a precision of $\pm 10^{-6}$ (two site density matrices $\rho(i,j)$
had higher precision). For $\beta = 0.001-0.012$,
the smallest eigenvalue of $\rho(i)$ fell below this precision threshold, leading
to significant noise in $I(i,j)$. Thus, these values of $\beta$ are omitted from Fig. 1 (main text)
and the smallest reported values of $I$ are $10^{-6}$, although with more precision we expect $I \to 0$ as $r \to \infty$.
Finally, the energy and fidelity errors were computed with respect to the PEPS
ground state \textit{of the same bond dimension} at $\beta = 10.0$ (10000 time steps).
The convergence of these quantities shown in Fig. 1 (main text) thus isolates the convergence of the
imaginary time evolution, and does not include effects of other errors that
may result from deficiencies in the wavefunction ansatz.
\section{Parameters used in QVM and QPUs simulations}
In this section, we include the parameters used in our QPUs and QVM simulations.
Note that all noisy QVM simulations (unless stated otherwise in the text) were performed with noise parameters from noise model 1.
\begin{table}[h!]
\begin{center}
\caption{QPUs: 1-qubit QITE and QLanczos.}
\label{tab:table1}
\begin{tabular}{l|c|c|c|r}
\textbf{Trotter stepsize} & \textbf{nTrials} & \textbf{$\delta$} & \textbf{s} & \textbf{$\epsilon$}\\
\hline
0.2 & 100000 & 0.01 & 0.75 & $10^{-2}$\\
\end{tabular}
\end{center}
\end{table}
\begin{table}[h!]
\begin{center}
\caption{QPUs: 2-qubit QITE and QLanczos.}
\label{tab:table2}
\begin{tabular}{l|c|c|c|r}
\textbf{Trotter stepsize} & \textbf{nTrials} & \textbf{$\delta$} & \textbf{s} & \textbf{$\epsilon$}\\
\hline
0.5 & 100000 & 0.1 & 0.75 & $10^{-2}$\\
\end{tabular}
\end{center}
\end{table}
\begin{table}[h!]
\begin{center}
\caption{QPUs: 1-qubit METTS.}
\label{tab:table3}
\begin{tabular}{l|c|c|c|r}
\textbf{$\beta$} & \textbf{Trotter stepsize} & \textbf{nTrials} & \textbf{nMETTs} & \textbf{$\delta$}\\
\hline
1.5 & 0.15 & 1500 & 70 & 0.01\\
2.0 & 0.20 & 1500 & 70 & 0.01\\
3.0 & 0.30 & 1500 & 70 & 0.01\\
4.0 & 0.40 & 1500 & 70 & 0.01\\
\end{tabular}
\end{center}
\end{table}
\begin{table}[h!]
\begin{center}
\caption{QVM: 2-qubit QITE and QLanczos.}
\label{tab:table4}
\begin{tabular}{l|c|c|c|r}
\textbf{Trotter stepsize} & \textbf{nTrials} & \textbf{$\delta$} & \textbf{s} & \textbf{$\epsilon$}\\
\hline
0.5 & 100000 & 0.1 & 0.75 & $10^{-2}$\\
\end{tabular}
\end{center}
\end{table}
\begin{table}[h!]
\begin{center}
\caption{QVM: 1-qubit METTS.}
\label{tab:table5}
\begin{tabular}{l|c|c|c|r}
\textbf{$\beta$} & \textbf{Trotter stepsize} & \textbf{nTrials} & \textbf{nMETTs} & \textbf{$\delta$}\\
\hline
1.0 & 0.10 & 1500 & 70 & 0.01\\
1.5 & 0.15 & 1500 & 70 & 0.01\\
2.0 & 0.20 & 1500 & 70 & 0.01\\
3.0 & 0.30 & 1500 & 70 & 0.01\\
4.0 & 0.40 & 1500 & 70 & 0.01\\
\end{tabular}
\end{center}
\end{table}
\begin{table}[h!]
\begin{center}
\caption{QVM: 2-qubit METTS.}
\label{tab:table6}
\begin{tabular}{l|c|c|c|r}
\textbf{$\beta$} & \textbf{Trotter stepsize} & \textbf{nTrials} & \textbf{nMETTs} & \textbf{$\delta$}\\
\hline
1.0 & 0.10 & 10000 & 200 & 0.1\\
1.5 & 0.15 & 10000 & 200 & 0.1\\
2.0 & 0.20 & 10000 & 200 & 0.1\\
3.0 & 0.30 & 10000 & 200 & 0.1\\
4.0 & 0.40 & 10000 & 200 & 0.1\\
\end{tabular}
\end{center}
\end{table}
\end{appendices}
\printbibliography[heading=bibintoc]
\end{document}
|
{
"arxiv_id": "2302.14255",
"language": "en",
"timestamp": "2023-03-01T02:06:57",
"url": "https://arxiv.org/abs/2302.14255",
"yymm": "2302"
} | \section{Introduction}
The paper studies prediction and filtering for discrete time signals. These problems are considered in the deterministic setting, where only a single
trajectory of the signal is observed, rather than a set of sampled trajectories that would allow statistical methods to be applied. The method that we use is based on frequency analysis. It is well known that certain
degeneracy of the spectrum can ensure
opportunities for prediction and interpolation of signals;
see, e.g., \cite{C}-\cite{V},
where band-limited continuous time signals were considered.
Some applications based on statistical methods and learning models have been described in \cite{C}-\cite{L}.
In the present paper, we consider discrete time signals
(digital signals). It is known in principle that these signals are predictable, i.e., they allow unique extrapolations from their past observations, if they have a finite spectrum gap, i.e., an interval on the unit circle
${\mathbb{T}}=\{z\in{\bf C}: \ |z|=1\}$ on which their Z-transform vanishes; see, e.g., \cite{D12}.
This gap can be arbitrarily small. Respectively, an ideal low-pass filter or high-pass filter would convert a non-predictable signal into a predictable one. This is why
these ideal filters cannot be causal.
For discrete time signals, some predictors based on irrational causal transfer functions were obtained in \cite{D12,D12b}. The corresponding transfer functions were presented via exponentials of rational functions or power functions; an explicit representation in the time domain was not feasible for these transfer functions.
In \cite{D16}, some low-pass filters were also constructed based on a similar principle.
The paper addresses again the prediction and filtering problems for discrete time signals; it offers new predictors and causal filters approximating ideal filters. The causal transfer functions for these predictors and filters are represented as polynomials of Z-transform of the unit step signal, i.e., polynomials of $(1-z^{-1})^{-1}$. For the predictors, the corresponding transfer functions approximate the function $e^{i\omega T}$ on ${\mathbb{T}}$, where $\omega\in (-\pi,\pi]$ represents the frequency, and where an integer $T>0$ represents a preselected prediction horizon. For the filters, the corresponding transfer functions approximate the real valued step function representing the trace on ${\mathbb{T}}$ of Z-transform of an ideal filter. The approximation is possible for signals with some arbitrarily small spectrum gap; the resulting signal could have a wider preselected spectrum gap.
The results are applicable to high frequency signals as well as low frequency signals.
These new predictors and filters allow an explicit representation in the time domain and in the frequency domain; in addition, they are independent of the
spectral characteristics of the input signals, apart from a fixed and known finite spectral gap.
A computational approach based on model fitting is also suggested.
The method is based on the approach developed in \cite{D22,D22rus} for prediction of continuous time signals.
The paper is organized in the following manner. In Section
\ref{secDef}, we formulate the definitions. In Section
\ref{secM}, we formulate
the main theorems on predictability and predictors (Theorem \ref{ThP} and Theorem \ref{ThF}). In Section \ref{secR}, we discuss representation of transfer functions in the time domain. In Section \ref{secD}, we discuss some implementation problems.
In Section \ref{secLF}, we suggest an extension of the results to low frequency and other
signals.
Section \ref{secProof} contains the proofs.
\section{Problem setting }\label{secDef}
\subsection*{Some notations}
Let ${\mathbb{Z}}$ be the set of all integers.
We denote by $\ell_r$ the set of all
sequences $x=\{x(t)\}_{t\in{\mathbb{Z}}}\subset{\bf C}$, such that
$\|x\|_{\ell_r}=\left(\sum_{t=-\infty}^{\infty}|x(t)|^r\right)^{1/r}<+\infty$
for $r\in[1,\infty)$.
\par
For $x\in \ell_1$ or $x\in \ell_2$, we denote by $X={\cal Z} x$ the
Z-transform \begin{eqnarray*} X(z)=\sum_{t=-\infty}^{\infty}x(t)z^{-t},\quad
z\in{\mathbb{T}}. \end{eqnarray*} Respectively, the inverse Z-transform $x={\cal Z}^{-1}X$ is
defined as \begin{eqnarray*} x(t)=\frac{1}{2\pi}\int_{-\pi}^\pi
X\left(e^{i\omega}\right) e^{i\omega t}d\omega, \quad t=0,\pm 1,\pm 2,....\end{eqnarray*}
If $x\in \ell_2$, then $X|_{\mathbb{T}}$ is defined as an element of
$L_2({\mathbb{T}};{\bf C})$.
We denote by ${\mathbb{I}}$ the indicator function.
\subsection*{Some definitions}
Let either $E={\bf R}$ or $E={\bf C}$.
Let $ {\cal X}\subset \ell_\infty$ be a set of currently observable discrete time signals
with values in $E$.
Let ${\cal P}({\cal X})$ be the set of all continuous mappings $p:{\cal X}\to \ell_\infty$ such that, for any $x_1,x_2\in{\cal X}$ and $\tau\in{\mathbb{Z}}$, we have that
$p(x_1(\cdot))(t)=p(x_2(\cdot))(t)$ for all $t\le\tau$ if $x_1(t)=x_2(t)$ for all $t\le \tau$. In other words, this is the set of ``causal'' mappings;
we will look for predictors and filters in this class.
Let us consider first a prediction problem. Let an integer $T>0$ be given.
The goal is to estimate, at current times $t$, the values
$x(t+T)$, using historical values of the observable process
$x(s)|_{s\le t}$. Therefore, $T$ is the prediction horizon in this setting.
\begin{definition}\label{defP} Let ${\cal X}\subset\ell_\infty$.
\begin{itemize}
\item[(i)]
We say that the class ${\cal X}$ is predictable with the prediction horizon $T$ if there exists a sequence $\{\widetilde
p_{d}(\cdot)\}_{d=1}^{+\infty}\subset{\cal P}({\cal X})$ such that $$
\sup_{t\in{\mathbb{Z}}}|x(t+T)-\widetilde y_{d}(t)|\to 0\quad \hbox{as}\quad
d\to+\infty\quad\forall x\in{\cal X}, $$ where \begin{eqnarray*} &&
\widetilde y_{d}= \widetilde p_d(x(\cdot)).\label{predict} \end{eqnarray*}
\item[(ii)]
We say that the class ${\cal X}$ is uniformly predictable
with the prediction horizon $T$ if there exists a sequence $\{\widetilde
p_{d}(\cdot)\}_{d=1}^{+\infty}\subset{\cal P}({\cal X})$ such that \begin{eqnarray*} \sup_{t\in{\mathbb{Z}}}|x(t+T)-\widetilde
y_{d}(t)|\to 0\quad\hbox{uniformly in} \quad x\in{\cal X},\end{eqnarray*}
where $\widetilde
y_{d}(\cdot)$ is as in part (i) above.
\end{itemize}
\end{definition}
Functions $\widetilde y_{d}(t)$ in the definition above
can be considered as approximate predictions of the process $x(t+T)$.
Let us consider now the filtering problem.
Let $\Omega\in (0,\pi)$ be given. Let a function $\Phi_\Omega:{\mathbb{T}}\to{\bf R}$ be defined such that $\Phi_\Omega\left(i\o\right)={\mathbb{I}}_{|\omega|\ge \Omega}$.
We consider an ideal high-pass filter such that the trace of its transfer function on ${\mathbb{T}}$ is $\Phi_\Omega\left(i\o\right)$, $\omega\in(-\pi,\pi]$.
The goal is to obtain an arbitrarily close approximation of this ideal non-causal high-pass filter by causal transfer functions.
\begin{definition}\label{defF} Let ${\cal X}\subset\ell_\infty$.
\begin{itemize}
\item[(i)]
We say that a class ${\cal X}\subset \ell_2$ allows causal high-pass filtering up to the spectrum gap $(-\Omega,\Omega)$ if there is a sequence $\{\widetilde
p_{d}(\cdot)\}_{d=1}^{+\infty}\subset{\cal P}({\cal X})$ such that $$
\sup_{t\in{\mathbb{Z}}}|\widetilde x(t)-\widetilde y_{d}(t)|\to 0\quad \hbox{as}\quad
d\to+\infty\quad\forall x\in{\cal X}, $$ where \begin{eqnarray*} &&
\widetilde x={\cal Z}^{-1}(\Phi_\Omega X),\quad X={\cal Z} x,\quad \widetilde y_{d}= \widetilde p_d(x(\cdot)). \end{eqnarray*}
\item[(ii)]
We say that the class ${\cal X}$ allows uniform causal high-pass filtering up to the spectrum gap
$(-\Omega,\Omega)$ if there exists a sequence $\{\widetilde
p_{d}(\cdot)\}_{d=1}^{+\infty}\subset{\cal P}({\cal X})$ such that \begin{eqnarray*} \sup_{t\in{\mathbb{Z}}}|\widetilde x(t)-\widetilde
y_{d}(t)|\to 0\quad\hbox{uniformly in} \quad x\in{\cal X},\end{eqnarray*}
where $\widetilde x$ and $\widetilde
y_{d}$ are as in part (i) above.
\end{itemize}
\end{definition}
In the last definition, the operators $p_d$ represent causal near-ideal high-pass filters; they ensure,
for the class ${\cal X}$, an arbitrarily close approximation of the non-causal ideal high-pass filter defined by its transfer function $\Phi_\Omega$.
\section{The main result}
\label{secM}
For $\bar\Omega\in (0,\pi)$, let
$\X(\oo\O)$ be the set of all signals $x:{\mathbb{Z}}\to E$ such that
$x(\cdot)\in\ell_2$ and $X\left(i\o\right)=0$ for $\omega\in (-\bar\Omega,\bar\Omega)$ and $X={\cal Z} x$.
For $d=0,1,2,...$, let $\Psi_d^E$ be the set of all functions $\psi:{\bf C}\setminus \{1\}\to{\bf C}$ represented as \begin{eqnarray}
\psi(z)=\sum_{k=0}^d \frac{a_k}{(1-z^{-1})^k},
\label{g}\end{eqnarray} where the coefficients $a_k\in E$ are arbitrary. Let $\Psi^E:= \cup_{d}\Psi_d^E$.
\vspace{0.4cm}
\begin{lemma}\label{lemmaA} For $\bar\Omega\in (0,\pi)$, let the function $\zeta:[-\pi,\pi]\to {\bf C}$ be defined either as $\zeta(\omega)=e^{i\omega T}$ or as $\zeta(\omega)={\mathbb{I}}_{|\omega|\ge \Omega}$ for some $\Omega\in[\bar\Omega,\pi)$, and let $I_{\bar\Omega}:= [-\pi,-\bar\Omega]\cup [\bar\Omega,\pi]$. Then,
for any $\varepsilon>0$, there exist an integer $d>0$ and $\psi_d\in\Psi_d^{\bf R}$ such that
\begin{eqnarray}
\left(\int_{I_{\bar\Omega}}|\zeta(\omega)-\psi_d\left(i\o\right)|^2 d\omega\right)^{1/2} \le \varepsilon. \label{e2}
\end{eqnarray}
\end{lemma}
\begin{theorem}\label{ThP} For $\Omega\in (0,\pi)$, the predictability for $x\in \X(\O)$ considered in Definition \ref{defP}(i), as well as the uniform predictability for $x\in \X(\O) \cap \{x\in \ell_2:\ \|x\|_{\ell_2}\le 1\}$ considered in Definition \ref{defP}(ii),
can be ensured with
the sequence of the predictors $ p_d:\X(\O)\to \ell_2$, $d=1,2,...,$ defined by their transfer functions $\psi_d (z)$ selected as in Lemma \ref{lemmaA} with $\bar\Omega=\Omega$ and $\zeta(\omega)=e^{i\omega T}$. More precisely, for any $\bar\varepsilon>0$ and $\widehat y_d(t)= p_d(x(\cdot))(t)$, the estimate
\begin{eqnarray*}
\sup_{t\in{\mathbb{Z}}}|x(t+T)-\widehat y_d(t)|\le\bar\varepsilon
\end{eqnarray*}
holds if $d$ and $\psi_d$ are such that
(\ref{e2}) holds with $\zeta(\omega)=e^{i\omega T}$ for sufficiently small $\varepsilon$.
\end{theorem}
\begin{theorem}\label{ThF} For $\Omega\in (0,\pi)$ and any $\Omega_0\in (0,\Omega)$,
the causal filtering for $x\in \X(\O_0)$ considered in Definition \ref{defF}(i), as well as the uniform causal filtering for $x\in \X(\O_0) \cap \{x\in \ell_2:\ \|x\|_{\ell_2}\le 1\}$ considered in Definition \ref{defF}(ii),
can be ensured with
the sequence of the causal filters $ p_d:\X(\O_0)\to \ell_2$, $d=1,2,...,$ defined by their transfer functions $\psi_d (z)$ selected as in Lemma \ref{lemmaA} with $\bar\Omega=\Omega_0$ and $\zeta(\omega)={\mathbb{I}}_{|\omega|\ge \Omega}$. More precisely, for any $\bar\varepsilon>0$, $\widehat y_d(t)= p_d(x(\cdot))(t)$, and $\widetilde x={\cal Z}^{-1}(\Phi_\Omega X)$, the estimate
\begin{eqnarray*}
\sup_{t\in{\mathbb{Z}}}|\widetilde x(t)-\widehat y_d(t)|\le\bar\varepsilon\end{eqnarray*}
holds if $d$ and $\psi_d$ are such that
(\ref{e2}) holds with $\zeta(\omega)={\mathbb{I}}_{|\omega|\ge \Omega}$ for sufficiently small $\varepsilon$.
\end{theorem}
According to this theorem, a process with an arbitrarily small spectrum gap $(-\Omega_0,\Omega_0)$
can be converted, using causal operations, into a process with a larger spectrum gap up to $(-\Omega,\Omega)$.
\par
It can be noted that:
\begin{itemize}
\item
The transfer functions $\psi_d(z)$ are analytic in the domain
${\bf C}\setminus\{1\}$.
If we apply their traces $\psi_d \left(i\o\right)|_{\omega \in (-\pi,\pi]}$ on ${\mathbb{T}}$ for calculation of the outputs
for inputs $x\in \X(\O)$, then we obtain the same outputs as for the functions $\psi_d \left(i\o\right) {\mathbb{I}}_{\omega\in (-\pi,\pi], |\omega|>\Omega}$.
\item
For real valued inputs $x$, the outputs of these predictors are real valued.
\item $p_d(\cdot)$ depends on $T$ and $\Omega$ via the coefficients $a_k$ (see the numerical sketch after this list)
in the setting of Theorem \ref{ThP}, and $p_d(\cdot)$ depends on $\Omega$ and $\Omega_0$ via the coefficients $a_k$
in the setting of Theorem \ref{ThF}.
\end{itemize}
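To illustrate how the coefficients $a_k$ could be obtained in practice, the following hypothetical Python sketch fits real $a_k$ by least squares on a frequency grid over $I_\Omega$ (the horizon, gap, degree $d$, and grid size are illustrative placeholders; the quality of the fit depends on $d$ and the grid):
\begin{verbatim}
import numpy as np

T, Omega, d = 1, 0.3 * np.pi, 12
w = np.linspace(Omega, np.pi, 400)
w = np.concatenate([-w[::-1], w])        # grid on I_Omega

base = 1.0 / (1.0 - np.exp(-1j * w))     # the building block (1-z^{-1})^{-1}
A = np.stack([base**k for k in range(d + 1)], axis=1)
target = np.exp(1j * w * T)              # zeta(w) = e^{i w T}

# Real unknowns a_k: stack real and imaginary parts of the system.
A_ri = np.vstack([A.real, A.imag])
b_ri = np.concatenate([target.real, target.imag])
a, *_ = np.linalg.lstsq(A_ri, b_ri, rcond=None)
print(np.abs(A @ a - target).max())      # approximation error on the grid
\end{verbatim}
For the filtering problem, \texttt{target} would be replaced by the indicator ${\mathbb{I}}_{|\omega|\ge \Omega}$ evaluated on a grid over $I_{\Omega_0}$.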
\section{Representation of operators $p_d(\cdot)$ in the time domain}
\label{secR}
Let either $\bar\Omega=\Omega$ or $\bar\Omega=\Omega_0$.
Consider operators $h_k$ defined on $\X(\oo\O)$ by their transfer functions
$H_k(z)=(1-z^{-1})^{-k}$, $k=0,1,2,...$\,. In other words, if $y=h_k(x)$ for $x\in\X(\oo\O)$, then
$Y(z)=(1-z^{-1})^{-k}X(z)$ for $Y={\cal Z} y$ and $X={\cal Z} x$.
Clearly, \begin{eqnarray*}
&& H_{k+1}(z)=H_1(z)H_{k}(z), \quad h_{k+1}(x(\cdot))=h_1(h_{k}(x(\cdot))), \quad k=0,1,2,3,...
\label{hint}\end{eqnarray*}
Hence $h_{k}(x(\cdot))\in\X(\oo\O)$ for all $k=0,1,2,...$, $x\in\X(\oo\O)$.
Therefore,
the
Z-transforms of processes
$h_{k}(x(\cdot))$ vanish on $\{e^{i\omega},\ \omega\in[-\pi,\pi],\ |\omega|<\bar\Omega\}$, and the operators
$h_k:\X(\oo\O)\to \ell_2$ are continuous, assuming that $\X(\oo\O)$ is a subspace of $\ell_2$ provided with $\ell_2$-norm.
Let $\varphi\in\ell_\infty$ be defined such that $\varphi(t)=0$ for $t<0$ and $\varphi(t)=1$ for $t\ge 0$,
i.e., formally, $\varphi={\cal Z}^{-1}H_1$.
Let $I_{\bar\Omega}:= [-\pi,-\bar\Omega]\cup [\bar\Omega,\pi] $ and $x\in\X(\oo\O)$.
Let us show that, in the time domain, the operator $h_1$ can be represented as a causal convolution with the kernel $\varphi$, i.e.,
if $x\in\X(\oo\O)$, then $h_1(x(\cdot))(t)=\sum_{s=-\infty}^t x(s)$.
Let ${\bf h}_{1,m}(t)=\varphi(t){\mathbb{I}}_{\{t<m\}}$. Clearly, ${\bf h}_{1,m}\in\ell_2$. Let
\begin{eqnarray*}
H_{1,m}(z):= {\cal Z} {\bf h}_{1,m}=\frac{1-z^{-m}}{1-z^{-1}}, \quad R_{m}(z):= {\cal Z} (\varphi-{\bf h}_{1,m})=\frac{z^{-m}}{1-z^{-1}}.
\end{eqnarray*} Clearly, $(1-e^{-i\omega})^{-1}e^{i \omega t} X(e^{i\omega}) \in L_2(I_{\bar\Omega};{\bf C})$ for any $t$. Hence
\begin{eqnarray*}
\int_{-\pi}^\pi R_{m}\left(i\o\right) e^{i \omega t}X\left(i\o\right) d\omega= \int_{I_{\bar\Omega}} \frac{e^{-i m \omega}}{1-e^{-i\omega}} e^{i \omega t}X\left(i\o\right) d\omega \to 0
\quad \hbox{as}\quad m \to+\infty
\end{eqnarray*}
for each $t\in{\mathbb{Z}}$. It follows that if $x\in\X(\oo\O)$ then \begin{eqnarray*}
h_1(x(\cdot))(t)={\cal Z}^{-1} \left((R_m+H_{1,m})X\right)(t)=\lim_{m\to +\infty}{\cal Z}^{-1}\left(H_{1,m}X\right)(t)=\lim_{m\to +\infty}\sum_{s=t-m+1}^t x(s)=\sum_{s=-\infty}^t x(s),
\end{eqnarray*}
and the series converges for each $t\in{\mathbb{Z}}$.
It can be noted that if $x\in\X(\oo\O)\cap \ell_1$, then the series $\sum^t_{s=-\infty} x(s)$ converges absolutely; however, for a general $x\in\X(\oo\O)$, there is no guarantee that $x\in \ell_1$
or $ h_k(x(\cdot)) \in \ell_1$.
This implies that
\begin{eqnarray*}
h_{k}(x(\cdot))(t)=\sum_{s=-\infty}^t h_{k-1}(x(\cdot))(s), \quad k=1,2,3,...\ .
\label{hint2}\end{eqnarray*}
Therefore, the operators $p_d$ in Theorems \ref{ThP}-\ref{ThF} can be represented as \begin{eqnarray*}
p_d(x(\cdot))(t)=\sum_{k=0}^d a_k h_k(x(\cdot))(t),
\label{Kkk}
\label{pred2}
\end{eqnarray*}
where
\begin{eqnarray}
\hspace{-0.5cm}&&h_k(x(\cdot))(t)= \sum_{s_{k-1}=-\infty}^{t}\
\sum_{s_{k-2}=-\infty}^{s_{k-1}}\ ... \sum_{s_1=-\infty}^{s_2}
\sum_{s=-\infty}^{s_1} x(s).
\label{int1}\end{eqnarray}
All series here converge as described above for $h_1$.
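In practice, the semi-infinite sums in (\ref{int1}) have to be truncated. Under this truncation assumption, $h_k$ is a $k$-fold running sum and $p_d$ reduces to repeated cumulative summation, as in the following minimal Python sketch (the finite observation window is an assumption, not part of the exact representation):
\begin{verbatim}
import numpy as np

def p_d(x, a):
    # x: observed samples x(s) on a finite window, oldest first
    # a: coefficients a_0, ..., a_d of psi_d
    h = x.astype(float)              # h_0(x) = x
    y = a[0] * h
    for k in range(1, len(a)):
        h = np.cumsum(h)             # h_k = running sum of h_{k-1}
        y = y + a[k] * h
    return y                         # approximate p_d(x)(t), per t
\end{verbatim}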
\section{ On numerical implementation of Theorems \ref{ThP}-\ref{ThF} }\label{secD}
The direct implementation of the predictors and filters introduced in Theorems \ref{ThP}-\ref{ThF} requires
evaluation of sums of semi-infinite series, which is not practically feasible.
However, these theorems could lead to prediction and filtering methods bypassing this calculation.
Let us discuss these possibilities.
\par
Let $t_1\in{\mathbb{Z}}$ be given. Let $x_k:= h_k(x)$ for $x\in\X(\oo\O)$, $k=1,2,...$, and let
\begin{eqnarray*}
\eta_k:= x_k(t_1-1).
\end{eqnarray*}
\begin{lemma}\label{lemma1} In the notation of Theorems \ref{ThP}-\ref{ThF}, for any $t\ge t_1$, we have that $\widehat y_d=p_d(x(\cdot))$ can be represented as
\begin{eqnarray}
\widehat y_d(t)= a_0 x(t)+
\sum_{k=1}^d a_k \left(\sum_{l=1}^k c_{k,l}(t)\eta_l+f_{k}(t)\right).
\label{viaeta0}\end{eqnarray}
Here $a_k\in{\bf R}$ are the coefficients of $\psi_d(z)=\sum_{k=0}^d a_k (1-z^{-1})^{-k}$ from Theorems \ref{ThP}-\ref{ThF}, the functions $f_k$ are the nested running sums of the observations,
\begin{eqnarray*}
&&
f_1(t)=\sum_{s=t_1}^{t} x_{0}(s),\qquad f_k(t)=\sum_{s=t_1}^{t} f_{k-1}(s),\quad k=2,3,...,
\end{eqnarray*}
with $x_0:= x$, and the coefficients are $c_{k,l}(t)=g_{k-l}(t)$, where
\begin{eqnarray*}
&& g_0(t)\equiv 1,\qquad g_m(t)=\sum_{\tau=t_1}^{t} g_{m-1}(\tau),\quad m=1,2,... .
\end{eqnarray*}
In particular, $c_{k,k}(t)=1$, $c_{k,k-1}(t)=t-t_1+1$, and
\begin{eqnarray*}
&&c_{k,l}(t)=\sum_{\tau_1=t_1}^{t_{}} \sum_{\tau_2=t_1}^{\tau_1} ...\sum_{\tau_{k-l-1}=t_1}^{\tau_{k-l-2}}(\tau_{k-l-1}-t_1+1),\qquad l= 1,...,k-2.
\end{eqnarray*}
\end{lemma}
This lemma shows that the calculation of $\widehat y_d(t)=p_d(x(\cdot))(t)$ is easy for $t\ge t_1$ if we know all $\eta_k$, $k=1,...,d$, and observe $x(s)|_{s=t_1,...,t}$.
Let us discuss some ways to evaluate $\eta_k$ bypassing summation of infinite series.
First, let us observe that (\ref{viaeta0}) implies a useful property given below.
\begin{corollary}\label{corr1} For any $\varepsilon>0$, there exist
an integer $d=d(\varepsilon)>0$ and $a_0,a_1,...,a_d\in{\bf R}$ such that,
for any $t_1\in {\mathbb{Z}}$,
there exist $\bar\eta_1,\bar\eta_2,...,\bar\eta_d\in{\bf R}$ such that
$|\widetilde x(t)-y_d(t)|\le \varepsilon$ for all $t\ge t_1$, where \begin{eqnarray}
y_d(t)= y_d(t,x(t),\bar\eta_1,...,\bar\eta_d):=
a_0 x(t)+\sum_{k=1}^d a_k \left(\sum_{l=1}^k c_{k,l}(t)\bar\eta_l+f_{k}(t)\right).
\label{viaeta}\end{eqnarray}
\end{corollary}
\vspace{1mm}
In this corollary, $d=d(\varepsilon)$ and $a_k\in{\bf R}$ are as defined in Theorems \ref{ThP}-\ref{ThF}.
\par
\subsubsection*{The case of prediction problem: Theorem \ref{ThP} setting}
Let us discuss using (\ref{viaeta0}) and (\ref{viaeta}) for the evaluation of $\eta_k$ in the setting of Theorem \ref{ThP}.
Let $\theta>t_1$. Assume first that the goal is to forecast the value $\widetilde x(t)=x(t+T)$
given observations at times $t\le \theta$, in the setting of Theorem \ref{ThP}.
It appears that if $\theta>t_1+T$ then Corollary \ref{corr1} gives an opportunity to construct predictors via fitting the parameters $\eta_1,...,\eta_d$ using past observations available for $t=t_1,...,\theta-T$: we can match the values $y_d(t,x(t),\bar\eta_1,...,\bar\eta_d)$ with the
past observations $x(t+T)$. From now on, we assume that $\theta>t_1+T$.
Let $d$ be large enough such that
$x(t+T)$ is approximated by $\widehat y_d(t)$ as described in Theorem \ref{ThP}, i.e.,
$\sup_{t\in{\mathbb{Z}}}|x(t+T)-\widehat y_d(t)|\le \varepsilon$ for some sufficiently small
$\varepsilon>0$, for some choice of $a_k$.
As an approximation of the true $\eta_1,...,\eta_d$,
we can accept a set $\bar\eta_1,...,\bar\eta_d$ such that
\begin{eqnarray}
|x(t+T)-y_d(t,x(t),\bar\eta_1,...,\bar\eta_d)|\le \varepsilon,\quad t=t_1,...,\theta-T.
\label{eta1}\end{eqnarray}
(Recall that, at time $\theta$, the values $x(t+T)$ and $y_d(t,x(t),\bar\eta_1,...,\bar\eta_d)$
are observable for these $t=t_1,...,\theta-T$.) If (\ref{eta1}) holds, we can conclude that
$y_d(t,x(t),\bar\eta_1,...,\bar\eta_d)$ delivers an acceptable prediction of $x(t+T)$ for these $t$.
Clearly, Theorem \ref{ThP} implies that a set $\bar\eta_1,...,\bar\eta_d$ ensuring (\ref{eta1})
exists since this inequality holds with $\bar\eta_k=\eta_k$.
The corresponding value $y_d(\theta,x(\theta),\bar\eta_1,...,\bar\eta_d)$ would give an estimate for $\widehat y_d(\theta)$ and, respectively, for $x(\theta+T)$.
However, finding a set $\bar\eta_1,...,\bar\eta_d$ that ensures (\ref{eta1}) could still be difficult. Instead, one can consider fitting predictions to observations at a finite number of points $t=t_1,...,\theta-T$.
Let an integer $\bar d\ge d$ and a set $\{t_m\}_{m=1}^{\bar d}\subset {\mathbb{Z}}$ be selected such that $t_1<t_2<t_3<...<t_{\bar d-1}<t_{\bar d}\le\theta-T$. We suggest using the observations $x(t)$ at times $t=t_m$.
Consider a system of equations
\begin{eqnarray}
a_0 x(t_m)+\sum_{k=1}^d a_k \left( \sum_{l=1}^k c_{k,l}(t_m)\bar\eta_l+f_{k}(t_m) \right)=\zeta_m, \quad m=1,...,\bar d.\label{sys3} \end{eqnarray}
\par
Consider first the case where $\bar d=d$. In this case, we can select $\zeta_m= x(t_m+T)$; these values are directly observable, without the calculation of sums of semi-infinite series required for $\widehat y_d(t_m)$. The corresponding choice of $\bar\eta_k$ ensures zero prediction error for $x(t_m+T)$, $m=1,...,\bar d$.
Including more observations, i.e., selecting a larger $\bar d>d$ and a larger set $\{t_1,...,\theta-T\}$, would
improve the estimation of $\eta_k$.
If we consider $\bar d>d$, then, in the general case, it would not be feasible to achieve that
$y_d(t_m,x(t_m),\bar\eta_1,...,\bar\eta_d)= x(t_m+T)$ for all $m$, since it cannot be guaranteed that system (\ref{sys3})
is solvable for $\zeta_m\equiv x(t_m+T)$: the system will be overdetermined.
Nevertheless, the estimate (\ref{eta1}) can still be achieved for an arbitrarily large $\bar d$, since (\ref{eta1}) holds with $\bar\eta_k=\eta_k$. A solution could be found using
methods for fitting linear models.
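As an illustration, the following hypothetical Python sketch fits $\bar\eta_1,...,\bar\eta_d$ by least squares, assuming the coefficients $a_k$ and the observation window are given (the window layout and the choice of fitting times are illustrative placeholders):
\begin{verbatim}
import numpy as np

def fit_eta(x, a, T, n_fit):
    # x: observations x(t), t = t_1,...,theta (oldest first); a: a_0..a_d
    d = len(a) - 1
    n = len(x) - T                   # times t with x(t+T) observed
    g = [np.ones(n)]                 # g_0(t) = 1
    for m in range(1, d):            # g_m = running sum of g_{m-1}
        g.append(np.cumsum(g[-1]))
    f = [np.cumsum(x[:n])]           # f_1 = running sum of x
    for k in range(2, d + 1):
        f.append(np.cumsum(f[-1]))   # f_k = running sum of f_{k-1}
    # coefficient of eta_l at time t: sum_{k>=l} a_k c_{k,l}(t),
    # with c_{k,l} = g_{k-l}
    M = np.stack([sum(a[k] * g[k - l] for k in range(l, d + 1))
                  for l in range(1, d + 1)], axis=1)
    rhs = (x[T:T + n] - a[0] * x[:n]
           - sum(a[k] * f[k - 1] for k in range(1, d + 1)))
    idx = np.linspace(0, n - 1, n_fit).astype(int)  # fitting times t_m
    eta, *_ = np.linalg.lstsq(M[idx], rhs[idx], rcond=None)
    return eta
\end{verbatim}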
So far, the consistency of these estimates is unclear, since a choice of smaller $\varepsilon$ leads to a larger $d$. We leave the analysis of these methods for future research.
\subsubsection*{The case of the causal filtering problem: Theorem \ref{ThF} setting}
In the setting of Theorem \ref{ThF}, the past values of the true unknown process $\widetilde x(t)$ are not observable and hence cannot be used for fitting the values $\eta_1,...,\eta_d$. However, we can use the fact that the values $\eta_1,...,\eta_d$ in (\ref{viaeta0})-(\ref{viaeta}) are the same as in the setting of Theorem \ref{ThP}, where $\widetilde x(t)=x(t+T)$. Since the past values $x(s)|_{s=t_1,...,t}$ are observable, we can use the fitting procedure based on Theorem \ref{ThP} to estimate $\eta_1,...,\eta_d$ using (\ref{viaeta0})-(\ref{viaeta}) with the coefficients $a_k$ defined for the approximation of $\zeta(\omega)=e^{i\omega T}$ and with observations $x(t_m)$, $t_m\le t$, as described above. After that, we can estimate $\widetilde x(t)$ using equation (\ref{viaeta0}) again with the new coefficients $a_k$ defined for the approximation of $\zeta(\omega)={\mathbb{I}}_{|\omega|\ge \Omega}$.
\section{Low frequency and other signals}\label{secLF}
Let us show that the results obtained above for high frequency signals can be applied to signals of more general type described as follows.
Let $\Omega\in (0,\pi)$, $\Omega_0\in (0,\Omega)$, and $\theta\in (-\pi,\pi]$ be given, and let
${\cal Y}(\Omega,\theta)$ be the set of all signals
$x\in\ell_2$ such that $X\left(e^{i(\omega-\theta)}\right)=0$ for $|\omega|<\Omega$, $\omega\in [-\pi,\pi]$ and $X={\cal Z} x$.
For example, ${\cal Y}(\Omega,0)={\cal X}(\Omega)$; this set includes high frequency signals such that
$X\left(i\o\right)=0$ if $|\omega|<\Omega$. Respectively, the set ${\cal Y}(\Omega,\pi)$ includes low frequency signals (band limited signals) such that
$X\left(i\o\right)=0$ if $\omega\in (-\pi,-\pi+\Omega)\cup (\pi-\Omega,\pi]$.
To predict a signal $\widehat x\in {\cal Y}(\Omega,\theta)$, one can convert it into
a signal $ x\in {\cal X}(\Omega)={\cal Y}(\Omega,0)$ as $x(t)=e^{-i\theta t}\widehat x(t)$. Then one can use for $x$ the predictors introduced in Theorem \ref{ThP}.
The implied prediction $\widehat y(t)$ for $\widehat x(t)$ can be obtained as $\widehat y(t)=e^{i\theta t}y(t)$,
where $y(t)$ is the corresponding prediction for $x(t)$.
Similarly, one can construct a causal filter that, for $x\in {\cal Y}(\Omega_0,\theta)$, produces
an approximation of $\widehat x\in {\cal Y}(\Omega,\theta)$ such that $\widehat x={\cal Z}^{-1}(\Phi_{\Omega,\theta} X)$, where $X={\cal Z} x$, and $\Phi_{\Omega,\theta}$ is the Z-transform of an ideal filter such that $\Phi_{\Omega,\theta}\left(e^{i(\omega-\theta)}\right)={\mathbb{I}}_{|\omega|>\Omega,\ \omega\in [-\pi,\pi]}$.
Again, one can convert it into
a signal $ x\in {\cal X}(\Omega_0)={\cal Y}(\Omega_0,0)$ as $x(t)=e^{-i\theta t}\widehat x(t)$. Then one can use for $x$ the causal filter introduced in Theorem \ref{ThF}.
The implied filtered signal $\widehat y(t)$ for $\widehat x(t)$ can be obtained as $\widehat y(t)=e^{i\theta t}y(t)$,
where $y(t)$ is the corresponding filtered signal for $x(t)$.
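A minimal sketch of this frequency-shift reduction (with a placeholder \texttt{apply\_p\_d} standing for a predictor or causal filter constructed for ${\cal X}(\Omega)$) could be:
\begin{verbatim}
import numpy as np

def shifted_apply(apply_p_d, x_hat, theta):
    # x_hat: observed signal in Y(Omega, theta)
    t = np.arange(len(x_hat))
    x = np.exp(-1j * theta * t) * x_hat   # demodulate: now in X(Omega)
    y = apply_p_d(x)                      # predictor/filter for X(Omega)
    return np.exp(1j * theta * t) * y     # implied output for x_hat
\end{verbatim}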
Alternatively, we can construct predictors and filters directly for signals from ${\cal Y}(\Omega,\theta)$
similarly to the ones introduced in Theorems \ref{ThP}-\ref{ThF} and with the transfer functions
\begin{eqnarray*}
\sum_{k=0}^d \frac{a_k}{(1-e^{i\theta}/z)^k}
\label{g1}\end{eqnarray*} approximating $e^{i\omega T}$ and ${\mathbb{I}}_{|\omega|>\Omega}$ on ${\mathbb{T}}$.
\section{Proofs}\label{secProof}
{\em Proof of Lemma \ref{lemmaA}}.
For a set $I\subset [-\pi,0)\cup(0,\pi]$,
let $\Gamma^E(I)$ (or $\Gamma_d^E(I)$) be the set of functions $\gamma:I\to {\bf C}$ constructed as $\gamma(\omega)=\psi\left(i\o\right)$ for some $\psi$ from $\Psi^E$ (or from $\Psi_d^E$, respectively).
Let $I_{\bar\Omega}:= [-\pi,-\bar\Omega]\cup [\bar\Omega,\pi] $.
Clearly, $\frac{1}{1-z^{-1}}=1-\frac{1}{1-z}$ for all $z\in{\bf C}$, $z\neq 1$.
Hence \begin{eqnarray*}\overline{\left(\frac{1}{1-1/e^{i\omega}}\right)}=\frac{1}{1-e^{i\omega}}=1-\frac{1}{1-1/e^{i\omega}}, \qquad \omega\in{\bf R},\quad \omega\neq 0.\end{eqnarray*}
It follows that, if $\psi(z)\in\Psi^E$, then $\psi(z^{-1})\in\Psi^E$, for both $E={\bf R}$ and $E={\bf C}$. This implies that $\overline{\gamma(\omega)}\in \Gamma^{E}(I_{\bar\Omega})$ if $\gamma(\omega)\in\Gamma^E(I_{\bar\Omega})$.
Since the function $\gamma_1(\omega)={\rm Im\,}\frac{1}{1-e^{-i\omega}}$ is strictly monotone
on the intervals $(-\pi,0)$ and $(0,\pi)$, and has different signs on these two intervals,
it follows that $\gamma_1(\alpha)\neq \gamma_1(\beta)$ for all $\alpha,\beta\in I_{\bar\Omega}$, $\alpha\neq \beta$. (Note that the real part of $\frac{1}{1-e^{-i\omega}}$ is identically $1/2$ and therefore does not separate points.)
It follows that
the set of functions
$\Gamma^{\bf C}(I_{\bar\Omega})$ separates points on the compact set $I_{\bar\Omega}$.
By the Stone-Weierstrass Theorem for complex valued continuous functions on compact sets of real numbers, it follows that the set $\Gamma^{\bf C}(I_{\bar\Omega})$ is complete in the space $C(I_{\bar\Omega};{\bf C})$ of continuous complex-valued functions defined on $I_{\bar\Omega}$ with the supremum norm;
see, e.g., Theorem 10 in \cite{SW}, pp. 238. It follows that, for any $\varepsilon>0$, there exist an integer $d>0$ and $\widehat\gamma_d\in \Gamma_d^{\bf C}(I_{\bar\Omega})$ represented as
$\widehat\gamma_d(\omega)=\sum_{k=0}^d \frac{A_k}{\left(1-e^{-i\omega}\right)^{k}}$, defined for $\omega\in{\bf R}\setminus \{0\}$ with $A_k\in {\bf C}$, such that
\begin{eqnarray*}
&& \left(\int_{I_{\bar\Omega}}|\zeta(\omega)-\widehat\gamma_d(\omega)|^2d\omega\right)^{1/2}\le\varepsilon.
\end{eqnarray*}
For $\zeta(\omega)=e^{i\omega T}$ this follows directly from Theorem 10 in \cite{SW}, pp. 238, mentioned above.
For $\zeta(\omega)={\mathbb{I}}_{|\omega|\ge \Omega}$ this follows from the fact that
the set $C(I_{\bar\Omega};{\bf C})$ is
everywhere dense in $L_2(I_{\bar\Omega};{\bf C})$, and convergence in $C(I_{\bar\Omega};{\bf C})$ implies convergence in $L_2(I_{\bar\Omega};{\bf C})$.
Let us show that
the same estimate holds for the function $\gamma_d\in\Gamma_d^{\bf R}(I_{\bar\Omega})$
defined as
$\gamma_d(\omega)=\sum_{k=0}^d \frac{a_k}{\left(1-e^{-i\omega}\right)^{k}}$, where $a_k={\bf R} A_k$.
Suppose that ${\rm Im\,} A_k\neq 0$ for some $k$. Clearly,
the real and the imaginary parts of $\frac{ i {\rm Im\,} A_k}{\left(1-e^{-i\omega}\right)^{k}}$
are odd and even, respectively. On the other hand,
the functions ${\bf R}\, e^{i\omega T}=\cos(\omega T)$ and ${\rm Im\,} e^{i\omega T}=\sin(\omega T)$ are even
and odd, respectively, on ${\bf R}$ (and the same parity argument applies to the even real function $\zeta(\omega)={\mathbb{I}}_{|\omega|\ge \Omega}$). Since $I_{\bar\Omega}$ is symmetric, components of opposite parity are orthogonal in $L_2(I_{\bar\Omega};{\bf C})$; therefore, the replacement of $A_k$ by $a_k={\bf R} A_k$
cannot increase the approximation error.
Hence the transfer
function $\psi_d\left(i\o\right)=\gamma_d(\omega)=\sum_{k=0}^d \frac{a_k}{\left(1-e^{-i\omega}\right)^{k}}$ satisfies the required estimate. This completes the proof of Lemma \ref{lemmaA}. $\Box$
\vspace{2mm}
\par
{\em Proof of Theorems \ref{ThP}-\ref{ThF}}. We will prove these theorems simultaneously. Again, we assume that $\bar\Omega=\Omega$ and $\zeta:I_{\bar\Omega}\to {\bf C}$ is defined as $\zeta(\omega)=e^{i\omega T}$ for the proof of Theorem \ref{ThP},
and that $\bar\Omega=\Omega_0$ and $\zeta(\omega)={\mathbb{I}}_{|\omega|\ge \Omega}$ for the proof of Theorem \ref{ThF}.
Assume that estimate (\ref{e2}) holds for selected $d$ and $\psi_d$. We have that
\begin{eqnarray*}
&&E(t)=\widetilde x(t)-\widehat y_d(t)=\frac{1}{2\pi}\int_{-\pi}^\pi e^{i \omega t} (\zeta(\omega)-\psi_d\left(i\o\right))X\left(i\o\right) d\omega,
\end{eqnarray*}
where $\widetilde x(t)=x(t+T)$ in the setting of Theorem \ref{ThP}, and $\widetilde x(t)$ is the ideal filtered process in the setting of Theorem \ref{ThF}. Since $X\left(i\o\right)=0$ for $|\omega|<\bar\Omega$, it follows that
\begin{eqnarray*}
|E(t)|&\le& \frac{1}{2\pi}\int_{I_{\bar\Omega}} |\zeta(\omega)-\psi_d\left(i\o\right)| |X\left(i\o\right)| d\omega
\\ &\le& \frac{1}{2\pi}\left(\int_{I_{\bar\Omega}}|\zeta(\omega)-\psi_d\left(i\o\right)|^2 d\omega\right)^{1/2}
\left(\int_{-\pi}^\pi|X\left(i\o\right)|^2 d\omega\right)^{1/2}.
\end{eqnarray*}
By the Parseval equality,
\begin{eqnarray*}
\left(\frac{1}{2\pi}\int_{-\pi}^\pi|X\left(i\o\right)|^2 d\omega\right)^{1/2}=\|x\|_{\ell_2}.
\end{eqnarray*}
Hence $|\widetilde x(t)-\widehat y_d(t)|\le (2\pi)^{-1/2}\,\varepsilon\,\|x\|_{\ell_2}$, and the required bound by $\bar\varepsilon$ holds if $\varepsilon$ in (\ref{e2}) is sufficiently small. This completes the proofs of Theorems \ref{ThP} and \ref{ThF}. $\Box$
{\em Proof of Lemma \ref{lemma1}}.
We will show that, for $t\ge t_1$ and $k=1,...,d$,
\begin{eqnarray*} x_k(t)=\eta_k+\sum_{s=t_1}^{t}x_{k-1}(s)=
\sum_{l=1}^k c_{k,l}(t)\eta_l+f_{k}(t).
\end{eqnarray*}
We have that
$\widehat y_d(t)=a_0x(t)+\sum_{k=1}^d a_k x_k(t)$ for any $t\ge t_1$, i.e.,
\begin{eqnarray}
\widehat y_d(t) =a_0x(t)+\sum_{k=1}^d a_k \left(\eta_k+\sum_{s=t_1}^{t}x_{k-1}(s)\right).
\label{sys} \end{eqnarray}
Here we assume that $x_0:= x$.
Further, we have that
\begin{eqnarray*}
&&\sum_{\tau=t_1}^{t_{}}x_{1}(\tau) =\sum_{\tau=t_1}^{t_{}}\left(\eta_1 +\sum_{s=t_1}^\tau x_{0}(s)\right)=\eta_1(t_{}-t_1+1)+\sum_{\tau=t_1}^{t_{}}\sum_{s=t_1}^\tau x_{0}(s)
\end{eqnarray*}
and
\begin{eqnarray*}
\sum_{\tau_1=t_1}^{t_{}}x_{2}(\tau_1) =\sum_{\tau_1=t_1}^{t_{}}\left(\eta_2 +\sum_{s=t_1}^{\tau_1} x_{1}(s)\right)=\eta_2(t_{}-t_1+1)+
\sum_{\tau_1=t_1}^{t_{}}\sum_{s=t_1}^{\tau_1} x_{1}(s)\\
=
\eta_2(t_{}-t_1+1)+\sum_{\tau_1= t_1}^{t_{}}
\left[\eta_1(\tau_1-t_1+1)+ \sum_{\tau_2=t_1}^{\tau_1}\sum_{s= t_1}^{\tau_2} x_{0}(s)\right]
\\=
\eta_2(t_{}-t_1+1)+\eta_1 \sum_{\tau_1= t_1}^{t_{}} (\tau_1-t_1+1)
+\sum_{\tau_1=t_1}^{t_{}} \sum_{\tau_2=t_1}^{\tau_1}\sum_{s= t_1}^{\tau_2} x_{0}(s).
\end{eqnarray*}
Similarly,
\begin{eqnarray*}
\sum_{\tau_1=t_1}^{t_{}}x_{3}(\tau_1) =\sum_{\tau_1=t_1}^{t_{}}\left(\eta_3 +\sum_{s=t_1}^{\tau_1} x_{2}(s)\right)=\eta_3(t_{}-t_1+1)+
\sum_{\tau_1=t_1}^{t_{}}\sum_{s=t_1}^{\tau_1} x_{2}(s)
\\
=
\eta_3(t_{}-t_1+1)+\sum_{\tau_1= t_1}^{t_{}}
\left[\eta_2(\tau_1-t_1+1)+ \sum_{\tau_2=t_1}^{\tau_1}\sum_{s= t_1}^{\tau_2} x_{1}(s)\right]
\\=
\eta_3(t_{}-t_1+1)+\eta_2 \sum_{\tau_1= t_1}^{t_{}} (\tau_1-t_1+1)
+\sum_{\tau_1=t_1}^{t_{}} \sum_{\tau_2=t_1}^{\tau_1} \sum_{\tau_3=t_1}^{\tau_2} x_{1}(\tau_3)\\
= \eta_3(t_{}-t_1+1)+\eta_2 \sum_{\tau_1= t_1}^{t_{}} (\tau_1-t_1+1)
+
\sum_{\tau_1=t_1}^{t_{}} \sum_{\tau_2=t_1}^{\tau_1} \sum_{\tau_3=t_1}^{\tau_2}\Bigl[\eta_1+\sum_{s=t_1}^{\tau_3}x_{0}(s)\Bigr]
\\=
\eta_3(t_{}-t_1+1)+\eta_2 \sum_{\tau_1= t_1}^{t_{}} (\tau_1-t_1+1)
+
\sum_{\tau_1=t_1}^{t_{}} \sum_{\tau_2=t_1}^{\tau_1}\Bigl[\eta_1 (\tau_2-t_1+1)
+\sum_{\tau_3=t_1}^{\tau_2}\sum_{s=t_1}^{\tau_3}x_{0}(s)\Bigr]
\\=
\eta_3(t_{}-t_1+1)+\eta_2 \sum_{\tau_1= t_1}^{t_{}} (\tau_1-t_1+1)
+
\sum_{\tau_1=t_1}^{t_{}} \sum_{\tau_2=t_1}^{\tau_1}\eta_1 (\tau_2-t_1+1)
+
\sum_{\tau_1=t_1}^{t_{}} \sum_{\tau_2=t_1}^{\tau_1}\sum_{\tau_3=t_1}^{\tau_2}\sum_{s=t_1}^{\tau_3}x_{0}(s).
\end{eqnarray*}
Similarly, we obtain that, for $k>2$,
\begin{eqnarray*}
\sum_{s=t_1}^{t_{}}x_{k}(s) = \eta_k(t_{}-t_1+1)+\eta_{k-1}
\sum_{\tau_1= t_1}^{t_{}} (\tau_1-t_1+1)+... +
\eta_{1} \sum_{\tau_1=t_1}^{t_{}} \sum_{\tau_2=t_1}^{\tau_1} ...\sum_{\tau_{k-1}=t_1}^{\tau_{k-2}}(\tau_{k-1}-t_1+1)
\\
+\sum_{\tau_1=t_1}^{t_{}} \sum_{\tau_2=t_1}^{\tau_1} ...\sum_{s=t_1}^{\tau_k} x_{0}(s).
\label{etak}\end{eqnarray*}
\par
It follows that
\begin{eqnarray*} x_k(t)=\eta_k+\sum_{s=t_1}^{t}x_{k-1}(s)=
\sum_{l=1}^k c_{k,l}(t)\eta_l+f_{k}(t).
\end{eqnarray*}
Together with (\ref{sys}), this proves (\ref{viaeta0}) and completes the proof of Lemma \ref{lemma1}. $\Box$
\section{Concluding remarks}
\begin{enumerate}
\item
The approach suggested in this paper allows many modifications.
In particular, other non-causal discrete time transfer functions can be approximated by
causal transfer functions from $\Psi^E$. In fact, any transfer function $H(z)$ can be approximated that way if $\int_{-\pi}^\pi |H\left(i\o\right)|^2d\omega <+\infty$.
\item
It can be shown that, by Theorem 10 in \cite{SW}, pp. 238, again,
the approximation of $\zeta(\omega)={\mathbb{I}}_{|\omega|\ge
\Omega}$ in Lemma \ref{lemmaA} can
in fact be achieved within the set of real-valued functions represented as
\begin{eqnarray*}
\gamma_d(\omega)=\psi_d\left(i\o\right)=\sum_{k=0}^db_k\left|\frac{1}{1-e^{i\omega}}\right|^{2k}=\sum_{k=0}^db_k\left(\frac{1}{1-e^{-i\omega}}\right)^k \left(1-\frac{1}{1-e^{-i\omega}}\right)^k
\end{eqnarray*}
with $b_k\in{\bf R}$. This may help to streamline calculations, since this set is smaller than $\Psi^{\bf R}$. If the $b_k$ are found, then we can derive the coefficients $a_k$ needed for the fitting of $\eta_k$ via (\ref{viaeta0})-(\ref{viaeta}).
\item The predictors introduced in \cite{D12,D12b} do not allow the fitting procedure described in Section \ref{secD} since the kernels of the corresponding causal convolutions are heavily time dependent.
\item In the present paper, we consider $L_2$-approximation of non-causal transfer functions; this allowed us to approximate transfer functions that are discontinuous on ${\mathbb{T}}$, as used for the filtering problem. In addition, this would allow the use of the Gram-Schmidt procedure to construct the functions $\psi_d$.
This was not feasible in the continuous time setting \cite{D22rus}, where uniform approximation on infinite intervals was required.
\item In general, it can be expected that the approximating functions $\psi_d\left(i\o\right)$ take large
values for large $d$ inside the interval $(-\bar\Omega,\bar\Omega)$, in the notation of Lemma \ref{lemmaA}. However, some robustness of the prediction and filtering with respect to noise contamination can be established similarly to \cite{D12}. We leave this for future research.
\end{enumerate}
|
{
"arxiv_id": "2302.14332",
"language": "en",
"timestamp": "2023-03-01T02:09:45",
"url": "https://arxiv.org/abs/2302.14332",
"yymm": "2302"
} | \section{Introduction}
\label{sec:intro}
The majority of modern robotic automation utilizes cameras for rich sensory information about the environment to infer tasks to be completed and provide feedback for closed-loop control.
The leading paradigm for converting the valuable environment information to the robot's frame of reference for manipulation is position-based visual servoing (PBVS)~\cite{chaumette2016visual}.
At a high level, PBVS takes 3D environmental information inferred from the visual data (e.g., the pose of an object to be grasped) and transforms it to the robot coordinate frame, where all the robot geometry is known (e.g., kinematics), using the camera-to-robot pose.
Examples of robotic automation using PBVS range from bin sorting \cite{mahler2019learning} to tissue manipulation in surgery \cite{li2020super}.
\begin{figure}[t!]
\centering
\input{figures/scatter_methods}
\caption{Comparison of speed and accuracy (based on AUC metric) for existing image-based robot pose estimation methods.}
\label{fig:methods_scatter}
\end{figure}
Calibrating the camera-to-robot pose typically requires a significant amount of care and effort. Traditionally, the camera-to-robot pose is calibrated with externally attached fiducial markers (\eg Aruco Marker~\cite{garrido2014aruco}, AprilTag~\cite{olson2011apriltag}). The 2D location of the marker can be extracted from the image, and the corresponding 3D location on the robot can be calculated with forward kinematics. Given a set of 2D-3D correspondences, the camera-to-robot pose can be solved using Perspective-n-Point (PnP) methods~\cite{lepetit2009epnp,gao2003p3p}.
The procedure usually requires multiple runs with different robot configurations, and once calibrated, the robot base and the camera are assumed to be static.
The lack of online calibration capability limits the potential applications for vision-based robot control in the real world, where minor bumps or simply shifting due to repetitive use will throw calibrations off, not to mention that real-world environmental factors like vibration, humidity, and temperature are non-constant.
Flexibility in the placement of the camera and the robot is more desirable, so that the robot can interact with an unstructured environment.
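For reference, a minimal sketch of this classical calibration step with OpenCV's PnP solver could look as follows (the detected 2D marker locations, their 3D positions from forward kinematics, and the intrinsics are placeholders):
\begin{verbatim}
import cv2
import numpy as np

# points_3d: (N, 3) marker positions in the robot base frame
# points_2d: (N, 2) detected marker locations in the image
# K: (3, 3) camera intrinsic matrix; dist: distortion coefficients
def calibrate_camera_to_robot(points_3d, points_2d, K, dist):
    ok, rvec, tvec = cv2.solvePnP(points_3d.astype(np.float32),
                                  points_2d.astype(np.float32),
                                  K, dist)
    R, _ = cv2.Rodrigues(rvec)     # rotation matrix from Rodrigues vector
    T = np.eye(4)                  # robot-base-to-camera transform
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    return ok, T
\end{verbatim}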
Deep learning, known as the current state-of-the-art approach for image feature extraction, brings promising ways for markerless camera-to-robot calibration.
Current approaches to robot pose estimation are mainly classified into two categories: keypoint-based methods ~\cite{lee2020dream,lu2022keypoint,richter2021robotic,lambrecht2019towards,lambrecht2021optimizing} and rendering-based methods~\cite{labbe2021robopose,hao2018vision}.
Keypoint-based methods are the most popular approach for pose estimation \textcolor{black}{because of their fast inference speed}.
However, the performance is limited by the accuracy of the keypoint detector, which is often trained in simulation so that the proposed methods can generalize across different robotic designs.
Therefore, the performance is ultimately hampered by the sim-to-real gap, which is a long-standing challenge in computer vision and robotics~\cite{zhao2020sim}.
Rendering-based methods can achieve better performance by using the shape of the entire robot as observation, which provides dense correspondence for pose estimation.
The approaches in this category usually employ an iterative refinement process and require a reasonable initialization for the optimization loop to converge~\cite{li2018deepim}.
Since iterative render-and-compare is time- and energy-consuming, rendering-based methods are more suitable for offline estimation, where the robot and camera are held stationary.
In more dynamic scenarios, such as a mobile robot, the slow computation time makes rendering-based methods impractical to use.
In this work, we propose CtRNet, an end-to-end framework for robot pose estimation \textcolor{black}{which, at inference, uses keypoints for the fast inference speed and leverages the high performance of rendering-based methods for training to overcome the sim-to-real gap previous keypoint-based methods faced.}
Our framework contains a segmentation module to generate a binary mask of the robot and keypoint detection module which extracts point features for pose estimation.
Since segmenting the robot from the background is a simpler task than estimating the robot pose and localizing point features on robot body parts, we leverage foreground segmentation to provide supervision for the pose estimation.
Toward this direction, we first pretrained the network on synthetic data, which should have acquired essential knowledge about segmenting the robot.
Then, a self-supervised training pipeline is proposed to transfer our model to the real world without manual labels.
We connect the pose estimation to foreground segmentation with a differentiable renderer~\cite{kato2018neural,liu2019soft}. The renderer generates a robot silhouette image of the estimated pose and directly compares it to the segmentation result. Since the entire framework is differentiable, the parameters of the neural network can be optimized by back-propagating the image loss.
\textbf{Contributions}.
Our main contribution is the novel framework for image-based robot pose estimation together with a scalable self-training pipeline that utilizes unlimited real-world data to further improve the performance \textit{without \textcolor{black}{any manual} annotations.}
Since the keypoint detector is trained with image-level supervision, we effectively encompass the benefits from both keypoint-based and rendering-based methods, where previous methods were divided.
As illustrated in the \cref{fig:methods_scatter}, our method maintains high inference speed while matching the performance of the rendering-based methods.
Moreover, we integrate the CtRNet into a robotic system for PBVS and demonstrate the effectiveness on real-time robot pose estimation.
\section{Related Works}
\label{sec:related_works}
\subsection{Camera-to-Robot Pose Estimation}
The classical way to calibrate the camera-to-robot pose is to attach the fiducial markers~\cite{garrido2014aruco,olson2011apriltag} to known locations along the robot kinematic chain. The marker is detected in the image frame and their 3D position in the robot base frame can be calculated with forward kinematics. With the geometrical constraints, the robot pose can be then derived by solving an optimization problem~\cite{park_robot_1994,fassi2005hand,ilonen2011robust,horaud1995hand}.
Early works on markerless articulated pose tracking utilize a depth camera for 3D observation \cite{schmidt2014dart,pauwels2014real,michel2015pose,desingh2019factored}. For a high degree-of-freedom articulated robot, Bohg \etal~\cite{bohg2014robot} proposed a pose estimation method that first classifies the pixels in the depth image into robot parts, after which a voting scheme is applied to estimate the robot pose relative to the camera. This method is further improved in~\cite{widmaier2016robot} by directly training a Random Forest to regress joint angles instead of part labels.
However, these methods are not suitable for our scenario, where only a single RGB image is available.
More recently, as deep learning becomes popular in feature extraction, many works have been employing deep neural networks for robot pose estimation. Instead of using markers, a neural network is utilized for keypoint detection, and the robot pose is estimated through an optimizer (\eg PnP solver)~\cite{lambrecht2019towards,lee2020dream,lu2022keypoint,zuo2019craves}. To further improve the performance, the segmentation mask and edges are utilized to refine the robot pose~\cite{lambrecht2021optimizing,hao2018vision}.
Labb\'{e} \etal~\cite{labbe2021robopose} also introduce the \textit{render\&compare} method, which estimates the robot pose by matching the robot shape.
These methods mainly rely on synthetically generated data for training and hope that the network can generalize to the real world through increased variance in data generation.
Our method explicitly deals with the sim-to-real transfer by directly training on real-world data with self-supervision.
\subsection{Domain Adaptation for Sim-to-Real Transfer}
In computer vision and robotics, Domain Randomization (DR)~\cite{tobin2017domain} is the most widely used method for sim-to-real transfer due to its simplicity and effectiveness. The idea is to randomize some simulation parameters (e.g., camera position, lighting, background, etc.) and hope that the randomization captures the distribution of the real-world data. This technique has been applied to object detection and grasping~\cite{tremblay2018training,borrego2018generic,hinterstoisser2018pre,tobin2018domain,horvath2022object}, and pose estimation~\cite{sundermeyer2018implicit, manhardt2018deep, tremblay2018deep,lambrecht2019towards,lee2020dream,lu2022keypoint,zuo2019craves, labbe2021robopose}. The randomization is usually tuned empirically, and hence it is not efficient.
Another popular technique for domain transfer is Domain Adaptation (DA), which is to find the feature spaces that share a similar distribution between the source and target domains~\cite{wang2018deep}.
This technique has shown recent success in computer vision~\cite{hoffman2018cycada,chen2018domain,sankaranarayanan2018learning} and robotic applications~\cite{bousmalis2018using,james2019sim,gupta2017learning}.
In this work, instead of finding the latent space and modeling the distribution between the simulation and the real world, we perform sim-to-real transfer by directly training on the real-world data via a self-training pipeline.
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\linewidth]{figures/self_supervised_framework.png}
\caption{The overview of our proposed self-supervised training framework for sim-to-real transfer. The CtRNet contains a foreground segmentation module and a pose estimation module, which output a robot mask and a camera-to-robot pose respectively. The output pose is transformed into a silhouette image through a differentiable renderer. The image loss is back-propagated to train the keypoint detector and fine-tune the segmentation. }
\label{fig:self_supervised_learning}
\end{figure*}
\section{Methods}
\label{sec:methods}
In this paper, we introduce an end-to-end framework for robot pose estimation and a scalable training pipeline to improve pose estimation accuracy on real-world data without the need for any manual annotation. We first explain the self-supervised training pipeline for sim-to-real transfer in \cref{subsec:sim-to-real} \textcolor{black}{given a pretrained CtRNet on synthetic data which both segments the robot and estimates its pose from images}.
Then, we detail the camera-to-robot pose estimation network in \cref{subsec:ctr_network} \textcolor{black}{which utilizes a keypoint detector and a PnP solver to estimate the pose of the robot from image data in real-time}.
\subsection{Self-supervised Training for Sim-to-Real Transfer}
\label{subsec:sim-to-real}
The most effective way to adapt the neural network to the real world is directly training the network on real sensor data.
We propose a self-supervised training pipeline for sim-to-real transfer to facilitate the training without 3D annotations.
\textcolor{black}{To conduct the self-supervised training, we employ foreground segmentation to generate a mask of the robot, $f_{seg}$, alongside the pose estimation, $f_{pose}$.}
Given an input RGB image from the physical world, $\mathbb{I}$, and the robot joint angles, $\mathbf{q}$, $f_{pose}$ estimates the robot pose which is then transformed to a silhouette image through a differentiable renderer.
Our \textcolor{black}{self-supervised} objective is to optimize neural network parameters by minimizing the difference between the rendered silhouette image and the mask image.
We formulate the optimization problem as:
\begin{equation}
\label{eq:self_supervised_objective}
\Theta= \argmin_{\Theta }
\mathcal{L} [f_{seg}( \mathbb{I} |\Theta), \mathcal{R}(f_{pose}(\mathbb{I} |\mathbf{q},\Theta)|\mathbf{K}) ]
\end{equation}
where $\Theta$ denotes the parameters of the neural network.
$\mathcal{R}$ is the differentiable renderer with camera parameters $\mathbf{K}$, and $\mathcal{L}(.)$ is the objective loss function capturing the image difference.
\textcolor{black}{
We pretrain CtRNet's parameters, $\Theta$, which parameterize $f_{seg}$ and $f_{pose}$, with synthetic data where the keypoint and segmentation labels are obtained for free (details in Supplementary Materials).
During the self-training phase, where CtRNet learns with real data, the objective loss in (\ref{eq:self_supervised_objective}) captures the difference between the segmentation result and the rendered image.
The loss is iteratively back-propagated to $\Theta$, where at each iteration $f_{seg}$ and $f_{pose}$ take turns learning from each other to overcome the sim-to-real gap.
}
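For concreteness, the following is a minimal PyTorch-style sketch of one iteration of this self-training objective. Here \texttt{f\_seg}, \texttt{f\_pose}, and \texttt{renderer} are hypothetical stand-ins for the segmentation module, pose estimation module, and differentiable renderer, and the mean-squared image loss is used as one possible choice of $\mathcal{L}$ in (\ref{eq:self_supervised_objective}).
\begin{verbatim}
import torch

def self_training_step(image, q, f_seg, f_pose, renderer, optimizer):
    mask = f_seg(image)                  # robot mask M
    pose = f_pose(image, q)              # camera-to-robot pose T^c_b
    silhouette = renderer(pose, q)       # rendered silhouette S
    loss = torch.nn.functional.mse_loss(silhouette, mask)
    optimizer.zero_grad()
    loss.backward()   # gradients reach both f_seg and f_pose parameters
    optimizer.step()
    return loss.item()
\end{verbatim}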
\textbf{Overview}.
The overview of the self-supervised training pipeline is shown in \cref{fig:self_supervised_learning}.
\textcolor{black}{The segmentation module, $f_{seg}$, simply takes in a robot image and outputs its mask.}
The pose estimation module, $f_{pose}$, consists of a keypoint detector and a PnP solver to estimate the robot pose using the 2D-3D point correspondence, as shown in \cref{fig:network}.
Given the input robot image and joint angles, our camera-to-robot pose estimation network outputs a robot mask and the robot pose with respect to the camera frame.
Mathematically, these functions are denoted as
\begin{equation}
\mathbb{M} = f_{seg} (\mathbb{I} | \Theta) \qquad \mathbf{T}^c_b = f_{pose} (\mathbb{I} | \mathbf{q}, \Theta)
\end{equation}
where $\mathbb{M}$ is the robot mask and $\mathbf{T}^c_b \in SE(3)$ is the 6-DOF robot pose.
\textcolor{black}{Finally, the self-supervised objective loss in (\ref{eq:self_supervised_objective}) is realized through a differentiable renderer, $\mathcal{R}$, which generates a silhouette image of the robot given its pose, $\mathbf{T}^{c}_b$.}
\textbf{Differentiable Rendering}.
To render the robot silhouette image, we utilize the PyTorch3D differentiable renderer~\cite{pytorch3d}. We initialize a perspective camera with intrinsic parameters $\mathbf{K}$, and a silhouette renderer, which applies no lighting or shading, is constructed with a rasterizer and a shader.
The rasterizer applies the fast rasterization method~\cite{pytorch3d}, which selects the $k$ nearest mesh triangles that affect each pixel and weights their influence according to the distance along the $z$-axis.
Finally, the \textit{SoftSilhouetteShader} is applied to compute pixel values of the rendered image using the sigmoid blending method~\cite{liu2019soft}.
We construct the ready-to-render robot mesh by connecting the CAD model for each robot body part using its forward kinematics and transforming them to the camera frame with the estimated robot pose $\mathbf{T}^c_b$ from $f_{pose}$.
Let $\mathbf{v}^n \in \mathbb{R}^3$ be a mesh vertex on the $n$-th robot link.
Each vertex is transformed to the camera frame, hence ready-to-render, by
\begin{equation}
\label{eq:vertex_transform}
\overline{\mathbf{v}}^c = \mathbf{T}^c_b \mathbf{T}^b_n(\mathbf{q}) \overline{\mathbf{v}}^n
\end{equation}
where $\overline{\cdot}$ represents the homogeneous representation of a point (e.g. $\overline{\mathbf{v}} = [\mathbf{v}, 1]^T$), and $\mathbf{T}^b_n(\mathbf{q})$ is the coordinate frame transformation obtained from the forward kinematics~\cite{denavit1955kinematic}.
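As an illustration, the per-link vertex transform in \cref{eq:vertex_transform} can be sketched as below, assuming the estimated pose $\mathbf{T}^c_b$ and the forward-kinematics transform $\mathbf{T}^b_n(\mathbf{q})$ are available as $4\times4$ tensors.
\begin{verbatim}
import torch

def transform_vertices(vertices_n, T_c_b, T_b_n):
    """Map link-frame mesh vertices (V, 3) into the camera frame."""
    ones = torch.ones(vertices_n.shape[0], 1, device=vertices_n.device)
    v_bar = torch.cat([vertices_n, ones], dim=1)  # homogeneous (V, 4)
    v_cam = (T_c_b @ T_b_n @ v_bar.T).T           # Eq. (3) for all vertices
    return v_cam[:, :3]
\end{verbatim}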
\textbf{Objective loss function}. \textcolor{black}{The objective loss in (\ref{eq:self_supervised_objective}) is iteratively minimized where $f_{seg}$ and $f_{pose}$ take turns supervising each other on real data to overcome the sim-to-real gap faced by keypoint detection networks.}
\textcolor{black}{
To optimize $f_{pose}$, the L2 image loss is used since the segmentation network's accuracy, within the context of estimating robot poses, has been shown to effectively transfer from simulation to the real world \cite{labbe2021robopose}.
Mathematically the loss is expressed as}
\begin{equation}
\mathcal{L}_{mask} = \sum_{i=1}^{H} \sum_{j=1}^{W} \left(\mathbb{S}(i,j) - \mathbb{M}(i,j)\right)^2
\label{eq:mask_loss}
\end{equation}
where $H$ and $W$ is the height and width of the image, and $\mathbb{S}$ is the rendered silhouette image.
Although the pretrained robot segmentation, $f_{seg}$, already performs well on real-world datasets, it is still desirable to refine it through self-supervised training to better extract fine details of corners and boundaries.
To prevent the foreground segmentation layers from receiving noisy training signals, we apply the weighted Binary Cross Entropy Loss so that the high-quality rendering image can be used to further refine the foreground segmentation:
\begin{multline}
\mathcal{L}_{seg} = - \frac{w}{H*W} \sum_{i=1}^{H} \sum_{j=1}^{W} [\mathbb{M}(i,j) \log \mathbb{S}(i,j) \\ + (1 - \mathbb{M}(i,j)) \log(1-\mathbb{S}(i,j)) ].
\end{multline}
where $w$ is the weight for the given training sample.
For PnP solvers, the optimal solution should minimize the point reprojection error. Therefore, we assign the weight for each training sample according to the reprojection error:
\begin{equation}
w = \exp\left( - s
O(\mathbf{o}, \mathbf{p}, \mathbf{K}, \mathbf{T}_b^c)
\right)
\label{eq:weights}
\end{equation}
where $s$ is a scaling constant, $O$ is the reprojection loss in the PnP solver (explained in \cref{subsec:ctr_network}), $\{\mathbf{o}_{i} | \mathbf{o}_{i} \in \mathbb{R}^2 \}^{n}_{i=1}$ and $\{\mathbf{p}_{i} | \mathbf{p}_{i} \in \mathbb{R}^3 \}^{n}_{i=1}$ are the 2D-3D keypoints inputted into the PnP solver.
\textcolor{black}{
The exponential function ensures that training samples with poor PnP convergence are weighted exponentially lower than those with good convergence, hence stabilizing the training.
}
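A minimal sketch of this weighting scheme, combining the sample weight in \cref{eq:weights} with the weighted BCE term; \texttt{reproj\_error} stands for the PnP reprojection loss $O$ of the current sample, and \texttt{S}, \texttt{M} are the rendered silhouette and predicted mask in $[0,1]$.
\begin{verbatim}
import torch

def weighted_bce_loss(S, M, reproj_error, s=0.1, eps=1e-7):
    w = torch.exp(-s * reproj_error)  # low weight for poor PnP fits
    S = S.clamp(eps, 1 - eps)         # numerical safety for the logs
    bce = -(M * torch.log(S) + (1 - M) * torch.log(1 - S))
    return w * bce.mean()             # mean over H*W pixels, scaled by w
\end{verbatim}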
\subsection{Camera-to-Robot Pose Estimation Network}
\label{subsec:ctr_network}
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{figures/network.png}
\caption{The diagram of the camera-to-robot pose estimation network (CtRNet), which describes the inference process of the network. Given an input RGB image, the neural network generates a robot mask and a set of keypoints. Given the associated robot joint angles, a set of corresponding 3D keypoints are computed with forward kinematics. The camera-to-robot pose is estimated by a PnP solver with the provided 2D-3D keypoint pairs.}
\label{fig:network}
\end{figure}
The overview of the proposed Camera-to-Robot Pose Estimation Network, CtRNet, is shown in Fig. \ref{fig:network}.
Given an input RGB image, we employ ResNet50~\cite{he2016deep} as the backbone network to extract the latent features.
The latent features are then passed through the Atrous Spatial Pyramid Pooling layers~\cite{chen2017rethinking} to form the segmentation mask of input resolution.
The keypoint detector, sharing the backbone network with the foreground segmentation, upsamples the feature maps through transposed convolutional layers and forms the heatmaps with $n$ channels.
Then, we apply the spatial softmax operator~\cite{finn2016deep} on the heatmaps, which computes the expected 2D location of the points of maximal activation for each channel and results in a set of keypoints $[\mathbf{o}_1, ..., \mathbf{o}_n]$ for all $n$ channels.
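A minimal sketch of this spatial softmax readout, assuming heatmaps of shape $(B, n, H, W)$:
\begin{verbatim}
import torch

def spatial_softmax(heatmaps):
    B, n, H, W = heatmaps.shape
    probs = torch.softmax(heatmaps.view(B, n, -1), dim=-1).view(B, n, H, W)
    ys = torch.linspace(0, H - 1, H, device=heatmaps.device)
    xs = torch.linspace(0, W - 1, W, device=heatmaps.device)
    exp_y = (probs.sum(dim=3) * ys).sum(dim=2)  # expected row index
    exp_x = (probs.sum(dim=2) * xs).sum(dim=2)  # expected column index
    return torch.stack([exp_x, exp_y], dim=-1)  # (B, n, 2) keypoints
\end{verbatim}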
For simplicity, we define the set of keypoints at each joint location of the robot.
Given the joint angles, the corresponding 3D keypoint location $\mathbf{p}_i$ can be calculated with robot forward kinematics:
\begin{equation}
\overline{\mathbf{p}}_i = \mathbf{T}^b_i (\mathbf{q})\overline{\mathbf{t}}, \quad \text{for } i = 1,\ldots,n
\label{eq:fk}
\end{equation}
where $\mathbf{t} = [0,0,0]$. With the 2D and 3D corresponding keypoints, we can then apply a PnP solver~\cite{lepetit2009epnp} to estimate the robot pose with respect to the camera frame.
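For illustration, the 3D keypoint computation in \cref{eq:fk} followed by the PnP solve might be sketched as follows; \texttt{fk\_transforms} is a hypothetical helper returning the transforms $\mathbf{T}^b_i(\mathbf{q})$, and OpenCV's EPnP solver is used as one possible PnP back end.
\begin{verbatim}
import numpy as np
import cv2

def estimate_pose(keypoints_2d, q, fk_transforms, K):
    # Joint origins p_i = T^b_i(q) t with t = [0, 0, 0] (Eq. (7)).
    p_3d = np.stack([T[:3, 3] for T in fk_transforms(q)])
    _, rvec, tvec = cv2.solvePnP(
        p_3d.astype(np.float64), keypoints_2d.astype(np.float64),
        K, None, flags=cv2.SOLVEPNP_EPNP)
    R, _ = cv2.Rodrigues(rvec)  # rotation part of T^c_b
    return R, tvec
\end{verbatim}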
\textbf{Back-propagation for PnP Solver}.
A PnP solver is usually self-contained and not back-propagatable as the gradient with respect to the input cannot be derived explicitly. Inspired by~\cite{bpnp}, the implicit function theorem~\cite{krantz2002IFT} is applied to obtain the gradient through implicit differentiation.
Let the PnP solver be denoted, in the form of a non-linear function $g$, as follows:
\begin{equation}
\mathbf{T}_b^{c*} = g(\mathbf{o}, \mathbf{p}, \mathbf{K})
\end{equation}
where $\mathbf{T}_b^{c*}$ is the output pose from the PnP solver. In order to back-propagate through the PnP solver for training the keypoint detector, we are interested in finding the gradient of the output pose $\mathbf{T}_b^{c*}$ with respect to the input 2D points $\mathbf{o}$.
Note that the objective of the PnP solver is to minimize the reprojection error, such that:
\begin{equation}
\mathbf{T}_b^{c*} = \argmin_{\mathbf{T}_b^{c}} O(\mathbf{o}, \mathbf{p}, \mathbf{K}, \mathbf{T}_b^{c}) \label{eq:bpnp_objective}
\end{equation}
with
\begin{align}
O(\mathbf{o}, \mathbf{p}, \mathbf{K}, \mathbf{T}_b^{c}) & =
\sum_{i=1}^n || \mathbf{o}_i - \pi(\mathbf{p}_i| \mathbf{T}_b^{c}, \mathbf{K}) ||^2_2 \label{eq:bpnp_r}\\
& = \sum_{i=1}^n || \mathbf{r}_i ||_2^2 \label{eq:bpnp_r_2}
\end{align}
where $\pi(\cdot)$ is the projection operator. Since the optimal solution $\mathbf{T}_b^{c*}$ is a local minimum of the objective function $O(\mathbf{o}, \mathbf{p}, \mathbf{K}, \mathbf{T}_b^{c})$, a stationary constraint of the optimization process can be constructed by taking the first-order derivative of the objective function with respect to $\mathbf{T}_b^{c}$:
\begin{equation}
\frac{\partial O}{\partial \mathbf{T}_b^{c}} (\mathbf{o}, \mathbf{p}, \mathbf{K}, \mathbf{T}_b^{c})|_{\mathbf{T}_b^{c} = \mathbf{T}_b^{c*}} = \mathbf{0}.
\end{equation}
Following \cite{bpnp}, we construct a constraint function $F$ to employ the implicit function theorem:
\begin{equation}
F(\mathbf{o}, \mathbf{p}, \mathbf{K}, \mathbf{T}_b^{c}) = \frac{\partial O}{\partial \mathbf{T}_b^{c}} (\mathbf{o}, \mathbf{p}, \mathbf{K}, \mathbf{T}_b^{c}), \quad F(\mathbf{o}, \mathbf{p}, \mathbf{K}, \mathbf{T}_b^{c*}) = \mathbf{0}.
\label{eq:bpnp_constraint_function}
\end{equation}
Substituting \cref{eq:bpnp_r} and \cref{eq:bpnp_r_2} into \cref{eq:bpnp_constraint_function}, we can derive the constraint function as:
\begin{align}
F(\mathbf{o}, \mathbf{p}, \mathbf{K}, \mathbf{T}_b^{c})
& = \sum_{i=1}^n \frac{\partial || \mathbf{r}_i ||_2^2}{\partial \mathbf{T}_b^{c}} \\
& = -2 \sum_{i=1}^n \mathbf{r}_i^T \frac{\partial \pi}{\partial \mathbf{T}_b^{c}}(\mathbf{p}_i| \mathbf{T}_b^{c*}, \mathbf{K}) .
\end{align}
Finally, we back-propagate through the PnP solver with implicit differentiation. The gradient of the output pose with respect to the input 2D points is given by the Jacobian matrix:
\begin{multline}
\frac{\partial g}{\partial \mathbf{o}}(\mathbf{o}, \mathbf{p}, \mathbf{K}) \\ = - \left( \frac{\partial F}{\partial \mathbf{T}_b^{c}} (\mathbf{o}, \mathbf{p}, \mathbf{K}, \mathbf{T}_b^{c}) \right)^{-1}\left( \frac{\partial F}{\partial \mathbf{o}} (\mathbf{o}, \mathbf{p}, \mathbf{K}, \mathbf{T}_b^{c})\right) .
\end{multline}
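Since the full PnP backward pass is involved, the following is a minimal, self-contained sketch of implicit differentiation through a generic argmin layer on a toy objective; \texttt{objective} stands in for the reprojection loss $O$ and \texttt{theta} for the pose, so what is illustrated is the mechanism, not the actual solver.
\begin{verbatim}
import torch

def objective(o, theta):
    # Toy stand-in for O; its minimizer is theta = o.
    return ((o - theta) ** 2).sum()

def solve(o):
    # "Black-box" solver playing the role of the PnP solve.
    return o.detach().clone()

def dtheta_do(o, theta_star):
    # Implicit function theorem: dtheta/do = -(dF/dtheta)^{-1}(dF/do),
    # where F = dO/dtheta is evaluated at the optimum theta*.
    o = o.clone().requires_grad_(True)
    theta = theta_star.clone().requires_grad_(True)
    f = torch.autograd.grad(objective(o, theta), theta,
                            create_graph=True)[0]
    dF_dtheta = torch.stack(
        [torch.autograd.grad(f[i], theta, retain_graph=True)[0]
         for i in range(f.numel())])
    dF_do = torch.stack(
        [torch.autograd.grad(f[i], o, retain_graph=True)[0]
         for i in range(f.numel())])
    return -torch.linalg.solve(dF_dtheta, dF_do)

o = torch.tensor([1.0, 2.0, 3.0])
print(dtheta_do(o, solve(o)))  # identity, since theta*(o) = o
\end{verbatim}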
\section{Experiments}
\label{sec:experiments}
We first evaluate our method on two public real-world datasets for robot pose estimation and compare it against several state-of-the-art image-based robot pose estimation algorithms.
We then conduct an ablation study on the pretraining procedure and explore how the number of pretraining samples could affect the performance of the self-supervised training.
Finally, we integrate the camera-to-robot pose estimation framework into a visual servoing system to demonstrate the effectiveness of our method on real robot applications.
\begin{table*}
\setlength\tabcolsep{0.34em}
\centering
\scalebox{0.8}{
\begin{tabular}{@{}lccccccccccc c@{}}
\toprule
\multirow{2}{*}{Method} & \multirow{2}{*}{Category} & \multirow{2}{*}{Backbone} & \multicolumn{2}{c}{Panda 3CAM-AK} & \multicolumn{2}{c}{Panda 3CAM-XK} & \multicolumn{2}{c}{Panda 3CAM-RS} & \multicolumn{2}{c}{Panda ORB}& \multicolumn{2}{c}{All} \\
\cmidrule(lr){4-5} \cmidrule(lr){6-7} \cmidrule(lr){8-9} \cmidrule(lr){10-11} \cmidrule(lr){12-13}
& & & AUC $\color{green}\uparrow$ & Mean (m) $\color{red}\downarrow$
& AUC $\color{green}\uparrow$ & Mean (m) $\color{red}\downarrow$
& AUC $\color{green}\uparrow$ & Mean (m) $\color{red}\downarrow$
& AUC $\color{green}\uparrow$ & Mean (m) $\color{red}\downarrow$
& AUC $\color{green}\uparrow$ & Mean (m) $\color{red}\downarrow$ \\
\midrule
DREAM-F~\cite{lee2020dream} & Keypoint & VGG19 & 68.912 & 11.413 & 24.359 & 491.911 & 76.130 & 2.077 & 61.930 & 95.319 & 60.740 & 113.029\\
DREAM-Q~\cite{lee2020dream} & Keypoint & VGG19 & 52.382 & 78.089 & 37.471 & 54.178 & 77.984 & 0.027 & 57.087 & 67.248 & 56.988 & 59.284\\
DREAM-H~\cite{lee2020dream} & Keypoint & ResNet101 & 60.520 & 0.056 & 64.005 & 7.382 & 78.825 & 0.024 & 69.054 & 25.685 & 68.584 & 17.477\\
RoboPose~\cite{labbe2021robopose} & Rendering & ResNet34 & 76.497 & 0.024 & \textbf{85.926} & \textbf{0.014} & 76.863 & 0.023 & 80.504 & \textbf{0.019} & 80.094 & \textbf{0.020}\\
CtRNet & Keypoint & ResNet50 & \textbf{89.928} & \textbf{0.013} & 79.465 & 0.032 & \textbf{90.789} & \textbf{0.010} & \textbf{85.289} & 0.021 & \textbf{85.962} & \textbf{0.020}\\
\bottomrule
\end{tabular}}
\caption{Comparison of our method with the state-of-the-art methods on the DREAM-real datasets using the ADD metric. We report the mean and AUC of the ADD on each dataset and the overall accuracy. }
\label{table:deam}
\end{table*}
\begin{table*}[t]
\centering
\setlength\tabcolsep{0.34em}
\scalebox{0.95}{
\begin{tabular}{lccccccc}
\toprule
\multirow{2}{*}{Method} & \multirow{2}{*}{Category} & \multicolumn{2}{c}{PCK (2D)} & Mean 2D Err. & \multicolumn{2}{c}{ADD (3D)}& Mean 3D Err.\\
\cmidrule(lr){3-4} \cmidrule(lr){6-7}
& & @50 pixel $\color{green}\uparrow$ & AUC $\color{green}\uparrow$ & (pixel) $\color{red}\downarrow$ & @100 mm $\color{green}\uparrow$ & AUC $\color{green}\uparrow$ & (mm) $\color{red}\downarrow$ \\ \midrule
Aruco Marker~\cite{garrido2014aruco} & Keypoint & 0.49 & 57.15 & 286.98 & 0.30 & 43.45 & 2447.34\\
DREAM-Q~\cite{lee2020dream} & Keypoint & 0.33 & 44.01 & 1277.33 & 0.32 & 40.63 & 386.17 \\
Opt. Keypoints~\cite{lu2022keypoint} & Keypoint & 0.69 & 75.46 & 49.51 & 0.47 &65.66 & 141.05 \\
Diff. Rendering & Rendering& 0.74 & 78.60 & 42.30 & 0.78 & 81.15 & 74.95\\
CtRNet & Keypoint & \textbf{0.99} & \textbf{93.94} & \textbf{11.62} & \textbf{0.88} & \textbf{83.93} & \textbf{63.81}\\
\bottomrule
\end{tabular}}
\caption{\label{exp:rigid_pck}Comparison of our method with the state-of-the-art methods on the Baxter dataset.}
\label{table:baxter}
\end{table*}
\subsection{Datasets and Evaluation Metrics}
\textbf{DREAM-real Dataset}. The DREAM-real dataset~\cite{lee2020dream} is a real-world robot dataset collected with 3 different cameras: Azure Kinect (AK), XBOX 360 Kinect (XK), and RealSense (RS). This dataset contains around 50K RGB images of the Franka Emika Panda arm recorded at ($640 \times 480$) resolution. The ground-truth camera-to-robot pose is provided for every image frame.
The accuracy is evaluated with the average distance (ADD) metric~\cite{xiang2017posecnn},
\begin{equation}
ADD = \frac{1}{n} \sum_{i=1}^n || \widetilde{\mathbf{T}}^c_b
\overline{\mathbf{p}}_i - \mathbf{T}^c_b
\overline{\mathbf{p}}_i ||_2
\end{equation}
where $\widetilde{\mathbf{T}}^c_b $ indicates the ground-truth camera-to-robot pose.
We also report the area-under-the-curve (AUC) value, which integrates the percentage of ADD over different thresholds. A higher AUC value indicates more predictions with less error.
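A short sketch of the ADD computation above, assuming $4\times4$ ground-truth and estimated poses and an $(n,3)$ array of keypoints:
\begin{verbatim}
import numpy as np

def add_metric(T_gt, T_est, pts):
    pts_h = np.hstack([pts, np.ones((pts.shape[0], 1))])  # (n, 4)
    diff = (T_gt @ pts_h.T - T_est @ pts_h.T)[:3]         # (3, n)
    return np.linalg.norm(diff, axis=0).mean()
\end{verbatim}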
\textbf{Baxter Dataset}. The Baxter dataset~\cite{lu2022keypoint} contains 100 RGB images of the left arm of a Rethink Baxter collected with an Azure Kinect camera at ($2048 \times1526$) resolution. The 2D and 3D ground-truth end-effector positions with respect to the camera frame are provided.
We evaluate the performance with the ADD metric for the end-effector.
We also evaluate the end-effector reprojection error using the percentage of correct keypoints (PCK) metric~\cite{lu2022keypoint}.
\begin{figure*}[t]
\centering
\subfloat[]{\includegraphics[width=0.48\linewidth]{figures/panda.png}\label{fig:panda}}
\subfloat[]{\includegraphics[width=0.48\linewidth]{figures/baxter.png}\label{fig:baxter}}
\caption{Qualitative results of CtRNet foreground segmentation and pose estimation on \protect\subref{fig:panda}~DREAM-real dataset and \protect\subref{fig:baxter}~Baxter dataset. The first row shows the input RGB image, the second row shows the foreground segmentation, and the third row shows the projected robot skeleton based on the estimated robot pose.}
\label{fig:qualitative_results}
\end{figure*}
\subsection{Implementation details}
The entire pipeline is implemented in PyTorch~\cite{torchautograd}. We initialize the backbone network with ImageNet~\cite{deng2009imagenet} pretrained weights, and we train separate networks for different robots.
The number of keypoints $n$ is set to the number of robot links and the keypoints are defined at the robot joint locations.
The neural network is pretrained on synthetic data for foreground segmentation and keypoint detection for 1000 epochs with a learning rate of 1e-5. We reduce the learning rate by a factor of 10 once learning stagnates for 5 epochs. The Adam optimizer is applied to optimize the network parameters with the momentum set to 0.9.
For self-supervised training on real-world data, we run the training for 500 epochs with a learning rate of 1e-6. The same learning rate decay strategy and Adam optimizer are applied as in pretraining. To make the training more stable, we clip the gradient of the network parameters at 10. The scaling factor in \cref{eq:weights} is set to 0.1 for the DREAM-real dataset and 0.01 for the Baxter dataset, mainly accounting for the difference in resolution.
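The described setup corresponds roughly to the following PyTorch configuration sketch; \texttt{model} denotes the CtRNet instance, and clipping on the gradient norm (rather than value) is our assumption.
\begin{verbatim}
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-6,
                             betas=(0.9, 0.999))
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, factor=0.1, patience=5)  # decay LR by 10x on plateau

# In the training loop, after loss.backward(), before optimizer.step():
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0)
\end{verbatim}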
\subsection{Robot Pose Estimation on Real-world Datasets}
\textbf{Evaluation on DREAM-real Dataset}. The proposed CtRNet is trained at ($320 \times 240$) resolution and evaluated at the original resolution by scaling up the keypoints by a factor of 2.
Some qualitative results for foreground segmentation and pose estimation are shown in \cref{fig:panda}.
We compared our method with the state-of-the-art keypoint-based method DREAM~\cite{lee2020dream} and the rendering-based method RoboPose~\cite{labbe2021robopose}.
The results for DREAM and RoboPose are compiled from the implementation provided by~\cite{labbe2021robopose}.
In \cref{table:deam}, we report the AUC and mean ADD results on DREAM-real dataset with 3 different camera settings and the overall results combining all the test samples.
Our method performs significantly better than the method in the same category and achieves comparable performance to the rendering-based method.
We outperform DREAM on all settings and outperform RoboPose on the majority of the dataset.
Overall on DREAM-real dataset, we achieve higher AUC (+17.378 compared to DREAM, +5.868 compared to RoboPose), and lower error compared to DREAM (-17.457).
\textbf{Evaluation on Baxter Dataset}. For the Baxter dataset, we trained the CtRNet at ($640\times480$) resolution and evaluate at the original resolution, and \cref{fig:baxter} shows some of the qualitative results.
We compared our method with several keypoint-based methods (Aruco Marker~\cite{garrido2014aruco}, DREAM~\cite{lee2020dream}, Optimized Keypoints~\cite{lu2022keypoint}).
We also implemented Differentiable Rendering for robot pose estimation, where the robot masks are generated with the pretrained foreground segmentation.
The 2D PCK results and 3D ADD results are reported in \cref{table:baxter}.
Our method outperforms all other methods on both 2D and 3D evaluations.
For 2D evaluation, we achieve 93.94 AUC for PCK with an average reprojection error of 11.62 pixels.
For 3D evaluation, we achieve 83.93 AUC for ADD with an average ADD of 63.81mm.
Notably, 99 percent of our estimations have less than 50 pixels of reprojection error, which is less than 2 percent of the image resolution, and 88 percent of our estimations have less than 100mm distance error when localizing the end-effector.
\subsection{Ablation Study}
We study how the number of pretraining samples affects the convergence and performance of the self-supervised training empirically on the Baxter dataset.
We pretrain the neural network with different numbers of synthetic data samples $N_{pretrain} = \{500,1000,2000,4000,8000\}$, and examine the convergence of the self-supervised training process.
\cref{fig:abalation} shows the plot of self-training loss ($\mathcal{L}_{mask}+ \mathcal{L}_{seg}$) vs. the number of epochs for networks pretrained with different numbers of synthetic data samples. We observe that doubling the size of the pretraining dataset significantly improves the convergence of the self-training process at the beginning. However, the improvement gets smaller as the pretraining size increases. For the Baxter dataset, the improvement saturates after having more than 2000 pretraining samples; continuing to double the training size yields only marginal improvement.
Note that the Baxter dataset captures 20 different robot poses from a fixed camera position. The required number of pretraining samples might vary according to the complexity of the environment.
We further evaluate the resulting neural networks with the ground-truth labels on the Baxter dataset. We report the mean ADD and AUC ADD for the pose estimation in \cref{tab:ablation}. The result verifies our observation on the convergence analysis.
Having more pretraining samples improves the performance of pose estimation at the beginning, but the improvement stagnates after having more than 2000 pretraining samples.
\begin{figure}[t]
\centering
\input{figures/ablation_study}
\caption{\label{fig:abalation} The training loss vs. number of epochs for the self-supervised training with different numbers of pretraining samples. More pretraining samples result in better convergence, but the improvement saturates after more than 2000 pretraining samples, with only marginal gains from adding more.}
\end{figure}
\begin{table}
\centering
\begin{tabular}{@{}lcc@{}}
\toprule
$N_{pretrain}$ & Mean ADD (mm) $\color{red}\downarrow$ & AUC ADD $\color{green}\uparrow$\\
\midrule
500 & 2167.30 & 47.62 \\
1000 & 92.91 & 76.65 \\
2000 & 67.51 & 82.98 \\
4000 & 63.00 & 84.12 \\
8000 & 63.81 & 83.93\\
\bottomrule
\end{tabular}
\caption{Ablation study for the number of pretraining samples.}
\label{tab:ablation}
\end{table}
\subsection{Visual Servoing Experiment}
\begin{table}
\centering
\scalebox{0.9}{
\begin{tabular}{@{}lccc@{}}
\toprule
Method & Loop Rate & Trans. Err. (m) & Rot. Err. (rad)\\
\midrule
DREAM~\cite{lee2020dream} & 30Hz & 0.235 $\pm$ 0.313 & 0.300 $\pm$ 0.544 \\
Diff. Rendering & 1Hz & 0.046 $\pm$ 0.062 & 0.036 $\pm$ 0.066\\
CtRNet & 30Hz & \textbf{0.002 $\pm$ 0.001} & \textbf{0.002 $\pm$ 0.001}\\
\bottomrule
\end{tabular}}
\caption{Mean and standard deviation of the translational error and rotational error for the visual servoing experiment.}
\label{tab:visual_servoing}
\end{table}
We integrate the proposed CtRNet into a robotic system for position-based visual servoing (PBVS) with eye-to-hand configuration.
We conduct the experiment on a Baxter robot and the details of the PBVS are described in the Supplementary Materials.
The PBVS is purely based on RGB images from a single camera and the goal is to control the robot end-effector reaching a target pose defined in the camera frame.
Specifically, we first set a target pose with respect to the camera frame. The target pose is then transformed into the robot base frame through the estimated camera-to-robot transformation. The robot controller calculates the desired robot configuration with inverse kinematics and a control law is applied to move the robot end-effector toward the target pose.
For comparison, we also implemented DREAM~\cite{lee2020dream} and a Differentiable Renderer for PBVS. For DREAM, the pretrained model for Baxter is applied. For the Differentiable Renderer, we use the foreground segmentation of CtRNet to generate a robot mask. The optimization loop for the renderer takes the last estimate as initialization and performs 10 updates at each callback to ensure convergence and maintain a 1Hz loop rate.
In the experiment, we randomly set the target pose and the position of the camera, and the robotic system applies PBVS to reach the target pose from an arbitrary initialization, as shown in the \cref{fig:snapshot}.
We ran the experiment for 10 trials with different robot pose estimation methods, and the translational (Euclidean distance) and rotational (Euler angles) errors of the end-effector are reported in \cref{tab:visual_servoing}.
The experimental results show that our proposed method significantly improves the stability and accuracy of the PBVS, achieving 0.002m average translational error and 0.002rad rotational error on the end-effector.
We also plot the end-effector distance-to-goal over time for a selected trial in \cref{fig:visual_servoing}.
In this selected trial, the system could not converge with DREAM because the poor robot pose estimation confuses the controller by giving a wrong, unreachable target pose in the robot base frame. With the differentiable renderer, the servoing system takes more than 10 seconds to converge and oscillates due to the low loop rate. With our proposed CtRNet, the servoing system converges much faster ($\leq$ 5 seconds), thanks to the fast and robust robot pose estimation. We show more qualitative results in the Supplementary Materials.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth]{figures/snapshot_vs.png}
\caption{Snapshots of PBVS. The goal is to move the end-effector to the target pose (green). The figure on the right shows the robot configuration upon the convergence of PBVS.}
\label{fig:snapshot}
\end{figure}
\begin{figure}[t]
\centering
\input{figures/visual_servoing}
\caption{The plot of end-effector distance-to-goal over time.}
\label{fig:visual_servoing}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
We present the CtRNet, an end-to-end image-based robot pose estimation framework, and a self-supervised training pipeline that utilizes unlabelled real-world data for sim-to-real transfer.
The CtRNet, using a keypoint detector for pose estimation while employing a rendering method for training, achieves state-of-the-art performance on robot pose estimation while maintaining high-speed inference. \cref{fig:methods_scatter} illustrates the advantages of CtRNet over existing methods, where the AUC values are normalized across two evaluation datasets by taking DREAM and CtRNet as references.
We further experiment with different robot pose estimation methods by applying them to PBVS, demonstrating that CtRNet's fast and accurate pose estimation enables stable control even when single-frame pose estimates are used as feedback. Therefore, CtRNet supports real-time markerless camera-to-robot pose estimation, which has been utilized for surgical robotic manipulation~\cite{richter2021robotic} and mobile robot manipulators~\cite{wise2016fetch}.
For future work, we would like to extend our method to more robots and explore vision-based control in an unstructured environment.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
In wireless edge networks, federated learning (FL) is an effective distributed machine learning scheme that allows edge devices to collaboratively learn a global model without sending their local data to a central server \cite{zhu2020toward,lim2020federated}. A key feature in FL is for a parameter server to aggregate the local model updates received from the edge devices into a global model update. However, information exchange between the edge devices and the server can create stress on the communication resources.
To reduce the communication overhead, analog aggregation of the local models has been proposed, where the edge devices simultaneously transmit their local models in analog form over a shared wireless uplink, which naturally sums these models by superposition. Such over-the-air computation has attracted growing interest due to its efficient use of bandwidth and reduced communication latency compared with conventional transmission over orthogonal channels \cite{amiri2020machine,amiri2020federated,sery2020analog,guo2020analog,zhang2021gradient,zhang2021federated,wang2022online}. However, over-the-air computation is susceptible to noise, potentially causing large aggregation error, which propagates over the FL computation and communication iterations.
Furthermore, the quality of aggregation is disproportionately affected by devices with weak channel conditions, as the transmit power needs to be adjusted to align the transmitted signal amplitude with such devices, which results in lower received signal-to-noise ratio (SNR) \cite{lim2020federated}. However, excluding devices from model training may decrease the learning performance due to the reduced training data size. To improve FL training performance, we need to carefully design an effective device selection method to balance this tradeoff.
Device selection for over-the-air FL was first considered in \cite{zhu2019broadband} and also in \cite{sery2021over}. In \cite{zhu2019broadband}, a distance-based cell-interior device selection method was proposed to increase the received SNR at the base station. In \cite{sery2021over}, only the devices with strong channel conditions were selected in the learning process to improve the convergence of the model aggregation. However, how to design the proper threshold of channel strength for device selection was not discussed. Both \cite{zhu2019broadband} and \cite{sery2021over} considered only a single antenna at the server, thus there was no receiver beamforming design.
When the server is equipped with multiple antennas, beamforming techniques can be applied to reduce the impact of noise in over-the-air computation \cite{zhu2018mimo,jiang2019over}. It has been demonstrated in \cite{firstyang2020federated} that the method in \cite{jiang2019over} can be applied to improve FL performance. However, the study in \cite{firstyang2020federated} did not consider device selection. In \cite{yang2020federated}, joint receiver beamforming and device selection was considered to maximize the number of selected devices while limiting the communication error, where a difference-of-convex-function (DC) programming method was proposed to solve the problem. However, the design objective of \cite{yang2020federated} did not explicitly account for the impact of device selection and communication error on the FL training performance. In \cite{liu2021reconfigurable}, for a reflective intelligent surface (RIS)-assisted system, device selection, receiver beamforming, and RIS phase shift were jointly considered to minimize an upper bound on the steady-state expected loss function as time approaches infinity. Successive convex approximation (SCA) was used for receiver beamforming design, and Gibbs sampling was used for device selection. However, the wireless channels were assumed fixed over time in \cite{liu2021reconfigurable} in designing beamforming and device selection solution, which was used for all communication rounds of FL. Furthermore, Gibbs sampling can incur high computational complexity, especially as the number of devices increases, which is inefficient when the device selection needs to be updated in each communication round.
In this work, aiming at improving the training convergence rate in over-the-air FL, we jointly design the uplink receiver beamforming and device selection to minimize the global training loss after an arbitrary number $T$ of communication rounds. Unlike existing works, we consider time-varying wireless channels, where the device selection and beamforming need to be updated in each round. For the resulting mixed-integer programming problem, we propose an efficient Joint Beamforming and Device Selection (JBFDS) algorithm to minimize an upper bound of the global training loss after $T$ communication rounds, subject to per-device transmit power constraints. JBFDS uses an alternating-optimization approach on beamforming and device selection. We show that our uplink beamforming subproblem is equivalent to a downlink single-group multicast beamforming problem, for which we apply SCA to compute a solution. For the device selection subproblem, we propose an efficient search algorithm to obtain an optimal solution. Our simulation results with real-world image classification demonstrate that JBFDS outperforms existing methods for device selection or joint beamforming and device selection, in terms of training accuracy and training convergence rate. It also has significantly lower computational complexity compared with the existing methods.
\allowdisplaybreaks
\section{System Model and Problem Formulation}
\subsection{FL System }
We consider a wireless network consisting of a server and $M$ edge devices. Device $m$ has a local training dataset of size $K_m$, denoted by $\mathcal{D}_m=\{(\mathbf{x}_{m,k}, y_{m,k}): 1\le k \le K_m \}$, where $\mathbf{x}_{m,k} \in \mathbb{R}^b$ is the $k$-th data feature vector and $ y_{m,k}$ is its corresponding label. The edge devices aim to collaboratively train a global model at the server that can predict the true labels of data feature vectors of all devices while keeping their local datasets private. The empirical local training loss function for device $m$ is defined as
\begin{align}\label{eq1}
F_m(\mathbf{w}; \mathcal{D}_m) \triangleq \frac{1}{K_m} \sum \limits_{k=1}^{K_m} l(\mathbf{w}; \mathbf{x}_{m,k} , y_{m,k}) \\[-2.2em] \nonumber
\end{align}
where $\mathbf{w} \in \mathbb{R}^D$ is the global model parameter vector and $l(\cdot)$ is the sample-wise training loss associated with each data sample. The global training loss function is defined as the weighted sum of the local loss functions over all devices, given by
\begin{align}\label{eq2}
F(\mathbf{w})= \frac{1}{K}\sum \limits_{m=1}^M K_m F_m(\mathbf{w}; \mathcal{D}_m ) \\[-2.2em] \nonumber
\end{align}
where $K= \sum_m K_m $ is the total number of training samples over all devices. We follow the general Federated Stochastic Gradient Descent (FedSGD) approach for iterative model training in FL, where the server updates the model parameters based on an aggregation of the gradients of all devices’ local loss functions \cite{mcmahan2017communication}. The learning objective is to find the optimal global model $\mathbf{w}^\star$ that minimizes the global training loss function $F(\mathbf{w})$. We call each iteration of the algorithm a communication round. At the $t$-th communication round, the following steps are performed (a schematic of one round is sketched in code after the list):\\
$\bullet$ \textbf{Device selection:} The server selects a subset of devices to contribute in the training of the model. The set of selected devices in round $t$ is denoted by $\mathcal{M}_t^\text{s} \subset \{1,2,..., M \}$. \\
$\bullet$ \textbf{Downlink phase:} The server broadcasts the model parameter vector $\mathbf{w}_t$ to all devices.\\
$\bullet$ \textbf{Gradient computation:} Each selected device $m$ computes the gradient of its local loss function, given by
$\mathbf{g}_{m,t} = \nabla F_m(\mathbf{w}_t; \mathcal{D}_m)$,
where $\nabla F_m(\mathbf{w}_t; \mathcal{D}_m)$ is the gradient of $F_m(\cdot)$ at $\mathbf{w}_t$.\\
$\bullet$ \textbf{Uplink phase:} The selected devices send their local gradients to the server through the uplink wireless channels. \\
$\bullet$ \textbf{Model updating:} The server computes a weighted aggregation of local gradients to update the global model. In the ideal scenario where the local gradients can be received at the server accurately, $\mathbf{r}_t \triangleq \sum_{m \in \mathcal{M}_t^\text{s}} K_m \mathbf{g}_{m,t}$ is used to update ${\bf w}_t$. However, in reality, only an estimate $\mathbf{\hat r}_t$ is obtainable at the server, due to wireless channel and noise. Thus, the server updates the global model as
\begin{align}\label{eq5}
\mathbf{w}_{t+1} = \mathbf{w}_t - \frac{\lambda}{ \sum_{m \in \mathcal{M}_t^\text{s}} K_m } \mathbf{\hat r}_t\\[-2.5em] \nonumber
\end{align}
where $\lambda$ is the learning rate.
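To summarize the steps above, a minimal sketch of one ideal (error-free) FedSGD round implementing \eqref{eq5} is given below; \texttt{local\_grad} is a hypothetical callback returning device $m$'s local gradient $\mathbf{g}_{m,t}$.
\begin{verbatim}
import numpy as np

def fedsgd_round(w, selected, K, local_grad, lam=0.05):
    r = sum(K[m] * local_grad(m, w) for m in selected)  # aggregation
    return w - lam * r / sum(K[m] for m in selected)    # model update
\end{verbatim}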
\subsection{FL with Over-the-Air Analog Aggregation}
We assume each device has a single antenna and the server is equipped with $N$ antennas. The uplink channel between device $m$ and the server in communication round $t$ is denoted by $\mathbf{h}_{m,t} \in \mathbb{C}^{N \times 1}$. We consider over-the-air computation to efficiently obtain the aggregated local gradients at the server via analog aggregation over multiple access channel \cite{zhu2019broadband}. In each communication round, the devices send their local gradients to the server simultaneously using the same frequency resources. By adjusting the transmit weight at each device, the server receives the weighted sum of local gradients. Each selected device $m$ sends the variance of its computed local gradient, given by
$v_{m,t}^2=\frac{\|\mathbf{g}_{m,t}\|^2}{D}$, to the server in each communication round. As $v_{m,t}$ is only a scalar, we assume it is sent over a separate digital channel and is received by the server perfectly. At each communication round $t$, the local gradient of each selected device is transmitted over $D$ time slots. Specifically, in time slot $d$, each selected device $m$ sends the $d$-th entry of its local gradient, denoted by $g_{m,t}[d]$, which is normalized by $v_{m,t}$ and then adjusted by the transmit weight $a_{m,t}\in\mathbb{C}$.
The corresponding received signal at the server is
\begin{align}\label{eq10}
\mathbf{y}_t[d] = \sum_{m \in \mathcal{M}_t^\text{s} } \mathbf{h}_{m,t} a_{m,t}\frac{g_{m,t}[d]}{v_{m,t}}+\mathbf{n}_{d,t} \\[-2.5em] \nonumber
\end{align}
where $\mathbf{n}_{d,t} \in \mathbb{C}^{N}$ is the receiver additive white Gaussian noise with entries having zero mean and variance $\sigma_n^2$ and is i.i.d. over $t$. From (\ref{eq10}), the average transmit power at device $m$ for sending the local gradient in communication round $t$ is $|a_{m,t}|^2$, which is limited by the maximum transmit power $P_0$: $|a_{m,t}|^2 \le P_0$, $\forall m, \forall t$.
The server applies receiver beamforming to process the received signal. Let $\mathbf{f}_t \in \mathbb{C}^N$ denote the receiver beamforming vector with $\|\mathbf{f}_t\| =1$ and $\eta_t \in \mathbb{R}^+$ denote the receiver scaling factor. The post-processed received signal is given by
\begin{equation}\label{eq11}
\hat{r}_t[d]\!= \! \frac{\mathbf{f}_t^H\! \mathbf{y}_t[d]}{\sqrt{\eta_t}} \!= \! \frac{1}{\sqrt{\eta_t}}\Big(\!\! \!\sum_{m \in \mathcal{M}_t^\text{s}} \!\! \mathbf{\!\!f}_t^H \!\mathbf{h}_{m,t} a_{m,t}\frac{g_{m,t}[d]}{v_{m,t}}\!+\!\mathbf{f}_t^H\!\mathbf{n}_{d,t}\!\Big).
\end{equation}
The server uses $\mathbf{\hat{r}}_t \triangleq [\hat{r}_t[1],...,\hat{r}_t[D]]^T$ to update ${\bf w}_t$ based on \eqref{eq5}.
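For illustration, the over-the-air aggregation in \eqref{eq10} and the post-processing in \eqref{eq11} for a single time slot $d$ can be simulated as in the sketch below, with per-device channels, weights, gradient entries, and normalizers passed in as NumPy arrays.
\begin{verbatim}
import numpy as np

def received_estimate(selected, h, a, g, v, f, eta, sigma_n, N, rng):
    noise = (rng.normal(scale=sigma_n / np.sqrt(2), size=N)
             + 1j * rng.normal(scale=sigma_n / np.sqrt(2), size=N))
    y = sum(h[m] * a[m] * g[m] / v[m] for m in selected) + noise
    return (f.conj() @ y) / np.sqrt(eta)  # post-processed r_hat[d]
\end{verbatim}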
\subsection{Problem Formulation}
Since learning efficiency is important, we aim to maximize the training convergence rate by minimizing the global loss function after $T$ communication rounds. Therefore, our objective is to minimize the expected global loss function after $T$ communication rounds by jointly optimizing the device selection, the devices' transmit weights, and the receiver processing (beamforming and scaling). This optimization problem is formulated as follows:
\begin{align}\label{eq33}
\min_{\{\mathbf{f}_t, \mathbf{x}_t , \eta_t, \{a_{m,t}\} \}_{t=0}^{T-1}} \quad & \mathbb{E}[F(\mathbf{w}_T)]\\
\textrm{s.t.} \quad &
|a_{m,t}|^2 \le P_0, \: \forall m , \forall \: t , \\
& \mathbf{x}_t \in \{0,1\}^M , \forall \: t ,\\ & \|\mathbf{f}_t\| =1 , \eta_t > 0, \forall \: t,
\end{align}
where $\mathbb{E}(\cdot)$ is taken over the receiver noise, and $\mathbf{x}_t$ is the device selection vector at round $t$, with its $m$-th entry $x_{t}[m]=1$ indicating that device $m$ is selected to participate in the model update in communication round $t$.
\section{Joint Device Selection and Receiver Beamforming with Analog Aggregation}
Problem \eqref{eq33} is a finite time horizon stochastic optimization problem. Moreover, it is a mixed integer program. To tackle this challenging problem, we consider a more tractable upper bound on the loss function through training loss convergence rate analysis and propose our algorithm to minimize this upper bound. We first make the following assumptions on the global loss function, which are common in the stochastic optimization literature \cite{friedlander2012hybrid}.
\begin{enumerate}
\item[{\bf A1).}] The loss function $F(\mathbf{w})$ is $\mu$-strongly convex. The gradient $\nabla F(\mathbf{w})$ is $L$-Lipschitz continuous for some $L > 0$.
\item[{\bf A2).}] The loss function $F(\mathbf{w})$ is twice continuously differentiable.
\item[{\bf A3).}] The gradient of sample-wise training loss function is upper bounded: $\exists \; \alpha_1 \ge 0$ and $\alpha_2 \ge 1$, s.t.
\begin{align}\label{eq18}
\|\nabla l(\mathbf{w}_{t} ; \mathbf{x}_{k},y_{k})\|^{2}_{2} \le \alpha_{1} + \alpha_{2} \|\nabla F (\mathbf{w}_{t})\|^{2}_{2}, \: \forall k , \forall t.
\end{align}
\end{enumerate}
\subsection{Upper Bound on the Global Training Loss}
To analyze the expression of $F(\mathbf{w}_T)$, we rewrite the global model update at the server in \eqref{eq5} as
\begin{align}\label{eq12}
\mathbf{w}_{t+1} = \mathbf{w}_t - \lambda (\nabla F(\mathbf{w}_t)- \mathbf{e}_t)
\end{align}
where $\nabla F(\mathbf{w}_t)$ is the gradient of global loss function at $\mathbf{w}_t$, and $\mathbf{e}_t$ is the error vector representing the deviation of the updating direction from the true direction. The error vector can be expressed as follows:
\begin{align}
\mathbf{e}_t &= \nabla F(\mathbf{w}_{t})- \frac{\hat{\bold{r}}_{t}}{\sum_{m \in \mathcal{M}_t^\text{s}}K_{m}} \label{eq13} \\
&= \underbrace{ \nabla F(\mathbf{w}_{t})- \frac{\bold{r}_{t}}{\sum\limits_{m \in \mathcal{M}_t^\text{s}} K_{m}}}_\mathrm{\triangleq \mathbf{e_{1, t}}}+ \underbrace{\frac{\bold{r}_{t}}{\sum\limits_{m \in \mathcal{M}_t^\text{s}}K_{m}}-\frac{\hat{\bold{r}}_{t}}{\sum\limits_{m \in \mathcal{M}_t^\text{s}}K_{m}}}_\mathrm{\triangleq \mathbf{e_{2, t}}} \nonumber
\end{align}
where $\mathbf{e}_{1,t}$ represents the deviation due to device selection and $\mathbf{e}_{2,t}$ indicates the deviation due to the communication error as a result of receiver noise and analog aggregation over wireless channels.
A similar error expression to \eqref{eq13} is analyzed in \cite{liu2021reconfigurable} for a RIS-assisted FL system. We can apply the result in \cite{liu2021reconfigurable} to our system straightforwardly by replacing the RIS channel with our channel model. From the result in \cite{liu2021reconfigurable}, for a given selected device set $\mathcal{M}_t^\text{s}$, the optimal receiver scaling $\eta_t$ and transmit weights $\{a_{m,t}\}$ that minimize $\mathbb{E}[\|\mathbf{e}_{2,t}\|^2]$ are given by
\begin{align}\label{eq14}
\eta_t=\min\limits_{m\in \mathcal{M}_t^\text{s}}\frac{ P_{0} \lvert\mathbf{f}_t^{H} \mathbf{h}_{m,t}\rvert ^{2}}{K_{m}^{2}v_{m,t}^{2}}, \:
a_{m,t}= \frac{K_m \sqrt{\eta_t} v_{m,t} \mathbf{h}_{m,t}^H\mathbf{f}_t}{\lvert\mathbf{f}_t^H \mathbf{h}_{m,t} \rvert^2}.
\end{align}
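A direct sketch of evaluating \eqref{eq14}, assuming the channels, data sizes, and gradient standard deviations of the selected devices are given:
\begin{verbatim}
import numpy as np

def optimal_eta_and_weights(selected, f, h, K, v, P0):
    gains = {m: np.abs(f.conj() @ h[m]) ** 2 for m in selected}
    eta = min(P0 * gains[m] / (K[m] ** 2 * v[m] ** 2)
              for m in selected)
    a = {m: K[m] * np.sqrt(eta) * v[m] * (h[m].conj() @ f) / gains[m]
         for m in selected}
    return eta, a
\end{verbatim}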
Following the above, an upper bound on the expected global loss function $\mathbb{E}[F(\mathbf{w}_{t+1})]$ is derived in \cite{liu2021reconfigurable}, which we can straightforwardly apply to our problem. Specifically, if we substitute $\eta_t$ and $a_{m,t}$ from \eqref{eq14} into \eqref{eq11}, and set $\lambda = \frac{1}{L}$ in \eqref{eq5}, based on A1-A3, the expected difference between the loss function at round $(t+1)$ and the optimal loss is bounded by\cite{liu2021reconfigurable}
\begin{align}\label{eq19}
\mathbb{E} [F(\mathbf{w}_{t+1}\!)\!-\!F(\mathbf{w}^\star\!) ]\! \leq\! \psi_t \mathbb{E}[F(\mathbf{w}_t\!)\!-\! F(\mathbf{w}^\star\!)]\!+
\! \frac{\alpha_1}{L} d(\mathbf{f}_t,\! \mathbf{x}_t;\!\mathcal{H}_t\!)
\end{align}
where $\psi_t \triangleq 1- \frac{\mu}{L} \big( 1-2 \:\alpha_2\: d(\mathbf{f}_t, \mathbf{x}_t;\mathcal{H}_t) \big)$, $\mathcal{H}_t \triangleq \{{\bf h}_{m,t}\}$, and
\begin{align}\label{d_func}
d(\mathbf{f}_t, \mathbf{x}_t;\mathcal{H}_t)\triangleq& \frac{4}{K^2}\Big(\sum_{m=1}^M(1-x_{t}[m])K_m\Big)^2 \\ & + \frac{\sigma_n^2}{P_{0}(\sum_{m=1}^M x_{t}[m] K_m)^2}\max_{1\le m \le M}\frac{x_{t}[m] K_m^2}{\lvert\mathbf{f}_t^{H}\mathbf{h}_{m,t}\rvert^2}.\nonumber
\end{align}
Let $\mathbf{w}_0$ be the initial model parameter vector. Applying the bound in \eqref{eq19} to $t = 0,\ldots,T-1$, we have the following upper bound after $T$ communication rounds:
\begin{align} \label{eq34}
& \mathbb{E} [F(\mathbf{w}_{T})-F(\mathbf{w}^\star) ] \leq \Big(\prod_{t=0}^{T-1}\psi_t\Big) \mathbb{E}[F(\mathbf{w}_0)- F(\mathbf{w}^\star)] \\[-.95em]
& + \frac{\alpha_1}{L} \Big( \sum \limits_{t=0}^{T-2}
\Big(\!\!\prod_{\tau =t+1}^{T-1}\!\! \psi_\tau\Big) d(\mathbf{f}_{t}, \mathbf{x}_{t};\mathcal{H}_t)+d(\mathbf{f}_{T-1}, \mathbf{x}_{T-1};\mathcal{H}_{T-1})\! \Big).\nonumber
\end{align}
\subsection{Joint Receiver Beamforming and Device Selection Design }
\label{sec:prob reformulation}
Since $\mathbb{E}[F(\mathbf{w}_T)]$ in (\ref{eq33}) is difficult to optimize directly, in the following, we minimize its upper bound in \eqref{eq34} instead. Note that since $\psi_t$ is an increasing function of $d(\mathbf{f}_t, \mathbf{x}_t;\mathcal{H}_t)$, the upper bound in \eqref{eq34} is an increasing function of $d(\mathbf{f}_t, \mathbf{x}_t;\mathcal{H}_t)$, $t=0,\ldots, {T-1}$. Thus, it suffices to minimize $d(\mathbf{f}_t, \mathbf{x}_t;\mathcal{H}_t)$ at each round $t$ over $(\mathbf{f}_t,\mathbf{x}_t)$. This joint device selection and receiver beamforming optimization problem is given below, where we remove subscript $t$ from the problem for simplicity:\\[-1.4em]
\begin{align}\label{eq22}
\min_{\mathbf{f}, \mathbf{x}} \quad & d(\mathbf{f}, \mathbf{x;\mathcal{H}}) &
\textrm{s.t.} \quad & \|\mathbf{f}\| =1,\: \mathbf{x} \in \{0,1\}^M .
\end{align}
Problem (\ref{eq22}) is still a mixed-integer program. Below, we propose our algorithm JBFDS, which uses alternating optimization to solve receiver beamforming and device selection subproblems iteratively to find a solution.
\subsubsection{Receiver beamforming design given device selection $\mathbf{x}$}
Given device selection ${\bf x}$ (or ${\cal M}^\text{s}$), problem (\ref{eq22}) is reduced to
\begin{align}\label{eq25}
\min_{\mathbf{f:\|\mathbf{f}\| =1 }} & \max_{m \in {\cal M}^\text{s}}\frac{ K_m^2}{|\mathbf{f}^{H}\mathbf{h}_m|^2}
\end{align}
We re-write the min-max problem (\ref{eq25}) as
\begin{align}\label{jadid}
\min_{\mathbf{f}:\|\mathbf{f}\| =1,t} \ \ & t &
\textrm{s.t.} \quad & \frac{ K_m^2}{|\mathbf{f}^{H}\mathbf{h}_m|^2}\le t, \ m \in {\cal M}^\text{s}.
\end{align}
We can further simplify problem (\ref{jadid}) by letting $\tilde{{\bf f}}\triangleq \sqrt{t} \mathbf{f}$, given by
\begin{align}\label{eq27}
\min_{\tilde{{\bf f}} }\ \ & \|\tilde{{\bf f}} \|^2 &
\textrm{s.t.} \quad & |\tilde{{\bf f}}^H\mathbf{h}_m|^2 \ge K_m^2, \ m \in {\cal M}^\text{s}.
\end{align}
We notice that problem (\ref{eq27}) is in fact equivalent to the quality-of-service single-group downlink multicast beamforming problem \cite{Sidiropoulos&etal:TSP2006,dong2020multi}, where the BS transmits a common message to all devices in ${\cal M}^\text{s}$, and the goal is to optimize the multicast beamformer to minimize the transmit power while meeting each device's SNR target. Specifically, $\tilde{{\bf f}}$ is the transmit beamformer, and $K_m^2$ is the SNR target for each device $m$. While the multicast beamforming problem is generally NP-hard, it can be solved via the SCA method \cite{dong2020multi}, with guaranteed convergence to a stationary point. Once the solution $\tilde{{\bf f}}$ to problem (\ref{eq27}) is computed, we obtain the receiver beamforming vector ${\bf f}$.
\label{sec:JBFDS}
\subsubsection{Device selection given receiver beamforming $\mathbf{f}$}
Given receiver beamforming vector ${\bf f}$, problem (\ref{eq22}) reduces to
\begin{align}\label{eq29}
\min_{\mathbf{x}\in \{ 0,1\}\!^M} \ &\frac{4}{K^2}(K-\sum_{m=1}^Mx[m]K_m\:)^2\nonumber \\
&+\frac{\sigma_n^2}{P_{0}(\sum_{m=1}^M x[m] K_m)^2}\max_{1\le m \le M}\frac{x[m] K_m^2}{|\mathbf{f}^{H}\mathbf{h}_m|^2}.
\end{align}
Despite the above problem being an integer program, we develop an efficient algorithm to solve it.
Specifically, let $t=\max_{1\le m \le M}\frac{x[m] K_m^2}{|\mathbf{f}^{H}\mathbf{h}_m|^2}$, which is attained by some $m$. We first sort $\{\frac{K_m^2}{|\mathbf{f}^ H \mathbf{h}_m|^2}\}$ in ascending order and index the corresponding devices as $m_1,\cdots,m_M$. Then, assume $t$ takes the $j$-th of these sorted values, $j=1,\ldots,M$. In this case, only the corresponding $j$ devices are considered for selection, and the rest are not selected ({\it i.e.,\ \/} $x[m_{j'}]=0$, $j'>j$). For fixed $t$, the objective function in \eqref{eq29} decreases as more devices are selected (from $m_1$ to $m_j$). Thus, the minimum is attained by selecting all these $j$ devices. Finally, the objective values under different $t$'s are compared, and the optimal selection vector ${\bf x}$ that gives the minimum objective value is obtained. The algorithm is summarized in Algorithm \ref{alg2} and is guaranteed to find the optimal ${\bf x}$ for problem \eqref{eq29}, as stated below.
\begin{proposition}
The output of Algorithm~\ref{alg2} is a global optimal point for problem (\ref{eq29}).
\end{proposition}
\begin{proof} Assume $\mathbf{y}$ is an arbitrary device selection vector. Let $m^\star$ be the device with the largest value of $\frac{K_m^2}{|\mathbf{f}^{H}\mathbf{h}_m|^2}$ among the selected devices in $\mathbf{y}$. Assume its corresponding index in the sorted devices $\{m_1,\ldots,m_M\}$ in Algorithm~\ref{alg2} is $m_{j^\star}$ ({\it i.e.,\ \/} $m^\star=m_{j^\star}$). Then, the set of selected devices in $\mathbf{y}$ is a subset of $\{m_1, m_2,..., m_{j^\star}\}$, {\it i.e.,\ \/} the devices selected in ${\bf z}_{j^\star}$.
From the objective function in \eqref{eq29}, we have $d(\mathbf{f}, \mathbf{z}_{j^\star}) \le d(\mathbf{f}, \mathbf{y}) $. Thus, for any ${\bf y}\in\{0,1\}^M$, we can find a selection vector in $\{{\bf z}_j\}$ with an equal or smaller objective value. Thus, the global optimal point is in $\{{\bf z}_j\}$.
\end{proof}
Note that the SCA method used for the receiver beamforming subproblem requires a feasible initial point to problem \eqref{eq27}. In iteration $l$ of the alternating optimization, we use the receiver beamforming vector ${\bf f}^{(l-1)}$ from the previous iteration $(l-1)$ as the initial point for the SCA method. Since the SCA is guaranteed to converge to a local minimum, and the device selection subproblem is solved optimally, the objective value in \eqref{eq22} is non-increasing over iterations and is non-negative. Thus, JBFDS is guaranteed to converge.
The computational complexity of Algorithm \ref{alg2} is $\mathcal{O}(M^2+MN)$, and that for solving problem (\ref{eq27}) by the SCA method is $\mathcal{O}(I_{\text{max}} \min(N,M)^3)$, where $I_{\text{max}}$ is the number of SCA iterations.
\begin{algorithm}[h]
\begin{algorithmic}[1]
\STATE \textbf{Input:} Beamforming vector $\mathbf{f}$, $\{\mathbf{h}_m, K_m\}$.
\STATE Sort $\{\frac{K_m^2}{|\mathbf{f}^{H}\mathbf{h}_m|^2}\}$ in ascending order:
\begin{align}\label{eq31}
\frac{K_{m_1}^2}{|\mathbf{f}^{H}\mathbf{h}_{m_1}|^2}\le \frac{K_{m_2}^2}{|\mathbf{f}^{H}\mathbf{h}_{m_2}|^2} \le ...\le \frac{K_{m_M}^2}{|\mathbf{f}^{H}\mathbf{h}_{m_M}|^2}
\end{align} with device indices $m_1,\ldots,m_M$.
\STATE For $j=1,\ldots,M$, set
\begin{align}\label{eq32}
\mathbf{z}_j[m]= \begin{cases} 1, & \text{for } m \in \{m_1, \ldots,m_j\}\\0,&\text{otherwise}\end{cases}.
\end{align}
\STATE Choose $j^\star=\arg\min_jd(\mathbf{f}, \mathbf{z}_j)$ and set $\mathbf{x}=\mathbf{z}_{j^\star}$.
\STATE \textbf{Output:} $\mathbf{x}$
\end{algorithmic}
\caption{ Optimal Device Selection with Receive Beamforming}
\label{alg2}
\end{algorithm}
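For concreteness, the search in Algorithm~\ref{alg2} can be sketched as follows, with the objective $d(\mathbf{f},\mathbf{x})$ in \eqref{d_func} supplied as a callable \texttt{d\_obj}.
\begin{verbatim}
import numpy as np

def optimal_selection(f, h, K, d_obj):
    M = len(K)
    ratios = [K[m] ** 2 / np.abs(f.conj() @ h[m]) ** 2
              for m in range(M)]
    order = np.argsort(ratios)   # device indices m_1, ..., m_M
    best_x, best_val = None, np.inf
    for j in range(1, M + 1):
        x = np.zeros(M, dtype=int)
        x[order[:j]] = 1         # select the first j sorted devices
        val = d_obj(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x
\end{verbatim}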
\section{Simulation Results}
\label{sec:simulation results}
We consider an image classification task based on the MNIST dataset \cite{deng2012mnist} for training and testing. The dataset consists of $6\times10^4$ training samples and $1\times10^4$ test samples, belonging to ten different classes. Each data sample is a labeled image of size $28 \times 28 $ pixels, {\it i.e.,\ \/} $\mathbf{x}_k \in \mathbb{R}^{784}$, with its label $y_k \in \{0,1,...,9 \}$ for each class. We consider training a multinomial logistic regression classifier. The cross-entropy loss function is \\[-2em]
\begin{align}\label{eq36}
l(\mathbf{w}; \mathbf{x}_k , y_k) = - \sum \limits_{j=0}^9 1\{y_k = j\} \text{log}\frac{\text{exp}(\mathbf{u}_k^T\mathbf{w}_j)}{\sum \limits_{i=0}^9 \text{exp}(\mathbf{u}_k^T\mathbf{w}_i)} \\[-2em]\nonumber
\end{align}
where $\mathbf{u}_k =[\mathbf{x}_k^T,1 ]^T$, and $\mathbf{w}= [\mathbf{w}_0^T,\ldots,\mathbf{w}_9^T]^T$ with ${\bf w}_j\in \mathbb{R}^{785}$ being the model parameter vector for label $j$, consisting of 784 weights and a bias term. We assume $M=$ 50 devices and the distribution of data is i.i.d. over devices. The local dataset in each device $m$ has $K_m =1080$ data samples, evenly split for each class.
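A minimal sketch of evaluating this sample-wise loss in \eqref{eq36}, with \texttt{W} stacking the per-class parameter vectors $\mathbf{w}_j$ row-wise:
\begin{verbatim}
import numpy as np

def sample_loss(W, x, y):
    u = np.append(x, 1.0)       # feature vector with bias term
    logits = W @ u              # one logit per class
    logits -= logits.max()      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[y]        # cross-entropy for true label y
\end{verbatim}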
We set $N=4$. To compare with existing methods, we assume channels are fixed over time, and each vector is generated i.i.d.\ as $\mathbf{h}_{m,t}={\bf h}_m\sim \mathcal{CN}(\mathbf{0}, \frac{C}{d^3_m}{\bf I})$, where the path loss constant $C=1$, and $d_m$ is the distance between device $m$ and the server, generated under a uniform
distribution as $d_m \sim U[10\text{m}, 100\text{m}]$. We set $\frac{P_0}{\sigma_n^2}= -10$ dB and $\lambda = 0.05$. For comparison, we consider the following five methods:
\begin{enumerate}
\item \textbf{Error-free centralized learning:} the server has all local training datasets and computes the gradient $\nabla F(\mathbf{w})$ centrally. It serves as an upper bound for all methods.
\item \textbf{Select all:}
all of the devices are selected to contribute to the FL training, and the receiver beamforming is obtained using an SCA method.
\item \textbf{Select top one:}
only the device with the strongest channel condition is selected to contribute to the FL training.
\item \textbf{Gibbs sampling \cite{liu2021reconfigurable}:}
Gibbs sampling is used for device selection and receiver beamforming is obtained by an SCA method.
\item \textbf{DC approach \cite{yang2020federated}:} the receiver beamforming and the device selection are jointly optimized by DC programming to maximize the number of selected devices, subject to the communication MSE being no larger than a threshold $\gamma$.
\end{enumerate}
Note that for a fair comparison of the computational complexity, the SCA method in \cite{dong2020multi} is used in the select-all, Gibbs sampling, and JBFDS methods.
Figs.~\ref{fig1} and~\ref{fig2} show the average training accuracy and average global loss with $95\% $ confidence intervals over different channel realizations, respectively, for $T=100$. For the DC approach, we tested a range of $\gamma$ values between $45$ dB and $65$ dB and selected $\gamma =57$ dB, which gives the fastest training convergence rate. We see that optimizing the device selection with JBFDS improves the training accuracy over both selecting only the top device and selecting all devices. Furthermore,
our proposed JBFDS outperforms other alternative device selection methods with the fastest training convergence rate and the highest training accuracy. Table \ref{tab1} shows the average computational time for generating the beamforming and device selection solution. We see that the computational complexity of JBFDS is substantially lower than Gibbs sampling and DC approach.
\begin{figure}[t]
\centerline{\includegraphics[width=0.4\textwidth]{train_acc_all.pdf}}
\caption{Average training accuracy over communication rounds.}
\label{fig1}
\centerline{\includegraphics[width=0.4\textwidth]{train_loss_all.pdf}}
\caption{Average global training loss over communication rounds.}
\label{fig2}
\end{figure}
\begin{table}[t]
\caption{Average Run Time for Different Methods }
\begin{center}
\begin{tabular}{c|c|c|c}
\hline
&{JBFDS}& {Gibbs Sampling} &{DC}\\
\hline
{Run Time (s)}& 2.58& 880.06 & 80.33 \\
\hline
\end{tabular}
\label{tab1}
\end{center}
\end{table}
\section{Conclusion}
In this paper, we jointly designed the uplink receiver beamforming and device selection in over-the-air FL to minimize the global loss function after an arbitrary number of communication rounds. We proposed an efficient JBFDS algorithm, which uses an alternating-optimization approach to iteratively improve the beamforming and device selection solutions. We showed that the uplink beamforming subproblem can be converted to a well-known downlink single-group multicast beamforming problem. For the device selection subproblem, we proposed an efficient search algorithm to obtain an optimal solution. Simulation with image classification demonstrated the improved effectiveness and efficiency of JBFDS over existing methods.
\bibliographystyle{IEEEbib}
\section{Introduction}
\label{sec:Intro}
3D reconstruction of clothed human models is crucial for reproducing digital twins of the real world that give users a sense of reality and immersion.
Clothed human models are useful for various applications, such as entertainment, virtual reality, and the movie industry.
In particular, with the surging demands for social connections in virtual spaces, it is valuable to produce realistic 3D human models in a typical capturing setup.
Parametric human models have been proposed to reconstruct 3D full body shapes for different poses.
Among them, SMPL~\cite{loper2015smpl} is a representative model; it describes a human body with shape and pose parameters that are applied to a single template mesh with fixed topology.
In SMPL, shape deformations are obtained by linear blend skinning of the template mesh and are not detailed enough to depict the surface details of clothed human models.
This limitation also applies to other parametric models of human shapes~\cite{frankmodel, SMPL-X:2019}.
Recent learning-based methods for clothed human reconstruction utilize implicit 3D functions~\cite{saito2021scanimate, wang2021metaavatar}, but they learn a function defined in a 3D space and need an additional polygon extraction step to provide a 3D mesh as the output.
An explicit point cloud representation has been used to reconstruct loose clothes~\cite{POP:ICCV:2021,ma2021SCALE}, but this approach also needs the surface reconstruction step to produce a 3D mesh that can be directly used for applications.
On the other hand, Burov \Etal~\shortcite{burov2021dsfn} use an explicit mesh representation and train a Dynamic Surface Function Network (DSFN) to estimate vertex displacements for a template mesh.
However, DSFN may not fully exploit the surface details in the input scans due to the spatial regularization constraint needed for training.
In this paper, we present {\em LaplacianFusion}, a novel framework that reconstructs a {\em detailed} and {\em controllable} 3D clothed human model from an input depth or point cloud sequence.
Our key idea is to use differential coordinates, instead of implicit 3D function or vertex displacements, for representing local structures of surface details.
For the differential coordinates, we use Laplacian coordinates~\cite{karni2000spectral, alexa2003differential, sorkine2003high} that have been widely applied to mesh processing and editing~\cite{lipman2004differential, sorkine2004laplacian}.
Intuitively, Laplacian coordinates are the difference between a point and the average of its neighbors, so they can naturally encode local shape variations.
In our LaplacianFusion framework, the reconstructed human model is expressed as a combination of a controllable base mesh and a surface function implemented as a multi-layer perceptron (MLP).
In the training phase, we first reconstruct a base mesh sequence based on SMPL that fits the input scans.
We then train an MLP function by fusing the Laplacian values estimated at input points on the surface of the SMPL template mesh.
As a result, the MLP function learns to predict Laplacian coordinates representing the details on the body surface.
To reconstruct a detailed human model for a given pose, we start from a SMPL template mesh in the canonical space and obtain a base mesh by deforming the SMPL template using the pose parameters.
We then subdivide the base mesh to have enough vertices and estimate Laplacian coordinates for the vertices using the learned MLP function.
Finally, the detailed output mesh is obtained by globally integrating the estimated Laplacian coordinates as a whole.
In this paper, we call the MLP function {\em neural surface Laplacian function}.
We aim for a natural capture scenario where the subject freely performs actions during the capture.
Our approach can handle both full and partial views of point clouds that are captured by a dome-shaped multi-camera setup~\cite{fvv} and a single RGB-D camera~\cite{KinectAzure}, respectively.
The reconstructed 3D models are controllable as the base mesh and the neural surface Laplacian function are conditioned on SMPL pose parameters.
Our approach restores the surface details of a clothed human body better than the recent explicit surface-based approach, DSFN~\cite{burov2021dsfn}.
In addition, due to the differential representation and a fixed-topology base mesh, our approach can be easily adapted for other applications, including detail transfer, detail enhancement, and texture transfer.
Our codes are publicly available.\footnote{\href{https://github.com/T2Kim/LaplacianFusion}{https://github.com/T2Kim/LaplacianFusion}}
Our main contributions can be summarized as follows:
\begin{itemize}
\item We propose \textit{LaplacianFusion}, a novel framework for reconstructing surface details of a clothed human body model. Our framework can handle both partial and full-body point cloud sequences.
\item We introduce an approach to learn Laplacian coordinates representing surface details from scanned points using an MLP.
\item Our reconstructed model is controllable by pose parameters and supports various shape manipulations, including detail transfer.
\end{itemize}
\section{Related Work}
\subsection{Parametric models for 3D human shapes and clothes}
\label{param_model}
\paragraph{Human body models}
PCA-based parametric models have been proposed for handling human body and pose variations: SMPL~\cite{loper2015smpl, MANO:SIGGRAPHASIA:2017, SMPL-X:2019}, GHUM~\cite{xu2020ghum}, and Frank model~\cite{frankmodel}.
These models can handle body shape variations and pose-dependent shape deformations that cannot be modeled with Linear Blend Skinning (LBS)~\cite{lewis2000pose}. The models are suitable for expressing human shapes with coarse meshes, but on their own they are not sufficient to capture rich details.
\paragraph{Clothed human models}
\label{cloparam}
Several approaches represented clothed humans by extending parametric human models.
SMPL \cite{loper2015smpl} has been extended to express clothed deformations by directly adding a displacement vector to each vertex~\cite{alldieck2019learning, alldieck2018detailed, alldieck2018video, bhatnagar2020ipnet}.
CAPE~\cite{ma2020cape} proposed an extended model by adding a cloth style term to SMPL.
Other approaches~\cite{de2010stable, guan2012drape, pons2017clothcap, bhatnagar2019multi, tiwari20sizer, xiang2020monoclothcap} used additional parametric models for representing clothes on top of a parametric human model.
Additional approaches for expressing surface details include GAN-based normal map generation~\cite{lahner2018deepwrinkles}, RNN-based regression~\cite{santesteban2019learning}, and style-shape specific MLP functions~\cite{patel2020tailornet}. However, these approaches are limited to several pre-defined clothes and cannot recover detailed human shapes with arbitrary clothes from input scans.
\subsection{Implicit clothed human reconstruction}
\paragraph{Volumetric implicit representations}
Truncated signed distance function (TSDF) is a classical implicit representation for reconstruction. TSDF-based approaches that warp and fuse the input depth sequence onto the canonical volume have been proposed for recovering dynamic objects~\cite{dynamicfusion, volumedeform}.
This volume fusion mechanism is extended to be conditioned with the human body prior for representing a clothed human~\cite{BodyFusion,yu2018DoubleFusion}.
Optimized approaches for real-time performance capture have also been proposed~\cite{dou2016fusion4d, dou2017motion2fusion, habermann2020deepcap, yu2021function4d}.
\paragraph{Neural implicit representations}
MLP-based neural implicit representation has been actively investigated for object reconstruction~\cite{Park_2019_CVPR, Mescheder2019occnet, Chen2019LearningIF}. PIFu \cite{saito2019pifu, saito2020pifuhd} first adopted this representation for reconstructing a static clothed human from a single image. DoubleField~\cite{Shao_2022_CVPR} uses multi-view RGB cameras and improves visual quality by sharing the learning space for geometry and texture. With depth image or point cloud input, multi-scale features~\cite{chibane20ifnet, bhatnagar2020ipnet} and human part classifiers~\cite{bhatnagar2020ipnet} have been used for reconstructing 3D human shapes. Li et al.~\shortcite{li2021posefusion} proposed implicit surface fusion from a depth stream and enabled detailed reconstruction even for invisible regions.
Neural parametric models have also been proposed for modeling shape and pose deformations. NASA~\cite{deng2019NASA} learns pose-dependent deformations using part-separate implicit functions. LEAP~\cite{mihajlovic2021LEAP} and imGHUM~\cite{alldieck2021imghum} learn parametric models that can recover shape and pose parameters for SMPL~\cite{loper2015smpl} and GHUM~\cite{xu2020ghum}, respectively. NPMs~\cite{palafox2021npms} encodes shape and pose variations into two disentangled latent spaces using auto-decoders~\cite{Park_2019_CVPR}. SPAMs~\cite{Palafox_2022_CVPR} introduces a part-based disentangled representation of the latent space.
For subject-specific clothed human reconstruction from scans, recent implicit methods define shape details in a canonical shape and use linear blend skinning to achieve both detail-preservation and controllability. Neural-GIF~\cite{tiwari2021neural} learns a backward mapping network for mapping points to the canonical space and a displacement network working in the canonical space.
In contrast, SNARF~\cite{Chen_2021_ICCV} proposed a forward skinning network model to better handle unseen poses.
SCANimate~\cite{saito2021scanimate} learns forward and backward skinning networks with a cycle loss to reconstruct disentangled surface shape and pose-dependent deformations.
MetaAvatar~\cite{wang2021metaavatar} proposed an efficient pipeline for subject-specific fine-tuning using meta-learning.
These implicit representations are topology-free and can handle vastly changing shapes, such as loose clothes. However, they perform shape reconstruction at every frame individually and cannot provide the temporally consistent mesh topology needed for animation.
In addition, this approach needs dense point sampling in a 3D volume to train implicit functions and is computationally heavy.
\begin{table}[t]
\centering
\caption{Comparison of approaches that reconstruct 3D clothed-human shapes. Our approach explicitly handles a rigged 3D mesh model, so the reconstruction is animatable and texture map can be readily applied.}
\vspace{-3.3mm}
\resizebox{.995\linewidth}{!}{
\begin{tabular}{c|c|c|c|c|c|c}
& {Method} & \makecell[c]{Shape\\representation} & \rotatebox[origin=c]{90}{2.5D input} & \rotatebox[origin=c]{90}{3D input} & \rotatebox[origin=c]{90}{\makecell[c]{Animation\\ready}} & \rotatebox[origin=c]{90}{\makecell[c]{Texture map\\ready}} \\
\hline
\multirow{12}{*}{\rotatebox[origin=c]{90}{Implicit}} & DynamicFusion \shortcite{dynamicfusion} & SDF & \checkmark & & & \\
& BodyFusion \shortcite{BodyFusion} & SDF & \checkmark & & & \\
& NASA \shortcite{deng2019NASA} & Occupancy & & \checkmark & \checkmark & \\
& IF-Net \shortcite{chibane20ifnet} & Occupancy & \checkmark & \checkmark & & \\
& IP-Net \shortcite{bhatnagar2020ipnet} & Occupancy & \checkmark & \checkmark & \checkmark &\\
& NPMs \shortcite{palafox2021npms} & SDF & \checkmark & \checkmark & \checkmark & \\
& Neural-GIF \shortcite{tiwari2021neural} & SDF & & \checkmark & \checkmark & \\
& SCANimate \shortcite{saito2021scanimate} & SDF & & \checkmark & \checkmark & \\
& MetaAvatar \shortcite{wang2021metaavatar} & SDF & \checkmark & \checkmark & \checkmark & \\
& SNARF \shortcite{Chen_2021_ICCV} & SDF & & \checkmark & \checkmark & \\
& POSEFusion \shortcite{li2021posefusion} & Occupancy & \checkmark & & \checkmark & \\
& LEAP \shortcite{mihajlovic2021LEAP} & Occupancy & & \checkmark & \checkmark & \\
\hline
\multirow{5}{*}{\rotatebox[origin=c]{90}{Explicit}}
& CAPE \shortcite{ma2020cape} & Coord. (vertex) & & \checkmark & \checkmark & \checkmark\\
& SCALE \shortcite{ma2021SCALE} & Coord. (patch) & & \checkmark & \checkmark & \\
& PoP \shortcite{POP:ICCV:2021} & Coord. (point) & & \checkmark & \checkmark & \\
& DSFN \shortcite{burov2021dsfn} & Coord. (vertex) & \checkmark & & \checkmark & \checkmark\\
\rowcolor{yellow}
& Ours & Coord. + Laplacian & \checkmark & \checkmark & \checkmark & \checkmark \\
\end{tabular}
}
\vspace{-3mm}
\label{tbl:related work}
\end{table}
\subsection{Explicit clothed human reconstruction}
Explicit representations for reconstructing clothed humans have been developed mainly for handling geometric details on the template mesh of a parametric model such as SMPL~\cite{loper2015smpl}.
\paragraph{Point-based}
SCALE~\cite{ma2021SCALE} recovers controllable clothed human shapes by representing local details with a set of points sampled on surface patches. To avoid artifacts of the patch-based approach at patch boundaries, PoP~\cite{POP:ICCV:2021} represents local details using point samples on a global 2D map. Point cloud representation has a flexible topology and can cover more geometric details. However, this representation does not provide an explicit output mesh.
\paragraph{Mesh-based}
For subject-specific human reconstruction with a depth sequence, DSFN~\cite{burov2021dsfn} represents surface details using vertex offsets on a finer resolution mesh obtained by subdividing the SMPL template mesh. This approach is the closest to ours. The main difference is that our method represents surface details using Laplacian coordinates, instead of vertex offsets. Experimental results show that Laplacian coordinates are more effective for recovering surface details than vertex offsets (\Sec{exp}).
\Tbl{related work} compares our method with related ones in terms of possible inputs and desirable properties.
\section{Preliminary}
\label{sec:background}
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{figs/Explanation/laplace_beltrami_jpg_eq_bottom.pdf}\\
\caption{Computing Laplacian coordinates on a mesh and a point cloud. (left) Mesh editing methods usually calculate Laplacian coordinates (red arrow) using uniform average of neighbor vertices on a mesh. (middle) Our input is a point cloud, and we approximate Laplacian coordinates by fitting a quadratic polynomial to a small region (yellow region) in the input point cloud and then using Laplace-Beltrami operator that produces differentials of a smooth real function $f$. (right) The approximated Laplacian coordinates differ from uniform Laplacian coordinates, so we utilize the discrete Laplace-Beltrami operator to convert Laplacian coordinates to absolute vertex positions.
}
\label{fig:explanation[laplace]}
\end{figure}
\begin{figure*}[t]
\includegraphics[width=1\textwidth]{figs/overview.pdf}\\
\caption{System overview. (top blue box) LaplacianFusion takes a 2.5D depth sequence (mint color) or full-body 3D point clouds (olive color), and produces detailed meshes. In the training phase, (top left) we initially align the SMPL mesh to the input point clouds and obtain the skinned body meshes. (a) Then, we learn pose-dependent local deformations for the vertices of the skinned body meshes to accurately fit the input data. (b) To capture surface details, we set training pairs by projecting each raw scan to the base mesh, then learn neural surface Laplacian function that predicts pose-dependent Laplacian coordinates on the surface of a base mesh. (bottom green box) In the reconstruction phase, we can recover and animate the final 3D shapes using the base mesh controlled by pose parameters and the neural surface Laplacian function estimating surface details. (c) We conduct Laplacian reconstruction to convert the estimated Laplacian coordinates to vertex positions. Note that the red and blue colors illustrated on the line segments in (b) and (c) represent Laplacian coordinates.
}
\label{fig:overallProcess}
\end{figure*}
\subsection{Laplacian coordinates from a mesh}
\label{sec:LapMesh_editing}
In the graphics literature, recovering an unknown function from differential quantities (Laplacian) has become widely known through Poisson image editing~\cite{perez2003poisson}.
This technique was successfully extended to the 3D mesh domain, especially for mesh editing~\cite{lipman2004differential, sorkine2004laplacian}. In mesh editing, the differential quantities are used to encode vertex coordinates and are called {\em Laplacian coordinates}. Mesh editing based on Laplacian coordinates includes three steps: encoding Laplacian coordinates from the original mesh, interactively editing control points, and converting Laplacian coordinates into absolute vertex positions of the target mesh while satisfying the positional constraints imposed by the edited control points. In the following, we briefly introduce the encoding and converting steps.
Let the original mesh $\mathcal{M}=\{\mathcal{V}, \mathcal{F}\}$ be described by the vertex set $\mathcal{V}$ and the triangle set $\mathcal{F}$, where $\mathcal{V}=\{\mathbf{v}_k\mid k = 1, \dots, K\}$. $\mathbf{v}_k$ denotes the position of the $k$-th vertex and $K$ is the number of vertices. Uniform Laplacian coordinates $\boldsymbol{\widehat{\delta}}_k\in\mathbb{R}^3$ are defined by:
\begin{equation}
\begin{split}
\label{equ:LapCoord_meshedit}
\boldsymbol{\widehat{\delta}}_k=\sum_{j\in \mathcal{N}(k)}\widehat{w}_k{\left(\mathbf{v}_k-\mathbf{v}_j\right)},
\end{split}
\end{equation}
where $\widehat{w}_k = \frac{1}{|\mathcal{N}(k)|}$ indicates uniform weights, and $\mathcal{N}(k)$ denotes the set of adjacent vertices of the $k$-th vertex (\Fig{explanation[laplace]} left). Regarding all vertices, this equation can be represented in a matrix form: $[\boldsymbol{\widehat{\delta}}_1, \dots,\boldsymbol{\widehat{\delta}}_K]^T=\widehat{\mathbf{L}}[\mathbf{v}_1, \dots,\mathbf{v}_K]^T$, where $\widehat{\mathbf{L}}$ is the uniform Laplacian matrix.
Notably, the matrix $\widehat{\mathbf{L}}$ has rank $K-1$, so $\{\boldsymbol{\widehat{\delta}}_k\}$ can be converted into $\mathcal{V}$ by taking the specified position of a selected vertex as the boundary condition and solving a linear system.
For example, when fixing the $i$-th vertex, we can form a sparse linear system $\mathbf{A}\mathbf{x}=\mathbf{b}$, where $\mathbf{A}=[\widehat{\mathbf{L}}\,^T, \mathbf{1}_i]^T$ and $\mathbf{b}=[\boldsymbol{\widehat{\delta}}_1, \dots,\boldsymbol{\widehat{\delta}}_K, \mathbf{v}_i]^T$. $\mathbf{1}_i$ denotes one-hot encoding, where the $i$-th element is one.
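As an illustration, this conversion can be written as a sparse least-squares solve (a minimal SciPy sketch of the formulation above; the function and variable names are ours):
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def positions_from_laplacians(L, delta, anchor_idx, anchor_pos):
    # L: (K, K) sparse Laplacian matrix; delta: (K, 3) Laplacian coordinates.
    # anchor_idx, anchor_pos: indices and fixed positions of anchored vertices.
    K = L.shape[0]
    n = len(anchor_idx)
    # one-hot rows that pin the anchored vertices (boundary condition)
    onehot = sp.csr_matrix((np.ones(n), (np.arange(n), anchor_idx)),
                           shape=(n, K))
    A = sp.vstack([L, onehot]).tocsr()
    b = np.vstack([delta, anchor_pos])
    # solve the stacked sparse system in the least-squares sense, per axis
    return np.column_stack([spla.lsqr(A, b[:, c])[0] for c in range(3)])
\end{verbatim}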
\subsection{Laplacian coordinates from a point cloud}
\label{sec:LapPCD}
In this paper, we compute Laplacian coordinates from raw 3D scan data and use them for shape detail reconstruction.
Then, we need an alternative to \Eq{LapCoord_meshedit} for computing Laplacian coordinates, as a point cloud does not have edge connectivity. We may consider directly building edges from the point set, but doing so may generate a noisy and non-manifold mesh. To resolve this difficulty, Liang et al.~\shortcite{liang2012geometric} defined the Laplace-Beltrami operator on a point cloud by fitting a quadratic function to the local neighborhood of a point and computing the differentials of the function (\Fig{explanation[laplace]} middle). However, the Laplace-Beltrami operator computes Laplacian coordinates using a {\em continuous} function that reflects the {\em non-uniform} local shape in the neighborhood, which differs from the {\em discrete uniform} Laplacian coordinates in \Eq{LapCoord_meshedit}.
Therefore, Laplacian coordinates calculated by the Laplace-Beltrami operator need to be converted into mesh's vertex positions differently.
Let's assume that we have a mesh $\mathcal{M}=\{\mathcal{V}, \mathcal{F}\}$. We can calculate the discrete Laplace-Beltrami operator on the mesh (\Fig{explanation[laplace]} right) as follows~\cite{meyer2003discrete}:
\begin{equation}
\begin{split}
\label{equ:cotLap}
\boldsymbol{\delta}_k=\Delta_{\mathcal{M}}(\mathbf{v}_k)&=\frac{1}{a_k}
\sum_{j\in \mathcal{N}(k)}{\frac{\cot(\alpha^1_{k,j})+\cot(\alpha^2_{k,j})}{2}\left(\mathbf{v}_k-\mathbf{v}_j\right)},
\end{split}
\end{equation}
where $\boldsymbol{\delta}_k$ is the non-uniform Laplacian coordinates, $\Delta_{\mathcal{M}}$ is the discrete Laplace-Beltrami operator, $\cot(\cdot)$ denotes the cotangent function, $a_k$ is the Voronoi area of $\mathbf{v}_k$, and $\alpha^1_{k,j}$ and $\alpha^2_{k,j}$ are the two angles opposite to the edge $\{k,j\}$ on $\mathcal{M}$.
Similarly to \Eq{LapCoord_meshedit}, \Eq{cotLap} can be represented in a matrix form
$[\boldsymbol{\delta}_1, \dots,\boldsymbol{\delta}_K]^T=\mathbf{L}[\mathbf{v}_1, \dots,\mathbf{v}_K]^T$, where $\mathbf{L}$ is the non-uniform Laplacian matrix, and we can convert $\{\boldsymbol{\delta}_k\}$ into $\mathcal{V}$ by solving a linear system with a boundary condition.
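For reference, a per-vertex evaluation of \Eq{cotLap} can be sketched as follows (a minimal NumPy illustration; the array layout is our assumption):
\begin{verbatim}
import numpy as np

def laplace_beltrami_at_vertex(v_k, neighbors, alpha1, alpha2, voronoi_area):
    # v_k: (3,) vertex position; neighbors: (n, 3) adjacent vertex positions.
    # alpha1, alpha2: (n,) angles opposite each edge {k, j}, in radians.
    w = (1.0 / np.tan(alpha1) + 1.0 / np.tan(alpha2)) / 2.0  # cotangent weights
    return (w[:, None] * (v_k[None, :] - neighbors)).sum(axis=0) / voronoi_area
\end{verbatim}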
Compared with global coordinates, which represent exact spatial locations, Laplacian coordinates naturally encode local shape information, such as the sizes and orientations of local details~\cite{sorkine2006differential}. In mesh editing, this property has been used to retain desirable local shapes during the editing process by encoding Laplacian coordinates from the original mesh and constraining the edited mesh to follow the encoded Laplacian coordinates.
In this paper, we use the property for shape reconstruction rather than editing. We extract and encode the local details from the input scan data using Laplacian coordinates and restore the desirable details on the subdivided base mesh by integrating the encoded Laplacian coordinates.
\section{Overview}
\label{sec:overall_process}
\Fig{overallProcess} shows the overall process of our LaplacianFusion framework.
The input is a sequence of point clouds $\{\mathcal{P}_t\}_{t=1,\ldots,T}$, each of which contains either a 2.5D depth map or a full 3D scan of a clothed human body in motion.
Our pipeline starts from the SMPL template mesh in the canonical T-pose (\Sec{basemesh}). SMPL provides the shape prior for the reconstructed models and enables the controllability using pose parameters. Given an input sequence, we estimate a single SMPL shape parameter $\boldsymbol{\beta}$ and per-frame SMPL pose parameters $\boldsymbol\theta_t$, and obtain the posed skin body $\mathcal{M}_t$ for each frame $t$.
Since the mesh $\mathcal{M}_t$ may not accurately fit the input point cloud, we add pose-dependent local deformations to improve the fitting and obtain the base mesh $\mathcal{B}_t$ (\Sec{Method[PBBM]}), where the local deformations are estimated using an MLP function $f_d$ that computes vertex displacements for $\mathcal{M}_t$.
On top of the base mesh $\mathcal{B}_t$, our novel {\em neural surface Laplacian function} $f_l$ predicts Laplacian coordinates that encode the surface details of a clothed human model (\Sec{LapSurfFunc}).
Finally, we reconstruct the detailed output mesh $\mathcal{S}_t$ by integrating Laplacian coordinates estimated at the vertices of the subdivided base mesh (\Sec{Method[Recon]}).
Note that $f_d$ and $f_l$ are functions of both 3D positions and pose parameters as the local deformations and cloth details such as wrinkles are pose-dependent.
We outline the training and inference phases for the two surface functions below.
\paragraph{Training phase}
We sequentially train the two surface functions because 3D points on a base mesh $\mathcal{B}$ obtained by $f_d$ are used as the input of $f_l$. We train $f_d$ to learn a deformation field that minimizes the Chamfer distance~\cite{barrow1977parametric} between the input point cloud and the base mesh $\mathcal{B}$.
To train $f_l$, for each point in the input point cloud, we calculate approximate Laplacian coordinates as described in \Sec{LapPCD}, and find the corresponding point on the base mesh $\mathcal{B}$ to assign the calculated Laplacian coordinates.
$f_l$ is then trained to learn the assigned Laplacian coordinates on $\mathcal{B}$ by minimizing the L2 loss.
\paragraph{Inference phase}
We can infer the surface details for a particular pose using the learned MLP functions $f_d$ and $f_l$.
We first obtain the posed skinned body $\mathcal{M}$ by applying the given pose parameter to the SMPL template mesh.
We then apply the local deformation function $f_d$ to $\mathcal{M}$ and obtain the pose-dependent base mesh $\mathcal{B}$. Since the vertex number of the SMPL template is insufficient to represent surface details of a clothed human, we subdivide the base mesh $\mathcal{B}$.
We estimate Laplacian coordinates for each vertex of the subdivided $\mathcal{B}$ using neural surface Laplacian function $f_l$.
Finally, we reconstruct a detailed clothed human model by integrating the estimated Laplacian coordinates.
Note that the pose parameter used in the inference phase can be arbitrary, and it does not have to be one of the pose parameters estimated from the input frames (\Sec{animate}).
\section{Pose-dependent base mesh}
\subsection{Skinned body acquisition}
\label{sec:basemesh}
Our approach starts with building a skinned body.
We adopt the SMPL model~\cite{loper2015smpl} that can readily manipulate 3D human shape using identity-dependent parameters $\boldsymbol{\beta}$ and pose-dependent parameters $\boldsymbol{\theta}$.
SMPL supports rigging and skinning, and the template mesh can be deformed with an arbitrary pose.
As in the SMPL model, we use the linear blend skinning (LBS) scheme~\cite{lewis2000pose}
to compute template mesh deformation:
\begin{equation}
\label{equ:lbs}
\begin{split}
LBS_{\boldsymbol\theta}(\mathbf{v})=\left(\sum_{j}{\mathbf{w}_{j}(\mathbf{v})\mathbf{T}_{j}(\boldsymbol\theta)}\right)\mathbf{v},
\end{split}
\end{equation}
where $j \leq J$ denotes the index of a joint, $J$ is the number of joints, $\mathbf{T}_{j}(\boldsymbol\theta)$ denotes a $4\times4$ rigid transformation matrix for the $j$-th joint, $\mathbf{w}(\mathbf{v})\in\mathbb{R}^J$ is a skinning weight vector of $\mathbf{v}$ predefined by SMPL model, and $\mathbf{v}$ is homogeneous vertex coordinates of the base mesh. We can conduct articulated deformation by applying \Eq{lbs} to all mesh vertices.
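A direct per-vertex implementation of \Eq{lbs} might look as follows (a minimal NumPy sketch; the array shapes are our assumption):
\begin{verbatim}
import numpy as np

def lbs_vertex(v_homo, skin_weights, joint_transforms):
    # v_homo: (4,) homogeneous vertex coordinates.
    # skin_weights: (J,) skinning weights w_j(v) predefined by SMPL.
    # joint_transforms: (J, 4, 4) rigid transforms T_j(theta).
    blended = np.einsum('j,jrc->rc', skin_weights, joint_transforms)
    return blended @ v_homo   # deformed homogeneous coordinates
\end{verbatim}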
The canonical neutral SMPL model $\mathcal{M}_C$ is in the T-pose, and we align the model with each input point cloud $\mathcal{P}_t$ by estimating shape and pose parameters, $\boldsymbol{\beta}$ and $\boldsymbol{\theta}_t$, so that the constructed posed skinned body $\mathcal{M}_{t}$ can fit $\mathcal{P}_t$ well.
We apply deep virtual markers~\cite{kim2021deep} to $\mathcal{M}_C$ and $\mathcal{P}_t$ to obtain the initial geometric correspondence. We additionally use OpenPose~\cite{cao2019openpose} to improve the point matching accuracy if color images are available.
To obtain the parameters $\boldsymbol{\beta}$ and $\boldsymbol{\theta}_t$ for $\mathcal{M}_t$,
we first optimize them using the initial correspondence between $\mathcal{M}_C$ and $\mathcal{P}_t$, and then further minimize the correspondence alignment error and the Chamfer $l_2$ distance together. We add a temporal smoothness regularization term to the optimization so that the SMPL pose parameters change gradually over time.
In the face region of the SMPL model, vertices are unevenly distributed to provide a detailed face. However, such a distribution does not match the almost uniform point distribution of the input raw scans.
Therefore, we re-mesh the face region of the SMPL model, and the skinning weights of new vertices are assigned from the nearest original vertices.
\subsection{Pose-dependent base mesh}
\label{sec:Method[PBBM]}
Suppose we have obtained a skinned body mesh $\mathcal{M}_{t}$ that approximates the input frame from the previous step.
To fit the SMPL model more tightly to the input point cloud, we combine pose-dependent local deformation with $\mathcal{M}_{t}$ (\Fig{overallProcess}a) and obtain a \textit{pose-dependent base mesh} $\mathcal{B}_{t}$:
\begin{equation}
\label{equ:pose_dependent_offsets_vert}
\mathbf{v}' = LBS_{\boldsymbol{\theta}}\left(\mathbf{v} + f_d\left(Q(\mathbf{v}),\overline{\boldsymbol{\theta}}(\mathbf{v})\right)\right),
\end{equation}
where $\mathbf{v}'$ is a vertex of $\mathcal{B}_{t}$.
In \Eq{pose_dependent_offsets_vert}, the displacements are applied to the vertices $\mathbf{v}$ of the T-posed SMPL mesh with the optimized shape parameter $\boldsymbol{\beta}$ before the articulated deformation is performed using LBS function. $f_d$ accounts for pose-dependent local deformation and is implemented as an MLP.
$f_d$ takes as the input a query point $Q(\cdot)$ and a per-point pose feature $\overline{\boldsymbol{\theta}}(\cdot)$ that are described in~\Sec{input}.
We observe that such local deformations are useful for handling large shape variations in the input scans.
To optimize $f_d$, we formulate an energy function as follows:
\begin{equation}
\label{equ:pose_dependent_offsets}
\begin{split}
E_{d}= \sum_t\sum_i\mu_{t,i}\, {d_{CD}\left(\mathbf{p}_{t,i},\mathcal{B}_t\right)} +\lambda_{r} E_{r},
\end{split}
\end{equation}
where $\mathbf{p}_{t,i}\in\mathcal{P}_t$ is a target point in the point cloud at frame $t$, $d_{CD}(A,B)$ evaluates Chamfer distance from $A$ to $B$, and $\mathcal{B}_t$ indicates the base mesh for which SMPL pose parameter $\boldsymbol{\theta}_t$ and pose-dependent local deformation $f_d$ are applied: $\mathcal{B}_t = \mathcal{B}_{\boldsymbol{\theta}_t} = \{\mathcal{V}'_{\boldsymbol{\theta}_t},\mathcal{F}\}$, where $\mathcal{V}'_{\boldsymbol{\theta}_t}=\{\mathbf{v}'_{t,k}\mid k \leq K\}$ and $\mathbf{v}'_{t,k}$ denotes the $k$-th vertex deformed by~\Eq{pose_dependent_offsets_vert} with $\boldsymbol{\theta}_t$.
$\lambda_{r}$ is a weight parameter.
In all equations, for simplicity, we omit averaging terms that divide the sum of Chamfer distances by the number of points.
In~\Eq{pose_dependent_offsets}, per-point weight $\mu_{t,i}$ is used to fully exploit geometric details captured in the input depth images.
Details of objects nearer to the camera are usually captured better than those of farther ones, so it is advantageous to put more weight on the input points closer to the camera.
We therefore use $\mu_{t,i} = e^{-c|z_{t,i}|}$, where $z_{t,i}$ is the depth value of the $i$-th point of $\mathcal{P}_t$ at frame $t$, and the parameter $c$ is set to 2.
If the input is a sequence of point clouds, where the distance to the camera is not available, we use $\mu_{t,i} = 1$.
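The weighted data term of \Eq{pose_dependent_offsets} can be illustrated as follows (a minimal SciPy sketch; we assume the base mesh surface has been densely sampled into points and that squared point-to-surface distances are used):
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def weighted_chamfer_term(scan_points, base_samples, z_depth=None, c=2.0):
    # One-sided Chamfer term: weighted squared distance from each scan
    # point to its nearest sample on the base mesh surface.
    dist, _ = cKDTree(base_samples).query(scan_points)
    mu = np.exp(-c * np.abs(z_depth)) if z_depth is not None \
         else np.ones(len(dist))
    return np.sum(mu * dist**2)
\end{verbatim}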
To avoid noisy artifacts, we use the Laplacian regularizer $E_{r}$ in \Eq{pose_dependent_offsets}, which is defined by
\begin{equation}
\label{equ:base_reg}
E_{r}=\sum_{t}\sum_{k}{\left|\mathbf{v}'_{t,k}-\frac{\sum_{j\in \mathcal{N}(k)}\mathbf{v}'_{t,j}}{|\mathcal{N}(k)|}\right|^2},
\end{equation}
where $\mathcal{N}(k)$ is the set of adjacent vertices of the $k$-th vertex. $E_{r}$ regularizes the shape of a base mesh $\mathcal{B}_{t}$ to be smooth.
During training, we construct the total energy by gathering the per-frame energies of randomly sampled frames and optimize the total energy using the Adam optimizer~\cite{Adam}.
Note that, in contrast to DSFN~\cite{burov2021dsfn},
we do not conduct any mesh subdivision at this stage for efficiency.
Indeed, in our experiments, the SMPL topology is sufficient to represent the coarse, smooth mesh needed for learning the neural surface Laplacian function described in the following section.
\section{Neural surface Laplacian function}
\label{sec:LapSurfFunc}
To fully exploit the fine-level details in the input point cloud, we construct a neural surface Laplacian function $f_l$ that is an MLP defined on the surface of a pose-dependent base mesh $\mathcal{B}_{t}$.
The input of $f_l$ is the same as for $f_d$, but the output is \emph{approximate Laplacian coordinates}, whereas $f_d$ produces a displacement vector.
\subsection{Function input}
\label{sec:input}
\paragraph{Query point}
In the inputs of functions $f_d$ and $f_l$,
we use the concept of query point to neutralize shape variations of the base meshes for different subjects and at different frames.
We define a query point $Q(\cdot)$ as a 3D point on the T-posed canonical neutral SMPL model $\mathcal{M}_C$.
Consider two base meshes $\mathcal{B}_{t1}$ and $\mathcal{B}_{t2}$ for the same subject at different frames and their $k$-th vertices $\mathbf{v}_{t1,k}$ and $\mathbf{v}_{t2,k}$, respectively.
3D positions of $\mathbf{v}_{t1,k}$ and $\mathbf{v}_{t2,k}$ may differ, but their query points $Q(\mathbf{v}_{t1,k})$ and $Q(\mathbf{v}_{t2,k})$ are defined to be the same as they share the same vertex index (\Fig{Explanation[input]}a).
Similarly, $Q(\cdot)$ is defined to be the same for the vertices of base meshes representing different subjects if the vertices have been deformed from the same vertex of $\mathcal{M}_C$.
In addition, $Q(\cdot)$ can be defined for any point on a base mesh other than vertices by using the barycentric coordinates in the mesh triangles.
Once determined, a query point is converted to a high-dimensional vector via positional encoding $\gamma$~\cite{mildenhall2020nerf}; we set the encoding dimension to ten in our experiments.
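A sketch of this encoding, assuming the standard sinusoidal formulation of \cite{mildenhall2020nerf} with ten frequency bands (our reading of the dimension above):
\begin{verbatim}
import numpy as np

def positional_encoding(p, num_freqs=10):
    # p: (3,) query point on the canonical SMPL surface.
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi  # geometric frequency bands
    scaled = freqs[:, None] * p[None, :]           # (num_freqs, 3)
    return np.concatenate([np.sin(scaled), np.cos(scaled)]).ravel()
\end{verbatim}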
\begin{figure}[t]
\begin{tabular}{c c}
\includegraphics[width=0.194\textwidth]{figs/Explanation/input/query132_newnew.pdf} &
\includegraphics[width=0.26\textwidth]{figs/Explanation/input/pose_feature168_new.pdf} \\
(a) Query point & (b) Pose feature
\end{tabular}
\vspace{-3mm}
\caption{Input parameters for neural surface Laplacian function. (a) Query points are defined on the canonical neutral SMPL model, so they can be shared among various subjects and different poses. (b) A pose feature is a masked SMPL pose parameter $\boldsymbol{\theta}$ to focus on the relevant joints for a body part.
The yellow and gray regions indicate active/inactive parts, respectively.}
\vspace{-2mm}
\label{fig:Explanation[input]}
\end{figure}
\paragraph{Pose feature}
For a query point, our two MLPs $f_d$ and $f_l$ should estimate \textit{pose-dependent} deformation and Laplacian coordinates, respectively.
To provide the pose-dependency, we could simply include the pose parameter $\boldsymbol{\theta}$ in the input of the MLPs.
However, a query point is not affected by every joint; it is strongly associated only with nearby joints. For example, the joint angle at the shoulder is irrelevant to local details on a leg.
To exploit the correlations between query points and joint angles in a pose parameter $\boldsymbol{\theta}$, we convert $\boldsymbol{\theta}$ to a per-point pose feature $\overline{\boldsymbol{\theta}}(\mathbf{v})$ that retains only relevant joint angles for a query point $\mathbf{v}$.
Inspired by Pose Map~\cite{saito2021scanimate}, we apply the joint association weight map $\mathbf{W}\in\mathbb{R}^{J\times J}$ and skinning weights $\mathbf{w}(\mathbf{v})\in\mathbb{R}^J$ of $\mathbf{v}$ to the original pose parameter $\boldsymbol{\theta}\in\mathbb{R}^{J\times3}$. Our pose feature used as the input of MLPs is defined by
\begin{equation}
\label{equ:pose_feature}
\overline{\boldsymbol{\theta}}(\mathbf{v}) = \lceil \mathrm{diag} \left(\mathbf{W}\,\mathbf{w}(\mathbf{v}) \right) \rceil {\boldsymbol{\theta}},
\end{equation}
where $\mathrm{diag}(\cdot)$ converts an input vector to a diagonal matrix, and $\lceil\cdot\rceil$ is an element-wise ceiling operation.
We manually define the weight map $\mathbf{W}$ to reflect our setting. For example, the details of the head are not correlated with any joint in our reconstructed model, whereas the details of a leg are affected by all nearby joints together. Accordingly, for the head joint, we set the association weight of every joint in $\mathbf{W}$ to zero, while for a joint around a leg, we set higher association weights for all nearby joints (\Fig{Explanation[input]}b).
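A per-point evaluation of \Eq{pose_feature} might look as follows (a minimal NumPy sketch; we assume $\mathbf{W}\,\mathbf{w}(\mathbf{v})$ lies in $[0,1]^J$ so that the ceiling yields a binary joint mask):
\begin{verbatim}
import numpy as np

def pose_feature(theta, W, skin_w):
    # theta: (J, 3) pose parameters; W: (J, J) joint association weights;
    # skin_w: (J,) skinning weights of the query point.
    mask = np.ceil(W @ skin_w)     # element-wise ceiling -> 0/1 per joint
    return np.diag(mask) @ theta   # zero out irrelevant joint angles
\end{verbatim}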
\subsection{Training pairs}
\label{sec:train_pairs}
To train the neural surface Laplacian function $f_l$, we calculate the ground-truth (GT) approximate Laplacian coordinates of the scan points and localize them to the corresponding query points on $\mathcal{M}_C$.
\paragraph{GT Laplacian coordinates approximation}
As discussed in Section \ref{sec:LapPCD}, we use an approximation method~\cite{liang2012geometric} for calculating Laplacian coordinates from scan points that do not have connectivity. We first fit a degree-two polynomial surface locally around each point in a moving least squares manner.
In our experiments, we use 20-30 neighbor points for the local surface fitting.
\Fig{Explanation[lap_approx]} shows an illustration.
Our approximate Laplacian coordinates are defined on a continuous domain, unlike in conventional mesh editing methods~\cite{lipman2004differential, sorkine2004laplacian} that formulate Laplacian coordinates in a discrete domain using mesh vertices. To apply our Laplacian coordinates to a discrete mesh, we take advantage of a pose-dependent base mesh
(\Sec{Method[Recon]}).
\begin{figure}[t]
\begin{tabular}{c c c}
\includegraphics[width=0.07\textwidth]{figs/Explanation/lap_approx/scan.png} &
\includegraphics[width=0.15\textwidth]{figs/Explanation/lap_approx/approx_tangent.pdf} &
\includegraphics[width=0.15\textwidth]{figs/Explanation/lap_approx/approx_poly.pdf} \\
(a) Scan & (b) Local coordinate system & (c) Quadric surface \\
\end{tabular}
\vspace{-2mm}
\caption{Illustration of GT Laplacian coordinates approximation. To compute Laplacian coordinates on a point cloud (a), we initially define a local coordinate system (b), and compute a quadratic polynomial that locally fits the point cloud. Then, the coefficients of the quadratic polynomial are used to obtain Laplacian coordinates. }
\vspace{-3mm}
\label{fig:Explanation[lap_approx]}
\end{figure}
\paragraph{Localization}
Although the base mesh $\mathcal{B}_t$ nearly fits the input point cloud, the points may not reside exactly on the mesh. To obtain the corresponding query point, we project each point $\mathbf{p}$ in the input point cloud $\mathcal{P}_t$ onto the base mesh $\mathcal{B}_t$:
$\overline{\mathbf{p}}=\Pi\left(\mathcal{B}_t,\mathbf{p}\right)$,
where $\Pi$ denotes the projection operation from a point to a pose-dependent base mesh
(\Fig{overallProcess}b).
The position of $\,\overline{\mathbf{p}}_{t,i}$ is determined by the barycentric coordinates in a triangle of $\mathcal{B}_t$, so we can easily compute the query point, skinning weights, and pose feature using the barycentric weights.
\subsection{Optimization}
\label{sec:optimization}
\paragraph{Neural surface Laplacian function}
For a given surface point $\overline{\mathbf{p}}$ and pose $\boldsymbol{\theta}$, the MLP $f_l$ estimates Laplacian coordinates:
\begin{equation}
\boldsymbol{\delta}'(\overline{\mathbf{p}})=LBS_{\boldsymbol{\theta}}\left(f_l\left(Q(\overline{\mathbf{p}}), \overline{\boldsymbol{\theta}}(\overline{\mathbf{p}})\right)\right).
\label{equ:pose_dependent_laplacian}
\end{equation}
The estimation is conducted in the canonical space and transformed into the posed space.
Working in the canonical space is essential because Laplacian coordinates are not invariant to rotation.
In~\Eq{pose_dependent_laplacian}, we discard the translation part of~\Eq{lbs} as Laplacian coordinates are a differential quantity.
The estimated $\boldsymbol{\delta}'$ is non-uniform Laplacian coordinates, as described in~\Sec{LapPCD}.
We train the MLP $f_l$ by formulating a per-point energy function:
\begin{equation}
E_{l}=\sum_{t}\sum_{i}\mu_{t,i}{\left|\boldsymbol{\delta}'_{t,i}-\boldsymbol{\delta}_{t,i} \right|^2},
\label{equ:PDD}
\end{equation}
where $\boldsymbol{\delta}'_{t,i}$ is the Laplacian coordinates of $\overline{\mathbf{p}}_{t,i}$ predicted by $f_l$ using~\Eq{pose_dependent_laplacian}, $\boldsymbol{\delta}_{t,i}$ is the GT approximate Laplacian coordinates of $\mathbf{p}_{t,i}$, and $\mu_{t,i}$ is the weight used in~\Eq{pose_dependent_offsets}.
During training, we construct the total energy by summing the per-point energies of randomly sampled input points and optimize the total energy using the Adam optimizer~\cite{Adam}.
\section{Laplacian Reconstruction}
\label{sec:Method[Recon]}
Once the training is over, for a given pose $\boldsymbol{\theta}$, we can obtain the pose-dependent base mesh $\mathcal{B}_{\boldsymbol{\theta}}$ and the Laplacian coordinates for the vertices of $\mathcal{B}_{\boldsymbol{\theta}}$. In the reconstruction step, we aggregate the estimated Laplacian coordinates to restore a whole body model.
\paragraph{Subdivision}
For detailed surface reconstruction,
we subdivide the base mesh $\mathcal{B}_{\boldsymbol{\theta}}$, as the SMPL model does not have enough vertices to represent fine details.
The new vertices of the subdivided mesh $\mathcal{B}$ reside at the midpoints of the edges of $\mathcal{B}_{\boldsymbol{\theta}}$. We conduct the subdivision twice, so the number of triangles increases by a factor of 16.
As a result, we have the subdivided pose-dependent base mesh $\mathcal{B}=\{\mathcal{U},\mathcal{F}'\}$,
where $\mathcal{U}=\{\mathbf{u}_{k}\mid k \leq K'\}$ and $K'$ is the number of vertices of $\mathcal{B}$.
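One round of this midpoint subdivision can be sketched as follows (a minimal NumPy implementation; applied twice, it multiplies the triangle count by 16):
\begin{verbatim}
import numpy as np

def midpoint_subdivide(V, F):
    # V: (K, 3) vertex positions; F: (T, 3) triangle vertex indices.
    V = [np.asarray(v, dtype=float) for v in V]
    edge_mid = {}
    def mid(i, j):
        key = (min(i, j), max(i, j))
        if key not in edge_mid:    # create each edge midpoint only once
            edge_mid[key] = len(V)
            V.append((V[i] + V[j]) / 2.0)
        return edge_mid[key]
    F_new = []
    for a, b, c in F:              # split each triangle into four
        ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
        F_new += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return np.asarray(V), np.asarray(F_new)
\end{verbatim}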
\paragraph{Reconstruction}
\label{sec:reconstruction}
Using the base mesh $\mathcal{B}$, as illustrated in \Fig{overallProcess}c, we reconstruct a detailed mesh $\mathcal{S}=\left(\mathcal{U}',\mathcal{F}'\right)$, where $\mathcal{U}'=\{\mathbf{u}'_k\mid k\leq K'\}$, by minimizing the following error functional~\cite{lipman2004differential, sorkine2004laplacian}:
\begin{equation}
E(\mathcal{U}')=\sum_{k}{\left\| \Delta_{\mathcal{B}}\left(\mathbf{u}'_k\right)-\boldsymbol{\delta}'\left(\mathbf{u}_k\right)\right\|^2}+\sum_{k\in \:anchor}{\left\| \mathbf{u}'_k - \mathbf{u}_k \right\|^2},
\label{equ:LapRecon}
\end{equation}
where
$\Delta_{\mathcal{B}}$ is the Laplace-Beltrami operator of $\mathcal{B}$ defined in~\Eq{cotLap},
$\boldsymbol{\delta}'(\mathbf{u}_k)$ is the non-uniform Laplacian coordinates of $\mathbf{u}_k$ predicted by the neural surface Laplacian function $f_l$
using~\Eq{pose_dependent_laplacian},
and $anchor$ is a set of indices of the constraint vertices on $\mathcal{B}$, which play the role of boundary conditions.
In \Eq{LapRecon}, the first term preserves desirable Laplacian coordinates on the whole surface, and the second term constrains the global position of the final shape using the anchor points.
\paragraph{Laplace-Beltrami operator}
To compute the Laplace-Beltrami operator $\Delta_{\mathcal{B}}$ using~\Eq{cotLap},
we need the angles $\alpha^1_{k,j}$ and $\alpha^2_{k,j}$ for each edge connecting vertices $\mathbf{u}_k$ and $\mathbf{u}_j$ in $\mathcal{B}$.
For efficient computation, we use uniform angle $\alpha=\alpha^1=\alpha^2=\frac{\pi}{2}-\frac{\pi}{|\mathcal{N}(k)|}$ for all edges of $\mathcal{B}$.
Then, \Eq{cotLap} is reduced to
\begin{equation}
\label{equ:approxLB}
\begin{split}
\Delta_{\mathcal{B}} \left(\mathbf{u}'_k\right)&=\frac{\cot(\alpha)}{a_k}
\left(|\mathcal{N}(k)|\;\mathbf{u}'_k-\sum_{j\in \mathcal{N}(k)}{\mathbf{u}'_j}\right),
\end{split}
\end{equation}
where $a_k$ is the Voronoi area of a vertex $\mathbf{u}_k$ of $\mathcal{B}$.
As shown in the experimental results (\Sec{exp}), this approximation successfully reconstructs $\mathcal{S}$ from the predicted non-uniform Laplacian coordinates.
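This simplification reduces \Eq{approxLB} to a few lines (a minimal NumPy sketch):
\begin{verbatim}
import numpy as np

def uniform_angle_laplacian(u_k, neighbors, voronoi_area):
    # u_k: (3,) vertex position; neighbors: (n, 3) adjacent vertices.
    n = len(neighbors)
    alpha = np.pi / 2.0 - np.pi / n   # uniform angle for all edges
    return (1.0 / np.tan(alpha)) / voronoi_area \
           * (n * u_k - neighbors.sum(axis=0))
\end{verbatim}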
\paragraph{Anchor points}
Theoretically, we are able to recover the original mesh from the Laplacian coordinates of vertices by fixing one vertex position as an anchor and solving a linear system~\cite{sorkine2004laplacian}.
However, in our setting, one anchor may not suffice for reconstructing an accurate final surface $\mathcal{S}$, because the Laplacian coordinates are approximate values predicted by the neural surface Laplacian function $f_l$, not values computed directly from $\mathcal{S}$.
To improve the reconstruction accuracy, we use a sufficient number of anchor points as the boundary condition.
We select a set of $n$ vertex indices as the $anchor$ in \Eq{LapRecon} in advance by uniformly sampling~\cite{yuksel2015sample} vertices from the canonical neutral SMPL model $\mathcal{M}_C$, where $n=800$ in our experiments.
\Fig{ablation[anchor_points]} shows reconstruction results with varying numbers of anchor points.
\paragraph{Reliable anchors}
Since anchor points are fixed for solving~\Eq{LapRecon}, they should be as close to the input point cloud as possible. To achieve this property, we set an additional energy term and optimize it along with \Eq{pose_dependent_offsets} when we train the pose-dependent local deformation function $f_d$:
\begin{equation}
\label{equ:reliable_anchor}
\begin{split}
E_{a} = \lambda_{a}\sum_t\sum_{k \in \;anchor}d_{CD}\left(\mathbf{v}'_{t,k},\mathcal{P}_t\right),
\end{split}
\end{equation}
where $d_{CD}$ measures a vertex-to-point cloud Chamfer distance, $\mathcal{P}_t$ is the input point cloud at frame $t$, and $\lambda_{a}$ is a weight parameter.
When the input is a depth map sequence, we apply this term only to the anchors visible from the camera viewpoint.
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{figs/Explanation/anchor.pdf}
\vspace{-2mm}
\caption{Effect of the number of anchor points. On the left, the red dot represents a single anchor point used for reconstruction. Too few anchor points ($n\leq500$) introduce distortions in the reconstruction results.
}
\label{fig:ablation[anchor_points]}
\end{figure}
\section{Results}
\label{sec:exp}
\subsection{Experiment details}
\paragraph{Implementation and training details}
Pose-dependent local deformation function $f_d$ and neural surface Laplacian function $f_l$ are represented as 5- and 3-layer MLPs with ReLU activation, and 600 and 800 feature channels are used per intermediate layer, respectively.
In our experiments, we set $\lambda_r=0.1\sim 2$ and $\lambda_a=2$.
We optimize $f_d$ and $f_l$ with a learning rate of $1.0\times 10^{-3}$, batch sizes of $10$ frames and 5000 points, and 300 and 100 epochs, respectively.
The training time is proportional to the numbers of points and frames in the scan data. For instance, the point cloud sequence used for reconstruction in~\Fig{ablation[pose_dependent_details]} consists of 39k$\sim$56k points per frame with 200 frames. In that case, it takes about 20 and 30 minutes to train MLPs $f_d$ and $f_l$, respectively.
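The two networks can be sketched as plain ReLU stacks (a PyTorch sketch under our reading of the sizes above; the exact layer layout is an assumption):
\begin{verbatim}
import torch.nn as nn

def make_mlp(in_dim, out_dim, num_layers, width):
    # f_d: num_layers=5, width=600; f_l: num_layers=3, width=800.
    layers, d = [], in_dim
    for _ in range(num_layers - 1):
        layers += [nn.Linear(d, width), nn.ReLU()]
        d = width
    layers.append(nn.Linear(d, out_dim))  # 3-D displacement or Laplacian
    return nn.Sequential(*layers)
\end{verbatim}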
\paragraph{Datasets}
We evaluate the results of our method qualitatively and quantitatively on single-view and full-body point cloud sequences.
We capture RGB-D sequences using an Azure Kinect DK~\cite{KinectAzure} with 1280p resolution for color images and $1024 \times 1024$ for depth images. Additionally, we use RGB-D sequences that are provided in DSFN~\cite{burov2021dsfn}. RGB-D sequences used for experiments contain 200 to 500 frames. We also evaluate our method on full-body point clouds using synthetic datasets: CAFE~\cite{ma2020cape}, and Resynth~\cite{POP:ICCV:2021}.
We show the reconstruction result from a 2.5D depth point cloud in \textit{mint color} and the result of a full-body 3D point cloud in \textit{olive color}.
\paragraph{Timings}
In the reconstruction step, for the example in~\Fig{ablation[pose_dependent_details]}, it takes 3 ms and 35 ms per frame to evaluate the MLPs $f_d$ and $f_l$, respectively.
For solving the sparse linear system to obtain the final detailed mesh $\mathcal{S}$ that minimizes \Eq{LapRecon},
it takes 4 seconds for the matrix pre-factorization step and 130 ms to obtain the solution for each frame when the mesh contains 113k vertices.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{figs/Experiments/Ablation/onlyOffset/vsOffset_new2_error.pdf}\\
\vspace{-2mm}
\caption{Comparison with using a regularization-free displacement function on the base mesh. Although the surface displacement function is trained in the same condition as our neural surface Laplacian function, it cannot capture structure details (middle). In contrast, our results (right) successfully restore details from the input scan (left).
}
\label{fig:ablation[pose_dependent_details]}
\end{figure}
\subsection{Analysis}
\paragraph{Effect of Laplacian coordinates}
Our approach using Laplacian coordinates preserves local geometric details better than an approach using absolute coordinates of mesh vertices. To verify this claim, we conduct an experiment in which the neural surface Laplacian function is changed to estimate displacements instead of Laplacian coordinates. This surface function then mimics a pose-dependent displacement map.
To optimize the surface displacement function, we use the same energy function as \Eq{PDD} with the change of $\boldsymbol{\delta}$ to the displacement between surface point $\overline{\mathbf{p}}$ and scan point $\mathbf{p}$.
There is no regularization term in \Eq{PDD}, and the maximal capability of the displacement function is exploited for encoding surface details.
Nevertheless, the resulting displacement function cannot properly capture structure details, producing rather noisy surfaces (\Fig{ablation[pose_dependent_details]} middle).
In contrast, our results using Laplacian coordinates are capable of capturing structure details (\Fig{ablation[pose_dependent_details]} right) from the input point clouds (\Fig{ablation[pose_dependent_details]} left).
In~\Fig{ablation[pose_dependent_details]}, we include quantitative results (Chamfer distance $d_{CD}$ and normal consistency $NC$) on our dataset. The normal consistency $NC$ is computed using the inner products of the ground-truth normals at input points and the normals of the corresponding points on the reconstructed surface. Our result shows a slightly higher Chamfer distance error than the displacement function, but produces more visually pleasing results.
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{figs/Experiments/Ablation/onlyPoseblend/vspose_new.pdf} \\
\vspace{-3mm}
\caption{Pose-dependent local deformation (Sec.~\ref{sec:Method[PBBM]}) applied to the subdivided base mesh used for our final reconstruction. (left) We train the deformation function with varying regularization weights. It is hard to find the best weight for regularization as the structures and sizes of shape details are spatially varying in the raw scan (top right). (bottom right) The reconstruction result from our complete pipeline restores shape details with appropriate structures and sizes.
}
\vspace{-6mm}
\label{fig:ablation[onlyposeblend]}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{figs/Experiments/Ablation/sel_weight/selective.pdf} \\
\vspace{-3mm}
\caption{Effect of selective weighting. Without selective weighting (c), surface functions may learn geometric information from low-quality frames (a). Our selective weighting (d) encourages surface functions to learn sharp geometric details (b) in the input depth map sequence.
}
\label{fig:ablation[selective_weighting]}
\end{figure}
\begin{figure}[t]
\centering
\begin{tabular}{c}
\includegraphics[width=0.45\textwidth]{figs/Experiments/Comparison/DSFN/DSFN_annadejan.pdf} \\
\end{tabular}
\vspace{-2mm}
\caption{Comparison with DSFN~\cite{burov2021dsfn} on DSFN real dataset captured by a single RGB-D camera.
}
\vspace{-3mm}
\label{fig:Comparison[DSFN1]}
\end{figure}
\paragraph{Unclear best weight for smoothness regularization on local deformation}
We conduct an experiment that learns the local deformation defined in Sec.~\ref{sec:Method[PBBM]} for a subdivided base mesh that has the same topology as our final mesh. The results (\Fig{ablation[onlyposeblend]} left) show that estimating absolute coordinates is sensitive to the regularization weight $\lambda_r$ defined in~\Eq{pose_dependent_offsets}. With a large value of $\lambda_r$, shape details are smoothed out in the reconstructed model. With a small $\lambda_r$, overly small, noisy details are reconstructed. It is thus hard to find the best regularization weight to handle the spatially varying structures and sizes of shape details in the input scan (\Fig{ablation[onlyposeblend]} top right). In contrast, our complete pipeline, which learns a differential representation, performs better without a regularization term (\Fig{ablation[onlyposeblend]} bottom right). Note that all meshes in \Fig{ablation[onlyposeblend]} have the same number of vertices.
\paragraph{Effect of selective weighting}
We use selective weighting $\mu_{t,i}$ in \Eq{pose_dependent_offsets} and \Eq{PDD} to obtain the best details from the input depth map sequence. This weighting is especially effective in the face region (\Fig{ablation[selective_weighting]}).
\subsection{Comparisons}
We compare our LaplacianFusion with recent representations developed for \textit{controllable} 3D clothed human reconstruction. The compared representations are based on mesh~\cite{burov2021dsfn}, signed distance function (SDF)~\cite{wang2021metaavatar}, and point cloud~\cite{POP:ICCV:2021}.
\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{figs/Experiments/Comparison/DSFN/00114_new.pdf} \\
\vspace{-3mm}
\caption{Comparison with DSFN~\cite{burov2021dsfn} on BUFF dataset~\cite{Zhang_2017_CVPR}. Note that the training input is not a ground truth sequence (bottom row) but a rendered RGB-D sequence with $640\times 480$ resolution. The red and blue boxes highlight qualitative differences.
}
\label{fig:Comparison[DSFN]}
\end{figure}
\paragraph{Comparison with DSFN (Mesh)}
DSFN~\cite{burov2021dsfn} utilizes an explicit mesh to reconstruct a controllable 3D clothed human model from a RGB-D sequence. It represents local details with an offset surface function (dense local deformation), but its spatial regularization smooths out geometric details. Since the source code of DSFN is not published, we compared our results with DSFN on the dataset provided by the authors.
Their real data inputs are captured using an Azure Kinect DK~\cite{KinectAzure} with $1920 \times 1080$ pix.\ for color images and $640 \times 576$ pix.\ for depth images. \Fig{Comparison[DSFN1]} shows results on the real dataset provided by DSFN, and our approach produces more plausible details than DSFN. Note that, in this dataset, the captured human stands away from the camera, so the input point clouds lack high-frequency details.
For quantitative evaluation, DSFN uses synthetic $640 \times 480$ pix.\ RGB-D sequences rendered from BUFF~\cite{Zhang_2017_CVPR}, where the camera moves around a subject to cover the whole body.
\Fig{Comparison[DSFN]} shows the comparison results, and our method preserves the details of the input better.
\Tbl{DSFN} shows the measured IoU, Chamfer distance ($d_{CD}$), and normal consistency ($NC$) scores. Our approach rates better than DSFN on all three scores.
\begin{table}[t]
\centering
\caption{Quantitative comparisons with DSFN~\cite{burov2021dsfn} using a sequence of the BUFF dataset~\cite{Zhang_2017_CVPR}.}
\resizebox{.47\textwidth}{!}{
\begin{tabular}{c| c| c| c}
\hline
Method & IoU $\uparrow$ & $d_{CD}$(cm) $\downarrow$ & $NC$ $\uparrow$\\
\hline
DSFN~\shortcite{burov2021dsfn} & 0.832 & 1.56 & 0.917 \\
\hline
Ours (same vertex density as DSFN) & 0.863 & 0.99 & 0.933 \\
\hline
Ours & \textbf{0.871} & \textbf{0.94} & \textbf{0.941} \\
\hline
\end{tabular}
}
\vspace{-2mm}
\label{tbl:DSFN}
\end{table}
\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{figs/Experiments/Comparison/Meta/meta1_new.pdf}\\
\includegraphics[width=0.45\textwidth]{figs/Experiments/Comparison/Meta/meta2_newnew.pdf}\\
\vspace{-3mm}
\caption{Comparison with MetaAvatar~\cite{wang2021metaavatar} on CAPE dataset~\cite{ma2020cape}. (From left to right) raw 3D point clouds, results of MetaAvatar, and our results.
}
\vspace{-3mm}
\label{fig:Comparison[Meta]}
\end{figure}
\paragraph{Comparison with MetaAvatar (SDF)}
Wang et al.~\shortcite{wang2021metaavatar} proposed MetaAvatar, which represents shape as an SDF defined in 3D space. The resulting shape tends to be over-smoothed due to the limited number of sampling points at training time. MetaAvatar utilizes the given SMPL parameters for reconstruction, and we use them in the comparison.
MetaAvatar evaluates the reconstruction quality on unseen poses to assess interpolation capability. Similarly, we sample every 4th frame of the CAPE dataset~\cite{ma2020cape} for training and measure the reconstruction accuracy on every second frame, excluding the training frames. The results are shown in \Fig{Comparison[Meta]}.
In~\Tbl{CAPE}, we show quantitative results; the scores of our approach rate better than those of MetaAvatar.
\begin{figure}[t]
\includegraphics[width=0.46\textwidth]{figs/Experiments/Comparison/POP/pop1_new.pdf}\\
\includegraphics[width=0.46\textwidth]{figs/Experiments/Comparison/POP/pop2_new.pdf}\\
\vspace{-2mm}
\caption{
Comparison with two different settings of PoP~\cite{POP:ICCV:2021} on the Resynth dataset~\cite{POP:ICCV:2021}. (From left to right) raw 3D point clouds, results of PoP (coarse and dense), and our results. Although our mesh has only 113k vertices, it sufficiently preserves geometric details with the aid of Laplacian coordinates.
}
\label{fig:Comparison[POP]}
\end{figure}
\begin{table}[t]
\centering
\caption{Comparisons with MetaAvatar \& PoP on CAPE~\cite{ma2020cape}.}
\begin{tabular}{c| c| c}
\hline
Method & $d_{CD}$(cm) $\downarrow$ & $NC$ $\uparrow$ \\
\hline
MetaAvatar~\shortcite{wang2021metaavatar} & 0.47 & 0.946\\
\hline
PoP~\shortcite{POP:ICCV:2021} (50k) & \textbf{0.32} & \textbf{0.977} \\
\hline
Ours & 0.45 & 0.959 \\
\hline
\end{tabular}
\vspace{-2mm}
\label{tbl:CAPE}
\end{table}
\paragraph{Comparison with PoP (Point cloud)}
Ma et al.~\shortcite{POP:ICCV:2021} utilize a point cloud representation to deal with topological changes efficiently. We evaluate their method on the CAPE dataset in the same configuration as used for MetaAvatar in \Tbl{CAPE}.
In addition, we compare our results with PoP on the Resynth dataset~\cite{POP:ICCV:2021}, which has various cloth types with abundant details. However, the Resynth dataset includes human models wearing skirts, which cannot be properly handled by our framework (\Fig{Failure}), so we do not conduct a quantitative evaluation using all subjects. Instead, we select five subjects not wearing skirts to compare our results with PoP quantitatively. We then split the training and test datasets in the same manner as for the CAPE dataset.
The original implementation of PoP queries the feature tensor with a $256\times 256$ UV map, resulting in a point cloud with 50k points. Since these points are insufficient for representing details on the Resynth dataset, we modified the code to adopt a $512\times 512$ UV map and obtain 191k points. \Tbl{carla} shows quantitative comparisons with the above two settings, where our reconstruction is comparable with the original PoP. \Fig{Comparison[POP]} presents qualitative results, where our mesh is more plausible than that of the original PoP and comparable to that of dense PoP.
In~\Fig{Comparison[POP_back]}, the mesh obtained from a reconstructed point cloud of PoP contains a vertical seam line on the back because PoP uses UV coordinates for shape inference. In contrast, our approach reconstructs clear wrinkles without such artifacts.
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{figs/Experiments/Comparison/POP/PoP_back_new.pdf}\\
\vspace{-2mm}
\caption{Seam artifacts of PoP~\cite{POP:ICCV:2021}. (From left to right) raw 3D point cloud, result of PoP, mesh result of PoP obtained using screened Poisson reconstruction~\cite{kazhdan2013screened}, and our result. In the red box, we highlight seam artifacts of PoP.
}
\label{fig:Comparison[POP_back]}
\end{figure}
\begin{table}[t]
\centering
\caption{Comparison with PoP on the Resynth dataset~\cite{POP:ICCV:2021}.}
\begin{tabular}{c| c| c| c}
\hline
\multicolumn{2}{c|}{Method} & $d_{CD}$(cm) $\downarrow$ & $NC$ $\uparrow$ \\
\hline
\multirow{2}{*}{PoP~\shortcite{POP:ICCV:2021}} & (50k) & 0.43 & \textbf{0.970}\\
\cline{2-4}
& (153k) & \textbf{0.33} & 0.968\\
\hline
\multicolumn{2}{c|}{Ours} & 0.41 & 0.964\\
\hline
\end{tabular}
\vspace{-2mm}
\label{tbl:carla}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{figs/Experiments/Application/transfer_new.pdf} \\
\includegraphics[width=0.47\textwidth]{figs/Experiments/Application/enhance_new.pdf}\\
\vspace{-3mm}
\caption{Detail transfer (top) and smoothing \& sharpening (bottom). (top) In our method, the optimized neural surface Laplacian function can be applied to another subject. (bottom) The amount of detail can be easily adjusted by scaling the Laplacian coordinates.}
\label{fig:Application[enhancetransfer]}
\vspace{-3mm}
\end{figure}
\subsection{Applications}
\label{sec:application}
\paragraph{Detail transfer}
Our neural surface Laplacian function encodes detailed shape information as the Laplacian coordinates defined at query points in a common domain. As a result, we can easily transfer shape details to other models by evaluating the original surface Laplacian function on the target pose-dependent base mesh. \Fig{Application[enhancetransfer]} shows an example.
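Schematically (the source/target superscripts below are our notation, introduced only for illustration), the transferred Laplacian coordinate at a query point $\mathbf{x}$ on the target pose-dependent base mesh is
\[
\boldsymbol{\delta}^{\mathrm{tgt}}(\mathbf{x}) = f_l^{\mathrm{src}}\!\left(\mathbf{x}; \boldsymbol{\theta}^{\mathrm{tgt}}\right),
\]
after which the target model is reconstructed from these coordinates in the usual Laplacian reconstruction step (\Sec{reconstruction}).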
\paragraph{Sharpening \& Smoothing}
Laplacian coordinates predicted by a neural surface Laplacian function in the Laplacian reconstruction step (\Sec{reconstruction}) can be scaled to change the amount of reconstructed shape details.
Multiplying the predicted Laplacian coordinates by a factor greater than 1.0 sharpens the details, while a factor smaller than 1.0 smooths them. \Fig{Application[enhancetransfer]} shows examples.
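As a minimal sketch of this control, assuming for illustration that the reconstruction is posed as a least-squares Laplacian system (the scale factor $s$ and this particular formulation are ours, not a definitive statement of the implementation):
\[
V^{\ast} = \arg\min_{V}\; \left\lVert L\,V - s\,\boldsymbol{\delta} \right\rVert_2^2 ,
\]
where $L$ is the Laplacian operator of the subdivided base mesh, $\boldsymbol{\delta}$ stacks the predicted Laplacian coordinates, and $s > 1$ sharpens while $s < 1$ smooths.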
\begin{figure}[t]
\includegraphics[width=0.47\textwidth]{figs/Experiments/Application/motion/motion.pdf} \\
\vspace{-2mm}
\caption{Animating examples. See our supplementary video.}
\vspace{-3mm}
\label{fig:Application[motion]}
\end{figure}
\paragraph{Animating}
\label{sec:animate}
The pose parameters used for evaluating surface functions $f_d$ and $f_l$ can be arbitrary.
In the case of reconstructing a scanned animation sequence, we would use the pose parameters $\boldsymbol{\theta}_t$ estimated from the input scans (\Sec{basemesh}).
On the other hand, once the functions $f_d$ and $f_l$ have been optimized, any parameters $\boldsymbol{\theta}$ other than $\boldsymbol{\theta}_t$ can be used to produce unseen poses of the subject.
In that case, the validity of the shape details for unseen poses is not guaranteed, but our experiments generate reasonable results.
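Schematically (the operator notation here is ours, for illustration only), the animated model for an arbitrary pose $\boldsymbol{\theta}$ is obtained by evaluating both optimized functions at that pose:
\[
\mathcal{M}(\boldsymbol{\theta}) = \mathrm{Reconstruct}\!\left(f_d(\cdot;\boldsymbol{\theta}),\, f_l(\cdot;\boldsymbol{\theta})\right),
\]
where $\mathrm{Reconstruct}$ denotes the Laplacian reconstruction step of \Sec{reconstruction}.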
In \Fig{Application[motion]}, we optimized the functions $f_d$ and $f_l$ on BUFF and CAPE datasets and adopted synthetic motions from AIST++~\cite{li2021learn}.
The results show natural variations of shape details depending on the motions.
\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{figs/Experiments/Application/texture/000.pdf} \\
\includegraphics[width=0.45\textwidth]{figs/Experiments/Application/texture/111.pdf} \\
\includegraphics[width=0.45\textwidth]{figs/Experiments/Application/texture/222.pdf} \\
\includegraphics[width=0.45\textwidth]{figs/Experiments/Application/texture/333.pdf} \\
\includegraphics[width=0.45\textwidth]{figs/Experiments/Application/texture/444.pdf} \\
\includegraphics[width=0.45\textwidth]{figs/Experiments/Application/texture/555.pdf} \\
\includegraphics[width=0.45\textwidth]{figs/Experiments/Application/texture/666.pdf} \\
\vspace{-2mm}
\caption{Texture mapping examples. (left) texture sources from RenderPeople models~\cite{renderpeople}. (top) our reconstructed models. (others) texture transfer results. From the second column on, each column shows the same shape with various textures. Since we use a fixed-topology mesh, all our meshes share a common UV parametric domain. As a result, a single texture map can be shared among different reconstructed models without manual annotation.}
\label{fig:Application[texture]}
\vspace{-3mm}
\end{figure}
\paragraph{Texture mapping}
In our LaplacianFusion framework, all reconstructed models have the same fixed topology resulting from the subdivision applied to the SMPL model.
Then, we can build a common UV parametric domain for texture mapping of any reconstructed model.
In \Fig{Application[texture]}, we initially transfer a texture from a RenderPeople model~\cite{renderpeople} to the T-posed canonical neutral SMPL model $\mathcal{M}_C$ by using deep virtual markers~\cite{kim2021deep}. Then, the texture of $\mathcal{M}_C$ can be shared with different reconstructed models through the common UV parametric domain.
\section{Conclusions}
We presented a novel framework, \textit{LaplacianFusion}, that can reconstruct a detailed and controllable 3D clothed human body model from a 3D point cloud sequence.
The key to our framework is Laplacian coordinates, which can directly represent local shape variations.
We introduce a {\em neural surface Laplacian function} that uses Laplacian coordinates for encoding shape details from raw scans and then predicting desirable shape details on a pose-dependent base mesh.
The final model is reconstructed by integrating the Laplacian coordinates predicted on a subdivided base mesh.
Our approach can also be utilized for other applications, such as detail transfer.
\begin{figure}[t]
\includegraphics[width=0.35\textwidth]{figs/Experiments/Failure_case.pdf} \\
\vspace{-3mm}
\caption{Failure case. (left) raw 3D point cloud of a human model wearing a skirt in the Resynth dataset~\cite{POP:ICCV:2021}. (right) our reconstruction result. Our pose-dependent base mesh cannot represent a seamless surface covering both legs (red circles) because its topology originates from the T-pose of the SMPL model.}
\vspace{-3mm}
\label{fig:Failure}
\end{figure}
\paragraph{Limitations and future work}
Since our framework uses a fixed topology mesh, we cannot cover topological changes, such as opening a zipper.
In addition, our base mesh is initialized from a skinned body shape, so it is hard to deal with loose clothes, such as skirts (\Fig{Failure}).
Our framework relies on registration of the SMPL model to the input scan sequence, and the reconstruction quality is affected by the registration accuracy.
Currently, we use simple mid-point subdivision to increase the number of vertices in a base mesh, but a data-driven subdivision approach~\cite{liu2020neural} could be considered.
Our neural surface Laplacian function is trained for one subject, and generalizing the function to handle other subjects remains future work.
We also plan to generalize our framework for non-human objects.
\section{Introduction}
\IEEEPARstart{W}{elcome} to the updated and simplified documentation for using the IEEEtran \LaTeX \ class file. The IEEE has examined hundreds of author submissions using this package to help formulate this easy-to-follow guide. We will cover the most commonly used elements of a journal article. For less common elements we will refer back to the ``IEEEtran\_HOWTO.pdf''.
This document applies to version 1.8b of IEEEtran.
The IEEEtran template package contains the following example files:
\begin{list}{}{}
\item{bare\_jrnl.tex}
\item{bare\_conf.tex}
\item{bare\_jrnl\_compsoc.tex}
\item{bare\_conf\_compsoc.tex}
\item{bare\_jrnl\_comsoc.tex}
\end{list}
These are ``bare bones" templates to quickly understand the document structure.
It is assumed that the reader has a basic working knowledge of \LaTeX. Those who are new to \LaTeX \ are encouraged to read Tobias Oetiker's ``The Not So Short Introduction to \LaTeX '', available at: \url{http://tug.ctan.org/info/lshort/english/lshort.pdf} which provides an overview of working with \LaTeX.
\section{The Design, Intent and \\ Limitations of the Templates}
\noindent The templates are intended to {\bf{approximate the final look and page length of the articles/papers}}. Therefore, {\bf{they are NOT intended to be the final produced work that is displayed in print or on IEEEXplore\textsuperscript{\textregistered}}}. They will help to give the authors an approximation of the number of pages that will be in the final version. The structure of the \LaTeX\ files, as designed, enables easy conversion to XML for the composition systems used by the IEEE's outsource vendors. The XML files are used to produce the final print/IEEEXplore\textsuperscript{\textregistered} pdf and then converted to HTML for IEEEXplore\textsuperscript{\textregistered}. Have you looked at your article/paper in the HTML version?
\section{\LaTeX \ Distributions: Where to Get Them}
\noindent IEEE recommends using the distribution from the \TeX\ User Group at \url{http://www.tug.org}. You can join TUG and obtain a DVD distribution or download it for free from the links provided on their website: \url{http://www.tug.org/texlive/}. The DVD includes distributions for Windows, Mac OS X, and Linux operating systems.
\section{Where to get the IEEEtran Templates}
\noindent The {\bf{IEEE Template Selector}} will always have the most up-to-date versions of the \LaTeX\ and MSWord templates. Please see: \url{https://template-selector.ieee.org/} and follow the steps to find the correct template for your intended publication. Many publications use the IEEEtran \LaTeX\ templates; however, some publications have their own special templates. Many of these are based on IEEEtran, but may have special instructions that vary slightly from those in this document.
\section{Where to get \LaTeX \ help - user groups}
\noindent The following on-line groups are very helpful to beginning and experienced \LaTeX\ users. A search through their archives can provide many answers to common questions.
\begin{list}{}{}
\item{\url{http://www.latex-community.org/}}
\item{\url{https://tex.stackexchange.com/} }
\end{list}
\section{Document Class Options in IEEEtran}
\noindent At the beginning of your \LaTeX\ file you will need to establish what type of publication style you intend to use. The following list shows appropriate documentclass options for each of the types covered by IEEEtran.
\begin{list}{}{}
\item{Regular Journal Article}
\item{{\tt{$\backslash$documentclass[journal]{IEEEtran}}}}\\
\item{{Conference Paper}}
\item{{\tt{$\backslash$documentclass[conference]{IEEEtran}}}}\\
\item{Computer Society Journal Article}
\item{{\tt{$\backslash$documentclass[10pt,journal,compsoc]{IEEEtran}}}}\\
\item{Computer Society Conference Paper}
\item{{\tt{$\backslash$documentclass[conference,compsoc]{IEEEtran}}}}\\
\item{{Communications Society Journal Article}}
\item{{\tt{$\backslash$documentclass[journal,comsoc]{IEEEtran}}}}\\
\item{{Brief, Correspondence or Technote}}
\item{{\tt{$\backslash$documentclass[9pt,technote]{IEEEtran}}}}
\end{list}
There are other options available for each of these when submitting for peer review or other special requirements. IEEE recommends composing your article in the base 2-column format to make sure all your equations, tables, and graphics will fit the final 2-column format. Please refer to the document ``IEEEtran\_HOWTO.pdf'' for more information on settings for peer review submission if required by your EIC.
\section{How to Create Common Front Matter}
\noindent The following sections describe general coding for these common elements. Computer Society publications and Conferences may have their own special variations and will be noted below.
\subsection{Paper Title}
\noindent The title of your paper is coded as:
\begin{verbatim}
\title{The Title of Your Paper}
\end{verbatim}
\noindent Please try to avoid the use of math or chemical formulas in your title if possible.
\subsection{Author Names and Affiliations}
\noindent The author section should be coded as follows:
\begin{verbatim}
\author{Masahito Hayashi
\IEEEmembership{Fellow, IEEE}, Masaki Owari
\thanks{M. Hayashi is with Graduate School
of Mathematics, Nagoya University, Nagoya,
Japan}
\thanks{M. Owari is with the Faculty of
Informatics, Shizuoka University,
Hamamatsu, Shizuoka, Japan.}
}
\end{verbatim}
Be sure to use the $\backslash$IEEEmembership command to identify IEEE membership status.
Please see the ``IEEEtran\_HOWTO.pdf'' for specific information on coding authors for Conferences and Computer Society publications. Note that the closing curly brace for the author group comes at the end of the thanks group. This will prevent you from creating a blank first page.
\subsection{Running Heads}
\noindent The running heads are declared by using the $\backslash${\tt{markboth}} command. There are two arguments to this command: the first contains the journal name information and the second contains the author names and paper title.
\begin{verbatim}
\markboth{Journal of Quantum Electronics,
Vol. 1, No. 1, January 2021}
{Author1, Author2,
\MakeLowercase{\textit{(et al.)}:
Paper Title}
\end{verbatim}
\subsection{Copyright Line}
\noindent For Transactions and Journals papers, it is not necessary to add a copyright line at the submission stage of your paper. The IEEE production process will add the appropriate copyright line. If you are writing a conference paper, please see the ``IEEEtran\_HOWTO.pdf'' for specific information on how to code ``Publication ID Marks''.
\subsection{Abstracts}
\noindent The abstract is the first element of a paper after the $\backslash${\tt{maketitle}} macro is invoked. The coding is simply:
\begin{verbatim}
\begin{abstract}
Text of your abstract.
\end{abstract}
\end{verbatim}
Please try to avoid mathematical and chemical formulas in the abstract.
\subsection{Index Terms}
\noindent The index terms are used to help other researchers discover your paper. Each society may have its own keyword set. Contact the EIC of your intended publication for this list.
\begin{verbatim}
\begin{IEEEkeywords}
Broad band networks, quality of service
\end{IEEEkeywords}
\end{verbatim}
\section{How to Create Common Body Elements}
\noindent The following sections describe common body text elements and how to code them.
\subsection{Initial Drop Cap Letter}
\noindent The first text paragraph uses a ``drop cap'' followed by the first word in ALL CAPS. This is accomplished by using the $\backslash${\tt{IEEEPARstart}} command as follows:
\begin{verbatim}
\IEEEPARstart{T}{his} is the first paragraph
of your paper. . .
\end{verbatim}
\subsection{Sections and Subsections}
\noindent Section headings use standard \LaTeX\ commands: $\backslash${\tt{section}}, $\backslash${\tt{subsection}} and $\backslash${\tt{subsubsection}}. Numbering is handled automatically for you and varies according to type of publication. It is common to not indent the first paragraph following a section head by using $\backslash${\tt{noindent}} as follows:
\begin{verbatim}
\section{Section Head}
\noindent The text of your paragraph . . .
\end{verbatim}
\subsection{Citations to the Bibliography}
\noindent The coding for the citations is made with the \LaTeX\ $\backslash${\tt{cite}} command. This will produce individual bracketed reference numbers in the IEEE style. At the top of your \LaTeX\ file you should include:
\begin{verbatim}
\usepackage{cite}
\end{verbatim}
For a single citation code as follows:
\begin{verbatim}
see \cite{ams}
\end{verbatim}
This will display as: see \cite{ams}\\
For multiple citations code as follows:
\begin{verbatim}
\cite{ams,oxford,lacomp}
\end{verbatim}
This will display as \cite{ams,oxford,lacomp}
\subsection{Figures}
\noindent Figures are coded with the standard \LaTeX\ commands as follows:
\begin{verbatim}
\begin{figure}[!t]
\centering
\includegraphics[width=2.5in]{fig1}
\caption{This is the caption for one fig.}
\label{fig1}
\end{figure}
\end{verbatim}
The [!t] argument enables floats to the top of the page to follow IEEE style. Make sure you include:
\begin{verbatim}
\usepackage{graphicx}
\end{verbatim}
\noindent at the top of your \LaTeX file with the other package declarations.
To cross-reference your figures in the text use the following code example:
\begin{verbatim}
See figure \ref{fig1} ...
\end{verbatim}
This will produce:\\
See figure \ref{fig1} . . .
\begin{figure}[!t]
\centering
\includegraphics[width=2.5in]{fig1}
\caption{This is the caption for one fig.}
\label{fig1}
\end{figure}
\subsection{Tables}
\noindent Tables should be coded with the standard \LaTeX\ coding. The following example shows a simple table.
\begin{verbatim}
\begin{table}
\begin{center}
\caption{Filter design equations ...}
\label{tab1}
\begin{tabular}{| c | c | c |}
\hline
Order & Arbitrary coefficients &
coefficients\\
of filter & $e_m$ & $b_{ij}$ \\
\hline
1& $b_{ij}=\hat{e}.\hat{\beta_{ij}}$,
& $b_{00}=0$\\
\hline
2&$\beta_{22}=(~1,-1,-1,~~1,~~1,~~1)$ &\\
\hline
3& $b_{ij}=\hat{e}.\hat{\beta_{ij}}$,
& $b_{00}=0$,\\
\hline
\end{tabular}
\end{center}
\end{table}
\end{verbatim}
To reference the table in the text, code as follows:
\begin{verbatim}Table~\ref{tab1} lists the closed-form...\end{verbatim}
to produce:
Table~\ref{tab1} lists the closed-form . . .
\begin{table}
\begin{center}
\caption{A Simple Table Example.}
\label{tab1}
\begin{tabular}{| c | c | c |}
\hline
Order & Arbitrary coefficients & coefficients\\
of filter & $e_m$ & $b_{ij}$ \\
\hline
1& $b_{ij}=\hat{e}.\hat{\beta_{ij}}$, & $b_{00}=0$\\
\hline
2&$\beta_{22}=(~1,-1,-1,~~1,~~1,~~1)$ &\\
\hline
3& $b_{ij}=\hat{e}.\hat{\beta_{ij}}$, & $b_{00}=0$,\\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Lists}
\noindent In this section, we will consider three types of lists: simple unnumbered, numbered and bulleted. There have been numerous options added to IEEEtran to enhance the creation of lists. If your lists are more complex than those shown below, please refer to the ``IEEEtran\_HOWTO.pdf'' for additional options.\\
\noindent{\bf A plain unnumbered list}
\begin{list}{}{}
\item{bare\_jrnl.tex}
\item{bare\_conf.tex}
\item{bare\_jrnl\_compsoc.tex}
\item{bare\_conf\_compsoc.tex}
\item{bare\_jrnl\_comsoc.tex}
\end{list}
\noindent coded as:
\begin{verbatim}
\begin{list}{}{}
\item{bare\_jrnl.tex}
\item{bare\_conf.tex}
\item{bare\_jrnl\_compsoc.tex}
\item{bare\_conf\_compsoc.tex}
\item{bare\_jrnl\_comsoc.tex}
\end{list}
\end{verbatim}
\noindent{\bf A simple numbered list}
\begin{enumerate}
\item{bare\_jrnl.tex}
\item{bare\_conf.tex}
\item{bare\_jrnl\_compsoc.tex}
\item{bare\_conf\_compsoc.tex}
\item{bare\_jrnl\_comsoc.tex}
\end{enumerate}
\noindent coded as:
\begin{verbatim}
\begin{enumerate}
\item{bare\_jrnl.tex}
\item{bare\_conf.tex}
\item{bare\_jrnl\_compsoc.tex}
\item{bare\_conf\_compsoc.tex}
\item{bare\_jrnl\_comsoc.tex}
\end{enumerate}
\end{verbatim}
\noindent{\bf A simple bulleted list}
\begin{itemize}
\item{bare\_jrnl.tex}
\item{bare\_conf.tex}
\item{bare\_jrnl\_compsoc.tex}
\item{bare\_conf\_compsoc.tex}
\item{bare\_jrnl\_comsoc.tex}
\end{itemize}
\noindent coded as:
\begin{verbatim}
\begin{itemize}
\item{bare\_jrnl.tex}
\item{bare\_conf.tex}
\item{bare\_jrnl\_compsoc.tex}
\item{bare\_conf\_compsoc.tex}
\item{bare\_jrnl\_comsoc.tex}
\end{itemize}
\end{verbatim}
\subsection{Other Elements}
\noindent For other less common elements such as Algorithms, Theorems and Proofs, and Floating Structures such as page-wide tables, figures or equations, please refer to the ``IEEEtran\_HOWTO.pdf'' section on ``Double Column Floats.''
\section{How to Create Common Back Matter Elements}
\noindent The following sections demonstrate common back matter elements such as Acknowledgments, Bibliographies, Appendices, and Author Biographies.
\subsection{Acknowledgments}
\noindent This should be a simple paragraph before the bibliography to thank those individuals and institutions who have supported your work on this article.
\begin{verbatim}
\section{Acknowledgments}
\noindent Text describing those who
supported your paper.
\end{verbatim}
\subsection{Bibliographies}
\noindent {\bf{References Simplified:}} A simple way of composing references is to use the $\backslash${\tt{bibitem}} macro to define the beginning of a reference as in the following examples:\\
\noindent [6] H. Sira-Ramirez. ``On the sliding mode control of nonlinear systems,'' \textit{Systems \& Control Letters}, vol. 19, pp. 303--312, 1992.
\noindent coded as:
\begin{verbatim}
\bibitem{Sira3}
H. Sira-Ramirez. ``On the sliding mode
control of nonlinear systems,''
\textit{Systems \& Control Letters},
vol. 19, pp. 303--312, 1992.
\end{verbatim}
\noindent [7] A. Levant.``Exact differentiation of signals with unbounded higher derivatives,'' in \textit{Proceedings of the 45th IEEE Conference on Decision and Control}, San Diego, California, USA, pp. 5585--5590, 2006.
\noindent coded as:
\begin{verbatim}\bibitem{Levant}
A. Levant. ``Exact differentiation of
signals with unbounded higher
derivatives,'' in \textit{Proceedings
of the 45th IEEE Conference on
Decision and Control}, San Diego,
California, USA, pp. 5585--5590, 2006.
\end{verbatim}
\noindent [8] M. Fliess, C. Join, and H. Sira-Ramirez. ``Non-linear estimation is easy,'' \textit{International Journal of Modelling, Identification and Control}, vol. 4, no. 1, pp. 12--27, 2008.
\noindent coded as:
\begin{verbatim}
\bibitem{Cedric}
M. Fliess, C. Join, and H. Sira-Ramirez.
``Non-linear estimation is easy,''
\textit{International Journal of Modelling,
Identification and Control}, vol. 4,
no. 1, pp. 12--27, 2008.
\end{verbatim}
\noindent [9] R. Ortega, A. Astolfi, G. Bastin, and H. Rodriguez. ``Stabilization of food-chain systems using a port-controlled Hamiltonian description,'' in \textit{Proceedings of the American Control Conference}, Chicago, Illinois, USA, pp. 2245--2249, 2000.
\noindent coded as:
\begin{verbatim}
\bibitem{Ortega}
R. Ortega, A. Astolfi, G. Bastin, and H.
Rodriguez. ``Stabilization of food-chain
systems using a port-controlled Hamiltonian
description,'' in \textit{Proceedings of the
American Control Conference}, Chicago,
Illinois, USA, pp. 2245--2249, 2000.
\end{verbatim}
\subsection{Accented Characters in References}
\noindent When using accented characters in references, please use the standard LaTeX coding for accents. {\bf{Do not use math coding for character accents}}. For example:
\begin{verbatim}
\'e, \"o, \`a, \~e
\end{verbatim}
will produce: \'e, \"o, \`a, \~e
\subsection{Use of BibTeX}
\noindent If you wish to use BibTeX, please see the documentation that accompanies the IEEEtran Bibliography package.
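A minimal setup, assuming the IEEEtran.bst style file that ships with the package is available and that your references are stored in a hypothetical file named refs.bib, is:
\begin{verbatim}
\bibliographystyle{IEEEtran}
\bibliography{refs}
\end{verbatim}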
\subsection{Biographies and Author Photos}
\noindent Authors may have options to include their photo or not. Photos should be a bit-map graphic (.tif or .jpg) and sized to fit in the space allowed. Please see the coding samples below:
\begin{verbatim}
\begin{IEEEbiographynophoto}{Jane Doe}
Biography text here without a photo.
\end{IEEEbiographynophoto}
\end{verbatim}
or a biography with a photo
\begin{verbatim}
\begin{IEEEbiography}[{\includegraphics
[width=1in,height=1.25in,clip,
keepaspectratio]{fig1.png}}]
{IEEE Publications Technology Team}
In this paragraph you can place
your educational, professional background
and research and other interests.
\end{IEEEbiography}
\end{verbatim}
Please see the end of this document to see the output of these coding examples.
\section{Mathematical Typography \\ and Why It Matters}
\noindent Typographical conventions for mathematical formulas have been developed to {\bf provide uniformity and clarity of presentation across mathematical texts}. This enables the readers of those texts to both understand the author's ideas and to grasp new concepts quickly. While software such as \LaTeX \ and MathType\textsuperscript{\textregistered} can produce aesthetically pleasing math when used properly, it is also very easy to misuse the software, potentially resulting in incorrect math display.
IEEE aims to provide authors with the proper guidance on mathematical typesetting style and assist them in writing the best possible article.
As such, IEEE has assembled a set of examples of good and bad mathematical typesetting. You will see how various issues are dealt with. The following publications have been referenced in preparing this material:
\begin{list}{}{}
\item{\emph{Mathematics into Type}, published by the American Mathematical Society}
\item{\emph{The Printing of Mathematics}, published by Oxford University Press}
\item{\emph{The \LaTeX\ Companion}, by F. Mittelbach and M. Goossens}
\item{\emph{More Math into LaTeX}, by G. Gr\"atzer}
\item{AMS-StyleGuide-online.pdf, published by the American Mathematical Society}
\end{list}
Further examples can be seen at \url{http://journals.ieeeauthorcenter.ieee.org/wp-content/uploads/sites/7/IEEE-Math-Typesetting-Guide.pdf}
\subsection{Display Equations}
\noindent A simple display equation example shown below uses the ``equation'' environment. To number the equations, use the $\backslash${\tt{label}} macro to create an identifier for the equation. LaTeX will automatically number the equation for you.
\begin{equation}
\label{deqn_ex1}
x = \sum_{i=0}^{n} 2{i} Q.
\end{equation}
\noindent is coded as follows:
\begin{verbatim}
\begin{equation}
\label{deqn_ex1}
x = \sum_{i=0}^{n} 2{i} Q.
\end{equation}
\end{verbatim}
To reference this equation in the text use the $\backslash${\tt{ref}} macro.
Please see (\ref{deqn_ex1})\\
\noindent is coded as follows:
\begin{verbatim}
Please see (\ref{deqn_ex1})\end{verbatim}
\subsection{Equation Numbering}
\noindent {\bf{Consecutive Numbering:}} Equations within an article are numbered consecutively from the beginning of the
article to the end, i.e., (1), (2), (3), (4), (5), etc. Do not use roman numerals or section numbers for equation numbering.\\
\noindent {\bf{Appendix Equations:}} The continuation of consecutively numbered equations is best in the Appendix, but numbering
as (A1), (A2), etc., is permissible.\\
\noindent {\bf{Hyphens and Periods}}: Hyphens and periods should not be used in equation numbers, i.e., use (1a) rather than
(1-a) and (2a) rather than (2.a) for sub-equations. This should be consistent throughout the article.
\subsection{Multi-line equations and alignment}
\noindent Here we show several examples of multi-line equations and proper alignments.
\noindent {\bf{A single equation that must break over multiple lines due to length with no specific alignment.}}
\begin{multline}
\text{The first line of this example}\\
\text{The second line of this example}\\
\text{The third line of this example}
\end{multline}
\noindent is coded as:
\begin{verbatim}
\begin{multline}
\text{The first line of this example}\\
\text{The second line of this example}\\
\text{The third line of this example}
\end{multline}
\end{verbatim}
\noindent {\bf{A single equation with multiple lines aligned at the = signs}}
\begin{align}
a &= c+d \\
b &= e+f
\end{align}
\noindent is coded as:
\begin{verbatim}
\begin{align}
a &= c+d \\
b &= e+f
\end{align}
\end{verbatim}
The {\tt{align}} environment can align on multiple points as shown in the following example:
\begin{align}
x &= y & X & =Y & a &=bc\\
x' &= y' & X' &=Y' &a' &=bz
\end{align}
\noindent is coded as:
\begin{verbatim}
\begin{align}
x &= y & X & =Y & a &=bc\\
x' &= y' & X' &=Y' &a' &=bz
\end{align}
\end{verbatim}
\subsection{Subnumbering}
\noindent The amsmath package provides a {\tt{subequations}} environment to facilitate subnumbering. An example:
\begin{subequations}\label{eq:2}
\begin{align}
f&=g \label{eq:2A}\\
f' &=g' \label{eq:2B}\\
\mathcal{L}f &= \mathcal{L}g \label{eq:2c}
\end{align}
\end{subequations}
\noindent is coded as:
\begin{verbatim}
\begin{subequations}\label{eq:2}
\begin{align}
f&=g \label{eq:2A}\\
f' &=g' \label{eq:2B}\\
\mathcal{L}f &= \mathcal{L}g \label{eq:2c}
\end{align}
\end{subequations}
\end{verbatim}
\subsection{Matrices}
\noindent There are several useful matrix environments that can save you some keystrokes. See the example coding below and the output.
\noindent {\bf{A simple matrix:}}
\begin{equation}
\begin{matrix} 0 & 1 \\
1 & 0 \end{matrix}
\end{equation}
is coded as:
\begin{verbatim}
\begin{equation}
\begin{matrix} 0 & 1 \\
1 & 0 \end{matrix}
\end{equation}
\end{verbatim}
\noindent {\bf{A matrix with parenthesis}}
\begin{equation}
\begin{pmatrix} 0 & -i \\
i & 0 \end{pmatrix}
\end{equation}
is coded as:
\begin{verbatim}
\begin{equation}
\begin{pmatrix} 0 & -i \\
i & 0 \end{pmatrix}
\end{equation}
\end{verbatim}
\noindent {\bf{A matrix with square brackets}}
\begin{equation}
\begin{bmatrix} 0 & -1 \\
1 & 0 \end{bmatrix}
\end{equation}
is coded as:
\begin{verbatim}
\begin{equation}
\begin{bmatrix} 0 & -1 \\
1 & 0 \end{bmatrix}
\end{equation}
\end{verbatim}
\noindent {\bf{A matrix with curly braces}}
\begin{equation}
\begin{Bmatrix} 1 & 0 \\
0 & -1 \end{Bmatrix}
\end{equation}
is coded as:
\begin{verbatim}
\begin{equation}
\begin{Bmatrix} 1 & 0 \\
0 & -1 \end{Bmatrix}
\end{equation}\end{verbatim}
\noindent {\bf{A matrix with single verticals}}
\begin{equation}
\begin{vmatrix} a & b \\
c & d \end{vmatrix}
\end{equation}
is coded as:
\begin{verbatim}
\begin{equation}
\begin{vmatrix} a & b \\
c & d \end{vmatrix}
\end{equation}\end{verbatim}
\noindent {\bf{A matrix with double verticals}}
\begin{equation}
\begin{Vmatrix} i & 0 \\
0 & -i \end{Vmatrix}
\end{equation}
is coded as:
\begin{verbatim}
\begin{equation}
\begin{Vmatrix} i & 0 \\
0 & -i \end{Vmatrix}
\end{equation}\end{verbatim}
\subsection{Arrays}
\noindent The {\tt{array}} environment allows you some options for matrix-like equations. You will have to manually key the fences, but you'll have options for alignment of the columns and for setting horizontal and vertical rules. The argument to {\tt{array}} controls alignment and placement of vertical rules.
A simple array
\begin{equation}
\left(
\begin{array}{cccc}
a+b+c & uv & x-y & 27\\
a+b & u+v & z & 134
\end{array}\right)
\end{equation}
is coded as:
\begin{verbatim}
\begin{equation}
\left(
\begin{array}{cccc}
a+b+c & uv & x-y & 27\\
a+b & u+v & z & 134
\end{array} \right)
\end{equation}
\end{verbatim}
A slight variation on this to better align the numbers in the last column
\begin{equation}
\left(
\begin{array}{cccr}
a+b+c & uv & x-y & 27\\
a+b & u+v & z & 134
\end{array}\right)
\end{equation}
is coded as:
\begin{verbatim}
\begin{equation}
\left(
\begin{array}{cccr}
a+b+c & uv & x-y & 27\\
a+b & u+v & z & 134
\end{array} \right)
\end{equation}
\end{verbatim}
An array with vertical and horizontal rules
\begin{equation}
\left( \begin{array}{c|c|c|r}
a+b+c & uv & x-y & 27\\ \hline
a+b & u+v & z & 134
\end{array}\right)
\end{equation}
is coded as:
\begin{verbatim}
\begin{equation}
\left(
\begin{array}{c|c|c|r}
a+b+c & uv & x-y & 27\\
a+b & u+v & z & 134
\end{array} \right)
\end{equation}
\end{verbatim}
Note the argument now has the pipe ``$\vert$'' included to indicate the placement of the vertical rules.
\subsection{Cases Structures}
\noindent Many times we find cases coded using the wrong environment, i.e., {\tt{array}}. Using the {\tt{cases}} environment will save keystrokes (from not having to type the $\backslash${\tt{left}}$\backslash${\tt{lbrace}}) and automatically provide the correct column alignment.
\begin{equation*}
{z_m(t)} = \begin{cases}
1,&{\text{if}}\ {\beta }_m(t) \\
{0,}&{\text{otherwise.}}
\end{cases}
\end{equation*}
\noindent is coded as follows:
\begin{verbatim}
\begin{equation*}
{z_m(t)} =
\begin{cases}
1,&{\text{if}}\ {\beta }_m(t),\\
{0,}&{\text{otherwise.}}
\end{cases}
\end{equation*}
\end{verbatim}
\noindent Note that the ``\&'' is used to mark the tabular alignment. This is important to get proper column alignment. Do not use $\backslash${\tt{quad}} or other fixed spaces to try and align the columns. Also, note the use of the $\backslash${\tt{text}} macro for text elements such as ``if'' and ``otherwise''.
\subsection{Function Formatting in Equations}
In many cases there is an easy way to properly format most common functions. Use of the $\backslash$ in front of the function name will, in most cases, provide the correct formatting. When this does not work, the following example provides a solution using the $\backslash${\tt{text}} macro.
\begin{equation*}
d_{R}^{KM} = \underset {d_{l}^{KM}} {\text{arg min}} \{ d_{1}^{KM},\ldots,d_{6}^{KM}\}.
\end{equation*}
\noindent is coded as follows:
\begin{verbatim}
\begin{equation*}
d_{R}^{KM} = \underset {d_{l}^{KM}}
{\text{arg min}} \{ d_{1}^{KM},
\ldots,d_{6}^{KM}\}.
\end{equation*}
\end{verbatim}
\subsection{Text Acronyms Inside Equations}
\noindent This example shows where the acronym ``MSE" is coded using $\backslash${\tt{text\{\}}} to match how it appears in the text.
\begin{equation*}
\text{MSE} = \frac {1}{n}\sum _{i=1}^{n}(Y_{i} - \hat {Y_{i}})^{2}
\end{equation*}
\begin{verbatim}
\begin{equation*}
\text{MSE} = \frac {1}{n}\sum _{i=1}^{n}
(Y_{i} - \hat {Y_{i}})^{2}
\end{equation*}
\end{verbatim}
\subsection{Obsolete Coding}
\noindent Avoid the use of outdated environments, such as {\tt{eqnarray}} and \$\$ math delimiters, for display equations. The \$\$ display math delimiters are left over from PlainTeX and should not be used in \LaTeX, ever. Poor vertical spacing will result.
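For example, a display equation previously coded as
\begin{verbatim}
$$ a = b + c $$
\end{verbatim}
should instead be coded as
\begin{verbatim}
\begin{equation*}
a = b + c
\end{equation*}
\end{verbatim}
(the equation itself is only a placeholder).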
\subsection{Use Appropriate Delimiters for Display Equations}
\noindent Some improper mathematical coding advice has been given in various YouTube\textsuperscript{TM} videos on how to write scholarly articles, so please follow these good examples:\\
For {\bf{single-line unnumbered display equations}}, please use the following delimiters:
\begin{verbatim}\[ . . . \] or \end{verbatim}
\begin{verbatim}\begin{equation*} . . . \end{equation*}\end{verbatim}
Note that the * in the environment name turns off equation numbering.\\
For {\bf{multiline unnumbered display equations}} that have alignment requirements, please use the following delimiters:
\begin{verbatim}
\begin{align*} . . . \end{align*}
\end{verbatim}
For {\bf{single-line numbered display equations}}, please use the following delimiters:
\begin{verbatim}
\begin{equation} . . . \end{equation}
\end{verbatim}
For {\bf{multiline numbered display equations}}, please use the following delimiters:
\begin{verbatim}
\begin{align} . . . \end{align}
\end{verbatim}
\section{LaTeX Package Suggestions}
\noindent Immediately after your documentclass declaration at the top of your \LaTeX\ file is the place where you should declare any packages that are being used. The following packages were used in the production of this document.
\begin{verbatim}
\usepackage{amsmath,amsfonts}
\usepackage{algorithmic}
\usepackage{array}
\usepackage[caption=false,font=normalsize,
labelfont=sf,textfont=sf]{subfig}
\usepackage{textcomp}
\usepackage{stfloats}
\usepackage{url}
\usepackage{verbatim}
\usepackage{graphicx}
\usepackage{balance}
\end{verbatim}
\section{Additional Advice}
Please use ``soft'' (e.g., \verb|\eqref{Eq}|) or \verb|(\ref{Eq})|
cross references instead of ``hard'' references (e.g., \verb|(1)|).
That will make it possible to combine sections, add equations, or
change the order of figures or citations without having to go through
the file line by line.
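For example, code
\begin{verbatim}
as shown in \eqref{deqn_ex1} ...
\end{verbatim}
rather than hard-coding the number as ``as shown in (1)''; the label used here is one already defined earlier in this document.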
Please note that the \verb|{subequations}| environment in {\LaTeX}
will increment the main equation counter even when there are no
equation numbers displayed. If you forget that, you might write an
article in which the equation numbers skip from (17) to (20), causing
the copy editors to wonder if you've discovered a new method of
counting.
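A minimal illustration: the following block displays no equation numbers, yet it still advances the main equation counter by one.
\begin{verbatim}
\begin{subequations}
\begin{align*}
a &= b \\
c &= d
\end{align*}
\end{subequations}
\end{verbatim}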
{\BibTeX} does not work by magic. It doesn't get the bibliographic
data from thin air but from .bib files. If you use {\BibTeX} to produce a
bibliography you must send the .bib files.
{\LaTeX} can't read your mind. If you assign the same label to a
subsubsection and a table, you might find that Table I has been cross
referenced as Table IV-B3.
{\LaTeX} does not have precognitive abilities. If you put a
\verb|\label| command before the command that updates the counter it's
supposed to be using, the label will pick up the last counter to be
cross referenced instead. In particular, a \verb|\label| command
should not go before the caption of a figure or a table.
Please do not use \verb|\nonumber| or \verb|\notag| inside the
\verb|{array}| environment. It will not stop equation numbers inside
\verb|{array}| (there won't be any anyway) and it might stop a wanted
equation number in the surrounding equation.
\balance
\section{A Final Checklist}
\begin{enumerate}{}{}
\item{Make sure that your equations are numbered sequentially and there are no equation numbers missing or duplicated. Avoid hyphens and periods in your equation numbering. Stay with IEEE style, i.e., (1), (2), (3) or for sub-equations (1a), (1b). For equations in the appendix, use (A1), (A2), etc.}
\item{Are your equations properly formatted? Text, functions, alignment points in cases and arrays, etc. }
\item{Make sure all graphics are included.}
\item{Make sure your references are included either in your main LaTeX file or a separate .bib file if calling the external file.}
\end{enumerate}
\section{Introduction}
\IEEEPARstart{T}{his} file is intended to serve as a ``sample article file''
for IEEE journal papers produced under \LaTeX\ using
IEEEtran.cls version 1.8b and later. The most common elements are covered in the simplified and updated instructions in ``New\_IEEEtran\_how-to.pdf''. For less common elements you can refer back to the original ``IEEEtran\_HOWTO.pdf''. It is assumed that the reader has a basic working knowledge of \LaTeX. Those who are new to \LaTeX \ are encouraged to read Tobias Oetiker's ``The Not So Short Introduction to \LaTeX ,'' available at: \url{http://tug.ctan.org/info/lshort/english/lshort.pdf} which provides an overview of working with \LaTeX.
\section{The Design, Intent, and \\ Limitations of the Templates}
The templates are intended to {\bf{approximate the final look and page length of the articles/papers}}. {\bf{They are NOT intended to be the final produced work that is displayed in print or on IEEEXplore\textsuperscript{\textregistered}}}. They will help to give the authors an approximation of the number of pages that will be in the final version. The structure of the \LaTeX\ files, as designed, enables easy conversion to XML for the composition systems used by the IEEE. The XML files are used to produce the final print/IEEEXplore pdf and then converted to HTML for IEEEXplore.
\section{Where to Get \LaTeX \ Help --- User Groups}
The following online groups are helpful to beginning and experienced \LaTeX\ users. A search through their archives can provide many answers to common questions.
\begin{list}{}{}
\item{\url{http://www.latex-community.org/}}
\item{\url{https://tex.stackexchange.com/} }
\end{list}
\section{Other Resources}
See \cite{ref1,ref2,ref3,ref4,ref5} for resources on formatting math into text and additional help in working with \LaTeX .
\section{Text}
For some of the remainder of this sample we will use dummy text to fill out paragraphs rather than live text that may violate a copyright.
Itam, que ipiti sum dem velit la sum et dionet quatibus apitet voloritet audam, qui aliciant voloreicid quaspe volorem ut maximusandit faccum conemporerum aut ellatur, nobis arcimus.
Fugit odi ut pliquia incitium latum que cusapere perit molupta eaquaeria quod ut optatem poreiur? Quiaerr ovitior suntiant litio bearciur?
Onseque sequaes rectur autate minullore nusae nestiberum, sum voluptatio. Et ratem sequiam quaspername nos rem repudandae volum consequis nos eium aut as molupta tectum ulparumquam ut maximillesti consequas quas inctia cum volectinusa porrum unt eius cusaest exeritatur? Nias es enist fugit pa vollum reium essusam nist et pa aceaqui quo elibusdandis deligendus que nullaci lloreri bla que sa coreriam explacc atiumquos simolorpore, non prehendunt lam que occum\cite{ref6} si aut aut maximus eliaeruntia dia sequiamenime natem sendae ipidemp orehend uciisi omnienetus most verum, ommolendi omnimus, est, veni aut ipsa volendelist mo conserum volores estisciis recessi nveles ut poressitatur sitiis ex endi diti volum dolupta aut aut odi as eatquo cullabo remquis toreptum et des accus dolende pores sequas dolores tinust quas expel moditae ne sum quiatis nis endipie nihilis etum fugiae audi dia quiasit quibus.
\IEEEpubidadjcol
Ibus el et quatemo luptatque doluptaest et pe volent rem ipidusa eribus utem venimolorae dera qui acea quam etur aceruptat.
Gias anis doluptaspic tem et aliquis alique inctiuntiur?
Sedigent, si aligend elibuscid ut et ium volo tem eictore pellore ritatus ut ut ullatus in con con pere nos ab ium di tem aliqui od magnit repta volectur suntio. Nam isquiante doluptis essit, ut eos suntionsecto debitiur sum ea ipitiis adipit, oditiore, a dolorerempos aut harum ius, atquat.
Rum rem ditinti sciendunti volupiciendi sequiae nonsect oreniatur, volores sition ressimil inus solut ea volum harumqui to see \eqref{deqn_ex1a} mint aut quat eos explis ad quodi debis deliqui aspel earcius.
\begin{equation}
\label{deqn_ex1a}
x = \sum_{i=0}^{n} 2{i} Q.
\end{equation}
Alis nime volorempera perferi sitio denim repudae pre ducilit atatet volecte ssimillorae dolore, ut pel ipsa nonsequiam in re nus maiost et que dolor sunt eturita tibusanis eatent a aut et dio blaudit reptibu scipitem liquia consequodi od unto ipsae. Et enitia vel et experferum quiat harum sa net faccae dolut voloria nem. Bus ut labo. Ita eum repraer rovitia samendit aut et volupta tecupti busant omni quiae porro que nossimodic temquis anto blacita conse nis am, que ereperum eumquam quaescil imenisci quae magnimos recus ilibeaque cum etum iliate prae parumquatemo blaceaquiam quundia dit apienditem rerit re eici quaes eos sinvers pelecabo. Namendignis as exerupit aut magnim ium illabor roratecte plic tem res apiscipsam et vernat untur a deliquaest que non cus eat ea dolupiducim fugiam volum hil ius dolo eaquis sitis aut landesto quo corerest et auditaquas ditae voloribus, qui optaspis exero cusa am, ut plibus.
\section{Some Common Elements}
\subsection{Sections and Subsections}
Enumeration of section headings is desirable, but not required. When numbered, please be consistent throughout the article, that is, all headings and all levels of section headings in the article should be enumerated. Primary headings are designated with Roman numerals, secondary with capital letters, tertiary with Arabic numbers, and quaternary with lowercase letters. Reference and Acknowledgment headings are unlike all other section headings in text. They are never enumerated. They are simply primary headings without labels, regardless of whether the other headings in the article are enumerated.
\subsection{Citations to the Bibliography}
The coding for the citations is made with the \LaTeX\ $\backslash${\tt{cite}} command.
This will display as: see \cite{ref1}.
For multiple citations code as follows: {\tt{$\backslash$cite\{ref1,ref2,ref3\}}}
which will produce \cite{ref1,ref2,ref3}. For reference ranges that are not consecutive code as {\tt{$\backslash$cite\{ref1,ref2,ref3,ref9\}}} which will produce \cite{ref1,ref2,ref3,ref9}
\subsection{Lists}
In this section, we will consider three types of lists: simple unnumbered, numbered, and bulleted. There have been many options added to IEEEtran to enhance the creation of lists. If your lists are more complex than those shown below, please refer to the original ``IEEEtran\_HOWTO.pdf'' for additional options.\\
\subsubsection*{\bf A plain unnumbered list}
\begin{list}{}{}
\item{bare\_jrnl.tex}
\item{bare\_conf.tex}
\item{bare\_jrnl\_compsoc.tex}
\item{bare\_conf\_compsoc.tex}
\item{bare\_jrnl\_comsoc.tex}
\end{list}
\subsubsection*{\bf A simple numbered list}
\begin{enumerate}
\item{bare\_jrnl.tex}
\item{bare\_conf.tex}
\item{bare\_jrnl\_compsoc.tex}
\item{bare\_conf\_compsoc.tex}
\item{bare\_jrnl\_comsoc.tex}
\end{enumerate}
\subsubsection*{\bf A simple bulleted list}
\begin{itemize}
\item{bare\_jrnl.tex}
\item{bare\_conf.tex}
\item{bare\_jrnl\_compsoc.tex}
\item{bare\_conf\_compsoc.tex}
\item{bare\_jrnl\_comsoc.tex}
\end{itemize}
\subsection{Figures}
Fig. 1 is an example of a floating figure using the graphicx package.
Note that $\backslash${\tt{label}} must occur AFTER (or within) $\backslash${\tt{caption}}.
For figures, $\backslash${\tt{caption}} should occur after the $\backslash${\tt{includegraphics}}.
\begin{figure}[!t]
\centering
\includegraphics[width=2.5in]{fig1}
\caption{Simulation results for the network.}
\label{fig_1}
\end{figure}
Figs. 2(a) and (b) show an example of a double-column floating figure using two subfigures.
(The subfig.sty package must be loaded for this to work.)
The subfigure $\backslash${\tt{label}} commands are set within each subfloat command,
and the $\backslash${\tt{label}} for the overall figure must come after $\backslash${\tt{caption}}.
$\backslash${\tt{hfil}} is used as a separator to get equal spacing.
The combined width of all the parts of the figure should not exceed the text width, or a line break will occur.
\begin{figure*}[!t]
\centering
\subfloat[]{\includegraphics[width=2.5in]{fig1}%
\label{fig_first_case}}
\hfil
\subfloat[]{\includegraphics[width=2.5in]{fig1}%
\label{fig_second_case}}
\caption{Dae. Ad quatur autat ut porepel itemoles dolor autem fuga. Bus quia con nessunti as remo di quatus non perum que nimus. (a) Case I. (b) Case II.}
\label{fig_sim}
\end{figure*}
Note that often IEEE papers with multi-part figures do not place the labels within the image itself (using the optional argument to $\backslash${\tt{subfloat}}[]), but instead will
reference/describe all of them (a), (b), etc., within the main caption.
Be aware that for subfig.sty to generate the (a), (b), etc., subfigure
labels, the optional argument to $\backslash${\tt{subfloat}} must be present. If a
subcaption is not desired, leave its contents blank,
e.g.,$\backslash${\tt{subfloat}}[].
\section{Tables}
Note that, for IEEE-style tables, the
$\backslash${\tt{caption}} command should come BEFORE the table. Table captions use title case. Articles (a, an, the), coordinating conjunctions (and, but, for, or, nor), and most short prepositions are lowercase unless they are the first or last word. Table text will default to $\backslash${\tt{footnotesize}} as
the IEEE normally uses this smaller font for tables.
The $\backslash${\tt{label}} must come after $\backslash${\tt{caption}} as always.
\begin{table}[!t]
\caption{An Example of a Table\label{tab:table1}}
\centering
\begin{tabular}{|c||c|}
\hline
One & Two\\
\hline
Three & Four\\
\hline
\end{tabular}
\end{table}
\section{Algorithms}
Algorithms should be numbered and include a short title. They are set off from the text with rules above and below the title and after the last line.
\begin{algorithm}[H]
\caption{Weighted Tanimoto ELM.}\label{alg:alg1}
\begin{algorithmic}
\STATE
\STATE {\textsc{TRAIN}}$(\mathbf{X}, \mathbf{T})$
\STATE \hspace{0.5cm}$ \textbf{select randomly } W \subset \mathbf{X} $
\STATE \hspace{0.5cm}$ N_\mathbf{t} \gets | \{ i : \mathbf{t}_i = \mathbf{t} \} | $ \textbf{ for } $ \mathbf{t}= -1,+1 $
\STATE \hspace{0.5cm}$ B_i \gets \sqrt{ \textsc{max}(N_{-1},N_{+1}) / N_{\mathbf{t}_i} } $ \textbf{ for } $ i = 1,...,N $
\STATE \hspace{0.5cm}$ \hat{\mathbf{H}} \gets B \cdot (\mathbf{X}^T\textbf{W})/( \mathbb{1}\mathbf{X} + \mathbb{1}\textbf{W} - \mathbf{X}^T\textbf{W} ) $
\STATE \hspace{0.5cm}$ \beta \gets \left ( I/C + \hat{\mathbf{H}}^T\hat{\mathbf{H}} \right )^{-1}(\hat{\mathbf{H}}^T B\cdot \mathbf{T}) $
\STATE \hspace{0.5cm}\textbf{return} $\textbf{W}, \beta $
\STATE
\STATE {\textsc{PREDICT}}$(\mathbf{X} )$
\STATE \hspace{0.5cm}$ \mathbf{H} \gets (\mathbf{X}^T\textbf{W} )/( \mathbb{1}\mathbf{X} + \mathbb{1}\textbf{W}- \mathbf{X}^T\textbf{W} ) $
\STATE \hspace{0.5cm}\textbf{return} $\textsc{sign}( \mathbf{H} \beta )$
\end{algorithmic}
\label{alg1}
\end{algorithm}
Que sunt eum lam eos si dic to estist, culluptium quid qui nestrum nobis reiumquiatur minimus minctem. Ro moluptat fuga. Itatquiam ut laborpo rersped exceres vollandi repudaerem. Ulparci sunt, qui doluptaquis sumquia ndestiu sapient iorepella sunti veribus. Ro moluptat fuga. Itatquiam ut laborpo rersped exceres vollandi repudaerem.
\section{Mathematical Typography \\ and Why It Matters}
Typographical conventions for mathematical formulas have been developed to {\bf provide uniformity and clarity of presentation across mathematical texts}. This enables the readers of those texts to both understand the author's ideas and to grasp new concepts quickly. While software such as \LaTeX \ and MathType\textsuperscript{\textregistered} can produce aesthetically pleasing math when used properly, it is also very easy to misuse the software, potentially resulting in incorrect math display.
IEEE aims to provide authors with the proper guidance on mathematical typesetting style and assist them in writing the best possible article. As such, IEEE has assembled a set of examples of good and bad mathematical typesetting \cite{ref1,ref2,ref3,ref4,ref5}.
Further examples can be found at \url{http://journals.ieeeauthorcenter.ieee.org/wp-content/uploads/sites/7/IEEE-Math-Typesetting-Guide-for-LaTeX-Users.pdf}
\subsection{Display Equations}
The simple display equation example shown below uses the ``equation'' environment. To number the equations, use the $\backslash${\tt{label}} macro to create an identifier for the equation. LaTeX will automatically number the equation for you.
\begin{equation}
\label{deqn_ex1}
x = \sum_{i=0}^{n} 2{i} Q.
\end{equation}
\noindent is coded as follows:
\begin{verbatim}
\begin{equation}
\label{deqn_ex1}
x = \sum_{i=0}^{n} 2{i} Q.
\end{equation}
\end{verbatim}
To reference this equation in the text use the $\backslash${\tt{ref}} macro.
Please see (\ref{deqn_ex1})\\
\noindent is coded as follows:
\begin{verbatim}
Please see (\ref{deqn_ex1})\end{verbatim}
\subsection{Equation Numbering}
{\bf{Consecutive Numbering:}} Equations within an article are numbered consecutively from the beginning of the
article to the end, i.e., (1), (2), (3), (4), (5), etc. Do not use roman numerals or section numbers for equation numbering.
\noindent {\bf{Appendix Equations:}} The continuation of consecutively numbered equations is best in the Appendix, but numbering
as (A1), (A2), etc., is permissible.\\
\noindent {\bf{Hyphens and Periods}}: Hyphens and periods should not be used in equation numbers, i.e., use (1a) rather than
(1-a) and (2a) rather than (2.a) for subequations. This should be consistent throughout the article.
\subsection{Multi-Line Equations and Alignment}
Here we show several examples of multi-line equations and proper alignments.
\noindent {\bf{A single equation that must break over multiple lines due to length with no specific alignment.}}
\begin{multline}
\text{The first line of this example}\\
\text{The second line of this example}\\
\text{The third line of this example}
\end{multline}
\noindent is coded as:
\begin{verbatim}
\begin{multline}
\text{The first line of this example}\\
\text{The second line of this example}\\
\text{The third line of this example}
\end{multline}
\end{verbatim}
\noindent {\bf{A single equation with multiple lines aligned at the = signs}}
\begin{align}
a &= c+d \\
b &= e+f
\end{align}
\noindent is coded as:
\begin{verbatim}
\begin{align}
a &= c+d \\
b &= e+f
\end{align}
\end{verbatim}
The {\tt{align}} environment can align on multiple points as shown in the following example:
\begin{align}
x &= y & X & =Y & a &=bc\\
x' &= y' & X' &=Y' &a' &=bz
\end{align}
\noindent is coded as:
\begin{verbatim}
\begin{align}
x &= y & X & =Y & a &=bc\\
x' &= y' & X' &=Y' &a' &=bz
\end{align}
\end{verbatim}
\subsection{Subnumbering}
The amsmath package provides a {\tt{subequations}} environment to facilitate subnumbering. An example:
\begin{subequations}\label{eq:2}
\begin{align}
f&=g \label{eq:2A}\\
f' &=g' \label{eq:2B}\\
\mathcal{L}f &= \mathcal{L}g \label{eq:2c}
\end{align}
\end{subequations}
\noindent is coded as:
\begin{verbatim}
\begin{subequations}\label{eq:2}
\begin{align}
f&=g \label{eq:2A}\\
f' &=g' \label{eq:2B}\\
\mathcal{L}f &= \mathcal{L}g \label{eq:2c}
\end{align}
\end{subequations}
\end{verbatim}
\subsection{Matrices}
There are several useful matrix environments that can save you some keystrokes. See the example coding below and the output.
\noindent {\bf{A simple matrix:}}
\begin{equation}
\begin{matrix} 0 & 1 \\
1 & 0 \end{matrix}
\end{equation}
is coded as:
\begin{verbatim}
\begin{equation}
\begin{matrix} 0 & 1 \\
1 & 0 \end{matrix}
\end{equation}
\end{verbatim}
\noindent {\bf{A matrix with parenthesis}}
\begin{equation}
\begin{pmatrix} 0 & -i \\
i & 0 \end{pmatrix}
\end{equation}
is coded as:
\begin{verbatim}
\begin{equation}
\begin{pmatrix} 0 & -i \\
i & 0 \end{pmatrix}
\end{equation}
\end{verbatim}
\noindent {\bf{A matrix with square brackets}}
\begin{equation}
\begin{bmatrix} 0 & -1 \\
1 & 0 \end{bmatrix}
\end{equation}
is coded as:
\begin{verbatim}
\begin{equation}
\begin{bmatrix} 0 & -1 \\
1 & 0 \end{bmatrix}
\end{equation}
\end{verbatim}
\noindent {\bf{A matrix with curly braces}}
\begin{equation}
\begin{Bmatrix} 1 & 0 \\
0 & -1 \end{Bmatrix}
\end{equation}
is coded as:
\begin{verbatim}
\begin{equation}
\begin{Bmatrix} 1 & 0 \\
0 & -1 \end{Bmatrix}
\end{equation}\end{verbatim}
\noindent {\bf{A matrix with single verticals}}
\begin{equation}
\begin{vmatrix} a & b \\
c & d \end{vmatrix}
\end{equation}
is coded as:
\begin{verbatim}
\begin{equation}
\begin{vmatrix} a & b \\
c & d \end{vmatrix}
\end{equation}\end{verbatim}
\noindent {\bf{A matrix with double verticals}}
\begin{equation}
\begin{Vmatrix} i & 0 \\
0 & -i \end{Vmatrix}
\end{equation}
is coded as:
\begin{verbatim}
\begin{equation}
\begin{Vmatrix} i & 0 \\
0 & -i \end{Vmatrix}
\end{equation}\end{verbatim}
\subsection{Arrays}
The {\tt{array}} environment allows you some options for matrix-like equations. You will have to manually key the fences, but there are other options for alignment of the columns and for setting horizontal and vertical rules. The argument to {\tt{array}} controls alignment and placement of vertical rules.
A simple array
\begin{equation}
\left(
\begin{array}{cccc}
a+b+c & uv & x-y & 27\\
a+b & u+v & z & 134
\end{array}\right)
\end{equation}
is coded as:
\begin{verbatim}
\begin{equation}
\left(
\begin{array}{cccc}
a+b+c & uv & x-y & 27\\
a+b & u+v & z & 134
\end{array} \right)
\end{equation}
\end{verbatim}
A slight variation on this to better align the numbers in the last column
\begin{equation}
\left(
\begin{array}{cccr}
a+b+c & uv & x-y & 27\\
a+b & u+v & z & 134
\end{array}\right)
\end{equation}
is coded as:
\begin{verbatim}
\begin{equation}
\left(
\begin{array}{cccr}
a+b+c & uv & x-y & 27\\
a+b & u+v & z & 134
\end{array} \right)
\end{equation}
\end{verbatim}
An array with vertical and horizontal rules
\begin{equation}
\left( \begin{array}{c|c|c|r}
a+b+c & uv & x-y & 27\\ \hline
a+b & u+v & z & 134
\end{array}\right)
\end{equation}
is coded as:
\begin{verbatim}
\begin{equation}
\left(
\begin{array}{c|c|c|r}
a+b+c & uv & x-y & 27\\ \hline
a+b & u+v & z & 134
\end{array} \right)
\end{equation}
\end{verbatim}
Note the argument now has the pipe ``$\vert$'' included to indicate the placement of the vertical rules.
\subsection{Cases Structures}
Cases are often miscoded using the wrong environment, e.g., {\tt{array}}. Using the {\tt{cases}} environment will save keystrokes (from not having to type the $\backslash${\tt{left}}$\backslash${\tt{lbrace}}) and automatically provide the correct column alignment.
\begin{equation*}
{z_m(t)} = \begin{cases}
1,&{\text{if}}\ {\beta }_m(t), \\
{0,}&{\text{otherwise.}}
\end{cases}
\end{equation*}
\noindent is coded as follows:
\begin{verbatim}
\begin{equation*}
{z_m(t)} =
\begin{cases}
1,&{\text{if}}\ {\beta }_m(t),\\
{0,}&{\text{otherwise.}}
\end{cases}
\end{equation*}
\end{verbatim}
\noindent Note that the ``\&'' is used to mark the tabular alignment. This is important to get proper column alignment. Do not use $\backslash${\tt{quad}} or other fixed spaces to try and align the columns. Also, note the use of the $\backslash${\tt{text}} macro for text elements such as ``if'' and ``otherwise.''
\subsection{Function Formatting in Equations}
Often, there is an easy way to properly format most common functions. Use of the $\backslash$ in front of the function name will, in most cases, provide the correct formatting. When this does not work, the following example provides a solution using the $\backslash${\tt{text}} macro:
\begin{equation*}
d_{R}^{KM} = \underset {d_{l}^{KM}} {\text{arg min}} \{ d_{1}^{KM},\ldots,d_{6}^{KM}\}.
\end{equation*}
\noindent is coded as follows:
\begin{verbatim}
\begin{equation*}
d_{R}^{KM} = \underset {d_{l}^{KM}}
{\text{arg min}} \{ d_{1}^{KM},
\ldots,d_{6}^{KM}\}.
\end{equation*}
\end{verbatim}
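\noindent An alternative, assuming the {\tt{amsmath}} package is loaded, is to define a proper operator once in the preamble with $\backslash${\tt{DeclareMathOperator}} and reuse it throughout the article:
\begin{verbatim}
% in the preamble
\DeclareMathOperator*{\argmin}{arg\,min}

% in the body
\begin{equation*}
d_{R}^{KM} = \argmin_{d_{l}^{KM}}
\{ d_{1}^{KM},\ldots,d_{6}^{KM}\}.
\end{equation*}
\end{verbatim}
\noindent The starred form places the subscript beneath the operator in display math, matching the $\backslash${\tt{underset}} output above.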
\subsection{Text Acronyms Inside Equations}
This example shows where the acronym ``MSE'' is coded using $\backslash${\tt{text\{\}}} to match how it appears in the text.
\begin{equation*}
\text{MSE} = \frac {1}{n}\sum _{i=1}^{n}(Y_{i} - \hat {Y_{i}})^{2}
\end{equation*}
\noindent is coded as follows:
\begin{verbatim}
\begin{equation*}
\text{MSE} = \frac {1}{n}\sum _{i=1}^{n}
(Y_{i} - \hat {Y_{i}})^{2}
\end{equation*}
\end{verbatim}
\section{Conclusion}
The conclusion goes here.
\section*{Acknowledgments}
This should be a simple paragraph before the References to thank those individuals and institutions who have supported your work on this article.
\section{Implicit differentiation-based LMEs}
\label{sec:differentiation}
In previous model-based studies, locational marginal emissions rates are derived by first calculating how generation changes with demand, and then multiplying the change in generation by the emissions rate of each generator.
This is a manifestation of the chain rule,
which states that the LMEs are $\Lambda(D) = \nabla E(D) = \sum_{t=1}^T J \tilde g_t^* (D)^T c$,
where $J f(z) \in \mathbf{R}^{k \times n}$ denotes the Jacobian of the function $f : \mathbf{R}^n \rightarrow \mathbf{R}^k$ evaluated at $z \in \mathbf{R}^n$.
Therefore, the main technical challenge when computing $\Lambda(D)$ lies in computing an analytical expression for the Jacobians $J \tilde g_t^* (D)$.
In previous studies that only consider static dispatch models, i.e., $T = 1$, one only needs to derive a single expression for $J \tilde g_1^* (D) \in \mathbf{R}^{k \times n}$. In the general setting, the situation is much more complex---one must derive $T$ Jacobians $J \tilde g_t^*(D)$ of size $k \times Tn$.
Although deriving an analytical expression might be possible, we take a simpler and more powerful approach in this paper: we use the \textit{implicit function theorem} to compute the Jacobians $J \tilde g_t^*(D)$.
Our approach essentially generalizes the analytical derivations of~\cite{Ruiz2010AnalysisNetworks, Rudkevich2012LocationalMaking} to arbitrary convex optimization-based dispatch models, producing identical results in the simpler static setting.
\begin{theorem}[Implicit Function Theorem, \cite{Dontchev2014ImplicitAnalysis}]
\label{thm:implicit}
Suppose $K : \mathbf{R}^n \times \mathbf{R}^r \rightarrow \mathbf{R}^k$ is strictly differentiable at $(D_0, x_0) \in \mathbf{R}^n \times \mathbf{R}^r$ and $K(D_0, x_0) = 0$.
Moreover, suppose $J_x K(D_0, x_0)$ is nonsingular, where $J_z f(z, y) \in \mathbf{R}^{k \times n}$ denotes the partial Jacobian of the function $f : \mathbf{R}^n \times \mathbf{R}^r \rightarrow \mathbf{R}^k$ with respect to $z$ evaluated at $(z, y)$.
Then the solution mapping $x^*(D) = \{ x \in \mathbf{R}^r \mid K(D, x) = 0 \}$ is single-valued in a neighborhood around $(D_0, x_0)$ and strictly differentiable at $(D_0, x_0)$ with Jacobian
$$
J x^* (D_0) = - J_x K(D_0, x_0)^{-1} J_D K(D_0, x_0).
$$
\end{theorem}
The implicit function theorem states that if a differentiable system of equations $K(D, x) = 0$ has a solution at $(D_0, x_0)$, and the corresponding partial Jacobian $J_x K(D_0, x_0)$ is non-singular, then the solution map $x^*(D)$ is a locally well-defined function with Jacobian given by Theorem~\ref{thm:implicit}.
In our setting, the solution map $G^*(D)$ is not the solution to a system of equations, but is rather the solution to a convex optimization problem.
From convex analysis, we know that $G$ solves Problem~\eqref{eq:dyn-opf} if and only if it solves the Karush-Kuhn-Tucker (KKT) equations $K(D, G) = 0$~\cite{Boyd2004ConvexOptimization, Dontchev2014ImplicitAnalysis}.
Therefore, we can apply the implicit function theorem to the KKT equations to compute $J G^*(D)$.
If we assume that the device objectives $f_j$ are twice differentiable and that the device constraints $\mathcal G_j$ can be parametrized via a set of twice differentiable inequalities, then the KKT equations are strictly differentiable and $(D, G^*(D))$ satisfies the conditions of Theorem~\ref{thm:implicit}
(for more details on the implicit function theorem and its applications to optimization, we refer the reader to~\cite{Dontchev2014ImplicitAnalysis} and the references therein).
By combining this with the chain rule, we can then derive marginal emissions rates for the dynamic optimal power flow problem specified by Problem~\eqref{eq:dyn-opf}.
To summarize, we compute LMEs in two steps.
First, we compute the gradient of emissions with respect to the device outputs, i.e., $dE/d\tilde g_t = c$ in the case of linear emissions functions.
Then, we multiply this by the Jacobian $J G^*(D)$, which is computed using the implicit function theorem. The resulting metrics indicate the changes in emissions resulting from a marginal change in demand, as illustrated in Fig.~\ref{fig:illustration}.
Critically, this derivation works for any dynamic dispatch model that fits the form in Problem~\eqref{eq:dyn-opf}, i.e., regardless of the choice of device cost functions $f_j$ and constraint sets $\mathcal G_j$ (as long as they are convex and twice differentiable).
When calculating the LMEs, different choices of device characteristics in Problem~\eqref{eq:dyn-opf} only change the KKT operator $K(D, G)$ and its corresponding partial Jacobians $J_G K(D, G)$ and $J_D K(D, G)$;
however, the general derivation using the implicit function theorem remains unchanged.
In practice, these Jacobians can either be derived analytically or computed using an automatic differentiation library, such as~\cite{Innes2018DontPrograms}, given a programmatic specification $f_j$ and $\mathcal G_j$.
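To make these steps concrete, the following is a minimal, self-contained Julia sketch of the implicit-differentiation computation. It is deliberately simplified (a single-period model with quadratic costs and only the power-balance constraint, so that the KKT conditions reduce to a linear system) and is \textit{not} the interface of our software package; all names and data are illustrative.
\begin{verbatim}
using LinearAlgebra

# Toy single-period model:
#   minimize (1/2) g' * Diagonal(q) * g + b' * g
#   subject to sum(g) == sum(d),
# assuming capacity limits are inactive at the optimum.
# KKT conditions K(d, x) = 0 with x = (g, nu):
#   Diagonal(q) * g + b - nu * ones(k) = 0   (stationarity)
#   sum(g) - sum(d)                    = 0   (power balance)
function lmes(q, b, c, d)
    k, n = length(q), length(d)
    Kx = [Diagonal(q) (-ones(k)); ones(1, k) zeros(1, 1)]  # J_x K
    Kd = [zeros(k, n); -ones(1, n)]                        # J_D K
    Jx = -(Kx \ Kd)            # implicit function theorem
    Jg = Jx[1:k, :]            # Jacobian of the generator outputs
    return Jg' * c             # chain rule: Lambda(d) = (J g*(d))' c
end

# Two generators (cheap/clean vs. expensive/dirty), two nodes.
lmes([1.0, 2.0], [10.0, 11.0], [0.2, 0.9], [1.0, 1.0])
\end{verbatim}
Because this toy model has no transmission constraints, the same LME is obtained at every node; a marginal unit of demand is split between the generators according to their cost curvatures.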
\textit{Remark:}
Since the dynamic OPF problem includes the static OPF problem and the economic dispatch problem as special cases, this work generalizes the derivations in~\cite{Ruiz2010AnalysisNetworks, Rudkevich2012LocationalMaking} and~\cite{Deetjen2019Reduced-OrderSector}.
However, our method is by no means constrained to the dynamic dispatch model used in this paper;
implicit differentiation can be used to derive the marginal emissions rates for any convex-optimization based dispatch model.
Importantly, this includes many of the dispatch models used by system operators in practice, a point we revisit in Section~\ref{sec:conclusion}.
\subsection{Complexity and Software Implementation}
\label{sec:software}
We implement our method in Julia \cite{Bezanson2017Julia:Computing} for all the aforementioned dispatch models and constraints.
Our implementation is publicly available on GitHub in the package \verb_DynamicMarginalEmissions.jl_.
We use \verb_Convex.jl_~\cite{Udell2014ConvexJulia} to model the problem and solve it with external solvers, such as ECOS~\cite{Domahidi2013ECOS:Systems} or Gurobi~\cite{GurobiOptimization2022GurobiManual}.
For the large-scale network discussed in Sections~\ref{sec:nrel} and~\ref{sec:renew} (i.e., $n = 240$ nodes, $k = 136$ generators, $m = 448$ lines) and $T = 1$ time periods, our software package solves the dispatch problem and computes the resulting LMEs in just under a second on a modern laptop with a 2.3~GHz quad-core processor.
For the same problem with $T=24$ time periods, the same machine takes about two minutes to solve the problem and compute the LMEs.
Our software package offers a flexible interface and can be used to analyze networks with different physical characteristics (e.g., locations of generators, transmission line ratings) and constraints (e.g., ramping rates).
After specifying the network parameters, one can compute the LMEs with a single function call.
Because our implementation is open-source, reasonably fast, and easy to use, we believe it is of value to the broader community.
In general, we expect our method to scale well to realistic, large problems encountered by grid operators.
Specifically, let $z = T \cdot \max(m, n, k)$.
Solving the dispatch problem itself requires $O(z^4)$ operations.
Once the dispatch problem is solved, constructing and inverting the Jacobian to compute LMEs only requires $O(z^3)$ operations, which is negligible compared to the complexity of solving the dispatch problem itself.
Since most grid operators must solve dispatch problems at regular intervals, e.g., every 15 minutes to clear the real-time market, computing LMEs can be viewed as a post-processing step requiring little additional compute time.
\section{Problem formulation}
\label{sec:problem}
In this section, we formulate the problem of computing the LMEs in a dynamically-constrained electricity system.
First, Section~\ref{sec:dispatch} provides background information on the \textit{dynamic dispatch problem}, a mathematical model for electricity networks with temporal constraints.
We then describe our mathematical model for emissions and marginal emissions in static networks in Section~\ref{sec:emissions}, where we formally state the dynamic marginal emissions problem.
Next, in Section~\ref{sec:special}, we describe two special cases of the dynamic marginal emissions problem that have been solved in previous work.
Notably, both special cases are static, i.e., they do not incorporate any dynamic constraints.
Finally, Section~\ref{sec:devices} gives three examples of dynamic devices, i.e., devices that cannot be represented in a static model.
\subsection{Dynamic dispatch model}
\label{sec:dispatch}
In electricity systems, a \textit{dispatch model} is a mathematical model for deciding which electricity generators should be used to meet demand.
Dispatch models are often formulated as convex optimization problems, where the variables are the amount of power produced by each generator, and the parameters include both the current electricity demand and the physical constraints on the system.
When modeling emissions, past work has often considered \textit{static} dispatch models, i.e., models that only reflect a single instant in time~\cite{Ruiz2010AnalysisNetworks, Rudkevich2012LocationalMaking, Li2013CarbonObligation, Deetjen2019Reduced-OrderSector}.
However, most real world electricity systems have \textit{dynamic} constraints---constraints that couple generator outputs across time periods.
For example, a generator with ramping constraints can only change its power output by a small amount between successive time periods.
In order to effectively model the impact of temporal constraints on emissions, we will study the \textit{dynamic optimal power flow problem},\footnote{%
The dynamic (DC) optimal power flow problem has been well studied in the power systems community~\cite{Xie2001DynamicMethods, He2019OptimizingApproach} and is sometimes referred to as the dynamic economic dispatch problem.
}
\begin{equation}
\label{eq:dyn-opf}
\begin{array}{lll}
\textrm{minimize}
& \sum_{j=1}^k f_j(g_j) \\[0.5em]
\textrm{subject to}
& g_j \in \mathcal G_j, \quad & j \in [1:k], \\
& \mathbf{1}^T d_t = \mathbf{1}^T \tilde g_t,
& t \in [1:T], \\
& F (d_t - B \tilde g_t) \leq u^{\max},
& t \in [1:T],
\end{array}
\end{equation}
where the variable is $G \in \mathbf{R}^{T \times k}$.
In the above, we use $g_j$ to refer to the $j$-th column of $G$, and $\tilde g_t$ to refer to the $t$-th row of $G$.
The matrix $G$ represents the power output of $k$ devices over $T$ timesteps; each device can supply power, store power, or otherwise interact with the grid.
The entry $G_{tj}$ represents the output of device $j$ at time $t$.
\paragraph{Device Costs and Constraints}
Each device $j$ has three properties: a convex cost function $f_j(g) : \mathbf{R}^T \rightarrow \mathbf{R}$, a convex constraint set $\mathcal G_j \subset \mathbf{R}^T$, and a location on the network.
We model the device locations with a matrix $B \in \{0, 1\}^{n \times k}$ that maps each device to a particular node in the network, i.e., $B_{i j} = 1$ if device $j$ is located at node $i$, and $B_{ij} = 0$ otherwise.
The objective of Problem~\eqref{eq:dyn-opf} is to minimize the sum of the device costs, and the first constraint states that each device must stay within its constraint set.
\paragraph{Network Constraints}
Problem~\eqref{eq:dyn-opf} considers an electricity network with $n$ nodes and $m$ edges (transmission lines).
Node $i \in [1:n]$ has demand $(d_t)_{i}$ at time $t \in [1:T]$.
The second constraint is that power must be balanced across the network, i.e., $\mathbf{1}^T d_t = \mathbf{1}^T \tilde g_t$.
Finally, the third constraint is that the power flowing across each transmission line is limited by its capacity.
We define $F \in \mathbf{R}^{n \times m}$ to be the \textit{power flow distribution factor matrix}, where $F_{i \ell}$ determines how a power injection at node $i$ (withdrawn at node $n$) affects power flow across line $\ell \in [1:m]$.
Because of thermal and voltage phase angle constraints, each line $\ell$ can only transport up to $u^{\max}_{\ell}$ units of power,
modeled with the constraints $F (d_t - B \tilde g_t) \leq u^{\max}$.
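To illustrate how directly Problem~\eqref{eq:dyn-opf} can be transcribed into code, the following rough Convex.jl sketch builds the objective and the three constraint groups with placeholder data. It is not the interface of our package: the device constraint sets $\mathcal G_j$ are reduced to simple box constraints, the matrix of outputs is stored transposed ($k \times T$) for indexing convenience, and the solver call may differ across Convex.jl versions.
\begin{verbatim}
using Convex, ECOS

T, n, k, m = 24, 3, 4, 3            # placeholder dimensions
d = rand(n, T)                      # column t holds the demands d_t
B = [1 0 0 0; 0 1 1 0; 0 0 0 1]     # device-to-node map (n x k)
F = randn(m, n)                     # power flow distribution factors
umax = 10 * ones(m)                 # line capacities
a, b = ones(k), ones(k)             # quadratic cost coefficients

G = Variable(k, T)                  # G[j, t]: output of device j at time t
cost = sum(a[j] * sumsquares(G[j, :]) + b[j] * sum(G[j, :]) for j in 1:k)
constraints = vcat(
    [G >= 0, G <= 5],               # stand-in for g_j in G_j
    [sum(d[:, t]) == sum(G[:, t]) for t in 1:T],           # power balance
    [F * (d[:, t] - B * G[:, t]) <= umax for t in 1:T],    # line limits
)
problem = minimize(cost, constraints)
solve!(problem, ECOS.Optimizer)     # call may vary by Convex.jl version
\end{verbatim}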
\paragraph{Solution Map}
Let $D = (d_1, \ldots, d_T) \in \mathbf{R}^{Tn}$ be the concatenated vector of demand schedules and assume the solution to Problem~\eqref{eq:dyn-opf} exists and is unique for all $D \in \mathbf{R}^{Tn}$.\footnote{%
We make this assumption with little loss of generality.
For example, we can guarantee a solution exists by adding a generator with no capacity limits and extremely high costs to each node (representing curtailed demand).
Similarly, we can guarantee uniqueness by adding a small quadratic penalty $\sum_{t, j} G_{tj}^2$ to the objective.}
Let $G^*(D) : \mathbf{R}^{Tn} \rightarrow \mathbf{R}^{T \times k}$ denote the optimal choice of $G$ in Problem~\eqref{eq:dyn-opf} as a function of $D$.
Because we assume the solution to Problem~\eqref{eq:dyn-opf} exists uniquely for all $D$, $G^*$ is a well-defined function.
We call $G^*$ the \textit{solution map}, and use the vector-valued function $\tilde g_t^*(D) : \mathbf{R}^{Tn} \rightarrow \mathbf{R}^k$ to denote the $t$-th row of $G^*$.
As we will see shortly, the solution map will allow us to formalize the relationship between demand and emissions.
\subsection{Locational marginal emissions}
\label{sec:emissions}
We model the emissions of generator $i$ as a linear function of power output with rate $c_i$,
i.e., the total emissions at time $t$ are $c^T \tilde g_t$.
Since the generator power outputs are determined by the dispatch model, the emissions at time $t$, as a function of demand, are $E_t(D) = c^T \tilde g_t^*(D)$.
The \textit{total emissions} over the entire time horizon are then $E(D) = \sum_t E_t(D)$.
Although we use a linear model throughout the remainder of the paper, it is straightforward to generalize all our results to nonlinear models.
For example, if each generator has a nonlinear emissions function $\gamma_i(g) : \mathbf{R} \rightarrow \mathbf{R}$, then the total emissions at time $t$ are $E_t(D) = \sum_i \gamma_i( (\tilde g_t)_i )$.
\paragraph{Problem statement}
The LMEs $\Lambda(D) : \mathbf{R}^{Tn} \rightarrow \mathbf{R}^{Tn}$ are the marginal rates of change in total emissions given a marginal change in demand at a specific node and a specific time.
In other words, the LMEs are the gradient of emissions with respect to demand, i.e., $\Lambda(D) = \nabla E(D)$.
The function $\Lambda(D)$ is vector-valued, since changes in electricity consumption at different nodes and different times may have different impacts on emissions.
As an illustration, we compare total emissions and LMEs for different values of demand at a given node in an arbitrary network (see Fig.~\ref{fig:illustration}). Locally, LMEs do indeed provide good approximations to the change in total emissions. However, these metrics are only locally valid and can be ill-defined, e.g., at points where total emissions are not differentiable.
The problem we study in this paper is how to compute $\Lambda(D)$ when the solution maps $\tilde g_t^*(D)$ are determined by the dynamic optimal power flow problem. As far as we are aware, no prior published results have shown how to compute LMEs for generic dynamic dispatch models.
\begin{figure}[t]
\centering
\includegraphics[width=3in]{Figures/mef_illustration.pdf}
\caption{
Illustration of marginal emissions rates.
(Solid blue curve) Total emissions as a function of demand at a particular node.
(Dashed red curves)
The first order approximations defined by the LMEs calculated at each red circle.
The LMEs are the slopes of the dashed red curves.
}
\label{fig:illustration}
\end{figure}
\subsection{Special Case: Static Generators}
\label{sec:special}
When we restrict the devices (the functions $f_j$ and sets $\mathcal G_j$) to be static generators, we recover previous analytical models~\cite{Conejo2005LocationalSensitivities,Ruiz2010AnalysisNetworks, Rudkevich2012LocationalMaking}.
The static generator device has constraint set $\mathcal G = \{ g \mid g^{\min} \leq g \leq g^{\max} \}$, where $g^{\min}, g^{\max} \in \mathbf{R}^T$, and cost function $f(g) = \sum_t a g_t^2 + b g_t$.
The static generator could represent a traditional dispatchable generator, in which case $g^{\max} = \alpha \mathbf{1}$ for some constant $\alpha > 0$, or a renewable resource, in which case the entries of $g^{\max}$ may vary.
Most importantly, the static generator has no temporal constraints: $g_t$ is independent of the choice of $g_{t'}$ when $t \neq t'$.
In a network with only static generator devices, the dynamic problem simplifies to $T$ \textit{static} optimal power flow problems that can be solved independently.
Moreover, if we remove the network constraints by setting $F = 0$, we recover the model used in~\cite{Deetjen2019Reduced-OrderSector} to empirically estimate emissions rates.
\subsection{Dynamic Devices}
\label{sec:devices}
By addressing the dynamic optimal power flow problem in its full generality, our framework allows us to consider dynamic devices as well.
These devices have temporal constraints, implying their actions at any given time depend on their actions at other times.
We give three examples below.
\paragraph{Ramp-constrained generators}
Ramping constraints limit the rate at which a generator can change its output.
These generators are modeled with the constraint set,
\begin{align*}
\mathcal G = \{ g \mid {}& g^{\min} \leq g \leq g^{\max}, \\
& g_t - \rho \leq g_{t+1} \leq g_t + \rho,\ t \in [1:T-1] \}
\end{align*}
where $\rho > 0$ is the generator's ramping rate.
Ramp-constrained generator devices have the same cost functions as static generator devices, $f(g) = \sum_t a g_t^2 + b g_t$.
Ramping constraints are particularly useful in dynamic dispatch models with short time intervals, e.g., 15 minutes, and slow-ramping generators, like nuclear and hydro.
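In the same Convex.jl style as the sketch in Section~\ref{sec:dispatch}, the ramping constraints for a single device amount to two comprehensions (all values below are placeholders):
\begin{verbatim}
using Convex

T = 24
g = Variable(T)                     # one device's output over T periods
rho = 1.0                           # placeholder ramping rate
ramp = vcat(
    [g[t + 1] <= g[t] + rho for t in 1:(T - 1)],
    [g[t] - rho <= g[t + 1] for t in 1:(T - 1)],
)
\end{verbatim}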
\paragraph{Storage devices}
\label{sec:battery}
Storage devices include pump-hydro resources, grid-scale batteries, and DER aggregators selling storage services to the grid.
We define a storage device to have cost function $f(g) = 0$ and constraint set $\mathcal G$ that is the set of $g \in \mathbf{R}^T$ such that there exist $s, \gamma, \delta \in \mathbf{R}^T$ satisfying,
\begin{gather*}
0 \leq s \leq C, \qquad
0 \leq \gamma \leq P, \qquad
0 \leq \delta \leq P, \\
g_t = \delta_t - \gamma_t, \qquad
s_t = s_{t-1} + \eta \gamma_t - (1/\eta) \delta_t,
\end{gather*}
for $t \in [1:T]$, where $s_0 = 0$.
The scalar $C \in \mathbf{R}$ is the storage capacity, $P \in \mathbf{R}$ is the maximum (dis)charge rate, and $\eta \in (0, 1]$ is the storage efficiency.
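For concreteness, the storage constraint set can be transcribed in the same Convex.jl style (again a sketch with placeholder parameters; the initial condition $s_0 = 0$ is written as a separate first-step constraint):
\begin{verbatim}
using Convex

T = 24
C, P, eta = 10.0, 5.0, 0.95         # capacity, power limit, efficiency
g = Variable(T)                     # net output of the storage device
s = Variable(T)                     # state of charge
gam = Variable(T)                   # charging power
del = Variable(T)                   # discharging power
storage = vcat(
    [s >= 0, s <= C, gam >= 0, gam <= P, del >= 0, del <= P],
    [g == del - gam],
    [s[1] == eta * gam[1] - (1 / eta) * del[1]],       # s_0 = 0
    [s[t] == s[t - 1] + eta * gam[t] - (1 / eta) * del[t] for t in 2:T],
)
\end{verbatim}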
\paragraph{Unit-commitment-constrained generators}
Unit-commitment constraints are integer constraints specifying whether or not a power generator is on.
If generator $i$ is on, it must produce a minimum power $g^{\min}_i$ and stay on for a specified period of time.
We model this by modifying the generator device constraint set to be $\mathcal G = \mathcal G_{\textrm{static}} \cup \{ 0 \}$, where $\mathcal G_{\textrm{static}}$ is the constraint set of the equivalent static generator.
Although the set $\mathcal G$ is not convex, it can be made mixed-integer convex by introducing an integer variable $z \in \{0, 1\}$.
Although the mixed-integer constraint makes~\eqref{eq:dyn-opf} NP-hard, many good heuristics for solving mixed-integer convex programs are readily available through commercial solvers such as Gurobi~\cite{GurobiOptimization2022GurobiManual}.
\section{Simulation Results} \label{sec:experiments}
In this section, we illustrate the applicability and utility of the suggested approach using two different simulation setups. First, in Section~\ref{sec:wecc-experiment}, we compute the LMEs for a static model and a dynamic model with unit-commitment constraints using real demand and generator data from the U.S.\ Western Interconnection~\cite{Deetjen2019Reduced-OrderSector}.
We compare each model's LMEs to real changes in emissions and to estimates from a merit-order-based model~\cite{Deetjen2019Reduced-OrderSector}.
Second,
in Sections~\ref{sec:nrel}, \ref{sec:renew}, and~\ref{sec:static_vs_dynamic},
we illustrate the methodology on a recent reduced network model of the same region~\cite{Yuan2020DevelopingIntegration}.
Using the original dataset, we highlight the geographic variance of marginal emissions across the network.
Then, we investigate the potential impacts of a hypothetical large-scale expansion of storage and renewable generation.
We conclude by comparing the LMEs obtained from a static approximation to those of the dynamic model.
\subsection{Economic Dispatch in the Western United States}
\label{sec:wecc-experiment}
In our first experiment, we analyze electricity data from the U.S.\ Western Interconnection system in 2016.
The Western Interconnection dataset is compiled in \cite{Deetjen2019Reduced-OrderSector} and contains weekly generator capacities, generator costs, and generator emission rates for large (above 25 MW capacity) fossil fuel generators in the Western Interconnection, as well as hourly total demand and total carbon emissions.
Because no transmission data is available, we consider models without transmission constraints.
The LMEs for a static model, a dynamic model with unit commitment constraints, and a state-of-the-art static merit-order method are compared to the real hourly rate of change in emissions.
\paragraph{Models}
We analyze two models, which we compare to a baseline.
First, we analyze the results of the simple economic dispatch model \eqref{eq:dyn-opf}, with linear costs $f_i(g_i) = b_i g_i$, where $b_i$ is the marginal operation cost of generator $i$.
Second, we analyze a dynamic economic dispatch model with unit commitment constraints, over a time horizon of $T = 24$.
The unit-commitment constraints are only applied to coal generators, all of which are given a minimum output of $g^{\min}_i = 0.4 g^{\max}_i$.
We benchmark our results against the \textit{reduced-order dispatch model (RODM)} described in~\cite{Deetjen2019Reduced-OrderSector}.
The core of the RODM is a \textit{merit order}-based dispatch process: generators are dispatched in ascending order of cost.
After dispatching generators via the merit-order, the \textit{marginal generator}---the generator next in line to modify its output to meet an increase in demand---is identified to find the marginal emissions rate of the system.
In~\cite{Deetjen2019Reduced-OrderSector}, post-processing steps are applied to generate the marginal emission rates.
Notably, when no post-processing is applied, the RODM is identical to the economic dispatch model in \eqref{eq:dyn-opf} with linear costs $f_i(g_i) = b_i g_i$.
\paragraph{Results}
After generating LMEs $\lambda_t$ for every hour $t = 1, \ldots, T$ of the year, where $T = 8760$, we compare the resulting LMEs to the actual hourly changes in emissions.
Specifically, we compute the
change in demand $\Delta d_t$ and change in emissions $\Delta E_t$ for every hour of the year.
Each model's estimated change in emissions is given by $\Delta \hat{E}_t = \lambda_t \Delta d_t$.
In order to compare the
models, we compute the absolute error
$|\Delta \hat{E}_t - \Delta E_t| / Z$
of each model's estimate against the true hourly change in emissions at each timepoint, where errors are normalized by the mean absolute change in emissions $Z = (1/T) \sum_{t=1}^T |\Delta E_t|$.
We use absolute error, instead of square error, to minimize the effect of outliers.
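In code, this evaluation amounts to a few array operations; the sketch below substitutes synthetic placeholder data for the actual hourly series:
\begin{verbatim}
using Statistics

lambda = fill(0.4, 8760)            # model LMEs, one per hour (placeholder)
dd = randn(8760)                    # hourly changes in demand (placeholder)
dE = randn(8760)                    # hourly changes in emissions (placeholder)

dE_hat = lambda .* dd               # model estimate of each hourly change
Z = mean(abs.(dE))                  # mean absolute change in emissions
err = abs.(dE_hat .- dE) ./ Z       # normalized absolute error per hour
\end{verbatim}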
\begin{figure*}
\centering
\includegraphics[width=6.5in]{Figures/makie_mefs_rodm_figure}
\caption{
(Panel A) Emissions error for model from \cite{Deetjen2019Reduced-OrderSector} (DA), economic dispatch model (ED), and unit-commitment model (UC), normalized by the mean absolute change in emissions.
The ED model, effectively the same as the DA model without post-processing, performs similarly to DA.
The UC model reduces the error by 8.2\% compared to the DA model, suggesting unit-commitment more accurately represents the real world dispatch process.
(Panel B) LMEs as a function of total demand.
Emissions rates are smoothed using the mean of a rolling window of 20\% of the data.
Shaded regions represent the interquartile range (IQR), i.e., the middle 50\% of the data, of the rolling window.
}
\label{fig:wecc}
\end{figure*}
A violin plot of absolute errors is displayed in Figure~\ref{fig:wecc}, Panel~A.
As expected, the economic dispatch model and the merit-order model from \cite{Deetjen2019Reduced-OrderSector} perform similarly---the merit-order model only differs from the economic dispatch model in its post-processing.
Notably, the unit-commitment model
better models hourly changes in emissions than
both the economic dispatch model and the merit-order model,
reducing the mean absolute error by 8.2\%.
We attribute this to the fact that the unit-commitment model accurately represents dynamic constraints that appear in real-world dispatch processes, namely that coal generators cannot rapidly turn on and off again.
LMEs as a function of demand are also reported in Panel~B of Figure~\ref{fig:wecc}.
Historical LMEs are computed as $\lambda_t = \Delta E_t / (\Delta d_t + \epsilon)$, where $\epsilon = 0.5\ \text{MWh}$ is a small value used to avoid unreasonably large LMEs when $\Delta d_t$ is small.
Following a similar procedure to \cite{Deetjen2019Reduced-OrderSector}, the LMEs for the data and for each model are smoothed using the mean of a rolling window of 20\% of the data.
Shaded regions representing the interquartile range (IQR) of the data are also plotted to better understand the variance of each model.
After averaging, the LMEs produced by the economic dispatch model most closely resemble the data.
However, the variation is significantly reduced in the unit-commitment model, and the IQR most closely resembles that of the data compared to both other models.
\subsection{240-bus Network Model}
\label{sec:nrel}
In this experiment, we study LMEs using a recent 240-bus network that models the Western United States in 2018~\cite{Yuan2020DevelopingIntegration}.
The dataset includes generator capacities, costs, and ramping rates; hourly demand and renewable generation for $T = 8760$ hours; and the efficiencies and capacities of four pumped hydro storage units.
We solve the dispatch problem and compute hourly LMEs using a dynamic model with storage devices as described in Section~\ref{sec:problem}.
We use this experiment to demonstrate the impact of network constraints on the LMEs.
Since the network has relatively little storage and only a few generators with ramping constraints, we do not analyze the impact of storage and dynamic constraints in this experiment.
\begin{figure*}
\centering \includegraphics[width=6.5in]{Figures/fig_240_map.pdf}
\caption{
(Panel~A) Distribution (over days of the year) of network-wide, total daily emissions for both the 2018 case and the high renewable scenario.
The mean of the distribution is denoted with a horizontal black line.
(Panel~B) Distribution (over nodes and days of the month) of LMEs during August at 6pm, both for the 2018 case and the high renewable scenario.
The mean of the distribution is again denoted with a horizontal black line.
(Panel~C) A map of nodal LMEs through the 240-bus WECC network during the same time period (averaged over the month).
(Panel~D) Same as Panel~C, but for the high renewable scenario with 15~GW of 4-hour storage.
}
\label{fig:240-map}
\end{figure*}
We report the distribution of daily total emissions in Figure~\ref{fig:240-map}, Panel~A (left frame), and the distribution of LMEs at 6pm in August in Panel~B (left frame).
We observe that, on average, the distribution of nodal LMEs is narrowly concentrated around its mode.
However, we also note that the distribution of LMEs has relatively long tails;
although most of the LMEs are close to the mode, a few have drastically different values.
In Panel~C, we display a map of LMEs at 6pm in August (averaged over days of the month), illustrating the large geographic variation in emissions rates.
Panel~C demonstrates how transmission constraints can create large discrepancies in LMEs even within small geographic regions.
For example, despite their geographic proximity, LMEs in the California Bay Area differ significantly from those in the Sacramento area.
The local diversity of the LMEs emphasizes the importance of modeling the network when computing emissions rates: a small geographic change can cause large differences in emissions rates.
\subsection{High Renewable Scenario in the 240-bus Network}
\label{sec:renew}
To illustrate the impact of grid-level changes on emissions, we present a high-renewable version of the 240-bus network in this section.
Specifically, we uniformly scale renewable generation in the original 2018 model so that renewable generators meet 27\% of total demand (compared to 13.5\% originally).
We also add 15~GW of 4-hour battery storage, distributed among the ten largest renewable nodes in proportion to generation capacity.
These batteries have a round trip efficiency of $89.8\%$ with symmetric charging and discharging efficiencies and are constrained to start and end each day with $50\%$ of their total capacity.
As in Section~\ref{sec:problem}, we assume the grid operator manages these batteries to minimize the overall cost of electricity.
The right frames of Panels~A and~B in Figure~\ref{fig:240-map} show the distribution of total emissions and LMEs in an identical manner to the 2018 case.
Similarly, Panel~D displays a map of average nodal LMEs akin to Panel~C.
The 2018 and the high renewable scenarios differ in several ways.
First, as expected, \textit{total} emissions decrease significantly in the high renewable scenario.
The changes to the locational \textit{marginal} emissions rates, on the other hand, vary significantly from node to node.
For example, adding renewable generation and storage causes LMEs to decrease at nodes in southern California, but to increase at nodes in Oregon and Washington.
In general, the distribution of nodal LMEs exhibits high variance and displays two modes, in contrast with the 2018 case.\footnote{We observe these differences for most hours of the day, but only display results for 6pm for concision.}
Overall, the changes in LMEs are complex and unintuitive: because of the presence of storage, nodal LMEs depend not only on non-local generation and transmission constraints, but also on those of every other time period. We believe this complexity is one reason grid operators should use an exact method for calculating LMEs (instead of relying, for example, on network-wide heuristics).
\subsection{Comparison Between Static and Dynamic LMEs}\label{sec:static_vs_dynamic}
In order to demonstrate the value of explicitly integrating dynamic constraints, we compare the true dynamic LMEs to the analogous ``static LMEs'' that arise from eliminating dynamic constraints.
Specifically, we consider how the LMEs would differ between static and dynamic models with the exact same loads and device outputs.
Since static models cannot incorporate dynamic devices, we first solve the dynamic dispatch problem, then fix the battery charging and discharging schedules and consider them as parameters to a series of independent static problems, eliminating any dynamic constraints between subsequent timesteps.
We then compute the LMEs of the resulting model, which now only has static devices and constraints.
More formally, consider the dynamic optimal power flow problem in~\eqref{eq:dyn-opf}, with the devices ordered so that the first $k_1$ devices are static and the remaining $k_2$ devices are dynamic.
After solving the dynamic problem with $k = k_1 + k_2$ devices to obtain device outputs $ G^{*}$ and LMEs $\Lambda^*$, we solve the static problem,
\begin{equation}
\label{eq:static-opf}
\begin{array}{lll}
\textrm{minimize}
& \sum_{j=1}^{k_1} f_j(g_j) \\[0.5em]
\textrm{subject to}
& g_j \in \mathcal G_j, \quad & j \in [1:k_1], \\
& g_j = g_j^*, \quad & j \in [k_1+1:k], \\
& \mathbf{1}^T d_t = \mathbf{1}^T \tilde g_t,
& t \in [1:T], \\
& F (d_t - B \tilde g_t) \leq u^{\max},
& t \in [1:T],
\end{array}
\end{equation}
where the variable is again $G \in \mathbf{R}^{T \times k}$.
Since the schedules of the $k_2$ dynamic devices are fixed, $\tilde g_t$ is independent of $\tilde g_{t'}$ for $t \neq t'$, and Problem~\eqref{eq:static-opf} can be decomposed into $T$ independent optimization problems, if desired.
We compute the resulting `static' LMEs $\Lambda^{\textrm{static}}$ from solving Problem~\eqref{eq:static-opf}, and compare them to $\Lambda^*$.
In theory, the difference between the LMEs of a dynamic model and its static approximation can be arbitrarily large, as seen in the following example.
\begin{figure*}[!t]
\centering
\includegraphics[width=6.5in]{Figures/fig_240_error.pdf}
\caption{
(Panel A) Distribution of root mean squared (RMS) deviation between nodal LMEs produced by the static model and the dynamic model, normalized by the median LME.
The average RMS deviation between the static and dynamic model is 28.40\%.
(Panel B)
Hourly time series of static LMEs (blue) and dynamic LMEs (yellow) during three sample days.
}
\label{fig:240-error}
\end{figure*}
\textit{Example:}
Consider a single node network with $T = 2$ timesteps and $k=3$ devices.
The first device is a gas generator with constraint set $\mathcal G_1 = [0, 10] \times [0, 10]$ (i.e., the generator has capacity 10 in both time periods), linear cost $f_1(g) = \mathbf{1}^T g$, and emissions rate $c_1 = 500$.
The second device is a solar panel with constraint set $\mathcal G_2 = [0, 10] \times \{0 \}$ (i.e., the generator has capacity 10 in the first period, and no capacity in the second period), linear cost $f_2(g) = 0.1 \cdot \mathbf{1}^T g$, and emissions rate $c_2 = 0$.
Finally, the third device is a battery with constraint set $\mathcal G_3$ specified by Section~\ref{sec:battery}, with capacity $C = 10$, charging rate $P = 10$, and efficiency $\eta = 1$.
Assume a constant demand schedule $d = (1, 1)$.
The economic dispatch will result in the following device outputs: $g_1 = (0, 0)$, $g_2 = (2, 0)$, and $g_3 = (-1, 1)$, i.e., the solar panel will produce two units of power, storing some of it in the battery to serve the second time period, and curtail the remaining 8 units.
The dynamic LMEs are naturally $\Lambda(D) = (0, 0)$: if we had slightly higher demand in either period, the solar panel would curtail its output less to meet demand.
Now, if we fix the battery charging schedule to $g_3 = (-1, 1)$, solving the static problem gives the same device outputs $g_1 = (0, 0)$ and $g_2 = (2, 0)$.
However, the resulting static LMEs are $\Lambda(D) = (0, 500)$, a drastically different result.
This is because the static approximation fixes the battery schedule, so changes in demand during period two are met by a change in the gas generator output.
The toy example above demonstrates that dynamic and static LMEs can differ significantly in theory.
We verify that this occurs in practice using the 240-bus network from Section~\ref{sec:renew}, where we use the same procedure to compute dynamic LMEs and their static approximations.
We report these differences in Figure~\ref{fig:240-error}, Panel A, where we display the distribution across all nodes and all days of the year of the root mean squared (RMS) deviation between the vector of daily emissions rates for the static model, $\lambda_{\textrm{static}} \in \mathbf{R}^{24}$, and the dynamic model $\lambda_{\textrm{dynamic}} \in \mathbf{R}^{24}$.
The average RMS deviation (normalized by the median LME) is 28.40\%, indicating that the static and dynamic models yield significantly different results.
In Panel B, we illustrate the static and dynamic LMEs for three randomly sampled days.
While static LMEs are very good approximations in some instances (e.g., morning hours in top-left), they deviate significantly in others.
These results suggest that ignoring dynamic constraints and simply computing static LMEs is not sufficient to model emissions in dynamic networks: explicitly computing dynamic LMEs is essential to understanding emissions rates in systems with significant dynamic constraints, such as large grid storage capacity.
\section{Introduction}
\label{sec:intro}
Policy-makers interested in decarbonizing the electricity grid require reliable emissions data in order to quantify the impact of a particular policy or intervention strategy.
Similarly, grid operators conducting generation and transmission expansion studies~\cite{Mo1991StochasticProgramming, Alguacil2003TransmissionApproach, Garcia-Bertrand2017DynamicPlanning, Majidi-Qadikolai2018Optimization-basedStudies} are increasingly looking to reduce emissions~\cite{Park2015StochasticEmissions, Bent2012GridReduction, Asgharian2017APlanning, Moreira2021Climate-AwareApproach, Degleris2021Emissions-awareDifferentiation} and require detailed information on the relationship between demand and emissions.
The need for this data will only grow as various systems, such as transportation and heating, begin to electrify and place additional strain on the grid.
Unfortunately, electricity systems depend on complex, interconnected transmission network structures with numerous operational constraints, making it difficult to attribute emissions to energy consumption at a particular time and place~\cite{deChalendar2019TrackingSystem}.
\textit{Emissions rates} are important metrics that quantify the amount of pollutant emissions, e.g., $\text{CO}_2$, $\text{SO}_2$, or $\text{NO}_\text{x}$, attributable to the consumption of energy.
Researchers and decision-makers often examine \textit{average emissions rates} \cite{
Kirschen1997ContributionsFlows,
Yu2019ResearchFlow,
Kang2012CarbonNetworks,
Kang2015CarbonModel,
Li2013CarbonObligation
}
which measure the average rate of emissions per MWh of energy consumed, and
\textit{marginal emissions rates}~\cite{Hawkes2010EstimatingSystems, SchramOnFactors, MandelWattTimePrimer, McCormickMarginalWatttime, CorradiAAction} (also known as marginal emission factors or marginal emission intensities), which measure the rate at which emissions increase or decrease given a marginal change in energy demand. While average emissions rates quantify emissions over long periods, marginal emissions rates better quantify the emissions impacts of small, local changes in demand, since only a few specific generators are expected to change production in response.
This response to marginal changes can be estimated either at the network level---quantifying the aggregate sensitivity across many nodes in the network---or at specific locations.
Indeed, these metrics vary on a node-by-node basis, as network constraints and the local energy mix dictate which generators are available to compensate for changes in demand.
Hence, \textit{locational} marginal emissions rates (LMEs)~\cite{Callaway2017LocationResources,Rudkevich2012LocationalMaking} quantify the emission sensitivity at the nodal level, revealing the spatial heterogeneity in marginal emissions rates that emerges from network constraints.
LMEs are the emissions-equivalent of locational marginal prices, which have been studied extensively in the power systems community due to their importance to electricity markets~\cite{Orfanogianni2007AEvaluation, Ji2017ProbabilisticCongestion, Bo2009ProbabilisticUncertainty}.
LMEs have been used to quantify the impacts of various policies on carbon emissions, e.g., increasing electric vehicle penetration~\cite{Tong2020WhatStates, Jochem2015Assessing2030} and changing electric vehicle charging policies~\cite{Pavic2015RoleSystems}, and are published live in the PJM Interconnection~\cite{FiveRates}, a major U.S.\ system operator.
LMEs have also been used in transmission expansion studies~\cite{Sun2017AnalysisAccounting, SaumaEnzo2018GlobalExpansion, Degleris2021Emissions-awareDifferentiation}.
In this application, the LMEs define the marginal emissions effect of offsetting demand at a particular node and can be viewed as the gradient of emissions with respect to net load in the planning optimization problem~\cite{Degleris2021Emissions-awareDifferentiation}.
Empirical studies on marginal emissions rates in the U.S.\ and U.K.\ have used \textit{regression-based approaches} to estimate emissions rates across large geographical regions~\cite{Hawkes2010EstimatingSystems, Siler-Evans2012MarginalSystem, GraffZivin2014SpatialPolicies, Callaway2017LocationResources}.
These works leverage publicly available data and fit linear models to changes in emissions and net demand.
The main benefit of these methods is that they do not require a model of the underlying electricity system.
However, because of their inherent data requirements, these methods are difficult to extend to finer geographic resolutions and hypothetical electricity systems that lack preexisting data.
In contrast, \textit{model-based approaches} explicitly calculate LMEs using the underlying dispatch model.
This calculation has been performed using marginal generator identification in merit-order models~\cite{Deetjen2019Reduced-OrderSector,Zheng2016AssessmentConstraints}, LMP-based approximations~\cite{Carter2011ModelingDelivery,Rogers2013EvaluationEmissions,
Wang2014LocationalDistribution
}, or, in simple cases, explicit formulae~\cite{Conejo2005LocationalSensitivities,Ruiz2010AnalysisNetworks, Rudkevich2012LocationalMaking}.
Model-based methods are promising because dispatch in real-world electricity systems often involves solving an optimization problem.
However, the models used in real-world systems are highly complex, limiting the applicability of the specific derivations in the aforementioned studies and highlighting the need to incorporate model-specific constraints, e.g., ramping limits or storage, when calculating LMEs~\cite{Zheng2016AssessmentConstraints, Gil2007GeneralizedGeneration}.
For example, merit-order-based models usually neglect the impact of network constraints and only focus on generation cost to identify the marginal generator.
LMP-based methods, which rely on matching LMPs with generation cost in order to identify the marginal generator, are not exact~\cite{Rogers2013EvaluationEmissions, Wang2014LocationalDistribution}, and the presence of complex coupling constraints would likely make identification even harder.
On the other hand, analytical derivations, while exact, have so far only been conducted on dispatch models with a limited number of constraint types, e.g., transmission constraints.
As far as we are aware, no previous studies have exactly accounted for time dependencies, such as energy storage, when computing LMEs.
In this work, we address this limitation by developing a method that supports arbitrary convex optimization-based dispatch models.
\subsection*{Contribution and outline}
This paper makes three main contributions:
\begin{itemize}
\item
We propose \textit{a new method to compute LMEs} in arbitrary convex optimization-based dispatch models.
This is in contrast to standard regression-based methods that may have significant data requirements and previous model-based methods that have been derived for specific dispatch models.
The method we propose generalizes previous analytical derivations~\cite{Ruiz2010AnalysisNetworks,Rudkevich2012LocationalMaking} and is generic, flexible, and suitable for real systems dispatched by grid operators.
\item
We use the proposed method to \textit{derive LMEs in networks with dynamic constraints}, such as energy storage.
As far as we are aware, this is the first method to calculate LMEs for dynamic network dispatch models, such as the standard dynamic economic dispatch problem.
\item
We \textit{demonstrate the utility of computing LMEs in dynamic models} using two different experimental setups.
First, using a published dataset~\cite{Deetjen2019Reduced-OrderSector}, we show
that dynamic models more accurately represent the real-world relationship between demand and emissions compared to their static counterparts.
Second, we use our method to study the impact of dynamic constraints on emissions rates using a realistic model of the Western United States transmission system.
The dynamic LMEs are distinct from their static counterparts, demonstrating the importance of accurately including all relevant dynamic constraints when estimating emissions sensitivities.
\end{itemize}
The paper is structured as follows. In Section~\ref{sec:problem}, we introduce the problem of computing LMEs in dynamic electricity networks.
We show that this problem generalizes previous approaches~\cite{Ruiz2010AnalysisNetworks, Rudkevich2012LocationalMaking} to complex models with temporal constraints.
We then show how to solve this problem using implicit differentiation in Section~\ref{sec:differentiation}.
Although we use this technique to compute LMEs for the model specified in Section~\ref{sec:problem}, our technique generalizes to arbitrary convex optimization-based dispatch models, including those used by system operators in real world electricity markets.
Lastly, we report simulation results on two datasets in Section~\ref{sec:experiments}.
In the first experiment, we demonstrate the validity of our approach on real US electricity data and compare our results with an established method~\cite{Deetjen2019Reduced-OrderSector}.
In particular, we show that a dynamic model with unit-commitment constraints more accurately models changes in emissions compared to its static counterpart.
Second, we analyze a 240-bus model of the Western United States and show that, in the presence of grid-scale storage, computing LMEs dynamically is essential to accurately quantifying changes in emissions.
We conclude and discuss future work in Section~\ref{sec:conclusion}.
\section{OLD SECTION}\label{sec:framework}
\subsection{Dynamic dispatch models}\label{sec:dyn_model}
\begin{comment}
\section{Old}
In this section, we describe our model for the energy market and emissions.
Our model for the energy market---namely, a DC optimal power flow model \cite{?LecNote}---is commonly used in the literature and in practice.
Moreover, our results are all easy to extend to more detailed models, e.g.\ an AC optimal power flow model.
Finally, we conclude the section by briefly explaining how to compute the sensitivities of emissions with respect to network parameters and inputs.
This is accomplished by differentiating the solution mapping via the implicit function theorem.
\paragraph{Energy market.}
We consider a simple DC optimal power flow model for the energy market.
Consider a network with $n$ nodes, $m$ edges, and $\ell$ generators, described by the incidence matrix $A \in \mathbf{R}^{n \times m}$ and a node-generator map $B \in \mathbf{R}^{n \times \ell}$.
The variables of the optimization problem are the generations of each generator, $g \in \mathbf{R}^\ell$, the power flows across each edge, $p \in \mathbf{R}^m$, and the voltage phase angles $v \in \mathbf{R}^n$.
The generation at generator $i$ is limited by its capacity, $0 \leq g_i \leq g^{\max}_i$, and has cost $c^q_i g_i^2 + c^\ell_i g_i$.
Each power flow $p_j$ has a capacity constraint $|p_j| \leq p^{\max}_j$ and is determined by the difference of phase angles, i.e.\ $p = \mathbf{diag}(\gamma) A^T v$, where $\gamma_j \in \mathbf{R}_+$ is the susceptance of line $j$.
Finally, each node $i$ has energy demand $d_i \in \mathbf{R}$, and the network must satisfy power balance, $B g - d = Ap$.
In total, the \textit{optimal power flow} problem is,
\begin{equation} \label{eq:opf}
\begin{array}{ll}
\text{minimize} & g^T \mathbf{diag}(c^q) g + g^T c^\ell \\
\text{subject to}
& 0 \leq g \leq g^{\max}, \\
& |p| \leq p^{\max}, \\
& p = \mathbf{diag}(\gamma) A^T v, \\
& B g - d = Ap,
\end{array}
\end{equation}
where the variables are $g, v \in \mathbf{R}^n$ and $p \in \mathbf{R}^m$.
Here, all inequalities are to be read elementwise.
\paragraph{Emissions.}
For the purpose of simplicity, we model emissions as a linear function of generation.
Specifically, each generator emits $b_i \in \mathbf{R}_+$ units per unit of generation, and the total emissions are $E(g) = b^T g$.
\subsection{Computing sensitivities} \label{sec:sensitivity}
The \textit{sensitivity} of a function $f(x) : \mathbf{R}^n \rightarrow \mathbf{R}$ is given by its gradient $\nabla f(x)$, which tells us how $f(x)$ changes given a small change in $x$.
Computing the sensitivity of emissions $E(g)$ to generation is easy, since $\nabla E(g) = b$.
However, generation is indirectly determined by the parameters of \eqref{eq:opf}, like demand $d$, prices $f^q, f^\ell$, and capacities $p^{\max}$.
Computing the sensitivity of emissions to $d$, for example, is more difficult, since it's not immediately clear how $d$ affects $g$.
\paragraph{Solution mapping.}
Let $\theta = (d, f^q, f^\ell, g^{\max}, p^{\max}, \gamma) \in \mathbf{R}^{k}$ be the \textit{parameters} or \textit{inputs} of \eqref{eq:opf}.
Then let $g^*(\theta) : \mathbf{R}^k \rightarrow \mathbf{R}^n$ be the \textit{solution mapping} that maps $\theta$ to the optimal value of $g$ in \eqref{eq:opf} when the problem data is set to $\theta$.
Finally, define $\tilde E(\theta) = E(g^*(\theta))$ to be emissions as a function of the parameters $\theta$.
Assuming $g^*$ is well-defined and differentiable, computing $\nabla \tilde E(\theta)$ gives us a rigorous approach to finding the sensitivity of emissions to various problem parameters.
It is often useful to let $\theta$ be a subset of the problem data, e.g.\ $\theta = d$, and consider all other problem data fixed.
We will therefore sometimes abuse notation and write $g^*(d)$ or $g^*(p^{\max})$, for example, to signal to the reader that $\theta = d$ or $\theta = p^{\max}$, respectively, and all other problem data are fixed.
\paragraph{Implicit differentiation.}
The chain rule tells us that $\nabla \tilde E(\theta) = \partial g^*(\theta)^T b$.
The main technical challenge is computing the Jacobian $\partial g^*(\theta)$, if it even exists.
It turns out that we can do this assuming $g^*$ is well-defined (i.e.\ the solution to \eqref{eq:opf} is unique) in some local neighborhood around $\theta$.
In particular, we view the solution of \eqref{eq:opf} as the solution to a system of equations $K(x, \theta) = 0$ (namely, the KKT conditions), then invoke the \textit{implicit function theorem} to compute the Jacobian matrix $\partial g^*(\theta) \in \mathbf{R}^{n \times k}$.
Computing said Jacobian is also relatively efficient---after solving \eqref{eq:opf}, which is already required to compute $g^*(\theta)$, the Jacobian is easy and fast to evaluate.
For completeness, we include additional details in the appendix.
\section{Marginal emission factors} \label{sec:mefs}
In this section, we describe an important metric in emissions analysis---marginal emission factors (MEFs).
We first review the standard method for computing MEFs, then show how our sensitivity-based approach recovers the same quantity.
Finally, we explain how our approach allows one to compute \textit{dynamic} marginal emission factors, i.e.\ MEFs that incorporate temporal information.
As far as we know, dynamic MEFs have yet to be established in the literature, likely because computing dynamic MEFs is statistically or computationally intractable using traditional.
\paragraph{Static marginal emission factors.}
The \textit{marginal emission factor} at node $i$ is the marginal change in emissions given a marginal change in demand at node $i$.
Mathematically, given demand vector $d$, the marginal emission factor $\mu_i(d)$ at node $i$ is such that $E(g^*(d + \delta) ) \approx E(g^*(d)) + \mu(d)^T\delta$, i.e.\ the (locally) linear change in emissions given a change in demand.
\paragraph{Traditional methods.}
Traditionally, two approaches have been used to compute MEFs \cite{?}.
The first approach is regression-based.
In particular, after observing demand vectors $d^{(1)}, \ldots, d^{(T)}$ and corresponding emissions $E^{(1)}, \ldots, E^{(T)}$, we solve $n$ linear regression problems $\mu_i(d) = \mathrm{argmin}_{\alpha}\ \frac{1}{2} \sum_{t=1}^T (E^{(t)} - \alpha d_i^{(t)})^2$ to recover $\mu(d)$.
This approach works well when an abundance of data is available and the demands at each node are mostly uncorrelated. \anthony{I'll need to revise this a bit. Talk more about the limitations / data requirements. Mention how this ignores imports / exports.}
The second approach is simulation-based.
To determine the MEF $\mu_i(d)$, we compute $E(g^*(d))$ and $E(g^*(d + \epsilon e_i))$, where $\epsilon > 0$ is small.
Then $\mu_i(d) \approx \left(E(g^*(d + \epsilon e_i)) - E(g^*(d)) \right) / \epsilon$.
The main limitation of this method is that it requires solving \eqref{eq:opf} $n+1$ times to compute the MEF at each node; for large networks, this quickly becomes intractable, especially when the MEFs need to be computed for various operating conditions (e.g.\ different seasons, different times of day).
\paragraph{Sensitivity method.}
As the reader may have noticed, the MEFs $\mu(d)$ are, by definition, exactly the gradient $\nabla \tilde E(d)$.
Therefore, we propose leveraging the techniques presented in Section~\ref{sec:sensitivity} to directly compute $\mu(d) = \nabla \tilde E(d)$.
This method is still simulation-based, but only requires solving \eqref{eq:opf} once, instead of $n+1$ times, making it computationally tractable even for very large networks.
Moreover, our approach precisely captures the mathematical definition of MEFs---in fact, if $\tilde E(d)$ is not locally differentiable near $d$, the MEF $\mu(d)$ does not make sense, since the marginal change in emissions is not unique.
\paragraph{Modeling more complex markets.}
Real electricity markets often include additional constraints, such as reliability constraints (e.g.\ the $N-1$ security constraint) and unit commitment constraints.
Consequently, MEF expressions derived analytically for one particular energy market model are of limited use in practice.
Our method, in contrast, applies to any optimization-based model for the energy market.
Because we use automatic differentiation to compute derivatives and Jacobians, the only change required to modify our framework for different dispatch models is to change the definition of \eqref{eq:opf}, and update the KKT operator $K(x, \theta)$ accordingly.
In the following section, we will show how this flexibility lets us model \textit{dynamic} energy markets, that is, markets with energy storage that are coupled in time.
This lets us compute a variant of marginal emission factors that accounts for \textit{delayed emissions}, i.e.\ emissions produced earlier or later in time because of a change in demand now.
\section{Dynamic marginal emission factors} \label{sec:dyn-mef}
\paragraph{Dynamic energy market.} We now extend the model in~\eqref{eq:opf} by adding storage to the system. The presence of storage couples subsequent timesteps. It also introduces additional nodal variables $s_t$, indicating the state of charge of the \say{nodal storage} at timestep $t$; they are constrained by a maximum capacity $C$ and a maximum charge/discharge power $P$.
\begin{equation} \label{eq:dyn_opf}
\begin{array}{ll}
\text{minimize} & \sum_t (g_t^T \mathbf{diag}(c^q) g_t + g_t^T c^\ell) \\
\text{subject to}
& 0 \leq g_t \leq g_t^{\max}, \\
& |p_t| \leq p_t^{\max}, \\
& p_t = \mathbf{diag}(\gamma) A^T v_t, \\
& B g_t - d_t - (s_t - s_{t-1}) = Ap_t,\\
& 0 \leq s_t \leq C, \\
& | s_t - s_{t-1}| \leq P, \quad t = 1, \ldots, T, \\
& s_0 = C_0
\end{array}
\end{equation}
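For concreteness, a CVXPY sketch of \eqref{eq:dyn_opf} on a small made-up three-node ring network could look as follows; we take $B=I$ (one generator per node) for simplicity, and all data are placeholders:
\begin{verbatim}
import cvxpy as cp
import numpy as np

T, n, l = 4, 3, 3                 # timesteps, nodes, lines
A = np.array([[ 1,  0, -1],       # node-edge incidence of a 3-node ring
              [-1,  1,  0],
              [ 0, -1,  1]])
gamma = np.ones(l)                # line parameters (placeholder)
cq, clin = np.array([1., 2., .5]), np.array([10., 8., 12.])
gmax, pmax = 10*np.ones(n), 5*np.ones(l)
C, P, C0 = 2*np.ones(n), 1*np.ones(n), np.ones(n)
d = np.ones((T, n))               # demand (placeholder)

g = cp.Variable((T, n), nonneg=True)
p = cp.Variable((T, l))
v = cp.Variable((T, n))
s = cp.Variable((T + 1, n))

cons, obj = [s[0] == C0], 0
for t in range(T):
    st, sp = s[t + 1], s[t]
    cons += [g[t] <= gmax,
             cp.abs(p[t]) <= pmax,
             p[t] == cp.multiply(gamma, A.T @ v[t]),
             g[t] - d[t] - (st - sp) == A @ p[t],   # B = I
             st >= 0, st <= C,
             cp.abs(st - sp) <= P]
    obj += cp.sum(cp.multiply(cq, cp.square(g[t]))) + g[t] @ clin
prob = cp.Problem(cp.Minimize(obj), cons)
prob.solve()
print(prob.status, prob.value)
\end{verbatim}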
\end{comment}
\section{Conclusion} \label{sec:conclusion}
In this paper, we introduce a novel method for computing locational marginal emissions rates using implicit differentiation.
We use this method to compute the LMEs of dynamic dispatch models, i.e., dispatch problems containing temporal constraints.
Using real WECC electricity and emissions data, we find that incorporating these dynamic constraints improves model accuracy by 8.2\%.
Finally, we observe that dynamic LMEs are difficult to approximate with their static counterparts: in a synthetic approximation of the WECC network, static and dynamic LMEs have a normalized average RMS deviation of 28.40\%.
Since flexible loads and energy storage are expected to play a large role in future grids, we believe incorporating dynamic constraints will be essential to accurately modeling LMEs.
The method presented in this paper generalizes previous methods to arbitrary convex optimization-based dispatch models.
Since many system operators use convex optimization-based dispatch models in day-ahead and real-time electricity markets \cite{PJM2021PJMOperations}, they could use this method to publish marginal emissions factors in real time.
Although these models can be notably more complex than those analyzed in academic research, the proposed method can compute marginal emissions factors for any such model, as long as they can be represented as convex optimization programs.
Moreover, by leveraging automatic differentiation software and optimization modeling languages~\cite{Dunning2017JuMP:Optimization}, the system operator would only need to specify the objective and constraints of their dispatch problem.
LMEs could then be published alongside LMPs to provide real time emissions information to electricity market participants.
This could be helpful, for example, to a large internet company choosing to reduce emissions by directing internet traffic to servers in low emitting regions, a problem considered in \cite{Lindberg2021AShifting}, or more generally to operators wanting to define optimal load management strategies~\cite{Wang2014LocationalDistribution}.
Finally, we comment on three directions for future work.
First, our experimental results indicate that LMEs in dynamic models often display complex behaviors and are difficult to interpret due to temporal and network constraints.
Deciphering the mechanisms underlying the structure of the LMEs in different settings would be useful in communicating these results and translating them into grid planning or policy decisions.
Second, we note that computing LMEs in large networks could be computationally intensive.
Exploiting network structure and using distributed computation could yield significant performance gains.
Third, our paper shows how to compute LMEs when the full network model is available.
In some cases, however, the network model may be unavailable to the interested party.
Understanding how to estimate the parameters of the electricity network from publicly available data (using the methods developed in \cite{Donti2018InverseData}, for example) and then deriving marginal emissions factors from the learned model is an interesting area of research.
\section*{Acknowledgements}
The authors thank Liang Min and In\^{e}s Azevedo for their valuable comments and suggestions.
Disclaimer: This report was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof.
\section{Modelling Assumptions}
\begin{comment}
\section{Computing sensitivities}
\paragraph{Computing sensitivities.} We here lay out the details of the computation of the Jacobian and the implicit differentiation scheme, so that they can be cross-read and validated.
First, let's write the optimization variables as
$$
x = (x_1, x_2, \ldots, x_T), \quad x_t = (g_t, p_t, s_t).
$$
Then, let's denote by $F$ and $H$ the matrices containing the inequality and equality constraints. We will decompose them in a similar way as above:
$$
F = \begin{bmatrix}
F_1\\
F_2\\
\vdots\\
F_T
\end{bmatrix},
H =
\begin{bmatrix}
H_1\\
H_2\\
\vdots\\
H_T
\end{bmatrix}.
$$
All $F_t, H_t$ should have the same structure (with the exception of $H_0$, which needs to encode the initial condition for the nodal battery states of charge). For now, we assume they all have the same form, which gets us most of the way there.
We can then write an expression for both $F_t$ and $H_t$. We have
\begin{align*}
F_t^T &=
\begin{bmatrix}
-g_t, g_t - g_t^{\max}, -p_t-p_t^{\max}, p_t - p_t^{\max}, s_t - s_{t-1} - P, s_{t-1} - s_t - P, -s_t, s_t -C
\end{bmatrix},\\
H_t^T &=
\begin{bmatrix}
g_t - d_t - (s_t-s_{t-1}) - Ap_t
\end{bmatrix}.
\end{align*}
Once this is established, we can proceed to the expression of the KKT conditions in matrix form. The Lagrangian of the problem can be written as
\begin{align*}
L = \sum_t\left( g_t^T \mathbf{diag}(c^q) g_t + g_t^T c^\ell + \lambda_t^T F_t + \nu_t^T H_t\right).
\end{align*}
Using the same notations as~\cite{Barratt2018OnProblems}, we want to first write the matrix
\begin{align*}
g(z, \theta) =
\begin{bmatrix}
\nabla_x L \\
\mathbf{diag}(\lambda) F\\
H
\end{bmatrix}.
\end{align*}
The stationarity condition (i.e. the first block in the above matrix) yields:
\begin{align*}
\nabla_x L &=
\begin{bmatrix}
\nabla_{x_1} L, ..., \nabla_{x_T} L
\end{bmatrix}^T, \\
\nabla_{x_t} L &=
\begin{bmatrix}
\partial_{g_t} L\\
\partial_{p_t} L \\
\partial_{s_t} L\\
\end{bmatrix}
=
\begin{bmatrix}
2\ \mathbf{diag}(c^q) g_t + c^\ell + \lambda_t^T D_{g_t}F_t + \nu_t^T D_{g_t}H_t \\
\lambda_t^T D_{p_t}F_t + \nu_t^T D_{p_t}H_t\\
\lambda_{t+1}^T D_{s_t} F_{t+1} + \lambda_t^T D_{s_t} F_{t} + \nu_{t+1}^T D_{s_t} H_{t+1} + \nu_t^T D_{s_t} H_t
\end{bmatrix}.
\end{align*}
Therefore, the full computation of $\nabla_x L$ can be decomposed into suboperations at each timestep. However, it is not fully decomposable, as can be seen in the last term of the above equation (there is coupling between subsequent timesteps).
We now give explicit expressions for the different Jacobians appearing above.
For the inequality constraints we have
\begin{align*}
D_{g_t} F_t =
\begin{bmatrix}
-I \\
I \\
0\\
0\\
0\\
\vdots\\
0
\end{bmatrix},
D_{p_t} F_t =
\begin{bmatrix}
0 \\
0 \\
-I \\
I\\
0\\
\vdots\\
0
\end{bmatrix},
D_{s_t}F_t =
\begin{bmatrix}
0\\
0\\
0\\
0\\
I\\
-I\\
-I\\
I
\end{bmatrix},
D_{s_t}F_{t+1} =
\begin{bmatrix}
0\\
0\\
0\\
0\\
-I\\
I\\
0\\
0
\end{bmatrix}
\end{align*}
For the equality constraints we have
\begin{align*}
D_{g_t} H_t = [I], D_{p_t}H_t = [-A], D_{s_t} H_t = [-I], D_{s_t}H_{t+1} = [I]
\end{align*}
All this detail will be useful when computing the complete Jacobian, which we proceed to do now.
Again, borrowing notation from~\cite{Barratt2018OnProblems}, we want to compute
\begin{align*}
D_z g =
\begin{bmatrix}
D_x \nabla_x L & D_x F^T & D_x H^T \\
\mathbf{diag}(\lambda)D_xF & \mathbf{diag}(F) & 0 \\
D_x H & 0 & 0
\end{bmatrix}.
\end{align*}
We can easily decompose the Jacobians in this expression into the smaller timestep-level Jacobians computed above.
Indeed
\begin{align*}
D_xF =
\begin{bmatrix}
D_{x_1} F_1 & \cdots & D_{x_T} F_1\\
\vdots & \ddots & \vdots \\
D_{x_1} F_T & \cdots & D_{x_T} F_T
\end{bmatrix}
\end{align*}
and we have that
\begin{align*}
D_{x_t} F_t =
\begin{bmatrix}
D_{g_t}F_t & D_{p_t}F_t & D_{s_t} F_t
\end{bmatrix}
\end{align*}
and that
\begin{align*}
D_{x_t} F_{t+1} =
\begin{bmatrix}
0 & 0 & D_{s_t} F_{t+1}
\end{bmatrix}.
\end{align*}
Similarly, we have that
\begin{align*}
D_{x_t}H_t =
\begin{bmatrix}
I & -A & -I
\end{bmatrix},
D_{x_t} H_{t+1} =
\begin{bmatrix}
0 & 0 & I
\end{bmatrix}
\end{align*}
The above derivations should therefore be sufficient to compute the entire matrix $D_z g$.
We have that
\begin{align*}
D_x\nabla_x L =
\begin{bmatrix}
D_{x_1} \nabla_{x_1} L & \cdots & D_{x_T} \nabla_{x_1} L \\
\vdots & \ddots & \vdots \\
D_{x_1} \nabla_{x_T} L & \cdots & D_{x_T} \nabla_{x_T} L \\
\end{bmatrix}
\end{align*}
We have
\begin{align*}
D_{x_i}\nabla_{x_j} L = \delta_{ij}
\begin{bmatrix}
2 \mathbf{diag}(c^q) & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0
\end{bmatrix}
\end{align*}
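As a sanity check of the $\delta_{ij}$ block structure above, a tiny numpy/scipy sketch (with hypothetical sizes) assembles $D_x\nabla_x L$ as a block-diagonal matrix:
\begin{verbatim}
import numpy as np
from scipy.linalg import block_diag

T, n_g, n_p, n_s = 3, 2, 2, 2     # hypothetical sizes
cq = np.array([1.0, 2.0])
# per-timestep block: 2 diag(cq) for g_t, zeros for p_t and s_t
block_t = block_diag(2*np.diag(cq),
                     np.zeros((n_p, n_p)),
                     np.zeros((n_s, n_s)))
H = block_diag(*[block_t]*T)      # delta_ij: block diagonal in t
print(H.shape)                    # (T*(n_g+n_p+n_s), T*(n_g+n_p+n_s))
\end{verbatim}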
\paragraph{A note on computational burden.} With a graph of $N$ nodes and $E$ edges, we go from $N + E$ variables and $3(N+E)$ constraints to $2(NT)$ variables with ... constraints. The size of the matrix to invert (KKT) is ...? This limits the size of the time horizon to ...?
Are there ways to leverage the structure of the matrix for easier inversion?
\end{comment}
|
{
"arxiv_id": "2302.14319",
"language": "en",
"timestamp": "2023-03-01T02:09:23",
"url": "https://arxiv.org/abs/2302.14319",
"yymm": "2302"
} | \section{Introduction}
In \cite{AM21} we presented an explicit partition function of refined Chern-Simons theory on $S^3$ for all simple gauge Lie algebras. It coincides with the Aganagic-Shakirov \cite{AS11,AS12a} and Aganagic-Schaeffer \cite{AS12} partition functions for the A and D algebras, as well as with their non-perturbative form, given in \cite{k_s,KM}. In the non-refined limit
it coincides with the universal form \cite{MV,M13} of Witten's partition function \cite{W1} for all simple gauge algebras. The refinement of Chern-Simons theory is based on Macdonald's deformation of a number of formulae in the theory of simple Lie algebras, proved later by Cherednik \cite{Cher1,Cher2} and others.
In \cite{Cher1,Cher2} it is shown that for non simply laced algebras Macdonald's deformation can have two parameters, corresponding to the two different lengths of the roots of these algebras.
The aim of the present paper is to generalize the partition function \cite{AM21} of the refined Chern-Simons theory to the two-fold refined one, adding a second deformation parameter corresponding to that of Cherednik.
The main aim of \cite{AM21} was the preparatory work for finding the topological string dual to refined Chern-Simons theory with an arbitrary simple gauge algebra. That goal was achieved for all classical simple gauge algebras in the subsequent paper \cite{AM22}, which required lengthy calculations. Similar calculations for the two-fold refined theories considered in the present work would evidently be much more complex, so we do not carry them out here. However, they would hopefully establish the two-parameter deformation of topological string theories, unknown at the moment.
Our two-fold refined version of the Chern-Simons partition function is presented in Section \ref{partfunc} and is based on a generalization of the Kac-Peterson formula for the determinant of the symmetrized Cartan matrix \cite{KP}. This formula was already generalized in \cite{AM21} to include one refinement parameter; in the present paper we present its two-fold refined version. In Section \ref{intrep} we give an integral representation of the partition function, which includes the non-perturbative (w.r.t.\ the string coupling constant) corrections, as shown earlier for the single-refined case in \cite{KM}.
In Section \ref{sect:univ} we present this two-fold refined partition function in a ``universal'' form for all non simply laced algebras, generalizing the corresponding expression from \cite{AM21}. This form, as mentioned, is ready for the further transformation of the partition function into the form corresponding to the (hypothetical two-fold refined) topological strings.
\section{Double refinement of Kac-Peterson identity and the corresponding Chern-Simons partition functions} \label{partfunc}
In \cite{AM21} we introduced a deformation (refinement), via the inclusion of a parameter $y$, of the Kac-Peterson formula for the volume of the fundamental domain of the coroot lattice \cite{KP}.
It is significant for Chern-Simons theory since that volume is a part of the partition function of the Chern-Simons theory on the 3d sphere $S^3$, see below. The refined formula reads:
\begin{eqnarray} \label{vol}
Vol(Q^{\vee})= (ty)^{-\frac{r}{2}} \prod_{m=0}^{y-1} \prod_{\alpha_+} 2\sin \pi \frac{y(\alpha,\rho)-m (\alpha,\alpha)/2}{ty}
\end{eqnarray}
where $Q^{\vee}$ is the coroot lattice, $Vol(Q^{\vee})$ is the volume of its fundamental domain, $r$ is the rank of the algebra, $\alpha_+$ are its positive roots, $\rho$ is the half-sum of
the positive roots, and $t$ is the dual Coxeter number, taken in an arbitrary normalization of the invariant Cartan-Killing metric (or, equivalently, the sum of Vogel's universal parameters,
see e.g.\ \cite{AM21}). Here the refinement parameter $y$ is assumed to be a positive integer. At $y=1$ this formula coincides with the original formula of Kac-Peterson, see
\cite{KP}, eq.\ (4.32.2).
It appears that a further generalization of this formula is possible, with the inclusion of two parameters:
\begin{eqnarray} \label{vol2}
Vol(Q^{\vee})= (\tilde{k})^{-\frac{r}{2}} \prod_{\alpha_+} \prod_{m=0}^{k_{\nu_\alpha}-1} 2\sin \pi \frac{k_s(\rho_s,\alpha)+k_l(\rho_l,\alpha)-m (\alpha,\alpha)/2}{ \tilde{k}}
\end{eqnarray}
\begin{eqnarray}
\tilde{k}=k_s(\rho_s,\theta)+k_l(\rho_l,\theta)+k_l\frac{(\theta,\theta)}{2}
\end{eqnarray}
where $k_l$ and $k_s$ are the refinement parameters, here taken to be positive integers, corresponding to the two different lengths of the roots, with subscript $s$ for the short roots and $l$ for the long ones.
$\rho_s$ and $\rho_l$ stand for the half-sums of all positive short and long roots, respectively. The subscript $\nu_\alpha$ is $s$ if $\alpha$ is a short root, and $l$ if $\alpha$ is
a long one. $\theta$ stands for the highest root of the algebra.
Note that this generalization affects the non simply laced algebras only. The previous case with one deformation parameter $y$ is recovered in the $k_s=k_l=y$ case.
This formula has been checked numerically for all types of non simply laced simple Lie algebras for tens of thousands of random values of the ranks and the parameters $k_s, k_l$, so we assume it
is correct in all cases.
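For instance, the following short Python script (our own illustration of such a check, not the original verification code) reproduces the identity for $B_2$, in the normalization where the long roots have squared length $4$; in this normalization $Vol(Q^{\vee})=1$:
\begin{verbatim}
import numpy as np
from itertools import product

# B2 root data, normalized so that (long, long) = 4
sq2 = np.sqrt(2.0)
short = [sq2*np.array([1, 0]), sq2*np.array([0, 1])]
long_ = [sq2*np.array([1, -1]), sq2*np.array([1, 1])]
rho_s, rho_l = .5*sum(short), .5*sum(long_)
theta = long_[1]                  # highest root
r = 2                             # rank

def rhs(ks, kl):
    kt = ks*(rho_s @ theta) + kl*(rho_l @ theta) + kl*(theta @ theta)/2
    val = kt**(-r/2)
    for alpha, k in [(a, ks) for a in short] + [(a, kl) for a in long_]:
        for m in range(k):
            num = ks*(rho_s @ alpha) + kl*(rho_l @ alpha) \
                  - m*(alpha @ alpha)/2
            val *= 2*np.sin(np.pi*num/kt)
    return val

# Vol(Q_vee) = |det of the simple coroot basis| = 1 here
for ks, kl in product(range(1, 6), repeat=2):
    assert abs(rhs(ks, kl) - 1.0) < 1e-9
print("identity verified for B2, 1 <= k_s, k_l <= 5")
\end{verbatim}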
An analytical proof of this equality can perhaps be carried out similarly to \cite{AM21}, making use of the following identity:
\begin{eqnarray}
N=\prod_{k=1}^{N-1}2 \sin{\pi\frac{k}{N}}
\end{eqnarray}
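For instance, at $N=3$ this identity reads $2\sin\frac{\pi}{3}\cdot 2\sin\frac{2\pi}{3}=\sqrt{3}\cdot\sqrt{3}=3$.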
The partition function of the refined Chern-Simons theory on $S^3$ for all gauge algebras, suggested in \cite{AM21}, is
\begin{eqnarray}\label{refCS}
Z(\kappa,y)= Vol(Q^{\vee})^{-1} \delta^{-\frac{r}{2}} \prod_{m=0}^{y-1} \prod_{\alpha_+} 2\sin \pi \frac{y(\alpha,\rho)-m (\alpha,\alpha)/2}{\delta}
\end{eqnarray}
where $\delta=\kappa+yt$ and $\kappa$ is the coupling constant. At $y=1$ this coincides with the partition function of the unrefined Chern-Simons theory, derived in the pioneering work \cite{W1}. It has the following
key feature
\begin{eqnarray}
Z(0,y)=1
\end{eqnarray}
due to the identity (\ref{vol}). The meaning of this identity is as follows: a Chern-Simons theory with coupling constant $\kappa$
(which is a positive integer in the Cartan-Killing normalization of the scalar
product in the root space) is based on the unitary representations of level $\kappa$, which are finite in number. At $\kappa=0$ there is only one, the trivial representation.
Generalizing the partition function (\ref{refCS}) we suggest the following partition function for the two-fold refined CS on $S^3$:
\begin{eqnarray}\label{Z2ref1}
Z(\kappa,k_s,k_l)= Vol(Q^{\vee})^{-1} \delta^{-\frac{r}{2}} \prod_{\alpha_+} \prod_{m=0}^{k_{\nu_\alpha}-1} 2\sin \pi \frac{{k_s(\rho_s,\alpha)+k_l(\rho_l,\alpha)-m (\alpha,\alpha)/2}}{\delta}
\end{eqnarray}
with $\delta=\kappa+\tilde{k}$.
This partition function also satisfies the key identity
\begin{eqnarray}\label{key2}
Z(0,k_s,k_l)=1
\end{eqnarray}
due to the generalized identity (\ref{vol2}).
Making use of the key identity (\ref{key2}), similarly to \cite{AM21}, one can easily transform (\ref{Z2ref1}) into the following form
\begin{eqnarray} \label{Z2ref}
Z(\kappa,k_s,k_l)= (\frac{\tilde{k}}{\tilde{k}+\kappa})^{\frac{r}{2}} \prod_{\alpha_+} \prod_{m=0}^{k_{\nu_\alpha}-1} \frac{\sin \pi \frac{{k_s(\rho_s,\alpha)+k_l(\rho_l,\alpha)-m (\alpha,\alpha)/2}}{\tilde{k}+\kappa}}
{\sin \pi \frac{k_s(\rho_s,\alpha)+k_l(\rho_l,\alpha)-m (\alpha,\alpha)/2}{\tilde{k}}}
\end{eqnarray}
which is convenient for further transformation into the universality-like integral representation of \cite{AM21}, appropriate for the subsequent establishment of duality \cite{AM22}.
\section{Integral representation of the partition function for the two-fold refined CS theories} \label{intrep}
We rewrite (\ref{Z2ref}) into the separate products over the short and the long roots:
\begin{multline}
Z(\kappa,k_s,k_l)= (\frac{\tilde{k}}{\tilde{k}+\kappa})^{\frac{r}{2}+(k_s L_s+k_l L_l)} \\
\prod_{\alpha_+^s} \prod_{m=0}^{k_s-1}
\frac{\sin \pi \frac{{k_s(\rho_s,\alpha)+k_l(\rho_l,\alpha)-m (\alpha,\alpha)/2}}{\tilde{k}+\kappa}}
{ \pi \frac{k_s(\rho_s,\alpha)+k_l(\rho_l,\alpha)-m (\alpha,\alpha)/2}{\tilde{k}+\kappa}} \cdot
\frac{ \pi \frac{k_s(\rho_s,\alpha)+k_l(\rho_l,\alpha)-m (\alpha,\alpha)/2}{\tilde{k}}}
{\sin \pi \frac{{k_s(\rho_s,\alpha)+k_l(\rho_l,\alpha)-m (\alpha,\alpha)/2}}{\tilde{k}}} \times \\
\prod_{\alpha_+^l} \prod_{m=0}^{k_l-1}
\frac{\sin \pi \frac{{k_s(\rho_s,\alpha)+k_l(\rho_l,\alpha)-m (\alpha,\alpha)/2}}{\tilde{k}+\kappa}}
{ \pi \frac{k_s(\rho_s,\alpha)+k_l(\rho_l,\alpha)-m (\alpha,\alpha)/2}{\tilde{k}+\kappa}} \cdot
\frac{ \pi \frac{k_s(\rho_s,\alpha)+k_l(\rho_l,\alpha)-m (\alpha,\alpha)/2}{\tilde{k}}}
{\sin \pi \frac{{k_s(\rho_s,\alpha)+k_l(\rho_l,\alpha)-m (\alpha,\alpha)/2}}{\tilde{k}}}= \\
(\frac{\tilde{k}}{\tilde{k}+\kappa})^{\frac{r}{2}+(k_s L_s+k_l L_l)}
Z_1^s\cdot Z_2^s\cdot Z_1^l\cdot Z_2^l
\end{multline}
Here $L_s$ and $L_l$ denote the numbers of positive short and long roots, respectively. The last equality implicitly defines $Z_1^s, Z_2^s, Z_1^l, Z_2^l$.
Using the well-known identity
\begin{eqnarray}
\frac{\sin \pi z}{\pi z} = \frac{1}{\Gamma(1+z) \Gamma(1-z)}
\end{eqnarray}
and the following integral representation
\begin{eqnarray}
\ln\Gamma(1+z)=\int_{0}^{\infty}dx \frac{e^{-zx}+z(1-e^{-x})-1}{x(e^{x}-1)}
\end{eqnarray}
we have
\begin{eqnarray*}
\ln Z_1^s= -\int_{0}^{\infty} \frac{dx}{x(e^{x}-1)} \sum_{m=0}^{k_s-1}\sum_{\alpha_+^s} ( e^{-x\frac{k_s(\rho_s,\alpha)+k_l(\rho_l,\alpha)-m (\alpha,\alpha)/2}{\tilde{k}+\kappa}}+
e^{x\frac{k_s(\rho_s,\alpha)+k_l(\rho_l,\alpha)-m (\alpha,\alpha)/2}{\tilde{k}+\kappa}}-2)
\end{eqnarray*}
and
\begin{eqnarray*}
\ln Z_2^s= \int_{0}^{\infty} \frac{dx}{x(e^{x}-1)} \sum_{m=0}^{k_s-1}\sum_{\alpha_+^s} ( e^{-x\frac{k_s(\rho_s,\alpha)+k_l(\rho_l,\alpha)-m (\alpha,\alpha)/2}{\tilde{k}}}+
e^{x\frac{k_s(\rho_s,\alpha)+k_l(\rho_l,\alpha)-m (\alpha,\alpha)/2}{\tilde{k}}}-2)
\end{eqnarray*}
Similarly,
\begin{eqnarray*}
\ln Z_1^l= -\int_{0}^{\infty} \frac{dx}{x(e^{x}-1)} \sum_{m=0}^{k_l-1}\sum_{\alpha_+^l} ( e^{-x\frac{k_s(\rho_s,\alpha)+k_l(\rho_l,\alpha)-m (\alpha,\alpha)/2}{\tilde{k}+\kappa}}+
e^{x\frac{k_s(\rho_s,\alpha)+k_l(\rho_l,\alpha)-m (\alpha,\alpha)/2}{\tilde{k}+\kappa}}-2)
\end{eqnarray*}
and
\begin{eqnarray*}
\ln Z_2^l= \int_{0}^{\infty} \frac{dx}{x(e^{x}-1)} \sum_{m=0}^{k_l-1}\sum_{\alpha_+^l} ( e^{-x\frac{k_s(\rho_s,\alpha)+k_l(\rho_l,\alpha)-m (\alpha,\alpha)/2}{\tilde{k}}}+
e^{x\frac{k_s(\rho_s,\alpha)+k_l(\rho_l,\alpha)-m (\alpha,\alpha)/2}{\tilde{k}}}-2)
\end{eqnarray*}
Let us compute $\ln Z_1^s+\ln Z_1^l$ and $\ln Z_2^s+\ln Z_2^l$:
\begin{eqnarray*}
\ln Z_1:=\ln Z_1^s+\ln Z_1^l=-\int_{0}^{\infty} \frac{dx}{x(e^{x}-1)}
\big(F_X(\frac{x}{\tilde{k}+\kappa},k_s,k_l)-r-2(k_s L_s+k_l L_l)\big), \\
\ln Z_2:=\ln Z_2^s+\ln Z_2^l=\int_{0}^{\infty} \frac{dx}{x(e^{x}-1)}
\big(F_X(\frac{x}{\tilde{k}},k_s,k_l)-r-2(k_s L_s+k_l L_l)\big)=\\
\int_{0}^{\infty} \frac{dx}{x(e^{x\frac{\tilde{k}}{\tilde{k}+\kappa}}-1)}
\big(F_X(\frac{x}{\tilde{k}+\kappa},k_s,k_l)-r-2(k_s L_s+k_l L_l)\big)
\end{eqnarray*}
where
\begin{eqnarray}\label{FX}
F_X(x,k_s,k_l)= \\ \nonumber
r+ \sum_{m=0}^{k_{\nu_\alpha}-1} \sum_{\alpha_{+}} (
e^{x(k_s(\rho_s,\alpha)+k_l(\rho_l,\alpha)-m (\alpha,\alpha)/2)}+e^{-x(k_s(\rho_s,\alpha)+k_l(\rho_l,\alpha)-m (\alpha,\alpha)/2)})
\end{eqnarray}
Using the relation
\begin{eqnarray} \label{cotmcotId}
\frac{1}{e^{b x}-1}-\frac{1}{e^{a x} -1} = \frac{e^{a x}-e^{b x}}{(e^{a x}-1)(e^{b x }-1)}=\frac{\sinh (\frac{x(a-b)}{2})}{2\sinh (\frac{x a}{2})\sinh (\frac{x b}{2})}\,,
\end{eqnarray}
we have:
\begin{eqnarray*}
\ln Z_1+\ln Z_2=
\int_{0}^{\infty} \frac{dx}{2x} \frac{\sinh(\frac{x}{2}\frac{\kappa}{\tilde{k}+\kappa})}
{\sinh(\frac{x}{2}) \sinh(\frac{x}{2}\frac{\tilde{k}}{\tilde{k}+\kappa})}
\big(F_X(\frac{x}{\tilde{k}+\kappa},k_s,k_l)-r-2(k_s L_s+k_l L_l)\big)= \\
\int_{0}^{\infty} \frac{dx}{2x} \frac{\sinh(x\kappa)}
{\sinh(x(\tilde{k}+\kappa)) \sinh(x\tilde{k})}
\big(F_X(2x,k_s,k_l)-r-2(k_s L_s+k_l L_l)\big)=\\
\frac{1}{4} \int_{R_+} \frac{dx}{x} \frac{\sinh(x\kappa)}
{\sinh(x(\tilde{k}+\kappa)) \sinh(x\tilde{k})}
\big(F_X(2x,k_s,k_l)-r-2(k_s L_s+k_l L_l)\big)
\end{eqnarray*}
where in the last line we first used that the integrand is even under the transformation $x\rightarrow -x$, and second that its residue at zero vanishes (for the same reason),
so that we can slightly deform the integration contour into a small semicircle around the origin in the upper half-plane (this defines the contour $R_+$).
This form will be used below.
Now we can write the overall integral representation for $\ln Z$:
\begin{eqnarray*}
\ln Z=(\frac{r}{2}+(k_s L_s+k_l L_l))\cdot \ln (\frac{\tilde{k}}{\tilde{k}+\kappa})+ \\
\frac{1}{4} \int_{R_+} \frac{dx}{x} \frac{\sinh(x\kappa)}
{\sinh(x(\tilde{k}+\kappa)) \sinh(x\tilde{k})}
\big(F_X(2x,k_s,k_l)-r-2(k_s L_s+k_l L_l)\big)
\end{eqnarray*}
Due to the following identity
\begin{eqnarray}
\frac{1}{4}\int_{R_+} \frac{dx}{x} \frac{\sinh (x(a-b))}{\sinh (x a)\sinh (x b)}=-\frac{1}{2}\log (\frac{a}{b})\,,
\end{eqnarray}
we finally have:
\begin{eqnarray*}
\ln Z=\frac{1}{4} \int_{R_+} \frac{dx}{x} \frac{\sinh(x\kappa)}
{\sinh(x(\tilde{k}+\kappa)) \sinh(x\tilde{k})}
F_X(2x,k_s,k_l)
\end{eqnarray*}
thus obtaining the final expression for the partition function of the two-fold refined Chern-Simons theory on the $S^3$ manifold.
All formulae of course coincide with those for the usual refined Chern-Simons theory in the case $k_s=k_l$. In particular, $F_X(x,k,k)=F_X(x,k)$, with the latter function defined in \cite{AM21}.
\section{The universal-like expressions for the two-fold refined CS for non simply laced algebras} \label{sect:univ}
In this section we calculate the function $ F_X(x,k_s,k_l) $ in a form which naturally extends it to arbitrary values of the refinement parameters $k_s, k_l$. It is also
convenient for the further establishment of duality with the topological strings.
We will show that for all non simply laced algebras one can present $F_X(x,k_s,k_l)$ in the form $\frac{A_{X}(k_s,k_l)}{B_{X}(k_s,k_l)}$, given below.
Let us consider the $B_n$ algebras. The normalization corresponds to $\alpha=-4$, i.e.\ the square of the long root is $4$.
The representation mentioned above then reads
\begin{eqnarray}\label{F2}
F_{B_n}(x,k_s,k_l)=
\frac{A_{B_n}(k_s,k_l)}{B_{B_n}(k_s,k_l)} \\
B_{B_n}(k_s,k_l)= (q^2-1) (q^{4 k_l}-1)\\
A_{B_n}(k_s,k_l)=\\
(q^{2 k_l n}-1) q^{-2 (2 k_l n+k_l+ k_s)}\times\\
(-q^{4 k_l (n+1)+2 k_s+1}+q^{2 k_l (n+2)+2 k_s+1}+q^{6 k_l n+4k_s+2}-q^{8 k_l}+\\
(q^{2 k_s+1}+1) (q^{2 k_l (n+3)}-q^{4 k_l n+2 k_l+2 k_s+1})+\\
(q+1) (q^{2 k_l}+1) (q^{4 k_l n+2 k_l+3k_s+1}-q^{2 k_l (n+2)+ k_s}))
\end{eqnarray}
For the $C_n$ algebras the same normalization is used, with the square of the long root equal to $4$. Then $F_X$ reads
\begin{eqnarray}
F_{C_n}= \frac{A_{C_n}}{B_{C_n}} \\
B_{C_n}=(q^2-1) (q^{2 k_s}-1) \\
A_{C_n}= (q^{k_s n}-1) q^{-2 k_l-k_s (2 n+1)} \times \\
(q^{2 k_l+2 k_s (n+1)+1}-q^{2 k_l+k_s (n+2)+1}-q^{2 k_l+k_s (n+3)+1}+q^{2 (k_l+k_s n+k_s+1)}+\\
q^{2 k_l+2 k_s n+k_s+1}-q^{4 k_l+2 k_s n+k_s+1}+q^{4 k_l+3 k_s n+1}-q^{2 k_l+3 k_s n+k_s+2}+\\
q^{4 k_l+3 k_s n+k_s+2}+q^{4 k_l+3 k_s n+2}-q^{2 k_l+k_s (n+2)}+q^{2 k_l+3 k_s}+\\
q^{k_s (n+3)+1}-q^{4 k_s+1}-q^{3 k_s}-q^{4 k_s})
\end{eqnarray}
For $F_4$, with the same normalization, we have
\begin{eqnarray}
F_{F_4}= \frac{A_{F_4}}{B_{F_4}} \\
B_{F_4}=(q^2-1) \\
A_{F_4}=q^{-2 (5 k_l+3 k_s)} (q^{2 (2 k_l+k_s)}+1) (q^{6 k_l+6 k_s+1}-1) \times \\
(q^{4 k_l+k_s+1}+q^{6 k_l+3 k_s+1}-q^{4 k_l+4 k_s+1}+q^{10 k_l+4 k_s+1}+\\
q^{3 (2 k_l+k_s)}+q^{4 k_l+k_s}-q^{6 k_l}+1)
\end{eqnarray}
For $G_2$ we use the normalization in which the square of the long root equals $6$. The corresponding $F_{G_2}$ function is
\begin{eqnarray} \label{FXG2}
F_{G_2}= \frac{A_{G_2}}{B_{G_2}} \\
B_{G_2}=q^3-1 \\
A_{G_2}=q^{-3 (2 k_l+k_s)} (q^{3 k_l+3 k_s+1}-1) \times \\
(q^{3 k_l+k_s}+q^{2 (3 k_l+k_s)}+q^{3 k_l+k_s+1}+q^{3 k_l+k_s+2}+q^{6 k_l+2 k_s+1}+\\
q^{6 k_l+2 k_s+2}-q^{3 k_l+3 k_s+2}+q^{9 k_l+3 k_s+2}-q^{6 k_l}+1)
\end{eqnarray}
\section{Conclusion and outlook}
One can investigate the formulae above in various limits. E.g.\ a natural limit is $k_s=0$.
In that case only the long roots contribute to formula (\ref{FX}).
On the other hand, the long roots of a non simply laced algebra, together with the Cartan subalgebra, constitute a subalgebra $L_X$ of the original algebra $X$.
So, one has the following equality:
\begin{eqnarray}\label{ccc}
F_X(x,k_l,0)=F_{L_X}(cx,k_l)
\end{eqnarray}
where on the r.h.s.\ we have the functions for the single-refined algebras, given in \cite{AM21}. The rescaling constant $c$ may appear due to the different normalizations of the long roots in
the algebras $X$ and $L_X$.
For example, for $G_2$ algebra one has
\begin{eqnarray}
L_{G_2}=A_2.
\end{eqnarray}
Then from (\ref{FXG2}) we have
\begin{eqnarray}
F_{G_2}(x,k_l,0)=\frac{q^{-6 k_l} \left(q^{3 k_l}+1\right) \left(q^{9 k_l+3}-1\right)}{q^3-1}
\end{eqnarray}
and, on the other hand, for single-refined case one has \cite{AM21}
\begin{eqnarray}\label{FA2}
F_{A_2}(x,y)= \frac{q^{-2y} \left(q^{y}+1\right) \left(q^{3y+1}-1\right)}{q-1}
\end{eqnarray}
These two formulae coincide after the identification $k_l=y$ and a redefinition of $q=\exp (x)$ (a rescaling of $x$) in the second one according to
\begin{eqnarray}
q \rightarrow q^{3}
\end{eqnarray}
i.e.\ $c=3$ in (\ref{ccc}).
This redefinition is required since the square of the long roots in $F_{G_2}(x,k_l,k_s)$ is $6$, while in $F_{A_2}(x,y)$ it is $2$. Altogether, we see that in the $k_s=0$ limit our
formulae agree with the previously known ones.
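This agreement is also easy to confirm symbolically; e.g.\ the following short sympy sketch (ours, for illustration) checks the equality for several integer values of $k_l$:
\begin{verbatim}
import sympy as sp

q = sp.symbols('q', positive=True)

def F_G2_ks0(kl):
    # A_{G2}/B_{G2} from (FXG2) evaluated at k_s = 0
    A = q**(-6*kl)*(q**(3*kl + 1) - 1)*(
        q**(3*kl) + q**(6*kl) + q**(3*kl + 1) + q**(3*kl + 2)
        + q**(6*kl + 1) + q**(6*kl + 2) - q**(3*kl + 2)
        + q**(9*kl + 2) - q**(6*kl) + 1)
    return A/(q**3 - 1)

def F_A2(y):
    # single-refined A2 result (FA2), with q -> q**3 (the c = 3 rescaling)
    Q = q**3
    return Q**(-2*y)*(Q**y + 1)*(Q**(3*y + 1) - 1)/(Q - 1)

for kl in range(1, 5):
    # multiply by the common denominator; the difference must vanish
    assert sp.expand((F_G2_ks0(kl) - F_A2(kl))*(q**3 - 1)) == 0
print("F_G2(x, k_l, 0) = F_A2(3x, k_l) verified for k_l = 1..4")
\end{verbatim}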
This limit $k_s=0$ resembles the Nekrasov-Shatashvili (NS) \cite{NS09} limit in the refined Chern-Simons theories. Of course, the true NS limit is $k_s=k_l \rightarrow 0$.
In our case an interesting new possibility is the limit when both $k_s$ and $k_l$ tend to zero with $k_s/k_l$ fixed.
Then one would have an additional deformation parameter $k_s/k_l$, and the question is what kind of (deformed) quantum integrable systems would appear in this limit.
The other important direction for further investigation is the following.
The present work generalizes the results of \cite{AM21}, where the partition function of the single-refined Chern-Simons theory was presented for all gauge algebras and
transformed into a duality-ready (universal-like) form. The next step was taken in \cite{AM22}, where the duality with the refined topological
strings was established (for the first time in the case of the non simply laced algebras).
The procedure for establishing that duality included the transformation of the partition function into a product/ratio of multiple sine functions, as well as the
perturbative approximation of the multiple sines, which yielded a partition function for topological strings of Gopakumar-Vafa type.
Let us look at the simplest case as an example. The refined $A_{N-1}$ Chern-Simons theory partition function can be transformed into
\begin{equation}
\begin{split}
Z_A(a,y,\delta) &=
\sqrt{\frac{\delta}{ya}} \frac{S_3(1+ya|1,y,\delta)}{S_3(y|1,y,\delta)}\,,
\end{split}
\end{equation}
with $a=N$. A similar form of the partition function can be obtained for other gauge algebras \cite{AM22}, but it requires more complex computations.
One can carry out calculations similar to those in \cite{AM22} for the two-fold refined case. However, the situation with the doubly refined theories is more complicated in two respects. First, due to the second refinement parameter, the corresponding transformations of the partition functions are much more involved. Second, there are no known two-fold refined
topological strings to compare our formulae with.
\section{Acknowledgments}
MA and RM are partially supported by the Science Committee of the Ministry of Science
and Education of the Republic of Armenia under contract 21AG-1C060.
The work of MA is partially supported by ANSEF grant PS-mathph-2697.
|
{
"arxiv_id": "2302.14341",
"language": "en",
"timestamp": "2023-03-01T02:10:15",
"url": "https://arxiv.org/abs/2302.14341",
"yymm": "2302"
} | \section{Introduction}
\label{sec:intro}
Symmetry (translational or rotational) prevailing in coupled dynamical networks due to the coupling geometry manifests itself in a
wide variety of natural systems and in their intriguing macroscopic dynamical states~\cite{tstp2017}. Nevertheless, symmetry breaking
couplings have been shown to be a source of a plethora of collective dynamical behaviors that are inherent to them
and are mostly inaccessible with symmetry preserving couplings. In particular, networks of the paradigmatic Stuart-Landau oscillators
with symmetry breaking coupling have been employed to unravel several collective dynamical states that mimic a variety
of collective patterns observed in nature and technology.
For instance, symmetry breaking coupling facilitates the transition from a homogeneous to an inhomogeneous steady state~\cite{akev2013},
and symmetry breaking interaction has been identified as an essential
feature for the genesis of partially coherent inhomogeneous spatial patterns, namely the chimera death state~\cite{zamk2014,tb2015,kpvkc2015}.
Multicluster oscillation death states have been observed in nonlocally coupled Stuart-Landau oscillators with symmetry breaking coupling~\cite{ismk2015}.
Further, the interplay of the nonisochronicity parameter and the symmetry breaking coupling is found to facilitate the onset of
different variants of chimera death state such as multichimera death state and periodic chimera death states in
nonlocally coupled Stuart-Landau oscillators~\cite{kpvkc2016}.
The effect of the symmetry breaking coupling has also been investigated on the phenomenon of reviving oscillations~\cite{wei2017}.
Recently, the effect of the symmetry breaking mean-field coupling on the phenomenon of the aging transition has also been investigated.
Conjugate coupling, a symmetry breaking coupling, has also been widely employed in the literature~\cite{rkrr2007,amit2012,wei2022}.
Note that the reports pointed out above are only the tip of the iceberg and not an exhaustive list of studies that have employed symmetry breaking coupling
in networks of Stuart-Landau oscillators.
Despite the substantial investigations of the effect of symmetry breaking coupling in networks of Stuart-Landau oscillators, there is a lacuna in
the understanding of its nontrivial role in phase-only models, which indeed allow for an exact analytical treatment of
the macroscopic dynamical states in most cases. In particular, phase models such as the Winfree and Kuramoto models and their variants have been
extensively employed in the literature to investigate the emergence of various intriguing collective dynamical states. Interaction among the phase oscillators in
the Winfree model is modeled by a phase-dependent pulse function and a sensitive function. The former characterizes the mean-field, while the latter characterizes
the response of the individual oscillators to the mean-field~\cite{w:1,w:2}. The Winfree model is one of a kind, representing a class of pulse-coupled
biological oscillators such as flashing fireflies~\cite{In2}, applauding audiences~\cite{In7} and many more.
Interaction among the phase oscillators in the Kuramoto model is modeled by the sine of the difference between the phases of the oscillators, and the model has been widely
employed to investigate the emergence of spontaneous synchronization in a wide variety of biological, chemical, mechanical and physical systems~\cite{Kuramoto:1984,Acebron:2005,In1}. Examples include cardiac pacemakers~\cite{In3},
Josephson junction arrays~\cite{In6}, and power-grids~\cite{In8}.
A recent study has generalized the Kuramoto model by including an additional interaction term that breaks the rotational symmetry of the dynamics explicitly
and unveiled a rich phase diagram with stationary and standing wave phases due to the symmetry breaking interaction~\cite{In10}.
Specifically, the authors have considered unimodal frequency distributions
and revealed the emergence of a stationary state, characterized by time-independent amplitude and phase of the complex Kuramoto order parameter, facilitated by
the symmetry breaking interaction, which is otherwise absent in the original Kuramoto model that preserves the rotational symmetry of the dynamics.
Interestingly, in this work, we elucidate that the Kuramoto model can be translated into the Winfree model by the introduction of an additional
symmetry breaking coupling; consequently, one can obtain the phase diagrams of both models simply by tuning the symmetry breaking parameter $q$,
thereby bridging the dynamics of the two models. Note that the macroscopic dynamical states of pulse-coupled biological oscillators with
different sensitive functions, characterizing the phase-response-curves of biological oscillators, are peculiar to the Winfree model and its generalizations,
which are far from reach for the Kuramoto model and its variants. In particular, we consider both the unimodal and bimodal frequency distributions
to explore the phase diagrams for various values of the symmetry breaking parameter $q$. On the one hand, we observe the typical phase diagram of the Kuramoto model
characterized only by incoherent and standing wave states in the absence of the symmetry breaking interaction for the unimodal frequency distribution.
On the other hand, we observe the phase diagram with incoherent state, standing wave pattern along with the synchronized stationary state and bistabilities among them,
a typical feature of the Winfree model, for $q=1$. For intermediate and increasing values of $q\in(0,1)$, one finds the onset of
the stationary state, eventually the emergence of
bistability among these states in the phase diagram, and an enlargement of the bistable regions, resulting in the phase diagram of the Winfree model.
All three states are also observed in both Kuramoto and Winfree models for symmetric bimodal frequency distributions along with
the region of bistability. The degree of the spread of
the different macroscopic dynamical states depends on the strength of the symmetry breaking parameter $q$. Interestingly, for asymmetric
bimodal frequency distributions, an increase in the degree of asymmetry of the frequency distribution favors the onset of bistable regions even for rather low values of $q$,
which otherwise cannot be observed with the symmetric bimodal and unimodal frequency distributions. We arrive at the phase diagrams by numerical simulation
of the original equations of motion. We deduce the reduced low-dimensional evolution equations of motion for the order parameter using the
Ott-Antonsen ansatz for both unimodal and bimodal frequency distributions. We also deduce the Hopf, pitchfork and saddle-node bifurcation curves
from the governing equations of motion for the order parameters, which mediate the dynamical transitions in the phase diagrams. The homoclinic bifurcation
curve is obtained using the XPPAUT software.
The plan of the paper is as follows. In Sec. II, we generalize the Kuramoto model by introducing a symmetry breaking coupling and elucidate that the latter bridges the Kuramoto model
and the Winfree model. We deduce the reduced low-dimensional evolution equations for the complex order parameters corresponding to the discrete set of generalized
Kuramoto model using the Ott-Antonsen ansatz for both unimodal and bimodal frequency distributions in Sec. III. We also deduce Hopf, pitchfork and saddle-node bifurcation curves
from the evolution equations for the complex order parameters in Sec. III, mediating the dynamical transitions among the incoherent, standing wave and synchronized stationary states.
In Sec. IV, we discuss the observed dynamical states and their transitions in the various phase diagrams. Finally, in Sec. V, we summarize the results.
\section{Model}
We consider a nontrivial generalization of the Kuramoto model
by including an interaction term that explicitly breaks the rotational symmetry of the dynamics~\cite{In10}.
The phase $\theta_i$ is governed by the set of $N$ ordinary differential equations (ODEs),
\begin{align}
\dot{\theta_i}&=\omega_i+\frac{\varepsilon}{N}\sum_{j=1}^{N}\big[\sin(\theta_j-\theta_i)+q\sin(\theta_j+\theta_i)\big],
\label{eq:km2}
\end{align}
for $i=1, \ldots, N$, where $N\gg 1$. Here $\theta_i(t)$ is the phase of the $i$th oscillator at time $t$, $\varepsilon\ge 0$ is the coupling strength, and $q$ is the strength of the symmetry
breaking coupling. Note that Eq.~(\ref{eq:km2}) reduces to the Kuramoto model by setting $q = 0$ and on identifying $\varepsilon$ with the parameter $K>0$.
Equation~(\ref{eq:km2}) can also be viewed as a variant of the celebrated Winfree model~\cite{w1,w2,w3,w4} when $q=1$. The Winfree model takes the form
\begin{align}
\dot{\theta_i}=\omega_i+Q(\theta_i)\sum_{j=1}^{N}P(\theta_j),
\label{eq:wf}
\end{align}
where $P(\theta_j)$ is the phase dependent pulse function and the functional form of the response function $Q(\theta)$ characterizes the phase-response curves
of certain biological oscillators. From Eq.~(\ref{eq:km2}), it easy to recognize that $Q(\theta)={-}2\sin(\theta)$ and $P(\theta)=\cos(\theta)$.
It is also evident that the symmetry breaking parameter `q' bridges the Kuramoto and the Winfree models.
Equation~(\ref{eq:km2}) corresponds to the Kuramoto model when $q=0$ and it corresponds to a variant of the Winfree model when $q=1$, as in Eq.~(\ref{eq:wf}).
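For illustration, Eq.~(\ref{eq:km2}) is straightforward to simulate directly; a minimal Python sketch (our own, with arbitrary parameter values) evolving the phases with the Euler method and monitoring the Kuramoto order parameter reads:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, eps, q, gamma, w0 = 2000, 3.0, 0.5, 1.0, 1.0
# Lorentzian frequency draws via the inverse CDF
omega = w0 + gamma*np.tan(np.pi*(rng.random(N) - 0.5))
theta = 2*np.pi*rng.random(N)

dt = 1e-3
for _ in range(20000):
    z = np.exp(1j*theta).mean()       # Kuramoto order parameter
    # (1/N) sum_j [sin(th_j - th_i) + q sin(th_j + th_i)]
    coupling = (z*np.exp(-1j*theta)).imag + q*(z*np.exp(1j*theta)).imag
    theta += dt*(omega + eps*coupling)

print(abs(np.exp(1j*theta).mean()))   # |z| at the final time
\end{verbatim}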
We consider the frequencies of the phase oscillators to be distributed according to either the unimodal Lorentzian distribution
\begin{align}
g(\omega)&=\frac{\gamma}{\pi((\omega-\omega_0)^2+\gamma^2)};~~\gamma >0,
\label{eq:lor}
\end{align}
or the bimodal Lorentzian distribution
\begin{align}
g(\omega)&=\frac{1}{\pi}\left[\frac{\gamma_1}{((\omega-\omega_0)^2+\gamma_1^2)}+\frac{\gamma_2}{((\omega+\omega_0)^2+\gamma_2^2)}\right];~~~~~~~~\gamma_1, \gamma_2 >0.
\label{eq:bil}
\end{align}
Here $\gamma$, $\gamma_1$ and $\gamma_2$ are the width parameters (half width at half maximum)
of the Lorentzians and $\pm\omega_0$ are their central frequencies. Note that
$\omega_0$ corresponds to the degree of detuning in the system, which is
proportional to the separation between the two central frequencies.
Note that the bimodal distribution $g(\omega)$ is symmetric about zero when $\gamma_1=\gamma_2$.
It is also to be noted that $g(\omega)$ in Eq.~(\ref{eq:bil}) is bimodal if and only if the separation between the central
frequencies is sufficiently greater than the widths. To be precise, it is required that $\omega_0 > \gamma_{1,2}/\sqrt{3}$ for the distribution to be bimodal;
otherwise the classical results for the unimodal distribution hold.
Heterogeneity in the frequency distribution plays a crucial role in the manifestation of a plethora of collective dynamics
in a vast variety of natural systems. In particular, coexisting co-rotating and counter-rotating systems, characterized by positive and negative frequencies, respectively,
are widespread in nature. For instance, counter-rotating spirals are observed in the protoplasm of the Physarum plasmodium~\cite{ref1}, and counter-rotating vortices are inevitable in the atmosphere and ocean~\cite{ref2,ref3, ref4}, in the magnetohydrodynamics of plasma flows~\cite{ref8}, in Bose-Einstein condensates~\cite{ref9,ref10}, and in other physical systems~\cite{ref5,ref6,ref7}.
Very recently, counter-rotating frequency induced dynamical effects were also reported in coupled Stuart-Landau oscillators with symmetry
preserving as well as symmetry breaking couplings~\cite{ref11}. The coexistence of co-rotating and counter-rotating oscillators was initially identified
by Tabor~\cite{ref12}, which was followed by a series of works employing co-rotating and counter-rotating oscillators.
All these physical systems strongly suggest that counter-rotating time-evolving dynamical
systems indeed exist in nature and play a pertinent role in the manifestation of their intriguing collective dynamics.
In the following, we will deduce the low-dimensional evolution equations for the
complex macroscopic order parameters corresponding to both the unimodal and bimodal frequency distributions using the Ott-Antonsen (OA) ansatz~\cite{Ott:2008,Ott:2009}.
Subsequently, we also deduce the various bifurcation curves facilitating the dynamical transitions among the observed dynamical states in the phase diagrams.
\section{Low-dimensional evolution equations for the macroscopic order parameters}
We now provide an analysis of the dynamics~(\ref{eq:km2}), in the limit
$N \to \infty$, by invoking the Ott-Antonsen ansatz. In this limit, the dynamics of the discrete set of equations (\ref{eq:km2}) can be
captured by the probability distribution function $f(\theta,\omega,t)$, defined such that $f(\theta,\omega,t){\rm d}\theta$
gives the probability of oscillators with phase in the range
$[\theta,\theta+{\rm d}\theta]$ at time $t$. The distribution is
$2\pi$-periodic in $\theta$ and obeys the normalization
\begin{equation}
\int_0^{2\pi} {\rm d}\theta~f(\theta,\omega,t)=g(\omega)~\forall~\omega.
\label{eq:norm}
\end{equation}
Since the dynamics (\ref{eq:km2}) conserves the number of
oscillators with a given $\omega$, the time evolution of $f$ follows the
continuity equation
\begin{equation}
\frac{\partial f}{\partial t}+\frac{\partial(fv) }{\partial
\theta}=0,
\label{eq:continuity-equation}
\end{equation}
where $v(\theta,\omega,t)$ is the angular velocity of the oscillators. From Eq. (\ref{eq:km2}), we have,
\begin{equation}
v(\theta,\omega,t)=\omega+\frac{\varepsilon}{2i}[(ze^{-i\theta}-z^\star e^{i\theta})+q(ze^{i\theta}-z^\star e^{-i\theta})],
\end{equation}
where $z^\star$ denotes the complex conjugate of the macroscopic order parameter defined as
\begin{equation}
z=\int_{-\infty}^{\infty} g(\omega) \int_0^{2\pi} f(\theta, \omega, t)e^{i\theta}d\theta d\omega.
\label{eq:mo}
\end{equation}
Now, $f(\theta, \omega, t)$ can be expanded in terms of Fourier series of the form
\begin{equation}
f(\theta,\omega,t)=\frac{g(\omega)}{2\pi}\left[1+\sum_{n=1}^\infty
\left(\alpha_n(\omega,t) e^{i n\theta}+{\rm c.c.}\right)\right],
\label{eq:f-Fourier}
\end{equation}
where, $\alpha_n(\omega,t)$ is the $n$th Fourier
coefficient, while c.c. denotes complex conjugation of the preceding sum
within the brackets. The normalization condition in
(\ref{eq:norm}) is satisfied by the presence of the prefactor of $g(\omega)$ in (\ref{eq:f-Fourier}).
The Ott-Antonsen ansatz consists in assuming~\cite{Ott:2008,Ott:2009}
\begin{equation}
\alpha_n(\omega,t)=\left[\alpha(\omega,t)\right]^n.
\label{eq:OA}
\end{equation}
Now, it is straightforward to obtain
\begin{equation}
\frac{\partial\alpha}{\partial t}+i\omega\alpha+\frac{\varepsilon}{2}\left[(z\alpha^2-z^\star)+q(z-z^\star\alpha^2)\right]=0,
\label{eq:12}
\end{equation}
where,
\begin{equation}
z^\star=\int_{-\infty}^{\infty}\alpha(t,\omega)g(\omega)d\omega.
\label{eq:13}
\end{equation}
\subsection{Unimodal Distribution}
\label{sec:ud}
Substituting the partial fraction expansion of the unimodal frequency distribution $g(\omega)$ (\ref{eq:lor}) in Eq.~(\ref{eq:13}) and evaluating the
integral using an appropriate contour integral, one can obtain the order parameter as
\begin{equation}
z(t)=\alpha^\star(\omega_0-i\gamma,t).
\label{eqod}
\end{equation}
From (\ref{eq:12}) and (\ref{eqod}), one can obtain the evolution equation for the complex order parameter as
\begin{align}
\frac{\partial z}{\partial t}-i(\omega_0+i\gamma)z+\frac{\varepsilon}{2}\Big[\big(|z|^2z-z\big)+q\big(z^\star -z^3\big)\Big]=0.
\label{eq:z-dynamics}
\end{align}
The above evolution equation for the complex order parameter $z(t)=r(t)e^{i\psi(t)}$ can be expressed in terms of the evolution equations in $r$ and
$\psi$ as
\begin{subequations}
\begin{eqnarray}
&\frac{{\rm d}r}{{\rm d}t}&=-\gamma
r-\frac{r\varepsilon}{2}(r^2-1)(1-q\cos(2\psi)),\\
&\frac{{\rm d}\psi}{{\rm
d}t}&=\omega_0+\frac{{\varepsilon}q}{2}(r^2+1)\sin(2\psi).
\end{eqnarray}
\label{eq:r-dynamics}
\end{subequations}
The above equations govern the reduced low-dimensional order parameter
dynamics, which actually corresponds to the dynamics of the original discrete set of equations~(\ref{eq:km2}) in the limit $N
\to \infty$ for the unimodal Lorentzian distribution function $g(\omega)$~(\ref{eq:lor}). Now, we discuss the various asymptotic macroscopic dynamical states
admitted by Eq.~(\ref{eq:r-dynamics}).
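These states can also be conveniently explored by direct numerical integration of Eq.~(\ref{eq:r-dynamics}); a minimal Python sketch (ours; the parameter values are arbitrary) reads:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

eps, q, gamma, w0 = 3.0, 0.5, 1.0, 1.0

def rhs(t, y):
    r, psi = y
    dr = -gamma*r - 0.5*eps*r*(r**2 - 1)*(1 - q*np.cos(2*psi))
    dpsi = w0 + 0.5*eps*q*(r**2 + 1)*np.sin(2*psi)
    return [dr, dpsi]

sol = solve_ivp(rhs, (0, 100), [0.1, 0.0], rtol=1e-8, atol=1e-10)
# a time-independent r at late times signals IC (r = 0) or SS (r > 0);
# an oscillating r signals the standing wave state
print(sol.y[0, -1])
\end{verbatim}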
\subsubsection{Incoherent (IC) state:}
\label{subsec:iss}
The incoherent (IC) state is characterized by time independent $z$ satisfying
$z=z^\star=0$ (thus representing a stationary state of the
dynamics~(\ref{eq:r-dynamics})); correspondingly, one has $r=0$.
The linear stability of such a state is determined by linearizing
Eq.~(\ref{eq:z-dynamics}) around $z=0$. Writing $z=u$ with $|u|\ll 1$, we obtain
\begin{equation}
\frac{\partial u}{\partial t}+(\gamma-i \omega_0)u-\frac{\varepsilon}{2}\big[u-q u^{\star}\big]=0.
\label{eq:u-dynamics}
\end{equation}
Decomposing $u=u_x + i u_y$ yields
\begin{equation}
\frac{\partial }{\partial t}
\begin{bmatrix}
u_x \\
u_y
\end{bmatrix}
=M
\begin{bmatrix}
u_x \\
u_y
\end{bmatrix};\\
\end{equation}
\begin{equation}
~~M \equiv \begin{bmatrix}
-\gamma+\frac{\varepsilon}{2}\big[1-q\big] & -\omega_0 \\\\
~\omega_0& -\gamma+\frac{\varepsilon}{2}\big[1+q\big] \\
\end{bmatrix}.
\label{eq:M-matrix} \nonumber
\end{equation}
The matrix $M$ has the characteristic eigenvalues
\begin{equation}
\lambda_{1,2}=\frac{-2\gamma+\varepsilon \pm \sqrt{\Delta}}{2},
\label{eq:M-eigenvalues}
\end{equation}
with
$\Delta=(\varepsilon^2q^2-4\omega_0^2)$. Note that we have $\lambda_1 > \lambda_2$. The
stability threshold for the incoherent state is then obtained by analysing $\lambda_1$
as a function of $\varepsilon$ and $q$, and seeking
the particular value of $\varepsilon$ at which $\lambda_1$
vanishes for a given $q$. The stability threshold can be obtained as
\begin{align}
&\varepsilon_{HB}=2 \gamma ,~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~ \;\;\;
\mbox{for}\;\; \Delta \le 0, \label{hb}\\
&\varepsilon_{PF}=2 \sqrt{\frac{\gamma^2+\omega_0^2}{1+q^2}} ~~~~~~~~~~~~~~~~~~~~ \mbox{for}
\;\; \Delta>0. \label{eq:ISS}
\end{align}
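As a quick numerical sanity check of (\ref{eq:M-eigenvalues}), one may compare the closed-form eigenvalues with a direct numerical diagonalization of $M$ (a small sketch of ours, with arbitrary parameter values):
\begin{verbatim}
import numpy as np

gamma, w0, q, eps = 1.0, 0.7, 0.6, 1.5
M = np.array([[-gamma + eps*(1 - q)/2, -w0],
              [w0, -gamma + eps*(1 + q)/2]])
delta = eps**2*q**2 - 4*w0**2
lam = (-2*gamma + eps + np.sqrt(delta + 0j)*np.array([1, -1]))/2
assert np.allclose(np.sort(np.linalg.eigvals(M)), np.sort(lam))
\end{verbatim}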
\subsubsection{Synchronized stationary state (SSS):}
\label{subsec:sss}
Now, we explore the
possibility of the existence of the synchronized stationary state.
Requiring that $r$ and $\psi$ take time-independent non-zero
values in this case, and hence equating the left hand sides of equations (\ref{eq:r-dynamics})
to zero, we obtain the two coupled equations for the synchronized stationary state:
\begin{subequations}
\begin{eqnarray}
&\frac{\varepsilon q}{2}\cos(2\psi)&=\frac{\gamma}{(r^2-1)}+\frac{\varepsilon}{2}, \\
&\frac{\varepsilon q}{2}\sin(2\psi)&=-\frac{\omega_0}{(r^2+1)}.
\label{eq:SSS-dynamics}
\end{eqnarray}
\end{subequations}
With some algebra, one can obtain the following expressions for the stationary $r$ and
$\psi$:
\begin{subequations}
\begin{eqnarray}
&\frac{\varepsilon^2q^2}{4}&=\bigg(\frac{\gamma}{(r^2-1)}+\frac{\varepsilon}{2}\bigg)^2+\bigg(\frac{\omega_0}{(r^2+1)}\bigg)^2,\\
&\tan(2\psi)&=\frac{(1-r^2)(\omega_0)}{(r^2+1)(\gamma+\frac{\varepsilon}{2}(r^2-1))}.
\end{eqnarray}
\label{eq:stability-SSS}
\end{subequations}
$r$ and $\psi$ can be calculated for a fixed set of parameters by numerically solving the above set of equations;
the solution is then
substituted back into the evolution equations for the low-dimensional order parameters to deduce the characteristic equation.
The eigenvalues of the characteristic equation are then used to determine the saddle-node bifurcation curve in a suitable two-parameter phase diagram.
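For instance, Eqs.~(\ref{eq:stability-SSS}) can be solved with a standard root finder; a minimal sketch (ours; the parameter values and the initial guess are arbitrary) reads:
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

eps, q, gamma, w0 = 3.0, 1.0, 1.0, 1.0

def stationary(y):
    # residuals of the two stationary-state conditions
    r, psi = y
    return [0.5*eps*q*np.cos(2*psi) - gamma/(r**2 - 1) - 0.5*eps,
            0.5*eps*q*np.sin(2*psi) + w0/(r**2 + 1)]

r, psi = fsolve(stationary, [0.8, -1.3])
print(r, psi)
\end{verbatim}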
\subsection{Bimodal Distribution}
Now, we deduce the low-dimensional evolution equations for the macroscopic order parameters of the original
discrete set of equations~(\ref{eq:km2}) in the limit $N
\to \infty$ for the asymmetric bimodal Lorentzian distribution $g(\omega)$~(\ref{eq:bil}). Expanding the latter using partial fractions and
evaluating the integral in Eq.~(\ref{eq:13}) using an appropriate contour integral, one can obtain the complex order parameter as
\begin{equation}
z(t)=\frac{1}{2}[z_1(t)+z_2(t)],
\end{equation}
where
\begin{equation}
z_{1,2}(t)=\alpha^\star(\pm\omega_0-i\gamma_{1,2},t).
\end{equation}
\begin{figure*}[!]
\centering
\includegraphics[width=12cm]{bim1.eps}
\caption{Phase diagrams in the $(\omega_0/\gamma, \varepsilon/\gamma)$ parameter space for the generalized Kuramoto model (\ref{eq:km2}) with unimodal frequency distribution
for different values of the symmetry breaking parameter $q$. (a) $q=0.0$, (b) $q=0.1$, (c) $q=0.5$, and (d) $q=1.0$. The line connected by filled squares is
the Hopf bifurcation curve $\varepsilon_{HB}$ (Eq.~(\ref{hb})), solid line corresponds to the pitchfork bifurcation curve $\varepsilon_{PF}$ (Eq.~(\ref{eq:ISS}))
dashed line corresponds to the saddle-node bifurcation curve (Eq.~(\ref{eq:stability-SSS})), and the dashed dotted line correspond to the homoclinic bifurcation curve
obtained using the software XPPAUT. Bistability between the standing wave (SW) state and the synchronized stationary (SS) state is represented by dark shaded region enclosed by
the saddle-node bifurcation curve and the homoclinic bifurcation curve. Bistability between the incoherent {(IC)} and the {SS} state is represented by light grey shaded region enclosed by the saddle-node bifurcation curve and the pitchfork bifurcation curve.}
\label{fig:i}
\end{figure*}
\begin{figure*}[!]
\hspace*{-1.0cm}
\includegraphics[width=12.99cm]{bim2.eps}
\caption{Time averaged order parameter $R$ and the Shinomoto-Kuramoto order parameter $\xi$ for the generalized Kuramoto model (\ref{eq:km2}) with unimodal frequency distribution as a function of $\varepsilon/\gamma$
for $\omega_0/\gamma=1$. (a) and (d) $q=0.0$, (b) and (e) $q=0.5$, and (c) and (f) $q=1.0$. The forward trace is indicated by the line connected by open circles,
while the reverse trace is indicated by the line connected by closed circles. The states indicated by IC, SW and SS correspond to the incoherent, standing wave,
and synchronized stationary states, respectively. The bifurcation curves $\varepsilon_{HB}, \varepsilon_{Hc}, \varepsilon_{PF}$ and $\varepsilon_{SN}$ correspond to the
Hopf, homoclinic, pitchfork and saddle-node bifurcation curves, respectively.}
\label{fig:ai}
\end{figure*}
Substituting it into Eq.\ (\ref{eq:12}) yields two
coupled complex ordinary differential equations describing the evolution of two suborder parameters as
\begin{align}
\dot{z}_1=&-(\gamma_1+i\omega_0)z_1+\frac{\varepsilon}{4}[(z_1+z_2-(z_1^\star+z_2^\star)z_1^2)\nonumber\\&+q((z_1+z_2)z_1^2-(z_1^\star+z_2^\star))],\label{eq:z1}\\
\dot{z}_2=&-(\gamma_2-i\omega_0)z_2+\frac{\varepsilon}{4}[(z_1+z_2-(z_1^\star+z_2^\star)z_2^2)\nonumber\\&+q((z_1+z_2)z_2^2-(z_1^\star+z_2^\star))].\label{eq:z2}
\end{align}
The above evolution equations for the complex order parameters $z_{1,2}(t)=r_{1,2}(t)e^{i\psi_{1,2}(t)}$ can be expressed in terms of the evolution equations for $r_{1,2}$ and
$\psi_{1,2}$ as
\begin{subequations}
\begin{align}
\frac{{\rm d}r_1}{{\rm d}t}&=-\gamma_1
r_1+\frac{\varepsilon}{4}\big[(1-r_1^2)(r_1+r_2\cos(\psi_2-\psi_1))\nonumber\\&+q((r_1^2-1)(r_1\cos(2\psi_1)+r_2\cos(\psi_2+\psi_1)))\big],~~~\\
\frac{{\rm d}\psi_1}{{\rm
d}t}&=-\omega_0+\frac{{\varepsilon}}{4r_1}(r_1^2+1)\big[r_2\sin(\psi_2-\psi_1)\nonumber\\&+q(r_1\sin(2\psi_1)+r_2\sin(\psi_2+\psi_1))\big].
\end{align}
\label{eq:r-bim}
\end{subequations}
and
\begin{subequations}
\begin{align}
\frac{{\rm d}r_2}{{\rm d}t}&=-\gamma_2
r_2+\frac{\varepsilon}{4}\big[(1-r_2^2)(r_1\cos(\psi_2-\psi_1)+r_2)\nonumber\\&+q((r_2^2-1)(r_1\cos(\psi_2+\psi_1)+r_2\cos(2\psi_2)))\big],~~~~\\
\frac{d\psi_2}{dt}&=\omega_0-\frac{{\varepsilon}}{4r_2}(r_2^2+1)\big[r_1\sin(\psi_2-\psi_1)\nonumber\\&-q(r_1\sin(\psi_2+\psi_1)+r_2\sin(2\psi_2))\big].
\end{align}
\label{eq:r2-bim}
\end{subequations}
The above equations constitute the evolution equations for reduced low-dimensional order parameters
corresponding to the dynamics~(\ref{eq:km2}) in the limit $N
\to \infty$ and for the case of the asymmetric bimodal Lorentzian distribution $g(\omega)$~(\ref{eq:bil}).
Now, we discuss the various asymptotic macroscopic dynamical states
admitted by Eqs.~(\ref{eq:r-bim}) and (\ref{eq:r2-bim}).
\subsubsection{Incoherent state}
The incoherent state is defined by $r_1$=$r_2$=0.
A linear stability analysis of the fixed point $(z_1,z_2)$ = (0, 0) results in the stability condition,
\begin{align}
\omega_0^2=\frac{1}{4}(\varepsilon a_1-2a_2+\sqrt{\varepsilon^2q^2a_1-4\varepsilon a_3^2+4a_3^2a_1}),
\label{eq:pf}
\end{align}
where, $a_1=\gamma_1+\gamma_2, a_2=\gamma_1^2+\gamma_2^2$ and $a_3=\gamma_1-\gamma_2$.
This stability curve actually corresponds to the pitchfork bifurcation curve, across which the fixed point $(z_1,z_2) = (0, 0)$ (incoherent state) loses its stability, leading
to the synchronized stationary state. Note that the incoherent state can also lose its stability through a Hopf bifurcation, which results in the stability condition
\begin{align}
\omega_0^2=&\frac{1}{4}(\varepsilon-2b_1)^4(\varepsilon^2(q^2-1)-16b_2+4\varepsilon b_1)^2\bigg[\varepsilon^5(q-1)b_1-\varepsilon^4(q^2-1)\big((q^2-8)b_3\nonumber\\&+2b_2(q^2-10)\big)-4\varepsilon^3(q^2-2)\big(3(\gamma_1^3+\gamma_2^3)+13b_2b_1\big)+4\varepsilon^2(b_1)^2\big(b_3(q^2-8)\nonumber\\&+2b_2(3q^2-20)\big)+16\varepsilon b_1^3(b_3+10b_2)-64b_2b_1^4\bigg],
\label{eq:hb}
\end{align}
where $b_1=\gamma_1+\gamma_2$, $b_2=\gamma_1\gamma_2$, and $b_3=\gamma_1^2+\gamma_2^2$. The above stability curve
corresponds to the Hopf bifurcation curve.
The boundary of the stable incoherent state is therefore formed by both the pitchfork and the Hopf bifurcation curves.
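Independently of the closed-form expressions above, these boundaries can be cross-checked numerically. The sketch below (our own illustration) finite-differences the right-hand side of Eqs.~(\ref{eq:z1})--(\ref{eq:z2}) at the origin to obtain the Jacobian of the incoherent state, and scans $\varepsilon$ for the leading eigenvalue crossing zero, classifying the instability as a Hopf (complex leading eigenvalue) or pitchfork (real leading eigenvalue) bifurcation:
\begin{verbatim}
# Sketch: linear stability of the incoherent state (z1, z2) = (0, 0).
# The Jacobian is obtained by finite-differencing the reduced equations
# (eq:z1)-(eq:z2) in real coordinates (Re z1, Im z1, Re z2, Im z2).
import numpy as np

g1, g2, w0, q = 1.0, 1.0, 1.0, 0.5

def rhs(y, eps):
    z1, z2 = y[0] + 1j * y[1], y[2] + 1j * y[3]
    Zs, Zc = z1 + z2, np.conj(z1 + z2)
    dz1 = -(g1 + 1j * w0) * z1 + (eps / 4) * ((Zs - Zc * z1**2)
                                              + q * (Zs * z1**2 - Zc))
    dz2 = -(g2 - 1j * w0) * z2 + (eps / 4) * ((Zs - Zc * z2**2)
                                              + q * (Zs * z2**2 - Zc))
    return np.array([dz1.real, dz1.imag, dz2.real, dz2.imag])

def jacobian(eps, h=1e-7):
    J = np.empty((4, 4))
    for k in range(4):
        e = np.zeros(4); e[k] = h
        J[:, k] = (rhs(e, eps) - rhs(-e, eps)) / (2 * h)
    return J

for eps in np.linspace(0.05, 5.0, 200):
    lead = max(np.linalg.eigvals(jacobian(eps)), key=lambda l: l.real)
    if lead.real > 0:
        kind = "Hopf (to SW)" if abs(lead.imag) > 1e-6 else "pitchfork (to SS)"
        print(f"IC destabilizes near eps = {eps:.2f} via {kind}")
        break
\end{verbatim}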
\subsubsection{Synchronized stationary state}
Deducing the solution for the synchronized stationary state for the asymmetric bimodal distribution may not be possible since $r_1~\ne~r_2$ and $\psi_1~\ne~\psi_2$. However, for the symmetric bimodal distribution, characterized by $r_1~=~r_2$ and $\psi_1~=~-\psi_2$, one can deduce the equations for $r$ and $\psi$ as in (\ref{eq:stability-SSS})
and obtain the saddle-node bifurcation curves as pointed out in Sec.~\ref{subsec:sss}.
\begin{figure*}[!]
\centering
\hspace*{-1cm}
\includegraphics[width=13cm,height=9.5cm]{bim3.eps}
\caption{Phase diagrams in the $q-\varepsilon/\gamma$ parameter space for the generalized Kuramoto model (\ref{eq:km2}) with unimodal frequency distribution
for increasing degree of heterogeneity of the frequency distribution. (a) $\omega_0/\gamma=0.4$, (b) $\omega_0/\gamma=0.6$, (c) $\omega_0/\gamma=1.0$, and
(d) $\omega_0/\gamma=1.2$. The bifurcation curves and dynamical states are similar to those in Fig.~\ref{fig:i}. }
\label{fig:ii}
\end{figure*}
\begin{figure*}[!]
\centering
\hspace*{-1cm}
\includegraphics[width=13cm]{bim4.eps}
\caption{Phase diagrams in the $\omega_0/\gamma-\varepsilon/\gamma$ parameter space for the generalized Kuramoto model (\ref{eq:km2}) with symmetric
bimodal frequency distribution for increasing values of the strength of the symmetry breaking coupling. (a) $q=0.0$, (b) $q=0.5$, (c) $q=0.8$, and (d) $q=1.0$.
The bifurcation curves and dynamical states are similar to those in Fig.~\ref{fig:i}.}
\label{fig:iii}
\end{figure*}
\begin{figure}[!]
\hspace*{-1cm}
\centering
\includegraphics[width=10cm]{bim5.eps}
\caption{Phase diagrams in the $\omega_0/\gamma_2-\varepsilon/\gamma_2$ parameter space for the generalized Kuramoto model (\ref{eq:km2}) with asymmetric
bimodal frequency distribution for increasing values of the strength of the symmetry breaking coupling and of the asymmetry between the two Lorentzians.
(a) and (c) $\gamma_1/\gamma_2=0.6$, and (b) and (d) $\gamma_1/\gamma_2=1.2$. (a) and (b) $q=0.1$, and (c) and (d) $q=1$. The bifurcation curves and
dynamical states are similar to those in Fig.~\ref{fig:i}. }
\label{fig:iv}
\end{figure}
\begin{figure}[!]
\hspace*{-1cm}
\centering
\includegraphics[width=10cm]{bim6.eps}
\caption{Phase diagrams in the $q-\varepsilon/\gamma_2$ parameter space (first row) for $\omega_0/\gamma_2=1$ and in the $q-\omega_0/\gamma_2$ parameter space (second row)
for $\varepsilon/\gamma_2=2.5$ for the generalized Kuramoto model
(\ref{eq:km2}) with asymmetric bimodal frequency distribution. (a) and (c) $\gamma_1/\gamma_2= 0.6$, and (b) and (d) $\gamma_1/\gamma_2= 1.2$.}
\label{fig:v}
\end{figure}
\section{Numerical Results}
In this section, we will proceed to unravel the macroscopic dynamical states admitted by the generalized Kuramoto model (\ref{eq:km2}) with explicit symmetry breaking coupling
by constructing appropriate two-parameter phase diagrams and classifying the underlying dynamical states from a numerical analysis of the
governing equations of the original discrete model. Specifically, we will unravel the rich phase diagrams of the generalized Kuramoto model,
using both unimodal and bimodal frequency distributions, for distinct values of the symmetry breaking parameter $q$.
The number of oscillators is fixed as $N = 10^4$, and we use the standard 4th-order Runge-Kutta integration scheme with integration step size $h = 0.01$
to solve the generalized Kuramoto model (\ref{eq:km2}).
Note that one can break the two-parameter phase diagram into several segments and simulate multiple copies of the same code simultaneously
for different values of the parameters; the resulting data can then be concatenated to obtain the complete phase diagrams on a reasonable workstation.
The initial state of the oscillators ($\theta_i$'s) is distributed with uniform random values between $-\pi$ and $+\pi$.
We use the time averaged order parameter $R$ defined as
\begin{equation}
R=\lim_{t \to \infty}\frac{1}{\tau}\int_{t}^{t+\tau}r(t)dt,
\label{eq:R}
\end{equation}
where $r(t)=\vert Z\vert=\vert N^{-1}\sum_{j=1}^{N}e^{i\theta_j} \vert$.
The incoherent state is characterized by $R=r(t)=0$, while the synchronized stationary state is characterized by $R=r(t)={\rm const}$. The standing wave state is
characterized by the oscillating nature of $r(t)$. In order to distinguish the synchronized stationary state from the standing wave state more clearly, we
use the Shinomoto-Kuramoto order parameter~\cite{gallego,ssyk1986}
\begin{align}
\xi=\overline{\vert r(t) - R\vert},
\end{align}
where the overbar denotes the long-time average. The Shinomoto-Kuramoto order parameter takes the value $\xi=0$ for the incoherent and synchronized stationary states,
whereas it takes a nonzero value for the standing wave state.
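For concreteness, a minimal simulation sketch follows (our own illustration, for the unimodal single-Lorentzian case). The microscopic coupling is written here as $\sin(\theta_j-\theta_i)+q\sin(\theta_j+\theta_i)$, which is our reading of (\ref{eq:km2}): it reduces to the Kuramoto coupling at $q=0$ and to a Winfree-type coupling at $q=1$, consistent with the reduced equations above. A smaller $N$ than the $N=10^4$ used in the text keeps the sketch fast:
\begin{verbatim}
# Sketch: RK4 simulation of the discrete model and the order parameters
# R (eq:R) and xi. The coupling sin(th_j - th_i) + q sin(th_j + th_i) is
# our reading of (eq:km2); frequencies are Lorentzian with center w0 and
# half-width gamma. N is reduced from the paper's 10^4 for speed.
import numpy as np

rng = np.random.default_rng(0)
N, h, steps, burn = 2000, 0.01, 20_000, 10_000
q, eps, gamma, w0 = 0.5, 2.5, 1.0, 1.0
omega = w0 + gamma * np.tan(np.pi * (rng.random(N) - 0.5))
theta = rng.uniform(-np.pi, np.pi, N)

def dtheta(th):
    Z = np.exp(1j * th).mean()   # mean-field order parameter
    return omega + eps * (np.imag(Z * np.exp(-1j * th))
                          + q * np.imag(Z * np.exp(1j * th)))

r_hist = []
for step in range(steps):
    k1 = dtheta(theta)
    k2 = dtheta(theta + 0.5 * h * k1)
    k3 = dtheta(theta + 0.5 * h * k2)
    k4 = dtheta(theta + h * k3)
    theta = theta + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    if step >= burn:
        r_hist.append(np.abs(np.exp(1j * theta).mean()))

r_hist = np.array(r_hist)
R = r_hist.mean()                  # time-averaged order parameter
xi = np.abs(r_hist - R).mean()     # Shinomoto-Kuramoto order parameter
print(f"R = {R:.3f}, xi = {xi:.4f}")   # xi > 0 flags a standing wave
\end{verbatim}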
\subsubsection{Phase diagrams for the unimodal distribution}
We have depicted phase diagrams in the ($\omega_0/\gamma, \varepsilon/\gamma$) parameter space for different values of the symmetry breaking
parameter $q$ in Fig.~\ref{fig:i} in order to understand the effect of the explicit symmetry breaking interaction on the dynamics of Eq. (\ref{eq:km2}) with
unimodal frequency distribution. The phase diagram is demarcated into different dynamical regions using the value of the time averaged order parameter $R$ and
the Shinomoto-Kuramoto order parameter $\xi$.
Incoherent state (IC), synchronized stationary state (SS) and standing wave (SW), along with the bistable regions (dark and light gray shaded regions) are observed in the phase
diagram. The parameter space indicated by the light gray shaded region corresponds to the bistable regime between the incoherent and the synchronized stationary states,
while that indicated by the dark gray shaded region corresponds to the bistable regime between the standing wave and the synchronized stationary states.
Only the incoherent and standing wave states are observed in the phase diagram for $q=0$ (see Fig.~\ref{fig:i}(a)), which is a typical phase diagram of the Kuramoto model
with unimodal frequency distribution. The line connected by the filled squares corresponds to the Hopf bifurcation curve, across which there is a transition
from the incoherent state to the standing wave state. Note that a finite value of $q$ results in the loss of the rotational symmetry of the dynamics of the Kuramoto oscillators.
Even a feeble value of $q$ manifests the synchronized stationary state in a rather large region of the parameter space at the cost of the standing wave state (see Fig.~\ref{fig:i}(b) for $q=0.1$).
There is a transition from the incoherent state to the standing wave state via
the Hopf bifurcation curve $\varepsilon_{HB}$ (indicated by the line connected by filled squares) as a function of $\varepsilon/\gamma$ for $\omega_0/\gamma>0.1$.
The standing wave state loses its stability via the homoclinic bifurcation (indicated by the dashed-dotted line) as a function of $\varepsilon/\gamma$ resulting
in the synchronized stationary state. There is also a transition from the incoherent state to the synchronized stationary state for $\omega_0/\gamma\le 0.1$ as a function of
$\varepsilon/\gamma$ via the pitchfork bifurcation curve $\varepsilon_{PF}$ indicated by the solid line.
Larger values of the symmetry breaking parameter result in the emergence of bistability between the standing wave and the synchronized stationary states (indicated by the
dark shaded region) enclosed by the saddle-node bifurcation curve (indicated by the dashed line) and the homoclinic bifurcation curve (see Fig.~\ref{fig:i}(c) for $q=0.5$).
There is also a bistable region between the incoherent state and the synchronized stationary state (indicated by light grey shaded region) enclosed by the
saddle-node bifurcation curve
and the pitchfork bifurcation curve. For $q=1$, both bistable regions are enlarged in the phase diagram (see Fig.~\ref{fig:i}(d)), which is a typical phase diagram of
the Winfree model with the unimodal frequency distribution. The phase diagrams for $q=0.5$ and $1.0$ exhibit similar dynamics except for the regime shift and the enhanced bistabilities
in a larger parameter space.
Thus, as the value of $q$ is increased from zero to unity, one observes a transition from the phase diagram of the Kuramoto model to that of the Winfree model.
Note that the Hopf, saddle-node and pitchfork bifurcation curves are the analytical bifurcation curves, Eqs.~(\ref{hb}), (\ref{eq:ISS}) and (\ref{eq:stability-SSS}) respectively, obtained from the
low-dimensional evolution equations for the order parameters deduced in Sec.~\ref{sec:ud}. The homoclinic bifurcation curve is obtained using the software XPPAUT~\cite{xpp}.
The time averaged order parameter $R$ and the Shinomoto-Kuramoto order parameter $\xi$ are depicted in Fig.~\ref{fig:ai} as a function of $\varepsilon/\gamma$ for different values of the symmetry breaking parameter $q$ and
$\omega_0/\gamma$. The forward trace is indicated by the line connected by open circles, while the backward trace is indicated by the line
connected by closed circles. There is a smooth (second order) transition from the incoherent to the standing wave states via the Hopf bifurcation $\varepsilon_{HB}$
at $\varepsilon/\gamma=2$ during both forward and reverse traces for $q=0.0$ and $\omega_0/\gamma=1$ as depicted in Figs.~\ref{fig:ai}(a) and \ref{fig:ai}(d).
In addition to the
smooth transition from the incoherent state to the standing wave state via the Hopf bifurcation $\varepsilon_{HB}$ at $\varepsilon/\gamma=2$, there is another
smooth transition from the standing wave state to the synchronized stationary state via the homoclinic bifurcation $\varepsilon_{Hc}$ at $\varepsilon/\gamma=2.94$ in both the
forward and reverse traces as shown in Fig.~\ref{fig:ai}(b) for $q=0.5$ and $\omega_0/\gamma=1$. The transition from the standing wave state to the
synchronized stationary state is also corroborated by the sharp fall of the Shinomoto-Kuramoto order parameter $\xi$ to the null value (see Fig.~\ref{fig:ai}(e)).
In contrast, there is an abrupt (first order) transition
from the incoherent state to the synchronized stationary state at $\varepsilon/\gamma=2$ via the pitchfork bifurcation curve $\varepsilon_{PF}$
for $\omega_0/\gamma=1$ during the forward trace, whereas there is an abrupt transition from the synchronized stationary state to the incoherent state at
$\varepsilon/\gamma=1.8$ via the saddle-node bifurcation $\varepsilon_{SN}$ during the reverse trace (see Fig.~\ref{fig:ai}(c) for $q=1.0$)
elucidating the presence of hysteresis and bistability between the incoherent state and the synchronized stationary state. The Shinomoto-Kuramoto order parameter $\xi$
takes the null value, in the entire range of $\varepsilon/\gamma$ in Fig.~\ref{fig:ai}(f) for $q=1.0$, characterizing both the incoherent and the synchronized stationary states.
The observed dynamical states and their transitions are depicted in the $(q, \varepsilon/\gamma)$ parameter space for different $\omega_0/\gamma$ in Fig.~\ref{fig:ii}.
The bifurcations mediating the dynamical transitions are similar to those observed in Fig.~\ref{fig:i}. The phase diagram for $\omega_0/\gamma=0.4$ is shown
in Fig.~\ref{fig:ii}(a). There is a transition from the incoherent state to the standing wave state via the Hopf bifurcation curve for smaller values of
the symmetry breaking parameter as a function of $\varepsilon/\gamma$. Larger values of the symmetry breaking parameter favor the
synchronized stationary state in the entire range of $\varepsilon/\gamma$. However, in a narrow range of $q\in(0.36, 0.46]$ (see Fig.~\ref{fig:ii}(a)),
there is a transition from the incoherent state to the standing wave state and then to the synchronized stationary state. There is also a transition from
the incoherent state to the synchronized stationary state in the range of $q\in(0.46, 0.6)$. Recall that $\omega_0$ quantifies the degree of detuning of the frequency distribution.
An increase in the heterogeneity of the frequency distribution promotes the bistable regions and the incoherent and standing wave states
over a larger region of the $(q, \varepsilon/\gamma)$ parameter space. For instance, the phase diagram for $\omega_0/\gamma=0.6$ depicted in Fig.~\ref{fig:ii}(b)
elucidates the emergence of the bistable regions and enlarged regions of the incoherent and standing wave states as a function of $q$,
a manifestation of the increased heterogeneity. Further increase in $\omega_0/\gamma$ enlarges the bistable regions and the incoherent and the standing wave
states as depicted in Figs.~\ref{fig:ii}(c) and ~\ref{fig:ii}(d) for
$\omega_0/\gamma=1$ and $1.2$, respectively. These results are in agreement with the phase diagrams in Fig.~\ref{fig:i} in the $(\omega_0/\gamma, \varepsilon/\gamma)$
parameter space for increasing values of the symmetry breaking parameter. Next, we will explore the effect of symmetric and asymmetric bimodal frequency
distributions on the phase diagrams in the following.
\subsubsection{Phase diagrams for bimodal distribution}
In this section, we analyse the phase space dynamics of the generalized Kuramoto model (\ref{eq:km2}) with symmetric bimodal frequency distribution~(\ref{eq:bil}) by setting
$\gamma=\gamma_1=\gamma_2$ for
increasing values of the strength of the symmetry breaking coupling. We have depicted the phase diagrams in the $(\omega_0/\gamma, \varepsilon/\gamma)$ parameter space
for different values of the symmetry breaking parameter $q$ in Fig.~\ref{fig:iii}. Note that the phase space dynamics of
the Kuramoto model (see Fig.~\ref{fig:iii}(a) for $q=0$) are similar to those of the Winfree model (see Fig.~\ref{fig:iii}(d) for $q=1$) for the symmetric bimodal frequency distribution
except for the regime shift. The dynamical states and the bifurcation curves are similar to those in Fig.~\ref{fig:i}. Increasing the strength of the symmetry breaking coupling
favors the synchronized stationary state and the bistable states in a large region of the parameter space, as evident from Figs.~\ref{fig:iii}(b) and \ref{fig:iii}(c) for
$q=0.5$ and $q=0.8$, respectively. Note that a large heterogeneity in the frequency distribution favors the incoherent and the standing wave states in a rather large region
of the phase diagram for smaller $q$ and $\varepsilon$ (see Fig.~\ref{fig:iii}(a) for $q=0$). Nevertheless, the synchronized stationary state predominates in the
phase diagram for larger strengths of the symmetry breaking coupling and $\varepsilon$, despite the presence of a large heterogeneity in the frequency distribution
(see Fig.~\ref{fig:iii}(d) for $q=1$).
Next, we analyze the phase space dynamics of the generalized Kuramoto model (\ref{eq:km2}) with asymmetric bimodal frequency distribution~(\ref{eq:bil}) by
increasing the strength of the symmetry breaking coupling and the degree of asymmetry between the bimodal frequency distributions.
We have depicted the phase diagrams in the $(\omega_0/\gamma_2, \varepsilon/\gamma_2)$ parameter space
for different values of the symmetry breaking parameter $q$ in Fig.~\ref{fig:iv}.
Again, the dynamical states and the bifurcation curves are similar to those in Fig.~\ref{fig:i}. The phase diagram for $q=0.1$ and $\gamma_1/\gamma_2=0.6$
is depicted in Fig.~\ref{fig:iv}(a). For most values of $\omega_0/\gamma_2$, there is a transition from the incoherent state to the synchronized stationary state via
the standing wave state and there is no bistability
for $\gamma_1<\gamma_2$. However, there is a transition from the incoherent state to the synchronized stationary state in a large range of $\omega_0/\gamma_2\in(0,1)$
and the emergence of bistable states for $\gamma_1>\gamma_2$, as depicted in Fig.~\ref{fig:iv}(b) for $\gamma_1/\gamma_2=1.2$. It is evident that
bistable states emerge even for low values of the symmetry breaking coupling when $\gamma_1>\gamma_2$.
Note that bistable states emerge even for $\gamma_1<\gamma_2$ but for a large strength of the symmetry breaking coupling (see Fig.~\ref{fig:iv}(c)
for $q=1$ and $\gamma_1/\gamma_2=0.6$). The spread of the bistable states increases for $q=1$ and $\gamma_1/\gamma_2=1.2$ as illustrated in
Fig.~\ref{fig:iv}(d). Thus, larger $\gamma_1/\gamma_2$ and $q$ favor the emergence of the bistable states.
Phase diagrams in the $(q, \varepsilon/\gamma_2)$ parameter space are depicted in Figs.~\ref{fig:v}(a) and \ref{fig:v}(b) for $\gamma_1/\gamma_2=0.6$ and $1.2$, respectively,
and for $\omega_0/\gamma_2=1$. The dynamical states and the bifurcation curves are similar to those in Fig.~\ref{fig:i}.
There is a transition from the incoherent state to the synchronized stationary state via the standing wave state for small values of $q$ (see Fig.~\ref{fig:v}(a))
similar to that in Fig.~\ref{fig:iv}(a). However, for larger values of $q$
multistability between the standing wave and the synchronized stationary state emerges (dark shaded region in the inset) in addition to the above dynamical transition.
For $\gamma_1>\gamma_2$, there is a transition from the incoherent state to the standing wave state, along with the bistability among them, in
a rather narrow range of $q\in (0,0.4)$ as a function of
$\varepsilon/\gamma_2$, as shown in the inset of Fig.~\ref{fig:v}(b). For $q> 0.4$, there is a transition from the incoherent state to the synchronized stationary state
with the onset of bistability (light grey shaded region)
between them. Phase diagrams in the $(q, \omega_0/\gamma_2)$ parameter space are depicted in Figs.~\ref{fig:v}(c) and \ref{fig:v}(d) for
$\gamma_1/\gamma_2=0.6$ and $1.2$, respectively, for $\varepsilon/\gamma_2=2.5$. There is a transition from the synchronized stationary state
to the standing wave state as a function of $\omega_0/\gamma_2$
for $\gamma_1<\gamma_2$ (see Fig.~\ref{fig:v}(c)) via the homoclinic bifurcation curve. Both the bistable states emerge when $\gamma_1>\gamma_2$,
as shown in Fig.~\ref{fig:v}(d) for $\gamma_1/\gamma_2=1.2$.\\
\section{Conclusions}
\label{sec:conclusions}
We have considered a nontrivial generalization of the paradigmatic Kuramoto model by using an additional coupling term that explicitly breaks the rotational symmetry
of the Kuramoto model. The strength of the symmetry breaking coupling is found to play a key role in the manifestation of the dynamical states and their transitions
along with the onset of bistability among the observed dynamical states in the phase diagram.
A typical phase diagram of the Kuramoto model is transformed into a typical phase diagram of the Winfree model
for the unit value of the strength of the symmetry breaking coupling, thereby bridging the dynamics of the Kuramoto and Winfree models.
Large values of the strength of the symmetry breaking coupling favor the manifestation of bistable regions and synchronized stationary state in a large region
of the phase diagram. The dynamical transitions in the bistable region are characterized by an abrupt (first-order) transition in both the forward and reverse traces.
Phase diagrams of both the Kuramoto and Winfree models resemble each other for symmetric bimodal frequency distribution except for the regime shifts and the degree of
the spread of the dynamical states and bistable regions. Nevertheless, for asymmetric bimodal frequency distribution one cannot observe the bistable states
for low values of the strength of the symmetry breaking coupling when $\gamma_1<\gamma_2$. In contrast,
bistable states emerge even for $\gamma_1<\gamma_2$ for a large strength of the symmetry breaking coupling.
Larger $\gamma_1/\gamma_2$ and larger $q$ favor the emergence of the bistable states in the case of the asymmetric bimodal frequency distribution.
A large $\omega_0$, and consequently a large degree of heterogeneity, facilitates the spread of the incoherent and standing wave states in the phase diagram for a low strength of
the symmetry breaking coupling. However, a large $q$ promotes the spread of the synchronized stationary state and the bistable regions in the phase diagram
regardless of the degree of heterogeneity in the frequency distribution. We have deduced the low-dimensional evolution equations for the complex order parameters
using the Ott-Antonsen ansatz for both unimodal and bimodal frequency distributions.
We have also deduced the Hopf, pitchfork, and saddle-node bifurcation curves from the low-dimensional evolution equations for the complex order parameters.
The homoclinic bifurcation curve is obtained using the XPPAUT software. Simulation results obtained from the original discrete set of
equations agree well with the analytical bifurcation curves. We sincerely believe that our results will
shed more light on, and enhance our current understanding of, the effects of symmetry breaking coupling in phase models, bridging the dynamics of
two distinctly different phase models that are otherwise out of reach.\\
\section{Acknowledgements}
The work of V.K.C. is supported by the DST-CRG Project under Grant No. CRG/2020/004353 and by
DST, New Delhi, for computational facilities under the DST-FIST program (SR/FST/PS-1/2020/135) to the
Department of Physics. M.M. thanks the Department of
Science and Technology, Government of India, for providing financial support through INSPIRE Fellowship No.
DST/INSPIRE Fellowship/2019/IF190871.
S.G. acknowledges support from the Science
and Engineering Research Board (SERB), India under the SERB-TARE scheme, Grant No.
TAR/2018/000023, and the SERB-MATRICS scheme, Grant No. MTR/2019/000560. He also thanks ICTP -- The Abdus Salam International Centre for Theoretical Physics,
Trieste, Italy, for support under its Regular Associateship scheme. D.V.S. is supported by the DST-SERB-CRG Project under Grant No. CRG/2021/000816.\\
\textbf{Data Availability Statement}: No data are associated with the manuscript. The data sets generated during the current study are available from the corresponding author on reasonable request.
\section{Introduction}
\label{sec:intro}
The primary goal of Data-Free Knowledge Distillation (DFKD) is to acquire a trustworthy alternative dataset for knowledge distillation. The quality of knowledge transfer depends on the caliber of these alternative samples. Noise optimization~\cite{nayak2019zero,yin2020dreaming,fang2021contrastive} and generative reconstruction~\cite{DAFL,micaelli2019zero,fang2019data} are the two main ways to replace the original training data used in the distillation process with synthetic or pseudo samples. Adversarial DFKD methods~\cite{fang2019data,liu2021zero,micaelli2019zero} investigate an adversarial exploration framework to seek pseudo-samples. In such a paradigm, the Generator ($\mathcal{G}$) is used to create pseudo-samples as surrogates to perform knowledge distillation/transfer, and the Teacher-Student ($\mathcal{T}$-$\mathcal{S}$) setup acts as a joint discriminator to penalize and update the generator parameters ($\mathcal{\theta_{G}}$) (Figure \ref{fig:intro}).
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{figs/Intro.pdf}
\caption{The \emph{top-section} represents the learning evolution of a generic Adversarial Data-Free Knowledge Distillation framework; the color-intensity variation signifies the change in the distribution of the pseudo-samples, the student network, and the generator network over the learning epochs. Under the variation in the distribution of the pseudo-samples, the \emph{bottom-section} shows the learning curves for cases when the student accuracy degrades (shown in {\color[HTML]{ff0000}Red}), which is undesirable, and when the student accuracy is maintained, if not improved, as proposed (shown in {\color[HTML]{008000}Green}).}
\label{fig:intro}
\end{figure}
In the adversarial framework, the generator explores the input space to find suitable pseudo-samples as the distillation progresses. Consequently, the distribution of the generated samples keeps changing during the process due to the generator updates \cite{seff2017continual}. From the student network's perspective, at each iteration the pseudo samples appear to be generated from different generator parameters ($\mathcal{\theta_{G}}$). Hence, the convergence of the student network is hampered by the successive distributional alterations over time~\cite{thanh2020catastrophic}, as depicted by the red curve in Figure~\ref{fig:intro}. This observation hints that updating the student network solely using the samples generated from the current generator parameters is not adequate to generalize the student. Moreover, the student forgets the knowledge acquired previously, which decelerates the knowledge distillation. Therefore, the generator, apart from exploring the input space, must also compensate for the loss of knowledge in future iterations. Additionally, in a practical setting, high variation in the student network's classification accuracy during the distillation process is undesirable, especially when validation data is not available, since it prevents the user from tracking the student's accuracy over time and selecting the distilled model parameters with the highest accuracy.
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/overview_individual_1.pdf}
\caption{}
\label{fig:overview_a}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/overview_individual_2.pdf}
\caption{}
\label{fig:overview_b}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/overview_individual_3.pdf}
\caption{}
\label{fig:overview_c}
\end{subfigure}
\caption{Student update strategies: (a) Typical student update by optimizing the \emph{Knowledge-Acquisition} loss ($\mathcal{L}_{Acq}$) with the batch of pseudo samples ($\hat{x}$) produced by the generator ($\mathcal{G}$) \cite{fang2019data,micaelli2019zero,choi2020data}. (b) Student update with simultaneous optimization of the \emph{Knowledge-Acquisition} loss ($\mathcal{L}_{Acq}$) and the \emph{Knowledge-Retention} loss ($\mathcal{L}_{Ret}$) on the batches of pseudo samples ($\hat{x}$) and memory samples ($\hat{x}_{m}$) obtained from the generator ($\mathcal{G}$) and the memory ($\mathcal{M}$), respectively \cite{binici2022preventing,binici2022robust}. (c) The proposed student update strategy, which treats $\mathcal{L}_{Acq}$ as meta-train and $\mathcal{L}_{Ret}$ as meta-test, and implicitly imposes the alignment between them.}
\label{fig:overview}
\end{figure*}
To circumvent the above-discussed problem, existing methods maintain a memory buffer to rehearse the examples from previously encountered distributions while learning with current examples. Binici \textit{et al.} \cite{binici2022preventing} introduce \textit{Replay}-based methods to explicitly retrain/replay on a limited subset of previously encountered samples while training on the current examples. Carrying forward, they use Generative Replay \cite{shin2017continual} to transfer the learned examples to an auxiliary generative model (VAE), and sample from the VAE's decoder in subsequent iterations \cite{binici2022robust}. Nonetheless, the performance of these methods is upper bounded by joint training on previous and current examples \cite{antiforgetting2}. Although recent works have focused on modeling the memory, we instead focus on effectively utilizing the samples from memory.
In this paper, we aim to update the student network parameters ($\theta_{\mathcal{S}}$) such that its performance does not degrade on the samples previously produced by the generator network ($\mathcal{G}$), aspiring towards \emph{Learning to Retain while Acquiring}. Thus, we propose a meta-learning inspired strategy to achieve this goal. We treat the task of \emph{Knowledge-Acquisition} (learning from newly generated samples) and \emph{Knowledge-Retention} (learning from previously encountered samples from memory) as meta-train and meta-test, respectively. Hence, in the proposed approach, the student network acquires new information while maintaining performance on previously encountered samples. By doing so, the proposed strategy (Figure \ref{fig:overview_c}) implicitly aligns \emph{Knowledge-Acquisition} and \emph{Knowledge-Retention}, as opposed to simply combining them \cite{binici2022preventing,binici2022robust} without any coordination or alignment (Figure \ref{fig:overview_b}), which leaves them to potentially interfere with one another.
Additionally, analyzing the proposed meta-objective, we identify (in Section \ref{sec:alignament_proof}) the latent alignment factor as the dot product between the gradients of the \emph{Knowledge-Acquisition} and \emph{Knowledge-Retention} objectives, suggesting that the meta-objective enforces a common gradient direction for both tasks, encouraging the alignment between the task-specific gradients. Thus, the proposed method simultaneously minimizes the loss and matches the gradients corresponding to the individual tasks (\emph{Knowledge-Acquisition} and \emph{Knowledge-Retention}), enforcing the optimization paths to be the same for both tasks.
Moreover, the proposed student update strategy is scalable to different deep architectures, as the gradient alignment is implicit, and is memory-agnostic, making no assumptions about the replay scheme employed. Recent works on gradient alignment have shown great empirical advantages in Zero-Shot Learning~\cite{ZeroGrad}, Distributed/Federated Learning~\cite{dandi2022implicit} and Domain Generalization~\cite{shi2022gradient}. Our method extends the advantages of gradient alignment to memory-based Adversarial DFKD, thus strengthening the empirical findings in these works.
Finally, to demonstrate the advantages of the proposed student update strategy, we evaluate and compare against current non-memory~\cite{DAFL,fang2019data,choi2020data} and memory-based~\cite{binici2022preventing,binici2022robust} Adversarial DFKD methods, and observe substantial improvement in the student learning evolution.
In summary, our contributions are as follows:
\begin{itemize}[noitemsep]
\item We propose a novel meta-learning inspired student update strategy in the Adversarial DFKD setting, that aims to maintain the student’s performance on previously encountered examples (\emph{Knowledge-Retention}) while acquiring knowledge from samples of the current distribution (\emph{Knowledge-Acquisition}).
\item We theoretically identify (in Section \ref{sec:alignament_proof}) that the proposed student update strategy enforces an implicit gradient alignment between the \emph{Knowledge-Acquisition} and \emph{Knowledge-Retention} tasks.
\item Finally, we evaluate our method and compare against various Adversarial DFKD methods, on multiple student architectures and replay schemes (Memory Buffer and Generative Replay).
\end{itemize}
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{figs/training_framework.pdf}
\caption{An illustration of the proposed DFKD framework. The framework consists of the Generator ($\mathcal{G}$), Teacher ($\mathcal{T}$), Student ($\mathcal{S}$) and the Memory ($\mathcal{M}$). $\mathcal{G}$ and $\mathcal{S}$ are updated alternatively, similar to the GAN \cite{GAN} framework with the generator loss ($\mathcal{L_{G}}$) optimizing $\mathcal{G}$, and the \emph{Knowledge-Acquisition} loss ($\mathcal{L}_{Acq}$) and the \emph{Knowledge-Retention} loss ($\mathcal{L}_{Ret}$) optimizing the student ($\mathcal{S}$). We use $\mathcal{M}$ in a generalized way to denote any type of replay schemes (Memory Buffer or Generative Replay in our case).}
\label{fig:training_framework}
\end{figure*}
\section{Related Work}
\label{sec:related}
\noindent \textbf{Adversarial Data-Free Knowledge Distillation:} In the Adversarial Data-Free Knowledge Distillation paradigm, a generative model is trained to synthesize pseudo-samples that serve as queries for the Teacher ($\mathcal{T}$) and the Student ($\mathcal{S}$) \cite{choi2020data,micaelli2019zero,fang2019data}. ZSKT \cite{micaelli2019zero} attempts data-free knowledge transfer by first training a generator in an adversarial fashion to look for samples on which the student and teacher do not match well. To improve the model discrepancy measure, it adopts the Kullback–Leibler (KL) divergence and introduces attention transfer \cite{zagoruyko2017paying} to aid knowledge transfer. Moreover, DFAD \cite{fang2019data} recommends the Mean Absolute Error (MAE) as a model discrepancy function to prevent decayed gradients on converged samples. Furthermore, the adversarial framework was extended by Choi \textit{et al.}~\cite{choi2020data} in the context of model quantization, by proposing adversarial data-free quantization (DFQ) and introducing additional regularization terms that match the mean and standard deviation of the generated pseudo-samples with the teacher model's batch-norm statistics, and impose batch categorical entropy maximization such that samples from each class appear equally in the generated batch. Fang \textit{et al.} recently introduced FastDFKD \cite{fang2022up}, an effective method with a meta generator to speed up the DFKD process, delivering a 100-fold increase in the knowledge transfer rate.\\\noindent \textbf{Handling Distribution Shift in Adversarial DFKD:} To counter the distribution mismatch and the catastrophic forgetting phenomenon in the adversarial framework \cite{seff2017continual}, Binici \textit{et al.} \cite{binici2022preventing} suggested maintaining a dynamic collection of synthetic samples throughout training iterations to prevent catastrophic forgetting in DFKD. Moreover, in their latest work \cite{binici2022robust}, they introduce generative pseudo-replay \cite{shin2017continual}, in which an auxiliary generative model simultaneously learns the distribution of the samples produced by the generator ($\mathcal{G}$). Throughout the training process, examples are generated from the auxiliary generator to replay during training. Nonetheless, these works have focused on modeling the memory buffer. A related line of research maintains an exponentially moving average (EMA) of the generator model $\mathcal{G}$ \cite{do2022momentum} to replace the typical Memory-Buffer and Generative Replay. In contrast, our work focuses on the effective utilization of the samples obtained from the memory.
\section{Methodology}
\label{sec:method}
In Section \ref{sec:overview}, we first provide a brief overview of Adversarial DFKD. Then, in Section \ref{sec:goal}, we discuss the data-free knowledge distillation objective. In Sections \ref{sec:LRA} and \ref{sec:alignament_proof}, we elaborate on the proposed student update strategy. Lastly, in Section \ref{sec:generator}, we discuss the adopted generator update strategy used for the baselines and the proposed framework.
\subsection{Adversarial Data-Free Knowledge Distillation}
\label{sec:overview}
In the Adversarial DFKD framework, a generator ($\mathcal{G}$) is used to create pseudo-samples as surrogates to perform knowledge distillation/transfer, and the teacher-student ($\mathcal{T}$-$\mathcal{S}$) setup acts as a joint discriminator to penalize and update the generator parameters ($\theta_{\mathcal{G}}$) in an adversarial manner. After updating $\theta_{\mathcal{G}}$, random samples are generated and used to minimize the $\mathcal{T}$-$\mathcal{S}$ discrepancy by updating the student parameters ($\theta_{\mathcal{S}}$). The generator and the student are optimized alternately for a pre-defined number of iterations. In essence, the goal of DFKD is to craft a lightweight student model ($\mathcal{S}$) by harnessing valuable knowledge from the well-trained Teacher model ($\mathcal{T}$) in the absence of training data. A general overview of the Adversarial DFKD framework is illustrated in Figure \ref{fig:intro}.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{figs/all_lc_cummean.pdf}
\caption{Learning evolution of the proposed method compared to the prior arts (MB-DFKD~\cite{binici2022preventing} and PRE-DFKD~\cite{binici2022robust}) that employ replay. The plots visualize the Accuracy (\%) evolution (\textit{top-row}) and the Cumulative Mean Accuracy (\%) evolution (\textit{bottom-row}) of the ResNet-18~\cite{he2016deep} student network on SVHN~\cite{SVHN}, CIFAR10~\cite{CIFAR}, CIFAR100~\cite{CIFAR}, and Tiny-ImageNet~\cite{tiny} datasets. The proposed method is in \textbf{\color[HTML]{3b80ee} Blue}.}
\label{fig:learning_curves}
\end{figure*}
\subsection{Goal of Data Free Knowledge Distillation}
\label{sec:goal}
A student model ($\mathcal{S}$) is trained to match the teacher's ($\mathcal{T}$) predictions on its unavailable target domain ($\mathcal{D}_{\mathcal{T}}$) as part of the distillation process. Let $p_{\mathcal{S}}(x) = \softmax(\mathcal{S}_{\theta_{\mathcal{S}}}(x))$ and $p_{\mathcal{T}}(x) = \softmax(\mathcal{T}_{\mathcal{\theta_{T}}}(x))$, $\forall x \in \mathcal{D}_{\mathcal{T}}$, denote the predicted Student and Teacher probability masses across the classes, respectively. We seek the parameters $\theta_{\mathcal{S}}$ of the student model that reduce the probability of error ($P$) between the predictions $p_{\mathcal{S}}(x)$ and $p_{\mathcal{T}}(x)$:
\begin{equation}
\min_{\theta_{\mathcal{S}}}P_{x \sim \mathcal{D}_{\mathcal{T}}}\left( \argmax_{i}{p_{\mathcal{S}}^{i}(x)} \neq \argmax_{i}{p_{\mathcal{T}}^{i}(x)} \right), \nonumber
\end{equation}
where the superscript $i$ denotes the $i^{th}$ probability score of the predicted masses, $p_{\mathcal{S}}(x)$ and $p_{\mathcal{T}}(x)$.
However, DFKD suggests minimizing the student's error on a pseudo dataset $\mathcal{D}_{\mathcal{P}}$, since the teacher's original training data distribution $\mathcal{D}_{\mathcal{T}}$ is not available. Typically, a loss function, say $\mathcal{L}_{KD}$, that gauges disagreement between the teacher and the student is minimized $\forall \hat{x} \in \mathcal{D}_{\mathcal{P}}$ by optimizing the student parameters ($\theta_{\mathcal{S}}$):
\begin{equation}
\min_{\theta_{S}}{\mathbb{E}_{\hat{x}\sim\mathcal{D}_{\mathcal{P}}}[\mathcal{L}_{KD}(\mathcal{T}_{\theta_{\mathcal{T}}}(\hat{x}), \mathcal{S}_{\theta_{\mathcal{S}}}(\hat{x}))]}.
\end{equation}
We use the Mean Absolute Error (MAE) to define the loss measure ($\mathcal{L}_{KD}$), as suggested in \cite{fang2019data}. Given the predicted logits (pre-softmax predictions) $t(\hat{x}) = \mathcal{T}_{\theta_{\mathcal{T}}}(\hat{x})$ from the teacher model and the predicted logits $s(\hat{x})=\mathcal{S}_{\theta_{\mathcal{S}}}(\hat{x})$ from the student model, we define $\mathcal{L}_{KD}$ as:
\begin{equation}
\mathcal{L}_{KD}(t(\hat{x}), s(\hat{x})) = \lVert t(\hat{x}) - s(\hat{x}) \rVert_{1}.
\label{eq:mae}
\end{equation}
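In code, (\ref{eq:mae}) is a one-line operation on the two logit tensors; a minimal PyTorch sketch follows (the batch-mean reduction is our choice):
\begin{verbatim}
# Sketch (PyTorch): the logit-matching MAE loss of (eq:mae).
import torch

def l_kd(t_logits: torch.Tensor, s_logits: torch.Tensor) -> torch.Tensor:
    # ||t(x) - s(x)||_1 per sample, averaged over the batch; the teacher
    # is detached since only the student is optimized here.
    return (t_logits.detach() - s_logits).abs().sum(dim=1).mean()
\end{verbatim}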
\subsection{Learning to Retain while Acquiring}
\label{sec:LRA}
Our novelty resides in the student update strategy, as described earlier. \emph{Knowledge-Retention} and \emph{Knowledge-Acquisition} relate to two independent goals and can therefore be considered as two discrete tasks. In fact, by aligning these two goals, we empirically observe that they can cooperate to retain the knowledge on previously encountered examples while acquiring knowledge from newly generated samples. The proposed method utilizes a meta-learning-inspired optimization strategy to effectively replay the samples from memory, say $\mathcal{M}$.
In the typical Adversarial DFKD setup, the student update objective with the generated pseudo samples ($\hat{x}$), \textit{i.e.}, the \emph{Knowledge-Acquisition} task ($\mathcal{L}_{Acq}$) (Figure \ref{fig:overview_a}), is formulated as:
\begin{equation}
\min_{\theta_{\mathcal{S}}}\mathcal{L}_{Acq}(\theta_{\mathcal{S}}) = \min_{\theta_{\mathcal{S}}}
\mathbb{E}_{\hat{x}}[\mathcal{L}(\mathcal{T}_{\theta_{\mathcal{T}}}(\hat{x}), \mathcal{S}_{\theta_{\mathcal{S}}}(\hat{x}))],
\end{equation}
where $\mathcal{L}$ is the MAE (\ref{eq:mae}) between the teacher and the student logits, and $\hat{x} = \mathcal{G}(z), z\sim \mathcal{N}(0,I)$, denotes the batch of randomly generated samples.
Moreover, to alleviate the distribution drift during knowledge distillation in the adversarial setting, previous works have maintained a memory buffer, to store $\hat{x}$ and replay at later iterations to help the student recall the knowledge \cite{binici2022preventing}. At a fixed frequency, batches of samples from current iterations are updated in the memory \cite{binici2022preventing}. Also, recent works~\cite{binici2022robust} have explored using the Generative Replay~\cite{shin2017continual} strategy to simultaneously store the samples, as they are encountered, in a generative model (VAE~\cite{VAE}), and later replay by generating samples from the VAE decoder. Hence, the optimization objective for the \emph{Knowledge-Retention} ($\mathcal{L}_{Ret}$) can be defined as follows:
\begin{align}
\min_{\theta_{\mathcal{S}}} \mathcal{L}_{Ret}(\theta_{\mathcal{S}}) = \min_{\theta_{\mathcal{S}}}\mathbb{E}_{\hat{x}_{m}}[\mathcal{L}(\mathcal{T}_{\theta_{\mathcal{T}}}(\hat{x}_{m}), \mathcal{S}_{\theta_{\mathcal{S}}}(\hat{x}_{m}))], \nonumber \\
\hat{x}_{m} \sim \mathcal{M},
\end{align}
where, with a slight abuse of notation, $\mathcal{M}$ denotes a Memory Buffer or Generative Replay or any other replay scheme. Thus, the overall optimization objective to update $\mathcal{\theta_{\mathcal{S}}}$, while considering both the new samples (generated from the latest $\theta_\mathcal{G}$) and the old samples (sampled from $\mathcal{M}$) (Figure \ref{fig:overview_b}) is defined as:
\begin{equation}
\label{eq:naive_replay}
\min_{\theta_{\mathcal{S}}}\mathcal{L}_{Acq}(\theta_{\mathcal{S}}) + \mathcal{L}_{Ret}(\theta_{\mathcal{S}}).
\end{equation}
However, the objective in (\ref{eq:naive_replay}) attempts to simultaneously optimize $\mathcal{L}_{Ret}$ and $\mathcal{L}_{Acq}$ but does not seek to align the objectives, which leaves them to potentially interfere with one another. Say, we denote the gradients of the \emph{Knowledge Acquisition} and \emph{Knowledge Retention} tasks with $\nabla \mathcal{L}_{Acq}(\theta)$ and $\nabla \mathcal{L}_{Ret}(\theta)$, respectively. If $\nabla \mathcal{L}_{Acq}(\theta)$ and $\nabla \mathcal{L}_{Ret}(\theta)$ point in a similar direction or are said to be aligned, \textit{i.e.}, $\nabla \mathcal{L}_{Acq}(\theta).\nabla \mathcal{L}_{Ret}(\theta) > 0$, then taking a gradient step along $\nabla \mathcal{L}_{Acq}(\theta)$ or $\nabla \mathcal{L}_{Ret}(\theta)$ improves the model's performance on both tasks. This is, however, not the case when $\nabla \mathcal{L}_{Acq}(\theta)$ and $\nabla \mathcal{L}_{Ret}(\theta)$ point in different directions, \textit{i.e.}, $\nabla \mathcal{L}_{Acq}(\theta).\nabla \mathcal{L}_{Ret}(\theta)\leq 0$, in which case the weight parameters obtained may not be optimal for both tasks simultaneously. Hence, the intended effect of having the gradient directions aligned is to obtain student parameters ($\theta_{\mathcal{S}}$) that perform well on both $\mathcal{L}_{Acq}$ and $\mathcal{L}_{Ret}$.
The proposed meta-learning inspired approach seeks to align the two tasks, $\mathcal{L}_{Acq}$ and $\mathcal{L}_{Ret}$. We take cues from Model-Agnostic Meta-Learning (MAML) \cite{finn2017model} and adapt it to Adversarial DFKD. At each iteration of the parameter update, we pose \emph{Knowledge-Acquisition} and \emph{Knowledge-Retention} as meta-train and meta-test, respectively. That is, we perform a single gradient descent step using the current samples ($\hat{x}$) produced from the generator network ($\mathcal{G}$) on the parameters $\mathcal{\theta_{S}}$ and obtain $\mathcal{\theta_{S}}^{\prime}$ as $\theta_{\mathcal{S}}^\prime = \theta_{\mathcal{S}} - \alpha \nabla \mathcal{L}_{Acq}(\theta_{\mathcal{S}})$, where $\nabla \mathcal{L}_{Acq}(\theta_{\mathcal{S}})$ denotes the gradient of $\mathcal{L}_{Acq}$ at $\theta_{\mathcal{S}}$. We then optimize on the batch of samples ($\hat{x}_{m}$) obtained from the memory ($\mathcal{M}$) with the parameters $\theta_{\mathcal{S}}^\prime$, and finally update $\theta_{\mathcal{S}}$. Thus, formally, the overall meta-optimization objective, with the task of \emph{Knowledge Acquisition} serving as meta-train and the task of \emph{Knowledge Retention} as meta-test (Figure \ref{fig:overview_c}), can be defined as follows:
\begin{align}
\label{eq:meta_objective}
\min_{\theta_{\mathcal{S}}} \mathcal{L}_{Acq}(\theta_{\mathcal{S}}) + \mathcal{L}_{Ret}(\theta_{\mathcal{S}}^{\prime}) &= \min_{\theta_{\mathcal{S}}} \mathcal{L}_{Acq}(\theta_{\mathcal{S}}) + \nonumber \\ & \mathcal{L}_{Ret}(\theta_{\mathcal{S}} - \alpha \nabla \mathcal{L}_{Acq}(\theta_{\mathcal{S}})).
\end{align}
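A minimal PyTorch sketch of this meta-update follows (our own illustration; the helper names are ours, and \texttt{torch.func.functional\_call} requires PyTorch 2.x). The inner step is kept differentiable with \texttt{create\_graph=True} so that the meta-test gradient flows back through $\theta_{\mathcal{S}}^{\prime}$:
\begin{verbatim}
# Sketch (PyTorch 2.x): one student meta-update for (eq:meta_objective).
# Knowledge-Acquisition (x_new) is meta-train; Knowledge-Retention
# (x_mem, drawn from the replay scheme) is meta-test.
import torch
from torch.func import functional_call

def l_kd(t_logits, s_logits):
    return (t_logits.detach() - s_logits).abs().sum(dim=1).mean()

def meta_student_step(student, teacher, x_new, x_mem, alpha, opt):
    params = dict(student.named_parameters())
    # meta-train: L_Acq at theta on freshly generated samples
    l_acq = l_kd(teacher(x_new), functional_call(student, params, (x_new,)))
    grads = torch.autograd.grad(l_acq, list(params.values()),
                                create_graph=True)  # keep graph for meta-test
    # differentiable inner step: theta' = theta - alpha * grad L_Acq(theta)
    fast = {k: p - alpha * g for (k, p), g in zip(params.items(), grads)}
    # meta-test: L_Ret evaluated at theta' on memory samples
    l_ret = l_kd(teacher(x_mem), functional_call(student, fast, (x_mem,)))
    loss = l_acq + l_ret
    opt.zero_grad(); loss.backward(); opt.step()
    return float(loss)
\end{verbatim}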
\subsection{How does the Proposed Student Update Strategy Promote Alignment?}\label{sec:alignament_proof}
In this subsection, we analyze the proposed objective (\ref{eq:meta_objective}) to understand how it results in the desired alignment between the \emph{Knowledge-Acquisition} and \emph{Knowledge-Retention} tasks. We assume that the meta-train and meta-test tasks provide us with losses $\mathcal{L}_{Acq}$ and $\mathcal{L}_{Ret}$; in our case, $\mathcal{L}_{Acq}$ and $\mathcal{L}_{Ret}$ are the same function computed on the batches $\hat{x}$ and $\hat{x}_{m}$, respectively. We utilize Taylor's expansion to expand the gradient of $\mathcal{L}_{Ret}$ at a point $\theta$ displaced by $\phi_{\theta}$, as described in Lemma \ref{lemma:taylor_series},
\begin{customlemma}{1}\label{lemma:taylor_series}
If $\mathcal{L}_{Ret}$ has Lipschitz Hessian, \textit{i.e.}, $\lVert \nabla^{2}\mathcal{L}_{Ret}(\theta_{1}) - \nabla^{2}\mathcal{L}_{Ret}(\theta_{2}) \rVert \leq \rho \lVert \theta_{1} - \theta_{2} \rVert$ for some $\rho > 0$, then:
\begin{align*}
\nabla\mathcal{L}_{Ret}(\theta + \mathbf{\phi}_{\theta}) = \nabla\mathcal{L}_{Ret}(\theta) + \nabla^{2}\mathcal{L}_{Ret}(\theta)\mathbf{\phi}_{\theta}\\ + \mathcal{O}(\lVert \mathbf{\phi}_{\theta} \rVert^{2}).
\end{align*}
For instance, when $\phi_{\theta} = -\alpha \nabla \mathcal{L}_{Acq}(\theta)$, we have,
\begin{align*}
\nabla\mathcal{L}_{Ret}(\theta -\alpha \nabla \mathcal{L}_{Acq}(\theta)) = & \nabla\mathcal{L}_{Ret}(\theta)\\ &- \alpha \nabla^{2}\mathcal{L}_{Ret}(\theta) \nabla \mathcal{L}_{Acq}(\theta)\\ &+ \mathcal{O}(\alpha^{2}). \end{align*}
\end{customlemma}
\begin{customthm}{1}\label{theorem:1}
If $\theta^{\prime} = \theta - \alpha\nabla \mathcal{L}_{Acq}(\theta)$, denotes a single gradient descent step on $\theta$ with the objective $\mathcal{L}_{Acq}(\theta)$, where $\alpha$ is a scalar, and $\nabla \mathcal{L}_{Acq}(\theta)$ denotes the gradient of the objective at $\theta$, then:
\begin{align*}
\frac{\partial \mathcal{L}_{Ret}(\theta^{\prime})}{\partial \theta} = \nabla \mathcal{L}_{Ret}(\theta) - \alpha \nabla^{2} \mathcal{L}_{Ret}(\theta).\nabla \mathcal{L}_{Acq}(\theta)\\ - \alpha \nabla^{2} \mathcal{L}_{Acq}(\theta).\nabla \mathcal{L}_{Ret}(\theta) + \mathcal{O}(\alpha^{2}).
\end{align*}
\end{customthm}
\begin{proof}\let\qed\relax
Please refer to the Supplemental Material (Theorem \ref{lemma:theorem1_proof}).
\end{proof}
\noindent While optimizing the objective defined in (\ref{eq:meta_objective}) using stochastic gradient descent, we need to compute the gradient of $\mathcal{L}_{Ret}(\theta_{\mathcal{S}}^{\prime})$ w.r.t. $\theta_{\mathcal{S}}$. Therefore, utilizing Theorem \ref{theorem:1}, we express $\frac{\partial \mathcal{L}_{Ret}(\theta^{\prime}_{\mathcal{S}})}{\partial \theta_{\mathcal{S}}}$ as:
\begin{align}
\frac{\partial \mathcal{L}_{Ret}(\theta^{\prime}_{\mathcal{S}})}{\partial \theta_{\mathcal{S}}} &= \nabla \mathcal{L}_{Ret}(\theta_{\mathcal{S}}) \nonumber \\ &- \alpha \nabla^{2}\mathcal{L}_{Ret}(\theta_{\mathcal{S}}).\nabla \mathcal{L}_{Acq}(\theta_{\mathcal{S}}) \nonumber \\ &- \alpha \nabla^{2} \mathcal{L}_{Acq}(\theta_{\mathcal{S}}).\nabla \mathcal{L}_{Ret}(\theta_{\mathcal{S}}) + \mathcal{O}(\alpha^{2}),
\end{align}
\noindent using the product rule $\nabla a .b + \nabla b.a = \nabla(a.b)$, we get:
\begin{align}
\label{eq:grad_align}
\frac{\partial \mathcal{L}_{Ret}(\theta^{\prime}_{\mathcal{S}})}{\partial \theta_{\mathcal{S}}} & = \nabla \mathcal{L}_{Ret}(\theta_{\mathcal{S}}) \nonumber \\
&- \alpha \nabla \underbrace{(\nabla \mathcal{L}_{Ret}(\theta_{\mathcal{S}}).\nabla \mathcal{L}_{Acq}(\theta_{\mathcal{S}}))}_{Gradient \ Alignment} + \mathcal{O}(\alpha^{2}).
\end{align}
From the analysis above, we observe that the gradient of $\mathcal{L}_{Ret}(\theta_{\mathcal{S}}^{\prime})$ at $\theta_{\mathcal{S}}$ (in (\ref{eq:grad_align})) produces the gradient of the gradient-product. This indicates that, when optimizing $\mathcal{L}_{Ret}(\theta^{\prime}_{\mathcal{S}})$ (in (\ref{eq:meta_objective})), the gradient of $\mathcal{L}_{Ret}(\theta_{\mathcal{S}}^{\prime})$ at $\theta_{\mathcal{S}}$ carries a negatively scaled gradient of the gradient-product term $ \nabla (\nabla \mathcal{L}_{Ret}(\theta_{\mathcal{S}}).\nabla \mathcal{L}_{Acq}(\theta_{\mathcal{S}}))$ (derived in (\ref{eq:grad_align})), so that the overall gradients minimize $\mathcal{L}_{Ret}(\theta_{\mathcal{S}})$ and maximize $\nabla \mathcal{L}_{Ret}(\theta_{\mathcal{S}}).\nabla \mathcal{L}_{Acq}(\theta_{\mathcal{S}})$. Hence, optimizing (\ref{eq:meta_objective}) enforces the updates on $\mathcal{L}_{Ret}(\theta_{\mathcal{S}})$ and $\mathcal{L}_{Acq}(\theta_{\mathcal{S}})$ to seek a common direction, by maximizing the gradient-product.
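This alignment can also be monitored empirically during training; a short sketch of such a diagnostic (our own addition, not part of the training objective) measuring $\nabla \mathcal{L}_{Ret}(\theta_{\mathcal{S}}).\nabla \mathcal{L}_{Acq}(\theta_{\mathcal{S}})$ follows:
\begin{verbatim}
# Sketch (PyTorch): diagnostic for the gradient-alignment term in
# (eq:grad_align); a positive value indicates aligned task gradients.
import torch

def grad_alignment(student, l_acq, l_ret):
    ps = [p for p in student.parameters() if p.requires_grad]
    g_acq = torch.autograd.grad(l_acq, ps, retain_graph=True)
    g_ret = torch.autograd.grad(l_ret, ps, retain_graph=True)
    return sum((a * b).sum() for a, b in zip(g_acq, g_ret))
\end{verbatim}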
\input{table1}
\subsection{Generator Update in Adversarial Exploration-based DFKD}
\label{sec:generator}
In the absence of the training dataset ($\mathcal{D}_{\mathcal{T}}$), the generator ($\mathcal{G}$) is utilized to obtain pseudo-samples ($\hat{x}$) and perform knowledge distillation, \textit{i.e.}, $\hat{x} = \mathcal{G}(z)$, $z \sim \mathcal{N}(0,I)$. The generator is trained to maximize the disagreement between the Teacher network ($\mathcal{T}_{\theta_{\mathcal{T}}}$) and the Student network ($\mathcal{S}_{\theta_{\mathcal{S}}}$). Additionally, for the generated data $\hat{x}$ to elicit similar responses from the teacher as the real data, we include a prior loss $\mathcal{L}_{\mathcal{P}}$ \cite{DAFL} to be minimized alongside maximizing the discrepancy ($D$). Hence, we update the generator parameters ($\theta_{\mathcal{G}}$) by maximizing the following objective:
\begin{align}
\max_{\theta_{\mathcal{G}}} \mathbb{E}_{z\sim\mathcal{N}(0,I)}[D(\mathcal{T}_{\theta_{\mathcal{T}}}(\mathcal{G}_{\theta_{\mathcal{G}}}(z)), & \mathcal{S}_{\theta_{\mathcal{S}}}(\mathcal{G}_{\theta_{\mathcal{G}}}(z))) \nonumber\\ &- \mathcal{L}_{\mathcal{P}}(\mathcal{G}_{\theta_{\mathcal{G}}}(z))].
\end{align}
Typically, the disagreement function ($D$) for the generator is identical to the teacher-student disagreement term \cite{fang2019data,choi2020data}. Instead, for teacher-student discrepancy maximization, we use the Jensen-Shannon (JS) divergence ($\mathcal{L}_{JS}$). Our motivation to use the JS divergence is based on the empirical study by Binici \textit{et al.}~\cite{binici2022preventing}. Hence, $D$ is defined as:
\begin{align}
D&(a, b) = \mathcal{L}_{JS}(p(a),p(b)), \nonumber \\
\mathcal{L}_{JS}(p(a),p(b)) = &\frac{1}{2}(\mathcal{L}_{KL}(p(a),m)+\mathcal{L}_{KL}(p(b),m)), \text{and}\nonumber \\
&m = \frac{1}{2}(p(a)+p(b)).
\end{align}
Here $\mathcal{L}_{KL}$ stands for the Kullback–Leibler divergence and $p(a)$ and $p(b)$ denote the probability vectors obtained after the $\softmax$ applied to the arguments $a$ and $b$, respectively.
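A compact PyTorch sketch of this discrepancy follows (the small constant added for numerical stability is our own addition):
\begin{verbatim}
# Sketch (PyTorch): the JS discrepancy D between teacher and student logits.
import torch.nn.functional as F

def js_discrepancy(t_logits, s_logits, eps=1e-8):
    p, s = F.softmax(t_logits, dim=1), F.softmax(s_logits, dim=1)
    m = 0.5 * (p + s)
    kl = lambda a, b: (a * ((a + eps).log() - (b + eps).log())).sum(dim=1)
    return (0.5 * (kl(p, m) + kl(s, m))).mean()
\end{verbatim}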
Moreover, the prior loss $\mathcal{L_{P}}$ \cite{binici2022preventing} is defined as the combination of three loss functions ($\mathcal{L}_{OH}$, $\mathcal{L}_{A}$, and $\mathcal{L}_{EM}$) that capture different characteristics from the teacher model and impose them on the pseudo samples $\hat{x}$, and is defined as:
\begin{equation}
\mathcal{L_{P}} = \mathcal{L}_{OH} + \gamma \mathcal{L}_{A} + \delta\mathcal{L}_{EM},
\end{equation}
where, $\gamma$ and $\delta$ denote the weighing scalar coefficients.
\noindent \textbullet \ $\mathcal{L}_{OH}$ is the one-hot loss that forces the generated samples to have strong (one-hot vector like) predictions when input to the teacher. It is defined as the cross-entropy between the teacher's softmax output $p_{\mathcal{T}}(\hat{x}_{n}) = \softmax(\mathcal{T}_{\theta_{\mathcal{T}}}(\hat{x}_{n})), \hat{x}_{n} \in \hat{x}$, and its one-hot vector version $e_{c} \in \{0,1\}^{C}$, where $C$ denotes the total number of classes, $e_{c}$ denotes the $c$-th canonical one-hot vector, and $c = \argmax_{i}(p_{\mathcal{T}}^{i}(\hat{x}_{n}))$; the superscript $i$ denotes the $i^{th}$ probability score of the predicted mass vector $p_{\mathcal{T}}(\hat{x}_{n})$. Hence, $\mathcal{L}_{OH}$ is defined as:
\begin{equation}
\mathcal{L}_{OH} = -\frac{1}{N}\sum_{n=1}^{N}e_{c}^{\top}\log(p_{\mathcal{T}}(\hat{x}_{n})).
\end{equation}
\noindent \textbullet \ $\mathcal{L}_{A}$ is the activation loss motivated by the notion that meaningful inputs result in higher-valued activation maps in a well-trained network \cite{DAFL} and is defined as:
\begin{equation}
\mathcal{L}_{A} = -\frac{1}{NL}\sum_{n=1}^{N}\sum_{l=1}^{L}{\lVert \mathcal{A}_{\mathcal{T}}^{l}(\hat{x}_{n}) \rVert_{1}},
\end{equation}
where $\mathcal{A}_{\mathcal{T}}^{l}(\hat{x}_{n})$ denotes the activation of the $l$-th layer in the Teacher network ($\mathcal{T}_{\theta_{\mathcal{T}}}$) for the $n^{th}$ input $\hat{x}_{n} \in \hat{x}$.
\noindent \textbullet \ $\mathcal{L}_{EM}$ is the entropy-maximization term imposed on the generator to output an equal number of pseudo-samples from each category \cite{choi2020data}. In other words, the intra-batch entropy is maximized, resulting in a similar number of samples for each category, \textit{i.e.}, if $\bar{p}_{\mathcal{T}} = \frac{1}{N}\sum_{n=1}^{N}p_{\mathcal{T}}(\hat{x}_{n})$, then the loss $\mathcal{L}_{EM}$ is defined as:
\begin{equation}
\mathcal{L}_{EM} = \bar{p}_{\mathcal{T}}^{\top}\log(\bar{p}_{\mathcal{T}}).
\end{equation}
In sum, the Generator loss objective ($\mathcal{L}_{\mathcal{G}}$) is defined as:
\begin{equation}
\mathcal{L_{G}}(\theta_{\mathcal{G}}) = -D(\mathcal{T}_{\theta_{\mathcal{T}}}(\mathcal{G}_{\theta_{\mathcal{G}}}(z)), \mathcal{S}_{\theta_{\mathcal{S}}}(\mathcal{G}_{\theta_{\mathcal{G}}}(z))) + \mathcal{L}_{\mathcal{P}}(\mathcal{G}_{\theta_{\mathcal{G}}}(z)).
\end{equation}
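A sketch of the two prior terms that depend only on the teacher's logits follows (our own illustration; the activation loss $\mathcal{L}_{A}$ additionally requires forward hooks on the teacher's layers and is omitted here):
\begin{verbatim}
# Sketch (PyTorch): the one-hot (L_OH) and entropy-maximization (L_EM)
# priors; the activation loss L_A requires forward hooks on the teacher
# and is omitted for brevity.
import torch
import torch.nn.functional as F

def prior_losses(t_logits):
    p = F.softmax(t_logits, dim=1)               # p_T(x_hat), shape (N, C)
    # L_OH: cross-entropy against the teacher's own argmax labels e_c
    l_oh = F.cross_entropy(t_logits, p.argmax(dim=1))
    # L_EM: p_bar^T log p_bar, i.e., negative entropy of the mean prediction
    p_bar = p.mean(dim=0)
    l_em = (p_bar * (p_bar + 1e-8).log()).sum()
    return l_oh, l_em
\end{verbatim}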
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{figs/wrn_cifar10_33.pdf}
\caption{Student learning curves depicting the learning evolution of Wide-ResNet (WRN) \cite{WRN}. The WRN-16-1 (\textit{top-row}), and WRN-16-2 (\textit{bottom-row}) networks are distilled by a WRN-40-2 teacher network pre-trained on CIFAR10 ($\mathcal{T}_{Acc} = 94.87\%$). Each column represent the learning curves with the Buffer-based (with different memory buffer sizes) and Generative replay schemes. The proposed method is in \textbf{\color[HTML]{3b80ee} Blue}.}
\label{fig:wrn_cifar10}
\end{figure*}
\section{Experiments}
\subsection{Experimental Settings}
\label{sec:experiments}
\noindent \textbf{Datasets:} We evaluate the proposed method on SVHN \cite{SVHN}, CIFAR10 \cite{CIFAR}, CIFAR100 \cite{CIFAR} and Tiny-ImageNet \cite{tiny} datasets.\\\textbf{Teacher model: }For the CIFAR10 and CIFAR100 datasets, we used the pre-trained teacher models made available by the authors of \cite{fang2021contrastive} and \cite{binici2022preventing}, respectively. For SVHN and Tiny-ImageNet we trained the teacher network from scratch. We provide the training details of the teacher models in the Supplemental Material.\\\textbf{Definition of an Epoch in DFKD: }In Adversarial DFKD, the notion of an \emph{epoch} is obscure. In typical deep neural network-based classification training, an epoch is defined as one complete pass over the available training data. However, DFKD has no access to the training data; instead, the pseudo samples generated on-the-fly are used to distill knowledge to the student network. Therefore, prior works \cite{fang2019data,choi2020data,binici2022preventing} defined an epoch in terms of a fixed number of training iterations ($\mathcal{I}$), where each iteration consists of a set number of generator update steps ($g$) and student update steps ($s$). Hence, to be consistent across the baselines and prior arts, we use the same number of training iterations, generator update steps, and student update steps to define an epoch. For all the methods, we set $\mathcal{I} = 72$, $g = 1$, and $s = 10$ and use a batch size of $512$ of the sampled noise ($z$) to generate the pseudo samples and optimize the parameters $\theta_{\mathcal{G}}$ and $\theta_{\mathcal{S}}$.\\\textbf{Training Details: }Due to the page-limit constraint, the training details are provided in the Supplemental Material.\\\textbf{Evaluation: }We evaluate the methods by comparing the mean and variance of the student network's test accuracy ($\mathcal{S}_{Acc}$), denoting them as $\mu[\mathcal{S}_{Acc}]$ and $\sigma^{2}[\mathcal{S}_{Acc}]$, respectively, across the epochs, motivated by Binici \textit{et al.} \cite{binici2022robust}. Specifically, we compare the different sections of the student's learning evolution by partitioning them into different epoch percentiles. For example, computing the $\mu[\mathcal{S}_{Acc}]$ and $\sigma^{2}[\mathcal{S}_{Acc}]$ for epochs greater than the $n^{th}$ percentile conveys the mean and variance across all the epochs greater than the ${\frac{n}{100}}^{th}$ of the total number of training epochs.
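For concreteness, the epoch structure described above can be summarized in a short sketch (our own illustration, reusing \texttt{js\_discrepancy}, \texttt{prior\_losses}, and \texttt{meta\_student\_step} from the sketches in Section \ref{sec:method}; the latent size \texttt{nz} and the memory interface \texttt{M.sample}/\texttt{M.update} are hypothetical stand-ins for the replay scheme):
\begin{verbatim}
# Sketch: one DFKD "epoch" as defined above (I = 72 iterations, g = 1
# generator step, s = 10 student steps, noise batches of 512). The
# latent size nz and the memory interface M are hypothetical.
import torch

def generator_loss(T, S, x_hat, delta=1.0):
    # -D + L_P with the activation term L_A omitted; teacher and student
    # weights are frozen during this step (their grads are discarded).
    t_logits = T(x_hat)
    l_oh, l_em = prior_losses(t_logits)
    return -js_discrepancy(t_logits, S(x_hat)) + l_oh + delta * l_em

def dfkd_epoch(G, S, T, M, opt_G, opt_S, alpha, I=72, g=1, s=10,
               bs=512, nz=100):
    for _ in range(I):
        for _ in range(g):                      # adversarial generator step
            loss_G = generator_loss(T, S, G(torch.randn(bs, nz)))
            opt_G.zero_grad(); loss_G.backward(); opt_G.step()
        for _ in range(s):                      # student meta-updates
            x_new = G(torch.randn(bs, nz)).detach()
            x_mem = M.sample(bs)                # memory buffer / gen. replay
            meta_student_step(S, T, x_new, x_mem, alpha, opt_S)
        M.update(x_new)                         # store current pseudo samples
\end{verbatim}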
\subsection{Results and Observations}
\label{sec:mu_sigma}
\noindent \textbf{Baseline and State-of-the-art Comparisons}
In Table \ref{tab:main_table}, we analyze our method on the classification task and compare it with prior Adversarial DFKD methods \cite{DAFL,fang2019data,choi2020data} and closely related memory-based Adversarial DFKD methods \cite{binici2022preventing,binici2022robust}. For a fair comparison across the methods, we re-implemented all the methods and used the same generator architecture to generate the pseudo samples. For each of the methods, we report the $\mu[\mathcal{S}_{Acc}]$ and $\sigma^{2}[{\mathcal{S}_{Acc}}]$ at different epoch percentiles and the maximum accuracy (Acc\textsubscript{max} (\%)) attained by the student. We observe that, compared to MB-DFKD (Memory Bank) \cite{binici2022preventing}, \emph{Ours-1} (Memory Bank) demonstrates consistent improvement across all the datasets. Similarly, compared to PRE-DFKD (Generative Replay) \cite{binici2022robust}, utilizing the same VAE decoder architecture as the generative replay, we observe a similar trend for \emph{Ours-2} (Generative Replay).
\input{table2}
Moreover, in Figure \ref{fig:learning_curves}, we visualize the learning curves of the student networks trained on multiple datasets. We plot the test accuracy of the student at the end of each training epoch. The proposed method exhibits significant improvement in the learning evolution and the peak accuracies achieved, suggesting that the proposed approach can retain the knowledge from previously encountered samples as the learning progresses. However, on Tiny-ImageNet with Generative replay, we did not observe substantial improvements; we conjecture that this is due to the complexity of the dataset and the inability of the replay to capture crucial samples over a large number of epochs. Also, with Generative Replay we sometimes faced difficulty in training a VAE on a stream of synthetic samples (especially for complex datasets like Tiny-ImageNet), since the VAE itself suffers from distribution drift.
Additionally, we emphasize that we do not strive toward achieving state-of-the-art student classification accuracy (requiring extensive hyper-parameter tuning) in the DFKD setting, but rather verify the viability of our hypothesis of retaining the previously acquired knowledge while learning on new samples. Nonetheless, we observe that our method improves upon the student classification accuracy on CIFAR100 \cite{CIFAR} compared to the contemporary works and the current state-of-the-art \cite{binici2022robust} with the ResNet-34 \cite{he2016deep} ($\mathcal{T}$) and ResNet-18 \cite{he2016deep} ($\mathcal{S}$) setup, as shown in Table \ref{tab:cifar100-comparison}. Since previous works use the same teacher network with different test accuracies, we also report the teacher accuracies of the respective methods used to distill the knowledge to the student. We further compute the Teacher-Student accuracy difference ($\Delta_{\mathcal{T}_{Acc} - \mathcal{S}_{Acc}}$) to assess the distance of the student from its teacher in terms of classification accuracy.\\
\noindent \textbf{Interpreting the Result: } Because of the proposed student update strategy, we observe a global monotonicity in the student's learning evolution, which the existing approaches with naive replay \cite{binici2022preventing,binici2022robust} claim, but do not warrant (described in Section \ref{sec:LRA}). The global monotonicity in the learning evolution brings crucial advantages. For example, when validation data is unavailable, the developer cannot assess the student's accuracy and identify the best parameters for the student. In such a scenario, the final accuracy depends on the arbitrary termination epoch set by the developer. In other words, the ideal DFKD approach should sustain high accuracy by monotonically enhancing it during the course of distillation. Therefore, $\mu[\mathcal{S}_{Acc}]$ and $\sigma^{2}[\mathcal{S}_{Acc}]$ serve as crucial metrics to assess the distillation method, as opposed to the maximum accuracy (Acc\textsubscript{max}), since the Acc\textsubscript{max} value can be attained at any point of time prior to the termination and can be misleading. The improvements in the monotonicity and in the $\mu[\mathcal{S}_{Acc}]$ and $\sigma^{2}[\mathcal{S}_{Acc}]$ values of the proposed method are evident from Table \ref{tab:main_table}, Figure \ref{fig:learning_curves} and Figure \ref{fig:wrn_cifar10}.\\
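A small sketch of how these percentile metrics can be computed from the per-epoch test accuracies (an illustrative helper, not our evaluation code):
\begin{verbatim}
import numpy as np

def percentile_stats(acc_per_epoch, n):
    # mean/variance of the student's test accuracy over all epochs
    # beyond the n-th percentile of the training run
    acc = np.asarray(acc_per_epoch, dtype=float)
    start = int(np.ceil(len(acc) * n / 100.0))
    tail = acc[start:]
    return tail.mean(), tail.var()

accs = [10.0, 40.0, 55.0, 60.0, 62.0, 61.0, 63.0, 62.5]  # toy curve
print(percentile_stats(accs, 50))  # stats over the last half of epochs
\end{verbatim}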
\noindent \textbf{Architecture Scalability:} The proposed student update strategy is generic and scales across different neural network architecture families, since the method is not constrained to a particular choice of architecture. From the ResNet-18~\cite{he2016deep}, WRN-16-1~\cite{WRN} and WRN-16-2~\cite{WRN} student learning curves~(in Figure~\ref{fig:learning_curves} and Figure~\ref{fig:wrn_cifar10}), we observe our method's advantage across the network architectures. Moreover, for large student network architectures (deeper or wider) that include complex layers, the proposed method efficiently handles the intricacies of computing the Hessian and the Hessian-vector products, which becomes highly essential for cumbersome models.\\\textbf{Improvement across Replay Schemes:} Furthermore, the proposed method is agnostic to the memory scheme employed for replay, as demonstrated by the experiments (in Table~\ref{tab:main_table}, Figure~\ref{fig:learning_curves} and Figure~\ref{fig:wrn_cifar10}) using a Memory Buffer and Generative Replay, thus rendering our method generalizable across the choice of replay. In Table~\ref{tab:main_table} and Figure \ref{fig:wrn_cifar10}, we can observe that the proposed method enhances the student's performance on both replay schemes (Memory Bank and Generative Replay) used in the prior arts. Moreover, we experiment with different memory buffer sizes on WRN-16-1~\cite{WRN} and WRN-16-2~\cite{WRN} distillation (in Figure~\ref{fig:wrn_cifar10}) and observe consistent and substantial improvements across different memory sizes. Here, the memory size is defined as the maximum number of pseudo-example batches that the bank can contain, and each batch consists of 64 examples randomly sampled from $\hat{x}$.\\\textbf{GPU Memory Utilization:} Moreover, our student update strategy brings in no practical memory overhead compared to memory-based Adversarial DFKD methods. We observe only a minimal increase in GPU memory usage of a few MBs ($\approx$ 40 MB) due to the higher-order gradients computed as a part of the update on $\theta_{\mathcal{S}}$ through $\theta_{\mathcal{S}}^{\prime}$ (sketched below). Moreover, we use a \emph{single} gradient descent step to obtain $\theta_{\mathcal{S}}^{\prime}$, which does not incur a large memory overhead. Thus, we do not opt for a first-order approximation~\cite{nichol2018first} of our method, which is prevalent in the meta-learning literature.
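For illustration, the one-step look-ahead update and the resulting higher-order gradients can be sketched in PyTorch as below; the toy quadratic losses stand in for $\mathcal{L}_{Acq}$ and $\mathcal{L}_{Ret}$, and \texttt{create\_graph=True} is what lets the Hessian-vector products of Theorem 1 flow implicitly.
\begin{verbatim}
import torch

torch.manual_seed(0)
theta = torch.randn(5, requires_grad=True)   # stand-in student params
alpha = 0.9                                  # inner-step learning rate

def loss_acq(p):                             # toy acquisition loss
    return (p ** 2).sum()

def loss_ret(p):                             # toy retention (replay) loss
    return ((p - 1.0) ** 2).sum()

# one differentiable gradient-descent step: theta' = theta - alpha*grad
g_acq = torch.autograd.grad(loss_acq(theta), theta, create_graph=True)[0]
theta_prime = theta - alpha * g_acq

# backprop L_Ret(theta') to theta; the higher-order terms are carried
# through theta_prime automatically
(loss_acq(theta) + loss_ret(theta_prime)).backward()
print(theta.grad)
\end{verbatim}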
\section{Conclusion}
\label{sec:conclusion}
\noindent \textbf{Societal Impact: } Similar to other DFKD methods, our method may be framed as an attack strategy to create clones of proprietary pre-trained models that are accessible online \cite{DFME}. However, this work makes no such efforts and does not support such practices.\\\textbf{Summary: }In this paper, we proposed a meta-learning inspired student update strategy for the Adversarial DFKD setting, that treats \emph{Knowledge-Acquisition} and \emph{Knowledge-Retention} as meta-train and meta-test, respectively. The proposed strategy substantially improves the learning evolution of the student network by implicitly aligning the $\emph{Knowledge-Retention}$ and the $\emph{Knowledge-Acquisition}$ tasks. The intended effect of having the gradient directions aligned is to obtain student parameters ($\theta_{\mathcal{S}}$) that have optimal performance on both $\mathcal{L}_{Acq}$ and $\mathcal{L}_{Ret}$. The conducted experiments on multiple datasets, network architectures, and replay schemes demonstrate the effectiveness, scalability and generalizability of the proposed strategy.
\section{Training Details:}
\input{algo1.tex}
\input{algo2.tex}
\subsection{Teacher Model Training Details}
We train the ResNet-34 \cite{he2016deep} teacher model for SVHN \cite{SVHN} and Tiny-ImageNet \cite{tiny}. For SVHN we use the ResNet-34 model definition made available by Binci \textit{et al.}\footnote{\label{note1}\url{https://github.com/kuluhan/PRE-DFKD}} and for Tiny-ImageNet, we use the \texttt{torchvision} model definition from PyTorch\footnote{\url{https://pytorch.org/}}. To train the teacher models we use the SGD optimizer with an initial learning rate of 0.1, momentum of 0.9 and a weight decay of 5e-4, with a batch size of 128 for 400 epochs. The learning rate is decayed to 0 over the course of training using cosine annealing at each iteration.
\subsection{Student Model Training Details}
\label{sec:training_details}
For fair comparisons, we use the same Generator ($\mathcal{G}$) network (shown in Table \ref{tab:generator}) for all the methods. Unless explicitly specified otherwise, for MB-DFKD \cite{binici2022preventing} and our method (w/ Memory Buffer), we maintain a memory buffer of size 10 and update the memory buffer at a frequency of $f=5$, following previous work \cite{binici2022preventing} (Algorithm \ref{alg:DFKD_MB}). Also, for PRE-DFKD \cite{binici2022robust} and our method (w/ Generative Replay), we use the same VAE architecture (as in Table \ref{tab:generator} (Decoder) and Table \ref{tab:vae_encoder} (Encoder)), from \cite{binici2022robust}, to transfer the pseudo samples as memory, and use the decoder part (same as the generator architecture in Table \ref{tab:generator}) to replay the learnt distribution, with the VAE update parameters $f=1$ and $s_{max}^{gp} = 4$ (Algorithm \ref{alg:DFKD_GR}), following previous works \cite{binici2022robust}.
For all the methods and datasets, we use the SGD optimizer with a momentum of 0.9 and a variable learning rate ($\alpha_{\mathcal{S}}$) with cosine annealing, starting from 1e-1 and annealing it at each epoch to 0, to optimize the student parameters ($\theta_{\mathcal{S}}$). For the one-step gradient descent, we use a learning rate ($\alpha$) of 0.9. Furthermore, we use the Adam optimizer with a learning rate ($\alpha_{\mathcal{G}}$) of 0.02 to optimize the Generator ($\mathcal{G}$). We test all our methods primarily on SVHN \cite{SVHN}, CIFAR10 \cite{CIFAR}, CIFAR100 \cite{CIFAR}, and Tiny-ImageNet \cite{tiny} for 200, 200, 400, and 500 epochs ($\mathcal{E}_{max}$), respectively. Our experiments were run on a mixture of Nvidia RTX2080Ti (11GB) and RTX3090 (24GB) GPUs. However, all our experiments incurred no more than 11.5 GB of VRAM.
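A minimal sketch of this optimizer setup (with placeholder networks; the actual models are those described above):
\begin{verbatim}
import torch

E_max = 200                                    # e.g., the SVHN/CIFAR10 budget
student = torch.nn.Linear(32 * 32 * 3, 10)     # placeholder networks
generator = torch.nn.Linear(1000, 32 * 32 * 3)

opt_S = torch.optim.SGD(student.parameters(), lr=1e-1, momentum=0.9)
sched_S = torch.optim.lr_scheduler.CosineAnnealingLR(
    opt_S, T_max=E_max, eta_min=0.0)           # anneal 1e-1 -> 0 over epochs
opt_G = torch.optim.Adam(generator.parameters(), lr=0.02)

for epoch in range(E_max):
    # ... one distillation epoch as described above ...
    sched_S.step()
\end{verbatim}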
\begin{table}[ht]
\centering
\caption{Generator Network ($\mathcal{G}$) and Generative Replay (VAE \cite{VAE}) Decoder Architecture.}
\label{tab:generator}
\resizebox{0.65\textwidth}{!}{%
\begin{tabular}{@{}ll@{}}
\toprule
\textbf{Output Size} & \textbf{Layers} \\ \midrule
$1000$ & Noise ($z \sim \mathcal{N}(0,I)$) \\
$128 \times h / 4 \times w / 4$ & \textit{Linear}, \textit{BatchNorm1D}, \textit{Reshape} \\
$128 \times h / 4 \times w / 4$ & \textit{SpectralNorm (Conv ($3 \times 3$))}, \textit{BatchNorm2D}, \textit{LeakyReLU} \\
$128 \times h / 2 \times w / 2$ & \textit{UpSample} ($2 \times$) \\
$64 \times h / 2 \times w / 2$ & \textit{SpectralNorm (Conv ($3 \times 3$))}, \textit{BatchNorm2D}, \textit{LeakyReLU} \\
$64 \times h \times w $ & \textit{UpSample} ($2 \times$) \\
$3 \times h \times w $ & \textit{SpectralNorm (Conv ($3 \times 3$))}, \textit{TanH}, \textit{BatchNorm2D} \\ \bottomrule
\end{tabular}%
}
\end{table}
\begin{table}[ht]
\centering
\caption{Generative Replay (VAE \cite{VAE}) Encoder Architecture.}
\label{tab:vae_encoder}
\resizebox{0.65\textwidth}{!}{%
\begin{tabular}{@{}ll@{}}
\toprule
\textbf{Output Size} & \textbf{Layers} \\ \midrule
$3 \times h \times w$ & Input Example \\
$64 \times h \times w$ & \textit{SpectralNorm(Conv ($3 \times 3$))}, \textit{BatchNorm2D}, \textit{LeakyReLU} \\
$128 \times h \times w$ & \textit{SpectralNorm(Conv ($3 \times 3$))}, \textit{BatchNorm2D}, \textit{LeakyReLU} \\
$128 \times h/2 \times w/2$ & \textit{DownSample} ($0.5 \times$) \\
$128 \times h/2 \times w/2$ & \textit{SpectralNorm(Conv ($3 \times 3$))}, \textit{BatchNorm2D}\\
$128 \times h/4 \times w/4$ & \textit{DownSample} ($0.5 \times$) \\
$\{1000, 1000\}$ & \textit{Reshape}, \textit{Linear} \\ \bottomrule
\end{tabular}%
}
\end{table}
\section{Attribution of Existing Assets:}
\subsection{Code-Base:}
The code-base used to experiment with the proposed method is adapted from the GitHub\textsuperscript{\ref{note1}} repository of Binci \textit{et al.} \cite{binici2022robust}.
\subsection{Pre Trained Teacher Model}
The ResNet-34 and WRN-40-2 \cite{WRN} teacher models pretrained on CIFAR10 \cite{CIFAR} are taken from the GitHub\footnote{\url{https://github.com/zju-vipa/CMI}} repository made available by Fang \textit{et al.} \cite{fang2021contrastive}. For the ResNet-34 teacher model, pretrained on CIFAR100 \cite{CIFAR}, we used the model made available by Binci \textit{et al.}\textsuperscript{\ref{note1}} \cite{binici2022robust}.
\section{Extended Results}
In Figure \ref{fig:wrn_cumulative}, we visualize the Cumulative Mean Accuracies (\%) across the training epochs with Buffer-based and Generative Replay. The plots in Figure \ref{fig:wrn_cumulative} complement the ones shown in Figure \ref{fig:wrn_cifar10} of the main manuscript.
Based on the similarity between the Tiny-ImageNet teacher accuracy ($\mathcal{T}_{Acc}$) of our methods and that reported by Li \textit{et al.} \cite{CuDFKD}, we compare our methods with the accuracies reported by them (see Table \ref{tab:tiny_extended}).
\begin{table}[ht]
\centering
\resizebox{0.7\textwidth}{!}{%
\begin{tabular}{lcc}
\toprule
\textbf{Method} & {\color[HTML]{656565} \textbf{Teacher Accuracy (\%) ($\mathcal{T}_{Acc}$)}} & \textbf{Student Accuracy (\%) ($\mathcal{S}_{Acc}$)} \\ \midrule
ADI\textsuperscript{a} \cite{yin2020dreaming} & {\color[HTML]{656565} 61.47} & 6.00 \\
CMI\textsuperscript{a} \cite{fang2021contrastive} & {\color[HTML]{656565} 61.47} & 1.85 \\
\midrule
DAFL\textsuperscript{b} \cite{DAFL} & {\color[HTML]{656565} 60.83} & 35.46 \\
DFAD\textsuperscript{b} \cite{fang2019data} & {\color[HTML]{656565} 60.83} & 19.60 \\
DFQ\textsuperscript{a} \cite{choi2020data} & {\color[HTML]{656565} 61.47} & 41.30 \\
CuDFKD\textsuperscript{a} \cite{CuDFKD} & {\color[HTML]{656565} 61.47} & 43.42 \\ \midrule
\rowcolor[HTML]{EFEFEF}
\textit{\textbf{Ours-1 (w/ Memory Bank)}} & {\color[HTML]{656565} 60.83} & 47.96 \\
\rowcolor[HTML]{EFEFEF}
\textit{\textbf{Ours-2 (w/ Generative Replay)}} & {\color[HTML]{656565} 60.83} & 49.88 \\ \bottomrule
\end{tabular}%
}
\caption{Classification accuracy (in \%) of the student trained using various DFKD methods on Tiny-ImageNet \cite{tiny} with ResNet-34 \cite{he2016deep} as the teacher and ResNet-18 \cite{he2016deep} as the student. \textsuperscript{a} and \textsuperscript{b} denote results obtained from \cite{CuDFKD} and our implementation, respectively.}
\label{tab:tiny_extended}
\end{table}
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{figs/wrn_cifar10_cumulative_33.pdf}
\caption{Cumulative Mean Accuracy (\%) evolution of Wide-ResNet (WRN) \cite{WRN}. The WRN-16-1 (\textit{top-row}) and WRN-16-2 (\textit{bottom-row}) networks are distilled by a WRN-40-2 teacher network pre-trained on CIFAR10 ($\mathcal{T}_{Acc} = 94.87\%$). Each column represents the learning curves with the Buffer-based (with different memory buffer sizes) and Generative replay schemes. The proposed method is in \textbf{\color[HTML]{3b80ee} Blue}.}
\label{fig:wrn_cumulative}
\end{figure}
\newpage
\begin{customlemma}{1}
\label{lemma:taylor_series_appn}
If $\mathcal{L}_{Ret}$ has Lipschitz Hessian, \textit{i.e.}, $\lVert \nabla^{2}\mathcal{L}_{Ret}(\theta_{1}) - \nabla^{2}\mathcal{L}_{Ret}(\theta_{2}) \rVert \leq \rho \lVert \theta_{1} - \theta_{2} \rVert$ for some $\rho > 0$, then:
\begin{align*}
\nabla\mathcal{L}_{Ret}(\theta + \mathbf{\phi}_{\theta}) = \nabla\mathcal{L}_{Ret}(\theta) + \nabla^{2}\mathcal{L}_{Ret}(\theta)\mathbf{\phi}_{\theta} + \mathcal{O}(\lVert \mathbf{\phi}_{\theta} \rVert^{2}).
\end{align*}
For instance, when $\phi_{\theta} = -\alpha \nabla \mathcal{L}_{Acq}(\theta)$, we have,
\begin{align*}
\nabla\mathcal{L}_{Ret}(\theta -\alpha \nabla \mathcal{L}_{Acq}(\theta)) = & \nabla\mathcal{L}_{Ret}(\theta) - \alpha \nabla^{2}\mathcal{L}_{Ret}(\theta) \nabla \mathcal{L}_{Acq}(\theta) + \mathcal{O}(\alpha^{2}).
\end{align*}
\end{customlemma}
\begin{proof}
Applying the fundamental theorem of calculus to each component of $\nabla\mathcal{L}_{Ret}$, we have:
\begin{align}
\nabla \mathcal{L}_{Ret}(\theta + \phi_{\theta}) = \nabla \mathcal{L}_{Ret}(\theta) + \nabla^{2}\mathcal{L}_{Ret}(\theta)\phi_{\theta} + \int_{k=0}^{1}( \nabla^{2} \mathcal{L}_{Ret}(\theta + k\phi_{\theta}) - \nabla^{2}\mathcal{L}_{Ret}(\theta))\phi_{\theta}dk.
\end{align}
Omitting the subscript $Ret$ for brevity,
\begin{align}
\implies \lVert \nabla \mathcal{L}(\theta + \phi_{\theta}) - (\nabla \mathcal{L}(\theta) + \nabla^{2}\mathcal{L}(\theta)\phi_{\theta}) \rVert &= \lVert \int_{k=0}^{1}( \nabla^{2} \mathcal{L}(\theta + k\phi_{\theta}) - \nabla^{2}\mathcal{L}(\theta))\phi_{\theta}dk \rVert \\
\implies \lVert \nabla \mathcal{L}(\theta + \phi_{\theta}) - (\nabla \mathcal{L}(\theta) + \nabla^{2}\mathcal{L}(\theta)\phi_{\theta}) \rVert & \leq \int_{k=0}^{1} \lVert ( \nabla^{2} \mathcal{L}(\theta + k\phi_{\theta}) - \nabla^{2}\mathcal{L}(\theta))\phi_{\theta} \rVert dk\\
\implies \lVert \nabla \mathcal{L}(\theta + \phi_{\theta}) - (\nabla \mathcal{L}(\theta) + \nabla^{2}\mathcal{L}(\theta)\phi_{\theta}) \rVert & \leq \int_{k=0}^{1} \rho \lVert k \phi_{\theta} \rVert \cdot \lVert \phi_{\theta} \rVert dk \qquad \text{from $\rho$-Lipschitzness} \\
\implies \lVert \nabla \mathcal{L}(\theta + \phi_{\theta}) - (\nabla \mathcal{L}(\theta) & + \nabla^{2}\mathcal{L}(\theta)\phi_{\theta}) \rVert \leq \frac{\rho}{2} \lVert \phi_{\theta} \rVert^{2}.
\end{align}
\end{proof}
\begin{customthm}{1}
\label{lemma:theorem1_proof}
If $\theta^\prime = \theta - \alpha \nabla \mathcal{L}_{Acq}(\theta)$, denotes the one step gradient descent on $\theta$ with the objective $\mathcal{L}_{Acq}(\theta)$, where $\alpha$ is a scalar, and $\nabla\mathcal{L}_{Acq}(\theta)$ denotes the gradients of $\mathcal{L}_{Acq}$ at $\theta$, then:
\begin{align*}
\frac{\partial \mathcal{L}_{Ret}(\theta^{\prime})}{\partial \theta} = \nabla \mathcal{L}_{Ret}(\theta) - \alpha \nabla^{2} \mathcal{L}_{Ret}(\theta).\nabla \mathcal{L}_{Acq}(\theta) - \alpha \nabla^{2} \mathcal{L}_{Acq}(\theta).\nabla \mathcal{L}_{Ret}(\theta) + \mathcal{O}(\alpha^{2}).
\end{align*}
\end{customthm}
\begin{proof}
We have
\begin{align}
\frac{\partial \mathcal{L}_{Ret}(\theta^{\prime})}{\partial \theta} &= \nabla \mathcal{L}_{Ret}(\theta^{\prime}). \frac{\partial \theta^{\prime}}{\partial \theta} \\
\implies \frac{\partial \mathcal{L}_{Ret}(\theta^{\prime})}{\partial \theta} &= \nabla \mathcal{L}_{Ret}(\theta^{\prime}). \frac{\partial( \theta - \alpha \nabla \mathcal{L}_{Acq}(\theta))}{\partial \theta} \\
\implies \frac{\partial \mathcal{L}_{Ret}(\theta^{\prime})}{\partial \theta} &= \nabla \mathcal{L}_{Ret}(\theta^{\prime}).(I - \alpha \nabla^{2}\mathcal{L}_{Acq}(\theta)) \label{eq:partial}
\end{align}
Using Lemma \ref{lemma:taylor_series_appn}, we substitute the value of $\nabla \mathcal{L}_{Ret}(\theta^{\prime})$, where $\theta^\prime = \theta - \alpha \nabla \mathcal{L}_{Acq}(\theta)$ in (\ref{eq:partial}), and obtain:
\begin{align}
\frac{\partial \mathcal{L}_{Ret}(\theta^{\prime})}{\partial \theta} &= \overbrace{(\nabla \mathcal{L}_{Ret}(\theta) + \nabla^{2}\mathcal{L}_{Ret}(\theta).\underbrace{(\theta^{\prime} - \theta)}_{= -\alpha \nabla \mathcal{L}_{Acq}(\theta)} + \underbrace{\mathcal{O}(\lVert \theta^{\prime} - \theta\rVert^{2})}_{=\mathcal{O}(\alpha^2)})}^{=\nabla\mathcal{L}_{Ret}(\theta^{\prime})}.(I - \alpha \nabla^{2}\mathcal{L}_{Acq}(\theta))\\
\implies \frac{\partial \mathcal{L}_{Ret}(\theta^{\prime})}{\partial \theta} &= \nabla\mathcal{L}_{Ret}(\theta) + \nabla^{2}\mathcal{L}_{Ret}(\theta).\underbrace{(\theta^{\prime} - \theta)}_{= -\alpha \nabla \mathcal{L}_{Acq}(\theta)} - \alpha \nabla^{2}\mathcal{L}_{Acq}(\theta)\nabla \mathcal{L}_{Ret}(\theta) + \mathcal{O}(\alpha^{2}) \\
\implies \frac{\partial \mathcal{L}_{Ret}(\theta^{\prime})}{\partial \theta} &= \nabla\mathcal{L}_{Ret}(\theta) - \alpha \nabla^{2}\mathcal{L}_{Ret}(\theta)\nabla \mathcal{L}_{Acq}(\theta)- \alpha \nabla^{2}\mathcal{L}_{Acq}(\theta)\nabla \mathcal{L}_{Ret}(\theta) + \mathcal{O}(\alpha^{2}) \\
\implies \frac{\partial \mathcal{L}_{Ret}(\theta^{\prime})}{\partial \theta} &= \nabla\mathcal{L}_{Ret}(\theta) - \alpha \underbrace{(\overbrace{\nabla^{2}\mathcal{L}_{Ret}(\theta)\nabla \mathcal{L}_{Acq}(\theta)}^{Hessian \ Product-1} + \overbrace{\nabla^{2}\mathcal{L}_{Acq}(\theta)\nabla \mathcal{L}_{Ret}(\theta)}^{Hessian \ Product-2})}_{Gradient \ Matching} + \mathcal{O}(\alpha^{2})
\label{eq:lret}
\end{align}
Note that Lemma \ref{lemma:taylor_series_appn} provides an efficient way to obtain $Hessian \ Product-1$ (highlighted in (\ref{eq:lret})) by computing the gradient of $\mathcal{L}_{Ret}$ at $\theta^{\prime}$, thus eliminating the time and memory overhead of explicitly computing $Hessian \ Product-1$. Hence, we have:
\begin{align}
\frac{\partial \mathcal{L}_{Ret}(\theta^{\prime})}{\partial \theta} = \nabla \mathcal{L}_{Ret}(\theta) - \alpha \nabla^{2}\mathcal{L}_{Ret}(\theta)\nabla \mathcal{L}_{Acq}(\theta) - \alpha \nabla^{2}\mathcal{L}_{Acq}(\theta)\nabla \mathcal{L}_{Ret}(\theta) + \mathcal{O}(\alpha^{2}).
\end{align}
\end{proof}
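As a quick sanity check of Theorem 1, the expansion can be verified numerically on quadratic losses, where gradients and Hessians are available in closed form; the following self-contained sketch is purely illustrative and not part of the distillation pipeline.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)); A = A @ A.T   # Hessian of L_Ret
C = rng.standard_normal((n, n)); C = C @ C.T   # Hessian of L_Acq
b, d = rng.standard_normal(n), rng.standard_normal(n)
theta = rng.standard_normal(n)
alpha = 1e-3

g_ret = A @ theta + b                          # grad L_Ret(theta)
g_acq = C @ theta + d                          # grad L_Acq(theta)
theta_p = theta - alpha * g_acq                # one-step update

exact = (np.eye(n) - alpha * C).T @ (A @ theta_p + b)  # chain rule
approx = g_ret - alpha * (A @ g_acq + C @ g_ret)       # Theorem 1 expansion
print(np.linalg.norm(exact - approx))                  # O(alpha^2)
\end{verbatim}
The printed deviation scales as $\mathcal{O}(\alpha^{2})$, consistent with the remainder term.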
|
{
"arxiv_id": "2302.14262",
"language": "en",
"timestamp": "2023-03-01T02:07:12",
"url": "https://arxiv.org/abs/2302.14262",
"yymm": "2302"
} | \section{Introduction}
Mesh adaptation is one of the most important steps in solving a computational fluid dynamics problem. As pointed out by \cite{slotnick2014cfd}, mesh generation and adaptivity continue to be significant bottlenecks in the CFD workflow. In order to obtain an economical mesh,
developing a good criterion for adaptation is of major concern.
There are two commonly used h-adaptive methods: residual-based and physics-based. However, such adaptation methods may lack clear insight into either the governing equations or the quantity of interest and thus waste degrees of freedom. In our earlier work, we used the fully discrete adjoint method for adaptation, which performs well and accelerates the computation of a specific quantity of interest for the steady Euler equations. As noted by \cite{fidkowski2011review}, adjoint-based techniques have become more widely used in recent years as they have more solid theoretical foundations for error estimation and control analysis. From these theoretical analyses, it is realized that the adjoint consistency property is of vital importance to guarantee error estimation as well as convergence behavior. In \cite{lu2005posteriori}, dual consistency was first analyzed, showing that a dual-weighted residual framework built on a dual-inconsistent discretization may lead to oscillatory dual solutions and unexpected singularities. Nevertheless, adjoint consistency is not always satisfied, typically because of a mismatch between the quantity of interest and the boundary conditions. To address this issue, modifications to the numerical scheme are required to construct dual-consistent adaptation for general configurations. In \cite{dolejsi2022}\cite{hartmann2007adjoint}, different numerical fluxes were analyzed to derive dual consistency. Since the framework is built on Fr\'{e}chet derivatives, the first-order derivatives of the fluxes must be available, and the Lax-Friedrichs and Vijayasundaram numerical fluxes are therefore considered in our work. Furthermore, Hartmann first generalized the dual consistency theory to general configurations via a boundary modification method \cite{HARTMANN2015754}. We combine these analyses with our framework and develop a DWR-based, dual-consistent mesh adaptation.
In order to derive a dual-consistent framework, compatibility and consistency are of major concern. Firstly, compatibility concerns the dual problem: the quantity of interest generated by the functional of the dual solutions should theoretically be equal to the functional of the primal solutions. Giles used a matrix representation method to derive a continuous version of the dual equations of the Euler equations with different kinds of boundary conditions \cite{giles1997adjoint}. However, for some specific boundary conditions, the dual equations may generate non-trivial strong boundary conditions. To apply the DWR method in a discretized finite volume scheme, Darmofal developed the fully discrete method, which is easier to implement. For the consistency part, \cite{hartmann2007adjoint}\cite{lu2005posteriori} analyzed the discretization based on the discontinuous Galerkin scheme and found the prerequisites for dual consistency. As highlighted by \cite{hicken2014dual}, the main source of dual inconsistency is the effect of summation by parts, so the discretization of the framework is of vital importance. To connect consistency and compatibility, the continuous limit of the discretized dual equations must coincide exactly with the continuous dual equations.
Besides, V\'{i}t Dolej\v{s}\'{i} et al. developed a series of $hp$-adaptation methods for adjoint-based problems \cite{RANGARAJAN2020109321} and applied them to the Euler equations in \cite{dolejsi2022}.
The governing equations considered here are the Euler equations, which play an important role not only in research but also in the optimal shape design of vehicles and aircraft. Although these equations are essential for this type of research, it can be challenging to develop an efficient and robust iterative solver for the nonlinear equations, especially on unstructured meshes. Remarkable progress has been achieved since Jameson \cite{JAMESON1983327}\cite{JAMESONRunge}\cite{Jameson1983SolutionOT} developed numerous numerical techniques for this framework. A homotopy method has been applied to steady-state problems of hyperbolic conservation laws since its introduction in \cite{hao2013homotopy}. In \cite{li2008multigrid}, Li et al. proposed a finite volume solver for 2D steady Euler equations, where the block lower-upper symmetric Gauss-Seidel iteration was implemented as a smoother in each Newton iteration. Newton's method is a very powerful method for solving the nonlinear algebraic system, but it is challenging to develop it into a robust and efficient solver across different configurations. To enhance the robustness of the solver for steady Euler equations, Hu et al. \cite{hu2011robust} used a WENO-type reconstruction, which yields accurate approximations in smooth regions and non-oscillatory sharp profiles near shock discontinuities. A NURBS method \cite{Meng2022} has been included in the solver to construct the airfoil geometry. It is worth mentioning that, with a series of customized numerical methods, the solver can resolve the system with residual convergence to machine precision on a desktop computer.
Even though the Newton-GMG solver is effective and robust for the steady Euler equations, it still encounters problems when h-adaptive refinement is applied. In previous work, we considered the dual-weighted residual method for adaptation, which sharply reduces the degrees of freedom compared with global refinement. However, based on our numerical experience, the DWR method is hard to implement correctly. The difficulties originate from the theory required to derive a well-defined dual equation, the algorithm needed to solve the dual equations, and the oscillations in the error of the target functional, which complicate the design of the stopping criterion. For these reasons, the dual consistency property should be considered. To our knowledge, however, the combination of dual consistency with a Newton-GMG solver has received little attention so far. The reasons are various: developing an effective Newton-GMG algorithm is non-trivial in the first place, it is not guaranteed that the solver also works on the dual equations, and whether dual consistency is preserved in the Newton-GMG framework remains an open question.
Building on the powerful solver above, we step further to develop a faster and more robust h-adaptivity method for mesh refinement. Though the DWR method has already been investigated in our previous work, we have encountered unexpected phenomena that need to be addressed. Validating the efficiency of dual consistency in this Newton-GMG framework is a major concern. However, obtaining results from both the primal equations and the dual equations with this solver is challenging, especially on unstructured meshes, not to mention that a reconstruction framework is also involved in this work. To tackle these challenges, we use the Petrov-Galerkin method in our finite volume framework to combine the Discontinuous Galerkin scheme with our analysis. While Galerkin orthogonality disappears in a finite volume scheme, the Petrov-Galerkin method allows us to take care of residuals arising from a reconstruction scheme, as explained in this paper. In our earlier work, a k-exact reconstruction method was proposed to handle the reconstruction patch. With all these foundations, a highly efficient Godunov scheme can be implemented in our calculation. As previously mentioned, dual consistency must be considered both in the theory and in the implementation of the algorithm. For the theoretical part, the fully discrete adjoint, the continuous adjoint, and the DG-scheme adjoint method are all reviewed. To ensure that the continuous limit of the dual-consistent framework equals the continuous dual equations exactly, we must derive a strong form of the continuous dual equations, which is not trivial. The main implementation of the dual solver is then based on the fully discrete method. In this work, the algorithm has been realized in the library AFVM4CFD developed by our group. The GMG smoother helps us obtain steady non-oscillatory solutions not only from the primal equations but also from the dual equations. However, the basic dual consistency property relies on certain restrictions, such as the zero normal velocity condition. Therefore, to develop the algorithm for various boundary conditions, the boundary modification method is implemented in the framework. For the generation of meshes on the boundary of the geometry, the NURBS method is included in the framework, which helps us handle different kinds of models. Besides, to solve the dual equations well, a regularization term is added to the dual system; fortunately, this regularization leads to satisfactory results in this work. With simulations on different models, we validate the efficiency of dual consistency in the Newton-GMG solver. Refinement with residual-based adaptation is tested first, and it does not contribute to the convergence of the quantity of interest. Then the DWR method is considered. The experiments verify how this property improves the stability of the target functional's convergence and highlight the importance of a dual-consistent framework. The refinement strategy under a dual-consistent scheme leads to a more accurate quantity of interest as the degrees of freedom increase, which is not guaranteed, and can even deteriorate, under a dual-inconsistent framework. Therefore, it is realized that the DWR method should be implemented under a dual-consistent scheme in this Newton-GMG framework. With modifications to the entire system, including the linear system solver, the nonlinear solution update, the geometry generation, and the far-field boundary correction, a dual-consistent DWR h-adaptivity method is constructed in this work. To enhance the efficiency of the solver, a decreasing-threshold technique is also employed in the algorithm, which helps us obtain the quantity of interest with the expected precision without compromising the stability and robustness of the algorithm.
This work is structured as follows: Section 2 introduces basic notations and provides a comprehensive review of the Newton-GMG solver. In Section 3, the main theory of dual consistency is discussed, starting from the continuous dual theory and leading to a dual consistency approach based on the Discontinuous Galerkin scheme. Since the proposed framework is based on a finite volume scheme, we combine it with the Discontinuous Galerkin scheme through a Petrov-Galerkin analysis. In Section 4, we consider the a posteriori error estimate, where the quantity of interest is already adjoint consistent with the discretization. Sections 5 and 6 provide more details on the algorithm and on our findings about the combination of adjoint consistency with the Newton-GMG solver.
\section{Steady Euler equations}
\subsection{Basic notations and finite volume method}
We begin with some basic notations. Let $\Omega$ be a domain in
$\mathbb R^2$ with boundary $\Gamma$ and $\mathcal K_h$ be a shape
regular subdivision of $\Omega$ into control volumes $K$. $K_i$ denotes the $i$-th element in this subdivision, and $e_{i,j}$ denotes the common edge of $K_i$ and $K_j$, i.e., $e_{i,j}=\partial K_{i} \cap \partial K_{j}$. The unit outer normal vector on the edge $e_{i,j}$ with respect to $K_{i}$ is represented as $n_{i,j}$.
Then $\mathcal{V}^{B_h}$ is the broken function space with cell size $h$ defined on
$ K$ which contains discontinuous piecewise $H^s $ functions,
i.e.,\[\mathcal{V}^{B_h}=\{v~:v|_K\in H^s(K)\}.\] Similarly, we use
$\mathcal{V}^{B_h}_p$ to denote the finite-dimensional spaces on
$\mathcal K$ with discontinuous piecewise polynomial functions of
degree $p$,
\[\mathcal{V}^{B_h}_p=\{v:v|_{ K}\in \mathcal{P}_p(K)\},\]
where $\mathcal{P}_p(K)$ is the space of polynomials whose degree $\le ~p$ defined on element $K$ with size $h$. In each control volume $K\in\mathcal{K}_h$, we use $\mathbf{u}^+$ and $\mathbf{u}^-$ to denote the interior and exterior traces of $\mathbf{u}$, respectively.
Besides, we introduce the flux function $\mathcal F(\mathbf{u})=(\mathbf f_1(\mathbf u),\mathbf f_2(\mathbf u))$. Then we can write the conservation law in compact form: find $\mathbf{u}:\Omega\rightarrow \mathbb R^4$ such that
\begin{equation}\label{primal}
\nabla\cdot \mathcal{F}( \mathbf{u})=0, \quad \mbox{in}~~\rm\Omega,
\end{equation}
subject to a certain set of boundary conditions.
For the inviscid two-dimensional steady Euler equations, the conservative variable flux is given by
\begin{equation}
\mathbf{u}=\begin{bmatrix}
\rho \\ \rho u_x \\\rho u_y\\E
\end{bmatrix},
\qquad\text{and}~\mathcal F(\mathbf{u})=\begin{bmatrix}
\rho u_x & \rho u_y
\\ \rho u_x^2+p&\rho u_xu_y
\\ \rho u_xu_y & \rho u_y^2+p
\\ u_x(E+p) & u_y(E+p)
\end{bmatrix},
\end{equation} where $(u_x,u_y)^T, \rho, p, E$ denote the velocity, density, pressure and total energy, respectively. In order to close the system in this paper, we use the equation of state
\begin{equation}
E=\frac{p}{\gamma-1}+\frac{1}{2}\rho(u_x^2+u_y^2),
\end{equation}
where $\gamma=1.4$ is the ratio of specific heats of the perfect gas.
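For reference, the flux and the equation of state can be evaluated directly; the following short Python sketch (illustrative only, not part of the solver) computes $\mathcal F(\mathbf u)$ from the conservative variables.
\begin{verbatim}
import numpy as np

GAMMA = 1.4

def euler_flux(u):
    # u = (rho, rho*ux, rho*uy, E): conservative variables
    rho, mx, my, E = u
    ux, uy = mx / rho, my / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * (ux**2 + uy**2))  # equation of state
    f1 = np.array([rho*ux, rho*ux**2 + p, rho*ux*uy, ux*(E + p)])
    f2 = np.array([rho*uy, rho*ux*uy, rho*uy**2 + p, uy*(E + p)])
    return f1, f2

print(euler_flux(np.array([1.0, 0.3, 0.1, 2.5])))
\end{verbatim}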
\subsection{Petrov Galerkin method}
In \cite{HU2016235}, a high-order adaptive finite volume method is proposed to solve \eqref{primal}. As the discretization is conducted within the finite volume framework, the Euler equations \eqref{primal} are actually solved in the reformulated form
\begin{equation}
\label{fvm_euler}
\mathcal{A}(\mathbf{u})=\int_\Omega \nabla \cdot \mathcal{F}(\mathbf{u})dx=\sum\limits_{i}\int_{K_i}\nabla\cdot
\mathcal{F}(\mathbf{u})dx=\sum\limits_{i}\sum\limits_{j}\oint_{e_{i,j}\in\partial K_i}F(\mathbf{u})\cdot n_{i,j}ds=0,
\end{equation}
which follows from the divergence theorem. With a numerical flux $\mathcal{H}$ introduced into this scheme, the equations become the fully discretized system
\begin{equation}
\label{EulerDiscrete}
\sum\limits_{i}\sum\limits_{j}\oint_{e_{i,j}\in\partial K_i}\mathcal{H}(\mathbf{u}_i,\mathbf{u}_j)\cdot n_{i,j}ds=0.
\end{equation}
Here $\mathbf{u}_i$ is the restriction of $\mathbf{u}$ on $K_i$.
To discuss the dual-weighted residual method further, we need to consider both the error estimates and a higher-order reconstruction of the solution. We adopt the Petrov-Galerkin variant of the discontinuous Galerkin method as in \cite{barth2002}, using $R_p^0$ to denote a reconstruction operator $R_p^0:\mathcal{V}_0^{B_h}\longmapsto \mathcal{V}_p^{B_h}$ which satisfies the cell-averaging condition for $\mathbf{u}_0\in\mathcal{V}_0^{B_h}$ and $K_i\in\mathcal{K}_h$, i.e.,
\begin{equation}
\label{Petrov}
(R_p^0\mathbf{u}_0,v)|_{K_i}=(\mathbf{u}_0,v)|_{K_i}=(\mathbf{u}_{0,i},v),\qquad\forall v\in\mathcal{V}_0^{B_h}.
\end{equation}
Here we use $(\cdot,\cdot)$ to denote the $L_2$ inner product of integration, while $(\cdot,\cdot)|_K$ is the inner product restricted on the control volume $K$.
With the Petrov-Galerkin representation \eqref{Petrov}, the primal equation \eqref{primal} can be reformulated into a weak form in each element,
\begin{equation}
-\int_K \mathcal F(R_p^0\mathbf{u}_0)\cdot\nabla v dx+\int_{\partial K}\mathcal F(R_p^0\mathbf{u}_0^+)\cdot n_{K}v^+ds=0.
\end{equation}
Meanwhile, since $\mathbf{u}_0$ may be discontinuous across element interfaces, we replace the flux $\mathcal F(R_p^0\mathbf{u}^+)\cdot n_{K}$ with a numerical flux function $\mathcal H(R_p^0\mathbf{u}_0^+,R_p^0\mathbf{u}_0^-,n_K)$, which depends on the interior as well as the exterior trace with respect to $K$ and the outward normal vector. Since the numerical flux on the physical boundary $\Gamma$ has no exterior trace, we denote it as $\tilde{\mathcal H}(R_p^0\mathbf{u}_0^+,\Gamma(R_p^0\mathbf{u}_0^+),n_K)$, where only the interior trace on the boundary $\Gamma$ is used. Then, summing the equation over each control volume, we get the \textit{Petrov-Galerkin form discretized Euler equations}:
Find $\mathbf{u}_0\in\mathcal{V}_0^B$ s.t.
\begin{equation}
\begin{aligned}
\label{discrete_operator}
\mathcal A_{h}(\cdot,\cdot)&:= \sum\limits_{K\in\mathcal{K}_h}\left\{-\int_K \mathcal F(R_p^0\mathbf{u}_0)\cdot\nabla v dx+\int_{\partial K\backslash \Gamma}\mathcal H(R_p^0\mathbf{u}_0^+,R_p^0\mathbf{u}_0^-,n_K)v^+ds+\int_{\partial K\cap\Gamma}\tilde{\mathcal H}(R_p^0\mathbf{u}_0^+,\Gamma(R_p^0\mathbf{u}_0^+),n_K)v^+ds\right\}
\\ &=\sum\limits_{K\in\mathcal{K}_h}\left\{\int_{\partial K\backslash \Gamma}\mathcal H(R_p^0\mathbf{u}_0^+,R_p^0\mathbf{u}_0^-,n_K)v^+ds+\int_{\partial K\cap\Gamma}\tilde{\mathcal H}(R_p^0\mathbf{u}_0^+,\Gamma(R_p^0\mathbf{u}_0^+),n_K)v^+ds\right\}=0,\qquad\forall v\in\mathcal{V}_0^{B_h}.
\end{aligned}
\end{equation}
Generally, we denote $\mathcal J(\cdot):\mathcal V\rightarrow \mathbb R$
as the quantity of interest,
\begin{equation}\mathcal
J(\mathbf{u})=\int_{\Omega}j_{\Omega}(u)dx+\int_{\Gamma}j_{\Gamma}(Cu)ds,
\end{equation}
where $C$ is a differential operator. Similarly, the weak form of the quantity of interest can be reformulated as
\begin{equation}
\mathcal{J}(\cdot,\cdot)=\int_{\Omega}j_{\Omega}(R_p^0\mathbf{u}_0)vdx+\int_{\Gamma}j_{\Gamma}(C(R_p^0\mathbf{u}_0))vds.
\end{equation}
Since the quantity of interest is of major concern in this research, in the next section, our main discussion is around the theory to solve the quantity of interest by the dual-weighted residual method.
\subsection{Linearization}
To solve the equation \eqref{EulerDiscrete}, we apply the Newton iteration method, where we expand the nonlinear term by a Taylor series and ignore the higher-order terms. Then we get
\begin{equation}
\begin{aligned}
\label{linearized_euler}
\sum\limits_{i}\sum\limits_{j}\int_{e_{i,j}\in\partial\mathcal{K}_i} \mathcal{H}(\mathbf{u}_i^{(n)},\mathbf{u}_j^{(n)})\cdot n_{i,j}ds &+\sum\limits_{i}\sum\limits_{j}\int_{e_{i,j}\in\partial\mathcal{K}_i}\Delta \mathbf{u}_i^{(n)}\frac{\partial\mathcal{H}(\mathbf{u}_i^{(n)},\mathbf{u}_j^{(n)})}{\partial \mathbf{u}_i^{(n)}}\cdot n_{i,j}ds\\
&+\sum\limits_{i}\sum\limits_{j}\int_{e_{i,j}\in\partial\mathcal{K}_i}\Delta \mathbf{u}_j^{(n)}\frac{\partial\mathcal{H}(\mathbf{u}_i^{(n)},\mathbf{u}_j^{(n)})}{\partial \mathbf{u}_j^{(n)}}\cdot n_{i,j}ds=0,
\end{aligned}
\end{equation}
where $\Delta \mathbf{u}_i$ is the increment of the conservative variables in the $i$-th element. After each Newton iteration, the cell average is updated by $\mathbf{u}_i^{(n+1)}=\mathbf{u}_i^{(n)}+\Delta \mathbf{u}_i^{(n)}.$ However, the Jacobian matrix in the Newton iteration may become singular, in which case the system cannot be solved. To overcome this issue, a regularization term $\int_{K_i}\Delta \mathbf{u}_i^{(n)}/{\Delta t_i}dx$, which generally stands for an artificial time derivative, is added to the system. The artificial local time step is often given as $CFL\times h_{K_i}/(|u|+c)$, where $h_{K_i}$ is the local grid size and $c$ is the speed of sound. Since the local residual quantifies how close the solution is to a steady state, it serves as the regularization weight in this equation. This approach leads to the system of \textit{regularized equations}:
\begin{equation}
\begin{aligned}
\label{regularized_equation}
\displaystyle \alpha \left|\!\left|\sum\limits_{i}\sum\limits_{j}\int_{e_{i,j}\in\partial\mathcal{K}_i}\mathcal{H}(\mathbf{u}_i^{(n)},\mathbf{u}_j^{(n)})\cdot n_{i,j}ds \right|\!\right|_{L_1}\Delta \mathbf{u}_i^{(n)}&+\sum\limits_{i}\sum\limits_{j}\int_{e_{i,j}\in\partial\mathcal{K}_i}\Delta \mathbf{u}_i^{(n)}\frac{\partial\mathcal{H}(\mathbf{u}_i^{(n)},\mathbf{u}_j^{(n)})}{\partial \mathbf{u}_i^{(n)}}\cdot n_{i,j}ds\\
&+\sum\limits_{i}\sum\limits_{j}\int_{e_{i,j}\in\partial\mathcal{K}_i}\Delta \mathbf{u}_j^{(n)}\frac{\partial\mathcal{H}(\mathbf{u}_i^{(n)},\mathbf{u}_j^{(n)})}{\partial \mathbf{u}_j^{(n)}}\cdot n_{i,j}ds\\
&=-\sum\limits_{i}\sum\limits_{j}\int_{e_{i,j}\in\partial\mathcal{K}_i}\mathcal{H}(\mathbf{u}_i^{(n)},\mathbf{u}_j^{(n)})\cdot n_{i,j}ds.
\end{aligned}
\end{equation}
The main advantage of this scheme is that once $\alpha$ is fixed, the regularization coefficient does not need to be adjusted adaptively. The process can be understood as follows: during the initial period of the iteration, the solution is far from the steady state and the regularization is strong; as the solution is updated, it approaches the final result and the regularization vanishes with the residual. Based on our numerical experience, the coefficient $\alpha$ should be adjusted with the far-field Mach number: a larger Mach number needs a larger $\alpha$ and vice versa; usually we set $\alpha$ equal to $2$ for subsonic flows.
Solving the system \eqref{regularized_equation} is challenging. Motivated by the effectiveness of the block lower-upper symmetric Gauss-Seidel iteration proposed in \cite{li2008multigrid} as a smoother, the geometric multigrid method with an agglomeration technique is included in the framework. The solver behaves satisfactorily not only on the primal equations but also on the dual equations; the algorithm is discussed later.
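To make the iteration concrete, a simplified sketch of the regularized Newton update \eqref{regularized_equation} is given below; a dense direct solve stands in for the GMG-smoothed linear solver, and the toy residual is only for illustration.
\begin{verbatim}
import numpy as np

def regularized_newton(residual, jacobian, u0, alpha=2.0,
                       tol=1e-12, max_iter=200):
    # (alpha * ||R(u)||_1 * I + J(u)) du = -R(u),  u <- u + du;
    # a direct solve stands in for the GMG-smoothed linear solver
    u = u0.copy()
    for _ in range(max_iter):
        r = residual(u)
        if np.linalg.norm(r) < tol:
            break
        A = alpha * np.linalg.norm(r, 1) * np.eye(u.size) + jacobian(u)
        u = u + np.linalg.solve(A, -r)
    return u

# toy nonlinear system: R(u) = u^2 - 2 (componentwise)
root = regularized_newton(lambda u: u**2 - 2.0,
                          lambda u: np.diag(2.0 * u),
                          np.array([1.0, 3.0]))
print(root)   # ~ [sqrt(2), sqrt(2)]
\end{verbatim}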
\section{Dual-Consistency}
Generally, the discussion about dual consistency, or adjoint consistency, such as in \cite{hartmann2007adjoint}, consists of two parts. The first concerns the dual problem, which should satisfy the compatibility condition, and the second concerns consistency. There are different ways to derive a well-defined discrete adjoint equation. For example, starting from the primal continuous equation, we can derive an adjoint equation and then discretize it to obtain the discrete adjoint equation. If this discrete adjoint equation is compatible with the discrete primal equation, we call the scheme adjoint consistent. Alternatively, we can first discretize the primal equation and then form the adjoint of this discrete equation. If this discrete adjoint equation is consistent with the adjoint of the continuous primal equation, the scheme also satisfies adjoint consistency. The consistency property depends on how we discretize the equation, i.e., on the numerical schemes we use, while the compatibility condition depends on how we derive the adjoint equation. These two properties are discussed in this section accordingly.
Here we use $(\cdot)^*$ to denote the adjoint version of a given operator. While $\mathcal A^*$ is usually derived via integration by parts, our main concern is the relation between $\mathcal B$ and $\mathcal B^*$, and between $\mathcal C$ and $\mathcal C^*$. $\mathcal B$ is related to the boundary condition, while $\mathcal C$ is mainly determined by the quantity of interest. In \cite{giles1997adjoint}, the method for deriving a well-posed adjoint equation is well discussed, and we follow this theory to formulate the system based on our scheme.
\subsection{Continuous Adjoint}
The continuous adjoint approach has been a useful technique for aerospace applications and optimal design, pioneered by Jameson \cite{Jameson}. For the compatibility part, the dual equations should be derived such that the quantity of interest evaluated from the dual solution theoretically leads to an identical value. Starting from the continuous equations, Giles and Pierce proved that there is a limited set of objective functions for which the standard formulation of the dual problem is well-posed under certain restrictions on the boundary conditions. Though the fully discrete adjoint method is more commonly used for its straightforward generation of the adjoint code, the continuous derivation of a compatible dual problem still provides clear insight into the governing equations and helps us understand this scheme more profoundly.
\begin{myDef}\textbf{Compatibility}:
For simplicity, we assume the primal operators we want to discuss have already been linearized, and we use a similar new notation to distinguish from the discussion above.
Then the primal problem is
\begin{equation}
\begin{aligned}\displaystyle
&\text{Solve the Quantity of Interest:~}& J&=(\overline{j_{\Omega}},u)_{\Omega}+(\overline{j_{\Gamma}},\mathcal{C}u)_{\Gamma},
\\ &\text{Given the p.d.e.~}&\mathcal{A}&u=l \qquad\text{in}~\Omega,
\\ &\text{With the Boundary Condition~}&\mathcal{B}&u=g\qquad \text{on}~\Gamma.
\end{aligned}
\end{equation}
and the adjoint problem is transformed into:
\begin{equation}
\begin{aligned}
&\text{Solve the Quantity of Interest:~}& J&=(z,l)_{\Omega}+(\mathcal{C}^*z,g)_{\Gamma},
\\ &\text{Given the p.d.e.~}&\mathcal{A}^*&z=\overline{j_{\Omega}} \qquad\text{in}~\Omega,
\\ &\text{With the Boundary Condition~}&\mathcal{B}^*&z=\overline{j_{\Gamma}}\qquad \text{on}~\Gamma.
\end{aligned}
\end{equation}
To validate the equivalence of the two problems, we need to show that the quantity of interest is the same for both:
\begin{equation}
\label{compatible_theory}
\begin{aligned}
&J=(\overline{j_{\Omega}},u)_{\Omega}+(\overline{j_{\Gamma}},\mathcal{C}u)_{\Gamma}=(\mathcal{A}^*z,u)_{\Omega}+(\mathcal{B}^*z,\mathcal{C}u)_{\Gamma},\\
&J=(z,l)_{\Omega}+(\mathcal{C}^*z,g)_{\Gamma}=(z,\mathcal{A}u)_{\Omega}+(\mathcal{C}^*z,\mathcal{B}u)_{\Gamma}.
\end{aligned}
\end{equation}
\end{myDef}
Since $\mathcal A^*$ is obtained via integration by parts, it is derived from the equation
\begin{equation}
(z,\mathcal{A}u)_{\Omega}=(\mathcal{A}^*z,u)_{\Omega}+(\mathcal{A}_1z,\mathcal{A}_2u)_{\Gamma}.
\end{equation}
In order to show that the right-hand sides of both equations in \eqref{compatible_theory} are equal, we need to determine whether, given the operators $\mathcal A,\mathcal B,\mathcal C$, there exist $\mathcal B^*$ and $\mathcal C^*$ such that
\begin{equation}
(\mathcal{A}_1z,\mathcal{A}_2u)_{\Gamma}=(\mathcal{B}^*z,\mathcal{C}u)_{\Gamma}-(\mathcal{C}^*z,\mathcal{B}u)_{\Gamma}.
\end{equation}
Such operators $\mathcal{B}^*$ and $\mathcal{C}^*$ do exist and are unique under some restrictions. For example, the non-singular case can be discussed to establish their existence. In this case, we use $\mathbf{A},\mathbf{B},\mathbf{C}$ to denote the matrix forms of the operators $\mathcal{A},\mathcal{B},\mathcal{C}$, and $\mathbf{q}_1,\mathbf{q}_2$ to denote the vectors containing all the variables of $u$ and $z$, with the assumption that
\begin{equation}
(\mathcal{A}_1z,\mathcal{A}_2u)_{\Gamma}\equiv \mathbf q_2^T\mathbf M\mathbf q_1.
\end{equation}
If $\bf M$ is non-singular here, then our task is to find matrices $\bf B^*$ and $\bf C^*$ at each point of the boundary such that
\begin{equation}
\bf M=(B^*)\it^T\bf C-(C^*)\it^T\bf B.
\end{equation}
We use augmented matrices $\bf T$ and $\bf T^*$ to denote
\begin{equation}
\label{matrixbc}
\displaystyle \bf T=\left(\frac{B}{C}\right),\qquad T^*=\left(\frac{-C^*}{B^*}\right).
\end{equation}
Then the equation can be rewritten as
\begin{equation}\label{matrixTT}
\bf M=(T^*)\rm^T\bf T.
\end{equation}
From the equations \eqref{matrixbc} and \eqref{matrixTT}, we can obtain $\bf B^*$ and $\bf C^*$. In fact, it can be proven that the well-posedness of the dual problem is equivalent to the well-posedness of the primal problem.
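As a concrete illustration of this construction in the non-singular case, the following sketch (with randomly generated, purely hypothetical $\mathbf B$, $\mathbf C$ and $\mathbf M$) recovers $\mathbf B^*$ and $\mathbf C^*$ from \eqref{matrixbc} and \eqref{matrixTT}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 4
B = rng.standard_normal((2, n))   # hypothetical boundary operator rows
C = rng.standard_normal((2, n))   # hypothetical QoI operator rows
M = rng.standard_normal((n, n))   # boundary-term matrix

T = np.vstack([B, C])                # T = (B over C), assumed non-singular
T_star = np.linalg.solve(T.T, M.T)   # T* = T^{-T} M^T from M = (T*)^T T
C_star = -T_star[:2]                 # top block of T* is -C*
B_star = T_star[2:]                  # bottom block of T* is B*

print(np.allclose(B_star.T @ C - C_star.T @ B, M))   # True
\end{verbatim}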
However, we have noticed that $\bf M$ is mainly determined by the specific boundary condition of the problem. For certain types of boundary conditions, such as solid wall conditions with the vanishing normal velocity $\bf n\cdot \it (u_x,u_y)=0$ for the Euler equations, $\bf M$ may become a singular matrix.
In this case, we need to make a variable transformation to use the reduced variables.
In order to analyze the adjoint boundary operator of the Euler equations \eqref{primal}, further modifications shall be introduced to handle this nonlinear case. Firstly, we require the linearized form of the equation; for the theoretical analysis we consider the Fr\'{e}chet derivative instead of the Taylor-expansion linearization above. This leads to the equation:
\begin{equation}
\label{eq:primal_derivative}
\displaystyle (\nabla\cdot\mathcal F)'[\mathbf u](\tilde{\mathbf u})=\frac{\partial}{\partial x}\left( f_1'[\mathbf u]\,\tilde{\mathbf u}\right)+\frac{\partial}{\partial y}\left( f_2'[\mathbf u]\,\tilde{\mathbf u}\right)=0.
\end{equation}
Returning to the inner product form of the Euler equations and using integration by parts, we get
\begin{equation}
\label{derive_by_integration}
\left( z,\frac{\partial}{\partial x}( f_1'[u]\, u)+\frac{\partial}{\partial y}( f_2'[u]\, u)\right)_{\Omega}=( z,M u)_{\Gamma}+\left(-{f_1'[u]}^T\frac{\partial z}{\partial x}-{f_2'[u]}^T\frac{\partial z}{\partial y},\, u\right)_{\Omega}.
\end{equation}
Then $M$ is defined as $n_x\cdot f_1'[u]+n_y\cdot f_2'[u]$.
Here $f_1'[u]$ and $f_2'[u]$ are the Jacobian matrices, which are calculated to be
\begin{equation}
\begin{aligned}
f_1'[u]=\begin{bmatrix}
0&1&0&0 \\ \frac{\gamma-3}{2}u_x^2+\frac{\gamma-1}{2}u_y^2 &(3-\gamma)u_x &(1-\gamma)u_y &\gamma-1
\\ -u_xu_y & u_y &u_x &0
\\u_x(\frac{\gamma-1}{2}(u_x^2+u_y^2)-H)&H-(\gamma-1)u_x^2&(1-\gamma)u_xu_y&\gamma u_x
\end{bmatrix},\\
f_2'[u]=\begin{bmatrix}
0&0&1&0
\\ -u_xu_y & u_y &u_x &0
\\ \frac{\gamma-3}{2}u_y^2+\frac{\gamma-1}{2}u_x^2 &(1-\gamma)u_x &(3-\gamma)u_y &\gamma-1
\\u_y(\frac{\gamma-1}{2}(u_y^2+u_x^2)-H)&(1-\gamma)u_xu_y&H-(\gamma-1)u_y^2&\gamma u_y
\end{bmatrix},
\end{aligned}
\end{equation}
where $H$ satisfies
\[\frac{\gamma E}{\rho}=H+\frac{\gamma-1}{2}(u_x^2+u_y^2).\]
We use $\widetilde{u_n}=u_xn_x+u_yn_y$ to denote the normal velocity. Then
\begin{equation}
\bf M=\it
\begin{bmatrix}
0 & n_x&n_y&0\\
\frac{\gamma-1}{2}n_x(u_x^2+u_y^2)-u_x\widetilde{u_n}&(2-\gamma)u_xn_x+\widetilde{u_n}&(1-\gamma)u_yn_x+u_xn_y&(\gamma-1)n_x\\
\frac{\gamma-1}{2}n_y(u_x^2+u_y^2)-u_y\widetilde{u_n}&u_yn_x+(1-\gamma)u_xn_y&(2-\gamma)u_yn_y+\widetilde{u_n}&(\gamma-1)n_y\\
\frac{\gamma-1}{2}(u_x^2+u_y^2)\widetilde{{u_n}}-\widetilde{u_n}H&Hn_x+(1-\gamma)u_x\widetilde{u_n}&Hn_y+(1-\gamma)u_y\widetilde{u_n}&\gamma\widetilde{u_n}
\end{bmatrix},
\end{equation}
with eigenvalues
\begin{equation}
\lambda_1=\widetilde{u_n}+c,~~~\lambda_2=\widetilde{u_n},~~~\lambda_3=\widetilde{u_n},~~~\lambda_4=\widetilde{u_n}-c,
\end{equation}
and corresponding eigenvectors
\begin{equation}
\displaystyle
\alpha_1=\begin{bmatrix}
1\\u_x+c n_x\\u_y+c n_y\\H+\widetilde{u_n}c
\end{bmatrix},
~~\alpha_2=\begin{bmatrix}
1\\u_x\\u_y\\\displaystyle\frac{u_x^2+u_y^2}{2}
\end{bmatrix},
~~\alpha_3=\begin{bmatrix}
1\\\widetilde{u_n}n_x\\\widetilde{u_n}n_y\\\displaystyle\frac{u_x^2+u_y^2}{2}-\widetilde{u_n}^2
\end{bmatrix},
~~\alpha_4=\begin{bmatrix}
1\\u_x-cn_x\\u_y-cn_y\\H-\widetilde{u_n}c
\end{bmatrix}.
\end{equation}
Here $c$ is the speed of sound which is given by
\begin{equation}
c=\sqrt{\displaystyle\frac{\gamma p}{\rho}}.
\end{equation}
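As a sanity check of these formulas, the following sketch (illustrative Python, not part of the solver) assembles $f_1'[u]$ and $f_2'[u]$ and verifies that $\mathbf{M}=n_xf_1'[u]+n_yf_2'[u]$ has the eigenvalues $\widetilde{u_n}\pm c$ and $\widetilde{u_n}$ (twice) for a unit normal.
\begin{verbatim}
import numpy as np

GAMMA = 1.4

def flux_jacobians(rho, ux, uy, p):
    # Jacobians f1'[u], f2'[u] in conservative variables
    g = GAMMA - 1.0
    q2 = ux**2 + uy**2
    E = p / g + 0.5 * rho * q2
    H = GAMMA * E / rho - 0.5 * g * q2       # total enthalpy
    A1 = np.array([
        [0.0, 1.0, 0.0, 0.0],
        [0.5*(GAMMA-3.0)*ux**2 + 0.5*g*uy**2, (3.0-GAMMA)*ux, -g*uy, g],
        [-ux*uy, uy, ux, 0.0],
        [ux*(0.5*g*q2 - H), H - g*ux**2, -g*ux*uy, GAMMA*ux]])
    A2 = np.array([
        [0.0, 0.0, 1.0, 0.0],
        [-ux*uy, uy, ux, 0.0],
        [0.5*(GAMMA-3.0)*uy**2 + 0.5*g*ux**2, -g*ux, (3.0-GAMMA)*uy, g],
        [uy*(0.5*g*q2 - H), -g*ux*uy, H - g*uy**2, GAMMA*uy]])
    return A1, A2

rho, ux, uy, p = 1.0, 0.3, 0.1, 1.0
nx, ny = 0.6, 0.8                            # unit normal
A1, A2 = flux_jacobians(rho, ux, uy, p)
M = nx * A1 + ny * A2
c = np.sqrt(GAMMA * p / rho)
un = ux * nx + uy * ny
print(np.sort(np.linalg.eigvals(M).real))
print(sorted([un - c, un, un, un + c]))
\end{verbatim}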
Furthermore, the vanishing normal velocity boundary condition of the Euler equations is considered in this case, i.e.,
\begin{equation}\label{vanish_normal_velocity}
\mathcal B(\mathbf u)=n_xu_x+n_yu_y=0,\qquad\text{on} ~\Gamma.
\end{equation}
In the research area of airfoil shape optimal design, the lift and drag are two important quantities, so we consider the target functional
\begin{equation}\label{lift_and_drag}
\mathcal J(\mathbf u)=\int_{\Gamma}j_{\Gamma}(C\mathbf u)\,ds= \int_{\Gamma}p_{\Gamma}(\mathbf u)\,\mathbf n\cdot \beta\,ds,
\end{equation}
where $\beta$ in the above formula is given as
\begin{equation}\beta=\left\{
\begin{aligned}
&(\cos\alpha,\sin\alpha)^T/C_{\infty},\quad &\text{for drag calculation}, \\
&(-\sin\alpha,\cos\alpha)^T/C_{\infty},\quad &\text{for lift calculation}.
\end{aligned}
\right.
\end{equation}
Here $C_{\infty}$ is defined as $\frac{1}{2}\gamma p_{\infty}Ma_{\infty}^2l$, where $p_{\infty},Ma_{\infty},l$ denote the far-field pressure, the far-field Mach number and the chord length of the airfoil, respectively. In the following, we demonstrate that the quantity of interest is compatible with the boundary condition \eqref{vanish_normal_velocity} and the governing equation \eqref{primal}. Based on these analyses, the adjoint operators of the adjoint equation are derived.
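As a small illustration, the functional \eqref{lift_and_drag} can be approximated by a discrete surface sum over wall faces; the sketch below is a simplified stand-in for the solver's boundary quadrature, and all argument names are ours.
\begin{verbatim}
import numpy as np

GAMMA = 1.4

def force_coefficient(p_wall, normals, ds, aoa, p_inf, Ma_inf, chord,
                      lift=True):
    # J(u) ~ sum over wall faces of p * (n . beta) * ds, with beta
    # as in the text and C_inf = 0.5 * gamma * p_inf * Ma_inf^2 * l
    C_inf = 0.5 * GAMMA * p_inf * Ma_inf**2 * chord
    if lift:
        beta = np.array([-np.sin(aoa), np.cos(aoa)]) / C_inf
    else:
        beta = np.array([np.cos(aoa), np.sin(aoa)]) / C_inf
    return float(np.sum(p_wall * (normals @ beta) * ds))

# closed surface with constant pressure -> zero net force
normals = np.array([[0., -1.], [0., 1.], [-1., 0.], [1., 0.]])
print(force_coefficient(np.full(4, 1.0), normals, np.full(4, 0.25),
                        aoa=0.0, p_inf=1.0, Ma_inf=0.8, chord=1.0))
\end{verbatim}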
Since the normal velocity is 0, the matrix $\mathbf{M}$ becomes a singular matrix of rank 2. To fit the general theory, we use reduced variables for further analysis. Let $R=(\alpha_1,\alpha_2,\alpha_3,\alpha_4)$. With this notation, $\mathbf{M}$ can be written as $\mathbf M=R \Lambda R^{-1}$, where $\Lambda =\mathrm{diag}(\lambda_1,\lambda_2,\lambda_3,\lambda_4)$. Then we can define a variable transformation $u_{trans}, z_{trans}$ such that $(z,\mathbf{M}u)_{\Gamma}=(z_{trans},\Lambda u_{trans})_{\Gamma}$, where
\begin{equation}
u_{trans}= R^{-1}\mathbf u=\begin{bmatrix}
\displaystyle\frac{1}{2\gamma }\rho \\\displaystyle\frac{\gamma-1}{\gamma}\rho\\0\\\displaystyle\frac{1}{2\gamma}\rho \end{bmatrix}.
\end{equation}
So if we consider the reduced vectors
\begin{equation}
u_{reduced}=\begin{bmatrix}
u_{trans,1}\\u_{trans,4}
\end{bmatrix},
\qquad z_{reduced}=\begin{bmatrix}
z_{trans,1}\\z_{trans,4}
\end{bmatrix},
\end{equation}
then the reduced diagonal matrix is
\begin{equation}
\Lambda_{reduced}=\begin{bmatrix}
&c& &0& \\
&0& &-c&
\end{bmatrix}.
\end{equation}
Due to the nonlinearity of the Euler equations, the following Fr\'{e}chet derivatives need to be considered in the discussion of the compatibility,
\begin{equation}
\displaystyle\mathcal B'[u]=(0,\frac{n_x}{\rho},\frac{n_y}{\rho},0),\qquad \mathcal C'[u]=p'[u]=(\gamma-1)\cdot(\frac{u_x^2+u_y^2}{2},-u_x,-u_y,1).
\end{equation}
Applied to $\mathbf u$ itself, these give \begin{equation}
\mathcal{B}'[u](\mathbf u)= n_xu_x+n_yu_y=\widetilde{u_n},\qquad \mathcal{C}'[u](\mathbf u)= p.
\end{equation}
In terms of the transformed variables, we get
\begin{equation}
\begin{aligned}
&\mathcal{B}'[u](\mathbf u)=\frac{c}{2\rho}\, u_{trans,1}-\frac{c}{2\rho}\, u_{trans,4},\\
&\mathcal{C}'[u](\mathbf u)= c^2 u_{trans,1}+ c^{2} u_{trans,4}.
\end{aligned}
\end{equation}
So from the above theory, we can calculate that
\begin{equation}
\begin{aligned}
\begin{bmatrix}
&c& &0& \\ &0& &-c&
\end{bmatrix}
\end{aligned}=\begin{bmatrix}
&-{(\mathcal C'[u])^*}^T& &{(\mathcal B'[u])^*}^T&
\end{bmatrix}
\cdot\displaystyle\begin{bmatrix}
&\displaystyle\frac{c}{2\rho}& &\displaystyle -\frac{c}{2\rho}& \\
&c^2& &c^2&
\end{bmatrix}.
\end{equation}
Then
\begin{equation}
\begin{aligned}
&(\mathcal B'[u])^*(z_{trans})=\frac{1}{2c}\, z_{trans,1}-\frac{1}{2c}\, z_{trans,4},\\ &(\mathcal C'[u])^*(z_{trans})=-\rho\, z_{trans,1}-\rho\, z_{trans,4}.
\end{aligned}
\end{equation}
Then we map back to the original variables by the relation $\mathbf z_{trans}=R^T\mathbf z$; finally we get the \textit{continuous dual equations}:
\begin{equation}
\label{continuous_adjoint}
-{f_1'[u]}^T\frac{\partial z}{\partial x}-{f_2'[u]}^T\frac{\partial z}{\partial y}=0,
\end{equation}
where the governing equations are obtained from the integration by parts \eqref{derive_by_integration} together with the boundary conditions,
\begin{equation}
\label{continuous_boundary}
(\mathcal{B}'[u])^*z\equiv n_xz_2+n_yz_3=j_{\Gamma}'[C\mathbf u]\equiv \mathbf n\cdot\beta.
\end{equation}
Then the quantity of interest is derived as the surface integration of
\begin{equation}
-2\rho\widetilde{u_n}(z_1+u_xz_2+u_yz_3+Hz_4).
\end{equation}
Although the adjoint equation has been derived, directly solving it remains challenging due to the lack of a non-trivial boundary equation to constrain the mass matrix. Additionally, since the zero normal velocity condition contains only two characteristics for both the primal and the dual equations, the rank of the mass matrix is reduced to $2$ in this circumstance. As a result, the mass matrix for the boundary part may lack regularity, making iterative methods for the linear system unstable.
To address this issue, we can derive the discrete equations in a manner similar to \eqref{EulerDiscrete}. Herewith,
\begin{equation}
\label{newEulerDiscrete}
-\sum\limits_{i}\sum\limits_{j}\oint_{e_{i,j}\in\partial K_i}F^*(z)\cdot n_{i,j}ds=0,
\end{equation}
where $F^{*}(z)=(f_1'[u]^Tz,f_2'[u]^Tz)$. This equation can also be solved by the regularization method we discussed for the primal Euler equations, but subject to an inverse time direction.
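As a rough illustration of such a regularized solve, the sketch below marches a transposed linear system in pseudo-time; the matrix $J^T$ and right-hand side $g$ are illustrative stand-ins for the transposed Jacobian and the functional derivative, not the actual Newton-GMG implementation.
\begin{verbatim}
import numpy as np

# A sketch of a pseudo-time regularized solve for the dual system,
# mirroring the primal regularization but with the transposed Jacobian.
# J_T = (dR/du)^T and g = (dJ/du)^T are illustrative placeholders.
def solve_dual_regularized(J_T, g, dtau=1.0, n_steps=50):
    n = J_T.shape[0]
    z = np.zeros(n)
    for _ in range(n_steps):
        # Regularized update: (I/dtau + J^T) dz = -(J^T z - g)
        lhs = np.eye(n) / dtau + J_T
        dz = np.linalg.solve(lhs, -(J_T @ z - g))
        z += dz
        dtau *= 1.5   # enlarge the pseudo-time step as the residual decays
    return z
\end{verbatim}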
To overcome the challenge posed by the strong boundary condition, Giles proposed a method in \cite{giles2003algorithm} that factorizes the operator into an interior part and a boundary part. The dual solutions are then obtained by solving the zero normal part and subsequently combining it with the strong boundary. We denote by $\mathcal{I}^*$ the operator that acts as the identity on the interior part $\Omega$ of the dual space and as the boundary modification operator $\mathcal{B}^*$ on the boundary part $\Gamma$, i.e.,
\begin{equation}
\displaystyle
\left\{
\begin{aligned}
&\mathcal{I}^*\equiv \mathcal{I}^*\quad&\text{in}&~\Omega,\\
&\mathcal{I}^*\equiv \mathcal{B}^*\quad&\text{on}&~\Gamma,
\end{aligned}
\right.\qquad
\left\{
\begin{aligned}
&\mathcal{B}^*\equiv 0\quad&\text{in}&~\Omega,\\
&\mathcal{B}^*\equiv \mathcal{B}^*\quad&\text{on}&~\Gamma.
\end{aligned}
\right.
\end{equation}
Then the solutions are decomposed into two parts,
\begin{equation}
z=(\mathcal{I}^*-\mathcal{B}^*)z+\mathcal{B}^*z=z_{\Omega}+z_{\Gamma}.
\end{equation}
We first solve the equations
\begin{equation}
\displaystyle
\left\{
\begin{aligned}
&\mathcal{A}^* z_{\Omega}=0,\\
&\mathcal{B}^*z_{\Omega}=0.
\end{aligned}
\right.
\end{equation}
Then $z_{\Gamma}$ is updated by
\begin{equation}
z_{\Gamma}=(\mathbf{n}\cdot\beta)-\mathcal{B}^*\mathcal{A}^*z_{\Omega}.
\end{equation}
The main advantage of the dual equations is that the dual solutions provide information to update the mesh, rather than to recompute the quantity of interest. Even though the dual equations yield a theoretically identical quantity of interest by the compatibility property, the linearization may sometimes pollute the adjoint equations, so the result may not be robust. This is another important issue that motivates us to discuss the dual consistency theory.
\subsection{Fully Discrete Adjoint}
As we discussed in \cite{hu2016adjoint}, the framework proposed in \cite{venditti2000adjoint} developed a fully discrete adjoint weighted residual method which is well applied in the finite volume scheme. The method was later applied to two-dimensional inviscid flows in \cite{venditti2002grid}. To apply this method in our framework, we provide a brief review. The original motivation for this method was to accurately estimate the numerical integral of $J(\mathbf{u})$ on a fine mesh, $J_h(\mathbf{u}_h)$, without computing the solution on the fine mesh. A multiple-variable Taylor series expansion is used to achieve this goal,
\begin{equation}
\label{quantity_vector}
J_h(\mathbf {u}_h) =J_h(\mathbf{u}_h^H)+\left.\frac{\partial J_h}{\partial \mathbf{u}_h}\right|_{\mathbf{u}_h^H}(\mathbf{u}_h-\mathbf{u}_h^H)+...,
\end{equation}
where $\mathbf{u}_h^H$ represents the coarse solution $\mathbf{u}_H$ mapped onto the fine space $\mathcal V_h$ via some prolongation operator $I_h^H$,
\begin{equation}
\mathbf{u}_h^H= I_h^H\mathbf{u}_H.
\end{equation}
And we use the vector form $\mathcal R_h(\cdot)$ to denote the residual of the primal problem in the space $\mathcal V_h$. Apparently, we have
\begin{equation}
\mathcal{R}_h(\mathbf{u}_h)=0.
\end{equation}
Similarly, when we linearize this equation we have
\begin{equation}
\label{residual_vector}
\mathcal{R}_h(\mathbf{u}_h) =\mathcal{R}_h(\mathbf{u}_h^H)+\left.\frac{\partial \mathcal{R}_h}{\partial \mathbf{u}_h}\right|_{\mathbf{u}_h^H}(\mathbf{u}_h-\mathbf{u}_h^H)+...
\end{equation}
As $\left.\frac{\partial \mathcal{R}_h}{\partial \mathbf{u}_h}\right|_{\mathbf{u}_h^H}$ is the Jacobian matrix, symbolically we can invert it to obtain an approximation of the error vector,
\begin{equation}
\label{vector_error}
\mathbf{u}_h-\mathbf{u}_h^H\thickapprox -\left(\left.\frac{\partial \mathcal{R}_h}{\partial\mathbf{u}_h}\right|_{\mathbf{u}_h^H}\right)^{-1}\mathcal{R}_h(\mathbf{u}_h^H).
\end{equation}
Then, after substituting the error vector \eqref{vector_error} into the expansion \eqref{quantity_vector}, we can get
\begin{equation}
\label{quantity_error}
J_h(\mathbf{u}_h) =J_h(\mathbf{u}_h^H)-(\mathbf{z}_h|_{\mathbf{u}_h^H})^T\mathcal{R}_h(\mathbf{u}_h^H),
\end{equation}
where $\mathbf{z}_h|_{\mathbf{u}_h^H}$ is obtained from the \textit{fully discrete dual equations}:
\begin{equation}\label{discrete_vector}
\left(\left.\frac{\partial \mathcal R_h}{\partial \mathbf{u}_h}\right|_{\mathbf{u}_h^H}\right)^T\mathbf{z}_h|_{\mathbf{u}_h^H}=\left(\left.\frac{\partial \mathcal J_h}{\partial \mathbf{u}_h}\right|_{\mathbf{u}_h^H}\right)^T.
\end{equation}
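A compact sketch of this computation, with dense matrices as illustrative stand-ins for the Jacobian and the functional derivative, reads:
\begin{verbatim}
import numpy as np

# A sketch of the fully discrete adjoint solve and the adjoint-weighted
# correction of the functional; dRdu, dJdu and R are illustrative
# placeholders evaluated at the prolongated coarse solution u_h^H.
def adjoint_weighted_estimate(dRdu, dJdu, R, J_coarse):
    # Solve the transposed system (dR/du)^T z = (dJ/du)^T.
    z = np.linalg.solve(dRdu.T, dJdu)
    # Corrected functional: J_h(u_h) ~ J_h(u_h^H) - z^T R_h(u_h^H).
    return J_coarse - z @ R, z
\end{verbatim}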
The equation \eqref{quantity_error} can be used to update the quantity of interest. The second term, also known as the remaining error, can serve as a local error indicator to guide the mesh adaptation. For a higher-order enhancement, an error correction method has been proposed in a general framework in \cite{pierce2000adjoint}. To apply the adjoint correction method to this fully discrete adjoint weighted residual method, the quantity error is split into two terms,
\begin{equation}
\label{adjoint_correction_primal}
J_h(\mathbf{u}_h)-J_h(\mathbf{u}_h^H)\thickapprox (L_h^H\mathbf{z}_H)^T\mathcal{R}_h(\mathbf{u}_h^H)+(\mathbf{z}_h|_{\mathbf{u}_h^H}-L_h^H\mathbf{z}_H)^T\mathcal{R}_h(\mathbf{u}_h^H),
\end{equation}
where $L_h^H$ is the prolongation operator that projects the adjoint solution from $\mathcal V_H$ into $\mathcal V_h$.
To denote the residual of the adjoint equation, the operator $\mathcal{R}_h^{\mathbf{z}}$ has been introduced as
\begin{equation}
\label{Rpsi}
\mathcal{R}_h^{\mathbf{z}}(\mathbf{z} )\equiv \left(\left.\frac{\partial \mathcal R_h}{\partial \mathbf{u}_h}\right|_{\mathbf{u}_h^H}\right)^T\mathbf{z}-\left(\left.\frac{\partial \mathcal J_h}{\partial \mathbf{u}_h}\right|_{\mathbf{u}_h^H}\right)^T.
\end{equation}
From the equation $\mathcal{R}_h^{\mathbf{z}}(\left.\mathbf{z}_h\right|_{\mathbf{u}_h^H})=0$, we can get
\begin{equation}
\mathcal{R}_h^{\mathbf{z}}(L_h^H\mathbf{z}_H)=\left(\left.\frac{\partial \mathcal R_h}{\partial \mathbf{u_h}}\right|_{\mathbf{ u}_h^H}\right)^T(L_h^H\mathbf{z}_H-\mathbf{z}_h|_{\mathbf{ u}_h^H}).
\end{equation}
With this equation substituted into \eqref{adjoint_correction_primal}, we can get
\begin{equation}
J_h(\mathbf{u}_h) -J_h(\mathbf{u}_h^H)\thickapprox (L_h^H\mathbf{z}_H)^T\mathcal{R}_h(\mathbf{u}_h^H)-\left\{ \mathcal{R}_h^{\mathbf{z}}(\left.\mathbf{z}_h\right|_{\mathbf{u}_h^H})\right\}^T\left(\left.\frac{\partial \mathcal R_h}{\partial \mathbf{u}_h}\right|_{\mathbf{u}_h^H}\right)^{-1}\mathcal{R}_h(\mathbf{u}_h^H).
\end{equation}
And this is equivalent to
\begin{equation}
\label{adjoint_correction_adjoint}
J_h(\mathbf{u}_h) -J_h(\mathbf{u}_h^H)\thickapprox(L_h^H\mathbf{z}_H)^T\mathcal{R}_h(\mathbf{u}_h^H)+\left\{ \mathcal{R}_h^{\mathbf{z}}(\left.\mathbf{z}_h\right|_{\mathbf{u}_h^H})\right\}^T(\mathbf{ u}_h-\mathbf{u}_h^H).
\end{equation}
The correction term in the equation \eqref{adjoint_correction_primal} incorporates the residual of the primal problem, while the correction term in the equation \eqref{adjoint_correction_adjoint} incorporates the residual of the dual problem. To combine these two equations, we can consider the duality gap between them, which is defined as follows:
\begin{equation}
\label{duality_gap}
D\equiv (\mathbf{z}_h|_{\mathbf{u}_h^H}-L_h^H\mathbf{z}_H)^T\mathcal{R}_h(\mathbf{u}_h^H)-\left\{ \mathcal{R}_h^{\mathbf{z}}(\left.\mathbf{z}_h\right|_{\mathbf{u}_h^H})\right\}^T(\mathbf{u}_h-\mathbf{u}_h^H).
\end{equation}
In \cite{venditti2000adjoint}, the duality gap \eqref{duality_gap} is shown to be equal to $\displaystyle (\mathbf{z}_h|_{\mathbf{u}_h^H}-L_h^H\mathbf{z}_H)^T\cdot\frac{1}{2}(\mathbf{u}_h-\mathbf{u}_h^H)^T\left.\frac{\partial ^{2} \mathcal{R}_h}{\partial \mathbf{u}_h^2}\right|_{\mathbf{u}_h^H}(\mathbf{u}_h-\mathbf{u}^H_h)$.
While the error correction method introduces additional calculations of the residual of the dual solutions, for the purpose of validating the dual consistency theory in this work we only consider the first-order version of the algorithm.
\subsection{Dual Consistency}
In the first part of Section 3, we reviewed the theory for deriving a well-posed continuous dual equation. Dual consistency, or adjoint consistency, is closely related to the smoothness of the discrete dual solutions. If a discretization is implemented under a dual-consistent scheme, the discrete dual solutions should approximate the continuous dual solutions as the refinement level increases. Conversely, a dual-inconsistent scheme may generate dual solutions with unexpected oscillations or exhibit non-smoothness. Hence, in this part we discuss the discretization method and validate whether the dual consistency property is preserved under this scheme.
As discussed above, the finite volume method is adopted in this work together with the Petrov-Galerkin method. Therefore, we need to perform elaborate analyses to verify dual consistency under the Petrov-Galerkin form of the finite volume scheme. In \cite{hartmann2007adjoint}, Ralf Hartmann developed analyses of adjoint consistency under the discontinuous Galerkin scheme. Motivated by that work, we make further analyses of dual consistency here.
In \eqref{discrete_operator}, the discretized primal equations are defined as:
Find $\mathbf{u}_0\in\mathcal{V}_0^{B_h}$, s.t.
\begin{equation}\label{discretized_primal}
\mathcal{A}_h(R_p^0\mathbf{u}_0,v_0)=0,\qquad \forall v_0\in\mathcal{V}_0^{B_h}.
\end{equation}
Similarly, the continuous version of the primal equations can be defined as:
Find $\mathbf{u}\in\mathcal{V}$, s.t.
\begin{equation}
\mathcal{A}(\mathbf{u},v)=0,\qquad\forall v\in\mathcal{V}.
\end{equation}
Then the continuous exact dual solutions are derived like \eqref{Rpsi}:
Find $z\in\mathcal{V}$, s.t.
\begin{equation}
\mathcal{A}'[\mathbf{u}](w,z)+\mathcal{J}'[\mathbf{u}](w)=0,\qquad \forall w\in\mathcal{V}.
\end{equation}
Primal consistency holds when the exact solution $\mathbf{u}$ satisfies the discretized operator:
\begin{equation}
\mathcal{A}_h(\mathbf{u},v_0)=0,\qquad \forall v_0\in\mathcal{V}_0^{B_h}.
\end{equation}
The quantity of interest is defined as \textit{dual consistent} with the governing equations if the discretized operators satisfy:
\begin{equation}\label{DUALConsist}
\mathcal{A}_h'[\mathbf{u}](w,z)+\mathcal{J}_h'[\mathbf{u}](w)=0,\qquad \forall w \in\mathcal{V}_p^{B_h}.
\end{equation}
As in \eqref{discrete_operator}, we defined the discretized operator. By using the Fr\'{e}chet derivatives, the linearized form is
\begin{equation}\small
\begin{aligned}
& \mathcal{A}'_h[R_p^0\mathbf{u}_0](w,v)=\sum\limits_{K\in\mathcal{K}_h}\left\{\int_{\partial K\backslash \Gamma}\mathcal H'[R_p^0\mathbf{u}_0^+](R_p^0\mathbf{u}_0^+,R_p^0\mathbf{u}_0^-,n_K)w^+v_0^+ds+\int_{\partial K\backslash\Gamma}{\mathcal H'}[R_p^0\mathbf{u}_0^-](R_p^0\mathbf{u}_0^+,R_p^0\mathbf{u}_0^-,n_K)w^-v_0^+ds\right\}
\\&+\sum\limits_{K\in\mathcal{K}_h}\left\{\int_{\partial K\cap \Gamma}\tilde{\mathcal H}'[R_p^0\mathbf{u}_0^+](R_p^0\mathbf{u}_0^+,\Gamma(R_p^0\mathbf{u}_0^+),n_K)w^+v_0^+ds+\int_{\partial K\cap\Gamma}\tilde{\mathcal H}'[\Gamma(\cdot)](R_p^0\mathbf{u}_0^+,\Gamma(R_p^0\mathbf{u}_0^+),n_K)\Gamma'[R_p^0\mathbf{u}_0^+]w^+v_0^+ds\right\}.
\end{aligned}
\end{equation}
The quantity of interest \eqref{lift_and_drag} is similarly linearized as
\begin{equation}
\mathcal{J}'[R_p^0\mathbf{u}_0](w)=\sum\limits_{K\in\mathcal{K}_h}\int_{\partial K\cap\Gamma}j'_{\Gamma}[R_p^0\mathbf{u}_0](w)ds=\sum\limits_{K\in\mathcal{K}_h}\int_{\partial K\cap\Gamma}p'_{\Gamma}[R_p^0\mathbf{u}_0](w)n\cdot\beta ds.
\end{equation}
Following the numerical dual solution process in \eqref{discrete_vector}, the dual solutions in residual form are denoted as:
Find $\mathbf{z}_0\in\mathcal{V}_0^{B_h}$, $s.t.$ given $\mathbf{u}_0\in\mathcal{V}_0^{B_H}$,
\begin{equation}
\label{dualconsist_equations}
\sum\limits_{K\in\mathcal{K}_h}\int_{\partial K\backslash \Gamma}w^+\cdot r^*[R_p^0(\mathbf{u}_0)^H_h](\mathbf{z}_0)ds+\sum\limits_{K\in\mathcal{K}_h}\int_{\partial K\cap \Gamma}w^+\cdot r_{\Gamma}^*[R_p^0(\mathbf{u}_0)^H_h](\mathbf{z}_0)ds=0,\qquad \forall w\in\mathcal{V}_0^{B_h},
\end{equation}
where
\begin{equation}
\begin{aligned}
&r^*[R_p^0(\mathbf{u}_0)^H_h](\mathbf{z}_0)=-\left(\mathcal H'[R_p^0(\mathbf{u}_0^+)^H_h](R_p^0(\mathbf{u}_0^+)^H_h,R_p^0(\mathbf{u}_0^-)^H_h,n_K^+)\right)^T[\![\mathbf{z}_0]\!]\cdot n_K^+,\qquad &\text{on}~\partial K\backslash\Gamma,
\\ &r_{\Gamma}^*[R_p^0(\mathbf{u}_0)^H_h](\mathbf{z}_0)=j'_{\Gamma}[R_p^0(\mathbf{u}_0^+)^H_h](w)-\left(\tilde{\mathcal H}'[R_p^0(\mathbf{u}_0^+)^H_h](\cdot)+\tilde{\mathcal H}'[\Gamma(\cdot)](\cdot)\Gamma'[R_p^0(\mathbf{u}_0^+)^H_h]\right)^T\mathbf{z}_0^+,\qquad&\text{on}~\partial K\cap\Gamma.
\end{aligned}
\end{equation}
In order to show the adjoint consistency, we need to prove that with the exact primal solutions $\mathbf u$, and the exact dual solutions $\mathbf{z}$,
\begin{equation}
\sum\limits_{K\in\mathcal{K}_h}\int_{\partial K\backslash \Gamma}w^+\cdot r^*[\mathbf{u}](\mathbf{z})ds+\sum\limits_{K\in\mathcal{K}_h}\int_{\partial K\cap \Gamma}w^+\cdot r_{\Gamma}^*[\mathbf{u}](\mathbf{z})ds=0,\qquad \forall w\in\mathcal{V}_0^{B_h}.
\end{equation}
As we are considering the solid wall boundary condition with zero normal velocity, the solutions on the boundary should satisfy $(u_{\Gamma})_xn_x+(u_{\Gamma})_y n_y=0$. Generally, we can apply the boundary value operator below.
\begin{myDef}\textbf{Boundary modification operator for zero normal velocity}:
\begin{equation}
\Gamma(\mathbf{u}):=\begin{bmatrix}
1&0&0&0\\0&1-n_1^2&-n_1n_2&0\\0&-n_1n_2&1-n_2^2&0\\0&0&0&1
\end{bmatrix}\mathbf{u}=\mathbf{u}_{\Gamma},\qquad\text{on}~\Gamma.
\end{equation}
\end{myDef}
In \cite{hartmann2007adjoint}, the authors discussed the case where the numerical flux is defined as follows:
\begin{equation}\label{sufficient_DUAL}
\tilde{\mathcal{H}}(R_p^0(\mathbf{u}_0^+),\Gamma(R_p^0(\mathbf{u}_0^+)),n_K^+)=n_{K}\cdot\mathcal{F}\left(\Gamma(R_p^0\mathbf{u}_0^+)\right)=(0,n_1\tilde{p}_{\Gamma},n_2\tilde{p}_{\Gamma},0)^T,
\end{equation}
where $\tilde{p}_{\Gamma}$ is the pressure of reconstructed solutions on the boundary. In this way, the numerical flux is only related to the interior trace of the boundary. Then the inner part of the quantity of interest can be reformulated as
\begin{equation}
\label{QOI_equality}
\tilde{p}_{\Gamma} n_K\cdot \beta=\tilde{\mathcal{H}}(R_p^0(\mathbf{u}_0^+),\Gamma(R_p^0(\mathbf{u}_0^+)),n_K^+)\cdot\tilde{\beta},
\end{equation}
where $\tilde{\beta}$ is defined as the augmented vector $\tilde{\beta}:=(0,\beta_1,\beta_2,0)^T$. Then the equation $r_{\Gamma}^*[\mathbf{u}](\mathbf{z})=0$ holds on the boundary $\Gamma$ with the exact values $\mathbf{u}$ and $\mathbf{z}$ once the derivative is taken from \eqref{sufficient_DUAL}. And $r^*[\mathbf{u}](\mathbf{z})=0$ holds for the inner boundary due to the smoothness of the dual solutions. However, \eqref{sufficient_DUAL} is only a sufficient condition for dual consistency. To ensure the preservation of the dual consistency property, generalized boundary modification methods are applied, as discussed in \cite{HARTMANN2015754,dolejsi2022}.
If we want to consider the mirror reflection boundary condition, we shall reconsider the boundary operator as
\begin{myDef}\textbf{Boundary modification operator for mirror reflection}:
\begin{equation}
\Gamma_{\mathcal{M}}(\mathbf{u}):=\begin{bmatrix}
1&0&0&0\\0&1-2n_1^2&-2n_1n_2&0\\0&-2n_1n_2&1-2n_2^2&0\\0&0&0&1
\end{bmatrix}\mathbf{u}=\mathbf{u}_{\Gamma_{\mathcal{M}}},\qquad\text{on}~\Gamma.
\end{equation}
\end{myDef}
In this circumstance, the quantity of interest is dual consistent with the discretization if the numerical flux on the boundary is defined as $\tilde{\mathcal{H}}(R_p^0(\mathbf{u}_0^+),\Gamma_{\mathcal{M}}(R_p^0(\mathbf{u}_0^+)),n_K^+)$, with the quantity of interest given by
\begin{equation}\label{QOI_mirror}
\mathcal J(\mathbf u)=\int_{\Gamma}j_{\Gamma_{\mathcal{M}}}(C\mathbf u)\,ds=\int_{\Gamma}p\left(\Gamma_{\mathcal{M}}(\mathbf{u})\right)\mathbf n\cdot\beta\,ds.
\end{equation}
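A quick numerical check of the two boundary operators (a sketch with an arbitrary unit normal and a sample state, not part of the solver) confirms that the first removes the normal momentum while the second reflects it:
\begin{verbatim}
import numpy as np

# Sketch: verify the two boundary modification operators on a sample
# conservative state u = (rho, rho*u, rho*v, E); values are arbitrary.
def wall_projection(n1, n2):     # zero normal velocity: I - n n^T
    return np.array([[1, 0,         0,         0],
                     [0, 1-n1*n1,   -n1*n2,    0],
                     [0, -n1*n2,    1-n2*n2,   0],
                     [0, 0,         0,         1]], dtype=float)

def mirror_reflection(n1, n2):   # mirror reflection: I - 2 n n^T
    return np.array([[1, 0,          0,          0],
                     [0, 1-2*n1*n1,  -2*n1*n2,   0],
                     [0, -2*n1*n2,   1-2*n2*n2,  0],
                     [0, 0,          0,          1]], dtype=float)

n1, n2 = 0.6, 0.8                      # a unit normal
u = np.array([1.0, 0.3, -0.2, 2.5])    # sample state
m = wall_projection(n1, n2) @ u
print(n1*m[1] + n2*m[2])               # ~0: normal momentum removed
r = mirror_reflection(n1, n2) @ u
print(n1*r[1] + n2*r[2], -(n1*u[1] + n2*u[2]))  # equal: reflected
\end{verbatim}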
In \cite{dolejsi2022} and \cite{HARTMANN2015754}, it was shown that certain numerical fluxes, such as Vijayasundaram and Lax-Friedrichs, preserve the dual consistency property. However, other types of nonlinear numerical fluxes may not necessarily preserve the smoothness of dual solutions, due to the linearization process involved. In the present work, the Lax-Friedrichs numerical flux was found to perform best for the scheme. To linearize the numerical flux and the quantity of interest, different methods have been discussed in previous literature \cite{KENWAY2019100542,Li2020NewtonLM}, for instance the analytic method, finite difference approximation, the complex step method, and algorithmic differentiation. Acceleration techniques have also been developed for solving the adjoint problem. As we only consider the two-dimensional model in this work, the finite difference method is employed for its simplicity and low memory requirements, and its behavior met our expectations. The idea can be summarized as an approximation to the analytical derivative, where the Jacobian matrix $\partial\mathcal{R}_i/\partial \mathbf{u}_j$ can be denoted as
\begin{equation}\displaystyle
\frac{\partial\mathcal{R}_i}{\partial \mathbf{u}_j}=\lim_{\epsilon\to 0}\frac{\mathcal{R}_i(\dots,\mathbf{u}_j+\epsilon,\dots)-\mathcal{R}_i(\dots,\mathbf{u}_j,\dots)}{\epsilon}\approx \frac{\mathcal{R}_i(\dots,\mathbf{u}_j+\epsilon,\dots)-\mathcal{R}_i(\dots,\mathbf{u}_j,\dots)}{\epsilon}.
\end{equation}
The choice of $\epsilon$ is crucial in numerical differentiation, as it affects the accuracy and stability of the approximation: the right-hand side above is evaluated with a small but finite $\epsilon$. If $\epsilon$ is too small, rounding errors may dominate and the calculation can become ill-conditioned, leading to a loss of significance. On the other hand, if $\epsilon$ is too large, the truncation error destroys the accuracy of the approximation. In this work, $\epsilon=\epsilon_0\mathbf{u}_j$ is adopted, where $\epsilon_0=10^{-8}$ is set in the simulations. This choice of $\epsilon$ is similar to that used for the derivative of the quantity of interest $\partial\mathcal{J}_i/\partial \mathbf{u}_j$.
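The corresponding column-by-column assembly can be sketched as follows, with the residual function and the state vector as illustrative placeholders; the guard for vanishing components is an assumption of this sketch:
\begin{verbatim}
import numpy as np

# A sketch of the finite difference Jacobian with the scaled step
# eps = eps0 * u_j used in this work; R and u are placeholders.
def fd_jacobian(R, u, eps0=1.0e-8):
    R0 = R(u)
    J = np.zeros((R0.size, u.size))
    for j in range(u.size):
        eps = eps0 * u[j] if u[j] != 0.0 else eps0  # guard: assumption
        up = u.copy()
        up[j] += eps
        J[:, j] = (R(up) - R0) / eps
    return J
\end{verbatim}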
As discussed in \cite{dolejsi2022}, a quantity of interest is dual inconsistent if the integrand is not considered under this dual consistency scheme. As an example, we consider a \textit{dual-inconsistent scheme} below.
In this case, the Fr\'{e}chet directional derivative is given by
\begin{equation}
\mathcal{J}'[R_p^0\mathbf{u}_0](w)=\sum\limits_{K\in\mathcal{K}_h}\int_{\partial K\cap\Gamma}j'_{\Gamma}[R_p^0\mathbf{u}_0](w)\cdot\phi ds,
\end{equation}
where $\displaystyle j'_{\Gamma}[R_p^0\mathbf{u}_0](w)\cdot\phi=\lim_{t\to 0}\frac{1}{t}\left(j_{\Gamma}(R_p^0\mathbf{u}_0+t\phi)-j_{\Gamma}(R_p^0\mathbf{u}_0)\right)$ and $\phi$ is a small directional perturbation, while $\phi_j=\epsilon_0(R_p^0\mathbf{u}_j)$ is used to calculate the derivatives on the boundary.
And $\displaystyle j'_{\Gamma}[R_p^0\mathbf{u}_0](w)=\frac{d\tilde{p}_n}{d\mathbf{u}}^T\tilde{\beta}$, where
\begin{equation}
\frac{d\tilde{p}_n}{d\mathbf{u}}:=\tilde{n}_K\otimes \frac{d\tilde{p}}{d\mathbf{u}}=(\gamma-1)
\begin{bmatrix}
0&0&0&0\\\frac{1}{2}|\tilde{u}_x^2+\tilde{u}_y^2|n_x&-\tilde{u}_xn_x&-\tilde{u}_yn_x&n_x
\\\frac{1}{2}|\tilde{u}_x^2+\tilde{u}_y^2|n_y&-\tilde{u}_xn_y&-\tilde{u}_yn_y&n_y\\0&0&0&0
\end{bmatrix},
\end{equation}
where $\tilde{n}_K:=(0,n_x,n_y,0)^T$ is the augmented vector of $n_K$. Dolej\v{s}\'{i} et al. selected the lift and drag as quantities of interest for simulation. However, the calculation of lift is oscillatory even when an anisotropic mesh is used. Hence, we consider the drag coefficient to validate the algorithm in this work. The results are shown in the numerical results section.
\begin{myRem}
It is worth noting that when the scheme is derived to be dual consistent, $r^*_{\Gamma}[\mathbf{u}](\mathbf{z})=0$, which is equivalent to $p'_{\Gamma}[\mathbf{u}](\cdot)n\cdot\beta=\left(\tilde{\mathcal{H}}[\mathbf{u}](\cdot)\right)^T\mathbf{z}$. By taking a derivative of \eqref{sufficient_DUAL}, we obtain $\tilde{\mathcal{H}}[\mathbf{u}](\cdot)=\tilde{n}_K\tilde{p}_{\Gamma}[\mathbf{u}](\cdot)$. Then $\tilde{n}_K\cdot \mathbf z =n_K\cdot\beta$ is exactly equivalent to the dual boundary condition \eqref{continuous_boundary} in the continuous version.
\end{myRem}
In \cite{giles2003discrete}, the mechanism with shock waves was discussed from the perspective of governing equations.
Different methods to derive dual equations have their own advantages for mesh adaptation and model calculation. Dual consistency is well known to be a significant property that can influence the performance of $h$-adaptivity. It was first analyzed in \cite{lu2005posteriori}, where the results show that a dual-inconsistent scheme may lead to dual solutions with oscillations. Besides, Collis and Heinkenschloss applied the streamline upwind/Petrov-Galerkin method for linear advection-diffusion, with a dual-inconsistent scheme, to an optimal control problem. The discretized continuous dual equations performed better than the discrete dual equations derived from the discrete primal equations.
Though dual consistency is not trivial to preserve theoretically, the discretization may still be asymptotically adjoint consistent if equations \eqref{DUALConsist} hold in the limit $h\to 0$. Even if dual consistency is not preserved, additional terms can be added to the discretized equations or to the quantity of interest to modify the scheme. As discussed in \cite{harriman2004importance}, symmetric and nonsymmetric interior penalties were analyzed separately, and the SIPG scheme, which is dual consistent, leads to a better convergence rate of the quantity of interest. In \cite{dolejsi2022}, a dual-consistency-based hp-adaptive scheme achieved a better convergence rate. In this work, dual consistency is implemented under the Newton-GMG solver, and the boundary modification is updated in each step. As a dual-inconsistent scheme may pollute the adaptation around the boundary, leading to unexpected singularities, the dual equations may generate a linear system with a loss of regularity. To resolve this issue, a regularization term was added to the dual equations, leading to a GMG solver with a more stable convergence rate that generates the error indicators well.
\section{A Posteriori Error Estimate}
\subsection{A Posteriori Error Representation}
In \cite{hu2016adjoint}, we developed a non-oscillatory k-exact reconstruction method. Based on that work, we consider the finite volume scheme with the Petrov-Galerkin method to enhance the order of polynomials, and we step further to develop the error indicator. Firstly, the numerical solution $\mathbf{u}_p$ denotes the solution solved on a finite-dimensional space $\mathcal V_p^{B_h}$, which satisfies the equation
\begin{equation}
\mathcal A_{h}(\mathbf{u}_p,v)=0,\qquad \forall~v\in\mathcal V_p^{B_h}.
\end{equation}
With the orthogonality property that
\begin{equation}
\mathcal A_{h}(\mathbf{u}_p,v)=\mathcal A_{h}(\mathbf{u},v),\qquad \forall~v\in\mathcal V_p^{B_h},
\end{equation}
the exact error representation formula is derived from the following steps:
\begin{equation}
\begin{aligned}
\label{primal_residual}
\mathcal J(\mathbf{u})-\mathcal J(\mathbf{u}_p)&=\int_0^1\mathcal J'[\mathbf{u}_p+\theta(\mathbf{u}-\mathbf{u}_p)](\mathbf{u}-\mathbf{u}_p)\,d\theta
\\ &=\int_0^1 \mathcal A'_{h}[\mathbf{u}_p+\theta(\mathbf{u}-\mathbf{u}_p)](\mathbf{u}-\mathbf{u}_p,\mathbf{z})\,d\theta
\\ &= \mathcal A_{h}(\mathbf{u},\mathbf{z})- \mathcal A_{h}(\mathbf{u}_p,\mathbf{z})
\\ &= \mathcal A_{h}(\mathbf{u}-\mathbf{u}_p,\mathbf{z})- \mathcal A_{h}(\mathbf{u}- \mathbf{u}_p,\pi_p\mathbf{z})
\\ &= \mathcal A_{h}(\mathbf{u},\mathbf{z}-\pi_p\mathbf{z})- \mathcal A_{h}(\mathbf{u}_p,\mathbf{z}-\pi_p\mathbf{z})
\\ &=- \mathcal A_{h}(\mathbf{u}_p,\mathbf{z}-\pi_p\mathbf{z}).
\end{aligned}
\end{equation}
Here, $\pi_p$ denotes a projection operator that maps the exact function into $\mathcal V_p^{B_h}$ and the Fr\'{e}chet derivatives are just formal calculations.
Meanwhile, the residual can be separated into two parts:
\begin{equation}
\begin{aligned}
\label{dwr_inequality}
\mathcal{J}(\mathbf{u})-\mathcal{J}(\mathbf{u}_p)=& \mathcal{A}_{h}(\mathbf{u}-\mathbf{u}_p,\mathbf{z}-\mathbf{z}_p)
\\=&\sum\limits_{K\in\mathcal{K}_h} \left\{ -\int_K\nabla\mathcal{F}(\mathbf{u}-\mathbf{u}_p)\cdot(\mathbf{z}-\mathbf{z}_p)dx\right.
\\ &+\int_{\partial K\backslash\Gamma}\left(\mathcal{F}(\mathbf{u}-\mathbf{u}_p)\cdot n_K-\mathcal{H}\left((\mathbf{u}-\mathbf{u}_p)^+,(\mathbf{u}-\mathbf{u}_p)^-,n_K\right)\right)\cdot(\mathbf{z}-\mathbf{z}_p)^+ds
\\ & \left. +\int_{\partial K\cap\Gamma}\left(\mathcal{F}(\mathbf{u}-\mathbf{u}_p)\cdot n_K-\tilde{\mathcal H}\left((\mathbf{u}-\mathbf{u}_p)^+,\Gamma\left((\mathbf{u}-\mathbf{u}_p)^+\right),n_K\right)\right)\cdot(\mathbf{z}-\mathbf{z}_p)^+ds\right\}
\\ & \leq\sum_{K\in\mathcal{K}_h}\rho_K\omega_K,
\end{aligned}
\end{equation}
where $\rho_K$ denotes the residuals and $\omega_K$ the weights, defined by
\begin{equation}
\begin{aligned}
\rho_K &:=~h_K|\!|\nabla \mathcal{F}(\mathbf{u}-\mathbf{u}_p)|\!|_{L^2_{K}}+h_K^{1/2}|\!| r_K|\!|_{L^2_{\partial K\backslash\Gamma}}+h_K^{1/2}|\!|r^*_K|\!|_{L^2_{\partial K\cap\Gamma}},
\\\omega_K &:=~ \max \{h_K^{-1} |\!|\mathbf{z}-\mathbf{z}_p|\!|_{L^2_{K}}, h_K^{-1/2}|\!|\mathbf{z}-\mathbf{z}_p|\!|_{L^2_{\partial K\backslash \Gamma}}, h_K^{-1/2}|\!|\mathbf{z}-\mathbf{z}_p|\!|_{L^2_{\partial K\cap\Gamma}}\}.
\end{aligned}
\end{equation}
Here $r_K$ and $r_K^*$ are defined by
\begin{equation}
\begin{aligned}
r_K&:=~\mathcal{F}(\mathbf{u}-\mathbf{u}_p)\cdot n_K-\mathcal H\left((\mathbf{u}-\mathbf{u}_p)^+,(\mathbf{u}-\mathbf{u}_p)^-,n_K\right),
\\r_K^*&:=~\mathcal{F}(\mathbf{u}-\mathbf{u}_p)\cdot n_K-\tilde{\mathcal H}\left((\mathbf{u}-\mathbf{u}_p)^+,\Gamma\left((\mathbf{u}-\mathbf{u}_p)^+\right),n_K\right).
\end{aligned}
\end{equation}
According to the local approximation estimates in \cite{brenner2008mathematical}, we have
\begin{equation}
\max \{h_K^{-1} |\!|\mathbf{z}-\mathbf{z}_p|\!|_{L^2_{K}}, h_K^{-1/2}|\!|\mathbf{z}-\mathbf{z}_p|\!|_{ L^2_{\partial K\backslash \Gamma}}, h_K^{-1/2}|\!|\mathbf{z}-\mathbf{z}_p|\!|_{ L^2_{\partial K\cap\Gamma}}\}\leq C_{i,K}h_K^{1+r}|\!|\nabla ^{1+r} \mathbf{z}|\!|_{\tilde{K}},\qquad r\in \{0,1\},
\end{equation}
for $\mathbf{z} \in \mathcal{V}\cap H^{1+r}(\Omega)$. For $r=0$, $\tilde{K}$ is the union of the neighboring patch of $K$, while for $r=1$, $\tilde{K}$ is $K$ itself. Generally, we do not have an explicit bound for the dual solution $\mathbf{z}$. The easiest way is to consider $r=1$, and the weights $\omega_{K}$ are then estimated by substituting the dual solution $\mathbf z$ with the approximate value $\mathbf{z}_h\in\mathcal{W}_h$, which gives
\begin{equation}
\omega_K\thickapprox \tilde{\omega}_K:=h_K^2|\nabla^2_h\mathbf{z}_h(x_K)|,
\end{equation}
where $x_K$ denotes the midpoint of element $K$. Heuristically, if the numerical solution of the adjoint equation is singular at some point, then the mesh needs to be refined there. Even when the primal solution is smooth, which leads to a relatively small residual, the dual weight dominates the tolerance in this scheme. Hence, the most important reason for using the dual-weighted residual method is to refine the mesh more economically under different schemes of the quantity of interest. Our objective is not to solve the numerical solution accurately globally, but to solve the target variable with higher accuracy instead. The dual weight indicates that, even when the residual is small enough, we need to allocate more computational resources to solve the quantity of interest with greater accuracy. It is important to note that the inequality in \eqref{dwr_inequality} can be designed with different kinds of methods. As we are concerned with the development of the dual weight in the tolerance, we focus on the following schemes.
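As an illustration, the heuristic indicator $\rho_K\tilde{\omega}_K$ and the corresponding refinement flags could be assembled as follows; all inputs are illustrative per-element arrays, not the actual solver data structures.
\begin{verbatim}
import numpy as np

# Sketch: heuristic DWR indicator rho_K * omega_K with the dual weight
# omega_K ~ h_K^2 |nabla^2 z_h(x_K)|; inputs are illustrative arrays.
def element_indicators(h, residual_norms, hess_z_mid):
    omega = h**2 * np.abs(hess_z_mid)   # dual weight per element
    return residual_norms * omega       # indicator rho_K * omega_K

def flag_for_refinement(indicators, tol):
    # Elements whose indicator exceeds the tolerance are refined.
    return np.where(indicators > tol)[0]
\end{verbatim}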
\subsection{Reconstruction Method}
It is natural to apply the higher-order reconstruction method, since the error of the quantity of interest is expressed through the residuals of both the primal and dual solutions. However, in \eqref{primal_residual}, the lack of an exact value for $\mathbf u$ makes the equation impractical, because the error is represented in terms of $\mathbf u$. Therefore, for clarity in the following analysis, we consider the linearized $\mathcal{A}_h[\mathbf{u}](\cdot)$ and derive the dual equation in a weak form to facilitate the error analysis,
\begin{equation}
\label{A_meanvalue}
\mathcal A_{h}(\mathbf{u},v)- \mathcal A_{h}(\mathbf{u}_p,v)=\int_0^1 \mathcal{A}'_{h}[\mathbf{u}_p+\theta(\mathbf{u}-\mathbf{u}_p)](\mathbf{u}-\mathbf{u}_p,v)\,d\theta.
\end{equation}
Similarly, the quantity of interest is linearized as
\begin{equation}
\mathcal{J}(\mathbf{u})-\mathcal{J}(\mathbf{u}_p)=\int_0^1 \mathcal{J}'[\mathbf{u}_p+\theta(\mathbf{u}-\mathbf{u}_p)](\mathbf{u}-\mathbf{u}_p)\, d\theta.
\end{equation}
If the discretization is implemented on a dual-consistent scheme, the weak form of the dual equation can be solved as: Find $\mathbf{z}\in\mathcal V$ such that
\begin{equation}
\label{weakform_dual}
\mathcal A_{h}(\mathbf{u},\mathbf{z})- \mathcal A_{h}(\mathbf{u}_p,\mathbf{z})=\mathcal{J}(\mathbf{u})-\mathcal{J}(\mathbf{u}_p).
\end{equation}
Then, let $R_q^p$ denote a reconstruction operator $R_q^p:\mathcal V_p^B\rightarrow \mathcal V_q^B$ which generates cell-wise discontinuous $q$-th order polynomials from $p$-th order ones on the broken space $B_{h}$.
Then, based on the reconstruction, we develop the error representation for the Godunov FVM case:
\begin{equation}
\begin{aligned}
\mathcal{J}(\mathbf{u})-\mathcal{J}(R^0_p\mathbf{u}_0)&=\int_0^1\mathcal{J}'[R^0_p\mathbf{u}_0+\theta(\mathbf{u}-R^0_p\mathbf{u}_0)](\mathbf{u}-R^0_p\mathbf{u}_0)\,d\theta
\\ &=\int_0^1 \mathcal{A}'_{h}[R^0_p\mathbf{u}_0+\theta(\mathbf{u}-R^0_p\mathbf{u}_0)](\mathbf{u}-R^0_p\mathbf{u}_0,\mathbf{z})\,d\theta
\\ &= \mathcal{A}_{h}(\mathbf{u},\mathbf{z})- \mathcal{A}_{h}( R^0_p\mathbf{u}_0,\mathbf{z})
\\ &= \mathcal{A}_{h}(\mathbf{u}- R^0_p\mathbf{u}_0,\mathbf{z})- \mathcal{A}_{h}(\mathbf{u}- R^0_p\mathbf{u}_0,\pi_p\mathbf{z})
\\ &= \mathcal{A}_{h}(\mathbf{u},\mathbf{z}-\pi_p\mathbf{z})- \mathcal{A}_{h}(R^0_p\mathbf{u}_0,\mathbf{z}-\pi_p\mathbf{z})
\\ &=- \mathcal A_{h}(R^0_p\mathbf{u}_0,\mathbf{z}-\pi_p\mathbf{z}),
\end{aligned}
\end{equation}
where the fourth equality holds by the Galerkin orthogonality
\begin{equation}
\mathcal{A}_{h}(\mathbf{u},\pi_p\mathbf{z})=\mathcal{A}_{h}( R^0_p\mathbf{u}_0,\pi_p\mathbf{z}).
\end{equation}
Then the error representation here can be reformulated as the following expression,
\begin{equation}
\begin{aligned}
\label{error_representation}
\mathcal J(\mathbf{u})-\mathcal{J}(R^0_p\mathbf{u}_0)=\sum\limits_{K\in\mathcal{K}_h} &\left\{ -\int_K\nabla\mathcal{F}(R^0_p\mathbf{u}_0)\cdot(\mathbf{z}-\pi_p\mathbf{z}) dx\right.
\\ &+\int_{\partial K\backslash\Gamma}\left(\mathcal{F}(R^0_p\mathbf{u}_0^+)\cdot n_K-\mathcal{H}(R^0_p\mathbf{u}_0^+,R^0_p\mathbf{u}_0^-,n_K)\right)\cdot(\mathbf{z}-\pi_p\mathbf{z})^+ds
\\ & \left. +\int_{\partial K\cap\Gamma}\left(\mathcal{F}(R^0_p\mathbf{u}_0^+)\cdot n_K-\tilde{\mathcal H}(R^0_p\mathbf{u}_0^+, \Gamma(R^0_p\mathbf{u}_0^+),n_K)\right)\cdot(\mathbf{z}-\pi_p\mathbf{z})^+ds\right\}.
\end{aligned}
\end{equation}
This kind of representation helps us to localize the error on each element. However, it is not suitable for obtaining a computable a posteriori error estimate. The reason is that the $\mathbf{z}$ in the formula \eqref{error_representation} lives in an infinite-dimensional space and is generally unknown. Meanwhile, the mean-value linearization used in the error estimates is based on the exact solution $\mathbf{u}$, which contradicts the goal of calculating the numerical solution using this scheme.
To approximate the exact value of $\mathbf{z}$ on the infinite-dimensional space, we follow the patch recovery post-processing and global higher-order solve methods in \cite{barth2002}. We find an approximate solution $\mathbf{z}_0$ to the dual equation and then use the k-exact reconstruction method of \cite{hu2016adjoint}. This leads to the following problem: Find $\mathbf{z}_0\in\mathcal V_0^{B_h}$ such that
\begin{equation}
\mathcal A_{h}(\mathbf{u},R_p^0\mathbf{z}_0)- \mathcal A_{h}(\mathbf{u}_p,R_p^0\mathbf{z}_0)=\mathcal{J}(\mathbf{u})-\mathcal{J}(\mathbf{u}_p).
\end{equation}
Then the dual solution on the infinite-dimensional space can be approximated by $\bar{R}_q^pR_p^0\mathbf z_0$, where $\bar{R}_q^p$ denotes the k-exact reconstruction operator.
Then we use a higher-order reconstruction to substitute the exact value. After that, we solve the primal equation roughly to get $\mathbf u_0$, which turns the equation into the form: Find $\mathbf{z}_0\in\mathcal V_0^{B_h}$ such that, for $q>p$,
\begin{equation}
\mathcal A_{h}(R_p^0\mathbf{u}_0,R_p^0\mathbf{z}_0)- \mathcal A_{h}(R^0_q\mathbf{u}_0,R_q^0\mathbf{z}_0)=\mathcal{J}(R^0_p\mathbf{u}_0)-\mathcal{J}(R^0_q\mathbf{u}_0).
\end{equation}
Then the error can be formulated as
\begin{equation}
\begin{aligned}
\label{TOL}
Error=&\sum_{K\in\mathcal K_h}\mathcal{E}_{K}
\\=&\sum\limits_{K\in\mathcal K_h} \left\{ -\int_K\nabla\mathcal{F}(R^0_q\mathbf{u}_0)\cdot(R^0_q\mathbf{z}_0-\mathbf{z}_0)dx\right.
\\ &+\int_{\partial K\backslash\Gamma}\left(\mathcal{F}(R^0_q\mathbf{u}_0^+)\cdot n_K-\mathcal{H}(R^0_q\mathbf{u}_0^+,R^0_q\mathbf{u}_0^-,n_K)\right)\cdot(R^0_q\mathbf{z}_0-\mathbf{z}_0)^+ds
\\ & \left. +\int_{\partial K\cap\Gamma}\left(\mathcal{F}(R^0_q\mathbf{u}_0^+)\cdot n_K-\tilde{\mathcal H}(R^0_q\mathbf{u}_0^+,\Gamma(R^0_q\mathbf{u}_0^+),n_K)\right)\cdot(R^0_q\mathbf{z}_0-\mathbf{z}_0)^+ds\right\}.
\end{aligned}
\end{equation}
While the whole adaptation scheme is implemented under the Petrov-Galerkin method, the dual solutions cannot be calculated in the same function space as the primal solutions: by the Galerkin orthogonality, the error representation would then be identically zero.
To avoid this phenomenon originating from the Galerkin orthogonality, there are various approaches, such as altering the size of the broken space, considering the dual mesh for calculation, and enhancing the order of the piecewise polynomials. For instance, a reconstruction method was combined with the goal-oriented method to solve a linear convection-diffusion-reaction problem in \cite{dolejvsi2017goal}. In our previous work \cite{meng2021fourth,hu2011robust}, the reconstruction method was conducted following \cite{zhang2011improvement,CiCP-30-1545}, which accelerates the convergence of the system towards the steady state significantly.
As a robust reconstruction has been achieved effectively, and motivated by avoiding the Galerkin orthogonality for a theoretical guarantee, the reconstruction method is applied in this work to construct the algorithm framework. Moreover, for further implementation of the reconstruction around shock waves, the shock capturing method \cite{AAMM-13-671} shall be considered as well.
\section{Numerical Algorithm}
\subsection{AFVM4CFD}
In \cite{slotnick2014cfd}, a comprehensive review of the development of computational fluid dynamics in the past, as well as the perspective for the next 15 years, was delivered, in which mesh adaptivity was listed as a potential technique for significantly enhancing the simulation efficiency. However, it was also mentioned that such a potential method has not seen widespread use, ``due to issues related to software complexity, inadequate error estimation capabilities, and complex geometry''. We provide a competitive solution to the above three issues in developing adaptive mesh methods for the steady Euler equations through 1) a thorough investigation of the dual consistency in dual-weighted residual methods, 2) a well-designed dual-weighted residual based $h$-adaptive mesh method with dual consistency, built on a Newton-GMG numerical framework, and 3) a quality code based on the AFVM4CFD library. It is worth mentioning that with AFVM4CFD, the Euler equations can be solved well, with a residual that gets close to machine precision. An application for the software copyright of AFVM4CFD is underway.
\subsection{Newton-GMG for dual equations}
In our previous work, the Newton-GMG solver proved effective for solving the primal equations. However, since a dual-inconsistent framework may cause unexpected singularities around the boundary, it can make the linear system ill-posed. Therefore, the robustness of the linear system solver is also crucial for the framework. Additionally, the dual equations derived from the primal equations have an inverse time direction, based on the integration by parts, which should also be considered in the regularization of the dual linear system. The performance of the solver for the dual equations is shown below:
\begin{figure}[htb]\centering
\includegraphics[width=0.48\textwidth,height=0.259\textheight]{residualDecay.pdf}
\caption{Convergence trace of the residual of dual equations.}
\label{resTrace}
\end{figure}
As shown in Figure \ref{resTrace}, the solver for the dual equations is steady and powerful. Even for a mesh with more than $1{,}500{,}000$ elements, the convergence trace of the solver is smooth, which supports the whole DWR refinement algorithm.
\subsection{Mesh Adapt Process}
Based on the theoretical discussion above, we developed the DWR-based $h$-adaptive refinement with a Newton-GMG solver. The process for a one-step refinement can be summed up in the algorithm below:
\begin{algorithm}[H]
\SetAlgoLined
\KwData{Initial $\mathcal{K}_H$, $TOL$}
\KwResult{$\mathcal{K}_h$}
Using the Newton-GMG solver, solve $\mathcal{R}_{H}(\mathbf{u}_0)=0$ with residual tolerance $10^{-3}$\;
Reconstruct the piecewise constant solutions $\mathbf{u}_1=R^0_1\mathbf{u}_0$\;
Interpolate the solution $(\mathbf{u}_1)_H$ from the mesh $\mathcal{K}_{H}$ to $\mathcal{K}_{h}$ to get $(\mathbf{u}_1)_h^H$\;
Record the residual $\mathcal{R}_h\left((\mathbf{u}_1)_h^H\right)$\;
Solve the dual equation to get $(\mathbf{z}_0)_H$\;
Calculate the error indicator for each element\;
\While{$\mathcal{E}_{K_H}>TOL$ for some $K_H$}{
Adaptively refine the mesh $\mathcal{K}_{H}$ with the process in \cite{HU2016235}\;}
\caption{DWR for one-step mesh refinement}
\end{algorithm}
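In code, one pass of this procedure together with the decreasing threshold discussed below might look like the following sketch; all routine names (solve\_primal, reconstruct, solve\_dual, indicators, refine) are placeholders for the corresponding Newton-GMG and reconstruction components, not the actual AFVM4CFD interface.
\begin{verbatim}
# A high-level sketch of the multistep DWR refinement loop; the called
# routines are placeholders for the Newton-GMG and reconstruction steps.
def adapt_loop(mesh, tol, decay=0.5, max_steps=10):
    for _ in range(max_steps):
        u0 = solve_primal(mesh, res_tol=1.0e-3)
        u1 = reconstruct(u0)            # piecewise-linear R_1^0 u_0
        z0 = solve_dual(mesh, u1)
        eta = indicators(mesh, u1, z0)  # error indicator E_K per element
        if eta.max() <= tol:
            break
        mesh = refine(mesh, eta, tol)
        tol *= decay                    # decreasing refinement threshold
    return mesh
\end{verbatim}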
In our previous work \cite{HU2016235,hu2013adaptive}, we used a constant tolerance for the multistep refinement process, which was found to be effective. However, as the desired precision of the quantity of interest increases, the number of elements in the mesh grows rapidly. Choosing an optimal tolerance is tricky: refining too aggressively may require too many degrees of freedom, while refining too little per step causes too many refinement steps and may even lead to a large error. It has been suggested in \cite{aftosmis2002multilevel} that the tolerance shall match the error distribution histograms on hierarchical meshes. Besides, Nemec developed a decreasing threshold strategy in an adjoint-based framework \cite{nemec2008adjoint}, which can help save degrees of freedom for a specific quantity of interest. Motivated by this idea, we have designed the algorithm with a decreasing refinement threshold, and the growth of the grids has met expectations well.
\section{Numerical experiment}
\subsection{Subsonic Model}
To validate the efficiency of the algorithm, we tested the model with the following configurations:
\begin{itemize}
\item A domain with a NACA0012 airfoil, surrounded by an outer
circle with a radius of 30;
\item Mach number 0.5, and attack angle
0$^\circ$;
\item Lax-Friedrichs Numerical Flux;
\item Drag coefficient as the quantity of interest, and
zero normal velocity as the solid wall boundary condition;
\end{itemize}
\begin{figure}\centering
\frame{\includegraphics[width=0.4\textwidth]{adjoint_1.pdf}}
\frame{\includegraphics[width=0.4\textwidth]{adjoint_2.pdf}} \\
\frame{\includegraphics[width=0.4\textwidth]{incosistAdjoint_1.pdf}}
\frame{\includegraphics[width=0.4\textwidth]{incosistAdjoint_2.pdf}}
\caption{Top: isolines of the first and second
dual momentum variables with the dual-consistent DWR
method. Bottom: the corresponding isolines with the dual-inconsistent DWR method.}
\label{mach0.5dual}
\end{figure}
\begin{figure}[ht]\centering
\frame{\includegraphics[width=0.45\textwidth]{consisMesh.pdf}}
\frame{\includegraphics[width=0.45\textwidth]{inconsistentMesh.pdf}}
\caption{Left: mesh grids from the dual-consistent method; Right: mesh grids from the dual-inconsistent method.}
\label{onestep_refine}
\end{figure}
To compare dual consistency and dual inconsistency, the dual solutions are calculated on a mesh with 4 levels of global refinement. As shown in Figure \ref{mach0.5dual}, the DWR method constructed on a dual-consistent framework generates symmetric and smooth dual solutions, while the dual solutions from the dual-inconsistent framework are oscillatory. As a result, the adaptive mesh in Figure \ref{onestep_refine} shows that the dual-inconsistent scheme refines more elements around the airfoil, with uneven refinement areas on the upper and lower sides.
The drag coefficient can be chosen as the quantity of interest with the zero normal velocity boundary condition; the normal pressure contributions on the airfoil boundary then cancel and the drag coefficient is close to 0, i.e., $\mathcal{J}(\mathbf{u})=0$. After a series of mesh adaptations, we achieve the target function with the error shown in Figure \ref{0.5multistep}.
\begin{figure}[htb]\centering
\includegraphics[width=0.48\textwidth,height=0.259\textheight]{residualAdapt.pdf}
\frame{\includegraphics[width=0.48\textwidth,height=0.259\textheight]{residualMesh.pdf}}
\caption{Left: Convergence curves of different refinement methods for NACA0012 with Mach number 0.5, attack angle 0$^\circ$; Right: Mesh from the residual-based refinement.}
\label{0.5multistep}
\end{figure}
As shown in this result, the residual-based refinement technique is not very effective for calculating the quantity of interest, as it does not focus on the areas that have the greatest impact on the drag calculation. On the other hand, the DWR method is a more effective technique for mesh adaptation, as it greatly reduces the degrees of freedom while preserving the convergence of the error of the quantity of interest. However, in most practical models we do not have access to the exact solution. In the following results, to test the robustness and the effect of decreasing the refinement threshold, models in the transonic regime are considered.
\subsection{Transonic Model}
\begin{itemize}
\item A domain with a NACA0012 airfoil, surrounded by an outer
circle with a radius of 30;
\item Mach number 0.8, and attack angle
0$^\circ$;
\item Lax-Friedrichs Numerical Flux;
\item Drag coefficient as the quantity of interest, and
mirror reflection as the solid wall boundary condition;
\end{itemize}
In this example, the assumption that $\mathcal{J}(\mathbf{u})=0$ cannot be satisfied anymore, because the model uses a mirror reflection boundary condition and develops shock waves. Since the solver for the primal equations is so effective that the residual of the equations can converge to machine precision, we use a globally refined mesh with $3672064$ elements to calculate the quantity of interest, which is $C_{d}=0.00942314$. We then compared this with the same configuration but with a change in the dual consistency scheme, which produced the following results.
\begin{figure}[htb]\centering
\includegraphics[width=0.48\textwidth,height=0.259\textheight]{cicPair.pdf}
\caption{Convergence curves of dual-consistent refinement and dual-inconsistent refinement for NACA0012 with Mach number 0.8, attack angle 0$^\circ$.}
\label{transonic0.8}
\end{figure}
It is shown that the dual-consistent refinement preserves a stable convergence rate until the precision is below $10^{-4}$, while the dual-inconsistent refinement converges to a result of around $10^{-3}$ precision. Specifically, in this model, the error in the dual-consistent framework converges to $0.00108906$ with $136141$ elements, while in the dual-inconsistent framework it converges to $0.00108446$ with $581701$ elements.
As the configuration for this problem is complicated, the tolerance for the calculation is an important variable. The dual-inconsistent method may refine areas that do not significantly influence the calculation of the target functions, resulting in wasted degrees of freedom. As shown in Figure \ref{transonic0.8}, the dual-consistent refinement with $100{,}000$ elements produced a result approximately equivalent to the dual-inconsistent refinement with nearly $600{,}000$ elements. Compared with the dual-inconsistent refinement, the dual-consistent scheme is always stable and enables the quantity of interest to converge smoothly. To validate this conclusion, we conducted experiments on different types of models, the results of which are presented below.
\begin{itemize}
\item A domain with a NACA0012 airfoil, surrounded by an outer
circle with a radius of 30;
\item Mach number 0.98, and attack angle
1.25$^\circ$;
\item Lax-Friedrichs Numerical Flux;
\item Drag coefficient as the quantity of interest, and
mirror reflection as the solid wall boundary condition;
\end{itemize}
In this example, we considered the influence of the attack angle. Since the boundary modification method can help us preserve the adjoint consistency for the reflection boundary condition, the difference between dual consistency and dual inconsistency is shown below.
\begin{figure}[htb]\centering
\includegraphics[width=0.48\textwidth,height=0.259\textheight]{0.98at1.25ResEle.pdf}
\caption{Convergence curves of dual-consistent refinement and dual-inconsistent refinement for NACA0012 with Mach number 0.98, attack angle 1.25$^\circ$.}
\end{figure}
Even though the errors calculated on the final meshes derived from dual-consistent and dual-inconsistent refinement have similar precision, the dual-consistent refinement is more stable than the dual-inconsistent refinement. This behavior indicates that the precision is enhanced as the number of elements increases under a dual-consistent framework. By contrast, the convergence trace of the dual-inconsistent framework exhibits oscillations, not only wasting degrees of freedom but also hampering the analysis of the convergence of the target functions. To understand why the DWR method obtains the quantity of interest with a stable convergence rate, the dual solutions and residuals used for generating the error indicators are plotted for analysis.
\begin{figure}[htbp]\centering
\frame{\includegraphics[width=0.3\textwidth]{DualMach0.98.pdf}}
\frame{\includegraphics[width=0.3\textwidth]{ResMach0.98.pdf}}
\frame{\includegraphics[width=0.3\textwidth]{mach0.98mesh.pdf}}
\caption{NACA0012 model, Mach number 0.98, attack angle 1.25$^\circ$.: Left: Dual solution of the first variable for the indicators; Middle: Residual of the first variable for the indicators; Right: Meshes generated from the indicators.}
\label{Mach0.98refineMesh}
\end{figure}
As shown in Figure \ref{Mach0.98refineMesh}, the residuals of this model oscillate around the shock waves and along the inflow direction, while the dual solutions concentrate on the boundary of the airfoil. The error indicators from the DWR method balance both the residuals and the dual solutions, generating a mesh that resolves the quantity of interest well. To test the robustness of the dual-consistent framework, we consider different models using the same algorithm in the following parts.
\begin{itemize}
\item A domain with a RAE2822 airfoil, surrounded by an outer
circle with a radius of 30;
\item Mach number 0.729, and attack angle
2.31$^\circ$;
\item Lax-Friedrichs Numerical Flux;
\item Drag coefficient as the quantity of interest, and
mirror reflection as the solid wall boundary condition;
\end{itemize}
Similarly, we use a globally refined mesh with $3801088$ elements to calculate the quantity of interest, which is $C_{d}=0.0144349$. The dual-consistent framework still works well for the whole algorithm.
\begin{figure}[htb]\centering
\frame{\includegraphics[width=0.3\textwidth]{dualRAE.pdf}}
\frame{\includegraphics[width=0.3\textwidth]{resRAE.pdf}}
\frame{\includegraphics[width=0.3\textwidth]{Rae_C.pdf}}
\caption{RAE2822 model: Left: Dual solution of the first variable for the indicators; Middle: Residual of the first variable for the indicators; Right: Meshes generated from the indicators.}
\label{RAErefineMesh}
\end{figure}
Unlike the residual-based adaptation, which emphasizes refinement around shock waves, the DWR method aims to resolve the quantity of interest with the least computational cost. In this example, Figure \ref{RAErefineMesh} shows that the dual solutions concentrate on the top half of the airfoil, while the residuals oscillate around the shock waves and the outflow direction. The DWR method combines these two aspects to produce a stable error reduction trace of the target function. The result is shown in Figure \ref{RAEeleRES}.
\begin{figure}[htbp]\centering
\includegraphics[width=0.45\textwidth]{RAEelementRes.pdf}
\caption{Convergence curves of dual-consistent refinement for RAE2822 with Mach number 0.729, attack angle 2.31$^\circ$.}
\label{RAEeleRES}
\end{figure}
\begin{itemize}
\item A domain with two NACA0012 airfoils, surrounded by an outer
circle with a radius of 30;
\item Mach number 0.8, and attack angle
0$^\circ$;
\item Lax-Friedrichs Numerical Flux;
\item Drag coefficient of different airfoils, and
mirror reflection as the solid wall boundary condition;
\end{itemize}
\begin{figure}[htbp]\centering
\frame{\includegraphics[width=0.3\textwidth]{TwoC0Dual.pdf}}
\frame{\includegraphics[width=0.3\textwidth]{TwoC0res.pdf}}
\frame{\includegraphics[width=0.3\textwidth]{TwoC0Mesh.pdf}}
\caption{Two bodies model: Left: Dual solution of the first variable for the indicators; Middle: Residual of the first variable for the indicators; Right: Meshes generated from the indicators.}
\label{TwoBodyMesh}
\end{figure}
Firstly, we choose the quantity of interest as the drag coefficient of the upper airfoil. Figure \ref{TwoBodyMesh} shows that the dual solutions focus on the boundary of the upper airfoil, while the residuals of the different variables oscillate around the shock waves and the outflow direction. The mesh generated from the error indicators is mainly refined around the upper airfoil, whereas the elements refined around the lower airfoil behave like a residual-based adaptation, since the dual equations have no contribution in that region. The refinement around the upper airfoil balances the dual solutions and the residuals, resulting in a stable convergence of the quantity of interest. Similarly, the quantity of interest is calculated on a mesh with $2227712$ elements, giving $C_{d}=9.218816\times 10^{-3}$.
\begin{figure}[htb]\centering
\includegraphics[width=0.45\textwidth]{Twobody0EleRes.pdf}
\includegraphics[width=0.45\textwidth]{Two1EleRes.pdf}
\caption{Convergence curves of dual-consistent and dual-inconsistent refinement for the two-body NACA0012 model with Mach number 0.8, attack angle 0$^\circ$. Left: The drag of the upper airfoil chosen as the quantity of interest. Right: The drag of the lower airfoil chosen as the quantity of interest.}
\label{TwoBodyeleRES}
\end{figure}
If the quantity of interest is chosen as the drag of the lower airfoil, the opposite behavior occurs, focusing the refinement around the lower airfoil.
\begin{figure}[htb]
\centering
\frame{\includegraphics[width=0.3\textwidth]{TwoC1Dual.pdf}}
\frame{\includegraphics[width=0.3\textwidth]{TwoC1Res.pdf}}
\frame{\includegraphics[width=0.3\textwidth]{TwoC1Mesh.pdf}}
\caption{Two bodies model: Left: Dual solution of the first variable for the indicators; Middle: Residual of the first variable for the indicators; Right: Meshes generated from the indicators.}
\end{figure}
The convergence curves for the two examples demonstrate that dual-consistent refinement is more stable than dual-inconsistent refinement, which can exhibit unexpected oscillations. More importantly, even if dual-inconsistent DWR may produce a quantity of interest with an acceptable level of precision, it is not always robust enough to derive a satisfactory stopping criterion for target-functional-based adaptation.
\section{Conclusion}
Based on the previous work, we further constructed the $h$-adaptivity method for the steady Euler equations in the AFVM4CFD package. Implementing the Newton method for nonlinear equations posed a challenge for $h$-adaptivity, but we were able to validate the dual consistency property using the Newton-GMG solver. Our results show that, under the Newton-GMG framework, a dual-consistent scheme is more stable in obtaining a quantity of interest with the expected precision compared to the dual-inconsistent scheme, which can lead to unexpected singularities that contaminate the refinement area. The dual-consistent framework also offers better precision as the degrees of freedom increase. Besides, to preserve a dual-consistent algorithm, we apply the boundary modification operator to the numerical solutions, allowing for a more generalized configuration for simulations.
However, we encountered some difficulties during the simulations, such as designing a suitable tolerance for the mesh adaptations. To make the whole refinement process more automatic, we plan to develop an algorithm to balance the tolerance based on the mesh in the future. Besides, obtaining the reference dual solutions still requires a globally refined mesh, which we hope to improve upon. As the dual equations are linear and do not require high precision to derive the error indicators, machine learning or mixed-precision methods will be included to enhance the efficiency in the future. We will also develop a more efficient and faster solver for the dual solutions based on this framework. Finally, the dual consistency property for supersonic modeling is still under development.
\section*{Acknowledgement}
Thanks to the support from National Natural Science Foundation of China (Grant Nos. 11922120 and 11871489), FDCT of Macao S.A.R. (Grant No. 0082/2020/A2), MYRG of University of Macau (MYRG2020-00265-FST) and Guangdong-Hong Kong-Macao Joint Laboratory for Data-Driven Fluid Mechanics and Engineering Applications (2020B1212030001).
\bibliographystyle{plain}
\section{Conclusion}
In this paper, we rethink the performance of LTR methods with Vision Transformers and propose a baseline based on unsupervised pre-training to learn from imbalanced data. We re-analyze the reasons for the performance variation of LTR methods with the ViT backbone. Furthermore, we propose the PDC to measure the model predictive bias quantitatively, i.e., the tendency of predictors to classify images into common classes. Extensive experiments demonstrate the effectiveness of PDC, which provides consistent and more intuitive evaluation.
\section{Experiments}
\label{sec:exp}
\input{Figs/cf.tex}
\subsection{Datasets}
\textbf{CIFAR100-LT} \cite{cifar} is created from the original CIFAR dataset, which has 100 classes and 60K images. The skewness of the dataset is controlled by an Imbalance Factor (IF), the ratio between the most and least frequent classes. We follow previous work\cite{LDAM-loss, CB-loss} and use the dataset with $IF \in \{10, 50, 100\}$ for comprehensive comparisons.
\noindent\textbf{iNaturalist 2018} \cite{inaturalist} is a large-scale real-world dataset for LTR with 437.5K images from 8,142 classes. It is extremely imbalanced, with an imbalance factor of 500. We use the official training and validation splits in our experiments.
\subsection{Implementation Details}
We use a pre-trained ViT-Base model from MAE and fine-tune it at resolution $32$ (CIFAR-LT) and $128$ (iNat18). We use the AdamW optimizer with momentum $\beta_1 = 0.9$ and $\beta_2 = 0.999$. We train the model for 100 epochs with an effective batch size of 1024 and weight decay of 0.1. The base learning rate is $1\times10^{-3}$, following a cosine decay schedule with 5 warmup epochs. We use Mixup (0.8) and Cutmix (1.0) as augmentation and set the drop path rate of ViT to 0.1.
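For concreteness, the following is a minimal PyTorch sketch of this optimization setup (linear warmup followed by cosine decay); the helper and its defaults are illustrative, not our exact training script.
\begin{verbatim}
import math
import torch

# Sketch of the schedule above: AdamW plus linear warmup + cosine decay.
# `model` and `steps_per_epoch` are assumed to come from the caller.
def build_optimizer_and_scheduler(model, steps_per_epoch, epochs=100,
                                  warmup_epochs=5, base_lr=1e-3,
                                  weight_decay=0.1):
    optimizer = torch.optim.AdamW(model.parameters(), lr=base_lr,
                                  betas=(0.9, 0.999),
                                  weight_decay=weight_decay)
    total = epochs * steps_per_epoch
    warmup = warmup_epochs * steps_per_epoch

    def lr_lambda(step):
        if step < warmup:                      # linear warmup
            return step / max(1, warmup)
        progress = (step - warmup) / max(1, total - warmup)
        return 0.5 * (1.0 + math.cos(math.pi * progress))  # cosine decay

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
    return optimizer, scheduler
\end{verbatim}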
\subsection{Compared Methods}
We adopt the recipes of vanilla ViT\cite{vit}, DeiT III\cite{deit3}, and MAE\cite{MAE} to train ViTs. In view of MAE's excellent performance and low computational cost, we adopt MAE for the following evaluation. We adopt vanilla CE loss, Binary CE\cite{deit3}, BalCE\cite{BalCE}, CB\cite{CB}, LDAM\cite{LDAM-loss}, MiSLAS\cite{MiSLAS}, LADE\cite{LADE}, and IB loss\cite{IB-loss} for comprehensive comparisons. We exclude multi-expert (heavy GPU memory) and contrastive-learning (contradictory to MAE) methods.
\noindent Table \ref{tab:recipe} shows the results of different training manners. With the same number of training images, accuracy on the long-tailed (LT) data is lower than on the balanced (BAL) data for all recipes. MAE achieves the best results and learns meaningful features on both datasets (Figure \ref{fig:mae-pretrain}). Hence, we select MAE for the following experiments.
\input{Tabs/iNat18.tex}
\subsection{LTR Performance with ViT}
It is challenging to train ViTs directly on LTR datasets (Table \ref{Tab:apdx}), because it is difficult to learn the inductive bias of ViTs and the statistical bias of LTR (Eq.\ref{bayes}) simultaneously (Table \ref{tab:recipe}). In Table \ref{tab:cifar}, we mainly re-rank different LTR losses on ViT-Base, using weights pre-trained on ImageNet. The results in Table \ref{tab:inat} are trained \textit{from scratch} in the MAE manner without pre-trained weights, to show the performance gap between ResNet and ViT. We only conduct the architecture comparison on iNat18 because ViTs are hard to train from scratch with limited data and resolution, as on CIFAR.
As Tables \ref{tab:cifar} \& \ref{tab:inat} show, BalCE achieves satisfactory performance on both datasets, which indicates its effectiveness and generalization. Compared with its performance on ResNet, MiSLAS shows poor Acc and PDC, which means its special design is hard to generalize to ViT. In addition, IB is difficult to train due to numerical instability and thus results in worse performance (7\%$\downarrow$). For most proposals, PDC remains consistent with Top-1 Acc and Few Acc. However, LDAM has better accuracy but worse PDC compared to CB, which means it alleviates predictive bias only slightly. We additionally calculate the variance of PDC across different imbalance degrees, as shown in Table \ref{tab:cifar}. From this point of view, BCE shows the largest variance together with decreasing performance, which suggests weak adaptability.
Figure \ref{fig:cf} presents the visualization with confusion matrices. A larger PDC indicates more concentrated off-diagonal elements (e.g., BCE). BalCE makes more balanced predictions with a smaller PDC, which demonstrates that PDC is a precise quantitative metric for measuring prediction distributions.
\input{Tabs/Rebuttal}
\section{Introduction}
\label{sec:intro}
With rapid advances in visual classification, deep models increasingly depend on large-scale balanced datasets \cite{Imagenet, COCO}. However, the number of instances in real-world data usually follows a Long-Tailed (LT) distribution w.r.t. class: many tail classes are associated with limited samples, while a few head categories occupy most of the instances \cite{CB-loss, OLTR, PriorLT, icasspunbalance}. A model supervised by long-tailed data tends to bias toward the head classes and ignore the tail ones, and the paucity of tail data makes it hard to train models with satisfactory generalization. Overcoming Long-Tailed Recognition (LTR) to utilize real-world data effectively thus remains a challenging task.
Recent literature mainly adopts two approaches to tackle LT data, i.e., feature re-sampling and class-wise re-weighting. Re-sampling methods select the training data in a balanced manner by over-sampling the tail or under-sampling the head; some effective proposals replenish tail samples via generation or optimization with the help of head instances\cite{M2m, Bagoftricks}. Re-weighting methods penalize different categories with class-frequency-dependent weights or logit biases\cite{BalCE, LADE-loss}. Although the aforementioned methods have greatly mitigated the LT problem, their conclusions were established on ResNet-based backbones \cite{MiSLAS, IB-loss}.
In recent years, many transformer-based backbones \cite{vit} have surpassed the performance of CNNs. DeiT\cite{deit3} proposes an effective recipe to train ViT with limited data, and MAE\cite{MAE} adopts a masked autoencoder to pre-train the ViT. However, there is limited research on how ViTs perform in LTR. Motivated by this, we rethink previous LT works with ViT. We find that it is hard to train ViTs with long-tailed data, while unsupervised pre-training ameliorates this by a large margin. Unsupervised pre-trained ViTs learn meaningful features (c.f. Figure \ref{fig:mae-pretrain}) and generalize well on downstream tasks (c.f. Table \ref{tab:recipe}), on either long-tailed or balanced datasets.
Numerous studies have demonstrated that a model supervised by an LT dataset will inevitably exhibit prediction bias toward the head\cite{PriorLT, LA, LADE, icassphead}: the predictor simply classifies a query image into head classes to attain a low misclassification error. Previous metrics, like accuracy on the validation dataset, cannot directly evaluate the model's predictive preference; the same accuracy may come at the cost of very different numbers of predictions per class (c.f. Figure \ref{fig:teaser}). Although some works show models' prediction distributions qualitatively by visualization\cite{PriorLT, MiSLAS, NCL}, a metric is required to evaluate them quantitatively. In this paper, we propose Prediction Distribution Calibration (PDC) to fill this gap. Specifically, if we view the per-class prediction counts and target instance counts as probability distributions, we can measure the distance between the two. Considering the imbalance degree of the training samples, we take the training label distribution into account as well. To summarize, our main contributions are:
\noindent \textbf{1)} We show that it is difficult to train ViT with long-tailed data, which can be tackled with unsupervised pre-training.
\noindent \textbf{2)} We propose PDC to provide a quantitative view to measure how the proposal ameliorates the model predictive preference.
\noindent \textbf{3)} We conduct extensive experiments to analyze LTR proposals' performance on ViT with our proposed PDC, which accurately indicates the model's predictive bias and is consistent with the visualization results.
\input{Figs/teaser.tex}
\section{The proposed approach}
\subsection{Long Tail Recognition}
\label{LTR}
Given a $C$-class labeled dataset containing $N$ training instances, $\mathbf{D}=\left\{\left(x_1, y_1\right),\left(x_2, y_2\right), \ldots,\left(x_N, y_N\right)\right\}$, where $y_i \in \mathcal{C} = \{1,...,C\}$ and the underlying distribution is $\mathbb{P}(\mathbf{x},\mathbf{y})$.
In this paper, we define a base classification model ${\mathcal{M}_\theta}$ parameterized by $\theta$. For each input image $x$, the output logits are ${\mathbf{z}_\theta(x)=\mathcal{M}(x|\theta) }=\{z_1,...,z_C\}$. The goal is to optimize the parameters $\theta$ to obtain the best estimate of $\mathbb{P}(\mathbf{x},\mathbf{y})$. Generally, one adopts the \textit{softmax} function to map the output ${\mathcal{M}(x|\theta)}$ to the conditional probability:
\begin{equation}
p\left(\mathbf{y} \mid \mathbf{x}; \theta\right)=\frac{e^{\mathcal{M}(\mathbf{x} \mid \theta)_{\mathbf{y}}}}{\sum_{i=1}^{C} e^{\mathcal{M}(\mathbf{x} \mid \theta)_{\mathbf{y}_i}}}
\end{equation}
We obtain the posterior estimate $\mathbb{P}(y|x):=p\left(\mathbf{y}|\mathbf{x}; \theta\right)$ by maximum likelihood estimation, represented by the model parameters $\theta$. In LTR, we train the model with long-tailed training data $\mathbb{P}_{s}(x,y)$ while evaluating it with uniform data $\mathbb{P}_{t}(x,y)$. The label prior $\mathbb{P}_s(y)$ differs across classes while remaining constant in the test dataset, i.e., $\mathbb{P}_t(y):=1/C$. For a tail class $i$, $\mathbb{P}_s(y_i) \ll \mathbb{P}_t(y_i)$.
According to Bayes' theorem, the posterior is proportional to the prior times the likelihood. Assuming the same likelihood under both distributions, i.e., $\mathbb{P}_s(x|y) = \mathbb{P}_t(x|y)$, the two posteriors are related by:
\begin{equation}
\mathbb{P}_t(y_i \mid x) \propto \mathbb{P}_s(y_i \mid x) \cdot \mathbb{P}_t(y_i) / \mathbb{P}_s(y_i)
\label{bayes}
\end{equation}
With Eq.\ref{bayes} and the balanced target distribution $\mathbb{P}_t(y):=1/C$, we have $\mathbb{P}_s(y_i|x) \propto \mathbb{P}_t(y_i|x) \cdot \mathbb{P}_s(y_i)$: the posterior the model fits on the training data is inflated by the training prior $\mathbb{P}_s(y_i)$. Therefore, models tend to \textit{predict a query image into head classes} to satisfy the training label distribution $\mathbb{P}_s(y_i)$, which is called \textbf{predictive bias}. Such a mismatch makes generalization in LTR extremely challenging, and traditional metrics, e.g., overall mean accuracy, fail to expose this biased estimation when models are evaluated on the balanced test set.
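To make this concrete, the following toy computation (purely illustrative, with made-up numbers) shows that when likelihoods are identical across classes, the posterior estimated from the source data ranks classes exactly by the long-tailed prior:
\begin{verbatim}
import numpy as np

# Toy illustration of predictive bias: with identical likelihoods,
# the source posterior ranks classes by the long-tailed prior P_s(y).
likelihood = np.array([0.25, 0.25, 0.25, 0.25])  # P(x|y), equal per class
prior_s = np.array([0.70, 0.20, 0.08, 0.02])     # long-tailed training prior
posterior_s = likelihood * prior_s
posterior_s /= posterior_s.sum()
print(posterior_s.argmax())  # 0: the head class wins for any such query
\end{verbatim}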
\subsection{Vision Transformers}
ViT reshapes an image $x\in \mathbb{R}^{H \times W \times C}$ into a sequence (of length $L=H\times W / P^2$) of flattened 2D patches $x_P \in\mathbb{R}^{L \times (P^2 \cdot C)}$, where $H \times W$ is the resolution of $x$, $C$ is the number of channels, and $P$ is the patch resolution. Although ViTs perform well on numerous visual tasks, we find that \textit{it is hard to train ViTs with long-tailed data, and the performance is unsatisfactory}. Recent work trains ViTs without label supervision using an encoder ($\mathcal{E}$)--decoder ($\mathcal{D}$) architecture and a random mask $\mathbf{M}$:
\begin{equation}
\hat{\mathbf{x}}=\mathcal{D}\left(\mathcal{E}(\mathbf{M} \odot \mathbf{x})\right)
\label{mae}
\end{equation}
\vspace{-10pt}
\input{Tabs/Cifar.tex}
We pinpoint that \textit{ViTs learn generalized feature extraction through Eq.\ref{mae}, on either long-tailed or balanced datasets}. This observation inspires us to adopt it as a strong baseline for evaluating ViTs.
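For intuition, a minimal sketch of the masking-and-reconstruction step in Eq.\ref{mae} is given below; the encoder and decoder are stand-ins, and MAE's mask tokens and position embeddings are omitted for brevity.
\begin{verbatim}
import torch

# Minimal sketch of the masked-autoencoding step: reconstruct the
# input from its unmasked patches (stand-in encoder/decoder).
def masked_autoencode(x_patches, encoder, decoder, mask_ratio=0.75):
    # x_patches: (batch, L, D) sequence of flattened patches
    B, L, D = x_patches.shape
    keep = int(L * (1 - mask_ratio))
    idx = torch.rand(B, L).argsort(dim=1)[:, :keep]  # random visible subset
    visible = torch.gather(
        x_patches, 1, idx.unsqueeze(-1).expand(-1, -1, D))
    latent = encoder(visible)      # encode visible patches only
    return decoder(latent)         # decoder predicts the (masked) input
\end{verbatim}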
\subsection{Predictive Distribution Calibration}
In LTR, recent works try to compensate for the mismatch between $\mathbb{P}_s(y)$ and $\mathbb{P}_t(y)$, as described in Section \ref{LTR}. However, they all adopt Top-1 accuracy to evaluate their proposals, which fails to show whether the mismatch is fixed. To fill this gap and measure it intuitively, we propose Predictive Distribution Calibration (PDC) to quantitatively analyze the model's predictive bias.
\input{Figs/mae_pretrain.tex}
\noindent \textbf{Step 1}: We view the per-class prediction counts as the predictive distribution $\hat{\mathbb{P}}_{t}(y)$. Given the balanced label distribution $\mathbb{P}_{t}(y)$, we can calculate the \textit{distance} between these two distributions. Measuring this \textit{distance} via the Kullback-Leibler (KL) divergence, we have:
\begin{equation}
D(\mathbb{P}_t, \hat{\mathbb{P}}_t) = \frac{1}{C} \sum_{y_i\in \mathcal{C}} \mathbb{P}_{t}(y_i) \cdot \left[\log \mathbb{P}_{t}(y_i) - \log \hat{\mathbb{P}}_{t}(y_i)\right] \!
\end{equation}
\noindent \textbf{Step 2}: Generally, the larger the gap between $\mathbb{P}_{s}(y)$ and $\mathbb{P}_{t}(y)$, the more difficult it is to overcome the model's predictive bias. To account for this, we take the training label distribution $\mathbb{P}_{s}(y)$ into consideration through $D(\mathbb{P}_t, \mathbb{P}_s)$:
\begin{equation}
\begin{aligned}
&PDC(\mathcal{M}_\theta, \mathbf{D}) = D(\mathbb{P}_t, \hat{\mathbb{P}}_t) / D(\mathbb{P}_t, \mathbb{P}_s) \\
&= \frac{\sum_{y_i\in \mathcal{C}} \mathbb{P}_{t}(y_i) \cdot \log \mathbb{P}_{t}(y_i) - \mathbb{P}_{t}(y_i) \cdot \log \hat{\mathbb{P}}_{t}(y_i)}{ \sum_{y_i\in \mathcal{C}}\mathbb{P}_{t}(y_i) \cdot \log \mathbb{P}_{t}(y_i) - \mathbb{P}_{t}(y_i) \cdot \log \mathbb{P}_{s}(y_i)}
\end{aligned}
\label{D-kl}
\end{equation}
\noindent \textbf{Step 3}: Note that $D(\mathbb{P}_t, \mathbb{P}_s)$ is zero when the target label distribution coincides with the training label distribution. Hence, we add a small $\varepsilon=10^{-6}$ to $D(\mathbb{P}_t, \mathbb{P}_s)$ for numerical stability.
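A minimal sketch of the resulting computation is given below (the $1/C$ factor from Step 1 cancels in the ratio of Eq.\ref{D-kl}; the function and variable names are illustrative):
\begin{verbatim}
import numpy as np

# Sketch of PDC: KL(P_t || P_hat_t) / (KL(P_t || P_s) + eps),
# computed from per-class prediction and training counts.
def pdc(pred_counts, train_counts, eps=1e-6):
    C = len(pred_counts)
    p_t = np.full(C, 1.0 / C)                       # balanced target prior
    p_hat = np.asarray(pred_counts, float) / np.sum(pred_counts)
    p_s = np.asarray(train_counts, float) / np.sum(train_counts)
    kl = lambda p, q: np.sum(p * (np.log(p) - np.log(q + 1e-12)))
    return kl(p_t, p_hat) / (kl(p_t, p_s) + eps)
\end{verbatim}
By construction, a perfectly balanced predictor yields $PDC \approx 0$, while a predictor whose output distribution mirrors the training prior yields $PDC \approx 1$.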
\subsection{Further Analysis}
Previous works evaluate model predictive bias in the following manners:
\noindent \textbf{Group Acc} divides $\mathcal{C}$ into several groups $\{\mathcal{G}_1, \mathcal{G}_2,..., \mathcal{G}_n\}$ according to the $\mathbb{P}_s(y)$, where $\forall i, \mathcal{G}_i \subseteq \mathcal{C}$. A widely adopted group type is \{\textit{Many}, \textit{Medium}, \textit{Few}\} and the accuracy of each group can be calculated by:
\begin{equation}
Acc(\mathcal{G}) = \frac{1}{N_\mathcal{G}} \sum_{(\mathbf{x}, y):\, y \in \mathcal{G}} \mathbb{I}\left(y = \operatorname{argmax}_{y_{i} \in \mathcal{C}} \mathcal{M}(\mathbf{x}|\theta)_{y_{i}}\right),
\label{group-acc}
\end{equation}
where $N_\mathcal{G}$ is the total number of instances in $\mathcal{G}$ and $\mathbb{I}\left(\cdot\right)$ is the indicator function. However, the weaknesses are obvious: 1) $Acc(\mathcal{G})$ heavily depends on $\mathbb{P}_s(y)$ and on the definition of the groups $\mathcal{G}$. 2) The \textit{Few} accuracy cannot avoid the accuracy trap (see Figure \ref{fig:teaser}).
\noindent \textbf{Confusion matrix} is used to visualize the classification results for each class. However, 1) it cannot quantitatively measure how much predictive bias a method alleviates, and 2) it becomes unintuitive as the class number $C$ grows. In comparison, our PDC is plug-and-play with negligible computational overhead. With a fixed model structure and dataset, it enables quantitative comparison of proposed methods.
\input{Tabs/ViT.tex}
|
{
"arxiv_id": "2302.14275",
"language": "en",
"timestamp": "2023-03-01T02:07:50",
"url": "https://arxiv.org/abs/2302.14275",
"yymm": "2302"
} |
\section{Score-based tests}
This section includes a brief introduction to score-based tests; for a more detailed introduction, see \citeA{wanmer14}.
In the usual maximum likelihood estimation framework, the goal is to find the
parameter vector $\hat{\bm \theta}$ (say, of length $q$) that maximizes the model's log likelihood
$\ell$:
\begin{equation}
\label{eq:theta}
\hat{\bm \theta}=\operatorname{argmax}\displaylimits_{\bm \theta} \ell(\bm \theta; \bm{y}_1, \ldots, \bm{y}_n),
\end{equation}
where $\bm{y}_1, \ldots, \bm{y}_n$ represents the observed data.
Maximization of the likelihood function
is equivalent to finding $\hat{\bm \theta}$ such that the \emph{score} vectors satisfy
the following condition:
\begin{equation}
\label{eq:scoresum}
\sum_{i=1}^{n}s(\hat{\bm \theta}; \bm y_i) = \bm{0},
\end{equation}
where a \emph{score} is the partial first derivative of the model's log
likelihood for case $i$:
\begin{equation}
\label{eq:score}
s(\bm \theta; \bm y_i) = \left(\frac{\partial \ell(\bm \theta;\bm y_i)}{
\partial \theta_1}, \ldots, \frac{\partial \ell(\bm \theta; \bm y_i)}{
\partial \theta_j} \right)^{T}.
\end{equation}
Score-based tests are used to study how model parameters fluctuate with unmodeled, auxiliary variables. To carry out a test, we organize the score vectors into a scaled, cumulative score process defined as:
\begin{equation}
\label{eq:cumscore}
{\bm B}(t; \hat {\bm \theta}) ~=~ n^{-1/2} \hat {\bm I}^{-1/2}
\sum_{i = 1}^{\lfloor n \cdot t \rfloor} {\bm s}(\hat {\bm \theta}; \bm{y}_{(i)}),
\end{equation}
where $t$ is in $[0,1]$; $\bm{y}_{(i)}$ represents the $i$th-smallest observation (ordered with respect to the unmodeled auxiliary variable);
and $\hat{\bm I}$ represents the information matrix, which has an analytical
expression for linear mixed models \cite<see>[Section 3.3]{wang182}.
The purpose of $\hat{\bm I}$ is to decorrelate the cumulative score matrix's columns. In other words, a case's score with respect to one parameter is typically correlated with its score with respect to other parameters. Premultiplication by $\hat {\bm I}^{-1/2}$ decorrelates these scores, which facilitates development of score-based tests.
Using this scaled cumulative score process, multiple test statistics can be computed.
We will focus on one, the {\em Cram\'{e}r von Mises} (CvM) statistic, because it is similar to the self-normalized statistic that we describe later. The CvM statistic averages the sum of squared values of the cumulative scores, across all points of the cumulative score process.
To facilitate comparison to the proposed statistic, the CvM statistic can be written as:
\begin{eqnarray}
\label{eq:transfer1}
\mathit{CvM} & = & n^{-1} \sum_{k = 1}^n {\bm B(k/n; \hat{\bm \theta})}^{T}
{\bm B (k/n; \hat{\bm \theta})} \\
\label{eq:transfer2}
& = & n^{-1} \sum_{k = 1}^n \left [ \left \{n^{-1/2} \hat {\bm I}^{-1/2}
\sum_{i = 1}^{k} {\bm s}(\hat {\bm \theta}; y_{(i)})\right \}^{T}
\left \{n^{-1/2} \hat {\bm I}^{-1/2}
\sum_{i = 1}^{k} {\bm s}(\hat {\bm \theta}; y_{(i)})\right \} \right ] \\
\label{eq:transfer3}
& = & n^{-2} \sum_{k = 1}^n \left [ \left \{
\sum_{i = 1}^{k} {\bm s}(\hat {\bm \theta}; y_{(i)}) \right \}^{T} \hat {\bm I}^{-1}
\left \{\sum_{i = 1}^{k} {\bm s}(\hat {\bm \theta}; y_{(i)})\right \} \right ].
\end{eqnarray}
We further define $\bm B^{\star}_{a, b} = \displaystyle \sum_{i = a}^{b} {\bm s}(\hat {\bm \theta}; y_{(i)})$, $a \leq b$, to represent cumulative scores that start at observation $a$ (i.e., the $a$th-smallest observation with respect to the auxiliary variable) and that end at observation $b$. Using this notation, we can rewrite Equation~\eqref{eq:transfer3} as
\begin{equation}
\label{eq:transfer4}
\mathit{CvM} = n^{-2} \sum_{k = 1}^n \bm{B}^{\star^T}_{1,k} \hat {\bm I}^{-1} \bm B^\star_{1,k}.
\end{equation}
This form of the CvM statistic will be helpful to keep in mind, as we consider the self-normalization statistic below.
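For reference, Equation~\eqref{eq:transfer4} can be computed directly from an $n \times q$ matrix of scores (rows ordered by the auxiliary variable) and the estimated information matrix; the sketch below is illustrative rather than optimized code.
\begin{verbatim}
import numpy as np

# Illustrative computation of the CvM statistic from an n x q score
# matrix (rows ordered by the auxiliary variable) and I_hat.
def cvm_stat(scores, info):
    n = scores.shape[0]
    B = np.cumsum(scores, axis=0)        # cumulative scores B*_{1,k}
    info_inv = np.linalg.inv(info)
    return sum(b @ info_inv @ b for b in B) / n**2
\end{verbatim}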
\section{Self-Normalization}
While the information matrix decorrelates column-wise dependence among scores (i.e., dependence between parameters), it does not address row-wise dependence (between cases). As mentioned earlier, this is a problem for mixed modeling, where cases within the same cluster are correlated. If we apply the traditional score-based test statistics in situations where row-wise dependence exists, the resulting test statistic will be incorrect \cite{per06}. \citeA{and91} proposed substitutes for $\hat{\bm I}$ in this situation, but these substitutes introduce an extra tuning parameter, which is difficult to choose in practice.
The method that we study here comes from a series of papers by Shao and Zhang \cite{shao10, zhang11}, who proposed a self-normalization approach for time series data. The key idea is to additionally decorrelate the row-wise dependence of the score matrix as a function of $k$, where $k \in \{1, \ldots, n-1\}$ indexes each possible change point.
Formally, the statistic is defined as:
\begin{equation}
\label{eq:dm2}
SN = \sup_{k= 1, \ldots, n-1} \bm{T}_n(k)
^{T} \bm V_{n}^{-1}(k)
\bm {T}_n(k),
\end{equation}
where $\bm{T}_n(k)$ and $\bm{V}_n(k)$ are defined below.
The $\bm T_{n}(k)$ vector is the difference between the cumulative score process at point $k$ and its expected value. This can be written as
\begin{equation}
\label{eq:t}
\bm{T}_n(k) = n^{-1/2}\left (\bm B^{\star}_{1, k} - \frac{k}{n} \bm B^{\star}_{1, n} \right).
\end{equation}
Because the sum of scores across all cases is by definition zero (i.e., $\bm B^{\star}_{1, n} = \bm{0}$), the second term in parentheses disappears in most score-based applications. The only exception may be situations where the scores involve an integral approximation, as in GLMMs \cite<e.g.,>{wangra22}. In these situations, the scores may not sum to exactly zero due to the approximation.
Next, $\bm V_n(k)$ involves the (co)variance in the cumulative scores, considered separately before and after point $k$. As an intermediate step, let $\bm C_{a,b}$ be a $q \times (b - a + 1)$ matrix containing the deviations of each $\bm B^\star$ from its expected value between points $a$ and $b$. Specifically, the $j$th column of $\bm C_{a,b}$ is defined as
\begin{equation*}
\bm B^\star_{a,(a+j-1)} - \frac{1}{(b - a + 1)} \bm B^\star_{a,b}
\end{equation*}
for $j = 1, \ldots, (b - a + 1)$. Using this intermediate matrix, we can write $\bm{V}_n(k)$ as
\begin{equation}
\label{eq:v}
\bm{V}_n(k) = n^{-2}\left [\bm C_{1,k} \bm C_{1,k}^T + \bm C_{k+1,n} \bm C_{k+1,n}^T \right ].
\end{equation}
Comparing the CvM from Equation~\eqref{eq:transfer4} to the self-normalization statistic from Equation~\eqref{eq:dm2}, we can observe two main differences. First, the self-normalization statistic involves the dynamic matrix $\bm{V}_n(k)$ that changes with $k$ and that can decorrelate (``self-normalize'') both row and column dependency in the cumulative score matrix. Second, the summing operation from the CvM statistic has changed into a maximum ($\sup$). It seems possible that we could also use a summing operator in the self-normalization procedure, but we do not consider it here. We return to this issue in the General Discussion.
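To make the computation concrete, the following is a naive sketch of the $SN$ statistic of Equation~\eqref{eq:dm2} for an $n \times q$ score matrix whose rows are ordered by the auxiliary variable. This is illustrative code, not an optimized implementation; in particular, boundary values of $k$ are trimmed so that $\bm V_n(k)$ stays invertible in this simple form.
\begin{verbatim}
import numpy as np

# Naive O(n^2) illustration of the SN statistic.
def sn_stat(scores):
    n, q = scores.shape
    B = np.cumsum(scores, axis=0)                  # B*_{1,k}

    def C_block(a, b):  # columns: B*_{a,j} minus its window average
        Bab = np.cumsum(scores[a - 1:b], axis=0)   # B*_{a,a}, ..., B*_{a,b}
        return (Bab - Bab[-1] / (b - a + 1)).T     # q x (b - a + 1)

    stat = -np.inf
    for k in range(q + 1, n - q):                  # trimmed change points
        T = (B[k - 1] - (k / n) * B[-1]) / np.sqrt(n)
        C1, C2 = C_block(1, k), C_block(k + 1, n)
        V = (C1 @ C1.T + C2 @ C2.T) / n**2
        stat = max(stat, T @ np.linalg.solve(V, T))
    return stat
\end{verbatim}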
The $SN$ statistic converges to a function of a Brownian bridge
\cite<for details, see>{shao10,zhang11}. We realize that some readers may be unfamiliar with a Brownian bridge, and we refer those readers to \citeA{wanmer14} for further discussion and additional references. For our purposes here, it is sufficient to know that the Brownian bridge is a well-studied statistical process that is used in other score-based tests and that allows us to derive critical values and p-values under the null hypothesis.
For the $SN$ statistic defined above, we can compute critical values by repeatedly simulating the Brownian bridge and computing the $SN$ statistic. This is similar to how some other score-based test statistics are obtained \cite<e.g.,>{merfanzei}.
In the following section, we will demonstrate problems with applying traditional score-based tests to correlated cases, and we will study the performance of the self-normalized test that we just described.
\section{Simulation}
In two-level, linear mixed models, observations in the same cluster are correlated with one another. This causes problems for traditional score-based tests but not necessarily for self-normalized tests. We now illustrate these issues via simulation.
\subsection{Method}
The simulation setup is similar to \citeA{wang21}, with the only difference being that level 1 scores are used in place of aggregated level 2 scores (the level 2 scores are independent of each other). Our model is similar to that used for the \emph{sleepstudy} data \cite{bel03} that are included with \pkg{lme4} \cite{lme4}. These data come from a longitudinal study of the association between sleep deprivation and reaction time, with each
subject contributing $10$ observations. The dependent variable is reaction time, and the predictor is
\emph{Days} of sleep deprivation. The fixed effects are therefore an intercept, $\beta_0$, and a slope for \emph{Days}, $\beta_1$.
We also have covarying intercept and slope random effects, leading to two random effect variances ($\sigma_0^2$, $\sigma_1^2$) and one covariance ($\sigma_{01}$). The variance not captured by
the random effects is modeled by the residual
variance $\sigma_r^2$. The model can be expressed as:
\begin{eqnarray}
\label{eq:prob1}
\text{Reaction Time}_j \mid\ b_{0j}, b_{1j} &\sim& N(\beta_{0} + b_{0j} + (\beta_{1} + b_{1j}) Days , \bm R_j)\\
\label{eq:prob2}
\left( \begin{array}{c} b_{0j} \\ b_{1j} \end{array} \right ) &\sim& N \left ( \bm 0, \left[ {\begin{array}{cc}
\sigma_0^2 & \sigma_{01}\\
\sigma_{01} & \sigma_{1}^2\\
\end{array} } \right] \right) \\
\label{eq:prob3}
\bm R_j &=& \sigma_r^2\bm I_{10},
\end{eqnarray}
where $j$ indexes subjects, $Days$ is a vector containing the values 0 to 9, and $\bm I_{10}$ denotes the $10 \times 10$ identity matrix.
For simplicity,
we focus on instability in fixed effect parameters with respect to an unmodeled continuous variable, which we loosely call {\em cognitive ability}. In one condition, the fixed intercept differs for individuals below the median level of cognitive ability, as compared to individuals above the median level of cognitive ability. In a second condition, the fixed slope differs instead of the fixed intercept.
The magnitude of the difference between parameters is governed by a parameter $d$. When $d$ is 0, the corresponding parameter does not change, which serves as the baseline; when $d$ is greater than 0, the parameter values differ across groups, with larger $d$ indicating more severe parameter change. In this simulation, $d \in \{0, 1, 2, 3, 4\}$, which roughly indicates the number of standard errors by which parameters differ across groups.
The data generating model's true parameter values were set to be the same as the estimates from
the \emph{sleepstudy} data.
Model estimation proceeded via marginal maximum likelihood, where the form of the data generating model matched that of the fitted model (except for parameter differences between groups). Parameter instability was tested individually in the fixed intercept parameter and in the fixed slope parameter.
For both tests, we examine power and Type I error across three sample
sizes (24, 48, or 96 individuals, contributing a total of $n = 240, 480,$ or $960$ observations), five magnitudes of parameter change, and two parameter conditions (change in either the fixed intercept or fixed slope).
For each combination of conditions, we generate
$1,000$ data sets.
For traditional score-based tests,
we examine three continuous statistics. Specifically, in addition to the $\text{CvM}$ statistic mentioned above,
we also compute the double maximum statistic ($\text{DM}$) and maximum Lagrange multiplier test ($\text{maxLM}$).
These two statistics differ from $\text{CvM}$ in their aggregation strategies for the cumulative score matrix $\bm B(t; \bm \theta)$; for more details, see \citeA{merzei13}.
The main focus of this simulation is to compare all of these statistics to the self-normalized
statistic proposed in this chapter.
\subsection{Results}
The full simulation results for $\beta_{0}$ and $\beta_{1}$ are demonstrated
in Figures~\ref{fig:sim11res} to~\ref{fig:sim14res}. These figures are arranged similarly, with the first two showing results for the traditional statistics (which assume independence of scores) and the second two showing results for the self-normalized statistic. In all four figures, the panel titles indicate the sample size and the tested parameter, and the y-axis indicates power
(using $\alpha = 0.05$). Figures~\ref{fig:sim11res} and~\ref{fig:sim13res} show results when $\beta_0$ exhibits instability, and Figures~\ref{fig:sim12res} and~\ref{fig:sim14res} show results when $\beta_1$ exhibits instability.
Looking at the first two figures, it is clear that the traditional statistics exhibit virtually no power in this scenario. The scores' lack of independence leads to this problem.
In contrast, the self-normalized results in Figures~\ref{fig:sim13res} and~\ref{fig:sim14res} exhibit good power for the parameter that is truly changing. Power is monotonic in $d$ and increases with sample size. When $d \geq 3$, even the smallest sample size of $n = 240$ demonstrates high power (greater than 95\%). Additionally, power is slightly higher for parameter $\beta_0$ than for $\beta_1$. This phenomenon was also observed in \citeA{wang182}.
Tables~\ref{tab:sim1} and~\ref{tab:sim2} show the exact numbers underlying Figures~\ref{fig:sim13res} and~\ref{fig:sim14res}, which is especially useful for examining Type I error rates (at $d=0$). Specifically, Type I error is equal to power when $d=0$ (when parameters do not change) or when tests are applied to parameters that do not change. We can see the Type I error rates are generally below or approaching 5\%. These results provide some evidence that self-normalized, score-based tests can be applied to models whose scores exhibit dependence.
\begin{figure}[H]
\caption{\scriptsize{Simulated power curves for $\text{CvM}, \text{DM}, \text{maxLM}$
across parameter change of
0--4 asymptotic standard error.
The changing parameter is $\beta_{0}$. Panel labels denote
the parameter being tested along with sample size.}}
\label{fig:sim11res}
\includegraphics{chapter-sim11res}
\end{figure}
\begin{figure}[H]
\caption{\scriptsize{Simulated power curves for $\text{CvM}, \text{DM}, \text{maxLM}$ across parameter change of
0--4 asymptotic standard error.
The changing parameter is $\beta_{1}$. Panel labels denote
the parameter being tested along with sample size.}}
\label{fig:sim12res}
\includegraphics{chapter-sim12res}
\end{figure}
\begin{figure}[H]
\caption{\scriptsize{Simulated power curves for
$\text{SN}$, across parameter change of
0--4 standard errors.
The changing parameter is $\beta_{0}$. Panel labels denote
the parameter being tested along with sample size.}}
\label{fig:sim13res}
\includegraphics{chapter-sim13res}
\end{figure}
\begin{figure}[H]
\caption{\scriptsize{Simulated power curves for
$\text{SN}$, across parameter change of
0--4 standard errors.
The changing parameter is $\beta_{1}$. Panel labels denote
the parameter being tested along with sample size.}}
\label{fig:sim14res}
\includegraphics{chapter-sim14res}
\end{figure}
\begin{table*}
\caption{Simulated power for the self-normalized score-based test across three sample sizes $n$, five magnitudes of parameter change (0 to 4 asymptotic standard errors of the truly changing parameter $\beta_0$), and six tested parameters. See Figure~\ref{fig:sim13res} for a visualization.}
\label{tab:sim1}
\begin{center}
\begin{tabular}{lccrrrrr}
\hline
Observation & Tested Parameter & Statistic & 0 & 1 & 2 & 3 & 4 \\ \hline
n=240 & $\beta_{0}$ & SN & 2.2 & 30.3 & 84.8 & 96.9 & 99.2 \\
& $\beta_{1}$ & SN & 3.3 & 2.9 & 1.6 & 0.4 & 0.0 \\
& $\sigma_{0}^2$ & SN & 4.4 & 2.9 & 1.2 & 1.1 & 0.4 \\
& $\sigma_{01}$ & SN & 5.0 & 2.8 & 1.2 & 1.0 & 0.4 \\
& $\sigma_{1}^2$ & SN & 3.3 & 3.4 & 3.7 & 2.2 & 1.6 \\
& $\sigma_{r}^2$ & SN & 4.4 & 3.5 & 3.4 & 2.8 & 1.7 \\
n=480 & $\beta_{0}$ & SN & 3.3 & 38.1 & 90.7 & 99.8 & 100.0 \\
& $\beta_{1}$ & SN & 4.1 & 4.7 & 1.9 & 2.3 & 1.3 \\
& $\sigma_{0}^2$ & SN & 3.4 & 3.5 & 1.8 & 0.8 & 0.2 \\
& $\sigma_{01}$ & SN & 6.3 & 3.6 & 2.5 & 1.2 & 0.7 \\
& $\sigma_{1}^2$ & SN & 4.8 & 5.0 & 4.7 & 4.8 & 2.6 \\
& $\sigma_{r}^2$ & SN & 4.5 & 5.2 & 3.5 & 3.6 & 3.1 \\
n=960 & $\beta_{0}$ & SN & 4.4 & 42.4 & 92.4 & 99.9 & 100.0 \\
& $\beta_{1}$ & SN & 4.7 & 4.7 & 5.8 & 7.9 & 4.8 \\
& $\sigma_{0}^2$ & SN & 4.4 & 4.5 & 2.5 & 1.3 & 0.7 \\
& $\sigma_{01}$ & SN & 5.5 & 4.5 & 2.1 & 2.4 & 1.7 \\
& $\sigma_{1}^2$ & SN & 5.2 & 4.7 & 4.9 & 5.5 & 4.6 \\
& $\sigma_{r}^2$ & SN & 5.0 & 5.0 & 4.6 & 4.7 & 3.8 \\ \hline\end{tabular}
\end{center}
\end{table*}
\begin{table*}
\caption{Simulated power for the self-normalized score-based test across three sample sizes $n$, five magnitudes of parameter change (0 to 4 asymptotic standard errors of the truly changing parameter $\beta_1$), and six tested parameters. See Figure~\ref{fig:sim14res} for a visualization.}
\label{tab:sim2}
\begin{center}
\begin{tabular}{lccrrrrr}
\hline
Observation & Tested Parameter & Statistic & 0 & 1 & 2 & 3 & 4 \\ \hline
n=240 & $\beta_{0}$ & SN & 1.9 & 1.4 & 0.7 & 0.5 & 0.2 \\
& $\beta_{1}$ & SN & 3.2 & 28.7 & 77.4 & 95.9 & 99.3 \\
& $\sigma_{0}^2$ & SN & 3.9 & 4.0 & 3.8 & 2.0 & 1.1 \\
& $\sigma_{01}$ & SN & 5.0 & 2.9 & 1.5 & 0.8 & 0.4 \\
& $\sigma_{1}^2$ & SN & 5.4 & 3.0 & 0.9 & 0.7 & 0.3 \\
& $\sigma_{r}^2$ & SN & 3.6 & 3.3 & 3.2 & 2.7 & 2.0 \\
n=480 & $\beta_{0}$ & SN & 4.0 & 3.2 & 4.4 & 2.0 & 0.9 \\
& $\beta_{1}$ & SN & 3.3 & 34.1 & 86.1 & 99.0 & 99.9 \\
& $\sigma_{0}^2$ & SN & 4.5 & 4.6 & 2.9 & 3.4 & 3.7 \\
& $\sigma_{01}$ & SN & 5.1 & 4.8 & 2.6 & 2.3 & 0.7 \\
& $\sigma_{1}^2$ & SN & 4.7 & 3.0 & 2.3 & 1.5 & 0.9 \\
& $\sigma_{r}^2$ & SN & 4.8 & 4.8 & 3.6 & 3.4 & 4.2 \\
n=960 & $\beta_{0}$ & SN & 4.0 & 3.9 & 5.9 & 5.8 & 6.2 \\
& $\beta_{1}$ & SN & 5.0 & 36.5 & 89.0 & 99.2 & 99.9 \\
& $\sigma_{0}^2$ & SN & 3.9 & 5.3 & 4.3 & 3.5 & 4.7 \\
& $\sigma_{01}$ & SN & 4.6 & 4.4 & 4.3 & 2.5 & 2.3 \\
& $\sigma_{1}^2$ & SN & 4.2 & 4.5 & 3.1 & 2.2 & 1.4 \\
& $\sigma_{r}^2$ & SN & 4.3 & 5.0 & 4.7 & 5.9 & 4.9 \\ \hline\end{tabular}
\end{center}
\end{table*}
\section{Application}
\citeA{wang21} provided an application of score-based tests to linear mixed models, studying heterogeneity in the relationship between socioeconomic status and math test scores. They showed that the tests could detect heterogeneity in both fixed effects and random effect variances, when the heterogeneity occurred with respect to a level-2 auxiliary variable. Those tests were restricted to level-2 auxiliary variables, though, due to the dependence issue described throughout this chapter. In this section, we use self-normalization to conduct similar tests with respect to a level-1 auxiliary variable.
\subsection{Method}
We use the {\em bdf} dataset \cite{sni11} from R package \pkg{mlmRev} \cite{mlmRev}. The dataset contains language test scores of 2,287 students from 131 schools. Variables in the dataset include students' verbal IQs, along with language test scores at two timepoints (``pre'' and ``post'').
The aim of the current analysis is to determine how students' language scores at time 2
(denoted as \emph{langPOST} in the dataset) are associated with
their language scores at time 1, along with their verbal IQs. It
is plausible that the relationship between time 1 and time 2 language scores differs for students with different verbal IQs.
As illustrated in previous research \cite{wang21}, such an interaction can appear insignificant due to heterogeneity in random effect variances. Furthermore, because both covariates are at level 1, the scores are dependent.
We use the self-normalized score test to study heterogeneity in mixed model parameters with respect to verbal IQ. The model specifications are shown via code in Figures~\ref{modelfit} and~\ref{modelest}.
\begin{figure}
\caption{Code for including an interaction directly in the linear mixed model.}
\label{modelfit}
\begin{Schunk}
\begin{Sinput}
> library("mlmRev")
> library("lmerTest")
> data("bdf")
> m1 <- lmer(langPOST ~ IQ.verb * langPRET + (1 | schoolNR), data = bdf,
+ REML = FALSE)
\end{Sinput}
\end{Schunk}
\end{figure}
\subsection{Results}
The most common approach to testing the interaction between verbal IQ and time 1 scores involves including the interaction directly in the mixed model, as shown in Figure~\ref{modelfit}.
In estimating this model, we observe that the coefficient for the interaction term is not significant ($t(2264) = -1.182, p = 0.24$). But the significance test for the interaction
might be impacted by variance/covariance heterogeneity in random effects \cite{wang21}. Thus,
we use the self-normalized score-based tests to distinguish
between the level-1 interaction and variance heterogeneity.
To conduct the score-based tests, we first fit a model with time 1 scores as the only covariate,
as shown in Figure~\ref{modelest}. The score test is then carried out with verbal IQ as the auxiliary variable. Because verbal IQ is a level 1 covariate,
the traditional score-based tests are not appropriate due to dependence of the scores.
\begin{figure}
\caption{Code for fitting a model with only one fixed effect.}
\label{modelest}
\begin{Schunk}
\begin{Sinput}
> m2 <- lmer(langPOST ~ langPRET + (1 | schoolNR), data = bdf,
+ REML = FALSE)
\end{Sinput}
\end{Schunk}
\end{figure}
We compute a self-normalized score test statistic separately for the fixed effect of time 1 test scores and for the residual variance. The empirical statistics' fluctuation
processes are demonstrated in Figure~\ref{fig:fluctuate}, where the left panel is for the fixed effect and the right panel is for the residual variance. The black line shows the value of the $SN$ statistic at different values of verbal IQ, with the maximum (the highest point) being the official value of the test statistic. The red line indicates
the critical value. Because the black line crosses the red line in both cases, we conclude that both the fixed effect of time 1 scores and the residual variance fluctuate with verbal IQ.
Because both black lines of Figure~\ref{fig:fluctuate} peak around verbal IQs of 13, we can conclude that students with verbal IQs below 13 have different parameter values than students with verbal IQs above 13. This leads us to estimate two separate models, one for students with IQs less than or equal to 13, and one for students with IQs greater than 13. In estimating these separate models, we find that the fixed effect of time 1 scores is estimated at $0.89$ and $0.69$ for students with verbal IQs lower than 13 and higher than 13, respectively. The residual variance for students with verbal IQs below 13 is estimated at 33, while the residual variance for students with verbal IQs above 13 is estimated at 17.
These numerical results indicate that, while the association between time 1 and time 2 scores is somewhat lower for students with high verbal IQ, those students have lower residual variance and generally obtain high test scores. On the other hand, while time 1 scores are more helpful for predicting time 2 scores of students with low verbal IQ, those students also exhibit more residual variability in their scores. These types of results might help practitioners adopt different strategies for different types of students, and the method could be marketed with a phrase like ``precision education.''
\begin{figure}
\caption{Empirical statistics process based on self-normalization approach.}
\label{fig:fluctuate}
\includegraphics{chapter-016}
\end{figure}
\section{General Discussion}
In this chapter, we demonstrated that the self-normalization approach can be used to carry out score-based tests in situations where scores exhibit dependence. This allows researchers to apply score-based tests to mixed models and other models where observations are not independent. It extends related work on score-based tests for mixed models \cite{fok15,wang21,wangra22}, allowing us to test for heterogeneity with respect to level-1 auxiliary variables.
While this chapter provides evidence that self-normalization is promising for score-based tests, it leaves open many issues for future work. We briefly describe some issues in the sections below.
\subsection{Extensions of Self-Normalization}
In this chapter, we used self-normalization to test one model parameter at a time. We also used the statistic that was previously proposed by \citeA{zhang11}. In future work, it would be worthwhile to test multiple parameters at a time and to use self-normalization in other statistics.
If we test multiple parameters at a time, it is unclear whether we should first decorrelate the cumulative scores via $\hat{\bm I}$ prior to self-normalization. This is because the self-normalization procedure can potentially address both row-wise (cases) and column-wise (parameters) score dependence. However, because the information matrix is often obtained analytically, we may see improved power from a self-normalization test that first decorrelates via $\hat{\bm I}$.
Additionally, while we focused on the $SN$ statistic from Equation~\eqref{eq:dm2}, it appears that $\bm{V}_n(k)$ could also replace
$\hat{\bm I}$ in the CvM statistic of Equation~\eqref{eq:transfer4}. The $\bm{V}_n(k)$ matrix may similarly replace $\hat{\bm I}$ in other score-based statistics, leading to a family of self-normalized, score-based tests that is similar to the traditional family of score-based tests. The family of self-normalized tests could be applied to a wider variety of statistical models.
\subsection{Weighted Statistics}
\citeA{shao10} discuss how we can include weights in the $SN$ statistic, which would be useful for carrying out score-based tests with respect to ordinal variables \cite{merfanzei}. Specifically, let $w(t)$, $t \in [0,1]$, be the weight function; the general form of the weighted $SN$ statistic is then
\begin{equation}
SN_w = \sup_{1 \leq k \leq n-1} w(k/n)\ \bm{T}_n(k)^{T} \bm V_{n}^{-1}(k)
\bm {T}_n(k).
\end{equation}
\citeA{merfanzei} described weights that allow for score-based tests with respect to ordinal auxiliary variables, which could be inserted into the above equation as
\begin{equation}
SN_{ord} = \sup_{k \in \{k_1, \ldots, k_{m-1}\}}\left \{\frac{k}{n}\left (1-
\frac{k}{n}\right )\right\}^{-1}
\bm{T}_n(k)^{T} \bm V_{n}^{-1}(k) \bm {T}_n(k),
\end{equation}
where $m$ represents the number of ordinal levels in the auxiliary variable. This means that the self-normalized statistics could flexibly handle many aspects of traditional score-based statistics.
\subsection{Computation}
Assuming that the ideas in the previous two subsections work as expected, we imagine an implementation similar to the functionality of the \pkg{strucchange} package \cite{strucchange}. This would provide users with score-based functionality for a wider variety of mixed modeling situations. With this in mind, our code is currently far from optimal and could be improved in a variety of manners. Here, we consider recursive computations of the self-normalized test statistic.
Recursive computations of the $\bm{V}_n(k)$ matrix from Equation~\eqref{eq:v} may be helpful, because that matrix must be computed and inverted for each value of $k = 1,\ldots,(n-1)$. The inversion is not a major problem if we test one parameter at a time (as we did in this chapter), but it could become computationally slow if we test many parameters at a time.
To speed this up, it appears possible to write $\bm{V}_n(k)$ as a function of $\bm{V}_n(k-1)$ using identities from \citeA<>[also see the helpful Wikipedia entry on Welford's online algorithm]{wel62}. That result might then be plugged into the Sherman-Morrison formula, allowing us to compute $\bm{V}_n^{-1}(k)$ from $\bm{V}_n^{-1}(k-1)$. This would let us invert the $q \times q$ matrix for a single value of $k$ and obtain subsequent inverses through simple matrix multiplications. Of course, the full derivations remain to be worked out.
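As a generic illustration of the rank-one identity we have in mind (not the full $\bm{V}_n(k)$ recursion, which remains to be derived), the Sherman-Morrison update can be coded as:
\begin{verbatim}
import numpy as np

# Sherman-Morrison: given A_inv = inverse(A), the inverse of
# (A + u v^T) costs O(q^2) instead of a fresh O(q^3) inversion.
def sherman_morrison(A_inv, u, v):
    Au = A_inv @ u
    vA = v @ A_inv
    return A_inv - np.outer(Au, vA) / (1.0 + v @ Au)
\end{verbatim}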
\subsection{Summary}
Given the flexibility of the self-normalization approach and the existing applications of score-based tests, the combination of these two methods appears fruitful. We expect that further development of the ideas presented here can provide enhanced score-based tools to researchers in the social sciences and beyond.
\section*{Computational Details}
All results were obtained using the \proglang{R}~system for
statistical computing \cite{r22},
version~4.0.3, especially including packages \pkg{lme4}~1.1-26 \cite{lme4} for model estimation, \pkg{merDeriv}~0.2-4 \cite{wang182} for score computations, and \pkg{strucchange}~1.5-2 \cite{strucchange} for score-based tests. Code for reproducing our results is available at \url{http://semtools.r-forge.r-project.org/}.
\proglang{R}~and the aforementioned packages are
freely available under the General Public License~2 from the
Comprehensive \proglang{R} Archive Network
at \url{https://CRAN.R-project.org/}.
|
{
"arxiv_id": "2302.14286",
"language": "en",
"timestamp": "2023-03-01T02:08:15",
"url": "https://arxiv.org/abs/2302.14286",
"yymm": "2302"
} | \section{Introduction}
Recently, pre-trained language models (PLMs) have become essential infrastructure for a series of downstream natural language processing (NLP) tasks~\cite{Devlin2019BERT, Liu2019RoBERTa, Yang2019XLNet}, bringing substantial improvements via a two-stage training strategy of \emph{pre-training} and \emph{fine-tuning}.
Benefiting from this strategy, a branch of PLM methods arises to improve the models' effectiveness, promoting NLP's development in both academia and industry \cite{Liu2023Pre, Hu2022A}.
Yet, because many existing approaches follow different patterns and code architectures, it is not easy for researchers to obtain high-performing models or to build upon them.
To fill this gap, this paper presents {\model}, a unified and comprehensive open-source library that allows researchers to develop and evaluate NLP models more efficiently and effectively.
To reach this goal, we utilize HuggingFace Transformers~\footnote{\url{https://huggingface.co/}.} as the prevalent backend, which provides abundant backbones of PLMs at different scales.
For training, we integrate a well-designed tracking toolkit \emph{MLFlow}~\footnote{\url{https://www.mlflow.org/}.} into the backend, which makes it convenient to observe experimental progress and records.
{\model} consists of some well-designed components, such as \emph{Models}, \emph{Processors}, and \emph{Applications}.
Concretely,
1) for \emph{Models}, we provide some popular PLMs, including BERT~\cite{Devlin2019BERT}, RoBERTa~\cite{Liu2019RoBERTa}, DeBERTa~\cite{He2021Deberta}, GPT-2~\cite{radford2019language} and T5~\cite{Raffel2020Exploring}, etc.
Based on these PLMs, we develop task-specific modules for pre-training (e.g., masked language modeling (MLM), causal language modeling (CLM)) and fine-tuning (e.g., sequence classification and matching, span extraction, text generation).
We also provide some prompt-based fine-tuning techniques which enable parameter-efficient tuning for PLMs, including PET~\cite{Schick2021Exploiting}, P-tuning~\cite{Liu2021GPT}, Prefix-tuning~\cite{Li2021Prefix}, Adapter-tuning~\cite{Houlsby2019Parameter}.
2) In \emph{Processors}, we develop relevant data processing tools~\footnote{The \emph{Processor} is related to the task format. For example, we tailor some benchmark datasets, such as Chinese CLUE~\cite{Xu2020CLUE}, GLUE~\cite{Wang2019GLUE}, etc.
}
for some commonly used benchmark datasets and business-specific corpora.
3) In \emph{Applications}, we present core capacities to support upper-level applications.
Specifically, our proposed KP-PLM~\cite{Wang2022Knowledge} enables plug-and-play knowledge injection in model pre-training and fine-tuning via converting structure knowledge into unified language prompts.
We also develop HugIE, a universal information extraction toolkit through instruction-tuning with extractive modeling (e.g., global pointer) \cite{Su2022Su}.
{\model} also integrates some novel algorithms and applications, such as uncertainty-aware self-training \cite{Mukherjee2020Uncertainty, Wang2023Uncertainty}, code understanding and generation~\cite{feng2020codebert, Wang2021CodeT5}.
Overall, {\model} has the following features.
\begin{itemize}
\item {\model} offers a range of pre-built components and modules (i.e., \emph{Models}, \emph{Processors}, \emph{Applications}) that can be used to speed up the development process and simplify the implementation of complex NLP models and tasks.
\item {\model} can also be easily integrated into existing workflows and customized to meet the specific needs of individual researchers or projects, ensuring the framework's scalability and flexibility.
\item {\model} is equipped with some novel core capacities, such as knowledge-enhanced pre-training, prompt-based fine-tuning, instruction and in-context learning, uncertainty-aware self-training, and parameter-efficient learning. We thus develop some featured products and solutions for real-world application scenarios, e.g., KP-PLM and HugIE.
\item HugNLP is based on PyTorch and HuggingFace, two widely used tools and platforms in the NLP community, allowing researchers to leverage their strengths and apply it to different academic and industry scenarios~\cite{Qiu2021EasyTransfer,Wang2022EasyNLP}.
\end{itemize}
\section{Background}
\subsection{Pre-trained Language Models}
The goal of the PLM is to learn semantic representations over unsupervised corpora via well-designed self-supervised learning tasks in the pre-training stage.
Notable PLMs can be divided into three main types, including encoder-only~\cite{Devlin2019BERT, Liu2019RoBERTa, He2021Deberta, Yang2019XLNet, Lan2020ALBERT}, decoder-only~\cite{radford2018improving, Brown2020Language, Zhang2022OPT} and encoder-decoder~\cite{Lewis2020BART, Raffel2020Exploring}.
However, these PLMs may lack background knowledge when applied to some task-specific scenarios.
To solve this problem, a branch of knowledge-enhanced PLMs~\cite{Zhang2019ERNIE, Wang2021KAdapter, Pan2022Knowledge} has been proposed to capture rich factual knowledge from external knowledge bases.
In addition, some recent large-scale PLMs (e.g., GPT-3~\cite{Brown2020Language}) enable few/zero-shot in-context learning with language prompts or instructions. Thus, we can leverage cross-task learning to unify semantic knowledge from different NLP tasks.
\subsection{Fine-tuning for PLMs}
A large number of applications in real scenarios focus on how to fine-tune the PLM to transfer the prior knowledge derived from the general domain to downstream task-specific domains~\cite{Xu2020CLUE, Wang2019GLUE}.
We integrate some task-oriented fine-tuning methods to allow users to develop and evaluate PLMs on different NLP tasks.
We also implement some popular tuning algorithms to enable tuning on low-resource scenarios, such as prompt-tuning~\cite{Liu2021GPT}, in-context learning~\cite{Brown2020Language}, etc.
\begin{figure*}[th!]
\centering
\includegraphics[width=\linewidth]{overview}
\caption{An overview of the {\model} library.}\label{fig:overview}
\end{figure*}
\section{{\model}}
\subsection{Overview}
{\model} is an open-source library with a hierarchical structure, as shown in Figure~\ref{fig:overview}. The backend is the prevalent HuggingFace Transformers platform, which provides multiple transformer-based models and task trainers. In other words, {\model} can be seen as a customized NLP platform for efficient training and evaluation.
In addition, {\model} integrates \emph{MLFlow}, a tracking callback toolkit for model training and experiment result analysis. Users can simply add one configuration parameter \texttt{tracking\_uri} in the training script and observe the tracking records after running the \emph{MLFlow} server.
{\model} consists of three key components, including \emph{Models}, \emph{Processors}, and \emph{Applications}. Users can directly select the pre-built settings for some common tasks, or develop special user-defined training solutions in real-world application scenarios. We will provide a detailed description in the following sections.
\subsection{Library Architecture}
\paragraph{Models.}
In \emph{Models}, we provide some popular transformer-based models as backbones, such as BERT, RoBERTa, GPT-2, etc. We also release our pre-built KP-PLM, a novel knowledge-enhanced pre-training model which leverages \emph{knowledge prompting}~\cite{Wang2022Knowledge} paradigm to inject factual knowledge and can be easily used for arbitrary PLMs.
Apart from basic PLMs, we also implement some task-specific models, involving sequence classification, matching, labeling, span extraction, multi-choice, and text generation.
Particularly, we develop standard fine-tuning (based on CLS Head~\footnote{For standard fine-tuning, we need to add a classification head (CLS head) on the PLM and obtain the probability distribution of each class. The parameters of the CLS head are randomly initialized.}) and prompt-tuning models~\footnote{Different from fine-tuning, prompt-tuning can reuse the pre-training objective (e.g., MLM, CLM) to perform classifying on the masked token. It requires a task-orient template (e.g., ``It was [MASK].'') and the label word mapping (e.g., ``great'' maps to ``positive'' class in sentiment analysis task.)} that enable PLM tuning on classification tasks.
For few-shot learning settings, {\model} provides a prototypical network~\cite{Snell2017Prototypical} in both few-shot text classification and named entity recognition (NER).
\begin{figure}
\begin{minipage}{0.48\textwidth}
\begin{lstlisting}[language=python, caption=A model case of parameter freezing., label=case1, frame=shadowbox]
import torch
from transformers import BertModel, BertPreTrainedModel
from tools.model_utils.parameter_freeze import ParameterFreeze

freezer = ParameterFreeze()

class BertForSequenceClassification(BertPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.config = config
        self.bert = BertModel(config)
        # freeze the backbone for parameter-efficient training
        if self.config.use_freezing:
            self.bert = freezer.freeze_lm(self.bert)
        self.classifier = torch.nn.Linear(
            config.hidden_size, config.num_labels)
        self.init_weights()
\end{lstlisting}
\end{minipage}
\end{figure}
In addition, we also incorporate some \textit{plug-and-play utils} in {\model}.
1) \emph{Parameter Freezing}. To perform parameter-efficient learning~\cite{Mao2022UniPELT}, which freezes some parameters in PLMs to improve training efficiency, we can set the configuration flag \texttt{use\_freezing} to freeze the backbone. A use case is shown in Code~\ref{case1}.
2) \emph{Uncertainty Estimation} aims to calculate model certainty in semi-supervised learning~\cite{Mukherjee2020Uncertainty}.
3) We also design \emph{Prediction Calibration}, which can further improve accuracy by calibrating the output distribution and alleviating the semantic bias problem~\cite{Zhao2021Calibrate}.
\paragraph{Processors.}
{\model} loads datasets and processes task examples in a pipeline that covers sentence tokenization, sampling, and tensor generation.
Specifically, users can obtain data through \texttt{load\_dataset}, which either downloads it from the Internet or loads it from local disk.
For different tasks, users should define a task-specific data collator, which aims to transform the original examples into model input tensor features.
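As a rough illustration (the class and field names below are ours, not {\model}'s actual API), such a collator might look like:
\begin{lstlisting}[language=python, frame=shadowbox]
import torch

# Illustrative task-specific collator: raw examples -> padded
# input tensors plus a label tensor (names are hypothetical).
class ClassificationCollator:
    def __init__(self, tokenizer, max_length=128):
        self.tokenizer = tokenizer
        self.max_length = max_length

    def __call__(self, examples):
        batch = self.tokenizer(
            [ex["text"] for ex in examples],
            padding=True, truncation=True,
            max_length=self.max_length, return_tensors="pt")
        batch["labels"] = torch.tensor(
            [ex["label"] for ex in examples])
        return batch
\end{lstlisting}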
\paragraph{Applications.}
It provides rich modules for users to build real-world applications and products by selecting among an array of settings from \emph{Models} and \emph{Processors}.
More details are shown in Section~\ref{sec:application}.
\subsection{Core Capacities}
To further improve the effectiveness of {\model}, we design multiple core capacities in the following.
\paragraph{Knowledge-enhanced Pre-training.}
Conventional pre-training methods lack factual knowledge~\cite{Zhang2022DKPLM, Pan2022Knowledge}.
To deal with this issue,
we present KP-PLM~\cite{Wang2022Knowledge} with a novel knowledge prompting paradigm for knowledge-enhanced pre-training.
Specifically, we construct a knowledge sub-graph for each input text by recognizing entities and aligning with the knowledge base (e.g., Wikidata5M~\footnote{\url{https://deepgraphlearning.github.io/project/wikidata5m}.}) and decompose this sub-graph into multiple relation paths, which can be directly transformed into language prompts.
KP-PLM can be easily applied to other PLMs without introducing extra parameters as knowledge encoders.
\paragraph{Prompt-based Fine-tuning.}
Prompt-based fine-tuning aims to reuse the pre-training objective (e.g., MLM) and utilizes a well-designed template and verbalizer to make predictions, which has achieved great success in low-resource settings.
We integrate some novel approaches into {\model}, such as PET~\cite{Schick2021Exploiting}, P-tuning~\cite{Liu2021GPT}, etc.
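For illustration, a minimal sketch of PET-style prediction with an MLM head is given below (assuming HuggingFace-style \texttt{mlm} and \texttt{tokenizer} objects; the template and verbalizer are toy examples, not {\model}'s exact implementation):
\begin{lstlisting}[language=python, frame=shadowbox]
import torch

# PET-style sketch: append a template, read the [MASK] logits,
# and compare the verbalizer tokens' scores.
def prompt_classify(text, mlm, tokenizer,
                    template=" It was {}.",
                    verbalizer={"positive": "great",
                                "negative": "terrible"}):
    prompt = text + template.format(tokenizer.mask_token)
    inputs = tokenizer(prompt, return_tensors="pt")
    pos = (inputs["input_ids"][0] ==
           tokenizer.mask_token_id).nonzero()[0].item()
    logits = mlm(**inputs).logits[0, pos]
    scores = {lab: logits[tokenizer.convert_tokens_to_ids(w)].item()
              for lab, w in verbalizer.items()}
    return max(scores, key=scores.get)
\end{lstlisting}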
\paragraph{Instruction-tuning and In-Context Learning.}
Instruction-tuning~\cite{Wei2022Finetuned} and in-context learning~\cite{Brown2020Language} enable few-/zero-shot learning without parameter updates by concatenating task-aware instructions or example-based demonstrations to prompt GPT-style PLMs to generate reliable responses.
In this way, all NLP tasks can be unified into the same format, which substantially improves the models' generalization.
Inspired by this idea, we extend it to two other paradigms:
1) the extractive-style paradigm unifies various NLP tasks into span extraction, in the same way as extractive question answering~\cite{Keskar2019Unifying}, and 2) the inference-style paradigm views all tasks as natural language inference that matches the relations between inputs and outputs~\cite{Wang2021Entailment}. A sketch of both paradigms is shown below.
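A minimal sketch of how a classification example could be cast into both paradigms (the instruction and hypothesis wordings are assumptions):
\begin{lstlisting}[language=python]
text = "The movie is full of surprises."

# 1) extractive-style: the label is a span to be extracted from the input
extractive_input = ("Task: sentiment classification. "
                    "Options: positive, negative. Text: " + text)

# 2) inference-style: each label becomes a hypothesis to be matched
nli_inputs = [(text, "This text expresses a positive sentiment."),
              (text, "This text expresses a negative sentiment.")]
\end{lstlisting}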
\begin{figure}
\begin{minipage}{0.48\textwidth}
\begin{lstlisting}[language=python, caption=An application case of sequence classification for GLUE benchmark., label=case2, frame=shadowbox]
python3 hugnlp_runner.py \
--model_name_or_path=$path \
--data_dir=$data_path \
--output_dir=./outputs/glue/$glue_task \
--seed=42 \
--max_seq_length=$len \
--max_eval_seq_length=$len \
--do_train \
--do_eval \
--per_device_train_batch_size=8 \
--per_device_eval_batch_size=4 \
--gradient_accumulation_steps=1 \
--evaluation_strategy=steps \
--learning_rate=1e-5 \
--num_train_epochs=10 \
--task_name=clue \
--task_type=head_cls \
--model_type=bert \
--user_defined="data_name=rte" \
\end{lstlisting}
\end{minipage}
\end{figure}
\begin{figure}
\centering \includegraphics[width=\linewidth, frame]{case3}
\caption{An application case of HugIE.}\label{fig:case3}
\end{figure}
\paragraph{Uncertainty-aware Self-training.}
Self-training can address the labeled-data scarcity issue by leveraging large-scale unlabeled data in addition to labeled data, and is one of the mature paradigms in semi-supervised learning~\cite{Qi2022Small, Nitesh2005Learning, Amini2022Self}.
However, standard self-training may introduce too much noise, inevitably degrading the model performance due to confirmation bias.
Thus, we present uncertainty-aware self-training. Specifically, we train a teacher model on few-shot labeled data, use the Monte Carlo (MC) dropout technique in Bayesian neural networks (BNN)~\cite{Gal2016Dropout} to approximate the model certainty, and judiciously select the examples on which the teacher has higher certainty.
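A minimal sketch of the certainty estimation step, assuming a PyTorch classifier with HuggingFace-style outputs whose dropout layers are kept active at inference time:
\begin{lstlisting}[language=python]
import torch

@torch.no_grad()
def mc_dropout_certainty(model, batch, n_samples=10):
    model.train()  # keep dropout active at inference time
    probs = torch.stack([
        torch.softmax(model(**batch).logits, dim=-1)
        for _ in range(n_samples)])        # (T, B, C)
    mean = probs.mean(dim=0)               # predictive mean over T passes
    # lower predictive entropy -> higher model certainty
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(-1)
    return mean.argmax(-1), -entropy       # labels, certainty scores
\end{lstlisting}
Examples whose certainty scores exceed a threshold are then added to the student's training set.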
\paragraph{Parameter-efficient Learning.}
To improve the training efficiency of {\model}, we also implement parameter-efficient learning, which freezes some parameters in the backbone so that only a few parameters are tuned during training.
We implement several novel parameter-efficient learning approaches, such as Prefix-tuning~\cite{Li2021Prefix}, Adapter-tuning~\cite{Houlsby2019Parameter}, BitFit~\cite{Zaken2022BitFit}, and LoRA~\cite{Hu2022LoRA}.
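As a minimal illustration of the idea, a BitFit-style freezer leaves only bias terms trainable (a sketch; the library's own utilities may differ):
\begin{lstlisting}[language=python]
def bitfit(model):
    # freeze all parameters except bias terms
    for name, param in model.named_parameters():
        param.requires_grad = "bias" in name
    return model
\end{lstlisting}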
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{workflow}
\caption{The development workflow of {\model}.}\label{fig:workflow}
\end{figure*}
\subsection{Featured Applications}
\label{sec:application}
\begin{table*}
\centering
\begin{small}
\begin{tabular}{l cccccccc}
\toprule
\bf PLMs & \bf AFQMC & \bf CMNLI & \bf CSL & \bf IFLYTEK & \bf OCNLI & \bf TNEWS & \bf WSC & \bf Avg. \\
\midrule
BERT-base & 72.30 & 75.91 & 80.83 & 60.11 & 78.52 & 57.18 & 75.89 & 72.04 \\
BERT-large & 72.91 & 77.62 & 81.30 & 60.77 & 78.71 & 57.77 & 78.28 & 72.60 \\
RoBERTa-base & 73.33 & 81.05 & 80.17 & 60.81 & 80.88 & 57.69 & 86.74 & 74.10 \\
RoBERTa-large & 74.66 & 80.50 & 82.60 & 61.37 & 82.19 & 58.54 & 87.53 & 75.33 \\
MacBERT-base & 74.23 & 80.65 & 81.63 & 61.14 & 80.65 & 57.65 & 80.26 & 73.80 \\
MacBERT-large & 74.66 & 81.19 & 83.70 & 62.05 & 81.92 & 59.03 & 86.74 & 75.46 \\
\bottomrule
\end{tabular}
\end{small}
\caption{Accuracy (\%) of different tasks in the CLUE benchmark.}
\label{table:benchmark-clue}
\end{table*}
\paragraph{Benchmark Tuning.}
We develop the training application for some popular benchmarks, such as Chinese CLUE and GLUE.
We use both standard fine-tuning and prompt-based fine-tuning paradigms to tune PLMs over these benchmarks.
The case of this application is shown in Code~\ref{case2}.
\paragraph{Universal Information Extraction based on Extractive Instruction.}
We develop HugIE, a novel universal information extraction toolkit based on {\model}. Specifically, we collect multiple Chinese NER and event extraction datasets from ModelScope~\footnote{\url{https://modelscope.cn/datasets}} and QianYan~\footnote{\url{https://www.luge.ai}}. Then, we use the core capacity of extractive-style instruction with a global pointer~\cite{Su2022Su} to pre-train a universal information extraction model.
We also upload the trained model to HuggingFace~\footnote{\url{https://huggingface.co/wjn1996/wjn1996-hugnlp-hugie-large-zh}.}.
An example of using HugIE is shown in Figure~\ref{fig:case3}.
\paragraph{Low-resource Tuning for PLMs.}
For low-resource settings,
we have integrated two core capacities, prompt-based fine-tuning and uncertainty-aware self-training, to further improve performance with limited labeled data.
Specifically, prompt-tuning fully reuses the prior knowledge in PLMs to achieve strong performance with few examples, while self-training leverages unlabeled data to enhance effectiveness.
\paragraph{Code Understanding and Generation.}
In addition to traditional NLP tasks, we also consider the scenario of code understanding and generation,
such as clone detection, defect detection, and code summarization~\cite{Lu2021codexglue}.
\subsection{Development Workflow}
{\model} is easy to use and develop. We draw a workflow in Figure~\ref{fig:workflow} to show how to develop a new running task.
It consists of five main steps, including library installation, data preparation, processor selection or design, model selection or design, and application design.
This illustrates that {\model} can simplify the implementation of complex NLP models and tasks.
\begin{table*}[ht]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{c c llllllll c}
\toprule
\multirow{2}*{\bf Paradigms} & \multirow{2}*{\bf Methods} & \bf SST-2 & \bf SST-5 & \bf MR & \bf CR & \bf MPQA & \bf Subj & \bf TREC & \bf CoLA & \multirow{2}*{\bf Avg.} \\
& & (acc) & (acc) & (acc) & (acc) & (acc) & (acc) & (acc) & (matt.) & \\
\midrule
\multirow{2}*{PT-Zero} & RoBERTa & 82.57 & 29.46 & \textbf{65.10} & \textbf{82.15} & 49.90 & \textbf{69.20} & 20.80 & -4.89 & 49.29 \\
& KP-PLM & \textbf{84.15} & \textbf{30.67} & 64.15 & 81.60 & \textbf{53.80} & 68.70 & \textbf{24.80} & \textbf{-2.99} & \textbf{50.61} \\
\hdashline
\multirow{2}*{PT-Few} & RoBERTa & 86.35\small{\textpm1.3} & 36.79\small{\textpm2.0} & \textbf{83.35}\small{\textpm0.9} & \textbf{88.85}\small{\textpm1.4} & 66.40\small{\textpm1.9} & 89.25\small{\textpm2.6} & 76.80\small{\textpm5.0} & 6.61\small{\textpm6.9} & 66.80 \\
& KP-PLM & \textbf{90.71}\small{\textpm1.0} & \textbf{44.21}\small{\textpm2.9} & 82.00\small{\textpm1.5} & 85.35\small{\textpm0.4} & \textbf{67.30}\small{\textpm1.2} & \textbf{91.45}\small{\textpm0.4} & \textbf{81.00}\small{\textpm3.3} & \textbf{24.28}\small{\textpm11.3} & \textbf{70.79} \\
\hdashline
\multirow{2}*{FT-Full} & RoBERTa & 94.90 & 56.90 & \textbf{89.60} & 88.80 & 86.30 & \textbf{96.50} & \textbf{97.10} & 63.90 & 84.25 \\
& KP-PLM & \textbf{95.30} & \textbf{57.63} & 89.20 & \textbf{89.10} & \textbf{87.40} & 96.20 & \textbf{97.10} & \textbf{64.87} & \textbf{84.60} \\
\bottomrule
\end{tabular}
}
\caption{The comparison between KP-PLM and RoBERTa-base over multiple natural language understanding (NLU) tasks in terms of acc/f1/matt. (\%) and standard deviation with three paradigms, such as zero-shot prompt-tuning (PT-Zero), few-shot prompt-tuning (PT-Few), and full-data fine-tuning (FT-Full).}
\label{tab:nlu}
\end{table*}
\begin{table*}[t]
\begin{center}
\resizebox{\textwidth}{!}{
\begin{tabular}{p{12cm}p{6cm}}
\begin{minipage}[t]{0.775\textwidth}
\resizebox{\textwidth}{!}{
\begin{tabular}{lc c c c c c c c c}
\toprule
\multirow{2}{*}{\bf Methods} & \multirow{2}{*}{\bf Params.} & \multicolumn{2}{c}{\bf Java to C\#} & \multicolumn{2}{c}{\bf C\# to Java} & \multicolumn{2}{c}{\bf Refine Small} & \multicolumn{2}{c}{\bf Refine Medium}\\
\cmidrule(lr){3-4}\cmidrule(lr){5-6}\cmidrule(lr){7-8}\cmidrule(lr){9-10}
& & (bleu) & (em) & (bleu) & (em) & (bleu) & (em) & (bleu) & (em) \\
\midrule
\textbf{CodeT5} & & & & & & & & & \\
Fine-Tuning & 224M \cellcolor[gray]{.5}& \textbf{84.15} & \textbf{65.30} & \textbf{79.12} & \textbf{66.40} & 77.39 & \textbf{21.35} & \textbf{91.04} & \textbf{7.82} \\
BitFit & 0.001M \cellcolor[gray]{.9}& 0.25 & 0.00 & 0.24 & 0.00 & 1.28 & 0.00 & 5.14 & 0.00 \\
Adapter & 14.22M \cellcolor[gray]{.7}& 75.43 & 52.40 & 73.10 & 57.70 & 77.41 & 18.58 & 91.01 & 3.61 \\
P-Tuning V2 & 0.633M \cellcolor[gray]{.8}& 59.86 & 33.70 & 57.10 & 41.00 & \textbf{78.99} & 4.56 & 91.02 & 0.79 \\
\cdashline{1-10}
\\[-1em]
\textbf{PLBART} & & & & & & & & \\
Fine-Tuning & 139M \cellcolor[gray]{.55}& \textbf{77.05} & \textbf{62.60} & \textbf{79.29} & \textbf{62.80} & 73.32 & \textbf{12.71} & 83.88 & \textbf{4.24} \\
BitFit & 0.126M \cellcolor[gray]{.89}& 16.48 & 0.10 & 17.43 & 0.90 & \textbf{74.08} & 1.45 & \textbf{85.41} & 0.42 \\
Adapter & 7.11M \cellcolor[gray]{.76}& 66.72 & 42.10 & 68.70 & 51.00 & 73.58 & 10.90 & 84.72 & 3.12 \\
P-Tuning V2 & 0.329M \cellcolor[gray]{.87}& 22.87 & 1.00 & 48.08 & 33.80 & 73.87 & 2.07 & 73.58 & 0.03 \\
\bottomrule
\end{tabular}
}
\captionsetup{type=table}
\vspace{-0.5em}
\caption{Performance (\%) on Code Translation \& Code Refinement Tasks.}
\label{table:code-trans}
\end{minipage}
&
\begin{minipage}[t]{0.37\textwidth}
\resizebox{\textwidth}{!}{
\begin{tabular}{lccc}
\toprule
\multirow{2}{*}{\bf Methods} & \multirow{2}{*}{\bf Params.} & \bf Defect & \bf Clone \\
\cmidrule(lr){3-3}\cmidrule(lr){4-4}
& & (acc) & (f1) \\
\midrule
\textbf{CodeT5} & & & \\
Fine-Tuning & 224M \cellcolor[gray]{.5}& \textbf{64.35} & \textbf{94.97} \\
BitFit &1.183M \cellcolor[gray]{.84}& 55.05 & 69.52 \\
Adapter & 15.40M \cellcolor[gray]{.68}& 59.74 & 94.47 \\
P-Tuning V2 & 1.182M \cellcolor[gray]{.84}& 54.61 & 79.83 \\
\cdashline{1-4}
\\[-1em]
\textbf{PLBART} & & & \\
Fine-Tuning & 139M \cellcolor[gray]{.55}& \textbf{62.27} & \textbf{92.85} \\
BitFit & 1.308M \cellcolor[gray]{.83}& 56.30 & 92.42 \\
Adapter & 8.29M \cellcolor[gray]{.74}& 61.60 & 92.74 \\
P-Tuning V2 & 1.182M \cellcolor[gray]{.84}& 53.81 & 75.88 \\
\bottomrule
\end{tabular}
}
\captionsetup{type=table}
\vspace{-0.5em}
\caption{Performance (\%) on Code Clone Detection \& Code Defect Detection Tasks.}
\label{table:code-understanding}
\end{minipage}
\end{tabular}
}
\end{center}
\end{table*}
\section{Experimental Performances}
In this section, we empirically examine the effectiveness and efficiency of the {\model} toolkit on some public datasets.
\subsection{Performance of Benchmarks}
To validate the effectiveness of {\model} on both fine-tuning and prompt-tuning, we choose Chinese CLUE~\cite{Xu2020CLUE} and GLUE benchmarks~\cite{Wang2019GLUE}.
For Chinese CLUE, we choose BERT, RoBERTa, and MacBERT~\cite{Cui2020Revisiting} of different sizes and report the accuracy on the development set of each task in Table~\ref{table:benchmark-clue}.
For GLUE, we perform full-resource fine-tuning (FT-full), few-shot prompt-tuning (PT-few), and zero-shot prompt-tuning (PT-zero) based on our proposed KP-PLM. We select RoBERTa as the strong baseline and report the accuracy results with standard deviation in Table~\ref{tab:nlu}.
The comparable performance demonstrates the reliability of {\model} in both full- and low-resource scenarios: it matches other open-source frameworks and the original implementations~\cite{Wang2022EasyNLP}.
\subsection{Evaluation of Code-related Tasks}
We use {\model} to evaluate the performance on multiple code-related tasks, such as code clone detection, defect detection, translation, and refinement. We fine-tune two widely used models, CodeT5~\cite{Wang2021CodeT5} and PLBART~\cite{ahmad2021unified}, and compare them with competitive parameter-efficient learning methods, including BitFit, Adapter, and P-Tuning V2~\cite{Liu2021Ptuningv2}.
Results in Table~\ref{table:code-trans} and Table~\ref{table:code-understanding} demonstrate the effectiveness and efficiency of {\model}.
\begin{table}
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{l cccc}
\toprule
\bf Methods & \bf RTE & \bf CB & \bf AGNews & \bf Avg. \\
\midrule
\multicolumn{5}{l}{\textit{\textbf{Few Labeled Data (16-shot)}}}\\
Fine-Tuning & 54.4\small{\textpm3.9} & 74.5\small{\textpm2.6} & 88.9\small{\textpm2.7} & 72.60 \\
\midrule
\multicolumn{5}{l}{\textit{\textbf{Few Labeled Data (16-shot) + Unlabeled Data}}}\\
UST & 55.6\small{\textpm2.6} & 76.0\small{\textpm3.1} & 89.3\small{\textpm3.5} & 73.63 \\
CEST & 57.0\small{\textpm1.9} & 78.1\small{\textpm2.7} & 88.5\small{\textpm2.2} & 74.53 \\
LiST & \bf 60.8\small{\textpm2.5} & \bf 79.7\small{\textpm2.9} & \bf 90.3\small{\textpm2.5} & \bf 76.93 \\
\bottomrule
\end{tabular}
}
\caption{Accuracy (\%) of uncertainty-aware self-training with only 16 labeled examples per class.}
\label{table:self-training}
\end{table}
\subsection{Effectiveness of Self-training}
We end this section with an additional validation of self-training. We evaluate the implementations in {\model} of several recent methods that use uncertainty estimation, including UST~\cite{Mukherjee2020Uncertainty}, CEST~\cite{Tsai2022Contrast}, and LiST~\cite{Wang2022LiST}.
Results in Table~\ref{table:self-training} show that self-training can make substantial improvements in low-resource scenarios.
\section{Conclusion}
In this paper, we introduce {\model}, a unified and comprehensive library built on PyTorch and HuggingFace that researchers can apply to a wide range of academic and industrial scenarios.
{\model} consists of three key components (i.e., \emph{Processors}, \emph{Models}, and \emph{Applications}) together with multiple pre-built core capacities and plug-and-play utils.
Finally, we evaluate several aspects of its applications, and the results demonstrate its efficiency and effectiveness. We believe {\model} can promote research and development of NLP applications.
\section*{Ethics Statement}
Our contribution in this work is to construct a unified and comprehensive library for NLP research and application.
However, transformer-based models may have some negative impacts, such as gender and social bias.
Our work would unavoidably suffer from these issues.
We suggest that users should carefully address potential risks
when models trained using the {\model} library are deployed online.
\section*{Acknowledgements}
This work has been supported by the National Natural Science Foundation of China under Grant Nos. U1911203 and 61877018,
Alibaba Group through the Alibaba Innovation Research Program,
the Research Project of Shanghai Science and Technology Commission (20dz2260300), and the Fundamental Research Funds for the Central Universities.
|
{
"arxiv_id": "2302.14337",
"language": "en",
"timestamp": "2023-03-01T02:09:57",
"url": "https://arxiv.org/abs/2302.14337",
"yymm": "2302"
} | \section{Introduction}
In recent years, there has been growing interest in virtual humans and the metaverse, leading to an increased focus on the generation of natural talking faces~\cite{zhu2021deep}.
The applications of talking face generation can be broadly categorized into two groups, as depicted in \Fig{overview}.
The first group involves generating talking faces based on text inputs, which can be used for video production or multimodal chatbots~\cite{kumar2017obamanet,wang2021anyonenet,zhang2022text2video,li2021write,dahmani2019conditional,dahmani2021learning,abdelaziz2021avtacotron}.
In most cases, this group also requires simultaneous generation of speech synchronized with talking faces.
The second group involves generating talking faces synchronized with speech inputs, which can be used to animate characters' faces or to act like someone else~\cite{suwajanakorn2017obama,taylor2017deep,eskimez2018generating,zhou2020makeittalk,prajwal2020wav2lip,zhou2021pcavs,liang2022gcavt,zhang2022meta,deng2021unsupervised,ji2021audio}.
While this group usually requires removing the speaker identity to accommodate arbitrary speakers, it is also important to generate talking faces according to the speech emotions.
Although existing research has targeted only one of these groups, integrating them can produce a single versatile model for a variety of applications.
A current approach is to train a speech-to-face model for the second group of applications, and combine it with an external text-to-speech (TTS) model for the first group of applications~\cite{zhou2020makeittalk,wang2021anyonenet}.
However, this approach has several drawbacks: linguistic information cannot be utilized in face generation, the quality of the generated talking faces depends on the TTS quality, and additional inference time is required for TTS.
In this paper, we propose a unified facial landmark generator (UniFLG) to integrate text- and speech-driven talking face generation.
UniFLG has a TTS module based on the variational autoencoder (VAE)~\cite{kingma2014auto}, which makes it possible to acquire a time-aligned common representation of text and speech during TTS training.
Then, a landmark decoder, which is another module of UniFLG, generates facial landmarks from the intermediate representation.
This is beneficial for both text- and speech-driven generation;
during text-driven generation, speech and facial landmarks can be generated in parallel, resulting in faster inference and no error propagation from TTS.
During speech-driven generation, speaker identity is removed because the speech-based representation is learned to be shared with the text-based representation.
To preserve speech emotions, we further introduce an utterance-level VAE to extract emotion embeddings and condition the landmark decoder on them.
Another important feature of this study is that we regard the facial landmark, a widely used low-dimensional representation of faces~\cite{kumar2017obamanet,wang2021anyonenet,zhang2022text2video,li2021write,dahmani2019conditional,dahmani2021learning,suwajanakorn2017obama,eskimez2018generating,zhou2020makeittalk,ji2021audio,yu2020multimodal,yu2021multimodal}, as common among speakers owing to its weak speaker dependence.
This assumption enables the landmark decoder to be trained with paired text, speech and facial videos of just one speaker, while the TTS module can be trained with existing multi-speaker corpora.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{fig/overview.pdf}
\caption{Major applications of talking face generation.}
\label{fig:overview}
\vspace{-5ex}
\end{figure}
\section{Related work}
\textbf{Text-driven talking face generation.}
Most of the text-driven talking face generation methods are accompanied by TTS.
Pipelined methods first synthesize speech from text and then generate talking faces in a speech-driven manner~\cite{kumar2017obamanet, wang2021anyonenet}, and text-based methods utilize temporal alignment between the text and speech obtained via TTS~\cite{zhang2022text2video, li2021write}.
On the other hand, audiovisual speech synthesis simultaneously generates speech and talking faces~\cite{dahmani2019conditional,dahmani2021learning,abdelaziz2021avtacotron}.
AVTacotron2~\cite{abdelaziz2021avtacotron} extends Tacotron2~\cite{shen2018tacotron2} and demonstrates improved quality over pipelined methods.
\begin{figure*}[t]
\centering
\includegraphics[width=\hsize]{fig/conceptual_diagram_v6.pdf}
\caption{Conceptual diagram of the (a) training and (b) inference of UniFLG. Speaker embedding is omitted for brevity.}
\label{fig:diagram}
\end{figure*}
\noindent\textbf{Speech-driven facial animation.}
Early methods targeted a specific speaker or emotion~\cite{taylor2017deep,suwajanakorn2017obama}.
Recently, several methods that support an arbitrary speaker's voice~\cite{eskimez2018generating,zhou2020makeittalk,deng2021unsupervised,prajwal2020wav2lip,zhou2021pcavs,liang2022gcavt,zhang2022meta} or generate talking faces in accordance with speech emotions~\cite{deng2021unsupervised,ji2021audio} have been proposed.
Particularly, multimodal methods that consider both text and speech have improved the quality of talking face generation~\cite{yu2020multimodal,yu2021multimodal,fan2022joint}.
However, their applicability is limited because they cannot generate talking faces from only text or speech, or they have only been validated on a specific speaker.
\section{UniFLG}
\label{sec:prop}
UniFLG consists of two components:
(1) VAE-VITS~\cite{mitsui2022endtoend}, which introduces an utterance-level latent variable into the end-to-end TTS called VITS~\cite{kim2021vits} and
(2) a landmark decoder, which generates facial landmarks from the common representation of text and speech extracted by VAE-VITS.
Our system is trained in two stages, as depicted in \Fig{diagram}(a).
First, VAE-VITS is trained on speech and its transcriptions, and following this, the landmark decoder is trained using paired speech and facial landmarks, as well as its transcriptions with fixed VAE-VITS parameters.
UniFLG simultaneously generates speech and facial landmarks during text-driven inference, and it generates facial landmarks without using textual information during speech-driven inference, as illustrated in \Fig{diagram}(b).
We provide details regarding UniFLG in the following sections.
\subsection{VAE-VITS}
VITS models the distribution of a speech waveform $\mathbf{x}$ conditioned on text $\mathbf{c}$ by introducing a frame-level latent variable $\mathbf{z}_f$.
The relationship between $\mathbf{x}$ and $\mathbf{z}_f$ is modeled by a posterior encoder and waveform decoder, and that between $\mathbf{z}_f$ and $\mathbf{c}$ is modeled by a prior encoder.
To model the temporal alignment between $\mathbf{z}_f$ and $\mathbf{c}$, they are first converted into latent representations;
$\mathbf{z}_f$ is transformed into $f_\theta^\mathrm{spec}(\mathbf{z}_f)$ using a normalizing flow $f_\theta$~\cite{rezende2015flow}, and $\mathbf{c}$ is transformed into $\{\bm{\mu}_\theta, \bm{\sigma}_\theta\}$ using text encoder and linear projection.
Then, the monotonic alignment search (MAS)~\cite{kim2020glowtts} algorithm estimates the most probable alignment between $f_\theta^\mathrm{spec}(\mathbf{z}_f)$ and $\{\bm{\mu}_\theta, \bm{\sigma}_\theta\}$.
As the alignment is not available during inference, a duration predictor is simultaneously trained; it uses the text encoder output $\mathbf{h}$ to predict the duration of each phoneme $\mathbf{d}_\mathrm{spec}$.
During inference, $\{\bm{\mu}_\theta, \bm{\sigma}_\theta\}$ is expanded to frame level using $\mathbf{d}_\mathrm{spec}$, and $f_\theta^\mathrm{spec}(\mathbf{z}_f)$ is sampled from a Gaussian distribution defined by them.
Therefore, $f_\theta^\mathrm{spec}(\mathbf{z}_f)$ can be regarded as a speaker-independent representation common to text and speech.
To automatically extract emotion embeddings from speech, UniFLG uses VAE-VITS, which extends VITS with an utterance encoder.
By conditioning the entire system with an explicit speaker embedding, the utterance encoder extracts utterance-level latent variable $\mathbf{z}_u$ that represents speech emotions~\cite{mitsui2022endtoend}.
\subsection{Landmark Decoder}
The landmark decoder generates a series of facial landmarks $\mathbf{Y} \in \mathbb{R}^{T\times N\times2}$ given $f_\theta^\mathrm{land}(\mathbf{z}_f)$, a resampled version of $f_\theta^\mathrm{spec}(\mathbf{z}_f)$, where $T$ and $N$ denote the number of frames and 2D keypoints, respectively.
A non-causal WaveNet~\cite{prenger2019waveglow} is used for the landmark decoder.
Following WaveNet\footnote{\url{https://arxiv.org/abs/1609.03499}}, the emotion embedding $\mathbf{z}_u$ is given as the global conditioning.
\subsubsection{Mixed-modality training}
Although $f_\theta^\mathrm{spec}(\mathbf{z}_f)$ is common to text and speech, the representations derived from text and from speech do not match exactly.
Therefore, the landmark decoder is trained by switching the input between text and speech at each iteration.
Given text, the duration $\mathbf{d}_\mathrm{spec}$ obtained from the MAS is multiplied by a constant to match the frame rate of $\mathbf{Y}$ to obtain $\mathbf{d}_\mathrm{land}$.
Thereafter, $\{\bm{\mu}_\theta, \bm{\sigma}_\theta\}$ are expanded according to $\mathbf{d}_\mathrm{land}$, and
$f_\theta^\mathrm{land}(\mathbf{z}_f)$ is sampled from a Gaussian distribution defined by them.
Given speech, $f_\theta^\mathrm{spec}(\mathbf{z}_f)$ extracted from $\mathbf{X}_\mathrm{spec}$ is resampled by linear interpolation to obtain $f_\theta^\mathrm{land}(\mathbf{z}_f)$.
The landmark decoder is trained to minimize the mean squared error between predicted and target facial landmarks.
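The speech-side resampling can be realized by linear interpolation along the time axis; a minimal sketch, assuming a batched representation of shape (batch, channels, frames):
\begin{lstlisting}[language=python]
import torch.nn.functional as F

def resample(z_spec, t_land):
    # z_spec: (B, C, T_spec) speech-side representation
    # t_land: target number of landmark frames (e.g., at 30 fps)
    return F.interpolate(z_spec, size=t_land,
                         mode="linear", align_corners=True)
\end{lstlisting}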
\subsubsection{Inference}
The flow of speech-driven inference is exactly the same as that in training.
During text-driven inference, $\mathbf{d}_\mathrm{spec}$ obtained using the duration predictor is converted into $\mathbf{d}_\mathrm{land}$, and the following flow is the same as in training.
Additionally, $\mathbf{z}_u$ is extracted from the input speech during speech-driven inference, whereas during text-driven inference it is either sampled from the prior $p(\mathbf{z}_u)=\mathcal{N}(\mathbf{z}_u; \mathbf{0}, \mathbf{I})$ or extracted from reference speech.
\input{tab/objective_eval}
\subsection{UniFLG-AS}
\label{sec:uniflg_as}
Although UniFLG eliminates speaker-dependent factors from speech using the posterior encoder and flow, it cannot generate facial landmarks from the speech of unseen speakers because these modules require speaker embeddings.
To overcome this limitation, we propose a variant of our system, UniFLG for an arbitrary speaker (UniFLG-AS), which uses explicit emotion embeddings to represent speech emotions and $\mathbf{z}_u$ to represent speakers instead of emotions.
The landmark decoder uses the explicit emotion embedding as a global condition.
\section{Experiments}
\label{sec:exp}
\subsection{Experimental conditions}
\subsubsection{Datasets}
For training the landmark decoder, we recorded 3,359 (1,499 normal, 860 happy, and 1,000 sad) utterances of paired speech and facial videos from a female Japanese speaker according to predefined transcripts.
These will be hereinafter referred to as the \data{Paired} dataset.
The development and evaluation sets each comprised 45 emotion-balanced utterances, and the remaining utterances were used as the training set.
For training the VAE-VITS module, we recorded 30,542 (14,508 talk, 7,771 happy, and 8,263 sad) utterances of speech uttered by 26 speakers (eighteen females and eight males) according to predefined transcripts.
These will be hereinafter referred to as the \data{Unpaired} dataset because the facial videos are not included.
The development and evaluation sets each comprised 225 speaker- and emotion-balanced utterances, and the remaining utterances were used as the training set.
All the experiments were conducted using \SI{24}{\kilo\hertz}/16 bit speech signals and 30 frames per second 1280$\times$720 videos.
We used 50-dimensional linguistic features for $\mathbf{c}$, which contain phonemes, accents, and whether the current accent phrase is interrogative, all extracted using Open JTalk\footnote{\url{http://open-jtalk.sourceforge.net/}}.
We extracted 70 points of facial landmarks from each frame of the facial videos using OpenPose~\cite{cao2021openpose}.
\subsubsection{Model architecture and training}
The model architecture and training scheme of VAE-VITS were the same as those in a previous study~\cite{mitsui2022endtoend}.
For the landmark decoder, we used 16-layer non-causal WaveNet, where each layer had 192 filters with a kernel size of five and dilation factor of one, and 1$\times$1 convolution layers placed before and after the non-causal WaveNet.
A 16-dimensional $\mathbf{z}_u$ was fed to each layer of the non-causal WaveNet as a global condition.
The landmark decoder was trained using an AdamW optimizer~\cite{loshchilov2019adamw} with $\beta_1=0.8, \beta_2=0.99$, and a weight decay of 0.01.
The initial learning rate was set to $2\times 10^{-4}$ and was multiplied by $0.999875$ every epoch.
The training was conducted over 10,000 iterations (approximately 150 epochs) with a batch size of 48, which took \SI{2.5}{\hour} on a single NVIDIA Tesla P40 GPU.
\subsection{Results}
\input{tab/subjective_eval_v2}
\input{tab/subjective_eval2_v2}
\subsubsection{Facial landmark prediction accuracy}
\label{sec:obj_eval}
To evaluate the prediction accuracy of text- and speech-driven inference using UniFLG (hereinafter referred to as \method{UniFLG-T} and \method{UniFLG-S}, respectively), facial landmarks over the evaluation set of the \data{Paired} dataset were predicted.
For comparison, three text-driven and two speech-driven methods (presented sequentially) were used:
(1) \method{AVTacotron2}~\cite{abdelaziz2021avtacotron}, the state-of-the-art method that generates speech and facial landmarks jointly from text, similar to the proposed method,
(2) \method{TTL} (text-to-landmark) that trains the landmark decoder of the proposed system only in a text-driven manner,
(3) \method{UniFLG-P} (\method{P} stands for pipelined) that uses the proposed system, but it first synthesizes speech and then generates facial landmarks in a speech-driven manner,
(4) \method{STL} (speech-to-landmark) that trains the landmark decoder of the proposed system only in a speech-driven manner, and
(5) \method{STL-D} (\method{D} stands for direct) that directly feeds $\mathbf{X}_\mathrm{spec}$ to the landmark decoder.
Following previous studies~\cite{zhou2020makeittalk,wang2021anyonenet}, we evaluated the accuracy of lip movements using
the landmark distance for lips (D-LL),
landmark velocity difference for lips (D-VL),
and difference in the open mouth area (D-A)
and the accuracy of entire face movements using
the landmark distance (D-L) and
landmark velocity difference (D-V).
Because the sequence length of facial landmarks predicted using text-driven methods is not always the same as the target, we aligned them using dynamic time warping~\cite{berndt1994dtw}.
The obtained results are presented in \Table{obj_eval}.
Among the four text-driven methods, \method{UniFLG-T} achieved the lowest D-LL, D-A, and D-L scores.
In particular, these values were lower than those of \method{UniFLG-P}, which implies that \method{UniFLG-T} could reduce the prediction errors caused by TTS.
\method{AVTacotron2}, on the other hand, achieved the lowest D-VL and D-V scores.
This is possibly because these metrics focus on the difference between consecutive frames and \method{AVTacotron2} was the only method that generated facial landmarks autoregressively.
Among the three speech-driven methods, \method{UniFLG-S} achieved the lowest values for all evaluation metrics.
In addition, \method{UniFLG-\{T, S\}} achieved better performance than \method{TTL} and \method{STL}.
The proposed method uses both text and speech modalities as inputs, which may have acted like data augmentation and resulted in a more robust landmark decoder.
\subsubsection{Inference speed}
Fast inference is crucial, especially for real-time applications such as human conversation and live streaming.
We measured the real-time factor (RTF), the time required to generate \SI{1}{\second} of speech and facial landmarks, on one NVIDIA Tesla P40 GPU.
For \method{AVTacotron2}, we included the time required for waveform generation, for which we used HiFi-GAN~\cite{kong2020hifigan} trained on all speech data in the \data{Paired} and \data{Unpaired} datasets.
The results are listed in the right-most column of \Table{obj_eval}.
As UniFLG is non-autoregressive, its RTF was generally smaller than that of the autoregressive \method{AVTacotron2}.
We also confirmed that \method{UniFLG-T} improved the RTF by 41\% over \method{UniFLG-P} because it eliminates the need for generating speech before facial landmark generation.
Among the speech-driven methods, \method{UniFLG-S} was not as fast as \method{STL-D}; however, it was still faster than any text-driven method.
\subsubsection{Generated speech and facial landmark quality}
Subjective evaluation was conducted based on the following three criteria: speech quality, facial landmark quality, and lip-sync quality\footnote{Generated samples are available at \url{https://rinnakk.github.io/research/publications/UniFLG}.}.
For the second and third criteria, the facial landmarks were plotted frame by frame to construct a facial video and presented to the raters.
Four methods (\method{AVTacotron2} and \method{UniFLG-\{P, T, S\}}) were compared over the two datasets (\data{Paired}, \data{Unpaired}).
Note that the speech quality of \method{UniFLG-S} indicates the quality of the recorded speech.
Thirty-one raters participated in the evaluation, and each rater evaluated 30 samples on a five-point scale from one (bad) to five (excellent).
The obtained results are summarized in \Table{sbj_eval}.
Overall, similar trends were observed for the \data{Paired} and \data{Unpaired} datasets.
The speech quality of \method{AVTacotron2} was quite low; this is because training on multiple datasets, speakers, and emotions was difficult, resulting in failures of alignment and stop-token prediction.
\method{UniFLG-\{P, T\}} demonstrated significantly improved speech quality and those scores were close to that of \method{UniFLG-S} for the \data{Paired} dataset.
The scores slightly decreased for the \data{Unpaired} dataset, which is possibly because the amount of training data per speaker was approximately one-third of the data in the \data{Paired} dataset.
\method{UniFLG-\{P, T, S\}} achieved similar facial landmark quality and lip-sync quality scores.
These scores exceeded 4.0 and were significantly better than those of \method{AVTacotron2}.
Based on these results, we concluded that the proposed UniFLG can generate high-quality facial landmarks from either text or speech.
Furthermore, based on the fact that the scores of \method{UniFLG-T} were comparable to those of \method{UniFLG-P}, we can conclude that it can be used as a faster alternative to the pipelined counterpart.
\subsubsection{Facial landmark generation for unseen speakers}
To evaluate the facial landmark generation quality for speakers unseen during training, we considered 240 utterances (80 each of talk, happy, and sad) of speech uttered by 10 unseen speakers (five males and five females) as the \data{Unseen} dataset.
For each of the \data{Paired}, \data{Unpaired}, and \data{Unseen} datasets, we generated facial landmarks from speech using the UniFLG-AS, described in \Sec{uniflg_as} (hereinafter referred to as \method{UniFLG-AS-S}) and conducted subjective evaluation as outlined in the previous section.
For the baseline, we used \method{STL-D} described in \Sec{obj_eval}.
As \method{STL-D} does not use VAE-VITS, both the \data{Unpaired} and \data{Unseen} datasets were unseen during training, and hence, we omitted evaluation for the \data{Unpaired} dataset.
The obtained results are summarized in \Table{sbj_eval2}.
Both facial landmark and lip-sync quality of \method{UniFLG-AS-S} for the \data{Unseen} dataset outperformed those of \method{STL-D} for the \data{Paired} dataset.
This indicates that the proposed system can generate high-quality facial landmarks from the speech of unseen speakers.
Although the lip-sync quality score of \method{UniFLG-AS-S} for the \data{Unseen} dataset exceeded 4.0, it was slightly worse than that for the \data{Unpaired} dataset.
The number of speakers seen while training VAE-VITS was 27 in total, which we assume was insufficient to represent all unseen speakers with the learned latent space.
Narrowing this performance gap by using more speakers for training VAE-VITS will form a part of future research.
\section{Conclusions}
\label{sec:conlcusion}
This paper proposed UniFLG, which integrates audiovisual speech synthesis and speech-driven facial animation frameworks.
The involved experimental evaluation demonstrated that the proposed system could generate higher quality facial landmarks than conventional methods from either text or speech.
Moreover, UniFLG-AS, a variant of UniFLG, could generate natural facial landmarks even from the speech of unseen speakers.
Future research would involve attempts to extend the proposed system to an end-to-end framework that includes the training of a video generation network.
It would also be advantageous to integrate UniFLG and UniFLG-AS to generate facial landmarks from the speech of arbitrary speakers and emotions.
\bibliographystyle{IEEEbib}
|
{
"arxiv_id": "2302.14289",
"language": "en",
"timestamp": "2023-03-01T02:08:24",
"url": "https://arxiv.org/abs/2302.14289",
"yymm": "2302"
} | \section{Introduction}\label{sec:introduction}
Autonomous unmanned aerial vehicles (UAV) and unmanned ground vehicles (UGV) can replace humans for dangerous tasks such as surveillance and search-and-rescue. Recently, some receding-horizon motion planning methods \cite{herbert2017fastrack, kousik2019safe, tordesillas2019faster} guide an autonomous robot to explore and go to a destination in a complex environment. These methods require a planning hierarchy. On top of this hierarchy, a path planner such as \cite{dijkstra1959note, nash2010lazy} generates a sequence of sparse way-points based on the perception of the environment. Then a motion planner returns collision-free and dynamically feasible trajectories based on the sparse way-points. Finally, the robot executes these trajectories and reaches the goal without any collisions.
A group of autonomous robotic agents has more capabilities than a single robot in applications such as surveillance, information sensing, navigation, and search-and-rescue. If one can generate collision-free, conflict-free sparse paths for multiple agents and tasks at run-time, the robot swarm can explore a complex environment and execute complicated missions efficiently.
\begin{figure}[h]
\centering
\includegraphics[width=0.30\textwidth]{experiment_screenshoot_01-min.png}
\caption{A screenshot of an experiment with a dynamic obstacle. Orange cones represent tasks and black areas indicate no-fly zones. A manually controlled quadrotor with a red shadow represents a dynamic obstacle with infinite height. Two quadrotors with blue/green boxes are the agents, and transparency indicates time.}
\label{experiment:screenshot}
\end{figure}
In this paper, a task is defined as a location of interest that one agent must visit. Given a set of agents and tasks, a collision-aware multi-agent mission planning (MAMP) problem consists of two parts: finding optimal, conflict-free task allocations for the agents, and then generating collision-free paths such that the agents can visit the task positions.
The former is categorized as a multi-agent task allocation (MATA) problem, and the latter is defined as a multi-agent path-finding (MAPF) problem. The optimal objective of MAPF is typically to minimize the total traveling distance. For a multi-agent system (MAS), real-time mission planning in a cluttered environment is necessary when deploying autonomous robots in a complex environment, especially when obstacles and tasks are dynamic. An example of MAMP problems is shown in Fig. \ref{example:before}.
This paper only considers MAMP problems defined as ST-SR-TA (Single-Task Robots, Single-Robot Tasks, Time-Extended Assignment) problems \cite{gerkey2004formal}. Here, tasks are assumed to be homogeneous and independent of each other, i.e. no temporal logic requirements; agents are assumed to be homogeneous regarding mission functionality. Since this problem is proven to be NP-hard \cite{gerkey2004formal}, there is a trade-off for MAMP problems between real-time performance and optimality. Furthermore, the scalability of an underlying algorithm, in terms of the number of agents and tasks, is crucial in MAS applications.
\subsection{Related Work}\label{subsec:related_work}
The literature on MAMP problems basically can be divided into two categories, i.e. solving MATA and MAPF problems sequentially or in an integrated way.
The methods related to MATA can be mainly categorized as auction-based and searching-based methods.
Auction-based approaches are derived from a concept in finance where each agent aims to maximize its own reward by placing higher bids. The process must also consider maximizing a global reward and include conflict resolution.
\cite{michael2008distributed} utilizes auction-based protocols to bid task assignments.
CBBA (Consensus-Based Bundle Algorithm)\cite{choi2009consensus} employs a decentralized consensus procedure for task conflict resolution and then generates task allocation for agents.
IACA (Iterated Auction Consensus Algorithm) \cite{wang2021distributed} proposes a similar iterative but resilient auction process and can remove malicious bids during the auction.
\cite{nunes2017decentralized} proposed an auction-based algorithm to deal with task allocation problems with time window constraints.
\cite{kim2019minimizing} produces task sequences with minimum communications by combining the greedy algorithm and the auction process.
Although the auction-based approaches are decentralized, the process of auction and conflict resolution can be time-consuming, especially when the problem size is large. In addition, the auction heuristic barely includes environmental information, e.g. the impact of obstacles on the cost/reward. Thus, the auction result is not necessarily optimal when the obstacles are present and may even lead to a bad solution.
Search-based methods rely on a fixed structure of information, e.g. the number of assigned tasks for each agent is known and fixed.
\cite{patel2020decentralized} proposes a decentralized genetic algorithm (GA) to search a task sequence parallelly.
\cite{banks2020multi} proposes a graph-based search method to allocate tasks to agents given a finite linear temporal logic objective, where the allocation order is partially known.
\cite{henkel2020optimized} builds an Optimized Directed Roadmap Graph (ODRM) by sampling first, and then navigates agents on this graph. Although searching paths on an ODRM is faster than on the most common occupancy grid map, generating and updating such a graph at run-time can be time-consuming in a cluttered and dynamic environment.
Due to the page limit, this paper omits the literature on MAPF problems because most of the recent literature focuses on the integration of MATA and MAPF problems. As for the literature on solving MATA and MAPF problems sequentially, they are mainly categorized as auction-based and search-based methods.
Based on CBBA, \cite{bertuccelli2009real} first generates task sequences without any obstacle information and then utilizes Dijkstra's algorithm \cite{dijkstra1959note} to find collision-free paths given the sequences.
\cite{choi2011genetic} proposes a two-stage GA-based approach where each agent first determines its own task sequence using a genetic algorithm and then negotiates with other agents to exchange tasks if that reduces the cost. Then collision-free paths are generated similarly as in \cite{bertuccelli2009real}.
There are also some special cases of MAMP problems that have attracted significant interest, such as multi-agent pickup and delivery \cite{henkel2019optimal} and vehicle routing problems. Some special specifications are adopted for these problems. For example, the task set for each agent is prescribed; each agent can only be assigned one task; the initial positions for agents are the same, etc. This paper considers a general MAMP problem without these special specifications.
There is some literature on the integrated MAMP methods.
\cite{schillinger2018simultaneous} focuses on simultaneous task allocation and planning for a complex goal that consists of temporal logic sub-tasks. \cite{schillinger2018simultaneous} emphasizes the capability of a heterogeneous robot team to perform a complex goal, whereas the MAMP problem in this paper focuses on homogeneous agents and tasks.
\cite{ren2021ms}, a fully centralized optimization-based method, first obtains a single tour that connects all the tasks without any obstacle information by solving a traveling salesman problem; it then uses a heuristic policy to partition the tour into a task allocation sequence for each agent and finally generates collision-free paths. Although \cite{ren2021ms} deals with the same problem as this paper, its computation time is consistently around 55 seconds, with 5 - 20 agents and 10 - 50 tasks in a map with random obstacles.
From the methodology perspective, there are primarily three types of methods for MAMP problems with homogeneous agents/tasks and no temporal logic constraints, i.e. decentralized auction-based, distributed GA-based (genetic algorithm), and centralized optimization-based methods. Decentralized auction-based methods, as mentioned above, suffer from inefficient auction and negotiation processes and a lack of obstacle information during the auction process. Distributed GA-based methods might have good real-time performance for small-size problems, but this notably depends on the selection of GA parameters. Also, many methods assume the number of assigned tasks for each agent is known and fixed, whereas this paper does not.
As for optimization-based methods, they barely utilize obstacle information in the first place and do not operate in a distributed manner, i.e., they directly solve the entire allocation problem.
\subsection{Contributions and Notations}\label{subsec:contribution}
This paper proposes a real-time MAMP algorithm, DrMaMP, for homogeneous agents and tasks. DrMaMP first utilizes obstacle information as heuristics to approximate the cost of an ordered task allocation and path sequence by a metric on an unordered set. With this approximation, DrMaMP can partition the entire problem into several sub-problems and distribute them to each agent. Each agent then finds the optimal task allocation order and path for its sub-problem. Owing to this approximation and the distributed manner, DrMaMP strikes a balance between computational performance and scalability.
The main contributions are:
\begin{enumerate}
\item a distributed real-time (on the order of millisecond) MAMP algorithm DrMaMP;
\item capability of handling dynamic obstacles and tasks in a cluttered environment at run-time;
\item good scalability in terms of the number of agents and tasks and relatively good optimality;
\item computational burden analysis for DrMaMP.
\end{enumerate}
\textit{Notations.} Vectors, variables, and functions in multiple dimensions are in bold lowercase; matrices and sets are in uppercase. For a point $\boldsymbol{p} \in \mathbb{R}^n$, $\{ \boldsymbol{p} \} \subset \mathbb{R}^n$ denotes the set containing that point as its only element. Set subtraction is ${A \setminus B=\{x\in A\mid x\notin B\}}$.
$\mathbb{Z}$ denotes the integer set.
$\mathbb{Z}_{+}$ denotes the positive integer set.
The cardinality of a set $A$ is denoted as $|A|$.
\section{Problem Formulation}\label{sec:problem_formulation}
The configuration space, $X \subseteq \mathbb{R}^n$, is all positions in space reachable by an agent. Denote an agent positions set $\mathcal{X} = \{\boldsymbol{p}_1,\cdots,\boldsymbol{p}_{n_a}\}$ of $n_a$ agents, and $\boldsymbol{p}_i \in X$ is the position of agent $i$. Denote a task positions set $\mathcal{T} = \{\boldsymbol{t}_1,\cdots,\boldsymbol{t}_{n_t}\}$ of $n_t$ tasks, and $\boldsymbol{t}_i \in X$ is the position of task $i$.
Define the agent and tasks index sets $\mathcal{I} \triangleq \{1,\cdots,n_a\}$ and $\mathcal{J} \triangleq \{1,\cdots,n_t\}$, respectively.
Suppose that an agent completes a task when the distance between two entities is less than a prescribed non-negative constant $\epsilon$, i.e. $||\boldsymbol{p}_i - \boldsymbol{t}_j||_2 \leq \epsilon, \ \epsilon \geq 0$.
Denote an obstacle positions set as $\mathcal{O} = \{ \boldsymbol{o}_1, \cdots, \boldsymbol{o}_{n_{o}} \}$ of $n_o$ obstacles, where $\boldsymbol{o}_i \in X$ is the position of obstacle $i$. Denote $\mathcal{P}_i \triangleq (\boldsymbol{p}^0_i, \boldsymbol{p}^1_i, \cdots, \boldsymbol{p}^{n_{p,i}-1}_i) \subset X$ as an ordered sequence of positions associated with agent $i$ which denotes a path starting from $\boldsymbol{p}^0_i$ and ending at $\boldsymbol{p}^{n_{p,i}-1}_i$, where $n_{p,i} \triangleq |\mathcal{P}_i|$ denotes the number of positions in $\mathcal{P}_i$.
Derived from \cite{choi2009consensus}, the collision-aware MATA problem is written as the following integer programming:
\begin{mini!}|s|[2]
{ \boldsymbol{x}, \boldsymbol{r}_1, \cdots, \boldsymbol{r}_{n_a} }{\textstyle \sum_{i=1}^{n_a} \sum_{j=1}^{n_t} c_{ij}(\boldsymbol{x}_i, \boldsymbol{r}_i, \mathcal{O})x_{ij} \label{task_allocation_prob:obj}}
{\label{task_allocation_prob}}{}
\addConstraint{\textstyle \sum_{j=1}^{n_t}x_{ij} \leq n_t, \ \forall i \in \mathcal{I}}{ \label{task_allocation_prob:task_constraint}}
\addConstraint{\textstyle \sum_{i=1}^{n_a}x_{ij} = 1, \ \forall j \in \mathcal{J}}{ \label{task_allocation_prob:agent_constraint}}
\addConstraint{\textstyle \sum_{i=1}^{n_a} \sum_{j=1}^{n_t} x_{ij} = n_t }{ \label{task_allocation_prob:complete_constraint}}
\addConstraint{ x_{ij} \in \{0,1\}, \ \forall (i,j) \in \mathcal{I} \times \mathcal{J},}{ \label{task_allocation_prob:decision_variable}}
\end{mini!}
where $x_{ij}=1$ if task $j$ is assigned to agent $i$ and $0$ otherwise; $\boldsymbol{x}_i \in \{0,1\}^{n_t}$ is the task assignment vector for agent $i$, $x_{ij}$ is the $j$-th element of $\boldsymbol{x}_i$, and $\boldsymbol{x} = \matt{ \boldsymbol{x}_1^{\prime} & \cdots & \boldsymbol{x}_{n_a}^{\prime} }^{\prime} \in \{0,1\}^{n_t n_a}$.
The vector $\boldsymbol{r}_i \in (\mathcal{J} \cup \{\emptyset\})^{n_t}$ denotes an ordered sequence of tasks, i.e., the task allocation order, for agent $i$; its $k$-th element is $j \in \mathcal{J}$ if task $j$ is the $k$-th task of agent $i$'s assignment; $\boldsymbol{r}_i = \emptyset$ if agent $i$ has no assignment.
The collision-aware cost of task $j$ being assigned to agent $i$ followed by an order $\boldsymbol{r_i}$ is defined by $c_{ij}(\boldsymbol{x}_i, \boldsymbol{r}_i, \mathcal{O}) \geq 0$. In the context of mission planning, this cost typically represents traveling distance, fuel consumption, etc.
Constraint \eqref{task_allocation_prob:task_constraint} indicates that each agent can be at most assigned with $n_t$ tasks;
\eqref{task_allocation_prob:agent_constraint} requires that each task must be assigned to only one agent;
\eqref{task_allocation_prob:complete_constraint} enforces that every task must be assigned.
Denote a task allocation order set $\mathcal{R} \triangleq \{ \boldsymbol{r}_1, \cdots, \boldsymbol{r}_{n_a} \}$.
Given an order set $\mathcal{R}$ and the current positions of agents $\mathcal{X}$, the collision-aware MAPF problem is written as:
\begin{mini}|s|
{\mathcal{P}_1,\cdots,\mathcal{P}_{n_a}}{\textstyle \sum_{i=1}^{n_a} \ell_i(\mathcal{P}_i)}
{\label{multi_agent_path_planning}}{}
\addConstraint{\boldsymbol{p}_i^0 = \boldsymbol{p}_i, \ \forall i \in \mathcal{I}}
\addConstraint{\mathcal{R} \text{ is determined by } \eqref{task_allocation_prob}}
\addConstraint{\mathcal{P}_i \text{ satisfies the order } \boldsymbol{r}_i, \ \forall i \in \mathcal{I}}
\addConstraint{\mathcal{P}_i \cap \mathcal{O} = \emptyset, \ \forall i \in \mathcal{I},}
\end{mini}
where $\ell_i(\mathcal{P}_i) = \sum_{j=0}^{|\mathcal{P}_i|-2} ||\boldsymbol{p}_i^{j+1} - \boldsymbol{p}_i^{j}||_2$ is the traveling distance of path $\mathcal{P}_i$. This paper assumes that $\mathcal{P}_i \cap \mathcal{O} = \emptyset$ if and only if $||\boldsymbol{p}_i^j - \boldsymbol{o}_k||_2 \geq \delta > 0 \ \forall \boldsymbol{p}_i^j \in \mathcal{P}_i \text{ and } \forall \boldsymbol{o}_k \in \mathcal{O}$.
Based on \eqref{task_allocation_prob} and \eqref{multi_agent_path_planning}, the collision-aware MAMP problem in this paper is formulated as:
\begin{mini}|s|
{\boldsymbol{x}, \mathcal{R}, \mathcal{P}}{\textstyle \sum_{i=1}^{n_a} \sum_{j=1}^{n_t} c_{ij}(\boldsymbol{x}_i, \boldsymbol{r}_i, \mathcal{O})x_{ij}}
{\label{problem_this}}{}
\addConstraint{\textstyle \sum_{j=1}^{n_t} x_{ij} \leq n_t, \ \forall i \in \mathcal{I}}
\addConstraint{\textstyle \sum_{i=1}^{n_a} x_{ij} = 1, \ \forall j \in \mathcal{J}}
\addConstraint{\textstyle \sum_{i=1}^{n_a} \sum_{j=1}^{n_t} x_{ij} = n_t}
\addConstraint{ x_{ij} \in \{0,1\}, \ \forall (i,j) \in \mathcal{I} \times \mathcal{J}}
\addConstraint{\mathcal{P} \triangleq \{ \mathcal{P}_1, \cdots, \mathcal{P}_{n_a} \} \text{ is determined by } \eqref{multi_agent_path_planning}}
\addConstraint{\mathcal{R} \text{ is determined by } \eqref{task_allocation_prob},}
\end{mini}
where $\textstyle \sum_{j=1}^{n_t} c_{ij}(\boldsymbol{x}_i, \boldsymbol{r}_i, \mathcal{O})x_{ij}$ evaluates agent $i$'s collision-aware traveling distance given a particular assignment and allocation order.
Solving the task assignment $\boldsymbol{x}$, the allocation order $\mathcal{R}$, and the collision-free path $\mathcal{P}$ altogether is challenging because $\boldsymbol{x}$, $\mathcal{R}$, and $\mathcal{P}$ are coupled together in \eqref{task_allocation_prob}, \eqref{multi_agent_path_planning}, and \eqref{problem_this}.
Furthermore, the collision-aware MAMP problem \eqref{problem_this} is not even tractable since it is proven to be NP-hard \cite{gerkey2004formal}. This paper attempts to obtain a sub-optimal solution to the collision-aware MAMP problem scalably and in real-time, especially when the environment is unconstructed and cluttered and the obstacles and tasks are potentially dynamic.
\section{Algorithm}\label{sec:approach}
This paper proposes a Distributed Real-time Multi-agent Mission Planning (DrMaMP) algorithm to obtain a sub-optimal solution to \eqref{problem_this} in a scalable way.
Instead of considering the exact coupled cost $c_{ij}(\boldsymbol{x}_i, \boldsymbol{r}_i, \mathcal{O})$, DrMaMP utilizes task-based heuristics to approximate the cost of an ordered path by an unordered set. With this approximation, DrMaMP can partition the entire task set into several subsets and assign each task subset to one agent given the unordered heuristics. Then each agent only needs to solve a sub-problem, i.e. a single-agent mission planning problem.
Specifically, DrMaMP consists of three phases:
\begin{enumerate}[noitemsep,nolistsep]
\item Task Segmentation: partitioning the entire task set into several subsets;
\item Cluster Assignment: assigning each agent a task subset;
\item Single-Agent Mission Planning: finding an optimal task allocation order and collision-free path for each agent.
\end{enumerate}
In Phase 1, given an objective defined in Section \ref{subsec:task_segmentation}, the entire task set $\mathcal{T}$ is partitioned into $n_a$ subsets.
In Phase 2, given an objective defined in Section \ref{subsec:cluster_assign}, each agent is assigned one task subset by solving an assignment problem.
In Phase 3, after each agent is assigned with a task subset, it needs to solve a single-agent mission planning problem individually to find the optimal task allocation order and collision-free path. The computation can be distributed to each agent.
A detailed explanation of the 3 phases is shown in the following subsections.
\begin{figure*}[h]
\centering
\begin{subfigure}{.25\textwidth}
\centering
\includegraphics[width=\linewidth]{example_before.png}
\caption{An example problem}
\label{example:before}
\end{subfigure}
\hfill
\begin{subfigure}{.25\textwidth}
\centering
\includegraphics[width=\linewidth]{example_cluster_assignment.png}
\caption{Task segmentation and cluster assignment result}
\label{example:cluster}
\end{subfigure}
\hfill
\begin{subfigure}{.25\textwidth}
\centering
\includegraphics[width=\linewidth]{example_after.png}
\caption{Mission planning result}
\label{example:after}
\end{subfigure}
\caption{ An example for MAMP with 8 agents and 40 tasks. (a) shows the problem to be solved in a $50 \times 50$ grid map with 150 obstacles, where the blue dots and red crosses indicate the positions of agents and tasks, respectively; (b) shows the task segmentation and cluster assignment result, where those tasks in the same color are within the same cluster, the purple stars indicate the positions of cluster centroids and an edge between an agent and a cluster centroid represents assignment; (c) shows the task allocation orders and collision-free paths, where the dashed lines in green indicate the paths. $N=300$ in Algorithm \ref{alg:algo_task_segment}. The computation time is 44.6 ms.}
\label{comp:fixed_target}
\end{figure*}
\subsection{Task Segmentation}\label{subsec:task_segmentation}
The entire task set $\mathcal{T}$ is partitioned into $n_a$ clusters $\{\mathcal{T}_1, \cdots, \mathcal{T}_{n_a}\}$, where each cluster includes possibly many tasks. Note that $\mathcal{T}_i$ has not been assigned to any agents yet. The tasks within a cluster have a minimal distance to the centroid of this cluster. Such a task segmentation can be obtained by an iterative k-means clustering algorithm \cite{lloyd1982least}, which minimizes the summation of the within-cluster sum of squares (WCSS), i.e.
\begin{equation}
\label{eq:problem_k_means}
\begin{aligned}
\min_{\mathcal{T}_1, \cdots, \mathcal{T}_{n_a}} \quad & \textstyle \sum_{i=1}^{n_a} \sum_{\boldsymbol{t} \in \mathcal{T}_i} ||\boldsymbol{t}-\boldsymbol{c}_i||_2^2 \\
\textrm{s.t.} \quad
& \mathcal{T} = \cup_{i=1}^{n_a} \mathcal{T}_i, \\
& \mathcal{T}_i \cap \mathcal{T}_j = \emptyset, \ \forall i \neq j, \\
& \boldsymbol{c}_i = (\Sigma_{\boldsymbol{t} \in \mathcal{T}_i} \ \boldsymbol{t}) / |\mathcal{T}_i|, \ \forall i,
\end{aligned}
\end{equation}
where $\mathcal{T}_i = \{ \boldsymbol{t}_j \ | \ \forall j \in \mathcal{I}_{c,i} \}$ and $\mathcal{I}_{c,i}$ is the index set of the tasks within cluster $\mathcal{T}_i$; $\boldsymbol{c}_i \in \mathbb{R}^n$ is the centroid of the tasks within $\mathcal{T}_i$. Denote $\mathcal{C} \triangleq \{ \boldsymbol{c}_1, \cdots, \boldsymbol{c}_{n_a} \}$.
As described in \eqref{problem_this}, the objective is to minimize the total traveling distance. However, the cost of an agent visiting a known task set is unknown before a task allocation order is determined. Hence, for each task subset $\mathcal{T}_i$, an ordered sequence's length is approximated by an unordered set's WCSS, i.e., $\sum_{\boldsymbol{t} \in \mathcal{T}_i} ||\boldsymbol{t}-\boldsymbol{c}_i||_2^2$, since the tasks within $\mathcal{T}_i$ have a smaller WCSS with respect to $\boldsymbol{c}_i$ than to any $\boldsymbol{c}_j$, $j \neq i$. The task segmentation problem \eqref{eq:problem_k_means} can be solved iteratively; the details are in Algorithm~\ref{alg:algo_task_segment}, and a minimal sketch is given below. An example is shown in Fig. \ref{example:cluster}.
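For concreteness, the sketch below gives one possible Python reading of this iterative segmentation; the array layout, \texttt{numpy} usage, and function name are illustrative assumptions, not the released DrMaMP implementation.
\begin{verbatim}
import numpy as np

def segment_tasks(tasks, centroids, n_iter):
    # tasks: (n_t, dim) task positions; centroids: (n_a, dim) initial
    # centroids, e.g. from k-means++. Returns the per-cluster task
    # index sets and the updated centroids.
    assert n_iter >= 1
    n_a = centroids.shape[0]
    for _ in range(n_iter):
        # assign every task to its nearest centroid
        dist = np.linalg.norm(tasks[:, None, :] - centroids[None, :, :],
                              axis=2)
        labels = dist.argmin(axis=1)
        # recompute each nonempty cluster centroid as the mean of its tasks
        for j in range(n_a):
            members = tasks[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    index_sets = [np.flatnonzero(labels == j) for j in range(n_a)]
    return index_sets, centroids
\end{verbatim}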
\subsection{Cluster Assignment}\label{subsec:cluster_assign}
Since an ordered sequence's length is approximated by an unordered set's WCSS in Algorithm \ref{alg:algo_task_segment}, the cost of agent $i$ visiting a task subset is approximated by the distance between the agent and the subset's centroid plus the subset's WCSS, which is independent of the task allocation order.
Then the task subset (cluster) assignment problem is written as the integer linear program \eqref{eq:problem_cluster_assignment},
where $\mathcal{I}_c$ denotes the cluster index set; $y_{ij}=1$ if agent $i$ is assigned cluster $j$ and $0$ otherwise; $w_{ij} \triangleq ||\boldsymbol{p}_i-\boldsymbol{c}_j||^2_2 + \sum_{\boldsymbol{t} \in \mathcal{T}_j} || \boldsymbol{t}-\boldsymbol{c}_j ||^2_2$ defines the cost of cluster $j$ being assigned to agent $i$, where the first term evaluates how far agent $i$ is from cluster $j$ and the second term estimates the cost of agent $i$ visiting all the tasks within cluster $j$.
\begin{mini!}|s|[2]
{\boldsymbol{y}}{\textstyle \sum_{i \in \mathcal{I}} \sum_{j \in \mathcal{I}_{c}} w_{ij}y_{ij} \label{eq:problem_cluster_assignment:obj}}
{\label{eq:problem_cluster_assignment}}{}
\addConstraint{\textstyle \sum_{i \in \mathcal{I}} y_{ij} = 1, \ \forall j \in \mathcal{I}_{c}}{\label{eq:problem_cluster_assignment:agent_constraint}}
\addConstraint{\textstyle \sum_{j \in \mathcal{I}_{c}} y_{ij} \leq 1, \ \forall i \in \mathcal{I}}{\label{eq:problem_cluster_assignment:cluster_constraint}}
\addConstraint{\textstyle \sum_{i \in \mathcal{I}}\sum_{j \in \mathcal{I}_{c}} y_{ij} = n_a}{\label{eq:problem_cluster_assignment:complete_constraint}}
\addConstraint{y_{ij}\in\{0,1\}, \ \forall (i,j) \in \mathcal{I} \times \mathcal{I}_{c}.}{\label{eq:problem_cluster_assignment:decision_variable_constraint}}
\end{mini!}
Constraint \eqref{eq:problem_cluster_assignment:agent_constraint} ensures that each cluster is assigned to exactly one agent; \eqref{eq:problem_cluster_assignment:cluster_constraint} guarantees that each agent is assigned at most one cluster; \eqref{eq:problem_cluster_assignment:complete_constraint} enforces that no cluster is left unassigned.
Constraint \eqref{eq:problem_cluster_assignment:cluster_constraint} covers the situation where the number of agents is greater than the number of nonempty clusters, which can happen at run-time as tasks are completed.
The cluster assignment problem \eqref{eq:problem_cluster_assignment} can be solved by constrained integer linear programming solvers such as OR-Tools \cite{ortools}; a minimal sketch is given below.
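For illustration, the following Python sketch solves \eqref{eq:problem_cluster_assignment} with the OR-Tools linear solver wrapper; the helper name and data layout are assumptions rather than the released DrMaMP code. Constraint \eqref{eq:problem_cluster_assignment:complete_constraint} is implied by \eqref{eq:problem_cluster_assignment:agent_constraint} once empty clusters are removed, so the sketch omits it.
\begin{verbatim}
from ortools.linear_solver import pywraplp

def assign_clusters(w):
    # w[i][j]: cost of assigning cluster j to agent i (distance + WCSS).
    # Assumes at least as many agents as nonempty clusters.
    n_agents, n_clusters = len(w), len(w[0])
    s = pywraplp.Solver.CreateSolver("SCIP")
    y = [[s.BoolVar(f"y_{i}_{j}") for j in range(n_clusters)]
         for i in range(n_agents)]
    for j in range(n_clusters):       # each cluster gets exactly one agent
        s.Add(sum(y[i][j] for i in range(n_agents)) == 1)
    for i in range(n_agents):         # each agent gets at most one cluster
        s.Add(sum(y[i][j] for j in range(n_clusters)) <= 1)
    s.Minimize(sum(w[i][j] * y[i][j]
                   for i in range(n_agents) for j in range(n_clusters)))
    assert s.Solve() == pywraplp.Solver.OPTIMAL
    return {j: i for i in range(n_agents) for j in range(n_clusters)
            if y[i][j].solution_value() > 0.5}
\end{verbatim}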
\begin{algorithm}
\caption{Task Segmentation}\label{alg:algo_task_segment}
\DontPrintSemicolon
\KwIn{$\mathcal{T}$, $N \in \mathbb{Z}_{+}$}
\KwOut{$\{\mathcal{T}_1, \cdots, \mathcal{T}_{n_a}\}$, $\{ \mathcal{I}_{c,1}, \cdots, \mathcal{I}_{c,n_a} \}$, $\mathcal{C}$}
Initialize \textbf{Output} by k-means++ \cite{arthur2006k}, $iter = 0$\;
\While {$iter < N$} {
\lFor {$j = 1 \ \text{to} \ n_a$}{$\mathcal{I}_{c,j} \gets \emptyset$}
\For {$\text{task } \boldsymbol{t}_i = \boldsymbol{t}_1 \ \text{to} \ \boldsymbol{t}_{n_t}$} {
$idx$ $\gets$ the index of $\boldsymbol{t}_i$'s nearest centroid \;
$\mathcal{I}_{c,idx}.$append($i$)\;
}
\For {$j = 1 \ \text{to} \ n_a$} {
$\boldsymbol{c}_j$ $\gets$ mean of all tasks within cluster $j$\;
}
$iter \gets iter + 1$\;
}
\lFor {$i = 1 \ \text{to} \ n_a$} {
$\mathcal{T}_i \gets \{\boldsymbol{t}_j \ | \ \forall j \in \mathcal{I}_{c, i} \}$
}
\Return $\{\mathcal{T}_1, \cdots, \mathcal{T}_{n_a}\}$, $\{ \mathcal{I}_{c,1}, \cdots, \mathcal{I}_{c,n_a} \}$, $\mathcal{C}$
\end{algorithm}
\begin{algorithm}
\caption{Cluster Assignment}\label{alg:cluster_assignment}
\DontPrintSemicolon
\KwIn{$\{\mathcal{T}_1, \cdots, \mathcal{T}_{n_a}\}$, $\mathcal{X}$, $\mathcal{C}$}
\KwOut{$\{\hat{\mathcal{T}}_1, \cdots, \hat{\mathcal{T}}_{n_a}\}$}
\For {$\text{agent } i = 1 \ \text{to} \ n_a$} {
\For {$\text{cluster } j = 1 \ \text{to} \ n_a$} {
$w_{ij} \gets ||\boldsymbol{p}_i-\boldsymbol{c}_j||^2_2 + \sum_{\boldsymbol{t} \in \mathcal{T}_j} || \boldsymbol{t}-\boldsymbol{c}_j ||^2_2$\;
}
}
$\boldsymbol{y}^* \gets$ Solve \eqref{eq:problem_cluster_assignment} by a numerical solver\;
$\{\hat{\mathcal{T}}_1, \cdots, \hat{\mathcal{T}}_{n_a}\} \gets$ parse\_result($\{\mathcal{T}_1, \cdots, \mathcal{T}_{n_a}\}, \ \boldsymbol{y}^*$)\;
\Return $\{\hat{\mathcal{T}}_1, \cdots, \hat{\mathcal{T}}_{n_a}\}$
\end{algorithm}
Denote by $\hat{\mathcal{T}}_i$ the task cluster assigned to agent $i$. Details about the cluster assignment are shown in Algorithm~\ref{alg:cluster_assignment}. An example is shown in Fig. \ref{example:cluster}.
\subsection{Single-Agent Mission Planning}
After each agent is assigned a task cluster, the task allocation orders and the collision-free paths need to be determined. This problem can be distributed to the $n_a$ agents in parallel: agent $i$ solves its own sub-problem by formulating it as a Travelling Salesperson Problem (TSP), where the nodes are the agent itself and its assigned tasks. A path-finding algorithm generates collision-free paths for every pair of nodes, and the lengths of these paths are the traveling costs between nodes. This paper utilizes Lazy Theta* \cite{nash2010lazy} as the path-finding algorithm because it requires fewer line-of-sight checks.
The single-agent mission planning problem can be written as the integer linear program \eqref{eq:tsp} with the Miller–Tucker–Zemlin (MTZ) formulation \cite{miller1960integer},
\begin{mini!}|s|[2]
{\boldsymbol{z},\boldsymbol{u}}{\textstyle \sum_{i=1}^{n_m} \sum_{j=1}^{n_m} d_{ij}z_{ij} \label{eq:tsp:obj}}
{\label{eq:tsp}}{}
\addConstraint{z_{ij} \in \{0,1\}, \ \forall i,j=1,\cdots,n_m}{\label{eq:tsp:decision_variable}}
\addConstraint{u_i \in \mathbb{Z}, \ \forall i=2,\cdots,n_m}{\label{eq:tsp:dummpy_variable}}
\addConstraint{u_i-u_j+n_m z_{ij} \leq n_m-1, \ 2 \leq i \neq j \leq n_m}{\label{eq:tsp:sub_tour_constraint}}
\addConstraint{1 \leq u_i \leq n_m-1, \ \forall i=2,\cdots,n_m}{\label{eq:tsp:sub_tour_bound}}
\addConstraint{\textstyle \sum_{i=1}^{n_m} z_{ij}=1, \ \forall j=2,\cdots,n_m}{\label{eq:tsp:start_constraint}}
\addConstraint{\textstyle \sum_{j=1}^{n_m} z_{ij}=1, \ \forall i=1,\cdots,n_m}{\label{eq:tsp:end_constraint}}
\addConstraint{\textstyle \sum_{i=1}^{n_m} z_{i1}=0,}{\label{eq:tsp:no_go_back_home_constraint}}
\end{mini!}
where $n_m \triangleq |\hat{\mathcal{T}}_i| + 1$ denotes the number of nodes; node 1 always indicates the agent's current position; $z_{ij}=1$ if the agent goes from node $i$ to node $j$, with $\boldsymbol{z} \in \{0,1\}^{n_m^2}$; $\boldsymbol{u} \in \mathbb{Z}^{n_m-1}$ is a vector of dummy variables indicating the tour ordering, such that $u_i < u_j$ implies node $i$ is visited before node $j$; $d_{ij}$ is the cost of the agent traveling from node $i$ to node $j$, which is the length of the underlying collision-free path. A minimal solver sketch is given below.
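The following Python sketch solves this sub-problem with OR-Tools. It is a slight variant of \eqref{eq:tsp}: task nodes are given out-degree at most one so that the open path terminates cleanly, and node indexing starts at 0; the helper name is illustrative, not the released implementation.
\begin{verbatim}
from ortools.linear_solver import pywraplp

def order_tasks(d):
    # d[i][j]: collision-aware travel cost between nodes (path lengths
    # from Lazy Theta*); node 0 is the agent's current position.
    # Returns the visiting order of the task nodes 1..n-1.
    n = len(d)
    s = pywraplp.Solver.CreateSolver("SCIP")
    z = {(i, j): s.BoolVar(f"z_{i}_{j}")
         for i in range(n) for j in range(n) if i != j}
    u = {i: s.IntVar(1, n - 1, f"u_{i}") for i in range(1, n)}
    for j in range(1, n):                 # every task is entered once
        s.Add(sum(z[i, j] for i in range(n) if i != j) == 1)
    for i in range(1, n):                 # every task is left at most once
        s.Add(sum(z[i, j] for j in range(1, n) if j != i) <= 1)
    s.Add(sum(z[0, j] for j in range(1, n)) == 1)  # the agent departs once
    s.Add(sum(z[i, 0] for i in range(1, n)) == 0)  # and never returns
    for (i, j) in z:                      # MTZ subtour elimination
        if i >= 1 and j >= 1:
            s.Add(u[i] - u[j] + n * z[i, j] <= n - 1)
    s.Minimize(sum(d[i][j] * z[i, j] for (i, j) in z))
    assert s.Solve() == pywraplp.Solver.OPTIMAL
    return sorted(range(1, n), key=lambda i: u[i].solution_value())
\end{verbatim}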
\begin{algorithm}
\caption{Task Allocation and Path Finding}\label{alg:single}
\DontPrintSemicolon
\KwIn{$\{\hat{\mathcal{T}}_1, \cdots, \hat{\mathcal{T}}_{n_a}\},\mathcal{X},\mathcal{O}$}
\KwOut{$\{\mathcal{P}_1,\cdots,\mathcal{P}_{n_a}\}, \{\boldsymbol{r}_1,\cdots,\boldsymbol{r}_{n_a}\}$}
// the $n_a$ agents execute the content of \textbf{parfor} in parallel\;
$\textbf{par}$\For {$\text{agent } i = 1 \ \text{to} \ n_a$} {
Initialize $P_{lib}$ as empty\;
\For {$\text{each pair } (start, goal) \ \text{in} \ (\hat{\mathcal{T}}_i \cup \{\boldsymbol{p}_i\})$} {
$\mathcal{O}_{now} \gets \mathcal{O} \cup (\mathcal{X} \setminus \{\boldsymbol{p}_i\})$\;
$P_{start,goal} \gets$ path\_finding($start, goal, \mathcal{O}_{now}$)\;
$P_{lib}.$append($P_{start,goal}$)\;
}
\For {$\text{node } j = 1 \ \text{to} \ 1+|\hat{\mathcal{T}}_i|$} {
\For {$\text{node } l = 1 \ \text{to} \ 1+|\hat{\mathcal{T}}_i|$} {
$P_{j,l} \gets $load\_path($P_{lib},j,l$)\;
$d_{jl} \gets $compute\_cost($P_{j,l}$)\;
}
}
$\boldsymbol{z}^*, \boldsymbol{u}^* \gets$ solve \eqref{eq:tsp} by a numerical solver\;
$\mathcal{P}_i, \boldsymbol{r}_i \gets$ parse\_path($P_{lib}, \boldsymbol{z}^*, \boldsymbol{u}^*$)\;
}
\Return $\{\mathcal{P}_1,\cdots,\mathcal{P}_{n_a}\}, \{\boldsymbol{r}_1,\cdots,\boldsymbol{r}_{n_a}\}$
\end{algorithm}
Constraints \eqref{eq:tsp:dummpy_variable} - \eqref{eq:tsp:sub_tour_bound} guarantee a single tour covering all nodes \cite{miller1960integer}. Constraints \eqref{eq:tsp:start_constraint} - \eqref{eq:tsp:end_constraint} guarantee that each node is entered from exactly one other node and departs to exactly one other node. Constraint \eqref{eq:tsp:no_go_back_home_constraint} indicates that the agent does not go back to its initial position after visiting all the tasks; one can change \eqref{eq:tsp:no_go_back_home_constraint} if the agent needs to return to a base. To ensure that there is no collision between agents, each agent considers the other agents as obstacles.
Details are shown in Algorithm \ref{alg:single}. An example is shown in Fig. \ref{example:after}.
\subsection{DrMaMP at Run-time}\label{subsec:DrMaMP_in_realtime}
This subsection illustrates how DrMaMP operates at run-time. First, DrMaMP utilizes k-means++ \cite{arthur2006k} to initialize the cluster centroids. During the mission, the centroids from the previous timestamp serve as the initial centroids for the next timestamp. As tasks are completed, the number of nonempty clusters $n_c$ might become less than $n_a$. If $n_c < n_a$, one needs to remove the empty clusters and revise constraint \eqref{eq:problem_cluster_assignment:complete_constraint} as $\sum_{i \in \mathcal{I}} \sum_{j \in \mathcal{I}_{c}} y_{ij}=n_c$.
Note that all the constraints are compatible with the case where $n_c < n_a$.
If there exist dynamic obstacles and tasks, DrMaMP updates their information (positions) in each timestamp. More details are shown in Algorithm \ref{alg:DrMaMP}.
\begin{algorithm}
\caption{DrMaMP at Run-time}\label{alg:DrMaMP}
\DontPrintSemicolon
Initialize $\mathcal{C}$ by k-means++ \cite{arthur2006k}\;
\While {$\mathcal{T} \neq \emptyset$} {
$\mathcal{T} \gets$ update task set \;
$\mathcal{X} \gets$ update agent position \;
$\mathcal{O} \gets$ update obstacle \;
$\{\mathcal{T}_1, \cdots, \mathcal{T}_{n_a}\}, \mathcal{C} \gets$ Algorithm \ref{alg:algo_task_segment} with the previous $\mathcal{C}$ \;
remove empty task clusters \;
$\{\hat{\mathcal{T}}_1, \cdots, \hat{\mathcal{T}}_{n_a}\} \gets$ Algorithm \ref{alg:cluster_assignment} \;
$\{\mathcal{P}_1,\cdots,\mathcal{P}_{n_a}\}, \{\boldsymbol{r}_1,\cdots,\boldsymbol{r}_{n_a}\} \gets$ Algorithm \ref{alg:single} \;
agents move one step along $\{\mathcal{P}_1,\cdots,\mathcal{P}_{n_a}\}$ \;
time moves one step forward \;
$\boldsymbol{t}_j \gets$ the currently assigned task of agent $i, \ \forall i \in \mathcal{I}$ \;
delete task $\boldsymbol{t}_j$ if $||\boldsymbol{p}_i-\boldsymbol{t}_j||_2 \leq \epsilon, \ \forall i \in \mathcal{I}$ \;
}
\end{algorithm}
\section{Comparisons and Experiments}\label{sec:results}
This section presents several experiments with static/dynamic obstacles/tasks and conducts scalability and optimality comparisons between DrMaMP and a decentralized method \cite{bertuccelli2009real}. From here on, CBBA is used interchangeably with the method in \cite{bertuccelli2009real} because the latter consists of CBBA plus posterior path-finding. Based on the comparisons, DrMaMP outperforms \cite{bertuccelli2009real} on both metrics. In addition, this section analyzes the computational burden of DrMaMP and presents its optimality gap in small-size problems.
DrMaMP is written in C++ and compiled as a Python library to be invoked. The integer programs in Algorithms \ref{alg:cluster_assignment} and \ref{alg:single} are solved by Google OR-Tools \cite{ortools}. The C++ implementation utilizes multithreading to emulate the distributed execution, i.e., the \emph{parfor} in Line 2 of Algorithm \ref{alg:single}. First, a main thread, i.e., the central agent, runs Algorithm \ref{alg:algo_task_segment} and Algorithm \ref{alg:cluster_assignment}. Then the results of Algorithm \ref{alg:cluster_assignment} are distributed to multiple agents/threads, where each thread runs Algorithm \ref{alg:single} in parallel for one agent.
All the results are obtained on a desktop with a 2.8 GHz Intel Core i7-7700HQ CPU and 16 GB memory.
This implementation does not require a GPU, but one can accelerate it with a GPU if needed.
\subsection{Experiments} \label{subsec:experiments}
The test area is 6 m $\times$ 5.6 m and the grid map size is $120 \times 112$. DrMaMP runs on the same desktop and performs real-time mission planning for two Parrot Mambo quadrotors. In the experiments, each quadrotor follows the discrete paths returned by DrMaMP.
Then a low-level trajectory tracking controller\footnote[1]{ \url{github.com/zehuilu/Mambo-Tracking-Interface} } sends the desired control commands to each Mambo individually, given the desired paths.
Some details are explained in Fig. \ref{experiment:screenshot}.
In the case of a dynamic task, a cone moves from one side to the other, and DrMaMP updates its planning result accordingly.
The video also includes simulations with many dynamic obstacles in a cluttered environment.
Footage from these experiments is included in a supplementary video file\footnote[2]{\url{youtu.be/il3YxhXgGac}}.
\subsection{Comparison with Increased Number of Agents} \label{subsec:comp_agen}
Sections \ref{subsec:comp_agen} and \ref{subsec:comp_task} show scalability comparisons between DrMaMP and \cite{bertuccelli2009real}. The grid map is $50 \times 50$.
Given a particular number of agents $n_a$ and tasks $n_t$, there are 100 different scenarios in which the positions of agents and tasks are generated randomly. Each scenario contains 200 randomly generated obstacles; each method runs 20 times, and the average computation time and total distance are collected.
Although \cite{bertuccelli2009real} utilizes Dijkstra's algorithm \cite{dijkstra1959note} as the path-finder, this paper replaces it with Lazy Theta* \cite{nash2010lazy} to present a fair computational comparison. In other words, this paper removes the difference between the two path-finders, although Lazy Theta* is faster, occupies less memory, and generates shorter paths due to any-angle movement.
\begin{figure*}[h]
\centering
\begin{subfigure}{.30\textwidth}
\centering
\includegraphics[width=\linewidth]{time_fixed_target_two.png}
\caption{Computation time of two methods}
\label{comp:fixed_target:time_two}
\end{subfigure}
\hfill
\begin{subfigure}{.30\textwidth}
\centering
\includegraphics[width=\linewidth]{time_fixed_target_one.png}
\caption{Computation time of DrMaMP}
\label{comp:fixed_target:time_one}
\end{subfigure}
\hfill
\begin{subfigure}{.30\textwidth}
\centering
\includegraphics[width=\linewidth]{distance_fixed_target_two.png}
\caption{Total distance of two methods}
\label{comp:fixed_target:distance_two}
\end{subfigure}
\caption{The computation time and total distance results of two methods: DrMaMP and \cite{bertuccelli2009real} with an increased number of agents. The number of unassigned tasks $n_t = 3n_a$. $N=300$.}
\label{comp:fixed_target}
\end{figure*}
In Fig. \ref{comp:fixed_target:time_two} and Fig. \ref{comp:fixed_target:time_one}, the computation time of \cite{bertuccelli2009real} increases exponentially, exceeding 4.5 seconds when there are 20 agents and 60 tasks, whereas the computation time of DrMaMP increases linearly and remains on the order of milliseconds.
The fully centralized method \cite{ren2021ms} considers a similar scenario with a $32 \times 32$ map and random obstacles. According to Fig. 3 of \cite{ren2021ms}, it takes about 55 seconds to generate sequences for 5-20 agents and 10-50 tasks.
This paper omits the comparison with \cite{ren2021ms} because it is not a real-time algorithm.
CBBA's computation time increases exponentially because all agents need to run auctions iteratively and repeat them for every task. The negotiation process for each task becomes more time-consuming and less efficient as $n_a$ grows. For DrMaMP, in contrast, an increased $n_a$ only slightly raises the burden of Algorithms \ref{alg:algo_task_segment} and \ref{alg:cluster_assignment}. The most computationally heavy part of DrMaMP is finding the collision-free path between every pair of nodes in each sub-problem, i.e., Lines 4-7 of Algorithm \ref{alg:single}. Since Algorithm \ref{alg:single} is distributed over agents, an increased $n_a$ does not raise the computational load significantly. Section \ref{subsec:comput_analysis} analyzes the computational burden of DrMaMP and shows consistency with these comparisons.
As for optimality (total distance), DrMaMP outperforms \cite{bertuccelli2009real} because DrMaMP utilizes the global information of tasks and agents in Algorithms \ref{alg:algo_task_segment} and \ref{alg:cluster_assignment}, while \cite{bertuccelli2009real} auctions one task at a time. The fully decentralized auction process thus does not utilize global information, resulting in worse optimality. Moreover, the bid price in \cite{bertuccelli2009real} is the Euclidean distance between agent and task, and \cite{bertuccelli2009real} only generates collision-free paths after the task order is determined. In a cluttered environment, the Euclidean distance is not the actual cost. Section \ref{subsec:comput_analysis} analyzes and compares the computational burden when \cite{bertuccelli2009real} utilizes a collision-aware cost as the bid price.
\subsection{Comparison with Increased Number of Tasks}\label{subsec:comp_task}
\begin{figure*}[h]
\centering
\begin{subfigure}{.30\textwidth}
\centering
\includegraphics[width=\linewidth]{time_fixed_agent_two.png}
\caption{Computation time of two methods}
\label{comp:fixed_agent:time_two}
\end{subfigure}
\hfill
\begin{subfigure}{.30\textwidth}
\centering
\includegraphics[width=\linewidth]{time_fixed_agent_one.png}
\caption{Computation time of DrMaMP}
\label{comp:fixed_agent:time_one}
\end{subfigure}
\hfill
\begin{subfigure}{.30\textwidth}
\centering
\includegraphics[width=\linewidth]{distance_fixed_agent_two.png}
\caption{Total distance of two methods}
\label{comp:fixed_agent:distance_two}
\end{subfigure}
\caption{The computation time and total distance results of two methods: DrMaMP and \cite{bertuccelli2009real} with an increased number of tasks. The number of agents is fixed at 3. $N=300$.}
\label{comp:fixed_agent}
\end{figure*}
In Fig. \ref{comp:fixed_agent:time_two} and Fig. \ref{comp:fixed_agent:time_one}, CBBA's computation time increases exponentially, reaching about 0.75 seconds for 60 tasks and 3 agents, whereas the computation time of DrMaMP increases almost linearly. The growth rate of CBBA's computation time in Fig. \ref{comp:fixed_agent:time_two} is much lower than in Fig. \ref{comp:fixed_target:time_two} because there is less negotiation among agents, and thus the auction for each task needs fewer iterations when $n_a$ is smaller.
The growth rate of DrMaMP's computation time in Fig. \ref{comp:fixed_agent:time_one} is greater than in Fig. \ref{comp:fixed_target:time_one} because a linearly increased $n_t$ leads to a quadratically increasing computational burden (see Section \ref{subsec:comput_analysis}). Nevertheless, the magnitude of the computation time remains relatively small because each agent only needs to handle a task subset thanks to Algorithm \ref{alg:algo_task_segment}. The detailed analysis is shown in Section \ref{subsec:comput_analysis}. Fig. \ref{comp:fixed_agent:distance_two} shows that DrMaMP outperforms CBBA regarding optimality. These comparisons show that by using global information in a distributed manner, DrMaMP achieves better performance than a decentralized method and a centralized method.
\subsection{Computational Burden Analysis}\label{subsec:comput_analysis}
DrMaMP approximates the traveling cost from one node to another by the length of the underlying collision-free path. An intuitive way to improve the optimality of CBBA is to utilize the lengths of collision-free paths as bid prices. This subsection analyzes the computational burden of DrMaMP and of this approach.
To find all possible paths, each agent connects to all the tasks and every two tasks connect to each other. Thus the total number of paths $\hat{N}_p$ for CBBA is
\begin{equation} \label{eq:num_path_cbba}
\hat{N}_p = {}_{n_t}P_{2} + n_a \cdot n_t = n_t(n_t+n_a-1),
\end{equation}
where ${}_{n_t}P_{2}=\frac{n_t!}{(n_t-2)!}$ is the number of permutations when selecting two elements from $n_t$ elements in total.
As for DrMaMP, the upper bound $\overline{N}_p$ on the number of tasks assigned to a single agent is $n_t$, i.e.,
\begin{equation}
\overline{N}_p \triangleq \sup \max(|\hat{\mathcal{T}}_1|, \cdots, |\hat{\mathcal{T}}_{n_a}|) = n_t.
\end{equation}
Denote by $\text{ceil}(\cdot): \mathbb{R} \to \mathbb{Z}$ the ceiling function, i.e., $\text{ceil}(x)$ is the least integer greater than or equal to $x$. Since the entire task set is partitioned into $n_a$ subsets and the path-finding for each agent runs in parallel, the lower bound $\underline{N}_p$ is
\begin{equation}
\underline{N}_p \triangleq \inf \max(|\hat{\mathcal{T}}_1|, \cdots, |\hat{\mathcal{T}}_{n_a}|) = n_c = \text{ceil}(\tfrac{n_t}{n_a}).
\end{equation}
Hence, the total number of paths $N_p$ for DrMaMP, summed over all agents, is bounded as
\begin{equation}
n_a \left( n_c + {}_{n_c}P_{2} \right) \leq N_p \leq n_t + {}_{n_t}P_{2} \ \Rightarrow \ \tfrac{n_t^2}{n_a} \lessapprox N_p \leq n_t^2.
\end{equation}
Combining with \eqref{eq:num_path_cbba} yields
\begin{equation}
1 < 1 + \tfrac{n_a-1}{n_t} \leq \tfrac{\hat{N}_p}{N_p} \lessapprox (1+\tfrac{n_a-1}{n_t})n_a.
\end{equation}
Since $\tfrac{n_t^2}{n_a} \lessapprox N_p \leq n_t^2$, when $n_a$ increases linearly while the task-agent ratio $a \triangleq \frac{n_t}{n_a}$ is held constant, the lower bound of $N_p$ increases linearly as $a^2 n_a$. This conclusion is consistent with Fig. \ref{comp:fixed_target:time_one}. Based on the observed comparisons, the actual computational burden of DrMaMP is skewed towards the lower bound. When $n_t$ increases linearly and $n_a$ is fixed, the lower bound of $N_p$ increases quadratically with respect to $n_t$. In addition, the standard deviation of the computation time in Fig. \ref{comp:fixed_agent:time_one} is increasingly larger than in Fig. \ref{comp:fixed_target:time_one}. This happens because the numbers of tasks assigned to each agent, $|\hat{\mathcal{T}}_1|, \cdots, |\hat{\mathcal{T}}_{n_a}|$, tend to be more diverse as $n_t$ increases while $n_a$ stays constant. As for Fig. \ref{comp:fixed_target:time_one}, the task-agent ratio is fixed and thus the deviation remains relatively constant as $n_a$ increases.
As for replacing the bid price with the length of a collision-free path, the extra computational burden of CBBA exceeds the actual burden of DrMaMP. The difference between the two upper bounds is $n_t(n_a-1)$, which increases linearly as $n_t$ or $n_a$ increases. When the task-agent ratio is fixed and $n_a$ increases, the lower bound of $\frac{\hat{N}_p}{N_p}$ stays above 1, while the upper bound of $\frac{\hat{N}_p}{N_p}$ grows linearly with $n_a$. When $n_a \ll n_t$, $ N_p \leq \hat{N}_p \lessapprox n_a \cdot N_p $. In the worst case, DrMaMP's computational burden is only slightly less than CBBA's, but CBBA's burden can be up to $n_a$ times greater than DrMaMP's. Thus, task segmentation and parallelizable mission planning benefit run-time computation, whereas revising the bid prices of CBBA is computationally inefficient and scales poorly.
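The counts above are easy to tabulate. A small Python helper (names illustrative) implements \eqref{eq:num_path_cbba} and the two DrMaMP bounds:
\begin{verbatim}
import math

def path_counts(n_a, n_t):
    # CBBA with collision-aware bids vs. DrMaMP's total path counts
    # (lower bound: evenly split clusters; upper: one cluster gets all).
    n_c = math.ceil(n_t / n_a)
    cbba = n_t * (n_t + n_a - 1)
    drmamp_lo = n_a * (n_c + n_c * (n_c - 1))   # ~ n_t**2 / n_a
    drmamp_hi = n_t + n_t * (n_t - 1)           # = n_t**2
    return cbba, drmamp_lo, drmamp_hi

print(path_counts(20, 60))   # (4740, 180, 3600)
\end{verbatim}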
\subsection{Optimality Gap in Small-size Problems} \label{subsec:optimality_gap}
Section \ref{subsec:optimality_gap} shows the optimality gap between DrMaMP and the global optimum. The global optimum is found by exhaustive search, which is only feasible in small-size problems. Fig. \ref{comp:optimality} shows the optimality gap for two cases: 2 agents with 4 tasks, and 3 agents with 6 tasks. For each case, there are 20 scenarios with different positions of agents, tasks, and obstacles. Exhaustively searching for a global optimum is intractable for larger problems since the MAMP problem is NP-hard.
The total number of candidate solutions for $n_a$ agents and $n_t$ tasks is $\frac{(n_t+2n_a-1)!\,n_t!}{(n_a+n_t)!\,(n_a-1)!}$.
For the case with 3 agents and 6 tasks, there are 39600 possible solutions and it takes about 10 seconds to find a global optimum. For 4 agents and 8 tasks, there are 18345600 solutions and the estimated time to find an optimum is 78 minutes. A small sanity check of these counts is given below.
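These counts can be reproduced with a short Python check of the closed-form formula above:
\begin{verbatim}
from math import factorial as f

def n_solutions(n_a, n_t):
    # total number of candidate MAMP solutions for exhaustive search
    return f(n_t + 2*n_a - 1) * f(n_t) // (f(n_a + n_t) * f(n_a - 1))

print(n_solutions(3, 6), n_solutions(4, 8))   # 39600 18345600
\end{verbatim}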
\begin{figure}[h]
\centering
\includegraphics[width=0.30\textwidth]{with_optimal_performance_50.png}
\caption{Optimality gap in small-size problems}
\label{comp:optimality}
\end{figure}
In the case of 2 agents, DrMaMP's cost is on average 4.3\% greater than the optimum, while CBBA's cost is on average 36.3\% greater. In the case of 3 agents, DrMaMP's cost is on average 8.3\% greater than the optimum, whereas CBBA's cost is 43.1\% greater. DrMaMP's optimality gap increases with the problem size since the task segmentation algorithm cannot explore all possible distributions of the numbers of tasks assigned to each agent. Nevertheless, the algorithm makes the MAMP problem tractable and solves it at run-time.
\section{Conclusion}\label{sec:conclusion}
The collision-aware MAMP problem is NP-hard but requires real-time computation in many applications. This paper presents DrMaMP, a distributed real-time algorithm. DrMaMP partitions the entire task set into several subsets so that each agent can determine its task allocation order and collision-free path in parallel. This process reduces the dimension of the original problem and hence enables DrMaMP to run in real-time with good scalability. The above results show that by using global information in a distributed manner, DrMaMP achieves better performance in both computation and optimality.
\bibliographystyle{ieeetr}
|
{
"arxiv_id": "2302.14310",
"language": "en",
"timestamp": "2023-03-01T02:09:14",
"url": "https://arxiv.org/abs/2302.14310",
"yymm": "2302"
} | \section{Introduction}
This paper presents the recent development of the package ``FeynGrav'' \cite{Latosh:2022ydd}. The package provides a tool to operate with Feynman rules for perturbative quantum gravity within FeynCalc \cite{Mertig:1990an,Shtabovenko:2016sxi,Shtabovenko:2020gxv}. In \cite{Latosh:2022ydd} the author proposed a novel analytic approach to the derivation of Feynman rules, which provides a way to construct the Feynman rules for a wide class of gravity models. It was applied to models without supersymmetry or non-minimal coupling to gravity. Interaction rules for massless matter of spin $0$, $1/2$, and $1$ were derived, and their implementation within FeynGrav was discussed.
In this paper, we present a further development of the analytic approach proposed earlier and its implementation within FeynGrav. Firstly, we consider matter of spin $0$, $1/2$, and $1$ with non-vanishing masses and minimal coupling to gravity. We pay particular attention to the case of a massless vector field and revisit the issue of gauge fixing. We demonstrate that the corresponding Faddeev-Popov ghosts interact with gravitational degrees of freedom. In addition, we derive the interaction rules for the scalar field potential.
Secondly, we consider the gravitational coupling to $SU(N)$ Yang-Mills model. We derive the corresponding Feynman rules and show that, similarly to the case of a single massless vector field, the Faddeev-Popov ghosts interact with the gravitational degrees of freedom. This generalization allows for the calculation of scattering amplitudes in gravity coupled to gauge fields and opens new perspectives for phenomenological investigations.
Finally, we revisit the gauge fixing procedure for gravity and introduce more general gauge fixing conditions. The corresponding gauge fixing parameter is made explicit in all calculations. In full analogy with the previous cases, the corresponding Faddeev-Popov ghosts interact with the gravitational degrees of freedom.
All models discussed in this paper are implemented within the new version of FeynGrav. Its usage is illustrated in a few physically relevant examples.
It shall be noted that there are different approaches to the derivation of the Feynman rules for gravity. For instance, in the classical papers \cite{DeWitt:1967yk,DeWitt:1967ub,DeWitt:1967uc} interaction rules for three- and four-graviton vertices were derived directly from the Hilbert action. A similar approach based on the perturbative expansion of the Hilbert action was developed in \cite{Prinz:2020nru}. The widely-known package xAct \cite{xActBundle,Portugal:1998qi,MARTINGARCIA2008597,Brizuela:2008ra,Pitrou:2013hga,Nutma:2013zea} also provides a tool to operate with perturbative expansions within gravity models, but its applicability is mostly limited to the classical domain. We discussed opportunities to interface it with FeynGrav in the previous paper \cite{Latosh:2022ydd}. Lastly, another package providing a tool to operate with Feynman rules for gravity-matter coupling was recently created \cite{SevillanoMunoz:2022tfb}. A more detailed discussion of computer algebra applications in gravity research lies beyond the scope of this paper and can be found in the following reviews \cite{MacCallum:2018csx,HEPSoftwareFoundation:2020daq}.
In this paper, we present a comprehensive study of Feynman rules for perturbative quantum gravity, covering matter of spin $0$, $1/2$, and $1$, the $SU(N)$ Yang-Mills model, and the gauge fixing procedure. In Section \ref{section_review}, we provide an overview of our approach to deriving these Feynman rules, including the notations used throughout the paper. The Feynman rules for matter fields are derived and presented. In Section \ref{section_SUNYM}, we extend our analysis to the $SU(N)$ Yang-Mills model coupled to gravity. We revisit the gauge fixing procedure for gravity in Section \ref{section_gravity_ghosts} and discuss the interaction of Faddeev-Popov ghosts with gravitational degrees of freedom. In Section \ref{section_FG2}, we introduce the new version of FeynGrav, which implements all the models studied in this paper, and we illustrate its usage through a few physically relevant examples. Finally, we conclude with a discussion of the prospects and further development of FeynGrav in Section \ref{section_conclusions}.
\section{Perturbative Quantum Gravity}\label{section_review}
Perturbative quantum gravity associates gravitational phenomena with small metric perturbations propagating about the flat background. In this case, the complete spacetime metric $g_{\mu\nu}$ is given by the following finite expansion:
\begin{align}\label{the_perturbative_expansion}
g_{\mu\nu} = \eta_{\mu\nu} + \kappa \, h_{\mu\nu} .
\end{align}
Here $\eta_{\mu\nu}$ is the flat metric, $h_{\mu\nu}$ are small metric perturbations with the canonical mass dimension, and $\kappa$ is the gravitational coupling related to Newton's constant $G_\text{N}$:
\begin{align}\label{the_gravitational_coupling_definition}
\kappa^2 \overset{\text{def}}{=} 32\,\pi\, G_\text{N} .
\end{align}
Although \eqref{the_perturbative_expansion} is a finite expression, it spawns infinite perturbative expansions for the inverse metric
\begin{align}
g^{\mu\nu} = \eta^{\mu\nu} - \kappa\, h^{\mu\nu} + \kappa^2\, h^{\mu\sigma} h_\sigma{}^\nu + \korder{3};
\end{align}
for the volume factor
\begin{align}
\sqrt{-g} = 1 + \cfrac{\kappa}{2} \, h -\cfrac{\kappa^2}{4} \left(h_{\mu\nu}^2 - \cfrac12\, h^2\right) + \korder{3};
\end{align}
for the Christoffel symbols
\begin{align}
\Gamma^\alpha_{\mu\nu} = g^{\alpha\beta} \,\Gamma_{\beta\mu\nu} =\Big( \eta^{\alpha\beta} - \kappa\,h^{\alpha\beta}+\kappa^2\, h^{\alpha\sigma} h_\sigma{}^\beta +\korder{3}\Big)\,\cfrac{\kappa}{2}\,\left[ \partial_\mu h_{\nu\beta} + \partial_\nu h_{\mu\beta} - \partial_\beta h_{\mu\nu} \right];
\end{align}
and, ultimately, for the Hilbert action
\begin{align}
\begin{split}
\mathcal{A}_\text{H}[g_{\mu\nu}] \overset{\text{def}}{=}&\int d^4 x \sqrt{-g} \left[-\cfrac{2}{\kappa^2} \, R\right] =\mathcal{A}_\text{H}[\eta] + \cfrac{\delta\mathcal{A}_H}{\delta g_{\mu\nu}}[\eta]\, \kappa h_{\mu\nu} + \cfrac{\delta^2 \mathcal{A}_H}{\delta g_{\mu\nu} \,\delta g_{\alpha\beta}} [\eta] ~\kappa^2 \, h_{\mu\nu} \,h_{\alpha\beta} + \korder{3} \\
=& h_{\mu\nu} \,\mathcal{D}^{\mu\nu\alpha\beta} \square \,h_{\alpha\beta} + \kappa\, \left(\mathfrak{V}^{(3)}\right)^{\mu_1\nu_1\mu_2\nu_2\mu_3\nu_3} \,h_{\mu_1\nu_1} h_{\mu_2\nu_2} h_{\mu_3\nu_3} + \korder{2} .
\end{split}
\end{align}
In this formula, the Hilbert action evaluated at the flat metric vanishes. The term linear in perturbations also vanishes because the flat background is a stationary point of the Hilbert action. The term quadratic in perturbations describes the propagation of such perturbations. All terms of higher orders in perturbations describe their interactions.
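These expansions are straightforward to check numerically. The following sketch (Python with \texttt{numpy}; an illustration only, not part of FeynGrav, which is a FeynCalc package) compares $\sqrt{-g}$ with its second-order expansion for a random symmetric perturbation and a small coupling:
\begin{verbatim}
import numpy as np

# Numerical sanity check of the sqrt(-g) expansion to second order.
eta = np.diag([1.0, -1.0, -1.0, -1.0])           # flat metric
rng = np.random.default_rng(0)
h = rng.normal(size=(4, 4)); h = (h + h.T) / 2   # symmetric h_{mu nu}
kappa = 1e-3

exact = np.sqrt(-np.linalg.det(eta + kappa * h))

X = np.linalg.inv(eta) @ h        # h^mu_nu, indices raised with eta
tr_h = np.trace(X)                # h = eta^{mu nu} h_{mu nu}
h_sq = np.trace(X @ X)            # h_{mu nu} h^{mu nu}
series = 1 + kappa/2*tr_h - kappa**2/4*(h_sq - tr_h**2/2)

print(exact - series)             # residual is O(kappa^3)
\end{verbatim}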
Perturbative quantum gravity is a quantum theory of small metric perturbations $h_{\mu\nu}$ constructed with the functional integral technique. For brevity, we call the quanta of the field $h_{\mu\nu}$ gravitons. Their quantum behavior is described by the following generating functional:
\begin{align}
\begin{split}
\mathcal{Z} \overset{\text{def}}{=}& \int\mathcal{D} [g] \, \exp\Big[i\, \mathcal{A}_\text{H}[g]\Big] \\
=& \int \mathcal{D}[h]\, \exp\Bigg[ i\, h_{\mu\nu} \,\mathcal{D} ^{\mu\nu\alpha\beta} \square \,h_{\alpha\beta} +i\, \kappa\, \left(\mathfrak{V}^{(3)}\right)^{\mu_1\nu_1\mu_2\nu_2\mu_3\nu_3} \,h_{\mu_1\nu_1} h_{\mu_2\nu_2} h_{\mu_3\nu_3} + \korder{2} \Bigg] .
\end{split}
\end{align}
Note that this expression shall not be used directly before the gauge fixing procedure is performed; we discuss this in detail in Section \ref{section_gravity_ghosts}.
The perturbative structures of the inverse metric $g^{\mu\nu}$, the volume factor $\sqrt{-g}$, and the vierbein $\mathfrak{e}_m{}^\mu$ are described by families of $\mathcal{I}$ and $\mathcal{C}$ tensors defined in the original paper \cite{Latosh:2022ydd}. These tensors can be generated within a computer algebra system and offer a straightforward way to handle the corresponding perturbative expansions. While their discussion is beyond the scope of this paper, they are covered in great detail in \cite{Latosh:2022ydd}.
We introduce the following notation for perturbative expansions. If a quantity $X$ is expanded in a perturbative series with respect to $\kappa\, h_{\mu\nu}$, we write the corresponding series as follows:
\begin{align}
X = \sum\limits_{n=0}^\infty \,\kappa^n\,(X)^{\rho_1\sigma_1\cdots\rho_n\sigma_n} \, h_{\rho_1\sigma_1}\cdots h_{\rho_n\sigma_n} .
\end{align}
Here $(X)^{\rho_1\sigma_1\cdots\rho_n\sigma_n}$ denotes an expression that specifies the tensor structure of a given term; in other words, it shows how the indices of the metric perturbations shall be contracted. In this notation, the perturbative expansions for $g^{\mu\nu}$ and $\sqrt{-g}$ are written as follows:
\begin{align}
\begin{split}
g^{\mu\nu} =& \sum\limits_{n=0}^\infty \, \kappa^n \,\left(g^{\mu\nu}\right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n} \, h_{\rho_1\sigma_1}\cdots h_{\rho_n\sigma_n} ,\\
\sqrt{-g} =& \sum\limits_{n=0}^\infty \, \kappa^n \, \left(\sqrt{-g} \right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n} \, h_{\rho_1\sigma_1}\cdots h_{\rho_n\sigma_n} .
\end{split}
\end{align}
For the sake of illustration, we present a few terms from these expressions:
\begin{align}
\left( g^{\mu\nu} \right) &= \eta^{\mu\nu} ,& \left(g^{\mu\nu}\right)^{\alpha\beta} &= \cfrac12\,\left(\eta^{\mu\alpha}\eta^{\nu\beta} + \eta^{\mu\beta}\eta^{\nu\alpha}\right), & \\
\left(\sqrt{-g} \right) &= 1 ,& \left(\sqrt{-g}\right)^{\mu\nu} &= \cfrac12\,\eta^{\mu\nu}, \hspace{20pt} \left(\sqrt{-g}\right)^{\mu\nu\alpha\beta} =\cfrac{1}{8} \left(-\eta^{\alpha\nu} \eta^{\beta\mu}-\eta^{\alpha\mu} \eta^{\beta\nu}+\eta^{\alpha\beta} \eta^{\mu\nu}\right).\nonumber
\end{align}
All of the interaction rules presented in this paper have been derived using perturbative techniques as described above. It is worth noting that this approach can be extended to supersymmetric models and models with non-minimal gravitational coupling. These cases will be discussed in future works.
Let us briefly review the construction of the Feynman rules for a single scalar field, a single Dirac fermion, and a single vector field. The scalar and Dirac field cases were covered in detail in the original paper, so we only touch upon them briefly. The construction of Feynman rules for a vector field is more intricate due to the gauge fixing and will be discussed in more depth.
\subsection{Single scalar field}
A single free scalar field minimally coupled to gravity is described by the following action:
\begin{align}
\begin{split}
\mathcal{A}_{s=0} =& \int d^4 x \sqrt{-g} \left[ \cfrac12\,\left(\nabla\phi\right)^2 - \cfrac{m_\text{s}^2}{2} \, \phi^2 \right] = \int d^4 x \left[ \cfrac12\, \sqrt{-g}\, g^{\mu\nu} \, \partial_\mu\phi \, \partial_\nu\phi - \cfrac{m^2_\text{s}}{2}\,\sqrt{-g} \, \phi^2 \right].
\end{split}
\end{align}
Here $m_\text{s}$ is the scalar field mass. Its perturbative expansion in the momentum representation reads:
\begin{align}
\begin{split}
\mathcal{A}_{s=0} =& \sum\limits_{n=0}^\infty \int \cfrac{d^4 p_1}{(2\pi)^4} \cfrac{d^4 p_2}{(2\pi)^4} \prod\limits_{i=1}^n \cfrac{d^4 k_i}{(2\pi)^4}\,(2\pi)^4 \delta\left(p_1+p_2+\sum k_i\right) h_{\rho_1\sigma_1}(k_1) \cdots h_{\rho_n\sigma_n}(k_n)\\
&\times \kappa^n \left[ -\cfrac12\, \left(\sqrt{-g}\, g^{\mu\nu}\right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n} I_{\mu\nu\alpha\beta} (p_1)^\alpha (p_2)^\beta - \cfrac{m^2_\text{s}}{2} \left(\sqrt{-g}\right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n} \right]\,\phi(p_1) \phi(p_2)\,.
\end{split}
\end{align}
Here $k_i$ are momenta of gravitons, $p_1$ and $p_2$ are momenta of scalars, and $I$ tensor contracts indices of the metric and momenta in a symmetric way:
\begin{align}
I^{\mu\nu\alpha\beta} = \cfrac12\, \left( \eta^{\mu\alpha}\eta^{\nu\beta} + \eta^{\mu\beta}\eta^{\nu\alpha}\right) .
\end{align}
The background contribution of this expression describes the scalar field propagator:
\begin{align}\label{scalar_propagator}
\begin{gathered}
\begin{fmffile}{Diag01}
\begin{fmfgraph}(30,30)
\fmfleft{L}
\fmfright{R}
\fmf{dashes}{L,R}
\end{fmfgraph}
\end{fmffile}
\end{gathered}
= i \, \cfrac{1}{p^2 - m_\text{s}^2} ~.
\end{align}
The other parts of this expression define the interaction rules for gravitons coupling to the scalar field:
\begin{align}
\nonumber \\
\begin{gathered}
\begin{fmffile}{FR_S_1}
\begin{fmfgraph*}(40,40)
\fmfleft{L1,L2}
\fmfright{R1,R2}
\fmf{dbl_wiggly}{L1,V}
\fmf{dbl_wiggly}{L2,V}
\fmfdot{V}
\fmf{dashes}{V,R1}
\fmf{dashes}{V,R2}
\fmffreeze
\fmf{dots}{L1,L2}
\fmflabel{$p_1$}{R1}
\fmflabel{$p_2$}{R2}
\fmflabel{$\rho_1\sigma_1$}{L1}
\fmflabel{$\rho_n\sigma_n$}{L2}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
= -i\, \kappa^n \,\left[ \left(\sqrt{-g}\, g^{\mu\nu} \right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n} \, I_{\mu\nu\alpha\beta} (p_1)^\alpha (p_2)^\beta + m_\text{s}^2\, \left( \sqrt{-g}\right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n} \right]. \\ \nonumber
\end{align}
Here and further, all momenta are directed inwards and related by momentum conservation. The dotted line on the left part of the diagram denotes the presence of $n\geq 1$ graviton lines. This expression is symmetric with respect to the scalar field momenta, and in the rest of this paper we present expressions that are likewise symmetric with respect to momenta.
The gravitational coupling of the scalar field potential energy is derived similarly. The scalar field potential $V(\phi)$ shall be expanded in a power series with respect to the scalar field $\phi$. Each term of this expansion corresponds to a separate scalar field self-interaction coupled to gravity. Therefore, it is sufficient to derive the interaction rule for a single power-law potential. Let us consider the following power-law potential, with $p \geq 3$ an integer and $\lambda_p$ a coupling of mass dimension $4-p$:
\begin{align}
\mathcal{A}_{s=0,\text{potential}} = \int d^4 x \sqrt{-g} \left[ \cfrac{\lambda_p}{p!} ~ \phi^p \right] = \int d^4 x \left[ \sqrt{-g} ~ \cfrac{\lambda_p}{p!} ~ \phi^p \right].
\end{align}
In the momentum representation, this action becomes:
\begin{align}
\begin{split}
\mathcal{A}_{s=0,\text{potential}} =& \sum\limits_{n=0}^\infty\int \prod\limits_{j=1}^p \cfrac{d^4 q_j}{(2\pi)^4} \prod\limits_{i=1}^n \cfrac{d^4 k_i}{(2\pi)^4} \, (2\pi)^4 \,\delta\Big( \sum q_j + \sum k_i \Big)\,h_{\rho_1\sigma_1}(k_1)\cdots h_{\rho_n\sigma_n}(k_n) \\
&\times \kappa^n \,\cfrac{\lambda_p}{p!} \, \left(\sqrt{-g}\right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n} \phi(q_1) \cdots \phi(q_p)\,.
\end{split}
\end{align}
The corresponding interaction rule reads:
\begin{align}
\nonumber \\
\begin{gathered}
\begin{fmffile}{FR_S_Power}
\begin{fmfgraph*}(40,40)
\fmfleft{L1,L2}
\fmfright{R1,R2}
\fmf{dbl_wiggly}{L1,V}
\fmf{dbl_wiggly}{L2,V}
\fmf{dashes}{R1,V}
\fmf{dashes}{R2,V}
\fmfdot{V}
\fmffreeze
\fmf{dots}{L1,L2}
\fmf{dots}{R1,R2}
\fmflabel{$\rho_1\sigma_1$}{L1}
\fmflabel{$\rho_n\sigma_n$}{L2}
\fmflabel{$q_1$}{R1}
\fmflabel{$q_p$}{R2}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
= i\,\kappa^n \, \lambda_p \, \left(\sqrt{-g}\right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n} . \\ \nonumber
\end{align}
This expression holds for any integer $p \geq 3$, so it completely describes the gravitational coupling of scalar field potentials.
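As a quick cross-check, specializing this rule to a cubic potential ($p=3$) and a single graviton ($n=1$), and using the coefficient $\left(\sqrt{-g}\right)^{\mu\nu} = \frac12\,\eta^{\mu\nu}$ given above, yields the vertex
\begin{align}
i\,\kappa\,\lambda_3 \left(\sqrt{-g}\right)^{\rho_1\sigma_1} = \cfrac{i}{2}\,\kappa\,\lambda_3\,\eta^{\rho_1\sigma_1} .
\end{align}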
The healthy scalar field interactions described in this paper are just a small subset of the broader range of interactions described by the Horndeski and Beyond Horndeski models \cite{Horndeski:1974wa,Horndeski:1976gi,Kobayashi:2011nu,Zumalacarregui:2013pma,Gleyzes:2014dya,BenAchour:2016cay,Kobayashi:2019hrl}. Feynman rules for these models are beyond the scope of this paper and will be discussed in future publications.
\subsection{Single Dirac field}
A single Dirac field minimally coupled to gravity is described by the following action:
\begin{align}
\begin{split}
\mathcal{A}_{s=1/2} =& \int d^4 x \sqrt{-g} \left[ \overline{\psi} \left( i\, \widehat{\nabla}\right) \psi - m_\text{f}\, \overline{\psi}\,\psi \right] \\
=& \int d^4 x \left[ \sqrt{-g} \,\mathfrak{e}_m{}^\mu \, \frac12\, \left( i\,\overline{\psi}\, \gamma^m \,\nabla_\mu \psi - i\, \nabla_\mu\overline{\psi} \,\gamma^m \,\psi\right) - m_\text{f}\, \sqrt{-g}\, \overline{\psi} \,\psi \right] .
\end{split}
\end{align}
Here $m_\text{f}$ is the fermion mass, $\mathfrak{e}_m{}^\mu$ is the vierbein, and $\nabla$ is the fermionic covariant derivative. We discussed the construction of spinors in a curved spacetime, alongside their perturbative treatment, in the previous paper \cite{Latosh:2022ydd} (see also \cite{Shapiro:2016pfm,stepanyants2009}). The following theorem specifies the perturbative structure of this action \cite{Latosh:2022ydd}:
\begin{align}
\begin{split}
\mathcal{A}_{s=1/2} =& \int d^4 x \left[ \sqrt{-g} \, \mathfrak{e}_m{}^\mu \, \frac12\, \left( i\,\overline{\psi}\, \gamma^m \,\partial_\mu \psi - i\, \partial_\mu\overline{\psi} \,\gamma^m \,\psi\right) - m_\text{f}\, \sqrt{-g}\, \overline{\psi} \,\psi \right]\\
=& \sum\limits_{n=0}^\infty \int \cfrac{d^4 p_1}{(2\pi)^4} \cfrac{d^4 p_2}{(2\pi)^4} \prod\limits_{i=1}^n \cfrac{d^4 k_i}{(2\pi)^4} \,(2\pi)^4 \delta\left(p_1+p_2+\sum k_i\right) h_{\rho_1\sigma_1}(k_1) \cdots h_{\rho_n\sigma_n} (k_n)\\
&\times \kappa^n~ \overline{\psi}(p_2) \left[ \left(\sqrt{-g}\, \mathfrak{e}_m{}^\mu\right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n}\,\frac12 \, (p_1-p_2)_\mu \gamma^m - \left(\sqrt{-g}\right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n} m_\text{f} \right] \psi(p_1).
\end{split}
\end{align}
The background part of this expansion corresponds to the fermion propagator:
\begin{align}
\begin{gathered}
\begin{fmffile}{Diag02}
\begin{fmfgraph}(35,35)
\fmfleft{L}
\fmfright{R}
\fmf{fermion}{L,R}
\end{fmfgraph}
\end{fmffile}
\end{gathered}
\hspace{10pt}= i ~ \cfrac{p_m \,\gamma^m +m_\text{f}}{p^2 - m_\text{f}^2} \,.
\end{align}
The other terms describe the following interaction rules:
\begin{align}\label{rule_F_1}
\nonumber \\
\begin{gathered}
\begin{fmffile}{FR_F_1}
\begin{fmfgraph*}(40,40)
\fmfleft{L1,L2}
\fmfright{R1,R2}
\fmf{dbl_wiggly}{L1,V}
\fmf{dbl_wiggly}{L2,V}
\fmfdot{V}
\fmf{fermion}{R1,V}
\fmf{fermion}{V,R2}
\fmffreeze
\fmf{dots}{L1,L2}
\fmflabel{$p_1$}{R1}
\fmflabel{$p_2$}{R2}
\fmflabel{$\rho_1\sigma_1$}{L1}
\fmflabel{$\rho_n\sigma_n$}{L2}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
= i\, \kappa^n \,\left[ \cfrac12\, \left(\sqrt{-g}\, \mathfrak{e}_m{}^\mu\right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n} \, (p_1-p_2)_\mu \gamma^m - \left(\sqrt{-g}\right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n} m_\text{f} \right]. \\ \nonumber
\end{align}
As noted above, all momenta on this diagram are directed inwards, so $p_1$ denotes the ingoing momentum of a fermion and $p_2$ the ingoing momentum of an anti-fermion. Moreover, this expression is also applicable to the $SU(N)$ Yang-Mills model considered below.
\subsection{Single vector field}
The treatment of a vector field within the quantum field theory (and perturbative quantum gravity) is sensitive to the vector field mass. A massless vector field admits the gauge symmetry, so the gauge fixing shall be performed. If a vector field has a non-vanishing mass, then the gauge symmetry is not present and gauge fixing is not required.
We start with the case of a vector field with a non-vanishing mass, also known as the Proca field. Such a field coupled with gravity is described by the following action:
\begin{align}
\begin{split}
\mathcal{A}_{s=1,m_\text{v}} =& \int d^4 x \sqrt{-g} \left[ -\cfrac14\, F_{\mu\nu}F^{\mu\nu} + \cfrac{m_\text{v}^2}{2} \, A_\mu \,A^\mu \right].
\end{split}
\end{align}
Here $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$ is the field strength tensor and $m_\text{v}$ is the vector field mass. The perturbative expansion of this action in the momentum representation reads:
\begin{align}
\begin{split}
\mathcal{A}_{s=1,m_\text{v}}=& \sum\limits_{n=0}^\infty \int \cfrac{d^4 p_1}{(2\pi)^4} \cfrac{d^4 p_2}{(2\pi)^4} \prod\limits_{i=1}^n \cfrac{d^4 k_i}{(2\pi)^4} \,(2\pi)^4 \delta\left(p_1+p_2+\sum k_i\right) h_{\rho_1\sigma_1}(k_1) \cdots h_{\rho_n\sigma_n} (k_n) \\
&\times \kappa^n \,\Bigg[\cfrac14\, \left(\sqrt{-g}\, g^{\mu\alpha}g^{\nu\beta}\right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n}~(p_1)_{\mu_1} (p_2)_{\mu_2} ~\big(F_{\mu\nu}\big)^{\mu_1\lambda_1}\big(F_{\alpha\beta}\big)^{\mu_2\lambda_2} \\
& \hspace{30pt}+ \cfrac{m_\text{v}^2}{2} \left( \sqrt{-g}\, g^{\lambda_1\lambda_2} \right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n} \Bigg] \,A_{\lambda_1}(p_1) \, A_{\lambda_2}(p_2) .
\end{split}
\end{align}
Here we introduced the following notations:
\begin{align}
F_{\mu\nu} &= -i\,p_\sigma \, \big(F_{\mu\nu}\big)^{\sigma\lambda} \,A_{\lambda}(p) , & \big(F_{\mu\nu}\big)^{\sigma\lambda} &\overset{\text{def}}{=} \delta^\sigma_\mu \, \delta^\lambda_\nu - \delta^\sigma_\nu \, \delta^\lambda_\mu .
\end{align}
This expression spawns the standard Proca propagator:
\begin{align}\label{Proca_propagator}
\begin{gathered}
\begin{fmffile}{Diag03}
\begin{fmfgraph*}(30,30)
\fmfleft{L}
\fmfright{R}
\fmf{photon}{L,R}
\fmflabel{$\mu$}{L}
\fmflabel{$\nu$}{R}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
\hspace{13pt}=(-i)\,\cfrac{ ~ \eta_{\mu\nu} - \cfrac{p_\mu\,p_\nu}{m_\text{v}^2} ~ }{p^2 - m_\text{v}^2}\,.
\end{align}
The interaction rule describing gravitons coupling to the Proca field is given by the following expression:
\begin{align}\label{rule_V_1}
\nonumber \\
\begin{gathered}
\begin{fmffile}{FR_V_1}
\begin{fmfgraph*}(40,40)
\fmfleft{L1,L2}
\fmfright{R1,R2}
\fmf{dbl_wiggly}{L1,V}
\fmf{dbl_wiggly}{L2,V}
\fmfdot{V}
\fmf{photon}{R1,V}
\fmf{photon}{V,R2}
\fmffreeze
\fmf{dots}{L1,L2}
\fmflabel{$p_1,\lambda_1,m_\text{v}$}{R1}
\fmflabel{$p_2,\lambda_2,m_\text{v}$}{R2}
\fmflabel{$\rho_1\sigma_1$}{L1}
\fmflabel{$\rho_n\sigma_n$}{L2}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
\begin{split}
\hspace{50pt}= i\, \kappa^n \Bigg[ & \cfrac12\, \left(\sqrt{-g}\, g^{\mu\alpha}g^{\nu\beta}\right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n} (p_1)_{\mu_1} (p_2)_{\mu_2} \big(F_{\mu\nu}\big)^{\mu_1\lambda_1} \big(F_{\alpha\beta}\big)^{\mu_2\lambda_2} \\
& + m_\text{v}^2 \left( \sqrt{-g}\, g_{\lambda_1\lambda_2} \right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n} \Bigg].
\end{split}\\ \nonumber
\end{align}
To proceed with the massless case we shall briefly recall the Faddeev-Popov prescription for gauge theories \cite{Faddeev:1967fc,Peskin:1995ev,Weinberg:1995mt,Weinberg:1996kr,Weinberg:2000cr}. A quantum vector field is described by the following generating functional:
\begin{align}
\mathcal{Z} = \int \mathcal{D}[A] \exp\Big[i\,\mathcal{A}[A] \Big].
\end{align}
Here the integration is performed over all conceivable fields. The normalization factor is omitted for the sake of simplicity. Firstly, one adds a new term to the microscopic action:
\begin{align}
\mathcal{Z} = \int \mathcal{D}[A] \exp\Big[i\,\mathcal{A}[A] \Big] \int\mathcal{D}[\omega] \exp\left[\cfrac{i}{2}\,\epsilon\,\omega^2 \right] = \int\mathcal{D}[A]\mathcal{D}[\omega] \exp\left[ i\, \mathcal{A} +\cfrac{i}{2} \, \epsilon\,\omega^2\right] .
\end{align}
Here $\omega$ is an arbitrary scalar and $\epsilon$ is a free gauge fixing parameter. The new contribution is a Gaussian integral, so its introduction merely changes the (omitted) normalization factor.
Secondly, one splits the integration volume:
\begin{align}
\int \mathcal{D}[A] =\int \mathcal{D}[\zeta] \int \mathcal{D}[\mathbb{A}] \delta\left( \mathcal{G} - \omega \right) \det\Delta\,.
\end{align}
Here $\mathcal{G}$ is the gauge fixing condition; the new field variable $\mathbb{A}$, the gauge transformation parameter $\zeta$, and the field variable $A$ are related as follows:
\begin{align}
A_\mu = \mathbb{A}_\mu + \partial_\mu \zeta .
\end{align}
The integration over $\mathbb{A}$ is performed over all conceivable fields, but because of the $\delta$ function, only a single representative from each class of physically equivalent potentials contributes to the integral. Therefore, the integration over $\mathbb{A}$ accounts not for all conceivable potentials, but for all conceivable configurations of physical fields. The last factor $\det\Delta$ is the Faddeev-Popov determinant, which preserves the invariance of the integration measure. The corresponding differential operator $\Delta$ is defined as follows:
\begin{align}
\Delta \overset{\text{def}}{=} \cfrac{\delta\mathcal{G}}{\delta \zeta}\,.
\end{align}
Finally, one performs integrations and obtains the following expression for the generating functional:
\begin{align}
\begin{split}
\mathcal{Z} &= \int \mathcal{D}[\mathbb{A}]\mathcal{D}[\omega]\mathcal{D}[\zeta] \left(\det\Delta\right) ~ \delta\left(\mathcal{G} - \omega \right) \exp\left[ i\, \mathcal{A} +\cfrac{i}{2} \, \epsilon\,\omega^2\right] \\
&= \int \mathcal{D}[\mathbb{A}] \left(\det\Delta\right) \exp\left[ i\, \mathcal{A} +\cfrac{i}{2} \, \epsilon\,\mathcal{G}^2 \right] \\
&=\int\mathcal{D}[c]\mathcal{D}[\overline{c}]\mathcal{D}[\mathbb{A}] \exp\left[ i \,\overline{c} \, \Delta \, c + i \, \mathcal{A} + \cfrac{i}{2}\,\epsilon \, \mathcal{G}^2 \right] .
\end{split}
\end{align}
Here $\overline{c}$, $c$ are scalar anticommuting Faddeev-Popov ghosts introduced to account for the Faddeev-Popov determinant. The integration over the gauge parameter $\zeta$ is absorbed into the normalization factor and omitted. This prescription produces a generating functional suitable for a consistent treatment of gauge models.
We use the standard Lorentz gauge fixing condition for simplicity. In a curved spacetime, it becomes:
\begin{align}
g^{\mu\nu}\,\nabla_\mu A_\nu =0 \leftrightarrow g^{\mu\nu}\,\partial_\mu A_\nu - g^{\mu\nu}\, \Gamma^\sigma_{\mu\nu} A_\sigma = 0 .
\end{align}
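For later use, note how this condition responds to a gauge transformation $A_\mu \to A_\mu + \partial_\mu \zeta$. Since $\zeta$ is a scalar, $\partial_\mu \zeta = \nabla_\mu \zeta$, and the condition shifts by
\begin{align}
\delta\mathcal{G} = g^{\mu\nu}\,\nabla_\mu \partial_\nu \zeta = g^{\mu\nu} \left( \partial_\mu \partial_\nu \zeta - \Gamma^\sigma_{\mu\nu}\,\partial_\sigma \zeta \right) = \square\,\zeta ,
\end{align}
so the Faddeev-Popov operator for this gauge is $\Delta = \square$, the covariant d'Alembert operator.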
The gauge invariant part of the action admits the following perturbative expansion in the momentum representation:
\begin{align}
\begin{split}
\mathcal{A}_{s=1,m_\text{v}=0} =& \int d^4 x \sqrt{-g}\left[ -\cfrac14\,g^{\mu\alpha} g^{\nu\beta}\,F_{\mu\nu}F_{\alpha\beta}\right] \\
=& \sum\limits_{n=0}^\infty\int\cfrac{d^4p_1}{(2\pi)^4}\cfrac{d^4p_2}{(2\pi)^4}\prod\limits_{i=1}^n \cfrac{d^4k_i}{(2\pi)^4} \,(2\pi)^4 \,\delta\big(p_1+p_2+\sum k_i \big) h_{\rho_1\sigma_1}(k_1)\cdots h_{\rho_n\sigma_n}(k_n)\\
& \times \,\kappa^n \, \left[ \cfrac14\,\left(\sqrt{-g} \,g^{\mu\alpha}g^{\nu\beta}\right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n}\,(p_1)_{\mu_1}(p_2)_{\mu_2}\left(F_{\mu\nu}\right)^{\mu_1\lambda_1} \left(F_{\alpha\beta}\right)^{\mu_2\lambda_2}\right] A_{\lambda_1}(p_1) A_{\lambda_2}(p_2).
\end{split}
\end{align}
This expression matches the expression for the Proca field with $m_\text{v}=0$. The gauge fixing term naturally splits into three terms:
\begin{align}\label{gauge_fixing_vector}
\begin{split}
\mathcal{A}_\text{gf} =& \int d^4 x \sqrt{-g}\left[ \cfrac{\epsilon}{2}\,\nabla_{\lambda_1} A^{\lambda_1} \, \nabla_{\lambda_2} A^{\lambda_2} \right]\\
=& \cfrac{\epsilon}{2} \int d^4 x \left(\sqrt{-g}\,g^{\sigma_1\lambda_1}g^{\sigma_2\lambda_2}\right)\,\partial_{\sigma_1} A_{\lambda_1} \, \partial_{\sigma_2} A_{\lambda_2} -\epsilon \int d^4 x \left(\sqrt{-g}\,g^{\mu\nu}g^{\sigma_1\lambda_1}g^{\sigma_2\lambda_2}\right) \,\Gamma_{\sigma_1\mu\nu} \, A_{\lambda_1} \partial_{\sigma_2} A_{\lambda_2}\\
&+\cfrac{\epsilon}{2}\int d^4 x \left( \sqrt{-g}\, g^{\mu\nu} g^{\alpha\beta} g^{\sigma_1\lambda_1} g^{\sigma_2\lambda_2} \right)\,\Gamma_{\sigma_1\mu\nu} \Gamma_{\sigma_2\alpha\beta} \,A_{\lambda_1} A_{\lambda_2}.
\end{split}
\end{align}
Here we use the standard definition of the Christoffel symbols with only lower indices
\begin{align}
\Gamma_{\mu\alpha\beta} \overset{\text{def}}{=} g_{\mu\nu} \,\Gamma^\nu_{\alpha\beta} = \cfrac12\,\left(\partial_\alpha g_{\beta\mu} + \partial_\beta g_{\alpha\mu} - \partial_\mu g_{\alpha\beta}\right).
\end{align}
In contrast to $\Gamma^\mu_{\alpha\beta}$, these symbols admit a finite perturbative expansion:
\begin{align}
\begin{split}
\Gamma_{\mu\alpha\beta} =& \cfrac{\kappa}{2} \left[ \partial_\alpha h_{\beta\mu} + \partial_\beta h_{\alpha\mu} - \partial_\mu h_{\alpha\beta}\right] \Leftrightarrow \kappa \, (-i)\,p_\lambda \left(\Gamma_{\mu\alpha\beta}\right)^{\lambda\rho\sigma} h_{\rho\sigma}(p) \,, \\
\left(\Gamma_{\mu\alpha\beta}\right)^{\lambda\rho\sigma} =& \cfrac12\left[ \delta^\lambda_\alpha I_{\beta\mu}{}^{\rho\sigma} + \delta^\lambda_\beta I_{\alpha\mu}{}^{\rho\sigma} - \delta^\lambda_\mu I_{\alpha\beta}{}^{\rho\sigma} \right].
\end{split}
\end{align}
In the momentum representation the gauge fixing term reads:
\begin{align}
\begin{split}
\mathcal{A}_\text{gf} =& \sum\limits_{n=0}^\infty\int\cfrac{d^4 p_1}{(2\pi)^4}\cfrac{d^4p_2}{(2\pi)^4}\prod\limits_{i=1}^n \cfrac{d^4k_i}{(2\pi)^4}\,(2\pi)^4\delta \big(p_1+p_2+\sum k_i \big) \,h_{\rho_1\sigma_1}(k_1) \cdots h_{\rho_n\sigma_n}(k_n) A_{\lambda_1}(p_1) A_{\lambda_2}(p_2)\\
&\times \,\kappa^n\,\left(\sqrt{-g}\, g^{\mu_1\lambda_1} g^{\mu_2\lambda_2}\right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n} \left[ -\cfrac{\epsilon}{2}\, (p_1)_{\mu_1} (p_2)_{\mu_2}\right] \\
+&\sum\limits_{n=1}^\infty\int\cfrac{d^4 p_1}{(2\pi)^4}\cfrac{d^4p_2}{(2\pi)^4}\prod\limits_{i=1}^n \cfrac{d^4k_i}{(2\pi)^4}\,(2\pi)^4\delta \big(p_1+p_2+\sum k_i \big) \,h_{\rho_1\sigma_1}(k_1) \cdots h_{\rho_n\sigma_n}(k_n) A_{\lambda_1}(p_1) A_{\lambda_2}(p_2)\\
& \times\,\kappa^n\,\left( \sqrt{-g}\,g^{\mu\nu} g^{\mu_1\lambda_1} g^{\mu_2\lambda_2} \right)^{\rho_2\sigma_2\cdots\rho_n\sigma_n}\Big[ \epsilon \, \left(\Gamma_{\mu_1\mu\nu}\right)^{\sigma\rho_1\sigma_1}\,(k_1)_\sigma \, (p_2)_{\mu_2}\Big] \\
+& \sum\limits_{n=2}^\infty\int\cfrac{d^4 p_1}{(2\pi)^4}\cfrac{d^4p_2}{(2\pi)^4}\prod\limits_{i=1}^n \cfrac{d^4k_i}{(2\pi)^4}\,(2\pi)^4\delta \big(p_1+p_2+\sum k_i \big) \,h_{\rho_1\sigma_1}(k_1) \cdots h_{\rho_n\sigma_n}(k_n) \,A_{\lambda_1}(p_1) A_{\lambda_2}(p_2)\\
&\times\,\kappa^n\,\left(\sqrt{-g}\,g^{\mu\nu} g^{\alpha\beta} g^{\mu_1\lambda_1} g^{\mu_2\lambda_2} \right)^{\rho_3\sigma_3\cdots\rho_n\sigma_n} \left[ - \cfrac{\epsilon}{2}\, (k_1)_{\tau_1}(k_2)_{\tau_2} \,\big(\Gamma_{\mu_1\mu\nu} \big)^{\tau_1\rho_1\sigma_1} \big( \Gamma_{\mu_2\alpha\beta}\big)^{\tau_2\rho_2\sigma_2}\right].
\end{split}
\end{align}
In full analogy with the previous cases, the background part of this expression corresponds to the following propagator\footnote{It matches the expression for the vector propagator given in FeynCalc with $\epsilon_\text{FeynCalc} = -1/\epsilon_\text{FeynGrav}$.}:
\begin{align}\label{Maxwell_propagator}
\begin{gathered}
\begin{fmffile}{Diag04}
\begin{fmfgraph*}(30,30)
\fmfleft{L}
\fmfright{R}
\fmf{photon}{L,R}
\fmflabel{$\mu$}{L}
\fmflabel{$\nu$}{R}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
\hspace{20pt} = i ~ \cfrac{ -\eta_{\mu\nu} + \left(1+ \cfrac{1}{\epsilon}\right) \cfrac{p_\mu \, p_\nu}{p^2} }{p^2} ~.
\end{align}
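As a quick consistency check of the conventions, for $\epsilon = -1$ the momentum-dependent part of \eqref{Maxwell_propagator} vanishes,
\begin{align}
i ~ \cfrac{ -\eta_{\mu\nu} + \left(1+ \cfrac{1}{\epsilon}\right) \cfrac{p_\mu \, p_\nu}{p^2} }{p^2} ~\Bigg|_{\epsilon=-1} = -\,i~\cfrac{\eta_{\mu\nu}}{p^2}\,,
\end{align}
which is the familiar Feynman-gauge photon propagator; via the mapping given in the footnote, this corresponds to $\epsilon_\text{FeynCalc}=1$.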
The interaction rules are given by the following formula:
\begin{align*}
\nonumber \\
\begin{gathered}
\begin{fmffile}{FR_V_2}
\begin{fmfgraph*}(40,40)
\fmfleft{L1,L2}
\fmfright{R1,R2}
\fmf{dbl_wiggly}{L1,V}
\fmf{dbl_wiggly}{L2,V}
\fmfdot{V}
\fmf{photon}{R1,V}
\fmf{photon}{V,R2}
\fmffreeze
\fmf{dots}{L1,L2}
\fmflabel{$p_1,\lambda_1$}{R1}
\fmflabel{$p_2,\lambda_2$}{R2}
\fmflabel{$\rho_1\sigma_1,k_1$}{L1}
\fmflabel{$\rho_n\sigma_n,k_n$}{L2}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
\end{align*}
\begin{align}\label{rule_V_2}
\begin{split}
= i\, \kappa^n \Bigg[& \cfrac12\,\left(\sqrt{-g}\,g^{\mu\alpha}g^{\nu\beta}\right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n}\,(p_1)_{\mu_1}(p_2)_{\mu_2} \big(F_{\mu\nu}\big)^{\mu_1\lambda_1} \big(F_{\alpha\beta}\big)^{\mu_2\lambda_2} \\
& -\epsilon \, \left(\sqrt{-g} \, g^{\mu_1\lambda_1} g^{\mu_2\lambda_2}\right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n} (p_1)_{\mu_1} (p_2)_{\mu_2} \\
& +\epsilon\left\{ \left(\sqrt{-g}\,g^{\mu\nu}g^{\mu_1\lambda_1}g^{\mu_2\lambda_2}\right)^{\rho_2\sigma_2\cdots\rho_n\sigma_n}\,(k_1)_\sigma \left[ (p_2)_{\mu_2}\big(\Gamma_{\mu_1\mu\nu}\big)^{\sigma\rho_1\sigma_1} + (p_1)_{\mu_1}\big(\Gamma_{\mu_2\mu\nu}\big)^{\sigma\rho_1\sigma_1} \right] + \cdots \right\}\\
& -\cfrac{\epsilon}{2} \Bigg\{ \left(\sqrt{-g}\,g^{\mu\nu}g^{\alpha\beta} g^{\mu_1\lambda_1} g^{\mu_2\lambda_2}\right)^{\rho_3\sigma_3\cdots\rho_n\sigma_n} \left[ (k_1)_{\tau_1}\,(k_2)_{\tau_2} \big(\Gamma_{\mu_1\mu\nu}\big)^{\tau_1\rho_1\sigma_1} \big(\Gamma_{\mu_2\alpha\beta}\big)^{\tau_2\rho_2\sigma_2} \right.\\
& \hspace{190pt} \left.+ (k_1)_{\tau_2}\,(k_2)_{\tau_1} \big(\Gamma_{\mu_2\mu\nu}\big)^{\tau_1\rho_2\sigma_2} \big(\Gamma_{\mu_1\alpha\beta}\big)^{\tau_2\rho_1\sigma_1} \right] + \cdots \Bigg\}\Bigg].
\end{split}
\end{align}
In this expression the dots denote terms that render the expression symmetric with respect to the graviton momenta. The third term contributes only to vertices with $n\geq 1$ gravitons, while the last term contributes only to vertices with $n\geq 2$ gravitons.
The ghost sector of the theory shall be treated as follows. The Faddeev-Popov differential operator $\Delta$ reduces to the covariant d'Alembert operator in curved spacetime:
\begin{align}
\Delta = \cfrac{\delta}{\delta \zeta} ~ \nabla_\mu\left(A^\mu + \nabla^\mu \zeta\right) = g^{\mu\nu}\nabla_\mu\nabla_\nu\,.
\end{align}
Therefore, the ghost part of the generating functional describes a single massless scalar ghost coupled to gravity:
\begin{align}
\begin{split}
\mathcal{Z}_\text{ghost} =& \int\mathcal{D}[c]\mathcal{D}[\overline{c}] \exp\left[i \,\int d^4 x \sqrt{-g} \, \left( \overline{c}\, \square\, c \right)\right] \\
=& \int\mathcal{D}[c]\mathcal{D}[\overline{c}] \, \exp\left[ - i\, \int d^4 x \, \sqrt{-g}\, g^{\mu\nu} \,\nabla_\mu \overline{c} \, \nabla_\nu c\right].
\end{split}
\end{align}
The corresponding perturbative expansion is similar to the previous cases:
\begin{align}
\begin{split}
\mathcal{A}_\text{ghost} =& -\int d^4 x \, \sqrt{-g} \, g^{\mu\nu} \partial_\mu \overline{c} \,\partial_\nu c \\
=&\sum\limits_{n=0}^\infty\int \cfrac{d^4 p_1}{(2\pi)^4}\cfrac{d^4 p_2}{(2\pi)^4} \prod\limits_{i=1}^n \cfrac{d^4 k_i}{(2\pi)^4} \, (2\pi)^4 \delta\left( p_1 + p_2 + \sum k_i \right) \, h_{\rho_1\sigma_1}(k_1) \cdots h_{\rho_n\sigma_n}(k_n)\\
&\times\,\kappa^n\, \overline{c}(p_1) \left[ \left(\sqrt{-g} \, g^{\mu\nu} \right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n} (p_1)_\mu (p_2)_\nu \right] c(p_2) .
\end{split}
\end{align}
This expression results in the following ghost propagator
\begin{align}\label{Maxwell_ghost_propagator}
\begin{gathered}
\begin{fmffile}{Diag05}
\begin{fmfgraph*}(30,30)
\fmfleft{L}
\fmfright{R}
\fmf{dots}{L,R}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
\hspace{20pt} = i\, \cfrac{-1}{p^2} \,,
\end{align}
and in the following interaction rule:
\begin{align}\label{rule_Gh_1}
\nonumber \\
\begin{gathered}
\begin{fmffile}{FR_Gh_1}
\begin{fmfgraph*}(40,40)
\fmfleft{L1,L2}
\fmfright{R1,R2}
\fmf{dbl_wiggly}{L1,V}
\fmf{dbl_wiggly}{L2,V}
\fmf{dots}{R1,V}
\fmf{dots}{R2,V}
\fmffreeze
\fmfdot{V}
\fmf{dots}{L1,L2}
\fmflabel{$p_1$}{R1}
\fmflabel{$p_2$}{R2}
\fmflabel{$\rho_1\sigma_1$}{L1}
\fmflabel{$\rho_n\sigma_n$}{L2}
\end{fmfgraph*}
\end{fmffile}
\end{gathered} = i \, \kappa^n \,\left(\sqrt{-g} \, g^{\mu\nu} \right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n} \,I_{\mu\nu}{}^{\alpha\beta} (p_1)_\alpha (p_2)_\beta . \\ \nonumber
\end{align}
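As a sanity check, for $n=0$ the right-hand side of \eqref{rule_Gh_1} reduces to $i\,\eta^{\mu\nu} I_{\mu\nu}{}^{\alpha\beta} (p_1)_\alpha (p_2)_\beta = i\,(p_1\cdot p_2)$; on an internal line with $p_2 = -p_1 = -p$ this equals $-i\,p^2$, i.e. $i$ times the kinetic operator, consistent with the ghost propagator \eqref{Maxwell_ghost_propagator}.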
Let us emphasize once more that the ghosts discussed above are the standard Faddeev-Popov ghosts and should be treated accordingly. They do not appear in external states and enter only as internal lines, i.e. at the loop level. In the context of gravity they are critical for the following reason. In a given diagram, a vertex describing the interaction between gravitons and vectors accounts for both physical and non-physical vector field polarizations. The coupling of the Faddeev-Popov ghosts to gravity cancels the energy contributions related to the non-physical polarizations, which makes the ghosts necessary for the consistency of the theory.
\section{$SU(N)$ Yang-Mills}\label{section_SUNYM}
Let us turn to the gravitational interaction of the $SU(N)$ Yang-Mills model. In flat spacetime the model is given by the following action:
\begin{align}\label{SUNYM_flat}
\begin{split}
\mathcal{A} =& \int d^4 x \left[ \overline{\psi} \left(i\, \widehat{\mathcal{D}} - m\right) \psi - \cfrac14\, F^a_{\mu\nu} \, F^{a\mu\nu}\right]\\
=&\int d^4 x \left[ \overline{\psi} (i\,\widehat{\partial} -m )\psi - \cfrac14\,\left(f^a_{\mu\nu}\right)^2 + \mathit{g}_\text{s} \,\overline{\psi} \widehat{A}\psi -\mathit{g}_\text{s}\,f^{abc}\partial_\mu A^a_\nu \,A^{b\mu} A^{c\nu} - \cfrac14\,\mathit{g}_\text{s}^2 \,f^{amn}\,f^{aij} \,(A^m\!\!\cdot\!\! A^i) (A^n\!\!\cdot\!\! A^j) \right] .
\end{split}
\end{align}
Here the fermion covariant derivative is defined as follows:
\begin{align}
\mathcal{D}_\mu \psi = \partial_\mu \psi - i\,\mathit{g}_\text{s}\,A_\mu\,\psi .
\end{align}
The field tensor $F_{\mu\nu}$ reads
\begin{align}
F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu -i\,\mathit{g}_\text{s} [A_\mu , A_\nu] .
\end{align}
The gauge field $A_\mu$ takes values in the $SU(N)$ algebra:
\begin{align}
A_\mu = A^a_\mu \, T^a ,
\end{align}
where $T^a$ are the generators. This gives the following expression for the field tensor components:
\begin{align}
F^a_{\mu\nu} = \partial_\mu A^a_\nu - \partial_\nu A^a_\mu + \mathit{g}_\text{s}\, f^{abc} \, A^b_\mu A^c_\nu \,.
\end{align}
Here $f^{abc}$ are the structure constants of the algebra:
\begin{align}
[T^a, T^b] = i\, f^{abc} \, T^c\,.
\end{align}
The generalization of action \eqref{SUNYM_flat} to curved spacetime is straightforward. One shall use the invariant four-volume element and modify the covariant derivatives to account for the curved geometry. This produces the following action:
\begin{align}
\mathcal{A} = \int d^4 x \, \sqrt{-g} \left[ \,\overline{\psi} \left( i\, \mathfrak{e}_m{}^\mu\,\gamma^m\, \mathcal{D}_\mu - m \right) \psi -\cfrac14\,F^a_{\mu\nu} \,F^{a\mu\nu} \right].
\end{align}
Here $\mathfrak{e}_m{}^\mu$ is a vierbein. The covariant derivative for fermions now reads
\begin{align}
\mathcal{D}_\mu \psi = \nabla_\mu \psi - i\,\mathit{g}_\text{s}\,A_\mu \,\psi ,
\end{align}
with $\nabla_\mu$ being the part accounting for the spacetime curvature via the spin connection. The field tensor $F_{\mu\nu}$ shall also account for the spacetime curvature but, since the Christoffel symbols are symmetric in their lower indices, it preserves its simple form:
\begin{align}
\begin{split}
F_{\mu\nu} &= \nabla_\mu A_\nu - \nabla_\nu A_\mu - i\,\mathit{g}_\text{s}\,[A_\mu,A_\nu] \\
& = \partial_\mu A_\nu - \Gamma_{\mu\nu}^\sigma A_\sigma - \partial_\nu A_\mu + \Gamma_{\nu\mu}^\sigma A_\sigma - i\,\mathit{g}_\text{s}\,[A_\mu,A_\nu]\\
&=\partial_\mu A_\nu - \partial_\nu A_\mu -i\,\mathit{g}_\text{s} [A_\mu , A_\nu]\,.
\end{split}
\end{align}
Consequently, the $SU(N)$ Yang-Mills action in a curved spacetime reads:
\begin{align}\label{SUNYM_curve}
\begin{split}
\mathcal{A} = &\int d^4 x \sqrt{-g} \Bigg[ \overline{\psi} \left( i\, \mathfrak{e}_m{}^\mu \, \gamma^m \, \nabla_\mu- m \right) \psi - \cfrac14\, \left(f^a_{\mu\nu}\right)^2\\
&+ \mathit{g}_\text{s} \,\overline{\psi} \left( \mathfrak{e}_m{}^\mu \gamma^m \right) \psi\,A_\mu -g^{\mu\nu} g^{\alpha\beta} \, \mathit{g}_\text{s} \,f^{abc} \partial_\mu A^a_\alpha A^b_\nu A^c_\beta -\cfrac14\,\mathit{g}_\text{s}^2\,f^{amn}\,f^{aij}\,g^{\mu\nu} g^{\alpha\beta} \,A^m_\mu\,A^i_\nu\,A^n_\alpha\,A^j_\beta \Bigg]\,.
\end{split}
\end{align}
Perturbative quantization of the kinetic parts of the action is discussed above, so we proceed with the derivation of the Feynman rules for the interaction sector. The perturbative expansion for the term describing the coupling of fermions to the gauge vector is given by the following expression:
\begin{align}\label{ffv_vertex}
\begin{split}
&\int d^4 x \sqrt{-g} \,\mathit{g}_\text{s}\, \overline{\psi} \left(\mathfrak{e}_m{}^\mu \gamma^m \right) \psi \, A_\mu \\
&=\sum\limits_{n=0}^\infty \int \cfrac{d^4 p_1}{(2\pi)^4}\,\cfrac{d^4 p_2}{(2\pi)^4}\,\cfrac{d^4 k}{(2\pi)^4} \,\prod\limits_{i=1}^n \cfrac{d^4 k_i}{(2\pi)^4}\, (2\pi)^4\,\delta\left(p_1+p_2+k+\sum k_i \right) \,h_{\rho_1\sigma_1}(k_1)\cdots h_{\rho_n\sigma_n}(k_n) \\
&\hspace{10pt}\times \, \kappa^n \, \overline{\psi}(p_2) \left[\mathit{g}_\text{s} \,\gamma^m\,T^a \, \left(\sqrt{-g}\,\mathfrak{e}_m{}^\mu\right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n} \right] \psi(p_1) \, A^a_\mu(k)\,.
\end{split}
\end{align}
This expression produces the following Feynman rule:
\begin{align}\label{rule_QQG}
\nonumber \\
\begin{gathered}
\begin{fmffile}{FR_QQG}
\begin{fmfgraph*}(40,40)
\fmfleft{L1,L2,L3}
\fmftop{T}
\fmfbottom{B}
\fmfright{R}
\fmf{fermion}{B,V,T}
\fmf{gluon}{V,R}
\fmf{phantom}{V,L2}
\fmfdot{V}
\fmffreeze
\fmf{dbl_wiggly}{L1,V}
\fmf{dbl_wiggly}{L3,V}
\fmf{dots}{L1,L3}
\fmflabel{$\rho_1\sigma_1$}{L1}
\fmflabel{$\rho_n\sigma_n$}{L3}
\fmflabel{$\mu,a$}{R}
\end{fmfgraph*}
\end{fmffile}
\end{gathered} \hspace{30pt} = i\, \kappa^n\, \mathit{g}_\text{s}\,\gamma^m\,T^a\,\left(\sqrt{-g} \,\mathfrak{e}_m{}^\mu \right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n} . \\ \nonumber
\end{align}
The perturbative expansion for the term cubic in gauge vectors reads:
\begin{align}\label{vvv_vertex}
\begin{split}
&\int d^4 x \sqrt{-g} \, (-\mathit{g}_\text{s})\,f^{abc}\, g^{\mu\nu} g^{\alpha\beta}\, \partial_\mu A^a_\alpha A^b_\nu A^c_\beta \\
& = \sum\limits_{n=0}^\infty \int \cfrac{d^4 p_1}{(2\pi)^4} \, \cfrac{d^4 p_2}{(2\pi)^4} \, \cfrac{d^4 p_3}{(2\pi)^4} \,\prod\limits_{i=1}^n\cfrac{d^4 k_i}{(2\pi)^4}\, (2\pi)^4\,\delta\left(p_1+p_2+p_3+\sum k_i\right) \,h_{\rho_1\sigma_1}(k_1) \cdots h_{\rho_n\sigma_n}(k_n) \\
&\hspace{10pt}\times\,\kappa^n\,\left[ (-i\, \mathit{g}_\text{s}) \,f^{abc} \, (p_1)_{\sigma}\, \left(\sqrt{-g} \,g^{\mu_1\mu_3} g^{\mu_2\sigma}\right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n} \right] \, A^a_{\mu_1}(p_1)\,A^b_{\mu_2}(p_2)\,A^c_{\mu_3}(p_3) \,.
\end{split}
\end{align}
This expression produces the following rule:
\begin{samepage}
\begin{align*}
\\
\begin{gathered}
\begin{fmffile}{FR_GGG}
\begin{fmfgraph*}(60,40)
\fmfleft{L1,L2,L3}
\fmfright{R1,R2,R3}
\fmf{dbl_wiggly,tension=2}{L1,V}
\fmf{dbl_wiggly,tension=2}{L3,V}
\fmf{gluon,tension=0.5}{V,R1}
\fmf{gluon,tension=0.5}{V,R2}
\fmf{gluon,tension=0.5}{V,R3}
\fmffreeze
\fmf{dots}{L1,L3}
\fmfdot{V}
\fmflabel{$\rho_1\sigma_1$}{L1}
\fmflabel{$\rho_n\sigma_n$}{L3}
\fmflabel{$\mu_1,a,p_1$}{R1}
\fmflabel{$\mu_2,b,p_2$}{R2}
\fmflabel{$\mu_3,c,p_3$}{R3}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
\end{align*}
\begin{align}\label{rule_GGG}
\begin{split}
=& \kappa^n \,\mathit{g}_\text{s}\,f^{abc} \Big[ (p_1-p_2)_{\sigma}\left(\sqrt{-g} \,g^{\mu_1\mu_2} g^{\mu_3\sigma}\right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n} \\
& +(p_3-p_1)_{\sigma} \left(\sqrt{-g} \,g^{\mu_1\mu_3} g^{\mu_2\sigma}\right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n} +(p_2-p_3)_{\sigma} \left(\sqrt{-g} \,g^{\mu_2\mu_3} g^{\mu_1\sigma}\right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n} \Big] .
\end{split}
\end{align}
\end{samepage}
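For $n=0$ all the bracketed index collections reduce to products of flat metrics, and \eqref{rule_GGG} collapses to the standard flat-spacetime three-gluon vertex (up to the overall phase convention):
\begin{align}
\mathit{g}_\text{s}\,f^{abc} \left[ (p_1-p_2)^{\mu_3}\,\eta^{\mu_1\mu_2} + (p_3-p_1)^{\mu_2}\,\eta^{\mu_1\mu_3} + (p_2-p_3)^{\mu_1}\,\eta^{\mu_2\mu_3} \right].
\end{align}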
Lastly, the term describing the four-vector coupling has the following perturbative expansion:
\begin{align}\label{vvvv_vertex}
\begin{split}
&\int d^4 x \sqrt{-g} \left(-\cfrac14\,\mathit{g}_\text{s}^2\right)\, f^{amn} f^{aij} \,g^{\mu\nu} g^{\alpha\beta} \, A^m_\mu \, A^i_\nu\, A^n_\alpha\, A^j_\beta\\
&=\sum\limits_{n=0}^\infty\int \cfrac{d^4 p_1}{(2\pi)^4}\,\cfrac{d^4 p_2}{(2\pi)^4}\,\cfrac{d^4 p_3}{(2\pi)^4}\,\cfrac{d^4 p_4}{(2\pi)^4}\,\prod\limits_{i=1}^n \cfrac{d^4 k_i}{(2\pi)^4}\,(2\pi)^4 \,\delta\left(p_1+p_2+p_3+p_4+\sum k_i \right) h_{\rho_1\sigma_1}(k_1)\cdots h_{\rho_n\sigma_n}(k_n) \\
&\hspace{10pt}\times\left( -\cfrac14 \right) \mathit{g}_\text{s}^2 \kappa^n f^{amn} f^{aij} \left(\sqrt{-g}\, g^{\mu_1\mu_3} g^{\mu_2\mu_4}\right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n} A^m_{\mu_1}(p_1)A^n_{\mu_2}(p_2) A^i_{\mu_3}(p_3)A^j_{\mu_4}(p_4) .
\end{split}
\end{align}
This results in the following interaction rule:
\begin{align*}
\\
\begin{gathered}
\begin{fmffile}{FR_GGGG}
\begin{fmfgraph*}(50,50)
\fmfleft{L1,L2}
\fmfright{R1,R2,R3,R4}
\fmf{gluon,tension=.5}{R1,V}
\fmf{gluon,tension=.5}{R2,V}
\fmf{gluon,tension=.5}{R3,V}
\fmf{gluon,tension=.5}{R4,V}
\fmf{dbl_wiggly,tension=2}{L1,V}
\fmf{dbl_wiggly,tension=2}{L2,V}
\fmfdot{V}
\fmffreeze
\fmf{dots}{L1,L2}
\fmflabel{$\rho_1\sigma_1$}{L1}
\fmflabel{$\rho_n\sigma_n$}{L2}
\fmflabel{$\mu_1,a_1$}{R1}
\fmflabel{$\mu_2,a_2$}{R2}
\fmflabel{$\mu_3,a_3$}{R3}
\fmflabel{$\mu_4,a_4$}{R4}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
\end{align*}
\begin{align}\label{rule_GGGG}
\begin{split}
= -i\,\mathit{g}_\text{s}^2 \kappa^n \Bigg[& f^{a_1 a_4 s} f^{a_2 a_3 s} \left( \left(\sqrt{-g}\, g^{\mu_1\mu_2}g^{\mu_3\mu_4}\right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n}-\left(\sqrt{-g}\, g^{\mu_1\mu_3}g^{\mu_2\mu_4}\right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n}\right) \\
&+f^{a_1 a_3 s} f^{a_2 a_4 s} \left( \left(\sqrt{-g}\, g^{\mu_1\mu_2}g^{\mu_3\mu_4}\right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n}-\left(\sqrt{-g}\, g^{\mu_1\mu_4}g^{\mu_2\mu_3}\right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n}\right) \\
& +f^{a_1 a_2 s} f^{a_3 a_4 s} \left( \left(\sqrt{-g}\, g^{\mu_1\mu_3}g^{\mu_2\mu_4}\right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n}-\left(\sqrt{-g}\, g^{\mu_1\mu_4}g^{\mu_2\mu_3}\right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n}\right) \Bigg].
\end{split}
\end{align}
Finally, we shall turn to a discussion of the gauge fixing and the Faddeev-Popov ghosts. The Yang-Mills action \eqref{SUNYM_curve} respects the following gauge transformations:
\begin{align}
\begin{split}
\delta \psi =& i\,\theta^a \, T^a \psi ,\\
\delta A_\mu =& i\,\theta^a \,[T^a, A_\mu] + \cfrac{1}{\mathit{g}_\text{s}} \, \partial_\mu \theta^a\,T^a ,\\
\delta A^a_\mu =& \cfrac{1}{\mathit{g}_\text{s}} \left[\partial_\mu \theta^a - \mathit{g}_\text{s} \, f^{abc} \, \theta^b \, A^c_\mu \right] .
\end{split}
\end{align}
Here $\theta^a$ are the gauge parameters. In flat spacetime, one would use the standard Lorenz gauge fixing condition
\begin{align}
\partial^\mu A^a_\mu =0 .
\end{align}
For the curved spacetime case, the standard derivative shall be replaced with the covariant derivative, so the Lorenz gauge fixing condition reads:
\begin{align}
g^{\mu\nu} \nabla_\mu A^a_\nu =0 .
\end{align}
We use this gauge fixing condition to introduce the Faddeev-Popov ghosts via the procedure discussed in the previous section. This gauge fixing term brings the kinetic part of the vector field to the same form as obtained in the previous section.
The ghost action is defined by the Faddeev-Popov determinant obtained from the gauge fixing condition:
\begin{align}
\det\left[ \cfrac{\delta}{\delta \theta^b}\,\left\{ g^{\mu\nu} \nabla_\mu A^a_\nu \right\} \right] = \det\left[ \cfrac{1}{\mathit{g}_\text{s}}\,g^{\mu\nu} \, \nabla_\mu \left(\delta^{ab}\nabla_\nu - \mathit{g}_\text{s} \,f^{abc} \, A^c_\nu \right)\right].
\end{align}
It results in the following action:
\begin{align}
\mathcal{A}_\text{FP} = \int d^4 x \sqrt{-g} \left[ - g^{\mu\nu}\, \nabla_\mu\overline{c}^a \nabla_\nu c^a + \mathit{g}_\text{s} \,g^{\mu\nu} \nabla_\mu \overline{c}^a f^{abc} c^b A_\nu^c \right].
\end{align}
The kinetic part of the action is similar to the case of a single massless vector field discussed in the previous section. The part of this action describing the interaction between ghosts, vectors, and gravitons admits the following perturbative expansion:
\begin{align}
\begin{split}
&\int d^4 x \sqrt{-g} \left[ \mathit{g}_\text{s}\,\partial_\mu \overline{c}^a \, f^{abc} \, c^b \, A^{c\,\mu} \right]\\
=&\sum\limits_{n=0}^\infty \int \cfrac{d^4 p_1}{(2\pi)^4} \cfrac{d^4 p_2}{(2\pi)^4} \cfrac{d^4 k}{(2\pi)^4}\prod\limits_{i=1}^n \cfrac{d^4 k_i}{(2\pi)^4}\, (2\pi)^4 \delta\left(p_1+p_2+k+\sum k_i\right)\, h_{\rho_1\sigma_1}(k_1)\cdots h_{\rho_n\sigma_n}(k_n)\\
&\times\, i\,\kappa^n\,\mathit{g}_\text{s}\,(p_1)_\nu \, f^{abc} \, \overline{c}^a(p_1) \,c^b(p_2) \, \left(\sqrt{-g}\,g^{\mu\nu}\right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n}\,\,A^c_\mu(k).
\end{split}
\end{align}
This expression produces the following rule:
\begin{align}\label{rule_GhGhG}
\nonumber \\
\begin{gathered}
\begin{fmffile}{FR_GhGhG}
\begin{fmfgraph*}(50,50)
\fmfleft{L1,L2}
\fmfright{R1,R2,R3}
\fmf{dbl_wiggly,tension=2}{L1,V}
\fmf{dbl_wiggly,tension=2}{L2,V}
\fmf{gluon,tension=.5}{V,R2}
\fmf{dots_arrow,tension=.5}{R1,V}
\fmf{dots_arrow,tension=.5}{V,R3}
\fmfdot{V}
\fmffreeze
\fmf{dots}{L1,L2}
\fmflabel{$\rho_1\sigma_1$}{L1}
\fmflabel{$\rho_n\sigma_n$}{L2}
\fmflabel{$\mu,c$}{R2}
\fmflabel{$b$}{R1}
\fmflabel{$p_1,a$}{R3}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
\hspace{20pt}=& - \kappa^n\,g_s\,f^{abc} \, (p_1)_\nu \,\left(\sqrt{-g}\,g^{\mu\nu}\right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n}. \\ \nonumber
\end{align}
Here all momenta are directed inwards and related by the conservation law.
Formulae \eqref{rule_F_1}, \eqref{rule_V_2}, \eqref{rule_Gh_1}, \eqref{rule_QQG}, \eqref{rule_GGG}, \eqref{rule_GGGG}, \eqref{rule_GhGhG} provide the complete set of Feynman rules required for the treatment of the $SU(N)$ Yang-Mills model within perturbative quantum gravity.
\section{Faddeev-Popov ghosts for gravity}\label{section_gravity_ghosts}
General relativity is a gauge theory in the sense that it is invariant with respect to local transformations generated by coordinate transformations. Therefore, a gauge fixing procedure similar to that for a gauge vector field shall be performed. However, the gravitational theory presents a more sophisticated system because of its geometrical nature. The perturbative approach to quantum gravity operates with small metric perturbations about the flat background. It may seem that this reduces gravity to a gauge theory of a rank-$2$ symmetric tensor $h_{\mu\nu}$ propagating about the flat spacetime, but this is not the case.
A geometrical theory of gravity is different from a gauge theory of rank-$2$ symmetric tensor. The distinction lies in the choice of gauge fixing conditions. Let us begin with the case of a theory of a symmetric tensor $h_{\mu\nu}$ with the following gauge symmetry:
\begin{align}
\delta h_{\mu\nu} = \partial_\mu \zeta_\nu + \partial_\nu \zeta_\mu .
\end{align}
The fundamental object of this theory is the $h_{\mu\nu}$ tensor, so a gauge fixing condition can be stated in terms of $h_{\mu\nu}$ alone. For instance, one can use the following gauge fixing condition:
\begin{align}\label{naive_gauge_fixing}
\partial_\mu h^{\mu\nu} - \cfrac12\,\partial^\nu h = 0\,.
\end{align}
This (or any other) condition defines the structure of the Faddeev-Popov ghosts. Most importantly, since $h_{\mu\nu}$ is the fundamental object of the theory, the structure of divergences can only be expressed in terms of $h_{\mu\nu}$ alone. Within the perturbative approach, all geometric quantities (Riemann tensor, Ricci tensor, scalar curvature, etc.) are expressed in terms of small metric perturbations. The opposite is not true, because there are operators given in terms of $h_{\mu\nu}$ alone that do not represent any geometric quantities. Consequently, within a gauge theory of a rank-$2$ symmetric tensor, one expects to find divergences that cannot be described by geometric quantities, and the theory can no longer be treated as a geometrical theory.
The situation is different for the consistent treatment of general relativity (or any other geometrical theory). Within the geometrical approach, gauge transformations are not introduced arbitrarily, but are related to coordinate frame transformations. This has two immediate implications. Firstly, within a geometrical theory gauge transformations are given by the so-called Lie derivatives:
\begin{align}
\delta g_{\mu\nu} \overset{\text{def}}{=} \mathcal{L}_\zeta g_{\mu\nu} = \nabla_\mu \zeta_\nu + \nabla_\nu \zeta_\mu .
\end{align}
Here $\mathcal{L}_\zeta$ is the Lie derivative with respect to an arbitrary vector field $\zeta$ which plays the role of gauge parameters. Secondly, any suitable gauge fixing conditions must be expressed in terms of geometrical quantities. Because of this, gauge fixing conditions \eqref{naive_gauge_fixing} are inconsistent with the geometrical approach and they cannot be imposed. Instead, we use the following gauge fixing conditions:
\begin{align}\label{the_gravity_gauge_fixing}
\mathcal{G}^\mu \overset{\text{note}}{=} g^{\alpha\beta} \Gamma^\mu_{\alpha\beta} =0 .
\end{align}
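This choice has a transparent geometric meaning. Using the standard identity for the contracted Christoffel symbols,
\begin{align}
g^{\alpha\beta}\,\Gamma^\mu_{\alpha\beta} = -\cfrac{1}{\sqrt{-g}}\,\partial_\nu\left(\sqrt{-g}\,g^{\mu\nu}\right),
\end{align}
condition \eqref{the_gravity_gauge_fixing} is recognized as the harmonic (de Donder) coordinate condition $\partial_\nu\left(\sqrt{-g}\,g^{\mu\nu}\right)=0$.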
Together with the perturbative expansion \eqref{the_perturbative_expansion}, the gauge fixing condition \eqref{the_gravity_gauge_fixing} generates the following infinite series:
\begin{align}
\mathcal{G}^\nu = \cfrac{\kappa}{2}\,g^{\mu\nu} g^{\alpha\beta} \left[ \partial_\alpha h_{\beta\mu} + \partial_\beta h_{\alpha\mu} - \partial_\mu h_{\alpha\beta}\right] = \kappa \left[\partial_\mu h^{\mu\nu} - \cfrac12\,\partial^\nu h \right] + \mathcal{O}(\kappa^2) .
\end{align}
The leading term of this series reproduces \eqref{naive_gauge_fixing}. Within the geometric theory, this series cannot be truncated, so the ghost sector is defined by the whole infinite expansion.
The need to use the gauge fixing condition \eqref{the_gravity_gauge_fixing} instead of \eqref{naive_gauge_fixing} marks the difference between geometrical theories of gravity and a gauge theory of the $h_{\mu\nu}$ tensor. The Faddeev-Popov prescription for the geometrical approach shall be constructed as follows. Firstly, we note that the gauge fixing condition $\mathcal{G}^\mu$ defined by \eqref{the_gravity_gauge_fixing} is a vector with mass dimension $+1$. Consequently, the general relativity action with the corresponding gauge fixing term shall be equipped with an additional dimensional parameter:
\begin{align}\label{Hilbert_gauge_fixed}
\mathcal{A}_{\text{H}+\text{gf}} = \int d^4 x \sqrt{-g} \left[ -\cfrac{2}{\kappa^2}\,R + \cfrac{\epsilon}{2\,\kappa^2} \, g_{\mu\nu} \,\mathcal{G}^\mu \mathcal{G}^\nu \right].
\end{align}
Secondly, the corresponding Faddeev-Popov ghosts are also vectors. The structure of their action is defined by the variation of the gauge fixing term \eqref{the_gravity_gauge_fixing}:
\begin{align}
\delta \mathcal{G}^\mu = \mathcal{L}_\zeta \left[ g^{\alpha\beta} \, \Gamma^\mu_{\alpha\beta} \right] = \square \zeta^\mu - 2\, \Gamma^\mu_{\alpha\beta} \, \nabla^\alpha\zeta^\beta + R^\mu{}_\nu \zeta^\nu
\end{align}
with $R_{\mu\nu}$ being the Ricci tensor. Consequently, the ghost action reads:
\begin{align*}
\mathcal{A}_\text{ghost} = \int d^4 x \sqrt{-g} \left[ -g^{\alpha\beta} g^{\mu\nu} \nabla_\alpha \overline{c}_\mu \nabla_\beta c_\nu - 2\,\Gamma^\mu_{\alpha\beta} \,\overline{c}_\mu \,\nabla^\alpha c^\beta + R_{\mu\nu} \, \overline{c}^\mu \,c^\nu \right].
\end{align*}
In all other respects, the treatment of the Faddeev-Popov ghosts remains the same.
The structure of Feynman rules for gravity in this gauge is derived via the standard perturbative expansion. The structure of graviton interactions is given by action \eqref{Hilbert_gauge_fixed}:
\begin{align}
\begin{split}
\mathcal{A}_{\text{H}+\text{gf}} &= \int d^4 x \sqrt{-g} \left[ -\cfrac{2}{\kappa^2}\,R + \cfrac{\epsilon}{2\,\kappa^2}\, g_{\mu\nu}\, g^{\alpha\beta}\, g^{\rho\sigma} \,\Gamma^\mu_{\alpha\beta}\,\Gamma^\nu_{\rho\sigma}\right] = \\
& = \int d^4 x \sqrt{-g} \, g^{\mu\nu} g^{\alpha\beta} g^{\rho\sigma} \left(-\cfrac{2}{\kappa^2}\right) \left[ \Gamma_{\alpha\mu\rho}\Gamma_{\sigma\nu\beta} - \Gamma_{\alpha\mu\nu} \Gamma_{\rho\beta\sigma} - \cfrac{\epsilon}{4} \,\Gamma_{\mu\alpha\beta} \Gamma_{\nu\rho\sigma} \right]\\
& = -\cfrac12\,\int d^4 x \sqrt{-g}\,g^{\mu\nu} g^{\alpha\beta} g^{\rho\sigma} \Bigg[ \partial_\mu h_{\alpha\beta} \partial_\nu h_{\rho\sigma} - \partial_\mu h_{\alpha\rho} \partial_\nu h_{\beta\sigma} + 2\, \partial_\mu h_{\alpha\rho} \partial_\beta h_{\nu\sigma} -2\,\partial_\mu h_{\nu\alpha} \partial_\beta h_{\rho\sigma} \\
& \hspace{170pt} -\epsilon\, \left(\partial_\mu h_{\nu\rho} \partial_\alpha h_{\beta\sigma} - \partial_\mu h_{\alpha\beta} \partial_\rho h_{\sigma\nu} + \frac14\, \partial_\mu h_{\alpha\beta} \partial_\nu h_{\rho\sigma} \right) \Bigg].
\end{split}
\end{align}
It admits the following perturbative expansion:
\begin{align}
\begin{split}
\mathcal{A}_{\text{H}+\text{gf}} =& \sum\limits_{n=0}^\infty\int\cfrac{d^4 p_1}{(2\pi)^4} \cfrac{d^4 p_2}{(2\pi)^4} \prod\limits_{i=1}^n \cfrac{d^4 k_i}{(2\pi)^4}\,(2\pi)^4\,\delta\Big(p_1+p_2+\sum k_i \Big) \, h_{\rho_1\sigma_1}(k_1) \cdots h_{\rho_n\sigma_n}(k_n)\\
&\times \left(2\,\kappa^n \right)\left( \sqrt{-g} \, g^{\mu\nu} g^{\alpha\beta} g^{\rho\sigma} \right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n} (p_1)_{\lambda_1} (p_2)_{\lambda_2} \, h_{\mu_1\nu_1}(p_1) h_{\mu_2\nu_2}(p_2) \\
& \times \Bigg[ \left(\Gamma_{\alpha\mu\rho}\right)^{\lambda_1\mu_1\nu_1} \left(\Gamma_{\sigma\nu\beta}\right)^{\lambda_2\mu_2\nu_2} - \left(\Gamma_{\alpha\mu\nu} \right)^{\lambda_1\mu_1\nu_1} \left( \Gamma_{\rho\beta\sigma}\right)^{\lambda_2\mu_2\nu_2} - \cfrac{\epsilon}{4} \left( \Gamma_{\mu\alpha\beta} \right)^{\lambda_1\mu_1\nu_1} \left(\Gamma_{\nu\rho\sigma} \right)^{\lambda_2\mu_2\nu_2}\Bigg].
\end{split}
\end{align}
The complete expression for the graviton vertex is given by the following formula:
\begin{align*}
\nonumber \\
\begin{gathered}
\begin{fmffile}{FR_h}
\begin{fmfgraph*}(40,40)
\fmfleft{L1,L2}
\fmfright{R1,R2}
\fmf{dbl_wiggly}{L1,V}
\fmf{dbl_wiggly}{L2,V}
\fmf{dbl_wiggly}{R1,V}
\fmf{dbl_wiggly}{R2,V}
\fmffreeze
\fmfdot{V}
\fmf{dots}{L1,L2}
\fmflabel{$\mu_1\nu_1,p_1$}{R1}
\fmflabel{$\mu_2\nu_2,p_2$}{R2}
\fmflabel{$\mu_3\nu_3,p_3$}{L1}
\fmflabel{$\mu_n\nu_n,p_n$}{L2}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
\end{align*}
\begin{align}\label{Rule_Gravitons}
\begin{split}
= &\,i\,2\,\kappa^{n-2}\,\left(\sqrt{-g}\,g^{\mu\nu}g^{\alpha\beta}g^{\rho\sigma}\right)^{\mu_3\nu_3\cdots\mu_n\nu_n} \, (p_1)_{\lambda_1} (p_2)_{\lambda_2} \\
& \times \Bigg[ \left(\Gamma_{\alpha\mu\rho}\right)^{\lambda_1\mu_1\nu_1} \left(\Gamma_{\sigma\nu\beta}\right)^{\lambda_2\mu_2\nu_2} - \left(\Gamma_{\alpha\mu\nu} \right)^{\lambda_1\mu_1\nu_1} \left( \Gamma_{\rho\beta\sigma}\right)^{\lambda_2\mu_2\nu_2} - \cfrac{\epsilon}{4} \left( \Gamma_{\mu\alpha\beta} \right)^{\lambda_1\mu_1\nu_1} \left(\Gamma_{\nu\rho\sigma} \right)^{\lambda_2\mu_2\nu_2}\Bigg] \\
& + \text{permutations}.
\end{split}
\end{align}
Here the summation is performed over all possible permutations of graviton parameters $\{\mu_i\,\nu_i\,p_i\}$.
The ghost action is treated similarly.
\begin{align}
\begin{split}
\mathcal{A}_\text{ghost} =& \int d^4 x \sqrt{-g} \left[ -g^{\alpha\beta} g^{\mu\nu} \nabla_\alpha \overline{c}_\mu \nabla_\beta c_\nu - 2\,\Gamma^\mu_{\alpha\beta} \,\overline{c}_\mu \,\nabla^\alpha c^\beta + R_{\mu\nu} \, \overline{c}^\mu \,c^\nu \right] \\
=& \int d^4 x \sqrt{-g} \left[ -g^{\mu\nu} g^{\alpha\beta} \, \partial_\alpha\overline{c}_\mu \, \partial_\beta c_\nu \right]\\
& + \int d^4 x \sqrt{-g}\, g^{\mu\alpha}g^{\nu\beta}g^{\rho\sigma} \left[ \Gamma_{\beta\rho\alpha} \partial_\sigma \overline{c}_\mu c_\nu - \Gamma_{\alpha\rho\beta}\,\overline{c}_\mu \partial_\sigma c_\nu + \partial_\rho\Gamma_{\sigma\alpha\beta} \,\overline{c}_\mu \,c_\nu - \partial_\alpha \Gamma_{\rho\beta\sigma} \overline{c}_\mu c_\nu \right]\\
& + \int d^4 x \sqrt{-g}\,g^{\mu\alpha} g^{\nu\beta} g^{\rho\sigma} g^{\lambda\tau} \left[ \Gamma_{\rho\alpha\lambda} \Gamma_{\sigma\beta\tau} - \Gamma_{\rho\alpha\beta} \Gamma_{\sigma\lambda\tau} + \Gamma_{\alpha\rho\lambda} \Gamma_{\beta\sigma\tau} \right] \overline{c}_\mu c_\nu .
\end{split}
\end{align}
It has the following perturbative expansion:
\begin{align}
\begin{split}
\mathcal{A}_\text{ghost} =& \sum\limits_{n=0}^\infty\int \cfrac{d^4 p_1}{(2\pi)^4} \cfrac{d^4 p_2}{(2\pi)^4} \prod\limits_{i=1}^n \cfrac{d^4 k_i}{(2\pi)^4} \, (2\pi)^4 \delta\big(p_1+p_2 + \sum k_i\big) h_{\rho_1\sigma_1}(k_1)\cdots h_{\rho_n\sigma_n}(k_n) \, \overline{c}_\mu (p_1) c_\nu(p_2)\\
&\times \kappa^n \,\left(\sqrt{-g}\,g^{\mu\nu} g^{\alpha\beta} \right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n} (p_1)_\alpha (p_2)_\beta\\
+& \sum\limits_{n=1}^\infty\int \cfrac{d^4 p_1}{(2\pi)^4} \cfrac{d^4 p_2}{(2\pi)^4} \prod\limits_{i=1}^n \cfrac{d^4 k_i}{(2\pi)^4} \, (2\pi)^4 \delta\big(p_1+p_2 + \sum k_i\big) h_{\rho_1\sigma_1}(k_1)\cdots h_{\rho_n\sigma_n}(k_n) \, \overline{c}_\mu (p_1) c_\nu(p_2)\\
&\times \kappa^n (-1)\left(\sqrt{-g}\,g^{\mu\alpha}g^{\nu\beta} g^{\rho\sigma}\right)^{\rho_2\sigma_2\cdots\rho_n\sigma_n} (k_1)_\lambda\left[ (p_1)_\sigma \left(\Gamma_{\beta\rho\alpha}\right)^{\lambda\rho_1\sigma_1}-(p_2)_\sigma \left(\Gamma_{\alpha\rho\beta}\right)^{\lambda\rho_1\sigma_1} \right.\\
&\hspace{210pt}\left. + (k_1)_\rho \left(\Gamma_{\sigma\alpha\beta}\right)^{\lambda\rho_1\sigma_1} - (k_1)_\alpha \left(\Gamma_{\rho\beta\sigma}\right)^{\lambda\rho_1\sigma_1}\right]\\
+& \sum\limits_{n=2}^\infty\int \cfrac{d^4 p_1}{(2\pi)^4} \cfrac{d^4 p_2}{(2\pi)^4} \prod\limits_{i=1}^n \cfrac{d^4 k_i}{(2\pi)^4} \, (2\pi)^4 \delta\big(p_1+p_2 + \sum k_i\big) h_{\rho_1\sigma_1}(k_1)\cdots h_{\rho_n\sigma_n}(k_n) \, \overline{c}_\mu (p_1) c_\nu(p_2)\\
&\times \kappa^n (-1) \left(\sqrt{-g} \,g^{\mu\alpha}g^{\nu\beta} g^{\rho\sigma} g^{\lambda\tau}\right)^{\rho_3\sigma_3\cdots\rho_n\sigma_n}\,(k_1)_{\lambda_1} (k_2)_{\lambda_2} \left[ \left(\Gamma_{\rho\alpha\lambda}\right)^{\lambda_1\rho_1\sigma_1} \left(\Gamma_{\sigma\beta\tau}\right)^{\lambda_2\rho_2\sigma_2} \right.\\
& \hspace{140pt}\left. - \left(\Gamma_{\rho\alpha\beta}\right)^{\lambda_1\rho_1\sigma_1} \left(\Gamma_{\sigma\lambda\tau}\right)^{\lambda_2\rho_2\sigma_2} + \left(\Gamma_{\alpha\rho\lambda}\right)^{\lambda_1\rho_1\sigma_1} \left(\Gamma_{\beta\sigma\tau}\right)^{\lambda_2\rho_2\sigma_2} \right].
\end{split}
\end{align}
The complete expression describing graviton-ghost vertices reads:
\begin{samepage}
\begin{align*}
\\
\begin{gathered}
\begin{fmffile}{FR_Gh_2}
\begin{fmfgraph*}(40,40)
\fmfleft{L1,L2}
\fmfright{R1,R2}
\fmf{dbl_wiggly}{L1,V}
\fmf{dbl_wiggly}{L2,V}
\fmf{dots_arrow}{R1,V}
\fmf{dots_arrow}{V,R2}
\fmfdot{V}
\fmffreeze
\fmf{dots}{L1,L2}
\fmflabel{$\rho_1\sigma_1,k_1$}{L1}
\fmflabel{$\rho_n\sigma_n,k_n$}{L2}
\fmflabel{$\nu,p_2$}{R1}
\fmflabel{$\mu,p_1$}{R2}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
\end{align*}
\begin{align}\label{Rules_Graviton-Ghosts}
\begin{split}
= i\,\kappa^n \Bigg[& \left(\sqrt{-g} \,g^{\mu\nu}g^{\alpha\beta}\right)^{\rho_1\sigma_1\cdots\rho_n\sigma_n} (p_1)_\alpha (p_2)_\beta \\
&+\Bigg\{ - \left(\sqrt{-g}\,g^{\mu\alpha} g^{\nu\beta}g^{\rho\sigma}\right)^{\rho_2\sigma_2\cdots\rho_n\sigma_n}(k_1)_\lambda \Bigg[ (p_1)_\sigma (\Gamma_{\beta\rho\alpha})^{\lambda\rho_1\sigma_1} - (p_2)_\sigma (\Gamma_{\alpha\rho\beta})^{\lambda\rho_1\sigma_1} \\
& \hspace{150pt}+ (k_1)_\rho (\Gamma_{\sigma\alpha\beta})^{\lambda\rho_1\sigma_1}-(k_1)_\alpha (\Gamma_{\rho\beta\sigma})^{\lambda\rho_1\sigma_1} \Bigg] + \text{permutations} \Bigg\}\\
& + \Bigg\{ -\left(\sqrt{-g}\,g^{\mu\alpha} g^{\nu\beta}g^{\rho\sigma} g^{\lambda\tau} \right)^{\rho_3\sigma_3\cdots\rho_n\sigma_n} \,(k_1)_{\lambda_1} (k_2)_{\lambda_2} \left[ \left(\Gamma_{\rho\alpha\lambda}\right)^{\lambda_1\rho_1\sigma_1} \left(\Gamma_{\sigma\beta\tau}\right)^{\lambda_2\rho_2\sigma_2} \right.\\
& \hspace{70pt}\left. - \left(\Gamma_{\rho\alpha\beta}\right)^{\lambda_1\rho_1\sigma_1} \left(\Gamma_{\sigma\lambda\tau}\right)^{\lambda_2\rho_2\sigma_2} + \left(\Gamma_{\alpha\rho\lambda}\right)^{\lambda_1\rho_1\sigma_1} \left(\Gamma_{\beta\sigma\tau}\right)^{\lambda_2\rho_2\sigma_2} \right] + \text{permutations} \Bigg\} \Bigg].
\end{split}
\end{align}
\end{samepage}
Propagators for ghosts and gravitons are derived by the standard procedure. The ghost propagator is given by the following expression:
\begin{align}
\begin{gathered}
\begin{fmffile}{FR_Ghost_Propagator}
\begin{fmfgraph*}(30,30)
\fmfleft{L}
\fmfright{R}
\fmf{dots}{L,R}
\fmflabel{$\mu$}{L}
\fmflabel{$\nu$}{R}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
\hspace{20pt} = i \, \cfrac{\eta_{\mu\nu}}{k^2}\,.
\end{align}
The graviton propagator contains the gauge fixing parameter $\epsilon$. The propagator corresponds to the part of the microscopic action quadratic in perturbations:
\begin{align}
\int d^4 x \sqrt{-g} \left[ -\cfrac{2}{\kappa^2}\, R + \cfrac{\epsilon}{2\,\kappa^2} ~\mathcal{G}_\mu \mathcal{G}^\mu \right] = \int d^4 x \left[ -\cfrac12\,h^{\mu\nu} \mathcal{D}_{\mu\nu\alpha\beta}(\epsilon)\, \square h^{\alpha\beta} \right] +\korder{1}.
\end{align}
In the momentum representation the operator $\mathcal{D}$ is given in terms of the Nieuwenhuizen operators \cite{VanNieuwenhuizen:1981ae,Accioly:2000nm}
\begin{align}
\mathcal{D}_{\mu\nu\alpha\beta} (\epsilon) = \cfrac{3 \epsilon -8}{4}\, P^0_{\mu\nu\alpha\beta} + \cfrac{\epsilon}{2}\,P^1_{\mu\nu\alpha\beta} + P^2_{\mu\nu\alpha\beta} - \cfrac{\epsilon}{4} ~ \overline{P}^0_{\mu\nu\alpha\beta} - \cfrac{\epsilon}{4} ~ \overline{\overline{P}}^0_{\mu\nu\alpha\beta} \,.
\end{align}
Only $P^0$ and $P^2$ operators are gauge invariant, so the operator is invertible if $\epsilon \not =0$. The inverse operator reads:
\begin{align}
\mathcal{D}^{-1}_{\mu\nu\alpha\beta}(\epsilon) = -\cfrac12\,P^0_{\mu\nu\alpha\beta}+\cfrac{2}{\epsilon}\, P^1_{\mu\nu\alpha\beta} + P^2_{\mu\nu\alpha\beta} - \cfrac{3\,\epsilon -8}{2\,\epsilon}~\overline{P}^0_{\mu\nu\alpha\beta} - \cfrac{1}{2} ~\overline{\overline{P}}^0_{\mu\nu\alpha\beta} .
\end{align}
Therefore, in an arbitrary gauge the graviton propagator is given by the following expression:
\begin{align}
\begin{gathered}
\begin{fmffile}{FR_Graviton_Propagator}
\begin{fmfgraph*}(40,40)
\fmfleft{L}
\fmfright{R}
\fmf{dbl_wiggly}{L,R}
\fmflabel{$\mu\nu$}{L}
\fmflabel{$\alpha\beta$}{R}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
\hspace{20pt} = i ~ \cfrac{ \mathcal{D}^{-1}_{\mu\nu\alpha\beta}(\epsilon) }{k^2} \, .
\end{align}
We will consider the general case within this paper. However, on practical grounds, the simplest choice of the gauge fixing parameter is $\epsilon =2$. With this value of the gauge fixing parameter the operator $\mathcal{D}^{-1}$ takes an extremely simple form:
\begin{align}
\mathcal{D}^{-1}_{\mu\nu\alpha\beta}(2) = \cfrac12\left[ \eta_{\mu\alpha} \eta_{\nu\beta} + \eta_{\mu\beta} \eta_{\nu\alpha} - \eta_{\mu\nu} \eta_{\alpha\beta} \right].
\end{align}
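Combining this expression with the general formula above, the graviton propagator in the $\epsilon=2$ gauge takes the familiar de Donder (harmonic) form:
\begin{align}
i~\cfrac{\mathcal{D}^{-1}_{\mu\nu\alpha\beta}(2)}{k^2} = \cfrac{i}{k^2}~\cfrac12\left[ \eta_{\mu\alpha} \eta_{\nu\beta} + \eta_{\mu\beta} \eta_{\nu\alpha} - \eta_{\mu\nu} \eta_{\alpha\beta} \right].
\end{align}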
\section{FeynGrav v2}\label{section_FG2}
FeynGrav, a package for computing Feynman rules for gravity, has been updated with new features and improvements. The latest version supports the interaction rules presented in the sections above and includes several further improvements described below. The code is publicly available \cite{FeynGrav}.
Firstly, the package structure has been changed. The main file ``FeynGrav.wl'' contains the code providing tools to operate with Feynman rules for gravity. The package is based on FeynCalc and requires it to run \cite{Mertig:1990an,Shtabovenko:2016sxi,Shtabovenko:2020gxv}. The package operates with pre-generated libraries which contain data on the gravitational interaction. The folder ``Rules'' contains realizations of both interaction rules and supplementary functions. The folder ``Libs'' contains files with evaluated expressions for the interaction rules, together with the script ``FeynGravLibrariesGenerator.wl'' which generates those libraries. FeynGrav is distributed with libraries for the gravitational interaction up to order $\korder{3}$. The previous version of FeynGrav generated expressions for interaction vertices on each user call, which negatively affected performance.
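To give the flavor of a working session, a minimal sketch is shown below. It only assumes the file layout described above; the call names follow the command list in Appendix \ref{Command_list}, and the exact signatures shown here are illustrative assumptions rather than definitive syntax.
\begin{verbatim}
(* Minimal FeynGrav session -- a sketch, not definitive syntax. *)
(* FeynGrav is based on FeynCalc, so FeynCalc must be installed. *)
SetDirectory["/path/to/FeynGrav"]; (* folder with FeynGrav.wl and Libs *)
Get["FeynGrav.wl"];                (* load the package and its libraries *)

(* Hypothetical calls; see Appendix for the definitive command list. *)
GravitonPropagator[\[Mu], \[Nu], \[Alpha], \[Beta], k]
GravitonScalarVertex[{\[Mu], \[Nu]}, p1, p2, m]
\end{verbatim}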
Secondly, the package is distributed with an example file ``FeynGrav\_Examples.nb''. The file contains the following examples:
\begin{list}{$\bullet$}{}
\item
Realization of the Nieuwenhuizen operators \cite{VanNieuwenhuizen:1981ae,Accioly:2000nm};
\item
Calculation of the matrix element for on-shell tree-level $2\to 2$ graviton scattering, which agrees with \cite{Sannan:1986tz};
\item
Calculation of various contributions to the graviton self-energy at the one-loop level;
\item
One-loop matter polarization operators induced by gravity;
\item
One-loop vertex function for a graviton-scalar vertex.
\end{list}
Thirdly, the package contains a few supplementary functions that are often used in quantum gravity calculations. These are the propagator for a scalar field \eqref{scalar_propagator}, the propagator for the Proca field (massive vector field) \eqref{Proca_propagator}, and the Nieuwenhuizen operators \cite{VanNieuwenhuizen:1981ae,Accioly:2000nm}. The Nieuwenhuizen operators are a generalization of the standard gauge projectors; they are discussed in many other publications, so we will not discuss them further. It must be noted that the original Nieuwenhuizen operators are defined in $d=4$, where they have a few special features. Within FeynGrav these operators are given in arbitrary $d$. This is done for the sake of consistency, as most of the tools provided by FeynCalc are designed to operate in arbitrary $d$.
The new version of FeynGrav also includes interaction rules for matter with $s=0$, $1/2$, and $1$ of arbitrary mass, as well as interaction rules for $SU(N)$ Yang-Mills theory that are consistent with the realization used within FeynCalc. The complete list of commands for interaction rules is given in Appendix \ref{Command_list}.
Lastly, the gravitational sector of the new FeynGrav version supports the arbitrary gauge fixing parameter present in \eqref{Hilbert_gauge_fixed}. The package is initiated with this parameter unspecified, so it enters all expressions as a constant. At any point, the user is free to fix this parameter and proceed with the calculations. As noted above, from the practical point of view the $\epsilon=2$ gauge is the simplest because the graviton propagator takes a much simpler form.
All other features of FeynGrav remain unchanged from the previous version. They are described in detail in the previous paper \cite{Latosh:2022ydd}, so we will not discuss them further. Instead, we present some calculations that provide a suitable illustration of FeynGrav's applicability.
\subsection{Example of polarization operators}\label{section_physics}
To demonstrate the applicability of FeynGrav, we use it to perform some typical quantum field theory calculations. Let us start with the calculation of various contributions to the graviton self-energy. In the following calculations, we express all loop integrals in terms of the Passarino-Veltman integrals \cite{Passarino:1978jh}. Since the calculations are performed within FeynCalc, we omit all $A_0(0)$ integrals, which vanish in dimensional regularization, but preserve the $A_0(m^2)$ integrals.
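For reference, we assume the standard normalization of these integrals, in which (in dimensional regularization, up to the conventional scale factor $\mu^{4-d}$)
\begin{align}
i\,\pi^2\, A_0(m^2) = \int d^4 k ~ \cfrac{1}{k^2-m^2}\,, \qquad
i\,\pi^2\, B_0(p^2,m_1^2,m_2^2) = \int d^4 k ~ \cfrac{1}{\left(k^2-m_1^2\right)\left((k+p)^2-m_2^2\right)}\,;
\end{align}
this explains the explicit $i\,\pi^2$ prefactors in the results below.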
Graviton polarization operator induced by a single scalar field:
\begin{align}
\begin{split}
i\, \Pi_{\mu\nu\alpha\beta}^{s=0,m_\text{s}} (p) =& \hspace{20pt}
\begin{gathered}
\begin{fmffile}{Loop_Scalar_1}
\begin{fmfgraph*}(30,40)
\fmfleft{L}
\fmfright{R}
\fmftop{T}
\fmflabel{$\mu\nu$}{L}
\fmflabel{$\alpha\beta$}{R}
\fmf{dbl_wiggly}{L,V,R}
\fmf{phantom}{V,T}
\fmffreeze
\fmf{dashes,right=1}{V,T,V}
\fmfdot{V}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
\hspace{20pt}+\hspace{20pt}
\begin{gathered}
\begin{fmffile}{Loop_Scalar_2}
\begin{fmfgraph*}(30,30)
\fmfleft{L}
\fmfright{R}
\fmflabel{$\mu\nu$}{L}
\fmflabel{$\alpha\beta$}{R}
\fmf{dbl_wiggly,tension=2}{L,VL}
\fmf{dbl_wiggly,tension=2}{VR,R}
\fmf{dashes,right=1,tension=.5}{VL,VR,VL}
\fmfdot{VL,VR}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}\\
=& \kappa^2 ~ i \, \pi^2 B_0(p^2,m_\text{s}^2,m_\text{s}^2) \left[\cfrac{1}{12} \left(p^2+2\,m_\text{s}^2\right)^2 P^0_{\mu\nu\alpha\beta} +\cfrac{1}{120} \left(p^2-4\,m_\text{s}^2\right)^2 P^2_{\mu\nu\alpha\beta} \right] \\
& -\kappa^2 ~ i\,\pi^2 A_0(m_\text{s}^2) \left[\cfrac{1}{6} \left( p^2 + 2\,m_\text{s}^2 \right) P^0_{\mu\nu\alpha\beta} + \cfrac{1}{60} \left( p^2 + 8\,m_\text{s}^2 \right) P^2_{\mu\nu\alpha\beta} \right].
\end{split}
\end{align}
Graviton polarization operator induced by a single Dirac field:
\begin{align}
\begin{split}
& i\, \Pi^{s=1/2,m_\text{f}}_{\mu\nu\alpha\beta}(p)= \hspace{20pt}
\begin{gathered}
\begin{fmffile}{Loop_Fermion_1}
\begin{fmfgraph*}(30,30)
\fmfleft{L}
\fmfright{R}
\fmftop{T}
\fmflabel{$\mu\nu$}{L}
\fmflabel{$\alpha\beta$}{R}
\fmf{dbl_wiggly}{L,V,R}
\fmf{phantom}{V,T}
\fmffreeze
\fmf{fermion,right=1}{V,T,V}
\fmfdot{V}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
\hspace{20pt}+\hspace{20pt}
\begin{gathered}
\begin{fmffile}{Loop_Fermion_2}
\begin{fmfgraph*}(30,30)
\fmfleft{L}
\fmfright{R}
\fmflabel{$\mu\nu$}{L}
\fmflabel{$\alpha\beta$}{R}
\fmf{dbl_wiggly,tension=2}{L,VL}
\fmf{dbl_wiggly,tension=2}{VR,R}
\fmf{fermion,right=1,tension=.5}{VL,VR,VL}
\fmfdot{VL,VR}
\end{fmfgraph*}
\end{fmffile}
\end{gathered} \\
&= \kappa^2\, i \,\pi^2\, B_0(p^2,m_\text{f}^2,m_\text{f}^2)\,(p^2- 4\,m_\text{f}^2)\, \left[ \cfrac16\,m_\text{f}^2\, P^0_{\mu\nu\alpha\beta}+\cfrac{1}{120}\, (3\,p^2+8\,m_\text{f}^2)\, P^2_{\mu\nu\alpha\beta} \right]\\
&\hspace{20pt} +\kappa^2\,i\,\pi^2\, A_0(m_\text{f}^2) \left[ \cfrac{19}{24}\,m_\text{f}^2\, P^0_{\mu\nu\alpha\beta} +\cfrac{1}{8}\,m_\text{f}^2 P^1_{\mu\nu\alpha\beta} -\cfrac{1}{120}\,(6\,p^2-47\,m_\text{f}^2) \,P^2_{\mu\nu\alpha\beta} + \cfrac{m_\text{f}^2}{8}\, \overline{P}^0_{\mu\nu\alpha\beta} \right].
\end{split}
\end{align}
Graviton polarization operator induced by a single Proca field:
\begin{align}
\begin{split}
&i\, \Pi^{s=1,m_\text{v}\not=0}_{\mu\nu\alpha\beta} = \hspace{20pt}
\begin{gathered}
\begin{fmffile}{Loop_Proca_1}
\begin{fmfgraph*}(30,30)
\fmfleft{L}
\fmfright{R}
\fmftop{T}
\fmflabel{$\mu\nu$}{L}
\fmflabel{$\alpha\beta$}{R}
\fmf{dbl_wiggly}{L,V,R}
\fmf{phantom}{V,T}
\fmffreeze
\fmf{photon,right=1}{V,T,V}
\fmfdot{V}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
\hspace{20pt}+\hspace{20pt}
\begin{gathered}
\begin{fmffile}{Loop_Proca_2}
\begin{fmfgraph*}(30,30)
\fmfleft{L}
\fmfright{R}
\fmflabel{$\mu\nu$}{L}
\fmflabel{$\alpha\beta$}{R}
\fmf{dbl_wiggly,tension=2}{L,VL}
\fmf{dbl_wiggly,tension=2}{VR,R}
\fmf{photon,right=1,tension=.5}{VL,VR,VL}
\fmfdot{VL,VR}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
\\
&= \kappa^2 ~i\,\pi^2\,B_0(p^2,m_\text{v}^2,m_\text{v}^2) \left[ \cfrac{1}{12}\left( p^4 -4 \,m_\text{v}^2\,p^2 +12\,m_\text{v}^4 \right)\,P^0_{\mu\nu\alpha\beta} +\cfrac{1}{120}\left( 13\,p^4 +56\,m_\text{v}^2\,p^2+48\,m_\text{v}^4 \right)\, P^2_{\mu\nu\alpha\beta} \right] \\
& \hspace{10pt} +\kappa^2~i\,\pi^2\, A_0(m_\text{v}^2) \left[ -\cfrac16\,(p^2 + 3 \,m_\text{v}^2) P^0_{\mu\nu\alpha\beta} - \cfrac{m_\text{v}^2}{4}\,P^1_{\mu\nu\alpha\beta} -\cfrac{13}{60}\,(p^2+3\,m_\text{v}^2)\,P^2_{\mu\nu\alpha\beta}+\cfrac{m_\text{v}^2}{4} ~ \overline{\overline{P}}^0_{\mu\nu\alpha\beta} \right].
\end{split}
\end{align}
Graviton polarization operator induced by a single massless vector field ($\epsilon_\text{V}=-1$):
\begin{align}
\begin{split}
i\, \Pi^{s=1,m=0}_{\mu\nu\alpha\beta} =& \hspace{20pt}
\begin{gathered}
\begin{fmffile}{Loop_Maxwell_1}
\begin{fmfgraph*}(30,30)
\fmfleft{L}
\fmfright{R}
\fmftop{T}
\fmflabel{$\mu\nu$}{L}
\fmflabel{$\alpha\beta$}{R}
\fmf{dbl_wiggly}{L,V,R}
\fmf{phantom}{V,T}
\fmffreeze
\fmf{photon,right=1}{V,T,V}
\fmfdot{V}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
\hspace{20pt}+\hspace{20pt}
\begin{gathered}
\begin{fmffile}{Loop_Maxwell_2}
\begin{fmfgraph*}(30,30)
\fmfleft{L}
\fmfright{R}
\fmftop{T}
\fmflabel{$\mu\nu$}{L}
\fmflabel{$\alpha\beta$}{R}
\fmf{dbl_wiggly}{L,V,R}
\fmf{phantom}{V,T}
\fmffreeze
\fmf{dots,right=1}{V,T,V}
\fmfdot{V}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
\hspace{20pt}+\hspace{20pt}
\begin{gathered}
\begin{fmffile}{Loop_Maxwell_3}
\begin{fmfgraph*}(30,30)
\fmfleft{L}
\fmfright{R}
\fmflabel{$\mu\nu$}{L}
\fmflabel{$\alpha\beta$}{R}
\fmf{dbl_wiggly,tension=2}{L,VL}
\fmf{dbl_wiggly,tension=2}{VR,R}
\fmf{photon,right=1,tension=.5}{VL,VR,VL}
\fmfdot{VL,VR}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
\hspace{20pt}+\hspace{20pt}
\begin{gathered}
\begin{fmffile}{Loop_Maxwell_4}
\begin{fmfgraph*}(30,30)
\fmfleft{L}
\fmfright{R}
\fmflabel{$\mu\nu$}{L}
\fmflabel{$\alpha\beta$}{R}
\fmf{dbl_wiggly,tension=2}{L,VL}
\fmf{dbl_wiggly,tension=2}{VR,R}
\fmf{dots,right=1,tension=.5}{VL,VR,VL}
\fmfdot{VL,VR}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
\\
= & \kappa^2 ~i\,\pi^2\,B_0(p^2,0 ,0) \, p^4\,\left[ \frac{1}{4}\,P^0_{\mu\nu\alpha\beta} +\cfrac{1}{8}\, P^2_{\mu\nu\alpha\beta} \right].
\end{split}
\end{align}
Graviton polarization operator induced by $SU(N)$ Yang-Mills matter ($\epsilon_\text{SU(N)YM}=-1$):
\begin{align}
\begin{split}
i\, \Pi^{SU(N)\text{ Yang-Mills}}_{\mu\nu\alpha\beta} =& \hspace{20pt}
\begin{gathered}
\begin{fmffile}{Loop_SUNYM_1}
\begin{fmfgraph*}(30,30)
\fmfleft{L}
\fmfright{R}
\fmftop{T}
\fmflabel{$\mu\nu$}{L}
\fmflabel{$\alpha\beta$}{R}
\fmf{dbl_wiggly}{L,V,R}
\fmf{phantom}{V,T}
\fmffreeze
\fmf{gluon,right=.5}{V,T,V}
\fmfdot{V}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
\hspace{20pt}+\hspace{20pt}
\begin{gathered}
\begin{fmffile}{Loop_SUNYM_2}
\begin{fmfgraph*}(30,30)
\fmfleft{L}
\fmfright{R}
\fmftop{T}
\fmflabel{$\mu\nu$}{L}
\fmflabel{$\alpha\beta$}{R}
\fmf{dbl_wiggly}{L,V,R}
\fmf{phantom}{V,T}
\fmffreeze
\fmf{dots,right=1}{V,T,V}
\fmfdot{V}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
\hspace{20pt}+\hspace{20pt}
\begin{gathered}
\begin{fmffile}{Loop_SUNYM_3}
\begin{fmfgraph*}(30,30)
\fmfleft{L}
\fmfright{R}
\fmftop{T}
\fmflabel{$\mu\nu$}{L}
\fmflabel{$\alpha\beta$}{R}
\fmf{dbl_wiggly}{L,V,R}
\fmf{phantom}{V,T}
\fmffreeze
\fmf{fermion,right=1}{V,T,V}
\fmfdot{V}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
\\
& + \hspace{20pt}
\begin{gathered}
\begin{fmffile}{Loop_SUNYM_4}
\begin{fmfgraph*}(30,30)
\fmfleft{L}
\fmfright{R}
\fmflabel{$\mu\nu$}{L}
\fmflabel{$\alpha\beta$}{R}
\fmf{dbl_wiggly,tension=2}{L,VL}
\fmf{dbl_wiggly,tension=2}{VR,R}
\fmf{gluon,right=.5,tension=.5}{VL,VR,VL}
\fmfdot{VL,VR}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
\hspace{20pt} + \hspace{20pt}
\begin{gathered}
\begin{fmffile}{Loop_SUNYM_5}
\begin{fmfgraph*}(30,30)
\fmfleft{L}
\fmfright{R}
\fmflabel{$\mu\nu$}{L}
\fmflabel{$\alpha\beta$}{R}
\fmf{dbl_wiggly,tension=2}{L,VL}
\fmf{dbl_wiggly,tension=2}{VR,R}
\fmf{dots,right=1,tension=.5}{VL,VR,VL}
\fmfdot{VL,VR}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
\hspace{20pt} + \hspace{20pt}
\begin{gathered}
\begin{fmffile}{Loop_SUNYM_6}
\begin{fmfgraph*}(30,30)
\fmfleft{L}
\fmfright{R}
\fmflabel{$\mu\nu$}{L}
\fmflabel{$\alpha\beta$}{R}
\fmf{dbl_wiggly,tension=2}{L,VL}
\fmf{dbl_wiggly,tension=2}{VR,R}
\fmf{fermion,right=1,tension=.5}{VL,VR,VL}
\fmfdot{VL,VR}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
\\
=& \kappa^2 ~i\,\pi^2\,B_0(p^2,0 ,0)\,p^4 \,\left[\cfrac{N^2-1}{4} \,P^0_{\mu\nu\alpha\beta} +\left( \cfrac{N^2-1}{8} - \cfrac{N}{40} \right) P^2_{\mu\nu\alpha\beta} \right].
\end{split}
\end{align}
Here $N$ is the number of colors.
In the same way, the graviton self-energy is calculated:
\begin{align}
\begin{split}
i\,\Pi_{\mu\nu\alpha\beta}^{s=2,m=0}&= \hspace{20pt}
\begin{gathered}
\begin{fmffile}{Loop_Gravity_0}
\begin{fmfgraph*}(40,40)
\fmfleft{L}
\fmfright{R}
\fmftop{T}
\fmf{dbl_wiggly}{L,V,R}
\fmf{phantom}{V,T}
\fmffreeze
\fmf{dbl_wiggly,right=1}{V,T,V}
\fmfdot{V}
\fmflabel{$\mu\nu$}{L}
\fmflabel{$\alpha\beta$}{R}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
\hspace{20pt} + \hspace{20pt}
\begin{gathered}
\begin{fmffile}{Loop_Gravity_1}
\begin{fmfgraph*}(40,40)
\fmfleft{L}
\fmfright{R}
\fmf{dbl_wiggly}{L,VL}
\fmf{dbl_wiggly}{VR,R}
\fmf{dbl_wiggly,right=1,tension=.3}{VL,VR,VL}
\fmflabel{$\mu\nu$}{L}
\fmflabel{$\alpha\beta$}{R}
\fmfdot{VL,VR}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
\hspace{20pt} + \hspace{20pt}
\begin{gathered}
\begin{fmffile}{Loop_Gravity_2}
\begin{fmfgraph*}(40,40)
\fmfleft{L}
\fmfright{R}
\fmf{dbl_wiggly}{L,VL}
\fmf{dbl_wiggly}{VR,R}
\fmf{dots,right=1,tension=.3}{VL,VR,VL}
\fmflabel{$\mu\nu$}{L}
\fmflabel{$\alpha\beta$}{R}
\fmfdot{VL,VR}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
\\
&=\kappa^2\,i\,\pi^2\,B_0(p^2,0,0)\,p^4\,\Bigg[\cfrac{13}{12}\,P^1_{\mu\nu\alpha\beta} - \cfrac{11}{20}\,P^2_{\mu\nu\alpha\beta} +\cfrac{7}{8}\,P^0_{\mu\nu\alpha\beta} +\cfrac{17}{8}\,\overline{P}^0_{\mu\nu\alpha\beta} -\cfrac{11}{24}\,\overline{\overline{P}}^0_{\mu\nu\alpha\beta} \Bigg].
\end{split}
\end{align}
Gravitational contributions to matter propagators are given by the following expressions, evaluated in $d=4$ with the gauge fixing parameter $\epsilon=2$. The polarization operator for a scalar field reads:
\begin{align}
\begin{split}
i\,\Pi^{s=0,m_\text{s}} = &\hspace{10pt}
\begin{gathered}
\begin{fmffile}{Loop_Scalar_3}
\begin{fmfgraph*}(30,30)
\fmfleft{L}
\fmfright{R}
\fmftop{T}
\fmf{dashes}{L,V,R}
\fmf{phantom}{V,T}
\fmffreeze
\fmf{dbl_wiggly,right=1}{V,T,V}
\fmfdot{V}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
\hspace{5pt} + \hspace{5pt}
\begin{gathered}
\begin{fmffile}{Loop_Scalar_4}
\begin{fmfgraph*}(30,30)
\fmfleft{L}
\fmfright{R}
\fmf{dashes,tension=2}{L,VL}
\fmf{dashes,tension=2}{VR,R}
\fmf{dashes,tension=.1}{VL,VR}
\fmf{dbl_wiggly,left=1}{VL,VR}
\fmfdot{VL,VR}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
\\
=& -\cfrac{1}{2}\,m_\text{s}^2\,\kappa^2\,i\,\pi^2\,A_0(m_\text{s}^2) + m_\text{s}^2\,\left(p^2 - \cfrac{m_\text{s}^2}{2}\right)\,\kappa^2\,i\,\pi^2\, B_0(p^2,0,m_\text{s}^2).
\end{split}
\end{align}
Polarization operator for a Dirac field:
\begin{align}
\begin{split}
i\,\Pi^{s=1/2,m_\text{f}} =& \hspace{10pt}
\begin{gathered}
\begin{fmffile}{Loop_Fermion_3}
\begin{fmfgraph*}(40,40)
\fmfleft{L}
\fmfright{R}
\fmftop{T}
\fmf{fermion}{L,V,R}
\fmf{phantom}{V,T}
\fmffreeze
\fmf{dbl_wiggly,right=1}{V,T,V}
\fmfdot{V}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
\hspace{5pt} + \hspace{5pt}
\begin{gathered}
\begin{fmffile}{Loop_Fermion_4}
\begin{fmfgraph*}(40,40)
\fmfleft{L}
\fmfright{R}
\fmf{fermion,tension=2}{L,VL}
\fmf{fermion,tension=2}{VR,R}
\fmf{fermion}{VL,VR}
\fmffreeze
\fmf{dbl_wiggly,left=1}{VL,VR}
\fmfdot{VL,VR}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
\\
=& \cfrac{1}{8}\,\left[ \left(1 + 2\,\cfrac{m_\text{f}^2}{p^2}\right) \widehat{p} - 3\,m_\text{f}\right]\,\kappa^2\,i\,\pi^2\,A_0(m_\text{f}^2)\\
& + \cfrac{1}{8}\,\left[ -\left( 1 - \cfrac{m_\text{f}^2}{p^2} + 2 \, \cfrac{m_\text{f}^4}{p^4} \right)\,p^2\,\widehat{p} + m_\text{f} \left(p^2 + m_\text{f}^2\right) \right]\,\kappa^2\,i\,\pi^2\, B_0(p^2,0,m_\text{f}^2).
\end{split}
\end{align}
Polarization operator for a Proca field:
\begin{align}
\begin{split}
i\,\Pi^{s=1,m_\text{v}\not =0} =& \hspace{20pt}
\begin{gathered}
\begin{fmffile}{Loop_Proca_3}
\begin{fmfgraph*}(30,30)
\fmfleft{L}
\fmfright{R}
\fmftop{T}
\fmf{photon}{L,V,R}
\fmf{phantom}{V,T}
\fmffreeze
\fmf{dbl_wiggly,right=1}{V,T,V}
\fmfdot{V}
\fmflabel{$\mu$}{L}
\fmflabel{$\nu$}{R}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
\hspace{20pt} + \hspace{20pt}
\begin{gathered}
\begin{fmffile}{Loop_Proca_4}
\begin{fmfgraph*}(30,30)
\fmfleft{L}
\fmfright{R}
\fmf{photon,tension=2}{L,VL}
\fmf{photon,tension=2}{VR,R}
\fmf{photon,tension=.1}{VL,VR}
\fmf{dbl_wiggly,left=1}{VL,VR}
\fmfdot{VL,VR}
\fmflabel{$\mu$}{L}
\fmflabel{$\nu$}{R}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
\\
=& \cfrac{1}{6}\,\Big\{ \left(p^2 - 8\, m_\text{v}^2\right) \,\kappa^2\,i\,\pi^2\,A_0(m_\text{v}^2) - \left(p^4 + p^2\,m_\text{v}^2 + m_\text{v}^4\right)\,\kappa^2\,i\,\pi^2\,B_0(p^2,0,m_\text{v}^2) \Big\}\theta_{\mu\nu}(p)\\
& - \cfrac32\,m_\text{v}^4\,\kappa^2\,i\,\pi^2\,B_0(p^2,0,m_\text{v}^2) \, \omega_{\mu\nu}(p).
\end{split}
\end{align}
Here the following definitions of gauge projectors are used:
\begin{align}
\theta_{\mu\nu} (p) & = \eta_{\mu\nu} - \cfrac{p_\mu p_\nu}{p^2} \, , & \omega_{\mu\nu}(p) &= \cfrac{p_\mu p_\nu}{p^2}\,.
\end{align}
The polarization operator for a massless vector field reads:
\begin{align}
\begin{split}
\Pi^{s=1,m=0}_{\mu\nu} =& \hspace{20pt}
\begin{gathered}
\begin{fmffile}{Loop_Maxwell_5}
\begin{fmfgraph*}(40,50)
\fmfleft{L}
\fmfright{R}
\fmftop{T}
\fmf{photon}{L,V,R}
\fmf{phantom}{V,T}
\fmffreeze
\fmf{dbl_wiggly,right=1}{V,T,V}
\fmfdot{V}
\fmflabel{$\mu$}{L}
\fmflabel{$\nu$}{R}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
\hspace{20pt} + \hspace{20pt}
\begin{gathered}
\begin{fmffile}{Loop_Maxwell_6}
\begin{fmfgraph*}(40,40)
\fmfleft{L}
\fmfright{R}
\fmf{photon,tension=2}{L,VL}
\fmf{photon,tension=2}{VR,R}
\fmf{photon,tension=.1}{VL,VR}
\fmf{dbl_wiggly,left=1}{VL,VR}
\fmfdot{VL,VR}
\fmflabel{$\mu$}{L}
\fmflabel{$\nu$}{R}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
\\
=& -\cfrac{1}{6} \, \kappa^2\,i\,\pi^2\,B_0(p^2,0,0)\,p^4\,\theta_{\mu\nu}(p) - \cfrac{5}{2} \,\kappa^2\,i\,\pi^2\,B_0(p^2,0,0)\,p^4\,\omega_{\mu\nu}(p).
\end{split}
\end{align}
Lastly, for the $SU(N)$ Yang-Mills theory there are polarization operators for gluons:
\begin{align}
\begin{split}
i\,\Pi_{\mu\nu}^\text{Gluon}=& \hspace{20pt}
\begin{gathered}
\begin{fmffile}{Loop_SUNYM_7}
\begin{fmfgraph*}(40,50)
\fmfleft{L}
\fmfright{R}
\fmftop{T}
\fmf{gluon}{L,V,R}
\fmf{phantom}{V,T}
\fmffreeze
\fmf{dbl_wiggly,right=1}{V,T,V}
\fmfdot{V}
\fmflabel{$\mu$}{L}
\fmflabel{$\nu$}{R}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
\hspace{20pt} + \hspace{20pt}
\begin{gathered}
\begin{fmffile}{Loop_SUNYM_8}
\begin{fmfgraph*}(40,40)
\fmfleft{L}
\fmfright{R}
\fmf{gluon,tension=2}{L,VL}
\fmf{gluon,tension=2}{VR,R}
\fmf{gluon,tension=.1}{VL,VR}
\fmf{dbl_wiggly,left=1}{VL,VR}
\fmfdot{VL,VR}
\fmflabel{$\mu$}{L}
\fmflabel{$\nu$}{R}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}\\
=&-\cfrac{1}{6}\, \kappa^2\,i\,\pi^2\,B_0(p^2,0,0)\,\delta^{ab}\, p^4 \,\left[ \theta_{\mu\nu}(p) + 15\, \omega_{\mu\nu}(p) \right];
\end{split}
\end{align}
and polarization operators for quarks:
\begin{align}
\begin{split}
i\,\Pi_{\mu\nu}^\text{Quark} = & \hspace{10pt}
\begin{gathered}
\begin{fmffile}{Loop_SUNYM_9}
\begin{fmfgraph*}(40,50)
\fmfleft{L}
\fmfright{R}
\fmftop{T}
\fmf{fermion}{L,V,R}
\fmf{phantom}{V,T}
\fmffreeze
\fmf{dbl_wiggly,right=1}{V,T,V}
\fmfdot{V}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}
\hspace{5pt} + \hspace{5pt}
\begin{gathered}
\begin{fmffile}{Loop_SUNYM_10}
\begin{fmfgraph*}(40,40)
\fmfleft{L}
\fmfright{R}
\fmf{fermion}{L,VL}
\fmf{fermion}{VR,R}
\fmf{fermion}{VL,VR}
\fmffreeze
\fmf{dbl_wiggly,left=1}{VL,VR}
\fmfdot{VL,VR}
\end{fmfgraph*}
\end{fmffile}
\end{gathered}\\
=& -\cfrac{1}{8}\, \kappa^2\,i\,\pi^2\,B_0(p^2,0,m^2)\, p^2\, \widehat{p}.
\end{split}
\end{align}
\subsection{Example of a vertex operator}
Let us briefly consider another example of calculations that can be performed within FeynGrav. For the sake of illustration, we can address a one-loop scalar-graviton vertex function:
\begin{align}
\nonumber \\
i\,\Gamma_{\mu\nu}(k,p_1,p_2)= \hspace{20pt}
\begin{gathered}
\begin{fmffile}{Vertex}
\begin{fmfgraph*}(40,40)
\fmfleft{i}
\fmfright{o1,o2}
\fmf{dbl_wiggly}{i,v}
\fmf{dashes}{o1,v}
\fmf{dashes}{v,o2}
\fmfv{decor.shape=circle,decor.filled=shaded,decor.size=20}{v}
\fmflabel{$\mu\nu$}{i}
\fmflabel{$p_1$}{o1}
\fmflabel{$p_2$}{o2}
\end{fmfgraph*}
\end{fmffile}
\end{gathered} \hspace{10pt}
=
\begin{gathered}
\begin{fmffile}{Vertex_1}
\begin{fmfgraph}(40,40)
\fmfleft{i}
\fmfright{o1,o2}
\fmf{dbl_wiggly,tension=2}{i,v}
\fmf{dashes,tension=0.7}{o1,v}
\fmf{dashes,tension=0.7}{v,o2}
\fmffreeze
\fmf{phantom,tension=2}{o1,v1}
\fmf{phantom,tension=2}{o2,v2}
\fmf{phantom}{v1,v,v2}
\fmffreeze
\fmf{dbl_wiggly}{v1,v2}
\fmfdot{v1,v2,v}
\end{fmfgraph}
\end{fmffile}
\end{gathered}
+
\begin{gathered}
\begin{fmffile}{Vertex_2}
\begin{fmfgraph}(40,40)
\fmfleft{i}
\fmfright{o1,o2}
\fmf{dbl_wiggly,tension=2}{i,v}
\fmf{phantom,tension=0.7}{o1,v}
\fmf{phantom,tension=0.7}{v,o2}
\fmffreeze
\fmf{dashes,tension=2}{o1,v1}
\fmf{dashes,tension=2}{o2,v2}
\fmf{dbl_wiggly}{v1,v,v2}
\fmffreeze
\fmf{dashes}{v1,v2}
\fmfdot{v1,v2,v}
\end{fmfgraph}
\end{fmffile}
\end{gathered}
+
\begin{gathered}
\begin{fmffile}{Vertex_3}
\begin{fmfgraph}(40,40)
\fmfleft{i}
\fmfright{o1,o2}
\fmf{dbl_wiggly,tension=2}{i,v}
\fmf{dashes,tension=0.7}{o1,v}
\fmf{dashes,tension=0.7}{v,o2}
\fmffreeze
\fmf{phantom,tension=2}{o1,v1}
\fmf{phantom}{v1,v}
\fmfdot{v,v1}
\fmffreeze
\fmf{dbl_wiggly,right=0.7}{v1,v}
\end{fmfgraph}
\end{fmffile}
\end{gathered}
+
\begin{gathered}
\begin{fmffile}{Vertex_4}
\begin{fmfgraph}(40,40)
\fmfleft{i}
\fmfright{o1,o2}
\fmf{dbl_wiggly,tension=2}{i,v}
\fmf{dashes,tension=0.7}{o1,v}
\fmf{dashes,tension=0.7}{v,o2}
\fmffreeze
\fmf{phantom,tension=2}{o2,v1}
\fmf{phantom}{v1,v}
\fmfdot{v,v1}
\fmffreeze
\fmf{dbl_wiggly,left=0.7}{v1,v}
\end{fmfgraph}
\end{fmffile}
\end{gathered}
+
\begin{gathered}
\begin{fmffile}{Vertex_5}
\begin{fmfgraph}(40,40)
\fmfleft{i}
\fmfright{o1,p,o2}
\fmf{dbl_wiggly,tension=2}{i,v}
\fmf{dashes,tension=0.5,left=0.5}{o1,v}
\fmf{dashes,tension=0.5,right=0.5}{o2,v}
\fmffreeze
\fmf{phantom,tension=2}{p,p1}
\fmf{dbl_wiggly,right=1}{v,p1,v}
\fmfdot{v}
\end{fmfgraph}
\end{fmffile}
\end{gathered} . \\ \nonumber
\end{align}
A detailed discussion of this function lies far beyond the scope of this paper and will be presented elsewhere. Here we only consider a very specific limit of this function that was already studied in \cite{Donoghue:1994dn} (see also \cite{Burgess:2003jk,Vanhove:2021zel,Bjerrum-Bohr:2022blt} for a detailed discussion), namely the case when both scalars are placed on the mass shell, $p_1^2=p_2^2=m^2$, and the graviton four-momentum has only small spatial components, $k^2 = - (\vec{k})^2$. This setup allows one to recover the classical limit of the theory.
The example file ``FeynGrav\_Examples.nb'' contains expressions calculating all the amplitudes given above. Using FeynCalc tools we separate the Passarino-Veltman integrals and keep the terms relevant in the $\abs{\vec{k}}\to 0$ limit:
\begin{align}
i\,\Gamma_{\mu\nu} (k,p_1,p_2) \to i\,\pi^2\,\kappa^3\,(p_1+p_2)_\mu (p_1+p_2)_\nu \left[ \cfrac{101}{96} \,\ln k^2 + \cfrac{\pi^2}{32} \, \cfrac{m}{k} \right] .
\end{align}
This expression is in agreement with the previous studies in \cite{Donoghue:1994dn,Vanhove:2021zel,Bjerrum-Bohr:2022blt}, where the leading-order contributions are non-analytic functions that correspond to power-law corrections to the Newtonian potential.
\subsection{Example of a tree-level scattering amplitude}
Lastly, we want to briefly touch upon the implementation of FeynGrav for scattering amplitudes. In full analogy with the previous case, a more detailed discussion of scattering amplitudes lies far beyond the scope of this paper. Because of this, we only consider a single tree-level scattering amplitude for two scalars of different masses.
\begin{align}
i \, \mathcal{M} (p_1,p_2,p_3,p_4,m_1,m_2) = \hspace{40pt}
\begin{gathered}
\begin{fmffile}{Scalar_Tree_Scattering}
\begin{fmfgraph*}(50,50)
\fmftop{t1,t2}
\fmfbottom{b1,b2}
\fmf{dashes}{t1,v1,b1}
\fmf{dashes}{t2,v2,b2}
\fmf{dbl_wiggly}{v1,v2}
\fmfdot{v1,v2}
\fmflabel{$p_1,m_1$}{b1}
\fmflabel{$p_2,m_2$}{b2}
\fmflabel{$p_3,m_1$}{t1}
\fmflabel{$p_4,m_2$}{t2}
\end{fmfgraph*}
\end{fmffile}
\hspace{40pt} .
\end{gathered}
\end{align}
It is more convenient to express this amplitude in terms of the Mandelstam variables \cite{Mandelstam:1958xc}:
\begin{align}
s &= (p_1+p_2)^2 \,, & t &= (p_1 + p_3)^2 \,, & u &= (p_1+p_4)^2 .
\end{align}
The scattering amplitude reads:
\begin{align}
i\,\mathcal{M} = -i\, \cfrac{ \kappa^2}{4\,t} ~ \Big( u^2 + t\, u - (t +2\,u)(m_1^2 + m_2^2) + m_1^4 + m_2^4 \Big).
\end{align}
In full analogy with the previous case, it is convenient to consider a quasi-static limit:
\begin{align}
s &= (m_1+m_2)^2\,, & t &= -\left(\vec{p}\right)^2 \to 0 \,.
\end{align}
This limit recovers the part of the amplitude that is leading in the weak-interaction limit, which reads
\begin{align}
\mathcal{M} \sim \cfrac{\kappa^2}{2} \, \cfrac{m_1^2\,m_2^2}{p^2}\,.
\end{align}
In full agreement with \cite{Donoghue:1994dn,Burgess:2003jk,Vanhove:2021zel,Bjerrum-Bohr:2022blt} the recovered contribution corresponds to the leading-order term in the Newtonian potential.
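As an independent cross-check, this quasi-static limit can be reproduced with a few lines of computer algebra. The sketch below is ours and uses SymPy rather than FeynCalc; it inserts the Mandelstam relation $s+t+u=2m_1^2+2m_2^2$ into the amplitude above and extracts the coefficient of the $1/p^2$ pole:
\begin{verbatim}
import sympy as sp

kappa, m1, m2, p = sp.symbols('kappa m1 m2 p', positive=True)
t = -p**2                       # quasi-static limit: t = -(vec p)^2
s = (m1 + m2)**2
u = 2*m1**2 + 2*m2**2 - s - t   # Mandelstam relation

# tree-level amplitude (overall factor of i dropped)
M = -kappa**2/(4*t)*(u**2 + t*u - (t + 2*u)*(m1**2 + m2**2)
                     + m1**4 + m2**4)

# coefficient of the 1/p^2 pole; prints kappa**2*m1**2*m2**2/2
print(sp.simplify(sp.limit(p**2*M, p, 0)))
\end{verbatim}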
\section{Conclusions}\label{section_conclusions}
In this paper, we present the latest developments of FeynGrav, which offers a simple and efficient way to derive Feynman rules for gravity. Building on our previous work in \cite{Latosh:2022ydd}, where we derived the Feynman rules for gravitational interaction with massless matter, we extend the formalism to cover matter with arbitrary mass. We also revisit the implementation of the Faddeev-Popov prescription within the formalism and derive the corresponding rules for the Faddeev-Popov ghosts present in the theory of a single massless vector field. Additionally, we apply the formalism to the $SU(N)$ Yang-Mills model and obtain all the required interaction rules. These interaction rules are sufficient for calculating gravitational corrections to standard model processes, which opens up new opportunities to search for relevant gravitational effects within the standard model.
The explicit examples of tree- and loop-level calculations performed with FeynGrav demonstrate the usefulness of the presented rules, and the potential for further applications of FeynGrav to scattering amplitudes is promising. The contemporary methods of scattering amplitude calculations are well-developed for on-shell amplitudes \cite{Elvang:2013cua,Cheung:2017pzi,Travaglini:2022uwo}. FeynGrav provides a way to calculate off-shell scattering amplitudes, which is an important step toward studying higher-order effects in gravitational interactions.
Future developments of FeynGrav will focus on several directions. First, we plan to implement non-minimal interactions, particularly non-minimal interactions with scalar fields \cite{Horndeski:1974wa,Horndeski:1976gi,Kobayashi:2011nu}. This will allow us to study their influence on quantum gravity behavior. Second, we aim to improve the performance of the package, as quantum gravitational calculations are notoriously complicated due to the large number of terms and Lorentz indices involved. We plan to explore techniques such as parallel computation to increase FeynGrav's performance. Lastly, we intend to extend the formalism to supersymmetric models, which will provide another effective tool to operate with supergravity scattering amplitudes.
\section*{Acknowledgment}
The work was supported by the Foundation for the Advancement of Theoretical Physics and Mathematics “BASIS”.
\bibliographystyle{unsrturl}
|
{
"arxiv_id": "2302.14349",
"language": "en",
"timestamp": "2023-03-01T02:10:35",
"url": "https://arxiv.org/abs/2302.14349",
"yymm": "2302"
} | \section{Introduction}
Quantum key distribution (QKD)~\cite{bennett2014quantum, ekert1991quantum} enables remote two parties to share secret keys protected from eavesdropping by the laws of physics. In the past forty years, QKD has
achieved rapid development in terms of secret key rates~\cite{scarani2009security,xu2020secure,pirandola2020advances,Grunenfelder2022Fast}, transmission distance~\cite{gobby2004quantum,
frohlich2017long,boaron2018secure} and network deployment~\cite{Peev_2009,sasaki2011field,dynes2019cambridge,chen2021integrated}.
Although the security of QKD has been proven in theory, the imperfections of realistic devices lead to various security loopholes~\cite{zhao2008quantum,lydersen2010hacking,tang2013source}, especially in detection~\cite{lydersen2010hacking}.
Fortunately, measurement-device-independent (MDI) QKD was proposed~\cite{lo2012measurement}; it introduces an untrusted intermediate node that performs two-photon Bell state measurements, thus solving all security issues on the detection side~\cite{braunstein2012side}. Extensive work has demonstrated the potential of MDI-QKD, including experimental breakthroughs~\cite{Liu2013Experimental,rubenok2013real,experiment2014mdi,yin2016measurement,comandar2016quantum,yin2017experimental,woodward2021gigahertz}, on-chip implementations~\cite{semenenko2020chip,zheng2021heterogeneously,wei2020high}, and continuous theoretical developments~\cite{curty2014finite,yin2014long,zhou2016making,wang17measurement,wang2019asymmetric,azuma2022quantum}. Moreover, users in an MDI-QKD network can share expensive detectors, and the topology of MDI-QKD is naturally suited to deployment in star-type networks. Additionally, side-channel-secure QKD, which is not only MDI but also immune to potential source imperfections, has recently been realized experimentally~\cite{zhang2022experimental,gu2022experimental}.
However, the key rates of most forms of QKD are fundamentally bounded by the secret key capacity of repeaterless QKD~\cite{pirandola2009direct,takeoka2014fundamental,pirandola2017fundamental,das2021universal} due to photon loss in the channel. A rigorous theorem,
the absolute repeaterless secret key capacity
(SKC$_0$), expresses this limit as $R=-\log_2(1-\eta)$~\cite{pirandola2017fundamental}; i.e., for small $\eta$ the key rate $R$ scales linearly with the channel transmittance $\eta$. Despite some progress toward overcoming this bound with quantum repeaters~\cite{jiang2009quantum,munro2012quantum,azuma2015allrepeaters,azuma2015all}, such devices remain elusive.
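For orientation, SKC$_0$ is easy to evaluate numerically; the short sketch below (ours) combines the standard fiber-loss model $\eta=10^{-\alpha l/10}$ with $R=-\log_2(1-\eta)$:
\begin{verbatim}
import math

def skc0(l_km, alpha=0.16):
    """SKC0 in bits per channel use; alpha is the fiber loss in dB/km."""
    eta = 10**(-alpha*l_km/10)
    return -math.log2(1 - eta)

for l in (100, 300, 500):
    print(l, skc0(l))
\end{verbatim}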
Twin-field (TF) QKD~\cite{Lucamarini2018overcoming} and its variants~\cite{ma2018phase,wang2018twin,yin2019measurement,lin2018simple,cui2019twin,curty2019simple} are proposed to break this bound. The protocols make the untrusted intermediate node use Bell state measurements based on single-photon interference, rather than two-photon interference. Numerous works have advanced theory with finite key analysis~\cite{maeda2019repeaterless, yin2019finite, jiang2019unconditional,curras2021tight}. Ref.~\cite{Li2021long} applies entangled coherent state sources as untrusted relays to further increase the transmission distance of TF-QKD by reducing the signal-to-noise ratio at the measurement nodes. Several
experimental achievements have shown the performance
of twin-field QKD over large loss~\cite{minder2019experimental,zhong2019proof,wang2019beating,liu2019experimental,fang2020implementation,chen2020sending,liu2021field,zhong2021proof,chen2021twin,clivati2022coherent,Pittaluga2021600-km,wang2022twin,li2022twin,zhou2023quantum},
and the maximum distance of TF-QKD has
been experimentally increased to 830 kilometers~\cite{wang2022twin}.
The idea of single-photon interference has also been implemented in device independent QKD~\cite{xie2021overcoming}.
Nonetheless, as TF-QKD requires stable long-distance single-photon interference, phase-tracking and phase-locking techniques are indispensable~\cite{Lucamarini2018overcoming}. These techniques are complicated and expensive, and usually degrade the system performance. For example, phase tracking requires sending strong reference light, which reduces the effective clock frequency of the quantum signal and increases background noise~\cite{fang2020implementation,chen2020sending,Pittaluga2021600-km,wang2022twin}.
Recently, a new variant~\cite{xie2022breaking, zeng2022mode} of MDI-QKD, called asynchronous MDI-QKD~\cite{xie2022breaking} (also called mode-pairing MDI-QKD~\cite{zeng2022mode}), was proposed. It asynchronously pairs two successful clicks within a long pairing time to establish a two-photon Bell state, thereby breaking SKC$_0$. Asynchronous MDI-QKD is highly practical and has a noteworthy advantage over TF-QKD in intercity-distance quantum communications, owing to its implementation simplicity and performance. Several exciting experiments have successfully verified the superior performance of asynchronous MDI-QKD with accessible technology. Ref.~\cite{zhu2023experimental} realized the experiment with a maximal distance of 407 km without global phase locking. Ref.~\cite{zhou2022experimental} demonstrated the first asynchronous MDI-QKD that overcomes SKC$_0$ without global phase tracking and extended the maximal distance to 508 km.
However, before asynchronous MDI-QKD can be applied in real life, several practical issues must be resolved, such as identifying the optimal number of decoy states, determining the optimal method for the decoy-state calculation, and assessing the performance in asymmetric channels and networks.
In this work, we address these issues by introducing the joint-constraints technique~\cite{jiang2021higher} and new methods for phase error rate estimation to enable higher-rate asynchronous MDI-QKD.
By employing the three-intensity protocol alongside an additional \textit{click filtering} operation---which is known to be the best choice for performance---we simulate the key rate of asynchronous MDI-QKD in multi-user networks. For a network of five users, asynchronous MDI-QKD results in the key rates of all links surpassing the secret key capacity. Furthermore, using a 4 GHz repetition rate system~\cite{wang2022twin}, secret key
rates of 6.02 Mbps, 2.29 Mbps, and 0.31 Mbps can be achieved at fiber distances of 50 km, 100 km, and 200 km, respectively. Asynchronous MDI-QKD can achieve the highest key rate in the range of 170 to 480 km, compared with decoy-state QKD~\cite{hwang2003quantum,wang2005beating,lo2005decoy} and TF-QKD~\cite{Lucamarini2018overcoming}. More importantly, our work provides conceptual differences between asynchronous MDI-QKD and its synchronous version (original time-bin MDI-QKD)~\cite{ma2012alternative}. Asynchronous MDI-QKD holds the most promising potential as a solution for short-distance quantum communication in the future, owing to its minimal detector requirements and absence of strong light feedback.
\section{Protocol description}
Here, we consider an asymmetric asynchronous MDI-QKD protocol using three-intensity settings. The intensity
of each laser pulse is randomly set to one of the three intensities
$\mu_{a(b)}$ (signal), $\nu_{a(b)}$ (decoy) and $o_{a(b)}$ (vacuum), and the intensities satisfy $\mu_{a(b)}>\nu_{a(b)}>o_{a(b)}=0$.
A successful click is obtained when one and only one detector clicks in a time bin, and we refer to $(k_{a}|k_{b})$ as a successful click when Alice sends intensity $k_a$ and Bob sends $k_b$. The
notation $[k_{a}^{\rm tot}, k_{b}^{\rm tot}]$ denotes an asynchronous coincidence in which the total intensity Alice (Bob) sent over the two paired time bins is $k_{a}^{\rm tot}$ ($k_{b}^{\rm tot}$). The details of the protocol are as follows.
\textit{1. Preparation.} For each time bin, Alice chooses a phase value $\theta_a=2\pi M_{a}/M$ with $M_{a}\in \{0,1,...,M-1\}$
at random. Then, she selects an intensity choice $k_a\in \{\mu_a,\nu_a,o_a\}$ with probabilities $p_{\mu_a}$, $ p_{\nu_a}$, and $ p_{o_a}=1-p_{\mu_a}-p_{\nu_a}$, respectively. Alice prepares a weak laser pulse $\ket{e^{\textbf{i}\theta_a}\sqrt{k_a}}$ based on the chosen values. Similarly, Bob
prepares a weak coherent pulse $\ket{e^{\textbf{i}\theta_b}\sqrt{k_b}}$~($k_b\in \{\mu_b,\nu_b,o_b\}$). Finally, Alice and Bob send their optical pulses to Charlie via the quantum channel.
\textit{2. Measurement.} For each time bin, Charlie performs a first-order interference measurement on the two received pulses, and he publicly announces whether a successful click is obtained and which detector ($D_{L}$ or $D_{R}$) clicked. The first two steps will be repeated $N$ times.
\textit{3. Coincidence pairing.}
The clicks that Alice and Bob retained for further processing depend on whether \textit{click filtering} is applied. If they perform \textit{click filtering},
Alice (Bob) announces whether she (he) applied the decoy intensity $\nu_{a}$ ($\nu_{b}$) to the pulse sent for each event. Then they discard clicks $(\mu_{a}|\nu_{b})$ and $(\nu_{a}|\mu_{b})$, and keep all other clicks. Otherwise, they keep all clicks.
For all kept clicks, Alice and Bob always pair a click with the nearest one within a time interval $T_c$ to form a successful coincidence. They discard the lone click that failed to find a partner within $T_c$. For each coincidence, Alice (Bob) computes the total intensity used between the two time bins $k_a^{\rm tot}$ ($k_b^{\rm tot}$) and the phase differences between the early ($e$) and late ($l$) time bins, $\varphi_{a(b)}=\theta_{a(b)}^{l}-\theta_{a(b)}^{e}$.
\textit{4. Sifting.} Alice and Bob announce their computational results and then discard the data if $k_a^{\rm tot} \ge \mu_a+\nu_a$ or $k_b^{\rm tot} \ge \mu_b+\nu_b$. When there is a \textit{click filtering} operation, we define $\tilde{k}_{a(b)}=\mu_{a(b)}$; otherwise, we define $\tilde{k}_{a(b)} \in \{\mu_{a(b)},\nu_{a(b)}\}$.
For $[\tilde{k}_a, \tilde{k}_b]$ coincidence, Alice
(Bob) extracts a $\boldsymbol{Z}$-basis bit 0 (1) if she (he) sends $\tilde{k}_{a(b)}$ in the early time bin and $o_{a(b)}$ in the late time bin. Otherwise, Alice (Bob) extracts an opposite bit.
Note that we use four intensity
groups ($[\mu_a, \mu_b], [\mu_a, \nu_b], [\nu_a, \nu_b], [\nu_a, \mu_b]$) for the key generation when \textit{click filtering} is not applied, while existing MDI-QKD protocols typically use only one intensity group.
For $[2\nu_a, 2\nu_b]$ coincidence, Alice and Bob calculate the relative phase difference $\varphi_{ab}=(\varphi_{a}-\varphi_{b})\mod 2 \pi$. They extract an $\boldsymbol{X}$-basis bit 0 if $\varphi_{ab} = 0$ or $\pi$. Afterwards, Bob flips his bit value, if $\varphi_{ab} = 0$ and both detectors clicked, or $\varphi_{ab} = \pi$ and the same detector clicked twice. The coincidence with other phase differences is discarded.
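For clarity, the sift-and-flip rule of this step can be restated as a small function. The sketch below is ours and only transcribes the rule; the names are placeholders:
\begin{verbatim}
import math

def bob_x_bit(phi_ab, same_detector):
    """Bob's X-basis bit after the flip rule above.

    phi_ab: relative phase difference, already sifted to 0 or pi;
    same_detector: True if both clicks hit the same detector.
    Alice's bit is always 0; None means the pair is discarded.
    """
    if math.isclose(phi_ab, 0.0):
        return 1 if not same_detector else 0  # flip: both detectors clicked
    if math.isclose(phi_ab, math.pi):
        return 1 if same_detector else 0      # flip: one detector clicked twice
    return None
\end{verbatim}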
\textit{5. Parameter estimation.} Alice and Bob group their data into different sets $\mathcal{S}_{[k_{a}^{\rm tot},k_{b}^{\rm tot}]}$ and count the corresponding number $n_{[k_{a}^{\rm tot},k_{b}^{\rm tot}]}$. By using all the raw data they have obtained, Alice and Bob estimate the necessary parameters to calculate the key rate. They estimate the number of vacuum events, $s_0^z$, the number of single-photon pair events in the $\boldsymbol{Z}$ basis, $s_{11}^z$, the bit error rate of the single-photon pairs in the $\boldsymbol{X}$ basis, $e_{11}^x$, and the phase error rate associated with the single-photon pair events in the $\boldsymbol{Z}$ basis, $\phi^{z}_{11}$.
\textit{6. Key distillation.} Alice and Bob perform an error correction step that reveals at most $\lambda_{\rm EC}$ bits of information. Under the condition of passing the checks in the error correction
and privacy amplification steps, a $\varepsilon_{\rm tot}$-secret key of length
\begin{equation}
\begin{aligned}
\ell=& \underline{s}_{0}^z+\underline{s}_{11}^z\left[1-H_2\left(\overline{\phi}_{11}^z\right)\right]-\lambda_{\rm EC}
\\
& -\log_2\frac{2}{\varepsilon_{\rm cor}}-2\log_2\frac{2}{\varepsilon'\hat{\varepsilon}}-2\log_2\frac{1}{2\varepsilon_{\rm PA}} ,\label{eq_keyrate}
\end{aligned}
\end{equation}
can be extracted, where $\underline{x}$ and $\overline{x}$ are the lower and
upper bounds of the observed value $x$, respectively; $H_2(x)=-x \log_2 x-(1-x) \log_2(1-x)$ is the binary Shannon entropy function. Using the entropic uncertainty relation~\cite{zhou2022experimental}, the total security coefficient is $\varepsilon_{\rm tot}=2(\varepsilon'+2\varepsilon_e+\hat{\varepsilon})+\varepsilon_0+\varepsilon_1+\varepsilon_\beta+\varepsilon_{\rm PA}+\varepsilon_{\rm cor}$, where $\varepsilon_{\rm cor}$ is the failure probability of error correction; $\varepsilon_{\rm PA}$ is the failure probability of privacy amplification; $\hat{\varepsilon}$ and $\varepsilon'$ are the coefficients used in the chain rule for smooth entropies; and $\varepsilon_0$, $\varepsilon_1$ and $\varepsilon_\beta$ are the failure probabilities for estimating $s_0^z$, $s_{11}^z$, and $e_{11}^x$, respectively.
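For concreteness, Eq.~\eqref{eq_keyrate} is straightforward to evaluate once its inputs are known. The following sketch is ours; the numerical inputs are placeholders for illustration, not values from this work:
\begin{verbatim}
import math

def h2(x):
    """Binary Shannon entropy H_2(x)."""
    return 0.0 if x <= 0 or x >= 1 else (
        -x*math.log2(x) - (1 - x)*math.log2(1 - x))

def key_length(s0, s11, phi11, lam_ec, eps_cor=1e-10,
               eps_prime=1e-10, eps_hat=1e-10, eps_pa=1e-10):
    """Secret key length of the formula above."""
    return (s0 + s11*(1 - h2(phi11)) - lam_ec
            - math.log2(2/eps_cor)
            - 2*math.log2(2/(eps_prime*eps_hat))
            - 2*math.log2(1/(2*eps_pa)))

# placeholder inputs, for illustration only
print(key_length(s0=1e4, s11=5e5, phi11=0.03, lam_ec=2e5))
\end{verbatim}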
\section{The key rate formula}
In the following description,
let $x^*$ be the expected value of $x$. In the asynchronous MDI-QKD protocol, $[\tilde{k}_a, \tilde{k}_b]$ coincidence can be used to generate keys. Since the binary
Shannon entropy function is concave, we can correct errors for each group $[\tilde{k}_a, \tilde{k}_b]$ separately to reduce the consumption of information, which does not affect the security of the protocol. Hence the amount of information consumed in error correction can be written as
\begin{equation}
\begin{aligned}
\lambda_{\rm{EC}}&=\sum\limits_{\tilde{k}_a,\tilde{k}_b}\left[ n_{[\tilde{k}_a,\tilde{k}_b]}fH_2\left( E_{[\tilde{k}_a,\tilde{k}_b]}\right)\right],
\end{aligned}\label{eq:lambda}
\end{equation}
where $f$ is the error correction efficiency and $E_{[\tilde{k}_a,\tilde{k}_b]}$ is the bit error rate of $[\tilde{k}_a,\tilde{k}_b]$ coincidence. Because vacuum states contain no information about their bit values, in the asymmetric case we can separately extract higher-valued vacuum components in each group $[\tilde{k}_a, \tilde{k}_b]$ to obtain higher key rates.
The total number of vacuum components in the $\boldsymbol{Z}$ basis can be given by
\begin{equation}
\begin{aligned}
\underline{s}_{0}^{z*}= \sum\limits_{\tilde{k}_a,\tilde{k}_b} \max\left\{
\frac{e^{-\tilde{k}_a } p_{[\tilde{k}_a,\tilde{k}_b]}}{p_{[o_a,\tilde{k}_b]}}\underline{n}_{[o_a,\tilde{k}_b]}^{ *}, \frac{e^{-\tilde{k}_b}p_{[\tilde{k}_a,\tilde{k}_b]}}{p_{[\tilde{k}_a,o_b]}}\underline{n}_{[\tilde{k}_a,o_b]}^{ *}\right\}.
\end{aligned}\label{s0z_start}
\end{equation}
Here $p_{[k_a^{\rm tot},k_b^{\rm tot}]}$ is the probability that $[k_a^{\rm tot},k_b^{\rm tot}]$ coincidence occurs given the coincidence event, which is
\begin{equation}
\begin{aligned}
p_{[k_a^{\rm tot},k_b^{\rm tot}]}= \sum\limits_{k_a^e+k_a^l= k_a^{\rm tot}}\sum\limits_{k_b^e+k_b^l=k_b^{\rm tot}}
\frac{p_{k_a^e}p_{k_b^e}}{p_s} \frac{p_{k_a^l}p_{k_b^l}}{p_s}. \label{Eq:ptot}
\end{aligned}
\end{equation}
When \textit{click filtering} is not applied, $p_s=1$, otherwise $p_s=1-p_{\mu_a}p_{\nu_b}-p_{\nu_a}p_{\mu_b}$.
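Equation~\eqref{Eq:ptot} is a finite sum over decompositions of the total intensities and is easy to evaluate directly. A minimal sketch (ours; the intensities and probabilities below are illustrative, not optimized):
\begin{verbatim}
from itertools import product

def p_tot(ka_tot, kb_tot, pa, pb, ps=1.0):
    """Pairing probability defined above: pa and pb map each intensity
    to its probability; ps = 1 without click filtering. Exact float
    matches suffice for the illustrative values below."""
    total = 0.0
    for kae, kal in product(pa, repeat=2):
        for kbe, kbl in product(pb, repeat=2):
            if kae + kal == ka_tot and kbe + kbl == kb_tot:
                total += (pa[kae]*pb[kbe]/ps)*(pa[kal]*pb[kbl]/ps)
    return total

# illustrative (not optimized) intensities and probabilities
pa = {0.5: 0.6, 0.1: 0.2, 0.0: 0.2}
pb = {0.4: 0.6, 0.08: 0.2, 0.0: 0.2}
print(p_tot(0.5, 0.4, pa, pb))   # the [mu_a, mu_b] coincidence
\end{verbatim}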
Next, we need to estimate the number and phase error rate of the single-photon pairs in the
$\boldsymbol{Z}$ basis, $s_{11}^z$ and $\phi_{11}^z$. Because the density matrices of single-photon pairs in the $\boldsymbol{Z}$ basis are equal to those in the $\boldsymbol{X}$ basis, the single-photon pair yields in the two bases are equal. For the same reason, we can estimate the single-photon-pair phase error rate in the $\boldsymbol{Z}$ basis from the single-photon-pair bit error rate in the $\boldsymbol{X}$ basis. Therefore, among all single-photon pairs, the expected ratio of different intensity settings is the same as that of the emitted states,
\begin{equation}
\begin{aligned}
\frac{\underline{s}_{11}^{z*}}{\underline{s}_{11}^{x*}}=\frac{t_{11}^{z*}}{t_{11}^{x*}}= \frac{\sum_{\tilde{k}_a,\tilde{k}_b}\left(\tilde{k}_a\tilde{k}_be^{-\tilde{k}_a-\tilde{k}_b} p_{[\tilde{k}_a,\tilde{k}_b]}\right)}{4\nu_a\nu_be^{-2\nu_a-2\nu_b}p_{[2\nu_a,2\nu_b]}}.\label{s11_relation}\\
\end{aligned}
\end{equation}
Then we estimate the lower bound of ${s}_{11}^{z*}$ using the decoy-state method~\cite{hwang2003quantum,wang2005beating,lo2005decoy}, which can be given by
\begin{equation}
\begin{aligned}
\underline{s}_{11}^{z*}=& \frac{\sum_{\tilde{k}_a,\tilde{k}_b}\left(\tilde{k}_a\tilde{k}_be^{-\tilde{k}_a-\tilde{k}_b} p_{[\tilde{k}_a,\tilde{k}_b]}\right)}{\nu_a\nu_b\mu_a\mu_b(\mu'-\nu')}\\&\times \left[\mu_a\mu_b\mu' \left(e^{\nu_a+\nu_b }\frac{\underline{n}_{[\nu_a ,\nu_b] }^{*}}{p_{[\nu_a,\nu_b]}}-e^{\nu_b}\frac{\overline{n}_{[o_a,\nu_b] }^*}{p_{[o_a,\nu_b] }}\right.\right.\\
&\left.\left.-
e^{\nu_a }\frac{\overline{n}_{[\nu_a ,o_b] }^{*}}{p_{[\nu_a,o_b]}}
+\frac{\underline{n}_{[o_a,o_b] }^{*}}{p_{[o_a,o_b]}} \right)
\right.\\
& -\nu_a\nu_b\nu' \left(e^{\mu_a+\mu_b}\frac{\overline{n}_{[\mu_a,\mu_b] }^{*}}{p_{[\mu_a,\mu_b] }}-e^{\mu_b }\frac{\underline{n}_{[o_a,\mu_b] }^*}{p_{[o_a,\mu_b] }}\right.\\
&\left.\left.-
e^{\mu_a}\frac{\underline{n}_{[\mu_a, o_b] }^{*}}{p_{[\mu_a,o_b]}}
+ \frac{\underline{n}_{[o_a,o_b] }^{*}}{p_{[o_a,o_b] }} \right) \right],
\end{aligned}\label{eq_decoy_Y11}
\end{equation}
where
\begin{equation}
\begin{cases}
\mu'=\mu_a,\quad \nu'=\nu_a,& \mbox{if} \quad \frac{\mu_a}{\mu_b}
\le \frac{\nu_a}{\nu_b}, \\
\mu'=\mu_b, \quad\nu'=\nu_b,&\mbox{if} \quad \frac{\mu_a}{\mu_b}
> \frac{\nu_a}{\nu_b}.
\end{cases}
\end{equation}
We can use the technique of joint constraints~\cite{jiang2021higher} to obtain the tighter estimated value of ${s}_{11}^{z*}$. The details of the analytic results of joint constraints are shown in Appendix~\ref{joint_constraint_append}.
Then we can obtain the lower bound of ${s}_{11}^{x*}$ with Eq.~\eqref{s11_relation}.
\begin{table}[b!]
\centering
\caption{Simulation parameters. Here $\eta_d=\eta_d^L=\eta_d^R$, $p_d=p_d^L=p_d^R$, and $\eta_d^L~(\eta_d^R)$ and $p_d^L~(p_d^R)$ are the detection efficiency and the dark count rate of the detector $D_L~(D_R)$, respectively; $\alpha$ denotes the attenuation coefficient of the fiber; $\omega_{\rm fib}$ is the fiber phase drift rate; $E_{\rm HOM}$ is the interference misalignment error rate; $f$ is the error correction efficiency; $\Delta \nu$ is the laser frequency difference; and $\epsilon$ is the failure probability considered in the error verification and finite data analysis.}\label{tab1}
\begin{tabular}[b]{@{\extracolsep{0pt}}cccccccc}
\hline
\hline
$\eta_{d} $ & $p_{d} $ & $\alpha$ & $\omega_{\rm fib}$ & $E_{\rm HOM}$ & $f$ &$\Delta\nu$ & $\epsilon$\\
\hline\xrowht{7pt}
$80\%$ & 0.1 Hz & $0.16$ dB/km & $5900$ rad/s & 0.04 & $1.1$ & 10 Hz & $10^{-10}$\\
\hline
\hline
\end{tabular}
\end{table}
The upper bound of the single-photon pair errors of the $\boldsymbol{X}$ basis is
\begin{align}
\overline{t}_{11}^{x}= m_{[2\nu_a,2\nu_b]}-\underline{m}_{[2\nu_a,2\nu_b]}^{0},\label{phi_11z_start}
\end{align}
where $m_{[2\nu_a,2\nu_b]}$ is the observed number of error bits in the $\boldsymbol{X}$ basis, and $m_{[2\nu_a,2\nu_b]}^{0}$ is the number of error bits in the $\boldsymbol{X}$ basis given that at least one of Alice and Bob sends a vacuum component. The lower bound of the expected value $m_{[2\nu_a,2\nu_b]}^{0*}$ can be given by
\begin{equation}
\begin{aligned}\label{mo}
\underline{m}_{[2\nu_a,2\nu_b]}^{0*}=&\frac{e^{-2\nu_a}p_{[2\nu_a,2\nu_b]}}{2p_{[o_a,2\nu_b]}}\underline{n}_{[o_a,2\nu_b]}^{*}+\frac{e^{-2\nu_b}p_{[2\nu_a,2\nu_b]}}{2p_{[2\nu_a,o_b]}}\underline{n}_{[2\nu_a,o_b]}^{*}\\
&- \frac{e^{-2\nu_a-2\nu_b}p_{[2\nu_a,2\nu_b]}}{2p_{[o_a,o_b]}}\overline{n}_{[o_a,o_b]}^{*}.
\end{aligned}
\end{equation}
Similarly, we obtain the tighter value of $\underline{m}_{[2\nu_a,2\nu_b]}^{0*}$ under the joint constraints~\cite{jiang2021higher}.
According to the formula of random sampling without replacement in Ref.~\cite{yin2020tight}, we can obtain the estimated value of the phase error rate of the single-photon pair events in the $\boldsymbol{Z}$ basis
\begin{equation}
\begin{aligned}
\overline{\phi}_{11}^{z}=& \frac{\overline{t}_{11}^x}{\underline{s}_{11}^{x}} +\gamma\left(\underline{s}_{11}^z,\underline{s}_{11}^x,\frac{\overline{t}_{11}^x}{\underline{s}_{11}^{x}},\varepsilon_e\right),\\
\end{aligned}\label{Randomswr}
\end{equation}
where
\begin{equation}
\gamma(n,k,\lambda,\epsilon)=\frac{\frac{(1-2\lambda)AG}{n+k}+
\sqrt{\frac{A^2G^2}{(n+k)^2}+4\lambda(1-\lambda)G}}{2+2\frac{A^2G}{(n+k)^2}},
\end{equation}
with $A=\max\{n,k\}$ and $G=\frac{n+k}{nk}\ln{\frac{n+k}{2\pi nk\lambda(1-\lambda)\epsilon^{2}}}$. From another viewpoint, we can calculate $t_{11}^{x*}$ by
\begin{align}
\overline{t}_{11}^{x*}= m_{[2\nu_a,2\nu_b]}^*-\underline{m}_{[2\nu_a,2\nu_b]}^{0*},
\end{align}
and obtain $\overline{t}_{11}^{z*}$ by Eq.~\eqref{s11_relation}. Then we convert the expected values $\overline{t}_{11}^{z*}$ and $\underline{s}_{11}^{z*}$ into the observed values $\overline{t}_{11}^{z}$ and $\underline{s}_{11}^{z}$ with the Chernoff bound (see Eqs.~\eqref{chernoff1}~and~\eqref{chernoff2} in Appendix~\ref{statistical}). The upper bound of the single-photon pair phase error rate in the $\boldsymbol{Z}$ basis can be written as
\begin{equation}
\begin{aligned}
\overline{\phi}_{11}^{z}=& \frac{\overline{t}_{11}^{z}}{\underline{s}_{11}^z}.\label{phi_11z_stop}
\end{aligned}
\end{equation}
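Both estimation routes are elementary to implement. The following sketch (ours) implements the $\gamma$ bound entering Eq.~\eqref{Randomswr}; the input numbers are placeholders:
\begin{verbatim}
import math

def gamma(n, k, lam, eps):
    """The gamma bound defined above (random sampling
    without replacement)."""
    A = max(n, k)
    G = (n + k)/(n*k)*math.log(
        (n + k)/(2*math.pi*n*k*lam*(1 - lam)*eps**2))
    num = (1 - 2*lam)*A*G/(n + k) + math.sqrt(
        A**2*G**2/(n + k)**2 + 4*lam*(1 - lam)*G)
    return num/(2 + 2*A**2*G/(n + k)**2)

# placeholder values: phase error bound of the original method
s11z, s11x, t11x = 4.0e5, 2.0e5, 6.0e3
lam = t11x/s11x
print(lam + gamma(s11z, s11x, lam, 1e-10))
\end{verbatim}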
\begin{figure}[t!]
\centering
\includegraphics[width=8.6cm]{amdi_sy_f_newvsold.pdf}
\caption{Secret key rates of the three-intensity asynchronous MDI-QKD protocol with \textit{click filtering} using different phase error rate estimation methods. The numerical results show that the new phase error rate estimation method has a notable advantage.}\label{fig:amdi_sy_f_newvsold}
\end{figure}
\begin{table*}[t]
\centering
\caption{Simulated secret key rates per second for asynchronous MDI-QKD, SNS-QKD with the AOPP method, and PM-QKD in the QKD network shown in Fig.~\ref{amdi_network} using the parameters in Table~\ref{app:tab1}. The system clock frequency is 4 GHz and the transmission time is 22 hours. Here, link A-B means that user A communicates with user B. The sending intensities and corresponding probabilities are selected by the users to obtain the optimal key rate for each link. Note that here we consider a 50\% duty cycle for the TF-type protocols~\cite{minder2019experimental,Pittaluga2021600-km,zhou2023quantum}.}\label{tab3}
\begin{tabular}[b]{@{\extracolsep{10pt}}c cc ccccc }
\hline
\hline \xrowht{7pt}
Link &A-B (A-E) & B-C (C-E)& B-D (D-E)& B-E& A-C & A-D& C-D\\
\hline\xrowht{7pt}
SKC$_0$ & 5.77 $\times 10^{3}$ & 4.80 $\times 10^{3}$& 1.45 $\times 10^{4}$& 2.30 $\times 10^{3}$ & 1.21 $\times 10^{4}$ & 3.64 $\times 10^{4}$ & 3.03 $\times 10^{4}$ \\
Asynchronous MDI-QKD & 1.47 $\times 10^{4}$ & 1.36 $\times 10^{4}$& 2.05 $\times 10^{4}$&
9.46 $\times 10^{3}$ & 2.36 $\times 10^{4}$ & 4.04 $\times 10^{4}$& 3.56 $\times 10^{4}$\\
SNS-QKD (AOPP) & 1.18 $\times 10^{4}$ & 1.09 $\times 10^{4}$& 1.64 $\times 10^{4}$&
7.53 $\times 10^{3}$ & 1.78 $\times 10^{4}$ & 3.05 $\times 10^{4}$& 2.72 $\times 10^{4}$\\
PM-QKD & 2.56 $\times 10^{3}$ & 2.40 $\times 10^{3}$ & 3.22 $\times 10^{3}$&
1.71 $\times 10^{3}$ & 4.19 $\times 10^{3}$ & 6.91 $\times 10^{3}$ & 6.01 $\times 10^{3}$\\
\hline
\hline
\end{tabular}
\end{table*}
\section{Performance }
\subsection{Optimal decoy-state method}
For the evaluation, we numerically optimize the secret key
rate $R := \ell F/N$ of
asynchronous MDI-QKD with Eq.~\eqref{Randomswr} (original method~\cite{zhou2022experimental}) and Eq.~\eqref{phi_11z_stop} (new method), as shown in
Fig.~\ref{fig:amdi_sy_f_newvsold}. Here $F$ is the system clock frequency. In this work, we set failure parameters $\varepsilon_{\rm{cor}}$,
$\varepsilon'$, $\varepsilon_e$, $\hat{\varepsilon}$, $\varepsilon_{\beta}$, and $\varepsilon_{\rm PA}$ to be the same
value: $\epsilon$. The experimental parameters are set to the values used in the state-of-the-art system, as shown in Table~\ref{tab1}.
In Fig.~\ref{fig:amdi_sy_f_newvsold}, we set $F=1$ GHz and $l_a = l_b$, and the source parameters of Alice and Bob are all
the same. The genetic algorithm is exploited to globally search for the optimal value of light
intensities and their corresponding probabilities. The gray line shows SKC$_0$. The results show that as the distance increases, the influence of statistical fluctuations becomes increasingly significant,
and the key rate advantage of the new phase error rate estimation method is also increasing.
For example, at a fiber length of 600 km with $N=10^{14}$, the secret key rate obtained by the new phase
error rate estimation method is approximately 1.49 times that of the original method.
In the following key rate calculations, we use the new phase error rate estimation method by default.
\subsection{Optimal protocol}
Figure~\ref{fig:amd_nfvsf_10hz} shows a comparison of the secret key rates of asynchronous MDI-QKD with and
without \textit{click filtering} under symmetrical $l_a=l_b$ and asymmetrical channels $l_a-l_b=100$ km. The parameters are
listed in Table~\ref{tab1}. $F=1$ GHz and $N=10^{13}$ are used. The green dotted line shows the results of using only the $[\mu_a,\mu_b]$ coincidences to form the secret key without \textit{click filtering}. In the symmetric channel, Fig.~\ref{fig:amd_nfvsf_10hz}(a), the key rate of asynchronous MDI-QKD with \textit{click filtering} is always higher than that without \textit{click filtering} based on the $[\mu_a,\mu_b]$ group. This is expected, since the filtering operation leads to a higher number of valid pairs and smaller statistical fluctuations in the estimation process. Moreover, the key rate with \textit{click filtering} is higher than that without \textit{click filtering} based on four intensity groups at short and medium distances.
At a fiber length of 300 km, the
secret key rate obtained with \textit{click filtering} is approximately 1.11 times
the one without \textit{click filtering} based on four intensity groups, and 1.29 times the one based on $[\mu_a,\mu_b]$ group. The same trend is observed for the asymmetric channel (Fig.~\ref{fig:amd_nfvsf_10hz}(b)).
\begin{figure*}
\centering
\includegraphics [width=17.6cm]{amdi_nfvsf.pdf}
\caption{Comparison of the secret key rates of asynchronous MDI-QKD with and without
\textit{click filtering} under two types of
channels: (a) symmetric channel $l_a=l_b$ and (b) asymmetric channels $l_a-l_b=100$ km. } \label{fig:amd_nfvsf_10hz}
\end{figure*}
\subsection{Asynchronous MDI-QKD Networks}
Figure~\ref{amdi_network} shows a scalable QKD network setup consisting of numerous users who may freely join or leave the network. Each user node is connected to an untrusted relay by an asymmetric channel, through which it can establish a QKD link with the others. The users adjust the sending intensities and the corresponding probabilities so that each link obtains the optimal key rate.
The experimental parameters used here are listed in Table~\ref{app:tab1}. We assume a 4 GHz clock rate~\cite{wang2022twin} and 22-hour transmission time (about $3.2\times 10^{14}$ quantum pulses for asynchronous MDI-QKD).
Table~\ref{tab3} shows simulated secret key rates for asynchronous MDI-QKD, sending-or-not-sending QKD (SNS-QKD) with actively odd-parity pairing (AOPP)~\cite{jiang2020zigzag}, and phase-matching QKD (PM-QKD)~\cite{zeng2020symmetry} in the QKD intercity network. We assume that the quantum transmission duty ratio of the SNS-QKD and PM-QKD systems is 50\%~\cite{minder2019experimental,Pittaluga2021600-km,zhou2023quantum}. Note that duty cycle ratios are lower in many important TF-QKD experiments, for example, the duty ratio at 402 km is 22.4\% in Ref.~\cite{fang2020implementation}, 45\% in Ref.~\cite{chen2020sending}, and 40\% in Ref.~\cite{wang2022twin}. We can see that asynchronous MDI-QKD enables the key rates of all links to exceed SKC$_0$. Additionally, asynchronous MDI-QKD always enjoys higher secret key rates per clock than SNS-QKD~(AOPP) and PM-QKD.
\subsection{Practical advantages of asynchronous MDI-QKD}
We simulate the performance of our protocol assuming a 4 GHz clock rate and a 22-hour transmission time.
Figure \ref{amdi_performance} presents the key rate per second versus fiber distance for asynchronous MDI-QKD, together with four-intensity time bin MDI-QKD~\cite{jiang2021higher}, SNS-QKD (AOPP)~\cite{jiang2020zigzag}, PM-QKD~\cite{zeng2020symmetry}, four-phase TF-QKD~\cite{wang2022twin}, and four-intensity decoy-state QKD.
For SNS-QKD (AOPP), PM-QKD, and four-phase TF-QKD, we set the duty cycle to 50\%, Charlie's transmission loss at Alice's (Bob's) side to 2 dB, and the loss-independent misalignment error to 4.8\%~\cite{zhou2023quantum}. For decoy-state QKD we assume an insertion loss on Bob's side of 2 dB and a loss-independent misalignment error of $e_m=0.02$. The interference misalignment error rate of decoy-state MDI-QKD is set to 0.04, which corresponds to a 26\% error rate in the $\boldsymbol{X}$ basis. Device parameters are shown in Table~\ref{app:tab1}. The simulation formulas of MDI-QKD and decoy-state QKD are detailed in Appendices~\ref{simulation:MDIQKD} and~\ref{simulation:QKD}, respectively. We also include SKC$_0$ to demonstrate the repeater-like behavior of asynchronous MDI-QKD. The simulation shows that the key rate of our protocol surpasses that of decoy-state QKD when $l>170$ km, and it exceeds SKC$_0$ when $l>330$ km. In the 170--483 km range, the performance of our protocol is better than that of the other five protocols, especially between 200 and 300 km. We observe that, in the simulations, the key rates of decoy-state QKD surpass those of the original time-bin MDI-QKD due to the influence of the dark count rate and the finite-key analysis. Within the 0--45 km range, the original time-bin MDI-QKD achieves a higher rate than asynchronous MDI-QKD. This is because the signal-state intensity in the original MDI-QKD is stronger, approaching 1, which yields a larger number of single-photon pairs in the $\boldsymbol{Z}$ basis. In Table~\ref{tab4} we present the secret key rates (in bits per second) of asynchronous MDI-QKD at various typical distances, employing device parameters identical to those used in Fig.~\ref{amdi_performance}. Our protocol can generate a secret key rate of 0.31 Mbps at a fiber length of 200 km, rendering it adequate for key-demanding applications such as real-time one-time-pad audio encryption in intra- and inter-urban areas.
\begin{figure}[t]
\centering
\includegraphics[width=8.6cm]{network_figure_1.pdf}
\caption{Example of a scalable QKD network setup consisting of numerous users who may freely join or leave the network. Each user node has an asymmetric
channel connected to an untrusted relay, through which it can establish a QKD link to others.}\label{amdi_network}
\end{figure}
\begin{table*}[!htp]
\centering
\caption{The secret key rates of the three-intensity asynchronous MDI-QKD protocol with \textit{click filtering}. Here the fiber loss is 0.16 dB/km; the clock rate is 4 GHz; the dark count rate is 0.1 Hz; and the detection efficiency is $\eta_d=80\%$.}\label{tab4}
\begin{tabular}[b]{@{\extracolsep{9pt}}c cccccc }
\hline
\hline
\xrowht{8pt}
Data size & $10^{12}$ & $5 \times 10^{12}$ & $10^{13}$ & $10^{13}$ & $5 \times 10^{13}$ & $5 \times 10^{13}$ \\
\cline{2-7} \noalign{\smallskip} Distance (km) &
50 & 100 & 150 & 200 & 250 & 300 \\
\hline\xrowht{8pt}
Secret key rate & 6.02 Mbps & 2.29 Mbps& 855.40 kbps & 305.05 kbps & 129.60 kbps & 46.671 kbps \\
\hline
\hline
\end{tabular}
\end{table*}
\section{Discussion and conclusion}
\label{sec_conclusion}
Here, we point out two conceptual differences between asynchronous MDI-QKD and original MDI-QKD.
\romannumeral 1. Coincidence pairing. In original MDI-QKD, the expected
yields of single-photon pairs in the $\boldsymbol{Z}$ and $\boldsymbol{X}$ bases satisfy the relation
$Y_{11}^{z*}=Y_{11}^{x*}$~\cite{zhou2016making,wang2019asymmetric}. However, since asynchronous MDI-QKD uses post-measurement coincidence pairing, there is no concept of the expected total pair number. Therefore, the ‘gain’ and ‘yield’ cannot be calculated.
\romannumeral 2. Overcompleteness. For asynchronous MDI-QKD, the terms three-intensity and four-intensity refer to the number of light intensities sent; after pairing, the intensities appearing in the different bases are correlated.
In the original MDI-QKD protocol, an important idea is to consider the double-scanning method~\cite{jiang2021higher}. We have applied the double-scanning method to asynchronous MDI-QKD.
The derivation details of double-scanning are shown in Appendix~\ref{double_scanning_append}. However, numerical results show that the method does not work for the three-intensity asynchronous MDI-QKD protocol~\cite{xiecode}. We remark that this phenomenon may be caused by the above two important characteristics. For three-intensity asynchronous MDI-QKD, there are three intensities in each of the $\boldsymbol{Z}$ and $\boldsymbol{X}$ bases after coincidence pairing. Whereas in the original three-intensity MDI-QKD, there is only one intensity in the $\boldsymbol{Z}$ basis. This means that we can directly use the $\boldsymbol{Z}$-basis data to tightly estimate the number of single photon pairs in the $\boldsymbol{Z}$ basis for asynchronous MDI-QKD, rather than inefficiently using data from the $\boldsymbol{X}$ basis to calculate. Additionally, the overcompleteness
of asynchronous MDI-QKD means that there is a correlation between the intensities used to estimate the number of $\boldsymbol{Z}$-basis single-photon pairs and those used to estimate the $\boldsymbol{X}$-basis phase error rate ($\boldsymbol{Z}$ basis: $\mu$ and $\nu$; $\boldsymbol{X}$ basis: $2\mu$ and $2\nu$). In contrast, in the original MDI-QKD the intensity choices and decoy-state estimation for the $\boldsymbol{Z}$ and $\boldsymbol{X}$ bases are independent, so double scanning is effective.
\begin{figure}[t]
\centering
\includegraphics[width=8.6cm]{amdi_praperf.pdf}
\caption{Simulated secret key rates for asynchronous MDI-QKD, original time-bin MDI-QKD, decoy-state QKD, SNS-QKD with the AOPP method, PM-QKD, and four-phase TF-QKD under the state-of-the-art system.}\label{amdi_performance}
\end{figure}
Furthermore, in the original MDI-QKD protocol, we can improve the performance of the protocol by increasing the number of decoy states, such as four-intensity MDI-QKD~\cite{zhou2016making}. We also have calculated the key rate of the four-intensity asynchronous MDI-QKD protocol, in which the intensity of each laser pulse is randomly set to one of the four intensities
$\mu_{a(b)}$ (signal), $\omega_{a(b)}$ (decoy 1), $\nu_{a(b)}$ (decoy 2) and $o_{a(b)}$ (vacuum), and the intensities satisfy $\mu_{a(b)}>\omega_{a(b)}>\nu_{a(b)}>o_{a(b)}=0$. The detailed calculation of the protocol is presented in Appendix~\ref{four_AMDI}. Comparing secret key rates of the three-intensity and four-intensity asynchronous MDI-QKD protocol with \textit{click filtering}, we find that the optimal key rates for the four-intensity decoy-state method are nearly equal to the results for the three-intensity decoy-state method~\cite{xiecode}. We remark that this situation is also due to overcompleteness. Therefore, the three-intensity asynchronous MDI-QKD protocol is a good trade-off between the performance of key rates and the ease of implementation.
In this work, we have presented an analysis of the practical aspects of asynchronous MDI-QKD. We have provided refined decoy-state methods that enable higher-rate asynchronous MDI-QKD. The numerical results of different asynchronous MDI-QKD protocols demonstrate that the three-intensity protocol, with a \textit{click filtering} operation, can provide a favorable balance between performance and ease of implementation. We have introduced the decoy-state method for the asymmetric situation, which permits the direct application of our protocol to asynchronous MDI-QKD experiments with asymmetric channels. Our work also provides important insights into asynchronous MDI-QKD: the decoy-state analysis for the $\boldsymbol{Z}$ and $\boldsymbol{X}$ bases of asynchronous MDI-QKD is overcomplete, rendering the introduction of double scanning and additional decoy states ineffective for key rate improvement. With its superior performance and straightforward design, asynchronous MDI-QKD holds strong potential in future quantum networks spanning 200 to 400 km. We anticipate the application of the asynchronous concept to MDI multiparty quantum communication tasks, such as
quantum conference key agreement~\cite{fu2015long}, quantum
secret sharing~\cite{fu2015long}, and quantum digital signatures~\cite{yin2022experimental}.
\par
\begin{center}
\textbf{ACKNOWLEDGMENTS}
\end{center}
The authors acknowledge Z. Yuan and L. Zhou for insightful discussions.
This work has been supported by the National Natural Science Foundation of China (No. 12274223), the Natural Science Foundation of Jiangsu Province (No. BK20211145), the Fundamental Research Funds for the Central Universities (No. 020414380182), the Key Research and Development Program of Nanjing Jiangbei New Area (No. ZDYD20210101), the Program for Innovative Talents and Entrepreneurs in Jiangsu (No. JSSCRC2021484), and the Program of Song Shan Laboratory (Included in the management of Major Science and Technology Program of Henan Province) (No. 221100210800).
|
{
"arxiv_id": "2302.14345",
"language": "en",
"timestamp": "2023-03-01T02:10:26",
"url": "https://arxiv.org/abs/2302.14345",
"yymm": "2302"
} | \section{Introduction}
Let $X$ be an $m\times n$ matrix of indeterminates over a field $K$. The minors of $X$, i.e., the determinants of the submatrices of $X$, generate subalgebras and ideals of the polynomial ring $R=K[X_{ij}: i=1,\dots,m,\ j=1,\dots,n]$ that are important not only in algebraic geometry and commutative algebra, but also in representation theory and combinatorics. For a recent survey of the manifold approaches to the theory of ideals and subalgebras generated by minors we refer the reader to our recent volume \cite{BCRV} with Raicu and Varbaro. One of these approaches is via Gröbner and Sagbi bases. We trust that the reader is familiar with the notion of a Gröbner basis. A Sagbi basis (terminology of Robbiano and Sweedler \cite{RobSwe}) of a subalgebra $A$ for a given monomial (or term) order on the ambient polynomial ring is a system of generators ${\mathcal F}$ such that the set $\operatorname{in}({\mathcal F})$ of initial monomials generates the initial algebra $\operatorname{in}(A)$.
Let us assume $m\le n$. By a theorem of Sturmfels \cite{StuGrDet}, the $t$-minors, i.e., the determinants of the $t\times t$-submatrices, are a Gröbner basis of the ideal they generate under any diagonal monomial order---a diagonal monomial order chooses the product of the diagonal elements as the initial monomial of a $t$-minor. For the maximal minors, for which $t=m$, much more is true: They are a universal Gröbner basis, namely a Gröbner basis for an arbitrary monomial order. See Conca, De Negri and Gorla \cite{CDG} for a simple proof of this theorem of Bernstein, Sturmfels and Zelevinsky \cite{BerZel, StuZel}.
The subalgebra $G(m,n)$ generated by the maximal minors is the homogeneous coordinate ring of the Grassmann variety of $m$-dimensional subspaces of $K^n$. For general $t$ the $t$-minors are not a Sagbi basis of the algebra they generate. But for maximal minors this is true for diagonal orders, and the toric deformation of $G(m,n)$ that it offers gives comfortable access to many homological and enumerative invariants of $G(m,n)$. Other term orders
for which the maximal minors are a Sagbi basis of $G(m,n)$ have recently been discovered; see for example \cite{CM} and \cite{HO}.
In view of the Bernstein--Sturmfels--Zelevinsky theorem the question of universality arises also here. But it was disproved by Speyer and Sturmfels \cite{SpeyStu}: for a $3\times 6$ matrix there exists a lexicographical order under which the $3$-minors are not a Sagbi basis of $G(3,6)$.
The starting point of the project on which we report in this paper was the question of what can be said about reverse lexicographical orders in this respect. As we will see, revlex universality also fails, but needs a $3\times 8$ matrix for failure. In view of the special properties of normal monomial algebras it is also interesting to ask whether initial algebras of $G(m,n)$ are always normal. This is surprisingly often true, and finding a counterexample needs more patience. With the experimental methods whose results are reported in Section \ref{Grass} we cannot answer the question whether there is a finite universal Sagbi basis of $G(m,n)$, but our findings support the conjecture that this is true.
Our investigation of Sagbi bases of Grassmannians motivated the development of an algorithm for the computation of Sagbi bases in general. It is almost impossible to devise a really new algorithm, but the efficiency of the implementation is extremely important for this complicated task. Our algorithm is implemented in the Singular \cite{Sing} script sagbiNormaliz.lib. While Singular is used for the polynomial arithmetic, newly developed functions of Normaliz \cite{Nmz} are called for the combinatorial tasks. The variants of our algorithms are explained in Section \ref{Impl}. One of them is based on control by Hilbert series.
In the last section we compare our algorithm to packages that come with the standard distributions of Macaulay2 \cite{M2} and Singular \cite{Sing}. We also compare it with a preliminary version of a CoCoA5 \cite{CoCoA} package which will be in the standard distribution from version 5.4.2. Applications to different types of subalgebras show that our implementation extends the range of computability by more than one order of magnitude.
It is planned to publish release 3.17.10 together with the Singular library sagbiNormaliz.lib and a new version of normaliz.lib by the end of 2022. On request we will provide pre-release versions of the software and all data of this project to interested readers.
\section{Basics of Sagbi bases}\label{basics}
The reader finds a compact discussion of Sagbi bases in \cite[Sect. 1.3]{BCRV}. We use the notation developed there. Kreuzer and Robbiano \cite[Sect. 6.6]{KrRo} give a more extensive introduction; also see Ene and Herzog \cite{EneHerz} and Sturmfels \cite{StuGrPol}. Sagbi bases were introduced independently by Robbiano and Sweedler \cite{RobSwe} and Kapur and Madlener \cite{KapMad}. The acronym Sagbi stands for ``subalgebra analog to Gröbner bases of ideals'' \cite{RobSwe}. Some authors have recently adopted a new terminology, Khovanskii bases, for a notion that generalizes Sagbi bases, but in this paper we keep the traditional name.
Let $A \subset R=K[X_1,\dots,X_n]$ be a $K$-subalgebra and ${\mathcal F}$ a (not necessarily finite) family of polynomials belonging to $A$. We assume that $R$ is endowed with a monomial (or term) order $<$. In the following we will often simply speak of an \emph{order}. One calls ${\mathcal F}$ a \emph{Sagbi basis} of $A$ if the initial monomials $\operatorname{in}(f)$, $f\in {\mathcal F}$, generate the initial algebra $\operatorname{in}(A)$. A Sagbi basis is automatically a system of generators of $A$. If ${\mathcal F}$ is finite, then $A$ and $K[\operatorname{in}({\mathcal F})]$ are connected by a flat deformation (Conca, Herzog and Valla \cite{CHV}), and this allows the transfer of homological and enumerative data from the toric algebra $K[\operatorname{in}({\mathcal F})]$ to $A$. Chapter 6 of \cite{BCRV} exploits this approach for the investigation of algebras generated by minors.
Sagbi bases need not be finite, but can always be chosen countable. Therefore one must allow that ${\mathcal F}= (f_u)_{u\in N}$ with $N=\{1,\dots,p\}$ or $N={\mathbb N}$. We will always assume that the members of ${\mathcal F}$ are monic. This is evidently no essential restriction of generality as long as the base ring $K$ is a field.
For us the following simple lemma is an important tool in the computation of Sagbi bases. For an ${\mathbb N}$-graded $K$-vector space $V$ one defines the \emph{Hilbert function} of $V$ by
\begin{equation}
H(V,k) = \dim_K V_k, \qquad k\in{\mathbb N},\label{Hilb}
\end{equation}
where $V_k$ is the subspace of degree $k$ elements of $V$.
\begin{lemma}\label{HilbLemma}
Let $R$ be a polynomial ring, endowed with a monomial order, and $A$ a finitely generated graded subalgebra. Furthermore let ${\mathcal F}$ be a family of polynomials in $A$ and $B=K[\operatorname{in}({\mathcal F})]$ the subalgebra generated by the monomials $\operatorname{in}(f)$, $f\in {\mathcal F}$. Then the following hold:
\begin{enumerate}
\item $H(B,k) \le H(A,k)$ for all $k$.
\item ${\mathcal F}$ is a Sagbi basis of $A$ if and only if $H(B,k) = H(A,k)$ for all $k\in{\mathbb N}$.
\end{enumerate}
\end{lemma}
\begin{proof}
For a graded subspace $V$ of the polynomial ring one has $H(V,k)=H(\operatorname{in}(V),k)$ for all $k$ \cite[1.4.3]{BCRV}. Furthermore there are inclusions
$$
B_k \subset \operatorname{in}(A)_k,\qquad k\in{\mathbb N},
$$
and equality holds if and only if $H(B,k)=H(\operatorname{in}(A),k)$ for all $k$. Together with Equation \eqref{Hilb} this proves the lemma.
\end{proof}
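Part (2) of the lemma can be checked experimentally on small examples by counting monomials. The following sketch (ours, in Python) computes $H(B,k)$ for a monomial algebra given by exponent vectors; in the toy case of the elementary symmetric polynomials in two variables the counts agree with $H(A,k)=\lfloor k/2\rfloor +1$, confirming the Sagbi property:
\begin{verbatim}
from itertools import product

def hilbert(gens, k):
    """H(B,k): number of distinct degree-k monomials in the monomial
    algebra B generated by the exponent vectors in gens."""
    degs = [sum(g) for g in gens]
    seen = set()
    for e in product(range(k + 1), repeat=len(gens)):
        if sum(c*d for c, d in zip(e, degs)) != k:
            continue
        seen.add(tuple(sum(c*g[j] for c, g in zip(e, gens))
                       for j in range(len(gens[0]))))
    return len(seen)

# B = K[x, xy], the initial algebra of A = K[x+y, xy] under lex:
# prints [1, 1, 2, 2, 3, 3, 4, 4], equal to H(A,k) = floor(k/2)+1
print([hilbert([(1, 0), (1, 1)], k) for k in range(8)])
\end{verbatim}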
To present $A$ as a residue class ring of a polynomial ring, we choose $P=K[Y_u: u\in N]$ and define a surjection
\begin{equation}
\phi: P \to A, \qquad \phi(Y_u) = f_u, \ u\in N.\label{phi}
\end{equation}
The $K$-algebra $K[\operatorname{in}({\mathcal F})]$ is a homomorphic image of $P$, as well, namely by the surjection
$$
\psi: P \to K[\operatorname{in}({\mathcal F})], \qquad \psi(Y_u) = \operatorname{in}(f_u), \ u\in N.
$$
The kernel of $\psi$ is generated by a set of binomials. In the terminology of \cite{RobSwe} a binomial in $\operatorname{Ker}\psi$ is called a \emph{t\^ete-a-t\^ete}.
A monomial in $P$ is given by an exponent vector $e=(e_u)_{u\in N}$ of natural numbers $e_u$ of which all but finitely many are $0$. We set $Y^{e} = \prod_{u\in N} Y_u^{e_u}$. Let $F\in P$ be a polynomial, given as a $K$-linear combination of monomials, $F=\sum_i a_iY^{e_i}$ where the $e_i$ are exponent vectors, and $a_i\neq 0$ for all indices involved. We set
\begin{align*}
\operatorname{in}_\phi(F)&=\max_i \ \operatorname{in}(\phi(Y^{e_i})), \\
\operatorname{init}_\phi(F)&=\sum_{\operatorname{in}(\phi(Y^{e_i}))=\operatorname{in}_\phi(F) } a_iY^{e_i}.
\end{align*}
Note that in the definition of $\operatorname{in}_\phi(F)$ the maximum is taken over the initial monomials with respect to the monomial order on $R$ so that it is a monomial in $R$. In contrast, $\operatorname{init}_\phi(F)$ is a polynomial in $P$. Since $\phi(\operatorname{init}_\phi(F))$ can be $0$, in general $\operatorname{in}_\phi(F) \neq \operatorname{in}(\phi(F))$, and this cancellation of initials is the crucial aspect of Sagbi computation. One says that a polynomial $F\in\operatorname{Ker}\phi$ \emph{lifts} a polynomial $H\in \operatorname{Ker}\psi$ if $\operatorname{init}_\phi(F)=\operatorname{init}_\phi(H)$.
We can now formulate the \emph{Sagbi criterion} (see \cite[1.3.14]{BCRV}).
\begin{theorem}
With the notation introduced, let ${\mathcal B}$ be a set of binomials generating $\operatorname{Ker}\psi$. Then the following are equivalent:
\begin{enumerate}
\item ${\mathcal F}$ is a Sagbi basis of $A$;
\item every binomial $b\in {\mathcal B}$ can be lifted to a polynomial $F\in \operatorname{Ker}\phi$.
\end{enumerate}
\end{theorem}
The Buchberger algorithm for a Gröbner basis of an ideal $I$ starts from a system of generators $G$ of $I$. Then one applies two steps, namely (i) the computation of the $S$-polynomials $S(g_1,g_2)$, $g_1, g_2\in G$, and (ii) their reductions modulo $G$. The nonzero reductions are then added to $G$, and the next round of $S$-polynomials of the augmented $G$ and their reductions is run. This produces an increasing sequence of initial ideals $\operatorname{in}(G)$. Because of Noetherianity the process stops after finitely many rounds with a Gröbner basis of $I$. The reduction of an $S$-polynomial $S(g_1,g_2)$ to $0$ is equivalent to the liftability of the ``divided Koszul syzygy'' of $\operatorname{in}(g_1)$ and $\operatorname{in}(g_2)$ to a syzygy of the polynomials $g\in G$.
The computation of Sagbi bases follows the same pattern. There are however two main differences: an analog of the divided Koszul syzygies does not exist, and ascending chains of monomial subalgebras of $R$ need not stabilize. For Sagbi bases one must therefore compute a binomial system of generators of $\operatorname{Ker}\psi$, and one cannot expect the algorithm to stop. The analog of reduction is called subduction (we again follow \cite{RobSwe}).
\begin{definition}
Let $g\in R$. Then $r\in R$ is a \emph{\index{subduction}subduction of $g$ modulo ${\mathcal F}$} if there exist monomials ${\mathcal F}^{e_1},\dots,{\mathcal F}^{e_m}$ and non-zero coefficients $a_i\in K$ such that the following hold:
\begin{enumerate}
\item $g=a_1{\mathcal F}^{e_1}+\dots+a_m{\mathcal F}^{e_m}+r$;
\item $\operatorname{in}({\mathcal F}^{e_i})\le \operatorname{in}(g)$ for $i=1,\dots,m$;
\item no monomial $\mu\in\operatorname{supp}(r)$ is of type $\operatorname{in}({\mathcal F}^e)$.
\end{enumerate}
\end{definition}
The process that computes a subduction of $g$ modulo ${\mathcal F}$ is also called subduction. Here $\operatorname{supp}(r)$ is the set of monomials of $R$ appearing in $r$ with a nonzero coefficient. In the computation of Sagbi bases one can replace (3) by the weaker condition
\begin{enumerate}
\item[($3'$)] $\operatorname{in}(r)$ is not of type $\operatorname{in}({\mathcal F}^e)$.
\end{enumerate}
There is an obvious algorithm that produces a \emph{subduction remainder} $r$ in $(3')$: if $\operatorname{in}(g)=\operatorname{in}({\mathcal F}^e)$, we replace $g$ by $g - a{\mathcal F}^e$, where the coefficient $a$ is chosen so that the initial terms cancel, and iterate this \emph{subduction step} as long as possible. The algorithm stops since the sequence of initial monomials is strictly descending and descending sequences in a monomial order are finite. Once $(3')$ is reached, one applies subduction steps iteratively to the remaining monomials to achieve the ``tail subduction'' asked for by (3).
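For readers who prefer code, the following Python sketch performs the subduction steps for condition $(3')$ on toy polynomials: polynomials are dictionaries mapping exponent tuples to rational coefficients, the monomial order is lex, and the search for a representation $\operatorname{in}(g)=\operatorname{in}({\mathcal F}^e)$ is a naive depth-first enumeration. The function names are ours and are not those of sagbiNormaliz.lib.
\begin{verbatim}
# A minimal sketch of subduction (condition (3')), not the
# sagbiNormaliz.lib implementation. Polynomials in x1,...,xn
# are dicts {exponent tuple: coefficient}.
from fractions import Fraction

def lt(p):
    # leading (monomial, coefficient) for the lex order;
    # exponent tuples compare lexicographically in Python
    m = max(p)
    return m, p[m]

def sub_exp(a, b):
    d = tuple(x - y for x, y in zip(a, b))
    return d if min(d) >= 0 else None

def factor_in_monoid(m, gens):
    # try to write m as a sum of initial exponents of F
    if max(m) == 0:
        return []
    for i, g in enumerate(gens):
        r = sub_exp(m, g)
        if r is not None:
            tail = factor_in_monoid(r, gens)
            if tail is not None:
                return [i] + tail
    return None

def mul(p, q):
    out = {}
    for a, ca in p.items():
        for b, cb in q.items():
            e = tuple(x + y for x, y in zip(a, b))
            out[e] = out.get(e, 0) + ca * cb
    return {e: c for e, c in out.items() if c}

def sub(p, q):
    out = dict(p)
    for e, c in q.items():
        out[e] = out.get(e, 0) - c
    return {e: c for e, c in out.items() if c}

def subduct(g, F):
    in_exps = [lt(f)[0] for f in F]
    zero = tuple(0 for _ in in_exps[0])
    while g:
        m, c = lt(g)
        fact = factor_in_monoid(m, in_exps)
        if fact is None:
            return g          # in(g) is not of type in(F^e)
        Fe = {zero: Fraction(1)}
        for i in fact:
            Fe = mul(Fe, F[i])
        # subduction step: cancel the initial term of g
        g = sub(g, mul({zero: Fraction(c) / lt(Fe)[1]}, Fe))
    return g
\end{verbatim}
Tail subduction for condition (3) is obtained by applying the same loop to the trailing monomials of the remainder.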
The algorithm (\emph{Sagbi}) starts from the finite family ${\mathcal F}_0$ generating the subalgebra $A\subset R$. Then one proceeds as follows:
\begin{enumerate}
\item[(1)] Set $i=0$.
\item[(2)] Set ${\mathcal F}'=\emptyset$ and compute a binomial system of generators ${\mathcal B}_i$ of the kernel of $\psi_i: P_i \to K[{\mathcal F}_i]$, $P_i=K[Y_F:F\in {\mathcal F}_i]$, $\psi_i(Y_F)=\operatorname{in}(F)$.
\item[(3)] For all $\beta\in {\mathcal B}_i$ compute the subduction $r$ of $\phi_i(\beta)$ modulo ${\mathcal F}_i$, $\phi_i$ given by the substitution $Y_F\mapsto F$, $F\in{\mathcal F}_i$. If $r\neq 0$, make $r$ monic and add it to ${\mathcal F}'$.
\item[(4)] If ${\mathcal F}'=\emptyset$, set ${\mathcal F}_j={\mathcal F}_i$, $P_j=P_i$, ${\mathcal B}_j={\mathcal B}_i$ for all $j\ge i$ and stop.
\item[(5)] Otherwise set ${\mathcal F}_{i+1}={\mathcal F}_i\cup {\mathcal F}'$, $i=i+1$ and go to (2).
\end{enumerate}
It is not hard to see that ${\mathcal F}=\bigcup_{i=0}^\infty{\mathcal F}_i$ is a Sagbi basis of $A$, and that the algorithm stops after finitely many steps if $A$ has a finite Sagbi basis.
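Schematically, the round structure of (Sagbi) reads as follows in Python, continuing the conventions of the subduction sketch above; the arguments \ttt{kernel\_binomials} (the Markov basis computation that sagbiNormaliz.lib delegates to Normaliz) and \ttt{phi} (the substitution $Y_F\mapsto F$) are black boxes here and do not reflect an actual library interface.
\begin{verbatim}
# Schematic round structure of (Sagbi); kernel_binomials and phi
# are placeholders for the toric computation and the substitution
# Y_F -> F; lt and subduct are as in the sketch above.
def sagbi_rounds(F0, kernel_binomials, phi, subduct, max_rounds=10):
    F = list(F0)
    for _ in range(max_rounds):
        new = []
        for beta in kernel_binomials(F):      # step (2)
            r = subduct(phi(beta, F), F)      # step (3)
            if r:
                m, c = lt(r)                  # make r monic
                new.append({e: v / c for e, v in r.items()})
        if not new:                           # step (4): Sagbi basis
            return F, True
        F = F + new                           # step (5)
    return F, False                           # round bound reached
\end{verbatim}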
\begin{remark}\label{goodies}
The computation of Sagbi bases is in general a very complex operation. However, in some cases it can offer a fast solution to problems that seem much simpler at first sight, but then turn out to be very hard. We discuss two cases.
(a) In our work preparing the article \cite{BCV} with Varbaro, we did not succeed in computing the defining ideal of the algebra $A$ generated by the $2\times 2$ minors of a $4\times 4$ matrix of indeterminates over a field $K$ in ``nonexceptional'' characteristics, for example characteristic~$0$. Singular, CoCoA and Macaulay did not finish the computation of the defining ideal in reasonable time, not even up to degree $4$.
From \cite{BCPow} it was however known that $A$ has a finite Sagbi basis, and meanwhile more is known about it by work of Varbaro; see \cite[6.4.10]{BCRV}. In Section \ref{comp} we come back to the computation. It takes only 40 sec; see Table \ref{Bench_1}. With the right bookkeeping one can explicitly lift the final t\^ete-a-t\^ete{} to a defining ideal of the algebra. Since one wants the defining ideal in terms of the original system of generators, further processing is necessary and a minimization must follow. Nevertheless this is a feasible approach.
By the work of Huang et al.\ \cite{HPPRS} it is now known that the defining ideal for algebras of $2$-minors is generated in degree $2$ and $3$.
(b) Suppose that one wants to compute the Hilbert series of a Grassmannian explicitly by a computer algebra system. It is of course possible to use representation theoretic methods or classical approaches going back to Hodge \cite{Hodge}; see Braun \cite{Braun} for explicit formulas. But these need preparations, and the same is true if one wants to exploit the Plücker relations. Computing a Gröbner basis by elimination is therefore tempting, but already for rather small cases it takes surprisingly long: already $3\times 9$ takes days. In contrast, the computation from a Sagbi basis given by the generating minors in a diagonal monomial order is almost instantaneous.
The explicit computation of the Sagbi basis with respect to a diagonal monomial order is also very fast: a degree controlled algorithm is faster by orders of magnitude than the elimination approach. See Remark \ref{OnBench_1}(f) for computational data.
\end{remark}
\section{Sagbi combinatorics of maximal minors}\label{Grass}
Let $K$ be a field and $X$ an $m\times n$ matrix of indeterminates with $m\le n$. By ${\mathcal M}$ we denote the set of $m$-minors of $X$, i.e., the determinants of the submatrices
$$
(X_{iu_j}: i=1,\dots,m, \ j=1,\dots,m ), \qquad u_1<\dots<u_m.
$$
The subalgebra $G(m,n) = K[{\mathcal M}]$ of $R = K[X_{ij}: i=1,\dots,m,\ j=1,\dots, n]$ is an object of classical algebraic geometry, namely the homogeneous coordinate ring of the Grassmannian of $m$-dimensional vector subspaces of $K^n$ in its Plücker embedding. A ``natural'' monomial order on $R$ is lexicographic (or degree reverse lexicographic) for the order
\begin{equation}
X_{11} > \dots > X_{1n} > X_{21} >\dots > X_{2n} > \dots > X_{m1} >\dots > X_{mn}. \label{diag_order}
\end{equation}
It is \emph{diagonal}: the product of the indeterminates in the diagonal is the initial monomial of each minor in ${\mathcal M}$. The standard bitableaux are a $K$-basis of $G(m,n)$ (see \cite[Chap 3]{BCRV}), and this implies that ${\mathcal M}$ is a Sagbi basis of $G(m,n)$ for every diagonal monomial order on $R$. To the best of our knowledge, this was first observed by Sturmfels \cite{StuGrPol}. This toric deformation gives comfortable access to the cohomological and enumerative properties of $G(m,n)$; see, for example, \cite[Sect.\ 6.2]{BCRV}.
In view of Lemma \ref{HilbLemma} it is crucial for our experimental approach to compute the Hilbert series
$$
H_{G(m,n)}(t) = \sum_{k=0}^{\infty} (\dim_K G(m,n)_k)t^k.
$$
Usually we work with the \emph{normalized} degree on $G(m,n)$ in which the $m$-minors have degree $1$. So $G(m,n)_k$ is the degree $km$ homogeneous component of $G(m,n)$ in the standard grading of $R$. Since the Hilbert series of $G(m,n)$ and its initial algebra coincide and the initial algebra is normal, the computation of the Hilbert series by Normaliz is almost instantaneous.
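For instance, $G(2,4)$ is the quadric hypersurface in ${\mathbb P}^5$ cut out by the single Plücker relation among the six $2$-minors, so that in the normalized degree
$$
H_{G(2,4)}(t) = \frac{1-t^2}{(1-t)^6} = \frac{1+t}{(1-t)^5}.
$$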
It takes some work to show that ${\mathcal M}$ is a Gröbner basis of the ideal $I_m = I_m(X)$ of maximal minors of $X$ with respect to a diagonal monomial order. But much more is true: by a theorem of Bernstein--Sturmfels--Zelevinsky \cite{BerZel,StuZel}, ${\mathcal M}$ is a \emph{universal} Gröbner basis of $I_m$, a Gröbner basis with respect to every monomial order on $R$. See \cite{CDG} for the simple proof of Conca, De Negri and Gorla and \cite{CDG1} for further developments. This raises the question whether ${\mathcal M}$ is also a universal Sagbi basis for $G(m,n)$. While this is true for $m=2$ \cite[5.3.6]{BCRV}, it fails already for $m=3$, $n=6$, as observed by Speyer and Sturmfels \cite{SpeyStu}: there exists a lexicographic order on $R$ for which ${\mathcal M}$ is \emph{not} a Sagbi basis of $G(m,n)$. But is ${\mathcal M}$ \emph{universally revlex}, namely a Sagbi basis for all degree reverse lexicographic orders on $R$?
Before we answer this question, let us outline two strategies for the investigation of this and related questions. The first strategy is very simple: after fixing the matrix format, choose an order of the indeterminates, extend it to a (reverse) lexicographic order and check whether ${\mathcal M}$ is a Sagbi basis for this order by comparing the Hilbert functions. (Instead of varying the order, we use a random permutation of the matrix entries.) This strategy is realized in a Singular script and the Hilbert series are computed by Normaliz. In the cases in which ${\mathcal M}$ is not a Sagbi basis, one can furthermore try to extend it to a full Sagbi basis by the algorithm documented in Section \ref{Impl}.
While the random search suggests reasonable working hypotheses, it cannot prove statements about universality, for which we must exhaust all relevant monomial orders. Even if one tries to use all the symmetries of $G(m,n)$, it is impossible to scan all (reverse) lexicographic orders. Instead we start from the \emph{matching fields} in the terminology of Sturmfels-Zelevinsky \cite{StuZel}: a matching field assigns each minor a monomial in its Laplace expansion. It is called \emph{coherent} if it assigns each minor its initial monomial with respect to a monomial order. To reduce the number of matching fields, we use a result of Sturmfels and Zelevinsky: given a coherent matching field there are exactly $m(m-1)$ indeterminates that do not appear in any monomial (we call them the ``missing'' indeterminates). Sturmfels and Zelevinsky also explain that the location of these missing indeterminates satisfies certain restrictions, for example in each row one finds exactly $(m-1)$ of them. Hence the configuration of the missing indeterminates subdivides the set of coherent matching fields. For $m=3$ and $n\ge 6$, up to row and column permutations, there are exactly $4$ types. See Figure \ref{types} in which the entries $0$ denote the missing indeterminates.
\begin{figure}[hbt]
\begin{align*}
1: \begin{bmatrix}
&0&0&\dots\\
0&&0&\dots\\
0&0&&\dots
\end{bmatrix}&&&
2: \begin{bmatrix}
&0&0&&\dots\\
0&&&0&\dots\\
0&0&&&\dots
\end{bmatrix}\\
3:\begin{bmatrix}
&0&0&&&\dots\\
0&&&0&&\dots\\
0&&&&0&\dots
\end{bmatrix}&&&
4:\begin{bmatrix}
&&&&0&0&\dots\\
&&0&0&&&\dots\\
0&0&&&&&\dots
\end{bmatrix}
\end{align*}
\caption{Types of matching fields for $m=3$}\label{types}
\end{figure}
We call a matching field \emph{lex (revlex) compatible} if there exists a (reverse) lexicographic order that produces the given matching field as the sequence of the initial monomials of ${\mathcal M}$. The following observation saves computation time.
\begin{proposition}
Type 4 is not lex compatible.
\end{proposition}
\begin{proof}
Among the indeterminates one entry is largest with respect to the order, say $X_{uv}$. However, regardless of which it is, there always exists a minor involving $3$ columns such that both monomials in its Laplace expansion that are divisible by $X_{uv}$ are excluded, since they both hit a $0$ in one of the rows different from row $u$.
\end{proof}
The types are scanned individually. Even after these preparations it would take too long to create all matching fields for a given type and then check whether they are lex or revlex compatible, and if so, whether ${\mathcal M}$ is a Sagbi basis for such an order. Usually there are several such orders, but it depends only on the matching field whether ${\mathcal M}$ is a Sagbi basis. As soon as the matching field is fixed, this is only a question of whether the corresponding monomial algebra and $G(m,n)$ have the same Hilbert series.
Therefore we choose an incremental approach that is realized in a C++ program. Let ${\mathcal M}=\{\Delta_1,\dots, \Delta_N\}$, $N=\binom{n}{m}$. The matching fields under consideration are sequences of monomials $\delta_1,\dots,\delta_N$ such that $\delta_i$ appears in the Laplace expansion of $\Delta_i$ and avoids the indeterminates that are excluded by the chosen type. The notion of lex or revlex compatibility extends naturally to initial subsequences of a matching field. For a given initial subsequence $\gamma_1,\dots,\gamma_u$ let $\Gamma(\gamma_1,\dots,\gamma_u)$ be the set of all compatible matching fields that extend $\gamma_1,\dots,\gamma_u$ by $\gamma_{u+1},\dots,\gamma_N$. We want to compute $\Gamma(\emptyset)$ where $\emptyset$ stands for the empty initial subsequence. This is done via the recursive relation
$$
\Gamma(\gamma_1,\dots,\gamma_u) = \bigcup_{\gamma_{u+1}} \Gamma(\gamma_1,\dots,\gamma_u, \gamma_{u+1})
$$
where $\gamma_{u+1}$ satisfies the following conditions:
\begin{enumerate}
\item $\gamma_{u+1}$ appears in the Laplace expansion of $\Delta_{u+1}$,
\item $\gamma_{u+1}$ is not excluded by the given type, and
\item $\gamma_1,\dots,\gamma_u, \gamma_{u+1}$ is compatible.
\end{enumerate}
Less formally, we extend a compatible initial sequence $\gamma_1,\dots,\gamma_u$ in all possible ways. Condition (3) is the crucial test: if $\gamma_1,\dots,\gamma_u$ is not extensible by at least one $\gamma_{u+1}$, the recursion stops and we backtrack to $\gamma_1,\dots,\gamma_{u-1}$ and try the next choice of $\gamma_u$.
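In Python the recursion takes the following schematic form; the function arguments \ttt{laplace\_monomials}, \ttt{allowed\_by\_type} and \ttt{is\_compatible} stand in for the combinatorial tests of our C++ program and do not reflect its actual interface.
\begin{verbatim}
# Schematic backtracking enumeration of compatible matching
# fields; the three function arguments are placeholders for
# the tests implemented in the C++ program.
def matching_fields(minors, laplace_monomials, allowed_by_type,
                    is_compatible, prefix=()):
    u = len(prefix)
    if u == len(minors):
        yield prefix                             # complete matching field
        return
    for gamma in laplace_monomials(minors[u]):   # condition (1)
        if not allowed_by_type(gamma):           # condition (2)
            continue
        ext = prefix + (gamma,)
        if is_compatible(ext):                   # condition (3)
            yield from matching_fields(minors, laplace_monomials,
                                       allowed_by_type, is_compatible,
                                       ext)
        # otherwise backtrack and try the next gamma
\end{verbatim}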
Let us state the results of the two experimental approaches:
\begin{theorem}\label{universal}
Let $m= 3$.
\begin{enumerate}
\item ${\mathcal M}$ is a universal Sagbi basis for $n\le5$, but not for $n\ge 6$.
\item ${\mathcal M}$ is a universally revlex Sagbi basis for $n\le 7$.
\item There exist a lexicographic order for $n=6$ and a reverse lexicographic order for $n=8$ such that ${\mathcal M}$ is not a Sagbi basis for them.
\end{enumerate}
\end{theorem}
Unexpectedly often we have observed that both $K[\operatorname{in}({\mathcal M})]$ and the full initial algebra $\operatorname{in}(K[{\mathcal M}])$ are normal:
\begin{theorem}\label{normal}
Let $m=3$.
\begin{enumerate}
\item For lex orders $K[\operatorname{in}({\mathcal M})]$ is normal for $n\le 9$.
\item For revlex orders $K[\operatorname{in}({\mathcal M})]$ is normal for $n\le 8$.
\item For $n=9$ there exists a revlex order such that ${\mathcal M}$ is a Sagbi basis, but $\operatorname{in}(K[{\mathcal M}])$ is not normal.
\item For $n = 10$ there exists a lex order such that $K[\operatorname{in}({\mathcal M})]$ is not normal.
\end{enumerate}
\end{theorem}
As a concrete example let us give a matrix for Theorem \ref{normal}(3) where $[u\mid v]$ denotes $X_{uv}$:
$$
\begin{pmatrix}
[1\mid4]& [3\mid1]& [1\mid5]& [2\mid5]& [3\mid4]& [2\mid8]& [2\mid9]& [2\mid6]& [2\mid1]\\
[1\mid9]& [1\mid3]& [2\mid2]& [2\mid7]& [1\mid6]& [3\mid5]& [3\mid2]& [1\mid1]& [2\mid3]\\
[1\mid8]& [2\mid4]& [3\mid7]& [3\mid9]& [3\mid8]& [1\mid2]& [3\mid3]& [1\mid7]& [3\mid6]
\end{pmatrix}.
$$
The indeterminates are ordered as in \eqref{diag_order} and the monomial order is the degrevlex extension; as mentioned, we have permuted the entries of the matrix instead of changing the order of the indeterminates.
Even our recursive method for finding compatible matching fields does not reach $n=10$. Already for $n=9$ it needs weeks of computation time, whereas it finishes in hours for $n=8$. So one can say that we were rescued by the random search that produced counterexamples exactly one step beyond the limit of computability.
The experimental evidence that we have collected makes us optimistic for the following
\begin{conjecture}
$G(m,n)$ has a finite universal Sagbi basis.
\end{conjecture}
The conjecture is supported by overwhelming experimental evidence for $G(3,6)$; in fact, we expect that the universal Sagbi basis has $15$ elements of (normalized) degree $2$ in addition to the $3$-minors.
We are confident that we can extend our experiments to checking the conjecture for $m = 3$ and $n\le 8$. It is already clear that $n=9$ is out of reach, not only because of the large number of cases, but also since the algorithm of Section \ref{Impl} must often give up if the degrees of the polynomials in the Sagbi basis exceed $7$, and we have seen cases in which degree $11$ is reached. Even if the combinatorial computations should still be doable, the complexity of the polynomial computations and the available memory set a limit.
\begin{remark}\label{SystRemark}
(a) While ${\mathcal M}$ fails to be a Sagbi basis much earlier and much more often for lex orders than for revlex ones, we have observed that the missing polynomials usually have considerably lower degree in the lex case.
(b) Instead of lex and revlex orders one can experiment with arbitrary orders. At least for $m= 3$ and $n\le 8$, all matching fields found for arbitrary orders are lex or revlex compatible.
(c) The algebra $G(m,n)$ is a retract of the Rees algebra $\operatorname{\cR}(I_m)$ by degree selection, and therefore the equality $K[\operatorname{in}({\mathcal M})] = \operatorname{in}(K[{\mathcal M}])$ is a necessary condition for $\operatorname{\cR}(I_m)=\operatorname{in}(\operatorname{\cR}(I_m))$. Not surprisingly, it is not a sufficient condition, as many counterexamples demonstrate. Note that $\operatorname{\cR}(I_m)=\operatorname{in}(\operatorname{\cR}(I_m))$ is equivalent to $\operatorname{in}(I_m)^k= \operatorname{in}(I_m^k)$ for all $k$. Even if this does not hold in general, it often starts to fail for unexpectedly large $k$.
As a concrete example consider the matrix
$$
\begin{pmatrix}
[2\mid 4]&[1\mid 5]&[3\mid 5]&[3\mid 6]&[1\mid 1]&[1\mid 6]&[2\mid 7]\\
[1\mid 4]&[2\mid 6]&[1\mid 7]&[3\mid 4]&[3\mid 3]&[2\mid 5]&[2\mid 3]\\
[1\mid 2]&[3\mid 7]&[3\mid 2]&[3\mid 1]&[1\mid 3]&[2\mid 1]&[2\mid 2]
\end{pmatrix}
$$
where we use the same notation and monomial order as in the example following Theorem \ref{normal}. Despite $K[\operatorname{in}({\mathcal M})]=\operatorname{in}(K[{\mathcal M}])$ one has $\operatorname{\cR}(I_m)\neq\operatorname{in}(\operatorname{\cR}(I_m))$, as the comparison of Hilbert series shows. Since the elements of ${\mathcal M}$ have constant degree, $\operatorname{\cR}(I_m)$ and $\operatorname{in}(\operatorname{\cR}(I_m))$ have a standard grading. With CoCoA \cite{CoCoA} we have analyzed the binomial relations of $K[\operatorname{in}({\mathcal M})]$. With respect to the normalized bigrading of the Rees algebra there are $245$ quadrics of bidegree $(1,1)$ and $(0,2)$. Those of bidegree $(1,1)$ can be lifted since the minors form a universal Gröbner basis. Those of bidegree $(0,2)$ have standard degree $2$, but the first degree in which the Hilbert series differ is $5$. So they are liftable as well. The obstruction to equality is a relation of bidegree $(1,4)$ and standard degree $5$. This implies that $\operatorname{in}(I_3)^k=\operatorname{in}(I_3^k)$ for $k\le 3$, but $\operatorname{in}(I_3)^4\neq\operatorname{in}(I_3^4)$.
The algorithm (Gen) of Section \ref{Impl} completes the Sagbi basis by exactly one more degree $5$ element in a few seconds and confirms the analysis above: the additional element has bidegree $(1,4)$. Consequences: (i) $\operatorname{in}(I_3)^4$ and $\operatorname{in}(I_3^4)$ differ in degree $13$ and (ii) $\operatorname{in}(I_3^k)=\operatorname{in}(I_3^4) \operatorname{in}(I_3^{k-4})$ for $k\geq 4$.
\end{remark}
\section{An implementation of the Sagbi algorithm}\label{Impl}
Our implementation is based on Singular \cite{Sing} and Normaliz \cite{Nmz}. We have realized three variants of the algorithm that we will explain below. They are organized in the Singular library sagbiNormaliz.lib, which in turn connects to Normaliz for the combinatorial tasks via an extended version of the Singular library normaliz.lib. Both libraries will be published together with Normaliz version 3.10.0 (expected in December 2022).
The interface offered by normaliz.lib writes input files for Normaliz, calls it, and then reads the output files. The Macaulay2 \cite{M2} interface normaliz.M2 \cite{BrKae} is file based as well. The transfer of the Singular implementation to Macaulay2, together with an update of normaliz.M2, is not hard for an experienced Macaulay2 user. The Normaliz team would be very grateful for help! A realization via the C++ class library libnormaliz would of course be preferable and simplify the implementation.
Independently of the Sagbi computations, version 3.10.0 of Normaliz has been augmented by functions for arbitrary positive affine monoids. Such a monoid represents the combinatorial skeleton of a monomial algebra $M$, namely the monoid of the exponent vectors of the monomials of $M$. Whereas the functions are implemented additively on the combinatorial side, we describe their results in the multiplicative language of monomial algebras:
\begin{enumerate}
\item A minimal system of generators, which is uniquely determined (see Bruns and Gubeladze \cite[Chap. 2]{BrGu} for the theory of affine monoids). In fact, it is the set of \emph{irreducible} elements of $M$, i.e., those monomials $x$ that cannot be written as a product of monomials $y,z\neq x$. In addition, one can ask for the \emph{representations} of the reducible elements in a given system of generators as power products of the irreducible ones.
\item A defining binomial ideal of $M$, given by a minimal system of generators. Such an ideal is often called \emph{toric} and the (minimal) system of generators is a (minimal) \emph{Markov basis}; see De Loera, Hemmecke and Köppe \cite{DLHK}. Contrary to (1), this is a time critical task. Normaliz has implemented the project-and-lift algorithm of Hemmecke and Malkin \cite{HemMal}. The Normaliz implementation is not inferior to Hemmecke and Malkin's 4ti2 \cite{4ti2}.
\item The Hilbert series of $M$, which is the ordinary generating function of the enumeration of monomials by degree. It uses the classical computation of Hilbert series via initial (monomial) ideals.
\end{enumerate}
One needs the defining binomial ideal for the t\^ete-a-t\^ete, as is clear from their definition. The irreducible elements must be known for a minimal Sagbi basis and the control of the algorithm. The representations are necessary for subduction. One of our variants uses the Hilbert series, as we will explain below.
As far as a grading is involved, the algorithms assume the standard ${\mathbb N}$-grading of the ambient polynomial ring $R$. The extension to other positive gradings would be possible without much effort, not yet however the extension to more general gradings because the current version of Normaliz does not allow them. For a (monomial) subalgebra $A$ we work with the \emph{normalized degree}. It is obtained as follows: first one computes the greatest common divisor of the degrees of the polynomials in a generating set, and then divides the standard degree by this ``grading denominator''. This is compatible with Normaliz, which uses the normalized degree as well (unless one forbids the grading denominator).
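For instance, if the generators of $A$ have standard degrees $6$, $9$ and $12$, the grading denominator is $3$ and the normalized degrees are $2$, $3$ and $4$.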
Via the generating system ${\mathcal F} = (f_u)_{u\in N}$ of the subalgebra $A$, the polynomial ring $P = K[Y_u: u\in N]$ is graded as well when we set $\deg Y_u = \deg f_u$ for all $u\in N$. Under this grading all t\^ete-a-t\^ete{} are homogeneous.
All three variants proceed in ``rounds'' as described in the algorithm (Sagbi), but two of them modify the basic scheme:
\begin{enumerate}
\item[(Gen)] The \emph{general} variant is exactly (Sagbi). It stops when ${\mathcal F}'$ in step (4) is empty. As this may never happen, the user can set a bound on the number of rounds. The general variant does not require a grading.
\item[(Deg)] After the computation of the t\^ete-a-t\^ete{} for a set ${\mathcal F}_i$, the \emph{degree by degree} variant goes over the homogeneous components of the t\^ete-a-t\^ete{} and stops at degree $d$ as soon as at least one of the subduction remainders is nonzero and therefore at least one new element $f$ of the Sagbi basis has been found.
All subduction remainders then extend the Sagbi basis to degree $d$, and the ``search degree'' can be raised to $d+1$. The stop criterion ${\mathcal F}' = \emptyset$ is the same as for (Gen). It is reached in degree $d$ when no homogeneous component of the t\^ete-a-t\^ete{} has a nonzero subduction remainder in degrees $>d$.
For (Deg) the user must set an upper bound for $d$. If it can be expected that the Sagbi basis is finite, the upper bound should be chosen very large.
\item[(Hilb)] The \emph{Hilbert series controlled variant} refines (Deg). As input it not only needs a system of generators of the subalgebra $A$, but also the Hilbert series of $A$, given as a rational function by its numerator and denominator.
Let $S$ be the Sagbi basis of $A$ up to degree $d$. Then the Hilbert series of $K[\operatorname{in}(S)]$ is computed. If it agrees with the Hilbert series of $A$, $S$ is the complete Sagbi basis. Otherwise the ``critical degree'' is computed, i.e., the lowest degree in which the Hilbert functions of $K[\operatorname{in}(S)]$ and $A$ differ, together with the difference.
This information not only tells us in which degree we must evaluate the t\^ete-a-t\^ete{} for $K[\operatorname{in}(S)]$ to find the next elements of the Sagbi basis, but also their number $m$. Therefore the subduction can stop as soon as $m$ nonzero pairwise different remainders have been reached. They must be irreducible since they are not divisible by monomials of smaller degree and do not divide each other.
In contrast to (Gen) and (Deg), (Hilb) offers a perfect error control, which was extremely helpful during the development of the library.
\end{enumerate}
(Gen) and (Deg) have been implemented in other packages (though not necessarily together) that will be named in Section \ref{comp}, and only some details may vary. But we are not aware of an implementation of (Hilb).
All our variants return the (partial) Sagbi basis computed and an additional integer that takes the value $0$ if the partial Sagbi basis is incomplete, the value $1$ if completeness is unknown, and $2$ if the Sagbi basis is complete.
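The Hilbert series control of (Hilb) can be sketched as follows; \ttt{hilbert\_function} (computed by Normaliz for $K[\operatorname{in}(S)]$) and \ttt{new\_elements\_in\_degree} (the degree-restricted evaluation and subduction of the t\^ete-a-t\^ete) are placeholders and not the actual interface of sagbiNormaliz.lib.
\begin{verbatim}
# Schematic control flow of the variant (Hilb); the two helper
# functions are placeholders for the actual library routines.
def sagbi_hilb(S, hilb_A, hilbert_function, new_elements_in_degree,
               degree_bound):
    # hilb_A[k] = dim_K A_k is known in advance
    while True:
        h = hilbert_function(S)   # Hilbert function of K[in(S)]
        crit = next((k for k in range(degree_bound + 1)
                     if h[k] != hilb_A[k]), None)
        if crit is None:
            return S, 2           # complete up to the degree bound
        missing = hilb_A[crit] - h[crit]
        # evaluate the tete-a-tetes in degree crit and subduct until
        # exactly `missing` pairwise different remainders are found
        S = S + new_elements_in_degree(S, crit, missing)
\end{verbatim}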
\begin{remark}\label{details}
(a) Both the evaluation of the t\^ete-a-t\^ete{} and a subduction step can be realized by the homomorphism $\phi: P \to A$ of Equation \eqref{phi}. Both amount to mapping binomials from $P$ to $A$ via $\phi$. We use the Singular map functionality for it.
(b) For the subduction one has two choices: (i) to subduce the polynomials individually until the remainder has been found, or (ii) to apply one subduction step to all polynomials simultaneously, and iterate this step as long as necessary. In our computations (ii) has proved to be the better choice.
\end{remark}
\begin{remark}\label{choice}
The following comments will be illustrated by computational data in Section~\ref{comp}.
(a) If the given subalgebra $A$ is graded, it is always advisable to use (Deg) or even (Hilb). (Gen) should be reserved for nongraded subalgebras: a stop after a certain number of rounds gives much less information on the Sagbi basis than the stop at a predefined degree. Even in graded cases in which the complete Sagbi basis is computed, (Deg) is usually better.
Subduction remainders of homogeneous polynomials of degree $\le d$ with respect to a family ${\mathcal F}$ of homogeneous polynomials of degree $\le d$ remain untouched if ${\mathcal F}$ is augmented by homogeneous polynomials of degree $>d$. In other words, the partial Sagbi bases computed by (Deg) or (Hilb) are increasing with respect to inclusion.
(b) The choice between (Deg) and (Hilb) is more difficult. (Hilb) knows exactly into which degree to look next and can often stop the subduction process much earlier than (Deg). On the other hand, the Hilbert series of $A$ must be known, and in each round the Hilbert series of a monomial algebra must be computed.
The Hilbert series computation is very fast if the monomial algebra is normal since then the Normaliz algorithm based on triangulation and Stanley decomposition can be used. In the non-normal case it is a byproduct of the t\^ete-a-t\^ete{} computation that produces a Gröbner basis of the binomial ideal before shrinking it to a minimal system of generators, but the Hilbert series computation for the initial ideal of the t\^ete-a-t\^ete{} ideal does not come for free.
Another parameter is the unpredictable complexity of subduction, which in its turn depends crucially on the sparsity of the polynomials involved in it. So the decision between (Deg) and (Hilb) is a matter of the Hilbert series computation versus the subduction.
As for (Deg), the user must set a degree bound for (Hilb), but can ask for a final check when it is reached.
(c) In some cases the time of the t\^ete-a-t\^ete{} computation depends significantly on the order of the polynomials entering it (see Remark \ref{OnBench_1}(e)). Our explanation is that the Gröbner basis computation on which it is based depends on a monomial order and is \emph{not} invariant under the permutation of coordinates. However, we have no suggestion how to choose an order for which the t\^ete-a-t\^ete{} computation is especially fast. As a step in this direction, the user can ask for a sorting of the polynomials at the beginning of each round. In the implementation it is fixed to be ascending in the degrevlex monomial order on the ambient polynomial ring $R$. Note that both (Deg) and (Hilb) generate the elements of the Sagbi basis in ascending degree, but not necessarily in any more refined order.
\end{remark}
\section{Computational data}\label{comp}
In the following we give computation times for several examples and compare them to packages that are included in the computer algebra systems CoCoA5, Macaulay2 and Singular:
\begin{enumerate}
\item The computations with CoCoA 5 use a variation of the script developed by Anna Bigatti for \cite{BR} which is available at \cite{BB} and will be part of the standard distribution of CoCoA from version 5.4.2.
\item The Macaulay2 distribution contains the package SubalgebraBases.m2 by Burr et al., version 1.1. It realizes the variant (Deg) and allows a degree bound.
\item The Singular distribution contains the library sagbi.lib by Hackfeld, Pfister and Levandovskyy, version 4.1.1.0. It offers only the variant (Gen) with an optional bound on the number of rounds.
\end{enumerate}
As far as we could complete the computations, all packages give the same results.
We now list our test examples. With the exception of (\ttt{HK}$_2$) and (\ttt{2x2$_2$}), the base field has characteristic $0$.
\begin{itemize}
\item[(\ttt{HK$_0$})] the subalgebra of $K[X,Y,Z]$ generated by the polynomials $X^6$, $X^5Y$, $Y^5Z$, $XZ^5$, $Y^6+ Y^3Z^3$. It is taken from Han and Kwak \cite{HanKwa} where it serves as a simple counterexample to the Eisenbud--Goto conjecture. Order is degrevlex.
\item[(\ttt{HK$_2$})] The same, but over a field of characteristic $2$.
\item[(\ttt{Pow$_l$})] The subalgebra of $K[X,Y,Z]$ generated by the polynomials $X^6+Y^6+Z^6$, $X^7+Y^7+Z^7$, $X^8+Y^8+Z^8$. The monomial order is lex.
\item[(\ttt{Pow$_r$})] The same as (\ttt{Pow$_l$}), but order degrevlex.
\item[(\ttt{2x2$_0$})] The subalgebra of $K[X_{ij}: i,j = 1,\dots,4]$ generated by the $2$-minors. The monomial order is diagonal.
\item[(\ttt{2x2$_2$})] The same, but over a field of characteristic $2$.
\item[(\ttt{3x6})] $G(3,6)$, a nondiagonal lex order.
\item[(\ttt{3x7})] $G(3,7)$, a nondiagonal lex order.
\item[(\ttt{3x8})] $G(3,8)$, a nondiagonal lex order.
\item[(\ttt{3x9$_l$})] $G(3,9)$, a nondiagonal lex order.
\item[(\ttt{3x9$_r$})] $G(3,9)$, a nondiagonal degrevlex order.
\end{itemize}
Table \ref{Bench_1} contains data of the examples that are relevant for the Sagbi computation.
\begin{table}[hbt]
\begin{tabular}{crrrrrrrr}
\midrule[1.2pt]
\vphantom{\Large(} & \multicolumn{3}{c}{norm deg}& & \multicolumn{4}{c}{times}\\
\cline{2-4}\cline{6-9}
\vphantom{\Large(} example& bound& Sagbi & (Deg) & \#Sagbi & (Deg) & (Hilb) & CoCoA5 & M2 \\
\midrule[1.2pt]
\vphantom{\Large(} (\ttt{HK$_0$}) & 16 & ---& ---& 80 &1:13.68 & 1:18.77 & 3:29.00 & 357:10.3\\
\hline
\vphantom{\Large(} (\ttt{HK$_2$})& 16 & ---& ---& 16 &0:00.85 & 0:00.86 & 0:00.54 & 0:04.84 \\
\hline
\vphantom{\Large(} (\ttt{Pow$_l$}) & 200& ---& ---& 28 &5:57.17 & 1:14.51 & 21:27.51 & ---\\
\hline
\vphantom{\Large(} (\ttt{Pow$_r$}) & 200& --- &---& 46 &5:31.09 & 3:26.11 & 78:58.44 & E\\
\hline
\vphantom{\Large(} (\ttt{2x2$_0$}) & 10& 3 & 7 & 89 &0:39.27 & 0:11.61 & 56:17.28 & ---\\
\hline
\vphantom{\Large(} (\ttt{2x2$_2$}) & 15& 6 & 13 &130& 5:15.64 & O & 931:31.00 & ---\\
\hline
\vphantom{\Large(} (\ttt{3x6}) & 10& 2 & 4 & 21 &0:00.41 & 0:00.38 & 0:00.40 & 0:01.46\\
\hline
\vphantom{\Large(} (\ttt{3x7}) & 10& 2 & 4 & 37 &0:01.37 & 0:00.76 & 0:08.95 & 2:44.32\\
\hline
\vphantom{\Large(} (\ttt{3x8}) & 10& 3 & 6 & 67 &0:08.20 & 0:03.21 & 0:44.84 & H\\
\hline
\vphantom{\Large(} (\ttt{3x9$_l$}) & 10& 3 & 6 &101 &0:48.26 & 0:26.20 & 14:36.46 & ---\\
\hline
\vphantom{\Large(} (\ttt{3x9$_r$}) & 10& 7 & 8 & 90 &\ M & \ \ 0:54.52 & --- & ---\\
\hline
\end{tabular}
\vspace*{1ex}
\caption{sagbiNormaliz.lib vs. CoCoA5 and M2}
\label{Bench_1}
\end{table}
In the table the first column after the name of the example is the degree bound for all four computations, (Deg), (Hilb), CoCoA5 and M2. The next column gives the maximum degree of the Sagbi basis, provided we could compute a complete Sagbi basis. The third column lists to what degree the computation had to be run for (Deg) to finish in these cases. Then we find the cardinality of the (partial) Sagbi basis, followed by the computation times. Some computations failed because of error conditions:
\begin{itemize}
\item[E] After several hours M2 stopped with error code 137. Therefore we did not try (\ttt{Pow$_l$}).
\item[H] Overflow of heap management in Macaulay2 for (\ttt{3x8}) after several hours, so we did not try the larger formats.
\item[M] Lack of memory (max 32 GB), see Remark \ref{OnBench_1}(d).
\item[O] An intermediate Hilbert series could not be transferred from Normaliz to Singular because of Singular's bound of 32 bit for the type \ttt{int}.
\end{itemize}
The times are given in the format min:sec with two decimals for the seconds. The times have been taken on a Dell xps17 laptop with an Intel i7-11800H at 2.3 GHz and, for CoCoA5, on a MacBook Pro with an Intel Quad-Core i7 at 2.3 GHz. For comparison the MacBook times have been multiplied by the factor 0.73 measured by running (\ttt{3x7}) with Macaulay2 on both machines.
\begin{remark}\label{OnBench_1}
We add comments on specific examples.
(a) The bulk of the computation time for (\ttt{HK$_0$}) goes into the t\^ete-a-t\^ete{} computation which reaches rather high degrees. In contrast, the polynomials are very sparse so that the subductions are fast. This explains why (Deg) is faster than (Hilb). We have added the characteristic $2$ case since it shows that the Sagbi basis depends significantly on the base field. In this case it shrinks from characteristic $0$ to characteristic $2$ (and $3$), but the opposite can happen as well.
We expect that (\ttt{HK$_0$}) and (\ttt{HK$_2$}) do not have finite Sagbi bases.
(b) In contrast to (\ttt{HK$_0$}), (\ttt{Pow$_r$}) uses its computation time mainly for the subduction by rather dense polynomials. This is especially visible in (\ttt{Pow$_l$}). We expect that the Sagbi bases are infinite.
(c) We have mentioned the $2\times 2$ minors of a $4\times 4$ matrix of indeterminates already in Remark \ref{goodies}(a). It is remarkable that the defining ideal was not computable for us about 10 years ago, while the Sagbi basis now takes only 40 sec with (Deg) and only 12 sec with (Hilb).
In characteristic $2$ the algebraic structure is very different from that in characteristic $0$, and this is also visible in the Sagbi basis that becomes considerably larger. Since (Deg) went through we could compute the Hilbert series. But then (Hilb) failed since an intermediate Hilbert series could not be transferred because of an overflow in Singular.
(d) The evaluation of the t\^ete-a-t\^ete{} on the partial Sagbi basis computed can of course fail for lack of memory, as we see in (Deg) for (\ttt{3x9$_r$}). For (Deg) it is impossible to know that the Sagbi basis is already complete (as is clear from (Hilb)), and degree $8$ power products of $3$-minors can already be very long polynomials. Even if the memory of a larger machine could suffice, computation time can set a limit at this point.
(e) (\ttt{HK$_0$}) has been run with sorting, all the others without. It is the only case in which we have seen a significant difference in running time. Without sorting the computation times grow by about 1 min, for (Deg) as well as for (Hilb), and the terminal output reveals that the time difference stems from the t\^ete-a-t\^ete{} computation.
(f) The computation of the Hilbert series of $G(3,9)$ via a Sagbi basis with respect to a diagonal monomial order (not in Table \ref{Bench_1}) takes only 13 sec.
\end{remark}
We add some computations with (Gen) in order to compare with the Singular library sagbi.lib, for which the number of rounds of (Sagbi) can be limited. We have confined ourselves to the algebras defined by minors since the degrees of the t\^ete-a-t\^ete{} for (\ttt{HK}) and (\ttt{Pow}) become extremely large after one or two rounds, and (Gen) must evaluate them fully. We have added (\ttt{3x9$_d$}), $G(3,9)$ with a diagonal monomial order.
\begin{table}[hbt]
\begin{tabular}{crrrrr}
\midrule[1.2pt]
\vphantom{\Large(} & \multicolumn{2}{c}{rounds} & \\
\cline{2-3}
\vphantom{\Large(} example & bound& Sagbi & \#Sagbi & (Gen) & sagbi.lib \\
\midrule[1.2pt]
\vphantom{\Large(} (\ttt{2x2$_0$}) & 10& 3 & 89 &1:18.89 & T\\
\hline
\vphantom{\Large(} (\ttt{3x6}) & 10& 2 & 21 &0:00.53 &0:00.22\\
\hline
\vphantom{\Large(} (\ttt{3x7}) & 10& 2 & 37 &0:01.75 & F \\
\hline
\vphantom{\Large(} (\ttt{3x8}) & 10& 3 & 65 &0:16.31 & ---\\
\hline
\vphantom{\Large(} (\ttt{3x9$_l$}) & 10& 3 &101 &1:56.62 & ---\\
\hline
\vphantom{\Large(} (\ttt{3x9$_d$}) & 10& 1 &84 &0:12.20 & T\\
\hline
\end{tabular}
\vspace*{1ex}
\caption{sagbiNormaliz.lib vs. sagbi.lib}
\label{Bench_2}
\end{table}
In Table \ref{Bench_2} T indicates that the computation was stopped after $1$ hour without output, and F indicates a failure because of a segmentation fault in Singular.
\begin{remark}\label{OnBench_2}
As documented in Table \ref{Bench_2}, sagbi.lib computes (\ttt{3x6}), but it failed for the others. In view of the failure for (\ttt{3x7}) after $> 40$ min we did not try (\ttt{3x8}) or (\ttt{3x9$_l$}). It is certainly surprising that sagbi.lib cannot recognize that the $84$ minors already form a Sagbi basis for (\ttt{3x9$_d$}).
\end{remark}
\section{Introduction}
Collinear singularities in the scattering amplitudes of gauge theory have been suggested to be encoded in a two-dimensional CFT by the celestial holography program (see \cite{LECTURES_ON_THE_INFRARED_STRUCTURE_OF_GRAVITY_AND_GAUGE_THEORY, LECTURES_ON_CELESTIAL_HOLOGRAPHY, LECTURES_ON_CELESTIAL_AMPLITUDES} and references therein). In the case of self-dual gauge theory with only states of positive helicity, this was shown to be true at tree level \cite{HOLOGRAPHIC_SYMMETRY_ALGEBRAS_FOR_GAUGE_THEORY_AND_GRAVITY} and later at one-loop level \cite{PERTURBATIVELY_EXACT_ASYMPTOTIC_SYMMETRY_OF_QUANTUM_SELF_DUAL_GRAVITY}. In \cite{ON_THE_ASSOCIATIVITY_OF_ONE_LOOP_CORRECTIONS_TO_THE_CELESTIAL_OPE}, it was shown that the one-loop collinear singularities of the self-dual limit of pure gauge theory with states of both helicities did not lead to a consistent chiral algebra, as associativity did not persist past tree-level. This was a consequence of the twistor theory uplift of self-dual Yang-Mills (SDYM) suffering from a gauge anomaly associated to a box diagram. For certain gauge groups, this can be remedied by a Green-Schwarz mechanism with the introduction of a quartic axion field \cite{QUANTIZING_HOLOMORPHIC_FIELD_THEORIES_ON_TWISTOR_SPACE}. The one-loop deformation in the axion-coupled theory satisfies associativity and thus preserves the structure of the universal collinear singularities as a chiral algebra at one-loop level. We refer to this as the (extended) celestial chiral algebra. \\ \\
The 1-loop corrections to the chiral algebra were calculated in \cite{ON_THE_ASSOCIATIVITY_OF_ONE_LOOP_CORRECTIONS_TO_THE_CELESTIAL_OPE} using known 4d collinear splitting amplitudes and the requirement of associativity. In this paper, we will use Koszul duality (see \cite{KOSZUL_DUALITY_IN_QFT} for a review) to obtain these 1-loop corrections. Koszul duality is a mathematical notion which in essence takes an algebra $A$ satisfying certain conditions and produces a new Koszul dual algebra $A^{!}$ such that $(A^{!})^{!} = A$. In \cite{TWISTED_SUPERGRAVITY_AND_KOSZUL_DUALITY}, it was shown that we can extend the framework of Koszul duality to act on chiral algebras. For recent mathematical developments, see \cite{QUADRATIC_DUALITY_FOR_CHIRAL_ALGEBRAS} and references therein. In this case, given a chiral algebra, Koszul duality produces a new Koszul dual chiral algebra. \\ \\
Suppose we have some twist of a supersymmetric quantum field theory (see \cite{TASI_LECTURES_ON_THE_MATHEMATICS_OF_STRING_DUALITIES} for a review) on $\mathbb{C} \times \mathbb{C}^n$ which is BRST-invariant, holomorphic along $\mathbb{C}$ and, in general, a combination of holomorphic and topological in the other directions. Let $A$ be the differential-graded (DG) chiral algebra of (not necessarily gauge-invariant) local operators restricted to $\mathbb{C}$, with the BRST operator as its differential and a known OPE. The Koszul dual $A^{!}$ is then defined as the universal chiral algebra that can be coupled to our theory as the algebra of operators of a holomorphic defect wrapping $\mathbb{C}$. We demand that this coupling be BRST invariant, which results in constraints on the OPEs between the operators of $A^{!}$. This procedure can be modeled by Feynman diagrams. This interpretation allows us to work in perturbation theory, ensuring BRST invariance order-by-order.
\\ \\
The structure of this paper is as follows:
\begin{itemize}
\item \textbf{Section 2}. We review 6d holomorphic BF theory in the BV formalism and its BRST variations.
\item \textbf{Section 3 and 4}. Following \cite{TWISTED_SUPERGRAVITY_AND_KOSZUL_DUALITY}, we briefly review the celestial chiral algebra and its tree-level OPEs from the point of view of Koszul duality.\footnote{Note that at tree-level, we can neglect the fact that 6d holomorphic BF theory fails to be BRST-invariant at 1-loop.}
\item \textbf{Section 5 and 6}. We review the inclusion of the quartic axion field to the twistorial theory, its BV action functional and BRST variations following \cite{CELESTIAL_HOLOGRAPHY_MEETS_TWISTED_HOLOGRAPHY_4D_AMPLITUDES_FROM_CHIRAL_CORRELATORS}. We also discuss the introduction of the corresponding two towers of generators to the (extended) celestial chiral algebra. From the fields of this theory, one can construct ``bulk'' local operators, the restriction of which to the twistor $\mathbb{CP}^1$ will give us $A$.
\item \textbf{Section 7}. We calculate the 1-loop corrections to the (extended) celestial chiral algebra using Koszul duality. Note that a similar computation in the context of self-dual gravity coupled to a $4^{\text{th}}$-order gravitational axion was also recently performed in \cite{ON_THE_ASSOCIATIVITY_OF_1_LOOP_CORRECTIONS_TO_THE_CELESTIAL_OPERATOR_PRODUCT_IN_GRAVITY}. We relegate details of some holomorphic integrals to the Appendices.
\end{itemize}
\section{Holomorphic BF Theory}
The holomorphic BF-type action on twistor space is given by\footnote{We use a different normalization convention to that of \cite{ON_THE_ASSOCIATIVITY_OF_ONE_LOOP_CORRECTIONS_TO_THE_CELESTIAL_OPE}. In particular, we normalize our integrals by a factor of $\frac{1}{2 \pi i}$. \label{footnote_1}}
\begin{equation}
S[\mathcal{A}, \mathcal{B}] = \bigg(\frac{1}{2 \pi i}\bigg) \underset{\mathbb{PT}}{\int} \text{Tr}(\mathcal{B} F^{0,2}(\mathcal{A})) = \bigg(\frac{1}{2 \pi i}\bigg)\underset{\mathbb{PT}}{\int} \text{Tr}(\mathcal{B} \overline{\partial} \mathcal{A}+\frac{1}{2} \mathcal{B} [\mathcal{A}, \mathcal{A}])
\end{equation}
where the field content of the theory is $\mathcal{A} \in \Omega^{0,1}(\mathbb{PT}, \mathfrak{g})$ and $\mathcal{B} \in \Omega^{3,1}(\mathbb{PT}, \mathfrak{g})$ for $\mathfrak{g}$ a complex semi-simple Lie algebra. The fields $\mathcal{A}$ and $\mathcal{B}$ are subject to two gauge variations with generators $\chi \in \Omega^{0,0}(\mathbb{PT}, \mathfrak{g})$ and $\nu \in \Omega^{3,0}(\mathbb{PT}, \mathfrak{g})$
\begin{equation}
\delta \mathcal{A} = \overline{\partial} \chi + [\mathcal{A}, \chi] \quad \quad \delta \mathcal{B} = \overline{\partial} \nu + [\mathcal{A}, \nu] + [\mathcal{B}, \chi].
\end{equation}
In the BV formalism, we extend $\mathcal{A}$ to a field in $\Omega^{0,*}(\mathbb{PT}, \mathfrak{g})[1]$ and $\mathcal{B}$ to one in $\Omega^{3,*}(\mathbb{PT}, \mathfrak{g})[1]$, where $[1]$ denotes a shift in ghost number so that fields in Dolbeault degree $j$ are in ghost number $1-j$. In particular, the $(*,1)$ components of these polyform fields correspond to the physical fields. Explicitly, the resulting polyform fields are written as
\begin{equation}
\tilde{\mathcal{A}} = \chi + \mathcal{A} + \mathcal{B}^{\vee} + \nu^{\vee} \quad \quad \Tilde{\mathcal{B}} = \nu + \mathcal{B} + \mathcal{A}^{\vee}+\chi^{\vee}
\end{equation}
where $( \cdot )^{\vee}$ denotes the antifield of $( \cdot )$. The resulting BV action is
\begin{equation}
S[\tilde{\mathcal{A}},\tilde{\mathcal{B}}] = \bigg(\frac{1}{2 \pi i}\bigg)\underset{\mathbb{PT}}{\int} \text{Tr}(\tilde{\mathcal{B}} \overline{\partial} \tilde{\mathcal{A}}+\frac{1}{2} \tilde{\mathcal{B}} [\tilde{\mathcal{A}}, \tilde{\mathcal{A}}]).
\end{equation}
Writing the action out in terms of the components of the polyform fields, the holomorphic BF action obtains the additional terms:
\begin{equation}
\bigg(\frac{1}{2 \pi i}\bigg)\underset{\mathbb{PT}}{\int} \text{Tr} \bigg(
\mathcal{A}^{\vee}(\overline{\partial} \chi + [\mathcal{A}, \chi]) +\mathcal{B}^{\vee} (\overline{\partial} \nu + [\mathcal{A}, \nu] + [\mathcal{B}, \chi]) +\frac{1}{2} \chi^{\vee} [ \chi, \chi]+ \nu^{\vee} [\chi, \nu] \bigg).
\end{equation}
The BRST transformations of the fields are encoded in the equations of motion of their antifields. They can easily be read off from this form of the action: the variation $\delta_{( \cdot )^{\vee}}$ yields the BRST transformation of $( \cdot )$:
\begin{equation}
\delta \mathcal{A} = \overline{\partial} \chi + [\mathcal{A}, \chi] \quad \quad \delta \mathcal{B} = \overline{\partial} \nu + [\mathcal{A}, \nu] + [\mathcal{B},\chi] \quad \quad
\delta \chi = \frac{1}{2} [\chi, \chi] \quad \quad \delta \nu = [\chi, \nu].
\end{equation}
\\ At the classical level, the holomorphic BF action reduces to 4d SDYM on $\mathbb{R}^4$:
\begin{equation}
\underset{\mathbb{R}^4}{\int} \text{Tr}(BF(A)_{-})
\end{equation}
where the field content is $A \in \Omega^{1}(\mathbb{R}^4,\mathfrak{g})$ and $B \in \Omega_{-}^{2}(\mathbb{R}^4,\mathfrak{g})$ \cite{ON_SELF_DUAL_GAUGE_FIELDS}. The reduction from twistor theory incorporates states of both helicities \cite{TWISTOR_ACTIONS_FOR_NON_SELF_DUAL_FIELDS}.
\section{Chiral Algebra}
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[scale=0.3]{figure_1a.png}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[scale=0.3]{figure_1b.png}
\end{subfigure}
\caption{}
\label{figure_1}
\end{figure}
Consider a defect along a holomorphic plane $\mathbb{C} \subset \mathbb{C}^3$ and choose coordinates $z$, $v^1$, $v^2$ so that the plane is located at $v^i = 0$. Since anomalies are local, working with this local model for $\mathbb{CP}^1\subset \mathbb{PT}$ is sufficient. The most general way that this defect theory couples to holomorphic BF theory is
\begin{equation}
\sum_{m,n \geq 0} \bigg( \frac{1}{2 \pi i} \bigg) \frac{1}{m! n!} \underset{\mathbb{C}}{\int} d^2 z \bigg(J_a[m,n] \partial_{v^1}^m \partial_{v^2}^n \mathcal{A}_{\overline{z}}^a + \tilde{J}_a[m,n] \partial_{v^1}^m \partial_{v^2}^n \mathcal{B}_{\overline{z}}^a
\bigg)
\end{equation}
in terms of some general defect operators $J$ and $\tilde{J}$. We will assume the defect theory whose algebra of operators $B$ includes $J$ and $\tilde{J}$ is not itself a 2d gauge theory, i.e.\ $B$ is an ordinary algebra, not a DGA. Diagrammatically, this coupling takes the form of Figure 1. The $J$ and $\tilde{J}$ towers of states form the celestial chiral algebra for SDYM with states of both helicities. Their scaling dimensions are $-(m+n)$ and $-2-(m+n)$, and their spins are $1-\frac{m+n}{2}$ and $-1-\frac{m+n}{2}$, respectively.\footnote{Dimension here corresponds to the charge of the operator under scaling of $\mathbb{R}^4$. Spin refers to holomorphic 2d conformal weight.}
\\ \\
We demand that this coupling be gauge invariant. This requires all anomalous (i.e., BRST-non-invariant) Feynman diagram contributions to cancel order-by-order in perturbation theory. This will lead to constraints for the OPEs between the defect operators. For more details, see section 6 of \cite{TWISTED_SUPERGRAVITY_AND_KOSZUL_DUALITY} and the appendix of \cite{ASPECTS_OF_OMEGA_DEFORMED_M_THEORY}.
\section{Tree-Level OPEs}
The tree-level Feynman diagrams for the bulk-defect interactions are illustrated in Figure 2. The contribution of the left-most diagram corresponds to the gauge variation of the $J_a \mathcal{A}^a_{\overline{z}}$ coupling
\begin{equation}
\sum_{l,k \geq 0} \bigg( \frac{1}{2 \pi i} \bigg) \frac{1}{l! k!}\underset{\mathbb{C}}{\int} d^2 z f^a_{bc} J_a[l,k] \partial_{v^1}^l \partial_{v^2}^k (\chi^b \mathcal{A}_{\overline{z}}^c),
\end{equation}
where we've integrated by parts and used $\partial_{\overline{z}}J_a=0$ to get rid of the linear piece of the gauge variation, $\overline{\partial}\chi$. The contribution of the right-most diagram corresponds to the gauge variation of the $\tilde{J}_a \mathcal{B}^a_{\overline{z}}$ coupling
\begin{equation}
\sum_{l,k \geq 0} \bigg( \frac{1}{2 \pi i} \bigg) \frac{1}{l! k!}\underset{\mathbb{C}}{\int} d^2 z f^a_{bc} \tilde{J}_a[l,k] \partial_{v^1}^l \partial_{v^2}^k (\nu^b \mathcal{A}^c_{\overline{z}} + \chi^b \mathcal{B}^c_{\overline{z}}).
\end{equation}
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[scale=0.3]{figure_2a.png}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[scale=0.3]{figure_2b.png}
\end{subfigure}
\caption{}
\end{figure}
Gauge invariance demands that these anomalous contributions be cancelled by the gauge variation of the diagrams in Figure 3. These contributions correspond to the linearized gauge variation of an integral involving two copies of the operators $J_a$, and an integral with one copy of $J_a$ and one of $\tilde{J}_a$, located at points $z$ and $w$ separated by a distance $|z-w| \geq \epsilon$, where $\epsilon$ is a small point-splitting regulator.\\ \\
The contribution from the left-most diagram in Figure 3 simplifies to
\begin{equation}
- \sum_{r,s,m,n \geq 0} \bigg( \frac{1}{2 \pi i} \bigg)^2 \frac{1}{r! s! m! n!}\underset{\mathbb{C}^2}{\int} d^2z d^2w (J_c [r,s](z) J_b [m,n](w)) \partial^r_{v^1} \partial^s_{v^2} \partial_{\overline{z}} \chi^c(z) \partial^m_{v^1} \partial^n_{v^2} \mathcal{A}^b_{\overline{z}}(w).
\end{equation}
Integrating by parts comes at the expense of introducing boundary terms where $|z-w| = \epsilon$. Taking the external legs to be test functions of the form $\chi = (v^1)^r (v^2)^s \mathbf{t}_c$ and $\mathcal{A}_{\overline{z}} = (v^1)^m (v^2)^n \mathbf{t}_b$, the requirement of cancellation becomes
\begin{equation}
\underset{\mathbb{C}}{\int} d^2 w f^a_{bc} J_a[r+m,s+n](w) = \underset{\mathbb{C}}{\int} d^2 w \underset{z \to w}{\text{Res}}(J_c[r,s](z)J_b[m,n](w)).
\end{equation}
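The mechanism that extracts the residue is the elementary contour identity
$$
\frac{1}{2\pi i}\oint_{|z-w|=\epsilon} dz\, \frac{g(z)}{z-w} \ \longrightarrow\ g(w) \qquad (\epsilon \to 0):
$$
in the boundary terms at $|z-w|=\epsilon$ only the simple pole of the operator product survives the limit, which is why the constraint is expressed through $\operatorname{Res}_{z\to w}$.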
Similarly, we find that cancellation of the second contribution leads to the following equality:
\begin{equation}
\underset{\mathbb{C}}{\int} d^2 w f^a_{bc} \tilde{J}_a[r+m,s+n](w) = \underset{\mathbb{C}}{\int} d^2 w \underset{z \to w}{\text{Res}}(\tilde{J}_c[r,s](z)J_b[m,n](w)).
\end{equation}
This means that, at tree-level, gauge invariance of the coupling to the defect constrains the OPEs between the defect operators to be
\begin{equation}
\begin{aligned}
J_c[r,s](z)J_b[m,n](w) \sim \frac{1}{z-w} f^a_{bc} J_a[r+m,s+n](w) \\
\tilde{J}_c[r,s](z)J_b[m,n](w) \sim \frac{1}{z-w} f^a_{bc} \tilde{J}_a[r+m,s+n](w).
\end{aligned}
\end{equation}
This is the level-$0$ Kac-Moody algebra for $\mathfrak{g}[v^1,v^2]$, discovered in the context of the celestial chiral algebra for gauge theory \cite{HOLOGRAPHIC_SYMMETRY_ALGEBRAS_FOR_GAUGE_THEORY_AND_GRAVITY}.
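Schematically, expanding in modes $J_b[m,n](z)=\sum_k J_b[m,n]_k\, z^{-k-1}$, the first OPE is equivalent to the commutation relations
$$
[\,J_c[r,s]_k,\ J_b[m,n]_l\,] = f^a_{bc}\, J_a[r+m,s+n]_{k+l},
$$
i.e., the loop algebra of $\mathfrak{g}[v^1,v^2]$ with vanishing central term.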
\section{Axion Field}
6d holomorphic BF theory suffers from an anomaly associated to the box diagram shown in Figure 4. This means that the 6d theory (and our chiral algebra) is not consistent at the quantum level. In
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[scale=0.3]{figure_3a.png}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[scale=0.3]{figure_3b.png}
\end{subfigure}
\caption{}
\end{figure}
\cite{QUANTIZING_HOLOMORPHIC_FIELD_THEORIES_ON_TWISTOR_SPACE}, it was demonstrated that this anomaly can be canceled by a Green-Schwarz mechanism under the condition that the gauge group is one of $\mathfrak{sl}_2$, $\mathfrak{sl}_3$, $\mathfrak{so}_8$ or one of the exceptional algebras.\\ \\
If the gauge group is one of the above, we extend our twistor action to include\footnote{The action becomes $S[\mathcal{A},\mathcal{B},\eta] = S[\mathcal{A},\mathcal{B}]+iS[\eta,\mathcal{A}]$}
\begin{equation}
S[\eta,\mathcal{A}] = \frac{1}{2} \bigg( \frac{1}{2 \pi i}\bigg) \underset{\mathbb{PT}}{\int}\bigg( \partial^{-1} \eta \overline{\partial}\eta + \frac{\lambda_{\mathfrak{g}}}{(2 \pi i) \sqrt{6}} \eta \text{Tr}(\mathcal{A}\partial \mathcal{A}) \bigg)
\end{equation}
where we've enlarged our field content by introducing a new field $\eta \in \Omega^{2,1}(\mathbb{PT})$ constrained to satisfy $\partial \eta = 0$, and $\lambda_{\mathfrak{g}}$ is a constant that depends on the gauge group and is determined by the requirement that the gauge variation from Figure 5 (with the dashed line representing the $\eta$ propagator) cancels that of Figure 4.\footnote{The condition defining $\lambda_{\mathfrak{g}}$ is $\text{Tr}_{\text{adj}}(X^4) = \lambda_{\mathfrak{g}}^2 \text{Tr}_{\text{fun}}(X^2)$.} This field is subject to the following gauge variation with generator $\gamma \in \Omega^{2,0}$
\begin{equation}
\delta \eta = \overline{\partial} \gamma.
\end{equation}
Extending $\eta$ to a field in $\Omega^{2,*}(\mathbb{PT})[1]$, we find additional terms in the gauge variations of $\eta$ and $\mathcal{B}$
\begin{equation}
\delta \eta = ...-\frac{\lambda_{\mathfrak{g}}}{(2 \pi i) \sqrt{6}} \text{Tr}(\partial \chi \partial \mathcal{A}) \quad \quad \delta \mathcal{B} = ...-\frac{i \lambda_{\mathfrak{g}}}{(2 \pi i) \sqrt{6}} (\gamma \partial \mathcal{A}+\eta \partial \chi).
\end{equation}
\\
In \cite{QUANTIZING_HOLOMORPHIC_FIELD_THEORIES_ON_TWISTOR_SPACE}, it was shown that this extension to the holomorphic BF theory is realized in 4d spacetime by a new axion-like field $\rho$\footnote{Here we are integrating over the $\mathbb{C}\mathbb{P}^{1}$ corresponding to $x \in \mathbb{R}^4$.}
\begin{equation}
\rho(x) = \frac{1}{2 \pi i} \underset{\mathbb{CP}^{1}_{x}}{\int} \partial^{-1} \eta,
\end{equation}
coupled to SDYM via
\begin{equation}
\underset{\mathbb{R}^4}{\int} \bigg( \frac{1}{2}(\Delta\rho)^2 + \frac{\lambda_{\mathfrak{g}}}{(2 \pi i \sqrt{3})} \rho \text{Tr}(F(A) \wedge F(A)) \bigg).
\end{equation}
\begin{figure}[t]
\centering
\begin{minipage}{0.3\textwidth}
\centering
\includegraphics[scale=0.35]{figure_4.png}
\caption{}
\end{minipage}
\begin{minipage}{0.3\textwidth}
\centering
\includegraphics[scale=0.35]{figure_5.png}
\caption{}
\end{minipage}
\end{figure}
\section{Chiral Algebra Including The Axion}
The introduction of the axion field enlarges our chiral algebra by adding two extra towers $E[r,s]$ and $F[r,s]$. These towers come from the defect coupling to the axion. Instead of working with $\eta$ directly, we choose to work with a (1,1)-form $\alpha$ satisfying $\partial \alpha = \eta$ to easily implement the constraint on $\eta$. This field is then subject to two gauge variations generated by $\omega \in \Omega^{0,1}(\mathbb{PT})$ and $\theta \in \Omega^{1,0}(\mathbb{PT})$:
\begin{equation}
\delta \alpha = \partial \omega + \overline{\partial} \theta.
\end{equation}
The most general way that the defect theory couples to the axion is
\begin{equation}
\sum_{m,n \geq 0} \bigg( \frac{1}{2 \pi i} \bigg) \frac{1}{m! n!} \underset{\mathbb{C}}{\int} d^2 z \bigg(e_i[m,n] \partial_{v^1}^m \partial_{v^2}^n \alpha_i + e_z[m,n] \partial_{v^1}^m \partial_{v^2}^n \alpha_z
\bigg).
\end{equation}
Gauge invariance of the coupling under $\alpha \to \alpha + \partial \omega$ leads to a constraint involving the $e_i$ and $e_z$ operators, which tells us that the operators are not independent. This is the reason why we only get two additional towers, as opposed to three. The towers $E[r,s]$ and $F[r,s]$ are then linear combinations of the $e_i$ and $e_z$ operators
\begin{equation}
E[r,s]=\frac{-1}{r+s} e_z[r,s] \quad \quad F[r,s]=\frac{1}{r+s+2}(e_2[r+1,s]-e_1[r,s+1]).
\end{equation}
Using Koszul duality considerations similar to section 4, we can derive the tree-level OPEs of the enlarged chiral algebra. For more details, see section 7.2 of \cite{CELESTIAL_HOLOGRAPHY_MEETS_TWISTED_HOLOGRAPHY_4D_AMPLITUDES_FROM_CHIRAL_CORRELATORS} and section 10 of \cite{TWISTED_HOLOGRAPHY_AND_CELESTIAL_HOLOGRAPHY_FROM_BOUNDARY_CHIRAL_ALGEBRA}.
\section{One-Loop Corrections}
In \cite{ON_THE_ASSOCIATIVITY_OF_ONE_LOOP_CORRECTIONS_TO_THE_CELESTIAL_OPE}, the quantum corrections to the $J[0,1]J[1,0]$, $\tilde{J}[0,1] J[1,0]$, and $\tilde{J}[1,0] J[0,1]$ OPEs were computed using known 4d collinear splitting amplitudes and constraints from associativity. Here, we demonstrate how the same result can be obtained through Koszul duality using the methods developed in \cite{KOSZUL_DUALITY_IN_QFT,ON_THE_ASSOCIATIVITY_OF_1_LOOP_CORRECTIONS_TO_THE_CELESTIAL_OPERATOR_PRODUCT_IN_GRAVITY}. In particular, a similar computation in the setting of self-dual gravity coupled to a $4^{\text{th}}$-order gravitational axion was performed in \cite{ON_THE_ASSOCIATIVITY_OF_1_LOOP_CORRECTIONS_TO_THE_CELESTIAL_OPERATOR_PRODUCT_IN_GRAVITY}, and we closely follow the presentation therein.
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[scale=0.35]{figure_6a.png}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[scale=0.35]{figure_6b.png}
\end{subfigure}
\caption{}
\label{figure_1}
\end{figure}
\\ \\
Since axion exchanges are already counted as a one-loop effect by the Green-Schwarz mechanism, one-loop corrections to these OPEs cannot involve the axion operators.\footnote{In our counting convention, any $E$ or $F$ operator on the right-hand side of an OPE comes with an explicit factor of $\hbar$. Since any one-loop effect also comes with an explicit factor of $\hbar$, it follows that the 1-loop corrections cannot involve $E$ or $F$.} Reintroducing $\hbar$, SDYM is invariant under simultaneous rescalings $\hbar \to \lambda \hbar$, $\mathcal{B} \to \lambda \mathcal{B}$. In terms of the chiral algebra, this means that under this rescaling, $\tilde{J}$ transforms non-trivially, $\tilde{J} \to \lambda^{-1} \tilde{J}$.
Since 1-loop corrections come with an explicit factor of $\hbar$, invariance under rescaling of $\hbar$ requires that the number of $\tilde{J}$'s increases by one so that the powers of $\lambda$ match on both sides of the OPE. \\ \\
The one-loop Feynman diagrams whose gauge variations yield anomalies in the bulk-defect coupling, and hence contribute corrections, are illustrated in Figure 6. A priori, we should also consider the BRST variation of the diagram in Figure 7. In \cite{RENORMALIZATION_FOR_HOLOMORPHIC_FIELD_THEORIES} it was proven that this contribution necessarily vanishes in a 6d holomorphic theory.
We reproduce the corrections to the $\tilde{J}[0,1] J[1,0]$ and $\tilde{J}[1,0] J[0,1]$ OPEs in Appendix B.
\\ \\
We denote the location of the defect operators as $Z = (z_1,0^{\Dot{\alpha}})$ and $W = (z_2,0^{\Dot{\alpha}})$, where we again require that their distance satisfy $|z_1-z_2| \geq \epsilon$. We denote the location of the vertices as $X = (x^0,x^{\Dot{\alpha}})$ and $Y = (y^0,y^{\Dot{\alpha}})$. We also define $z_0 = \frac{z_1+z_2}{2}$, $z_{12}=z_1-z_2$, and $D^{i}_{rs} = \frac{1}{r! s!} \partial^{r}_{v^{1}_{i}} \partial^{s}_{v^{2}_{i}}$. \\ \\
The linearized BRST variation of the left-most diagram in Figure 6 leads to two terms. The term corresponding to the gauge variation acting on the upper external leg takes the general form
\begin{equation}
\bigg( \frac{1}{2 \pi i} \bigg)^2 \underset{\mathbb{C}^2}{\int} dz_1 dz_2 (J_c[r,s](z_1) \tilde{J}_d[m,n](z_2)) K^{ef}f^d_{ae} f^c_{bf} \underset{rsmn}{\mathcal{M}}(z_1,z_2; \overline{\partial} \chi^b, \mathcal{A}^a)
\end{equation}
\begin{equation}
\underset{rsmn}{\mathcal{M}}(z_1,z_2; \overline{\partial}\chi^b, \mathcal{A}^a) =
\bigg(\frac{1}{2}\bigg) \bigg( \frac{1}{2 \pi i} \bigg)^2 \underset{(\mathbb{C}^3)^2}{\int} D^1_{rs}\underset{z \leftrightarrow x}{P} \overline{\partial} \chi^{b} d^3X \underset{x \leftrightarrow y}{P} d^3Y \mathcal{A}^{a} D^4_{mn}\underset{y \leftrightarrow w}{P}
\end{equation}
where the structure constants $f^a_{bc}$ come from the trivalent vertices labeled by the cubic interaction of the action, the Killing form $K^{ef}$ comes from the bivalent vertex labeled by the quadratic interaction, and $\underset{z \leftrightarrow w}{P}$ is the $\mathcal{A}-\mathcal{B}$ propagator without the Lie algebra information. This propagator is defined via the equations
\begin{equation}
\underset{z \leftrightarrow w}{P} = j^{*}(p), \quad \quad \frac{1}{2 \pi i} \overline{\partial}p = -\delta^{(3)}, \quad \quad \underset{\mathbb{C}^3}{\int} d^3Z\delta^{(3)} = 1
\end{equation}
where $j:\mathbb{C}^3 \times \mathbb{C}^3 \to \mathbb{C}^3$ is the difference map $(Z,W) \mapsto Z-W$, $p \in \Omega^{(0,2)}(\mathbb{C}^3)$, and $\delta^{(3)}$ is the $(0,3)$-form $\delta$-function with support at $Z=0$. A nice discussion of propagators and Feynman rules in holomorphic gauge theories can be found in \cite{A_ONE_LOOP_EXACT_QUANTIZATION_OF_CHERN_SIMONS_THEORY}. Explicitly, the propagator is given by
\begin{equation}
\underset{z \leftrightarrow w}{P} = \bigg(\frac{1}{2 \pi}\bigg)^2 \epsilon_{\overline{a} \overline{b} \overline{c}} \frac{(\overline{Z}-\overline{W})^{\overline{a}}d(\overline{Z}-\overline{W})^{\overline{b}}d(\overline{Z}-\overline{W})^{\overline{c}}}{\lVert Z-W \rVert^6}.
\end{equation} \\
The second term, which corresponds to the gauge variation acting on the lower external leg, can be obtained from the first by exchanging $J \leftrightarrow \tilde{J}$ before taking any OPEs. \\ \\
Integrating by parts leads to terms where $\overline{\partial}$ acts on a propagator, and a boundary term where $\lvert z_{12} \rvert = \epsilon$. Since $\underset{z \leftrightarrow w}{\overline{\partial}{P}} = -(2 \pi i) \underset{z \leftrightarrow w}{\delta^{(3)}}$, where $\underset{z \leftrightarrow w}{\delta^{(3)}}$ is the $(0,3)$-form $\delta$-function with support at $Z-W=0$, these terms correspond to contractions of the internal edges. Diagrammatically this takes the form of Figure 8.
\begin{figure}[t]
\centering
\includegraphics[scale=0.35]{figure_7.png}
\caption{}
\end{figure}
\\ \\
Consider the contraction coming from the term with $\underset{z_{i} \leftrightarrow z_{j}}{\overline{\partial} P}$. The resulting expression will contain holomorphic derivatives acting on the product $\underset{z_1 \leftrightarrow z_k }{P} \chi \mathcal{A} \underset{z_k \leftrightarrow z_2}{P}$. This product is proportional to:
\begin{equation}
(\epsilon_{\overline{a}\overline{b}\overline{c}} \overline{Z}^{\overline{a}}_{{1k}} d\overline{Z}^{\overline{b}}_{{1k}} d\overline{Z}^{\overline{c}}_{{1k}})( \epsilon_{\overline{d}\overline{e}\overline{f}} \overline{Z}^{\overline{d}}_{{k2}} d\overline{Z}^{\overline{e}}_{{k2}} d\overline{Z}^{\overline{f}}_{{k2}}).
\end{equation} \\
Denoting the location of the vertex as $Z_{k} = (z_k,v^{\dot{\alpha}}_{k})$, the differences $Z_{1k}$ and $Z_{k2}$ become $(z_{1k},-v^{\dot{\alpha}}_{k})$ and $(z_{k2},v^{\dot{\alpha}}_{k})$. The only terms in eq. 7.5 that survive the fiber integration are those containing exactly two $d\overline{v}^{\dot{\alpha}}_{k}$,
\begin{equation}
(\epsilon_{\overline{a}\overline{b}\overline{c}} \overline{Z}^{\overline{a}}_{{1k}} d\overline{Z}^{\overline{b}}_{{1k}} d\overline{Z}^{\overline{c}}_{{1k}})( \epsilon_{\overline{d}\overline{e}\overline{f}} \overline{Z}^{\overline{d}}_{{k2}} d\overline{Z}^{\overline{e}}_{{k2}} d\overline{Z}^{\overline{f}}_{{k2}}) = -4\overline{v}^{\dot{\alpha}} \overline{v}^{\dot{\beta}} \epsilon_{\dot{\alpha} \dot{\gamma}} \epsilon_{\dot{\beta} \dot{\nu}} \epsilon^{\dot{\nu}\dot{\gamma}} d^{2}\overline{v}_{k}d\overline{z}_{1k} d\overline{z}_{k2}.
\end{equation}
\begin{figure}[t]
\centering
\includegraphics[scale=0.35]{figure_8.png}
\caption{}
\end{figure}
Summing over the $\epsilon$ indices, we find that the contributions coming from contractions are proportional to $\epsilon_{\dot{\alpha} \dot{\beta}}\overline{v}^{\dot{\alpha}} \overline{v}^{\dot{\beta}} = [\overline{v},\overline{v}] = 0$. This means that the only anomalous contributions come from the boundary term, which takes the form:
\begin{equation}
-\bigg( \frac{1}{2 \pi i} \bigg)^2 \underset{\mathbb{C}}{\int} dz_0 \underset{|z_{12}| = \epsilon}{\oint} dz_{12} (J_c[r,s](z_1) \tilde{J}_d[m,n](z_2)) K^{ef}f^d_{ae} f^c_{bf} \underset{rsmn}{\mathcal{M}}(z_1,z_2; \chi^b, \mathcal{A}^a )
\end{equation}
where we restrict $d\overline{z}_1 = d\overline{z}_2 = d\overline{z}_0$. As in section 4, gauge invariance requires that this be cancelled by the gauge variation of the left-most diagram in Figure 3
\begin{equation}
-\bigg( \frac{1}{2 \pi i} \bigg)^2 \underset{\mathbb{C}^2}{\int} d^2z_{1} d^2z_{2} (J_b [0,1](z_1) J_a [1,0](z_2)) D^1_{01} \partial_{\overline{z}}\chi^b(z_1) D^2_{10} \mathcal{A}^a_{\overline{z}}(z_2).
\end{equation}
The matching of scaling dimensions then tells us that the only terms contributing in eq. 7.7 are those which satisfy $r+s+m+n=0$. This follows from the fact that $J[0,1]J[1,0]$ has scaling dimension equal to $-2$. We therefore need to compute the following quantity:
\begin{equation}
\underset{0000}{\mathcal{M}}(z_1,z_2; \chi^b, \mathcal{A}^a) = \bigg(\frac{1}{2}\bigg) \bigg( \frac{1}{2 \pi i} \bigg)^2 \underset{(\mathbb{C}^3)^2}{\int} \underset{z \leftrightarrow x}{P} \chi^b d^3X \underset{x \leftrightarrow y}{P} d^3Y \mathcal{A}^a \underset{y \leftrightarrow w}{P} \bigg|_{d\overline{z}_1 = d\overline{z}_2 = d\overline{z}_0}.
\end{equation}
We take the external legs to be test functions of the form $\chi= z v^2\, \mathbf{t}_b$ and $\mathcal{A} = v^1\, d\overline{z}\, \mathbf{t}_a$:
\begin{equation}
\underset{0000}{\mathcal{M}}(z_1,z_2; x^0 x^2, y^1 d\overline{y}^0) = -\bigg(\frac{1}{2 \pi i}\bigg)^2 \bigg(\frac{1}{4 \pi^2}\bigg)^3 \bigg( \frac{1}{2} \bigg)^2 8 \overline{z}_{12} d\overline{z}_0 \underset{(\mathbb{C}^3)^2}{\mathcal{I}} \end{equation}
\begin{equation}
\underset{(\mathbb{C}^3)^2}{\mathcal{I}} = \underset{(\mathbb{C}^3)^2}{\int} d^6X d^6Y \frac{x^0 [\overline{x},\overline{y}] x^2 y^1}{\lVert Z-X \rVert^6 \lVert X-Y \rVert^6 \lVert Y-W \rVert^6}.
\end{equation}
where we've used the fact that the antiholomorphic form structure
\begin{equation}
(\epsilon_{\overline{a}\overline{b}\overline{c}} \overline{Z}^{\overline{a}}_{{1x}} d\overline{Z}^{\overline{b}}_{{1x}} d\overline{Z}^{\overline{c}}_{{1x}})( \epsilon_{\overline{d}\overline{e}\overline{f}} \overline{Z}^{\overline{d}}_{{xy}} d\overline{Z}^{\overline{e}}_{{xy}} d\overline{Z}^{\overline{f}}_{{xy}})( \epsilon_{\overline{g}\overline{h}\overline{i}} \overline{Z}^{\overline{g}}_{{y2}} d\overline{Z}^{\overline{h}}_{{y2}} d\overline{Z}^{\overline{i}}_{{y2}})
\end{equation}
with $d\overline{z}_1 = d\overline{z}_2 = d\overline{z}_0$, simplifies to:
\begin{equation}
8 [\overline{v}_{x},\overline{v}_{y}] \overline{z}_{12} d\overline{z}_0 d\overline{z}_x d^2\overline{v}_{x} d^2 \overline{v}_{y}.
\end{equation}
We compute this integral explicitly in the appendix. The result is
\begin{equation}
\underset{(\mathbb{C}^3)^2}{\mathcal{I}} = -\frac{(-2 \pi i)^6}{(2!)^3 6} \frac{3z_0 + \frac{z_{12}}{2}}{|z_{12}|^2} \quad \quad
\underset{0000}{\mathcal{M}}(z_1,z_2; x^0 x^2, y^1 d\overline{y}^0) = \frac{1}{96 \pi^2} \bigg(\frac{3z_0}{z_{12}} + \frac{1}{2}\bigg) d\overline{z}_0.
\end{equation}
After inserting this into eq. 7.7 and performing the contour integral, the anomalous contribution coming from both terms then becomes
\begin{equation}
\bigg(\frac{1}{2 \pi i}\bigg) \frac{1}{96 \pi^2} \underset{\mathbb{C}}{\int} d^2z_0 \bigg(3 z_{0} C^{cd}_{ab} :J_c[0,0] \tilde{J}_d[0,0]:(z_0)+ K^{ef}f^{d}_{ae}f^{c}_{bf}f^{l}_{dc} \tilde{J}_l[0,0](z_0)\bigg)
\end{equation}
\begin{equation*}
C^{cd}_{ab}= K^{ef}(f^d_{ae} f^c_{bf}+f^c_{ae} f^d_{bf}).
\end{equation*}
We can simplify the second term by repeated use of the Jacobi identity after writing it as
\begin{equation}
K^{ef}f^d_{ae} f^c_{bf} f^l_{dc} = \frac{1}{2}K^{ef}(f^d_{ae}f^c_{bf}-f^d_{be}f^c_{af})f^l_{dc}
\end{equation}
and making use of the fact that the Casimir in the adjoint representation is $2h^{\vee}$, which gives us the identity:
\begin{equation}
K^{ab}f^e_{ad}f^d_{bc}= 2h^{\vee} \delta^e_c.
\end{equation}
We obtain
\begin{equation}
\bigg(\frac{1}{2 \pi i}\bigg) \underset{\mathbb{C}}{\int} \frac{d^2z_0}{96 \pi^2} \bigg(3z_0K^{ef}(f^c_{ae} f^d_{bf}+f^d_{ae} f^c_{bf}) :J_c[0,0] \tilde{J}_d[0,0]:(z_0)+h^{\vee} f^l_{ab} \tilde{J}_l[0,0](z_0)\bigg).
\end{equation}
With this choice of test functions, eq. 7.8 simplifies to:
\begin{equation}
-\bigg(\frac{1}{2 \pi i} \bigg)^2 \underset{\mathbb{C}}{\int} d^2z_0 \underset{|z_{12}| = \epsilon}{\oint} dz_{12} (J_b[0,1](z_1)J_a[1,0](z_2))(\frac{z_{12}}{2}+z_0).
\end{equation}
Performing the contour integral yields
\begin{equation}
-\bigg( \frac{1}{2 \pi i} \bigg) \underset{\mathbb{C}}{\int} d^2z_0 \underset{z_1 \to z_2}{\text{Res}}\bigg( (\frac{z_{12}}{2}+z_0) (J_b[0,1](z_1)J_a[1,0](z_2)) \bigg).
\end{equation}
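To make the residue bookkeeping explicit, the following is a minimal \texttt{sympy} sketch (our own illustration, not part of the original computation): the symbols \texttt{A}, \texttt{B}, \texttt{C} are placeholders for the $z_{12}$-independent operator structures of the OPE ansatz below, and the output shows how the factor $(\frac{z_{12}}{2}+z_0)$ distributes the residue over the simple and double poles.
\begin{verbatim}
# Minimal sympy sketch: residue extraction as in eq. 7.20.
# A, B, C are placeholders for the operator structures of the OPE ansatz.
import sympy as sp

z12, z0, alpha, beta = sp.symbols('z12 z0 alpha beta')
A, B, C = sp.symbols('A B C')

# Laurent series in z12 mirroring the ansatz for the OPE correction
ope = alpha/z12*A + beta/z12**2*B + beta/(2*z12)*C

res = sp.residue((z12/2 + z0)*ope, z12, 0)
print(sp.expand(res))  # -> A*alpha*z0 + B*beta/2 + C*beta*z0/2
\end{verbatim}
The double pole alone produces the $z_0$-independent piece, while the simple poles carry the $z_0$-proportional pieces, mirroring the two operator structures in the anomaly above.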
Comparing eq. 7.20 to eq. 7.18, we see that gauge invariance holds if and only if the OPE correction has the form
\begin{equation}
\begin{aligned}
J_b[0,1](z)J_a[1,0](w) \sim \frac{\alpha}{z-w} K^{ef}(f^c_{ae} f^d_{bf}+f^d_{ae} f^c_{bf}) :J_c[0,0] \tilde{J}_d[0,0]:(w) \\
+\frac{\beta}{(z-w)^2} f^c_{ab} \tilde{J}_c[0,0](w)+\frac{\beta}{2(z-w)}f^c_{ab}\partial\tilde{J}_c[0,0](w),
\end{aligned}
\end{equation}
with the numerical constants \footnote{This matches what was found in \cite{ON_THE_ASSOCIATIVITY_OF_ONE_LOOP_CORRECTIONS_TO_THE_CELESTIAL_OPE} after adjusting the normalization convention. See footnote \ref{footnote_1}.}
\begin{equation}
\alpha = \frac{1}{32 \pi^2} = -\frac{3}{2(2\pi i)^2 12} \quad \quad \beta = \frac{h^{\vee}}{48 \pi^2} = - \frac{h^{\vee}}{(2 \pi i)^2 12}.
\end{equation}
Note that the added term $\partial \tilde{J}$ is a consequence of symmetry, in particular, the requirement that
\begin{equation}
J_b[1,0](z)J_a[0,1](w) \sim - J_a[0,1](z)J_b[1,0](w).
\end{equation}
Similarly, we find in Appendix B that the $\tilde{J}[0,1]J[1,0]$ and $\tilde{J}[1,0] J[0,1]$ OPEs receive the following corrections:
\begin{equation}
\begin{aligned}
\tilde{J}_b[0,1](z) J_a[1,0](w) \sim \frac{\alpha}{z-w} K^{fe}f^c_{ae}f^d_{bf}:\tilde{J}_c[0,0]\tilde{J}_d[0,0]:(w) \\
\tilde{J}_b[1,0](z) J_a[0,1](w) \sim \frac{-\alpha}{z-w} K^{fe}f^c_{ae}f^d_{bf}:\tilde{J}_c[0,0]\tilde{J}_d[0,0]:(w).
\end{aligned}
\end{equation}
Notice that even though the axion did not appear explicitly in the computation of the one-loop diagram (in contrast to the computations in \cite{ON_THE_ASSOCIATIVITY_OF_ONE_LOOP_CORRECTIONS_TO_THE_CELESTIAL_OPE}, which make direct use of the extended tree-level OPEs, including the $E$ and $F$ generators), Koszul duality is guaranteed to output a well-defined associative chiral algebra, with the precise numerical coefficients characteristic of the axion-coupled twistor theory.\footnote{We focused on the corrections to the OPEs involving $J[1]$ since the $J[1]$ generate the chiral algebra. Similarly, we can also compute the one-loop deformations of the $J[k]J[1]$ and $\tilde{J}[k] J[1]$ OPEs by adjusting the external-leg test functions and using dimension-matching arguments. In principle, we expect that these can also be determined from the $J[1]$ OPEs and the requirement of associativity. Here, $J[k]$ denotes $J[k_1,k_2]$ with $k_1+k_2 = k$.}
\acknowledgments
I would like to thank Natalie Paquette, my research advisor, for assigning me this project. I am deeply grateful for her continued support, instruction and exceptional guidance. I would also like to thank Niklas Garner for helpful discussions and invaluable comments on a draft version of this paper. V.F. acknowledges support from the University of Washington and the U.S. Department of Energy Early Career Award, DOE award DE-SC0022347.
\section{Introduction}
\label{sec:introduction}
\IEEEPARstart{M}{ulti-rotors} aided by vision sensors are widely employed in both academia \cite{history0,history1,history2,arc} and industry \cite{dji,skydio,parrot}, due to the high maneuverability and compactness of the platform. The main applications of vision-aided multi-rotors are surveillance \cite{covert_tracking} and cinematography \cite{Bonatti}, and autonomous target chasing is essential in such tasks. In target-chasing missions, various situations exist in which a single drone has to handle both single- and multi-target scenarios without occlusion. For example, in filming, there are scenes in which one or several actors are shot in a single take without being visually obstructed by structures on the set. Moreover, the occlusion of main actors by background actors is generally prohibited. Therefore, a tracking strategy that can handle both single and multiple targets among static and dynamic obstacles can benefit various chasing tasks.
\begin{figure}[t!]
\label{fig:thumbnail}
\centering
\begin{subfigure}[b]{0.24\textwidth}
\label{subfig:thumbnail_single}
\centering
\includegraphics[width=\textwidth]{figure/thumbnail_single_comp.png}
\caption{Single-target tracking}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.24\textwidth}
\label{subfig:thumbnail_dual}
\centering
\includegraphics[width=\textwidth]{figure/thumbnail_dual_comp.png}
\caption{Dual-target tracking}
\end{subfigure}
\caption{Target tracking mission in a realistic situation. \textbf{(a)}: A target (red) moves in an indoor arena, and a dynamic obstacle (green) interrupts the visibility of the target. \textbf{(b)}: Two targets (red and green) move among stacked bins (grey). A chaser drone (blue) generates a trajectory keeping the target within the camera view consistently.}
\end{figure}
Despite the great attention paid to aerial chasing over the recent decade, aerial target tracking remains a challenging task. First, it is difficult to forecast accurate future paths of dynamic objects, due to perceptual errors from sensors and unreliable estimation of the intentions of multiple moving objects in obstacle environments. Also, a motion generator in the chasing system ought to address the visibility of targets, collision avoidance against obstacles, and the dynamic limits of the drone simultaneously, and it should run in real-time.
To solve this problem, this paper proposes a target-chasing strategy that enhances the visibility of single and dual targets in both static and dynamic environments. The proposed method consists of two parts: the \textit{Prediction problem} and the \textit{Chasing problem}. In the former, to cope with the uncertain future trajectories of the targets, we propose computing the reachable sets of the moving objects while taking the obstacle configuration into account.
In the latter, we propose a robustly target-visible trajectory generation that considers safety from obstacles and dynamical feasibility. The key idea of the proposed trajectory planning is the target-visible region (TVR), a time-dependent spatial set that keeps the targets visible. A two-stage visibility consideration in the TVR improves target visibility. First, the TVR is defined based on an analysis of the topology of the targets' paths to explicitly avoid inevitable target occlusion. Second, the TVR takes into account the path prediction error of not only the targets but also the dynamic obstacles. Also, to see both targets simultaneously, the camera field of view (FOV) is additionally considered when defining the TVR. In order to acquire real-time performance with a guarantee of global optimality, we formulate the chasing problem as a single quadratic program (QP). Our main contributions are summarized as follows.
\begin{itemize}
\item A real-time trajectory planning framework that can handle both single and dual target-chasing missions in dynamic environments.
\item A fast reachable set prediction of moving objects considering static and dynamic obstacles.
\item Target-visible region (TVR) considering path topology and reachable sets of moving objects, which robustly keeps the visibility of the target.
\end{itemize}
The remainder of this paper is arranged as follows. We review the relevant references in Section \ref{sec:related_works}, and the relationship between visibility and topology is studied in Section \ref{sec:preliminary}. The problem statement and the pipeline of the proposed system are presented in Section \ref{sec:overview}, and Section \ref{sec:object_trajectory_prediction} describes the prediction of the reachable sets of moving objects. Section \ref{sec:chasing_trajectory_generation} describes the TVR, the design of the reference chasing trajectory, and the complete QP formulation. The validation of the proposed pipeline is demonstrated with high-fidelity simulations and real-world experiments in Section \ref{sec:validation}.
\section{Related Works}
\label{sec:related_works}
\subsection{Target Visibility among Obstacles}
\label{subsec: visibility consideration}
There have been various studies that take visibility into account in single-target chasing in static environments. \cite{Bonatti,Bonatti2,sphere} utilize nonlinear programming (NLP) to generate the chasing motion, proposing a term in the cost function that penalizes target occlusion.
A cost function combining multiple conflicting objectives, such as actuation efficiency, collision avoidance, a desired shooting distance between the drone and the target, and occlusion avoidance, can yield a sub-optimal solution and ruin pure tracking motion; therefore, visibility of the target is not ensured. Another line of works \cite{boseong_iros, boseong_icra} employs a cascaded structure. First, a series of safe view-points is obtained using a visibility metric that reflects a Euclidean signed distance field (ESDF). Then, in a subsequent module, the view-points are smoothly interpolated. However, this framework does not ensure the visibility of the target while moving between the view-points.
On the other hand, there are works that explicitly impose visibility constraints in optimization. The authors of \cite{elastic, fast-tracker2, visibility-aware} propose various visibility constraints and convert their problems to unconstrained optimization to generate the chasing trajectory. \cite{elastic} defines visible regions and uses them in spatial-temporal trajectory optimization. \cite{fast-tracker2, visibility-aware} apply the concept of path topology to enhance the robustness of visibility, as in \cite{mywork}. However, the above works only consider the visibility of the center of a target, not its whole body; hence, partial occlusion may occur while chasing the target.
Also, existing methods for target path prediction \cite{predict1,predict3} may generate erroneous results caused by the dynamic movement of sensors and insufficient consideration of obstacles, so that the predicted paths conflict with obstacles. With inaccurate target path prediction, chasing planners may not produce effective chasing paths and may fail to chase a target without occlusion.
Meanwhile, our framework predicts the movement of the target with consideration of the obstacle configuration and generates a target-visible trajectory that is robust to prediction error.
\subsection{Trajectory Planning in Dynamic Environments}
\label{subsec:dynamic_scenarios}
To the best knowledge of the authors, there are few studies that consider target visibility to be of vital importance in dynamic environments. The approach in \cite{ijcas} generates a set of polynomial motion primitives and selects the best paths that satisfy safety and visibility constraints, and it can be applied to dynamic environments. However, as target occlusion becomes imminent, the planner of \cite{ijcas} may alter the chasing path to a region whose path topology makes occlusion inevitable. Also, \cite{Nageli} designs a visibility cost to address occlusion by dynamic obstacles. Since that method deals with occlusion softly, it allows the target to be hidden. On the other hand, our method directly applies concepts of path topology to avoid inescapable occlusion and enforce strict target visibility, as in our previous work \cite{mywork}.
\subsection{Multiple Target Scenarios}
\label{subsec:multiple_target_scenarios}
Multi-target tracking frameworks with a single drone have also been discussed. The approaches in \cite{dual1,dual2} minimize the change in the positions of the targets projected on the camera image; however, they do not consider obstacle environments. The work \cite{access} designs a dual visibility score field to deal with the visibility of both targets in obstacle environments with a camera field-of-view (FOV) limit. Nevertheless, since the field is designed heuristically, the success rate of tracking missions depends highly on parameter settings. In contrast, the proposed planner considers the limited camera FOV with hard constraints to ensure simultaneous observation of both targets.
\begin{figure}[t!]
\centering
\includegraphics[width = 1\linewidth]{figure/Motivation.png}
\caption{Comparison between two view-points. From Viewpoint A the target is observable, and the path of the drone is path-homotopic to the path of the target. On the contrary, the target is occluded by an obstacle at Viewpoint B, and the topology classes of the two paths are different.}
\vspace{-4mm}
\label{fig:motivation}
\end{figure}
\section{Preliminary}
\label{sec:preliminary}
This section presents the relationship between target occlusion and path topology.
In an obstacle environment, there exist multiple path topology classes \cite{homotopy}. As shown in Fig. \ref{fig:motivation}, the visibility of the target is closely related to path topology. We analyze the relation between them and reflect it in generating the chasing trajectory.
As stated in \cite{topology_book}, the definition of path homotopy is presented as the following:
\begin{definition}
\label{def:path-homotopy}
Paths $\sigma_{1}, \sigma_{2}: I \to X \subset \mathbb{R}^3$ are path-homotopic if there is a continuous map $F: I \times I \to X$ such that
\begin{equation}
\label{eq:path-homotopy}
F(s,0)=\sigma_{1}(s)\ \text{and} \ F(s,1)=\sigma_{2}(s),
\end{equation}
where $I$ is unit interval $[0,1]$.
\end{definition}
In this paper, \textit{Line-of-Sight} is of interest, whose definition is:
\begin{definition}
\label{def:line-of-sight}
\textit{Line-of-Sight} is a segment connecting two objects $\textbf{x}_1,\textbf{x}_2: [0,\infty) \to \mathbb{R}^3$.
\begin{equation}
\label{eq:line-of-sight}
L(\textbf{x}_1(t),\textbf{x}_2(t)) = (1-\epsilon)\textbf{x}_1(t) + \epsilon \textbf{x}_2(t), \quad ^{\forall}\epsilon\in I
\end{equation}
where $t$ represents time.
\end{definition}
Based on the definitions above, we derive a relation between the path topology and visibility between two objects.
\begin{theorem}
\label{theorem:vis_homotopy}
When two objects are reciprocally visible, the paths of objects are path-homotopic.
\end{theorem}
\begin{proof}
Suppose that two objects $\textbf{x}_1(t), \textbf{x}_2(t)$ move along paths $\sigma_1, \sigma_2$ in free space $\mathcal{F}\in\mathbb{R}^3$ during time interval $T=[t_0,t_f]$, respectively.
\textit{i.e.} $\sigma_1 = \textbf{x}_1 \circ \xi$, $\sigma_2 = \textbf{x}_2 \circ \xi$, where $\xi: I \to T$ is the time mapping function such that $t=\xi(s)$. If visibility between $x_1$ and $x_2$ is maintained, the \textit{Line-of-Sight} does not collide with obstacles: $L(\textbf{x}_1(t), \textbf{x}_2(t)) \subset \mathcal{F}$ for $^{\forall}t\in T$. Then, by definition, $L(\sigma_1(s),\sigma_2(s))=(1-\epsilon)\sigma_1(s)+\epsilon\sigma_2(s)\in \mathcal{F}$ for $^{\forall}\epsilon,s\in I$. Since such a condition satisfies the definition of continuous mapping in Definition 1, the two paths are homotopic.
\end{proof}
From \textbf{Theorem}\ \ref{theorem:vis_homotopy}, when the drone chooses a path with a different topology class from the target path, target occlusion inevitably occurs. Therefore, we explicitly consider path-homotopy when planning a chasing trajectory.
\section{Overview}
\label{sec:overview}
\begin{table}[t]
\caption{Nomenclature}
\centering
\label{tab:nomenclature}
\begin{tabular}{|c|c|}
\hline
Name & Definition\\
\hline
$\textbf{c}(t)$ & A trajectory of the drone. $\textbf{c}(t)\in \mathbb{R}^{2}$ \\
\hline
$\underbar{\textbf{c}}$ & \makecell[l]{An optimization variable that consists of \\Bernstein coefficients representing $\textbf{c}(t)$.} \\
\hline
$n_{c}$ & The degree of a polynomial trajectory $\textbf{c}(t)$. \\
\hline
$T$ & Planning horizon\\
\hline
$\theta_{f}$ & FOV of the camera built on the drone. \\
\hline
$v_{\text{max}},a_{\text{max}}$ &\makecell[l]{ The maximum speed and acceleration of the \\drone.} \\
\hline
$\boldsymbol{\mathcal{O}}, \mathcal{O}_{i}$ & A set of obstacles. An $i$-th obstacle in $\mathcal{O}$\\
\hline
$N_{o}, N_{samp}$ & \makecell[l]{The number of elements of $\mathcal{O}$\\ and samples of end-points in the prediction.}\\
\hline
$\mathcal{R}_{q_{i}}(t),\hat{\textbf{q}}_{i}(t),r_{q_{i}}(t)$& \makecell[l]{A reachable set of an $i$-th target. A predicted\\center trajectory and radius of $\mathcal{R}_{q_{i}}(t)$. We omit\\a subscript $i$ to handle an arbitrary target.}\\
\hline
$\mathcal{R}_{o_{i}}(t),\hat{\textbf{o}}_{i}(t),r_{o_{i}}(t)$ & \makecell[l]{A reachable set of an $i$-th obstacle. A predicted\\ center trajectory and radius of $\mathcal{R}_{o_{i}}(t)$. We omit\\a subscript $i$ to handle an arbitrary obstacle.}\\
\hline
$L(\textbf{x}_{1},\textbf{x}_{2})$ & Line-of-Sight between $\textbf{x}_{1}$ and $\textbf{x}_{2}$.\\
\hline
$\mathcal{V}_{O}(t), \mathcal{V}_{F}(t)$ & \makecell[c]{Target visible region (TVR) against \\ an obstacle and considering camera FOV.}\\
\hline
${}_{\textbf{x}_{2}}\textbf{x}_{1}(t)$ & \makecell[c]{Relative position between $\textbf{x}_{1}(t)$ and $\textbf{x}_{2}(t)$.\\$\textbf{x}_{1}(t)-\textbf{x}_{2}(t)$.}\\
\hline
$\|\textbf{x}\|_{p}, {\textbf{x}}^{(i)}(t)$ & $L_{p}$ norm of $\textbf{x}$. $i$-th time derivative of $\textbf{x}(t)$. \\
\hline
$\textbf{x}_{x}, \textbf{x}_{y}$ & $x$- and $y$-components of $\textbf{x}$.\\
\hline
$\begin{pmatrix}
n \\ k
\end{pmatrix}$ & \makecell[c]{Binomial coefficients, the number of\\$k$-combinations from a set of $n$ elements.} \\
\hline
$\textbf{0}_{m\times n}$ & An $m\times n$ matrix with all-zero elements.\\
\hline
$I_{m\times m}$ & An identity matrix with rank $m$.\\
\hline
$\text{det}(\cdot)$ & Determinant of a matrix.\\
\hline
$\mathcal{B}(\textbf{x},r)$ & A ball with center at $\textbf{x}$ and radius $r$. \\
\hline
$\partial \mathcal{A}$ & A boundary of a closed set $\mathcal{A}$. \\
\hline
\makecell[c]{$\overline{AB}$,\\ $\angle{ABC}$} & \makecell[c]{Segment connecting points $A$ and $B$. \\ Angle between $\overline{BA}$ and $\overline{BC}$.}\\
\hline
\end{tabular}
\end{table}
\subsection{Problem Setup}
\label{subsec:problem_setup}
In this section, we formulate the trajectory planning problem for a tracking drone with a firmly attached camera sensor having a limited FOV $\theta_{f}\in (0,\pi)$ [rad]. Our goal is to generate a trajectory of the drone that can see single and dual targets ceaselessly in an obstacle environment $\mathcal{W}$ over the time horizon $[0, T]$. To achieve this goal, the drone has to predict the future motions of moving objects such as the targets and dynamic obstacles, and the planner generates a continuous-time trajectory of the drone that keeps the targets visible while satisfying dynamical feasibility and avoiding collision. To accomplish these missions, we set up two problems: 1) the \textit{Prediction problem} and 2) the \textit{Chasing problem}.
We assume that the environment $\mathcal{W}$ consists of separate cylindrical static and dynamic obstacles, and the set of obstacles is denoted as $\mathcal{O}$. In addition, although flying at a higher altitude may provide a simple solution for target-tracking missions, we set the flying height of the drone to a fixed level in order to acquire consistent images of the target. Given these problem settings, we focus on the design of the chasing trajectory in the $x$-$y$ plane.
Throughout this article, we use the notation in Table \ref{tab:nomenclature}. Bold lowercase letters represent vectors, calligraphic capital letters denote sets, and italic lowercase letters denote scalar values.
\subsubsection{Prediction problem}
\label{subsubsec:prediction_problem}
The prediction module forecasts the reachable sets of moving objects, such as targets and dynamic obstacles, over the time horizon $t\in[0, T]$. The reachable set $\mathcal{R}_{\textbf{p}}(t)$ is the set of positions that a moving object can reach, and it has to be constructed considering the obstacle set $\mathcal{O}$ as well as the estimation error.
The goal is to calculate $\mathcal{R}_{\textbf{p}}(t)$ enveloping the possible future positions of the moving objects $\textbf{p}(t)$. In this paper, $\textbf{q}$ and $\textbf{o}$ are used instead of $\textbf{p}$ to represent the reachable-set information of the target and obstacles, respectively. Specifically, we represent the $i$-th target, $i=1,2$, and the $j$-th obstacle, $j\in\{1,\ldots,N_{o}:=|\mathcal{O}|\}$, as $\textbf{q}_i$ and $\textbf{o}_j$, respectively.
\subsubsection{Chasing problem}
\label{subsubsec:chasing_problem}
Given the reachable sets of the target and the obstacles obtained by the prediction module, we generate a trajectory of the drone $\textbf{c}(t)$ that keeps the visibility of the target (\ref{subeq:visibility_constraint}), does not collide with obstacles (\ref{subeq:collision_constraint}), and satisfies dynamical limits (\ref{subeq:initial_constraint}), (\ref{subeq:dynamic_constraint}). For smoothness of the trajectory and high visibility, the cost function (\ref{subeq:cost}) consists of terms penalizing jerky motion, $J_j$, and the tracking error to the designed reference trajectory, $J_e$. We formulate the tracking problem as
\begin{subequations}
\label{eq:chasing_problem}
\begin{align}
\label{subeq:cost}
&\underset{\textbf{c}(t)}{\text{min}} && J = w_{j}J_{j}+w_{e}J_{e}\\
\label{subeq:visibility_constraint}
& \text{s.t.} &&L(\textbf{c}(t),\mathcal{R}_{\textbf{q}}(t)) \subset \mathcal{F}\setminus \mathcal{R}_{\textbf{o}}(t)\\
\label{subeq:collision_constraint}
& &&\mathcal{B}(\textbf{c}(t),r_{c}) \subset \mathcal{F}\setminus \big\{\bigcup_{i=1}^{N_{o}}\mathcal{R}_{\textbf{o}_i}(t)\cup \mathcal{R}_\textbf{q}(t)\big\}\\
\label{subeq:initial_constraint}
& &&\textbf{c}^{(i)}(0) = \textbf{c}^{(i)}_0, \ i = 0, 1 \\
\label{subeq:dynamic_constraint}
& &&\|\textbf{c}^{(1)}(t)\|_{2} \leq v_{\text{max}}, \ \| \textbf{c}^{(2)}(t)\|_{2} \leq a_{\text{max}}
\end{align}
\end{subequations}
where $w_j$ and $w_e$ are weight factors of the costs, $r_c$ is the radius of the drone, and $\textbf{c}^{(i)}_{0}, i = 0, 1$ are the current position and velocity of the drone.
\subsubsection{Assumptions}
\label{subsusbsec:assumption}
The stated problems are to be solved under the assumptions below. For the \textit{Prediction Problem}, we assume that \textit{P1)} the moving objects do not collide with obstacles, and \textit{P2)} they do not move in a jerky way. In the \textit{Chasing Problem}, we assume that \textit{C1)} the maximum velocity $v_{\text{max}}$ and the maximum acceleration $a_{\text{max}}$ of the drone are higher than those of the target and obstacles, and \textit{C2)} the current state of the drone does not violate (\ref{subeq:visibility_constraint})-(\ref{subeq:dynamic_constraint}). Furthermore, based on \textbf{Theorem 1}, occlusion unavoidably occurs when the targets move along paths with different path topologies, so we assume that \textit{C3)} all targets move along homotopic paths with respect to obstacles.
\subsubsection{Trajectory representation}
\label{subsubsec:trajectory_representation}
Owing to the differential flatness of quadrotor dynamics \cite{snap}, the trajectory of a multi-rotor can be expressed as a polynomial function of time $t$. In this paper, the Bernstein basis is deployed to express polynomials. The Bernstein bases of an $n$-th order polynomial for the time interval $[t_a, t_b]$ are defined as follows:
\begin{equation}
\label{eq:bernstein_basis}
B_{k,n}(t,t_{a},t_{b}) = \begin{pmatrix} n\\k \end{pmatrix}\frac{(t_b-t)^{n-k}(t-t_a)^{k}}{(t_b-t_a)^n},\ 0\leq k\leq n
\end{equation}
Since the bases defined above are non-negative in the time interval $[t_{a},t_{b}]$, a linear combination with non-negative coefficients makes the total value non-negative. We utilize this property in the following sections.
The trajectory of the chasing drone, $\textbf{c}(t)=[c_x(t),c_y(t)]^T \in \mathbb{R}^2$, is represented as an $M$-segment piecewise Bernstein polynomial:
\begin{equation}
\label{eq:segment_representation}
\textbf{c}(t)=\begin{cases}
&\textbf{C}_{1}^{T} \textbf{B}_{n_{c},1}(t)\ \ \ \ \ t\in [T_{0},T_{1}]\\
&\textbf{C}_{2}^{T} \textbf{B}_{n_{c},2}(t)\ \ \ \ \ t\in [T_{1},T_{2}]\\
&\ldots \\
&\textbf{C}_{M}^{T} \textbf{B}_{n_{c},M}(t)\ \ \ t\in [T_{M-1},T_{M}]\\
\end{cases}
\end{equation}
where $T_0=0$ (current time), $T_M = T$, $n_c$ is the degree of the polynomial, and $\textbf{C}_{i}=[\textbf{c}_{i(x)}, \textbf{c}_{i(y)}]\in \mathbb{R}^{(n_{c}+1)\times2}$ and $\textbf{B}_{n_{c},i}(t)=[B_{0,n_{c}}(t,T_{i-1},T_{i}),\ldots,B_{n_{c},n_{c}}(t,T_{i-1},T_{i})]^T$ are the control points and the corresponding basis vector of the $i$-th segment, respectively. An array of control points in the $i$-th segment is defined as $\textbf{c}_{i}=[\textbf{c}_{i(x)}^{T},\textbf{c}_{i(y)}^{T}]^{T}\in \mathbb{R}^{2(n_c+1)}$, and a concatenated vector of all control points $\underbar{\textbf{c}} = [\textbf{c}_{1}^{T},\ldots,\textbf{c}_{M}^{T}]^{T}\in \mathbb{R}^{2M(n_c+1)}$ is the decision vector of the polynomial trajectory optimization.
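For concreteness, the following is a minimal Python sketch (our own illustration, with hypothetical helper names) that evaluates the Bernstein bases of (\ref{eq:bernstein_basis}) and a single segment of (\ref{eq:segment_representation}) from its control points.
\begin{verbatim}
# Minimal sketch: evaluating Bernstein bases and one trajectory segment.
import numpy as np
from math import comb

def bernstein_basis(k, n, t, ta, tb):
    # B_{k,n}(t, ta, tb) of the Bernstein basis definition above
    return comb(n, k) * (tb - t)**(n - k) * (t - ta)**k / (tb - ta)**n

def eval_segment(C, t, ta, tb):
    # C: (n_c + 1, 2) control points of one segment; returns c(t) in R^2
    n = C.shape[0] - 1
    basis = np.array([bernstein_basis(k, n, t, ta, tb) for k in range(n + 1)])
    return C.T @ basis
\end{verbatim}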
\subsubsection{Objectives}
\label{objective_qp}
For the \textit{Prediction Problem}, the prediction module forecasts the moving objects' reachable sets $\mathcal{R}_{\textbf{p}}(t)$.
Then, for the \textit{Chasing Problem}, the chasing module formulates a QP problem with respect to $\underbar{\textbf{c}}$ and finds an optimal $\underbar{\textbf{c}}$ so that the drone chases the target without occlusion and collision while satisfying dynamical limits.
\subsection{Pipeline}
\label{subsec:pipeline}
\begin{figure}[t!]
\centering
\includegraphics[width = 0.8\linewidth]{figure/Architecture_comp.png}
\caption{Pipeline of the proposed chasing system.}
\label{fig:Architecture}
\end{figure}
The proposed overall architecture of QP Chaser is shown in Fig. \ref{fig:Architecture}. From the camera sensors, RGB and depth images are acquired, and a point cloud and target pose information are extracted from the images. Then, the locations and dimensions of static obstacles are estimated from the point cloud to build an obstacle map, and the reachable sets of the targets and dynamic obstacles are predicted based on the observed poses of the dynamic obstacles and the targets. Based on the static obstacle map and the prediction results, the chasing trajectory generation is executed.
\section{Object Trajectory Prediction}
\label{sec:object_trajectory_prediction}
From the observation by camera sensors, the position, velocity, their estimation error covariance, and the dimension (radius) of each moving object are acquired at the current time $t=0$ as $\hat{\textbf{p}}_0$, $\hat{\textbf{p}}^{(1)}_0$, $P_0$, and $r_{p0}$, respectively. We first sample positions that can be reached with the current dynamical state at $t=T$ and generate motion primitives that represent possible trajectories of the moving objects. Then, we filter out the primitives that collide with obstacles and define a reachable set $\mathcal{R}_{\textbf{p}}(t)$ that encloses the non-conflicting primitives.
\subsection{Candidate for Future Trajectory of Moving Object}
\label{subsec:candidate_object_trajectory}
\subsubsection{End point sampling}
\label{subsubsec:end_point_sampling}
We first estimate the position of a moving object at time $t=T$. The dynamics of a moving object can be modeled as a constant velocity model with disturbance, represented as
\begin{equation}
\label{eq:dynamics_object}
\begin{aligned}
&
\begin{bmatrix}
\textbf{p}^{(1)}(t) \\ \textbf{p}^{(2)}(t)
\end{bmatrix}
= F_p \begin{bmatrix}
\textbf{p}(t) \\ \textbf{p}^{(1)}(t)
\end{bmatrix}
+ G_p \textbf{w}, \quad \textbf{w}\sim (0,Q), \\
& F_p =
\begin{bmatrix}
\textbf{0}_{2\times2} & \textbf{I}_{2\times2}\\
\textbf{0}_{2\times2} & \textbf{0}_{2\times2}
\end{bmatrix}
, \quad G_p =
\begin{bmatrix}
\textbf{0}_{2\times2} \\
\textbf{I}_{2\times2}
\end{bmatrix}
\end{aligned}
\end{equation}
where $\textbf{p}(t)$ is the position of the moving object, and $\textbf{w} \in \mathbb{R}^2$ is white noise with covariance $Q$. The prediction error covariance $P(t)$ then propagates over time as
\begin{equation}
\label{eq:error_propagation}
P^{(1)}(t) = F_p(t)P(t)+P(t)F_{p}^{T}(t)+G_{p}QG_{p}^{T}
\end{equation}
(\ref{eq:error_propagation}) is a differential Riccati equation, and the error covariance $P(t)$ can be calculated algebraically.\\
We sample $N_{samp}$ points from the 2-dimensional Gaussian distribution $\mathcal{N}\big(\hat{\textbf{p}}_{0}+T\hat{\textbf{p}}^{(1)}_{0},P(T)\big)$, whose mean is the position block of the propagated state $e^{F_{p}T}[\hat{\textbf{p}}_{0}^{T},\hat{\textbf{p}}_{0}^{(1)T}]^{T}$. We call the set of samples the \textit{End Point Set} and represent it as
\begin{equation}
\mathcal{S}_{p}=\{\textbf{s}_{p,i}\in\mathbb{R}^2 | i=1,\ldots,N_{samp} \}
\end{equation}
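As an illustration, the following Python sketch (our own, not the paper's implementation; the horizon \texttt{T}, noise covariance \texttt{Q}, and step size are assumed values) propagates the mean and covariance of the constant-velocity model and samples an \textit{End Point Set}.
\begin{verbatim}
# Sketch: propagate mean/covariance of the CV model and sample end points.
import numpy as np

F = np.block([[np.zeros((2, 2)), np.eye(2)],
              [np.zeros((2, 2)), np.zeros((2, 2))]])
G = np.vstack([np.zeros((2, 2)), np.eye(2)])

def end_point_set(p0, v0, P0, Q, T, N_samp, dt=1e-3):
    P = P0.copy()
    for _ in range(int(T / dt)):      # Euler steps of the Riccati equation
        P = P + dt * (F @ P + P @ F.T + G @ Q @ G.T)
    mean = p0 + T * v0                # position block of expm(F*T) @ [p0; v0]
    rng = np.random.default_rng(0)
    return rng.multivariate_normal(mean, P[:2, :2], size=N_samp)
\end{verbatim}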
\subsubsection{Primitive generation}
\label{subsubsec:primitive_generation}
Given the initial position, velocity, and end positions, we design trajectory candidates for moving objects. Under the assumption \textit{P2}) that moving objects do not move in a jerky way, we establish the following problem.
\begin{equation}
\label{eq:primitive_optimization_problem}
\begin{aligned}
&\underset{\hat{\textbf{p}}(t)}{\text{min}} &&\int_{0}^{T}\|\hat{\textbf{p}}^{(3)}(t)\|^{2}_{2}dt \\
& \text{s.t.} && \hat{\textbf{p}}(0) = \hat{\textbf{p}}_{0},\
\hat{\textbf{p}}^{(1)}(0) = \hat{\textbf{p}}^{(1)}_{0},\
\hat{\textbf{p}}(T) \in \mathcal{S}_p
\end{aligned}
\end{equation}
Recalling that the trajectory is represented with a Bernstein polynomial, we write the $i$-th candidate trajectory, which satisfies $\hat{\textbf{p}}(T) = \textbf{s}_{p,i}$, as $\hat{\textbf{p}}_{i}(t)=\textbf{P}^{T}_{i}\boldsymbol{B}_{n_{p}}(t)$, $i=1,\ldots, N_{samp}$, where $\textbf{P}_{i}= [\textbf{p}_{i(x)},\textbf{p}_{i(y)}]\in \mathbb{R}^{(n_{p}+1) \times 2}$ and $\boldsymbol{B}_{n_{p}}(t)=[B_{0,n_{p}}(t,0,T),\ldots,B_{n_{p},n_{p}}(t,0,T)]^{T}$. By defining $\textbf{p}_{i}:=[\textbf{p}_{i(x)}^{T},\textbf{p}_{i(y)}^{T}]^{T}$ as the optimization variable, (\ref{eq:primitive_optimization_problem}) becomes a QP problem:
\begin{equation}
\label{eq:primitive_qp}
\begin{aligned}
&\underset{\textbf{p}_{i}}{\text{min}} && \frac{1}{2}\textbf{p}_{i}^{T}Q_{p}\textbf{p}_{i}\\
&\text{s.t.} && A_{p}\textbf{p}_{i}=\textbf{b}_{p,i}\\
\end{aligned}
\end{equation}
where $Q_{p}$ is a positive semi-definite matrix, $A_{p}$ is a $6\times 2(n_{p}+1)$ matrix, and $\textbf{b}_{p,i}$ is a vector composed of $\hat{\textbf{p}}_{0}$, $\hat{\textbf{p}}^{(1)}_{0}$, and $\textbf{s}_{p,i}$.
(\ref{eq:primitive_qp}) is an equality-constrained QP, and the optimal $\textbf{p}_{i}$ has a closed-form solution obtained from the KKT system:
\begin{equation}
\label{eq:KKT_matrix}
\begin{bmatrix}
\textbf{p}_{i}\\
\boldsymbol{\lambda}
\end{bmatrix}
=
\begin{bmatrix}
Q_p & A_{p}^{T}\\
A_{p} & \textbf{0}_{6\times6}
\end{bmatrix}^{-1}
\begin{bmatrix}
\textbf{0}_{2(n_p+1)\times1} \\
\textbf{b}_{p,i}
\end{bmatrix}
\end{equation}
where $\boldsymbol{\lambda}$ is the Lagrange multiplier vector. The set of candidate trajectories of the moving object is defined as follows
\begin{equation}
\mathcal{P}_{p}=\{\hat{\textbf{p}}_{i}(t)|\hat{\textbf{p}}_{i}(t)
= \textbf{P}^{T}_{i}\boldsymbol{B}_{n_{p}}(t),\ i=1,\ldots,N_{samp} \}
\end{equation}
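A minimal Python sketch of the closed-form KKT solve in (\ref{eq:KKT_matrix}) follows (our own illustration; the construction of $Q_p$ and $A_p$ for the minimum-jerk Bernstein cost is problem-specific and is passed in as arguments).
\begin{verbatim}
# Sketch: closed-form solution of min (1/2) p^T Q p  s.t.  A p = b.
import numpy as np

def solve_kkt(Q, A, b):
    n, m = Q.shape[0], A.shape[0]
    K = np.block([[Q, A.T],
                  [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([np.zeros(n), b]))
    return sol[:n], sol[n:]   # optimal coefficients p_i, multipliers lambda
\end{verbatim}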
\subsection{Collision Check}
\label{subsec:colli_check}
Under the assumption \textit{P1)} that moving objects do not collide with obstacles, trajectory candidates $\hat{\textbf{p}}_{i}(t)\in \mathcal{P}_{p}$ that violate the condition $\|\hat{\textbf{p}}_{i}(t)-\textbf{o}_{j}(t) \|_{2}^{2}-(r_{p0}+r_{o_j}(t))^{2} \geq 0$ $(\forall t \in [0,T])$ are filtered out. Since all terms in the non-colliding condition are polynomials and the Bernstein bases are non-negative on the time period $[0, T]$, all-non-negative coefficients make the left-hand side non-negative. We examine all coefficients of each basis of $\forall \hat{\textbf{p}}_{i}(t)\in \mathcal{P}_{p}$ and select the primitives that have all non-negative coefficients; see the sketch below and Appendix \ref{appendix:collision_check} for details. The set of primitives that pass this test is defined as $\mathcal{P}_{s}$.
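The coefficient test can be sketched as follows (our own illustration, assuming the primitive and obstacle trajectories are given as Bernstein coefficient arrays of a common degree, with degree elevation otherwise, and assuming for simplicity a constant combined radius). It relies on the product rule $B_{i,m}B_{j,n} = \binom{m}{i}\binom{n}{j}\binom{m+n}{i+j}^{-1}B_{i+j,m+n}$.
\begin{verbatim}
# Sketch: sufficient collision check via non-negative Bernstein coefficients.
import numpy as np
from math import comb

def bern_mul(a, b):
    # Bernstein coefficients of the product of two Bernstein polynomials
    m, n = len(a) - 1, len(b) - 1
    c = np.zeros(m + n + 1)
    for i in range(m + 1):
        for j in range(n + 1):
            c[i + j] += a[i] * b[j] * comb(m, i) * comb(n, j) \
                        / comb(m + n, i + j)
    return c

def is_safe(px, py, ox, oy, r_sum):
    # all coefficients of ||p(t)-o(t)||^2 - r_sum^2 non-negative => safe
    dx, dy = px - ox, py - oy
    coeffs = bern_mul(dx, dx) + bern_mul(dy, dy) - r_sum**2
    return bool(np.all(coeffs >= 0))
\end{verbatim}
Note that this is only a sufficient condition: a primitive with some negative coefficients may still be collision-free, so the filter is conservative.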
\begin{figure}[t!]
\centering
\begin{subfigure}[t]{0.239\textwidth}
\centering\includegraphics[width=\textwidth]{figure/PredictionSampling_comp.png}
\caption{}
\label{fig:PredictionSampling}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.239\textwidth}
\centering\includegraphics[width=\textwidth]{figure/ReachSetAll_comp.png}
\caption{}
\label{fig:ReachSetAll}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.239\textwidth}
\centering\includegraphics[width=\textwidth]{figure/ReachSet80_comp.png}
\caption{}
\label{fig:ReachSet80}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.239\textwidth}
\centering\includegraphics[width=\textwidth]{figure/PredictionDynamic_comp.png}
\caption{}
\label{fig:PredictionDynamic}
\end{subfigure}
\caption{\textbf{(a)}: Sampled endpoints (red), primitives $\mathcal{P}_{p}$ (black), and safe primitives $\mathcal{P}_{s}$ (green) among obstacles (grey). \textbf{(b)}: The best primitive $\hat{\textbf{p}}(t)$ (blue), reachable set $\mathcal{R}_{\textbf{p}}(t)$ (light blue), and the farthest primitive $\hat{\textbf{p}}'(t)$ (magenta) for moving objects. \textbf{(c)}: The prediction with $\epsilon_{p} = 0.2$. \textbf{(d)}: The prediction in a dynamic environment.}
\label{fig:PredicionExample}
\end{figure}
\begin{table}[t!]
\centering
\caption{Computation time in Prediction Process}
\label{tab:prediction_time}
\begin{tabular}{c|cccc}
\toprule
$(N_{samp},N_{o})$ &(500,2) &(500,4) &(2000,2) &(2000,4)\\ \hline
\begin{tabular}[c]{@{}c@{}}Sampling End Points \& \\ {Primitive Generation\ [}$\mu$s{]}\end{tabular}& 43.97 & 47.23 & 168.3 & 184.1 \\ \hline
Collision Check\ [$\mu$s] & 146.3 & 185.3 & 487.9 & 750.7 \\ \hline
\begin{tabular}[c]{@{}c@{}}Reachable Set\\ {Computation\ [}ms{]}\end{tabular}& 0.245 & 0.172 & 2.921 & 2.073 \\ \hline
\textbf{Total Time} [ms] & 0.435 & 0.405 & 3.577 & 3.008\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Prediction with Error Bounds}
\label{subsec:define_reachable_sets}
With the set of non-colliding primitives $\mathcal{P}_{s}$, we define the reachable set $\mathcal{R}_{\textbf{p}}(t)$ with the following representation.
\begin{equation}
\label{eq:reachable_set}
\mathcal{R}_{\textbf{p}}(t) = \mathcal{B}(\hat{\textbf{p}}(t),r_{p}(t))
\end{equation}
We take as the center trajectory $\hat{\textbf{p}}(t)$ the primitive having the smallest summed distance to the other primitives, which is expressed as follows.
\begin{equation}
\label{eq:best_primitive}
\hat{\textbf{p}}(t)= \underset{\hat{\textbf{p}}_{i}(t)\in\mathcal{P}_s}{\text{argmin}}
\sum_{j\neq i}\int_{0}^{T}\|\hat{\textbf{p}}_{i}(t)-\hat{\textbf{p}}_{j}(t) \|_{2}dt
\end{equation}
\begin{proposition}
The optimization problem (\ref{eq:best_primitive}) is equivalent to the following problem.
\begin{equation}
\label{eq:best_end_point}
\begin{aligned}
&\hat{\textbf{p}}(t) = \hat{\textbf{p}}_{i^{*}}(t),\ i^{*} =\underset{i}{\text{argmin}}\sum_{j=0}^{|\mathcal{P}_{p}|} \|\textbf{s}_{p,i}- \textbf{s}_{p,j}\|_2, \\
&\text{where}\ i \neq j \in \{1,\ldots,|\mathcal{P}_{p}|\}\ \wedge \ \hat{\textbf{p}}_{i}(t),\hat{\textbf{p}}_{j}(t) \in \mathcal{P}_{s}
\end{aligned}
\end{equation}
\end{proposition}
\begin{proof}
See Appendix \ref{appendix:prediction_proof}.
\end{proof}
From \textbf{Proposition 1}, $\hat{\textbf{p}}(t)$ is determined by simple arithmetic operations and a \textbf{min} search, with time complexities $O(|\mathcal{P}_{s}|^{2})$ and $O(|\mathcal{P}_{s}|)$, respectively. Then we define $r_p(t)$ so that $\mathcal{R}_{\textbf{p}}(t)$ encloses all primitives in $\mathcal{P}_{s}$ for $\forall{t} \in [0,T]$.
\begin{equation}
\label{eq:reachable_set_radius}
\begin{aligned}
&r_{p}(t) = \|\hat{\textbf{p}}(t)-\hat{\textbf{p}}'(t) \|_2+r_{p0}, \\
&\text{where}\ \hat{\textbf{p}}{'}(t) = \underset{\hat{\textbf{p}}_{i}(t)\in \mathcal{P}_{s}}{\text{argmax}} \int_{0}^{T} \|\hat{\textbf{p}}(t)-\hat{\textbf{p}}_{i}(t)\|_{2}^{2}dt
\end{aligned}
\end{equation}
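Using \textbf{Proposition 1}, selecting the center trajectory reduces to an argmin over summed end-point distances, as in the following sketch (our own illustration).
\begin{verbatim}
# Sketch: center-trajectory selection via Proposition 1.
import numpy as np

def best_primitive(endpoints):
    # endpoints: (N, 2) end points of the safe primitives in P_s
    D = np.linalg.norm(endpoints[:, None, :] - endpoints[None, :, :], axis=-1)
    return int(np.argmin(D.sum(axis=1)))
\end{verbatim}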
\subsection{Evaluation}
\label{subsec:prediction_evaluation}
Fig. \ref{fig:PredicionExample} visualizes the prediction process among multiple obstacles. We set $n_p$ to the minimum degree 3 and run 1000 tests for each scenario $(N_{samp}, N_{o})$ $=(500,2),(500,4),(2000,2),(2000,4)$. The prediction module is computed on a laptop with an Intel i7 CPU in a single-thread implementation, and the execution time is summarized in Table \ref{tab:prediction_time}. Over the entire test, $\textbf{p}(t)\in \mathcal{R}_{\textbf{p}}(t)$, $\forall t\in[0,T]$ was satisfied with an empirical probability of 0.988.
\section{Chasing Trajectory Generation}
\label{sec:chasing_trajectory_generation}
In the previous section, we focused on the prediction of the reachable sets of the moving objects: the target $\mathcal{R}_{\textbf{q}}(t)=\mathcal{B}(\hat{\textbf{q}}(t),r_{q}(t))$ and the obstacles $\mathcal{R}_{\textbf{o}}(t)=\mathcal{B}(\hat{\textbf{o}}(t),r_{o}(t))$. This section formulates a QP problem, with respect to the optimization variable $\underbar{\textbf{c}}$, that represents the \textit{Chasing Problem}.
\subsection{Topology Check}
\label{subsec:topology_check}
In 2-dimensional space, there exist two classes of path topology with respect to a single obstacle, as shown in Fig. \ref{fig:motivation}, and as stated in \textbf{Theorem 1}, the drone should move along a path homotopic to the target path to avoid occlusion.
Based on the relative position between the drone and the obstacle, ${}_{\hat{\textbf{o}}}\textbf{c}(t)=\textbf{c}(t)-\hat{\textbf{o}}(t)$, and the relative position between the target and the obstacle, ${}_{\hat{\textbf{o}}}\hat{\textbf{q}}(t)=\hat{\textbf{q}}(t)-\hat{\textbf{o}}(t)$, both evaluated at the current time $t=0$, the topology class of the chasing path is determined as follows.
\begin{equation}
\label{eq:topology_check}
\begin{cases}
&\textit{Class O1} \ (\text{if}\ \det \begin{bmatrix}{}_{\hat{\textbf{o}}}\textbf{c}^T(0)\\ {}_{\hat{\textbf{o}}}\hat{\textbf{q}}^T(0) \end{bmatrix} \geq 0)\\
&\textit{Class O2} \ (\text{if}\ \det \begin{bmatrix}{}_{\hat{\textbf{o}}}\textbf{c}^T(0)\\ {}_{\hat{\textbf{o}}}\hat{\textbf{q}}^T(0) \end{bmatrix} < 0)\\
\end{cases}
\end{equation}
Based on the topology check above, the visibility constraint and reference chasing trajectory are defined.
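A minimal sketch of the determinant test (our own illustration) is given below.
\begin{verbatim}
# Sketch: topology class of the chasing path at t = 0.
import numpy as np

def topology_class(c0, q_hat0, o_hat0):
    d = np.linalg.det(np.stack([c0 - o_hat0, q_hat0 - o_hat0]))
    return "O1" if d >= 0 else "O2"
\end{verbatim}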
\begin{figure}[t!]
\begin{subfigure}[t]{0.241\textwidth}
\centering\includegraphics[width=\textwidth]{figure/TVRO_Not_Overlap_comp.png}
\caption{}
\label{fig:TVRO1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.241\textwidth}
\centering\includegraphics[width=\textwidth]{figure/TVRO_Overlap_comp.png}
\caption{}
\label{fig:TVRO2}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.241\textwidth}
\centering\includegraphics[width=\textwidth]{figure/TVRF_comp.png}
\caption{}
\label{fig:TVRF}
\end{subfigure}
\hfill
\centering
\begin{subfigure}[t]{0.241\textwidth}
\centering\includegraphics[width=\textwidth]{figure/Collision_Constraint_comp.png}
\caption{}
\label{fig:Collision_constraint}
\end{subfigure}
\caption{\textbf{(a)} TVR-O (blue: \textit{Class O1}, green: \textit{Class O2}) when the reachable sets of the target (red) and obstacles (grey) are distant from each other. \textbf{(b)} TVR-O (purple) when the reachable sets of the target (red) and obstacles (grey) overlap. \textbf{(c)} TVR-F (blue: \textit{Class F1}, green: \textit{Class F2}). \textbf{(d)}: Collision-free region (blue) against an obstacle (grey). }
\label{fig:TVR}
\end{figure}
\subsection{Target Visible Region}
\label{subsec:target_visible_region}
To robustly maintain the visibility of the target against prediction error, the target visible region (TVR) is defined considering the reachable sets of the moving objects, $\mathcal{R}_{\textbf{q}}(t)$ and $\mathcal{R}_{\textbf{o}}(t)$.
\subsubsection{Target visibility against obstacles}
\label{subsubsec:target_visibility_obstacles}
We define the target visible region against an obstacle (TVR-O) for each obstacle. In order to maximize the visible area of the target's reachable set that is not occluded by the reachable set of an obstacle, the TVR-O is set to a half-space that includes $\mathcal{R}_{\textbf{q}}(t)$ and minimizes the area overlapping with $\mathcal{R}_{\textbf{o}}(t)$.
In the case where the reachable sets of the target and the obstacle do not overlap, we define $\mathcal{V}_{O}(t)$ as the half-space bounded by a line tangent to both $\mathcal{R}_{\textbf{q}}(t)$ and $\mathcal{R}_{\textbf{o}}(t)$, as shown in Fig. \ref{fig:TVRO1}, and the TVR-O is represented as follows:
\begin{equation}
\label{eq:TVR}
\begin{aligned}
\mathcal{V}_{O}(t) = \Big \{\textbf{x}(t)\in \mathbb{R}^{2}|\
{}_{\hat{\textbf{o}}}\hat{\textbf{q}}^T(t)\begin{bmatrix}r_{qo}(t)& \mp d_2(t) \\ \pm d_2(t)& r_{qo}(t) \end{bmatrix}{}_{\hat{\textbf{o}}}\textbf{x}(t)\\
- r_o(t)d_{1}^{2}(t) \geq 0 \Big\}
\end{aligned}
\end{equation}
where $r_{qo}(t) = r_q(t)+r_o(t)$, $d_1(t) =\|{}_{\hat{\textbf{o}}}\hat{\textbf{q}}(t)\|_{2}$, and $d_2(t) = (d_{1}^{2}(t)-r_{qo}^{2}(t))^{\frac{1}{2}}$. The double signs in (\ref{eq:TVR}) are in the same order, where the lower and upper signs are for \textit{Class O1} and \textit{Class O2}, respectively.
\begin{lemma}
If $\textbf{c}(t)\in \mathcal{V}_{O}(t)$,\ $L(\textbf{c}(t),\mathcal{R}_{q}(t))\subset \mathcal{F}\setminus \mathcal{R}_{\textbf{o}}(t)$ is satisfied.
\end{lemma}
\begin{proof}
$\mathcal{V}_{O}(t)$ is a half-space, which is convex, and both $\textbf{c}(t)$ and $\mathcal{R}_{\textbf{q}}(t)$ belong to $\mathcal{V}_{O}(t)$. By the definition of convexity, every \textit{Line-of-Sight} connecting the drone $\textbf{c}(t)$ and a point in the reachable set of the target $\mathcal{R}_{\textbf{q}}(t)$ is included in $\mathcal{V}_{O}(t)$. Since the TVR-O defined in (\ref{eq:TVR}) is disjoint from $\mathcal{R}_{\textbf{o}}(t)$, $L(\textbf{c}(t),\mathcal{R}_{q}(t))\subset \mathcal{F}\setminus \mathcal{R}_{\textbf{o}}(t)$.
\end{proof}
\begin{remark}
In dual-target scenarios, in order to avoid the occlusion of one target by the other, the constraint (\ref{eq:TVR}) is additionally imposed, treating one of the targets as an obstacle.
\end{remark}
For the case where the reachable sets overlap, the TVR-O becomes a half-space bounded by a line tangential to $\mathcal{R}_{\textbf{o}}(t)$ and perpendicular to the segment connecting the centers of the target $\hat{\textbf{q}}(t)$ and the obstacle $\hat{\textbf{o}}(t)$, illustrated as straight lines in Fig. \ref{fig:TVRO2}. The TVR-O is then represented as
\begin{equation}
\label{eq:TVR_except}
\mathcal{V}_{O}(t)= \{\textbf{x}(t)\in \mathbb{R}^{2}|
{}_{\hat{\textbf{o}}}\hat{\textbf{q}}^T(t){}_{\hat{\textbf{q}}}\textbf{x}(t)- r_{q}(t)d_{1}(t)\geq 0 \}
\end{equation}
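Both cases of the TVR-O can be evaluated pointwise, as in the following sketch (our own illustration; the sign convention for the two classes should be checked against (\ref{eq:TVR})).
\begin{verbatim}
# Sketch: signed residual of the TVR-O half-space; >= 0 means x is inside.
import numpy as np

def tvro_residual(x, q_hat, o_hat, r_q, r_o, class_o1=True):
    qo = q_hat - o_hat
    d1 = np.linalg.norm(qo)
    r_qo = r_q + r_o
    if d1 > r_qo:                       # disjoint reachable sets
        d2 = np.sqrt(d1**2 - r_qo**2)
        s = 1.0 if class_o1 else -1.0   # double-sign convention
        M = np.array([[r_qo, s * d2], [-s * d2, r_qo]])
        return qo @ M @ (x - o_hat) - r_o * d1**2
    return qo @ (x - q_hat) - r_q * d1  # overlapping case
\end{verbatim}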
All terms in (\ref{eq:TVR})-(\ref{eq:TVR_except}) are polynomials except $d_1(t)$ and $d_2(t)$. We convert $d_1(t), d_2(t)$ into Bernstein polynomials with a numerical technique that originates from \textit{Lagrange interpolation} with the standard polynomial representation \cite{lagrange}.\\ For the time interval $[T_m,T_{m+1}]$, $d_i(t), i=1,2$ are approximately represented as $\tilde{d}_i(t)=\sum_{j=0}^{\tilde{n}_{i}} \tilde{d}_{i,m,j} B_{j,\tilde{n}_{i}}(t,T_{m},T_{m+1})$, where $\tilde{n}_{i}$ is the degree of $\tilde{d}_i(t)$, and the control points $\tilde{\textbf{d}}_i=[\tilde{d}_{i,m,0},\ldots,\tilde{d}_{i,m,\tilde{n}_{i}}]^T$ can be acquired as follows:
\begin{equation}
\tilde{\textbf{d}}_i= B_{\tilde{n}_{i}}^{-1}\textbf{d}_{i}'
\label{eq:lagrange_approximation}
\end{equation}
where $B_{\tilde{n}_{i}}=\{b_{j,k}\}\in \mathbb{R}^{(\tilde{n}_{i}+1)\times(\tilde{n}_{i}+1)}$ is a Bernstein-Vandermonde matrix \cite{vandermonde} with elements given by $b_{j,k}= \begin{pmatrix} \tilde{n}_{i} \\ k \end{pmatrix}$
${\Big(\dfrac{j}{\tilde{n}_{i}}\Big)}^{k}{\Big(\dfrac{\tilde{n}_{i}-j}{\tilde{n}_{i}}\Big)}^{\tilde{n}_{i}-k},\ j,k=0,\ldots,\tilde{n}_{i}$, and
$\textbf{d}_{i}'\in \mathbb{R}^{\tilde{n}_{i}+1}$ consists of $\tilde{n}_{i}+1$ sequential samples $d_i\big((1-\frac{l}{\tilde{n}_{i}})T_{m}+\frac{l}{\tilde{n}_{i}}T_{m+1}\big),\ l=0,\ldots,\tilde{n}_{i}$.\\
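A sketch of this fit (our own illustration) builds the Bernstein-Vandermonde matrix from equally spaced samples and solves for the control points.
\begin{verbatim}
# Sketch: Bernstein control points from Lagrange-style samples.
import numpy as np
from math import comb

def bernstein_from_samples(f, Tm, Tm1, n):
    # f: scalar function of t; returns n+1 Bernstein control points on [Tm, Tm1]
    B = np.array([[comb(n, k) * (j / n)**k * ((n - j) / n)**(n - k)
                   for k in range(n + 1)] for j in range(n + 1)])
    samples = np.array([f((1 - l / n) * Tm + (l / n) * Tm1)
                        for l in range(n + 1)])
    return np.linalg.solve(B, samples)
\end{verbatim}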
With the approximated terms $\tilde{d}_{1}(t), \tilde{d}_{2}(t)$, we represent the target visibility constraint as the following:
\begin{subequations}
\label{eq:visibility_constraint_revised}
\begin{align}
\label{subeq:tvro1}
&\text{Case 1}:\ t\in [T_{i},T_{i+1}],\ \text{s.t. } \mathcal{R}_{q}(t)\cap \mathcal{R}_{o}(t) = \emptyset : \\ \nonumber
& \quad \quad {}_{\hat{\textbf{o}}}\hat{\textbf{q}}^{T}(t)\begin{bmatrix}r_{qo}(t)& \mp \tilde{d}_2(t) \\ \pm \tilde{d}_2(t)& r_{qo}(t) \end{bmatrix}{}_{\hat{\textbf{o}}}\textbf{c}(t) - r_o(t){d}_{1}^{2}(t) \geq 0 \\
\label{subeq:tvro2}
&\text{Case 2}:\ t\in [T_{i},T_{i+1}],\ \text{s.t. } \mathcal{R}_{q}(t)\cap \mathcal{R}_{o}(t) \neq \emptyset : \\ \nonumber
& \quad \quad {}_{\hat{\textbf{o}}}\hat{\textbf{q}}^T(t){}_{\hat{\textbf{q}}}\textbf{c}(t) - r_{q}(t)\tilde{d}_1(t)\geq 0
\end{align}
\end{subequations}
\subsubsection{FOV constraints}
\label{subsubsec:fov_constraints} In addition to (\ref{eq:visibility_constraint_revised}), the camera FOV limit should be considered when the drone follows two targets.
The circles on which the inscribed angle subtended by the two target points $Q_{1}$ at $\hat{\textbf{q}}_{1}(t)$ and $Q_{2}$ at $\hat{\textbf{q}}_{2}(t)$ equals $\theta_{f}$ are uniquely defined and represented as follows.
\begin{equation}
\label{eq:FOV_circle}
\begin{aligned}
&\mathcal{D}_{i}(t) = \mathcal{B}(\textbf{f}_{i}(t), r_f(t)),\ i=1,2 \ ,\\
& \textbf{f}_{i}(t) = \frac{1}{2}\begin{bmatrix}
1 & (-1)^{i+1}\cot{\theta_{f}}\\
(-1)^{i}\cot{\theta_{f}} & 1
\end{bmatrix} {}_{\hat{\textbf{q}}_{1}}\hat{\textbf{q}}_{2}(t) +\hat{\textbf{q}}_{1}(t),\\
& r_{f}(t) = \frac{1}{2\sin{\theta_{f}}}\|{}_{\hat{\textbf{q}}_{1}}\hat{\textbf{q}}_{2}(t)\|_{2},\ {}_{\hat{\textbf{q}}_{1}}\hat{\textbf{q}}_{2}(t) = \hat{\textbf{q}}_{2}(t)-\hat{\textbf{q}}_{1}(t)
\end{aligned}
\end{equation}
According to the inscribed-angle property, $\angle Q_{1}XQ_{2} > \theta_{f}$ and $\angle Q_{1}YQ_{2} < \theta_{f}$ for all $X \in \mathcal{D}_{z}(t)\setminus \partial\mathcal{D}_{z}(t)$ and all $Y \notin \mathcal{D}_{z}(t)$, where $\mathcal{D}_{z}(t)$ is represented as follows.
\begin{equation}
\label{eq:FOV_dead_zone}
\mathcal{D}_{z}(t)= \begin{cases}
&\mathcal{D}_{1}(t)\cap\mathcal{D}_{2}(t) \quad (\text{if}\ \theta_{f}\geq \frac{\pi}{2})\\
&\mathcal{D}_{1}(t)\cup\mathcal{D}_{2}(t) \quad (\text{if}\ \theta_{f}< \frac{\pi}{2})
\end{cases}
\end{equation}
Since the camera FOV is $\theta_{f}$, a drone inside $\mathcal{D}_{z}(t)$ inevitably misses at least one target in the camera image, whereas a drone outside of $\mathcal{D}_{z}(t)$ is able to see both targets.
We define the target visible region considering camera FOV (TVR-F) as a half-space that does not include $\mathcal{D}_{z}(t)$ and is tangential to $\mathcal{D}_{z}(t)$ as shown in Fig. \ref{fig:TVRF}. It is represented as follows.
\begin{equation}
\label{eq:fov_constraint}
\mathcal{V}_{F}(t)=\{\textbf{x}(t)\,|\, \pm\det \begin{bmatrix}
{}_{\hat{\textbf{q}}_1}\textbf{x}^{T}(t)\\ {}_{\hat{\textbf{q}}_{1}}\hat{\textbf{q}}_{2}^{T}(t)
\end{bmatrix}-\frac{1+\cos{\theta_{f}}}{2\sin{\theta_{f}}}\|{}_{\hat{\textbf{q}}_{1}}\hat{\textbf{q}}_{2}(t) \|_{2}^{2} \geq 0\}
\end{equation}
Upper and lower signs are for \textit{Class F1} and \textit{Class F2}, respectively, where the class is defined as
\begin{equation}
\label{eq:topology_fov_check}
\begin{cases}
&\textit{Class F1} \ (\text{if}\ \det \begin{bmatrix}{}_{\hat{\textbf{q}}_{1}}\textbf{c}^T(0)\\ {}_{\hat{\textbf{q}}_{1}}\hat{\textbf{q}}_{2}^T(0) \end{bmatrix} \geq 0)\\
&\textit{Class F2} \ (\text{if}\ \det \begin{bmatrix}{}_{\hat{\textbf{q}}_{1}}\textbf{c}^T(0)\\ {}_{\hat{\textbf{q}}_{1}}\hat{\textbf{q}}_{2}^T(0) \end{bmatrix} < 0)\\
\end{cases}
\end{equation}
We make the drone satisfy $\textbf{c}(t)\in \mathcal{V}_{F}(t)$ so that it keeps both targets in view.
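As a concrete illustration, the following Python sketch (our own, with hypothetical function names) evaluates the dead-zone test of (\ref{eq:FOV_circle})-(\ref{eq:FOV_dead_zone}): it computes the two circle centers and the common radius, then checks whether a candidate drone position can keep both targets within the FOV.
\begin{verbatim}
import numpy as np

def sees_both_targets(x, q1, q2, theta_f):
    # Relative vector between the two predicted target centers.
    rel = q2 - q1
    cot = 1.0 / np.tan(theta_f)
    # Circle centers f_1, f_2 from the inscribed-angle construction.
    f1 = 0.5 * np.array([[1.0,  cot], [-cot, 1.0]]) @ rel + q1
    f2 = 0.5 * np.array([[1.0, -cot], [ cot, 1.0]]) @ rel + q1
    # Common radius r_f of both circles.
    r_f = np.linalg.norm(rel) / (2.0 * np.sin(theta_f))
    in1 = np.linalg.norm(x - f1) < r_f
    in2 = np.linalg.norm(x - f2) < r_f
    # Dead zone: intersection for a wide FOV, union for a narrow FOV.
    inside = (in1 and in2) if theta_f >= np.pi / 2 else (in1 or in2)
    return not inside
\end{verbatim}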
\subsection{Collision Avoidance}
\label{subsec:collision_avoidance}
To avoid collision with obstacles, we define a collision-free area as a half-space bounded by a line tangent to the set obtained by inflating $\mathcal{R}_{o}(t)$ by $r_{c}$, oriented toward $\textbf{c}_{0}^{*}(t)$, the planning result from the previous step. It is shown in Fig. \ref{fig:Collision_constraint} and can be represented as the following.
\begin{equation}
\label{eq:collisiion_constraint}
\mathcal{F}_{c}(t) = \{\textbf{x}(t)\in \mathbb{R}^{2}| {}_{\hat{\textbf{o}}}\textbf{c}_{0}^{*T}(t){}_{\hat{\textbf{o}}}\textbf{x}(t)\geq (r_{o}(t)+r_{c})\|{}_{\hat{\textbf{o}}}\textbf{c}_{0}^{*}(t)\|_{2} \}
\end{equation}
where ${}_{\hat{\textbf{o}}}\textbf{c}_{0}^{*}(t):=\textbf{c}_{0}^{*}(t)-\hat{\textbf{o}}(t)$. As in (\ref{eq:lagrange_approximation}), we approximate the non-polynomial term $\| {}_{\hat{\textbf{o}}}\textbf{c}_{0}^{*}(t)\|_{2}$ by a polynomial $\tilde{d}_{3}(t)$. With the approximated term $\tilde{d}_{3}(t)$, the collision constraint is defined as follows:
\begin{equation}
\label{eq:collisiion_constraint_revised}
{}_{\hat{\textbf{o}}}\textbf{c}_{0}^{*T}(t){}_{\hat{\textbf{o}}}\textbf{c}(t)- (r_{o}(t)+r_{c})\tilde{d}_{3}(t)\geq 0
\end{equation}
Since the product of Bernstein polynomials is again a Bernstein polynomial, the left-hand sides of (\ref{eq:visibility_constraint_revised}), (\ref{eq:fov_constraint}), and (\ref{eq:collisiion_constraint_revised}) can be represented in Bernstein polynomial form. Using the non-negativity of the Bernstein basis, we constrain the coefficient of each basis function to be non-negative, which keeps the left-hand sides non-negative, and (\ref{eq:visibility_constraint_revised}), (\ref{eq:fov_constraint}), (\ref{eq:collisiion_constraint_revised}) turn into affine constraints in the decision vector $\underbar{\textbf{c}}$. The constraints can be written as follows; we omit the details.
\begin{equation}
\label{eq:affine_constraint}
\begin{aligned}
A_{\text{TVR-O}}\underbar{\textbf{c}}& \geq \textbf{b}_{\text{TVR-O}}, \\
A_{\text{TVR-F}}\underbar{\textbf{c}}& \geq \textbf{b}_{\text{TVR-F}},\\
A_{\text{Colli}}\underbar{\textbf{c}}& \geq \textbf{b}_{\text{Colli}}
\end{aligned}
\end{equation}
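To make the construction concrete, the sketch below (ours; any polynomial library would serve equally) computes the control points of the product of two Bernstein polynomials on a common interval; requiring all resulting control points to be non-negative yields the sufficient, affine-in-coefficients condition used above.
\begin{verbatim}
import numpy as np
from math import comb

def bernstein_product(a, b):
    # Control points of p(t)*q(t) where p, q have control points a, b
    # of degrees m, n on the same interval; the product has degree m+n:
    # c[k] = sum_l C(m,l) C(n,k-l) / C(m+n,k) * a[l] * b[k-l].
    m, n = len(a) - 1, len(b) - 1
    c = np.zeros(m + n + 1)
    for k in range(m + n + 1):
        for l in range(max(0, k - n), min(k, m) + 1):
            c[k] += comb(m, l) * comb(n, k - l) / comb(m + n, k) \
                    * a[l] * b[k - l]
    return c

# Sufficient (conservative) certificate for p(t)*q(t) >= 0 on the
# interval: every control point of the product is non-negative.
\end{verbatim}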
\begin{figure}[t]
\centering
\begin{subfigure}[t]{0.24\textwidth}
\centering\includegraphics[width=\textwidth]{figure/SingleRefTraj_comp.png}
\caption{}
\label{fig:SingleRefTraj}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.24\textwidth}
\centering\includegraphics[width=\textwidth]{figure/DualRefTraj_comp.png}
\caption{}
\label{fig:DualRefTraj}
\end{subfigure}
\caption{Reference trajectory design in single- and dual-target scenarios with targets (red) among obstacles (grey). \textbf{(a)}: The points having the maximum visibility score $\psi$ in $\mathcal{C}_{s}(\hat{\textbf{q}}(t),r_d)$ (green), the trajectory with the maximum $\psi$, $\textbf{s}^{*}(t)$ (magenta), and the reference trajectory $\textbf{c}^{*}(t)$ (blue line) considering the current position of the drone (blue dot). \textbf{(b)}: $\mathcal{C}_{d}^{\gamma_{c}}(\hat{\textbf{q}}_{1}(t),\hat{\textbf{q}}_{2}(t))$ (black circle), the trajectory with high $\psi$, $\textbf{d}^{*}(t)$ (magenta), and the reference trajectory $\textbf{c}^{*}(t)$ (blue line) considering the current position of the drone (blue dot).}
\label{fig:ref_traj}
\end{figure}
\subsection{Reference Trajectory for Target Tracking}
\label{subsec:reference_trajectory}
In this section, we propose a reference trajectory for target chasing that enhances the visibility of the targets.
\subsubsection{Single target}
\label{subsubsec:reference_single_target}
Following \cite{boseong_iros}, the visibility score is computed using the Euclidean Distance Field (EDF) $\phi(\cdot)$ \cite{edf} in designing the reference trajectory. The visibility score is defined as the closest distance between the $i$-th obstacle $\mathcal{O}_i$ and the \textit{Line-of-Sight} connecting the target and the drone, and it is represented as
\begin{equation}
\psi(\textbf{c}(t);\hat{\textbf{q}}(t),\mathcal{O}_{i}) =\min_{\textbf{x}\in L(\textbf{c}(t),\hat{\textbf{q}}(t))}\phi(\textbf{x},\mathcal{O}_{i})
\label{eq:visibility_metric}
\end{equation}
To keep the apparent size of the target projected onto the camera image consistent, we set a desired shooting distance $r_{d}$. With the desired shooting distance $r_d$, a viewpoint candidate set can be defined as $\mathcal{C}_{s}(\hat{\textbf{q}}(t),r_d)$ $=\partial \mathcal{B}(\hat{\textbf{q}}(t),r_{d})$. Under the assumption that the environment consists of cylindrical obstacles, half of the circumference of $\mathcal{C}_{s}(\hat{\textbf{q}}(t),r_d)$ attains the maximum visibility score, as illustrated in green in Fig. \ref{fig:SingleRefTraj}. Therefore, the following trajectory maintains the maximum $\psi$ against $\mathcal{O}_i$.
\begin{equation}
\label{eq:ref_traj_single}
\textbf{s}_{i}(t)=
\textbf{q}(t)+r_{d}[\cos{\delta_i(t)},\sin{\delta_i(t)}]^T
\end{equation}
where $\delta_{i}(t)=\tan^{-1}\big({}_{\hat{\textbf{o}}_{i}}\hat{\textbf{q}}_{y}(t)/{}_{\hat{\textbf{o}}_{i}}\hat{\textbf{q}}_{x}(t)\big)\mp \frac{\pi}{2}$. The upper and lower signs are for \textit{Class O1} and \textit{Class O2}, respectively. We define the reference trajectory as a weighted sum of $\textbf{s}_{i}(t)$.
\begin{equation}
\label{eq:single_ref_traj_weighted}
\textbf{s}^{*}(t) = \Big(\sum_{i=1}^{N_{o}}w_{i}\Big)^{-1} \sum_{i=1}^{N_{o}}w_{i}\textbf{s}_{i}(t)
\end{equation}
The $w_i$ are weight functions inversely proportional to the current distance between the target and each obstacle.
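A minimal Python sketch of (\ref{eq:ref_traj_single})-(\ref{eq:single_ref_traj_weighted}) follows; the encoding of the class as $\pm 1$ and the exact inverse-distance weight are our own illustrative choices.
\begin{verbatim}
import numpy as np

def single_target_reference(q_hat, obstacle_centers, classes, r_d):
    # classes[i] = +1 for Class O1 (upper sign), -1 for Class O2.
    s_pts, weights = [], []
    for o_hat, cls in zip(obstacle_centers, classes):
        rel = q_hat - o_hat                    # obstacle-to-target vector
        delta = np.arctan2(rel[1], rel[0]) - cls * np.pi / 2.0
        s_pts.append(q_hat + r_d * np.array([np.cos(delta),
                                             np.sin(delta)]))
        weights.append(1.0 / np.linalg.norm(rel))  # inverse-distance w_i
    w = np.asarray(weights)
    # Weighted average of the per-obstacle viewpoints, i.e. s*(t).
    return (w[:, None] * np.asarray(s_pts)).sum(axis=0) / w.sum()
\end{verbatim}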
\subsubsection{Dual target}
\label{subsubsec:reference_dual_target}
To compose aesthetically pleasing scenes, we place the two targets in a ratio of $1:\gamma_{c}:1$ on the camera image. To do so, the drone must lie on a set $\mathcal{C}_{d}^{\gamma_{c}}(\hat{\textbf{q}}_{1}(t),\hat{\textbf{q}}_{2}(t))$, which is defined as follows and is illustrated as a black circle in Fig. \ref{fig:DualRefTraj}.
\begin{equation}
\label{fig:ref_set_dual}
\begin{aligned}
&\mathcal{C}_{d}^{\gamma_{c}}(\hat{\textbf{q}}_{1}(t),\hat{\textbf{q}}_{2}(t))= \partial \mathcal{B}(\textbf{f}_{c}(t),r_{f,c}(t)),\\
& \textbf{f}_{c}(t) = \frac{1}{2}(\hat{\textbf{q}}_{1}(t)+\hat{\textbf{q}}_{2}(t))\pm \\
& \quad \frac{\gamma_{c}+2}{4\gamma_{c}}\cot{\frac{\theta_{f}}{2}}\Big(1-\frac{\gamma_{c}^{2}}{(\gamma_{c}+2)^{2}}\tan^{2}{\frac{\theta_f}{2}}\Big)
\begin{bmatrix}
0&1\\-1&0
\end{bmatrix} {}_{\hat{\textbf{q}}_{1}}\hat{\textbf{q}}_{2}(t),\\
& r_{f,c}(t)= \frac{\gamma_{c}+2}{4\gamma_{c}}\cot{\frac{\theta_{f}}{2}}\Big(1+\frac{\gamma_{c}^2}{(\gamma_{c}+2)^{2}}\tan^{2}{\frac{\theta_f}{2}} \Big) \|{}_{\hat{\textbf{q}}_1}\hat{\textbf{q}}_{2}(t) \|_{2}
\end{aligned}
\end{equation}
where the upper and lower signs are for \textit{Class F1} and \textit{Class F2}, respectively. We define the reference trajectory so as to attain a high visibility score $\psi$ while maintaining the ratio.
\begin{equation}
\label{eq:ref_traj_dual}
\begin{aligned}
& \textbf{d}^{*}(t) = \textbf{f}_{c}(t)+ r_{f,c}(t)[\cos{\delta_{d}}(t), \sin{\delta_{d}(t)}]^{T},\\
& \delta_{d}(t) = \tan^{-1}{\frac{\sum_{i=1}^{2N_{o}}w_{i}\sin{\delta_{i}(t)}+w_{m}\sin{\delta_{m}(t)}}{\sum_{i=1}^{2N_{o}}w_{i}\cos{\delta_{i}(t)}+w_{m}\cos{\delta_{m}(t)}}}
\end{aligned}
\end{equation}
where $\delta_{m}(t)=\tan^{-1}\big({}_{\hat{\textbf{q}}_{1}}\hat{\textbf{q}}_{2y}(t)/{}_{\hat{\textbf{q}}_{1}}\hat{\textbf{q}}_{2x}(t)\big)\mp \frac{\pi}{2}$, and $w_{i}$ and $\delta_{i}(t)$ are defined as in (\ref{eq:ref_traj_single}), for the two targets. The upper and lower signs are for \textit{Class F1} and \textit{Class F2}, respectively. $\delta_{m}(t)$ is the direction furthest from the two targets, chosen to maximize the metric $\psi(\textbf{c}(t);\hat{\textbf{q}}_{1}(t),\hat{\textbf{q}}_{2}(t))$ + $\psi(\textbf{c}(t);\hat{\textbf{q}}_{2}(t),\hat{\textbf{q}}_{1}(t))$, which represents the mutual visibility between the two targets.
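The angle blend in (\ref{eq:ref_traj_dual}) is a weighted circular mean; a small sketch of that step (ours) reads:
\begin{verbatim}
import numpy as np

def blended_angle(deltas, weights):
    # Combine candidate directions by summing weighted sines and
    # cosines, then recovering the angle with atan2 (the delta_d step).
    deltas, weights = np.asarray(deltas), np.asarray(weights)
    s = (weights * np.sin(deltas)).sum()
    c = (weights * np.cos(deltas)).sum()
    return np.arctan2(s, c)
\end{verbatim}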
For numerical stability in the QP solver, the reference trajectory is redefined by interpolating between $\boldsymbol{\mu}^{*}(t)$, $\boldsymbol{\mu}=\textbf{s},\textbf{d}$, and the current position of the drone $\textbf{c}^{(0)}_{0}$. With a non-decreasing polynomial function $\alpha(t)$ such that $\alpha(0)=0$, $\alpha(T)=1$, the reference trajectory is defined as follows.
\begin{equation}
\textbf{c}^{*}(t) = \big(1-\alpha(t)\big)\textbf{c}^{(0)}_{0}+\alpha(t)\boldsymbol{\mu}^{*}(t)
\label{eq:ref_traj}
\end{equation}
Since the trigonometric terms in (\ref{eq:single_ref_traj_weighted}) and (\ref{eq:ref_traj_dual}) are non-polynomial, \textit{Lagrange interpolation} is applied as in (\ref{eq:lagrange_approximation}).
The approximated $\textbf{c}^{*}(t)$ is denoted as $\tilde{\textbf{c}}^{*}(t)$.
Based on the construction of reference trajectory and target visibility constraints, we formulate the trajectory optimization problem as a QP problem.
\begin{figure}[t]
\centering
\includegraphics[width = 0.9\linewidth]{figure/Segmentation_comp.png}
\caption{Time segmentation. Green and blue regions are time intervals when the (\ref{subeq:tvro1}) and (\ref{subeq:tvro2}) are adopted as visibility constraints, respectively.}
\label{fig:trajectory_segmentation}
\end{figure}
\subsection{QP Formulation}
\label{subsec:QP_formulation}
\subsubsection{Trajectory segmentation}
\label{subsubsec:segmentation}The constraint (\ref{eq:visibility_constraint_revised}) is divided into two cases: when the reachable sets are separated and when they overlap. To apply (\ref{eq:visibility_constraint_revised}) according to the situation, the roots of the equations $\|{}_{\hat{\textbf{o}}}\hat{\textbf{q}}(t)\|_{2}=r_{qo}(t)$ and $\|{}_{\hat{\textbf{q}}_{1}}\hat{\textbf{q}}_{2}(t)\|_{2}=r_{q1}(t)+r_{q2}(t)$ should be investigated for $t\in(0,T)$.
$[T_{0},\ldots, T_{M}]$ is defined through root finding, and we set $M$ polynomial segments to optimize, as stated in (\ref{eq:segment_representation}). Fig. \ref{fig:trajectory_segmentation} visualizes the above process.\\
By virtue of De Casteljau's algorithm \cite{bebot}, a single Bernstein polynomial such as $\hat{\textbf{q}}(t)$ or $r_{q}(t)$ can be divided into $M$ Bernstein polynomials. With this $M$-segment polynomial representation, we formulate a QP problem.
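For reference, a compact Python sketch of one De Casteljau split (ours; applying it recursively at each $T_m$ yields the $M$ segments):
\begin{verbatim}
import numpy as np

def de_casteljau_split(ctrl, tau):
    # Split a Bernstein polynomial with control points `ctrl` on [0, 1]
    # at parameter tau into left ([0, tau]) and right ([tau, 1])
    # Bernstein polynomials of the same degree.
    pts = np.asarray(ctrl, dtype=float)
    left, right = [pts[0]], [pts[-1]]
    while len(pts) > 1:
        pts = (1.0 - tau) * pts[:-1] + tau * pts[1:]  # one triangle level
        left.append(pts[0])
        right.append(pts[-1])
    return np.array(left), np.array(right)[::-1]
\end{verbatim}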
\begin{table*}[t]
\centering
\caption{Computation time in QP solver}
\label{tab:planning_time}
\begin{tabular}{rrr|rrrrr|rrrrr|rrrrr}
\toprule
& Computation & & \multicolumn{5}{c}{$n_{c}=4$} & \multicolumn{5}{c}{$n_{c}=5$} & \multicolumn{5}{c}{$n_{c}=6$}\\
& time [ms] & & \multicolumn{5}{c}{\# polynomial segment\ $(M)$} & \multicolumn{5}{c}{\# polynomial segment\ $(M)$} & \multicolumn{5}{c}{\# polynomial segment\ $(M)$}\\
& & &1 &2 & 3 & 4 & 5 &1 &2 & 3 & 4 & 5 &1 &2 & 3 & 4 & 5\\ \hline
\multirow{4}{*}{{\rotatebox[origin=c]{90}{Single}}}
&\# obstacle
& 1 & $0.17$ & $0.54$ & $\cdot$ & $\cdot$ & $\cdot$
& $0.19$ & $0.77$ & $\cdot$ & $\cdot$ & $\cdot$
& $0.21$ & $1.16$ & $\cdot$ & $\cdot$ & $\cdot$\\
& $(N_o)$ & 2 & $0.18$ & $0.71$ & $2.07$ & $\cdot$ & $\cdot$
& $0.22$ & $1.12$ & $2.95$ & $\cdot$ & $\cdot$
& $0.24$ & $1.66$ & $4.45$ & $\cdot$ & $\cdot$\\
& & 3 & $0.29$ & $0.98$ & $2.91$ & $6.05$ & $\cdot$
& $0.34$ & $1.23$ & $4.58$ & $8.05$ & $\cdot$
& $0.43$ & $1.82$ & $5.62$ & $12.3$ & $\cdot$\\
& & 4 & $0.32$ & $1.01$ & $2.75$ & $7.81$ & $12.2$
& $0.39$ & $1.38$ & $4.24$ & $12.6$ & $18.8$
& $0.55$ & $1.88$ & $6.81$ & $16.8$ & $24.8$\\
\midrule
\multirow{3}{*}{{\rotatebox[origin=c]{90}{Dual}}}
& \# obstacle
& 1 & $0.18$ & $0.66$ & $1.94$ & $5.34$ & $\cdot$
& $0.20$ & $0.91$ & $2.95$ & $7.61$ & $\cdot$
& $0.26$ & $1.38$ & $4.46$ & $10.5$ & $\cdot$\\
& $(N_o)$& 2 & $0.21$ & $1.21$ & $3.94$ & $5.82$ & $13.6$
& $0.46$ & $1.72$ & $6.78$ & $9.48$ & $21.4$
& $0.55$ & $2.41$ & $10.8$ & $17.8$ & $36.3$\\
& & 3 & $0.43$ & $1.67$ & $3.86$ & $8.54$ & $17.8$
& $0.78$ & $2.51$ & $5.62$ & $13.5$ & $28.6$
& $0.81$ & $3.72$ & $8.91$ & $21.8$ & $43.6$\\
\bottomrule
\end{tabular}%
\end{table*}
\subsubsection{Constraints}
\label{subsubsec:constraint}
We set up affine constraints with respect to $\underbar{\textbf{c}}$ to keep dynamic feasibility and the visibility of the target and avoid collision. Dynamic constraints consist of constraints regarding an initial state, limits of velocity and acceleration, and continuity between consecutive polynomial segments. For the details, see Appendix \ref{appendix:dynamic_constraints}. Then, for the visibility of the target and the safety of the drone, the constraints in (\ref{eq:affine_constraint}) are applied.
\subsubsection{Costs}
\label{cost}
We minimized jerky motion and reference tracking errors.
\begin{equation}
\label{eq:qp_cost}
J = \int_{0}^{T}\|\textbf{c}^{(3)}(t)\|_{2}^{2}+w_{e}\|\textbf{c}(t)-\tilde{\textbf{c}}^{*}(t)\|_{2}^{2}dt
\end{equation}
$w_{e}$ is a weight factor of tracking cost. The designed cost is quadratic with the decision vector $\underbar{\textbf{c}}$. For the details, see Appendix. \ref{appendix:cost}.
Since all the constraints are affine, and the cost is quadratic with respect to $\underbar{\textbf{c}}$, the \textit{Chasing Problem} (\ref{eq:chasing_problem}) is reformulated as a QP problem.
\begin{equation}
\label{eq:qp_final}
\begin{aligned}
&\underset{\underbar{\textbf{c}}}{\text{min}} && \underbar{\textbf{c}}^{T}Q_{c}\underbar{\textbf{c}} +\textbf{g}_{c}^{T}\underbar{\textbf{c}} \\
&\text{s.t.} && A_{c}\underbar{\textbf{c}}\geq\textbf{b}_{c}\\
\end{aligned}
\end{equation}
\subsection{Evaluation}
\label{subsec:planning_evaluation}
We set $n_c$ to 4, 5, and 6 and run 5000 trials for scenarios with $N_{o}=1,2,3,4$. qpOASES \cite{qpoases} is utilized as the QP solver, and the execution time to solve a QP problem is summarized in Table \ref{tab:planning_time}. As the table shows, the computation time increases as the number of segments and obstacles increases. To satisfy the real-time criterion (10 Hz) as $N_{o}$ increases, we reduce $r_{q}(t)$ by declaring the $\epsilon_{p}|\mathcal{P}_{s}|$ furthest primitives as outliers, removing them from $\mathcal{P}_{s}$, and recalculating $\mathcal{R}_{\textbf{q}}(t)$. This is visualized in Fig. \ref{fig:ReachSet80} and lowers the number of segments $M$. With this strategy, real-time planning is possible even in complex and dense environments.
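A sketch of this outlier-removal step (ours; the quantile-based cut is one reasonable reading of removing the furthest $\epsilon_{p}|\mathcal{P}_{s}|$ primitives):
\begin{verbatim}
import numpy as np

def trim_primitives(endpoints, center, eps_p):
    # Drop the eps_p fraction of primitive endpoints furthest from the
    # representative center; the reachable-set radius then shrinks to
    # the largest remaining distance.
    d = np.linalg.norm(endpoints - center, axis=1)
    keep = d <= np.quantile(d, 1.0 - eps_p)
    return endpoints[keep], d[keep].max()
\end{verbatim}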
\section{Chasing Scenario Validation}
\label{sec:validation}
\subsection{Implementation Details}
\label{subsec:implementation_details}
We perform AirSim \cite{airsim} simulations to benchmark our method against \cite{ijcas} and \cite{access}. In real-world experiments, ZED2 stereo cameras \cite{zed} are mounted on the drone, and an Intel NUC and a Jetson Xavier NX are equipped as onboard computers. A Pixhawk4 is used as the flight controller. In addition to the odometry function of the camera, we utilize 3D human pose estimation, and humans are distinguished by color as in \cite{access}. The algorithm in \cite{obstacle_detector} is used to build a static obstacle map.
In scenarios involving multiple moving objects, visual odometry suffers from dynamic objects in the camera image. We therefore install two ZED2 cameras facing opposite directions: the front-view sensor is for object detection and tracking, and the rear-view one is for the localization of the robot. Since the rear-view sensor does not capture the moving objects, the localization system is free from visual interference by dynamic objects, which prevents degradation of localization performance.
In both simulations and experiments, two actors move in the environments: in single-target scenarios, one is the target and the other is considered a dynamic obstacle (interrupter), whereas in dual-target scenarios both actors are targets. For consistent acquisition of positional and scale data of the target and the interrupter, the drone looks between them. Besides, the drone adjusts its altitude to the height of the target. The parameters used in the validations are summarized in Table \ref{table:parameter}.
\begin{table}[t]
\caption{Problem Settings}
\resizebox{\linewidth}{!}
{\begin{tabular}{lcc}
\toprule
\multicolumn{3}{c}{\textbf{Scenario Settings}}\\
Name &Single Target &Dual Target\\ \hline
drone radius ($r_c$) [m] & 0.4 &0.4 \\
camera FOV $(\theta_{f})$ $[{}^{\circ}]$ &120 &120 \\\hline
\multicolumn{3}{c}{\textbf{Prediction Parameters}}\\
Name &Single Target &Dual Target\\ \hline
time horizon ($T$) [s] & 1.5 & 1.5 \\
$\#$ sampled points ($N_{samp}$) &1000 & 1000\\\hline
\multicolumn{3}{c}{\textbf{Planning Parameters}}\\
Name &Single Target &Dual Target\\
\hline
polynomial degree $(n_{c})$ & 6 &6 \\
max velocity ($v_{\text{max}}$) [m/s] &4.0 &4.0 \\
max acceleration ($a_{\text{max}}$) [$\text{m}/\text{s}^2$] &5.0 &5.0 \\
shooting distance ($r_{d}$) [m] &$4.0$ &$\cdot$\\
screen ratio ($\gamma_{c}$) &$\cdot$ &1.0\\
tracking weight ($w_e$) &10.0 &10.0 \\
jerk weight ($w_{j}$) &1.0 &1.0 \\
\bottomrule
\end{tabular}}
\label{table:parameter}
\vspace{-4mm}
\end{table}
\begin{figure*}[t!]
\centering
\includegraphics[width = 1.0\textwidth]{figure/sim_recap_comp.png}
\caption{Comparison between the proposed planner (blue) and the baselines (magenta) \cite{ijcas},\cite{access}. Scenarios 1-2 and Scenarios 3-4 represent single-target and dual-target scenarios, respectively. In the distance plots, areas below the red lines mean collision, and in the visibility-score plots, areas below the red lines mean target occlusion. Red-shaded regions mark the situations where the drone driven by the baselines incurs collisions and occlusions.}
\label{fig:sim_recap}
\end{figure*}
\subsection{Simulations}
\label{subsec:simulations}
\begin{figure*}[t!]
\centering
\includegraphics[width = 1.0\textwidth]{figure/single_sim_result_comp.png}
\caption{Single-target scenarios. The red and green lines represent the overall paths of the target and the dynamic obstacle, respectively. Our planner makes the chasing drone move along the blue path. The magenta paths are generated by the baseline \cite{ijcas}. The bottom figures are snapshots of the camera images when the interrupter intersects the target's path and tries to conceal the target's body.}
\label{fig:single_sim_result}
\end{figure*}
\begin{figure*}[t!]
\centering
\includegraphics[width = 1.0\textwidth]{figure/dual_sim_result_comp.png}
\caption{Dual-target scenarios. Two targets move along the red and green paths among obstacles (brown). Snapshots capture the moments when the drone driven by \cite{access} (magenta) misses one of the targets in the camera images. Blue paths are generated by our planner and ensure that the drone sees both targets without occlusion.}
\label{fig:dual_sim_result}
\end{figure*}
We test the proposed planner with four scenarios where two actors run in a forest.
The first two scenarios are cases where the first actor is the target and the other acts as an interrupter, while in the remaining two scenarios both actors are targets. Of the many flight tests, we extract and report flights of 30-40 seconds in this paper. Despite the heavy computational load imposed by the physics engine, the planning pipeline executes within 30 milliseconds.
In the simulation, we compare the proposed planner with other state-of-the-art planners: \cite{ijcas} in single-target chasing scenarios and \cite{access} in dual-target chasing scenarios. We measure the distance between the drone and the targets $\|{}_{\textbf{q}}\textbf{c}(t)\|_{2}$, the minimum distance between the drone and the obstacles $\|{}_{\textbf{o}}\textbf{c}(t)\|_{2} := \min(\|{}_{\textbf{o}_{1}}\textbf{c}(t)\|_{2},\ldots,\|{}_{\textbf{o}_{N_{o}}}\textbf{c}(t)\|_{2})$, and a visibility score defined as follows:
\begin{equation}
\label{eq:min_visibility_score}
\underset{\mathcal{O}_{i}\in \mathcal{O}}{\min} \psi(\textbf{c}(t);\textbf{q}(t),\mathcal{O}_{i})
\end{equation}
As shown in Fig. \ref{fig:sim_recap}, Fig. \ref{fig:single_sim_result}, and Fig. \ref{fig:dual_sim_result}, the proposed approach makes the drone follow the target safely while maintaining target visibility, whereas collisions with obstacles and target occlusions by the interrupter occur several times when the drone flies with the baselines \cite{ijcas,access}. Furthermore, in contrast to the baselines, the proposed planner adjusts the chasing path so that the drone does not move too far away from the targets. This behavior prevents degradation of visual object detection and tracking performance.
\subsection{Experiments}
\label{subsec:experiments}
To validate our chasing strategy, real-world experiments with four and three scenarios are executed for single-target and dual-target tracking, respectively. Two actors move in an $8\times11$ m$^{2}$ indoor space with stacked bins. As in the simulations, in single-target scenarios one actor moves while the other tries to block the target in the drone's camera view, whereas in dual-target scenarios both actors are targets.
\begin{figure*}[t!]
\centering
\includegraphics[width = 1.0\textwidth]{figure/single_exp_result_comp.png}
\caption{Autonomous aerial tracking in an indoor environment with single-target scenarios. \textbf{Scenario1}: The target (red) moves among white cylindrical obstacles, and the drone (blue) follows the target. \textbf{Scenario2}: The target moves away from the drone, and the interrupter (green) cuts in between the target and the drone. \textbf{Scenario3}: The target moves all over the space, and the interrupter consistently disturbs the target's tracking. \textbf{Scenario4}: The interrupter intentionally hides behind bins and appears abruptly to obstruct the camera's view. Stars are the start points of moving objects, blue crosses mean the position of the drone at captured moments, and purple segments represent line-of-sight between the target and the drone.}
\label{fig:single_exp}
\end{figure*}
\begin{figure*}[t!]
\centering
\includegraphics[width = 0.8\textwidth]{figure/dual_exp_result_comp.png}
\caption{Autonomous aerial tracking in an indoor environment with dual-target scenarios. \textbf{Scenario1}: The targets (red, green) move in a circular motion. \textbf{Scenario2}: The targets move while repeatedly widening and narrowing their relative distance. \textbf{Scenario3}: The targets move among stacked bins (grey). The drone follows targets along blue paths, stars are the start points of moving objects, and blue crosses mean the position of the drone at captured moments.}
\label{fig:dual_exp}
\end{figure*}
We update the chasing trajectory at 10 Hz, while the actual computation of the entire pipeline takes less than 25 milliseconds. Fig. \ref{fig:single_exp} shows situations where the drone generates a target-visible path considering the interrupter's interference and the obstacles, and Fig. \ref{fig:dual_exp} shows situations where the drone follows two targets along trajectories that account for the camera FOV and obstacles. From Fig. \ref{fig:exp_recap}, we can confirm that the chasing drone's path is free from collision and occlusion.
To record camera images within limited computer storage and to transfer images between computers, we use compressed RGB and depth images in the pipeline. Since the compressed data is acquired at a low rate, information about the moving objects is updated slowly; therefore, the trajectories of the moving objects drawn in Fig. \ref{fig:single_exp} and Fig. \ref{fig:dual_exp} appear jerky. Nevertheless, the drone succeeds in keeping the target visible and itself safe by quickly updating the chasing trajectory.
\section{Conclusion}
\label{Conclusion}
We propose a real-time target chasing planner among static and dynamic obstacles. First, we calculate the reachable sets of moving objects in obstacle environments with Bernstein polynomial primitives. Then, to prevent target occlusion, we define a continuous-time target-visible region (TVR) based on the concept of path homotopy, while considering the camera field-of-view limit. A reference trajectory for target tracking is designed and combined with the TVR to formulate trajectory optimization as a single QP problem. The proposed QP formulation generates dynamically feasible, collision-free, and occlusion-free chasing trajectories in real-time. We extensively demonstrate the effectiveness of the proposed planner through challenging scenarios, including realistic simulations and indoor experiments. In future work, we plan to extend our work to chasing multiple targets in dynamic environments.
\begin{figure*}[t!]
\centering
\includegraphics[width = 1.0\textwidth]{figure/exp_recap_comp.png}
\caption{Distances between the drone and objects (obstacles and targets) and the visibility score (\ref{eq:visibility_metric}) reported in the experiments. In the distance plots, areas below the red lines mean collision, and in the visibility-score plots, areas below the red lines mean target occlusion. Single-target scenarios \{1,2,3,4\} correspond to \{magenta, green, blue, orange\}, while dual-target scenarios \{1,2,3\} correspond to \{magenta, green, blue\}.}
\label{fig:exp_recap}
\end{figure*}
\appendices
\section{Implementation of Collision Check}
\label{appendix:collision_check}
\begin{equation}
\label{eq:collision_const_bernstein}
\begin{aligned}
&\|\hat{\textbf{p}}_{i}(t)-\textbf{o}_{j}(t) \|_{2}^{2}-(r_{p0}+r_{o_j}(t))^{2} \geq 0 \Longleftrightarrow\\
&\sum_{k=0}^{2n_{p}}\begin{pmatrix}
2n_p \\ k
\end{pmatrix}^{-1} {}^{ij}C_{k} B_{k,2n_{p}}(t,0,T) \geq 0, \ {}^{\forall}j\in \{1,\ldots,|\mathcal{O}|\}, \\
&\text{where}\\
&{}^{ij}C_{k}\ = \ \
\sum_{l=\max(0,k-n_p)}^{\min(k,n_p)}
\begin{pmatrix}
n_p \\ l
\end{pmatrix}
\begin{pmatrix}
n_p \\ k-l
\end{pmatrix}
\Big ( {}^{ij}_{o}\textbf{p}_{x}[l]\ {}^{ij}_{o}\textbf{p}_{x}[k-l]\\
& \quad \quad \quad \quad \quad \quad \quad \quad +{}^{ij}_{o}\textbf{p}_{y}[l]\ {}^{ij}_{o}\textbf{p}_{y}[k-l]
- {}^{ij}_{o}\textbf{r}[l]\ {}^{ij}_{o}\textbf{r}[k-l]
\Big ),\\
&{}^{ij}_{o}\textbf{p}_{x}=\textbf{p}_{i(x)}-\textbf{o}_{j(x)},\ {}^{ij}_{o}\textbf{p}_{y}=\textbf{p}_{i(y)}-\textbf{o}_{j(y)}, \
{}^{ij}_{o}\textbf{r}=\textbf{r}_{o_{j}}+r_{p0}
\end{aligned}
\end{equation}
$[l]$ denotes the $l$-th element of a vector. The condition ${}^{ij}C_{k}\geq 0,\ k=0,\ldots,2n_{p}$ ensures that the moving objects do not collide with the obstacles during $[0, T]$.
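A direct Python transcription of the coefficient formula above (ours) is:
\begin{verbatim}
import numpy as np
from math import comb

def collision_coeffs(px, py, r):
    # Coefficients C_k of ||p(t)||^2 - r(t)^2 in the degree-2n Bernstein
    # basis, given degree-n control points of the relative position
    # (px, py) and of the summed radius r. All C_k >= 0 implies no
    # collision on [0, T].
    n = len(px) - 1
    C = np.zeros(2 * n + 1)
    for k in range(2 * n + 1):
        for l in range(max(0, k - n), min(k, n) + 1):
            C[k] += comb(n, l) * comb(n, k - l) * (
                px[l] * px[k - l] + py[l] * py[k - l]
                - r[l] * r[k - l])
    return C  # the basis expansion divides by comb(2n, k) > 0
\end{verbatim}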
\section{Proof of Proposition 1}
\label{appendix:prediction_proof}
Motion primitives for object prediction derived from (\ref{eq:primitive_optimization_problem}) are represented as follows.
\begin{equation}
\label{eq:appendix:primitive_solution}
\begin{aligned}
&\hat{\textbf{p}}_{i}(t)= \textbf{P}_{n_{p}}^{T}\textbf{B}_{n_p}(t,T)\\
&\text{where}\ \textbf{P}_{n_{p}}=E_{3,n_{p}}^{T}\textbf{P}_{3}^{*},\\
&\textbf{P}_{3}^{*}=[\hat{\textbf{p}}_{0},\hat{\textbf{p}}_{0}+\frac{1}{3}\hat{\textbf{p}}^{(1)}_{0}T,\frac{2}{3}\hat{\textbf{p}}_{0}+\frac{1}{3}\textbf{s}_{p,i}+\frac{1}{3}\hat{\textbf{p}}^{(1)}_{0}T, \textbf{s}_{p,i}]^{T}, \\
&\textbf{B}_{n_p}(t,T) = [B_{0,n_{p}}(t,0,T),\ldots,B_{n_{p},n_{p}}(t,0,T)]^{T},\ n_{p}\geq3,\\
&E_{m,n}=\{e_{j,k}\}\in \mathbb{R}^{(n+1)\times(m+1)},\ m\geq n, \\
& e_{j,j+k} = \frac{\begin{pmatrix} n \\j
\end{pmatrix}\begin{pmatrix}m-n \\ k \end{pmatrix}}{\begin{pmatrix} m\\j+k\end{pmatrix}}
\end{aligned}
\end{equation}
The difference between $\hat{\textbf{p}}_{i}(t)$ and $\hat{\textbf{p}}_{j}(t)$ is $\|\hat{\textbf{p}}_{i}(t)-\hat{\textbf{p}}_{j}(t)\|=\|\textbf{s}_{p,i}- \textbf{s}_{p,j}\|\frac{t^{2}}{T^{2}}$. Therefore, to minimize the distance sum in (\ref{eq:best_primitive}), it suffices to investigate only the sampled endpoints.
\section{QP Implementation Details}
\label{appendix:qp_implementaion}
\subsection{Dynamic Feasibility Constraints}
\label{appendix:dynamic_constraints}
Dynamic constraints in Section \ref{subsec:QP_formulation} are formulated as follows. First, the trajectory is made to satisfy an initial condition, position and velocity as in (\ref{subeq:appendix_initial_position_constraint}),(\ref{subeq:appendix_initial_vel_constraint}).
$\mathcal{C}_2$ continuity between consecutive segments is achieved by (\ref{subeq:appendix_conti_pos_constraint}),(\ref{subeq:appendix_conti_vel_acc_constraint}). Velocity and acceleration are limited by (\ref{subeq:appendix_vel_max_constraint}),(\ref{subeq:appendix_acc_max_constraint}).
\begin{subequations}
\label{eq:appendix_dynamic_constraint}
\begin{align}
\label{subeq:appendix_initial_position_constraint}
&\begin{bmatrix}
\textbf{c}_{1(x)}[0]\\\textbf{c}_{1(y)}[0]
\end{bmatrix}=\textbf{c}_{0}^{(0)}\\
\label{subeq:appendix_initial_vel_constraint}
&\frac{n_{c}}{(T_{1}-T_0)}\begin{bmatrix}
\textbf{c}_{1(x)}[1]-\textbf{c}_{1(x)}[0]\\
\textbf{c}_{1(y)}[1]-\textbf{c}_{1(y)}[0]
\end{bmatrix}=\textbf{c}^{(1)}_{0}\\
\label{subeq:appendix_conti_pos_constraint}
&\textbf{c}_{m(\mu)}[n_{c}]=
\textbf{c}_{m+1(\mu)}[0],\mu=x,y, \ m=1,\ldots,M-1\\
\label{subeq:appendix_conti_vel_acc_constraint}
&\sum_{j=0}^{k}(-1)^j \bigg \{ \frac{\textbf{c}_{m(\mu)}[n_c-j]-\textbf{c}_{m(\mu)}[n_c-j-1]}{(T_m-T_{m-1})^{k+1}}\nonumber\\
&-\frac{\textbf{c}_{m+1(\mu)}[j+1]-\textbf{c}_{m+1(\mu)}[j]}{(T_{m+1}-T_{m})^{k+1}}\bigg \}=0,\nonumber \\
&\mu=x,y,\ k=0,1,\ m=1,\ldots,M-1, \\
\label{subeq:appendix_vel_max_constraint}
& \frac{n_c}{T_{m}-T_{m-1}}\big|\textbf{c}_{m(\mu)}[j+1]-\textbf{c}_{m(\mu)}[j]\big|\leq \frac{1}{\sqrt{2}}v_{\text{max}},\nonumber\\
&\mu=x,y, \ m=1,\ldots,M,\ j=0,\ldots,n_c-1\\
\label{subeq:appendix_acc_max_constraint}
& \frac{n_c(n_c-1)}{(T_{m}-T_{m-1})^2}\big|\textbf{c}_{m(\mu)}[j+2]-2\textbf{c}_{m(\mu)}[j+1]+\textbf{c}_{m(\mu)}[j]\big| \nonumber\\
&\leq \frac{1}{\sqrt{2}} a_{\text{max}},\ \ \mu=x,y,\ m=1, \ldots,M,\ j=0,\ldots,n_c-2
\end{align}
\end{subequations}
\subsection{Costs}
\label{appendix:cost}
The cost (\ref{eq:qp_cost}) is represented as follows:
\begin{subequations}
\label{eq:appendix_costs}
\begin{align}
\label{subeq:appendix_cost_qp}
&J = \underbar{\textbf{c}}^{T}Q\underbar{\textbf{c}} + H\underbar{\textbf{c}},\nonumber \\
&\text{where}\ Q = \begin{bmatrix}
Q_{1} & &\boldsymbol{0}\\
& \ddots &\\
\boldsymbol{0} & & Q_{M}
\end{bmatrix},
H=[H_{1},\ldots,H_{M}],\\
\label{subeq:appendix_cost_Qi}
&Q_{i}=w_{j}D_{jerk,i}^{T}B_{jerk,i}^{\int}D_{jerk,i}+ w_{e}B_{track,i}^{\int},\\
\label{subeq:appendix_cost_Hi}
& H_{i}=-2w_{e}\tilde{\textbf{c}}_{i}^{*T}B_{track,i}^{\int},\\
\label{subeq:appendix_cost_diff3}
&D_{jerk,i} = E_{n_{c}-2,i}E_{n_{c}-1,i}E_{n_{c},i},\\
\label{subeq:appendix_cost_diff1}
& E_{m,i}=\frac{m}{T_{i}-T_{i-1}}\begin{bmatrix}
-1 & 1 & 0 &\dotsi & 0\\
0 & \ddots & \ddots &\ddots & \vdots \\
\vdots & \ddots & \ddots &\ddots & 0 \\
0 & \dotsi & 0 & -1 & 1
\end{bmatrix}\in \mathbb{R}^{m\times (m+1)}\\
\label{subeq:appendix_cost_jerk_int}
&B_{jerk,i}^{\int} = \frac{T_{i}-T_{i-1}}{(2n_{c}-5)}B_{jerk},\\
\label{subeq:appendix_cost_track_int}
&B_{track,i}^{\int} = \frac{T_{i}-T_{i-1}}{2n_{c}+1}B_{track},\\
\label{subeq:appendix_cost_int_comp}
& B_{jerk} = \{a_{k,l}\}, \ B_{track} = \{b_{k,l}\} \in \mathbb{R}^{(n_c+1)\times(n_c+1)},\\
\label{subeq:appendix_cost_int_element}
&a_{k,l} = \frac{\begin{pmatrix}n_c-3\\k\end{pmatrix}\begin{pmatrix}n_c-3\\l\end{pmatrix}}{\begin{pmatrix}2n_c-6\\k+l\end{pmatrix}}, \
b_{k,l} = \frac{\begin{pmatrix}n_c\\k\end{pmatrix}\begin{pmatrix}n_c\\l\end{pmatrix}}{\begin{pmatrix}2n_c\\k+l\end{pmatrix}}
\end{align}
\end{subequations}
\normalem
\bibliographystyle{IEEEtran}
\section{Introduction}
\vspace{-1pt}
Grasp detection, which aims to generate gripper configurations on objects, is essential to robotic manipulation in the real world, such as bin-picking and sorting. Early methods \cite{DBLP:conf/icra/BicchiK00, DBLP:conf/icra/BuchholzFWW13} tackle this task by fitting the input to templates with annotated grasps or by analyzing shape characteristics in certain feature spaces. However, these methods either only work on known objects or suffer from high computational complexity, making them less practical for widespread use. Recently, with the rapid development of deep learning, data-driven methods \cite{DBLP:journals/ral/ChuXV18, DBLP:conf/rss/MorrisonLC18, DBLP:conf/rss/MahlerLNLDLOG17, DBLP:journals/corr/abs-2212-05275, DBLP:conf/iros/WangZG021} have dominated this field and shown great potential to fulfill a diversity of applications.
As a typical scenario, planar grasp detection has received increasing attention during the past decade, where the gripper configuration is usually represented as a 5-dimensional rotated rectangle on the image \cite{DBLP:conf/icra/JiangMS11}, and advanced object detection networks are adapted for prediction. Considering the rich texture information in the color space, a large number of studies \cite{DBLP:conf/iros/ZhangLBZTZ19, DBLP:conf/iros/ZhouLZTZZ18,DBLP:conf/icra/GuoSLKFX17} take RGB images as input. Meanwhile, with the innovation of depth sensors, a trend has emerged of introducing object shapes to complement appearances \cite{DBLP:conf/iros/KumraJS20, DBLP:journals/ral/ChuXV18}.
\begin{figure}[t]
\setlength{\abovecaptionskip}{0pt}
\centering
\subfigure[Measured grasp depth]{
\includegraphics[width=0.45\linewidth]
{no_depth_125.pdf}
}
\subfigure[Predicted grasp depth]{
\includegraphics[width=0.45\linewidth]{with_depth_125.pdf}
}
\caption{Visualization of detected grasps which make use of depth clues in two different ways. The color of a grasp represents its score in terms of the force closure metric: red indicates a grasp of high quality while blue denotes a grasp of low quality. (a) shows the grasp which adopts the measured depth at the grasp center and (b) shows the grasp with the depth predicted by the proposed method.}
\label{visualization about depth prediction}
\vspace{-20pt}
\end{figure}
Despite the promising performance reported in RGB-D grasp detection \cite{DBLP:conf/iros/KumraK17, DBLP:journals/ral/ChuXV18, DBLP:conf/iros/KumraJS20}, two major issues remain challenging, because for mainstream sensing devices, \emph{e.g.} Intel RealSense and Microsoft Kinect, the data in the depth channel is of relatively lower quality, with much stronger noise, compared to that in the RGB channel. One is how to effectively and efficiently acquire the grasp depth, and the other is how to sufficiently combine the credits of the RGB and depth modalities.
For the former issue, the prevailing manner is to directly employ the depth value of the raw data at the center of the predicted 5-dimensional rectangle \cite{DBLP:conf/icra/JiangMS11, DBLP:journals/ijrr/MorrisonCL20}. While efficient, the depth value is often inaccurate or even missing in such noisy images, which is prone to incur failed grasps that collide with objects, as shown in Fig. \ref{visualization about depth prediction} (a). An alternative is to initially sample a set of patches and then estimate grasp depths from them \cite{DBLP:conf/rss/MahlerLNLDLOG17, DBLP:journals/ral/SatishMG19, DBLP:conf/iros/GariepyRCG19, DBLP:journals/mva/WangLHFSQ20}. These methods indeed boost the accuracy to some extent, but the entire procedure is complicated and incurs a huge computational cost, especially in cluttered scenes. Both strategies make generalization to real-world cases problematic.
For the latter issue, existing methods conduct the fusion of RGB and depth data by summing or concatenating at the early or middle stage of the networks \cite{DBLP:conf/icra/RedmonA15, DBLP:journals/ral/ChuXV18, DBLP:conf/iros/KumraK17, DBLP:conf/icra/ZengSYDHBMTLRFA18}, where the two modalities are analogously treated.
This straightforward way confirms the complementarity of the two types of input; unfortunately, it does not fully take their properties into account. In particular, the depth images are generally coarse, and the extracted depth features tend to be misleading, bringing difficulties to multi-modal clue integration. This dilemma becomes more serious as the networks grow deeper \cite{DBLP:conf/cvpr/JiLYZPYB0ZL021, DBLP:conf/eccv/Chen020}. To the best of our knowledge, these two issues have not been well discussed and leave much room for improvement.
To address the issues aforementioned, in this paper, we propose a novel two-stage approach, namely Depth Guided Cross-modal Attention Network (DGCAN), to RGB-D grasp detection. Specifically,
DGCAN builds a depth guided learning framework, where both the RGB and depth images are fed in and their features are combined to generate grasp proposals. To better leverage the geometry clues in the noisy depth map, a complete 6-dimensional rectangle representation is adopted, with the depth of the grasp explicitly considered in addition to the center, size, and orientation defined in the common 5-dimensional one. The prediction of the extra grasp depth substantially strengthens grasp features, thereby contributing to more accurate results. Furthermore, a Local Cross-modal Attention (LCA) module is designed to fuse the features which are separately encoded in the RGB and depth channels by two sub-networks. To reduce the negative impact caused by the discrepancy in data quality between the two modalities, LCA operates in an asymmetric way: depth features are refined according to cross-modal relations and then concatenated to the RGB ones. In addition, a comprehensive RGB-D planar grasp dataset is produced based on GraspNet-1Billion \cite{DBLP:conf/cvpr/FangWGL20} to facilitate this research. Extensive simulation and real-world evaluations are made and the experimental results highlight the superiority of the proposed approach.
\vspace{-2pt}
\section{Related Work}
\vspace{-2pt}
\subsection{RGB Grasp Detection}
\vspace{-1pt}
In the past decade, many efforts have been made on grasp detection from RGB images, and the field has developed from hand-crafted methods to learning based ones. Following a sample-based pipeline, \cite{DBLP:journals/ijrr/SaxenaDN08} computes edge and texture filter responses to decide whether an image patch contains a grasp point. Further, \cite{DBLP:conf/icra/PintoG16} extracts deep features by AlexNet \cite{DBLP:conf/nips/KrizhevskySH12} to estimate the graspable probabilities of different orientations. Inspired by two-stage object detectors such as Faster-RCNN \cite{DBLP:conf/nips/RenHGS15} and RRPN \cite{DBLP:journals/tmm/MaSYWWZX18}, end-to-end trainable networks \cite{DBLP:conf/icra/DepierreD021, DBLP:conf/iros/ZhouLZTZZ18, DBLP:conf/icra/AinetterF21} are proposed to directly locate rotated grasp rectangles with consistently improved performance. By applying backbones pretrained on ImageNet \cite{DBLP:conf/nips/KrizhevskySH12}, they also alleviate the over-fitting risk, even with limited grasp annotations. Although RGB grasp detection methods have achieved great success, they are still not qualified enough for real-world applications due to the intrinsic ambiguity caused by 2D imaging techniques.
\vspace{-3pt}
\subsection{RGB-D Grasp Detection}
\vspace{-2pt}
Depth data convey geometry clues, which are necessary to complement appearance ones and build more accurate features for grasp detection. On the one hand, to acquire the grasp depth, some attempts investigate the simple and direct strategy that uses the value of the depth image at the center of the predicted rectangle \cite{DBLP:conf/icra/JiangMS11, DBLP:journals/ijrr/MorrisonCL20}, but the performance is sensitive to the quality of the depth data, which is largely limited by current consumer-grade RGB-D sensors. Although some subsequent methods improve this with a sample-before-evaluate pipeline \cite{DBLP:conf/rss/MahlerLNLDLOG17, DBLP:journals/ral/SatishMG19, DBLP:conf/iros/GariepyRCG19, DBLP:journals/mva/WangLHFSQ20}, they suffer from high computational complexity and are not easy to generalize to cluttered scenes. On the other hand, to jointly capture textural and geometric information, existing methods conduct multi-modal combination in two major ways, \emph{i.e.} early fusion and middle fusion. For early fusion, RGB and depth images are stacked before being fed to the network. \cite{DBLP:conf/icra/JiangMS11} concatenates an RGB image and an aligned depth image as a 4-channel input. \cite{DBLP:conf/icra/RedmonA15, DBLP:journals/ral/ChuXV18} replace the blue channel in the RGB image with the depth channel to deliver a 3-channel matrix. However, as a distribution gap exists between the modalities, it is difficult for a single network to achieve reconciliation. Regarding middle fusion, RGB and depth images are individually processed through two different networks with similar architectures and finally aggregated for prediction \cite{DBLP:conf/iros/KumraK17, DBLP:conf/icra/ZengSYDHBMTLRFA18, DBLP:conf/iros/ZhuLBCL0TTL20, song2022deep}. These methods perform better than early fusion based ones, but they generally treat the two modalities in an analogous manner and ignore their quality difference, leading to sub-optimal results.
\begin{figure}[t]
\setlength{\abovecaptionskip}{0pt}
\centering
\subfigure[]{
\centering
\includegraphics[width=0.3\linewidth]{rectangle.pdf}
}
\subfigure[]{
\centering
\includegraphics[width=0.3\linewidth]{depth.pdf}
}
\caption{(a) The 5-dimensional grasp configuration $(u, v, w, h, \theta)$. (b) The grasp depth $d$ in the 6-dimensional grasp configuration.}
\label{6d grasp configuration}
\vspace{-18pt}
\end{figure}
\begin{figure*}[ht]
\vspace{5pt}
\setlength{\abovecaptionskip}{0pt}
\centering
\subfigure[]{
\centering
\includegraphics[width=0.60\linewidth]{framework.pdf}
\label{overall framework}}
\subfigure[]{
\centering
\includegraphics[width=0.35\linewidth]{LCA.pdf}
\label{local cross-modal attention}
}
\caption{(a) Overview of the proposed DGCAN framework. It consists of three main parts: Local Cross-modal Attention (LCA) based multi-modal fusion network, Grasp Proposal Network (GPN) and Grasp Region of Interest Network (GRoI-Net) with depth prediction. (b) Details of the LCA module. RGB features are treated as queries and depth features are treated as keys and values.}
\label{DGCAN and LCA}
\vspace{-18pt}
\end{figure*}
\vspace{-7pt}
\section{Problem Formulation}
Given an RGB image $I \in \mathbb{R}^{3 \times H \times W}$ and its corresponding depth image $D \in \mathbb{R}^{1 \times H \times W}$ as input, and in contrast to the common 5-dimensional rectangle representation for planar grasp detection proposed in \cite{DBLP:conf/icra/JiangMS11}, we highlight the necessity of grasp depth for RGB-D grasp detection and adopt a 6-dimensional rectangle with an additional variable for the grasp depth, as Fig. \ref{6d grasp configuration} shows, formally defined as:
\begin{equation}
\setlength{\abovedisplayshortskip}{-10pt}
\setlength{\belowdisplayshortskip}{2pt}
\textbf{g} = (u, v, d, w, h, \theta)
\label{6-dimensional grasp configurations equation}
\end{equation}
where $(u, v)$, $w$, $h$ and $\theta$ are the same as those defined in the 5-dimensional representation, denoting the center coordinates, width, height, and the angle between the width direction and the $x$-axis of the image, respectively. $d$ is added to indicate the depth of the fingertips relative to the camera.
\vspace{-2pt}
\section{Methodology}
\vspace{-1pt}
\subsection{Overall Framework}
\vspace{-2pt}
With the 6-dimensional rectangle representation, DGCAN builds a depth guided learning framework. It follows the two-stage pipeline \cite{DBLP:journals/ral/ChuXV18} but extends it by multi-modal fusion and grasp depth prediction. As in Fig. \ref{DGCAN and LCA} (a), DGCAN is composed of three parts: \emph{i.e.} Grasp Proposal Network (GPN), Grasp Region of Interest Network (GRoI-Net), and Local Cross-modal Attention (LCA) based fusion network.
Taking a 3-channel RGB image and a 1-channel depth image as input, two networks with the same architecture (\emph{i.e.} ResNet \cite{DBLP:conf/cvpr/HeZRS16}) are adopted to extract appearance and geometry features respectively, where the depth image is replicated to 3 channels before processing. To sufficiently utilize the RGB and depth information, the LCA module is designed and placed after each convolutional block of the network, which performs an asymmetric fusion by first refining the depth features through cross-modal relation learning and then concatenating them to the RGB ones. The multi-modal features of the last block are further fed into GPN to generate possible grasp regions, and 6-dimensional grasps are predicted in GRoI-Net. In the inference stage, we apply Non-Maximum Suppression (NMS) after GPN to suppress duplicated grasp proposals and retain the final grasps according to their graspability scores.
\subsection{Grasp Proposal Network}\label{Grasp Region Proposal Network}
As in \cite{DBLP:journals/ral/ChuXV18}, GPN is introduced as the first stage to estimate potential grasp candidates over the whole image. Concretely, given the multi-modal feature maps as input, it predicts the grasp bounding box $\hat{g} = (\hat{u}, \hat{v}, \hat{w}, \hat{h})$ and its probability of grasp proposal $\hat{p}$ for each anchor.
Compared to the typical Region Proposal Network (RPN) \cite{DBLP:conf/nips/RenHGS15} applied in two-stage object detectors, GPN aims to find the optimal match from a dense collection of positive grasp labels with different scores. In this case, besides setting an Intersection over Union (IoU) threshold to filter Ground Truths (GTs) by position and scale, only the grasp with the highest score $s$ is assigned to each anchor. The loss of GPN is defined as follows:
\begin{equation}
\begin{aligned}
L_{GPN}(\{(\hat{p}_i, \hat{t}_i)\}_{i=1}^{N}) = \frac{1}{N_{cls}}\sum_i{L_{GPN}^{cls}(\hat{p}_i, \hat{p}_i^*)}
\\+ \lambda_1\frac{1}{N_{reg}}\sum_i{\hat{p}_i^*L_{GPN}^{reg}}(\hat{t}_i, \hat{t}_i^*)
\label{GPN loss}
\end{aligned}
\end{equation}
where $L_{GPN}^{cls}$ is the cross entropy loss of the binary classification estimating whether an anchor is graspable, $L_{GPN}^{reg}$ is the Smooth L1 loss for grasp proposal regression, and $\lambda_1$ is a weight parameter. $N_{cls}$ and $N_{reg}$ indicate the number of sampled anchors and the number of positive samples respectively. $\hat{p}_i$ and $\hat{p}_i^*$ denote the predicted probability and the classification label of the $i$-th anchor. $\hat{p}_i^* = 1$ indicates a specific grasp and $\hat{p}_i^* = 0$ indicates that there is no grasp. $\hat{t}_i$ and $\hat{t}_i^*$ represent the prediction and the GT of the 4-dimensional grasp vector. The regression targets for the predicted grasp proposal $(\hat{u}, \hat{v}, \hat{w}, \hat{h})$ are encoded as:
\begin{equation}
\begin{aligned}
\hat{t}_{u} = (\hat{u} - u_a) / w_a, \quad&\hat{t}_{v} = (\hat{v} - v_a) / h_a,
\\\hat{t}_{w} = \log(\hat{w} / w_a), \quad&\hat{t}_{h} = \log(\hat{h} / h_a)
\end{aligned}
\end{equation}
where $(u_a, v_a, w_a, h_a)$ denotes the grasp vector of an anchor. Then, RoI Align is employed to extract features from the grasp proposals, which are fed into the subsequent GRoI-Net.
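As an illustration, a short Python sketch of this anchor parameterization and its inverse (ours) is:
\begin{verbatim}
import numpy as np

def encode_proposal(g, anchor):
    # Encode a grasp box (u, v, w, h) against an anchor
    # (u_a, v_a, w_a, h_a) into the GPN regression targets.
    u, v, w, h = g
    ua, va, wa, ha = anchor
    return np.array([(u - ua) / wa, (v - va) / ha,
                     np.log(w / wa), np.log(h / ha)])

def decode_proposal(t, anchor):
    # Invert the encoding to recover the predicted grasp proposal.
    ua, va, wa, ha = anchor
    return np.array([t[0] * wa + ua, t[1] * ha + va,
                     wa * np.exp(t[2]), ha * np.exp(t[3])])
\end{verbatim}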
\begin{table*}[ht]
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{-0pt}
\vspace{5pt}
\caption{Summary of the properties of public planar grasp datasets.}
\label{dataset comparison}
\centering
\begin{tabular}{c|c|c|c|c|c|c|c}
\hline
\multirow{2}{*}{\textbf{Dataset}} & \textbf{Grasp} & \multirow{2}{*}{\textbf{Modality}} & \textbf{Objects} & \textbf{Grasps} & \textbf{Num} & \textbf{Num} & \textbf{Num}\\
~ & \textbf{Depth} & ~ & \textbf{/Img} & \textbf{/Img} & \textbf{Imgs} & \textbf{Objects} & \textbf{Grasps}\\
\hline
Levine \textit{et al.} \cite{DBLP:journals/ijrr/LevinePKIQ18} & No & RGB-D & - & 1 & 800K & - & 800K\\
\hline
Pinto \textit{et al.} \cite{DBLP:conf/icra/PintoG16} & No & RGB-D & - & 1 & 50K & 150 & 50K \\
\hline
Cornell \cite{DBLP:conf/icra/JiangMS11} & No & RGB-D & 1 & $\sim$8 & 1035 & 240 & 8K\\
\hline
Mahler \textit{et al.} \cite{DBLP:conf/rss/MahlerLNLDLOG17} & Yes & D & 1 & 1 & 6.7M & 1500 & 6.7M\\
\hline
Jacquard \cite{DBLP:conf/iros/DepierreD018} & No & RGB-D & 1 & $\sim$20 & 54k & 11K & 1.1M\\
\hline
VMRD \cite{DBLP:conf/iros/ZhangLBZTZ19} & No & RGB & $\sim$3 & $\sim$20 & 4.7K & $\sim$200 & 100K\\
\hline
Multi-Object \cite{DBLP:journals/ral/ChuXV18} & No & RGB-D & $\sim$4 & $\sim$30 & 96 & - & 2904\\
\hline
REGRAD \cite{DBLP:journals/ral/ZhangYWZLDZ22} & No & RGB-D & 1$\sim$20 & 1.02K & 900K & 50K & 100M \\
\hline
GraspNet-1Billion \cite{DBLP:conf/cvpr/FangWGL20} & No & RGB-D & $\sim$10 & 3$\sim$9M & 97K & 88 & $\sim$1.2B \\
\hline
GraspNet-Planar & Yes & RGB-D & $\sim$10 & $\sim$30K & $\sim$11K & 88 & 169M\\
\hline
\end{tabular}
\vspace{-15pt}
\end{table*}
\subsection{Grasp Region of Interest Network}
For each RoI, GRoI-Net further extracts its feature using the fourth block $C_4$ of ResNet \cite{DBLP:conf/cvpr/HeZRS16} and predicts the 6-dimensional grasp configuration $\textbf{g} = (u, v, d, w, h, \theta)$ with its grasp probability $q$ and grasp score $s$. Therefore, given $M$ grasp proposals, the outputs of GRoI-Net are of sizes $M \times 7$, $M \times 2$ and $M \times 1$ for the grasp target regression $t = (t_u, t_v, t_d, t_w, t_h, t_{sin}, t_{cos})$, the binary classification $q$, and the grasp score estimation $s$, respectively. In particular, for grasp depth prediction, we take the depth measurement $d_o$ captured by the RGB-D camera as the reference depth and regress the offset $t_d$ defined as:
\begin{equation}
t_d = d - d_o.
\end{equation}
Meanwhile, we encode the angle as two components $t_{sin} = \sin(2\theta)$ and $t_{cos} = \cos(2\theta)$ in the range $[-1, 1]$ and calculate the grasp orientation as:
\begin{equation}
\theta = \frac{1}{2}\arctan\frac{\sin(2\theta)}{\cos(2\theta)}.
\end{equation}
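A tiny Python sketch of this angle and depth parameterization (ours; \texttt{arctan2} is used so the recovered angle lands in a consistent range):
\begin{verbatim}
import numpy as np

def encode_angle(theta):
    # Encode a grasp angle into the continuous targets (t_sin, t_cos).
    return np.sin(2 * theta), np.cos(2 * theta)

def decode_grasp(t_sin, t_cos, t_d, d_o):
    # Recover the grasp angle and the absolute grasp depth; d_o is the
    # measured reference depth at the grasp center.
    theta = 0.5 * np.arctan2(t_sin, t_cos)
    depth = d_o + t_d   # invert t_d = d - d_o
    return theta, depth
\end{verbatim}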
With this definition, the target function of GRoI-Net is defined as follows:
\begin{equation}
\begin{aligned}
L_{GRoI}(\{(q_i, t_i, s_i)\}_{i=1}^{M}) = \frac{1}{M_{cls}}\sum_i{L_{GRoI}^{cls}(q_i, q_i^*)}
\\+ \lambda_2\frac{1}{M_{reg}}\sum_i{L_{GRoI}^{reg}(t_i, t_i^*)}
\\+ \lambda_3\frac{1}{M_{reg}}\sum_i{L_{GRoI}^{reg}(s_i, s_i^*)}
\end{aligned}
\end{equation}
where $q_i^*$, $t_i^*$ and $s_i^*$ denote the GTs for $q_i$, $t_i$ and $s_i$. $M_{cls}$ and $M_{reg}$ are similar to $N_{cls}$ and $N_{reg}$ in Eq. \ref{GPN loss}. Here, the cross entropy loss is used for $L_{GRoI}^{cls}$, while the Smooth L1 loss is used for $L_{GRoI}^{reg}$. $\lambda_2$ and $\lambda_3$ are weight parameters.
\subsection{Local Cross-modal Attention based Fusion Network}
The previous RGB-D grasp detection networks \cite{DBLP:conf/iros/KumraK17, DBLP:conf/icra/ZengSYDHBMTLRFA18, DBLP:conf/iros/ZhuLBCL0TTL20} generally integrate RGB and depth features by summation or concatenation, with the two modalities analogously treated. However, in existing RGB-D sensors the data in the depth channel is of a lower quality than that in the RGB channel, and these methods do not fully take this gap into account, thus leading to sub-optimal fusion. To handle this, we dedicatedly design a new module, \emph{i.e.} LCA, to combine the RGB and depth features in an asymmetric manner, where the relatively stable RGB features are used to refine the noisy depth ones. Furthermore, as local shape perception is critical to grasp detection, for the depth feature at each position, only the ones within a neighborhood are involved.
As illustrated in Fig. \ref{DGCAN and LCA} (b), LCA first refines depth features by weighting them according to the relation between the RGB and depth modalities in a local region and then concatenates the weighted depth features to those in the RGB stream.
Denoting the RGB and depth features as $X_{R} \in \mathbb{R}^{C \times H \times W}$ and $X_D \in \mathbb{R}^{C \times H \times W}$ respectively, the output $F$ of LCA is formulated as:
\begin{equation}
F = f(X_{R} || r(X_{R}, X_D))
\end{equation}
where $||$ is the concatenation operation of the feature maps from the two modalities, $f$ refers to $1\times1$ convolution with the number of channels modified from $2C$ to $C$, and $r$ represents the depth refinement operator based on cross-modal attention. Concretely, $X_{R}$ serves as the \textit{query} and is embedded to $Q_{R} \in \mathbb{R}^{C' \times H \times W}$; $X_{D}$ serves as the \textit{key} and \textit{value}, embedded to $K_{D} \in \mathbb{R}^{C' \times H \times W}$ and $V_{D} \in \mathbb{R}^{C' \times H \times W}$ by two different $1\times1$ convolutions. Inspired by \cite{ramachandran2019stand}, we design an attention module to capture the relationship between the RGB feature and the corresponding depth features within a neighborhood considering the importance of local geometric features to grasp detection. Given an RGB query at $(i, j)$, we extract depth keys and values in the neighboring region $N_{k}(i, j)$ with spatial size $k$ and calculate the depth output $v_{ij}$ as follows:
\begin{equation}
v_{ij} = \sum_{(m,n) \in N_{k}(i, j)}{\text{softmax}(Q_{R,ij}K_{D,mn} / \sqrt{C'})V_{D,mn}}
\end{equation}
where $(m, n) = (i + \Delta{i}, j + \Delta{j})$ with $\Delta{i}, \Delta{j} \in \{-\lceil k/2 \rceil, -\lceil k/2 \rceil + 1, \ldots, \lceil k/2 \rceil\}$. We then project the outputs $v$ to the final depth features $y \in \mathbb{R}^{C \times H \times W}$ by a $1 \times 1$ convolution.
Following the self-attention strategy \cite{DBLP:conf/nips/VaswaniSPUJGKP17}, we implement a multi-head attention mechanism and add sinusoidal positional encodings to the queries and keys.
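For clarity, a minimal single-head PyTorch sketch of the neighborhood attention step (ours; the multi-head version and positional encodings are omitted, and \texttt{q\_rgb}, \texttt{k\_d}, \texttt{v\_d} are assumed to be the $1\times1$-convolution embeddings $Q_R$, $K_D$, $V_D$):
\begin{verbatim}
import torch
import torch.nn.functional as F

def local_cross_modal_attention(q_rgb, k_d, v_d, k=3):
    # q_rgb, k_d, v_d: (B, C', H, W); each RGB query attends to the
    # depth keys/values in its k x k neighborhood (zero-padded borders).
    B, C, H, W = q_rgb.shape
    k_n = F.unfold(k_d, kernel_size=k, padding=k // 2)
    v_n = F.unfold(v_d, kernel_size=k, padding=k // 2)
    k_n = k_n.view(B, C, k * k, H * W)
    v_n = v_n.view(B, C, k * k, H * W)
    q = q_rgb.view(B, C, 1, H * W)
    # Scaled dot-product over the channel dimension, per neighbor.
    attn = (q * k_n).sum(dim=1, keepdim=True) / C ** 0.5
    attn = attn.softmax(dim=2)              # normalize over neighbors
    out = (attn * v_n).sum(dim=2)           # weighted depth values
    return out.view(B, C, H, W)
\end{verbatim}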
\subsection{Loss Function}
During training, the whole network is optimized in an end-to-end manner by minimizing the following loss function:
\begin{equation}
L = L_{GPN}(\{(\hat{p}_i, \hat{t}_i)\}_{i=1}^{N}) + L_{GRoI}(\{(q_i, t_i, s_i)\}_{i=1}^{M})
\end{equation}
\begin{table*}[ht]
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\vspace{5pt}
\caption{Performance comparison on GraspNet-Planar captured by RealSense/Kinect. \textit{CD} represents collision detection.}
\label{comparision with others}
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{c|c c c|c c c|c c c}
\hline
\multirow{2}{*}{\textbf{Method}} & \multicolumn{3}{c|}{\textbf{Seen}} & \multicolumn{3}{c|}{\textbf{Similar}} & \multicolumn{3}{c}{\textbf{Novel}}\\
\cline{2-10}
~ & \textbf{AP} & \textbf{AP$_{0.8}$} & \textbf{AP$_{0.4}$} & \textbf{AP} & \textbf{AP$_{0.8}$} & \textbf{AP$_{0.4}$} & \textbf{AP} & \textbf{AP$_{0.8}$} & \textbf{AP$_{0.4}$} \\
\hline
GR-ConvNet \cite{DBLP:conf/iros/KumraJS20} & 19.97/17.72 & 25.42/22.23 & 11.43/10.60 & 13.24/13.04 & 16.93/16.39 & 6.79/7.67 & 6.32/4.41 & 7.67/5.41 & 1.69/1.52 \\
GG-CNN2 \cite{DBLP:journals/ijrr/MorrisonCL20} & 27.45/23.73 & 35.31/30.20 & 16.60/14.90 & 21.49/18.02 & 27.87/23.44 & 11.27/10.12 & 9.76/7.12 & 11.66/8.77 & 2.95/3.14\\
Chu \textit{et al.} \cite{DBLP:journals/ral/ChuXV18} & 29.71/25.69 & 35.98/30.70 & 23.76/20.96 & 24.17/20.92 & 30.00/25.44 & 16.86/15.87 & 10.92/6.70 & 13.38/8.23 & 6.69/3.88 \\
\hline
DGCAN & 49.85/47.32 & 59.67/57.27 & 42.24/38.55 & 41.46/35.73 & 50.31/44.22 & 33.69/26.99 & 17.48/16.10 & 21.83/20.01 & 7.90/7.81 \\
DGCAN-\textit{CD} & \textbf{52.16/50.45} & \textbf{62.71/61.22} & \textbf{43.14/40.64} & \textbf{44.69/38.62} & \textbf{54.52/47.85} & \textbf{35.37/28.81} & \textbf{19.26/17.66} & \textbf{23.93/21.94} & \textbf{8.89/8.29}\\
\hline
\end{tabular}}
\vspace{-10pt}
\end{table*}
\section{GraspNet-Planar Database}
Since current public RGB-D planar grasp datasets do not contain grasp depth annotations, we build a new benchmark based on a 6-DoF grasp dataset, \emph{i.e.} GraspNet-1Billion \cite{DBLP:conf/cvpr/FangWGL20}, for evaluation. Following the same protocol as GraspNet-1Billion, we take 100 scenes for training and 90 scenes for testing, where the test set is divided into three parts for seen, similar and novel objects, respectively. Because the camera pose in planar grasping is constrained to look down perpendicularly at the workspace plane, we only use the images where the angles between the camera's observation view and the $z$-axis are smaller than $15^{\circ}$. In total, our benchmark consists of 5,513 RGB-D images from RealSense D435 and 5,341 images from Azure Kinect.
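As an illustration of this image selection rule, the snippet below computes the angle between a camera's viewing direction and the downward vertical from a camera-to-world pose; the convention that the optical axis is the third column of the rotation is an assumption made for this sketch.
\begin{verbatim}
import numpy as np

def view_angle_deg(cam_pose):
    """Angle between the camera's optical axis and the downward vertical.

    cam_pose: 4x4 camera-to-world matrix; the optical axis is taken to be
    the third column of its rotation part (an OpenCV-style convention).
    """
    view_dir = cam_pose[:3, 2]
    cos_a = np.clip(view_dir @ np.array([0.0, 0.0, -1.0]), -1.0, 1.0)
    return np.degrees(np.arccos(cos_a))

# keep only near-top-down frames:
# frames = [f for f in frames if view_angle_deg(f.pose) < 15.0]
\end{verbatim}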
Although GraspNet-1Billion provides transformation matrices from 6-DoF grasps to planar grasps, the effect of grasp depths is ignored, which incurs inaccurate grasp annotations. To deal with this, we present a new pipeline. Specifically, for each object in the scene, we first select grasps with small angles between the grasp approaches and the camera view. The approach directions of the selected 6-DoF grasps are then aligned with the camera view. This projection may turn some grasp labels into failures due to the change of grasp poses; therefore, we re-evaluate these grasps by collision detection as well as the force-closure metric \cite{DBLP:journals/ijrr/Nguyen88}. We assign new force-closure scores to the successful grasps as grasp scores and remove the failed samples. Finally, to obtain the gripper configurations for planar grasp detection, the grasp label $\textbf{G}$ in the $SE(3)$ space is projected to the image plane as follows:
\begin{equation}
\textbf{G} = T_{CI}\textbf{g}
\label{transformation from image to camera}
\end{equation}
where $T_{CI}$ is the intrinsic parameter matrix of the camera and $\textbf{g}$ is described in Eq. \ref{6-dimensional grasp configurations equation}. Table \ref{dataset comparison} compares our GraspNet-Planar benchmark with major public counterparts. It provides not only multi-modal RGB-D images but also more complete annotations, including grasp depths, which highlights its advantage for RGB-D planar grasp detection research.
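The sketch below illustrates one plausible reading of this projection step under a standard pinhole model: the grasp center is mapped through the intrinsics, and the grasp depth is kept as the $z$ value in the camera frame. The remaining grasp parameters (rotation, width) follow the paper and are not reproduced here.
\begin{verbatim}
import numpy as np

def project_grasp_center(K, p_cam):
    """Project a grasp center from the camera frame to the image plane.

    K: 3x3 intrinsic matrix; p_cam: (x, y, z) in the camera frame.
    Returns pixel coordinates (u, v) and the grasp depth d = z.
    """
    uvw = K @ p_cam
    return uvw[0] / uvw[2], uvw[1] / uvw[2], p_cam[2]
\end{verbatim}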
\begin{figure}[b]
\setlength{\abovecaptionskip}{0pt}
\vspace{-15pt}
\centering
\subfigure[]{
\centering
\includegraphics[width=0.31\linewidth]{objects.pdf}
\label{object set}
}
\subfigure[]{
\centering
\includegraphics[width=0.5\linewidth]{robot.pdf}
\label{real-robot setting}
}
\caption{(a) The 25 household objects used in physical evaluation. (b) The hardware set-up with a 7-DoF Agile Diana-7 robot arm and an Intel RealSense D435i camera mounted on the arm.}
\label{real-robot experiments}
\end{figure}
\section{Experiments}
\subsection{Protocols}
The simulation evaluation on GraspNet-Planar follows the metric in the GraspNet-1Billion benchmark \cite{DBLP:conf/cvpr/FangWGL20}, \emph{i.e.} Average Precision (\textbf{AP}). Given the predicted 6-dimensional planar grasps, we calculate $\textbf{AP}_{\mu}$ with different friction coefficients $\mu$ after grasp-NMS \cite{DBLP:conf/cvpr/FangWGL20}. As in GraspNet-1Billion, the overall result \textbf{AP} is calculated by averaging $\textbf{AP}_{\mu}$, where $\mu$ ranges from $0.2$ to $1.0$ with the interval $\Delta\mu=0.2$.
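For reference, the overall score is simply the mean over the five friction coefficients (the numbers in the usage line are made up):
\begin{verbatim}
def overall_ap(ap_by_mu):
    """Average AP over friction coefficients mu = 0.2, 0.4, ..., 1.0."""
    mus = [0.2, 0.4, 0.6, 0.8, 1.0]
    return sum(ap_by_mu[m] for m in mus) / len(mus)

# e.g. overall_ap({0.2: 0.61, 0.4: 0.55, 0.6: 0.50, 0.8: 0.45, 1.0: 0.38})
\end{verbatim}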
For physical evaluation, we choose 25 objects of various sizes, shapes and textures from the YCB Object Set \cite{DBLP:journals/ram/CalliWSSAD15}, as shown in Fig. \ref{real-robot experiments} (a). We conduct all real-world experiments on a 7-DoF robotic arm in two settings: single-object grasping and multi-object grasping. For single-object grasping, 25 objects are employed and each object is evaluated in three different poses. For multi-object grasping, we construct 10 cluttered scenes, each of which contains 5 different objects. Grasping methods are required to remove as many objects from each scene as possible within 10 attempts. As for the metrics, Grasp Success Rate (GSR) is used in both single-object and multi-object grasping, and Scene Completion Rate (SCR) is additionally recorded for multi-object grasping.
\vspace{-2.5pt}
\subsection{Implementation Details}
\vspace{-0.5pt}
We build our network based on ResNet-50\cite{DBLP:conf/cvpr/HeZRS16}. For LCA in the three residual blocks, we adopt the local spatial size $k = (5, 3, 3)$, which is further discussed in Sec. \ref{ablation study section}. The anchor aspect ratios and scales in GPN are set to [0.5, 1, 2] and [32, 64, 128, 256], respectively.
During training, the model is initialized by the weights pre-trained on ImageNet \cite{DBLP:conf/nips/KrizhevskySH12}. We choose 512 samples for both GPN and GRoI-Net, where the ratios of positive and negative samples are 1:1 and 1:3, respectively. After GPN, we keep 2,000 proposals after suppressing those whose mutual IoUs are higher than 0.7. The weights in the loss function are set as $\lambda_1$ = 1, $\lambda_2$ = 1 and $\lambda_3$ = 4. During inference, we retain the predicted grasps with graspable scores higher than $0.5$ to deliver the final result.
The model is trained on four GTX1080Ti GPUs by the SGD optimizer with the momentum and weight decay set to 0.9 and 0.0001. The mini-batch size is 4, and the initial learning rate is $5 \times 10^{-4}$, decaying at 43k iterations. Before training, grasp-NMS \cite{DBLP:conf/cvpr/FangWGL20} is applied to filter out heavily overlapping GTs. Random horizontal flipping and rotation are employed to augment input images. Besides, a hole-filling filter and a bilateral filter are applied to the depth images for smoothing.
In physical evaluation, we use a 7-DoF Agile Diana-7 robot arm with an Intel RealSense D435i camera mounted on it. The RGB-D images are captured above the workspace, as shown in Fig. \ref{real-robot experiments} (b). Our network and control algorithm run on a desktop with an AMD Ryzen 5 2600 six-core processor and a single NVIDIA GeForce 1080 GPU.
\begin{figure}[!t]
\setlength{\abovecaptionskip}{0pt}
\centering
\includegraphics[width=\linewidth]{vis.pdf}
\caption{Qualitative visualization on GraspNet-Planar captured by RealSense (the first 2 columns) and Kinect (the last column) after grasp-NMS \cite{DBLP:conf/cvpr/FangWGL20}. The color in the second row denotes the predicted scores of grasp poses, ranging from 0 (blue) to 1 (red). Zoom in for a better view.}
\label{result visualization}
\vspace{-19pt}
\end{figure}
\vspace{-3pt}
\subsection{Simulation Evaluation}
\vspace{-0.5pt}
Table \ref{comparision with others} summarizes the results of different solutions on the GraspNet-Planar dataset. We compare the proposed approach to three representative end-to-end trainable planar grasp detectors \cite{DBLP:journals/ral/ChuXV18, DBLP:journals/ijrr/MorrisonCL20, DBLP:conf/iros/KumraJS20}. \cite{DBLP:journals/ral/ChuXV18} is a two-stage network built on Faster-RCNN \cite{DBLP:conf/nips/RenHGS15}, while \cite{DBLP:journals/ijrr/MorrisonCL20, DBLP:conf/iros/KumraJS20} follow the one-stage pipeline, which outputs dense predictions of grasp qualities, widths and orientations. It can be seen that our network outperforms the other state-of-the-art counterparts by a large margin, which demonstrates the effectiveness of grasp depth prediction and the LCA module. Some qualitative results are shown in Fig. \ref{result visualization}, which indicate the validity of our approach in cluttered scenes.
\begin{table}[h]
\vspace{5pt}
\setlength{\abovecaptionskip}{-5pt}
\setlength{\belowcaptionskip}{-0pt}
\caption{Physical robot evaluation results}
\label{physical robot experiment}
\begin{center}
\resizebox{\linewidth}{!}{\begin{tabular}{c|c|c|c}
\hline
\multirow{2}{*}{\textbf{Method}} & \textbf{Single-Object} & \multicolumn{2}{c}{\textbf{Multi-Object}}\\
\cline{2-4}
~ & \textbf{GSR (\%)} & \textbf{GSR (\%)} & \textbf{SCR (\%)} \\
\hline
Dex-Net 4.0\cite{DBLP:journals/scirobotics/MatlSDDMG19} & 70.67 (53/75) & 67.14 (47/70) & 94.00 (47/50) \\
\hline
FC-GQ-CNN\cite{DBLP:journals/ral/SatishMG19} & 73.33 (55/75) & 67.61 (48/71) & 96.00 (48/50)\\
\hline
DGCAN & \textbf{88.00} (66/75) & \textbf{84.75} (50/59) & \textbf{100.00} (50/50)\\
\hline
\end{tabular}}
\end{center}
\vspace{-10pt}
\end{table}
\begin{table}[h]
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{-0pt}
\caption{Ablation study of grasp depth prediction on GraspNet-Planar captured by RealSense.}
\label{ablation study on grasp depth}
\centering
\begin{tabular}{c|c|c|c}
\hline
{\textbf{Grasp Depth}} & {\textbf{Seen}} & {\textbf{Similar}} & {\textbf{Novel}}\\
\hline
Center & 42.01 & 34.54 & 15.25\\
\hline
Classification & 48.57 & 39.01 & 17.06\\
\hline
Ours & \textbf{49.85} & \textbf{41.46} & \textbf{17.48}\\
\hline
\end{tabular}
\vspace{-15pt}
\end{table}
\begin{table}[h]
\vspace{5pt}
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{-0pt}
\caption{Ablation study of multi-modal fusion mechanism on GraspNet-Planar captured by RealSense.}
\label{ablation study on multi-modal fusion}
\centering
\begin{tabular}{c|c|c|c}
\hline
{\textbf{Fusion Method}} & {\textbf{Seen}} & {\textbf{Similar}} & {\textbf{Novel}}\\
\hline
RGB & 40.83 & 27.76 & 9.94\\
\hline
RGD & 47.87 & 38.05 & 16.07\\
\hline
RGB-D-Sum & 48.08 & 38.95 & 15.43\\
\hline
RGB-D-Concat & 48.61 & 40.46 & 16.82\\
\hline
LCA & \textbf{49.85} & \textbf{41.46} & \textbf{17.48}\\
\hline
\end{tabular}
\end{table}
\subsection{Physical Evaluation}
In real-world experiments, Dex-Net 4.0 \cite{DBLP:journals/scirobotics/MatlSDDMG19} and FC-GQ-CNN \cite{DBLP:journals/ral/SatishMG19}, which also involve policies for acquiring grasp depths, are employed for comparison. Dex-Net 4.0 takes the measurements of grasp centers in depth images as grasp depths, while FC-GQ-CNN uniformly samples depth values into several bins within the range of the whole scene and then evaluates them. As shown in Table \ref{physical robot experiment}, Dex-Net 4.0 and FC-GQ-CNN achieve similar performance, both falling behind ours in the two settings.
\subsection{Ablation Study} \label{ablation study section}
\textbf{Influence of depth guided learning.} We use two baselines, both of which deliver grasp depths. One is \textit{Center}, where we predict 5-dimensional grasp rectangles and set the sum of the depth measurements of the rectangle centers and a fixed offset of 20 millimeters as the grasp depths. Another is \textit{Classification}. Here, similar to \cite{DBLP:journals/ral/SatishMG19}, the prediction of grasp depths is modeled as a classification problem. We replace the regression head of grasp depths with a classification head. The grasp depths are uniformly divided into 40 bins from the maximum to the minimum depth value of scenes. The width of each bin is about 6 millimeters. It should be noted that for \textit{Classification} we train the networks with different numbers of depth bins ranging from 25 to 50 by an interval of 5 and the one with 40 bins achieves the best performance.
For fair comparison, we adopt very similar network architectures in all the models, which only differ in the head of GRoI-Net.
As shown in Table \ref{ablation study on grasp depth}, the model that acquires grasp depths by \textit{Center} reaches the lowest performance among the three, which indicates the significance of replacing the measured grasp depth with the predicted one. When compared to \textit{Classification}, the proposed approach gains $1.28\%$, $2.45\%$ and $0.42\%$ improvements on seen, similar and novel objects, respectively, proving that the grasp depth predicted by regression is more accurate than that by classification. Besides, \textit{Classification} brings extra hyper-parameters, such as the range of depth values and the number of depth bins, which increases the design complexity.
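For concreteness, a minimal sketch of the discretization used by the \textit{Classification} baseline might look as follows; decoding a predicted bin back to its center depth at inference time is our assumption.
\begin{verbatim}
import numpy as np

def depth_to_bin(d, d_min, d_max, n_bins=40):
    """Map a continuous grasp depth to one of n_bins uniform classes."""
    idx = int((d - d_min) / (d_max - d_min) * n_bins)
    return int(np.clip(idx, 0, n_bins - 1))

def bin_to_depth(idx, d_min, d_max, n_bins=40):
    """Recover the bin-center depth for a predicted class index."""
    return d_min + (idx + 0.5) * (d_max - d_min) / n_bins
\end{verbatim}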
\begin{figure}[htbp]
\setlength{\abovecaptionskip}{-5pt}
\setlength{\belowcaptionskip}{0pt}
\vspace{-5pt}
\centering
\includegraphics[width=0.85\linewidth]{plot.pdf}
\caption{AP under various local spatial sizes $(k_1, k_2, k_3)$ for different stages $(C_1, C_2, C_3)$ of ResNet.}
\label{experiments for k}
\vspace{-20pt}
\end{figure}
\textbf{Influence of LCA based multi-modal fusion.} A uni-modal model (\textit{RGB}) taking only RGB images as input is evaluated as a baseline. We also employ an early fusion baseline following \cite{DBLP:journals/ral/ChuXV18}, \emph{i.e.} \textit{RGD}, which replaces the blue channel with the depth channel. As shown in Table \ref{ablation study on multi-modal fusion}, although jointly using RGB and depth data boosts the performance, the early fusion strategy is sub-optimal because of the distribution gap between the two modalities. Furthermore, we evaluate another two middle fusion baselines, which exploit independent feature extractors to process RGB and depth data respectively and sum (\textit{RGB-D-Sum}) or concatenate (\textit{RGB-D-Concat}) the two types of features. The same as in LCA, multi-modal features are fused at the three residual blocks of the ResNet models. The results in Table \ref{ablation study on multi-modal fusion} demonstrate that LCA outperforms the other middle fusion strategies, highlighting its superiority in RGB-D feature fusion for grasp detection.
To evaluate the influence of the local spatial extent in LCA, we train our network with different combinations of $k_i$ in each block of ResNet, $i=1, 2, 3$. The results are presented in Fig. \ref{experiments for k}; as shown, the LCA module achieves consistent improvements under different settings of local spatial sizes. However, when the size for the first block is set too large, the performance drops. Based on these ablations, we choose $(k_1, k_2, k_3) = (5, 3, 3)$ to reach the best performance.
\section{Conclusion}
This paper proposes a novel two-stage approach, \emph{i.e.} DGCAN, to RGB-D grasp detection. A complete 6-dimensional rectangle representation is adopted to emphasize the necessity of the grasp depth, which is not considered in the common 5-dimensional definition. The prediction of the grasp depth substantially improves grasp feature learning, thereby leading to more accurate results. Besides, an LCA module is designed, where the depth features are refined according to cross-modal relations and concatenated to the RGB ones for more sufficient fusion. The results of the simulation and physical experiments demonstrate its effectiveness.
\newpage
\bibliographystyle{IEEEtran}
\section{Introduction}
Feature preprocessing plays a crucial role in building a machine learning (ML) pipeline~\cite{zha2023data,wang2020skewness,zha2019multi,garcia2016big,tan2022bring,li2022towards}. It transforms the raw input features into numerical representations through multiple preprocessing primitive steps, such as missing value completion, normalization, etc. Feature preprocessing is so important that around 50\% of the time is spent on data preprocessing in building an ML system, reported in a survey collected from practitioners~\cite{munson2012study}. Thus, modern automated machine learning (AutoML) systems have included various preprocessing primitives in building ML pipelines~\cite{drori2021alphad3m,heffetz2020deepline,le2020scaling,liu2020admm,rakotoarison2019automated,lai2021tods,zha2021autovideo}.
Despite the great successes of the existing AutoML systems, they often have a very small search space for the feature preprocessing~\cite{drori2021alphad3m,feurer2015efficient,olson2016tpot,akhtar2018oboe}. A common strategy is to perform fixed transformations for non-numerical features and search for preprocessing pipelines from a small search space for numerical features. For example, Auto-Sklearn~\cite{feurer2015efficient}, one of the most popular AutoML systems, adopts a constant imputer and a one-hot encoder for all the non-numerical features, and searches for the imputers and scalers for numerical features. It applies the same preprocessing primitives to all the numerical features so that there are only 21 possible combinations within its search space\footnote{\scriptsize \url{https://github.com/automl/auto-sklearn/blob/master/autosklearn/pipeline/components/data_preprocessing/}}. While more recent systems, such as AlphaD3M~\cite{drori2021alphad3m}, have introduced more preprocessing primitives to handle different data types, the possible preprocessing pipelines still follow very strict grammars, and the same pipelines are applied for all the numerical features.
Unfortunately, this simple design of the preprocessing search space may lead to sub-optimal performance since different features may require different pre-processing pipelines to achieve the best results. For example, for the choices of encoders, some numerical features may only have very few possible values so it could be better to encode them like categorical features. For feature normalization, different features may have very different value distributions such that they may require different scalers. Motivated by this, we propose to allow the search algorithm to adopt a specific preprocessing pipeline for each feature. Specifically, we investigate the following research question: \emph{Can we improve the performance by enabling feature-wise preprocessing pipeline search?}
This is a non-trivial task because of two major challenges. First, the search space will grow exponentially with more features. Specifically, let $|\mathcal{P}|$ be the number of possible preprocessing pipelines, and $D$ be the number of features. Then the search space is $|\mathcal{P}|^D$. Thus, we need an efficient strategy to explore the search space. Second, many of the preprocessing pipelines can be invalid. For example, a mean imputer will raise errors when applied to string values. We need to avoid frequently sampling invalid pipelines to discover valid pipelines more easily.
To tackle these challenges, we propose ClusterP3S, a novel framework for \underline{P}ersonalized \underline{P}reprocessing \underline{P}ipeline \underline{S}earch via \underline{Cluster}ing. To efficiently explore the search space, we learn feature clusters, where the same preprocessing pipeline is adopted for all the features within a cluster. In this way, we can significantly reduce the search space, which also inherently encourages the search algorithm to hit valid pipelines. In particular, we propose a hierarchical search strategy to jointly learn the clusters and search for the optimal pipelines, where the upper-level search optimizes the feature clustering, and the lower-level search optimizes the pipeline given a specific cluster assignment. We instantiate this idea with a reinforced deep clustering network at the upper level, and random search at the lower level. Extensive experiments on benchmark real-world classification datasets suggest that enabling feature-wise preprocessing pipeline search can significantly improve performance. To summarize, we make the following major contributions.
\vspace{-5pt}
\begin{itemize}
\item Identify the importance of enabling feature-wise preprocessing pipeline search, which is often overlooked by the existing AutoML systems.
\vspace{-5pt}
\item Propose ClusterP3S framework for personalized preprocessing pipeline search. It adopts a hierarchical search strategy to jointly learn the clusters and search for the optimal pipelines.
\vspace{-5pt}
\item Instantiate ClusterP3S with a deep clustering network trained with reinforcement learning and random search for the upper-level search and the lower-level search, respectively.
\vspace{-5pt}
\item Demonstrate the effectiveness of ClusterP3S on eight benchmark classification datasets, showing the promise of personalized preprocessing pipeline search. In addition, we also present ablation and hyperparameter studies to understand how ClusterP3S behaves.
\vspace{-5pt}
\end{itemize}
\section{Problem Formulation}
Given a training dataset $\mathcal{D}_{\text{train}} = \{(\mathbf{x}_i, y_i)\}_{i=1}^{N}$, where $\mathbf{x}_{i} \in \mathbb{R}^D$ is a $D$-dimensional feature vector, and $y_i \in \mathbb{R}$ denotes the target, we aim to learn a mapping from features to targets $f: \mathbb{R}^D \to \mathbb{R}$ based on $\mathcal{D}_{\text{train}}$ such that $f$ can accurately make predictions on a validation dataset. Here, $f$ is often a pipeline consisting of multiple primitive steps, where a primitive is an implementation of a specific function and serves as a basic building block in a pipeline. In this work, we consider a typical pipeline design with multiple preprocessing steps, followed by a machine learning model. Additionally, we allow each feature to have a different preprocessing pipeline (which is a sub-pipeline of $f$). Formally, a pipeline $f$ can be represented as a $(D+1)$-tuple $\langle P_1, P_2, ..., P_D, M \rangle$, where $P_j$ ($j \in \{1,2,...,D\}$) is the preprocessing pipeline for the $j^\text{th}$ feature and can have multiple steps, and $M$ denotes a machine learning model that takes as input all the pre-processed features and outputs the predictions.
We describe the problem of \underline{P}ersonalized \underline{P}re-processing \underline{P}ipeline \underline{S}earch (P3S) as follows. Let $\mathcal{P}$ be the search space of the preprocessing pipelines. Given a training dataset $\mathcal{D}_{\text{train}}$, a validation dataset $\mathcal{D}_{\text{valid}}$, a machine learning model $M$, we aim to solve the following optimization problem:
\begin{equation}
\label{eq:1}
\argmax_{P_j \in \mathcal{P}} L (f, \mathcal{D}_\text{train}, \mathcal{D}_\text{valid}),
\end{equation}
where $f = \langle P_1, P_2, ..., P_D, M \rangle$ is the pipeline, and $L (f, \mathcal{D}_\text{train}, \mathcal{D}_\text{valid})$ is the performance metric on $\mathcal{D}_\text{valid}$ when fitting $f$ on $\mathcal{D}_\text{train}$. In this work, we perform 10-fold cross validation to obtain $L$. Let $|\mathcal{P}|$ denote the number of possible preprocessing pipelines in $\mathcal{P}$. Then the search space is $|\mathcal{P}|^D$, which grows exponentially with more features. In our preliminary experiments, we find that naively applying an existing search algorithm to this massive search space leads to poor performance.
\section{Methodology}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\textwidth]{figures/ClusterP3S.pdf}
\vspace{-8pt}
\caption{A high-level overview of the ClusterP3S framework. The column features are first embedded from the input dataset and then processed by a deep clustering network which assigns each feature to a cluster. The cluster information is forwarded to the cluster-wise preprocessing module, which applies a preprocessing pipeline to each cluster. Finally, an ML model generates the predictions based on the pre-processed features. This process is jointly optimized following a hierarchical search strategy, where the upper-level search optimizes the feature clustering to enable better pipelines built upon the clusters, and the lower-level search optimizes the pipeline given a specific cluster assignment.}
\vspace{-8pt}
\label{fig:framework}
\end{figure}
We propose ClusterP3S for the P3S problem, illustrated in Figure~\ref{fig:framework}. ClusterP3S consists of (i) a deep clustering network which embeds the column features and assigns each feature to a cluster, (ii) a cluster-wise preprocessing module that samples a preprocessing pipeline for each cluster, and (iii) an ML model which generates predictions based on the pre-processed features. We first introduce a hierarchical search objective to jointly optimize the clustering network and the pipeline (Section~\ref{sec:method1}). Then we propose a practical instance that achieves the upper-level search and the lower-level search with reinforcement learning and random search, respectively (Section~\ref{sec:method2}). Finally, we instantiate ClusterP3S with a tailored search space (Section~\ref{sec:method3}).
\subsection{Hierarchical Search of Feature Clusters and Preprocessing Pipelines}
\label{sec:method1}
This subsection introduces a hierarchical search objective by clustering similar features and applying the same preprocessing pipeline to all the features within a cluster. Specifically, suppose the number of clusters is $K$. Then the search space can be reduced exponentially from $|\mathcal{P}|^{D}$ to $|\mathcal{P}|^{K}$.
A naive way to achieve this is to first embed the feature and then apply an off-the-shelf clustering algorithm, such as K-Means. However, this will lead to sub-optimal performance. First, the obtained clusters highly depend on the feature embedding, whose quality is hard to control. As a result, features that require different preprocessing pipelines could be falsely grouped together and forced to use the same pipeline. Second, the clusters obtained in this way are fixed in the whole search process. Bad clusters could significantly limit the performance upper bound in the search phase.
To tackle these issues, we propose to dynamically learn feature clusters in an end-to-end fashion. Let $\mathbf{c} \in \mathcal{C} \subseteq \mathbb{R}^{D}$ be the cluster assignment of the features, where each element $c_j \in \{1, 2, ..., K\}$, and $K$ is the number of clusters. We abuse the notations of $L$ and $f$ with a given cluster assignment by letting $L (f, \mathbf{c}, \mathcal{D}_\text{train}, \mathcal{D}_\text{valid})$ be the performance metric given $\mathbf{c}$, and $f = \langle P_1, P_2, ..., P_K, M \rangle$ which only searches for $K$ preprocessing pipelines. We aim to optimize the following objective:
\begin{equation}
\label{eq:2}
\argmax_{\mathbf{c} \in \mathcal{C}, P_k \in \mathcal{P}} L (f, \mathbf{c}, \mathcal{D}_\text{train}, \mathcal{D}_\text{valid}).
\end{equation}
Eq~(\ref{eq:2}) can be interpreted as a hierarchical search problem with two levels: at the upper-level, we find the best clusters, and at the lower-level, we search for the best pipeline given a cluster assignment. While the overall search complexity of Eq~(\ref{eq:2}) is the same as that of Eq~(\ref{eq:1}), we can leverage the similarity pattern in the feature embedding space to effectively cluster the features such that we can reduce the upper-level search complexity significantly.
\subsection{Training Deep Clustering Network with Reinforcement Learning}
\label{sec:method2}
This subsection introduces a practical instance to achieve the hierarchical search based on a deep clustering network and reinforcement learning. Our design consists of four steps: (i) a feature embedding module using an AutoEncoder, (ii) a deep clustering network that assigns each feature to a cluster, (iii) a cluster-wise preprocessing pipeline search module for the lower-level search, and (iv) a reinforcement learning loss to update the clustering network.
\noindent\textbf{Feature Embedding.} The goal of feature embedding is to generate a representation for each column feature. Motivated by how documents are encoded in text classification~\cite{korde2012text}, we treat each feature column as a document and the value of each instance within the column as a term. Then we use the term frequency of each column feature to represent the feature. Formally, let $V$ be the vocabulary size, i.e., the number of terms in the dataset. Each feature will be embedded as $\mathbf{e}_j \in \mathbb{R}^{V}$, where the $v^{\text{th}}$ element of $\mathbf{e}_j$ indicates the number of appearances of the $v^{\text{th}}$ term. Following this strategy, we can embed a dataset as $\mathbf{E} \in \mathbb{R}^{D \times V}$. However, $\mathbf{E}$ is often high-dimensional. Thus, we further use an AutoEncoder to obtain condensed embeddings. Specifically, we train an AutoEncoder with Mean-Squared-Error (MSE) loss:
\begin{equation}
\label{eq:3}
L_{\text{AE}} = \left\| \text{AutoEncoder}(\mathbf{E}) - \mathbf{E} \right\|_F^2.
\end{equation}
Then we use the condensed representations obtained by the AutoEncoder as the final dataset embedding, denoted as $\widetilde{\mathbf{E}} \in \mathbb{R}^{D \times H}$, where $H$ is the hidden dimension of the AutoEncoder.
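A minimal sketch of this column embedding, assuming the dataset is given as a pandas DataFrame (the autoencoder compression from $V$ to $H$ dimensions is standard and omitted):
\begin{verbatim}
import numpy as np
import pandas as pd

def column_term_frequencies(df: pd.DataFrame) -> np.ndarray:
    """Embed each feature column as a term-frequency vector over the vocabulary."""
    vocab = {v: i for i, v in
             enumerate(sorted({str(x) for c in df.columns for x in df[c]}))}
    E = np.zeros((df.shape[1], len(vocab)))
    for j, c in enumerate(df.columns):
        for x in df[c]:
            E[j, vocab[str(x)]] += 1
    return E  # shape (D, V); an autoencoder then compresses each row to H dims
\end{verbatim}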
\noindent \textbf{Deep Clustering Network.} Given the dataset embedding $\widetilde{\mathbf{E}}$, we aim to assign each feature to a cluster. Motivated by~\cite{caron2018deep}, we use a deep clustering network to map feature embeddings to cluster IDs. The deep clustering network is initialized in an unsupervised fashion following two steps. First, we use a clustering algorithm, such as K-Means, to assign the features to $K$ clusters. Second, we generate pseudo labels based on the clusters and use supervised loss to train the clustering network. Formally, let $\text{ClusterNN}(\cdot)$ be the clustering network, and $\widetilde{\mathbf{c}}_j$ be the one-hot pseudo labels for the $j^{th}$ feature. We train the network with cross-entropy loss:
\begin{equation}
L_{\text{cluster}} = \sum_{j=1}^D \text{cross-entropy}(\text{ClusterNN}(\widetilde{\mathbf{e}}_j), \widetilde{\mathbf{c}}_j),
\label{eq:4}
\end{equation}
where $\widetilde{\mathbf{e}}_j$ is the embedding for the $j^{th}$ feature, and $\text{cross-entropy}(\cdot, \cdot)$ is the standard cross-entropy loss. However, the clusters obtained in this way may not be optimal. One benefit of using deep clustering instead of K-Means is that we can update the clustering network with gradient descent. Thus, Eq.~(\ref{eq:4}) is only used for initializing the clustering network, which will be further updated later.
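A sketch of this initialization, assuming \texttt{ClusterNN} is a small torch module mapping an $H$-dimensional feature embedding to $K$ logits (function and variable names are ours):
\begin{verbatim}
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

def init_cluster_net(cluster_nn, E_tilde, k, epochs=100, lr=1e-3):
    """Warm-start the clustering network from K-Means pseudo-labels (Eq. 4)."""
    pseudo = torch.as_tensor(KMeans(n_clusters=k, n_init=10).fit_predict(E_tilde))
    X = torch.as_tensor(E_tilde, dtype=torch.float32)
    opt = torch.optim.Adam(cluster_nn.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(cluster_nn(X), pseudo)  # logits over K clusters per feature
        loss.backward()
        opt.step()
\end{verbatim}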
\noindent \textbf{Cluster-wise Preprocessing Pipeline Search.} Given a clustering network $\text{ClusterNN}(\cdot)$, we can produce the cluster assignment for all the features with a forward pass. Formally, let $\mathbf{c} = \text{ClusterNN}(\widetilde{\mathbf{E}})$ be the cluster assignment, where $\mathbf{c} \in \mathbb{R}^{D}$, and its each element $c_j \in \{1,2,...,K\}$. Given $\mathbf{c}$, we search for the best preprocessing pipelines, i.e., $\argmax_{P_k \in \mathcal{P}} L (f, \mathbf{c}, \mathcal{D}_\text{train}, \mathcal{D}_\text{valid})$. In this work, we adopt the random search, where in each iteration, we randomly sample a preprocessing pipeline. Since there can be many invalid pre-processing pipelines in the search space, we record the invalid primitives met in the search and force the algorithm not to sample the invalid ones later.
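The lower-level random search can be sketched as follows; caching a whole candidate's (cluster, pipeline) pairs on any raised exception is a simplification of the bookkeeping described above:
\begin{verbatim}
import random

def lower_level_search(clusters, pipelines, evaluate, n_iter=50):
    """Random search over cluster-wise pipelines, skipping known-invalid choices."""
    invalid, best, best_score = set(), None, float("-inf")
    for _ in range(n_iter):
        cand = {c: random.choice(pipelines) for c in set(clusters)}
        if any((c, p) in invalid for c, p in cand.items()):
            continue                       # avoid re-sampling invalid primitives
        try:
            score = evaluate(cand)         # e.g. 10-fold CV accuracy of model M
        except Exception:
            invalid.update(cand.items())   # remember combinations that raised
            continue
        if score > best_score:
            best, best_score = cand, score
    return best, best_score
\end{verbatim}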
\noindent \textbf{Reinforcement Learning Update.} The initial cluster assignment of the clustering network could be sub-optimal in that it is learned in an unsupervised way and it is not performance-aware. We propose to use reinforcement learning to update the clustering network towards the best cluster assignment. Specifically, we treat the clustering network as a policy network, whose outputs are actions indicating the probability of assigning the input feature to the $K$ clusters. Then we use reinforcement loss to update the policy network using the performance as the reward. Formally, let $p(c_j|\widetilde{\mathbf{e}}_j)$ be the probability of assigning the $j^{th}$ feature to the cluster $c_j$, $r_{\text{perf}}$ be the reward, i.e., the classification performance. To reduce the variance, we employ a baseline in the reward function, i.e., $r = r_{\text{perf}} - \bar{r}_{\text{perf}}$, where we use REINFORCE~\cite{williams1992simple} to update the policy network with
\begin{equation}
L_{\text{reinforce}} = -r \log p(c_j|\widetilde{\mathbf{e}}_j).
\label{eq:5}
\end{equation}
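A sketch of one policy update based on Eq.~(\ref{eq:5}), where \texttt{actions} is the cluster assignment that was evaluated and \texttt{reward} its resulting accuracy (names are ours):
\begin{verbatim}
import torch

def reinforce_step(cluster_nn, E_tilde, actions, reward, baseline, optimizer):
    """One REINFORCE update of the clustering policy (Eq. 5)."""
    X = torch.as_tensor(E_tilde, dtype=torch.float32)
    dist = torch.distributions.Categorical(logits=cluster_nn(X))  # one per feature
    log_prob = dist.log_prob(torch.as_tensor(actions)).sum()      # sum over features
    loss = -(reward - baseline) * log_prob
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
\end{verbatim}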
\begin{algorithm}[t]
\caption{Learning procedure of ClusterP3S}
\label{alg:1}
\setlength{\intextsep}{0pt}
\begin{algorithmic}[1]
\STATE \textbf{Input:} Training dataset $\mathcal{D}_{\text{train}}$, validation dataset $\mathcal{D}_{\text{valid}}$, the search space of the preprocessing pipelines $\mathcal{P}$, a machine learning model $M$.
\STATE Learn feature embedding based on Eq.~(\ref{eq:3}).
\STATE Initialize the deep clustering network based on Eq.~(\ref{eq:4}).
\FOR{each upper-level iteration}
\STATE Assign each feature to a cluster using the cluster network.
\FOR{each lower-level iteration}
\STATE Randomly sample a preprocessing pipeline within $\mathcal{P}$ based on the current clusters.
\IF{the sampled pipeline achieves better performance}
\STATE Store the sampled pipeline and mark it as the best
\ENDIF
\ENDFOR
\STATE Update the clustering network using the best performance as the reward based on Eq.~(\ref{eq:5})
\ENDFOR
\end{algorithmic}
\end{algorithm}
\noindent \textbf{Summary.} We summarize the overall learning procedure in Algorithm~\ref{alg:1}. After initializing the feature embedding and the clustering network (lines 2 and 3), we jointly optimize the clustering network and search for the best preprocessing pipelines. In the outer loop, we generate a cluster assignment (line 5) and update the clustering network based on the feedback from the inner loop (line 12). In the inner loop, we use random search to find the best preprocessing pipelines given a cluster assignment (lines 6 to 11). Finally, the best preprocessing pipeline is returned.
\subsection{Search Space Design}
\label{sec:method3}
This subsection introduces the search space of ClusterP3S. We include several standard preprocessing primitives, which can be grouped into three categories: (i) imputers that complete the missing values of the features, (ii) encoders that encode the features with transformation, and (iii) scalers that scale the feature values. We design a search space that allows each feature to be sequentially processed by an imputer, an encoder, and a scaler, whose order is fixed. However, a preprocessing pipeline can skip a specific primitive step. For example, it can only have an encoder and a scaler without an imputer. Our search space includes 11 primitives and 48 possible preprocessing pipelines. We provide more details of these primitives in \textbf{Appendix~\ref{apendix:1}}. Therefore, the overall search space is $48^D$, which grows exponentially with the number of features.
\section{Experiments}
We evaluate the performance of ClusterP3S on several benchmark datasets. Specifically, we aim to answer the following research questions. \textbf{RQ1:} Can ClusterP3S improve performance by enabling feature-wise preprocessing pipeline search (Section~\ref{sec:exp2})? \textbf{RQ2:} How effective is the proposed deep clustering network compared with a vanilla clustering algorithm, such as K-Means (Section~\ref{sec:exp3})? \textbf{RQ3:} How efficient is the search of ClusterP3S (Section~\ref{sec:exp4})? \textbf{RQ4:} What is the impact of the number of clusters on ClusterP3S (Section~\ref{sec:exp5})? \textbf{RQ5:} How does the preprocessing pipeline discovered by ClusterP3S compare with the heuristic one (Section~\ref{sec:exp6})?
\subsection{Experimental Setting}
\label{sec:exp1}
\noindent\textbf{Datasets.} For comprehensive comparison, we evaluate the performance of ClusterP3S on eight datasets with various scales and feature characteristics from the OpenMLcc-18 benchmark~\cite{bischl2017openml}. Specifically, these datasets differ in sample size and feature characteristics (e.g., missing value ratios and the number of numerical and categorical features). Table~\ref{tbl:dataset_stats} summarizes the data statistics. More details are provided in \textbf{Appendix~\ref{apendix:2}}.
\noindent\textbf{Comparison Methods.} Since there is no automated search algorithm designed for feature-wise preprocessing, we implement two feasible solutions for comparison: a heuristic preprocessing pipeline (\textbf{HeuristicP3}) and a random-clustering-based personalized preprocessing pipeline (\textbf{RandClusterP3}). HeuristicP3 is obtained by identifying preprocessing configurations that reflect common practice or domain expertise. RandClusterP3 is obtained by randomly clustering features into different groups. The details of the comparison methods are as follows.
\begin{itemize}
\item \textbf{HeuristicP3} is built upon a handcrafted pipeline that applies a series of widely adopted preprocessing primitives. For numerical features, an imputation method is applied if there are missing values. Specifically, we first fill the missing values using the mean of the observed values and then normalize each feature via the scaler primitive MaxABS\footnote{\url{https://github.com/automl/auto-sklearn/tree/master/autosklearn/pipeline/components/feature_preprocessing}}. For the non-numerical features, we first fill the missing values using the most frequent value and then adopt one-hot encoding to transform the data. Finally, we concatenate the resultant numerical features and the transformed non-numerical features as input for the downstream machine learners.
\item \textbf{RandClusterP3} is a variant of the proposed framework. It is obtained by replacing the deep clustering network adopted by ClusterP3S with a random algorithm, which randomly assigns each feature into different group clusters. After generating a set of random clusters, we apply the same cluster-wise preprocessing pipeline used in ClusterP3S for model evaluation.
\end{itemize}
\begin{table}[t]
\centering
\caption{Dataset Statistics.}
\vspace{-8pt}
\label{tbl:dataset_stats}
\footnotesize
\setlength{\tabcolsep}{2.5pt}
\begin{tabular}[t]{l*{5}{c}}
\toprule
Dataset & \# Features & \# Samples & \# Missing values & \# Numerical Features & \# Non-numerical Features\\
\midrule
38-sick & 30 & 3772 & 6064 & 6 & 22 \\
29-credit-a & 16 & 690 & 67 & 6 & 9 \\
1049-pc4 & 38 & 1458 & 0 & 37 & 0 \\
1480-ilpd & 11 & 583 & 0 & 9 & 1 \\
23381-dresses-sales & 13 & 500 & 835 & 1 & 11 \\
3-kr-vs-kp & 37 & 3196 & 0 & 0 & 36 \\
40975-car & 7 & 1728 & 0 & 0 & 6 \\
50-tic-tac-toe & 10 & 958 & 0 & 0 & 9 \\
\bottomrule
\end{tabular}
\vspace{-15pt}
\end{table}
\noindent\textbf{Learning Protocol.} Following classical evaluation protocols for the supervised setting~\cite{bischl2017openml,gijsbers2019open,probst2019tunability}, we first use the whole dataset to conduct the pipeline search, and then apply 10-fold cross-validation to evaluate the searched preprocessing pipelines with different machine learning models. We consider five popular classifiers: RandomForest, GradientBoosting, SVC, AdaBoost, and SGDClassifier (SGD) from Sklearn \cite{sklearn_api}. To mitigate randomness, we repeat the process twenty times and report the averaged accuracy results.
\noindent\textbf{Implementation Details.} We implement our model with PyTorch and search for 50 iterations using the Adam optimizer with a learning rate of 0.001. For the AutoEncoder network, we adopt six fully connected layers with the hidden dimension fixed to 128. For the deep clustering network, we adopt four fully connected layers with the hidden dimension set to 128. We set the number of clusters $K$ to 5 for all the experiments. We provide more details in \textbf{Appendix~\ref{apendix:3}}.
\subsection{Comparison with Baselines}
\label{sec:exp2}
We start by comparing the performance of ClusterP3S with the baseline methods (\textbf{RQ1}). It is worth noting that the naive feature-wise random search solution is not included in the comparison, because it takes an extremely long time to scan the preprocessing pipeline candidates when the feature space is relatively large, and the performance of the feature-wise pipelines it identifies is inferior. Table~\ref{tbl:fwt_acc_improve} reports the results over the eight datasets in terms of accuracy. We make several observations. \textbf{First}, the proposed ClusterP3S performs better than the two baseline methods across all datasets. Specifically, ClusterP3S improves 3.5\% and 2.0\% over HeuristicP3 and RandClusterP3 on average, respectively. \textbf{Second}, although RandClusterP3 achieves comparable or even better results than HeuristicP3 in most cases, it underperforms our method in all scenarios. The possible reason is that different features favor different preprocessing pipelines, and our method can effectively identify specific pipelines for the features via reinforcement learning.
Besides, we examine the learning processes of our method and the two competitors, shown in the left panel of Figure~\ref{fig:self}. The performances of our method and RandClusterP3 first increase with more iterations and then plateau. ClusterP3S consistently outperforms RandClusterP3 during the learning process by a wide margin. We attribute this to our policy-based deep clustering network.
\begin{table}[t]
\centering
\caption{Accuracy performance of all methods over eight OpenML-CC18 Benchmark datasets. The accuracy values are averaged based on five classical machine learning algorithms: RandomForest, GradientBoosting, SVC, AdaBoost, and SGD.
}
\label{tbl:fwt_acc_improve}
\footnotesize
\begin{tabular}[t]{lccc}
\toprule
Dataset & HeuristicP3 & RandClusterP3 & ClusterP3S\\
\midrule
38-sick & 96.63 $\pm$ 1.69 & 97.29 $\pm$ 1.36 & 97.70 $\pm$ 1.14\\
29-credit-a & 84.28 $\pm$ 1.94 & 82.55 $\pm$ 4.76 & 85.07 $\pm$ 1.11\\
1049-pc4 & 88.69 $\pm$ 1.40 & 90.06 $\pm$ 0.67 & 91.08 $\pm$ 0.38\\
1480-ilpd & 70.77 $\pm$ 2.07 & 70.60 $\pm$ 0.71 & 71.35 $\pm$ 1.09\\
23381-dresses-sales & 58.04 $\pm$ 3.14 & 59.44 $\pm$ 1.61 & 62.20 $\pm$ 1.60\\
3-kr-vs-kp & 94.77 $\pm$ 0.65 & 94.88 $\pm$ 1.18 & 95.18 $\pm$ 1.24\\
40975-car & 86.42 $\pm$ 4.16 & 84.40 $\pm$ 3.73 & 86.70 $\pm$ 3.45\\
50-tic-tac-toe & 77.52 $\pm$ 10.31 & 87.70 $\pm$ 7.23 & 91.00 $\pm$ 7.46\\
\midrule
Average & 82.14 $\pm$ 3.17 & 83.36$\pm$ 2.66 & 85.03 $\pm$ 2.19\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Ablation Study}
\label{sec:exp3}
To answer \textbf{RQ2}, we compare our model with a K-Means-based variant (ClusterP3S-Kmeans) across a set of machine learning classifiers in the middle panel of Figure~\ref{fig:self}. ClusterP3S-Kmeans is obtained by replacing our policy-based deep clustering network with the popular K-Means algorithm. Our model performs consistently better than the K-Means-based variant across the five classifiers. In particular, ClusterP3S significantly outperforms the K-Means variant on the SVC and SGD base classifiers. A possible reason is that RandomForest, GradientBoosting and AdaBoost are based on decision trees, which are less sensitive to how feature values are preprocessed, so the quality of the clusters matters less for them. This result demonstrates the effectiveness of the proposed policy-based deep clustering network.
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=5.4cm]{figures/efficiency_acc_iterations.png}
\end{subfigure}%
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=5.4cm]{figures/ablation.png}
\end{subfigure}%
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=5.4cm]{figures/efficiency_acc_times.png}
\end{subfigure}%
\caption{ClusterP3S analysis. \textbf{Left:} The impact of learning iterations on ClusterP3S and baseline methods. \textbf{Middle:} The performance of ClusterP3S vs. ClusterP3S-Kmeans across five machine learning classifiers. \textbf{Right:} Efficiency analysis of ClusterP3S.}
\label{fig:self}
\end{figure*}
\subsection{Efficiency Analysis}
\label{sec:exp4}
To investigate the efficiency of ClusterP3S (\textbf{RQ3}), following the setting in Section~\ref{sec:exp3}, we plot the averaged accuracy of all methods with respect to the running time in Figure~\ref{fig:self} (right panel). The x-axis denotes the wall-clock time in seconds. In general, our method achieves significantly better performance than the two baselines after only 15 seconds of search.
\subsection{Hyperparameter Study}
\label{sec:exp5}
To analyze the impact of the number of clusters $K$ (\textbf{RQ4}), we vary $K$ over the set $\{2, 5, 10, 20\}$ and report the results on all datasets based on the SVC classifier in Table~\ref{tbl:k_hyperparameter}. The results show that our method is not sensitive to the number of clusters when $K \ge 5$. A possible reason is that the deep clustering network can automatically learn an effective number of clusters even when $K$ is set too large.
\begin{table}[t]
\centering
\caption{The performance of ClusterP3S \textit{w.r.t.} $K$ based on the SVC classifier. "-" means not applicable since these datasets have fewer features than $K$.}
\label{tbl:k_hyperparameter}
\vspace{-8pt}
\footnotesize
\begin{tabular}[t]{l|*{4}{c}}
\toprule
Dataset & $K=2$ & $K=5$ & $K=10$ & $K=20$\\
\midrule
38-sick & 97.61 & 96.81 & 96.81 & 97.29 \\
29-credit-a & 84.78 & 85.65 & 85.65 & - \\
1049-pc4 & 90.73 & 90.80 & 91.01 & 90.80 \\
1480-ilpd & 71.36 & 72.22 & 71.35 & - \\
23381-dresses-sales & 58.0 & 62.2 & 62.4 & - \\
3-kr-vs-kp & 93.89 & 94.24 & 94.49 & 94.30 \\
40975-car & 89.75 & 89.75 & - & - \\
50-tic-tac-toe & 93.21 & 93.21 & - & - \\
\bottomrule
\end{tabular}
\vspace{-15pt}
\end{table}
\subsection{Case Study}
\label{sec:exp6}
To answer \textbf{RQ5}, we conduct a case study on the 29-credit-a dataset by comparing the heuristic pipeline and the one discovered by ClusterP3S, shown in Figure~\ref{fig:diff_pipelines}. The heuristic pipeline adopts two commonly used preprocessing pipelines for numerical and non-numerical features, respectively, whereas ClusterP3S groups feature A11 with the non-numerical features. A possible explanation is that A11 has low cardinality, such that treating it like a non-numerical feature leads to better performance. We also observe that the pipeline discovered by ClusterP3S does not quite align with our intuitions. For example, it is unclear why an ordinal encoder is used. Interestingly, ClusterP3S achieves better performance with this pipeline. This suggests the necessity of using AutoML, which can discover novel preprocessing pipelines that cannot be easily identified by humans.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\textwidth]{figures/pipeline_example.pdf}
\caption{Comparison of the heuristic pipeline (\textbf{left}) and the pipeline discovered by ClusterP3S (\textbf{middle}) on the 29-credit-a dataset. The table shows the characteristics of the features (\textbf{right}), where numerical features are marked in red, and non-numerical features are marked in blue. We can observe that feature A11 (which has relatively low cardinality) is grouped with non-numerical features even if it is essentially a numerical feature. }
\label{fig:diff_pipelines}
\end{figure}
\section{Related Work}
\textbf{AutoML.} AutoML systems have achieved remarkable success in various ML design tasks, such as hyperparameter tuning~\cite{feurer2019hyperparameter,liaw2018tune}, algorithm selection~\cite{lindauer2015autofolio,mohr2021towards}, neural architecture search~\cite{klein2017fast,zimmer2021auto,zoph2016neural,wang2022auto,li2021automated}, meta-learning~\cite{finn2017model,vanschoren2018meta,behl2019alpha}, and pipeline search~\cite{thornton2013auto,feurer2020auto,heffetz2020deepline,drori2021alphad3m,yang2020automl,rakotoarison2019automated,kishimoto2021bandit,zha2021autovideo,milutinovic2020evaluation,lai2021tods,li2020pyodds,li2021autood,li2019pyodds,lai2021revisiting}. In this work, we study pipeline search and focus on pre-processing pipelines, which complements the existing efforts. We propose to enable feature-wise preprocessing pipeline search and devise a cluster-based algorithm to efficiently explore the search space.
\noindent\textbf{Reinforcement Learning.} Reinforcement learning has shown strong performance in many reward-driven tasks~\cite{mnih2013playing,zha2021douzero,schulman2017proximal,lillicrap2015continuous,silver2016mastering,silver2017mastering,jumper2021highly,zha2019experience,zha2021rank,lai2020dual,zha2021simplifying,zha2020meta,zha2021rlcard,zha2022autoshard,zha2022dreamshard,zha2022towards,zha2019rlcard,lai2020policy,dong2023active}. It has also been applied to AutoML search~\cite{zoph2016neural}. More recently, automated reinforcement learning has been explored~\cite{parker2022automated}. In this work, we reduce the search space with a novel deep clustering network, which is trained using reinforcement learning with the reward obtained from the pipeline's performance.
\section{Conclusions}
In this work, we study whether enabling feature-wise preprocessing pipeline search can improve performance. To tackle the large search space, we propose ClusterP3S, which jointly learns the clusters and searches for the optimal pipelines with a deep clustering network trained by reinforcement learning. Experiments on benchmark datasets show that enabling feature-wise preprocessing pipeline search can significantly improve performance. We hope this insight and the idea of learning feature clusters can motivate future AutoML system designs.
\section{Limitations and Broader Impact Statement}
Preprocessing is a key step in building ML pipelines. Our work advances the existing algorithms by enabling feature-wise preprocessing pipeline search. Our efforts pave the way towards generic, robust, and efficient AutoML systems, which will broadly benefit various ML-driven applications, such as recommender systems, healthcare, fraud detection, traffic prediction, etc. There is no negative societal impact to the best of our knowledge. Nevertheless, our research is limited in that we only search for preprocessing pipelines. Combining ClusterP3S with model selection and hyperparameter tuning could lead to better performance, which we will investigate in the future. As fairness~\cite{chuang2022mitigating} becomes increasingly important, especially in high-stakes applications~\cite{wan2022processing,ding2023fairly}, another future direction is fairness-aware preprocessing pipeline search. Making the search process more interpretable~\cite{chuang2023efficient,wang2022accelerating} is another direction that we plan to investigate.
\bibliographystyle{abbrv}
\section{Introduction and Summary}
Rotating objects are ubiquitous in nature. If the objects are macroscopic, they may obey thermodynamics.
However, in statistical mechanics, investigating rotating, interacting many-body systems at thermal equilibrium is problematic.
The chemical potential for the angular momentum, which is equivalent to the angular velocity for point particles, makes the Euclidean actions complex even for bosonic systems, and the standard Monte Carlo method (MC) does not work because of the sign problem (see Sec.~\ref{sec-chemical}).
Hence overcoming this issue is a quite important challenge in theoretical physics.
There are various approaches to the sign problem, such as the Lefschetz-thimble method \cite{1205_3996,1303_7204} and the complex Langevin method (CLM) \cite{Parisi:1983mgm,Klauder:1983sp}. The Lefschetz-thimble method handles the sign problem by deforming the integration contour to mitigate it. The CLM, which we focus on in this paper, is a stochastic process for the complexified variables. The advantage of the CLM is that it allows us to study large systems. Initially, the CLM suffered from the problem that it may converge to a wrong result without any indication of failure. Recent studies \cite{1101_3270,1211_3709,1508_02377,1604_07717,1606_07627} have clarified the conditions under which the CLM results converge to the correct result, equivalent to the path integral. A sufficient condition for obtaining the correct result is that the probability distribution of the drift norm falls off exponentially or faster (see Sec.~\ref{sec-CLM}). The drift norm is a byproduct of solving the Langevin equation numerically, and monitoring it does not involve extra CPU costs.
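As a minimal illustration of the method and of the drift-norm criterion, the toy sketch below applies the CLM to a single complex variable with the Gaussian action $S(z)=\mu z^2/2$ for complex $\mu$, where the exact answer $\langle z^2\rangle = 1/\mu$ is known; this is, of course, far simpler than the matrix model studied below.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
mu = 1.0 + 0.5j                  # complex coupling: S(z) = mu z^2 / 2
z, dt = 0.0 + 0.0j, 1e-3         # complexified variable, Langevin step size
samples, drift_norms = [], []
for step in range(500_000):
    v = -mu * z                                         # drift term -dS/dz
    z = z + v * dt + rng.normal(scale=np.sqrt(2 * dt))  # real Gaussian noise
    drift_norms.append(abs(v))
    if step > 50_000 and step % 10 == 0:                # thermalize, then measure
        samples.append(z ** 2)
print("CLM   <z^2> =", np.mean(samples))                # should approach 1/mu
print("exact <z^2> =", 1 / mu)
# reliability check: the histogram of drift_norms should fall off
# exponentially or faster; a power-law tail signals an unreliable result
\end{verbatim}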
To test whether the CLM works in rotating systems, we study the U($N$) matrix quantum mechanics whose Euclidean action is given by \cite{deWit:1988wri, Banks:1996vh},
\begin{align}
\label{action-BFSS}
S
=
\int_0^{\beta} \hspace{-2mm} dt
\Tr
\Biggl\{
\sum_{I=1}^D
\frac{1}{2}
\left(
D_t X^I \right)^2
-
\sum_{I,J=1}^D \frac{g^2}{4} [X^I,X^J]^2
\Biggr\}.
\end{align}
Here $X^I(t)$ ($I=1,\cdots,D$) are $N \times N$ hermitian traceless matrices.
$D_t:= \partial_t -i [A_t,\,]$ is a covariant derivative and $A_t(t)$ is a gauge field.
$g$ is a coupling constant, and we take the 't Hooft limit $N \to \infty$ and $g\to 0$ with a fixed 't Hooft coupling $\lambda := g^2N$ \cite{tHooft:1973alw}.
The model is invariant under the local U($N$) gauge transformation $X^I \to U X^I U^\dagger $ and $D_t \to U D_t U^\dagger $.
Since we will study the system at finite temperature, we have introduced the Euclidean time $t$ and the inverse temperature $\beta=\frac{1}{T}$.
Note that $A_t$ is a vector and $X^I$ are scalars in this one dimension.
This model has an SO($D$) rotation symmetry $X^I \to {O^I}_J X^J$, and the angular momentum is conserved.
Then, we analyze this model at thermal equilibrium with a finite angular momentum chemical potential through the CLM.
Although the model is strongly coupled and the standard perturbative analysis does not work, the model without rotation has been analyzed through the MC method \cite{Narayanan:2003fc, Aharony:2004ig, Aharony:2005ew, Kawahara:2007fn, Azeyanagi:2009zf, Azuma:2012uc, Azuma:2014cfa, Filev:2015hia, Hanada:2016qbz, Bergner:2019rca, Asano:2020yry, Watanabe:2020ufk, Dhindsa:2022uqn} and other non-perturbative methods such as the minimum sensitivity \cite{Kabat:1999hp, Hashimoto:2019wmg, Morita:2020liy, Brahma:2022ifx} and the $1/D$-expansion \cite{Azuma:2012uc, Azuma:2014cfa, Hotta:1998en, Mandal:2009vz, Morita:2010vi, Mandal:2011hb}.
Particularly, the minimum sensitivity and the $1/D$-expansion would work even in the presence of the angular momentum chemical potential.
Thus, we can test the CLM by comparing these methods.
This is one advantage of studying this model.
(Indeed, some earlier results through the $1/D$-expansion have been reported in Ref.~\cite{Morita:2010vi}.)
Actually, we will show that the CLM reproduces the results of the minimum sensitivity quantitatively when the chemical potential is not too large.
(We investigate the model at $D=9$ and $D=16$, and we find that the minimum sensitivity is slightly better than the $1/D$-expansion there.)
Although the CLM does not provide reliable results at larger chemical potentials, it is enough to observe non-trivial phase transitions.
Hence, the CLM partially overcomes the sign problem of the angular momentum chemical potential in the model \eqref{action-BFSS}.
As far as we know, this is the first example in which a quantum many-body system in thermal equilibrium with a finite angular momentum chemical potential is solved through a first-principles computation.
Before going to explain the details of our analysis, we present the importance of the model \eqref{action-BFSS}.
This model appears in various contexts of high energy physics.
Here we list some of them related to the current work:
\begin{itemize}
\item
This model is a large-$N$ reduction \cite{Eguchi:1982nm} (or dimensional reduction \cite{LUSCHER1983233}) of the $(D+1)$-dimensional U($N$) pure Yang-Mills (YM) theory to one dimension.
In this view, $X^I$ are the dimensional reductions of the spatial components of the original $(D+1)$-dimensional gauge fields.
It is known that this model in the large-$N$ limit is confined at low temperatures and shows a confinement-deconfinement transition at finite temperature \cite{Narayanan:2003fc, Aharony:2004ig, Mandal:2009vz, Kabat:1999hp}.
This phase structure is similar to that of the original YM theory.
Hence, this model is important as a toy model of the original YM theory.
In particular, if we turn on the angular momentum chemical potential in our model, it may be regarded as a toy model of a rotating pure gluon system \cite{Chernodub:2020qah, Yamamoto:2013zwa, Braguta:2021jgn, Chen:2022smf, Chernodub:2022wsw, Chernodub:2022veq}, which is actively studied to reveal the nature of neutron stars and of the quark-gluon plasma created in relativistic heavy-ion colliders \cite{STAR:2017ckg}.
\item
This model at $D=9$ appears as a low energy effective theory of $N$ supersymmetric D-particles \cite{Banks:1996vh} on Euclidean $S_\beta^1 \times S^1 \times R^8$ in type IIA superstring theory \cite{Aharony:2004ig}.\footnote{ Here $S_\beta^1 $ is the temporal direction of the model \eqref{action-BFSS} and another $ S^1 $ is a Scherk-Schwarz circle whose radius is taken large.
This system is described by a supersymmetric version of the model \eqref{action-BFSS}, but the Scherk-Schwarz circle breaks the supersymmetry and makes the fermions on the D-particles massive. When the radius of the Scherk-Schwarz circle is taken large, the fermion's masses become large and they can be ignored at low energy.
Note that the diagonal components of $X^I$ represent the positions of the D-particles in the $ S^1 \times R^8$ space, and the scalar for the Scherk-Schwarz circle, say $X^1$, is not distinguishable from the other $X^I$ due to the large circle limit.
Then, the low energy effective theory becomes the model \eqref{action-BFSS}.
}
In this picture, a gravity dual \cite{Maldacena:1997re, Itzhaki:1998dd} of the model \eqref{action-BFSS} is given by black brane geometries.
There, the aforementioned confinement/deconfinement transition corresponds to a Gregory-Laflamme (GL) transition \cite{Aharony:2004ig, Gregory:1994bj, Mandal:2011ws}.
\item This model is also a toy model of $N$ D-particles, which may be regarded as a microscopic description of a black hole \cite{Klebanov:1997kv}.\footnote{The conventional D-particles in superstring theory are described by the supersymmetric version of the model \eqref{action-BFSS} at $D=9$, and it is called the Banks-Fischler-Shenker-Susskind (BFSS) theory \cite{Banks:1996vh}.
Since the model \eqref{action-BFSS} is a bosonic version of the BFSS theory, it is called the bosonic BFSS model.}
In this picture, the diagonal components of the matrix $X_{ii}^I$ ($i=1,\cdots, N$) represent the position of the $i$-th D-particle along the $I$-th coordinate, and the off-diagonal components $X_{ij}^I$ ($i \neq j$) represent the open strings connecting the $i$-th and $j$-th D-particles, which induce interactions between the D-particles.
In this way, the model \eqref{action-BFSS} describes quantum mechanics of $N$ interacting D-particles in the $D$-dimensional space.
The interactions cause attractive forces between the D-particles, and they form a bound state. This bound state is chaotic and is regarded as a toy model of a black hole.\footnote{
\label{ftnt-SYM}
Indeed, it is known that the dynamics of the model \eqref{action-BFSS} is similar to that of the ${\mathcal N} =4$ supersymmetric YM theory (SYM) on $S_\beta^1 \times S^3 $.
For example, this SYM theory shows a large-$N$ confinement/deconfinement phase transition related to the model \eqref{action-BFSS} \cite{Sundborg:1999ue, Aharony:2003sx}.
Particularly, the SYM theory at strong coupling has a gravity dual given by AdS geometries \cite{Witten:1998zw, Aharony:1999ti}.
There, at low temperature, a thermal AdS geometry is stable while an AdS black hole geometry is favored at high temperature, and a phase transition between these two geometries is called the Hawking-Page transition \cite{Hawking:1982dh}.
These geometries correspond to the confinement and deconfinement phases in the SYM theory, and hence they are also related to those of the model \eqref{action-BFSS}.
}
Hence, when the system rotates, it may correspond to a rotating black hole.
\end{itemize}
In this article, we mainly focus on the last D-particle picture, since $X_{ii}^I$ represents a particle position and the angular momenta of these particles are intuitively easy to understand.
\subsection{Summary of this work}
We summarize our findings on the model \eqref{action-BFSS} through the CLM and the minimum sensitivity analysis.
\begin{itemize}
\item We study the model with a finite angular momentum chemical potential through the CLM, and the results agree quantitatively with the minimum sensitivity analysis as long as the chemical potential is not too large.
This indicates that the CLM works properly in the rotating system.
We also observe that the transition temperature for the confinement/deconfinement transition decreases as the chemical potential increases. See Sec.~\ref{sec-result}.
This decreasing critical temperature is consistent with previous studies of rotating black holes in holography \cite{Gubser:1998jb, Chamblin:1999tk, Cvetic:1999ne, Hawking:1999dp, Gubser:2000mm, Basu:2005pj, Yamada:2006rx} and of rotating pure gluons \cite{Chen:2022smf, Chernodub:2022veq}.
\item We develop an approximation method in the minimum sensitivity analysis, which enables us to explore the so-called ``small black hole" solution \cite{Aharony:1999ti}.\footnote{
Analyses of the model \eqref{action-BFSS} through the minimum sensitivity were also done in Refs.~\cite{Kabat:1999hp} and \cite{Morita:2020liy}.
In particular, Ref.~\cite{Morita:2020liy} confirmed the existence of the small black hole solution in the model.
However, that study merely pointed out the existence of the solution; the solution itself was not derived.
It also could not derive the gapped solution, which corresponds to a large black hole.
The derivation of these solutions through the new approximation is one of our results in this article.}
This solution has a negative specific heat, similar to that of Schwarzschild black holes, and is important in the context of quantum gravity.
See, for example, Fig.~\ref{fig-D=9}.
\item
By using the minimum sensitivity, we study the properties of the model with an imaginary angular momentum chemical potential, which has been employed to evade the sign problem in the MC computations \cite{Yamamoto:2013zwa, Braguta:2021jgn, Chen:2022smf, Chernodub:2022wsw, Chernodub:2022veq}.
We find a stable confinement phase at high temperature when $D=3$.
This is consistent with the recent study in the four-dimensional pure Yang-Mills theories \cite{Chen:2022smf}.
We also discuss a condition for the existence of the stable high-temperature confinement phase in our model \eqref{action-BFSS}.
Besides, we compute the critical temperature for the real chemical potential through the analytic continuation of the imaginary chemical potential, and find that the analytic continuation quantitatively works for finite chemical potentials.
See Sec.~\ref{sec-Im}.
\end{itemize}
This paper is organized as follows. In Sec.~\ref{sec-review}, we review the thermodynamical properties of the model \eqref{action-BFSS} without angular momentum (readers familiar with this topic can skip this section). In Sec.~\ref{sec-chemical}, we introduce angular momentum to the model \eqref{action-BFSS}, and show that the model has a sign problem.
In Sec.~\ref{sec-result}, we present our main results that the CLM successfully predicts non-trivial phase structure of the model \eqref{action-BFSS}, which agrees with the minimum sensitivity analysis.
In Sec.~\ref{sec-CLM}, we present the details of the application of the CLM to the model.
In Sec.~\ref{sec-minimum}, we explain the minimum sensitivity analysis.
In Sec.~\ref{sec-Im}, we study the imaginary chemical potential in our model by using the minimum sensitivity, and compare with the results obtained from four-dimensional YM theories.
Sec.~\ref{Sec_discussion} is devoted to a discussion.
In this article, we use units in which $c=\hbar=k_B=1$.
For numerical computations, we take $\lambda=1$.
\section{Review of the previous results on the non-rotating model}
\label{sec-review}
\begin{figure}
\begin{center}
\begin{tabular}{ccc}
\begin{minipage}{0.33\hsize}
\begin{center}
\includegraphics[scale=0.5]{rho1.eps}\\
uniform distribution
\end{center}
\end{minipage}
\begin{minipage}{0.33\hsize}
\begin{center}
\includegraphics[scale=0.5]{rho2.eps}\\
non-uniform distribution
\end{center}
\end{minipage}
\begin{minipage}{0.33\hsize}
\begin{center}
\includegraphics[scale=0.5]{rho3.eps}\\
gapped distribution
\end{center}
\end{minipage}
\end{tabular}
\caption{
Three typical eigenvalue distributions $\rho(\alpha)$ of $A_t$ \eqref{rho}.
The uniform distribution (left) is favored at low temperatures, while the gapped distribution (right) is favored at high temperatures.
The non-uniform distribution may appear at intermediate temperatures.
}
\label{fig-rho}
\end{center}
\end{figure}
In this section, we briefly review the properties of the model \eqref{action-BFSS} without rotation.
At finite temperatures, the model has a large-$N$ confinement/deconfinement transition.
The order parameters of this transition are the Polyakov loop operators
\begin{align}
u_n := \frac{1}{N} \Tr {\mathcal P} \exp\left( i \int_0^{n \beta } dt A_t \right).
\label{polyakov}
\end{align}
If $\langle u_n \rangle =0$ (${}^\forall n$), it indicates confinement, and $\langle u_n \rangle \neq 0$ (${}^\exists n$) signals deconfinement.
In order to investigate the phase transition, it is useful to take the static diagonal gauge
\begin{align}
\beta A_t = {\rm diag}(\alpha_1,\alpha_2,\cdots, \alpha_N ).
\label{gauge-diagonal}
\end{align}
Here $\alpha_k$ ($k=1,\cdots, N$) is independent of the Euclidean time: $\partial_t \alpha_k=0$.
It is also a periodic variable: $\alpha_k \sim \alpha_k+2\pi$.
Then, the Polyakov loops $u_n$ become
\begin{align}
u_n= \frac{1}{N} \sum_{k=1}^N e^{in \alpha_k} .
\label{polyakov_static_diag}
\end{align}
We also introduce the density function for $\{ \alpha_k \}$,
\begin{align}
\rho(\alpha)=\frac{1}{N} \sum_{k=1}^N \delta(\alpha-\alpha_k)=\frac{1}{2\pi} \sum_{n \in {\mathbf Z}} u_n e^{in\alpha}, \qquad (-\pi \le \alpha \le \pi).
\label{rho}
\end{align}
As we will see shortly, the profile of this function characterizes the phases of our system.
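As a quick illustration (a toy check in Python, not part of our simulation code), one can evaluate Eq.~\eqref{polyakov_static_diag} for a uniform sample of $\{\alpha_k\}$ and for a clustered one; the former gives $|u_1|\approx 0$ and the latter $|u_1|={\cal O}(1)$:
\begin{verbatim}
import numpy as np

def u_n(alpha, n):
    # Polyakov loop u_n = (1/N) sum_k exp(i n alpha_k)
    return np.mean(np.exp(1j * n * alpha))

rng = np.random.default_rng(0)
N = 10000
alpha_uni = rng.uniform(-np.pi, np.pi, N)  # flat rho: "confined"
alpha_gap = rng.normal(0.0, 0.5, N)        # clustered rho: "deconfined"
print(abs(u_n(alpha_uni, 1)))  # O(1/sqrt(N)), i.e. ~ 0 at large N
print(abs(u_n(alpha_gap, 1)))  # ~ exp(-0.125) ~ 0.88
\end{verbatim}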
Roughly speaking, the phase structure of this system can be explained as follows.
If the scalars $X^I$ were absent, ``repulsive forces'' would act between $\{ \alpha_k \}$, and they would distribute uniformly on the configuration space, which is a circle ($\alpha_k \sim \alpha_k+2\pi$).
Then, the density function $\rho(\alpha)$ becomes $\rho(\alpha)=1/2\pi$ as plotted in Fig.~\ref{fig-rho} (left).
We call this solution a uniform solution.\footnote{As we will see, in the minimum sensitivity analysis, phases of the model \eqref{action-BFSS} are obtained as saddle point solutions of an effective action. Hence, we may call phases ``solutions".
}
From Eq.~\eqref{rho}, $u_n=0$ is satisfied, and the system is in a confinement phase.
Once we turn on the scalars $X^I$, they induce ``attractive forces" between $\{ \alpha_k \}$, which strengthen as the temperature increases.
Thus, while the system remains confined at sufficiently low temperatures, the attractive forces overcome the repulsive forces at high temperatures, and $\{ \alpha_k \}$ collapse into a cluster.
Then, the density function is depicted as shown in Fig.~\ref{fig-rho} (right).
There, a gap exists in the distribution of $\{ \alpha_k \}$ around $\alpha=\pi$, and this solution is called a gapped solution.\footnote{In this solution, we have taken a gauge that the peak of the density $\rho(\alpha)$ is at $\alpha=0$.
Then, all $u_n$ would become real.
In the CLM, we evaluate $\langle |u_n| \rangle$ instead of taking this gauge.
\label{ftnt-gauge}}
In this solution, $u_n \neq 0$ and it corresponds to a deconfinement phase.
There is another solution that connects the uniform solution and the gapped one as shown in Fig.~\ref{fig-rho} (center).
In this solution no gap exists, and it is called a non-uniform solution.
We can easily check that $u_1 \neq 0$ in this configuration, and it is in a deconfinement phase.
(Hence, we have two deconfinement phases in large-$N$ gauge theories.)
Between these three solutions, large-$N$ phase transitions may occur.
Note that whether the non-uniform solution arises as a stable phase depends on the dynamics of the system; for example, it is always unstable at $D=9$, as we will see shortly.
In this way, the model \eqref{action-BFSS} has a confinement/deconfinement transition similar to higher-dimensional YM theories.
One feature of this transition is that its order may change depending on the dimension $D$ \cite{Azuma:2014cfa}.
For small $D$ it is first order, while for large $D$ it would be second order.
Indeed, MC computations have established that it is first order up to $D=25$ \cite{Azuma:2014cfa, Bergner:2019rca}.
On the other hand, the $1/D$-expansion predicts that the transition is second order at sufficiently large $D$ \cite{Mandal:2009vz}.
Also, the minimum sensitivity analysis at three-loop order indicates that it is first order up to $D=35$ and it becomes second order from $D=36$ \cite{Morita:2020liy}.
(Note that the minimum sensitivity analysis at two-loop order predicts that the phase transition is always first order, independently of $D$.
Also, it is not clear whether the three-loop computation is sufficiently reliable, and it has not been established that the order of the transition changes at $D=36$.)
Such a $D$-dependence of the transitions is also consistent with holography.\footnote{
As we mentioned in the introduction, the confinement/deconfinement transition of the model \eqref{action-BFSS} is related to a Gregory-Laflamme (GL) transition in gravity \cite{Gregory:1994bj}.
Here, $D=9$ is taken and the dual gravity theory is in Euclidean $S_\beta^1\times S^1 \times R^{d}$ space with $d=8$.
We can formally extend this correspondence to a general $D$ by taking $d=D-1$ \cite{Azuma:2014cfa}.
Then, the order of the GL transition depends on the dimension $D$.
It is first order for $D \le 11$ and it is second order for $D \ge 12$, and they are qualitatively similar to our matrix model results \cite{Sorkin:2004qq, Kudoh:2005hf}.
Interestingly, a similar $D$-dependence has been observed in Rayleigh-Plateau instability in a fluid model, too \cite{Cardoso:2006ks, Miyamoto:2008rd}.
}
In this article, we will study the model with a finite angular momentum chemical potential through the CLM.
There, we investigate $D=9$ and $16$, where the first order transitions occur.
Hence we will employ the minimum sensitivity analysis at two-loop order rather than the $1/D$-expansion to test the CLM, since it also predicts the first order transitions.
(The minimum sensitivity analysis at three-loop order with the finite chemical potential is much more complicated than the two-loop analysis and we do not do it in this article.)
In order to test the minimum sensitivity analysis at two-loop order, we compare the results at zero chemical potential obtained through this analysis and MC (not CLM), and we find quantitative agreement.\footnote{The agreement degrades as the temperature increases. This is because we use the approximation \eqref{large-D-approximation}, which is not reliable at higher temperatures.} (We will show the details of the minimum sensitivity analysis in Sec.~\ref{sec-minimum}.) See Fig.~\ref{fig-D=9} for $u_1$ at $D=9$.
Hence, we expect that the minimum sensitivity analysis at two-loop order may be reliable even with a finite chemical potential, and we use this method to test the CLM.
We also plot the temperature dependence of free energy through the minimum sensitivity at $D=9$ in Fig.~\ref{fig-D=9} (left).
It shows a first-order phase transition as we mentioned above.
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[scale=0.95]{F-T.eps}\\
$T$ vs. $F$
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[scale=0.8]{u1-d9.eps}\\
$T$ vs. $u_1$
\end{center}
\end{minipage}
\end{tabular}
\caption{
Temperature dependence of the free energy and the Polyakov loop $u_1$ at $D=9$ at zero angular momentum chemical potential, through the minimum sensitivity analysis at two-loop order.
The solid curves represent the results at large $N$.
The blue, green and red curves are for the uniform, the non-uniform and the gapped solutions, respectively.
(See Sec.~\ref{sec-free-energy} and Sec.~\ref{subsec-Polyakov}.)
The dashed blue curve is the leading $1/N$ correction for $u_1$ in the confinement phase at $N=32$ \eqref{un-finite-N-main}.
The dots are results through MC at $N=32$, and we observe good quantitative agreements.
The free energies of the three solutions indicate that a first-order phase transition occurs at $T=T_*$.
}
\label{fig-D=9}
\end{center}
\end{figure}
\section{Introducing chemical potentials for angular momenta}
\label{sec-chemical}
We introduce angular momentum chemical potentials to the model \eqref{action-BFSS}.
For simplicity, we first consider the chemical potential in single-particle quantum mechanics in two dimensions.
We take the Hamiltonian of this system as
\begin{align}
H=\frac{1}{2} \left(
p_x^2+p_y^2
\right)+V(x,y).
\end{align}
Here, $x$ and $y$ are the two-dimensional position operators, and $p_x$ and $p_y$ are their conjugate momenta.
It is convenient to employ a complex coordinate $z:=(x+iy)/\sqrt{2}$.
Then, the angular momentum is given by
\begin{align}
J= xp_y-yp_x=i \left(z p_z-\bar{z} \bar{p}_z \right).
\end{align}
Now we introduce the angular chemical potential $\mu$, and the Hamiltonian is modified as
\begin{align}
H-\mu J= & \frac{1}{2} \left(
p_x^2+p_y^2
\right) +V(x,y)- \mu J \nonumber \\
= & p_z \bar{p}_z -i \mu \left(z p_z-\bar{z} \bar{p}_z \right)+V(z,\bar{z}).
\end{align}
Then, the Euclidean action for this Hamiltonian, which is used in the path-integral formalism at finite temperature, is derived as
\begin{align}
S=\int_0^\beta dt \left\{ \left( \dot{\bar{z}} - \mu \bar{z} \right)\left( \dot{z} + \mu z \right) +V(z,\bar{z}) \right\} .
\end{align}
Here, the first term is complex.
Hence, a finite angular momentum chemical potential generally causes a sign problem in MC.
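To make this explicit, write $z=(x+iy)/\sqrt{2}$ with real paths $x(t)$ and $y(t)$; expanding the kinetic term gives
\begin{align}
S=\int_0^\beta dt \left\{ |\dot{z}|^2 - \mu^2 |z|^2 - i \mu \left( x \dot{y} - y \dot{x} \right) +V(z,\bar{z}) \right\} ,
\end{align}
where the cross term proportional to $\mu$ is purely imaginary, so the Boltzmann weight $e^{-S}$ is complex.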
Let us introduce the angular momentum chemical potentials to the matrix model \eqref{action-BFSS} through the same procedure.
Note that the model \eqref{action-BFSS} is a quantum mechanics in $D$ dimensions, and we consider rotations on $\tilde{D}$ planes ($2\tilde{D} \le D $).
Hence, we introduce the chemical potential $\mu_I$ ($I=1, \cdots, \tilde{D}$) for each plane.
Then, the action becomes \cite{Morita:2010vi, Yamada:2006rx}
\begin{align}
\label{action-BFSS-J}
S
= &
\int_0^{\beta} \hspace{-2mm} dt
\Tr
\Biggl\{
\sum_{I=1}^{\tilde{D}}
\left(
D_t -\mu_I \right) Z^{I \dagger }
\left(
D_t +\mu_I \right) Z^I +
\sum_{I=2\tilde{D}+1}^D
\frac{1}{2}
\left(
D_t X^I \right)^2
-
\sum_{I,J=1}^D \frac{g^2}{4} [X^I,X^J]^2
\Biggr\} \nonumber \\
= & \int^{\beta}_{0} dt \textrm{Tr } \Biggl[ \frac{1}{2} \sum_{I=1}^D (D_t X^I)^2 - \sum_{I,J=1}^D \frac{g^2}{4} [X^I,X^J]^2 - \sum_{I=1}^{{\tilde D}} \frac{\mu_I^2}{2} \left\{ (X^I)^2+(X^{\tilde{D}+I})^2 \right\} \Biggr. \nonumber \\
& \ \ \Biggl. + i \sum_{I=1}^{{\tilde D}} \mu_I \left\{ (D_t X^I) X^{{\tilde D}+I} - (D_t X^{{\tilde D}+I}) X^{I} \right\} \Biggr],
\end{align}
where we have defined $Z^I:= \left( X^I+i X^{\tilde{D}+I} \right)/\sqrt{2} $, ($I=1,\cdots, \tilde{D}$).
In the following analysis, for simplicity, all $\mu_I$ are taken to be a common value $\mu>0$.
\section{Overview of our main results}
\label{sec-result}
In this section, we show our main results for the model \eqref{action-BFSS-J} derived through the CLM and the minimum sensitivity.
The details of the CLM and the minimum sensitivity are presented in Sec.~\ref{sec-CLM} and Sec.~\ref{sec-minimum}, respectively.
We mainly investigate $D=9$ and $D=16$.
We have chosen these values because $D=9$ is the critical dimension of superstring theories, and $D=16$ is suitable for comparison with the minimum sensitivity, since the two methods agree better for larger $D$ in the $\mu=0$ case, as demonstrated in Ref.~\cite{Morita:2020liy}.
\subsection{Phase diagrams}
\label{sec-result-phase}
We draw the $\mu-T$ phase diagrams at $D=9$ with $\tilde{D}=1$ and $\tilde{D}=3$ in Fig.~\ref{fig-phase}.
These are obtained through the minimum sensitivity analysis (see Sec.~\ref{sec-free-energy}).
These phase diagrams show that the uniform phase is favored in a low temperature and chemical potential region, and the gapped phase is favored in a high temperature and chemical potential region. A first-order transition occurs between them.
In the ``unknown" region depicted in Fig.~\ref{fig-phase}, the minimum sensitivity analysis does not work, and we presume that this region may be thermodynamically unstable due to a large chemical potential.
(As far as we have checked, we obtain similar phase diagrams when we change $D$ and/or $\tilde{D}$.)
These phase diagrams show that a larger $\mu$ or a larger $\tilde{D}$ makes the transition temperature lower.
Similar properties are expected in four-dimensional pure YM theories and black holes as we will argue in Sec.~\ref{sec-Im} and Sec.~\ref{sec-BHs}.
\begin{figure}[htbp]
\begin{center}
\begin{tabular}{cc}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[scale=0.85]{phase-d9-1.eps}\\
$D=9$ with ${\tilde D}=1$
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[scale=0.85]{phase-d9-3.eps}\\
$D=9$ with ${\tilde D}=3$
\end{center}
\end{minipage}
\end{tabular}
\caption{
Phase diagrams for $D=9$ with $ {\tilde D}=1$ and ${\tilde D}=3$.
The derivation is explained in Sec.~\ref{sec-free}.
The red solid lines represent $\mu_{*}(T)$, where the first-order phase transitions between the uniform phase and the gapped phase occur.
The green solid curves represent $\mu_{\rm unstable}(T)$, where $m_Z=\mu$ occurs and our analysis for the gapped solution is not reliable anymore beyond these curves.
The black solid lines represent the critical points $\mu_c(T)$.
The black dashed lines represent the GWW point $\mu_{\textrm{GWW}} (T)$.
In the ${\tilde D}=3$ case, the blue dashed line represents $\mu=m_0$ where the uniform solution becomes not reliable.
Although the phase structures at ${\tilde D}=1$ and ${\tilde D}=3$ are similar,
a difference appears near $(T, \mu) \sim (0,\mu_c)$.
While $\mu_c(T)$ reaches $T\simeq 0$ in the ${\tilde D}=1$ case, it does not in the ${\tilde D}=3$ case.
However, our approximation near $(T, \mu) \sim (0,\mu_c)$ is not very reliable, and these results are not conclusive.
}
\label{fig-phase}
\end{center}
\end{figure}
\subsection{Results for observables}
\label{sec-result-observables}
By using the CLM and the minimum sensitivity, we investigate the $\mu$ dependence of four observables:
the Polyakov loops $|u_1|$ and $|u_2|$, the angular momentum $J$, and the expectation values of the squares of the scalars $(X^I)^2$.
The results are shown in Figs.~\ref{u1_D09}, \ref{u2_D09}, \ref{JI_D09} and \ref{r2_D09}, respectively.
There, we take $D=9$ and $D=16$, and investigate them by changing $\tilde{D}$ and temperature $T$.
The values of $\tilde{D}$ and $T$ that we have taken are summarized in Table \ref{critical_muc}.
In this table, the phase transition point $\mu_*$ computed by the minimum sensitivity, at which the uniform phase turns into the gapped phase, is also listed.
In the CLM, $N=16$ and $32$ are taken at $D=9$, and $N=16$ is taken at $D=16$.
Note that we present the numerical results obtained by the CLM only in the parameter region of $\mu$ where the data are acceptable.
(The criterion for acceptable data is based on the drift norms discussed in Sec.~\ref{sec-CLM}.)
We find that obtaining acceptable results becomes harder for larger $\mu$.
In the following subsections, we discuss the details of the obtained observables.
\begin{table} [htbp]
\renewcommand{\arraystretch}{1.3}
\begin{center}
\begin{tabular}{|c||c|} \hline
parameters & $\mu_*$ \\ \hline \hline
$\displaystyle D=9, ~{\tilde D}=1,~ T=0.85$ & $1.1$ \\ \hline
$\displaystyle D=9, ~{\tilde D}=1,~ T=0.90$ & $0.71$ \\ \hline \hline
$\displaystyle D=9, ~{\tilde D}=3,~ T=0.85$ & $0.67$ \\ \hline
$\displaystyle D=9, ~{\tilde D}=3,~ T=0.90$ & $0.42$ \\ \hline
\end{tabular} \hspace*{5mm}
\begin{tabular}{|c||c|} \hline
parameters & $\mu_*$ \\ \hline \hline
$\displaystyle D=16, ~{\tilde D}=1,~ T=0.80$ & $1.69$ \\ \hline
$\displaystyle D=16, ~{\tilde D}=1,~ T=0.85$ & $1.29$ \\ \hline \hline
$\displaystyle D=16, ~{\tilde D}=5,~ T=0.80$ & $0.87$ \\ \hline
$\displaystyle D=16, ~{\tilde D}=5,~ T=0.85$ & $0.62$ \\ \hline
\end{tabular}
\caption{
The list of parameters that are used in our analysis shown in Figs.~\ref{u1_D09} - \ref{r2_D09}.
In the $D=9$ case, we take $\tilde{D}=1$ and $3$ and $T=0.85$ and $0.90$.
In the $D=16$ case, we take $\tilde{D}=1$ and $5$ and $T=0.80$ and $0.85$.
The transition point $\mu_*$, at which the uniform phase turns into the gapped phase for these parameters, is also shown.
($\mu_*$ are obtained through the minimum sensitivity analysis in Sec.~\ref{sec-free-energy}.)
}
\label{critical_muc}
\end{center}
\end{table}
\subsubsection{Polyakov loop $|u_1|$ and $|u_2|$}
\label{sec-result-Polyakov}
The Polyakov loop $\{ u_n \}$ \eqref{polyakov} is the order parameter of the large-$N$ confinement/deconfinement transition, as mentioned in Sec.~\ref{sec-review}. $u_1=0$ at large $N$ indicates the uniform phase, and
$u_1 \ne 0$ indicates the non-uniform phase or the gapped phase.
Besides, the minimum sensitivity predicts that the non-uniform solution satisfies $u_2 = 0$, whereas the gapped solution satisfies $u_2 \neq 0$ (see Sec.~\ref{sec-free}).
So, $u_1$ and $u_2$ may tell us which phases appear.
However, it is known that the Polyakov loops suffer relatively large O$\left( \frac{1}{N} \right)$ corrections \eqref{un-finite-N-main} at large $N$ \cite{Aharony:2003sx, Azuma:2014cfa}. (The corrections for other quantities are typically O$\left( \frac{1}{N^2} \right)$.) Hence, we take care to distinguish the signals of the gapped and non-uniform phases from the $1/N$ corrections.
Our results for $|u_1|$ and $|u_2|$ against $\mu$ are plotted in Figs.~\ref{u1_D09} and \ref{u2_D09}.
The minimum sensitivity predicts the first-order transition from the uniform phase ($u_1=u_2=0$) at low $\mu$ to the gapped phase ($u_1 \neq 0$ and $u_2 \neq 0$) at high $\mu$.
The CLM results agree with the minimum sensitivity results at large $N$ in the gapped phase. In the uniform phase, $|u_n|$ has a finite-$N$ effect of order O$\left( \frac{1}{N} \right)$ \cite{Azuma:2014cfa}, and we compare the CLM results of $|u_n|$ with the minimum sensitivity result with the $1/N$ correction \eqref{un-finite-N-main}, which also agree with each other.
Among our results, the deviation between the CLM and the minimum sensitivity at $T=0.90$ in the $D=9$ with $\tilde{D}=1$ case is larger.
We presume that the fluctuation near $T=0.90$ is large in the CLM, since the phase diagram in Fig.~\ref{fig-phase} indicates that $T_c \simeq 0.9$ for $\mu \lesssim 1$.
Although we observe the phase transitions in the CLM,
we do not attempt to determine their order.
To do so, we may need to evaluate, for example, the susceptibility of $u_1$ \cite{Azuma:2014cfa}, but this requires computations at larger $N$ in the CLM, and we leave it for future work.
\begin{figure} [htbp]
\centering
\includegraphics[width=0.43\textwidth]{u1_N16D09T00-85cdir1lat60.eps}
\includegraphics[width=0.43\textwidth]{u1_N16D09T00-90cdir1lat60.eps}
\includegraphics[width=0.43\textwidth]{u1_N16D09T00-85cdir3lat60.eps}
\includegraphics[width=0.43\textwidth]{u1_N16D09T00-90cdir3lat60.eps}
\includegraphics[width=0.43\textwidth]{u1_N16D16T00-80cdir1lat60.eps}
\includegraphics[width=0.43\textwidth]{u1_N16D16T00-85cdir1lat60.eps}
\includegraphics[width=0.43\textwidth]{u1_N16D16T00-80cdir5lat60.eps}
\includegraphics[width=0.43\textwidth]{u1_N16D16T00-85cdir5lat60.eps}
\caption{$|u_1|$ is plotted against $\mu$. We present $\langle |u_1| \rangle$ obtained by the CLM, at $N=16,32$ for $D=9$ and $N=16$ for $D=16$. The lines denote the results of the minimum sensitivity \eqref{un-finite-N-main}-\eqref{un-gapped-main}. The dashed lines represent those of the uniform phase at $N=16,32,\infty$, while the dotted and solid lines represent those of the non-uniform and gapped phase at $N=\infty$, respectively.}\label{u1_D09}
\end{figure}
\begin{figure} [htbp]
\centering
\includegraphics[width=0.43\textwidth]{u2_N16D09T00-85cdir1lat60.eps}
\includegraphics[width=0.43\textwidth]{u2_N16D09T00-90cdir1lat60.eps}
\includegraphics[width=0.43\textwidth]{u2_N16D09T00-85cdir3lat60.eps}
\includegraphics[width=0.43\textwidth]{u2_N16D09T00-90cdir3lat60.eps}
\includegraphics[width=0.43\textwidth]{u2_N16D16T00-80cdir1lat60.eps}
\includegraphics[width=0.43\textwidth]{u2_N16D16T00-85cdir1lat60.eps}
\includegraphics[width=0.43\textwidth]{u2_N16D16T00-80cdir5lat60.eps}
\includegraphics[width=0.43\textwidth]{u2_N16D16T00-85cdir5lat60.eps}
\caption{$|u_2|$ is plotted against $\mu$. We present $\langle |u_2| \rangle$ obtained by the CLM, at $N=16,32$ for $D=9$ and $N=16$ for $D=16$. The lines denote the results of the minimum sensitivity \eqref{un-finite-N-main}-\eqref{un-gapped-main}. The dashed lines represent those of the uniform phase at $N=16,32$ and large $N$, while the dotted and solid lines represent those of the non-uniform and gapped phase at $N=\infty$, respectively.
Note that the non-uniform solutions overlap with the uniform solutions at $N=\infty$ because both have $u_2=0$.
}\label{u2_D09}
\end{figure}
\subsubsection{Angular momentum $J$}
\label{sec-result-J}
Our results for the angular momentum $J$ are shown in Fig.~\ref{JI_D09}.
Note that we have introduced the common angular momentum chemical potential $\mu$ for the $\tilde{D}$ planes in the model \eqref{action-BFSS-J}, and we evaluate the angular momentum for the single plane.
The definition of $J$ in the CLM is given in Eq.~\eqref{JI_def}.
In the minimum sensitivity, we derive $J$ through Eq.~\eqref{eq-J}.
We observe that $J$ is close to zero in the uniform phase in the CLM. This is a feature of the confinement phase (see Sec.~\ref{subsec-angular}), and the CLM correctly reproduces it.
In the gapped phase, we observe slight discrepancies between the CLM results and the minimum sensitivity results.
We investigate the lattice spacing dependence and find that $J$ is more sensitive to it than the other quantities.
Thus, the discrepancies may include discretization artifacts of the CLM.
\begin{figure}
\centering
\includegraphics[width=0.43\textwidth]{ReJ_N16D09T00-85cdir1lat60.eps}
\includegraphics[width=0.43\textwidth]{ReJ_N16D09T00-90cdir1lat60.eps}
\includegraphics[width=0.43\textwidth]{ReJ_N16D09T00-85cdir3lat60.eps}
\includegraphics[width=0.43\textwidth]{ReJ_N16D09T00-90cdir3lat60.eps}
\includegraphics[width=0.43\textwidth]{ReJ_N16D16T00-80cdir1lat60.eps}
\includegraphics[width=0.43\textwidth]{ReJ_N16D16T00-85cdir1lat60.eps}
\includegraphics[width=0.43\textwidth]{ReJ_N16D16T00-80cdir5lat60.eps}
\includegraphics[width=0.43\textwidth]{ReJ_N16D16T00-85cdir5lat60.eps}
\caption{$J/N^2$ is plotted against $\mu$. We present $\langle J_{I=1}/N^2 \rangle$ obtained by calculating \eqref{JI_def} via the CLM, at $N=16,32$ for $D=9$ and $N=16$ for $D=16$. The lines denote the results \eqref{eq-J} of the minimum sensitivity obtained at $N=\infty$. The dashed, dotted and solid lines represent those of the uniform, non-uniform and gapped phase, respectively.}\label{JI_D09}
\end{figure}
\subsubsection{Expectation values of scalars}
\label{sec-result-scalar}
The square of the scalar, $(X^I)^2$, represents the spread of the D-particles along the $I$-th direction. Since we have rotating and non-rotating directions, we define the averages
\begin{align}
R_Z^2:= & \frac{1}{\tilde{D}} \frac{g^2}{N} \sum_{I=1}^{\tilde{D}} \left\langle \Tr Z^{I\dagger} Z^{I } \right\rangle
,
\label{R_Z-result}
\\
R^2_X:= & \frac{1}{D-2\tilde{D}} \frac{g^2}{N} \sum_{I=2 \tilde{D}+1}^{D} \left\langle\Tr X^I X^{I } \right\rangle ,
\label{R_X-result}
\end{align}
and investigate them separately.
(We have assumed that no spontaneous symmetry breaking occurs along the rotating or non-rotating directions. Indeed, we do not observe any signal of it in the CLM.
We have also assumed that these quantities are time independent.)
In Fig.~\ref{r2_D09}, the results of $R_Z^2$ and $R_X^2$ are plotted.
The discrepancies between the CLM results and the minimum sensitivity results are small, and both analyses capture important features of the rotating system. In the uniform phase, there is no separation between $R_Z^2$ and $R_X^2$. In the gapped phase, $R_Z^2$ becomes larger, and $R_Z^2$ and $R_X^2$ begin to separate from each other.
This means that the D-particles spread along the rotating directions as $J$ increases, which is natural for rotating objects.
\begin{figure} [htbp]
\centering
\includegraphics[width=0.43\textwidth]{r2_N16D09T00-85cdir1lat60.eps}
\includegraphics[width=0.43\textwidth]{r2_N16D09T00-90cdir1lat60.eps}
\includegraphics[width=0.43\textwidth]{r2_N16D09T00-85cdir3lat60.eps}
\includegraphics[width=0.43\textwidth]{r2_N16D09T00-90cdir3lat60.eps}
\includegraphics[width=0.43\textwidth]{r2_N16D16T00-80cdir1lat60.eps}
\includegraphics[width=0.43\textwidth]{r2_N16D16T00-85cdir1lat60.eps}
\includegraphics[width=0.43\textwidth]{r2_N16D16T00-80cdir5lat60.eps}
\includegraphics[width=0.43\textwidth]{r2_N16D16T00-85cdir5lat60.eps}
\caption{$R_Z^2$ and $R_X^2$ \eqref{R_X-result} are plotted against $\mu$. We present $\langle R_Z^2 \rangle$ and $\langle R_X^2 \rangle$ obtained by calculating \eqref{R_Z2} and \eqref{R_X2} via the CLM respectively, at $N=16,32$ for $D=9$ and $N=16$ for $D=16$. The lines denote the results of the minimum sensitivity (\ref{R_Z}) and (\ref{R_X}) at $N=\infty$. The dashed, dotted and solid lines represent those of the uniform, non-uniform and gapped phase, respectively.}\label{r2_D09}
\end{figure}
\section{Application of the CLM}
\label{sec-CLM}
In this section, we present the details of the application of the CLM \cite{Parisi:1983mgm,Klauder:1983sp}, which is a promising method to simulate systems with a sign problem, to the action \eqref{action-BFSS-J}, and explain the derivation of the results shown in Sec.~\ref{sec-result}. In the following, we rescale the action (\ref{action-BFSS-J}) as
\begin{eqnarray}
t = \lambda^{\frac{-1}{3}} t', \ \ A_t = \lambda^{\frac{1}{3}} {A'}_{t'}, \ \ X_{\mu} = g^{-1} \lambda^{\frac{1}{3}} {X'}_{\mu}, \ \ \mu = \lambda^{\frac{1}{3}} \mu', \label{BFSS_rescaling}
\end{eqnarray}
where $\lambda = g^2N$ is the 't Hooft coupling. This gives
\begin{eqnarray}
S &=& N \int^{\beta'}_{0} dt' \textrm{Tr } \Biggl\{ \frac{1}{2} \sum_{I=1}^D (D_{t'} {X'}^I)^2 - \sum_{I,J=1}^D \frac{1}{4} [{X'}^I,{X'}^J]^2 - \frac{{\mu'}^2}{2} \sum_{K=1}^{2{\tilde D}} ({X'}^K)^2 \Biggr. \nonumber \\
& & \ \ \Biggl. + \mu' i \sum_{K=1}^{{\tilde D}} \{ (D_{t'} {X'}^K) {X'}^{K+{\tilde D}} - (D_{t'} {X'}^{K+{\tilde D}}) {X'}^{K} \} \Biggr\}, \label{action-BFSS-J2}
\end{eqnarray}
where $\displaystyle D_{t'} = \partial_{t'} - i [A'_{t'}, ]$ and $\beta' = \beta \lambda^{\frac{1}{3}}$. We omit $'$ in the following. At $\mu=0$, this action is invariant under the transformations
\begin{eqnarray}
& & X^I (t) \to X^I(t) + x^I I_N, \label{x_inv} \\
& & A(t) \to A(t) + \alpha(t) I_N, \label{a_inv}
\end{eqnarray}
where $I_N$ is an $N \times N$ unit matrix. $x^I$ and $\alpha(t)$ are c-numbers, and $x^I$ has no dependence on $t$. The $\mu \neq 0$ case maintains the invariance under the transformation (\ref{a_inv}), but breaks the invariance under the transformation (\ref{x_inv}).
To put the action (\ref{action-BFSS-J2}) on a computer, we adopt a lattice regularization, where the number of lattice sites is $n_t$ and the lattice spacing is $(\Delta t) = \frac{\beta}{n_t}$. We also adopt the periodic boundary conditions $X_{\mu} (n_t+1) =X_{\mu} (1)$ and $A(n_t+1)=A(1)$. This yields the lattice-regularized action
\begin{eqnarray}
S_{\textrm{lat}} &=& N \textrm{Tr } \sum_{n=1}^{n_t} \Biggl\{ \frac{1}{2 (\Delta t)} \sum_{I=1}^{D} ( X^I (n+1) - V(n) X^I (n) V(n)^{-1} )^2 - \frac{(\Delta t)}{4} \sum_{I,J=1}^{D} [X^I(n), X^J (n)]^2 \Biggr. \nonumber \\
& & \ \ - \frac{(\Delta t)}{2} \mu^2 \sum_{K=1}^{2 {\tilde D}} X^K (n)^2 + \mu i \sum_{K=1}^{{\tilde D}} (X^K (n+1) - V(n) X^K (n) V(n)^{-1} ) X^{K+{\tilde D}} (n) \nonumber \\
& & \ \ - \Biggl. \mu i \sum_{K=1}^{{\tilde D}} (X^{K+{\tilde D}} (n+1) - V(n) X^{K+{\tilde D}} (n) V(n)^{-1} ) X^{K} (n) \Biggr\}, \label{action-BFSS-Jlat}
\end{eqnarray}
where $\displaystyle V(n) = e^{i A (n) (\Delta t)}$. We find it convenient to take a static diagonal gauge (\ref{gauge-diagonal}).
In this gauge, we have
\begin{eqnarray}
V(1) = V(2) = \cdots = V(n_t) = \textrm{diag } (e^{\frac{i \alpha_1}{n_t}}, \ e^{\frac{i \alpha_2}{n_t}}, \ \cdots, \ e^{\frac{i \alpha_N}{n_t}}). \label{action-BFSS-Jlat-V}
\end{eqnarray}
Together with the gauge fixing term, as derived in Refs.~\cite{Aharony:2003sx, hep-th/0310286,hep-th/0601170}, we work on the action
\begin{eqnarray}
S_{\textrm{eff}} = S_{\textrm{lat}} + S_{\textrm{g.f.}}, \textrm{ where } S_{\textrm{g.f.}} = - \sum_{k,\ell=1, \ k \neq \ell}^N \log \left| \sin \frac{\alpha_k - \alpha_{\ell} }{2} \right|. \label{S_eff}
\end{eqnarray}
The CLM consists of solving the complexified version of the Langevin equation. The complex Langevin equation is given by
\begin{eqnarray}
\frac{d X^I_{k\ell} (n,\sigma)}{d \sigma} = - \ \frac{\partial S_{\textrm{eff}}}{\partial X^I_{\ell k} (n, \sigma)} + \eta^I_{k \ell} (n,\sigma), \ \ \frac{d \alpha_{k} (\sigma)}{d \sigma} = - \ \frac{\partial S_{\textrm{eff}}}{\partial \alpha_{k} (\sigma)} + \eta^{(\alpha)}_{k} (\sigma). \label{langevin_eq}
\end{eqnarray}
Here, $\sigma$ is the fictitious Langevin time, and the white noises $\eta^I_{k \ell} (n,\sigma)$ and $\eta^{(\alpha)}_{k} (\sigma)$ are Hermitian matrices and real numbers obeying the probability distribution proportional to $\exp \left( - \ \frac{1}{4} \int d \sigma \sum_{n=1}^{n_t} \sum_{I=1}^{D} \textrm{Tr } \eta^I (n,\sigma)^2 \right)$ and $\exp \left( - \ \frac{1}{4} \int d \sigma \eta^{(\alpha)} (\sigma)^2 \right)$, respectively. The terms $\frac{\partial S_{\textrm{eff}}}{\partial X^I_{\ell k} (n, \sigma)}$ and $\frac{\partial S_{\textrm{eff}}}{\partial \alpha_{k} (\sigma)}$ are called ``drift terms".
The hermiticity of $X^I (n)$ and the reality of $\alpha_k$ are not maintained as the Langevin time $\sigma$ progresses, since the action $S_{\textrm{eff}}$ is complex. The expectation value of an observable ${\cal O}$ is evaluated as
\begin{eqnarray}
\langle {\cal O}[X^I(n), \alpha_k ] \rangle =\frac{1}{\sigma_{T}} \int^{\sigma_0+\sigma_T}_{\sigma_0} d \sigma {\cal O}[X^I (n,\sigma), \alpha_k (\sigma)], \label{VEV_CLM}
\end{eqnarray}
where $\sigma_0$ is the thermalization time, and $\sigma_T$ is the time required for statistics, both of which should be taken to be sufficiently large. In Refs.~\cite{1101_3270,1606_07627,0912_3360}, it was found that the holomorphy of the observable ${\cal O}$ plays an essential role in the validity of Eq.~(\ref{VEV_CLM}).
When we solve the Langevin equation (\ref{langevin_eq}) with a computer, we need to discretize it as
\begin{eqnarray}
X^I_{k\ell} (n,\sigma + \Delta \sigma) &=& X^I_{k\ell} (n,\sigma) - \ (\Delta \sigma) \frac{\partial S_{\textrm{eff}}}{\partial X^I_{\ell k} (n, \sigma)} + \sqrt{\Delta \sigma} {\tilde \eta}^I_{k \ell} (n,\sigma), \label{langevin_eq2X} \\
\alpha_{k} (\sigma +\Delta \sigma) &=& \alpha_k (\sigma) - \ (\Delta \sigma) \ \frac{\partial S_{\textrm{eff}}}{\partial \alpha_{k} (\sigma)} + \sqrt{\Delta \sigma} \, {\tilde \eta}^{(\alpha)}_{k} (\sigma). \label{langevin_eq2a}
\end{eqnarray}
Here, $\Delta \sigma$ is the step size, which we take to be $\Delta \sigma = 10^{-5}$. The factor $\sqrt{\Delta \sigma}$ stems from the normalization of the discretized version of the white noises ${\tilde \eta}^I_{k \ell} (n,\sigma)$ and ${\tilde \eta}^{(\alpha)}_{k} (\sigma)$, which obey the probability distribution proportional to $\exp \left( - \ \frac{1}{4} \sum_{\sigma} \sum_{n=1}^{n_t} \sum_{I=1}^{D} \textrm{Tr } {\tilde \eta}^I (n,\sigma)^2 \right)$ and $\exp \left( - \ \frac{1}{4} \sum_{\sigma} {\tilde \eta}^{(\alpha)} (\sigma)^2 \right)$, respectively.
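As a minimal illustration of this update rule (a sketch, not our production code), the same discretization applied to a one-variable toy model with the complex Gaussian action $S=\frac{a}{2} x^2$, $a=1+i$, reproduces the exact result $\langle x^2 \rangle = 1/a$:
\begin{verbatim}
import numpy as np

a = 1.0 + 1.0j       # complex "mass": exp(-S) is not positive
dsig = 1.0e-4        # Langevin step size
nsteps, ntherm = 2_000_000, 100_000
rng = np.random.default_rng(1)

x = 0.0 + 0.0j       # the field is complexified, as in the CLM
acc = 0.0 + 0.0j
nacc = 0
for step in range(nsteps):
    drift = -a * x                         # holomorphic -dS/dx
    eta = rng.normal(scale=np.sqrt(2.0))   # real noise, <eta^2> = 2
    x += dsig * drift + np.sqrt(dsig) * eta
    if step >= ntherm:
        acc += x * x
        nacc += 1

print(acc / nacc)    # should approach 1/a = 0.5 - 0.5i
\end{verbatim}
The real noise with $\langle \eta^2 \rangle = 2$ matches the normalization of the white noises above.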
In order to extract reliable results equivalent to the path integral from the CLM, we need to avoid the following two problems. One is the ``excursion problem", which occurs when $X^I$ and $\alpha_k$ stray far from Hermitian matrices and real numbers, respectively. The other is the ``singular drift problem", which occurs when the drift terms become too large. It was found in Ref.~\cite{1606_07627} that the CLM is justified when the probability distributions of the drift norms
\begin{eqnarray}
u_X = \sqrt{\frac{1}{N^3 Dn_t} \sum_{n=1}^{n_t} \sum_{I=1}^D \sum_{k,\ell=1}^{N} \left| \frac{\partial S_{\textrm{eff}}}{\partial X^I_{\ell k} (n, \sigma)} \right|^2}, \ \ u_{\alpha} = \sqrt{\frac{1}{N} \sum_{k=1}^{N} \left| \frac{\partial S_{\textrm{eff}}}{\partial \alpha_{k} (\sigma)} \right|^2} \label{drift_norms}
\end{eqnarray}
fall off exponentially or faster.
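In practice, this criterion amounts to histogramming the recorded drift norms and inspecting the tail. The following Python sketch (with synthetic data, purely for illustration) contrasts an exponential tail with a power-law one:
\begin{verbatim}
import numpy as np

def loglog_tail(u, nbins=60):
    # histogram p(u); on a log-log plot a straight line signals a
    # power-law tail (CLM not justified), while a downward-bending
    # curve signals exponential or faster decay
    p, edges = np.histogram(u, bins=nbins, density=True)
    c = 0.5 * (edges[:-1] + edges[1:])
    keep = p > 0
    return np.log(c[keep]), np.log(p[keep])

rng = np.random.default_rng(2)
u_ok = rng.exponential(1.0, 100000)    # mimics an exponential tail
u_bad = rng.pareto(3.0, 100000) + 1.0  # mimics a power-law tail
for u in (u_ok, u_bad):
    x, y = loglog_tail(u)
    print(np.polyfit(x[-len(x) // 3:], y[-len(y) // 3:], 1)[0])
\end{verbatim}
The fitted log-log slope stays near $-4$ for the power-law sample, while it is much steeper (and keeps steepening with the histogram range) for the exponential sample.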
The gauge cooling \cite{1211_3709,1508_02377,1604_07717} is a standard technique to suppress the excursion problem. However, in our complex Langevin studies we already fix the gauge as Eq.~(\ref{gauge-diagonal}), which prevents us from applying the gauge cooling.
In the CLM, we calculate the Polyakov loop $u_n$, defined in Eq.~(\ref{polyakov_static_diag}), and the following observables:
\begin{eqnarray}
J_{I} &=& J^{(1)}_{I} + J^{(2)}_{I} \ \ (I=1,2,\cdots, {\tilde D}), \ \ \textrm{ where } \label{JI_def} \\
J^{(1)}_I &=& \frac{iN}{ \beta} \int^{\beta}_{0} \textrm{Tr} \{ X_I (t) (D_t X_{{\tilde D} +I}(t)) - X_{{\tilde D}+I} (t) (D_t X_{I}(t)) \} \, dt, \nonumber \\
J_I^{(2)} &=& \frac{\mu N}{ \beta} \int^{\beta}_{0} \textrm{Tr} \{ X_I (t)^2 + X_{{\tilde D}+I} (t)^2 \} \, dt, \nonumber \\
R_Z^2 &=& \frac{1}{2{\tilde D} N \beta} \int^{\beta}_{0} \sum_{I=1}^{2 {\tilde D}} \textrm{Tr} X_I^2 (t) dt, \label{R_Z2} \\
R_X^2 &=& \frac{1}{(D - 2 {\tilde D}) N \beta} \int^{\beta}_{0} \sum^{D}_{I=2 {\tilde D}+1} \textrm{Tr} X_I^2 (t) dt. \label{R_X2}
\end{eqnarray}
In a rotating system, an angular momentum cannot be assigned to the center of mass, which leads us to remove the trace part and impose the constraints
\begin{eqnarray}
\frac{1}{N\beta} \int^{\beta}_0 \textrm{Tr} X^I (t) dt = 0, \ \ \textrm{Tr} A(t) = 0. \label{constraints_on_sun}
\end{eqnarray}
To this end, in the measurement routine we calculate the observables (\ref{polyakov_static_diag}), (\ref{JI_def}), (\ref{R_Z2}) and (\ref{R_X2}) in terms of the matrices
\begin{eqnarray}
X^{I\textrm{(m)}}_{ij} (n) = X^I_{ij} (n) - \frac{1}{N n_t} \left( \sum_{n=1}^{n_t} \sum_{k=1}^N X^I_{kk} (n) \right) \delta_{ij}, \ \ \alpha^{\textrm{(m)}}_i = \alpha_i - \frac{1}{N} \sum_{k=1}^N \alpha_k. \label{constraint_on_sun_discre}
\end{eqnarray}
We solve the Langevin equation (\ref{langevin_eq}) without imposing the constraints (\ref{constraints_on_sun}), which facilitates the numerical calculation.
In the CLM, the hermiticity of $X_I(n)$ and the reality of $\alpha_i$ are lost, and the observables (\ref{JI_def}), (\ref{R_Z2}) and (\ref{R_X2}) are not real in general. In the following, we present the real part of the expectation values for the numerical results obtained by the CLM.
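A minimal sketch of the subtraction (\ref{constraint_on_sun_discre}) in Python (the array shapes are our own illustrative convention: one scalar direction X of shape (n_t, N, N) and alpha of shape (N,)):
\begin{verbatim}
import numpy as np

def measurement_fields(X, alpha):
    # remove the Euclidean-time-averaged U(1) trace part from X
    # (applied to each X^I separately) and the mean from alpha,
    # implementing Eq. (constraint_on_sun_discre)
    nt, N, _ = X.shape
    c = np.trace(X, axis1=1, axis2=2).sum() / (N * nt)
    Xm = X - c * np.eye(N)       # broadcasts over the time slices
    am = alpha - alpha.mean()
    return Xm, am
\end{verbatim}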
We work on the $D=9$, ${\tilde D}=1,3$, $T=0.85, 0.90$, $N=16,32$ and $D=16$, ${\tilde D}=1,5$, $T=0.80, 0.85$, $N=16$ cases. In these cases, we take $n_t=60$. To probe the region of $\mu$ that we can study with the CLM, we present the log-log plots of the probability distribution of the drift norms $u_X$ and $u_{\alpha}$, which we denote as $p(u_X)$ and $p(u_{\alpha})$, respectively. As a typical example, we show the $D=9, {\tilde D}=1, T=0.90$ case in Fig.~\ref{drift_norms_result}. $p(u_X)$ falls off exponentially or faster for $\mu \leq 1.2$ at $N=16$ and $\mu \leq 0.9$ at $N=32$, respectively. On the other hand, $p(u_{\alpha})$ falls off as a power law even for small $\mu$. As we present in Appendix \ref{appendix_drift_BFSS}, the power-law decay of $p(u_{\alpha})$ is observed even at $\mu=0$, where there is no sign problem. At $\mu=0$, without the static diagonal gauge (\ref{gauge-diagonal}), the probability distribution $p(u_A)$, where $u_A$ is the drift norm without the static diagonal gauge as defined by Eq.~(\ref{drift_normV}), falls off exponentially or faster. To save CPU time, we take the static diagonal gauge (\ref{gauge-diagonal}) and accept the results for $\mu \leq 1.2$ at $N=16$ and $\mu \leq 0.9$ at $N=32$. Similar trends are observed for other $D$, ${\tilde D}$ and $T$. In Figs.~\ref{u1_D09} - \ref{r2_D09}, we present the numerical results in the parameter region of $\mu$ where we accept the CLM results by this criterion.
\begin{figure} [h]
\centering
\includegraphics[width=0.43\textwidth]{histlog_dnormN16D09T00-90cdir1lat60CLM.eps}
\includegraphics[width=0.43\textwidth]{tdhistlog_dnormN16D09T00-90cdir1lat60CLM.eps}
\includegraphics[width=0.43\textwidth]{histlog_dnormN32D09T00-90cdir1lat60CLM.eps}
\includegraphics[width=0.43\textwidth]{tdhistlog_dnormN32D09T00-90cdir1lat60CLM.eps}
\caption{The log-log plot of the histogram of the drift norm $u_X$ (left) and $u_{\alpha}$ (right), for $D=9, {\tilde D}=1$, $T=0.90$ at $N=16$ (top) and $N=32$ (bottom).}\label{drift_norms_result}
\end{figure}
\section{Minimum sensitivity analysis}
\label{sec-minimum}
In this section, we present the details of the derivation of the results shown in Sec.~\ref{sec-result} through the minimum sensitivity analysis.
\subsection{Derivation of the free energy}
\label{sec-free}
We study the thermodynamical properties of the model \eqref{action-BFSS-J} at large $N$ via a minimum sensitivity analysis \cite{Stevenson:1981vj}.
For this purpose,
we introduce trial masses $m_Z$ and $m_X$ for $Z^I$ and $X^I$, respectively,
and deform the action \eqref{action-BFSS-J} as \cite{Morita:2020liy}
\begin{align}
\label{action-BFSS-MS}
S_{\kappa}
:= & S_0+ \kappa S_{\rm int},
\\
S_0:= &
\int_0^{\beta} \hspace{-2mm} dt
\Tr
\left\{
\sum_{I=1}^{\tilde{D}}
\left(
D_t -\mu \right) Z^{ I \dagger}
\left(
D_t +\mu \right) Z^I + m_Z^2 Z^{ I \dagger} Z^I
+ \sum_{I=2\tilde{D}+1}^D \frac{1}{2}
\left(
D_t X^I \right)^2+
\frac{1}{2} m_X^2 X^{I2} \right\},
\label{S-free}
\\
S_{\rm int}:= &
\int_0^{\beta} \hspace{-2mm} dt
\Tr
\left\{
- m_Z^2 Z^{ I \dagger} Z^I
-\frac{1}{2} m_X^2 X^{I2}
-
\sum_{I,J=1}^D \frac{g^2}{4} [X^I,X^J]^2
\right\}.
\label{S-int}
\end{align}
Here $\kappa$ is a formal expansion parameter.
If we set $\kappa=1$, the mass terms are canceled and this action reproduces the original one \eqref{action-BFSS-J}.
However, if we perform a perturbative expansion with respect to $\kappa$ up to a certain order and take $\kappa=1$ after that, then the obtained quantities would depend on the trial mass parameters $m_Z$ and $m_X$.\footnote{In the action \eqref{action-BFSS-MS}, the gauge field $A_t$ interacts with $X^I$ and $Z^I$ through the covariant derivatives.
However, thanks to the gauge fixing \eqref{gauge-diagonal}, $A_t$ does not prevent the perturbative computations with respect to $\kappa$.}
The idea of the minimum sensitivity is that we fix $m_Z$ and $m_X$ so that the dependence of a certain physical quantity on these parameters is minimized.
It has been demonstrated that this prescription works in various models.
Note that the result obtained through this method depends on the quantity whose parameter dependence we minimize.
In our study, we investigate free energy at two-loop order, and minimize its $m_Z$ and $m_X$ dependence.
By integrating out $X^I$ and $Z^I$ perturbatively, we obtain the effective action for $\{\alpha_k \}$ at two-loop order as shown in Appendix \ref{app-effective-action},
\begin{align}
Z:= & \exp\left(-\beta F(T,\mu, m_X,m_Z) \right) \nonumber \\
:= & e^{ -N^2 \beta f_0 (m_X,m_Z) } \int dU \exp\left[ -N^2 \left\{f_1(T,\mu, m_X,m_Z) -1 \right\} |u_1|^2 \right] .
\label{effective-action-gapped}
\end{align}
Here we have taken $\kappa=1$ and $dU$ is an integral measure defined by \cite{Aharony:2003sx}
\begin{align}
dU := \prod_{k} d\alpha_k e^{-S_{\textrm{g.f.}}}, \qquad
\frac{1}{N^2}
S_{\textrm{g.f.}}=\sum_{n=1}^{\infty}\frac{ 1}{n}|u_{n}|^{2}.
\label{action-G.F.}
\end{align}
$f_0(m_X,m_Z) $ and $f_1(T,\mu, m_X,m_Z)$ are defined in Eqs.~\eqref{f0} and \eqref{fn}, respectively.
Whereas $f_1$ depends on $\beta$ and $\mu$, $f_0$ does not.
Note that we have used an approximation \eqref{large-D-approximation}, which is not reliable for large $T$ or $\mu$.
We have also used the assumption $\mu < m_Z$ (see Appendix \ref{app-effective-action} for the details).
If this condition is not satisfied, the scalar $Z^I$ becomes tachyonic and the perturbative computations fail.
In order to derive the free energy $F$, we need to evaluate the $dU$ integral in Eq.~\eqref{effective-action-gapped}.
This integration at large $N$ has been studied in Ref.~\cite{Liu:2004vy}, and the result depends on the sign of $f_1$.
If $f_1>0$, the saddle point for the uniform solution (Fig.~\ref{fig-rho} [left]) dominates, while when $f_1<0$, the saddle point for a gapped solution (Fig.~\ref{fig-rho} [right]) does, and the free energy is given by \cite{Liu:2004vy}
\begin{empheq}[left={\beta F(T,\mu, m_X,m_Z)=\empheqlbrace}]{alignat=2}
&N^2 \beta f_0 &\qquad &\text{$f_1>0$,}
\label{F-low}
\\
& N^2 \left\{\beta f_0 - \frac{1}{2} \left( \frac{w}{1-w} + \log (1-w) \right) \right\} & &\text{$f_1<0$}.
\label{F-high}
\end{empheq}
Here we have defined
\begin{align}
w:= \sqrt{\frac{-f_1}{1-f_1}}.
\label{w}
\end{align}
Note that we treat the $f_1=0$ case separately, as we will explain in Sec.~\ref{sec-non-uniform}.
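For orientation, a small numerical check of these expressions (a sketch, not our analysis code): for any $f_1<0$ we have $0<w<1$, and the correction term in the second line of the free energy is positive, so the gapped saddle always lowers $\beta F / N^2$ relative to $\beta f_0$:
\begin{verbatim}
import numpy as np

def w_of_f1(f1):             # Eq. (w)
    return np.sqrt(-f1 / (1.0 - f1))

for f1 in (-0.1, -0.5, -2.0):
    w = w_of_f1(f1)
    corr = 0.5 * (w / (1.0 - w) + np.log(1.0 - w))
    print(f1, w, corr)       # corr > 0 for all f1 < 0
\end{verbatim}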
Correspondingly, the Polyakov loop \eqref{polyakov} at large $N$ is
computed as \cite{AlvarezGaume:2005fv}
\begin{empheq}[left={u_1=\empheqlbrace}]{alignat=2}
& 0 &\qquad &\text{$f_1>0$,}
\label{u1-low}
\\
& \frac{1+w}{2} & &\text{$f_1<0$} .
\label{u1-high}
\end{empheq}
Here, we have taken the gauge mentioned in footnote \ref{ftnt-gauge}.
Similarly, for $u_n$ ($n \ge 2$), we obtain \cite{Rossi:1996hs, Okuyama:2017pil}
\begin{empheq}[left={u_n=\empheqlbrace}]{alignat=2}
& 0 &\qquad &\text{$f_1>0$,}
\label{un-low}
\\
& \frac{w^2}{n-1} P^{(1,2)}_{n-2}(2w-1) & &\text{$f_1<0$},
\label{un-high}
\end{empheq}
where $P^{(1,2)}_{n-2}(z)$ denotes the Jacobi polynomial.
These results show that, when $f_1>0$, $u_n = 0$ for all $n$ and the density function $\rho(\alpha)$ becomes uniform through Eq.~\eqref{rho}.
Hence the system is confined.
On the other hand, when $f_1<0$, the gapped solution satisfies $u_n \neq 0$ and the system is deconfined.
Note that, when $f_1<0$, Eq.~\eqref{w} indicates $w>0$, and thus $u_1>1/2$.
Therefore, $u_1$ is discontinuous at $f_1=0$.
In Sec.~\ref{sec-non-uniform}, we will see that non-uniform solutions appear at $f_1=0$ and they fill this discontinuity.
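These expressions are easy to check numerically. A short sketch using SciPy's Jacobi polynomials verifies $u_2=w^2$ (since $P^{(1,2)}_0=1$) and the limit $u_n \to 1$ as $w \to 1$, where the eigenvalue distribution collapses to a delta function:
\begin{verbatim}
import numpy as np
from scipy.special import eval_jacobi

def u_n_gapped(n, w):
    # u_n = w^2/(n-1) * P^{(1,2)}_{n-2}(2w-1) for n >= 2
    return w**2 / (n - 1) * eval_jacobi(n - 2, 1, 2, 2 * w - 1)

w = 0.7
print(u_n_gapped(2, w), w**2)                    # identical
print([u_n_gapped(n, 1.0) for n in (2, 3, 6)])   # all equal to 1
\end{verbatim}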
We have so far evaluated the $dU$ integral in the partition function \eqref{effective-action-gapped}.
Now we fix the trial masses $m_X$ and $m_Z$ so that the dependence of the free energy $F(T,\mu, m_X,m_Z)$ on these masses is minimized.
Hence, we solve the equations
\begin{align}
\partial_{m_X}F(T,\mu, m_X,m_Z)=0, \quad \partial_{m_Z}F(T,\mu, m_X,m_Z)=0.
\label{minimum-F}
\end{align}
In the subsequent subsections, we evaluate these equations in each of the $f_1>0$, $f_1<0$ and $f_1=0$ cases.
This determines the free energy of each solution, which tells us their stability and the phase structure drawn in Fig.~\ref{fig-phase}.
\subsubsection{Uniform solution}
\label{sec-confinement}
In this subsection, we evaluate Eq.~\eqref{minimum-F} when $f_1>0$, which corresponds to the confinement phase.
In this case, the free energy becomes $F=N^2f_0$ through Eq.~\eqref{F-low}, and we solve $\partial_{m_X}f_0=\partial_{m_Z}f_0=0 $, where $f_0$ is defined in Eq.~\eqref{f0}.
Then, we obtain
\begin{align}
m_X=m_Z=m_0:=(D-1)^{1/3}\lambda^{1/3}.
\label{m-2-loop}
\end{align}
By using this result, the free energy is given by
\begin{align}
F=N^2 f_0 (m_0,m_0)=\frac{3D}{8} (D-1)^{1/3} N^2 \lambda^{1/3}.
\label{F-uni}
\end{align}
Hence, the free energy of this solution depends on neither the temperature nor the chemical potential.
(For example, for $D=9$ and $\lambda=1$, Eqs.~\eqref{m-2-loop} and \eqref{F-uni} give $m_0=2$ and $F/N^2=27/4$.)
See Figs.~\ref{fig-D=9} and \ref{fig-F} for the $D=9$ case.
Note that, when we derived the effective action \eqref{effective-action-gapped}, we assumed $\mu < m_Z $.
Hence, the uniform solution is not reliable in the region $ \mu \ge m_0 $.
For example, in the $D=9$ with $\tilde{D}=3$ case shown in Fig.~\ref{fig-phase}, such a region appears in the uniform phase, and the fate of the system there is not obvious.
It is likely that the system is unstable in this region and that no stable phase exists.
\subsubsection{Gapped solution}
\label{sec-gapped}
We discuss the $f_1<0$ case.
Here, we cannot solve Eq.~\eqref{minimum-F} analytically, and we evaluate it numerically.
For a fixed $T$ and $\mu$, we may find several solutions of $m_X$ and $m_Z$.
See Fig.~\ref{fig-m} for $D=9$ with $\tilde{D}=1$ and $\tilde{D}=3$.
However, the condition $m_Z > \mu $ was assumed when we derived the effective action \eqref{effective-action-gapped}, and the solutions that do not satisfy it are not reliable.
As far as we have investigated, only the solutions connected to the non-uniform solution at $\mu=\mu_{\rm GWW}(T)$, which we will discuss in the next subsection, are reliable.
As we increase $\mu$ at fixed $T$, even these reliable solutions reach the point $m_Z = \mu $, which we call $\mu=\mu_\text{unstable}$, and the fate of the system beyond it is unclear.
These regions are presented as ``unknown" in the phase diagrams in Fig.~\ref{fig-phase}.
We presume that the systems are unstable in these regions.
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[scale=0.8]{M-m-mu-d9-1.eps}\\
$D=9$ with $\tilde{D}=1$ at $T=0.80$
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[scale=0.8]{M-m-mu-d9-3.eps}\\
$D=9$ with $\tilde{D}=3$ at $T=0.85$
\end{center}
\end{minipage}
\end{tabular}
\caption{
Trial masses $m_X$ and $m_Z$ in the minimum sensitivity analysis.
These are obtained by solving Eq.~\eqref{minimum-F}.
The solid lines describe $m_Z$ and the dashed lines describe $m_X$.
The blue, green and red colors represent the uniform, non-uniform and gapped solutions, respectively.
In the $D=9$ with $\tilde{D}=1$ case, the second gapped solutions represented by the purple lines exist for each $\mu$, while only the single gapped solution exists in the $\tilde{D}=3$ case.
Our analysis is valid as long as $m_Z > \mu$, and solutions with $m_Z$ below the black dotted line denoting $ m_Z = \mu$ are not reliable.
The borders of the valid solutions are marked as $\mu_\text{unstable}$.
Hence, the second gapped solution at $\tilde{D}=1$ (the purple solution) is never reliable.
}
\label{fig-m}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[scale=0.8]{F-mu-wide.eps}\\
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[scale=0.8]{F-mu.eps}\\
\end{center}
\end{minipage}
\end{tabular}
\caption{Free energy in the $D=9$ with $\tilde{D}=1$ case at $T=0.80$.
The right plot is an enlarged view of the left plot near the transition point $\mu=\mu_*$.
The blue, green and red curves represent the uniform, non-uniform and gapped solutions, respectively.
The system shows the first-order phase transition between the uniform and gapped phase at $\mu_*$.
Our analysis for the gapped solution is reliable in the region $\mu < \mu_\text{unstable}$, where the condition $\mu < m_Z$ is satisfied.
The purple curve is for the gapped solution shown in Fig.~\ref{fig-m}, although it appears in the unreliable region.
}
\label{fig-F}
\end{center}
\end{figure}
\subsubsection{Non-uniform solution}
\label{sec-non-uniform}
We have seen that the uniform solutions and the gapped solutions appear when $f_1>0$ and $f_1<0$, respectively.
Thus, a transition would occur at $f_1=0$, and we investigate this case in detail.
For this purpose, it is useful to rewrite the partition function \eqref{effective-action-gapped} as
\begin{align}
\label{effective-action-un}
Z= & \int \prod_{n=1}^{\infty} du_n ~\exp(-S_{\text{eff}}\left(\{ u_n \},m_X,m_Z\right)), \nonumber \\
& S_{\text{eff}}\left(\{ u_n \},m_X,m_Z\right)= N^2 \left(\beta f_0 + f_1 |u_1|^2 +\sum_{n=2}^{\infty} \frac{1}{n} |u_n|^2 \right) .
\end{align}
Here, we have changed the integral variables from $\{ \alpha_k \}$ to $\{ u_n \}$.
(Such a change is possible in the large-$N$ limit \cite{Aharony:2003sx}.)
Note that $\{ u_n \}$ are not completely independent, since they have to satisfy the condition that the density function $\rho$ \eqref{rho} is non-negative.
The action \eqref{effective-action-un} is quadratic in $\{ u_n \}$, and their coefficients are all positive for $n \ge 2$.
Thus, $u_n=0$ ($n \ge 2$) is a stable solution.
Hereafter, we assume $u_n=0$ ($n \ge 2$) and focus on $u_1$.
By differentiating the action \eqref{effective-action-un} with respect to $u_1$, we obtain the equation
\begin{align}
f_1(T,\mu, m_X,m_Z) u_1^*=0 .
\end{align}
Obviously, one solution is given by $u_1^*=0$, which represents the uniform solution studied in Sec.~\ref{sec-confinement}.
The other possible solution is
\begin{align}
f_1(T,\mu, m_X,m_Z) =0 .
\label{EOM-u1}
\end{align}
This is what we are interested in.
Besides, we have the conditions \eqref{minimum-F} that the dependence of the free energy on $m_X$ and $m_Z$ is minimized,
\begin{align}
& \beta \partial_{m_X} f_0 (m_X,m_Z)+ \partial_{m_X} f_1(T,\mu, m_X,m_Z) |u_1|^2 =0,
\label{EOM-mx}
\\
& \beta \partial_{m_Z} f_0 (m_X,m_Z)+ \partial_{m_Z} f_1(T,\mu, m_X,m_Z) |u_1|^2 =0.
\label{EOM-mz}
\end{align}
By combining these equations and Eq.~\eqref{EOM-u1}, we obtain three equations,
\begin{align}
u_1=\sqrt{-\frac{\beta \partial_{m_X} f_0}{\partial_{m_X} f_1}}, \qquad f_1 =0,
\qquad \frac{ \partial_{m_X} f_0}{\partial_{m_X} f_1}=\frac{ \partial_{m_Z} f_0}{\partial_{m_Z} f_1}.
\label{u_1-NU}
\end{align}
The last two equations determine $m_X$ and $m_Z$, and the first one fixes $u_1$.
These equations can be solved numerically, and the results for $D=9$ with $\tilde{D}=1$ and $\tilde{D}=3$
are shown in Fig.~\ref{fig-m}.
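For concreteness, the following is a minimal Python sketch of this numerical procedure. It assumes $f_0(m_X,m_Z)$ and $f_1(T,\mu,m_X,m_Z)$ are supplied as callables (their explicit forms were derived earlier in the text and are not reproduced here); the finite-difference derivatives and the choice of root finder are our illustrative assumptions, not part of the original analysis.
\begin{verbatim}
# A minimal sketch of solving Eq. (u_1-NU), assuming f0(mX, mZ) and
# f1(T, mu, mX, mZ) are supplied as callables.
import numpy as np
from scipy.optimize import fsolve

def partial_diff(g, args, i, eps=1e-6):
    """Central finite difference of g with respect to its i-th argument."""
    a_plus, a_minus = list(args), list(args)
    a_plus[i] += eps
    a_minus[i] -= eps
    return (g(*a_plus) - g(*a_minus)) / (2 * eps)

def solve_non_uniform(f0, f1, T, mu, m_guess):
    """Solve f1 = 0 and the ratio condition for (m_X, m_Z), then fix u_1."""
    beta = 1.0 / T
    f1m = lambda mX, mZ: f1(T, mu, mX, mZ)  # f1 at fixed (T, mu)

    def equations(m):
        mX, mZ = m
        # Cross-multiplied form of d_{mX}f0/d_{mX}f1 = d_{mZ}f0/d_{mZ}f1.
        ratio = (partial_diff(f0, (mX, mZ), 0) * partial_diff(f1m, (mX, mZ), 1)
                 - partial_diff(f0, (mX, mZ), 1) * partial_diff(f1m, (mX, mZ), 0))
        return [f1m(mX, mZ), ratio]

    mX, mZ = fsolve(equations, m_guess)
    u1_sq = -beta * partial_diff(f0, (mX, mZ), 0) / partial_diff(f1m, (mX, mZ), 0)
    return mX, mZ, np.sqrt(max(u1_sq, 0.0))
\end{verbatim}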
Then, the eigenvalue density \eqref{rho} becomes
\begin{align}
\rho(\alpha)=\frac{1}{2\pi} \left(1+ 2 u_1 \cos \alpha \right) .
\label{rho-non-uniform}
\end{align}
This solution represents a non-uniform solution plotted in Fig.~\ref{fig-rho}, if $u_1 \neq 0$.
However, if $u_1 > 1/2$, the eigenvalue density \eqref{rho-non-uniform} becomes negative around $\alpha=\pi$.
Thus, the non-uniform solution is allowed only for $|u_1|\le 1/2$.
Recall that $u_1= 1/2$ is the lower bound of the gapped solution \eqref{u1-high}, and a transition to the gapped solution from the non-uniform one occurs at $u_1= 1/2$.
This transition is called a Gross-Witten-Wadia (GWW) type transition \cite{Gross:1980he, Wadia:2012fr}, and we define this transition point as $\mu=\mu_{\rm GWW}(T)$.
See Figs.~\ref{fig-D=9} and \ref{fig-F}.
On the other hand, the non-uniform solution also merges to the uniform one at $u_1 = 0$.
Through Eqs.~\eqref{m-2-loop} and \eqref{EOM-u1}, it occurs when $\mu$ and $T$ satisfy
\begin{align}
f_1(T,\mu, m_0,m_0)=0 .
\label{eq-T_c}
\end{align}
We denote this solution by $\mu_{c}(T)$ and call it the critical point.
Therefore, the non-uniform solutions exist in the region $\mu_{\rm GWW}(T) \le \mu \le \mu_{c}(T)$.
See Figs.~\ref{fig-D=9} and \ref{fig-F} again.
Note that $f_1$ may be negative beyond the critical point $\mu=\mu_{c}(T)$.
Since $f_1$ is the coefficient of $|u_1|^2$ in the effective action \eqref{effective-action-un}, the uniform solution becomes unstable in this case.
\subsubsection{Free energy and phase structure}
\label{sec-free-energy}
So far, we have obtained the three solutions: uniform, non-uniform and gapped.
To see the phase structure, we evaluate their free energies.
For the uniform solution, we have derived the free energy in Eq.~\eqref{F-uni}.
For the non-uniform and gapped solutions, we obtain their free energies by substituting the numerical solutions $m_X$ and $m_Z$ into Eqs.~\eqref{effective-action-un} and \eqref{F-high}.
These results are summarized as
\begin{empheq}[left={ F/N^2=\empheqlbrace}]{alignat=3}
& \frac{3D}{8} (D-1)^{1/3} &\qquad &\text{(uniform solution)},
\nonumber
\\
& f_0 (m_X,m_Z) &\qquad &\text{(non-uniform solution)},
\label{F-summary}
\\
& f_0 (m_X,m_Z)- \frac{1}{2 \beta} \left( \frac{w}{1-w} + \log (1-w) \right) &\qquad &\text{(gapped solution)} .
\nonumber
\end{empheq}
Note that we have used $f_1=0$ and $u_n=0$ ($n \ge 2$) for the non-uniform solution.
The free energy for the $D=9$ with $\tilde{D}=1$ case at $T=0.80$ is plotted in Fig.~\ref{fig-F}.
This figure shows that a first-order transition occurs in this system.
The transition point is where the free energies of the uniform and gapped solutions coincide, and we denote it by $\mu_{*}(T)$.\footnote{We also use $T_*(\mu)$, $T_\text{GWW}(\mu)$ and $T_c(\mu)$ instead of $\mu_*(T)$, $\mu_\text{GWW}(T)$ and $\mu_c(T)$.}
Fig.~\ref{fig-F} also shows that the free energy of the non-uniform solution is always higher than those of the uniform solution and the gapped one.
Actually, the free energy of the non-uniform solution is a concave function, and the specific heat is negative.
These results imply that the non-uniform solution is always unstable in the grand canonical ensemble.
One feature of this phase transition is that neither the non-uniform solution nor the gapped one exists in the region $\mu < \mu_{\rm GWW}(T)$.\footnote{We have seen that several gapped solutions may exist in our model, and the non-uniform solution is connected to one of the gapped solutions at the GWW point $ \mu_{\rm GWW}(T)$.
Thus, other gapped solutions might exist even in the region $\mu < \mu_{\rm GWW}(T)$.
}
This is due to our approximated effective action \eqref{effective-action-un}, and, if the action involves higher-order terms such as $u_n u_m u_{-n-m}$, the location of the GWW transition point $\mu_{\rm GWW}(T)$ would change \cite{AlvarezGaume:2005fv}.
By combining all the results, the whole phase diagrams are obtained as drawn in Fig.~\ref{fig-phase}.
\subsection{Calculating observables}
\label{sec-observalbes}
We have investigated the phase structure of the model.
Now, we explain the derivation of the observables shown in Figs.~\ref{u1_D09} - \ref{r2_D09} in Sec.~\ref{sec-result-observables}.
The results in this section are for the large-$N$ limit, unless otherwise specified.
\subsubsection{Polyakov loops}
\label{subsec-Polyakov}
We evaluate the Polyakov loop operators \eqref{polyakov}, which are the order parameters of the confinement/deconfinement transition.
In the uniform solution, $u_n=0$ is obtained through Eqs.~\eqref{u1-low} and \eqref{un-low} at large $N$.
We can also derive the leading $1/N$ correction \eqref{un-finite-N} as discussed in Appendix \ref{app-un}.
The result is given by
\begin{align}
\label{un-finite-N-main}
u_n = \frac{1}{2N} \sqrt{\frac{\pi}{f_n(\beta,\mu, m_0,m_0)}}
+\textrm{O} \left(\frac{1}{N^3} \right), \qquad (\text{uniform solution}).
\end{align}
(This result does not work near the critical point $f_1=0$.)
For the non-uniform solution, we have obtained
\begin{align}
u_1= & \sqrt{-\frac{\beta \partial_{m_X} f_0}{\partial_{m_X} f_1}}, \quad u_n= 0 \quad (n \ge 2), \qquad (\text{non-uniform solution}),
\end{align}
at large $N$ as argued in Sec.~\ref{sec-non-uniform}.
Here, $m_X$ and $m_Z$ are the solutions of Eq.~\eqref{u_1-NU}.
For the gapped solution, the following solution has been derived
\begin{align}
u_1= & \frac{1+w}{2}, \quad u_n= \frac{w^2}{n-1} P^{(1,2)}_{n-2}(2w-1) \quad (n \ge 2) , \qquad (\text{gapped solution}),
\label{un-gapped-main}
\end{align}
in Eqs.~\eqref{u1-high} and \eqref{un-high}.
Here $w$ has been defined in Eq.~\eqref{w}, and $m_X$ and $m_Z$ are the solutions of Eq.~\eqref{minimum-F}.
The results for $u_1$ are plotted in Figs.~\ref{fig-D=9} and \ref{u1_D09}, and $u_2$ is shown in Fig.~\ref{u2_D09}.
As we have seen in Sec.~\ref{sec-result-Polyakov}, they are consistent with the CLM.
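As an aside, the gapped-solution moments \eqref{un-gapped-main} are straightforward to evaluate numerically once $w$ is known; a minimal Python sketch, using the Jacobi polynomial routine available in SciPy, reads:
\begin{verbatim}
# A sketch evaluating the moments u_n of Eq. (un-gapped-main) for a given w;
# scipy.special provides the Jacobi polynomials P^{(a,b)}_n.
from scipy.special import eval_jacobi

def u_n_gapped(n, w):
    if n == 1:
        return (1 + w) / 2
    return w**2 / (n - 1) * eval_jacobi(n - 2, 1, 2, 2 * w - 1)

print(u_n_gapped(1, 0.3), u_n_gapped(2, 0.3))  # u_1 and u_2 at w = 0.3
\end{verbatim}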
\subsubsection{Angular momentum}
\label{subsec-angular}
We have derived the free energy \eqref{F-summary} in Sec.~\ref{sec-free-energy}.
By using this result, we can read off the angular momentum via
\begin{align}
J=- \frac{1}{\tilde{D}} \frac{\partial F}{\partial \mu} .
\label{eq-J}
\end{align}
(As we mentioned in Sec.~\ref{sec-result-J}, we calculate the angular momentum for the single plane.
Hence, we have divided $-(\partial F/ \partial \mu)$ by $\tilde{D}$.)
The results are compared with the CLM as shown in Fig.~\ref{JI_D09}.
Interestingly, $J$ decreases as $\mu$ increases in the non-uniform solution.
This property would be related to the thermodynamic instability of the non-uniform solution.
Besides, $J=0$ in the uniform phase, because the free energy \eqref{F-summary} does not depend on $\mu$ there.\footnote{$J=0$ in the large-$N$ limit means that $J$ is not an O$(N^2)$ quantity. Thus, $J$ may be O$(1)$.}
Thus, the uniform phase does not rotate at large $N$, although the chemical potential is finite.
(Similarly, the entropy in the uniform phase is zero even at finite temperature, meaning that thermal excitations are highly suppressed in the uniform phase, which indicates confinement \cite{Sundborg:1999ue, Aharony:2003sx}.)
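In practice, since the free energies of the non-uniform and gapped solutions are obtained numerically on a grid of $\mu$, the derivative in Eq.~\eqref{eq-J} can be evaluated by finite differences; a minimal sketch (with placeholder data arrays) reads:
\begin{verbatim}
# A sketch of reading off J from free-energy data via Eq. (eq-J); mu_grid and
# F_vals are placeholders for the numerically obtained F(mu) at fixed T.
import numpy as np

def angular_momentum(mu_grid, F_vals, D_tilde):
    # Central finite differences for dF/dmu, divided by the number of planes.
    return -np.gradient(F_vals, mu_grid) / D_tilde
\end{verbatim}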
\subsubsection{ Expectation values of scalars }
\label{subsec-scalars}
We evaluate the expectation values of the scalars $R_Z^2$ and $R^2_X$ defined in Eqs.~\eqref{R_Z-result} and \eqref{R_X-result}.
Through the one-loop computation \eqref{2-looop-2pt}, we obtain
\begin{align}
R_Z^2= & \frac{1}{\tilde{D}} \frac{g^2}{N} \sum_{I=1}^{\tilde{D}} \left\langle \Tr Z^{I\dagger} Z^{I } \right\rangle
= \lambda \left\{ \frac{1 }{2m_Z}
+ \sum_{n=1}^{\infty}\frac{ 1 }{2m_Z} z^{n} (q^{n}+q^{-n}) |u_{n}|^{2} \right\}
,
\label{R_Z}
\\
R^2_X= & \frac{1}{D-2\tilde{D}} \frac{g^2}{N} \sum_{I=2 \tilde{D}+1}^{D} \left\langle\Tr X^I X^{I } \right\rangle
= \lambda \left\{ \frac{1}{2 m_X}
+ \sum_{n=1}^{\infty}\frac{1 }{ m_X} x^{n} |u_{n}|^{2} \right\} .
\label{R_X}
\end{align}
To compute them, the suitable solutions for $m_X$, $m_Z$ and $u_n$ need to be substituted.
The results are plotted in Fig.~\ref{r2_D09}.
\section{Imaginary chemical potentials and relation to rotating YM theory in four dimensions}
\label{sec-Im}
Rotating quark-gluon plasma (QGP) is being actively studied, motivated by relativistic heavy-ion colliders \cite{STAR:2017ckg} and neutron stars.
As a related problem, rotating pure YM theories are also being investigated.
Particularly, one important question is how the rotation affects the confinement/deconfinement transition temperatures.
(Rotating media are non-uniform in space, so the transition temperature on the rotation axis is mainly studied.)
However, these theories are strongly coupled and standard perturbative computations do not work.
In addition, the sign problem prevents lattice MC computations.
To avoid these issues, the imaginary angular velocity $\mu_{\rm Im} \in \mathbf{R} $ is considered \cite{Yamamoto:2013zwa, Braguta:2021jgn, Chen:2022smf, Chernodub:2022wsw, Chernodub:2022veq}.
(This imaginary angular velocity $\mu_{\rm Im} $ corresponds to the angular momentum chemical potential $\mu$ in our model \eqref{action-BFSS-J} as $\mu=i \mu_{\rm Im} $ through the dimensional reduction, and we call $\mu_{\rm Im} $ the imaginary chemical potential hereafter.)
The imaginary chemical potential does not cause the sign problem, and MC computations work.
Once we obtain results for the imaginary chemical potential, we may reach results for the real chemical potential through analytic continuation.
Such an analytic continuation would work as long as the chemical potential is sufficiently small.
In this section, we review some results in pure YM theories with the imaginary chemical potential.
Then, we compute the corresponding quantities in the matrix model \eqref{action-BFSS-J} through the minimum sensitivity, and compare them with the YM theories.
We will see some similarity between the matrix model \eqref{action-BFSS-J} and the YM theories, and the matrix model provides some insights into the YM theories.
\subsection{Stable confinement phase at high temperatures}
Recently, one remarkable result on the SU(3) pure YM theory was reported in Ref.~\cite{Chen:2022smf}.
The authors investigated the high temperature regime ($T \to \infty$), where the perturbative computation is reliable, and found that the system is in a confinement phase for $ \pi/2 \le \beta \mu_{\rm Im} \le 3\pi /2 $ and in a deconfinement phase for $ 0 \le \beta \mu_{\rm Im} \le \pi/2$ and $ 3\pi /2 \le \beta \mu_{\rm Im} \le 2\pi $.
Hence, the system is confined, although the temperature is high.
See Fig.~4 in Ref.~\cite{Chen:2022smf}.
(Note that, when the imaginary chemical potential is turned on, the Boltzmann factor is multiplied by $\exp(i \beta \mu_{\rm Im} J)$, and the thermal partition function is periodic with respect to $\beta \mu_{\rm Im}$: $Z(\beta \mu_{\rm Im})=Z(\beta \mu_{\rm Im}+2\pi)$.)
Then, one important question is whether this high temperature confinement phase in $ \pi/2 \le \beta \mu_{\rm Im} \le 3\pi /2 $ continues to the low temperature confinement phase at $ \beta \mu_{\rm Im} =0 $.
To answer this question, we need to study the strong-coupling regime of the YM theory, which has not yet been understood.
This result motivates us to study the imaginary chemical potential in our matrix model \eqref{action-BFSS-J}.
Particularly, exploring the high temperature regime and investigating the fate of the confinement phase at $T \to \infty$ would be valuable.
The analysis through the minimum sensitivity is almost straightforward.
We simply need to repeat the same computations done in Sec.~\ref{sec-minimum} by using the effective action \eqref{effective-action-gapped} with $\mu=i \mu_{\rm Im} $.
To investigate the phase structure at high temperatures, we evaluate the critical point \eqref{eq-T_c} in the limit $T \to \infty$.
Then, through Eq.~\eqref{fn}, we obtain
\begin{align}
f_1(T,\mu= i \mu_{\rm Im}, m_0,m_0) \to 1- D +2 \tilde{D} \left\{1- \cos \left( \beta \mu_{\rm Im}\right) \right\}, \qquad (T \to \infty),
\end{align}
and the critical point is derived through the condition $f_1=0$.
Thus, if the relation
\begin{align}
1-D+4 \tilde{D} \ge 0
\label{cond-pi}
\end{align}
is satisfied, the phase transition at $T \to \infty$ occurs at
\begin{align}
\beta \mu_{{\rm Im}0} :=\arccos\left( \frac{1-D+2 \tilde{D}}{2 \tilde{D}} \right),
\end{align}
and the system is confined in $\beta \mu_{{\rm Im}0} \le \beta \mu_{{\rm Im}} \le 2\pi - \beta \mu_{{\rm Im}0} $.
On the other hand, if Eq.~\eqref{cond-pi} is not satisfied, the transition at $T \to \infty$ does not occur and the high temperature confinement phase does not exist.
Thus, the existence of the high temperature confinement phase depends on $D$ and $\tilde{D}$ via Eq.~\eqref{cond-pi}.\footnote{
The condition \eqref{cond-pi} can be rewritten as $ D-2 \tilde{D} \le 2 \tilde{D}+1 $.
Here, $(D-2 \tilde{D})$ is the number of scalars $X^I$ in the non-rotating directions, and $(1+2 \tilde{D}) $ is the number of scalars in the rotational directions plus one, the latter being the contribution of the gauge fixing.
Ref.~\cite{Chen:2022smf} argued that the gauge fields $A^I$ for the rotational directions at high temperature behave as ghost modes.
Therefore, the condition \eqref{cond-pi} states that, if the number of the ghost modes is greater than that of the ordinary scalars at high temperature, the system is confined.
}
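These statements are easy to check numerically; the following short sketch evaluates the condition \eqref{cond-pi} and the resulting confinement window for given $D$ and $\tilde{D}$:
\begin{verbatim}
# A sketch checking condition (cond-pi) and the high-temperature confinement
# window in beta*mu_Im for given D and D_tilde.
import numpy as np

def high_T_confinement_window(D, D_tilde):
    if 1 - D + 4 * D_tilde < 0:        # condition (cond-pi) violated
        return None                    # no high-T confinement phase
    b_mu0 = np.arccos((1 - D + 2 * D_tilde) / (2 * D_tilde))
    # Confined for b_mu0 <= beta*mu_Im <= 2*pi - b_mu0.
    return b_mu0, 2 * np.pi - b_mu0

print(high_T_confinement_window(3, 1))   # (pi/2, 3*pi/2)
print(high_T_confinement_window(9, 1))   # None
\end{verbatim}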
Interestingly, in the $D=3$ and $\tilde{D}=1$ case that is the dimensional reduction of the four-dimensional YM studied in Ref.~\cite{Chen:2022smf}, the system is confined in $\pi/2 \le \beta \mu_{{\rm Im}} \le 3 \pi/2 $.
This is the same result as that of Ref.~\cite{Chen:2022smf}, although SU(3) is taken in Ref.~\cite{Chen:2022smf} and we have taken the large-$N$ limit.
Therefore, our model might capture the phase transition of the original model.
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[scale=0.8]{phase-D3-Im.eps}\\
$D=3$, $\tilde{D}=1$
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[scale=0.8]{phase-D9-Im.eps}\\
$D=9$, $\tilde{D}=1$
\end{center}
\end{minipage}
\end{tabular}
\caption{ $(\beta \mu_{\rm Im})$-$T$ phase diagrams of the matrix model \eqref{action-BFSS-J} with the imaginary chemical potential.
The black solid line represents the critical point $T_c(\beta \mu_{\rm Im})$. The black dashed line represents the GWW point $T_{\textrm{GWW}} ( \beta \mu_{\rm Im})$. The red solid line represents $T_{*}(\beta \mu_{\rm Im} )$, where the first-order phase transition occurs.
In the $D=3$ and $\tilde{D}=1$ case, the critical point approaches $\beta \mu_{\rm Im}=\pi/2$ and $3\pi/2$ asymptotically as $T \to \infty$.
}
\label{fig-Im}
\end{center}
\end{figure}
We also derive the whole phase structures in the $D=3$ with $\tilde{D}=1$ case and the $D=9$ with $\tilde{D}=1$ case as drawn in Fig.~\ref{fig-Im}.
In the $D=9$ with $\tilde{D}=1$ case, the condition \eqref{cond-pi} is not satisfied, and the high temperature confinement phase does not appear.
In the $D=3$ with $\tilde{D}=1$ case, we observe that the high temperature confinement phase continues to the conventional confinement phase at $ \mu_{\rm Im} = 0 $.
This suggests that the confinement phase may be continuous in the four-dimensional YM theory, too.
\subsection{Analytic continuation of the chemical potential}
Refs.~\cite{Chen:2022smf, Chernodub:2022veq} argued that the imaginary chemical potential increases the transition temperature in the pure YM theory.\footnote{ Through a lattice computation, Ref.~\cite{Braguta:2021jgn} showed the opposite prediction that the imaginary chemical potential makes the transition temperature lower. Thus, the influence of the imaginary chemical potential is still under debate.}
Through the analytic continuation $\mu \to i \mu_{\rm Im}$, this result implies that the transition temperature decreases in the presence of a real chemical potential, at least for small $\mu$.
However, it is unclear whether such an analytic continuation provides quantitatively good results for a finite chemical potential.
Thus, it may be valuable to test the analytic continuation in our matrix model, since the transition temperatures for both real and imaginary chemical potentials can be computed.
In Fig.~\ref{fig-ana-con}, we explicitly compare the critical temperature $T_c$ against the real chemical potential $\mu$ computed through Eq.~\eqref{eq-T_c} and that of the analytic continuation of the imaginary chemical potential derived in Fig.~\ref{fig-Im}.\footnote{To obtain the analytic continuation results, we plot $T_c$ against $\mu_{\rm Im}$ by using the data employed in Fig.~\ref{fig-Im}, and fit the obtained curve by a polynomial $T_c = \sum_n c_n (\mu_{\rm Im}^2)^n $, where $c_n$ are the fitting parameters. Then, we perform the analytic continuation and obtain $T_c = \sum_n (-1)^nc_n \mu^{2n} $. This is the red curve plotted in Fig.~\ref{fig-ana-con}.}
In both the $D=3$ and $D=9$ cases, we observe good agreement for $\mu \lesssim 0.5 \times \mu_c|_{T=0} $. This result suggests that the analytic continuation of the imaginary chemical potential may work in similar ranges in the four-dimensional YM theories and QCD, too.
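To make the fitting procedure in the footnote concrete, the following Python sketch performs it on placeholder data; the arrays \texttt{mu\_im} and \texttt{Tc\_im} are toy stand-ins for the data behind Fig.~\ref{fig-Im}, not our actual measurements.
\begin{verbatim}
# A sketch of the analytic continuation: fit T_c as a polynomial in mu_Im^2,
# then replace mu_Im^2 -> -mu^2. The data below are placeholders.
import numpy as np

mu_im = np.linspace(0.0, 1.0, 21)
Tc_im = 0.7 + 0.1 * mu_im**2 - 0.02 * mu_im**4   # toy stand-in curve

deg = 2  # fit T_c = sum_n c_n (mu_Im^2)^n
c = np.polynomial.polynomial.polyfit(mu_im**2, Tc_im, deg)

def Tc_real(mu):
    # Analytic continuation: T_c = sum_n (-1)^n c_n mu^(2n).
    return sum(((-1) ** n) * c[n] * mu ** (2 * n) for n in range(deg + 1))

print(Tc_real(0.5))  # continued estimate at real chemical potential mu = 0.5
\end{verbatim}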
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[scale=0.8]{Re-Im-D3.eps}\\
$D=3$, $\tilde{D}=1$
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[scale=0.8]{Re-Im-D9.eps}\\
$D=9$, $\tilde{D}=1$
\end{center}
\end{minipage}
\end{tabular}
\caption{Critical temperatures $T_c$ obtained by the analytic continuation of the imaginary chemical potential.
$T_c$ against the {\it real} chemical potential $\mu$ are plotted.
The black dashed curves are $T_c$ directly computed through Eq.~\eqref{eq-T_c} by using the real chemical potential.
The red curves represent the results obtained through the analytic continuation of the imaginary chemical potential data derived in Fig.~\ref{fig-Im}.
In both cases, they agree for $\mu \lesssim 0.5 \times \mu_c|_{T=0} $, and the analytic continuation works there.
}
\label{fig-ana-con}
\end{center}
\end{figure}
\section{Discussions}
\label{Sec_discussion}
In this article, we have studied the matrix model \eqref{action-BFSS} at finite angular momentum chemical potential by using the CLM and the minimum sensitivity, and found quantitative agreement. The action \eqref{action-BFSS-J}, with the chemical potential for the angular momentum added, suffers from the sign problem. This prevents us from using conventional Monte Carlo methods, as we cannot regard $e^{-S}$ for a complex action $S$ as a probability, and it leads us to study the model \eqref{action-BFSS-J} numerically using the CLM. The CLM turns out to work successfully in a region of $\mu$ wide enough to elicit the behavior of the confinement and deconfinement phases.
As far as the authors know, this is the first result in which a rotating quantum many-body system at thermal equilibrium is analyzed through a first-principles computation.
Such rotating quantum systems are important in various areas, including condensed matter and high-energy physics, and our result encourages the application of the CLM to these systems, too.
In Sec.~\ref{sec-Im}, we have compared our matrix model and rotating pure YM theories in four dimensions under the presence of the imaginary chemical potentials.
We found a stable confinement phase at high temperature in the matrix model, akin to the YM theories as argued in Ref.~\cite{Chen:2022smf}.
We also found an increasing transition temperature, consistent with Refs.~\cite{Chen:2022smf, Chernodub:2022veq}.
Therefore, the nature of the matrix model \eqref{action-BFSS} is quite similar to that of the YM theories.
Since we can investigate the model \eqref{action-BFSS} with the real chemical potential, it may provide insights into the YM theories.
Besides, if we apply the CLM to the pure YM theories, it may shed more light on the properties of the YM theories.
\subsection{Relation to gravity}
\label{sec-BHs}
We have seen that the transition temperature decreases as the chemical potential increases in our model.
A similar result has been obtained in the ${\mathcal N}$=4 SYM on $S_\beta^1 \times S^3 $ \cite{Basu:2005pj, Yamada:2006rx}. (See also footnote \ref{ftnt-SYM}.)
This model at strong coupling would be described by dual AdS geometries through the AdS/CFT correspondence \cite{Maldacena:1997re}, and the gravity computation \cite{Gubser:1998jb, Chamblin:1999tk, Cvetic:1999ne, Hawking:1999dp, Gubser:2000mm} also indicates a similar phase structure. (See Fig.~1 of Ref.~\cite{Yamada:2006rx}.)
There, rotating black D3-brane solutions correspond to the rotating gapped solutions in the SYM theory.\footnote{
The D3-branes rotate on the transverse $S^5$ in ten dimensions.
Through the dimensional reduction on the $S^5$, the angular momenta become Kaluza-Klein charges, the rotating geometries reduce to charged black branes, and Refs.~\cite{Basu:2005pj, Yamada:2006rx} studied this situation.}
Therefore, a transition temperature that decreases under rotation is a common feature of these gauge theories and gravities.
In relation to the black holes, the non-uniform solution derived through the minimum sensitivity analysis is interesting.
As shown in Sec.~\ref{sec-free-energy}, this solution has a negative specific heat, akin to black hole solutions such as Schwarzschild black holes and small black holes in AdS \cite{Aharony:1999ti}.
Hence, the non-uniform solution may explain the origin of the negative specific heat of black holes through a microscopic description.
It would be valuable to pursue this question further.
Besides, the properties of the large chemical potential regions (the ``unknown" regions in Fig.~\ref{fig-phase}) in the model \eqref{action-BFSS-J} may be understood through the gravity.
We can compute the free energy of the black branes as a function of temperature and chemical potential by using the results of Ref.~\cite{Cvetic:1999ne}.
Then, we see a result similar to ours shown in Fig.~\ref{fig-F}:
two black brane solutions appear, they merge at a large chemical potential, and beyond this point there is no solution. (The two black brane solutions correspond to the two gapped solutions in the matrix model.)
Although our result in Fig.~\ref{fig-F} beyond $\mu=\mu_\text{unstable}$ is not reliable, the gravity analysis indeed predicts a similar result.
It is tempting to improve our approximation in the matrix model and verify the gravity prediction.
In addition, gravity systems with angular momentum have various exotic solutions such as black rings and black Saturn, and it might be possible to find the corresponding solutions in the matrix model, too.
\paragraph*{Acknowledgment.---}
We thank P.~Basu, M.~Fukuma, Y.~Hidaka, A.~Joseph, J.~Nishimura and A.~Tsuchiya for valuable discussions and comments.
The work of T.~M. is supported in part by Grant-in-Aid for Scientific Research C (No. 20K03946) from JSPS.
Numerical calculations were carried out using computational resources such as the KEKCC and NTUA het clusters.
\section{Introduction}
\label{sec:intro}
Federated Learning~(FL)~\cite{McMahan2017Communication, li2020federated} is a privacy-preserving distributed machine learning scheme in which workers jointly participate in the collaborative training of a centralized model by sharing model information~(parameters or updates) rather than their private datasets.
In recent years, FL has shown its potential to facilitate real-world applications, which falls broadly into two categories~\cite{kairouz2021advances}: the \textit{cross-silo} FL and the \textit{cross-device} FL.
The \textit{cross-silo} FL corresponds to a relatively small number of reliable workers, usually organizations such as healthcare facilities~\cite{jiang2022harmofl} and financial institutions~\cite{yang2019ffd}.
In contrast, for the \textit{cross-device} FL, the workers can be very numerous and unreliable, e.g., mobile devices~\cite{McMahan2017Communication}, IoT~\cite{nguyen2021federated} and autonomous driving cars~\cite{li2021privacy}, among others.
In this paper, we focus on \textit{cross-device} FL.
The privacy-preserving and communication-efficient properties of the \textit{cross-device} FL make it promising, but it also confronts practical challenges arising from data heterogeneity~(i.e., non-iid data distribution across workers) and partial participation~\cite{li2019convergence, karimireddy2020scaffold, yang2021achieving, gu2021fast}.
Specifically, the datasets held by real-world workers are generated locally according to their individual circumstances, resulting in the distribution of data on different workers being not identical.
Moreover, owing to the flexibility of worker participation in many scenarios~(e.g., IoT and mobile devices), workers can join or leave the FL system at will, thus making the set of active workers random and time-varying across communication rounds.
Note that we say a worker participates, or is active, at round $t$~(i.e., the index of the communication round) if it is able to complete the computation task and send back model information by the end of round $t$.
The above-mentioned challenges mainly bring catastrophic forgetting~(CF)~\cite{mccloskey1989catastrophic, shoham2019overcoming, xu2022acceleration} to FL.
In a typical FL process, represented by FedAvg~\cite{McMahan2017Communication}, a server updates the centralized model by iteratively aggregating the model information from workers that generally is trained over several steps locally before being sent to the server.
On the one hand, due to data heterogeneity, the model is updated on private data in local training, which is prone to overfit the current knowledge and forget the previous experience, thus leading to CF~\cite{huang2022learn}.
In other words, the updates of the local models are prone to drift and diverge increasingly from the update of the centralized model~\cite{karimireddy2020scaffold}.
This can seriously deteriorate the performance of the centralized model.
To ameliorate this issue, a variety of existing efforts regularize the objectives of the local models to align the centralized optimization objective~\cite{li2020federated1, karimireddy2020scaffold, Acar2021Federated, li2021model, kim2022multi}.
On the other hand, the server can only aggregate model information from active workers per communication round caused by partial participation.
In this case, many existing works directly discard~\cite{McMahan2017Communication, li2020federated, karimireddy2020scaffold, Acar2021Federated, Karimireddy2020Mime, yang2021achieving} or implicitly utilize~\cite{hsu2019measuring, reddi2020adaptive}, by means of momentum, the information provided by workers who have participated in the training but dropped out in the current communication round~(i.e., stragglers).
This results in a centralized model that tends to forget the experience of the stragglers, thus inducing CF.
As a result, the convergence of popular FL approaches~(e.g., FedAvg) can be seriously slowed down by stragglers.
Moreover, all above approaches solely aggregate the collected information by averaging in the server, ignoring the server’s rich computing and memory resources that could be potentially harnessed to boost the performance of FL~\cite{zhang2022fine}.
In this paper, to alleviate CF caused by data heterogeneity and stragglers, we bring forward a new FL approach, dubbed as GradMA~(\underline{\textbf{Grad}}ient-\underline{\textbf{M}}emory-based \underline{\textbf{A}}ccelerated Federated Learning), which takes inspiration from continual learning~(CL)~\cite{yoon2019scalable, kirkpatrick2017overcoming, lopez2017gradient, farajtabar2020orthogonal, saha2021gradient} to simultaneously correct the server-side and worker-side update directions and fully utilize the rich computing and memory resources of the server.
Concretely, motivated by the success of GEM~\cite{lopez2017gradient} and OGD~\cite{farajtabar2020orthogonal}, two
memory-based CL methods, we invoke quadratic programming~(QP) and memorize updates to correct the update directions.
On the worker side, GradMA harnesses the gradients of the local model in the previous step and the centralized model, and the parameters difference between the local model in the current step and the centralized model as constraints of QP to adaptively correct the gradient of the local model.
Furthermore,
we maintain a memory state to memorize accumulated update of each worker on the server side.
GradMA then explicitly takes the memory state to constrain QP to augment the momentum~(i.e., the update direction) of the centralized model.
Here, the server needs to allocate memory space to store the memory state.
However, this may not be feasible in FL scenarios with a large number of workers, as it can greatly increase the storage cost and the burden of solving QP.
Therefore, we carefully craft a memory reduction strategy to alleviate these limitations.
In addition, we theoretically analyze the convergence of GradMA in the smooth non-convex setting.
To sum up, we highlight our contributions as follows:
\begin{itemize}
\item We formulate a novel FL approach GradMA, which aims to simultaneously correct the server-side and worker-side update directions and fully harness the server's rich computing and memory resources. Meanwhile, we tailor a memory reduction strategy for GradMA to reduce the scale of QP and memory cost.
\item For completeness, we analyze the convergence of GradMA theoretically in the smooth non-convex setting.
As a result, the convergence result of GradMA achieves a linear speed-up as the number of selected active workers increases.
\item We conduct extensive experiments on four commonly used image classification datasets~(i.e., MNIST, CIFAR-10, CIFAR-100 and Tiny-Imagenet) to show that GradMA is highly competitive compared with other state-of-the-art baselines.
Meanwhile, ablation studies demonstrate efficacy and indispensability for core modules and key parameters.
\end{itemize}
\section{Related Work}
\textbf{FL with Data Heterogeneity.}~FedAvg, the classic distributed learning framework for FL, is first proposed by McMahan et al.~\cite{McMahan2017Communication}.
Although FedAvg provides a practical and simple solution for aggregation, it still suffers performance deterioration when the data among workers is non-iid ~\cite{li2020federated}.
Shortly thereafter, a panoply of modifications for FedAvg have been proposed to handle said issue.
For example,
FedProx~\cite{li2020federated1} constrains local updates
via adding a proximal term to the local objectives.
Scaffold~\cite{karimireddy2020scaffold} uses
control variate
to augment the local updates.
FedDyn~\cite{Acar2021Federated} dynamically regularizes the objectives of workers to align global and local objectives.
Moon~\cite{li2021model} corrects the local training by conducting contrastive learning in model-level.
Meanwhile, there exists another line of works to improve the global performance of FL through performing knowledge distillation~\cite{lin2020ensemble, yao2021local, zhu2021data, zhang2022fine, kim2022multi} on the server side or worker side.
FedMLB~\cite{kim2022multi} architecturally regularizes the local objectives via online knowledge distillation.
However, other approaches incur additional communication overhead~\cite{yao2021local, zhu2021data} or pseudo data~\cite{lin2020ensemble, zhang2022fine}.
Going beyond the aforementioned approaches, FL with momentum is an effective way to tackle worker drift problem caused by data heterogeneity and accelerate the convergence.
Specifically, on the server side, FedAvgM~\cite{hsu2019measuring} maintains a momentum buffer, whereas FedADAM~\cite{reddi2020adaptive} and FedAMS~\cite{wang2022communication} both adopt adaptive gradient-descent methods to speed up training.
FedCM~\cite{xu2021fedcm} keeps a state, carrying global information broadcasted by the server, on the worker side to address data heterogeneity issue.
DOMO~\cite{xu2022coordinating} and Mime~\cite{Karimireddy2020Mime} maintain momentum buffers on both server side and worker side to improve the training performance.
\textbf{FL with Partial Participation.}
In addition to data heterogeneity issue, another key hurdle to FL stems from partial participation.
The causes for partial participation can be roughly classified into two categories.
One is the difference in the computing power and communication speed of different workers.
A natural way to cope with this situation is to allow asynchronous updates~\cite{xie2019asynchronous, avdiukhin2021federated, yang2022anarchic}.
The other is the different availability mode,
in which workers can abort the training midway~(i.e., stragglers)~\cite{kairouz2021advances}.
To do so,
many approaches may collect information from only a subset of workers to update the centralized model~\cite{McMahan2017Communication, li2020federated, karimireddy2020scaffold, Acar2021Federated, hsu2019measuring, Karimireddy2020Mime, yang2021achieving}.
However, the server in the mentioned approaches simply ignores and discards the information of the stragglers, which can lead to other problems such as under-utilization of computation and memory~\cite{zhang2022fine}, slower convergence~\cite{li2020federated}, and biased/unfair use of workers' information~\cite{kairouz2021advances}.
Recently, MIFA~\cite{gu2021fast} corrects the gradient bias by exploiting the memorized latest updates of all workers, which avoids excessive delays caused by inactive workers and mitigates CF to some extent.
\textbf{Continual Learning.}
CL is a training paradigm that focuses on scenarios with a continuously changing class distribution of each task and aims at overcoming CF.
Existing works for CL can be roughly divided into three branches: expansion-based methods~\cite{yoon2019scalable, li2019learn}, regularization-based methods~\cite{kirkpatrick2017overcoming,wang2021training} and memory-based methods~\cite{lopez2017gradient, farajtabar2020orthogonal, saha2021gradient}.
Note that unlike CL, we focus on alleviating CF in distributed data, not sequential data.
There are a handful of recent studies that consider FL with CL.
For example, FedWeIT~\cite{yoon2021federated} focuses on sequential data. FedCurv~\cite{shoham2019overcoming} trains objectives based on all-reduce protocol. FedReg~\cite{xu2022acceleration} and FCCL~\cite{huang2022learn} require generated pseudo data and public data, respectively.
\section{Preliminaries}
This section defines the objective function for FL and introduces QP.
In practice, FL is designed to minimize the empirical risk over data distributed across multiple workers without compromising local data.
The following optimization problem is often considered:
\begin{equation}
\min_{\bm{x}\in \mathbbm{R}^{d}} f(\bm{x})=\frac{1}{N}\sum_{i=1}^{N} \left[f_{i}(\bm{x})=\frac{1}{n_i}\sum_{r=1}^{n_i}F_i(\bm{x};\bm{\xi}_r^{(i)})\right],
\end{equation}
where $N$ is the number of workers. Moreover, the local objective $f_{i}:\mathbbm{R}^{d} \rightarrow \mathbbm{R}$ measures the local empirical risk over data distribution $\mathcal{D}_i$, i.e., $\bm{\xi}_r^{(i)} \sim \mathcal{D}_i$, with $n_i$ samples available at $i$-th worker. %
Note that
$\mathcal{D}_i$
can be different among workers. In this work, we consider the typical centralized setup where $N$ workers are connected to one central server.
Next, we introduce QP, which is a fundamental optimization problem with well-established solutions and is widely seen in the machine learning community, to correct the server-side and worker-side update directions.
In this paper,
we can model our goal via QP, which is posed in the following primal form:
\begin{equation}
\label{prim_qp_1:}
\begin{split}
\min_{\Tilde{\bm{p}}} \frac{1}{2}\|\bm{p}-\Tilde{\bm{p}}\|^2 \quad {\rm s.t.} \ \langle \Tilde{\bm{p}}, \bm{M}[i]\rangle \geq 0, \forall i \in [C],
\end{split}
\end{equation}
where $\bm{p} \in \mathbbm{R}^{d}$ and $\bm{M}\in \mathbbm{R}^{d\times C}$
~($C \in \mathbbm{N}$).
One can see that the goal of~(\ref{prim_qp_1:}) is to seek a vector $\tilde{\bm{p}}$ that is positively correlated with $\bm{M}[i] \in \mathbbm{R}^d , \forall i \in [C]$ while being close to $\bm{p}$. By discarding the constant term $\bm{p}^\top\bm{p}$, we rewrite (\ref{prim_qp_1:}) as:
\begin{equation}
\label{prim_qp_2:}
\begin{split}
&\min_{\Tilde{\bm{p}}} \frac{1}{2}\Tilde{\bm{p}}^\top\Tilde{\bm{p}}-\bm{p}^\top\Tilde{\bm{p}} \quad {\rm s.t.} \ \bm{M}^\top \Tilde{\bm{p}} \succeq \bm{0} \in \mathbbm{R}^C.
\end{split}
\end{equation}
However, (\ref{prim_qp_2:}) is a QP problem in $d$ variables, i.e., the dimension of the model updates.
In general, $d$ can be enormous and much larger than $C$.
We thus solve the dual formulation of the above QP problem:
\begin{equation}
\label{prim_qp_3:}
\begin{split}
&\min_{\bm{z}} \frac{1}{2}\bm{z}^\top \bm{M}^\top \bm{M} \bm{z} + \bm{p}^\top \bm{Mz} \quad {\rm s.t.} \ \bm{z} \succeq \bm{0} \in \mathbbm{R}^C.
\end{split}
\end{equation}
Once we solve for the optimal dual variable $\bm{z}^\star$, we can recover the optimal primal solution as $\Tilde{\bm{p}}=\bm{M}\bm{z}^\star+\bm{p}$.
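To make the procedure concrete, the following Python sketch solves the dual problem~(\ref{prim_qp_3:}) and recovers the primal solution. The solver choice (a bound-constrained L-BFGS on the smooth dual) is our illustrative assumption; any QP solver could be substituted.
\begin{verbatim}
# A sketch of the QP correction: solve the dual (prim_qp_3) over z >= 0 and
# recover the primal solution p_tilde = M z* + p.
import numpy as np
from scipy.optimize import minimize

def qp_correct(p, M):
    C = M.shape[1]
    G = M.T @ M                      # C x C Gram matrix
    b = M.T @ p                      # note p^T M z = (M^T p)^T z

    dual = lambda z: 0.5 * z @ G @ z + b @ z
    grad = lambda z: G @ z + b
    res = minimize(dual, np.zeros(C), jac=grad,
                   bounds=[(0.0, None)] * C, method="L-BFGS-B")
    return M @ res.x + p             # optimal primal solution

# Usage: d = 5 model dimensions, C = 3 constraints.
rng = np.random.default_rng(0)
p, M = rng.normal(size=5), rng.normal(size=(5, 3))
p_tilde = qp_correct(p, M)
assert np.all(M.T @ p_tilde >= -1e-6)   # constraints (approximately) satisfied
\end{verbatim}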
\section{Proposed Approach: GradMA}
We now present the proposed FL approach GradMA, see Alg.~\ref{alg:1} for complete pseudo-code.
Note that the communication cost of GradMA is the same as that of FedAvg.
Next, we detail core modules of GradMA, which include the memory reduction strategy~(i.e., mem\_red$()$), Worker\_Update$()$ and Server\_Update$()$ on lines 9, 12 and 16 of Alg.~\ref{alg:1}, respectively.
\begin{algorithm}[tb]
\caption{GradMA: A Gradient-Memory-based Accelerated Federated Learning}
\label{alg:1}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} learning rates ($\eta_l ,\eta_g$), the number of all workers $N$, the number of sampled active workers per communication round $S$, control parameters ($\beta_1$, $\beta_2$), synchronization interval $I$ and memory size $m$ ($S \leq m \leq \min\{d, N\}$).
\STATE Initial state $\bm{x}_0^{(i)}=\bm{x}_0\in \mathbbm{R}^d$~($\forall i \in [N]$), $\tilde{\bm{m}}_0=\bm{0}$.
\STATE Initial $counter = \{c(i) = 0, \forall i \in [N]$\}.
\STATE Initial memory state $\bm{D}=\{\}$.
\STATE $buf=\{\}$, $new\_buf=\{\}$.
\FOR{$t=0,1,\ldots, T-1$}
\STATE \textbf{On server:}
\STATE Server samples a subset $\mathcal{S}_t$ with $S$ active workers and transmits $\bm{x}_t$ to $\mathcal{S}_t$.
\STATE $counter, \bm{D}, buf, new\_buf\leftarrow$ {mem\_red} $(m, \mathcal{S}_t, $ $counter, \bm{D}, buf, new\_buf)$.
\STATE \textbf{On workers:}
\FOR{$i \in \mathcal{S}_t$ parallel}
\STATE $\bm{x}_{t+1}^{(i)}=$ Worker\_Update($\bm{x}_{t}^{(i)}$, $\bm{x}_t$, $\eta_l$, $I$),
\STATE sends $\bm{d}_{t+1}^{(i)}= \bm{x}_{t}-\bm{x}_{t+1}^{(i)}$ to server.
\ENDFOR
\STATE \textbf{On server:}
\STATE $\bm{D}, \bm{x}_{t+1}, \tilde{\bm{m}}_{t+1}=$ Server\_Update($ [\bm{d}_{t+1}^{(i)}, i\in \mathcal{S}_t]$, $\Tilde{\bm{m}}_{t}$, $\bm{D}$, $\eta_g$, $\beta_1$, $\beta_2$, $buf$, $new\_buf$).
\STATE Sends $\bm{x}_{t+1}$ to sampled active workers in the next round.
\STATE $new\_buf=\{\}$.
\ENDFOR
\STATE {\bfseries Output:} $ \bm{x}_T$
\end{algorithmic}
\end{algorithm}
\subsection{Correcting gradient for the worker side}
Throughout the local update, we leverage QP to perform correcting gradient directions, see Alg.~\ref{local_update:}.
Here, the input of QP~(marked as QP$_l$ for distinction) is $\bm{p}\leftarrow \bm{g}_{\tau}^{(i)}$ and $\bm{M}\leftarrow \bm{G}_{\tau}^{(i)} \in \mathbbm{R}^{d\times 3}$ (line 5 of Alg.~\ref{local_update:}), and its output is the following vector $\tilde{\bm{g}}_{\tau}^{(i)}$, which is positively correlated with $\nabla f_{i}(\bm{x}_{\tau-1}^{(i)}), \nabla f_{i}(\bm{x})$ and $\bm{x}_{ \tau}^{(i)}-\bm{x}_t$ while ensuring the minimum $\|\bm{g}_{\tau}^{(i)}-\tilde{\bm{g}}_{\tau}^{(i)}\|$:
\begin{align}
\label{tilde_g_sol:}
&\tilde{\bm{g}}_{\tau}^{(i)}=\bm{G}_{\tau}^{(i)}\bm{z}_{\tau}^\star+\bm{g}_{\tau}^{(i)}\\
& = z_{\tau,1}^\star\nabla f_{i}(\bm{x}_{\tau-1}^{(i)}) + z_{\tau,2}^\star\nabla f_{i}(\bm{x}) + z_{\tau,3}^\star(\bm{x}_{ \tau}^{(i)}-\bm{x}_t) + \bm{g}_{\tau}^{(i)}, \notag
\end{align}
where $\bm{z}_{\tau}^\star = [z_{\tau,1}^\star, z_{\tau,2}^\star, z_{\tau,3}^\star]^\top$ and $\bm{z}_{\tau}^\star \succeq \bm{0} \in \mathbbm{R}^3$.
Essentially, the output of QP$_l$ is a conical combination and serves as an update direction for local training.
Particularly, when $z_{\tau,1}^\star=0, z_{\tau,2}^\star=0$ and $z_{\tau,3}^\star>0$, Eq.~(\ref{tilde_g_sol:}) is equivalent to the local update of FedProx~\cite{li2020federated1}.
The difference is that the control parameter $\mu$ in FedProx is a hyper-parameter, while $z_{\tau,3}^\star$ is determined adaptively by QP$_l$.
Specifically, when $\bm{g}_{\tau}^{(i)}$ is positively correlated with $\bm{x}_{ \tau}^{(i)}-\bm{x}_t$, i.e., $\langle\bm{g}_{\tau}^{(i)}, \bm{x}_{ \tau}^{(i)}-\bm{x}_t\rangle\geq 0$, $z_{\tau,3}^\star$ is approximately equal to $0$; otherwise, $z_{\tau,3}^\star$ is greater than $0$.
In other words,
$\bm{x}_{ \tau}^{(i)}-\bm{x}_t$ acts as a hard constraint only when $\bm{g}_{\tau}^{(i)}$ is negatively correlated with $\bm{x}_{ \tau}^{(i)}-\bm{x}_t$, which makes $\tilde{\bm{g}}_{\tau}^{(i)}$ focus more on local information.
Moreover, the calculation mechanisms for $z_{\tau,1}^\star$ and $z_{\tau,2}^\star$ are the same as that for $z_{\tau,3}^\star$.
When $z_{\tau,1}^\star>0$ and $z_{\tau,2}^\star>0$, it indicates that the update direction $\tilde{\bm{g}}_{\tau}^{(i)}$ takes into account the previous step and global information about the model, which is inspired by CL~\cite{lopez2017gradient, farajtabar2020orthogonal}.
Intuitively, Eq.~(\ref{tilde_g_sol:}) adaptively taps previous and global knowledge, thus effectively mitigating CF caused by data heterogeneity.
\begin{algorithm}[tb]
\caption{Worker\_Update($\bm{x}^{\prime}$, $\bm{x}$, $\eta_l$, $I$)}
\label{local_update:}
\begin{algorithmic}[1]
\STATE Sets $\bm{x}_{-1}^{(i)}=\bm{x}^{\prime}$, $\bm{x}_{0}^{(i)}=\bm{x}$.
\FOR{$\tau = 0,1,\ldots, I-1$}
\STATE $\bm{g}_{\tau}^{(i)}=\nabla f_{i}(\bm{x}_{\tau}^{(i)})$
\STATE $\bm{G}_{\tau}^{(i)}=[ \nabla f_{i}(\bm{x}_{\tau-1}^{(i)}), \nabla f_{i}(\bm{x}), \bm{x}_{ \tau}^{(i)}-\bm{x}_t]$,
\STATE $\tilde{\bm{g}}_{\tau}^{(i)}={\rm QP}_l(\bm{g}_{\tau}^{(i)}, \bm{G}_{\tau}^{(i)})$,
\STATE $\bm{x}_{\tau+1}^{(i)}=\bm{x}_{\tau}^{(i)}- \eta_{l} \tilde{\bm{g}}_{\tau}^{(i)}$.
\ENDFOR
\STATE {\bfseries Output:} $\bm{x}_{I}^{(i)}$.
\end{algorithmic}
\end{algorithm}
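A minimal numpy sketch of Alg.~\ref{local_update:} is given below; it reuses the \texttt{qp\_correct} routine from the sketch in the preliminaries, and \texttt{grad\_fi} is a hypothetical stand-in for the local (mini-batch) gradient oracle of worker $i$.
\begin{verbatim}
# A sketch of Worker_Update (Alg. 2); grad_fi(x) is the local gradient oracle
# and qp_correct is the dual-QP routine sketched earlier.
import numpy as np

def worker_update(x_prev, x_global, grad_fi, eta_l, I):
    g_global = grad_fi(x_global)        # gradient at the centralized model x_t
    x_old, x = x_prev, x_global.copy()
    for _ in range(I):
        g = grad_fi(x)
        # Constraint columns: previous-step gradient, global gradient, drift.
        G = np.stack([grad_fi(x_old), g_global, x - x_global], axis=1)
        g_tilde = qp_correct(g, G)      # QP_l correction
        x_old, x = x, x - eta_l * g_tilde
    return x
\end{verbatim}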
\subsection{Correcting update direction for the server side}
Now, we describe the proposed update process of the centralized model on the server side, see Alg.~\ref{server_update:} for details.
For ease of presentation, we define the number of local updates of workers that the server can store as the memory size $m$.
To elaborate, we assume that there is enough memory space on the server such that $m=N$.
In this way, at communication round $t$, the update process can be streamlined, which takes the form:
\begin{align}
&\bm{d}_{t+1}=\frac{1}{S}\sum_{i \in \mathcal{S}_t}\bm{d}_{t+1}^{(i)}, \bm{m}_{t+1}=\beta_1\Tilde{\bm{m}}_{t}+\bm{d}_{t+1}, \label{cal_d_:}\\
&\bm{D}[i]\leftarrow\left\{\begin{array}{l}
\beta_2 \bm{D}[i]+\bm{d}_{t+1}^{(i)}, i \in \mathcal{S}_t \\
\beta_2 \bm{D}[i], i \notin \mathcal{S}_t
\end{array} \right. ,\label{update_M_:}\\
& \tilde{\bm{m}}_{t+1}={\rm QP}_g(\bm{m}_{t+1}, \bm{D}), \bm{x}_{t+1} = \bm{x}_{t}-\eta_g \tilde{\bm{m}}_{t+1}. \label{tidle_update_m_:}
\end{align}
As shown in Eq.~(\ref{update_M_:}), we propose that the server allocates memory space to maintain a memory state $\bm{D}$,
which is updated in a momentum-like manner to memorize the accumulated updates of all workers.
Each worker only uploads update $\bm{d}_{t+1}^{(i)}$ ($i \in [N]$) to the server, and as such the risk of data leakage is greatly reduced.
By memorizing accumulated updates of inactive workers, GradMA avoids waiting for any straggler when facing heterogeneous workers with different availability, so as to effectively overcome the adverse effects caused by partial participation.
This is different from the recently proposed MIFA~\cite{gu2021fast}~(see Alg.~\ref{MIAFA:} in Appendix~\ref{appendix_A:}), which stores the latest updates of all workers to perform averaging.
However, such a straightforward and naive implementation of integration implicitly increases statistical heterogeneity in situations where different workers have varying data distributions, which can induce bias.
Therefore, the core idea of this paper is how to leverage memorized information to overcome the above challenge effectively.
To tackle the challenge,
we apply QP~(marked as QP$_g$ for distinction) to seek an update direction $\tilde{\bm{m}}_{t+1}$ that is positively correlated with buffers $\bm{D}[i] \in \mathbbm{R}^d , \forall i \in [N]$ while being close to $\bm{m}_{t+1}$. Concretely, the input of QP$_g$ is $\bm{p} \leftarrow \bm{m}_{t+1}$ and $\bm{M} \leftarrow \bm{D} \in \mathbbm{R}^{d\times N}$, and its output is $\tilde{\bm{m}}_{t+1}$~(see Eq.~(\ref{tidle_update_m_:})), which takes the form $\Tilde{\bm{m}}_{t+1}=\bm{D}\bm{z}_{t+1}^\star+\bm{m}_{t+1}$, where $\bm{z}_{t+1}^\star = [z_{t+1, 1}^\star, \cdots, z_{t+1, N}^\star]^\top \succeq \bm{0} \in \mathbbm{R}^N$ is determined adaptively by QP$_g$.
Inherently, QP$_g$ takes advantage of the accumulated updates of all workers stored on $\bm{D}$ to correct the update direction $\bm{m}_{t+1}$ and circumvents the centralized model from forgetting stragglers' knowledge, thereby alleviating CF induced by partial participation.
In particular, one can easily observe that $\tilde{\bm{m}}_{t+1}=\bm{m}_{t+1}$ holds if $m=0$ (that is, $\bm{D}=\bm{0}$).
The update process of Alg.~\ref{server_update:} is then consistent with that of FedAvgM~\cite{hsu2019measuring} on the server side.
Consequently, Alg.~\ref{server_update:} can be considered as an extension of FedAvgM in terms of augmenting updates through allocating memory.
\begin{algorithm}[tb]
\caption{Server\_Update($[\bm{d}_{t+1}^{(i)}, i\in \mathcal{S}_t]$, $\Tilde{\bm{m}}_{t}$, $\bm{D}$, $\eta_g$, $\beta_1$, $\beta_2$, $buf$, $new\_buf$)}
\label{server_update:}
\begin{algorithmic}[1]
\STATE $\bm{d}_{t+1}=\frac{1}{S}\sum_{i \in \mathcal{S}_t}\bm{d}_{t+1}^{(i)}$, $\bm{m}_{t+1}=\beta_1\Tilde{\bm{m}}_{t}+\bm{d}_{t+1}$.
\FOR{$c(i) \in buf$}
\IF{$i \in \mathcal{S}_t$}
\STATE $\bm{D}[i] \leftarrow\left\{\begin{array}{l}
\beta_2 \bm{D}[i]+\bm{d}_{t+1}^{(i)}, c(i) \notin new\_buf \\
\bm{d}_{t+1}^{(i)}, c(i) \in new\_buf
\end{array} \right.$.
\ELSIF{$i \notin \mathcal{S}_t$}
\STATE $\bm{D}[i] \leftarrow \beta_2 \bm{D}[i]$.
\ENDIF
\ENDFOR
\STATE $\tilde{\bm{m}}_{t+1}={\rm QP}_g(\bm{m}_{t+1}, \bm{D})$, $\bm{x}_{t+1} = \bm{x}_{t}-\eta_g \tilde{\bm{m}}_{t+1}$.
\STATE {\bfseries Output:} $\bm{D}$, $\bm{x}_{t+1}$, $\tilde{\bm{m}}_{t+1}$.
\end{algorithmic}
\end{algorithm}
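The server-side update can be sketched analogously. In the sketch below, the memory state $\bm{D}$ is kept as a dictionary from worker ids to accumulated updates, \texttt{d\_new} maps active worker ids to their uploaded updates, and \texttt{qp\_correct} is again the dual-QP routine from the preliminaries; these names are illustrative, not part of the original pseudo-code.
\begin{verbatim}
# A sketch of Server_Update (Alg. 3); d_new maps active worker id -> update,
# D maps stored worker id -> accumulated update.
import numpy as np

def server_update(d_new, m_tilde, D, x, eta_g, beta1, beta2, new_buf):
    d_avg = np.mean(list(d_new.values()), axis=0)
    m = beta1 * m_tilde + d_avg                    # momentum buffer
    for i in set(D) | set(new_buf):
        if i in d_new:                             # active worker
            D[i] = d_new[i] if i in new_buf else beta2 * D[i] + d_new[i]
        else:                                      # straggler: decay only
            D[i] = beta2 * D[i]
    M = np.stack(list(D.values()), axis=1)
    m_tilde_new = qp_correct(m, M)                 # QP_g correction
    return D, x - eta_g * m_tilde_new, m_tilde_new
\end{verbatim}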
\subsection{A Practical Memory Reduction Strategy}
\begin{algorithm}[tb]
\caption{mem\_red$(m, \mathcal{S}, c, \bm{D}, buf, new\_buf)$}
\label{mem_red:}
\begin{algorithmic}[1]
\FOR{$i \in \mathcal{S}$}
\IF{$c(i) \in buf$}
\STATE $c(i) \leftarrow c(i) + 1$.
\ELSIF{$c(i) \notin buf$}
\IF{$Length(buf)=m$}
\STATE $old\_buf=\{\}$.
\FOR{$k \in buf$}
\IF{$k \notin \mathcal{S}$}
\STATE $old\_buf \leftarrow old\_buf\cup \{c(k)\}$.
\ENDIF
\ENDFOR
\STATE Discarding $c(i^\prime)$ with the smallest value from $old\_buf$ and set $c(i^\prime)=0$.
\STATE Discarding $\bm{D}[i^\prime]$ from memory state $\bm{D}$.
\ENDIF
\STATE $c(i) \leftarrow c(i)+1$.
\STATE $buf \leftarrow buf \cup \{c(i)\}$.
\STATE $new\_buf\leftarrow new\_buf \cup \{c(i)\}$.
\ENDIF
\ENDFOR
\STATE {\bfseries Output:} $c, \bm{D}, buf, new\_buf$
\end{algorithmic}
\end{algorithm}
In realistic FL scenarios, the number of workers may be large and the model size may be huge, leading to a large-scale QP as well as a high memory demand on the server to store $\bm{D}$, which is infeasible and unnecessary in practice.
Therefore, we propose a memory reduction strategy to alleviate this deficiency, which ensures that the size of $\bm{D}$ does not exceed a pre-given $m$ with $S \leq m \leq \min \{d, N\}$, see Alg.~\ref{mem_red:} for details.
The design ethos of the memory reduction strategy is to keep as much useful information as possible in a given $m$.
Specifically, at communication round $t$, the server samples $S$ active workers and performs that
$c(i)\leftarrow c(i)+1$~($i \in \mathcal{S}$)~(lines 3 and 15 of Alg.~\ref{mem_red:}).
When the memory in use is less than the given size, the counters $c(i) \notin buf$ of sampled active workers enter the buffers $buf$ and $new\_buf$ in turn~(lines 16-17 of Alg.~\ref{mem_red:}).
Once the memory in use equals the given size, the $c(i^\prime)$ with the smallest value in $old\_buf$ is discarded and reset to $c(i^\prime)=0$.
Also, $\bm{D}[i^\prime]$ is discarded from $\bm{D}$~(lines 12-13 of Alg.~\ref{mem_red:}).
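The bookkeeping of Alg.~\ref{mem_red:} can be condensed into the following sketch; for readability we index the buffer by worker ids rather than by the counters themselves, which is a presentational assumption.
\begin{verbatim}
# A sketch of mem_red (Alg. 4): admit sampled workers, evicting the
# non-sampled buffered worker with the smallest counter when memory is full.
def mem_red(m, sampled, counter, D, buf):
    new_buf = set()
    for i in sampled:
        if i not in buf:
            if len(buf) == m:                  # memory full: evict
                old = [k for k in buf if k not in sampled]
                evict = min(old, key=lambda k: counter[k])
                buf.discard(evict)
                D.pop(evict, None)
                counter[evict] = 0
            buf.add(i)
            new_buf.add(i)
        counter[i] += 1
    return counter, D, buf, new_buf
\end{verbatim}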
\section{Convergence Results for GradMA}
We now present a convergence analysis of GradMA in the smooth non-convex setting.
The following assumptions are considered.
\begin{assumption}
\label{Global_Function_Below_Bounds:}
{\rm (Global function below bounds).}
Set $f^*=\inf_{\bm{x}\in \mathbbm{R}^d}f(\bm{x})$ and $f^*>-\infty$.
\end{assumption}
\begin{assumption}
\label{L_smooth:}
{\rm ($L$-smooth).}
$\forall i \in [N]$,
the local functions $f_i$ are differentiable, and
there exists a constant $L>0$ such that for any $\bm{x},\bm{y} \in \mathbbm{R}^d$,
$\left \| \nabla f_i(\bm{x})-\nabla f_i(\bm{y}) \right \| \leq L \|\bm{x}-\bm{y}\|$.
\end{assumption}
\begin{assumption}
\label{Bounded_data_heterogeneity:}
{\rm (Bounded data heterogeneity).}
The degree of heterogeneity of the data distribution across workers can be quantified as $\|\nabla f_i(\bm{x})-\nabla f(\bm{x})\|^2 \leq \rho^2$,
for any $i \in [N]$ and some constant $\rho \geq 0$.
\end{assumption}
\begin{assumption}
\label{UBE_QP_l:}
{\rm (Bounded optimal solution error for ${\rm QP}_l$).}
Given $\bm{g}^{(i)}=\nabla f_i(\bm{x})$~(see Alg.~\ref{local_update:}), then there exists $\varepsilon_l > 0$ such that $\|\bm{g}^{(i)}-\Tilde{\bm{g}}^{(i)}\|^2 \leq \varepsilon_{l}^2$.
\end{assumption}
\begin{assumption}
\label{UBE_QP_g:}
{\rm (Bounded optimal solution error for ${\rm QP}_g$).}
Given $\beta_2\in [0, 1)$ and $\bm{m}$~(see Alg.~\ref{server_update:}), then there exists $\varepsilon_g > 0$ such that $\|\bm{m}-\Tilde{\bm{m}}\|^2 \leq \frac{\varepsilon_{g}^2}{1-\beta_2}$.
\end{assumption}
Assumptions \ref{Global_Function_Below_Bounds:} and \ref{L_smooth:}
are commonly used in the analysis of
distributed learning \cite{karimireddy2020scaffold, li2020federated1, xin2021hybrid}.
Assumption~\ref{Bounded_data_heterogeneity:} quantifies inter-worker variances, i.e., data heterogeneity \cite{karimireddy2020scaffold, li2020federated1}.
Assumptions~\ref{UBE_QP_l:} and~\ref{UBE_QP_g:} are necessary for the theoretical analysis of GradMA, which constrain the upper bound on the optimal solution errors of QP$_l$ and QP$_g$, respectively.
Intuitively, the assumptions hold if the local updates for all workers make sense~\cite{esfandiari2021cross}.
Note that the upper bound for Assumption~\ref{UBE_QP_g:} follows an intuitive observation: more accumulated updates of workers (i.e., the larger $\beta_2$) can provide more accumulated update information for the centralized model.
Next, we state our convergence results for GradMA.
\begin{thm}
\label{thm:alg}
Suppose Assumptions \ref{Global_Function_Below_Bounds:}-\ref{UBE_QP_g:} hold. Let $\eta_l \leq \frac{1}{160^{0.5}LI}$, $\eta_g\eta_l\leq\frac{(1-\beta_1)^2S(N-1)}{IL(\beta_1S(N-1)+4N(S-1))}$ and $ 320I^2\eta_l^2L^2+\frac{64I\eta_g\eta_lL(1+40I^2\eta_l^2L^2)}{(1-\beta_1)^2}\frac{N-S}{S(N-1)}\leq 1$. For all $t \in [0,\cdots, T-1]$,
the following relationship generated by Alg.~\ref{alg:1} holds:
\begin{align}
&\frac{1}{T}\sum_{t=0}^{T-1}\mathbbm{E}\left[\left\|\nabla f(\bm{x}_{t})\right\|^2\right] \leq
\frac{8(1-\beta_1)(f(\bm{x}_{0}) -f^\star)}{I\eta_g\eta_lT} \notag\\
& \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad + C_1 \varepsilon_l^2 + C_2 \varepsilon_g^2 + C_3 \rho^2, \notag
\end{align}
where the expectation $\mathbbm{E}$ is w.r.t the sampled active workers per communication round, and $C_1=8+320I^2\eta_l^2L^2+ \frac{64I\eta_g\eta_lL(1+40I^2\eta_l^2L^2)}{(1-\beta_1)^2}\frac{N-S}{S(N-1)}$, $C_2 =\frac{20 \eta_g L }{(1-\beta_1)^2(1-\beta_2)I\eta_l} + \frac{8}{(1-\beta_2)I^2\eta_l^2}$, $C_3 = C_1 - 8$.
\end{thm}
A detailed proof of Theorem~\ref{thm:alg} is presented in the Appendix~\ref{appendix_C:}.
\begin{coro}
\label{coro_1:}
Suppose Assumptions \ref{Global_Function_Below_Bounds:}-\ref{UBE_QP_g:} hold.
We set $\eta_l=\frac{1}{T^{0.5}LI}$, $\eta_g=\frac{S^{0.5}}{I^{0.5}}$, $\varepsilon_l=\frac{1}{T^{0.5}}$ and $\varepsilon_g=\frac{I^{0.25}}{T^{0.75}S^{0.25}L}$.
For $T\geq\max \left\{160, \frac{(\beta_1S(N-1)+4N(S-1))^2}{I^2(1-\beta_1)^4S(N-1)^2}, \frac{(b+(b^2+1280)^{0.5})^2}{4}\right\}$ where $b=\frac{128(N-S)}{(1-\beta_1)^2I^{0.5}S^{0.5}(N-1)}$ in Theorem~\ref{thm:alg}, we have:
\begin{align*}
&\frac{1}{T}\sum_{t=0}^{T-1} \mathbbm{E}\left[\|\nabla f(\bm{x}_t)\|^2\right] =\mathcal{O}\left(\frac{I^{0.5}}{S^{0.5}T^{0.5}}+\frac{1}{T}\right).
\end{align*}
\end{coro}
An immediate observation from Corollary~\ref{coro_1:} is that GradMA can achieve a linear speed-up as the number of sampled active workers $S$ increases.
This convergence rate matches the well-known best result in FL approaches in literature~\cite{yang2021achieving} under the smooth non-convex setting.
\section{Empirical Study}
In this section, we empirically investigate GradMA on four datasets~(MNIST~\cite{lecun1998gradient},
CIFAR-10, CIFAR-100~\cite{Krizhevsky2009Learning} and Tiny-Imagenet\footnote{http://cs231n.stanford.edu/tiny-imagenet-200.zip}) commonly used for image classification tasks.
\subsection{Experimental Setup}
To gauge the effectiveness of Worker\_Update$()$ and Server\_Update$()$, we perform an ablation study of \textbf{GradMA}.
For this purpose, we design Alg.~\ref{GradMA-W:}~(marked as \textbf{GradMA-W}) and Alg.~\ref{GradMA-S:}~(marked as \textbf{GradMA-S}), as specified in Appendix~\ref{appendix_A:}.
Meanwhile, we compare other baselines, including \textbf{FedAvg}~\cite{McMahan2017Communication},
\textbf{FedProx}~\cite{li2020federated1},
\textbf{MOON}~\cite{li2021model},
\textbf{FedMLB}~\cite{kim2022multi},
\textbf{Scaffold}~\cite{karimireddy2020scaffold},
\textbf{FedDyn}~\cite{Acar2021Federated},
\textbf{MimeLite}~\cite{Karimireddy2020Mime},
\textbf{MIFA}~\cite{gu2021fast}
and slow-momentum variants of FedAvg, FedProx, MIFA, MOON and FedMLB~(i.e., \textbf{FedAvgM}~\cite{hsu2019measuring}, \textbf{FedProxM}, \textbf{MIFAM}, \textbf{MOONM} and \textbf{FedMLBM}),
in terms of test accuracy and communication efficiency in different FL scenarios.
For fairness,
we divide the baselines into three groups according to whether they improve FedAvg on the \textbf{worker side}, the \textbf{server side}, or \textbf{both}.
See Table~\ref{table_1:} and Table~\ref{table_2:} for details.
Furthermore, on top of GradMA-S, we empirically study the effect of the control parameters~($\beta_1$, $\beta_2$) and verify the effectiveness of mem\_red$()$ by setting varying memory sizes $m$.
All our experiments are performed on a centralized network with $100$ workers, and we fix the synchronization interval to $I=5$.
To explore the performance of the approaches,
we set up multiple scenarios w.r.t.\ the number of sampled active workers $S$ per communication round and the degree of data heterogeneity.
Specifically,
we set $S \in \{5, 10, 50\}$.
Moreover, we use a Dirichlet process $Dp(\omega)$~\cite{Acar2021Federated, zhu2021data} to partition the training set of each dataset into disjoint shards across the $100$ workers.
We set $\omega \in \{0.01, 0.1, 1.0\}$.
A visualization of the data partitions for the four datasets at varying $\omega$ values can be found in Fig.~\ref{data_par_sum_appendix:} in Appendix~\ref{appendix_B:}.
Also, the original testing set~(without partitioning) of each dataset is used to evaluate the performance of the trained centralized model.
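A minimal sketch of the Dirichlet-based partitioning described above (our illustration of the standard $Dp(\omega)$ scheme; the exact sampling code used for our experiments may differ in details):
\begin{verbatim}
import numpy as np

def dirichlet_partition(labels, num_workers=100, omega=0.1, seed=0):
    # Split sample indices across workers; smaller omega -> more skew.
    rng = np.random.default_rng(seed)
    idx_by_worker = [[] for _ in range(num_workers)]
    for c in np.unique(labels):
        idx_c = np.flatnonzero(labels == c)
        rng.shuffle(idx_c)
        # Proportions of class c assigned to each of the workers.
        p = rng.dirichlet(np.full(num_workers, omega))
        cuts = (np.cumsum(p)[:-1] * len(idx_c)).astype(int)
        for w, part in enumerate(np.split(idx_c, cuts)):
            idx_by_worker[w].extend(part.tolist())
    return idx_by_worker

# Example: 10-class toy labels; omega=0.01 gives near single-class workers.
labels = np.repeat(np.arange(10), 500)
parts = dirichlet_partition(labels, num_workers=100, omega=0.01)
print(sorted(len(p) for p in parts)[-5:])  # a few workers hold most data
\end{verbatim}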
For MNIST, a neural network (NN) with three linear hidden layers is implemented for each worker.
We fix the total number of iterations to $2500$, i.e., $T\times I=2500$.
For CIFAR-10~(CIFAR-100, Tiny-Imagenet), each worker runs a Lenet-5~\cite{lecun1998gradient}~(VGG-11~\cite{simonyan2014very}, Resnet20~\cite{he2016deep}) architecture.
We fix the total number of iterations to $5000~(10000, 10000)$, i.e., $T\times I=5000~(10000, 10000)$.
Due to space limitations, we relegate the detailed hyper-parameter tuning and full experimental results to Appendix~\ref{appendix_B:}.
\subsection{Performance Analysis}
\begin{table*}[htbp]
\centering
\caption{Top test accuracy~(\%) overview under different FL scenarios.}
\resizebox{2.0\columnwidth}{!}{
\begin{tabular}{cccc|cc|ccc|ccc|ccc}
\toprule
\multirow{2}[1]{*}{Alg.s} & \multicolumn{3}{c|}{MNIST+NN, $S=10$} & \multicolumn{2}{c|}{MNIST+NN, $\omega=0.01$} & \multicolumn{3}{c|}{CIFAR-10+Lenet-5, $S=10$} & \multicolumn{3}{c|}{CIFAR-100+VGG-11, $\omega=0.1$} & \multicolumn{3}{c}{Tiny-Imagenet+Resnet20, $(\omega, S)$} \\
& $\omega=1.0$ & $\omega=0.1$ & $\omega=0.01$ & $S=5$ & $S=50$ & $\omega=1.0$ & $\omega=0.1$ & $\omega=0.01$ & $S=5$ & $S=10$ & $S=50$ & $(0.01,5)$ & $(1.0, 5)$ & $(1.0, 10)$ \\
\midrule
FedAvg & 98.22$\pm$0.05 & 97.11$\pm$0.39 & 46.19$\pm$1.29 & 49.65$\pm$3.88 & 68.32$\pm$4.16 & 69.48$\pm$8.28 & 47.86$\pm$5.26 & \textbf{20.97$\pm$3.73} & 56.02$\pm$0.37 & 61.22$\pm$0.16 & 64.78$\pm$0.43 & 7.50$\pm$0.32 & 41.80$\pm$0.55 & 42.90$\pm$0.12 \\
\midrule
FedProx & 98.16$\pm$0.09 & 97.19$\pm$0.31 & 46.82$\pm$0.96 & 49.89$\pm$3.67 & 67.97$\pm$4.27 & 71.58$\pm$4.66 & 48.63$\pm$4.92 & 20.40$\pm$3.85 & 55.94$\pm$0.71 & 61.25$\pm$0.09 & 64.69$\pm$0.27 & 7.51$\pm$0.46 & 41.82$\pm$0.29 & 42.58$\pm$0.69 \\
FedMLB & \textbf{98.31$\pm$0.06} & \textbf{97.26$\pm$0.40} & 54.53$\pm$0.39 & 57.22$\pm$3.07 & 68.44$\pm$3.26 & 69.28$\pm$6.56 & 48.99$\pm$4.94 & 20.81$\pm$3.26 & 53.80$\pm$0.16 & 59.20$\pm$0.27 & 64.06$\pm$0.30 & 7.98$\pm$0.34 & 42.83$\pm$0.13 & 43.59$\pm$0.80 \\
MOON & 98.18$\pm$0.12 & 97.11$\pm$0.31 & 46.26$\pm$1.35 & 50.39$\pm$5.16 & \textbf{68.75$\pm$4.50} & 71.11$\pm$7.94 & 48.84$\pm$5.16 & 19.39$\pm$3.99 & 55.37$\pm$0.34 & 60.58$\pm$0.60 & 64.48$\pm$0.42 & 7.70$\pm$0.38 & 41.68$\pm$0.22 & 42.80$\pm$0.54 \\
Scaffold & 97.63$\pm$0.37 & 93.94$\pm$1.18 & 50.86$\pm$7.46 & 39.97$\pm$4.88 & 49.54$\pm$2.28 & 53.33$\pm$6.63 & 35.91$\pm$2.14 & 15.55$\pm$1.33 & 32.22$\pm$0.92 & 34.72$\pm$0.80 & 45.70$\pm$0.76 & 7.20$\pm$0.33 & 40.96$\pm$0.23 & 43.02$\pm$0.30 \\
GradMA-W & 98.15$\pm$0.10 & 97.01$\pm$0.23 & \textbf{63.34$\pm$3.75} & \textbf{65.39$\pm$0.96} & 65.13$\pm$2.54 & \textbf{72.33$\pm$3.84} & \textbf{50.25$\pm$3.94} & 18.99$\pm$4.06 & \textbf{56.43$\pm$0.51} & \textbf{61.38$\pm$0.11} & \textbf{64.96$\pm$0.36} & \textbf{9.98$\pm$0.22} & \textbf{43.68$\pm$0.23} & \textbf{44.57$\pm$0.45} \\
\midrule
FedAvgM & 98.29$\pm$0.18 & 97.20$\pm$0.30 & 53.77$\pm$0.32 & 57.87$\pm$3.64 & 67.80$\pm$5.58 & 71.04$\pm$7.29 & 51.91$\pm$4.46 & 21.02$\pm$3.52 & 55.85$\pm$0.28 & 61.32$\pm$0.29 & 64.88$\pm$0.25 & 16.96$\pm$1.08 & 41.91$\pm$0.23 & 42.57$\pm$0.14 \\
MIFA & 98.02$\pm$0.12 & 96.88$\pm$0.56 & 66.92$\pm$2.53 & 56.04$\pm$3.92 & 52.84$\pm$4.89 & 71.41$\pm$5.81 & 50.60$\pm$11.87 & 23.78$\pm$2.04 & 50.37$\pm$1.02 & 58.74$\pm$0.42 & 64.71$\pm$0.31 & 8.88$\pm$0.33 & 41.42$\pm$0.22 & 42.83$\pm$0.13 \\
MIFAM & 98.02$\pm$0.15 & 96.90$\pm$0.44 & 67.15$\pm$2.23 & 55.28$\pm$6.05 & 53.35$\pm$6.84 & 73.48$\pm$1.37 & 52.13$\pm$9.71 & 24.17$\pm$1.24 & 49.30$\pm$0.86 & 58.91$\pm$0.24 & 64.61$\pm$0.33 & 12.01$\pm$0.32 & 41.94$\pm$0.06 & 43.17$\pm$0.09 \\
GradMA-S & \textbf{98.38$\pm$0.09} & \textbf{97.35$\pm$0.28} & \textbf{74.52$\pm$1.71} & \textbf{75.93$\pm$0.97} & \textbf{69.09$\pm$3.83} & \textbf{78.76$\pm$1.96} & \textbf{64.60$\pm$5.87} & \textbf{28.41$\pm$2.43} & \textbf{59.08$\pm$0.43} & \textbf{63.23$\pm$0.22} & \textbf{65.63$\pm$0.35} & \textbf{20.93$\pm$1.49} & \textbf{48.83$\pm$1.06} & \textbf{49.65$\pm$0.72} \\
\midrule
FedProxM & 98.26$\pm$0.08 & 97.13$\pm$0.34 & 54.50$\pm$0.79 & 58.59$\pm$4.58 & 69.00$\pm$4.42 & 78.00$\pm$1.61 & 51.22$\pm$5.14 & 21.80$\pm$3.72 & 55.63$\pm$0.31 & 63.15$\pm$0.12 & 64.78$\pm$0.11 & 18.30$\pm$0.79 & 37.98$\pm$0.10 & 45.27$\pm$0.19 \\
FedMLBM & 98.26$\pm$0.16 & \textbf{97.35$\pm$0.30} & 61.12$\pm$1.48 & 64.12$\pm$4.17 & 68.78$\pm$3.28 & 73.70$\pm$4.62 & 49.90$\pm$5.82 & 21.53$\pm$2.93 & 53.91$\pm$0.78 & 60.44$\pm$0.34 & 64.85$\pm$0.18 & 17.32$\pm$0.82 & 44.62$\pm$0.32 & 45.18$\pm$0.27 \\
MOONM & 98.21$\pm$0.13 & 97.04$\pm$0.42 & 62.34$\pm$8.91 & 57.98$\pm$5.51 & 68.82$\pm$4.43 & 73.96$\pm$4.11 & 50.06$\pm$6.14 & 20.19$\pm$3.10 & 56.01$\pm$0.25 & 62.06$\pm$0.19 & 65.37$\pm$0.17 & 16.78$\pm$0.95 & 42.43$\pm$0.39 & 42.78$\pm$0.46 \\
FedDyn & 97.92$\pm$0.12 & 96.03$\pm$0.46 & 59.39$\pm$2.29 & 65.36$\pm$5.20 & 57.68$\pm$4.30 & 74.94$\pm$2.48 & 41.93$\pm$3.22 & 17.94$\pm$3.52 & 52.95$\pm$1.63 & 58.48$\pm$0.18 & 61.71$\pm$0.25 & 17.89$\pm$0.95 & 44.37$\pm$0.57 & 44.86$\pm$0.15 \\
MimeLite & 98.19$\pm$0.07 & 97.10$\pm$0.31 & 54.86$\pm$13.36 & 51.04$\pm$4.15 & \textbf{69.41$\pm$4.15} & 77.98$\pm$1.48 & 53.27$\pm$1.69 & 20.73$\pm$3.33 & 58.00$\pm$0.51 & 63.29$\pm$0.49 & 64.68$\pm$0.33 & 8.29$\pm$0.29 & 41.05$\pm$0.21 & 41.56$\pm$0.18 \\
GradMA & \textbf{98.39$\pm$0.04} & 97.34$\pm$0.35 & \textbf{77.97$\pm$1.28} & \textbf{75.51$\pm$1.94} & 66.68$\pm$3.03 & \textbf{79.92$\pm$0.59} & \textbf{65.91$\pm$5.10} & \textbf{30.81$\pm$1.78} & \textbf{59.47$\pm$0.58} & \textbf{63.49$\pm$0.47} & \textbf{65.68$\pm$0.25} & \textbf{23.52$\pm$1.32} & \textbf{49.29$\pm$0.86} & \textbf{50.54$\pm$0.56} \\
\midrule
\end{tabular}}
\label{table_1:}
\end{table*}
\begin{table*}[htbp]
\centering
\caption{Communication rounds needed to reach a given test accuracy $ac$ under different FL scenarios. Note that since Scaffold and MimeLite incur twice the communication load per communication round of the other approaches, we mark them with $2\times$.}
\resizebox{2.0\columnwidth}{!}{
\begin{tabular}{cccc|cc|ccc|ccc|ccc}
\toprule
\multirow{3}[4]{*}{Alg.s} & \multicolumn{3}{c|}{MNIST, $S=10$} & \multicolumn{2}{c|}{MNIST, $\omega=0.01$} & \multicolumn{3}{c|}{CIFAR-10, $S=10$} & \multicolumn{3}{c|}{CIFAR-100, $\omega=1.0$} & \multicolumn{3}{c}{Tiny-Imagenet+Resnet20, $(\omega, S)$} \\
\cmidrule{2-15} & $\omega=1.0$ & $\omega=0.1$ & $\omega=0.01$ & $S=5$ & $S=50$ & $\omega=1.0$ & $\omega=0.1$ & $\omega=0.01$ & $S=5$ & $S=10$ & $S=50$ & $(0.01,5)$ & $(1.0, 5)$ & $(1.0, 10)$ \\
& $ac=95\%$ & $ac=95\%$ & $ac=45\%$ & $ac=40\%$ & $ac=50\%$ & $ac=55\%$ & $ac=45\%$ & $ac=15\%$ & $ac=30\%$ & $ac=40\%$ & $ac=60\%$ & $ac=5\%$ & $ac=30\%$ & $ac=35\%$ \\
\midrule
FedAvg & 25 & 115 & 493 & 299 & 117 & 177 & 882 & 106 & 437 & 435 & 1,071 & 986 & 906 & 1,116 \\
\midrule
FedProx & 25 & 120 & 478 & 283 & 117 & 141 & 766 & 197 & 429 & 435 & 1,081 & 986 & 906 & 1,091 \\
FedMLB & \textbf{22} & 115 & 280 & 203 & \textbf{78} & 245 & 694 & \textbf{63} & 511 & 548 & 1,404 & \textbf{791} & 806 & 966 \\
MOON & 25 & 116 & 493 & 283 & 117 & \textbf{96} & 616 & 247 & 475 & 458 & 1,088 & 986 & 906 & 1,116 \\
Scaffold~(2$\times$) & 26 & -- & \textbf{60} & -- & -- & -- & -- & -- & 806 & -- & -- & 971 & 971 & 1,181 \\
GradMA-W & 30 & \textbf{111} & 125 & \textbf{78} & 172 & 168 & \textbf{582} & 136 & \textbf{389} & \textbf{420} & \textbf{1,027} & 821 & \textbf{841} & \textbf{936} \\
\midrule
FedAvgM & 19 & 116 & 280 & 138 & 79 & 175 & 605 & 73 & 429 & 397 & 654 & 286 & 821 & 991 \\
MIFA & 42 & 85 & 67 & 62 & 89 & 156 & 393 & 54 & 631 & 567 & 1,075 & 716 & 1,021 & 1,151 \\
MIFAM & 39 & 80 & 60 & 62 & 80 & 152 & 325 & 25 & 677 & 547 & 883 & 656 & 956 & 1,076 \\
GradMA-S & \textbf{16} & \textbf{43} & \textbf{38} & \textbf{38} & \textbf{56} & \textbf{83} & \textbf{101} & \textbf{14} & \textbf{297} & \textbf{266} & \textbf{579} & \textbf{131} & \textbf{231} & \textbf{251} \\
\midrule
FedProxM & 22 & 115 & 280 & 138 & 96 & 97 & 631 & 67 & 468 & 397 & 665 & 291 & 821 & 611 \\
FedMLBM & \textbf{16} & 87 & 191 & 113 & \textbf{77} & 185 & 653 & 69 & 522 & 522 & 922 & 296 & 681 & 796 \\
MOONM & 19 & 115 & 186 & 210 & 95 & 78 & 766 & 85 & 461 & 417 & 697 & 291 & 821 & 991 \\
FedDyn & 19 & 73 & 136 & 128 & 96 & 66 & -- & 83 & 441 & 356 & 736 & 371 & 396 & 406 \\
MimeLite~(2$\times$) & 32 & 115 & 277 & 284 & 113 & 92 & 385 & 134 & 355 & 329 & 675 & 1,016 & 871 & 1,071 \\
GradMA & 18 & \textbf{49} & \textbf{37} & \textbf{29} & 105 & \textbf{49} & \textbf{130} & \textbf{11} & \textbf{274} & \textbf{253} & \textbf{559} & \textbf{146} & \textbf{231} & \textbf{231} \\
\midrule
\end{tabular}}
\label{table_2:}
\end{table*}
\textbf{Effects of data heterogeneity.}
From Table~\ref{table_1:}, one can see that the performance of all approaches degrades severely with decreasing $\omega$ on MNIST, CIFAR-10 and Tiny-Imagenet; GradMA is the only approach that remains robust, surpassing the other baselines by an overwhelming margin in most scenarios.
In particular, the higher the data heterogeneity, the larger the advantage of GradMA.
Also, as shown in Table~\ref{table_2:} and Fig.~\ref{fig_gradma_baseline_main:}, GradMA requires far fewer communication rounds than the baselines to reach a given test accuracy in most scenarios.
These results validate our idea: the advantage of GradMA comes from the effective adaptive use of workers' information on both the worker side and the server side, which alleviates the negative impact of the discrepancy among workers' data distributions.
\begin{figure}[b]
\centering
\includegraphics[width=0.45\textwidth]{MNIST_CIFAR-10_5_GradMA_W_baselines_main.pdf}
\caption{Selected test accuracy curves of GradMA-W and the baselines on MNIST and CIFAR-10.}
\label{fig_gradma_w_baseline_main:}
\end{figure}
\textbf{Impacts of stragglers.}
We explore the impacts of different $S$ on MNIST, CIFAR-100 and Tiny-Imagenet.
A higher $S$ means more active workers upload updates per communication round.
From Table~\ref{table_1:}, we can clearly see that the performance of all approaches improves uniformly with increasing $S$ on CIFAR-100 and Tiny-Imagenet, with GradMA consistently dominating the other baselines in terms of test accuracy.
Meanwhile, Fig.~\ref{fig_gradma_baseline_main:} shows that the learning efficiency of GradMA consistently exceeds that of the other baselines~(see Appendix~\ref{appendix_B:} for more results).
However, on MNIST, the test accuracy of most approaches does not consistently improve with increasing $S$.
We conjecture that, for simple classification tasks and models, the more active workers participate in training, the more prone the centralized model is to overfitting.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{CIFAR-100_tinyimagenet_5_GradMA_S_baselines_main.pdf}
\caption{Selected test accuracy curves of GradMA-S and the baselines on CIFAR-100 and Tiny-Imagenet.}
\label{fig_gradma_s_baseline_main:}
\end{figure}
\textbf{Comments on GradMA-W and GradMA-S.}
We now discuss the empirical performance of GradMA-W and GradMA-S. GradMA-S beats GradMA-W by a significant margin in the different FL scenarios, and even slightly outperforms GradMA in a few cases~(see Table~\ref{table_1:} and Table~\ref{table_2:}).
That said, GradMA leads GradMA-S in most FL scenarios, suggesting that combining Worker\_Update$()$ and Server\_Update$()$ has a positive effect and thus improves performance.
Meanwhile, GradMA-W beats the baselines in most cases, which suggests that Worker\_Update$()$ can mitigate the issue of CF and thus strengthen the centralized model.
In addition, we can draw an empirical conclusion: correcting the update direction of the centralized model on the server boosts accuracy far more than correcting that of the local model on each worker.
The selected learning curves in Fig.~\ref{fig_gradma_w_baseline_main:} and Fig.~\ref{fig_gradma_s_w_main:} corroborate the above statements.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{CIFAR-10_CIFAR-100_5_GradMA_baselines_main.pdf}
\caption{Selected test accuracy curves of GradMA and the baselines on CIFAR-10 and CIFAR-100.}
\label{fig_gradma_baseline_main:}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{CIFAR-10_tinyimagenet_5_GradMA_S_W_main.pdf}
\caption{Selected test accuracy curves of GradMA, GradMA-S and GradMA-W on CIFAR-10 and Tiny-Imagenet.}
\label{fig_gradma_s_w_main:}
\end{figure}
Next, we further explore the effects of ($\beta_1$, $\beta_2$) and $m$ on the performance of GradMA-S on MNIST and CIFAR-10.
\textbf{Varying control parameters~($\beta_1$, $\beta_2$).} To explore the effects of ($\beta_1$, $\beta_2$) in more detail, we set $\beta_1, \beta_2 \in \{0.0, 0.1, 0.3, 0.5, 0.7, 0.9\}$ and fix $m=100$ and $S=10$.
Notice that GradMA-S resembles MIFA~(MIFAM) when $\beta_1 = 0.0$ and $\beta_2 = 0.0$~($\beta_1 = 0.0$ and $ \beta_2 > 0.0$), in that both memorize stragglers' updates on the server side.
From Table~\ref{table_1:} and Fig.~\ref{beta_1_beta2_main:}~(refer to Appendix~\ref{appendix_B:} for more results),
we can see that GradMA-S with $\beta_2 = 0.0$ considerably beats MIFA and MIFAM regardless of the value of $\beta_1$.
Furthermore, we observe that GradMA-S with $\beta_2 > 0.0$ outperforms GradMA-S with $\beta_2 = 0.0$ in most cases, and the best test accuracy is located in the region of $\beta_2 > 0.0$.
This indicates that the accumulated updates of stragglers provide more effective update information for the centralized model than their latest updates alone, thereby refining the performance of GradMA-S; a minimal sketch of the two memory rules follows.
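The contrast can be summarized in a few lines of Python (variable names and the exact accumulation rule are our own simplification; the precise recursion is specified in Alg.~\ref{alg:1}):
\begin{verbatim}
import numpy as np

beta2 = 0.5
memory = np.zeros(3)                     # server-side slot for one worker
for latest_update in (np.ones(3), 2 * np.ones(3)):
    # MIFA-style rule would keep only the newest update:
    #     memory = latest_update
    # GradMA-S with beta2 > 0 accumulates past updates (assumed form):
    memory = beta2 * memory + latest_update
print(memory)   # retains information from both rounds
\end{verbatim}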
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{beta1_beta2_main.pdf}
\caption{Top test accuracy~(\%) overview for GradMA-S with varying control parameters ($\beta_1$, $\beta_2$) on MNIST and CIFAR-10.}
\label{beta_1_beta2_main:}
\end{figure}
\textbf{Varying memory sizes $m$.}
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{m_omega_main.pdf}
\caption{Top test accuracy~(\%) overview for GradMA-S with varying memory sizes $m$ on MNIST and CIFAR-10.}
\label{m_omega_main:}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{m_omega_comm_round_main.pdf}
\caption{Selected test accuracy curves of GradMA-S with varying memory sizes $m$ on MNIST and CIFAR-10.}
\label{m_omega_comm_round_main:}
\end{figure}
In a real-world FL scenario, the memory space on the server side determines the value of the tunable parameter $m$ for GradMA-S.
Here, we fix $S=10$ and set $m \in \{0, 10, 20, 40, 60, 80, 100\}$ to carefully look into the performance of GradMA-S with varying $m$.
Notably, GradMA-S with $m=0$ is equivalent to FedAvgM.
From Fig.~\ref{m_omega_main:},
FedAvgM performs comparably to GradMA-S with $m>0$ on MNIST under the moderate data heterogeneity setting~(i.e., $\omega=0.1$).
In contrast, under the high data heterogeneity setting~(i.e., $\omega=0.01$), the performance of FedAvgM degrades sharply and falls far below that of GradMA-S with $m>0$.
Meanwhile, on CIFAR-10, GradMA-S with $m>0$ consistently surpasses FedAvgM, even under the mild data heterogeneity setting~(i.e., $\omega=1.0$).
Besides, we can see that the performance of GradMA-S does not improve monotonically with increasing $m$.
This indicates that the quality of the memory reduction strategy is an essential ingredient in the performance of GradMA-S for a given $m$.
Tailoring a more effective memory reduction strategy is therefore left for future work.
The selected learning curves in Fig.~\ref{m_omega_comm_round_main:} also echo these statements~(see Appendix~\ref{appendix_B:} for more results).
\section{Conclusions}
In this paper, we propose a novel FL approach, GradMA, which corrects the update directions of the server and the workers simultaneously.
Specifically,
on the worker side, GradMA utilizes the gradients of the local model from the previous step and of the centralized model, together with the parameter difference between the current local model and the centralized model, as constraints of a QP to adaptively correct the update direction of the local model.
On the server side, GradMA takes the memorized accumulated gradients of all workers as constraints of a QP to augment the update direction of the centralized model.
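As a minimal illustration of such a QP-based correction (a GEM-style projection sketch of ours, under the assumption that the corrected direction should not conflict with the memorized directions; the exact quadratic program is specified in Alg.~\ref{alg:1}):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def qp_correct(g, mem):
    # Find the direction closest to g whose inner product with every
    # memorized direction (rows of mem) is non-negative. A sketch only.
    cons = [{"type": "ineq", "fun": lambda x, m=m: m @ x} for m in mem]
    res = minimize(lambda x: 0.5 * np.sum((x - g) ** 2), x0=g,
                   jac=lambda x: x - g, constraints=cons, method="SLSQP")
    return res.x

g = np.array([1.0, -1.0])
mem = np.array([[0.0, 1.0]])   # one memorized accumulated gradient
print(qp_correct(g, mem))      # approx [1., 0.]
\end{verbatim}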
Meanwhile, we provide a theoretical convergence analysis of GradMA in the smooth non-convex setting.
We also conduct extensive experiments that verify the superiority of GradMA.
{\small
\bibliographystyle{ieee_fullname}
\newcommand{\mysection}[1]{\section{#1}\setcounter{equation}{0}} % placeholder macro name: the original name was lost in extraction
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\newcommand{\begin{equation}}{\begin{equation}}
\newcommand{\end{equation}}{\end{equation}}
\newcommand{\begin{eqnarray}}{\begin{eqnarray}}
\newcommand{\end{eqnarray}}{\end{eqnarray}}
\newcommand{\nonumber}{\nonumber}
\newcommand{\begin{array}}{\begin{array}}
\newcommand{\end{array}}{\end{array}}
\newcommand{\begin{subequations}}{\begin{subequations}}
\newcommand{\end{subequations}}{\end{subequations}}
\newcommand{{\oplus}}{{\oplus}}
\newcommand{{\ominus}}{{\ominus}}
\newcommand{{\mathbf W}}{{\mathbf W}}
\newcommand{{\mathbf G}}{{\mathbf G}}
\newcommand{{\mathbf H}}{{\mathbf H}}
\newcommand{{\mathbf \Psi}}{{\mathbf \Psi}}
\newcommand{{\mathbf V}}{{\mathbf V}}
\newcommand{{\mathbf \Omega}}{{\mathbf \Omega}}
\newcommand{\nabla}{\nabla}
\newcommand{\bar \nabla}{\bar \nabla}
\newcommand{{\bm\omega}}{{\bm\omega}}
\newcommand{{\bar\cD}}{{\bar\cD}}
\newcommand{{\bar\phi}}{{\bar\phi}}
\newcommand{{\bar j}}{{\bar j}}
\newcommand{{\mathbb D}}{{\mathbb D}}
\newcommand{{\bar i}}{{\bar i}}
\newcommand{{\bar j}}{{\bar j}}
\newcommand{{\bar k}}{{\bar k}}
\newcommand{{\bar l}}{{\bar l}}
\newcommand{{\bar p}}{{\bar p}}
\newcommand{{\bar q}}{{\bar q}}
\newcommand{{\bar 1}}{{\bar 1}}
\newcommand{{\bar 2}}{{\bar 2}}
\newcommand{{\bar 0}}{{\bar 0}}
\newcommand{{\bar n}}{{\bar n}}
\newcommand{{\bar m}}{{\bar m}}
\newcommand{{\bar 4}}{{\bar 4}}
\def\dt#1{{\buildrel {\hbox{\LARGE .}} \over {#1}}}
\newcommand{\bm}[1]{\mbox{\boldmath$#1$}}
\def\double #1{#1{\hbox{\kern-2pt $#1$}}}
\newcommand{{\boldsymbol\alpha}}{{\boldsymbol\alpha}}
\newcommand{{\boldsymbol\beta}}{{\boldsymbol\beta}}
\newcommand{{\boldsymbol\gamma}}{{\boldsymbol\gamma}}
\newcommand{{\boldsymbol\delta}}{{\boldsymbol\delta}}
\newcommand{{\boldsymbol\dalpha}}{{\boldsymbol{\dot{\alpha}}}}
\newcommand{{\boldsymbol\dbeta}}{{\boldsymbol{\dot{\beta}}}}
\newcommand{{\boldsymbol\dgamma}}{{\boldsymbol{\dot{\gamma}}}}
\newcommand{{\boldsymbol\ddelta}}{{\boldsymbol{\dot{\delta}}}}
\newcommand{{\boldsymbol\mu}}{{\boldsymbol\mu}}
\newcommand{{\boldsymbol\dmu}}{{\boldsymbol{\dot{\mu}}}}
\newcommand{{\boldsymbol\nu}}{{\boldsymbol\nu}}
\newcommand{{\boldsymbol\dnu}}{{\boldsymbol{\dot{\nu}}}}
\newcommand{\varepsilon}{\varepsilon}
\newcommand{\abs}[1]{\left| #1 \right|}
\newcommand{\Exp}[1]{\langle #1 \rangle}
\newcommand{\floor}[1]{\left \lfloor #1 \right \rfloor}
\newcommand{\mathcal L}{\mathcal L}
\newcommand{\mathrm{Re\,}}{\mathrm{Re\,}}
\newcommand{\mathrm{Im\,}}{\mathrm{Im\,}}
\newcommand{\mathrm{Tr }}{\mathrm{Tr }}
\newcommand{\Lie}[1]{\mathcal L_{#1}}
\newcommand{\piecett}[4]
{\left\{\begin{array}{cc}
#1 & #2 \\
#3 & #4
\end{array} \right.}
\newcommand{\piecettt}[6]
{\left\{\begin{array}{cc}
#1 & #2 \\
#3 & #4 \\
#5 & #6
\end{array} \right.}
\newcommand{\tilde{\omega}}{\tilde{\omega}}
\newcommand{\bar{\sigma}}{\bar{\sigma}}
\newcommand{{\bar\theta}}{{\bar\theta}}
\newcommand{\tilde{\sigma}}{\tilde{\sigma}}
\newcommand{\mathcal E}{\mathcal E}
\newcommand{\notag \\}{\notag \\}
\newcommand{{\vert}}{{\vert}}
\newcommand{\sym}[1]{\stackrel{\scriptstyle#1}{\mbox{\tiny $\smile$}}}
\newcommand{\lsym}[1]{\stackrel{\scriptstyle#1}{\mbox{$\smile$}}}
\newcommand{{\textrm{sdet}}}{{\textrm{sdet}}}
\newcommand{\vert}{\vert}
\newcommand{{{\loco}\!{\loco}}}{{{\vert}\!{\vert}}}
\def {\rm const} {{\rm const}}
\def \underline{\underline}
\def{\rm tr}{{\rm tr}}
\def \bibitem{\bibitem}
\def \cite{\cite}
\newcommand{\poin}[1]{{#1}}
\expandafter\ifx\csname url\endcsname\relax
\def\url#1{\texttt{#1}}\fi
\expandafter\ifx\csname urlprefix\endcsname\relax\def\urlprefix{URL }\fi
\providecommand{\eprint}[2][]{\url{#2}}
\begin{document}
\begin{titlepage}
\begin{flushright}
March, 2023
\end{flushright}
\vspace{5mm}
\begin{center}
{\Large \bf
On curvature-squared invariants of minimal five-dimensional supergravity from superspace
}
\end{center}
\begin{center}
{\bf
Gregory Gold,
Jessica Hutomo,
Saurish Khandelwal,\\
and Gabriele Tartaglino-Mazzucchelli
} \\
\vspace{5mm}
\footnotesize{
{\it
School of Mathematics and Physics, University of Queensland,
\\
St Lucia, Brisbane, Queensland 4072, Australia}
}
\vspace{2mm}
~\\
\texttt{[email protected];
[email protected];
[email protected];
\\ [email protected]}\\
\vspace{2mm}
\end{center}
\begin{abstract}
\baselineskip=14pt
We elaborate on the off-shell superspace construction of curvature-squared invariants in minimal five-dimensional supergravity. This is described by the standard Weyl multiplet of conformal supergravity coupled to two compensators, a vector multiplet and a linear multiplet. In this set-up, we review the definition of the off-shell two-derivative gauged supergravity together with the three independent four-derivative superspace invariants defined in arXiv:1410.8682. We provide the explicit expression for the linear multiplet based on a prepotential given by the logarithm of the vector multiplet primary superfield. We then present for the first time the primary equations of motion for minimal gauged off-shell supergravity deformed by an arbitrary combination of these three four-derivative locally superconformal invariants. We also identify a four-derivative invariant based on the linear multiplet compensator and the kinetic superfield of a vector multiplet, which can be used to engineer an alternative supersymmetric completion of the scalar curvature squared.
\end{abstract}
\vspace{5mm}
\vfill
\end{titlepage}
\newpage
\renewcommand{\thefootnote}{\arabic{footnote}}
\setcounter{footnote}{0}
\tableofcontents{}
\vspace{1cm}
\bigskip\hrule
\allowdisplaybreaks
\section{Introduction}
Almost five decades after the first (two-derivative) supergravity was constructed (for $\cN=1$ supersymmetry in four dimensions), higher-order locally supersymmetric invariants are still largely unknown. Higher-order curvature terms play, however, a significant role in string theory, where quantum corrections take the form of an infinite series potentially constrained by supersymmetry order by order in the string tension $\alpha'$ and the string coupling $g_s$. Many open problems in string theory, for example its vacuum structure, remain unresolved due to the lack of information about the full quantum-corrected supergravity effective action. The complexity of such an effective theory is further compounded by the fact that the purely gravitational higher-curvature terms are related by supersymmetry to contributions depending on $p$-forms, which describe part of the string spectrum. These terms, which have not yet been systematically understood, play an important role in studying, for example, the moduli in compactified string theory and the low-energy description of string dualities; see, e.\,g., \cite{Antoniadis:1997eg,Antoniadis:2003sw,Liu:2013dna}. In the context of string-inspired holographic dualities, such as AdS/CFT, higher-order $1/N$ corrections in quantum field theories translate into higher curvature terms on the gravity side, making these contributions fundamental for precision tests of AdS/CFT. Interesting new analyses of this topic have been performed in the last few years -- see for example \cite{Baggio:2014hua,Bobev:2020zov,Bobev:2020egg,Bobev:2021oku,Bobev:2021qxx,Liu:2022sew,Hristov:2022lcw,Bobev:2022bjm,Cassani:2022lrk}
and references therein.
One obstacle to constructing locally supersymmetric higher-order invariants is that often supersymmetry is only realised {\emph{on-shell}}, meaning {\emph{the symmetry algebra closes by using equations of motion}}. In on-shell approaches -- which are, e.g., typically used in 10- and 11-dimensional theories -- one needs to intertwine the construction of higher-order invariant terms in the Lagrangian of interest with a systematic and consistent deformation of the supersymmetry transformations making the problem remarkably involved.
This obstacle is simplified by using {\emph{off-shell supersymmetry}} where one introduces extra (auxiliary) degrees of freedom to obtain supersymmetric multiplets possessing {\emph{model-independent transformation rules}}. In a low number of space-time dimensions (D), in particular ${\rm D}\leq 6$, off-shell techniques are by now well developed and understood for up to eight real supercharges -- see \cite{FVP,Lauria:2020rhc,SUPERSPACE,WB,BK, Kuzenko:2022skv,Kuzenko:2022ajd} for reviews on off-shell approaches to supersymmetry and supergravity. In these cases, the construction of supergravity higher-derivative invariants can in principle be systematically approached. A restricted list of references using off-shell approaches to construct locally supersymmetric higher-derivative invariants is:
\cite{HOT,BSS1,LopesCardoso:1998tkj,Mohaupt:2000mj,BRS,CVanP,Bergshoeff:2012ax,Butter:2013rba,Butter:2013lta,Kuzenko:2013vha,OP1,OP2,OzkanThesis,BKNT-M14,Kuzenko:2015jxa,BKNT16,Butter:2016mtk,BNT-M17,NOPT-M17,Butter:2018wss,Butter:2019edc,Hegde:2019ioy,Mishra:2020jlc,deWS,Ozkan:2016csy}.
The scope of our paper is to enhance the classification of off-shell curvature-squared invariants of minimal five-dimensional (5D) supergravity.
Minimal on-shell 5D supergravity was introduced four decades ago in \cite{Cremmer,CN}, and the first off-shell description was given in \cite{Howe5Dsugra} by the use of superspace techniques. Since then, 5D minimal supergravity and its matter couplings have been extensively studied at the component level, both in on-shell \cite{GST1,GST2,GZ,CD} and off-shell \cite{Zucker1,Zucker2,Zucker3,Ohashi1,Ohashi2,Ohashi3,Ohashi4,Bergshoeff1,Bergshoeff2,Bergshoeff3} settings.
The superspace approach to general off-shell 5D $\cN=1$ supergravity-matter systems has then been developed in \cite{KT-M_5D2,KT-M_5D3,KT-M08,BKNT-M14}; see also \cite{Howe:2020hxi} for a recent local supertwistor description of 5D conformal supergravity.
In our paper we will specifically use the 5D $\cN=1$ conformal superspace approach of \cite{BKNT-M14}.\footnote{Conformal superspace was originally introduced by D.\,Butter for 4D $\cN=1$ supergravity in \cite{Butter4DN=1} and then extended to other space-time dimensions $2\leq {\rm D} \leq 6$ for various amount of supersymmetry in \cite{Butter4DN=2,BKNT-M1,BKNT-M14,BKNT16,BNT-M17,Kuzenko:2022qnb} -- see also \cite{Kuzenko:2022skv,Kuzenko:2022ajd} for recent reviews.} This approach merges advantages of the 5D superconformal tensor calculus of \cite{Ohashi1,Ohashi2,Ohashi3,Ohashi4,Bergshoeff1,Bergshoeff2,Bergshoeff3} with the superspace approaches of \cite{Howe5Dsugra,KT-M_5D2,KT-M_5D3,KT-M08}.
In the superconformal setup (both in components and superspace), one enlarges the supergravity gauge group to be described by local superconformal transformations, plus potentially internal symmetries. Local Poincar\'e supersymmetry is then recovered by using an appropriate choice of compensating multiplets that are used to gauge fix extra non-physical symmetries within the conformal algebra. For instance, in this setup, the off-shell formulation of minimal 5D supergravity is achieved by coupling the standard Weyl multiplet of 5D conformal supergravity to two off-shell conformal compensators: a vector multiplet and a hypermultiplet, the latter conveniently described by a linear multiplet. These will be the off-shell multiplets used in our paper. Within this setup, locally supersymmetric completions of the Weyl tensor squared and the scalar curvature squared were constructed for the first time, respectively, in \cite{HOT} and \cite{OP2} by using component fields techniques.
Up to total derivatives, a generic combination of curvature-squared terms in 5D should include also a Ricci tensor squared invariant. A third independent locally superconformal invariant which includes Ricci squared was indeed constructed in superspace in \cite{BKNT-M14} by using a 5D analog of the ``log multiplet'' construction in 4D $\cN=2$ supergravity of \cite{Butter:2013lta}. However, due to the computational complexity of the log multiplet in 5D, the component analysis of this invariant has not appeared so far -- in a follow-up paper, we will report on the component structure of this invariant which has been computed by making use of the computer algebra program {\it Cadabra} \cite{Cadabra-1,Cadabra-2}.
Note that the conformal approach described above is not unique. In five dimensions it is known that an efficient setup to describe general supergravity-matter couplings makes use of a vector-dilaton Weyl multiplet as a multiplet of conformal supergravity in place of the standard Weyl one \cite{Ohashi3, Bergshoeff1}.\footnote{The vector-dilaton Weyl multiplet terminology is used here to stress that the variant multiplet of conformal supergravity in \cite{Ohashi3, Bergshoeff1} is defined as an on-shell vector multiplet coupled to the standard Weyl multiplet. It was recently shown in \cite{Gold:2022bdk,Hutomo:2022hdi} that an on-shell hypermultiplet in a standard Weyl multiplet background can be reinterpreted as yet another new variant Weyl multiplet of off-shell conformal supergravity which was referred to as hyper-dilaton Weyl.}
A remarkable property of systems based on the use of a 5D vector-dilaton Weyl multiplet, which are related to the Poincar\'e supergravity first introduced in \cite{NR}, is the simplicity of defining a third locally supersymmetric curvature-squared invariant. In fact, by employing a map between fields of the vector-dilaton Weyl multiplet and an off-shell vector multiplet, a locally supersymmetric extension of the Riemann tensor squared was constructed in \cite{BRS} (a construction that, however, is not applicable for a standard Weyl multiplet). This, together with the Weyl-squared invariant of \cite{HOT}, was sufficient for Ozkan \& Pang to construct in \cite{OP1,OP2} a locally supersymmetric extension of the Gauss-Bonnet combination, which is expected to play a key role in the description of the first $\alpha^\prime$ corrections to compactified string theory \cite{Zwiebach:1985uq,Deser:1986xr}.
Despite the remarkable features mentioned above, two important disadvantages of the use of a vector-dilaton Weyl multiplet are that:
(i) the spectrum of the on-shell theory does not precisely match that of minimal Poincar\'e supergravity, since it leads to an extra on-shell physical multiplet that includes a scalar (dilaton) field;
(ii) it is not possible to describe gauged supergravity, and hence AdS supergravity.\footnote{It has been proposed in \cite{CO}, and successively described also in superspace in \cite{BKNT-M14}, how to gauge a system based on the vector-dilaton Weyl multiplet by appropriately deforming the constraint of the on-shell vector multiplet. However, so far, this construction has not been studied as systematically as gauged supergravities based on the standard Weyl multiplet, including curvature-squared invariants. Interestingly, the hyper-dilaton Weyl multiplet of \cite{Gold:2022bdk,Hutomo:2022hdi} has no apparent issues concerning gauging, at least for matter systems not including extra physical hypermultiplets.} This second limitation has a clear impact if one is interested in using off-shell supergravity in the study of AdS/CFT. Indeed, the authors of \cite{Bobev:2021qxx,Bobev:2022bjm,Cassani:2022lrk} employed a formulation of minimal gauged supergravity in 5D based on the standard Weyl multiplet, for which, however, they could only use two of the three independent invariants, namely the ones of \cite{HOT,OP1,OP2} explicitly known in terms of the component fields.
In this regard, it is worth explaining that, as first discussed in \cite{Baggio:2014hua}, see also \cite{Bobev:2020zov,Bobev:2021qxx,Bobev:2022bjm,Cassani:2022lrk}, the use of two invariants might suffice in 5D, since a curvature-dependent redefinition of the metric can reabsorb one of the three curvature-squared terms. It remains, however, a nontrivial open problem to prove this statement for the complete locally supersymmetric invariants (e.g., including fermions) and to have clear control of the supersymmetry transformations under this redefinition. All three invariants might also play a role in constructing general higher-derivative invariants beyond four derivatives.
It is also worth mentioning that the related recent analysis of \cite{Liu:2022sew} was based on the three independent curvature-squared invariants of \cite{HOT,BRS,OP1,OP2} defined using a vector-dilaton Weyl multiplet. However, it remains unclear to us whether the analysis of \cite{Liu:2022sew} might have some issues with supersymmetry, due to the constraints in defining the gauging (or equivalently the cosmological constant term) in a vector-dilaton Weyl formulation.
Considering the potential subtleties in the recent studies in
\cite{Bobev:2020zov,Bobev:2021qxx,Liu:2022sew,Bobev:2022bjm,Cassani:2022lrk}, it is natural to look back at \cite{BKNT-M14} and elaborate on the properties of the three independent curvature-squared invariants for minimal supergravity constructed in superspace. A fundamental property of these locally superconformal invariants is that they can all be constructed by using a standard Weyl multiplet, making their addition to the 5D minimal off-shell two-derivative gauged supergravity theory straightforward. In this paper we then start to report on new results based on these invariants. More specifically, we present here detailed expressions for all the composite primary superfields associated with each invariant -- including a new expression for the $\log$ multiplet -- which can be readily used for component analyses. We then describe the primary equations of motion in superspace that govern minimal 5D gauged supergravity deformed by an arbitrary combination of the three curvature-squared invariants. Our results are defined in superspace, but they can be straightforwardly translated into components by using the analysis of \cite{BKNT-M14}. In particular, since all the expressions are fully covariant and described explicitly in terms of composites of descendants of the various multiplets, one could, for example, straightforwardly obtain the whole set of deformed supergravity equations of motion by the successive action of $Q$-supersymmetry. This will then be a new step towards several applications of the three locally superconformal invariants of \cite{BKNT-M14}. Moreover, we also introduce an alternative four-derivative invariant based on the linear multiplet compensator and the kinetic superfield of the vector multiplet compensator. This can be used to engineer a scalar curvature squared invariant also in alternative off-shell supergravities, such as the formulation based on the recently introduced 5D hyper-dilaton Weyl multiplet \cite{Hutomo:2022hdi}.
Our paper is organised as follows. In section \ref{Section--2} we describe the structure of the 5D $\cN=1$ superconformal multiplets that are used in this work. In section \ref{5Dactionsuperspace} we give the salient details of \cite{BKNT-M14} concerning the superspace construction of various locally superconformal invariants (including curvature-squared ones) which play the role of action principles. Among the new results of section \ref{5Dactionsuperspace} is the expression of a composite primary multiplet which defines the ``log multiplet'' curvature-squared invariant. Section \ref{5DEOMSuperspace} contains the main results of our paper: the superconformal primary equations of motion of all the curvature-squared terms for the minimal 5D gauged off-shell supergravity based on the standard Weyl multiplet. An alternative construction of a scalar curvature squared invariant is presented in section \ref{NewCurvature2}.
Our notation and conventions correspond to those of \cite{BKNT-M14} (see also \cite{Hutomo:2022hdi}, where a handful of typos from \cite{BKNT-M14} were fixed).
\section{Superconformal multiplets in 5D $\cN=1$ superspace}
\label{Section--2}
In this section we review several superconformal multiplets which will serve as building blocks for the various curvature-squared invariants presented in this work. After describing the standard Weyl multiplet of conformal supergravity in 5D $\cN=1$ conformal superspace, we move on to the discussion of the Abelian vector and off-shell linear multiplets. Here we make use of the approach and results given in \cite{BKNT-M14}.
We refer the reader to \cite{KL,KT-M5D1,Howe5Dsugra,HL,KT-M_5D2,KT-M_5D3,KT-M08,K06} for other works on flat and curved superspace and off-shell multiplets in five dimensions.
\subsection{The standard Weyl multiplet}
\label{SWM-superspace}
The standard Weyl multiplet of 5D $\cN=1$ conformal supergravity \cite{Bergshoeff1} is associated with the gauging of the superconformal algebra $\rm F^2(4)$. The multiplet contains $32+32$ physical components described by a set of independent gauge fields: the vielbein $e_{{m}}{}^{{a}}$,
the gravitino $\psi_{{m}}{}_{{\alpha}}^i$,
the ${\rm SU(2)}_R$ gauge field $\phi_{{m}}{}^{ij}$, and a dilatation gauge field $b_{{m}}$. The gauge fields associated with the remaining gauge symmetries -- the spin connection $\omega_{{m}}{}^{{{a}}{{b}}}$, the $S$-supersymmetry connection
$\phi_{{m}}{}_{{\alpha}}^i$, and the special conformal connection
$\mathfrak{f}_{{m}}{}^{{a}}$ -- are composite fields, i.e., they are algebraically determined in terms of the other fields by imposing constraints on some of the
curvature tensors. The standard Weyl multiplet also comprises a set of covariant auxiliary fields: a real antisymmetric
tensor $w_{{{a}}{{b}}}$, a fermion $\chi_{{\alpha}}^i$, and a real auxiliary scalar $D$.
The 5D $\cN=1$ conformal superspace is parametrised by
local bosonic $(x^{{{m}}})$ and fermionic $(\theta_i)$ coordinates
$z^{{{M}}} = (x^{{{m}}},\theta^{{\mu}}_i)$,
where ${{m}} = 0, 1, 2,3, 4$, ${\mu} = 1, \cdots, 4$, and $i = 1, 2$.
To perform the gauging of the
superconformal algebra, one introduces
covariant derivatives $ {\nabla}_{{{A}}} = (\nabla_{{{a}}} , \nabla_{{{\alpha}}}^i)$ which have the form
\begin{subequations}
\begin{eqnarray}\label{eq:covD}
\de_{{{A}}}
= E_{{{A}}} - \omega_{{{A}}}{}^{\underline{b}}X_{\underline{b}}
&=& E_{{{A}}} - \frac12 \Omega_{{{A}}}{}^{{{a}} {{b}}} M_{{{a}} {{b}}} - \Phi_{{{A}}}{}^{ij} J_{ij} - B_{{{A}}} \mathbb{D} - \mathfrak{F}_{{{A}}}{}^{{{B}}} K_{{{B}}}~,
\\ &=& E_{{{A}}} - \frac12 \Omega_{{{A}}}{}^{{{a}} {{b}}} M_{{{a}} {{b}}} - \Phi_{{{A}}}{}^{ij} J_{ij} - B_{{{A}}} \mathbb{D} - \mathfrak{F}_{{{A}}}{}^{\alpha i} S_{\alpha i} - \mathfrak{F}_{{{A}}}{}^{a} K_{a} ~.~~~~
\end{eqnarray}
\end{subequations}
Here $E_{{{A}}} = E_{{{A}}}{}^{{{M}}} \partial_{{{M}}}$ is the inverse super-vielbein,
$M_{{{a}} {{b}}}$ are the Lorentz generators, $J_{ij}$ are generators of the
${\rm SU(2)}_R$ $R$-symmetry group,
$\mathbb D$ is the dilatation generator, and $K_{{{A}}} = (K_{{{a}}}, S_{{{\alpha}} i})$ are the special superconformal
generators.
The super-vielbein one-form is $E^{{{A}}} =\mathrm d z^{{{M}}} E_{{{M}}}{}^{{{A}}}$ with $E_{{{M}}}{}^{{{A}}} E_{{{A}}}{}^{{{N}}} =\delta_{{{M}}}^{{{N}}}$ and
$E_{{{A}}}{}^{{{M}}} E_{{{M}}}{}^{{{B}}}=\delta_{{{A}}}^{{{B}}}$.
We associate with each generator $X_{\underline{a}} = (M_{{{a}}{{b}}},J_{ij},\mathbb D, S_{{{\alpha}} i}, K_{{a}})$ a connection super one-form
$\omega^{\underline{a}} = (\Omega^{{{a}}{{b}}},\Phi^{ij},B,\mathfrak{F}^{{{\alpha}} i},\mathfrak{F}^{{{a}}})= \mathrm d z^{{M}} \omega_{{M}}{}^{\underline{a}} = E^{{{A}}} \omega_{{{A}}}{}^{\underline{a}}$.
The algebra of covariant derivatives
\begin{align}
[ \nabla_{{A}} , \nabla_{{B}} \}
&= -\mathscr{T}_{{{A}}{{B}}}{}^{{C}} \nabla_{{C}}
- \frac{1}{2} {\mathscr{R}(M)}_{{{A}}{{B}}}{}^{{{c}}{{d}}} M_{{{c}}{{d}}}
- {\mathscr{R}(J)}_{{{A}}{{B}}}{}^{kl} J_{kl}
\nonumber \\ & \quad
- {\mathscr{R}(\mathbb{D})}_{{{A}}{{B}}} \mathbb D
- {\mathscr{R}(S)}_{{{A}}{{B}}}{}^{{{\gamma}} k} S_{{{\gamma}} k}
- {\mathscr{R}(K)}_{{{A}}{{B}}}{}^{{c}} K_{{c}}~,
\label{nablanabla}
\end{align}
is constrained to be expressed in terms of a single primary superfield, the super-Weyl tensor $W_{{{\alpha}} {{\beta}}}$.\footnote{Here and in what follows, an antisymmetric rank-2 tensor $T_{a b} = -T_{b a}$ can equivalently be written as: $T_{a b} = (\Sigma_{a b}){}^{\alpha \beta} T_{\alpha \beta}$ and $T_{{{\alpha}} {{\beta}}} = 1/2 \,({\Sigma}^{{{a}} {{b}}})_{{{\alpha}} {{\beta}}}T_{{{a}} {{b}}}$.} It has the following properties
%
\begin{equation}
W_{{{\alpha}} {{\beta}}} = W_{{{\beta}} {{\alpha}}} \ , \quad
K_{{{A}}} W_{{{\alpha}} {{\beta}}} = 0 \ ,
\quad \mathbb D W_{{{\alpha}} {{\beta}}} = W_{{{\alpha}} {{\beta}}} \ ,
\end{equation}
and satisfies the Bianchi identity
\begin{equation}
\nabla_{{\gamma}}^k W_{{{\alpha}} {{\beta}}} = \nabla_{({{\alpha}}}^k W_{{{\beta}} {{\gamma}} )} + \frac{2}{5} \varepsilon_{{{\gamma}} ({{\alpha}}} \nabla^{{{\delta}} k} W_{{{\beta}} ) {{\delta}}} \ . \label{WBI}
\end{equation}
In \eqref{nablanabla}
$ \mathscr{T}_{{{A}} {{B}}}{}^C$ is the torsion, and $ \mathscr{R}(M)_{{{A}} {{B}}}{}^{{{c}} {{d}}}$,
$ \mathscr{R}(J)_{{{A}} {{B}}}{}^{kl}$, $ \mathscr{R}(\mathbb D)_{{{A}} {{B}}}$, $ \mathscr{R}(S)_{{{A}} {{B}}}{}^{ {{\gamma}}k}$, and $ \mathscr{R}(K)_{{{A}} {{B}}}{}^{{{c}}}$
are the curvatures associated with Lorentz, ${\rm SU(2)}_R$,
dilatation, $S$-supersymmetry, and special conformal
boosts, respectively.
The full algebra of covariant derivatives \eqref{nablanabla}, including the explicit expressions for the torsion and curvature components in terms of the descendant superfields, is given in Refs.~\cite{BKNT-M14} and \cite{Hutomo:2022hdi}.
To make use of the results of \cite{BKNT-M14}, it is important to note that in this paper we employ the ``traceless'' frame conventional constraints for the conformal superspace algebra, as in appendix C of \cite{BKNT-M14} and in \cite{Hutomo:2022hdi}. We also refer the reader to these papers for the description of how to reduce superspace results to standard component fields.
It is useful to introduce the dimension-3/2 superfields
\begin{subequations} \label{descendantsW-5d}
\begin{gather}
W_{{{\alpha}} {{\beta}} {{\gamma}}}{}^k := \nabla_{({{\alpha}}}^k W_{{{\beta}} {{\gamma}} )} \ , \quad X_{{\alpha}}^i := \frac{2}{5} \nabla^{{{\beta}} i} W_{{{\beta}}{{\alpha}}} \ ,
\end{gather}
and the dimension-2 descendant superfields constructed from spinor covariant derivatives of $W_{\alpha \beta}$:
\begin{eqnarray}
W_{{{\alpha}} {{\beta}} {{\gamma}} {{\delta}}} := \nabla_{({{\alpha}}}^k W_{{{\beta}} {{\gamma}} {{\delta}}) k} \ , \quad
X_{{{\alpha}} {{\beta}}}{}^{ij} := \nabla_{({{\alpha}}}^{(i} X_{{{\beta}})}^{j)}
\ , \quad
Y := {\rm i} \nabla^{{{\gamma}} k} X_{{{\gamma}} k} \ .
\end{eqnarray}
\end{subequations}
It can be checked that only the superfields \eqref{descendantsW-5d} and their vector derivatives appear upon taking successive spinor derivatives of $W_{\alpha \beta}$. The following relations define the tower of covariant fields in the standard Weyl multiplet and are particularly useful for analysing the structure of curvature-squared invariants:
\begin{subequations} \label{eq:Wdervs}
\begin{eqnarray}
\nabla_{{{\gamma}}}^k W_{{{\alpha}}{{\beta}}}
&=& W_{{{\alpha}}{{\beta}}{{\gamma}}}{}^k + \varepsilon_{{{\gamma}} ({{\alpha}}} X_{{{\beta}})}^k \ , \label{Wdervs-a}
\\
\nabla^{i}_{\alpha}{X^{j}_{{\beta}}}
&=&
X_{{\alpha} {\beta}}\,^{i j}
+\frac{{\rm i}}{8} \varepsilon^{i j} \varepsilon_{{\alpha} {\beta}} Y
-\frac{3{\rm i}}{2} \varepsilon^{i j} (\Gamma^{{a}})_{{\alpha}}{}^{ \rho} {\nabla}_{{a}}{W_{{\beta} {\rho}}}
-2{\rm i} \varepsilon^{i j} W_{{\alpha}}{}^{{\rho}} W_{{\beta} {\rho}}
\nonumber\\
&&
+\frac{{\rm i}}{2} \varepsilon^{i j} \varepsilon_{{\alpha} {\beta}} W^{{\gamma} {\delta}} W_{{\gamma} {\delta}}
-\frac{{\rm i}}{2}\varepsilon^{i j} (\Gamma^{{a}})_{{\beta}}{}^{{\rho}} {\nabla}_{{a}}{W_{{\alpha} {\rho}}}
~,
\\
\nabla^{i}_{{\alpha}}{W_{{\beta} {\gamma} {\lambda}}{}^{j}}
&=& - \frac{1}{2} \varepsilon^{i j} \Big( W_{{\alpha} {\beta} {\gamma} {\lambda}} + 3 {\rm i} (\Gamma_{{a}})_{{\alpha} ({\beta}} {\nabla}^{{a}}{W_{{\gamma} {\lambda})}} + 3 {\rm i} \varepsilon_{{\alpha} ({\beta}} (\Gamma_{{a}})_{{\gamma}}{}^{{\tau}} {\nabla}^{{a}}{W_{{\lambda}) {\tau}}}\Big) \nonumber \\&&- \frac{3}{2}\varepsilon_{{\alpha} ({\beta}} X_{{\gamma} {\lambda})}\,^{i j} \ ,
\\
\nabla^{i}_{{\alpha}}{W_{{\beta} {\gamma} {\lambda} {\rho}}}
&=& - 4 {\rm i} (\Gamma_{{a}})_{{\alpha} ({\beta}} {\nabla}^{{a}}{W_{{\gamma} {\lambda} {\rho})}\,^{i}}- 6 {\rm i} W_{{\alpha} ({\beta} {\gamma}}\,^{i} W_{{\lambda} {\rho})}+6 {\rm i} W_{{\alpha} ({\beta}} W_{{\gamma} {\lambda} {\rho})}\,^{i}\nonumber \\&& +6 {\rm i} \varepsilon_{{\alpha} ({\beta}} \Big( W_{{\gamma} {\lambda}} X^{i}_{{\rho})}-2 (\Gamma_{{a}})_{{\gamma}}{}^{{\tau}} {\nabla}^{{a}}{W_{{\lambda} {\rho}) {\tau}}\,^{i}}- W_{{\gamma}}{}^{{\tau}} W_{{\lambda} {\rho} ){\tau}}\,^{i}\Big) \ , \\
\nabla^{i}_{{\alpha}}{X_{{\beta} {\gamma}}\,^{j k}}
&=& {\rm i} \varepsilon^{i (j} \Big( -3 W_{({\beta}}{}^ {\lambda} W_{ {\gamma}) {\alpha}\lambda}\,^{ k)} - \varepsilon_{{\alpha} ({\beta}} W^{{\rho} {\tau}} W_{{\gamma}) {\rho} {\tau}}\,^{k)} - W_{{\alpha} {\lambda}} W_{{\beta} {\gamma} }\,^{{\lambda}k)} - \frac{3}{2} W_{{\beta} {\gamma}} X^{k)}_{{\alpha}}
\nonumber\\&& \qquad ~
+\frac{1}{2} W_{{\alpha} ({\beta}} X^{k)}_{{\gamma})} +\frac{3}{2} \varepsilon_{{\alpha} ({\beta}} W_{{\gamma}) {\lambda}} X^{k) {\lambda}} + 2 (\Gamma^{{a}})_{{\alpha}}{}^{{\rho}} {\nabla}_{{a}}{W_{{\beta} {\gamma} {\rho}}\,^{k)}}
\nonumber\\&& \qquad ~
+ 2 (\Gamma^{{a}})_{({\beta} }{}^{{\rho}} {\nabla}_{{a}}{W_{ {\gamma}) {\alpha} {\rho}}\,^{k)}} - (\Gamma^{{a}})_{{\alpha} ({\beta}} {\nabla}_{{a}}{X^{k)}_{{\gamma})}}
+ \varepsilon_{{\alpha} ({\beta}} (\Gamma^{{a}})_{{\gamma}) {\lambda}} {\nabla}_{{a}}{X^{k) {\lambda}}} \Big)
~,~~~~~~ \\
\nabla^{i}_{{\alpha}} {Y}
&=& 8(\Gamma^{{a}})_{{\alpha}}{}^{{\beta}} {\nabla}_{{a}}{X^{i}_{ {\beta}}} + 8 W_{{\alpha}}{}^{{\beta}} X^{i}_{{\beta}}
~.
\end{eqnarray}
\end{subequations}
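As a quick consistency check of \eqref{Wdervs-a}, contracting with $\varepsilon^{\gamma\alpha}$ annihilates the totally symmetric $W_{\alpha\beta\gamma}{}^{k}$, while the trace term yields
\begin{eqnarray}
\nabla^{\gamma k} W_{\gamma\beta} = \frac{5}{2}\, X_{\beta}^{k}~,
\end{eqnarray}
in agreement with the definition $X_{\alpha}^i = \frac{2}{5}\, \nabla^{\beta i} W_{\beta\alpha}$ in \eqref{descendantsW-5d}; schematically, the coefficient arises as $\frac12(4+1)=\frac52$ from the two $\varepsilon$-contractions of four-component spinor indices.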
Due to \eqref{WBI}, the $X_{\alpha\beta}{}^{ij}$ and $W_{\alpha\beta\gamma\delta}$ dimension-2 superfields of the standard Weyl multiplet obey the following Bianchi identities:
\begin{subequations}
\begin{eqnarray}
\nabla_{(\alpha}{}^{\gamma} X_{\beta)\gamma}{}^{ij} &=& -\frac{1}{2} X^{\gamma (i} W_{\alpha \beta \gamma}{}^{j)}
~,\\
\nabla_{(\alpha}{}^{\lambda} W_{\beta \gamma \tau)\lambda} &=& 3 {\rm i} \nabla_{(\alpha}{}^{\lambda} \Big(W_{\beta \gamma} W_{\tau)\lambda}\Big)
~.
\end{eqnarray}
\end{subequations}
The independent descendant superfields of $W_{\alpha\beta}$ are all annihilated by $K_{{{a}}}$. However, under $S$-supersymmetry, they transform as follows:
\begin{subequations}
\begin{eqnarray}
S_{{{\alpha}} i} W_{{{\beta}}{{\gamma}}{{\delta}}}{}^j
&=&
6 \delta^j_i \varepsilon_{{{\alpha}} ({{\beta}}} W_{{{\gamma}} {{\delta}})} \ , \qquad
S_{{{\alpha}} i} X_{{\beta}}^j = 4 \delta_i^j W_{{{\alpha}}{{\beta}}}~,
\\
S_{{{\alpha}} i} W_{{{\beta}}{{\gamma}}{{\delta}}{{\rho}}} &=& 24 \varepsilon_{{{\alpha}} ({{\beta}}} W_{{{\gamma}}{{\delta}} {{\rho}})}{}_i \ , \qquad
S_{{{\alpha}} i} Y = 8 {\rm i} X_{{{\alpha}} i}
~,
\\
S_{{{\alpha}} i} X_{{{\beta}}{{\gamma}}}{}^{jk} &=& - 4 \delta_i^{(j} W_{{{\alpha}}{{\beta}}{{\gamma}}}{}^{k)} + 4 \delta_i^{(j} \varepsilon_{{{\alpha}} ({{\beta}}} X_{{{\gamma}})}^{k)} \ .\label{S-on-X_Y-a}
\end{eqnarray}
\end{subequations}
The conformal supergravity gauge group $\cG$ is generated by
{\it covariant general coordinate transformations},
$\delta_{\rm cgct}$, associated with a local superdiffeomorphism parameter $\xi^{{{A}}}$ and
{\it standard superconformal transformations},
$\delta_{\cH}$, associated with the local superfield parameters:
the dilatation $\sigma$, Lorentz $\Lambda^{{{a}} {{b}}}=-\Lambda^{{{b}} {{a}}}$, ${\rm SU(2)}_R$ $\Lambda^{ij}=\Lambda^{ji}$,
and special conformal transformations $\Lambda^{{{A}}}=(\eta^{{{\alpha}} i},\Lambda^{{{a}}}_{K})$.
The covariant derivatives transform as
\begin{subequations}
\label{sugra-group}
\begin{eqnarray}
\delta_\cG \nabla_{{{A}}} &=& [\cK , \nabla_{{{A}}}] \ ,
\label{TransCD}
\end{eqnarray}
where
\begin{eqnarray}
\cK = \xi^{{{C}}} {\nabla}_{{{C}}} + \frac12 {\Lambda}^{{{a}} {{b}}} M_{{{a}} {{b}}} + {\Lambda}^{ij} J_{ij} + \sigma \mathbb D + {\Lambda}^{{{A}}} K_{{{A}}} ~.
\end{eqnarray}
\end{subequations}
A covariant (or tensor) superfield $U$ transforms as
\begin{equation}
\delta_{\cG} U =
(\delta_{\rm cgct}
+\delta_{\cH}) U =
\cK U
~. \label{trans-primary}
\end{equation}
The superfield $U$ is a \emph{superconformal primary} of dimension $\Delta$ if $K_{{{A}}} U = 0$ (it suffices to require that $S_{\alpha i} U = 0$) and $\mathbb D U = \Delta U$.
\subsection{The Abelian vector multiplet}
\label{vector-superspace}
In conformal superspace \cite{BKNT-M14}, a 5D $\cN=1$ Abelian vector multiplet \cite{HL, Zupnik:1999iy} is described by a real primary superfield $W$ of dimension 1,
\begin{subequations}\label{vector-defs}
\begin{eqnarray}
(W)^* = W~, \qquad K_A W=0~, \qquad \mathbb D W= W~.
\end{eqnarray}
The superfield $W$ obeys the Bianchi identity
\begin{eqnarray}
\nabla_{{\alpha}}^{(i} \nabla_{{\beta}}^{j)} W
= \frac{1}{4} \varepsilon_{{\alpha} {\beta}} \nabla^{{\gamma} (i} \nabla_{{\gamma}}^{j)}
W
~.
\label{vector-Bianchi}
\end{eqnarray}
\end{subequations}
Let us introduce the following descendants constructed from spinor derivatives of $W$:
\begin{subequations} \label{components-vect}
\begin{align}
\lambda_{{\alpha}}^i := - {\rm i}\nabla_{{\alpha}}^i W \ , \qquad
X^{ij} := \frac{{\rm i}}{4} \nabla^{{{\alpha}} (i} \nabla_{{\alpha}}^{j)} W
= - \frac{1}{4} \nabla^{{{\alpha}} (i} \lambda_{{\alpha}}^{j)}~.
\end{align}
These superfields, along with
\begin{equation}
F_{{{\alpha}}{{\beta}}} := - \frac{{\rm i}}{4} \nabla^k_{({{\alpha}}} \nabla_{{{\beta}}) k} W - W_{\alpha \beta} W
= \frac{1}{4} \nabla_{({{\alpha}}}^k \lambda_{{{\beta}}) k}
- W_{\alpha \beta} W
~,
\end{equation}
\end{subequations}
satisfy the following identities:
\begin{subequations} \label{VMIdentities}
\begin{align}
\nabla_{{\alpha}}^i \lambda_{{\beta}}^j
&= - 2 \varepsilon^{ij} \big(F_{{{\alpha}} {{\beta}}} + W_{{{\alpha}}{{\beta}}} W\big)
- \varepsilon_{{{\alpha}}{{\beta}}} X^{ij} - \varepsilon^{ij} \nabla_{{{\alpha}}{{\beta}}} W \ , \\
\nabla_{{\alpha}}^i F_{{{\beta}}{{\gamma}}}
&=
- {\rm i} \nabla_{{{\alpha}} ({{\beta}}} \lambda_{{{\gamma}})}^i
- {\rm i} \varepsilon_{{{\alpha}} ({{\beta}}} \nabla_{{{\gamma}} )}{}^{{\delta}} \lambda_{{\delta}}^i
- \frac{3{\rm i}}{2} W_{{{\beta}}{{\gamma}}} \lambda_{{\alpha}}^i - W_{{{\alpha}}{{\beta}} {{\gamma}}}{}^i W \nonumber\\
&~~~+ \frac{{\rm i}}{2} W_{\alpha (\beta} \lambda_{\gamma)}^i
-\frac{3 {\rm i}}{2} \varepsilon_{\alpha (\beta} W_{\gamma)}{}^{\delta} \lambda_{\delta}^i\ , \\
\nabla_{{\alpha}}^i X^{jk} &= 2 {\rm i} \varepsilon^{i (j}
\Big(
\nabla_{{\alpha}}{}^{{\beta}} \lambda_{{\beta}}^{k)}
- \frac12 W_{{{\alpha}}{{\beta}}} \lambda^{{{\beta}} k)}
+ \frac{3{\rm i}}{4} X_{{\alpha}}^{k)} W
\Big)
\ .
\end{align}
\end{subequations}
We also note that $F_{\alpha \beta} = \frac12 (\Sigma^{ab})_{\alpha \beta} F_{ab}$. Due to \eqref{vector-Bianchi}, the dimension-2 superfield $F_{\alpha\beta}$ of the vector multiplet obeys, in the traceless frame, the following Bianchi identity:
\begin{align}
\nabla_{(\alpha}{}^{\gamma} F_{\beta)\gamma} = \frac{1}{2} \lambda^{\gamma k} W_{\alpha \beta \gamma k}
~.
\end{align}
The actions of the $S$-supersymmetry generator on the descendants are given by
\begin{align}
S_{{\alpha}}^i \lambda_{{\beta}}^j &= - 2 {\rm i} \varepsilon_{{{\alpha}}{{\beta}}} \varepsilon^{ij} W \ , \qquad
S_{{\alpha}}^i F_{{{\beta}}{{\gamma}}} = 4 \varepsilon_{{{\alpha}} ({{\beta}}} \lambda_{{{\gamma}})}^i \ , \qquad
S_{{\alpha}}^i X^{jk} = - 2 \varepsilon^{i (j} \lambda_{{\alpha}}^{k)} \ ,
\label{S-on-W}
\end{align}
while all the fields are annihilated by the $K_a$ generators.
In 5D $\cN=1$ conformal superspace, there exists a prepotential formulation for the Abelian vector multiplet, which was developed in \cite{BKNT-M14}; see also \cite{K06,KL,KT-M08,KT-M_5D3} for earlier related analyses in other superspaces. The authors of \cite{BKNT-M14} introduced a real primary superfield $V_{ij}$ of dimension $-2$, $\mathbb{D} V_{ij} = -2 V_{ij}$. It was also shown that $V_{ij}$ transforms as an isovector under ${\rm SU}(2)_R$ transformations and is the 5D analogue of Mezincescu's prepotential \cite{Mezincescu, HST, BK11} for the 4D $\cN=2$ Abelian vector multiplet. This then allows us to represent the field strength $W$ as
\begin{eqnarray}
W= -\frac{3{\rm i}}{40} \de_{ij} \Delta^{ijkl} V_{kl}~,
\label{mezincescu}
\end{eqnarray}
where we have defined the operators
\begin{subequations}
\begin{eqnarray}
\Delta^{ijkl} &:=&
-\frac{1}{96} \varepsilon^{\alpha \beta \gamma \delta} \de_{\alpha}^{(i} \de_{\beta}^j \de_{\gamma}^k \de_{\delta}^{l)} = -\frac{1}{32} \de^{(ij} \de^{kl)} = \Delta^{(ijkl)}~, \\
\de^{ij} &:=& \de^{\alpha (i} \de_{\alpha}^{j)}~.
\end{eqnarray}
\end{subequations}
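A simple consistency check of \eqref{mezincescu} is dimension counting: each spinor covariant derivative $\de_{\alpha}^i$ carries dilatation weight $\frac12$, so that $\Delta^{ijkl}$ raises the weight by $2$ and $\de_{ij}$ by $1$. Acting on $V_{kl}$ of weight $-2$ then gives
\begin{eqnarray}
\mathbb{D}\, W = \big({-2}+1+2\big)\, W = W~,
\end{eqnarray}
in agreement with \eqref{vector-defs}.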
It should be noted that $V_{ij}$ in \eqref{mezincescu} is defined modulo gauge transformations of the form
\begin{eqnarray}
\delta V_{kl} = \de_{\alpha}^p \Lambda^{\alpha}{}_{klp}~, \qquad \Lambda^{\alpha}{}_{klp} = \Lambda^{\alpha}{}_{(klp)}~,
\label{gauge-vector}
\end{eqnarray}
with the gauge parameter $\Lambda^{\alpha}{}_{klp}$ being a primary superfield,
\begin{eqnarray}
S_{\alpha}^i \Lambda^{\beta}{}_{jkl}=0~, \qquad \mathbb{D} \Lambda^{\beta}{}_{jkl} = -\frac{5}{2} \Lambda^{\beta}{}_{jkl}~.
\end{eqnarray}
\subsection{The linear multiplet}
The linear multiplet \cite{FS2,deWit:1980gt,deWit:1980lyi,deWit:1983xhu,N=2tensor,Siegel:1978yi,Siegel80,SSW,deWit:1982na,KLR,LR3},
or $\cO(2)$ multiplet, can be described in terms of the primary superfield $G^{ij}= G^{ji}$, which is characterised by the properties
\begin{subequations}\label{linear-constr}
\begin{eqnarray}
\de_{\alpha}^{(i} G^{jk)} &=& 0~,
\label{O(2)constraints}\\
K_A G^{ij} &=& 0~, \qquad \mathbb{D} G^{ij} = 3 G^{ij}~.
\end{eqnarray}
\end{subequations}
We assume $G^{ij}$ to be real, $(G^{ij})^{*}= \varepsilon_{ik} \varepsilon_{jl} G^{kl}$.
The component structure of $G^{ij}$ is characterised by the following tower of identities:
\begin{subequations} \label{O2spinorderivs}
\begin{align}
\nabla_{{\alpha}}^i G^{jk} &= 2 \varepsilon^{i(j} \varphi_{{\alpha}}^{k)} \ , \\
\nabla_{{\alpha}}^i \varphi_{{\beta}}^j &= - \frac{{\rm i}}{2} \varepsilon^{ij} \varepsilon_{{{\alpha}}{{\beta}}} F + \frac{{\rm i}}{2} \varepsilon^{ij} \cH_{{{\alpha}}{{\beta}}} + {\rm i} \nabla_{{{\alpha}}{{\beta}}} G^{ij} \ , \\
\nabla_{{\alpha}}^i F &= - 2 \nabla_{{\alpha}}{}^{{\beta}} \varphi_{{\beta}}^i - 3 W_{{{\alpha}}{{\beta}}} \varphi^{{{\beta}} i} - \frac{3}{2} X_{{{\alpha}} j} G^{ij} \ , \\
\nabla_{{\alpha}}^i \cH_{{{a}}} &= 4 (\Sigma_{{{a}}{{b}}})_{{\alpha}}{}^{{\beta}} \nabla^{{b}} \varphi_{{\beta}}^i - \frac{3}{2} (\Gamma_{{a}})_{{\alpha}}{}^{{\beta}} W_{{{\beta}} {{\gamma}}} \varphi^{{{\gamma}} i}
-\frac{1}{2} (\Gamma_{{a}})_{{\gamma}}{}^{{\beta}} W_{{{\beta}} {{\alpha}}} \varphi^{{{\gamma}} i} \ ,
\end{align}
\end{subequations}
where we have defined the independent descendants superfields
\begin{subequations} \label{O2superfieldComps}
\begin{align}
\varphi_{{\alpha}}^i &:= \frac{1}{3} \nabla_{{{\alpha}} j} G^{ij} \ , \\
F &:= \frac{{\rm i}}{12} \nabla^{{{\gamma}} i} \nabla_{{\gamma}}^j G_{ij} = - \frac{{\rm i}}{4} \nabla^{{{\gamma}} k} \varphi_{{{\gamma}} k} \ ,\\
\cH_{abcd} &:= \frac{{\rm i}}{12} \varepsilon_{abcde} (\Gamma^{e})^{\alpha \beta} \nabla_{\alpha}^i \nabla_{\beta}^j G_{ij}
\equiv \varepsilon_{{{a}}{{b}}{{c}}{{d}}{{e}}} \cH^{{e}} \ .
\end{align}
\end{subequations}
Here $\cH^{a}$ obeys the differential condition
\begin{equation}
\nabla_{{a}} \cH^{{a}} = 0~, \qquad {\cH}^{{{a}}}:= -\frac{1}{4!}\varepsilon^{{{a}} {{b}} {{c}} {{d}} {{e}}} \cH_{{{b}} {{c}} {{d}} {{e}}}~.
\end{equation}
The descendants \eqref{O2superfieldComps} are all annihilated by $K_a$. Under the action of $S$-supersymmetry, they transform as follows:
\begin{align} S_{{\alpha}}^i \varphi_{{\beta}}^j &= - 6 \varepsilon_{{{\alpha}} {{\beta}}} G^{ij} \ , ~~~~~~
S_{{\alpha}}^i F = 6 {\rm i} \varphi_{{\alpha}}^i \ , ~~~~~~
S_{{\alpha}}^i \cH_{{b}} = - 8 {\rm i} (\Gamma_{{b}})_{{\alpha}}{}^{{\beta}} \varphi_{{\beta}}^i \ .
\label{O2-S-actions}
\end{align}
We refer the reader to \cite{BKNT-M14} for a superform description of the linear multiplet.
As described in \cite{BKNT-M14}, the linear multiplet constraints \eqref{linear-constr} may be solved in terms of
an arbitrary primary real dimensionless scalar prepotential $\Omega$,
\begin{eqnarray}
S_{{\alpha}}^i\Omega=0~,~~~~~~
\mathbb D\Omega=0
~,
\end{eqnarray}
and the solution is
\begin{eqnarray}
G^{ij}
&=&
-\frac{3{\rm i}}{40}\Delta^{ijkl}\de_{kl}\Omega ~.
\label{def-G-0-b}
\end{eqnarray}
A crucial property of $G^{ij}$ defined by
\eqref{def-G-0-b} is that it is invariant under gauge transformations
of $\Omega$
of the form
\begin{eqnarray}
\delta\Omega=-\frac{{\rm i}}{2}(\Gamma^{{a}})^{{{\alpha}}{{\beta}}}\de_{{\alpha}}^i \de_{{\beta}}^jB_{{a}}{}_{ij}
~,
\label{gauge-O2-0}
\end{eqnarray}
where the gauge parameter is assumed to have the properties
\begin{eqnarray}
B_{{{a}}}{}^{ij}=B_{{{a}}}{}^{ji}
~,~~~~~~
S_{{{\alpha}}}^{i} B_{{a}}{}^{jk}=0
~,~~~~~~
\mathbb D B_{{a}}{}^{ij}=-B_{{a}}{}^{ij}
~,
\label{gauge-inv-O2-1}
\end{eqnarray}
and is otherwise arbitrary.
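The same bookkeeping confirms that \eqref{gauge-O2-0} is consistent with $\mathbb D \Omega = 0$: using the weight of the gauge parameter stated in \eqref{gauge-inv-O2-1},
\begin{eqnarray*}
\Big[(\Gamma^{{a}})^{{{\alpha}}{{\beta}}}\de_{{\alpha}}^i \de_{{\beta}}^j B_{{a}}{}_{ij}\Big] = \frac12 + \frac12 + \big[B_{{a}}{}^{ij}\big] = 1 - 1 = 0 = \big[\Omega\big]~.
\end{eqnarray*}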
To conclude this section we introduce another result that will be used in the rest of the paper. Given a system of $n$ Abelian vector multiplets $W^I$, with $I=1,2, \dots n$, all satisfying \eqref{vector-defs}, we can construct the following composite linear multiplet and its descendants \cite{BKNT-M14}:
\begin{subequations} \label{O2composite-N}
\begin{eqnarray}
H^{ij}
&=& C_{JK} \Big\{ \,2 W^J X^{ij\, K}
- {\rm i} \lambda^{\alpha J\,(i } \lambda_{\alpha}^{j) K}\Big\}
~,\\
\varphi_{\alpha\, }^i &=& C_{JK} \bigg\{\,
{\rm i} X^{ij\, J} \lambda_{\alpha j}^K
-2 {\rm i} F_{\alpha \beta}^{J} \lambda^{\beta i K}
- \frac{3}{2} X_{\alpha}^i W^J W^K
-2 {\rm i} W^J \de_{\alpha \beta} \lambda^{\beta i K}\nonumber\\
&&~~~~~~~~
- {\rm i} (\de_{\alpha \beta} W^J) \lambda^{\beta i K}
- 3 {\rm i} W_{\alpha \beta} W^J \lambda^{\beta i K}
\bigg\}~, \\
F &=& C_{JK} \bigg\{\,
X^{ij J}X_{ij}^{K}
- F^{ab J} F_{ab}^{K}
+ 4 W^J \Box W^K
+ 2 (\de^a W^J) \de_{a} W^K \nonumber\\
&&~~~~~~~~
+ 2 {\rm i} (\de_{\alpha}{}^{\beta} \lambda_{\beta}^{iJ}) \lambda^{\alpha K}_{i}{} -6 W^{ab} F_{ab}^J W^K
-\frac{39}{8} W^{ab} W_{ab} W^J W^K
\nonumber\\
&&~~~~~~~~
+ \frac{3}{8} Y W^J W^K
+ 6 X^{\alpha i}\lambda_{\alpha i}^J W^K
-3 {\rm i} W_{\alpha \beta} \lambda^{\alpha i J} \lambda^{\beta K}_{i}
\bigg\}
~, \\
\cH_{a} &=& C_{JK} \bigg\{
-\frac12 \varepsilon_{abcde} F^{bc\, J} F^{de \,K}
+ 4 \de^{b} \Big( W^J F_{ba}^K + \frac{3}{2} W_{ba} W^J W^K\Big)\nonumber\\
&&~~~~~~~~~
+ 2 {\rm i} (\Sigma_{ba})^{\alpha \beta} \de^{b} (\lambda_{\alpha}^{iJ} \lambda_{\beta i}^K)
\bigg\}
~,
\end{eqnarray}
\end{subequations}
where $\Box := \de^{a} \de_{a}$ and $C_{JK} = C_{(JK)}$ is a constant symmetric in $J$ and $K$.
Equation \eqref{O2composite-N} is the superspace analogue of the composite linear multiplet constructed in \cite{Bergshoeff1}.
\section{Superconformal actions}
\label{5Dactionsuperspace}
In this section, we review the main action principle that was used in \cite{BKNT-M14} to construct various locally superconformal invariants (including curvature-squared ones) in superspace. A simple way to define it is based on a full superspace integral
\begin{align}
S[\cL] = \int\mathrm d^{5|8}z\, E\, \cL~,\qquad\mathrm d^{5|8}z:=\mathrm d^5x\,\mathrm d^8\theta~, \qquad
E := {\rm Ber}(E_{{M}}{}^{{A}})~,
\end{align}
where the Lagrangian $\cL$ is a conformal primary superfield of dimension $+1$, $\mathbb{D} \cL= \cL$. This functional can be proven to be locally superconformally invariant, that is, invariant under the supergravity gauge transformations \eqref{sugra-group}.
\subsection{BF action}
The action involving the product of a linear multiplet with an Abelian vector multiplet is referred to as the BF action. Analogous to the component superconformal tensor calculus, this plays a fundamental role in the construction of general supergravity-matter couplings, see \cite{Ohashi1,Ohashi2,Ohashi3,Ohashi4,Bergshoeff1,Bergshoeff2,Bergshoeff3} for the 5D case, and it was a main building block for the invariants introduced in \cite{BKNT-M14} that we focus on.
In superspace the BF action may be described by
\begin{subequations}\label{BF-action-0}
\begin{eqnarray}
S_{\rm{BF}}
=
\int\mathrm d^{5|8}z\, E\,
\Omega W
=
\int \mathrm d^{5|8}z\, E\,
G^{ij} V_{ij}
~.
\label{SGV}
\end{eqnarray}
As implied by the equation above, the BF action can be written in different ways; see \cite{BKNT-M14} for further variants. In the first form in \eqref{SGV}, it involves the field strength of the vector multiplet,
$W$, and the prepotential of the linear multiplet, $\Omega$.
By using \eqref{def-G-0-b}, \eqref{mezincescu}, and then integrating by parts, one may obtain the equivalent form of the BF action involving Mezincescu's prepotential $V_{ij}$ and the field strength $G^{ij}$ described by the right-hand side of \eqref{SGV}.
One may also prove that the functionals $\int \mathrm d^{5|8}z\, E\,
\Omega W$ and
$\int \mathrm d^{5|8}z\, E\,
G^{ij} V_{ij}$
are, respectively, invariant under the gauge transformations \eqref{gauge-O2-0} and \eqref{gauge-vector}, thanks to the defining differential constraints satisfied by $W$ and $G^{ij}$, eqs.~\eqref{vector-Bianchi} and \eqref{O(2)constraints}.
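Note also that both integrands carry the dimension $+1$ required of a full superspace Lagrangian:
\begin{eqnarray*}
\big[\Omega\, W\big] = 0 + 1 = 1~, \qquad \big[G^{ij} V_{ij}\big] = 3 + (-2) = 1~.
\end{eqnarray*}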
In components, and in our notation, the BF action takes the form \cite{BKNT-M14}
\begin{equation}
\begin{aligned}
S_{\rm BF} = - \int \mathrm d^5 x & \, e \Big( \,v_{{a}} \cH^{{a}} + W F + X_{i j} G^{i j} + 2 \lambda^{\alpha k} \varphi_{\alpha k} \\
&- \psi_{{a }}{}^{\alpha}_i(\Gamma^{{a}})_{\alpha}{}^{\beta} \varphi_{\beta}^i W - {\rm i} \psi_{{a}}{}^{\alpha}_{i} (\Gamma^{{a}})_{\alpha}{}^{\beta} \lambda_{\beta j} G^{i j} + {\rm i} \psi_{{a}}{}^{\alpha}_{i} (\Sigma^{{a} {b}}){}_{\alpha}{}^{\beta} \psi_{{b} \beta j} W G^{i j} \Big)~. \label{BF-Scomp}
\end{aligned}
\end{equation}
\end{subequations}
In \eqref{BF-Scomp}, we have used the usual component projection to $\theta = 0$, i.e., $U(z) | := U(z) |_{\theta=0}$. We use the same symbol for a covariant component field and the corresponding superfield when the interpretation is clear from the context.
Here $v_{{{m}}} := V_m|$ denotes a real Abelian gauge connection. Its real field strength is $f_{mn} := F_{mn}| = 2 \partial_{[m} v_{n]}$. Note that the field strength $f_{mn}$ may be expressed in terms of the bar-projected, covariant field strength $F_{ab}:= F_{ab} \vert$ via the relation
\begin{eqnarray}
F_{ab} = f_{ab} + {\rm i} (\Gamma_{[a})_{\alpha}{}^{\beta} \psi_{b]}{}^{\alpha}_{k} \lambda_{\beta}^k
+ \frac{{\rm i}}{2} W\, \psi_{[a}{}^{\gamma}_{k} \psi_{b]}{}^{k}_{\gamma}~, \qquad f_{ab}:= e_{a}{}^{m} e_{b}{}^{n} f_{mn}~.
\end{eqnarray}
When projected to components, the lowest component of the covariant superfield $\cH_a$ satisfies the constraint $\de^{{{a}}} {\cH}_{{{a}}} = 0$~, where $\cH_a := \cH_a \vert$.
It holds that
\begin{eqnarray}
\cH^a = h^a
+ 2 (\Sigma^{ab})_\alpha{}^\beta\psi_{b}{}^\alpha_i\varphi_\beta^i
- \frac{{\rm i}}{2} \varepsilon^{abcde} (\Sigma_{bc})_{\alpha\beta}\psi_{d}{}^\alpha_{i}\psi_{e}{}^\beta_{j} G^{ij} ~.
\end{eqnarray}
The constraint $\de^{a} \cH_a=0$ implies the existence of a gauge three-form potential, $b_{{{m}} {{n}} {{p}}}$, and its exterior derivative $h_{{{m}} {{n}} {{p}} {{q}}}:= 4 \partial_{[{{m}}}b_{{{n}} {{p}} {{q}}]}$.
See \cite{BKNT-M14} and \cite{Hutomo:2022hdi} for more details.
\subsection{Vector multiplet compensator}
\label{vector_compensator}
The two-derivative invariant for the vector multiplet compensator can be constructed using the above BF action principle \eqref{SGV} but with the linear multiplet being a composite superfield.
We denote by $H_{\rm{VM}}^{ij}$ the composite linear multiplet \eqref{O2composite-N}, which is built out of a single Abelian vector multiplet:
\begin{eqnarray}
H^{ij}_{\rm{VM}} &=& {\rm i} (\de^{\alpha(i} W) \de_{\alpha}^{j)} W + \frac{{\rm i}}{2} W \de^{\alpha(i} \de_{\alpha}^{j)} W \nonumber\\
&=& -{\rm i} {\lambda}^{\alpha i} {\lambda}_{\alpha}^j +2 {W} {X}^{ij}~.\label{HijVM}
\end{eqnarray}
One can check that $H_{\rm{VM}}^{ij}$ is a dimension-3 primary superfield, $S_{\alpha}^k H^{ij}_{\rm{VM}}= 0$. Thanks to the Bianchi identity \eqref{vector-Bianchi} obeyed by the field strength $W$, the composite superfield $H_{\rm{VM}}^{ij}$ satisfies the analyticity constraint
\begin{eqnarray}
\de^{(i}_{\alpha} H^{jk)}_{\rm{VM}}=0~.
\end{eqnarray}
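The weights also work out as they must: with $[W]=1$, the descendants carry $[\lambda_{\alpha}^i] = \frac32$ and $[X^{ij}] = 2$ (each spinor derivative adding $\frac12$), so that
\begin{eqnarray*}
\big[\lambda^{\alpha i} \lambda_{\alpha}^j\big] = \frac32 + \frac32 = 3~, \qquad \big[W X^{ij}\big] = 1 + 2 = 3 = \big[H^{ij}_{\rm{VM}}\big]~,
\end{eqnarray*}
in agreement with the dimension-3 statement above.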
The vector multiplet action may then be rewritten as an integral over the full superspace,
\begin{eqnarray}
S_{\rm{VM}}&=&
\frac{1}{4} \int\mathrm d^{5|8}z\, E\, V_{ij} { H }_{\rm VM}^{ij} ~.~~~
\label{rep8.3}
\end{eqnarray}
It is also possible to write the action as
\begin{eqnarray}
S_{\rm{VM}}=
\frac{1}{4}
\int \mathrm d^{5|8}z\, E\,
{\bm \Omega}_{\rm VM} W~,
\label{rep8.4}
\end{eqnarray}
where we have introduced the primary superfield $ {\bm \Omega}_{\rm VM}$ defined by \cite{BKNT-M14}:
\begin{eqnarray}
{\bm \Omega}_{\rm VM} =
\frac{{\rm i}}{4}\Big(
W\de^{ij} V_{ij}
-2 (\de^{{{\alpha}} i}V_{ij})\de_{{\alpha}}^{j}W
-2 V_{ij}\de^{ij} W \Big)
~.
\end{eqnarray}
This is a prepotential for $H_{\rm{VM}}^{ij}$ in the sense of \eqref{def-G-0-b}.
The representations \eqref{rep8.3} and \eqref{rep8.4} allow us to compute the variation of
$S_{\rm{VM}}$ with respect to Mezincescu's prepotential,
\begin{eqnarray}
\delta S_{\rm{VM}}
&=&
\frac{3}{4}\int \mathrm d^{5|8}z\, E\, \delta V_{ij}
{ H }_{\rm VM}^{ij} ~.
\label{var8.6b}
\end{eqnarray}
Note that the above variation vanishes when $\delta V_{ij}$ is a gauge transformation \eqref{gauge-vector}. This implies that
\begin{eqnarray}
\int \mathrm d^{5|8}z\, E\, \Lambda^{\alpha}{}_{ijk} \de_{\alpha}^{(k} H_{\rm VM}^{ij)} = 0~,
\end{eqnarray}
that is, $\de_{\alpha}^{(i} H_{\rm VM}^{jk)} = 0$.
This result is true for any dynamical system involving an Abelian vector multiplet \cite{BKNT-M14}. The variation with respect to the prepotential $V_{ij}$ couples to a composite linear multiplet which depends on the specific form of the associated action principle -- let us call this, in general, $\mathbf{H}^{ij}$ which satisfies by construction the constraints \eqref{linear-constr}.
The equation of motion (EOM) for a vector multiplet is then $\mathbf{H}^{ij}=0$.
In the case of eq.~\eqref{rep8.3}, the EOM for the vector multiplet compensator is $H^{ij}_{\rm VM}=0$.
The superspace action $S_{\rm VM}$ can be reduced to components. The bosonic part of the component action reads \cite{BKNT-M14}
\begin{eqnarray}
S_{\rm VM} &=& \int \mathrm d^{5} x\, e\, \bigg\{
- \frac{1}{8} W^3 {\cR}
+ \frac{3}{2} W ({\cD}^{{{a}}} W) {\cD}_{{{a}}} W
- \frac{3}{4} W X^{ij} X_{ij}
+\frac{1}{8} \varepsilon_{{{a}} {{b}} {{c}} {{d}} {{e}}}v^{{{a}}} f^{{{b}} {{c}}} f^{{{d}} {{e}}}
\nonumber\\
&&~~~~~~~~~~~~~
+ \frac{3}{4} W f^{{{a}} {{b}}} f_{{{a}} {{b}}}
+ \frac{9}{4} W^2 W^{{{a}} {{b}}} f_{{{a}} {{b}}}
+ \frac{39}{32} W^3 W^{{{a}} {{b}}} W_{{{a}} {{b}}} - \frac{3}{32} W^3 Y \bigg\}~,
\label{bosonic-swm}
\end{eqnarray}
where $\cR$ denotes the scalar curvature. In the above, we have introduced the spin, dilatation, and
${\rm SU}(2)_R$ covariant derivative ${\cD}_{{a}}$
\begin{align}
\cD_{{{a}}} &= e_{{{a}}}{}^{{{m}}} \cD_{{{m}}} = e_{a}{}^{m} \Big(\partial_{{m}}
- \frac{1}{2} \omega_{{m}}{}^{{{b}} {{c}}} M_{{{b}} {{c}}}
- b_{{m}} \mathbb D
- \phi_{{m}}{}^{ij} J_{ij} \Big)~.
\end{align}
The action is two-derivative and, upon gauge fixing dilatations by imposing $W=1$, the first term gives a scalar curvature term, $\cR$.
The gauge condition $W=1$ can be imposed provided $W\ne 0$, meaning that the vector multiplet plays the role of a conformal compensator.
\subsection{Linear multiplet compensator}
\label{linear_compensator}
The action for the linear multiplet compensator can also be constructed using the BF action principle \eqref{SGV}. In this case, the dynamical part of the action is described by a vector multiplet built out of the linear multiplet.
We denote by $\mathbf{W}$ the composite vector multiplet field strength:
\begin{eqnarray}
\mathbf{W} :=
\frac{{\rm i}}{16} G \nabla^{{{\alpha}} i} \nabla_{{\alpha}}^j \Big(\frac{G_{ij}}{G^2}\Big) = \frac{1}{4}F G^{-1} - \frac{{\rm i}}{8} G_{i j} \varphi^{i \alpha} \varphi^{j}_{\alpha} G^{-3}~,
\label{comp-W}
\end{eqnarray}
with
\begin{eqnarray}
G := \sqrt{\frac12 G^{ij} G_{ij}}
\end{eqnarray}
being nowhere vanishing, $G \neq 0$. At the component level, the vector multiplet \eqref{comp-W} was first derived by Zucker
\cite{Zucker:2000ks} as a 5D analogue of the improved 4D $\cN=2$ tensor multiplet \cite{deWit:1982na}.
The field strength $\mathbf{W}$ obeys
the constraints \eqref{vector-defs}.
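One may quickly verify the weight of \eqref{comp-W} as well: using $[G]=3$ together with $[\varphi_{\alpha}^i]=\frac72$ and $[F]=4$, which follow from \eqref{O2superfieldComps},
\begin{eqnarray*}
\big[F G^{-1}\big] = 4 - 3 = 1~, \qquad \big[G_{ij}\, \varphi^{i \alpha} \varphi^{j}_{\alpha}\, G^{-3}\big] = 3 + \frac72 + \frac72 - 9 = 1~,
\end{eqnarray*}
the dimension appropriate for a vector multiplet field strength.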
The action for the linear multiplet compensator may then be rewritten as
\begin{eqnarray} \label{S_boldW}
S_{\rm L} = \int \mathrm d^{5|8}z\, E\, \Omega \mathbf{W} ~.
\end{eqnarray}
Varying the prepotential $\Omega$ leads to
\begin{eqnarray}
\delta S_{\rm L}
=
\int \mathrm d^{5|8}z\, E\, \delta \Omega \,\mathbf{W} ~.
\label{rep8.11}
\end{eqnarray}
As in the vector multiplet case, the previous form holds for the first-order variation, with respect to its prepotential $\Omega$, of any matter system which includes a linear multiplet. In particular, the variation must vanish if $\delta \Omega$ is the gauge transformation \eqref{gauge-O2-0}. This holds if $\mathbf{W}$ obeys the Bianchi identity \eqref{vector-Bianchi}.
In general, any dynamical system involving a linear multiplet then possesses a composite vector multiplet $\mathbf{W}$, and the EOM for the linear multiplet is $\mathbf{W}=0$. For the specific case of the linear multiplet action \eqref{S_boldW}, this composite is the $\mathbf{W}$ defined in \eqref{comp-W}.
The bosonic part of $S_{\rm L}$ is given by \cite{BKNT-M14}
\begin{eqnarray}
\label{eq:TensorComp}
S_{\rm L}
&=&
\int \mathrm d^{5}x\, e\,
\bigg\{
- \frac{3}{8} G\cR
+\frac{3}{32} GY
- \frac{1}{8G} F^2
- \frac{3}{32} W^{{{a}}{{b}}} W_{{{a}}{{b}}} G
+\frac{1}{4} G^{-1}( \cD_{{a}} G^{ij} )\cD^{{a}} G_{ij}
\notag \\ &&
- \frac{1}{2} G^{-1} \cH^{{a}} \cH_{{a}}
+ \frac{1}{12} \varepsilon^{{{a}}{{b}}{{c}}{{d}}{{e}}} b_{{{c}}{{d}}{{e}}} \Big(
\frac{1}{2} G^{-3} (\cD_{{a}} G_{ik}) (\cD_{{b}} G_j{}^k) G^{ij}
+ G^{-1} R(J)_{{{a}}{{b}}}{}^{ij} G_{ij}
\Big)
\bigg\}~.~~~~~~~~~
\end{eqnarray}
The action is two-derivative, and, with $G=1$, the first term gives an $\cR$ term.
For later use it is useful to provide the explicit expressions of the composite descendant superfields of $\mathbf{W}$. These are given by
\begin{subequations} \label{desc-W-R2}
\begin{eqnarray}
\bm{\lambda}_{\alpha}^i &=& -{\rm i} \de_{\alpha}^i \mathbf{W}\nonumber\\
&=& ~~ G^{-1} \Bigg\{ -\frac{{\rm i}}{2} \de_{\alpha \beta} \varphi^{\beta i}
+ \frac{3{\rm i}}{4} W_{\alpha \beta} \varphi^{\beta i} + \frac{3{\rm i}}{8} G^{ij} X_{\alpha j}\Bigg\}
\nonumber\\
&&
+ G^{-3} \Bigg\{ -\frac{{\rm i}}{8} F G^{ij} \varphi_{\alpha j} - \frac{{\rm i}}{8} G^{ij} \cH_{\alpha \beta} \varphi^{\beta}_{j}
+ \frac{{\rm i}}{4} G_{jk} \varphi^{\beta k} \de_{\alpha \beta} G^{ij}
+ \frac{1}{4} \varphi^{\beta i} \varphi_{\beta}^j \varphi_{\alpha j} \Bigg\}
\nonumber\\
&&
+ G^{-5} \Bigg\{ -\frac{3}{8} G^{ij} G_{kl} \varphi^{\beta k} \varphi_{\beta}^l \varphi_{\alpha j}\Bigg\}~,
\\
\mathbf{X}^{ij}
&=& \frac{{\rm i}}{4} \nabla^{{{\alpha}} (i} \nabla_{{\alpha}}^{j)} \mathbf{W}
\nonumber\\
&=& ~~ G^{-1} \Bigg\{\, \frac12 \Box G^{ij}
+ \frac{3}{64} W^{ab} W_{ab} G^{ij} -\frac{3}{64} Y G^{ij}
+ \frac{3{\rm i}}{4} X^{\alpha (i} \varphi_{\alpha}^{j)}
\Bigg\}\nonumber\\
&&+ G^{-3} \Bigg\{ -\frac{1}{16} F^2 G^{ij} - \frac{1}{16} \cH^{a} \cH_{a} G^{ij} + \frac{1}{4} \cH^{a} G^{k(i} \de_{a} G^{j)}_{k}-\frac{1}{4} G_{kl} \big(\de^{a} G^{k(i}\big) \de_{a} G^{j)l}
\nonumber\\
&&~~~~~~~~~~
-\frac{3{\rm i}}{8} G^{ij} G_{kl} X^{\alpha k} \varphi_{\alpha}^l
-\frac{{\rm i}}{8} F \varphi^{\alpha(i} \varphi_{\alpha}^{j)}
+\frac{3{\rm i}}{8} W^{\alpha \beta} G^{ij} \varphi^{k}_{\alpha} \varphi_{\beta k}
\nonumber\\
&&~~~~~~~~~~
+ \frac{{\rm i}}{16}(\Gamma^a)^{\alpha \beta} \Big(\cH_{a} \varphi^{(i}_{\alpha} \varphi^{j)}_{\beta}
+ 8 G^{k(i} \big(\de_{a} \varphi^{j)}_{\alpha} \big) \varphi_{\beta k} + 2 \varphi_{\alpha}^{(i} \big(\de_{a} G^{j)k} \big) \varphi_{\beta k} \Big)
\Bigg\}\nonumber\\
&&+ G^{-5} \Bigg\{\,\frac{3{\rm i}}{16} F G^{ij} G_{kl} \varphi^{\alpha k} \varphi_{\alpha}^l+ \frac{3{\rm i}}{16} G^{k(i} G^{j)l} (\Gamma^a)_{\alpha \beta} \cH_{a} \varphi_{k}^{\alpha} \varphi_{l}^{\beta}
-\frac{3{\rm i}}{8}G^{mn}G^{k(i} \big(\de_{\alpha \beta} G^{j)}{}_{n} \big) \varphi^{\alpha}_{k} \varphi^{\beta}_{m}
\nonumber\\
&&~~~~~~~~~
+ \frac{3}{8} G^{k(i} \varphi^{j)}_{\alpha} \varphi^{\alpha l} \varphi^{\beta}_{k} \varphi_{\beta l} -\frac{3}{8} G^{kl} \varphi^{\alpha(i} \varphi_{\alpha}^{j)} \varphi_{k}^{\beta} \varphi_{\beta l}
\Bigg\}
\nonumber\\
&&
+ G^{-7} \Bigg\{ \frac{15}{32} G^{ij}G_{kl} G_{mn} \varphi^{\alpha k} \varphi_{\alpha}^{l} \varphi^{\beta m} \varphi_{\beta}^n \Bigg\} ~,
\\
\mathbf{F}_{{a} {b}} &=& \frac{1}{4} (\Sigma_{{a} {b}})^{\alpha \beta} \de^{k}_{(\alpha} \bm{\lambda}_{\beta) k} - W_{{a} {b}} \mathbf{W} \nonumber\\
&=& ~~ G^{-1} \Bigg\{ \frac12 \de_{[a} \cH_{b]} -\frac{3{\rm i}}{8} G_{ij}X_{ab}{}^{ij} + \frac{{\rm i}}{4} W_{ab \alpha}{}^i\varphi^{\alpha}_i
\Bigg\} \nonumber\\
&&+ G^{-3} \Bigg\{ \,\frac{1}{4} G_{ij} \cH_{[a} \de_{b]} G^{ij}
-\frac{1}{4} G_{ij} \big(\de_{[a} G^{ik} \big) \de_{b]} G^{j}_k + \frac{{\rm i}}{2} G_{ij} (\Gamma_{[a})^{\alpha \beta} (\de_{b]} \varphi_{\alpha}^i) \varphi_{\beta}^{j}
\nonumber\\
&&
~~~~~~~~~
- \frac{{\rm i}}{8} (\Gamma_{[a})^{\alpha \beta} \big(\de_{b]} G_{ij} \big) \varphi^{i}_{\alpha} \varphi^{j}_{\beta}
\Bigg\}
\nonumber\\
&&
+ G^{-5} \Bigg\{ -\frac{3{\rm i}}{8} G^{k}{}_{(i} G^{l}{}_{j)} (\Gamma_{[a})^{\alpha \beta} \big(\de_{b]} G_{kl} \big) \varphi_{\alpha}^{i} \varphi^{j}_{\beta} \Bigg\}
~.
\end{eqnarray}
\end{subequations}
\subsection{Gauged supergravity action}
An off-shell formulation for 5D minimal supergravity can be obtained by coupling the standard Weyl multiplet to two off-shell compensators: vector and linear multiplets \cite{Zucker1,Zucker2,Zucker3,Ohashi1,Ohashi2,Ohashi3,Ohashi4,Bergshoeff1,Bergshoeff2,Bergshoeff3,BKNT-M14}.
This is the 5D analogue of the off-shell formulation for 4D $\cN=2$ supergravity \cite{deWit:1982na, K-08}.
The complete (gauged) supergravity action, $S_{\rm gSG}$, is given by the following two-derivative action:
\begin{subequations} \label{gsg-action}
\begin{eqnarray}
S_{\rm gSG}&=& S_{\rm VM} + S_{\rm L} +\kappa \, S_{\rm BF} =
\int \mathrm d^{5|8}z\, E\, \Big\{
\frac{1}{4} V_{ij} { H }_{\rm VM}^{ij}
+\Omega \mathbf{ W }
+\kappa V_{ij} G^{ij} \Big\} \\
&=&
\int \mathrm d^{5|8}z\, E\, \Big\{
\frac{1}{4} V_{ij} { H }_{\rm VM}^{ij}
+ \Omega \mathbf{ W}
+\kappa \Omega W \Big\}~. \label{rep8.13b}
\end{eqnarray}
\end{subequations}
The BF action $\kappa S_{\rm BF}$ describes a supersymmetric cosmological term. The case $\kappa=0$ corresponds to Poincar\'e supergravity, while $\kappa \neq 0$ leads to gauged or anti-de Sitter supergravity.
Upon gauge fixing the superconformal symmetries (dilatation, $S$-supersymmetry, and special conformal transformations) and integrating out the various auxiliary fields, one obtains the on-shell Poincar\'e supergravity action of \cite{Cremmer, CN}. The contributions from the scalar curvature terms in eqs.~\eqref{bosonic-swm} and \eqref{eq:TensorComp} combine to give the normalised Einstein-Hilbert term $-\frac12 \cR$ plus a cosmological constant, see, e.g., \cite{BKNT-M14} for details.
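Indeed, isolating the scalar curvature terms of \eqref{bosonic-swm} and \eqref{eq:TensorComp} and imposing the two gauge conditions makes the normalisation manifest:
\begin{eqnarray*}
-\frac18 W^3 \cR\, \Big|_{W=1} - \frac38 G \cR\, \Big|_{G=1} = -\frac12\, \cR~.
\end{eqnarray*}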
In the remaining subsections, we elaborate on the structure of three independent curvature-squared invariants \cite{BKNT-M14, HOT, BRS, OP1, OP2}. These invariants were constructed in superspace \cite{BKNT-M14} in the standard Weyl multiplet background. In particular, we present the full expressions of all the composite primary multiplets which generate these invariants with the $\log$ multiplet appearing for the first time in its expanded form in terms of the descendants of $W$ and $W_{\alpha\beta}$.
\subsection{Weyl-squared}
\label{weyl_squared}
We first consider a composite primary superfield that may be used to generate a supersymmetric completion of a Weyl-squared term. In superspace, it was described in \cite{BKNT-M14} in terms of the super Weyl
tensor:
\begin{equation}
H^{i j}_{\textrm{Weyl}} := - \frac{{\rm i}}{2} W^{{\alpha} {\beta} {\gamma} i} W_{{\alpha} {\beta} {\gamma}}\,^{j}+\frac{3{\rm i}}{2} W^{{\alpha} {\beta}} X_{{\alpha} {\beta}}\,^{i j} - \frac{3 {\rm i}}{4} X^{{\alpha} i} X^{j}_{{\alpha}}~.
\label{H-Weyl}
\end{equation}
It can be checked that $H^{ij}_{\rm Weyl}$ satisfies the constraints \eqref{linear-constr}.
The superfield $H^{ij}_{\rm Weyl}$ corresponds to the composite linear multiplet first constructed in components by Hanaki, Ohashi, and Tachikawa in \cite{HOT}.
With the aid of the relations \eqref{eq:Wdervs}, the component fields of the composite linear multiplet are straightforward to compute. They include the $\theta =0$ projection (or the ``bar-projection'') of $H^{ij}_{\rm Weyl}$, together with the bar-projection of the following descendant superfields of $H^{ij}_{\rm Weyl}$:
\begin{subequations} \label{desc-H-Weyl}
\begin{align}
\varphi_{\rm Weyl}^{{{\alpha}}\, i} &= \frac{1}{3} \nabla^{{{\alpha}}}_{j} H^{ij}_{\rm Weyl} \ , \\
F_{{\rm Weyl}} &= \frac{{\rm i}}{12} \de^{\alpha}_{i} \de_{\alpha j}H^{i j}_{\rm{Weyl}}~, \\
\cH^{{a}}_{{\rm Weyl}} &= \frac{{\rm i}}{12} (\Gamma^{{a}})^{\alpha \beta} \de_{\alpha i} \de_{\beta j} H^{i j}_{\rm{Weyl}}~.
\end{align}
\end{subequations}
Eqs.~\eqref{desc-H-Weyl} play an important role in analysing the superconformal primary equations of motion in the next section. The resulting expressions coincide, up to notation, with the results of \cite{HOT}. We will give the full component expressions of \eqref{desc-H-Weyl} in a follow-up paper.
By inserting the components of the composite linear multiplet \eqref{H-Weyl} and \eqref{desc-H-Weyl} into the BF action \eqref{BF-action-0}, one may construct the following higher-derivative invariant in a standard Weyl multiplet background \cite{BKNT-M14}
\begin{subequations}
\label{S-Weyl-0}
\begin{eqnarray}
S_{\rm{Weyl}} &=& \int\mathrm d^{5|8}z\, E\, V_{ij} { H }_{\rm Weyl}^{ij} \label{S-Weyl} \\
&=& - \int \mathrm d^5 x \, e \, \bigg( v_{{a}} \cH^{{a}}_{\rm Weyl} + W F_{\rm Weyl} + X_{i j} H^{i j}_{\rm Weyl} + 2 \lambda^k \varphi_{k \, \rm Weyl} \nonumber\\
& &~~~
- \psi_{{a} i} \Gamma^{{a}} \varphi_{\rm Weyl}^i W - {\rm i} \psi_{{a} i} \Gamma^{{a}} \lambda_{j} H^{i j}_{\rm Weyl} + {\rm i} \psi_{{a} i} \Sigma^{{a} {b}} \psi_{{b} j} W H^{i j}_{\rm Weyl} \bigg)~,
\label{S-Weyl-b}
\end{eqnarray}
\end{subequations}
where the spinor indices here are suppressed. This defines a locally supersymmetric extension of the Weyl squared term \cite{HOT, BRS, OP1, OP2}.
\subsection{$\log{W}$}
\label{logW}
We now consider a composite linear superfield which includes a supersymmetric Ricci tensor-squared term. In superspace, it was described for the first time in \cite{BKNT-M14} in analogy with the construction of a higher-derivative chiral invariant in 4D $\cN=2$ supergravity \cite{Butter:2013lta}. The composite superfield makes use of the standard Weyl multiplet coupled to the off-shell
vector multiplet compensator. It takes the form\footnote{Note that there is an overall minus sign difference between the definition of the $\log{W}$ invariant in this paper and the one of \cite{BKNT-M14}.}
\begin{equation}
H^{i j}_{{\log{W}}} = - \frac{3 {\rm i}}{40} \Delta^{i j k l}\de_{k l} \log{W} = \frac{3 {\rm i}}{1280} \de^{(i j} \de^{k l)} \de_{k l} \log{W}~.
\label{logW-0}
\end{equation}
In general, such a linear multiplet could be defined by replacing $W$ with any primary scalar superfield of weight $q$, for which one can prove that \eqref{logW-0} satisfies all the linear multiplet constraints, eq.~\eqref{linear-constr}; see \cite{BKNT-M14}. However, for various applications, we choose to construct it in terms of the vector multiplet field strength, $W$.
Due to the complexity of computing the action of six spinor derivatives on the ``log multiplet,'' the component analysis of $H^{ij}_{{\log{W}}}$ has not appeared in the literature so far.
This calculation can be performed with the aid of the \textit{Cadabra} software.
Here we find that the full expression of $H^{ij}_{{\log{W}}}$ in terms of the descendant superfields of the vector and standard Weyl multiplets is given by
\begin{eqnarray} \label{HlogW-full}
H^{i j}_{{\log{W}}}
&=&
{} - \frac{3{\rm i}}{8} W_{{a} {b}} X^{{a} {b} i j} + \frac{51{\rm i}}{64} X^{{\alpha} i} X^{j}_{{\alpha}}
\nonumber\\
&&
+{{W}}^{-1} \Bigg\{\,\frac{9}{64}X^{i j} Y - \frac{3 {\rm i}}{8} F_{{a} {b}} X^{{a} {b} i j} - \frac{1}{2} \Box{{X^{i j}}} - \frac{9}{64}X^{i j} W^{{a} {b}} W_{{a} {b}} \nonumber\\
&&~~~~~~~~~~
- \frac{3}{4}(\Gamma^{{a}})^{{\alpha} {\beta}} X^{(i}_{{\alpha}} {\nabla}_{{a}}{\lambda^{j)}_{{\beta}}} +\frac{3}{4}(\Gamma^{{a}})^{{\alpha} {\beta}} \lambda^{(i}_{{\alpha}} {\nabla}_{{a}}{X^{j)}_{{\beta}}}\Bigg\}
\nonumber\\
&&
+ {{W}}^{-2} \Bigg\{\,\frac{1}{2} X^{i j} \Box{{{W}}}+\frac{1}{2} \big({\nabla}^{{a}}{{W}} \big) {\nabla}_{{a}}{X^{i j}}
+\frac{{\rm i}}{2} (\Sigma_{{a} {b}})^{{\alpha} {\beta}} \lambda^{(i}_{{\alpha}} {\nabla}^{{a}}{{\nabla}^{{b}}{\lambda^{j)}_{{\beta}}}}-\frac{{\rm i}}{2} \lambda^{{\alpha}(j} \Box{{\lambda^{i)}_{{\alpha}}}} \nonumber\\
&&~~~~~~~~~~~
-\frac{{\rm i}}{4} \big({\nabla}^{{a}}{\lambda^{{\alpha} i}} \big) {\nabla}_{{a}}{\lambda^{j}_{{\alpha}}} - \frac{3 {\rm i}}{16} \epsilon^{{a} {b} {c} {d} {e}} (\Sigma_{{a} {b}})^{{\alpha} {\beta}} W_{{d} {e}} \lambda^{(i}_{{\alpha}} {\nabla}_{{c}}{\lambda^{j)}_{{\beta}}} - \frac{3 {\rm i}}{8} (\Gamma^{{a}})^{{\alpha} {\beta}} \lambda^{i}_{{\alpha}} \lambda^{j}_{{\beta}} {\nabla}^{{c}}{W_{{a} {c}}} \nonumber\\
&&~~~~~~~~~~~
- \frac{3}{32}X^{{a} {b} i j} (\Sigma_{{a} {b}})^{{\alpha} {\beta}} \lambda^{k}_{{\alpha}} \lambda_{{\beta} k} + \frac{3 {\rm i}}{64} Y \lambda^{{\alpha} i} \lambda^{j}_{{\alpha}}
+\frac{3}{8}(\Sigma_{{a} {b}})^{{\alpha} {\beta}} F^{{a} {b}} X^{(i}_{{\alpha}} \lambda^{j)}_{{\beta}}
\nonumber\\
&&~~~~~~~~~~~
+ \frac{3 {\rm i}}{128} W^{{a} {b}} W_{{a} {b}} \lambda^{{\alpha} i} \lambda^{j}_{{\alpha}}
+\frac{9 {\rm i}}{256} \epsilon^{{a} {b} {c} {d} {e}} (\Gamma_{a})^{{\alpha} {\beta}} W_{{b} {c}} W_{{d} {e}} \lambda^{i}_{{\alpha}} \lambda^{j}_{{\beta}}
- \frac{3}{8} X^{i j} X^{{\alpha} k} \lambda_{{\alpha} k}
\Bigg\}
\nonumber\\
&&
+ {{W}}^{-3} \Bigg\{\,\frac{1}{8}X^{i j} F^{{a} {b}} F_{{a} {b}} - \frac{1}{8} X^{i j} X^{k l} X_{k l} - \frac{1}{4} X^{i j} \big({\nabla}^{{a}}{{W}} \big) {\nabla}_{{a}}{{W}} \nonumber\\
&&~~~~~~~~~~
- \frac{{\rm i}}{8} \epsilon^{{a} {b} {c} {d} {e}} (\Sigma_{{a} {b}})^{{\alpha} {\beta}} F_{{d} {e}} \lambda^{(i}_{{\alpha}} {\nabla}_{c}{\lambda^{j)}_{{\beta}}} - \frac{{\rm i}}{4} (\Gamma^{{a}})^{{\alpha} {\beta}} F_{{a} {b}} \lambda^{(i}_{{\alpha}} {\nabla}^{{b}}{\lambda^{j)}_{{\beta}}}
\nonumber\\
&&~~~~~~~~~~
- \frac{{\rm i}}{4} (\Gamma^{{a}})^{{\alpha} {\beta}} \lambda^{i}_{{\alpha}} \lambda^{j}_{{\beta}} {\nabla}^{{c}}{F_{{a} {c}}}
+\frac{{\rm i}}{4} (\Gamma^{{a}})^{{\alpha} {\beta}} X^{i j} \lambda^{k}_{{\alpha}} {\nabla}_{{a}}{\lambda_{{\beta} k}}+\frac{{\rm i}}{4} (\Gamma^{{a}})^{{\alpha} {\beta}} X^{k (i} \lambda^{j)}_{{\alpha}} {\nabla}_{{a}}{\lambda_{{\beta} k}}
\nonumber\\
&&~~~~~~~~~~
- \frac{{\rm i}}{4} (\Gamma^{{a}})^{{\alpha} {\beta}} \lambda^{(i}_{{\alpha}} \big({\nabla}_{{a}}{X^{j) l}} \big) \lambda_{{\beta} l}
+ \frac{{\rm i}}{4} \lambda^{{\alpha} i} \lambda^{j}_{{\alpha}} \Box{{{W}}} + \frac{3 {\rm i}}{4} \big({\nabla}^{{a}}{{W}} \big) \lambda^{{\alpha} (i} {\nabla}_{{a}}{\lambda^{j)}_{{\alpha}}}
\nonumber\\
&&~~~~~~~~~~
- \frac{{\rm i}}{2} (\Sigma_{{a} {b}})^{{\alpha} {\beta}} \big({\nabla}^{{a}}{{W}} \big)\lambda^{(i}_{{\alpha}} {\nabla}^{{b}}{\lambda^{j)}_{{\beta}}}
- \frac{3 {\rm i}}{16} (\Gamma^{{a}})^{{\alpha} {\beta}} W_{{a} {b}} \lambda^{i}_{{\alpha}} \lambda^{j}_{{\beta}} {\nabla}^{{b}}{{W}} + \frac{3 {\rm i}}{32} W_{{a} {b}} F^{{a} {b}} \lambda^{{\alpha} i} \lambda^{j}_{{\alpha}}
\nonumber\\
&&~~~~~~~~~~
+\frac{9 {\rm i}}{64} \epsilon^{{a} {b} {c} {d} {e}} (\Gamma_{{a}})^{{\alpha} {\beta}} W_{{b} {c}} F_{{d} {e}} \lambda^{i}_{{\alpha}} \lambda^{j}_{{\beta}} - \frac{3 {\rm i}}{32} (\Sigma_{{a} {b}})^{{\alpha} {\beta}} X^{i j} W^{{a} {b}} \lambda^{k}_{{\alpha}} \lambda_{{\beta}k}
\nonumber\\
&&~~~~~~~~~~
- \frac{3 {\rm i}}{8} X^{{\alpha} k} \lambda^{(i}_{{\alpha}} \lambda^{{\beta} j)} \lambda_{{\beta} k}- \frac{3 {\rm i}}{8} X^{{\alpha} k} \lambda^{{\beta} i} \lambda^{j}_{{\beta}} \lambda_{{\alpha} k} \Bigg \}
\nonumber\\
&&
+ {{W}}^{-4}\Bigg\{ -\frac{3 {\rm i}}{16} \lambda^{{\alpha} i} \lambda^{j}_{{\alpha}} \big({\nabla}^{{a}}{{W}} \big) {\nabla}_{{a}}{{W}}
- \frac{3 {\rm i}}{8} (\Gamma^{{a}})^{{\alpha} {\beta}} X^{ k (i} \lambda^{j)}_{{\alpha}} \lambda_{{\beta}k} {\nabla}_{{a}}{{W}} \nonumber\\
&&~~~~~~~~~~~
+\frac{3{\rm i}}{8} (\Gamma^{{a}})^{{\alpha} {\beta}} F_{{a} {b}} \lambda^{i}_{{\alpha}} \lambda^{j}_{{\beta}} {\nabla}^{{b}}{{W}}
+ \frac{3 {\rm i}}{32} F^{{a} {b}} F_{{a} {b}} \lambda^{{\alpha} i} \lambda^{j}_{{\alpha}} +\frac{3{\rm i} }{64} \epsilon^{{a} {b} {c} {d} {e}} (\Gamma_{{a}})^{{\alpha} {\beta}} F_{{b} {c}} F_{{d} {e}} \lambda^{i}_{{\alpha}} \lambda^{j}_{{\beta}} \nonumber\\
&&~~~~~~~~~~~
- \frac{3 {\rm i}}{16} (\Sigma_{{a} {b}})^{{\alpha} {\beta}} X^{i j} F^{{a} {b}} \lambda^{k}_{{\alpha}} \lambda_{{\beta} k} -\frac{3 {\rm i}}{16} X^{i j} X^{k l} \lambda^{\alpha}_{k} \lambda_{{\alpha} l} -\frac{3 {\rm i}}{32} X^{k l} X_{ k l} \lambda^{{\alpha} i} \lambda^{j}_{{\alpha}} \nonumber\\
&&~~~~~~~~~~~
-\frac{15}{64} (\Gamma^{{a}})^{{\alpha} {\beta}} \lambda^{i}_{{\alpha}} \lambda^{j}_{{\beta}} \lambda^{{\gamma} k} {\nabla}_{{a}}{\lambda_{{\gamma} k}}-\frac{9}{32} (\Gamma^{{a}})^{{\alpha} {\beta}} \lambda^{(i}_{{\alpha}} \lambda^{{\rho} j)} \lambda^{k}_{{\beta}} {\nabla}_{{a}}{\lambda_{{\rho} k}}
\nonumber\\
&&~~~~~~~~~~~
-\frac{15}{32}(\Gamma^{{a}})^{{\alpha} {\beta}} \lambda^{(i}_{{\alpha}} \lambda^{{\rho} j)} \lambda^{k}_{{\rho}} {\nabla}_{{a}}{\lambda_{{\beta} k}}
-\frac{15}{64} (\Gamma^{{a}})^{{\alpha} {\beta}} \lambda^{{\rho} i} \lambda^{j}_{{\rho}} \lambda^{k}_{{\alpha}} {\nabla}_{{a}}{\lambda_{{\beta} k}}
\nonumber\\
&&~~~~~~~~~~~
+ \frac{9}{32} (\Sigma_{{a} {b}})^{{\alpha} {\beta}} W^{{a} {b}} \lambda^{(i}_{{\alpha}} \lambda^{{\rho} j)} \lambda^{k}_{{\beta}} \lambda_{{\rho} k}
+ \frac{9}{64} (\Sigma_{{a} {b}})^{{\alpha} {\beta}} W^{{a} {b}} \lambda^{{\rho} i} \lambda^{j}_{{\rho}} \lambda^{k}_{{\alpha}} \lambda_{{\beta} k} \Bigg\}
\nonumber\\
&&
+ {{W}}^{-5} \Bigg\{\, \frac{3}{8} (\Gamma^{{a}})^{{\alpha} {\beta}} \lambda^{(i}_{{\alpha}} \lambda^{{\rho} j)} \lambda^{k}_{{\beta}} \lambda_{{\rho} k} {\nabla}_{{a}}{{W}} + \frac{3}{8} (\Sigma_{{a} {b}})^{{\alpha} {\beta}} F^{{a} {b}} \lambda^{{\rho} i} \lambda^{j}_{{\rho}} \lambda^{k}_{{\alpha}} \lambda_{{\beta} k} \nonumber\\
&&~~~~~~~~~~
+\frac{3}{8} X^{k l} \lambda^{{\alpha} i} \lambda^{j}_{{\alpha}} \lambda^{{\beta}}_{k} \lambda_{{\beta} l}
\Bigg\} \nonumber\\
&&
+ {W}^{-6}\Bigg\{\,\frac{3 {\rm i}}{32} \lambda^{{\alpha} i} \lambda^{j}_{{\alpha}} \lambda^{{\beta} k} \lambda^{l}_{{\beta}} \lambda_{k}^{{\gamma}} \lambda_{{\gamma} l}
+\frac{3 {\rm i}}{16} \lambda^{{\alpha} i}
\lambda^{k}_{{\alpha}}
\lambda^{{\beta} j}
\lambda^{l}_{{\beta}}
\lambda_{k}^{{\gamma}}
\lambda_{{\gamma} l} \Bigg\}
~.
\end{eqnarray}
Using the explicit expression \eqref{HlogW-full},
together with the relations \eqref{eq:Wdervs}, \eqref{S-on-X_Y-a}, \eqref{VMIdentities}, and \eqref{S-on-W},
we have shown that $H^{i j}_{{\log{W}}}$ is indeed a primary and linear superfield satisfying \eqref{linear-constr}.
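Given the length of \eqref{HlogW-full}, simple automated audits are useful. As one elementary check (ours, and not a substitute for the full \textit{Cadabra} computation), every monomial above must carry total dilatation weight $3$, matching $\mathbb{D} H^{ij}_{{\log{W}}} = 3 H^{ij}_{{\log{W}}}$ from \eqref{linear-constr}. The following minimal script, assuming only the standard weights of the covariant fields (each spinor derivative adds $\frac12$ to $[W]=1$; the weights of $W_{ab}$, $X_{ab}{}^{ij}$, $X^{\alpha i}$, and $Y$ follow, e.g., from \eqref{H-Weyl} having dimension $3$), verifies a sample of terms:
\begin{verbatim}
# Dilatation weights of the covariant fields appearing in H^{ij}_{log W}.
weights = {
    "W": 1.0,       # vector multiplet field strength
    "lambda": 1.5,  # lambda_alpha^i
    "X": 2.0,       # auxiliary X^{ij}
    "F": 2.0,       # field strength F_{ab}
    "Wab": 1.0,     # super Weyl tensor component W_{ab}
    "Xab": 2.0,     # X_{ab}^{ij}
    "Xalpha": 1.5,  # X^{alpha i}
    "Y": 2.0,       # Y
    "nabla": 1.0,   # vector covariant derivative
    "Box": 2.0,     # Box = nabla^a nabla_a
}

def weight(term):
    """Total dilatation weight of a monomial given as (field, power) pairs."""
    return sum(weights[f] * p for f, p in term)

# A sample of monomials from H^{ij}_{log W}; every entry must have weight 3.
sample = [
    [("Wab", 1), ("Xab", 1)],                  # W_{ab} X^{ab ij}
    [("Xalpha", 2)],                           # X^{alpha i} X_alpha^j
    [("W", -1), ("Box", 1), ("X", 1)],         # W^{-1} Box X^{ij}
    [("W", -1), ("X", 1), ("Y", 1)],           # W^{-1} X^{ij} Y
    [("W", -2), ("lambda", 2), ("nabla", 2)],  # W^{-2} lambda nabla nabla lambda
    [("W", -3), ("X", 3)],                     # W^{-3} X^{ij} X^{kl} X_{kl}
    [("W", -4), ("F", 2), ("lambda", 2)],      # W^{-4} F^{ab} F_{ab} lambda lambda
    [("W", -6), ("lambda", 6)],                # W^{-6} lambda^6
]

for term in sample:
    assert weight(term) == 3.0, term
print("all sampled terms have dilatation weight 3")
\end{verbatim}
An analogous audit of the ${\rm SU}(2)_R$ weights and Grassmann parity of each term is equally mechanical.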
Furthermore, we have computed the descendants of the primary superfield $H^{i j}_{{\log{W}}}$ defined as
\begin{subequations} \label{desc-H-log}
\begin{align}
\varphi_{{\log{W}}}^{{{\alpha}}\, i} &= \frac{1}{3} \nabla^{{{\alpha}}}_{j} H^{ij}_{{\log{W}}} \ , \\
F_{{{\log{W}}}} &= \frac{{\rm i}}{12} \de^{\alpha}_{i} \de_{\alpha j}H^{i j}_{{\log{W}}}~, \\
\cH^{{a}}_{{{\log{W}}}} &= \frac{{\rm i}}{12} (\Gamma^{{a}})^{\alpha \beta} \de_{\alpha i} \de_{\beta j} H^{i j}_{{\log{W}}}~.
\end{align}
\end{subequations}
By using \eqref{HlogW-full} and the BF action, eqs.~\eqref{SGV} and \eqref{BF-Scomp}, one may construct the following locally superconformal invariant in a standard Weyl multiplet background
\begin{subequations}
\label{S-log-0}
\begin{eqnarray}
S_{\log{W}} &=& \int\mathrm d^{5|8}z\, E\, V_{ij} { H }_{{\log{W}}}^{ij} \label{S-log} \\
&=& - \int \mathrm d^5 x \, e \, \bigg( v_{{a}} \cH^{{a}}_{{\log{W}}} + W F_{{\log{W}}} + X_{i j} H^{i j}_{{\log{W}}} + 2 \lambda^k \varphi_{k \, {\log{W}}} \nonumber\\
&&- \psi_{{a} i} \Gamma^{{a}} \varphi_{{\log{W}}}^i W - {\rm i} \psi_{{a} i} \Gamma^{{a}} \lambda_{j} H^{i j}_{{\log{W}}} + {\rm i} \psi_{{a} i} \Sigma^{{a} {b}} \psi_{{b} j} W H^{i j}_{{\log{W}}} \bigg)~. \label{S-log-b}
\end{eqnarray}
\end{subequations}
The resulting component action contains, for example, a $\Box\Box W$ term which, upon gauge-fixing $W=1$, gives rise to a Ricci tensor squared combination.
A more detailed discussion of the (fairly involved) component structure of \eqref{desc-H-log} will be given elsewhere.
\subsection{Scalar curvature squared} \label{scalar_squared}
Given a composite vector multiplet \eqref{comp-W} and its corresponding descendants \eqref{desc-W-R2},
we can then construct a composite linear multiplet defined by \cite{BKNT-M14}
\begin{eqnarray}
H^{ij}_{R^2}:=H^{ij}_{\rm{VM}}[\mathbf{W}] &=& {\rm i} (\de^{\alpha(i} \mathbf{W}) \de_{\alpha}^{j)} \mathbf{W} + \frac{{\rm i}}{2} \mathbf{W} \de^{\alpha(i} \de_{\alpha}^{j)} \mathbf{W} \nonumber\\
&=& -{\rm i} \bm{\lambda}^{\alpha i} \bm{\lambda}_{\alpha}^j +2 \mathbf{W} \mathbf{X}^{ij}~.
\end{eqnarray}
Inserting the composite field $H^{ij}_{R^2}$ and its independent descendants ($\varphi^{\alpha i}_{R^2}, F_{R^2}$, and $\cH^{{a}}_{R^2}$) into the BF action principle \eqref{BF-action-0} leads to the
following supersymmetric invariant
\begin{subequations} \label{action-R2}
\begin{eqnarray}
S_{R^2} &=& \int\mathrm d^{5|8}z\, E\, V_{ij} H^{ij}_{R^2} \\
&=& - \int \mathrm d^5 x \, e \, \bigg( v_{{a}} \cH^{{a}}_{R^2} + W F_{R^2} + X_{i j} H^{i j}_{R^2} + 2 \lambda^k \varphi_{k \, R^2} \nonumber\\
&&~~~~~~
- \psi_{{a} i} \Gamma^{{a}} \varphi_{R^2}^i W - {\rm i} \psi_{{a} i} \Gamma^{{a}} \lambda_{j} H^{i j}_{R^2} + {\rm i} \psi_{{a} i} \Sigma^{{a} {b}} \psi_{{b} j} W H^{i j}_{R^2} \bigg)~. \label{action-R2-b}
\end{eqnarray}
\end{subequations}
At the component level, the above action generates the scalar curvature-squared invariant constructed in \cite{OP1,OP2}.
\section{Superconformal equations of motion}
\label{5DEOMSuperspace}
Let us now combine the gauged supergravity action, $S_{\rm gSG}$, with the three independent curvature-squared invariants described by eqs. \eqref{S-Weyl-0}, \eqref{S-log-0}, and \eqref{action-R2}, to form a higher-derivative action
\begin{eqnarray}
S_{\rm HD}&=& S_{\rm gSG} + \alpha \, S_{\rm Weyl} +\beta \, S_{{\log{W}}} + \gamma \, S_{R^2}
~.
\label{HD-action}
\end{eqnarray}
The goal of this section is to obtain superconformal primary equations of motion in superspace that describe minimal 5D gauged supergravity deformed by an arbitrary combination of the three curvature-squared invariants described by the action above.
We can obtain these equations of motion by varying the superspace action \eqref{HD-action} with respect to the superfield prepotentials of the standard Weyl multiplet ($\mathfrak U$), the vector multiplet compensator ($V_{ij}$), and the linear multiplet compensator ($\Omega$). These variations lead, respectively, to the supercurrent superfield $\cJ$, to the linear multiplet describing the EOM of $V_{ij}$, and to the vector multiplet describing the EOM of $\Omega$.
Alternatively, we can reduce \eqref{HD-action} to components and vary it with respect to the highest-dimension independent fields ($Y$, $X_{i j}$, and $F$) of the corresponding multiplets. The resulting equations of motion then describe the primary fields, i.e., the bottom components, of the multiplets of the equations of motion that arise from the variation of the full superfields. It is then straightforward to reinterpret them as the primary superfields of the equations of motion. By making use of code developed in \textit{Cadabra}, the full higher-derivative action in components has been obtained by substituting the explicit form of the composite primary multiplets described in subsections \ref{weyl_squared} to \ref{scalar_squared}, together with their descendants. These results, and details of the derivation of the equations of motion, which we obtained by using a combination of superspace and component arguments, will be presented in an upcoming paper.

The important point to stress is that the equations of motion are fully locally superconformally covariant. Starting from them, successively acting with spinor derivatives (which is equivalent to taking successive $Q$-supersymmetry transformations), one can obtain the whole tower of independent equations of motion. Note that the component action computed from \eqref{HD-action} includes thousands of terms when fermions are considered, and it is not manifestly covariant due to the presence of naked gravitini and Chern-Simons terms. These would become covariant only after taking field variations and several integrations by parts. An efficient, algorithmic alternative is then to analyse the multiplets of the equations of motion starting from their primaries. Moreover, one can extract as much information as possible about the structure of the on-shell action (including all fermionic contributions) by directly working with the equations of motion in superspace.
In the next subsections, we will simply state the final results and show that the three primary equations of motion satisfy all necessary consistency checks dictated by their general structures. From this point of view, the results of this section stand on their own.
\subsection{Vector multiplet}
The vector multiplet equation of motion is obtained by varying \eqref{HD-action} with respect to the superfield $V_{ij}$ or equivalently the field $X_{ij}$.
The resulting EOM is
\begin{eqnarray}
0&=&
\frac{3}{4}H^{ij}_{\rm VM}
+ \kappa G^{ij}
+\alpha H^{ij}_{\rm Weyl}
+ \beta H^{ij}_{\log{W}}
+\gamma H^{i j}_{R^2}
~. \label{VM-eom}
\end{eqnarray}
Note that the first two terms correspond to the EOM for the vector multiplet in the two-derivative supergravity theory $S_{\rm gSG}$ while the remaining three terms describe the contribution coming from the three curvature-squared invariants. It is clear that, as expected, the right-hand side of \eqref{VM-eom} is a linear multiplet satisfying \eqref{linear-constr}.
\subsection{Linear multiplet}
The linear multiplet equation of motion is obtained by varying \eqref{HD-action} with respect to the superfield $\Omega$ or equivalently the auxiliary field $F$.
The resulting EOM is
\begin{equation}
0=\mathbf{W} + \kappa W + \gamma W_{R^2}~,
\label{LM-eom}
\end{equation}
with
\begin{eqnarray}
W_{R^2}
&= &
G^{-1} \bigg[~
\frac12 X^{ij} \mathbf{X}_{ij}
-\frac{1}{2} F^{{{a}} {{b}}} \mathbf{F}_{{{a}} {{b}}}
+W \Box \mathbf{W} + \mathbf{W} \Box W+ \big(\de^{{{a}}} W \big) \de_{{{a}}} \mathbf{W}
\nonumber\\
&&~~~~~~~
+ \frac{3}{16} Y W \mathbf{W}
-\frac{3}{2} W^{{{a}} {{b}}} \big(F_{{{a}} {{b}}} \mathbf{W} + \mathbf{F}_{{{a}} {{b}}} W \big)
-\frac{39}{16} W^{{{a}} {{b}}} W_{{{a}} {{b}}} W \mathbf{W}
\nonumber\\
&&~~~~~~~
+ \frac{{\rm i}}{2} \lambda^{\alpha i} \de_{\alpha}{}^{\beta} \bm{\lambda}_{\beta i}
+ \frac{{\rm i}}{2} \bm{\lambda}^{\alpha i} \de_{\alpha}{}^{\beta} \lambda_{\beta i}
+ \frac{3}{2} X^{\alpha i} \big( W \bm{\lambda}_{\alpha i} + \bm{W} \lambda_{\alpha i}\big)
-\frac{3{\rm i}}{2} W^{\alpha \beta} \lambda_{\alpha}^i \bm{\lambda}_{\beta i}
\bigg]
\nonumber\\
&&
+G^{-3} \bigg[~
\frac{1}{4} G^{ij} \varphi_{\beta i} \big( \lambda_{\alpha j} \de^{\alpha \beta} \mathbf{W} + \bm{\lambda}_{\alpha j} \de^{\alpha \beta} W
\big)+ \frac{1}{2} G^{ij} \varphi_{\beta i} \big( W \de^{\alpha \beta} \bm{\lambda}_{\alpha j} + \mathbf{W} \de^{\alpha \beta} \lambda_{\alpha j}
\big)
\nonumber\\
&&~~~~~~~~
-\frac{1}{2}G^{ij} \varphi_{\alpha i} \big( F^{\alpha \beta} \bm{\lambda}_{\beta j} + \mathbf{F}^{\alpha \beta} \lambda_{\beta j} \big)
-\frac{1}{4}G^{ij} F \big( W \mathbf{X}_{ij} + \mathbf{W} X_{i j} \big)
\nonumber\\
&&~~~~~~~~
-\frac{3}{4}G^{ij} W^{\alpha \beta} \varphi_{\alpha i} \big(\mathbf{W} \lambda_{\beta j} + W \bm{\lambda}_{\beta j} \big)
+\frac{{\rm i}}{4}F G^{ij}\lambda^{\alpha}_{i} \bm{\lambda}_{\alpha j}
+ \frac{3{\rm i}}{4} G_{ij} X^{\alpha i} \varphi_{\alpha}^j W \mathbf{W}
\nonumber\\
&&~~~~~~~~
+\frac{1}{4} G_{ij} \varphi^{\alpha i} \big( X^{jk} \bm{\lambda}_{\alpha k} + \mathbf{X}^{jk} \lambda_{\alpha k} \big)
-\frac{{\rm i}}{4} \varphi^{\alpha i} \varphi^{j}_\alpha \big( X_{ij} \mathbf{W} + \mathbf{X}_{ij} W \big)
\nonumber\\
&&~~~~~~~~
- \frac{1}{4} \varphi^{\alpha i} \varphi^{j}_{\alpha} \lambda^{\beta}_i \bm{\lambda}_{\beta j}
\bigg]
\nonumber\\
&&
+G^{-5} \bigg[~
\frac{3{\rm i}}{8} G^{ij} G^{kl} \varphi^{\alpha}_{k} \varphi_{\alpha l}
\Big(
X_{ij} \mathbf{W} + \mathbf{X}_{ij} W
-{\rm i} \lambda^{\beta}_i \bm{\lambda}_{\beta j}
\Big) \bigg]
~.
\label{LM-EOM-111}
\end{eqnarray}
It is possible to check explicitly that $W_{R^2}$ is primary, $S_{\alpha}^i W_{R^2} = 0$.
Moreover, we find that $W_{R^2}$ can be expressed as
\begin{subequations}\label{W-R2-composite}
\begin{equation} \label{WR2_as_W1}
W_{R^2} = \frac{{\rm i}}{32} G \de_{i j} \mathcal{R}^{i j}_{1}~,
\end{equation}
where
\begin{equation}
\mathcal{R}^{i j}_{1} =G^{-2} \left(\delta^i_k \delta^j_l - \frac{1}{2 G^2} G^{i j}G_{k l}\right) H^{k l}_{\rm bilinear}
~,
\end{equation}
and
\begin{equation}
H^{k l}_{\rm bilinear} = 2 W \mathbf{X}^{k l} + 2 \mathbf{W} X^{k l} - 2 \lambda^{\alpha k} \bm{\lambda}^{l}_{\alpha}~.
\end{equation}
\end{subequations}
This is exactly the structure of the composite vector multiplets $\mathbb{W}_{n}$ in (\ref{W_family}) with $n=1$ and with a precise choice of composite linear multiplet $H^{k l}:= H^{kl}_{\rm bilinear}$. See section \ref{NewCurvature2} for more detail on these composite vector multiplets. Besides the remarkably simple form of \eqref{W-R2-composite}, this result guarantees that the right-hand side of eq.~\eqref{LM-eom}, and in particular \eqref{LM-EOM-111}, is a primary superfield satisfying the vector multiplet constraints \eqref{vector-defs}, as expected. This is a very non-trivial consistency check of eq.~\eqref{LM-EOM-111}.
\subsection{Standard Weyl multiplet}
The conformal supergravity equation of motion is obtained by varying \eqref{HD-action} with respect to the standard Weyl multiplet prepotential superfield $\mathfrak U$ or equivalently the field $Y$. The resulting EOM is
\begin{subequations} \label{Y-eom}
\begin{eqnarray}
0&=& \cJ=J_{\rm EH}
+\alpha J_{\rm Weyl}
+\beta J_{\log{W}} + \gamma J_{{R^2}}~,
\label{Y-eom-0}
\end{eqnarray}
with
\begin{eqnarray}
J_{\rm EH}
&=&
\frac{3}{32}(G- W^{3})
~,
\label{Y-eom-1}
\\
J_{\rm Weyl}
&=&
- \frac{3}{64} W Y + \frac{3}{16} W W^{{a} {b}}W_{{a} {b}} + \frac{3}{32} F_{{a} {b}} W^{{a} {b}} - \frac{3}{16} \lambda_{i}^{\alpha} X^{i}_{\alpha}
~,
\\
J_{\log{W}}
&=&
-\frac{3}{1024}W Y
- \frac{69}{1024} W^{{a} {b}} W_{{a} {b}} W
+ \frac{3}{32} \Box W
- \frac{3}{64} F_{{a}{b}} W^{{a} {b}}
- \frac{3}{256}\lambda^{\alpha}_{j} X^{j}_{\alpha}
\nonumber\\
&&+ \frac{3}{128} F^{{a} {b}} F_{{a} {b}} W^{-1}
- \frac{9}{128} X^{i j} X_{i j} W^{-1}
+ \frac{3{\rm i}}{32} (\Gamma^{{a}})^{\alpha \beta} W^{-1} \lambda^{i}_{\alpha} {\de}_{{a}}\lambda_{\beta i}
\nonumber\\
&&+ \frac{3}{64} W^{-1} ({\de}^{{a}}W) {\de}_{{a}}W
- \frac{3 {\rm i} }{128} (\Sigma^{{a} {b}})^{\alpha \beta} F_{{a}{b}} \lambda^{i}_{\alpha} \lambda_{\beta i} W^{-2}
-\frac{3{\rm i}}{64} X^{i j} \lambda^{\alpha}_{i} \lambda_{ \alpha j} W^{-2}
\nonumber\\
&&- \frac{3 {\rm i}}{32} (\Gamma_{{b}})^{\alpha \beta} \ \lambda_{\alpha}^{j} \lambda_{{\beta} j} W^{-2} {\de}^{{b}} W
- \frac{3}{256} \lambda^{\alpha i} \lambda^{\beta}_{i} \lambda^{j}_{\alpha} \lambda_{{\beta} j} W^{-3}
~,
\label{Y-eom-2}
\\
J_{{R^2}}
&=&
-4 W \mathbf{W}^2
+ G^{-1} \Big(\,
W G^{ij} \mathbf{X}_{ij}
+ G^{ij}X_{ij} \mathbf{W}
-{\rm i} G^{ij} \lambda^{\alpha}_{i} \bm{\lambda}_{\alpha j}
\Big)
~.
\label{Y-eom-3}
\end{eqnarray}
\end{subequations}
Here $J_{\rm EH}$ is the EOM from the gauged supergravity action, $S_{\rm gSG}$, which does not have any contribution from the cosmological constant term $\kappa$.
Analogous to the case of 4D $\cN=2$ conformal supergravity \cite{KT,Butter:2010sc}, the 5D Weyl multiplet may be described by a single unconstrained real prepotential $\mathfrak U$ \cite{BKNT-M14}. Given a system of matter superfields $\varphi^i$, one can construct a Noether coupling between $\mathfrak U$ and the matter supercurrent $\cJ$ of the form
\begin{eqnarray}
S[\varphi^i] &=& \int\mathrm d^{5|8}z\, E\, {\mathfrak U} \cJ = \int \mathrm d^5 x \, e \, \Big( Y J + \cdots \Big)~,
\end{eqnarray}
where $J = \cJ \vert$.
The supercurrent $\cJ$ is a dimension-3 primary real scalar superfield. The conformal supergravity EOM \eqref{Y-eom} is obtained by varying the supergravity action with respect to $\mathfrak U$
\begin{eqnarray}
\frac{\delta S[\varphi^i]}{\delta \mathfrak U} = \cJ = 0~.
\end{eqnarray}
The supercurrent multiplet in 5D was constructed by Howe and Lindstr\"om \cite{HL}. It satisfies the conservation equation
\begin{eqnarray}
\de^{ij} \cJ = 0~, \label{supercurrent}
\end{eqnarray}
when all matter superfields equations of motion are satisfied.
Thus, as a consistency check, we shall prove that the expression $\cJ$ in \eqref{Y-eom} satisfies the conservation constraint \eqref{supercurrent}. It has been shown in \cite{BKNT-M14} that this constraint holds for $J_{\rm EH}$.
For each invariant, we have indeed verified that the corresponding $J$ is a primary superfield of dimension 3. It also satisfies
$
\de^{ij}J=0
$
provided the vector and linear multiplet equations of motion, eqs.~\eqref{VM-eom} and \eqref{LM-eom} respectively, are imposed. Using \textit{Cadabra}, an explicit calculation shows that, off-shell, the following relations hold:
\begin{subequations} \label{supercurrent_equation}
\begin{eqnarray}
\de^{ij}J_{\rm Weyl}&=&\frac{3 {\rm i}}{4} W H^{ij}_{\rm Weyl}
~,\\
\de^{ij}J_{\log{W}}&=&\frac{3 {\rm i}}{4} W H^{ij}_{\log{W}}
~,\\
\de^{ij}J_{R^2}&=& 8 {\rm i} W H^{ij}_{R^2}- 8 {\rm i} G^{ij}W_{R^2}
~.
\end{eqnarray}\end{subequations}
It is then clear that the right-hand sides of \eqref{supercurrent_equation} are proportional to the composite vector and linear multiplets appearing in \eqref{VM-eom} and \eqref{LM-eom}. Consequently, the supercurrent conservation equation \eqref{supercurrent} is satisfied once the equations of motion for the compensators are used. This represents a very non-trivial consistency check of \eqref{Y-eom-1}--\eqref{Y-eom-3}.
\section{An alternative scalar curvature-squared invariant}
\label{NewCurvature2}
Recall the action defined in terms of a composite vector multiplet superfield, $\mathbf{W}$, written in eqs.~(\ref{S_boldW}) and (\ref{comp-W}), respectively. There also exists an infinite family of alternative composite vector multiplets built out of a linear multiplet compensator, $G_{i j}$, and a primary real $\cO^{(2n)}$ multiplet $H^{i_1\cdots i_{2n}}=H^{(i_1\cdots i_{2n})}$ obeying $\de_\alpha^{(j}H^{i_1\cdots i_{2n})}=0$. In 5D this was constructed in \cite{BKNT-M14} by extending the 4D $\cN=2$ analysis of \cite{BK11}. We refer to \cite{BKNT-M14} for details, including the precise definition of, and literature on, $\cO^{(2n)}$ multiplets, and simply state the final result here.
The following superfields
\begin{equation} \label{W_family}
\mathbb{W}_{n}
= \frac{{\rm i} (2n)!}{2^{2n+3} (n+1)! (n-1)!} G \nabla_{i j} \mathcal{R}_n^{i j}~,
\end{equation}
where
\begin{equation}
\mathcal{R}^{i j}_{n} =
G^{-2n}\left(\delta^i_k \delta^j_l - \frac{1}{2 G^2} G^{i j}G_{k l}\right) H^{k l i_1 \cdots i_{2n-2}} G_{(i_1 i_2} \cdots G_{i_{2n-3} i_{2n-2})}~,
\end{equation}
all satisfy the constraints \eqref{vector-defs} for any positive integer $n$.
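As a check of the normalisation in \eqref{W_family}, at $n=1$ the prefactor collapses to the coefficient encountered earlier,
\begin{eqnarray*}
\frac{{\rm i}\, (2n)!}{2^{2n+3} (n+1)! (n-1)!}\bigg|_{n=1} = \frac{2\, {\rm i}}{2^5 \cdot 2 \cdot 1} = \frac{{\rm i}}{32}~,
\end{eqnarray*}
in agreement with \eqref{WR2_as_W1}.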
In fact, $\mathbb{W}_1$ is precisely the structure seen in $W_{R^2}$ of eq.~(\ref{WR2_as_W1}). By considering $n=2$, and choosing $H^{ijkl}$ to be the square of a linear multiplet $H^{ij}$ (distinguished from $G^{ij}$), $H^{ijkl}=H^{(ij}H^{kl)}$, we can engineer an alternative scalar curvature-squared invariant. The result is similar in spirit to the scalar curvature-squared invariant engineered for 4D $\cN=2$ in \cite{deWS} and directly related to 5D $\cN=1$ results in \cite{Ozkan:2016csy}.\footnote{GT-M is grateful for discussions with M.\,Ozkan on scalar curvature-squared invariants.}
Let us show how this works.
Consider the $n=2$ composite superfield:
\begin{equation}
\mathbb{W}_{2} = \frac{{\rm i}}{32} G \nabla_{i j} \mathcal{R}^{i j}_{2}~,
\label{W2-0}
\end{equation}
where
\begin{equation}
\mathcal{R}^{i j}_{2} = G^{-4}\left(\delta^i_k \delta^j_l - \frac{1}{2 G^2} G^{i j}G_{k l}\right) H^{(k l} H^{m n)}G_{m n}~.
\end{equation}
By explicitly computing \eqref{W2-0},
we may define $\mathbb{W}_2$ as a linear combination of real functions, $\mathcal{P}_A$ and $\mathcal{P}_{A B}{}^{i j}$, which are themselves comprised of descendants of the linear multiplets:
\begin{equation}
\mathbb{W}_{2} = 2 \mathcal{P}_{A} F^{A} + 2 {\rm i} \mathcal{P}_{A B}{}^{i j}\varphi^{\alpha A}_i \varphi^{B}_{\alpha j}
~.
\label{W2-2}
\end{equation}
Here the index $A = 1, 2$ indicates the two linear superfields, $G^1_{i j} = G_{i j}$ and $G^{2}_{i j} = H_{i j}$. Note that this is analogous to eq.~(2.5) in \cite{Ozkan:2016csy} with the first $A$ index fixed so that $\mathcal{F}_{A B} \rightarrow \mathcal{P}_{B}$ and with an appropriate normalisation factor added to the second term. All functions, $\mathcal{P}_A$ and $\mathcal{P}_{A B}{}^{i j}$, are defined as follows:
\begin{subequations}
\begin{eqnarray}
\mathcal{P}_{1} &=& \frac{1}{8} H^2 G^{-3} - \frac{3}{32} \left(G_{k l} H^{k l}\right)^2 G^{-5}~, \\
\mathcal{P}_{2} &=& \frac{1}{8} \left(G_{k l} H^{k l}\right) G^{-3}~,\\
\mathcal{P}_{1 1}{}^{i j} &=& -\frac{3}{16}G^{i j} H^2 G^{-5} - \frac{3}{16} \left(G_{k l} H^{k l}\right) H^{i j} G^{-5} + \frac{15}{64} \left( G_{k l} H^{k l}\right)^2 G^{i j} G^{-7}~, \\
\mathcal{P}_{1 2}{}^{i j} &=& \mathcal{P}_{2 1}{}^{i j} = \frac{1}{8} H^{i j} G^{-3} - \frac{3}{16} \left( G_{k l} H^{k l} \right) G^{i j} G^{-5} ~,\\
\mathcal{P}_{2 2}{}^{i j} &=& \frac{1}{8} G^{i j} G^{-3}~.
\label{P-e}
\end{eqnarray}
\end{subequations}
It is then a straightforward exercise to show that functions with two $A$ indices are derivatives of functions with one, that is:
\begin{equation}
\mathcal{P}_{A B}{}^{i j} = \frac{\partial \mathcal{P}_{A}}{\partial G^{B}_{i j}}~.
\end{equation}
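For instance, for the simplest entry (suppressing the $\varepsilon$-conventions used to raise and lower the ${\rm SU}(2)$ indices, which affect only index placement),
\begin{eqnarray*}
\mathcal{P}_{2 2}{}^{i j} = \frac{\partial \mathcal{P}_{2}}{\partial H_{i j}} = \frac{\partial}{\partial H_{i j}} \Big[ \frac18 \big(G_{k l} H^{k l}\big) G^{-3} \Big] = \frac18 G^{i j} G^{-3}~,
\end{eqnarray*}
reproducing \eqref{P-e}.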
They also satisfy the following constraints
\begin{equation}
\mathcal{P}_{A B}{}^{i j} = \mathcal{P}_{(A B)}{}^{i j}~, \; \; \; \mathcal{P}_{A B}{}^{i j} G^{B}_{j k} = - \frac{1}{2} \delta^i_k \mathcal{P}_{A}~.
\end{equation}
Lastly, we may define functions involving two derivatives of $\mathcal{P}_{A}$,
\begin{equation}
\mathcal{P}_{A B C}{}^{i j k l} := \frac{\partial \mathcal{P}_{A B}{}^{i j}}{\partial G^C_{k l}} = \frac{\partial^2 \mathcal{P}_{A}}{\partial G^B_{i j} \partial G^C_{k l}}~,
\end{equation}
which satisfy
\begin{equation}
\mathcal{P}_{A B C}{}^{i j k l} = \mathcal{P}_{(A B C)}{}^{i j k l}~, \; \; \; \mathcal{P}_{A B C}{}^{i j k l} \epsilon_{j k} = 0~.
\end{equation}
These are the constraints needed to ensure that $\mathbb{W}_{2}$ in \eqref{W2-2} satisfies \eqref{vector-defs}, which in our case are satisfied by construction.
To engineer the alternative scalar curvature-squared invariant, consider $G_{i j}$ to be a compensator and $H_{i j}$ to be the composite linear multiplet built out of a single Abelian vector multiplet, as in eq.~(\ref{HijVM}),
\begin{equation}
H^{i j} := H^{ij}_{\rm{VM}}
~.
\label{Hvm-2}
\end{equation}
In finding the $\mathbb{X}^{i j}_2$ descendant of $\mathbb{W}_2$, we are interested in the contributions quadratic in $F_{\rm VM}$, the relevant descendant field of $H^{i j}_{\rm VM}$. This is because $F_{\rm VM}$ satisfies
\begin{equation}
F_{\rm VM} = 4 W \Box W
+\cdots
~,
\end{equation}
where $\Box W$ gives rise to an $\cR$
contribution. Roughly speaking, by considering \eqref{W2-2}--\eqref{P-e} with the choice \eqref{Hvm-2}, we are squaring the kinetic term of the vector multiplet compensator which, in turn, leads to a scalar curvature squared invariant. In fact,
if we look at the dimension-2 scalar descendant of $\mathbb{W}_2$ we obtain
\begin{subequations}
\begin{eqnarray}
\mathbb{X}^{i j}_2 &:=& \frac{{\rm i}}{4} \nabla^{\alpha (i}\nabla^{j)}_{\alpha} \mathbb{W}_2 = \frac{1}{8} G^{i j} G^{-3} F^2_{\rm VM} + \cdots = \mathcal{P}_{2 2}{}^{i j} F^2_{\rm VM} + \cdots
~,
\\
G_{ij} \mathbb{X}^{i j}_2&=& 4 G^{-1} W^2(\Box W)^2+\cdots~.
\label{BFR2Vec}
\end{eqnarray}
\end{subequations}
Specifically, eq.~\eqref{BFR2Vec} is one term in the component action given by the BF action principle. If one then sets $G$ and $W$ to constants, by gauge fixing dilatations and using the (two-derivative) equations of motion, one is left with an $\cR^2$ contribution to the four-derivative component action. Although we have not yet analysed in detail the equations of motion and the component structure of this invariant, we expect it to play a role in the study of higher-derivative invariants in alternative off-shell superspace settings, such as the recent off-shell supergravity constructed in \cite{Hutomo:2022hdi} by using the variant hyper-dilaton Weyl multiplet of conformal supergravity. We leave further investigations along these lines for the future.
\vspace{0.3cm}
\noindent
{\bf Acknowledgements:}\\
We are grateful to M.\,Ozkan and Y.\,Pang for discussions and collaboration related to this work.
This work is supported by the Australian Research Council (ARC)
Future Fellowship FT180100353 and by the Capacity Building Package of the University
of Queensland.
G.G. and S.K. are supported
by the postgraduate scholarships
at the University of Queensland.
|
{
"arxiv_id": "2302.14248",
"language": "en",
"timestamp": "2023-03-01T02:06:52",
"url": "https://arxiv.org/abs/2302.14248",
"yymm": "2302"
} | \section*{Acknowledgements}
The authors thank Ian Waudby-Smith for insightful discussion and review.
\bibliographystyle{plainnat}
\section{Confidence Sequences for Fixed $v$}
\label{app:confseqreview}
Since our algorithm operates via reduction to pointwise confidence sequences,
we provide a brief self-contained review here. We refer the interested
reader to \citet{howard2021time} for a more thorough treatment.
A confidence sequence for a random process $X_t$ is a time-indexed
collection of confidence sets $\text{CI}_t$ with a time-uniform
coverage property $\mathbb{P}\left(\forall t \in \mathbb{N}: X_t \in
\text{CI}_t\right) \geq 1 - \alpha$. For real random variables,
the concept of a lower confidence sequence can be defined via
$\mathbb{P}\left(\forall t \in \mathbb{N}: X_t \geq L_t\right) \geq 1 -
\alpha$, and analogously for upper confidence sequences; and a lower
and upper confidence sequence can be combined to form a confidence
sequence $\text{CI}_t \doteq \left\{ x | L_t \leq x \leq U_t \right\}$
with coverage $(1 - 2 \alpha)$ via a union bound.
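Spelled out, the union bound step uses nothing beyond the two one-sided guarantees:
\begin{equation*}
\mathbb{P}\left(\exists t \in \mathbb{N}: X_t \notin \text{CI}_t\right)
\leq \mathbb{P}\left(\exists t \in \mathbb{N}: X_t < L_t\right)
+ \mathbb{P}\left(\exists t \in \mathbb{N}: X_t > U_t\right)
\leq \alpha + \alpha = 2 \alpha.
\end{equation*}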
One method for constructing a lower confidence sequence for a real valued
parameter $z$ is to exhibit a real-valued random process $E_t(z)$ which,
when evaluated at the true value $z^*$ of the parameter of interest,
is a non-negative supermartingale with initial value of 1, in which case
Ville's inequality ensures $\mathbb{P}\left(\forall t \in \mathbb{N}:
E_t(z^*) \leq \alpha^{-1}\right) \geq 1 - \alpha$. If the process
$E_t(z)$ is monotonically decreasing in $z$, then the infimum of the
acceptance region $L_t \doteq \inf \left\{ z | E_t(z) \leq \alpha^{-1} \right\}$
is suitable as a lower confidence sequence; an upper confidence sequence
can be defined analogously.
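For intuition, the following minimal sketch inverts such a monotone wealth process into a lower confidence bound by bisection; it is illustrative only (not the reference implementation), and \texttt{wealth} is a hypothetical callable returning $E_t(z)$ on the data observed so far.
\begin{verbatim}
def lower_conf_bound(wealth, alpha, lo=0.0, hi=1.0, tol=1e-9):
    # Invert a wealth process z -> E_t(z), assumed monotonically
    # decreasing in z, into a lower confidence bound: the infimum
    # of the acceptance region {z : E_t(z) <= 1 / alpha}.
    threshold = 1.0 / alpha
    if wealth(lo) <= threshold:
        return lo  # nothing is rejected; the bound is trivial
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if wealth(mid) >= threshold:
            lo = mid  # mid is still rejected; boundary lies above
        else:
            hi = mid
    return lo
\end{verbatim}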
We use the above strategy. First we lower bound \cref{eqn:binmart},
\begin{equation}
E_t(\lambda) \doteq \exp\left(\lambda S_t - \sum_{s \leq t} \log\left(h(\lambda, \theta_s)\right) \right),
\tag{\ref{eqn:binmart}}
\end{equation}
and eliminate the explicit dependence upon $\theta_s$, by noting
$\log h(\lambda, \cdot)$ is concave and therefore
\begin{equation}
E_t(\lambda) \geq \exp\left(\lambda t \left(\hat{q}_t - q_t\right) - t \log\left(h\left(\lambda, q_t\right)\right) \right),
\label{eqn:binmartlb}
\end{equation}
because
$t f(q) = \max_{1^\top \theta = t q} \sum_{s \leq t} f(\theta_s)$
for any concave $f$. \cref{eqn:binmartlb} is monotonically decreasing
in $q_t$ and therefore defines a lower confidence sequence. For an
upper confidence sequence we use $q_t = 1 - (1 - q_t)$ and a lower
confidence sequence on $(1 - q_t)$.
Regarding the choice of $\lambda$, in practice many $\lambda$ are
(implicitly) used via stitching (i.e., using different $\lambda$ in
different time epochs and majorizing the resulting bound in closed form)
or mixing (i.e., using a particular fixed mixture of \cref{eqn:binmartlb}
via a discrete sum or continuous integral over $\lambda$); our choices
will depend upon whether we are designing for tight asymptotic rates or
low computational footprint. We provide specific details associated
with each theorem or experiment.
Note \cref{eqn:binmartlb} is invariant to permutations of $X_{1:t}$
and hence the empirical CDF at time $t$ is a sufficient statistic for
calculating \cref{eqn:binmartlb} at any $v$.
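As a concrete illustration, \cref{eqn:binmartlb} can be evaluated directly from the empirical frequency; this is a minimal sketch under the sign convention $S_t = t(\hat{q}_t - q_t)$, not the reference implementation.
\begin{verbatim}
import numpy as np

def h(lam, z):
    # MGF of a centered Bernoulli(z) random variable.
    return (1 - z) * np.exp(-lam * z) + z * np.exp(lam * (1 - z))

def log_wealth_lb(lam, q, q_hat, t):
    # Lower bound on log E_t(lam) at candidate q = avg-CDF(v), given
    # the empirical frequency q_hat = (1/t) sum_s 1[X_s <= v].
    return lam * t * (q_hat - q) - t * np.log(h(lam, q))
\end{verbatim}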
\subsection{Challenge with quantile space}
\label{app:whynotquantilespace}
In this section assume all CDFs are invertible for ease of exposition.
In the i.i.d. setting, \cref{eqn:binmart} can be evaluated at the
(unknown) fixed $v(q)$ which corresponds to quantile $q$. Without
knowledge of the values, one can assert the existence of such values for a
countably infinite collection of quantiles and a careful union bound of
Ville's inequality on a particular discretization can yield an LIL rate:
this is the approach of \citet{howard2022sequential}. A key
advantage of this approach is covariance to monotonic transformations.
Beyond the i.i.d. setting, one might hope to analogously evaluate
\cref{eqn:binmart} at an unknown fixed value $v_t(q)$ which for each $t$
corresponds to quantile $q$. Unfortunately, $v_t(q)$ is not just unknown,
but also unpredictable with respect to the initial filtration,
and the derivation that \cref{eqn:binmart} is a martingale depends
upon $v$ being predictable. In the case that $X_t$ is independent
but not identically distributed, $v_t(q)$ is initially predictable
and therefore this approach could work, but would only be valid under
this assumption.
The above argument does not completely foreclose the possibility of
a quantile space approach, but merely serves to explain why the
authors pursued a value space approach in this work. We encourage
the interested reader to innovate.
\section{Unit Interval Bounds}
\label{app:boundzeroone}
\subsection{Lower Bound}
\label{app:lowerboundzeroone}
\begin{algorithm}[tb]
\caption{Unit Interval Lower Bound. $\epsilon(d)$ is an increasing function specifying the resolution of discretization at level $d$. $\lboracle_t\left(\rho; \delta, d, \Psi_t\right)$ is a lower confidence sequence for fixed value $\rho$ with coverage at least $\left(1 - \delta\right)$.}
\label{alg:lbunionzeroone}
\begin{algorithmic}
\STATE {\bfseries Input:} value $v$; confidence $\alpha$; sufficient statistic $\Psi_t$. \hfill\algcommentlight{comments below indicate differences from upper bound}
\STATE \algcommentlight{$\Psi_t \doteq X_{1:t}$ or $\Psi_t \doteq (W_{1:t}, X_{1:t})$}
\STATE {\bfseries Output:} $L_t(v)$ satisfying \cref{eqn:overallguarantee}.
\IIF{$v < 0$} \textbf{return} 0 \unskip\ \algorithmicend\ \algorithmicif \hfill \algcommentlight{check for underflow of range rather than overflow}
\STATE $l \leftarrow 0$ \hfill \algcommentlight{initialize with 0 instead of 1}
\STATE $v \leftarrow \min\left(1, v\right)$ \hfill \algcommentlight{project onto [0, 1] using $\min$ instead of $\max$}
\FOR{$d=1$ {\bfseries to} $\infty$}
\STATE $\rho_d \leftarrow \epsilon(d)^{-1} \lfloor \epsilon(d) v \rfloor$ \hfill \algcommentlight{use floor instead of ceiling}
\STATE $\delta_d \leftarrow \nicefrac{\alpha}{2^d \epsilon(d)}$
\STATE $l \leftarrow \max\left(l, \lboracle_t\left(\rho_d; \delta_d, \Psi_t\right)\right)$ \hfill \algcommentlight{use lower bound instead of upper bound}
\IF{$0 = \sum_{s \leq t} 1_{X_s \in \left[\rho_d, v\right)}$}
\RETURN $l$
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
\cref{alg:lbunionzeroone} is extremely similar to
\cref{alg:ubunionzeroone}: the differences are indicated in comments.
Careful inspection reveals the output of \cref{alg:ubunionzeroone},
$U_t(v)$, can be obtained from the output of \cref{alg:lbunionzeroone},
$L_t(v)$, via $U_t(v) = 1 - L_t(1 - v)$; but only if the sufficient
statistics are adjusted such that
$\uboracle_t(\rho_d; \delta, \Psi_t) = 1 - \lboracle_t(1 - \rho_d; \delta, \Psi_t')$.
The reference implementation uses this strategy.
\subsection{Proof of \cref{thm:coverage}}
\label{app:proofthmcoverage}
We prove the results for the upper bound \cref{alg:ubunionzeroone}; the argument for the lower bound \cref{alg:lbunionzeroone} is similar.
The algorithm terminates when we find a $d$ such that $0 = \sum_{s \leq t} 1_{X_s \in (v, \rho_d]}$. Since $\epsilon(d) \uparrow \infty$ as $d \uparrow \infty$, we have $\rho_d = \epsilon(d)^{-1} \lceil \epsilon(d) v \rceil \downarrow v$, so that $\sum_{s \leq t} 1_{X_s \in (v, \rho_d]} \downarrow 0$. So the algorithm must terminate.
At level $d$, we have $\epsilon(d)$ confidence sequences. The $i$\textsuperscript{th} confidence sequence at level $d$ satisfies
\begin{align}
P(\exists t: \overline{\CDF}_t(i / \epsilon(d)) > \uboracle_t(i / \epsilon(d); \delta_d, d, \Psi_t))
&\leq \frac{\alpha}{2^d \epsilon(d)}.
\end{align}
Taking a union bound over all confidence sequences at all levels, we have
\begin{align}
P\left(
\exists d \in \mathbb{N}, i \in \lbrace 1, \dots, \epsilon(d) \rbrace,
t \in \mathbb{N}:
\overline{\CDF}_t(i / \epsilon(d))
> \uboracle_t(i / \epsilon(d); \delta_d, d, \Psi_t)
\right) \leq \alpha.
\end{align}
Thus we are assured that, for any $v \in \mathbb{R}$,
\begin{align}
P(\forall t, d: \overline{\CDF}_t(v) \leq \overline{\CDF}_t(\rho_d)
\leq \uboracle_t(\rho_d; \delta_d, d, \Psi_t))
\geq 1 - \alpha.
\end{align}
\Cref{alg:ubunionzeroone} will return $\uboracle_t(\rho_d; \delta_d, d, \Psi_t)$ for some $d$ unless all such values are larger than one, in which case it returns the trivial upper bound of one. This proves the upper-bound half of guarantee \eqref{eqn:overallguarantee}. A similar argument proves the lower-bound half, and union bound over the upper and lower bounds finishes the argument.
\section{Proof of \cref{thm:smoothedregretzeroone}}
\thmSmooth*
\label{app:proofsmoothedregretzeroone}
Note $v$ is fixed for the entire argument below, and $\xi_t$ denotes
the unknown smoothness parameter at time $t$.
We will argue that the upper confidence radius $U_t(v) - t^{-1} \sum_{s
\leq t} 1_{X_s \leq v}$ has the desired rate. An analogous argument
applies to the lower confidence radius $t^{-1} \sum_{s \leq t} 1_{X_s
\leq v} - L_t(v)$, and the confidence width $U_t(v) - L_t(v)$ is the
sum of these two.
For the proof we introduce an integer parameter $\eta \geq 2$ which
controls both the grid spacing ($\epsilon(d) = \eta^d$) and the
allocation of error probabilities to levels ($\delta_d = \alpha /
(\eta^d \epsilon(d))$). In the main paper we set $\eta = 2$.
At level $d$ we construct $\eta^d$ confidence sequences on an
evenly-spaced grid of values $1/\eta^d, 2/\eta^d, \dots, 1$. We divide
total error probability $\alpha / \eta^d$ at level $d$ among these
$\eta^d$ confidence sequences, so that each individual confidence sequence
has error probability $\alpha/\eta^{2d}$.
For a fixed bet $\lambda$ and value $\rho$, $S_t$ defined in
\cref{subsec:unitinterval} is sub-Bernoulli qua \citet[Definition
1]{howard2021time} and therefore sub-Gaussian with variance process
$V_t \doteq t K(q_t)$, where $K(p) \doteq \nicefrac{(2 p - 1)}{2
\log\left(\nicefrac{p}{1-p}\right)}$ is from \citet{kearns1998large};
from \citet[Proposition 5]{howard2021time} it follows that there exists
an explicit mixture distribution over $\lambda$ such that
\begin{align}
M(t; q_t, \tau) &\doteq \sqrt{2 \left(t K(q_t) + \tau\right) \log\left(\frac{\eta^{2d}}{2 \alpha} \sqrt{\frac{t K(q_t) + \tau}{\tau}} + 1\right)}
\label{eqn:binnormalcurved}
\end{align}
is a (curved) uniform crossing boundary, i.e., satisfies
\begin{align*}
\frac{\alpha}{\eta^{2d}} &\geq \mathbb{P}\left(\exists t \geq 1: S_t \geq \frac{M(t; q_t, \tau)}{t} \right),
\end{align*}
where $S_t \doteq \overline{\CDF}_t(\rho) - t^{-1} \sum_{s \leq t} 1_{X_s \leq \rho}$ (cf.\ \cref{eqn:binmart}), and $\tau$ is a hyperparameter
to be determined further below.
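For concreteness, a direct transcription of \cref{eqn:binnormalcurved}; the continuity guard at $q = \nicefrac{1}{2}$ and the clipping are our own choices.
\begin{verbatim}
import numpy as np

def K(p):
    # Kearns-Saul variance proxy (2p - 1) / (2 log(p / (1 - p))),
    # extended by continuity to K(1/2) = 1/4.
    p = np.clip(np.asarray(p, dtype=float), 1e-12, 1 - 1e-12)
    with np.errstate(divide="ignore", invalid="ignore"):
        out = (2 * p - 1) / (2 * np.log(p / (1 - p)))
    return np.where(np.abs(p - 0.5) < 1e-9, 0.25, out)

def boundary(t, q, d, alpha, eta=2, tau=1.0):
    # Curved crossing boundary M(t; q, tau) with per-sequence
    # error probability alpha / eta^(2 d).
    v = t * K(q) + tau
    return np.sqrt(2 * v * np.log(
        eta ** (2 * d) / (2 * alpha) * np.sqrt(v / tau) + 1))
\end{verbatim}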
Because the values at level $d$ are $1/\eta^d$ apart, the worst-case
discretization error in the estimated average CDF value is
\begin{align*}
\overline{\CDF}_t(\epsilon(d)^{-1} \lceil \epsilon(d) v \rceil) - \overline{\CDF}_t(v)
\leq 1/(\xi_t \eta^d),
\end{align*}
and the total worst-case confidence radius including discretization error is
\begin{align*}
r_d(t) &= \frac{1}{\xi_t \eta^d} + \sqrt{\frac{2 \left(K(q_t) + \tau/t\right)}{t} \log\left(\frac{\eta^{2d}}{2 \alpha} \sqrt{\frac{t K(q_t) + \tau}{\tau}} + 1\right)}.
\end{align*}
Now evaluate at $d$ such that
$\sqrt{\psi_t} < \xi_t \eta^d \leq \eta \sqrt{\psi_t}$ where $
\psi_t \doteq t \left(K(q_t) + \tau/t\right)^{-1}$,
\begin{align*}
r_d(t) &\leq \sqrt{\frac{K(q_t) + \tau/t}{t}} + \sqrt{\frac{2 \left(K(q_t) + \tau/t\right)}{t} \log\left(\frac{\xi_t^{-2} \eta^2}{2 \alpha} \left(\frac{t}{K(q_t) + \tau/t}\right) \sqrt{\frac{t K(q_t) + \tau}{\tau}} + 1\right)}.
\end{align*}
The final result is not very sensitive to the choice of $\tau$, and we
use $\tau = 1$ in practice.
\section{Extensions}
\subsection{Arbitrary Support}
\label{app:arbitrarysupport}
\begin{algorithm}[tb]
\caption{Entire Real Line Upper Bound. $\epsilon(d)$ is an increasing function specifying the resolution of discretization at level $d$. $\uboracle_t\left(\rho; \delta, d, \Psi_t\right)$ is an upper confidence sequence for fixed value $\rho$ with coverage at least $\left(1 - \delta\right)$.}
\label{alg:ubunionrealline}
\begin{algorithmic}
\STATE {\bfseries Input:} value $v$; confidence $\alpha$; sufficient statistic $\Psi_t$.
\STATE \algcommentlight{e.g. $\Psi_t \doteq X_{1:t}$ or $\Psi_t \doteq (W_{1:t}, X_{1:t})$}
\STATE {\bfseries Output:} $U_t(v)$ satisfying \cref{eqn:overallguarantee}.
\STATE $u \leftarrow 1$
\FOR{$d=1$ {\bfseries to} $\infty$}
\STATE $k_d \leftarrow \lceil \epsilon(d) v \rceil$ \hfill \algcommentlight{Sub-optimal: see text for details}
\STATE $\rho_d \leftarrow \epsilon(d)^{-1} k_d$
\STATE $\delta_d \leftarrow \left(\nicefrac{\alpha}{2^d}\right) \left(\nicefrac{3}{(\pi^2 - 3) (1 + |k_d|)^2}\right)$ \hfill \algcommentlight{Union bound over $d \in \mathbb{N}$ and $k_d \in \mathbb{Z}$}
\STATE $u \leftarrow \min\left(u, \uboracle_t\left(\rho_d; \delta_d, d, \Psi_t\right)\right)$
\IF{$0 = \sum_{s \leq t} 1_{X_s \in \left(v, \rho_d\right]}$}
\RETURN $u$
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
\cref{alg:ubunionrealline} is a variation on \cref{alg:ubunionzeroone} which
does not assume a bounded range, and instead uses a countable dense
subset of the entire real line. Using the same argument as in the proof
of \cref{thm:smoothedregretzeroone}, with the error probabilities
from the modified union bound, we have
\begin{align*}
|k_d| - 1 &< \eta^{d} |v| \leq |k_d|, \\
\xi_t / \sqrt{\psi_t} &> \eta^{-d} \geq \eta^{-1} \xi_t / \sqrt{\psi_t} \\
\implies 1 + |k_d| &< 2 + \eta\, \xi_t^{-1} |v| \sqrt{\psi_t} \\
\implies r_d(t) &\leq
\logvwidthbound,
\end{align*}
demonstrating a logarithmic penalty in the probe value $v$ (e.g., \cref{fig:lognormalwidth}).
\paragraph{Sub-optimality of $k_d$} The choice of $k_d$
in \cref{alg:ubunionrealline} is amenable to analysis,
but unlike in \cref{alg:ubunionzeroone}, it is not optimal.
In \cref{alg:ubunionzeroone} the probability is allocated uniformly at
each depth, and therefore the closest grid point provides the tightest
estimate. However in \cref{alg:ubunionrealline}, the probability
budget decreases with $|k_d|$ and because $k_d$ can be negative, it
is possible that a different $k_d$ can produce a tighter upper bound.
Since every $k_d$ is covered by the union bound, in principle we could
optimize over all $k_d$ but it is unclear how to do this efficiently.
In our implementation we do not search over all $k_d$, but we do adjust
$k_d$ to be closest to the origin with the same empirical counts.
\subsection{Discrete Jumps}
\label{app:discretejumps}
\paragraph{Known Countably Infinite}
Suppose $D$ is smooth wrt a reference measure $M$, where $M$ is of
the form $$M = \breve{M} + \sum_{i \in I} \zeta_i 1_{v_i},$$ with $I$
a countable index set, $1 \geq \sum_{i \in I} \zeta_i$ and $\breve{M}$
a sub-probability measure normalizing to $(1 - \sum_{i \in I} \zeta_i)$.
Then we can allocate a $(1 - \sum_{i \in I} \zeta_i)$ fraction of our overall
error budget to bounding $\breve{M}$ using \cref{alg:ubunionzeroone}
and \cref{alg:lbunionzeroone}. For the remaining $\{ v_i \}_{i \in I}$
we can run explicit pointwise bounds, each with error budget
fraction $\zeta_i$.
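A minimal sketch of this allocation (function name hypothetical):
\begin{verbatim}
def allocate_budget(alpha, zetas):
    # Split the error budget proportionally to the reference
    # measure: the continuous part gets alpha * (1 - sum(zetas))
    # and point v_i gets alpha * zetas[i].
    z = sum(zetas)
    assert 0.0 <= z <= 1.0
    return alpha * (1.0 - z), [alpha * zeta for zeta in zetas]
\end{verbatim}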
Computationally, early termination of the infinite search over the
discrete bounds is possible. Suppose (wlog) $I$ indexes $\zeta$ in
non-increasing order, i.e., $i \leq j \implies \zeta_i \geq \zeta_j$: then as soon as there are no remaining empirical
counts between the desired value $v$ and the most recent discrete value
$v_i$, the search over discrete bounds can terminate.
\section{Importance-Weighted Variant}
\label{app:importanceweighted}
\subsection{Modified Bounds}
\cref{alg:ubunionzeroone} and \cref{alg:lbunionzeroone} are unmodified,
with the caveat that the oracles $\lboracle_t$ and $\uboracle_t$ must
now operate on an importance-weighted realization $(W_{1:t}, X_{1:t})$,
rather than directly on the realization $X_{1:t}$.
\subsubsection{DDRM Variant}
For simplicity we describe the lower bound $\lboracle_t$ only. The
upper bound is derived analogously via the equality
$Y_s = W_s - (W_s - Y_s)$ and a lower bound on $(W_s - Y_s)$: see
\citet[Remark 3]{waudby2022anytime} for more details.
This is the Heavy NSM from \citet{mineiro2022lower} combined with the
$L^*$ bound of \citet[\S4.2.3]{orabona2019modern}. The Heavy NSM allows us
to handle importance weights with unbounded variance, while the Adagrad
$L^*$ bound facilitates lazy evaluation.
For fixed $v$, let $Y_t = W_t 1_{X_t \geq v}$ be a non-negative
real-valued discrete-time random process, let $\hat{Y}_t \in [0, 1]$ be a
predictable sequence, and let $\lambda \in [0, 1)$ be a fixed scalar bet.
Then $$
\begin{aligned}
E_t(\lambda) &\doteq \exp\left( \lambda \sum_{s \leq t} \left(\hat{Y}_s - \mathbb{E}_{s-1}\left[Y_s\right]\right) + \sum_{s \leq t} \log\left(1 + \lambda \left(Y_s - \hat{Y}_s\right) \right) \right) \\
\end{aligned}
$$ is a test supermartingale~\citep[\S3]{mineiro2022lower}. Manipulating, $$
\begin{aligned}
E_t(\lambda) &= \exp\left( \lambda \sum_{s \leq t} \left(Y_s - \mathbb{E}_{s-1}\left[Y_s\right]\right) - \sum_{s \leq t} \underbrace{\left( \lambda \left(Y_s - \hat{Y}_s\right) - \log\left(1 + \lambda \left(Y_s - \hat{Y}_s\right) \right) \right)}_{\doteq h\left(\lambda \left(Y_s - \hat{Y}_s\right)\right)} \right) \\
&= \exp\left( \lambda \sum_{s \leq t} \left(Y_s - \mathbb{E}_{s-1}\left[Y_s\right]\right) - \sum_{s \leq t} h\left(\lambda \left(Y_s - \hat{Y}_s\right)\right) \right) \\
&\geq \exp\left( \lambda \sum_{s \leq t} \left(Y_s - \mathbb{E}_{s-1}\left[Y_s\right]\right) - \left( \sum_{s \leq t} h\left(\lambda \left(Y_s - \hat{Y}_t^*\right)\right) \right) - \text{Reg}(t) \right) & \left(\dagger\right) \\
&= \exp\left( \lambda \left(t \hat{Y}_t^* - \sum_{s \leq t} \mathbb{E}_{s-1}\left[Y_s\right]\right) + \sum_{s \leq t} \log\left(1 + \lambda \left(Y_s - \hat{Y}_t^*\right) \right) - \text{Reg}(t) \right),
\end{aligned}
$$ where for $(\dagger)$ we use a no-regret learner
on $h()$ with regret $\text{Reg}(t)$ to any constant
prediction $\hat{Y}_t^* \in [0, 1]$. The function $h()$ is
$M$-smooth with $M = \frac{\lambda^2}{(1 - \lambda)^2}$
so we can get an $L^*$ bound~\citep[\S4.2.3]{orabona2019modern} of
$$
\begin{aligned}
\text{Reg}(t) &\leq 4 \frac{\lambda^2}{(1 - \lambda)^2} + 4 \frac{\lambda}{1 - \lambda} \sqrt{\sum_{s \leq t} h\left(\lambda \left(Y_s - \hat{Y}_t^*\right)\right)} \\
&= 4 \frac{\lambda^2}{(1 - \lambda)^2} + 4 \frac{\lambda}{1 - \lambda} \sqrt{\lambda \left(\sum_{s \leq t} Y_s - t \hat{Y}_t^*\right) - \sum_{s \leq t} \log\left(1 + \lambda \left(Y_s - \hat{Y}_t^*\right)\right)},
\end{aligned}
$$ thus essentially our variance process is inflated by a square-root. In
exchange we do not have to actually run the no-regret algorithm, which
eases the computational burden. We can compete with any in-hindsight
prediction: if we choose to compete with the clipped running mean
$\overline{Y_t}$ then we end up with
\begin{equation}
E_t(\lambda) \geq \exp\left( \lambda \left(\min\left(t, \sum_{s \leq t} Y_s\right) - \sum_{s \leq t} \mathbb{E}_{s-1}\left[Y_s\right]\right) + \sum_{s \leq t} \log\left(1 + \lambda \left(Y_s - \overline{Y_t}\right) \right) - \text{Reg}(t) \right),
\label{eqn:heavynsmplusadagrad}
\end{equation}
which is implemented in the reference implementation as \pythonmethod{LogApprox:getLowerBoundWithRegret(lam)}. The $\lambda$-s are mixed
using DDRM from \citet[Thm. 4]{mineiro2022lower},
implemented via the \pythonmethod{DDRM} class and the
\pythonmethod{getDDRMCSLowerBound} method in the reference implementation.
\pythonmethod{getDDRMCSLowerBound} provably correctly early terminates
the infinite sum by leveraging
$$
\begin{aligned}
\sum_{s \leq t} \log\left(1 + \lambda \left(Y_s - \overline{Y_t}\right) \right) &\leq \lambda \left(\sum_{s \leq t} Y_s - t \overline{Y_t}\right)
\end{aligned}
$$ as seen in the termination criterion of the inner method \pythonmethod{logwealth(mu)}.
To minimize computational overhead, we can lower bound
$\log(a + b)$ for $b \geq 0$
using strong concavity qua \citet[Thm. 3]{mineiro2022lower},
resulting in the following geometrically spaced collection of sufficient
statistics:
$$
\begin{aligned}
(1 + k)^{n_l} = z_l &\leq z < z_u = (1 + k) z_l = (1 + k)^{n_l+1}, \\
\end{aligned}
$$ along with distinct statistics for $z = 0$. $k$ is a hyperparameter
controlling the granularity of the discretization (tighter lower bound vs.
more space overhead): we use $k = 1/4$ exclusively in our experiments.
Note the coverage guarantee is preserved for any choice of $k$ since we
are lower bounding the wealth.
Given these statistics, the wealth can be lower bounded given any
bet $\lambda$ and any in-hindsight prediction $\hat{Y}_t^*$ via $$
\begin{aligned}
f(z) &\doteq \log\left(1 + \lambda \left(z - \hat{Y}_t^*\right) \right), \\
f(z) &\geq \alpha f(z_l) + (1 - \alpha) f(z_u) + \frac{1}{2} \alpha (1 - \alpha) m(z_l), \\
\alpha &\doteq \frac{z_u - z}{z_u - z_l}, \\
m(z_l) &\doteq \left( \frac{k z_l \lambda}{k z_l \lambda + 1 - \lambda \hat{Y}_t^*} \right)^2.
\end{aligned}
$$ Thus when accumulating the statistics, for each $Y_s = W_s 1_{X_s
\geq v}$, a value of $\alpha$ must be accumulated at key $f(z_l)$,
a value of $(1 - \alpha)$ accumulated at key $f(z_u)$, and a value of
$\alpha (1 - \alpha)$ accumulated at key $m(z_l)$.
The \pythonmethod{LogApprox::update} method from the reference implementation
implements this.
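For concreteness, a minimal sketch of the bucketing update; the reference implementation differs in details.
\begin{verbatim}
import math
from collections import defaultdict

def make_stats():
    # bucket -> [sum alpha, sum (1 - alpha), sum alpha (1 - alpha)]
    return defaultdict(lambda: [0.0, 0.0, 0.0])

def update(stats, y, k=0.25):
    # Accumulate chord weights for one observation y >= 0 so that,
    # for any later bet and in-hindsight prediction, the wealth can
    # be lower bounded bucket-by-bucket.
    if y == 0.0:
        stats["zero"][0] += 1.0
        return
    n = math.floor(math.log(y) / math.log(1.0 + k))
    z_l, z_u = (1.0 + k) ** n, (1.0 + k) ** (n + 1)
    a = (z_u - y) / (z_u - z_l)  # interpolation weight toward z_l
    w = stats[n]
    w[0] += a
    w[1] += 1.0 - a
    w[2] += a * (1.0 - a)
\end{verbatim}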
Because these sufficient statistics are data linear, a further
computational trick is to accumulate the sufficient statistics with
equality only, i.e., for $Y_s = W_s 1_{X_s = v}$; and when
the CDF curve is desired, combine these point statistics into
cumulative statistics. In this manner only $O(1)$ incremental
work is done per datapoint; while an additional $O(t \log(t))$ work is done
to accumulate all the sufficient statistics only when the bounds
need be computed. The method
\pythonmethod{StreamingDDRMECDF::Frozen::\_\_init\_\_} from the reference
implementation contains this logic.
\subsubsection{Empirical Bernstein Variant}
\label{subsec:empirical_bernstein_variant}
For simplicity we describe the lower bound $\lboracle_t$ only. The
upper bound is derived analogously via the equality
$Y_s = W_s - (W_s - Y_s)$ and a lower bound on $(W_s - Y_s)$: see
\citet[Remark 3]{waudby2022anytime} for more details.
This is the empirical Bernstein NSM from \citet{howard2021time} combined
with the $L^*$ bound of \citet[\S4.2.3]{orabona2019modern}. Relative to
DDRM it is faster to compute, has a more concise sufficient statistic,
and is easier to analyze; but it is wider empirically, and theoretically
requires finite importance weight variance to converge.
For fixed $v$, let $Y_t = W_t 1_{X_t \geq v}$ be a non-negative
real-valued discrete-time random process, let $\hat{Y}_t \in [0, 1]$ be a
predictable sequence, and let $\lambda \in [0, 1)$ be a fixed scalar bet.
\defcitealias{fan2015exponential}{Fan}
Then $$
\begin{aligned}
E_t(\lambda) &\doteq \exp\left( \lambda \sum_{s \leq t} \left(\hat{Y}_s - \mathbb{E}_{s-1}\left[Y_s\right]\right) + \sum_{s \leq t} \log\left(1 + \lambda \left(Y_s - \hat{Y}_s\right) \right) \right) \\
\end{aligned}
$$ is a test supermartingale~\citep[\S3]{mineiro2022lower}. Manipulating, $$
\begin{aligned}
E_t(\lambda) &= \exp\left(
\lambda \sum_{s \leq t} \left(Y_s - \mathbb{E}_{s-1}\left[Y_s\right]\right)
- \sum_{s \leq t} \underbrace{\left( \lambda \left(Y_s - \hat{Y}_s\right) - \log\left(1 + \lambda \left(Y_s - \hat{Y}_s\right) \right) \right)}_{\doteq h\left(\lambda \left(Y_s - \hat{Y}_s\right)\right)}
\right) \\
&\geq \exp\left(
\lambda \sum_{s \leq t} \left(Y_s - \mathbb{E}_{s-1}\left[Y_s\right]\right) - h(-\lambda) \sum_{s \leq t} \left(Y_s - \hat{Y}_s\right)^2\right) & \text{\citepalias[Lemma 4.1]{fan2015exponential}} \\
&\geq \exp\left(
\lambda \sum_{s \leq t} \left(Y_s - \mathbb{E}_{s-1}\left[Y_s\right]\right) - h(-\lambda) \left( \text{Reg}(t) + \sum_{s \leq t} \left(Y_s - \hat{Y}^*_t\right)^2\right) \right) & \left(\dagger\right), \\
&\doteq \exp\left(\lambda S_t - h(-\lambda) V_t\right), \label{eq:empbern_supermg}
\end{aligned}
$$ where $S_t = \sum_{s \leq t} \left(Y_s - \mathbb{E}_{s-1}\left[Y_s\right]\right)$ and for $(\dagger)$ we use a no-regret learner on squared loss on feasible set $[0, 1]$ with regret $\text{Reg}(t)$ to any constant in-hindsight prediction $\hat{Y}_t^* \in [0, 1]$. Since $Y_s$ is unbounded above, the loss is not Lipschitz and we cannot get fast rates for squared loss, but we can run Adagrad and get an $L^*$ bound, $$
\begin{aligned}
\text{Reg}(t) &= 2 \sqrt{2} \sqrt{\sum_{s \leq t} g_s^2} \\
&= 4 \sqrt{2} \sqrt{\sum_{s \leq t} (Y_s - \hat{Y}_s)^2} \\
&\leq 4 \sqrt{2} \sqrt{\text{Reg}(t) + \sum_{s \leq t} (Y_s - \hat{Y}_t^*)^2}, \\
\implies \text{Reg}(t) &\leq 16 + 4 \sqrt{2} \sqrt{8 + \sum_{s \leq t} (Y_s - \hat{Y}_t^*)^2}.
\end{aligned}
$$ Thus basically our variance process is inflated by an additive square root.
We will compete with $\hat{Y}^*_t = \min\left(1, \frac{1}{t} \sum_{s \leq t} Y_s\right)$.
A key advantage of the empirical Bernstein over DDRM is the availability
of both a conjugate (closed-form) mixture over $\lambda$ and a closed-form
majorized stitched boundary. This yields both computational speedup and
analytical tractability.
For a conjugate mixture, we use the truncated gamma prior from
\citet[Theorem 2]{waudby2022anytime} which yields mixture wealth
\newcommand{\eqniwbernada}{
M_t^{\text{EB}} \doteq \left(\frac{\tau^\tau e^{-\tau}}{\Gamma(\tau) - \Gamma(\tau, \tau)}\right) \left( \frac{1}{\tau + V_t} \right) {}_1F_1\left(1, V_t + \tau + 1, S_t + V_t + \tau\right)
}
\begin{equation}
\eqniwbernada,
\label{eqn:iwempbernada}
\end{equation}
where ${}_1F_1(\ldots)$ is Kummer's confluent hypergeometric function
and $\Gamma(\cdot, \cdot)$ is the upper incomplete gamma function.
For the hyperparameter, we use $\tau = 1$.
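Numerically, \cref{eqn:iwempbernada} is available in closed form via SciPy; this is a sketch, and a practical implementation must guard against overflow of the hypergeometric function for large arguments.
\begin{verbatim}
import numpy as np
from scipy.special import gamma, gammainc, hyp1f1

def mixture_wealth(s_t, v_t, tau=1.0):
    # gammainc is the regularized lower incomplete gamma, so
    # Gamma(tau) - Gamma(tau, tau) = gamma(tau) * gammainc(tau, tau).
    c = tau ** tau * np.exp(-tau) / (gamma(tau) * gammainc(tau, tau))
    return (c / (tau + v_t)) * hyp1f1(1.0, v_t + tau + 1.0,
                                      s_t + v_t + tau)

# Reject a candidate once mixture_wealth(s_t, v_t) >= 1 / alpha.
\end{verbatim}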
\subsection{Proof of \cref{thm:iwsmoothedregretzeroone}}
\label{app:proofiwsmoothedregretzeroone}
\thmIwSmooth*
Note $v$ is fixed for the entire argument below, and $\xi_t$ denotes
the unknown smoothness parameter at time $t$.
We will argue that the upper confidence radius $U_t(v) - t^{-1}
\sum_{s \leq t} W_s 1_{X_s \leq v}$ has the desired rate. An analogous
argument applies to the lower confidence radius. One difference
from the non-importance-weighted case is that, to remain sub-exponential,
the lower bound is constructed from an upper bound $U'_t(v)$ on
$t^{-1} \sum_{s \leq t} W_s (1 - 1_{X_s \leq v})$ via $L_t(v) = 1 - U'_t(v)$, which introduces an
additional $B_t = t^{-1} \sum_{s \leq t} (W_s - 1)$ term to the width.
(Note, because $\forall t: \mathbb{E}_{t-1}[W_t - 1] = 0$, this term will
concentrate, but we will simply use the realized value here.)
For the proof we introduce an integer parameter $\eta \geq 2$ which
controls both the grid spacing ($\epsilon(d) = \eta^d$) and the
allocation of error probabilities to levels ($\delta_d = \alpha /
(\eta^d \epsilon(d))$). In the main paper we set $\eta = 2$.
At level $d$ we construct $\eta^d$ confidence sequences on an
evenly-spaced grid of values $1/\eta^d, 2/\eta^d, \dots, 1$. We divide
total error probability $\alpha / \eta^d$ at level $d$ among these
$\eta^d$ confidence sequences, so that each individual confidence sequence
has error probability $\alpha/\eta^{2d}$.
For a fixed bet $\lambda$ and value $\rho$, $S_t$ defined in
\cref{subsec:empirical_bernstein_variant} is sub-exponential
qua \citet[Definition 1]{howard2021time} and therefore
from \cref{lemma:iwcurved} there exists an explicit mixture
distribution over $\lambda$ inducing (curved) boundary
\begin{align}
\frac{\alpha}{\eta^{2d}} &\geq \mathbb{P}\left( \exists t \geq 1 : \frac{S_t}{t} \geq \max\left(\frac{C(\tau)}{t}, u\left(V_t; \tau, \frac{\alpha}{\eta^{2d}}\right) \right) \right), \nonumber \\
u\left(V_t; \tau, \frac{\alpha}{\eta^{2d}}\right) &= \sqrt{2 \left(\frac{(\tau + V_t)/t}{t}\right) \log\left( \sqrt{\frac{\tau + V_t}{2 \pi}} e^{-\frac{1}{12 (\tau + V_t) + 1}} \left( \frac{1 + \eta^{2d} \alpha^{-1}}{C(\tau)} \right) \right) } \nonumber \\
&\qquad + \frac{1}{t} \log\left(\sqrt{\frac{\tau + V_t}{2 \pi}} e^{-\frac{1}{12 (\tau + V_t) + 1}} \left( \frac{1 + \eta^{2d} \alpha^{-1}}{C(\tau)} \right)\right), \label{eqn:iwcurvedboundary}
\end{align}
where $S_t \doteq \overline{\CDF}_t(\rho) - t^{-1} \sum_{s \leq t} W_s 1_{X_s \leq
\rho}$, and $\tau$ is a hyperparameter to be determined further below.
Because the values at level $d$ are $1/\eta^d$ apart, the worst-case
discretization error in the estimated average CDF value is
\begin{align*}
\overline{\CDF}_t(\epsilon(d)^{-1} \lceil \epsilon(d) v \rceil) - \overline{\CDF}_t(v)
\leq 1/(\xi_t \eta^d),
\end{align*}
and the total worst-case confidence radius including discretization error is
\begin{align*}
r_d(t) &= \frac{1}{\xi_t \eta^d} + \max\left(\frac{C(\tau)}{t}, u\left(V_t; \tau, \frac{\alpha}{\eta^{2d}}\right)\right).
\end{align*}
Now evaluate at $d$ such that $\sqrt{\psi_t} < \xi_t \eta^d \leq \eta \sqrt{\psi_t}$ where $
\psi_t \doteq t \left((\tau + V_t)/t\right)^{-1}$,
\begin{align*}
r_d(t) &\leq \frac{1}{\sqrt{\psi_t}} + \max\left(\frac{C(\tau)}{t}, u\left(V_t; \tau, \frac{\alpha}{\eta^2 \xi_t^{-2} \psi_t}\right)\right) \\
&= \sqrt{\frac{(\tau + V_t)/t}{t}} + \tilde{O}\left(\sqrt{\frac{(\tau + V_t)/t}{t} \log\left(\xi_t^{-2} \alpha^{-1}\right)}\right) + \tilde{O}(t^{-1} \log\left(\xi_t^{-2} \alpha^{-1}\right)),
\end{align*}
where $\tilde{O}()$ elides polylog $V_t$ factors. The final result
is not very sensitive to the choice of $\tau$, and we use $\tau = 1$
in practice.
\begin{lemma}
\label{lemma:iwcurved}
Suppose $(S_t, V_t)$ is sub-$\psi_e$ qua \citet[Definition 1]{howard2021time}, where
\begin{align*}
\psi_e(\lambda) &\doteq -\lambda - \log(1 - \lambda),
\end{align*}
i.e., $\exp\left( \lambda S_t - \psi_e(\lambda) V_t \right)$ is upper bounded by a nonnegative supermartingale for each $\lambda \in [0, 1)$; then there
exists an explicit mixture distribution over $\lambda$ with hyperparameter
$\tau > 0$ such that
\begin{align*}
\alpha &\geq \mathbb{P}\left( \exists t \geq 1 : \frac{S_t}{t} \geq \max\left(\frac{C(\tau)}{t}, u\left(V_t; \tau, \alpha\right) \right) \right), \\
u\left(V_t; \tau, \alpha\right) &= \sqrt{2 \left(\frac{(\tau + V_t)/t}{t}\right) \log\left( \sqrt{\frac{\tau + V_t}{2 \pi}} e^{-\frac{1}{12 (\tau + V_t) + 1}} \left( \frac{1 + \alpha^{-1}}{C(\tau)} \right) \right) } \\
&\qquad + \frac{1}{t} \log\left(\sqrt{\frac{\tau + V_t}{2 \pi}} e^{-\frac{1}{12 (\tau + V_t) + 1}} \left( \frac{1 + \alpha^{-1}}{C(\tau)} \right)\right), \\
C(\tau) &\doteq \frac{\tau^\tau e^{-\tau}}{\Gamma(\tau) - \Gamma(\tau, \tau)},
\end{align*}
is a (curved) uniform crossing boundary.
\begin{proof}
We can form the conjugate mixture using a truncated gamma prior from
\citet[Proposition 9]{howard2021time}, in the form from \citet[Theorem
2]{waudby2022anytime}, which is our \cref{eqn:iwempbernada}.
\begin{equation*}
\eqniwbernada,
\end{equation*}
where ${}_1F_1(\ldots)$ is Kummer's confluent hypergeometric function.
Using \citet[identity 13.6.5]{olver2010nist},
\begin{align*}
{}_1F_1(1,a+1,x) &= e^x a x^{-a} \left(\Gamma(a) - \Gamma(a, x)\right)
\end{align*}
where $\Gamma(a, x)$ is the (unregularized) upper incomplete gamma function.
From \citet[Theorem 1.2]{pinelis2020exact} we have
\begin{align*}
\Gamma(a, x) &< \frac{x^a e^{-x}}{x - a} \\
\implies {}_1F_1(1,a+1,x) &\geq e^x a x^{-a} \Gamma(a) - \frac{a}{x - a}.
\end{align*}
Applying this to the mixture yields
\begin{align*}
M_t^{\text{EB}} %
&\geq \frac{C(\tau) e^{\tau + V_t + S_t}}{(\tau + V_t + S_t)^{\tau + V_t}} \Gamma\left(\tau + V_t\right) - \frac{C(\tau)}{S_t} \\
&\geq \frac{C(\tau) e^{\tau + V_t + S_t}}{(\tau + V_t + S_t)^{\tau + V_t}} \Gamma\left(\tau + V_t\right) - 1, & \left(\dagger\right)
\end{align*}
where $\left(\dagger\right)$ follows from the self-imposed
constraint $S_t \geq C(\tau)$.
This yields crossing boundary
\begin{align*}
\alpha^{-1} &= \frac{C(\tau) e^{\tau + V_t + S_t}}{(\tau + V_t + S_t)^{\tau + V_t}} \Gamma\left(\tau + V_t\right) - 1, \\
\frac{e^{\tau + V_t + S_t}}{\left(1 + \frac{S_t}{\tau + V_t}\right)^{\tau + V_t}} &= \left( \frac{\left(\tau + V_t\right)^{\tau + V_t}}{\Gamma\left(\tau + V_t\right)}\right) \left(\frac{1 + \alpha^{-1}}{C(\tau)} \right) \doteq \left( \frac{\left(\tau + V_t\right)^{\tau + V_t}}{\Gamma\left(\tau + V_t\right)}\right) \phi_t(\tau, \alpha), \\
\frac{e^{1 + \frac{S_t}{\tau + V_t}}}{\left(1 + \frac{S_t}{\tau + V_t}\right)} &= \left( \frac{\left(\tau + V_t\right)^{\tau + V_t}}{\Gamma\left(\tau + V_t\right)}\right)^{\frac{1}{\tau + V_t}} \phi_t(\tau, \alpha)^{\frac{1}{\tau + V_t}} \doteq z_t, \\
S_t &= \left(\tau + V_t\right) \left( -1 - W_{-1}\left(-z_t^{-1}\right) \right). %
\end{align*}
\citet[Theorem 1]{chatzigeorgiou2013bounds} states
\begin{align*}
W_{-1}(-e^{-u-1}) &\in -1 - \sqrt{2u} + \left[-u, -\frac{2}{3}u\right] \\
\implies -1 - W_{-1}(-e^{-u-1}) &\in \sqrt{2u} + \left[\frac{2}{3}u, u\right].
\end{align*}
Substituting yields
\begin{align}
\left(\tau + V_t\right) \left( -1 - W_{-1}\left(-z_t^{-1}\right) \right)
&\leq \left(\tau + V_t\right) \left( \sqrt{2 \log\left(\frac{z_t}{e^1}\right)} + \log\left(\frac{z_t}{e^1}\right) \right). \label{eqn:iwstresone}
\end{align}
From \citet[Equation (9.8)]{fellerintroduction} we have
\begin{align*}
\Gamma(1 + n) &\in \sqrt{2 \pi n} \left(\frac{n}{e^1}\right)^n \left[e^{\frac{1}{12 n + 1}}, e^{\frac{1}{12 n}}\right] \\
\implies \left( \frac{\left(\tau + V_t\right)^{\tau + V_t}}{\Gamma\left(\tau + V_t\right)} \right)^{\frac{1}{\tau + V_t}} &\in \left(\frac{\tau + V_t}{2 \pi}\right)^{\frac{1}{2(\tau + V_t)}} e^1 \left[e^{-\frac{1}{12 (\tau + V_t)^2}}, e^{-\frac{1}{12 (\tau + V_t)^2 + (\tau + V_t)}}\right].
\end{align*}
Therefore
\begin{align}
\left(\tau + V_t\right) \sqrt{2 \log\left(\frac{z_t}{e^1}\right)}
&\leq \left(\tau + V_t\right) \sqrt{2 \log\left(\left(\frac{\tau + V_t}{2 \pi}\right)^{\frac{1}{2(\tau + V_t)}} e^{-\frac{1}{12 (\tau + V_t)^2 + (\tau + V_t)}} \phi_t(\tau, \alpha)^{\frac{1}{\tau + V_t}}\right)} \nonumber \\
&= \sqrt{2 \left(\tau + V_t\right) \log\left( \sqrt{\frac{\tau + V_t}{2 \pi}} e^{-\frac{1}{12 (\tau + V_t) + 1}} \phi_t(\tau, \alpha) \right)}, \label{eqn:iwpartone}
\end{align}
and
\begin{align}
\left(\tau + V_t\right) \log\left(\frac{z_t}{e^1}\right)
&\leq \left(\tau + V_t\right) \log\left(\left(\frac{\tau + V_t}{2 \pi}\right)^{\frac{1}{2(\tau + V_t)}} e^{-\frac{1}{12 (\tau + V_t)^2 + (\tau + V_t)}} \phi_t(\tau, \alpha)^{\frac{1}{\tau + V_t}}\right) \nonumber \\
&= \log\left(\sqrt{\frac{\tau + V_t}{2 \pi}} e^{-\frac{1}{12 (\tau + V_t) + 1}} \phi_t(\tau, \alpha)\right). \label{eqn:iwparttwo}
\end{align}
Combining \cref{eqn:iwstresone,eqn:iwpartone,eqn:iwparttwo} yields the crossing boundary
\begin{align*}
\frac{S_t}{t} &= \sqrt{2 \left(\frac{(\tau + V_t)/t}{t}\right) \log\left( \sqrt{\frac{\tau + V_t}{2 \pi}} e^{-\frac{1}{12 (\tau + V_t) + 1}} \left( \frac{1 + \alpha^{-1}}{C(\tau)} \right) \right) } \\
&\qquad + \frac{1}{t} \log\left(\sqrt{\frac{\tau + V_t}{2 \pi}} e^{-\frac{1}{12 (\tau + V_t) + 1}} \left( \frac{1 + \alpha^{-1}}{C(\tau)} \right)\right).
\end{align*}
\end{proof}
\end{lemma}
\section{Simulations}
\label{app:simulations}
\subsection{i.i.d. setting}
\label{app:iidsimulations}
For non-importance-weighted simulations, we use
the Beta-Binomial boundary of \citet{howard2021time} for $\lboracle_t$ and
$\uboracle_t$.
The curved boundary is induced by the
test NSM
$$
\begin{aligned}
W_t(b; \hat{q}_t, q_t) &= \frac{\int_{q_t}^1 d\text{Beta}\left(p; b q_t, b (1 - q_t)\right)\ \left(\frac{p}{q_t}\right)^{t \hat{q}_t} \left(\frac{1 - p}{1 - q_t}\right)^{t (1 - \hat{q}_t)}}{\int_{q_t}^1 d\text{Beta}\left(p; b q_t, b (1 - q_t)\right)} \\
&= \frac{1}{(1 - q_t)^{t (1 - \hat{q}_t)} q_t^{t \hat{q}_t}} \left(\frac{\text{Beta}(q_t, 1, b q_t + t \hat{q}_t, b (1 - q_t) + t (1 - \hat{q}_t))}{\text{Beta}(q_t, 1, b q_t, b (1 - q_t))}\right) \\
\end{aligned}
$$ with prior parameter $b=1$. Further
documentation and details are in the
reference implementation \texttt{csnsquantile.ipynb}.
The importance-weighted simulations use
the constructions from \cref{app:importanceweighted}: the reference
implementation is in \texttt{csnsopquantile.ipynb}
for the DDRM variant and \texttt{csnsopquantile-ebern.ipynb} for the empirical Bernstein variant.
\begin{figure*}[p]
\centering
\begin{minipage}[t]{.49\textwidth}
\vskip 0pt
\centering
\includegraphics[width=.96\linewidth]{betacurves.pdf}
\vskip -12pt
\repeatcaption{fig:betacurves}{CDF bounds approaching the true CDF when sampling i.i.d.\ from a Beta(6,3) distribution. Note these bounds are simultaneously valid for all times and values.}
%
\end{minipage}
\hfill
\begin{minipage}[t]{.49\textwidth}
\vskip 0pt
\centering
\includegraphics[width=.96\linewidth]{betaboundwidth.pdf}
\vskip -12pt
\caption{Comparison to naive time-uniform DKW (which is only valid in the i.i.d.\ setting) for Beta distributions of varying smoothness. Decreasing smoothness degrades our bound.}
\label{fig:betawidth}
\end{minipage}
\vskip -0.2in
\end{figure*}
\begin{figure*}[p]
\centering
\begin{minipage}[t]{.49\textwidth}
\vskip 0pt
\centering
\includegraphics[width=.96\linewidth]{lognormalcurves.unbounded.pdf}
\vskip -12pt
\caption{CDF bounds approaching the true CDF when sampling i.i.d.\ from a lognormal(0, 1) distribution. Recall these bounds
are simultaneously valid for all times and values.}
\label{fig:lognormalcurves}
\end{minipage}
\hfill
\begin{minipage}[t]{.49\textwidth}
\vskip 0pt
\centering
\includegraphics[width=.96\linewidth]{lognormalboundwidth.unbounded.pdf}
\vskip -12pt
\repeatcaption{fig:lognormalwidth}{Demonstration of the variant described in \cref{subsec:extensions,app:arbitrarysupport} for distributions with arbitrary support, based on i.i.d.\ sampling from a variety of lognormal distributions. Logarithmic range dependence is evident.}
%
\end{minipage}
\vskip -0.2in
\end{figure*}
\begin{figure*}[p]
\centering
\begin{minipage}[t]{.49\textwidth}
\vskip 0pt
\centering
\includegraphics[width=.96\linewidth]{gaussiancurves.unbounded.pdf}
\vskip -12pt
\caption{CDF bounds approaching the true CDF when sampling i.i.d.\ from a Gaussian(0, 1) distribution. Recall these bounds
are simultaneously valid for all times and values.}
\label{fig:gaussiancurves}
\end{minipage}
\hfill
\begin{minipage}[t]{.49\textwidth}
\vskip 0pt
\centering
\includegraphics[width=.96\linewidth]{gaussianboundwidth.unbounded.pdf}
\vskip -12pt
\caption{Demonstration of the variant described in \cref{subsec:extensions,app:arbitrarysupport} for distributions with arbitrary support, based on i.i.d.\ sampling from a variety of Gaussian distributions. Logarithmic range dependence is evident.}
\label{fig:gaussianwidth}
\end{minipage}
\vskip -0.2in
\end{figure*}
\begin{figure*}[p]
\centering
\begin{minipage}[t]{.49\textwidth}
\vskip 0pt
\centering
\includegraphics[width=.96\linewidth]{contpolyatwoseeds.pdf}
\vskip -12pt
\repeatcaption{fig:contpolyatwoseeds}{Nonstationary Polya simulation for two seeds approaching different average conditional CDFs. Bounds successfully track the true CDFs in both cases. See \cref{exp:nonstationary}.}
%
\end{minipage}
\hfill
\begin{minipage}[t]{.49\textwidth}
\vskip 0pt
\centering
\includegraphics[width=.96\linewidth]{contpolyagammasweep.pdf}
\vskip -12pt
\caption{Maximum bound width, scaled by $\sqrt{\nicefrac{t}{\log(t)}}$
to remove the primary trend, as a function of $t$, for nonstationary
Polya simulations with different $\gamma_t$ schedules. See \cref{exp:nonstationary}}
\label{fig:contpolyagammasweep}
\end{minipage}
\vskip -0.2in
\end{figure*}
\begin{figure*}[p]
\centering
\begin{minipage}[t]{.49\textwidth}
\vskip 0pt
\centering
\includegraphics[width=.96\linewidth]{betaexpiwcurves.pdf}
\vskip -12pt
\caption{CDF bounds approaching the true counterfactual CDF when sampling i.i.d.\ from a Beta(6,3) with finite-variance importance weights, using DDRM for the oracle confidence sequence.}
\label{fig:iwexpbetacurves}
\end{minipage}
\hfill
\begin{minipage}[t]{.49\textwidth}
\vskip 0pt
\centering
\includegraphics[width=.96\linewidth]{betaparetoiwcurves.pdf}
\vskip -12pt
\repeatcaption{fig:iwexpparetocurves}{CDF bounds approaching the true counterfactual CDF when sampling i.i.d.\ from a Beta(6,3) with infinite-variance importance weights, using DDRM for the oracle confidence sequence.}
%
\end{minipage}
\vskip -0.2in
\end{figure*}
\begin{figure*}[p]
\centering
\begin{minipage}[t]{.49\textwidth}
\vskip 0pt
\centering
\includegraphics[width=.96\linewidth]{betaexpiwcurvesbern.pdf}
\vskip -12pt
\caption{CDF bounds approaching the true counterfactual CDF when sampling i.i.d.\ from a Beta(6,3) with finite-variance importance weights, using Empirical Bernstein for the oracle confidence sequence.}
\label{fig:iwexpbetacurvesbern}
\end{minipage}
\hfill
\begin{minipage}[t]{.49\textwidth}
\vskip 0pt
\centering
\includegraphics[width=.96\linewidth]{betaparetoiwcurvesbern.pdf}
\vskip -12pt
\caption{CDF bounds approaching the true counterfactual CDF when sampling i.i.d.\ from a Beta(6,3) with infinite-variance importance weights, using Empirical Bernstein for the oracle confidence sequence. Despite apparent convergence, eventually this simulation would reset the Empirical Bernstein oracle confidence sequence to trivial bounds.}
\label{fig:iwexpparetocurvesbern}
\end{minipage}
\vskip -0.2in
\end{figure*}
\section{Introduction}
\label{sec:introduction}
What would have happened if I had acted differently? Although this
question is as old as time itself, successful companies have recently
embraced this question via counterfactual estimation of outcomes from
the exhaust of their controlled experimentation platforms, e.g., based
upon A/B testing or contextual bandits. These experiments are run in
the real (digital) world, which is rich enough to demand statistical
techniques that are non-asymptotic, non-parametric, and non-stationary.
Although recent advances admit characterizing counterfactual average
outcomes in this general setting, counterfactually estimating a
complete distribution of outcomes has heretofore been possible only with
additional assumptions. Nonetheless, the practical importance of
this problem has motivated multiple solutions:
see \cref{tab:comparepriorart} for a summary, and \cref{sec:relatedwork}
for complete discussion.
Intriguingly, this problem is provably impossible in the data dependent
setting without additional assumptions.~\cite{rakhlin2015sequential}
Consequently, our bounds always achieve non-asymptotic coverage, but may
converge to zero width slowly or not at all, depending on the hardness of
the instance. We call this design principle
AVAST (\underline{A}lways \underline{V}alid \underline{A}nd
\underline{S}ometimes \underline{T}rivial).
In pursuit of our ultimate goal, we derive factual distribution estimators
which are useful for estimating the complete distribution
of outcomes from direct experience.
\paragraph{Contributions}
\begin{enumerate}
\item In \cref{subsec:unitinterval} we provide a time and value
uniform upper bound on the CDF of the averaged historical conditional
distribution of a discrete-time real-valued random process. Consistent
with the lack of sequential uniform convergence of linear threshold
functions~\citep{rakhlin2015sequential}, the bounds are always valid
and sometimes trivial, but with an instance-dependent guarantee: when
the data generating process is smooth qua \citet{block2022smoothed}
with respect to the uniform distribution on the unit interval, the bound
width adapts to the unknown smoothness parameter.
\item In \cref{subsec:extensions} we extend the previous technique to
distributions with support over the entire real line, and further to
distributions with a known countably infinite or unknown nowhere dense set of discrete jumps;
with analogous instance-dependent guarantees.
\item In \cref{subsec:importanceweighted} we extend the previous techniques to importance-weighted random variables, achieving our ultimate goal of
estimating a complete counterfactual distribution of outcomes.
\end{enumerate}
We exhibit our techniques in various simulations in \cref{sec:simulations}.
Computationally our procedures have comparable cost to point estimation
of the empirical CDF, as the empirical CDF is a sufficient statistic.
\defcitealias{chandak2021universal}{UnO21}
\defcitealias{waudby2022anytime}{WS22}
\defcitealias{huang2021off}{HLLA21}
\defcitealias{howard2022sequential}{HR22}
\begin{table*}[t]
\caption{Comparison to prior art for CDF estimation. See \cref{sec:relatedwork} for details.}
\label{tab:comparepriorart}
\vskip 0.15in
\begin{minipage}{\textwidth}
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lccccccc}
\toprule
Reference &
\multicolumn{1}{|p{1.5cm}|}{\centering Quantile-\\ Uniform?} &
\multicolumn{1}{|p{1.4cm}|}{\centering Time-\\ Uniform?} &
\multicolumn{1}{|p{1.7cm}|}{\centering Non-\\ stationary?} &
\multicolumn{1}{|p{1.75cm}|}{\centering Non-\\ asymptotic?} &
\multicolumn{1}{|p{1.75cm}|}{\centering Non-\\ parametric?} &
\multicolumn{1}{|p{1.75cm}|}{\centering Counter-\\ factual?} &
\multicolumn{1}{|p{1.2cm}|}{\centering $w_{\max}$-\\ free?\footnote{$w_{\max}$-free techniques are valid with unbounded importance weights.}} \\
\midrule
\citetalias{howard2022sequential} & \cmark & \cmark & & \cmark & \cmark & & N/A \\
\citetalias{huang2021off} & \cmark & & & \cmark & \cmark & \cmark & \\
\citetalias[\citetext{IID}]{chandak2021universal} & \cmark & & & \cmark & \cmark & \cmark & \cmark \\
\citetalias[\citetext{NS}]{chandak2021universal} & \cmark & & \cmark & & & \cmark & \cmark \\
\citetalias[\citetext{\S4}]{waudby2022anytime} & \cmark & \cmark & & \cmark & \cmark & \cmark & \cmark \\
this paper & \cmark & \cmark & \cmark & \cmark & \cmark & \cmark & \cmark \\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\end{minipage}
\vskip -0.1in
\end{table*}
\section{Problem Setting}
\label{sec:setting}
Let
$\left(\Omega, \mathcal{F}, \left\{ \mathcal{F}_t \right\}_{t \in \mathbb{N}}, \mathbb{P}\right)$
be a probability space equipped with a discrete-time filtration, on
which let $X_t$ be an adapted real valued random process. Let
$\mathbb{E}_t\left[\cdot\right] \doteq \mathbb{E}\left[\cdot | \mathcal{F}_t\right]$.
The quantity of interest is
\begin{equation}
\overline{\CDF}_t(v) \doteq \frac{1}{t} \sum_{s \leq t} \mathbb{E}_{s-1}\left[1_{X_s \leq v}\right],
\label{eqn:defcdf}
\end{equation}
i.e., the CDF of the averaged historical conditional distribution.
We desire simultaneously time and value uniform bounds which hold with
high probability, i.e., adapted sequences of maps $L_t, U_t: \mathbb{R}
\to [0,1]$ satisfying
\begin{equation}
\mathbb{P}\left( \substack{\forall t \in \mathbb{N} \\ \forall v \in \mathbb{R}}: L_t(v) \leq \overline{\CDF}_t(v) \leq U_t(v) \right) \geq 1 - 2\alpha.
\label{eqn:overallguarantee}
\end{equation}
In the i.i.d. setting, \cref{eqn:defcdf} is independent of $t$
and reduces to the CDF of the (unknown) generating distribution.
In this setting, the classic results of \citet{glivenko1933sulla} and
\citet{cantelli1933sulla} established uniform convergence of linear
threshold functions; subsequently the Dvoretzky-Kiefer-Wolfowitz (DKW) inequality characterized fixed time and value
uniform convergence rates~\cite{dvoretzky1956asymptotic,massart1990tight};
extended later to simultaneously time and value uniform
bounds~\cite{howard2022sequential}. The latter result guarantees an
$O\left(\sqrt{t^{-1} \log(\log(t))}\right)$ confidence interval width, matching the limit
imposed by the Law of the Iterated Logarithm.
\paragraph{AVAST principle} In contrast, under arbitrary data
dependence, linear threshold functions are not sequentially
uniformly convergent, i.e., the averaged historical empirical CDF
does not necessarily converge uniformly to the CDF of the averaged historical conditional
distribution.~\cite{rakhlin2015sequential} Consequently, additional
assumptions are required to provide a width guarantee. In this
paper we design bounds that are \underline{A}lways \underline{V}alid
\underline{A}nd \underline{S}ometimes \underline{T}rivial, i.e., under
worst-case data generation
$\forall t, \exists v: U_t(v) - L_t(v) = \Omega(1)$. Fortunately our bounds
are also equipped with an instance-dependent width guarantee dependent
upon the smoothness of the distribution to a reference measure qua
\cref{def:smooth}.
\paragraph{Additional Notation} Let $X_{a:b} = \{ X_s \}_{s=a}^b$
denote a contiguous subsequence of a random process. Let $\mathbb{P}_t$
denote the average historical conditional distribution, defined as a (random) distribution over the sample space $\mathbb{R}$ by
$\mathbb{P}_t(A) \doteq t^{-1} \sum_{s \leq t} \mathbb{E}_{s-1}\left[1_{X_s \in A}\right]$ for a Borel subset $A$.
\begin{figure*}[t!]
\vskip 0.05in
\begin{minipage}[t]{.49\textwidth}
\vskip 0pt
\centering
\includegraphics[width=1.2\linewidth]{algopic.pdf}
\vskip -12pt
\caption{Visualization of \cref{alg:ubunionzeroone}. The values of
interest $v$ form an uncountable set; the algorithm allocates probability
$\delta$ to maintain upper bounds on a countably infinite set of points
$\rho$ at different resolution levels $d$; and leverages the monotonicity
of $\overline{\CDF}_t(v)$. The algorithm searches over all $d$ to optimize
the overall bound via a provably correct early termination criterion.}
\label{fig:algopic}
\end{minipage}
\hfill
\begin{minipage}[t]{.49\textwidth}
\begin{algorithm}[H]
\caption{Unit Interval Upper Bound. $\epsilon(d)$ is an increasing function specifying the resolution of discretization at level $d$. $\uboracle_t\left(\rho; \delta, d, \Psi_t\right)$ is an upper confidence sequence for fixed value $\rho$ with coverage at least $\left(1 - \delta\right)$.}
\label{alg:ubunionzeroone}
\begin{algorithmic}
\STATE {\bfseries Input:} value $v$; confidence $\alpha$; sufficient statistic $\Psi_t$.
\STATE \algcommentlight{e.g. $\Psi_t \doteq X_{1:t}$ or $\Psi_t \doteq (W_{1:t}, X_{1:t})$}
\STATE {\bfseries Output:} $U_t(v)$ satisfying \cref{eqn:overallguarantee}.
\IIF{$v > 1$} \textbf{return} 1 \unskip\ \algorithmicend\ \algorithmicif
\STATE $u \leftarrow 1$
\STATE $v \leftarrow \max\left(0, v\right)$
\FOR{$d=1$ {\bfseries to} $\infty$}
\STATE $\rho_d \leftarrow \epsilon(d) \lceil \epsilon(d)^{-1} v \rceil$
\STATE $\delta_d \leftarrow \nicefrac{\alpha}{2^d \epsilon(d)}$
\STATE $u \leftarrow \min\left(u, \uboracle_t\left(\rho_d; \delta_d, \Psi_t\right)\right)$
\IF{$0 = \sum_{s \leq t} 1_{X_s \in \left(v, \rho_d\right]}$}
\RETURN $u$
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
\end{minipage}
\end{figure*}
\section{Derivations}
\label{sec:derivations}
\subsection{Bounds for the Unit Interval}
\label{subsec:unitinterval}
\paragraph{Fixed $v$} Initially we consider a fixed value $v \in
\mathbb{R}$. Let
$\theta_s \doteq \mathbb{E}_{s-1}\left[1_{X_s \leq v}\right]$
;
$\hat{q}_t \doteq t^{-1} \sum_{s \leq t} 1_{X_s \leq v}$
; and
$q_t \doteq t^{-1} \sum_{s \leq t} \theta_s = \overline{\CDF}_t(v)$.
Our goal is to uniformly bound the deviations of the martingale $S_t = t(\hat{q}_t - q_t)$, quantifying how far the estimand $\overline{\CDF}_t(v)$ lies from the empirical estimate $\hat{q}_t$. We bound these deviations using the following nonnegative martingale,
\begin{equation}
E_t(\lambda) \doteq \exp\left(\lambda S_t - \sum_{s \leq t} \log\left(h(\lambda, \theta_s)\right) \right),
\label{eqn:binmart}
\end{equation}
where
$\lambda \in \mathbb{R}$ is fixed
and
$h(\lambda, z) \doteq (1 - z) e^{-\lambda z} + z e^{\lambda (1 - z)}$, the moment-generating function of a centered Bernoulli($z$) random variable. \cref{eqn:binmart} is a test martingale qua \citet{shafer2011test},
i.e., it can be used to construct time-uniform bounds on $\hat{q}_t - q_t$
via Ville's inequality. This topic has received extensive
treatment in the literature: we provide a self-contained overview in
\cref{app:confseqreview}. For now, we will simply posit the existence
of fixed-value confidence sequences $\lboracle_t\left(v; \delta, \Psi_t\right)$ and
$\uboracle_t\left(v; \delta, \Psi_t\right)$ based on a sufficient statistic $\Psi_t$ which, for a fixed
value $v$, achieve
$\mathbb{P}\left(\forall t: \lboracle_t\left(v; \delta, \Psi_t\right) \leq \overline{\CDF}_t(v) \leq \uboracle_t\left(v; \delta, \Psi_t\right)\right) \geq 1 - \delta$.
Our goal is to compose these fixed value bounds into a value-uniform
bound, which we achieve via union-bounding over a discretization.
\paragraph{Countably Infinite Union}
In the i.i.d. setting, $q_t$ is independent of $t$, and each $q$ can be
associated with a largest value
$v^{(+)}(q) \doteq \sup\left\{ v | \mathbb{E}\left[1_{X \leq v}\right] \leq q \right\}$;
therefore the upper bound can be evaluated at fixed $q$, i.e.,
$\uboracle_t(q; \delta, \Psi_t) \doteq \uboracle_t\left(v^{(+)}(q); \delta, \Psi_t\right)$, to search
over $v$ qua \citet{howard2022sequential}; and analogously for the
lower bound. This ``quantile space'' approach has advantages, e.g.,
variance based discretization and covariance to monotonic transformations.
Unfortunately, under arbitrary data dependence, $q_t$ changes with time
and \cref{eqn:binmart} does not admit the same strategy, so we proceed
by operating in ``value space''. See \cref{app:whynotquantilespace}
for more details.
\cref{alg:ubunionzeroone}, visualized in \cref{fig:algopic}, constructs
an upper bound on \cref{eqn:defcdf} which, while valid for all values, is
designed for random variables ranging over the unit interval. It refines
an upper bound on the value $\rho \geq v$ and exploits monotonicity of
$\overline{\CDF}_t(v)$. A union bound over the (countably infinite) possible
choices for $\rho$ controls the coverage of the overall procedure.
Because the error probability $\delta$ decreases with resolution (and the
fixed-value confidence radius $\uboracle_t$ increases as $\delta$ decreases), the
procedure can terminate whenever no empirical counts remain between the
desired value $v$ and the current upper bound $\rho$, as all subsequent
bounds are dominated.
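In Python the search reads as follows; this is a minimal sketch with a hypothetical pointwise oracle \texttt{ub\_oracle} and an explicit depth cap (the loop provably terminates on its own, so the cap is only a guard).
\begin{verbatim}
import numpy as np

def upper_bound(v, alpha, xs, ub_oracle, max_depth=60):
    # ub_oracle(rho, delta, xs) is assumed to return a (1 - delta)
    # upper confidence sequence value for avg-CDF(rho).
    if v > 1.0:
        return 1.0
    u, v = 1.0, max(0.0, v)
    xs = np.asarray(xs)
    for d in range(1, max_depth + 1):
        eps = 2.0 ** d                    # epsilon(d) = 2^d
        rho = np.ceil(eps * v) / eps      # closest grid point >= v
        delta = alpha / (2.0 ** d * eps)  # budget at this level
        u = min(u, ub_oracle(rho, delta, xs))
        if not np.any((xs > v) & (xs <= rho)):
            break                         # deeper levels dominated
    return u
\end{verbatim}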
The lower bound is derived analogously in \cref{alg:lbunionzeroone} (which we have left to \cref{app:lowerboundzeroone} for the sake of brevity) and leverages a lower confidence
sequence $\lboracle_t\left(\rho; \delta, \Psi_t\right)$ (instead of an upper confidence sequence) evaluated at an
increasingly refined lower bound on the value $\rho \leftarrow \epsilon(d)^{-1}
\lfloor \epsilon(d) v \rfloor$.
\begin{theorem}\label{thm:coverage}
If $\epsilon(d) \uparrow \infty$ as $d \uparrow \infty$, then \cref{alg:ubunionzeroone,alg:lbunionzeroone} terminate with probability one. Furthermore, if for all $\rho$, $\delta$, and $d$ the algorithms $\lboracle_t(\rho; \delta, \Psi_t)$ and $\uboracle_t(\rho; \delta, \Psi_t)$ satisfy
\begin{align}
P(\forall t: \overline{\CDF}_t(\rho) \geq \lboracle_t(\rho; \delta, \Psi_t))
&\geq 1 - \delta, \\
P(\forall t: \overline{\CDF}_t(\rho) \leq \uboracle_t(\rho; \delta, \Psi_t))
&\geq 1 - \delta,
\end{align}
then guarantee \eqref{eqn:overallguarantee} holds with $U_t,L_t$ given by the outputs of \cref{alg:ubunionzeroone,alg:lbunionzeroone}, respectively.
\end{theorem}
\begin{proof}
See \cref{app:proofthmcoverage}.
\end{proof}
\Cref{thm:coverage} ensures \cref{alg:ubunionzeroone,alg:lbunionzeroone} yield the desired time- and value-uniform coverage, essentially due to the union bound and the
coverage guarantees of the oracles $\uboracle_t,\lboracle_t$. However, coverage
is also guaranteed by the trivial bounds $0 \leq \overline{\CDF}_t(v) \leq 1$.
The critical question is: what is the bound width?
\paragraph{Smoothed Regret Guarantee}
Even assuming $X$ is entirely supported on the unit interval, on
what distributions will \cref{alg:ubunionzeroone} provide a non-trivial
bound? Because each
$\left[\lboracle_t(\rho; \delta, \Psi_t), \uboracle_t(\rho; \delta, \Psi_t)\right]$
is a confidence sequence for the mean of the bounded random variable
$1_{X_s \leq \rho}$, we enjoy width guarantees at each of the (countably
infinite) $\rho$ which are covered by the union bound, but the guarantee
degrades as the depth $d$ increases. If the data generating process
focuses on an increasingly small part of the unit interval over time, the
width guarantees on our discretization will be insufficient to determine
the distribution. Indeed, explicit constructions demonstrating the
lack of sequential uniform convergence of linear threshold functions
increasingly focus in this manner.~\cite{block2022smoothed}
Conversely, if $\forall t: \overline{\CDF}_t(v)$ were Lipschitz continuous in
$v$, then our increasingly granular discretization would eventually
overwhelm any fixed Lipschitz constant and guarantee uniform convergence.
\cref{thm:smoothedregretzeroone} expresses this intuition, but using the
concept of smoothness rather than Lipschitz, as the former concept will
allow us to generalize further.
\begin{definition}
\label{def:smooth}
A distribution $D$ is $\xi$-smooth wrt reference measure $M$ if $D \ll M$ and $\esssup_M \left(\nicefrac{dD}{dM}\right) \leq \xi^{-1}$.
\end{definition}
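For example, the Beta(6,3) distribution used in our simulations is roughly $0.39$-smooth wrt the uniform distribution on the unit interval, since its density peaks near $2.55$; a quick numerical check:
\begin{verbatim}
import numpy as np
from scipy.stats import beta

# xi^{-1} is the essential supremum of dD/dM; wrt the uniform
# distribution on [0, 1] this is just the maximum density.
grid = np.linspace(0.0, 1.0, 100001)
print(1.0 / beta(6, 3).pdf(grid).max())  # ~ 0.392
\end{verbatim}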
\begin{figure*}[t]
\centering
\begin{minipage}[t]{.49\textwidth}
\vskip 0pt
\centering
\includegraphics[width=.96\linewidth]{betacurves.pdf}
\vskip -12pt
\caption{CDF bounds approaching the true CDF when sampling i.i.d.\ from a Beta(6,3) distribution. Note these bounds are simultaneously valid for all times and values.}
\label{fig:betacurves}
\end{minipage}
\hfill
\begin{minipage}[t]{.49\textwidth}
\vskip 0pt
\centering
\includegraphics[width=.96\linewidth]{contpolyatwoseeds.pdf}
\vskip -12pt
\caption{Nonstationary Polya simulation for two seeds approaching different average conditional CDFs. Bounds successfully track the true CDFs in both cases. See \cref{exp:nonstationary}.}
\label{fig:contpolyatwoseeds}
\end{minipage}
\vskip -0.2in
\end{figure*}
When the reference measure is the uniform distribution on the unit
interval, $\xi$-smoothness implies a $\xi^{-1}$-Lipschitz CDF. However,
when the reference measure has its own curvature, or charges points,
the concepts diverge. In this case the reference measure is a probability
measure, and therefore $\xi \leq 1$. Note that as $\xi \to 0$, the smoothness
constraint is increasingly relaxed. As \cref{thm:smoothedregretzeroone}
indicates, when the smoothness constraint relaxes, convergence is slowed.
\begin{restatable}{theorem}{thmSmooth}
\label{thm:smoothedregretzeroone}
Let $U_t(v)$ and $L_t(v)$ be the upper and lower bounds returned
by \cref{alg:ubunionzeroone} and \cref{alg:lbunionzeroone}
respectively, when evaluated with $\epsilon(d) = 2^d$ and the confidence sequences $\lboracle_t$ and $\uboracle_t$
of \cref{eqn:binnormalcurved}. If $\forall t: \mathbb{P}_t$ is
$\xi_t$-smooth wrt the uniform distribution on the unit interval then
\begin{equation}
\begin{split}
&\forall t, \forall v: U_t(v) - L_t(v) \leq \\
&\qquad \sqrt{\frac{V_t}{t}} + \tilde{O}\left(\sqrt{\frac{V_t}{t} \log\left(\xi_t^{-2} \alpha^{-1} t^{3/2}\right)}\right),
\end{split}
\label{eqn:mainthm}
\end{equation}
where $q_t \doteq \overline{\CDF}_t(v)$;
$V_t \doteq \nicefrac{1}{t} + \nicefrac{(q_t - 1/2)}{\log\left(\nicefrac{q_t}{1-q_t}\right)}$;
and $\tilde{O}()$ elides polylog $V_t$ factors.
\end{restatable}
\begin{proof} See \cref{app:proofsmoothedregretzeroone}.
\end{proof}
\cref{thm:smoothedregretzeroone} matches our empirical results in two
important aspects: (i) logarithmic dependence upon smoothness (e.g.,
\cref{fig:uniformfail}); (ii) tighter intervals for more extreme quantiles
(e.g., \cref{fig:betacurves}). Note the choice $\epsilon(d) = 2^d$
ensures the loop in \cref{alg:ubunionzeroone} terminates after at most
$\log_2(\nicefrac{1}{\Delta})$ iterations, where $\Delta$ is the minimum difference
between two distinct realized values.
\subsection{Extensions}
\label{subsec:extensions}
\paragraph{Arbitrary Support} In \cref{app:arbitrarysupport} we
describe a variant of \cref{alg:ubunionzeroone} which uses
a countable dense subset of the entire real line.
It enjoys a similar guarantee to \cref{thm:smoothedregretzeroone},
but with an additional width which is logarithmic in the probe
value $v$:
\newcommand{\logvwidthbound}{ \tilde{O}\left(\sqrt{\frac{V_t}{t} \log\left(\left(2 + \xi_t |v| t^{-1/2}\right)^2 \xi_t^{-2} \alpha^{-1} t^{3/2}\right)}\right)
}
$\logvwidthbound$.
Note in this case $\xi_t$ is defined relative to (unnormalized) Lebesgue
measure and can therefore exceed 1.
\paragraph{Discrete Jumps} If $\mathbb{P}_t$ is smooth wrt a
reference measure which charges a countably infinite number of known
discrete points, we can explicitly union bound over these additional
points proportional to their density in the reference measure.
In this case we preserve the above value-uniform guarantees. See \cref{app:discretejumps} for more details.
For distributions which charge unknown discrete points, we note the proof of \cref{thm:smoothedregretzeroone} only exploits
smoothness local to $v$.
Therefore if the set of discrete points is nowhere
dense, we eventually recover the guarantee
of \cref{eqn:mainthm} after a ``burn-in'' time
$t$ which is logarithmic in the minimum
distance from $v$ to a charged discrete point.
\newcommand{CDF bounds approaching the true counterfactual CDF when sampling i.i.d.\ from a Beta(6,3) with infinite-variance importance weights, using DDRM for the oracle confidence sequence.}{CDF bounds approaching the true counterfactual CDF when sampling i.i.d.\ from a Beta(6,3) with infinite-variance importance weights, using DDRM for the oracle confidence sequence.}
\begin{figure*}[t]
\centering
\begin{minipage}[t]{.49\textwidth}
\vskip 0pt
\centering
\includegraphics[width=.96\linewidth]{uniformsweep.pdf}
\vskip -12pt
\caption{As smoothness decreases, we require more time to reach the same maximum confidence width. For low smoothness, DKW dominates our method. The logarithmic dependence matches our theory. See \cref{paragraph:fail}.}
\label{fig:uniformfail}
\end{minipage}
\hfill
\begin{minipage}[t]{.49\textwidth}
\vskip 1pt
\centering
\includegraphics[width=.96\linewidth]{betaparetoiwcurves.pdf}
\vskip -12pt
\caption{CDF bounds approaching the true counterfactual CDF when sampling i.i.d.\ from a Beta(6,3) with infinite-variance importance weights, using DDRM for the oracle confidence sequence.}
\label{fig:iwexpparetocurves}
\end{minipage}
\vskip -0.2in
\end{figure*}
\subsection{Importance-Weighted Variant}
\label{subsec:importanceweighted}
An important use case is estimating a distribution based upon
observations produced from another distribution with a known shift,
e.g., arising in transfer learning~\cite{pan2010survey} or off-policy
evaluation~\cite{waudby2022anytime}. In this case the observations are
tuples $(W_t, X_t)$, where the importance weight $W_t$ is a Radon-Nikodym
derivative, implying
$\forall t: \mathbb{E}_t\left[W_t\right] = 1$ and a.s. $W_t \geq 0$;
and the goal is to estimate
$\overline{\CDF}_t(v) = t^{-1} \sum_{s \leq t} \mathbb{E}_{s-1}\left[ W_s 1_{X_s \leq v} \right]$.
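For intuition, a minimal Python sketch (ours) of the plug-in analog of this target; the algorithms replace this point estimate with the oracle confidence sequences discussed below:
\begin{verbatim}
import numpy as np

def iw_plugin_cdf(ws, xs, v):
    # Plug-in analog of t^{-1} sum_{s<=t} W_s 1{X_s <= v}.
    # Illustration only: the oracles replace this point estimate
    # with lower/upper confidence sequences.
    ws, xs = np.asarray(ws), np.asarray(xs)
    return float(np.sum(ws * (xs <= v)) / len(xs))
\end{verbatim}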
The basic approach in \cref{alg:ubunionzeroone} and
\cref{alg:lbunionzeroone} is still applicable in this setting,
but different $\lboracle_t$ and $\uboracle_t$ are required.
In \cref{app:importanceweighted} we present details on two possible
choices for $\lboracle_t$ and $\uboracle_t$: the first is based upon
the empirical Bernstein construction of \citet{howard2021time}, and the
second based upon the DDRM construction of \citet{mineiro2022lower}.
Both constructions leverage the $L^*$ Adagrad bound of
\citet{orabona2019modern} to enable lazy evaluation. The empirical
Bernstein version is amenable to analysis and computationally lightweight,
but requires finite importance weight variance to converge (the variance
bound need not be known, as the construction adapts to the unknown
variance). The DDRM version requires more computation but produces
tighter intervals. See \cref{subsec:iidiwexperiments} for a comparison.
Inspired by the empirical Bernstein variant, the following analog of
\cref{thm:smoothedregretzeroone} holds. Note $\mathbb{P}_t$
is the target (importance-weighted) distribution, not the observation
(non-importance-weighted) distribution.
\begin{restatable}{theorem}{thmIwSmooth}
\label{thm:iwsmoothedregretzeroone}
Let $U_t(v)$ and $L_t(v)$ be the upper and lower bounds returned by
\cref{alg:ubunionzeroone} and \cref{alg:lbunionzeroone} respectively
with $\epsilon(d) = 2^d$ and the confidence sequences $\lboracle_t$
and $\uboracle_t$ of \cref{eqn:iwcurvedboundary}. If $\forall t:
\mathbb{P}_t$ is $\xi_t$-smooth wrt the uniform distribution on the unit
interval then
\begin{equation}
\begin{split}
&\forall t, \forall v: U_t(v) - L_t(v) \leq \\
&\qquad B_t + \sqrt{\frac{(\tau + V_t)/t}{t}} \\
&\qquad + \tilde{O}\left(\sqrt{\frac{(\tau + V_t)/t}{t} \log\left(\xi_t^{-2} \alpha^{-1}\right)}\right) \\
&\qquad + \tilde{O}(t^{-1} \log\left(\xi_t^{-2} \alpha^{-1}\right)),
\end{split}
\label{eqn:iwmainthm}
\end{equation}
where $q_t \doteq \overline{\CDF}_t(v)$,
$K(q_t) \doteq \nicefrac{(q_t - 1/2)}{\log\left(\nicefrac{q_t}{1-q_t}\right)}$;
$V_t = O\left(K(q_t) \sum_{s \leq t} W_s^2\right)$,
$B_t \doteq t^{-1} \sum_{s \leq t} (W_s - 1)$,
and $\tilde{O}()$ elides polylog
$V_t$ factors.
\end{restatable}
\begin{proof} See \cref{app:proofiwsmoothedregretzeroone}.
\end{proof}
\cref{thm:iwsmoothedregretzeroone} exhibits the following key
properties: (i) logarithmic dependence upon smoothness; (ii)
tighter intervals for extreme quantiles and importance weights
with smaller quadratic variation; (iii) no explicit dependence
upon importance weight range; (iv) asymptotic zero width for
importance weights with sub-linear quadratic variation.
\paragraph{Additional Remarks} First, the importance-weighted average
CDF is a well-defined mathematical quantity, but the interpretation as
a counterfactual distribution of outcomes given different actions in
the controlled experimentation setting involves subtleties: we refer the
interested reader to \citet{waudby2022anytime} for a complete discussion.
Second, the need for nonstationarity techniques for estimating the
importance-weighted CDF is driven by the outcomes $(X_t)$ and not the
importance-weights $(W_t)$. For example with off-policy contextual
bandits, a changing historical policy does not induce nonstationarity,
but a changing conditional reward distribution does.
\section{Simulations}
\label{sec:simulations}
These simulations explore the empirical
behaviour of \cref{alg:ubunionzeroone} and
\cref{alg:lbunionzeroone} when instantiated
with $\epsilon(d) = 2^d$ and
curved boundary
oracles $\lboracle$ and $\uboracle$. To save
space, precise details on the experiments as
well as additional figures are deferred to
\cref{app:simulations}. Reference implementations
which reproduce the figures are available at \url{https://github.com/microsoft/csrobust}.
\subsection{The i.i.d. setting}
These simulations exhibit our techniques on i.i.d.\ data. Although
the i.i.d.\ setting does not fully exercise the technique, it is convenient for visualizing
convergence to the unique true CDF. In this setting
the DKW inequality applies, so to build intuition about our
statistical efficiency, we compare our bounds with a naive time-uniform
version of DKW resulting from a $\left(\nicefrac{6}{\pi^2 t^2}\right)$
union bound over time.
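Concretely, the baseline width at time $t$ follows from the DKW tail $2\exp(-2t\epsilon^2)$ with per-time budget $\delta_t = \nicefrac{6\alpha}{\pi^2 t^2}$; a minimal sketch (ours):
\begin{verbatim}
import numpy as np

def dkw_time_uniform_width(t, alpha=0.05):
    # DKW half-width at confidence delta is sqrt(log(2/delta) / (2t));
    # spending delta_t = 6*alpha/(pi^2 t^2) at time t makes the band
    # time-uniform via a union bound over t.
    delta_t = 6.0 * alpha / (np.pi ** 2 * t ** 2)
    return np.sqrt(np.log(2.0 / delta_t) / (2.0 * t))
\end{verbatim}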
\paragraph{Beta distribution} In this case the data is smooth
wrt the uniform distribution on $[0, 1]$ so we can directly
apply \cref{alg:ubunionzeroone} and \cref{alg:lbunionzeroone}.
\cref{fig:betacurves} shows the bounds converging to the true CDF
as $t$ increases for an i.i.d. $\text{Beta}(6, 3)$ realization.
\cref{fig:betawidth} compares the bound width to time-uniform DKW at
$t=10000$ for Beta distributions that are increasingly less smooth with
respect to the uniform distribution. The DKW bound is identical for all,
but our bound width increases as the smoothness decreases.
The additional figures in \cref{app:simulations}
clearly indicate tighter bounds at extreme
quantiles. The analysis of \cref{thm:smoothedregretzeroone} uses the
worst-case variance (at the median) and
does not reflect this.
\newcommand{Demonstration of the variant described in \cref{subsec:extensions,app:arbitrarysupport} for distributions with arbitrary support, based on i.i.d.\ sampling from a variety of lognormal distributions. Logarithmic range dependence is evident.}{Demonstration of the variant described in \cref{subsec:extensions,app:arbitrarysupport} for distributions with arbitrary support, based on i.i.d.\ sampling from a variety of lognormal distributions. Logarithmic range dependence is evident.}
\begin{figure*}[t]
\centering
\begin{minipage}[t]{.49\textwidth}
\vskip 0pt
\centering
\includegraphics[width=.96\linewidth]{iwcontpolyatwoseeds.pdf}
\vskip -12pt
\caption{A nonstationary, importance-weighted simulation in which the factual distribution (red) diverges dramatically from the counterfactual distribution (green). The bound correctly covers the counterfactual CDF.}
\label{fig:iwcontpolyatwoseeds}
\end{minipage}
\hfill
\begin{minipage}[t]{.49\textwidth}
\vskip 0pt
\centering
\includegraphics[width=.96\linewidth]{lognormalboundwidth.unbounded.pdf}
\vskip -12pt
\caption{Demonstration of the variant described in \cref{subsec:extensions,app:arbitrarysupport} for distributions with arbitrary support, based on i.i.d.\ sampling from a variety of lognormal distributions. Logarithmic range dependence is evident.}
\label{fig:lognormalwidth}
\end{minipage}
\vskip -0.2in
\end{figure*}
\paragraph{Beyond the unit interval} In \cref{fig:lognormalwidth}
(main text) and \cref{app:iidsimulations} we present
further simulations of i.i.d. lognormal and Gaussian random variables,
ranging over $\mathbb{R}^+$ and $\mathbb{R}$ respectively, and using
\cref{alg:ubunionrealline}. The logarithmic dependence of the
bound width upon the probe value is evident.
\paragraph{An Exhibition of Failure}
\label{paragraph:fail}
\cref{fig:uniformfail}
shows the (empirical) relative convergence
when the data is
simulated i.i.d. uniform over $[0, \epsilon]$ for decreasing $\epsilon$
(hence decreasing smoothness). The reference
width is the maximum bound width obtained with \cref{alg:ubunionzeroone}
and \cref{alg:lbunionzeroone} at
$t_{\text{ref}} = 10000$ and $\epsilon = 1/16$,
and we show the multiplicative factor of time required for the maximum bound width
to match the reference width as smoothness varies. The trend is consistent with arbitrarily
poor convergence for arbitrarily small $\epsilon$. Because this is
i.i.d.\ data, DKW applies and a uniform bound (independent of $\epsilon$)
is available. Thus while our instance-dependent guarantees are valuable
in practice, they can be dominated by stronger guarantees leveraging
additional assumptions. On a positive note, a logarithmic dependence on smoothness is evident
over many orders of magnitude, confirming the analysis of
\cref{thm:smoothedregretzeroone}.
\paragraph{Importance-Weighted}
\label{subsec:iidiwexperiments}
In these simulations, in addition to being i.i.d., $X_t$ and $W_t$
are drawn independently of each other, so the importance weights merely
increase the difficulty of ultimately estimating the same quantity.
In the importance-weighted case, an additional aspect is whether
the importance-weights have finite or infinite variance.
\cref{fig:iwexpbetacurves,fig:iwexpparetocurves}
demonstrate convergence in both conditions when using DDRM for pointwise
bounds. \cref{fig:iwexpbetacurvesbern,fig:iwexpparetocurvesbern} show the results using empirical
Bernstein pointwise bounds. In theory, with enough samples and infinite
precision, the infinite variance Pareto simulation would eventually
cause the empirical Bernstein variant to reset to trivial bounds, but
in practice this is not observed. Instead, DDRM is consistently tighter
but also consistently more expensive to compute, as exemplified in
\cref{tab:compddrmebern}. Thus either choice is potentially preferable.
\begin{table}[h]
\caption{Comparison of DDRM and Empirical Bernstein on i.i.d. $X_t \sim
\text{Beta}(6, 3)$, for different $W_t$. Width denotes the maximum
bound width $\sup_v U_t(v) - L_t(v)$. Time is for computing the bound
at 1000 equally spaced points.}
\label{tab:compddrmebern}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lccr}
\toprule
$W_t$ &
What &
Width &
Time (sec) \\
\midrule
\multirow{2}{*}{$\text{Exp}(1)$} & DDRM & 0.09 & 24.8 \\
& Emp. Bern & 0.10 & 1.0 \\
\multirow{2}{*}{$\text{Pareto}(\nicefrac{3}{2})$} & DDRM & 0.052 & 59.4 \\
& Emp. Bern & 0.125 & 2.4 \\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table}
\subsection{Nonstationary}
\label{exp:nonstationary}
\paragraph{Continuous Polya Urn} In this case
$$
X_t \sim \text{Beta}\left(2 + \gamma_t \sum_{s < t} 1_{X_s > \nicefrac{1}{2}}, 2 + \gamma_t \sum_{s < t} 1_{X_s \leq \nicefrac{1}{2}}\right),$$ i.e., $X_t$ is Beta distributed
with parameters becoming more extreme over time: each realization will
increasingly concentrate either towards $0$ or $1$. Suppose $\gamma_t
= t^q$. In the most extreme case, where $t = \sum_{s \leq
t} 1_{X_s > \nicefrac{1}{2}}$, the conditional distribution at time $t$
is $\text{Beta}\left(x; 2 + t \gamma_t, 2\right) = O(t^{1+q})$,
hence $\nicefrac{d\mathbb{P}_t}{dU} = O(t^{1 + q})$, which is smooth enough
for our bounds to converge. \cref{fig:contpolyatwoseeds}
shows the bounds covering the true CDF for two realizations with different
limits. \cref{fig:contpolyagammasweep} shows (for one realization)
the maximum bound width, scaled by $\sqrt{\nicefrac{t}{\log(t)}}$ to
remove the primary trend, as a function of $t$ for different $\gamma_t$
schedules.
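For reference, a minimal simulator for this process (ours; time indices start at 1):
\begin{verbatim}
import numpy as np

def continuous_polya_urn(T, q=0.0, seed=0):
    # X_t ~ Beta(2 + gamma_t * #{s<t : X_s > 1/2},
    #            2 + gamma_t * #{s<t : X_s <= 1/2}), gamma_t = t^q.
    rng = np.random.default_rng(seed)
    xs = np.empty(T)
    hi = 0  # count of past draws above 1/2
    for t in range(1, T + 1):
        gamma = float(t) ** q
        xs[t - 1] = rng.beta(2.0 + gamma * hi, 2.0 + gamma * (t - 1 - hi))
        hi += xs[t - 1] > 0.5
    return xs
\end{verbatim}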
\paragraph{Importance-Weighted Continuous Polya Urn} In this case
$W_t$ is drawn i.i.d.\ with either $W_t = 0$ or $W_t = w_{\max}$, such as
might occur during off-policy evaluation with an $\epsilon$-greedy
logging policy. Given $W_t$, the distribution of $X_t$ is given by
$$
\begin{aligned}
X_t | W_t &\sim \text{Beta}\left(2 + \gamma_t \sum_{s < t} 1_{X_s > 1/2} 1_{W_s = W_t}, \right. \\
&\qquad \qquad \left. 2 + \gamma_t \sum_{s < t} 1_{X_s \leq 1/2} 1_{W_s = W_t}\right),
\end{aligned}
$$
i.e., each importance weight runs an independent Continuous Polya Urn.
Because of this, it is possible for the unweighted CDF to mostly
concentrate at one limit (e.g., 1) but the weighted CDF to concentrate
at another limit (e.g., 0). \cref{fig:iwcontpolyatwoseeds} exhibits
this phenomenon.
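A minimal sketch of this process (ours); since $\forall t: \mathbb{E}_t[W_t] = 1$, we take $P(W_t = w_{\max}) = \nicefrac{1}{w_{\max}}$, which is our reading of that constraint:
\begin{verbatim}
import numpy as np

def iw_continuous_polya_urn(T, w_max=10.0, q=0.0, seed=0):
    # Each weight value runs its own independent urn; E[W_t] = 1
    # forces P(W_t = w_max) = 1 / w_max (our inference).
    rng = np.random.default_rng(seed)
    ws = np.where(rng.random(T) < 1.0 / w_max, w_max, 0.0)
    xs = np.empty(T)
    counts = {0.0: [0, 0], w_max: [0, 0]}  # per weight: [#{X > 1/2}, total]
    for t in range(1, T + 1):
        gamma = float(t) ** q
        hi, n = counts[ws[t - 1]]
        xs[t - 1] = rng.beta(2.0 + gamma * hi, 2.0 + gamma * (n - hi))
        counts[ws[t - 1]][0] += xs[t - 1] > 0.5
        counts[ws[t - 1]][1] += 1
    return ws, xs
\end{verbatim}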
\section{Related Work}
\label{sec:relatedwork}
Constructing nonasymptotic confidence bands for the cumulative distribution function of iid random variables is a classical problem of statistical inference dating back to~\citet{dvoretzky1956asymptotic} and~\citet{massart1990tight}. While these bounds are quantile-uniform, they are ultimately fixed-time bounds (i.e.~not time-uniform). In other words, given a sample of iid random variables $X_1, \dots, X_n \sim F$, these fixed-time bounds $[\dot L_n(x), \dot U_n(x)]_{x \in \mathbb{R}}$ satisfy a guarantee of the form:
\begin{equation}
\mathbb{P}(\forall x \in \mathbb{R},\ \dot L_n(x) \leq F(x) \leq \dot U_n(x)) \geq 1-\alpha,
\end{equation}
for any desired error level $\alpha \in (0, 1)$. \citet{howard2022sequential} developed confidence bands $[\widebar L_t(x), \widebar U_t(x)]_{x \in \mathbb{R}, t \in \mathbb{N}}$ that are both quantile- \emph{and} time-uniform, meaning that they satisfy the stronger guarantee:
\begin{equation}
\mathbb{P}(\forall x \in \mathbb{R}, t \in \mathbb{N},\ \widebar L_t(x) \leq F(x) \leq \widebar U_t(x)) \geq 1-\alpha.
\end{equation}
However, the bounds presented in \citet{howard2022sequential} ultimately focused on the classical iid \emph{on-policy} setup, meaning the CDF for which confidence bands are derived is the same CDF as that of the observations~$\infseqt{X_t}$. This is in contrast to off-policy evaluation problems such as in randomized controlled trials, adaptive A/B tests, or contextual bandits, where the goal is to estimate a distribution different from the one the data were collected under (e.g.~collecting data based on a Bernoulli experiment with the goal of estimating the counterfactual distribution under treatment or control). \citet{chandak2021universal} and \citet{huang2021off} both introduced fixed-time (i.e.~non-time-uniform) confidence bands for the off-policy CDF in contextual bandit problems, though their procedures are quite different, rely on different proof techniques, and have different properties from one another. \citet[Section 4]{waudby2022anytime} later developed \emph{time-uniform} confidence bands in the off-policy setting, using a technique akin to \citet[Theorem 5]{howard2022sequential}; this approach has several desirable properties in comparison to \citet{chandak2021universal} and \citet{huang2021off}, as outlined in~\citet[Table 2]{waudby2022anytime}.
Nevertheless, regardless of time-uniformity or on/off-policy estimation, all of the aforementioned prior works assume that the distribution to be estimated is \emph{fixed and unchanging over time}. The present paper takes a significant departure from the existing literature by deriving confidence bands that allow the distribution to change over time in a data-dependent manner, all while remaining time-uniform and applicable to off-policy problems in contextual bandits. Moreover, we achieve this by way of a novel stitching technique which is closely related to those of~\citet{howard2022sequential} and~\citet{waudby2022anytime}.
\section{Discussion}
\label{sec:discussion}
This work constructs bounds by tracking specific
values, in contrast with i.i.d.\ techniques which
track specific quantiles. The value-based approach
is amenable to proving correctness via
\cref{thm:coverage}, but has the disadvantage
of sensitivity to monotonic transformations.
We speculate it is possible to be covariant
to a fixed (wrt time) but unknown monotonic
transformation without violating known
impossibility results. A technique with this
property would have increased practical utility.
|
{
"arxiv_id": "2302.14334",
"language": "en",
"timestamp": "2023-03-01T02:09:55",
"url": "https://arxiv.org/abs/2302.14334",
"yymm": "2302"
} | \section{Related Work}
\noindent \textbf{Small, compact LiDAR for small robotics:} MEMS mirrors have been studied to build compact LiDAR systems \cite{tasneem2020adaptive, kasturi2016uav, kimoto2014development}. For instance, Kasturi et al. demonstrated a UAV-borne LiDAR with an electrostatic MEMS mirror scanner that could fit into a small volume of 70 mm $\times$ 60 mm $\times$ 60 mm and weighed only about 50 g \cite{kasturi2016uav}. Kimoto et al. developed a LiDAR with an electromagnetic resonant MEMS mirror for robotic vehicles \cite{kimoto2014development}.
\noindent \textbf{Comparison to software-based compensation:} Motion compensation and image stabilization techniques have been widely used in image capture. As with imaging devices, LiDAR point clouds show blurring and motion artifacts caused by the motion of the LiDAR or of the target object. Some software-based LiDAR motion compensation methods use ICP (iterative closest point) \cite{neuhaus2018mc2slam} and feature matching \cite{gojcic2019perfect} to find the translation and rotation between successive point clouds. Software-based compensation for robotic motion has been studied in great detail in SLAM algorithms~\cite{taketomi2017visual} and Expectation-Maximization (EM) methods~\cite{phelps2019blind}.
Software-based motion compensation has a relatively high computational barrier for micro-robotics and may degrade if the point clouds have large discrepancies. Some software-based motion compensation relies on capturing a full frame of the point cloud, so it cannot capture motion at frequencies higher than the frame rate. For most LiDARs (other than flash LiDAR), especially single-scanning-beam MEMS LiDAR, the rolling shutter effect caused by LiDAR motion jitter remains a problem. In contrast to these approaches, we wish to compensate the sensor in hardware, during capture. Hardware LiDAR motion compensation has several benefits. First, the compensation can be applied to every LiDAR scanning pulse (for 2D MEMS based LiDAR), which corrects the rolling shutter effect and improves the motion response range. Second, the motion compensation algorithm is very simple and can be implemented on a low-power microcontroller or FPGA. Third, even if the hardware motion compensation still has some error, it provides a better initialization for subsequent software compensation.
These ideas are closer to how PTZ cameras track dynamic objects \cite{li2009tracking,hrabar2011ptz} and assist with background subtraction \cite{suhr2010background}. However, compared to these approaches, we can tackle egomotion of much higher frequencies, which is a unique challenge of micro-robots. We compensate signals much closer to those seen in adaptive optics and control for camera shake \cite{antonello2013imu, tyson1999performance, ben2004jitter}. In addition, our system is on a free moving robot, rather than a fixed viewpoint.
\noindent \textbf{Motorized gimbals:} Compared to motorized image stabilization systems \cite{jia2017system}, MEMS mirrors not only have smaller size and lighter weight, but their frequency response bandwidth is also better than that of bulky and heavy camera stabilizers. The MEMS mirror response time can be less than 10 ms or even less than 1 ms. The servo motors of camera stabilizers have a bandwidth of less than 30 Hz because they are bulky and carry heavy loads \cite{li2020nonorthogonal, sagitov2017towards}. This results in a response time longer than 10 ms.
\noindent \textbf{Motion compensation in displays and robotics:}
Motion-compensated MEMS mirror scanners have been applied to projection~\cite{gruger20073}, where hand-shake is an issue. In contrast, we deal with vibrations of much higher frequencies, and our approach is closest to adaptive optics for robotics. For example, \cite{taketomi2015zoom, taketomi2015focal} change the zoom and focal lengths of cameras for SLAM. We compensate using small mirrors, drawing on a rich tradition of compensation in device characterization~\cite{maksymova2019detection} and SNR improvement~\cite{hayakawa2016gain}. Compared to all the previous methods, we are the first to show IMU-based LiDAR compensation with a MEMS mirror in hardware.
\noindent \textbf{LiDAR SLAM}: Ever since the seminal work of~\cite{zhang2014loam}, successive LiDAR SLAM designs largely follow an architecture similar to Figure~\ref{fig:rotating_lidar_slam_pipeline}, where the front end consists of de-skew and feature extraction stages, while the back end usually consists of ICP and pose graph optimization packages such as g2o~\cite{kummerle2011g} or GTSAM~\cite{dellaert2012factor} that globally optimize the odometry information estimated by LiDAR visual odometry. Successive efforts moved towards improvements in the following subareas: 1) tightly coupling LiDAR and IMU~\cite{ye2019tightly}; 2) updating the back end's PGO optimizer~\cite{shan2020lio}; 3) updating the back end's ICP~\cite{pan2021mulls}~\cite{yang2020teaser}; 4) updating the front end's point-cloud data structure to do away with ICP's feature dependence~\cite{xu2022fast}.
\emph{Nevertheless, to the best of our knowledge, all existing LiDAR SLAM systems are designed for LiDARs that are rigidly attached, via fixed joints, to robots and vehicles.}
\noindent \textbf{Sensor reorientation in Active SLAM:} There has been a lot of work in the area of perception-aware path planning. A basic assumption of this line of work is that the sensor is rigidly attached to the robot, and therefore, its field of view can be changed only by changing the pose of the robot.
\cite{costante2016perception,sadat2014feature,deng2018feature} improve SLAM accuracy by actively changing the robot trajectory to improve the field-of-view. Our sensor can simplify these works by changing the FOV in hardware without requiring additional constraints on the path planning.
\begin{figure*}
\centering
\includegraphics[width=.9\textwidth]{Simulation/qualitative_new.PNG}
\caption{(a) Representative simulation scenario - Blocks scene (b) Mapping the Blocks scene with compensation at 55Hz and no delay (c) Mapping the Blocks scene without compensation (d) Mapping the Blocks scene with compensation at 55Hz and delay of 150ms. (e) Mountains scene (f) mountains scene simulation with 55Hz compensation and 0ms delay (g) mountains scene simulation without compensation. (h) mountains scene simulation with 5Hz compensation and 0ms delay. }
\label{fig:sim_qual}
\vspace{-10pt}
\end{figure*}
\section{Understanding the Benefits of Compensated LiDAR in Simulation}
\label{sec:benefits}
\color{black}
\subsection{Basic LiDAR geometry}
A MEMS-based LiDAR scanning system consists of a laser beam reflected off a small mirror. Voltages control the mirror by physically tilting it to different angles. This allows for LiDAR depth measurements in the direction corresponding to the mirror position. Let the function controlling the azimuth be $\phi(V(t))$ and the function controlling the elevation be $\theta(V(t))$, where $V$ is the input voltage that varies with time step $t$.
To characterize our sensor, we use the structure-from-motion (SFM) framework with the LiDAR projection matrix $\mathbf{P}$ and the robot's rotation $\mathbf{R}$ and translation $t$
\begin{equation}
\mathbf{P} = \mathbf{K}
\begin{bmatrix}
\mathbf{R} & t
\end{bmatrix}
\end{equation}
where $\mathbf{K}$ is an identity matrix.
In our scenario, the `pixels' $\mathbf{x}$ relate to the mirror orientation vector $(\theta(V(t)), \phi(V(t)))$ on a plane at unit distance from the mirror along the z-axis, and are obtained by projections of 3D points $\mathbf{X}$.
Many robotics applications need point cloud alignment across frames, which requires recovering the unknown rotation and translation that minimize the following optimization.
\begin{equation}
\min_{\mathbf{R} , t} \| \mathbf{x} - \mathbf{P} \mathbf{X} \|.
\end{equation}
This optimization \emph{usually happens in software, after LiDAR and IMU measurements}~\cite{thrun2002probabilistic}. Our key idea is that the MEMS mirror provides an opportunity to compensate or control two aspects of the projection matrix $\mathbf{P}$ \emph{before capture, in hardware}. In this paper, we propose to control a new aspect of the SFM equation in hardware: the rotation matrix $\mathbf{R}$.
Given the robot pose (from onboard IMU or other sensing) and the intrinsic matrix, we can easily perform post-capture translation estimation.
\begin{equation}
\min_{t} \| \mathbf{x} - \mathbf{P} \mathbf{X} \|. \label{simpl_opt}
\end{equation}
In other words, hardware compensation with MEMS mirrors simplifies the post-capture LiDAR alignment methods to \emph{just finding the translation $t$}, allowing lightweight and low-latency algorithms to be used with minimal computational effort.
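To illustrate how lightweight this residual problem can be, here is a least-squares sketch (ours), treating $\mathbf{x}$ and $\mathbf{X}$ as matched points with $\mathbf{K}$ the identity and the rotation already compensated to the identity in hardware:
\begin{verbatim}
import numpy as np

def estimate_translation(x, X):
    # With K = I and R = I (compensated in hardware), P X = X + t, so
    # argmin_t sum_i ||x_i - (X_i + t)||^2 has the closed form mean(x - X).
    x, X = np.asarray(x), np.asarray(X)
    return (x - X).mean(axis=0)
\end{verbatim}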
\begin{comment}
\subsection{Fisher Information in ICP}
In feature-based ICP, the uncertainty of robot pose estimation is inversely correlated to the amount of correctly matched features being observed.
Recent work has reasoned about this using the Cramér–Rao lower bound; regardless of their parameterizations, these come in very similar form. We refer the readers to \cite{censi2007achievable} and \cite{zhang2019beyond} for detail.
Consider a robot equipped with LiDAR sensor in the environment. Each beam of the sensor is represented as a bearing vector measurement, which is a vector to the direction of the ray, whose length is generated by ray-tracing measurement model, corrupted by zero-mean Gaussian noise. Let the LiDAR‘s pose be $\pmb{p}\triangleq (\pmb{R},\pmb{t})$ w.r.t a fixed world frame. This sensor output $\pmb{z}$ of $m$ bearing vector measurements, but only $n$ of them are correctly matched to environmental features.
In \cite{zhang2019beyond}, a 3D featured matching paper where parameterizations of the measurements are in bearing vectors, the FI is expressed (within identical and independent zero-mean Gaussian noise $\mathcal{N}(0,\sigma^2)$) as
\begin{equation}\label{FIF_eq}
I_{\pmb{p}}(\pmb{z})=\frac{1}{\sigma^2}\sum_{i}^{n}I_i
\end{equation}
Where $I_i$ stands for the Fisher Information Matrix associated with the i-th matched 3D feature bearing vector measurement.
In Equation~\ref{FIF_eq} the FIM in feature-based ICP is positively correlated to the amount of correctly matched features observed. In other words, the Cramér–Rao lower bound is inversely correlated to the amount of correctly matched features. Basically, the more correctly matched features are considered, the lower the odometry uncertainty is.
While the above is hardly surprising, it forms the theoretical backing for our system design where we hypothesize \emph{that decoupling increases correctly matched features} because now the sensor can orient towards feature rich regions. We now confirm this theory in the special case of the decoupled sensor, first with simulated experiments and then with real robot experiments. In the following section with simulations, we run LOAM, which has a feature-base ICP in the back-end, where FIM undergirds the entire estimation. We perform and discuss experiments with LOAM, with and without sensor orientation decoupling from our novel design.
\end{comment}
\begin{figure*}
\begin{center}
\begin{subfigure}[b]{0.4\linewidth}
\centering
\includegraphics[width=\textwidth,trim={12pt 10 13 25},clip]{Simulation/quantitative/a_blocks_odom.pdf}
\caption{Odometry error vs compensation rate (Blocks scene)}
\label{fig:sim_quant_blocks_comprate}
\end{subfigure}
\hspace{30pt}
\begin{subfigure}[b]{0.4\linewidth}
\centering
\includegraphics[width=\textwidth,trim={12pt 10 13 25},clip]{Simulation/quantitative/c_blocks_delay.pdf}
\caption{Odometry error vs compensation delay (Blocks scene)}
\label{fig:sim_quant_blocks_delay}
\end{subfigure}
\end{center}
\begin{center}
\begin{subfigure}[b]{0.4\linewidth}
\centering
\includegraphics[width=\textwidth,trim={12pt 10 13 25},clip]{Simulation/quantitative/b_mountains_odom.pdf}
\caption{Odometry error vs compensation rate (Mountains scene)}
\label{fig:sim_quant_mountains_comprate}
\end{subfigure}
\hspace{30pt}
\begin{subfigure}[b]{0.4\linewidth}
\centering
\includegraphics[width=\textwidth,trim={12pt 10 13 25},clip]{Simulation/quantitative/d_mountains_delay.pdf}
\caption{Odometry error vs compensation delay (Mountains scene)}
\label{fig:sim_quant_mountains_delay}
\end{subfigure}
\end{center}
\caption{ UAV odometry error while varying compensation rate and compensation delay in two scenes}
\vspace{-10pt}
\label{fig:sim_quant}
\end{figure*}
\color{black}
\subsection{Benefits of IMU-compensated LiDAR in SLAM}
\label{sec:benefit of motion compensation in SLAM}
We demonstrate the benefits of motion compensated LiDAR in simulation. Our setup is as follows - we use Airsim \cite{shah2018airsim} running on Unreal Engine 4 for realistic perception and visualization. We tested two scenarios - a scene with geometric objects, called {\it Blocks scene} shown in Figure~\ref{fig:sim_qual}(a), and an outdoor scene with a bridge and mountains, called {\it Mountains scene} shown in Figure~\ref{fig:sim_qual}(e). In both scenes, the LiDAR is mounted on a prototype quadrotor UAV. We run LOAM~\cite{zhang2014loam}, an open-source state-of-the-art LiDAR SLAM system to map the environment and localize the UAV.
As described earlier, motion compensation can be achieved through various means, such as a gimbal, active compensation of a pan-tilt-zoom camera, or MEMS-based hardware compensation like our system. The differences between these methods lie along two dimensions - (i) the latency of compensation, called {\it compensation delay} from now on, and (ii) the number of times we can compensate in a second, called {\it compensation rate}. By varying these two parameters in simulation, we compare each method's performance. In order to systematically compensate based on IMU input, we perform some pre-processing of the IMU data. To smooth out high-angular-velocity body movements, an angular moving average LiDAR stabilization algorithm is implemented. This method stores the past UAV orientations in a sliding, fixed-length queue, and reorients the mounted LiDAR towards the average of the past orientations. The average of the orientations is calculated through Linear Interpolation (LERP) of the stored quaternions. We detail our calculations in the Appendix.
The method is also known as the Quaternion $L_2$-mean~\cite{hartley2013rotation}. Given the relatively short duration of the sliding window and the relatively small range of rotation covered during simulation flights, the prerequisites for using this method are met. It helps remove the impulsive jerky movements that may be observed by the LiDAR, akin to a low-pass filter.
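As a concrete illustration, a minimal sketch (ours) of such a sliding-window average; for quaternions confined to a small rotation range, the $L_2$-mean is well approximated by a sign-aligned, normalized component-wise average:
\begin{verbatim}
import numpy as np
from collections import deque

class OrientationSmoother:
    # Keep the last n unit quaternions (w, x, y, z) and return their
    # normalized average, approximating the quaternion L2-mean.
    def __init__(self, n=32):
        self.window = deque(maxlen=n)

    def update(self, q):
        q = np.asarray(q, dtype=float)
        if self.window and np.dot(self.window[-1], q) < 0.0:
            q = -q  # keep a consistent hemisphere before averaging
        self.window.append(q)
        mean = np.mean(np.stack(self.window), axis=0)
        return mean / np.linalg.norm(mean)
\end{verbatim}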
In the experiment, the UAV performs three back-and-forth lateral flights between two waypoints. During the alternation of waypoints, the UAV reaches approximately $130^\circ$/s about the X body axis. The mounted LiDAR is configured with 16 channels, a $360^\circ$ horizontal FOV, a $30^\circ$ vertical FOV, and a 150,000 Hz sample rate, akin to commercially available LiDARs.
To quantify performance, we calculate {\it odometry error}, the difference between the ground truth UAV positions and the positions estimated by LOAM. Figure~\ref{fig:sim_quant} shows the results from our simulations for the Blocks scene and the Mountains scene. We set the compensation rate to five different values - uncompensated, 5 Hz, 10 Hz, 30 Hz, and 55 Hz. We set the compensation delay to four values - no delay (0 ms), 30 ms, 90 ms, and 150 ms.
Both the position error and the angular error are high when the LiDAR is uncompensated or compensated at only 5 Hz in the Blocks scene (Figure~\ref{fig:sim_quant_blocks_comprate}). They are significantly lower for 10 Hz, 30 Hz, and 55 Hz. This shows that the lower compensation rates achievable with a mechanical gimbal or a PTZ camera (which operate at 5 Hz or lower) are far less effective than a faster compensation mechanism such as the one proposed by us. Similarly, the error in position as well as orientation is low when the compensation delay is either 0 ms or 30 ms (Figure~\ref{fig:sim_quant_blocks_delay}).
For larger compensation delays such as 90 ms and 150 ms, the error is several times larger than when the compensation delay is 30 ms. This shows that when the compensation delay is higher, as it could be with software-based compensation on low-power embedded systems, compensation is far less effective and leads to greater error in trajectory estimation. This further argues for a system such as ours that is able to perform compensation in hardware, and therefore at a higher rate. The trends are similar, albeit less pronounced, in the Mountains scene, where features are much less distinct and feature matching is more challenging in general. This proof-of-concept set of simulations encouraged us to build our proposed system.
\section{Novel LiDAR Design}
We propose a simple and effective design, where the MEMS mirror and photodetector are placed on a movable head. For image stabilization, we are also able to place the IMU there. A LiDAR engine and accompanying electronics are tethered to this device, which can be light and small enough for micro-robots. To enable both LiDAR scanning and compensated scanning at a high rate, it is important to characterize the MEMS scanner.
\begin{comment}
From simple to sophisticated cases, the compensation level can be categorized into 3 levels. \underline{LEVEL 1}, similar to an optical image stabilizer (OIS), is to compensate small rotational motion jitter only. \underline{LEVEL 2} is to compensating all three rotational components. And \underline{LEVEL 3} is for any rotational or translation motion needs to be compensated.
\end{comment}
\subsection{The MEMS mirror}
All the compensation effects and size advantages described so far will be nullified if the MEMS mirror cannot survive the shock, vibration, and shake associated with real-world robots. Here we analyze the robustness of the MEMS mirror device for such platforms. Most MEMS mirrors rely on high-quality-factor resonant scanning to achieve a wide field-of-view (FoV), which leads to heavy ringing effects and overshoot with sudden changes of direction \cite{milanovic2017closed, wang2020mems}. A suitable MEMS mirror for motion-compensated scanning is expected to have a wide non-resonant scanning angle and smooth, fast step responses, to operate under common robotics vibration, and to survive shock. To achieve this goal, we adopt a popular electrothermal bimorph actuated MEMS mirror design \cite{jia2009electrothermal, wang2019large} to build this MEMS mirror. The employed MEMS mirror is fabricated with an Al/SiO$_2$ based inverted-series-connected (ISC) bimorph actuation structure reported in \cite{jia2009electrothermal}. This type of MEMS mirror has the advantages of a simple and mature fabrication process \cite{zhang2015fast, wang2017ultra}, a wide non-resonant scanning angle, linear response, and good stiffness.
A new electrothermal MEMS mirror is designed and fabricated, adapted for the motion compensation application. We note that other previously reported MEMS mirrors with electrothermal, electrostatic, or electromagnetic actuators may also be applicable to motion compensated LiDAR scanning \cite{kasturi2016uav,ito2013system,wang2020low}.
\subsection{Compensation Algorithm}
\label{sec:comp_algo}
In the previous sections, we saw the advantages of MEMS mirror-based compensation and the feasibility for use in a robotic LiDAR. Here we focus on the details of the hardware-based rotation compensation algorithm using MEMS mirror scanning LiDAR and sensing for the compensation.
\color{black}
The MEMS mirror reflects a single ray of light towards a point with spherical coordinates $\{\alpha,\beta,r\}$. Here $\{\alpha,\beta\}$ are the two angular control inputs to the mirror used to achieve this target. We will first establish the local (robot) and global (world) frames, then introduce the standard conversions between spherical and Cartesian coordinates, and finally get into the details of compensation.
\subsubsection{Coordinate Systems}
\label{subsubsec:coordinate-system}
Our LiDAR can compensate for rotation, but it cannot compensate for translation. So all discussion from here on drops translation from $SE(3)$ and focuses only on $SO(3)$.
Let the robot have rotation $R_{robot}^w \in SO(3)$ relative to the world frame. Here, the frame of the unmoving base of the LiDAR sensor has identity rotation $R_{base}^w \in SO(3)$ and therefore the same $SO(3)$ transformation as the robot frame. Let $R_{desired}^w \in SO(3)$ be the desired rotation target in the world frame. $R_{desired}^w$ can be decided by the user. For example, it can remain a constant rotation matrix, to impose a stabilization control policy and keep the robot's FOV upright. Other possibilities include aiming towards a specific world-frame target $t \in \mathbb{R}^3$, which we will touch on later in~\ref{subsubsec:aiming}.
\subsubsection{Spherical-to-Cartesian Conversions}
It is important to outline the conversion from spherical coordinates, which are the control coordinates, to the usual Cartesian coordinates.
Points in the spherical coordinate $\{\alpha,\beta,r\}$ can be converted to Cartesian coordinate via known equations,
\begin{equation}
p_{cartesian} =
\begin{bmatrix}
x \\
y \\
z \\
\end{bmatrix}
=
\begin{bmatrix}
r\cos{\alpha} \cos{\beta} \\
r\cos{\alpha} \sin{\beta} \\
r\sin{\alpha}\
\end{bmatrix}
\label{eqn:sphere-to-cartesian}
\end{equation}
and vice versa:
\begin{equation}
p_{spherical} =
\begin{bmatrix}
\alpha \\
\beta \\
r
\end{bmatrix}
=
\begin{bmatrix}
\arctan{\frac{z}{\sqrt{x^{2} + y^{2}}}} \\
\arctan{\frac{y}{x}} \\
\sqrt{x^2+y^2+z^2}
\end{bmatrix}
\label{eqn:cartesian-to-sphere}
\end{equation}
Note that both $p_{cartesian}$ and $p_{spherical}$ are points located in the robot's local coordinate frame, $R_{robot}^w$. Other literature refers to this frame as the local frame or camera frame.
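A minimal sketch of these conversions (ours; we use arctan2 in place of arctan for quadrant safety):
\begin{verbatim}
import numpy as np

def sphere_to_cartesian(alpha, beta, r):
    # The sphere-to-cartesian map above: alpha = elevation, beta = azimuth.
    return np.array([r * np.cos(alpha) * np.cos(beta),
                     r * np.cos(alpha) * np.sin(beta),
                     r * np.sin(alpha)])

def cartesian_to_sphere(p):
    # Its inverse (cartesian-to-sphere), with arctan2 for full quadrants.
    x, y, z = p
    return np.array([np.arctan2(z, np.hypot(x, y)),
                     np.arctan2(y, x),
                     np.linalg.norm(p)])
\end{verbatim}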
\subsubsection{Spatial Scanning}
A set of $i$ spherical control coordinates $\{\alpha_i,\beta_i,r_i\}$ defines the scanning pattern of the LiDAR; this is up to the user. For example, $\{\alpha_i,\beta_i\}$ can be restricted to a certain range to define a FOV limit. This range can span $(0,360)$ degrees like a commercially available Velodyne LiDAR, or it can be smaller.
\subsubsection{General Rotation Compensation}
\label{subsubsec:general_rotation_compensation}
Let $R_{control}$ be the rotation from the robot rotation $R_{robot}^w$ to the desired rotation $R_{desired}^w$, so that $R_{desired}^w=R_{control}R_{robot}^w$. We have,
\begin{equation}
R_{control} = R_{desired}^w (R_{robot}^w)^T
\end{equation}
Now all points in the spatial scanning pattern $p_{spherical}=\{\alpha_i,\beta_i,r_i\}$ of the robot frame $R_{robot}^w$ can be re-projected to the desired frame $R_{desired}^w$. We first convert each $\{\alpha_i,\beta_i,r_i\}$ to Cartesian coordinates $p_{cartesian}$ by Equation~\ref{eqn:sphere-to-cartesian}. Then:
\begin{equation}
p_{desired-cartesian} = R_{control}p_{cartesian}
\end{equation}
Then we can convert the rotated points $p_{desired-cartesian_i}$ back to spherical coordinates $p_{desired-spherical_i}$ via Equation~\ref{eqn:cartesian-to-sphere} to obtain point $i$'s control input.
It is important to note that this full $SO(3)$ compensation is only achievable because our LiDAR projects each individual point $p_i$ independently from the other points in the set. In the case of a traditional camera or a commercially available LiDAR like a Velodyne, the entire set of $p_i$ can be viewed as being projected as a group, correlated with each other. In these other sensors, full $SO(3)$ compensation is not achievable, even if the sensor is mounted to the robot by a universal joint with two degrees of freedom $\alpha,\beta$. But we will also analyze this special case of grouped-point re-projection, since our LiDAR can achieve this 2-axis-only compensation.
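Putting the pieces together, a minimal sketch (ours) of the per-point re-projection, reusing the conversion helpers from the sketch above:
\begin{verbatim}
import numpy as np

def compensate_scan(pattern, R_robot_w, R_desired_w):
    # R_control = R_desired^w (R_robot^w)^T; re-project each spherical
    # control point (alpha_i, beta_i, r_i) into the desired frame.
    # Assumes sphere_to_cartesian / cartesian_to_sphere defined above.
    R_control = R_desired_w @ R_robot_w.T
    return [cartesian_to_sphere(R_control @ sphere_to_cartesian(a, b, r))
            for (a, b, r) in pattern]
\end{verbatim}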
\subsubsection{Special Case: 2-axes only compensation}
\label{subsubsec:special_case}
Let $R_{control}$ be limited to 2-axes rotation only:
\begin{gather}
\label{eq:special-case-rotation}
R_{control}^*
=
\begin{bmatrix}
\cos{\beta} & 0 & \sin{\beta} \\
0 & 1 & 0 \\
-\sin{\beta} & 0 & \cos{\beta}
\end{bmatrix}
\begin{bmatrix}
\cos{\alpha} & -\sin{\alpha}&0 \\
\sin{\alpha} & \cos{\alpha} & 0 \\
0 & 0 & 1
\end{bmatrix}
\end{gather}
Our LiDAR can then use $R_{control}^*$ to perform 2-axis-only compensation. Further, this compensation can be readily extended to commercially available cameras and LiDARs (such as Velodyne) mounted on a universal joint to the robot frame.
\subsubsection{Target Aiming}
\label{subsubsec:aiming}
Let $t_{target}^w \in \mathbb{R}^3$ be the target of interest in the world frame, and let $t_{robot}^w$ be the robot's current world frame translation. Then
\begin{gather}
p_{aim}=t_{target}^w-t_{robot}^w
\end{gather}
defines the ray direction towards which we want to align our ``principal axis'', i.e., the projection center point $p_{cartesian}:\{x=1,y=0,z=0\}$.
We can simply convert $p_{aim}$ to spherical coordinates via Equation~\ref{eqn:cartesian-to-sphere} to find $\alpha,\beta$, then compose an $R_{control}^*$ via Equation~\ref{eq:special-case-rotation} for the entire scanning grid.
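A minimal sketch (ours) of this aiming computation, filling the two single-axis factors of Equation~\ref{eq:special-case-rotation} with the aim angles:
\begin{verbatim}
import numpy as np

def aim_control_rotation(t_target_w, t_robot_w):
    # p_aim = t_target^w - t_robot^w, converted to (alpha, beta), then
    # used to compose the two single-axis factors of R*_control.
    x, y, z = np.asarray(t_target_w) - np.asarray(t_robot_w)
    alpha = np.arctan2(z, np.hypot(x, y))
    beta = np.arctan2(y, x)
    R_y = np.array([[np.cos(beta), 0.0, np.sin(beta)],
                    [0.0, 1.0, 0.0],
                    [-np.sin(beta), 0.0, np.cos(beta)]])
    R_z = np.array([[np.cos(alpha), -np.sin(alpha), 0.0],
                    [np.sin(alpha), np.cos(alpha), 0.0],
                    [0.0, 0.0, 1.0]])
    return R_y @ R_z  # the product as printed in the 2-axis equation
\end{verbatim}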
\subsubsection{MEMS Related details}
MEMS-related details, relating to the one-dimensional controls of each actuation axis $\{\alpha,\beta\}$, including analysis of
robot motion shock on the MEMS as well as preliminary point-cloud stitching, are included in the appendix.
\color{black}
\begin{comment}
\underline{LEVEL 1}: The basic case is to compensate the motion in jitter in the yaw and pitch axes only. It assumes all other motions in the roll axes and all the translation are desired motion of the robot and sensor, and ignored during compensation. This is applicable to scenarios where small motion jitter on ground and aerial vehicles needs to be compensated. This algorithm is relatively simple. It takes two steps. Assume the initial MEMS optical scanning angles relative to the MEMS mirror local frame is
\begin{equation}
\Phi_M = [\alpha, \beta]^T
\label{eqn:alphabeta}
\end{equation}
The desired motion in the yaw and pitch directions represented in Euler angle is $S= [\psi\text{(yaw)}, \theta\text{(pitch)}]^T$. The motion of the robot (as given by an IMU, say) is $S_i = [\psi_i,\theta]^T$. We can get the updated MEMS mirror scanning angle $\Phi'_M$ by subtracting the difference,
\begin{equation}
\Phi'_M = \Phi_M - (S_i - S)
\end{equation}
$\Phi'_M$ is sent to the MEMS driver.
\underline{LEVEL 2} is to compensate the robotics motion jitter in all three rotational axes, e.g. $S = [\psi\text{(yaw)},\theta\text{(pitch)}, \phi\text{(roll)}]^T$. While the translation degrees of freedom are assumed as the desired motion of the robot and sensor, and ignored. First, convert the MEMS scanning direction $\Phi_{M\angle}$ to a unit vector format $\Phi_{M}$.
\begin{equation}
\Phi_M =
\begin{bmatrix}
\sin{\alpha} \cos{\beta} \\
\sin{\alpha} \sin{\beta} \\
\cos{\alpha}\
\end{bmatrix}
\end{equation}
where $\alpha$ and $\beta$ are described in \autoref{eqn:alphabeta}.
\begin{figure*}[!h]
\centering
\includegraphics[width=0.8\textwidth]{setup.jpg}
\caption{The movable LiDAR MEMS scanner head, which include the MEMS mirror, an IMU and a fiber laser collimator. (a) shows the top view and (b) shows the LiDAR scanner head mounted to the bottom of the UAV.}
\label{fig:setup}
\end{figure*}
\begin{figure}[h]
\centering
\includegraphics[width=0.9\linewidth]{vis.jpg}
\caption{We use a visible laser to compare the effect of motion compensation of our sensor. The upper laser trace indicates UAV motion, and the lower laser trace indicates the compensated/uncompensated scanning laser reflected from the MEMS mirror. The compensated MEMS scanning (right) shows a much smaller laser trace area than the uncompensated MEMS scanning result.}
\label{fig:vis}
\end{figure}
We then get the translation between the measured motion versus the desired motion. To avoid the Euler angle rotation sequence problem, we use rotation matrix $R$ to represent this translation. We use the inverse of $R$ to transfer the MEMS scanning direction to the compensated direction,
\begin{equation}
\Phi'_M = R^{-1} \Phi_M =
\begin{bmatrix}
x'_M \\
y'_M \\
z'_M \\
\end{bmatrix}
\end{equation}
We then convert $\Phi'_M$ from the unit vector back to angle $\Phi_{M\angle}$ and send it to the MEMS driver,
\begin{equation}
\Phi'_M =
\begin{bmatrix}
\alpha' \\
\beta' \\
\end{bmatrix}
=
\begin{bmatrix}
\arctan{\frac{z'_M}{\sqrt{x_M^{'2} + y_M^{'2}}}} \\
\arctan{\frac{y'_M}{x'_M}} \\
\end{bmatrix}
\end{equation}
\underline{LEVEL 1} and \underline{LEVEL 2} are applicable to the motion compensation for any MEMS scanning system, and \underline{LEVEL 3} takes the LiDAR point cloud data into the compensation. In this scenario, the MEMS based LiDAR is expected to fully decouple its orientation from the robot. An example of this can be when the robot wants to view the same scene irrespective of the robot pose. In such cases, both the pose and the relative position between the LiDAR and the target object will affect the MEMS mirror scanning direction. For ease of explanation, lets assume the target object is static. In this scenario, here is the procedure to calculate the compensated MEMS scanning direction $\Phi'_M$.
First, the MEMS LiDAR achieves the initial scanning and measurement of the target object in the LiDAR local frame, $P_{M\angle} = [\alpha, \beta, r]$. Similarly, $[\alpha, \beta]$ are the MEMS scanning angles, and $r$ is the distance from the MEMS mirror to the target object measured by the LiDAR ToF engine. Convert the MEMS LiDAR measurement from spherical coordinate $P_{M\angle}$ to $P_M$ in Cartesian coordinate format,
\begin{equation}
P_M =
\begin{bmatrix}
x_M \\
y_M \\
z_M \\
\end{bmatrix}
=
\begin{bmatrix}
r\sin{\alpha} \cos{\beta} \\
r\sin{\alpha} \sin{\beta} \\
r\cos{\alpha}\
\end{bmatrix}
\end{equation}
The LiDAR rotates and moves relative in the world frame. The translation $T_i$ and rotation $R_i$ matrix as $S_{i, 4\times4}$ of the MEMS LiDAR motion, and calculation the transformation between the measurement motion and desired motion as $S_{4\times4}$. Note that this requires a mechanism to measure desired motion which is an independent problem that we do not tackle in this work. In our experiments (Section~\ref{sec:robotexp}), we have used an external motion capture system for this purpose.
Now, the current target object coordinate $P'_M$ in the new MEMS LiDAR local frame,
\begin{equation}
\begin{bmatrix}
P'_M \\
1 \\
\end{bmatrix} = S^{-1}
\begin{bmatrix}
P_M \\
1 \\
\end{bmatrix}
=
\begin{bmatrix}
x'_M \\
y'_M \\
z'_M \\
1 \\
\end{bmatrix}
\end{equation}
To let the MEMS mirror scan towards the target position $P'_M$, the MEMS mirror scanning angle is updated to $\Phi'_M $ following the conversion from Cartesian coordinate to the spherical coordinate system
\begin{equation}
\Phi'_M =
\begin{bmatrix}
\alpha' \\
\beta' \\
\end{bmatrix}
=
\begin{bmatrix}
\arccos{\frac{z'_M}{x_M^{'2} + y_M^{'2} + z_M^{'2}}} \\
\arctan{\frac{y'_M}{x'_M}} \\
\end{bmatrix}
\label{eqn:Cartesian_to_spherical}
\end{equation}
This method also works if we only want to compensate the motion in certain degree of freedom(s). The compensation system errors has several parts including the desired motion sensing error, the assembly error and the MEMS mirror scanning error.
For \underline{LEVEL 3} motion compensation, the translation between the MEMS mirror and the desired motion should be measured or calibrated to get actual motion of the MEMS mirror.
\end{comment}
\subsection{LiDAR Hardware Specifics}
\begin{figure*}[!h]
\centering
\includegraphics[width=0.8\textwidth]{setup.jpg}
\caption{The movable LiDAR MEMS scanner head, which includes the MEMS mirror, an IMU, and a fiber laser collimator. (a) shows the top view and (b) shows the LiDAR scanner head mounted to the bottom of the UAV.}
\label{fig:setup}
\end{figure*}
\begin{figure}[h]
\centering
\includegraphics[width=0.9\linewidth]{vis.jpg}
\caption{We use a visible laser to compare the effect of motion compensation of our sensor. The upper laser trace indicates UAV motion, and the lower laser trace indicates the compensated/uncompensated scanning laser reflected from the MEMS mirror. The compensated MEMS scanning (right) shows a much smaller laser trace area than the uncompensated MEMS scanning result.}
\label{fig:vis}
\end{figure}
\begin{figure*}[!ht]
\centering
\includegraphics[width=0.8\textwidth]{exp2.jpg}
\caption{ Motion compensated LiDAR point cloud results with hand-held motion disturbance. (a) The target object ``+'' placed 2.4 m from the LiDAR, along with (b), its initial point cloud scan. (c) and (d) show uncompensated vs. compensated scanning. The uncompensated shake range was $(-0.2 ^\circ, +1.4 ^\circ), (+1.0 ^\circ, +1.4 ^\circ), (-1.5 ^\circ, +1.7 ^\circ)$ and the compensated shake range was $(-1.1 ^\circ, +1.4 ^\circ), (+1.2 ^\circ, -0.5 ^\circ), (-2.3 ^\circ, +0.5 ^\circ)$. (Please see the supplementary video.) (e) and (f) show the stacking of 5 frames of the uncompensated and compensated point clouds, respectively.}
\label{fig:exp2}
\end{figure*}
Our prototype~(Figure~\ref{fig:setup}) uses an InGaAs avalanche photodiode module (Thorlabs, APD130C). A fiber with a length of 3 m delivers the laser from the laser source to the scanner head. The gain-switched laser (Leishen, LEP-1550-600) is collimated and reflected by the MEMS mirror. The X-axis of the IMU (VectorNav, VN-100) is parallel to the neutral scanning direction of the MEMS mirror. The in-run bias stability of the gyroscope is typically $5$--$7^\circ$/hr. The scanner head sits on a tripod so that it can be rotated in the yaw and pitch directions. In the LiDAR base, an Arduino microcontroller is used to process the ToF signals, sample the IMU signals, and control the MEMS mirror scanning direction. The data are sent to a PC for post-processing and visualization.
Since our motivation was use with micro-robots, our maximum detection distance is 4 m with an $80\%$ albedo object and the minimal resolvable distance is 5 cm. The maximum ToF measurement rate is 400 points/sec. According to the compensation algorithm described in the previous section, the MEMS mirror scanning direction is updated and compensated for motion at 400 Hz. We now describe our experiments. Please see the accompanying video for further clarification.
\subsection{Compensation experiments with zero translation}
To demonstrate the effect of compensation, a visible laser is used instead of the LiDAR IR light to visualize the effect of tracking. We mount the LiDAR MEMS scanner on the UAV, as shown in Figure \ref{fig:setup}. The MEMS mirror desired scanning angle is set to a single point on the target object ($0^\circ$ by $0^\circ$) to make comparison easier.
\color{black} Here the entire scanning grid $\{\alpha_i,\beta_i,1\}$ consists of a single point at the projection center. We use the general compensation outlined in~\ref{subsubsec:general_rotation_compensation}.
\color{black}
The UAV together with the LiDAR scanner head is held by hand with random rotational motion in the yaw/pitch directions. The upper laser trace comes from the laser rigidly connected to the UAV, which indicates the UAV's motion. The lower trace is reflected from the MEMS mirror, which shows the compensated/uncompensated scanning laser. The results are shown in \autoref{fig:vis}. The MEMS scanning laser trace area of the compensated scanning is significantly smaller than the uncompensated scanning trace under similar rotational motion disturbance. Videos of the real-time compensation results are available in the supplementary materials.
Then the IR pulse laser is connected to run the LiDAR. An object of interest (in the shape of a +) is placed 2.4 m away from the LiDAR and at the center of the field of view and the background is at 2.8 m, as shown in Figure \ref{fig:exp2}(a). The MEMS mirror performs a raster scanning pattern with an initial field of view of $-3.5^\circ$$\sim$$+3.5^\circ$ in both axes to leave the room for compensation. Each frame has 20 by 20 pixels, and the frame refresh rate is 1 fps. To mimic robot vibration, the tripod is rotated randomly in the directions of yaw (Z-axis) and pitch (Y-axis), and the point clouds are shown in Figures \ref{fig:exp2}(d). Despite the motion of the LiDAR head, the point clouds are quite stable. The differences among all of the point clouds are generally less than 2 pixel in either axis, caused by measurement noise.
Figure~\ref{fig:exp2}(c) shows the point clouds without compensated scanning, where the relative position of the target object in the point clouds keeps changing. Without compensation, the target object may even leave the MEMS scanning FoV. With a continuous rotation of 1.5 Hz in the Y-axis, the same structure may appear in multiple positions in the same point-cloud frame, as shown in the 3rd panel of Figure~\ref{fig:exp2}(c). Multiple point-cloud frames are stacked together and shown in the last column of Figure~\ref{fig:exp2}. The object can still be identified in the compensated point cloud (Figure~\ref{fig:exp2}(f)), but becomes fuzzy due to motion jitter when not compensated (Figure~\ref{fig:exp2}(e)). Videos of the real-time compensated point cloud results are available in the supplementary materials.
\section{UAV Experiment}
\label{sec:robotexp}
\begin{figure*}[!ht]
\centering
\begin{subfigure}{0.6\textwidth}
\centering
\includegraphics[width=\textwidth]{UAV_Visible.PNG}
\caption{Our UAV setup with Intel Aero UAV and our LiDAR mounted on it. }
\label{fig:fvsc_drone}
\end{subfigure}%
\begin{subfigure}{0.8\textwidth}
\centering
\includegraphics[width=\textwidth]{fvsc1.jpg}
\caption{Effect of compensation rate on yaw rotation}
\label{fig:fvsc1}
\end{subfigure}%
\begin{subfigure}{0.8\textwidth}
\centering
\includegraphics[width=\linewidth]{fvsc2.jpg}
\caption{Effect of compensation rate on pitch rotation}
\label{fig:fvsc2}
\end{subfigure}%
\caption{A comparison of the compensation strength versus the robot pose sampling frequency. All images are accumulations of 12 s of UAV hovering video. The compensation target scanning direction is fixed.}
\label{fig:fvsc}
\end{figure*}
\begin{figure*} [!ht]
\centering
\begin{subfigure}[b]{0.6\linewidth}
\centering
\includegraphics[width=\textwidth]{figures/Drone_flying.jpg}
\caption{Our set up with UAV, LiDAR and the feature target.}
\label{fig:uav_pointcloud}
\end{subfigure}
\begin{subfigure}[b]{0.2\linewidth}
\centering
\includegraphics[width=\textwidth]{fvpd_uc.png}
\caption{Uncompensated}
\label{fig:fvpd_uc}
\end{subfigure}%
\begin{subfigure}[b]{0.2\linewidth}
\centering
\includegraphics[width=\textwidth]{fvpd_1f.png}
\caption{One frame}
\label{fig:fvpd_1f}
\end{subfigure}%
\begin{subfigure}[b]{0.2\linewidth}
\centering
\includegraphics[width=\textwidth]{fvpd_50hz.png}
\caption{50Hz sampling rate}
\label{fig:fvpd_50hz}
\end{subfigure}%
\begin{subfigure}[b]{0.2\linewidth}
\centering
\includegraphics[width=\textwidth]{figures/fvpd_2hz.jpg}
\caption{2Hz sampling rate}
\label{fig:fvpd_2hz}
\end{subfigure}%
\begin{subfigure}[b]{0.2\linewidth}
\centering
\includegraphics[width=\textwidth]{figures/fvpd_1hz.jpg}
\caption{1Hz sampling rate}
\label{fig:fvpd_1hz}
\end{subfigure}%
\caption{A comparison of the compensation strength versus the robot pose sampling frequency. The images are accumulations of 20 s of point cloud video recorded while the UAV hovers. We use a cuboidal object (as seen in Figure~\ref{fig:uav_pointcloud}) as the object of interest.
The width of the target increases due to compensation inaccuracy as the compensation rate is reduced from 50 Hz to 1 Hz, demonstrating the utility of high compensation rates even in such static scenarios.}
\label{fig:fvpd}
\end{figure*}
Next, we demonstrate the motion compensated LiDAR by flying it on a UAV. The robot pose comes from an external motion capture system that tracks the UAV. We vary the robot pose sampling rate and study its effect on compensation. The UAV is controlled to hover at a designated position with yaw/pitch rotation as motion jitter. The motion compensated LiDAR is set to compensate all rotational motion, including both the controlled rotation and random motion disturbances. The compensated MEMS scanning laser uses visible light, and another visible laser is fixed at a relatively higher position on the UAV, as shown in the images in Figure~\ref{fig:fvsc1}. The target scanning direction is a fixed point on the target.
Here, the entire scanning grid $\{\alpha_i,\beta_i,1\}$ consists of a $20\times20$ grid of points. We use the aiming compensation outlined in Section~\ref{subsubsec:aiming}.
We trim about 12 s of video from each experiment while the UAV is flying, and the frames of each video are accumulated into a single image to track the motion of the UAV and the errors of the compensated scanning.
The robot pose sampling rate is set from 1 Hz to 200 Hz to investigate its effect on the compensation results. The controlled UAV rotations are in the yaw and pitch directions; however, the flight itself adds some random motion. Point clouds are also collected while the UAV is hovering, and we overlap several frames. As the robot pose sampling frequency increases, the width of the overlapping area shrinks from 10--11 points at 1 Hz (Fig.~\ref{fig:fvpd_1hz}) to 6 points at 50 Hz (Fig.~\ref{fig:fvpd_50hz}). The target object in the point cloud settles down to a smaller area, and its location in the point cloud becomes more certain.
\section{Rotation Compensated LiDAR-Inertial SLAM Design}
\label{sec:rotation_compensated_lidar_slam}
SLAM is a fundamental application for visual sensors. Existing SLAM literature reasons about odometry in the sensor's local frame, sometimes called the camera frame. In this work this frame is the robot frame, with world-frame orientation $R_{robot}^w$ (see Section~\ref{subsubsec:coordinate-system}).
The basic assumption of existing SLAM is that visual sensor readings use the robot frame, with world rotation $R_{robot}^w$, as their reference. This assumption does not hold for our sensor, because our sensor readings use the frame with world orientation $R_{control}R_{robot}^w$ as their reference.
Throughout Sections~\ref{subsubsec:general_rotation_compensation} to \ref{subsubsec:aiming}, the additional non-zero rotation $R_{control}$ orients the original scanning grid towards different directions. The existence of $R_{control}$ breaks the basic assumption of existing SLAM.
$R_{control}$ must therefore be compensated for in order for existing SLAM pipelines to work with our sensor. This can be done post-capture, using either the method of Section~\ref{subsubsec:general_rotation_compensation} or that of Section~\ref{subsubsec:special_case}. We detail the compensation in Section~\ref{subsec:the_rotation_stage}.
Most LiDAR odometry pipelines use Iterative Closest Point (ICP) to match consecutive scans and determine the rotation and translation between poses. Any rotation of the LiDAR relative to the vehicle introduces errors into the ICP prior, which directly degrades the quality of ICP's point-cloud registration. Although ICP can tolerate a certain level of error in its prior, in Section~\ref{sec:tolerable_input_level} we show that this is far from enough as the magnitude of the $R_{control}$ input increases.
\subsection{Motion Compensation for LiDAR SLAM}
In this simulation, we simulate a 360-degree Velodyne LiDAR that can rotate relative to the vehicle it is mounted on via a universal joint. A universal joint has rotational DOF similar to a MEMS mirror; both are limited to 2 DOF. This setup fits into the compensation framework introduced in the special case of Section~\ref{subsubsec:special_case}. In this section, we demonstrate in simulation that such rotation introduces error into an off-the-shelf LiDAR SLAM pipeline. Additionally, we propose a general method to incorporate such rotation when performing LiDAR-based SLAM. We demonstrate the effectiveness of the framework in a Rotation Compensated LiDAR-Inertial Odometry and Mapping package, which is publicly available on ~\href{https://github.com/yuyangch/LIO-SAM_rotation}{Github}.
\begin{figure}
\centering
\includegraphics[trim={0 0cm 0 0cm},clip,width=.95\columnwidth]{figures/rotating_lidar_slam_pipeline.jpg}
\caption{Illustration of our rotating LiDAR SLAM augmentation pipeline. The existing structures are shown in grey.}
\vspace{+0pt}
\label{fig:rotating_lidar_slam_pipeline}
\end{figure}
For ease of integration, our framework does not make large edits to the existing paradigm. It only adds a ``rotate'' stage right after the de-skew stage in the front end, before the feature extraction stage. This addition can be easily integrated with existing pipelines and future designs. The rotate stage performs a single operation: it rotates the de-skewed point cloud according to the control rotation input to the LiDAR. Our workflow block diagram is shown in Figure~\ref{fig:rotating_lidar_slam_pipeline}.
\subsection{The Rotation Stage}
\label{subsec:the_rotation_stage}
The purpose of this stage of the pipeline is to rotate the captured LiDAR frame to its correct position relative to the LiDAR's base frame of reference. (In this work, the LiDAR's base frame is identical to the vehicle's body frame.)
Let the LiDAR's base frame have world rotation $R_{robot}^w \in SO(3)$.
In a traditional LiDAR that does not rotate, all points received in a LiDAR frame are positioned relative to the LiDAR's base frame, with world rotation $R_{robot}^w$. However, this assumption is incorrect for our device, where the LiDAR frame is positioned relative to the frame with rotation $R_{control}R_{robot}^w$.
The LiDAR's head can rotate by $R_{control} \in SO(3)$ relative to its base. This rotation is restricted to the azimuth $\beta$ and elevation $\alpha$ directions. Note that here we analyze the special-case compensation of Section~\ref{subsubsec:special_case}, but it can easily be extended to the full $SO(3)$ compensation of Section~\ref{subsubsec:general_rotation_compensation}.
When a LiDAR frame is received, we take the most recently known rotation $\alpha,\beta$ (in this case the most recently known commanded rotation) and convert it into a rotation matrix:
\begin{gather}
\label{eq:lidar_head_rotation_matrix}
R_{control}
=
\begin{bmatrix}
\cos{\beta} & 0 & \sin{\beta} \\
0 & 1 & 0 \\
-\sin{\beta} & 0 & \cos{\beta}
\end{bmatrix}
\begin{bmatrix}
\cos{\alpha} & -\sin{\alpha} & 0 \\
\sin{\alpha} & \cos{\alpha} & 0 \\
0 & 0 & 1
\end{bmatrix}
\end{gather}
We then apply this rotation to each point $p \in \mathbb{R}^3$ in the frame's point cloud:
\begin{gather}
\label{eq:rotated_pointcloud}
p_{rotated}
= R_{control}p
\end{gather}
The rotated point cloud $p_{rotated}$ is now located correctly relative to the LiDAR's base frame, with world rotation $R_{robot}^w$. The basic assumption of traditional SLAM is now met.
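For concreteness, a minimal sketch of this rotate stage is shown below. It assumes Eigen (on which LIO-SAM already depends) and a de-skewed cloud stored as a vector of 3D points; the function names are ours, not those of the released package. As described above, this operation runs once per frame, between the de-skew and feature extraction stages.
\begin{verbatim}
// Minimal sketch of the rotate stage; assumes Eigen, and
// function names are illustrative.
#include <Eigen/Dense>
#include <cmath>
#include <vector>

// Build R_control from elevation (alpha) and azimuth (beta),
// following the rotation matrix equation above.
Eigen::Matrix3d controlRotation(double alpha, double beta) {
  Eigen::Matrix3d Ry, Rz;
  Ry <<  std::cos(beta), 0.0, std::sin(beta),
         0.0,            1.0, 0.0,
        -std::sin(beta), 0.0, std::cos(beta);
  Rz <<  std::cos(alpha), -std::sin(alpha), 0.0,
         std::sin(alpha),  std::cos(alpha), 0.0,
         0.0,              0.0,             1.0;
  return Ry * Rz;
}

// Rotate every de-skewed point back into the LiDAR base frame.
void rotateStage(std::vector<Eigen::Vector3d>& cloud,
                 double alpha, double beta) {
  const Eigen::Matrix3d R = controlRotation(alpha, beta);
  for (auto& p : cloud) p = R * p;
}
\end{verbatim}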
\subsection{Evaluation}
\label{sec:SLAM-eval}
Now we evaluate the sensor in simulation to answer a few questions. First, we compare traditional LiDAR SLAM and our motion compensated SLAM in terms of how they handle changes in the magnitude of the mirror/universal-joint orientation. Next, we investigate the effect of noise in the mirror's orientation (say, from a faulty IMU or other sensor) on the robustness of our pipeline, and we show the degree to which our pipeline can tolerate such noise.
The proposed SLAM framework should be expected to function even when LiDAR users employ control policies that rotate the FOV significantly frame-to-frame. This is unlike the scenario of running an active stabilization control policy, as proposed in Section~\ref{sec:benefit of motion compensation in SLAM}, where frame-to-frame variation is minimal. Therefore, in this evaluation section, we use a control policy that samples random LiDAR rotation control inputs from Gaussian distributions at high frequency.
We choose LIO-SAM as the traditional SLAM package to compare against, build our motion compensation framework into it, and open-source the result on GitHub. LIO-SAM has all the signature point-cloud processing stages shown in Figure~\ref{fig:rotating_lidar_slam_pipeline}. It is relatively new and its SLAM accuracy is competitive with the state of the art. Through the open-source code, we hope to provide the community with an example of incorporating our framework.
For odometry error evaluation, we calculate the Average Translation Error (ATE), as defined by the KITTI benchmark~\cite{geiger2012we}:
\begin{gather}
\label{eq:kitti_benchmark_eqn}
E_{trans}(\mathcal{F})
= \frac{1}{|\mathcal{F}|} \sum_{i,j \in \mathcal{F}}\|\hat{T}_j\hat{T}_i^{-1}(T_{j}T_i^{-1})^{-1}\|_{2}
\end{gather}
where $\mathcal{F}$ is a set of frame pairs $(i,j)$, and $T$ and $\hat{T}$ are the estimated and true LiDAR poses, respectively.
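As a sketch, the metric can be computed as follows; we assume poses stored as Eigen isometries, take the translation norm of the relative-pose error as the $\|\cdot\|_{2}$ term, and use our own helper name.
\begin{verbatim}
// Sketch of the ATE metric above; assumes Eigen isometry poses
// and a precomputed set of KITTI-style frame pairs (i, j).
#include <Eigen/Geometry>
#include <utility>
#include <vector>

double averageTranslationError(
    const std::vector<Eigen::Isometry3d>& T,     // estimated
    const std::vector<Eigen::Isometry3d>& That,  // true
    const std::vector<std::pair<int, int>>& F) {
  double sum = 0.0;
  for (const auto& ij : F) {
    const int i = ij.first, j = ij.second;
    const Eigen::Isometry3d relEst  = T[j] * T[i].inverse();
    const Eigen::Isometry3d relTrue = That[j] * That[i].inverse();
    // Translation norm of the relative-pose error.
    sum += (relTrue * relEst.inverse()).translation().norm();
  }
  return F.empty() ? 0.0 : sum / static_cast<double>(F.size());
}
\end{verbatim}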
\subsubsection{Experiment Setup}
\begin{figure*}
\centering
\includegraphics[width=.9\textwidth]{figures/PX4_Gazebo_Simulator.png}
\caption{Illustration of a) our simulator environment, b) mapping results of LIO-SAM with our motion compensation, and c) mapping results of stock LIO-SAM. Note that the point cloud generated in simulation is rotated with respect to the frame with world rotation $R_{control}R_{robot}^w$, which breaks the assumption of traditional visual SLAM (see Section~\ref{sec:rotation_compensated_lidar_slam}). It also has significant frame-to-frame FOV variation (see Section~\ref{sec:SLAM-eval}), which is difficult for any uncompensated, traditional SLAM to handle.}
\vspace{0pt}
\label{fig:px4_gazebo_simulator}
\end{figure*}
A simulation study is set up in the robotics simulator Gazebo, where a LiDAR with sensor characteristics similar to a VLP-32~\cite{velodyne} is mounted on a simulated drone. The LiDAR can rotate in azimuth and elevation via a universal joint. The simulated drone, Iris, is from the PX4 simulation package. Its onboard IMU has noise added according to the noise model outlined in Kalibr~\cite{furgale2013unified}. The point-cloud messages from the LiDAR, as well as the IMU messages from the drone, are passed into the robotics middleware ROS, where the proposed LiDAR SLAM package runs.
The drone is commanded to fly in a diamond waypoint pattern around an environment with different types of residential buildings.
The proposed LiDAR-Inertial SLAM package builds on top of LIO-SAM, which employs the powerful PGO backend GTSAM~\cite{dellaert2012factor}. We incorporate the compensation described in Section~\ref{subsec:the_rotation_stage} into LIO-SAM, hereafter referred to as Motion Compensated LIO-SAM. Naturally, we compare the SLAM performance of Motion Compensated LIO-SAM against the stock version of LIO-SAM; see Figure~\ref{fig:px4_gazebo_simulator}. To control the orientation of the universal joint, angular commands $\alpha,\beta$, in degrees, are input to the mirror.
\subsubsection{Level of mirror control orientation magnitude tolerable by an unmodified pipeline vs.\ our system}
\label{sec:tolerable_input_level}
The two angular commands are sampled at 10 Hz from 1-D Gaussian distributions with various standard deviations. Odometry error versus the standard deviation of the commanded rotation is plotted in Figure~\ref{fig:mirror_control_vs_odometry_error}.
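A minimal sketch of this control policy is given below; the 10 Hz loop rate and per-axis standard deviation follow the text, while the function name is illustrative.
\begin{verbatim}
// Sketch of the random control policy: at 10 Hz, sample azimuth
// and elevation commands (degrees) from zero-mean Gaussians.
#include <random>
#include <utility>

std::pair<double, double> sampleMirrorCommand(double sigmaDeg,
                                              std::mt19937& rng) {
  std::normal_distribution<double> gauss(0.0, sigmaDeg);
  const double alphaCmd = gauss(rng);  // elevation command
  const double betaCmd  = gauss(rng);  // azimuth command
  return {alphaCmd, betaCmd};
}
\end{verbatim}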
A Gaussian distribution with an 8-degree standard deviation generates input angles within $\pm 8$, $\pm 16$, and $\pm 24$ degrees 68\%, 95\%, and 99.7\% of the time, respectively. Therefore, 99.7\% of the time, the angular input spans a range of 48 degrees.
In short, by considering mirror rotation, the system tolerates angular inputs that span 48 degrees. In contrast, without mirror rotation information, the system only tolerates angular inputs that span 12 degrees (i.e., a 2-degree standard deviation, whose $\pm 3\sigma$ range is $\pm 6$ degrees).
Even in cases where the input spans less than 12 degrees, considering mirror rotation still improves SLAM quality in comparison.
\begin{figure*}
\centering
\includegraphics[width=.9\textwidth]{figures/C_UC_odometry_comparison.png}
\caption{a) Mirror control magnitude vs. odometry error, our Motion Compensated LIO-SAM vs. stock LIO-SAM. As the mirror control magnitude increases, the unmodified LIO-SAM fails completely. b) At a 2-degree standard deviation, our Motion Compensated LIO-SAM outperforms LIO-SAM. c) At a 3-degree standard deviation and beyond, our Motion Compensated LIO-SAM performs normally while the stock LIO-SAM completely fails.}
\label{fig:mirror_control_vs_odometry_error}
\end{figure*}
\subsubsection{Level of mirror control noise tolerable}
The two angular commands are sampled from 1-D Gaussian distributions with a standard deviation of 3 degrees. Additionally, noise rotations in both azimuth and elevation are added on top of each channel. Odometry error versus the standard deviation of the command rotation noise is plotted in Figure~\ref{fig:mirror_control_noise_vs_odometry_error}.
The system can tolerate mirror control input noise up to a 1.6-degree standard deviation, which spans 9.6 degrees.
\begin{figure}
\centering
\includegraphics[width=.48\textwidth]{figures/input_noise_sigma_vs_odometry_error_ateare.pdf}
\caption{Mirror control noise vs. odometry error. As the mirror control noise increases, the odometry error also increases. Our pipeline fails once the noise standard deviation exceeds 1.6 degrees.}
\label{fig:mirror_control_noise_vs_odometry_error}
\end{figure}
\section{Limitations and Conclusions}
We have designed an adaptive, lightweight LiDAR capable of reorienting itself, and we have demonstrated the benefits of such a LiDAR in both simulation and experiment. In hardware, we demonstrated image stabilization using an onboard IMU, and we demonstrated viewing an object of interest using external robot pose feedback. Please see the supplementary material of this paper for MEMS-related details, including an analysis of robot motion shock on the MEMS mirror and preliminary point cloud stitching; we also explain there how such a sensor can reduce sensing uncertainty. Finally, our accompanying video shows our experiments in action.
We would also like to acknowledge limitations of our study.
\begin{itemize}
\item We have indirectly compared to software methods using compensation \emph{delay}. Compared to hardware compensation, any software compensation adds delay, so delay is a fundamental metric for hardware-software comparison. For future work we will compare directly with software compensation methods.
\item Our design requires the robot to be connected to the heavier sensing components by a tether. This limits the flight range and the detection FoV of the system. Although removing the tether restriction is left to future work, we believe that our design can significantly advance sensing in microrobots and will help our community in designing future microrobots.
\item All our results (using IMU as well as Vicon motion capture) are indoor results. We hope to perform future experiments with outdoor effects such as wind.
\item In our current system design, implementation bottlenecks limit the compensated bandwidth. These are caused by the MEMS mirror and by the signal processing; tightly coupled on-board designs can reduce them.
\end{itemize}
In conclusion, through simulation and a prototype implementation we realize the design shown in Figure~\ref{fig:futuristic}. We have shown, in simulation and in real hardware experiments, that hardware compensation using a MEMS mirror improves both reconstruction and mapping. In particular, microrobots that suffer from heavy vibration and motion jitter (such as flapping-wing MAVs~\cite{robobees-sciam13}) can benefit greatly from the motion compensated MEMS mirror scanning LiDAR for stabilized scene capture. Finally, over the long term, we believe that our design methodology can decouple robot and sensor geometry, greatly simplifying robot perception.
\printbibliography
\newpage
\section*{Supplementary materials}
\subsection*{Motivation}
\subsubsection*{Fisher Information in Iterative Closest Point}
A popular method for 3D feature alignment is the Iterative Closest Point (ICP) method. In feature-based ICP, the uncertainty of robot pose estimation is inversely correlated with the number of correctly matched features observed.
Recent work has reasoned about this using the Cramér-Rao lower bound; regardless of parameterization, the bounds come in very similar forms. We refer the reader to \cite{censi2007achievable} and \cite{zhang2019beyond} for details.
Consider a robot equipped with a LiDAR sensor in the environment. Each beam of the sensor is represented as a bearing vector measurement: a vector in the direction of the ray whose length is generated by a ray-tracing measurement model, corrupted by zero-mean Gaussian noise. Let the LiDAR's pose be $\pmb{p}\triangleq (\pmb{R},\pmb{t})$ w.r.t.\ a fixed world frame. The sensor outputs $\pmb{z}$, a set of $m$ bearing vector measurements, of which only $n$ are correctly matched to environmental features. In \cite{zhang2019beyond}, a 3D feature matching paper where measurements are parameterized as bearing vectors, the Fisher Information (FI) is expressed (assuming independent, identically distributed zero-mean Gaussian noise $\mathcal{N}(0,\sigma^2)$) as
\begin{equation}\label{FIF_eq}
I_{\pmb{p}}(\pmb{z})=\frac{1}{\sigma^2}\sum_{i}^{n}I_i
\end{equation}
where $I_i$ is the Fisher Information Matrix associated with the $i$-th matched 3D feature bearing vector measurement.
Equation~\ref{FIF_eq} shows that the FIM in feature-based ICP is positively correlated with the number of correctly matched features observed. In other words, the Cramér-Rao lower bound is inversely correlated with the number of correctly matched features: the more correctly matched features are considered, the lower the odometry uncertainty. While this is hardly surprising, it forms the theoretical backing for our system design, where we hypothesize \emph{that decoupling increases correctly matched features} because the sensor can orient towards feature-rich regions.
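To make the implication explicit, the Cramér-Rao bound states that any unbiased estimator $\hat{\pmb{p}}$ satisfies $\mathrm{cov}(\hat{\pmb{p}}) \succeq I_{\pmb{p}}(\pmb{z})^{-1}$. As a simplified reading (our assumption: each matched feature contributes a comparable, well-conditioned $I_i \approx I_0$),
\begin{equation}
\mathrm{cov}(\hat{\pmb{p}}) \succeq I_{\pmb{p}}(\pmb{z})^{-1} \approx \frac{\sigma^2}{n} I_0^{-1},
\end{equation}
so the pose uncertainty shrinks roughly as $1/n$ in the number of correctly matched features.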
\subsection*{Quaternion $L_2$ Mean}
Suppose $q_1,\dots,q_n$ are the world-frame quaternions stored in a queue data structure, representing the UAV's world-frame rotation over the last $n$ time stamps. We can find their average via LERP, summing the quaternions as 4-vectors and normalizing:
\begin{gather}
q_{avg}
= \frac{\sum_{i=1}^n q_i}{||\sum_{i=1}^n q_i||_2}
\label{eqn:LERP}
\end{gather}
$q_{avg}$ can be converted into a rotation matrix $R_{desired}^w$. Then we can find $R_{control}$ and the control input for each compensated point in the spatial scanning set according to the methods outlined in Section~\ref{subsubsec:general_rotation_compensation}.
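A minimal sketch of this averaging step is given below; it assumes Eigen quaternions and that the queued quaternions lie on the same hemisphere (since $q$ and $-q$ encode the same rotation, signs should be aligned first).
\begin{verbatim}
// Sketch of the LERP quaternion mean above; assumes Eigen and
// sign-aligned inputs (flip any q_i whose dot product with q_1
// is negative, since q and -q are the same rotation).
#include <Eigen/Geometry>
#include <vector>

Eigen::Quaterniond quaternionL2Mean(
    const std::vector<Eigen::Quaterniond>& queue) {
  Eigen::Vector4d sum = Eigen::Vector4d::Zero();
  for (const auto& q : queue) sum += q.coeffs();  // (x, y, z, w)
  sum.normalize();  // divide by the L2 norm of the summed 4-vector
  return Eigen::Quaterniond(sum(3), sum(0), sum(1), sum(2));
}
\end{verbatim}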
\subsection*{Motion Compensation Controller}
The IMU model is provided by the manufacturer and is usually simplified as a passive low-pass filter. For example, the IMU (VN-100) has a bandwidth of $f_2=150$ Hz with the simplified transfer function
\begin{equation}
H_2(s) = \frac{2\pi f_2}{s+2\pi f_2}
\end{equation}
The compensating component $H_g(s)$ and the high-pass filter $H_{pf}(s)$ improve the usable range of the motion compensation. We assume that the relatively low-frequency part of the motion spectrum is the desired motion and should not be compensated; this can be achieved by tuning the bandwidth of the high-pass filter $H_{pf}(s)$. In this work, a passive high-pass filter with a bandwidth of $f_h$ is selected,
\begin{equation}
H_{pf}(s) = \frac{s}{s+2\pi f_h}
\end{equation}
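For reference, a sketch of a discrete-time realization of this first-order high-pass (the standard RC recurrence; the 400 Hz sample rate matches the implementation described below, and the function name is ours) is:
\begin{verbatim}
// Sketch: discrete first-order high-pass realizing H_pf(s) with
// cutoff fh (Hz), sampled at 400 Hz (T = 0.0025 s).
float highPassStep(float x, float fh) {
  static float xPrev = 0.0f, yPrev = 0.0f;
  const float T   = 0.0025f;
  const float tau = 1.0f / (6.28318f * fh);  // 1/(2*pi*fh)
  const float a   = tau / (tau + T);
  const float y   = a * (yPrev + x - xPrev);
  xPrev = x;
  yPrev = y;
  return y;
}
\end{verbatim}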
To improve the response bandwidth at higher frequencies and to suppress noise, $H_g(s)$ is added. $H_g(s)$ is defined from the inverse of the transfer functions of the IMU and the MEMS mirror,
\begin{equation}
H_g(s) = \frac{B(s)}{H_1(s) \cdot H_2(s)}
\end{equation}
\noindent where $B(s)$ ensures that the order of the numerator is not higher than the order of the denominator. A fourth-order Butterworth filter with a bandwidth of $f_B = 200$ Hz is used as the low-pass filter. The 200 Hz bandwidth is selected to improve the speed while still suppressing the resonance of the MEMS mirror:
\begin{equation}
B(s) = \frac{1}{\left(\frac{s}{2\pi f_B}\right)^4 + 2.613\left(\frac{s}{2\pi f_B}\right)^3 + 3.414\left(\frac{s}{2\pi f_B}\right)^2 + 2.613\left(\frac{s}{2\pi f_B}\right) + 1}
\end{equation}
which implies,
\begin{equation}
H_g(s) = \frac{s^4 + 512\,s^3 + 1.081\times 10^{8}\,s^2 + 1.486\times 10^{11}\,s + 4.457\times 10^{13}}{4.457\times 10^{13}}
\end{equation}
To implement the transfer function in a microcontroller, the continuous-time transfer function $H_g(s)$ is converted to a discrete-time transfer function $G[z]$ with a sampling time of 0.0025 s,
\begin{equation}
G[z] = \frac{Y[z]}{X[z]} = \frac{0.843z^4 + 0.93z^3 - 0.32 z^2 - 0.045 z + 0.0064}{z^4 + 0.32 z^3 + 0.11 z^2 - 0.017 z + 0.0021}
\end{equation}
$X[z]$ is the input (IMU measurement) and $Y[z]$ is the output (MEMS scanning angle) in the $z$-domain. Converting the discrete-time transfer function $G[z]$ to the time domain,
\begin{multline}
y[n] = (0.843x[n] + 0.93x[n-1] - 0.32x[n-2] - 0.045x[n-3]
\\ + 0.0064x[n-4])- \\
(0.32y[n-1] + 0.11y[n-2] - 0.017y[n-3] + 0.0021y[n-4]).
\end{multline}
Simulink is used to simulate the performance of the controller and tune the parameters; the setup is shown in Fig.~\ref{fig:simulink}. The input motion jitter is a sinusoidal wave. Figs.~\ref{fig:simulink}(b) and (c) show the motion compensation residues for input motion jitter frequencies of 5 Hz and 10 Hz; $H_{g}(s)$ can effectively reduce the residue. Note that the simulated compensation residue is expected to be smaller than in real-world experiments because of digitization effects.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{simulink.jpg}
\caption{(a) The Simulink simulation setup of the motion compensation. (b), (c) The compensation residues for input frequencies of 5 Hz and 10 Hz, respectively. (d) The effect of $H_{pf}(s)$ under a step response.}
\label{fig:simulink}
\end{figure}
\subsection*{Motion Compensation Controller Design}
These algorithms provide motion-compensated MEMS scanning in steady state. However, as both the IMU and the MEMS mirror have limited response bandwidths, the residue and the speed of the motion compensation may degrade as the frequency of the motion jitter increases. Also, the MEMS mirror has a limited scanning range, so the range of motion compensation is also limited. Since the motion compensation system is open-loop, a compensating stage $H_g(s)$ can be added in the microcontroller to increase the performance of the system.
A simplified block diagram of the open-loop motion compensation system is shown in Figure~\ref{fig:block}, where $H_1(s)$ denotes the MEMS mirror tip-tilt model given in Eq.~\ref{H1s}, $H_2(s)$ is the IMU model, $H_g(s)$ is the compensating component, $H_{pf}(s)$ is an optional high-pass filter, and $F$ is the model of the MEMS mirror driver.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{block.png}
\caption{The block diagram of the open-loop motion compensation system.}
\label{fig:block}
\end{figure}
\subsection*{Motion Compensation Strength versus Motion Jitter Frequency}
The MEMS mirror motion compensation system is controlled by an Arduino Mega. The IMU sends data to the Arduino at 400 Hz. The data are processed, and the compensator $H_{g}(s)$ is implemented on the Arduino to obtain the MEMS angle and the desired driving voltages of the MEMS mirror. The two orthogonal MEMS scanning directions are assembled parallel to two IMU axes. To evaluate the compensation results, the reflected laser is captured by a position-sensitive detector (PSD) fixed on the bench, placed 12 cm from the MEMS mirror. The PSD is for compensation evaluation only and is not in the controller loop.
Testing was done only on LEVEL 1 and LEVEL 2; since high-resolution translation measurement is not available, LEVEL 3 compensation could not be tested. The motion-compensated MEMS scanner is mounted on a stepper motor to test the compensation capability at various frequencies. The test bench, including the MEMS mirror, the IMU, and the pigtailed laser, is fixed on the shaft of the stepper and rotates with the motor. One of the MEMS scanning directions is coincident with the motor rotation direction. The laser is delivered through a fiber. The stepper has a step size of $1.8^\circ$; with a micro-stepping controller, the approximate minimal step is as small as $0.018^\circ$ for smooth stepping control. The transient time of a $1.8^\circ$ step can be set from 30 ms to 500 ms. The motion compensation is tested in the pitch direction for smaller errors. The motor is placed horizontally to the ground.
Fig.~\ref{fig:jitterExp} shows the PSD results without motion compensation. The motor transient time is 85 ms/$1.8^\circ$. As can be seen from Fig.~\ref{fig:jitterExp}(b), the original IMU measurement (red line) is 5 ms behind the PSD signal (blue line), while the $H_{g}(s)$-processed IMU measurement (green line) leads the original IMU measurement (red line).
\begin{figure}
\centering
\includegraphics[width=\linewidth]{jitter_exp.jpg}
\caption{PSD raw data and IMU measurement results without motion compensation. ``IMU'' is the original IMU measurement; ``processed IMU'' is the IMU result processed with the compensator $H_{g}(s)$. (b) is a zoomed-in view of (a).}
\label{fig:jitterExp}
\end{figure}
Fig.~\ref{fig:comp_vs_speed} compares the motion compensation results under various motor speeds ($t$, motor transient time), with and without the compensator $H_{g}(s)$. As the motor speed gets faster, the motion compensation errors increase, and $H_{g}(s)$ effectively reduces the error.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{comp_vs_speed.jpg}
\caption{Comparison of the motion compensation results under different motor speeds ($t$, motor transient time), with and without the compensator $H_{g}(s)$. As the motor speed gets faster, the motion compensation errors become larger; $H_{g}(s)$ effectively improves the compensated scanning.}
\label{fig:comp_vs_speed}
\end{figure*}
\begin{figure*}
\centering
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/SEM.png}
\caption{}
\label{fig:memssem}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/angle.png}
\caption{}
\label{fig:memsangle}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/step.png}
\caption{}
\label{fig:memsstep}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/frequency.png}
\caption{}
\label{fig:memsfreq}
\end{subfigure}
\caption{(a) An SEM image of the fabricated MEMS mirror. (b) The optical scanning angle response of the MEMS mirror. (c) The step response of the MEMS mirror (5 ms). (d) The frequency response of the MEMS mirror.}
\label{fig:memschar}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{exp3.jpg}
\caption{Compensated point cloud stitching experiment and its results. (a) The target scene placed 22 cm from the LiDAR. The red line is the ideal motion; the blue line qualitatively depicts the type of experiment we performed, with vertical disturbances only. The graphs show the actual, measured disturbances. (b) With compensated scanning, the recorded motion and the generated point cloud show sharper depth edges than the uncompensated scanning in (c).}
\label{fig:stitching}
\vspace{-10pt}
\end{figure*}
The motion compensation under continuous sinusoidal motion disturbance is also tested. The motor drives the MEMS mirror scanner head with sinusoidal motions, and the MEMS compensated scanning system attempts to hold the scanning angle on an ideal direction. The effect with and without the compensator term is compared in Table~\ref{table:comp}.
\begin{table}
\centering
\caption{A comparison of the errors under step motion disturbance and continuous sinusoidal motion disturbance, with and without the compensator $H_{g}(s)$.}
\includegraphics[width=\linewidth]{comp_table.jpg}
\label{table:comp}
\end{table}
\subsection*{MEMS and Robot Motion Shock}
The mirror has a maximum actuation voltage of 5 V and a scanning FoV of $-4.8^\circ$ to $+5.2^\circ$ in the horizontal axis and $-3.8^\circ$ to $+4.3^\circ$ in the vertical axis (Fig.~\ref{fig:memsangle}). The voltage-to-tilt-angle response of the MEMS mirror is approximately linear, and the mirror can perform non-resonant arbitrary scanning or pointing according to the control signal. The cross-axis sensitivity is about 6\% in both axes. In the microcontroller, the voltage-to-scanning-angle relationship is approximated as linear with the cross-axis sensitivity taken into consideration; the maximum error caused by the nonlinearity is $0.3^\circ$.
The step response time is 5 ms (Fig.~\ref{fig:memsstep}) with very small ringing. To test the frequency response, the frequency of the actuation voltage is swept and the actual tilt angle is measured by tracking the light beam reflected from the mirror plate using a position-sensitive detector (PSD), as shown in Fig.~\ref{fig:memsfreq}. The piston resonant mode is found at $f_1 = 1.07$ kHz, and the tip-tilt resonant modes are at $f_2 = 1.63$ kHz and $f_3 = 1.69$ kHz.
The tip-tilt scanning response of the MEMS mirror is modeled as a 3rd-order system following \cite{li2017modelling}. The transfer function $H_1(s)$ can be expressed as
\begin{equation} \label{H1s}
H_1(s) = \frac{\frac{1}{\tau}\omega_n^2}{(s^2+2\omega_n\zeta s + \omega_n^2)(s+\frac{1}{\tau})}
\end{equation}
where $\tau$ is the thermal time constant, $\tau \approx t_r/2.2 = 2.3$ ms; $\omega_n$ is the natural resonant frequency of the mirror rotation, $\omega_n \approx 2\pi \times 1.65$ kHz; and $\zeta$ is the damping ratio of the bimorph-mirror plate system, $\zeta \approx 1/(2Q) = 0.006$. Thus, the transfer function $H_1(s)$ of the MEMS mirror is obtained by substituting, and slightly tuning, these parameters in Eq.~\ref{H1s}.
Similar to \cite{wang2020low}, the MEMS mirror is actuated by PWM signals with a voltage level of 0--5 V. The PWM signals are generated by an Arduino Mega microcontroller at 15 kHz with 8-bit resolution. The ringing of a step response falls below 2\% after about 10 ms. The minimal achievable step is $0.035^\circ$, which is much smaller than the linearization error.
We now give expressions for the acceleration and forces generated by MEMS mirror scanning. The small-angle tip-tilt scanning stiffness $k_{\text{r}}$ is
\begin{equation}
k_{\text{r}} = I(2\pi f_{r})^2
\end{equation}
\noindent where $f_{r}$ is the resonant frequency of the tip-tilt modes ($f_2, f_3$) and $I$ is the moment of inertia of the mirror plate along its tip-tilt axis,
\begin{equation}
I = \frac{1}{12} m_{\text{plate}}(t^2+d^2)
\end{equation}
\noindent where $t$ is the thickness of the mirror plate and $d$ is the length of the mirror plate. The rotation stiffness is $k_{\text{r}} = 2.16\times10^{-6}$ N$\cdot$m/rad. With an external angular acceleration $\ddot{\theta}$ along the mirror rotation axis, the excited mirror rotation $\theta$ is
\begin{equation}
\theta = -\frac{I\ddot{\theta}}{k_r} = -1\times10^{-8} \cdot \ddot{\theta}.
\end{equation}
Taking the mirror scanning step of $0.25^\circ$ as the maximum tolerable excited mirror plate rotation, the tolerable external angular acceleration is $\ddot{\theta} = 44000~\text{rad}/\text{s}^2$. The maximum angular acceleration of a commercial robot is usually less than 1000 $\text{rad}/\text{s}^2$, for which the excited MEMS mirror rotation is less than $6\times10^{-4}\,^\circ$ and can be ignored. Since this MEMS mirror has four identical actuators and the differences along the two axes are small, the excited MEMS mirror rotation under robot vibration can be ignored.
We now consider robot crash scenarios. The MEMS mirror can survive most extreme vibrations or mechanical shocks without failure. The stiffness of the MEMS mirror under shock, $k_{\text{p}}$, is:
\begin{equation}
k_{\mbox{p}} = m_{\text{plate}} (2 \pi f_1)^2
\end{equation}
\noindent where $m_{\text{plate}}$ is the mass of the mirror plate. Thus, the stiffness of the MEMS mirror in piston motion is $k_{\text{p}} = 3.2$ N/m. The maximum allowable piston displacement of the mirror plate without failure is $d_{\text{max}} = 200~\upmu$m. The maximum tolerable acceleration in the direction perpendicular to the mirror plate, $a_{max}$, is
\begin{equation}
a_{max} = \frac{k_{\text{p}} d_{\text{max}}}{m_{\text{plate}}} = 5500 \text{m}/\text{s}^2.
\end{equation}
For most commercial robots, the maximum mechanical shock is under 1000 $\text{m}/\text{s}^2$. \emph{So the MEMS mirror can survive most of the mechanical shock and vibration of the robot.} However, external vibration around the resonant frequencies will excite large MEMS mirror vibrations or even damage the mirror; to avoid resonance effects, the MEMS mirror should not be actuated around the resonant frequencies ($f_1, f_2, f_3$).
\subsection*{Stitching experiments with translation}
Here, we performed point-cloud stitching as the sensor moves along an object. The target scanning area lies along the horizontal paper-cut figure shown in the highlighted area of Figure~\ref{fig:stitching}(a). The blue line is an example of the true motion, with disturbance only in the vertical direction. As the LiDAR rotates horizontally from left to right, it is expected to collect a point cloud covering only the highlighted area of the object. The results of compensated and uncompensated scanning are shown in Figures~\ref{fig:stitching}(b) and (c) on the right side, with their measured motions on the left side. Please find other supplementary materials in the accompanying video.
\end{document}
\section{Introduction}
Modern autonomy is largely driven by vision and depth sensors for perception. Most such techniques make an implicit assumption that the relative pose of the sensor w.r.t. the robot is fixed and changes in sensor viewpoint require a change in the robot pose. This implies that fast-moving robots must deal with motion compensation (i.e. camera-robot \emph{stabilization}) and that robots need to reorient themselves to observe the relevant parts of the scene. Correspondingly, stabilization~\cite{neuhaus2018mc2slam,gojcic2019perfect,taketomi2017visual,phelps2019blind} and active vision~\cite{costante2016perception,bircher2016receding,zhang2018perception,sadat2014feature} are well-studied problems.
Let us consider the specific example of image stabilization. While successful, most such methods compensate through \emph{post-capture} processing of sensor data. \emph{We contend that this is simply not feasible for the next generation of fast miniature robots} such as robotic bees~\cite{robobees-sciam13}, crawling and walking robots \cite{hoffman2013design}, and other micro-air vehicles~\cite{mulgaonkar2015design}. For example, flapping-wing robots such as the RoboBee exhibit a high frequency rocking motion (at about 120 Hz in one design) due to the piezo-electric actuation \cite{helbling2014pitch}. Environmental factors such as wind affects micro-robots to a greater extent than a larger robot. There might be aerodynamic instability due ornithopter-based shock absorption \cite{xue2019computational}. The egomotion of small robots (and onboard sensors) is quite extreme making any sensing challenging. While there have been software methods to correct for such effects for cameras \cite{chen2009real} and LiDARs \cite{peng2007model}, this is often difficult to perform in real-time onboard due to the computational, energy and latency constraints on the robot mentioned above. Without proper motion compensation for miniature devices, we will not be able to unlock the full potential of what is one of the ten grand challenges in robotics~\cite{yang2018grand}.
\subsection{Key Idea: Compensation \emph{during} Imaging}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{figures/figure_2.png}
\includegraphics[width=0.2\textwidth]{dim.PNG}
\includegraphics[width=0.2\textwidth]{int.PNG}
\caption{Our design is given above with the prototype motion-compensated LiDAR (up), and we also prepared a design for future work to integrate this onto smaller platforms.
}
\label{fig:futuristic}
\vspace{-10pt}
\end{figure}
Our idea is for motion correction to happen in sensor hardware during imaging such that measurements are already compensated without requiring onboard computing. This paper shows the motion compensation advantage of decoupling robot-camera geometry, and providing the ability to control the camera properties independent of the robot pose could bring about a new perspective to robot perception and simplify the autonomy pipeline. We demonstrate this through the design of a MEMS-driven LiDAR and perform compensation in two ways - (i) onboard IMU, and (ii) external feedback of robot pose at a high rate.
\begin{figure}
\centering
\begin{subfigure}[b]{.34\linewidth}
\centering
\includegraphics[width=\linewidth]{hawk1.pdf}
\caption{start of video}
\end{subfigure}
\begin{subfigure}[b]{.31\linewidth}
\centering
\includegraphics[width=\linewidth]{hawk2.pdf}
\caption{midway}
\end{subfigure}
\begin{subfigure}[b]{.27\linewidth}
\centering
\includegraphics[width=\linewidth]{hawk3.pdf}
\caption{end}
\end{subfigure}
\begin{subfigure}[b]{.95\linewidth}
\centering
\includegraphics[width=\linewidth]{hawk-main.pdf}
\caption{Average of all images in video}
\end{subfigure}
\caption{Biological motion compensation. The position and the angle of the head of the hawk remain stable despite body motion to provide the hawk an stabilized vision.
\href{https://www.youtube.com/watch?v=aqgewVCC0k0}{https://www.youtube.com/watch?v=aqgewVCC0k0}}
\label{fig:hawk}
\vspace{-15pt}
\end{figure}
We are inspired by animal eyes that have fast mechanical movements that compensate for motion, in real-time and at high accuracy \cite{alldieck2017optical}. In Figure \ref{fig:hawk}, we show frames $V(t)$ from a video of a hawk (\emph{Buteo jamaicensis}) being moved by a human trainer \cite{youtube_2020}. We also show the average of the video $\sum_t \frac{V(t)}{T}$ over a time interval $T$. Note that the averaged image shows motion blurring, except where the eagle mechanically compensates for the shifts. We envision biologically-inspired motion compensation that happens during sensing. These sensors need to adaptively change their orientation, in real-time, and in concert with robot goals such as mapping or navigation. Effectively, the rotation matrix $R$ must cancel out robot motion to provide a "stable" view of a scene.
\subsection{MEMS Mirror-enabled Adaptive LIDAR}
The ability to reorient sensor pose could have many uses in robotics, particularly in image alignment during motion such as in SLAM. If the camera and robot are rigidly attached, then the camera experiences all the motion the robot experiences, including jitter and other potential disturbances that are detrimental to the Visual SLAM task. This could result in spurious features, errors in localization, and incorrect feature association leading to an inaccurate map. In this paper, we describe a sensor design that can perform image reorientation of a LiDAR in hardware without the need for any software processing for such compensation.
Previously, pan-tilt-zoom (PTZ) cameras have attempted to address this problem. However, they use mechanical actuation which can react in ones of Hz making it not suitable for tasks such as egomotion compensation in real-time. This is evidenced by the limited use of PTZ cameras on robots - most robots just have sensors rigidly attached.
Our designs break through these past difficulties by exploiting recently available microelectromechanical (MEMS) and opto-mechanical components for changing camera parameters. Opto-MEMS components are famously fast (many kHz), and they allow the changing of the LiDAR projection offset orientation during robot motion, such that the view of LiDAR is effectively static. By changing LiDAR views two orders of magnitude (or more) faster than robot motion, we can effectively allow for camera views to be independent of the robot view. In this work, we can compensate the LiDAR point cloud using an onboard IMU or external feedback such as motion tracking setup. More generally, such compensation allows the robot to focus on the control task while the camera can perform perception (which is required for the control task) independently, and greatly simplifies robot planning as the planner does not need to account for perception and just needs to reason about the control task at hand.
MEMS LiDAR optics have the advantages of small size and low power consumption~\cite{tasneem2020adaptive, kasturi2016uav, kimoto2014development}.
Our algorithmic and system design contributions beyond this are:
\begin{itemize}
\item We present the design of a novel LiDAR sensor adopting a MEMS mirror similar to this LiDAR MEMS scanner \cite{wang2020low}. This design enables wide non-resonant scanning angles for arbitrary orientations. We integrate this with two types of feedback (IMU and external sensors) to demonstrate quick and high-rate motion compensation. Figure~\ref{fig:futuristic} shows the design of our sensor.
\item We describe and geometrically characterize our sensor, showing that compensation in hardware can reduce the number of unknowns for proprioceptive and exteroceptive tasks. In a simulation, we characterize the effect of compensation delay and compensation rate to identify benefits for robot perception. The quantitative and qualitative results of these simulations are shown in Sect. \ref{sec:benefits}.
\item We show UAV flight with a proof-of-concept hardware prototype combining external feedback with the MEMS mirror for egomotion compensation. We enable UAV flight by tethering the MEMS modulator to the other heavy necessary components, like the laser, photodetector, optics, the driver circuit, and the signal processing circuitry. The frequencies of the mirror modulation and IMU measurement are much higher than typical robot egomotion. Our prototype MEMS compensated scan system can perform such compensation in under 10 ms. Please see the accompanying video for proper visualization, and see Fig. \ref{fig:fvsc}.
\item We provide an implementation of the sensor in the Gazebo simulator. Using this simulated sensor, we propose a framework to adapt modern LiDAR SLAM pipeline to incorporate motion compensation. We adapt a modern LiDAR SLAM pipeline LIO-SAM~\cite{shan2020lio} to incorporate motion compensation to use such a sensor and demonstrate the utility of such motion compensation. We will open-source the sensor implementation, the UAV simulation environment, as well as our LIO-SAM adaptations \footnote{\url{https://github.com/yuyangch/Motion_Compensated_LIO-SAM} } on publication.
\end{itemize}
\section{Introduction}
Modern autonomy is largely driven by vision and depth sensors for perception. Most such techniques make an implicit assumption that the relative pose of the sensor w.r.t. the robot is fixed, and changes in sensor viewpoint require a change in the robot pose. This means that fast moving robots must deal with motion compensation (i.e. camera-robot \emph{stabilization}). Correspondingly, this is a widely studied problem in robotics and vision ~\cite{neuhaus2018mc2slam,gojcic2019perfect,taketomi2017visual,phelps2019blind}.
While successful, most such methods compensate through \emph{post-capture} processing of sensor data. We contend that this is simply not feasible for the next generation of fast miniature robots such as robotic bees \cite{robobees-sciam13}, crawling and walking robots \cite{hoffman2013design}, and small UAVs (unmanned aerial vehicle) \cite{mulgaonkar2015design}. Without proper motion compensation for these devices, we will not be able to unlock the full potential of what is one of the ten grand challenges in science robotics~\cite{yang2018grand}.
This paper shows the motion compensation advantage of decoupling robot-camera geometry, and providing the ability to control the camera properties independent of the robot pose could bring about a new perspective to robot perception and simplify the autonomy pipeline.
\section{Key Idea: Motion Compensation \emph{in} Hardware}
\kar{the first two paragraphs here are somewhat redundant to the intro}
The egomotion of small robots (and onboard sensors) is quite extreme making any sensing challenging. For example, flapping-wing robots such as the RoboBee exhibit a high frequency rocking motion (at about 120 Hz) due to the piezo-electric actuation \cite{helbling2014pitch}. Environmental factors such as wind affect micro-robots to a greater extent than a larger robot. There might be aerodynamic instability due ornithopter-based shock absorption \cite{xue2019computational}.
While there have been software methods to correct for such effects for cameras \cite{chen2009real} and LiDARs \cite{peng2007model}, this is often difficult to perform in real-time onboard due to the computational, energy and latency constraints on the robot mentioned above. It would be ideal for sensors to perform such correction in hardware without requiring onboard computing help. In this work, we propose the design of an IMU-compensated LiDAR to address this challenge.
Most systems today leave the difficult problem of motion estimation and compensation to post-capture algorithms such as optical flow-based registration. In contrast, animal eyes have fast mechanical movements that compensate for motion, in real-time and at high accuracy \cite{alldieck2017optical}. In Figure \ref{fig:hawk}, we show frames $V(t)$ from a video of a hawk (\emph{Buteo jamaicensis}) being moved by a human trainer \cite{youtube_2020}. We also show the average of the video $\sum_t \frac{V(t)}{T}$ over a time interval T. Note that the averaged image shows motion blurring, except where the eagle mechanically compensates for the shifts.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{hawk.jpg}
\caption{Biological motion compensation.
\href{https://www.youtube.com/watch?v=aqgewVCC0k0}{https://www.youtube.com/watch?v=aqgewVCC0k0}}
\label{fig:hawk}
\end{figure}
We envision biologically-inspired motion compensation that happens during sensing. These sensors need to adaptively change their orientation, in real-time, and in concert with robot goals such as mapping or navigation. Effectively, the rotation matrix R must cancel out robot motion.
For the first time, we use a custom designed microelectromechanical (MEMS) mirror with a real-time inertial measurement unit inside a simple yet effective system-level sensor design for a light ranging (LIDAR) device. The LIDAR is designed such that its heavy and power-hungry components are separated, so that the lightweight components can fit on micro-air vehicle flight payloads. We then demonstrate our motion compensated LIDAR with an autonomous UAV, showing for the first time ever, biologically inspired motion compensation on a robot in real-time.
\section{Theoretical Strategy: Decoupling Robot-Camera Geometry}
Our key idea is to build a sensor design that can decouple robot-camera geometry, and provide the ability to control the camera properties independent of the robot pose. Such sensors could bring about a new perspective to robot perception and simplify the autonomy pipeline. We contend that this decoupling will propel a paradigm shift for robotic perception, particularly in underactuated robots.
Such a breakthrough has been the goal of previous efforts, collectively termed as active vision \cite{paul2021camtuner} --- however, active vision in robotics has not reached its full potential, partly because of its dependence on PTZ (point-tilt-zoom) cameras and similar technologies that use mechanical actuation. In fact, most robots do not have PTZ-like technology and cameras are usually simply rigidly attached to robots. Therefore, robotics today is constrained within the degenerate case of active vision, where robot pose determines camera pose.
In this paper, we show new sensor designs for decoupling this robot-dominated relative robot-camera pose and demonstrate that, even over small time periods, these designs can enable novel applications for robot perception.
\subsection{Motivation for Decoupling}
To see why decoupling is useful, we start with the well-understood form of camera-robot geometry. If the robot is at the origin, then the camera pose is given by the rotation and translation matrices $[R_c t_c]$, and the image pixels are related to this pose change by sensor parameters $K$, containing properties such as focal length and pixel pitch.
Let us consider a perception task such as Visual SLAM. As the robot is traversing the environment, its motion causes a change in the camera pose allowing the camera to see new parts of the scene. These are then processed into a map of the environment and localize the robot in that map. If there is no decoupling, then the camera experiences all the motion the robot experiences, including jitter and other potential disturbances that are detrimental to the Visual SLAM task. Further, if the system also needs to track targets, the robot pose changes for navigation may not coincide with the optical camera pose for tracking. The ability to decouple the poses and control the camera pose allows us to alleviate such motion allowing the camera to better perform the robot perception task.
\subsection{Decoupling over Brief Time Intervals}
This work demonstrates camera designs that break through these past difficulties by exploiting recently available microelectromechanical (MEMS) and opto-mechanical components for changing camera parameters. Opto-MEMS components are famously fast (many kHz), and they allow the camera projection matrix $P = K[R_c\ t_c]$ to be changed during robot motion, such that the robot is effectively static. By changing camera views two orders of magnitude (or more) faster than robot motion, we effectively decouple the two, enabling camera views independent of the robot pose. This decoupling lets the robot focus on the control task while the camera independently performs the perception required for that task, and it greatly simplifies robot planning: the planner no longer needs to account for perception and can reason about the control task alone.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{scan.png}
\includegraphics[width=0.2\textwidth]{dim.PNG}
\includegraphics[width=0.2\textwidth]{int.PNG}
\caption{Our prototype motion-compensated LiDAR design (top), together with a design we prepared for future work to integrate the sensor onto smaller platforms (bottom).}
\label{fig:futuristic}
\vspace{-10pt}
\end{figure}
\subsection{Our Contribution: MEMS-based Decoupling for LIDAR}
Microelectromechanical (MEMS) technology provides a viable alternative for laser scanning that can minimize the weight and size of a LiDAR \cite{tasneem2020adaptive, kasturi2016uav, kimoto2014development}. However, even with the MEMS mirror's small size and light weight, the full LiDAR remains too large and heavy because of the other necessary components: the laser, photodetector, optics, driver circuit, and signal processing circuitry. Furthermore, to enable real-time compensated scanning, the MEMS mirror direction must be controllable to arbitrary orientations in real-time.
We solve the twin challenges of form-factor and egomotion with the following algorithmic and design contributions:
\begin{itemize}
\item We present a simple, effective design, Figure~\ref{fig:futuristic}, where the LiDAR power and laser are delivered by tethers. Only the tether tip, the MEMS mirror, and photodetector will move with the micro-robot.
\item We adopt an electrothermal MEMS mirror similar to the LiDAR scanner of \cite{wang2020low}. This design enables a wide non-resonant scanning angle and pointing at arbitrary orientations.
\item We show a proof-of-concept prototype combining an inertial measurement unit (IMU) with the MEMS mirror for egomotion compensation. The frequencies of the mirror modulation and IMU measurement are much higher than typical robot egomotion challenges. Our prototype IMU-based MEMS compensated scan system can perform such compensation in under 10 ms.
\item We describe and geometrically characterize our sensor, showing that compensation in hardware can reduce the number of unknowns for proprioceptive and exteroceptive tasks such as Simultaneous Localization and Mapping (SLAM). In simulation, we characterize the effect of compensation delay and compensation rate to identify benefits for sensing on micro-robots.
\end{itemize}
Our envisioned design for the use of such a LiDAR as well as an image of our prototype are shown in Figure \ref{fig:futuristic}.
|
{
"arxiv_id": "2302.14317",
"language": "en",
"timestamp": "2023-03-01T02:09:21",
"url": "https://arxiv.org/abs/2302.14317",
"yymm": "2302"
} | \section{Introduction}
The two-dimensional compressible isentropic Euler equations read
\begin{equation}
\left\{\begin{aligned}
&\partial_\mathrm{t} \mathrm{\rho}+\nabla_\mathrm{x}\cdot(\rho u)=0\\
&\partial_\mathrm{t}(\rho u)+\mathrm{div}_\mathrm{x}(\rho u\otimes u)+\nabla_\mathrm{x}p=0\\
&p=\frac{1}{\gamma}\rho^\gamma,
\end{aligned}\right.
\label{original Euler equation}
\end{equation}
where $\mathrm{x}=(\mathrm{x}_1,\mathrm{x}_2)\in\mathbb{R}^2$ and $\mathrm{t}\in\mathbb{R}$ are space and time coordinates respectively, the unknown scalar $\rho$ is the fluid density, $u=(u_1,u_2)$ is the velocity field of fluid, $p=\frac{1}{\gamma}\rho^\gamma$ is the pressure with adiabatic index $\gamma>1$. This system describes the evolution of a two-dimensional compressible ideal gas without viscosity.
We define the vorticity $\omega=\partial_{\mathrm{x}_1}u_2-\partial_{\mathrm{x}_2}u_1$ and the specific vorticity $\zeta=\omega/\rho$ at those points where $\rho>0$. One can deduce from (\ref{original Euler equation}) that $\zeta$ is purely transported by the velocity field:
\begin{equation}
\partial_\mathrm{t}\zeta+u\cdot\nabla_{\mathrm{x}}\zeta=0.
\end{equation}
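For the reader's convenience, here is the short standard computation behind this transport equation. For smooth solutions with $\rho>0$, the momentum equation can be rewritten as $\partial_\mathrm{t}u+(u\cdot\nabla_\mathrm{x})u+\nabla_\mathrm{x}p/\rho=0$, and $\nabla_\mathrm{x}p/\rho=\rho^{\gamma-2}\nabla_\mathrm{x}\rho=\nabla_\mathrm{x}\left(\frac{\rho^{\gamma-1}}{\gamma-1}\right)$ is a perfect gradient. Taking the two-dimensional curl therefore eliminates the pressure and gives
$$\partial_\mathrm{t}\omega+u\cdot\nabla_\mathrm{x}\omega+\omega\,\nabla_\mathrm{x}\cdot u=0.$$
Combining this with the continuity equation $\partial_\mathrm{t}\rho+u\cdot\nabla_\mathrm{x}\rho+\rho\,\nabla_\mathrm{x}\cdot u=0$, the divergence terms cancel in the quotient $\zeta=\omega/\rho$:
$$\partial_\mathrm{t}\zeta+u\cdot\nabla_\mathrm{x}\zeta=\frac{1}{\rho}\left(\partial_\mathrm{t}\omega+u\cdot\nabla_\mathrm{x}\omega\right)-\frac{\omega}{\rho^{2}}\left(\partial_\mathrm{t}\rho+u\cdot\nabla_\mathrm{x}\rho\right)=-\zeta\,\nabla_\mathrm{x}\cdot u+\zeta\,\nabla_\mathrm{x}\cdot u=0.$$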
Our main result can be stated roughly as follows:
\begin{theorem}[Rough statement of the main theorem]
\textit{There exists a set of initial data $(u_0,\rho_0)$ with $|\nabla(u_0,\rho_0)|=\mathcal{O}(1/\varepsilon)$, such that their corresponding solutions to (\ref{original Euler equation}) develop a shock-type singularity within time $\mathcal{O}(\varepsilon)$. }
\end{theorem}
It is well known that, for flows governed by the compressible Euler equations, shocks can develop from smooth initial data. In the one-dimensional case, this fact can be shown by studying the dynamics of the Riemann variables, which were first introduced in Riemann's foundational work \cite{riemann1858uber}. See the discussions in John \cite{john1974formation}, Liu \cite{liu1979development}, and Majda \cite{majda2012compressible}.
In multi-dimensional cases, Sideris \cite{sideris1985formation} first proved a blow-up result; however, the question of shock formation remained open. In 2007, Christodoulou \cite{christodoulou2007formation} studied relativistic fluids and found an open set of irrotational initial data that eventually develop a shock-type singularity; this is considered the first proof of shock formation for the compressible Euler equations in the multi-dimensional setting. Later, in Christodoulou-Miao \cite{christodoulou2014compressible}, the authors established shock formation for non-relativistic, irrotational flow. In the irrotational case, one can rewrite the isentropic Euler equations as a scalar second-order quasilinear wave equation. Alinhac \cite{alinhac1999blowup2,alinhac1999blowup} proved the first blow-up results for 2D quasilinear wave equations that do not satisfy Klainerman's null condition. Using geometric methods, shock formation for 3D quasilinear wave equations was studied in Speck-Holzegel-Luk-Wong \cite{speck2016stable}, Speck \cite{speck2016shock}, and Miao-Yu \cite{miao2017formation}. The first shock formation result admitting non-zero vorticity for the compressible Euler equations was given by Luk-Speck \cite{luk2018shock}, who used the geometric framework and developed new methods to study the vorticity transport. Later, in \cite{luk2021stability}, they proved shock formation for the full compressible Euler equations in 3D with non-trivial vorticity and variable entropy. In An-Chen-Yin \cite{an2020low,an2021low,an2022cauchy,an2022h,an2022low}, the authors proved low regularity ill-posedness for elastic waves and MHD equations and showed that the ill-posedness is driven by shock formation. As for the shock development problem for the compressible Euler equations, one may refer to the discussions in Christodoulou-Lisibach \cite{christodoulou2016shock}, Christodoulou \cite{christodoulou2019shock}, Abbrescia-Speck \cite{abbrescia2022emergence}, and Buckmaster-Drivas-Shkoller-Vicol \cite{buckmaster2022simultaneous}.
In \cite{buckmaster2019-2d-formation}, Buckmaster, Shkoller, and Vicol utilized the modulation method to construct shock solutions to the 2D Euler equations with azimuthal symmetry. Later, in \cite{buckmaster2022formation}, they extended this method to the 3D case with no symmetry assumptions. After a dynamical rescaling, the solutions they constructed are close to a profile $\ovl W$ that solves the self-similar Burgers equation. Via a singular coordinate transformation controlled by several modulation variables, proving shock formation becomes equivalent to showing global existence in the self-similar coordinates. This approach, known as the modulation method or dynamical rescaling, was successfully applied in \cite{merle1996asymptotics,merle2005blow,merle2020strongly} to the blow-up of Schr\"odinger equations and in \cite{merle1997stability} to the nonlinear heat equation. The proof in \cite{buckmaster2019-2d-formation} is $L^\infty$ based, since there is no derivative in the forcing term, whereas in \cite{buckmaster2022formation} an additional $L^2$ based energy estimate was used to overcome the derivative loss in the $L^\infty$-type argument. They also analyzed the non-isentropic case in \cite{buckmaster2020shock}.
Following the work in \cite{buckmaster2022formation}, we utilize the self-similar Burgers ansatz to construct shock solutions. To keep track of the curvature of shock formation while maintaining the solution's stationarity in the far field, we make a minor modification to the construction in \cite{buckmaster2022formation}. Different from the construction in \cite{buckmaster2019-2d-formation}, we consider shock solutions without any symmetry.
The shock we attempt to construct is of self-similar type. We introduce a self-similar coordinate transformation $(\mathrm{t},\mathrm{x})\mapsto (s,y)$, where $(\mathrm{t},\mathrm{x})$ is the original Cartesian coordinate and $(s,y)$ is the self-similar coordinate. The new coordinate is aligned to the shock formation and will become singular when $t$ approaches the blow-up time $T_*$. Roughly speaking, we have that $$y_1\approx(T_*-t)^{-3/2}\mathrm{x}_1,\ \ y_2\approx(T_*-t)^{-1/2}\mathrm{x}_2.$$
Thus $y$ is a zoomed-in version of $\mathrm{x}$. In self-similar coordinates, the Riemann invariant $W$ (to be defined in the next subsection) converges to a profile $\ovl W$, uniformly on any compact set of $y$. Moreover, $\ovl W$ solves the self-similar Burgers equation:
$$-\frac{1}{2}\overline{W}+\left(\frac{3}{2}y_1+\overline{W}\right)\partial_{y_1}\overline{W}+\frac{1}{2}y_2\partial_{y_2}\overline{W}=0.$$
In this sense, the constructed blow-up solution of the Euler equations is close to a fixed shape on a smaller and smaller scale.
To better understand what happens, we examine the simplest model, the 1D inviscid Burgers equation, whose well-localized solutions are known to become singular in finite time. It is pointed out explicitly in \cite{eggers2008role,collot2018singularity} that, as one approaches the blow-up point, the blow-up solution is well modeled by a dynamically rescaled version of a fixed profile belonging to a countable family $\mathcal{F}$ of solutions to the self-similar Burgers equation. The choice of profile depends only on the derivatives of the initial data at the point achieving the minimum negative slope. Thus the family $\mathcal{F}$ plays an important role in the blow-up phenomenon of the Burgers equation. For a detailed discussion, see \cite{collot2018singularity} or the toy model in appendix \ref{universality}.
After the asymptotic blow-up behavior of the inviscid Burgers equation was systematically clarified in \cite{collot2018singularity}, self-similar Burgers profiles have been used to explore blow-up phenomena in various systems; see \cite{buckmaster2022formation,buckmaster2022-2d-unstableformation,yang2021shock,qiu2021shock,pasqualotto2022gradient}. The modulation method, developed in the context of nonlinear dispersive equations, is a suitable way to exploit the self-similar Burgers profiles.
\section{Preliminaries}
In this section we introduce a series of coordinate transformations and the Riemann variables.
\subsection{Coordinates adapted to the shock}
We introduce the sound speed $\sigma=\frac{1}{\alpha}\rho^\alpha$, where $\alpha=\frac{\gamma-1}{2}>0$, then the system of $(u,\rho)$ is transformed into a system of $(u,\sigma)$, which reads
\begin{equation}
\left\{\begin{aligned}
&\partial_\mathrm{t} \sigma+u\cdot\nabla_\mathrm{x}\sigma+\alpha\sigma\nabla_\mathrm{x}\cdot u=0\\
&\partial_\mathrm{t}u+(u\cdot\nabla)u+\alpha\sigma\nabla_\mathrm{x}\sigma=0,\\
\end{aligned}\right.
\end{equation}
By defining $t=\frac{1+\alpha}{2}\mathrm{t}$, the equations become
\begin{equation}
\left\{\begin{aligned}
&\frac{1+\alpha}{2}\partial_t \sigma+u\cdot\nabla_\mathrm{x}\sigma+\alpha\sigma\nabla_\mathrm{x}\cdot u=0\\
&\frac{1+\alpha}{2}\partial_tu+(u\cdot\nabla)u+\alpha\sigma\nabla_\mathrm{x}\sigma=0,\\
\end{aligned}\right.
\end{equation}
The vorticity is defined as
\begin{equation}
\omega=\partial_{\mathrm{x}_1}u_2-\partial_{\mathrm{x}_2}u_1.
\end{equation}
We also introduce the specific vorticity $\zeta:=\omega/\rho$, which satisfies
\begin{equation}
\frac{1+\alpha}{2}\partial_t\zeta+u\cdot\nabla_{\mathrm{x}}\zeta=0.
\end{equation}
To keep track of the shock formation, we introduce six time-dependent modulation variables $\xi=(\xi_1,\xi_2)\in\mathbb{R}^2$, $n=(n_1,n_2)\in\mathbb{S}^1$, $\tau\in\mathbb{R}$, $\phi\in\mathbb{R}$, and $\kappa\in\mathbb{R}$. Here $\xi$ records the location of the shock; $n$ records the direction of the shock; $\tau$ records the slope of the Riemann invariant $w$; $\phi$ measures the ``curvature'' of the shock; $\kappa$ records the value of the Riemann invariant at $\mathrm{x}=\xi(t)$.
Using modulation variables $\xi,n,\tau,\phi$, we define the coordinate adapted to the shock formation.
\subsubsection{Tracing the location and direction of the shock}
With the time-dependent vector $\xi(t)=(\xi_1(t),\xi_2(t))$, and the normal vector $n(t)=(n_1(t),n_2(t))$, we define a coordinate transformation $\tilde x=R(t)^T(\mathrm{x}-\xi(t))$, where
\begin{equation}
R(t)=\begin{bmatrix}
n_1 & -n_2\\
n_2 & n_1
\end{bmatrix}\in SO(2)
\end{equation}
The origin of the $\tilde{x}$ coordinates coincides with $\xi(t)$, which dynamically tracks the spatial location of the shock formation, and $\tilde{e}_1$ is aligned with $n(t)$, the direction of the shock.
The functions should also be rewritten in the new coordinate:
\begin{equation}
\left\{\begin{aligned}
&\tilde{u}(\tilde{x},t)=R(t)^Tu(\mathrm{x},t)\\
&\tilde \rho(\tilde{x},t)=\rho(\mathrm{x},t)\\
&\tilde \sigma(\tilde{x},t)=\sigma(\mathrm{x},t)\\
&\tilde{\zeta}(\tilde{x},t)=\zeta(\mathrm{x},t).
\end{aligned}\right.
\end{equation}
Then $(\tilde{u},\tilde{\sigma})$ satisfies
\begin{equation}
\left\{\begin{aligned}
&\frac{1+\alpha}{2}\partial_t \tilde{\sigma}+(\tilde{u}+\tilde{v})\cdot\nabla_{\tilde{x}}\tilde{\sigma}+\alpha\tilde{\sigma}\nabla_{\tilde{x}}\cdot\tilde{u}=0\\
&\frac{1+\alpha}{2}\partial_t\tilde{u}-\frac{1+\alpha}{2}Q\tilde{u}+\left[(\tilde{u}+\tilde{v})\cdot\nabla_{\tilde{x}}\right]\tilde{u}+\alpha\tilde{\sigma}\nabla_{\tilde{x}}\tilde{\sigma}=0,
\end{aligned}\right.
\end{equation}
where $Q(t)=\frac{dR(t)^T}{dt}R(t)=\dot R(t)^TR(t)$, and $\tilde{v}(\tilde{x},t)=\frac{1+\alpha}{2}(Q\tilde{x}-R^T\dot\xi)$.
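Writing $n(t)=(\cos\theta(t),\sin\theta(t))$ for some angle $\theta(t)$, a direct computation gives
$$Q=\dot R^TR=\dot\theta\begin{pmatrix}0&1\\-1&0\end{pmatrix},$$
so $Q$ is antisymmetric and its size is controlled by $\dot n$; the term $Q\tilde u$ is thus a mere rotation of lower order.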
The equation of specific vorticity is transformed into
\begin{equation}
\frac{1+\alpha}{2}\partial_t \tilde{\zeta}+(\tilde{u}+\tilde{v})\cdot\nabla_{\tilde{x}}\tilde{\zeta}=0.
\label{evolution of specific vorticity in tilde x coordinate}
\end{equation}
\subsubsection{Tracking the curvature of shock front}
In order to track the curvature of the shock, we introduce a time-dependent scalar function $\tilde f(\tilde{x}_1,\tilde{x}_2,t)$.
We denote by $\phi(t)\in\mathbb{R}$ the ``curvature'' of the ``wavefront'' of the shock formation at the origin, and we assume that $\tilde f$ satisfies $\partial_{\tilde{x}_2}^2\tilde f(0,0,t)=\phi(t)$.
In particular, we construct $\tilde f$ as follows. Let $\theta\in C_c^\infty(-\frac{5}{4},\frac{5}{4})$ be a bump function such that $\theta(\tilde x_2)\equiv1$ when $|\tilde x_2|\le1$, then we define
\begin{equation}
\tilde f(\tilde{x}_1,\tilde{x}_2,t)=\theta(\varepsilon^{-\frac{1}{2}}\tilde{x}_1)\int_0^{\tilde x_2}\phi(t) \tilde x_2'\theta(\varepsilon^{-\frac{1}{6}}\tilde x_2')d\tilde x_2',
\label{construction of f}
\end{equation}
where $\varepsilon$ is a small constant to be specified. Note that $\tilde{f}(\tilde{x}_1,\tilde{x}_2,t)=\frac{1}{2}\phi\tilde{x}_2^2$ when $|\tilde{x}|$ is small. This guarantees that, in the forcing terms of $W,Z,A$ (to be defined in (\ref{definition of W,Z,A})), the terms related to the coordinate transformation vanish when $y$ is far from the origin, while the computation near the origin is unaffected.
Now we introduce the coordinate transformation adapted to the shock front:
\begin{equation}
\left\{
\begin{aligned}
&x_1=\tilde{x}_1-\tilde f(\tilde x_1,\tilde{x}_2,t)\\
&x_2=\tilde{x}_2.
\end{aligned}\right.
\end{equation}
Let $f(x_1,x_2,t):=\tilde f(\tilde x_1,\tilde{x}_2,t)$, then we have
\begin{equation}
\left\{
\begin{aligned}
&\tilde x_1=x_1+f(x_1,x_2,t)\\
&\tilde x_2=x_2.
\end{aligned}\right.
\end{equation}
We define
\begin{equation}
J(\tilde x_1,\tilde x_2,t)=|\nabla_{\tilde x}x_1|=\sqrt{(1-\tilde f_{\tilde x_1})^2+\tilde f_{\tilde x_2}^2}=\frac{\sqrt{1+f_{x_2}^2}}{1+f_{x_1}},
\end{equation}
\begin{equation}
N=J^{-1}\nabla_{\tilde{x}}x_1=\frac{(1-\tilde f_{\tilde x_1},-\tilde f_{\tilde x_2})}{\sqrt{(1-\tilde f_{\tilde x_1})^2+\tilde f_{\tilde x_2}^2}}=\frac{1}{\sqrt{1+f_{x_2}^2}}(1,-f_{x_2}),
\end{equation}
\begin{equation}
T=N^\perp=\frac{(\tilde f_{\tilde x_2},1-\tilde f_{\tilde x_1})}{\sqrt{(1-\tilde f_{\tilde x_1})^2+\tilde f_{\tilde x_2}^2}}=\frac{1}{\sqrt{1+f_{x_2}^2}}(f_{x_2},1).
\end{equation}
Note that $\{N,T\}$ forms an orthonormal basis.
The quantities $J,N,T$ can also be viewed as functions of $(x_1,x_2,t)$, and we overload their names for the sake of convenience. One can verify that
\begin{equation}
\mathrm{supp}_x(N-\tilde e_1,T-\tilde e_2)\subset\left\{|x_1|\le\frac{3}{2}\varepsilon^{\frac{1}{2}},|x_2|\le\frac{3}{2}\varepsilon^{\frac{1}{6}}\right\}.
\label{support of DN and DT}
\end{equation}
Now the functions are redefined as
$$\left\{\begin{aligned}
&\mathring{u}(x,t)=\tilde{u}(\tilde{x},t)\\
&\mathring{\rho}(x,t)=\tilde\rho(\tilde{x},t)\\
&\mathring{\sigma}(x,t)=\tilde \sigma(\tilde{x},t)\\
&\mathring{\zeta}(x,t)=\tilde{\zeta}(\tilde{x},t)\\
&v(x,t)=\tilde{v}(\tilde{x},t),
\end{aligned}\right.$$
and the system can be written as
\begin{equation}
\left\{\begin{aligned}
&\partial_t\mathring{u}-Q\mathring{u}+\left[-\frac{\partial_t f}{1+f_{x_1}}+2\beta_1(\mathring{u}+v)\cdot JN\right]\partial_{x_1}\mathring{u}+2\beta_1(\mathring{u}_2+v_2)\partial_{x_2}\mathring{u}=-2\beta_3JN\mathring{\sigma}\partial_{x_1}\mathring{\sigma}-2\beta_3\mathring{\sigma}\partial_{x_2}\mathring{\sigma}\tilde{e}_2\\
&\partial_t\mathring{\sigma}+\left[-\frac{\partial_t f}{1+f_{x_1}}+2\beta_1(\mathring{u}+v)\cdot JN\right]\partial_{x_1}\mathring{\sigma}+2\beta_1(\mathring{u}_2+v_2)\partial_{x_2}\mathring{\sigma}=-2\beta_3\mathring{\sigma} JN\cdot\partial_{x_1}\mathring{u}-2\beta_3\mathring{\sigma}\partial_{x_2}\mathring{u}_2,
\end{aligned}
\right.
\end{equation}
where
\begin{equation}
\beta_1=\frac{1}{1+\alpha},\ \beta_2=\frac{1-\alpha}{1+\alpha},\ \beta_3=\frac{\alpha}{1+\alpha}.
\end{equation}
We can also deduce the equation governing the evolution of $\mathring{\zeta}$:
\begin{equation}
\partial_t\mathring{\zeta}+\left[-\frac{\partial_t f}{1+f_{x_1}}+2\beta_1(\mathring{u}+v)\cdot JN\right]\partial_{x_1}\mathring{\zeta}+2\beta_1(\mathring{u}_2+v_2)\partial_{x_2}\mathring{\zeta}=0.
\end{equation}
\subsubsection{Riemann variables}
We define the Riemann variables by
\begin{equation}
\left\{\begin{aligned}
&w(x,t)=\mathring{u}(x,t)\cdot N+\mathring{\sigma}(x,t)\\
&z(x,t)=\mathring{u}(x,t)\cdot N-\mathring{\sigma}(x,t)\\
&a(x,t)=\mathring{u}(x,t)\cdot T.
\end{aligned}
\right.
\end{equation}
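To see why $(w,z)$ diagonalize the wave part of the system, it is instructive to freeze the coordinate corrections (set $J=1$, $N=\tilde e_1$, $T=\tilde e_2$, $f=0$, $v=0$) and consider data depending on $x_1$ only. Adding and subtracting the equations for $\mathring u_1$ and $\mathring\sigma$ then gives
$$\partial_tw+(w+\beta_2z)\,\partial_{x_1}w=0,\qquad
\partial_tz+(\beta_2w+z)\,\partial_{x_1}z=0,$$
where the speeds $2\beta_1\mathring u_1\pm2\beta_3\mathring\sigma$ are rewritten via $\mathring u_1=\frac{w+z}{2}$, $\mathring\sigma=\frac{w-z}{2}$ and $\beta_1+\beta_3=1$, $\beta_1-\beta_3=\beta_2$. These are exactly the transport speeds $Jw+\beta_2Jz$ and $\beta_2Jw+Jz$ appearing below.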
Then the system of $(\mathring{u},\mathring{\sigma})$ can be rewritten in terms of $(w,z,a)$ as
\begin{equation}
\begin{aligned}
\partial_tw&+\left(-\frac{\partial_t f}{1+f_{x_1}}+2\beta_1v\cdot JN+Jw+\beta_2Jz\right)\partial_{x_1}w+\left(2\beta_1v_2+N_2w+\beta_2N_2z+2\beta_1aT_2\right)\partial_{x_2}w\\
=&-2\beta_3\mathring{\sigma}\partial_{x_2}aT_2+aT\cdot(\partial_t)_x N+aQ_{ij}T_jN_i+2\beta_1(\mathring{u}\cdot NN_2+aT_2+v_2)aT\cdot\partial_{x_2}N\\
&-2\beta_3\mathring{\sigma}(a\partial_{x_2}T_2+\mathring{u}\cdot N\partial_{x_2}N_2)-\left(-\frac{\partial_t f}{1+f_{x_1}}+2\beta_1v\cdot JN+2\beta_1J\mathring{u}\cdot N\right)a\partial_{x_1}T\cdot N
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
\partial_tz&+\left(-\frac{\partial_t f}{1+f_{x_1}}+2\beta_1v\cdot JN+\beta_2Jw+Jz\right)\partial_{x_1}z+\left(2\beta_1v_2+\beta_2N_2w+N_2z+2\beta_1aT_2\right)\partial_{x_2}z\\
=&2\beta_3\mathring{\sigma}\partial_{x_2}aT_2+aT\cdot(\partial_t)_x N+aQ_{ij}T_jN_i+2\beta_1(\mathring{u}\cdot NN_2+aT_2+v_2)aT\cdot\partial_{x_2}N\\
&+2\beta_3\mathring{\sigma}(a\partial_{x_2}T_2+\mathring{u}\cdot N\partial_{x_2}N_2)-\left(-\frac{\partial_t f}{1+f_{x_1}}+2\beta_1v\cdot JN+2\beta_1J\mathring{u}\cdot N\right)a\partial_{x_1}T\cdot N
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
\partial_ta&+\left(-\frac{\partial_t f}{1+f_{x_1}}+2\beta_1v\cdot JN+\beta_1Jw+\beta_1Jz\right)\partial_{x_1}a+2\beta_1\left(v_2+\frac{w+z}{2}N_2+aT_2\right)\partial_{x_2}a\\
=&-2\beta_3\mathring{\sigma} T_2\partial_{x_2}\mathring{\sigma}+\mathring{u}\cdot T N\cdot(\partial_t)_x T+\mathring{u}\cdot NQ_{ij}N_jT_i\\
&+2\beta_1(\mathring{u}\cdot NN_2+aT_2+v_2)\mathring{u}\cdot N N\cdot\partial_{x_2}T-\left(-\frac{\partial_t f}{1+f_{x_1}}+2\beta_1v\cdot JN+2\beta_1J\mathring{u}\cdot N\right)\mathring{u}\cdot N\partial_{x_1}N\cdot T.
\end{aligned}
\end{equation}
\subsubsection{Self-similar transformation}
We introduce self-similar variables as follows
\begin{equation}
\left\{\begin{aligned}
&s(t)=-\log(\tau(t)-t)\\
&y_1=\frac{x_1}{(\tau-t)^{3/2}}=x_1e^{\frac{3}{2}s}\\
&y_2=\frac{x_2}{(\tau-t)^{1/2}}=x_2e^{\frac{s}{2}},
\end{aligned}\right.
\end{equation}
where $\tau(t)$ is a parameter to be determined.
Now the original time $t$ is transformed into the self-similar time $s$, and the space variable $x$ is transformed into the self-similar space variable $y$. At each fixed time $t$, $y$ is a dilation of $x$. In the $y$ coordinates, we can closely observe the behavior of the solution around the shock location.
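The structure of the resulting equations can be read off from the chain rule: for $\psi(x,t)=\Psi(y,s)$ one has
$$\partial_t\psi=(1-\dot\tau)e^{s}\Big(\partial_s+\tfrac{3}{2}y_1\partial_{y_1}+\tfrac{1}{2}y_2\partial_{y_2}\Big)\Psi,\qquad
\partial_{x_1}\psi=e^{\frac{3s}{2}}\partial_{y_1}\Psi,\qquad
\partial_{x_2}\psi=e^{\frac{s}{2}}\partial_{y_2}\Psi.$$
Dividing the evolution equations by $(1-\dot\tau)e^{s}$ produces the dilation terms $\frac{3}{2}y_1\partial_{y_1}$, $\frac{1}{2}y_2\partial_{y_2}$ and the factors $\beta_\tau=\frac{1}{1-\dot\tau}$, $e^{\pm\frac{s}{2}}$, $e^{\pm\frac{3s}{2}}$ that appear below.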
Now we assume that
\begin{equation}
\left\{\begin{aligned}
&w(x,t)=e^{-\frac{s}{2}}W(y,s)+\kappa(t)\\
&z(x,t)=Z(y,s)\\
&a(x,t)=A(y,s),
\end{aligned}\right.
\label{definition of W,Z,A}
\end{equation}
where $\kappa$ is also a modulation parameter to be determined.
In the self-similar variables, the system becomes
\begin{equation}
\left\{\begin{aligned}
&\left(\partial_s-\frac{1}{2}\right)W+\left(\frac{3}{2}y_1+g_W\right)\partial_1W+\left(\frac{1}{2}y_2+h_W\right)\partial_2W=F_W\\
&\partial_sZ+\left(\frac{3}{2}y_1+g_Z\right)\partial_1Z+\left(\frac{1}{2}y_2+h_Z\right)\partial_2Z=F_Z\\
&\partial_sA+\left(\frac{3}{2}y_1+g_A\right)\partial_1A+\left(\frac{1}{2}y_2+h_A\right)\partial_2A=F_A.
\end{aligned}\right.
\label{evolution of W,Z,A}
\end{equation}
Here and throughout the paper we use the notation $\partial_j=\partial_{y_j}$, and $\beta_\tau:=\frac{1}{1-\dot\tau}$. The transport terms and the forcing terms are given by
\begin{equation}
\left\{\begin{aligned}
&g_W=\beta_\tau JW+\beta_\tau e^{\frac{s}{2}}\left[-\frac{\partial_t f}{1+f_{x_1}}+J\left(\kappa+\beta_2Z+2\beta_1V\cdot N\right)\right]=\beta_\tau JW+G_W\\
&g_Z=\beta_2\beta_\tau JW+\beta_\tau e^{\frac{s}{2}}\left[-\frac{\partial_t f}{1+f_{x_1}}+J\left(\beta_2\kappa+Z+2\beta_1V\cdot N\right)\right]=\beta_2\beta_\tau JW+G_Z\\
&g_A=\beta_1\beta_\tau JW+\beta_\tau e^{\frac{s}{2}}\left[-\frac{\partial_t f}{1+f_{x_1}}+J\left(\beta_1\kappa+\beta_1Z+2\beta_1V\cdot N\right)\right]=\beta_1\beta_\tau JW+G_A,
\end{aligned}\right.
\label{transport terms of W,Z,A}
\end{equation}
\begin{equation}
\left\{\begin{aligned}
&h_W=\beta_\tau e^{-s}N_2W+\beta_\tau e^{-\frac{s}{2}}\left(2\beta_1V_2+N_2\kappa+\beta_2N_2Z+2\beta_1AT_2\right)\\
&h_Z=\beta_2\beta_\tau e^{-s}N_2W+\beta_\tau e^{-\frac{s}{2}}\left(2\beta_1V_2+\beta_2N_2\kappa+N_2Z+2\beta_1AT_2\right)\\
&h_A=\beta_1\beta_\tau e^{-s}N_2W+\beta_\tau e^{-\frac{s}{2}}\left(2\beta_1V_2+\beta_1N_2\kappa+\beta_1N_2Z+2\beta_1AT_2\right),
\end{aligned}\right.
\end{equation}
and
\begin{equation}
\left\{\begin{aligned}
F_W=&-2\beta_3\beta_\tau S\partial_2AT_2+\beta_\tau e^{-\frac{s}{2}}AT\cdot(\partial_t)_xN+\beta_\tau e^{-\frac{s}{2}}Q_{ij}AT_jN_i\\
&+2\beta_1\beta_\tau \left(V_2+U\cdot NN_2+AT_2\right)AT\cdot\partial_2N-2\beta_3\beta_\tau S(U\cdot N\partial_2N_2+A\partial_2T_2)\\
&-\beta_\tau e^{s}\left(-\frac{\partial_tf}{1+f_{x_1}}+2\beta_1V\cdot JN+2\beta_1JU\cdot N\right)A\partial_1T\cdot N-\beta_\tau e^{-\frac{s}{2}}\dot{\kappa}\\
F_Z=&2\beta_3\beta_\tau e^{-\frac{s}{2}}S\partial_2AT_2+\beta_\tau e^{-s}AT\cdot(\partial_t)_xN+\beta_\tau e^{-s}Q_{ij}AT_jN_i\\
&+2\beta_1\beta_\tau e^{-\frac{s}{2}}(V_2+U\cdot NN_2+AT_2)AT\cdot\partial_2N+2\beta_3\beta_\tau e^{-\frac{s}{2}}(A\partial_2T_2+U\cdot N\partial_2N_2)\\
&-\beta_\tau e^{\frac{s}{2}}\left(-\frac{\partial_tf}{1+f_{x_1}}+2\beta_1V\cdot JN+2\beta_1JU\cdot N\right)A\partial_1T\cdot N\\
F_A=&-2\beta_3\beta_\tau e^{-\frac{s}{2}}ST_2\partial_2S+\beta_\tau e^{-s}U\cdot NN\cdot(\partial_t)_xT+\beta_\tau e^{-s}Q_{ij}(U\cdot NN_j+AT_j)T_i\\
&+2\beta_1\beta_\tau e^{-\frac{s}{2}}(V_2+U\cdot NN_2+AT_2)U\cdot NN\cdot\partial_2T\\
&-\beta_\tau e^{\frac{s}{2}}\left(-\frac{\partial_tf}{1+f_{x_1}}+2\beta_1V\cdot JN+2\beta_1JU\cdot N\right)U\cdot N\partial_1N\cdot T,
\end{aligned}\right.
\label{forcing terms of W,Z,A}
\end{equation}
where $U$, $V$, $S$ are the self-similar versions of $\mathring{u}$, $v$, $\mathring{\sigma}$, for example $S(y,s)=\mathring{\sigma}(x,t)$.
If we write the transport terms as
\begin{equation}
\left\{\begin{aligned}
&\mathcal{V}_W=\left(\frac{3}{2}y_1+g_W,\frac{1}{2}y_2+h_W\right)\\
&\mathcal{V}_Z=\left(\frac{3}{2}y_1+g_Z,\frac{1}{2}y_2+h_Z\right)\\
&\mathcal{V}_A=\left(\frac{3}{2}y_1+g_A,\frac{1}{2}y_2+h_A\right),
\end{aligned}\right.
\end{equation}
then the equation of $(W,Z,A)$ can be written in a compact form:
\begin{equation}
\left\{\begin{aligned}
&\partial_sW-\frac{1}{2}W+\mathcal{V}_W\cdot\nabla W=F_W\\
&\partial_sZ+\mathcal{V}_Z\cdot\nabla Z=F_Z\\
&\partial_sA+\mathcal{V}_A\cdot\nabla A=F_A.
\end{aligned}
\right.
\end{equation}
We also deduce the equations of $(U,S)$:
\begin{equation}
\left\{\begin{aligned}
&\partial_sU_i-\beta_\tau e^{-s}Q_{ij}U_j+\mathcal{V}_A\cdot\nabla U=-2\beta_3\beta_\tau e^{\frac{s}{2}}S\partial_1SJN_i-2\beta_3\beta_\tau e^{-\frac{s}{2}}S\partial_2S\delta_{i2}\\
&\partial_sS+\mathcal{V}_A\cdot\nabla S=-2\beta_3\beta_\tau e^{\frac{s}{2}}S\partial_1U\cdot JN-2\beta_3\beta_\tau e^{-\frac{s}{2}}S\partial_2U_2.
\end{aligned}
\right.
\label{equation of U,S}
\end{equation}
We can see that $(U,S)$ are transported in the same way as $A$. The transport terms $g_A$, $h_A$ in the equation of $A$ can also be expressed in terms of $U$, $S$:
\begin{equation}
\left\{
\begin{aligned}
&g_A=\beta_\tau e^{\frac{s}{2}}\left[2\beta_1(U+V)\cdot JN-\frac{\partial_t f}{1+f_{x_1}}\right]\\
&h_A=2\beta_1\beta_\tau e^{-\frac{s}{2}}(U_2+V_2).
\end{aligned}\right.
\end{equation}
Here we record the relation between $(U,S)$ and $(W,Z,A)$:
\begin{equation}
\left\{
\begin{aligned}
&U=\frac{1}{2}\left(e^{-\frac{s}{2}}W+Z+\kappa\right)N+AT\\
&S=\frac{1}{2}\left(e^{-\frac{s}{2}}W-Z+\kappa\right),
\end{aligned}\right.
\label{U,S in terms of W,Z,A}
\end{equation}
and
\begin{equation}
\left\{
\begin{aligned}
&W=e^{\frac{s}{2}}(U\cdot N+S-\kappa)\\
&Z=U\cdot N-S\\
&A=U\cdot T.
\end{aligned}\right.
\label{W,Z,A in terms of U,S}
\end{equation}
Although we introduce self-similar versions of functions, such as $V(y,s)$ for $v(x,t)$, we overload $f$, $J$, $N$, $T$ as functions of $(y,s)$. For example, in the self-similar coordinates we view $N$ as the map $y\mapsto N(x(y),t(s))$, and $\partial_2N(y)=\partial_{y_2}[N(x(y),t(s))]$.
\subsection{Self-similar 2D Burgers profile}
We first introduce the 1D self-similar Burgers profile
\begin{equation}
W_{1d}(y_1)=\left(-\frac{y_1}{2}+\left(\frac{1}{27}+\frac{y_1^2}{4}\right)^{\frac{1}{2}}\right)^{\frac{1}{3}}-\left(\frac{y_1}{2}+\left(\frac{1}{27}+\frac{y_1^2}{4}\right)^{\frac{1}{2}}\right)^{\frac{1}{3}},
\end{equation}
which solves the 1D self-similar Burgers equation (cf. \cite{collot2018singularity}):
\begin{equation}
-\frac{1}{2}W_{1d}+\left(\frac{3}{2}y_1+W_{1d}\right)\partial_{y_1}W_{1d}=0.
\end{equation}
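Equivalently, $W_{1d}$ is the unique real root of the cubic
$$W_{1d}^3+W_{1d}+y_1=0,\qquad\text{i.e.}\qquad y_1=-W_{1d}-W_{1d}^3,$$
as follows from Cardano's formula. The equation can then be verified by implicit differentiation: $W_{1d}'=-\frac{1}{1+3W_{1d}^2}$ and $\frac{3}{2}y_1+W_{1d}=-\frac{1}{2}W_{1d}(1+3W_{1d}^2)$, so that $\left(\frac{3}{2}y_1+W_{1d}\right)W_{1d}'=\frac{1}{2}W_{1d}$.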
Moreover, we introduce
\begin{equation}
\overline{W}(y_1,y_2)=\langle y_2\rangle W_{1d}(\langle y_2\rangle^{-3}y_1),
\end{equation}
where $\langle y_2\rangle=\sqrt{1+y_2^2}$. One can verify that $\overline{W}$ is a solution to the 2D self-similar Burgers equation:
\begin{equation}
-\frac{1}{2}\overline{W}+\left(\frac{3}{2}y_1+\overline{W}\right)\partial_{y_1}\overline{W}+\frac{1}{2}y_2\partial_{y_2}\overline{W}=0.
\end{equation}
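As an independent sanity check (a short script of ours, not part of the proof), one can evaluate the residual of the 2D self-similar Burgers equation for $\overline W$ numerically; note that both cube-root arguments in $W_{1d}$ are nonnegative, so the real branch is taken automatically.
\begin{verbatim}
# Sanity check (ours): the residual of the 2D self-similar Burgers
# equation for Wbar(y1,y2) = <y2> W1d(<y2>^{-3} y1) should vanish.
import sympy as sp

y1, y2 = sp.symbols('y1 y2', real=True)

def W1d(x):
    r = sp.sqrt(sp.Rational(1, 27) + x**2/4)
    # both arguments below are >= 0, so cbrt stays on the real branch
    return sp.cbrt(-x/2 + r) - sp.cbrt(x/2 + r)

b = sp.sqrt(1 + y2**2)                  # <y2>
Wbar = b*W1d(y1/b**3)

res = (-Wbar/2 + (sp.Rational(3, 2)*y1 + Wbar)*sp.diff(Wbar, y1)
       + y2/2*sp.diff(Wbar, y2))

for p in [(0.3, -1.2), (2.0, 0.5), (-5.0, 3.0)]:
    print(sp.N(res.subs({y1: p[0], y2: p[1]}), 12))  # ~ 0 up to roundoff
\end{verbatim}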
\subsubsection{Properties of $\overline{W}$}
It can be checked via the explicit formula of $W_{1d}$ that
\begin{equation}
\left|W_{1d}(y_1)\right|\le\min\left(|y_1|,\frac{|y_1|}{\frac{1}{3}+|y_1|^{\frac{2}{3}}}\right)\le\min\left(|y_1|,|y_1|^{\frac{1}{3}}\right),
\end{equation}
\begin{equation}
|W_{1d}'(y_1)|\le\langle y_1\rangle^{-\frac{2}{3}},\ |W_{1d}''(y_1)|\le\langle y_1\rangle^{-\frac{5}{3}},
\end{equation}
\begin{equation}
\left|W_{1d}(y_1)W_{1d}'(y_1)\right|\le\frac{1}{3}\langle y_1\rangle^{-\frac{1}{3}},\ \left|(W_{1d}W_{1d}')'(y_1)\right|\le\min\left(\langle y_1\rangle^{-\frac{4}{3}},\frac{1}{7}|y_1|^{-1}\langle y_1\rangle^{-\frac{1}{3}}\right).
\end{equation}
Define $\eta(y)=1+y_1^2+y_2^6$, $\tilde{\eta}(y)=1+|y|^2+y_2^6$, then the above inequalities imply that
\begin{equation}
\left|\overline{W}\right|\le(1+y_1^2)^{\frac{1}{6}}\le\eta^{\frac{1}{6}},
\label{estimate of ovl W}
\end{equation}
\begin{equation}
\left|\partial_1\ovl{W}\right|\le\tilde{\eta}^{-\frac{1}{3}}, \ \left|\partial_2\ovl{W}\right|\le\frac{2}{3},
\label{estimates of D ovl W}
\end{equation}
\begin{equation}
\left|\partial_{11}\ovl{W}\right|\le\tilde\eta^{-\frac{5}{6}},\
\left|\partial_{12}\ovl{W}\right|\le2\eta^{-\frac{1}{2}},\ \left|\partial_{22}\ovl{W}\right|\le\frac{6}{7}\eta^{-\frac{1}{6}}.
\label{estimates of D^2 ovl W}
\end{equation}
At the origin we can check by the expression of $\ovl{W}$ that
\begin{equation}
\ovl{W}(0)=0,\ \ \ \nabla\ovl{W}(0)=
\begin{pmatrix}
-1\\0
\end{pmatrix},\ \ \
\nabla^2\ovl{W}(0)=
\begin{pmatrix}
0&0\\
0&0
\end{pmatrix},\ \ \
\partial_1\nabla^2\ovl{W}(0)=
\begin{pmatrix}
6&0\\
0&2
\end{pmatrix}.
\label{evaluation of ovl W at 0}
\end{equation}
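These values follow quickly from the implicit relation $y_1=-W_{1d}-W_{1d}^3$: along $y_2=0$ one finds $W_{1d}'(0)=-1$, $W_{1d}''(0)=0$, $W_{1d}'''(0)=6$, while $\partial_1\ovl{W}(y)=\langle y_2\rangle^{-2}W_{1d}'(\langle y_2\rangle^{-3}y_1)$ gives $\partial_1\ovl W(0,y_2)=-(1+y_2^2)^{-1}$ and hence $\partial_{122}\ovl W(0)=2$.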
\subsection{Evolution of $\tilde W$ and higher order derivatives of the unknowns}
If we define $\widetilde{W}=W-\ovl{W}$, then $\widetilde{W}$ satisfies
\begin{equation}
\left(\partial_s-\frac{1}{2}+\beta_\tau J\partial_1\ovl{W}\right)\widetilde{W}+\mathcal{V}_W\cdot\nabla\widetilde{W}=\widetilde{F}_W,
\label{equation of tilde W}
\end{equation}
where
\begin{equation}
\widetilde{F}_W=F_W+\left[(1-\beta_\tau J)\ovl{W}-G_W\right]\partial_1\ovl{W}-h_W\partial_2\ovl{W}.
\end{equation}
For a multi-index $\gamma=(\gamma_1,\gamma_2)$ satisfying $|\gamma|\ge1$, we have the evolution equation for $(\partial^\gamma W,\partial^\gamma Z,\partial^\gamma A)$:
\begin{equation}
\left\{\begin{aligned}
&\left(\partial_s+\frac{3\gamma_1+\gamma_2-1}{2}+\beta_\tau(1+\gamma_1\mathbbm{1}_{\gamma_1\ge2})J\partial_1W\right)\partial^\gamma W+\mathcal{V}_W\cdot\nabla\partial^\gamma W=F_W^{(\gamma)}\\
&\left(\partial_s+\frac{3\gamma_1+\gamma_2}{2}+\beta_2\beta_\tau\gamma_1J\partial_1W\right)\partial^\gamma Z+\mathcal{V}_Z\cdot\nabla\partial^\gamma Z=F_Z^{(\gamma)}\\
&\left(\partial_s+\frac{3\gamma_1+\gamma_2}{2}+\beta_1\beta_\tau\gamma_1J\partial_1W\right)\partial^\gamma A+\mathcal{V}_A\cdot\nabla\partial^\gamma A=F_A^{(\gamma)},
\end{aligned}
\right.
\end{equation}
where the forcing terms are
\begin{equation}
\begin{aligned}
F_W^{(\gamma)}=&\partial^\gamma F_W-\beta_\tau\partial_1W[\partial^\gamma,J]W-\beta_\tau\mathbbm{1}_{|\gamma|\ge2}\sum_{\substack{|\beta|=|\gamma|-1\\\beta_1=\gamma_1}}\binom{\gamma}{
\beta}\partial^{\gamma-\beta}(JW)\partial_1\partial^\beta W\\
&-\beta_\tau\mathbbm{1}_{|\gamma|\ge3}\sum_{\substack{1\le|\beta|\le|\gamma|-1\\\beta\le\gamma}}\binom{\gamma}{\beta}\partial^{\gamma-\beta}(JW)\partial_1\partial^\beta W-\sum_{0\le\beta<\gamma}\binom{\gamma}{\beta}\left(\partial^{\gamma-\beta}G_W\partial_1\partial^\beta W+\partial^{\gamma-\beta}h_W\partial_2\partial^\beta W\right),
\end{aligned}
\label{forcing terms of derivatives of W}
\end{equation}
\begin{equation}
\begin{aligned}
F_Z^{(\gamma)}=&\partial^\gamma F_Z-\beta_2\beta_\tau\sum_{\substack{|\beta|=|\gamma|-1\\\beta_1=\gamma_1}}\binom{\gamma}{\beta}\partial^{\gamma-\beta}(JW)\partial_1\partial^\beta Z\\
&-\beta_2\beta_\tau\mathbbm{1}_{|\gamma|\ge2}\sum_{\substack{0\le|\beta|\le|\gamma|-2\\\beta\le\gamma}}\binom{\gamma}{\beta}\partial^{\gamma-\beta}(JW)\partial_1\partial^\beta Z-\sum_{0\le\beta<\gamma}\binom{\gamma}{\beta}\left(\partial^{\gamma-\beta}G_Z\partial_1\partial^\beta Z+\partial^{\gamma-\beta}h_Z\partial_2\partial^\beta Z\right),
\end{aligned}
\label{forcing terms of derivative of Z}
\end{equation}
\begin{equation}
\begin{aligned}
F_A^{(\gamma)}=&\partial^\gamma F_A-\beta_1\beta_\tau\sum_{\substack{|\beta|=|\gamma|-1\\\beta_1=\gamma_1}}\binom{\gamma}{\beta}\partial^{\gamma-\beta}(JW)\partial_1\partial^\beta A\\
&-\beta_1\beta_\tau\mathbbm{1}_{|\gamma|\ge2}\sum_{\substack{0\le|\beta|\le|\gamma|-2\\\beta\le\gamma}}\binom{\gamma}{\beta}\partial^{\gamma-\beta}(JW)\partial_1\partial^\beta A-\sum_{0\le\beta<\gamma}\binom{\gamma}{\beta}\left(\partial^{\gamma-\beta}G_A\partial_1\partial^\beta A+\partial^{\gamma-\beta}h_A\partial_2\partial^\beta A\right).
\end{aligned}
\label{forcing terms of derivatives of A}
\end{equation}
Similarly we can deduce the equation of $\partial^\gamma\widetilde{W}$:
\begin{equation}
\left[\partial_s+\frac{3\gamma_1+\gamma_2-1}{2}+\beta_\tau J(\partial_1\ovl{W}+\gamma_1\partial_1W)\right]\partial^\gamma\widetilde{W}+\mathcal{V}_W\cdot\nabla\partial^\gamma\widetilde{W}=\widetilde{F}_W^{(\gamma)},
\label{evolution of derivatives of tilde W}
\end{equation}
where
\begin{equation}
\begin{aligned}
\widetilde{F}_W^{(\gamma)}=&\partial^\gamma\widetilde{F}_W-\sum_{0\le\beta<\gamma}\binom{\gamma}{\beta}\left[\partial^{\gamma-\beta}G_W\partial_1\partial^\beta \widetilde{W}+\partial^{\gamma-\beta}h_W\partial_2\partial^\beta \widetilde{W}+\beta_\tau\partial^{\gamma-\beta}(J\partial_1\ovl{W})\partial^\beta \widetilde{W}\right]\\
&-\beta_\tau\gamma_2\partial_2(JW)\partial_1^{\gamma_1+1}\partial_2^{\gamma_2-1}\widetilde{W}-\beta_\tau\mathbbm{1}_{|\gamma|\ge2}\sum_{\substack{0\le|\beta|\le|\gamma|-2\\\beta\le\gamma}}\binom{\gamma}{\beta}\partial^{\gamma-\beta}(JW)\partial_1\partial^\beta\widetilde{W}.
\end{aligned}
\end{equation}
\section{Main result}
In this section, we state the initial conditions and the main shock formation result for the 2D compressible Euler equations. The proof of the main theorem will be given in section \ref{proof of the main theorem}.
\subsection{Initial data in physical variables}\label{Initial data in physical variables}
We assume that the initial time is $t=-\varepsilon$ with $\varepsilon$ to be determined.
For modulation variables, we assume that
\begin{equation}
\kappa(-\varepsilon)=\kappa_0,\ \ \xi(-\varepsilon)=0,\ \ n_2(-\varepsilon)=0,\ \ \tau(-\varepsilon)=0,\ \ \ \phi(-\varepsilon)=\phi_0=0,
\label{initial condition of modulation variables}
\end{equation}
where
\begin{equation}
\kappa_0\ge\frac{3}{1-\max(\beta_1,\beta_2)}.
\label{initial condition for kappa}
\end{equation}
Since $n_2(-\varepsilon)=0$ and $\xi(-\varepsilon)=0$, the $\mathrm{x}$-coordinates and the $\tilde{x}$-coordinates coincide at $t=-\varepsilon$, and
\begin{equation}
\left\{
\begin{aligned}
x_1&=\mathrm{x}_1-\tilde{f}(\mathrm{x}_1,\mathrm{x}_2,-\varepsilon)\\
x_2&=\mathrm{x}_2.
\end{aligned}
\right.
\end{equation}
Now we prescribe the initial data:
\begin{equation}
u_0(\mathrm{x}):=u(\mathrm{x},-\varepsilon),\ \ \ \rho_0(\mathrm{x}):=\rho(\mathrm{x},-\varepsilon),\ \ \ \sigma_0:=\frac{\rho_0^\alpha}{\alpha}.
\end{equation}
We choose $u_0$ and $\rho_0$ such that the corresponding Riemann variables satisfy the conditions stated in this section. The initial data of the Riemann variables are denoted by
\begin{equation}
\begin{aligned}
\widetilde{w}_0(\mathrm{x}):&=u_0(\mathrm{x})\cdot N(\mathrm{x},-\varepsilon)+\sigma_0(\mathrm{x})=:w_0(x)\\
\widetilde{z}_0(\mathrm{x}):&=u_0(\mathrm{x})\cdot N(\mathrm{x},-\varepsilon)-\sigma_0(\mathrm{x})=:z_0(x)\\
\widetilde{a}_0(\mathrm{x}):&=u_0(\mathrm{x})\cdot T(\mathrm{x},-\varepsilon)=:a_0(x).
\end{aligned}
\end{equation}
First we assume that
\begin{equation}
\mathrm{supp}_{\mathrm{x}}\ (\widetilde{w}_0-\kappa_0,\widetilde{z}_0,\widetilde{a}_0)\subset{\scriptstyle\mathcal{X}}_0:=\left\{|\mathrm{x}_1|\le\frac{1}{2}\varepsilon^{\frac{1}{2}},|\mathrm{x}_2|\le\varepsilon^{\frac{1}{6}}\right\}.
\label{spatial support of initial data in physical variable, tilde x}
\end{equation}
This implies that
\begin{equation}
\mathrm{supp}_{x}\ (w_0-\kappa_0,z_0,a_0)\subset\left\{|x_1|\le\varepsilon^{\frac{1}{2}},|x_2|\le\varepsilon^{\frac{1}{6}}\right\}
\label{spatial support of initial data in physical variable}.
\end{equation}
The function $\widetilde{w}_0(\mathrm{x})$ is chosen such that
\begin{equation}
\begin{aligned}
&\text{the minimum negative slope of } \widetilde{w}_0\text{ occurs in the $\mathrm{x}_1$ direction, }\\
&\partial_{\mathrm{x}_1}\widetilde{w}_0\text{ attains its global minimum at }\mathrm{x}=0.
\end{aligned}
\label{initial slope condition for w, 1}
\end{equation}
and
\begin{equation}
\nabla_{\mathrm{x}}\partial_{\mathrm{x}_1}\widetilde{w}_0(0)=0.
\label{initial slope condition for w, 2}
\end{equation}
We also assume that
\begin{equation}
\widetilde{w}_0(0)=\kappa_0,\ \ \ \partial_{\mathrm{x}_1}\widetilde{w}_0(0)=-\frac{1}{\varepsilon},\ \ \ \partial_{\mathrm{x}_2}\widetilde{w}_0(0)=0.
\label{initial value of w at 0}
\end{equation}
Define
\begin{equation}
\overline{w}_\varepsilon(x):=\varepsilon^{\frac{1}{2}}\overline{W}(\varepsilon^{-\frac{3}{2}}x_1,\varepsilon^{-\frac{1}{2}}x_2),
\end{equation}
and we set
\begin{equation}
\wideparen{w}_0(\mathrm{x}):=\widetilde{w}_0(\mathrm{x})-\overline{w}_\varepsilon(\mathrm{x}_1-\tilde{f}(\mathrm{x},-\varepsilon),\mathrm{x}_2)=w_0(x)-\overline{w}_\varepsilon(x)=\varepsilon^{\frac{1}{2}}\widetilde{W}(y,-\log\varepsilon)+\kappa_0.
\end{equation}
We assume that for $\mathrm{x}$ such that $\left|(\varepsilon^{-\frac{3}{2}}\mathrm{x}_1,\varepsilon^{-\frac{1}{2}}\mathrm{x}_2)\right|\le2\varepsilon^{-\frac{1}{10}}$, the following bounds hold:
\begin{equation}
\begin{aligned}
|\wideparen{w}_0(\mathrm{x})-\kappa_0|&\le\varepsilon^{\frac{1}{10}}\left(\varepsilon^3+\mathrm{x}_1^2+\mathrm{x}_2^6\right)^\frac{1}{6},\\
|\partial_{\mathrm{x}_1}\wideparen{w}_0(\mathrm{x})|&\le\varepsilon^{\frac{1}{11}}\left(\varepsilon^3+\mathrm{x}_1^2+\mathrm{x}_2^6\right)^{-\frac{1}{3}},\\
|\partial_{\mathrm{x}_2}\wideparen{w}_0(\mathrm{x})|&\le\frac{1}{2}\varepsilon^{\frac{1}{12}}
\label{initial data for difference in physical variable, 0th and 1st order}.
\end{aligned}
\end{equation}
For $\mathrm{x}$ such that $\left|(\varepsilon^{-\frac{3}{2}}\mathrm{x}_1,\varepsilon^{-\frac{1}{2}}\mathrm{x}_2)\right|\le1$, we assume that
\begin{equation}
|\partial_{\mathrm{x}}^\gamma\wideparen{w}_0(\mathrm{x})|\overset{|\gamma|=4}{\le}\frac{1}{2}\varepsilon^{\frac{5}{8}-\frac{1}{2}(3\gamma_1+\gamma_2)}.
\label{initial data for difference in physical variable, 4th order}
\end{equation}
At $\mathrm{x}=0$, we assume that
\begin{equation}
|\partial_{\mathrm{x}}^\gamma\wideparen{w}_0(\mathrm{0})|\overset{|\gamma|=3}{\le}\frac{1}{2}\varepsilon^{1-\frac{1}{2}(3\gamma_1+\gamma_2)-\frac{4}{2k-7}}.
\label{initial data for difference in physical variable, 3rd order}
\end{equation}
For $\mathrm{x}\in{\scriptstyle\mathcal{X}}_0$ such that $\left|(\varepsilon^{-\frac{3}{2}}\mathrm{x}_1,\varepsilon^{-\frac{1}{2}}\mathrm{x}_2)\right|\ge\frac{1}{2}\varepsilon^{-\frac{1}{10}}$, we assume that
\begin{equation}
\begin{aligned}
|\widetilde{w}_0(\mathrm{x})-\kappa_0|&\le(1+\varepsilon^{\frac{1}{11}})\left(\varepsilon^4+\mathrm{x}_1^2+\mathrm{x}_2^6\right)^{\frac{1}{6}},\\
|\partial_{\mathrm{x}_1}\widetilde{w}_0(\mathrm{x})|&\le(1+\varepsilon^{\frac{1}{12}})\left(\varepsilon^4+\mathrm{x}_1^2+\mathrm{x}_2^6\right)^{-\frac{1}{3}},\\
|\partial_{\mathrm{x}_2}\widetilde{w}_0(\mathrm{x})|&\le\frac{2}{3}+\varepsilon^{\frac{1}{13}}.
\end{aligned}
\label{initial data for w in physical variable, 0th and 1st order, outside}
\end{equation}
For all $\mathrm{x}\in{\scriptstyle\mathcal{X}}_0$, we assume that
\begin{equation}
\begin{aligned}
|\partial_{\mathrm{x}_1}^2\widetilde{w}_0(\mathrm{x})|&\le\varepsilon^{-\frac{3}{2}}\left(\varepsilon^3+\mathrm{x}_1^2+\mathrm{x}_2^6\right)^{-\frac{1}{3}},\\
|\partial_{\mathrm{x}_1\mathrm{x}_2}\widetilde{w}_0(\mathrm{x})|&\le\frac{1}{2}\varepsilon^{-\frac{1}{2}}\left(\varepsilon^3+\mathrm{x}_1^2+\mathrm{x}_2^6\right)^{-\frac{1}{3}},\\
|\partial_{\mathrm{x}_2}^2\widetilde{w}_0(\mathrm{x})|&\le\frac{1}{2}\left(\varepsilon^3+\mathrm{x}_1^2+\mathrm{x}_2^6\right)^{-\frac{1}{6}}.
\end{aligned}
\label{initial data for w in physical variable, 2nd order}
\end{equation}
Also, at $\mathrm{x}=0$ we assume that
\begin{equation}
\left|\partial_{\mathrm{x}_2}^2\widetilde{w}_0(0)\right|\le1.
\label{initial data for w in physical variable, 2nd order at 0}
\end{equation}
For the initial data of $\widetilde{z}_0$ and $\widetilde{a}_0$ we assume that
\begin{equation}
\begin{aligned}
&|\widetilde{z}_0(\mathrm{x})|\le\varepsilon,\ \ \ |\partial_{\mathrm{x}_1}\widetilde{z}_0(\mathrm{x})|\le1,\ \ \ |\partial_{\mathrm{x}_2}\widetilde{z}_0(\mathrm{x})|\le\frac{1}{2}\varepsilon^{\frac{1}{2}},\\
&|\partial_{\mathrm{x}_1}^2\widetilde{z}_0(\mathrm{x})|\le\varepsilon^{-\frac{3}{2}},\ \ \ |\partial_{\mathrm{x}_1\mathrm{x}_2}\widetilde{z}_0(\mathrm{x})|\le\frac{1}{2}\varepsilon^{-\frac{1}{2}},\ \ \ |\partial_{\mathrm{x}_2}^2\widetilde{z}_0(\mathrm{x})|\le\frac{1}{2},
\end{aligned}
\label{initial data for z in physical variable}
\end{equation}
and
\begin{equation}
|\widetilde{a}_0(\mathrm{x})|\le\varepsilon,\ \ \ |\partial_{\mathrm{x}_1}\widetilde{a}_0(\mathrm{x})|\le1,\ \ \ |\partial_{\mathrm{x}_2}\widetilde{a}_0(\mathrm{x})|\le\frac{1}{2}\varepsilon^{\frac{1}{2}},\ \ \ |\partial_{\mathrm{x}_2}^2\widetilde{a}_0(\mathrm{x})|\le\frac{1}{2}.
\label{initial data for a in physical variable}
\end{equation}
For the initial specific vorticity, we assume that
\begin{equation}
\left\|\frac{\operatorname{curl}u_0(\mathrm{x})}{\rho_0(\mathrm{x})}\right\|_{L^\infty}\le1
\label{initial data for vorticity in physical variable}.
\end{equation}
Finally, for the Sobolev norms of the initial data, we assume that for a fixed integer $k\ge18$ and every multi-index $\gamma$ with $|\gamma|=k$ the following holds:
\begin{equation}
\varepsilon^2\|\partial_\mathrm{x}^\gamma \widetilde{w}_0\|_{L^2}^2+\|\partial_\mathrm{x}^\gamma \widetilde{z}_0\|_{L^2}^2+\|\partial_\mathrm{x}^\gamma \widetilde{a}_0\|_{L^2}^2\le\frac{1}{2}\varepsilon^{\frac{7}{2}-3\gamma_1-\gamma_2}.
\label{initial sobolev norm in physical variable}
\end{equation}
\begin{theorem}[\emph{Main result in physical variables}]
If
\begin{itemize}
\item the initial values of the modulation variables satisfy (\ref{initial condition of modulation variables})(\ref{initial condition for kappa});
\item the initial data $(u_0,\rho_0)$ of the Euler equations are smooth and guarantee that the corresponding Riemann variables $(w_0,z_0,a_0)$ satisfy the initial conditions (\ref{initial slope condition for w, 1})-(\ref{initial sobolev norm in physical variable}),
\end{itemize}
then the corresponding solution $(u,\rho)$ to (\ref{original Euler equation}) blows up in finite time $T_*$, with $-\varepsilon<T_*=O(\varepsilon^2)<+\infty$. Moreover, we have the following description of the shock:
\begin{enumerate}
\item\emph{Blow-up speed}. We have the following inequalities for $(u,\sigma)$:
\begin{equation}
\frac{c}{T_*-t}\le\|\nabla_\mathrm{x} u(t)\|_{L^\infty}\le\frac{C}{T_*-t},
\label{blow up speed for u}
\end{equation}
\begin{equation}
\frac{c}{T_*-t}\le\|\nabla_\mathrm{x} \sigma(t)\|_{L^\infty}\le\frac{C}{T_*-t}.
\label{blow up speed for sigma}
\end{equation}
\item\emph{Blow-up location}. For arbitrary $\delta\in(0,1)$, there holds that
\begin{equation}
\|\nabla _\mathrm{x} u(t)\|_{L^\infty(B_\delta^c(\xi(t)))}+\|\nabla_\mathrm{x} \sigma(t)\|_{L^\infty(B_\delta^c(\xi(t)))}\le C(\delta),
\label{boundedness of u and sigma away from origin}
\end{equation}
while we have the unboundedness of gradient along $\xi(t)$:
\begin{equation}
|\nabla _\mathrm{x} u(\xi(t),t)|\ge\frac{c}{T_*-t},\ \ |\nabla _\mathrm{x} \sigma(\xi(t),t)|\ge\frac{c}{T_*-t}.
\label{unboundednesss of u and sigma at the origin}
\end{equation}
Moreover, the limit of $\xi(t)$ exists:
\begin{equation}
\lim_{t\rightarrow T_*}\xi(t)=\xi_*\in\mathbb{R}^2.
\end{equation}
\item\emph{Direction of the shock}. The gradient of $(u,\sigma)$ blows up only in one direction:
\begin{equation}
|[(R(t)N)\cdot\nabla _{\mathrm{x}}] u(\xi(t),t)|\ge\frac{c}{T_*-t},\ \ |(R(t)N)\cdot\nabla _{\mathrm{x}} \sigma(\xi(t),t)|\ge\frac{c}{T_*-t};
\label{unboundedness at the N direction}
\end{equation}
\begin{equation}
\|[(R(t)T)\cdot\nabla _{\mathrm{x}}] u(t)\|_{L^\infty}+ \|[(R(t)T)\cdot\nabla _{\mathrm{x}}] \sigma(t)\|_{L^\infty}\le C.
\label{boundedness at the T direction}
\end{equation}
Moreover, we have $n(t)=R(t)N(0,t)$, and the limit of $n(t)$ exists:
\begin{equation}
\lim_{t\rightarrow T_*}n(t)=n_*\in\mathbb{S}^1.
\label{limit of n}
\end{equation}
\item \emph{1/3-H\"older continuity}. The solution has a uniform-in-time $C^{1/3}$ bound. More precisely, we have that
\begin{equation}
(u,\sigma)\in L^\infty_t([-\varepsilon,T_*),C^{1/3}_\mathrm{x}).
\end{equation}
\end{enumerate}
\end{theorem}
The proof of the main result will be given in section \ref{proof of the main theorem}.
\subsection{Initial data in self-similar variables}
Since $\tau(-\varepsilon)=0$, we have that the initial self-similar time is $s=-\log\varepsilon$.
When $s=-\log\varepsilon$, $y_1=x_1\varepsilon^{-\frac{3}{2}}$, $y_2=x_2\varepsilon^{-\frac{1}{2}}$, from (\ref{spatial support of initial data in physical variable}) we have that the initial data of $W,Z,A$ are supported in
\begin{equation}
\mathcal{X}_0=\{|y_1|\le\varepsilon^{-1},|y_2|\le\varepsilon^{-\frac{1}{3}}\}.
\label{initial spatial support}
\end{equation}
Now we introduce a large constant $M=M(\alpha,\kappa_0,k)$ to absorb universal constants; here $k$ is the order of the energy estimate to be established later in section \ref{energy estimate}. In terms of $M$ and $\varepsilon$, we define a small scale $l$ and a large scale $L$ by
\begin{subequations}
\begin{align}
\begin{split}
l=(\log M)^{-5},
\label{definition of l}
\end{split}\\
\begin{split}
L=\varepsilon^{-\frac{1}{10}}.
\label{definition of L}
\end{split}
\end{align}
\end{subequations}
From (\ref{initial data for difference in physical variable, 0th and 1st order})(\ref{initial data for difference in physical variable, 4th order})(\ref{initial data for difference in physical variable, 3rd order}) we know that $\widetilde W(y,-\log\varepsilon)$ satisfies
\begin{subequations}
\begin{align}
\begin{split}
\eta^{-\frac{1}{6}}\left|\widetilde{W}(y,-\log\varepsilon)\right|\mathbbm{1}_{|y|\le L}&\le\varepsilon^{\frac{1}{10}},
\end{split}\\
\begin{split}
\eta^{\frac{1}{3}}\left|\partial_1\widetilde{W}(y,-\log\varepsilon)\right|\mathbbm{1}_{|y|\le L}&\le\varepsilon^{\frac{1}{11}},
\end{split}\\
\begin{split}
\left|\partial_2\widetilde{W}(y,-\log\varepsilon)\right|\mathbbm{1}_{|y|\le L}&\le\varepsilon^{\frac{1}{12}},
\end{split}\\
\begin{split}
\left|\partial^\gamma\widetilde{W}(y,-\log\varepsilon)\right|\mathbbm{1}_{|y|\le l}\overset{|\gamma|=4}&{\le}\varepsilon^{\frac{1}{8}}
\end{split}\\
\begin{split}
\left|\partial^\gamma\widetilde{W}(0,-\log\varepsilon)\right|\overset{|\gamma|=3}&{\le}\varepsilon^{\frac{1}{2}-\frac{1}{k-3}}.
\end{split}
\end{align}
\label{initial condition of tilde W}
\end{subequations}
For $W(y,-\log\varepsilon)$, we deduce from (\ref{initial data for w in physical variable, 0th and 1st order, outside}) that for all $y\in\mathcal{X}_0\cap\{|y|\ge L\}$ the following hold:
\begin{equation}
\begin{aligned}
\eta^{-\frac{1}{6}}\left|{W}(y,-\log\varepsilon)\right|&\le1+\varepsilon^{\frac{1}{11}},\\
\eta^{\frac{1}{3}}\left|\partial_1{W}(y,-\log\varepsilon)\right|&\le1+\varepsilon^{\frac{1}{12}},\\
\left|\partial_2{W}(y,-\log\varepsilon)\right|&\le\frac{3}{4}.
\end{aligned}
\label{initial condition of W, 0th and 1st order}
\end{equation}
and from (\ref{initial data for w in physical variable, 2nd order}) we have that for all $y\in\mathcal{X}_0$ the following hold:
\begin{equation}
\begin{aligned}
\eta^{\frac{1}{3}}\left|\partial_{11}{W}(y,-\log\varepsilon)\right|&\le1,\\
\eta^{\frac{1}{3}}\left|\partial_{12}{W}(y,-\log\varepsilon)\right|&\le1,\\
\eta^{\frac{1}{6}}\left|\partial_{22}{W}(y,-\log\varepsilon)\right|&\le1.\\
\end{aligned}
\label{initial condition of W, 2nd order}
\end{equation}
From (\ref{initial data for z in physical variable})(\ref{initial data for a in physical variable}), we have that the initial data of $Z$ and $A$ satisfy
\begin{equation}
\left|\partial^\gamma Z(y,-\log\varepsilon)\right|\le\left\{
\begin{aligned}
&\varepsilon^{\frac{3}{2}},\ \ \ \gamma_1>0,\ |\gamma|=2\\
&\varepsilon,\ \ \ \ \ \gamma_1=0,\ |\gamma|\le2,
\end{aligned}\right.
\label{initial condition of Z}
\end{equation}
\begin{equation}
\left|\partial^\gamma A(y,-\log\varepsilon)\right|\le\left\{
\begin{aligned}
&\varepsilon^{\frac{3}{2}},\ \ \ \gamma=(1,0)\\
&\varepsilon,\ \ \ \ \ \gamma_1=0,\ |\gamma|\le2.
\end{aligned}\right.
\label{initial condition of A}
\end{equation}
Furthermore, from (\ref{initial data for vorticity in physical variable}) we know the specific vorticity satisfies
\begin{equation}
\left\|\Omega(\cdot,-\log\varepsilon)\right\|_{L^\infty}\le1.
\end{equation}
Finally from (\ref{initial sobolev norm in physical variable}) we have
\begin{equation}
\varepsilon\|W(\cdot,-\log\varepsilon)\|_{\dot H^k}^2+\|Z(\cdot,-\log\varepsilon)\|_{\dot H^k}^2+\|A(\cdot,-\log\varepsilon)\|_{\dot H^k}^2\le\varepsilon.
\label{initial sobolev norm}
\end{equation}
\begin{theorem}[Main theorem in self-similar coordinate]
Suppose that $W(y,-\log\varepsilon)$, $Z(y,-\log\varepsilon)$, $A(y,-\log\varepsilon)\in H^k(\mathbb{R}^2)$ for a sufficiently large integer $k$, that they satisfy (\ref{initial spatial support})-(\ref{initial sobolev norm}), and that the initial data of the modulation variables $(\kappa,\xi,n_2,\tau,\phi)$ satisfy (\ref{initial condition of modulation variables})(\ref{initial condition for kappa}). Then there exists a choice of $\varepsilon\ll1$ such that the system (\ref{evolution of W,Z,A}) coupled with (\ref{evolution of kappa})(\ref{evolution of phi})(\ref{evolution of tau})(\ref{evolution of xi}) admits a global solution, and the solution $(W,Z,A,\kappa,\phi,\tau,\xi)$ satisfies the bootstrap assumptions (stated in the next section) for all time.
\end{theorem}
\section{Bootstrap argument}
To establish global existence in the self-similar coordinates, we set up a bootstrap argument.
\subsection{Bootstrap assumption}
We first state the bootstrap assumptions.
\begin{enumerate}
\item[(1)]\emph{Assumptions on modulation variables}. For the modulation variables, we assume that
\begin{equation}\tag{B-M}
\left\{
\begin{aligned}
&\frac{1}{2}\kappa_0\le\kappa\le2\kappa_0, &&|\dot\kappa|\le M\\
&|\tau|\le M\varepsilon^2, &&|\dot{\tau}|\le Me^{-s}\\
&|\xi|\le M^{\frac{1}{4}}\varepsilon, &&|\dot\xi|\le M^{\frac{1}{4}}\\
&|n_2|\le M^2\varepsilon^{\frac{3}{2}}, &&|\dot n_2|\le M^2\varepsilon^{\frac{1}{2}}\\
&|\phi|\le M^2\varepsilon, &&|\dot\phi|\le M^2.
\end{aligned}
\right.
\label{bootstrap assumptions of dynamic variables}
\end{equation}
\item[(2)]\emph{Assumptions on spatial support}. We define $\mathcal{X}(s):=\left\{|y_1|\le2\varepsilon^{\frac{1}{2}}e^{\frac{3}{2}s},|y_2|\le2\varepsilon^{\frac{1}{6}}e^{\frac{s}{2}}\right\}$, and assume that
\begin{equation}\tag{B-S}
\mathrm{supp}(DW,DZ,DA)\subset\mathcal{X}(s).
\label{bootstrap assumptions for the spatial support}
\end{equation}
We will show in Lemma \ref{spatial support of DU,DS} that this assumption, together with (\ref{support of DN and DT}), implies $\mathrm{supp}(DU,DS)\subset\mathcal{X}(s)$.
\item[(3)]\emph{Assumptions on $W$ and $\widetilde{W}$}. For $|\gamma|\le2$, we assume that $\partial^\gamma W$ is either close to $\partial^\gamma\ovl W$ or behaves like $\partial^\gamma\ovl W$. More precisely, we assume that
\begin{equation}\tag{B-$W$}
\left\{\begin{aligned}
&|W|\le(1+\varepsilon^{\frac{1}{20}})\eta^{\frac{1}{6}},\ \
&|\partial_1W|\le2\eta^{-\frac{1}{3}},\ \
&|\partial_2W|\le1\\
&|\partial_{11}W|\le M^{\frac{1}{3}}\eta^{-\frac{1}{3}},\ \
&|\partial_{12}W|\le M^{\frac{2}{3}}\eta^{-\frac{1}{3}},\ \
&|\partial_{22}W|\le M\eta^{-\frac{1}{6}}.
\end{aligned}\right.
\label{bootstrap assumptions of W}
\end{equation}
Note that, by $\mathrm{supp}\ DW\subset\mathcal{X}(s)$ and $W(0)=\ovl{W}(0)=0$, we have
\begin{equation}
|W(y)|\le \int_0^{y_1}2\eta^{-\frac{1}{3}}(y_1',0)dy_1'+\|\partial_2W\|_{L^\infty}|y_2|\lesssim \varepsilon^{\frac{1}{6}}e^{\frac{s}{2}}.
\label{estimate of W}
\end{equation}
For $\widetilde{W}$ we assume that
\begin{equation}\tag{B-$\widetilde{W}$-1}
\left\{
\begin{aligned}
&\left|\widetilde{W}\right|\mathbbm{1}_{|y|\le L}\le\varepsilon^{\frac{1}{11}}\eta^{\frac{1}{6}}\\
&\left|\partial_1\widetilde{W}\right|\mathbbm{1}_{|y|\le L}\le\varepsilon^{\frac{1}{12}}\eta^{-\frac{1}{3}}\\
&\left|\partial_2\widetilde{W}\right|\mathbbm{1}_{|y|\le L}\le\varepsilon^{\frac{1}{13}}.
\end{aligned}
\right.
\label{bootstrap assumptions of tilde W when |y|<L}
\end{equation}
where $L=\varepsilon^{-\frac{1}{10}}$, and
\begin{equation}\tag{B-$\widetilde{W}$-2}
\left|\partial^\gamma\widetilde{W}\right|\mathbbm{1}_{|y|\le l}\le\log^4 M\varepsilon^{\frac{1}{10}}|y|^{4-|\gamma|}+M\varepsilon^{\frac{1}{4}}|y|^{3-|\gamma|}\ \ \ \ \ (\forall|\gamma|\le3),
\label{bootstrap assumptions of tilde W when |y|<l, |gamma|<4}
\end{equation}
\begin{equation}\tag{B-$\widetilde{W}$-3}
\left|\partial^\gamma\widetilde{W}\right|\mathbbm{1}_{|y|\le l}\le\frac{1}{2}\log^{|\check\gamma|} M\varepsilon^{\frac{1}{10}}\ \ \ \ \ (\forall|\gamma|=4).
\label{bootstrap assumptions of tilde W when |y|<l, |gamma|=4}
\end{equation}
where $l=(\log M)^{-5}$, and
\begin{equation}\tag{B-$\widetilde{W}^0$}
\left|\partial^\gamma\widetilde{W}(0,s)\right|\le\varepsilon^{\frac{1}{4}}\ \ \ \ \ (\forall|\gamma|=3,\forall s\ge s_0).
\label{bootstrap assumptions of tilde W when y=0, |gamma|=3}
\end{equation}
\item [(4)]\emph{Assumptions on $Z$ and $A$}. For $Z$, $A$ and their derivatives up to second order, we assume they are small or have decay properties. More precisely, we assume that
\begin{equation}\tag{B-$Z$}
\left\{
\begin{aligned}
&|Z|\le M\varepsilon,\ \
&|\partial_1Z|\le M^{\frac{1}{2}}e^{-\frac{3}{2}s},\ \
&|\partial_2Z|\le M\varepsilon^{\frac{1}{2}}e^{-\frac{s}{2}}\\
&|\partial_{11}Z|\le M^{\frac{1}{2}}e^{-\frac{3}{2}s},\ \
&|\partial_{12}Z|\le Me^{-\frac{3}{2}s},\ \
&|\partial_{22}Z|\le Me^{-s}.
\end{aligned}
\right.
\label{bootstrap assumptions of Z}
\end{equation}
and
\begin{equation}\tag{B-$A$}
\left\{
\begin{aligned}
&|A|\le M\varepsilon,\ \
&|\partial_1A|\le Me^{-\frac{3}{2}s}\\
&|\partial_2A|\le M\varepsilon^{\frac{1}{2}}e^{-\frac{s}{2}},\ \
&|\partial_{22}A|\le Me^{-s}.
\end{aligned}
\right.
\label{bootstrap assumptions of A}
\end{equation}
\end{enumerate}
\subsection{Bootstrap procedure}
Now we state the improved bootstrap inequalities (IB), which are to be deduced from the bootstrap assumptions and the initial conditions:
\begin{equation}\tag{IB-M}
\left\{
\begin{aligned}
&\frac{3}{4}\kappa_0\le\kappa\le\frac{5}{4}\kappa_0, &&|\dot\kappa|\le \frac{1}{2}M\\
&|\tau|\le \frac{1}{4}M\varepsilon^2, &&|\dot{\tau}|\le \frac{1}{4}Me^{-s}\\
&|\xi|\le \frac{1}{2}M^{\frac{1}{4}}\varepsilon, &&|\dot\xi|\le \frac{1}{2}M^{\frac{1}{4}}\\
&|n_2|\le \frac{1}{2}M^2\varepsilon^{\frac{3}{2}}, &&|\dot n_2|\le \frac{1}{2}M^2\varepsilon^{\frac{1}{2}}\\
&|\phi|\le \frac{1}{2}M^2\varepsilon, &&|\dot\phi|\le \frac{1}{10}M^2,
\end{aligned}
\right.
\label{refined bootstrap inequality of dynamic variables}
\end{equation}
\begin{equation}\tag{IB-S}
\mathrm{supp}(DW,DZ,DA)\subset\frac{7}{8}\mathcal{X}(s),
\label{refined spatial support}
\end{equation}
\begin{equation}\tag{IB-$W$}
\left\{\begin{aligned}
&|W|\le(1+\varepsilon^{\frac{1}{19}})\eta^{\frac{1}{6}},\ \
&|\partial_1W|\le\left(1+\varepsilon^{\frac{1}{13}}\right)\eta^{-\frac{1}{3}},\ \
&|\partial_2W|\le\frac{5}{6}\\
&|\partial_{11}W|\le \frac{1}{2}M^{\frac{1}{3}}\eta^{-\frac{1}{3}},\ \
&|\partial_{12}W|\le \frac{1}{2}M^{\frac{2}{3}}\eta^{-\frac{1}{3}},\ \
&|\partial_{22}W|\le \frac{1}{2}M\eta^{-\frac{1}{6}},
\end{aligned}\right.
\label{refined bootstrap inequality of W}
\end{equation}
\begin{equation}\tag{IB-$\widetilde{W}$-1}
\left\{
\begin{aligned}
&\left|\widetilde{W}\right|\mathbbm{1}_{|y|\le L}\le\frac{1}{2}\varepsilon^{\frac{1}{11}}\eta^{\frac{1}{6}}\\
&\left|\partial_1\widetilde{W}\right|\mathbbm{1}_{|y|\le L}\le\frac{1}{2}\varepsilon^{\frac{1}{12}}\eta^{-\frac{1}{3}}\\
&\left|\partial_2\widetilde{W}\right|\mathbbm{1}_{|y|\le L}\le\frac{1}{2}\varepsilon^{\frac{1}{13}},
\end{aligned}
\right.
\label{refined bootstrap inequality of tilde W when |y|<L}
\end{equation}
\begin{equation}\tag{IB-$\widetilde{W}$-2}
\left|\partial^\gamma\widetilde{W}\right|\mathbbm{1}_{|y|\le l}\le\frac{1}{2}\log^4 M\varepsilon^{\frac{1}{10}}|y|^{4-|\gamma|}+\frac{1}{2}M\varepsilon^{\frac{1}{4}}|y|^{3-|\gamma|}\ \ \ \ \ (\forall|\gamma|\le3),
\label{refined bootstrap inequality of tilde W when |y|<l, |gamma|<4}
\end{equation}
\begin{equation}\tag{IB-$\widetilde{W}$-3}
\left|\partial^\gamma\widetilde{W}\right|\mathbbm{1}_{|y|\le l}\le\frac{1}{4}\log^{|\check\gamma|} M\varepsilon^{\frac{1}{10}}\ \ \ \ \ (\forall|\gamma|=4),
\label{refined bootstrap inequality of tilde W when |y|<l, |gamma|=4}
\end{equation}
\begin{equation}\tag{IB-$\widetilde{W}^0$}
\left|\partial^\gamma\widetilde{W}(0,s)\right|\le\frac{1}{10}\varepsilon^{\frac{1}{4}}\ \ \ \ \ (\forall|\gamma|=3,\forall s\ge s_0),
\label{refined bootstrap inequality of tilde W when y=0, |gamma|=3}
\end{equation}
\begin{equation}\tag{IB-$Z$}
\left\{
\begin{aligned}
&|Z|\le \frac{1}{2}M\varepsilon,\ \
&|\partial_1Z|\le \frac{1}{2}M^{\frac{1}{2}}e^{-\frac{3}{2}s},\ \
&|\partial_2Z|\le \frac{1}{2}M\varepsilon^{\frac{1}{2}}e^{-\frac{s}{2}}\\
&|\partial_{11}Z|\le \frac{1}{2}M^{\frac{1}{2}}e^{-\frac{3}{2}s},\ \
&|\partial_{12}Z|\le \frac{1}{2}Me^{-\frac{3}{2}s},\ \
&|\partial_{22}Z|\le \frac{1}{2}Me^{-s},
\end{aligned}
\right.
\label{refined bootstrap inequality of Z}
\end{equation}
\begin{equation}\tag{IB-$A$}
\left\{
\begin{aligned}
&|A|\le M\varepsilon,\ \
&|\partial_1A|\le \frac{1}{2}Me^{-\frac{3}{2}s}\\
&|\partial_2A|\le \frac{1}{2}M\varepsilon^{\frac{1}{2}}e^{-\frac{s}{2}},\ \
&|\partial_{22}A|\le \frac{1}{2}Me^{-s}.
\end{aligned}
\right.
\label{refined bootstrap inequality of A}
\end{equation}
Compared with the 3d case in \cite{buckmaster2022formation}, we must close the bootstrap argument for the spatial support with extra care; this is done in subsection \ref{upper bound of the trajectories}. To prove that $W,Z,A$ are constant outside $\frac{7}{8}\mathcal{X}(s)$, we define two rectangles $Q_{big}=\{|y_1|\le M',|y_2|\le M'\}$ and $Q_{small}(s)$ satisfying
$$\frac{3}{4}\mathcal{X}(s)\subset Q_{small}(s)\subset\frac{7}{8}\mathcal{X}(s)\subset Q_{big},$$
where $M'$ can be chosen arbitrarily large. Then we consider the quantity
$$\int_{Q_{big}\setminus Q_{small}} E(y,s)dy,$$ where $E(y,s)=\frac{1}{2}\left(e^{-s}(W-W_\infty)^2+(Z-Z_\infty)^2+2(A-A_\infty)^2\right)$.
From the equations of $W,Z,A$ and bootstrap assumptions, we find that
$$\frac{d}{ds}\int_{Q_{big}\setminus Q_{small}} E\le C\int_{Q_{big}\setminus Q_{small}} E.$$
By Gronwall's inequality and the initial conditions, we can deduce that $W,Z,A$ are constant outside $Q_{small}$.
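A minimal sketch of this last step, assuming (as the initial-data hypotheses provide) that $(W,Z,A)$ coincide with $(W_\infty,Z_\infty,A_\infty)$ on $Q_{big}\setminus Q_{small}(s_0)$ at the initial time: setting
\begin{equation*}
g(s):=\int_{Q_{big}\setminus Q_{small}(s)} E(y,s)dy,\qquad g(s_0)=0,\qquad \frac{dg}{ds}\le Cg\ \Longrightarrow\ g(s)\le e^{C(s-s_0)}g(s_0)=0,
\end{equation*}
so $E\equiv0$ there, i.e. $(W,Z,A)=(W_\infty,Z_\infty,A_\infty)$ on $Q_{big}\setminus Q_{small}(s)$; since $M'$ is arbitrary, $W,Z,A$ are constant outside $Q_{small}(s)$.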
\section{Immediate corollaries of bootstrap assumptions}
\subsection{Blow-up time}
By the definition of $s$, we have $t=\tau-e^{-s}$. From the bootstrap assumption on $\tau$ and $s\ge-\log\varepsilon$, we can see that if the bootstrap assumptions hold on the interval $[t_0,t]=[-\varepsilon,t]$, then $t$ satisfies
\begin{equation}
|t-t_0|=|t+\varepsilon|\le\varepsilon+M\varepsilon^2+e^{\log\varepsilon}=\varepsilon+M\varepsilon^2+\varepsilon\le3\varepsilon.
\label{bootstrap time}
\end{equation}
The blow-up time $T_*$ is defined to be $T_*=\tau(T_*)$.
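For completeness, here is a short sketch of why this fixed point exists and is unique, using only the bootstrap bounds $|\tau|\le M\varepsilon^2$ and $|\dot\tau|\le Me^{-s}\le M\varepsilon$:
\begin{equation*}
\frac{d}{dt}\big(\tau(t)-t\big)=\dot\tau-1\le M\varepsilon-1\le-\frac{1}{2},\qquad \tau(-\varepsilon)+\varepsilon\ge\varepsilon-M\varepsilon^2>0,
\end{equation*}
so $\tau(t)-t=e^{-s}$ is positive at $t=-\varepsilon$ and strictly decreasing at rate at least $\frac{1}{2}$; hence it vanishes at a unique time $T_*$, consistent with (\ref{bootstrap time}).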
\subsection{Closure of bootstrap argument for $W$,$\widetilde{W}$ near the origin}
From estimates (\ref{estimate of ovl W})(\ref{estimates of D ovl W}) of $\ovl{W}$ and bootstrap assumptions (\ref{bootstrap assumptions of tilde W when |y|<L}), we have
\begin{equation}
\left\{
\begin{aligned}
&\left|W\right|\mathbbm{1}_{|y|\le L}\le(1+\varepsilon^{\frac{1}{11}})\eta^{\frac{1}{6}}\\
&\left|\partial_1W\right|\mathbbm{1}_{|y|\le L}\le(1+\varepsilon^{\frac{1}{12}})\eta^{-\frac{1}{3}}\\
&\left|\partial_2W\right|\mathbbm{1}_{|y|\le L}\le\frac{2}{3}+\varepsilon^{\frac{1}{13}}.
\end{aligned}
\right.
\label{bootstrap estimate of W and DW}
\end{equation}
Thus we have closed the bootstrap argument for $W$ and $DW$ in the region $\{|y|\le L\}$; moreover, since $D^2\widetilde{W}(0,s)=0$, the bootstrap argument for $D^2W$ in $\{|y|\le l\}$ closes automatically.
Note that by (\ref{bootstrap estimate of W and DW})(\ref{bootstrap assumptions of W}), for $\varepsilon$ taken small enough, we have
\begin{equation}
\left|\partial_1W\right|\le(1+\varepsilon^{\frac{1}{12}})\eta^{-\frac{1}{3}}\mathbbm{1}_{|y|\le L}+2\eta^{-\frac{1}{3}}\mathbbm{1}_{|y|>L}\le1+\varepsilon^{\frac{1}{12}}.
\label{estimate of D1W that will appear in damping terms}
\end{equation}
This bound will be used in the estimate of the damping terms.
Now we prove (\ref{refined bootstrap inequality of tilde W when |y|<l, |gamma|<4}) for $\widetilde{W}$. For $|\gamma|=3$, we have
\begin{equation}
\begin{aligned}
\left|\partial^\gamma\widetilde{W}\right|\mathbbm{1}_{|y|\le l}\overset{|\gamma|=3}&{\le}\left|\partial^\gamma\widetilde{W}(0,s)\right|+\left\|D\partial^{\gamma}\widetilde{W}\right\|_{L^\infty(|y|\le l)}|y|\\
&{\le}\varepsilon^{\frac{1}{4}}+\frac{1}{2}\log^4 M\varepsilon^{\frac{1}{10}}|y|;
\end{aligned}
\end{equation}
if $|\gamma|\le2$, we have that
\begin{equation}
\left|\partial^\gamma\widetilde{W}\right|\mathbbm{1}_{|y|\le l}\overset{|\gamma|\le2}{\le}\left\|D\partial^{\gamma}\widetilde{W}(\cdot,s)\right\|_{L^\infty(|\cdot|\le |y|)}|y|.
\end{equation}
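Iterating these two bounds closes (\ref{refined bootstrap inequality of tilde W when |y|<l, |gamma|<4}). For instance, for $|\gamma|=2$, inserting the $|\gamma|=3$ bound into the right-hand side gives
\begin{equation*}
\left|\partial^\gamma\widetilde{W}\right|\mathbbm{1}_{|y|\le l}\le\left(\varepsilon^{\frac{1}{4}}+\frac{1}{2}\log^4 M\varepsilon^{\frac{1}{10}}|y|\right)|y|\le\frac{1}{2}\log^4 M\varepsilon^{\frac{1}{10}}|y|^{2}+\frac{1}{2}M\varepsilon^{\frac{1}{4}}|y|
\end{equation*}
for $M\ge2$, which is the case $|\gamma|=2$ of (\ref{refined bootstrap inequality of tilde W when |y|<l, |gamma|<4}); the cases $|\gamma|=1,0$ follow by one more integration each.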
\subsection{Spatial support of unknowns}
For the support of unknowns, we have the following lemma.
\begin{lemma}\label{spatial support of DU,DS}
$\mathrm{supp}\ (DU,DS)\subset \mathcal{X}(s)$.
\end{lemma}
\begin{proof}
According to the spatial support assumption of $(DW,DZ,DA)$, it suffices to show $\mathrm{supp}_x(D_xN,D_xT)\subset\{|x_1|\le2\varepsilon^{\frac{1}{2}},|x_2|\le2\varepsilon^{\frac{1}{6}}\}$. By the expression of $N,T$, we only need to show that $\mathrm{supp}_x\ f_{x_2}\subset\{|x_1|\le2\varepsilon^{\frac{1}{2}},|x_2|\le2\varepsilon^{\frac{1}{6}}\}$. Note that $f_{x_2}=\tilde f_{\tilde x_2}(1+\frac{\tilde f_{\tilde x_1}}{1-\tilde f_{\tilde x_1}})$ and $\mathrm{supp}_{\tilde x}\tilde f_{\tilde x_2}\subset\{|\tilde x_1|\le\frac{5}{4}\varepsilon^{\frac{1}{2}},|\tilde x_2|\le\frac{5}{4}\varepsilon^{\frac{1}{6}}\}$; choosing $\varepsilon$ small enough in terms of $M$, we obtain the stronger inclusion $\mathrm{supp}_x(D_xN,D_xT)\subset\{|x_1|\le\frac{3}{2}\varepsilon^{\frac{1}{2}},|x_2|\le\frac{3}{2}\varepsilon^{\frac{1}{6}}\}$.
\end{proof}
From (\ref{spatial support of initial data in physical variable}), we know that in the original $\mathrm{x}$ coordinate, we have
\begin{equation}
\lim\limits_{|\mathrm{x}|\rightarrow\infty}u(\mathrm{x},-\varepsilon)=\frac{\kappa_0}{2}e_1,\ \lim\limits_{|\mathrm{x}|\rightarrow\infty}\sigma(\mathrm{x},-\varepsilon)=\frac{\kappa_0}{2}.
\end{equation}
From the finite speed of propagation for the Euler equations, we have that for all $t\in[-\varepsilon,T_*)$,
\begin{equation}
\lim\limits_{|\mathrm{x}|\rightarrow\infty}u(\mathrm{x},t)=\frac{\kappa_0}{2}e_1,\ \lim\limits_{|\mathrm{x}|\rightarrow\infty}\sigma(\mathrm{x},t)=\frac{\kappa_0}{2}.
\end{equation}
Note that the coordinate transformation is determined by the modulation variables, and from bootstrap assumptions we can deduce that
\begin{equation}
y\notin\mathcal{X}(s)\text{ implies that }
\left\{\begin{aligned}
&W(y,s)=W_\infty(s)\\
&Z(y,s)=Z_\infty(s)\\
&A(y,s)=A_\infty(s)\\
&S(y,s)=S_\infty(s)\\
&U(y,s)=U_\infty(s),
\end{aligned}\right.
\end{equation}
where
\begin{equation}
\left\{
\begin{aligned}
W_\infty(s)&:=\left[\frac{\kappa_0}{2}(n_1+1)-\kappa\right]e^{\frac{s}{2}}\\
Z_\infty(s)&:=\frac{\kappa_0}{2}(n_1-1)\\
A_\infty(s)&:=-\frac{\kappa_0}{2}n_2\\
S_\infty(s)&:=\frac{e^{-\frac{s}{2}}W_\infty+\kappa-Z_\infty}{2}=\frac{\kappa_0}{2}\\
U_\infty(s)&:=\frac{e^{-\frac{s}{2}}W_\infty+\kappa+Z_\infty}{2}\tilde{e}_1+A_\infty\tilde{e}_2=\frac{\kappa_0n_1}{2}\tilde{e}_1-\frac{\kappa_0n_2}{2}\tilde{e}_2.
\end{aligned}
\right.
\end{equation}
\subsection{Estimates related to coordinate transformation}
In this subsection we estimate the functions $f$, $J$, $N$, $T$, $Q$, $V$, which depend only on the modulation variables.
\begin{lemma}\label{estimates for functions of coordinate transformation, lemma}
For any multi-index $\gamma\in \mathbb{Z}_{\ge0}^2$, we have
\begin{equation}
\left\{
\begin{aligned}
&|\partial_x^\gamma f|\le C_\gamma M^2\varepsilon^{\frac{4}{3}-\frac{\gamma_1}{2}-\frac{\gamma_2}{6}}\\
&|\partial_x^\gamma(J-1)|\le C_\gamma M^2\varepsilon^{\frac{5}{6}-\frac{\gamma_1}{2}-\frac{\gamma_2}{6}}\\
&|\partial_x^\gamma (N-\tilde e_1)|\le C_\gamma M^2\varepsilon^{\frac{7}{6}-\frac{\gamma_1}{2}-\frac{\gamma_2}{6}}\\
&|\partial_x^\gamma (T-\tilde e_2)|\le C_\gamma M^2\varepsilon^{\frac{7}{6}-\frac{\gamma_1}{2}-\frac{\gamma_2}{6}}\\
&|\partial_x^\gamma (JN-\tilde e_1)|\le C_\gamma M^2\varepsilon^{\frac{5}{6}-\frac{\gamma_1}{2}-\frac{\gamma_2}{6}}\\
&|\partial_x^\gamma\partial_tf|\le C_\gamma M^2\varepsilon^{\frac{1}{3}-\frac{\gamma_1}{2}-\frac{\gamma_2}{6}}\\
&\left|\partial_x^\gamma\frac{\partial_tf}{1+f_{x_1}}\right|\le C_\gamma M^2\varepsilon^{\frac{1}{3}-\frac{\gamma_1}{2}-\frac{\gamma_2}{6}}\\
&\left|\partial_x^\gamma\partial_t N\right|\le C_\gamma M^2\varepsilon^{\frac{1}{6}-\frac{\gamma_1}{2}-\frac{\gamma_2}{6}}\\
&\left|\partial_x^\gamma\partial_t T\right|\le C_\gamma M^2\varepsilon^{\frac{1}{6}-\frac{\gamma_1}{2}-\frac{\gamma_2}{6}}.
\end{aligned}
\right.
\label{estimates of f-dependent functions, inequalities}
\end{equation}
\end{lemma}
\begin{proof}
From the expression of $\tilde f$ and the bootstrap assumption for $\phi$ and $\dot\phi$, it is not hard to see that $|\partial_{\tilde x}^\gamma\tilde f|\le C_\gamma M^2\varepsilon^{\frac{4}{3}-\frac{\gamma_1}{2}-\frac{\gamma_2}{6}}$, $|\partial_{\tilde x}^\gamma\partial_t\tilde f|\le C_\gamma M^2\varepsilon^{\frac{1}{3}-\frac{\gamma_1}{2}-\frac{\gamma_2}{6}}$.
Using the chain rule, one can see that
\begin{equation}
\left\{
\begin{aligned}
&\partial_{x_1}=\frac{\partial_{\tilde x_1}}{1-\tilde f_{\tilde x_1}}\\
&\partial_{x_2}=\frac{\tilde f_{\tilde x_2}}{1-\tilde f_{\tilde x_1}}\partial_{\tilde x_1}+\partial_{\tilde x_2}.
\end{aligned}\right.
\end{equation}
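These identities are what one obtains if, as a sketch consistent with the formula $f_{x_2}=\frac{\tilde f_{\tilde x_2}}{1-\tilde f_{\tilde x_1}}$ used in Lemma \ref{spatial support of DU,DS}, the two coordinate systems are assumed to be related by $x_1=\tilde x_1-\tilde f(\tilde x)$, $x_2=\tilde x_2$:
\begin{equation*}
\begin{bmatrix}dx_1\\dx_2\end{bmatrix}=\begin{bmatrix}1-\tilde f_{\tilde x_1} & -\tilde f_{\tilde x_2}\\ 0 & 1\end{bmatrix}\begin{bmatrix}d\tilde x_1\\d\tilde x_2\end{bmatrix}
\ \Longrightarrow\
\begin{bmatrix}\partial_{x_1}\\ \partial_{x_2}\end{bmatrix}=\begin{bmatrix}\frac{1}{1-\tilde f_{\tilde x_1}} & 0\\[2pt] \frac{\tilde f_{\tilde x_2}}{1-\tilde f_{\tilde x_1}} & 1\end{bmatrix}\begin{bmatrix}\partial_{\tilde x_1}\\ \partial_{\tilde x_2}\end{bmatrix},
\end{equation*}
the second matrix being the transpose of the inverse Jacobian.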
By Faà di Bruno's formula, we have
\begin{equation}
\begin{aligned}
\left|\partial_{\tilde x}^\gamma\left(\frac{1}{1-\tilde f_{\tilde x_1}}\right)\right|\overset{\gamma>0}&{\lesssim}_{|\gamma|}\sum\limits_{\sum\limits_{\beta\le\gamma}\beta m_\beta=\gamma}\left|1-\tilde f_{\tilde x_1}\right|^{-1-\sum\limits_{\beta\le\gamma}m_\beta}\prod_{\beta\le\gamma}\left|\partial_{\tilde x}^\beta\tilde f_{\tilde x_1}\right|^{m_\beta}\\
\overset{\varepsilon\ll1}&{\lesssim}\sum\limits_{\sum\limits_{\beta\le\gamma}\beta m_\beta=\gamma}\left(1-\varepsilon^{\frac{1}{2}}\right)^{-\sum\limits_{\beta\le\gamma}m_\beta}\prod_{\beta\le\gamma}\left(M^2\varepsilon^{\frac{4}{3}-\frac{\beta_1+1}{2}-\frac{\beta_2}{6}}\right)^{m_\beta}\\
&\lesssim\varepsilon^{-\frac{\gamma_1}{2}-\frac{\gamma_2}{6}}\sum\limits_{\sum\limits_{\beta\le\gamma}\beta m_\beta=\gamma}\left(\frac{M^2\varepsilon^{\frac{5}{6}}}{1-\varepsilon^{\frac{1}{2}}}\right)^{\sum\limits_{\beta\le\gamma}m_\beta}\lesssim M^2\varepsilon^{\frac{5}{6}-\frac{\gamma_1}{2}-\frac{\gamma_2}{6}}.
\end{aligned}
\end{equation}
And by Leibniz rule, we have
\begin{equation}
\begin{aligned}
\left|\partial_{\tilde x}^\gamma\left(\frac{\tilde f_{\tilde x_2}}{1-\tilde f_{\tilde x_1}}\right)\right|&\lesssim\sum_{0<\beta\le\gamma}\left|\partial^{\gamma-\beta}\tilde f_{\tilde x_2}\right|\left|\partial_{\tilde x}^\beta\left(\frac{1}{1-\tilde f_{\tilde x_1}}\right)\right|+M^2\varepsilon^{\frac{4}{3}-\frac{\gamma_1}{2}-\frac{\gamma_2+1}{6}}\lesssim M^2\varepsilon^{\frac{7}{6}-\frac{\gamma_1}{2}-\frac{\gamma_2}{6}}.
\end{aligned}
\end{equation}
Note that
\begin{equation}
\begin{aligned}
\partial_{x_2}^k&=\left(\frac{\tilde f_{\tilde x_2}}{1-\tilde f_{\tilde x_1}}\partial_{\tilde x_1}+\partial_{\tilde x_2}\right)^k=\sum_{\substack{\sum\limits_{|\beta|\le k}n_\beta=\gamma_1+\sum\limits_{|\beta|\le k}\beta_1n_\beta\\\gamma_1+\gamma_2+\sum\limits_{|\beta|\le k}n_\beta|\beta|=k}}C(k,\gamma,n_\beta)\prod_{|\beta|\le k}\left(\partial_{\tilde x}^\beta\left(\frac{\tilde f_{\tilde x_2}}{1-\tilde f_{\tilde x_1}}\right)\right)^{n_\beta}\partial_{\tilde x}^\gamma.
\end{aligned}
\end{equation}
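As a concrete instance (writing $g:=\frac{\tilde f_{\tilde x_2}}{1-\tilde f_{\tilde x_1}}$ for brevity), the case $k=2$ of this expansion reads
\begin{equation*}
\partial_{x_2}^2=\left(g\partial_{\tilde x_1}+\partial_{\tilde x_2}\right)^2=g^2\partial_{\tilde x_1}^2+2g\partial_{\tilde x_1}\partial_{\tilde x_2}+\partial_{\tilde x_2}^2+\left(g\partial_{\tilde x_1}g+\partial_{\tilde x_2}g\right)\partial_{\tilde x_1},
\end{equation*}
whose coefficients are exactly products of factors $\partial_{\tilde x}^\beta g$, as in the general formula.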
Thus we have
\begin{equation}
\begin{aligned}
\left|\partial_{\tilde x_1}^j(\partial_{x_2}^kf)\right|&\lesssim\left|\partial_{\tilde x_1}^j\sum_{\substack{\sum\limits_{|\beta|\le k}n_\beta=\gamma_1+\sum\limits_{|\beta|\le k}\beta_1n_\beta\\\gamma_1+\gamma_2+\sum\limits_{|\beta|\le k}n_\beta|\beta|=k}}C(k,\gamma,n_\beta)\prod_{|\beta|\le k}\left(\partial_{\tilde x}^\beta\left(\frac{\tilde f_{\tilde x_2}}{1-\tilde f_{\tilde x_1}}\right)\right)^{n_\beta}\partial_{\tilde x}^\gamma\tilde f\right|\\
&\lesssim_{j,k}\sum_{\substack{\sum\limits_{|\beta|\le k+j}n_\beta+j=\gamma_1+\sum\limits_{|\beta|\le k+j}\beta_1n_\beta\\\gamma_1+\gamma_2+\sum\limits_{|\beta|\le k+j}n_\beta|\beta|=k+j}}\ \prod_{|\beta|\le k+j}\left|\partial_{\tilde x}^\beta\left(\frac{\tilde f_{\tilde x_2}}{1-\tilde f_{\tilde x_1}}\right)\right|^{n_\beta}|\partial_{\tilde x}^\gamma f|\\
&\lesssim\sum_{\substack{\sum\limits_{|\beta|\le k+j}n_\beta+j=\gamma_1+\sum\limits_{|\beta|\le k+j}\beta_1n_\beta\\\gamma_1+\gamma_2+\sum\limits_{|\beta|\le k+j}n_\beta|\beta|=k+j}}\ \prod_{|\beta|\le k+j}\left(M^2\varepsilon^{\frac{7}{6}-\frac{\beta_1}{2}-\frac{\beta_2}{6}}\right)^{n_\beta}M^2\varepsilon^{\frac{4}{3}-\frac{\gamma_1}{2}-\frac{\gamma_2}{6}}\\
&\lesssim M^2\varepsilon^{\frac{4}{3}-\frac{j}{2}-\frac{k}{6}}\sum_{\substack{\sum\limits_{|\beta|\le k+j}n_\beta+j=\gamma_1+\sum\limits_{|\beta|\le k+j}\beta_1n_\beta\\\gamma_1+\gamma_2+\sum\limits_{|\beta|\le k+j}n_\beta|\beta|=k+j}}\left(M^2\varepsilon\right)^{\sum\limits_{|\beta|\le k+j}n_\beta}\lesssim M^2\varepsilon^{\frac{4}{3}-\frac{j}{2}-\frac{k}{6}}.
\end{aligned}
\end{equation}
Finally, we have
\begin{equation}
\begin{aligned}
\left|\partial_x^\gamma f\right|&=\left|\left(\frac{\partial_{\tilde x_1}}{1-\tilde f_{\tilde x_1}}\right)^{\gamma_1}\partial_{x_2}^{\gamma_2}f\right|\\
\overset{\gamma_1\ge1}&{\lesssim}\sum_{j=1}^{\gamma_1}\sum_{\substack{n_1+2n_2+\cdots+\gamma_1n_{\gamma_1}=\gamma_1-j\\ n_0+n_1+\cdots+n_{\gamma_1}=\gamma_1}}\left|\frac{1}{1-\tilde f_{\tilde x_1}}\right|^{n_0}\left|\partial_{\tilde x_1}\left(\frac{1}{1-\tilde f_{\tilde x_1}}\right)\right|^{n_1}\cdots\left|\partial_{\tilde x_1}^{\gamma_1}\left(\frac{1}{1-\tilde f_{\tilde x_1}}\right)\right|^{n_{\gamma_1}}\left|\partial_{\tilde x_1}^j\partial_{x_2}^{\gamma_2}f\right|\\
&\lesssim\sum_{j=1}^{\gamma_1}\sum_{\substack{n_1+2n_2+\cdots+\gamma_1n_{\gamma_1}=\gamma_1-j\\ n_0+n_1+\cdots+n_{\gamma_1}=\gamma_1}}\left(1-\varepsilon^{\frac{1}{2}}\right)^{-n_0}\left(M^2\varepsilon^{\frac{5}{6}-\frac{1}{2}}\right)^{n_1}\cdots\left(M^2\varepsilon^{\frac{5}{6}-\frac{\gamma_1}{2}}\right)^{n_{\gamma_1}}\left|\partial_{\tilde x_1}^j\partial_{x_2}^{\gamma_2}f\right|\\
&\lesssim\sum_{j=1}^{\gamma_1}\varepsilon^{-\frac{\gamma_1-j}{2}}\left|\partial_{\tilde x_1}^j\partial_{x_2}^{\gamma_2}f\right|\sum_{\substack{n_1+2n_2+\cdots+\gamma_1n_{\gamma_1}=\gamma_1-j\\ n_0+n_1+\cdots+n_{\gamma_1}=\gamma_1}}\left(M^2\varepsilon^{\frac{5}{6}}\right)^{\gamma_1-n_0}\\
&\lesssim\varepsilon^{-\frac{\gamma_1}{2}}\sum_{j=1}^{\gamma_1}\varepsilon^{\frac{j}{2}}M^2\varepsilon^{\frac{4}{3}-\frac{j}{2}-\frac{\gamma_2}{6}}\lesssim M^2\varepsilon^{\frac{4}{3}-\frac{\gamma_1}{2}-\frac{\gamma_2}{6}}.
\end{aligned}
\end{equation}
One can check the same estimate holds when $\gamma_1=0$.
Also from Faà di Bruno's formula one can see that for $\alpha\in\mathbb{R}$ and $\gamma>0$, we have $\left|\partial_x^\gamma(1+f_{x_2}^2)^\alpha\right|\lesssim_{\alpha,\gamma} M^4\varepsilon^{\frac{7}{3}-\frac{\gamma_1}{2}-\frac{\gamma_2}{6}}$; combining this estimate with the Leibniz rule gives $|\partial_x^\gamma N|\lesssim M^2\varepsilon^{\frac{7}{6}-\frac{\gamma_1}{2}-\frac{\gamma_2}{6}}$ for $\gamma>0$, while $|N-\tilde e_1|\lesssim M^4\varepsilon^{\frac{7}{6}}$ can be checked separately. The estimates of $N$ imply $|\partial_x^\gamma (T-\tilde e_2)|\lesssim M^2\varepsilon^{\frac{7}{6}-\frac{\gamma_1}{2}-\frac{\gamma_2}{6}}$ for $\gamma\ge0$, since $T=N^\perp$. The estimate of $JN$ is similar.
As for $J=\frac{\sqrt{1+f_{x_2}^2}}{1+f_{x_1}}$, we use the Leibniz rule to deduce that $|\partial^\gamma J|\lesssim M^2\varepsilon^{\frac{5}{6}-\frac{\gamma_1}{2}-\frac{\gamma_2}{6}}$ holds for $\gamma>0$; one can then check that $|J-1|\lesssim M^2\varepsilon^{\frac{5}{6}}$.
The estimates of $\partial_t f$ and $\frac{\partial_t f}{1+f_{x_1}}$ are much the same; they rely on the facts that $|\partial_{\tilde x}^\gamma\partial_t\tilde f|\lesssim M^2\varepsilon^{\frac{1}{3}-\frac{\gamma_1}{2}-\frac{\gamma_2}{6}}$ and $(\partial_t)_{x}f=\frac{(\partial_t)_{\tilde x}\tilde f}{1-\tilde f_{\tilde x_1}}$.
\end{proof}
Here we emphasize that $C_\gamma$ in Lemma \ref{estimates for functions of coordinate transformation, lemma} grows at least exponentially since $f$ is compactly supported and cannot be analytic.
\begin{lemma}
For $\varepsilon\ll1$ small enough and $M\gg1$ large enough we have
\begin{equation}
|Q|\le M^2\varepsilon^{\frac{1}{2}}.
\label{estimate of Q}
\end{equation}
\end{lemma}
\begin{proof}Since we have
\begin{equation}
Q=\dot R^TR=
\begin{bmatrix}
0 &-n_1\dot n_2+n_2\dot n_1\\
-n_2\dot n_1+n_1\dot n_2 &0
\end{bmatrix},
\end{equation}
the claim follows by appealing to $n_1=\sqrt{1-n_2^2}$ and the bootstrap assumptions (\ref{bootstrap assumptions of dynamic variables}) for $n_2$ and $\dot n_2$.
\end{proof}
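To make this explicit under the convention $R=\begin{bmatrix}n_1&n_2\\-n_2&n_1\end{bmatrix}$ (the orientation matching the matrix displayed above), differentiating $n_1^2+n_2^2=1$ gives
\begin{equation*}
\dot n_1=-\frac{n_2\dot n_2}{n_1},\qquad |Q_{12}|=\left|n_1\dot n_2-n_2\dot n_1\right|=\frac{|\dot n_2|}{n_1}=\frac{|\dot n_2|}{\sqrt{1-n_2^2}},
\end{equation*}
and the bootstrap bounds on $n_2$ and $\dot n_2$ then yield $|Q|\le M^2\varepsilon^{\frac{1}{2}}$ for $\varepsilon$ small.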
\begin{lemma}
For $y\in10\mathcal{X}(s)=\{|y_1|\le20\varepsilon^{\frac{1}{2}}e^{\frac{3}{2}s},|y_2|\le20\varepsilon^{\frac{1}{6}}e^{\frac{s}{2}}\}$, we have
\begin{equation}
|V|\lesssim M^{\frac{1}{4}}
\label{estimate of of V}
\end{equation}
and for all $y\in\mathbb{R}^2$, it holds that
\begin{equation}
\left\{
\begin{aligned}
&|\partial_1V|\lesssim M^2\varepsilon^{\frac{1}{2}}e^{-\frac{3}{2}s}\\
&|\partial_2V|\lesssim M^2\varepsilon^{\frac{1}{2}}e^{-\frac{s}{2}}\\
&|\partial_{11}V|\lesssim M^4\varepsilon^{\frac{5}{6}}e^{-3s}\\
&|\partial_{12}V|\lesssim M^4\varepsilon^{\frac{7}{6}}e^{-2s}\\
&|\partial_{22}V|\lesssim M^4\varepsilon^{\frac{3}{2}}e^{-s}\\
&|\partial^\gamma V|\overset{|\gamma|\ge3}{\lesssim} M^4\varepsilon^{\frac{11}{6}}e^{-(\gamma_1+\gamma_2/3)s}\\
&|\partial^\gamma V|\overset{|\gamma|\ge 1}{\lesssim} M^4\varepsilon^{\frac{2}{3}}e^{-(\gamma_1+\gamma_2/3)s}.
\end{aligned}
\right.
\label{estimates of derivatives of V}
\end{equation}
\end{lemma}
\begin{proof}
Note that
\begin{equation}
V(y,s)=\frac{1+\alpha}{2}\left(Q\begin{bmatrix}
y_1e^{-\frac{3}{2}s}+f\\ y_2e^{-\frac{s}{2}}
\end{bmatrix}-R^T\dot\xi\right).
\end{equation}
By $R\in SO(2)$ and (\ref{bootstrap assumptions of dynamic variables})(\ref{estimates of f-dependent functions, inequalities}), we have the above estimates.
\end{proof}
\subsection{Estimates for $U,S$}
\begin{lemma}
For $U\cdot N$ and $S$, we have that
\begin{equation}
|\partial^\gamma(U\cdot N)|+|\partial^\gamma S|\lesssim\left\{
\begin{aligned}
&M^{\frac{1}{4}}\ \ &\gamma=(0,0)\\
&e^{-\frac{s}{2}}\eta^{-\frac{1}{3}}\ \ &\gamma=(1,0)\\
&e^{-\frac{s}{2}}\ \ &\gamma=(0,1)\\
&M^{\frac{1}{3}}e^{-\frac{s}{2}}\eta^{-\frac{1}{3}}\ \ &\gamma=(2,0)\\
&M^{\frac{2}{3}}e^{-\frac{s}{2}}\eta^{-\frac{1}{3}}\ \ &\gamma=(1,1)\\
&Me^{-\frac{s}{2}}\eta^{-\frac{1}{6}}\ \ &\gamma=(0,2).
\end{aligned}
\right.
\label{estimates of U dot N,S}
\end{equation}
\end{lemma}
\begin{proof}
One can express $U\cdot N$, $S$ in terms of $W$, $Z$, $A$ as in (\ref{U,S in terms of W,Z,A}). Then by directly appealing to the bootstrap assumptions we obtain the desired estimates.
\end{proof}
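For the reader's convenience, the relations in (\ref{U,S in terms of W,Z,A}) take the form (consistent with the formulas for $S_\infty$, $U_\infty$ above)
\begin{equation*}
U\cdot N=\frac{e^{-\frac{s}{2}}W+\kappa+Z}{2},\qquad U\cdot T=A,\qquad S=\frac{e^{-\frac{s}{2}}W+\kappa-Z}{2};
\end{equation*}
for example, for $\gamma=(0,1)$ the bootstrap assumptions give
\begin{equation*}
|\partial_2(U\cdot N)|\le\frac{1}{2}\left(e^{-\frac{s}{2}}\|\partial_2W\|_{L^\infty}+|\partial_2Z|\right)\lesssim e^{-\frac{s}{2}},
\end{equation*}
matching the corresponding entry above.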
\begin{lemma}
By taking $\varepsilon$ sufficiently small, we have
\begin{equation}
\left\{
\begin{aligned}
&|U|\lesssim M^{\frac{1}{4}}\\
&|\partial_1U|\le\left(1+\varepsilon^{\frac{3}{4}}\right)e^{-\frac{s}{2}}\\
&|\partial_2U|\le e^{-\frac{s}{2}}\\
&|\partial_1S|\le(1+\varepsilon)e^{-\frac{s}{2}}\\
&|\partial_2S|\le\left(\frac{1}{2}+\varepsilon^{\frac{1}{2}}\right)e^{-\frac{s}{2}}.
\end{aligned}
\right.
\label{estimates of U,S}
\end{equation}
\end{lemma}
\begin{proof}
Express $U$ in terms of $W$, $Z$, $A$, then use bootstrap assumptions and the estimates (\ref{estimates of f-dependent functions, inequalities}) of $N$, $T$.
\end{proof}
\subsection{Transport estimates}
\begin{lemma}
For $\varepsilon\ll1$ and all $y\in10\mathcal{X}(s)$, we have
\begin{equation}
\left\{
\begin{aligned}
&|\partial_1G_A|\lesssim M^2e^{-\frac{5}{6}s},\ \
&|\partial_2G_A|\lesssim M^2\varepsilon^{\frac{1}{6}}\\
&|\partial_{11}G_A|\lesssim M^\frac{1}{2}e^{-s},\ \
&|\partial_{12}G_A|\lesssim Me^{-s},\ \
&|\partial_{22}G_A|\lesssim M^2e^{-\frac{s}{2}}.
\end{aligned}
\right.
\label{estimates of derivatives of G_A}
\end{equation}
\end{lemma}
\begin{proof}
We first deal with $\partial_1G_A$.
Using the definition (\ref{transport terms of W,Z,A}) of $G_A$, the estimates (\ref{estimates of f-dependent functions, inequalities}) for the functions of the coordinate transformation, the estimates (\ref{estimate of of V})(\ref{estimates of derivatives of V}) for $V$, the bootstrap assumptions, and the Leibniz rule, we have that
\begin{equation}
\begin{aligned}
|\partial_1G_A|&\lesssim e^{\frac{s}{2}}\left|\partial_1\frac{\partial_tf}{1+f_{x_1}}\right|+e^{\frac{s}{2}}|\partial_1J|(\kappa_0+|Z|+|V|)+e^{\frac{s}{2}}|\partial_1Z|+e^{\frac{s}{2}}|\partial_1(V\cdot N)|\\
&\lesssim e^{\frac{s}{2}}M^2\varepsilon^{-\frac{1}{6}}e^{-\frac{3}{2}s}+e^{\frac{s}{2}}\varepsilon^{\frac{1}{3}}e^{-\frac{3}{2}s}M^{\frac{1}{4}}+e^{\frac{s}{2}}(M^{\frac{1}{2}}e^{-\frac{3}{2}s}+M^2\varepsilon^{\frac{1}{2}}e^{-\frac{3}{2}s}+M^{2+\frac{1}{4}}\varepsilon e^{-\frac{3}{2}s})\\
&\lesssim M^2\varepsilon^{-\frac{1}{6}}e^{-s}\lesssim M^2e^{-\frac{5}{6}s}.
\end{aligned}
\end{equation}
The other derivatives of $G_A$ are estimated in a similar way.
\end{proof}
\begin{lemma}
For $\varepsilon\ll1$ and all $y\in\mathcal{X}(s)$, we have
\begin{equation}
\left\{
\begin{aligned}
&|g_A|\lesssim M^{\frac{1}{4}}e^{\frac{s}{2}}\\
&|\partial_1g_A|\le3\\
&|\partial_2g_A|\le2\\
&|D^2g_A|\lesssim M\eta^{-\frac{1}{6}}+M^2e^{-\frac{s}{2}}\\
&|\partial_1h_A|\lesssim e^{-s}\\
&|\partial_2h_A|\lesssim e^{-s}.
\end{aligned}
\right.
\label{estimates of transport terms for energy estimate}
\end{equation}
\end{lemma}
\begin{proof}
Use the definition (\ref{transport terms of W,Z,A}) and the estimates (\ref{bootstrap assumptions of W})(\ref{estimates of f-dependent functions, inequalities})(\ref{estimates of derivatives of G_A}), arguing as in the proof of (\ref{estimates of derivatives of G_A}) but with more care, since there is no room for a universal constant here.
\end{proof}
\section{Energy estimate}\label{energy estimate}
To overcome the loss of derivative in the $L^\infty$ estimates of $W$, $Z$, and $A$, we will establish an additional energy estimate to control the $\dot H^k$ ($k\gg1$) norms of $W$, $Z$, and $A$. It is crucial that the proof of the energy estimate only uses the bootstrap assumptions and requires no information on higher order derivatives.
\begin{proposition}[Energy estimate for $W$, $Z$, $A$]
For an integer $k\ge 18$, and a constant $\lambda=\lambda(k)$,
\begin{equation}
\|Z(\cdot,s)\|_{\dot H^k}^2+\|A(\cdot,s)\|_{\dot H^k}^2\le2\lambda^{-k}e^{-s}+M^{4k}e^{-s}(1-\varepsilon^{-1}e^{-s})\lesssim M^{4k}e^{-s},
\end{equation}
\begin{equation}
\|W(\cdot,s)\|_{\dot H^k}^2\le2\lambda^{-k}\varepsilon^{-1}e^{-s}+M^{4k}(1-\varepsilon^{-1}e^{-s}).
\end{equation}
\end{proposition}
We will prove this by using the $\dot H^k$ bound for $(U,S)$, and the fact that the $\dot H^k$ norm of $(W,Z,A)$ can be controlled by the $\dot H^k$ norm of $(U,S)$. More precisely, we have:
\begin{lemma}\label{W,Z,A controlled by U,S}The following inequalities hold:
\begin{equation}
\begin{aligned}
\|W\|_{\dot H^k}\lesssim_k e^{\frac{s}{2}}\left(\|U\|_{\dot H^k}+\|S\|_{\dot H^k}+M^{\frac{9}{4}}\varepsilon^{\frac{3}{2}}e^{-\frac{k-3}{3}s}\right),\\
\|Z\|_{\dot H^k}+\|A\|_{\dot H^k}\lesssim_k \|U\|_{\dot H^k}+\|S\|_{\dot H^k}+M^{\frac{9}{4}}\varepsilon^{\frac{3}{2}}e^{-\frac{k-3}{3}s}.
\end{aligned}
\end{equation}
\end{lemma}
\begin{proof}
We first estimate $\|W\|_{\dot H^k}$. Note that by (\ref{W,Z,A in terms of U,S}) and $\mathrm{supp}(DU,DS)\subset\mathcal{X}(s)$, we have
\begin{equation}
\begin{aligned}
e^{-\frac{s}{2}}\|\partial^\gamma W\|_{L^2(\mathbb{R}^2)}\overset{|\gamma|=k}&{\lesssim_k}\|\partial^\gamma S\|_{L^2}+\sum_{\beta\le\gamma}\|\partial^{\gamma-\beta}U\cdot\partial^\beta N\|_{L^2(\mathcal{X}(s))}\\
&\lesssim \|S\|_{\dot{H}^k}+\|U\|_{L^\infty}\|\partial^\gamma N\|_{L^\infty}|\mathcal{X}(s)|^{\frac{1}{2}}+\|\partial^\gamma U\|_{L^2}+\sum_{0<\beta<\gamma}\|\partial^{\gamma-\beta}U\|_{L^2}\|\partial^\beta N\|_{L^\infty}\\
\overset{\text{Poincaré}}&{\lesssim_k}\|S\|_{\dot H^k}+\|U\|_{\dot H^k}+M^{\frac{1}{4}}M^2\varepsilon^{\frac{7}{6}-\frac{\gamma_1}{2}-\frac{\gamma_2}{6}}e^{-(\frac{3}{2}\gamma_1+\frac{1}{2}\gamma_2)s}\varepsilon^{\frac{1}{3}}e^s\\
&\ \ \ \ \ +\sum_{0<\beta<\gamma}(\varepsilon^{\frac{1}{6}}e^{\frac{s}{2}})^{|\beta|}\|D^kU\|_{L^2}M^2\varepsilon^{\frac{7}{6}-\frac{\beta_1}{2}-\frac{\beta_2}{6}}e^{-(\frac{3}{2}\beta_1+\frac{1}{2}\beta_2)s}\\
&\lesssim\|S\|_{\dot H^k}+\|U\|_{\dot H^k}+M^{\frac{9}{4}}\varepsilon^{\frac{3}{2}}e^{-\frac{|\gamma|-3}{3}s}.
\end{aligned}
\end{equation}
The estimates of $Z$ and $A$ are similar.
\end{proof}
\begin{definition}[Modified $\dot H^k$ norm]We define
\begin{equation}
E_k^2(s):=\sum_{|\gamma|=k}\lambda^{\gamma_2}\left(\|\partial^\gamma U(\cdot,s)\|_{L^2}^2+\|\partial^\gamma S(\cdot,s)\|_{L^2}^2\right),
\end{equation}
where $\lambda\in(0,1)$ is to be specified below. Clearly we have the norm equivalence:
\begin{equation}
\lambda^k\left(\|U\|_{\dot H^k}^2+\|S\|_{\dot H^k}^2\right)\le E_k^2\le\|U\|_{\dot H^k}^2+\|S\|_{\dot H^k}^2.
\end{equation}
\end{definition}
\subsection{Evolution of derivatives of $(U,S)$}
Applying $\partial^\gamma$ to both sides of the $(U,S)$ equation (\ref{equation of U,S}), we see that
\begin{subequations}
\begin{equation}
\begin{aligned}
\partial_s\partial^\gamma U_i&-\beta_\tau e^{-s}Q_{ij}\partial^\gamma U_j+\mathcal{V}_A\cdot\nabla\partial^\gamma U_i+D_\gamma\partial^\gamma U_i+\beta_3\beta_\tau(1+\gamma_1)JN_i\partial^\gamma S\partial_1W\\
&+2\beta_3\beta_\tau S\left(e^{\frac{s}{2}}JN_i\partial_1\partial^\gamma S+e^{-\frac{s}{2}}\delta_{i2}\partial_2\partial^\gamma S\right)=F_{U_i}^{(\gamma)},
\end{aligned}
\label{equation of derivatives of U_i}
\end{equation}
\begin{equation}
\begin{aligned}
\partial_s\partial^\gamma S&+\mathcal{V}_A\cdot\nabla\partial^\gamma S+D_\gamma\partial^\gamma S+\beta_\tau(\beta_1+\beta_3\gamma_1)JN\cdot\partial^\gamma U\partial_1W\\
&+2\beta_3\beta_\tau S\left(e^{\frac{s}{2}}JN\cdot\partial_1\partial^\gamma U+e^{-\frac{s}{2}}\partial_2\partial^\gamma U_2\right)=F_{S}^{(\gamma)}
\end{aligned}
\label{equation of derivatives of S}
\end{equation}
\end{subequations}
where $D_\gamma=\frac{1}{2}|\gamma|+\gamma_1(1+\partial_1g_A)$, and the forcing terms are $F_{U_i}^{(\gamma)}=F_{U_i}^{(\gamma,U)}+F_{U_i}^{(\gamma-1,U)}+F_{U_i}^{(\gamma,S)}+F_{U_i}^{(\gamma-1,S)}$, $F_{S}^{(\gamma)}=F_{S}^{(\gamma,U)}+F_{S}^{(\gamma-1,U)}+F_{S}^{(\gamma,S)}+F_{S}^{(\gamma-1,S)}$. Here
\begin{subequations}
\begin{align}
\begin{split}
F_{U_i}^{(\gamma,U)}=&-2\beta_1\beta_\tau\left(e^{\frac{s}{2}}JN_j\partial^\gamma U_j\partial_1 U_i+e^{-\frac{s}{2}}\partial^\gamma U_2\partial_2 U_i\right)\\
&-\gamma_2\partial_2g_A\partial_1\partial^{\gamma-e_2} U_i-\sum_{\substack{|\beta|=|\gamma|-1\\\beta\le\gamma}}\binom{\gamma}{\beta}\partial^{\gamma-\beta}h_A\partial_2\partial^\beta U_i\\
=&F_{U_i,(1)}^{(\gamma,U)}+F_{U_i,(2)}^{(\gamma,U)}+F_{U_i,(3)}^{(\gamma,U)},
\label{Ui top order forcing terms involving Ui}
\end{split}\\
\begin{split}
F_{U_i}^{(\gamma-1,U)}=&-\sum_{\substack{{1\le|\beta|\le|\gamma|-2}\\{\beta\le\gamma}}}\binom{\gamma}{\beta}\left(\partial^{\gamma-\beta}g_A\partial_1\partial^\beta U_i+\partial^{\gamma-\beta}h_A\partial_2\partial^\beta U_i\right)\\
&-2\beta_1\beta_\tau e^{\frac{s}{2}}[\partial^\gamma,JN]\cdot U\partial_1 U_i-\beta_\tau e^{\frac{s}{2}}\partial^\gamma\left(2\beta_1V\cdot JN-\frac{\partial_tf}{1+f_{x_1}}\right)\partial_1U_i-2\beta_1\beta_\tau e^{-\frac{s}{2}}\partial^\gamma V_2\partial_2U_i\\
=&F_{U_i,(1)}^{(\gamma-1,U)}+F_{U_i,(2)}^{(\gamma-1,U)}+F_{U_i,(3)}^{(\gamma-1,U)}+F_{U_i,(4)}^{(\gamma-1,U)},
\label{Ui lower order forcing terms involving Ui}
\end{split}\\
\begin{split}
F_{U_i}^{(\gamma,S)}=&-2\beta_3\beta_\tau \gamma_2e^{\frac{s}{2}}\partial_2(SJN_i)\partial_1\partial^{\gamma-e_2}S-\beta_3\beta_\tau(1+\gamma_1)e^{\frac{s}{2}}JN_i\partial_1Z\partial^\gamma S\\
&-2\beta_3\beta_\tau e^{-\frac{s}{2}}\delta_{i2}\sum_{\substack{|\beta|=|\gamma|-1\\\beta\le\gamma}}\binom{\gamma}{\beta}\partial^{\gamma-\beta}S\partial_2\partial^\beta S-2\beta_3\beta_\tau\delta_{i2}e^{-\frac{s}{2}}\partial^\gamma S\partial_2S-2\beta_3\beta_\tau\gamma_1e^{\frac{s}{2}}\partial_1(JN_i)S\partial^\gamma S\\
=&F_{U_i,(1)}^{(\gamma,S)}+F_{U_i,(2)}^{(\gamma,S)}+F_{U_i,(3)}^{(\gamma,S)}+F_{U_i,(4)}^{(\gamma,S)}+F_{U_i,(5)}^{(\gamma,S)},
\label{Ui top order forcing terms involving S}
\end{split}\\
\begin{split}
F_{U_i}^{(\gamma-1,S)}=&-2\beta_3\beta_\tau\sum_{\substack{{1\le|\beta|\le|\gamma|-2}\\{\beta\le\gamma}}}\binom{\gamma}{\beta}\left(e^{\frac{s}{2}}\partial^{\gamma-\beta}(SJN_i)\partial_1\partial^\beta S+e^{-\frac{s}{2}}\delta_{i2}\partial^{\gamma-\beta}S\partial_2\partial^\beta S\right)\\
&-2\beta_3\beta_\tau e^{\frac{s}{2}}[\partial^\gamma,JN_i]S\partial_1S\\
=&F_{U_i,(1)}^{(\gamma-1,S)}+F_{U_i,(2)}^{(\gamma-1,S)},
\label{Ui lower order forcing terms involving S}
\end{split}\\
\begin{split}
F_{S}^{(\gamma,S)}=&-2\beta_3\beta_\tau\left(e^{\frac{s}{2}}\partial^\gamma SJN_j\partial_1 U_j+e^{-\frac{s}{2}}\partial^\gamma S\partial_2 U_2\right)\\
&-\gamma_2\partial_2g_A\partial_1\partial^{\gamma-e_2} S-\sum_{\substack{|\beta|=|\gamma|-1\\\beta\le\gamma}}\binom{\gamma}{\beta}\partial^{\gamma-\beta}h_A\partial_2\partial^\beta S,
\end{split}\\
\begin{split}
F_S^{(\gamma-1,S)}=&-\sum_{\substack{{1\le|\beta|\le|\gamma|-2}\\{\beta\le\gamma}}}\binom{\gamma}{\beta}\left(\partial^{\gamma-\beta}g_A\partial_1\partial^\beta S+\partial^{\gamma-\beta}h_A\partial_2\partial^\beta S\right)\\
&-2\beta_3\beta_\tau\sum_{\substack{{1\le|\beta|\le|\gamma|-2}\\{\beta\le\gamma}}}\binom{\gamma}{\beta}\left(e^{\frac{s}{2}}\partial^{\gamma-\beta}(SJN)\cdot\partial_1\partial^\beta U+e^{-\frac{s}{2}}\partial^{\gamma-\beta}S\partial_2\partial^\beta U_2\right)\\
&-2\beta_3\beta_\tau e^{\frac{s}{2}}\partial_1 U_j[\partial^\gamma,JN_j]S-\beta_\tau e^{\frac{s}{2}}\partial^\gamma\left(2\beta_1V\cdot JN-\frac{\partial_tf}{1+f_{x_1}}\right)\partial_1S-2\beta_1\beta_\tau e^{-\frac{s}{2}}\partial^\gamma V_2\partial_2S,
\end{split}\\
\begin{split}
F_S^{(\gamma,U)}=&-2\beta_3\beta_\tau \gamma_2e^{\frac{s}{2}}\partial_2(SJN)\cdot\partial_1\partial^{\gamma-e_2} U+\beta_\tau(\beta_1+\beta_3\gamma_1)e^{\frac{s}{2}}JN\cdot\partial^\gamma U\partial_1Z\\
&-2\beta_3\beta_\tau e^{-\frac{s}{2}}\sum_{\substack{|\beta|=|\gamma|-1\\\beta\le\gamma}}\binom{\gamma}{\beta}\partial^{\gamma-\beta}S\partial_2\partial^\beta U_2-2\beta_1\beta_\tau e^{-\frac{s}{2}}\partial^\gamma U_2\partial_2S-2\beta_3\beta_\tau\gamma_1 e^{\frac{s}{2}}S\partial^\gamma U_j\partial_1(JN_j),
\end{split}\\
\begin{split}
F_S^{(\gamma-1,U)}=&-2\beta_1\beta_\tau e^{\frac{s}{2}}\partial_1S[\partial^\gamma,JN_j]U_j.
\end{split}
\end{align}
\end{subequations}
\subsection{Estimates for forcing terms}
\begin{lemma}
Let $k\gg1$, $\delta\in(0,\frac{1}{32}]$, and $\lambda=\frac{\delta^2}{12k^2}$. Then for $\varepsilon\ll1$ we have
\begin{subequations}
\begin{align}
\begin{split}
2\sum_{|\gamma|=k}\lambda^{\gamma_2}\int_{\mathbb{R}^2}\left|F_{U_i}^{(\gamma)}\partial^\gamma U_i\right|&\le(4+8\delta)E_k^2+e^{-s}M^{4k-4},
\label{Ui forcing estimate}
\end{split}\\
\begin{split}
2\sum_{|\gamma|=k}\lambda^{\gamma_2}\int_{\mathbb{R}^2}\left|F_{S}^{(\gamma)}\partial^\gamma S\right|&\le(4+8\delta)E_k^2+e^{-s}M^{4k-4}.
\label{S forcing estimate}
\end{split}
\end{align}
\end{subequations}
\end{lemma}
\begin{proof}
We begin with (\ref{Ui forcing estimate}).
We first deal with the term $F_{U_i}^{(\gamma,U)}$ involving the top order derivatives of $U$; this term is decomposed as the sum $F_{U_i,(1)}^{(\gamma,U)}+F_{U_i,(2)}^{(\gamma,U)}+F_{U_i,(3)}^{(\gamma,U)}$. From (\ref{bootstrap assumptions of dynamic variables}), $0<\beta_1,\beta_\tau<1$, and (\ref{estimates of f-dependent functions, inequalities}), we have
\begin{equation}
\begin{aligned}
2\sum_{|\gamma|=k}\lambda^{\gamma_2}\int_{\mathbb{R}^2}\left|F_{U_i,(1)}^{(\gamma,U)}\partial^\gamma U_i\right|
\overset{(\ref{Ui top order forcing terms involving Ui})}&{\le}4\beta_1\beta_\tau\sum_{|\gamma|=k}\lambda^{\gamma_2}\left[e^{\frac{s}{2}}(1+\varepsilon^{\frac{3}{4}})\|\partial_1U\|_{L^\infty}+e^{-\frac{s}{2}}\|\partial_2U\|_{L^\infty}\right]\|\partial^\gamma U\|_{L^2}^2\\
&\le(4+\varepsilon^{\frac{1}{2}})E_k^2.
\end{aligned}
\end{equation}
By (\ref{estimates of transport terms for energy estimate}) and Young's inequality, we can see that
\begin{equation}
\begin{aligned}
2\sum_{|\gamma|=k}\lambda^{\gamma_2}\int_{\mathbb{R}^2}\left|F_{U_i,(2)}^{(\gamma,U)}\partial^\gamma U_i\right|
\overset{(\ref{Ui top order forcing terms involving Ui})}&{\le}2\sum_{|\gamma|=k}\lambda^{\gamma_2}\gamma_2\|\partial_2g_A\|_{L^\infty(\mathcal{X}(s))}\|\partial_1\partial^{\gamma-e_2}U_i\|_{L^2}\|\partial^\gamma U_i\|_{L^2}\\
&\le2\sum_{|\gamma|=k}\left(\frac{\gamma_2^2}{\delta}\lambda^{\gamma_2+1}\|\partial^\gamma U\|_{L^2}^2+\mathbbm{1}_{\gamma_2>0}\delta\lambda^{\gamma_2-1}\|\partial_1\partial^{\gamma-e_2}U\|_{L^2}^2\right)\\
&\le\lambda\frac{2k^2}{\delta}E_k^2+2\delta E_k^2
\overset{\lambda=\frac{\delta^2}{12k^2}}{\le}3\delta E_k^2,
\end{aligned}
\end{equation}
and
\begin{equation}
\begin{aligned}
2\sum_{|\gamma|=k}\lambda^{\gamma_2}\int_{\mathbb{R}^2}\left|F_{U_i,(3)}^{(\gamma,U)}\partial^\gamma U_i\right|\overset{(\ref{Ui top order forcing terms involving Ui})}&{\lesssim}\sum_{|\gamma|=k}\int\sum_{\substack{|\beta|=|\gamma|-1\\\beta\le\gamma}}|\partial^{\gamma-\beta}h_A||\partial_2\partial^\beta U||\partial^\gamma U|\\
&\lesssim\varepsilon\sum_{|\gamma|=k}\ \sum_{\substack{|\beta|=|\gamma|-1\\\beta\le\gamma}}\left(\|\partial^\gamma U\|_{L^2}^2+\|\partial_2\partial^\beta U\|_{L^2}^2\right)\le\varepsilon^{\frac{1}{2}}E_k^2.
\end{aligned}
\end{equation}
Combining these three estimates, we have
\begin{equation}
2\sum_{|\gamma|=k}\lambda^{\gamma_2}\int_{\mathbb{R}^2}\left|F_{U_i}^{(\gamma,U)}\partial^\gamma U_i\right|\le(4+3\delta+\varepsilon^{\frac{1}{2}})E_k^2.
\end{equation}
Next we deal with the forcing terms $F_{U_i}^{(\gamma-1,U)}$ involving lower order derivatives of $U$. We decompose its first part as $F_{U_i,(1)}^{(\gamma-1,U)}=I_{i1}+I_{i2}+I_{i3}$ where
\begin{equation}
\begin{aligned}
&I_{i1}=-\sum_{\substack{1\le|\beta|\le|\gamma|-2\\\beta\le\gamma}}\binom{\gamma}{\beta}\partial^{\gamma-\beta}g_A\partial^\beta\partial_1(U\cdot NN_i),\\
&I_{i2}=-\sum_{\substack{1\le|\beta|\le|\gamma|-2\\\beta\le\gamma}}\binom{\gamma}{\beta}\partial^{\gamma-\beta}g_A\partial^\beta\partial_1(AT_i),\\
&I_{i3}=-\sum_{\substack{1\le|\beta|\le|\gamma|-2\\\beta\le\gamma}}\binom{\gamma}{\beta}\partial^{\gamma-\beta}h_A\partial^\beta\partial_2U_i.
\end{aligned}
\end{equation}
Since $D(U\cdot N)$ is supported in $\mathcal{X}(s)$, we introduce a positive cut-off function $\tilde{\theta}\in C_c^\infty(5\mathcal{X}(0))$ such that $\tilde{\theta}\equiv1$ on $\mathcal{X}(0)$. Let $\tilde{\theta}_s(y)=\tilde{\theta}(y_1e^{-\frac{3}{2}s},y_2e^{-\frac{s}{2}})$; then $\tilde{\theta}_s\in C_c^\infty(5\mathcal{X}(s))$, $\tilde\theta_s\equiv1$ on $\mathcal{X}(s)$, and
\begin{equation}
\|\partial^\gamma\tilde\theta_s\|_{L^\infty}\lesssim\varepsilon^{-\frac{\gamma_1}{2}-\frac{\gamma_2}{6}}e^{-\frac{3}{2}\gamma_1s-\frac{\gamma_2}{2}s}\lesssim e^{-\frac{|\gamma|}{3}s}.
\end{equation}
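Indeed, by the chain rule, and assuming (as is standard) that $\tilde\theta$ is chosen adapted to the rectangle $5\mathcal{X}(0)$ so that $\|\partial^\gamma\tilde\theta\|_{L^\infty}\lesssim_\gamma\varepsilon^{-\frac{\gamma_1}{2}-\frac{\gamma_2}{6}}$, we have
\begin{equation*}
\partial^\gamma\tilde\theta_s(y)=e^{-\left(\frac{3}{2}\gamma_1+\frac{1}{2}\gamma_2\right)s}\left(\partial^\gamma\tilde\theta\right)\left(y_1e^{-\frac{3}{2}s},y_2e^{-\frac{s}{2}}\right),
\end{equation*}
and since $\varepsilon^{-1}\le e^{s}$,
\begin{equation*}
\varepsilon^{-\frac{\gamma_1}{2}-\frac{\gamma_2}{6}}e^{-\left(\frac{3}{2}\gamma_1+\frac{1}{2}\gamma_2\right)s}\le e^{\left(\frac{\gamma_1}{2}+\frac{\gamma_2}{6}\right)s}e^{-\left(\frac{3}{2}\gamma_1+\frac{1}{2}\gamma_2\right)s}=e^{-\left(\gamma_1+\frac{\gamma_2}{3}\right)s}\le e^{-\frac{|\gamma|}{3}s}.
\end{equation*}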
By the interpolation inequality (\ref{interpolation of L2 norm of product}), we have
\begin{equation}
\begin{aligned}
\|I_{i1}\|_{L^2(\mathbb{R}^2)}\lesssim&\left\|D^k\left(\tilde{\theta}_sg_A\right)\right\|_{L^2_y(\mathbb{R}^2)}^a\left\|D^2\left(\tilde{\theta}_sg_A\right)\right\|_{L^q(\mathbb{R}^2)}^{1-a}\|D^k(U\cdot NN)\|_{L^2(\mathbb{R}^2)}^b\|D^2(U\cdot NN)\|_{L^q(\mathbb{R}^2)}^{1-b}.
\end{aligned}
\end{equation}
We estimate each factor. We first bound the $D^2g_A$ term:
\begin{equation}
\begin{aligned}
\left\|D^2\left(\tilde{\theta}_sg_A\right)\right\|_{L^q(\mathbb{R}^2)}\overset{(\ref{estimates of transport terms for energy estimate})}&{\lesssim}M^{\frac{1}{4}}e^{\frac{s}{2}}e^{-\frac{2}{3}s}(\varepsilon^{\frac{2}{3}}e^{2s})^{\frac{1}{q}}+e^{-\frac{s}{3}}(\varepsilon^{\frac{2}{3}}e^{2s})^{\frac{1}{q}}+\|M\eta^{-\frac{1}{6}}+M^2e^{-\frac{s}{2}}\|_{L^q(5\mathcal{X}(s))}\\
&\lesssim M\|\eta^{-1}\|_{L^{\frac{q}{6}}(\mathbb{R}^2)}^{\frac{1}{6}}+M^2e^{-\frac{s}{6}}\varepsilon^{\frac{2}{3q}}e^{\frac{2}{q}s}{\lesssim} M.
\end{aligned}
\end{equation}
In the last inequality we require $q\ge12$ and use the fact that $(1+|y_1|^{\alpha_1}+\cdots+|y_d|^{\alpha_d})^{-1}\in L^1(\mathbb{R}^d)$ as long as $\sum\alpha_i^{-1}<1$. From estimates (\ref{estimates of U dot N,S}) of $U\cdot N$ and estimates (\ref{estimates of f-dependent functions, inequalities}) of $N$, we have
\begin{equation}
\|D^2(U\cdot NN)\|_{L^q}\lesssim Me^{-\frac{s}{2}}.
\end{equation}
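To illustrate the integrability criterion invoked above: with the weight $\eta(y)=1+y_1^2+y_2^6$ (an assumption here, but the choice matching the exponents used throughout),
\begin{equation*}
\left\|\eta^{-1}\right\|_{L^{\frac{q}{6}}(\mathbb{R}^2)}^{\frac{q}{6}}=\int_{\mathbb{R}^2}\frac{dy}{\left(1+y_1^2+y_2^6\right)^{\frac{q}{6}}}<\infty\iff\frac{3}{q}+\frac{1}{q}<1\iff q>4,
\end{equation*}
since $\left(1+y_1^2+y_2^6\right)^{\frac{q}{6}}\gtrsim1+|y_1|^{\frac{q}{3}}+|y_2|^{q}$, i.e. $\alpha_1=\frac{q}{3}$, $\alpha_2=q$ in the criterion $\sum_i\alpha_i^{-1}<1$.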
Then, as we did in the proof of Lemma \ref{W,Z,A controlled by U,S}, we have
\begin{equation}
\begin{aligned}
\|D^k(U\cdot JN)\|_{L^2(5\mathcal{X}(s))}
{\lesssim}\|D^kU\|_{L^2(\mathbb{R}^2)}+M^2\varepsilon^{\frac{1}{3}}e^{-\frac{k-3}{3}s},
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
\|D^mg_A\|_{L^2(5\mathcal{X}(s))}&\lesssim e^\frac{s}{2}\left(\|D^m(U\cdot JN)\|_{L^2(5\mathcal{X}(s))}+\|D^m(V\cdot JN)\|_{L^2(5\mathcal{X}(s))}+\left\|D^m(\frac{\partial_tf}{1+f_{x_1}})
\right\|_{L^2(5\mathcal{X}(s))}\right)\\
\overset{m>0}&{\lesssim}e^{\frac{s}{2}}\left(\|D^mU\|_{L^2(\mathbb{R}^2)}+M^2\varepsilon^{\frac{1}{3}}e^{-\frac{m-3}{3}s}\right),
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
\left\|\partial^\gamma\left(\tilde{\theta}_sg_A\right)\right\|_{L^2(\mathbb{R}^2)}&\lesssim_\gamma\varepsilon^{-\frac{\gamma_1}{2}-\frac{\gamma_2}{6}}e^{-\frac{3}{2}\gamma_1s-\frac{\gamma_2}{2}s}\|g_A\|_{L^\infty}|5\mathcal{X}(s)|^{1/2}+\sum_{\beta<\gamma}\varepsilon^{-\frac{\beta_1}{2}-\frac{\beta_2}{6}}e^{-\frac{3}{2}\beta_1s-\frac{\beta_2}{2}s}\|\partial^{\gamma-\beta}g_A\|_{L^2(5\mathcal{X}(s))}\\
&\lesssim e^{\frac{s}{2}}\left(\|D^{|\gamma|}U\|_{L^2(\mathbb{R}^2)}+M^2\varepsilon^{\frac{1}{3}}e^{-\frac{|\gamma|-3}{3}s}\right).
\end{aligned}
\end{equation}
For $k\ge5$, we have $a+b\ge\frac{1}{2}$, $\frac{2-a-b}{1-a-b}\le2k-4$. Hence, by taking $M$ to be large enough in terms of $\lambda$ and $k$, we have
\begin{equation}
\begin{aligned}
2\sum_{|\gamma|=k}\lambda^{\gamma_2}\int\left|I_{i1}\partial^\gamma U_i\right|&\lesssim\sum_{|\gamma|=k}\lambda^{\gamma_2}\|D^kU\|_{L^2}\left[\|D^kU\|_{L^2}^{a+b}+\left(M^2\varepsilon^\frac{1}{3} e^{-\frac{k-3}{3}s}\right)^{a+b}\right]M^{2-a-b}e^{\frac{a+b-1}{2}s}\\
&\lesssim\sum_{|\gamma|=k}\lambda^{\gamma_2}\left(\lambda^{-\frac{k}{2}}E_k\right)^{1+a+b}M^{2-a-b}e^{\frac{a+b-1}{2}s}+\sum_{|\gamma|=k}\lambda^{\gamma_2}M^{2+3a+3b}\varepsilon^{\frac{a+b}{3}}e^{-\frac{a+b}{3}ks+\frac{a+b+1}{2}s}\lambda^{-\frac{k}{2}}E_k\\
\overset{a+b<1}&{\le}2\delta E_k^2+C(a,b,\delta)e^{-s}M^{\frac{2(2-a-b)}{1-a-b}}\lambda^{-\frac{1+a+b}{1-a-b}k}+C(\delta)M^{10}\varepsilon^{\frac{2}{3}(a+b)}\lambda^{-k}e^{-\frac{2}{3}(a+b)ks+(a+b+1)s}\\
&\le2\delta E_k^2+C(a,b,\delta)e^{-s}M^{4k-8}\lambda^{-\frac{1+a+b}{1-a-b}k}\le2\delta E_k^2+e^{-s}M^{4k-6}.
\end{aligned}
\end{equation}
Next, we estimate the $L^2$ norm of $I_{i2}$:
\begin{equation}
\begin{aligned}
\|I_{i2}\|_{L^2}&\lesssim e^{\frac{s}{2}}\sum_{j=1}^{k-2}\|D^{k-j}(U\cdot JN)D^j\partial_1(AT)\|_{L^2}+e^{\frac{s}{2}}\sum_{\substack{1\le|\beta|\le|\gamma|-2\\\beta\le\gamma}}\left(|\partial^{\gamma-\beta}(V\cdot JN)|+\left|\partial^{\gamma-\beta}\frac{\partial_t f}{1+f_{x_1}}\right|\right)\|\partial^\beta\partial_1(AT)\|_{L^2}\\
&=I_{i2,1}+I_{i2,2}.
\end{aligned}
\end{equation}
First, for $I_{i2,1}$, we have that
\begin{equation}
\begin{aligned}
I_{i2,1}\overset{\text{Hölder}}&{\lesssim}e^{\frac{s}{2}}\sum_{j=1}^{k-2}\|D^{k-j-1}D(\tilde{\theta}_sU\cdot JN)\|_{L^{\frac{2(k-1)}{k-1-j}}(\mathbb{R}^2)}\|D^j\partial_1(AT)\|_{L^{\frac{2(k-1)}{j}}}\\
\overset{(\text{\ref{special case 1 of lemma A.1}})}&{\lesssim}e^{\frac{s}{2}}\sum_{j=1}^{k-2}\|D(\tilde{\theta}_sU\cdot JN)\|_{\dot H^{k-1}}^{\frac{k-j-1}{k-1}}\|D(\tilde{\theta}_sU\cdot JN)\|_{L^\infty}^{\frac{j}{k-1}}\|\partial_1(AT)\|_{\dot H^{k-1}}^{\frac{j}{k-1}}\|\partial_1(AT)\|_{L^\infty}^{\frac{k-j-1}{k-1}}\\
&\lesssim e^{\frac{s}{2}}\sum_{j=1}^{k-2}\left(\|D^kU\|_{L^2}+M^2\varepsilon^{\frac{1}{3}} e^{-\frac{k-3}{3}s}\right)^{\frac{k-j-1}{k-1}}(Me^{-\frac{s}{2}})^{\frac{j}{k-1}}\left(\|D^kA\|_{L^2}+M^4\varepsilon e^{-\frac{k-3}{3}s}\right)^{\frac{j}{k-1}}\left(Me^{-\frac{3}{2}s}\right)^{\frac{k-j-1}{k-1}}\\
&\lesssim M^{\frac{1}{k-1}}e^{-\frac{1}{k-1}s}\left(\lambda^{-\frac{k}{2}}E_k+M^2\varepsilon^{\frac{1}{3}} e^{-\frac{k-3}{3}s}\right).
\end{aligned}
\end{equation}
Then
\begin{equation}
\begin{aligned}
I_{i2,2}&\lesssim e^{\frac{s}{2}}\sum_{\substack{1\le|\beta|\le|\gamma|-2\\\beta\le\gamma}}M^2\varepsilon^{\frac{1}{3}-\frac{\gamma_1-\beta_1}{2}-\frac{\gamma_2-\beta_2}{6}}e^{-\frac{3}{2}(\gamma_1-\beta_1)s-\frac{1}{2}(\gamma_2-\beta_2)s}(\varepsilon^{\frac{1}{6}}e^{\frac{s}{2}})^{|\gamma|-|\beta|-1}\|D^k(AT)\|_{L^2}\\
&\lesssim \sum_{\substack{1\le|\beta|\le|\gamma|-2\\\beta\le\gamma}}M^2\varepsilon^{\frac{1}{6}-\frac{\gamma_1-\beta_1}{3}}e^{-(\gamma_1-\beta_1)s}\left(\|D^kA\|_{L^2}+M^\frac{9}{4}\varepsilon^\frac{3}{2}e^{-\frac{k-3}{3}s}\right)\\
&\lesssim M^2\varepsilon^{\frac{1}{6}}\left(\lambda^{-\frac{k}{2}}E_k+M^\frac{9}{4}\varepsilon^\frac{3}{2}e^{-\frac{k-3}{3}s}\right).
\end{aligned}
\end{equation}
Hence we have
\begin{equation}
\begin{aligned}
2\sum_{|\gamma|=k}\lambda^{\gamma_2}\int\left|I_{i2}\partial^\gamma U_i\right|&\overset{k\ge7}{\lesssim}\lambda^{-\frac{k}{2}}E_kM^{\frac{1}{k-1}}e^{-\frac{1}{k-1}s}\left(\lambda^{-\frac{k}{2}}E_k+M^2\varepsilon^\frac{1}{3} e^{-\frac{k-3}{3}s}\right)\\
&\lesssim(M\varepsilon)^{\frac{1}{k-1}}\lambda^{-k}E_k^2+M^{\frac{1}{k-1}}e^{-\frac{1}{k-1}s}M^6\varepsilon^\frac{2}{3}e^{-\frac{2}{3}(k-3)s}\le\varepsilon^{\frac{1}{k}}E_k^2+e^{-s}.
\end{aligned}
\end{equation}
Similarly to the estimate of $I_{i2}$, we can estimate $I_{i3}$:
\begin{equation}
2\sum_{|\gamma|=k}\lambda^{\gamma_2}\int\left|I_{i3}\partial^\gamma U_i\right|\le\varepsilon^{\frac{1}{2}}E_k^2+e^{-s}.
\end{equation}
Summing up these estimates, we obtain
\begin{equation}
2\sum_{|\gamma|=k}\lambda^{\gamma_2}\int\left|F_{U_i,(1)}^{(\gamma-1,U)}\partial^\gamma U_i\right|\le(2\delta+\varepsilon^{\frac{1}{2}}+\varepsilon^{\frac{1}{k}})E_k^2+e^{-s}M^{4k-5}.
\end{equation}
Now we turn to the estimate of $F_{U_i,(2)}^{(\gamma-1,U)}$. Using the same method as in the proof of Lemma \ref{W,Z,A controlled by U,S}, we have
\begin{equation}
\left\|[\partial^\gamma,JN]U\right\|_{L^2}\le\varepsilon^{\frac{1}{2}}\|D^kU\|_{L^2}+\varepsilon e^{-\left(\gamma_1+\frac{\gamma_2}{3}-1\right)s}.
\end{equation}
Thus, if we choose $\varepsilon$ small enough in terms of $\lambda$ and $k$, we have
\begin{equation}
\begin{aligned}
2\sum_{|\gamma|=k}\lambda^{\gamma_2}\int\left|F_{U_i,(2)}^{(\gamma-1,U)}\partial^\gamma U_i\right|&\lesssim\sum_{|\gamma|=k}e^{\frac{s}{2}}\left\|[\partial^\gamma,JN]U\right\|_{L^2}\|\partial^\gamma U\|_{L^2}\|\partial_1U\|_{L^\infty}\\
\overset{(\text{\ref{estimates of U,S}})}&{\lesssim} e^{\frac{s}{2}}\left(\varepsilon^{\frac{1}{2}}\|D^kU\|_{L^2}+\varepsilon e^{-\frac{k-3}{3}s}\right)\|D^kU\|_{L^2}e^{-\frac{s}{2}}\\
&\lesssim \lambda^{-k}\varepsilon^{\frac{1}{2}}E_k^2+\varepsilon\lambda^{-\frac{k}{2}}E_ke^{-\frac{k-3}{3}s}\\
&\lesssim\lambda^{-k}\varepsilon^{\frac{1}{2}}E_k^2+\varepsilon^{\frac{1}{2}}e^{-\frac{2(k-3)}{3}s}\le\varepsilon^{\frac{1}{4}}E_k^2+e^{-s}.
\end{aligned}
\end{equation}
From the estimates (\ref{estimate of of V})(\ref{estimates of derivatives of V}) of $V$ and (\ref{estimates of f-dependent functions, inequalities}) of $J$, $N$, we can see that
\begin{equation}
\left|\partial^\gamma(V\cdot JN)\right|+\left|\partial^\gamma\frac{\partial_tf}{1+f_{x_1}}\right|\lesssim M^2\varepsilon^{\frac{1}{3}}e^{-\left(\gamma_1+\frac{\gamma_2}{3}\right)s}.
\end{equation}
Therefore, we have
\begin{equation}
\begin{aligned}
2\sum_{|\gamma|=k}\lambda^{\gamma_2}\int\left|F_{U_i,(3)}^{(\gamma-1,U)}\partial^\gamma U_i\right|&\lesssim e^{\frac{s}{2}}\sum_{|\gamma|=k}M^2\varepsilon^{\frac{1}{3}}e^{-\left(\gamma_1+\frac{\gamma_2}{3}\right)s}\|\partial_1U\|_{L^\infty}\|\partial^\gamma U\|_{L^2}|\mathcal{X}(s)|^{\frac{1}{2}}\\
&\lesssim M^2\varepsilon^{\frac{2}{3}}e^{-\frac{k-3}{3}s}\|D^kU\|_{L^2}\lesssim\varepsilon^{\frac{2}{3}}\|D^kU\|_{L^2}^2+M^4\varepsilon^{\frac{2}{3}}e^{-\frac{2(k-3)}{3}s}\\
&\le\varepsilon^{\frac{1}{2}}E_k^2+e^{-s}.
\end{aligned}
\end{equation}
The estimate of $F_{U_i,(4)}^{(\gamma-1,U)}$ is much the same; we have
\begin{equation}
2\sum_{|\gamma|=k}\lambda^{\gamma_2}\int\left|F_{U_i,(4)}^{(\gamma-1,U)}\partial^\gamma U_i\right|\le\varepsilon^{\frac{1}{2}}E_k^2+e^{-s}.
\end{equation}
Combining the above estimates, we arrive at
\begin{equation}
2\sum_{|\gamma|=k}\lambda^{\gamma_2}\int\left|F_{U_i}^{(\gamma-1,U)}\partial^\gamma U_i\right|\le2(\delta+\varepsilon^{\frac{1}{4}})E_k^2+e^{-s}M^{4k-4}.
\end{equation}
Now we estimate the terms involving $k$-th order derivatives of $S$.
\begin{equation}
\begin{aligned}
2\sum_{|\gamma|=k}\lambda^{\gamma_2}\int\left|\left(F_{U_i,(2)}^{(\gamma,S)}+F_{U_i,(4)}^{(\gamma,S)}\right)\partial^\gamma U_i\right|&\lesssim\left(e^{\frac{s}{2}}\|\partial_1Z\|_{L^\infty}+e^{-\frac{s}{2}}\|\partial_2S\|_{L^\infty}\right)\lambda^{-k}E_k^2\le\varepsilon^{\frac{1}{2}}E_k^2.
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
2\sum_{|\gamma|=k}\lambda^{\gamma_2}\int\left|F_{U_i,(3)}^{(\gamma,S)}\partial^\gamma U_i\right|&\lesssim\sum_{|\gamma|=k}\lambda^{\gamma_2}e^{-\frac{s}{2}}\sum_{\substack{|\beta|=|\gamma|-1\\\beta\le\gamma}}\|\nabla S\|_{L^\infty}\|\partial_2\partial^\beta S\|_{L^2}\|\partial^\gamma U\|_{L^2}\\
&\lesssim e^{-s}\lambda^{-k}E_k^2\le\varepsilon^{\frac{1}{2}}E_k^2,
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
2\sum_{|\gamma|=k}\lambda^{\gamma_2}\int\left|F_{U_i,(1)}^{(\gamma,S)}\partial^\gamma U_i\right|&\lesssim\sum_{|\gamma|=k}\lambda^{\frac{\gamma_2+1}{2}}\gamma_2\|\partial_2(SJN)\|_{L^\infty}\|\partial^\gamma U\|_{L^2}\|\partial_1\partial^{\gamma-e_2}S\|_{L^2}\lambda^{\frac{\gamma_2-1}{2}}\\
&\lesssim\sum_{|\gamma|=k}e^{-\frac{s}{2}}\left(\lambda^{\gamma_2+1}\|\partial^\gamma U\|_{L^2}^2+\lambda^{\gamma_2-1}\gamma_2^2\|\partial_1\partial^{\gamma-e_2}S\|_{L^2}^2\right)\le\varepsilon^{\frac{1}{4}}E_k^2,
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
2\sum_{|\gamma|=k}\lambda^{\gamma_2}\int\left|F_{U_i,(5)}^{(\gamma,S)}\partial^\gamma U_i\right|&\lesssim e^{\frac{s}{2}}\|\partial_1(JN)\|_{L^\infty}\|S\|_{L^\infty}\lambda^{-k}E_k^2\\
&\lesssim e^{\frac{s}{2}}M^2\varepsilon^{\frac{5}{6}-\frac{1}{2}}e^{-\frac{3}{2}s}M^{\frac{1}{4}}\lambda^{-k}E_k^2\le\varepsilon E_k^2.
\end{aligned}
\end{equation}
Summing up the above inequalities, we get
\begin{equation}
2\sum_{|\gamma|=k}\lambda^{\gamma_2}\int\left|F_{U_i}^{(\gamma,S)}\partial^\gamma U_i\right|\le2\varepsilon^{\frac{1}{4}}E_k^2.
\end{equation}
Now we look at the terms involving lower order derivatives of $S$. We decompose $F_{U_i,(1)}^{(\gamma-1,S)}=I_{i1}+I_{i2}+I_{i3}$ where
\begin{equation}
\begin{aligned}
&I_{i1}=-2\beta_3\beta_\tau\sum_{\substack{{1\le|\beta|\le|\gamma|-2}\\{\beta\le\gamma}}}\binom{\gamma}{\beta}e^{\frac{s}{2}}\partial^{\gamma-\beta}((S-S_\infty)JN_i)\partial_1\partial^\beta S,\\
&I_{i2}=-2\beta_3\beta_\tau\sum_{\substack{{1\le|\beta|\le|\gamma|-2}\\{\beta\le\gamma}}}\binom{\gamma}{\beta}e^{\frac{s}{2}}S_\infty\partial^{\gamma-\beta}(JN_i)\partial_1\partial^\beta S,\\
&I_{i3}=-2\beta_3\beta_\tau\sum_{\substack{{1\le|\beta|\le|\gamma|-2}\\{\beta\le\gamma}}}\binom{\gamma}{\beta}e^{-\frac{s}{2}}\delta_{i2}\partial^{\gamma-\beta}S\partial_2\partial^\beta S.
\end{aligned}
\end{equation}
For the first part $I_{i1}$ we have that
\begin{equation}
\begin{aligned}
2\sum_{|\gamma|=k}\lambda^{\gamma_2}\int\left|I_{i1}\partial^\gamma U_i\right|&\lesssim e^{\frac{s}{2}}\|D^kU\|_{L^2}\sum_{j=1}^{k-2}\left\|D^{k-1-(j-1)}((S-S_\infty)JN)D^{j-1}D^2S\right\|_{L^2}\\
&\lesssim e^{\frac{s}{2}}\|D^kU\|_{L^2}\sum_{j=1}^{k-2}\|D^k((S-S_\infty)JN)\|_{L^2}^a\|D^2((S-S_\infty)JN)\|_{L^q}^{1-a}\|D^kS\|_{L^2}^b\|D^2S\|_{L^q}^{1-b}.
\end{aligned}
\end{equation}
As before, we use the Leibniz rule, the estimates (\ref{estimates of f-dependent functions, inequalities}) of $J$, $N$, and the Poincaré inequality in the $y_2$ direction to deduce that
\begin{equation}
\|D^k((S-S_\infty)JN)\|_{L^2(\mathbb{R}^2)}\lesssim\|D^kS\|_{L^2},\ |D^2(JN)|\lesssim\varepsilon^{\frac{1}{4}}e^{-s},\ \|D^2((S-S_\infty)JN)\|_{L^q(\mathbb{R}^2)}\lesssim Me^{-\frac{s}{2}}.
\end{equation}
In the last inequality we used the fact that $q>4\Rightarrow\|\eta^{-1}\|_{L^{\frac{q}{6}}(\mathbb{R}^2)}<\infty$. Thus we have
\begin{equation}
\begin{aligned}
2\sum_{|\gamma|=k}\lambda^{\gamma_2}\int\left|I_{i1}\partial^\gamma U_i\right|&\lesssim e^{\frac{s}{2}}\|D^kU\|_{L^2}\sum_{j=1}^{k-2}\|D^kS\|_{L^2}^{a+b}\left(Me^{-\frac{s}{2}}\right)^{2-a-b}\\
&\lesssim\sum_{j=1}^{k-2}\lambda^{-\frac{k}{2}(1+a+b)}M^{2-a-b}e^{-\frac{1-a-b}{2}s}E_k^{1+a+b}\\
&\le\sum_{j=1}^{k-2}\left(\delta E_k^2+C(\delta)\lambda^{-\frac{k(1+a+b)}{1-a-b}}M^{\frac{2(2-a-b)}{1-a-b}}e^{-s}\right)\le\delta E_k^2+e^{-s}M^{4k-6}.
\end{aligned}
\end{equation}
$I_{i2}$ is estimated as
\begin{equation}
\begin{aligned}
\|I_{i2}\|_{L^2}&\lesssim\sum_{\substack{{1\le|\beta|\le|\gamma|-2}\\{\beta\le\gamma}}}e^{\frac{s}{2}}M^3\varepsilon^{\frac{5}{6}-\frac{\gamma_1-\beta_1}{2}-\frac{\gamma_2-\beta_2}{6}}e^{-\frac{3}{2}(\gamma_1-\beta_1)s-\frac{1}{2}(\gamma_2-\beta_2)s}\cdot\left(\varepsilon^{\frac{1}{6}}e^{\frac{s}{2}}\right)^{k-1-|\beta|}\|D^kS\|_{L^2}\\
\overset{|\gamma|=k}&{\lesssim}M^3\varepsilon^{\frac{2}{3}}\|D^kS\|_{L^2}.
\end{aligned}
\end{equation}
And $I_{i3}$ is estimated as
\begin{equation}
\begin{aligned}
2\sum_{|\gamma|=k}\lambda^{\gamma_2}\int\left|I_{i3}\partial^\gamma U_i\right|&\lesssim\sum_{|\gamma|=k}e^{-\frac{s}{2}}\|D^kU\|_{L^2}\sum_{j=1}^{k-2}\|S\|_{\dot H^k}^{\frac{k-1-j}{k-1}}\|DS\|_{L^\infty}^{\frac{j}{k-1}}\|S\|_{\dot H^k}^{\frac{j}{k-1}}\|\partial_2S\|_{L^\infty}^{\frac{k-1-j}{k-1}}\\
&\lesssim e^{-\frac{s}{2}}\|U\|_{\dot H^k}\|S\|_{\dot H^k}e^{-\frac{s}{2}}\le\varepsilon^{\frac{1}{2}}E_k^2.
\end{aligned}
\end{equation}
Hence, we have
\begin{equation}
2\sum_{|\gamma|=k}\lambda^{\gamma_2}\int\left|F_{U_i,(1)}^{(\gamma-1,S)}\partial^\gamma U_i\right|\le(\delta+2\varepsilon^{\frac{1}{2}})E_k^2+e^{-s}M^{4k-6}.
\end{equation}
Next, we turn to $F_{U_i,(2)}^{(\gamma-1,S)}$. From Leibniz rule we have
\begin{equation}
\|[\partial^\gamma,JN_i]S\|_{L^2(\mathcal{X}(s))}\lesssim\varepsilon^{\frac{1}{2}}\|D^kS\|_{L^2(\mathbb{R}^2)}+\varepsilon e^{-\left(\gamma_1+\frac{\gamma_2}{3}-1\right)s},
\end{equation}
and
\begin{equation}
\begin{aligned}
2\sum_{|\gamma|=k}\lambda^{\gamma_2}\int\left|F_{U_i,(2)}^{(\gamma-1,S)}\partial^\gamma U_i\right|&\lesssim\sum_{|\gamma|=k}e^{\frac{s}{2}}\left\|[\partial^\gamma,JN_i]S\right\|_{L^2(\mathcal{X}(s))}\|\partial_1S\|_{L^\infty}\|D^kU_i\|_{L^2}\\
&\lesssim e^{\frac{s}{2}}\left(\varepsilon^{\frac{1}{2}}\|D^kS\|_{L^2}+\varepsilon e^{-\left(\gamma_1+\frac{\gamma_2}{3}-1\right)s}\right)e^{-\frac{s}{2}}\|D^kU\|_{L^2}\le\varepsilon^{\frac{1}{4}}E_k^2+e^{-s}.
\end{aligned}
\end{equation}
Thus we have
\begin{equation}
2\sum_{|\gamma|=k}\lambda^{\gamma_2}\int_{\mathbb{R}^2}\left|F_{U_i}^{(\gamma-1,S)}\partial^\gamma U_i\right|\le(\delta+2\varepsilon^{\frac{1}{2}}+\varepsilon^{\frac{1}{4}})E_k^2+e^{-s}M^{4k-5}.
\end{equation}
Summing all the estimates together leads us to
\begin{equation}
2\sum_{|\gamma|=k}\lambda^{\gamma_2}\int_{\mathbb{R}^2}\left|F_{U_i}^{(\gamma)}\partial^\gamma U_i\right|\le\left(4+C\varepsilon^{\frac{1}{4}}+6\delta\right)E_k^2+e^{-s}M^{4k-4}.
\end{equation}
The proof of (\ref{S forcing estimate}) is similar.
\end{proof}
\begin{proof}[Proof of $\dot H^k$ estimates of $U$, $S$]
We multiply the equations for $\partial^\gamma U_i$ and $\partial^\gamma S$ by $\partial^\gamma U_i$ and $\partial^\gamma S$ respectively, integrate, and sum over $i$; we arrive at
\begin{subequations}
\begin{align}
\begin{split}
\frac{1}{2}\frac{d}{ds}\|\partial^\gamma U\|_{L^2}^2\le&\frac{1}{2}\int|\partial^\gamma U|^2(\operatorname{div}\mathcal{V}_A-2D_\gamma)+\frac{1}{2}(1+\gamma_1)\beta_3\beta_\tau(1+\varepsilon^{\frac{1}{13}})\left(\|\partial^\gamma S\|_{L^2}^2+\|\partial^\gamma U\|_{L^2}^2\right)\\
&-2\beta_3\beta_\tau\int S\left(e^{\frac{s}{2}}JN_i\partial_1\partial^\gamma S+e^{-\frac{s}{2}}\delta_{i2}\partial_2\partial^\gamma S\right)\partial^\gamma U_i+\int\left|F_{U_i}^{(\gamma)}\partial^\gamma U_i\right|,
\end{split}\\
\begin{split}
\frac{1}{2}\frac{d}{d s}\left\|\partial^{\gamma} S\right\|_{2}^{2}&\le\frac{1}{2}\int\left|\partial^{\gamma}S\right|^{2}(\operatorname{div}\mathcal{V}_{A}-2D_\gamma)+\frac{1}{2}\beta_{\tau}\left(\beta_{1}+\beta_{3}\gamma_{1}\right)\left(1+\varepsilon^{\frac{1}{13}}\right)\left(\left\|\partial^{\gamma}S\right\|_{2}^{2}+\left\|\partial^{\gamma}U\right\|_{2}^{2}\right)\\
&-2 \beta_{3} \beta_{\tau} \int S\left(e^{\frac{s}{2}} \partial_{1} \partial^{\gamma} U_{j}JN_{j}+e^{-\frac{s}{2}} \partial_{2} \partial^{\gamma} U_{2}\right)\partial^{\gamma}S+\int\left|F_{S}^{(\gamma)} \partial^{\gamma} S\right|.
\end{split}
\end{align}
\end{subequations}
Here we used the fact that $|JN\partial_1W|\le|JN||\partial_1W|\le (1+\varepsilon^{\frac{2}{3}})(1+\varepsilon^{\frac{1}{12}})\le1+\varepsilon^{\frac{1}{13}}$. Summing up the above two inequalities and integrating by parts, we get
\begin{equation}
\begin{aligned}
&\frac{d}{ds}\left(\|\partial^\gamma U\|_{L^2}^2+\|\partial^\gamma S\|_{L^2}^2\right)+\int\left(2D_\gamma-\operatorname{div}\mathcal{V}_A-\beta_\tau(1+2\gamma_1\beta_3)(1+\varepsilon^{\frac{1}{13}})\right)\left(|\partial^\gamma U|^2+|\partial^\gamma S|^2\right)\\
&\le2\int\left|F_{U_i}^{(\gamma)}\partial^\gamma U_i\right|+2\int\left|F_{S}^{(\gamma)}\partial^\gamma S\right|+4\beta_3\beta_\tau\int\left[e^{\frac{s}{2}}\partial^\gamma S\,\partial^\gamma U\cdot\partial_1(SJN)+e^{-\frac{s}{2}}\partial^\gamma S\,\partial^\gamma U_2\,\partial_2S\right]\\
&\le2\int\left|F_{U_i}^{(\gamma)}\partial^\gamma U_i\right|+2\int\left|F_{S}^{(\gamma)}\partial^\gamma S\right|+2\beta_3\beta_\tau(1+2\varepsilon^{\frac{1}{2}})\left(\|\partial^\gamma U\|_{L^2}^2+\|\partial^\gamma S\|_{L^2}^2\right).
\end{aligned}
\label{energy estimate partial result}
\end{equation}
In the last inequality we used the fact that $|\partial_1(SJN)|\le(1+\varepsilon^{\frac{1}{2}})e^{-\frac{s}{2}}$ together with the estimate (\ref{estimates of U,S}) of $S$; the first fact can be obtained from (\ref{estimates of U,S}) and the estimates (\ref{estimates of f-dependent functions, inequalities}) of $J$, $N$.
Now we estimate the damping term:
\begin{equation}
\begin{aligned}
&2D_\gamma-\operatorname{div}\mathcal{V}_A-\beta_\tau(1+2\gamma_1\beta_3)(1+\varepsilon^{\frac{1}{13}})-2\beta_3\beta_\tau(1+2\varepsilon^{\frac{1}{2}})\\
&\ge|\gamma|+2\gamma_1\left(1+\beta_1\beta_\tau\partial_1(JW)+\partial_1G_A\right)-2-\beta_1\beta_\tau\partial_1(JW)-\partial_1G_A-\partial_2h_A\\
&\ \ \ \ \ \ \ \ \ \ \ \ -2\beta_3\beta_\tau(1+\varepsilon^{\frac{1}{13}})\gamma_1-\underbrace{\left[\beta_\tau(1+\varepsilon^{\frac{1}{13}})+2\beta_3\beta_\tau(1+2\varepsilon^{\frac{1}{2}})\right]}_{\le3}\\
\overset{(\ref{estimate of D1W that will appear in damping terms})}&{\ge}|\gamma|+2\gamma_1\left(1-\beta_1\beta_\tau-\beta_3\beta_\tau\right)-6-C\varepsilon^{\frac{1}{13}}\ge k-7.
\end{aligned}
\end{equation}
Multiplying (\ref{energy estimate partial result}) by $\lambda^{\gamma_2}$ and summing over $|\gamma|=k$, we have
\begin{equation}
\frac{d}{ds}E_k^2+(k-7)E_k^2\le (8+16\delta)E_k^2+e^{-s}M^{4k-3}.
\end{equation}
Taking $k\ge 18$ we have
\begin{equation}
\frac{d}{ds}E_k^2+2E_k^2\le e^{-s}M^{4k-3},
\end{equation}
which results in
\begin{equation}
E_k^2(s)\le e^{-2(s-s_0)}E_k^2(s_0)+(1-e^{-(s-s_0)})e^{-s}M^{4k-3}.
\end{equation}
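For completeness, we record the elementary integrating-factor computation behind the last step:
\begin{equation}
\frac{d}{ds}\left(e^{2s}E_k^2\right)\le e^{s}M^{4k-3}\ \Longrightarrow\ e^{2s}E_k^2(s)\le e^{2s_0}E_k^2(s_0)+\left(e^{s}-e^{s_0}\right)M^{4k-3},
\end{equation}
and dividing by $e^{2s}$ gives the stated bound, since $e^{-2s}\left(e^{s}-e^{s_0}\right)=e^{-s}\left(1-e^{-(s-s_0)}\right)$.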
By the Leibniz rule, we have
\begin{equation}
\left\{
\begin{aligned}
&\|WN\|_{\dot H^k}\le(1+C\varepsilon^{\frac{1}{2}})\|W\|_{\dot H^k}+CM^2\varepsilon^{\frac{5}{3}}e^{-\left(\frac{k}{3}-\frac{3}{2}\right)s}\\
&\|AT\|_{\dot H^k},\|ZN\|_{\dot H^k}\le(1+C\varepsilon^{\frac{1}{2}})\|A\text{ or }Z\|_{\dot H^k}+CM^3\varepsilon^{\frac{5}{3}}e^{-\frac{k-3}{3}s},
\end{aligned}
\right.
\end{equation}
and
\begin{equation}
\left\{
\begin{aligned}
&\|U\|_{\dot H^k}\le(1+C\varepsilon^{\frac{1}{2}})\left[\frac{1}{2}\left(e^{-\frac{s}{2}}\|W\|_{\dot H^k}+\|Z\|_{\dot H^k}\right)+\|A\|_{\dot H^k}\right]+CM^3\varepsilon^{\frac{5}{3}}e^{-\frac{k-3}{3}s}\\
&\|S\|_{\dot H^k}\le\frac{1}{2}(1+C\varepsilon^{\frac{1}{2}})\left(e^{-\frac{s}{2}}\|W\|_{\dot H^k}+\|Z\|_{\dot H^k}\right)+CM^3\varepsilon^{\frac{5}{3}}e^{-\frac{k-3}{3}s}.
\end{aligned}\right.
\end{equation}
According to the assumption (\ref{initial sobolev norm}) on the $\dot H^k$ norms of $W$, $Z$, $A$, we have
\begin{equation}
E_k^2(s_0)\le (2+C\varepsilon^{\frac{1}{2}})\varepsilon.
\end{equation}
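Since $s_0=-\log\varepsilon$, we have $e^{-2(s-s_0)}=\varepsilon^{-2}e^{-2s}$ and $e^{-(s-s_0)}=\varepsilon^{-1}e^{-s}$, so the two terms in the decay estimate above become
\begin{equation}
e^{-2(s-s_0)}E_k^2(s_0)\le(2+C\varepsilon^{\frac{1}{2}})\varepsilon^{-1}e^{-2s},\qquad \left(1-e^{-(s-s_0)}\right)e^{-s}M^{4k-3}=M^{4k-3}e^{-s}\left(1-\varepsilon^{-1}e^{-s}\right).
\end{equation}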
Thus we finally obtain that
\begin{equation}
\lambda^k\left(\|U\|_{\dot H^k}^2+\|S\|_{\dot H^k}^2\right)\le E_k^2\le(2+C\varepsilon^{\frac{1}{2}})\varepsilon^{-1}e^{-2s}+M^{4k-3}e^{-s}(1-\varepsilon^{-1}e^{-s}).
\end{equation}
This finishes the proof of the energy estimate.
\end{proof}
\subsection{Higher order estimates for $W,Z,A$}
Using the energy estimate, we can further obtain higher order estimates for $W,Z,A$.
\begin{lemma}
For $k\gg1$, we have that
\begin{subequations}
\begin{align}
\begin{split}
&|\partial^\gamma W|\lesssim\left\{
\begin{aligned}
&\eta^{-\frac{1}{6}}e^{\frac{s}{2(k-3)}},\ \ \ \ \ \ \gamma_1=0,\ |\gamma|=3\\
&\eta^{-\frac{1}{3}}e^{\frac{s}{k-3}},\ \ \ \ \ \ \gamma_1>0,\ |\gamma|=3,
\end{aligned}\right.
\label{new estimate of W}
\end{split}\\
\begin{split}
&|\partial^\gamma Z|\lesssim\left\{
\begin{aligned}
&e^{-\left(\frac{3}{2}-\frac{1}{2(k-3)}\right)s},\ \ \ \gamma_1\ge1,\ |\gamma|=3\\
&e^{-\left(1-\frac{|\gamma|-1}{2(k-3)}\right)s},\ \ \ \ \ |\gamma|=3,4,5,
\end{aligned}\right.
\label{new estimate of Z}
\end{split}\\
\begin{split}
&|\partial^\gamma A|\lesssim\left\{
\begin{aligned}
&e^{-\left(\frac{3}{2}-\frac{|\gamma|}{k-2}\right)s},\ \ \ \gamma_1\ge1,\ |\gamma|=2,3\\
&e^{-\left(1-\frac{|\gamma|-1}{2(k-3)}\right)s},\ \ \ \ |\gamma|=3,4,5.
\end{aligned}\right.
\label{new estimate of A}
\end{split}
\end{align}
\end{subequations}
\end{lemma}
\begin{proof}
The proof is similar to the interpolation argument in \cite{buckmaster2022formation}; still, for the reader's convenience, we recap it here.
First we deal with $A$. For $\gamma_1\ge1$, $|\gamma|=2,3$, we have
\begin{equation}
\begin{aligned}
|\partial^\gamma A|&\lesssim\|\partial_1A\|_{\dot H^{k-1}}^{\frac{|\gamma|-1}{k-2}}\|\partial_1A\|_{L^\infty}^{1-\frac{|\gamma|-1}{k-2}}\lesssim(M^{2k}e^{-\frac{s}{2}})^{\frac{|\gamma|-1}{k-2}}(Me^{-\frac{3}{2}s})^{1-\frac{|\gamma|-1}{k-2}}\lesssim M^{2k}e^{-\left(\frac{3}{2}-\frac{|\gamma|-1}{k-2}\right)s}\lesssim e^{-\left(\frac{3}{2}-\frac{|\gamma|}{k-2}\right)s}.
\end{aligned}
\end{equation}
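The interpolation used here, and repeatedly below, is the two-dimensional Gagliardo--Nirenberg-type inequality, which we record informally (suppressing the standard treatment of the domain of integration):
\begin{equation}
\|D^jg\|_{L^\infty}\lesssim\|g\|_{\dot H^{m}}^{\theta}\|g\|_{L^\infty}^{1-\theta},\qquad \theta=\frac{j}{m-1},\qquad 0\le j<m-1;
\end{equation}
above it is applied with $g=\partial_1A$, $m=k-1$ and $j=|\gamma|-1$, so that $\theta=\frac{|\gamma|-1}{k-2}$.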
For $|\gamma|=3,4,5$, we have
\begin{equation}
|\partial^\gamma A|\lesssim\|D^kA\|_{L^2}^{\frac{|\gamma|-2}{k-3}}\|D^2A\|_{L^\infty}^{1-\frac{|\gamma|-2}{k-3}}\lesssim(M^{2k}e^{-\frac{s}{2}})^{\frac{|\gamma|-2}{k-3}}(Me^{-s})^{1-\frac{|\gamma|-2}{k-3}}\lesssim M^{2k}e^{-\left(1-\frac{|\gamma|-2}{2(k-3)}\right)s}\lesssim e^{-\left(1-\frac{|\gamma|-1}{2(k-3)}\right)s}.
\end{equation}
Next we estimate $Z$. For $\gamma_1\ge1$, $|\gamma|=3$, we have
\begin{equation}
|\partial^\gamma Z|\lesssim\|\partial_1\nabla Z\|_{\dot H^{k-2}}^{\frac{1}{k-3}}\|\partial_1\nabla Z\|_{L^\infty}^{1-\frac{1}{k-3}}\lesssim(M^{2k}e^{-\frac{s}{2}})^{\frac{1}{k-3}}(Me^{-\frac{3}{2}s})^{1-\frac{1}{k-3}}\lesssim M^{2k}e^{-\left(\frac{3}{2}-\frac{1}{k-3}\right)s}\lesssim e^{-\left(\frac{3}{2}-\frac{1}{2(k-3)}\right)s}.
\end{equation}
For $|\gamma|=3,4,5$, the estimates for $Z$ are the same as those for $A$.
Now we turn to $W$. Since $|\gamma|=3$, we can split $\gamma$ as $\gamma=\gamma'+\gamma''$, where $|\gamma'|=1$ and $\gamma''_1=\min(\gamma_1,2)$; then $\eta^\mu\partial^\gamma W=\partial^{\gamma'}(\eta^\mu\partial^{\gamma''}W)-\partial^{\gamma'}(\eta^{\mu})\partial^{\gamma''}W=I_1+I_2$. Let
\begin{equation}
\mu=\left\{
\begin{aligned}
&\frac{1}{6},\ \ \ \gamma_1=0\\
&\frac{1}{3},\ \ \ \text{otherwise.}
\end{aligned}\right.
\end{equation}
Note that $|\partial_1(\eta^\mu)|\lesssim\eta^{\mu-\frac{1}{2}}$ and $|\partial_2(\eta^\mu)|\lesssim\eta^{\mu-\frac{1}{6}}$; thus when $\gamma_1=0$ we have $|I_2|\lesssim\eta^{\mu-\frac{1}{6}}|\partial_{22}W|\lesssim M$, and when $\gamma_1>0$ we have $|I_2|\lesssim M\eta^{-\frac{1}{6}}\lesssim M$. By interpolation and the bootstrap assumptions for $W$, we have
\begin{equation}
|I_1|\lesssim\|D(\eta^\mu\partial^{\gamma''}W)\|_{L^\infty}\lesssim\|\eta^\mu\partial^{\gamma''}W\|_{\dot H^{k-2}}^{\frac{1}{k-3}}\|\eta^\mu\partial^{\gamma''}W\|_{L^\infty}^{1-\frac{1}{k-3}}\lesssim\ M\|\eta^\mu\partial^{\gamma''}W\|_{\dot H^{k-2}}^{\frac{1}{k-3}}.
\end{equation}
We estimate the $\dot H^{k-2}$ norm as follows:
\begin{equation}
\begin{aligned}
\|\eta^\mu\partial^{\gamma''}W\|_{\dot H^{k-2}}&\lesssim\sum_{m=0}^{k-2}\|D^m\partial^{\gamma''}WD^{k-2-m}\eta^\mu\|_{L^2}\lesssim\sum_{m=0}^{k-2}\|D^m\partial^{\gamma''}W\|_{L^{\frac{2(k-1)}{m+1}}}\|D^{k-2-m}\eta^\mu\|_{L^{\frac{2(k-1)}{k-2-m}}(\mathcal{X}(s))}\\
&\lesssim\sum_{m=0}^{k-2}\|W\|_{\dot H^k}^{\frac{m+1}{k-1}}\|\nabla W\|_{L^\infty}^{1-\frac{m+1}{k-1}}\|D^{k-2-m}\eta^\mu\|_{L^{\frac{2(k-1)}{k-2-m}}(\mathcal{X}(s))}\\
&\lesssim\sum_{m=0}^{k-2}(M^{2k})^{\frac{m+1}{k-1}}\|D^{k-2-m}\eta^\mu\|_{L^{\frac{2(k-1)}{k-2-m}}(\mathcal{X}(s))}.
\end{aligned}
\end{equation}
A simple calculation yields $|D^k(\eta^\mu)|\lesssim\eta^{\mu-\frac{k}{6}}$, thus we have that
\begin{equation}
\begin{aligned}
\|D^{k-2-m}\eta^\mu\|_{L^{\frac{2(k-1)}{k-2-m}}(\mathcal{X}(s))}&\lesssim\|\eta^{\mu-\frac{k-2-m}{6}}\|_{L^{\frac{2(k-1)}{k-2-m}}(\mathcal{X}(s))}\lesssim\|\eta^\mu\|_{L^\infty(\mathcal{X}(s))}\|\eta^{-\frac{k-2-m}{6}}\|_{L^{\frac{2(k-1)}{k-2-m}}(\mathcal{X}(s))}\\
&\lesssim\varepsilon^{\mu} e^{3\mu s}\times\left\{
\begin{aligned}
&1,\ \ \ \ \ \ \ \ \ &m=k-2\\
&\|\eta^{-1}\|_{L^{\frac{k-1}{3}}(\mathcal{X}(s))},\ \ &m<k-2
\end{aligned}\right.\\
\overset{k>3}&{\lesssim}\varepsilon^{\mu}e^{3\mu s}.
\end{aligned}
\end{equation}
Consequently, we obtain $|I_1|\lesssim M(M^{2k}\varepsilon^\mu e^{3\mu s})^{\frac{1}{k-3}}\lesssim e^{\frac{3\mu s}{k-3}}$, and $|\eta^\mu\partial^\gamma W|\lesssim e^{\frac{3\mu s}{k-3}}+M\lesssim e^{\frac{3\mu s}{k-3}}$.
\end{proof}
\section{Constraints and evolution of modulation variables}
In this section we close the bootstrap argument for the modulation variables $\xi,n,\phi,\tau,\kappa$. The equations for these variables are deduced from the constraints that we impose on $W$.
\subsection{Constraints}
We impose constraints on $W$ and its derivatives up to second order at the origin, i.e.
\begin{equation}
W(0,s)=\ovl{W}(0)=0,\ \ \
\nabla W(0,s)=\nabla\ovl{W}(0)=(-1,0)^T,\ \ \
\nabla^2 W(0,s)=\nabla^2\ovl{W}(0)=
\begin{pmatrix}
0&0\\
0&0
\end{pmatrix}.
\end{equation}
It is possible to impose these constraints. In fact, as long as the initial data $W(y,-\log\varepsilon)$ satisfies them, we can choose the $6$ modulation variables $\xi$ (two components), $n_2$, $\phi$, $\tau$, $\kappa$ (the constraints comprise $1+2+3=6$ scalar conditions, matching this number) in a manner continuous in time, in terms of $w(x,t)$, ensuring that $W(y,s)$ continues to satisfy these constraints.
\subsection{The functions $G_W$, $h_W$, $F_W$ and their derivatives, evaluated at $y=0$}
In a neighborhood of the origin, $\tilde f$ reduces to $\tilde f(\tilde x,t)=\frac{1}{2}\phi\tilde x_2^2$, and as a consequence, in a neighborhood of $0$, $f(x,t)=\frac{1}{2}\phi x_2^2$. Note that any derivative of these functions with respect to $x_1$ or $\tilde{x}_1$ vanishes at the origin, so we can conveniently evaluate the $f$-related functions at the origin:
\begin{subequations}
\label{evaluation of f-related functions at 0}
\begin{align}
\begin{split}
&\tilde f^0=0,\ \ \partial_{\tilde x_2}\tilde f^0=0,\ \ \partial_{\tilde x_2}^2\tilde f^0=0;
\end{split}\\
\begin{split}
&(\partial_t)_{\tilde x}\tilde f^0=0,\ \ \partial_{\tilde x_2}(\partial_t)_{\tilde x}\tilde f^0=0,\ \ \partial_{\tilde x_2}^2(\partial_t)_{\tilde x}\tilde f^0=\dot\phi;
\end{split}\\
\begin{split}
&f^0=0,\ \ \partial_{x_2}f^0=0,\ \ \partial_{x_2}^2f^0=0;
\end{split}\\
\begin{split}
&J^0=1,\ \ \partial_{x_2}J^0=0,\ \ \partial_{x_2}^2J^0=\phi^2,\ \ \partial_{x_2}^3J^0=0;
\end{split}\\
\begin{split}
N^0=(1,0)^T,\ \ \partial_{x_2}N^0=(0,-\phi)^T,\ \ \partial_{x_2}^2N^0=(-\phi^2,0)^T,\ \ \partial_{x_2}^3N^0=(0,2\phi^3)^T;
\end{split}\\
\begin{split}
T^0=(0,1)^T,\ \ \partial_{x_2}T^0=(\phi,0)^T,\ \ \partial_{x_2}^2T^0=(0,\phi^2)^T,\ \ \partial_{x_2}^3T^0=(-2\phi^3,0)^T;
\end{split}\\
\begin{split}
&(\partial_t)_xf^0=0,\ \ \partial_{x_2}(\partial_t)_x f^0=0,\ \ \partial_{x_2}^2(\partial_t)_x f^0=\dot\phi;
\end{split}\\
\begin{split}
&\partial_tJ^0=0,\ \ \partial_{x_2}\partial_tJ^0=0,\ \ \partial_{x_2}^2\partial_tJ^0=2\phi\dot\phi;
\end{split}\\
\begin{split}
\partial_tN^0=(0,0)^T,\ \ \partial_{x_2}\partial_tN^0=(0,-\dot\phi)^T,\ \ \partial_{x_2}^2\partial_tN^0=(-2\phi\dot\phi,0)^T;
\end{split}\\
\begin{split}
\partial_tT^0=(0,0)^T,\ \ \partial_{x_2}\partial_tT^0=(\dot\phi,0)^T,\ \ \partial_{x_2}^2\partial_tT^0=(0,-2\phi\dot\phi)^T.
\end{split}
\end{align}
\end{subequations}
By the definition of $V$, we have
\begin{subequations}
\label{evaluaton of V at 0}
\begin{align}
\begin{split}
V_i^0=-\frac{1+\alpha}{2}R_{ji}\dot\xi_j,
\end{split}\\
\begin{split}
\partial_1V^0=\frac{1+\alpha}{2}e^{-\frac{3}{2}s}(0,Q_{21})^T,\ \ \partial_2V^0=\frac{1+\alpha}{2}e^{-\frac{s}{2}}(Q_{12},0)^T,
\end{split}\\
\begin{split}
\partial_{11}V^0=\frac{1+\alpha}{2}\phi e^{-3s}(0,Q_{21})^T,\ \ \partial_{12}V^0=0,\ \ \partial_{22}V^0=0.
\end{split}
\end{align}
\end{subequations}
From the definition of $G_W$ and (\ref{evaluation of f-related functions at 0})(\ref{evaluaton of V at 0}), we have
\begin{subequations}
\begin{align}
\begin{split}
\frac {1} {\beta_{\tau}}G_W^0 &= e^{\frac s 2 }[\kappa+\beta_2Z^0-(1+\alpha)\beta_1R_{j1}\dot\xi_j],
\label{evaluation of G_W at 0}
\end{split}\\
\begin{split}
\frac {1} {\beta_{\tau}}\partial_1 G_W^0 &=\beta_2e^{\frac {s} {2} }\partial_1 Z^0,
\label{evaluation of D1GW at 0}
\end{split}\\
\begin{split}
\frac {1} {\beta_{\tau}}\partial_2 G_W^0 &=\beta_2e^{\frac {s} {2} }\partial_2 Z^0+(1+\alpha)\beta_1Q_{12}+\beta_1(1+\alpha)\phi R_{j2}\dot\xi_j,
\label{evaluation of D2GW at 0}
\end{split}\\
\begin{split}
\frac {1} {\beta_{\tau}}\partial_{11} G_W^0 &=\beta_2e^{\frac {s} {2} }\partial_{11} Z^0,
\label{evaluation of D11GW at 0}
\end{split}\\
\begin{split}
\frac {1} {\beta_{\tau}}\partial_{12} G_W^0 &=\beta_2e^{\frac {s} {2} }\partial_{12} Z^0-(1+\alpha)\beta_1 e^{-\frac 3 2 s}\phi Q_{21},
\label{evaluation of D12GW at 0}
\end{split}\\
\begin{split}
\frac {1} {\beta_{\tau}}\partial_{22} G_W^0 &= -\dot\phi e^{-\frac {s} {2}}+\phi^2e^{-s}\frac {G_W^0}{\beta_{\tau}}+e^{-\frac {s} {2} }\beta_2\partial_{22}Z^0-(1+\alpha)\beta_1 \phi^2e^{-\frac {s} {2}} R_{j1}\dot \xi_j.
\label{evaluation of D22GW at 0}
\end{split}
\end{align}
\end{subequations}
Similarly for $h_W$, we have
\begin{equation}
\frac 1 {\beta_{\tau}}h_W^0 =\beta_1 e ^{-\frac s 2} \left(2A^0-(1+\alpha)R_{j2}\dot\xi_j\right)
\label{evaluation of h_W at 0}.
\end{equation}
And for the forcing terms, we insert the above evaluations into the definition (\ref{forcing terms of W,Z,A}) and obtain
\begin{subequations}
\begin{align}
\begin{split}
F_W^0 =&-\beta_3\beta_\tau(\kappa-Z^0)\partial_2A^0+\beta_\tau e^{-\frac{s}{2}}Q_{12}A^0\\
& -2\phi \beta_1 \beta_{\tau} e ^{-\frac s 2}\left(-\frac {1+\alpha} 2 R_{j2}\dot\xi_j+ A^0\right)A^0+\frac 1 2 \phi \beta_3 \beta_{\tau}e^{-\frac s 2} (\kappa+Z^0)(Z^0- \kappa),
\label{evaluation of FW at 0}
\end{split}\\
\begin{split}
\partial_1 F_W^0 =& \beta_3\beta_{\tau}\partial_2 A^0(\partial_1Z^0+ e^{-\frac s 2})-\beta_3 \beta_{\tau} \partial_{12} A^0(\kappa - Z^0)+\beta_{\tau} e^{-\frac s 2 } Q_{12}\partial_1 A^0 \\
&-\phi \partial_1 A^0 h_W^0 - \phi \beta_1 \beta_{\tau} e^{-\frac s 2} A^0\left((1+\alpha) Q_{21} e^{-\frac 3 2 s}+ 2\partial_1 A^0\right) \\
&-\frac 1 2 \phi \beta_3 \beta_{\tau} e ^{-\frac s 2}(e^{-\frac s 2 }+\partial_1 Z^0)(\kappa +Z^0 )+ \frac 1 2 \phi \beta_3 \beta_{\tau}e ^{-\frac s 2}(\kappa -Z^0 )(\partial_1 Z^0- e^{-\frac s 2}),
\label{evaluation of D1FW at 0}
\end{split}\\
\begin{split}
\partial_2 F_W^0 =&-\beta_3 \beta_{\tau} (\kappa - Z^0) \partial_{22} A^0+ \beta_3 \beta_{\tau}\partial_2 Z^0 \partial_2 A^0 -\dot \phi \beta_{\tau} e^{-s} A^0+ \beta_{\tau} e^{-\frac s 2} Q_{12} \partial_2 A^0\\
&-\phi \beta_3 \beta_{\tau} e^{-\frac s 2} \partial_2 Z^0 Z^0+ \phi^2 \beta_3 \beta_{\tau} e^{-s} A^0 (\kappa - Z^0)\\ &-\phi \beta_1 \beta_{\tau} e^{-\frac s 2} A^0\left(2\partial_2 A^0 -\phi e^{-\frac s 2}(\kappa+ Z^0)\right)-\phi \partial_2 A^0 h_W^0,
\label{evaluation of D2FW at 0}
\end{split}\\
\begin{split}
\partial_{11} F_W^0 =&2\beta_3 \beta_{\tau} (e^{-\frac s 2}+\partial_{1} Z^0) \partial_{12} A^0- \beta_3 \beta_{\tau}(\kappa - Z^0) \partial_{112} A^0 + \beta_{\tau} e^{-\frac s 2}Q_{12} \partial_{11} A^0\\
&-2\phi \beta_1 \beta_{\tau} e^{-\frac s 2} \partial_{11} A^0\left(2A^0-\frac {1+\alpha} {2} R_{j2}\dot \xi_j\right)-4\phi \beta_1 \beta_{\tau} e^{-\frac s 2} \partial_{1} A^0\left(\frac {1+\alpha} {2} Q_{21} e^{-\frac 3 2 s} +\partial_1 A^0\right)\\
&- \phi \beta_3 \beta_{\tau} e^{-\frac s 2}\left({(\partial_1 Z^0)}^2 -e^{-s} +Z^0\partial_{11} Z^0\right)+ \beta_3 \beta_{\tau} \partial_{11} Z^0 \partial_2 A^0,
\label{evaluation of D11FW at 0}
\end{split}\\
\begin{split}
\partial_{12} F_W^0 =&-2\beta_3 \beta_{\tau}(\kappa - Z^0) \partial_{122} A^0+ \beta_3 \beta_{\tau}\partial_{12} Z^0 \partial_{2} A^0 + \beta_3 \beta_{\tau}\partial_{2} Z^0 \partial_{12} A^0\\
&+\beta_3\beta_{\tau} (e^{-\frac s 2}+ \partial_{1}Z^0)\partial_{22}A^0-\beta_{\tau} \dot\phi e^{-s} \partial_{1} A^0+ \beta_{\tau} e^{-\frac s 2} Q_{12} \partial_{12}A^0 \\
&- \phi \beta_3 \beta_{\tau} e^{-\frac s 2}\left(\partial_{12} Z^0Z^0 +\partial_{1}Z^0\partial_{2} Z^0\right)+ \phi^2 \beta_3 \beta_{\tau} e^{-s}\left((\kappa - Z^0) \partial_{1} A^0-(e^{-\frac s 2}+ \partial_{1}Z^0)A^0\right)\\
&- 2\phi \beta_1 \beta_{\tau} e^{-\frac s 2}\left[\partial_{1}A^0\partial_{2} A^0+\left(\frac {1+\alpha}{2} Q_{21} e^{-\frac 3 2 s}+\partial_1 A^0\right)\partial_2 A^0 +A^0\partial_{12} A^0\right]\\
&-\phi \partial_{12} A^0 h_W^0+ \phi^2 \beta_1 \beta_{\tau} e^{-s}\left[\partial_1 A^0 (\kappa + Z^0)+ A^0(\partial_1 Z^0- e^{-\frac s 2})\right],
\label{evaluation of D12FW at 0}
\end{split}\\
\begin{split}
\partial_{22} F_W^0 =&\beta_3 \beta_{\tau}\left[\partial_{22} Z^0 \partial_2 A^0- (\kappa - Z^0) \partial_{222} A^0+\partial_2 Z^0 \partial_{22} A^0\right]+ \phi^2 \beta_3 \beta_{\tau}e^{-s}(\kappa - Z^0) \partial_{2} A^0\\
&-2\dot\phi\beta_{\tau} e^{-s} \partial_{2} A^0-\phi \beta_3 \beta_{\tau} e^{-\frac s 2} \partial_{2}Z^0 \partial_{2}Z^0+ \beta_{\tau} e^{-\frac s 2}\partial_{22} A^0 Q_{12}\\
&+2 {\phi}^2 \beta_3 \beta_{\tau} e^{-s}\left[(\kappa - Z^0) \partial_{2} A^0 -A^0\partial_{2}Z^0\right]- \phi^3 \beta_3 \beta_{\tau} e^{-\frac 3 2 s}(\kappa - Z^0) (\kappa + Z^0)\\
&- 2\phi \beta_1 \beta_{\tau} e^{-\frac s 2}[2\partial_{2}A^0\partial_{2} A^0+A^0\partial_{22} A^0]+ 2\phi^3\beta_1 \beta_{\tau} e^{-\frac 3 2 s} {(A^0)}^2\\
&-2\phi^2 \beta_1 \beta_{\tau} e^{-s} \partial_2{[A(U\cdot N)]}^0 -\phi h_W^0 \partial_{22} A^0 +\phi^3 e^{-s} h_W^0 A^0\\
&-(1+\alpha) \phi^2 \beta_1 \beta_{\tau} e^{-\frac 3 2 s} Q_{21} A^0 - \phi \beta_3 \beta_{\tau} e^{-\frac s 2 }Z^0 \partial_{22} Z^0.
\label{evaluation of D22FW at 0}
\end{split}
\end{align}
\end{subequations}
Also note that if $|\gamma|=1,2$, we have
\begin{equation}
F_W^{(\gamma),0}=\partial^\gamma F_W^0+\partial^\gamma G_W^0.
\end{equation}
\subsection{Evolution of modulation variables}
Setting $y=0$ in the equation of $W$, we can see that
\begin{equation}
\dot{\kappa}=\frac{1}{\beta_\tau}e^{\frac{s}{2}}(F_W^0+G_W^0).
\label{evolution of kappa}
\end{equation}
Setting $y=0$ in the equation of $\partial_1W$, we have
\begin{equation}
\dot\tau=\frac{1}{\beta_\tau}(\partial_1F_W^0+\partial_1G_W^0).
\label{evolution of tau}
\end{equation}
Setting $y=0$ in the equation of $\partial_2W$, we have
\begin{equation}
0=\partial_2F_W^0+\partial_2G_W^0.
\end{equation}
Combining this with $(\ref{evaluation of D2GW at 0})$, we obtain
\begin{equation}
Q_{12}=-\frac{1}{\beta_1\beta_\tau(1+\alpha)}\left(\partial_2F_W^0+\beta_2\beta_\tau e^{\frac{s}{2}}\partial_2Z^0+\beta_1\beta_\tau(1+\alpha)e^{\frac{s}{2}}\phi R_{j2}\dot\xi_{j}\right).
\label{evolution of Q12}
\end{equation}
Setting $y=0$ in the equation of $\partial_{11}W$ and $\partial_{12}W$, we have
\begin{equation}
\begin{pmatrix}
\partial_{111}W^0&\partial_{112}W^0\\
\partial_{112}W^0&\partial_{122}W^0
\end{pmatrix}
\begin{pmatrix}
G_W^0\\
h_W^0
\end{pmatrix}=
\begin{pmatrix}
\partial_{11}F_W^0+\partial_{11}G_W^0\\
\partial_{12}F_W^0+\partial_{12}G_W^0
\end{pmatrix}.
\end{equation}
Denote the matrix $\partial_1\nabla^2W^0$ by $H^0(s)$; then we have
\begin{equation}
|G_W^0|+|h_W^0|\lesssim\left|(H^0)^{-1}\right|\left(|\partial_1\nabla F_W^0|+|\partial_1\nabla G_W^0|\right),
\label{GW0 and hW0 controlled by D1DFW0}
\end{equation}
which shall be used to establish an upper bound for $|G_W^0|$ and $|h_W^0|$. Since $R\in SO(2)$, we have
\begin{equation}
\dot\xi_j=R_{ji}R_{ki}\dot\xi_k=R_{j1}R_{k1}\dot\xi_k+R_{j2}R_{k2}\dot\xi_k.
\end{equation}
Combining this with (\ref{evaluation of G_W at 0})(\ref{evaluation of h_W at 0}), we have
\begin{equation}
\dot\xi_j=\frac{R_{j1}}{(1+\alpha)\beta_1}\left(\kappa+\beta_2Z^0-\frac{1}{\beta_\tau}e^{-\frac{s}{2}}G_W^0\right)+\frac{R_{j2}}{1+\alpha}\left(2A^0-\frac{e^{\frac{s}{2}}}{\beta_1\beta_\tau}h_W^0\right).
\label{evolution of xi}
\end{equation}
Setting $y=0$ in the equation of $\partial_{22}W$, we have
\begin{equation}
G_W^0\partial_{122}W^0+h_W^0\partial_{222}W^0=\partial_{22}F_W^0+\partial_{22}G_W^0.
\end{equation}
Then from (\ref{evaluation of D22GW at 0}), we have
\begin{equation}
\begin{aligned}
\dot\phi=&\frac{e^{\frac{s}{2}}}{\beta_\tau}\left(\partial_{122}W^0G_W^0+\partial_{222}W^0h_W^0-\partial_{22}F_W^0\right)+\beta_2e^s\partial_{22}Z^0+\phi^2\left(\kappa+\beta_2Z^0-\frac{e^{-\frac{s}{2}}}{\beta_\tau}G_W^0\right)+\frac{\phi^2}{\beta_\tau}e^{-\frac{s}{2}}G_W^0.
\end{aligned}
\label{evolution of phi}
\end{equation}
\section{Closure of bootstrap argument for the modulation variables}
From (\ref{evaluation of ovl W at 0})(\ref{bootstrap assumptions of tilde W when y=0, |gamma|=3}), we can see that
\begin{equation}
H^0:=\partial_1\nabla^2W^0=\partial_1\nabla^2\ovl{W}^0+\partial_1\nabla^2\widetilde W^0=\mathrm{diag}(6,2)+O(\varepsilon^{\frac{1}{4}}).
\end{equation}
As a consequence, we have
\begin{equation}
\left|(H^0)^{-1}\right|\le1.
\end{equation}
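Indeed, writing $H^0=D+E$ with $D=\mathrm{diag}(6,2)$ and $|E|=O(\varepsilon^{\frac{1}{4}})$, a Neumann series argument gives
\begin{equation}
\left|(H^0)^{-1}\right|=\left|(I+D^{-1}E)^{-1}D^{-1}\right|\le\frac{|D^{-1}|}{1-|D^{-1}||E|}\le\frac{1/2}{1-C\varepsilon^{\frac{1}{4}}}\le1
\end{equation}
for $\varepsilon$ small enough.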
Next we estimate $|\partial_1\nabla F_W^0|$. From (\ref{evaluation of D11FW at 0})(\ref{evaluation of D12FW at 0}), bootstrap assumptions and (\ref{new estimate of A}), we have $|\partial_{11}F_W^0|\lesssim e^{-s}$ and $|\partial_{12}F_W^0|\lesssim e^{-s}+\varepsilon^2|h_W^0|$. Then by invoking (\ref{GW0 and hW0 controlled by D1DFW0}), one can see that
\begin{equation}
|G_W^0|+|h_W^0|\lesssim e^{-s}.
\label{estimate of GW0 and hW0}
\end{equation}
Now we give a new estimate for $V_2=\frac{1+\alpha}{2}\left[Q_{21}\left(y_1e^{-\frac{3}{2}s}+f\right)+\frac{e^{\frac{s}{2}}}{(1+\alpha)\beta_1\beta_\tau}h_W^0+\frac{2}{1+\alpha}A^0\right]$. Recall that in (\ref{estimate of of V}) we already have the bound $|V_2|\lesssim M^{\frac{1}{4}}$, but now, with the help of (\ref{estimate of GW0 and hW0}), one can see that for all $y\in\mathcal{X}(s)$ it holds that
\begin{equation}
|V_2|\lesssim M\varepsilon^{\frac{1}{2}}.
\end{equation}
\subsection{The $\xi$ estimate}
From (\ref{evolution of xi}) we have
\begin{equation}
\begin{aligned}
|\dot\xi_j|\lesssim&\ \kappa_0+M\varepsilon+e^{-\frac{s}{2}}Me^{-s}\le\frac{1}{10}M^{\frac{1}{4}}.
\end{aligned}
\end{equation}
From (\ref{bootstrap time}) and $\xi(-\varepsilon)=0$, we have
\begin{equation}
|\xi_j(t)|\le\int_{-\varepsilon}^t|\dot\xi_j|dt\le\frac{1}{10}M^{\frac{1}{4}}\varepsilon.
\end{equation}
\subsection{The $\kappa$ estimate}
From (\ref{evaluation of FW at 0}) and bootstrap assumptions, we have $|F_W^0|\lesssim \varepsilon^{\frac{1}{4}}e^{-\frac{s}{2}}$, thus according to (\ref{evolution of kappa})(\ref{estimate of GW0 and hW0}), we have that
\begin{equation}
|\dot \kappa|\lesssim e^{\frac{s}{2}}\left(Me^{-s}+\varepsilon^{\frac{1}{4}}e^{-\frac{s}{2}}\right)\le\frac{1}{2}M,
\end{equation}
and
\begin{equation}
|\kappa-\kappa_0|\le\frac{1}{2}M|t+\varepsilon|\lesssim M\varepsilon\le\frac{1}{4}\kappa_0.
\label{closure of kappa bootstrap}
\end{equation}
\subsection{The $\phi$ estimate}
From (\ref{evaluation of D22FW at 0}), bootstrap assumptions and (\ref{new estimate of A}), we have $|\partial_{22}F_W^0|\lesssim e^{-\frac{s}{2}}$, thus via (\ref{evolution of phi}), we obtain
\begin{equation}
\begin{aligned}
|\dot\phi|&\lesssim e^{\frac{s}{2}}\left(\varepsilon^{\frac{1}{4}}Me^{-s}+\varepsilon^{\frac{1}{4}}Me^{-s}+e^{-\frac{s}{2}}\right)+e^sMe^{-s} +M^4\varepsilon^2\left(\kappa_0+M\varepsilon+e^{-\frac{s}{2}}Me^{-s}\right)+M^4\varepsilon^2e^{-\frac{s}{2}}Me^{-s}\\
&\lesssim M\le\frac{1}{10}M^2.
\end{aligned}
\end{equation}
Since $|\phi(-\varepsilon)|=|\phi_0|\le\varepsilon$, we can further obtain that
\begin{equation}
|\phi|\le\varepsilon+|\dot\phi||t+\varepsilon|\le\frac{1}{2}M^2\varepsilon.
\end{equation}
\subsection{The $\tau$ estimate}
Also from (\ref{evaluation of D1FW at 0})(\ref{evaluation of D1GW at 0}) and bootstrap assumptions, we have $|\partial_1F_W^0|\lesssim e^{-s}$ and $|\partial_1G_W^0|\lesssim M^\frac{1}{2}e^{-s}$, thus by (\ref{evolution of tau}), we have
\begin{equation}
|\dot\tau|\lesssim e^{-s}+M^\frac{1}{2}e^{-s}\le\frac{1}{4}Me^{-s}.
\end{equation}
Since $\tau(-\varepsilon)=0$, we get
\begin{equation}
|\tau(t)|\le\int_{-\varepsilon}^t\frac{1}{4}M\varepsilon dt\le\frac{1}{4}M\varepsilon^2.
\end{equation}
\subsection{The $n_2$ estimate}
We first estimate $Q_{12}$. From (\ref{evaluation of D2FW at 0})(\ref{estimate of GW0 and hW0}) and bootstrap assumptions, $|\partial_2F_W^0|\lesssim M\kappa_0e^{-s}$, thus via (\ref{evolution of Q12}), we can bound $Q_{12}$ by
\begin{equation}
|Q_{12}|\lesssim M\kappa_0e^{-s}+e^{\frac{s}{2}}M\varepsilon^{\frac{1}{2}}e^{-\frac{s}{2}}\le2M\varepsilon^{\frac{1}{2}}.
\end{equation}
From the definition of $Q$, we have
\begin{equation}
Q_{12}=-\dot n_2\sqrt{1-n_2^2}-\frac{n_2^2\dot n_2}{\sqrt{1-n_2^2}}.
\end{equation}
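For orientation, we sketch where this comes from: with the parametrization $R=\begin{pmatrix}\sqrt{1-n_2^2}&-n_2\\n_2&\sqrt{1-n_2^2}\end{pmatrix}$ and the convention $Q=R^T\dot R$ (this is the convention consistent with the formula above; we do not redefine $Q$ here),
\begin{equation}
Q_{12}=R_{k1}\dot R_{k2}=\sqrt{1-n_2^2}\,(-\dot n_2)+n_2\frac{d}{dt}\sqrt{1-n_2^2}=-\dot n_2\sqrt{1-n_2^2}-\frac{n_2^2\dot n_2}{\sqrt{1-n_2^2}}.
\end{equation}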
Thus, by the bootstrap assumption on $n_2$, we finally see that
\begin{equation}
|\dot n_2|=|Q_{12}|\left(\sqrt{1-n_2^2}+\frac{n_2^2}{\sqrt{1-n_2^2}}\right)^{-1}\le\left(1+\varepsilon^{\frac{1}{2}}\right)|Q_{12}|\le\frac{1}{2}M^2\varepsilon^{\frac{1}{2}}.
\end{equation}
By $n_2(-\varepsilon)=0$, we improve the assumption on $n_2$ by a factor of $\frac{1}{2}$.
\section{Estimates for transport and forcing terms}
To close the bootstrap argument for the Riemann variables $W,Z,A$, we estimate each term in the transport-type equations they satisfy.
\subsection{Transport estimates}
\begin{lemma}For the transport terms in the equations of $W,Z,A$, we have the following inequalities:
\begin{equation}
|\partial^\gamma G_W|\lesssim\left\{
\begin{aligned}
&Me^{-s}+M^{\frac{1}{2}}e^{-s}|y_1|+M^2\varepsilon^{\frac{1}{2}}|y_2|\lesssim\varepsilon^{\frac{1}{3}}e^{\frac{s}{2}}\ \ &\ \gamma=(0,0)\\
&M^2e^{-\frac{5}{6}s}\ \ &\ \gamma=(1,0)\\
&M^2\varepsilon^{\frac{1}{6}}\ \ &\ \gamma=(0,1)\\
&M^2e^{-\frac{s}{2}}\ \ &|\gamma|=2,
\end{aligned}\right.
\label{estimate for GW}
\end{equation}
\begin{equation}
\left|\partial^\gamma\left(G_A+(1-\beta_1)\kappa_0e^{\frac{s}{2}}\right)\right|+\left|\partial^\gamma\left(G_Z+(1-\beta_2)\kappa_0e^{\frac{s}{2}}\right)\right|\lesssim\left\{
\begin{aligned}
&\varepsilon^{\frac{1}{3}}e^{\frac{s}{2}}\ \ &\gamma=(0,0)\\
&M^2e^{-\frac{5}{6}s}\ \ &\gamma=(1,0)\\
&M^2\varepsilon^{\frac{1}{6}}\ \ &\gamma=(0,1)\\
&M^2e^{-\frac{s}{2}}\ \ &|\gamma|=2,
\end{aligned}\right.
\label{estimates for GZ and GA}
\end{equation}
\begin{equation}
|\partial^\gamma h_W|+|\partial^\gamma h_Z|+|\partial^\gamma h_A|\lesssim\left\{
\begin{aligned}
&M\varepsilon^{\frac{1}{2}}e^{-\frac{s}{2}}\ \ &\gamma=(0,0)\\
&M\varepsilon^{\frac{1}{3}}e^{-s}\eta^{-\frac{1}{3}}\ \ &\gamma=(1,0)\\
&\varepsilon^{\frac{1}{3}}e^{-s}\ \ &\gamma=(0,1)\\
&\varepsilon^{\frac{1}{6}}e^{-s}\eta^{-\frac{1}{6}}\ \ &\gamma=(2,0)\\
&\varepsilon^{\frac{1}{6}}e^{-s}\eta^{-\frac{1}{6}}\ \ &\gamma=(1,1)\\
&e^{-s}\eta^{-\frac{1}{6}}\ \ &\gamma=(0,2).
\end{aligned}
\right.
\label{estimates for h}
\end{equation}
Furthermore, for $|\gamma|=3,4$ we have
\begin{equation}
\left\{
\begin{aligned}
&|\partial^\gamma G_W|\lesssim e^{-\left(\frac{1}{2}-\frac{|\gamma|-1}{2(k-3)}\right)s}\\
&|\partial^\gamma h_W|\lesssim e^{-s}.
\end{aligned}
\right.
\label{higher order estimates for transport terms}
\end{equation}
\end{lemma}
\begin{proof}
For $|\gamma|>0$, from the definition (\ref{transport terms of W,Z,A}) of $G_W$, we have
\begin{equation}
|\partial^\gamma G_W|\lesssim e^{\frac{s}{2}}\left|\partial^\gamma\frac{\partial_t f}{1+f_{x_1}}\right|+e^{\frac{s}{2}}\sum_{\beta\le\gamma}\left|\partial^\beta J\right|\left(\kappa_0\mathbbm{1}_{\beta=\gamma}+|\partial^{\gamma-\beta}Z|+|\partial^{\gamma-\beta}(V\cdot N)|\right).
\end{equation}
Then appealing to bootstrap assumptions and (\ref{estimates of f-dependent functions, inequalities})(\ref{estimate of of V})(\ref{new estimate of A})(\ref{new estimate of Z}), we obtain the desired estimates for $G_W$.
For the case $\gamma=0$, we have that
\begin{equation}
\begin{aligned}
|G_W|&\le\left|\left(G_W+\beta_\tau e^{\frac{s}{2}}\frac{\partial_tf}{1+f_{x_1}}\right)^0\right|+\left\|\partial_1\left(G_W+\beta_\tau e^{\frac{s}{2}}\frac{\partial_tf}{1+f_{x_1}}\right)\right\|_{L^\infty}|y_1|+\left\|\partial_2\left(G_W+\beta_\tau e^{\frac{s}{2}}\frac{\partial_tf}{1+f_{x_1}}\right)\right\|_{L^\infty}|y_2|\\
&\ \ \ \ +\left|\beta_\tau e^{\frac{s}{2}}\frac{\partial_tf}{1+f_{x_1}}\right|\\
&\lesssim|G_W^0|+M^{\frac{1}{2}}\varepsilon^{\frac{1}{2}}e^{-s}|y_1|+M^2\varepsilon^{\frac{2}{3}}e^{\frac{s}{2}}\\
&\lesssim Me^{-s}+M^{\frac{1}{2}}\varepsilon e^{\frac{s}{2}}+M^2\varepsilon^{\frac{2}{3}}e^{\frac{s}{2}}\lesssim \varepsilon^{\frac{1}{3}}e^{\frac{s}{2}}.
\end{aligned}
\end{equation}
Once we have the bounds for $G_W$ and its derivatives, the estimates of $G_Z$ and $G_A$ follow from the identities
\begin{equation}
\begin{aligned}
G_Z+(1-\beta_2)\kappa_0e^{\frac{s}{2}}&=G_W+(1-\beta_2)e^{\frac{s}{2}}\left[(\kappa_0-\kappa)+(1-\beta_\tau J)\kappa+\beta_\tau JZ\right],\\
G_A+(1-\beta_1)e^{\frac{s}{2}}\kappa_0&=G_W+(1-\beta_1)e^{\frac{s}{2}}\left[(\kappa_0-\kappa)+(1-\beta_\tau J)\kappa\right]+(\beta_2-\beta_1)\beta_\tau e^{\frac{s}{2}} JZ.
\end{aligned}
\end{equation}
The estimates of $h_W$, $h_Z$, $h_A$ can be obtained from the definitions of these transport terms, the bootstrap assumptions, and (\ref{estimates of f-dependent functions, inequalities})(\ref{estimate of of V})(\ref{new estimate of A})(\ref{new estimate of Z})(\ref{new estimate of W}).
\end{proof}
\subsection{Forcing estimates}
Now we deal with the forcing terms that appear in the equations of $W,Z,A$.
\begin{lemma}
For derivatives of the forcing terms, we have the following bounds:
\begin{equation}
|\partial^\gamma F_W|+e^{\frac{s}{2}}|\partial^\gamma F_Z|\lesssim\left\{
\begin{aligned}
&e^{-\frac{s}{2}},\ &\gamma=(0,0)\\
&e^{-s}\eta^{-\frac{1}{6}+\frac{2}{3(k-2)}},\ &\gamma=(1,0)\\
&M^2e^{-s},\ &\gamma=(0,1)\\
&e^{-s}\eta^{-\frac{1}{6}+\frac{1}{k-2}},\ &\gamma=(2,0)\\
&e^{-s}\eta^{-\frac{1}{6}+\frac{1}{k-2}},\ &\gamma=(1,1)\\
&M^{\frac{1}{4}} e^{-\left(1-\frac{1}{k-3}\right)s},\ &\gamma=(0,2),\\
\end{aligned}
\right.
\label{estimates for derivatives of FW and FZ}
\end{equation}
\begin{equation}
|\partial^\gamma F_W|\lesssim\left\{
\begin{aligned}
&e^{-\frac{s}{2}},\ &|\gamma|=3\\
&\varepsilon^{\frac{1}{6}},\ \ \ \ \ &|\gamma|=4,|y|\le l,
\end{aligned}
\right.
\label{estimates for higher order derivatives of FW}
\end{equation}
\begin{equation}
|\partial^\gamma F_A|\lesssim\left\{
\begin{aligned}
&M^{\frac{1}{2}}e^{-s},\ &\gamma=(0,0)\\
&M^{\frac{1}{4}}e^{-s},\ &\gamma=(0,1)\\
&M^{\frac{1}{4}} e^{-\left(1-\frac{1}{k-3}\right)s}\eta^{-\frac{1}{6}},\ &\gamma=(0,2),\\
\end{aligned}
\right.
\label{estimates for derivatives of FA}
\end{equation}
\begin{equation}
|\partial^\gamma \widetilde F_W|\lesssim\left\{
\begin{aligned}
&M\varepsilon^{\frac{1}{6}}\eta^{-\frac{1}{6}},\ &\gamma=(0,0),\ |y|\le L\\
&\varepsilon^{\frac{1}{6}}\eta^{-\frac{1}{2}+\frac{2}{3(k-2)}},\ &\gamma=(1,0),\ |y|\le L\\
&M^2\varepsilon^{\frac{1}{6}}\eta^{-\frac{1}{3}},\ &\gamma=(0,1),\ |y|\le L\\
&\varepsilon^{\frac{1}{7}},\ &|\gamma|\le4,\ |y|\le l,\\
\end{aligned}
\right.
\label{estimates for derivatives of tilde FW}
\end{equation}
and
\begin{equation}
\left|(\partial^\gamma \widetilde{F}_W)^0\right|\overset{|\gamma|=3}{\lesssim} e^{-\left(\frac{1}{2}-\frac{1}{k-3}\right)s}.
\label{estimates for derivatives of tilde FW at origin}
\end{equation}
\end{lemma}
\begin{proof}
The proof of (\ref{estimates for derivatives of FW and FZ})(\ref{estimates for higher order derivatives of FW})(\ref{estimates for derivatives of FA})(\ref{estimates for derivatives of tilde FW}) amounts to taking derivatives of the forcing terms and then using the bootstrap assumptions and the estimates (\ref{estimates of f-dependent functions, inequalities})(\ref{estimate of Q})(\ref{estimate of of V})(\ref{estimates of derivatives of V})(\ref{new estimate of W})(\ref{new estimate of Z})(\ref{new estimate of A})(\ref{estimate for GW})(\ref{estimates for GZ and GA})(\ref{estimates for h})(\ref{higher order estimates for transport terms}) to estimate each term therein. Finally we prove (\ref{estimates for derivatives of tilde FW at origin}). Since $\partial^\gamma\ovl{W}^0=0$ when $|\gamma|$ is even, and $\partial_2G_W^0+\partial_2F_W^0=0$, we have
\begin{equation}
\begin{aligned}
|(\partial^\gamma\widetilde{F}_W)^0|&\lesssim e^{-\frac{s}{2}}+|(1-\beta_\tau J)^0|+\sum_{m=1}^3|\nabla^m J^0|+|\nabla G_W^0|+|\nabla^3 G_W^0|+|\nabla h_W^0|+|\nabla^3 h_W^0|\\
&\lesssim Me^{-\frac{s}{2}}+M^2e^{-\frac{5}{6}s}+e^{-\left(\frac{1}{2}-\frac{1}{k-3}\right)s}\lesssim e^{-\left(\frac{1}{2}-\frac{1}{k-3}\right)s}.
\end{aligned}
\end{equation}
\end{proof}
\begin{lemma}For the forcing terms of $\partial^\gamma W,\partial^\gamma Z,\partial^\gamma A$, we have that
\begin{equation}
|F_W^{(\gamma)}|\lesssim\left\{
\begin{aligned}
&e^{-\frac{s}{2}},\ &\gamma=(0,0)\\
&\varepsilon^{\frac{1}{4}}\eta^{-\frac{1}{2}+\frac{2}{3(k-2)}},\ &\gamma=(1,0)\\
&M^2\varepsilon^{\frac{1}{6}}\eta^{-\frac{1}{3}},\ &\gamma=(0,1)\\
&\eta^{-\frac{1}{2}+\frac{1}{k-2}},\ &\gamma=(2,0)\\
&M^{\frac{1}{3}}\eta^{-\frac{1}{3}},\ &\gamma=(1,1)\\
&M^{\frac{2}{3}} \eta^{-\frac{1}{3}+\frac{1}{3(k-3)}},\ &\gamma=(0,2),\\
\end{aligned}
\right.
\label{estimate of forcing terms of W}
\end{equation}
\begin{equation}
|F_Z^{(\gamma)}|\lesssim\left\{
\begin{aligned}
&e^{-s},\ &\gamma=(0,0)\\
&e^{-\frac{3}{2}s}\eta^{-\frac{1}{6}+\frac{2}{3(k-2)}},\ &\gamma=(1,0)\\
&M^2e^{-\frac{3}{2}s},\ &\gamma=(0,1)\\
&e^{-\frac{3}{2}s}\left(1+M\eta^{-\frac{1}{3}}\right),\ &\gamma=(2,0)\\
&e^{-\frac{3}{2}s}\left(M^{\frac{1}{2}}+M^2\eta^{-\frac{1}{3}}\right),\ &\gamma=(1,1)\\
&M^{\frac{1}{4}}e^{-\left(\frac{3}{2}-\frac{1}{k-3}\right)s},\ &\gamma=(0,2),\\
\end{aligned}
\right.
\label{estimate of forcing terms of Z}
\end{equation}
\begin{equation}
|F_A^{(\gamma)}|\lesssim\left\{
\begin{aligned}
&M^{\frac{1}{4}}e^{-s},\ &\gamma=(0,0)\\
&M^{\frac{1}{4}}e^{-s},\ &\gamma=(0,1)\\
&e^{-\left(1-\frac{2}{k-3}\right)s}\eta^{-\frac{1}{6}},\ &\gamma=(0,2),\\
\end{aligned}
\right.
\label{estimate of forcing terms of A}
\end{equation}
\begin{equation}
|\widetilde F_W^{(\gamma)}|\lesssim\left\{
\begin{aligned}
&\varepsilon^{\frac{1}{11}}\eta^{-\frac{1}{2}},\ &\gamma=(1,0),\ |y|\le L\\
&\varepsilon^{\frac{1}{12}}\eta^{-\frac{1}{3}},\ &\gamma=(0,1),\ |y|\le L\\
&\varepsilon^{\frac{1}{7}}+\varepsilon^{\frac{1}{10}}\left(\log M\right)^{\gamma_2-1},\ &|\gamma|\le4,\ |y|\le l.\\
\end{aligned}
\right.
\label{estimate of forcing terms of tilde W}
\end{equation}
And for $y=0$ and $|\gamma|=3$, we have
\begin{equation}
\left|\widetilde{F}_W^{(\gamma),0}\right|\lesssim e^{-\left(\frac{1}{2}-\frac{1}{k-3}\right)s},\ \ \ \ |\gamma|=3.
\label{estimate of forcing terms of tilde W when |gamma|=3 and y=0}
\end{equation}
\end{lemma}
\begin{proof}
First, we have
\begin{equation}
\left|F_W^{(0,0)}\right|=|F_W|\lesssim e^{-\frac{s}{2}}.
\end{equation}
For the case $1\le|\gamma|\le2$, we decompose the estimate for the forcing term as
\begin{equation}
\begin{aligned}
\left|F_W^{(\gamma)}\right|\overset{1\le|\gamma|\le2}{\lesssim}&|\partial^\gamma F_W|+\sum_{0\le\beta<\gamma}\left(|\partial^{\gamma-\beta}G_W||\partial_1\partial^\beta W|+|\partial^{\gamma-\beta}h_W||\partial_2\partial^\beta W|\right)\\
&+\mathbbm{1}_{|\gamma|=2}\gamma_2|\partial_2(JW)||\partial_1^{\gamma_1+1}\partial_2^{\gamma_2-1}W|+\left|[\partial^\gamma,J]W\partial_1W\right|\\
=&|\partial^\gamma F_W|+I_1^{(\gamma)}+I_2^{(\gamma)}+I_3^{(\gamma)}.
\end{aligned}
\end{equation}
Then one can check that each term does not exceed the proposed bound. The terms $F_Z^{(\gamma)}$, $F_A^{(\gamma)}$ and $\widetilde{F}_W^{(\gamma)}$ can be estimated in a similar fashion.
\end{proof}
\section{Bounds on Lagrangian trajectories}
Given a point $y_0$ and an initial time $s_0\ge-\log\varepsilon$, we define the Lagrangian trajectory $\Phi_W^{y_0}$ by
\begin{equation}
\left\{
\begin{aligned}
&\frac{d}{ds}\Phi_W^{y_0}(s)=\mathcal{V}_W\circ\Phi_W^{y_0}(s)\\
&\Phi_W^{y_0}(s_0)=y_0.
\end{aligned}
\right.
\end{equation}
Similarly we define $\Phi_Z^{y_0}$ and $\Phi_A^{y_0}$ using the transport terms in the equations of $Z$ and $A$ respectively.
We now establish upper and lower bounds for these Lagrangian trajectories, and use them to close the bootstrap argument for the spatial support of $W,Z,A$.
\subsection{Upper bound of the trajectories}\label{upper bound of the trajectories}
\begin{lemma}Let $\Phi$ denote any of $\Phi_W^{y_0}$, $\Phi_Z^{y_0}$, $\Phi_A^{y_0}$. For any $y_0\in\mathcal{X}_0$, we have that
\begin{equation}
\begin{aligned}
|\Phi_1(s)|&\le\frac{3}{2}\varepsilon^{\frac{1}{2}}e^{\frac{3}{2}s},\\
|\Phi_2(s)|&\le\frac{3}{2}\varepsilon^{\frac{1}{6}}e^{\frac{s}{2}}.
\end{aligned}
\end{equation}
\end{lemma}
\begin{proof}
We first deal with the case $\Phi=\Phi_W^{y_0}$. Note that
\begin{equation}
\begin{aligned}
\frac{d}{ds}\left(e^{-\frac{3}{2}s}\Phi_1(s)\right)&=e^{-\frac{3}{2}s}g_W\circ\Phi,\\
\frac{d}{ds}\left(e^{-\frac{s}{2}}\Phi_2(s)\right)&=e^{-\frac{s}{2}}h_W\circ\Phi,\\
\Phi(-\log\varepsilon)&=y_0.
\end{aligned}
\end{equation}
Then the estimates are direct consequences of $|g_W|\le e^{\frac{s}{2}}$ and $|h_W|\le e^{-\frac{s}{2}}$. We omit the details, which are the same as those in \cite{buckmaster2022formation}. The estimates for $\Phi_Z$ and $\Phi_A$ are similar.
\end{proof}
Now we close the bootstrap bound for the spatial support. We aim to show that
\begin{equation}
\operatorname{supp}(DW,DZ,DA)\subset\frac{7}{8}\mathcal{X}(s)=\left\{|y_1|\le\frac{7}{4}\varepsilon^{\frac{1}{2}}e^{\frac{3}{2}s},|y_2|\le\frac{7}{4}\varepsilon^{\frac{1}{6}}e^{\frac{s}{2}}\right\}.
\end{equation}
Since $\mathrm{supp}_x(D_xN,D_xT)\subset\{|x_1|\le\frac{3}{2}\varepsilon^{\frac{1}{2}},\ |x_2|\le\frac{3}{2}\varepsilon^{\frac{1}{6}}\}$, which corresponds to $\frac{3}{4}\mathcal{X}(s)$ in self-similar variables, in $\left(\frac{3}{4}\mathcal{X}(s)\right)^c$ there hold
\begin{equation}
\left\{\begin{aligned}
&g_W=\beta_\tau JW+\beta_\tau e^{\frac{s}{2}}\left[-\frac{\partial_t f}{1+f_{x_1}}+J\left(\kappa+\beta_2Z+2\beta_1V_1\right)\right]\\
&g_Z=\beta_2\beta_\tau JW+\beta_\tau e^{\frac{s}{2}}\left[-\frac{\partial_t f}{1+f_{x_1}}+J\left(\beta_2\kappa+Z+2\beta_1V_1\right)\right]\\
&g_A=\beta_1\beta_\tau JW+\beta_\tau e^{\frac{s}{2}}\left[-\frac{\partial_t f}{1+f_{x_1}}+J\left(\beta_1\kappa+\beta_1Z+2\beta_1V_1\right)\right],
\end{aligned}\right.
\label{transport terms of W,Z,A outside the support}
\end{equation}
\begin{equation}
h_W=h_Z=h_A=2\beta_1\beta_\tau e^{-\frac{s}{2}}\left(V_2+A\right),
\end{equation}
\begin{equation}
\left\{
\begin{aligned}
F_W&=-2\beta_{3}\beta_\tau S\partial_2A+\beta_\tau e^{-\frac{s}{2}}Q_{12}A\\
F_Z&=2\beta_{3}\beta_\tau S\partial_2A+\beta_\tau e^{-\frac{s}{2}}Q_{12}A\\
F_A&=-2\beta_{3}\beta_\tau S\partial_2S-\beta_\tau e^{-\frac{s}{2}}Q_{12}U\cdot N.
\end{aligned}
\right.
\end{equation}
We also define
\begin{equation}
\left\{
\begin{aligned}
W_\infty(t)&=\left[\frac{\kappa_0}{2}(n_1+1)-\kappa\right]e^{\frac{s}{2}}\\
Z_\infty(t)&=\frac{\kappa_0}{2}(n_1-1)\\
A_\infty(t)&=-\frac{\kappa_0}{2}n_2\\
S_\infty(t)&=\frac{e^{-\frac{s}{2}}W_\infty+\kappa-Z_\infty}{2}=\frac{\kappa_0}{2}.
\end{aligned}
\right.
\end{equation}
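As a consistency check, the last line indeed follows from the first three:
\begin{equation}
e^{-\frac{s}{2}}W_\infty+\kappa-Z_\infty=\left[\frac{\kappa_0}{2}(n_1+1)-\kappa\right]+\kappa-\frac{\kappa_0}{2}(n_1-1)=\kappa_0,
\end{equation}
so that $S_\infty=\frac{\kappa_0}{2}$.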
Then $W-W_\infty$, $Z-Z_\infty$, $A-A_\infty$ satisfy transport-type equations:
\begin{equation}
\begin{aligned}
\left(\partial_s-\frac{1}{2}\right)(W-W_\infty)+\mathcal{V}_W\cdot\nabla(W-W_\infty)&=F_{W-W_\infty},\\
\partial_s(Z-Z_\infty)+\mathcal{V}_Z\cdot\nabla(Z-Z_\infty)&=F_{Z-Z_\infty},\\
\partial_s(A-A_\infty)+\mathcal{V}_A\cdot\nabla(A-A_\infty)&=F_{A-A_\infty},
\end{aligned}
\end{equation}
where
\begin{equation}
\begin{aligned}
F_{W-W_\infty}=&-\beta_3\beta_\tau e^{-\frac{s}{2}}(W-W_\infty)\partial_2A+\beta_3\beta_\tau(Z-Z_\infty)\partial_2A+\beta_\tau e^{-\frac{s}{2}}Q_{12}(A-A_\infty)-2\beta_3\beta_\tau S_\infty\partial_2A,\\
F_{Z-Z_\infty}=&\beta_3\beta_\tau e^{-s}(W-W_\infty)\partial_2A-\beta_3\beta_\tau e^{-\frac{s}{2}}(Z-Z_\infty)\partial_2A+2\beta_3\beta_\tau e^{-\frac{s}{2}}S_\infty\partial_2A+\beta_\tau e^{-s}Q_{12}(A-A_\infty),\\
F_{A-A_\infty}=&-\beta_3\beta_\tau e^{-s}(W-W_\infty)\partial_2S+\beta_3\beta_\tau e^{-\frac{s}{2}}(Z-Z_\infty)\partial_2S-2\beta_3\beta_\tau e^{-\frac{s}{2}}S_\infty\partial_2S\\
&-\beta_\tau e^{-\frac{3}{2}s}Q_{12}(W-W_\infty)-\beta_\tau e^{-s}Q_{12}(Z-Z_\infty).
\end{aligned}
\end{equation}
For $y_0\notin\frac{7}{8}\mathcal{X}(s)$, let $M'>|y_0|$ be a large enough constant. Define
\begin{equation}
Q_{big}=\left\{|y_1|\le M',|y_2|\le M'\right\},\ Q_{small}(s)=\left\{|y_1|\le e^{\frac{3}{2}s}\mu_1(s),|y_2|\le e^{\frac{s}{2}}\mu_2(s)\right\},
\end{equation}
where
\begin{equation}
\left\{\begin{aligned}
\mu_1(s)&=\frac{3+\varepsilon}{2}\varepsilon^{\frac{1}{2}}-2CM^{\frac{1}{4}}e^{-s}\\
\mu_2(s)&=\frac{3+\varepsilon}{2}\varepsilon^{\frac{1}{6}}-2CM^{\frac{1}{4}}e^{-s}.
\end{aligned}\right.
\end{equation}
One can verify that $\frac{3}{4}\mathcal{X}(s)\subset Q_{small}\subset\frac{7}{8}\mathcal{X}(s)\subset Q_{big}$ if we take $\varepsilon$ small enough and $M'$ large enough. Define
\begin{equation}
E(y,s)=\frac{1}{2}\left(e^{-s}(W-W_\infty)^2+(Z-Z_\infty)^2+2(A-A_\infty)^2\right),
\end{equation}
then we have
\begin{equation}
\frac{d}{ds}\int_{Q_{big}\setminus Q_{small}}E\le C\int_{Q_{big}\setminus Q_{small}}E.
\end{equation}
From the initial condition, we can see that $\int_{Q_{big}\setminus Q_{small}}E=0$ when $s=-\log\varepsilon$; thus $\int_{Q_{big}\setminus Q_{small}}E\equiv0$ for all later times, according to Gronwall's inequality. This tells us that as long as $y_0\notin\frac{7}{8}\mathcal{X}(s)$, we have $W(y_0,s)=W_\infty$, $Z(y_0,s)=Z_\infty$, $A(y_0,s)=A_\infty$; thus we have proved (\ref{refined spatial support}).
\subsection{Lower bounds for Lagrangian trajectories}
\begin{lemma}
\label{lower bound for trajectory of W}
Suppose $|y_0|\ge l$, $s_0\ge-\log\varepsilon$, then we have
\begin{equation}
|\Phi_W^{y_0}(s)|\ge|y_0|e^{\frac{s-s_0}{5}}\ \ \ \text{ for all }s\ge s_0.
\end{equation}
\end{lemma}
\begin{proof}
It suffices to prove that $y\cdot\mathcal{V}_W\ge\frac{1}{5}|y|^2$: indeed, this gives $\frac{d}{ds}|\Phi_W^{y_0}(s)|^2=2\Phi_W^{y_0}(s)\cdot\mathcal{V}_W\circ\Phi_W^{y_0}(s)\ge\frac{2}{5}|\Phi_W^{y_0}(s)|^2$, and Gronwall's inequality yields the claim. Note that by the definition of $\mathcal{V}_W$, we can see that
\begin{equation}
y\cdot\mathcal{V}_W(y)\ge\frac{1}{2}|y|^2+y_1^2-\beta_\tau|y_1JW|-|y_1G_W|-|y_2h_W|.
\end{equation}
We split the estimate of $W$ into two cases: $|y|\le L$ and $|y|> L$.
If $|y|\le L$, by (\ref{bootstrap assumptions of tilde W when |y|<L})(\ref{estimates of D ovl W}) we have
\begin{equation}
\begin{aligned}
|W(y)|&\le|W(y_1,y_2)-W(0,y_2)|+|W(0,y_2)-\ovl{W}(0,y_2)|+\underbrace{|\ovl{W}(0,y_2)|}_{=0}\\
&\le(1+\varepsilon^{\frac{1}{12}})|y_1|+\varepsilon^{\frac{1}{13}}|y_2|.
\end{aligned}
\end{equation}
If $|y|>L$, from bootstrap assumption we have
\begin{equation}
|W(y)|\le(1+\varepsilon^{\frac{1}{20}})\eta^{\frac{1}{6}}(y)\le(1+\varepsilon^{\frac{1}{20}})^2|y|.
\end{equation}
Then appealing to (\ref{estimate for GW})(\ref{estimates for h}) we have the desired result.
\end{proof}
\begin{lemma}
\label{lower bound of trajectories of Z,A}
Let $\Phi$ denote either $\Phi_Z^{y_0}$ or $\Phi_A^{y_0}$. If
\begin{equation}
\kappa_0\ge\frac{3}{1-\max(\beta_1,\beta_2)},
\end{equation}
then for any $0\le\sigma_1<\frac{1}{2}$ and $2\sigma_1<\sigma_2$, we have the bound
\begin{equation}
\int_{-\log\varepsilon}^\infty e^{\sigma_1s'}\left(1+|\Phi_1(s')|\right)^{-\sigma_2}ds'\le C(\sigma_1,\sigma_2).
\label{integral of eta along trajectories of W and A are bounded}
\end{equation}
\end{lemma}
\begin{proof}
The proof is the same as that in \cite{buckmaster2022formation}.
\end{proof}
\begin{lemma}
Let $\Phi^{y_0}$ denote either $\Phi_Z^{y_0}$ or $\Phi_A^{y_0}$, then
\begin{equation}
\sup_{y_0\in\mathcal{X}_0}\int_{-\log\varepsilon}^\infty|\partial_1W|\circ \Phi^{y_0}(s')ds'\lesssim1.
\label{integral of D1W along Phi is bounded}
\end{equation}
\end{lemma}
\begin{proof}
Using Lemma \ref{lower bound of trajectories of Z,A} and the bootstrap assumption on $\partial_1W$, we can deduce the above inequality.
\end{proof}
\section{Closure of bootstrap argument for $\partial_1A$}
Since the vorticity is purely transported by $u$, the bootstrap for $\partial_1A$ is easy to close from the bound on the vorticity and the bootstrap assumptions, with no need for the evolution equation of $\partial_1A$.
\begin{lemma}[Relating $A$ and $\Omega$]
We have the following identity:
\begin{equation}
\begin{aligned}
J e^{\frac 3 2 s} \partial_1 A =&-{(\alpha S)}^{\frac 1 \alpha} \Omega+ T_2 e^{\frac s 2} \partial_2 \left(\frac {e^{-\frac s 2}W+\kappa+Z }{2}\right)- N_2 e^{\frac s 2}\partial_2 A+ U\cdot (N_2\partial_{x_2}T -T_2\partial_{x_2} N+J\partial_{x_1} T).
\end{aligned}
\end{equation}
\end{lemma}
\begin{proof}
Note that $\mathrm{curl}\ \mathring u =\partial_T \mathring u \cdot N- \partial_N \mathring u \cdot T$. We compute each term as follows:
\begin{equation}
\begin{aligned}
\partial_T\mathring u &= T_1\partial_{\tilde{x}_1} \mathring u + T_2\partial_{\tilde{x}_2} \mathring u =T_1 \frac {1} {1+f_{x_1}}\partial_{x_1} \mathring u +T_2\left(-\frac{f_{x_2}}{1+f_{x_1}}\partial_{x_1} \mathring u + \partial_{x_2} \mathring u \right)\\
&= \frac {f_{x_2}}{\sqrt{1+f_{x_2}^2}}\frac 1 {1+f_{x_1}}\partial_{x_1} \mathring u - \frac {f_{x_2}}{\sqrt{1+f_{x_2}^2}}\frac 1 {1+f_{x_1}}\partial_{x_1} \mathring u + \frac{\partial_{x_2}\mathring u}{\sqrt{1+f_{x_2}^2}}= T_2 \partial_{x_2}\mathring u,
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
\partial_N\mathring u &= N_1\partial_{\tilde{x}_1} \mathring u + N_2\partial_{\tilde{x}_2} \mathring u =\frac {1}{\sqrt{1+f_{x_2}^2}}\frac {1} {1+f_{x_1}}\partial_{x_1} \mathring u -\frac{f_{x_2}}{\sqrt{1+f_{x_2}^2}} \left(-\frac{f_{x_2}}{1+f_{x_1}}\partial_{x_1} \mathring u + \partial_{x_2} \mathring u \right)\\
&= \frac {\sqrt{1+f_{x_2}^2}}{1+f_{x_1}}\partial_{x_1} \mathring u - \frac {f_{x_2}}{\sqrt{1+f_{x_2}^2}}\partial_{x_2} \mathring u= J \partial_{x_1}\mathring u+ N_2\partial_{x_2} \mathring u.
\end{aligned}
\end{equation}
Thus, we have
\begin{equation}
\begin{aligned}
\mathrm{curl}\ \mathring u &= T_2 \partial_{x_2} \mathring u \cdot N-(J \partial_{x_1}\mathring u+ N_2\partial_{x_2} \mathring u)\cdot T\\
&= T_2 \partial_{x_2} (\mathring u \cdot N)- T_2 \mathring u \cdot\partial_{x_2} N-J\partial_{x_1}(\mathring u \cdot T)+ J\mathring u \cdot\partial_{x_1} T -N_2 \partial_{x_2}\left(\mathring u \cdot T\right)+N_2 \mathring u \cdot \partial_{x_2}T\\
&= T_2 \partial_{x_2}\left(\frac {w+z}{2}\right)- T_2 \mathring u \cdot\partial_{x_2}N-J \partial_{x_1}a +J\mathring u \cdot\partial_{x_1} T-N_2\partial_{x_2}a+ N_2 \mathring u \cdot\partial_{x_2} T\\
&= T_2 \partial_{x_2}\left(\frac {w+z}{2}\right)-J\partial_{x_1}a- N_2 \partial_{x_2}a+\mathring u \cdot (N_2 \partial_{x_2}T-T_2\partial_{x_2} N+ J\partial_{x_1} T).
\end{aligned}
\end{equation}
On the other hand, $\mathrm{curl}\ \mathring{u}=\tilde \rho\tilde\zeta=\left(\alpha S\right)^{\frac{1}{\alpha}}\Omega$; rearranging the above identity for $J\partial_{x_1}a=Je^{\frac{3}{2}s}\partial_1A$, we get the desired result.
\end{proof}
With the help of this identity, we have
\begin{equation}
\begin{aligned}
e^{\frac 3 2 s} |\partial_1 A| &\lesssim \kappa_0^{\frac 1 \alpha} + e^{\frac s 2} (e^{-\frac s 2}+ M\varepsilon^{\frac 1 2} e^{-\frac s 2})+ \varepsilon^{\frac 1 2} e^{\frac s 2}M \varepsilon^{\frac 1 2} e^{-\frac s 2}+ M^{\frac 1 4}(\varepsilon^{\frac 1 2} M^2 \varepsilon+ M^2\varepsilon +M^2 \varepsilon^{\frac 2 3})\le \frac 1 2 M.
\end{aligned}
\end{equation}
This improves the bootstrap bound for $\partial_1A$.
\section{Closure of bootstrap argument for $Z$ and $A$}
In this section we improve the bootstrap bounds for $Z$ and $A$.
\begin{lemma}[Close $Z$ bootstrap]
For the Riemann variable $Z$, we have the improved bootstrap bound:
\begin{equation}
\begin{aligned}
\left|Z\circ\Phi_Z^{y_0}(s)\right|&\le\frac{1}{2}M\varepsilon,\\
e^{\frac{3}{2}s}\left|\partial_1Z\circ\Phi_Z^{y_0}(s)\right|&\le\frac{1}{2}M^{\frac{1}{2}},\\
e^{\frac{s}{2}}\left|\partial_2Z\circ\Phi_Z^{y_0}(s)\right|&\le\frac{1}{2}M\varepsilon^{\frac{1}{2}},\\
e^{\frac{3}{2}s}\left|\partial_{11}Z\circ\Phi_Z^{y_0}(s)\right|&\le\frac{1}{2}M^{\frac{1}{2}},\\
e^{\frac{3}{2}s}\left|\partial_{12}Z\circ\Phi_Z^{y_0}(s)\right|&\le\frac{1}{2}M,\\
e^{s}\left|\partial_{22}Z\circ\Phi_Z^{y_0}(s)\right|&\le\frac{1}{2}M.
\end{aligned}
\end{equation}
\end{lemma}
\begin{proof}
Since $e^{\mu s}\partial^\gamma Z$ obeys
\begin{equation}
\partial_s(e^{\mu s}\partial^\gamma Z)+D_Z^{(\gamma,\mu)}(e^{\mu s}\partial^\gamma Z)+(\mathcal{V}_Z\cdot\nabla)(e^{\mu s}\partial^\gamma Z)=e^{\mu s}F_Z^{(\gamma)},
\end{equation}
by Gronwall's inequality we can see that
\begin{equation}
\begin{aligned}
e^{\mu s}\left|\partial^\gamma Z\circ\Phi_Z^{y_0}(s)\right|\lesssim&\ \varepsilon^{-\mu}\left|\partial^\gamma Z(y_0,-\log\varepsilon)\right|\exp\left(-\int_{-\log\varepsilon}^sD_Z^{(\gamma,\mu)}\circ\Phi_Z^{y_0}(s')ds'\right)\\
&+\int_{-\log\varepsilon}^se^{\mu s'}\left|F_Z^{(\gamma)}\circ\Phi_Z^{y_0}(s')\right|\exp\left(-\int_{s'}^sD_Z^{(\gamma,\mu)}\circ\Phi_Z^{y_0}(s'')ds''\right)ds',
\end{aligned}
\end{equation}
where
\begin{equation}
D_Z^{(\gamma,\mu)}=D_Z^{(\gamma)}-\mu=\frac{3}{2}\gamma_1+\frac{1}{2}\gamma_2+\beta_2\beta_\tau\gamma_1J\partial_1W-\mu.
\end{equation}
If we require that $\frac{3}{2}\gamma_1+\frac{1}{2}\gamma_2\ge\mu$, then we have
\begin{equation}
D_Z^{(\gamma,\mu)}\ge\frac{3\gamma_1+\gamma_2}{2}-\mu-\beta_2\beta_\tau\gamma_1|J\partial_1W|\overset{|\gamma|\le2}{\ge}\frac{3\gamma_1+\gamma_2}{2}-\mu-2|\partial_1W|.
\end{equation}
Thus the damping term is bounded by
\begin{equation}
\begin{aligned}
\exp\left(-\int_{s'}^sD_Z^{(\gamma,\mu)}\circ\Phi_Z^{y_0}(s'')ds''\right)&\lesssim e^{-\left(\frac{3\gamma_1+\gamma_2}{2}-\mu\right)(s-s')}\exp\left(\int_{s'}^s2|\partial_1W|\circ\Phi_Z^{y_0}(s'')ds''\right)\\
\overset{(\ref{integral of D1W along Phi is bounded})}{\lesssim}e^{-\left(\frac{3\gamma_1+\gamma_2}{2}-\mu\right)(s-s')}.
\end{aligned}
\end{equation}
And finally we have
\begin{equation}
e^{\mu s}\left|\partial^\gamma Z\circ\Phi_Z^{y_0}(s)\right|\lesssim\ \varepsilon^{-\mu}\left|\partial^\gamma Z(y_0,-\log\varepsilon)\right|+\int_{-\log\varepsilon}^se^{\mu s'}\left|F_Z^{(\gamma)}\circ\Phi_Z^{y_0}(s')\right|e^{-\left(\frac{3\gamma_1+\gamma_2}{2}-\mu\right)(s-s')}ds'.
\end{equation}
Next, for each multi-index $\gamma$, we choose a suitable $\mu$ in the above inequality.
\emph{Case 1}. $\gamma=(0,0)$. We set $\mu=0$. From (\ref{initial condition of Z})(\ref{estimate of forcing terms of Z}), we have
\begin{equation}
\begin{aligned}
\left|Z\circ\Phi_Z^{y_0}(s)\right|&\lesssim\varepsilon+\int_{-\log\varepsilon}^se^{-s'}ds'\lesssim\varepsilon\le\frac{1}{2}M\varepsilon.
\end{aligned}
\end{equation}
\emph{Case 2}. $\gamma=(1,0)$. We set $\mu=\frac{3}{2}$. Also from (\ref{initial condition of Z})(\ref{estimate of forcing terms of Z}), we have
\begin{equation}
\begin{aligned}
e^{\frac{3}{2}s}\left|\partial_1Z\circ\Phi_Z^{y_0}(s)\right|&\lesssim\varepsilon^{-\frac{3}{2}}\varepsilon^{\frac{3}{2}}+\int_{-\log\varepsilon}^se^{\frac{3}{2}s'}e^{-\frac{3}{2}s'}\eta^{-\frac{1}{6}+\frac{2}{3(k-2)}}\circ\Phi_Z^{y_0}(s')ds'\\
&\lesssim1+\int_{-\log\varepsilon}^s\left(1+|\Phi_1(s')|^2\right)^{-\frac{1}{6}+\frac{2}{3(k-2)}}ds'
\overset{(\ref{integral of eta along trajectories of W and A are bounded})}{\lesssim}1\le\frac{1}{2}M^{\frac{1}{2}}.
\end{aligned}
\end{equation}
\emph{Case 3}. $\gamma=(2,0)$. We set $\mu=\frac{3}{2}$ and we can deduce that
\begin{equation}
\begin{aligned}
e^{\frac{3}{2}s}\left|\partial_{11}Z\circ\Phi_Z^{y_0}(s)\right|&\lesssim\varepsilon^{-\frac{3}{2}}\varepsilon^{\frac{3}{2}}+\int_{-\log\varepsilon}^se^{\frac{3}{2}s'}e^{-\frac{3}{2}s'}\left(1+M\eta^{-\frac{1}{3}}\circ\Phi(s')\right)e^{-\frac{3}{2}(s-s')}ds'\\
&\lesssim1+M\int_{-\log\varepsilon}^se^{-\frac{1}{8}(s-s')}\left(1+|\Phi_1(s')|\right)^{-\frac{2}{3}}ds'
\overset{(\ref{integral of eta along trajectories of W and A are bounded})}{\lesssim}1+Me^{-\frac{s}{8}}\le\frac{1}{2}M^\frac{1}{2}.
\end{aligned}
\end{equation}
\emph{Case 4}. $\gamma=(1,1)$. We set $\mu=\frac{3}{2}$ and we can deduce that
\begin{equation}
\begin{aligned}
e^{\frac{3}{2}s}\left|\partial_{12}Z\circ\Phi_Z^{y_0}(s)\right|&\lesssim\varepsilon^{-\frac{3}{2}}\varepsilon^{\frac{3}{2}}+\int_{-\log\varepsilon}^se^{\frac{3}{2}s'}e^{-\frac{3}{2}s'}\left(M^{\frac{1}{2}}+M^2\eta^{-\frac{1}{3}}\circ\Phi(s')\right)e^{-\frac{1}{2}(s-s')}ds'\\
&\lesssim1+M^\frac{1}{2}+M^2\int_{-\log\varepsilon}^se^{-\frac{1}{8}(s-s')}\left(1+|\Phi_1(s')|\right)^{-\frac{2}{3}}ds'\\
\overset{(\ref{integral of eta along trajectories of W and A are bounded})}&{\lesssim}M^\frac{1}{2}+M^2e^{-\frac{s}{8}}\le\frac{1}{2}M.
\end{aligned}
\end{equation}
\emph{Case 5}. $\gamma=(0,2)$. We set $\mu=1$ and we can deduce that
\begin{equation}
\begin{aligned}
e^{s}\left|\partial_{22}Z\circ\Phi_Z^{y_0}(s)\right|&\lesssim\varepsilon^{-\frac{1}{2}}\varepsilon+\int_{-\log\varepsilon}^se^{s'}M^{\frac{1}{4}}e^{-\left(\frac{3}{2}-\frac{1}{k-3}\right)s'}ds'\lesssim\varepsilon^{\frac{1}{2}}+M^{\frac{1}{4}}\varepsilon^{\frac{1}{2}-\frac{1}{k-3}}\le\frac{1}{2}M.
\end{aligned}
\end{equation}
\end{proof}
Next we close the bootstrap argument of $A$ by proving (\ref{refined bootstrap inequality of A}).
\begin{lemma}[Close $A$ bootstrap]
For the Riemann variable $A$, we have the improved bootstrap bound:
\begin{equation}
\begin{aligned}
\left|A\circ\Phi_A^{y_0}(s)\right|&\le\frac{1}{2}M\varepsilon,\\
e^{\frac{s}{2}}\left|\partial_2A\circ\Phi_A^{y_0}(s)\right|&\le\frac{1}{2}M\varepsilon^{\frac{1}{2}},\\
e^{s}\left|\partial_{22}A\circ\Phi_A^{y_0}(s)\right|&\le\frac{1}{2}M.
\end{aligned}
\end{equation}
\end{lemma}
\begin{proof}
As in the closure of the $Z$ bootstrap, if $\mu=\frac{3\gamma_1+\gamma_2}{2}$, we have
\begin{equation}
e^{\mu s}\left|\partial^\gamma A\circ\Phi_A^{y_0}(s)\right|\lesssim\ \varepsilon^{-\mu}\left|\partial^\gamma A(y_0,-\log\varepsilon)\right|+\int_{-\log\varepsilon}^se^{\mu s'}\left|F_A^{(\gamma)}\circ\Phi_A^{y_0}(s')\right|ds'.
\end{equation}
For each multi-index $\gamma$, we set a suitable $\mu$ in the above inequality.
\emph{Case 1}. $\gamma=(0,0)$. We set $\mu=0$. From (\ref{initial condition of A})(\ref{estimate of forcing terms of A}), we have
\begin{equation}
\begin{aligned}
\left|A\circ\Phi_A^{y_0}(s)\right|&\lesssim\varepsilon+\int_{-\log\varepsilon}^sM^{\frac{1}{4}}e^{-s'}ds'\lesssim M^{\frac{1}{4}}\varepsilon\le\frac{1}{2}M\varepsilon.
\end{aligned}
\end{equation}
\emph{Case 2}. $\gamma=(0,1)$. We set $\mu=\frac{1}{2}$ and we can deduce that
\begin{equation}
\begin{aligned}
e^{\frac{s}{2}}\left|\partial_2A\circ\Phi_A^{y_0}(s)\right|&\lesssim\varepsilon^{-\frac{1}{2}}\varepsilon+\int_{-\log\varepsilon}^se^{\frac{s'}{2}}M^{\frac{1}{4}}e^{-s'}ds'\lesssim M^{\frac{1}{4}}\varepsilon^\frac{1}{2}\le\frac{1}{2}M\varepsilon^{\frac{1}{2}}.
\end{aligned}
\end{equation}
\emph{Case 3}. $\gamma=(0,2)$. We set $\mu=1$ and we can deduce that
\begin{equation}
\begin{aligned}
e^s\left|\partial_{22}A\circ\Phi_A^{y_0}(s)\right|&\lesssim\varepsilon^{-1}\varepsilon+\int_{-\log\varepsilon}^se^{s'}e^{-\left(1-\frac{3}{k-2}\right)s'}\eta^{-\frac{1}{6}}\circ\Phi_A^{y_0}(s')ds'\\
&\lesssim1+M^{\frac{1}{4}}\int_{-\log\varepsilon}^se^{\frac{2}{k-2}s'}\left(1+|\Phi_1(s')|\right)^{-\frac{1}{3}}ds'
\overset{(\ref{integral of eta along trajectories of W and A are bounded})}{\lesssim}1\le\frac{1}{2}M.
\end{aligned}
\end{equation}
\end{proof}
\section{Closure of bootstrap argument for $W$ and $\widetilde W$}
In this section we prove the improved bootstrap bounds (\ref{refined bootstrap inequality of W})(\ref{refined bootstrap inequality of tilde W when y=0, |gamma|=3})(\ref{refined bootstrap inequality of tilde W when |y|<L})(\ref{refined bootstrap inequality of tilde W when |y|<l, |gamma|<4})(\ref{refined bootstrap inequality of tilde W when |y|<l, |gamma|=4}) for $W$ and $\widetilde{W}$.
\subsection{Closure of bootstrap argument for high order derivatives of $\widetilde W$}
As stated in (\ref{evolution of derivatives of tilde W}), $\partial^\gamma\widetilde W$ satisfies the equation
\begin{equation}
\partial_s\partial^\gamma\widetilde W+D_{\widetilde W}^{(\gamma)}\partial^\gamma\widetilde W+(\mathcal{V}_W\cdot\nabla)\partial^\gamma\widetilde W=\widetilde{F}_{W}^{(\gamma)},
\end{equation}
where the damping term has a lower bound according to (\ref{estimates of f-dependent functions, inequalities})(\ref{estimate of ovl W})(\ref{estimate of D1W that will appear in damping terms}):
\begin{equation}
\begin{aligned}
D_{\widetilde W}^{(\gamma)}&=\frac{3\gamma_1+\gamma_2-1}{2}+\beta_\tau J\left(\partial_1\ovl{W}+\gamma_1\partial_1W\right)\\
&\ge\frac{3}{2}+\gamma_1-(1+\varepsilon^{\frac{1}{2}})\left(1+\gamma_1(1+\varepsilon^{\frac{1}{12}})\right)\ge\frac{3}{2}-1+\gamma_1-\gamma_1-C\varepsilon^{\frac{1}{12}}\ge\frac{1}{3}.
\label{lower bound of damping terms of tilde W}
\end{aligned}
\end{equation}
From the equation of $\partial^\gamma\widetilde{W}$, we have
\begin{equation}
\frac{d}{ds}\left|\partial^\gamma\widetilde{W}\circ\Phi_W^{y_0}\right|+\left(D_{\widetilde{W}}^{(\gamma)}\circ\Phi_W^{y_0}\right)\left|\partial^\gamma\widetilde{W}\circ\Phi_W^{y_0}\right|\le\left|\widetilde{F}_W^{(\gamma)}\circ\Phi_W^{y_0}\right|.
\end{equation}
If $|\gamma|=4$ and $|y|\le l$, from (\ref{estimate of forcing terms of tilde W})(\ref{lower bound of damping terms of tilde W}), we have
\begin{equation}
\begin{aligned}
e^{\frac{s}{3}}\left|\partial^\gamma\widetilde{W}\circ\Phi_W^{y_0}(s)\right|\le\varepsilon^{-\frac{1}{3}}\varepsilon^{\frac{1}{8}}+Ce^{\frac{s}{3}}\left(\varepsilon^{\frac{1}{7}}+\varepsilon^{\frac{1}{10}}(\log M)^{\gamma_2-1}\right).
\end{aligned}
\end{equation}
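Dividing by $e^{\frac{s}{3}}$ and using $e^{-\frac{s}{3}}\le\varepsilon^{\frac{1}{3}}$ for $s\ge-\log\varepsilon$, this yields
\begin{equation}
\left|\partial^\gamma\widetilde{W}\circ\Phi_W^{y_0}(s)\right|\le\varepsilon^{-\frac{1}{3}+\frac{1}{8}}e^{-\frac{s}{3}}+C\left(\varepsilon^{\frac{1}{7}}+\varepsilon^{\frac{1}{10}}(\log M)^{\gamma_2-1}\right)\le\varepsilon^{\frac{1}{8}}+C\varepsilon^{\frac{1}{7}}+C\varepsilon^{\frac{1}{10}}(\log M)^{\gamma_2-1},
\end{equation}
which is at most $\frac{1}{4}\varepsilon^{\frac{1}{10}}(\log M)^{\gamma_2}$ once $\varepsilon$ is small and $M$ is large (so that $C\le\frac{1}{8}\log M$).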
Thus for $|\gamma|=4$ and $|y|\le l$, we have
\begin{equation}
\left|\partial^\gamma\widetilde{W}\circ\Phi_W^{y_0}(s)\right|\le\frac{1}{4}\varepsilon^{\frac{1}{10}}(\log M)^{\gamma_2}.
\label{bootstrap estimate for D^gamma tilde W when |gamma|=4 and |y|le l}
\end{equation}
Now we consider the case $|\gamma|=3$, $y=0$. Setting $y=0$ in (\ref{evolution of derivatives of tilde W}), we have
\begin{equation}
\begin{aligned}
\left|\partial_s\partial^\gamma\widetilde{W}^0\right|&=\left|\widetilde{F}_W^{(\gamma),0}-G_W^0\partial_1\partial^\gamma\widetilde{W}^0-h_W^0\partial_2\partial^\gamma\widetilde{W}^0-\left(1-\beta_\tau\right)(1+\gamma_1)\partial^\gamma\widetilde{W}^0\right|\\
&\lesssim e^{-\left(\frac{1}{2}-\frac{1}{k-3}\right)s}+Me^{-s}\varepsilon^{\frac{1}{10}}(\log M)^4+Me^{-s}\varepsilon^{\frac{1}{4}}\lesssim e^{-\left(\frac{1}{2}-\frac{1}{k-3}\right)s}.
\end{aligned}
\end{equation}
Thus from (\ref{initial condition of tilde W})
\begin{equation}
|\partial^\gamma\widetilde{W}^0(s)|\le|\partial^\gamma\widetilde{W}^0(-\log\varepsilon)|+Ce^{-\left(\frac{1}{2}-\frac{1}{k-3}\right)s}\le\frac{1}{10}\varepsilon^{\frac{1}{4}}.
\label{bootstrap estimate for tilde W for |gamma|=3 and y=0}
\end{equation}
Next, we consider the case $|\gamma|\le3$, $|y|\le l$. For $|\gamma|=3$, by (\ref{bootstrap estimate for D^gamma tilde W when |gamma|=4 and |y|le l})(\ref{bootstrap estimate for tilde W for |gamma|=3 and y=0}), we have
\begin{equation}
|\partial^\gamma\widetilde{W}|\le\varepsilon^{\frac{1}{4}}+\frac{1}{2}\varepsilon^{\frac{1}{10}}(\log M)^{\gamma_2+1}|y|\le\frac{1}{2}(\log M)^4\varepsilon^{\frac{1}{10}}|y|+\frac{1}{2}M\varepsilon^{\frac{1}{4}}.
\end{equation}
Now by induction and $\partial^\gamma\widetilde{W}^0=0$ for $|\gamma|\le2$, we can close the bootstrap argument of $\partial^\gamma\widetilde{W}$ as in the case $|\gamma|=3$.
\subsection{A general discussion of weighted estimates}
In order to close the bootstrap argument for $W$ and the remaining cases for $\widetilde{W}$, we consider the evolution of $q=\eta^\mu R$, where $R$ is one of the derivatives of $W$ and $\widetilde{W}$, or one of these functions themselves, and $|\mu|\le\frac{1}{2}$. Suppose $R$ satisfies
\begin{equation}
\partial_sR+D_RR+(\mathcal{V}_W\cdot\nabla)R=F_R,
\end{equation}
then $q$ satisfies
\begin{equation}
\partial_sq+D_qq+(\mathcal{V}_W\cdot\nabla)q=\eta^\mu F_R,
\end{equation}
where
\begin{equation}
\begin{aligned}
D_q&=D_R-\mu\eta^{-1}\mathcal{V}_W\cdot\nabla\eta=D_R-3\mu+3\mu\eta^{-1}-2\mu\eta^{-1}\left(y_1g_W+3y_2^5h_W\right).
\end{aligned}
\end{equation}
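Indeed, since $\nabla q=\eta^\mu\nabla R+\mu\eta^{\mu-1}\nabla\eta\,R$, the transport term contributes $-\mu\eta^{-1}\left(\mathcal{V}_W\cdot\nabla\eta\right)$ to the damping; using the form $\mathcal{V}_W=\left(\frac{3}{2}y_1+g_W,\frac{1}{2}y_2+h_W\right)$ implicit in the expansion above, together with $\nabla\eta=(2y_1,6y_2^5)$, one computes
\begin{equation*}
\eta^{-1}\mathcal{V}_W\cdot\nabla\eta=\eta^{-1}\left[3y_1^2+3y_2^6+2y_1g_W+6y_2^5h_W\right]=3-3\eta^{-1}+2\eta^{-1}\left(y_1g_W+3y_2^5h_W\right).
\end{equation*}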
By (\ref{estimates of f-dependent functions, inequalities})(\ref{estimate for GW})(\ref{estimates for h}) and the bootstrap assumption for $W$, one can see that $|D_\eta|\le3\eta^{-\frac{1}{3}}$, where $D_\eta=2\eta^{-1}\left(y_1g_W+3y_2^5h_W\right)$ denotes the last term above. Thus $D_q\ge D_R-3\mu+3\mu\eta^{-1}-6|\mu|\eta^{-\frac{1}{3}}$.
By composing $q$ with the trajectory of $\mathcal{V}_W$, we have
\begin{equation}
\begin{aligned}
\left|q\circ\Phi_W^{y_0}(s)\right|\le&\ \left|q(y_0,s_0)\right|\exp\left(-\int_{s_0}^sD_q\circ\Phi_W^{y_0}(s')ds'\right)+\int_{s_0}^s\left|F_q^{(\gamma)}\circ\Phi_W^{y_0}(s')\right|\exp\left(-\int_{s'}^sD_q\circ\Phi_W^{y_0}(s'')ds''\right)ds',
\end{aligned}
\end{equation}
where $(y_0,s_0)$ is the starting position and starting time of the trajectory. Note that $s_0$ need not be $-\log\varepsilon$.
If $|y_0|\ge l$, we have that
\begin{equation}
\begin{aligned}
2\mu\int_{s'}^sD_\eta\circ\Phi_W^{y_0}(s'')ds''\overset{|\mu|\le\frac{1}{2}}&{\le}\int_{s_0}^s 3\eta^{-\frac{1}{3}}\circ\Phi_W^{y_0}(s')ds'\\
&\le3\cdot2^{\frac{1}{3}}\int_{s_0}^\infty\left(1+l^2e^{\frac{2}{5}(s'-s_0)}\right)^{-\frac{1}{3}}ds'\le-30\log l,
\end{aligned}
\end{equation}
consequently, we can bound $q$ by
\begin{equation}
\begin{aligned}
\left|q\circ\Phi_W^{y_0}\right|\le&\ l^{-30}|q(y_0,s_0)|\exp\left[-\int_{s_0}^s\left(D_R-3\mu+3\mu\eta^{-1}\right)\circ\Phi_W^{y_0}(s')ds'\right]\\
&+l^{-30}\int_{s_0}^s\left|F_q^{(\gamma)}\circ\Phi_W^{y_0}(s')\right|\exp\left[-\int_{s'}^s\left(D_R-3\mu+3\mu\eta^{-1}\right)\circ\Phi_W^{y_0}(s'')ds''\right]ds'.
\label{estimate of q when y_0 ge l}
\end{aligned}
\end{equation}
We remark that as long as $|y_0|\ge l$ and $p>0$, one can verify that
\begin{equation}
\int_{s_0}^\infty\eta^{-p}\circ\Phi_W^{y_0}(s)ds\lesssim_p-\log l.
\end{equation}
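A short verification: assuming, as in the computation above, the trajectory lower bound $\eta\circ\Phi_W^{y_0}(s)\gtrsim1+l^2e^{\frac{2}{5}(s-s_0)}$, we split the integral at $s_1=s_0-5\log l$ and estimate
\begin{equation*}
\int_{s_0}^\infty\left(1+l^2e^{\frac{2}{5}(s-s_0)}\right)^{-p}ds\le\int_{s_0}^{s_1}1\,ds+l^{-2p}\int_{s_1}^\infty e^{-\frac{2p}{5}(s-s_0)}ds=-5\log l+\frac{5}{2p}\lesssim_p-\log l.
\end{equation*}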
If $|y_0|\ge L$, we have another inequality:
\begin{equation}
\begin{aligned}
2\mu\int_{s'}^sD_\eta\circ\Phi_W^{y_0}(s'')ds''\overset{|\mu|\le\frac{1}{2}}&{\le}\int_{s_0}^s 3\eta^{-\frac{1}{3}}\circ\Phi_W^{y_0}(s')ds'\\
&\le3\cdot2^{\frac{1}{3}}\int_{s_0}^\infty\left(1+L^2e^{\frac{2}{5}(s'-s_0)}\right)^{-\frac{1}{3}}ds'\le CL^{-\frac{20}{3}}.
\end{aligned}
\end{equation}
Consequently, $q$ is bounded by
\begin{equation}
\begin{aligned}
\left|q\circ\Phi_W^{y_0}\right|\le&\ e^\varepsilon|q(y_0,s_0)|\exp\left[-\int_{s_0}^s\left(D_R-3\mu+3\mu\eta^{-1}\right)\circ\Phi_W^{y_0}(s')ds'\right]\\
&+e^\varepsilon\int_{s_0}^s\left|F_q^{(\gamma)}\circ\Phi_W^{y_0}(s')\right|\exp\left[-\int_{s'}^s\left(D_R-3\mu+3\mu\eta^{-1}\right)\circ\Phi_W^{y_0}(s'')ds''\right]ds'.
\end{aligned}
\label{estimate of q when y_0 ge L}
\end{equation}
\subsection{Closure of bootstrap argument for $\widetilde W$}
For different multi-index $\gamma$, we choose different $\mu$, and we will use (\ref{estimate of q when y_0 ge l}) or (\ref{estimate of q when y_0 ge L}), depending on the location of $y$. We establish the estimates case by case.
\emph{Case 1}. $|\gamma|=0$, $l\le|y|\le L$. In this case we set $\mu=-\frac{1}{6}$, so that $q=\eta^{-\frac{1}{6}}\widetilde{W}$ and $D_R-3\mu+3\mu\eta^{-1}=-\frac{1}{2}\eta^{-1}+\beta_\tau J\partial_1\ovl{W}$. We estimate the damping and forcing terms as follows.
\begin{equation}
\begin{aligned}
-\int_{s'}^s\left(\beta_\tau J\partial_1\ovl{W}-\frac{1}{2}\eta^{-1}\right)\circ\Phi_W^{y_0}(s'')ds''&\le(1+\varepsilon^{\frac{1}{2}})\int_{s_0}^s\left|\partial_1\ovl{W}\circ\Phi_W^{y_0}(s'')\right|ds''+\frac{1}{2}\int_{s_0}^s\eta^{-1}\circ\Phi_W^{y_0}(s'')ds''\\
&\le2\int_{s_0}^s\eta^{-\frac{1}{3}}\circ\Phi_W^{y_0}(s'')ds''\le-20\log l,
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
\int_{s_0}^s\left|\left(\eta^{-\frac{1}{6}}\widetilde{F}_W\right)\circ\Phi_W^{y_0}(s')\right|ds'&\lesssim\int_{s_0}^s M\varepsilon^{\frac{1}{6}}\eta^{-\frac{1}{3}}\circ\Phi_W^{y_0}(s')ds'\le-\varepsilon^{\frac{1}{8}}\log l.
\end{aligned}
\end{equation}
According to lemma \ref{lower bound for trajectory of W}, we may require that either $|y_0|=l$ or $s_0=-\log\varepsilon$; thus we can use the initial condition or the bootstrap assumptions to bound $|q(y_0,s_0)|$. From (\ref{estimate of q when y_0 ge l})(\ref{initial condition of tilde W})(\ref{bootstrap assumptions of tilde W when |y|<L}), we have
\begin{equation}
\begin{aligned}
\left|\eta^{-\frac{1}{6}}\widetilde{W}\circ\Phi_W^{y_0}(s)\right|&\le l^{-30}\left|\widetilde{W}(y_0,s_0)\right|\eta^{-\frac{1}{6}}(y_0)l^{-20}+l^{-30}l^{-20}(-\varepsilon^\frac{1}{8})\log l\\
&\le l^{-50}\eta^{-\frac{1}{6}}(y_0)\max\left(\varepsilon^{\frac{1}{10}}\eta^{-\frac{1}{6}}(y_0),2(\log M)^4\varepsilon^{\frac{1}{10}}l^4\right)-l^{-50}\varepsilon^{\frac{1}{8}}\log l\le\frac{1}{2}\varepsilon^{\frac{1}{11}}.
\end{aligned}
\end{equation}
\emph{Case 2}. $\gamma=(1,0)$, $l\le|y|\le L$. Let $\mu=\frac{1}{3}$; then we have $D_R-3\mu+3\mu\eta^{-1}\ge\beta_\tau J(\partial_1\ovl{W}+\partial_1W)$, and
\begin{equation}
\begin{aligned}
-\int_{s'}^s\left(D_R-3\mu+3\mu\eta^{-1}\right)\circ\Phi_W^{y_0}(s'')ds''&\le4\int_{s_0}^\infty\eta^{-\frac{1}{3}}\circ\Phi_W^{y_0}(s'')ds''\le -40\log l,
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
\int_{s_0}^s\left|F_q\circ\Phi_W^{y_0}(s')\right|ds'&\lesssim\varepsilon^{\frac{1}{11}}\int_{s_0}^s\left(\eta^{\frac{1}{3}}\eta^{-\frac{1}{2}}\right)\circ\Phi_W^{y_0}(s')ds'\lesssim-\varepsilon^{\frac{1}{11}}\log l.
\end{aligned}
\end{equation}
Now we can bound $q$ by
\begin{equation}
\begin{aligned}
\left|\eta^{\frac{1}{3}}\partial_1\widetilde{W}\circ\Phi_W^{y_0}(s)\right|&\le l^{-30}\left|\partial_1\widetilde{W}(y_0,s_0)\right|\eta^{\frac{1}{3}}(y_0)l^{-40}+l^{-30}l^{-40}(-\varepsilon^\frac{1}{11})\log l\\
&\le l^{-70}\eta^{\frac{1}{3}}(y_0)\max\left(\varepsilon^{\frac{1}{11}}\eta^{-\frac{1}{3}}(y_0),2(\log M)^4\varepsilon^{\frac{1}{10}}l^3\right)-l^{-70}\varepsilon^{\frac{1}{11}}\log l\\
&\le\frac{1}{2}\varepsilon^{\frac{1}{12}}.
\end{aligned}
\end{equation}
\emph{Case 3}. $\gamma=(0,1)$, $l\le|y|\le L$. Let $\mu=0$; then we have $D_R-3\mu+3\mu\eta^{-1}=\beta_\tau J\partial_1\ovl{W}$ and $|F_q|\lesssim\varepsilon^{\frac{1}{12}}\eta^{-\frac{1}{3}}$. The rest is almost the same as Case 2.
\subsection{Closure of bootstrap for $W$}
Similarly, for different $\gamma$ we choose different $\mu$, and we will use (\ref{estimate of q when y_0 ge l}) or (\ref{estimate of q when y_0 ge L}), depending on the location of $y$.
\emph{Case 1}. $|\gamma|=2$, $|y|\ge l$. Now $R=\partial^\gamma W$, and we let
\begin{equation}
\mu=\left\{\begin{aligned}
&\frac{1}{3},\ \ \ \ \gamma=(2,0),(1,1)\\
&\frac{1}{6},\ \ \ \ \gamma=(0,2).
\end{aligned}\right.
\end{equation}
The damping term becomes
\begin{equation}
3\mu-D_R=\left\{
\begin{aligned}
&-\gamma_1+\frac{1}{2}-\beta_\tau\left(1+\gamma_1\mathbbm{1}_{\gamma_1\ge2}\right)J\partial_1W,\ \ \ &\gamma_1\ge1\\
&-\beta_\tau J\partial_1W,\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ &\gamma_1=0.
\end{aligned}\right.
\end{equation}
When $\gamma_1=0$, we have
\begin{equation}
\begin{aligned}
\int_{s'}^s\left(3\mu-D_R\right)\circ\Phi_W^{y_0}(s'')ds''&\le2\int_{s_0}^\infty|\partial_1W|\circ\Phi_W^{y_0}(s'')ds''\le-20\log l,
\end{aligned}
\end{equation}
and the forcing term is bounded by
\begin{equation}
\begin{aligned}
\int_{s_0}^s\left|\eta^{\frac{1}{6}}F_W^{(0,2)}\right|\circ\Phi_W^{y_0}(s')ds'&\lesssim M^{\frac{2}{3}}\int_{s_0}^s\left(\eta^{\frac{1}{6}}\eta^{-\frac{1}{3}+\frac{1}{3(k-3)}}\right)\circ\Phi_W^{y_0}(s')ds'\le -M^{\frac{5}{6}}\log l.
\end{aligned}
\end{equation}
Thus, we have that
\begin{equation}
\begin{aligned}
\left|\eta^{\frac{1}{6}}\partial_{22}{W}\circ\Phi_W^{y_0}(s)\right|&\le l^{-30}\eta^{\frac{1}{6}}(y_0)\left|\partial_{22}{W}(y_0,s_0)\right|l^{-20}-l^{-30}M^{\frac{5}{6}}\log l\\
&\le l^{-50}\eta^{\frac{1}{6}}(y_0)\max\left(\eta^{-\frac{1}{6}}(y_0),\frac{6}{7}\eta^{-\frac{1}{6}}(y_0)+2(\log M)^4\varepsilon^{\frac{1}{10}}l^2\eta^{-\frac{1}{6}}(y_0)\|\eta^{\frac{1}{6}}\|_{L^\infty(|y|\le l)}\right)-l^{-50}M^{\frac{5}{6}}\log l\\
\overset{\varepsilon\text{ small}}&{\le}-2l^{-50}M^{\frac{5}{6}}\log l
\overset{M\text{ large}}{\le}\frac{1}{2}M.
\end{aligned}
\end{equation}
When $\gamma_1>0$, we have that
\begin{equation}
\begin{aligned}
\exp\left(\int_{s'}^s\left(3\mu-D_R\right)\circ\Phi_W^{y_0}(s'')ds''\right)&\le\exp\left\{3\int_{s'}^s|\partial_1W|\circ\Phi_W^{y_0}(s'')ds''+\int_{s'}^s\left(\frac{1}{2}-1\right)ds''\right\}\\
&\le\exp\left\{4\int_{s'}^s\eta^{-\frac{1}{3}}\circ\Phi_W^{y_0}(s'')ds''-\frac{1}{2}(s-s')\right\}\le l^{-80}e^{-\frac{1}{2}(s-s')},
\end{aligned}
\end{equation}
and $|F_q|=\left|\eta^{\frac{1}{3}}F_W^{(\gamma)}\right|\lesssim\eta^{\frac{1}{3}}M^{\frac{1}{3}\gamma_2}\eta^{-\frac{1}{3}}\le M^{\frac{1}{3}\gamma_2+\frac{1}{6}}$. Thus, we have the bound for $\partial^\gamma W$:
\begin{equation}
\begin{aligned}
\left|\eta^{\frac{1}{3}}\partial^\gamma W\right|\circ\Phi_W^{y_0}(s)&\le l^{-20}\eta^{\frac{1}{3}}(y_0)|\partial^\gamma W(y_0,s_0)|l^{-80}e^{-\frac{1}{2}(s-s_0)}+l^{-20}\int_{s_0}^sM^{\frac{1}{3}\gamma_2+\frac{1}{6}}l^{-80}e^{-\frac{1}{2}(s-s')}ds'\\
&\le l^{-100}\eta^{\frac{1}{3}}(y_0)\max\left(\eta^{-\frac{1}{3}}(y_0),C\eta^{-\frac{1}{2}}(y_0)+2(\log M)^4\varepsilon^{\frac{1}{10}}l^2\eta^{-\frac{1}{3}}(y_0)\|\eta^{\frac{1}{3}}\|_{L^\infty(|y|\le l)}\right)e^{-\frac{1}{2}(s-s_0)}\\
&\ \ \ +l^{-101}M^{\frac{1}{3}\gamma_2+\frac{1}{6}}\\
&\le l^{-100}\max\left(1,C+3(\log M)^4\varepsilon^{\frac{1}{10}}l^2\right)+l^{-101}M^{\frac{1}{3}\gamma_2+\frac{1}{6}}\\
&\le M^{\frac{1+\gamma_2}{3}}\underbrace{\left(CM^{-\frac{1}{3}}+l^{-101}M^{-\frac{1}{6}}\right)}_{<\frac{1}{2}\text{ when }M\text{ large}}\le\frac{1}{2}M^{\frac{1+\gamma_2}{3}}.
\end{aligned}
\end{equation}
\emph{Case 2}. $|\gamma|=0$ and $|y|\ge L$. Let $\mu=-\frac{1}{6}$. Now we have $3\mu-D_R-3\mu\eta^{-1}=\frac{1}{2}\eta^{-1}$ and $F_q=\eta^{-\frac{1}{6}}\left(F_W-e^{-\frac{s}{2}}\beta_\tau\dot\kappa\right)$. We bound the damping and forcing terms by
\begin{equation}
\begin{aligned}
\int_{s'}^s\frac{1}{2}\eta^{-1}\circ\Phi_W^{y_0}(s'')ds''&\le\int_{s_0}^\infty\left(1+L^2e^{\frac{2}{5}(s''-s_0)}\right)^{-1}ds''\le L^{-2}\int_{s_0}^\infty e^{-\frac{2}{5}(s''-s_0)}ds''\le L^{-1}=\varepsilon^{\frac{1}{10}},
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
\int_{s_0}^s\left|F_q\circ\Phi_W^{y_0}(s')\right|ds'&\lesssim\int_{s_0}^s\left(e^{-\frac{s'}{2}}+Me^{-\frac{s'}{2}}\right)\eta^{-\frac{1}{6}}\circ\Phi_W^{y_0}(s')ds'\lesssim M\int_{s_0}^s e^{-\frac{s'}{2}}ds'\le\varepsilon^\frac{1}{3}.
\end{aligned}
\end{equation}
Thus, we have that
\begin{equation}
\begin{aligned}
\left|\eta^{-\frac{1}{6}}W\right|\circ\Phi_W^{y_0}(s)&\le e^\varepsilon\eta^{-\frac{1}{6}}(y_0)|W(y_0,s_0)|e^{\varepsilon^\frac{1}{10}}+e^\varepsilon\varepsilon^{\frac{1}{3}}e^{\varepsilon^{\frac{1}{10}}}\\
&\le e^\varepsilon e^{\varepsilon^\frac{1}{10}}\eta^{-\frac{1}{6}}(y_0)\max\left(\eta^{\frac{1}{6}}(y_0)(1+\varepsilon^{\frac{1}{11}}),\eta^{\frac{1}{6}}(y_0)+\varepsilon^{\frac{1}{12}}\eta^{-\frac{1}{3}}(y_0)\right)+e^\varepsilon\varepsilon^{\frac{1}{3}}e^{\varepsilon^{\frac{1}{10}}}\\
&\le 1+\varepsilon^{\frac{1}{19}}.
\end{aligned}
\end{equation}
\emph{Case 3}. $\gamma=(1,0)$ and $|y|\ge L$. In this case, we can see that $q=\eta^{\frac{1}{3}}\partial_1W$, $3\mu-D_R-3\mu\eta^{-1}=-\beta_\tau J\partial_1W-\eta^{-1}\le-\beta_\tau J\partial_1W$, and
\begin{equation}
\begin{aligned}
\int_{s'}^s\left(3\mu-D_R-3\mu\eta^{-1}\right)\circ\Phi_W^{y_0}(s'')ds''&\le2\int_{s'}^s|\partial_1W|\circ\Phi_W^{y_0}(s'')ds''\\
&\lesssim\int_{s_0}^\infty\left(1+L^2e^{\frac{2}{5}(s''-s_0)}\right)^{-\frac{1}{3}}ds''\lesssim L^{-\frac{2}{3}}\le\varepsilon.
\end{aligned}
\end{equation}
The forcing term is bounded by
\begin{equation}
\begin{aligned}
\int_{s_0}^s\left|F_q\circ\Phi_W^{y_0}(s')\right|ds'&\lesssim\int_{s_0}^s\varepsilon^{\frac{1}{4}}\left|\eta^{\frac{1}{3}}\eta^{-\frac{1}{2}+\frac{2}{3(k-2)}}\right|\circ\Phi_W^{y_0}(s')ds'\lesssim \varepsilon^{\frac{1}{4}}\int_{s_0}^s \eta^{-\frac{1}{12}}\circ\Phi_W^{y_0}(s')ds'\lesssim\varepsilon^{\frac{1}{12}}.
\end{aligned}
\end{equation}
Thus we have the bound
\begin{equation}
\begin{aligned}
\left|\eta^{\frac{1}{3}}\partial_1W\right|\circ\Phi_W^{y_0}(s)&\le e^\varepsilon\eta^{\frac{1}{3}}(y_0)|\partial_1W(y_0,s_0)|e^{\varepsilon}+Ce^\varepsilon\varepsilon^{\frac{1}{4}}e^{\varepsilon}\\
&\le e^\varepsilon e^{2\varepsilon}\eta^{\frac{1}{3}}(y_0)\max\left(\eta^{-\frac{1}{3}}(y_0)(1+\varepsilon^{\frac{1}{12}}),\eta^{-\frac{1}{3}}(y_0)+\varepsilon^{\frac{1}{12}}\eta^{-\frac{1}{3}}(y_0)\right)+Ce^{2\varepsilon}\varepsilon^{\frac{1}{4}}\\
&\le 1+\varepsilon^{\frac{1}{13}}.
\end{aligned}
\end{equation}
\emph{Case 4}. $\gamma=(0,1)$ and $|y|\ge L$. Let $\mu=0$; then we have $q=R=\partial_2W$ and $3\mu-D_R-3\mu\eta^{-1}=-\beta_\tau J\partial_1W$. Thus we have the bound for the damping term:
\begin{equation}
\int_{s'}^s\left(3\mu-D_R-3\mu\eta^{-1}\right)\circ\Phi_W^{y_0}(s'')ds''\le\varepsilon.
\end{equation}
The forcing term is bounded by
\begin{equation}
\begin{aligned}
\int_{s_0}^s\left|F_q\circ\Phi_W^{y_0}(s')\right|ds'&\lesssim\int_{s_0}^sM^2\varepsilon^{\frac{1}{6}}\eta^{-\frac{1}{3}}\circ\Phi_W^{y_0}(s')ds'\le\varepsilon^{\frac{1}{8}}.
\end{aligned}
\end{equation}
Finally we have that
\begin{equation}
\begin{aligned}
\left|\partial_2W\right|\circ\Phi_W^{y_0}(s)&\le e^\varepsilon|\partial_2W(y_0,s_0)|e^{\varepsilon}+e^\varepsilon\varepsilon^{\frac{1}{8}}e^{\varepsilon}\le e^{2\varepsilon}\max\left(\frac{3}{4},\frac{2}{3}+\varepsilon^{\frac{1}{13}}\right)+e^{2\varepsilon}\varepsilon^{\frac{1}{8}}\le \frac{5}{6}.
\end{aligned}
\end{equation}
\section{Proof of the main theorem}\label{proof of the main theorem}
In this section we prove the main theorem, discuss the H\"older regularity of $w$ and deduce a lower bound of vorticity.
\begin{proof}[Proof of the main theorem]
The local well-posedness of $(u,\sigma)$ in physical variables implies the local well-posedness of $(W,Z,A,\kappa,\tau,\xi,n,\phi)$ in self-similar variables, and the global existence of $(W,Z,A,\kappa,\tau,\xi,n,\phi)$ in self-similar variables is obtained via the bootstrap bound.
Now we prove the solution has the desired blow-up behavior. From the bootstrap assumptions and $\tau(t)-t=\int_t^{T_*}(1-\dot\tau(t'))dt'$ we can see that $c(T_*-t)\le\tau-t=e^{-s}\le C(T_*-t)$. Since $R(t)\in SO(2)$, using (\ref{estimates of f-dependent functions, inequalities}) and (\ref{estimates of U,S}), we have that
\begin{equation}
\begin{aligned}
\left|[(R(t)N)\cdot\nabla_{\mathrm{x}}]u\right|&=|N\cdot\nabla_{\tilde{x}}\tilde{u}|=\left|\left(\frac{\sqrt{1+f_{x_2}^2}}{1+f_{x_1}}\partial_{x_1}-\frac{f_{x_2}}{\sqrt{1+f_{x_2}^2}}\partial_{x_2}\right)\mathring{u}\right|\le (1+\varepsilon^{\frac{2}{3}})(1+\varepsilon^{\frac{3}{4}})e^s+\varepsilon\le\frac{C}{T_*-t}.
\end{aligned}
\end{equation}
Similarly, we can see that the derivative of $u$ aligned with the shock is bounded:
\begin{equation}
\begin{aligned}
\left|[(R(t)T)\cdot\nabla_{\mathrm{x}}]u\right|&=|T\cdot\nabla_{\tilde{x}}\tilde{u}|=\left|\frac{1}{\sqrt{1+f_{x_2}^2}}\partial_{x_2}\mathring{u}\right|\le1+\varepsilon^{\frac{1}{2}}.
\end{aligned}
\end{equation}
In the same way, we can prove that $\left|[(R(t)N)\cdot\nabla_{\mathrm{x}}]\sigma\right|\le\frac{C}{T_*-t}$ and $\left|[(R(t)T)\cdot\nabla_{\mathrm{x}}]\sigma\right|\le C$. Consequently, we have that
\begin{equation}
|\nabla_\mathrm{x}u(t)|\le\left|[(R(t)N)\cdot\nabla_{\mathrm{x}}]u\right|+\left|[(R(t)T)\cdot\nabla_{\mathrm{x}}]u\right|\le\frac{C}{T_*-t},
\end{equation}
\begin{equation}
|\nabla_\mathrm{x}\sigma(t)|\le\left|[(R(t)N)\cdot\nabla_{\mathrm{x}}]\sigma\right|+\left|[(R(t)T)\cdot\nabla_{\mathrm{x}}]\sigma\right|\le\frac{C}{T_*-t}.
\end{equation}
From the bootstrap assumptions $|\dot\xi|\le M^\frac{1}{4}$ and $|\dot n_2|\le M^2\varepsilon^{\frac{1}{2}}$, we know that both $\xi$ and $n$ have limits as $t\rightarrow T_*$.
Next, by the definition of $n$ and $N$, and the coordinate transformations, we have that $n(t)=R(t)N(0,t)$. Also we can see that
\begin{equation}
\begin{aligned}
\left|[(R(t)N)\cdot\nabla_{\mathrm{x}}]u(\xi(t),t)\right|&=\left|\left(\frac{\sqrt{1+f_{x_2}^2}}{1+f_{x_1}}\partial_{x_1}-\frac{f_{x_2}}{\sqrt{1+f_{x_2}^2}}\partial_{x_2}\right)\mathring{u}(0,t)\right|\\
&=\left|\frac{-e^s+\partial_{x_1}z(0,t)}{2}\tilde{e}_1+\partial_{{x_1}}a(0,t)\tilde{e}_2\right|\ge(\frac{1}{2}-\varepsilon^{\frac{1}{2}})e^s.
\end{aligned}
\end{equation}
Similarly, we have that
\begin{equation}
\begin{aligned}
\left|[(R(t)N)\cdot\nabla_{\mathrm{x}}]\sigma(\xi(t),t)\right|=\left|\partial_{{x_1}}\mathring{\sigma}(0,t)\right|=\left|\frac{-e^s-\partial_{{x_1}}z(0,t)}{2}\right|\ge(\frac{1}{2}-\varepsilon^{\frac{1}{2}})e^s.
\end{aligned}
\end{equation}
Thus, we can conclude that $\|\nabla_{\mathrm{x}}u\|_{L^\infty}\ge\left|[(R(t)N)\cdot\nabla_{\mathrm{x}}]u(\xi(t),t)\right|\ge\frac{c}{T_*-t}$, and $\|\nabla_{\mathrm{x}}\sigma\|_{L^\infty}\ge\left|[(R(t)N)\cdot\nabla_{\mathrm{x}}]\sigma(\xi(t),t)\right|\ge\frac{c}{T_*-t}$.
Next, we prove (\ref{boundedness of u and sigma away from origin}). This follows from the fact that $\|\partial_{{x_1}}w\|_{L^\infty((B_x(0,\delta))^c)}\le C(\delta)$. From (\ref{refined bootstrap inequality of W}), we have that
\begin{equation}
\begin{aligned}
\|\partial_{{x_1}}w\|_{L^\infty((B_x(0,\delta))^c)}&\le(1+\varepsilon^{\frac{1}{13}})e^s\left\|\frac{1}{(1+y_1^2+y_2^6)^{1/3}}\right\|_{L^\infty_y(\{e^{-3s}y_1^2+e^{-s}y_2^2\le\delta^2\}^c)}\\
&\le2\delta^{-2}(1+\varepsilon^{\frac{1}{13}})\frac{e^s}{(1+e^{3s})^{\frac{1}{3}}}\le3\delta^{-2}.
\end{aligned}
\end{equation}
Now we have completed the proof of the main shock formation result and of (\ref{blow up speed for u})-(\ref{limit of n}). The H\"older bound is left to the next subsection.
\end{proof}
\subsection{H\"older regularity for $w$}
We now prove that the Riemann invariant $w$ possesses a uniform $1/3$-H\"older bound up to the blow-up time.
\begin{proposition}
For the Riemann variable $w$, we have that $w\in L^\infty([-\varepsilon,T_*);C^{1/3})$.
\end{proposition}
\begin{proof}
The proof of this proposition is the same as that in \cite{buckmaster2022formation}, and for the reader's convenience we outline the proof here.
Using the bootstrap assumptions we directly compute the $C^{1/3}$ norm:
\begin{equation}
\begin{aligned}
\frac{|w(x_1,x_2,t)-w(x_1',x_2',t)|}{|x-x'|^{1/3}}&=\frac{e^{-\frac{s}{2}}|W(y,s)-W(y',s)|}{[e^{-3s}(y_1-y_1')^2+e^{-s}(y_2-y_2')^2]^{1/6}}\\
&\le\frac{|W(y_1,y_2,s)-W(y_1',y_2,s)|}{|y_1-y_1'|^{1/3}}+e^{-\frac{s}{3}}\frac{|W(y_1',y_2,s)-W(y_1',y_2',s)|}{|y_2-y_2'|^{1/3}}\\
&\lesssim\frac{\int_{y_1'}^{y_1}(1+z^2)^{-1/3}dz}{|y_1-y_1'|^{1/3}}+e^{-\frac{s}{3}}|y_2-y_2'|^{2/3}
\overset{y\in\mathcal{X}(s)}{\lesssim}1.
\end{aligned}
\end{equation}
Now we have proved that $w$ is uniformly H\"older-$1/3$ continuous with respect to $x$, and one can check that the transformations $\tilde{x}\mapsto x$ and $\mathrm{x}\mapsto\tilde{x}$ do not affect the H\"older-$1/3$ continuity of $w$.
\end{proof}
\subsection{Discussion of the vorticity}
From (\ref{evolution of specific vorticity in tilde x coordinate}), we know that in the $\tilde{x}$-coordinates the specific vorticity $\tilde{\zeta}$ is purely transported by $\tilde{u}+\tilde{v}$. From (\ref{estimate of of V})(\ref{estimates of U,S}) and the estimate (\ref{estimates of f-dependent functions, inequalities}) of $|f|$, we can deduce that $|\tilde{u}+\tilde{v}|\lesssim M^\frac{1}{4}$ on $\{|\tilde{x}_1|\le 10\varepsilon^{\frac{1}{2}},|\tilde{x}_2|\le10\varepsilon^\frac{1}{6}\}\supset B_{\tilde x}(0,\varepsilon^{\frac{3}{4}})$. Since $|T_*-t_0|=|T_*+\varepsilon|\lesssim\varepsilon$, if $\tilde{\zeta}(\tilde{x},t_0)\ge c_0$ for some $c_0>0$ on $B_{\tilde x}(0,\varepsilon^{\frac{3}{4}})$, then $\tilde{\zeta}(\tilde{x},t)\ge c_0/2$ on $B_{\tilde x}(0,\varepsilon^{\frac{3}{4}}/2)$.
From the bootstrap assumptions and (\ref{closure of kappa bootstrap}) we have that
$$\left|S-\frac{\kappa_0}{2}\right|\lesssim|\kappa-\kappa_0|+e^{-\frac{s}{2}}|W|+|Z|\lesssim M\varepsilon+\varepsilon^{\frac{1}{6}}\lesssim\varepsilon^{\frac{1}{6}}.$$
Thus the sound speed $\tilde{\sigma}\ge\frac{\kappa_0}{4}$, and $|\tilde\omega|=|\tilde\zeta||\tilde\rho|=|\zeta|(\alpha|\sigma|)^{1/\alpha}\ge\frac{c_0}{2}\cdot(\frac{\alpha\kappa_0}{4})^{1/\alpha}$ on $B_{\tilde x}(0,\varepsilon^{\frac{3}{4}}/2)$.
The initial conditions stated in subsection \ref{Initial data in physical variables} cannot rule out the possibility that $\tilde{\zeta}(\tilde{x},t_0)$ has a positive lower bound on $B_{\tilde x}(0,\varepsilon^{\frac{3}{4}})$; thus there do exist solutions that satisfy the listed initial conditions and exhibit non-zero vorticity at the blow-up point.
\section*{Data availability statement}
Data sharing is not applicable to this article as no new data were created or analyzed in this study.
\section*{Acknowledgements}
The author is supported by the China Scholarship Council (File No.~202106100096) and thanks the Department of Mathematics of the National University of Singapore for its warm hospitality. The author is grateful to Prof.~Xinliang An and Dr.~Haoyang Chen for their valuable instruction, discussions and suggestions, and would also like to thank Prof.~Lifeng Zhao and Yiya Qiu for helpful correspondence.
\section{Introduction}
\label{sec:intro}
In applications such as environment monitoring \cite{rogers1996predicting,azzali2000mapping} and surveillance \cite{othman2012wireless,wimalajeewa2017application}, temporal sequences of images are compared to determine changes within the physical region of interest. When data are acquired indirectly, each image in the temporal sequence is typically first recovered individually, before any comparisons are made. However, if the scene contains rigid bodies, it is often enough to infer important surveillance and monitoring information from the corresponding time sequenced {\em edge maps},
\cite{gelb2017detecting,GS2017,SVGR,gelb2019reducing}. This is useful because recovering an edge map may be more accurate as well as less costly than recovering a full scale image.
Sequential edge maps can be used more generally in image recovery processes.
For example, the weights in weighted $\ell_1$ regularization methods are designed to be inversely proportional to the strength of the signal in the sparse domain (see e.g.~\cite{candes2008enhancing}).
Accurate edge maps can help to determine these weights off-line and in advance, as was demonstrated in \cite{adcock2019jointsparsity,gelb2019reducing,scarnati2019accelerated}. Segmentation, classification and object-based change detection (see e.g.~\cite{xiao2022sequential}) are also examples of procedures where edge information can improve the quality of the results.
The primary motivation in this investigation comes from spotlight synthetic aperture radar (SAR), where the phase history data may be considered as Fourier data, \cite{jakowatz2012spotlight}.\footnote{We note that SAR images are complex-valued and that retaining phase information is important for downstream processing. While only real-valued images are included in this study, our approach is not inherently limited. In future investigations we will consider the modifications needed for complex-valued signal recovery.} We therefore seek to recover a sequence of edge maps from multiple observations of noisy and under-sampled Fourier measurements at different time instances, while noting that our underlying methodology can be applied to other types of measurements with some modifications.
Several algorithms have been designed to recover the edges in a real-valued signal from (multiple) under-sampled and noisy Fourier measurements at a {\em single instance in time}, even in situations where missing bands of Fourier data would make it impossible to recover the underlying image, \cite{GS2017,SVGR,viswanathan2012iterative}. Many techniques have also been developed to recover the magnitude of a SAR image using a sequence of sub-aperture data acquisitions, again at a single instance in time, \cite{Cetin2005,ccetin2014sparsity,moses2004wide,stojanovic2008joint}. In this case the images are first {\em separately} recovered, each corresponding to a single data acquisition. The final estimate at each pixel is then calculated as the weighted average of the recovered images. However, processing the individual measurements to recover a set of images before combining the joint information can lead to additional information loss, especially in the case of noisy and incomplete data collections, \cite{archibald2016image,scarnati2019accelerated,wasserman2015image,xiao2022sequential}. By contrast, exploiting redundant information without having to first recover separate images leads to more accurate image recovery, \cite{scarnati2019accelerated,xiao2022sequential}. Recovery algorithms that incorporate information from multi-measurement vectors are often referred to as MMV recovery algorithms, \cite{adcock2019jointsparsity,chen2006theoretical,cotter2005sparse}.
In developing methods to recover temporal sequences of images from noisy and under-sampled Fourier measurements, changes such as translation and rotation that are expected to occur between the sequential data collections must also be accounted for. Thus none of the techniques mentioned in the preceding paragraph is suitable.
By contrast, the sequential image recovery developed in \cite{xiao2022sequential} employs a compressive sensing (CS) framework with regularization terms designed to both account for sparsity in the appropriate transform domain as well as to promote similarities in regions for which {\em no} change has been detected. Following the ideas in \cite{scarnati2019accelerated}, the edge maps are generated {\em without} first recovering the images so that additional information is not lost due to processing.
In this way inter-image temporal correlations between adjacent measurements can be better exploited. Although demonstrated to yield improved accuracy when compared to algorithms which separately recover each image in the sequence, that is, without using inter-image information, the method in \cite{xiao2022sequential} is multi-step and relies on several hand-tuned parameters. Specifically, after the edge maps for each individual image are generated from the observed data, rigid objects must be constructed to determine the changed and unchanged regions between each adjacent image (so-called change masks), which would then in turn form the regularization term. This step typically requires some type of clustering algorithm, leading to both a loss of information due to the extra processing and a lack of robustness due to the additional hand-tuning of parameters. Finally, there is an assumption that a moving object is not contained within another (background) object. For example, one would have to remove the skull in a magnetic resonance image (MRI) brain scan in order to determine changes within.
Such concerns were initially addressed in \cite{xiao2022sequential1}. There, the method in \cite{xiao2022sequential} was recast in a Bayesian framework, with an {\em intra-}image prior used to promote sparsity in the edges of each image, and a second {\em inter-}image prior used to promote the similarity of the unchanged-regions. By treating the similarity between adjacent images as a random variable, forming the rigid boundaries as an intermediate step was no longer required.
In this investigation we consider sequential edge map recovery (as opposed to sequential image recovery) which allows us to further simplify the approach introduced in \cite{xiao2022sequential1} by integrating both intra- and inter-image information into one prior. That is, we introduce a prior that simultaneously promotes intra-image sparsity and inter-image similarity. To this end, we note that the classic sparse Bayesian learning (SBL) \cite{tipping2001sparse,wipf2011latent,zhang2011sparse,chen2016simultaneous,zhang2021empirical} requires a shared support of all the collected measurements to approximate edges. Such an assumption will clearly be violated when change occurs between sequential data collections. To compensate, our proposed method introduces a new set of hyper-parameters so that information outside the shared support is not considered in the joint estimation of the edge values. Our numerical examples demonstrate that by constructing the SBL algorithm in this way we are able to account for changes in each sequenced edge map. Such information can then be used in downstream processing as warranted by the particular application.
The rest of this paper is organized as follows. Section \ref{sec:preliminaries} provides the necessary background for our new method, which is introduced in Section \ref{sec:newmodel}. Numerical examples for both one- and two-dimensional signals are considered in Section \ref{sec:numresultsSBL}. We also demonstrate how our method for recovering sequential edge maps can then be used for recovering sequential images. Concluding remarks are given in Section \ref{sec:discussion_ongoing}.
\section{Preliminaries}
\label{sec:preliminaries}
Let $f_{j} \colon [-\pi,\pi]\to\mathbb R$ be a sequence of one-dimensional piecewise smooth functions at different times $j=1,\dots,J$. The corresponding Fourier samples are given as
\begin{align}
\label{eq:fourcoeff}
\hat f_{j,l} =\frac{1}{2\pi} \int_{-\pi}^\pi f_{j} (s)e^{-i ls}ds, \quad -\frac{n}{2}\leq l\leq \frac{n}{2}-1, \quad j=1,\dots,J.
\end{align}
We note that our approach is directly extendable to two-dimensional images, which will be demonstrated via the numerical experiments in Section \ref{sec:numresultsSBL}.
Suppose the corresponding forward model for the observations of $f_{j}$ is given by
\begin{equation}\label{eq:forward}
\vect g_{j} =F_{j} \vect f_{j} , \quad j=1,\dots, J,
\end{equation}
where $\vect f_{j} $ is the discrete approximation of $f_{j} $ at $n$ uniform grid points on $[-\pi,\pi]$ and {$F_{j} \in\mathbb C^{n\times n}$} is the discrete Fourier transform forward operator. In our numerical experiments, each $F_{j} $ is missing an arbitrarily chosen band-width of frequencies, indicating that the data sequence is compromised in some way, as well as being under-sampled. Details regarding the missing band-width will be made explicit in Section \ref{sec:numresultsSBL}.
We further assume that each of the $J$ observations are corrupted with additive independent and identically distributed (iid) zero-mean Gaussian noise with unknown variance $\sigma^2$ given by
\begin{equation}\label{eq:likelihood_noise}
\boldsymbol\epsilon_{j} \sim\mathcal{N}\left(0,\beta^{-1}I_n\right), \quad j=1,\dots,J,
\end{equation}
where $\beta^{-1} =\sigma^{2}$.
The forward model \eqref{eq:forward} therefore becomes
\begin{align}\label{eq:forward_noisy}
\tilde{\vect{g}}_{j} =F_{j} \vect f_{j} +\boldsymbol\epsilon_{j} , \quad j=1,\dots, J.
\end{align}
We seek to recover the sequential edge maps corresponding to the observations in \eqref{eq:forward_noisy} using a Bayesian framework.
As already noted, the edge maps may be useful on their own or in downstream processing. Examples in Sections \ref{sec:img_rec} demonstrate how the edge maps can be used effectively for sequential image recovery.
\subsection{Edge Detection from Fourier data}
\label{sec:edge detection}
Detecting the edges of a piecewise smooth signal or image from a finite number of Fourier samples, \eqref{eq:fourcoeff}, is a well-studied problem. Due to its simple linear construction, here we employ the concentration factor (CF) method, initially developed in \cite{gelb1999detection}. Below we briefly describe the CF method in the context of edge map recovery. For ease of notation we drop the $j$ and write $f = f_{j} $ since what follows applies to any $j = 1,\dots, J$.
Consider the piecewise analytic function $f$ which has $K$ simple discontinuities located at $\{\xi_k\}_{k=1}^{K}$ in $(-\pi,\pi]$. The corresponding jump function is defined as
\begin{equation}\label{eq:jumpFunc}
[f](s)=f(s^+)-f(s^-),
\end{equation}
where $f(s^+)$ and $f(s^-)$ represent the right- and left-hand side limit of $f$ at location $s$.
We then define the signal $\vect x \in \mathbb{R}^n$ as the jump function of $f$ evaluated at $n$ uniform grid points in $(-\pi,\pi]$, given by ${s_\nu} = -\pi + \frac{2\pi \nu}{n}$. Specifically,
\begin{equation}
\label{eq:xvector}
{\vect x} = \{x_{\nu}\}_{\nu = 1}^n = \{[f](s_\nu)\}_{\nu = 1}^n.
\end{equation}
Observe that ${\bf x}$ is sparse, as it is zero everywhere except for indices corresponding to points in the domain where there is an edge, or jump discontinuity, in the function $f$.
An equivalent expression of \eqref{eq:xvector} is given by
\begin{align*}
x_\nu = \sum^{K}_{k=1}[f](\xi_k)I_{\xi_k}(s_{\nu}), \quad {\nu = 1,\dots,n,}
\end{align*}
where $I_{\xi_k}(s_{\nu})$ is the indicator function defined as
\begin{equation*}
I_{\xi_k}(s_\nu) =\begin{cases}
1, &s_\nu = \xi_k\\
0,&\text{otherwise}.
\end{cases}
\end{equation*}
Given the set of Fourier coefficients $\{\hat f_l\}_{l=-n/2}^{n/2-1}$ of each $\vect f$ in the temporal sequence, the CF edge detection method is formulated as
\begin{align}\label{eq:1d_CF}
S^\sigma_n f(s) = \sum_{l=-n/2}^{n/2-1} \tau(l) \hat f_l e^{i ls},
\end{align}
where
$$ \tau(l) = i \,\text{sgn}(l) \sigma\left(\frac{2\abs{l}}{n}\right),\quad l = -n/2,\dots,n/2-1.$$
Here $\sigma(\eta)$, $\eta \in [0,1]$, is an admissible concentration factor that guarantees the convergence of \eqref{eq:1d_CF} to $[f](s)$, \cite{gelb1999detection,gelb2000detection}.\footnote{Loosely speaking, $\tau(l)$ is a band-pass filter that amplifies more of the high-frequency coefficients which typically contain information about the edges of the underlying function $f$. We also note that $\tau(l)$ can be constructed for {\em discrete} Fourier measurements. Hence in this regard the method developed in our investigation applies to other forms of measurement data, as well as multiple data sources, so long as they can be stored as (discrete) Fourier samples.} Since in two dimensions the discontinuities form curves, so that the number of jump locations is infinite, \eqref{eq:1d_CF} does not directly extend. Parameterization of the corresponding edge curve and rotation of the concentration factor are incorporated into the CF technique to circumvent this issue in the two-dimensional case; see \cite{adcock2019jointsparsity,xiao2022sequential}.
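To make \eqref{eq:1d_CF} concrete, the following minimal Python sketch applies the CF method to a piecewise constant test signal. The first-order polynomial concentration factor $\sigma(\eta)=\pi\eta$ used here is one admissible choice (any admissible factor may be substituted), and the grid size is chosen arbitrarily for illustration.
\begin{verbatim}
import numpy as np

# CF edge detection from Fourier data (a sketch; sigma(eta) = pi * eta)
n = 256
s = -np.pi + 2 * np.pi * np.arange(n) / n            # uniform grid on [-pi, pi)
f = (np.abs(s) < 1.0).astype(float)                  # jumps: +1 at s=-1, -1 at s=+1

l = np.arange(-n // 2, n // 2)                       # frequencies -n/2, ..., n/2 - 1
fhat = np.exp(-1j * np.outer(l, s)) @ f / n          # DFT approximation of the
                                                     # Fourier coefficients
tau = 1j * np.sign(l) * np.pi * (2 * np.abs(l) / n)  # band-pass filter tau(l)
jump = np.real(np.exp(1j * np.outer(s, l)) @ (tau * fhat))

k = np.argmax(jump)                                  # peak of the jump approximation
print(s[k], jump[k])                                 # concentrates near s = -1, height ~ +1
\end{verbatim}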
Given all $n$ Fourier coefficients in \eqref{eq:fourcoeff} for each $j = 1,\dots, J$, it is possible to recover the sequence of edge maps for moderate levels of noise. It is also possible to incorporate inter-image information from the temporal sequence of data sets to improve each individual edge map recovery, \cite{xiao2022sequential}. As already noted, however, this additional step requires more processing and therefore more hand-tuning of parameters. Moreover, as the SNR decreases and as fewer coefficients in each data set become available (for example, if a band of Fourier data is not usable), correlating information from the sequenced data becomes more challenging. Thus we are motivated to use a sparse Bayesian learning approach to incorporate the temporal inter-image information into the process of edge map recovery, which we now describe.
\subsection{Sparse Bayesian Learning}
\label{sec:SBL}
We start by applying the CF method in \eqref{eq:1d_CF} on $\tilde{\vect g}_{j} $ in \eqref{eq:forward_noisy} to generate $j = 1,\dots,J$ approximations to $\vect x_{j} $ in \eqref{eq:xvector} as
\begin{align}\label{eq:edge_est}
\begin{split}
\vect y_{j}&=E_{j}\left(F_{j}\vect f_{j}+\boldsymbol\epsilon_{j}\right),\\
&= E_{j}\vect{g}_{j}+E_{j}\boldsymbol\epsilon_{j},\\
&=\tilde{\vect x}_{j}+\tilde{\boldsymbol\epsilon}_{j},
\quad j=1,\dots, J.
\end{split}
\end{align}
Here $E_{j}\in {\mathbb C^{n\times n}}$ is the (diagonal) edge detection operator with entries $E_{j}(l,l) = \tau(l-(n/2+1))$, $l = 1,\dots,n$,
and $\tilde{\vect x}_{j}$ is an approximation to the sparse signal ${\vect x}_{j}$. We note that each matrix $E_{j}$ is designed specifically for the case when bands of Fourier data are unavailable, see \cite{viswanathan2012iterative}.
We then stack all measurements into a vector and obtain the new model as
\begin{equation}\label{eq:forward_m}
Y= X+\mathcal E,
\end{equation}
where $Y=\col\left([\vect y_{1},\dots,\vect y_{J}]^T\right) \in \mathbb R^{nJ}$ and $X=\col\left([\vect x_{1},\dots,\vect x_{J}]^T\right)\in\mathbb R^{nJ}$.
That is, there are $n$ arrays of length $J$ in $X$ and $Y$ where $Y_i$ denotes the $i^{th}$ array of measurements at location $i$, and $X_i$ the corresponding solution.
Compressive sensing (CS) methods \cite{candes2006stable,candes2006robust,candes2006near,donoho2006compressed,candes2008enhancing,langer2017automated} are often used to solve problems in the form of \eqref{eq:forward_m}. In this regard the classical unconstrained optimization problem used to recover the underlying signal is given by
\begin{equation}\label{eq:cs_model}
\argmin_{X}(\norm{Y-X}^2_2+\lambda \norm{X}_1),
\end{equation}
where the first term is used to promote data fidelity and the second (regularization) term encourages sparsity in the solution. As is explained in the classical CS literature, the $\ell_1$ norm in \eqref{eq:cs_model} serves as a surrogate for $\norm{\cdot}_0$, since the $0$ ``norm'' (pseudo-norm) yields an intractable problem \cite{wipf2004minimization,candes2006stable}. The regularization parameter $\lambda$ balances the contribution between the terms, with smaller $\lambda$ implying high quality data and vice versa. Importantly, \eqref{eq:cs_model} does not account for the correlated information available from the $J$ data sets.
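Because the forward operator in \eqref{eq:forward_m} is simply the identity, the minimizer of \eqref{eq:cs_model} decouples componentwise into soft-thresholding; a minimal sketch (the threshold is $\lambda/2$ since the data-fidelity term in \eqref{eq:cs_model} is not halved) is
\begin{verbatim}
import numpy as np

def soft_threshold(Y, lam):
    # componentwise minimizer of ||Y - X||_2^2 + lam * ||X||_1
    return np.sign(Y) * np.maximum(np.abs(Y) - lam / 2.0, 0.0)
\end{verbatim}
This makes explicit how larger $\lambda$ zeroes out more entries, but also that each entry of $Y$ is treated independently, with no coupling across the $J$ data sets.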
Methods to exploit the joint sparsity attainable from the multiple measurement vectors (MMVs) have been subsequently developed in \cite{cotter2005sparse,chen2006theoretical,eldar2009robust,zheng2012subspace,deng2013group,singh2016weighted}, and somewhat relatedly, additional refinement to \eqref{eq:cs_model} can be made by employing {\em weighted} $\ell_1$ or $\ell_2$ regularization, \cite{candes2008enhancing,churchill2018edge,gelb2019reducing,scarnati2019accelerated,ren2020imaging}. Both approaches have the effect of more heavily penalizing sparse regions in the sparse domain of the solution than in locations of support. Ultimately these techniques require information regarding the support locations in the solution's sparse domain, which may not be readily accessible when the measurement data are heavily corrupted.
As an alternative, \eqref{eq:forward_m} can be viewed from a {\em sparse Bayesian learning} (SBL) perspective. By choosing uninformative hyper-priors, no advanced knowledge about the support in the sparse domain is required. Since SBL serves as the foundation of the proposed algorithm in our investigation, we briefly review it below.
The inverse problem \eqref{eq:forward_m} can be formulated in a hierarchical Bayesian framework by extending $X$, $Y$ and a collective parameter $\boldsymbol\theta$ into random variables (see e.g.~\cite{calvetti2007introduction,kaipio2006statistical}). We use the following density functions to describe the relationships among $X$, $Y$ and $\boldsymbol\theta$:
\begin{itemize}
\item The {\em prior} $p(X\vert\boldsymbol\theta)$ is the probability distribution of $X$ conditioned on $\boldsymbol\theta$.
\item The {\em hyper-prior} $p(\boldsymbol\theta)$ is the probability distribution of $\theta$.
\item The {\em likelihood} $p(Y\vert X,\boldsymbol\theta)$ is the probability distribution of $Y$ conditioned on $X$ and $\boldsymbol\theta$.
\item The {\em posterior} $p(X,\boldsymbol\theta \vert Y)$ is the joint probability distribution of $X$ and parameter $\boldsymbol\theta$ conditioned on $Y$.
\end{itemize}
Our goal is to recover the posterior distribution, which by Bayes' theorem is given by
\begin{equation}\label{eq:bayes}
p(X,\boldsymbol\theta\vert Y)=\frac{p(Y\vert X)p(X\vert\boldsymbol\theta)p(\boldsymbol\theta)}{p(Y)}.
\end{equation}
In particular, $\boldsymbol\theta$ is not pre-determined a {\em priori} but rather as a part of the Bayesian inference. A main challenge in the above formulation is the computation of the marginal distribution \(p(Y) \), usually an intractable high-dimensional integral. As it is impractical to compute the posterior \(p(X,\vect{\theta}\vert Y) \) directly from \eqref{eq:bayes}, we instead seek its approximation. Specifically, we first employ the empirical Bayes approach to obtain a point estimate \(\hat{\vect{\theta}} \) and then compute the conditional posterior \(p(X\vert\hat{\vect{\theta}} , Y) \) as an approximation of the joint posterior. A point estimate of \(X \) can also be realized as the {\em maximum a posteriori} (MAP) estimate of the conditional posterior, given by
\begin{equation}
X^\ast=\argmax_{X}p(X, \hat{\vect{\theta}} \vert Y).
\label{eq:BayesMAP}
\end{equation}
We note that \eqref{eq:cs_model} is equivalent to finding a point estimate approximation to \eqref{eq:BayesMAP} when using a Laplace prior with a pre-determined hyper-parameter $\lambda$.
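To see this, suppose $p(X\vert\lambda')\propto\exp(-\lambda'\norm{X}_1)$ with $\lambda'$ fixed. Combining this prior with the likelihood \eqref{eq:likelihood} yields
\begin{equation*}
-\ln p(X\vert Y)=\frac{\beta}{2}\norm{Y-X}_2^2+\lambda'\norm{X}_1+\text{const}=\frac{\beta}{2}\left(\norm{Y-X}_2^2+\lambda\norm{X}_1\right)+\text{const},\quad\lambda=\frac{2\lambda'}{\beta},
\end{equation*}
so that the MAP estimate coincides with the solution of \eqref{eq:cs_model}.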
Sparse Bayesian learning (SBL) is a catch-all phrase for a class of algorithms designed to calculate the hyper-parameter estimate \(\hat{\vect{\theta}}\) and the corresponding conditional posterior distribution when considering a hierarchical prior. More generally there are a range of techniques for solving inverse problems using the Bayesian approach that focus in particular on (application dependent) prior and hyper-prior estimation, \cite{mackay1992bayesian,mackay1999comparison,mohammad1996full,mohammad1996joint,molina1999bayesian}. For example, the hierarchical Gaussian prior (HGP) approach guarantees a closed-form posterior distribution \cite{bardsley2012mcmc,calvetti2019hierachical,calvetti2020hybrid,calvetti2020sparsity,
calvetti2020sparse}. Indeed, many SBL algorithms choose conjugate priors \cite{tipping2001sparse, wipf2004sparse, wipf2007empirical,zhang2011sparse, wipf2011latent,chen2016simultaneous}. In this framework the hyper-parameters are often approximated by the Expectation Maximization (EM) \cite{dempster1977maximum} or the evidence maximization approach \cite{mackay1992bayesian}.
In some cases the hyper-parameters for the sparse prior are determined empirically from the given data \cite{wipf2007empirical,pereira2015empirical,zhang2021empirical}. Finally, we note that (joint recovery) SBL is designed for stationary support in the sparse domain, which is pointedly {\em not} our assumption in this investigation.
In Section \ref{sec:newmodel} we propose a new technique that provides more accurate and efficient MAP estimates of the temporal sequence of solution posteriors. Our approach exploits the temporal correlation between the neighboring data sets in the sequence by introducing a new set of hyper-parameters which are then incorporated into the algorithm developed in \cite{tipping2001sparse}.\footnote{To be clear, the algorithm in \cite{tipping2001sparse} considers a stationary sparsity profile. The magnitudes at the jump locations are allowed to vary in this setting, however, which is also the assumed case for MMV methods in \cite{wipf2007empirical,zhang2011sparse}.}
\subsection{Hierarchical Bayesian Framework}
We first specify each of the terms used in \eqref{eq:bayes} and the subsequent MAP estimate \eqref{eq:BayesMAP}. To derive the proposed algorithm, we work through the standard SBL framework and start with the case where ${\boldsymbol x}$ is a stationary signal for which we have $J \ge 1$ sets of observable data. We relax this assumption to accommodate signal sequences with non-stationary support in Section \ref{sec:newmodel}.
\subsubsection{The likelihood}
The likelihood function describes the relationship of the solution $X$, the observation noise $\mathcal{E}$, and the data sets $Y$. When considering individual parts of the temporal sequence,
for each pair of $\vect{y}_{j}$ and $\vect{x}_{j}$, $j=1,\dots,J$ in \eqref{eq:edge_est}, we assume the zero-mean, i.i.d.~Gaussian distributed noise $\boldsymbol \epsilon_{j}\in\mathbb{R}^{n}$ in \eqref{eq:likelihood_noise} with the precision parameter $\beta>0$.
This assumption leads to the following likelihood function:
\begin{align}\label{eq:likelihood}
p(Y\vert X, \beta) = (2\pi)^{-\frac{nJ}{2}}\beta^{\frac{nJ}{2}}\exp{\left\{-\frac{\beta}{2}\norm{Y-X}^2_2\right\}}.
\end{align}
In many applications the maximum likelihood estimate (MLE) is used to obtain a point estimate for the solution. However, overfitting can be an issue, especially in low SNR environments. This issue often motivates using the Bayesian approach, in which an appropriate prior distribution on the solution is imposed.
\subsubsection{The prior (stationary case)} \label{sec:prior}
The desired solution $X$ in \eqref{eq:forward_m} contains the discrete jump approximation of the underlying signal $f_j$. Thus it should be sparse.
As already noted there are plenty of potential prior distributions that promote sparsity, including the Laplace prior \cite{figueiredo2007majorization}, the hyper-Laplacian prior \cite{levin2007image, krishnan2009fast}, and the mixture-of-Gaussian prior \cite{fergus2006removing}. Since explicit formulas are available for conjugate priors, here we consider a Gaussian prior distribution for each $X_i$ in \eqref{eq:BayesMAP}, conditioned on the hyper-parameters $s_i$ given by
\begin{equation}\label{eq:prior_i_classic}
p(X_{i}\vert s_i) \sim \mathcal N(0,s_i^{-1}I_J), \quad i=1,\dots,n.
\end{equation}
In particular, the hyper-parameter $\boldsymbol s =(s_1,\dots,s_n)$ assumes a {\em stationary} sparsity profile across the temporal sequence, and the pixel values are i.i.d. In this case the elements in each array $X_i$ are identically distributed.
The value of the hyper-parameter forces the sparsity of the posterior by concentrating most of the probability at \(X_{i} \approx 0 \) when \(s_{i} ^{-1} \approx 0\). In \cite{tipping2001sparse,wipf2007empirical}, the recovery of the sparse signal (in the deterministic case) amounts to determining the $i = 1,\dots, n$ entries in $\boldsymbol x$ that correspond to large $s_i^{-1}$.
\subsubsection{The hyper-prior (stationary case)}
\label{sec:hyper_prior}
From the discussion in Section \ref{sec:prior}, it is clear that to promote sparsity of the solution $X$ in the conditional Gaussian distributed prior \eqref{eq:prior_i_classic}, the entries of the hyper-parameter $\boldsymbol s$ should be able to vary significantly. This can be achieved by using an uninformative hyper-prior as the density function of $\boldsymbol s$ and then treating each $s_1,\dots, s_n$ as a random variable. Following similar discussions in \cite{tipping2001sparse,wipf2007empirical,glaubitz2022generalized}, here we employ the gamma distribution for each $s_i$, $i=1,\dots, n$, in \eqref{eq:prior_i_classic} given by
\begin{equation}
\label{eq:hyperhyper1}
p(s_i)=\Gamma(s_i\vert a_s,b_s) = \frac{b_s^{a_s}}{\Gamma(a_s)}s_i^{a_s-1}e^{-b_ss_i},
\end{equation}
where $a_s,b_s>0$ are the shape and rate parameters of the gamma distribution \cite{artin2015gamma}. In particular, the gamma distribution becomes uninformative in the limit $b_s\to0$ with $a_s$ fixed, since its mean $a_s/b_s$ and variance $a_s/b_s^2$ both tend to infinity.
By choosing $a_s=1$ and $b_s=10^{-4}$ \cite{tipping2001sparse,bardsley2012mcmc,glaubitz2022generalized}, the hyper-parameters $s_1,\dots,s_n$ will be able to vary as needed depending on the observed data sets. We can conveniently assume the same uninformative hyper-prior for the hyper-parameter $\beta$ in \eqref{eq:likelihood}
\begin{equation}\label{eq:hyper-prior-beta}
p(\beta) = \Gamma(\beta\vert a_\beta,b_\beta),
\end{equation}
where we analogously choose $a_\beta=1$ and $b_\beta=10^{-4}$ for simplicity.
Although our formulation generates more parameters to compute, from a methodological perspective an accurate inference approximation may prevent over-fitting \cite{neal2012bayesian}. This hierarchical formulation of the prior distribution falls under the category of {\em automatic relevance determination} (ARD) priors \cite{mackay1996bayesian,neal2012bayesian}. Based on the evidence from the data sets, the uninformative hyper-priors allow the posterior function of the solution $X$ to concentrate at large values of $\boldsymbol s$ \cite{tipping2001sparse}. The corresponding values of $X$ at those locations must be near $0$ and are deemed ``irrelevant'', i.e.~as not contributing to the data sets.
\section{Exploiting inter-image information}
\label{sec:newmodel}
The Gaussian distributed prior conditioned on $\boldsymbol s$ in \eqref{eq:prior_i_classic} assumes a stationary sparsity profile across the temporal sequence of observations. Such an assumption fails whenever the sparsity profile is not stationary. For example, if an object within a scene moves from one time frame to another, we can analogously consider the case where $x_{j,i} \ne 0$ for some $i \in \{1,\dots,n\}$ and $j \in \{1,\dots,J\}$, but $x_{j^\prime, i} = 0$ for some $j^\prime \ne j$. In the stationary framework the estimate of $s_i^{-1}$ will be ``averaged out'' over {\em all} $J$ recoveries. In particular, if $j = 1$ and subsequently $j' = 2,\dots,J$, then (for large enough $J$) $s_i^{-1} \approx 0$, which correspondingly yields $x_{1,i} \approx 0$. Figure \ref{fig:rec_SBL} illustrates this behavior for $J = 6$, where the translating support locations are lost due to the incorrect assumption regarding stationary support.
We address this issue by introducing new hyper-parameters $\boldsymbol q=(q_1,\dots,q_n)$ in the prior covariance matrix to account for potential changes between neighboring signals, that is, the temporal correlation, at each location. This will allow appropriate moderation of the strength of $\boldsymbol s$ on the conditioned prior distribution of $X$. More specifically we adapt the Gaussian prior distribution of $X_i$ in \eqref{eq:prior_i_classic} to be conditioned on the hyper-parameters $s_i$ and $q_i$ as
\begin{equation}\label{eq:prior_i}
p(X_{i}\vert s_i,q_i) \sim \mathcal N(0,[\Sigma_{i}(\vect{s},\vect{q})]^{-1}), \quad i=1,\dots,n,
\end{equation}
where
\begin{equation}\label{eq:prior_cov_i}
\Sigma_{i}(\vect{s},\vect{q})=\begin{pmatrix}
s_i & q_i & &\\
q_i & \ddots & \ddots &\\
& \ddots & \ddots & q_i \\
& & q_i & s_i
\end{pmatrix}
\quad\in\mathbb R^{J\times J},
\end{equation}
We note that we construct the prior covariance matrix as a tri-diagonal matrix for convenience; other banded or sparse matrices may also be appropriate. Importantly, values from measurements at the same location are {\em not} assumed to be i.i.d.~as in \cite{wipf2007empirical}, but are instead correlated. This is a departure from the standard SBL approaches described in Section \ref{sec:prior}, which are used for stationary observations.
With \eqref{eq:prior_cov_i} in hand, we now employ the conditional Gaussian prior density distribution over $X$ given by \cite{tipping2001sparse,wipf2007empirical,glaubitz2022generalized}
\begin{equation}\label{eq:prior}
p(X\vert \boldsymbol s,\boldsymbol q)=(2\pi)^{-\frac{nJ}{2}} \abs{\Sigma(\vect{s},\vect{q})}^{-\frac{1}{2}}\exp{\left\{-\frac{1}{2}X^T\Sigma^{-1}(\vect{s},\vect{q}) X\right\}},
\end{equation}
where $\Sigma(\vect{s},\vect{q}) $ is the block diagonal matrix given by
\begin{align}\label{eq:Sigma-s-q}
\Sigma(\vect{s},\vect{q}) =\diag(\Sigma_{1}(\vect{s},\vect{q}),\dots,\Sigma_{n}(\vect{s},\vect{q})).
\end{align}
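For concreteness, the following sketch assembles $\Sigma(\vect{s},\vect{q})$ in \eqref{eq:Sigma-s-q} from hypothetical hyper-parameter values; each $J\times J$ block is tridiagonal as in \eqref{eq:prior_cov_i}.
\begin{verbatim}
import numpy as np
from scipy.linalg import block_diag

def assemble_Sigma(s, q, J):
    # one tridiagonal J x J block per spatial location:
    # s_i on the diagonal, q_i on the first off-diagonals
    blocks = [si * np.eye(J) + qi * (np.eye(J, k=1) + np.eye(J, k=-1))
              for si, qi in zip(s, q)]
    return block_diag(*blocks)

Sigma = assemble_Sigma(s=[2.0, 5.0, 1.0], q=[0.5, 0.0, -0.3], J=4)
print(Sigma.shape)  # (12, 12): n = 3 locations, J = 4 snapshots
\end{verbatim}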
The same uninformative hyper-prior as used before in \eqref{eq:hyperhyper1} is adopted for the hyper-parameters \(\boldsymbol s \) and $\boldsymbol q$:
\begin{align}\label{eq:hyper-hyper-prior}
p(\boldsymbol s)= \prod^n_{i=1}\Gamma(s_{i} \vert a, b),\quad
p(\boldsymbol q) = \prod^n_{i=1}\Gamma(q_{i} \vert a, b),
\end{align}
where $a = 1$ and $b = 10^{-4}$.
\subsection{Inference with joint hierarchical Bayesian learning (JHBL)}
We first compute the MAP estimate \(\hat{\vect{\theta}}\) of all hyper-parameters \(\vect{\theta} = (\vect{s},\vect{q},\beta) \) as defined by their density functions in \eqref{eq:hyper-prior-beta} and \eqref{eq:hyper-hyper-prior} by maximizing the posterior of \(\vect{\theta} \), that is,
\begin{align}\label{eq:MAP-theta}
\hat{\vect{\theta}}=\mathop{\arg\max}\limits_{\vect{\theta}}p(\vect{\theta}\vert Y).
\end{align}
We then derive the conditional posterior \(p(X\vert\hat{\vect{\theta}}, Y)\) and a point estimate of \(X\) by maximizing the conditional posterior. In what follows we describe how this can be accomplished.
We start with the computation of \(\hat{\vect{\theta}}\) in \eqref{eq:MAP-theta}. Using Bayes' theorem \eqref{eq:bayes}, we rewrite \eqref{eq:MAP-theta} as
\begin{align*}
\hat{\vect{\theta}}=\mathop{\arg\min}\limits_{\vect{\theta}}[-\ln p(Y\vert\vect{\theta}) - \ln p(\vect{\theta})] = \mathop{\arg\min}\limits_{\vect{\theta}} [-\ln p(Y\vert\vect{\theta})-\ln p(\vect{s}) -\ln p(\vect{q}) -\ln p(\beta)].
\end{align*}
We then plug the likelihood function \eqref{eq:likelihood} and the prior \eqref{eq:prior} into
\begin{align*}
p(Y\vert\vect{\theta}) = \int p(Y\vert X, \beta) p(X\vert\vect{s},\vect{q})dX,
\end{align*}
and then obtain from a standard derivation (see e.g.~\cite{tipping2001sparse})
\begin{align*}
p(Y\vert\vect{\theta}) = (2\pi)^{-\frac{nJ}{2}}\abs{\beta^{-1}I+\Sigma(\vect{s},\vect{q})}^{-\frac{1}{2}}\exp{\left\{ -\frac{1}{2}Y^T(\beta^{-1}I+\Sigma(\vect{s},\vect{q}))^{-1}Y \right\}}.
\end{align*}
We now combine this expression with the hyper-priors \eqref{eq:hyper-hyper-prior} and \eqref{eq:hyper-prior-beta} to obtain
\begin{align*}
\hat{\vect{\theta}} = \mathop{\arg\min}\limits_{\vect{\theta} = (\vect{s},\vect{q},\beta)} \mathcal{L}(\vect{\theta})\coloneqq &\ln \left\vert \beta^{-1}I+\Sigma(\vect{s},\vect{q}) \right\vert + Y^T(\beta^{-1}I+\Sigma(\vect{s},\vect{q}))^{-1}Y\\
& - 2b \left(\sum_{i=1}^{n}s_i + \sum_{i=1}^{n}q_i + \beta\right),
\end{align*}
where \(b=10^{-4}\) based on previous choices of $b_s$ and $b_\beta$.
Following a similar approach from \cite{tipping2001sparse}, we are now able to construct a solution to this minimization problem. Specifically, we derive iterative algorithms based on the partial derivatives with respect to the log of each parameter, which are computed as
\begin{align}\label{eq:L-partials}
\begin{aligned}
\frac{\partial\mathcal{L}}{\partial{\ln s_i}} &= 2 - s_i\left[\frac{1}{J}\norm{[\vect{\mu}(\vect{\theta})]_i}^2+[\Lambda(\vect{\theta})]_{i,i} -[\Sigma(\vect{s},\vect{q})]_{i,i} + 2b \right],\\
\frac{\partial\mathcal{L}}{\partial{\ln q_i}} &= 2 - 2q_{i} \left[\frac{1}{J}[\vect{\mu}(\vect{\theta})]_i^T [\vect{\mu}(\vect{\theta})]_{i+1}+[\Lambda(\vect{\theta})]_{i,i+1}-[\Sigma(\vect{s},\vect{q})]_{i,i+1}+b\right],\\
\frac{\partial\mathcal{L}}{\partial{\ln \beta}} &= (nJ+2) - \beta \left[\text{tr}(\Lambda(\vect{\theta}))+\norm{Y-\vect{\mu}(\vect{\theta})}^2+2b\right],
\end{aligned}
\end{align}
where
\begin{align}\label{eq:mu-Lambda}
\vect{\mu}(\vect{\theta}) = \beta\Lambda(\vect{\theta}) Y,\quad\quad \Lambda(\vect{\theta}) = \left(\beta I+[\Sigma(\vect{s},\vect{q})]^{-1}\right)^{-1},
\end{align}
\([\vect{\mu}(\vect{\theta})]_{i} \) is the \(i \)-th block of \(\vect{\mu}(\vect{\theta})\), and
\begin{align*}
A_{i,i} \coloneqq \frac{1}{J}\sum_{k=1}^{J} A\big((i-1)J +k,\,(i-1)J +k\big),\qquad
A_{i,i+1} \coloneqq \frac{1}{J}\sum_{k=1}^{J-1} A\big((i-1)J +k,\,(i-1)J +k+1\big)
\end{align*}
for \(A\in \mathbb{R}^{nJ\times nJ} \); that is, \(A_{i,i}\) and \(A_{i,i+1}\) denote the averaged diagonal and first off-diagonal of the \(i\)-th \(J\times J\) temporal block of \(A\), consistent with the tridiagonal structure of \eqref{eq:prior_cov_i}.
We are now able to formulate the minimizer \(\hat{\vect{\theta}} \) as a fixed point of a specified operator by setting each partial derivative in \eqref{eq:L-partials} to zero and then compute the fixed point via iterative algorithm(s). In particular, we use the following iterative formulas to update \(\vect{\theta}^{(k+1)} \) based on the current value \(\vect{\theta}^{(k)} = (\vect{s}^{(k)}, \vect{q}^{(k)}, \beta^{(k)}) \):
\begin{align}
s_{i} ^{(k+1)} &= \frac{2}{\frac{1}{J}\norm{[\vect{\mu}(\vect{\theta}^{(k)})]_i}^2+[\Lambda(\vect{\theta}^{(k)})]_{i,i} -[\Sigma(\vect{s}^{(k)},\vect{q}^{(k)})]_{i,i} + 2b}, \label{eq:SBL-s} \\
q_{i} ^{(k+1)} &= \frac{1}{\frac{1}{J}[\vect{\mu}(\vect{\theta}^{(k)})]_i^T [\vect{\mu}(\vect{\theta}^{(k)})]_{i+1}+[\Lambda(\vect{\theta}^{(k)})]_{i,i+1}-[\Sigma(\vect{s}^{(k)},\vect{q}^{(k)})]_{i,i+1}+b}, \label{eq:SBL-q}
\end{align}
each for $1 \le i \le n$, and
\begin{align}
\beta^{(k+1)} &= \frac{N+2}{\text{tr}(\Lambda(\vect{\theta}^{(k)}))+\norm{Y-\vect{\mu}(\vect{\theta}^{(k)})}^2+2b}. \label{eq:SBL-beta}
\end{align}
With the approximation \(\vect{\theta}^{\ast}\) of \(\hat{\vect{\theta}}\) in hand, we can now compute the point estimate of \(X \) by maximizing the conditional posterior as
\begin{align*}
X^{\ast} = \mathop{\arg\max}\limits_{X} p(X\vert\vect{\theta}^{\ast}, Y).
\end{align*}
By using the conjugacy of Gaussian distributions \cite{Gelman2015}, it follows from the Gaussian likelihood \(p(Y\vert X, \beta) \) in \eqref{eq:likelihood} and the Gaussian prior \(p(X\vert \vect{s},\vect{q}) \) in \eqref{eq:prior} that
\begin{align}\label{eq:conditional-posterior}
p(X\vert \vect{\theta},Y) = (2\pi)^{-\frac{nJ}{2}}\abs{\Lambda(\vect{\theta})}^{-\frac{1}{2}}\exp{\left\{ -\frac{1}{2}\left(X-\vect{\mu}(\vect{\theta})\right)^T [\Lambda(\vect{\theta})]^{-1}\left(X-\vect{\mu}(\vect{\theta})\right) \right\}},
\end{align}
where \(\vect{\mu} \) and \(\Lambda \) are defined in \eqref{eq:mu-Lambda}. From \eqref{eq:conditional-posterior} we have our estimate
\begin{align*}
X^{\ast} = \vect{\mu}(\vect{\theta}^{\ast}).
\end{align*}
Our new JHBL approach is summarized in Algorithm \ref{alg:JHBL}.
\begin{algorithm}[h!]
\caption{JHBL approach for estimating $X$ ($\boldsymbol x_{\JHBL}^\beta$ in numerical examples)}
\label{alg:JHBL}
\hspace*{\algorithmicindent} \textbf{Input:} Measurements $Y$ in \eqref{eq:forward_m}\\
\hspace*{\algorithmicindent} \textbf{Output:} Signal estimate $X^{\ast}$.
\begin{algorithmic}[1]
\State{Initialize $a=1$, $b = 10^{-4}$, and \(\vect{\theta}^{(0)}=(\vect{s}^{(0)}, \vect{q}^{(0)}, \beta^{(0)}) \).}
\Repeat
\State{Update $\vect{s}^{(k+1)}$ by \eqref{eq:SBL-s}}
\State{Update $\vect{q}^{(k+1)}$ by \eqref{eq:SBL-q}}
\State{Update $\beta^{(k+1)}$ by \eqref{eq:SBL-beta}}
\State{Update $k\to k+1$}
\Until{convergence at \(\vect{\theta}^{\ast} = (\vect{s}^{\ast}, \vect{q}^{\ast}, \beta^{\ast}) \) }
\State Compute \( X^{\ast} = \vect{\mu}(\vect{\theta}^{\ast}) \) as in \eqref{eq:mu-Lambda}.
\end{algorithmic}
\end{algorithm}
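For concreteness, the following Python sketch implements one sweep of the updates \eqref{eq:SBL-s}--\eqref{eq:SBL-beta}. It is a minimal illustration only: the block-tridiagonal assembly of $\Sigma(\vect{s},\vect{q})$ (diagonal blocks $s_iI_J$, first off-diagonal blocks $q_iI_J$, so that $\vect{q}$ is taken here to have $n-1$ components) is our reading of \eqref{eq:Sigma-s-q} and should be checked against that definition, and no safeguards against ill-conditioned or indefinite matrices are included.
\begin{verbatim}
import numpy as np

def block_avg(A, i, j, J):
    # [A]_{i,j}: mean of the J diagonal entries of block (i, j) (0-based).
    k = np.arange(J)
    return A[i * J + k, j * J + k].mean()

def jhbl_sweep(Y, s, q, beta, n, J, b=1e-4):
    # One sweep of the updates (SBL-s), (SBL-q), (SBL-beta).
    N = n * J
    # Assumed block-tridiagonal prior covariance Sigma(s, q).
    Sigma = np.zeros((N, N))
    for i in range(n):
        Sigma[i*J:(i+1)*J, i*J:(i+1)*J] = s[i] * np.eye(J)
    for i in range(n - 1):
        Sigma[i*J:(i+1)*J, (i+1)*J:(i+2)*J] = q[i] * np.eye(J)
        Sigma[(i+1)*J:(i+2)*J, i*J:(i+1)*J] = q[i] * np.eye(J)
    Lam = np.linalg.inv(beta * np.eye(N) + np.linalg.inv(Sigma))
    mu = beta * (Lam @ Y)
    s_new, q_new = s.copy(), q.copy()
    for i in range(n):
        mui = mu[i*J:(i+1)*J]
        s_new[i] = 2.0 / (mui @ mui / J + block_avg(Lam, i, i, J)
                          - block_avg(Sigma, i, i, J) + 2*b)
    for i in range(n - 1):
        q_new[i] = 1.0 / (mu[i*J:(i+1)*J] @ mu[(i+1)*J:(i+2)*J] / J
                          + block_avg(Lam, i, i+1, J)
                          - block_avg(Sigma, i, i+1, J) + b)
    # Fixed point of the beta-derivative in (L-partials).
    beta_new = (N + 2) / (np.trace(Lam) + np.sum((Y - mu)**2) + 2*b)
    return mu, s_new, q_new, beta_new
\end{verbatim}
Iterating this sweep until the hyper-parameters stabilize, and returning the final posterior mean, reproduces the loop in Algorithm \ref{alg:JHBL}.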
As discussed in \cite{wipf2007empirical}, prior information regarding $\beta$ may be available, in which case it does not have to be estimated. Hence we also include Algorithm \ref{alg:JHBL-fixed-beta}, which is the JHBL algorithm given such information.
\begin{algorithm}[h!]
\caption{JHBL approach for estimating $X$ with fixed $\beta $ ($\vect{x}_{\JHBL}$ in numerical examples)}
\label{alg:JHBL-fixed-beta}
\hspace*{\algorithmicindent} \textbf{Input:} Measurements $Y$ in \eqref{eq:forward_m}\\
\hspace*{\algorithmicindent} \textbf{Output:} Signal estimate $X^{\ast}$.
\begin{algorithmic}[1]
\State{Initialize $a=1$, $b = 10^{-4}$, \(\beta \) and \(\vect{\theta}^{(0)} \).}
\Repeat
\State{Update $\vect{s}^{(k+1)}$ by \eqref{eq:SBL-s}}
\State{Update $\vect{q}^{(k+1)}$ by \eqref{eq:SBL-q}}
\State{Update $k\to k+1$}
\Until{convergence at \(\vect{\theta}^{\ast} = (\vect{s}^{\ast}, \vect{q}^{\ast}, \beta) \) }
\State Compute \( X^{\ast} = \vect{\mu}(\vect{\theta}^{\ast}) \) as in \eqref{eq:mu-Lambda}.
\end{algorithmic}
\end{algorithm}
\subsection{Inference with refined JHBL}
As discussed in Section \ref{sec:hyper_prior}, the solution $X$ can have non-zero entries only at locations where the corresponding components of $\boldsymbol s$ are very small. As was also discussed, the components of $\boldsymbol s$ may be large over any interval in which there is a change in the edge location, which results from the ``averaging out'' of the information in the temporal sequence. We emphasize that whenever {\em any} $s_i$ is large, it will dominate the information provided by both the inverse prior covariance matrix $\Sigma$ in \eqref{eq:prior} and the posterior covariance matrix $\Lambda$ in \eqref{eq:conditional-posterior}. This is because the re-estimation rule for $q_i$ in \eqref{eq:SBL-q} involves both $\Sigma$ and $\Lambda$, meaning that $\boldsymbol q$ contains information similar to that in $\boldsymbol s$.
A better approach is to use an update rule for $\boldsymbol q$, which represents the {\em temporal correlation} of the $J$ observations at each location $i = 1,\dots,n$, that is independent of $\boldsymbol s$, whose components correspond to the probability that an edge occurs at the given location in any {\em individual} image. Thus we propose to compute a point estimate \(X^{(k)} \) given the current hyper-parameter estimate \(\vect{\theta}^{(k)} \) as
\begin{align}
X^{(k)} = \mathop{\arg\max}\limits_{X} p(X\vert\vect{\theta}^{(k)}, Y) = \vect{\mu}(\vect{\theta}^{(k)})
\label{eqn:Xkiterate}
\end{align}
and use the covariance matrix of the given temporal sequence to compute $\vect{q}$ as
\begin{equation}\label{eq:q_update_correct}
\vect{q}^{(k+1)} = \diag(\Cov( \vect{x}_{1}^{(k)},\dots, \vect{x}_J^{(k)})).
\end{equation}
Here $\Cov(\cdot)$ denotes the covariance matrix of the input.
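Assuming the current point estimates are arranged as an $n\times J$ array whose $j$-th column is $\vect{x}_{j}^{(k)}$ (a layout we adopt purely for illustration), the update \eqref{eq:q_update_correct} can be sketched in Python as follows.
\begin{verbatim}
import numpy as np

def update_q(X_k):
    # X_k: (n, J) array; column j holds x_j^(k). Each row collects the J
    # temporal samples at one spatial location, so the diagonal of the
    # covariance matrix is the temporal variance at each location.
    return np.diag(np.cov(X_k, rowvar=True))
\end{verbatim}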
Using \eqref{eq:q_update_correct} helps to mitigate the problem of a large $s_i$ for any individual image having an outsized impact on the remaining images in the sequence. It is still the case, however, that a large $s_i$ will have a dominant influence in the neighborhoods of $X$ in which a nonzero value, or edge, translates.
To compensate for this problem, our method must account for nonzero values of $\boldsymbol s$ in these neighborhoods so that the translating edge is identified. In this regard, we first obtain an individual estimate of \(\vect{s} \) from each \( \vect{y}_{j} \) separately, which can be viewed as a special case of \eqref{eq:SBL-s} with \(J=1\) and \(Y=\vect{y}_{j} \) for each \(1\leq j\leq J\). Specifically, we calculate the components of each individual estimate \(\vect{u}_{j} ^{(k+1)} \), $j = 1,\dots,J$, as
\begin{align}\label{eq:SBL-u}
u_{j,i}^{(k+1)} = \frac{2}{\norm{[\vect{\mu}_{j} (\vect{\theta}^{(k)})]_i}^2+[\Lambda_{j} (\vect{\theta}^{(k)})]_{i,i} -[\Sigma_{j} (\vect{s}^{(k)},\vect{q}^{(k)})]_{i,i} + 2b}, \quad 1\leq i\leq n,
\end{align}
where \( \vect{\mu}_{j} (\vect{\theta}^{(k)}) = \beta^{(k)} \Lambda_{j} (\vect{\theta}^{(k)}) \vect{y}_{j} \), \(\Lambda_{j} (\vect{\theta}^{(k)}) = (\beta I + [\Sigma_{j} (\vect{s}^{(k)},\vect{q}^{(k)})]^{-1} )^{-1} \), and \(\Sigma_{j} (\vect{s}^{(k)},\vect{q}^{(k)}) = \operatorname{diag} (\vect{s}^{(k)}) \). We then use \eqref{eq:SBL-u} to refine \(\vect{s}^{(k+1)}\) as
\begin{align}\label{eq:SBL-s-refine}
s_{i} ^{(k+1)} = \mathop{\min}_{1\leq j\leq J} \left\vert u_{j,i}^{(k+1)} \right\vert, \quad i \in I,
\end{align}
where $I=\{i\colon q^{(k+1)}_i \geq \vartheta\max_{l=1,\dots,n}q^{(k+1)}_l\}$ for some $\vartheta\in[0,1]$. The parameter $\vartheta$ is inherently related to the scale of the sparse signal, the SNR, and the distance between nonzero entries (resolution) in the underlying solution ${\vect x}_{j} $. In our examples we use a rough estimate for $\vartheta$ that would be accessible from the measurements without any additional tuning. The idea in using \eqref{eq:SBL-s-refine} is to redefine $\vect{s}$ in possible regions of change, which are identified by $\vect{q}$ as defined in \eqref{eq:q_update_correct}. Since larger entries $q_{i}$ indicate more likely regions of change, \eqref{eq:SBL-s-refine} seeks to mitigate the impact of the $s_{i}$ in those regions.
Observe that \eqref{eq:SBL-s-refine} does not consider the joint sparsity profile of the temporal sequence, but rather seeks to confirm whether a nonzero value (or edge) was present at any $i \in I$ over $j = 1,\dots, J$. Specifically, by using \eqref{eq:SBL-s-refine} the solution posterior is allowed to produce nonzero values at the edges over the neighborhood where changes occur.
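A sketch of the refinement \eqref{eq:SBL-s-refine} in the same vein, where the single-frame estimates $u_{j,i}^{(k+1)}$ from \eqref{eq:SBL-u} are assumed to be precomputed and stored in a $J\times n$ array:
\begin{verbatim}
import numpy as np

def refine_s(s, q, U, vartheta):
    # s: (n,) joint estimate from (SBL-s); q: (n,) from (q_update_correct);
    # U: (J, n) single-frame estimates u_{j,i} from (SBL-u).
    # I collects likely change locations, where q is within a factor
    # vartheta of its maximum; there s is replaced by the smallest
    # single-frame magnitude, per (SBL-s-refine).
    I = q >= vartheta * q.max()
    s = s.copy()
    s[I] = np.abs(U[:, I]).min(axis=0)
    return s
\end{verbatim}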
We present this refined JHBL approach in Algorithm \ref{alg:JHBL-refine}.
\begin{algorithm}[h!]
\caption{Refined JHBL approach for estimating $X$ ($\boldsymbol x_{\JHBL}^{\beta, \vect{q}}$ in numerical examples)}
\label{alg:JHBL-refine}
\hspace*{\algorithmicindent} \textbf{Input:} Measurements $Y$ in \eqref{eq:forward_m}\\
\hspace*{\algorithmicindent} \textbf{Output:} Signal estimate $X^{\ast}$.
\begin{algorithmic}[1]
\State{Initialize $a=1$, $b = 10^{-4}$, and \(\vect{\theta}^{(0)}=(\vect{s}^{(0)}, \vect{q}^{(0)}, \beta^{(0)}) \).}
\Repeat
\State{Update $\vect{s}^{(k+1)}$ by \eqref{eq:SBL-s}}
\State{Update $\vect{q}^{(k+1)}$ by \eqref{eq:q_update_correct}}
\State{Refine $\vect{s}^{(k+1)}$ by \eqref{eq:SBL-s-refine}}
\State{Update $\beta^{(k+1)}$ by \eqref{eq:SBL-beta}}
\State{Update $k\to k+1$}
\Until{convergence at \(\vect{\theta}^{\ast} = (\vect{s}^{\ast}, \vect{q}^{\ast}, \beta^{\ast}) \) }
\State Compute \( X^{\ast} = \vect{\mu}(\vect{\theta}^{\ast}) \) as in \eqref{eq:mu-Lambda}.
\end{algorithmic}
\end{algorithm}
\subsection{Inference of two-dimensional images}
\label{sec:inference}
We observe that the major computational cost of Algorithm \ref{alg:JHBL-refine} is the repeated calculation of \(\Lambda(\vect{\theta}^{(k)}) \) at each iteration \(k \), which requires the inverse of a large-scale matrix. This becomes computationally prohibitive for 2D images, since the size \(n^{2} \) of a vectorized \(n\times n \) image is typically very large. In what follows we therefore describe an alternative approach that enables an efficient and robust recovery, specifically by approximating \(\Lambda(\vect{\theta}^{(k)}) \).
We begin by using a diagonal approximation of \(\Sigma(\vect{s}, \vect{q}) \) in \eqref{eq:Sigma-s-q} given by
\begin{align*}
\widetilde{\Sigma} (\vect{s}) = \operatorname{diag} (s_{1} I_J, s_{2}I_J, \ldots , s_{n} I_J).
\end{align*}
That is, we ignore the sub-diagonals \(\vect{q}\). We note that the magnitude of \(\vect{q} \) is typically smaller than the magnitude of \(\vect{s} \). The approximations of \(\vect{\mu}(\vect{\theta}) \) and \(\Lambda(\vect{\theta}) \) are defined correspondingly as
\begin{align*}
\widetilde{\vect{\mu}} (\vect{\theta}) = \beta \widetilde{\Lambda}(\vect{\theta}) Y, \quad \widetilde{\Lambda} (\vect{\theta})= \left(\beta I + \widetilde{\Sigma} (\vect{s})^{-1} \right)^{-1}.
\end{align*}
We then use these approximations in the partial derivative \eqref{eq:L-partials} to derive the corresponding iterative formulas for updating \(\vect{s} \) and \(\beta \):
\begin{align}\label{eq:SBL-2D-s}
s_i ^{(k+1)} = \frac{J+2}{\norm{[\widetilde{\vect{\mu}}(\vect{\theta}^{(k)})]_i}^2 + 2b}
\end{align}
and
\begin{align}\label{eq:SBL-2D-beta}
\beta ^{(k+1)} = \frac{nJ+2}{\norm{Y-\widetilde{\vect{\mu}}(\vect{\theta}^{(k)})}^2+2b}.
\end{align}
We emphasize that the above iterations can be computed much more quickly than \eqref{eq:SBL-s} and \eqref{eq:SBL-beta}, since the approximating matrix \(\widetilde{\Lambda} \) is diagonal.
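Indeed, each sweep now costs only $\mathcal{O}(nJ)$ operations. A minimal sketch, assuming the stacked vector $Y$ stores the $J$ temporal samples of each pixel contiguously (here $n$ denotes the total number of pixels of the vectorized image; this layout is our assumption):
\begin{verbatim}
import numpy as np

def jhbl_2d_sweep(Y, s, beta, n, J, b=1e-4):
    # Diagonal approximation: Lambda_tilde = (beta I + Sigma_tilde^{-1})^{-1}
    # is diagonal, so the posterior mean is computed entrywise.
    lam = 1.0 / (beta + 1.0 / np.repeat(s, J))    # diag of Lambda_tilde
    mu = beta * lam * Y
    mu_blocks = mu.reshape(n, J)
    s_new = (J + 2) / ((mu_blocks**2).sum(axis=1) + 2*b)   # (SBL-2D-s)
    beta_new = (n*J + 2) / (np.sum((Y - mu)**2) + 2*b)     # (SBL-2D-beta)
    return mu, s_new, beta_new
\end{verbatim}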
Analogous to \eqref{eqn:Xkiterate}, we then have
\begin{equation}
\label{eq:Xkiterate2D}
X^{(k)} = \widetilde{\vect{\mu}} (\vect{\theta}^{(k)}),
\end{equation}
for which we employ the same refinement techniques as in \eqref{eq:q_update_correct} and \eqref{eq:SBL-s-refine}, respectively given by
\begin{align}\label{eq:SBL-2D-q}
\vect{q}^{(k+1)} = \diag(\Cov( \vect{x}_{1}^{(k)},\dots, \vect{x}_J^{(k)})),
\end{align}
and
\begin{align}\label{eq:SBL-2D-s-refine}
s_{i} ^{(k+1)} = \mathop{\min}_{1\leq j\leq J} \left\vert u_{j,i}^{(k+1)} \right\vert, \quad i \in I,
\end{align}
where \(u_{j,i}^{(k+1)}\) is defined analogously to \eqref{eq:SBL-u} using the diagonal approximations, and
$$I=\{i\colon q^{(k+1)}_i \geq \vartheta\max_{l=1,\dots,n}q^{(k+1)}_l\}$$ for some $\vartheta\in[0,1]$.
We present this refined JHBL approach for 2D images in Algorithm \ref{alg:JHBL-refine-2D}.
\begin{algorithm}[h!]
\caption{Refined JHBL approach for estimating 2D $X$ ($\boldsymbol x_{\JHBL}^{\beta, \vect{q}}$ in numerical examples)}
\label{alg:JHBL-refine-2D}
\hspace*{\algorithmicindent} \textbf{Input:} Measurements $Y$ in \eqref{eq:forward_m}\\
\hspace*{\algorithmicindent} \textbf{Output:} Signal estimate $X^{\ast}$.
\begin{algorithmic}[1]
\State{Initialize $a=1$, $b = 10^{-4}$, and \(\vect{\theta}^{(0)}=(\vect{s}^{(0)}, \vect{q}^{(0)}, \beta^{(0)}) \).}
\Repeat
\State{Update $\vect{s}^{(k+1)}$ by \eqref{eq:SBL-2D-s}}
\State{Update $\vect{q}^{(k+1)}$ by \eqref{eq:SBL-2D-q}}
\State{Refine $\vect{s}^{(k+1)}$ by \eqref{eq:SBL-2D-s-refine}}
\State{Update $\beta^{(k+1)}$ by \eqref{eq:SBL-2D-beta}}
\State{Update $k\to k+1$}
\Until{convergence at \(\vect{\theta}^{\ast} = (\vect{s}^{\ast}, \vect{q}^{\ast}, \beta^{\ast}) \) }
\State Compute \( X^{\ast} = \widetilde{\vect{\mu}}(\vect{\theta}^{\ast}) \) as in \eqref{eq:Xkiterate2D}.
\end{algorithmic}
\end{algorithm}
\section{Numerical results}
\label{sec:numresultsSBL}
We now provide some one- and two-dimensional examples to demonstrate the efficacy of our method.
\subsection{One-dimensional piecewise continuous signal}
\label{sec:test_1Dsignal}
We consider the temporal sequence of $J=6$ one-dimensional piecewise smooth functions given by
\begin{example}
\label{example:falpha}
The functions $f_{j} \colon[-\pi,\pi]\to\mathbb{R}$ are defined as
\begin{equation}\label{eq:1d_fun}
f_{j} (\alpha_i) = \begin{cases}
1.5, & -\frac{3\pi}{4}-.5+.1j\leq \alpha_i<-\frac{\pi}{2}-.5+.1j\\
\frac{7}{4}-\frac{\alpha_i}{2}+\sin(\alpha_i-\frac{\pi}{4}), & -\frac{\pi}{4}\le \alpha_i<\frac{\pi}{8}\\
\frac{11\alpha_i}{4}-5, & \frac{3\pi}{8}\le \alpha_i<\frac{3\pi}{4}\\
0, & \text{otherwise}.
\end{cases}
\end{equation}
\end{example}
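For reference, a direct vectorized transcription of \eqref{eq:1d_fun} in Python:
\begin{verbatim}
import numpy as np

def f_example(alpha, j):
    # Evaluate f_j of Example (falpha) on a float array of grid points alpha.
    out = np.zeros_like(alpha)
    m1 = (alpha >= -3*np.pi/4 - 0.5 + 0.1*j) & (alpha < -np.pi/2 - 0.5 + 0.1*j)
    out[m1] = 1.5
    m2 = (alpha >= -np.pi/4) & (alpha < np.pi/8)
    out[m2] = 7/4 - alpha[m2]/2 + np.sin(alpha[m2] - np.pi/4)
    m3 = (alpha >= 3*np.pi/8) & (alpha < 3*np.pi/4)
    out[m3] = 11*alpha[m3]/4 - 5
    return out
\end{verbatim}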
The sequence of functions given by Example \ref{example:falpha} is depicted in Figure \ref{fig:noisy_edge}(a), while Figure \ref{fig:noisy_edge}(b) shows the measurements $\vect{y}_{1} $ as given by \eqref{eq:edge_est}. We note that similar oscillations are likewise observed in each $\vect{y}_{j}$, $j = 2,\dots, 6$. The Fourier measurements in \eqref{eq:fourcoeff}, as discretized by ${\boldsymbol g}_{j} $, $j = 1,\dots,6$, given in \eqref{eq:forward_noisy}, are also each missing the symmetric frequency band $\mathcal{K}_j$, $j = 1,\dots, J$, given by
\begin{equation}
\label{eq:bandzero_1d}
\mathcal{K}_j = [\pm(10j + 13),\pm(10j + 15)].
\end{equation}
Our goal is to recover the jump function (edges) of each $f_{j}$ as defined in \eqref{eq:jumpFunc}, given by ${\boldsymbol x}_{j} $ in \eqref{eq:xvector}. The domain $[-\pi,\pi]$ is uniformly discretized with $n=128$ grid points such that $\alpha_i= -\pi + \frac{2\pi(i-1)}{n}$, $i=1,\dots,n$.
In our experiments we consider additive i.i.d.~zero-mean Gaussian noise with variance $0.2$ (signal-to-noise ratio $\text{SNR} \approx 20$), where we define the SNR for \eqref{eq:forward_m} as
\begin{equation}
\label{eq:SNR}
{\text{SNR}=10\log_{10}\left(\frac{\bar{f}^2}{\sigma^2}\right)},
\end{equation}
where $\bar{f}$ is the mean value of $f$ over the observed grid points.
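For reproducibility, the noisy band-limited data for this example might be generated as in the following sketch; the normalization of the discrete Fourier coefficients and the complex noise model are our assumptions rather than specified choices.
\begin{verbatim}
import numpy as np

def noisy_fourier_data(f_vals, j, snr_db, rng=np.random.default_rng()):
    # f_vals: samples of f_j on the n-point grid; j in 1..J.
    n = f_vals.size
    g = np.fft.fft(f_vals) / n                 # assumed normalization
    fbar = f_vals.mean()
    sigma = np.sqrt(fbar**2 / 10**(snr_db / 10))   # from eq. (SNR)
    noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    g = g + sigma * noise
    k = np.fft.fftfreq(n, d=1.0 / n)           # integer frequency indices
    band = (np.abs(k) >= 10*j + 13) & (np.abs(k) <= 10*j + 15)
    g[band] = 0.0                              # missing band K_j, eq. (bandzero_1d)
    return g
\end{verbatim}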
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{.8\textwidth}
\includegraphics[width=\textwidth]{plot/data_multijumps.eps}
\caption{$f_{j} $, $j=1,\dots,6$ in \eqref{eq:1d_fun}}
\end{subfigure}
\\
\begin{subfigure}[b]{.45\textwidth}
\includegraphics[width=\textwidth]{plot/noisyObs_x1_multijumps.eps}
\caption{$\vect{y}_{1} $}
\end{subfigure}
\begin{subfigure}[b]{.42\textwidth}
\includegraphics[width=\textwidth]{plot/reconstruction_cov_multijumps.eps}
\caption{The initialization of $\boldsymbol q$}
\end{subfigure}
\caption{(a) The sequence of functions given by Example \ref{example:falpha}. (b) $\vect{y}_{1} $ as defined in \eqref{eq:forward_m}. (c) The hyper-parameter $\boldsymbol q$ initialized via \eqref{eq:q_update_correct}.
}
\label{fig:noisy_edge}
\end{figure}
From Figure \ref{fig:noisy_edge}(c) we observe that the covariance matrix of the acquired data in \eqref{eq:forward} can roughly determine the interval in which the edge locations have moved. It is not very accurate, however, as there are spurious oscillations that affect the ability to determine the true shifted jump locations. Such a coarse initial estimate of the change region might be all that is available in a variety of applications, and indeed motivates our proposed framework here. That is, our method can still be useful even when the change region is not accurately described.
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{.31\textwidth}
\includegraphics[width=\textwidth]{plot/reconstruction_x1_multijumps}
\end{subfigure}
\begin{subfigure}[b]{.31\textwidth}
\includegraphics[width=\textwidth]{plot/reconstruction_x2_multijumps}
\end{subfigure}
\begin{subfigure}[b]{.31\textwidth}
\includegraphics[width=\textwidth]{plot/reconstruction_x3_multijumps}
\end{subfigure}
\\
\begin{subfigure}[b]{.31\textwidth}
\includegraphics[width=\textwidth]{plot/reconstruction_x4_multijumps}
\end{subfigure}
\begin{subfigure}[b]{.31\textwidth}
\includegraphics[width=\textwidth]{plot/reconstruction_x5_multijumps}
\end{subfigure}
\begin{subfigure}[b]{.31\textwidth}
\includegraphics[width=\textwidth]{plot/reconstruction_x6_multijumps}
\end{subfigure}
\caption{Sequential edge map recovery for Fourier measurements given by \eqref{eq:bandzero_1d}, $J = 6$. The legend is provided in the top left figure.}
\label{fig:rec_SBL}
\end{figure}
Figure \ref{fig:rec_SBL} compares the results of our new method to other existing approaches that seek to recover sparse signals from a sequence of given data. These include
(1) $\vect{x}_{\JHBL}$, the JHBL recovery using Algorithm \ref{alg:JHBL-fixed-beta} with $\beta$ chosen a-priori, and (2) $\vect{x}_{\JHBL}^\beta$, the JHBL recovery using Algorithm \ref{alg:JHBL}, which updates $\beta$ iteratively. We denote the results from our proposed method, realized using Algorithm \ref{alg:JHBL-refine}, as $\vect{x}_{\JHBL}^{\beta, \vect{q}}$. We note that in our examples the exact value of $\beta$ is assumed known for $\vect{x}_{\JHBL}$, but not for $\vect{x}_{\JHBL}^{\beta}$ or $\vect{x}_{\JHBL}^{\beta,\vect{q}}$. As can be observed from the results in Figure \ref{fig:rec_SBL}, by explicitly including change information, as is the main feature of Algorithm \ref{alg:JHBL-refine}, we are able to more accurately capture the sequence of jump function recoveries. We used $\vartheta=0.05$ in \eqref{eq:SBL-s-refine} and note that some additional tuning may improve the results. More discussion regarding how the parameter $\vartheta$ may be chosen follows \eqref{eq:SBL-s-refine}.
Finally, we note that it is of course possible to use standard SBL to recover the individual signals separately, that is, without combining information from other data in the sequence. In this case the joint sparsity method reduces to standard $\ell_1$ regularization (compressive sensing) given by \eqref{eq:cs_model}. While the results for this simple example are comparable, we are interested in the problem where there is an obstruction in each data acquisition that hampers individual recovery. For example, in our two-dimensional examples we consider the case where there are missing bands of Fourier data in each acquisition, so that no single acquisition has enough data to accurately recover the underlying image.
\subsection{Two-dimensional data}
\label{sec:img_rec}
Our two-dimensional examples include a magnetic resonance image (MRI) and a synthetic aperture radar (SAR) image. As was done for the one-dimensional sparse signal example, we also compare edge maps respectively recovered using Algorithm \ref{alg:JHBL}, Algorithm \ref{alg:JHBL-fixed-beta}, and Algorithm \ref{alg:JHBL-refine-2D}.
In contrast to the one-dimensional signal example, here we show how the temporal edge maps are employed for the downstream process of full image recovery.
\subsubsection*{Sequential image recovery using edge maps}
Since it pertains to the overall usefulness of our new sequential edge map recovery procedure, we now include a brief review of some commonly employed algorithms for sequential image recovery. Due to the sparsity in the edge domain, CS algorithms \cite{candes2006robust,candes2006stable,candes2006near,donoho2006compressed} are often employed to either separately (see e.g.~above references) or jointly (see e.g.~\cite{adcock2019jointsparsity,cotter2005sparse,chen2006theoretical,eldar2009robust,xiao2022sequential}) recover the corresponding sequence of images.\footnote{The joint recovery methods in these publications consider cases of non-overlapping support in the sparse domain, which is consistent with the assumptions for the temporal sequence of images discussed here.} For the standard single-measurement case, the CS algorithm may be written as
\begin{equation}\label{eq:CS_model_image}
\Tilde{\vect{f}}_{j} = \argmin_{\vect{f}^\ast} \left(\frac{1}{2}\norm{F_{j} \vect{f}^\ast-\Tilde{\vect{g}}_{j} }_2^2+ \lambda\norm{\mathcal{L}\vect{f}^\ast}_1\right),
\end{equation}
where ${\mathcal L}$ is a sparsifying transform operator, designed here to promote sparsity in the edge domain.\footnote{Since our examples only include piecewise constant structures, we can simply choose ${\mathcal L}$ as a first order differencing operator, and note that high order differencing may also be used as appropriate (see e.g. \cite{archibald2016image}).} If the data acquisition model \eqref{eq:forward_noisy} is accurate, it should be the case that any measurement $\Tilde{\vect{g}}_{j} $ carries sufficient information to recover its corresponding image so long as the regularization parameter is suitably chosen. The performance of CS algorithms deteriorates for severely under-sampled data with low SNR, however \cite{shchukina2017pitfalls,kang2019compressive}. Further, even if the regularization parameter is optimally chosen, the {\em global} impact of the regularization term in \eqref{eq:CS_model_image} will make it impossible to resolve local features in these environments.
Spatially varying regularization parameters can improve the accuracy of the CS solution. Specifically, the parameter should be constructed to more heavily penalize the solution in the true sparse regions (in the edge domain) and less so in regions of presumed support. The (re-)weighted $\ell_1$ regularization method is designed for this purpose, with the most basic form written as \cite{candes2008enhancing}
\begin{equation}\label{eq:wl1_model}
\Tilde{\vect{f}}_{j} = \argmin_{\vect{f}^\ast}\left(\frac{1}{2}\norm{F_{j} \vect{f}^\ast-\Tilde{\vect{g}}_{j} }_2^2+\norm{W\mathcal{L}\vect{f}^\ast}_1\right).
\end{equation}
Here the entries $\vect{w}_{j}$ of the diagonal weighting matrix $W=\diag{(\vect{w}_{j} )}$ are typically constructed iteratively, which is computationally expensive. Furthermore, errors in $W$ due to noise and incompleteness of the acquired data are fed into \eqref{eq:wl1_model}, and the resulting error propagates at each iteration.
As the edge information of each part of the temporal sequence has been determined prior to the image reconstruction, following what was done in \cite{adcock2019jointsparsity,gelb2019reducing,scarnati2019accelerated}, we avoid iterating on $W$ and define a pre-computed weighting matrix as
\begin{equation}\label{eq:weights}
\omega^{(j)}(i) = \frac{1}{\text{vec}\left(\Tilde{\vect{x}}_{j} \right)(i)+\epsilon},
\end{equation}
where $\text{vec}(\cdot)$ stacks its input vertically as a vector and $\epsilon=10^{-4}$ is introduced to avoid a zero-valued denominator. We further scale the weights to have maximum value $1$, replacing \eqref{eq:weights} with
\begin{equation}
\label{eq:weights_scaled}
\omega^{(j)}(i) = \frac{\omega^{(j)}(i)}{\max_{i}{\abs{ \omega^{(j)}(i)}}}.
\end{equation}
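A short sketch of the resulting pre-computed weights; taking the magnitude of the (possibly signed) edge map in the denominator is our assumption:
\begin{verbatim}
import numpy as np

def edge_informed_weights(x_edge, eps=1e-4):
    # x_edge: recovered edge map for one frame (any shape).
    w = 1.0 / (np.abs(x_edge).ravel() + eps)   # vec(.) stacks as a vector
    return w / w.max()                         # rescale so max weight is 1
\end{verbatim}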
We follow \cite{gelb2019reducing} and employ the Alternating Direction Method of Multipliers (ADMM) (see \cite{boyd2011distributed}), to solve the convex optimization problem in \eqref{eq:wl1_model}.
\subsubsection*{Sequential MRI}
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/true_x1_2d.eps}
\caption{$\vect{f}_1$}
\end{subfigure}
%
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/true_x2_2d.eps}
\caption{$\vect{f}_2$}
\end{subfigure}
%
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/true_x3_2d.eps}
\caption{$\vect{f}_3$}
\end{subfigure}
%
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/true_x4_2d.eps}
\caption{$\vect{f}_4$}
\end{subfigure}
\\
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/edgeTrue_x1_2d.eps}
\caption{$\vect{x}_{1}$}
\end{subfigure}
%
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/edgeTrue_x2_2d.eps}
\caption{$\vect{x}_2$}
\end{subfigure}
%
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/edgeTrue_x3_2d.eps}
\caption{$\vect{x}_3$}
\end{subfigure}
%
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/edgeTrue_x4_2d.eps}
\caption{$\vect{x}_4$}
\end{subfigure}
\\
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/edge_x1_2d.eps}
\caption{$\mathbf{y}_{1}$}
\end{subfigure}
%
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/edge_x2_2d.eps}
\caption{$\mathbf{y}_2$}
\end{subfigure}
%
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/edge_x3_2d.eps}
\caption{$\mathbf{y}_3$}
\end{subfigure}
%
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/edge_x4_2d.eps}
\caption{$\mathbf{y}_4$}
\end{subfigure}
\caption{(top row) The underlying MRI images, \cite{Lalwanietal}. Two rotating/translating ellipses are imposed on each image.
(middle row) The exact edge map for each MRI image.
(bottom row) The edge approximations using the two-dimensional extension of the CF method \eqref{eq:1d_CF} (see e.g. \cite{xiao2022sequential}) from the under-sampled and noisy Fourier samples. Here the noise standard deviation is chosen so that SNR $=2$ in \eqref{eq:SNR}.} \label{fig:MRI}
\end{figure}
Figure \ref{fig:MRI}(top) shows four sequential MRI images. Two ellipses that are rotating/translating in time are super-imposed on each static image. We assume we are given the corresponding Fourier data for $J = 6$ sequential images (four are shown for better visualization), which we simulate by taking the discrete Fourier transform of each $128\times 128$ pixelated image. We also ``zero out'' a symmetric band $\mathcal{K}_j$, $j = 1,\dots, J$, given by
\begin{equation}
\label{eq:bandzero}
\mathcal{K}_j = [\pm(10j + 1),\pm(10(j+1))],
\end{equation}
and then add noise with {SNR $=2$}.
The intervals for the missing data bands were somewhat arbitrarily chosen, with the idea that relevant information in each data set of the sequence would be compromised in some way. Figure \ref{fig:MRI}(middle) shows the exact edge map at each time, while
Figure \ref{fig:MRI}(bottom) displays the edge maps obtained using the two-dimensional expansion of the CF method in \eqref{eq:1d_CF} (see e.g.~\cite{adcock2019jointsparsity,gelb2017detecting,xiao2022sequential} for discussion on two-dimensional CF method expansion).
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/sbl_x1_2d.eps}
\caption{${\vect{x}_1}_{\JHBL}$}
\end{subfigure}
%
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/sbl_x2_2d.eps}
\caption{${\vect{x}_2}_{\JHBL}$}
\end{subfigure}
%
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/sbl_x3_2d.eps}
\caption{${\vect{x}_3}_{\JHBL}$}
\end{subfigure}
%
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/sbl_x4_2d.eps}
\caption{${\vect{x}_4}_{\JHBL}$}
\end{subfigure}
\\
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/sblb_x1_2d.eps}
\caption{${\vect{x}_1}^{\beta}_{\JHBL}$}
\end{subfigure}
%
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/sblb_x2_2d.eps}
\caption{${\vect{x}_2}^{\beta}_{\JHBL}$}
\end{subfigure}
%
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/sblb_x3_2d.eps}
\caption{${\vect{x}_3}^{\beta}_{\JHBL}$}
\end{subfigure}
%
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/sblb_x4_2d.eps}
\caption{${\vect{x}_4}^{\beta}_{\JHBL}$}
\end{subfigure}
\\
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/sblba_x1_2d.eps}
\caption{${\vect{x}_1}^{\beta,\boldsymbol q}_{\JHBL}$}
\end{subfigure}
%
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/sblba_x2_2d.eps}
\caption{${\vect{x}_2}^{\beta,\boldsymbol q}_{\JHBL}$}
\end{subfigure}
%
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/sblba_x3_2d.eps}
\caption{${\vect{x}_3}^{\beta,\boldsymbol q}_{\JHBL}$}
\end{subfigure}
%
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/sblba_x4_2d.eps}
\caption{${\vect{x}_4}^{\beta,\boldsymbol q}_{\JHBL}$}
\end{subfigure}
\caption{(top row) Edge map recovery using Algorithm \ref{alg:JHBL-fixed-beta}.
(middle row) Edge map recovery using Algorithm \ref{alg:JHBL}.
(bottom row) Edge map recovery using Algorithm \ref{alg:JHBL-refine-2D}.}
\label{fig:edge_SBL_2d}
\end{figure}
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{.32\textwidth}
\includegraphics[width=\textwidth]{plot/rec_sbl_x1_2d.eps}
\caption{${\vect{x}_1}_{\JHBL}$}
\end{subfigure}
%
\begin{subfigure}[b]{.32\textwidth}
\includegraphics[width=\textwidth]{plot/rec_sblb_x1_2d.eps}
\caption{${\vect{x}_1}^{\beta}_{\JHBL}$}
\end{subfigure}
%
\begin{subfigure}[b]{.32\textwidth}
\includegraphics[width=\textwidth]{plot/rec_sblba_x1_2d.eps}
\caption{${\vect{x}_1}^{\beta,\boldsymbol q}_{\JHBL}$}
\end{subfigure}
\\
\begin{subfigure}[b]{.325\textwidth}
\includegraphics[width=\textwidth]{plot/error_sbl_x1_2d.eps}
\caption{${\vect{x}_1}_{\JHBL}$}
\end{subfigure}
%
\begin{subfigure}[b]{.325\textwidth}
\includegraphics[width=\textwidth]{plot/error_sblb_x1_2d.eps}
\caption{${\vect{x}_1}^{\beta}_{\JHBL}$}
\end{subfigure}
%
\begin{subfigure}[b]{.325\textwidth}
\includegraphics[width=\textwidth]{plot/error_sblba_x1_2d.eps}
\caption{${\vect{x}_1}^{\beta,\boldsymbol q}_{\JHBL}$}
\end{subfigure}
\caption{(top) Image recovery of the first image in the sequence via \eqref{eq:wl1_model} with weights informed by (left) Algorithm \ref{alg:JHBL-fixed-beta};
(middle) Algorithm \ref{alg:JHBL};
and (right) Algorithm \ref{alg:JHBL-refine-2D}.
(bottom) The corresponding log-scale pointwise error.}
\label{fig:rec_SBL_2d}
\end{figure}
Figure \ref{fig:edge_SBL_2d} compares edge recovery using the JHBL method given by Algorithm \ref{alg:JHBL-fixed-beta}, which assumes information regarding $\beta$ is known a-priori, the standard JHBL method, given by Algorithm \ref{alg:JHBL}, which learns $\beta$ but does not refine the parameters based on inter-signal information, and Algorithm \ref{alg:JHBL-refine-2D}, which refines the parameter selection by accounting for both intra- and inter-image information at each of the four time instances, again based on $J = 6$ original data sets. It is evident that which band in \eqref{eq:bandzero} is missing plays an important role in how well each method is able to resolve the edges. Using a-priori information regarding $\beta$ so that $\beta=\frac{1}{\sigma^2}$ (top row) captures the internal structures, but appears to result in additional clutter.
Learning $\beta$ {\em without} refining the hyperparameters according to inter-signal information results in loss of moving edge information in each recovered edge map (second row). In all cases, refining ${\boldsymbol q}$ improves resolution while mitigating the effects of both the corrupted data and the change of support locations in the edge domain (bottom row). For this example we chose $\vartheta=0.3$ in \eqref{eq:SBL-2D-s-refine} and note that some additional tuning may improve the results.
We observe in particular the poor edge map recovery quality using the JHBL approach with fixed \(\beta \) (Algorithm \ref{alg:JHBL-fixed-beta}) in the first column of the first row in Figure \ref{fig:edge_SBL_2d}, which is likely due to the zeroed out low frequencies in \eqref{eq:bandzero}. In the middle row, we see that while the {\em stationary} edge map structures are recovered more accurately, the {\em changed} regions are not retrieved in the update process used by Algorithm \ref{alg:JHBL}. By contrast, our new approach in Algorithm \ref{alg:JHBL-refine-2D} is able both to enhance the quality observed when using Algorithm \ref{alg:JHBL-fixed-beta} while also recovering the changed regions in the edge maps (bottom row).
Figure \ref{fig:rec_SBL_2d} demonstrates how the edge maps from each recovery can be used to inform the weights in \eqref{eq:weights_scaled}, which are in turn used in the weighted $\ell_1$ regularization method given by \eqref{eq:wl1_model} for image recovery. While only the first image in the sequence is displayed, the results for the rest of the sequence are comparable. In particular we observe that Algorithm \ref{alg:JHBL-refine-2D} consistently performs as well as or better than the other two edge recovery methods, which do not properly account for change information. It is also once again evident that which band in \eqref{eq:bandzero} is missing plays an important role in how well each method is able to resolve the features in each image.
\subsubsection*{Sequential SAR images}
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/true_f1_golf.eps}
\caption{$\vect{f}_1$}
\end{subfigure}
%
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/true_f2_golf.eps}
\caption{$\vect{f}_2$}
\end{subfigure}
%
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/true_f3_golf.eps}
\caption{$\vect{f}_3$}
\end{subfigure}
%
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/true_f4_golf.eps}
\caption{$\vect{f}_4$}
\end{subfigure}
\caption{The underlying scene is the SAR image of a golf course, \cite{SAR_Image_ref}, with a superimposed (static) hotel, and translating/rotating cars and boats.}
\label{fig:rec_golf}
\end{figure}
For the second experiment we consider a temporal sequence of six SAR images of a golf course, \cite{SAR_Image_ref}, four of which are displayed in Figure \ref{fig:rec_golf}. Observe there is no ``ground truth'' in this case. As in the MRI case, we again use the discrete Fourier transform of each $128\times128$ pixelated image to obtain the measurement data.
We once again assume a symmetric band of measurements, $\{\mathcal{K}_j\}_{j=1}^J$ given in \eqref{eq:bandzero}, is for some reason not available for use. Finally, we add noise with SNR $=2$.
To simulate moving and background objects, respectively, we superimpose cars and boats, each of magnitude $1$, and a building of magnitude $1.5$, on the scene.
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/sbl_x1_golf.eps}
\caption{${\vect{x}_1}_{\JHBL}$}
\end{subfigure}
%
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/sbl_x2_golf.eps}
\caption{${\vect{x}_2}_{\JHBL}$}
\end{subfigure}
%
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/sbl_x3_golf.eps}
\caption{${\vect{x}_3}_{\JHBL}$}
\end{subfigure}
%
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/sbl_x4_golf.eps}
\caption{${\vect{x}_4}_{\JHBL}$}
\end{subfigure}
\\
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/sblb_x1_golf.eps}
\caption{${\vect{x}_1}^{\beta}_{\JHBL}$}
\end{subfigure}
%
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/sblb_x2_golf.eps}
\caption{${\vect{x}_2}^{\beta}_{\JHBL}$}
\end{subfigure}
%
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/sblb_x3_golf.eps}
\caption{${\vect{x}_3}^{\beta}_{\JHBL}$}
\end{subfigure}
%
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/sblb_x4_golf.eps}
\caption{${\vect{x}_4}^{\beta}_{\JHBL}$}
\end{subfigure}
\\
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/sblba_x1_golf.eps}
\caption{${\vect{x}_1}^{\beta,\boldsymbol q}_{\JHBL}$}
\end{subfigure}
%
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/sblba_x2_golf.eps}
\caption{${\vect{x}_2}^{\beta,\boldsymbol q}_{\JHBL}$}
\end{subfigure}
%
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/sblba_x3_golf.eps}
\caption{${\vect{x}_3}^{\beta,\boldsymbol q}_{\JHBL}$}
\end{subfigure}
%
\begin{subfigure}[b]{.23\textwidth}
\includegraphics[width=\textwidth]{plot/sblba_x4_golf.eps}
\caption{${\vect{x}_4}^{\beta,\boldsymbol q}_{\JHBL}$}
\end{subfigure}
\caption{Sequential edge map recovery shown at four time stamps using (top) Algorithm \ref{alg:JHBL-fixed-beta};
(middle) Algorithm \ref{alg:JHBL}; and (bottom) Algorithm \ref{alg:JHBL-refine-2D}.}
\label{fig:edge_golf_2d}
\end{figure}
As in the sequential MRI example, we first compare edge map recoveries using the different algorithms in Figure \ref{fig:edge_golf_2d}. The edges are then used to inform the weights in \eqref{eq:weights_scaled} for the weighted $\ell_1$ regularization method \eqref{eq:wl1_model}. Figure \ref{fig:rec_golf_2d} displays the results for the fourth image in the sequence. The results for the rest of the sequence are similar. Overall, it is clear that by using Algorithm \ref{alg:JHBL-refine-2D} we are better able to capture the structural details in the underlying images without ``oversmoothing'' the background. We note that using either Algorithm \ref{alg:JHBL} or Algorithm \ref{alg:JHBL-fixed-beta} is sufficient for obtaining weights in \eqref{eq:wl1_model} for some images in the sequence; however, the results are not consistent.
More distinction between the results of the algorithms is observed in the sequential edge map recovery. Figure \ref{fig:edge_golf_2d} shows cluttering in the edge maps when using Algorithm \ref{alg:JHBL-fixed-beta}. Learning the hyperparameters without the benefit of correlating temporal information (Algorithm \ref{alg:JHBL}) reduces the clutter, but as was the case for the MRI example, the rotating and translating structures are lost. Only Algorithm~\ref{alg:JHBL-refine-2D}, which refines the hyperparameters to account for inter-signal correlations, is able to capture the moving objects.
\begin{figure}
\centering
\begin{subfigure}[b]{.32\textwidth}
\includegraphics[width=\textwidth]{plot/rec_sbl_x4_golf.eps}
\caption{${\vect{x}_4}_{\JHBL}$}
\end{subfigure}
%
\begin{subfigure}[b]{.32\textwidth}
\includegraphics[width=\textwidth]{plot/rec_sblb_x4_golf.eps}
\caption{${\vect{x}_4}^{\beta}_{\JHBL}$}
\end{subfigure}
%
\begin{subfigure}[b]{.32\textwidth}
\includegraphics[width=\textwidth]{plot/rec_sblba_x4_golf.eps}
\caption{${\vect{x}_4}^{\beta,\boldsymbol q}_{\JHBL}$}
\end{subfigure}
\\
\begin{subfigure}[b]{.32\textwidth}
\includegraphics[width=\textwidth]{plot/error_sbl_x4_golf.eps}
\caption{${\vect{x}_4}_{\JHBL}$}
\end{subfigure}
%
\begin{subfigure}[b]{.32\textwidth}
\includegraphics[width=\textwidth]{plot/error_sblb_x4_golf.eps}
\caption{${\vect{x}_4}^{\beta}_{\JHBL}$}
\end{subfigure}
%
\begin{subfigure}[b]{.32\textwidth}
\includegraphics[width=\textwidth]{plot/error_sblba_x4_golf.eps}
\caption{${\vect{x}_4}^{\beta,\boldsymbol q}_{\JHBL}$}
\end{subfigure}
\caption{(top row) Image recovery using \eqref{eq:wl1_model} where the weights are informed by (left) Algorithm \ref{alg:JHBL-fixed-beta}
(middle) Algorithm \ref{alg:JHBL} and (right) Algorithm \ref{alg:JHBL-refine-2D}.
(bottom row) The corresponding log-scale pointwise error of the images in the top row.}
\label{fig:rec_golf_2d}
\end{figure}
\section{Conclusion}
\label{sec:discussion_ongoing}
In this paper we proposed a new sparse Bayesian learning algorithm to jointly recover a temporal sequence of edge maps from under-sampled and noisy Fourier data of piecewise smooth functions and images. Since each data set in the temporal sequence only acquires partial information regarding the underlying scene, it is not possible to {\em individually} recover each edge map in the sequence. Our new method incorporates both {\em inter-} and {\em intra-}image information into the design of the prior distribution. This is in contrast to standard multiple-measurement SBL approaches, which require stationary support across the temporal sequence. Moreover, unlike the deterministic method developed in \cite{xiao2022sequential}, our approach does {\em not} require the explicit construction of sequential change masks, which adds computational cost and introduces more parameters.
Our new algorithm compares favorably to more standard SBL approaches in regions where the underlying function is smooth (correspondingly zero in the jump function).
In particular we observed in our one-dimensional example that fewer non-zero values were returned in smooth regions where the data sequence share the joint sparsity profile. Furthermore, our new method does not suffer as much magnitude loss at the true (non-zero) jump locations.
The magnitudes at varied jump locations (across the sequence) are also much more accurate using our method, although there are some oscillations in the surrounding neighborhoods. By contrast, we observed that jump locations were completely missed when the hyperparameters were not temporally correlated.
While these initial results are promising, our method should be refined for more complicated sequences of images. For example, an empirical method can be introduced to determine the shape and rate parameters of the hyper-priors for the partial joint support of the temporal data stream. This would potentially involve another layer of hyper-hyper-parameters in the hierarchical Bayesian structure. Similarly, the proposed framework can be applied to complex-valued signals (or images), where there is sparsity in the magnitude of the signal. This would be important in SAR or ultrasound imaging. We anticipate using sampling techniques such as Markov chain Monte Carlo (MCMC) and Gibbs sampling in these cases, which will also allow uncertainty quantification of the solution posterior.
\bmhead{Data Availability}
All underlying data sets are publicly available or can be made available upon request. The MATLAB code used to obtain the results is available upon request to the authors. The MRI GE images were originally publicly provided by researchers at the Barrow Neurological Institute for the purpose of algorithmic development. The SAR golf course image is provided in \cite{SAR_Image_ref}.
\bmhead{Acknowledgments}
This work is partially supported by the NSF grants DMS \#1912685 (AG), DMS \#1939203 (GS), AFOSR grant \#FA9550-22-1-0411 (AG), DOE ASCR \#DE-ACO5-000R22725 (AG), and ONR MURI grant \#N00014-20-1-2595 (AG).
\bibliographystyle{spmpsci}
\section{Introduction}
If $f\colon\thinspace X\to \mathbb{R}$ is a suitably tame function on a compact topological space $X$, the \emph{sublevel persistence module} of $f$ (say with coefficients in a field $\kappa$) consists of the data of the homologies of the sublevel sets $H_{*}(X^{\leq t};\kappa)$ where $X^{\leq t}=f^{-1}((-\infty,t])$, together with the inclusion-induced maps $H_{*}(X^{\leq s};\kappa)\to H_*(X^{\leq t};\kappa)$. Aspects of this have been studied at least since \cite{Mor}; in contemporary language, as seen in \cite{ZC}, the sublevel persistence module can be regarded as arising from the homologies of the terms in a filtered chain complex, and decomposes as a sum of interval modules $\kappa_{[s,t)}$ where $-\infty<s<t\leq \infty$. Each summand $\kappa_{[s,t)}$ corresponds to a one-dimensional subspace that first appears in $H_{*}(X^{\leq s};\kappa)$ and, if $t=\infty$, maps injectively to $H_{*}(X;\kappa)$, while if $t<\infty$ the subspace maps to zero in the homologies of the sublevels $H_{*}(X^{\leq t'};\kappa)$ iff $t'\geq t$. The collection of intervals $[s,t)$ in this decomposition is called the sublevel barcode of $f$.
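As a simple illustration (our own, not drawn from the references above), consider the height function $f\colon\thinspace S^{1}\to \mathbb{R}$ on the unit circle, with minimum value $a$ and maximum value $b$. For $a\leq t<b$ the sublevel set $X^{\leq t}$ is a contractible arc, while $X^{\leq t}=S^{1}$ for $t\geq b$; hence the sublevel barcode consists of a single interval $[a,\infty)$ in degree $0$ and a single interval $[b,\infty)$ in degree $1$, with no finite-length bars since no sublevel homology class dies under the inclusion-induced maps.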
Ideas related to (what is now called) sublevel persistence have for some time been influential in symplectic topology; key early works in this direction include \cite{Vit},\cite{FH},\cite{Oh},\cite{Sc00}. While the first of these references used the (relative) singular homologies of sublevel sets of a function on an auxiliary topological space, work since that time has more often used some version of filtered Floer homology. The latter involves a filtered chain complex that is constructed by analogy with the Morse complex familiar from Morse theory on finite-dimensional manifolds; however, as the filtered Floer complex is based on a function $\mathcal{A}$ on an infinite-dimensional manifold (such as the free loop space $\mathcal{L}M$ of a symplectic manifold $(M,\omega)$), with critical points of infinite Morse index, the associated filtered Floer groups do not actually represent the homologies of the sublevel sets of $\mathcal{A}$. Nonetheless, from an algebraic standpoint the filtered Floer persistence module is fairly well-behaved, and various quantities that can be extracted from it convey interesting geometric information about, \emph{e.g.}, Hamiltonian diffeomorphisms and Lagrangian submanifolds. The language of persistent homology and in particular barcodes was first brought to Floer theory in \cite{PS} and additional relevant theory was developed in \cite{UZ}; this framework has found use in a variety of recent symplectic applications such as \cite{She},\cite{CGG}.
An \emph{interlevel set} of a function $f\colon\thinspace X\to \mathbb{R}$ is by definition the preimage $X_{[a,b]}:=f^{-1}([a,b])$ of some closed interval $[a,b]\subset \mathbb{R}$; for example level sets arise as the special case that $a=b$. One then obtains a persistence module parametrized by the poset of closed intervals, based on the inclusion-induced maps $H_k(X_{[a,b]};\kappa)\to H_k(X_{[c,d]};\kappa)$ for $[a,b]\subset [c,d]$. This persistence module, under suitable hypotheses, also satisfies a decomposition theorem which allows it to be classified by a collection of intervals which we will refer to as the interlevel barcode of $f$; this can be understood either in terms of the quiver-theoretic classification of zigzag diagrams such as \[ \cdots \leftarrow H_k(X_{[s,s]};\kappa)\to H_k(X_{[s,t]};\kappa)\leftarrow H_k(X_{[t,t]};\kappa)\to H_k(X_{[t,u]};\kappa)\leftarrow \cdots \] as in \cite{CDM}, or in terms of the fact that the Mayer-Vietoris sequence imposes special structure on the interlevel persistence module within the class of two-dimensional persistence modules, as in \cite{CO},\cite{BGO}.\footnote{As the Mayer-Vietoris sequence works better with open sets than closed ones, \cite{CO},\cite{BGO} use preimages of open intervals, $X_{(a,b)}=f^{-1}((a,b))$ in the role of their interlevel sets. In the motivating cases for this paper, $f$ will be a Morse function, which implies that $X_{[a,b]}$ is a deformation retract of $X_{(a-\epsilon,b+\epsilon)}$ for sufficiently small $\epsilon>0$, so working with closed intervals will not cause additional difficulty. See \cite{CDKM} for considerations related to (closed) interlevel persistence for less well-behaved functions $f$.} The interlevel barcode comprises, in general, finite-length intervals of all four types $[a,b),(a,b],(a,b),[a,b]$; roughly speaking, the presence of an interval $I$ in the interlevel barcode corresponds to a one-dimensional summand in each level-set homology $H_k(X_{[t,t]};\kappa)$ for $t\in I$, such that if $[s,t]\subset I$ the respective summands of $H_k(X_{[s,s]};\kappa)$ and $H_k(X_{[t,t]};\kappa)$ have common, nontrivial image in $H_k(X_{[s,t]};\kappa)$.
The interlevel persistence barcode of a suitably well-behaved function $f\colon\thinspace X\to \mathbb{R}$ contains more information than the sublevel persistence barcode. From \cite[Section 3]{CDM},\cite{ext} one can see that the (finite-length) bars of form $[a,b)$ are the same for both the sublevel and interlevel barcodes, while the bars of form $(a,b]$ in the interlevel barcode of $f$ correspond (modulo grading adjustment) to the finite-length bars $[-b,-a)$ in the sublevel barcode of $-f$; in the case that $f$ is a Morse function on a compact smooth $\kappa$-oriented manifold, Poincar\'e duality relates the latter to bars in the sublevel barcode of $f$. Both varieties of half-open bars $[a,b),(a,b]$ detect homology classes in interlevel (or sublevel) sets of $\pm f$ that vanish upon inclusion into the whole space $X$; the closed or open bars $[a,b],(a,b)$, on the other hand, correspond to classes that are nontrivial in $H_*(X;\kappa)$. Indeed, in the case of a Morse function on a compact smooth $\kappa$-oriented manifold, such bars come in pairs $\{[a,b],(a,b)\}$ with homological degrees adding to $n-1$, and each such pair corresponds to two infinite-length bars $[a,\infty),[b,\infty)$ in the sublevel barcode of $f$. Thus, in this Morse case, the additional information provided by the interlevel barcode in comparison to the sublevel barcode amounts to a \emph{pairing} between the endpoints of the infinite-length bars of the sublevel barcode. See Figure \ref{genustwo} for a simple example of two functions on a genus-two surface having the same sublevel barcode but different interlevel barcodes.
\begin{center}
\begin{figure}
\includegraphics[width=4.5 in]{genustwo.eps}
\caption{The height functions on the above genus two surfaces are perfect Morse functions (\emph{i.e.} the Morse inequalities are equalities), so their interlevel barcodes have no half-open intervals, and their sublevel barcodes consist only of semi-infinite intervals beginning at the critical values, which are identical in the two cases. However, the closed and open intervals in their interlevel barcodes differ, as shown. (Segments with solid endpoints represent closed intervals; those without them represent open intervals.)}
\label{genustwo}
\end{figure}
\end{center}
The present work grew out of a project to develop an abstract algebraic setup for interlevel persistence that would be general enough to provide interlevel-type barcodes in the context of Hamiltonian Floer theory and its variants in symplectic topology, just as the ``Floer-type complexes'' of \cite{UZ} allow one to construct symplectic-Floer-theoretic versions of sublevel barcodes. Since these sublevel barcodes completely classify Floer-type complexes up to suitable isomorphism of filtered complexes by \cite[Theorems A and B]{UZ}, while the interlevel barcodes should contain more information than the sublevel barcodes, our treatment must take into account features separate from the filtered isomorphism type of the Floer complex. Our resolution to this problem uses the abstract algebraic notion of a ``\textbf{Poincar\'e--Novikov structure},'' discussed briefly in Section \ref{intropn} and in more detail in Section \ref{pnsect}, which, when applied to Hamiltonian Floer theory in Section \ref{floersect}, incorporates information from the PSS isomorphism \cite{PSS} as well as the Poincar\'e intersection pairing in the underlying symplectic manifold. A general algebraic procedure associates to a Poincar\'e--Novikov structure a ``\textbf{filtered matched pair}'' in every grading, from which interlevel-barcode-type invariants can be extracted.
While the constructions in this paper were originally motivated by symplectic topology, they are algebraic in nature and as such are adaptable enough to potentially be of broader interest. Various classical-topological instances of our constructions are described in Section \ref{geomintro}.
There are two main issues that make the adaptation of interlevel persistence to Floer theory nontrivial; in isolation, either of these could perhaps be addressed by small modifications of standard methods, but in combination they seem to require new techniques such as the ones we develop here. The first point is that, while the homologies of sublevel sets are naturally mimicked by the homologies of filtered subcomplexes of the Floer complex, there does not seem to be any established Floer-theoretic counterpart to the homology of an interlevel set.\footnote{The often-used ``action window Floer homology,'' denoted $S^{[a,b)}$ in \cite{FH}, can be considered a counterpart to the \emph{relative} homology $H_{*}(f^{-1}([a,b)),f^{-1}(\{a\}))$, but for interlevel persistence we would want absolute homology---in particular we would not want the homology to vanish if $[a,b)$ contains no critical values, as is the case for action-window homology.} Thus the zigzag or two-parameter persistence modules that are usually used to develop interlevel persistence theory are not available to us, at least initially. (We will eventually construct an object that can serve this purpose (see (\ref{hkdef})), but it should be regarded as a byproduct of our theory, not as an input.)
A way around this difficulty is suggested by the isomorphism from \cite{CDM} between interlevel persistence and the \emph{extended persistence} of \cite{ext}. The latter is based on constructions directly with sublevel and superlevel sets of a function rather than interlevel sets, and thus seems conceptually closer to Floer theory. However, this runs into our second issue, namely that in many cases of interest the filtered Floer complex is an analogue not of the Morse complex of a smooth function on a compact manifold $X$ but rather of the Novikov complex \cite{Nov} of a function $\tilde{f}\colon\thinspace \tilde{X}\to \mathbb{R}$ on a covering space $\tilde{X}$ of $X$, such that $d\tilde{f}$ is the pullback of a closed one-form $\theta$ on $X$. (If the group of periods $\Gamma=\{\int_{S^1}\gamma^*\theta|\gamma\colon\thinspace S^1\to X\}$ is nontrivial and discrete, $\theta$ can be considered as the derivative of a function $X\to S^1$.) While versions of interlevel persistence have been developed in the Novikov-theoretic context both for discrete (\cite{BD},\cite{BH17},\cite{B18}) and indiscrete (\cite{B1},\cite{B20b}) period groups, an analogous construction for extended persistence does not seem to be available in the literature in a form that lends itself to straightforward adaptations to Floer theory. Some of the constructions in this paper can be regarded as providing such an extension of extended persistence, in a general algebraic framework. See Section \ref{connext} for more about the connection to extended persistence, and Remark \ref{burghelea} for a comparison to Burghelea's work.
\subsection{Organization of the paper} The following two sections can be regarded as an extended introduction: Section \ref{algintro} gives an overview of the key elements of our algebraic setup, and Section \ref{geomintro} explains more concretely how our structures arise in finite-dimensional Morse and Novikov theory and other contexts, and indicates relations to other constructions. Sections \ref{algintro} and \ref{geomintro} both contain few proofs, and frequently refer forward in the paper for precise definitions. We defer the discussion of the motivating case of Hamiltonian Floer theory to Section \ref{floersect}, after the relevant algebra has been explained in more detail. Section \ref{basic} sets up some of the basic algebraic ingredients for the rest of the paper, including an abstraction of Poincar\'e duality in Section \ref{pdsec}. Section \ref{fmpsec} introduces the central notion of a filtered matched pair and proves several key results about such objects, notably a stability theorem (Theorem \ref{stab}) and, in Section \ref{basisconstruct}, an existence theorem for the doubly-orthogonal bases that we use to construct the open and closed intervals in our version of the interlevel barcode when the period group $\Gamma$ is discrete. Section \ref{pnsect} introduces our notion of a Poincar\'e--Novikov structure, which mixes abstract versions of Poincar\'e duality and filtered Novikov theory, and connects Poincar\'e--Novikov structures to filtered matched pairs. Having developed the algebra, we then apply it in Section \ref{floersect} to Hamiltonian Floer theory, yielding (again when $\Gamma$ is discrete, though some structure remains when it is not) an ``essential barcode'' associated to a Hamiltonian flow that is designed to play the role of the collection of intervals in an interlevel barcode that are either open or closed. (The remaining, half-open intervals in interlevel barcodes are analogous to the finite-length intervals in the barcodes from \cite{UZ}.) At the end of Section \ref{floersect} we identify the endpoints of the intervals in the essential barcode (up to sign) with particular spectral invariants of the Hamiltonian flow and of its inverse; the lengths of these intervals---of which there are as many as the dimension of the homology of the manifold---could be regarded in somewhat the same spirit as the spectral norm of a Hamiltonian diffeomorphism. Section \ref{floersect} is the only part of the rest of the paper that discusses symplectic topology, and can be skipped by those whose interests lie elsewhere.
The constructions through Section \ref{floersect} focus on persistence features that are ``homologically essential'' in that they arise from and can be computed in terms of the behavior of nontrivial global homology classes. For a full picture one should also incorporate data---represented in interlevel barcodes by half-open intervals---arising from homology classes that vanish in global homology. Thus in Section \ref{clsec} we introduce chain-level versions of our constructions which make such information visible. We synthesize in (\ref{hkdef}) a two-dimensional persistence module from such a chain-level structure that is designed to play the role of the two-dimensional persistence module given by homologies of interlevel sets. We also show that this persistence module can be computed in terms of the homology-level structures discussed in Section \ref{fmpsec} together with the sublevel-persistence-type barcodes from \cite{UZ}. Section \ref{concretemorse} reviews basic features of Morse and Novikov complexes, and explains in detail how to associate our chain-level structures to such complexes. Then, in Section \ref{isosect}, we use ideas from \cite{Pa} to prove (Theorem \ref{bigiso}) that the two-dimensional persistence module given by applying (\ref{hkdef}) to these chain-level structures in the case of the Novikov complex of an $\mathbb{R}$- or $S^1$-valued function is in fact isomorphic to the usual interlevel homology persistence module. Apart from its intrinsic interest, this justifies thinking of our Floer-theoretic barcodes as the correct analogues of interlevel barcodes, since Floer complexes and their associated Poincar\'e--Novikov structures are formally very similar to the corresponding structures in Novikov theory.
\subsection*{Acknowledgements}
The seeds for this paper were planted at the 2018 Tel Aviv workshop ``Topological data analysis meets symplectic topology,'' in particular through discussions there with Dan Burghelea which left me curious as to whether there might be a more abstract version of his constructions. Early stages of the work were supported by NSF grant DMS-1509213. I am also grateful to the Geometry and Topology group at the University of Iowa and to the IBS Center for Geometry and Physics for the opportunity to present this work at seminars in 2021 when it was still at a relatively early stage of development.
\section{The general setup} \label{algintro}
We now begin to introduce our algebraic framework, which is meant to include relevant features of the filtered Novikov complexes associated to functions $\tilde{f}\colon\thinspace\tilde{X}\to\mathbb{R}$ as discussed earlier, as well as their Floer-theoretic analogues. In the background throughout the paper we fix a coefficient field $\kappa$, and a finitely-generated additive subgroup $\Gamma$ of $\mathbb{R}$ which in the case of the Novikov complex corresponds to the period group of a one-form on $X$ which pulls back to $d\tilde{f}$ under the covering map $\tilde{X}\to X$, and which is isomorphic to the deck transformation group of the cover. $\Gamma$ might well be trivial, corresponding to the case of classical Morse theory. Associated to $\kappa$ and $\Gamma$, as discussed in more detail in Section \ref{basic}, are the \textbf{group algebra} $\Lambda:=\kappa[\Gamma]$ as well as two versions of the \textbf{Novikov field} which we denote by $\Lambda_{\uparrow}$ and $\Lambda^{\downarrow}$, each of which contains $\Lambda$ as a subring. In the case that $\Gamma$ is trivial, each of $\Lambda,\Lambda_{\uparrow},\Lambda^{\downarrow}$ degenerates to the original coefficient field $\kappa$. If $\Gamma=\mathbb{Z}$, then $\Lambda$ is the Laurent polynomial ring $\kappa[T^{-1},T]$, while $\Lambda_{\uparrow}$ and $\Lambda^{\downarrow}$ are the two Laurent series fields $\kappa[T^{-1},T]]$ and $\kappa[[T^{-1},T]$. The latter are of course isomorphic as fields, but we distinguish them because they are not isomorphic as $\Lambda$-algebras.
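As a concrete illustration of the distinction (with $\Gamma=\mathbb{Z}$, and included purely for orientation), the element $1-T\in\Lambda$ is not invertible in $\Lambda$, but it becomes invertible in each of the Novikov fields, with different-looking inverses: \[ (1-T)^{-1}=\sum_{n\geq 0}T^{n}\quad\mbox{in }\Lambda_{\uparrow},\qquad\qquad (1-T)^{-1}=-\sum_{n\geq 1}T^{-n}\quad\mbox{in }\Lambda^{\downarrow},\] as one verifies by multiplying out. The two completions of $\Lambda$ thus allow infinite sums in opposite directions.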
\subsection{Filtered matched pairs}
In the case, alluded to above, of the Novikov complex of a function $\tilde{f}\colon\thinspace\tilde{X}\to \mathbb{R}$ on a covering space $\tilde{X}\to X$ with $\Gamma$ playing the role of the period group, the deck transformation action of $\Gamma$ on $\tilde{X}$ makes the homology $H_{*}(\tilde{X};\kappa)$ into a graded $\Lambda$-module. The usual Novikov chain complex of $\tilde{f}$ (reviewed in Section \ref{novsect} and written there as $\mathbf{CN}(\tilde{f};\xi)_{\uparrow}$, with $\xi$ denoting the cohomology class of the form on $X$ of which $d\tilde{f}$ is the pullback) is a chain complex of finite-dimensional vector spaces over $\Lambda_{\uparrow}$, equipped with an ascending real-valued filtration. The complex $\mathbf{CN}(\tilde{f};\xi)_{\uparrow}$ is constructed by considering negative gradient trajectories for $\tilde{f}$; one can equally well use positive gradient trajectories of $\tilde{f}$ to obtain a complex $\mathbf{CN}(\tilde{f};\xi)^{\downarrow}$ of vector spaces over $\Lambda^{\downarrow}$, with a descending real-valued filtration.\footnote{Of course, this other complex can also be understood as the usual Novikov complex of $-\tilde{f}$, after adjusting the coefficients and filtration. Assuming orientability, it can also be interpreted as the dual of the chain complex $\mathbf{CN}(\tilde{f};\xi)_{\uparrow}$ (\emph{cf}. Proposition \ref{negdualnov}), but we grade both complexes $\mathbf{CN}(\tilde{f};\xi)_{\uparrow}$ and $\mathbf{CN}(\tilde{f};\xi)^{\downarrow}$ homologically.}
Our notion of a \textbf{filtered matched pair}, defined precisely in Definition \ref{fmpdfn}, models the relation between $H_{*}(\tilde{X};\kappa)$ and the homologies of the two versions of the filtered Novikov complex mentioned above. A filtered matched pair $\mathcal{P}$ amounts to a diagram \begin{equation}\label{mpgen} \xymatrix{ & (V^{\downarrow},\rho^{\downarrow}) \\ H\ar[ru]^{\phi^{\downarrow}}\ar[rd]_{\phi_{\uparrow}} & \\ & (V_{\uparrow},\rho_{\uparrow}) } \end{equation} where $H$ is a finitely-generated $\Lambda$-module, $V_{\uparrow}$ (resp. $V^{\downarrow}$) is a $\Lambda_{\uparrow}$-vector space (resp. a $\Lambda^{\downarrow}$-vector space), the two maps are $\Lambda$-module homomorphisms that become isomorphisms after coefficient extension, and $\rho_{\uparrow},\rho^{\downarrow}$ are ``filtration functions'' on $V_{\uparrow},V^{\downarrow}$, interpreted as assigning to an element the first filtration level at which it appears.\footnote{The filtration on $V_{\uparrow}$ is ascending and that on $V^{\downarrow}$ is descending, so ``first'' means ``minimal'' in the case of $\rho_{\uparrow}$ and ``maximal'' in the case of $\rho^{\downarrow}$. The precise conditions required of $\rho_{\uparrow}$ and $\rho^{\downarrow}$ are specified in Definition \ref{orthdef}.}
If $d$ is the maximal cardinality of a $\Lambda$-independent set in $H$, in Definition \ref{gapdfn} we associate to the filtered matched pair $\mathcal{P}$ as in (\ref{mpgen}) a sequence of real numbers $G_1(\mathcal{P})\geq \cdots\geq G_d(\mathcal{P})$. In favorable situations---including, as we show in Theorem \ref{basistheorem}, all cases in which the period group $\Gamma$ is discrete---$\mathcal{P}$ will admit what we call a \textbf{doubly-orthogonal basis} $\{e_1,\ldots,e_d\}\subset H$. In this case one has (modulo reordering) $G_i(\mathcal{P})
=\rho^{\downarrow}(\phi^{\downarrow}e_i)-\rho_{\uparrow}(\phi_{\uparrow}e_i)$, and the \textbf{basis spectrum} of $\mathcal{P}$ is defined as the multiset $\Sigma(\mathcal{P})$ of elements \[ \left(\rho_{\uparrow}(\phi_{\uparrow}e_i)\,\mathrm{mod}\Gamma,\,\,\rho^{\downarrow}(\phi^{\downarrow}e_i)-\rho_{\uparrow}(\phi_{\uparrow}e_i)\right) \] of $(\mathbb{R}/\Gamma)\times \mathbb{R}$. By Proposition \ref{barinvt} this multiset is independent of the choice of doubly-orthogonal basis.
In the examples that we have in mind, the modules $H,V_{\uparrow},V^{\downarrow}$ appearing in a filtered matched pair arise as, for some $k\in\mathbb{Z}$, the degree-$k$ homologies of chain complexes $\mathcal{C},\mathcal{C}_{\uparrow},\mathcal{C}^{\downarrow}$, with $\mathcal{C}_{\uparrow}$ and $\mathcal{C}^{\downarrow}$ admitting filtrations that make them into Floer-type complexes as in \cite{UZ}. (See the notion of a ``chain-level filtered matched pair'' in Definition \ref{cfmpdfn}.) Elements of the basis spectrum are intended to correspond to intervals (canonical up to $\Gamma$-translation) in interlevel barcodes connecting endpoints $\rho^{\downarrow}(\phi^{\downarrow}e_i)$ and $\rho_{\uparrow}(\phi_{\uparrow}e_i)$; the relevant interval is closed and in homological degree $k$ when $\rho^{\downarrow}(\phi^{\downarrow}e_i)\geq \rho_{\uparrow}(\phi_{\uparrow}e_i)$ and open and in homological degree $k-1$ when $\rho^{\downarrow}(\phi^{\downarrow}e_i)< \rho_{\uparrow}(\phi_{\uparrow}e_i)$. The absolute values of the ``gaps'' $G_i(\mathcal{P})$ then correspond to the lengths of these intervals (which do not need to be reduced modulo $\Gamma$). By design, a filtered matched pair only carries information that survives to the homologies $H,V_{\uparrow},V^{\downarrow}$ of the full chain complexes $\mathcal{C},\mathcal{C}_{\uparrow},\mathcal{C}^{\downarrow}$; the remaining (half-open) bars in interlevel barcodes can be found by applying the methods of \cite{UZ} to the Floer-type complexes $\mathcal{C}_{\uparrow},\mathcal{C}^{\downarrow}$.
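Here is a toy example of these notions, with ad hoc numerical values that are not drawn from any geometric situation considered later. Take $\Gamma=\{0\}$, so that $\Lambda=\Lambda_{\uparrow}=\Lambda^{\downarrow}=\kappa$, let $H=V_{\uparrow}=V^{\downarrow}=\kappa^2$ with $\phi_{\uparrow}$ and $\phi^{\downarrow}$ both equal to the identity, and define the filtration functions on the standard basis $\{e_1,e_2\}$ by \[ \rho_{\uparrow}(e_1)=0,\quad \rho^{\downarrow}(e_1)=3,\qquad \rho_{\uparrow}(e_2)=2,\quad \rho^{\downarrow}(e_2)=1,\] extended to general elements by $\rho_{\uparrow}\left(\sum_ic_ie_i\right)=\max\{\rho_{\uparrow}(e_i)|c_i\neq 0\}$ and $\rho^{\downarrow}\left(\sum_ic_ie_i\right)=\min\{\rho^{\downarrow}(e_i)|c_i\neq 0\}$. Then $\{e_1,e_2\}$ is a doubly-orthogonal basis, the gaps are $G_1=3$ and $G_2=-1$, and the basis spectrum is $\{(0,3),(2,-1)\}$; per the interpretation just described, if this filtered matched pair arose in homological degree $k$ it would contribute a closed interval $[0,3]$ in degree $k$ and an open interval $(1,2)$ in degree $k-1$.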
Note that the $G_i(\mathcal{P})$, unlike the basis spectrum, are defined regardless of whether $\mathcal{P}$ admits a doubly-orthogonal basis. By Theorem \ref{basistheorem}, the only cases in which $\mathcal{P}$ might not admit a doubly-orthogonal basis occur when the subgroup $\Gamma$ of $\mathbb{R}$ is indiscrete and hence dense. In this scenario, the only quantity associated to an interval modulo $\Gamma$-translation that is robust to perturbation is its length, as represented by $G_i(\mathcal{P})$; thus in the cases where we do not have a well-defined basis spectrum the $G_i(\mathcal{P})$ could be said to carry all the geometrically meaningful information that such a spectrum might have conveyed. Nonetheless, it would be interesting to know whether Theorem \ref{basistheorem} can be generalized to some cases\footnote{For such a generalization one would minimally need to assume that the $\Lambda$-module $H$ splits as a direct sum of a free module and a torsion module, which is not always true when $\Gamma$ is dense. This hypothesis does hold in the motivating case of Hamiltonian Floer theory (in fact in that case $H$ is free).} in which $\Gamma$ is dense, especially if this could be done in a way which (like our proof of Theorem \ref{basistheorem} in the discrete case) is constructive and hence would give a way of computing the $G_i(\mathcal{P})$, which seems difficult to do directly from the definition when $\Gamma$ is dense.
A crucial general property of persistence barcodes is \textbf{stability}: for instance, for interlevel persistence a small change in the function whose interlevel sets are being considered leads to a correspondingly small change in the barcode. We now describe such a result in our algebraic framework of filtered matched pairs. Since filtered matched pairs are only intended to model the contributions to interlevel persistence that correspond to global homology classes, the appropriate stability result should compare filtered matched pairs whose versions of the respective modules $H,V_{\uparrow},V^{\downarrow}$ are isomorphic, but with different versions of the filtration functions $\rho_{\uparrow},\rho^{\downarrow}$. For ease of exposition in this introduction we assume the versions of the $H,V_{\uparrow},V^{\downarrow}$ are equal, not just isomorphic; we give a slightly more general formulation in Section \ref{fmpsec}. Here is the fundamental stability result:
\begin{theorem}\label{introstab} Consider two filtered matched pairs
\[
\xymatrix{ & & (V^{\downarrow},\rho^{\downarrow}) & & & & (V^{\downarrow},\hat{\rho}^{\downarrow}) \\ \mathcal{P} \ar@{}[r]^(.35){}="a"^(.65){}="b" \ar@{=} "a";"b" & H\ar[ru]^{\phi^{\downarrow}}\ar[rd]_{\phi_{\uparrow}} & & \mbox{and} & \hat{\mathcal{P}} \ar@{}[r]^(.35){}="a"^(.65){}="b" \ar@{=} "a";"b" & H\ar[ru]^{\phi^{\downarrow}}\ar[rd]_{\phi_{\uparrow}} \\ & & (V_{\uparrow},\rho_{\uparrow}) & & & & (V_{\uparrow},\hat{\rho}_{\uparrow}) }
\] with the same data $H,V^{\downarrow},V_{\uparrow},\phi^{\downarrow},\phi_{\uparrow}$ but (possibly) different filtration functions $\rho^{\downarrow},\rho_{\uparrow},\hat{\rho}^{\downarrow},\hat{\rho}_{\uparrow}$, and let \[ t=\max\left\{\max_{V^{\downarrow}\setminus\{0\}}|\rho^{\downarrow}-\hat{\rho}^{\downarrow}|,\max_{V_{\uparrow}\setminus\{0\}}|\rho_{\uparrow}-\hat{\rho}_{\uparrow}|\right\}.\] Then the gaps $G_i$ satisfy \[ |G_i(\mathcal{P})-G_i(\hat{\mathcal{P}})|\leq 2t.\] Moreover, if $\Gamma$ is discrete, so that the basis spectra $\Sigma(\mathcal{P})$ and $\Sigma(\hat{\mathcal{P}})$ are well-defined, then there is a bijection $\sigma\colon\thinspace \Sigma(\mathcal{P})\to\Sigma(\hat{\mathcal{P}})$ such that for each $([a],\ell)\in\Sigma(\mathcal{P})$, the image $([\hat{a}],\hat{\ell})=\sigma([a],\ell)$ satisfies (for some choice of $\hat{a}$ within its $\Gamma$-coset) \[ |\hat{a}-a|\leq t\quad \mbox{and}\quad |(\hat{a}+\hat{\ell})-(a+\ell)|\leq t.\]
\end{theorem}
\begin{proof}
The statement about the $G_i$ follows from Proposition \ref{gapcts} (using the identity map in the role of the ``$t$-morphism'' of that proposition), and the statement about the basis spectra follows from Theorem \ref{stab}.
\end{proof}
The combination of Theorem \ref{introstab} (for the closed and open bars) with \cite[Theorem 8.17]{UZ} (for the half-open bars) can be regarded as an algebraic version of stability theorems for interlevel persistence such as the LZZ stability theorem of \cite{CDM} or \cite[Theorem 1.2]{BH17}. As noted in \cite{BH17}, continuous deformations of a function can result in a closed interlevel barcode interval in degree $k$ shrinking and then transforming into an open interlevel barcode interval in degree $k-1$; in our context this corresponds to the parameter $\ell$ in an element $([a],\ell)$ of the basis spectrum (corresponding to the difference $\rho^{\downarrow}(\phi^{\downarrow}e_i)-\rho_{\uparrow}(\phi_{\uparrow}e_i)$) passing from positive to negative.
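As a numerical illustration of this transition (with ad hoc values, continuing the toy example above): a perturbation with $t=3/4$ can move an element $([0],1)$ of a basis spectrum to $([3/4],-1/2)$ (for instance by replacing $(\rho_{\uparrow}(e),\rho^{\downarrow}(e))=(0,1)$ with $(3/4,1/4)$ on the relevant basis vector), so that the closed bar $[0,1]$ in degree $k$ shrinks through zero length and becomes the open bar $(1/4,3/4)$ in degree $k-1$. The estimates of Theorem \ref{introstab} are attained here, since $|3/4-0|=3/4$ and $|(3/4-1/2)-(0+1)|=3/4$.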
\subsection{Poincar\'e--Novikov structures} \label{intropn} A filtered matched pair involves two vector spaces $V_{\uparrow}$ and $V^{\downarrow}$ over (different) Novikov fields, one with an ascending filtration and the other with a descending filtration. In motivating examples, there is a filtered matched pair in every grading $k$, and (for some fixed integer $n$) a version of Poincar\'e duality relates the degree $k$ version of $V^{\downarrow}$ to the dual of the degree $n-k$ version of $V_{\uparrow}$. This is abstracted in our definition of an $n$-\textbf{Poincar\'e--Novikov structure}; see Definition \ref{pnstr}. Here one has, for each $k\in \mathbb{Z}$, a $\Lambda$-module $H_k$, a $\Lambda_{\uparrow}$-vector space $V_k$ equipped with an ascending filtration function $\rho_k$, and (as in the bottom half of the diagram (\ref{mpgen})) a module homomorphism $S_k\colon\thinspace H_k\to V_k$ that becomes an isomorphism after coefficient extension. Moreover, we require an abstraction of the Poincar\'e intersection pairing between $H_k$ and $H_{n-k}$, taking the form of a suitable map $\mathcal{D}_k\colon\thinspace H_k\to {}^{\vee}\!H_{n-k}$ where ${}^{\vee}\!H_{n-k}$ is the ``conjugated dual'' (see Section \ref{dualsec}) of $H_{n-k}$. (One obtains a ``strong'' or a ``weak'' $n$-Poincar\'e--Novikov structure depending on precisely what requirements are put on the $\mathcal{D}_k$.) This information is sufficient to yield a filtered matched pair $\mathcal{P}(\mathcal{N})_k$ of the form \[\xymatrix{ & ({}^{\vee}(V_{n-k}),{}^{\vee}\rho_{n-k}) \\ H_k\ar[ru]^{\widetilde{S}_k} \ar[rd]_{S_k} & \\ & (V_k,\rho_k)} \] constructed in detail in Section \ref{pnsect}. From the $\mathcal{P}(\mathcal{N})_{k}$ we can define the gaps $\mathcal{G}_{k,i}(\mathcal{N}):=G_i(\mathcal{P}(\mathcal{N})_k)$ and, when the $\mathcal{P}(\mathcal{N})_k$ admit doubly-orthogonal bases, the ``essential barcode'' of $\mathcal{N}$; the latter is defined in Definition \ref{pnbar} in terms of the basis spectra of the $\mathcal{P}(\mathcal{N})_k$ and consists, in each grading, of equivalence classes (written as $[a,b]^{\Gamma}$ or $(a,b)^{\Gamma}$) of closed or open intervals modulo the translation action of $\Gamma$. The word ``essential'' alludes to the fact that such intervals correspond to nontrivial global homology classes in the case of interlevel persistence. Theorem \ref{introstab} readily implies stability results for these invariants of Poincar\'e--Novikov structures; see Corollaries \ref{pngapstab} and \ref{pnstab}.
There is a natural notion of the dual of a filtered matched pair, developed in Section \ref{fmpdual}. In the case of the filtered matched pairs $\mathcal{P}(\mathcal{N})_k$ associated to an $n$-Poincar\'e--Novikov structure, the dual of $\mathcal{P}(\mathcal{N})_k$ is closely related to $\mathcal{P}(\mathcal{N})_{n-k}$ (see Proposition \ref{dualpnchar} for a precise statement). This leads to Corollary \ref{pndual}, asserting that if $\mathcal{N}$ is a strong $n$-Poincar\'e--Novikov structure such that the $\mathcal{P}(\mathcal{N})_k$ all admit doubly-orthogonal bases\footnote{For instance, if $\Gamma$ is discrete then the Poincar\'e--Novikov structures associated to filtered Novikov complexes and their Hamiltonian-Floer-theoretic analogues satisfy these properties, using Proposition \ref{cohtopdstr} and Theorem \ref{basistheorem}.} then for each $k$ there is a bijection between elements $[a,b]^{\Gamma}$ of the degree-$k$ essential barcode of $\mathcal{N}$ and elements $(a,b)^{\Gamma}$ of the degree-$(n-k-1)$ essential barcode of $\mathcal{N}$. In the context of interlevel persistence for an $\mathbb{R}$- or $S^1$-valued Morse function $f$ on a smooth manifold, this, together with a similar statement for the half-open bars in the full barcode that can be inferred from \cite[Proposition 6.7]{UZ}\footnote{See the discussion in Section \ref{cpnstr}.}, can be regarded as demonstrating Poincar\'e duality (at least in the classical sense of a symmetry of Betti numbers) for the regular fibers of $f$. Thus our general formalism of Poincar\'e--Novikov structures is used to infer, entirely algebraically, a statement corresponding to Poincar\'e duality for the regular fibers of a map from hypotheses including Poincar\'e duality (in the form of the maps $\mathcal{D}_k\colon\thinspace H_k\to {}^{\vee}\!H_{n-k}$) for the full domain of the map.
\subsection{Chain-level notions}
While the modules $H,V_{\uparrow},V^{\downarrow}$ in a filtered matched pair as in (\ref{mpgen}) are in practice usually obtained as the homologies (in some degree) of chain complexes $\mathcal{C},\mathcal{C}_{\uparrow},\mathcal{C}^{\downarrow}$, the formalism in Section \ref{pnsect} does not require this, and the gaps and basis spectra of a filtered matched pair are defined only in terms of $H,V_{\uparrow},V^{\downarrow}$ and maps between them, without reference to any chain complex. Likewise, the definition and properties of an $n$-Poincar\'e--Novikov structure do not appeal to any chain complexes having $H_k,V_k$ as their homologies. However, in order to justify our interpretation of these as modeling aspects of interlevel persistence, and also to capture the parts of interlevel persistence arising from homology classes of interlevel sets that vanish upon inclusion into the whole space, we should incorporate data from such chain complexes and not just their homologies. This motivates the definitions of a ``chain-level filtered matched pair'' (Definition \ref{cfmpdfn}) and a ``chain-level $n$-Poincar\'e--Novikov structure'' (second paragraph of Section \ref{cpnstr}). Roughly speaking, these chain-level definitions replace the relevant modules over $\Lambda,\Lambda_{\uparrow},\Lambda^{\downarrow}$ with chain complexes, and replace the conditions that various maps be isomorphisms after coefficient extension with the condition that they be chain homotopy equivalences after coefficient extension.
The data of a chain-level filtered matched pair include chain complexes $\mathcal{C}_{\uparrow},\mathcal{C}^{\downarrow}$ over $\Lambda_{\uparrow}$ and $\Lambda^{\downarrow}$, respectively, equipped with respective filtration functions $\ell_{\uparrow}$ and $\ell^{\downarrow}$ that make them Floer-type complexes in the sense of \cite{UZ} (with a slight modification of the definition in the case of $\mathcal{C}^{\downarrow}$ so that the filtration will be descending rather than ascending). Applying the homology functor to a chain-level filtered matched pair yields a filtered matched pair (in the original sense) in each degree $k$; the relevant (ascending) filtration function $\rho_{\uparrow}$ on $H_k(\mathcal{C}_{\uparrow})$ is given by \[ \rho_{\uparrow}(h)=\inf\{\ell_{\uparrow}(c)\,|\,c\in \mathcal{C}_{\uparrow},\,\partial_{\mathcal{C}_{\uparrow}}c=0,\,[c]=h\} \] and similarly the descending filtration function on $H_k(\mathcal{C}^{\downarrow})$ is given by \[ \rho^{\downarrow}(h)=\sup\{\ell^{\downarrow}(c)\,|\,c\in \mathcal{C}^{\downarrow},\,\partial_{\mathcal{C}^{\downarrow}}c=0,\,[c]=h\}.\] (In the language of chain-level symplectic Floer theory, $\rho_{\uparrow}(h)$ or $\rho^{\downarrow}(h)$ would be called the \emph{spectral invariant} of the homology class $h$.)
To a chain-level filtered matched pair $\mathcal{CP}$ and a grading $k\in \mathbb{Z}$ we associate in (\ref{hkdef}) a two-parameter persistence module $\mathbb{H}_k(\mathcal{CP})$ in the category of vector spaces over $\kappa$; thus for all $(s,t)\in \mathbb{R}^2$, we have a $\kappa$-vector space $\mathbb{H}_k(\mathcal{CP})_{s,t}$, with suitably compatible maps $\varepsilon_{(s,t),(s',t')}\colon\thinspace \mathbb{H}_k(\mathcal{CP})_{s,t}\to \mathbb{H}_k(\mathcal{CP})_{s',t'}$ whenever $s\leq s'$ and $t\leq t'$. It is this persistence module that most directly generalizes persistence modules built from interlevel sets. To wit, suppose that $X$ is a smooth oriented $n$-dimensional manifold and $p\colon\thinspace \tilde{X}\to X$ is a regular covering space with deck transformation group $\tilde{\Gamma}$. Suppose also that $\tilde{f}\colon\thinspace \tilde{X}\to \mathbb{R}$ is a Morse function such that for some isomorphism $\omega\colon\thinspace \tilde{\Gamma}\to\Gamma$ (where $\Gamma$ is the same subgroup of $\mathbb{R}$ as before), we have $\tilde{f}(\gamma\tilde{x})=\tilde{f}(\tilde{x})-\omega(\gamma)$ for all $\tilde{x}\in\tilde{X}$ and $\gamma\in\tilde{\Gamma}$. One can then use the filtered Novikov complex of $\tilde{f}$, together with Poincar\'e duality for $\tilde{X}$ (as adapted to regular covering spaces in, \emph{e.g.}, \cite[Section 4.5]{Ran}) to construct a chain-level $n$-Poincar\'e--Novikov structure $\mathcal{CN}(\tilde{f};\xi)$; see Definition \ref{cndef}. In this context we prove the following as Theorem \ref{bigiso}:
\begin{theorem}\label{introiso} Assume that $\Gamma$ is discrete. Then, for $s+t\geq 0$, there are isomorphisms \[ \sigma_{s,t}\colon\thinspace \mathbb{H}_k(\mathcal{CP}(\mathcal{CN}(\tilde{f};\xi)))_{s,t}\to H_k(\tilde{f}^{-1}([-s,t]);\kappa) \] such that, when $s\leq s'$ and $t\leq t'$, we have a commutative diagram \[ \xymatrix{ \mathbb{H}_k(\mathcal{CP}(\mathcal{CN}(\tilde{f};\xi)))_{s,t} \ar[r]^{\varepsilon_{(s,t),(s',t')}} \ar[d]_{\sigma_{s,t}} & \mathbb{H}_k(\mathcal{CP}(\mathcal{CN}(\tilde{f};\xi)))_{s',t'} \ar[d]^{\sigma_{s',t'}} \\ H_k(\tilde{f}^{-1}([-s,t]);\kappa) \ar[r] & H_k(\tilde{f}^{-1}([-s',t']);\kappa) } \] where the bottom horizontal map is the map on singular homology induced by inclusion.
\end{theorem}
When $\Gamma$ is discrete, we show in Section \ref{decompsect} that, up to a notion of ``filtered matched homotopy equivalence'' (Definition \ref{fmhedef}) that preserves the isomorphism types of the associated persistence modules $\mathbb{H}_k(\mathcal{CP})$, any chain-level filtered matched pair $\mathcal{CP}=(\mathcal{C},\mathcal{C}^{\downarrow},\mathcal{C}_{\uparrow},\phi^{\downarrow},\phi_{\uparrow})$ splits as a direct sum of five types of simple building blocks, leading to a corresponding decomposition of each $\mathbb{H}_k(\mathcal{CP})$. Two of the five types of building block correspond to homologically trivial summands in decompositions from \cite{UZ} of the Floer-type complexes $\mathcal{C}_{\uparrow},\mathcal{C}^{\downarrow}$. Another two correspond to elements $([a],\ell)$ of the basis spectrum of the filtered matched pair $\mathcal{P}$ obtained by applying the homology functor to $\mathcal{CP}$ (with one variant corresponding to elements with $\ell\geq 0$ and the other to elements with $\ell<0$). The remaining building block corresponds to the $\Lambda$-torsion part of the homology of the chain complex of $\Lambda$-modules $\mathcal{C}$, and yields a contribution to each $\mathbb{H}_k(\mathcal{CP})_{s,t}$ that is independent of the parameters $s$ and $t$ and of the filtrations on the Floer-type complexes $\mathcal{C}_{\uparrow},\mathcal{C}^{\downarrow}$. (If $\Gamma=\{0\}$ then $\Lambda$ is the field $\kappa$ so the $\Lambda$-torsion is trivial; for nontrivial $\Gamma$ the $\Lambda$-torsion part of $H_k(\mathcal{C})$ corresponds to what is denoted $V_{r}(\xi)$ at the start of \cite[Section 4]{BH17}, and thus to the ``Jordan cells'' of \cite{BD},\cite{BH17}.)
The ``full barcode'' of $\mathcal{CP}$ (Definition \ref{fullbar}) comprises intervals-modulo-$\Gamma$-translation $[a,b]^{\Gamma}$ and $(a,b)^{\Gamma}$ obtained from the basis spectrum of $\mathcal{P}$, and $[a,b)^{\Gamma}$ and $(a,b]^{\Gamma}$ obtained from the concise barcodes (as in \cite{UZ}) of $\mathcal{C}_{\uparrow},\mathcal{C}^{\downarrow}$. Then, continuing to assume that $\Gamma$ is discrete, $\mathbb{H}_k(\mathcal{CP})$ splits as a direct sum of $\Gamma$-periodic versions of the block modules of \cite{CO} associated to the intervals in the full barcode, together with a contribution from the torsion part of $H_k(\mathcal{C})$; see Theorem \ref{bigdecomp} for a precise statement.
As a practical matter, this implies that the constructions of Section \ref{fmpsec} at the homological level, together with those of \cite{UZ} associated with the Floer-type complexes $\mathcal{C}_{\uparrow},\mathcal{C}^{\downarrow}$, are sufficient for the computation of all invariants associated to our algebraic version of interlevel persistence when $\Gamma$ is discrete; one does not need to engage directly with the somewhat more complicated constructions in Section \ref{clsec}, though such constructions have conceptual importance in that they justify our interpretation of the invariants via Theorem \ref{introiso}. Note that both Theorem \ref{basistheorem}, which allows one to find the intervals $[a,b]^{\Gamma}$ and $(a,b)^{\Gamma}$ in the full barcode, and \cite[Theorem 3.5]{UZ}, which allows one to find the intervals $[a,b)^{\Gamma}$ and $(a,b]^{\Gamma}$, have proofs that are at least implicitly algorithmic.
If $\Gamma$ is not discrete, one cannot expect chain-level filtered matched pairs $\mathcal{CP}$ to be classified up to filtered matched homotopy equivalence in as straightforward a way as in the discrete case; after all, any such classification would need to be at least as complicated as the classification of the possible $H_k(\mathcal{C})$, which can be arbitrary finitely-generated $\Lambda$-modules, and the indiscreteness of $\Gamma$ implies that $\Lambda$ is not a PID, so that its finitely-generated modules do not admit a simple classification. From the persistent homology viewpoint, though, one would be interested mainly in the invariants of $\mathcal{CP}$ that depend nontrivially on the filtrations on $\mathcal{C}_{\uparrow}$ and $\mathcal{C}^{\downarrow}$ and are suitably stable under perturbation. Examples of such invariants are the gaps $G_i$ of the filtered matched pair $\mathcal{P}$ obtained by applying the homology functor to $\mathcal{CP}$, and the lengths of the finite-length bars in the concise barcodes of $\mathcal{C}_{\uparrow}$ and $\mathcal{C}^{\downarrow}$. It is not clear to me whether there are other such invariants going beyond these.
\section{Examples, connections, and interpretations}\label{geomintro}
\subsection{Morse functions and extended persistence} \label{connext}
We now discuss in a bit more detail how our algebraic setup arises in practice. Let us begin with the special case of a Morse function $f\colon\thinspace X\to \mathbb{R}$ on a compact $n$-dimensional $\kappa$-oriented smooth manifold $X$. (So in this case $\Gamma=\{0\}$.) As we review in Section \ref{morseintro}, associated to $f$ is a $\Lambda_{\uparrow}$-Floer-type complex $\mathbf{CM}(f)_{\uparrow}$; this notation encompasses both the usual Morse chain complex which we denote on its own by $\mathbf{CM}_{*}(f)$ and the filtration function $\ell_{\uparrow}^{f}$ which assigns to a formal sum of critical points the maximal critical value of a critical point appearing in the sum. There is a standard isomorphism $\phi_f\colon\thinspace H_*(X;\kappa)\to \mathbf{HM}_*(f)$ between the $\kappa$-coefficient singular homology of $X$ and the homology of $\mathbf{CM}_{*}(f)$ (see, \emph{e.g.}, \cite[Chapter 6]{Pa}). Definition \ref{cmdef} associates to $f$ a chain-level $n$-Poincar\'e--Novikov structure; upon passing to homology (and using the isomorphism $\phi_{h_0}$ to identify the Morse homology of the auxiliary Morse function $h_0$ in Definition \ref{cmdef} with $H_*(X;\kappa)$) one obtains an $n$-Poincar\'e--Novikov structure consisting of the following data:
\begin{itemize} \item the isomorphism $\phi_f\colon\thinspace H_*(X;\kappa)\to \mathbf{HM}_*(f)$;
\item the filtration function $\rho_{\uparrow}^{f}\colon\thinspace \mathbf{HM}_{*}(f)\to\mathbb{R}\cup\{-\infty\}$ defined by $\rho_{\uparrow}^{f}(h)=\inf\{\ell_{\uparrow}^{f}(c)|c\in \mathbf{CM}_{*}(f),\,[c]=h\}$; and \item for each $k$, the map $\mathcal{D}_k\colon\thinspace H_k(X;\kappa)\to \mathrm{Hom}_{\kappa}(H_{n-k}(X;\kappa),\kappa)$ corresponding to the Poincar\'e intersection pairing: for $a\in H_k(X;\kappa)$ and $b\in H_{n-k}(X;\kappa)$, $(\mathcal{D}_ka)(b)=a\cap b$ is the signed count of intersections between suitably generic representatives of $a$ and $b$.
\end{itemize}
Denote this Poincar\'e--Novikov structure by $\mathcal{N}(f)$. The general procedure of Section \ref{pnsect} then yields, for each $k\in \mathbb{Z}$, a filtered matched pair $\mathcal{P}(\mathcal{N}(f))_k$. By using Proposition \ref{downup} (applied to the case that $\Gamma=\{0\}$) and then passing to homology, one sees that $\mathcal{P}(\mathcal{N}(f))_k$ is isomorphic in the natural sense to the filtered matched pair \begin{equation}\label{introupdown} \xymatrix{ & (\mathbf{HM}_{k}(-f),-\rho_{\uparrow}^{-f}) \\ H_k(X;\kappa) \ar[ru]^{\phi_{-f}} \ar[rd]_{\phi_f} & \\ & (\mathbf{HM}_k(f),\rho_{\uparrow}^{f}) } \end{equation} Let us write $\mu_{\uparrow}^{f}=\rho_{\uparrow}^{f}\circ \phi_f$ and $\mu^{\downarrow}_{f}=(-\rho_{\uparrow}^{-f})\circ\phi_{-f}$ for the pullbacks of the filtration functions appearing in the above diagram to functions on $H_k(X;\kappa)$. These define, respectively, an ascending and a descending filtration on $H_k(X;\kappa)$; using, \emph{e.g.}, \cite[Theorem 3.8]{Qin} one can see that, for any $t\in \mathbb{R}$, \begin{align}\label{qiniso} \{h\in H_k(X;\kappa)|\mu_{\uparrow}^{f}(h)\leq t\}&=\mathrm{Im}\left(H_k(X^{\leq t};\kappa)\to H_k(X;\kappa)\right) \\
\{h\in H_k(X;\kappa)|\mu^{\downarrow}_{f}(h)\geq t\} &= \mathrm{Im}\left(H_k(X_{\geq t};\kappa)\to H_k(X;\kappa)\right) \nonumber\end{align} where the maps on homology are induced by inclusion and, as before, $X^{\leq t}=f^{-1}((-\infty,t])$ and $X_{\geq t}=f^{-1}([t,\infty))$. Thus $\mu_{\uparrow}^{f}$ associates to a homology class $h$ its ``minimax value''---the infimum, among all cycles $c$ representing $h$, of the maximal value of $f$ on $c$---while $\mu^{\downarrow}_{f}(h)$ is the similarly-defined maximin value.
A doubly-orthogonal basis for the filtered matched pair (\ref{introupdown})---as defined in general in Definition \ref{dodfn}---then amounts to a basis $\{e_1,\ldots,e_d\}$ for $H_k(X;\kappa)$ such that, for $c_1,\ldots,c_d\in \kappa$, one has both \[ \mu_{\uparrow}^{f}\left(\sum_{i=1}^{d}c_ie_i\right)=\max\left\{\left.\mu_{\uparrow}^{f}(e_i)\right| c_i\neq 0\right\}\quad\mbox{and}\quad \mu^{\downarrow}_{f}\left(\sum_{i=1}^{d}c_ie_i\right)=\min\left\{\left.\mu^{\downarrow}_{f}(e_i)\right| c_i\neq 0\right\} .\] Our general prescription then associates a barcode interval $[\mu_{\uparrow}^{f}(e_i),\mu^{\downarrow}_{f}(e_i)]$ in degree $k$ to each $e_i$ with $\mu_{\uparrow}^{f}(e_i)\leq \mu^{\downarrow}_{f}(e_i)$, and a barcode interval $\left(\mu^{\downarrow}_{f}(e_i),\mu_{\uparrow}^{f}(e_i)\right)$ in degree $k-1$ to each $e_i$ with $\mu^{\downarrow}_{f}(e_i)<\mu_{\uparrow}^{f}(e_i)$.
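For orientation, here is how this plays out in a standard example, included purely for illustration: let $X=T^2$ be a torus embedded upright in $\mathbb{R}^3$ and let $f$ be the height function, with critical values $v_{\min}<a<b<v_{\max}$ at the minimum, the two saddles, and the maximum. For $k=1$, a doubly-orthogonal basis for $H_1(X;\kappa)\cong\kappa^2$ is given by the meridian class $m$ (represented by the core circle of the cylinder that $X^{\leq t}$ becomes for $t\in(a,b)$) and the longitude class $l$, with \[ \mu_{\uparrow}^{f}(m)=a,\quad \mu^{\downarrow}_{f}(m)=b,\qquad \mu_{\uparrow}^{f}(l)=b,\quad \mu^{\downarrow}_{f}(l)=a.\] So the degree-$1$ filtered matched pair contributes a closed bar $[a,b]$ in degree $1$ and an open bar $(a,b)$ in degree $0$; together with the closed bar $[v_{\min},v_{\max}]$ in degree $0$ coming from $H_0(X;\kappa)$ and the open bar $(v_{\min},v_{\max})$ in degree $1$ coming from $H_2(X;\kappa)$, this matches the interlevel behavior of $f$: for $t\in(a,b)$ the level set $f^{-1}(\{t\})$ is a disjoint union of two circles, accounted for by the two degree-$0$ bars and the two degree-$1$ bars containing $t$.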
Recall from \cite{ext}, \cite[Section 3]{CDM} that the extended persistence of $f$ is constructed from the persistence module given by choosing
an increasing sequence $(s_0,\ldots,s_m)$ with one element in each connected component of the set of regular values of $f$ (so $s_0<\min f$ and $s_m>\max f$) and considering the diagram \begin{align} \label{essdiag} 0=& H_k(X^{\leq s_0};\kappa)\to H_k(X^{\leq s_1};\kappa)\to\cdots\to H_k(X^{\leq s_m};\kappa)=H_k(X;\kappa)=H_k(X,X_{\geq s_m};\kappa)\\ &\to H_k(X,X_{\geq s_{m-1}};\kappa)\to\cdots\to H_k(X,X_{\geq s_0};\kappa)=0\nonumber \end{align} where all maps are induced by inclusion.
The standard persistence module decomposition theorem \cite{ZC} splits this persistence module into a direct sum of interval modules, \emph{i.e.} diagrams of form \begin{equation}\label{interval} 0\to\cdots 0\to \kappa\to\kappa\to\cdots\kappa\to 0\to\cdots\to 0\end{equation} where each map $\kappa\to\kappa$ is the identity. This yields a persistence diagram which is subdivided into ``ordinary,'' ``relative,'' and ``extended'' subdiagrams corresponding respectively to summands (\ref{interval}) in which the nonzero terms all appear in the first half of the diagram, all appear in the second half of the diagram, or bridge the two halves (and thus include a one-dimensional summand of $H_k(X;\kappa)$). The ordinary subdiagram evidently corresponds to the finite-length part of the sublevel barcode of $f$, and by the EP Symmetry Corollary in \cite{CDM} the relative subdiagram corresponds to the finite-length part of the sublevel barcode of $-f$.
We claim that there is a straightforward correspondence between the extended subdiagram and doubly-orthogonal bases for the filtered matched pair (\ref{introupdown}). Indeed, for $i=1,\ldots,m$ let $a_i$ be the unique critical value for $f$ lying in the interval $(s_{i-1},s_i)$, and consider a summand (\ref{interval}) of (\ref{essdiag}) whose first nonzero term lies in $H_k(X^{\leq s_i};\kappa)$ and whose last nonzero term lies in $H_k(X,X_{\geq s_j};\kappa)$. Let $h$ generate the $H_k(X;\kappa)$-term of this summand. Then (since $s_i\in (a_i,a_{i+1})$ and the image of $H_k(X^{\leq s};\kappa)$ in $H_k(X;\kappa)$ is the same for all $s\in (a_i,a_{i+1})$) we see that $a_i$ is the infimal value of $s$ with the property that $h\in \mathrm{Im}(H_k(X^{\leq s};\kappa)\to H_k(X;\kappa))$, \emph{i.e.} that $\mu_{\uparrow}^{f}(h)=a_i$. Similarly $a_{j}$ is the supremum of all values of $s$ such that $h\in \ker(H_k(X;\kappa)\to H_k(X,X_{\geq s};\kappa))$; by the exactness of the homology exact sequences of the pairs $(X,X_{\geq s})$ this is equivalent to the statement that $\mu^{\downarrow}_{f}(h)=a_j$.
Now given a decomposition of (\ref{essdiag}) into interval modules, let $e_1,\ldots,e_d$ denote generators for the $H_k(X;\kappa)$-terms of those summands in the decomposition for which the $H_k(X;\kappa)$ term is nontrivial (\emph{i.e.}, of those summands that contribute to the extended subdiagram). Thus, for $\ell=1,\ldots,d$, the values $\mu_{\uparrow}^{f}(e_{\ell})$ and $\mu^{\downarrow}_{f}(e_{\ell})$ can be read off in the manner described above from the first and last nonzero terms of the summand corresponding to $e_{\ell}$. More generally, if we apply the same reasoning to an arbitrary linear combination of the $e_{\ell}$, we see that \[ \mu_{\uparrow}^{f}\left(\sum_{i=1}^{d}c_ie_i\right)=\max\left\{\left.\mu_{\uparrow}^{f}(e_i)\right| c_i\neq 0\right\}\quad\mbox{and}\quad \mu^{\downarrow}_{f}\left(\sum_{i=1}^{d}c_ie_i\right)=\min\left\{\left.\mu^{\downarrow}_{f}(e_i)\right| c_i\neq 0\right\} ,\] \emph{i.e.} that $\{e_1,\ldots,e_d\}$ is a doubly-orthogonal basis. Thus the extended subdiagram for (\ref{essdiag}) corresponds directly to the basis spectrum of (\ref{introupdown}), and hence to the essential barcode of $\mathcal{N}(f)$. In view of this, basis spectra of arbitrary filtered matched pairs might be regarded as generalizing the extended subdiagram of extended persistence, and then Theorem \ref{introiso} could be seen as a Novikov-theoretic extension of the EP Equivalence Theorem (between extended and interlevel persistence) from \cite{CDM}.
\subsection{Novikov homology}
Let $X$ be a $\kappa$-oriented $n$-dimensional smooth manifold, let $\xi\in H^1(X;\mathbb{R})$, and let $\pi\colon\thinspace\tilde{X}\to X$ be the smallest covering space such that $\pi^*\xi=0$. The deck transformation group of $\tilde{X}$ is then naturally isomorphic to the subgroup $\Gamma=\{\langle\xi,a\rangle|a\in H_1(X;\mathbb{Z})\}$ of $\mathbb{R}$, and we use this group $\Gamma$ (along with the field $\kappa$) to form the ring $\Lambda$ and the fields $\Lambda_{\uparrow},\Lambda^{\downarrow}$ as in Section \ref{basic}. For $\tilde{f}\colon\thinspace\tilde{X}\to\mathbb{R}$ a Morse function whose exterior derivative $d\tilde{f}$ is the pullback by $\pi$ of a de Rham representative of $\xi$, we recall in Section \ref{novsect} the $\Lambda_{\uparrow}$-Floer-type complex $\mathbf{CN}(\tilde{f};\xi)_{\uparrow}$. The homology of this complex, $\mathbf{HN}_{*}(\tilde{f};\xi)$, comes equipped with a filtration function $\rho^{\tilde{f}}_{\uparrow}$ defined as usual by taking infima of filtration levels of representing cycles.
The deck transformations of the covering space $\tilde{X}$ make its $\kappa$-coefficient singular homology into a graded module over $\Lambda=\kappa[\Gamma]$; we write this homology as $H_*(\tilde{X};\Lambda)$ to emphasize this module structure. As originally shown in \cite{Lat}, \cite{Pa95}, there is a natural isomorphism $\Lambda_{\uparrow}\otimes_{\Lambda}H_*(\tilde{X};\Lambda)\cong\mathbf{HN}_{*}(\tilde{f};\xi)$. Let us write $\phi_{\tilde{f}}\colon\thinspace H_*(\tilde{X};\Lambda)\to\mathbf{HN}_{*}(\tilde{f};\xi)$ for the composition of this isomorphism with the coefficient extension map $H_*(\tilde{X};\Lambda)\to \Lambda_{\uparrow}\otimes_{\Lambda}H_*(\tilde{X};\Lambda)$. (Then $\phi_{\tilde{f}}$ is the map induced on homology by the ``Latour map'' of Section \ref{novsect}.) If $\Gamma\neq \{0\}$ (so that $\Lambda_{\uparrow}$ includes infinite sums that do not lie in $\Lambda$) this map will not be surjective unless its codomain vanishes; it also will typically not be injective, as its kernel is the $\Lambda$-torsion submodule of $H_*(\tilde{X};\Lambda)$.
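A minimal example of this failure of injectivity, included for orientation: let $X=S^1$, let $\xi\in H^1(S^1;\mathbb{R})$ satisfy $\langle\xi,[S^1]\rangle=1$, so that $\Gamma=\mathbb{Z}$ and $\tilde{X}=\mathbb{R}$, and let $\tilde{f}$ be the identity map of $\mathbb{R}$ (a Morse function with no critical points). Then $\mathbf{HN}_{*}(\tilde{f};\xi)=0$, while $H_0(\tilde{X};\Lambda)\cong\Lambda/(T-1)$ is a copy of $\kappa$ on which the deck transformations, and hence $T$, act trivially. This is consistent with the isomorphism of \cite{Lat},\cite{Pa95}, since $T-1$ becomes invertible in $\Lambda_{\uparrow}$ (with inverse $-\sum_{n\geq 0}T^n$) and hence $\Lambda_{\uparrow}\otimes_{\Lambda}H_0(\tilde{X};\Lambda)=0$; here the kernel of $\phi_{\tilde{f}}$ is all of $H_0(\tilde{X};\Lambda)$, which is indeed entirely $\Lambda$-torsion.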
The Poincar\'e--Novikov structure $\mathcal{N}(\tilde{f};\xi)$ associated to $\tilde{f}$---obtained by passing to homology from the chain-level Poincar\'e--Novikov structure $\mathcal{CN}(\tilde{f};\xi)$ from Section \ref{novsect}---is then given by:
\begin{itemize} \item the map $\phi_{\tilde{f}}\colon\thinspace H_*(\tilde{X};\Lambda)\to \mathbf{HN}_{*}(\tilde{f};\xi)$;
\item the aforementioned filtration function $\rho_{\uparrow}^{\tilde{f}}$ on $\mathbf{HN}_{*}(\tilde{f};\xi)$; and \item for each $k$, the map $\mathcal{D}_k\colon\thinspace H_k(\tilde{X};\Lambda)\to {}^{\vee}H_{n-k}(\tilde{X};\Lambda)$, where the notation ${}^{\vee}$ is defined in Section \ref{dualsec}, such that for $a\in H_k(\tilde{X};\Lambda)$ and $b\in H_{n-k}(\tilde{X};\Lambda)$, $\langle a,b\rangle:=(\mathcal{D}_ka)(b)\in \Lambda$ is the intersection pairing of $a$ with $b$ in the sense of \cite[Definition 4.66]{Ran}.
\end{itemize}
In other words, writing elements of $\Lambda=\kappa[\Gamma]$ as $\sum_{g\in \Gamma}c_gT^g$, we have $\langle a,b\rangle=\sum_{g\in\Gamma}\left((T^ga)\cap b\right)T^g$, with $\left((T^ga)\cap b\right)\in\kappa$ denoting the signed count of intersections of the image under the deck transformation corresponding to $g$ of a generic representative of $a$ with a generic representative of $b$. The map \[ \langle\cdot,\cdot\rangle \colon\thinspace H_k(\tilde{X};\Lambda)\times H_{n-k}(\tilde{X};\Lambda)\to \Lambda \] is \emph{sesquilinear} in that it is biadditive and obeys, for $\lambda,\mu\in \Lambda$, \[ \langle\lambda a,\mu b\rangle=\bar{\lambda}\mu\langle a,b\rangle\] where the conjugation operation $\lambda\mapsto\bar{\lambda}$ is defined in Section \ref{conjsect}. Note that the latter property illustrates that this pairing cannot be extended to a pairing on $\Lambda_{\uparrow}\otimes_{\Lambda}H_*(\tilde{X};\Lambda)$ (or on $\mathbf{HN}_{*}(\tilde{f};\xi)$ to which it is isomorphic): if $\lambda,\mu\in \Lambda_{\uparrow}\setminus \Lambda$, then while there is an element $\bar{\lambda}$ of the field $\Lambda^{\downarrow}$, there will usually be no way of making sense of the product $\bar{\lambda}\mu$. The fact that the Poincar\'e pairing cannot be defined directly on Novikov homology is part of the motivation for the way that we have defined Poincar\'e--Novikov structures.
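For a concrete instance of this phenomenon (with $\Gamma=\mathbb{Z}$; note that, compatibly with the sesquilinearity formula just displayed, the conjugation sends $T^g$ to $T^{-g}$): if $\lambda=\mu=\sum_{n\geq 0}T^n\in\Lambda_{\uparrow}$, then $\bar{\lambda}=\sum_{n\geq 0}T^{-n}\in\Lambda^{\downarrow}$, and the coefficient of $T^0$ in a putative product $\bar{\lambda}\mu$ would be the divergent sum $\sum_{n\geq 0}1$; so $\bar{\lambda}\mu$ cannot be defined.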
Following the procedure of Section \ref{pnsect} then yields, for each $k$, a filtered matched pair $\mathcal{P}(\mathcal{N}(\tilde{f};\xi))_{k}$ which Proposition \ref{downup} (after passing to homology) identifies with \begin{equation}\label{novupdown} \xymatrix{ & (\overline{\mathbf{HN}_{k}(-\tilde{f};-\xi)},\rho^{\downarrow}_{\tilde{f}}) \\ H_k(\tilde{X};\Lambda) \ar[ru]^{\phi_{-\tilde{f}}} \ar[rd]_{\phi_{\tilde{f}}} & \\ & (\mathbf{HN}_k(\tilde{f};\xi),\rho_{\uparrow}^{\tilde{f}}) } \end{equation} Here we write $\rho^{\downarrow}_{\tilde{f}}$ for $(-\rho_{\uparrow}^{-\tilde{f}})$, and the notation $\overline{\mathbf{HN}_{k}(-\tilde{f};-\xi)}$ refers to the conjugation functor defined in Section \ref{conjsect}, which converts $\Lambda_{\uparrow}$-vector spaces into $\Lambda^{\downarrow}$-vector spaces. The space $\overline{\mathbf{HN}_{k}(-\tilde{f};-\xi)}$ can be more directly interpreted in terms of $\tilde{f}$ as the $k$th homology of a $\Lambda^{\downarrow}$-Floer-type complex $\mathbf{CN}(\tilde{f};\xi)^{\downarrow}$ with differential that counts positive gradient flowlines of $\tilde{f}$, just as $\mathbf{HN}_k(\tilde{f};\xi)$ is the $k$th homology of a $\Lambda_{\uparrow}$-Floer-type complex $\mathbf{CN}(\tilde{f};\xi)_{\uparrow}$ whose differential counts negative gradient flowlines of $\tilde{f}$.
If $\Gamma$ is discrete, Theorem \ref{basistheorem} shows that each filtered matched pair $\mathcal{P}(\mathcal{N}(\tilde{f};\xi))_k$ admits a doubly-orthogonal basis $\{e_1,\ldots,e_d\}\subset H_k(\tilde{X};\Lambda)$, yielding an essential barcode (Definition \ref{pnbar}) for $\mathcal{N}(\tilde{f};\xi)$ consisting of intervals modulo $\Gamma$-translation $\left[\rho_{\uparrow}^{\tilde{f}}(\phi_{\tilde{f}}e_i),\rho_{\tilde{f}}^{\downarrow}(\phi_{-\tilde{f}}e_i)\right]^{\Gamma}$ or $\left(\rho_{\tilde{f}}^{\downarrow}(\phi_{-\tilde{f}}e_i),\rho_{\uparrow}^{\tilde{f}}(\phi_{\tilde{f}}e_i)\right)^{\Gamma}$ (depending on which of $\rho_{\uparrow}^{\tilde{f}}(\phi_{\tilde{f}}e_i),\rho_{\tilde{f}}^{\downarrow}(\phi_{-\tilde{f}}e_i)$ is larger). In view of the discussion in Section \ref{connext} this essential barcode could be seen as providing a Novikov-theoretic analogue of the extended subdiagram of extended persistence. The bars in the essential barcode of $\mathcal{N}(\tilde{f};\xi)$ represent contributions to the homologies of interlevel sets of $\tilde{f}$ according to Theorems \ref{bigdecomp} and \ref{bigiso}.
Regardless of whether $\Gamma$ is discrete, the Poincar\'e--Novikov structure $\mathcal{N}(\tilde{f};\xi)$ has associated gaps $\mathcal{G}_{k,i}(\mathcal{N}(\tilde{f};\xi))$ (Definitions \ref{pngap} and \ref{gapdfn}), whose absolute values are equal to the lengths of the bars in the essential barcode in case the latter is defined. Given two functions $\tilde{f}_0,\tilde{f}_1\colon\thinspace \tilde{X}\to \mathbb{R}$ of the type being considered in this section, a standard argument based on an estimate as in (\ref{filtchange}) for the behavior of the filtrations with respect to the usual continuation maps $\mathbf{CN}(\tilde{f}_0;\xi)_{\uparrow}\to \mathbf{CN}(\tilde{f}_1;\xi)_{\uparrow}$ and $\mathbf{CN}(\tilde{f}_1;\xi)_{\uparrow}\to \mathbf{CN}(\tilde{f}_0;\xi)_{\uparrow}$ implies that there is a $\|\tilde{f}_1-\tilde{f}_0\|_{C^0}$-morphism (in the sense of Definition \ref{pntmorph}) between $\mathcal{N}(\tilde{f}_0;\xi)$ and $\mathcal{N}(\tilde{f}_1;\xi)$. Via Corollaries \ref{pngapstab} and \ref{pnstab} of Theorem \ref{introstab}, this implies stability results for the gaps and (if $\Gamma$ is discrete) essential barcodes of $\mathcal{N}(\tilde{f}_0;\xi)$ and $\mathcal{N}(\tilde{f}_1;\xi)$ analogous to \cite[LZZ Stability Theorem]{CDM}, \cite[Theorem 1.2]{B18}, and \cite[Theorem 1.3]{B1}.
For $h\in H_*(\tilde{X};\Lambda)$, the values $\rho^{\downarrow}_{\tilde{f}}(\phi_{-\tilde{f}}h)$ and $\rho_{\uparrow}^{\tilde{f}}(\phi_{\tilde{f}}h)$ are, respectively, ``maximin'' and ``minimax'' quantities describing the levels of the function $\tilde{f}$ at which $h$ can be represented in the Novikov chain complexes of $-\tilde{f}$ and $\tilde{f}$. As a sample of the kind of information that can be read from our techniques, we prove:
\begin{cor} With notation as above, suppose that $\Gamma=\mathbb{Z}$, so that $\tilde{f}\colon\thinspace \tilde{X}\to \mathbb{R}$ is a lift of a Morse function $f\colon\thinspace X\to \mathbb{R}/\mathbb{Z}$. Let $m\in \mathbb{N}$, and suppose that for each $k\in \mathbb{Z}$ there is a subset $S_k\subset H_k(\tilde{X};\Lambda)$, independent over $\Lambda$, such that for each $h\in S_k$ we have $\rho^{\downarrow}_{\tilde{f}}(\phi_{-\tilde{f}}h)-\rho_{\uparrow}^{\tilde{f}}(\phi_{\tilde{f}}h)\geq m$. Also let $\tau_k$ denote the dimension (over the field $\kappa$) of the $\Lambda$-torsion submodule $tH_k(\tilde{X};\Lambda)$ of $H_k(\tilde{X};\Lambda)$. Then for each regular value $\theta$ of $f$ we have \[ \dim_{\kappa}H_k(f^{-1}(\{\theta\});\kappa)\geq \tau_{k}+m\left(\#S_k+\#S_{n-k-1}\right).\]
\end{cor}
\begin{proof}
If $\theta'\in\mathbb{R}$ projects to $\theta\in \mathbb{R}/\mathbb{Z}$ then $\theta'$ is a regular value of $\tilde{f}$ and the covering projection sends $\tilde{f}^{-1}(\{\theta'\})$ diffeomorphically to $f^{-1}(\{\theta\})$, so it suffices to prove the inequality with $f^{-1}(\{\theta\})$ replaced by $\tilde{f}^{-1}(\{\theta'\})$. By Theorem \ref{bigiso}, $H_k(\tilde{f}^{-1}(\{\theta'\});\kappa)$ is isomorphic to what is there denoted by $\mathbb{H}_k(\mathcal{CP}(\mathcal{CN}(\tilde{f};\xi)))_{-\theta',\theta'}$. By Theorem \ref{bigdecomp} (and Definition \ref{fullbar}), $\dim_{\kappa}\mathbb{H}_k(\mathcal{CP}(\mathcal{CN}(\tilde{f};\xi)))_{-\theta',\theta'}$ is equal to the sum of $\tau_k$ and, for each interval-mod-$\mathbb{Z}$-translation $I^{\mathbb{Z}}$ in the degree-$k$ full barcode of $\mathcal{CP}(\mathcal{CN}(\tilde{f};\xi))$, the number of $g\in \mathbb{Z}$ such that $\theta'+g\in I$. This degree-$k$ full barcode includes the degree-$k$ essential barcode of $\mathcal{N}(\tilde{f};\xi)$.
Let us write $s_k=\#S_k$. By hypothesis and Definition \ref{gapdfn}, for $i=1,\dots,s_k$ the gap $\mathcal{G}_{k,i}(\mathcal{N}(\tilde{f};\xi))$ is greater than or equal to $m$. It then follows from Proposition \ref{gapchar} that the degree-$k$ essential barcode of $\mathcal{N}(\tilde{f};\xi)$ includes at least $s_k$ elements $I^{\Gamma}$, counted with multiplicity, for which $I$ is a closed interval of length at least $m$. Reversing the roles of $k$ and $n-k-1$, it follows from the duality result Corollary \ref{pndual} that the degree-$k$ essential barcode of $\mathcal{N}(\tilde{f};\xi)$ also includes at least $s_{n-k-1}$ elements $I^{\Gamma}$ for which $I$ is an open interval of length at least $m$. Since the endpoints of any interval in the essential barcode are necessarily critical values of $\tilde{f}$,\footnote{This follows, even in the more subtle case where $\Gamma$ is dense, from \cite[Theorem 1.4]{U08}, since these endpoints are spectral numbers for Floer-type complexes all of whose nonzero elements have filtration equal to critical values of $\tilde{f}$.} regardless of whether such an interval $I$ is open or closed, if its length is at least $m$ then it must contain at least $m$ of the regular values $\theta'+g$ as $g$ ranges through $\mathbb{Z}$. Thus from these (at least) $(s_k+s_{n-k-1})$-many intervals of length at least $m$ in the degree-$k$ full barcode we obtain a contribution of $m(s_k+s_{n-k-1})$ to $\dim_{\kappa}H_k(\tilde{f}^{-1}(\{\theta'\});\kappa)$. Combining this with the contribution $\tau_{k}$ mentioned in the previous paragraph implies the result.
\end{proof}
In the presence of more specific information about the bars in the essential barcode of $\mathcal{N}(\tilde{f};\xi)$ one could adapt the above proof to yield a sharper result, such as one that applies to some but not all of the regular level sets of $f$.
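As a simple consistency check on the corollary, included for orientation only (and with the hypotheses satisfied vacuously): let $X=T^2=S^1\times S^1$ and let $f\colon\thinspace X\to\mathbb{R}/\mathbb{Z}$ be the projection to the second factor, so that $\tilde{X}=S^1\times\mathbb{R}$ and $\tilde{f}$ is the projection to $\mathbb{R}$, with no critical points. Each deck transformation is homotopic to the identity, so $H_0(\tilde{X};\Lambda)\cong H_1(\tilde{X};\Lambda)\cong\Lambda/(T-1)$ and $\tau_0=\tau_1=1$, while $\mathbf{HN}_{*}(\tilde{f};\xi)=0$ and the essential barcode is empty; the sets $S_k$ are necessarily empty here since the homology is entirely $\Lambda$-torsion. The corollary then asserts that $\dim_{\kappa}H_k(f^{-1}(\{\theta\});\kappa)\geq\tau_k$, which holds with equality since each fiber is a circle.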
\begin{remark}\label{burghelea} Let us compare our approach to barcodes for Novikov homology to the one taken by Burghelea in works such as \cite{B18},\cite{B1}. In both cases, the essential-barcode-type invariants can be viewed as being derived from the interaction between an ascending and a descending filtration by $\kappa$-subspaces of the graded $\Lambda$-module $H_*(\tilde{X};\Lambda)$. For Burghelea these filtrations are given by the images of the inclusion-induced maps $H_*(\tilde{X}^{\leq t};\kappa)\to H_*(\tilde{X};\Lambda)$ (for the ascending filtration) and $H_*(\tilde{X}_{\geq t};\kappa)\to H_*(\tilde{X};\Lambda)$ (for the descending filtration), see, \emph{e.g.}, the notation at the start of \cite[Section 4]{B1}. In the present work the filtrations are given by, respectively, sublevel and superlevel sets of the functions $\rho^{\tilde{f}}_{\uparrow}\circ\phi_{\tilde{f}}$ and $\rho_{\tilde{f}}^{\downarrow}\circ\phi_{-\tilde{f}}$ on $H_*(\tilde{X};\Lambda)$---said differently, by \emph{pre}images under $\phi_{\tilde{f}}$ and $\phi_{-\tilde{f}}$ of sub- and superlevel sets of the functions $\rho^{\tilde{f}}_{\uparrow}$ and $\rho_{\tilde{f}}^{\downarrow}$ defined on Novikov homology. If $\Gamma=\{0\}$ then our filtrations are the same as Burghelea's in view of (\ref{qiniso}). If $\Gamma\neq \{0\}$, the fact (and its analogue in Floer theory) that there is no natural inclusion-induced map to $H_*(\tilde{X};\Lambda)$ from the homologies of filtered subcomplexes of the Novikov complex---as the Novikov complex involves semi-infinite chains---motivated our different formulation. When $\Gamma$ is discrete, one could perhaps use \cite[Proposition 5.2]{B18}, which connects Burghelea's filtrations to analogous ones on Borel--Moore homology, to obtain a relation to our approach.
If $\Gamma$ is not discrete, the collection of our gaps $\mathcal{G}_{k,1}(\mathcal{N}(\tilde{f};\xi)),\ldots,\mathcal{G}_{k,d}(\mathcal{N}(\tilde{f};\xi))$, for $d$ equal to the maximal cardinality of a $\Lambda$-independent set in $H_k(\tilde{X};\Lambda)$, appears to be analogous to what is denoted $\mathbf{\delta}_{k}^{\omega}$ in \cite{B1}; in both cases these are multisets of real numbers with cardinality $d$ that satisfy a stability theorem (Corollary \ref{pngapstab} for the $\mathcal{G}_{k,i}$, \cite[Theorem 1.3]{B1} for $\mathbf{\delta}_{k}^{\omega}$). While one might speculate that $\mathbf{\delta}_{k}^{\omega}$ consists precisely of our $\mathcal{G}_{k,i}$, any attempt to prove this lies beyond the scope of this paper. Part of the difficulty is that, since in the indiscrete case the function $\tilde{f}$ will not be proper, the connection between homologies of filtered subcomplexes of the Novikov complex and homologies of sublevel sets of $\tilde{f}$ is less clear.
In the indiscrete case, we also leave open the question of a duality result for the $\mathcal{G}_{k,i}(\mathcal{N}(\tilde{f};\xi))$, which would assert that (modulo reordering) the $\mathcal{G}_{k,i}$ are the negatives of the $\mathcal{G}_{n-k,i}$. (If $\Gamma$ is discrete this follows from Corollary \ref{pndual}. In general, Corollary \ref{weakdual} implies that the $\mathcal{G}_{k,i}$ are bounded above by the negatives of the $\mathcal{G}_{n-k,i}$ after reordering.) There is a corresponding question of whether Burghelea's $\mathbf{\delta}_{k}^{\omega}$ is obtained by negating the elements of $\mathbf{\delta}_{n-k}^{\omega}$; by \cite[Theorem 1.4]{B20b} this is true if $\mathbf{\delta}_{n-k}^{\omega}$ is replaced by a Borel--Moore analogue ${}^{BM}\!\mathbf{\delta}_{n-k}^{\omega}$, but in the indiscrete case it is apparently unknown whether $\mathbf{\delta}_{n-k}^{\omega}={}^{BM}\!\mathbf{\delta}_{n-k}^{\omega}$.
\end{remark}
\subsection{Other filtered matched pairs}
The preceding two subsections have discussed constructions of Poincar\'e--Novikov structures $\mathcal{N}$, which give rise to essential barcodes via a general procedure that first associates to $\mathcal{N}$ filtered matched pairs $\mathcal{P}(\mathcal{N})_k$ and then associates basis spectra to the latter. One can also consider filtered matched pairs that do not arise from Poincar\'e--Novikov structures; these still have gaps and (if $\Gamma$ is discrete) basis spectra that satisfy the stability result Theorem \ref{introstab}, though they will not typically satisfy symmetry results such as Corollary \ref{pndual}.
For example, if one drops the hypothesis in the two preceding subsections that $X$ is $\kappa$-oriented, then due to sign issues one will no longer have the same type of Poincar\'e intersection pairing, so the construction of $\mathcal{N}(\tilde{f};\xi)$ fails, but one can still form the Novikov complexes $\mathbf{CN}(\pm \tilde{f};\pm \xi)_{\uparrow}$ and the filtered matched pairs (\ref{novupdown}). As the proof of Theorem \ref{introiso} is based on a persistence module (denoted in Section \ref{isosect} by $\mathcal{H}_{k}(\tilde{f})$) that is constructed directly from (\ref{novupdown}) via (\ref{hkdef}), and does not use the orientation of $X$ after $\mathcal{H}_{k}(\tilde{f})$ has been introduced, if $\Gamma$ is discrete we will have the same relation between the basis spectrum of (\ref{novupdown}) and homologies of interlevel sets as in the previous subsection.
Another small variation on the preceding subsection would be to consider two different Morse functions $\tilde{f},\tilde{g}\colon\thinspace \tilde{X}\to \mathbb{R}$, both having derivatives equal to pullbacks of de Rham representatives of the same cohomology class $\xi$, and, for any grading $k$, form the filtered matched pair \[ \xymatrix{ & (\overline{\mathbf{HN}_{k}(-\tilde{g};-\xi)},\rho^{\downarrow}_{\tilde{g}}) \\ H_k(\tilde{X};\Lambda) \ar[ru]^{\phi_{-\tilde{g}}} \ar[rd]_{\phi_{\tilde{f}}} & \\ & (\mathbf{HN}_k(\tilde{f};\xi),\rho_{\uparrow}^{\tilde{f}}) } \] The basis spectrum of this would (at least if $\Gamma$ is discrete) encode homological information about how the sublevel sets of $\tilde{f}$ interact with the superlevel sets of $\tilde{g}$. As described in Remark \ref{twostab}, the stability theorem can be refined to give information about how the basis spectrum varies as $\tilde{f}$ and $\tilde{g}$ are varied independently of each other.
A setting that is closer to standard constructions in computational topology involves a polyhedron (\emph{i.e.}, geometric realization of a finite simplicial complex) $X$ together with a PL function $f\colon\thinspace X\to \mathbb{R}$. The simplicial chain complex $\Delta_{*}(X;\kappa)$ carries an ascending filtration function $\ell_{\uparrow}$ and a descending filtration function $\ell^{\downarrow}$ defined by sending a simplicial chain to the maximal (resp. minimal) value of $f$ on the chain. Thus for any $t\in \mathbb{R}$ the subcomplex $\Delta_{*}^{\leq t}(X;\kappa)=\{a\in\Delta_*(X;\kappa)|\ell_{\uparrow}(a)\leq t\}$ is the simplicial chain complex of the subpolyhedron of $X$ consisting of simplices that are entirely contained in the sublevel set $X^{\leq t}$; by an argument on \cite[p. 135]{EH} this subpolyhedron is a deformation retract of $X^{\leq t}$. An analogous remark applies to relate the filtration given by $\ell^{\downarrow}$ to the superlevel sets of $f$.
This fits into our general theory of chain-level filtered matched pairs with $\Gamma=\{0\}$, taking all three chain complexes $\mathcal{C},\mathcal{C}_{\uparrow},\mathcal{C}^{\downarrow}$ in Definition \ref{cfmpdfn} equal to $\Delta_{*}(X;\kappa)$ (with the filtrations on $\mathcal{C}_{\uparrow}$ and $\mathcal{C}^{\downarrow}$ being given by $\ell_{\uparrow}$ and $\ell^{\downarrow}$ respectively), and taking the maps $\phi_{\uparrow},\phi^{\downarrow}$ both equal to the identity. One can verify that the full barcode (Definition \ref{fullbar}) of this chain-level filtered matched pair coincides with the interlevel barcode of $f$, for instance by passing through an identification with extended persistence along the lines of Section \ref{connext}.
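As a concrete instance (an ad hoc example, included for orientation), let $X$ be the boundary of a $2$-simplex with vertices $v_0,v_1,v_2$, and let $f$ be the PL function with $f(v_i)=i$. The kernel of $\partial_1$ on $\Delta_1(X;\kappa)$ is spanned by the cycle $c=[v_0,v_1]+[v_1,v_2]-[v_0,v_2]$, which has $\ell_{\uparrow}(c)=2$ and $\ell^{\downarrow}(c)=0$, so the induced filtration functions on homology satisfy $\rho_{\uparrow}([c])=2$ and $\rho^{\downarrow}([c])=0$; meanwhile the generator of $H_0(X;\kappa)$ has $\rho_{\uparrow}=0$ (represented by $v_0$) and $\rho^{\downarrow}=2$ (represented by $v_2$). The full barcode thus consists of a closed bar $[0,2]$ in degree $0$ and an open bar $(0,2)$ in degree $0$ (the latter arising from $H_1$), apart from zero-length half-open bars, matching the fact that $f^{-1}(\{t\})$ consists of two points for every $t\in(0,2)$.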
If we instead let $f\colon\thinspace X\to S^1$ be a PL function on a polyhedron (\emph{i.e.}, the restriction of $f$ to each simplex $\Delta\subset X$ should be the composition of an affine map $\Delta\to \mathbb{R}$ and the projection $\mathbb{R}\to S^1=\mathbb{R}/\mathbb{Z}$), then for a suitable infinite cyclic cover $\tilde{X}\to X$, $f$ lifts to a PL map $\tilde{f}\colon\thinspace \tilde{X}\to \mathbb{R}$. The simplicial chain complex $\Delta_{*}(\tilde{X};\kappa)$ of the (infinite) simplicial complex $\tilde{X}$ is then a chain complex of free finite-rank modules over $\Lambda=\kappa[\mathbb{Z}]$. We obtain a chain-level filtered matched pair (with $\Gamma=\mathbb{Z}$) by setting $\mathcal{C}=\Delta_{*}(\tilde{X};\kappa)$, $\mathcal{C}_{\uparrow}=\Lambda_{\uparrow}\otimes_{\Lambda}\mathcal{C}$, and $\mathcal{C}^{\downarrow}=\Lambda^{\downarrow}\otimes_{\Lambda}\mathcal{C}$, with $\phi_{\uparrow},\phi^{\downarrow}$ each given by coefficient extension. As before, the filtration functions $\ell_{\uparrow},\ell^{\downarrow}$ are defined by taking, respectively, maxima and minima of $f$ on chains. The resulting full barcode should agree with the one considered in \cite{BD},\cite{BH17}, though our method for computing it---based on the doubly-orthogonal basis constructed in the proof of Theorem \ref{basistheorem}---is rather different from the algorithm from \cite[Section 6]{BD}.
\section{Some algebraic conventions and definitions}\label{basic}
We now begin to give more precise explanations of the algebraic ingredients in this work.
In the background throughout what follows---and often suppressed from the notation for brevity---are:
\begin{itemize} \item a field $\kappa$ (which serves as a homology coefficient ring, though in some cases the appropriate ring will be an extension of $\kappa$) and
\item a finitely generated additive subgroup $\Gamma$ of $\mathbb{R}$.\end{itemize}
In motivating examples, the group $\Gamma$ is (isomorphic to) the deck transformation group of a covering space $p\colon\thinspace \tilde{X}\to X$, and we wish to understand the persistence theory of functions $\tilde{f}\colon\thinspace \tilde{X}\to \mathbb{R}$ that arise as lifts of functions $f\colon\thinspace X\to \mathbb{R}/\Gamma$. The case that $\Gamma=\{0\}$ corresponds to classical Morse theory, or in the situation of Section \ref{floersect} to Hamiltonian Floer theory on weakly exact or positively or negatively monotone symplectic manifolds. Readers who are (at least on first reading) inclined to confine themselves to this case might skim the present section with the understanding that, when $\Gamma=\{0\}$, the objects denoted below by $\Lambda,\Lambda_{\uparrow},\Lambda^{\downarrow},\mathcal{Q}(\Lambda),$ and $\Lambda_{\updownarrow}$ are all simply equal to the field $\kappa$, and the conjugation operation discussed in Section \ref{conjsect} is the identity, as a result of which many of the results of this section are vacuous in this special case.
With ``$T$'' denoting a formal variable, we consider the following rings containing $\kappa$ (with addition and multiplication given by the obvious generalizations of the corresponding operations on polynomials or power series):
\begin{enumerate}\item \[ \Lambda = \left\{\left.\sum_{g\in \Gamma}a_gT^g\right|a_g\in \kappa,\, \#\{g|a_g\neq 0\}<\infty\right\} \]
\item \[ \Lambda_{\uparrow}=\left\{\left.\sum_{g\in \Gamma}a_gT^g\right|a_g\in \kappa,\, (\forall C\in \mathbb{R})\left(\#\{g|a_g\neq 0\mbox{ and }g<C\}<\infty\right)\right\} \]
\item \[ \Lambda^{\downarrow} = \left\{\left.\sum_{g\in \Gamma}a_gT^g\right|a_g\in \kappa,\, (\forall C\in \mathbb{R})\left(\#\{g|a_g\neq 0\mbox{ and }g>-C\}<\infty\right)\right\} \]\end{enumerate}
The ring $\Lambda$ is just the group algebra $\kappa[\Gamma]$ of $\Gamma$ over $\kappa$. Since $\Gamma$ (being a finitely generated subgroup of $\mathbb{R}$) is isomorphic to $\mathbb{Z}^d$ for some $d\geq 0$, we see that $\Lambda$ is isomorphic as a $\kappa$-algebra to the multivariable Laurent polynomial algebra $\kappa[t_1,t_{1}^{-1},\ldots,t_d,t_{d}^{-1}]$. In particular, $\Lambda$ is an integral domain.
The rings $\Lambda_{\uparrow}$ and $\Lambda^{\downarrow}$ are each \emph{fields}, as can be seen for instance from \cite[Theorem 4.1]{HS}. Since $\Lambda$ is a subring both of $\Lambda_{\uparrow}$ and of $\Lambda^{\downarrow}$, if $\mathcal{Q}(\Lambda)$ denotes the field of fractions of $\Lambda$, the inclusions of $\Lambda$ into $\Lambda_{\uparrow}$ and $\Lambda^{\downarrow}$ extend uniquely to field extensions $\mathcal{Q}(\Lambda)\hookrightarrow \Lambda_{\uparrow}$ and $\mathcal{Q}(\Lambda)\hookrightarrow \Lambda^{\downarrow}$. Like any field of fractions of an integral domain, $\mathcal{Q}(\Lambda)$ is flat as a $\Lambda$-module (apply \cite[Theorem 4.80]{Rot} with the multiplicative set $S$ equal to the set of all nonzero elements of $\Lambda$). So since field extensions are likewise flat as modules, it follows that the inclusions $i_{\uparrow}\colon\thinspace \Lambda\hookrightarrow \Lambda_{\uparrow}$ and $i^{\downarrow}\colon\thinspace \Lambda\hookrightarrow\Lambda^{\downarrow}$ make $\Lambda_{\uparrow}$ and $\Lambda^{\downarrow}$ into flat $\Lambda$-modules. Related to this, we have:
\begin{prop}\label{kerext}
Let $M$ be any (left) $\Lambda$-module. Then the coefficient extension maps $ i_{\uparrow}\otimes 1_M\colon\thinspace M\to \Lambda_{\uparrow}\otimes_{\Lambda}M$ and $ i^{\downarrow}\otimes 1_M\colon\thinspace M\to \Lambda^{\downarrow}\otimes_{\Lambda}M$ each have kernel equal to the $\Lambda$-torsion submodule \[ tM:=\{m\in M|(\exists \lambda\in \Lambda\setminus\{0\})(\lambda m=0)\}.\]
\end{prop}
\begin{proof}
The two cases are identical so we just consider the case of $i_{\uparrow}\otimes 1_M$. This map can be written as a composition \[ M=\Lambda\otimes_{\Lambda}M\to \mathcal{Q}(\Lambda)\otimes_{\Lambda}M\to \Lambda_{\uparrow}\otimes_{\Lambda}M \] of maps obtained by tensoring the identity $1_M$ with the injections $\Lambda\hookrightarrow \mathcal{Q}(\Lambda)\hookrightarrow \Lambda_{\uparrow}$. The map $ M\to \mathcal{Q}(\Lambda)\otimes_{\Lambda}M$ has kernel equal to $tM$ by \cite[Proposition 4.78]{Rot}, while the map $\mathcal{Q}(\Lambda)\otimes_{\Lambda}M\to \Lambda_{\uparrow}\otimes_{\Lambda}M$ has trivial kernel since $\mathcal{Q}(\Lambda)\hookrightarrow \Lambda_{\uparrow}$ is a field extension.
\end{proof}
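It may help to make these rings concrete in the simplest nontrivial case $\Gamma=\mathbb{Z}$: there $\Lambda=\kappa[T,T^{-1}]$, while $\Lambda_{\uparrow}$ is the field $\kappa((T))$ of formal Laurent series with finitely many negative-exponent terms, and $\Lambda^{\downarrow}$ is $\kappa((T^{-1}))$. For example, $1-T$ is not invertible in $\Lambda$, but \[ (1-T)^{-1}=1+T+T^{2}+\cdots\quad\mbox{in }\Lambda_{\uparrow},\qquad (1-T)^{-1}=-T^{-1}-T^{-2}-T^{-3}-\cdots\quad\mbox{in }\Lambda^{\downarrow},\] so $\Lambda_{\uparrow}$ and $\Lambda^{\downarrow}$ contain $\mathcal{Q}(\Lambda)$ in genuinely different ways.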
Now let us introduce, for use beginning in Section \ref{clsec}, \begin{equation}\label{updown} \Lambda_{\updownarrow}=\left\{\left.\sum_{g\in \Gamma}a_gT^g\right|a_g\in \kappa,\, (\forall C\in \mathbb{R})\left(\#\{g|a_g\neq 0\mbox{ and }|g|<C\}<\infty\right)\right\}. \end{equation} We thus have inclusions $j_{\uparrow}\colon\thinspace \Lambda_{\uparrow}\to \Lambda_{\updownarrow}$ and $j^{\downarrow}\colon\thinspace \Lambda^{\downarrow}\to \Lambda_{\updownarrow}$. Since multiplication of elements of $\Lambda_{\uparrow}$ by elements of $\Lambda^{\downarrow}$ is typically ill-defined, their common superset $\Lambda_{\updownarrow}$ is not naturally a ring (except in the case $\Gamma=\{0\}$); however there is no difficulty in making sense of addition of elements of $\Lambda_{\updownarrow}$, or of multiplication of elements of $\Lambda_{\updownarrow}$ by elements of $\Lambda$, and so $\Lambda_{\updownarrow}$ is naturally a $\Lambda$-module. Indeed we evidently have a short exact sequence of $\Lambda$-modules \begin{equation}\label{updownses} \xymatrix{0 \ar[r] & \Lambda\ar[r]^<<<<<{(i^{\downarrow},i_{\uparrow})} & \Lambda^{\downarrow}\oplus \Lambda_{\uparrow}\ar[r]^<<<<<{\,\,-j^{\downarrow}+j_{\uparrow}} & \Lambda_{\updownarrow}\ar[r]&0}\end{equation}
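To illustrate the exactness of (\ref{updownses}) on the right when $\Gamma=\mathbb{Z}$, the element $\sum_{n\in\mathbb{Z}}T^{n}$ of $\Lambda_{\updownarrow}$ is the image under $-j^{\downarrow}+j_{\uparrow}$ of the pair $\left(-\sum_{n<0}T^{n},\,\sum_{n\geq 0}T^{n}\right)\in \Lambda^{\downarrow}\oplus\Lambda_{\uparrow}$, and any two such pairs differ by $(\lambda,\lambda)$ for some $\lambda\in\Lambda$, \emph{i.e.} by an element of the image of $(i^{\downarrow},i_{\uparrow})$, in accordance with exactness in the middle.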
\begin{prop}\label{tor} For any left $\Lambda$-module $M$ we have a natural isomorphism of $\Lambda$-modules $\mathrm{Tor}^{\Lambda}_{1}(\Lambda_{\updownarrow},M)\cong tM$ where $tM$ is the $\Lambda$-torsion submodule of $M$, and $\mathrm{Tor}^{\Lambda}_{i}(\Lambda_{\updownarrow},M)=\{0\}$ for all $i>1$.
\end{prop}
\begin{proof} We use the long exact sequence on $\mathrm{Tor}^{\Lambda}_{*}(\cdot,M)$ induced by (\ref{updownses}); see for instance \cite[Corollary 6.30]{Rot}. Since $\Lambda_{\uparrow}$ and $\Lambda^{\downarrow}$ are flat $\Lambda$-modules we have $\mathrm{Tor}_{i}^{\Lambda}(\Lambda_{\uparrow}\oplus\Lambda^{\downarrow},M)=\{0\}$ for all $i\geq 1$. So part of the long exact sequence reads \[ \xymatrix{0\ar[r]& \mathrm{Tor}^{\Lambda}_{1}(\Lambda_{\updownarrow},M)\ar[r]^{\partial} & \Lambda\otimes_{\Lambda}M\ar[rr]^-{(i^{\downarrow}\otimes 1_M,i_{\uparrow}\otimes 1_M)}& & (\Lambda^{\downarrow}\otimes_{\Lambda}M)\oplus (\Lambda_{\uparrow}\otimes_{\Lambda}M) }.\] By Proposition \ref{kerext}, the last map above has kernel equal to $tM$, and so the connecting homomorphism $\partial$ maps $\mathrm{Tor}^{\Lambda}_{1}(\Lambda_{\updownarrow},M)$ isomorphically to $tM$. If $i>1$ then the exact sequence and the flatness of $\Lambda_{\uparrow}$ and $\Lambda^{\downarrow}$ imply an isomorphism $\mathrm{Tor}^{\Lambda}_{i}(\Lambda_{\updownarrow},M)\cong \mathrm{Tor}_{i-1}^{\Lambda}(\Lambda,M)$, but of course the latter vanishes since $i-1\geq 1$.
\end{proof}
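For a simple example of Proposition \ref{tor}, take $\Gamma=\mathbb{Z}$ and $M=\Lambda/(T-1)\cong\kappa$ (with $T$ acting as the identity). Then $M=tM$, so the proposition gives $\mathrm{Tor}^{\Lambda}_{1}(\Lambda_{\updownarrow},M)\cong M$; correspondingly, the coefficient extensions $\Lambda_{\uparrow}\otimes_{\Lambda}M$ and $\Lambda^{\downarrow}\otimes_{\Lambda}M$ both vanish, consistently with Proposition \ref{kerext}, since $T-1$ becomes invertible in each of the fields $\Lambda_{\uparrow}$ and $\Lambda^{\downarrow}$.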
\subsection{Conjugation}\label{conjsect}
Given our field $\kappa$ and subgroup $\Gamma\subset \mathbb{R}$, with $\Lambda=\kappa[\Gamma]$ define the \emph{conjugation} automorphism $\mathfrak{c}\colon\thinspace\Lambda\to\Lambda$ by \[ \mathfrak{c}\left(\sum_g a_gT^g\right)=\sum_g a_gT^{-g} \] In general, we will denote $\mathfrak{c}(\lambda)$ as $\bar{\lambda}$ for $\lambda\in \Lambda$. Note that $\mathfrak{c}$ extends, using the same formula as above, to a field isomorphism $\Lambda_{\uparrow}\to\Lambda^{\downarrow}$, or just as well to a field isomorphism $\Lambda^{\downarrow}\to\Lambda_{\uparrow}$, and we will continue to use the notation $\bar{\lambda}$ for the image of an element $\lambda$ of $\Lambda^{\downarrow}$ or of $\Lambda_{\uparrow}$ under these conjugation maps. While conjugation defines an isomorphism of fields $\Lambda_{\uparrow}\cong\Lambda^{\downarrow}$, if $\Gamma\neq\{0\}$ this is of course not an isomorphism of $\Lambda$-algebras (if it were, all elements of $\Lambda$ would be fixed by $\mathfrak{c}$), and issues related to this are why we will maintain the distinction between $\Lambda_{\uparrow}$ and $\Lambda^{\downarrow}$ rather than using conjugation to work exclusively over one or the other of them.
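For instance, with $\Gamma=\mathbb{Z}$ one has $\overline{1+2T^{3}}=1+2T^{-3}$, and conjugation carries the element $\sum_{n\geq 0}T^{n}$ of $\Lambda_{\uparrow}$ to the element $\sum_{n\geq 0}T^{-n}$ of $\Lambda^{\downarrow}$. In particular $\bar{T}=T^{-1}\neq T$, which makes concrete the failure of $\mathfrak{c}$ to be a map of $\Lambda$-algebras when $\Gamma\neq\{0\}$.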
\begin{remark}\label{conjrmk} For \emph{any} group $\pi$, abelian or not, the group algebra $\kappa[\pi]$ admits a conjugation map $\mathfrak{c}$ similar to that above, though if $\pi$ is not abelian (and hence $\kappa[\pi]$ is not commutative) this is not an algebra automorphism but rather an anti-automorphism in that $\mathfrak{c}(\lambda\mu)=\mathfrak{c}(\mu)\mathfrak{c}(\lambda)$. This map $\mathfrak{c}$ arises in considerations of Poincar\'e duality for regular covering spaces with deck transformation group $\pi$, \emph{cf}. \cite[Section 4.5]{Ran}. In this setting, despite the noncommutativity of $\kappa[\pi]$, a left $\kappa[\pi]$-module $M$ can be converted to a right $\kappa[\pi]$-module by defining $m\lambda=\mathfrak{c}(\lambda)m$ for $\lambda\in \kappa[\pi]$ and $m\in M$. This conjugation operation on modules remains useful in cases such as ours where the group $\pi$ (our $\Gamma$) is abelian. For this reason, contrary to the custom of making no distinction between the left and right modules of a commutative ring, a module over $\Lambda$ will often be specified to be a left module or a right module, with the understanding that if one wants to switch between these the appropriate way to do so is by conjugation, not by trivially
renaming multiplication on the left as multiplication on the right. An exception to this is that when one of the rings $\Lambda,\Lambda_{\uparrow},$ or $\Lambda^{\downarrow}$ is regarded as a module over its subring $\Lambda$, both the left and the right module structure should be interpreted as simply given by the ring multiplication, with no conjugation. Also, in vector spaces over the fields $\Lambda_{\uparrow}$ or $\Lambda^{\downarrow}$ the scalar multiplication will consistently be written as acting on the left. \end{remark}
Given a left (resp. right) $\Lambda$-module $M$, as just suggested we define its conjugate module $\bar{M}$ to be the right (resp. left) $\Lambda$-module whose underlying abelian group is the same as $M$, and with the scalar multiplication given by $m\lambda=\bar{\lambda}m$ (resp. $\lambda m=m\bar{\lambda}$). Similarly, if $V$ is a vector space over $\Lambda_{\uparrow}$, it has an associated conjugate vector space $\bar{V}$ over $\Lambda^{\downarrow}$ with the same underlying abelian group but with the new scalar multiplication given by $\bar{\lambda}v=\lambda v$, and likewise with the roles of $\Lambda_{\uparrow}$ and $\Lambda^{\downarrow}$ reversed. These operations are trivially functorial, in that if $\phi\colon\thinspace M\to N$ is a left $\Lambda$-module homomorphism (resp. $\Lambda_{\uparrow}$-vector space homomorphism) then the same function $\phi$ can equally well be regarded as a right $\Lambda$-module homomorphism (resp. $\Lambda^{\downarrow}$-vector space homomorphism) $\bar{M}\to \bar{N}$.
The proof of the following is left as an exercise to readers who wish to check their understanding of the notation.
\begin{prop}\label{flipconj}
If $M$ is a right $\Lambda$-module there is an isomorphism of $\Lambda^{\downarrow}$-vector spaces \[ \alpha\colon\thinspace \overline{M\otimes_{\Lambda}\Lambda_{\uparrow}}\to \Lambda^{\downarrow}\otimes_{\Lambda}\bar{M} \] defined by, for $m_i\in M$ and $\lambda_i\in \Lambda_{\uparrow}$, \[ \alpha\left(\sum_i m_i\otimes\lambda_i\right) = \sum_i\bar{\lambda}_i\otimes m_i.\]
\end{prop}
\subsection{Duals}\label{dualsec}
If $M$ is a left module over a not-necessarily-commutative ring $R$, then $M^*:=\mathrm{Hom}_R(M,R)$ is naturally regarded as a \emph{right} $R$-module, with scalar multiplication of an element $\phi\in M^*$ by $r\in R$ being given by, for all $m\in M$, $(\phi r)(m)=\phi(m)r$. In the case that $R=\Lambda$ (or more generally if $R$ is a group algebra) we can then convert this right module into a left module by conjugation; doing so yields the notion of module duality that is most useful for this paper.
\begin{dfn}
If $M$ is a left (resp. right) $\Lambda$-module we define its dual module ${}^{\vee}\!M$ to be the left (resp. right) $\Lambda$-module \[ {}^{\vee}\!M=\overline{\mathrm{Hom}_{\Lambda}(M,\Lambda)}.\] Similarly, if $V$ is a $\Lambda_{\uparrow}$-vector space we define ${}^{\vee}\!V$ as the $\Lambda^{\downarrow}$-vector space \[ {}^{\vee}\!V=\overline{\mathrm{Hom}_{\Lambda_{\uparrow}}(V,\Lambda_{\uparrow})}.\]
\end{dfn}
If $A\colon\thinspace M\to N$ is a morphism of $\Lambda$-modules, we obtain an adjoint ${}^{\vee}\!A\colon\thinspace {}^{\vee}\!N\to {}^{\vee}\!M$ by taking the usual transpose $\phi\mapsto \phi\circ A$ from $\mathrm{Hom}_{\Lambda}(N,\Lambda)\to \mathrm{Hom}_{\Lambda}(M,\Lambda)$ and regarding this as a map between the conjugate modules ${}^{\vee}\!N$ and ${}^{\vee}\!M$. If $M$ and $N$ happen to be free finite-rank (left) modules, with respective $\Lambda$-bases $\{e_1,\ldots,e_m\}$, $\{f_1,\ldots,f_n\}$, then the usual dual bases $\{e_{1}^{*},\ldots,e_{m}^{*}\}$,$\{f_{1}^{*},\ldots,f_{n}^{*}\}$ for the right modules $\mathrm{Hom}_{\Lambda}(M,\Lambda)$ and $\mathrm{Hom}_{\Lambda}(N,\Lambda)$ serve just as well as bases for the left modules ${}^{\vee}\!M$ and ${}^{\vee}\!N$; the conjugation however has the effect that coefficients in representations with respect to these bases are conjugated and hence the matrix representing ${}^{\vee}\!A$ with respect to the dual bases is the \emph{conjugate} transpose of the matrix representing $A$ with respect to the original bases.
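As a minimal illustration of this conjugate-transpose phenomenon, take $M=N=\Lambda$ with bases $\{e_1\}=\{f_1\}=\{1\}$, and let $A\colon\thinspace M\to N$ be multiplication by some $\lambda\in\Lambda$, so that the matrix of $A$ is $(\lambda)$. The transpose sends $f_{1}^{*}$ to $f_{1}^{*}\circ A$, which is the map $x\mapsto \lambda x$, \emph{i.e.} the element $e_{1}^{*}\lambda$ of the right module $\mathrm{Hom}_{\Lambda}(M,\Lambda)$; in the conjugated module ${}^{\vee}\!M$ this element reads $\bar{\lambda}e_{1}^{*}$, so the matrix of ${}^{\vee}\!A$ is $(\bar{\lambda})$, the conjugate transpose of $(\lambda)$.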
One of course has ${}^{\vee}\!(B\circ A)={}^{\vee}\!A\circ {}^{\vee}\!B$ for composable morphisms $A$ and $B$; in particular, if $A$ is invertible then so too is ${}^{\vee}\!A$, with $({}^{\vee}\!A)^{-1}={}^{\vee}\!(A^{-1})$. Consequently we may write ${}^{\vee}\!A^{-1}$ without fear of confusion about order of operations.
Similarly, if $A\colon\thinspace V\to W$ is a morphism of $\Lambda_{\uparrow}$-vector spaces, we obtain an adjoint morphism of $\Lambda^{\downarrow}$-vector spaces ${}^{\vee}\!A\colon\thinspace {}^{\vee}\!W\to {}^{\vee}\!V$ by applying the conjugation functor to the transpose $A^*\colon\thinspace \mathrm{Hom}_{\Lambda_{\uparrow}}(W,\Lambda_{\uparrow})\to \mathrm{Hom}_{\Lambda_{\uparrow}}(V,\Lambda_{\uparrow})$. Note that, although ${}^{\vee}\!A$ and $A^*$ are set-theoretically the same function, ${}^{\vee}\!A$ is a morphism of $\Lambda^{\downarrow}$-vector spaces while $A^{*}$ is a morphism of $\Lambda_{\uparrow}$-vector spaces.
Of course, if instead $B\colon\thinspace X\to Y$ is a morphism of $\Lambda^{\downarrow}$-vector spaces, the same construction as in the previous paragraph gives an adjoint morphism ${}^{\vee}\!B\colon\thinspace {}^{\vee}\!Y\to {}^{\vee}\!X$ of $\Lambda_{\uparrow}$-vector spaces.
If $V$ is a $\Lambda_{\uparrow}$-vector space and $v\in V$ then we obtain a $\Lambda^{\downarrow}$-linear map $\iota_v\colon\thinspace {}^{\vee}V\to \Lambda^{\downarrow}$ defined by $\iota_v(\phi)=\overline{\phi(v)}$ for $\phi\in {}^{\vee}V$. Now ${}^{\vee}\!({}^{\vee}\!V)=\overline{\mathrm{Hom}_{\Lambda^{\downarrow}}({}^{\vee}\!V,\Lambda^{\downarrow})}$, so $\iota_v$ can be regarded as an element of the $\Lambda_{\uparrow}$-vector space ${}^{\vee}\!({}^{\vee}\!V)$, and the map $\alpha_V:v\mapsto\iota_v$ is a $\Lambda_{\uparrow}$-linear map $V\to {}^{\vee}\!({}^{\vee}\!V)$. This map $\alpha_V$ is obviously injective, so if $V$ is finite-dimensional $\alpha_V$ must be a $\Lambda_{\uparrow}$-vector space isomorphism $V\cong {}^{\vee}\!({}^{\vee}\!V)$ by dimensional considerations. Indeed, if $\{e_1,\ldots,e_n\}$ is a basis for $V$ with dual basis $\{e_{1}^{*},\ldots,e_{n}^{*}\}$ for ${}^{\vee}\!V$, then the dual basis for ${}^{\vee}\!({}^{\vee}\!V)$ of $\{e_{1}^{*},\ldots,e_{n}^{*}\}$ is evidently $\{\alpha_V(e_1),\ldots,\alpha_V(e_n)\}$.
If $A\colon\thinspace V\to W$ is a morphism of finite-dimensional $\Lambda_{\uparrow}$-vector spaces, we obtain a double-adjoint ${}^{\vee}\!({}^{\vee}\!A)\colon\thinspace {}^{\vee}\!({}^{\vee}\!V)\to {}^{\vee}\!({}^{\vee}\!W)$, and it is straightforward to check that ${}^{\vee}\!({}^{\vee}\!A)$ coincides with $A$ under the isomorphisms $\alpha_V\colon\thinspace V\cong{}^{\vee}\!({}^{\vee}\!V)$ and $\alpha_W\colon\thinspace W\cong {}^{\vee}\!({}^{\vee}\!W)$.
By the same token, if $H$ is a $\Lambda$-module, we obtain a $\Lambda$-module homomorphism $\alpha_H\colon\thinspace H\to {}^{\vee}\!({}^{\vee}\!H)$ by setting $\alpha_H(h)$ equal to the map ${}^{\vee}\!H\to \Lambda$ defined by $\phi\mapsto \overline{\phi(h)}$. In contrast to the situation with finite-dimensional vector spaces over $\Lambda_{\uparrow}$ or $\Lambda^{\downarrow}$, the map $\alpha_H$ need not be an isomorphism for general finitely-generated modules $H$ over $\Lambda$; for instance, if $h$ is a torsion element of $H$ then $\alpha_H(h)$ will be zero. If $A\colon\thinspace H_0\to H_1$ is a $\Lambda$-module homomorphism we have a commutative diagram \begin{equation}\label{ddnat} \xymatrix{ H_0\ar[r]^{A}\ar[d]_{\alpha_{H_0}} & H_1\ar[d]^{\alpha_{H_1}} \\ {}^{\vee}\!({}^{\vee}\!H_0)\ar[r]_{{}^{\vee}\!{}^{\vee}\!A} & {}^{\vee}\!({}^{\vee}\!H_1)},\end{equation} as both compositions send $x\in H_0$ to the map on ${}^{\vee}\!H_1$ given by $\psi\mapsto \overline{\psi(Ax)}$.
\subsection{PD structures}\label{pdsec}
One ingredient in our constructions is the following pair of definitions, which are designed to mimic maps obtained from Poincar\'e duality in covering spaces.
\begin{dfn}\label{pdstr}
Let $H_*=\oplus_{k\in \mathbb{Z}}H_k$ be a graded left $\Lambda$-module such that each $H_k$ is finitely generated over $\Lambda$, and let $n\in \mathbb{Z}$. A \textbf{weak $n$-PD structure} on $H_*$ consists of module homomorphisms $\mathcal{D}_k\colon\thinspace H_k\to {}^{\vee}\!H_{n-k}$ for all $k\in \mathbb{Z}$ obeying the symmetry condition \begin{equation}\label{pdsym} (\mathcal{D}_kx)(y)=\pm \overline{(\mathcal{D}_{n-k}y)(x)} \quad\mbox{for all }x\in H_k,y\in H_{n-k} \end{equation} for some sign $\pm$ that may depend on $k$, such that $1\otimes \mathcal{D}_k\colon\thinspace \mathcal{Q}(\Lambda)\otimes_{\Lambda}H_k\to \mathcal{Q}(\Lambda)\otimes_{\Lambda}{}^{\vee}\!H_{n-k}$ is an isomorphism where $\mathcal{Q}(\Lambda)$ is the fraction field of $\Lambda$. A \textbf{strong $n$-PD structure} on $H_*$ consists of module homomorphisms $\mathcal{D}_k\colon\thinspace H_k\to {}^{\vee}\!H_{n-k}$ obeying (\ref{pdsym}) which are surjective and have kernel equal to the $\Lambda$-torsion submodule $tH_k$ of $H_k$.
\end{dfn}
\begin{remark}\label{trivpdstr}
If $\Gamma=\{0\}$ and hence $\Lambda$ is equal to the field $\kappa$, then there is no $\Lambda$-torsion and $\mathcal{Q}(\Lambda)=\Lambda$, so either a weak or a strong $n$-PD structure just amounts to isomorphisms $H_k\cong {}^{\vee}\!H_{n-k}$ obeying (\ref{pdsym}). \end{remark}
For general $\Gamma$, from the facts (\cite[Proposition 4.78 and Theorem 4.80]{Rot}) that $\mathcal{Q}(\Lambda)$ is flat over $\Lambda$ and that the coefficient extension map $H_k\to \mathcal{Q}(\Lambda)\otimes_{\Lambda}H_k$ has kernel equal to $tH_k$, one easily sees that a strong $n$-PD structure is also a weak $n$-PD structure. For a counterexample to the converse if $\Gamma\neq\{0\}$ (and hence $\Lambda$ is not a field), one could start with a strong $n$-PD structure and then multiply all $\mathcal{D}_k$ by some nonzero, non-invertible, self-conjugate element of $\Lambda$ (such as $T^g+T^{-g}$ where $g\in\Gamma\setminus\{0\}$); the resulting map would no longer be surjective but would still become an isomorphism after tensoring with $\mathcal{Q}(\Lambda)$. We will revisit this case in Example \ref{weakdualstrict}.
The motivating topological model for Definition \ref{pdstr} involves a regular covering space $p\colon\thinspace \tilde{X}\to X$ where $X$ is a smooth closed oriented manifold and the deck transformation group of $p$ is isomorphic to $\Gamma$. The $\Gamma$-action on $\tilde{X}$ makes the singular chain complex of $\tilde{X}$ into a complex of $\Lambda$-modules, and for $H_k$ we can take the $k$th homology of this complex.\footnote{$H_k$ is finitely generated because $X$, being a smooth closed manifold, admits a finite cell decomposition which then lifts to a $\Gamma$-equivariant cell decomposition of $\tilde{X}$, and $H_k$ is isomorphic to the $k$th homology of the complex of finitely-generated $\Lambda$-modules associated to this cell decomposition.} Poincar\'e duality as in \cite[Theorem 4.65]{Ran} then provides an isomorphism $H_k\cong H^{n-k}$ where $H^{n-k}$ is the cohomology of the dual complex to the singular chain complex. Now Definition \ref{pdstr} does not reference anything that directly plays the role of the cohomology $H^{n-k}$; rather we use ${}^{\vee}\!H_{n-k}$, the dual of the homology, which if $\Gamma\neq\{0\}$ is a different thing. More specifically, there is an evaluation map $\mathrm{ev}\colon\thinspace H^{n-k}\to{}^{\vee}\!H_{n-k}$ from cohomology to the dual of homology, and the maps $\mathcal{D}_k\colon\thinspace H_k\to {}^{\vee}\!H_{n-k}$ should be understood as the composition of the Poincar\'e duality isomorphism with $\mathrm{ev}$. The mapping $(x,y)\mapsto (\mathcal{D}_kx)(y)$ is the homology intersection pairing defined in \cite[Definition 4.66]{Ran}; as noted there, one has $(\mathcal{D}_kx)(y)=(-1)^{k(n-k)}\overline{(\mathcal{D}_{n-k}y)(x)}$ so the symmetry condition (\ref{pdsym}) holds.
At this point we encounter the basic trichotomy of algebraic complexity in our analysis, depending on the nature of the subgroup $\Gamma$ of $\mathbb{R}$. If $\Gamma=\{0\}$ then most of the discussion up to this point collapses to something rather simpler: all of $\Lambda,\Lambda_{\uparrow},\Lambda^{\downarrow},\Lambda_{\updownarrow}$ are just equal to the field $\kappa$; ``conjugation'' is the identity map on $\kappa$; the map $\mathrm{ev}$ of the previous paragraph is an isomorphism; and, as already noted in Remark \ref{trivpdstr}, either notion of an $n$-PD structure in Definition \ref{pdstr} just amounts to isomorphisms $H_k\to {}^{\vee}\!H_{n-k}$ that satisfy (\ref{pdsym}) (and in this case the conjugation symbol in (\ref{pdsym}) may be ignored).
The next simplest possibility is that $\Gamma$ is a discrete, nontrivial subgroup of $\mathbb{R}$; as is well-known and not hard to show, this implies that $\Gamma$ is isomorphic to $\mathbb{Z}$ and hence that the group ring $\Lambda$ is isomorphic to the Laurent polynomial ring $\kappa[t,t^{-1}]$. Thus in this case $\Lambda$ is a PID, and so the universal coefficient theorem \cite[Theorem 7.59]{Rot} applies to show that $\mathrm{ev}$ is surjective and to compute its (generally nontrivial) kernel.
The remaining possibility is that $\Gamma\leq \mathbb{R}$ is a dense subgroup, in which case (since we assume $\Gamma$ to be finitely generated) $\Lambda$ is isomorphic to a multivariable Laurent polynomial ring $\kappa[t_1,t_{1}^{-1},\ldots,t_d,t_{d}^{-1}]$ with $d>1$, which is not a PID. Finitely generated modules over $\Lambda$ can therefore be rather complicated, and moreover the usual universal coefficient theorem does not apply. Instead, there is a universal coefficient spectral sequence described in \cite[Section 2]{Lev}, and if some differential on an $E^r$ page of this spectral sequence with $r\geq 2$ does not vanish on the bottom row then the map $\mathrm{ev}$ will not be surjective.
Our definition of a weak $n$-PD structure is designed to be flexible enough to apply in relevant situations even if $\Gamma$ is a dense subgroup of $\mathbb{R}$, while our definition of a strong $n$-PD structure is meant to allow for sharper statements such as Corollary \ref{pndual} that hold when $\Gamma$ is discrete. To see that such structures do indeed arise from Poincar\'e duality, we prove:
\begin{prop}\label{cohtopdstr}
Let $C_*=\oplus_{k\in \mathbb{Z}}C_k$ be a graded left $\Lambda$-module with each $C_k$ free, let $\partial\colon\thinspace C_{*}\to C_{*-1}$ obey $\partial\circ\partial=0$, and assume that each homology module $H_k=\frac{\ker(\partial\colon\thinspace C_k\to C_{k-1})}{\Img(\partial\colon\thinspace C_{k+1}\to C_k)}$ is finitely generated as a $\Lambda$-module. Write $C^k={}^{\vee}\!(C_k)$, and $\delta\colon\thinspace C^*\to C^{*+1}$ for the map given on $C^k$ by ${}^{\vee}\!((-1)^{k+1}\partial|_{C_{k+1}})$. Assume that for each $k$ we have an isomorphism $D_k\colon\thinspace H_k\to H^{n-k}$ where $H^{n-k}=\frac{\ker(\delta\colon\thinspace C^k\to C^{k+1})}{\Img(\delta\colon\thinspace C^{k-1}\to C^k)}$, and let $\mathrm{ev}\colon\thinspace H^{n-k}\to {}^{\vee}\!H_{n-k}$ be the evaluation map: $\mathrm{ev}([\phi])([c])=\phi(c)$ whenever $\phi\in C^{n-k}$ has $\delta\phi=0$ and $c\in C_{n-k}$ has $\partial c=0$. Assume moreover that the maps $\mathcal{D}_k=\mathrm{ev}\circ D_k$ obey (\ref{pdsym}). Then:
\begin{itemize}\item[(i)] If $\Gamma$ is discrete then the maps $\mathcal{D}_k=\mathrm{ev}\circ D_k$ define a strong $n$-PD structure.
\item[(ii)] Regardless of whether $\Gamma$ is discrete, if each $C_k$ is finitely generated then the maps $\mathcal{D}_k=\mathrm{ev}\circ D_k$ define a weak $n$-PD structure.
\end{itemize}\end{prop}
\begin{proof}[Proof of Proposition \ref{cohtopdstr}(i)]
Since $\Gamma$ is discrete and so $\Lambda$ is a PID the universal coefficient theorem gives (after applying the conjugation functor) a short exact sequence \[ \xymatrix{0\ar[r] &\overline{\mathrm{Ext}^{1}_{\Lambda}(H_{n-k-1},\Lambda)}\ar[r]^{\,\,\,\,\,\,\quad j} & H^{n-k}\ar[r]^{\mathrm{ev}} & {}^{\vee}\!H_{n-k}\ar[r] & 0 }.\] Since we assume that $D_k\colon\thinspace H_k\to H^{n-k}$ is an isomorphism it follows that $\mathcal{D}_k=\mathrm{ev}\circ D_k$ is a surjection with kernel equal to $D_{k}^{-1}(\Img j)$. Since ${}^{\vee}\!H_{n-k}=\overline{\mathrm{Hom}_{\Lambda}(H_{n-k},\Lambda)}$ is torsion-free this kernel must contain all $\Lambda$-torsion elements of $H_k$. Conversely, all elements of $\ker\mathcal{D}_k=D_{k}^{-1}(\Img j)$ are torsion since $\mathrm{Ext}^{1}_{\Lambda}(H_{n-k-1},\Lambda)$ is a torsion $\Lambda$-module (as can be seen by direct calculation from the classification of finitely generated modules over a PID, noting that $H_{n-k-1}$ is assumed to be finitely generated).
\end{proof}
For the proof of the second part of the proposition it is helpful to have the following lemma.
\begin{lemma}\label{extend-to-frac}
Let $A$ be an integral domain with fraction field $Q$, and let $M$ be a finitely generated $A$-module. Then the $Q$-vector space homomorphism \[ \alpha\colon\thinspace \mathrm{Hom}_{A}(M,A)\otimes_A Q\to \mathrm{Hom}_A(M,Q),\] defined on simple tensors $\phi\otimes q$ by $\left(\alpha(\phi\otimes q)\right)(m)=q\phi(m)$ for all $m\in M$, is an isomorphism.
\end{lemma}
\begin{proof}
First note that, because $M$ is finitely-generated, for each $\phi\in \mathrm{Hom}_A(M,Q)$, there is $a\in A$ such that $a\phi\in \mathrm{Hom}_A(M,A)$. Indeed, if $\{x_1,\ldots,x_m\}$ is a generating set for $M$ with $\phi(x_i)=\frac{p_i}{q_i}$ with $p_i,q_i\in A$ and $q_i\neq 0$, we can take $a=\prod_{i=1}^{m}q_i$. Thus the quotient $\frac{\mathrm{Hom}_A(M,Q)}{\mathrm{Hom}_A(M,A)}$ is a torsion $A$-module, and hence its tensor product with the fraction field $Q$ vanishes. Now $Q$ is a flat $A$-module, so applying the exact functor $\_\otimes_A Q$ to the short exact sequence \[ 0\to \mathrm{Hom}_A(M,A)\to \mathrm{Hom}_A(M,Q)\to \frac{\mathrm{Hom}_A(M,Q)}{\mathrm{Hom}_A(M,A)}\to 0\] shows that the map \begin{equation}\label{domtofrac} \mathrm{Hom}_A(M,A)\otimes_A Q\to \mathrm{Hom}_A(M,Q)\otimes_A Q \end{equation} induced by inclusion of $A$ into $Q$ is an isomorphism. But there is an isomorphism of $Q$-vector spaces $\mathrm{Hom}_A(M,Q)\otimes_A Q\to \mathrm{Hom}_A(M,Q)$ given on simple tensors by $\phi\otimes q\mapsto q\phi$ (this follows for instance from the proof of \cite[Corollary 4.79(ii)]{Rot}); composing this with (\ref{domtofrac}) yields the result.
\end{proof}
\begin{proof}[Proof of Proposition \ref{cohtopdstr}(ii)]
Write $(C^*,\delta)$ for the usual (unconjugated) dual complex to $(C_*,\partial)$, so $C^i=\mathrm{Hom}_{\Lambda}(C_i,\Lambda)$ and Lemma \ref{extend-to-frac} gives isomorphisms $C^i\otimes_{\Lambda}\mathcal{Q}(\Lambda)\cong \mathrm{Hom}_{\Lambda}(C_i,\mathcal{Q}(\Lambda))$. Since $\mathcal{Q}(\Lambda)$ is a flat $\Lambda$-module, the obvious map $H^{n-k}(C^*)\otimes_{\Lambda}\mathcal{Q}(\Lambda)\to H^{n-k}(C^{*}\otimes_{\Lambda}\mathcal{Q}(\Lambda))$ is an isomorphism. So we have an isomorphism $H^{n-k}(C^*)\otimes_{\Lambda}\mathcal{Q}(\Lambda)\cong H^{n-k}(\mathrm{Hom}_{\Lambda}(C_*,\mathcal{Q}(\Lambda)))$.
In turn, since $\mathcal{Q}(\Lambda)$ is an injective $\Lambda$-module, the evaluation map $H^{n-k}(\mathrm{Hom}_{\Lambda}(C_*,\mathcal{Q}(\Lambda)))\to \mathrm{Hom}_{\Lambda}(H_{n-k}(C_*),\mathcal{Q}(\Lambda))$ is an isomorphism. Finally, another application of Lemma \ref{extend-to-frac}
gives an isomorphism $ \mathrm{Hom}_{\Lambda}(H_{n-k}(C_*),\mathcal{Q}(\Lambda))\cong \mathrm{Hom}_{\Lambda}(H_{n-k}(C_*),\Lambda)\otimes_{\Lambda} \mathcal{Q}(\Lambda)$.
Stringing together all of the above isomorphisms gives an isomorphism $H^{n-k}(C^*)\otimes_{\Lambda}\mathcal{Q}(\Lambda)\to \mathrm{Hom}_{\Lambda}(H_{n-k}(C_*),\Lambda)\otimes_{\Lambda} \mathcal{Q}(\Lambda)$ which is easily seen to coincide with the coefficient extension of the evaluation map. Conjugating then gives that $1_{\mathcal{Q}(\Lambda)}\otimes \mathrm{ev}\colon\thinspace \mathcal{Q}(\Lambda)\otimes_{\Lambda}H^{n-k}\to \mathcal{Q}(\Lambda)\otimes_{\Lambda}{}^{\vee}\!H_{n-k}$ is an isomorphism. Composing this isomorphism with the coefficient extension to $\mathcal{Q}(\Lambda)$ of the isomorphism $D_k\colon\thinspace H_k\to H^{n-k}$ from the statement of the proposition, we see that the coefficient extension to $\mathcal{Q}(\Lambda)$ of $\mathcal{D}_k=\mathrm{ev}\circ D_k$ gives an isomorphism $\mathcal{Q}(\Lambda)\otimes_{\Lambda}H_k\to\mathcal{Q}(\Lambda)\otimes_{\Lambda} {}^{\vee}\!H_{n-k}$, as desired.
\end{proof}
\section{Filtered matched pairs}\label{fmpsec}
Continuing to denote by $\Lambda$ the group algebra $\kappa[\Gamma]$ of a finitely-generated subgroup $\Gamma<\mathbb{R}$ over a field $\kappa$, let us define functions \[ \nu_{\uparrow}\colon\thinspace \Lambda\to \mathbb{R}\cup\{\infty\}\qquad \nu^{\downarrow}\colon\thinspace \Lambda\to \mathbb{R}\cup\{-\infty\} \] by \begin{equation}\label{nuformula} \nu_{\uparrow}\left(\sum_ga_gT^g\right)=\min\{g|a_g\neq 0\},\qquad \nu^{\downarrow}\left(\sum_ga_gT^g\right)=\max\{g|a_g\neq 0\} \end{equation} (with the convention that the empty set has maximum $-\infty$ and minimum $\infty$). Setting $\nu$ equal either to $\nu_{\uparrow}$ or to $-\nu^{\downarrow}$ gives a non-Archimedean valuation on $\Lambda$, which is to say a function $\nu\colon\thinspace \Lambda\to\mathbb{R}\cup\{\infty\}$ obeying: \begin{itemize} \item[(i)] $\nu(\lambda)=\infty$ iff $\lambda=0$; \item[(ii)] $\nu(\lambda\mu)=\nu(\lambda)+\nu(\mu)$ for all $\lambda,\mu\in\Lambda$; \item[(iii)] $\nu(\lambda+\mu)\geq\min\{\nu(\lambda),\nu(\mu)\}$ for all $\lambda,\mu\in \Lambda$. \end{itemize}
Thus either of the formulas $(\lambda,\mu)\mapsto e^{-\nu_{\uparrow}(\lambda-\mu)}$ or $(\lambda,\mu)\mapsto e^{\nu^{\downarrow}(\lambda-\mu)}$ defines a metric on $\Lambda$, and the fields $\Lambda_{\uparrow}$ and $\Lambda^{\downarrow}$ are, respectively, the completions of $\Lambda$ with respect to these metrics. The functions $\nu_{\uparrow}$ and $\nu^{\downarrow}$ extend by the same formulas as in (\ref{nuformula}) to functions $\nu_{\uparrow}\colon\thinspace \Lambda_{\uparrow}\to \mathbb{R}\cup\{\infty\}$ and $\nu^{\downarrow}\colon\thinspace \Lambda^{\downarrow}\to \mathbb{R}\cup\{-\infty\}$ (note that while $\Lambda_{\uparrow}$ and $\Lambda^{\downarrow}$ contain certain infinite sums $\sum a_g T^g$, the $\min$ in (\ref{nuformula}) will still be attained for a nonzero element of $\Lambda_{\uparrow}$, and the $\max$ in (\ref{nuformula}) will still be attained for a nonzero element of $\Lambda^{\downarrow}$). Evidently we have \[ \nu_{\uparrow}(\lambda)=-\nu^{\downarrow}(\bar{\lambda}) \] for $\lambda\in \Lambda_{\uparrow}$, using the conjugation map $\Lambda_{\uparrow}\to\Lambda^{\downarrow}$ defined in Section \ref{conjsect}.
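For example, if $\Gamma=\mathbb{Z}$ and $\lambda=T^{-2}+T^{5}$ then $\nu_{\uparrow}(\lambda)=-2$ and $\nu^{\downarrow}(\lambda)=5$. The difference $\nu^{\downarrow}(\lambda)-\nu_{\uparrow}(\lambda)=7$ measures the ``width'' of the support of $\lambda$; it is nonnegative for all nonzero $\lambda\in\Lambda$ and vanishes exactly when $\lambda$ is a monomial $aT^{g}$, a fact that will be used in the proof of Proposition \ref{gapchar} below.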
The following definition is adapted from \cite[Definition 2.10]{UZ}; note that what is denoted by $\Lambda$ in that paper is denoted $\Lambda_{\uparrow}$ here:
\begin{dfn}\label{orthdef}\begin{enumerate}\item
An \textbf{orthogonalizable $\Lambda_{\uparrow}$-space} is a pair $(V_{\uparrow},\rho_{\uparrow})$ where $V_{\uparrow}$ is a finite-dimensional
vector space over $\Lambda_{\uparrow}$ and $\rho_{\uparrow}\colon\thinspace V_{\uparrow}\to \mathbb{R}\cup\{-\infty\}$ is a function such that \begin{itemize}\item $\rho_{\uparrow}(v)=-\infty$ if and only if $v=0$; and \item $V_{\uparrow}$ has a basis $\{e_1,\ldots,e_m\}$ which is \textbf{$\rho_{\uparrow}$-orthogonal} in the sense that, for all $\lambda_1,\ldots,\lambda_m\in \Lambda_{\uparrow}$, one has \[ \rho_{\uparrow}\left(\sum_{i=1}^{m}\lambda_ie_i\right)=\max_{1\leq i\leq m}\left(\rho_{\uparrow}(e_i)-\nu_{\uparrow}(\lambda_i)\right).\]
\end{itemize}
\item An \textbf{orthogonalizable $\Lambda^{\downarrow}$-space} is a pair $(V^{\downarrow},\rho^{\downarrow})$ where $V^{\downarrow}$ is a finite-dimensional
vector space over $\Lambda^{\downarrow}$ and $\rho^{\downarrow}\colon\thinspace V^{\downarrow}\to \mathbb{R}\cup\{\infty\}$ is a function such that \begin{itemize}\item $\rho^{\downarrow}(v)=\infty$ if and only if $v=0$; and \item $V^{\downarrow}$ has a basis $\{e_1,\ldots,e_m\}$ which is \textbf{$\rho^{\downarrow}$-orthogonal} in the sense that, for $\lambda_1,\ldots,\lambda_m\in \Lambda^{\downarrow}$, one has \[ \rho^{\downarrow}\left(\sum_{i=1}^{m}\lambda_ie_i\right)=\min_{1\leq i\leq m}\left(\rho^{\downarrow}(e_i)-\nu^{\downarrow}(\lambda_i)\right).\]\end{itemize}\end{enumerate}
\end{dfn}
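To see that orthogonality is a genuine restriction on a basis, consider the following toy example. Let $V_{\uparrow}=\Lambda_{\uparrow}^{2}$ with standard basis $\{e_1,e_2\}$ and define \[ \rho_{\uparrow}(\lambda_1 e_1+\lambda_2 e_2)=\max\left(-\nu_{\uparrow}(\lambda_1),\,1-\nu_{\uparrow}(\lambda_2)\right), \] so that $\{e_1,e_2\}$ is $\rho_{\uparrow}$-orthogonal with $\rho_{\uparrow}(e_1)=0$ and $\rho_{\uparrow}(e_2)=1$. The basis $\{e_1+e_2,e_2\}$ is \emph{not} $\rho_{\uparrow}$-orthogonal: orthogonality would force \[ \rho_{\uparrow}\left((e_1+e_2)-e_2\right)=\max\{\rho_{\uparrow}(e_1+e_2),\rho_{\uparrow}(e_2)\}=1, \] whereas $(e_1+e_2)-e_2=e_1$ has $\rho_{\uparrow}(e_1)=0$.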
\begin{remark}\label{switch}
If $(V_{\uparrow},\rho_{\uparrow})$ is an orthogonalizable $\Lambda_{\uparrow}$-space then $(\overline{V_{\uparrow}},-\rho_{\uparrow})$ is an orthogonalizable $\Lambda^{\downarrow}$-space, and similarly with the symbols $\uparrow$ and $\downarrow$ reversed. Indeed $\{e_1,\ldots,e_m\}$ is a $\rho_{\uparrow}$-orthogonal basis for $V_{\uparrow}$ if and only if the same set $\{e_1,\ldots,e_m\}$ is a $(-\rho_{\uparrow})$-orthogonal basis for $\overline{V_{\uparrow}}$. This allows one to convert basic results about orthogonalizable $\Lambda_{\uparrow}$-spaces such as those appearing in \cite[Section 2]{UZ} to results about orthogonalizable $\Lambda^{\downarrow}$-spaces. For instance, subspaces of such spaces are always orthogonalizable (\cite[Corollary 2.17]{UZ}).
\end{remark}
\begin{remark}\label{normconds}
The conditions in Definition \ref{orthdef} evidently imply, for $\lambda$ belonging to $\Lambda_{\uparrow}$ or $\Lambda^{\downarrow}$ and $v,w$ belonging to $V_{\uparrow}$ or $V^{\downarrow}$ as appropriate: \begin{align*} \rho_{\uparrow}(\lambda v)&=\rho_{\uparrow}(v)-\nu_{\uparrow}(\lambda),\qquad \rho^{\downarrow}(\lambda v)=\rho^{\downarrow}(v)-\nu^{\downarrow}(\lambda),\\ \rho_{\uparrow}(v+w)&\leq \max\{\rho_{\uparrow}(v),\rho_{\uparrow}(w)\},\qquad \rho^{\downarrow}(v+w)\geq\min\{\rho^{\downarrow}(v),\rho^{\downarrow}(w)\}.\end{align*}
\end{remark}
\begin{remark}\label{alldifferent} The inequalities in the last line of the previous remark become equalities if $\rho_{\uparrow}(v)\neq \rho_{\uparrow}(w)$ (resp. $\rho^{\downarrow}(v)\neq \rho^{\downarrow}(w)$), see \cite[Proposition 2.3]{UZ}. From this it is easy to see that a sufficient condition for a basis $\{e_1,\ldots,e_m\}$ of an orthogonalizable $\Lambda_{\uparrow}$-space $(V_{\uparrow},\rho_{\uparrow})$ to be $\rho_{\uparrow}$-orthogonal is that the $\rho_{\uparrow}(e_i)$ all represent different cosets of $\Gamma$ in $\mathbb{R}$. Conversely, if $(V_{\uparrow},\rho_{\uparrow})$ admits some basis $\{e_1,\ldots,e_m\}$ such that the $\rho_{\uparrow}(e_i)$ all represent different cosets of $\Gamma$ in $\mathbb{R}$, then all $\rho_{\uparrow}$-orthogonal bases for $V_{\uparrow}$ will have the same property. Similar remarks apply to $\rho^{\downarrow}$-orthogonal bases.
\end{remark}
\begin{remark}
Let us consider these notions in the case that $\Gamma=\{0\}$ and hence $\Lambda_{\uparrow}$ and $\Lambda^{\downarrow}$ both coincide with the coefficient field $\kappa$. Both $\nu_{\uparrow}$ and $\nu^{\downarrow}$ send all nonzero elements of $\kappa$ to $0$, and they send $0$ to $+\infty$ and $-\infty$, respectively. So an orthogonalizable $\Lambda_{\uparrow}$-space $(V_{\uparrow},\rho_{\uparrow})$ has a basis $\{e_1,\ldots,e_m\}$ for which $\rho_{\uparrow}\left(\sum_i\lambda_ie_i\right)=\max\{\rho_{\uparrow}(e_i)|\lambda_i\neq 0\}$, while an orthogonalizable $\Lambda^{\downarrow}$-space has a basis $\{e_1,\ldots,e_m\}$ for which $\rho^{\downarrow}\left(\sum_i\lambda_ie_i\right)=\min\{\rho^{\downarrow}(e_i)|\lambda_i\neq 0\}$.\end{remark}
For an example of an orthogonalizable $\Lambda_{\uparrow}$-space with $\Gamma=\{0\}$, one can consider a Morse function $f\colon\thinspace X\to \mathbb{R}$ on a compact smooth manifold $X$, let $V_{\uparrow}=H_k(X;\kappa)$ for some choice of degree $k$, and define $\rho_{\uparrow}\colon\thinspace V_{\uparrow}\to\mathbb{R}\cup\{-\infty\}$ by \[ \rho_{\uparrow}(a)=\inf\{t\in \mathbb{R}|a\in \Img(H_k(f^{-1}((-\infty,t]);\kappa)\to H_k(X;\kappa))\}\] (where the map is the obvious one induced by the inclusion $f^{-1}((-\infty,t])\to X$). Thus $\rho_{\uparrow}(a)$ records the level at which the homology class $a$ first appears in the homologies of sublevel sets of $f$. If the infinite intervals appearing in the degree-$k$ sublevel persistence barcode of $f$ are $[t_1,\infty),\ldots,[t_m,\infty)$, then there will be a $\rho_{\uparrow}$-orthogonal basis $\{e_1,\ldots,e_m\}$ of $H_k(X;\kappa)$ with $\rho_{\uparrow}(e_i)=t_i$. (This follows from the general \cite[Proposition 6.6]{UZ}; in fact, modulo reordering, every $\rho_{\uparrow}$-orthogonal basis will satisfy this property in view of \cite[Proposition 5.5]{UZ}.)
Dually, an example of an orthogonalizable $\Lambda^{\downarrow}$-space with $\Gamma=\{0\}$ is given by again taking $V^{\downarrow}=H_k(X;\kappa)$ but now putting \[ \rho^{\downarrow}(a)=\sup\{t\in \mathbb{R}|a\in \Img(H_k(f^{-1}([t,\infty));\kappa)\to H_k(X;\kappa))\}.\] Thus $\rho_{\uparrow}$ records how the images in $H_k(X;\kappa)$ of the homologies of sublevel sets grow as the level increases, while $\rho^{\downarrow}$ records how the images in $H_k(X;\kappa)$ of the homologies of superlevel sets grow as the level decreases. (This is the motivation for the use of the arrows $\uparrow$ and $\downarrow$ in the notation.)
For general $\Gamma$, the constructions in \cite{UZ} (in particular, that paper's Proposition 6.6) show that, for a cover $\tilde{X}\to X$ with deck transformation group $\Gamma$ and for a Morse function $\tilde{f}\colon\thinspace \tilde{X}\to \mathbb{R}$ such that $d\tilde{f}$ is the pullback of a closed one-form on $X$ representing the class $\xi\in H^1(X;\mathbb{R})$, the Novikov homology $\mathbf{HN}_k(\tilde{f};\xi)$ (which is a vector space over $\Lambda_{\uparrow}$) again admits the structure of an orthogonalizable $\Lambda_{\uparrow}$-space, with $\rho_{\uparrow}(a)$ defined to be the infimal filtration level at which the homology class $a$ is represented in the Novikov chain complex.
We can now finally define the basic structures that will ultimately give rise to our version of the open and closed intervals in interlevel barcodes.
\begin{dfn}\label{fmpdfn} Fix as before a field $\kappa$ and a finitely generated subgroup $\Gamma<\mathbb{R}$.
A \textbf{filtered matched pair} is a diagram \[ \xymatrix{ & (V^{\downarrow},\rho^{\downarrow}) \\ H\ar[ru]^{\phi^{\downarrow}}\ar[rd]_{\phi_{\uparrow}} & \\ & (V_{\uparrow},\rho_{\uparrow}) } \] where:
\begin{itemize} \item $H$ is a finitely-generated left $\Lambda$-module;
\item $(V_{\uparrow},\rho_{\uparrow})$ is an orthogonalizable $\Lambda_{\uparrow}$-space, and the $\Lambda$-module homomorphism $\phi_{\uparrow}\colon\thinspace H\to V_{\uparrow}$ has the property that $1_{\Lambda_{\uparrow}}\otimes\phi_{\uparrow} \colon\thinspace \Lambda_{\uparrow}\otimes_{\Lambda}H\to V_{\uparrow}$ is an isomorphism; and
\item $(V^{\downarrow},\rho^{\downarrow})$ is an orthogonalizable $\Lambda^{\downarrow}$-space, and the $\Lambda$-module homomorphism $\phi^{\downarrow}\colon\thinspace H\to V^{\downarrow}$ has the property that $1_{\Lambda^{\downarrow}}\otimes \phi^{\downarrow}\colon\thinspace \Lambda^{\downarrow}\otimes_{\Lambda}H\to V^{\downarrow}$ is an isomorphism.
\end{itemize}\end{dfn}
If $\Gamma=\{0\}$ then $\Lambda_{\uparrow}=\Lambda^{\downarrow}=\Lambda=\kappa$, so $\phi_{\uparrow}$ and $\phi^{\downarrow}$ are already isomorphisms; thus in this case a filtered matched pair provides not much more information than simply a $\kappa$-vector space isomorphism (namely $\phi^{\downarrow}\circ\phi_{\uparrow}^{-1}$) between $V_{\uparrow}$ and $V^{\downarrow}$. However, if $\Gamma\neq\{0\}$ then $V_{\uparrow}$ and $V^{\downarrow}$ are not even vector spaces over the same field so they certainly cannot be isomorphic. One could conjugate $V^{\downarrow}$ to obtain a vector space over the same field as $V_{\uparrow}$, but conjugation is incompatible with the $\Gamma$-action which is meant to encode geometrically significant information. Definition \ref{fmpdfn} allows for comparison between $V_{\uparrow}$ and $V^{\downarrow}$ by providing a venue, $H$, in which elements of these spaces can sometimes be ``matched'' while simultaneously respecting the $\Gamma$-actions on both spaces.
\begin{remark} With notation as in Definition \ref{fmpdfn}, since $\Lambda_{\uparrow}$ and $\Lambda^{\downarrow}$ are both field extensions of the fraction field $\mathcal{Q}(\Lambda)$, the map $\phi_{\uparrow}$ factors as $H\to \mathcal{Q}(\Lambda)\otimes_{\Lambda}H\to V_{\uparrow}$ where the first map is the coefficient extension map and the second is the restriction of $1_{\Lambda_{\uparrow}}\otimes \phi_{\uparrow}$ and in particular is injective. A similar remark applies to $\phi^{\downarrow}$. So since the coefficient extension map $H\to \mathcal{Q}(\Lambda)\otimes_{\Lambda}H$ has kernel equal to the $\Lambda$-torsion submodule $tH$ of $H$, it follows that $\ker\phi_{\uparrow}=\ker\phi^{\downarrow}=tH$ and that $\phi_{\uparrow}$ (resp. $\phi^{\downarrow}$) descends to an injection $\frac{H}{tH}\to V_{\uparrow}$ (resp. $\frac{H}{tH}\to V^{\downarrow}$) which becomes an isomorphism after tensoring with $\Lambda_{\uparrow}$ (resp. $\Lambda^{\downarrow}$). Moreover the dimensions of $V_{\uparrow}$ and $V^{\downarrow}$ are both equal to $\dim_{\mathcal{Q}(\Lambda)}\mathcal{Q}(\Lambda)\otimes_{\Lambda}H$.
\end{remark}
Let us now extract quantitative data from a filtered matched pair $\mathcal{P}=(H,V_{\uparrow},\rho_{\uparrow},\phi_{\uparrow},V^{\downarrow},\rho^{\downarrow},\phi^{\downarrow})$. In favorable situations these data will emerge from finding a special kind of basis for the torsion-free module $\frac{H}{tH}$. In our current level of generality such a special basis may not exist---indeed if $\Gamma$ is a dense subgroup of $\mathbb{R}$ so that $\Lambda$ is not a PID, then $\frac{H}{tH}$ may well have no basis at all---but we can still make the following definition:
\begin{dfn}\label{gapdfn}
For a filtered matched pair $\mathcal{P}$ as above, let $d=\dim_{\mathcal{Q}(\Lambda)}\mathcal{Q}(\Lambda)\otimes_{\Lambda}H$ and let $i$ be an integer with $1\leq i\leq d$. We define the \textbf{$i$th gap} of $\mathcal{P}$ as \[ G_i(\mathcal{P})=\sup\left\{\gamma\in \mathbb{R}\left|\begin{array}{c}(\exists x_1,\ldots,x_i\in H \mbox{ independent})\\ (\rho^{\downarrow}(\phi^{\downarrow}x_j)-\rho_{\uparrow}(\phi_{\uparrow}x_j)\geq \gamma\mbox{ for all }j=1,\ldots,i)\end{array}\right.\right\}.\]
\end{dfn}
(Here a collection of elements of a module $H$ over an integral domain is said to be independent if the elements become linearly independent after tensoring with the fraction field of the domain.)
For filtered matched pairs obtained from Poincar\'e--Novikov structures via the construction in Section \ref{pnsect}, the $G_i(\mathcal{P})$ which are positive will appear as lengths of the closed intervals in the associated barcode, while the absolute values of the negative $G_i(\mathcal{P})$ will appear as lengths of open intervals. (The latter will be shifted down by $1$ in homological degree.)
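To see how the gaps behave in a minimal rank-one sketch (not tied to any particular topological situation), suppose $H=\Lambda$, $V_{\uparrow}=\Lambda_{\uparrow}$, and $V^{\downarrow}=\Lambda^{\downarrow}$, with $\phi_{\uparrow},\phi^{\downarrow}$ the inclusions and with filtrations determined by constants $c_{\uparrow},c^{\downarrow}\in\mathbb{R}$ via $\rho_{\uparrow}(\lambda)=c_{\uparrow}-\nu_{\uparrow}(\lambda)$ and $\rho^{\downarrow}(\lambda)=c^{\downarrow}-\nu^{\downarrow}(\lambda)$. For nonzero $x\in H=\Lambda$ one then has \[ \rho^{\downarrow}(\phi^{\downarrow}x)-\rho_{\uparrow}(\phi_{\uparrow}x)=(c^{\downarrow}-c_{\uparrow})-\left(\nu^{\downarrow}(x)-\nu_{\uparrow}(x)\right)\leq c^{\downarrow}-c_{\uparrow}, \] with equality exactly when $x$ is a monomial, so $G_{1}(\mathcal{P})=c^{\downarrow}-c_{\uparrow}$. This corresponds to a closed bar of length $c^{\downarrow}-c_{\uparrow}$ when $c^{\downarrow}\geq c_{\uparrow}$, and to an open bar of length $c_{\uparrow}-c^{\downarrow}$ (in homological degree one lower) otherwise.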
One might hope to have $G_i(\mathcal{P})=\rho^{\downarrow}(\phi^{\downarrow}x_i)-\rho_{\uparrow}(\phi_{\uparrow}x_i)$ for some collection of elements $\{x_1,\ldots,x_d\}\subset H$. We will see that this holds if the $x_i$ form a \emph{doubly-orthogonal basis} in the following sense:
\begin{dfn}\label{dodfn}
For a filtered matched pair $\mathcal{P}$ as above, a \textbf{doubly-orthogonal basis} is a subset $\{e_1,\ldots,e_d\}\subset H$ such that:
\begin{itemize}\item[(i)] $\{e_1,\ldots,e_d\}$ projects under the quotient map $H\to \frac{H}{tH}$ to a basis for the $\Lambda$-module $\frac{H}{tH}$, where $tH$ is the $\Lambda$-torsion submodule of $H$;
\item[(ii)] $\{\phi_{\uparrow}(e_1),\ldots,\phi_{\uparrow}(e_d)\}$ is a $\rho_{\uparrow}$-orthogonal basis for $V_{\uparrow}$; and
\item[(iii)] $\{\phi^{\downarrow}(e_1),\ldots,\phi^{\downarrow}(e_d)\}$ is a $\rho^{\downarrow}$-orthogonal basis for $V^{\downarrow}$.
\end{itemize}
\end{dfn}
We will prove the following in Section \ref{basisconstruct}.
\begin{theorem}\label{basistheorem} Suppose that the subgroup $\Gamma<\mathbb{R}$ is discrete. Then any filtered matched pair has a doubly-orthogonal basis.
\end{theorem}
As alluded to earlier, if $\Gamma$ is not discrete then doubly-orthogonal bases might not exist for the trivial reason that $\frac{H}{tH}$ might not be a free module. I do not know whether, in the indiscrete case, filtered matched pairs in which $\frac{H}{tH}$ is free always admit
doubly-orthogonal bases. While the individual steps of the algorithm implicitly described in the proof of Theorem \ref{basistheorem} can be implemented for any $\Gamma$, the proof that the algorithm terminates uses the assumption that $\Gamma$ is discrete.
Doubly-orthogonal bases are related to the gaps of Definition \ref{gapdfn} as follows:
\begin{prop}\label{gapchar}
Let $\{e_1,\ldots,e_d\}$ be a doubly-orthogonal basis for a filtered matched pair $\mathcal{P}$, ordered in such a way that $\rho^{\downarrow}(\phi^{\downarrow}(e_i))-\rho_{\uparrow}(\phi_{\uparrow}(e_i))\geq \rho^{\downarrow}(\phi^{\downarrow}(e_{i+1}))-\rho_{\uparrow}(\phi_{\uparrow}(e_{i+1}))$ for each $i\in \{1,\ldots,d-1\}$. Then, for each $i$, \[ G_i(\mathcal{P})=\rho^{\downarrow}(\phi^{\downarrow}(e_i))-\rho_{\uparrow}(\phi_{\uparrow}(e_i)).\]
\end{prop}
\begin{proof} That $G_i(\mathcal{P})\geq \rho^{\downarrow}(\phi^{\downarrow}(e_i))-\rho_{\uparrow}(\phi_{\uparrow}(e_i))$ follows immediately from the definition (take $x_j=e_j$). To prove the reverse inequality we must show that, whenever $\{x_1,\ldots,x_i\}$ is an independent set in $H$, there is some $j\leq i$ such that $\rho^{\downarrow}(\phi^{\downarrow}(x_j))-\rho_{\uparrow}(\phi_{\uparrow}(x_j))\leq \rho^{\downarrow}(\phi^{\downarrow}(e_i))-\rho_{\uparrow}(\phi_{\uparrow}(e_i))$.
To see this, note that since $\{x_1,\ldots,x_i\}$ is an independent set, it is not contained in the sum of $tH$ and the $\Lambda$-span of $\{e_1,\ldots,e_{i-1}\}$. Hence there are $j\leq i$ and $k\geq i$ such that, expressing $x_j$ in terms of our basis as $x_j=t+\sum_{\ell}\lambda_{\ell}e_{\ell}$ where $t\in tH$, the coefficient $\lambda_k\in \Lambda$ is nonzero. We hence have, by the double-orthogonality of $\{e_1,\ldots,e_d\}$ and the fact that $\phi_{\uparrow}$ and $\phi^{\downarrow}$ vanish on $tH$, \begin{align*} \rho^{\downarrow}(\phi^{\downarrow}(x_j))-\rho_{\uparrow}(\phi_{\uparrow}(x_j))& \leq \left(\rho^{\downarrow}(\phi^{\downarrow}(e_k))-\nu^{\downarrow}(\lambda_k)\right)-\left(\rho_{\uparrow}(\phi_{\uparrow}(e_k))-\nu_{\uparrow}(\lambda_k)\right)
\\ &\leq \left(\rho^{\downarrow}(\phi^{\downarrow}(e_i))-\rho_{\uparrow}(\phi_{\uparrow}(e_i))\right)-\left(\nu^{\downarrow}(\lambda_k)-\nu_{\uparrow}(\lambda_k)\right)
\\ &\leq \left(\rho^{\downarrow}(\phi^{\downarrow}(e_i))-\rho_{\uparrow}(\phi_{\uparrow}(e_i))\right),\end{align*}
where the second inequality uses that $k\geq i$ and the last inequality uses that, as is clear from (\ref{nuformula}), $\nu^{\downarrow}(\lambda)-\nu_{\uparrow}(\lambda)\geq 0$ for all nonzero $\lambda\in\Lambda$.
\end{proof}
It follows in particular that the multiset $\{\rho^{\downarrow}(\phi^{\downarrow}(e_i))-\rho_{\uparrow}(\phi_{\uparrow}(e_i))\}$ is the same for any choice of doubly-orthogonal basis. Note that for this to hold it is essential that, in Definition \ref{dodfn}, we require that $\{e_1,\ldots,e_d\}$ project to a \textbf{basis} for the module $\frac{H}{tH}$ and not merely a maximal independent set (assuming that $\Gamma\neq \{0\}$ so that there is a distinction between these notions). Indeed, if we start with a doubly-orthogonal basis $\{e_1,\ldots,e_d\}$ and then multiply, say, $e_1$ by a nonzero, non-invertible element $\lambda\in \Lambda$, the new set $\{\lambda e_1,e_2,\ldots,e_d\}$ would still have the property that its images under $\phi_{\uparrow}$ and $\phi^{\downarrow}$ are orthogonal bases of $V_{\uparrow}$ and $V^{\downarrow}$ respectively, but since $\nu^{\downarrow}(\lambda)-\nu_{\uparrow}(\lambda)$ is nonzero for non-invertible (\emph{i.e.}, non-monomial) elements of $\Lambda$ this operation would change the value of $\rho^{\downarrow}(\phi^{\downarrow}(e_1))-\rho_{\uparrow}(\phi_{\uparrow}(e_1))$.
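Concretely, with $\Gamma=\mathbb{Z}$ and $\lambda=1+T$, one has $\nu_{\uparrow}(\lambda)=0$ and $\nu^{\downarrow}(\lambda)=1$, so replacing $e_1$ by $(1+T)e_1$ decreases $\rho^{\downarrow}(\phi^{\downarrow}(e_1))-\rho_{\uparrow}(\phi_{\uparrow}(e_1))$ by $\nu^{\downarrow}(\lambda)-\nu_{\uparrow}(\lambda)=1$.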
The absolute values of the $\rho^{\downarrow}(\phi^{\downarrow}(e_i))-\rho_{\uparrow}(\phi_{\uparrow}(e_i))$ for a doubly-orthogonal basis $\{e_i\}$ are meant to correspond to the lengths of bars in a barcode; it is then natural to expect that the endpoints of these bars would be $ \rho_{\uparrow}(\phi_{\uparrow}(e_i))$ and $\rho^{\downarrow}(\phi^{\downarrow}(e_i))$. As in \cite{UZ}, in the presence of the deck transformation group $\Gamma$ we would expect these bars to be well-defined only up to translation by the group $\Gamma$; this corresponds to the fact that multiplying each $e_i$ by the invertible element $T^{g_i}$ of $\Lambda$ for some $g_i\in \Gamma$ preserves the double-orthogonality of the basis. Modulo this ambiguity, we will show in Proposition \ref{barinvt} that different choices of doubly-orthogonal bases yield the same collection of ``bars.''
To set up some notation, given a filtered matched pair $\mathcal{P}$ as above, define functions $\ell_{\uparrow}\colon\thinspace \frac{H}{tH}\to\mathbb{R}\cup\{-\infty\}$ and $\ell^{\downarrow}\colon\thinspace \frac{H}{tH}\to \mathbb{R}\cup\{\infty\}$ by, for a coset $[x]$ in $\frac{H}{tH}$ of $x\in H$, \[ \ell_{\uparrow}([x])=\rho_{\uparrow}(\phi_{\uparrow}(x)),\qquad \ell^{\downarrow}([x])=\rho^{\downarrow}(\phi^{\downarrow}(x)).\] Also, if $a\in \mathbb{R}$, put \begin{align}\label{hupdown} H_{\uparrow}^{\leq a}&=\left\{\left.[x]\in \frac{H}{tH}\right|\ell_{\uparrow}([x])\leq a\right\},\qquad H_{\uparrow}^{< a}=\left\{\left.[x]\in \frac{H}{tH}\right|\ell_{\uparrow}([x])< a\right\},\\ H^{\downarrow}_{\geq a} &=\left\{\left.[x]\in \frac{H}{tH}\right|\ell^{\downarrow}([x])\geq a\right\}, \qquad H^{\downarrow}_{> a} =\left\{\left.[x]\in \frac{H}{tH}\right|\ell^{\downarrow}([x])> a\right\}.\nonumber\end{align} It follows from Remark \ref{normconds} that each of $H_{\uparrow}^{\leq a},H_{\uparrow}^{<a},H^{\downarrow}_{\geq a},H^{\downarrow}_{>a}$ is a $\kappa$-vector subspace of $\frac{H}{tH}$.
\begin{prop}\label{barinvt}
With notation as above, if $\{e_1,\ldots,e_d\}\subset H$ is a doubly-orthogonal basis for the filtered matched pair $\mathcal{P}$ and if $a,b\in \mathbb{R}$, then \begin{align}\label{invteqn} &\#\{i|(\exists g\in \Gamma)(\rho_{\uparrow}(\phi_{\uparrow}(e_i))=a+g\mbox{ and }\rho^{\downarrow}(\phi^{\downarrow}(e_i))=b+g)\} \\ &\qquad = \dim_{\kappa}\left(\frac{H_{\uparrow}^{\leq a}\cap H^{\downarrow}_{\geq b}}{(H_{\uparrow}^{<a}\cap H^{\downarrow}_{\geq b})+(H_{\uparrow}^{\leq a}\cap H^{\downarrow}_{>b})}\right).\nonumber\end{align}
\end{prop}
\begin{proof}
An element $z\in \frac{H}{tH}$ belongs to $H_{\uparrow}^{\leq a}\cap H^{\downarrow}_{\geq b}$ if and only if its coordinate expression $z=\sum_i\lambda_i[e_i]$ in terms of the basis $[e_1],\ldots,[e_d]$ has, for all $i$, both $\rho_{\uparrow}(\phi_{\uparrow}(e_i))-\nu_{\uparrow}(\lambda_i)\leq a$ and $\rho^{\downarrow}(\phi^{\downarrow}(e_i))-\nu^{\downarrow}(\lambda_i)\geq b$, \emph{i.e.} both $\nu_{\uparrow}(\lambda_i)\geq \rho_{\uparrow}(\phi_{\uparrow}(e_i))-a$ and $\nu^{\downarrow}(\lambda_i)\leq \rho^{\downarrow}(\phi^{\downarrow}(e_i))-b$. Expressing the coefficients $\lambda_i\in \Lambda$ as finite sums $\lambda_i=\sum_{g\in \Gamma}k_{g,i}T^{g}$ where $k_{g,i}\in \kappa$, this shows that $z=\sum_{i=1}^{d}\sum_{g\in\Gamma} k_{g,i}T^g[e_i]$ belongs to $H_{\uparrow}^{\leq a}\cap H^{\downarrow}_{\geq b}$ iff $k_{g,i}$ vanishes for all $g,i$ other than those for which $g$ lies in the interval $[\rho_{\uparrow}(\phi_{\uparrow}(e_i))-a,\rho^{\downarrow}(\phi^{\downarrow}(e_i))-b]$.
Similarly, $z=\sum_{i,g} k_{g,i}T^g[e_i]$ belongs to $H_{\uparrow}^{<a}\cap H^{\downarrow}_{\geq b}$ iff $k_{g,i}=0$ whenever $g\notin (\rho_{\uparrow}(\phi_{\uparrow}(e_i))-a,\rho^{\downarrow}(\phi^{\downarrow}(e_i))-b]$, and belongs to $H_{\uparrow}^{\leq a}\cap H^{\downarrow}_{>b}$ iff $k_{g,i}=0$ whenever $g\notin [\rho_{\uparrow}(\phi_{\uparrow}(e_i))-a,\rho^{\downarrow}(\phi^{\downarrow}(e_i))-b)$. So a vector space complement to $(H_{\uparrow}^{<a}\cap H^{\downarrow}_{\geq b})+(H_{\uparrow}^{\leq a}\cap H^{\downarrow}_{>b})$ in $H_{\uparrow}^{\leq a}\cap H^{\downarrow}_{\geq b}$ is given by the space of all classes $\sum_{i,g} k_{g,i}T^g[e_i]$ having the property that the only $k_{g,i}$ which might be nonzero are those for which $g$ belongs to $[\rho_{\uparrow}(\phi_{\uparrow}(e_i))-a,\rho^{\downarrow}(\phi^{\downarrow}(e_i))-b]$ but belongs to neither $ (\rho_{\uparrow}(\phi_{\uparrow}(e_i))-a,\rho^{\downarrow}(\phi^{\downarrow}(e_i))-b]$ nor $[\rho_{\uparrow}(\phi_{\uparrow}(e_i))-a,\rho^{\downarrow}(\phi^{\downarrow}(e_i))-b)$. In other words, $k_{g,i}=0$ unless $g=\rho_{\uparrow}(\phi_{\uparrow}(e_i))-a=\rho^{\downarrow}(\phi^{\downarrow}(e_i))-b$. So if $I_0$ is the subset of $\{1,\ldots,d\}$ appearing on the left in (\ref{invteqn}), then $\frac{H_{\uparrow}^{\leq a}\cap H^{\downarrow}_{\geq b}}{(H_{\uparrow}^{<a}\cap H^{\downarrow}_{\geq b})+(H_{\uparrow}^{\leq a}\cap H^{\downarrow}_{>b})}$ is isomorphic as a $\kappa$-vector space to the $\kappa$-span of $\{T^{\rho_{\uparrow}(\phi_{\uparrow}(e_i))-a}[e_i]|i\in I_0\}$.
\end{proof}
We accordingly make the following definition:
\begin{dfn}\label{hbsdef}
Suppose that $\mathcal{P}$ is a filtered matched pair which admits a doubly-orthogonal basis. The \textbf{basis spectrum} $\Sigma(\mathcal{P})$ of $\mathcal{P}$ is the sub-multiset of $(\mathbb{R}/\Gamma)\times \mathbb{R}$ given by $\left\{\left([\rho_{\uparrow}(\phi_{\uparrow}(e_i))],\rho^{\downarrow}(\phi^{\downarrow}(e_i))-\rho_{\uparrow}(\phi_{\uparrow}(e_i))\right)|i\in\{1,\ldots,d\}\right\}$ for one and hence (by Proposition \ref{barinvt}) every doubly-orthogonal basis $\{e_1,\ldots,e_d\}$ of $\mathcal{P}$.
\end{dfn}
Thus Proposition \ref{barinvt} shows that, for $([a],\ell)\in (\mathbb{R}/\Gamma)\times \mathbb{R}$ with $[a]$ denoting the coset in $\mathbb{R}/\Gamma$ of the real number $a$, the multiplicity of $([a],\ell)$ in $\Sigma(\mathcal{P})$ equals $\dim_{\kappa}\left(\frac{H_{\uparrow}^{\leq a}\cap H^{\downarrow}_{\geq b}}{(H_{\uparrow}^{<a}\cap H^{\downarrow}_{\geq b})+(H_{\uparrow}^{\leq a}\cap H^{\downarrow}_{>b})}\right)$, where $b=a+\ell$. This characterization is similar to the definition of the quantity denoted $\delta_r^f(a,b)$ in \cite[Section 3]{B1}.
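To illustrate the formula in a toy example (all filtration values here are made up for illustration and are not drawn from any of the constructions above), take $\Gamma=\mathbb{Z}$, let $H=\Lambda$ with basis element $e_1$, and suppose $\rho_{\uparrow}(\phi_{\uparrow}e_1)=0$ and $\rho^{\downarrow}(\phi^{\downarrow}e_1)=\frac{1}{2}$, so that $\Sigma(\mathcal{P})=\{([0],\frac{1}{2})\}$. For a class $z=\lambda[e_1]$ with $0\neq\lambda\in\Lambda$ we have $\ell_{\uparrow}(z)=-\nu_{\uparrow}(\lambda)$ and $\ell^{\downarrow}(z)=\frac{1}{2}-\nu^{\downarrow}(\lambda)$, so $z\in H_{\uparrow}^{\leq a}\cap H^{\downarrow}_{\geq b}$ if and only if every exponent of $\lambda$ lies in $[-a,\frac{1}{2}-b]$. Taking $(a,b)=(0,\frac{1}{2})$, the numerator in the formula above requires exponents in $[0,0]$ and so equals $\kappa[e_1]$, while both subspaces in the denominator require exponents in an empty half-open interval and so vanish; the quotient has dimension $1$, the multiplicity of $([0],\frac{1}{2})$. Taking instead $(a,b)=(0,0)$, both the numerator and the subspace $H_{\uparrow}^{\leq 0}\cap H^{\downarrow}_{>0}$ consist of classes whose exponents lie in $\{0\}$ (the only integer in $[0,\frac{1}{2}]$ or in $[0,\frac{1}{2})$), so the quotient vanishes, consistent with $([0],0)\notin\Sigma(\mathcal{P})$.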
As was mentioned in Section \ref{algintro}, in the motivating topological example an element $([a],\ell)$ of $\Sigma(\mathcal{P})$ corresponds to a bar in the interlevel persistence barcode of form $[a,a+\ell]$ if $\ell\geq 0$ and $(a+\ell,a)$ if $\ell<0$ (with homological degree shifted down by $1$ in the latter case). When $\Gamma$ is nontrivial, as is reflected in Proposition \ref{barinvt}, bars can be shifted by elements of $\Gamma$; as in \cite{UZ} our notation accounts for this ambiguity by having the first entry of an element of $\Sigma(\mathcal{P})$ only be defined modulo $\Gamma$ while the second (representing the difference of the endpoints) is a real number. (A difference with \cite{UZ} is that here the second entry is allowed to be negative, reflecting that these ``homologically essential'' bars correspond to invariant topological features and so do not disappear when their ``lengths'' go to zero; rather, they change into different types of bars.)
\subsection{Stability}\label{stabsec}
We now discuss the algebraic counterpart to the notion that a $C^0$-small change in a function should lead to a correspondingly small change in its interlevel barcode. The $\Lambda$-module $H$ in the definition of a filtered matched pair is meant to correspond to (a graded piece of) the homology of the cover on which our function is defined, and thus does not change with the function. Since the vector spaces $V_{\uparrow}$ and $V^{\downarrow}$ are isomorphic to $\Lambda_{\uparrow}\otimes_{\Lambda}H$ and $\Lambda^{\downarrow}\otimes_{\Lambda}H$, respectively, their vector space isomorphism types do not change either, but the filtration functions $\rho_{\uparrow}$ and $\rho^{\downarrow}$ will be sensitive to the function.
\begin{dfn}\label{tmorphdfn}
Let $t\geq 0$ and let
\[
\xymatrix{ & & (V^{\downarrow},\rho^{\downarrow}) & & & & (\hat{V}^{\downarrow},\hat{\rho}^{\downarrow}) \\ \mathcal{P} \ar@{}[r]^(.35){}="a"^(.65){}="b" \ar@{=} "a";"b" & H\ar[ru]^{\phi^{\downarrow}}\ar[rd]_{\phi_{\uparrow}} & & \mbox{and} & \hat{\mathcal{P}} \ar@{}[r]^(.35){}="a"^(.65){}="b" \ar@{=} "a";"b" & H\ar[ru]^{\psi^{\downarrow}}\ar[rd]_{\psi_{\uparrow}} \\ & & (V_{\uparrow},\rho_{\uparrow}) & & & & (\hat{V}_{\uparrow},\hat{\rho}_{\uparrow}) }
\]
be two filtered matched pairs with the same $\Lambda$-module $H$. A \textbf{$t$-morphism} from $\mathcal{P}$ to $\hat{\mathcal{P}}$ consists of a $\Lambda_{\uparrow}$-vector space isomorphism $\alpha_{\uparrow}\colon\thinspace V_{\uparrow}\to \hat{V}_{\uparrow}$ and a $\Lambda^{\downarrow}$-vector space isomorphism $\alpha^{\downarrow}\colon\thinspace V^{\downarrow}\to\hat{V}^{\downarrow}$ such that:
\begin{itemize}\item $\psi_{\uparrow}=\alpha_{\uparrow}\circ\phi_{\uparrow}$;
\item $\psi^{\downarrow}=\alpha^{\downarrow}\circ\phi^{\downarrow}$; and \item $|\rho_{\uparrow}(\phi_{\uparrow}(x))-\hat{\rho}_{\uparrow}(\psi_{\uparrow}(x))|\leq t$ and $|\rho^{\downarrow}(\phi^{\downarrow}(x))-\hat{\rho}^{\downarrow}(\psi^{\downarrow}(x))|\leq t$ for all $x\in H\setminus\{0\}$.\end{itemize}
\end{dfn}
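For a minimal illustration of this definition, with made-up data: let $H=\Lambda$, let $\phi_{\uparrow}$ and $\phi^{\downarrow}$ be the inclusions of $\Lambda$ into $V_{\uparrow}=\Lambda_{\uparrow}$ and $V^{\downarrow}=\Lambda^{\downarrow}$ with $\rho_{\uparrow}=-\nu_{\uparrow}$ and $\rho^{\downarrow}=-\nu^{\downarrow}$, and let $\hat{\mathcal{P}}$ consist of the same maps but with $\hat{\rho}_{\uparrow}=\rho_{\uparrow}+t$ and $\hat{\rho}^{\downarrow}=\rho^{\downarrow}$. Then the identity maps $\alpha_{\uparrow},\alpha^{\downarrow}$ constitute a $t$-morphism from $\mathcal{P}$ to $\hat{\mathcal{P}}$: the first two conditions hold trivially, while $|\rho_{\uparrow}(\phi_{\uparrow}(x))-\hat{\rho}_{\uparrow}(\psi_{\uparrow}(x))|=t$ and $|\rho^{\downarrow}(\phi^{\downarrow}(x))-\hat{\rho}^{\downarrow}(\psi^{\downarrow}(x))|=0$ for all $x\in H\setminus\{0\}$; no smaller value of $t$ works.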
Let us first show stability for the gaps $G_i$ from Definition \ref{gapdfn}.
\begin{prop}\label{gapcts}
Suppose that there exists a $t$-morphism from $\mathcal{P}$ to $\hat{\mathcal{P}}$. Then $|G_i(\mathcal{P})-G_i(\hat{\mathcal{P}})|\leq 2t$ for all $i$.
\end{prop}
\begin{proof} This is almost immediate from the definitions. If, as in Definition \ref{gapdfn}, $\{x_1,\ldots,x_i\}\subset H$ is an independent set and $\gamma$ is a real number such that $\rho^{\downarrow}(\phi^{\downarrow}(x_j))-\rho_{\uparrow}(\phi_{\uparrow}(x_j))\geq \gamma$ for all $j\in \{1,\ldots,i\}$, then by Definition \ref{tmorphdfn} we have $\hat{\rho}^{\downarrow}(\psi^{\downarrow}(x_j))\geq \rho^{\downarrow}(\phi^{\downarrow}(x_j))-t$ and $\hat{\rho}_{\uparrow}(\psi_{\uparrow}(x_j))\leq \rho_{\uparrow}(\phi_{\uparrow}(x_j))+t$. Thus $\hat{\rho}^{\downarrow}(\psi^{\downarrow}(x_j))-\hat{\rho}_{\uparrow}(\psi_{\uparrow}(x_j))\geq \gamma-2t$. So taking the $\sup$ over such $\gamma$ shows that $G_i(\hat{\mathcal{P}})\geq G_i(\mathcal{P})-2t$. Combining this with the same statement with the roles of $\mathcal{P}$ and $\hat{\mathcal{P}}$ reversed proves the result.
\end{proof}
In the case that $\mathcal{P}$ and $\hat{\mathcal{P}}$ admit doubly-orthogonal bases, we now consider improving Proposition \ref{gapcts} to a statement about the basis spectra $\Sigma(\mathcal{P}),\Sigma(\hat{\mathcal{P}})$. By Proposition \ref{gapchar}, $\Sigma(\mathcal{P})$ takes the form $\{([a_1],G_1(\mathcal{P})),\ldots,([a_d],G_d(\mathcal{P}))\}$ for suitable $[a_i]\in \mathbb{R}/\Gamma$, and likewise for $\Sigma(\hat{\mathcal{P}})$. Proposition \ref{gapcts} thus bounds the difference between the second coordinates of these. If $\Gamma<\mathbb{R}$ is dense, there is no meaningful nontrivial distance between the first coordinates (the natural topology on $\mathbb{R}/\Gamma$ is the trivial one), so even laying aside the point that Theorem \ref{basistheorem} is proven only for the discrete case, in the dense case Proposition \ref{gapcts} should be regarded as providing all the stability information that we could ask for. Accordingly, Theorem \ref{stab} below will assume that $\Gamma$ is discrete.
To compare the basis spectra let us first introduce the following definition.
\begin{dfn}
Let $\mathcal{S}$ and $\hat{\mathcal{S}}$ be two multisets whose elements belong to $(\mathbb{R}/\Gamma)\times \mathbb{R}$, and let $t\geq 0$. A \textbf{strong $t$-matching} between $\mathcal{S}$ and $\hat{\mathcal{S}}$ is a bijection $\sigma\colon\thinspace \mathcal{S}\to\hat{\mathcal{S}}$ such that, for all $([a],\ell)\in \mathcal{S}$, if we write $\sigma([a],\ell)=([\hat{a}],\hat{\ell})$, then there is $g\in \Gamma$ such that both $|g+\hat{a}-a|\leq t$ and $|(g+\hat{a}+\hat{\ell})-(a+\ell)|\leq t$.
\end{dfn}
(Our use of the word ``strong'' is meant to distinguish this notion from the notion of matching that arises in sublevel persistence, which allows for deleting elements with $\ell\leq 2t$.)
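As a small made-up example: with $\Gamma=\mathbb{Z}$, $\mathcal{S}=\{([0.1],2)\}$, and $\hat{\mathcal{S}}=\{([0.9],2.1)\}$, the unique bijection $\sigma$ is a strong $0.2$-matching: taking $g=-1$ gives $|g+0.9-0.1|=0.2$ and $|(g+0.9+2.1)-(0.1+2)|=0.1$. Without the freedom to shift by elements of $\Gamma$, the first coordinates of representatives would have had to differ by $0.8$.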
\begin{theorem}\label{stab}
Suppose that $\Gamma$ is discrete and that there is a $t$-morphism from $\mathcal{P}$ to $\hat{\mathcal{P}}$, where $t\geq 0$. Then there is a strong $t$-matching between the basis spectra $\Sigma(\mathcal{P})$ and $\Sigma(\hat{\mathcal{P}})$.\end{theorem}
\begin{proof}
The overall outline of the proof bears some resemblance to \cite[Section 4]{CDGO} in that we use interpolation to reduce to a local result which is then proven by considering certain rectangle measures, though the details are different.
By replacing $(\hat{V}_{\uparrow},\hat{\rho}_{\uparrow})$ and $(\hat{V}^{\downarrow},\hat{\rho}^{\downarrow})$ by $(V_{\uparrow},\hat{\rho}_{\uparrow}\circ\alpha_{\uparrow})$ and $(V^{\downarrow},\hat{\rho}^{\downarrow}\circ\alpha^{\downarrow})$ we may as well assume that $\hat{V}_{\uparrow}=V_{\uparrow}$ and $\hat{V}^{\downarrow}=V^{\downarrow}$, and that $\alpha_{\uparrow}$ and $\alpha^{\downarrow}$ are the respective identity maps. So we have functions $\rho_{\uparrow},\hat{\rho}_{\uparrow}\colon\thinspace V_{\uparrow}\to\mathbb{R}\cup\{-\infty\}$ such that $(V_{\uparrow},\rho_{\uparrow})$ and $(V_{\uparrow},\hat{\rho}_{\uparrow})$ are each orthogonalizable $\Lambda_{\uparrow}$-spaces, and likewise we have functions $\rho^{\downarrow},\hat{\rho}^{\downarrow}\colon\thinspace V^{\downarrow}\to\mathbb{R}\cup\{\infty\}$ such that $(V^{\downarrow},\rho^{\downarrow})$ and $(V^{\downarrow},\hat{\rho}^{\downarrow})$ are orthogonalizable $\Lambda^{\downarrow}$-spaces. Moreover $\max|(\rho_{\uparrow}-\hat{\rho}_{\uparrow})\circ\phi_{\uparrow}|\leq t$ and $\max|(\rho^{\downarrow}-\hat{\rho}^{\downarrow})\circ\phi^{\downarrow}|\leq t$.
For each $s\in [0,1]$ we define interpolated filtration functions as follows. Applying \cite[Theorem 3.4]{UZ} to the identity map yields a basis $\{e_1,\ldots,e_d\}$ for $V_{\uparrow}$ that is orthogonal both for $\rho_{\uparrow}$ and for $\hat{\rho}_{\uparrow}$; let $\rho_{\uparrow}^{s}$ be the filtration function for which $\{e_1,\ldots,e_d\}$ is orthogonal and \[ \rho_{\uparrow}^{s}(e_i)=(1-s)\rho_{\uparrow}(e_i)+s\hat{\rho}_{\uparrow}(e_i).\] Then each $(V_{\uparrow},\rho_{\uparrow}^{s})$ is an orthogonalizable $\Lambda_{\uparrow}$-space, and orthogonality gives $\rho_{\uparrow}^{0}=\rho_{\uparrow}$ and $\rho_{\uparrow}^{1}=\hat{\rho}_{\uparrow}$. Similarly (see Remark \ref{switch} for switching between $\Lambda_{\uparrow}$ and $\Lambda^{\downarrow}$), a basis for $V^{\downarrow}$ that is orthogonal for both $\rho^{\downarrow}$ and $\hat{\rho}^{\downarrow}$ yields orthogonalizable $\Lambda^{\downarrow}$-spaces $(V^{\downarrow},\rho^{\downarrow}_{s})$ interpolating between $(V^{\downarrow},\rho^{\downarrow})$ and $(V^{\downarrow},\hat{\rho}^{\downarrow})$. Thus for all $s\in [0,1]$, $\phi_{\uparrow}\colon\thinspace H\to (V_{\uparrow},\rho_{\uparrow}^{s})$ and $\phi^{\downarrow}\colon\thinspace H\to (V^{\downarrow},\rho^{\downarrow}_{s})$ together provide a filtered matched pair which we denote by $\mathcal{P}_s$. Note that the bound $\max|(\rho_{\uparrow}-\hat{\rho}_{\uparrow})\circ\phi_{\uparrow}|\leq t$ forces $|\rho_{\uparrow}-\hat{\rho}_{\uparrow}|\leq t$ on all of $V_{\uparrow}\setminus\{0\}$: since $1_{\Lambda_{\uparrow}}\otimes\phi_{\uparrow}$ is an isomorphism and every element of $\Lambda_{\uparrow}$ differs from an element of $\Lambda$ by a term with arbitrarily large $\nu_{\uparrow}$, every $v\in V_{\uparrow}\setminus\{0\}$ agrees with some element of $\phi_{\uparrow}(H)$ up to an error of arbitrarily low $\rho_{\uparrow}$- and $\hat{\rho}_{\uparrow}$-level, and such errors change neither $\rho_{\uparrow}(v)$ nor $\hat{\rho}_{\uparrow}(v)$. In particular $|\rho_{\uparrow}(e_i)-\hat{\rho}_{\uparrow}(e_i)|\leq t$ for each $i$ (and similarly on the $\Lambda^{\downarrow}$ side), so if $s,s'\in [0,1]$ then the identity maps provide a $(t|s-s'|)$-morphism between $\mathcal{P}_{s}$ and $\mathcal{P}_{s'}$.
It consequently suffices to prove the following ``local'' version of Theorem \ref{stab}:
\begin{lemma}\label{localstab}
Suppose that $\Gamma$ is discrete and that $\mathcal{P}$ is a filtered matched pair. Then there is $\tau(\mathcal{P})>0$ such that, if $\hat{\mathcal{P}}$ is any other filtered matched pair such that there exists a $t$-morphism from $\mathcal{P}$ to $\hat{\mathcal{P}}$ with $t<\tau(\mathcal{P})$, then there is a strong $t$-matching between $\Sigma(\mathcal{P})$ and $\Sigma(\hat{\mathcal{P}})$.
\end{lemma}
Indeed, assuming Lemma \ref{localstab}, for all $s\in [0,1]$ there will be $\delta_s>0$ such that if $s'\in [0,1]$ obeys $|s-s'|<\delta_s$ then there is a strong $(t|s-s'|)$-matching between $\Sigma(\mathcal{P}_s)$ and $\Sigma(\mathcal{P}_{s'})$. Using a finite cover of $[0,1]$ by intervals of form $(s-\delta_s,s+\delta_s)$, one can then find $0=s_0<s_1<\cdots<s_{N-1}<s_N=1$ such that there is a strong $(t(s_{i+1}-s_i))$-matching between $\Sigma(\mathcal{P}_{s_i})$ and $\Sigma(\mathcal{P}_{s_{i+1}})$ for each $i\in\{0,\ldots,N-1\}$. Since the bounds in the definition of a strong matching are additive under composition, the composition of these matchings is then the desired strong $t$-matching between $\Sigma(\mathcal{P}_0)$ and $\Sigma(\mathcal{P}_1)$.
Accordingly, to complete the proof of Theorem \ref{stab} we now prove Lemma \ref{localstab}. Since $\Gamma$ is a discrete subgroup of $\mathbb{R}$, either $\Gamma=\{0\}$ or $\Gamma$ is infinite cyclic. In the former case set $\lambda_0=\infty$, and in the latter case set $\lambda_0$ equal to the positive generator of $\Gamma$. Let $\{([a_1],\ell_1),\ldots,([a_d],\ell_d)\}$ denote the basis spectrum of $\mathcal{P}$, and for the representatives $a_i$ of the $\Gamma$-cosets $[a_i]$ take the unique element of the coset that belongs to $\left(-\frac{\lambda_0}{2},\frac{\lambda_0}{2}\right]$. Choose $\tau(\mathcal{P})>0$ to be such that both $\tau(\mathcal{P})<\frac{\lambda_0}{2}$ and, for all $i\in \{1,\ldots,d\}$ and $j\neq i$, either $([a_j],\ell_j)=([a_i],\ell_i)$ or else the $\ell^{\infty}$-distance in $\mathbb{R}^2$ from $(a_i,a_i+\ell_i)$ to the $\Gamma$-orbit $\{(a_j+g,a_j+\ell_j+g)|g\in \Gamma\}$ is greater than $2\tau(\mathcal{P})$.
For $a,b\in \mathbb{R}$ let $H_{\uparrow}^{\leq a},H_{\uparrow}^{<a},H^{\downarrow}_{\geq b},H^{\downarrow}_{>b}$ be the $\kappa$-vector subspaces of $\frac{H}{tH}$ defined in (\ref{hupdown}). If $a'<a$ and $b'>b$, and if $\max\{|a'-a|,|b'-b|\}<\lambda_0$, then a similar analysis to that in the proof of Proposition \ref{barinvt} shows that the $\kappa$-vector space \[ V_{[a',a]\times [b,b']}:=\frac{H_{\uparrow}^{\leq a}\cap H^{\downarrow}_{\geq b}}{(H_{\uparrow}^{< a'}\cap H^{\downarrow}_{\geq b})+(H_{\uparrow}^{\leq a}\cap H^{\downarrow}_{>b'})}\] has dimension equal to the number of $i$ (counting multiplicity) for which the set $O_i:=\{(a_i+g,a_i+\ell_i+g)|g\in \Gamma\}\subset \mathbb{R}^2$ contains an element of the rectangle $[a',a]\times [b,b']$.\footnote{Such an element is necessarily unique by the assumption that $\max\{|a'-a|,|b'-b|\}<\lambda_0$; without this assumption the correct statement would have been that $\dim_{\kappa}V_{[a',a]\times [b,b']}$ is the sum of the cardinalities of the sets $O_i\cap ([a',a]\times [b,b'])$.} If $a'<x'\leq a<x$ and $b'>y'\geq b>y$ then the inclusions $H_{\uparrow}^{\leq a}\subset H_{\uparrow}^{\leq x}$, $H_{\uparrow}^{<a'}\subset H_{\uparrow}^{<x'}$, $H^{\downarrow}_{\geq b}\subset H^{\downarrow}_{\geq y}$, and $H^{\downarrow}_{>b'}\subset H^{\downarrow}_{>y'}$ induce a map $V_{[a',a]\times [b,b']}\to V_{[x',x]\times[y,y']}$, and the rank of this map equals the number of $i$ for which some element of $O_i$ lies in both rectangles $[a',a]\times[b,b']$ and $[x',x]\times [y,y']$ (\emph{i.e.}, lies in the intersection $[x',a]\times[b,y']$).
In particular, the inclusion-induced map $V_{[a',a]\times [b,b']}\to V_{[x',x]\times[y,y']}$ is an isomorphism provided that every element of $\cup_iO_i$ that lies in either one of $[a',a]\times[b,b']$ or $[x',x]\times [y,y']$ in fact lies in both of them. For example, if $([a_i],\ell_i)\in \Sigma(\mathcal{P})$ we can apply this with $x'=a=a_i$ and $b=y'=a_i+\ell_i$ to see that, if $t$ is smaller than the number $\tau(\mathcal{P})$ defined above, then the inclusion-induced map \[ V_{[a_i-2t,a_i]\times [a_i+\ell_i,a_i+\ell_i+2t]}\to V_{[a_i,a_i+2t]\times [a_{i}+\ell_i-2t,a_i+\ell_i]} \] is an isomorphism between $\kappa$-vector spaces of dimension equal to the multiplicity of $([a_i],\ell_i)$ in $\Sigma(\mathcal{P})$.
Now assuming that there is a $t$-morphism from $\mathcal{P}$ to $\hat{\mathcal{P}}$ with $t<\tau(\mathcal{P})$, for closed intervals $I,J\subset \mathbb{R}$ let $\hat{V}_{I\times J}$ be the vector space analogous to $V_{I\times J}$ but constructed from $\hat{\mathcal{P}}$ instead of from $\mathcal{P}$, and similarly define $\hat{H}_{\uparrow}^{\leq a},\hat{H}_{\uparrow}^{<a},\hat{H}^{\downarrow}_{\geq b},\hat{H}^{\downarrow}_{>b}$. Because $|\rho_{\uparrow}\circ\phi_{\uparrow}-\hat{\rho}_{\uparrow}\circ\psi_{\uparrow}|\leq t$ and $|\rho^{\downarrow}\circ\phi^{\downarrow}-\hat{\rho}^{\downarrow}\circ\psi^{\downarrow}|\leq t$, for all $a,b\in \mathbb{R}$ we have inclusions $H_{\uparrow}^{\leq a-t}\subset
\hat{H}_{\uparrow}^{\leq a}\subset H_{\uparrow}^{\leq a+t}$, $H_{\uparrow}^{< a-t}\subset
\hat{H}_{\uparrow}^{<a}\subset H_{\uparrow}^{<a+t}$, $H^{\downarrow}_{\geq b+t}\subset
\hat{H}^{\downarrow}_{\geq b}\subset H^{\downarrow}_{\geq b-t}$, and $H^{\downarrow}_{> b+t}\subset
\hat{H}^{\downarrow}_{>b}\subset H^{\downarrow}_{>b-t}$. So for any rectangle $[a',a]\times[b,b']$ we have a composition of maps induced by these inclusions: \[ V_{[a'-t,a-t]\times[b+t,b'+t]}\to \hat{V}_{[a',a]\times[b,b']}\to V_{[a'+t,a+t]\times[b-t,b'-t]}.\]
So if $([a_i],\ell_i)$ belongs to $\Sigma(\mathcal{P})$ with positive multiplicity $m_i$, the isomorphism of $m_i$-dimensional vector spaces $V_{[a_i-2t,a_i]\times [a_i+\ell_i,a_i+\ell_i+2t]}\to V_{[a_i,a_i+2t]\times [a_{i}+\ell_i-2t,a_i+\ell_i]}$ factors through the $\kappa$-vector space $\hat{V}_{[a_i-t,a_i+t]\times [a_i+\ell_i-t,a_i+\ell_i+t]}$, whence the latter vector space has dimension at least $m_i$. Temporarily denote $\dim_{\kappa}\hat{V}_{[a_i-t,a_i+t]\times [a_i+\ell_i-t,a_i+\ell_i+t]}$ as $\hat{d}_i$; let us now show that $\hat{d}_i=m_i$. Using again that $2t<\lambda_0$, the dimension $\hat{d}_i$ is equal to the sum of the multiplicities of all elements $([\hat{a}_j],\hat{\ell}_j)$ of $\Sigma(\hat{\mathcal{P}})$ such that $\hat{a}_j$ can be chosen within its $\Gamma$-coset to obey $(\hat{a}_j,\hat{a}_j+\hat{\ell}_j)\in [a_i-t,a_i+t]\times [a_i+\ell_i-t,a_i+\ell_i+t]$. Because $t<\tau(\mathcal{P})$, the orbits of the rectangles $[a_i-t,a_i+t]\times [a_i+\ell_i-t,a_i+\ell_i+t]$ under the diagonal action of $\Gamma$ are pairwise disjoint as $i$ varies, in view of which $\sum_i\hat{d}_i$ is no larger than the sum of the multiplicities of all elements of $\Sigma(\hat{\mathcal{P}})$. But since $\Sigma(\mathcal{P})$ and $\Sigma(\hat{\mathcal{P}})$ are each in one-to-one correspondence with a basis for the free rank-$d$ $\Lambda$-module $\frac{H}{tH}$, it follows that each of these multisets has the sum of the multiplicities of its elements equal to $d$. So we have shown on the one hand that $\sum_{i}\hat{d}_i\leq \sum_i m_i=d$, and on the other hand that, for each $i$, $\hat{d}_i\geq m_i$, whence indeed $\hat{d}_i=m_i$ for all $i$. Furthermore, \emph{every} one of the $d$ elements (counted with multiplicity) of $\Sigma(\hat{\mathcal{P}})$ is among those enumerated by the various $\hat{d}_i$, for otherwise it could not hold both that $\hat{d}_i=m_i$ and that $\sum_i m_i=d$.
It follows that there is a bijection between $\Sigma(\hat{\mathcal{P}})$ and $\Sigma(\mathcal{P})$ that sends $([\hat{a}],\hat{\ell})\in \Sigma(\hat{\mathcal{P}})$ to (one of the $m_i$ copies of) the unique $([a_i],\ell_i)$ for which there is $g\in \Gamma$ such that $(\hat{a}+g,\hat{a}+\hat{\ell}+g)\in [a_i-t,a_i+t]\times [a_i+\ell_i-t,a_i+\ell_i+t]$. This bijection is then a strong $t$-matching, completing the proof of Lemma \ref{localstab} and hence of Theorem \ref{stab}.
\end{proof}
\begin{remark}
Conversely, if there is a strong $t$-matching between $\Sigma(\mathcal{P})$ and $\Sigma(\hat{\mathcal{P}})$ then it is easy to construct a $t$-morphism between $\mathcal{P}$ and $\hat{\mathcal{P}}$, by taking $\alpha_{\uparrow}$ and $\alpha^{\downarrow}$ to intertwine the basis elements corresponding to the elements of $(\mathbb{R}/\Gamma)\times \mathbb{R}$ that correspond under the strong $t$-matching. Thus the above stability theorem implies an isometry theorem (on the space of filtered matched pairs with appropriate morphism and matching pseudometrics) like the familiar one from sublevel persistence (as in \cite{CDGO}).
\end{remark}
\begin{remark}
As in \cite[Remark 8.5]{UZ}, a ``two-parameter'' version of the notion of a $t$-morphism could be defined by declaring, for any real numbers $s,t$ with $s+t\geq 0$, an $(s,t)$-morphism
to be given by data as in Definition \ref{tmorphdfn} except with the last bullet point replaced by the conditions \[ \rho_{\uparrow}\circ\phi_{\uparrow}-s\leq \hat{\rho}_{\uparrow}\circ\psi_{\uparrow}\leq \rho_{\uparrow}\circ\phi_{\uparrow}+t,\qquad
\rho^{\downarrow}\circ\phi^{\downarrow}-s\leq \hat{\rho}^{\downarrow}\circ\psi^{\downarrow}\leq \rho^{\downarrow}\circ\phi^{\downarrow}+t.\] The existence of an $(s,t)$-morphism then implies that of an ``$(s,t)$-matching'' in the sense of a bijection between $\Sigma(\mathcal{P})$ and $\Sigma(\hat{\mathcal{P}})$ such that, for corresponding elements $([a],\ell)\in\Sigma(\mathcal{P})$ and $([\hat{a}],\hat{\ell})$ and suitable choices of $a,\hat{a}$ within their $\Gamma$-cosets, one has $a-s\leq \hat{a}\leq a+t$ and $a+\ell-s\leq \hat{a}+\hat{\ell}\leq a+\ell+t$. Indeed, this can be inferred formally from what we have done by considering the filtered matched pair $\tilde{\mathcal{P}}$ given by setting $\tilde{\rho}_{\uparrow}=\hat{\rho}_{\uparrow}+\frac{s-t}{2}$ and $\tilde{\rho}^{\downarrow}=\hat{\rho}^{\downarrow}+\frac{s-t}{2}$. There is then a $\frac{s+t}{2}$-morphism between $\mathcal{P}$ and $\tilde{\mathcal{P}}$ and hence (by Theorem \ref{stab}) a strong $\frac{s+t}{2}$-matching between $\Sigma(\mathcal{P})$ and $\Sigma(\tilde{\mathcal{P}})$, and then our claim follows by the obvious shift that relates $\Sigma(\tilde{\mathcal{P}})$ to $\Sigma(\hat{\mathcal{P}})$.
For example (set $s=0$), if $\Gamma=\{0\}$ then the basis spectra are monotone in the sense that if $\hat{\rho}_{\uparrow}\circ\psi_{\uparrow}\geq\rho_{\uparrow}\circ\phi_{\uparrow}$ and $\hat{\rho}^{\downarrow}\circ\psi^{\downarrow}\geq \rho^{\downarrow}\circ\phi^{\downarrow}$ then there is a bijection from $\Sigma(\mathcal{P})$ to $\Sigma(\hat{\mathcal{P}})$ that sends each $(a,\ell)$ to $(\hat{a},\hat{\ell})$ with both $\hat{a}\geq a$ and $\hat{a}+\hat{\ell}\geq a+\ell$.
\end{remark}
\begin{remark}\label{twostab}
Another modest generalization of the notion of a $t$-morphism would involve allowing $(V_{\uparrow},\rho_{\uparrow})$ and $(V^{\downarrow},\rho^{\downarrow})$ to vary independently of each other, so we would choose two nonnegative numbers $t_{\uparrow},t^{\downarrow}$ and require $|\rho_{\uparrow}(\phi_{\uparrow}(x))-\hat{\rho}_{\uparrow}(\psi_{\uparrow}(x))|\leq t_{\uparrow}$ and $|\rho^{\downarrow}(\phi^{\downarrow}(x))-\hat{\rho}^{\downarrow}(\psi^{\downarrow}(x))|\leq t^{\downarrow}$ for all $x\in H\setminus\{0\}$. Of course one could then apply Theorem \ref{stab} with $t=\max\{t_{\uparrow},t^{\downarrow}\}$, but if $t_{\uparrow}\neq t^{\downarrow}$ a slightly stronger statement holds. Namely, the $t$-matching provided by Theorem \ref{stab} can be taken to have the property that, for matched elements $([a_i],\ell_i),([\hat{a}_i],\hat{\ell}_i)$ and suitable representatives $a_i,\hat{a}_i$ of the $\Gamma$-cosets, we have $|\hat{a}_i-a_i|\leq t_{\uparrow}$ and $|(\hat{a}_i+\hat{\ell}_i)-(a_i+\ell_i)|\leq t^{\downarrow}$. If $t_{\uparrow},t^{\downarrow}$ are both nonzero this can be deduced formally from Theorem \ref{stab} by multiplying $\rho^{\downarrow},\hat{\rho}^{\downarrow}$ by $\frac{t_{\uparrow}}{t^{\downarrow}}$, applying Theorem \ref{stab} with $t=t_{\uparrow}$, and then rescaling again to return to the original $\rho^{\downarrow},\hat{\rho}^{\downarrow}$. If instead $t^{\downarrow}=0$ (say), one can observe that, given the discreteness of $\Gamma$, the existence of a matching of the desired form for all $0<t^{\downarrow}<t_{\uparrow}$ implies the same for $t^{\downarrow}=0$, since if $|(\hat{a}_i+\hat{\ell}_i)-(a_i+\ell_i)|$ is sufficiently small then it must be zero.
For example, one might take $V_{\uparrow}$ to arise from the sublevel persistence of some function which is allowed to vary, and take $V^{\downarrow}$ to arise from the superlevel persistence of a different function which is held fixed. The basis spectra $\Sigma(\mathcal{P}_s)$ of the resulting filtered matched pairs $\mathcal{P}_s$ would then vary continuously with the first function, with the values $a+\ell$ for $([a],\ell)\in \Sigma(\mathcal{P}_s)$ remaining fixed as $s$ varies.
\end{remark}
\subsection{Duality}\label{fmpdual}
Given a filtered matched pair $\mathcal{P}$ consisting of data \begin{equation}\label{pdisp}\xymatrix{ & (V^{\downarrow},\rho^{\downarrow}) \\ H\ar[ru]^{\phi^{\downarrow}}\ar[rd]_{\phi_{\uparrow}} & \\ & (V_{\uparrow},\rho_{\uparrow}) }\end{equation} satisfying the conditions in Definition \ref{fmpdfn}, we shall now define the \textbf{dual matched pair} ${}^{\vee}\!\mathcal{P}$ with corresponding data \begin{equation}\label{dualm} \xymatrix{ & ({}^{\vee}\!(V_{\uparrow}),{}^{\vee}\!\rho_{\uparrow}) \\ {}^{\vee}\!H\ar[ru]^{\delta^{\downarrow}(\phi_{\uparrow})}\ar[rd]_{\delta_{\uparrow}(\phi^{\downarrow})}& \\ & ({}^{\vee}\!(V^{\downarrow}),{}^{\vee}\!\rho^{\downarrow})}\end{equation} (The arrows are not misplaced---recall that if $W$ is a $\Lambda_{\uparrow}$-vector space then ${}^{\vee}\!W=\overline{\mathrm{Hom}_{\Lambda_{\uparrow}}(W,\Lambda_{\uparrow})}$ is a $\Lambda^{\downarrow}$-vector space, and vice versa.)
The construction proceeds in a fairly straightforward way. First, we need to define the functions ${}^{\vee}\!\rho_{\uparrow}\colon\thinspace {}^{\vee}\!(V_{\uparrow})\to \mathbb{R}\cup\{\infty\}$ and ${}^{\vee}\!\rho^{\downarrow}\colon\thinspace {}^{\vee}\!(V^{\downarrow})\to \mathbb{R}\cup\{-\infty\}$ in such a way that $({}^{\vee}\!(V_{\uparrow}),{}^{\vee}\!\rho_{\uparrow})$ is an orthogonalizable $\Lambda^{\downarrow}$-space and $({}^{\vee}\!(V^{\downarrow}),{}^{\vee}\!\rho^{\downarrow})$ is an orthogonalizable $\Lambda_{\uparrow}$-space. This was essentially done in \cite[Section 2.4]{UZ}, in which, for an orthogonalizable $\Lambda_{\uparrow}$-space $(V,\rho)$, a function $\rho^*\colon\thinspace \mathrm{Hom}_{\Lambda_{\uparrow}}(V,\Lambda_{\uparrow})\to \mathbb{R}\cup\{-\infty\}$ was defined by $\rho^*(\zeta)=\sup_{0\neq x\in V}(-\rho(x)-\nu_{\uparrow}(\zeta(x)))$. By \cite[Proposition 2.20]{UZ}, $(\mathrm{Hom}_{\Lambda_{\uparrow}}(V,\Lambda_{\uparrow}),\rho^*)$ is then an orthogonalizable $\Lambda_{\uparrow}$-space, and hence by Remark \ref{flipconj} $({}^{\vee}V,-\rho^*)$ is an orthogonalizable $\Lambda^{\downarrow}$-space.
Thus, for an orthogonalizable $\Lambda_{\uparrow}$-space $(V_{\uparrow},\rho_{\uparrow})$ we obtain an orthogonalizable $\Lambda^{\downarrow}$-space $({}^{\vee}\!(V_{\uparrow}),{}^{\vee}\!\rho_{\uparrow})$ by setting, for each $\zeta\in {}^{\vee}\!(V_{\uparrow})=\overline{\mathrm{Hom}_{\Lambda_{\uparrow}}(V_{\uparrow},\Lambda_{\uparrow})}$, \[ {}^{\vee}\!\rho_{\uparrow}(\zeta)=\inf_{0\neq x\in V_{\uparrow}}(\nu_{\uparrow}(\zeta(x))+\rho_{\uparrow}(x)).\] Symmetrically, for an orthogonalizable $\Lambda^{\downarrow}$-space $(V^{\downarrow},\rho^{\downarrow})$ we obtain an orthogonalizable $\Lambda_{\uparrow}$-space $({}^{\vee}(V^{\downarrow}),{}^{\vee}\!\rho^{\downarrow})$ by setting, for each $\zeta\in {}^{\vee}\!(V^{\downarrow})$, \[ {}^{\vee}\!\rho^{\downarrow}(\zeta)=\sup_{0\neq x\in V^{\downarrow}}(\nu^{\downarrow}(\zeta(x))+\rho^{\downarrow}(x)).\] This suffices to define the notation ${}^{\vee}\!\rho_{\uparrow}$, ${}^{\vee}\!\rho^{\downarrow}$ that appears in (\ref{dualm}).
\begin{remark}\label{dualbasisrem}
After adjusting signs for the effect of conjugation, \cite[Proposition 2.20]{UZ} shows that if $\{v_1,\ldots,v_d\}$ is an orthogonal basis for $V_{\uparrow}$ (resp. for $V^{\downarrow}$), then, with respect to ${}^{\vee}\!\rho_{\uparrow}$ or ${}^{\vee}\!\rho^{\downarrow}$, the dual basis $\{v_{1}^{*},\ldots,v_{d}^{*}\}$ is likewise an orthogonal basis for ${}^{\vee}\!(V_{\uparrow})$ (resp. for ${}^{\vee}\!(V^{\downarrow})$), and moreover that ${}^{\vee}\!\rho_{\uparrow}(v_{i}^{*})=\rho_{\uparrow}(v_i)$ (resp. ${}^{\vee}\!\rho^{\downarrow}(v_{i}^{*})=\rho^{\downarrow}(v_i)$) for each $i$.
\end{remark}
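As a rank-one sanity check of Remark \ref{dualbasisrem} directly from the definition of ${}^{\vee}\!\rho_{\uparrow}$ (the value $c\in\mathbb{R}$ below is arbitrary): if $V_{\uparrow}=\Lambda_{\uparrow}v$ with $\rho_{\uparrow}(\lambda v)=c-\nu_{\uparrow}(\lambda)$, then every nonzero element $x=\lambda v$ of $V_{\uparrow}$ contributes \[ \nu_{\uparrow}(v^{*}(\lambda v))+\rho_{\uparrow}(\lambda v)=\nu_{\uparrow}(\lambda)+\left(c-\nu_{\uparrow}(\lambda)\right)=c \] to the infimum defining ${}^{\vee}\!\rho_{\uparrow}(v^{*})$, so that ${}^{\vee}\!\rho_{\uparrow}(v^{*})=c=\rho_{\uparrow}(v)$.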
We still need to define the maps $\delta^{\downarrow}(\phi_{\uparrow})$, $\delta_{\uparrow}(\phi^{\downarrow})$ that appear in (\ref{dualm}). In this direction, it is helpful to note the following:
\begin{prop}\label{pullout} If $H$ is a finitely-generated $\Lambda$-module then the $\Lambda^{\downarrow}$-linear map $\beta\colon\thinspace \Lambda^{\downarrow}\otimes_{\Lambda}{}^{\vee}\!H \to {}^{\vee}(\Lambda_{\uparrow}\otimes_{\Lambda}H)$, defined by extending linearly from \[ (\beta(\mu\otimes\phi))(\lambda\otimes x)=\bar{\mu}\lambda\phi(x) \] whenever $\mu\in \Lambda^{\downarrow},\phi\in {}^{\vee}\!H,\lambda\in \Lambda_{\uparrow},x\in H$, is an isomorphism. Likewise, the same formula defines an isomorphism $\beta$ of $\Lambda_{\uparrow}$-vector spaces $\Lambda_{\uparrow}\otimes_{\Lambda}{}^{\vee}\!H\to {}^{\vee}(\Lambda^{\downarrow}\otimes_{\Lambda}H)$.
\end{prop}
\begin{remark}\label{betabasis}
If $H$ is a free module, say with basis $\{e_1,\ldots,e_r\}$, then we obtain a basis $\{1\otimes e_1,\ldots,1\otimes e_r\}$ for $\Lambda_{\uparrow}\otimes_{\Lambda}H$ and then dual bases $\{e_{1}^{\vee},\ldots,e_{r}^{\vee}\}$ for ${}^{\vee}\!H$ and $\{(1\otimes e_1)^{\vee},\ldots,(1\otimes e_r)^{\vee}\}$ for ${}^{\vee}\!(\Lambda_{\uparrow}\otimes_{\Lambda} H)$. Then $\beta\colon\thinspace \Lambda^{\downarrow}\otimes_{\Lambda}{}^{\vee}\!H \to {}^{\vee}(\Lambda_{\uparrow}\otimes_{\Lambda}H)$ is the isomorphism that sends $1\otimes (e_{i}^{\vee})\in \Lambda^{\downarrow}\otimes_{\Lambda}{}^{\vee}\!H$ to $(1\otimes e_i)^{\vee}\in {}^{\vee}(\Lambda_{\uparrow}\otimes_{\Lambda}H)$. However the argument that we give below applies regardless of whether $H$ is free.
\end{remark}
\begin{proof}
Applying Proposition \ref{flipconj} with $M$ equal to $\mathrm{Hom}_{\Lambda}(H,\Lambda)$ (so by definition ${}^{\vee}H=\bar{M}$), our map is the composition of the inverse of the map $\alpha$ from that proposition with the map $\gamma\colon\thinspace\overline{\mathrm{Hom}_{\Lambda}(H,\Lambda)\otimes_{\Lambda}\Lambda_{\uparrow}}\to \overline{\mathrm{Hom}_{\Lambda_{\uparrow}}(\Lambda_{\uparrow}\otimes_{\Lambda}H,\Lambda_{\uparrow})}$ that sends a simple tensor $\phi\otimes \mu$ to the map defined on simple tensors by $\lambda\otimes x\mapsto \mu\lambda\phi(x)$. Let us remove the conjugation symbols and regard this map $\gamma$ as a $\Lambda_{\uparrow}$-linear map $\mathrm{Hom}_{\Lambda}(H,\Lambda)\otimes_{\Lambda}\Lambda_{\uparrow}\to \mathrm{Hom}_{\Lambda_{\uparrow}}(\Lambda_{\uparrow}\otimes_{\Lambda}H,\Lambda_{\uparrow})$. (Recall that the conjugation functor does not change the underlying set-theoretic map.)
Passing without comment through various identifications $R\otimes_RN\cong N$ for $R$-modules $N$, and recalling that $\mathcal{Q}(\Lambda)$ is the fraction field of $\Lambda$, the map $\gamma$ factors as \begin{align*} \left(\mathrm{Hom}_{\Lambda}(H,\Lambda)\otimes_{\Lambda}\mathcal{Q}(\Lambda)\right)\otimes_{\mathcal{Q}(\Lambda)}\Lambda_{\uparrow}&\to \mathrm{Hom}_{\mathcal{Q}(\Lambda)}(\mathcal{Q}(\Lambda)\otimes_{\Lambda}H,\mathcal{Q}(\Lambda))\otimes_{\mathcal{Q}(\Lambda)}\Lambda_{\uparrow} \\ & \to \mathrm{Hom}_{\Lambda_{\uparrow}}(\Lambda_{\uparrow}\otimes_{\Lambda}H,\Lambda_{\uparrow}) \end{align*} where both maps are defined in a straightforward way that is very similar to the definition of $\gamma$. The first map above is an isomorphism by \cite[Lemma 4.87]{Rot}\footnote{This lemma applies because our module $H$, being finitely generated over the Noetherian ring $\Lambda$, is finitely presented.}, while the second map is the canonical isomorphism between the coefficient extension to $\Lambda_{\uparrow}$ of the dual of the $\mathcal{Q}(\Lambda)$-vector space $W:=\mathcal{Q}(\Lambda)\otimes_{\Lambda}H$ and the dual of the coefficient extension $\Lambda_{\uparrow}\otimes_{\mathcal{Q}(\Lambda)}W$. Thus $\gamma$ is an isomorphism, and hence $\beta=\gamma\circ\alpha^{-1}$ is also an isomorphism. The isomorphism $\Lambda_{\uparrow}\otimes_{\Lambda}{}^{\vee}\!H\cong {}^{\vee}(\Lambda^{\downarrow}\otimes_{\Lambda}H)$ holds by the same argument.
\end{proof}
Given the filtered matched pair $\mathcal{P}$ with data as in (\ref{pdisp}), so that in particular $1_{\Lambda_{\uparrow}}\otimes \phi_{\uparrow}\colon\thinspace \Lambda_{\uparrow}\otimes_{\Lambda}H\to V_{\uparrow}$ and $1_{\Lambda^{\downarrow}}\otimes \phi^{\downarrow}\colon\thinspace \Lambda^{\downarrow}\otimes_{\Lambda}H\to V^{\downarrow}$ are isomorphisms, we \textbf{define the maps $\delta_{\uparrow}(\phi^{\downarrow}),\delta^{\downarrow}(\phi_{\uparrow})$ in (\ref{dualm}) to be the respective compositions} \[ \xymatrix{ {}^{\vee}\!H\ar[r]& \Lambda_{\uparrow}\otimes_{\Lambda}{}^{\vee}\!H\ar[r]^{\beta}& {}^{\vee}\!(\Lambda^{\downarrow}\otimes_{\Lambda}H)\ar[rr]^{{}^{\vee}\!(1_{\Lambda^{\downarrow}}\otimes \phi^{\downarrow})^{-1}} & & {}^{\vee}\!(V^{\downarrow})} \] and \begin{equation}\label{dualpss} \xymatrix{ {}^{\vee}\!H\ar[r]& \Lambda^{\downarrow}\otimes_{\Lambda}{}^{\vee}\!H\ar[r]^{\beta}& {}^{\vee}\!(\Lambda_{\uparrow}\otimes_{\Lambda}H)\ar[rr]^{{}^{\vee}\!(1_{\Lambda_{\uparrow}}\otimes \phi_{\uparrow})^{-1}} & & {}^{\vee}\!(V_{\uparrow})} \end{equation} where in each case the first map is coefficient extension and the map $\beta$ is the isomorphism from Proposition \ref{pullout}. Replacing $\delta_{\uparrow}(\phi^{\downarrow})$ by $1_{\Lambda_{\uparrow}}\otimes \delta_{\uparrow}(\phi^{\downarrow})$ and $\delta^{\downarrow}(\phi_{\uparrow})$ by $1_{\Lambda^{\downarrow}}\otimes \delta^{\downarrow}(\phi_{\uparrow})$ has the effect of converting the first maps in the above composition to the respective identities, and so since in each case the second and third maps are isomorphisms it follows that $1_{\Lambda_{\uparrow}}\otimes \delta_{\uparrow}(\phi^{\downarrow})$ and $1_{\Lambda^{\downarrow}}\otimes \delta^{\downarrow}(\phi_{\uparrow})$ are isomorphisms. This suffices to show that, with our definitions, the dual ${}^{\vee}\!\mathcal{P}$ from (\ref{dualm}) is a filtered matched pair.
Here is, perhaps, a more transparent characterization of the maps $\delta_{\uparrow}(\phi^{\downarrow})$ and $\delta^{\downarrow}(\phi_{\uparrow})$.
\begin{prop}\label{deltachar}
For each $\zeta\in {}^{\vee}\!H$, the element $\delta_{\uparrow}(\phi^{\downarrow})(\zeta)\in {}^{\vee}\!(V^{\downarrow})$ is uniquely characterized by the fact that \begin{equation}\label{deltachareqn} \mbox{for all $h\in H$,}\quad \left(\delta_{\uparrow}(\phi^{\downarrow})(\zeta)\right)(\phi^{\downarrow}h)=\zeta(h).\end{equation} Similarly, $\delta^{\downarrow}(\phi_{\uparrow})(\zeta)\in {}^{\vee}\!(V_{\uparrow})$ is uniquely characterized by the fact that \[ \mbox{for all $h\in H$,}\quad \left(\delta^{\downarrow}(\phi_{\uparrow})(\zeta)\right)(\phi_{\uparrow}h)=\zeta(h).\]
\end{prop}
\begin{proof} The proofs of the two sentences are identical so we just prove the first. There can be at most one $\eta\in {}^{\vee}\!(V^{\downarrow})$ obeying $\eta(\phi^{\downarrow}h)=\zeta(h)$ for all $h\in H$ because the image of $H$ under $\phi^{\downarrow}$ spans $V^{\downarrow}$ over $\Lambda^{\downarrow}$, so we just need to check that (\ref{deltachareqn}) holds. By definition, \begin{align*}
\left(\delta_{\uparrow}(\phi^{\downarrow})(\zeta)\right)(\phi^{\downarrow}h)&=\left({}^{\vee}(1_{\Lambda^{\downarrow}}\otimes \phi^{\downarrow})^{-1}\beta(1\otimes \zeta)\right)(\phi^{\downarrow}h)
\\ &= \left(\beta(1\otimes\zeta)\right)(1\otimes h)=\zeta(h),\end{align*} as desired.
\end{proof}
When $\mathcal{P}$ has a doubly-orthogonal basis, we readily obtain a similar such basis for ${}^{\vee}\!\mathcal{P}$, as follows.
\begin{prop}\label{bardual}
If $\mathcal{P}$ is a filtered matched pair having a doubly-orthogonal basis $\{e_1,\ldots,e_d\}$, then the dual filtered matched pair has a doubly-orthogonal basis $\{e_{1}^{*},\ldots,e_{d}^{*}\}$ such that, with notation as in (\ref{pdisp}) and (\ref{dualm}), ${}^{\vee}\!\rho_{\uparrow}(\delta^{\downarrow}(\phi_{\uparrow})e_{i}^{*})=\rho_{\uparrow}(\phi_{\uparrow}e_i)$ and ${}^{\vee}\!\rho^{\downarrow}(\delta_{\uparrow}(\phi^{\downarrow})e_{i}^{*})=\rho^{\downarrow}(\phi^{\downarrow}e_i)$ for all $i$. Hence the basis spectra of ${}^{\vee}\!\mathcal{P}$ and $\mathcal{P}$ are related by \[ \Sigma({}^{\vee}\!\mathcal{P})=\{([a_i+\ell_i],-\ell_i)|([a_i],\ell_i)\in \Sigma(\mathcal{P})\}.\]
\end{prop}
\begin{proof}
The existence of the doubly-orthogonal basis $\{e_1,\ldots,e_d\}$ for $\mathcal{P}$ implies in particular that $\frac{H}{tH}$ is free with basis consisting of the cosets $[e_i]$ of $e_i$. Since $\Lambda$ is an integral domain we have $\mathrm{Hom}_{\Lambda}(tH,\Lambda)=\{0\}$, so by dualizing the canonical exact sequence $tH\to H\to \frac{H}{tH}\to 0$ we see that the pullback map ${}^{\vee}\!\left(\frac{H}{tH}\right)\to {}^{\vee}\!H$ is an isomorphism. So ${}^{\vee}\!H$ has a basis $\{e_{i}^{*}\}$ consisting of the pullbacks to ${}^{\vee}\!H$ of the elements of the dual basis to $\{[e_1],\ldots,[e_d]\}$ for ${}^{\vee}\!\left(\frac{H}{tH}\right)$. It then follows immediately from Proposition \ref{deltachar} that the pairing of $\delta_{\uparrow}(\phi^{\downarrow})e_{i}^{*}$ with $\phi^{\downarrow}e_j$ is $1$ if $i=j$ and $0$ otherwise. So $\{\delta_{\uparrow}(\phi^{\downarrow})e_{1}^{*},\ldots,\delta_{\uparrow}(\phi^{\downarrow})e_d^*\}$ is the dual basis for the $\Lambda_{\uparrow}$-vector space ${}^{\vee}(V^{\downarrow})$ to the orthogonal basis $\{\phi^{\downarrow}e_1,\ldots,\phi^{\downarrow}e_d\}$ for $V^{\downarrow}$. Similarly, $\{\delta^{\downarrow}(\phi_{\uparrow})e_1^*,\ldots,\delta^{\downarrow}(\phi_{\uparrow})e_d^*\}$ is the dual basis to the orthogonal basis $\{\phi_{\uparrow}e_1,\ldots,\phi_{\uparrow}e_d\}$ for $V_{\uparrow}$. By Remark \ref{dualbasisrem}, the bases $\{\delta_{\uparrow}(\phi^{\downarrow})e_1^*,\ldots,\delta_{\uparrow}(\phi^{\downarrow})e_d^*\}$ and $\{\delta^{\downarrow}(\phi_{\uparrow})e_1^*,\ldots,\delta^{\downarrow}(\phi_{\uparrow})e_d^*\}$ are orthogonal, with their values of ${}^{\vee}\!\rho^{\downarrow}$ and ${}^{\vee}\!\rho_{\uparrow}$ as claimed in the proposition. The last sentence of the proposition follows directly from this and from the definition of the basis spectrum.
\end{proof}
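For instance, with $\Gamma=\mathbb{Z}$ and the made-up basis spectrum $\Sigma(\mathcal{P})=\left\{\left([0],\tfrac{1}{2}\right),\left(\left[\tfrac{1}{3}\right],-2\right)\right\}$, Proposition \ref{bardual} gives \[ \Sigma({}^{\vee}\!\mathcal{P})=\left\{\left(\left[\tfrac{1}{2}\right],-\tfrac{1}{2}\right),\left(\left[\tfrac{1}{3}-2\right],2\right)\right\}=\left\{\left(\left[\tfrac{1}{2}\right],-\tfrac{1}{2}\right),\left(\left[\tfrac{1}{3}\right],2\right)\right\},\] since $\tfrac{1}{3}-2$ and $\tfrac{1}{3}$ represent the same coset of $\mathbb{Z}$. By Proposition \ref{gapchar} the gaps become $G_1({}^{\vee}\!\mathcal{P})=2$ and $G_2({}^{\vee}\!\mathcal{P})=-\tfrac{1}{2}$, illustrating the relation $G_i(\mathcal{P})=-G_{d+1-i}({}^{\vee}\!\mathcal{P})$ discussed below.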
Our constructions behave well with respect to double duals:
\begin{prop} \label{ddcom} For a filtered matched pair $\mathcal{P}$ with notation as in (\ref{pdisp}) we have commutative diagrams \[ \xymatrix{ H\ar[r]^{\phi_{\uparrow}}\ar[d]_{\alpha_H} & V_{\uparrow}\ar[d]^{\alpha_{V_{\uparrow}}} \\ {}^{\vee\vee}\!H\ar[r]_{\delta_{\uparrow}(\delta^{\downarrow}(\phi_{\uparrow}))} & {}^{\vee\vee}\!V_{\uparrow} },\qquad \xymatrix{ H\ar[r]^{\phi^{\downarrow}}\ar[d]_{\alpha_H} & V^{\downarrow}\ar[d]^{\alpha_{V^{\downarrow}}} \\ {}^{\vee\vee}\!H\ar[r]_{\delta^{\downarrow}(\delta_{\uparrow}(\phi^{\downarrow}))} & {}^{\vee\vee}\!V^{\downarrow} }\] where $\alpha_H,\alpha_{V_{\uparrow}},\alpha_{V^{\downarrow}}$ are the canonical maps to double duals from Section \ref{dualsec}.
\end{prop}
\begin{proof}
The proofs are the same for the two diagrams so we just check the commutativity of the first one.
Let $h\in H$. By Proposition \ref{deltachar}, the element $\eta:=(\delta_{\uparrow}(\delta^{\downarrow}(\phi_{\uparrow})))(\alpha_Hh)$ of ${}^{\vee\vee}V_{\uparrow}$ is uniquely characterized by the fact that, whenever $\zeta\in {}^{\vee}\!H$, we have $\eta(\delta^{\downarrow}(\phi_{\uparrow})\zeta)=(\alpha_Hh)(\zeta)$. So we just need to check that $\alpha_{V_{\uparrow}}(\phi_{\uparrow}h)$ satisfies this same property.
For $\zeta\in {}^{\vee}\!H$, one has \begin{align*} (\alpha_{V_{\uparrow}}(\phi_{\uparrow}h))(\delta^{\downarrow}(\phi_{\uparrow})\zeta)&=\overline{(\delta^{\downarrow}(\phi_{\uparrow})\zeta)(\phi_{\uparrow}h)}
\\ &= \overline{\zeta(h)}=(\alpha_Hh)(\zeta),\end{align*} as desired, where in the penultimate equality we have again used Proposition \ref{deltachar}.
\end{proof}
If $\mathcal{P}$ is a filtered matched pair with notation again as in (\ref{pdisp}), then with $d=\dim_{\mathcal{Q}(\Lambda)}\mathcal{Q}(\Lambda)\otimes_{\Lambda}H$ we have the gaps $G_1(\mathcal{P}),\ldots,G_d(\mathcal{P})$ of Definition \ref{gapdfn}. The dimension of $\mathcal{Q}(\Lambda)\otimes_{\Lambda}{}^{\vee}\!H$ is also equal to $d$ (for instance this follows from \cite[Lemma 4.87]{Rot}), so there are also gaps $G_1({}^{\vee}\!\mathcal{P}),\ldots,G_{d}({}^{\vee}\!\mathcal{P})$. If $\mathcal{P}$ admits a doubly-orthogonal basis (as always holds if $\Gamma$ is discrete) then Propositions \ref{gapchar} and \ref{bardual} imply that we have $G_i(\mathcal{P})=-G_{d+1-i}({}^{\vee}\!\mathcal{P})$ for each $i=1,\ldots,d$. More generally we at least have an inequality:
\begin{prop} \label{gapdual} With notation as above, for each $i\in\{1,\ldots,d\}$ we have $G_{i}(\mathcal{P})\leq -G_{d+1-i}({}^{\vee}\!\mathcal{P})$.
\end{prop}
\begin{proof}
Suppose that $c>-G_{d+1-i}({}^{\vee}\!\mathcal{P})$; it suffices to show that for any independent set $\{h_1,\ldots,h_i\}\subset H$ there is some $\ell\in \{1,\ldots,i\}$ with $\rho^{\downarrow}(\phi^{\downarrow}h_{\ell})-\rho_{\uparrow}(\phi_{\uparrow}h_{\ell})<c$.
The assumption that $c>-G_{d+1-i}({}^{\vee}\!\mathcal{P})$ implies that there is an independent set $\{\zeta_1,\ldots,\zeta_{d+1-i}\}\subset {}^{\vee}\!H$ such that, for each $m\in\{1,\ldots,d+1-i\}$, we have \begin{equation}\label{dualc} {}^{\vee}\!\rho_{\uparrow}(\delta^{\downarrow}(\phi_{\uparrow})\zeta_m)-{}^{\vee}\!\rho^{\downarrow}(\delta_{\uparrow}(\phi^{\downarrow})\zeta_m)>-c.\end{equation} Any independent set $\{h_1,\ldots,h_i\}\subset H$ has the annihilator of $\{1\otimes h_1,\ldots,1\otimes h_i\}\subset \mathcal{Q}(\Lambda)\otimes_{\Lambda}H$ equal to a $(d-i)$-dimensional subspace of $\mathrm{Hom}_{\mathcal{Q}(\Lambda)}(\mathcal{Q}(\Lambda)\otimes_{\Lambda}H,\mathcal{Q}(\Lambda))$. Since the $(d+1-i)$-element set $\{\zeta_1,\ldots,\zeta_{d+1-i}\}$ becomes linearly independent on coefficient extension to $\mathcal{Q}(\Lambda)$, it follows that at least one $\zeta_{m}$ with $1\leq m\leq d+1-i$ must not vanish identically on $\{h_1,\ldots,h_i\}$. Fix such an $m$, and choose $\ell\in\{1,\ldots,i\}$ such that $\zeta_{m}(h_{\ell})\neq 0$.
By the definition of ${}^{\vee}\!\rho_{\uparrow}$ and Proposition \ref{deltachar}, we have \[
{}^{\vee}\!\rho_{\uparrow}(\delta^{\downarrow}(\phi_{\uparrow})\zeta_m)\leq \nu_{\uparrow}((\delta^{\downarrow}(\phi_{\uparrow})\zeta_m)(\phi_{\uparrow}h_{\ell}))+\rho_{\uparrow}(\phi_{\uparrow}h_{\ell})=\nu_{\uparrow}(\zeta_m(h_{\ell}))+\rho_{\uparrow}(\phi_{\uparrow}h_{\ell});\] similarly, the definition of ${}^{\vee}\!\rho^{\downarrow}$ yields \[ {}^{\vee}\!\rho^{\downarrow}(\delta_{\uparrow}(\phi^{\downarrow})\zeta_m)\geq \nu^{\downarrow}((\delta_{\uparrow}(\phi^{\downarrow})\zeta_m)(\phi^{\downarrow}h_{\ell}))+\rho^{\downarrow}(\phi^{\downarrow}h_{\ell})=\nu^{\downarrow}(\zeta_m(h_{\ell}))+\rho^{\downarrow}(\phi^{\downarrow}h_{\ell}).\]
Hence (\ref{dualc}) gives \[ -c<\left(\nu_{\uparrow}(\zeta_m(h_{\ell}))+\rho_{\uparrow}(\phi_{\uparrow}h_{\ell})\right)-\left(\nu^{\downarrow}(\zeta_m(h_{\ell}))+\rho^{\downarrow}(\phi^{\downarrow}h_{\ell})\right).\] But any nonzero $\lambda\in \Lambda$, such as $\lambda=\zeta_m(h_{\ell})$, obeys $\nu_{\uparrow}(\lambda)-\nu^{\downarrow}(\lambda)\leq 0$, so we infer from the above that \[ -c<-\left(\rho^{\downarrow}(\phi^{\downarrow}h_{\ell})-\rho_{\uparrow}(\phi_{\uparrow}h_{\ell})\right).\] Since the number $c>-G_{d+1-i}({}^{\vee}\!\mathcal{P})$ and the independent $i$-element set $\{h_1,\ldots,h_i\}\subset H$ were arbitrary this confirms that $G_{i}(\mathcal{P})\leq -G_{d+1-i}({}^{\vee}\!\mathcal{P})$.
\end{proof}
It would be interesting to know whether equality always holds in Proposition \ref{gapdual}.
\subsection{Constructing doubly-orthogonal bases}\label{basisconstruct}
In this section we prove Theorem \ref{basistheorem} asserting that, if $\Gamma$ is discrete, any filtered matched pair admits a doubly-orthogonal basis.\footnote{In the case that $\Gamma=\{0\}$ one can prove this with much less effort than we expend for the general discrete case: in the notation used below, the maps $\phi_{\uparrow}$ and $\phi^{\downarrow}$ are just isomorphisms of $\kappa$-vector spaces when $\Gamma=\{0\}$, and one can then apply \cite[Theorem 3.4]{UZ} to the map $\phi_{\uparrow}\circ\phi^{\downarrow -1}$ between the orthogonalizable $\Lambda_{\uparrow}$-spaces $(\bar{V}^{\downarrow},-\rho^{\downarrow})$ and $(V_{\uparrow},\rho_{\uparrow})$ to obtain the desired basis.} Assume the filtered matched pair to be given by maps $\phi_{\uparrow}\colon\thinspace H\to V_{\uparrow}$ and $\phi^{\downarrow}\colon\thinspace H\to V^{\downarrow}$, where $(V_{\uparrow},\rho_{\uparrow})$ is an orthogonalizable $\Lambda_{\uparrow}$-space and $(V^{\downarrow},\rho^{\downarrow})$ is an orthogonalizable $\Lambda^{\downarrow}$-space. Write $F$ for the quotient $\frac{H}{tH}$; then $F$ is a free $\Lambda$-module because $H$ is a finitely generated module over $\Lambda$, which is a PID because $\Gamma$ is discrete. Since $\phi_{\uparrow}$ and $\phi^{\downarrow}$ each factor through the quotient projection $H\to F$, and this quotient projection becomes an isomorphism after tensoring with $\Lambda_{\uparrow}$ or $\Lambda^{\downarrow}$, we may for simplicity just consider the case of a filtered matched pair of the form \[ \xymatrix{ & (V^{\downarrow},\rho^{\downarrow}) \\ F\ar[ru]^{\phi^{\downarrow}}\ar[rd]_{\phi_{\uparrow}} & \\ & (V_{\uparrow},\rho_{\uparrow}) } \] where $F$ is a finitely-generated free $\Lambda$-module, as a doubly-orthogonal basis for this filtered matched pair will lift via the quotient projection to a doubly-orthogonal basis for the original one.
We thus need to show that there is a basis $\{e_1,\ldots,e_d\}$ for the free module $F$ such that, simultaneously, $\{\phi_{\uparrow}e_1,\ldots,\phi_{\uparrow}e_d\}$ is $\rho_{\uparrow}$-orthogonal for $V_{\uparrow}$ and $\{\phi^{\downarrow}e_1,\ldots,\phi^{\downarrow}e_d\}$ is $\rho^{\downarrow}$-orthogonal for $V^{\downarrow}$. A first step is to show that, separately, $(V_{\uparrow},\rho_{\uparrow})$ and $(V^{\downarrow},\rho^{\downarrow})$ each admit orthogonal bases that are images under $\phi_{\uparrow}$ and $\phi^{\downarrow}$, respectively, of bases $\{x_1,\ldots,x_d\}$ and $\{y_1,\ldots,y_d\}$ of $F$; subsequent to this, we will iteratively modify these two bases, preserving the respective orthogonality criteria, until they are the same basis for $F$. For both of these purposes, it is helpful to record some general operations which preserve the property of orthogonality.
\begin{prop}\label{orthpres}
Let $(V_{\uparrow},\rho_{\uparrow})$ be an orthogonalizable $\Lambda_{\uparrow}$-space and let $\{v_1,\ldots,v_d\}$ be a $\rho_{\uparrow}$-orthogonal basis for $V_{\uparrow}$.
\begin{itemize} \item[(i)] If $\mu_1,\ldots,\mu_d$ are nonzero elements of $\Lambda_{\uparrow}$ then $\{\mu_1v_1,\ldots,\mu_dv_d\}$ is still a $\rho_{\uparrow}$-orthogonal basis for $V_{\uparrow}$.
\item[(ii)] If $w_1,\ldots,w_d\in V_{\uparrow}$ obey $\rho_{\uparrow}(w_i)<\rho_{\uparrow}(v_i)$ for all $i$, then $\{v_1+w_1,\ldots,v_d+w_d\}$ is still a $\rho_{\uparrow}$-orthogonal basis for $V_{\uparrow}$.
\item[(iii)] If $v\in \mathrm{span}_{\Lambda_{\uparrow}}\{v_2,\ldots,v_d\}$ obeys $\rho_{\uparrow}(v)\leq \rho_{\uparrow}(v_1)$, and if $\lambda\in \Lambda_{\uparrow}$ with $\nu_{\uparrow}(\lambda)=0$, then $\{\lambda v_1+v,v_2,\ldots,v_d\}$ is still a $\rho_{\uparrow}$-orthogonal basis for $V_{\uparrow}$.
\end{itemize}
\end{prop}
\begin{remark}\label{orthrev}
By Remark \ref{switch}, this proposition implies analogous results for orthogonalizable $\Lambda^{\downarrow}$-spaces, replacing the various appearances of $\rho_{\uparrow}$ with $(-\rho^{\downarrow})$. Specifically, the $\Lambda^{\downarrow}$ version of (ii) (which will be used in the proof of Lemma \ref{triconstruct}) says that, if $(V^{\downarrow},\rho^{\downarrow})$ has an orthogonal basis $\{v_1,\ldots,v_d\}$ and if $\rho^{\downarrow}(w_i)>\rho^{\downarrow}(v_i)$ for each $i$ then $\{v_1+w_1,\ldots,v_d+w_d\}$ is still an orthogonal basis.
\end{remark}
\begin{proof}
(i) follows from noting that, due to the general relation $\nu_{\uparrow}(\lambda\mu)=\nu_{\uparrow}(\lambda)+\nu_{\uparrow}(\mu)$, for any $\lambda_1,\ldots,\lambda_d\in \Lambda_{\uparrow}$ we have \[ \rho_{\uparrow}\left(\sum_{i=1}^{d}\lambda_i\mu_iv_i\right)=\max_i\left(\rho_{\uparrow}(v_i)-\nu_{\uparrow}(\mu_i)-\nu_{\uparrow}(\lambda_i)\right)=\max_i\left(\rho_{\uparrow}(\mu_iv_i)-\nu_{\uparrow}(\lambda_i)\right).\]
For (ii), recall that, as noted in Remark \ref{alldifferent}, one has $\rho_{\uparrow}(x+y)=\max\{\rho_{\uparrow}(x),\rho_{\uparrow}(y)\}$ whenever $\rho_{\uparrow}(x)\neq\rho_{\uparrow}(y)$. Given $\lambda_1,\ldots,\lambda_d\in \Lambda_{\uparrow}$, the hypothesis $\rho_{\uparrow}(w_i)<\rho_{\uparrow}(v_i)$ for all $i$ implies that \[ \rho_{\uparrow}\left(\sum_i\lambda_iw_i\right)\leq \max_i\left(\rho_{\uparrow}(w_i)-\nu_{\uparrow}(\lambda_i)\right)<\max_i\left(\rho_{\uparrow}(v_i)-\nu_{\uparrow}(\lambda_i)\right)=\rho_{\uparrow}\left(\sum_i\lambda_iv_i\right)\] and hence \begin{align*} \rho_{\uparrow}\left(\sum_i\lambda_i(v_i+w_i)\right)& = \rho_{\uparrow}\left(\sum_{i}\lambda_iv_i+\sum_i\lambda_iw_i\right)=\rho_{\uparrow}\left(\sum_i\lambda_iv_i\right) \\ &=\max_i\left(\rho_{\uparrow}(v_i)-\nu_{\uparrow}(\lambda_i)\right)=\max_i\left(\rho_{\uparrow}(v_i+w_i)-\nu_{\uparrow}(\lambda_i)\right).\end{align*}
Thus the set $\{v_1+w_1,\ldots,v_d+w_d\}$ is $\rho_{\uparrow}$-orthogonal. A $\rho_{\uparrow}$-orthogonal set of nonzero vectors is automatically linearly independent since the orthogonality condition implies that any nontrivial linear combination of the vectors has $\rho_{\uparrow}>-\infty$ while $\rho_{\uparrow}(0)=-\infty$. So since by hypothesis $\dim V_{\uparrow}=d$ it follows that our $\rho_{\uparrow}$-orthogonal set $\{v_1+w_1,\ldots,v_d+w_d\}$ is a basis for $V_{\uparrow}$.
We now prove (iii). Note first that, by applying (i) with $\mu_1=\lambda$ and $\mu_2=\cdots=\mu_d=1$, it suffices to prove (iii) in the case that $\lambda=1$. Because $\rho_{\uparrow}(v)\leq \rho_{\uparrow}(v_1)$ and $v\in \mathrm{span}_{\Lambda_{\uparrow}}\{v_2,\ldots,v_d\}$ where $\{v_1,\ldots,v_d\}$ is orthogonal, one has $\rho_{\uparrow}(v_1+v)=\rho_{\uparrow}(v_1)$. Let $\lambda_1,\ldots,\lambda_d\in \Lambda_{\uparrow}$. If $\rho_{\uparrow}(v_1+v)-\nu_{\uparrow}(\lambda_1)<\max_{i\geq 2}(\rho_{\uparrow}(v_i)-\nu_{\uparrow}(\lambda_i))$ then, using Remark \ref{alldifferent}, \begin{align*} & \rho_{\uparrow}\left(\lambda_1(v_1+v)+\sum_{i\geq 2}\lambda_iv_i\right)=\rho_{\uparrow}\left(\sum_{i\geq 2}\lambda_iv_i\right)=\max_{i\geq 2}(\rho_{\uparrow}(v_i)-\nu_{\uparrow}(\lambda_i)) \\ & \qquad = \max\left\{\rho_{\uparrow}(v_1+v)-\nu_{\uparrow}(\lambda_1),\rho_{\uparrow}(v_2)-\nu_{\uparrow}(\lambda_2),\ldots,\rho_{\uparrow}(v_d)-\nu_{\uparrow}(\lambda_d)\right\}.\end{align*} The remaining case is that $\rho_{\uparrow}(v_1+v)-\nu_{\uparrow}(\lambda_1)\geq \max_{i\geq 2}(\rho_{\uparrow}(v_i)-\nu_{\uparrow}(\lambda_i))$. Then $\lambda_1v+\sum_{i=2}^{d}\lambda_iv_i\in\mathrm{span}_{\Lambda_{\uparrow}}\{v_2,\ldots,v_d\}$ with $\rho_{\uparrow}\left(\lambda_1v+\sum_{i=2}^{d}\lambda_iv_i\right)\leq \rho_{\uparrow}(\lambda_1v_1)$, and so by the orthogonality of $\{v_1,\ldots,v_d\}$ we have \begin{align*} &\rho_{\uparrow}\left(\lambda_1(v_1+v)+\sum_{i=2}^{d}\lambda_iv_i\right)=\rho_{\uparrow}\left(\lambda_1v_1+\left(\lambda_1v+\sum_{i=2}^{d}\lambda_iv_i\right)\right)=\rho_{\uparrow}(\lambda_1v_1)\\ &=\rho_{\uparrow}(v_1)-\nu_{\uparrow}(\lambda_1)=\rho_{\uparrow}(v_1+v)-\nu_{\uparrow}(\lambda_1) \\& =\max\left\{\rho_{\uparrow}(v_1+v)-\nu_{\uparrow}(\lambda_1),\rho_{\uparrow}(v_2)-\nu_{\uparrow}(\lambda_2),\ldots,\rho_{\uparrow}(v_d)-\nu_{\uparrow}(\lambda_d)\right\}.\end{align*} So indeed the set $\{v_1+v,v_2,\ldots,v_d\}$ (which is obviously a basis) is $\rho_{\uparrow}$-orthogonal.
\end{proof}
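Incidentally, the strict inequality in part (ii) of Proposition \ref{orthpres} cannot be weakened to $\rho_{\uparrow}(w_i)\leq\rho_{\uparrow}(v_i)$: taking $w_1=-v_1$ and $w_i=0$ for $i\geq 2$ satisfies the weakened hypothesis (as $\nu_{\uparrow}(-1)=0$ gives $\rho_{\uparrow}(w_1)=\rho_{\uparrow}(v_1)$), yet $v_1+w_1=0$, so $\{v_1+w_1,\ldots,v_d+w_d\}$ is not even a basis.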
We now begin constructing the desired bases. Throughout the following we continue to assume that $F$ is a free $\Lambda$-module of rank $d$, equipped with $\Lambda$-module homomorphisms $\phi_{\uparrow}\colon\thinspace F\to V_{\uparrow}$ and $\phi^{\downarrow}\colon\thinspace F\to V^{\downarrow}$ where $(V_{\uparrow},\rho_{\uparrow})$ is an orthogonalizable $\Lambda_{\uparrow}$-space and $(V^{\downarrow},\rho^{\downarrow})$ is an orthogonalizable $\Lambda^{\downarrow}$-space, such that $1_{\Lambda_{\uparrow}}\otimes\phi_{\uparrow}$ and $1_{\Lambda^{\downarrow}}\otimes \phi^{\downarrow}$ are vector space isomorphisms.
\begin{lemma} \label{triconstruct} There are bases $\{x_1,\ldots,x_d\}$ and $\{y_1,\ldots,y_d\}$ for the free $\Lambda$-module $F$ such that:
\begin{itemize} \item[(i)] $\{\phi_{\uparrow}x_1,\ldots,\phi_{\uparrow}x_d\}$ is an orthogonal basis for $(V_{\uparrow},\rho_{\uparrow})$;
\item[(ii)] $\{\phi^{\downarrow}y_1,\ldots,\phi^{\downarrow}y_d\}$ is an orthogonal basis for $(V^{\downarrow},\rho^{\downarrow})$; and
\item[(iii)] For each $i\in\{1,\ldots,d\}$, $x_i-y_i\in \mathrm{span}_{\Lambda}\{y_1,\ldots,y_{i-1}\}$.
\end{itemize}
\end{lemma}
\begin{proof}
Let us first construct $\{y_1,\ldots,y_d\}$. Let $\{f_1,\ldots,f_d\}$ be an arbitrary basis for the free module $F$. Then $\{\phi^{\downarrow}f_1,\ldots,\phi^{\downarrow}f_d\}$ is a (probably not orthogonal) basis for $V^{\downarrow}$. In view of Remark \ref{switch}, \cite[Theorem 2.16]{UZ} shows that there is an orthogonal basis $\{v_1,\ldots,v_d\}$ for $(V^{\downarrow},\rho^{\downarrow})$ such that, for each $i$, we have \[ v_i=\phi^{\downarrow}f_i+\sum_{j<i}\mu_{ij}\phi^{\downarrow}f_j \] for suitable $\mu_{ij}\in \Lambda^{\downarrow}$. These $v_i$ might not belong to the image of $\phi^{\downarrow}$ because the $\mu_{ij}$ may not belong to the subset $\Lambda$ of $\Lambda^{\downarrow}$, but we can remedy this by using Proposition \ref{orthpres} (and its variant from Remark \ref{orthrev}).
Specifically, let $a_0=\max_i \rho^{\downarrow}(v_i)$, and let $d_0=\min_j\rho^{\downarrow}(\phi^{\downarrow}f_j)-a_0$. For a general element $\mu\in \Lambda^{\downarrow}$, so that $\mu$ takes the form $\sum_{g\in \Gamma}c_gT^g$ where $c_g\in\kappa$ and, for any $g_0\in \mathbb{R}$, only finitely many of the $c_g$ with $g\geq g_0$ are nonzero, let us write \[ \widehat{\mu}=\sum_{g\in \Gamma:g\geq d_0}c_gT^g \] for the sum of only those terms in the generalized power series defining $\mu$ for which the power of $T$ is at least $d_0$. In particular, the sum defining $\widehat{\mu}$ above is a finite sum, so that $\widehat{\mu}\in \Lambda$. Moreover, $\nu^{\downarrow}(\widehat{\mu}-\mu)<d_0$. Hence for each $i$ and $j$ and each $\mu\in \Lambda^{\downarrow}$ we have \[ \rho^{\downarrow}\left((\widehat{\mu}-\mu)\phi^{\downarrow}f_j\right)>\rho^{\downarrow}(\phi^{\downarrow}f_j)-d_0\geq a_0\geq \rho^{\downarrow}(v_i).\] So by Remark \ref{orthrev}, if we let \[ \widehat{v}_i=v_i+\sum_{j<i}(\widehat{\mu}_{ij}-\mu_{ij})\phi^{\downarrow}f_j=\phi^{\downarrow}f_i+\sum_{j<i}\widehat{\mu}_{ij}\phi^{\downarrow}f_j \] then $\{\widehat{v}_1,\ldots,\widehat{v}_d\}$ will be an orthogonal basis for $V^{\downarrow}$. So define the elements $y_i\in F$ by \[ y_i=f_i+\sum_{j<i}\widehat{\mu}_{ij}f_j;\] by construction, $\widehat{v}_i=\phi^{\downarrow}y_i$ and so $\{\phi^{\downarrow}y_1,\ldots,\phi^{\downarrow}y_d\}$ is an orthogonal basis for $V^{\downarrow}$. Moreover, since $\{y_1,\ldots,y_d\}$ is related to $\{f_1,\ldots,f_d\}$ by a triangular $\Lambda$-valued matrix with ones on the diagonal, the fact that $\{f_1,\ldots,f_d\}$ is a basis for the $\Lambda$-module $F$ (and not merely a maximal independent set) implies that $\{y_1,\ldots,y_d\}$ is also a basis for $F$.
Now that we have our basis $\{y_1,\ldots,y_d\}$, the construction of $\{x_1,\ldots,x_d\}$ proceeds in essentially the same way (adjusting for signs), using $\{y_1,\ldots,y_d\}$ as the initial basis instead of $\{f_1,\ldots,f_d\}$. \cite[Theorem 2.16]{UZ} gives an orthogonal basis $\{w_1,\ldots,w_d\}$ for $V_{\uparrow}$ with each $w_i$ having the form \[ w_i=\phi_{\uparrow}y_i+\sum_{j<i}\lambda_{ij}\phi_{\uparrow}y_j \] with $\lambda_{ij}\in \Lambda_{\uparrow}$. If we let $\widehat{\lambda}_{ij}$ denote the element of $\Lambda$ that results from deleting from $\lambda_{ij}$ all terms involving powers $T^{g}$ with $g>\max_j\rho_{\uparrow}(\phi_{\uparrow}y_j)-\min_i\rho_{\uparrow}(w_i)$, then the elements $x_i=y_i+\sum_{j<i}\widehat{\lambda}_{ij}y_j$ evidently comprise a basis for $F$ which satisfies property (iii) in the lemma, and Proposition \ref{orthpres} (ii) implies that $\{\phi_{\uparrow}x_1,\ldots,\phi_{\uparrow}x_d\}$ is an orthogonal basis for $(V_{\uparrow},\rho_{\uparrow})$.
\end{proof}
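To illustrate the truncation used in this proof on a made-up element: take $\Gamma=\mathbb{Z}$ and $\mu=\sum_{g\in\mathbb{Z}:g\leq 0}T^{g}\in\Lambda^{\downarrow}$. If $d_0=-2$ then $\widehat{\mu}=T^{-2}+T^{-1}+1\in\Lambda$, and $\widehat{\mu}-\mu=-\sum_{g\leq -3}T^{g}$ has $\nu^{\downarrow}(\widehat{\mu}-\mu)=-3<d_0$, which is exactly the estimate needed to invoke Remark \ref{orthrev}.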
With Lemma \ref{triconstruct}'s bases $\{x_1,\ldots,x_d\}$ and $\{y_1,\ldots,y_d\}$ for $F$ in hand, we set about describing a linear algebraic procedure to modify these until they equal a common doubly-orthogonal basis. Let $M$ denote the $d\times d$ matrix with coefficients in $\Lambda$ relating our two bases in the sense that $x_j=\sum_{i=1}^{d}M_{ij}y_i$. Thus by Lemma \ref{triconstruct}(iii), $M$ is \textbf{unitriangular}, \emph{i.e.} upper triangular with all diagonal entries equal to $1$. Changes to the basis $\{x_1,\ldots,x_d\}$ (resp. $\{y_1,\ldots,y_d\}$) of course correspond to column (resp. row) operations on $M$; we would like to confine ourselves to row or column operations that correspond to basis changes that preserve the property that $\{\phi_{\uparrow}x_1,\ldots,\phi_{\uparrow}x_d\}$ and $\{\phi^{\downarrow}y_1,\ldots,\phi^{\downarrow}y_d\}$ are respectively $\rho_{\uparrow}$- and $\rho^{\downarrow}$-orthogonal. To assist with this we keep track of, along with the change of basis matrix $M$, the values $\xi_i=\rho_{\uparrow}(\phi_{\uparrow}x_i)$ and $\eta_j=\rho^{\downarrow}(\phi^{\downarrow}y_j)$.
\begin{dfn} A \textbf{labeled basis change matrix} is a triple $\mathcal{M}=(M,\vec{\xi},\vec{\eta})$ where $M$ is an invertible matrix with coefficients in $\Lambda$, say of size $d$-by-$d$, and $\vec{\xi},\vec{\eta}\in \mathbb{R}^d$.
\end{dfn}
It turns out to be helpful to introduce the following quantity:
\begin{dfn} \label{misdef} Let $\mathcal{M}=(M,\vec{\xi},\vec{\eta})$ be a labeled basis change matrix with the property that $M$ is unitriangular. The \textbf{misalignment} of $\mathcal{M}$ is the number \[ m(\mathcal{M})=\sum_{i<j}|(\eta_i-\xi_i)-(\eta_j-\xi_j)|.\]
\end{dfn}
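Since the misalignment drives the termination argument below, it may help to see it computed explicitly. Here is a minimal Python sketch (our own illustration; the labels $\vec{\xi},\vec{\eta}$ are encoded as lists of floats):
\begin{verbatim}
def misalignment(xi, eta):
    """m(M) = sum over i < j of |(eta_i - xi_i) - (eta_j - xi_j)|."""
    diffs = [e - x for x, e in zip(xi, eta)]
    return sum(abs(diffs[i] - diffs[j])
               for i in range(len(diffs))
               for j in range(i + 1, len(diffs)))

# Example with d = 3: diffs = [2, 0, 1], so m = |2-0| + |2-1| + |0-1| = 4.
print(misalignment([0.0, 1.0, 0.0], [2.0, 1.0, 1.0]))  # 4.0
\end{verbatim}
Note that $m(\mathcal{M})$ depends only on the labels $\vec{\xi},\vec{\eta}$, not on the matrix $M$ itself.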
We now consider some transformations of labeled basis change matrices $(M,\vec{\xi},\vec{\eta})$ which correspond to modifications of the bases $\{x_1,\ldots,x_d\}$ and/or $\{y_1,\ldots,y_d\}$ that preserve the desired orthogonality properties; a computational sketch of the first three operations appears just after the list:
\begin{itemize}
\item[(A1)] Reordering the $y_i$, say by swapping $y_i$ with $y_j$, corresponds to swapping the $i$th and $j$th rows of $M$, and also swapping $\eta_i$ with $\eta_j$.
\item[(A2)] Multiplying the basis element $x_j$ by a unit in $\Lambda$ (which is necessarily of form $aT^g$ where $a\in \kappa^{\times}$ and $g\in \Gamma$) corresponds to multiplying the $j$th column of $M$ by $aT^g$ while subtracting $g$ from $\xi_j$.
\item[(A3)] By Proposition \ref{orthpres} and Remark \ref{orthrev}, the basis $\{y'_1,\ldots,y'_d\}$ obtained by, for some distinct $i$ and $j$, setting $y'_j=y_j+\mu y_i$ and all other $y'_k=y_k$, will continue to have the property that $\{\phi^{\downarrow}y'_1,\ldots,\phi^{\downarrow}y'_d\}$ is $\rho^{\downarrow}$-orthogonal provided that $\rho^{\downarrow}(\mu y_i)\geq \rho^{\downarrow}(y_j)$. This corresponds to subtracting $\mu$ times row $j$ from row $i$ while leaving all $\eta_k,\xi_k$ unchanged, subject to the condition that $\nu^{\downarrow}(\mu)\leq \eta_i-\eta_j$.
\item[(A4)] Similarly to (A3), Proposition \ref{orthpres} implies that the basis $\{x'_1,\ldots,x'_d\}$ obtained by, for some distinct $i$ and $j$, setting $x'_j=x_j+\mu x_i$ and all other $x'_k=x_k$ will continue to have $\{\phi_{\uparrow}x'_1,\ldots,\phi_{\uparrow}x'_d\}$ $\rho_{\uparrow}$-orthogonal provided that $\rho_{\uparrow}(\mu x_i)\leq \rho_{\uparrow}(x_j)$. This corresponds to adding $\mu$ times column $i$ to column $j$ while leaving all $\eta_k,\xi_k$ unchanged, subject to the condition that $\nu_{\uparrow}(\mu)\geq \xi_i-\xi_j$.
\item[(A5)] Generalizing (A4), let $Z=\left(\begin{array}{cc} d_i & \mu \\ \lambda & d_j\end{array}\right)$ be a $2\times 2$ matrix that is invertible over $\Lambda$, with $\nu_{\uparrow}(d_i)=\nu_{\uparrow}(d_j)=0$ and $\nu_{\uparrow}(\lambda)>\xi_j-\xi_i$ while $\nu_{\uparrow}(\mu)\geq\xi_i-\xi_j$ (corresponding to the conditions $\rho_{\uparrow}(\lambda x_j)<\rho_{\uparrow}(x_i)$ and $\rho_{\uparrow}(\mu x_i)\leq\rho_{\uparrow}(x_j)$). Proposition \ref{orthpres} implies that the basis $\{x'_1,\ldots,x'_d\}$ obtained by setting $x'_i=d_ix_i+\lambda x_j$, $x'_j=\mu x_i+d_jx_j$, and all other $x'_k=x_k$ will continue to have $\{\phi_{\uparrow}x'_1,\ldots,\phi_{\uparrow}x'_d\}$ $\rho_{\uparrow}$-orthogonal. The corresponding operation on labeled basis change matrices $(M,\vec{\xi},\vec{\eta})$ leaves $\vec{\xi},\vec{\eta}$ unchanged and multiplies $M$ on the right by the $d\times d$ matrix $E$ which coincides with the identity except that $E_{ii}=d_i,E_{ij}=\mu,E_{ji}=\lambda,E_{jj}=d_j$ (again, subject to the conditions that $\nu_{\uparrow}(d_i)=\nu_{\uparrow}(d_j)=0$, $\nu_{\uparrow}(\lambda)>\xi_j-\xi_i$, and $\nu_{\uparrow}(\mu)\geq \xi_i-\xi_j$).
\end{itemize}
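The following Python sketch (again our own illustration: entries of $M$ are modeled as dictionaries from exponents in $\Gamma$ to coefficients in $\kappa$) implements the row and column operations (A1)-(A3), including the legality check on valuations; (A4) and (A5) are entirely analogous column operations.
\begin{verbatim}
def poly_mul_monomial(p, a, g):
    """Multiply a group-algebra element p by the unit a*T^g."""
    return {e + g: a * c for e, c in p.items()}

def poly_mul(p, q):
    """Product of two group-algebra elements."""
    r = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            r[e1 + e2] = r.get(e1 + e2, 0) + c1 * c2
    return {e: c for e, c in r.items() if c != 0}

def poly_sub(p, q):
    """p - q, dropping zero coefficients."""
    r = dict(p)
    for e, c in q.items():
        r[e] = r.get(e, 0) - c
        if r[e] == 0:
            del r[e]
    return r

def op_A1(M, eta, i, j):
    """(A1): swap rows i and j of M, and swap eta_i with eta_j."""
    M[i], M[j] = M[j], M[i]
    eta[i], eta[j] = eta[j], eta[i]

def op_A2(M, xi, j, a, g):
    """(A2): multiply column j by the unit a*T^g; subtract g from xi_j."""
    for row in M:
        row[j] = poly_mul_monomial(row[j], a, g)
    xi[j] -= g

def op_A3(M, eta, i, j, mu):
    """(A3): subtract mu times row j from row i; legal only if
    nu^down(mu) (the largest exponent of mu) is <= eta_i - eta_j."""
    assert max(mu) <= eta[i] - eta[j]
    M[i] = [poly_sub(M[i][k], poly_mul(mu, M[j][k])) for k in range(len(M))]

# Demo on the 2x2 unitriangular matrix M = [[1, T],[0, 1]]:
M = [[{0: 1}, {1: 1}], [{}, {0: 1}]]
xi, eta = [0.0, 0.0], [3.0, 0.0]
op_A3(M, eta, 0, 1, {1: 1})  # legal: nu^down(T) = 1 <= eta_1 - eta_2 = 3
print(M[0])                  # [{0: 1}, {}]: the off-diagonal entry is cleared
\end{verbatim}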
In view of Lemma \ref{triconstruct}, to prove Theorem \ref{basistheorem} it thus suffices to show that, if $\mathcal{M}=(M,\vec{\xi},\vec{\eta})$ is any labeled basis change matrix such that $M$ is unitriangular, there is a sequence of operations of types (A1),(A2),(A3),(A4),(A5) as above which convert $M$ to the identity (for the corresponding bases $\{x'_1,\ldots,x'_d\}$ and $\{y'_1,\ldots,y'_d\}$ will then be equal to each other, and will continue to have the properties that $\{\phi_{\uparrow}x'_1,\ldots,\phi_{\uparrow}x'_d\}$ is $\rho_{\uparrow}$-orthogonal and $\{\phi^{\downarrow}y'_1,\ldots,\phi^{\downarrow}y'_d\}$ is $\rho^{\downarrow}$-orthogonal; thus $\{x'_1,\ldots,x'_d\}=\{y'_1,\ldots,y'_d\}$ will be a doubly orthogonal basis). To facilitate an inductive argument, we will prove a more specific statement that includes information about the labels $\vec{\xi}$ and $\vec{\eta}$. Let us first make the following definition.
\begin{dfn}
Let $\vec{\alpha}=(\alpha_1,\ldots,\alpha_d)\in\mathbb{R}^d$. A \textbf{simple compression} of $\vec{\alpha}$ is an element $\vec{\beta}=(\beta_1,\ldots,\beta_d)\in \mathbb{R}^d$ such that, for some distinct $k,\ell\in\{1,\ldots,d\}$, we have \[ \beta_k+\beta_{\ell}=\alpha_k+\alpha_{\ell},\quad |\beta_k-\beta_{\ell}|<|\alpha_k-\alpha_{\ell}|,\,\,\mbox{ and }\beta_i=\alpha_i\mbox{ for all }i\notin\{k,\ell\}.\] A \textbf{compression} of $\vec{\alpha}$ is an element $\vec{\alpha}'\in\mathbb{R}^d$ such that, for some $m\geq 1$, there are $\vec{\alpha}^{(0)}=\vec{\alpha},\,\vec{\alpha}^{(1)},\ldots,\vec{\alpha}^{(m)}=\vec{\alpha}'$ such that $\vec{\alpha}^{(i+1)}$ is a simple compression of $\vec{\alpha}^{(i)}$ for each $i\in\{0,\ldots,m-1\}$. (For example, $(1,3)$ is a simple compression of $(0,4)$ in $\mathbb{R}^2$: the entries have the same sum but are strictly closer together.)\end{dfn}
It will be helpful to know the following (\emph{cf}. Definition \ref{misdef}):
\begin{prop}\label{compmis}
Suppose that $\vec{\alpha}'$ is a compression of $\vec{\alpha}$. Then \[ \sum_{i<j}|\alpha'_i-\alpha'_j|<\sum_{i<j}|\alpha_i-\alpha_j|.\]
\end{prop}
\begin{proof}
By transitivity it suffices to prove this in the case of a simple compression, so assume that, for some $k,\ell$, say with $\alpha'_k\leq \alpha'_{\ell}$, we have $\alpha'_k+\alpha'_{\ell}=\alpha_k+\alpha_{\ell}$, $|\alpha'_k-\alpha'_{\ell}|<|\alpha_k-\alpha_{\ell}|$, and $\alpha'_i=\alpha_i$ for all $i\notin\{k,\ell\}$. For any $t\in \mathbb{R}$, if $t$ does not lie in the open interval $(\alpha'_k,\alpha'_{\ell})$ then \[ |t-\alpha'_k|+|t-\alpha'_{\ell}|=|2t-(\alpha'_{k}+\alpha'_{\ell})|=|2t-(\alpha_k+\alpha_{\ell})|\leq |t-\alpha_k|+|t-\alpha_{\ell}|;\] on the other hand, if $t$ does lie in $(\alpha'_k,\alpha'_{\ell})$ then \[ |t-\alpha'_k|+|t-\alpha'_{\ell}|=|\alpha'_k-\alpha'_{\ell}|<|\alpha_k-\alpha_{\ell}|\leq |\alpha_k-t|+|t-\alpha_{\ell}|.\] So in fact $|t-\alpha'_{k}|+|t-\alpha'_{\ell}|\leq |t-\alpha_k|+|t-\alpha_{\ell}|$ for all $t\in \mathbb{R}$. Applying this with $t=\alpha_{i}$ for $i\notin \{k,\ell\}$, we see that the sum of just those terms in $\sum_{i<j}|\alpha'_i-\alpha'_j|$ for which $\{i,j\}\neq\{k,\ell\}$ is bounded above by the corresponding sum in $\sum_{i<j}|\alpha_i-\alpha_j|$. So since $|\alpha'_k-\alpha'_{\ell}|<|\alpha_k-\alpha_{\ell}|$ the result follows.
\end{proof}
Here then is the main technical ingredient in the proof of Theorem \ref{basistheorem}.
\begin{prop}\label{reduction}
Let $\mathcal{M}=(M,\vec{\xi},\vec{\eta})$ be a labeled basis change matrix such that $M$ is a $d\times d$ unitriangular matrix, and assume that $\Gamma$ is discrete. Then, by a sequence of operations of types (A1),(A2),(A3),(A4),(A5), $\mathcal{M}$ may be converted to a labeled basis change matrix of the form $\mathcal{M}'=(I,\vec{\xi}',\vec{\eta}')$ where $I$ is the $d\times d$ identity, and the tuple $(\eta'_1-\xi'_1,\ldots,\eta'_d-\xi'_d)$ either is equal to or is a compression of $(\eta_1-\xi_1,\ldots,\eta_d-\xi_d)$.
\end{prop}
\begin{proof}
The proof is by induction on the dimension, so we assume that the result holds for all unitriangular $k\times k$ labeled basis change matrices whenever $k<d$.
If $M$ is $d\times d$ and unitriangular and $\ell\in \{2,\ldots,d\}$, let $M(\ell)$ denote the lower right $(d-\ell+1)\times (d-\ell+1)$ block of $M$ (so the entries of $M(\ell)$ are the $M_{ij}$ with both $i,j\geq \ell$). Then $M(\ell)$ is still unitriangular.
Given an operation on labeled $(d-\ell+1)\times(d-\ell+1)$ basis change matrices of type (A1)-(A5), that operation extends in the obvious fashion to an operation, which is again of type (A1)-(A5), on labeled $d\times d$ basis change matrices: if the original operation is given at the level of matrices by left or right multiplication by a $(d-\ell+1)\times (d-\ell+1)$ matrix $E$ then the extended operation acts by left or right multiplication by the block matrix $\left(\begin{array}{cc} I & 0\\ 0 & E\end{array}\right)$, and the operation on labels $\vec{\xi},\vec{\eta}$ extends trivially. The extended operation has the same effect on the lower right block $M(\ell)$ as did the original operation; it has no effect on the upper left $(\ell-1)\times (\ell-1)$ block of $M$; and while it might affect the other blocks for general matrices $M$ it preserves the condition that, in the lower left block, $M_{ij}=0$ for all $i\geq \ell$ and $j<\ell$.
With $\mathcal{M}=(M,\vec{\xi},\vec{\eta})$ as in the statement of the proposition, we may apply the inductive hypothesis to the $(d-1)\times (d-1)$ submatrix $M(2)$ with labels $(\xi_2,\ldots,\xi_d)$ and $(\eta_2,\ldots,\eta_d)$, and extend the sequence of operations to $M$ in the manner described in the previous paragraph. This reduces us to the case that $M(2)$ is the identity matrix, so that $M$ has the form: \begin{equation}\label{1cleared} M=\left(\begin{array}{ccccc}1 & f_2 & f_3 & \cdots & f_d \\ 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1\end{array}\right),\end{equation} and our task now is to eliminate the entries $f_i$. For $k\in\{1,\ldots,d\}$, let us say that a $d\times d$-matrix $M$ is $k$-\textbf{cleared} if $M_{ij}=\delta_{ij}$ (Kronecker delta) whenever either $i\geq 2$ or $j\leq k$. Thus a matrix as in (\ref{1cleared}) is $1$-cleared, and it is $k$-cleared iff $f_j=0$ for all $j\leq k$.
The main steps in our reduction procedure are contained in the proof of the following lemma:
\begin{lemma}\label{mainbasisstep}
Let $\mathcal{M}=(M,\vec{\xi},\vec{\eta})$ be a labeled basis change matrix such that the $d\times d$ matrix $M$ is $k$-cleared, for some $k\in \{1,\ldots,d-1\}$. Then there is a sequence of operations of type (A1)-(A5) which converts $\mathcal{M}$ to a labeled basis change matrix $\mathcal{M}'=(M',\vec{\xi}',\vec{\eta}')$ such that either:
\begin{itemize} \item $M'$ is $(k+1)$-cleared, and $\vec{\xi}'=\vec{\xi}$ and $\vec{\eta}'=\vec{\eta}$; or \item $M'$ is $k$-cleared, and $(\eta'_1-\xi'_1,\ldots,\eta'_d-\xi'_d)$ is a compression of $(\eta_1-\xi_1,\ldots,\eta_d-\xi_d)$.
\end{itemize}
\end{lemma}
We now complete the proof of Proposition \ref{reduction} (and hence, as noted before the proposition, of Theorem \ref{basistheorem}) assuming Lemma \ref{mainbasisstep}, deferring the proof of the latter to the end of the section.
The main remaining observation is that, due to the discreteness of $\Gamma$, if we iteratively apply Lemma \ref{mainbasisstep} then there are only finitely many values that the misalignment $m$ can take during the iteration. To see this, for any labeled basis change matrix $\mathcal{M}=(M,\vec{\xi},\vec{\eta})$ let \[ \mathcal{S}(\mathcal{M})=\{(\eta_i-\xi_j)-(\eta_k-\xi_{\ell})+g|1\leq i,j,k,\ell\leq d,\,g\in \Gamma\}.\] Thus $\mathcal{S}(\mathcal{M})$ is a finite union of cosets of the discrete subgroup $\Gamma<\mathbb{R}$, and so $\mathcal{S}(\mathcal{M})$ is a discrete subset of $\mathbb{R}$. Among the operations (A1)-(A5), the only ones that modify the labels $\vec{\xi},\vec{\eta}$ are (A1), which swaps two of the $\eta_i$, and (A2), which subtracts an element of $\Gamma$ from one of the $\xi_i$. Since $\Gamma$ is an additive subgroup of $\mathbb{R}$ it follows that $\mathcal{S}(\mathcal{M}')=\mathcal{S}(\mathcal{M})$ whenever $\mathcal{M}'$ is obtained from $\mathcal{M}$ by an operation from among (A1)-(A5), and hence also whenever $\mathcal{M}'$ is obtained from $\mathcal{M}$ by a sequence of such operations.
Given $\mathcal{M}=(M,\vec{\xi},\vec{\eta})$ as in the statement of the proposition, as noted already we can apply the inductive hypothesis to $M(2)$ to reduce to the case that $M$ is $1$-cleared, setting us up to apply Lemma \ref{mainbasisstep}. Let $\mathcal{M}^{(0)}=\mathcal{M}$, and, assuming inductively that $\mathcal{M}^{(i)}=(M^{(i)},\vec{\xi}^{(i)},\vec{\eta}^{(i)})$ is $k_i$-cleared with $k_i<d$, let $\mathcal{M}^{(i+1)}$ result from applying Lemma \ref{mainbasisstep} to $\mathcal{M}^{(i)}$. By the preceding paragraph, $\mathcal{S}(\mathcal{M}^{(i)})=\mathcal{S}(\mathcal{M}^{(0)})$ for all $i$. Also, by Proposition \ref{compmis}, the misalignments $m(\mathcal{M}^{(i)})$ obey $m(\mathcal{M}^{(i+1)})\leq m(\mathcal{M}^{(i)})$, with equality only when $M^{(i+1)}$ is $(k_i+1)$-cleared. Now the misalignments $m(\mathcal{M}^{(i)})$ form a nonincreasing sequence within the set of sums of exactly $\binom{d}{2}$ absolute values of elements of the discrete set $\mathcal{S}(\mathcal{M}^{(0)})$; since the number of summands is fixed, this set of sums is itself a discrete set of nonnegative numbers. But any nonincreasing sequence in a discrete set of nonnegative numbers must eventually stabilize: there is $i_0$ such that $m(\mathcal{M}^{(i+1)})=m(\mathcal{M}^{(i)})$ whenever $i\geq i_0$. After this point in the iteration, the first alternative in Lemma \ref{mainbasisstep} must always hold, and so since $M^{(i_0)}$ is $k_{i_0}$-cleared it follows that $M^{(i_0+d-k_{i_0})}$ is $d$-cleared, \emph{i.e.} is equal to the identity matrix. Thus the proposition holds with $\mathcal{M}'=\mathcal{M}^{(i_0+d-k_{i_0})}$, completing the proof modulo Lemma \ref{mainbasisstep}.
\end{proof}
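For orientation, the structure of the iteration just described can be summarized in the following loop skeleton (a sketch only, under the assumption that a routine \texttt{clear\_step} implementing Lemma \ref{mainbasisstep} is available; we do not implement it here):
\begin{verbatim}
def reduce_to_identity(M, xi, eta, clear_step):
    """Drive a k-cleared labeled basis change matrix to the identity.

    clear_step(M, xi, eta, k) is assumed to perform the operations of
    Lemma `mainbasisstep' in place and return the new clearing level:
    either k+1 (labels unchanged) or k (with eta - xi strictly compressed).
    When Gamma is discrete the misalignment takes values in a discrete set
    and strictly decreases whenever k is returned, so the loop terminates.
    """
    d = len(xi)
    k = 1
    while k < d:
        k = clear_step(M, xi, eta, k)
    return M, xi, eta
\end{verbatim}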
\begin{proof}[Proof of Lemma \ref{mainbasisstep}]
Our input is a labeled basis change matrix $\mathcal{M}=(M,\vec{\xi},\vec{\eta})$ such that $M_{ij}=\delta_{ij}$ for all $i\geq 2$, and $M_{1j}=\delta_{1j}$ for all $j\leq k$, so that the first potentially nonzero off-diagonal entry in the first row of $M$ is $f:=M_{1,k+1}$. Our task is to use operations (A1)-(A5) either to eliminate $f$ while preserving $\vec{\xi}$ and $\vec{\eta}$, or else to preserve the property of being $k$-cleared while compressing the vector $\vec{\eta}-\vec{\xi}$.
By definition, $f$ is an element of the group algebra $\Lambda=\kappa[\Gamma]$, so takes the form $f=\sum_{g\in \Gamma}a_gT^g$ for some $a_g\in \kappa$ (only finitely many of which are nonzero). Let us split this sum up as \[ f=\underbrace{\sum_{g\leq \eta_1-\eta_{k+1}}a_gT^g}_{f_-}+\underbrace{\sum_{\eta_1-\eta_{k+1}<g<\xi_1-\xi_{k+1}}a_gT^g}_{f_0} + \underbrace{\sum_{g\geq \xi_1-\xi_{k+1}}a_gT^g}_{f_+}.\] (If $\eta_{1}-\eta_{k+1}\geq \xi_{1}-\xi_{k+1}$ such a decomposition is not unique, but we choose one.) Thus $\nu^{\downarrow}(f_-)\leq \eta_1-\eta_{k+1}$, and $\nu_{\uparrow}(f_+)\geq \xi_1-\xi_{k+1}$. So we can apply operation (A3), subtracting $f_-$ times row $k+1$ from row $1$; since row $k+1$ is the $(k+1)$th standard basis vector this has the sole effect of changing $M_{1,k+1}$ from $f$ to $f_0+f_+$. Then apply operation (A4), subtracting $f_+$ times column $1$ from column $k+1$, after which $M_{1,k+1}$ will be equal to $f_0$ but the rest of the matrix $M$, and also the labels $\vec{\xi}$ and $\vec{\eta}$, will be unchanged from what they were at the start of the proof. By construction, we have \begin{equation}\label{nuf0} \nu_{\uparrow}(f_0)>\eta_1-\eta_{k+1}\mbox{ and }\nu^{\downarrow}(f_0)<\xi_1-\xi_{k+1}.\end{equation}
If $f_0=0$ (as will automatically be the case if $\eta_{1}-\eta_{k+1}\geq \xi_{1}-\xi_{k+1}$) then we are done: the matrix is now $(k+1)$-cleared and the labels $\vec{\xi}$ and $\vec{\eta}$ are unchanged, so the first alternative in the conclusion of the lemma holds.
So suppose for the rest of the proof that $f_0\neq 0$. The lowest degree term in $f_0$ then takes the form $aT^g$ where $a\in \kappa^{\times}$ and $\eta_1-\eta_{k+1}<g< \xi_1-\xi_{k+1}$; let us factor this out and express $f_0$ in the form \[ f_0=aT^g(1-r) \mbox{ where }a\in\kappa^{\times},\,\eta_1-\eta_{k+1}<g< \xi_1-\xi_{k+1},\mbox{ and }\nu_{\uparrow}(r)>0.\] (This includes the possibility that $r=0$, as $\nu_{\uparrow}(0)=\infty$.) Since $\nu_{\uparrow}$ takes products to sums, and since $\nu_{\uparrow}(r)>0$, we may choose $N\in \mathbb{N}$ such that \begin{equation}\label{nupower} \nu_{\uparrow}(r^{N+1})>\xi_1-\xi_{k+1}-g. \end{equation} We shall apply (A5) with $i=1$, $j=k+1$, and \[ Z=\left(\begin{array}{cc} -a(1-r) & T^gr^{N+1}\\ T^{-g} & a^{-1}(1+r+\cdots+r^{N})\end{array}\right).\] Note that this $Z$ is invertible over $\Lambda$: the relation $(1-r)(1+r+\cdots+r^{N})=1-r^{N+1}$ implies that $\det Z=-1$. Also, $\nu_{\uparrow}(T^{-g})=-g>\xi_{k+1}-\xi_1$ and $\nu_{\uparrow}(T^gr^{N+1})=g+\nu_{\uparrow}(r^{N+1})>\xi_1-\xi_{k+1}$ so the conditions on $Z$ required in (A5) are indeed satisfied. Applying this transformation leaves all of the data in $\mathcal{M}$ unchanged except that the principal submatrix of $M$ corresponding to rows and columns $1$ and $k+1$ becomes \[ \left(\begin{array}{cc} 0 & T^gr^{N+1}+T^g(1-r^{N+1}) \\ T^{-g} & a^{-1}(1+r+\cdots+r^N)\end{array}\right)=\left(\begin{array}{cc} 0 & T^g \\ T^{-g} & a^{-1}(1+r+\cdots+r^N)\end{array}\right).\]
Next, apply (A1) (swapping rows $1$ and $k+1$), and then apply (A2), multiplying column $1$ by $T^g$ and column $k+1$ by $T^{-g}$. The result of this sequence of operations is a labeled basis change matrix $\mathcal{M}''=(M'',\vec{\xi}'',\vec{\eta}'')$ where $M''$ is a unitriangular matrix of form
\begin{equation}\label{bigmatrix} \left(\begin{array}{cccccc}
1 & \cdots & a^{-1}T^{-g}(1+r+\cdots+r^N) & 0& \cdots & 0 \\
0 & 1 & 0 & \cdots & \cdots & 0\\
\vdots & & & & & \vdots \\
0 & \cdots & 1 & \star & \cdots & \star \\
\vdots & & & \ddots & & \vdots \\
\vdots & & & & \ddots & \vdots \\
0 & \cdots & \cdots & \cdots & 0 & 1\end{array}\right) \end{equation} and where the labels $\vec{\xi}'',\vec{\eta}''$ coincide with $\vec{\xi},\vec{\eta}$ except that \begin{equation}\label{primeprime} \xi''_{1}=\xi_1-g,\quad \xi''_{k+1}=\xi_{k+1}+g,\quad \eta''_{1}=\eta_{k+1},\quad \eta''_{k+1}=\eta_1.\end{equation}
In particular, we have \begin{equation}\label{sumsame}
(\eta''_1-\xi''_1)+(\eta''_{k+1}-\xi''_{k+1})=(\eta_1-\xi_1)+(\eta_{k+1}-\xi_{k+1}).
\end{equation} Also, \[ (\eta''_1-\xi''_1)-(\eta_1-\xi_1)=(\eta_{k+1}-\eta_1)+g>0 \] and \[ (\eta''_1-\xi''_1)-(\eta_{k+1}-\xi_{k+1})=\xi_{k+1}-\xi_1+g<0,\] where we have twice used (\ref{nuf0}), as $f_0=aT^g(1-r)$ has $g=\nu_{\uparrow}(f_0)\leq \nu^{\downarrow}(f_0)$.
By (\ref{sumsame}), since $\eta''_1-\xi''_1$ lies strictly between $\eta_1-\xi_1$ and $\eta_{k+1}-\xi_{k+1}$, we see that $\eta''_{k+1}-\xi''_{k+1}$ also lies strictly between $\eta_1-\xi_1$ and $\eta_{k+1}-\xi_{k+1}$ and hence that $|(\eta''_{1}-\xi''_1)-(\eta''_{k+1}-\xi''_{k+1})|<|(\eta_1-\xi_1)-(\eta_{k+1}-\xi_{k+1})|$. Thus $\vec{\eta}''-\vec{\xi}''$ is a simple compression of $\vec{\eta}-\vec{\xi}$.
Now $\mathcal{M}''$ is, perhaps, not $k$-cleared because our row swap may have introduced nonzero terms above the diagonal in row $k+1$. However, we can apply the inductive hypothesis to the $(d-k)\times (d-k)$ lower right block $M''(k+1)$ with labels $\vec{\xi}''(k+1)=(\xi''_{k+1},\ldots,\xi''_d)$ and $\vec{\eta}''(k+1)=(\eta''_{k+1},\ldots,\eta''_d)$ to obtain operations of type (A1)-(A5) that convert $M''(k+1)$ to the $(d-k)\times(d-k)$ identity matrix, with new labels $\vec{\xi}'(k+1)=(\xi'_{k+1},\ldots,\xi'_d),\vec{\eta}'(k+1)=(\eta'_{k+1},\ldots,\eta'_d)$ such that $\vec{\eta}'(k+1)-\vec{\xi}'(k+1)$ either is equal to or is a compression of $\vec{\eta}''(k+1)-\vec{\xi}''(k+1)$. When these operations on the lower right block $M''(k+1)$ are extended in the natural way to operations on the entire matrix $M''$, they do not affect the fact that columns $1$ through $k$ and rows $2$ through $k$ of $M''$ coincide with the corresponding columns and rows of the identity matrix, while they convert the lower $(d-k)\times (d-k)$ block to the identity, and so the resulting matrix $M'$ is still $k$-cleared. Moreover the labels $\vec{\xi}''$ and $\vec{\eta}''$ are transformed, respectively, to $\vec{\xi}':=(\xi''_1,\ldots,\xi''_{k},\xi'_{k+1},\ldots,\xi'_d)$ and $\vec{\eta}':=(\eta''_1,\ldots,\eta''_k,\eta'_{k+1},\ldots,\eta'_d)$. The fact that $\vec{\eta}'(k+1)-\vec{\xi}'(k+1)$ either is equal to or is a compression of $\vec{\eta}''(k+1)-\vec{\xi}''(k+1)$ implies that $\vec{\eta}'-\vec{\xi}'$ either is equal to or is a compression of $\vec{\eta}''-\vec{\xi}''$. So since $\vec{\eta}''-\vec{\xi}''$ is a simple compression of $\vec{\eta}-\vec{\xi}$, it follows that $\vec{\eta}'-\vec{\xi}'$ is a compression of $\vec{\eta}-\vec{\xi}$. Our new labeled basis change matrix $\mathcal{M}'=(M',\vec{\xi}',\vec{\eta}')$ thus satisfies the second alternative of the conclusion of Lemma \ref{mainbasisstep}, completing the proof.\end{proof}
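The series surgery at the start of the preceding proof, splitting $f$ into $f_-+f_0+f_+$ at the thresholds $\eta_1-\eta_{k+1}$ and $\xi_1-\xi_{k+1}$, is likewise elementary; here is a minimal sketch in the same dictionary encoding as before (when the thresholds cross, boundary terms are assigned so that the middle piece is empty, matching the non-uniqueness remark in the proof):
\begin{verbatim}
def split_f(f, lo, hi):
    """Split f = f_minus + f_zero + f_plus, where
       f_minus holds the terms with exponent g <= lo (lo = eta_1 - eta_{k+1}),
       f_zero holds those with lo < g < hi          (hi = xi_1 - xi_{k+1}),
       f_plus holds those with g >= hi not already in f_minus."""
    f_minus = {g: c for g, c in f.items() if g <= lo}
    f_zero = {g: c for g, c in f.items() if lo < g < hi}
    f_plus = {g: c for g, c in f.items() if g > lo and g >= hi}
    return f_minus, f_zero, f_plus

# Example: f = 3*T^(-1) + 2 + 5*T + 7*T^2 with lo = 0, hi = 2:
f = {-1: 3, 0: 2, 1: 5, 2: 7}
print(split_f(f, 0, 2))  # ({-1: 3, 0: 2}, {1: 5}, {2: 7})
\end{verbatim}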
\section{Poincar\'e-Novikov structures}\label{pnsect}
We now introduce a new type of structure that gives rise to filtered matched pairs satisfying a version of Poincar\'e duality. We continue to work with respect to a fixed field $\kappa$ and finitely generated subgroup $\Gamma<\mathbb{R}$.
\begin{dfn}\label{pnstr}
A weak (resp. strong) $n$-\textbf{Poincar\'e-Novikov structure} $\mathcal{N}$ consists of the following data: \begin{itemize} \item A graded $\Lambda$-module $H_*=\oplus_{k\in \mathbb{Z}}H_k$ with each $H_k$ finitely generated over $\Lambda$, and maps $\mathcal{D}_k\colon\thinspace H_k\to {}^{\vee}\!H_{n-k}$ that comprise the data of a weak (resp. strong) $n$-PD structure (see Definition \ref{pdstr});
\item for each $k\in \mathbb{Z}$, an orthogonalizable $\Lambda_{\uparrow}$-space $(V_k,\rho_k)$; and
\item for each $k\in \mathbb{Z}$, a homomorphism of $\Lambda$-modules $S_k\colon\thinspace H_k\to V_k$ such that $1_{\Lambda_{\uparrow}}\otimes S_k\colon\thinspace \Lambda_{\uparrow}\otimes_{\Lambda}H_k\to V_k$ is an isomorphism of $\Lambda_{\uparrow}$-vector spaces.
\end{itemize}
\end{dfn}
If $\mathcal{N}$ is as in Definition \ref{pnstr}, for each $k\in \mathbb{Z}$ we obtain a filtered matched pair $\mathcal{P}(\mathcal{N})_k$ of the form \begin{equation}\label{pnkdef} \xymatrix{ & & ({}^{\vee}(V_{n-k}),{}^{\vee}\rho_{n-k}) \\ \mathcal{P}(\mathcal{N})_k \ar@{}[r]^(.45){}="a"^(.7){}="b" \ar@{=} "a";"b" & H_k\ar[ru]^{\widetilde{S}_k} \ar[rd]_{S_k} & \\ & & (V_k,\rho_k)} \end{equation} where ${}^{\vee}\rho_{n-k}$ is defined from $\rho_{n-k}$ as in Section \ref{fmpdual} and where $\widetilde{S}_k=\delta^{\downarrow}(S_{n-k})\circ \mathcal{D}_k$. Thus, in view of Proposition \ref{deltachar}, the map $\widetilde{S}_k\colon\thinspace H_k\to {}^{\vee}\!(V_{n-k})$ is characterized by the property that, for all $x\in H_k$ and $y\in H_{n-k}$, we have \begin{equation}\label{tildesk} (\widetilde{S}_kx)(S_{n-k}y)=(\mathcal{D}_kx)(y).\end{equation}
We now introduce a barcode associated to a Poincar\'e-Novikov structure. Elements of barcodes in this paper will be taken to be intervals modulo $\Gamma$-translation, expressed as follows:
\begin{notation}\label{intnot}
If $I$ is a nonempty interval in $\mathbb{R}$ (of any type $[a,b),(a,b],[a,b],(a,b)$, allowing $b=\infty$ in the first and fourth cases and $a=-\infty$ in the second and fourth cases), we denote by $I^{\Gamma}$ the equivalence class of $I$ under the equivalence relation $\sim$ on the collection $\mathcal{I}$ of all intervals in $\mathbb{R}$ according to which, for $I,J\in\mathcal{I}$, $I\sim J$ if and only if there is $g\in \Gamma$ such that the translation $x\mapsto x+g$ maps $I$ bijectively to $J$.
\end{notation}
Our conventions in the following definition are motivated by the outcome of the calculation in Section \ref{mabsect}; see also Definition \ref{fullbar}.
\begin{dfn}\label{pnbar}
Let $\mathcal{N}$ be a weak $n$-Poincar\'e-Novikov structure such that each $\mathcal{P}(\mathcal{N})_k$ admits a doubly-orthogonal basis. The \textbf{essential barcode} of $\mathcal{N}$ is the collection $\{\mathcal{B}_k(\mathcal{N})|k\in\mathbb{Z}\}$ of multisets of intervals modulo $\Gamma$-translation, where $\mathcal{B}_k(\mathcal{N})$ consists of the elements $[a,a+\ell]^{\Gamma}$ for each $([a],\ell)\in \Sigma(\mathcal{P}(\mathcal{N})_k)$ with $\ell\geq 0$, together with the elements $(a+\ell,a)^{\Gamma}$ for each $([a],\ell)\in \Sigma(\mathcal{P}(\mathcal{N})_{k+1})$ with $\ell<0$. (Here $\mathcal{P}(\mathcal{N})_k$ is defined in (\ref{pnkdef}).)\end{dfn}
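Concretely, assembling $\mathcal{B}_k(\mathcal{N})$ from the basis spectra is mechanical. A minimal sketch (our own illustration: spectra are lists of pairs $(a,\ell)$, with the understanding that endpoints are only defined modulo $\Gamma$-translation):
\begin{verbatim}
def essential_barcode_k(sigma_k, sigma_k_plus_1):
    """Degree-k essential barcode from Sigma(P(N)_k) and Sigma(P(N)_{k+1}):
    a closed bar [a, a+l] for each (a, l) in Sigma_k with l >= 0, and an
    open bar (a+l, a) for each (a, l) in Sigma_{k+1} with l < 0."""
    closed = [("closed", a, a + l) for (a, l) in sigma_k if l >= 0]
    opened = [("open", a + l, a) for (a, l) in sigma_k_plus_1 if l < 0]
    return closed + opened

# Example: Sigma_k = {(0, 2), (1, 0)}, Sigma_{k+1} = {(5, -3)}:
print(essential_barcode_k([(0, 2), (1, 0)], [(5, -3)]))
# [('closed', 0, 2), ('closed', 1, 1), ('open', 2, 5)]
\end{verbatim}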
Under weaker hypotheses we can adapt the gaps of Definition \ref{gapdfn} as follows:
\begin{dfn}\label{pngap}
If $\mathcal{N}$ is a weak $n$-Poincar\'e-Novikov structure with modules $H_k$ as in Definition \ref{pnstr} having $d_k=\dim_{\mathcal{Q}(\Lambda)}\mathcal{Q}(\Lambda)\otimes_{\Lambda}H_k$, for any $k\in \mathbb{Z}$ and $i\in \{1,\ldots,d_k\}$ we define \[ \mathcal{G}_{k,i}(\mathcal{N})=G_i(\mathcal{P}(\mathcal{N})_k).\]
\end{dfn}
It is straightforward to adapt the stability results for filtered matched pairs from Section \ref{stabsec} to prove analogous results for Poincar\'e-Novikov structures. First we define the appropriate notion of approximate isomorphism:
\begin{dfn}\label{pntmorph} Let $\mathcal{N}$ and $\mathcal{N}'$ be two weak $n$-Poincar\'e-Novikov structures, having the same modules $H_k$ and duality morphisms $\mathcal{D}_k\colon\thinspace H_k\to {}^{\vee}\!H_{n-k}$ and (possibly) different orthogonalizable $\Lambda_{\uparrow}$-spaces $(V_k,\rho_k)$, $(V'_k,\rho'_k)$ and $\Lambda$-module homomorphisms $S_k\colon\thinspace H_k\to V_k$ and $S'_k\colon\thinspace H_k\to V'_k$. If $t\geq 0$, a $t$-\textbf{morphism} from $\mathcal{N}$ to $\mathcal{N}'$ consists of, for each $k$, $\Lambda_{\uparrow}$-linear isomorphisms $a_k\colon\thinspace V_k\to V'_k$ such that $S'_k=a_k\circ S_k$ and, for all $x\in V_k$, $|\rho'_k(a_kx)-\rho_k(x)|\leq t$.
\end{dfn}
\begin{prop}\label{pntofmp}
If there exists a $t$-morphism between the weak $n$-Poincar\'e-Novikov structures $\mathcal{N}$ and $\mathcal{N}'$, then for each $k$ there is a $t$-morphism (in the sense of Definition \ref{tmorphdfn}) between the corresponding filtered matched pairs $\mathcal{P}(\mathcal{N})_k$ and $\mathcal{P}(\mathcal{N}')_k$.
\end{prop}
\begin{proof}
For the maps $\alpha_{\uparrow}$ of Definition \ref{tmorphdfn} we may use $a_k$ from Definition \ref{pntmorph}, as by definition $a_k$ obeys $S'_k=a_k\circ S_k$ and $|\rho'_k\circ S'_k-\rho_k\circ S_k|\leq t$. For the map $\alpha^{\downarrow}\colon\thinspace {}^{\vee}\!V_{n-k}\to {}^{\vee}\!V'_{n-k}$, since $a_{n-k}$ is a vector space isomorphism we may use the map ${}^{\vee}a_{n-k}^{-1}$. Indeed, if $x\in H_k$ and $y\in H_{n-k}$ then by (\ref{tildesk}) we have $(\tilde{S}_{k}x)(S_{n-k}y)=(\tilde{S}'_kx)(S'_{n-k}y)=(\mathcal{D}_kx)(y)$ and hence \[ ({}^{\vee}a_{n-k}^{-1}\tilde{S}_kx)(S'_{n-k}y)=(\tilde{S}_kx)(a_{n-k}^{-1}S'_{n-k}y)=(\tilde{S}_{k}x)(S_{n-k}y)=(\tilde{S}'_kx)(S'_{n-k}y),\] confirming that ${}^{\vee}a_{n-k}^{-1}\tilde{S}_k=\tilde{S}'_k$ since the image of $S'_{n-k}$ spans $V'_{n-k}$ over $\Lambda_{\uparrow}$. Moreover if $\zeta\in {}^{\vee}\!V'_{n-k}$ then \[ {}^{\vee}\!\rho_{n-k}({}^{\vee}\!a_{n-k}\zeta)=\inf_{0\neq x\in V_{n-k}}\left(\nu_{\uparrow}(\zeta(a_{n-k}x))+\rho_{n-k}(x)\right) \] which differs from \[ {}^{\vee}\!\rho'_{n-k}(\zeta)=\inf_{0\neq y\in V'_{n-k}}(\nu_{\uparrow}(\zeta(y))+\rho'_{n-k}(y)) \] by at most $t$ since $a_{n-k}$ is surjective and $|\rho'_{n-k}\circ a_{n-k}-\rho_{n-k}|\leq t$ by hypothesis. This shows that $|{}^{\vee}\!\rho'_{n-k}\circ {}^{\vee}\!a_{n-k}^{-1}-{}^{\vee}\!\rho_{n-k}|\leq t$, so the continuity requirement on $\alpha^{\downarrow}={}^{\vee}\!a_{n-k}^{-1}$ is satisfied.
\end{proof}
We can now read off the following stability results.
\begin{cor}\label{pngapstab}
If $\mathcal{N}$ and $\mathcal{N}'$ are weak $n$-Poincar\'e-Novikov structures between which there exists a $t$-morphism, then for all $k$ and $i$ we have $|\mathcal{G}_{k,i}(\mathcal{N})-\mathcal{G}_{k,i}(\mathcal{N}')|\leq 2t$.
\end{cor}
\begin{proof}
This is immediate from Propositions \ref{gapcts} and \ref{pntofmp}.
\end{proof}
\begin{cor}\label{pnstab}
Assume that $\Gamma$ is discrete and that $\mathcal{N}$ and $\mathcal{N}'$ are two weak $n$-Poincar\'e-Novikov structures between which there exists a $t$-morphism. Then there is a bijection $\sigma\colon\thinspace \cup_k\mathcal{B}_k(\mathcal{N})\to\cup_k\mathcal{B}_k(\mathcal{N}')$ such that each $[a,b]^{\Gamma}\in \mathcal{B}_k(\mathcal{N})$ has $\sigma([a,b]^{\Gamma})$ equal either to some $[a',b']^{\Gamma}\in\mathcal{B}_k(\mathcal{N}')$ or to some $(b',a')^{\Gamma}\in\mathcal{B}_{k-1}(\mathcal{N}')$ where in either case $|a'-a|\leq t$ and $|b'-b|\leq t$, and similarly each $(a,b)^{\Gamma}\in\mathcal{B}_k(\mathcal{N})$ has $\sigma((a,b)^{\Gamma})$ equal either to some $(a',b')^{\Gamma}\in\mathcal{B}_k(\mathcal{N}')$ or to some $[b',a']^{\Gamma}\in\mathcal{B}_{k+1}(\mathcal{N}')$ where in either case $|a'-a|\leq t$ and $|b'-b|\leq t$.
\end{cor}
\begin{proof}
This is immediate from Theorem \ref{stab}, Proposition \ref{pntofmp}, and the conventions in Definition \ref{pnbar}.
\end{proof}
We now develop consequences of the duality analysis in Section \ref{fmpdual}. First, we find the duals of the filtered matched pairs $\mathcal{P}(\mathcal{N})_k$ associated to a Poincar\'e-Novikov structure via (\ref{pnkdef}).
\begin{prop}\label{dualpnchar} For a weak $n$-Poincar\'e-Novikov structure with notation as above, the dual filtered matched pair ${}^{\vee}\!\mathcal{P}(\mathcal{N})_k$ (in the sense of Section \ref{fmpdual}) takes the form \[ \xymatrix{ & {}^{\vee}\! V_k \\ {}^{\vee}\!H_k \ar[ru]^{\delta^{\downarrow}(S_k)} \ar[rd]_{F_k} \\ & {}^{\vee}\!({}^{\vee}V_{n-k}) } \]
where the map $F_k$ satisfies \[ F_k\circ \mathcal{D}_{n-k}=\pm \alpha_{V_{n-k}}\circ S_{n-k} \] where $\pm$ is the sign from (\ref{pdsym}) and $\alpha_{V_{n-k}}\colon\thinspace V_{n-k}\to {}^{\vee}\!({}^{\vee}V_{n-k})$ is the isomorphism from Section \ref{dualsec}.
\end{prop}
\begin{proof} That the map ${}^{\vee}\!H_k\to {}^{\vee}\!V_k$ in ${}^{\vee}\!\mathcal{P}(\mathcal{N})_k$ is equal to $\delta^{\downarrow}(S_{k})$ is true by definition. The other map $F_k$ is equal by definition to $\delta_{\uparrow}(\tilde{S}_k)=\delta_{\uparrow}(\delta^{\downarrow}(S_{n-k})\circ \mathcal{D}_k)$, so we are to show that, for all $y\in H_{n-k}$, \begin{equation}\label{dualnoveqn} \delta_{\uparrow}(\delta^{\downarrow}(S_{n-k})\circ \mathcal{D}_k)(\mathcal{D}_{n-k}y)=\pm \alpha_{V_{n-k}}(S_{n-k}y)\end{equation} as elements of ${}^{\vee}\!({}^{\vee}\!V_{n-k})$. Since both $\mathcal{D}_k\colon\thinspace H_k\to {}^{\vee}\!H_{n-k}$ and $\delta^{\downarrow}(S_{n-k})\colon\thinspace {}^{\vee}\!H_{n-k}\to {}^{\vee}\!V_{n-k}$ become isomorphisms after tensoring with the field $\Lambda^{\downarrow}$, it suffices to show that the two sides of (\ref{dualnoveqn}) are equal when evaluated on arbitrary elements of the image of $\delta^{\downarrow}(S_{n-k})\circ \mathcal{D}_k$. To check this, we find from Proposition \ref{deltachar} that for any $x\in H_k$, \[
\left(\delta_{\uparrow}(\delta^{\downarrow}(S_{n-k})\circ \mathcal{D}_k)(\mathcal{D}_{n-k}y)\right)((\delta^{\downarrow}(S_{n-k})\circ \mathcal{D}_k)x)=(\mathcal{D}_{n-k}y)(x),\] while \[ \left(\alpha_{V_{n-k}}(S_{n-k}y)\right)((\delta^{\downarrow}(S_{n-k})\circ \mathcal{D}_k)x)=\overline{\left(\delta^{\downarrow}(S_{n-k})(\mathcal{D}_kx)\right)(S_{n-k}y)}=\overline{(\mathcal{D}_kx)(y)}.\] Thus the desired equality is
an expression of the symmetry property (\ref{pdsym}).\end{proof}
\begin{cor}\label{pndual}
If $\mathcal{N}$ is a strong $n$-Poincar\'e-Novikov structure such that each $\mathcal{P}(\mathcal{N})_k$ admits a doubly-orthogonal basis, there is a bijection from $\cup_k\mathcal{B}_k(\mathcal{N})$ to itself which pairs each element $[a,b]^{\Gamma}\in\mathcal{B}_k(\mathcal{N})$ (resp. $(a,b)^{\Gamma}\in \mathcal{B}_k(\mathcal{N})$) such that $a<b$ with an element $(a,b)^{\Gamma}$ (resp. $[a,b]^{\Gamma}$) of $\mathcal{B}_{n-1-k}(\mathcal{N})$, and which pairs any element of form $[a,a]^{\Gamma}\in\mathcal{B}_k(\mathcal{N})$ with an element $[a,a]^{\Gamma}$ of $\mathcal{B}_{n-k}(\mathcal{N})$.
\end{cor}
\begin{proof} Note that, under our usual correspondence between elements $([a],\ell)\in(\mathbb{R}/\Gamma)\times\mathbb{R}$ and intervals modulo $\Gamma$-translation $[a,a+\ell]^{\Gamma}$ (for $\ell\geq 0$) and $(a+\ell,a)^{\Gamma}$ (for $\ell<0$), the involution $([a],\ell)\leftrightarrow([a+\ell],-\ell)$ corresponds to interchanging $[a,b]^{\Gamma}$ with $(a,b)^{\Gamma}$ (except in the case that $\ell=0$ in which case it has no effect). So
in view of Proposition \ref{bardual}, the statement is equivalent to the claim that, for each $k$, the basis spectra $\Sigma(\mathcal{P}(\mathcal{N})_{n-k})$ and $\Sigma({}^{\vee}\!\mathcal{P}(\mathcal{N})_k)$ coincide. By hypothesis, with notation as in Definition \ref{pnstr}, $\mathcal{D}_{n-k}\colon\thinspace H_{n-k}\to{}^{\vee}\!H_k$ is a surjection with kernel equal to the torsion module $tH_{n-k}$. Hence by Proposition \ref{dualpnchar}, a doubly-orthogonal basis for ${}^{\vee}\mathcal{P}(\mathcal{N})_{k}$ lifts via $\mathcal{D}_{n-k}$ to a doubly orthogonal basis for the filtered matched pair \begin{equation}\label{otherfmp} \xymatrix{ & {}^{\vee}\! V_k \\ H_{n-k} \ar[ru]^{\delta^{\downarrow}(S_k)\circ\mathcal{D}_{n-k}} \ar[rd]_{\pm \alpha_{V_{n-k}}\circ S_{n-k}} \\ & {}^{\vee}\!({}^{\vee}V_{n-k})
}.\end{equation} By definition, the map $\delta^{\downarrow}(S_k)\circ\mathcal{D}_{n-k}$ is equal to the map $\widetilde{S}_{n-k}$ in the definition of $\mathcal{P}(\mathcal{N})_{n-k}$. Also, it follows from Remark \ref{dualbasisrem} that any orthogonal basis $\{v_1,\ldots,v_d\}$ for $V_{n-k}$ will be mapped by the isomorphism $\pm\alpha_{V_{n-k}}\colon\thinspace V_{n-k}\to {}^{\vee\vee}\!V_{n-k}$ to an orthogonal basis for ${}^{\vee\vee}\!V_{n-k}$, with ${}^{\vee\vee}\rho_{\uparrow}(\pm\alpha_{V_{n-k}}v_i)=\rho_{\uparrow}(v_i)$ for each $i$. Thus any doubly-orthogonal basis for $\mathcal{P}(\mathcal{N})_{n-k}$ is also a doubly-orthogonal basis for the filtered matched pair in (\ref{otherfmp}), and the values of the various versions of the functions $\rho_{\uparrow},\rho^{\downarrow}$ coincide under this correspondence. So indeed $\Sigma(\mathcal{P}(\mathcal{N})_{n-k})=\Sigma({}^{\vee}\!\mathcal{P}(\mathcal{N})_k)$.
\end{proof}
Under weaker hypotheses on $\mathcal{N}$ we still have a duality inequality for the gaps $\mathcal{G}_{k,i}(\mathcal{N})$:
\begin{cor}\label{weakdual}
Let $\mathcal{N}$ be a weak $n$-Poincar\'e-Novikov structure and for each $k$ let $d_k$ be as in Definition \ref{pngap}. Then $d_{n-k}=d_k$, and for all $i\in \{1,\ldots,d_k\}$ we have $\mathcal{G}_{n-k,i}(\mathcal{N})+\mathcal{G}_{k,d_k+1-i}(\mathcal{N})\leq 0$.
\end{cor}
\begin{proof} That $d_k=d_{n-k}$ follows from the fact that, with notation as in Definition \ref{pnstr}, $\mathcal{Q}(\Lambda)\otimes_{\Lambda}H_k$ is isomorphic (by the coefficient extension of $\mathcal{D}_k$) to $\mathcal{Q}(\Lambda)\otimes_{\Lambda}{}^{\vee}\!H_{n-k}$.
By definition we have $\mathcal{G}_{n-k,i}(\mathcal{N})=G_i(\mathcal{P}(\mathcal{N})_{n-k})$ and $\mathcal{G}_{k, d_{k}+1-i}(\mathcal{N})=G_{d_k+1-i}(\mathcal{P}(\mathcal{N})_k)$, while Proposition \ref{gapdual} shows that $G_{i}({}^{\vee}\!\mathcal{P}(\mathcal{N})_k)+G_{d_k+1-i}(\mathcal{P}(\mathcal{N})_k)\leq 0$. So it suffices to show that $G_i(\mathcal{P}(\mathcal{N})_{n-k})\leq G_{i}({}^{\vee}\!\mathcal{P}(\mathcal{N})_k)$. By Proposition \ref{dualpnchar}, the maps $\tilde{S}_{n-k}\colon\thinspace H_{n-k}\to {}^{\vee}\!V_k$ and $S_{n-k}\colon\thinspace H_{n-k}\to V_{n-k}$ that define $\mathcal{P}(\mathcal{N})_{n-k}$ factor through the analogous maps $\delta^{\downarrow}(S_k)$ and $F_k$ for ${}^{\vee}\!\mathcal{P}(\mathcal{N})_k$ as, respectively, \[ \xymatrix{ H_{n-k}\ar[r]^{\mathcal{D}_{n-k}} & {}^{\vee}\!H_k\ar[r]^{\delta^{\downarrow}(S_k)} & {}^{\vee}\!V_k } \] and \[ \xymatrix{
H_{n-k}\ar[r]^{\mathcal{D}_{n-k}} & {}^{\vee}\!H_k\ar[r]^{F_k} & {}^{\vee}\!({}^{\vee}\!V_{n-k})\ar[r]^{\pm\alpha_{V_{n-k}}^{-1}} & V_{n-k}}.\] Moreover by Remark \ref{dualbasisrem} we have ${}^{\vee\vee}\!\rho_{n-k}(x)=\rho_{n-k}(\pm\alpha_{V_{n-k}}^{-1}x)$ for all $x\in {}^{\vee}\!({}^{\vee}\!V_{n-k})$. Our hypotheses imply that $\mathcal{D}_{n-k}$, while not necessarily surjective, maps independent subsets of $H_{n-k}$ to independent subsets of ${}^{\vee}\!H_k$. Consequently the set whose supremum Definition \ref{gapdfn} takes to define $G_{i}({}^{\vee}\!\mathcal{P}(\mathcal{N})_k)$ contains the corresponding set for $G_i(\mathcal{P}(\mathcal{N})_{n-k})$. So indeed $G_i(\mathcal{P}(\mathcal{N})_{n-k})\leq G_{i}({}^{\vee}\!\mathcal{P}(\mathcal{N})_k)$.
\end{proof}
\begin{ex}\label{weakdualstrict}
We provide an example showing that, if $\mathcal{N}$ is not a strong Poincar\'e--Novikov structure, then the inequality in Corollary \ref{weakdual} may be strict. In the notation of Definition \ref{pnstr}, take $n=0$ and $H_k=\left\{\begin{array}{ll} \Lambda & k=0 \\ \{0\} & k\neq 0\end{array}\right.$. The only nontrivial $V_k$ will necessarily correspond to $k=0$, and we take $(V_0,\rho_0)=(\Lambda_{\uparrow},-\nu_{\uparrow})$, and use for the map $S_0\colon\thinspace H_0\to V_0$ the inclusion $\Lambda\to\Lambda_{\uparrow}$. To define the PD structure, choose $\alpha\in \Lambda\setminus\{0\}$ such that $\bar{\alpha}=\alpha$, and define $\mathcal{D}_0\colon\thinspace H_0\to {}^{\vee}\!H_0$ by $(\mathcal{D}_0(\lambda))(\mu)=\alpha\bar{\lambda}\mu$. Let $\mathcal{N}_{\alpha}$ denote the weak Poincar\'e--Novikov structure consisting of the data in this paragraph. (The condition $\bar{\alpha}=\alpha$ ensures that (\ref{pdsym}) holds.)
There is an isomorphism $\Lambda^{\downarrow}\to {}^{\vee}\!\Lambda_{\uparrow}$ sending $\mu\in \Lambda^{\downarrow}$ to the map $(\lambda\mapsto \bar{\mu}\lambda)$. One can check that the filtration function ${}^{\vee}\!\rho_0$ on ${}^{\vee}\!\Lambda_{\uparrow}$ corresponds under this isomorphism to $-\nu^{\downarrow}\colon\thinspace \Lambda^{\downarrow}\to \mathbb{R}\cup\{\infty\}$, and that the map $\tilde{S}_0$ from (\ref{pnkdef}) corresponds to the map $\Lambda\to\Lambda^{\downarrow}$ given by $\alpha$ times the inclusion.
Since $d_0=1$ and all other $d_k$ are zero, the only gap to consider is $\mathcal{G}_{0,1}(\mathcal{N}_{\alpha})$, and Corollary \ref{weakdual} asserts that $\mathcal{G}_{0,1}(\mathcal{N}_{\alpha})\leq 0$. Now from the definitions and the identifications in the previous paragraph one obtains \begin{align*} \mathcal{G}_{0,1}(\mathcal{N}_{\alpha})&=\sup\left\{\nu_{\uparrow}(\lambda)-\nu^{\downarrow}(\alpha\lambda)|\lambda\neq 0\right\}
\\ & = -\nu^{\downarrow}(\alpha)+\sup\left\{\nu_{\uparrow}(\lambda)-\nu^{\downarrow}(\lambda)|\lambda\neq 0\right\}=-\nu^{\downarrow}(\alpha).
\end{align*}
(The supremum here is $0$: one has $\nu_{\uparrow}(\lambda)\leq \nu^{\downarrow}(\lambda)$ for every nonzero $\lambda\in\Lambda$, with equality precisely when $\lambda$ is a monomial $aT^g$.)
So if $\Gamma\neq \{0\}$ and $\alpha=T^g+T^{-g}$ for some $g>0$ then we have $\mathcal{G}_{0,1}(\mathcal{N}_{\alpha})=-g<0$ and the inequality of Corollary \ref{weakdual} is indeed strict.
\end{ex}
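As a quick numerical sanity check of the final computation above (an illustration only, with the same dictionary encoding of elements of $\Lambda$ as in earlier sketches):
\begin{verbatim}
def nu_down(alpha):
    """nu^down of a nonzero element of Lambda: its largest exponent."""
    return max(g for g, c in alpha.items() if c != 0)

g = 2.0
alpha = {g: 1, -g: 1}  # alpha = T^g + T^(-g), which satisfies bar(alpha) = alpha
gap = -nu_down(alpha)  # G_{0,1}(N_alpha) = -nu^down(alpha)
print(gap)             # -2.0 < 0: the inequality of Corollary `weakdual' is strict
\end{verbatim}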
The above example also illustrates the importance of the symmetry condition (\ref{pdsym}), which we used in Proposition \ref{dualpnchar}; if (\ref{pdsym}) had not been required to hold, we could have taken $\alpha$ to be an arbitrary nonzero element of $\Lambda$ and, by choosing $\alpha$ with $\nu^{\downarrow}(\alpha)<0$, obtained a counterexample to Corollary \ref{weakdual}.
\section{Hamiltonian Floer theory}\label{floersect}
We now turn to the case that motivated this work, namely Floer theory for Hamiltonian flows on compact symplectic manifolds. We will not attempt to be self-contained, and refer to \cite[Chapters 19 and 20]{ohbook} for the general theory of Hamiltonian Floer homology (and to Part 1 of \cite{ohbook} for more elementary background about symplectic manifolds and Hamiltonian flows on them). There are certainly other variants of Floer theory to which our methods would apply; for instance, one could consider Floer theory for (not necessarily Hamiltonian) symplectic isotopies as in \cite{LO}, to which some of the considerations about Novikov homology in Section \ref{novsect} would be relevant. However, we will restrict attention to Hamiltonian Floer theory here.
Let $(M,\omega)$ be a compact, $2m$-dimensional semipositive symplectic manifold.\footnote{See \cite[Definition 19.6.1]{ohbook} for the definition of semipositivity; this condition should not be essential if one adapts virtual techniques as in \cite{BX},\cite{Par}. However, we will use in Proposition \ref{floerpair} some facts related to the PSS map that, to the best of my knowledge, are not available in the literature in the non-semipositive case.} Writing $S^1=\mathbb{R}/\mathbb{Z}$, let $\mathcal{L}_0M$ denote the space of contractible loops $\gamma\colon\thinspace S^1\to M$, and let $\widetilde{\mathcal{L}_0M}$ denote the covering space of $\mathcal{L}_0M$ whose fiber over $\gamma\in \mathcal{L}_0M$ consists of equivalence classes $[\gamma,w]$ of pairs $(\gamma,w)$ where $w\colon\thinspace D^2\to M$ satisfies $w(e^{2\pi it})=\gamma(t)$ for all $t$, with $(\gamma,v)$ and $(\gamma,w)$ equivalent iff both $\int_{D^2}v^*\omega=\int_{D^2}w^*\omega$ and $\langle c_1,[\bar{v}\#w]\rangle =0$. Here $c_1\in H^2(M;\mathbb{Z})$ is the first Chern class of $TM$ (using an almost complex structure on $TM$ that is compatible with $\omega$) and $[\bar{v}\#w]$ is the homology class of the sphere that results from gluing the disks $w$ and $v$ along their boundaries, with the orientation on $v$ reversed.
For a nondegenerate smooth Hamiltonian function $H\colon\thinspace S^1\times M\to \mathbb{R}$, we then have an action functional $\mathcal{A}_H\colon\thinspace \widetilde{\mathcal{L}_0M}\to \mathbb{R}$ defined by \[ \mathcal{A}_H([\gamma,w])=-\int_{D^2}w^{*}\omega-\int_{0}^{1}H(t,\gamma(t))dt.\] Formally speaking, the Floer complex $CF_{*}(H)$ is the Novikov complex of this functional (defined with the assistance of a suitable $S^1$-parametrized family of almost complex structures $\{J_t\}$ in order to define the analogue of the gradient flow; we suppress $\{J_t\}$ from the notation as different choices lead to naturally isomorphic filtered Floer homologies). The critical points of $\mathcal{A}_H$ are those $([\gamma,w])$ such that $\gamma$ is a closed orbit of the Hamiltonian flow of $H$ (using the sign convention of \cite{ohbook} that the Hamiltonian vector field of $H$ is given at time $t$ by $\omega(X_{H_t},\cdot)=d(H(t,\cdot))$). Such a critical point has a Conley-Zehnder index $\mu_H([\gamma,w])\in \mathbb{Z}$ (\cite[Definition 18.4.5]{ohbook}) which serves in Floer theory as a substitute for the Morse index (as the true Morse index is infinite).
In contrast to the finite-dimensional Novikov context, two critical points $[\gamma,v]$ and $[\gamma,w]$ lying in the same fiber of the projection $\widetilde{\mathcal{L}_0M}\to\mathcal{L}_0M$ may have different Conley--Zehnder indices. To be precise, if $A\in \pi_2(M)$ and $[\gamma,w]\in\widetilde{\mathcal{L}_0M}$ let $[\gamma,A\#w]$ be the result of gluing a representative of $A$ to $w$; one then has \begin{equation}\label{muchange} \mu_H([\gamma,A\#w])=\mu_H([\gamma,w])-2\langle c_1,A\rangle \end{equation} (where we have implicitly mapped $A$ to $H_2(M;\mathbb{Z})$ by the Hurewicz map). Similarly, by the definition of $\mathcal{A}_H$, we have \begin{equation}\label{omegachange} \mathcal{A}_H([\gamma,A\#w])=\mathcal{A}_H([\gamma,w])-\langle [\omega],A\rangle,\end{equation} where $[\omega]$ is the de Rham cohomology class of the closed $2$-form $\omega$.
Let $I_{\omega}\colon\thinspace \pi_2(M)\to\mathbb{R}$ and $I_{c_1}\colon\thinspace \pi_2(M)\to\mathbb{Z}$ be the maps defined by evaluating, respectively, $[\omega]$ and $c_1$ on spheres, and denote \begin{equation}\label{gammas} \hat{\Gamma}=\frac{\pi_2(M)}{\ker(I_{\omega})\cap \ker(I_{c_1})},\qquad \Gamma'=\frac{\ker(I_{c_1})}{\ker(I_{\omega})\cap \ker(I_{c_1})},\end{equation} so that $I_{\omega}$ and $I_{c_1}$ descend to maps defined on $\hat{\Gamma},\Gamma'$. Thus $\hat{\Gamma}$ is the full deck transformation group of the covering space $\widetilde{\mathcal{L}_0M}$ and $\Gamma'$ is the subgroup which preserves the Conley--Zehnder indices $\mu_H$ of critical points of $\mathcal{A}_H$. For our subgroup $\Gamma$ of $\mathbb{R}$ (used in the definitions of $\Lambda,\Lambda_{\uparrow},\Lambda^{\downarrow}$ in Section \ref{basic}) we use the image under the monomorphism $I_{\omega}|_{\Gamma'}\colon\thinspace \Gamma'\to \mathbb{R}$. (In particular, if $(M,\omega)$ is positively or negatively monotone, or weakly exact, then $\Gamma=\{0\}$.) Modulo the identification of $\Gamma'$ with $\Gamma$ given by integration of $\omega$, our Novikov field $\Lambda_{\uparrow}$ then coincides with the field denoted $\Lambda_{\omega}^{(0)}$ in \cite[Proposition 19.1.5]{ohbook}.
For $k\in\mathbb{Z}$, let $\tilde{P}_k(H)$ denote the set of critical points $[\gamma,w]$ of $\mathcal{A}_H$ having $\mu_{H}([\gamma,w])=k$. The degree-$k$ part of the Floer complex, $CF_k(H)$, consists of formal sums \[ \sum_{[\gamma,w]\in\tilde{P}_k(H)}a_{[\gamma,w]}[\gamma,w] \] where $a_{[\gamma,w]}\in\kappa$ and, for all $c\in \mathbb{R}$, only finitely many $[\gamma,w]$ have both $\mathcal{A}_H([\gamma,w])\geq c$ and $a_{[\gamma,w]}\neq 0$. Any $\lambda\in \Lambda_{\uparrow}$ can be written as $\lambda=\sum_{A\in \ker(I_{c_1})}\lambda_AT^{I_{\omega}(A)}$ where for all $c\in \mathbb{R}$ only finitely many $A$ have both $\lambda_A\neq 0$ and $I_{\omega}(A)\leq c$. Given (\ref{omegachange}) and the definition of the equivalence relation on $\widetilde{\mathcal{L}_0M}$, $CF_k(H)$ is a $\Lambda_{\uparrow}$-vector space with the scalar multiplication \[ \left(\sum_{A\in \ker(I_{c_1})}\lambda_AT^{I_{\omega}(A)}\right)\cdot \left(\sum_{[\gamma,w]\in\tilde{P}_k(H)}a_{[\gamma,w]}[\gamma,w]\right)=\sum_{[\gamma,w],A}\lambda_Aa_{[\gamma,w]}[\gamma,A\#w].\] One has a Floer boundary operator $\partial_H\colon\thinspace CF_k(H)\to CF_{k-1}(H)$ defined as in \cite[Section 19.2]{ohbook}, and a filtration function $\ell_{\uparrow}^{H}\colon\thinspace CF_k(H)\to\mathbb{R}\cup\{-\infty\}$ defined by $\ell_{\uparrow}^{H}\left( \sum_{[\gamma,w]\in\tilde{P}_k(H)}a_{[\gamma,w]}[\gamma,w]\right)=\max\{\mathcal{A}_{H}([\gamma,w])|a_{[\gamma,w]}\neq 0\}$. Writing $CF_{*}(H)=\oplus_kCF_k(H)$, we then have (in the language of Section \ref{pmdefsec}) a $\Lambda_{\uparrow}$-Floer-type complex $\mathbf{CF}(H)_{\uparrow}=(CF_*(H),\partial_H,\ell_{\uparrow}^{H})$.
With $HF_k(H)$ defined as the $k$th homology of the complex $CF_*(H)$, the general \cite[Proposition 6.6]{UZ} implies that $HF_k(H)$ is an orthogonalizable $\Lambda_{\uparrow}$-space, in the sense of Definition \ref{orthdef}, with respect to the filtration function $\rho_{\uparrow}^{H}$ which sends a homology class $h$ to the infimal (in fact minimal, if $h\neq 0$, by \cite[Theorem 1.4]{U08}) value of $\ell_{\uparrow}^{H}$ on cycles representing $h$. The Poincar\'e--Novikov structure $\mathcal{N}(M,\omega,H)$ associated to $H$ will then be based on the orthogonalizable $\Lambda_{\uparrow}$-spaces $(HF_k(H),\rho_{\uparrow}^{H})$ together with data associated to the PSS isomorphism $QH_*(M,\omega)\to HF_*(H)$ from quantum homology and the (classical) Poincar\'e intersection pairing on $M$.
To be precise about this we should first fix conventions for quantum homology. Recalling (\ref{gammas}), let $\hat{\Lambda}=\kappa[\hat{\Gamma}]$ (so that $\hat{\Lambda}$ consists of finite sums $\sum_{A\in \hat{\Gamma}}b_AT^A$ where $b_A\in \kappa$), and let $\hat{\Lambda}_{\uparrow}$ denote the ring of (possibly infinite) sums $\sum_{A\in\hat{\Gamma}}b_AT^A$ such that for each $c\in \mathbb{R}$ there are only finitely many $A$ with $b_A\neq 0$ and $\langle[\omega],A\rangle\leq c$, and such that moreover the set $\{\langle c_1,A\rangle|b_A\neq 0\}\subset \mathbb{Z}$ is finite. We regard both $\hat{\Lambda}_{\uparrow}$ and its subring $\hat{\Lambda}$ as graded $\kappa$-algebras, with $k$th graded part consisting of sums $\sum_Ab_AT^A$ for which every $A$ appearing in the sum has $-2\langle c_1,A\rangle=k$. (The $-2$ is motivated by (\ref{muchange}).) The integration isomorphism $I_{\omega}\colon\thinspace \Gamma'\cong \Gamma$ identifies our earlier rings $\Lambda=\kappa[\Gamma]$ and $\Lambda_{\uparrow}$ with the degree-zero parts of $\hat{\Lambda}$ and $\hat{\Lambda}_{\uparrow}$. If $I_{c_1}\colon\thinspace \pi_2(M)\to \mathbb{Z}$ identically vanishes then these degree-zero parts are the entire rings (in which case we put $q=1$ in what follows). Otherwise, letting $N$ denote the minimal Chern number of $(M,\omega)$ (\emph{i.e.} the positive generator of the image of $I_{c_1}$), by choosing some $A_0\in \hat{\Gamma}$ with $I_{c_1}(A_0)=N$ we may identify $\hat{\Lambda}\cong \Lambda[q^{-1},q]$ and $\hat{\Lambda}_{\uparrow}\cong \Lambda_{\uparrow}[q^{-1},q]$ where $q=T^{A_0}$ is a variable of degree $-2N$.
Note that, correspondingly, the action of $\Lambda_{\uparrow}$ on $CF_*(H)$ extends to an action of $\hat{\Lambda}_{\uparrow}$ by the obvious extension of the prescription $T^A[\gamma,w]=[\gamma,A\#w]$, and this action is consistent with the gradings in that the grading $j$ part of $\hat{\Lambda}_{\uparrow}$ sends $CF_k(H)$ to $CF_{j+k}(H)$.
The \textbf{quantum homology} of $(M,\omega)$, $QH_{*}(M,\omega)$, is then the tensor product of graded modules $\hat{\Lambda}_{\uparrow}\otimes_{\kappa}H_*(M;\kappa)$ where $H_*(M;\kappa)$ is singular homology. Thus if $I_{c_1}=0$ then $QH_k(M,\omega)=\Lambda_{\uparrow}\otimes_{\kappa}H_k(M;\kappa)$; otherwise, with notation in the previous paragraph, we have \[ QH_k(M,\omega)=\bigoplus_{j\in \mathbb{Z}}q^{j}\Lambda_{\uparrow}\otimes_{\kappa}H_{k+2Nj}(M;\kappa),\] and multiplication by $q$ induces isomorphisms $QH_k(M,\omega)\cong QH_{k-2N}(M,\omega)$. The PSS isomorphism $\Phi_{PSS}^{H}$ as in \cite{PSS},\cite[Section 20.3]{ohbook} is then an isomorphism of graded $\hat{\Lambda}_{\uparrow}$-modules $QH_{*+m}(M,\omega)\to HF_{*}(H)$ (where $m=\frac{1}{2}\dim M$).
We can now make the following definition:
\begin{dfn}\label{pnfloer} Let $H\colon\thinspace S^1\times M\to \mathbb{R}$ be a nondegenerate Hamiltonian on a $2m$-dimensional semipositive symplectic manifold $(M,\omega)$.
We define $\mathcal{N}(M,\omega,H)$ to be the Poincar\'e--Novikov structure having its data $n,H_k,\mathcal{D}_k,V_k,\rho_k,S_k$ as in Definition \ref{pnstr} given as follows (with notation as above):
\begin{itemize} \item $n=0$.
\item $H_k$ is the $(k+m)$th graded part of the tensor product $\hat{\Lambda}\otimes_{\kappa}H_*(M;\kappa)$. Thus if $I_{c_1}=0$ then $H_k=\Lambda\otimes_{\kappa}H_{k+m}(M;\kappa)$, and otherwise $H_k=\oplus_{j\in\mathbb{Z}}q^j\Lambda\otimes_{\kappa}H_{k+m+2Nj}(M;\kappa)$.
\item $\mathcal{D}_k\colon\thinspace H_{k}\to {}^{\vee}\!H_{-k}=\overline{\mathrm{Hom}_{\Lambda}(H_{-k},\Lambda)}$ is given by, for $x_{i,A}\in H_{k+m+2Ni}(M;\kappa)$ and $y_{j,B}\in H_{-k+m+2Nj}(M;\kappa)$, \begin{equation}\label{dquant} \left(\mathcal{D}_k\left(\sum_{A\in\Gamma',i\in \mathbb{Z}}q^iT^Ax_{i,A}\right)\right)\left(\sum_{B\in\Gamma',j\in \mathbb{Z}}q^jT^By_{j,B}\right)=\sum_{A,B\in \Gamma',\,i,j\in\mathbb{Z}}T^{I_{\omega}(-A+B)}x_{i,A}\cap y_{j,B}.\end{equation} Here $x_{i,A}\cap y_{j,B}$ is the usual signed count of intersections between generic representatives of $x_{i,A}$ and $y_{j,B}$, and is taken to vanish if these are not of complementary dimension. (Hence the only contributions to the above sum have $j=-i$.)
\item $V_k=HF_k(H)$, and $\rho_k=\rho_{\uparrow}^{H}$.
\item $S_k\colon\thinspace H_k\to V_k$ is the restriction to $H_k=\left(\hat{\Lambda}\otimes_{\kappa}H_{*}(M;\kappa)\right)_{k+m}\subset QH_{k+m}(M,\omega)$ of the degree-$k$ part of the PSS isomorphism $\Phi_{PSS}^{H}\colon\thinspace QH_{*+m}(M,\omega)\to HF_{*}(H)$.\end{itemize}\end{dfn}
Note that because we use $\hat{\Lambda}$ and not $\hat{\Lambda}_{\uparrow}$ in the definition of $H_k$, all sums in (\ref{dquant}) are finite. It is routine to check that $\mathcal{D}_k\colon\thinspace H_k\to {}^{\vee}\!H_{-k}$ is an isomorphism that satisfies the symmetry condition (\ref{pdsym}) based on the corresponding fact for the intersection pairing on $H_*(M;\kappa)$, so in the terminology of Definition \ref{pnstr} $\mathcal{N}(M,\omega,H)$ is a strong $0$-Poincar\'e--Novikov structure.
\begin{remark}\label{unnatural}
In cases where both $I_{c_1}$ and $\Gamma$ are nontrivial, a somewhat unnatural aspect of our formulation is that, in order to identify $\hat{\Lambda}$ with $\Lambda[q^{-1},q]$, we have chosen an element $A_0\in \hat{\Gamma}$ having $I_{c_1}(A_0)=N$ and interpreted $q$ as $T^{A_0}$. The set of possible such choices $A_0$ is a coset of $\Gamma'\cong \Gamma$ in $\hat{\Gamma}$, and if both $I_{c_1}$ and $\Gamma$ are nontrivial then (\ref{dquant}) does depend on this choice. Correspondingly, the map $\mathfrak{c}$ that appears in Corollary \ref{describepn} below also depends on the choice of $A_0$. An arguably more natural version of (\ref{dquant}) would omit reference to $q$ and allow $A,B$ to vary through $\hat{\Gamma}$ rather than $\Gamma'$, leading to a pairing \begin{equation}\label{modifiedpair} \left(\sum_{A\in\hat{\Gamma}}T^Ax_A,\sum_{B\in \hat{\Gamma}}T^By_B\right)\mapsto \sum_{A,B}T^{I_{\omega}(-A+B)}x_A\cap y_B.\end{equation} However, if $I_{\omega}(\hat{\Gamma})$ properly contains $\Gamma=I_{\omega}(\Gamma')$ then the right-hand side of (\ref{modifiedpair}) typically does not belong to $\Lambda=\kappa[\Gamma]$, so other modifications also would be required for this to fit into the general theory, such as using the larger group $I_{\omega}(\hat{\Gamma})$ in the role of $\Gamma$ and extending coefficients in the Floer complexes accordingly. Enlarging $\Gamma$ in such a fashion seems undesirable, especially if doing so causes $\Gamma$ to change from being discrete to being dense. On the other hand, if $I_{\omega}(\hat{\Gamma})=I_{\omega}(\Gamma')$ then (\ref{modifiedpair}) agrees with (\ref{dquant}) provided that we use for $A_0$ the unique element of $\hat{\Gamma}$ having $I_{c_1}(A_0)=N$ and $I_{\omega}(A_0)=0$, which can be regarded as a natural choice in this case.
\end{remark}
With $\mathcal{N}(M,\omega,H)$ defined, the general constructions of Section \ref{pnsect} then yield, for each $k\in \mathbb{Z}$, a filtered matched pair $\mathcal{P}(\mathcal{N}(M,\omega,H))_{k}$, gaps \\ $\mathcal{G}_{k,1}(\mathcal{N}(M,\omega,H)),\ldots,\mathcal{G}_{k,\dim_{\Lambda_{\uparrow}}QH_k(M,\omega)}(\mathcal{N}(M,\omega,H))$, and, if $\Gamma$ is discrete, an essential barcode $\mathcal{B}_k(\mathcal{N}(M,\omega,H))$ (using Theorem \ref{basistheorem} to obtain suitable doubly-orthogonal bases). The various maps in our definition of $\mathcal{P}(\mathcal{N}(M,\omega,H))$ are obtained by passing to homology from maps defined at chain level, so one could rephrase the definition to yield a chain-level Poincar\'e--Novikov structure as in Section \ref{cpnstr}. If $\Gamma$ is discrete this structure has an associated ``full barcode,'' intended to be considered as a Floer-theoretic version of the interlevel persistence barcode, which can be read off via Theorem \ref{bigdecomp} from the essential barcode of $\mathcal{N}(M,\omega,H)$ together with the concise barcode (from \cite{UZ}) of the Floer-type complex $\mathbf{CF}(H)_{\uparrow}$.
\begin{remark} \label{unnat2} Related to Remark \ref{unnatural}, if $x\in H_k=(\hat{\Lambda}\otimes_{\kappa}H_*(M;\kappa))_{m+k}$, then with notation as in (\ref{pnkdef}) and Definition \ref{pnfloer} one has \[ \rho_{k-2N}(S_{k-2N}(qx))=\rho_{k}(S_kx)-I_{\omega}(A_0),\qquad {}^{\vee}\!\rho_{k-2N}(\tilde{S}_{k-2N}(qx))={}^{\vee}\!\rho_k(\tilde{S}_kx)+I_{\omega}(A_0). \] (This is easiest to see from Corollary \ref{describepn}, but can also be inferred by directly unwinding the various definitions.) In particular, the gaps $\mathcal{G}_{k-2N,i}(\mathcal{N}(M,\omega,H))$ are shifted by $2I_{\omega}(A_0)$ relative to the gaps $\mathcal{G}_{k,i}(\mathcal{N}(M,\omega,H))$.
\end{remark}
If $H_0,H_1\colon\thinspace S^1\times M\to \mathbb{R}$ are two different nondegenerate Hamiltonians, we have a continuation chain homotopy equivalence $h_{H_0,H_1}\colon\thinspace CF_{*}(H_0)\to CF_{*}(H_1)$ whose induced map $\underline{h}_{H_0,H_1}$ on homology obeys $\underline{h}_{H_0,H_1}\circ\Phi_{PSS}^{H_0}=\Phi_{PSS}^{H_1}$. Standard estimates for the behavior of the filtration functions $\ell_{\uparrow}^{H_i}$ under $h_{H_0,H_1}$ then imply as in \cite[Proposition 21.3.6]{ohbook} that $\underline{h}_{H_0,H_1}$ provides a $t$-morphism in the sense of Definition \ref{pntmorph} from $\mathcal{N}(M,\omega,H_0)$ to $\mathcal{N}(M,\omega,H_1)$, with $t=\int_{0}^{1}\max_M|H_1(t,\cdot)-H_0(t,\cdot)|dt$. Hence Corollaries \ref{pngapstab} and \ref{pnstab} imply that the gaps and essential barcode of
$\mathcal{N}(M,\omega,H)$ depend in Lipschitz fashion\footnote{Implicit in this is a metric on barcodes which can be inferred from Corollary \ref{pngapstab}, based on an inf over bijections which match open intervals in degree $k-1$ either to other open intervals in degree $k-1$ or closed intervals in degree $k$, and likewise match closed intervals in degree $k$ either to other closed intervals in degree $k$ or open intervals in degree $k-1$.} on the Hamiltonian $H$ with respect to the $C^0$-norm. In particular we may extend these by continuity to arbitrary Hamiltonians $H$, including degenerate ones. The analogous continuity statement for the concise barcodes of the $\mathbf{CF}(H_i)_{\uparrow}$ is \cite[Corollary 1.5]{UZ}. Note that the behavior of short intervals under continuous variation is different in the concise and essential barcodes, as the bottleneck distance with respect to which the concise barcode is continuous allows for intervals to disappear after their lengths continuously shrink to zero.
\begin{remark}\label{htopyinv}
By using the ``optimal continuation maps'' denoted $h_{(\phi^{\rho},J)}$ in \cite[Section 21.6.2]{ohbook}, which relate the Floer complexes of mean-normalized Hamiltonians $H_0,H_1\colon\thinspace S^1\times M\to \mathbb{R}$ having fixed-endpoint-homotopic flows $\{\phi_{H_i}^{t}\}_{0\leq t\leq 1}$ and which preserve the filtrations $\ell_{\uparrow}^{H_i}$ by \cite[Corollary 21.6.5]{ohbook}, one obtains in this situation an isomorphism ($t$-morphism with $t=0$) between the Poincar\'e--Novikov structures $\mathcal{N}(M,\omega,H_i)$. Hence the gaps and (when applicable) the essential barcode can be regarded as being associated to an element $\tilde{\phi}$ of the universal cover $\widetilde{\mathrm{Ham}}(M,\omega)$ of the Hamiltonian diffeomorphism group by choosing any mean-normalized Hamiltonian whose flow represents $\tilde{\phi}$, and they vary in Lipschitz fashion with respect to the Hofer (pseudo-)norm on $\widetilde{\mathrm{Ham}}(M,\omega)$. One can also deduce a similar invariance result for the concise barcodes using these optimal continuation maps.
Also, any symplectomorphism $\psi$ of $(M,\omega)$ induces an isomorphism $\mathcal{N}(M,\omega,H)\cong \mathcal{N}(M,\omega,H\circ\psi^{-1})$ (provided that the ambiguity in the choice of $A_0$ mentioned in Remark \ref{unnatural} is resolved consistently for $\mathcal{N}(M,\omega,H)$ and $\mathcal{N}(M,\omega,H\circ\psi^{-1})$, \emph{e.g.} by requiring the $I_{\omega}(A_0)$ to agree), so the gaps and/or essential barcodes associated to an element of $\widetilde{\mathrm{Ham}}(M,\omega)$ are invariant under conjugation by the symplectomorphism group $\mathrm{Symp}(M,\omega)$.
\end{remark}
\begin{ex}
Suppose that $H\colon\thinspace S^1\times M\to \mathbb{R}$ is independent of the $S^1$-variable, say $H(t,x)=h(x)$ for some Morse function $h\colon\thinspace M\to \mathbb{R}$. The Hamiltonian flow of $H$ then has rest points at each critical point of $h$. If $h$ is sufficiently $C^2$-small then these critical points $x\in \mathrm{Crit}(h)$ will be the only fixed points of the time-one map $\phi_{H}^{1}$ of $H$, and we will have $\mu_{H}([\gamma_x,w_x])=\mathrm{ind}_{-h}(x)-m$ where $\gamma_x\colon\thinspace S^1\to M$ and $w_x\colon\thinspace D^2\to M$ are constant maps to $x$ and $\mathrm{ind}_{-h}(x)$ is the Morse index of $x$ with respect to the Morse function $-h$. Moreover $\mathcal{A}_{H}([\gamma_x,w_x])=-h(x)$. In this situation one may reasonably expect the Floer complex $CF_{*}(H)$ to coincide, under the map $x\mapsto [\gamma_x,w_x]$, with (the coefficient extension of) the Morse complex, $\hat{\Lambda}_{\uparrow}\otimes \mathbf{CM}_{*-m}(-h)$, and also for the PSS map to coincide with the standard isomorphism between singular and Morse homology. For example, under additional hypotheses and after multiplying $H$ by a small constant, the statement about the Floer complex follows from \cite[Proposition 7.4]{HS}; if one uses the virtual techniques of \cite{Par} (and hence characteristic zero coefficients) to define the Floer complex then the identification of the Floer complex of $H$ with the coefficient-extended Morse complex of $-h$ is established quite generally in the proof of \cite[Theorem 10.7.1]{Par}, and Pardon's $S^1$-localization techniques could plausibly also be used to prove the statement about the PSS map. At any rate, let us assume that we are in a case where these correspondences hold.
In this case our Poincar\'e--Novikov structure $\mathcal{N}(M,\omega,H)$ coincides with the coefficient extension of the Morse-theoretic Poincar\'e--Novikov structure for $-h$ constructed in Section \ref{connext}. Hence, in view of Theorem \ref{introiso}, the gaps and (for discrete $\Gamma$) essential barcode of $\mathcal{N}(M,\omega,H)$ can be read off from the interlevel barcode of the function $-h$. (One should adjust appropriately for the coefficient extension by $\hat{\Lambda}$, in particular noting Remark \ref{unnat2} if $I_{c_1}\neq 0$.) For a concrete example, if we assume that the functions in Figure \ref{genustwo} are scaled to be small enough that our assumptions hold, then the difference between their interlevel barcodes implies a lower bound on the Hofer distance between the elements in $\widetilde{\mathrm{Ham}}$ associated to their time-one Hamiltonian flows. The fact that the sublevel barcodes of these functions are identical implies that no positive lower bound can be deduced from their concise barcodes, so the essential barcodes provide new information.
\end{ex}
To better understand the invariants associated to $\mathcal{N}(M,\omega,H)$ let us work toward an explicit description, up to isomorphism, of the corresponding filtered matched pairs $\mathcal{P}(\mathcal{N}(M,\omega,H))_k$. To avoid various sign computations related to orientations of moduli spaces we shall assume that the field $\kappa$ has characteristic $2$, though this should not be essential modulo multiplying some of our maps by appropriate powers of $-1$. By Definition \ref{pnfloer} and (\ref{pnkdef}), for any $k\in \mathbb{Z}$, $\mathcal{P}(\mathcal{N}(M,\omega,H))_k$ takes the form \begin{equation}\label{hfpn} \xymatrix{ & \left({}^{\vee}\!HF_{-k}(H),{}^{\vee}\!\rho_{\uparrow}^{H}\right) \\ \left(\hat{\Lambda}\otimes_{\kappa}H_{*}(M;\kappa)\right)_{k+m} \ar[ru]^{\tilde{S}_{k}}\ar[rd]_{S_{k}} & \\ & \left(HF_k(H),\rho_{\uparrow}^{H}\right) } \end{equation} where $S_k$ is the restriction of the PSS map $\Phi_{PSS}^{H}$ and the $\Lambda$-module homomorphism $\tilde{S}_k$ is, by (\ref{tildesk}), uniquely characterized by the property that, for all $x\in \left(\hat{\Lambda}\otimes_{\kappa}H_{*}(M;\kappa)\right)_{k+m} $ and $y\in \left(\hat{\Lambda}\otimes_{\kappa}H_{*}(M;\kappa)\right)_{-k+m}$, we have $(\tilde{S}_{k}x)(\Phi_{PSS}^{H}y)=(\mathcal{D}_kx)(y)$.
To begin to motivate what follows, define $\hat{H}\colon\thinspace S^1\times M\to \mathbb{R}$ by $\hat{H}(t,x)=-H(1-t,x)$. Thus the time-one maps of $\hat{H}$ and $H$ are inverse to each other, and a loop $\gamma\colon\thinspace S^1\to M$ is a closed orbit of the Hamiltonian flow of $H$ if and only if its time-reversal $\bar{\gamma}\colon\thinspace S^1\to M$, given by $\bar{\gamma}(t)=\gamma(1-t)$, is a closed orbit of the Hamiltonian flow of $\hat{H}$. The association $\gamma\mapsto \bar{\gamma}$ lifts to an involution $\mathbb{I}\colon\thinspace\widetilde{\mathcal{L}_0M}\to\widetilde{\mathcal{L}_0M}$ given by $\mathbb{I}([\gamma,w])=[\bar{\gamma},\bar{w}]$ where for any $w\colon\thinspace D^2\to M$ we define $\bar{w}\colon\thinspace D^2\to M$ by $\bar{w}(z)=w(\bar{z})$. It is easy to see that the action functionals are related by $\mathcal{A}_{\hat{H}}=-\mathcal{A}_H\circ \mathbb{I}$. Thus, via the action of $\mathbb{I}$, the (formal) positive gradient flow of $\mathcal{A}_H$ corresponds to the negative gradient flow of $\mathcal{A}_{\hat{H}}$. So in analogy with (\ref{novupdown}), we would expect that ${}^{\vee}\!HF_{-k}(H)$ could be replaced in (\ref{hfpn}) by $\overline{HF_k(\hat{H})}$. This will be reflected in Proposition \ref{floerpair}; the effect of the involution $\mathbb{I}$ appears as the conjugation involved in the map to $\overline{HF_k(\hat{H})}$, which, at least if $I_{c_1}=0$, can be interpreted as the action of $\mathbb{I}$ on the homology of the restriction of the covering space $\widetilde{\mathcal{L}_0M}$ to constant loops, corresponding to the submodule $\hat{\Lambda}\otimes_{\kappa}H_*(M;\kappa)$ of quantum homology.
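As a sanity check on the identity $\mathcal{A}_{\hat{H}}=-\mathcal{A}_H\circ \mathbb{I}$, here is the computation under the (convention-dependent) normalization $\mathcal{A}_{H}([\gamma,w])=-\int_{D^2}w^{*}\omega+\int_{0}^{1}H(t,\gamma(t))\,dt$; with the opposite sign convention both sides simply change sign together. Since $z\mapsto \bar{z}$ reverses the orientation of $D^2$ we have $\int_{D^2}\bar{w}^{*}\omega=-\int_{D^2}w^{*}\omega$, while the substitution $s=1-t$ gives $\int_{0}^{1}\hat{H}(t,\bar{\gamma}(t))\,dt=-\int_{0}^{1}H(1-t,\gamma(1-t))\,dt=-\int_{0}^{1}H(s,\gamma(s))\,ds$. Hence \[ \mathcal{A}_{\hat{H}}([\bar{\gamma},\bar{w}])=\int_{D^2}w^{*}\omega-\int_{0}^{1}H(s,\gamma(s))\,ds=-\mathcal{A}_{H}([\gamma,w]). \]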
For any $k\in \mathbb{Z}$, write $\tilde{P}_{-k}(H)$ for the set of critical points $[\gamma,w]$ for $\mathcal{A}_H$ having $\mu_{CZ}([\gamma,w])=-k$. Denote by $P_{-k}(H)=\{\gamma_1,\ldots,\gamma_r\}$ the image of $\tilde{P}_{-k}(H)$ under the covering projection $\pi\colon\thinspace \widetilde{\mathcal{L}_0M}\to\mathcal{L}_{0}M$, and for $i=1,\ldots,r$ choose an element $[\gamma_i,w_i]\in\pi^{-1}(\{\gamma_i\})$. Then $CF_{-k}(H)$ has $\{[\gamma_1,w_1],\ldots,[\gamma_r,w_r]\}$ as a basis over $\Lambda_{\uparrow}$, and $CF_{k}(\hat{H})$ has $\{[\bar{\gamma}_1,\bar{w}_1],\ldots,[\bar{\gamma}_r,\bar{w}_r]\}$ as a basis over $\Lambda_{\uparrow}$. We can accordingly define a bilinear pairing \[ L_{H,k}^{F}\colon\thinspace CF_{-k}(H)\times CF_{k}(\hat{H})\to \Lambda_{\uparrow} \] by, for $\lambda_i,\mu_i\in\Lambda_{\uparrow}$, \[ L_{H,k}^{F}\left(\sum_{i=1}^{r}\lambda_i[\gamma_i,w_i],\sum_{i=1}^{r}\mu_i[\bar{\gamma}_i,\bar{w}_i]\right) = \sum_{i=1}^{r}\lambda_i\mu_i.\] Because, for $A\in \Gamma'$, one has $[\gamma_i,A\#w_i]=T^{I_{\omega}(A)}[\gamma_i,w_i]$ while $[\bar{\gamma}_i,\overline{A\#w_i}]=[\bar{\gamma}_i,(-A)\#\bar{w}_i]=T^{-I_{\omega}(A)}[\bar{\gamma}_i,\bar{w}_i]$, we see that $L_{H,k}^{F}$ is independent of the choice of $w_i$. Moreover (working with characteristic-two coefficients) we have, for any $a\in CF_{-k+1}(H),b\in CF_{k}(\hat{H})$, \[ L_{H,k}^{F}(\partial_{H}a,b)=L_{H,k-1}^{F}(a,\partial_{\hat{H}}b);\] this reflects the fact that a Floer trajectory $u\colon\thinspace \mathbb{R}\times S^1\to M$ for $\hat{H}$ corresponds, by reversing the directions of both the $\mathbb{R}$ and the $S^1$ variables, to a Floer trajectory for $H$.
Therefore $L_{H,k}^{F}$ descends to a bilinear pairing $\underline{L}_{H,k}^{F}\colon\thinspace HF_{-k}(H)\times HF_k(\hat{H})\to \Lambda_{\uparrow}$. Because $L_{H,k}^{F}$ is a non-degenerate pairing, and because $\Lambda_{\uparrow}$ is a field, the universal coefficient theorem implies that $\underline{L}_{H,k}^{F}$ is likewise non-degenerate, so that the map $a\mapsto \underline{L}_{H,k}^{F}(\cdot,a)$ defines an isomorphism of $\Lambda_{\uparrow}$-vector spaces $\lambda_{H,k}^{F}\colon\thinspace HF_{k}(\hat{H})\to HF_{-k}(H)^{*}$. Applying the conjugation functor of Section \ref{conjsect}, the same map $\lambda_{H,k}^{F}$ can be viewed as an isomorphism of $\Lambda^{\downarrow}$-vector spaces $\overline{HF_k(\hat{H})}\cong {}^{\vee}\!HF_{-k}(H)$.
This fits into a situation considered in more generality in \cite{U10}; the correspondence $[\bar{\gamma},\bar{w}]\leftrightarrow [\gamma,w]$ identifies the Floer-type complex $\mathbf{CF}(H)_{\uparrow}$ with what \cite{U10} calls the ``opposite complex'' to $\mathbf{CF}(\hat{H})_{\uparrow}$, and the pairing $L_{H,k}^{F}$ is identified with the pairing $L$ from \cite{U10}. From this we infer:
\begin{prop}\label{dualcompute}
With ${}^{\vee}\!\rho_{\uparrow}^{H}\colon\thinspace {}^{\vee}\!HF_{-k}(H)\to\mathbb{R}\cup\{\infty\}$ defined from $\rho_{\uparrow}^{H}$ as in Section \ref{fmpdual}, for each $a\in HF_{k}(\hat{H})$ we have \[ -\rho_{\uparrow}^{\hat{H}}(a)= {}^{\vee}\!\rho_{\uparrow}^{H}(\lambda_{H,k}^{F}(a)).\]
\end{prop}
\begin{proof}
Denote by $\underline{\Delta}\colon\thinspace HF_{-k}(H)\times HF_k(\hat{H})\to \kappa$ the map given by composing $\underline{L}_{H,k}^{F}$ with the function $\tau\colon\thinspace \Lambda_{\uparrow}\to\kappa$ defined by $\tau\left(\sum_gc_gT^g\right)=c_0$. Then by \cite[Corollary 1.4]{U10} (generalizing \cite[Lemma 2.2]{EP}), we have, for each $a\in HF_k(\hat{H})$, \[ -\rho_{\uparrow}^{\hat{H}}(a)=\inf\{\rho_{\uparrow}^{H}(b)|\underline{\Delta}(b,a)\neq 0\}.\] By definition, \[ {}^{\vee}\!\rho_{\uparrow}^{H}(\lambda_{H,k}^{F}(a))=\inf\left\{\left.\rho_{\uparrow}^{H}(b)+\nu_{\uparrow}\left(\underline{L}_{H,k}^{F}(b,a)\right)\right|b\neq 0\right\}.\] By the definitions of $\nu_{\uparrow}$ and $\underline{\Delta}$, if $\underline{\Delta}(b,a)\neq 0$ then $\nu_{\uparrow}\left(\underline{L}_{H,k}^{F}(b,a)\right)\leq 0$, from which it follows readily that ${}^{\vee}\!\rho_{\uparrow}^{H}(\lambda_{H,k}^{F}(a))\leq -\rho_{\uparrow}^{\hat{H}}(a)$. To establish the reverse inequality, note that if $b\neq 0$ but $\underline{L}_{H,k}^{F}(b,a)=0$ then $\rho_{\uparrow}^{H}(b)+\nu_{\uparrow}\left(\underline{L}_{H,k}^{F}(b,a)\right)=\infty$, while if $\underline{L}_{H,k}^{F}(b,a)\neq 0$ then the element $b'=T^{-\nu_{\uparrow}(\underline{L}_{H,k}^{F}(b,a))}b$ satisfies $\nu_{\uparrow}(\underline{L}_{H,k}^{F}(b',a))=0$, $\underline{\Delta}(b',a)\neq 0$, and $\rho_{\uparrow}^{H}(b')=\rho_{\uparrow}^{H}(b)+\nu_{\uparrow}\left(\underline{L}_{H,k}^{F}(b,a)\right)$. Hence $-\rho_{\uparrow}^{\hat{H}}(a)\leq {}^{\vee}\!\rho_{\uparrow}^{H}(\lambda_{H,k}^{F}(a))$.
\end{proof}
By Proposition \ref{dualcompute}, up to isomorphism the $ \left({}^{\vee}\!HF_{-k}(H),{}^{\vee}\!\rho_{\uparrow}^{H}\right) $ appearing in (\ref{hfpn}) can be replaced by $(\overline{HF_k(\hat{H})},-\rho_{\uparrow}^{\hat{H}})$. The following proposition specifies the map that correspondingly replaces $\tilde{S}_k$ in (\ref{hfpn}). We again identify $\hat{\Lambda}$ with $\Lambda[q^{-1},q]$; as noted in Remark \ref{unnatural}, if $I_{c_1}\neq 0$ this identification depends on the choice of $A_0\in \hat{\Gamma}$ with $I_{c_1}(A_0)=N$. (If $I_{c_1}=0$, $q$ should be interpreted as equal to $1$ in what follows, reflecting that $\hat{\Lambda}=\Lambda$.) We continue to assume that $\mathrm{char}(\kappa)=2$ in order to avoid sign computations.
\begin{prop}\label{floerpair}
Define $\mathfrak{c}\colon\thinspace \Lambda[q^{-1},q]\to \Lambda[q^{-1},q]$ by, for $\lambda_i\in\Lambda$, $\mathfrak{c}\left(\sum_i\lambda_iq^i\right)=\sum_i\bar{\lambda}_iq^i$, and for $a\in \hat{\Lambda}\otimes_{\kappa}H_{*}(M;\kappa)$ let $\bar{a}=(\mathfrak{c}\otimes 1_{H_*(M;\kappa)})(a)$. Then, for all $a\in (\hat{\Lambda} \otimes_{\kappa}H_*(M;\kappa))_{k+m}$, the map $\tilde{S}_k$ of (\ref{hfpn}) satisfies \[ \tilde{S}_k(a)=\lambda_{H,k}^{F}\left(\Phi_{PSS}^{\hat{H}}(\bar{a})\right).\]
\end{prop}
\begin{proof}
The map $a\mapsto \lambda_{H,k}^{F}(\Phi_{PSS}^{\hat{H}}(\bar{a}))$ is a homomorphism of $\Lambda$-modules from $(\hat{\Lambda}\otimes_{\kappa}H_*(M;\kappa))_{k+m}$ to the conjugated dual ${}^{\vee}\!HF_{-k}(H)$. So since $\tilde{S}_k\colon\thinspace (\hat{\Lambda}\otimes_{\kappa}H_*(M;\kappa))_{k+m}\to {}^{\vee}\!HF_{-k}(H)$ is the unique $\Lambda$-module homomorphism obeying (\ref{tildesk}) with $n=0$ and $S_{-k}$ equal to the restriction to $(\hat{\Lambda}\otimes_{\kappa}H_*(M;\kappa))_{-k+m}$ of the PSS isomorphism $\Phi_{PSS}^{H}\colon\thinspace QH_{-k+m}(M,\omega)\to HF_{-k}(H)$, it suffices to show that, for all $a\in (\hat{\Lambda}\otimes_{\kappa}H_*(M;\kappa))_{k+m}$ and $b\in (\hat{\Lambda}\otimes_{\kappa}H_*(M;\kappa))_{-k+m}$ we have \begin{equation} \label{needhat}
\underline{L}_{H,k}^{F}(\Phi_{PSS}^{H}(b),\Phi_{PSS}^{\hat{H}}(\bar{a}))=(\mathcal{D}_ka)(b).
\end{equation}
In this regard, recall that $\Phi_{PSS}^{H}$ is defined via the identification of $H_*(M;\kappa)$ with the Morse homology $\mathbf{HM}_*(f;\kappa)$ of a Morse function $f$ on $M$. (We assume familiarity with Morse homology in what follows; see also Section \ref{morseintro}.) Letting $\mathbf{CM}_*(f;\kappa)$ denote the Morse complex of $f$, $\Phi_{PSS}^{H}$ is defined as the map induced on homology by a chain map $\tilde{\Phi}^{f,H}_{PSS}\colon\thinspace \hat{\Lambda}_{\uparrow}\otimes_{\kappa}\mathbf{CM}_{*+m}(f;\kappa)\to CF_{*}(H)$. The usual continuation isomorphisms between the $\mathbf{HM}_{*}(f;\kappa)$ for different choices of $f$ identify the maps induced on homology by the corresponding $\tilde{\Phi}^{f,H}_{PSS}$; as these continuation isomorphisms also commute with the identifications between Morse and singular homology (\emph{cf}. (\ref{contcomm})) it follows that the same map $\Phi_{PSS}^{H}\colon\thinspace QH_{*+m}(M,\omega)\to HF_*(H)$ is induced by $\tilde{\Phi}^{f,H}_{PSS}$ for any choice of $f$. In particular, if we regard $\Phi_{PSS}^{H}$ as induced by $\tilde{\Phi}^{f,H}_{PSS}$, we may (and shall) regard $\Phi_{PSS}^{\hat{H}}$ as induced by $\tilde{\Phi}^{-f,\hat{H}}_{PSS}$.
Because an index-$(m-k)$ critical point for $f$ is the same thing as an index-$(m+k)$ critical point for $-f$, for each $k$ we have a nondegenerate $\kappa$-bilinear pairing $L^{M,\kappa}_{f,k}\colon\thinspace \mathbf{CM}_{m-k}(f;\kappa)\times \mathbf{CM}_{m+k}(-f;\kappa)\to \kappa$ defined by $L^{M,\kappa}_{f,k}\left(\sum_i \alpha_ip_i,\sum_j\beta_jp_j\right)=\sum_i\alpha_i\beta_i$ for $\alpha_i,\beta_j\in \kappa$, if the index-$(m-k)$ critical points of $f$ are $p_1,\ldots,p_r$. As we work in characteristic two, the Morse boundary operators for $\pm f$ are adjoint with respect to this pairing\footnote{see Corollary \ref{mor} for the appropriate sign in general characteristic}, so it descends to a nondegenerate pairing on homology. In fact, by considering intersections of the descending manifolds for $f$ with those of $-f$, it is easy to see that the resulting pairing on homology coincides with the Poincar\'e intersection pairing under the canonical isomorphisms between $H_*(M;\kappa)$ and $\mathbf{HM}_*(\pm f;\kappa)$. Now let us extend coefficients to obtain a non-degenerate pairing of $\Lambda_{\uparrow}$-vector spaces \[ L^{M}_{f,k}\colon\thinspace \left(\hat{\Lambda}_{\uparrow}\otimes_{\kappa}\mathbf{CM}_*(f;\kappa)\right)_{-k+m} \times \left(\hat{\Lambda}_{\uparrow}\otimes_{\kappa}\mathbf{CM}_*(-f;\kappa)\right)_{k+m}\to \Lambda_{\uparrow} \] by putting, for $\lambda_j,\mu_j\in \Lambda_{\uparrow}$, $x_j\in \mathbf{CM}_{m-k+2Nj}(f;\kappa)$, and $y_j\in \mathbf{CM}_{m+k+2Nj}(-f;\kappa)$, \[ L^{M}_{f,k}\left(\sum_{j\in \mathbb{Z}}q^j\lambda_j\otimes x_j,\sum_{j\in \mathbb{Z}}q^j\mu_j\otimes y_j\right) =\sum_{j\in \mathbb{Z}} \lambda_{-j}\mu_j L^{M,\kappa}_{f,k+2Nj}(x_{-j},y_j).\] Passing to homology and identifying Morse homology with singular homology, we obtain an induced nondegenerate pairing $\underline{L}^{M}_{f,k}\colon\thinspace QH_{-k+m}(M,\omega)\times QH_{k+m}(M,\omega)\to \Lambda_{\uparrow}$ (which is in fact independent of $f$). Given that $L^{M,\kappa}_{f,k}$ induces the Poincar\'e intersection pairing on homology, comparing to (\ref{dquant}) shows that the restriction of $\underline{L}^{M}_{f,k}$ to $(\hat{\Lambda}\otimes_{\kappa}H_*(M;\kappa))_{-k+m}\times (\hat{\Lambda}\otimes_{\kappa}H_*(M;\kappa))_{k+m}$ satisfies \[ (\mathcal{D}_ka)(b)=\underline{L}^{M}_{f,k}(b,\bar{a})\mbox{ for }a\in (\hat{\Lambda}\otimes_{\kappa}H_*(M;\kappa))_{k+m},\,b\in(\hat{\Lambda}\otimes_{\kappa}H_*(M;\kappa))_{-k+m}.\] So, changing variables to $z=\bar{a}$, our desired conclusion (\ref{needhat}) reduces to the claim that, for $b\in (\hat{\Lambda}\otimes_{\kappa}H_*(M;\kappa))_{-k+m}$ and $z\in (\hat{\Lambda}\otimes_{\kappa}H_*(M;\kappa))_{k+m}$, we have \begin{equation}\label{needhat2} \underline{L}^{F}_{H,k}(\Phi_{PSS}^{H}b,\Phi_{PSS}^{\hat{H}}z)=\underline{L}^{M}_{f,k}(b,z).\end{equation}
This follows rather quickly from the standard construction \cite{PSS},\cite[Section 20.3]{ohbook} of the chain homotopy inverse $\tilde{\Psi}^{H,f}_{PSS}\colon\thinspace CF_*(H)\to \hat{\Lambda}_{\uparrow}\otimes_{\kappa}\mathbf{CM}_*(f;\kappa)$ to $\tilde{\Phi}^{f,H}_{PSS}$. One has a well-known relation (see \cite[Section 2.6.8]{EP},\cite[Proof of Proposition 3.13(vii)]{U11}), for all $c\in CF_{-k}(H)$ and $d\in (\hat{\Lambda}_{\uparrow}\otimes_{\kappa}\mathbf{CM}_*(-f;\kappa))_{k+m}$, \[ L_{H,k}^{F}(c,\tilde{\Phi}^{-f,\hat{H}}_{PSS}(d))=L_{f,k}^{M}(\tilde{\Psi}^{H,f}_{PSS}(c),d), \] essentially because the spiked planes enumerated by the left-hand side are in one-to-one correspondence, by reversing the time-variable of the gradient flowline and precomposing the perturbed-holomorphic plane by $z\mapsto 1/z$, with the spiked planes enumerated by the right-hand side. Passing to homology and using that $\tilde{\Psi}^{H,f}_{PSS}$ induces on homology the inverse to $\Phi_{PSS}^{H}$, we infer that (\ref{needhat2}) holds for all $b\in QH_{-k+m}(M,\omega)$ and $z\in QH_{k+m}(M,\omega)$, and in particular for all $b\in (\hat{\Lambda}\otimes_{\kappa}H_*(M;\kappa))_{-k+m}$ and $z\in (\hat{\Lambda}\otimes_{\kappa}H_*(M;\kappa))_{k+m}$. This establishes (\ref{needhat2}), hence (\ref{needhat}), and hence the proposition.
\end{proof}
\begin{cor}\label{describepn}
If $\mathrm{char}(\kappa)=2$, the filtered matched pair $\mathcal{P}(\mathcal{N}(M,\omega,H))_{k}$ associated to the Poincar\'e--Novikov structure $\mathcal{N}(M,\omega,H)$ is isomorphic to the filtered matched pair \begin{equation}\label{betterpn} \xymatrix{ & \left(\overline{HF_k(\hat{H})},-\rho_{\uparrow}^{\hat{H}}\right) \\ (\hat{\Lambda}\otimes_{\kappa}H_*(M;\kappa))_{k+m} \ar[ru]^{\Phi^{\hat{H}}_{PSS}\circ(\mathfrak{c}\otimes 1)} \ar[rd]_{\Phi^{H}_{PSS}} & \\ & (HF_k(H),\rho_{\uparrow}^{H}) } \end{equation} where $\mathfrak{c}\colon\thinspace \hat{\Lambda}\to\hat{\Lambda}$ is defined by $\mathfrak{c}\left(\sum_i \lambda_iq^i\right)=\sum_i\bar{\lambda}_iq^i$ for $\lambda_i\in \Lambda$.
\end{cor}
\begin{proof}
Indeed, the isomorphism with (\ref{hfpn}) is provided by $\lambda_{H,k}^{F}\colon\thinspace \overline{HF_k(\hat{H})}\to {}^{\vee}\!HF_{-k}(H)$, under which $-\rho_{\uparrow}^{\hat{H}}$ corresponds to ${}^{\vee}\!\rho_{\uparrow}^{H}$ and $\Phi^{\hat{H}}_{PSS}\circ(\mathfrak{c}\otimes 1)$ corresponds to $\tilde{S}_k$ by the preceding two propositions.
\end{proof}
For any Hamiltonian $H\colon\thinspace S^1\times M\to \mathbb{R}$ let $\tilde{\phi}_H$ be the element of $\widetilde{\mathrm{Ham}}(M,\omega)$ represented by the flow $\{\phi_{H}^{t}\}_{t\in [0,1]}$.
Recall that if $a\in QH_{*}(M,\omega)$ and $\tilde{\phi}\in \widetilde{\mathrm{Ham}}(M,\omega)$ the \textbf{spectral invariant} $c(a,\tilde{\phi})$ is defined by $c(a,\tilde{\phi})=\rho_{\uparrow}^{H}(\Phi_{PSS}^{H}(a))$ for one and hence every mean-normalized Hamiltonian $H$ such that $\tilde{\phi}_H=\tilde{\phi}$. (If such $H$ are degenerate the right-hand side is defined by continuity, approximating $H$ by nondegenerate Hamiltonians.) From Corollary \ref{describepn} it follows that, for non-degenerate and mean-normalized $H$, the essential barcode of $\mathcal{N}(M,\omega,H)$, if it exists (as it always does if $\Gamma$ is discrete), consists of intervals modulo $\Gamma$-translation $[c(e_j,\tilde{\phi}),-c(\bar{e}_j,\tilde{\phi}^{-1})]^{\Gamma}$ in degree $k$ or $(-c(\bar{e}_j,\tilde{\phi}^{-1}),c(e_j,\tilde{\phi}))^{\Gamma}$ in degree $k-1$ (whichever is nonempty), as $e_j$ varies through a doubly-orthogonal basis for the $\Lambda$-module $(\hat{\Lambda}\otimes_{\kappa}H_*(M;\kappa))_{k+m}$. Here $\bar{e}_j=(\mathfrak{c}\otimes 1)e_j$. (In particular, if $\Gamma=\{0\}$, \emph{e.g.} if $(M,\omega)$ is monotone, then $\bar{e}_j=e_j$.) The $i$th gap $\mathcal{G}_{k,i}(\mathcal{N}(M,\omega,H))$ is, in this situation, equal to the $i$th largest of the values $-\left(c(e_j,\tilde{\phi})+c(\bar{e}_j,\tilde{\phi}^{-1})\right)$ by Proposition \ref{gapchar}. Regardless of whether the essential barcode is defined (\emph{i.e.}, whether doubly-orthogonal bases exist for (\ref{betterpn}) for all $k$), we have \begin{equation}\label{hfgk} \mathcal{G}_{k,i}(\mathcal{N}(M,\omega,H))=-\inf\left\{\gamma\left|\begin{array}{c}(\exists S \subset (\hat{\Lambda}\otimes_{\kappa}H_*(M;\kappa))_{k+m}\mbox{ independent over }\Lambda)\\ \left(\#S=i\mbox{ and }(\forall x\in S)\left(c(x,\tilde{\phi})+c(\bar{x},\tilde{\phi}^{-1})\leq\gamma\right)\right)\end{array}\right.\right\}. \end{equation} When one does have doubly-orthogonal bases, Corollary \ref{pndual} implies the non-obvious fact that, for any $k$, the set of quantities in (\ref{hfgk}) as $i$ varies is reflected across $0$ when $k$ is replaced by $-k$.
The appearance of quantities $c(x,\tilde{\phi})+c(\bar{x},\tilde{\phi}^{-1})$ above is obviously reminiscent of the spectral pseudonorm $\gamma(\tilde{\phi})=c([M],\tilde{\phi})+c([M],\tilde{\phi}^{-1})$ on $\widetilde{\mathrm{Ham}}(M,\omega)$ \cite[Section 22.1]{ohbook}\footnote{As we are taking $\gamma$ to be defined on $\widetilde{\mathrm{Ham}}$ rather than $\mathrm{Ham}(M,\omega)$ we can only call it a pseudonorm, as it is conceivable that $\gamma$ might vanish on some nontrivial element of $\pi_1(\mathrm{Ham}(M,\omega))$.}, noting that $\overline{[M]}=[M]$. Unless $QH_{2m}(M,\omega)$ is one-dimensional (equivalently, either $I_{c_1}=0$ or $N>m$), though, it is not clear whether any of the $\mathcal{G}_{m,i}(\mathcal{N}(M,\omega,H))$ will equal $\gamma(\tilde{\phi})$. As already noted in Remark \ref{htopyinv}, the $\mathcal{G}_{k,i}$ and (when defined) the essential barcode share with $\gamma$ the property of being invariant under conjugation by $\mathrm{Symp}(M,\omega)$.
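As a minimal illustration (with the sign conventions of (\ref{hfgk}), and granting that a one-dimensional module admits any generator as a doubly-orthogonal basis): if $QH_{2m}(M,\omega)$ \emph{is} one-dimensional, then taking $e=[M]$ (so $\bar{e}=[M]$) gives \[ \mathcal{G}_{m,1}(\mathcal{N}(M,\omega,H))=-\left(c([M],\tilde{\phi})+c([M],\tilde{\phi}^{-1})\right)=-\gamma(\tilde{\phi}), \] so that in this case the first gap in degree $m$ recovers the spectral pseudonorm up to sign.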
An interesting question that we leave open is whether, possibly after a slight modification of the definition, the gaps $\mathcal{G}_{k,i}(\mathcal{N}(M,\omega,H))$ of a normalized Hamiltonian $H$ descend to functions on $\mathrm{Ham}(M,\omega)$ rather than just on $\widetilde{\mathrm{Ham}}(M,\omega)$---equivalently, whether they are unchanged if $\tilde{\phi}_H$ is composed with a nontrivial element $\eta\in \pi_1(\mathrm{Ham}(M,\omega))$. Certainly this is the case if the spectral invariants descend to $\mathrm{Ham}(M,\omega)$ as in \cite[Proposition 3.1 (i) and (ii)]{M10}---indeed in that case the essential barcode descends to a function on $\mathrm{Ham}(M,\omega)$---but this is a rather stringent condition. In general (if $I_{c_1}\neq 0$) one should for this purpose probably extend coefficients in the various ingredients of $\mathcal{N}(M,\omega,H)$, taking quantum and Floer homology to be defined as graded modules over $\Lambda_{\uparrow}[q^{-1/N},q^{1/N}]$ rather than over $\Lambda_{\uparrow}[q^{-1},q]$, where $q^{1/N}$ has degree $-2$ and affects the filtration on the Floer complex by addition of $-\frac{I_{\omega}(A_0)}{N}$. With these coefficients, the Seidel representation \cite{Se} on $QH_*(M,\omega)$ and its corresponding isomorphisms $\mathcal{S}_{\eta,H}\colon\thinspace CF_*(H)\to CF_*(\eta^*H)$ (see \cite[p. 515]{MS}) can be taken to be grading-preserving (by multiplying by appropriate powers of $q^{1/N}$), and one has, for $a\in QH_*(M,\omega)$, $\tilde{\phi}\in\widetilde{\mathrm{Ham}}(M,\omega)$, $\eta\in \pi_1(\mathrm{Ham}(M,\omega))$, and $\mathcal{S}$ the Seidel representation, \begin{equation}\label{seidel} c(\mathcal{S}(\eta^{-1})a,\tilde{\phi}\circ\eta)=c(a,\tilde{\phi}) \end{equation} (see, \emph{e.g.}, \cite[(2.4)]{M10}). The fact that the chain isomorphism $\mathcal{S}_{\eta,H}$ shifts filtration by a uniform amount $\mathbb{A}(\eta)$ is well-known to imply that the concise barcodes associated by \cite{UZ} to $\tilde{\phi}$ and to $\tilde{\phi}\circ\eta$ are related by translating each interval by $\mathbb{A}(\eta)$. Since $\mathbb{A}(\eta^{-1})=-\mathbb{A}(\eta)$, the sets $\{c(a,\tilde{\phi})|a\in QH_*(M,\omega)\}$ and $\{-c(a,\tilde{\phi}^{-1})|a\in QH_*(M,\omega)\}$ both likewise shift by $\mathbb{A}(\eta)$ when $\tilde{\phi}$ is replaced by $\tilde{\phi}\circ\eta$, so since the intervals in the essential barcode connect elements of the former set to elements of the latter it seems plausible that the essential barcodes of $\tilde{\phi}$ and $\tilde{\phi}\circ\eta$ would be related by the same shift that the concise barcodes are, and that the gaps $\mathcal{G}_{k,i}$ (representing the signed lengths of the intervals in the essential barcode) would coincide. However, for an element $e_i$ of a doubly-orthogonal basis, the elements $\mathcal{S}(\eta^{-1})e_i$ and $\mathcal{S}(\eta)\bar{e}_i$ would not generally be conjugates, so one would not be able to naively use (\ref{seidel}) to relate doubly-orthogonal bases for $\tilde{\phi}$ and $\tilde{\phi}\circ\eta$. A similar issue arises for relating the gaps of $\tilde{\phi}$ and $\tilde{\phi}\circ\eta$ using the formulation (\ref{hfgk}).
\section{Chain level constructions and persistence modules}\label{clsec}
In the present section we lift the notions of filtered matched pairs and Poincar\'e--Novikov structures to the level of chain complexes and (homotopy classes of) chain maps between them. This will allow us to see the concise and essential barcodes as arising from a common algebraic structure, and eventually to relate these barcodes to interlevel persistence for $S^1$-valued Morse functions.
\subsection{Generalities about filtrations and cones}
We collect here some conventions, definitions, and simple results concerning modules that are filtered by partially ordered sets, and constructions associated to the filtered maps between them.
Let $(\mathcal{P},\preceq)$ be a partially ordered set, and $R$ be a commutative ring. A \textbf{$\mathcal{P}$-filtered module} over $R$ is an $R$-module $M$ equipped with submodules $M^{\preceq p}$ for all $p\in\mathcal{P}$ such that $M^{\preceq p}\subset M^{\preceq q}$ whenever $p\preceq q$, and $\cup_{p\in \mathcal{P}}M^{\preceq p}=M$.
\begin{ex}\label{orthfiltex} In the language of the preceding sections, if $(V_{\uparrow},\rho_{\uparrow})$ is an orthogonalizable $\Lambda_{\uparrow}$-space, then $V_{\uparrow}$ obtains the structure of an $\mathbb{R}$-filtered module over $\kappa$\footnote{While $V_{\uparrow}$ is a vector space over $\Lambda_{\uparrow}$, the individual $V_{\uparrow}^{\preceq p}$ are not preserved by scalar multiplication by elements of $\Lambda_{\uparrow}$ so $V_{\uparrow}$ is not a filtered module over $\Lambda_{\uparrow}$. It is a filtered module over the subring of $\Lambda_{\uparrow}$ consisting of elements with $\nu_{\uparrow}\geq 0$, though we will not use this in a direct way.} by setting, for $p\in \mathbb{R}$, $V_{\uparrow}^{\preceq p}=\{v\in V_{\uparrow}|\rho_{\uparrow}(v)\leq p\}$. (The fact that $V_{\uparrow}^{\preceq p}$ is a $\kappa$-subspace follows from Remark \ref{normconds}.)
On the other hand, if $(V^{\downarrow},\rho^{\downarrow})$ is an orthogonalizable $\Lambda^{\downarrow}$-space, it is the superlevel sets, not the sublevel sets, of $\rho^{\downarrow}$ that are $\kappa$-subspaces of $V^{\downarrow}$. So we obtain an $\mathbb{R}$-filtered module structure (with the usual partial order on $\mathbb{R}$) by setting \[ V^{\downarrow\preceq p}=\{v\in V^{\downarrow}|\rho^{\downarrow}(v)\geq -p\}.\]
\end{ex}
If $M$ and $N$ are $\mathcal{P}$-filtered modules over $R$, an $R$-module homomorphism $\phi\colon\thinspace M\to N$ is said to be filtered if $\phi(M^{\preceq p})\subset N^{\preceq p}$ for all $p$. If $M_1,\ldots,M_k$ are $\mathcal{P}$-filtered modules, their \textbf{filtered direct sum} is the module $\oplus_{i=1}^{k}M_i$ equipped with the filtration given by \[ \left(\oplus_{i=1}^{k}M_i\right)^{\preceq p}=\oplus_{i=1}^{k}(M_i^{\preceq p}).\] If $(V_{\uparrow},\rho_{\uparrow})$ is an orthogonalizable $\Lambda_{\uparrow}$-space, a $\Lambda_{\uparrow}$-basis $\{x_1,\ldots,x_k\}$ is a $\rho_{\uparrow}$-\emph{orthogonal} basis for $V_{\uparrow}$ (in the sense of Definition \ref{orthdef}) if and only if the obvious addition isomorphism \[ \oplus_{i=1}^{k}(\Lambda_{\uparrow}x_i)\to V_{\uparrow} \] is a \emph{filtered} isomorphism (where the $\Lambda_{\uparrow}$-spans $\Lambda_{\uparrow} x_i$ of the $x_i$ are separately given the filtrations that they inherit as submodules of $V_{\uparrow}$).
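To make the relation between orthogonality and filtered direct sums concrete, here is a toy example (assuming $1\in\Gamma$, so that $T=T^{1}\in\Lambda_{\uparrow}$, and recalling that orthogonality in Definition \ref{orthdef} amounts to the identity $\rho_{\uparrow}\left(\sum_i\lambda_ix_i\right)=\max_i\rho_{\uparrow}(\lambda_ix_i)$). Let $V_{\uparrow}=\Lambda_{\uparrow}^{2}$ with $\rho_{\uparrow}(\lambda_1,\lambda_2)=\max\{-\nu_{\uparrow}(\lambda_1),-\nu_{\uparrow}(\lambda_2)\}$. The standard basis $\{e_1,e_2\}$ is orthogonal, and the addition map $\Lambda_{\uparrow}e_1\oplus\Lambda_{\uparrow}e_2\to V_{\uparrow}$ is a filtered isomorphism. By contrast the basis $\{x_1,x_2\}=\{(1,1),(1,1+T)\}$ is not orthogonal: $x_1-x_2=(0,-T)$ has $\rho_{\uparrow}(x_1-x_2)=-1$, whereas orthogonality would force $\rho_{\uparrow}(x_1-x_2)=\max\{\rho_{\uparrow}(x_1),\rho_{\uparrow}(-x_2)\}=0$.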
A \textbf{$\mathcal{P}$-filtered chain complex} of $R$-modules is a chain complex $\mathcal{C}=(C_*,\partial)$ of $R$-modules in which the chain modules $C_k$ are $\mathcal{P}$-filtered and the boundary maps $\partial\colon\thinspace C_k\to C_{k-1}$ are filtered homomorphisms. Thus for each $p\in \mathcal{P}$ we have a subcomplex $\mathcal{C}^{\preceq p}$ of $\mathcal{C}$, with the chain modules of $\mathcal{C}^{\preceq p}$ given by the $C_{k}^{\preceq p}$. Recall that a \textbf{persistence module} over $\mathcal{P}$ is a functor to the category of $R$-modules from the category with object set $\mathcal{P}$ and a unique morphism from $p$ to $q$ whenever $p\preceq q$. For each $k$, the homologies $H_{k}(\mathcal{C}^{\preceq p})$, together with the maps $H_{k}(\mathcal{C}^{\preceq p})\to H_k(\mathcal{C}^{\preceq q})$ induced by the inclusions $C_k^{\preceq p}\to C_{k}^{\preceq q}$, then form a persistence module. Persistence modules form a category, with morphisms given by natural transformations of functors.
If $\mathcal{C}=(C_*,\partial_C)$ and $\mathcal{D}=(D_*,\partial_D)$ are $\mathcal{P}$-filtered chain complexes, two filtered chain maps $f,g\colon\thinspace \mathcal{C}\to\mathcal{D}$ are said to be filtered homotopic if there is a filtered module homomorphism $K\colon\thinspace C_*\to D_{*+1}$ such that $f-g=\partial_D\circ K+K\circ \partial_C$. A \textbf{filtered homotopy equivalence} from $\mathcal{C}$ to $\mathcal{D}$ is a filtered chain map $f\colon\thinspace \mathcal{C}\to\mathcal{D}$ for which there is a filtered chain map $h\colon\thinspace \mathcal{D}\to\mathcal{C}$ such that $f\circ h$ and $h\circ f$ are filtered homotopic to the respective identities. A \textbf{filtered quasi-isomorphism} from $\mathcal{C}$ to $\mathcal{D}$ is a filtered chain map $f\colon\thinspace\mathcal{C}\to\mathcal{D}$ such that each induced map on homology $f_*\colon\thinspace H_k(\mathcal{C}^{\preceq p})\to H_k(\mathcal{D}^{\preceq p})$ is an isomorphism, \emph{i.e.} such that $f$ induces an isomorphism between the corresponding homology persistence modules. Of course, a filtered homotopy equivalence is a filtered quasi-isomorphism. In general, we will use the terms ``homotopy'' and ``chain homotopy'' interchangeably.
Now suppose that $\mathcal{C}=(C_*,\partial_C)$ is a $\mathcal{P}$-filtered chain complex of $R$-modules and $\mathcal{D}=(D_*,\partial_D)$ is another complex of $R$-modules (which we do not assume to be filtered). If $f\colon\thinspace \mathcal{C}\to\mathcal{D}$ is a chain map, the \textbf{mapping cone} of $f$, denoted $\mathrm{Cone}(f)$, is the chain complex whose $i$th graded part is $\mathrm{Cone}(f)_i:=C_{i-1}\oplus D_i$ and whose differential is given by $\partial_{\mathrm{Cone}}(c,d)=(-\partial_Cc,fc+\partial_Dd)$. $\mathrm{Cone}(f)$ inherits the $\mathcal{P}$-filtration from $\mathcal{C}$, setting $\mathrm{Cone}(f)_{i}^{\preceq p}=C_{i-1}^{\preceq p}\oplus D_i$.
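As a quick check that $\partial_{\mathrm{Cone}}$ squares to zero with these sign conventions: for $(c,d)\in\mathrm{Cone}(f)_i$, \[ \partial_{\mathrm{Cone}}\left(\partial_{\mathrm{Cone}}(c,d)\right)=\partial_{\mathrm{Cone}}\left(-\partial_Cc,\,fc+\partial_Dd\right)=\left(\partial_{C}^{2}c,\,-f\partial_Cc+\partial_Dfc+\partial_{D}^{2}d\right)=(0,0), \] where the second coordinate vanishes because $f$ is a chain map, \emph{i.e.} $\partial_D\circ f=f\circ\partial_C$.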
For all $p\in\mathcal{P}$ one then has a short exact sequence of chain complexes \[ \xymatrix{0 \ar[r] & \mathcal{D}\ar[r] & \mathrm{Cone}(f)^{\preceq p}\ar[r] & \mathcal{C}^{\preceq p}[-1] \ar[r] & 0 } \] (where the maps are the obvious inclusion and projection, and where we use the usual convention that, for a chain complex $\mathcal{E}=(E_*,\partial_E)$ and for $a\in \mathbb{Z}$, $\mathcal{E}[a]$ denotes the chain complex whose $k$th graded part is $E_{k+a}$ and whose differential is $(-1)^{a}\partial_E$), and the induced long exact sequence on homology has its connecting homomorphism $H_k(\mathcal{C}^{\preceq p}[-1])=H_{k-1}(\mathcal{C}^{\preceq p})\to H_{k-1}(\mathcal{D})$ equal to the map on homology induced by $f|_{C_{k-1}^{\preceq p}}$.
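Spelled out, the resulting long exact sequence takes the form \[ \cdots\to H_k(\mathcal{D})\to H_k\left(\mathrm{Cone}(f)^{\preceq p}\right)\to H_{k-1}(\mathcal{C}^{\preceq p})\xrightarrow{\left(f|_{C_{k-1}^{\preceq p}}\right)_{*}} H_{k-1}(\mathcal{D})\to\cdots. \]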
\begin{prop}\label{filtinv}
Let $\mathcal{C},\hat{\mathcal{C}}$ be two $\mathcal{P}$-filtered chain complexes, and $\mathcal{D},\hat{\mathcal{D}}$ be two chain complexes,
and consider a diagram of chain maps \begin{equation}\label{coneseq} \xymatrix{ \mathcal{C}\ar[r]^f \ar[d]_{\gamma} & \mathcal{D} \ar[d]^{\delta} \\ \hat{\mathcal{C}}\ar[r]^{\hat{f}} & \hat{\mathcal{D}} }\end{equation} where $\gamma$ is a filtered quasi-isomorphism, $\delta$ is a quasi-isomorphism, and the two compositions $\delta\circ f,\hat{f}\circ \gamma\colon\thinspace \mathcal{C}\to \hat{\mathcal{D}}$ are homotopic. Then there is a \textbf{filtered} quasi-isomorphism $\mathrm{Cone}(f)\to\mathrm{Cone}(\hat{f})$.
\end{prop}
\begin{proof}
Let $K\colon\thinspace C_{*-1}\to \hat{D}_*$ obey $\partial_{\hat{D}}K+K\partial_{C}=\delta f-\hat{f}\gamma$. This identity, together with the fact that $\gamma$ and $\delta$ are chain maps, implies that the map $A\colon\thinspace \mathrm{Cone}(f)\to \mathrm{Cone}(\hat{f})$ defined by, for $c\in C_{k-1}$ and $d\in D_k$, \[ A(c,d)=(\gamma c, Kc+\delta d),\] is a chain map. Evidently $A$ sends each filtered subcomplex $\mathrm{Cone}(f)^{\preceq p}=\mathcal{C}^{\preceq p}\oplus \mathcal{D}$ to $\mathrm{Cone}(\hat{f})^{\preceq p}$. Moreover $A$ intertwines the exact sequences as in (\ref{coneseq}) into a commutative diagram \[ \xymatrix{ 0 \ar[r] & \mathcal{D}\ar[r]\ar[d]_{\delta} & \mathrm{Cone}(f)^{\preceq p}\ar[r]\ar[d]_{A} & \mathcal{C}^{\preceq p}[-1]\ar[d]_{\gamma} \ar[r] & 0 \\ 0 \ar[r] & \hat{\mathcal{D}}\ar[r] & \mathrm{Cone}(\hat{f})^{\preceq p}\ar[r] & \hat{\mathcal{C}}^{\preceq p}[-1] \ar[r] & 0 } \] The resulting diagram of long exact sequences on homology is therefore also commutative, and so the fact that $A$ induces isomorphisms $H_k(\mathrm{Cone}(f)^{\preceq p})\cong H_k(\mathrm{Cone}(\hat{f})^{\preceq p})$ for all $k$ and $p$ follows from the five lemma and the assumptions that $\delta$ is a quasi-isomorphism and $\gamma$ is a filtered quasi-isomorphism.
\end{proof}
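For concreteness, the chain-map property of $A$ used in the preceding proof amounts to the following routine computation: for $(c,d)\in\mathrm{Cone}(f)_{k}$, \begin{align*} \partial_{\mathrm{Cone}(\hat{f})}A(c,d)&=\left(-\partial_{\hat{C}}\gamma c,\ \hat{f}\gamma c+\partial_{\hat{D}}Kc+\partial_{\hat{D}}\delta d\right),\\ A\,\partial_{\mathrm{Cone}(f)}(c,d)&=\left(-\gamma\partial_{C}c,\ -K\partial_{C}c+\delta fc+\delta\partial_{D}d\right), \end{align*} and these agree since $\gamma$ and $\delta$ are chain maps while $\partial_{\hat{D}}K=\delta f-\hat{f}\gamma-K\partial_{C}$.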
\subsection{From chain-level filtered matched pairs to persistence modules}\label{pmdefsec}
\cite[Definition 4.1]{UZ} defines a \textbf{Floer-type complex} $\mathcal{C}_{\uparrow}=(C_{\uparrow *},\partial_{\mathcal{C}_{\uparrow}},\ell_{\uparrow})$ to be a chain complex $(C_{\uparrow *}=\oplus_kC_{\uparrow k},\partial_{\mathcal{C}_{\uparrow}})$ over the Novikov field $\Lambda_{\uparrow}$ equipped with a function $\ell_{\uparrow}\colon\thinspace C_{\uparrow*}\to\mathbb{R}\cup\{-\infty\}$ such that each $(C_{\uparrow k},\ell_{\uparrow}|_{C_{\uparrow k}})$ is an orthogonalizable $\Lambda_{\uparrow}$-space and, for all $x\in C_{\uparrow *}$, we have $\ell_{\uparrow}(\partial_{\mathcal{C}_{\uparrow}}x)\leq \ell_{\uparrow}(x)$. Said differently, endowing each $C_{\uparrow k}$ with the $\mathbb{R}$-filtration described in Example \ref{orthfiltex}, the last condition amounts to each $\partial_{\mathcal{C}_{\uparrow}}|_{C_{\uparrow k}}$ being a filtered homomorphism.
In this paper we will refer to the Floer-type complexes of \cite{UZ} as ``$\Lambda_{\uparrow}$-Floer-type complexes,'' as, just as in Definition \ref{orthdef}, one has the companion notion of a $\Lambda^{\downarrow}$\textbf{-Floer-type complex}: a structure $\mathcal{C}^{\downarrow}=(C^{\downarrow}_{*},\partial_{\mathcal{C}^{\downarrow}},\ell^{\downarrow})$ with each $(C^{\downarrow}_{k},\ell^{\downarrow}|_{C^{\downarrow}_{k}})$ an orthogonalizable $\Lambda^{\downarrow}$-space and, for each $x\in C^{\downarrow}_{k}$, $\ell^{\downarrow}(\partial_{\mathcal{C}^{\downarrow}}x)\geq \ell^{\downarrow}(x)$. As in Remark \ref{switch}, $(C^{\downarrow}_{*},\partial_{\mathcal{C}^{\downarrow}},\ell^{\downarrow})$ is a $\Lambda^{\downarrow}$-Floer-type complex if and only if $(\overline{C^{\downarrow}}_{*},\partial_{\mathcal{C}^{\downarrow}},-\ell^{\downarrow})$ is a $\Lambda_{\uparrow}$-Floer-type complex, so general results in \cite{UZ} about Floer-type complexes apply to either type (with appropriate sign adjustments in the $\Lambda^{\downarrow}$ case).
\begin{dfn}\label{cfmpdfn}
A \textbf{chain-level filtered matched pair} $\mathcal{CP}=(\mathcal{C},\mathcal{C}^{\downarrow},\mathcal{C}_{\uparrow},\phi^{\downarrow},\phi_{\uparrow})$ consists of the following data:
\begin{itemize}\item a chain complex $\mathcal{C}=(C_*,\partial)$ of modules over the group algebra $\Lambda=\kappa[\Gamma]$, such that each graded piece $C_k$ is finitely generated and free as a $\Lambda$-module;
\item a $\Lambda^{\downarrow}$-Floer-type complex $\mathcal{C}^{\downarrow}=(C^{\downarrow}_{*},\partial_{\mathcal{C}^{\downarrow}},\ell^{\downarrow})$, and a $\Lambda_{\uparrow}$-Floer-type complex $\mathcal{C}_{\uparrow}=(C_{\uparrow *},\partial_{\mathcal{C}_{\uparrow}},\ell_{\uparrow})$;
\item chain maps (commuting with the $\Lambda$-actions) $\phi^{\downarrow}\colon\thinspace C_*\to C^{\downarrow}_{*}$ and $\phi_{\uparrow}\colon\thinspace C_*\to C_{\uparrow *}$ such that the coefficient extensions $1_{\Lambda^{\downarrow}}\otimes\phi^{\downarrow}\colon\thinspace \Lambda^{\downarrow}\otimes_{\Lambda}C_*\to C^{\downarrow}_{*}$ and $1_{\Lambda_{\uparrow}}\otimes \phi_{\uparrow}\colon\thinspace \Lambda_{\uparrow}\otimes_{\Lambda}C_*\to C_{\uparrow *}$ are chain homotopy equivalences.
\end{itemize}
\end{dfn}
\begin{remark}\label{cfmpfmp}
For any $\Lambda_{\uparrow}$-Floer-type complex $\mathcal{C}_{\uparrow}=(C_{\uparrow *},\partial_{\mathcal{C}_{\uparrow}},\ell_{\uparrow})$ and any $k\in \mathbb{Z}$, \cite[Proposition 6.6]{UZ} shows that the $k$th homology $H_k(\mathcal{C}_{\uparrow})$ becomes an orthogonalizable $\Lambda_{\uparrow}$-space with respect to the filtration function $\rho_{\uparrow}\colon\thinspace H_k(\mathcal{C}_{\uparrow})\to\mathbb{R}\cup\{-\infty\}$ defined by \[ \rho_{\uparrow}(\alpha)=\inf\{\ell_{\uparrow}(c)|c\in C_{\uparrow k},\partial_{\mathcal{C}_{\uparrow}}c=0,\,[c]=\alpha\}.\] Likewise, for a $\Lambda^{\downarrow}$-Floer-type complex $\mathcal{C}^{\downarrow}=(C^{\downarrow}_{*},\partial_{\mathcal{C}^{\downarrow}},\ell^{\downarrow})$ and $k\in \mathbb{Z}$, we have an orthogonalizable $\Lambda^{\downarrow}$-space $(H_k(\mathcal{C}^{\downarrow}),\rho^{\downarrow})$ with $\rho^{\downarrow}\colon\thinspace H_k(\mathcal{C}^{\downarrow})\to\mathbb{R}\cup\{+\infty\}$ defined by \[ \rho^{\downarrow}(\alpha)=\sup\{\ell^{\downarrow}(c)|c\in C^{\downarrow}_{k},\partial_{\mathcal{C}^{\downarrow}}c=0,\,[c]=\alpha\}.\]
Hence a chain-level filtered matched pair as in Definition \ref{cfmpdfn} induces, for each $k$, a filtered matched pair in the sense of Definition \ref{fmpdfn}, namely \[ \xymatrix{ & (H_k(\mathcal{C}^{\downarrow}),\rho^{\downarrow}) \\ H_k(\mathcal{C})\ar[ru]^{\phi^{\downarrow}_{*}}\ar[rd]_{\phi_{\uparrow *}} & \\ & (H_k(\mathcal{C}_{\uparrow}),\rho_{\uparrow}) }\] We denote this latter filtered matched pair by $\mathcal{H}_k(\mathcal{CP})$.
\end{remark}
Give $\mathbb{R}^2$ the partial order defined by declaring $(s,t)\preceq (s',t')$ iff both $s\leq s'$ and $t\leq t'$. Given a chain-level filtered matched pair, we now describe a construction of, for each $k\in \mathbb{Z}$, an isomorphism class of persistence modules $\mathbb{H}_k(\mathcal{CP})=\{\mathbb{H}_k(\mathcal{CP})_{s,t}\}_{(s,t)\in \mathbb{R}^2}$ over $\mathbb{R}^2$. Writing $\mathcal{CP}= (\mathcal{C},\mathcal{C}^{\downarrow},\mathcal{C}_{\uparrow},\phi^{\downarrow},\phi_{\uparrow})$ as in the definition, let $\psi^{\downarrow}\colon\thinspace C^{\downarrow}_{*}\to \Lambda^{\downarrow}\otimes_{\Lambda}C_*$ and $\psi_{\uparrow}\colon\thinspace C_{\uparrow *}\to \Lambda_{\uparrow}\otimes_{\Lambda}C_*$ be chain homotopy inverses to $1_{\Lambda^{\downarrow}}\otimes\phi^{\downarrow}$ and $1_{\Lambda_{\uparrow}}\otimes\phi_{\uparrow}$, respectively. Note that these are defined uniquely up to homotopy: if $\psi'_{\uparrow}$ is a different choice of homotopy inverse to $1_{\Lambda_{\uparrow}}\otimes\phi_{\uparrow}$ then $\psi_{\uparrow}$ and $\psi'_{\uparrow}$ are each homotopic to $\psi'_{\uparrow}\circ(1_{\Lambda_{\uparrow}}\otimes\phi_{\uparrow})\circ\psi_{\uparrow}$ and hence are homotopic to each other, and similarly for $\psi^{\downarrow}$. Now recall the $\Lambda$-module $\Lambda_{\updownarrow}$ from (\ref{updown}), and the inclusions $j^{\downarrow}\colon\thinspace \Lambda^{\downarrow}\to \Lambda_{\updownarrow}$ and $j_{\uparrow}\colon\thinspace \Lambda_{\uparrow}\to\Lambda_{\updownarrow}$. We unite the maps $\psi^{\downarrow},\psi_{\uparrow}$ into a single map \begin{equation}\label{bigmap} -j^{\downarrow}\otimes \psi^{\downarrow}+j_{\uparrow}\otimes\psi_{\uparrow}\colon\thinspace C^{\downarrow}_{*}\oplus C_{\uparrow *}\to \Lambda_{\updownarrow}\otimes_{\Lambda}C_*.\end{equation} This is a chain map (in the category of complexes over $\Lambda$), whose homotopy class depends only on the original data $\mathcal{CP}= (\mathcal{C},\mathcal{C}^{\downarrow},\mathcal{C}_{\uparrow},\phi^{\downarrow},\phi_{\uparrow})$ since $\psi^{\downarrow},\psi_{\uparrow}$ are unique up to homotopy.
The domain of (\ref{bigmap}) is naturally filtered by $\mathbb{R}^2$, by setting \[ (C^{\downarrow}_{*}\oplus C_{\uparrow *})^{\preceq (s,t)}=\{(x,y)\in C^{\downarrow}_{*}\oplus C_{\uparrow *}|\ell^{\downarrow}(x)\geq -s\mbox{ and } \ell_{\uparrow}(y)\leq t\} \] (\emph{cf}. Example \ref{orthfiltex}). We then define \begin{equation}\label{hkdef} \mathbb{H}_k(\mathcal{CP})_{s,t}=H_{k+1}\left(\mathrm{Cone}\left(\xymatrix{\left(C^{\downarrow}_{*}\oplus C_{\uparrow *}\right)^{\preceq (s,t)}\ar[rrr]^{-j^{\downarrow}\otimes \psi^{\downarrow}+j_{\uparrow}\otimes\psi_{\uparrow}}&&&\Lambda_{\updownarrow}\otimes_{\Lambda}C_* }\right)\right).\end{equation}
The inclusions $\left(C^{\downarrow}_{*}\oplus C_{\uparrow *}\right)^{\preceq (s,t)}\hookrightarrow \left(C^{\downarrow}_{*}\oplus C_{\uparrow *}\right)^{\preceq (s',t')}$ induce maps that make $\mathbb{H}_k(\mathcal{CP})=\{\mathbb{H}_k(\mathcal{CP})_{s,t}\}_{(s,t)\in \mathbb{R}^2}$ into a persistence module of $\kappa$-vector spaces over $\mathbb{R}^2$. In motivating examples, as demonstrated by Theorem \ref{bigiso}, if $s+t\geq 0$ (so that the interval $[-s,t]$ is nonempty) then $\mathbb{H}_k(\mathcal{CP})_{s,t}$ is isomorphic to the singular homology $H_k(f^{-1}([-s,t]);\kappa)$ for some Morse function $f$. However, in our conventions $\mathbb{H}_k(\mathcal{CP})_{s,t}$ is still defined when $s+t<0$ even though it is not meant to be interpreted as the homology of an interlevel set in this case.
To give some hint of the motivation for (\ref{hkdef}), for a continuous function $f\colon\thinspace X\to \mathbb{R}$ and an interval $I\subset \mathbb{R}$ write $X_{I}=f^{-1}(I)$, and for any space $Y$ denote the $\kappa$-coefficient singular chain complex of $Y$ by $\mathbf{S}_*(Y)$. Then, if $s+t>0$, the standard proof of the Mayer--Vietoris sequence gives rise to an isomorphism \[ H_{k}(X_{[-s,t]};\kappa)\cong H_{k+1}\left(\mathrm{Cone}\left(\xymatrix{ \mathbf{S}_*(X_{[-s,\infty)})\oplus \mathbf{S}_{*}(X_{(-\infty,t]}) \ar[rr]^<<<<<<<<<<{-j_{[-s,\infty)}+j_{(-\infty,t]}} & & \mathbf{S}_*(X) }\right)\right) \]
where $j_{[-s,\infty)}$ and $j_{(-\infty,t]}$ are the inclusions into $\mathbf{S}_*(X)$.
The definition (\ref{hkdef}) is designed to parallel this, in a way that also works in the context of Novikov homology. This reasoning is made more precise in the proof of Theorem \ref{introiso} in Section \ref{isosect}.
While (\ref{hkdef}) requires a choice of homotopy inverses $\psi_{\uparrow},\psi^{\downarrow}$, the fact that these homotopy inverses are unique up to homotopy implies that the persistence module isomorphism type of $\mathbb{H}_k(\mathcal{CP})$ depends only on the original data $\mathcal{CP}$ by Proposition \ref{filtinv}. We will also see that we have invariance under the following relation:
\begin{dfn}\label{fmhedef}
Two chain-level filtered matched pairs $\mathcal{CP}=(\mathcal{C},\mathcal{C}^{\downarrow},\mathcal{C}_{\uparrow},\phi^{\downarrow},\phi_{\uparrow})$ and $\widehat{\mathcal{CP}}=(\widehat{\mathcal{C}},\widehat{\mathcal{C}}^{\downarrow},\widehat{\mathcal{C}}_{\uparrow},\widehat{\phi}^{\downarrow},\widehat{\phi}_{\uparrow})$ are said to be \textbf{filtered matched homotopy equivalent} if there is a diagram of chain maps \begin{equation}\label{sixdiagram} \xymatrix{ & \mathcal{C}^{\downarrow}\ar[rr]^f & & \widehat{\mathcal{C}}^{\downarrow} \\ \mathcal{C}\ar[ur]^<<<<<<{\phi^{\downarrow}}\ar[rr]^{g}\ar[dr]_<<<<<<{\phi_{\uparrow}} & & \widehat{\mathcal{C}}\ar[ur]_>>>>{\widehat{\phi}^{\downarrow}}\ar[dr]^>>>>{\widehat{\phi}_{\uparrow}} &\\ & \mathcal{C}_{\uparrow}\ar[rr]^h & & \widehat{\mathcal{C}}_{\uparrow} & } \end{equation} where $f$ and $h$ are filtered homotopy equivalences, $g$ is a homotopy equivalence, and the two parallelograms commute up to homotopy.
\end{dfn}
\begin{prop}\label{persinv}
If the chain-level filtered matched pairs $\mathcal{CP}=(\mathcal{C},\mathcal{C}^{\downarrow},\mathcal{C}_{\uparrow},\phi^{\downarrow},\phi_{\uparrow})$ and $\widehat{\mathcal{CP}}=(\widehat{\mathcal{C}},\widehat{\mathcal{C}}^{\downarrow},\widehat{\mathcal{C}}_{\uparrow},\widehat{\phi}^{\downarrow},\widehat{\phi}_{\uparrow})$ are filtered matched homotopy equivalent then for each $k$ there is an isomorphism of persistence modules $\mathbb{H}_k(\mathcal{CP})\cong \mathbb{H}_k(\widehat{\mathcal{CP}})$.
\end{prop}
\begin{proof}
Let $\psi^{\downarrow},\psi_{\uparrow},\widehat{\psi}^{\downarrow},\widehat{\psi}_{\uparrow}$ be homotopy inverses to $1_{\Lambda^{\downarrow}}\otimes \phi^{\downarrow},1_{\Lambda_{\uparrow}}\otimes \phi_{\uparrow},1_{\Lambda^{\downarrow}}\otimes \widehat{\phi}^{\downarrow}$ and $1_{\Lambda_{\uparrow}}\otimes \widehat{\phi}_{\uparrow}$, respectively. Then, using $\sim$ to denote the relation of chain homotopy between chain maps, the fact that the upper parallelogram in (\ref{sixdiagram}) commutes up to homotopy implies (after tensoring with $\Lambda^{\downarrow}$) that \[ (1_{\Lambda^{\downarrow}}\otimes \widehat{\phi}^{\downarrow})\circ (1_{\Lambda^{\downarrow}}\otimes g)\circ \psi^{\downarrow}\sim f\circ(1_{\Lambda^{\downarrow}}\otimes \phi^{\downarrow})\circ\psi^{\downarrow}\sim f \] and hence \[ \widehat{\psi}^{\downarrow}\circ f\sim
\widehat{\psi}^{\downarrow}\circ(1_{\Lambda^{\downarrow}}\otimes \widehat{\phi}^{\downarrow})\circ (1_{\Lambda^{\downarrow}}\otimes g)\circ \psi^{\downarrow}\sim (1_{\Lambda^{\downarrow}}\otimes g)\circ \psi^{\downarrow}.\] Similarly $\widehat{\psi}_{\uparrow}\circ h\sim (1_{\Lambda_{\uparrow}}\otimes g)\circ\psi_{\uparrow}$. So we have commutative-up-to-homotopy diagrams \begin{equation}\label{twosquares} \xymatrix{\mathcal{C}^{\downarrow}\ar[r]^{\psi^{\downarrow}} \ar[d]_{f} & \Lambda^{\downarrow}\otimes_{\Lambda}\mathcal{C} \ar[d]^{1_{\Lambda^{\downarrow}}\otimes g} \\ \widehat{\mathcal{C}}^{\downarrow}\ar[r]_{\widehat{\psi}^{\downarrow}} & \Lambda^{\downarrow}\otimes_{\Lambda}\widehat{\mathcal{C}}}\qquad \xymatrix{\mathcal{C}_{\uparrow}\ar[r]^{\psi_{\uparrow}} \ar[d]_{h} & \Lambda_{\uparrow}\otimes_{\Lambda}\mathcal{C} \ar[d]^{1_{\Lambda_{\uparrow}}\otimes g} \\ \widehat{\mathcal{C}}_{\uparrow}\ar[r]_{\widehat{\psi}_{\uparrow}} & \Lambda_{\uparrow}\otimes_{\Lambda}\widehat{\mathcal{C}}} \end{equation} It follows that the diagram \[ \xymatrix{ \mathcal{C}^{\downarrow}\oplus \mathcal{C}_{\uparrow}\ar[rrr]^{-j^{\downarrow}\otimes \psi^{\downarrow}+j_{\uparrow}\otimes\psi_{\uparrow}} \ar[d]_{f\oplus h} &&& \Lambda_{\updownarrow}\otimes_{\Lambda}\mathcal{C} \ar[d]^{1_{\Lambda_{\updownarrow}}\otimes g} \\ \widehat{\mathcal{C}}^{\downarrow}\oplus \widehat{\mathcal{C}}_{\uparrow}\ar[rrr]^{-j^{\downarrow}\otimes \widehat{\psi}^{\downarrow}+j_{\uparrow}\otimes\widehat{\psi}_{\uparrow}} &&& \Lambda_{\updownarrow}\otimes_{\Lambda}\widehat{\mathcal{C}} } \] also commutes up to homotopy, and so the result follows from Proposition \ref{filtinv}.
\end{proof}
\subsection{Building blocks}\label{bbs}
Let us compute the graded persistence modules $\mathbb{H}_*(\mathcal{CP})=\oplus_k\mathbb{H}_k(\mathcal{CP})$ associated by (\ref{hkdef}) to various rather simple examples of chain-level filtered matched pairs $\mathcal{CP}$. As we will see later, when $\Gamma$ is discrete, any chain-level filtered matched pair $\mathcal{CP}$ is filtered matched homotopy equivalent to a direct sum of examples such as those considered presently.
\subsubsection{$\mathcal{PE}_{\uparrow}(a,L,k)$}\label{peupsec} First, borrowing notation from the second case of \cite[Definition 7.2]{UZ}, if $a\in \mathbb{R}$, $L\in[0,\infty)$, and $k\in\mathbb{Z}$, let $\mathcal{E}_{\uparrow}(a,L,k)$ denote the Floer-type complex $(E_*,\partial_E,\ell_E)$ given by setting $E_k$ and $E_{k+1}$ equal to one-dimensional vector spaces over $\Lambda_{\uparrow}$ generated by symbols $x$ and $y$ respectively while $E_j=\{0\}$ for $j\notin\{k,k+1\}$, putting $\partial_Ey=x$ and $\partial_Ex=0$, and defining the filtration function $\ell_E$ by $\ell_E(\lambda x)=a-\nu_{\uparrow}(\lambda)$ and $\ell_E(\lambda y)=a+L-\nu_{\uparrow}(\lambda)$. The identity map on the chain complex $(E_*,\partial_E)$ is homotopic to the zero map (the chain homotopy is given by sending $x$ to $y$ and $y$ to $0$), in view of which we may define a chain-level filtered matched pair $\mathcal{PE}_{\uparrow}(a,L,k)$ by \[\xymatrix{ & 0 \\ 0 \ar[ru]^{0} \ar[rd]_{0} & \\ & \mathcal{E}_{\uparrow}(a,L,k)}\] By definition, we have, for any $j\in\mathbb{Z}$ and $(s,t)\in \mathbb{R}^2$, \begin{align*}
\mathbb{H}_j(\mathcal{PE}_{\uparrow}(a,L,k))_{s,t} & =H_{j+1}\left(\mathrm{Cone}\left((\{0\}\oplus \mathcal{E}_{\uparrow}(a,L,k))^{\preceq(s,t)}\to \{0\}\right)\right)
\\ &= H_j(\mathcal{E}_{\uparrow}(a,L,k)^{\leq t})=\frac{\{v\in E_j|\partial_Ev=0,\,\ell_E(v)\leq t\}}{\partial_E\left(\{w\in E_{j+1}|\ell_E(w)\leq t\}\right)},\end{align*} with structure maps $\mathbb{H}_j(\mathcal{PE}_{\uparrow}(a,L,k))_{s,t}\to \mathbb{H}_j(\mathcal{PE}_{\uparrow}(a,L,k))_{s',t'}$ for $s\leq s'$, $t\leq t'$ induced by inclusion. Since $\partial_E$ has trivial kernel in degrees other than $k$ we thus have $\mathbb{H}_{j} (\mathcal{PE}_{\uparrow}(a,L,k))_{s,t} =\{0\}$ for $j\neq k$. In degree $k$ we find \[
\mathbb{H}_k(\mathcal{PE}_{\uparrow}(a,L,k))_{s,t}=\frac{\{\lambda x|\lambda\in\Lambda_{\uparrow},a-\nu_{\uparrow}(\lambda)\leq t\}}{\{\lambda x|\lambda\in\Lambda_{\uparrow},(a+L)-\nu_{\uparrow}(\lambda)\leq t\}}.\] Now an element $\lambda=\sum_{g\in \Gamma}c_gT^g$ of $\Lambda_{\uparrow}$ obeys $a-\nu_{\uparrow}(\lambda)\leq t$ if and only if all $g$ for which $c_g$ is nonzero obey $t\geq a-g$; the corresponding element $\lambda x$ will vanish in the above quotient if and only if all $g$ for which $c_g$ is nonzero obey $t\geq (a+L)-g$. So, for any $t$, $\mathbb{H}_k(\mathcal{PE}_{\uparrow}(a,L,k))_{s,t}$ is isomorphic as a $\kappa$-vector space to the $\kappa$-vector space $V_{t,a,L}$ of elements $\sum_gc_gT^g$ of $\Lambda_{\uparrow}$ where the sum ranges over just those elements $g\in\Gamma$ with the property that $t\in [a-g,a+L-g)$. The persistence module structure maps correspond under these isomorphisms to the maps $V_{t,a,L}\to V_{t',a,L}$ for $t\leq t'$ which truncate a sum $\sum_g c_gT^g\in V_{t,a,L}$ by deleting all terms with $t'\geq a+L-g$.
Let us rephrase this calculation in terms of the ``block modules'' of \cite{CO}. If $R\subset \mathbb{R}^2$ is any product of intervals $R=I\times J$ (including the case that $I$ and/or $J$ is all of $\mathbb{R}$), we have a persistence module $\kappa_R$ over $\mathbb{R}^2$ defined by \[ (\kappa_{R})_{s,t}=\left\{\begin{array}{ll} \kappa & \mbox{if }(s,t)\in R \\ \{0\} & \mbox{otherwise} \end{array}\right. \] with structure maps $(\kappa_R)_{s,t}\to(\kappa_R)_{s',t'}$ given by the identity if $(s,t),(s',t')\in R$ and $0$ otherwise. The above discussion shows that \begin{equation}\label{peup} \mathbb{H}_j(\mathcal{PE}_{\uparrow}(a,L,k))\cong\left\{\begin{array}{ll} \bigoplus_{g\in\Gamma} \kappa_{\mathbb{R}\times[a-g,a+L-g)} & \mbox{if }j=k \\ \{0\} & \mbox{otherwise}\end{array}\right.,\end{equation} with the summand $\kappa_{\mathbb{R}\times[a-g,a+L-g)}$ corresponding to the coefficient $c_g$ in an element $\sum_g c_gT^g$ as in the previous paragraph. (In particular, $\mathbb{H}_j(\mathcal{PE}_{\uparrow}(a,L,k))$ is trivial if $L=0$.)
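For a concrete instance of (\ref{peup}), consider the toy case $\Gamma=\mathbb{Z}$, $a=0$, $L=1$. For each fixed $t\in\mathbb{R}$ there is then exactly one $g\in\mathbb{Z}$ with $t\in[-g,1-g)$, namely $g=\lfloor -t\rfloor$, so every $\mathbb{H}_k(\mathcal{PE}_{\uparrow}(0,1,k))_{s,t}$ is one-dimensional over $\kappa$; however, the structure map $\mathbb{H}_k(\mathcal{PE}_{\uparrow}(0,1,k))_{s,t}\to \mathbb{H}_k(\mathcal{PE}_{\uparrow}(0,1,k))_{s',t'}$ vanishes as soon as $\lfloor -t'\rfloor\neq\lfloor -t\rfloor$. Thus (\ref{peup}) describes a ``staircase'' $\bigoplus_{g\in\mathbb{Z}}\kappa_{\mathbb{R}\times[-g,1-g)}$ of unit-length blocks rather than a single one-dimensional persistence module.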
\subsubsection{$\mathcal{PE}^{\downarrow}(b,L,k)$}\label{pedownsec} Dually, if $b\in \mathbb{R}$, $L\in[0,\infty)$, and $k\in \mathbb{Z}$ we have a $\Lambda^{\downarrow}$-Floer-type complex $\mathcal{E}^{\downarrow}(b,L,k)=(E_*,\partial_E,\ell_E)$ given by taking $E_j$ to be $\{0\}$ for $j\notin\{k,k+1\}$ and to be generated in degree $k$ by an element $x$ with $\ell_E(x)=b$ and in degree $k+1$ by an element $y$ with $\ell_E(y)=b-L$ and $\partial_Ey=x$. (This $\Lambda^{\downarrow}$-Floer type complex is obtained by conjugating the $\Lambda_{\uparrow}$-Floer-type complex $\mathcal{E}_{\uparrow}(-b,L,k)$ along the lines of Remark \ref{switch}.) $\mathcal{E}^{\downarrow}(b,L,k)$ fits into the following chain-level filtered matched pair, denoted $\mathcal{PE}^{\downarrow}(b,L,k)$: \[ \xymatrix{ & \mathcal{E}^{\downarrow}(b,L,k) \\ 0 \ar[ru]^{0} \ar[rd]_{0} & \\ & 0} \]
Just as in the previous case, one has \begin{align*} \mathbb{H}_j(\mathcal{PE}^{\downarrow}(b,L,k))_{s,t}&=H_{j+1}\left(\mathrm{Cone}\left((\mathcal{E}^{\downarrow}(b,L,k)\oplus\{0\})^{\preceq(s,t)}\to \{0\}\right)\right) \\ &=H_j((\mathcal{E}^{\downarrow}(b,L,k)\oplus\{0\})^{\preceq(s,t)}).\end{align*} This vanishes for $j\neq k$, and for $j=k$ it equals \[ \frac{\{\mu x|\mu\in \Lambda^{\downarrow},b-\nu^{\downarrow}(\mu)\geq -s\} }{\{\mu x|\mu\in\Lambda^{\downarrow},(b-L)-\nu^{\downarrow}(\mu)\geq -s\}}.\]
The same analysis as in Section \ref{peupsec} then yields isomorphisms of persistence modules \begin{equation}\label{pedown} \mathbb{H}_j(\mathcal{PE}^{\downarrow}(b,L,k))\cong \left\{\begin{array}{ll} \bigoplus_{g\in\Gamma} \kappa_{[-b-g,-b+L-g)\times \mathbb{R}} & \mbox{if }j=k \\ \{0\} & \mbox{otherwise}\end{array}\right..\end{equation}
\subsubsection{$\mathcal{PM}(a,b,k)$} \label{mabsect} Given $a,b\in \mathbb{R}$ and $k\in \mathbb{Z}$ we define a chain-level filtered matched pair as follows. The chain complex $\mathcal{C}=(C_*,\partial)$ of $\Lambda$-modules is given by setting $C_k=\Lambda$ and $C_j=\{0\}$ for $j\neq k$, so necessarily $\partial=0$. The $\Lambda_{\uparrow}$-Floer-type complex $\mathcal{C}_{\uparrow}$ is likewise zero in degrees other than $k$, has zero differential, and has degree-$k$ chain group given by a copy of $\Lambda_{\uparrow}$ equipped with the filtration function $\ell_{\uparrow a}(\lambda)=a-\nu_{\uparrow}(\lambda)$. Likewise, the $\Lambda^{\downarrow}$-Floer-type complex $\mathcal{C}^{\downarrow}$ is zero in degrees other than $k$, has zero differential, and has degree-$k$ chain group given by a copy of $\Lambda^{\downarrow}$ equipped with the filtration function $\ell^{\downarrow b}(\mu)=b-\nu^{\downarrow}(\mu)$. The chain-level filtered matched pair $\mathcal{PM}(a,b,k)$ is then defined using these $\mathcal{C},\mathcal{C}_{\uparrow},\mathcal{C}^{\downarrow}$, taking the maps $\phi_{\uparrow}$ and $\phi^{\downarrow}$ to be given in degree $k$ by the standard inclusions of $\Lambda$ into $\Lambda_{\uparrow}$ and $\Lambda^{\downarrow}$.
The maps $1_{\Lambda_{\uparrow}}\otimes \phi_{\uparrow},1_{\Lambda^{\downarrow}}\otimes\phi^{\downarrow}$ are, in degree $k$, just the standard canonical isomorphisms $\Lambda_{\uparrow}\otimes_{\Lambda}\Lambda\cong \Lambda_{\uparrow}$ and $\Lambda^{\downarrow}\otimes_{\Lambda}\Lambda\cong \Lambda^{\downarrow}$, so for the homotopy inverses $\psi_{\uparrow},\psi^{\downarrow}$ we can (indeed must) take the inverses of these isomorphisms. It then follows that (implicitly passing through the canonical isomorphism $\Lambda_{\updownarrow}\otimes_{\Lambda}\Lambda\cong \Lambda_{\updownarrow}$) the map $-j^{\downarrow}\otimes\psi^{\downarrow}+j_{\uparrow}\otimes\psi_{\uparrow}$ appearing in (\ref{hkdef}) is identified in degree $k$ with the map $\delta\colon\thinspace \Lambda^{\downarrow}\oplus\Lambda_{\uparrow}\to \Lambda_{\updownarrow}$ defined by $(\mu,\lambda)\mapsto -j^{\downarrow}\mu+j_{\uparrow}\lambda$. The $(s,t)$-filtered subcomplex $(C^{\downarrow}_{*}\oplus C_{\uparrow *})^{\preceq(s,t)}$ consists in degree $k$ of the space $(\Lambda^{\downarrow}\oplus\Lambda_{\uparrow})^{\preceq_{a,b}(s,t)}$ of those $(\mu,\lambda)\in\Lambda^{\downarrow}\oplus\Lambda_{\uparrow}$ with $b-\nu^{\downarrow}(\mu)\geq -s$ and $a-\nu_{\uparrow}(\lambda)\leq t$.
The complex $\mathrm{Cone}\left(\xymatrix{\left(C^{\downarrow}_{*}\oplus C_{\uparrow *}\right)^{\preceq (s,t)}\ar[rrr]^{-j^{\downarrow}\otimes \psi^{\downarrow}+j_{\uparrow}\otimes\psi_{\uparrow}}&&&\Lambda_{\updownarrow}\otimes_{\Lambda}C_* }\right)$ of (\ref{hkdef}) is thus given in degree $k+1$ by $(\Lambda^{\downarrow}\oplus\Lambda_{\uparrow})^{\preceq_{a,b}(s,t)}$, in degree $k$ by $\Lambda_{\updownarrow}$, and in all other degrees by $\{0\}$, and the only nontrivial differential is given by the restriction of the difference map $\delta\colon\thinspace \Lambda^{\downarrow}\oplus\Lambda_{\uparrow}\to \Lambda_{\updownarrow}$. Hence, noting the degree shift in (\ref{hkdef}), \[ \mathbb{H}_k(\mathcal{PM}(a,b,k))_{s,t}\cong \ker(\delta|_{(\Lambda^{\downarrow}\oplus\Lambda_{\uparrow})^{\preceq_{a,b}(s,t)}}),\quad \mathbb{H}_{k-1}(\mathcal{PM}(a,b,k))_{s,t}\cong\mathrm{coker}(\delta|_{(\Lambda^{\downarrow}\oplus\Lambda_{\uparrow})^{\preceq_{a,b}(s,t)}}),\] and $\mathbb{H}_{j}(\mathcal{PM}(a,b,k))_{s,t}=\{0\}$ for $j\notin\{k-1,k\}$, with the persistence module structure maps induced by the inclusions $(\Lambda^{\downarrow}\oplus\Lambda_{\uparrow})^{\preceq_{a,b}(s,t)}\hookrightarrow (\Lambda^{\downarrow}\oplus\Lambda_{\uparrow})^{\preceq_{a,b}(s',t')}$.
Let us now compute the above kernel and cokernel. A pair $(\mu,\lambda)\in \Lambda^{\downarrow}\oplus\Lambda_{\uparrow}$ lies in the kernel of the difference map $\delta\colon\thinspace \Lambda^{\downarrow}\oplus\Lambda_{\uparrow}\to\Lambda_{\updownarrow}$ if and only if $\mu$ and $\lambda$ both lie in the common submodule $\Lambda=\kappa[\Gamma]$ of $\Lambda_{\uparrow}$ and $\Lambda^{\downarrow}$, and are equal to each other. Since $(\mu,\lambda)\in (\Lambda^{\downarrow}\oplus\Lambda_{\uparrow})^{\preceq_{a,b}(s,t)}$ if and only if $\nu^{\downarrow}(\mu)\leq b+s$ and $\nu_{\uparrow}(\lambda)\geq a-t$, this identifies $\mathbb{H}_k(\mathcal{PM}(a,b,k))_{s,t}$ with the sub-$\kappa$-vector space of $\Lambda$ consisting of sums $\sum_{g\in \Gamma}c_gT^g$ for which each $g$ appearing in the sum obeys $a-t\leq g\leq b+s$ (equivalently, $t\geq a-g$ and $s\geq -b+g$). The persistence module maps for $s\leq s'$ and $t\leq t'$ are just the inclusions. This yields an isomorphism of persistence modules \begin{equation}\label{hkm} \mathbb{H}_k(\mathcal{PM}(a,b,k))\cong \bigoplus_{g\in \Gamma}\kappa_{[-b+g,\infty)\times [a-g,\infty)}.\end{equation}
We now turn to the cokernel of $\delta|_{(\Lambda^{\downarrow}\oplus\Lambda_{\uparrow})^{\preceq_{a,b}(s,t)}}$. Consider a general element $\gamma=\sum_{g\in\Gamma}c_g T^g$ of $\Lambda_{\updownarrow}$; by definition of $\Lambda_{\updownarrow}$, this in general may be an infinite sum, subject to the constraint that in each bounded interval there are only finitely many $g$ with $c_g\neq 0$. If $a-t\leq b+s$ we may (perhaps non-uniquely) partition $\{g|c_g\neq 0\}$ into sets $S_1,S_2$ such that each $g\in S_1$ has $g\leq b+s$ and each $g\in S_2$ has $g\geq a-t$. Then letting $\mu=-\sum_{g\in S_1}c_gT^g$ and $\lambda=\sum_{g\in S_2}c_gT^g$, we see that $(\mu,\lambda)\in (\Lambda^{\downarrow}\oplus\Lambda_{\uparrow})^{\preceq_{a,b}(s,t)}$ and $\delta(\mu,\lambda)=\gamma$. Thus, when $a-t\leq b+s$, the map $\delta|_{(\Lambda^{\downarrow}\oplus\Lambda_{\uparrow})^{\preceq_{a,b}(s,t)}}$ has trivial cokernel. Assuming instead that $b+s<a-t$, given a general element $\gamma=\sum_{g\in \Gamma}c_gT^g$ as above, we may uniquely express $\gamma$ in the form \[ \gamma=-\mu+\left(\sum_{b+s<g<a-t}c_gT^g\right) +\lambda \] where $(\mu,\lambda)\in (\Lambda^{\downarrow}\oplus\Lambda_{\uparrow})^{\preceq_{a,b}(s,t)}$. This yields an isomorphism \[ \mathbb{H}_{k-1}(\mathcal{PM}(a,b,k))_{s,t}\cong\bigoplus_{g\in\Gamma\cap(b+s,a-t)}\kappa,\] with the persistence module structure maps associated to increasing both $s$ and $t$ (and hence shrinking the interval $(b+s,a-t)$) corresponding to the obvious projection. Thus, in terms of block modules, \begin{equation}\label{hk-1m} \mathbb{H}_{k-1}(\mathcal{PM}(a,b,k))\cong \bigoplus_{g\in \Gamma}\kappa_{(-\infty,-b+g)\times (-\infty,a-g)}.\end{equation}
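For concreteness, when $\Gamma=\{0\}$ (so that $\Lambda=\Lambda_{\uparrow}=\Lambda^{\downarrow}=\kappa$) the isomorphisms (\ref{hkm}) and (\ref{hk-1m}) reduce to \[ \mathbb{H}_k(\mathcal{PM}(a,b,k))\cong\kappa_{[-b,\infty)\times[a,\infty)},\qquad \mathbb{H}_{k-1}(\mathcal{PM}(a,b,k))\cong\kappa_{(-\infty,-b)\times(-\infty,a)}.\] At a point of the form $(s,t)=(-s_0,s_0)$ the former is nonzero precisely when $a\leq s_0\leq b$ and the latter precisely when $b<s_0<a$; in particular which of the two degrees carries a nontrivial contribution depends on the sign of $b-a$, a phenomenon that reappears in items (iv) and (v) of Theorem \ref{bigdecomp} below.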
Since the graded persistence modules $\mathbb{H}_*(\mathcal{PE}_{\uparrow}(a,L,k))$, $\mathbb{H}_*(\mathcal{PE}^{\downarrow}(b,L,k))$, $\mathbb{H}_*(\mathcal{PM}(a,b,k))$ are meant to be building blocks for an algebraic model of interlevel persistence, it is not a coincidence that (when $\Gamma=\{0\}$) they resemble the block $MV$-systems of \cite[Definition 2.16]{BGO}, though the fact that \cite{BGO} works with persistence modules defined only over $\{(a,b)\in \mathbb{R}^2|a+b>0\}$ rather than all of $\mathbb{R}^2$ leads to some slight differences. When $\Gamma$ is nontrivial, we have seen that these building blocks are modified from the $\Gamma=\{0\}$ case simply by making them $\Gamma$-periodic in a natural way. On the other hand, consistently with \cite{BH17}, an additional type of building block arises when $\Gamma\neq\{0\}$ that is unlike the block $MV$-systems of \cite{BGO}, owing to the fact that, when $\Gamma\neq\{0\}$, modules over $\kappa[\Gamma]$ can have torsion. We describe this final building block now.
\subsubsection{$\mathcal{PR}(T,k)$}\label{pr} Let $T$ be a finitely-generated, torsion $\Lambda$-module, and $k\in \mathbb{Z}$. Let \begin{equation}\label{rest} \cdots\to F_{m}\to F_{m-1}\to\cdots\to F_1\to F_0\to T\to 0 \end{equation} be a resolution of $T$ by finitely-generated, free $\Lambda$-modules, as in \cite[Proof of Lemma 7.19]{Rot}. Let $\mathcal{C}$ be the chain complex given in degree $j$ by $F_{j-k}$ (interpreted as $\{0\}$ if $j<k$), with differentials given by the maps in the resolution. Thus $H_k(\mathcal{C})\cong T$ while all other $H_j(\mathcal{C})$ vanish. By Proposition \ref{kerext}, $\Lambda_{\uparrow}\otimes_{\Lambda}T\cong \Lambda^{\downarrow}\otimes_{\Lambda}T=\{0\}$ since $T$ is torsion. So since $\Lambda_{\uparrow}$ and $\Lambda^{\downarrow}$ are flat $\Lambda$-modules, the exactness of (\ref{rest}) implies that the complexes $\Lambda_{\uparrow}\otimes_{\Lambda}\mathcal{C}$ and $\Lambda^{\downarrow}\otimes_{\Lambda}\mathcal{C}$ are acyclic. Since an acyclic chain complex over a field is chain homotopy equivalent to the zero complex, the following diagram defines a chain-level filtered matched pair $\mathcal{PR}(T,k)$: \[ \xymatrix{ & 0 \\ \mathcal{C} \ar[ur]^{0} \ar[dr]_{0} & \\ & 0 } \] (The notation $\mathcal{PR}(T,k)$ suppresses the choice of free resolution, but since any two free resolutions are homotopy equivalent, the filtered matched homotopy equivalence class of $\mathcal{PR}(T,k)$ depends only on $T$ and $k$.)
By definition, we have \[ \mathbb{H}_j(\mathcal{PR}(T,k))_{s,t}=H_{j+1}\left(\mathrm{Cone}(0\to \Lambda_{\updownarrow}\otimes_{\Lambda}\mathcal{C})\right) = H_{j+1}(\Lambda_{\updownarrow}\otimes_{\Lambda}\mathcal{C}),\] independently of $s,t$ (and with structure maps $\mathbb{H}_j(\mathcal{PR}(T,k))_{s,t}\to \mathbb{H}_j(\mathcal{PR}(T,k))_{s',t'}$ equal to the identity). Since $\Lambda_{\updownarrow}\otimes_{\Lambda}\mathcal{C}$ is given in degree $j$ by $\{0\}$ if $j<k$ and by $\Lambda_{\updownarrow}\otimes F_{j-k}$ otherwise, with differentials extended from (\ref{rest}),
we conclude that $\mathbb{H}_{j}(\mathcal{PR}(T,k))_{s,t}\cong \mathrm{Tor}_{j+1-k}^{\Lambda}(T,\Lambda_{\updownarrow})$ (which trivially vanishes if $j<k-1$). By Proposition \ref{tor}, since $T$ is torsion, $\mathrm{Tor}_{j+1-k}^{\Lambda}(T,\Lambda_{\updownarrow})$ vanishes if $j>k$ and is isomorphic to $T$ if $j=k$. In the remaining case that $j=k-1$, we have $\mathrm{Tor}_{0}^{\Lambda}(T,\Lambda_{\updownarrow})\cong \Lambda_{\updownarrow}\otimes_{\Lambda}T$, which vanishes due to the exact sequence $0\to\Lambda\to\Lambda_{\uparrow}\oplus \Lambda^{\downarrow}\to\Lambda_{\updownarrow}\to 0$ and the right exactness of $(\cdot)\otimes_{\Lambda}T$. So $\mathbb{H}_{j}(\mathcal{PR}(T,k))_{s,t}$ vanishes in all degrees other than $k$, and is isomorphic (independently of $(s,t)$) to $T$ for $j=k$. So if the dimension of $T$ as a vector space over $\kappa$ is $d$, we have (as persistence modules of $\kappa$-vector spaces over $\mathbb{R}^2$) \[ \mathbb{H}_j(\mathcal{PR}(T,k))\cong \left\{\begin{array}{ll} 0 &\mbox{if }j\neq k \\ \left(\kappa_{(-\infty,\infty)\times(-\infty,\infty)}\right)^{\oplus d} & \mbox{if }j=k\end{array}\right..\]
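As an illustrative example, suppose that $\Gamma=g_0\mathbb{Z}$ for some $g_0>0$, so that $\Lambda=\kappa[\Gamma]$ is the ring of Laurent polynomials in the variable $T^{g_0}$, and take $T=\Lambda/(T^{g_0}-1)\Lambda$, a torsion $\Lambda$-module with $\dim_{\kappa}T=1$. A free resolution as in (\ref{rest}) is \[ 0\to\Lambda\xrightarrow{\cdot(T^{g_0}-1)}\Lambda\to T\to 0,\] so $\mathcal{C}$ has $C_{k+1}=C_k=\Lambda$ with differential given by multiplication by $T^{g_0}-1$, and the computation above shows that $\mathbb{H}_j(\mathcal{PR}(T,k))$ is the single block module $\kappa_{(-\infty,\infty)\times(-\infty,\infty)}$ when $j=k$ and vanishes otherwise.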
\subsection{Direct sum decompositions and barcodes}\label{decompsect}
In this section we recall some facts about Floer-type complexes and their barcodes from \cite{UZ}, and we combine these with results from Sections \ref{fmpsec} and \ref{bbs} to show that, when $\Gamma$ is discrete, chain-level filtered matched pairs can be described up to filtered matched homotopy equivalence by a combination of the barcodes of \cite{UZ} and the basis spectra of Definition \ref{hbsdef}. In \cite{UZ} our notational convention for handling the symmetry of our persistence modules with respect to the action of $\Gamma$ was to define the elements of our barcodes to be pairs $([a],L)\in (\mathbb{R}/\Gamma)\times [0,\infty]$; these were meant to correspond to equivalence classes of intervals $[a,a+L)$ with the left endpoint $a$ (but not the length $L$) defined only modulo $\Gamma$. To avoid confusion with the similar convention in Definition \ref{hbsdef}, and also because in the current context we will need to maintain distinctions between intervals of all four types $[a,b),(a,b],[a,b],(a,b)$, our convention in this paper is to represent elements of the barcode directly as equivalence classes $I^{\Gamma}$ of intervals $I$ modulo $\Gamma$-translation, as in Notation \ref{intnot}.
Let $\mathcal{C}_{\uparrow}=(C_{\uparrow*},\partial_{\mathcal{C}_{\uparrow}},\ell_{\uparrow})$ be a $\Lambda_{\uparrow}$-Floer-type complex. By \cite[Corollaries 2.17 and 2.18]{UZ}, the subspaces $\Img(\partial_{\mathcal{C}_{\uparrow}}|_{C_{\uparrow(k+1)}})$ and $\ker(\partial_{\mathcal{C}_{\uparrow}}|_{C_{\uparrow k}})$ of the orthogonalizable $\Lambda_{\uparrow}$-space $C_{\uparrow k}$ are both orthogonalizable, any $\ell_{\uparrow}$-orthogonal basis for $\Img(\partial_{\mathcal{C}_{\uparrow}}|_{C_{\uparrow(k+1)}})$ can be extended to an $\ell_{\uparrow}$-orthogonal basis for $\ker(\partial_{\mathcal{C}_{\uparrow}}|_{C_{\uparrow k}})$, and any $\ell_{\uparrow}$-orthogonal basis for $\ker(\partial_{\mathcal{C}_{\uparrow}}|_{C_{\uparrow k}})$ can be extended to an $\ell_{\uparrow}$-orthogonal basis for $C_{\uparrow k}$. \cite[Theorem 3.4]{UZ} shows that each map $\partial_{\mathcal{C}_{\uparrow}}|_{C_{\uparrow(k+1)}}\colon\thinspace C_{\uparrow (k+1)}\to \ker(\partial_{\mathcal{C}_{\uparrow}}|_{C_{\uparrow k}})$ admits a (nonarchimedean) singular value decomposition: there are $\ell_{\uparrow}$-orthogonal bases $\{y_{1}^{k+1},\ldots,y_{r}^{k+1},\ldots,y_{n}^{k+1}\}$ for $C_{\uparrow (k+1)}$ and $\{x_{1}^{k},\ldots,x_{r}^{k},\ldots,x_{m}^{k}\}$ for $\ker(\partial_{\mathcal{C}_{\uparrow}}|_{C_{\uparrow k}})$ with $\{y_{r+1}^{k+1},\ldots,y_{n}^{k+1}\}$ a basis for $\ker(\partial_{\mathcal{C}_{\uparrow}}|_{C_{\uparrow (k+1)}})$ while $\partial_{\mathcal{C}_{\uparrow}} y_{i}^{k+1}=x_{i}^{k}$ for $i=1,\ldots,r$.
Write $F_{\uparrow(k+1)}=\mathrm{span}_{\Lambda_{\uparrow}}\{y_{1}^{k+1},\ldots,y_{r}^{k+1}\}$. Then $F_{\uparrow(k+1)}$ and $\ker(\partial_{\mathcal{C}_{\uparrow}}|_{C_{\uparrow (k+1)}})$ are orthogonal complements in $C_{\uparrow(k+1)}$ (\emph{i.e.} their direct sum is $C_{\uparrow(k+1)}$ and $\ell_{\uparrow}(f+z)=\max\{\ell_{\uparrow}(f),\ell_{\uparrow}(z)\}$ for $f\in F_{\uparrow(k+1)}$ and $z\in \ker(\partial_{\mathcal{C}_{\uparrow}}|_{C_{\uparrow (k+1)}})$), so, more generally, the union of any $\ell_{\uparrow}$-orthogonal basis for $F_{\uparrow(k+1)}$ and any $\ell_{\uparrow}$-orthogonal basis for $\ker(\partial_{\mathcal{C}_{\uparrow}}|_{C_{\uparrow (k+1)}})$ gives an orthogonal basis for $C_{\uparrow(k+1)}$.
Similarly, we obtain a singular value decomposition of $\partial_{\mathcal{C}_{\uparrow}}|_{C_{\uparrow k}}$, yielding in particular a subspace $F_{\uparrow k}\subset C_{\uparrow k}$ with an $\ell_{\uparrow}$-orthogonal basis $\{y_{1}^{k},\ldots,y_{s}^{k}\}$ such that $F_{\uparrow k}$ and $\ker(\partial_{\mathcal{C}_{\uparrow}}|_{C_{\uparrow k}})$ are orthogonal complements and such that $\{\partial_{\mathcal{C}_{\uparrow}}y_{1}^{k},\ldots,\partial_{\mathcal{C}_{\uparrow}}y_{s}^{k}\}$ is an $\ell_{\uparrow}$-orthogonal basis for $\Img(\partial_{\mathcal{C}_{\uparrow}}|_{C_{\uparrow k}})$. Writing $B_{\uparrow k}=\Img(\partial_{\mathcal{C}_{\uparrow}}|_{C_{\uparrow (k+1)}})$ (so that $B_{\uparrow k}=\mathrm{span}_{\Lambda_{\uparrow}}\{x_{1}^{k},\ldots,x_{r}^{k}\}$) and $H_{\uparrow k}=\mathrm{span}_{\Lambda_{\uparrow}}\{x_{r+1}^{k},\ldots,x_{m}^{k}\}$, we have thus decomposed each $C_{\uparrow k}$ as a direct sum of three orthogonal subspaces $B_{\uparrow k},H_{\uparrow k},F_{\uparrow k}$ such that $\partial_{\mathcal{C}_{\uparrow}}|_{B_{\uparrow k}\oplus H_{\uparrow k}}=0$ and $\partial_{\mathcal{C}_{\uparrow}}$ maps the $\ell_{\uparrow}$-orthogonal basis $\{y_{1}^{k+1},\ldots,y_{r}^{k+1}\}$ for $F_{\uparrow(k+1)}$ to the $\ell_{\uparrow}$-orthogonal basis $\{x_{1}^{k},\ldots,x_{r}^{k}\}$ for $B_{\uparrow k}$.
The \textbf{degree $k$ verbose barcode} of the $\Lambda_{\uparrow}$-Floer-type complex $\mathcal{C}_{\uparrow}$ is defined in \cite[Definition 6.3]{UZ} as the multiset of elements of $(\mathbb{R}/\Gamma)\times [0,\infty]$ consisting of the pairs $([\ell_{\uparrow}(x_{i}^{k})],\ell_{\uparrow}(y_{i}^{k+1})-\ell_{\uparrow}(x_{i}^{k}))$ for $i=1,\ldots,r$ as well as the pairs $([\ell_{\uparrow}(x_{i}^{k})],\infty)$ for $i=r+1,\ldots,m$ (where for $a\in \mathbb{R}$ we write $[a]$ for its equivalence class in $\mathbb{R}/\Gamma$). \cite[Theorem 7.1]{UZ} shows that the verbose barcode is independent of the choice of singular value decomposition used to define it.
The \emph{concise barcode} of $\mathcal{C}_{\uparrow}$ is, by definition, obtained from the verbose barcode by discarding all intervals of zero length, and is shown in \cite[Theorem B]{UZ} to be a complete invariant of $\mathcal{C}_{\uparrow}$ up to filtered chain homotopy equivalence (whereas the verbose barcode is a complete invariant up to the finer relation of filtered chain isomorphism); in particular, the distinction between the verbose and concise barcodes is immaterial from the standpoint of homology persistence modules. In keeping with Notation \ref{intnot}, in this paper we represent elements of the concise barcode as equivalence classes $[\ell_{\uparrow}(x_{i}^{k}),\ell_{\uparrow}(y_{i}^{k+1}))^{\Gamma}$ (for $i\leq r$, provided that $\ell_{\uparrow}(x_{i}^{k})<\ell_{\uparrow}(y_{i}^{k+1})$) or $[\ell_{\uparrow}(x_{i}^{k}),\infty)^{\Gamma}$ (for $i>r$) of intervals modulo $\Gamma$-translation.
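As a simple example, the elementary complex $\mathcal{E}_{\uparrow}(a,L,k)$ of Section \ref{peupsec} admits a singular value decomposition in which the degree-$(k+1)$ generator is sent by the differential to the degree-$k$ generator, so its degree-$k$ verbose barcode is the single pair $([a],L)$; this bar survives to the concise barcode if and only if $L>0$, consistently with the observation at the end of Section \ref{peupsec} that $\mathbb{H}_*(\mathcal{PE}_{\uparrow}(a,L,k))$ vanishes when $L=0$.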
\begin{prop}\label{iPi}
With notation as above, the inclusion of chain complexes $\iota_{\uparrow}\colon\thinspace \left(\oplus_k H_{\uparrow k},0\right)\to \left(C_{\uparrow *},\partial_{\mathcal{C}_{\uparrow}}\right)$ is a homotopy equivalence, with homotopy inverse $\Pi_{\uparrow}$ restricting to each $H_{\uparrow k}$ as the identity and mapping both $B_{\uparrow k}$ and $F_{\uparrow k}$ to $\{0\}$.
\end{prop}
\begin{proof}
By definition, $\Pi_{\uparrow}\circ \iota_{\uparrow}$ is the identity on $\oplus_k H_{\uparrow k}$. If we define $L\colon\thinspace C_{\uparrow *}\to C_{\uparrow (*+1)}$ to be zero on each $H_{\uparrow k}\oplus F_{\uparrow k}$ and to send the basis element $x_{i}^{k}\in B_{\uparrow k}$ to the corresponding $y_{i}^{k+1}\in F_{\uparrow(k+1)}$, one readily checks that $L$ defines a homotopy between $\iota_{\uparrow}\circ\Pi_{\uparrow}$ and the identity on $C_{\uparrow *}$.
\end{proof}
Since the homology of a complex with zero differential is just the underlying chain module of the complex, the map $\iota_{\uparrow}$ of the previous proposition induces an isomorphism $\iota_{\uparrow *}$ from the submodule $H_*=\oplus_k H_{\uparrow k}$ of $\oplus_kC_{\uparrow k}$ to the homology $H_*(\mathcal{C}_{\uparrow})$. The $(H_{\uparrow k},\ell_{\uparrow}|_{H_{\uparrow k}})$ are orthogonalizable $\Lambda_{\uparrow}$-spaces by \cite[Corollary 2.17]{UZ}, so by putting $\rho_{\uparrow k}=\ell_{\uparrow}\circ \iota_{\uparrow *}^{-1}|_{H_k(\mathcal{C}_{\uparrow})}$ we obtain orthogonalizable $\Lambda_{\uparrow}$-spaces $(H_k(\mathcal{C}_{\uparrow}),\rho_{\uparrow k})$. It follows from \cite[Proposition 6.6]{UZ} that the map $\rho_{\uparrow k}\colon\thinspace H_k(\mathcal{C}_{\uparrow})\to \mathbb{R}\cup\{-\infty\}$ assigns to each $\alpha\in H_k(\mathcal{C}_{\uparrow})$ the ``spectral invariant'' \begin{equation}\label{specup} \rho_{\uparrow k}(\alpha)=\inf\{\ell_{\uparrow}(a)|a\in\ker\partial_{\mathcal{C}_{\uparrow}}|_{C_{\uparrow k}},\,[a]=\alpha\in H_k(\mathcal{C}_{\uparrow})\}.\end{equation} In particular this function $\rho_{\uparrow k}$ is independent of the choice of the singular value decomposition that we initially used to obtain it. By \cite[Proposition 5.5]{UZ}, for any $\rho_{\uparrow k}$-orthogonal basis $\{h_1,\ldots,h_{m-r}\}$ for $H_k(\mathcal{C}_{\uparrow})$, the multiset of values $\{\rho_{\uparrow k}(h_1)\mod \Gamma,\ldots,\rho_{\uparrow k}(h_{m-r})\mod\Gamma\}$ is, independently of the choice of orthogonal basis, equal to $\{\ell_{\uparrow}(x_{r+1}^{k})\mod\Gamma,\ldots,\ell_{\uparrow}(x_m^k)
\mod\Gamma\}$, \emph{i.e.}, to the collection of left endpoints (modulo $\Gamma$) of the infinite-length bars in the degree-$k$ (verbose or concise) barcode.
As usual, using Remark \ref{switch}, analogous statements hold for $\Lambda^{\downarrow}$-Floer-type complexes $\mathcal{C}^{\downarrow}=(C^{\downarrow}_{*},\partial_{\mathcal{C}^{\downarrow}},\ell^{\downarrow})$: each $C^{\downarrow}_{k}$ decomposes orthogonally as $C^{\downarrow}_{k}=B^{\downarrow}_{k}\oplus H^{\downarrow}_{k}\oplus F^{\downarrow}_{k}$, with $\partial_{\mathcal{C}^{\downarrow}}$ vanishing on $B_{k}^{\downarrow}\oplus H_{k}^{\downarrow}$ and mapping some orthogonal basis $\{y_{1}^{k+1},\ldots,y_{r}^{k+1}\}$ for $F^{\downarrow}_{k+1}$ bijectively to an orthogonal basis for $B^{\downarrow}_{k}$. Writing $x_{i}^{k}=\partial_{\mathcal{C}^{\downarrow}}y_{i}^{k+1}$ for $i=1,\ldots,r$ and letting $\{x_{r+1}^{k},\ldots,x_{m}^{k}\}$ be an orthogonal basis for $H^{\downarrow}_{k}$, the degree-$k$ concise barcode of $\mathcal{C}^{\downarrow}$ is taken to consist of half-open intervals modulo $\Gamma$-translation $(\ell^{\downarrow}(y_{i}^{k+1}),\ell^{\downarrow}(x_{i}^{k})]^{\Gamma}$ for those $i\in\{1,\ldots,r\}$ with $\ell^{\downarrow}(y_{i}^{k+1})\neq \ell^{\downarrow}(x_{i}^{k})$, and $(-\infty,\ell^{\downarrow}(x_{i}^{k})]^{\Gamma}$ for $i=r+1,\ldots,m$. The inclusions $\iota^{\downarrow}$ of $H^{\downarrow}_{k}$ (with zero differential) into $C^{\downarrow}_{k}$ give a chain homotopy equivalence with homotopy inverse given by the projection $\Pi^{\downarrow}\colon\thinspace C^{\downarrow}_{k}\to H^{\downarrow}_{k}$ associated to the direct sum decomposition $C^{\downarrow}_{k}=B^{\downarrow}_{k}\oplus H^{\downarrow}_{k}\oplus F^{\downarrow}_{k}$. The resulting isomorphism $H^{\downarrow}_{k}\cong H_k(\mathcal{C}^{\downarrow})$ identifies $\ell^{\downarrow}|_{H^{\downarrow}_{k}}$ with the spectral invariant $\rho^{\downarrow}_{k}\colon\thinspace H_k(\mathcal{C}^{\downarrow})\to\mathbb{R}\cup\{\infty\}$ defined by \begin{equation}\label{specdown} \rho^{\downarrow}_{k}(\alpha)=\sup\{\ell^{\downarrow}(a)|a\in\ker(\partial_{\mathcal{C}^{\downarrow}}|_{C^{\downarrow}_{k}}),\,[a]=\alpha\in H_k(\mathcal{C}^{\downarrow})\}.\end{equation}
Now suppose that we have a chain-level filtered matched pair $\mathcal{CP}=(\mathcal{C},\mathcal{C}^{\downarrow},\mathcal{C}_{\uparrow},\phi^{\downarrow},\phi_{\uparrow})$ as in Definition \ref{cfmpdfn}. In particular $\mathcal{C}=(C_*=\oplus_kC_k,\partial)$ is a chain complex of free, finitely-generated $\Lambda$-modules. In order to ensure that we can obtain suitable direct sum decompositions we \textbf{assume at this point that the subgroup $\Gamma$ of $\mathbb{R}$ is discrete}, and hence that $\Lambda$ is a PID. Then each $\ker(\partial|_{C_k})$ is also a finitely generated free $\Lambda$-module, and so by putting the differential $\partial|_{C_{k+1}}\colon\thinspace C_{k+1}\to \ker(\partial|_{C_k})$ in Smith
normal form we obtain bases $\{y_{1}^{k+1},\ldots,y_{r}^{k+1},y_{r+1}^{k+1},\ldots,y_{n}^{k+1}\}$ for $C_{k+1}$ and $\{x_{1}^{k},\ldots,x_{r}^{k},\ldots,x_{m}^{k}\}$ for $\ker(\partial|_{C_k})$, as well as nonzero elements $\alpha_1,\ldots,\alpha_r\in \Lambda$ such that $\alpha_i|\alpha_{i+1}$, with the property that $\partial y_{i}^{k+1}=\alpha_ix_{i}^{k}$ for $i=1,\ldots,r$ and $\partial y_{i}^{k+1}=0$ for $i=r+1,\ldots,n$.
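To illustrate, suppose that $\Gamma=g_0\mathbb{Z}$, so that $\Lambda$ is isomorphic to the Laurent polynomial ring $\kappa[u,u^{-1}]$ with $u=T^{g_0}$, and that with respect to some bases $\partial|_{C_{k+1}}\colon\thinspace C_{k+1}\to\ker(\partial|_{C_k})$ is represented by the matrix $\left(\begin{smallmatrix} u-1 & 1 \\ 0 & u-1\end{smallmatrix}\right)$. Elementary row and column operations over $\Lambda$ bring this matrix to the Smith normal form $\left(\begin{smallmatrix} 1 & 0 \\ 0 & (u-1)^2 \end{smallmatrix}\right)$, so here $r=2$, $\alpha_1=1$, and $\alpha_2=(u-1)^2$; if moreover $\ker(\partial|_{C_k})$ is spanned by $x_{1}^{k}$ and $x_{2}^{k}$, then $H_k(\mathcal{C})\cong\Lambda/(u-1)^2\Lambda$ is entirely torsion, of dimension two over $\kappa$.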
Analogously to our discussion of $\mathcal{C}_{\uparrow}$ and $\mathcal{C}^{\downarrow}$, but with modest additional complication due to the fact that the $\alpha_i$ may not be invertible, write $F_{k+1}=\mathrm{span}_{\Lambda}\{y_{1}^{k+1},\ldots,y_{r}^{k+1}\}$, $\tilde{B}_k=\mathrm{span}_{\Lambda}\{x_{1}^{k},\ldots,x_{r}^{k}\}$, and $H_{k}^{\mathrm{free}}=\mathrm{span}_{\Lambda}\{x_{r+1}^{k},\ldots,x_{m}^{k}\}$. (Thus $\ker(\partial|_{C_k})=H_{k}^{\mathrm{free}}\oplus \tilde{B}_k$, and the $k$th homology $H_k(\mathcal{C})$ is the direct sum of $H_{k}^{\mathrm{free}}$ with the torsion submodule $\frac{\tilde{B}_k}{\mathrm{span}_{\Lambda}\{\alpha_1x_{1}^{k},\ldots,\alpha_rx_{r}^{k}\}}$.) In just the same way, the Smith normal form of $\partial|_{C_k}$ yields submodules $F_k$ of $C_k$ and $H_{k-1}^{\mathrm{free}}$ and $\tilde{B}_{k-1}$ of $\ker(\partial|_{C_{k-1}})$. In particular, we have direct sum decompositions $C_k=F_k\oplus \ker(\partial|_{C_k})=F_k\oplus H_{k}^{\mathrm{free}}\oplus \tilde{B}_k$.
Writing $H_{*}^{\mathrm{free}}=\oplus_k H_{k}^{\mathrm{free}}$, let $\iota\colon\thinspace H_{*}^{\mathrm{free}}\to C_*$ be the inclusion and let $\Pi\colon\thinspace C_*\to H_{*}^{\mathrm{free}}$ be given in each degree $k$ by the projection associated to the direct sum decomposition $C_k=F_k\oplus H_{k}^{\mathrm{free}}\oplus \tilde{B}_k$. Regarding $H_{*}^{\mathrm{free}}$ as a chain complex with zero differential, we see that $\iota$ is a chain map because $\partial|_{H_{k}^{\mathrm{free}}}=0$, and $\Pi$ is a chain map because $\Img(\partial|_{C_{k+1}})$ is contained in $\tilde{B}_k$, which is annihilated by $\Pi$. In contrast to the situation of Proposition \ref{iPi}, $\iota$ and $\Pi$ should not be expected to be homotopy inverses since $H_k(\mathcal{C})$ may have torsion, in which case it is not isomorphic to $H_{k}^{\mathrm{free}}$. However, recalling that we are assuming $\mathcal{C}$ to be part of a chain-level filtered matched pair $\mathcal{CP}=(\mathcal{C},\mathcal{C}^{\downarrow},\mathcal{C}_{\uparrow},\phi^{\downarrow},\phi_{\uparrow})$, we have:
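For instance, if $\Gamma=g_0\mathbb{Z}$ and $\mathcal{C}$ is the two-term complex $\Lambda\xrightarrow{\cdot(T^{g_0}-1)}\Lambda$ concentrated in degrees $k+1$ and $k$ (as in the example at the end of Section \ref{pr}), then $H_{k}^{\mathrm{free}}=\{0\}$ while $H_k(\mathcal{C})\cong\kappa$ is entirely torsion, so $\iota\circ\Pi=0$ is not chain homotopic to the identity. This is nonetheless consistent with Proposition \ref{phipi} below: $T^{g_0}-1$ becomes invertible in the field $\Lambda_{\uparrow}$, so $\Lambda_{\uparrow}\otimes_{\Lambda}\mathcal{C}$ is acyclic, $\mathcal{C}_{\uparrow}$ is contractible, and $\phi_{\uparrow}$ is itself nullhomotopic.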
\begin{prop}\label{phipi}
The chain map $\phi_{\uparrow}\colon\thinspace \mathcal{C}\to\mathcal{C}_{\uparrow}$ is chain homotopic to $\phi_{\uparrow}\circ\iota\circ\Pi$. Similarly $\phi^{\downarrow}\colon\thinspace \mathcal{C}\to\mathcal{C}^{\downarrow}$ is chain homotopic to $\phi^{\downarrow}\circ\iota\circ\Pi$.
\end{prop}
\begin{proof}
The two cases are identical, so we just consider $\phi_{\uparrow}$. Recalling that $\mathcal{C}_{\uparrow}$ is, among other properties, a chain complex of vector spaces over the \emph{field} $\Lambda_{\uparrow}\supset \Lambda$, we may define a homomorphism $L\colon\thinspace C_{*}\to C_{\uparrow(*+1)}$ by setting $L$ equal to zero on each $F_k\oplus H_{k}^{\mathrm{free}}$ and by letting $L(x_{i}^{k})=\frac{1}{\alpha_i}\phi_{\uparrow}(y_{i}^{k+1})$ for $i=1,\ldots,r$, where as before $\{y_{1}^{k+1},\ldots,y_r^{k+1}\}$ is a basis for $F_{k+1}$ and $\{x_{1}^{k},\ldots,x_{r}^{k}\}$ is a basis for $\tilde{B}_{k}$, with $\partial y_{i}^{k+1}=\alpha_ix_{i}^{k}$. It is then straightforward to verify that $\phi_{\uparrow}-\phi_{\uparrow}\circ\iota\circ\Pi=\partial_{\mathcal{C}_{\uparrow}}\circ L+L\circ\partial$.
\end{proof}
Now let us write $\mathcal{A}(\mathcal{C})$, $\mathcal{A}(\mathcal{C}_{\uparrow})$, and $\mathcal{A}(\mathcal{C}^{\downarrow})$ for the subcomplexes of $\mathcal{C},\mathcal{C}_{\uparrow}$, and $\mathcal{C}^{\downarrow}$ whose degree-$k$ parts are, respectively, $F_k\oplus \tilde{B}_k$, $F_{\uparrow k}\oplus B_{\uparrow k}$, and $F^{\downarrow}_{k}\oplus B^{\downarrow}_{k}$. So $\mathcal{A}(\mathcal{C}_{\uparrow})$ is a $\Lambda_{\uparrow}$-Floer-type complex with filtration function given by the restriction of $\ell_{\uparrow}$, and likewise restricting $\ell^{\downarrow}$ makes $\mathcal{A}(\mathcal{C}^{\downarrow})$ into a $\Lambda^{\downarrow}$-Floer-type complex; both $\mathcal{A}(\mathcal{C}_{\uparrow})$ and $\mathcal{A}(\mathcal{C}^{\downarrow})$ have trivial homology, while the homology of $\mathcal{A}(\mathcal{C})$ in degree $k$ is the torsion submodule $tH_k(\mathcal{C})$ of $H_k(\mathcal{C})$. Let $\mathcal{H}^{\mathrm{free}}(\mathcal{C})$ be the graded $\Lambda$-module given in degree $k$ by $\frac{H_k(\mathcal{C})}{tH_{k}(\mathcal{C})}$, regarded as a chain complex with zero differential; the inclusions $\iota\colon\thinspace H_{k}^{\mathrm{free}}\to C_k$ induce on homology a map which, when composed with the quotient projection $H_k(\mathcal{C})\to \frac{H_k(\mathcal{C})}{tH_k(\mathcal{C})}$, is an isomorphism to $\mathcal{H}^{\mathrm{free}}(\mathcal{C})$ from the subcomplex (again with zero differential) of $\mathcal{C}$ given in degree $k$ by $H_{k}^{\mathrm{free}}$. Finally, let $\mathcal{H}(\mathcal{C}_{\uparrow})$ and $\mathcal{H}(\mathcal{C}^{\downarrow})$ denote the $\Lambda_{\uparrow}$- and $\Lambda^{\downarrow}$-Floer-type complexes, with zero differential, given in degree $k$ by the respective homologies $H_k(\mathcal{C}_{\uparrow})$ and $H_k(\mathcal{C}^{\downarrow})$ and with filtration functions given by the spectral invariants $\rho_{\uparrow k},\rho^{\downarrow}_{k}$. The maps induced on homology by $\iota_{\uparrow}\colon\thinspace H_{\uparrow k}\to C_{\uparrow k}$ and $\iota^{\downarrow}\colon\thinspace H_{k}^{\downarrow}\to C^{\downarrow}_{k}$ provide isomorphisms to these Floer-type complexes from the sub-Floer-type complexes of $\mathcal{C}_{\uparrow},\mathcal{C}^{\downarrow}$ given in degree $k$ by $H_{\uparrow k}$ and $H^{\downarrow}_{k}$.
The direct sum of orthogonalizable $\Lambda_{\uparrow}$- or $\Lambda^{\downarrow}$-spaces is defined in the obvious way that makes the summands orthogonal subspaces, leading to corresponding notions of direct sum of Floer-type complexes and of chain-level filtered matched pairs implicit below.
\begin{prop}\label{splitcfmp}
With notation as above, we have a diagram, commutative up to homotopy,
\begin{equation}\label{splitsix} \xymatrix{ & \mathcal{A}(\mathcal{C}^{\downarrow})\oplus \mathcal{H}(\mathcal{C}^{\downarrow}) \ar[rr]^f & & \mathcal{C}^{\downarrow} \\ \mathcal{A}(\mathcal{C})\oplus\mathcal{H}^{\mathrm{free}}(\mathcal{C})\ar[ur]^{\Phi^{\downarrow}}\ar[rr]^{g}\ar[dr]_{\Phi_{\uparrow}} & & \mathcal{C}\ar[ur]_<<<<<<<<{\phi^{\downarrow}}\ar[dr]^{\phi_{\uparrow}} &\\ & \mathcal{A}(\mathcal{C}_{\uparrow})\oplus \mathcal{H}(\mathcal{C}_{\uparrow}) \ar[rr]^h & & \mathcal{C}_{\uparrow} } \end{equation}
where $f$ and $h$ are filtered chain isomorphisms, $g$ is a chain isomorphism, $\Phi^{\downarrow}$ restricts to $\mathcal{A}(\mathcal{C})$ as $0$ and to $\mathcal{H}^{\mathrm{free}}(\mathcal{C})$ as the map $\phi^{\downarrow}_{*}\colon\thinspace \mathcal{H}^{\mathrm{free}}(\mathcal{C})\to \mathcal{H}(\mathcal{C}^{\downarrow})$ induced on homology by $\phi^{\downarrow}$, and $\Phi_{\uparrow}$ restricts to $\mathcal{A}(\mathcal{C})$ as $0$ and to $\mathcal{H}^{\mathrm{free}}(\mathcal{C})$ as the map $\phi_{\uparrow *}\colon\thinspace \mathcal{H}^{\mathrm{free}}(\mathcal{C})\to \mathcal{H}(\mathcal{C}_{\uparrow})$ induced on homology by $\phi_{\uparrow}$.\end{prop}
\begin{proof}
As noted above the proposition, in each degree $k$ we have isomorphisms $H_{\uparrow k}\cong H_{k}(\mathcal{C}_{\uparrow})$, $H_k^{\downarrow}\cong H_k(\mathcal{C}^{\downarrow})$, and $H_k^{\mathrm{free}}\cong \frac{H_k(\mathcal{C})}{tH_k(\mathcal{C})}$ induced on homology by $\iota_{\uparrow},\iota^{\downarrow},\iota$ respectively; the chain isomorphisms $f,g,h$ are just the compositions of (the inverses of) these isomorphisms with the maps given in each degree $k$ by obvious addition maps associated to the direct sum decompositions $C_k=(F_k\oplus \tilde{B}_k)\oplus H_{k}^{\mathrm{free}}$, $C_{\uparrow k}=(F_{\uparrow k}\oplus B_{\uparrow k})\oplus H_{\uparrow k}$, $C^{\downarrow}_{k}=(F^{\downarrow}_{k}\oplus B^{\downarrow}_{k})\oplus H_{k}^{\downarrow}$. (The fact that $f$ and $h$ are \emph{filtered} isomorphisms follows from the corresponding direct sum decompositions being orthogonal.)
It remains to show that the diagram commutes up to homotopy. Under the same identifications of $\frac{H_k(\mathcal{C})}{tH_k(\mathcal{C})}$ with $H_{k}^{\mathrm{free}}$ and $H_k(\mathcal{C}_{\uparrow})$ with $H_{\uparrow k}$ as in the previous paragraph, the map $\phi_{\uparrow *}\colon\thinspace \frac{H_k(\mathcal{C})}{tH_k(\mathcal{C})}\to H_k(\mathcal{C}_{\uparrow})$ becomes identified with the map $\Pi_{\uparrow}\circ (\phi_{\uparrow}|_{H_k^{\mathrm{free}}})\colon\thinspace H_{k}^{\mathrm{free}}\to H_{\uparrow k}$ between submodules of $C_k$ and $C_{\uparrow k}$. The map $\Phi_{\uparrow}$ then becomes identified, in degree $k$, with the map $C_k\to C_{\uparrow k}$ that vanishes on $F_k\oplus \tilde{B}_k$ and sends $ H_{k}^{\mathrm{free}}$ to $ H_{\uparrow k}$ via $\Pi_{\uparrow}\circ (\phi_{\uparrow}|_{H_k^{\mathrm{free}}})$; this map $C_k\to C_{\uparrow k}$ can be written as $\iota_{\uparrow}\circ\Pi_{\uparrow}\circ\phi_{\uparrow}\circ\iota\circ\Pi$. By Propositions \ref{iPi} and \ref{phipi}, this latter map is chain homotopic to $\phi_{\uparrow}$, and hence the bottom half of (\ref{splitsix}) commutes up to homotopy. That the top half commutes up to homotopy follows by the same argument.
\end{proof}
Thus our original chain-level filtered matched pair $\mathcal{CP}$ is filtered matched homotopy equivalent to the chain-level filtered matched pair
given by \begin{equation} \xymatrix{ & \mathcal{A}(\mathcal{C}^{\downarrow})\oplus \mathcal{H}(\mathcal{C}^{\downarrow}) \\ \mathcal{A}(\mathcal{C})\oplus\mathcal{H}^{\mathrm{free}}(\mathcal{C})\ar[ur]^{\Phi^{\downarrow}}\ar[dr]_{\Phi_{\uparrow}} & \\ & \mathcal{A}(\mathcal{C}_{\uparrow})\oplus \mathcal{H}(\mathcal{C}_{\uparrow}) } \label{splitversion}\end{equation} We regard this latter chain-level filtered matched pair as a direct sum of three simpler ones: \[ \xymatrix{ & 0 \\ \mathcal{A}(\mathcal{C})\ar[ur]^{0}\ar[dr]_{0} & \\ & 0 }\qquad \xymatrix{ & \mathcal{A}(\mathcal{C}^{\downarrow}) \\ 0\ar[ur]^{0}\ar[dr]_{0} & \\ & \mathcal{A}(\mathcal{C}_{\uparrow}) }\qquad \xymatrix{ & \mathcal{H}(\mathcal{C}^{\downarrow}) \\ \mathcal{H}^{\mathrm{free}}(\mathcal{C})\ar[ur]^{\phi^{\downarrow}_{*}}\ar[dr]_{\phi_{\uparrow *}} & \\ & \mathcal{H}(\mathcal{C}_{\uparrow}) } \] Denoting these, respectively, by $\mathcal{T}(\mathcal{CP}),\mathcal{A}(\mathcal{CP}),\mathcal{H}^{\mathrm{free}}(\mathcal{CP})$ it follows readily from Proposition \ref{splitcfmp} and the definitions that, in the category of persistence modules, \begin{equation}\label{hkdec} \mathbb{H}_k(\mathcal{CP})\cong \mathbb{H}_k(\mathcal{T}(\mathcal{CP}))\oplus \mathbb{H}_k(\mathcal{A}(\mathcal{CP})) \oplus \mathbb{H}_k(\mathcal{H}^{\mathrm{free}}(\mathcal{CP})).\end{equation} The three summands on the right-hand side can be described in terms of the building blocks in Section \ref{bbs}.
First of all, letting $tH_k(\mathcal{C})$ denote the torsion part of $H_k(\mathcal{C})$, for each $k$ we have a free resolution $0\to F_{k+1}\to \tilde{B}_k\to tH_k(\mathcal{C})\to 0$. So since $\mathcal{A}(\mathcal{C})$ is the subcomplex $\oplus_j(F_j\oplus \tilde{B}_j)$ of $\mathcal{C}$, we can identify $\mathcal{T}(\mathcal{CP})$ with the direct sum over $k$ of the chain-level filtered matched pairs $\mathcal{PR}(tH_k(\mathcal{C}),k)$ from Section \ref{pr}. Thus \[ \mathbb{H}_k(\mathcal{T}(\mathcal{CP}))\cong (\kappa_{(-\infty,\infty)\times(-\infty,\infty)})^{\oplus (\dim_{\kappa}tH_k(\mathcal{C}))}.\]
As for $\mathcal{A}(\mathcal{CP})$, the subcomplex $\mathcal{A}(\mathcal{C}_{\uparrow})$ of $\mathcal{C}_{\uparrow}$ decomposes as a (filtered) direct sum of simple subcomplexes generated by pairs $\{y_{i}^{k+1},x_{i}^{k}\}$ coming from the singular value decomposition of $\partial_{\mathcal{C}_{\uparrow}}$, with $\partial_{\mathcal{C}_{\uparrow}}y_{i}^{k+1}=x_{i}^{k}$; these simple subcomplexes are isomorphic to the elementary complexes $\mathcal{E}_{\uparrow}(\ell_{\uparrow}(x_{i}^{k}),\ell_{\uparrow}(y_{i}^{k+1})-\ell_{\uparrow}(x_{i}^{k}),k)$ mentioned in Section \ref{peupsec}, and, except for those $i$ for which $\ell_{\uparrow}(y_{i}^{k+1})=\ell_{\uparrow}(x_{i}^{k})$, they also correspond to the finite-length bars $[\ell_{\uparrow}(x_{i}^{k}),\ell_{\uparrow}(y_{i}^{k+1}))^{\Gamma}$ in the concise barcode of $\mathcal{C}_{\uparrow}$. Similarly, $\mathcal{A}(\mathcal{C}^{\downarrow})$ decomposes as a filtered direct sum of elementary complexes $\mathcal{E}^{\downarrow}(b,b-a,k)$ for each finite-length bar $(a,b]^{\Gamma}$ in the degree-$k$ concise barcode of $\mathcal{C}^{\downarrow}$, together with any summands of the form $\mathcal{E}^{\downarrow}(b,b,k)$ arising from the distinction between the verbose and concise barcodes. It follows that $\mathcal{A}(\mathcal{CP})$ decomposes as a direct sum of the various $\mathcal{PE}_{\uparrow}(a,b-a,k)$ and $\mathcal{PE}^{\downarrow}(b,b-a,k)$ associated respectively to finite-length bars $[a,b)^{\Gamma}$ in the concise barcode of $\mathcal{C}_{\uparrow}$ and to finite-length bars $(a,b]^{\Gamma}$ in the concise barcode of $\mathcal{C}^{\downarrow}$, plus perhaps some $\mathcal{PE}_{\uparrow}(a,a,k)$ or $\mathcal{PE}^{\downarrow}(b,b,k)$. So by the calculations in Section \ref{bbs}, each finite-length bar $[a,b)^{\Gamma}$ in the degree-$k$ concise barcode of $\mathcal{C}_{\uparrow}$ contributes a summand $\oplus_{g\in\Gamma}\kappa_{(-\infty,\infty)\times [a-g,b-g)}$ to $\mathbb{H}_k(\mathcal{A}(\mathcal{CP}))$, and likewise each finite-length bar $(a,b]^{\Gamma}$ in the degree-$k$ concise barcode of $\mathcal{C}^{\downarrow}$ contributes a summand $\oplus_{g\in\Gamma}\kappa_{[-b-g,-a-g)\times(-\infty,\infty)}$ to $\mathbb{H}_k(\mathcal{A}(\mathcal{CP}))$. (Any potential summands $\mathcal{PE}_{\uparrow}(a,a,k)$ or $\mathcal{PE}^{\downarrow}(b,b,k)$ do not affect the outcome because $\mathbb{H}_k(\mathcal{PE}_{\uparrow}(a,a,k))\cong \mathbb{H}_k(\mathcal{PE}^{\downarrow}(b,b,k))\cong \{0\}$.)
Finally, since all differentials on the complexes involved in the chain-level filtered matched pair $\mathcal{H}^{\mathrm{free}}(\mathcal{CP})$ are zero, for each $k\in \mathbb{Z}$ the degree-$k$ part $\mathcal{H}_{k}^{\mathrm{free}}(\mathcal{CP})$ of $\mathcal{H}^{\mathrm{free}}(\mathcal{CP})$ is a filtered matched pair as in Section \ref{fmpsec} and so, due to our standing assumption that $\Gamma$ is discrete, $\mathcal{H}_{k}^{\mathrm{free}}(\mathcal{CP})$ admits a doubly orthogonal basis $\{[e_1],\ldots,[e_d]\}$ by Theorem \ref{basisconstruct}. Here each $[e_i]\in\frac{H_k(\mathcal{C})}{tH_k(\mathcal{C})}$, and we may regard $[e_i]$ as obtained via the quotient projection from a doubly orthogonal basis for the filtered matched pair $\mathcal{H}_k(\mathcal{CP})$ discussed in Remark \ref{cfmpfmp}. It is then easy to see that $\mathcal{H}_{k}^{\mathrm{free}}(\mathcal{CP})$ splits as a (filtered) direct sum of filtered matched pairs, for $i=1,\ldots,d$, \[ \xymatrix{ & \mathrm{span}_{\Lambda^{\downarrow}}\{\phi^{\downarrow}_{*}e_i\} \\ \mathrm{span}_{\Lambda} \{[e_i]\} \ar[rd]_{\phi_{\uparrow *}} \ar[ru]^{\phi^{\downarrow}_{*}} & \\ & \mathrm{span}_{\Lambda_{\uparrow}}\{\phi_{\uparrow *}e_i\}}\] and these (regarded as chain-level filtered matched pairs with zero differential) are respectively isomorphic to the chain-level filtered matched pairs $\mathcal{PM}(\rho_{\uparrow}(\phi_{\uparrow *}e_i),\rho^{\downarrow}(\phi^{\downarrow}_{*}e_i),k)$ of Section \ref{mabsect}. Recall that the basis spectrum $\Sigma(\mathcal{H}_{k}^{\mathrm{free}}(\mathcal{CP}))$ is by definition the collection of pairs $([\rho_{\uparrow}(\phi_{\uparrow *}e_i)],\rho^{\downarrow}(\phi^{\downarrow}_{*}e_i)-\rho_{\uparrow}(\phi_{\uparrow *}e_i))\in (\mathbb{R}/\Gamma)\times \mathbb{R}$; this is the same as the basis spectrum $\Sigma(\mathcal{H}_k(\mathcal{CP}))$. It then follows from the calculation in Section \ref{mabsect} that \begin{align*} \mathbb{H}_k(\mathcal{H}^{\mathrm{free}}(\mathcal{CP}))\cong& \left(\bigoplus_{([a],L)\in \Sigma(\mathcal{H}_k(\mathcal{CP}))}\bigoplus_{g\in\Gamma}\kappa_{[-a-L+g,\infty)\times[a-g,\infty)} \right)\\ & \quad\oplus\left(\bigoplus_{([a],L)\in\Sigma(\mathcal{H}_{k+1}(\mathcal{CP}))}\bigoplus_{g\in\Gamma}\kappa_{(-\infty,-a-L+g)\times(-\infty,a-g)}\right).\end{align*}
We summarize this discussion as follows:
\begin{theorem}\label{bigdecomp}
Assume that $\Gamma$ is discrete and let $\mathcal{CP}=(\mathcal{C},\mathcal{C}^{\downarrow},\mathcal{C}_{\uparrow},\phi^{\downarrow},\phi_{\uparrow})$ be a chain-level filtered matched pair. For each $k\in\mathbb{Z}$, the persistence module $\mathbb{H}_k(\mathcal{CP})$ splits as a direct sum of the following block modules:
\begin{itemize} \item[(i)] ($\dim_{\kappa}tH_k(\mathcal{C})$)-many copies of $\kappa_{(-\infty,\infty)\times(-\infty,\infty)}$;
\item[(ii)] for each finite-length bar in the degree-$k$ concise barcode of $\mathcal{C}_{\uparrow}$, represented as $[a,b)^{\Gamma}$ for some $a,b\in\mathbb{R}$, and for each $g\in \Gamma$, a copy of $\kappa_{(-\infty,\infty)\times [a-g,b-g)}$;
\item[(iii)] for each finite-length bar in the degree-$k$ concise barcode of $\mathcal{C}^{\downarrow}$, represented as $(a,b]^{\Gamma}$ for some $a,b\in \mathbb{R}$, and for each $g\in\Gamma$, a copy of $\kappa_{[-b-g,-a-g)\times(-\infty,\infty)}$;
\item[(iv)] for each element of $\Sigma(\mathcal{H}_k(\mathcal{CP}))$, represented as $([a],b-a)$ for some $a,b\in \mathbb{R}$, and for each $g\in \Gamma$, a copy of $\kappa_{[-b+g,\infty)\times[a-g,\infty)}$; and
\item[(v)] for each element of $\Sigma(\mathcal{H}_{k+1}(\mathcal{CP}))$, represented as $([a],b-a)$ for some $a,b\in\mathbb{R}$, and for each $g\in \Gamma$, a copy of $\kappa_{(-\infty,-b+g)\times(-\infty,a-g)}$. \end{itemize} Here all elements of concise barcodes or basis spectra are counted with multiplicity.
\end{theorem}
Consider in particular the $\kappa$-vector spaces $\mathbb{H}_k(\mathcal{CP})_{-s,s}$, which are meant to correspond to level-set homologies $H_k(f^{-1}(\{s\});\kappa)$. Theorem \ref{bigdecomp} shows that these decompose as a sum of the following contributions:
\begin{itemize} \item[(i)] the $\kappa$-vector space $tH_k(\mathcal{C})$, independently of $s$;
\item[(ii)] for each finite-length bar $[a,b)^{\Gamma}$ in the degree-$k$ concise barcode of $\mathcal{C}_{\uparrow}$, as many copies of $\kappa$ as the number of $g\in\Gamma$ for which $s+g\in [a,b)$;
\item[(iii)] for each finite-length bar $(a,b]^{\Gamma}$ in the degree-$k$ concise barcode of $\mathcal{C}^{\downarrow}$, as many copies of $\kappa$ as the number of $g\in\Gamma$ for which $s+g\in (a,b]$;
\item[(iv)] for each element $([a],b-a)\in \Sigma(\mathcal{H}_k(\mathcal{CP}))$ having $a\leq b$, as many copies of $\kappa$ as the number of $g\in\Gamma$ for which $s+g\in [a,b]$; and
\item[(v)] for each element $([a],b-a)\in \Sigma(\mathcal{H}_{k+1}(\mathcal{CP}))$ having $a>b$, as many copies of $\kappa$ as the number of $g\in\Gamma$ for which $s+g\in (b,a)$.\end{itemize}
Moreover, if $I$ is one of the intervals $[a,b),(a,b],[a,b],(a,b)$ listed above, and if $s<s'$ and $g\in \Gamma$ with $s+g,s'+g\in I$, then the corresponding summands of $\mathbb{H}_k(\mathcal{CP})_{-s,s}$, $\mathbb{H}_k(\mathcal{CP})_{-s',s'}$ are related by the fact that they are sent under the persistence module structure maps to the same summand of $\mathbb{H}_k(\mathcal{CP})_{-s,s'}$. Thus the collection of these intervals (modulo $\Gamma$-translation) has the features that would be expected of an interlevel persistence barcode. We accordingly make the following definition:
\begin{dfn}\label{fullbar}
Assume that $\Gamma$ is discrete, let $k\in\mathbb{Z}$ and let $\mathcal{CP}=(\mathcal{C},\mathcal{C}^{\downarrow},\mathcal{C}_{\uparrow},\phi^{\downarrow},\phi_{\uparrow})$ be a chain-level filtered matched pair. The \textbf{full degree-$k$ barcode} of $\mathcal{CP}$ consists of the following intervals modulo $\Gamma$-translation:\begin{itemize}
\item[(i)] all finite-length intervals $[a,b)^{\Gamma}$ in the degree-$k$ concise barcode of $\mathcal{C}_{\uparrow}$;
\item[(ii)] all finite-length intervals $(a,b]^{\Gamma}$ in the degree-$k$ concise barcode of $\mathcal{C}^{\downarrow}$;
\item[(iii)] for each $([a],\ell)\in \Sigma(\mathcal{H}_{k}(\mathcal{CP}))$ with $\ell\geq 0$, an interval $[a,a+\ell]^{\Gamma}$; and
\item[(iv)] for each $([a],\ell)\in \Sigma(\mathcal{H}_{k+1}(\mathcal{CP}))$ with $\ell<0$, an interval $(a+\ell,a)^{\Gamma}$.
\end{itemize}
\end{dfn}
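To illustrate, suppose that $\Gamma=\{0\}$, that $tH_k(\mathcal{C})=\{0\}$, that the degree-$k$ concise barcodes of $\mathcal{C}_{\uparrow}$ and of $\mathcal{C}^{\downarrow}$ consist of the single finite-length bars $[0,2)$ and $(1,3]$ respectively, and that $\Sigma(\mathcal{H}_k(\mathcal{CP}))=\{([0],3)\}$ while $\Sigma(\mathcal{H}_{k+1}(\mathcal{CP}))=\{([2],-1)\}$. The full degree-$k$ barcode then consists of the four intervals $[0,2)$, $(1,3]$, $[0,3]$, and $(1,2)$, each contributing one copy of $\kappa$ to $\mathbb{H}_k(\mathcal{CP})_{-s,s}$ for each $s$ that it contains, so that \[ \dim_{\kappa}\mathbb{H}_k(\mathcal{CP})_{-s,s}=\left\{\begin{array}{ll} 2 & \mbox{if }s\in[0,1]\cup[2,3] \\ 4 & \mbox{if }s\in(1,2) \\ 0 & \mbox{otherwise}\end{array}\right..\]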
Note that a stability theorem for this barcode now follows trivially from stability theorems concerning the various ingredients in the barcode, namely Theorem \ref{stab} for the intervals in (iii) and (iv), and \cite[Theorem 8.17]{UZ} (and a conjugated version thereof) for the intervals in (i) and (ii).
\subsection{Chain-level Poincar\'e-Novikov structures}\label{cpnstr}
If $\mathcal{C}=(C_*,\partial)$ is a chain complex and $a\in\mathbb{Z}$, as before we let $\mathcal{C}[a]$ denote the chain complex with chain modules $C[a]_{k}=C_{k+a}$ and differential $(-1)^a\partial$. Also we will write ${}^{\vee}\!\mathcal{C}$ for the chain complex whose $k$th chain module is $({}^{\vee}\!C)_{k}:={}^{\vee}\!(C_{-k})$ and whose differential acts on $({}^{\vee}\!C)_{k+1}$ by ${}^{\vee}\!((-1)^k\partial|_{C_{-k}})\colon\thinspace {}^{\vee}\!(C_{-k-1})\to {}^{\vee}\!(C_{-k})$. (Thus ${}^{\vee}\!\mathcal{C}$ is obtained from the cochain complex $(C^*,\delta)$ as in Proposition \ref{cohtopdstr} by negating the grading so as to turn it into a chain complex.)
We define a \textbf{chain-level $n$-Poincar\'e-Novikov structure} $\mathcal{CN}=(\mathcal{C},\tilde{\mathcal{D}},\mathcal{C}_{\uparrow},\mathcal{S})$ to consist of a chain complex $\mathcal{C}$ of free, finitely-generated $\Lambda$-modules, a chain homotopy equivalence $\tilde{\mathcal{D}}\colon\thinspace \mathcal{C}\to {}^{\vee}\!(\mathcal{C}[n])$, a $\Lambda_{\uparrow}$-Floer-type complex $\mathcal{C}_{\uparrow}$, and a chain map $\mathcal{S}\colon\thinspace \mathcal{C}\to\mathcal{C}_{\uparrow}$ which becomes a chain homotopy equivalence after tensoring with $1_{\Lambda_{\uparrow}}$.
So $\tilde{\mathcal{D}}$ maps $C_k$ to ${}^{\vee}(C_{n-k})$ and, in the notation of Proposition \ref{cohtopdstr}, induces isomorphisms $D_k\colon\thinspace H_k\to H^{n-k}$, so by that proposition one obtains maps $\mathrm{ev}\circ D_k\colon\thinspace H_k\to {}^{\vee}\!H_{n-k}$ which define a strong $n$-PD structure if $\Gamma$ is discrete, and a weak $n$-PD structure for any $\Gamma$. Combining this PD structure with the map induced on homology by $\mathcal{S}$ yields a (respectively strong or weak) $n$-Poincar\'e-Novikov structure in the sense of Definition \ref{pnstr}.\vspace{.1 in}
\begin{center}
\begin{tikzpicture}
\node (cpn) [box] at (0,0) {chain-level Poincar\'e-Novikov structure};
\node (cfmp) [box] at (6,0) {chain-level filtered matched pair};
\node (pn) [box] at (0,-3) {Poincar\'e-Novikov structure};
\node (fmp) [box] at (6,-3) {graded filtered matched pair};
\draw [arrow] (cpn) -- node[anchor=east] {$H_*$} (pn);
\draw [arrow] (cfmp) -- node[anchor=west] {$\mathcal{H}_*$ (Remark \ref{cfmpfmp})} (fmp);
\draw [arrow] (pn) -- node[anchor=north] {(\ref{pnkdef})} (fmp);
\draw [dashed, arrow] (cpn) -- (cfmp);
\end{tikzpicture}
\end{center}
Let us complete the commutative diagram above, associating a chain-level filtered matched pair $\mathcal{CP}(\mathcal{CN})$ to a chain-level Poincar\'e-Novikov structure $\mathcal{CN}=(\mathcal{C},\tilde{\mathcal{D}},\mathcal{C}_{\uparrow},\mathcal{S})$ in a manner which, upon passing to homology, induces in every degree $k$ the operation $\mathcal{N}\mapsto\mathcal{P}(\mathcal{N})_k$ from (\ref{pnkdef}). Thus we must specify data $(\mathcal{C},\mathcal{C}^{\downarrow},\mathcal{C}_{\uparrow},\phi^{\downarrow},\phi_{\uparrow})$ as in Definition \ref{cfmpdfn}. For $\mathcal{C},\mathcal{C}_{\uparrow}$ we use the corresponding data from $\mathcal{CN}$, and for $\phi_{\uparrow}$ we use the map $\mathcal{S}$ from $\mathcal{CN}$. For the $\Lambda^{\downarrow}$-Floer-type complex $\mathcal{C}^{\downarrow}$ we use the shifted dual ${}^{\vee}\!(\mathcal{C}_{\uparrow}[n])$, with $k$th chain module equal to ${}^{\vee}(C_{\uparrow (n-k)})$, equipped with the filtration function ${}^{\vee}\!(\ell_{\uparrow}|_{C_{\uparrow (n-k)}})$, and with differential given by the appropriate sign times the adjoint of the differential on $\mathcal{C}_{\uparrow}$.
It remains to specify a chain map $\phi^{\downarrow}\colon\thinspace \mathcal{C}\to {}^{\vee}\!(\mathcal{C}_{\uparrow}[n])$ which becomes a chain homotopy equivalence after tensoring with $1_{\Lambda^{\downarrow}}$. This is done just as in the construction of $\tilde{S}_{n-k}=\delta^{\downarrow}(S_{n-k})\circ \mathcal{D}$ in (\ref{pnkdef}). Namely, letting $\mathcal{T}\colon\thinspace \mathcal{C}_{\uparrow}\to \Lambda_{\uparrow}\otimes_{\Lambda}\mathcal{C}$ denote a homotopy inverse to $1_{\Lambda_{\uparrow}}\otimes\mathcal{S}$, we take $\phi^{\downarrow}$ to be the composition \begin{equation}\label{phidowncpn} \xymatrix{ \mathcal{C}\ar[r]^-{\tilde{\mathcal{D}}} & {}^{\vee}\!(\mathcal{C}[n])\ar[r] & \Lambda^{\downarrow}\otimes {}^{\vee}\!(\mathcal{C}[n])\ar[r]^-{\beta} & {}^{\vee}\!\left(\Lambda_{\uparrow}\otimes_{\Lambda}\mathcal{C}[n]\right)\ar[r]^-{{}^{\vee}\!\mathcal{T}} & {}^{\vee}\!(\mathcal{C}_{\uparrow}[n])} \end{equation} where the second map is coefficient extension and $\beta$ is (in every grading) as in Proposition \ref{pullout}. Since $\beta$ is an isomorphism and $\tilde{\mathcal{D}}$ and $\mathcal{T}$ are homotopy equivalences, we see that $1_{\Lambda^{\downarrow}}\otimes\phi^{\downarrow}$ is likewise a homotopy equivalence. This completes the definition of the chain-level filtered matched pair $\mathcal{CP}(\mathcal{CN})$ associated to a chain-level Poincar\'e-Novikov structure $\mathcal{CN}$. Since the map on homology induced by $\mathcal{T}$ is inverse to that induced by $1_{\Lambda_{\uparrow}}\otimes \mathcal{S}$, it is easy to see that, given a chain-level Poincar\'e-Novikov structure $\mathcal{CN}$, one obtains the same filtered matched pair by first constructing $\mathcal{CP}(\mathcal{CN})$ and then passing to homology as one does by first passing to homology to obtain a Poincar\'e-Novikov complex in the sense of Definition \ref{pnstr} and then taking the associated filtered matched pair as in (\ref{pnkdef}).
From here one can, for any $k$, construct the two-parameter persistence module $\mathbb{H}_k(\mathcal{CP}(\mathcal{CN}))$ defined in (\ref{hkdef}). This definition, in general, involves the choice of homotopy inverses $\psi_{\uparrow},\psi^{\downarrow}$ to $1_{\Lambda_{\uparrow}}\otimes\phi_{\uparrow}$ and $1_{\Lambda^{\downarrow}}\otimes \phi^{\downarrow}$ (with the resulting persistence module independent of this choice up to isomorphism, as noted above Definition \ref{fmhedef}). Since $\phi_{\uparrow}=\mathcal{S}$, in the role of $\psi_{\uparrow}$ we may use the map $\mathcal{T}$ from the previous paragraph. Since $\phi^{\downarrow}$ is given by (\ref{phidowncpn}) and since ${}^{\vee}\!\mathcal{T}$ has homotopy inverse ${}^{\vee}\!(1_{\Lambda_{\uparrow}}\otimes\mathcal{S})$, the role of $\psi^{\downarrow}$ is fulfilled by the composition \begin{equation}\label{psidowncpn} \xymatrix{
{}^{\vee}\!(\mathcal{C}_{\uparrow}[n]) \ar[r]^-{{}^{\vee}\!(1_{\Lambda_{\uparrow}}\otimes\mathcal{S})} & {}^{\vee}\!\left(\Lambda_{\uparrow}\otimes_{\Lambda}\mathcal{C}[n]\right) \ar[r]^-{\beta^{-1}} & \Lambda^{\downarrow}\otimes {}^{\vee}\!(\mathcal{C}[n])\ar[r]^-{1_{\Lambda^{\downarrow}}\otimes\tilde{\mathcal{D}}'} & \Lambda^{\downarrow}\otimes_{\Lambda}\mathcal{C}}
\end{equation} for any choice of homotopy inverse $\tilde{\mathcal{D}}'$ to the chain-level Poincar\'e duality map $\tilde{\mathcal{D}}$.
Assume now that $\Gamma$ is discrete. We define the \textbf{full barcode} $\cup_k\mathcal{B}_k(\mathcal{CN})$ of a chain-level $n$-Poincar\'e--Novikov structure $\mathcal{CN}$ to be the full barcode, as given by Definition \ref{fullbar}, of the corresponding chain-level filtered matched pair $\mathcal{CP}(\mathcal{CN})$. The bars appearing in items (iii) and (iv) of Definition \ref{fullbar} evidently comprise the essential barcode (as defined in Definition \ref{pnbar}) of the $n$-Poincar\'e--Novikov structure $\mathcal{N}$ that is obtained from $\mathcal{CN}$ by passing to homology. These satisfy the stability property from Corollary \ref{pnstab}, and the duality result of Corollary \ref{pndual}. (For the latter we use that $\mathcal{N}$ is a \emph{strong} $n$-Poincar\'e--Novikov structure by Proposition \ref{cohtopdstr} and the fact that $\Gamma$ is discrete.) The bars in (i) and (ii) of Definition \ref{fullbar} (in the context of $\mathcal{CP}(\mathcal{CN})$ for our chain-level $n$-Poincar\'e--Novikov structure $\mathcal{CN}=(\mathcal{C},\tilde{\mathcal{D}},\mathcal{C}_{\uparrow},\mathcal{S})$) comprise the finite-length bars $[a,b)^{\Gamma}$ of the $\Lambda_{\uparrow}$-Floer-type complex $\mathcal{C}_{\uparrow}$, and the finite-length bars $(a,b]^{\Gamma}$ of the $\Lambda^{\downarrow}$-Floer-type complex ${}^{\vee}\!(\mathcal{C}_{\uparrow}[n])$. These also satisfy a stability property based on \cite[Theorem 8.17]{UZ}. After adjusting for conjugation and the degree shift by $n$, it follows from \cite[Proposition 6.7]{UZ} that, for any $k$, bars $[a,b)^{\Gamma}\in \mathcal{B}_k(\mathcal{CN})$ are in one-to-one correspondence with bars $(a,b]^{\Gamma}\in \mathcal{B}_{n-k-1}(\mathcal{CN})$.
Combining this with Corollary \ref{pndual}, we see that \emph{all} types of bars in $\cup_k\mathcal{B}_k(\mathcal{CN})$ except for those of the form $[a,a]^{\Gamma}$ are subject to a symmetry which matches bars in degree $k$ with bars in degree $n-k-1$ with identical interiors but opposite endpoint conditions (closed intervals $[a,b]^{\Gamma}$ are matched to open intervals $(a,b)^{\Gamma}$, and intervals $[a,b)^{\Gamma}$ are matched to intervals $(a,b]^{\Gamma}$). This symmetry is consistent with Poincar\'e duality in the regular level sets of a Morse function, and it is also analogous to the symmetry theorem in \cite{ext}.
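As a simple illustration, one can check using the Morse-theoretic constructions of Section \ref{concretemorse} below that, for the chain-level Poincar\'e--Novikov structure associated to a Morse function $f\colon\thinspace S^1\to\mathbb{R}$ with exactly one minimum, of value $a$, and one maximum, of value $b$ (so that $n=1$ and $\Gamma=\{0\}$), the full barcode consists of the two degree-$0$ bars $[a,b]$ and $(a,b)$, arising via items (iii) and (iv) of Definition \ref{fullbar} from the generators of $H_0(S^1;\kappa)$ and $H_1(S^1;\kappa)$ respectively. These two bars are matched under the symmetry just described (here $k=0$ and $n-k-1=0$), and the resulting dimensions of $\mathbb{H}_0(\mathcal{CP})_{-s,s}$, namely $1$ for $s\in\{a,b\}$ and $2$ for $a<s<b$, agree with the ranks of the level-set homologies $H_0(f^{-1}(\{s\});\kappa)$.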
\section{The chain-level Poincar\'e--Novikov structures of Morse and Novikov theory}\label{concretemorse}
In this section we review basic features of Morse and Novikov complexes in sufficient detail as to allow us to construct and study their associated chain-level Poincar\'e--Novikov structures. This culminates in the proof of Theorem \ref{introiso} which relates, in the case of an $\mathbb{R}$- or $S^1$-valued Morse function, the two-parameter persistence module from (\ref{hkdef}) to interlevel persistence.
\subsection{The Morse complex, continuation maps, and Poincar\'e--Novikov structures with $\Gamma=\{0\}$}\label{morseintro}
Let $X$ be a smooth, $n$-dimensional manifold, and let $f\colon\thinspace X\to \mathbb{R}$ be a Morse function on $X$. Recall that a vector field $v$ on $X$ is said to be \emph{gradient-like} for $f$ provided that both $df_p(v)>0$ at all $p$ outside of the set $\mathrm{Crit}(f)$ of critical points of $f$, and, around each $p\in \mathrm{Crit}(f)$, there is a coordinate chart in which $v$ is represented in the standard form $\left(-\sum_{i=1}^{k}x_i\partial_{x_i}+\sum_{i=k+1}^{n}x_i\partial_{x_i}\right)$. It is not hard to construct such vector fields: take $v$ equal to the gradient $\nabla f$ with respect to a Riemannian metric which is standard in Morse charts around the points of $\mathrm{Crit}(f)$; conversely, any gradient-like vector field for $f$ is the gradient of $f$ with respect to some Riemannian metric (\cite[Proposition II.2.11]{Pa}\footnote{Here and below we use Roman numerals to denote chapter numbers in \cite{Pa}, so this reference is to Proposition 2.11 in Chapter 2; the theorem numbering in \cite{Pa} does not include chapters}).
Assuming that our manifold $X$ is compact and without boundary and that $v$ is gradient-like for $f$, write $\{\phi_{-v}^{t}\}_{t\in \mathbb{R}}$ for the flow of the vector field $-v$. One then has, for each critical point $p$ of $f$, descending and ascending manifolds \[ \mathbf{D}^v(p)=\{q\in X|\lim_{t\to-\infty}\phi_{-v}^{t}(q)=p\},\qquad \mathbf{A}^v(p)=\{q\in X|\lim_{t\to+\infty}\phi_{-v}^{t}(q)=p\}. \] These are smoothly embedded disks in $X$ of dimensions, respectively, $\mathrm{ind}_f(p)$ and $n-\mathrm{ind}_f(p)$, where $\mathrm{ind}_f(p)$ is the Morse index of $p$ (equal to the integer $k$ appearing in the local representation for $v$ in the previous paragraph). Obviously the vector field $-v$ is gradient-like for the Morse function $-f$, and we have $\mathbf{D}^{-v}(p)=\mathbf{A}^{v}(p)$. For choices of $v$ that are generic amongst gradient-like vector fields for $f$, the flow of $-v$ will be Morse-Smale in the sense that each intersection $\mathbf{W}^v(p,q):=\mathbf{D}^v(p)\cap \mathbf{A}^v(q)$ is transverse, and hence smooth of dimension $\mathrm{ind}_f(p)-\mathrm{ind}_f(q)$ (see, \emph{e.g.}, \cite[Theorem IV.2.13]{Pa}).
Since the various $\mathbf{D}^v(p),\mathbf{A}^v(q)$ are invariant under the flow of $-v$, this flow induces an $\mathbb{R}$-action on $\mathbf{W}^v(p,q)$, which is free provided that $p\neq q$. So, assuming the Morse-Smale condition, the space $\mathbf{M}^v(p,q)=\frac{\mathbf{W}^v(p,q)}{\mathbb{R}}$ is a smooth manifold of dimension $\mathrm{ind}_f(p)-\mathrm{ind}_f(q)-1$. Under the correspondence which sends a trajectory $\gamma\colon\thinspace \mathbb{R}\to X$ of the negative gradient-like vector field $-v$ to its initial condition $\gamma(0)$, we may regard $\mathbf{W}^v(p,q)$ as the space of trajectories of $-v$ that are asymptotic as $t\to-\infty$ to $p$ and as $t\to+\infty$ to $q$, and $\mathbf{M}^v(p,q)$ as the space of such trajectories modulo reparametrization.
Continuing to assume the Morse-Smale condition, if $\mathrm{ind}_f(p)-\mathrm{ind}_f(q)=1$ then $\mathbf{M}^v(p,q)$ is a zero-dimensional manifold, which moreover is compact (\cite[Lemma VI.1.1]{Pa}), and so consists of finitely many points. We recall in Section \ref{orsect} how to orient the various $\mathbf{M}^v(p,q)$ (based on arbitrary orientations $\mathfrak{o}_{v,p}$ of the disks $\mathbf{D}^v(p)$); an orientation of a zero-dimensional manifold amounts to a choice of sign for each of its points, and so we obtain from these orientations counts $n_{f,v}(p,q)\in\mathbb{Z}$ of the elements of $\mathbf{M}^v(p,q)$, each point contributing $+1$ or $-1$ according to its orientation sign.
For $k\in \mathbb{N}$ let $\mathrm{Crit}_k(f)$ denote the set of index-$k$ critical points of $f$. The integer-coefficient Morse chain complex $\mathbf{CM}_*(f;\mathbb{Z})$ (or $\mathbf{CM}_{*}(f,v;\mathbb{Z})$ if we wish to emphasize the dependence on the gradient-like vector field $v$) has its degree $k$ part equal to the free abelian group with one generator for each element of $\mathrm{Crit}_{k}(f)$, with the Morse boundary operator $\partial_{k+1}^{f}\colon\thinspace \mathbf{CM}_{k+1}(f;\mathbb{Z})\to \mathbf{CM}_{k}(f;\mathbb{Z})$ defined to be the homomorphism given by, for each $p\in \mathrm{Crit}_{k+1}(f)$, \[ \partial_{k+1}^{f}p=\sum_{q\in\mathrm{Crit}_k(f)}n_{f,v}(p,q)q.\]
See, \emph{e.g.}, \cite[Section 4.1]{Sc} or \cite[Chapter 3]{AD} for self-contained proofs (based on spaces of broken gradient trajectories) that one has $\partial_k^f\circ\partial_{k+1}^f=0$, so that $\mathbf{CM}_{*}(f;\mathbb{Z})$ is indeed a chain complex; its homology will be denoted by $\mathbf{HM}_{*}(f;\mathbb{Z})$, and is isomorphic to the singular homology of $X$. Indeed, in \cite[Section VI.3.3]{Pa}, Pajitnov constructs a chain homotopy equivalence $\mathcal{E}(f,v)$ from $\mathbf{CM}_{*}(f;\mathbb{Z})$ to the singular chain complex $\mathbf{S}_*(X)$.
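As a quick sanity check (a standard example, not drawn from the references just cited), let $X=S^n$ with $n\geq 2$ and let $f$ be a height function with exactly two critical points, a minimum $q$ of index $0$ and a maximum $p$ of index $n$. Then \[ \mathbf{CM}_k(f;\mathbb{Z})\cong\begin{cases} \mathbb{Z}\langle p\rangle & k=n \\ \mathbb{Z}\langle q\rangle & k=0 \\ 0 & \text{otherwise,}\end{cases}\] and since no two critical points have indices differing by $1$, every boundary operator vanishes; thus $\mathbf{HM}_{k}(f;\mathbb{Z})$ is $\mathbb{Z}$ for $k\in\{0,n\}$ and $0$ otherwise, in agreement with the singular homology of $S^n$.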
A related classical (\emph{e.g.} \cite[Theorem 3.5]{Mil}, \cite[Theorem 1]{Kal}) perspective on the Morse complex identifies it as the cellular chain complex of a CW complex. \cite[Theorems 3.8 and 3.9]{Qin} show that our manifold $X$ admits a CW decomposition with open $k$-cells equal to the $\mathbf{D}^v(p)$ for $p\in \mathrm{Crit}_k(f)$, and moreover that, using the orientations $\mathfrak{o}_{v,p}$ of the $\mathbf{D}^v(p)$ to identify the $k$th cellular chain group of this CW complex with the free abelian group on $\mathrm{Crit}_k(f)$ (\emph{i.e.}, with $\mathbf{CM}_k(f;\mathbb{Z})$), the cellular differential for the CW complex agrees with the Morse differential defined above. See also \cite[Section 4.9]{AD} for a different proof, and \cite{Lau}, \cite{BH01} for earlier related results.
For any ring $R$ we may form the chain complex of $R$-modules $\mathbf{CM}_*(f,v;R):=\mathbf{CM}_{*}(f,v;\mathbb{Z})\otimes_{\mathbb{Z}}R$. For a field $\kappa$, $\mathbf{CM}_{*}(f,v;\kappa)$ can naturally be endowed with the structure of (using the language of Section \ref{clsec}, with the group $\Gamma$ equal to $\{0\}$) a $\Lambda_{\uparrow}$-Floer-type complex, by defining the filtration function $\ell_{\uparrow}^{f}\colon\thinspace \mathbf{CM}_k(f,v;\kappa)\to\mathbb{R}\cup\{-\infty\}$ by \[ \ell_{\uparrow}^{f}\left(\sum_{p\in \mathrm{Crit}_k(f)}n_pp\right)=\max\{f(p)|n_p\neq 0\},\] the maximum of the empty set being $-\infty$. Indeed $\mathbf{CM}_k(f,v;\kappa)$ has basis given by $\mathrm{Crit}_k(f)$ which, as is clear from the above formula, is $\ell_{\uparrow}^{f}$-orthogonal, and the inequality $\ell_{\uparrow}^{f}(\partial_kx)\leq \ell_{\uparrow}^{f}(x)$ follows from the fact that $f$ decreases along the flow of the negative gradient-like field $-v$. We write $\mathbf{CM}(f,v;\kappa)_{\uparrow}$ (or just $\mathbf{CM}(f)_{\uparrow}$ if $v$ and $\kappa$ are understood) for the $\Lambda_{\uparrow}$-Floer-type complex consisting of the chain complex $\mathbf{CM}_*(f,v;\kappa)$ together with the filtration function $\ell_{\uparrow}^{f}$.
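For instance (a toy illustration of the formula), if $p_1,p_2\in\mathrm{Crit}_k(f)$ and $a_1,a_2\in\kappa\setminus\{0\}$ then \[ \ell_{\uparrow}^{f}(a_1p_1+a_2p_2)=\max\{f(p_1),f(p_2)\},\] independently of the values of $a_1$ and $a_2$: since $\Gamma=\{0\}$ the coefficients carry no filtration information, and $\ell_{\uparrow}^{f}$ records only which critical points appear in a chain.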
\begin{remark}
If one is only interested in the case that, as here, $\Gamma=\{0\}$, then our algebraic language is unnecessarily baroque: the fields $\Lambda_{\uparrow}$ and $\Lambda^{\downarrow}$, the ring $\Lambda$, and the module $\Lambda_{\updownarrow}$ are all equal to the field $\kappa$. Even so, in this context a ``$\Lambda_{\uparrow}$-Floer-type complex'' is still a slightly different object than a ``$\Lambda^{\downarrow}$-Floer-type complex'' as the filtration function on the former satisfies $\ell_{\uparrow}\circ\partial\leq \ell_{\uparrow}$ while that on the latter satisfies $\ell^{\downarrow}\circ\partial\geq \ell^{\downarrow}$.
\end{remark}
Now suppose that $f_-,f_+$ are two Morse functions on $X$, with respective gradient-like vector fields $v_-,v_+$ each having Morse-Smale flows. As in, \emph{e.g.}, \cite[Section 4.1.3]{Sc}, one defines a \textbf{continuation map} $\Phi_{f_-f_+}\colon\thinspace \mathbf{CM}_*(f_-,v_-;\mathbb{Z})\to \mathbf{CM}_{*}(f_+,v_+;\mathbb{Z})$ as follows. Choose a smooth, $\mathbb{R}$-parametrized family of vector fields $\mathbb{V}=\{v_s\}_{s\in \mathbb{R}}$ having the property that $v_s=v_-$ for $s\leq -1$ and $v_s=v_+$ for $s\geq 1$, and for $p_-\in \mathrm{Crit}(f_-)$ and $p_+\in \mathrm{Crit}(f_+)$ let $\mathbf{N}^{\mathbb{V}}(p_-,p_+)$ be the set of solutions $\gamma\colon\thinspace \mathbb{R}\to X$ to the non-autonomous equation \begin{equation}\label{conteq} \gamma'(s)+v_s(\gamma(s))=0 \end{equation} such that $\gamma(s)\to p_-$ as $s\to-\infty$ and $\gamma(s)\to p_+$ as $s\to\infty$. If we denote by $\Psi^{\mathbb{V}}\colon\thinspace X\to X$ the diffeomorphism which sends $x\in X$ to the value $\eta_x(1)$ where $\eta_x\colon\thinspace [-1,1]\to X$ is the unique solution to $\eta'_{x}(s)=-v_s(\eta_x(s))$ subject to the initial condition $\eta_x(-1)=x$, then any $\gamma\in \mathbf{N}^{\mathbb{V}}(p_-,p_+)$ has $\gamma(-1)\in \mathbf{D}^{v_-}(p_-)$, $\gamma(1)\in\mathbf{A}^{v_+}(p_+)$, and $\Psi^{\mathbb{V}}(\gamma(-1))=\gamma(1)$. In this way we see that $\mathbf{N}^{\mathbb{V}}(p_-,p_+)$ is in bijection with the intersection $\mathbf{D}^{v_-}(p_-)\cap (\Psi^{\mathbb{V}})^{-1}(\mathbf{A}^{v_+}(p_+))$. For generic choices of the interpolating family $\mathbb{V}$, this intersection will be transverse for all choices of $p_-$ and $p_+$ (\cite[Proposition VI.4.7]{Pa}), and hence a manifold of dimension $\mathrm{ind}_{f_-}(p_-)-\mathrm{ind}_{f_+}(p_+)$. When this dimension is zero, $\mathbf{N}^{\mathbb{V}}(p_-,p_+)$ is a finite set (\cite[Lemma VI.4.5]{Pa}), and so for generators $p_-$ of $\mathbf{CM}_k(f_-,v_-;\mathbb{Z})$ we put $\Phi_{f_-f_+}p_-=\sum_{p_+\in \mathrm{Crit}_k(f_+)}\#\mathbf{N}^{\mathbb{V}}(p_-,p_+)p_+$ where $\#\mathbf{N}^{\mathbb{V}}(p_-,p_+)$ is the signed count of points in the compact zero-manifold $\mathbf{N}^{\mathbb{V}}(p_-,p_+)$, oriented as in Section \ref{orsect}.
The continuation map $\Phi_{f_-f_+}$ is a chain homotopy equivalence between the chain complexes $\mathbf{CM}_*(f_-,v_-;\mathbb{Z})$ and $\mathbf{CM}_{*}(f_+,v_+;\mathbb{Z})$, whose chain homotopy class is independent of the choice of interpolating family $\mathbb{V}$. Indeed, $\Phi_{f_-f_+}$ is evidently equal to the map which would be written in the notation of \cite[p. 221]{Pa} as $(\Psi^{\mathbb{V}})_{\flat}$, and so by \cite[Proposition VI.4.3 and Theorem VI.4.6]{Pa} and the fact that $\Psi^{\mathbb{V}}\colon\thinspace X\to X$ is homotopic to the identity, we have a homotopy-commutative diagram \begin{equation}\label{contcomm} \xymatrix{ \mathbf{CM}_*(f_-,v_-;\mathbb{Z})\ar[rr]^{\Phi_{f_-f_+}} \ar[dr]_{\mathcal{E}(f_-,v_-)} & & \mathbf{CM}_*(f_+,v_+;\mathbb{Z}) \ar[ld]^{\mathcal{E}(f_+,v_+)} \\ & \mathbf{S}_{*}(X) & } \end{equation} where $\mathcal{E}(f_{\pm},v_{\pm})$ are the chain homotopy equivalences from \cite[Section VI.3.3]{Pa}.
Of course, we can then tensor with any ring and obtain the corresponding statement for the Morse and singular complexes with coefficients in that ring.
The behavior of $\Phi_{f_-f_+}$ with respect to the filtration functions $\ell_{\uparrow}^{f_{\pm}}$ is also of interest. For this purpose we take a family of functions $f_s$ to be of the form $f_s=f_-+\beta(s)(f_+-f_-)$ where $\beta$ is a smooth function such that $\beta'(s)\geq 0$ for all $s$, $\beta(s)=0$ for $s\leq -1$, and $\beta(s)=1$ for $s\geq 1$, and for the family of vector fields $\mathbb{V}=\{v_s\}_{s\in \mathbb{R}}$ we require that $(df_s)_x(v_s)\geq 0$ for all $s\in\mathbb{R}$ and $x\in X$. (For instance $v_s$ could be the gradient of $f_s$ with respect to a Riemannian metric that varies smoothly with $s$.) Then if $\gamma$ is a solution to (\ref{conteq}) with $\gamma(s)\to p_{\pm}$ as $s\to\pm\infty$, we will have
\begin{align} f_+(p_+)-f_-(p_-)&=\int_{-\infty}^{\infty}\frac{d}{ds}(f_s(\gamma(s)))ds=\int_{-\infty}^{\infty}\left((df_s)_{\gamma(s)}(\gamma'(s))+\frac{\partial f_s}{\partial s}(\gamma(s))\right)ds \nonumber \\ &= -\int_{-\infty}^{\infty}(df_s)_{\gamma(s)}(v_s)ds+\int_{-\infty}^{\infty}\beta'(s)\left(f_+(\gamma(s))-f_-(\gamma(s))\right)ds\nonumber \\& \leq \max_X(f_+-f_-).\label{filtchange}\end{align}
With coefficients in the field $\kappa$ (so that we have $\Lambda_{\uparrow}$-Floer-type complexes $\mathbf{CM}(f_{\pm},v_{\pm};\kappa)_{\uparrow}$ with filtration functions $\ell_{\uparrow}^{f_{\pm}}$) it follows that $\ell_{\uparrow}^{f_+}(\Phi_{f_-f_+}x)\leq \ell_{\uparrow}^{f_-}(x)+\max_X(f_+-f_-)$ for all $x\in \mathbf{CM}_{*}(f_-,v_-;\kappa)$ provided that the family of vector fields $\mathbb{V}$ used to define $\Phi_{f_-f_+}$ is chosen as above. In the special case that $f_-$ and $f_+$ are both equal to the same Morse function $f$ (but the gradient-like vector fields $v_{\pm}$ may differ) it follows that $\Phi_{ff}$ is a morphism of filtered complexes. In fact, in this situation $\Phi_{ff}$ is a filtered homotopy equivalence, as can be seen by inspecting the construction of the appropriate chain homotopies from \cite[Proposition 4.6]{Sc}. In what follows we will again suppress the relevant gradient-like vector fields and the field $\kappa$ from the notation for the Morse complex.
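For example (an easy special case), if $f_+=f_-+c$ for a constant $c$, one may take every $v_s$ equal to a fixed Morse-Smale gradient-like vector field for $f_-$, which is also gradient-like for $f_+$ since $df_+=df_-$. The only solutions of (\ref{conteq}) joining critical points of equal index are then the constant ones, so $\Phi_{f_-f_+}$ maps each generator $p$ to $p$ (with the orientation conventions of Section \ref{orsect}), and the bound is attained on every generator: $\ell_{\uparrow}^{f_+}(\Phi_{f_-f_+}p)=f_-(p)+c=\ell_{\uparrow}^{f_-}(p)+\max_X(f_+-f_-)$.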
We now have almost all of the ingredients needed to define the chain-level Poincar\'e--Novikov structure $\mathcal{CM}(f)$ for a Morse function $f$ on a compact manifold $X$ (with respect to the field $\kappa$, and with the group $\Gamma$ equal to $\{0\}$). If the characteristic of $\kappa$ is not $2$, then we also require that $X$ be oriented. To describe $\mathcal{CM}(f)$ we must specify data $(\mathcal{C},\tilde{\mathcal{D}},\mathcal{C}_{\uparrow},\mathcal{S})$ as at the start of Section \ref{cpnstr}. The role of the $\Lambda_{\uparrow}$-Floer-type complex $\mathcal{C}_{\uparrow}$ is played by the filtered Morse complex $\mathbf{CM}(f)_{\uparrow}$ of $f$ with coefficients in the field $\kappa$. For the chain complex $\mathcal{C}$ we use the (unfiltered) Morse complex $\mathbf{CM}_{*}(h_0)$ of another Morse function $h_0$; this function $h_0$ should be regarded as fixed for normalization purposes at the same time that we fix the manifold $X$ (so we would by default use the same $h_0$ to construct $\mathcal{CM}(f)$ for different functions $f$ on the same manifold, though this is not entirely necessary as a different choice of $h_0$ would not change the barcode). Since in the present context $\Gamma=\{0\}$ (so $\Lambda_{\uparrow}=\Lambda=\kappa$), the map $\mathcal{S}$ should then just be a chain homotopy equivalence from $\mathbf{CM}_{*}(h_0)$ to $\mathbf{CM}_{*}(f)$, and for this purpose we may use the continuation map $\Phi_{h_0f}$.
It remains to describe the ``chain-level Poincar\'e duality map'' $\tilde{\mathcal{D}}\colon\thinspace \mathbf{CM}_{*}(h_0)\to{}^{\vee}\!(\mathbf{CM}_*(h_0)[n])$ where $\dim X=n$. For this purpose we establish the following proposition, whose proof in the case that $\mathrm{char}(\kappa)\neq 2$ depends in part on the sign analysis that we defer to Section \ref{orsect}. Recall that if $h\colon\thinspace X\to \mathbb{R}$ is a Morse function we write $\mathrm{Crit}_k(h)$ for the set of index-$k$ critical points of $h$, and note that $\mathrm{Crit}_{k}(-h)=\mathrm{Crit}_{n-k}(h)$. For the statement and proof of the proposition we restore the vector field $v$ and field $\kappa$ to the notation for $\mathbf{CM}_{*}(\pm h_0)$ in order to make explicit the various dependencies.
\begin{prop}\label{negdual} Given an element $c=\sum_{p\in\mathrm{Crit}_{k}(-h_0)}a_pp$ define $\epsilon_c\colon\thinspace \mathbf{CM}_{n-k}(h_0,v;\kappa)\to \kappa$ by \[ \epsilon_c\left(\sum_{q\in\mathrm{Crit}_{n-k}(h_0)}b_qq\right)=\sum_{p\in \mathrm{Crit}_k(-h_0)}a_pb_p.\] Then, assuming that either $\mathrm{char}(\kappa)=2$ or that $X$ is oriented and that we choose orientations for the descending manifolds $\mathbf{D}^v(p),\mathbf{D}^{-v}(q)$ as in Section \ref{orsect}, the assignment $c\mapsto \epsilon_c$ defines an isomorphism of chain complexes $\epsilon\colon\thinspace \mathbf{CM}_{*}(-h_0,-v;\kappa)\to {}^{\vee}\!(\mathbf{CM}_*(h_0,v;\kappa)[n])$.
\end{prop}
\begin{proof}
Note that the degree $k$-part of ${}^{\vee}\!(\mathbf{CM}_*(h_0,v;\kappa)[n])$ is, by definition, the dual of the degree $(-k)$-part of $\mathbf{CM}_{*}(h_0,v;\kappa)[n]$, \emph{i.e.} is the dual of $\mathbf{CM}_{n-k}(h_0,v;\kappa)$. (Since we are in the case that $\Gamma=\{0\}$ there is no distinction between the ``conjugated dual'' ${}^{\vee}$ and the usual vector space dual.) Now, as $\kappa$-vector spaces, $\mathbf{CM}_{k}(-h_0,-v;\kappa)$ and $\mathbf{CM}_{n-k}(h_0,v;\kappa)$ are equal to each other, both having $\mathrm{Crit}_k(-h_0)$ as a basis. In degree $k$, the map $\epsilon$ is the linear map that sends each element of the basis $\mathrm{Crit}_k(-h_0)$ for $\mathbf{CM}_{k}(-h_0,-v;\kappa)$ to the corresponding dual basis element for the dual of $\mathbf{CM}_{n-k}(h_0,v;\kappa)$. Thus $\epsilon$ is an isomorphism of graded vector spaces; it remains to check that it is a chain map.
This latter statement amounts to the claim that, for any $q\in \mathrm{Crit}_{k+1}(-h_0)=\mathrm{Crit}_{n-k-1}(h_0)$ and $p\in \mathrm{Crit}_{k}(-h_0)=\mathrm{Crit}_{n-k}(h_0)$, we have $\left(\epsilon(\partial^{-h_0}q)\right)(p)= (-1)^{n-k}(({}^{\vee}\!\partial^{h_0})(\epsilon q))(p)$. (The $(-1)^{n-k}$ on the right hand side results from the conventions mentioned at the start of Section \ref{cpnstr}: a factor $(-1)^k$ arises from the sign convention for the differentials on dual complexes, and a factor $(-1)^n$ arises from the convention that shifting the grading of a complex by $n$ is accompanied by multiplying the differential by $(-1)^n$.) But $\left(\epsilon(\partial^{-h_0}q)\right)(p)$ is the signed count $n_{-h_0,-v}(q,p)$ of elements of the zero-dimensional space $\mathbf{M}^{-v}(q,p)$, while $(({}^{\vee}\!\partial^{h_0})(\epsilon q))(p)=(\epsilon q)(\partial^{h_0}p)$ is the corresponding signed count $n_{h_0,v}(p,q)$ of elements of $\mathbf{M}^{v}(p,q)$. If $\mathrm{char}(\kappa)=2$ then there is no distinction between positive and negative contributions to the counts $n_{h_0,v}(p,q),n_{-h_0,-v}(q,p)$, and also $(-1)^{n-k}=1$ in $\kappa$, so the desired equality follows from the obvious bijection between $\mathbf{M}^{-v}(q,p)$ and $\mathbf{M}^{v}(p,q)$. If instead we assume that $X$ is oriented then the equality $n_{-h_0,-v}(q,p)=(-1)^{n-k}n_{h_0,v}(p,q)$ is proven in Corollary \ref{mor}.
\end{proof}
We are thus justified in making the following definition, which will be generalized in Section \ref{novsect}:
\begin{dfn}\label{cmdef} Let $X$ be a smooth compact manifold and if our ground field $\kappa$ has $\mathrm{char}(\kappa)\neq 2$ assume that $X$ is oriented. Fix a Morse function $h_0\colon\thinspace X\to\mathbb{R}$. Given any other Morse function $f\colon\thinspace X\to \mathbb{R}$ we take $\mathcal{CM}(f)$ to be the chain-level Poincar\'e-Novikov complex with the following data $(\mathcal{C},\tilde{\mathcal{D}},\mathcal{C}_{\uparrow},\mathcal{S})$ as in Section \ref{cpnstr}:
\begin{itemize}
\item $\mathcal{C}=\mathbf{CM}_{*}(h_0)$ is the Morse complex of $h_0$;
\item $\tilde{\mathcal{D}}\colon\thinspace \mathbf{CM}_{*}(h_0)\to {}^{\vee}\!(\mathbf{CM}_{*}(h_0)[n])$ is the composition of the continuation map $\Phi_{h_0,-h_0}\colon\thinspace \mathbf{CM}_*(h_0)\to \mathbf{CM}_{*}(-h_0)$ with the chain isomorphism $\epsilon\colon\thinspace \mathbf{CM}_{*}(-h_0)\to {}^{\vee}\!(\mathbf{CM}_*(h_0)[n])$ from Proposition \ref{negdual}.
\item $\mathcal{C}_{\uparrow}=\mathbf{CM}(f)_{\uparrow}$ (\emph{i.e.}, the $\Lambda_{\uparrow}$-Floer-type complex with chain complex $\mathbf{CM}_{*}(f)$ and the filtration function $\ell_{\uparrow}^{f}$).
\item $\mathcal{S}\colon\thinspace \mathbf{CM}_{*}(h_0)\to \mathbf{CM}_{*}(f)$ is the continuation map $\Phi_{h_0f}$.
\end{itemize}
\end{dfn}
\subsection{Orientations}\label{orsect} In this section we give details related to the orientations used in the definitions of the Morse boundary operator and of the continuation maps $\Phi_{f_-f_+}$. A reader who is willing to work in characteristic $2$ can safely skip this; in characteristics other than $2$, the calculations here are needed to justify the fact that our chain-level Poincar\'e duality map $\tilde{\mathcal{D}}$ satisfies the required properties on oriented manifolds. (Of course, since Poincar\'e duality does not hold on nonorientable manifolds in characteristics other than $2$ unless one works with suitably twisted coefficients, this cannot be an entirely trivial point.)
In general, given a short exact sequence of vector spaces \begin{equation}\label{sesex} \xymatrix{0 \ar[r]& A\ar[r]^{\alpha} & B \ar[r]^{\beta} & C \ar[r] & 0 }\end{equation} in which two of the three vector spaces $A,B,C$ carry specified orientations, the remaining vector space may be oriented by requiring that, if $(a_1,\ldots,a_k)$ is a positively-oriented ordered basis for $A$, and if $\hat{c}_1,\ldots,\hat{c}_{\ell}\in B$ have the property that $\left(\beta(\hat{c}_1),\ldots,\beta(\hat{c}_{\ell})\right)$ is a positively-oriented ordered basis for $C$, then $\left(\alpha(a_1),\ldots,\alpha(a_k),\hat{c}_1,\ldots,\hat{c}_{\ell}\right)$ is a positively-oriented ordered basis for $B$. If, \emph{e.g.}, $B$ and $C$ already carry orientations, the orientation on $A$ determined by this prescription will be referred to as the one ``induced by the short exact sequence (\ref{sesex}).''
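As a concrete check of this convention (a trivial linear-algebra example), take $B=\mathbb{R}^2$ with its standard orientation, $C=\mathbb{R}$ with its standard orientation, $\beta(x,y)=y$, and $A=\ker\beta=\mathbb{R}\times\{0\}$ with $\alpha$ the inclusion. With $\hat{c}_1=(0,1)$, so that $\beta(\hat{c}_1)=1$ is a positively-oriented basis for $C$, the prescription requires $(a_1,(0,1))$ to be a positively-oriented basis for $\mathbb{R}^2$, which holds exactly when $a_1$ is a positive multiple of $(1,0)$; so the induced orientation on $A$ is the standard one.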
For critical points $p,q$ of a Morse function $f$, let us recall how to orient the manifolds $\mathbf{W}^v(p,q)$ and $\mathbf{M}^v(p,q)$ (continuing to assume that $v$ is a gradient-like vector field for $f$ and that the flow of $v$ is Morse-Smale so that these are manifolds of the expected dimensions); one can check that our conventions below are equivalent to those in \cite[Section 6]{Qin} and \cite[Chapter 6]{Pa}. One chooses first of all an orientation $\mathfrak{o}_{v,p}$ for each descending manifold $\mathbf{D}^v(p)$. These orientations yield orientations of the normal bundles $N\mathbf{A}^v(p)$ to the ascending disks $\mathbf{A}^v(p)$, as the fiber $N_p\mathbf{A}^v(p)$ is naturally identified with $T_p\mathbf{D}^v(p)$, so that $\mathfrak{o}_{v,p}$ orients the fiber $N_p\mathbf{A}^v(p)$ and this orientation extends uniquely to an orientation of the bundle $N\mathbf{A}^v(p)$. For each $x\in\mathbf{W}^v(p,q)$, due to the transversality of $\mathbf{D}^v(p)$ and $\mathbf{A}^v(q)$, the inclusion $T_x\mathbf{D}^v(p)\hookrightarrow T_xX$ followed by the projection $T_xX\to N_x\mathbf{A}^v(q)$ yields a surjection $T_x\mathbf{D}^v(p)\to N_x\mathbf{A}^v(q)$ with kernel $T_x\mathbf{W}^v(p,q)$. So we orient the smooth manifold $\mathbf{W}^v(p,q)$ by taking the orientation on each $T_x\mathbf{W}^v(p,q)$ to be the one induced (in the sense of the previous paragraph) by the short exact sequence \[ 0\to T_x\mathbf{W}^v(p,q)\to T_x\mathbf{D}^v(p)\to N_x\mathbf{A}^v(q)\to 0. \]
Having oriented $\mathbf{W}^v(p,q)$, the unparametrized trajectory spaces $\mathbf{M}^v(p,q)=\frac{\mathbf{W}^v(p,q)}{\mathbb{R}}$ for $p\neq q$ are oriented as follows. If $x\in \mathbf{W}^v(p,q)$ has image under the quotient projection equal to $[x]\in\mathbf{M}^v(p,q)$, the derivative of the quotient projection gives a surjection $T_x\mathbf{W}^v(p,q)\to T_{[x]}\mathbf{M}^v(p,q)$ with kernel spanned by $-v(x)$. If $(-v(x),e_2,\ldots,e_{d})$ is a positively-oriented basis for $T_x\mathbf{W}^v(p,q)$ then we declare the image under the above surjection of $(e_2,\ldots,e_{d})$ to be a positively-oriented basis for $T_{[x]}\mathbf{M}^v(p,q)$. (In other words, the orientation on $T_{[x]}\mathbf{M}^{v}(p,q)$ is the one induced by the short exact sequence \[ 0 \to \mathbb{R}\to T_x\mathbf{W}^v(p,q)\to T_{[x]}\mathbf{M}^v(p,q)\to 0 \] where the first map sends $t$ to $(-tv(x))$ and the second is the derivative of the quotient projection.)
The spaces $\mathbf{N}^{\mathbb{V}}(p_-,p_+)$ underlying the continuation maps $\Phi_{f_-f_+}$ are oriented in a similar way. We assume as before that $\mathbb{V}=\{v_s\}_{s\in\mathbb{R}}$ is a smooth family of vector fields such that $v_s=v_{\pm}$ for $\pm s\geq 1$ where $v_+$ and $v_-$ are gradient-like vector fields for the Morse functions $f_+$ and $f_-$, respectively. We also choose orientations of the descending manifolds $\mathbf{D}^{v_-}(p_-),\mathbf{D}^{v_+}(p_+)$ for $p_-\in\mathrm{Crit}(f_-)$ and $p_+\in\mathrm{Crit}(f_+)$. Thus with $\Psi^{\mathbb{V}}$ the diffeomorphism of $X$ that sends the value at time $-1$ of a general solution of (\ref{conteq}) to that solution's value at time $1$,
the continuation trajectory space $\mathbf{N}^{\mathbb{V}}(p_-,p_+)$ is identified with the intersection $\mathbf{D}^{v_-}(p_-)\cap (\Psi^{\mathbb{V}})^{-1}(\mathbf{A}^{v_+}(p_+))$, which we assume to be transverse through a generic choice of $\mathbb{V}$. As before, the orientation of $\mathbf{D}^{v_+}(p_+)$ is equivalent to a coorientation of $\mathbf{A}^{v_+}(p_+)$, which maps by the diffeomorphism $(\Psi^{\mathbb{V}})^{-1}\colon\thinspace X\to X$ to a coorientation of $(\Psi^{\mathbb{V}})^{-1}(\mathbf{A}^{v_+}(p_+))$. Thus we orient the intersection $\mathbf{N}^{\mathbb{V}}(p_-,p_+)=\mathbf{D}^{v_-}(p_-)\cap (\Psi^{\mathbb{V}})^{-1}(\mathbf{A}^{v_+}(p_+))$ by, for all $x\in \mathbf{N}^{\mathbb{V}}(p_-,p_+)$, using the orientation induced by the short exact sequence \[ 0\to T_x\mathbf{N}^{\mathbb{V}}(p_-,p_+)\to T_x\mathbf{D}^{v_-}(p_-)\to N_x\left((\Psi^{\mathbb{V}})^{-1}(\mathbf{A}^{v_+}(p_+))\right)\to 0. \] Equivalently, to phrase this more directly in terms of the coorientation of $\mathbf{A}^{v_+}(p_+)$, the orientation on $\mathbf{N}^{\mathbb{V}}(p_-,p_+)$ is induced by the short exact sequence
\[ 0 \to T_x\mathbf{N}^{\mathbb{V}}(p_-,p_+)\to T_x\mathbf{D}^{v_-}(p_-)\to N_{\Psi^{\mathbb{V}}(x)}\mathbf{A}^{v_+}(p_+)\to 0 \] where the second nontrivial map is the composition of the derivative of $\Psi^{\mathbb{V}}$ with the quotient projection.
Our earlier trajectory spaces $\mathbf{W}^{v}(p,q)$ can be regarded as a special case of the continuation trajectory spaces $\mathbf{N}^{\mathbb{V}}(p,q)$ in which the Morse functions $f_-,f_+$ are both equal to $f$ and $\mathbb{V}=\{v_s\}_{s\in \mathbb{R}}$ has each $v_s$ equal to the same vector field $v$. In this case, the diffeomorphism $\Psi^{\mathbb{V}}$ is the time-$2$ map of the vector field $-v$; since the flow of $-v$ preserves the submanifolds $\mathbf{A}^{v}(q)$ (and hence also their coorientations) the orientation prescription for $\mathbf{N}^{\mathbb{V}}(p,q)$ from the previous paragraph evidently agrees with our earlier orientation of $\mathbf{W}^v(p,q)$.
We now consider the effect of ``turning $X$ upside down'' by replacing Morse functions and vector fields by their negatives. For a time-dependent vector field $\mathbb{V}=\{v_s\}_{s\in \mathbb{R}}$ as above let $\bar{\mathbb{V}}=\{-v_{-s}\}_{s\in \mathbb{R}}$; then for $p_-\in\mathrm{Crit}(f_-)$ and $p_+\in\mathrm{Crit}(f_+)$ we obtain spaces $\mathbf{N}^{\bar{\mathbb{V}}}(p_+,p_-)$ which may be used to define continuation maps $\Phi_{-f_+,-f_-}$; regarding these spaces as consisting of solutions $\gamma\colon\thinspace \mathbb{R}\to X$ to the appropriate version of (\ref{conteq}), $\mathbf{N}^{\bar{\mathbb{V}}}(p_+,p_-)$ is in bijection with $\mathbf{N}^{\mathbb{V}}(p_-,p_+)$ via the map that sends $\gamma\colon\thinspace\mathbb{R}\to X$ to its reversal $\bar{\gamma}(s)=\gamma(-s)$. Our prescriptions above give orientations of $\mathbf{N}^{\bar{\mathbb{V}}}(p_+,p_-)$ dependent upon choices of orientations $\mathfrak{o}_{-v_-,p_-},\mathfrak{o}_{-v_+,p_+}$ of $\mathbf{D}^{-v_{\pm}}(p_{\pm})$ (\emph{i.e.}, of the ascending manifolds $\mathbf{A}^{v_{\pm}}(p_{\pm})$). Let us compare the orientations of $\mathbf{N}^{\mathbb{V}}(p_-,p_+)$ and $\mathbf{N}^{\bar{\mathbb{V}}}(p_+,p_-)$, where these spaces are implicitly identified using the bijection $\gamma\mapsto \bar{\gamma}$. Note that if we use the assignment $\gamma\mapsto\gamma(-1)$ to identify $\mathbf{N}^{\mathbb{V}}(p_-,p_+)$ with $\mathbf{D}^{v_-}(p_-)\cap (\Psi^{\mathbb{V}})^{-1}(\mathbf{A}^{v_+}(p_+))$, and likewise for $\mathbf{N}^{\bar{\mathbb{V}}}(p_+,p_-)$, then the bijection $\gamma\mapsto\bar{\gamma}$ from $\mathbf{N}^{\mathbb{V}}(p_-,p_+)$ to $\mathbf{N}^{\bar{\mathbb{V}}}(p_+,p_-)$ corresponds to the diffeomorphism \[ \Psi^{\mathbb{V}}\colon\thinspace \mathbf{D}^{v_-}(p_-)\cap (\Psi^{\mathbb{V}})^{-1}(\mathbf{A}^{v_+}(p_+))\to \mathbf{A}^{v_+}(p_+)\cap\Psi^{\mathbb{V}}(\mathbf{D}^{v_-}(p_-)).\]
\begin{prop}\label{wor}
Assume that $X$ is oriented and that, for each $p\in\mathrm{Crit}(f_{\pm})$, the orientations $\mathfrak{o}_{v_{\pm},p}$ on each $\mathbf{D}^{v_{\pm}}(p)$ and $\mathfrak{o}_{-v_{\pm},p}$ on $\mathbf{A}^{v_{\pm}}(p)$ are chosen in such a way that, as oriented vector spaces, $T_{p}\mathbf{D}^{v_{\pm}}(p)\oplus T_p\mathbf{A}^{v_{\pm}}(p)=T_pX$ for each critical point $p$ of $f_+$ or $f_-$. Then, for $p_-\in\mathrm{Crit}(f_-)$ and $p_+\in\mathrm{Crit}(f_+)$, the resulting orientations on $\mathbf{N}^{\mathbb{V}}(p_-,p_+)$ and $\mathbf{N}^{\bar{\mathbb{V}}}(p_+,p_-)$ are related by the sign $(-1)^{\mathrm{ind}_{f_+}(p_+)(\mathrm{ind}_{f_-}(p_-)-\mathrm{ind}_{f_+}(p_+))}$.
\end{prop}
\begin{proof}
As explained before the proposition, for $p$ a critical point either of $f_-$ or of $f_+$, the choice of $\mathfrak{o}_{v_{\pm},p}$ induces orientations both of the tangent bundle $T\mathbf{D}^{v_{\pm}}(p)$ and of the normal bundle $N\mathbf{A}^{v_{\pm}}(p)$. Likewise, then, $\mathfrak{o}_{-v_{\pm},p}$ induces orientations both of the tangent bundle $T\mathbf{A}^{v_{\pm}}(p)$ and of the normal bundle $N\mathbf{D}^{v_{\pm}}(p)$. From now on we identify normal bundles to submanifolds $S$ as subbundles (as opposed to quotient bundles) of $TX|_S$ using a Riemannian metric. Our prescription on the relation between $\mathfrak{o}_{v_{\pm},p}$ and $\mathfrak{o}_{-v_{\pm},p}$ yields that $T_{p}\mathbf{D}^{v_{\pm}}(p)\oplus N_{p}\mathbf{D}^{v_{\pm}}(p)=T_{p}X$ as oriented vector spaces, and hence (since $\mathbf{D}^{v_{\pm}}(p)$ is connected) that $T_x\mathbf{D}^{v_{\pm}}(p)\oplus N_x\mathbf{D}^{v_{\pm}}(p)=T_xX$ as oriented vector spaces for all $x\in \mathbf{D}^{v_{\pm}}(p)$ and all critical points $p$ of $f_{\pm}$. Since $N_{p}\mathbf{A}^{v_{\pm}}(p)=T_{p}\mathbf{D}^{v_{\pm}}(p)$ and $T_{p}\mathbf{A}^{v_{\pm}}(p)=N_{p}\mathbf{D}^{v_{\pm}}(p)$ and since $\mathbf{A}^{v_{\pm}}(p)$ is connected, it likewise follows that $N_x\mathbf{A}^{v_{\pm}}(p)\oplus T_x\mathbf{A}^{v_{\pm}}(p)=T_xX$ as oriented vector spaces for all $x\in \mathbf{A}^{v_{\pm}}(p)$ and all critical points $p$ of $f_{\pm}$.
Let us abbreviate $\Psi=\Psi^{\mathbb{V}}$, and note that then $\Psi^{\bar{\mathbb{V}}}=\Psi^{-1}$. We then have identifications $\mathbf{N}^{\mathbb{V}}(p_-,p_+)\cong \mathbf{D}^{v_-}(p_-)\cap \Psi^{-1}(\mathbf{A}^{v_+}(p_+))$ and $\mathbf{N}^{\bar{\mathbb{V}}}(p_+,p_-)\cong \mathbf{A}^{v_+}(p_+)\cap \Psi(\mathbf{D}^{v_-}(p_-))$ and our task is to see how the orientations on these spaces are related under the diffeomorphism $\Psi$ between them. We write $\Psi_*$ for the derivative of $\Psi$.
Given $x\in \mathbf{N}^{\mathbb{V}}(p_-,p_+)$, consider an ordered basis for $T_xX$ of the form \[ \mathcal{B}_x=\left(u_1,\ldots,u_k,v_1,\ldots,v_{\ell},w_1,\ldots,w_m\right)\] where $(u_1,\ldots,u_k)$ is a positively-oriented basis for $T_x\mathbf{N}^{\mathbb{V}}(p_-,p_+)$, $(u_1,\ldots,u_k,v_1,\ldots,v_{\ell})$ is a positively-oriented basis for $T_x\mathbf{D}^{v_-}(p_-)$, and $(\Psi_*u_1,\ldots,\Psi_*u_k,\Psi_*w_1,\ldots,\Psi_*w_m)$ is a positively-oriented basis for $T_{\Psi(x)}\mathbf{A}^{v_+}(p_+)$. Thus $k+\ell=|p_-|$, $k+m=n-|p_+|$, and $k+\ell+m=n$, so $\ell=|p_+|$ and $k=|p_-|-|p_+|$, where we abbreviate $\mathrm{ind}_{f_{\pm}}(p_{\pm})$ as $|p_{\pm}|$. The definition of the orientation on $T_x\mathbf{N}^{\mathbb{V}}(p_-,p_+)$ implies that $(v_1,\ldots,v_{\ell})$ maps by $\Psi_*$ to a positively-oriented basis for $N_{\Psi(x)}\mathbf{A}^{v_+}(p_+)$. So since $N_{\Psi(x)}\mathbf{A}^{v_+}(p_+)\oplus T_{\Psi(x)}\mathbf{A}^{v_+}(p_+)=T_{\Psi(x)}X$ as oriented vector spaces, it follows that $\Psi_*\mathcal{B}_x$ has sign $(-1)^{k\ell}$ relative to the orientation on $T_{\Psi(x)}X$.
Let $\epsilon$ denote the sign of $(\Psi_*u_1,\ldots,\Psi_*u_k)$ relative to the orientation on $\mathbf{N}^{\bar{\mathbb{V}}}(p_+,p_-)$; the conclusion of the proposition is that $\epsilon=(-1)^{k\ell}$. Since $(\Psi_*u_1,\ldots,\Psi_*u_k,\Psi_*w_1,\ldots,\Psi_*w_m)$ is a positively-oriented basis for $T_{\Psi(x)}\mathbf{A}^{v_+}(p_+)$, it follows that $\epsilon$ is also the sign of $(w_1,\ldots,w_m)$ relative to the orientation of $N_x\mathbf{D}^{v_-}(p_-)$. But since $T_x\mathbf{D}^{v_-}(p_-)\oplus N_x\mathbf{D}^{v_-}(p_-)=T_xX$ as oriented vector spaces and since $(u_1,\ldots,u_k,v_1,\ldots,v_{\ell})$ is a positively-oriented basis for $T_x\mathbf{D}^{v_-}(p_-)$, it then follows that our original basis $\mathcal{B}_x$ for $T_xX$ has sign $\epsilon$ relative to the orientation on $T_xX$. So since $\Psi$ (being isotopic to the identity) is orientation-preserving and $\Psi_*\mathcal{B}_x$ has sign $(-1)^{k\ell}$ relative to the orientation on $T_{\Psi(x)}X$, we see that $\epsilon=(-1)^{k\ell}$, as desired.
\end{proof}
As noted earlier, in the special case that $f_-=f_+=f$ and $v_s=v$ independently of $s$, for $p,q\in\mathrm{Crit}(f)$ the spaces $\mathbf{N}^{\mathbb{V}}(p,q)$ and $\mathbf{N}^{\bar{\mathbb{V}}}(q,p)$ coincide as oriented manifolds with the respective spaces $\mathbf{W}^{v}(p,q),\mathbf{W}^{-v}(q,p)$. Taking the quotients of these spaces by $\mathbb{R}$-translation to obtain $\mathbf{M}^{v}(p,q),\mathbf{M}^{-v}(q,p)$, we have:
\begin{cor}\label{mor}
Assume that $X$ is oriented and that the orientations $\mathfrak{o}_{v,p},\mathfrak{o}_{-v,p}$ are related as in Proposition \ref{wor}. Let $p,q$ be critical points of $f$ with $\mathrm{ind}_f(p)=n-k,\,\mathrm{ind}_{f}(q)=n-k-1$ (and hence $\mathrm{ind}_{-f}(p)=k,\,\mathrm{ind}_{-f}(q)=k+1$). Then the signed counts of points $n_{f,v}(p,q),n_{-f,-v}(q,p)$ in $\mathbf{M}^v(p,q)$ and $\mathbf{M}^{-v}(q,p)$ are related by \[ n_{-f,-v}(q,p)=(-1)^{n-k}n_{f,v}(p,q).\]
\end{cor}
\begin{proof}
Proposition \ref{wor} shows that the (set-theoretically equal) oriented $1$-manifolds $\mathbf{W}^v(p,q)$ and $\mathbf{W}^{-v}(q,p)$ have their orientations related by the sign $(-1)^{n-k-1}$. By our definition of the orientation on the $0$-manifold $\mathbf{M}^{v}(p,q)$, for any $x\in \mathbf{W}^{v}(p,q)$ the projection of $x$ to $\mathbf{M}^{v}(p,q)$ is positively-oriented iff $-v(x)$ agrees with the orientation of $\mathbf{W}^v(p,q)$. On the other hand the projection of $x$ to $\mathbf{M}^{-v}(q,p)$ is positively-oriented iff $-(-v)(x)=+v(x)$ agrees with the orientation of $\mathbf{W}^{-v}(q,p)$. So the contributions of (the respective equivalence classes of) $x$ to $n_{f,v}(p,q)$ and to $n_{-f,-v}(q,p)$ are related by $-(-1)^{n-k-1}=(-1)^{n-k}$.
\end{proof}
\subsection{Lifted Morse complexes on covering spaces}
To generalize beyond Section \ref{morseintro}, we consider Morse theory for functions $\tilde{f}\colon\thinspace \tilde{X}\to\mathbb{R}$ defined on the domains of regular covering spaces $\pi\colon\thinspace \tilde{X}\to X$, where $X$ is a compact smooth $n$-dimensional manifold. Let $\tilde{\Gamma}$ be the deck transformation group of $\pi$; we will generally be concerned with the case that $\tilde{\Gamma}$ is a free abelian group, isomorphic to the subgroup $\Gamma<\mathbb{R}$ considered throughout the paper, though for the moment we do not assume this.
Unless $\tilde{\Gamma}$ is a finite group, the space $\tilde{X}$ will be noncompact, and so the algebraic structure (if any) that one obtains from Morse theory for such a function $\tilde{f}$ depends on how that function behaves at infinity. The simplest case is that $\tilde{f}=\pi^{*}h$ for some Morse function $h\colon\thinspace X\to \mathbb{R}$. In this case the index-$k$ critical points $\tilde{p}$ of $\tilde{f}$ are precisely the elements of $\tilde{X}$ which map by $\pi$ to index-$k$ critical points of $h$. Let us use a vector field $\tilde{v}$ on $\tilde{X}$ that is the lift to $\tilde{X}$ of a gradient-like vector field $v$ for $h$ on $X$ whose flow is Morse-Smale; then the flow of $-\tilde{v}$ on $\tilde{X}$ will just be the lift of that of $-v$ on $X$ and in particular will also be Morse-Smale. For $\tilde{p}\in \mathrm{Crit}_{k}(\pi^*h)$, the descending and ascending manifolds $\mathbf{D}^{\tilde{v}}(\tilde{p})$ and $\mathbf{A}^{\tilde{v}}(\tilde{p})$ will be the unique lifts passing through $\tilde{p}$ of, respectively, $\mathbf{D}^v(p)$ and $\mathbf{A}^v(p)$.
Let $\tilde{p},\tilde{q}$ be two critical points of $\pi^*h$, with $\pi(\tilde{p})=p,\pi(\tilde{q})=q$.
Identify as usual the manifold $\mathbf{W}^v(p,q)=\mathbf{D}^v(p)\cap \mathbf{A}^v(q)$ with the space of trajectories $\eta\colon\thinspace \mathbb{R}\to X$ for $-v$ having $\eta(t)\to p$ as $t\to-\infty$ and $\eta(t)\to q$ as $t\to +\infty$ (under the identification sending $\eta$ to $\eta(0)$). Any such $\eta$ lifts uniquely to a trajectory $\tilde{\eta}$ of $-\tilde{v}$ having $\tilde{\eta}(t)\to \tilde{p}$ as $t\to -\infty$; the limit $\lim_{t\to\infty}\tilde{\eta}(t)$ will project to $q$ and hence, since our covering space is regular, will be of the form $\gamma\tilde{q}$ for some $\gamma\in \tilde{\Gamma}$. In this way we see that $\pi$ restricts to a diffeomorphism \[ \coprod_{\gamma\in\tilde{\Gamma}}\mathbf{W}^{\tilde{v}}(\tilde{p},\gamma\tilde{q})\cong \mathbf{W}^v(p,q)\] which is equivariant with respect to the $\mathbb{R}$-actions given by the flows of $-\tilde{v}$ and $-v$ and hence induces a diffeomorphism of the quotients by these actions, \begin{equation}\label{projmiso} \coprod_{\gamma\in\tilde{\Gamma}}\mathbf{M}^{\tilde{v}}(\tilde{p},\gamma\tilde{q})\cong \mathbf{M}^{v}(p,q).\end{equation} In particular, if $\mathrm{ind}_{\pi^*h}(\tilde{p})-\mathrm{ind}_{\pi^*h}(\tilde{q})=1$, then since $\mathbf{M}^{v}(p,q)$ is a finite set there are only finitely many $\gamma$ such that $\mathbf{M}^{\tilde{v}}(\tilde{p},\gamma\tilde{q})$ is nonempty.
We may orient the various $\mathbf{D}^{\tilde{v}}(\tilde{p})$ and hence (via Section \ref{orsect}) the various $\mathbf{M}^{\tilde{v}}(\tilde{p},\tilde{q})$ by first choosing orientations of each $\mathbf{D}^v(p)$ for $p\in\mathrm{Crit}(h)$ and then requiring that the projection $\pi$ map the orientation on $\mathbf{D}^{\tilde{v}}(\tilde{p})$ to the chosen one on $\mathbf{D}^{v}(\pi(\tilde{p}))$. Then (\ref{projmiso}) will be an orientation-preserving diffeomorphism, and moreover any deck transformation $\gamma\in\tilde{\Gamma}$ will induce an orientation-preserving diffeomorphism $\mathbf{M}^{\tilde{v}}(\tilde{p},\tilde{q})\cong \mathbf{M}^{\tilde{v}}(\gamma\tilde{p},\gamma\tilde{q})$. So the signed counts of points $n_{\pi^*h,\tilde{v}},n_{h,v}$ obey, whenever $\tilde{p}\in \mathrm{Crit}_{k+1}(\pi^*h),\tilde{q}\in\mathrm{Crit}_{k}(\pi^*h)$, \[ n_{h,v}(\pi(\tilde{p}),\pi(\tilde{q}))=\sum_{\gamma\in\tilde{\Gamma}}n_{\pi^*h,\tilde{v}}(\tilde{p},\gamma\tilde{q}) \] and \begin{equation}\label{npi}n_{\pi^*h,\tilde{v}}(\gamma\tilde{p},\gamma\tilde{q})=n_{\pi^*h,\tilde{v}}(\tilde{p},\tilde{q})\,\,(\forall \gamma\in\tilde{\Gamma}).\end{equation}
Given a field $\kappa$, we may then, just as before, define the Morse complex $\widetilde{\mathbf{CM}}_{*}(\pi^*h;\kappa[\tilde{\Gamma}])$ to have chain groups $\widetilde{\mathbf{CM}}_{k}(\pi^*h;\kappa[\tilde{\Gamma}])$ equal to the $\kappa$-vector space with basis $\mathrm{Crit}_k(\pi^*h)$ and with differential $\partial_{k+1}^{\pi^*h}\colon\thinspace \widetilde{\mathbf{CM}}_{k+1}(\pi^*h;\kappa[\tilde{\Gamma}])\to \widetilde{\mathbf{CM}}_{k}(\pi^*h;\kappa[\tilde{\Gamma}])$ given by, for each $\tilde{p}\in \mathrm{Crit}_{k+1}(\pi^*h)$, \[ \partial_{k+1}^{\pi^*h}\tilde{p}=\sum_{\tilde{q}\in\mathrm{Crit}_{k}(\pi^*h)}n_{\pi^*h,\tilde{v}}(\tilde{p},\tilde{q})\tilde{q}.\] Now $\tilde{\Gamma}$ acts on each set $\mathrm{Crit}_{k}(\pi^*h)$, inducing the structure of a left $\kappa[\tilde{\Gamma}]$-module on $\widetilde{\mathbf{CM}}_{k}(\pi^*h;\kappa[\tilde{\Gamma}])$, and then (\ref{npi}) shows that $\partial_{k+1}^{\pi^*h}$ is a homomorphism of $\kappa[\tilde{\Gamma}]$-modules. The $\widetilde{\mathbf{CM}}_k(\pi^*h;\kappa[\tilde{\Gamma}])$ are finitely-generated and free as $\kappa[\tilde{\Gamma}]$-modules, a basis being given by selecting, for each $p\in \mathrm{Crit}_k(h)$, a preimage $\tilde{p}$ under $\pi$.
The usual argument that the Morse boundary operator squares to zero is readily seen to show that, likewise, $\partial_{k}^{\pi^*h}\circ\partial_{k+1}^{\pi^*h}=0$, and thus $\widetilde{\mathbf{CM}}_*(\pi^*h;\kappa[\tilde{\Gamma}])$ is a chain complex of free, finitely-generated left $\kappa[\tilde{\Gamma}]$-modules. By \cite[Theorem A.5]{Pa95}, there is a chain homotopy equivalence of $\kappa[\tilde{\Gamma}]$-modules $\tilde{\mathcal{E}}(h)\colon\thinspace \widetilde{\mathbf{CM}}_*(\pi^*h;\kappa[\tilde{\Gamma}])\to \mathbf{S}_{*}(\tilde{X})$ (where the $\kappa$-coefficient singular chain complex $\mathbf{S}_{*}(\tilde{X})$ is made into a complex of $\kappa[\tilde{\Gamma}]$-modules in the obvious way using the action of $\tilde{\Gamma}$ on $\tilde{X}$).
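To illustrate (a standard example, with signs depending on the conventions of Section \ref{orsect}), take $X=S^1=\mathbb{R}/\mathbb{Z}$, let $\pi\colon\thinspace \tilde{X}=\mathbb{R}\to S^1$ be the universal cover with deck group $\tilde{\Gamma}=\langle t\rangle$ acting by $t\cdot x=x+1$, and let $h$ be a Morse function on $S^1$ with one maximum $p$ and one minimum $q$. Choose lifts $\tilde{p},\tilde{q}$ with $\tilde{q}<\tilde{p}<t\tilde{q}$. The two trajectories of $-\tilde{v}$ descending from $\tilde{p}$ limit to $\tilde{q}$ and to $t\tilde{q}$ respectively, and they count with opposite signs (their images in $S^1$ must cancel, since $H_1(S^1;\mathbb{Z})\neq 0$ forces the differential on $\mathbf{CM}_*(h;\mathbb{Z})$ to vanish); so, for suitable orientation choices, \[ \partial_{1}^{\pi^*h}\tilde{p}=(1-t)\tilde{q} \] in $\widetilde{\mathbf{CM}}_{*}(\pi^*h;\kappa[\tilde{\Gamma}])$, where $\kappa[\tilde{\Gamma}]=\kappa[t,t^{-1}]$. The homology of this complex is $\kappa[t,t^{-1}]/(1-t)\cong\kappa$, concentrated in degree $0$, as it must be given the chain homotopy equivalence with $\mathbf{S}_{*}(\mathbb{R})$.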
\begin{remark} \label{actionkey}
Because $\widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\tilde{\Gamma}])$ denotes a complex of modules over $\kappa[\tilde{\Gamma}]$, its structure depends in particular on the action of $\tilde{\Gamma}$ on $\tilde{X}$ and not merely on the topology of $\tilde{X}$ and the abstract group structure on $\tilde{\Gamma}$. This distinction becomes relevant in Section \ref{assocpair} when the same subgroup $\Gamma<\mathbb{R}$ acts on $\tilde{X}$ in two different ways. We will resolve this notationally by referring to $\Gamma$ alternately as $\Gamma_{\xi}$ or $\Gamma_{-\xi}$ depending upon which action we are considering.
\end{remark}
On the group algebra $\kappa[\tilde{\Gamma}]$, just as in Section \ref{conjsect}\footnote{Since we are regarding $\tilde{\Gamma}$ as an abstract group which in principle need not be abelian we will write the group operation multiplicatively here, though in the cases that are ultimately of interest $\tilde{\Gamma}$ will be isomorphic to the subgroup $\Gamma<\mathbb{R}$ considered in the rest of the paper. Correspondingly elements of the group algebra are written here as $\sum_{\gamma}a_{\gamma}\gamma$ instead of $\sum_{g}a_gT^g$.} we have a conjugation operation given by, for each $\lambda=\sum_{\gamma\in\tilde{\Gamma}}a_{\gamma}\gamma$, setting $\bar{\lambda}=\sum_{\gamma\in\tilde{\Gamma}}a_{\gamma}\gamma^{-1}$.
If $M$ is a left (resp. right) $\kappa[\tilde{\Gamma}]$-module we define its conjugate $\bar{M}$ to be the right (resp. left) module with the same abelian group structure and with the scalar multiplications on $M$ and $\bar{M}$ related by $\lambda m=m\bar{\lambda}$ for $m\in M$ and $\lambda\in\kappa[\tilde{\Gamma}]$. If $M$ is a left $\kappa[\tilde{\Gamma}]$-module then as in Section \ref{conjsect} we write ${}^{\vee}\!M$ for the left $\kappa[\tilde{\Gamma}]$-module obtained as the conjugate of the right $\kappa[\tilde{\Gamma}]$-module $\mathrm{Hom}_{\kappa[\tilde{\Gamma}]}(M,\kappa[\tilde{\Gamma}])$. (Thus for $\phi\in {}^{\vee}\!M$, $\lambda\in\kappa[\tilde{\Gamma}]$, and $m\in M$ we have $(\lambda\phi)(m)=\phi(m)\bar{\lambda}$.)
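For example (an elementary illustration), if $\tilde{\Gamma}=\langle t\rangle$ is infinite cyclic then $\kappa[\tilde{\Gamma}]=\kappa[t,t^{-1}]$ and conjugation sends $\sum_{k}a_kt^{k}$ to $\sum_{k}a_kt^{-k}$; accordingly, for a left $\kappa[\tilde{\Gamma}]$-module $M$, $\phi\in{}^{\vee}\!M$, and $m\in M$, one has $(t\phi)(m)=\phi(m)t^{-1}$.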
In particular, for a Morse function $h$ on the base $X$ of our regular covering space $\pi\colon\thinspace \tilde{X}\to X$ (and for a suitably generic gradient-like vector field for $h$ on $X$ which we pull back to $\tilde{X}$), we have a chain complex of left $\kappa[\tilde{\Gamma}]$-modules $\widetilde{\mathbf{CM}}_{*}(\pi^*h;\kappa[\tilde{\Gamma}])$ from which we can form the conjugated dual chain complex (with grading shift) ${}^{\vee}\!(\widetilde{\mathbf{CM}}_{*}(\pi^*h;\kappa[\tilde{\Gamma}])[n])$. This latter complex of left $\kappa[\tilde{\Gamma}]$-modules has grading $k$ part equal to ${}^{\vee}\!(\widetilde{\mathbf{CM}}_{n-k}(\pi^*h;\kappa[\tilde{\Gamma}]))$, and the differential ${}^{\vee}\!(\widetilde{\mathbf{CM}}_{*}(\pi^*h;\kappa[\tilde{\Gamma}])[n])_{k+1}\to {}^{\vee}\!(\widetilde{\mathbf{CM}}_{*}(\pi^*h;\kappa[\tilde{\Gamma}])[n])_k$ acts by $(-1)^{n-k}$ times the adjoint of the differential $\partial^{\pi^*h}_{n-k}$. Just as in Proposition \ref{negdual}, we have:
\begin{prop}\label{negdualcover}
Given an element $c=\sum_{\tilde{p}\in\mathrm{Crit}_{k}(-\pi^*h)}a_{\tilde{p}}\tilde{p}\in \widetilde{\mathbf{CM}}_{k}(-\pi^*h;\kappa[\tilde{\Gamma}])$ define $\epsilon_c\colon\thinspace \widetilde{\mathbf{CM}}_{n-k}(\pi^*h;\kappa[\tilde{\Gamma}])\to \kappa[\tilde{\Gamma}]$ by \[ \epsilon_c\left(\sum_{\tilde{q}\in\mathrm{Crit}_{n-k}(\pi^*h)}b_{\tilde{q}}\tilde{q}\right)=\sum_{\tilde{p}\in \mathrm{Crit}_k(-\pi^*h)}\sum_{\gamma\in\tilde{\Gamma}}a_{\tilde{p}}b_{\gamma\tilde{p}}\gamma.\] Then, assuming that either $\mathrm{char}(\kappa)=2$ or that $X$ is oriented and that we choose orientations for the descending manifolds $\mathbf{D}^{v}(p),\mathbf{D}^{-v}(q)$ as in Section \ref{orsect}, the assignment $c\mapsto \epsilon_c$ defines an isomorphism of chain complexes $\epsilon\colon\thinspace \widetilde{\mathbf{CM}}_{*}(-\pi^*h;\kappa[\tilde{\Gamma}])\to {}^{\vee}\!(\widetilde{\mathbf{CM}}_{*}(\pi^*h;\kappa[\tilde{\Gamma}])[n])$.
\end{prop}
\begin{proof}
The maps $\epsilon_c$ are clearly $\kappa$-linear, as is the map $c\mapsto \epsilon_c$. If $c=\sum_{\tilde{p}}a_{\tilde{p}}\tilde{p}\in \widetilde{\mathbf{CM}}_{k}(-\pi^*h;\kappa[\tilde{\Gamma}])$ and $d=\sum_{\tilde{q}}b_{\tilde{q}}\tilde{q}\in \widetilde{\mathbf{CM}}_{n-k}(\pi^*h;\kappa[\tilde{\Gamma}])$ then for $\xi,\eta\in\tilde{\Gamma}$ we have $\xi c=\sum_{\tilde{p}}a_{\xi^{-1}\tilde{p}}\tilde{p}$ and $\eta d=\sum_{\tilde{q}}b_{\eta^{-1}\tilde{q}}\tilde{q}$ and hence (making the substitutions $\tilde{q}=\xi^{-1}\tilde{p}$ in the sum in the second equality and $\mu=\eta^{-1}\gamma\xi$ in the third) \begin{align*} \epsilon_{\xi c}(\eta d) &=\sum_{\tilde{p}\in\mathrm{Crit}_{k}(-\pi^*h)}\sum_{\gamma\in\tilde{\Gamma}}a_{\xi^{-1}\tilde{p}}b_{\eta^{-1}\gamma\tilde{p}}\gamma \\ &= \sum_{\tilde{q}\in\mathrm{Crit}_{k}(-\pi^*h)}\sum_{\gamma\in\tilde{\Gamma}}a_{\tilde{q}}b_{\eta^{-1}\gamma\xi\tilde{q}}\gamma = \sum_{\tilde{q}\in\mathrm{Crit}_{k}(-\pi^*h)}\sum_{\mu\in\tilde{\Gamma}}a_{\tilde{q}}b_{\mu\tilde{q}}\eta\mu\xi^{-1} \\ &= \eta\epsilon_c(d)\xi^{-1}.\end{align*}
With $\xi$ equal to the identity this shows that $\epsilon_c\in\mathrm{Hom}_{\kappa[\tilde{\Gamma}]}(\widetilde{\mathbf{CM}}_{n-k}(\pi^*h;\kappa[\tilde{\Gamma}]),\kappa[\tilde{\Gamma}])$. With, instead, $\eta$ equal to the identity we see that for any $\lambda\in \kappa[\tilde{\Gamma}]$, $\epsilon_{\lambda c}(d)=\epsilon_c(d)\bar{\lambda}$ and so $\epsilon\colon\thinspace c\mapsto \epsilon_c$ defines a morphism of $\kappa[\tilde{\Gamma}]$-modules $\widetilde{\mathbf{CM}}_{k}(-\pi^*h;\kappa[\tilde{\Gamma}])\to {}^{\vee}\!\left(\widetilde{\mathbf{CM}}_{n-k}(\pi^*h;\kappa[\tilde{\Gamma}])\right)$. Recalling that $\widetilde{\mathbf{CM}}_{k}(-\pi^*h;\kappa[\tilde{\Gamma}])$ has basis given by lifts $\tilde{p}_i$ to $\tilde{X}$ of the various index-$k$ critical points of $-h$, it is easy to see that $\epsilon$ maps this basis to the corresponding dual basis of ${}^{\vee}\!\left(\widetilde{\mathbf{CM}}_{n-k}(\pi^*h;\kappa[\tilde{\Gamma}])\right)$ and so is an isomorphism of modules.
The fact that $\epsilon$ is an isomorphism of chain complexes then follows from the identity $\epsilon_{\partial_{k+1}^{-\pi^*h}\tilde{q}}(\tilde{p})=(-1)^{n-k}\epsilon_{\tilde{q}}(\partial_{n-k}^{\pi^*h}\tilde{p})$ for $\tilde{q}\in\mathrm{Crit}_{k+1}(-\pi^*h)$ and $\tilde{p}\in\mathrm{Crit}_k(-\pi^*h)$, which results from the same calculation as in Proposition \ref{negdual}.
\end{proof}
If now $h_-,h_+$ are two Morse functions on $X$, one obtains a continuation map $\tilde{\Phi}_{h_-h_+}\colon\thinspace \widetilde{\mathbf{CM}}_{*}(\pi^*h_-;\kappa[\tilde{\Gamma}])\to\widetilde{\mathbf{CM}}_*(\pi^*h_+;\kappa[\tilde{\Gamma}])$ by choosing a family of vector fields $\mathbb{V}=\{v_s\}_{s\in \mathbb{R}}$ on $X$ interpolating between gradient-like vector fields for $h_-$ and $h_+$, lifting these vector fields to $\tilde{X}$, and counting solutions in $\tilde{X}$ of the obvious analogue of (\ref{conteq}) according to their asymptotics in the usual way. (Alternatively, $\tilde{\Phi}_{h_-h_+}$ could be constructed directly from the spaces $\mathbf{N}^{\mathbb{V}}(p_-,p_+)$ underlying the construction of the original continuation map $\Phi_{h_-h_+}$ by keeping track of the asymptotics of the lifts to $\tilde{X}$ of the solutions to (\ref{conteq}).) This yields a chain homotopy equivalence of complexes of $\kappa[\tilde{\Gamma}]$-modules (with chain homotopy inverse given by $\tilde{\Phi}_{h_+h_-}$).
In particular, one obtains a chain-level Poincar\'e duality map $\tilde{\mathcal{D}}\colon\thinspace \widetilde{\mathbf{CM}}_{*}(h;\kappa[\tilde{\Gamma}])\to {}^{\vee}\!(\widetilde{\mathbf{CM}}_{*}(h;\kappa[\tilde{\Gamma}])[n])$ by composing the continuation map $\tilde{\Phi}_{h,-h}$ with the isomorphism $\epsilon$ from Proposition \ref{negdualcover}.
\subsection{Novikov complexes}\label{novsect}
We now consider a compact connected smooth manifold $X$ and a de Rham cohomology class $\xi\in H^1(X;\mathbb{R})$. Let $\pi \colon\thinspace\tilde{X}_{\xi}\to X$ be the covering space associated to the kernel of the evaluation map $\langle \xi,\cdot\rangle\colon\thinspace \pi_1(X)\to \mathbb{R}$. Then $\pi^{*}\xi=0\in H^1(\tilde{X}_{\xi};\mathbb{R})$, and we may and do identify the deck transformation group of $\tilde{X}_{\xi}$ with the subgroup $\Gamma_{\xi}=\Img(\langle \xi,\cdot\rangle\colon\thinspace \pi_1(X)\to \mathbb{R})$ of $\mathbb{R}$ in such a way that, for $\tilde{x}\in \tilde{X}_{\xi}$ and $g\in\Gamma_{\xi}$, $\xi$ evaluates as $-g$ on the image under $\pi$ of a path from $\tilde{x}$ to $g\tilde{x}$. Note that, while $\tilde{X}_{\xi}=\tilde{X}_{-\xi}$ and $\Gamma_{\xi}=\Gamma_{-\xi}$ as sets, our conventions lead to the action of $\Gamma_{-\xi}$ on $\tilde{X}_{-\xi}$ being opposite to the action of $\Gamma_{\xi}$ on $\tilde{X}_{\xi}$.
If $\theta\in \Omega^1(X)$ is a closed $1$-form representing $\xi$, then we will have $\pi^*\theta=d\tilde{f}$ for a smooth function $\tilde{f}\colon\thinspace \tilde{X}_{\xi}\to \mathbb{R}$ that satisfies $\tilde{f}(g\tilde{p})=\tilde{f}(\tilde{p})-g$ for all $g\in\Gamma_{\xi}$ and $\tilde{p}\in\tilde{X}_{\xi}$. The $1$-form $\theta$ is said to be Morse if, considered as a section of the cotangent bundle $T^*X$, it is transverse to the zero section; this is equivalent to $\tilde{f}$ being a Morse function. In this case, the Novikov complex $\mathbf{CN}_{*}(\tilde{f};\xi)$ is defined using the flow of the negative of a gradient-like vector field $\tilde{v}$ for $\tilde{f}$ which is the lift of a Morse-Smale vector field $v$ on $X$. Each zero $p$ of $\theta$ will then have injectively immersed descending and ascending manifolds $\mathbf{D}^v(p),\mathbf{A}^v(p)$ for the flow of $v$, and by the Morse-Smale condition (which can be arranged to hold by taking $v$ equal to the metric dual of $\theta$ with respect to a suitably generic metric by \cite[Proposition 1]{BH01}) the various $\mathbf{D}^v(p)$ and $\mathbf{A}^v(q)$ will be transverse. The critical points $\tilde{p}$ of $\tilde{f}$ are the preimages under $\pi$ of the zeros of $\theta$, and their descending and ascending manifolds $\mathbf{D}^{\tilde{v}}(\tilde{p}),\mathbf{A}^{\tilde{v}}(\tilde{p})$ are embedded disks which are lifts of the $\mathbf{D}^v(p),\mathbf{A}^{v}(p)$. We orient the $\mathbf{D}^{\tilde{v}}(\tilde{p})$ by lifting chosen orientations on the $\mathbf{D}^{v}(p)$; thus for distinct critical points $\tilde{p},g\tilde{p}$ of $\tilde{f}$ in the same fiber of $\pi$ the orientations of $\mathbf{D}^{\tilde{v}}(\tilde{p})$ and $\mathbf{D}^{\tilde{v}}(g\tilde{p})$ correspond under the deck transformation $g$.
In this situation, as before, for distinct critical points $\tilde{p},\tilde{q}$ of $\tilde{f}$ we obtain oriented manifolds $\mathbf{W}^{\tilde{v}}(\tilde{p},\tilde{q})=\mathbf{D}^{\tilde{v}}(\tilde{p})\cap\mathbf{A}^{\tilde{v}}(\tilde{q})$ and $\mathbf{M}^{\tilde{v}}(\tilde{p},\tilde{q})=\frac{\mathbf{W}^{\tilde{v}}(\tilde{p},\tilde{q})}{\mathbb{R}}$, and, if $\mathrm{ind}_{\tilde{f}}(\tilde{p})-\mathrm{ind}_{\tilde{f}}(\tilde{q})=1$, a signed count $n_{\tilde{f},\tilde{v}}(\tilde{p},\tilde{q})$ of the points of $\mathbf{M}^{\tilde{v}}(\tilde{p},\tilde{q})$. In contrast to the situation of the previous section, though, given $\tilde{p}\in\mathrm{Crit}_{k+1}(\tilde{f})$ there may be infinitely many $\tilde{q}\in\mathrm{Crit}_k(\tilde{f})$ for which $n_{\tilde{f},\tilde{v}}(\tilde{p},\tilde{q})\neq 0$. There will however be only finitely many such $\tilde{q}$ obeying any given lower bound $\tilde{f}(\tilde{q})\geq c$, and so one defines, following \cite{Nov}, the degree-$k$ part of the Novikov chain complex of $\tilde{f}$ to consist of possibly-infinite sums as follows: \[ \mathbf{CN}_{k}(\tilde{f};\xi)=\left\{\left.\sum_{\tilde{p}\in\mathrm{Crit}_{k}(\tilde{f})}a_{\tilde{p}}\tilde{p}\right|a_{\tilde{p}}\in\kappa,(\forall c\in \mathbb{R})(\#\{\tilde{p}|a_{\tilde{p}}\neq 0,\,\tilde{f}(\tilde{p})>c\}<\infty)\right\}.\]
This is a vector space over the field $\Lambda_{\uparrow}$ from Section \ref{basic} (using $\Gamma_{\xi}$ in the role of $\Gamma$) with the obvious action extending the action of the group ring, bearing in mind our sign convention that $\tilde{f}(g\tilde{p})=\tilde{f}(\tilde{p})-g$ for $g\in\Gamma_{\xi}$.
Moreover, defining a boundary operator $\partial_{k+1}^{\tilde{f}}\colon\thinspace \mathbf{CN}_{k+1}(\tilde{f};\xi)\to \mathbf{CN}_{k}(\tilde{f};\xi)$ by the usual prescription $\partial_{k+1}^{\tilde{f}}\tilde{p}=\sum_{\tilde{q}\in\mathrm{Crit}_{k}(\tilde{f})}n_{\tilde{f},\tilde{v}}(\tilde{p},\tilde{q})\tilde{q}$ makes $(\mathbf{CN}_{*}(\tilde{f};\xi),\partial^{\tilde{f}})$ into a chain complex of $\Lambda_{\uparrow}$-vector spaces. If the zeros of $\theta$ having index $k$ are $p_1,\ldots,p_d$, then any choice $\{\tilde{p}_1,\ldots,\tilde{p}_d\}$ of lifts to $\tilde{X}_{\xi}$ of the $p_i$ gives a basis for $\mathbf{CN}_k(\tilde{f};\xi)$. See \cite{Lat},\cite{BH01} for more details regarding the construction of the Novikov complex.
The Novikov complex becomes a Floer-type complex by setting \[ \ell_{\uparrow}^{\tilde{f}}\left(\sum_{\tilde{p}}a_{\tilde{p}}\tilde{p}\right)=\max\{\tilde{f}(\tilde{p})|a_{\tilde{p}}\neq 0\}.\] A basis $\{\tilde{p}_1,\ldots,\tilde{p}_d\}$ as in the preceding paragraph will then be $\ell_{\uparrow}^{\tilde{f}}$-orthogonal, with $\ell_{\uparrow}^{\tilde{f}}\left(\sum_{i}\lambda_i\tilde{p}_i\right)=\max_i\{\tilde{f}(\tilde{p}_i)-\nu_{\uparrow}(\lambda_i)\}$ for $\lambda_i\in\Lambda_{\uparrow}$. We write $\mathbf{CN}(\tilde{f};\xi)_{\uparrow}$ for the Floer-type complex consisting of the Novikov chain complex $(\mathbf{CN}_{*}(\tilde{f};\xi),\partial^{\tilde{f}})$ and the filtration function $\ell_{\uparrow}^{\tilde{f}}$.
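As an illustration (again the circle, with signs depending on the orientation conventions of Section \ref{orsect}), take $X=S^1=\mathbb{R}/\mathbb{Z}$ and let $\xi$ be a generator of $H^1(S^1;\mathbb{Z})$, viewed in $H^1(S^1;\mathbb{R})$, so that $\Gamma_{\xi}=\mathbb{Z}$ and $\tilde{X}_{\xi}=\mathbb{R}$. If $\theta$ is a Morse representative of $\xi$ with exactly two zeros, of indices $1$ and $0$, then $\mathbf{CN}_1(\tilde{f};\xi)$ and $\mathbf{CN}_0(\tilde{f};\xi)$ are each one-dimensional over $\Lambda_{\uparrow}$, generated by lifts $\tilde{p}$ and $\tilde{q}$. The two trajectories of $-\tilde{v}$ descending from $\tilde{p}$ limit to $\tilde{q}$ and to its translate $T^{1}\tilde{q}$ (for a suitable choice of lifts), and with appropriate orientations one finds \[ \partial_{1}^{\tilde{f}}\tilde{p}=(1-T^{1})\tilde{q}.\] Since $1-T^{1}$ is invertible in $\Lambda_{\uparrow}$ (concretely, $(1-T^{1})^{-1}=\sum_{k\geq 0}T^{k}$, a series permitted by the finiteness condition above since $\tilde{f}(T^{k}\tilde{q})=\tilde{f}(\tilde{q})-k\to-\infty$), the complex $\mathbf{CN}_{*}(\tilde{f};\xi)$ is acyclic, recovering the familiar fact that the Novikov homology of the circle vanishes whenever $\xi\neq 0$.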
Suppose that $\theta_-,\theta_+$ are closed Morse one-forms on $X$ that both represent the de Rham cohomology class $\xi$, and let $\tilde{f}_-,\tilde{f}_+\colon\thinspace \tilde{X}_{\xi}\to \mathbb{R}$ be (necessarily Morse) functions such that $d\tilde{f}_{\pm}=\pi^{*}\theta_{\pm}$. Using the lift to $\tilde{X}_{\xi}$ of a suitable one-parameter family of vector fields on $X$, we then obtain continuation maps $\Phi_{\tilde{f}_-\tilde{f}_+}\colon\thinspace \mathbf{CN}_{*}(\tilde{f}_-;\xi)\to\mathbf{CN}_{*}(\tilde{f}_+;\xi)$ and $\Phi_{\tilde{f}_+\tilde{f}_-}\colon\thinspace \mathbf{CN}_{*}(\tilde{f}_+;\xi)\to\mathbf{CN}_{*}(\tilde{f}_-;\xi)$, by the same procedure as in Section \ref{morseintro}; these are chain maps defined over $\Lambda_{\uparrow}$ that are homotopy inverses to each other, and so the chain homotopy type of $\mathbf{CN}_{*}(\tilde{f};\xi)$ depends only on the cohomology class $\xi$. It is important to note that the existence of these continuation maps depends on the $d\tilde{f}_{\pm}$ being pullbacks of \emph{cohomologous} $1$-forms; were this not the case, the difference $\tilde{f}_+-\tilde{f}_-$ would be unbounded, and solutions of (\ref{conteq}) would no longer satisfy estimates like (\ref{filtchange}) that are needed in the proof that $\Phi_{\tilde{f}_-\tilde{f}_+}$ is well-defined and respects the finiteness condition in the definition of the Novikov chain complexes. In particular, if $\xi\neq 0$ there is no natural continuation map from $\mathbf{CN}_{*}(\tilde{f};\xi)$ to $\mathbf{CN}_{*}(-\tilde{f};-\xi)$ even though their homologies are isomorphic as graded vector spaces over the same field $\Lambda_{\uparrow}$; if there were such a map, much of the complexity of this paper might have been avoided.
Adapting part of the proof of \cite[Th\'eor\`eme 2.18]{Lat}, the homotopy type of $\mathbf{CN}_{*}(\tilde{f};\xi)$ can be identified as follows. Begin with a Morse function $h_0\colon\thinspace X\to \mathbb{R}$ and a gradient-like vector field $v_0$ for $h_0$ whose flow is Morse-Smale. Then choose a closed $1$-form $\theta_0$ on $X$ which represents the cohomology class $\xi$ and has support contained in $X\setminus \Omega$ for some contractible neighborhood $\Omega$ of the zero locus of $dh_0$. Then if $N\gg 1$, the closed one-form $\theta_N:=Ndh_0+\theta_0$ on $X$ will also represent the class $\xi$, will coincide with $Ndh_0$ on a neighborhood of $\mathrm{Crit}(h_0)$, and will have the property that $(\theta_N)_x(v_0)>0$ for all $x\in X\setminus \mathrm{Crit}(h_0)$. Consequently, letting $\tilde{h}_N\colon\thinspace \tilde{X}_{\xi}\to \mathbb{R}$ be such that $d\tilde{h}_N=\pi^*(Ndh_0+\theta_0)$, the lift $\tilde{v}_0$ of $v_0$ to $\tilde{X}_{\xi}$ will be gradient-like both for $\tilde{h}_N$ and for $\pi^*h_0$. Thus we may use the descending and ascending manifolds of this same vector field $\tilde{v}_0$ in the formation both of the lifted Morse complex $\widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{\xi}])$ and of the Novikov complex $\mathbf{CN}_{*}(\tilde{h}_N;\xi)$. It follows readily from this that we have identical chain complexes \begin{equation}\label{cxid} \mathbf{CN}_{*}(\tilde{h}_N;\xi)=\Lambda_{\uparrow}\otimes_{\kappa[\Gamma_{\xi}]}\widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{\xi}]). \end{equation} We define the \textbf{Latour map} $\mathcal{L}_{h_0\tilde{f}}\colon\thinspace \widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{\xi}])\to \mathbf{CN}_{*}(\tilde{f};\xi)$ to be the composition of the coefficient extension $\widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{\xi}])\hookrightarrow \Lambda_{\uparrow}\otimes_{\kappa[\Gamma_{\xi}]}\widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{\xi}])$ and the continuation map $\mathbf{CN}_{*}(\tilde{h}_N;\xi)\to \mathbf{CN}_{*}(\tilde{f};\xi)$, as (\ref{cxid}) identifies the codomain of the former with the domain of the latter.
Then $1_{\Lambda_{\uparrow}}\otimes \mathcal{L}_{h_0\tilde{f}}$ is just the continuation map $\mathbf{CN}_{*}(\tilde{h}_N;\xi)\to \mathbf{CN}_{*}(\tilde{f};\xi)$ and so is a chain homotopy equivalence. We now have the ingredients needed to generalize Definition \ref{cmdef} to the Novikov context:
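In the circle example above (with the same caveats about signs), the identification (\ref{cxid}) can be seen directly: taking $h_0$ with one maximum and one minimum, the lifted Morse complex has differential $\partial\tilde{p}=(1-t)\tilde{q}$ over $\kappa[\Gamma_{\xi}]\cong\kappa[t,t^{-1}]$, and extending coefficients to $\Lambda_{\uparrow}$ produces precisely the acyclic Novikov complex computed after the definition of $\ell_{\uparrow}^{\tilde{f}}$.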
\begin{dfn}\label{cndef} Let $X$ be a smooth compact manifold and if our ground field $\kappa$ has $\mathrm{char}(\kappa)\neq 2$ assume that $X$ is oriented. Fix a Morse function $h_0\colon\thinspace X\to\mathbb{R}$. Let $\xi\in H^1(X;\mathbb{R})$ with associated covering space $\pi\colon\thinspace \tilde{X}_{\xi}\to X$ and let $\Gamma=\Gamma_{\xi}=\Img(\langle\xi,\cdot\rangle\colon\thinspace \pi_1(X)\to\mathbb{R})$. Given any Morse function $\tilde{f}\colon\thinspace \tilde{X}_{\xi}\to \mathbb{R}$ such that $d\tilde{f}$ is the pullback of a form in class $\xi$, we take $\mathcal{CN}(\tilde{f};\xi)$ to be the chain-level Poincar\'e-Novikov complex with the following data $(\mathcal{C},\tilde{\mathcal{D}},\mathcal{C}_{\uparrow},\mathcal{S})$ as in Section \ref{cpnstr}:
\begin{itemize}
\item $\mathcal{C}=\widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{\xi}])$ is the lifted Morse complex of $h_0$;
\item $\tilde{\mathcal{D}}\colon\thinspace \widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{\xi}])\to {}^{\vee}\!(\widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{\xi}])[n])$ is the composition of the continuation map $\Phi_{\pi^*h_0,-\pi^*h_0}\colon\thinspace \widetilde{\mathbf{CM}}_*(\pi^*h_0;\kappa[\Gamma_{\xi}])\to \widetilde{\mathbf{CM}}_{*}(-\pi^*h_0;\kappa[\Gamma_{\xi}])$ with the chain isomorphism $\epsilon\colon\thinspace \widetilde{\mathbf{CM}}_{*}(-\pi^*h_0;\kappa[\Gamma_{\xi}])\to {}^{\vee}\!(\widetilde{\mathbf{CM}}_*(\pi^*h_0;\kappa[\Gamma_{\xi}])[n])$ from Proposition \ref{negdualcover}.
\item $\mathcal{C}_{\uparrow}=\mathbf{CN}(\tilde{f};\xi)_{\uparrow}$ (\emph{i.e.}, the $\Lambda_{\uparrow}$-Floer-type complex with chain complex $\mathbf{CN}_{*}(\tilde{f};\xi)$ and the filtration function $\ell_{\uparrow}^{\tilde{f}}$).
\item $\mathcal{S}$ is the Latour map $\mathcal{L}_{h_0\tilde{f}}\colon\thinspace \widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{\xi}])\to \mathbf{CN}_{*}(\tilde{f};\xi)$ described above.
\end{itemize}
\end{dfn}
\subsection{The associated chain-level filtered matched pairs}\label{assocpair}
Having defined the chain-level Poincar\'e-Novikov structures $\mathcal{CN}(\tilde{f};\xi)$ associated to Morse functions $\tilde{f}\colon\thinspace \tilde{X}_{\xi}\to \mathbb{R}$ on covers $\tilde{X}_{\xi}$ such that $d\tilde{f}$ is the pullback of a closed one-form in the cohomology class $\xi$, let us now discuss the chain-level filtered matched pairs $\mathcal{CP}(\mathcal{CN}(\tilde{f};\xi))$ associated to these by the construction in Section \ref{cpnstr}, as these are in turn the inputs for the rest of the constructions in Section \ref{clsec}.
By definition, the data $(\mathcal{C},\mathcal{C}^{\downarrow},\mathcal{C}_{\uparrow},\phi^{\downarrow},\phi_{\uparrow})$ of $\mathcal{CP}(\mathcal{CN}(\tilde{f};\xi))$ will include $\mathcal{C}=\widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{\xi}])$, $\mathcal{C}_{\uparrow}=\mathbf{CN}(\tilde{f};\xi)_{\uparrow}$, and $\phi_{\uparrow}=\mathcal{L}_{h_0\tilde{f}}$. In the role of the $\Lambda^{\downarrow}$-Floer-type complex $\mathcal{C}^{\downarrow}$ is ${}^{\vee}\!\left(\mathbf{CN}(\tilde{f};\xi)_{\uparrow}[n]\right)$, but it is simpler to describe this using an analogue of Proposition \ref{negdualcover}. Note first that, in our conventions, the covering spaces $\tilde{X}_{\xi}$ and $\tilde{X}_{-\xi}$ are the same, and we have well-defined $\Lambda_{\uparrow}$-Floer-type complexes $\mathbf{CN}_*(f;\xi)$ and $\mathbf{CN}_{*}(-f;-\xi)$ using the same subgroup $\Gamma=\Gamma_{\xi}=\Gamma_{-\xi}$ of $\mathbb{R}$ for the Novikov fields in both cases; however the ways in which this subgroup of $\mathbb{R}$ is identified with the deck transformation group of $\tilde{X}_{\xi}=\tilde{X}_{-\xi}$ are different due to the general rule that the action of an element $g\in \Gamma$ should decrease the value of the corresponding function (respectively $\tilde{f}$ or $-\tilde{f}$) by $g$. This leads to the need to conjugate the $\Lambda_{\uparrow}$-Floer-type complex $\mathbf{CN}(-\tilde{f};-\xi)_{\uparrow}$ to obtain a $\Lambda^{\downarrow}$-Floer-type complex in the following. (Though we suppress the notation for the gradient-like vector fields in the statement of the proposition, it should be understood that if $\tilde{v}$ is the vector field used to define $\mathbf{CN}_{*}(f;\xi)$ then we use $-\tilde{v}$ to define $\mathbf{CN}_{*}(-f;-\xi)$.)
\begin{prop}\label{negdualnov}
Given an element $c=\sum_{\tilde{p}\in\mathrm{Crit}_{k}(-\tilde{f})}a_{\tilde{p}}\tilde{p}\in \mathbf{CN}_{k}(-\tilde{f};-\xi)$ define $\epsilon_c\colon\thinspace \mathbf{CN}_{n-k}(\tilde{f};\xi)\to \Lambda_{\uparrow}$ by \begin{equation}\label{novdual} \epsilon_c\left(\sum_{\tilde{q}\in\mathrm{Crit}_{n-k}(\tilde{f})}b_{\tilde{q}}\tilde{q}\right)=\sum_{\tilde{p}\in \mathrm{Crit}_k(-\tilde{f})}\sum_{g\in\Gamma_{\xi}}a_{\tilde{p}}b_{g\tilde{p}}T^g \end{equation} where the notation $g\tilde{p}$ refers to the deck transformation action on $\tilde{X}_{\xi}$ associated to $\xi$ (not $-\xi$). Then, assuming that either $\mathrm{char}(\kappa)=2$ or that $X$ is oriented and that we choose orientations for the descending manifolds $\mathbf{D}_{\tilde{f}}(p),\mathbf{D}_{-\tilde{f}}(q)$ as in Section \ref{orsect}, the assignment $c\mapsto \epsilon_c$ defines an isomorphism of $\Lambda^{\downarrow}$-Floer-type complexes $\epsilon\colon\thinspace \overline{\mathbf{CN}(-\tilde{f};-\xi)_{\uparrow} }\to {}^{\vee}\!(\mathbf{CN}(\tilde{f};\xi)_{\uparrow}[n])$.
\end{prop}
\begin{proof}
Let $\mathrm{Crit}_{k}(-f)=\{p_1,\ldots,p_d\}$ and choose lifts $\tilde{p}_1,\ldots,\tilde{p}_d$ of the $p_i$ to the covering space $\tilde{X}_{-\xi}=\tilde{X}_{\xi}$. Thus $\{\tilde{p}_1,\ldots,\tilde{p}_d\}$ gives an orthogonal basis for both orthogonalizable $\Lambda_{\uparrow}$-spaces $\mathbf{CN}_{k}(-f;-\xi)$ and $\mathbf{CN}_{n-k}(f;\xi)$.
Any $\tilde{p}\in\mathrm{Crit}_{k}(-\tilde{f})$ can be expressed as $h\tilde{p}_i$ for unique $i\in\{1,\ldots,d\}$ and $h\in \Gamma_{\xi}$, where as in the statement of the proposition we use in this notation the deck transformation action associated to $\xi$ rather than $-\xi$. Thus the corresponding elements $\tilde{p}$ in the respective Novikov complexes will equal $T^{-h}\tilde{p}_i$ in $\mathbf{CN}_{k}(-\tilde{f};-\xi)$ and $T^h\tilde{p}_i$ in $\mathbf{CN}_{n-k}(\tilde{f};\xi)$. So the pairing $(c,x)\mapsto \epsilon_c(x)$ can be rewritten in terms of the basis as \[ \left(\sum_{i=1}^{d}\sum_{s\in\Gamma}a_{i,s}T^s\tilde{p}_i, \sum_{i=1}^{d}\sum_{t\in\Gamma}b_{i,t}T^t\tilde{p}_i\right)\mapsto \sum_{i=1}^{d}\sum_{s,t\in\Gamma}a_{i,s}b_{i,t}T^{s+t},\] or equivalently as, for $\lambda_i,\mu_i\in\Lambda_{\uparrow}$, \[ \left(\sum_{i}\lambda_i\tilde{p}_i,\sum_i\mu_i\tilde{p}_i\right)\mapsto \sum_i\lambda_i\mu_i.\] This makes clear that (\ref{novdual}) gives a well-defined element of $\Lambda_{\uparrow}$, and that $\epsilon$ defines an isomorphism from $\mathbf{CN}_{k}(-f;-\xi)$ to the (unconjugated) dual of $\mathbf{CN}_{n-k}(f;\xi)$, and hence, after conjugation, from $\overline{\mathbf{CN}_{k}(-f;-\xi)}$ to ${}^{\vee}\!(\mathbf{CN}_{n-k}(f;\xi))$. That $\epsilon$ is an isomorphism of chain complexes follows from the identity $n_{-\tilde{f},-\tilde{v}}(\tilde{q},\tilde{p})=(-1)^{n-k}n_{\tilde{f},\tilde{v}}(\tilde{p},\tilde{q})$ just as in Proposition \ref{negdualcover}.
To conclude that $\epsilon$ is an isomorphism of $\Lambda^{\downarrow}$-Floer-type complexes we need to check that it intertwines the filtration functions, namely $-\ell_{\uparrow}^{-\tilde{f}}$ on $\overline{\mathbf{CN}(-\tilde{f};-\xi)_{\uparrow} }$ (see Remark \ref{switch}) and ${}^{\vee}\!\ell_{\uparrow}^{\tilde{f}}$ on ${}^{\vee}\!(\mathbf{CN}(\tilde{f};\xi)_{\uparrow}[n])$ (see Section \ref{dualsec}). In grading $k$, it follows from the preceding paragraph that the $(-\ell_{\uparrow}^{-\tilde{f}})$-orthogonal basis $\{\tilde{p}_1,\ldots,\tilde{p}_{d}\}$ for $\overline{\mathbf{CN}_{k}(-\tilde{f};-\xi)}$ is mapped by $\epsilon$ to the dual basis $\{\tilde{p}_{1}^{*},\ldots,\tilde{p}_{d}^{*}\}$ to the $\ell_{\uparrow}^{\tilde{f}}$-orthogonal basis $\{\tilde{p}_1,\ldots,\tilde{p}_{d}\}$ for $\mathbf{CN}_{n-k}(\tilde{f};\xi)$. By Remark \ref{dualbasisrem}, this dual basis is an ${}^{\vee}\!\ell_{\uparrow}^{\tilde{f}}$-orthogonal basis for ${}^{\vee}\!\mathbf{CN}_{n-k}(f;\xi)$, and moreover we see that $(-\ell_{\uparrow}^{-\tilde{f}})(\tilde{p}_i)={}^{\vee}\!\ell_{\uparrow}^{\tilde{f}}(\tilde{p}_{i}^{*})=\tilde{f}(\tilde{p}_i)$. Thus, in each grading, $\epsilon$ maps a $(-\ell_{\uparrow}^{-\tilde{f}})$-orthogonal basis to a ${}^{\vee}\!\ell_{\uparrow}^{\tilde{f}}$-orthogonal basis, preserving the filtration level of each basis element, from which it follows readily that ${}^{\vee}\!\ell_{\uparrow}^{\tilde{f}}=(-\ell_{\uparrow}^{-\tilde{f}})\circ \epsilon$, as desired.
\end{proof}
\begin{notation}
We write $\mathbf{CN}(\tilde{f};\xi)^{\downarrow}$ for the $\Lambda^{\downarrow}$-Floer-type complex $\overline{\mathbf{CN}(-\tilde{f};-\xi)_{\uparrow}}$.
\end{notation}
This notation is designed to emphasize that $\overline{\mathbf{CN}(-\tilde{f};-\xi)_{\uparrow}}$ can be interpreted directly in terms of $\tilde{f}$ instead of $-\tilde{f}$: it is a version of the filtered Novikov chain complex for $\tilde{f}$ in which the differential is defined in terms of the positive gradient flow of $\tilde{f}$ instead of the negative gradient flow as in the usual Novikov complex, and correspondingly the filtration is descending rather than ascending. The general degree-$k$ chain in $\mathbf{CN}(\tilde{f};\xi)^{\downarrow}$ takes the form $\sum_{\tilde{p}\in\mathrm{Crit}_{n-k}(\tilde{f})}a_{\tilde{p}}\tilde{p}$ where for any $C\in\mathbb{R}$ only finitely many $\tilde{p}$ have both $a_{\tilde{p}}\neq 0$ and $\tilde{f}(\tilde{p})<C$, and the filtration function $\ell_{\tilde{f}}^{\downarrow}:=-\ell^{-\tilde{f}}_{\uparrow}$ is given by \[ \ell_{\tilde{f}}^{\downarrow}\left(\sum_{\tilde{p}\in\mathrm{Crit}_{n-k}(\tilde{f})}a_{\tilde{p}}\tilde{p}\right)=\min\{\tilde{f}(\tilde{p})|a_{\tilde{p}}\neq 0\}.\]
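To fix ideas, here is a small illustration of the two filtrations; the specific choices are ours and play no role in the arguments. Take $X=S^1$, let $\xi\in H^1(S^1;\mathbb{R})$ be a class with $\Gamma_{\xi}=\mathbb{Z}$, so that $\tilde{X}_{\xi}=\mathbb{R}$, and let $\tilde{f}$ be a Morse function as above whose critical points form two $\Gamma_{\xi}$-orbits $\{T^m\tilde{p}\}_{m\in\mathbb{Z}}$ and $\{T^m\tilde{q}\}_{m\in\mathbb{Z}}$, with $\tilde{f}(T^m\tilde{p})=\tilde{f}(\tilde{p})-m$ in accordance with our conventions. Then $c_{\uparrow}=\sum_{m\geq 0}T^m\tilde{p}$ is a legitimate chain of $\mathbf{CN}(\tilde{f};\xi)_{\uparrow}$, with $\ell_{\uparrow}^{\tilde{f}}(c_{\uparrow})=\tilde{f}(\tilde{p})$ attained by the $m=0$ term, but it violates the finiteness condition for $\mathbf{CN}(\tilde{f};\xi)^{\downarrow}$ since infinitely many of its terms have $a_{T^m\tilde{p}}\neq 0$ and $\tilde{f}(T^m\tilde{p})<\tilde{f}(\tilde{p})+1$. Dually, $c^{\downarrow}=\sum_{m\leq 0}T^{m}\tilde{p}$ is a legitimate chain of $\mathbf{CN}(\tilde{f};\xi)^{\downarrow}$, with $\ell^{\downarrow}_{\tilde{f}}(c^{\downarrow})=\tilde{f}(\tilde{p})$ again attained at $m=0$, but is not a chain of $\mathbf{CN}(\tilde{f};\xi)_{\uparrow}$.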
There is a version of the Latour map, which we will denote by $\bar{\mathcal{L}}_{h_0\tilde{f}}$, from the lifted Morse complex $\widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{\xi}])$ to the underlying chain complex $\overline{\mathbf{CN}_{*}(-\tilde{f};-\xi)}$ of $\mathbf{CN}(\tilde{f};\xi)^{\downarrow}$. Namely, as alluded to in Remark \ref{actionkey}, $\Gamma_{\xi}$ and $\Gamma_{-\xi}$ refer to the same group $\Gamma$ but with actions on $\tilde{X}_{\xi}=\tilde{X}_{-\xi}$ that are inverse to each other, so that we have an equality of complexes of $\kappa[\Gamma]$-modules \[ \widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{\xi}])=\overline{\widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{-\xi}])}.\] We then have a Latour map $\mathcal{L}_{h_0,-\tilde{f}}\colon\thinspace \widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{-\xi}])\to \mathbf{CN}_{*}(-\tilde{f};-\xi)$; applying the conjugation functor together with the above identification of $\kappa[\Gamma]$-modules yields the promised conjugate Latour map \[
\bar{\mathcal{L}}_{h_0\tilde{f}}\colon\thinspace \widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{\xi}]) \to \overline{\mathbf{CN}_{*}(-\tilde{f};-\xi)}.\]
\begin{prop}\label{downup}
The chain-level filtered matched pair $\mathcal{CP}(\mathcal{CN}(\tilde{f};\xi))$ associated to $\mathcal{CN}(\tilde{f};\xi)$ is filtered matched homotopy equivalent to the chain-level filtered matched pair \begin{equation}\label{eqcmp} \xymatrix{ & \mathbf{CN}(\tilde{f};\xi)^{\downarrow} \\ \widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{\xi}]) \ar[ur]^{\bar{\mathcal{L}}_{h_0\tilde{f}}} \ar[dr]_{\mathcal{L}_{h_0\tilde{f}}} & \\ & \mathbf{CN}(\tilde{f};\xi)_{\uparrow} }\end{equation}
\end{prop}
\begin{proof} In the general notation of Definition \ref{cfmpdfn}, the two chain-level filtered matched pairs in question have the same data $\mathcal{C}$ and $\mathcal{C}_{\uparrow}$, while the role of $\mathcal{C}^{\downarrow}$ is played by ${}^{\vee}\!(\mathbf{CN}(\tilde{f};\xi)_{\uparrow}[n])$ in one case and $\mathbf{CN}(\tilde{f};\xi)^{\downarrow}$ in the other. So we may use the isomorphism $\epsilon$ from Proposition \ref{negdualnov} in the role of the top horizontal map in Definition \ref{fmhedef}, and the respective identities in the roles of the other horizontal maps; it remains only to check that the diagram \begin{equation}\label{topsquare} \xymatrix{ \overline{\mathbf{CN}_{*}(-\tilde{f};-\xi)} \ar[rr]^{\epsilon} & & {}^{\vee}\!(\mathbf{CN}(\tilde{f};\xi)_{\uparrow}[n]) \\ & \widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{\xi}]) \ar[ul]^{\bar{\mathcal{L}}_{h_0\tilde{f}}} \ar[ur]_{\phi^{\downarrow}} & } \end{equation} commutes up to homotopy, where $\phi^{\downarrow}$ is the map obtained as in (\ref{phidowncpn}), with the map $\mathcal{T}$ therein equal to a homotopy inverse to $1_{\Lambda_{\uparrow}}\otimes \mathcal{L}_{h_0\tilde{f}}$. (The desired conclusion is independent of which homotopy inverse we use.)
As in the definition of the Latour map, let $\theta_0$ be a closed one-form representing the cohomology class $\xi$ which vanishes on a neighborhood of $\mathrm{Crit}(h_0)$, and choose $N$ sufficiently large that the lift $\tilde{v}$ to $\tilde{X}_{\xi}$ of a gradient-like vector field $v$ for $h_0$ is still gradient-like for a primitive $\tilde{h}_N$ for the one-form $\pi^*(Ndh_0+\theta_0)$. Increasing $N$ if necessary, assume that $\tilde{v}$ is also gradient-like for a primitive $\tilde{g}_N$ for the one-form $\pi^*(Ndh_0-\theta_0)$. Using $\tilde{v}$ in the definition of the differentials on the Morse and Novikov complexes for $\pi^*h_0$, $\tilde{h}_N$ and $\tilde{g}_N$, we then have identical complexes $\Lambda_{\uparrow}\otimes_{\kappa[\Gamma_{\xi}]}\widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{\xi}])=\mathbf{CN}_{*}(\tilde{h}_N;\xi)$, and also identical complexes $\Lambda_{\uparrow}\otimes_{\kappa[\Gamma_{-\xi}]}\widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{-\xi}])=\mathbf{CN}_*(\tilde{g}_N;-\xi)$. Using the first of these identifications, $1_{\Lambda_{\uparrow}}\otimes \mathcal{L}_{h_0\tilde{f}}$ is equal to the continuation map $\Phi_{\tilde{h}_N\tilde{f}}$. So for its homotopy inverse $\mathcal{T}$ as in (\ref{phidowncpn}) we may use the continuation map $\Phi_{\tilde{f}\tilde{h}_N}\colon\thinspace\mathbf{CN}_{*}(\tilde{f};\xi)\to \mathbf{CN}_{*}(\tilde{h}_N;\xi)$.
We consider the diagram \begin{equation}\label{8term} \xymatrix{ \overline{\mathbf{CN}_{*}(-\tilde{f};-\xi)} \ar[rr]^{\epsilon} & & {}^{\vee}\!(\mathbf{CN}(\tilde{f};\xi)_{\uparrow}[n]) \\
\overline{\mathbf{CN}_{*}(\tilde{g}_N;-\xi)} \ar[r]_{\Phi_{\tilde{g}_N,-\tilde{h}_N}} \ar[u]^{\Phi_{\tilde{g}_N,-\tilde{f}}} & \overline{\mathbf{CN}_{*}(-\tilde{h}_N;-\xi)} \ar[r]_{\epsilon} & {}^{\vee}\!(\mathbf{CN}_*(\tilde{h}_N;\xi)[n]) \ar[u]_{{}^{\vee}\!\Phi_{\tilde{f}\tilde{h}_N}}
\\
\widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{\xi}]) \ar[r]_{\Phi_{\pi^*h_0,-\pi^*h_0}} \ar[u]^{f_1} & \widetilde{\mathbf{CM}}_{*}(-\pi^*h_0;\kappa[\Gamma_{\xi}]) \ar[r]_{\epsilon}\ar[u]^{f_2} & {}^{\vee}\!(\widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{\xi}])[n]) \ar[u]_{f_3}
}\end{equation}
Here the maps denoted $\Phi$ are continuation maps, the maps denoted $\epsilon$ are appropriate versions of the maps from Propositions \ref{negdualcover} and \ref{negdualnov}, and the maps $f_1,f_2,f_3$ are as follows:
\begin{itemize}
\item $f_1\colon\thinspace \widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{\xi}])\to \overline{\mathbf{CN}_{*}(\tilde{g}_N;-\xi)}$ is the composition of the trivial identification of $\widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{\xi}])$ with $\overline{\widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{-\xi}])}$, the coefficient extension of the latter to $\overline{\Lambda_{\uparrow}\otimes_{\kappa[\Gamma_{-\xi}]}\widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{-\xi}])}$, and the identification of $\overline{\Lambda_{\uparrow}\otimes_{\kappa[\Gamma_{-\xi}]}\widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{-\xi}])}$ with $\overline{\mathbf{CN}_{*}(\tilde{g}_N;-\xi)}$ that results from $\tilde{v}$ being gradient-like for both $\pi^*h_0$ and $\tilde{g}_N$.
\item Similarly, $f_2$ is the composition \begin{align*} & \widetilde{\mathbf{CM}}_{*}(-\pi^*h_0;\kappa[\Gamma_{\xi}])=\overline{\widetilde{\mathbf{CM}}_{*}(-\pi^*h_0;\kappa[\Gamma_{-\xi}])}\\ & \,\,\to \overline{\Lambda_{\uparrow}\otimes_{\kappa[\Gamma_{-\xi}]}\widetilde{\mathbf{CM}}_{*}(-\pi^*h_0;\kappa[\Gamma_{-\xi}])}=\overline{\mathbf{CN}_{*}(-\tilde{h}_N;-\xi)} \end{align*} where we use the vector field $-\tilde{v}$ (which is gradient-like for both $-\pi^*h_0$ and $-\tilde{h}_N$) to define $\widetilde{\mathbf{CM}}_{*}(-\pi^*h_0;\kappa[\Gamma_{\xi}])$ and $\overline{\mathbf{CN}_{*}(-\tilde{h}_N;-\xi)}$.
\item $f_3\colon\thinspace {}^{\vee}\!(\widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{\xi}])[n])\to {}^{\vee}\!(\mathbf{CN}_{*}(\tilde{h}_N;\xi)[n])$ is the composition of the second and third maps in (\ref{phidowncpn}) (applied with $\mathcal{C}=\widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{\xi}])$) followed by the dual of the identification of $\Lambda_{\uparrow}\otimes_{\kappa[\Gamma_{\xi}]}\widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{\xi}])$ with $\mathbf{CN}_{*}(\tilde{h}_N;\xi)$.
\end{itemize}
By definition, the composition on the left side of (\ref{8term}) is $\bar{\mathcal{L}}_{h_0\tilde{f}}$. Also, the composition on the bottom of (\ref{8term}) is the Poincar\'e duality map $\tilde{\mathcal{D}}$, so the bottom and right sides together give the map $\phi^{\downarrow}$ in (\ref{topsquare}). So to complete the proof it suffices to show that (\ref{8term}) is homotopy-commutative.\footnote{Strictly speaking (\ref{8term}) is underspecified in that we have not said what time-dependent vector fields $\mathbb{V}=\{v_s\}_{s\in\mathbb{R}}$ are to be used in the construction of the various continuation maps; however since the homotopy classes of these continuation maps are independent of such choices we may make whatever choices are convenient.}
The bottom left square of (\ref{8term}) is (homotopy)-commutative because we may use the same time-dependent vector fields $\mathbb{V}$ in the construction of $\Phi_{\tilde{g}_N,-\tilde{h}_N}$ as we use in the construction of $\Phi_{\pi^*h_0,-\pi^*h_0}$; this results in all relevant trajectory spaces being identical and hence in $\Phi_{\tilde{g}_N,-\tilde{h}_N}$ being the coefficient extension to $\Lambda_{\uparrow}$ of the $\kappa[\Gamma_{\xi}]$-module homomorphism $\Phi_{\pi^*h_0,-\pi^*h_0}$.
The bottom right square of (\ref{8term}) is also commutative. Indeed, if $\tilde{S}\subset \mathrm{Crit}(\pi^*h_0)$ contains one point in the fiber over each critical point of the original Morse function $h_0\colon\thinspace X\to \mathbb{R}$, then $\tilde{S}$ serves as a basis simultaneously for the graded $\kappa[\Gamma_{\xi}]$-module $\widetilde{\mathbf{CM}}_{*}(-\pi^*h_0;\kappa[\Gamma_{\xi}])$ and for the graded $\Lambda_{\uparrow}$-module $\mathbf{CN}_{*}(\tilde{h}_N;\xi)[n]$. It is easy to check that both the compositions $\epsilon\circ f_2$ and $f_3\circ \epsilon$ send the elements of this basis to their corresponding dual basis elements in ${}^{\vee}\!(\mathbf{CN}_{*}(\tilde{h}_N;\xi)[n])$.
Finally, we consider the top rectangle in (\ref{8term}). Now $\Phi_{\tilde{g}_N,-\tilde{h}_N}$ has homotopy inverse $\Phi_{-\tilde{h}_N,\tilde{g}_N}$, and $\Phi_{\tilde{g}_N,-\tilde{f}}\circ \Phi_{-\tilde{h}_N,\tilde{g}_N}$ is homotopic to $\Phi_{-\tilde{h}_N,-\tilde{f}}$. So the top rectangle in (\ref{8term}) is commutative up to homotopy if and only if \begin{equation}\label{lastsquare} \xymatrix{ \overline{\mathbf{CN}_{*}(-\tilde{f};-\xi)} \ar[r]^{\epsilon} & {}^{\vee}\!(\mathbf{CN}_{*}(\tilde{f};\xi)[n]) \\ \overline{\mathbf{CN}_{*}(-\tilde{h}_N;-\xi)}\ar[u]^{\Phi_{-\tilde{h}_N,-\tilde{f}}}\ar[r]_{\epsilon} & {}^{\vee}\!(\mathbf{CN}_{*}(\tilde{h}_N;\xi)[n]) \ar[u]_{{}^{\vee}\!\Phi_{\tilde{f}\tilde{h}_N}} } \end{equation} is commutative up to homotopy. Assume that $\Phi_{\tilde{f}\tilde{h}_N}$ is defined using (the lift to $\tilde{X}_{\xi}$ of) a time-dependent vector field $\mathbb{V}=\{v_s\}_{s\in \mathbb{R}}$; then in constructing $\Phi_{-\tilde{h}_N,-\tilde{f}}$ we may use the time-dependent vector field $\bar{\mathbb{V}}=\{-v_{-s}\}_{s\in \mathbb{R}}$. With this choice we shall show that the above diagram commutes. Indeed, it suffices to show that for each $\tilde{p}\in \mathrm{Crit}_k(-\tilde{h}_N)$ and each $\tilde{q}\in \mathrm{Crit}_{k}(-\tilde{f})=\mathrm{Crit}_{n-k}(\tilde{f})$ we have $\left(\epsilon(\Phi_{-\tilde{h}_N,-\tilde{f}}\tilde{p})\right)(\tilde{q})=(\epsilon(\tilde{p}))(\Phi_{\tilde{f}\tilde{h}_N}\tilde{q})$. But these quantities are equal to the counts of points in the (oriented, if $\mathrm{char}(\kappa)\neq 2$) zero-dimensional continuation trajectory spaces $\mathbf{N}^{\bar{\mathbb{V}}}(\tilde{p},\tilde{q})$ and $\mathbf{N}^{\mathbb{V}}(\tilde{q},\tilde{p})$, respectively. These continuation trajectory spaces are in bijection under the map which sends a path $\gamma$ to its time-reversal $\bar{\gamma}(s)=\gamma(-s)$; in the oriented case this bijection is orientation-preserving by Proposition \ref{wor}. (The sign in Proposition \ref{wor} is $1$ because $\mathrm{ind}_{\tilde{h}_N}(\tilde{p})=\mathrm{ind}_{\tilde{f}}(\tilde{q})$.) This completes the proof that (\ref{lastsquare}) commutes, and hence that (\ref{8term}) and (\ref{topsquare}) commute up to homotopy.
\end{proof}
\subsection{The isomorphism with interlevel persistence}\label{isosect}
As in Section \ref{novsect}, let $\xi$ be a class in the de Rham cohomology $H^1(X;\mathbb{R})$ of a compact connected smooth manifold $X$. This section is concerned with the special case in which the image $\Gamma=\Gamma_{\xi}$ of the integration homomorphism $\langle\xi,\cdot\rangle\colon\thinspace \pi_1(X)\to \mathbb{R}$ is discrete, and thus equal to $\lambda_0\mathbb{Z}$ for some $\lambda_0\geq 0$. If $\lambda_0=0$ then $\xi=0$ and the integration cover $\tilde{X}_{\xi}$ is equal to $X$, so that the functions $\tilde{f}$ as in Section \ref{novsect} are just real-valued Morse functions on $X$ and their Novikov complexes are the usual Morse complexes. Otherwise $\lambda_0>0$, $\tilde{X}_{\xi}$ is an infinite cyclic cover of $X$, and a Morse function $\tilde{f}\colon\thinspace \tilde{X}_{\xi}\to \mathbb{R}$ as in Section \ref{novsect} fits into a commutative diagram \[ \xymatrix{ \tilde{X}_{\xi}\ar[r]^{\tilde{f}} \ar[d] & \mathbb{R} \ar[d] \\ X \ar[r]^{f} & \mathbb{R}/\lambda_0\mathbb{Z}} \] for a circle-valued Morse function $f\colon\thinspace X\to \mathbb{R}/\lambda_0\mathbb{Z}$. Note that $\tilde{f}$ is proper, and its critical values form a discrete subset of $\mathbb{R}$ (more specifically, a finite union of cosets of the discrete group $\Gamma_{\xi}$).
Associated to $\tilde{f}\colon\thinspace \tilde{X}_{\xi}\to \mathbb{R}$ we have a chain-level Poincar\'e--Novikov structure $\mathcal{CN}(\tilde{f};\xi)$ as in Definition \ref{cndef} (which specializes to Definition \ref{cmdef} in case $\xi=0$, as the one-form $\theta_0$ can then be taken to be zero). By the general algebraic theory in Section \ref{clsec}, this yields a chain-level filtered matched pair $\mathcal{CP}(\mathcal{CN}(\tilde{f};\xi))$ and then, for each $k\in \mathbb{Z}$, a persistence module $\mathbb{H}_k(\mathcal{CP}(\mathcal{CN}(\tilde{f};\xi)))$ of $\kappa$-vector spaces over $\mathbb{R}^2$ given by (\ref{hkdef}). The discreteness of $\Gamma_{\xi}$ allows us to use constructions in \cite{Pa} to prove our main comparison result with $\kappa$-coefficient singular homology, Theorem \ref{introiso}, which we restate here:
\begin{theorem}\label{bigiso}
Assume that $\Gamma_{\xi}$ is discrete and let $\tilde{f}\colon\thinspace \tilde{X}_{\xi}\to \mathbb{R}$ be a Morse function such that $d\tilde{f}$ is the pullback to $\tilde{X}_{\xi}$ of a one-form representing the class $\xi\in H^1(X;\mathbb{R})$. Then, for $s+t\geq 0$, there are isomorphisms \[ \sigma_{s,t}\colon\thinspace \mathbb{H}_k(\mathcal{CP}(\mathcal{CN}(\tilde{f};\xi)))_{s,t}\to H_k(\tilde{f}^{-1}([-s,t]);\kappa) \] such that, when $s\leq s'$ and $t\leq t'$, we have a commutative diagram \[ \xymatrix{ \mathbb{H}_k(\mathcal{CP}(\mathcal{CN}(\tilde{f};\xi)))_{s,t} \ar[r] \ar[d]_{\sigma_{s,t}} & \mathbb{H}_k(\mathcal{CP}(\mathcal{CN}(\tilde{f};\xi)))_{s',t'} \ar[d]^{\sigma_{s',t'}} \\ H_k(\tilde{f}^{-1}([-s,t]);\kappa) \ar[r] & H_k(\tilde{f}^{-1}([-s',t']);\kappa) } \] where the horizontal maps are induced by inclusion.
\end{theorem}
In other words, the restriction of the persistence module $\mathbb{H}_k(\mathcal{CP}(\mathcal{CN}(\tilde{f};\xi)))$ to the index set $\{s+t\geq 0\}$ coincides with the persistence module given by singular homologies of interlevel sets (under the correspondence associating a pair $(s,t)$ with $-s\leq t$ to the interval $[-s,t]$). The rest of this section is devoted to the proof of this theorem.
In view of Propositions \ref{persinv} and \ref{downup}, the persistence module $\mathbb{H}_k(\mathcal{CP}(\mathcal{CN}(\tilde{f};\xi)))$ is isomorphic to the one given at $(s,t)\in \mathbb{R}^2$ by the $(k+1)$th homology of the mapping cone \begin{equation}\label{proofcone} \mathrm{Cone}\left(\xymatrix{ \mathbf{CN}(\tilde{f};\xi)^{\downarrow}_{\geq -s}\oplus \mathbf{CN}(\tilde{f};\xi)_{\uparrow}^{\leq t} \ar[rrr]^{-j^{\downarrow}\otimes\psi^{\downarrow}+j_{\uparrow}\otimes\psi_{\uparrow}} & & & \Lambda_{\updownarrow}\otimes_{\kappa[\Gamma_{\xi}]}\widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{\xi}]) }\right) \end{equation} where \[ \mathbf{CN}(\tilde{f};\xi)^{\downarrow}_{\geq -s}=\{c\in \mathbf{CN}(\tilde{f};\xi)^{\downarrow}|\ell_{\tilde{f}}^{\downarrow}(c)\geq -s\},\,\,\mathbf{CN}(\tilde{f};\xi)_{\uparrow}^{\leq t} =\{c\in \mathbf{CN}(\tilde{f};\xi)_{\uparrow}|\ell_{\uparrow}^{\tilde{f}}(c)\leq t\} \]
and the rest of the notation is as in Section \ref{pmdefsec}, with persistence module structure maps induced by inclusion of subcomplexes. So it suffices to prove the result with this persistence module (which we abbreviate $\mathbb{H}_k(\tilde{f})$) in the role of $\mathbb{H}_k(\mathcal{CP}(\mathcal{CN}(\tilde{f};\xi)))$.
We next argue that it suffices to construct the maps $\sigma_{s,t}$ of Theorem \ref{bigiso} in the case that $s+t>0$ and both $-s$ and $t$ are regular values of $\tilde{f}$. In this direction, note that due to the discreteness of the set of critical values of $\tilde{f}$, for any $(s,t)\in \mathbb{R}^2$ there is $\epsilon_{s,t}>0$ such that there are no critical values in the union of open intervals $(t,t+\epsilon_{s,t})\cup(-(s+\epsilon_{s,t}),-s)$. Then if $\delta,\epsilon\in [0,\epsilon_{s,t})$, the inclusions of subcomplexes $\mathbf{CN}(\tilde{f};\xi)^{\downarrow}_{\geq -s}\subset \mathbf{CN}(\tilde{f};\xi)^{\downarrow}_{\geq -(s+\delta)}$ and $\mathbf{CN}(\tilde{f};\xi)_{\uparrow}^{\leq t}\subset \mathbf{CN}(\tilde{f};\xi)_{\uparrow}^{\leq t+\epsilon}$ are equalities. Thus, for $0\leq \delta,\epsilon<\epsilon_{s,t}$, we have $\mathbb{H}_k(\tilde{f})_{s,t}=\mathbb{H}_k(\tilde{f})_{s+\delta,t+\epsilon}$, and the structure map $\mathbb{H}_k(\tilde{f})_{s,t}\to \mathbb{H}_k(\tilde{f})_{s+\delta,t+\epsilon}$ is the identity.
Similarly, again assuming that $\delta,\epsilon\in [0,\epsilon_{s,t})$, the fact that the only critical values of $\tilde{f}$ lying in the interval $[-s-\delta,t+\epsilon]$ in fact lie in $[-s,t]$ implies that the gradient flow of $\tilde{f}$ may be used to construct a deformation retraction of $\tilde{f}^{-1}([-s-\delta,t+\epsilon])$ to $\tilde{f}^{-1}([-s,t])$. (If neither $-s$ nor $t$ is a critical value of $\tilde{f}$ this follows straightforwardly as in \cite[Theorem 3.1]{Mil} applied first to $\tilde{f}$ and then to $-\tilde{f}$; if $-s$ and/or $t$ is a critical value one can instead use the argument in \cite[Remark 3.4]{Mil}.) Hence if $0\leq \delta,\epsilon<\epsilon_{s,t}$ the inclusion-induced map $H_k(\tilde{f}^{-1}([-s,t]);\kappa)\to H_k(\tilde{f}^{-1}([-(s+\delta),t+\epsilon]);\kappa)$ is an isomorphism. So supposing the maps $\sigma_{s,t}$ as in Theorem \ref{bigiso} have been constructed for all $(s,t)$ belonging to the set $\mathcal{U}=\{(s,t)\in \mathbb{R}^2\,|\,s+t>0 \mbox{ and } -s,t \mbox{ are regular values of } \tilde{f}\}$, we may uniquely extend the construction to all of $\{(s,t)|s+t\geq 0\}$ by, whenever $s+t\geq 0$, choosing arbitrary $\delta,\epsilon\in [0,\epsilon_{s,t})$ such that $(s+\delta,t+\epsilon)\in \mathcal{U}$ and requiring $\sigma_{s,t}$ to coincide with $\sigma_{s+\delta,t+\epsilon}$ under the equality $\mathbb{H}_k(\tilde{f})_{s,t}=\mathbb{H}_k(\tilde{f})_{s+\delta,t+\epsilon}$ and the inclusion-induced isomorphism $H_k(\tilde{f}^{-1}([-s,t]);\kappa)\cong H_k(\tilde{f}^{-1}([-(s+\delta),t+\epsilon]);\kappa)$; this prescription is clearly independent of the choice of $\delta,\epsilon$ and preserves the commutation of the relevant diagrams. This justifies restricting attention to those $(s,t)$ that belong to the subset $\mathcal{U}$ in what follows.
\subsubsection{The classical Morse case}\label{easyiso}
Let us first prove Theorem \ref{bigiso} in the simpler case that $\xi=0$. Thus $\Gamma_{\xi}=0$ and $\kappa[\Gamma_{\xi}],\Lambda_{\uparrow},\Lambda^{\downarrow}$, and $\Lambda_{\updownarrow}$ are all equal to $\kappa$, and the Novikov complexes simplify to Morse complexes of functions on the original compact manifold $X$. (In particular, the underlying chain complex of $\mathbf{CN}(\tilde{f};0)^{\downarrow}$ is the Morse complex $\mathbf{CM}_{*}(-\tilde{f})$, with $\mathbf{CN}(\tilde{f};0)^{\downarrow}_{\geq -s}$ equal to the subcomplex $\mathbf{CM}_{*}(-\tilde{f})^{\leq s}$ generated by critical points $p$ with $-\tilde{f}(p)\leq s$.) Referring to (\ref{proofcone}), the maps $j_{\uparrow},j^{\downarrow}$ are in this case just the identity on $\kappa$; the map $\psi_{\uparrow}\colon\thinspace \mathbf{CM}_{*}(\tilde{f})\to \mathbf{CM}_{*}(h_0)$ is a homotopy inverse to the continuation map $\Phi_{h_0\tilde{f}}$ and hence can be taken equal to the continuation map $\Phi_{\tilde{f}h_0}$; likewise, $\psi^{\downarrow}$ can be taken equal to $\Phi_{-\tilde{f},h_0}$.
For any space $Y$ let $\mathbf{S}_{*}(Y)$ denote the singular chain complex of $Y$ with coefficients in $\kappa$; for $Z\subset Y$ we let $\mathbf{S}_*(Y,Z)=\frac{\mathbf{S}_*(Y)}{\mathbf{S}_*(Z)}$ denote the relative singular chain complex. If $g\colon\thinspace W\to [a,b]$ is a Morse function on a compact manifold with boundary $W$ such that $\partial W$ is a disjoint union of regular level sets $\partial_0 W=g^{-1}(\{a\})$ and $\partial_1 W=g^{-1}(\{b\})$, one has (suppressing notation for gradient-like vector fields) a Morse complex $\mathbf{CM}_{*}(g)$ and a chain homotopy equivalence $\mathcal{E}(g)\colon\thinspace \mathbf{CM}_*(g)\to \mathbf{S}_*(W,\partial_0 W)$ from \cite[p. 218]{Pa}. In particular, we can set $g$ equal to our Morse function $\tilde{f}\colon\thinspace X\to \mathbb{R}$ to obtain a chain homotopy equivalence $\mathcal{E}(\tilde{f})\colon\thinspace \mathbf{CM}_*(\tilde{f})\to \mathbf{S}_*(X)$; also, for any regular value $t$ of $\tilde{f}$, denoting $X^{\leq t}=\tilde{f}^{-1}((-\infty,t])$ and $\tilde{f}^t=\tilde{f}|_{X^{\leq t}}$ we can observe that $\mathbf{CM}_{*}(\tilde{f}^{t})=\mathbf{CM}_{*}(\tilde{f})^{\leq t}$ to obtain a chain homotopy equivalence $\mathcal{E}(\tilde{f}^t)\colon\thinspace \mathbf{CM}_{*}(\tilde{f})^{\leq t}\to \mathbf{S}_*(X^{\leq t})$. It follows from arguments in \cite{Pa} that one has a homotopy-commutative diagram \begin{equation}\label{pajcomp} \xymatrix{ \mathbf{CM}_{*}(\tilde{f})^{\leq t} \ar[r] \ar[d]_{\mathcal{E}(\tilde{f}^t)} & \mathbf{CM}_{*}(\tilde{f}) \ar[d]^{\mathcal{E}(\tilde{f})} \\ \mathbf{S}_*(X^{\leq t}) \ar[r] & \mathbf{S}_*(X) } \end{equation} where the horizontal maps are inclusions.
Indeed, as in the proof of \cite[Proposition VI.2.4]{Pa}, the cellular filtration used to construct $\mathcal{E}(\tilde{f}^t)$ can be taken to be contained in that used to construct $\mathcal{E}(\tilde{f})$; the top arrow above can be interpreted as being induced by this inclusion of filtrations, and then the homotopy commutativity of the diagram follows from the functoriality statement in \cite[Corollary VI.3.8]{Pa}. Likewise, if $-s$ is a regular value for $\tilde{f}$, we set $X_{\geq -s}=\tilde{f}^{-1}([-s,\infty))$ and $\tilde{f}_{-s}=\tilde{f}|_{X_{\geq -s}}$, and then we have chain homotopy equivalences $\mathcal{E}(-\tilde{f})\colon\thinspace \mathbf{CM}_{*}(-\tilde{f})\to \mathbf{S}_*(X)$ and $\mathcal{E}(-\tilde{f}_{-s})\colon\thinspace \mathbf{CM}_{*}(-\tilde{f})^{\leq s}\to \mathbf{S}_{*}(X_{\geq -s})$, compatible up to homotopy with the inclusions. In view of the homotopy commutativity of diagrams such as
(\ref{contcomm}) (applied to $\psi_{\uparrow}=\Phi_{\tilde{f}h_0}$ and $\psi^{\downarrow}=\Phi_{-\tilde{f},h_0}$ defined on the full Morse complexes $\mathbf{CM}_{*}(\pm \tilde{f})$) it then follows that we have a homotopy-commutative diagram
\begin{equation}\label{tosing} \xymatrix{ \mathbf{CM}_{*}(-\tilde{f})^{\leq s}\oplus \mathbf{CM}_{*}(\tilde{f})^{\leq t} \ar[d]_{\mathcal{E}(-\tilde{f}_{-s})\oplus \mathcal{E}(\tilde{f}^{t})} \ar[rrr]^{-j^{\downarrow}\otimes\psi^{\downarrow}+j_{\uparrow}\otimes\psi_{\uparrow}} & & & \mathbf{CM}_{*}(h_0) \ar[d]^{\mathcal{E}(h_0)}
\\ \mathbf{S}_*(X_{\geq -s})\oplus \mathbf{S}_{*}(X^{\leq t}) \ar[rrr]^{-j_{-s}+j^{t}} & & & \mathbf{S}_*(X) } \end{equation} where $j_{-s},j^t$ are the maps on singular chains induced by inclusion of $X_{\geq -s},X^{\leq t}$ into $X$. Since our persistence module $\mathbb{H}_{k}(\tilde{f})$ is given at $(s,t)$ by the $(k+1)$th homology of the cone of the top horizontal map above, it follows from Proposition \ref{filtinv} that $\mathbb{H}_k(\tilde{f})_{s,t}$ can equally well be computed as the $(k+1)$th homology of the cone of the bottom horizontal map. This correspondence respects the inclusion induced maps associated to pairs $(s,t),(s',t')$ with $s\leq s',t\leq t'$, due to the fact that the Pajitnov chain homotopy equivalences $\mathcal{E}$ are likewise compatible up to homotopy with inclusions $X^{\leq t}\subset X^{\leq t'}$, $X_{\geq -s}\subset X_{\geq -s'}$. (This follows by the same argument that was used above to yield homotopy commutative diagrams such as (\ref{pajcomp}).)
So we now consider the cone of the map $-j_{-s}+j^{t}$ at the bottom of (\ref{tosing}). As noted just before the start of this subsection, for the purposes of the proof of the theorem it suffices to confine attention to pairs $(s,t)$ where $-s$ and $t$ are regular values of $\tilde{f}$ and $s+t>0$. In this case the interiors $\{x|\tilde{f}(x)<t\}$ and $\{x|\tilde{f}(x)>-s\}$ of $X^{\leq t}$ and $X_{\geq -s}$ cover $X$. So if we let $\mathbf{S}^{\circ}_{*}(X)$ denote the subcomplex of $\mathbf{S}_*(X)$ generated by $\mathbf{S}_*(X^{\leq t})$ and $\mathbf{S}_{*}(X_{\geq -s})$, the theorem of small chains (\cite[Proposition 2.21]{Ha}) shows that the inclusion $\mathbf{S}_{*}^{\circ}(X)\hookrightarrow \mathbf{S}_*(X)$ is a chain homotopy equivalence. Of course the map $-j_{-s}+j^{t}$ factors through $\mathbf{S}_{*}^{\circ}(X)$, so the inclusion gives a chain homotopy equivalence \begin{equation}\label{coneq} \mathrm{Cone}(-j_{-s}+j^t\colon\thinspace \mathbf{S}_*(X_{\geq -s})\oplus \mathbf{S}_{*}(X^{\leq t})\to \mathbf{S}_{*}^{\circ}(X)) \sim \mathrm{Cone}(-j_{-s}+j^t\colon\thinspace \mathbf{S}_*(X_{\geq -s})\oplus \mathbf{S}_{*}(X^{\leq t})\to \mathbf{S}_{*}(X)).\end{equation} Moreover, since $X^{\leq t}\cap X_{\geq -s}=\tilde{f}^{-1}([-s,t])$ we have a short exact sequence of chain complexes \begin{equation}\label{mvses} \xymatrix{ 0 \ar[r] & \mathbf{S}_*(\tilde{f}^{-1}([-s,t]))\ar[r]^{(i_{-s},i^{t})} & \mathbf{S}_*(X_{\geq -s})\oplus \mathbf{S}_{*}(X^{\leq t}) \ar[r]^<<<<{-j_{-s}+j^{t}} & \mathbf{S}_{*}^{\circ}(X)\ar[r] & 0 }\end{equation} where $i_{-s},i^t$ are inclusions.
Now it quite generally holds that if $\xymatrix{0 \ar[r]& A_* \ar[r]^{\alpha} & B_* \ar[r]^{\beta} & C_*\ar[r] & 0}$ is a short exact sequence of chain complexes then the map $\hat{\alpha}\colon\thinspace A_k\to B_k\oplus C_{k+1}=(\mathrm{Cone}(\beta)[1])_k$ defined by $\hat{\alpha}(a)=(\alpha a,0)$ is a quasi-isomorphism from $A_*$ to $\mathrm{Cone}(\beta)[1]$. Applying this (together with (\ref{coneq})) to (\ref{mvses}) gives an isomorphism \[ \mathfrak{p}_{s,t}\colon\thinspace H_k(\tilde{f}^{-1}([-s,t]);\kappa)\to H_{k+1}\left(\mathrm{Cone}\left(\xymatrix{ \mathbf{S}_*(X_{\geq -s})\oplus \mathbf{S}_{*}(X^{\leq t})\ar[r]^<<<<<{-j_{-s}+j^t} & \mathbf{S}_*(X)} \right)\right). \] As $s,t$ vary, the $\mathfrak{p}_{s,t}$ are clearly compatible with the obvious inclusion-induced maps for $s\leq s',t\leq t'$. Composing $\mathfrak{p}_{s,t}^{-1}$ with the isomorphism of cones induced by (\ref{tosing}) gives the isomorphisms $\sigma_{s,t}$ promised in Theorem \ref{bigiso}. This completes the proof of that theorem in the case that $\xi=0$.
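(For completeness, here is a sketch of the general fact about cones just used; we suppress sign conventions, which do not affect the homology-level conclusion. Any chain map $\beta\colon\thinspace B_*\to C_*$ gives rise to a long exact sequence $\cdots\to H_{k+1}(\mathrm{Cone}(\beta))\to H_k(B_*)\xrightarrow{\beta_*}H_k(C_*)\to H_k(\mathrm{Cone}(\beta))\to\cdots$, while the short exact sequence $\xymatrix{0 \ar[r]& A_* \ar[r]^{\alpha} & B_* \ar[r]^{\beta} & C_*\ar[r] & 0}$ gives $\cdots\to H_k(A_*)\xrightarrow{\alpha_*}H_k(B_*)\xrightarrow{\beta_*}H_k(C_*)\to H_{k-1}(A_*)\to\cdots$. One checks that $\hat{\alpha}$ induces a ladder between these two sequences which restricts to the identity on the terms $H_k(B_*)$ and $H_k(C_*)$ and commutes up to sign, so the five lemma shows that $\hat{\alpha}_*\colon\thinspace H_k(A_*)\to H_k(\mathrm{Cone}(\beta)[1])=H_{k+1}(\mathrm{Cone}(\beta))$ is an isomorphism.)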
\subsubsection{The $S^1$-valued case}
We now turn to the somewhat more complicated case of Theorem \ref{bigiso} in which the image of $\langle\xi,\cdot\rangle\colon\thinspace \pi_1(X)\to \mathbb{R}$ is a nontrivial discrete subgroup $\Gamma_{\xi}=\lambda_0\mathbb{Z}$ of $\mathbb{R}$, where $\lambda_0>0$. The Novikov field is then \[ \Lambda_{\uparrow}=\left\{\left.\sum_{m=m_0}^{\infty}a_mT^{\lambda_0m}\right|m_0\in \mathbb{Z}, a_m\in \kappa\right\}; \] substituting $x=T^{\lambda_0}$, we can identify $\Lambda_{\uparrow}$ with the Laurent series field $\kappa[x^{-1},x]]$; similarly $\Lambda$ is the Laurent polynomial ring $\kappa[x^{-1},x]$ and $\Lambda^{\downarrow}$ is $\kappa[[x^{-1},x]$ (the field of Laurent series in the variable $x^{-1}$).
For $t\in \mathbb{R}$ and $m\in \mathbb{N}$, the action of $x^{m}=T^{\lambda_0m}$ on the Novikov chain complex $\mathbf{CN}_{*}(\tilde{f};\xi)$ sends the filtered subcomplex $\mathbf{CN}(\tilde{f};\xi)^{\leq t}_{\uparrow}$ isomorphically to $\mathbf{CN}(\tilde{f};\xi)^{\leq (t-\lambda_0m)}_{\uparrow}$. If $t$ is a regular value of $\tilde{f}$ and $m>0$, the quotient complex $\mathcal{N}_{*}(t,m):=\frac{\mathbf{CN}(\tilde{f};\xi)^{\leq t}_{\uparrow}}{x^{m}\mathbf{CN}(\tilde{f};\xi)^{\leq t}_{\uparrow}}$ is just the same as the (relative) Morse complex of the restriction of $\tilde{f}$ to $\tilde{f}^{-1}([t-\lambda_0m,t])$, and so the construction from \cite[p. 218]{Pa} mentioned in the previous subsection gives a chain homotopy equivalence $\mathcal{N}_{*}(t,m)\to \mathbf{S}_{*}(\tilde{f}^{-1}([t-\lambda_0m,t]),\tilde{f}^{-1}(\{t-\lambda_0m\}))$ and hence, using excision, a chain homotopy equivalence $\mathcal{E}(\tilde{f},t,m)\colon\thinspace\mathcal{N}_{*}(t,m)\to \mathbf{S}_*(X^{\leq t},X^{\leq(t-\lambda_0m)})$. Here as in the previous subsection we in general write $X^{\leq t}=\tilde{f}^{-1}((-\infty,t])$ and (later) $X_{\geq -s}=\tilde{f}^{-1}([-s,\infty))$.
Now there is a straightforward identification of the filtered subcomplex $\mathbf{CN}(\tilde{f};\xi)^{\leq t}_{\uparrow}$ with the inverse limit \[ \varprojlim_{m\to\infty}\mathcal{N}_{*}(t,m)=\varprojlim_{m\to\infty}\frac{\mathbf{CN}(\tilde{f};\xi)^{\leq t}_{\uparrow}}{x^{m}\mathbf{CN}(\tilde{f};\xi)^{\leq t}_{\uparrow}}. \] While $\mathbf{CN}(\tilde{f};\xi)^{\leq t}_{\uparrow}$ is not invariant under the action of the full Novikov field $\Lambda_{\uparrow}=\kappa[x^{-1},x]]$, it is preserved by the subring $\kappa[[x]]$ (\emph{i.e.}, by the elements having $\nu_{\uparrow}\geq 0$). Likewise, since the action of $x$ on $\tilde{X}_{\xi}$ decreases the value of $\tilde{f}$, the singular chain complex $\mathbf{S}_{*}(X^{\leq t})$ is a complex of $\kappa[x]$-modules. The discussion leading up to \cite[Definition XI.4.5]{Pa} explains how to refine the construction of the map $\mathcal{E}(\tilde{f},t,m)$ to make it a chain homotopy equivalence of complexes of $\frac{\kappa[x]}{x^m\kappa[x]}$-modules (not just $\kappa$-modules), canonical up to homotopy. We have $\mathbf{S}_*(X^{\leq t},X^{\leq(t-\lambda_0m)})=\frac{\mathbf{S}_{*}(X^{\leq t})}{x^m\mathbf{S}_{*}(X^{\leq t})}$, and then $\kappa[[x]]\otimes_{\kappa[x]}\mathbf{S}_{*}(X^{\leq t})$ can be identified with $\varprojlim_{m\to\infty}\mathbf{S}_*(X^{\leq t},X^{\leq(t-\lambda_0m)})$.
In \cite[Proposition XI.6.3]{Pa}, Pajitnov constructs a chain homotopy equivalence of complexes of $\kappa[[x]]$-modules $\Psi^{t}_{\tilde{f}}\colon\thinspace \mathbf{CN}(\tilde{f};\xi)^{\leq t}_{\uparrow}\to \kappa[[x]]\otimes_{\kappa[x]}\mathbf{S}_{*}(X^{\leq t})$
with the property that each diagram \[ \xymatrix{ \mathbf{CN}(\tilde{f};\xi)^{\leq t}_{\uparrow}\ar[r]^<<<<<{\Psi^{t}_{\tilde{f}}} \ar[d] & \kappa[[x]]\otimes_{\kappa[x]} \mathbf{S}_{*}(X^{\leq t}) \ar[d] \\ \mathcal{N}_{*}(t,m)\ar[r]^<<<<<{\mathcal{E}(\tilde{f},t,m)} & \mathbf{S}_*(X^{\leq t},X^{\leq(t-\lambda_0m)}) } \] commutes up to homotopy, where the vertical maps are the projections from the respective inverse limits. Moreover the full Novikov complex $\mathbf{CN}_{*}(\tilde{f};\xi)$ can be recovered from $\mathbf{CN}(\tilde{f};\xi)^{\leq t}_{\uparrow}$ as $\kappa[x^{-1},x]]\otimes_{\kappa[[x]]}\mathbf{CN}(\tilde{f};\xi)^{\leq t}_{\uparrow}$, and \cite[Theorem XI.6.2]{Pa} gives a chain homotopy equivalence $\mathbb{P}_{\tilde{f}}\colon\thinspace \mathbf{CN}_{*}(\tilde{f};\xi)\to \kappa[x^{-1},x]]\otimes_{\kappa[[x]]}\mathbf{S}_{*}(\tilde{X}_{\xi})$ characterized (in view of the construction in \cite[Section XI.6.1]{Pa}) uniquely up to homotopy by the property that, for each regular value $t$, $\mathbb{P}_{\tilde{f}}$ is chain homotopic to the coefficient extension of $\Psi^{t}_{\tilde{f}}$ from $\kappa[[x]]$ to $\kappa[x^{-1},x]]$. (Note that in the notation of the rest of the paper the codomain of $\mathbb{P}_{\tilde{f}}$ would be written as $\Lambda_{\uparrow}\otimes_{\Lambda}\mathbf{S}_{*}(\tilde{X}_{\xi})$.) With these chain homotopy equivalences in hand, the proof of Theorem \ref{bigiso} in the present case follows a strategy similar to the one in the case considered in Section \ref{easyiso}.
Part of the proof will require identifying the map $-j^{\downarrow}\otimes\psi^{\downarrow}+j_{\uparrow}\otimes\psi_{\uparrow}$ in (\ref{proofcone}), up to homotopy equivalence, with a map between singular chain complexes. Let $h_0,\tilde{h}_N,\tilde{g}_N$ be as in Section \ref{assocpair}, with $v$ a Morse-Smale gradient-like vector field on $X$ whose lift $\tilde{v}$ to $\tilde{X}_{\xi}=\tilde{X}_{-\xi}$ is gradient-like for both $\tilde{g}_N$ and $\tilde{h}_N$. Thus, using $\tilde{v}$ to define the various Morse or Novikov complexes of $\pi^*h_0,\tilde{h}_N,\tilde{g}_N$, we have $\mathbf{CN}_{*}(\tilde{h}_N;\xi)=\Lambda_{\uparrow}\otimes_{\Lambda}\widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{\xi}])$ and $\mathbf{CN}_{*}(\tilde{g}_N;-\xi)=\Lambda_{\uparrow}\otimes_{\Lambda}\widetilde{\mathbf{CM}}_*(\pi^*h_0;\kappa[\Gamma_{-\xi}])$. We also recall from \cite[Theorem A.5]{Pa95} the chain homotopy equivalence of complexes of $\kappa[\Gamma_{\xi}]$-modules $\tilde{\mathcal{E}}(h_0)\colon\thinspace\widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{\xi}])\to \mathbf{S}_{*}(\tilde{X}_{\xi})$. Applying the conjugation functor of Section \ref{conjsect} (which does not change the underlying set-theoretic map), $\tilde{\mathcal{E}}(h_0)$ can equally well be regarded as a chain homotopy equivalence $\widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{-\xi}])\to \mathbf{S}_{*}(\tilde{X}_{-\xi})$ of complexes of $\kappa[\Gamma_{-\xi}]$-modules. (Here as before, while $\tilde{X}_{-\xi}$ refers to the same covering space as $\tilde{X}_{\xi}$, and $\Gamma_{-\xi}$ is the same group as $\Gamma_{\xi}$ with $\kappa[\Gamma_{\xi}]=\kappa[\Gamma_{-\xi}]=\Lambda$, we use different notation for them to reflect that the action of $\Gamma_{-\xi}$ on $\tilde{X}_{-\xi}$ is inverse to that of $\Gamma_{\xi}$ on $\tilde{X}_{\xi}$.)
\begin{lemma}\label{lmtonov}
The chain homotopy equivalence \[ \mathbb{P}_{\tilde{h}_N}\colon\thinspace \mathbf{CN}_{*}(\tilde{h}_N;\xi)=\Lambda_{\uparrow}\otimes_{\Lambda}\widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{\xi}])\to\Lambda_{\uparrow}\otimes_{\Lambda}\mathbf{S}_{*}(\tilde{X}_{\xi}) \] is homotopic to the coefficient extension to $\Lambda_{\uparrow}$ of $\tilde{\mathcal{E}}(h_0)\colon\thinspace\widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{\xi}])\to \mathbf{S}_{*}(\tilde{X}_{\xi})$. Similarly, $\mathbb{P}_{\tilde{g}_N}\colon\thinspace \mathbf{CN}_{*}(\tilde{g}_N;-\xi)\to \Lambda_{\uparrow}\otimes_{\Lambda}\mathbf{S}_{*}(\tilde{X}_{-\xi})$ is homotopic to the coefficient extension to $\Lambda_{\uparrow}$ of the homotopy equivalence $\tilde{\mathcal{E}}(h_0)\colon\thinspace\widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{-\xi}])\to \mathbf{S}_{*}(\tilde{X}_{-\xi})$.
\end{lemma}
\begin{proof}
The two cases are identical so we just prove the first statement, involving $h_0$ and $\tilde{h}_N$. This is perhaps most easily done by using the characterization of a homotopy inverse to $\mathbb{P}_{\tilde{h}_N}$ in \cite[Section XI.7.2]{Pa}, based on \cite{HL}. Namely, letting $\mathbf{S}_{*}^{\pitchfork}(\tilde{X}_{\xi})$ denote the subcomplex of $\mathbf{S}_{*}(\tilde{X}_{\xi})$ generated by $C^{\infty}$ simplices $\sigma\colon\thinspace \Delta^k\to \tilde{X}_{\xi}$ that are transverse to all ascending manifolds $\mathbf{A}^{\tilde{v}}(\tilde{p})$, one defines $\phi\colon\thinspace \Lambda_{\uparrow}\otimes_{\Lambda}\mathbf{S}_{k}^{\pitchfork}(\tilde{X}_{\xi})\to \mathbf{CN}_{k}(\tilde{h}_N;\xi)$ by setting $\phi(\sigma)=\sum_{\tilde{p}\in\mathrm{Crit}_{k}(\tilde{h}_N)}[\sigma:\tilde{p}]\tilde{p}$ where $[\sigma:\tilde{p}]$ is the intersection number of $\sigma$ with $\mathbf{A}^{\tilde{v}}(\tilde{p})$. This yields a chain map, and \cite[Theorem XI.7.5]{Pa} asserts that $\mathbb{P}_{\tilde{h}_N}\circ \phi$ is homotopic to the inclusion of the coefficient extension of $\mathbf{S}_{*}^{\pitchfork}(\tilde{X}_{\xi})$ into $\mathbf{S}_{*}(\tilde{X}_{\xi})$, which is itself a chain homotopy equivalence.
But in the present context where $\tilde{v}$ is the lift to $\tilde{X}_{\xi}$ of a Morse-Smale gradient-like vector field for $h_0$, intersections of $\sigma$ as above with ascending manifolds $\mathbf{A}^{\tilde{v}}(\tilde{p})$ in $\tilde{X}_{\xi}$ are in bijection with intersections of $\pi\circ\sigma$ with ascending manifolds $\mathbf{A}^{v}(p)$ in $X$, and there are only finitely many such intersections by compactness considerations. Thus the sums defining $\phi$ are finite, and so $\phi$ is the coefficient extension
to $\Lambda_{\uparrow}$ of a similarly defined map $\phi_0\colon\thinspace \mathbf{S}_{*}^{\pitchfork}(\tilde{X}_{\xi})\to \widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{\xi}])$. Then $\tilde{\mathcal{E}}(h_0)\circ \phi_0$ is homotopic to the inclusion of $\mathbf{S}_{*}^{\pitchfork}(\tilde{X}_{\xi})$ into $\mathbf{S}_{*}(\tilde{X}_{\xi})$ by a straightforward lifting of \cite[Theorem VI.4.12]{Pa} to covering spaces.
It follows that $1_{\Lambda_{\uparrow}}\otimes\tilde{\mathcal{E}}(h_0)$ and $\mathbb{P}_{\tilde{h}_N}$ are chain homotopy inverses to the same chain map (namely the coefficient extension to $\Lambda_{\uparrow}$ of the composition of $\phi_0$ with a chain homotopy inverse to the inclusion $\mathbf{S}_{*}^{\pitchfork}(\tilde{X}_{\xi})\hookrightarrow\mathbf{S}_{*}(\tilde{X}_{\xi})$); hence $1_{\Lambda_{\uparrow}}\otimes\tilde{\mathcal{E}}(h_0)$ and $\mathbb{P}_{\tilde{h}_N}$ are homotopic.
\end{proof}
For the rest of the proof let $-s,t$ be regular values of $\tilde{f}$ with $s+t>0$. In (\ref{proofcone}), the map $\psi_{\uparrow}$ is the composition of the inclusion $\mathbf{CN}(\tilde{f};\xi)^{\leq t}_{\uparrow}\hookrightarrow \mathbf{CN}_{*}(\tilde{f};\xi)$ with a chain homotopy inverse to the continuation map $\Phi_{\tilde{h}_N\tilde{f}}\colon\thinspace \Lambda_{\uparrow}\otimes_{\Lambda}\widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{\xi}])\to \mathbf{CN}_{*}(\tilde{f};\xi)$; for this chain homotopy inverse we may use the continuation map $\Phi_{\tilde{f}\tilde{h}_N}$. By the same argument as in (\ref{contcomm}), $\mathbb{P}_{\tilde{h}_N}\circ\Phi_{\tilde{f}\tilde{h}_N}$ is chain homotopic to $\mathbb{P}_{\tilde{f}}$, so by Lemma \ref{lmtonov} we have a homotopy-commutative diagram of complexes of $\kappa[[x]]$-modules \[ \xymatrix{ \mathbf{CN}(\tilde{f};\xi)^{\leq t}_{\uparrow} \ar[rr]^{\psi_{\uparrow}} \ar[dd]_{\Psi^{t}_{\tilde{f}}} \ar@{^{(}->}[rd] & & \Lambda_{\uparrow}\otimes_{\Lambda}\widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{\xi}]) \ar[dd]^{1_{\Lambda_{\uparrow}}\otimes \tilde{\mathcal{E}}(h_0)}\\ & \mathbf{CN}_{*}(\tilde{f};\xi) \ar[ru]^<<<<<{\Phi_{\tilde{f}\tilde{h}_N}} \ar[rd]_{\mathbb{P}_{\tilde{f}}} & \\ \kappa[[x]]\otimes_{\kappa[x]}\mathbf{S}_{*}(X^{\leq t}) \ar@{^{(}->}[rr] & & \Lambda_{\uparrow}\otimes_{\Lambda}\mathbf{S}_{*}(\tilde{X}_{\xi}) } \] where the vertical arrows are chain homotopy equivalences. Similarly, using the same reasoning with $\xi,\tilde{f},\tilde{h}_N$ replaced by $-\xi,-\tilde{f},\tilde{g}_N$, and using the conjugation functor and Proposition \ref{flipconj} to convert $\Lambda_{\uparrow}$-vector spaces to $\Lambda^{\downarrow}$-vector spaces, we have a homotopy-commutative diagram of complexes of $\kappa[[x^{-1}]]$-modules \[ \xymatrix{ \mathbf{CN}(\tilde{f};\xi)_{\geq -s}^{\downarrow} \ar[r]^<<<<<<<{\psi^{\downarrow}} \ar[d]_{\bar{\Psi}^{-s}_{-\tilde{f}}} & \Lambda^{\downarrow}\otimes_{\Lambda}\widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{\xi}]) \ar[d]^{1_{\Lambda^{\downarrow}}\otimes \tilde{\mathcal{E}}(h_0)} \\ \kappa[[x^{-1}]]\otimes_{\kappa[x^{-1}]}\mathbf{S}_{*}(X_{\geq -s}) \ar@{^{(}->}[r] & \Lambda^{\downarrow}\otimes_{\Lambda}\mathbf{S}_{*}(\tilde{X}_{\xi}) } \]
Taking the direct sum of the two diagrams above, and combining with the map $-j^{\downarrow}+j_{\uparrow}\colon\thinspace \Lambda^{\downarrow}\oplus\Lambda_{\uparrow}\to\Lambda_{\updownarrow}$, we see that we have a homotopy-commutative diagram (at the level of complexes of $\kappa$-vector spaces) \begin{equation}\label{novtosing} \xymatrix{ \mathbf{CN}(\tilde{f};\xi)_{\geq -s}^{\downarrow}\oplus \mathbf{CN}(\tilde{f};\xi)_{\uparrow}^{\leq t}\ar[d]_{\bar{\Psi}^{-s}_{-\tilde{f}}\oplus \Psi^{t}_{\tilde{f}}} \ar[rrr]^{-j^{\downarrow}\otimes\psi^{\downarrow}+j_{\uparrow}\otimes\psi_{\uparrow}} & & & \Lambda_{\updownarrow}\otimes_{\Lambda}\widetilde{\mathbf{CM}}_{*}(\pi^*h_0;\kappa[\Gamma_{\xi}]) \ar[d]^{1_{\Lambda_{\updownarrow}}\otimes \tilde{\mathcal{E}}(h_0)} \\ \left(\kappa[[x^{-1}]]\otimes_{\kappa[x^{-1}]}\mathbf{S}_{*}(X_{\geq -s})\right)\oplus\left(\kappa[[x]]\otimes_{\kappa[x]}\mathbf{S}_{*}(X^{\leq t})\right) \ar[rrr]^>>>>>>>>>>>>>>>{-j^{\downarrow}\otimes\iota_{X_{\geq -s}}+j_{\uparrow}\otimes\iota_{X^{\leq t}}} & & & \Lambda_{\updownarrow}\otimes_{\Lambda}\mathbf{S}_{*}(\tilde{X}_{\xi}) } \end{equation}
where $\iota_{X_{\geq -s}},\iota_{X^{\leq t}}$ are the inclusions and the vertical maps are chain homotopy equivalences. This induces isomorphisms between the homologies of the cones of the horizontal maps; as the pair $(s,t)$ varies these homology isomorphisms are compatible with the inclusion-induced maps for $s\leq s',t\leq t'$ due to \cite[Proposition XI.4.6]{Pa}. So it remains only to identify $H_{k+1}(\mathrm{Cone}(-j^{\downarrow}\otimes\iota_{X_{\geq -s}}+j_{\uparrow}\otimes\iota_{X^{\leq t}}))$ with $H_k(\tilde{f}^{-1}([-s,t]);\kappa)$, compatibly with inclusions $[-s,t]\subset [-s',t']$, continuing to assume that $s+t>0$. Choose any number $\delta$ with $0<\delta<s+t$ and for any subset $Y$ of $\tilde{X}_{\xi}$ let $\mathbf{S}_{*}^{\delta}(Y)$ denote the subcomplex of $\mathbf{S}_{*}(Y)$ generated by simplices $\sigma\colon\thinspace \Delta^k\to Y$ with $\max(\tilde{f}\circ\sigma)-\min(\tilde{f}\circ \sigma)\leq \delta$. By \cite[Proposition 2.21]{Ha} applied to the cover $\{\tilde{f}^{-1}([a,a+\delta])|a\in \mathbb{R}\}$ of $\tilde{X}_{\xi}$, the inclusion $\mathbf{S}_{*}^{\delta}(Y)\to \mathbf{S}_{*}(Y)$ is a homotopy equivalence, so we may instead consider the cone of \begin{equation}\label{smallcone}
-j^{\downarrow}\otimes\iota_{X_{\geq -s}}+j_{\uparrow}\otimes\iota_{X^{\leq t}}\colon\thinspace \left(\kappa[[x^{-1}]]\otimes_{\kappa[x^{-1}]}\mathbf{S}_{*}^{\delta}(X_{\geq -s})\right)\oplus \left(\kappa[[x]]\otimes_{\kappa[x]}\mathbf{S}_{*}^{\delta}(X^{\leq t})\right)\to \Lambda_{\updownarrow}\otimes_{\Lambda}\mathbf{S}^{\delta}_{*}(\tilde{X}_{\xi}).\end{equation} Suppose that $\lambda=\sum_{m\in \mathbb{Z}}a_mx^m$ is a general element of $\Lambda_{\updownarrow}$ (with $a_m\in\kappa$) and $\sigma\colon\thinspace \Delta^k\to \tilde{X}_{\xi}$ is a generator of $ \mathbf{S}_{*}^{\delta}(\tilde{X}_{\xi})$. If $m_0\in \mathbb{Z}$ is the minimal integer such that $\max(\tilde{f}\circ \sigma)-m_0\lambda_0\leq t$, then $x^{m_0}\sigma$ has image contained in $X^{\leq t}$, while
the choice of $\delta$ implies that $x^{m_0-1}\sigma$ has image contained in $X_{\geq -s}$. It follows readily from this that $\lambda\otimes\sigma$ lies in the image of (\ref{smallcone}); thus this map is surjective, and it is easy to see that its kernel is equal to the image of $\mathbf{S}_{*}^{\delta}(\tilde{f}^{-1}([-s,t]))$ under the diagonal embedding. So just as in Section \ref{easyiso} we obtain an isomorphism between the $(k+1)$th homology of the cone of (\ref{smallcone}) and $H_k(\mathbf{S}_{*}^{\delta}(\tilde{f}^{-1}([-s,t])))$, which in turn is isomorphic to $H_k(\tilde{f}^{-1}([-s,t]);\kappa)$ by another appeal to the theorem of small chains. If $s'\geq s$ and $t'\geq t$ the resulting isomorphisms are compatible with the obvious inclusions, since we can use the same value of $\delta$ for both pairs $(s,t),(s',t')$. This completes the proof of Theorem \ref{bigiso}.
|
{
"arxiv_id": "2302.14352",
"language": "en",
"timestamp": "2023-03-01T02:10:37",
"url": "https://arxiv.org/abs/2302.14352",
"yymm": "2302"
} | \section{Introduction}
Let $\mathcal{S}^n$ be the space of $n\times n$ real symmetric matrices and $A, B\in \mathcal{S}^n. $ In this paper we compute the set
\begin{align*}
I_{\succeq}(A,B)=\{\mu\in\Bbb R: A+\mu B\succeq0\}
\end{align*}
and then apply it to solving the following optimization problem
\begin{equation}\label{original}
\hspace*{0.6cm}
\begin{array}{lll}
\lambda^*=&\min &f(x)=x^TAx+2a^Tx\\
&{\rm s.t.} & g(x)=x^TBx+2b^Tx+ c\le 0,
\end{array}
\end{equation}
where $a, b\in \Bbb R^{n}$ are $n$-dimensional vectors and $c\in\Bbb R.$ This problem is referred to as the generalized trust region subproblem (GTRS) since it contains the
trust region subproblem (TRS), obtained when $B=I$ is the identity matrix, $b=0$ and $c=-1,$ as a special case.
Our study is inspired
by the following results obtained by Mor\'{e} \cite{MJ}: i) a vector $x^*\in\Bbb R^n$ is a global optimizer of the
GTRS \eqref{original} if and only if $g(x^*)\le 0$ and there exists $\mu^*\in I_{\succeq}(A,B)\cap [0,\infty)$
such that
\begin{align}
\nabla f(x^*)+\mu^*\nabla g(x^*)=0,\label{mu1}\\
\mu^*g(x^*)=0 \label{mu2};
\end{align}
and ii) if the set defined by $I_{\succ}(A,B)=\{\mu\in\Bbb R: A+\mu B\succ0\}$ is nonempty then
it is an open interval and
the function $\varphi(\mu):=g[x(\mu)],$ where $x(\mu)$ is the unique solution of \eqref{mu1}, is strictly decreasing on $I_{\succ}(A,B)$
unless $x(\mu)$ is constant on $ I_{\succ}(A,B).$
These two results suggest that $\mu^*$ can be found efficiently whenever $I_{\succeq}(A,B)$ is computed.
Unfortunately, results on $I_{\succeq}(A,B)$ in the literature are
very limited. The earliest study on computing $I_{\succeq}(A,B)$ was, perhaps, by
Caron and Gould \cite{Caron}, where the authors suppose that $A$ is positive semidefinite and $B$ is of rank one or two.
Song \cite{Song} computed $I_{\succeq}(A,B)$ under the assumption that $\mathcal{R}(B)\subset\mathcal{R}(A),$
where $\mathcal{R}(\cdot)$ denotes the range of a matrix. Mor\'{e} \cite{MJ} and Adachi et al. \cite{Adachi}
computed $I_{\succeq}(A,B)$ under the assumption that $I_{\succ}(A,B)\ne\emptyset.$
Beyond these conditions, computing $I_{\succeq}(A,B)$
has remained an open question. However, if $I_{\succ}(A,B)\ne \emptyset,$ then $I_{\succeq}(A,B)$ is the closure of
$I_{\succ}(A,B)$ and
computing $I_{\succeq}(A,B)$ is much easier since the matrices $A,B$ are then simultaneously diagonalizable via congruence (SDC), i.e.,
there exists a nonsingular matrix $P$ such that $P^TAP$ and $P^TBP$ are both diagonal; see \cite{Horn}.
Mor\'{e} \cite{MJ} proposed an algorithm for finding $\mu^*$ under the assumption that $I_{\succ}(A,B)$
is nonempty, while the case $I_{\succ}(A,B)=\emptyset$ is referred to as the {\em hard-case}.
The assumption $I_{\succ}(A,B)\ne\emptyset$ is so restrictive that it confines the GTRS \eqref{original} to a
very special class that always attains a unique optimal solution. Moreover, if
$I_{\succ}(A,B)\ne\emptyset$ then $A, B$ are SDC
and the GTRS \eqref{original} can be equivalently reformulated as a convex second-order cone program (SOCP) \cite{Ben-Tal}.
This result was later
extended to show that any bounded GTRS can be reformulated as an SOCP even when the SDC condition fails \cite{Jiang1}.
The assumption $I_{\succ}(A,B)\ne\emptyset$ also makes it possible
to solve the GTRS \eqref{original} by computing only one eigenpair of a generalized eigenvalue problem of dimension $(2n+1)\times(2n+1)$ \cite{Adachi}.
A more recent result by Jiang and Li solves the GTRS \eqref{original} even in linear time $O(n),$ but under the stricter condition that
there exist $\mu\in(0,1]$ and $\xi>0$ with $\lambda_{\rm min}(B)\le -\xi$ such that $\mu A+(1-\mu)B\succeq \xi I,$
where $\lambda_{\rm min}(B)$ is the smallest eigenvalue of $B$ \cite{Jiang2}. Hsia et al. \cite{Hsia}
solve the GTRS \eqref{original} when $A, B$ are SDC, while the case when $A, B$ are not SDC is still referred to as the {\em hard-case}.
In this paper, we show that the interval $I_{\succeq}(A,B)$ can be computed by solving only a generalized eigenvalue problem of dimension $n\times n,$ not only when $I_{\succ}(A,B)\ne \emptyset$ but also when $I_{\succ}(A,B)=\emptyset.$
Specifically, we show that if $A, B$ are not SDC then $I_{\succeq}(A,B)$
is either empty or a single point: $I_{\succeq}(A,B)=\{\mu\}.$ Such a value $\mu$ can be found efficiently,
and one can then check whether $\mu^*=\mu.$
If $A, B$ are SDC
and $B$ is nonsingular, then the matrix pencil
$A+\mu B$ can always be decomposed into the form
\begin{align*}
P^T(A+\mu B)P=\texttt{diag}((\lambda_1+\mu)B_1,(\lambda_2+\mu)B_2,\ldots,(\lambda_k+\mu)B_k),
\end{align*}
where $B_i$ are $m_i\times m_i$ symmetric matrices and $\lambda_1>\lambda_2>\ldots>\lambda_k$ are the distinct
eigenvalues of the matrix $B^{-1}A.$ The set
$I_{\succeq}(A,B)$ can then be computed quickly since $A+\mu B\succeq0$ is equivalent to $(\lambda_i+\mu)B_i\succeq0$ for
all $i=1,2,\ldots,k.$ If $B$ is singular and $A$ is nonsingular, we can decompose
$B, A$ into the forms $U^TBU=\texttt{diag}(B_1,0),$
$U^TAU=\texttt{diag}(A_1,A_3),$
where $B_1, A_1$ are symmetric of the same size and $B_1$ is nonsingular;
if $A_3\succ0$ then $I_{\succeq}(A,B)=I_{\succeq}(A_1,B_1),$ otherwise
$I_{\succeq}(A,B)= \emptyset.$
Especially, if $I_{\succeq}(A,B)$ has more than one point, then $I_{\succ}(A,B)\ne \emptyset$ and $I_{\succ}(A,B)={\rm int}(I_{\succeq}(A,B)),$
please see Corollary \ref{cor1} below.
If $A, B$ are SDC and both are singular, then there always exists a nonsingular
matrix $U$ such that
$A, B$ are decomposed into one of the following forms:
\begin{align}\label{l1}
U^TBU=\texttt{diag}(B_1, 0) \text{ and }
U^TAU=\texttt{diag}(A_1, 0)
\end{align} or
\begin{align}\label{l2}
U^TBU=\texttt{diag}(B_1, 0) \text{ and }
U^TAU=\texttt{diag}(A_1, A_4),
\end{align}
where $A_4$ is diagonal and, in both cases, $A_1, B_1$ are diagonal and of the same size, and $B_1$ is nonsingular.
If $A, B$ are decomposed to \eqref{l2} and $A_4$ has even one negative diagonal element, then
$I_{\succeq}(A,B)= \emptyset.$
Otherwise, in both cases, $I_{\succeq}(A,B)=I_{\succeq}(A_1,B_1)$ with $B_1$ nonsingular.
To apply $I_{\succeq}(A,B)$ to solving the GTRS \eqref{original}, we consider the set of candidate Lagrange multipliers
$I=I_{\succeq}(A,B)\cap[0,\infty).$
\begin{enumerate}
\item If $I=\emptyset,$ \eqref{original} has no optimal solution since it is unbounded from below;
\item If $I$ is a singleton, $I=\{\mu\},$ we need only solve linear equations to check whether $\mu^*=\mu;$
\item If $I$ is an interval and $I_{\succ}(A,B)\ne \emptyset,$ then there is a unique optimal Lagrange multiplier $\mu^* \in I.$ If
$\mu^*$ is not an endpoint of $I,$ a bisection algorithm can find $\mu^*$ in the interior of $I$
since the function $\varphi(\mu)$
is strictly decreasing on $I.$
\item If $I$ is an interval and $I_{\succ}(A,B)= \emptyset,$ then $A, B$ are converted to the form \eqref{l1} or \eqref{l2}
with $I_{\succ}(A_1,B_1)\ne \emptyset.$ In this case, the GTRS \eqref{original} is either unbounded below or reduces
to a GTRS in $p$ variables with matrices $A_1, B_1$ such that $I_{\succ}(A_1,B_1)\ne\emptyset.$
\end{enumerate}
\section{Computing the positive semidefinite interval $I_{\succeq}(A,B)$}\label{sec1}
In this section, we show how to compute $I_{\succeq}(A,B)$ in two separate cases: when $A, B$ are SDC and when $A, B$ are not SDC.
For the former case, we first need the following result.
\begin{lem}[\cite{Greub}]\label{3}
Let $A, B\in \mathcal{S}^n $ and $B$ be nonsingular.
Then $A, B$ are SDC if and only if there is a nonsingular real matrix $P$ such that $P^{-1}B^{-1}AP$ is a real diagonal matrix.
\end{lem}
Now, if $A, B$ are SDC and $B$ is nonsingular, by Lemma \ref{3}, there is a nonsingular matrix $P$ such that
$$ J:= P^{-1} B^{-1}A P =\texttt{diag}(\lambda_1I_{m_1}, \ldots, \lambda_k I_{m_k}),$$
is a diagonal matrix, where $\lambda_1, \lambda_2, \ldots, \lambda_k$
are the $k$ distinct eigenvalues of $B^{-1}A,$ $I_{m_t}$ is the identity matrix of size $m_t \times m_t$
and $m_1+m_2+\ldots+m_k=n.$
We can suppose without loss of generality that
$\lambda_1>\lambda_2>\ldots>\lambda_k.$
Using $J$ together with the following result we show how to simultaneously decompose $A$ and $B$ into block diagonals.
\begin{lem}[\cite{Uhlig76}]\label{bd1}
Let $K $ be a Jordan matrix of form
$$K=\texttt{diag}(C(\lambda_{1}), C(\lambda_{2}), \cdots, C(\lambda_{k})),$$
where $C(\lambda_{i})=\texttt{diag}(K_{i_{1}}(\lambda_{i}), K_{i_{2}}(\lambda_{i}),\cdots,K_{i_{t_i}}(\lambda_{i})), i=1,2,\ldots, k, $
are Jordan blocks associated with eigenvalue $\lambda_{i}$ and
\par
\[
K_{i_j}(\lambda_i)=
\left(
\begin{array}{cccccc}
\lambda_{i} & 1 & 0 &\cdots &\cdots & 0\\
0 & \lambda_{i} & 1&\cdots &\cdots & 0\\
\cdots &\cdots &\cdots&\cdots &\cdots &\cdots\\
\cdots &\cdots &\cdots&\cdots &\cdots &\cdots\\
0 & 0 &\cdots &\cdots &\lambda_{i} & 1\\
0 & 0 &\cdots &\cdots & 0 &\lambda_{i}
\end{array}
\right)_{(ij)}, \hspace{1cm} j=1, 2, \cdots, t_i.
\]
For a symmetric matrix $S,$ if $SK$ is symmetric, then $S$ is block diagonal and $S=\texttt{diag}(S_{1}, S_{2},\cdots, S_{k})$ with
$${\rm dim} S_{i}={\rm dim }C(\lambda_{i}).$$
\end{lem}
Observe that $(P^TBP)J=P^TAP$ and $P^TAP$ is symmetric. Lemma \ref{bd1} then indicates that $P^TBP$ is a block diagonal matrix with the same partition as $J.$ That is,
\begin{align}\label{ct1}
P^TBP=\texttt{diag}(B_1,B_2\ldots,B_k),
\end{align}
where $B_t$ is a real symmetric matrix of size $m_t \times m_t$ for every
$t=1,2, \ldots, k.$
We now have
\begin{align}\label{ct2}
P^TAP=(P^TBP)J=\texttt{diag}(\lambda_1B_1,\lambda_2B_2,\ldots,\lambda_kB_k).
\end{align}
Both \eqref{ct1} and \eqref{ct2} show that $A, B$ are now decomposed into the same block structure and the matrix
pencil $A+\mu B$ now becomes
\begin{align}\label{ct3} P^T(A+\mu B)P=\texttt{diag}((\lambda_1+\mu)B_1,(\lambda_2+\mu)B_2,\ldots,(\lambda_k+\mu)B_k).
\end{align}
The requirement $A+\mu B \succeq 0$ is then equivalent to
\begin{equation}\label{pt1}
\begin{split}
(\lambda_i+\mu)B_i \succeq 0, i=1,2,\ldots,k.
\end{split}
\end{equation}
Using \eqref{pt1} we compute $I_{\succeq}(A,B)$ as follows.
\begin{thm}\label{dl1}
Suppose $A, B\in \mathcal{S}^n$ are SDC and $B$ is nonsingular.
\begin{enumerate}
\item If $B \succ 0$ then $I_{\succeq}(A,B)=[-\lambda_k, + \infty);$
\item If $B \prec 0$ then $I_{\succeq}(A,B)=(- \infty, -\lambda_1];$
\item If $B$ is indefinite then
\begin{itemize}
\item[(i)] if $B_1, B_2,\ldots,B_{t} \succ 0$ and $B_{t+1}, B_{t+2},\ldots, B_k \prec 0$ for some $t \in \{1,2,\ldots,k\},$ then $I_{\succeq}(A,B)=[ -\lambda_t,-\lambda_{t+1}].$
\item [(ii)] if $B_1, B_2,\ldots, B_{t-1} \succ 0,$ $B_t$ is indefinite and $B_{t+1}, B_{t+2},\ldots, B_k \prec 0,$ then $I_{\succeq}(A,B)=\{-\lambda_t\},$
\item [(iii)] in other cases, that is either $B_i, B_j$ are indefinite for some $i\ne j$ or $B_i \prec 0, B_j \succ 0$ for some $i<j$ or
$B_i $ is indefinite and $B_j \succ 0$ for some $i<j,$ then $I_{\succeq}(A,B)=\emptyset.$
\end{itemize}
\end{enumerate}
\end{thm}
\begin{proof}
\begin{enumerate}
\item If $B \succ 0$ then $B_i \succ 0 ~ \forall i=1,2,\ldots,k.$ The inequality \eqref{pt1} is then equivalent to
$\lambda_i+\mu \geq 0~ \forall i=1,2,\ldots, k.$
Since $\lambda_1>\lambda_2>\ldots>\lambda_k,$ we need only $\mu \geq -\lambda_k.$
This shows $I_{\succeq}(A,B)=[-\lambda_k, + \infty).$
\item Similarly, if $B \prec 0$ then $B_i \prec 0~ \forall i=1,2,\ldots,k.$ The inequality \eqref{pt1} is then equivalent to
$\lambda_i+\mu \leq 0~ \forall i=1,2,\ldots, k.$ Then $I_{\succeq}(A,B)=(- \infty, -\lambda_1].$
\item The case $B$ is indefinite:
\begin{itemize}
\item [(i)] if $B_1, B_2,\ldots, B_{t} \succ 0$ and $B_{t+1}, B_{t+2},\ldots, B_k \prec 0$ for some $t \in \{1,2,\ldots,k\},$
the inequality \eqref{pt1} then implies
$$ \begin{cases} \lambda_i+\mu \geq 0, \forall i=1,2,\ldots, t,\\
\lambda_i+\mu \leq 0, \forall i=t+1,\ldots,k.
\end{cases}
$$
Since $\lambda_1>\lambda_2>\ldots>\lambda_k,$ we have $I_{\succeq}(A,B)=[ -\lambda_t,-\lambda_{t+1}].$
\item[(ii)] if $B_1, B_2,\ldots, B_{t-1} \succ 0,$ $B_t$ is indefinite and $B_{t+1},B_{t+2},\ldots, B_k \prec 0$ for some
$t \in \{1,2,\ldots,k\},$ then the inequality \eqref{pt1} implies
$$ \begin{cases} \lambda_i+\mu \geq 0, \forall i=1,2,\ldots, t-1\\
\lambda_t+\mu = 0\\
\lambda_i+\mu \leq 0, \forall i=t+1,\ldots,k.
\end{cases}
$$
Since $\lambda_1>\lambda_2>\ldots>\lambda_k,$ we have $I_{\succeq}(A,B)=\{-\lambda_t\}.$
\item [(iii)] if $B_i, B_j$ are indefinite for some $i\ne j,$ \eqref{pt1} implies $\lambda_i+\mu=0$ and $\lambda_j+\mu=0.$ This cannot happen since
$\lambda_i\ne \lambda_j.$ If $B_i \prec 0$ and $ B_j \succ 0$ for some $i<j,$ then
$$ \begin{cases}
\lambda_i+\mu \leq 0\\
\lambda_j+\mu \geq 0
\end{cases}
$$
implying $-\lambda_j \leq \mu \leq -\lambda_i.$ This also cannot happen since
$\lambda_i>\lambda_j.$ Finally, suppose $B_i$ is indefinite and $B_j \succ 0$ for some $i<j.$ Again, by \eqref{pt1},
$$ \begin{cases}\label{pt2} \lambda_i+\mu = 0\\
\lambda_j+\mu \geq 0
\end{cases}
$$
implying $ \lambda_i \leq \lambda_j.$ This also cannot happen.
So $I_{\succeq}(A,B)=\emptyset$ in all three cases.
\end{itemize}
\end{enumerate}
\end{proof}
The proof of Theorem \ref{dl1} indicates that if $A,B$ are SDC, $B$ is nonsingular and $I_{\succeq}(A,B)$ is an interval,
then $I_{\succ}(A,B)$ is nonempty. In that case we have
$I_{\succ}(A,B)={\rm int}(I_{\succeq}(A,B));$ see \cite{MJ}.
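For the generic case where $B^{-1}A$ has $n$ distinct real eigenvalues, so that every block $B_i$ in \eqref{ct1} is $1\times1,$ Theorem \ref{dl1} can be turned into a short numerical routine. The following Python sketch is only a minimal illustration under that genericity assumption, not a full implementation of the block case.
\begin{verbatim}
import numpy as np

def psd_interval(A, B, tol=1e-10):
    # Sketch of Theorem dl1, assuming B nonsingular and B^{-1}A has
    # n distinct real eigenvalues, so every block B_i is a scalar.
    lam, P = np.linalg.eig(np.linalg.solve(B, A))
    lam, P = lam.real, P.real                   # assumed real, distinct
    order = np.argsort(-lam)                    # lambda_1 > ... > lambda_n
    lam, P = lam[order], P[:, order]
    beta = np.einsum('ij,jk,ki->i', P.T, B, P)  # diagonal of P^T B P
    if np.all(beta > tol):                      # B > 0: [-lambda_n, +inf)
        return (-lam[-1], np.inf)
    if np.all(beta < -tol):                     # B < 0: (-inf, -lambda_1]
        return (-np.inf, -lam[0])
    t = int(np.sum(beta > 0))                   # indefinite B
    if np.all(beta[:t] > tol) and np.all(beta[t:] < -tol):
        return (-lam[t - 1], -lam[t])           # [-lambda_t, -lambda_{t+1}]
    return None                                 # I_psd(A,B) is empty
\end{verbatim}
For distinct eigenvalues, $P^TBP$ is automatically diagonal (by the argument following Lemma \ref{bd1}), so the signs of its diagonal entries decide which case of Theorem \ref{dl1} applies.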
If $B$ is singular and $A$ is nonsingular, we have the following result.
\begin{thm}\label{dl2}
Suppose $A, B\in \mathcal{S}^n$ are SDC, $B$ is singular and $A$ is nonsingular. Then
\begin{enumerate}
\item[(i)] there always exists a nonsingular matrix $U$ such that
$$U^TBU=\texttt{diag}(B_1,0),$$
$$U^TAU=\texttt{diag}(A_1,A_3),$$
where $B_1, A_1$ are symmetric of the same size, $B_1$ is nonsingular;
\item[(ii)] if $A_3\succ0$ then $I_{\succeq}(A,B)=I_{\succeq}(A_1,B_1).$ Otherwise,
$I_{\succeq}(A,B)= \emptyset.$
\end{enumerate}
\end{thm}
\begin{proof} (i) Since $B$ is symmetric and singular, there is an orthogonal matrix $Q_1$ that puts $B$ into the form
$$ \hat{B} =Q_1^TBQ_1=\texttt{diag}(B_1, 0)$$
such that $B_1$ is a nonsingular symmetric matrix of size $p\times p,$ where $p={\rm rank}(B).$
Let $\hat{A} :=Q_1^TAQ_1.$ Since $A, B$ are SDC, $\hat{A}, \hat{B}$ are SDC too (the converse also holds true).
We can write $\hat{A}$ in the following form
\begin{align}\label{b}
\hat{A}=Q_1^TAQ_1=\left(\begin{matrix}M_1&M_2\\M_2^T&M_3\end{matrix}\right)
\end{align} such that
$M_1$ is a symmetric matrix of size $ p\times p,$ $M_2$ is a $p \times(n-p)$ matrix,
$M_3$ is symmetric of size $(n-p)\times(n-p) $ and, importantly, $M_3\ne0.$ Indeed,
if $M_3=0$ then $\hat{A}=Q_1^TAQ_1=\left(\begin{matrix}M_1&M_2\\ M_2^T&0\end{matrix}\right).$
Then we can choose a nonsingular matrix $H$ written in the same partition as $\hat{A}:$ $H=\left(
\begin{matrix}
H_1&H_2\\ H_3&H_4
\end{matrix}
\right)$ such that both $H^T\hat{B}H, H^T\hat{A}H$ are diagonal and
$H^T\hat{B}H$ is of the form
\begin{align*}
H^T\hat{B} H= \left(\begin{matrix}H_1^T B_1H_1&H_1^T B_1H_2\\
H_2^T B_1H_1&H_2^T B_1H_2\\
\end{matrix}\right)
= \left(\begin{matrix}H_1^T B_1H_1&0\\
0&0\\
\end{matrix}\right),
\end{align*}
where $H_1^T B_1H_1$ is nonsingular. This implies
$H_2=0.$ On the other hand,
\begin{align*}
H^T\hat{A} H=\left(\begin{matrix}H_1^T M_1H_1+H_3^TM_2^TH_1+H_1^TM_2H_3&H_1^TM_2H_4\\H_4^TM_2^TH_1&0\\
\end{matrix}\right)
\end{align*}
is diagonal implying that $H_1^TM_2H_4=0,$ and so
\begin{align*}
H^T\hat{A} H=\left(\begin{matrix}H_1^T M_1H_1+H_3^TM_2^TH_1+H_1^TM_2H_3&0\\0&0\\
\end{matrix}\right).
\end{align*}
This cannot happen since $\hat{A}$ is nonsingular.
Let $P$ be an orthogonal matrix such that $P^TM_3P=\texttt{diag}(A_3,0_{q-r}),$
where $A_3$ is a nonsingular diagonal matrix of size $r\times r$ with $r\leq q$ and $p+q=n,$ and set
$U_1=\texttt{diag}(I_p, P).$ We then have
\begin{align}\label{c}
\tilde{A}: =U_1^T\hat{A}U_1
=\left(
\begin{matrix}
M_1&M_2P\\ (M_2P)^T& P^TM_3P
\end{matrix}
\right)
=\left(
\begin{matrix}
M_1&A_4&A_5\\ A_4^T&A_3&0\\ A_5^T&0&0
\end{matrix}
\right),
\end{align}
where $\left(\begin{matrix}A_4 & A_5\end{matrix}\right)=M_2P,$ $A_4$ and $ A_5$ are of size $p\times r$ and $p\times(q-r), r\le q,$ respectively.
Let
\begin{align*}
U_2=\left(\begin{matrix}I_p&0&0\\-A_3^{-1}A_4^T&I_r&0\\0&0&I_{q-r}
\end{matrix}\right) \text{ and } U=Q_1U_1U_2.
\end{align*}
We can verify that
\begin{align*}
U^TBU=U_2^TU_1^T (Q_1^TBQ_1) U_1U_2=\hat B,
\end{align*}
and, by \eqref{c},
\begin{align*}
U^TAU=U_2^T\tilde A U_2=\left(
\begin{matrix}
M_1-A_4A_3^{-1}A_4^T&0&A_5\\0&A_3&0\\A_5^T&0&0
\end{matrix}
\right).
\end{align*}
We denote $A_1:=M_1-A_4A_3^{-1}A_4^T$ and rewrite the matrices as follows
\begin{align*}
U^TBU=\texttt{diag}(B_1,0), U^TAU=\left( \begin{matrix} A_1&0&A_5\\
0&A_3&0\\ A_5^T&0&0\end{matrix}\right).
\end{align*}
We now show that $r<q$ cannot happen.
Note that $U^TAU$ and $U^TBU$ are SDC. We can choose a nonsingular congruence matrix $K$ written in the form
\begin{align*}
K=\left(
\begin{matrix}
K_1&K_2&K_3\\ K_4&K_5&K_6\\ K_7&K_8&K_9
\end{matrix}
\right)
\end{align*}
such that the matrices $K^TU^TAUK$ and $K^TU^TBUK$ are diagonal and
$K^TU^TBUK$ retains a $p\times p$ nonsingular submatrix in its northwest corner. That is,
\begin{align*}
K^TU^T BU K=\left(\begin{matrix}K_1^T B_1K_1&K_1^T B_1K_2&K_1^T B_1K_3\\
K_2^T B_1K_1&K_2^T B_1K_2&K_2^T B_1K_3\\K_3^TB_1K_1&K_3^TB_1K_2&K_3^TB_1K_3
\end{matrix}\right)=\left(\begin{matrix}K_1^T B_1K_1&0&0\\
0&0&0\\0&0&0
\end{matrix}\right)
\end{align*}
is diagonal and $K_1^T B_1K_1$ is a nonsingular diagonal matrix of size $p\times p.$ This
implies $K_2=K_3=0.$
Then
\begin{align*}
& K^TU^T AU K=\\
&=\left(\begin{matrix}\substack{K_1^T A_1K_1+K_1^T A_5K_7 \\ +K_4^TA_3K_4+K_7^TA_5^TK_1}&K_1^TA_5K_8+K_4^TA_3K_5&K_1^TA_5K_9+K_4^TA_3K_6\\K_8^TA_5^TK_1+K_5^TA_3K_4&K_5^TA_3K_5&K_5^TA_3K_6\\K_9^TA_5^TK_1+K_6^TA_3K_4&K_6^TA_3K_5&K_6^TA_3K_6
\end{matrix}\right) \\
&= \left(\begin{matrix}K_1^T A_1K_1+K_1^T A_5K_7+K_4^TA_3K_4+K_7^TA_5^TK_1&0&0\\0&K_5^TA_3K_5&0\\0&0&K_6^TA_3K_6
\end{matrix}\right)
\end{align*}
is diagonal, implying that
$$K_1^T A_1K_1+K_1^T A_5K_7+K_4^TA_3K_4+K_7^TA_5^TK_1, \quad K_5^TA_3K_5, \quad
K_6^TA_3K_6$$ are diagonal.
Since $U^TAU$ is nonsingular, $K_5^TA_3K_5$ and $K_6^TA_3K_6$ must be nonsingular. On the other hand, diagonality requires
$K_5^TA_3K_6=0;$ since $K_5^TA_3$ is nonsingular, this forces $K_6=0$ and hence $K_6^TA_3K_6=0,$ a contradiction. It therefore holds that $q=r.$ Then
\begin{align*} U^TBU=\texttt{diag}(B_1, 0) , U^TAU=\texttt{diag}(A_1, A_3)
\end{align*}
with $B_1, A_1, A_3$ as desired.
(ii) We first note that since $A$ is nonsingular, so is $A_3.$ If $A_3 \succ 0,$ then $A+\mu B \succeq 0$ if and only if $ A_1+\mu B_1 \succeq 0,$
so in that case $I_{\succeq}(A,B)=I_{\succeq}(A_1,B_1).$ Otherwise, $A_3$
is either indefinite or negative definite, and then $I_{\succeq}(A,B)= \emptyset.$
\end{proof}
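Under the hypotheses of Theorem \ref{dl2}, the reduction in part (i) can be carried out numerically with one symmetric eigendecomposition and one Schur complement. The following Python sketch assumes the hypotheses hold (in particular $q=r,$ so the block $M_3$ below is nonsingular); it is an illustration, not a full implementation.
\begin{verbatim}
import numpy as np

def reduce_singular_B(A, B, tol=1e-10):
    # Sketch of Theorem dl2(i): A, B SDC, B singular, A nonsingular.
    # Returns (A1, B1, M3) with U^T B U = diag(B1, 0) and
    # U^T A U = diag(A1, A3), where A3 is congruent to M3.
    w, Q1 = np.linalg.eigh(B)
    keep = np.abs(w) > tol                     # eigenvectors spanning R(B)
    Q1 = np.hstack([Q1[:, keep], Q1[:, ~keep]])
    p = int(keep.sum())
    Ah = Q1.T @ A @ Q1
    B1 = (Q1.T @ B @ Q1)[:p, :p]               # nonsingular part of B
    M1, M2, M3 = Ah[:p, :p], Ah[:p, p:], Ah[p:, p:]
    A1 = M1 - M2 @ np.linalg.solve(M3, M2.T)   # Schur complement (U_2 step)
    return A1, B1, M3
\end{verbatim}
By part (ii), $I_{\succeq}(A,B)=I_{\succeq}(A_1,B_1)$ whenever the returned $M_3$ is positive definite, and $I_{\succeq}(A,B)=\emptyset$ otherwise.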
The proofs of Theorems \ref{dl1} and \ref{dl2} reveal the following important result.
\begin{cor}\label{cor1}
Suppose $A, B\in\mathcal{S}^n$ are SDC and either $A$ or $B$ is nonsingular.
Then $I_{\succ}(A,B)$ is nonempty if and only if $I_{\succeq}(A,B)$ has more than one point.
\end{cor}
If $A, B$ are both singular, they can be decomposed into one of the following forms.
\begin{lem}[\cite{Ngan}]\label{lem1}
For any $A, B\in \mathcal{S}^n,$ there always exists a nonsingular matrix $U$ that
puts $B$ to
\begin{align*}
\bar{B}=U^TBU
=\left(
\begin{matrix}
B_1&0_{p\times r}\\ 0_{r\times p}&0_{r\times r} \end{matrix}
\right)
\end{align*}
such that $ B_1$ is nonsingular diagonal of size $p\times p,$ and puts $A$ to $\bar{A}$
of either form
\begin{align}\label{a1}
\bar{A}=U^TAU
=\left(
\begin{matrix}
A_1& A_2\\ A^T_2&0_{r\times r}
\end{matrix}
\right)
\end{align}
or
\begin{align}\label{a3}
\bar{A}=U^TAU
=\left(
\begin{matrix}
A_1&0_{p\times s}& A_2\\ 0_{s\times p}&A_3&0_{s\times (r-s)}\\
A_2^T&0_{(r-s)\times s}& 0_{(r-s)\times (r-s)}
\end{matrix}
\right),
\end{align}
where $ A_1$ is symmetric of dimension $p\times p,$
$A_2$ is a $p\times (r-s)$ matrix, and $A_3$ is a nonsingular diagonal matrix of dimension $s\times s;$
$p, r, s\ge0, p+r=n.$
\end{lem}
It is easy to verify that $A, B$ are SDC if and only if $\bar{A}, \bar{B}$ are SDC.
\begin{lem}[\cite{Ngan}]\label{lem2}
\begin{enumerate}
\item[i)] If $\bar{A}$ takes the form \eqref{a1}, then $\bar{B}, \bar{A}$ are SDC if and only if $B_1, A_1$
are SDC and $A_2=0;$
\item[ii)] If $\bar{A}$ takes the form \eqref{a3}, then $\bar{B}, \bar{A}$ are SDC if and only if $B_1, A_1$
are SDC and either $A_2=0$ or $A_2$ does not exist, i.e., $s=r.$
\end{enumerate}
\end{lem}
Now suppose that $A, B$ are SDC. Lemmas \ref{lem1} and \ref{lem2} then allow us to assume without loss of generality
that $\bar{B}, \bar{A}$ are already in diagonal form. That is,
\begin{align}\label{pt3}
\bar{B}=U^TBU=\texttt{diag}(B_1, 0),
\bar{A}=U^TAU=\texttt{diag}(A_1, 0)
\end{align} or
\begin{align}\label{pt4}
\bar{B}=U^TBU=\texttt{diag}(B_1, 0),
\bar{A}=U^TAU=\texttt{diag}(A_1, A_4),
\end{align}
where $A_1, B_1$ are of the same size and diagonal and $B_1$ is nonsingular. The form \eqref{pt3} arises when
$\bar{A}$ takes the form \eqref{a1} with $A_2=0;$ the form \eqref{pt4} arises from \eqref{a3}, with
$A_4={\rm diag}(A_3,0)$ when $A_2=0$ and $A_4=A_3$ when $A_2$ does not exist.
Now we can compute $I_{\succeq}(A,B)$ as follows.
\begin{thm}\label{thmb}
\begin{itemize}
\item [(i)] If $\bar{B}, \bar{A}$ take the form $(\ref{pt3}),$ then $I_{\succeq}(A,B)=I_{\succeq}(A_1,B_1);$
\item [(ii)] If $\bar{B}, \bar{A}$ take the form $(\ref{pt4}),$ then $I_{\succeq}(A,B)=I_{\succeq}(A_1,B_1)$
if $A_4 \succeq 0$ and $I_{\succeq}(A,B)=\emptyset$ otherwise.
\end{itemize}
\end{thm}
We note that since $B_1$ is nonsingular, $I_{\succeq}(A_1,B_1)$ can be computed by Theorem \ref{dl1}. In particular,
if $I_{\succeq}(A_1,B_1)$ has more than one point, then $I_{\succ}(A_1,B_1)\ne\emptyset;$ see Corollary \ref{cor1}.
In the rest of this section we consider $I_{\succeq}(A,B)$ when $A, B$ are not SDC.
We first show that if $A, B$ are not SDC, then $I_{\succeq}(A,B)$ is either empty or a single point.
The proof of Lemma \ref{1} is straightforward and is omitted.
\begin{lem}\label{1}
If $A, B$ are positive semidefinite then $A, B$ are SDC.
\end{lem}
\begin{lem}\label{dlgiaosu}
If $A, B\in \mathcal{S}^n$ are not SDC, then $I_{\succeq}(A,B)$ is either empty or a singleton.
\end{lem}
\begin{proof}
Suppose on the contrary that $I_{\succeq}(A,B)$ has more than one element. Then we can choose $\mu_1, \mu_2\in I_{\succeq}(A,B)$ with $\mu_1 \neq \mu_2$ such that $C:=A+\mu_1B \succeq 0$ and $D:=A+\mu_2B \succeq 0.$
By Lemma \ref{1}, $C, D$ are SDC, i.e., there is a nonsingular matrix $P$ such that
$P^TCP, P^TDP$ are diagonal. Then $P^TBP$ is diagonal because $P^TCP- P^TDP=(\mu_1-\mu_2)P^TBP$ and $\mu_1 \neq \mu_2.$
Since $P^TAP=P^TCP-\mu_1P^TBP,$ $P^TAP$ is also diagonal. That is $A, B$ are SDC and we get a contradiction.
\end{proof}
To know when $I_{\succeq}(A,B)$ is empty or has one element, we need the following result.
\begin{lem}[\cite{Uhlig76}]\label{dlUhlig1}
Let $A, B\in \mathcal{S}^n$ with $B$ nonsingular. Let
$B^{-1}A$ have the real Jordan normal form $\texttt{diag}(J_1,\ldots, J_r,J_{r+1},\ldots, J_m),$ where $J_1, \ldots, J_r$ are Jordan blocks corresponding to real eigenvalues $\lambda_1, \lambda_2,\ldots, \lambda_r$ of $B^{-1}A$ and $J_{r+1},\ldots, J_m$ are Jordan blocks for pairs of complex conjugate eigenvalues $\lambda_i=a_i \pm {\bf i}b_i, a_i, b_i \in \Bbb R, i=r+1, r+2, \ldots, m,$
of $B^{-1}A$. Then there exists a nonsingular matrix $U$ such that
\begin{align}\label{mtB}
U^TBU=\texttt{diag}(\epsilon_1E_1,\epsilon_2E_2,\ldots,\epsilon_rE_r,E_{r+1},\ldots, E_m)
\end{align}
\begin{align}\label{mtA}
U^TAU=\texttt{diag}(\epsilon_1E_1J_1,\epsilon_2E_2J_2,\ldots,\epsilon_rE_rJ_r,E_{r+1}J_{r+1},\ldots, E_mJ_m)
\end{align}
where $\epsilon_i=\pm1, E_i=\left(\begin{matrix}0&0&\ldots&0&1\\
0&0&\ldots&1&0\\
\ldots&\ldots&\ldots&\ldots&\ldots\\
\ldots&\ldots&\ldots&\ldots&\ldots\\
1&0&\ldots&0&0
\end{matrix}\right);$ ${\rm dim} E_i={\rm dim} J_i=n_i; n_1+n_2+\ldots+n_m=n.$
\end{lem}
\begin{thm}\label{dl3}
Let $A, B\in \mathcal{S}^n$ be as in Lemma \ref{dlUhlig1} and suppose $A, B$ are not SDC. The following hold.
\begin{enumerate}
\item [(i)] if $A \succeq 0$ then $I_{\succeq}(A,B)=\{0\};$
\item [(ii)] if $A\nsucceq 0$ and there is a real eigenvalue $\lambda_l$ of $B^{-1}A$ such that $A+(-\lambda_l)B \succeq 0$ then $$I_{\succeq}(A,B)=\{-\lambda_l\};$$
\item [(iii)] if (i) and (ii) do not occur then $I_{\succeq}(A,B)=\emptyset.$
\end{enumerate}
\end{thm}
\begin{proof}
It is sufficient to prove only (iii).
Lemma \ref{dlUhlig1} allows us to decompose $A$ and $B$ to the forms \eqref{mtA} and \eqref{mtB}, respectively. Since $A, B$ are not SDC,
at least one of the following cases must occur.
{\bf Case 1:} {\em There is a Jordan block $J_i$ such that $n_i\geq 2$ and $\lambda_i \in \Bbb R.$ }
We then consider the following principal submatrix of $A+\mu B$:
$$Y=\epsilon_i(E_iJ_i+\mu E_i)=\epsilon_i \left( \begin{matrix}0&0&\ldots&0&\lambda_i+\mu\\
0&0&\ldots&\lambda_i+\mu&1\\
\ldots&\ldots&\ldots&\ldots&\ldots\\
\ldots&\ldots&\ldots&\ldots&\ldots\\
\lambda_i+\mu&1&\ldots&0&0 \end{matrix}\right)_{n_i\times n_i}.$$
If $n_i =2$ then $Y=\epsilon_i\left( \begin{matrix}0&\lambda_i+\mu\\
\lambda_i+\mu&1 \end{matrix}\right).$
Since the values $\mu=-\lambda_l$ for real eigenvalues $\lambda_l$ are already covered by case (ii), we may assume $\mu\ne -\lambda_i;$ then $Y\not\succeq0,$ so $A+\mu B\not\succeq0.$
If $n_i>2,$ then $Y$ always contains the following principal submatrix of size $(n_i-1)\times(n_i-1),$ which is not positive semidefinite:
$$\epsilon_i \left( \begin{matrix}
0&0&\ldots&\lambda_i+\mu&1\\
0&0&\ldots&1&0\\
\ldots&\ldots&\ldots&\ldots&\ldots\\
\lambda_i+\mu&1&\ldots&0&0\\
1&0&\ldots&0&0 \end{matrix}\right)_{(n_i-1)\times(n_i-1)}.$$ So $A+\mu B\not\succeq0.$
{\bf Case 2:} {\em There is a Jordan block $J_i$ such that $n_i\geq 4$ and $\lambda_i=a_i\pm {\bf i}b_i \notin \Bbb R.$}
We then consider
$$Y=\epsilon_i(E_iJ_i+\mu E_i)=\epsilon_i \left( \begin{matrix}0&0&\ldots&b_i&a_i+\mu\\
0&0&\ldots&a_i+\mu&-b_i\\
\ldots&\ldots&\ldots&\ldots&\ldots\\
b_i&a_i+\mu&\ldots&0&0\\
a_i+\mu&-b_i&\ldots&0&0 \end{matrix}\right)_{n_i\times n_i}.$$
This matrix always contains either a $2 \times 2$ principal submatrix
$\epsilon_i \left( \begin{matrix}b_i&a_i+\mu\\
a_i+\mu&-b_i \end{matrix}\right)$
or a $4 \times 4$ principal submatrix:
$$ \epsilon_i \left( \begin{matrix}0&0&b_i&a_i+\mu\\
0&0&a_i+\mu&-b_i\\
b_i&a_i+\mu&0&0\\
a_i+\mu&-b_i&0&0 \end{matrix}\right).$$
Neither is positive semidefinite for any $\mu\in\Bbb R,$ so $A+\mu B\not\succeq0.$
\end{proof}
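Theorem \ref{dl3} yields a direct procedure for the non-SDC case: the only candidates for $I_{\succeq}(A,B)$ are $\mu=0$ and $\mu=-\lambda_l$ for the real eigenvalues $\lambda_l$ of $B^{-1}A.$ A minimal Python sketch, assuming $B$ nonsingular:
\begin{verbatim}
import numpy as np

def psd_point_not_sdc(A, B, tol=1e-10):
    # Sketch of Theorem dl3: A, B not SDC, B nonsingular.
    # I_psd(A,B) is empty or a single point among 0 and -lambda_l.
    def is_psd(M):
        return np.min(np.linalg.eigvalsh(M)) >= -tol
    if is_psd(A):
        return {0.0}
    lams = np.linalg.eigvals(np.linalg.solve(B, A))
    for lam in lams[np.abs(lams.imag) < tol].real:  # real eigenvalues
        if is_psd(A - lam * B):                     # A + (-lambda) B
            return {-lam}
    return set()
\end{verbatim}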
Similarly, we have the following result. We omit its proof.
\begin{thm}\label{dl4}
Let $A, B\in \mathcal{S}^n$ be not SDC. Suppose $A$ is nonsingular and $A^{-1}B$ has the real Jordan normal form $\texttt{diag}(J_1,\ldots, J_r,J_{r+1},\ldots, J_m),$ where $J_1, \ldots, J_r$ are Jordan blocks corresponding to real eigenvalues $\lambda_1, \lambda_2,\ldots, \lambda_r$ of $A^{-1}B$ and $J_{r+1},\ldots, J_m$ are Jordan blocks for pairs of complex conjugate eigenvalues $\lambda_i=a_i \pm {\bf i}b_i, a_i, b_i \in \Bbb R, i=r+1, r+2, \ldots, m,$
of $A^{-1}B$.
\begin{enumerate}
\item [(i)] If $A \succeq 0$ then $I_{\succeq}(A,B)=\{0\};$
\item [(ii)] If $A\nsucceq0$ and there is a real eigenvalue $\lambda_l \neq 0$ of $A^{-1}B$
such that $A+\left(-\dfrac{1}{\lambda_l}\right)B \succeq 0$ then $I_{\succeq}(A,B)=\left\{-\dfrac{1}{\lambda_l}\right\};$
\item [(iii)] If cases $(i)$ and $(ii)$ do not occur then $I_{\succeq}(A,B)=\emptyset.$
\end{enumerate}
\end{thm}
Finally, suppose $A$ and $B$ are not SDC and both are singular. Lemma \ref{lem1} indicates that $A$ and $B$ can be simultaneously
decomposed to $\bar{A}$ and $\bar{B}$ in either \eqref{a1} or \eqref{a3}.
If $\bar{A}$ and $\bar{B}$ take the form \eqref{a1} with $A_2=0,$
then $I_{\succeq}(A,B)=I_{\succeq}(A_1,B_1),$ where $A_1, B_1$ are not SDC and $B_1$ is nonsingular;
in this case we apply Theorem \ref{dl3}
to compute $I_{\succeq}(A_1,B_1).$ If $\bar{A}$ and $\bar{B}$ take the form \eqref{a3} with $A_2=0,$ then
$I_{\succeq}(A,B)=\emptyset$ when $A_3$ is not positive definite; otherwise,
$I_{\succeq}(A,B)=I_{\succeq}(A_1,B_1),$ where $A_1, B_1$ are not SDC and $B_1$ is nonsingular, and again we can apply
Theorem \ref{dl3}. Therefore we need only consider
the case $A_2\ne0,$ noting that $I_{\succeq}(A,B)\subset I_{\succeq}(A_1,B_1).$
\begin{thm}\label{thm5}
Let $A, B\in \mathcal{S}^n$ be singular and not SDC, and suppose
$\bar{A}$ and $\bar{B}$ take the form \eqref{a1} or \eqref{a3} with $A_2\ne 0.$ Suppose that $I_{\succeq}(A_1,B_1)=[a,b]$ with $a<b.$
If $a\not\in I_{\succeq}(A, B)$ and $b\not\in I_{\succeq}(A, B),$ then $I_{\succeq}(A, B)=\emptyset.$
\end{thm}
\begin{proof} We consider $\bar{A}$ and $\bar{B}$ in \eqref{a3}; the form in
\eqref{a1} is treated similarly. Suppose on the contrary that
$I_{\succeq}(A, B)\ne\emptyset.$ Since $A, B$ are not SDC, Lemma \ref{dlgiaosu} gives $I_{\succeq}(A, B)=\{\mu_0\},$ and since $I_{\succeq}(A,B)\subset I_{\succeq}(A_1,B_1)=[a,b]$ with both endpoints excluded by hypothesis, we have $a<\mu_0<b.$
Since $I_{\succeq}(A_1,B_1)$ has more than one point, by Lemma \ref{dlgiaosu}, $A_1, B_1$ are SDC.
Let $Q_1$ be a $p\times p$ nonsingular matrix such that $Q_1^TA_1Q_1, Q_1^TB_1Q_1$ are diagonal; then
$Q^T_1(A_1+\mu_0 B_1)Q_1:={\rm diag}(\gamma_1,\gamma_2,\ldots,\gamma_p)$ is
a diagonal matrix. Moreover, since $B_1$ is nonsingular, we have
$I_{\succ}(A_1,B_1)=(a,b);$ see Corollary \ref{cor1}. Then
$\gamma_i>0 $ for $i=1,2,\ldots, p$ because $\mu_0\in I_{\succ}(A_1,B_1).$
Letting $Q:={\rm diag}(Q_1,I_s,I_{r-s}),$ we then have
\begin{align*} Q^T(\bar{A}+\mu_0 \bar{ B})Q= \left(
\begin{matrix}
Q^T_1(A_1+\mu_0 B_1)Q_1&0_{p\times s}&Q^T_1 A_2\\ 0_{s\times p}&A_3&0_{s\times (r-s)}\\
A_2^TQ_1&0_{(r-s)\times s}& 0_{(r-s)\times (r-s)}
\end{matrix}
\right).
\end{align*}
We note that $I_{\succeq}(A, B)=\{\mu_0\}$ is a singleton, which implies ${\rm det}(A+\mu_0B)=0$
(otherwise $A+\mu_0 B\succ0$ and $I_{\succeq}(A,B)$ would contain a whole neighborhood of $\mu_0$),
and so ${\rm det}(Q^T(\bar{A}+\mu_0 \bar{ B})Q)=0.$ On the other hand, since
$A_3$ is nonsingular diagonal and $A_1+\mu_0B_1\succ0,$ the first $p+s$ columns of the matrix
$Q^T(\bar{A}+\mu_0 \bar{ B})Q$ are linearly independent. One of the following cases must occur:
i) the columns of the right side submatrix $\left(
\begin{matrix}Q^T_1A_2\\0_{s\times (r-s)}\\0_{(r-s)\times (r-s)} \end{matrix}
\right)$ are linearly independent and at least one column, suppose $(c_1, c_2, \ldots, c_p, 0, 0, \ldots, 0 )^T,$ is a
linear combination of the columns of the matrix
$$\left(
\begin{matrix}Q^T_1(A_1+\mu_0 B_1)Q_1\\0_{s\times p}\\A_2^TQ_1 \end{matrix}
\right):=({\rm column_1}|{\rm column_2}|\ldots |{\rm column_p}),$$
where ${\rm column_i}$ is the $i$th column of the matrix or ii)
the columns of the right side submatrix $\left(
\begin{matrix}Q^T_1A_2\\0_{s\times (r-s)}\\0_{(r-s)\times (r-s)} \end{matrix}
\right)$ are linearly dependent.
If case i) occurs, then there are scalars
$a_1, a_2, \ldots, a_p,$ not all zero, such that
\begin{align}\label{B2}
\left(
\begin{matrix}c_1\\c_2\\ \vdots \\ c_p\\ 0 \\ \vdots \\0 \end{matrix}
\right)=a_1 {\rm column_1}+a_2{\rm column_2}+\ldots+a_p{\rm column_p}.
\end{align}
Equation \eqref{B2} implies that
$$ \begin{cases} c_1=a_1 \gamma_1\\ c_2=a_2 \gamma_2\\ \ldots \\ c_p = a_p \gamma_p\\ 0=a_1 c_1+a_2 c_2+\ldots+a_p c_p\end{cases}$$
which further implies
$$ 0= (a_1)^2 \gamma_1+(a_2)^2 \gamma_2+\ldots+(a_p)^2 \gamma_p.$$ This cannot happen with $\gamma_i>0$ and $(a_1)^2+(a_2)^2+\ldots+(a_p)^2 \neq 0.$ This contradiction shows that
$I_{\succeq}(A,B)=\emptyset.$
If case ii) occurs, then there always exists a nonsingular matrix $H$ such that
\begin{align*} H^TQ^T(\bar{A}+\mu_0 \bar{ B})QH= \left(
\begin{matrix}
Q^T_1(A_1+\mu_0 B_1)Q_1&0_{p\times s}&\hat{A}_2&0
\\ 0_{s\times p}&A_3&0 &0\\
\hat{A}_2^T&0&0&0\\
0&0 & 0 &0
\end{matrix}
\right),
\end{align*}
where $\hat{A}_2$ is a full column-rank matrix. Let
\begin{align*} \hat{A}= \left(\begin{matrix}
Q^T_1A_1Q_1&0_{p\times s}&\hat{A}_2\\
0_{s\times p}&A_3&0\\
\hat{A}_2^T&0&0
\end{matrix}
\right), \hat{B}= \left(\begin{matrix}
Q^T_1B_1 Q_1&0_{p\times s}&0\\
0_{s\times p}&0&0 \\
0&0&0
\end{matrix}
\right),
\end{align*}
we have $I_{\succeq}(A,B)=I_{\succeq}(\bar{A},\bar{B})=I_{\succeq}(\hat{A},\hat{B}),$
and so $I_{\succeq}(\hat{A},\hat{B})=\{\mu_0\}.$ This implies
${\rm det}(\hat{A}+\mu_0\hat{B})=0,$ while the right-hand submatrix $\left(
\begin{matrix}\hat{A}_2\\0\\0\end{matrix}
\right)$ has full column rank.
We are thus back in case i), which yields a contradiction as before.
\end{proof}
\section{Application for the GTRS}\label{sec2}
In this section we find an optimal Lagrange multiplier
$$\mu^*\in I:= I_{\succeq}(A,B)\cap [0,\infty)$$ together with an optimal solution $x^*$ of the GTRS \eqref{original}.
We need first to recall the following optimality conditions for the GTRS \eqref{original}.
\begin{lem}[\cite{MJ}]\label{More}
A vector $x^*\in\Bbb R^n$ is an optimal solution to GTRS \eqref{original} if and only if there exists $\mu^*\ge0$ such that
\begin{align}
(A+\mu^*B)x^*+a+\mu^* b=0,\label{dk1}\\
g(x^*)\le0,\label{dk2}\\
\mu^* g(x^*)=0, \label{dk3}\\
A+\mu^* B\succeq0. \label{dk4}
\end{align}
\end{lem}
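As an illustration, the conditions of Lemma \ref{More} can be checked numerically for a candidate $\mu.$ The Python sketch below assumes the constraint has the quadratic form $g(x)=\frac12 x^TBx+b^Tx+c,$ an assumed form consistent with the stationarity condition \eqref{dk1}; the exact form of $g$ is fixed by \eqref{original}.
\begin{verbatim}
import numpy as np

def check_multiplier(A, B, a, b, c, mu, tol=1e-8):
    # Check (dk1)-(dk4) for a candidate mu >= 0, assuming
    # g(x) = 0.5 x^T B x + b^T x + c.
    M = A + mu * B
    if np.min(np.linalg.eigvalsh(M)) < -tol:            # (dk4)
        return None
    x = np.linalg.lstsq(M, -(a + mu * b), rcond=None)[0]
    if np.linalg.norm(M @ x + a + mu * b) > tol:        # (dk1) fails
        return None
    g = 0.5 * x @ B @ x + b @ x + c
    if g > tol or abs(mu * g) > tol:                    # (dk2)-(dk3)
        return None
    return x                                            # optimal solution
\end{verbatim}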
In fact, the conditions \eqref{dk1} and \eqref{dk4} are necessary and sufficient for the GTRS \eqref{original}
to be bounded below \cite{Xia}. However,
a bounded GTRS may have no optimal solution; see, for example, \cite{Adachi}. The conditions \eqref{dk2}-\eqref{dk3}
are thus added to guarantee the existence of an optimal solution to the GTRS \eqref{original}. To check
whether a $\mu \in I$ satisfies conditions \eqref{dk1}-\eqref{dk3}, we apply
the following result.
\begin{lem}[\cite{MJ}]\label{More1}
Suppose $I_{PD}:=I_{\succ}(A,B)\cap [0,\infty)$ is nonempty. If $x(\mu)$ is the solution of
\eqref{dk1}, and if the function $\varphi:\Bbb R \rightarrow \Bbb R$
is defined on $I_{PD}$ by
$$\varphi(\mu)=g[x(\mu)],$$
then $\varphi$ is strictly decreasing on $I_{PD}$ unless $x( \cdot)$ is constant on $I_{PD}.$
\end{lem}
We note that $I$ can be a nonempty interval while $I_{PD}$ is empty. Fortunately,
Corollary \ref{cor1} shows that this happens only when $A$ and $B$ are both singular.
Then we can decompose $A, B$ to $\bar{A}, \bar{B}$ and apply
Theorem \ref{thmb} if $A, B$ are SDC, and Theorem \ref{dl3}, Theorem \ref{dl4}
or Theorem \ref{thm5} if $A, B$ are not SDC.
By Lemma \ref{More}, if
$I=\emptyset,$ the GTRS \eqref{original}
has no optimal solution; it is even unbounded from below \cite{Xia}.
If $I\ne\emptyset$ has only one element, $I=I_{\succeq}(A,B)\cap [0,+\infty)=\{\mu\},$ then
we need only solve the linear system \eqref{dk1}-\eqref{dk3}
to check whether $\mu^*=\mu.$ If $I$ is an interval, say
$I=[\mu_1, \mu_2],$ where $\mu_1\ge0$ and $\mu_2$ may be $\infty,$ then $A, B$ are SDC.
We assume that $A={\rm diag}(\alpha_1,\alpha_2,\ldots,\alpha_n),
B={\rm diag}(\beta_1,\beta_2,\ldots,\beta_n).$
The equation \eqref{dk1} is then
of the following simple form
\begin{align}\label{equations}
(\alpha_i+\mu \beta_i) x_i= -(a_i+\mu b_i), i=1,2,\ldots,n.
\end{align}
Solving \eqref{equations} for a fixed $\mu$ is very simple; the main task is thus
to find $\mu^*$ in the following cases.
\begin{enumerate}
\item If at least one of the matrices $A$ and $B$ is nonsingular then $I_{PD}\ne\emptyset$ and
$I ={\rm closure}(I_{PD}).$
The GTRS \eqref{original} then attains a unique optimal solution $x^*$ at an optimal Lagrange multiplier $\mu^*$ \cite{MJ}.
We first check whether $\mu^*=0.$ If not, we apply Lemma \ref{More1} for finding $\mu^*$ such that $\varphi(\mu^*)=0.$
Observe from \eqref{equations} that $x(\mu)$ is constant on $I_{PD}$
only when $\beta_i=b_i=0$ for all $i=1,2,\ldots,n.$ This case can be dealt with easily since $g(x)$ is then constant.
Otherwise, $\varphi(\mu)$ is strictly decreasing on $I_{PD}$
and we have the following results.
\begin{lem}[\cite{Feng},\cite{Adachi}]\label{findmu}
Suppose the Slater condition holds for the GTRS \eqref{original}, i.e., there exists $\bar{x}\in\Bbb R^n$ such that
$g(\bar{x})<0,$ and $I_{PD}\ne\emptyset.$
\begin{enumerate}
\item[(a)] If $\varphi(\mu)>0$ on $I_{PD}$ and $\mu_2<\infty,$ then $\mu^*=\mu_2;$
\item[(b)] If $\varphi(\mu)<0$ on $I_{PD}$ then $\mu^*=\mu_1;$
\item[(c)] If $\varphi(\mu)$ changes its sign on $I_{PD}$ then $\mu^*\in I_{PD};$
\item[(d)] If $\varphi(\mu)>0$ on $I_{PD}$ then $\mu_2$ cannot be $\infty.$
\end{enumerate}
\end{lem}
The case (d) indicates that if $I=[\mu_1, \infty)$ and $\varphi(\mu_1)>0,$ then $\mu_1<\mu^*<\infty.$ A minimal numerical sketch of this bisection search is given after this list.
\item If both $A$ and $B$ are singular, by Lemma \ref{lem2}, $B, A$ are decomposed to
the form either \eqref{pt3} or \eqref{pt4}, and are now in the form
\begin{align}\label{pt5}
B=\texttt{diag}(\beta_1, \ldots,\beta_p,0,\ldots,0),
A=\texttt{diag}(\alpha_1, \ldots,\alpha_p,0,\ldots,0)
\end{align} or
\begin{align}\label{pt6}
B=\texttt{diag}(\beta_1, \ldots,\beta_p,0,\ldots,0),
A=\texttt{diag}(\alpha_1, \ldots,\alpha_p,\alpha_{p+1},\ldots,\alpha_{n})
\end{align}
where $\beta_1, \beta_2,\ldots,\beta_p$ are nonzero. We have
$$I_{\succeq}(A,B)=I_{\succeq}(A_1,B_1)={\rm closure}\left(I_{\succ}(A_1,B_1)\right),$$
where $B_1=\texttt{diag}(\beta_1, \beta_2,\ldots,\beta_p), A_1=\texttt{diag}(\alpha_1, \alpha_2,\ldots,\alpha_p),$
see Theorem \ref{thmb} and Corollary \ref{cor1}. We note also that
if $\alpha_i>0$ for $i=p+1, \ldots, n,$ then $I_{\succ}(A,B)=I_{\succ}(A_1, B_1)$
and we can apply Lemma \ref{findmu}. Otherwise, $I_{\succ}(A,B)=\emptyset$
and the GTRS \eqref{original} may have no optimal solution. We deal with this case as follows.
If $A, B$ take the form \eqref{pt5},
the equations \eqref{equations} become
\begin{align}\label{equationsp}
(\alpha_i+\mu \beta_i) x_i = -(a_i+\mu b_i), i=1,2,\ldots,p;\\
0 = -(a_i+\mu b_i), i=p+1,\ldots,n.\nonumber
\end{align}
If $a_i=b_i=0$ for $i=p+1,\ldots,n,$ then \eqref{original} is reduced to a GTRS of $p$ variables
with matrices $A_1, B_1$ such that $I_{\succ}(A_1,B_1)\ne\emptyset.$
We then apply Lemma \ref{findmu} for it. Otherwise,
either \eqref{equationsp} has no solution $x$ for all $\mu\in I$ or
it has solutions at only one $\mu\in I.$ Then we check easily whether $\mu^*=\mu.$
If $A, B$ take the form \eqref{pt6}, the equations \eqref{equations} become
\begin{align}\label{equationsp1}
(\alpha_i+\mu \beta_i) x_i &= -(a_i+\mu b_i), i=1,2,\ldots,p;\\
\alpha_i x_i &= -(a_i+\mu b_i), i=p+1,p+2,\ldots,p+s;\nonumber\\
0 &= -(a_i+\mu b_i), i=p+s+1,\ldots,n.\nonumber
\end{align}
By the same arguments as above, either \eqref{original} is then reduced to a GTRS of $p+s$ variables
with matrices
\begin{align*}
&\bar{A}_1=\texttt{diag}(A_1, \alpha_{p+1}, \ldots,\alpha_{p+s})=\texttt{diag}(\alpha_1,\ldots, \alpha_p, \alpha_{p+1}, \ldots,\alpha_{p+s}),\\
&\bar{B}_1=\texttt{diag}(B_1, \underbrace{0,\ldots, 0}_{s \text{ zeros }})=\texttt{diag}(\beta_1, \ldots,\beta_p,\underbrace{0,\ldots, 0}_{s \text{ zeros }})
\end{align*}
such that $I_{\succ}(\bar{A}_1,\bar{B}_1)\ne\emptyset.$
We then apply Lemma \ref{findmu} for it. Otherwise,
either \eqref{equationsp1} has no solution $x$ for all $\mu\in I$ or
it has solutions at only one $\mu\in I.$
\end{enumerate}
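The following Python sketch illustrates the bisection search referred to in item 1, again assuming the diagonal form in \eqref{equations} and the constraint $g(x)=\frac12 x^TBx+b^Tx+c$ (an assumed form consistent with \eqref{dk1}); it is an illustration, not a complete solver.
\begin{verbatim}
import numpy as np

def gtrs_bisection(alpha, beta, a, b, c, mu_lo, mu_hi, iters=100):
    # Bisection for mu* on I_PD, with A = diag(alpha), B = diag(beta).
    # phi(mu) = g(x(mu)) is strictly decreasing on I_PD (Lemma More1).
    def x_of(mu):
        return -(a + mu * b) / (alpha + mu * beta)  # solves (dk1)
    def phi(mu):
        x = x_of(mu)
        return 0.5 * x @ (beta * x) + b @ x + c
    lo, hi = mu_lo, mu_hi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if phi(mid) > 0:
            lo = mid            # constraint still violated: increase mu
        else:
            hi = mid
    mu = 0.5 * (lo + hi)
    return mu, x_of(mu)
\end{verbatim}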
\section{Conclusion and remarks}
In this paper, we showed that for a given pair of real symmetric matrices $A, B,$ the set $I_{\succeq}(A, B)$ of real values $\mu\in\Bbb R$ for which the matrix pencil $A+\mu B$ is positive semidefinite can always be computed by solving an $n\times n$ generalized eigenvalue problem.
The computation is considered in two separate cases:
if $A, B$ are not simultaneously diagonalizable via congruence (SDC), $I_{\succeq}(A, B)$
is either empty or a singleton, while if $A, B$ are SDC, $I_{\succeq}(A, B)$ can be empty, a singleton or an interval.
In the case where $I_{\succeq}(A, B)$
is an interval, if either $A$ or $B$ is nonsingular, then $I_{\succeq}(A, B)$ is the closure of
the positive definite interval $I_{\succ}(A, B).$ Otherwise, $A, B$ can be decomposed into block diagonal forms
with submatrices $A_1, B_1,$ where $B_1$ is nonsingular, such that
$I_{\succeq}(A, B)$ is the closure of $I_{\succ}(A_1, B_1).$
With $I_{\succeq}(A, B)$ in hand, we are able to solve the generalized trust region subproblem \eqref{original}
not only in the {\em easy-case}, when $I_{\succ}(A, B)$ is nonempty, but also in the {\em hard-case}, by solving only linear
equations.
Our approach requires solving only an $n\times n$ generalized eigenvalue problem, compared with
a $(2n + 1)\times (2n + 1)$ generalized eigenvalue problem in \cite{Adachi}. Moreover,
we can completely deal with the hard-case of the GTRS \eqref{original}, which was an open problem in
\cite{MJ} and \cite{Hsia}.
\section*{References}
\section*{Abstract}
\input{0_Abstract.tex}
Keywords: Regular reflection; Mach reflection; Moving wedge; Dynamic shock waves; Supersonic flow; Dual solution domain.
\input{1_Nomenclature.tex}
\section{Introduction}
\input{2_Introduction.tex}
\section{Physical Model}
\subsection{Model Description}
\input{3_Model_Description.tex}
\subsection{Governing Equations}
\input{4_Governing_Equations.tex}
\subsection{Equations of Motion}
\input{5_Equations_of_Motion.tex}
\section{Computational Model}
\subsection{Numerical Implementation}
\input{6_Numerical_Implementation.tex}
\subsection{Computational Domain}
\input{7_Computational_Domain.tex}
\subsection{Grid Generation}
\input{8_Grid_Generation.tex}
\section{Verification}
\input{9_Verification.tex}
\section{Results and Discussion}
\input{10_Results_and_Discussion.tex}
\subsection{Dynamic Transition from RR to MR}
\input{11_Dynamic_transition_from_RR_to_MR.tex}
\subsection{Mach Stem Height}
\input{12_Mach_Stem_Height.tex}
\subsection{Shock Reflection Domain}
\input{13_Shock_Reflection_Domain.tex}
\section{Conclusion}
\input{14_Conclusion.tex}
\section{Introduction}
\IEEEPARstart{M}anipulation of quantum systems plays a significant role in many fields such as quantum computing \cite{1,2,3}, quantum high-precision measurements \cite{4,5,6}, chemical physics \cite{dong2022quantum} and quantum communication networks \cite{8}. To achieve quantum operations with high efficiency, various control approaches \cite{dong2022quantum}, such as optimal control \cite{11,12,13}, sliding mode control \cite{15}, robust and learning control \cite{16,17,wu2019learning} and Lyapunov control \cite{18,19,20,21}, have been proposed to solve this problem. Traditional methods may rely on an accurate model of the system and thus have limited applicability to complex quantum systems \cite{dong2010quantum}.
Reinforcement learning (RL) \cite{kaelbling1996reinforcement}, a branch of machine learning in artificial intelligence \cite{22}, has proven to be a powerful tool for solving a wide range of complex problems, as demonstrated by AlphaGo and Atari games. Owing to its potential for searching for control policies without prior knowledge of the environment \cite{31}, RL is naturally introduced to search for the optimal control fields that maximize a desired performance \cite{25}. In particular, Q-tables have been introduced to solve the shortest path problem \cite{26} and policy gradient methods have been used for approximating compilations of quantum unitary transformations \cite{27}. In recent years, the introduction of deep learning to the RL framework has greatly enhanced its control performance on high-dimensional state (action) spaces, and deep RL has been introduced to various quantum tasks \cite{shindi2023model}. For example, the controlled transport of a quantum state by adiabatic passage through an array of semiconductor quantum dots has been realized by a deep Q network (DQN), which utilizes a neural network to predict its Q-value function rather than a Q-table \cite{28}. Furthermore, DQN has also demonstrated its capability of realizing multi-qubit gates with high efficiency and precision \cite{29} and estimating quantum system parameters with improved precision \cite{30}.
In practice, the effectiveness of quantum control relies heavily on the accuracy of the control laws. However, existing algorithms such as DQN can only output actions from a restricted discrete set, which may exclude the truly optimal controls. This results in insufficient fidelity of the target state, and the network converges more slowly or even fails to converge at all. The deep deterministic policy gradient (DDPG) algorithm is an extension of DQN aimed at dealing with continuous control problems \cite{32}. It absorbs not only the advantage of the single-step updates of policy gradients in Actor-Critic but also DQN's technique for Q-value estimation. Such an Actor-Critic framework allows it to explore continuous control fields more efficiently. DDPG has now been recognized as an important tool for solving complicated problems and achieving efficient control of quantum systems. For instance, deep deterministic policy gradient techniques have been integrated to optimize the fidelity of superconducting quantum circuits \cite{33}. A general framework of the quantum-inspired DDPG method has been proposed to efficiently solve both classical and quantum sequential decision problems (e.g., state preparation) in the continuous domain \cite{34}.
The goal of RL is to derive an agent's policy that maximizes its utility in sequential decision-making problems. In the standard formulation, the utility of an agent is defined via its reward function, which plays a critical role in enhancing the agent's learning process \cite{44}. When applying RL to quantum systems, its control performance is greatly limited by sparse rewards, since a random search makes it difficult for the quantum system to evolve into final states with dense reward signals, thus providing limited information for the neural networks to suggest good actions \cite{31}. To realize efficient control of quantum systems, we utilize quantum fidelity (which is widely used in the quantum information community) to design an enhanced reward function \cite{31,34}. Inspired by the potential energy reward function \cite{47}, we design a guided reward function that encourages the RL agent to find a good policy that gradually increases the fidelity and prevents the premature termination of exploration.
Recently, the auxiliary task setting has been introduced to boost the performance of the original RL task (defined as the main task). For example, two auxiliary tasks involving depth map prediction and loop-closure detection in navigation enable the agent to explore the environment more efficiently by detecting whether the current location has been visited before, even when visual cues are scarce in an I-maze \cite{49}. An auxiliary task of direction prediction accelerates the learning process by providing extra gradients \cite{50}. In the 3D map visual navigation task, a reward prediction auxiliary task has been added to the Asynchronous Advantage Actor-Critic (A3C) algorithm, which makes the agent significantly outperform the previous state-of-the-art on Atari \cite{51}. To efficiently use the reward signals from quantum systems, we introduce an auxiliary task aiming at predicting new reward signals before the RL agents obtain the actual reward from the environment. The auxiliary task learns synchronously with the main task, allowing the agent to select the most relevant features of the environment that are key to suggesting good actions towards the desired states \cite{52}.
To effectively control quantum systems, we propose an auxiliary task-based deep reinforcement learning (AT-DRL) method for quantum control. In particular, we utilize the deep deterministic policy gradient (DDPG) method to generate continuous control fields for quantum systems. In addition, we design a guided reward function based on the fidelity of quantum states and utilize an auxiliary task with a neural network to fit the reward signal, to make the best use of the reward information in quantum systems. The main task is closely correlated with the auxiliary task, and the shared parameter setting allows the main network to capture useful features that provide important implications for suggesting good actions. To demonstrate the effectiveness of the proposed method, numerical simulations of state preparations are presented and compared.
The main contributions of this paper are summarized as follows.
\begin{enumerate}
\item To overcome the sparse reward signal for quantum control problems, we design a guided reward function to incentivize the RL agent to incrementally improve its performance towards high fidelity.
\item Apart from the main task, an auxiliary task is introduced to predict the reward given by the environment. By sharing the parameters between the two tasks, the RL agent is able to select the most relevant features of the environment that are key to suggesting good actions toward the desired states.
\item To verify the effectiveness of the proposed method, numerical simulations on one-qubit, two-qubit, and eight-qubit quantum systems are implemented, respectively. The superiority of AT-DRL over two traditional DRL methods (DQN and DDPG) demonstrates that the proposed method can achieve efficient state preparation even with sparse reward signals.
\end{enumerate}
The rest of this paper is organized as follows. Section II introduces several basic concepts about RL, DDPG, and quantum dynamics. The design and implementation of the AT-DRL algorithm are described in detail in Section III. In Section IV, numerical results for several quantum systems are presented. Concluding remarks are presented in Section V.
\section{Preliminaries}
\subsection{Markov Decision Process and Deep Deterministic Policy Gradient}
The Markov decision process is a mathematical model of the sequential decision-making process that underlies reinforcement learning. A Markov decision process consists of a five-tuple $( S, A, P, R, \gamma )$ \cite{37}, where $S$ denotes the state space, $A$ represents the action space, and $P: S\times A\times S \rightarrow \left[0,1\right]$ is the state transition probability.
$R: S \times A \rightarrow \mathbb{R}$ is the reward function, and $\gamma \in \left[0,1\right]$ is the discount factor.
At each time step $t \in \left[0,\mathrm{T}\right]$, the state of the agent is $s_t \in S$. The action of the agent is chosen according to the policy $\pi$: $a_t = \pi(s_t)$. After performing the action $a_t$, the agent transfers to the next state $s_{t+1}$ and receives a scalar reward $r_t$ from the environment. RL aims to determine the optimal action $a_t$ for each state $s_t$ so as to maximize the cumulative future discounted return $R_t=\sum_{k=0}^{T-t} \gamma^{k} r_{t+k}$.
Due to the integration of deep learning and reinforcement learning, DQN was proposed as a powerful deep reinforcement learning (DRL) method and applied to address discrete problems
\cite{mnih2013playing,mnih2015human}. This idea was extended to continuous problems, which led to the Deep Deterministic Policy Gradient (DDPG) algorithm. Proposed by the DeepMind team \cite{38,39}, DDPG is particularly helpful when the dimension of the action space is high, where DQN fails to learn a good policy. The DDPG algorithm is a classical Actor-Critic algorithm consisting of an Actor network and a Critic network. In this architecture, the Actor network generates actions while the Critic network evaluates these actions based on the current value function \cite{konda1999actor}. By utilizing these two networks, reinforcement learning is capable of successfully finding a good policy for continuous control. In DDPG, both the policy-based (also known as Actor) network and the value-based (also known as Critic) network are divided into two parts: a current network updated at each step and a target network for computing the predicted Q values and actions. The target network copies the parameters of the current network at regular intervals to obtain a relatively stable Q value, which is helpful for the convergence of the model.
To train the neural networks, experience replay is utilized \cite{mnih2013playing}, where transitions $(s_t,a_t, r_t, s_{t+1}, done)$ are stored in a large memory pool and sampled from the pool to update the parameters of the neural networks. This helps to approximate independent and identically distributed samples and thus contributes to accelerated convergence \cite{39}.
Exploration is essential for agents, yet a set of deterministic networks can only output a deterministic action for an input state. To solve this, noise can be added to the output actions to improve the exploration ability of agents. For example, the Ornstein-Uhlenbeck process \cite{41} is a common choice to introduce noise to the action space. It can be defined by the following stochastic differential equation (in the one-dimensional case):
\begin{equation}\label{eq1}
dN_t = \theta(\mu - N_t)dt + \sigma dB_t,
\end{equation}
where $N_t$ denotes the noise process and $\mu$ denotes its long-term mean. $\theta$ determines the speed of mean reversion and is an important feature that distinguishes one Ornstein-Uhlenbeck process from another: a large $\theta$ leads to smaller disturbances, keeping the current value close to the mean. $B_t$ represents standard Brownian motion, which acts as an external random noise with weight $\sigma > 0$. When the initial value is a single-point distribution at the origin (i.e., $N_0 = 0$) and $\mu = 0$, the solution of the above equation is
\begin{equation}\label{eq2}
N_t = \sigma \int_{0}^{t} e^{\theta(\tau-t)} dB_\tau.
\end{equation}
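In practice, this noise is commonly generated with a simple Euler-Maruyama discretization of \eqref{eq1}. The following Python sketch shows one such implementation; the parameter values are illustrative assumptions, not values taken from this paper.
\begin{verbatim}
import numpy as np

class OUNoise:
    # Euler-Maruyama discretization of the Ornstein-Uhlenbeck SDE above,
    # added to DDPG actions for exploration.
    def __init__(self, dim, mu=0.0, theta=0.15, sigma=0.2, dt=1e-2):
        self.mu, self.theta, self.sigma, self.dt = mu, theta, sigma, dt
        self.N = np.full(dim, mu)

    def sample(self):
        dB = np.sqrt(self.dt) * np.random.randn(*self.N.shape)
        self.N = self.N + self.theta * (self.mu - self.N) * self.dt \
                 + self.sigma * dB
        return self.N
\end{verbatim}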
\subsection{Quantum Dynamics}
The state of a classical bit is either 0 or 1, whereas a state of one-qubit systems can be described as a linear combination of the ground states
$\ket{0}$ and $\ket{1}$, which is also called the superposition state
\begin{equation}\label{eq3}
\ket{\psi} = \alpha\ket{0} + \beta\ket{1},
\end{equation}
where $\alpha$ and $\beta$ are complex numbers that satisfy the normalization condition $|\alpha|^2 + |\beta|^2 = 1$. This applies to multiple qubits, for example, the state of two-qubit systems is written as
\begin{equation}\label{eq4}
\ket{\psi} = \alpha_{00}\ket{00} + \alpha_{01}\ket{01} + \alpha_{10}\ket{10} + \alpha_{11}\ket{11},
\end{equation}
where $\ket{00}$, $\ket{01}$, $\ket{10}$, and $\ket{11}$ are four computational states and $\alpha_{jk}$ are complex numbers that satisfy normalization. For multiple qubits, when the states cannot be written as the product of two states in the subsystems (i.e., $\ket{\psi_1}\ket{\psi_2}$), we call them entangled quantum states. For example, Bell states are the most widely used two-qubit entangled states, formulated as
\begin{equation}\nonumber
\begin{aligned}
\ket{\phi^+} =(\ket{00}+\ket{11})/\sqrt{2},\\
\ket{\phi^-} = (\ket{00}-\ket{11})/\sqrt{2},\\
\ket{\psi^+} =(\ket{01}+\ket{10})/\sqrt{2},\\
\ket{\psi^-} =(\ket{01}-\ket{10})/\sqrt{2}.
\end{aligned}
\end{equation}
Denote $\ket{\psi(t)}$ as the state vector for a closed quantum system, its evolution can be described by the Schr\"{o}dinger equation \cite{42}:
\begin{equation}\label{eq5}
\begin{aligned}
i\hbar\frac{\partial}{\partial{t}}\ket{\psi(t)}=[H_0+\sum_k u_k H_k]\ket{\psi(t)}, \ket{\psi(0)} = \ket{\psi_0},
\end{aligned}
\end{equation}
where $\hbar$ is the reduced Planck constant and is usually set to 1 in atomic units. $H_0$ denotes the free Hamiltonian, and $\sum_k u_k H_k$ represents the control Hamiltonian, with $u_k$ being the external control fields. The solution of \eqref{eq5} is given by \eqref{eq6} \cite{43}, where $U(t,t_0)$ is an evolution operator satisfying \eqref{eq7}:
\begin{equation}\label{eq6}
\begin{aligned}
\ket{\psi(t)}=U(t,t_0)\ket{\psi(t_0)},
\end{aligned}
\end{equation}
\begin{equation}\label{eq7}
\begin{aligned}
i\hbar\frac{\partial{U}}{\partial{t}}=\widehat{H}U,
\end{aligned}
\end{equation}
where $\widehat{H}$ denotes the Hamiltonian operator representing the total energy of the system.
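For piecewise-constant controls, the evolution \eqref{eq6} and \eqref{eq7} reduces to a product of matrix exponentials. The following Python sketch (with $\hbar=1$) is a minimal simulator of this kind; it is an illustration, not the simulation code used in this work.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def evolve(psi0, H0, Hks, controls, dt):
    # Integrate the Schrodinger equation above with piecewise-constant
    # controls: over each step, U = exp(-i (H0 + sum_k u_k H_k) dt).
    psi = psi0.astype(complex)
    for u in controls:                 # u: amplitudes u_k at this step
        H = H0 + sum(uk * Hk for uk, Hk in zip(u, Hks))
        psi = expm(-1j * H * dt) @ psi
    return psi
\end{verbatim}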
\section{METHODOLOGY}
In this section, the framework of auxiliary task-based deep reinforcement learning for quantum control is presented. Then two key elements that contribute to the improved control performance for quantum systems are introduced, including guided reward function design and reward prediction using an auxiliary task. Finally, the implementation detail of AT-DRL for quantum control is summarized.
\subsection{Framework of Auxiliary Task-based Deep Reinforcement Learning for Quantum Control}
\begin{figure*}[!t]
\centering
\includegraphics[width=1\linewidth]{framework.pdf}
\caption{The framework of AT-DRL for quantum control. (a) Interaction of the agent with the environment. The agent needs to constantly interact with the environment in order to enhance their understanding. The actor selects an action $a_t$ based on the behavior policy and transmits it to the simulation environment for execution. Upon completing the execution of $a_t$, the simulation environment provides a reward $r_t$ and a new state $s_{t+1}$. The specific design format of the reward $r_t$ will be elaborated in Section III-B. (b) The actor stores the state transition process including $(s_i,a_i,r_i,s_{i+1},done)$ in the replay memory buffer, thereby building a training dataset for the online network. (Step 1) Subsequently, a mini-batch of $N$ transition data is randomly sampled from the replay memory buffer, serving as the training data for the online policy network and the online Q network. The parameters of Actor network and Critic network are optimized using the Adam optimizer (Step 2). (c) Reward Prediction Auxiliary Task. An auxiliary neural network is established to predict rewards provided by the environment, which mitigates the issue of reward sparsity. The output $r_{pre\_t}$ of the auxiliary network is also added to the replay memory buffer. The network shares some parameters with the main network, which facilitates the iteration of the main task network during auxiliary task training.}
\label{fig:framework}
\end{figure*}
To achieve accurate control of quantum systems, we utilize DDPG as the basic algorithm owing to its capability of generating continuous control policies. As an extension of DQN, DDPG contains the Actor-Critic framework \cite{konda1999actor}, where the Actor generates the policy $\pi(a|s)$ and outputs actions for state $s$, and the Critic evaluates the effectiveness of a strategy using a value function $Q(s,a)$. Both networks have their corresponding target networks. Thus, four networks in all are involved in the DDPG framework, i.e., the Actor network $\mu(\cdot|\theta^{\mu})$, the Critic network $Q(\cdot|\theta^{Q})$, the Target Actor network $\mu'(\cdot|\theta^{\mu'})$, and the Target Critic network $Q'(\cdot|\theta^{Q'})$. The process of searching for a good policy using DDPG is defined as the main task in this work.
In the standard RL paradigm, the training of RL agents is realized through trial and error, where the environment provides a scalar reward that encourages good actions and weakens bad actions. When applying it to control tasks, it is highly desirable to design a suitable reward function to overcome the sparse reward signal, which is usually the case in quantum control problems \cite{31}. Inspired by the concept of the potential energy reward function \cite{47}, we design a guided reward function that encourages the agent to find a good policy for achieving incremental improvements in fidelity. To make full use of the valuable reward, we introduce an auxiliary task aiming at predicting new reward signals for new states. To associate the main task and the auxiliary task together, the parameters of the auxiliary task are partially shared with the Actor in the main task and are optimized in a sparse way \cite{52}.
The whole framework of AT-DRL for quantum control is illustrated in Fig. \ref{fig:framework}, with three parts. (a) The agent interacts with the environment to enhance their understanding, where the actor selects an action $a_t$ based on the behavior policy and transmits it to the simulation environment for execution. Upon completing the execution of $a_t$, the RL agent evolves into a new state $s_{t+1}$. (b) The guided reward signal $r_t$ is obtained based on the fidelity between the current state $s_{t+1}$ and the target state. The state transition process including $(s_i,a_i,r_i,s_{i+1},done)$ in the replay memory buffer and a mini-batch of transitions is randomly sampled from the replay memory buffer, serving as the training data for the online policy network and the online Q network. (c) An auxiliary neural network is established to predict rewards provided by the environment, which mitigates the issue of reward sparsity. The output $r_{pre\_t}$ of the auxiliary network is also added to the replay memory buffer. The network shares some parameters with the main network, which facilitates the iteration of the main task network during auxiliary task training.
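The parameter-sharing idea in Fig. \ref{fig:framework} can be sketched as follows; the layer sizes and the exact point where the auxiliary head branches off are illustrative assumptions, not the architecture reported in this paper.
\begin{verbatim}
import torch
import torch.nn as nn

class ActorWithAuxiliary(nn.Module):
    # The actor (main task) and the reward-prediction head (auxiliary
    # task) share a feature trunk, so auxiliary-loss gradients also
    # shape the actor's representation.
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(            # shared parameters
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.policy_head = nn.Sequential(      # main task: action a_t
            nn.Linear(hidden, action_dim), nn.Tanh())
        self.reward_head = nn.Linear(hidden + action_dim, 1)

    def forward(self, state):
        h = self.trunk(state)
        action = self.policy_head(h)
        r_pre = self.reward_head(torch.cat([h, action], dim=-1))
        return action, r_pre                   # r_pre: predicted reward
\end{verbatim}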
\subsection{Guided Reward Function Design}
The goal of reinforcement learning (RL) is to derive an agent's policy that maximizes its utility in sequential decision-making tasks. In the standard formulation, the utility of an agent is defined via its reward function, which plays a critical role in providing sound specifications of the task goals and supporting the agent's learning process \cite{44}. There are different perspectives on reward design; whichever is adopted, a well-designed reward function is extremely important in reinforcement learning.
In quantum control, fidelity is widely used to measure the similarity between quantum states and therefore evaluates the performance of the control task \cite{nielsen2010quantum}. For a state transfer problem, we can utilize the following fidelity
\begin{equation}\label{eq14}
\begin{aligned}
F(\ket{\psi_f},\ket{\psi}) = |\langle\psi_f|\psi\rangle|^2,
\end{aligned}
\end{equation}
where $\ket{\psi}$ and $\ket{\psi_f}$ denote the actual state and the target state, respectively. For each step $t$ with the quantum state $\ket{\psi_t}$, we have $F_t=|\langle\psi_f|\psi_t\rangle|^2$.
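Numerically, the fidelity \eqref{eq14} is a single inner product; a minimal sketch:
\begin{verbatim}
import numpy as np

def fidelity(psi_f, psi):
    # F = |<psi_f|psi>|^2 for pure states, as defined above.
    return np.abs(np.vdot(psi_f, psi)) ** 2

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
e00 = np.array([1, 0, 0, 0])                 # |00>
print(fidelity(bell, e00))                   # 0.5
\end{verbatim}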
In order to drive the quantum system to the target state, we design a two-phase ``guided'' reward function. First, the reward function is divided into two phases based on whether the ideal fidelity has been achieved. Then, to prevent premature termination of exploration, we draw on the concept of a potential-energy reward function \cite{47} to incentivize further improvement in fidelity. By analogy with potential energy in physics, the reward function assigns a positive reward to the agent when it transitions from a high potential-energy state to a low potential-energy state, and a negative reward when the agent moves from a low potential-energy state to a high one. This approach follows the paradigm of the potential-based Markov Decision Process (MDP), whose reward function is formulated as
\begin{equation}
\begin{aligned}
& R^{\prime}\left(s, a, s^{\prime}\right)=R\left(s, a, s^{\prime}\right)+P\left(s, s^{\prime}\right), \\
& P\left(s, s^{\prime}\right)=\gamma \Phi\left(s^{\prime}\right)-\Phi(s),
\end{aligned}
\end{equation}
where $R$ is the original reward function and $R^{\prime}$ is the shaped reward function; $P$ denotes the potential-based shaping term and $\Phi$ the potential function. Because $P$ only adds the (discounted) potential difference between successive states, the optimal policy of the MDP remains unchanged \cite{47}.
For quantum problems, we define the fidelity gap between the current state and the target state at step $t$ as $e_t=1-F_t$ and utilize it as the potential energy to design a guided reward function. Here, ``guided'' means that the environment rewards the quantum system whenever it evolves toward the target state, i.e., when $e_t < e_{t-1}$. In particular, we utilize the following function
\begin{equation}\label{eq15}
r_{t}=\begin{cases}
10000+1000\left(e_{t-1}-e_{t}\right), & F_t \geq f_0, \\
100\,F_t+1000\left(e_{t-1}-e_{t}\right), & F_t<f_0 \text{ or } t \geq N_{max},
\end{cases}
\end{equation}
where $F_t$ is the fidelity at step $t$, $f_0$ is the ideal fidelity, and $N_{max}$ is the maximum number of exploration steps per episode.
The reward signal is given in two phases based on whether $f_0$ has been reached \cite{45}: the first phase involves a fixed reward of 10000, and the second a reward proportional to the fidelity plus the improvement over the previous step. When the fidelity reaches the ideal level, the fixed reward encourages the agent to keep the fidelity above $f_0$. If the current fidelity falls below $f_0$, the reward is directly proportional to the fidelity, facilitating efficient evolution of the system toward the desired state while assigning smaller rewards at low fidelity levels \cite{46}.
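For concreteness, a minimal Python sketch of this guided reward is given below. The function name and the default value of $f_0$ are illustrative assumptions rather than the exact settings of our experiments; the numerical constants follow Eq.~(\ref{eq15}).
\begin{verbatim}
def guided_reward(F_t, e_prev, f0=0.99):
    # Two-phase guided reward of Eq. (15).
    # F_t: fidelity at the current step
    # e_prev: fidelity gap 1 - F at the previous step
    e_t = 1.0 - F_t                   # gap used as "potential energy"
    if F_t >= f0:                     # phase 1: ideal fidelity reached
        r = 10000 + 1000 * (e_prev - e_t)
    else:                             # phase 2: F_t < f0 (or t >= N_max)
        r = 100 * F_t + 1000 * (e_prev - e_t)
    return r, e_t
\end{verbatim}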
\subsection{Predicting Reward using Auxiliary Task}
During the evolutionary process, the quantum system must identify states with high rewards to effectively learn value functions and strategies. However, the rewards given by the environment tend to be sparsely distributed and might only arrive at the end of the control pulses \cite{48}. For a simple problem with a limited state space, random exploration is sufficient, and a certain fraction of effective state transfers can be guaranteed. As the problem's complexity increases, the probability of succeeding through random exploration becomes small. The sparse feedback signals fail to indicate promising exploration directions for the agent, making it difficult to build up local knowledge. This blind exploration results in low sample efficiency in experience replay and therefore low efficiency for the state preparation problem; in this case, the RL algorithm struggles to converge. What, then, should the agent learn, and how, when the reward is not immediately available?
To overcome the issue of sparse rewards, we propose an auxiliary task that predicts rewards, with the purpose of making the reward signal dense enough to provide the agent with strong guidance. The introduction of auxiliary tasks has been shown to let RL agents explore the environment more efficiently \cite{49,50,51}, for instance enabling reinforcement-learning-based navigation in real-world street view \cite{mirowski2018efficient}. Considering the sparse reward obtained from fidelity, we propose an auxiliary-task-based DRL method for quantum control. Specifically, we incorporate a neural network that fits the environment's reward signal, allowing the agent to predict the reward of executing an action from the output of the auxiliary network before the execution ends. To update the parameters of the main and auxiliary networks efficiently, we utilize sparse sharing \cite{52}.
Sparse sharing means that each task has its own sub-network and that all sub-networks perform their respective tasks independently while sharing some parameters. When the auxiliary task is trained, updates to the shared parameters also advance the main-task network. By sharing these parameters, the agent must balance improving its performance with respect to the global reward $r_t$ against improving its performance on the auxiliary task. If the main task and the auxiliary task are strongly correlated, the two sub-networks have a high parameter-overlap rate; conversely, if they are weakly correlated, the overlap rate is low. Although each task is trained only through its corresponding sub-network, the shared parameters are updated multiple times during the learning of every network.
To predict the reward with high efficiency, the parameters of the auxiliary network are optimized by supervised gradient descent. At time $t$, the main network outputs the action $a_t$, which is then executed by the agent. The agent receives the reward $r_t$ from the environment, which also serves as the supervisory signal for the auxiliary task. The output of the auxiliary network is the predicted reward $r_{pre\_t}$. To make the predicted value approach the actual reward as closely as possible, we update the parameters of the auxiliary network using the squared loss
\begin{equation}\label{eq16}
\begin{aligned}
J(\theta)=(r_t - r_{pre\_t})^2.
\end{aligned}
\end{equation}
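A minimal PyTorch sketch of this construction is shown below. The layer widths and the choice to share the first two Actor layers with the reward-prediction head are illustrative assumptions (a hard-sharing simplification of sparse sharing \cite{52}), not the exact architecture used in our experiments.
\begin{verbatim}
import torch
import torch.nn as nn

class ActorWithRewardHead(nn.Module):
    # Actor whose early layers are shared with an auxiliary
    # reward-prediction head, so auxiliary-task updates also
    # move the overlapping main-task parameters.
    def __init__(self, n_features, n_actions):
        super().__init__()
        self.shared = nn.Sequential(          # shared by both tasks
            nn.Linear(n_features, 300), nn.ReLU(),
            nn.Linear(300, 800), nn.ReLU())
        self.policy_head = nn.Sequential(     # main task: a in [-1, 1]
            nn.Linear(800, n_actions), nn.Tanh())
        self.reward_head = nn.Sequential(     # auxiliary task: predict r_t
            nn.Linear(800 + n_actions, 64), nn.ReLU(),
            nn.Linear(64, 1))

    def forward(self, s):
        return self.policy_head(self.shared(s))

    def predict_reward(self, s, a):
        h = torch.cat([self.shared(s), a], dim=-1)
        return self.reward_head(h).squeeze(-1)

# auxiliary update (Eq. 16): supervised regression onto the
# observed reward; backpropagation through predict_reward also
# updates the shared parameters.
# aux_loss = ((net.predict_reward(s, a) - r) ** 2).mean()
\end{verbatim}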
\subsection{Implementation}
The idea of the DDPG algorithm for the quantum state preparation problem is as follows. First, the parameters $\theta^{Q}$, $\theta^{\mu}$, $\theta^{Q'}$, and $\theta^{\mu'}$ of the four networks and the experience replay buffer are initialized, as well as the initial state $s_{t_0}$ of the quantum system. In this paper, we use piecewise-constant control pulses and set the duration of each segment to be equal: the total time $T$ is divided into $N$ equal time steps with step size $dt=T/N$. At each time step, the action $a_t$ is determined by the current policy and the action noise. After each step of length $dt$, depending on the current state, the environment returns the reward $r_t$ and the next state $s_{t+1}$ is observed. Each step stores the quintuple $(s_t,a_t,r_t,s_{t+1},done)$ in the replay buffer. At the end of the exploration process, a specified number of quintuples $(s_i,a_i,r_i,s_{i+1},done)$ are randomly sampled from the replay buffer for the networks to learn from. The Critic network updates its parameters by continuously decreasing the loss value, i.e.,
\begin{equation}\label{eq8}
\begin{aligned}
y_i = r_i + \gamma Q'(s_{i+1},\mu'(s_{i+1}|\theta^{\mu'})|\theta^{Q'}),
\end{aligned}
\end{equation}
\begin{equation}\label{eq9}
\begin{aligned}
L = \frac{1}{N} \sum_i (y_i - Q(s_i,a_i|\theta^Q))^2,
\end{aligned}
\end{equation}
where $Q(\cdot|\theta^{Q})$ denotes the output of the Critic network, $\mu'(\cdot|\theta^{\mu'})$ is the output of the Target Actor network, and $Q'(\cdot|\theta^{Q'})$ stands for the output of the Target Critic network. The quantities $s_i$, $a_i$, and $r_i$ represent the state, action, and reward at the $i$th step, respectively.
The policy network uses the following gradient descent method for parameter updates:
\begin{equation}\label{eq10}
\begin{aligned}
\nabla_{\theta^\mu} J \approx \frac{1}{N} \sum_i \nabla_a Q\left(s, a \mid \theta^Q\right)|_{s=s_i, a=\mu\left(s_i\right)} \nabla_{\theta^\mu} \mu\left(s \mid \theta^\mu\right)|_{s_i}.
\end{aligned}
\end{equation}
The target network update equation is
\begin{equation}\label{eq11}
\begin{aligned}
\theta^{Q'} \leftarrow \tau \theta^Q + (1-\tau)\theta^{Q'},
\end{aligned}
\end{equation}
\begin{equation}\label{eq12}
\begin{aligned}
\theta^{\mu'} \leftarrow \tau \theta^{\mu} + (1-\tau)\theta^{\mu'},
\end{aligned}
\end{equation}
where $\mu(\cdot|\theta^{\mu})$ denotes the output of the Actor network, $\theta^{\mu}$, $\theta^{\mu'}$, $\theta^{Q}$, and $\theta^{Q'}$ represent the parameters of the Actor network, Target Actor network, Critic network, and Target Critic network, respectively.
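The complete learning step can be summarized by the following simplified PyTorch sketch of Eqs.~(\ref{eq8})--(\ref{eq12}). The $(1-done)$ masking of the bootstrap term and the value $\tau=0.005$ are conventional implementation assumptions, and the Critic is assumed to take the state-action pair as input.
\begin{verbatim}
import torch

def ddpg_update(batch, actor, critic, actor_t, critic_t,
                opt_actor, opt_critic, gamma=0.99, tau=0.005):
    s, a, r, s_next, done = batch
    # target y_i (Eq. 8); target networks are held fixed
    with torch.no_grad():
        y = r + gamma * (1 - done) * critic_t(s_next, actor_t(s_next))
    # Critic loss L (Eq. 9)
    critic_loss = ((y - critic(s, a)) ** 2).mean()
    opt_critic.zero_grad(); critic_loss.backward(); opt_critic.step()
    # Actor update via the deterministic policy gradient (Eq. 10)
    actor_loss = -critic(s, actor(s)).mean()
    opt_actor.zero_grad(); actor_loss.backward(); opt_actor.step()
    # soft target updates (Eqs. 11-12)
    for p_t, p in zip(critic_t.parameters(), critic.parameters()):
        p_t.data.mul_(1 - tau).add_(tau * p.data)
    for p_t, p in zip(actor_t.parameters(), actor.parameters()):
        p_t.data.mul_(1 - tau).add_(tau * p.data)
\end{verbatim}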
After every 100 episodes, we output the average evaluation reward and fidelity to monitor the learning progress. When these curves stabilize, the network is considered to have learned good parameters. The complete execution process is summarized in Algorithm 1.
\begin{algorithm*}[t]
\caption{Algorithm description for AT-DRL}
\hspace*{0.02in} {\bf Input:}
maximum steps $N_{max}$, number of qubits $K$, Hilbert-space dimension $n=2^K$, state feature dimension $n_{features}=2n$, action space dimension $n_{actions}$ \\
\hspace*{0.02in} {\bf Output:}
optimal control values
\begin{algorithmic}[1]
\State Randomly initialize the Critic network $Q(s,a|\theta^{Q})$ and the Actor network $\mu(s|\theta^{\mu})$ with weights $\theta^{Q}$ and $\theta^{\mu}$;
\State Initialize the target networks $Q'$ and $\mu'$ with weights $\theta^{Q'}$ $\leftarrow$ $\theta^{Q}$ and $\theta^{\mu'}$ $\leftarrow$ $\theta^{\mu}$;
\State Initialize the auxiliary task network parameters;
\State Initialize replay buffer R;
\For{episode = 1, M}
\State Initialize the Ornstein-Uhlenbeck process for action exploration;
\State Set the initial quantum state as $s_1$;
\For{$t$ = 1, $N_{max}$; $done = false$}
\State Select action $a_t = \mu(s_t|\theta^\mu)+OU_t$ according to the current policy and the Ornstein-Uhlenbeck exploration noise $OU_t$;
\State Obtain the reward prediction $r_{pre\_t}$ according to the auxiliary task network;
\State Execute action $a_t$ and observe reward $r_t$ and new state $s_{t+1}$ and $done$ signal;
\State Store transition ($s_t$, $a_t$, $r_t$, $s_{t+1}$, $done$, $r_{pre\_t}$) in R;
\State Sample a random minibatch of $N$ transitions ($s_t$, $a_t$, $r_t$, $s_{t+1}$, $done$, $r_{pre\_t}$) from R;
\State Set $y_i = r_i + \gamma Q'(s_{i+1},\mu'(s_{i+1}|\theta^{\mu'})|\theta^{Q'})$;
\State Update the Critic network by minimizing the loss $L = \frac{1}{N} \sum_i (y_i - Q(s_i,a_i|\theta^Q))^2$;
\State Update the Actor network by using the sampled policy gradient
$$
\nabla_{\theta^\mu} J \approx \frac{1}{N} \sum_i \nabla_a Q\left(s, a \mid \theta^Q\right)|_{s=s_i, a=\mu\left(s_i\right)} \nabla_{\theta^\mu} \mu\left(s \mid \theta^\mu\right)|_{s_i};
$$
\State Update the reward-prediction network by batch gradient descent on $J(\theta)=\frac{1}{N} \sum_{i=1}^N (r_{pre\_i}-r_i)^2$;
\State Update the target networks $\theta^{Q'} \leftarrow \tau \theta^Q + (1-\tau)\theta^{Q'}$ and $\theta^{\mu'} \leftarrow \tau \theta^{\mu} + (1-\tau)\theta^{\mu'}$.
\EndFor
\EndFor
\end{algorithmic}
\end{algorithm*}
\section{Numerical Simulation}
In order to verify the performance of the proposed AT-DRL for quantum control, we implement numerical simulations of state preparation on one-qubit, two-qubit, and eight-qubit systems.
\subsection{Parameters Setting}
The neural networks, including the Actor network and the Critic network, both utilize four hidden layers with 300, 800, 1600, and 800 neurons, respectively. The number of neurons in the output layer is determined by the number of control channels in the quantum system. For example, the one-qubit system is controlled by one channel, so there is only one neuron in the output layer, while the eight-qubit system is controlled by eight channels, so the output layer has eight neurons. Since the Critic network guides the Actor network, the Critic must learn faster than the Actor to ensure that the Actor updates its parameters in the correct direction. We set the learning rates of the Actor network and the Critic network as $\alpha_{Actor}=10^{-4}$ and $\alpha_{Critic}=10^{-3}$, respectively. In the simulation, the discount factor is taken as $\gamma = 0.99$ and the parameters are optimized by the Adam algorithm with batch size $B = 64$. The action space is defined as $[-1,1]$, meaning that each action can be any real number between $-1$ and $1$. In the whole network setting, we thus use a six-layer neural network with an input layer, an output layer, and four hidden layers. For the comparison with the DQN method, we use an input layer, an output layer, and one hidden layer containing 10 neurons. The learning rate is set as $\alpha_{DQN}=0.01$, and the batch size is $B = 32$ \cite{53}. The action space is the same as in the AT-DRL method.
Before inputting quantum states (which are complex vectors) into the neural networks, we first transform them into real vectors using the following equation \cite{31}:
\begin{equation}\nonumber
\begin{gathered}
s_j=\left[\Re\left(\left\langle 0 \mid \psi_j\right\rangle\right), \Re\left(\left\langle 1 \mid \psi_j\right\rangle\right), \ldots, \Re\left(\left\langle n-1 \mid \psi_j\right\rangle\right),\right. \\
\left.\quad \Im\left(\left\langle 0 \mid \psi_j\right\rangle\right), \Im\left(\left\langle 1 \mid \psi_j\right\rangle\right), \ldots, \Im\left(\left\langle n-1 \mid \psi_j\right\rangle\right)\right],
\end{gathered}
\end{equation}
where $\{|k\rangle\}_{k=0}^{n-1}$ is the computational basis of the underlying Hilbert space (of dimension $n=2^K$ for a $K$-qubit system), and $\Re(\cdot)$ and $\Im(\cdot)$ denote the real and imaginary parts of a complex number.
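This transformation is straightforward to implement; a minimal NumPy sketch (with an illustrative function name) is:
\begin{verbatim}
import numpy as np

def state_to_features(psi):
    # concatenate real and imaginary parts of the amplitudes <k|psi>
    psi = np.asarray(psi, dtype=complex)
    return np.concatenate([psi.real, psi.imag])

state_to_features([1, 0])   # |0> of one qubit -> [1., 0., 0., 0.]
\end{verbatim}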
\subsection{One-qubit Quantum Systems}
We consider the one-qubit quantum system \cite{53}, whose Hamiltonian is
\begin{equation}
\begin{aligned}
H[J(t)]=4 J(t) \sigma_z+h \sigma_x,
\end{aligned}
\end{equation}
where $\sigma_x$, $\sigma_y$, and $\sigma_z$ are the Pauli matrices, i.e.,
$$
\sigma_x=\left(\begin{array}{ll}
0 & 1 \\
1 & 0
\end{array}\right), \quad \sigma_y=\left(\begin{array}{cc}
0 & -i \\
i & 0
\end{array}\right), \quad \sigma_z=\left(\begin{array}{cc}
1 & 0 \\
0 & -1
\end{array}\right).
$$
The whole Hamiltonian describes a singlet-triplet qubit or a single spin with energy gap $h$ under tunable control fields \cite{53}. To facilitate calculation, we set $h = 1$. The goal is to find an appropriate control $J(t)$ so that the system evolves from the initial state $\ket{\psi_0}$ to the target state $\ket{\psi_f}$ with high fidelity. In the simulation, the fixed evolution time of each step is $\pi / 20$.
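Under piecewise-constant control, one evolution step and the resulting fidelity can be simulated as follows (a minimal sketch; the chosen control value $J=0.5$ is purely illustrative):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def step(psi, J, h=1.0, dt=np.pi / 20):
    # evolve |psi> for one segment under H[J] = 4 J sigma_z + h sigma_x
    H = 4 * J * sz + h * sx
    return expm(-1j * H * dt) @ psi

def fidelity(psi, psi_f):
    return abs(np.vdot(psi_f, psi)) ** 2   # |<psi_f|psi>|^2

psi = step(np.array([0, 1], dtype=complex), J=0.5)   # start from |1>
F = fidelity(psi, np.array([1, 0], dtype=complex))   # target |0>
\end{verbatim}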
For the one-qubit system, good performance can be attained using the DDPG algorithm alone. Thus, we only use the guided reward and do not utilize the auxiliary task. In particular, we consider two cases for the target state: an eigenstate and a superposition state.
\subsubsection{Eigenstate Preparation}
\begin{figure*}[!t]
\centering
\subfloat[]{\includegraphics[width=2.6in]{Figure2-a.pdf}%
\label{fig_first_case}}
\hfil
\subfloat[]{\includegraphics[width=2.6in]{Figure4-b.pdf}%
\label{fig_second_case}}
\hfil
\subfloat[]{\includegraphics[width=1.9in]{Figure4-c.pdf}%
\label{fig_second_case}}
\caption{Preparation of $\ket{0}$ on a one-qubit quantum system. (a) the whole learning process, (b) the fidelity and control laws of each step within one episode, and (c) the evolution trajectory on the Bloch sphere.}
\label{fig_sim}
\end{figure*}
\begin{figure*}[!t]
\centering
\subfloat[]{\includegraphics[width=2.6in]{Figure3-a.pdf}%
\label{fig_first_case}}
\hfil
\subfloat[]{\includegraphics[width=2.6in]{Figure5-b.pdf}%
\label{fig_second_case}}
\hfil
\subfloat[]{\includegraphics[width=1.9in]{Figure5-c.pdf}%
\label{fig_second_case}}
\caption{Preparation of $\ket{1}$ on a one-qubit quantum system. (a) the whole learning process, (b) the fidelity and control laws of each step within one episode, and (c) the evolution trajectory on the Bloch sphere.}
\label{fig_sim}
\end{figure*}
As shown in Fig. 2, the initial state is $\ket{1}$ and the target state is $\ket{0}$. Considering the results in Fig. 2(a), the fidelity of the system reaches a high value at the 300th episode and is around 0.9993 at the end (taking the average of the last 100 episodes). In Fig. 3, the initial state is $\ket{0}$ and the target state is $\ket{1}$; according to Fig. 3(a), the system is driven to a state with a fidelity of 0.9976 with respect to the target state $\ket{1}$. Figs. 2(b) and 3(b) show the fidelity and the action values obtained from the Actor network, demonstrating that our method finds a control strategy with 11 control pulses that achieves a fidelity above 0.99. Fig. 2(c) and Fig. 3(c) visualize the trajectories of the state in the Bloch representation \cite{lou2011state}.
\subsubsection{Superposition State Preparation}
\begin{figure*}[!t]
\centering
\subfloat[]{\includegraphics[width=2.6in]{Figure4-a.pdf}%
\label{fig_first_case}}
\hfil
\subfloat[]{\includegraphics[width=2.6in]{Figure6-b.pdf}%
\label{fig_second_case}}
\hfil
\subfloat[]{\includegraphics[width=1.9in]{Figure6-c.pdf}%
\label{fig_second_case}}
\caption{Preparation of superposition state in a one-qubit quantum system with the initial state $\ket{1}$. (a) the whole learning process, (b) the fidelity and control laws of each step within one episode, and (c) the evolution trajectory on the Bloch sphere.}
\label{fig_sim}
\end{figure*}
\begin{figure*}[!t]
\centering
\subfloat[]{\includegraphics[width=2.6in]{Figure5-a.pdf}%
\label{fig_first_case}}
\hfil
\subfloat[]{\includegraphics[width=2.6in]{Figure7-b.pdf}%
\label{fig_second_case}}
\hfil
\subfloat[]{\includegraphics[width=1.9in]{Figure7-c.pdf}%
\label{fig_second_case}}
\caption{Preparation of superposition state in a one-qubit quantum system with the initial state $\ket{0}$. (a) the whole learning process, (b) the fidelity and control laws of each step within one episode, and (c) the evolution trajectory on the Bloch sphere.}
\label{fig_sim}
\end{figure*}
We also implement two simulations, the first with the initial state $\ket{1}$ and the target state $(\frac{1}{2}+\frac{1}{2} i)\ket{1} + (\frac{1}{2}+\frac{1}{2} i)\ket{0}$, and the second with the initial state $\ket{0}$ and the same target state. The simulation results are shown in Fig. 4 and Fig. 5. From Fig. 4(a), both the DDPG algorithm and the DQN algorithm gradually converge after around 750 episodes, but with different final fidelities: DDPG reaches about 0.9991, while DQN only achieves about 0.9163, demonstrating that the DDPG algorithm outperforms the DQN algorithm. Similar results occur in Fig. 5(a), where the fidelity reaches 0.9941 with the DDPG algorithm, while the DQN algorithm only reaches 0.9327. In addition, the control obtained by the DQN algorithm exhibits unstable oscillations compared with the control obtained by the DDPG algorithm.
The above simulations show that the DDPG algorithm achieves more accurate control than the DQN algorithm for one-qubit state preparation.
\subsection{Two-qubit Quantum Systems}
Now, we consider a two-qubit quantum system, with its Hamiltonian as \cite{53}
$$H(t)=H_0 + u_1(t) H_1+u_2(t)H_2,$$
where the free Hamiltonian is
$H_0=\sigma_x \otimes \sigma_x + \sigma_y \otimes \sigma_y$, and the control Hamiltonians are $H_1=\sigma_z^{(1)} \otimes I_2$ and $H_2= I_2\otimes \sigma_z^{(2)}$, where $I_2$ denotes the identity operator for two-level systems. The control laws $u_1(t)$ and $u_2(t)$ take values in $[-1,1]$. The evolution step size of the two-qubit system is set to $\pi / 40$. In the simulation, we compare the AT-DRL algorithm with the basic DDPG and DQN algorithms.
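The three Hamiltonian terms are easily constructed via Kronecker products; a minimal NumPy sketch:
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

H0 = np.kron(sx, sx) + np.kron(sy, sy)   # free Hamiltonian
H1 = np.kron(sz, I2)                     # control on qubit 1
H2 = np.kron(I2, sz)                     # control on qubit 2

def H(u1, u2):
    # total Hamiltonian for controls u1, u2 in [-1, 1]
    return H0 + u1 * H1 + u2 * H2
\end{verbatim}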
\subsubsection{Eigenstate Preparation}
We choose two target states $s_1 = \ket{00}$ and $s_2 = \ket{11}$ to verify the effectiveness of the algorithm.
\begin{figure*}[!t]
\centering
\subfloat[]{\includegraphics[width=3.5in]{Figure6-a.pdf}%
\label{fig_first_case}}
\hfil
\subfloat[]{\includegraphics[width=3.5in]{Figure8-b.pdf}%
\label{fig_second_case}}
\caption{Preparation of eigenstate in a two-qubit quantum system with the initial state $\ket{00}$. (a) the whole learning process, (b) the fidelity and control laws of each step within one episode.}
\label{fig_sim}
\end{figure*}
\begin{figure*}[!t]
\centering
\subfloat[]{\includegraphics[width=3.5in]{Figure7-a.pdf}%
\label{fig_first_case}}
\hfil
\subfloat[]{\includegraphics[width=3.5in]{Figure9-b.pdf}%
\label{fig_first_case}}
\caption{Preparation of eigenstate in a two-qubit quantum system with the initial state $\ket{11}$. (a) the whole learning process, (b) the fidelity and control laws of each step within one episode.}
\label{fig_sim}
\end{figure*}
The simulation results are shown in Fig. 6 and Fig. 7, where Fig. 6 corresponds to the state transfer from $s_1$ to $s_2$, and Fig. 7 corresponds to the state transfer from $s_2$ to $s_1$. Based on Fig. 6(a) and the simulation data, AT-DRL achieves a stable fidelity of 0.9984 after the 300th episode, while the basic DDPG only achieves a final fidelity of 0.9907. DQN performs much worse than the other two algorithms: it only achieves a final fidelity of 0.97 and requires more training episodes. Similar to Fig. 6(a), AT-DRL in Fig. 7(a) achieves better results than the other two algorithms.
\subsubsection{Bell State Preparation}
The Bell states are the maximally entangled states of two-qubit systems and a powerful resource for quantum communication \cite{54}. They are usually difficult to generate yet important for various applications. In the simulations, we demonstrate the effectiveness of the algorithm by preparing the two states $\ket{\phi^+}$ and $\ket{\phi^-}$ in (4).
\begin{figure*}[!t]
\centering
\subfloat[]{\includegraphics[width=3.5in]{Figure8-a.pdf}%
\label{fig_first_case}}
\hfil
\subfloat[]{\includegraphics[width=3.5in]{Figure10-b.pdf}%
\label{fig_second_case}}
\caption{Preparation of $\ket{\phi^+}$ on a two-qubit quantum system. (a) the whole learning process, and (b) the fidelity and control laws of each step within one episode.}
\label{fig_sim}
\end{figure*}
\begin{figure*}[!t]
\centering
\subfloat[]{\includegraphics[width=3.5in]{Figure9-a.pdf}%
\label{fig_first_case}}
\hfil
\subfloat[]{\includegraphics[width=3.5in]{Figure11-b.pdf}%
\label{fig_first_case}}
\caption{Preparation of $\ket{\phi^-}$ on a two-qubit quantum system. (a) the whole learning process, and (b) the fidelity and control laws of each step within one episode.}
\label{fig_sim}
\end{figure*}
The simulation results are shown in Fig. 8 and Fig. 9, where Fig. 8 represents the preparation of the Bell state $\ket{\phi^+}=\frac{1}{\sqrt{2}}\ket{00}+\frac{1}{\sqrt{2}}\ket{11}$ from the state $s_1$. As we can see, the blue and yellow lines stabilize above 0.9, indicating that both the basic DDPG algorithm and the AT-DRL algorithm perform well: in terms of the average of the last five results, the basic DDPG algorithm reaches 0.955 and the AT-DRL algorithm reaches 0.991, whereas the DQN algorithm only reaches 0.813. In terms of convergence rate, the AT-DRL algorithm achieves a fidelity greater than 0.90 at the 600th episode, a significant improvement. Fig. 9 represents the preparation of the Bell state $\ket{\phi^-}=\frac{1}{\sqrt{2}}\ket{01}+\frac{1}{\sqrt{2}}\ket{10}$ from $s_1$. The advantage of the AT-DRL algorithm is intuitively demonstrated by its rapid convergence to 0.971. In comparison, the basic DDPG algorithm takes more episodes to realize a stable learning curve in Fig. 9(a), and the DQN algorithm only reaches a fidelity of 0.92 with large fluctuations during learning.
\subsection{Multi-qubit Systems}
We conduct simulations on a spin-transfer system \cite{53}, whose Hamiltonian is
\begin{equation}
\begin{aligned}
H(t)= & C \sum_{k=1}^{K-1} (I_2^{\otimes (k-1)} \otimes S_x^{(k)} \otimes S_x^{(k+1)} \otimes I_2^{\otimes (K-k-1)} \\ &+I_2^{\otimes (k-1)} \otimes S_y^{(k)} \otimes S_y^{(k+1)} \otimes I_2^{\otimes (K-k-1)}) \\
& +\sum_{k=1}^K 2 B_k(t) I_2^{\otimes (k-1)} \otimes S_z^{(k)} \otimes I_2^{\otimes (K-k)},
\end{aligned}
\end{equation}
where $K$ is the total number of spins, and $S_x^{(k)}$, $S_y^{(k)}$ and $S_z^{(k)}$ are the spin operators of the $k$th spin. $C$ describes the constant nearest-neighbor coupling strength and is set to $C=1$ in this work. $B_k(t)$ is the time-dependent local magnetic field applied to the $k$th spin. We consider the spin transfer of an eight-qubit system, i.e., $K=8$. As demonstrated in Fig. 10, the goal is to evolve the system from the initial state $\ket{10000000}$ to the target state $\ket{00000001}$. The state $\ket{10000000}$ indicates that the leftmost qubit is spin-up and the remaining seven qubits are spin-down, while $\ket{00000001}$ indicates that the rightmost qubit is spin-up and the remaining seven qubits are spin-down.
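For completeness, the chain Hamiltonian can be assembled as in the following sketch (assuming spin-1/2 operators $S=\sigma/2$; function names are illustrative):
\begin{verbatim}
import numpy as np

sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def embed(op, k, K):
    # place a single-spin operator on site k (0-indexed) of a K-spin chain
    mats = [I2] * K
    mats[k] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def spin_chain_H(B, C=1.0):
    # H for local fields B = [B_1, ..., B_K] at one time slice
    K = len(B)
    H = sum(C * (embed(sx, k, K) @ embed(sx, k + 1, K)
                 + embed(sy, k, K) @ embed(sy, k + 1, K))
            for k in range(K - 1))
    H = H + sum(2 * B[k] * embed(sz, k, K) for k in range(K))
    return H
\end{verbatim}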
\begin{figure}[!t]
\centering
{\includegraphics[width=3in]{Figure13.pdf}%
\label{fig_first_case}}
\caption{The 8-qubit spin-transfer system.}
\end{figure}
In the simulation, we set the total time as $T=(K-1)\pi/2$ and divide $T$ into $80$ equal time intervals.
\begin{figure*}[!t]
\centering
\subfloat[]{\includegraphics[width=3.5in]{Figure11-a.pdf}%
\label{fig_first_case}}
\hfil
\subfloat[]{\includegraphics[width=3.5in]{Figure14-b.pdf}%
\label{fig_first_case}}
\caption{Quantum spin transfer of eight-qubit systems. (a) the whole evolution process and (b) the fidelity and the control laws of each step output of a complete episode.}
\label{fig_sim}
\end{figure*}
The simulation results for eight-qubit state transfer are shown in Fig. 11(a), where the blue line (AT-DRL) achieves a fidelity of 0.90 at the 500th episode. The yellow line indicates the basic DDPG, which has a slightly inferior result and reaches a fidelity of 0.894. In contrast, the DQN algorithm only attains a fidelity of 0.704. Fig. 11(b) shows that our algorithm finds a control strategy with 20 control pulses that eventually achieves a fidelity of 0.900. In the spin transfer task of the multi-qubit system, the network training of the AT-DRL algorithm converges earlier and achieves better final results.
\section{Conclusion}
In this paper, we investigated the effectiveness of continuous control policies based on the deep deterministic policy gradient and proposed an auxiliary task-based deep reinforcement learning (AT-DRL) method for quantum control. In particular, we designed a guided reward function based on the fidelity of quantum states and introduced an auxiliary task that shares training parameters with the main task to predict the reward provided by the environment. The numerical results for state preparation in different scenarios demonstrate that the proposed method outperforms the basic DDPG algorithm in terms of fidelity and greatly outperforms the basic DQN algorithm. In future work, we will extend the method to open quantum systems to achieve accurate control even in the presence of environmental disturbances.
|
{
"arxiv_id": "2302.14294",
"language": "en",
"timestamp": "2023-03-01T02:08:29",
"url": "https://arxiv.org/abs/2302.14294",
"yymm": "2302"
} | \section{Introduction}
\label{sec:introduction}
In October 2022, Elon Musk, a
self-declared ``free speech absolutist'', acquired Twitter --- the social network that he regarded as the ``de facto town square'' where public debate takes place.
Musk's takeover has been controversial and highly publicized.
Some users admire Musk and his takeover, regarding it as crucial for free speech; others have expressed concerns over increased misinformation and toxicity.
Regardless of one's stance, it is undeniable that the acquisition has led to a series of noteworthy events.
On November 04, 2022, Musk fired half of the 7,500 employees previously working at Twitter.
Two weeks later (November 17, 2022),
hundreds of employees resigned in response to an
ultimatum to commit to ``extremely hardcore'' work or leave.
These events, and the associated public backlash, prompted many users to search for alternatives.
Figure~\ref{fig:gtrends1} presents a time series of Google trend search interest for ``Twitter alternatives''.
We observe a large spike on October 28, 2022, the day after Musk's takeover.
Similarly, Figure~\ref{fig:gtrends2} shows equivalent search interest for other popular alternatives to Twitter, \emph{e.g.}\xspace Koo (an Indian micro-blogging and social networking service), and Hive (a micro-blogging service that permits NSFW mature content).
One platform that stands out as being particularly prominent is \emph{Mastodon}, a decentralized micro-blogging platform. Although released in 2016, Mastodon has anecdotally gathered significant attention since October 2022. It is part of the wider \emph{fediverse}, in which any person can create and operate their own Mastodon server (aka ``instance'').
Each Mastodon instance operates as an independent microblogging service, where users can create local accounts and enjoy similar functions to Twitter (\emph{e.g.}\xspace posting, following).
Importantly, these instances can also federate together, allowing users on one instance to follow users on another.
This means that Mastodon operates in a decentralized fashion (with people joining independent instances), while retaining the ability to interact across the entire globe.
This new paradigm has attracted significant attention and has made it an obvious candidate for users who are unhappy with the Musk acquisition (and the associated centralization of power in the hands of one individual).
\begin{figure}[t]
\centering
\begin{subfigure}{\columnwidth}
\centering
\includegraphics[width=0.8\columnwidth]{gtrends1}
\caption{}
\label{fig:gtrends1}
\end{subfigure}
\begin{subfigure}{\columnwidth}
\centering
\includegraphics[width=0.8\columnwidth]{gtrends2}
\caption{}
\label{fig:gtrends2}
\end{subfigure}
\caption{Interest over time for the search terms (a) Twitter alternatives and (b) Mastodon, Koo \& Hive Social.}
\label{fig:gtrends}
\end{figure}
This sudden interest in Mastodon offers a unique opportunity to study the migration of users between social networks.
This is particularly the case due to the differing value propositions of the two platforms, with clear contrasts in the governance and ownership of Twitter vs. Mastodon. The unusual circumstances of the migration create further dimensions of analysis.
With this context in mind, we explore the following three research questions:
\begin{itemize}
\item \textbf{RQ1} How are new users spread across Mastodon instances, and are there any consequences for decentralization?
\item \textbf{RQ2} How much (if at all) does a user's ego-centric Twitter follower network influence their migration to Mastodon?
\item \textbf{RQ3} What are the usage patterns of migrated users across the two platforms?
\end{itemize}
To address these questions, we track 136,009 unique Twitter users who moved to 2,879 unique Mastodon instances. %
The main findings related to our three RQs are as follows:
\begin{itemize}
\item There is a user-driven pressure towards centralization on Mastodon (the top 25\% most populous instances contain 96\% of the users). This pressure is counterbalanced by the greater activity of the users on smaller instances. On average, users of single-user instances post 121\% more statuses than users on bigger instances.
\item The social network of users on Twitter influences their choice of instance on Mastodon: \emph{e.g.}\xspace 4.09\% of users left the instance on which they first created an account and moved to the instance chosen by their Twitter followees who had also migrated to Mastodon. %
\item Users tend to post different content across the two platforms. On average, only 1.53\% of a user's Mastodon posts are identical to their Twitter posts. Twitter hosts more diverse topics ranging from Entertainment to Politics, whereas discussions around Fediverse and Migration dominate on Mastodon.
\end{itemize}
\section{Mastodon Primer}
\label{sec:primerMast}
Mastodon is an open-source~\cite{mastodonGithub} federated server platform released in 2016. It offers micro-blogging functionality, allowing administrators to create their own independent Mastodon servers, aka \textbf{instances}.
Each unique Mastodon instance works much like Twitter, allowing users to register new accounts and share statuses with their followers -- equivalent to tweeting on Twitter. Users can also \textbf{boost} others' statuses -- equivalent to retweeting on Twitter.
Instances can work in isolation, only allowing locally registered users to follow each other.
However, Mastodon instances can also \textbf{federate}, whereby users registered on one instance can follow users registered on another instance.
This results in the instance \textbf{subscribing} to posts performed on the remote instance, such that they can be pushed across and presented to local users.
For simplicity, we refer to users registered on the same instance as \textbf{local}, and users registered on different instances as \textbf{remote}.
Note that a user registered on their local instance does \emph{not} need to register with the remote instance to follow the remote user. Instead, a user just creates a single account with their local instance; when the user wants to follow a user on a remote instance, the user's local instance performs the subscription on the user's behalf.
This process is implemented using an underlying subscription protocol, ActivityPub~\cite{activitypub}. This makes Mastodon compatible with other decentralised micro-blogging implementations (notably, Pleroma).
The \textbf{Fediverse} refers to the growing group of ActivityPub-compatible, and therefore interconnected, applications.
When a user logs in to their local instance, they are presented with three timelines: ({\em i}\/)\xspace~a \textit{home} timeline, with statuses shared by the accounts whom the user follows; ({\em ii}\/)\xspace~a \textit{local} timeline, listing the statuses generated within the same instance; and ({\em iii}\/)\xspace~a \textit{federated} timeline, with \emph{all} statuses that have been retrieved from remote instances.
The latter is not limited to remote statuses that the user follows; rather, it is the union of remote statuses retrieved by all users on the instance.
\section{Data Collection}
\label{sec:datacollection}
\subsection{Mastodon Accounts from Twitter}
\label{sec:mastodonaccountsfromtwitter}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{datacrawled}
\caption{Temporal distribution of tweets containing (i) links to Mastodon instances and (ii) migration related keywords/ hashtags.}
\label{fig:datacrawled}
\end{figure}
We collect a global list of Mastodon instances from \url{instances.social}, which contains a comprehensive index of Mastodon instances.
We compile a set of 15,886 unique instances.
We then collect all tweets containing a link to these Mastodon instances using Twitter’s Search API.\footnote{https://api.twitter.com/2/tweets/search/all}
Additionally, we collect all tweets containing the following list of keywords related to the migration from Twitter: `mastodon', `bye bye twitter', `good bye twitter'; and hashtags \#Mastodon, \#MastodonMigration, \#ByeByeTwitter, \#GoodByeTwitter, \#TwitterMigration, \#MastodonSocial, \#RIPTwitter.
In total, we collect 2,090,940 tweets posted by 1,024,577 users between October 26, 2022 (\emph{i.e.}\xspace a day before Musk's takeover) and November 21, 2022. Figure~\ref{fig:datacrawled} shows the temporal distribution of these tweets.
We next search for Mastodon usernames in these tweets and the accompanying metadata of any account that posted a tweet (\emph{i.e.}\xspace display name, location, description, URLs, pinned tweet text).
Mastodon usernames take the form @alice@example.com or https://example.com/@alice, where alice is a username and \url{example.com} is an instance.
To map a Mastodon handle to a Twitter account, we do this search in a hierarchical fashion: We first look for Mastodon usernames in user metadata (\emph{e.g.}\xspace bio) and create a mapping between Twitter account \& Mastodon account if one is found. If the search is unsuccessful at the first step, we then look for Mastodon usernames in the tweet text.
To ensure mapping accuracy, we only map a Twitter account to a Mastodon account identified from a tweet text if both the Twitter and Mastodon usernames are identical.
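A simplified Python sketch of this extraction step is shown below; the regular expressions are our own illustration and deliberately permissive, rather than the exact patterns used by our crawler.
\begin{verbatim}
import re

# handles of the form @user@instance, e.g. "@alice@example.com"
HANDLE_RE = re.compile(r'@(\w+)@([\w.\-]+\.[A-Za-z]{2,})')
# profile URLs of the form https://instance/@user
URL_RE = re.compile(r'https?://([\w.\-]+\.[A-Za-z]{2,})/@(\w+)')

def find_mastodon_accounts(text):
    # return (username, instance) pairs mentioned in a tweet or bio
    accounts = list(HANDLE_RE.findall(text))
    accounts += [(u, d) for d, u in URL_RE.findall(text)]
    return accounts

find_mastodon_accounts("Find me at @alice@example.com from now on!")
# -> [('alice', 'example.com')]
\end{verbatim}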
Using this methodology, we identify the Mastodon accounts of 136,009 Twitter users, which are created across 2,879 unique Mastodon instances. %
We find that 72\% of Twitter users that migrated created a Mastodon account with the same username that they use on Twitter.
4\% of the Twitter users who create a Mastodon account have a (legacy) verified status (\emph{i.e.}\xspace authentic, notable, and active) on Twitter, suggesting that even well-established users have been migrating. %
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{mastodonactivity}
\caption{Weekly activity on Mastodon instances.}
\label{fig:mastodonactivity}
\end{figure}
While we track and analyze the migration of a large number of Twitter users (136,009), Musk's takeover of Twitter has likely pushed even more users to migrate than the above methodology can identify. Indeed, on November 12, 2022, Mastodon announced that over 1 million users had registered since October 27, 2022~\cite{mastodontweet}, significantly more than our methodology identifies.
To understand the wider activities on the Mastodon instances, we cross-check the new registrations on the 2,879 instances by crawling their weekly activity from Mastodon's Weekly Activity Endpoint.\footnote{https://docs.joinmastodon.org/methods/instance/\#activity}
Figure~\ref{fig:mastodonactivity} shows the weekly number of registrations, logins and statuses. We notice a large increase in all three activity metrics after the Twitter acquisition.
Of course, we cannot confirm that all these users migrated directly from Twitter. However, given the timeline of registrations, we believe that it is very likely that a large share of these new users migrated from Twitter.
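The activity crawl itself is a single public API call per instance; a minimal sketch (error handling omitted, response fields as documented at the endpoint cited above) is:
\begin{verbatim}
import requests

def weekly_activity(instance):
    # weekly registrations/logins/statuses from Mastodon's
    # public activity endpoint
    url = f"https://{instance}/api/v1/instance/activity"
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    return resp.json()   # one dict per week

weeks = weekly_activity("mastodon.social")
\end{verbatim}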
\subsection{Twitter and Mastodon Timelines.}
\label{sec:twitterandmastodontimelines}
We next crawl both the Twitter and Mastodon timelines of the migrating users identified in the previous section.
We use Twitter's Search API and Mastodon's Account's Statuses Endpoint.\footnote{\url{https://docs.joinmastodon.org/methods/accounts/#statuses}}
For each user, we crawl all tweets/statuses from October 01, 2022 to November 30, 2022. In total, we gather Twitter timelines for 94.88\% of the users.
The rest were suspended (0.08\%), deleted/deactivated (2.26\%), or the tweets were protected (2.78\%).
We crawl the timelines of 79.22\% of Mastodon users: the rest have either not posted a single status (9.20\%) or their instances were down at the time of the crawl (11.58\%). In total, we gather 16,163,600 tweets and 5,746,052 Mastodon statuses.
\subsection{Followees}
\label{sec:followees}
We also crawl the user's followees for both Twitter and Mastodon accounts.
We use the Twitter Follows API\footnote{https://api.twitter.com/2/users/:id/following
} and the Mastodon Account's Following Endpoint\footnote{https://docs.joinmastodon.org/methods/accounts/\#following} respectively.
Due to the rate limits of Twitter's API, we crawl a sub-sample of 10\% of the migrated users.
For representativeness, the sample is drawn from the followee distribution: 5\% of users from above the median value and 5\% from below.
In total, we gather followee data for 13,068 users. This covers 11,453,484 followee relationships.
\subsection{Ethical Considerations}
\label{sec:ethicalconsiderations}
The datasets in this paper include both user and post information, and therefore might have privacy implications. To avoid any data mishandling, we have exclusively collected publicly available data, following well-established ethical procedures for social data. We have obtained a waiver from the ethics committee at the authors' institution.\footnote{anonymised for double-blind submission}
We anonymize the data before use and store it in a secure silo. Upon acceptance of the paper, the anonymized data will be made available to the public, which we hope will facilitate further research.
\section{RQ1: The Centralization Paradox}
\label{sec:instances}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{accountsperinstance}
\caption{Top 30 Mastodon instances Twitter users migrated to.}
\label{fig:accountsperinstance}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{accountsinbiginstances}
\caption{Percentage of users on top 25\% instances (w.r.t number of users).}
\label{fig:accountsinbiginstances}
\end{figure}
Mastodon is a decentralized platform, in which users are (notionally) spread across thousands of independent instances.
We therefore first test if the migration has resulted in true decentralization or if Mastodon experiences a paradox, whereby the majority of users centralize upon a small number of servers.
Overall, the Twitter users in our dataset migrate to 2,879 unique Mastodon instances. Figure~\ref{fig:accountsperinstance} presents a histogram of the number of users who have joined the top 30 Mastodon instances.
The plot divides accounts into those created before the acquisition and those created after.
Interestingly, not all the Mastodon accounts advertised on Twitter in response to Elon Musk's acquisition are new: 21\% were created before the takeover.
Despite Mastodon's decentralization efforts,
we observe a clear trend towards centralization:
a large number of users migrate to a small set of instances.
In particular,
\url{mastodon.social}, a flagship Mastodon instance operated by Mastodon gGmbH, receives the largest fraction of migrated Twitter users.
We next explore the pressure towards Mastodon centralization by comparing the percentage of migrated users with the percentage of instances they join.
Figure~\ref{fig:accountsinbiginstances} plots the distribution of users across the top \% of instances. We find that nearly 96\% of users join the top 25\% of largest instances (w.r.t number of users). This centralization trend implies that a small number of instance owners, administrators and moderators have an impact on a large fraction of migrated users.
Arguably, this means that Mastodon is at risk of becoming another (semi-)centralized service.
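The headline statistic above reduces to a simple aggregation over per-instance user counts; a toy Python sketch with purely illustrative numbers:
\begin{verbatim}
import pandas as pd

# toy per-instance counts of migrated users (illustrative values)
df = pd.DataFrame({
    "instance": ["mastodon.social", "mstdn.social",
                 "fosstodon.org", "tiny.example"],
    "users": [40000, 12000, 6000, 1]})

top = df.nlargest(max(1, int(len(df) * 0.25)), "users")
share = top["users"].sum() / df["users"].sum()  # ~0.69 for this toy data
\end{verbatim}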
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\textwidth]{socialnetworkwrtinstancesizeV1}
\caption{(a) Distribution of instances w.r.t to number of users. (b) CDF of number of followers of users on different-sized instances. (c) CDF of number of followees of users on different-sized instances. (d) CDF of number of statuses of users on different-sized instances.}
\label{fig:socialnetworkwrtinstancesizeV1}
\end{figure*}
One possible explanation for this trend is that users intend to build a large social network by joining large and well-known instances.
We therefore examine the relationship between the size of instances and social networks
by analyzing the number of followers and followees of users joining different-sized instances.
We analyze all migrated users who joined Mastodon after the Twitter acquisition and whose accounts are at least 30 days old (to ensure a fair comparison). This covers 50.59\% of all migrated users.
We divide the instances based on quantiles w.r.t number of users. Figure~\ref{fig:socialnetworkwrtinstancesizeV1} presents the distribution of instances by the number of users, CDFs of the number of followers, followees, and statuses of users on different-sized instances.
Contrary to our hypothesis, users in the bigger instances tend to have smaller social networks.
13.16\% of instances have just one user, who tends to have more followers, followees, and statuses than users in more populated instances.
Paradoxically, users of single-user instances have 64.88\% more followers, follow 99.04\% more users, and post 121.14\% more statuses (on average) than users of the bigger instances.
This implies that the size of an instance has a limited impact on the size of a user's social network; rather, it mainly depends on the user's activity, engagement and networking. Hence, while large instances have more users, small instances attract more active users.
Manual inspection suggests that this is because smaller instances have more dedicated and proactive user bases (whereas larger ones accumulate more experimental users).
\section{RQ 2: Social Network Influence}
\label{sec:socialnet}
There are at least two possible reasons for platform migration from Twitter to Mastodon, particularly after the Musk takeover:
({\em i}\/)\xspace A user might have decided to move for ideological reasons, if they disagree with Musk's actions after he gained control of Twitter;
and
({\em ii}\/)\xspace A user might have decided to move because a large fraction of the accounts they follow moved (and therefore Twitter has become irrelevant as a platform for them).
Of course, these two reasons are not contradictory or mutually exclusive. In this section, we attempt to distinguish between these reasons based on the observation that if a user moves because their immediate social network moves, a large proportion of their ego network neighbourhood would also have moved with them. We argue this offers an interesting example of social contagion.
\subsection{Twitter vs. Mastodon Social Network}
\label{sec:twittervsmastodonsocialnetwork}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{twittervsmastodonsocialnetwork}
\caption{CDF of number of followers and followees of migrated users on Twitter and Mastodon.}
\label{fig:twittervsmastodonsocialnetwork}
\end{figure}
We first analyze the size of the social network (\emph{i.e.}\xspace number of followers \& followees) that the migrated users have on both Twitter and Mastodon.
Figure~\ref{fig:twittervsmastodonsocialnetwork} plots the CDF of the number of followers and followees of migrated users on both platforms.
The median followers and followees that migrated users have on Twitter are 744 and 787, respectively.
Just 152 users (0.11\% of total migrated) have no Twitter followers, and 465 (0.35\% of total migrated) have no Twitter followees.
In contrast, on Mastodon, 6.01\% of users have no followers, and 3.6\% do not follow anyone. The median numbers of followers and followees on Mastodon are 38 and 48, respectively. Interestingly, 1.65\% of migrated users gained a median of 33 \emph{more} followers on Mastodon than they have on Twitter. Overall, this confirms that these new users are yet to bootstrap a significant social network on Mastodon.
However, we emphasize that the median age of migrated accounts on Twitter is 11.5 years, in contrast to just 35 days on Mastodon.
Hence, due to these disproportionate ages, the size of the social networks on the two platforms are not directly comparable.
\subsection{Social Network Driven Migration}
\label{sec:socialnetworkeffectinmigration}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{communitymigration}
\caption{CDFs of the fraction of Twitter followees of each migrated user that (i) moved to Mastodon (blue) (ii) moved to Mastodon before the user (orange) and (iii) moved to the same instances on Mastodon as the user (green).}
\label{fig:communitymigration}
\end{figure}
We next conjecture that a user's (Twitter) social network may have an impact on their likelihood of migration. For example, if a user's friends migrate to Mastodon, this may encourage them to do the same.
To inspect this, we analyze the followees data from both Twitter and Mastodon for 10\% of the migrated users (see \S{\ref{sec:followees}}). Figure~\ref{fig:communitymigration} shows CDFs of the fraction of Twitter followees of each migrated user that
({\em i}\/)\xspace~moved to Mastodon (blue);
({\em ii}\/)\xspace~moved to Mastodon before the user (orange);
and
({\em iii}\/)\xspace~moved to the same Mastodon instances as the user (green).
We notice that just 5.99\% of each user's followees also migrate (on average).
In fact, for 3.94\% of the migrated users, none of their Twitter followees move to Mastodon.
Thus, the majority of each migrated user's social network seems reluctant to migrate, and migrated users are sometimes the first of their network to take this step.
To better understand this, we compare the date on which each migrated user joined Mastodon with that of their Twitter followees who migrated as well.
We find that, out of their social network (\emph{i.e.}\xspace their followees),
4.98\% of the migrated users were the first
and 4.58\% were the last to migrate from Twitter to Mastodon.
On average, 45.76\% of the followees of a user migrated to Mastodon before the user actually did.
We are also curious to understand if users select the same Mastodon instance as their social network. We therefore compare the instance of each migrated user with that of its Twitter followees.
On average, 14.72\% of each migrated user's followees (that move to Mastodon) join the same instance.
With 15K+ Mastodon instances, this is a considerable proportion, suggesting a clear network effect.
However, we also notice that this average is highly impacted by one flagship instance: \url{mastodon.social}.
This is the largest instance available, and is probably the best known.
Of all the migrated users whose Twitter followees move to the same instance, 30.68\% are on \url{mastodon.social}. That said, we also find small instances that attract significant proportions of a given user's Twitter followees.
For example, 4.5\% of the migrated users whose Twitter followees join them on the same instance are on \url{mastodon.gamedev.place} (a Mastodon server focused on game development and related topics). %
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{movedvisualizations}
\caption{Chord plot of switching within Mastodon instances.}
\label{fig:movedvisualizations}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{switchingusersinfluence}
\caption{CDFs of the fraction of Twitter followees of each switched user that (i) moved to first instance (blue) (ii) moved to second instance (orange) and (iii) moved to second instance before the user (green).}
\label{fig:switchingusersinfluence}
\end{figure}
\subsection{Instance Switching}
A unique feature of Mastodon is that users can easily `switch' instance. This involves migrating their data from one instance to another.
We are curious to see if this is also driven by network effects.
Overall, 4.09\% of the users have switched from the Mastodon instance they initially created an account on (hereinafter first instance) to a new instance (hereinafter second instance).
Curiously, 97.22\% of these switches happened after Musk's Twitter takeover.
This suggests that users may join initial instances, but migrate to a more suitable one once they are more experienced.
Figure~\ref{fig:movedvisualizations} shows the chord plot of switches from each user's first Mastodon instance to their second. A common pattern across these switches is that users move from general purpose/ flagship instances (\emph{e.g.}\xspace \url{mastodon.social}, \url{mastodon.online}) to more topic specific instances, \emph{e.g.}\xspace \url{sigmoid.social} (a Mastodon instance for people researching and working in Artificial Intelligence) and \url{historians.social} (a Mastodon server for people interested in history).
Interestingly, we notice a strong social network influence behind these switches.
Figure~\ref{fig:switchingusersinfluence} shows the CDFs of the fraction of Twitter followees of each switched user that
({\em i}\/)\xspace~moved to the first instance (blue);
({\em ii}\/)\xspace~moved to the second instance (orange);
and
({\em iii}\/)\xspace~moved to second instance before the user (green).
On average, 46.98\% of each user's followees (who moved to Mastodon) at some point also join the second instance, in contrast to just 11.4\% who join the first instance.
Interestingly, 77.42\% of each switching user's followees (on average) joined the second instance before the user.
This suggests that the users switched from the first instance because a large fraction of their Twitter followees moved to the second one.
\section{RQ3: Timelines Analysis}
\label{sec:content}
We are next curious to understand how people use their (two) accounts after migration.
\subsection{Twitter vs. Mastodon Activity}
\label{sec:twittervsmastodonactivity}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{twittermastodonactivity}
\caption{Temporal distribution of tweets and statuses posted by migrated users on Twitter and Mastodon respectively.}
\label{fig:twittermastodonactivity}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{twitteractivitysource}
\caption{Top 30 sources of tweets. Note the log scale on the y-axis.}
\label{fig:twitteractivitysource}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{twitteractivitysourceusers}
\caption{Number of users that use cross-posting tools daily.}
\label{fig:twitteractivitysourceusers}
\end{figure}
We first analyze the timelines of migrated users from both Twitter and Mastodon. Figure~\ref{fig:twittermastodonactivity} shows the number of tweets on Twitter and the number of statuses on Mastodon posted by migrated users each day from October 01, 2022 to November 30, 2022. We observe a continuous growth in user activity on Mastodon after the acquisition of Twitter.
However, the activity of migrated users on Twitter does not decrease in parallel, \emph{i.e.}\xspace our migrated users are using both their Twitter and Mastodon accounts simultaneously.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{images/CDF_identical_similar.png}
\caption{CDFs of fraction of each migrated user's Mastodon statuses that are identical or similar to its tweets.}
\label{fig:identicalsimilar}
\end{figure}
We next check if people are generating identical content across both platforms or are, instead, projecting multiple `personas'.
Figure~\ref{fig:identicalsimilar} plots the CDFs of the fraction of each migrated user's Mastodon statuses that are identical or similar to its tweets.
We consider a Mastodon status similar to a tweet if the cosine similarity of their sentence embeddings~\cite{reimers-2019-sentence-bert} is greater than 0.7.
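To make this matching step concrete, a minimal sketch is shown below (the model name and helper function are illustrative, not the exact pipeline used in our analysis):
\begin{verbatim}
# Sketch of the status-to-tweet matching described above.
# Assumes the sentence-transformers package; the model choice
# is illustrative.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def match_fractions(statuses, tweets, threshold=0.7):
    """For one user, return the fraction of Mastodon statuses
    that are identical to a tweet, and the fraction similar to
    one (cosine similarity > threshold)."""
    identical = sum(s in set(tweets) for s in statuses)
    s_emb = model.encode(statuses, convert_to_tensor=True)
    t_emb = model.encode(tweets, convert_to_tensor=True)
    sims = util.cos_sim(s_emb, t_emb)  # |statuses| x |tweets|
    similar = int((sims.max(dim=1).values > threshold).sum())
    return identical / len(statuses), similar / len(statuses)
\end{verbatim}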
Surprisingly, just 1.53\% of each migrated user's Mastodon statuses (on average) are identical to one of their tweets, and just 16.57\% are similar to their tweets.
Instead, 84.45\% of the migrated users use the two platforms to post completely different content.
This suggests a mix of users, some of whom create different personas on the two platforms, and a smaller subset who mirror all their content.
A potential explanation for the latter is the use of cross-posting tools.
Such tools allow users to automatically mirror their Mastodon statuses on Twitter, and vice versa.
To examine this, we compare the number of tweets posted via different sources before and after Musk's takeover in Figure~\ref{fig:twitteractivitysource}.
Naturally, the majority are posted by official Twitter clients such as the Twitter Web App. The two sources that increase most dramatically, however, are two well-known cross-posters, Mastodon-Twitter Crossposter and Moa Bridge --- by 1128.95\% and 1732.26\%, respectively.
Of all migrated users, 5.73\% use one of the two cross-posters at least once.
This suggests that such users see both Twitter and Mastodon as viable platforms, and have limited intention of creating multiple `personas'.
Figure~\ref{fig:twitteractivitysourceusers} also plots the number of users using cross-posters over time.
We see that their usage increases rapidly after Musk's takeover. The downward trend towards the end of November is likely a result of the posting issues that cross-posters faced after their posting rate limit was revoked by Twitter~\cite{crossposterexit}.
\subsection{Hashtags}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{images/hashtags_twitter_mastodon.png}
\caption{Top 30 hashtags along with their frequencies on Twitter and Mastodon.}
\label{fig:top30hashtags}
\end{figure}
Given that 84.45\% of the migrated users post completely different content on the two platforms, we next inspect the hashtags used.
This gives us a flavour of the parallel discussions taking place on Mastodon and Twitter.
Figure~\ref{fig:top30hashtags} presents the top 30 most frequent hashtags used over the two platforms by the migrated users. We notice that users discuss more diverse topics on Twitter such as Entertainment (\#NowPlaying, \#BBC6Music), Celebrities (\#BarbaraHolzer), and Politics (\#StandWithUkraine, \#GeneralElectionNow), whereas Mastodon seems dominated by Fediverse related discussion (\#fediverse) and the migration to it (\#TwitterMigration).
We conjecture that we might see more diverse discussions on Mastodon once the migrated users make themselves familiar with the platform.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{images/Toxicity.png}
\caption{CDFs of fraction of each migrated user's toxic posts on Twitter and Mastodon.}
\label{fig:toxicity}
\end{figure}
\subsection{Toxicity Analysis}
Moderation on Mastodon has received significant attention in recent months~\cite{bin2022toxicity,anaobi2023will}.
This is because the administrators of Mastodon instances do not universally have the resources to moderate malicious content.
To shed light on this, we study the extent to which toxic content is shared by migrated users on both platforms. To do this, we label all tweets and statuses using Google Jigsaw’s Perspective API.\footnote{https://www.perspectiveapi.com}
For a given post, Perspective returns a score between 0 and 1 for its toxicity (0 $=$ non-toxic). Specifically, we use the API's TOXICITY attribute, which defines toxicity as ``a rude, disrespectful, or unreasonable comment that is likely to make people leave a discussion''. In the literature, 0.5 is the most common threshold for Perspective scores~\cite{bin2022toxicity,rottger2020hatecheck,papasavva2020raiders}, although higher values such as 0.8 are also used~\cite{agarwal2021hate}. Here, we use 0.5 as the threshold and consider a post to be toxic if its toxicity score is greater than 0.5 (and non-toxic otherwise).
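For reference, a minimal sketch of this labelling step against Perspective's REST endpoint is shown below (API key handling, batching, and rate limiting are omitted; the threshold follows the text):
\begin{verbatim}
# Sketch of toxicity labelling via the Perspective API.
import requests

API_URL = ("https://commentanalyzer.googleapis.com/"
           "v1alpha1/comments:analyze")

def toxicity(text, api_key):
    body = {"comment": {"text": text},
            "requestedAttributes": {"TOXICITY": {}}}
    r = requests.post(API_URL, params={"key": api_key}, json=body)
    r.raise_for_status()
    scores = r.json()["attributeScores"]
    return scores["TOXICITY"]["summaryScore"]["value"]

def is_toxic(text, api_key, threshold=0.5):
    # A post is labelled toxic if its score exceeds 0.5.
    return toxicity(text, api_key) > threshold
\end{verbatim}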
Figure~\ref{fig:toxicity} shows the CDFs of the fraction of each migrated user's toxic posts on Twitter and Mastodon. Overall, just 5.49\% of tweets are toxic.
Mastodon is substantially less toxic, with just 2.80\%. On average, each user posts 4.02\% toxic tweets on Twitter vs. just 2.07\% toxic statuses on Mastodon. Even though the discourse is non-toxic on both platforms overall, we notice that 14.26\% of migrated users post at least one toxic post on both platforms. While this may not be problematic for Twitter, which has its own moderation team, it might present challenges for Mastodon, where volunteer administrators are responsible for content moderation~\cite{hassan2021exploring}.
\section{Related Work}
\label{sec:relatedwork}
\pb{Decentralised Social Networks.}
Many previous efforts have been made to build decentralized online social platforms. In the earliest days, there were many peer-to-peer online social networks, such as Safebook \cite{cutillo2009safebook}, PeerSoN \cite{buchegger2009peerson}, LotusNet \cite{aiello2012lotusnet}, and LifeSocial.KOM \cite{graffi2011lifesocial}.
However, performance and security limitations~\cite{paul2014survey} limited their adoption and success.
New decentralized social networks, such as Mastodon, Pleroma, Pixelfed, and PeerTube, have since emerged.
Collectively, these platforms are referred to as the \emph{Fediverse}.
These social network applications use ActivityPub, a W3C protocol, to implement server federation.
Some recent work has looked into these new decentralized social networks.
For instance, a large-scale measurement study of Mastodon~\cite{raman2019challenges} found centralization trends. Paradoxically, we find that while centralization occurs in terms of how many users are attracted to an instance, smaller instances attract more active users.
Other works focus on user behavior across instances~\cite{la2022network,la2022information}.
Our work also touches upon the need for decentralised moderation.
This has been investigated in prior work on Pleroma (another Fediverse microblogging platform):
Hassan et al.~\cite{hassan2021exploring} identify novel challenges and propose a strawman solution.
Zia et al.~\cite{bin2022toxicity} also propose a model-sharing solution to help automate moderation.
Our work confirms the presence of toxic content in Mastodon, though the numbers identified do not show a trend towards greater toxicity than Twitter.
\pb{Social Network Migration.} There have been a number of measurement studies on social network migration. For example, \cite{fiesler2020moving} measured the migration activity of fandom communities, tracking migrating users and the reasons behind their migration. The authors find that policy and value-based aspects are determining factors in the migration.
Gerhart et al.~\cite{gerhart2019social} analyze user migration from traditional social networks to anonymous social networks.
They identify that social norms drive migration.
Otala et al.~\cite{m2021political} study the migration of Twitter users to Parler.
The results show that, although Parler is not widely used, it has a significant impact on political polarization.
Our work also studies the migration of Twitter users.
However, to the best of our knowledge, it is the first to systematically measure and analyze the migration of users from centralised Twitter to a decentralised platform.
\section{Conclusion}
\label{sec:conslusion}
In this paper, we have explored the migration of users from Twitter to Mastodon, prompted by Elon Musk's acquisition of Twitter.
We have focused on three RQs: ({\em i}\/)\xspace~How are new users spread across Mastodon instances, and are there any consequences for decentralization?
({\em ii}\/)\xspace~How much (if at all) does a user's ego-centric Twitter network influence their migration to Mastodon?
({\em iii}\/)\xspace~What are usage patterns of migrated users across both platforms?
To answer \textbf{RQ1}, we have found that 2.26\% of users completely left Twitter, deleting their account. Despite Mastodon's decentralized architecture, we found that the largest 25\% of Mastodon instances contain 96\% of the users. Paradoxically, while larger instances attract more users, smaller ones attract more active users, reinforcing Mastodon's decentralization.
To answer \textbf{RQ2}, we showed that the size of the Mastodon instance had limited effect on the size of the user's social network.
We observed the impact of the social network on migration, with an average of 14.72\% of Twitter followees per user migrating to the exact same Mastodon instance as the user.
To answer \textbf{RQ3}, we found that users tend to post \emph{different} content across platforms. On average, only 1.53\% of each user's Mastodon posts were identical to their tweets. In terms of toxicity, most of the users' content on both platforms was non-toxic, although Mastodon appears to be less toxic than Twitter. Overall, just 5.49\% of tweets and 2.80\% of statuses posted by migrated users on Twitter and Mastodon, respectively, were toxic.
There are a number of lines of future work. We would like to investigate further whether migrating users retain their Mastodon accounts or return to Twitter, and whether new users are joining the migration wave. It will be interesting to see what the future holds for these user-driven centralized Mastodon instances. This study provides a first step in understanding the migration from Twitter to Mastodon. We hope that it will inspire follow-up work.
\bibliographystyle{abbrv}
|
{
"arxiv_id": "2302.14344",
"language": "en",
"timestamp": "2023-03-01T02:10:25",
"url": "https://arxiv.org/abs/2302.14344",
"yymm": "2302"
} | \section{Introduction}
Physics simulation enables synthetic data acquisition in a virtual environment to reduce the cost, time, and risk of data-driven methods that are increasingly emerging in robotics \cite{agos19nmi,zeng20tro,ding20icra,lee20tro}.
Further, in terms of finding a solution to the modeled system dynamics equation, it can be directly utilized in various problems such as trajectory optimization \cite{mastalli20crocoddyl}, system identification \cite{carpentier21iden}, etc.
As such, the importance of simulation is increasingly being emphasized, with a plethora of open-source software \cite{RaiSim,bullet,mujoco,flex,sofa}.
One of the most important concerns in robotic simulation research is how to obtain data that is accurate and efficient in terms of computation time.
This is a challenging problem and implies the question of how to formulate the dynamics of systems, and which algorithms to use to solve them.
Since it includes many factors such as discrete-time integration, various types of constraints, friction, system-induced sparsity, numerical algorithms, etc., various methods have been proposed for decades.
However, simulation of a high degree of freedom (DOF) system with many constraints is still a difficult problem \cite{choi21simrobot}.
This is because, fundamentally, all system DOFs are dynamically coupled, so a constraint force acting on a part of the system in general affects the entire system.
This leads many algorithms to use large-size matrix operations (e.g., factorization) or possibly excessive numerical iterations.
In this paper, we attempt to address this challenge by developing a novel subsystem-based simulation approach that is simple, modular, and suitable for parallelization, while ensuring solution consistency and accuracy.
For this, we first split the multibody system into several subsystems and reformulate the conventional expressions of discrete-time constrained dynamics into a subsystem perspective.
Then, inspired by the structure of the Alternating Direction Method of Multipliers (ADMM \cite{boyd11admm}), we present a novel variable splitting scheme and solution process for the reformulated dynamics equation.
This reduces the solution process to iterations of 1) block-decomposed linear solving of the subsystem dynamics equations (allowing for complete parallelization) and 2) parallel resolution of all the constraint interfaces (with scalar operations only), ensuring low per-iteration computation time and scalability.
Moreover, our method can handle various types of constraints and exhibits stable convergence properties, rendering it an appealing alternative for robot simulation.
Several multibody simulation examples are then implemented and demonstrated to show the validity of our framework.
The conventional approach to dealing with constrained dynamics equations is applying pivoting algorithms \cite{llyod05icra} after formulating a linear complementarity problem \cite{potra97nd}.
However, since these direct methods require high computational complexity and polygonal friction cone approximation, iteration-based methods have been more widely used in recent studies.
One of the popular approaches is using Gauss-Seidel type iteration per constraint \cite{todorov14convex,macklin14unified,macklin16game,horak19ral}.
These methods scale well for particle-based systems, but not well for systems with generalized coordinate representation (e.g., robot joint angles) and complex internal constraints (e.g., finite element).
Several works tackle this issue \cite{otaduy09cgf,daviet20tog,carpentier21rss} by taking operator-splitting-type methods.
However, their applicability to rigid-deformable objects with various constraint types is limited, and they still have to deal with full-system-size matrices.
Another direction is to apply a Newton-type iteration over the cost including the constraint \cite{macklin19tog,li20ipc,castro22convex}.
Despite their good convergence property, their second-order nature could be problematic for large-sized problems as they require multiple linear problem resolutions.
Our subsystem-based ADMM algorithm may be regarded as an opportunistic compromise between the two directions described above.
By properly separating primal-dual relationships based on subsystems, we circumvent the burdens of handling both many constraints and large-sized matrices.
In this context, \cite{periet19tog,lee21icra,lee2021real} share some conceptual similarities with our framework proposed here.
However, their applicability is much more limited than that of our framework, since 1) they need factorization to construct the coupling interface equation, which is costly especially as the size of the subsystems grows, and 2) their constructed coupling dynamics is dense, so only a small number of inter-connections between subsystems is permitted for reasonable performance.
In contrast, by utilizing the structural peculiarity of ADMM, our proposed framework can handle all the constraints in a decoupled manner for each iteration phase, thereby not only substantially improving the algorithmic efficiency but also allowing for its extension to a wide range of multibody systems.
We also note that \cite{daviet20tog,tasora21admm,overby17tvcg} employ an ADMM structure in simulation.
However, their full-system-level approaches still require large-sized matrix operations.
In contrast, our subsystem-based variable splitting gives rise to small-sized and parallelized structures, making our scheme much more efficient and scalable.
The rest of the paper is organized as follows.
Some background materials about constrained dynamics simulation and ADMM will be explained in Sec.~\ref{sec-preliminary}.
Then our simulation framework using subsystem-based ADMM will be described in Sec.~\ref{sec-madmm}.
Some illustrative examples and performance evaluation will be presented in Sec.~\ref{sec-evaluation}.
Finally, discussions and concluding remarks are given in Sec.~\ref{sec-conclusion}.
\section{Preliminary} \label{sec-preliminary}
\subsection{Constrained Dynamics}
Consider the following continuous-time dynamics:
\begin{equation}
\begin{aligned}
M(q)\ddot{q} + C(q,\dot{q})\dot{q} + d\psi^T = f + J(q)^T\lambda
\end{aligned}
\end{equation}
where $q\in\mathbb{R}^n$ is the generalized coordinate variable of system, $M(q),C(q,\dot{q})\in\mathbb{R}^{n\times n}$ are the mass, Coriolis matrix, $d\psi^T\in\mathbb{R}^n$ is the potential action, $f\in\mathbb{R}^n$ is the external force, and $\lambda\in\mathbb{R}^{n_c}, J(q)\in\mathbb{R}^{n_c\times n}$ are the constraint impulse and Jacobian with $n,n_c$ being the system/constraint dimension.
The discretized version of the dynamics is
\begin{equation} \label{eq-ddyn}
\begin{aligned}
&M_k\frac{v_{k+1}-v_k}{t_k} + C_k v_k + d\psi_k^T = f_k + J_{k}^T\lambda_{k} \\
&\hat{v}_k=\frac{v_k+v_{k+1}}{2}, \quad q_{k+1} \leftarrow \text{update}(q_k, \hat{v}_k, t_k)
\end{aligned}
\end{equation}
where $k$ denotes the time step index, $M_k=M(q_k)$, $C_k=C(q_k,v_k)$, $t_k$ is the step size, and $v_k,\hat{v}_k\in\mathbb{R}^n$ are the current and representative velocities \cite{kim17ijrr} of each time step. Although we use the midpoint rule here, the formulation can be adapted to other integration rules. From now on, the time step index $k$ will be omitted for simplicity, but note that all components are still time(step)-varying.
In this paper, we deal with the constraints at the velocity level as in many other works \cite{RaiSim,bullet,mujoco}, which is stable but is based on linearization.
Issues that may arise from linearization can be suppressed by adopting multiple-linearization as in \cite{daviet20tog} or re-linearization \cite{verschoor19collision}, and these will be integrated into our future implementation.
We classify the system constraints into three categories: soft, hard, and contact constraints:
\subsubsection{Soft constraint}
Soft constraints are originated from the elastic potential energy of the system (e.g., finite element).
If the $j$-th constraint is soft, the impulse can be written as
\begin{align} \label{eq-scon}
\lambda_j = -k_j (e_j + \alpha_j J_j\hat{v})
\end{align}
where $e_j\in\mathbb{R}$ and $J_j\in\mathbb{R}^{1\times n}$ are the ($t$-scaled) error and Jacobian for soft constraint, $k_j$ is the gain parameter, and $\alpha_j > 0$ is the variable that includes an implicit term with constraint-space damping.
The value of $\alpha_j$ is associated with system energy behavior, see \cite{andrews17cgf,kim17ijrr} for more details.
\subsubsection{Hard constraint}
Hard constraints ensure that equations and inequalities for the system are strictly satisfied (e.g., joint limit), including holonomic and non-holonomic types. If the $j$-th constraint is hard, it has the form of
\begin{align} \label{eq-hcon}
J_j\hat{v} + e_j \ge 0
\end{align}
where $e_j\in\mathbb{R}$ and $J_j\in\mathbb{R}^{1\times n}$ denote the error and Jacobian for hard constraint.
Here, the error can be determined by methods such as Baumgarte stabilization \cite{baumgarte72cm}.
\subsubsection{Contact constraint}
The contact condition is typically the most demanding type, since it includes a non-linear complementarity relation between the primal (i.e., velocity) and dual (i.e., impulse) variables.
We adopt the Signorini-Coulomb condition \cite{lee2022large}, which is the most general expression for frictional contact.
If the $j$-th constraint is contact, the relation is
\begin{equation} \label{eq-scc}
\begin{aligned}
& 0 \le \lambda_{j,n} \perp J_{j,n}\hat{v} + e_{j,n} \ge 0 \\
& 0 \le \delta_j \perp \mu_j\lambda_{j,n} - \| \lambda_{j,t} \| \ge 0 \\
&\delta_j \lambda_{j,t} + \mu_j\lambda_{j,n} J_{j,t}\hat{v} = 0
\end{aligned}
\end{equation}
where $\perp$ denotes complementarity, $e_{j,n}\in\mathbb{R}$ and $J_{j,n}\in\mathbb{R}^{1\times n}$ denote the error and Jacobian for contact normal, $J_{j,t}\in\mathbb{R}^{2\times n}$ is the Jacobian for contact tangential, and $\mu_j$ is the friction coefficient and $\delta_j$ is the auxiliary variable.
There are three situations induced by the condition: open ($\lambda_{j,n}=0$), stick ($\lambda_{j,n}>0, \delta_j=0$), and slip ($\lambda_{j,n}>0, \delta_j>0$).
\subsection{Alternating Direction Method of Multipliers}
The alternating direction method of multipliers (ADMM \cite{boyd11admm}) is a method for solving the following optimization problem:
\begin{align*}
\min_{x,z} f(x) + g(z) \quad \text{s.t.} \quad Px+Qz = r
\end{align*}
Consider the augmented Lagrangian defined as
\begin{align*}
&\mathcal{L} = f(x) + g(z) + u^T(Px+Qz-r) + \frac{\beta}{2} \| Px+Qz - r\|^2
\end{align*}
where $u$ is the Lagrange multiplier and $\beta > 0$ is the penalty weight. ADMM iteratively performs an alternating minimization of $\mathcal{L}$ with respect to each variable. The iteration process of ADMM can be summarized as follows:
\begin{align*}
&x^{l+1} = \argmin_{x} \left( f(x) + \frac{\beta}{2} \| Px+Qz^l - r + \frac{1}{\beta}u^l\|^2 \right ) \\
&z^{l+1} = \argmin_{z} \left( g(z) + \frac{\beta}{2} \| Px^{l+1}+Qz - r + \frac{1}{\beta}u^l\|^2 \right) \\
&u^{l+1} = u^l + \beta(Px^{l+1}+Qz^{l+1}-r)
\end{align*}
where $l$ is the loop index.
ADMM is known to be robust, simple to implement, and able to attain an independent resolution with respect to each variable \cite{boyd11admm,wang19kdd}.
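For concreteness, the following is a minimal sketch of this iteration for a toy instance with $f(x)=\frac{1}{2}\|Ax-b\|^2$, $g$ the indicator of the non-negative orthant, and $P=I$, $Q=-I$, $r=0$ (written in the scaled-multiplier form, i.e., $u$ below stands for $u/\beta$; names are illustrative):
\begin{verbatim}
# Minimal ADMM sketch:
#   min 0.5*||Ax - b||^2 + I(z >= 0)  s.t.  x = z
import numpy as np

def admm(A, b, beta=1.0, iters=100):
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    H = A.T @ A + beta * np.eye(n)   # x-update: linear solve
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(H, Atb + beta * (z - u))
        z = np.maximum(x + u, 0.0)   # z-update: prox of g
        u = u + x - z                # scaled multiplier update
    return x
\end{verbatim}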
\section{Simulation via Subsystem-Based ADMM} \label{sec-madmm}
\subsection{Subsystem Division} \label{subsec-division}
\begin{figure}[t]
\centering
\begin{subfigure}{3.8cm}
\captionsetup{belowskip=2pt}
\includegraphics[width=3.8cm]{Figure/motstir.png}
\caption{Granular objects stirring}
\label{fig-motstir}
\end{subfigure}
\begin{subfigure}{3.8cm}
\captionsetup{belowskip=2pt}
\includegraphics[width=3.8cm]{Figure/cobot3.png}
\caption{Cable mobile manipulation}
\label{fig-motcobot}
\end{subfigure}
\begin{subfigure}{3.8cm}
\includegraphics[width=3.8cm]{Figure/motbeam.png}
\caption{Deformable body insertion}
\label{fig-motbeam}
\end{subfigure}
\caption{Motivating examples and implementations of our subsystem-based ADMM framework.}
\label{fig-motivation}
\end{figure}
Our approach starts by dividing the entire system into several subsystems.
See Fig.~\ref{fig-motivation} for our motivational examples.
We assume that objects in typical robotics simulation can be broadly classified into three main classes: rigid body, deformable body, and robot manipulator.
In many cases, each rigid body and manipulator is treated as a single subsystem (as in Fig.~\ref{fig-motstir}).
This is intuitive and allows for preserving modularity for each class (e.g., constant 6 DOF inertia for a rigid body, articulated structure of manipulator).
However, for the situations in which a large number of rigid bodies are connected through soft coupling (e.g., cable modeling as in Fig.~\ref{fig-motcobot}), we find that defining a new subsystem by assembling several rigid body instances can give better performance.
In the case of a deformable body, its dimension is often too high to conveniently treat it as a single subsystem, and doing so causes an imbalance with the other objects.
Thus, we split each deformable object into several pieces and consider each as a subsystem, while they are jointly connected using hard constraints (as in Fig.~\ref{fig-motbeam}).
\subsection{Subsystem-Based Dynamics Reformulation} \label{subsec-splitting}
Now consider that the whole system is divided as described in Sec.~\ref{subsec-division}.
If all the subsystems are completely independent (i.e., no coupling), we can formulate each subsystem dynamics using the structure of \eqref{eq-ddyn} and write in the following compressed form:
\begin{align} \label{eq-sdyn}
&A_i\hat{v}_i = b_i + J_{in,i}^T\lambda_{in,i}
\end{align}
for $i=\left\{1,\cdots,N\right\}$, where $N$ is the number of subsystems, $A_i\in\mathbb{R}^{n_i\times n_i}, b_i\in\mathbb{R}^{n_i}$ are the subsystem dynamics matrices/vectors, and $\lambda_{in,i}\in\mathbb{R}^{n_{in,i}}, J_{in,i}\in\mathbb{R}^{n_{in,i}\times n_i}$ are the \textit{intra}-subsystem constraint impulse/Jacobian, while $n_i,n_{in,i}$ are the dimensions of the subsystem and the intra-subsystem constraints.
Here, each $A_i$ is symmetric positive definite, which follows from the mass matrix and the energy Hessian approximation \cite{macklin19tog,daviet20tog,lee2022large}.
\begin{remark}
Since \eqref{eq-scon} is in closed form in $\hat{v}$, it can either be directly included in $A_i,b_i$ or remain in $\lambda_{in,i}$ of \eqref{eq-sdyn}.
This choice is optional, as both schemes work well in our framework.
\end{remark}
Now to take into account the \textit{coupling} constraints between the subsystems, we must add a coupling impulse and the dynamics of the entire system can be written as
\begin{equation} \label{eq-sdyn-total1}
\begin{bmatrix}
A_1 & & \\
& \ddots & \\
& & A_{N}
\end{bmatrix}
\begin{bmatrix}
\hat{v}_1 \\ \vdots \\ \hat{v}_{N}
\end{bmatrix} =
\begin{bmatrix}
b_1 \\ \vdots \\ b_{N}
\end{bmatrix} +
\begin{bmatrix}
J_{in,1}^T\lambda_{in,1} \\ \vdots \\ J_{in,N}^T\lambda_{in,N}
\end{bmatrix} + J_{cp}^T\lambda_{cp}
\end{equation}
where $\lambda_{cp}\in\mathbb{R}^{n_{cp}}$ and $J_{cp}\in\mathbb{R}^{n_{cp}\times n}$ are the inter-subsystem coupling impulse and Jacobian. Then \eqref{eq-sdyn-total1} can be rewritten as
\begin{align} \label{eq-sdyn-total2}
A \hat{v} = b + J_{in}^T\lambda_{in} + J_{cp}^T\lambda_{cp}
\end{align}
Note that this new subsystem-based dynamics formulation \eqref{eq-sdyn-total2} does not relax any physical condition, while still allowing us to utilize the block-diagonal structure of $A$, even for complex multibody scenarios.
\subsection{ADMM-Based Solver}
To solve \eqref{eq-sdyn-total2} using ADMM, we start by defining the following function:
\begin{align} \label{eq-admm-f}
f_i(\hat{v}_i,x_i) = \frac{1}{2} \hat{v}_i^T A_i \hat{v}_i - b_i^T\hat{v}_i + \mathcal{I}(J_{c,i}\hat{v}_i=x_i)
\end{align}
where $x_i\in\mathbb{R}^{n_{c,i}}$ is the auxiliary variable, $\mathcal{I}$ is the indicator function, and $J_{c,i}\in\mathbb{R}^{n_{c,i}\times n_i}$ is the row stack of $J_{in,i}$ and $J_{cp,i}$ while $n_{c,i}$ is the summation of intra- and inter-subsystem constraint dimension.
The function \eqref{eq-admm-f} is defined independently for each subsystem and includes the cost for the dynamics ($A_i,b_i$) and the mapping into the constraint space ($J_{c,i}\hat{v}_i=x_i$), but does not yet enforce constraint satisfaction.
For the constraint satisfaction, we define the following function:
\begin{align} \label{eq-admm-g}
g(z) = g(z_1,z_2,\cdots,z_{n_s}) = \sum_{j=1}^{n_{in}+n_{cp}} g_j
\end{align}
where each $z_i\in\mathbb{R}^{n_{c,i}}$ is interpreted as a duplicate of the variable $x_i$, on which the function $g$ enforces the constraints.
The function $g$ is best understood constraint-wise, i.e., as a summation of the $g_j$, where the index $j$ denotes each constraint.
Each $g_j$ is a function of only the variables corresponding to the $j$-th constraint, i.e.,
\begin{align*}
\{ z_{i,j} ~\vert~ i\in \mathcal{S}_j \}
\end{align*}
where $z_{i,j}$ is the segment of $z_i$ corresponding to the $j$-th constraint and $\mathcal{S}_j$ is the set of subsystem indexes related to the $j$-th constraint.
For an intra-subsystem constraint, the cardinality of $\mathcal{S}_j$ (i.e., $\vert \mathcal{S}_j \vert$) is $1$; if the constraint is an inter-subsystem coupling, then $\vert \mathcal{S}_j \vert \ge 2$.
Based on the functions \eqref{eq-admm-f} and \eqref{eq-admm-g} defined above, solving \eqref{eq-sdyn-total2} can be reformulated as the following optimization problem:
\begin{equation} \label{eq-admm-sim}
\begin{aligned}
\min_{\hat{v},x,z}
&\sum_{i=1}^{n_s}f_i(\hat{v}_i,x_i) + g(z)\\
\text{s.t.}& \quad x = z
\end{aligned}
\end{equation}
Now applying ADMM iteration on \eqref{eq-admm-sim}, we can obtain the following iteration sequence:
\begin{align}
&\hat{v}_{i}^{l+1},x_i^{l+1} = \argmin_{\hat{v}_i,x_i} \left( f_i + \frac{\beta_i}{2} \| x_i-z_i^l + \frac{1}{\beta_i}u_i^l\|^2 \right ) \label{eq-admm-1} \\
&z^{l+1} = \argmin_{z} \left( g + \sum_{i} \frac{\beta_i}{2} \| x_i^{l+1} - z_i + \frac{1}{\beta_i}u_i^l\|^2 \right) \label{eq-admm-2} \\
&u_i^{l+1} = u_i^l + \beta_i(x_i^{l+1}-z_i^{l+1}) \label{eq-admm-3}
\end{align}
where \eqref{eq-admm-1} and \eqref{eq-admm-3} are computed in parallel $\forall i$, and a per-subsystem weight parameter $\beta_i\in\mathbb{R}$ is utilized for better numerical conditioning (see also Sec.~\ref{subsection-choicebeta}).
Note that a fixed point of the above iteration satisfies $J_{c,i}\hat{v}_i=x_i=z_i~\forall i$; therefore, it exactly satisfies \eqref{eq-sdyn-total2} and all constraints (i.e., \eqref{eq-scon}, \eqref{eq-hcon}, \eqref{eq-scc} $\forall j$) without any relaxation.
Since the Lagrange multiplier update \eqref{eq-admm-3} is a trivial step, the main consideration here is how to solve \eqref{eq-admm-1} and \eqref{eq-admm-2} in an efficient manner.
\subsubsection{Solving \eqref{eq-admm-1}}
By introducing the auxiliary variable $x_i$, the dimension of problem \eqref{eq-admm-1} is expanded to $\text{dim}(\hat{v}_i)+\text{dim}(x_i)$ from the original subsystem dimension $\text{dim}(\hat{v}_i)$.
Consider the following KKT conditions of \eqref{eq-admm-1}:
\begin{align*}
&A_i\hat{v}_i^{l+1} = b_i + J_{c,i}^T\gamma \\
&\beta_i x_i^{l+1} = \beta_i z_i^l - u_i^l - \gamma \\
&J_{c,i}\hat{v}_i^{l+1} = x_i^{l+1}
\end{align*}
where $\gamma$ is the Lagrange multiplier. Here, combining these three equations, we can obtain $\hat{v}_i$ by solving the following linear equation:
\begin{align} \label{eq-admm-1i}
(A_i+\beta_i J_{c,i}^T J_{c,i})\hat{v}_i^{l+1} = b_i + J_{c,i}^T(\beta_i z_i^l - u_i^l)
\end{align}
where the equation is always solvable owing to the positive definiteness of the matrix on the left-hand side.
By this procedure, the problem size is brought back to $\text{dim}(\hat{v}_i)$; therefore, the concern about increased computation time due to the inclusion of $x_i$ is obviated.
Note that this trick is not possible if we attempt to directly minimize the non-smooth function $f_i$; it becomes possible because \eqref{eq-admm-1} in the ADMM procedure uses the quadratic augmented term with a scalar weight.
In conclusion, solving \eqref{eq-admm-1} amounts to a \textit{subsystem-size} linear solve for each subsystem, performed in \textit{parallel}.
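A minimal sketch of this per-subsystem step is given below (dense Cholesky via SciPy; note that in Alg.~\ref{alg1} the factorization is computed once per time step and reused across iterations):
\begin{verbatim}
# Sketch of the subsystem update (eq-admm-1i): one SPD linear
# solve per subsystem, executed independently for each i.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def subsystem_update(A_i, b_i, J_i, z_i, u_i, beta_i):
    # Solve (A_i + beta_i J_i^T J_i) v_i
    #        = b_i + J_i^T (beta_i z_i - u_i)
    H = A_i + beta_i * J_i.T @ J_i      # SPD by construction
    rhs = b_i + J_i.T @ (beta_i * z_i - u_i)
    return cho_solve(cho_factor(H), rhs)
\end{verbatim}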
\subsubsection{Solving \eqref{eq-admm-2}} \label{subsubsec-admm-2}
As described earlier, $g$ is the summation of all the $g_j$ defined for each constraint.
Accordingly, the problem \eqref{eq-admm-2} can be independently decomposed according to all the constraints as
\begin{align} \label{eq-admm-2j}
\min_{\underset{i\in \mathcal{S}_j}{z_{i,j}}}
\left( g_j + \sum_{i\in \mathcal{S}_j} \frac{\beta_i}{2} \| x_{i,j}^{l+1} - z_{i,j} + \frac{1}{\beta_i}u_{i,j}^l \|^2 \right )
\end{align}
therefore can be solved $\forall j$ in parallel.
Now consider solving \eqref{eq-admm-2j} for the bilateral case (i.e., $\vert \mathcal{S}_j \vert = 2$), which is one of the most frequent cases in practice.
For simplicity, let us assume $\mathcal{S}_j= \left\{ 1,2 \right\}$.
\textbf{Hard constraint:}
As $z_i$ is the value already mapped into constraint space, $g_j$ only needs to enforce the constraint on $z_{1,j}+z_{2,j}$.
So in the case of hard constraint,
\begin{align} \label{eq-gj-hard}
g_j = \mathcal{I}(z_{1,j}+z_{2,j}+e_j\ge 0)
\end{align}
and \eqref{eq-gj-hard} can be interpreted as a constraint impulse $\lambda_j$ acting on the linear solution of the quadratic terms in \eqref{eq-admm-2j}, i.e.,
\begin{align} \label{eq-surrogate}
\begin{split}
&\beta_1 z_{1,j} = \underbrace{\beta_1 x_{1,j}^{l+1} + u_{1,j}^l}_{:=y_{1,j}^{l+1}} + \lambda_j \\
&\beta_2 z_{2,j} = \underbrace{\beta_2 x_{2,j}^{l+1} + u_{2,j}^l}_{:=y_{2,j}^{l+1}} + \lambda_j
\end{split}
\end{align}
where we introduce the new variable $y$ for conciseness.
We can see from the structure of \eqref{eq-admm-2j} that the relation \eqref{eq-surrogate} is \textit{matrix-free}, and only consists of scalar weights.
Thanks to this property, $\lambda_j$ can be computed in a very simple manner
by combining \eqref{eq-surrogate} with the following complementarity condition:
\begin{align} \label{eq-hard12}
\begin{split}
&0 \le \lambda_j \perp z_{1,j} + z_{2,j} + e_j \ge 0
\end{split}
\end{align}
The solution for $\lambda_j$ is then obtained with a simple scalar operation:
\begin{align*}
&\lambda_j = \Pi_{\ge 0}\left (-\frac{\beta_1^{-1}y_{1,j}^{l+1} + \beta_2^{-1} y_{2,j}^{l+1} + e_j}{\beta_1^{-1}+\beta_2^{-1}} \right )
\end{align*}
where $\Pi_{\ge 0}$ denotes the projection onto the non-negative reals.
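In code, this bilateral hard-constraint resolution reduces to a few scalar operations per constraint (a sketch; names mirror the symbols above):
\begin{verbatim}
# Bilateral hard constraint: compute lambda_j from the
# matrix-free relation, then recover z_{1,j} and z_{2,j}.
def hard_constraint(y1, y2, e, beta1, beta2):
    w1, w2 = 1.0 / beta1, 1.0 / beta2
    lam = max(0.0, -(w1 * y1 + w2 * y2 + e) / (w1 + w2))
    z1 = (y1 + lam) / beta1
    z2 = (y2 + lam) / beta2
    return lam, z1, z2
\end{verbatim}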
The matrix-free relation \eqref{eq-surrogate} remains the same for the other constraint types (soft, contact), while \eqref{eq-hard12} is replaced with the corresponding relation.
\textbf{Soft constraint:}
From the structure of \eqref{eq-scon},
\begin{align} \label{eq-soft12}
\lambda_j = -k_j(e_j+\alpha_j (z_{1,j}+ z_{2,j}))
\end{align}
has to be satisfied.
Then by substituting \eqref{eq-surrogate} to \eqref{eq-soft12}, we can obtain the impulse solution as
\begin{align*}
\lambda_j = -\frac{k_j \left( e_j + \alpha_j(\beta_1^{-1} y^{l+1}_{1,j} + \beta_2^{-1} y^{l+1}_{2,j}) \right)}{1+(\beta_1^{-1}+\beta_2^{-1})\alpha_j k_j}
\end{align*}
which is also very simple to compute.
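The corresponding code-level step is equally simple (a sketch mirroring the formula above):
\begin{verbatim}
# Bilateral soft constraint: closed-form impulse and updates.
def soft_constraint(y1, y2, e, beta1, beta2, k, alpha):
    w1, w2 = 1.0 / beta1, 1.0 / beta2
    lam = -k * (e + alpha * (w1 * y1 + w2 * y2)) \
          / (1.0 + (w1 + w2) * alpha * k)
    z1 = (y1 + lam) / beta1
    z2 = (y2 + lam) / beta2
    return lam, z1, z2
\end{verbatim}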
\textbf{Contact constraint:}
Here the relation between $z_{1,j}+z_{2,j}$ and $\lambda_j$ must follow \eqref{eq-scc}, therefore
\begin{align} \label{eq-contact12}
\begin{split}
& 0 \le \lambda_{j,n} \perp z_{1,j,n}+z_{2,j,n} + e_{j,n} \ge 0 \\
& 0 \le \delta_j \perp \mu_j\lambda_{j,n} - \| \lambda_{j,t} \| \ge 0 \\
&\delta_j \lambda_{j,t} + \mu_j\lambda_{j,n} (z_{1,j,t}+z_{2,j,t}) = 0
\end{split}
\end{align}
Despite the complexity of \eqref{eq-contact12}, the solution can be easily obtained from the simple scalar structure of \eqref{eq-surrogate}:
\begin{align*}
\lambda_j = \Pi_{\mathcal{C}}\left (-\frac{\beta_1^{-1}y_{1,j}^{l+1} + \beta_2^{-1} y_{2,j}^{l+1} + e_j}{\beta_1^{-1}+\beta_2^{-1}} \right )
\end{align*}
where $\Pi_{\mathcal{C}}$ denotes the projection onto the friction cone.
The process requires only a few algebraic operations, while respecting all contact conditions \cite{lee2022large}.
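For illustration, one common realization of $\Pi_{\mathcal{C}}$ is the Euclidean projection onto the second-order (friction) cone, sketched below; the exact operator used here follows \cite{lee2022large}:
\begin{verbatim}
# Euclidean projection of (lam_n, lam_t) onto the cone
# { (n, t) : ||t|| <= mu * n }.
import numpy as np

def project_cone(lam_n, lam_t, mu):
    t = np.linalg.norm(lam_t)
    if t <= mu * lam_n:                 # inside the cone
        return lam_n, lam_t
    if mu * t <= -lam_n:                # inside the polar cone
        return 0.0, np.zeros_like(lam_t)
    n_new = (lam_n + mu * t) / (1.0 + mu * mu)
    return n_new, (mu * n_new / t) * lam_t
\end{verbatim}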
Although we explain the process only for the bilateral case, it can be shown straightforwardly that such a simple solution form can be derived for other cases as well.
\begin{algorithm} [t]
\caption{Simulation via Subsystem-Based ADMM}
\label{alg1}
\begin{algorithmic}[1]
\State Subsystem division for given multibody (Sec.~\ref{subsec-division})
\While{simulation}
\State $\forall i$ construct $A_i,b_i$ in parallel
\State $\forall j$ construct $e_j,J_j$ in parallel
\State $\forall i$ factorize $A_i+\beta_iJ_{c,i}^TJ_{c,i}$ in parallel
\While{loop}
\State $\forall i$ update $\hat{v}_{i}^{l+1}$ from \eqref{eq-admm-1i} in parallel
\State compute residual $\theta$ from \eqref{eq-residual}
\If{$\theta < \theta_{th}$ or $l=l_{max}$}
\State \textbf{break}
\EndIf
\State $\forall j$ update $z_j^{l+1}$ from \eqref{eq-admm-2j} in parallel
\State $\forall i$ update $u_i^{l+1}$ from \eqref{eq-admm-3} in parallel
\State $l\leftarrow l+1$
\EndWhile
\State{update each subsystem state using $\hat{v}_i^{l+1}$}
\EndWhile
\end{algorithmic}
\end{algorithm}
\subsection{Convergence}
It can be easily verified that each $f_i$ and $g_j$ for hard and soft constraints is convex in our formulation \eqref{eq-admm-sim}.
For contact conditions, $g_j$ may not be convex, but can be convexified by adopting the relaxed convex model \cite{todorov14convex}.
In such cases, our method can guarantee convergence \cite{boyd11admm}.
Although we have not encountered convergence issues associated with the non-convexity of \eqref{eq-scc}, a more thorough analysis is left for future work.
\subsubsection{Residual}
Our process \eqref{eq-admm-1}, \eqref{eq-admm-2}, \eqref{eq-admm-3} is originally an iteration over $(\hat{v},x,z,u)$, and both the primal and dual residuals \cite{boyd11admm} are required to check the termination condition.
Instead, in our framework, we use the variable $y$ in \eqref{eq-surrogate} to define the residual as
\begin{align} \label{eq-residual}
\theta = \sum_{i=1}^{n_s} \| y_i^{l+1}-y_i^l \|^2
\end{align}
where $\theta$ is the residual value.
This means that the iteration can be reinterpreted in terms of the lower-dimensional variable $y$, which makes the computation of the residual more concise.
The following proposition provides the rationale for this statement:
\begin{proposition}
$(\hat{v}^{l+1},x^{l+1},z^{l+1},u^{l+1})$ is the fixed-point of the iteration \eqref{eq-admm-1}, \eqref{eq-admm-2} and \eqref{eq-admm-3}, if and only if $\theta = 0$.
\end{proposition}
\begin{proof}
$\left( \Rightarrow \right)$ This is trivial. $\left( \Leftarrow \right)$ Since $\theta=0$ implies $y_i^{l+1}=y_i^l$ for all $i$, we find that $z^{l+1}=z^l$ holds, as every $\lambda_j$ is uniquely determined from $y$.
Then, as \eqref{eq-admm-3} is equivalent to $u_i^{l+1}=y_i^{l+1}-\beta_i z_i^{l+1}$, $u^{l+1}=u^l$ also holds.
Finally, $\hat{v}$ is determined from $z$ and $u$ via \eqref{eq-admm-1i}, so we conclude that the iterate is a fixed point of the iteration.
\end{proof}
\subsubsection{Choice of $\beta$} \label{subsection-choicebeta}
We find that the iteration converges stably regardless of $\beta$, but the value of $\beta$ affects the convergence rate.
We empirically confirm that the following $\beta$ setting exhibits good performance:
\begin{align}
\forall\beta_i=\frac{\text{Tr}\left ( A_i \right )}{\text{Tr}\left (J_{c,i}^T J_{c,i}\right )}
\end{align}
which suggests a balanced weight between the dynamics-related term $A_i$ and the constraint-related term $J_{c,i}^T J_{c,i}$.
A more in-depth theoretical analysis of this strategy is left for future work.
\subsection{Summary}
Our physics simulation framework via subsystem-based ADMM is summarized in Alg.~\ref{alg1}.
As described earlier, the major parts of the procedure are the \textit{subsystem-wise} parallel solving of \eqref{eq-admm-1i} (line 7) and the \textit{constraint-wise} parallel solving of \eqref{eq-admm-2j} (line 12).
From these characteristics, the per-iteration computational complexity of our algorithm is linear, i.e., $\mathcal{O}(n_s+n_{in}+n_{cp})$; with parallelization, the effective cost is even lower.
\section{Examples and Evaluations} \label{sec-evaluation}
Our implementation uses an Intel Core i7-8565 CPU at 1.80~GHz (quad-core), OpenGL for rendering, Eigen (C++) for matrix computation, and OpenMP (C++) for parallelization.
The time step is set to $10~\rm{ms}$ for all examples.
See also our supplemental video.
\subsection{Scenarios}
We implement three high-DOF multibody manipulation scenarios.
In general, they consist of combinations of high-gain-controlled robotic arms and lightweight objects with multiple constraint types, resulting in numerically challenging situations.
We employ Franka Emika panda \cite{franka} as a robot arm and Husky \cite{husky} as a ground vehicle.
\subsubsection{Granular object stirring}
The example is illustrated in Fig.~\ref{fig-motstir}: the robot arm uses an end effector to stir the granular material contained in the box.
The granular material consists of a total of $216$ spheres with a radius of $1~\rm{cm}$ and a weight of $4~\rm{g}$.
The total system dimension is $1303$, and since each rigid body and robot is treated as a subsystem, there are a total of $217$ subsystems.
\subsubsection{Collaborative cable manipulation}
The example is illustrated in Fig.~\ref{fig-motcobot}: two mobile manipulators, each consisting of a ground vehicle and a robot arm, transport and wind a flexible cable.
The cable is modeled by $640$ rigid bodies with soft constraints from the Cosserat model, with length $1.2~\rm{m}$, diameter $8~\rm{mm}$, Young's modulus $0.1~\rm{MPa}$, and Poisson's ratio $0.49$.
Each mobile manipulator is modeled as a $10$-DOF system, while its movement is restricted by a non-holonomic (no-slip) constraint.
The total system dimension is $3840$, and we treat each mobile manipulator and each group of $4$ cable segments as a subsystem, making a total of $162$ subsystems.
\subsubsection{FEM beam insertion}
The example is illustrated in Fig.~\ref{fig-motbeam}: the robot arm inserts a deformable beam modeled with co-rotational FEM through a narrow gap.
The size of the beam is $0.05\times0.05\times0.5~\rm{m}$, with Young's modulus $10~\rm{MPa}$ and Poisson's ratio $0.45$.
The FEM model consists of $1591$ nodes and $6347$ tetrahedral elements; therefore, the total dimension is $4780$.
We divide the model into $20$ subsystems so the entire system consists of a total of $21$ subsystems including the manipulator.
\begin{table*}[t]
\small
\centering
\renewcommand{\arraystretch}{1.3}{
\resizebox{17.7cm}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{Solver} & \multicolumn{3}{c|}{PGS} & \multicolumn{3}{c|}{PJ} & \multicolumn{3}{c|}{FADMM} & \multicolumn{3}{c|}{NNewton} & \multicolumn{3}{c|}{\textbf{SubADMM}} \\
\hline
\multicolumn{2}{|c|}{Iteration} &
30 & 60 & 90 &
30 & 60 & 90 &
30 & 60 & 90 &
3 & 6 & 9 &
30 & 60 & 90 \\
\hline
\multirow{2}{*}{Stir} & AT
& $14.50$ & $23.91$ & $32.27$
& $3.235$ & $5.688$ & $9.425$
& $28.41$ & $41.40$ & $56.95$
& $24.35$ & $46.99$ & $75.44$
& $3.489$ & $5.940$ & $8.705$ \\
\cline{2-17} & AA
& $4.427$ & $4.928$ & $5.248$
& $3.033$ & $3.349$ & $3.562$
& $4.107$ & $4.644$ & $5.009$
& $3.565$ & $4.429$ & $5.324$
& $4.069$ & $4.579$ & $5.023$ \\
\hline
\multirow{2}{*}{Cable} & AT
& $48.08$ & $59.30$ & $72.96$
& - & - & -
& $16.74$ & $23.52$ & $32.96$
& $43.35$ & $87.98$ & $132.7$
& $2.288$ & $4.285$ & $6.402$ \\
\cline{2-17} & AA
& $4.141$ & $4.860$ & $5.404$
& - & - & -
& $4.189$ & $4.634$ & $4.910$
& $3.270$ & $4.454$ & $5.278$
& $4.344$ & $4.984$ & $5.222$ \\
\hline
\multirow{2}{*}{Beam} & AT
& $231.4$ & $241.2$ & $251.2$
& - & - & -
& $50.33$ & $92.11$ & $130.3$
& $188.5$ & $360.2$ & $525.7$
& $13.41$ & $24.67$ & $35.50$ \\
\cline{2-17} & AA
& $3.895$ & $4.194$ & $4.255$
& - & - & -
& $4.220$ & $4.478$ & $4.756$
& $2.445$ & $3.494$ & $4.945$
& $4.326$ & $4.743$ & $4.925$ \\
\hline
\end{tabular}
}
}
\caption{Evaluation results for various solvers.
AT: average computation time (\rm{ms}); AA: average accuracy (constraint error values are converted using $-\log(\cdot)$ before averaging, so bigger is better).
Dashes (-) indicate that the simulation failed to run successfully (e.g., significant penetration).}
\label{table-result}
\end{table*}
\subsection{Baselines}
We implement the following algorithms for performance comparison, with our method being denoted as SubADMM.
\subsubsection{Projected Gauss-Seidel (PGS)}
PGS is a representative algorithm in robotics and graphics fields \cite{todorov14convex,macklin14unified,macklin16game,horak19ral} and software \cite{mujoco,bullet,flex}.
We implement an algorithm with conjugate gradient-based acceleration to improve its performance.
\subsubsection{Projected Jacobi (PJ)}
PJ is similar to PGS, but it does not solve the constraints sequentially; rather, it solves and updates them all at once in parallel.
\subsubsection{Full ADMM (FADMM)}
State-of-the-art implementations of ADMM algorithms \cite{stellato20osqp} can be used for physics simulation, as specified in \cite{tasora21admm}.
The main difference with our algorithm is that they require solving a full-system-size matrix at each iteration.
\subsubsection{Nonsmooth Newton (NNewton)}
We also implement a recently proposed algorithm that transforms the constraints into a non-smooth function and solves it using Newton iterations.
We refer to \cite{macklin19tog,andrews22course} for details.
\subsection{Performance Index}
We apply the same numbers of iterations ($30,60,90$) for all algorithms except NNewton, and we measure the average solver computation time and the constraint error norm from the simulation results.
In the case of NNewton, considering its second-order nature (the cost per iteration is high but fewer iterations are needed), the numbers of iterations are reduced to $1/10$ (i.e., $3,6,9$).
Constraint error for contact is calculated using Fischer-Burmeister function \cite{macklin19tog}.
\subsection{Results}
Evaluation results are summarized in Table~\ref{table-result}.
For the granular object stirring scenario, PJ and SubADMM show the fastest computation speed, owing to their structures being suitable for parallelization.
However, the constraint error of PJ is significantly higher than that of SubADMM.
This reflects the unstable and slow convergence of Jacobi-style iterations.
SubADMM, on the other hand, shows an error comparable to the other methods, demonstrating its validity in terms of accuracy.
For the cable and beam scenarios, the computational performance of PGS and PJ degrades, as the Delassus operator assembly becomes more complicated.
As such, FADMM outperforms them, yet SubADMM is still the fastest.
This is due to our special structure, which, as mentioned earlier, only requires parallelized resolution of the subsystem matrices without dealing with large-sized matrices.
In a similar vein, SubADMM also has an efficiency advantage over NNewton.
All algorithms other than PJ showed valid accuracy, while PJ failed to generate adequate simulation results.
In summary, the results demonstrate all of the methodologically described advantages of SubADMM: 1) it avoids the burdens of handling both many constraints and large-sized matrices, and 2) it does not rely on model approximations and has good convergence properties.
\subsection{Scalability}
\begin{figure}[t]
\centering
\begin{subfigure}{4.0cm}
\includegraphics[width=4.0cm]{Figure/scalestir.jpg}
\caption{Stir}
\end{subfigure}
\begin{subfigure}{4.0cm}
\includegraphics[width=4.0cm]{Figure/scalecable.jpg}
\caption{Cable}
\end{subfigure}
\caption{Scalability test results of SubADMM.}
\label{fig-scalablity}
\end{figure}
To precisely evaluate the scalability of our method, we measure the computation time (at $60$ iterations) while increasing the number of spheres in the stir scenario and the number of segments in the cable scenario.
Fig.~\ref{fig-scalablity} shows the linear complexity of SubADMM (R-squared values: $0.9993$ and $0.9998$).
\section{Discussions and Conclusions} \label{sec-conclusion}
In this paper, we present a new physics simulation framework based on subsystem-based ADMM.
Our approach combines a novel subsystem-based formulation \eqref{eq-sdyn-total1} with the operator splitting of \eqref{eq-admm-f} and \eqref{eq-admm-g}, thereby achieving a parallelizable and modular architecture for general multibody dynamics.
Several examples are implemented, and the evaluations show the advantages of our framework against state-of-the-art algorithms.
We believe that a generic, open-source implementation will make a good contribution to the robotics community.
We also believe that our work can be extended to the area of optimal control by exploiting the coupled structure of the large-size optimization problem (e.g., time correlation).
Finally, similar to typical ADMM, the convergence of our algorithm is stable but still linear.
Therefore, combining it with second-order acceleration schemes will be a promising research direction.
\newpage
\bibliographystyle{unsrt}
|
{
"arxiv_id": "2302.14296",
"language": "en",
"timestamp": "2023-03-01T02:08:34",
"url": "https://arxiv.org/abs/2302.14296",
"yymm": "2302"
} | \section{Introduction}
The Covariance Control (CC) problem for linear systems was initially posed by A. Hotz and R. Skelton in \cite{hotz1987covariance}.
It was studied in an infinite horizon setting for both continuous and discrete-time systems and the authors provided a parametrization for all linear state feedback controllers that achieve a specified system covariance.
Later, the authors in \cite{grigoriadis1997minimum} provided analytical solutions for a minimum effort controller that achieves a specified steady-state system covariance in the same setting.
Its finite horizon counterpart, the Covariance Steering (CS) problem, gained attention only recently.
Although similar ideas can be traced back to the Stochastic Model Predictive Control literature~\cite{primbs2009stochastic,farina2013probabilistic}, in the sense that these methods also try to address constraints on the system covariance, they achieve this objective by using conservative approximations or by solving computationally demanding non-linear programs.
Covariance Steering theory, on the other hand,
offers a more direct approach, often providing tractable algorithms for the solution in real time.
The first formal treatment of the CS problem was made by the authors of \cite{chen2015optimal,chen2015optimal1}, who studied the minimum-effort finite horizon covariance steering problem in continuous time.
Later, in \cite{bakolas2016optimal,bakolas2018finite}, the author provided a numerical approach for solving the discrete-time version of the problem with a relaxed terminal covariance boundary condition using semidefinite programming.
Subsequently, \cite{okamoto2018optimal} introduced a constrained version of the original problem, where the state and control vectors are required to stay within specified bounds in a probabilistic sense, and finally its connections to Stochastic Model Predictive Control were cemented in \cite{okamoto2019stochastic}.
Ever since then, the newly developed covariance steering theory has been applied to a variety of problems ranging from path planning for linear systems under uncertainty \cite{okamoto2019optimal}, control of linear systems with multiplicative noise \cite{liu2022optimal_mult}, distributed robot control \cite{saravanos2021distributed}, as well as for control of non-linear \cite{yi2020nonlinear, ridderhof2019nonlinear, saravanos2022distributed} and non-Gaussian \cite{sivaramakrishnan2022distribution, renganathan2022distributionally} systems.
In our previous work \cite{liu2022optimal}, we presented a new method of solving the optimal covariance steering problem in discrete time based on an exact convex relaxation of the original non-linear programming formulation of the problem.
This method did not use the lifted form of the dynamic equations as most previous works on the subject, but rather involved the system covariance matrices of each time step as optimization variables while adding the covariance dynamics as a non-linear constraint.
An exact convex relaxation was proposed to efficiently solve this problem using linear semidefinite programming (SDP).
At the same time, but independently, the authors of \cite{balci2022covariance} used the same relaxation for solving the optimal covariance steering problem with an inequality terminal boundary condition for a system with multiplicative noise.
The contributions of this paper are two-fold. First, we extend our previous results and prove that the proposed lossless convex relaxation presented in \cite{liu2022optimal} also holds under state and control chance constraints, as well as for the case of inequality terminal boundary covariance constraint.
The motivation for this extension is straightforward; many practical applications of the covariance steering theory require probabilistic constraints to characterize the feasible part of state space or limit the control effort applied to the system.
Furthermore, the inequality terminal covariance boundary condition might better reflect the desire to limit the uncertainty of the state, rather than driving it to an exact value.
In this paper, we establish that the proposed method can handle all variants of the optimal covariance steering problem for linear systems encountered in the literature.
Finally, we show that it outperforms other approaches for solving the CS problem, such as \cite{bakolas2018finite} and \cite{okamoto2019stochastic}, by over an order of magnitude in terms of run-times while also having much better scaling characteristics with respect to the steering horizon and model size.
\section{Problem Statement}
Let a stochastic, discrete, time-varying system be described by the state space model
\begin{equation}
x_{k+1} = A_k x_k + B_k u_k + D_k w_k,
\end{equation}
where $k=0,1,\ldots,N-1$ denotes the time step, $A_k \in \mathbb{R}^{n \times n}$ is the system matrix, $B_k \in \mathbb{R}^{n \times p}$ is the input matrix and $D_k \in \mathbb{R}^{n \times q}$ is the disturbance matrix. The system's state, input, and stochastic disturbance are denoted by $x_k, \; u_k$ and $w_k$, respectively.
The first two statistical moments of the state vector are denoted by $\mu_k = \mathbb{E}[x_k] \in \mathbb{R}^n$ and $ \Sigma_k = \mathbb{E} [ (x_k - \mu_k) (x_k - \mu_k)\t ] \in \mathbb{R}^{n \times n}$.
We assume that the process noise $w_k$ has zero mean and unit covariance.
The discrete-time finite horizon optimal covariance steering problem can be expressed as the following optimization problem:
\begin{subequations} \label{reference_problem}
\begin{align}
& \min_{x_k, u_k} \quad J =\mathbb{E} [\sum_{k = 0}^{N-1} { x_k\t Q_k x_k + u_k \t R_k u_k}], \label{ref:cost}
\end{align}
such that, for all $k = 0, 1, \dots, N-1$,
\begin{align}
& x_{k+1} = A_k x_k + B_k u_k + D_k w_k, \label{ref:dyn} \\
& x_0 \sim \mathcal{N}(\mu_i, \Sigma_i), \label{ref:initial_distr} \\
& x_N \sim \mathcal{N}(\mu_f, \Sigma_f), \label{ref:final_distr} \\
& \P(x_k \in \mathcal{X}) \geq 1-\epsilon_1, \label{ref:chan_con_x}\\
& \P(u_k \in \mathcal{U}) \geq 1-\epsilon_2 . \label{ref:chan_con_u}
\end{align}
\end{subequations}
For the rest of this paper, we will assume that $R_k \succ 0, \; Q_k \succeq 0$ and that $A_k$ is invertible for all $k = 0,1,\dots, N-1$.
The last condition is met in most practical problems where the system dynamics are derived through the discretization of a continuous-time state-space model.
The decision variables for problem \eqref{reference_problem} are stochastic random variables, rendering it hard to solve using numerical optimization methods.
As shown in \cite{liu2022optimal}, in the absence of \eqref{ref:chan_con_x}, \eqref{ref:chan_con_u} this problem is solved optimally with a linear state feedback law of the form
\begin{equation}\label{controller}
u_k = K_k(x_k-\mu_k) + v_k,
\end{equation}
where $K_k \in \mathbb{R}^{ p \times n}$ is a feedback gain that controls the covariance dynamics and $v_k \in \mathbb{R}^p$ a feedforward term controlling the system mean.
The cost function can be written, alternatively, in terms of the first and second moments of the state as follows
\begin{align*}
\mathbb{E} [\sum_{k = 0}^{N-1} { x_k\t Q_k x_k + u_k \t R_k u_k}] = \sum_{k = 0}^{N-1} & \text{tr} (Q_k \Sigma_k) + \text{tr}(R_k K_k \Sigma_k K_k \t) \\
& + \mu_k\t Q_k \mu_k + v_k\t R_k v_k.
\end{align*}
If the initial distribution of the state is Gaussian and a linear feedback law as in \eqref{controller} is used, the state distribution remains Gaussian.
This allows us to write the constraints \eqref{ref:initial_distr}, \eqref{ref:final_distr} as
\begin{equation*}
\mu_0 = \mu_i, \quad \Sigma_0 = \Sigma_i, \quad \mu_N = \mu_f, \quad \Sigma_N = \Sigma_f.
\end{equation*}
In contrast to previous works such as \cite{bakolas2018finite} and \cite{okamoto2019optimal}, we choose to keep the intermediate states in the steering horizon as decision variables, handling them in terms of their first and second moments.
To this end, we replace \eqref{ref:dyn} with the mean and covariance propagation equations
\begin{subequations}
\begin{align}
& \mu_{k+1} = A_k\mu_k + B_k v_k, \\
& \Sigma_{k+1} = (A_k + B_k K_k) \Sigma_k (A_k+B_k K_k)\t + D_k D_k\t.
\end{align}
\end{subequations}
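For concreteness, a minimal numerical sketch of these propagation equations is given below (illustrative variable names; NumPy assumed):
\begin{verbatim}
# One-step propagation of the state mean and covariance
# under the feedback law u = K (x - mu) + v.
import numpy as np

def propagate(A, B, D, mu, Sigma, K, v):
    mu_next = A @ mu + B @ v
    Acl = A + B @ K                 # closed-loop matrix
    Sigma_next = Acl @ Sigma @ Acl.T + D @ D.T
    return mu_next, Sigma_next
\end{verbatim}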
Omitting the chance constraints \eqref{ref:chan_con_u} and \eqref{ref:chan_con_x} for the moment, the problem is recast as a standard non-linear program
\begin{subequations} \label{NLP}
\begin{align}
& \min_{\Sigma_k, K_k, \mu_k, v_k} J = \sum_{k = 0}^{N-1} \text{tr} (Q_k \Sigma_k) + \text{tr}(R_k K_k \Sigma_k K_k \t) \nonumber \\
& \qquad \qquad \qquad \qquad + \mu_k\t Q_k \mu_k + v_k\t R_k v_k, \label{NLP:cost}
\end{align}
such that, for all $k = 0, 1, \dots, N-1$,
\begin{align}
& \Sigma_{k+1} = A_k \Sigma_k A_k\t + B_k K_k \Sigma_k A_k\t + A_k \Sigma_k K_k\t B_k\t, \nonumber \\
& \quad \qquad + B_k K_k \Sigma_k K_k \t B_k\t + D_k D_k\t, \label{NLP:cov_dyn} \\
& \Sigma_0 = \Sigma_i, \label{NLP:in_cov} \\
& \Sigma_N = \Sigma_f, \label{NLP:final_cov} \\
& \mu_{k+1} = A_k \mu_k + B_k v_k, \label{NLP:mean_dyn} \\
& \mu_0 = \mu_i, \label{NLP:in_mean} \\
& \mu_N = \mu_f. \label{NLP:final_mean}
\end{align}
\end{subequations}
In the following sections, we will convert this problem to an equivalent convex one.
\section{Unconstrained Covariance Steering}
It is well established in the covariance steering literature \cite{okamoto2018optimal} that, in the absence of coupled mean-covariance constraints, problem \eqref{NLP} can be decoupled into the mean steering problem and the covariance steering problem, namely
\begin{align}
\min_{\Sigma_k, K_k} \quad & J_\Sigma = \sum_{k = 0}^{N-1} {\text{tr}\big(Q_k \Sigma_k \big) + \text{tr} \big(R_k K_k \Sigma_k K_k\t \big)}, \label{NLPcov} \\
& \textrm{subject to \eqref{NLP:cov_dyn} - \eqref{NLP:final_cov}}, \nonumber
\end{align}
and
\begin{align}
\min_{\mu_k, v_k} \quad & J_\mu = \sum_{k = 0}^{N-1} {\mu_k\t Q_k \mu_k + v_k\t R_k v_k}, \label{NLPmean}\\
& \textrm{subject to \eqref{NLP:mean_dyn}-\eqref{NLP:final_mean}}. \hspace{9mm} \nonumber
\end{align}
Problem \eqref{NLPmean} is trivial and can even be solved analytically \cite{okamoto2018optimal}.
We, therefore, focus solely on problem \eqref{NLPcov}.
Using the change of variables $U_k = K_k \Sigma_k$ and the convex relaxation proposed in \cite{liu2022optimal} one can transform Problem \eqref{NLPcov} into a linear semidefinite program
\begin{subequations} \label{convex_eq}
\begin{align}
& \min_{\Sigma_k, U_k, Y_k} \quad J_\Sigma = \sum_{k = 0}^{N-1} {\text{tr} \big(Q_k \Sigma_k \big) + \text{tr} \big(R_k Y_k \big)}
\end{align}
such that, for all $k = 0, 1, \dots, N-1$,
\begin{align}
& C_k \triangleq U_k \Sigma_k^{-1} U_k\t - Y_k \preceq 0, \label{convex_eq:relaxation} \\
& G_k \triangleq A_k \Sigma_k A_k\t + B_k U_k A_k\t + A_k U_k\t B_k\t + B_k Y_k B_k\t \nonumber \\
& \qquad + D_k D_k\t - \Sigma_{k+1} = 0, \label{convex_eq:cov_dyn} \\
& \Sigma_N - \Sigma_f = 0,\label{convex_eq:final_cov}
\end{align}
\end{subequations}
where the constraint \eqref{convex_eq:relaxation} can be expressed as an LMI using the Schur complement as
\begin{equation*}
\begin{bmatrix}
\Sigma_k & U_k\t \\
U_k & Y_k
\end{bmatrix} \succeq 0.
\end{equation*}
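To illustrate how \eqref{convex_eq}, with the LMI above, can be posed in an off-the-shelf conic solver, consider the following Python/CVXPY sketch. All problem data (a small time-invariant system, weights, horizon, and boundary covariances) are placeholder assumptions for illustration only, not values used elsewhere in this paper; an SDP-capable solver (such as the one shipped with CVXPY) is assumed.
```python
import cvxpy as cp
import numpy as np

n, p, N = 2, 1, 20                                   # assumed toy dimensions
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
D = 0.05 * np.eye(n)
Q, R = np.eye(n), np.eye(p)                          # time-invariant weights
Sigma_i, Sigma_f = np.eye(n), 0.25 * np.eye(n)

Sigma = [cp.Variable((n, n), PSD=True) for _ in range(N + 1)]
U = [cp.Variable((p, n)) for _ in range(N)]
Y = [cp.Variable((p, p), PSD=True) for _ in range(N)]

constraints = [Sigma[0] == Sigma_i, Sigma[N] == Sigma_f]
cost = 0
for k in range(N):
    cost += cp.trace(Q @ Sigma[k]) + cp.trace(R @ Y[k])
    # covariance dynamics G_k = 0 in the lifted variables (U_k = K_k Sigma_k)
    constraints += [Sigma[k + 1] == A @ Sigma[k] @ A.T + B @ U[k] @ A.T
                    + A @ U[k].T @ B.T + B @ Y[k] @ B.T + D @ D.T]
    # Schur-complement LMI equivalent to the relaxed constraint C_k <= 0
    constraints += [cp.bmat([[Sigma[k], U[k].T], [U[k], Y[k]]]) >> 0]

cp.Problem(cp.Minimize(cost), constraints).solve()
# by the lossless relaxation, recover the gains as K_k = U_k Sigma_k^{-1}
K = [U[k].value @ np.linalg.inv(Sigma[k].value) for k in range(N)]
```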
\begin{theorem} \label{thm:lossless1}
The optimal solution to the relaxed problem \eqref{convex_eq} satisfies $C_k = 0$ for all $k = 0,1,\dots, N-1$ and therefore also optimally solves \eqref{NLPcov}.
\end{theorem}
\begin{proof}
See \cite{liu2022optimal}.
\end{proof}
\begin{remark}
A different approach that results in the same formulation is that of a randomized feedback control policy presented in \cite{balci2022exact}.
Therein, the injected randomness on the control policy can be interpreted as a slack variable converting \eqref{convex_eq:relaxation} to equality.
In \cite{balci2022exact} it is shown that for the soft-constrained version of the problem, the value of this slack variable is zero.
In our work, we tackle the hard-constrained version, instead, with equality or inequality terminal covariance constraints as well as chance constraints.
In this case, strong duality is not apparent and the technique of the proof of \cite{balci2022exact} is not directly applicable.
\end{remark}
Next, consider Problem \eqref{NLPcov} but with an inequality terminal covariance boundary condition instead, and its corresponding relaxed version, namely,
\begin{subequations} \label{convex_ineq}
\begin{align}
& \min_{\Sigma_k, U_k, Y_k} \quad J_\Sigma = \sum_{k = 0}^{N-1} {\text{tr} \big(Q_k \Sigma_k \big) + \text{tr} \big(R_k Y_k \big)},
\end{align}
such that, for all $k = 0, 1, \dots, N-1$,
\begin{align}
& C_k \triangleq U_k \Sigma_k^{-1} U_k\t - Y_k \preceq 0, \label{convex_ineq:relaxation} \\
& \Sigma_N - \Sigma_f \preceq 0, \label{convex_ineq:cov} \\
& G_k \triangleq A_k \Sigma_k A_k\t + B_k U_k A_k\t + A_k U_k\t B_k\t + B_k Y_k B_k\t \nonumber \\
& \qquad + D_k D_k\t - \Sigma_{k+1} = 0. \label{convex_ineq:cov_dyn}
\end{align}
\end{subequations}
\begin{theorem} \label{thm:lossless2}
Assuming that the exact covariance steering problem \eqref{NLPcov} is feasible, the optimal solution to problem \eqref{convex_ineq} satisfies $C_k = 0$ for all $k = 0,1,\dots,N-1$,
and therefore also optimally solves \eqref{NLPcov} with an inequality terminal covariance boundary condition in place of \eqref{NLP:final_cov}.
\end{theorem}
\begin{proof}
Using matrix Lagrange multipliers $M_k^{(1)}, \; M^{(2)}$, $ \Lambda_k$ for the constraints \eqref{convex_ineq:relaxation}, \eqref{convex_ineq:cov}, \eqref{convex_ineq:cov_dyn}, respectively, we define the Lagrangian function
\begin{align}
& \mathcal{L}(\Sigma_{k}, U_k, Y_k, M_k^{(1)},M^{(2)},\Lambda_k) = J_{\Sigma} \nonumber \\
& + \text{tr} \big( M^{(2)} (\Sigma_N - \Sigma_f) \big) + \sum_{k = 0}^{N-1}{\text{tr} \big( M_k^{(1)} C_k \big) + \text{tr} \big( \Lambda_k G_k \big)}
\end{align}
The relevant first-order optimality conditions are \cite{vandenberghe1996semidefinite}:
\begin{subequations}\label{opt_cond}
\begin{align}
& \frac{\partial \mathcal{L}}{\partial U_k} = 2 M_k^{(1)} U_k \Sigma_k^{-1} + 2 B_k\t \Lambda_k A_k = 0 \label{opt_cond_U}\\
& \frac{\partial \mathcal{L}}{\partial Y_k} = R_k - M_k^{(1)} + B_k\t \Lambda_k B_k = 0 \label{opt_cond_Y}\\
& \text{tr} \big(M_k^{(1)} C_k \big) = 0, \label{opt_cond_comp_slack1}
\end{align}
\end{subequations}
where $k = 0,1,\dots, N-1$.
Note that we can choose $\Lambda_k$ to be symmetric because of the symmetry of the constraint \eqref{convex_ineq:cov_dyn}, while $M_k^{(1)} $ and $ M^{(2)}$ are symmetric by definition.
We will prove that the optimal solution to problem \eqref{convex_ineq} satisfies $C_k = 0$ for all $k = 0,1,\ldots,N-1$.
To this end, assume that $C_k$ has at least one nonzero eigenvalue for some $k$.
Equation \eqref{opt_cond_comp_slack1} then yields that $M_k^{(1)}$ has to be singular \cite{liu2022optimal}.
The optimality condition \eqref{opt_cond_U} can then be rewritten as $B_k\t \Lambda_k = -M_k^{(1)} U_k \Sigma_k^{-1} A_k^{-1}$. Substituting into \eqref{opt_cond_Y} yields
\begin{equation} \label{contradicion}
R_k = M_k^{(1)} \big( I_{p} + U_k \Sigma_k^{-1} A_k^{-1} B_k \big).
\end{equation}
Calculating the determinants of both sides of \eqref{contradicion}, we obtain
\begin{equation*}
\det(R_k) = \det(M_k^{(1)})\, \det\big( I_{p} + U_k \Sigma_k^{-1} A_k^{-1} B_k \big) = 0.
\end{equation*}
This clearly contradicts the fact that $R_k \succ 0$.
Therefore, at the optimal solution, the matrix $C_k$ has all its eigenvalues equal to zero.
This, along with the fact that $C_k$ is symmetric, yields that $C_k = 0$ for all $k = 0,1,\dots,N-1$.
The final step to conclude this proof is to show that the KKT conditions \eqref{opt_cond} for the relaxed problem \eqref{convex_ineq} are sufficient for the optimal solution, or in other words, the duality gap for the relaxed problem is zero.
We have already proved that strong duality holds for the exact covariance steering problem in \cite{liu2022optimal}.
Since the feasible set of the relaxed terminal boundary condition problem \eqref{convex_ineq} contains that of the exact problem \eqref{convex_eq}, and strong duality holds for the exact problem, Slater's condition implies that strong duality holds for the relaxed problem as well.
\end{proof}
\section{Constrained Covariance Steering}
Many real-world applications require additional constraints of the form \eqref{ref:chan_con_u}, \eqref{ref:chan_con_x} to be imposed on the problem to reflect the physical limitations of the system or some other desired behavior.
These may include constraints on the control effort $u_k$ at each time step or physical limits on the state vector $x_k$.
In this work, we assume polytopic state and control constraints of the form
\begin{subequations} \label{probabilistic_con}
\begin{align}
& \P(\alpha_x \t x_k \leq \beta_x) \geq 1-\epsilon_x, \\
& \P(\alpha_u \t u_k \leq \beta_u) \geq 1-\epsilon_u,
\end{align}
\end{subequations}
where $\alpha_x \in \mathbb{R}^{n}, \; \alpha_u \in \mathbb{R}^p,\; \beta_x, \beta_u \in \mathbb{R}$, and $\epsilon_x, \epsilon_u \in [0, 0.5]$ reflect the violation probability of each constraint.
To convert the probabilistic constraints~\eqref{probabilistic_con} into deterministic constraints on the decision variables note that $\alpha_x\t x_k$ and $\alpha_u \t u_k$ are univariate random variables with first and second moments given by
\begin{subequations}
\begin{align}
& \hspace*{-3mm} \mathbb{E}(\alpha_x\t x_k) = \alpha_x\t \mu_k, \\
& \hspace*{-3mm}\mathbb{E}(\alpha_u \t u_k) = \alpha_u \t v_k, \\
& \hspace*{-3mm}\mathbb{E} ( \alpha_x\t (x_k -\mu_k) (x_k -\mu_k)\t \alpha_x) = \alpha_x\t \Sigma_k \alpha_x, \\
&\hspace*{-3mm} \mathbb{E}(\alpha_u \t K_k (x_k-\mu_k) (x_k-\mu_k)\t K_k \t \alpha_u ) = \alpha_u \t U_k \Sigma^{-1}_k U_k\t \alpha_u. \label{control_cov}
\end{align}
\end{subequations}
Consequently, following \cite{okamoto2018optimal}, the constraints \eqref{probabilistic_con} are converted to
\begin{subequations} \label{constraints}
\begin{align}
& \Phi^{-1}(1-\epsilon_x) \sqrt{ \alpha_x\t \Sigma_k \alpha_x} + \alpha_x\t \mu_k - \beta_x \leq 0, \label{chance_con} \\
& \Phi^{-1}(1-\epsilon_u) \sqrt{ \alpha_u \t U_k \Sigma^{-1}_k U_k\t \alpha_u } + \alpha_u \t v_k - \beta_u \leq 0, \label{effort_st1}
\end{align}
\end{subequations}
where $\Phi^{-1}(\cdot)$ is the inverse cumulative distribution function of the standard normal distribution.
If the Gaussian assumption on the disturbances is dropped, then $\Phi^{-1}(1-\epsilon)$ can be conservatively replaced, using Cantelli's concentration inequality, by $Q( 1 - \epsilon ) = \sqrt{{(1 - \epsilon)}/{\epsilon}}$ \cite{renganathan2022distributionally}.
Using the same relaxation as before to handle the non-linear term $U_k \Sigma^{-1}_k U_k\t$, equation \eqref{effort_st1} is further relaxed to
\begin{equation} \label{effort_con}
\Phi^{-1}(1-\epsilon_u) \sqrt{ \alpha_u \t Y_k \alpha_u } + \alpha_u \t v_k - \beta_u \leq 0.
\end{equation}
Unfortunately, due to the square root of the decision variables $\Sigma_k$ and $Y_k$, neither \eqref{chance_con} nor \eqref{effort_con} is convex. One conservative option to overcome this issue is to linearize these constraints around some reasonable values of $\alpha_x\t \Sigma_k \alpha_x$ and $\alpha_u \t Y_k \alpha_u$, respectively, for a given problem. Because the square root is a concave function, its tangent line serves as a global linear overestimator \cite{boyd2004convex}, yielding
\begin{equation*}
\sqrt{x} \leq \frac{1}{2 \sqrt{x_0}}x + \frac{\sqrt{x_0}}{2}, \quad \forall x,x_0 > 0.
\end{equation*}
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{figures/convex_approx.png}
\caption{Global overestimator of the square root function}
\label{fig: sqrt_linearization}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.8\linewidth]{figures/convexified_domain.png}
\caption{Example of a convexified domain for a 1-dimensional system}
\label{fig: domain_convexification}
\end{figure}
This is illustrated in Figure~\ref{fig: sqrt_linearization}.
The constraints in \eqref{constraints} can therefore be conservatively approximated as
\begin{subequations} \label{constraints_lin}
\begin{align}
\Phi^{-1}(1-\epsilon_x) &\frac{1}{2 \sqrt{ \alpha_x\t \Sigma_r \alpha_x}} \alpha_x\t \Sigma_k \alpha_x + \alpha_x\t \mu_k \nonumber\\
& - \left( \beta_x - \frac{\Phi^{-1}(1-\epsilon_x)}{2} \sqrt{ \alpha_x\t \Sigma_r \alpha_x} \right) \leq 0, \label{chance_con_lin}
\end{align}
and
\begin{align}
\Phi^{-1}(1-\epsilon_u) &\frac{1}{2 \sqrt{ \alpha_u \t Y_r \alpha_u }} \alpha_u \t Y_k \alpha_u + \alpha_u \t v_k \nonumber \\
& - \left( \beta_u - \frac{\Phi^{-1}(1-\epsilon_u)}{2} \sqrt{ \alpha_u \t Y_r \alpha_u} \right) \leq 0, \label{effor_con_lin}
\end{align}
\end{subequations}
where $\Sigma_r, \; Y_r$ are some reference values. The linearized constraints now form a convex set, as illustrated in Figure \ref{fig: domain_convexification}.
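As a hypothetical continuation of the CVXPY sketch above, the linearized state chance constraint \eqref{chance_con_lin} can be appended as follows; the data $\alpha_x, \beta_x, \epsilon_x$ and the reference value $\Sigma_r$ are placeholder assumptions, and the mean dynamics constraints on the variables \textsf{\small mu} are omitted for brevity.
```python
from scipy.stats import norm

alpha_x, beta_x, eps_x = np.array([1.0, 0.0]), 5.0, 1e-3   # assumed data
Sigma_r = np.eye(n)                                        # linearization reference
phi = norm.ppf(1 - eps_x)                                  # Phi^{-1}(1 - eps_x)
s_r = float(np.sqrt(alpha_x @ Sigma_r @ alpha_x))          # sqrt(a' Sigma_r a)
mu = [cp.Variable(n) for _ in range(N + 1)]                # mean decision variables
for k in range(N):
    # tangent-line overestimator of the standard-deviation term
    constraints += [phi * (alpha_x @ Sigma[k] @ alpha_x) / (2 * s_r)
                    + alpha_x @ mu[k] <= beta_x - phi * s_r / 2]
```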
For notational simplicity, next, we consider the more general constraint form of
\begin{subequations} \label{constraints_final}
\begin{align}
& \ell \t \Sigma_k \ell + \alpha_x\t \mu_k - \beta_x \leq 0, \label{constraints_final_cov} \\
& e\t Y_k e + \alpha_u \t v_k - \beta_u \leq 0.
\end{align}
\end{subequations}
Given the additional constraints in \eqref{constraints_final} the fundamental question is whether the relaxation proposed in \eqref{convex_eq} remains lossless.
To this end, consider the constrained Covariance Steering problem
\begin{subequations} \label{convex_constrained}
\begin{align}
& \min_{\Sigma_k, U_k, Y_k, \mu_k, v_k} J = J_{\Sigma} + J_{\mu},
\end{align}
such that, for all $k = 0, 1, \dots, N-1$,
\begin{align}
& \mu_{k+1} = A_k \mu_k + B_k v_k, \\
& C_k(\Sigma_k, Y_k, U_k) \preceq 0, \\
& G_k(\Sigma_{k+1}, \Sigma_k, Y_k, U_k) = 0, \\
& \ell \t \Sigma_k \ell + \alpha_x\t \mu_k - \beta_x \leq 0, \label{convex_constrained_cov} \\
& e\t Y_k e + \alpha_u \t v_k - \beta_u \leq 0. \label{convex_constrained_effort}
\end{align}
\end{subequations}
Note that an equality terminal covariance condition is implied by excluding $\Sigma_N$ from the optimization variables and treating it as a constant.
\begin{theorem} \label{lossless_2}
The optimal solution to the problem \eqref{convex_constrained} satisfies $C_k = 0$ for all $k = 0,1,\dots,N-1$.
\end{theorem}
\begin{proof}
Define again the problem Lagrangian as
\begin{equation*}
\begin{split}
\mathcal{L}_a( \cdot ) = J & + \sum_{k = 0}^{N-1} \text{tr} \big( M_k\t C_k \big) + \text{tr} \big( \Lambda_k\t G_k \big) \\
& + \lambda_{1,k}\t (\mu_{k+1} - A_k \mu_k - B_k v_k) \\
& + \lambda_{2,k} \big(\ell \t \Sigma_k \ell + \alpha_x\t \mu_k - \beta_x \big) \\
& + \lambda_{3,k} \big(e\t Y_k e + \alpha_u \t v_k - \beta_u \big).
\end{split}
\end{equation*}
The relevant first-order optimality conditions for this problem are
\begin{subequations}\label{opt_cond_constrained}
\begin{align}
& \frac{\partial \mathcal{L}_a}{\partial U_k} = 2 M_k U_k \Sigma_k^{-1} + 2 B_k\t \Lambda_k A_k = 0, \label{opt_cond_constrained_U}\\
&\frac{\partial \mathcal{L}_a}{\partial Y_k} = R_k - M_k + B_k\t \Lambda_k B_k + \lambda_{3,k} e e \t = 0, \label{opt_cond_constrained_Y}\\
&\frac{\partial \mathcal{L}_a}{\partial \Lambda_k} = G_k = 0, \label{opt_cond_constrained_G} \\
& \text{tr} \big(M_k C_k \big) = 0. \label{opt_cond_constrained_comp_slack1}
\end{align}
\end{subequations}
Following the same steps as in the proof of Theorem \ref{thm:lossless2}, let $C_k$ have at least one nonzero eigenvalue. From \eqref{opt_cond_constrained_comp_slack1}, $M_k$ has to be singular. Solving for $B_k\t \Lambda_k$ in \eqref{opt_cond_constrained_U} and substituting into \eqref{opt_cond_constrained_Y} we get
\begin{equation} \label{contradicion2}
R_k + \lambda_{3,k} e e\t = M_k \big( I_{p} + U_k \Sigma_k^{-1} A_k^{-1} B_k \big).
\end{equation}
Since $\lambda_{3,k} \geq 0$ by definition and $e e\t \succeq 0$, it follows that $R_k + \lambda_{3,k} e e\t \succ 0$. Therefore, taking again the determinant of both sides of \eqref{contradicion2} leads to a contradiction.
\end{proof}
\section{Numerical example and run-time analysis}
To illustrate our method, we consider the problem of path planning for a quadrotor in a 2D plane.
We use a 2D triple integrator model to generate smooth jerk paths, which can then be translated into low-level motor commands through differential flatness-based controllers \cite{mellinger2011minimum}.
Specifically, consider the triple integrator model
\begin{equation*}
A = \begin{bmatrix} I_2 & \Delta T I_2 & 0_2 \\ 0_2 & I_2 & \Delta T I_2 \\ 0_2 & 0_2 & I_2 \end{bmatrix}, \quad B = \begin{bmatrix} 0_2 \\ 0_2 \\ \Delta T I_2 \end{bmatrix}, \quad D = 0.1 I_6 ,
\end{equation*}
for time step $\Delta T = 0.1$~sec and a horizon of $N=60$, yielding $61$ total time steps.
In this system, the first two states represent the position, the next two the velocity, and the last two the acceleration of the quadrotor. The boundary conditions are
\begin{align*}
& \Sigma_i = I_6, \quad \Sigma_f = 0.1 I_6, \\
& \mu_i = \begin{bmatrix} 20 & 0_{1 \times 5} \end{bmatrix}\t, \quad \mu_f = 0_{6 \times 1}.
\end{align*}
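For reference, this model and its boundary data translate directly into code; a minimal NumPy sketch following the six-state ordering described above is:
```python
import numpy as np

dT = 0.1
I2, Z2 = np.eye(2), np.zeros((2, 2))
A = np.block([[I2, dT * I2, Z2],
              [Z2, I2, dT * I2],
              [Z2, Z2, I2]])                 # 2-D triple integrator
B = np.vstack([Z2, Z2, dT * I2])
D = 0.1 * np.eye(6)
Sigma_i, Sigma_f = np.eye(6), 0.1 * np.eye(6)
mu_i = np.array([20.0, 0, 0, 0, 0, 0])
mu_f = np.zeros(6)
```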
The feasible state space is characterized by a bounding box expressed in the form of \eqref{chance_con_lin} with parameters
\begin{align*}
& \alpha_x = \Big\{ \left[\begin{array}{ccc} \pm 1 & 0 & 0_{1 \times 4} \end{array}\right]\t, \left[\begin{array}{ccc} 0 & \pm 1 & 0_{1 \times 4} \end{array}\right]\t \Big\}, \\
& \beta_x = \{22, \; 3, \; 7, \; 7\}.
To account for the maximum allowable bank rate of the quadrotor, the control inputs are probabilistically restricted to lie inside the set $\mathcal{U} = \{ u_k \in \mathbb{R}^2 : \Vert u_k \Vert_{\infty} \leq 25 \}$.
This constraint can be cast in the form of \eqref{effor_con_lin} using four affine functions with parameters
\begin{equation*}
\alpha_u = \Big \{ \left[ \begin{array}{cc} \pm 1 & 0 \end{array} \right]\t, \left[\begin{array}{cc} 0 & \pm1 \end{array}\right]\t \Big\}, \quad
\beta_u = \{25, \; 25, \; 25, \; 25 \}.
\end{equation*}
Apart from the initial and terminal boundary conditions, two position waypoints are implemented by constraining the first two components of the state at time steps 20 and 40 of the steering horizon.
For all constraints, a violation probability of $\epsilon_x = \epsilon_u = 0.1 \%$ was used for all $k = 0,1, \dots, N-1$.
The vectors $\alpha_x, \; \alpha_u$ are selected to have unit 2-norm for an easier selection of the linearization point, which is performed around $\Sigma_r = 1.2 I_6$ and $Y_r = 15 I_2$.
Further tuning of the linearization point parameters can be done using an iterative approach, where the problem is resolved sequentially and the linearization points at each time step are calculated from the last iteration's optimal trajectory.
This produces less conservative results, but it was observed to have a small impact overall and was therefore not included in this example.
It was observed, however, that overestimating the values of the linearization points is preferable when compared to underestimating them.
This can be interpreted by inspection of equation \eqref{constraints_final} and with the help of Figure~\ref{fig: sqrt_linearization}.
Equation \eqref{constraints_final} shows that constraining a stochastic signal is equivalent to constraining a weighted sum of its mean and uncertainty.
Returning to Figure \ref{fig: sqrt_linearization}, when the true value of the uncertainty is above the linearization point, the linearized weight on the uncertainty grows without bound, potentially exhausting the total constraint budget.
On the other hand, when the uncertainty is below the linearization point, the overestimation is bounded by the $y$-intercept of the affine approximation to the square root function, preventing potential infeasibilities.
All optimization problems are solved in Matlab using MOSEK~\cite{aps2019mosek}.
The resulting optimal steering is illustrated in Figure \ref{fig: state steering}, while the required control effort is shown in Figure \ref{fig: control effort}.
The feasible set in each figure is denoted with green lines and the mean of each signal with a dashed black line. In Figure \ref{fig: state steering} the 3-sigma confidence bounds are represented with blue ellipses, while in Figure \ref{fig: control effort} they are represented by the light-blue area around the mean signal.
Initial and terminal values for the state confidence ellipses, as well as the waypoints, are denoted in red.
\begin{figure}[!h]
\centering
\includegraphics[width=0.7\linewidth]{figures/path.png}
\caption{Covariance Steering}
\label{fig: state steering}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=0.7\linewidth]{figures/effort.png}
\caption{Resulting control effort}
\label{fig: control effort}
\end{figure}
Next, we present a run-time comparison between different methods for solving the unconstrained covariance steering problem.
Although different methods result in different control policies and potentially more or less conservative control sequences, this comparison only studies run-times and the resulting optimization problem sizes in terms of the number of decision variables involved in each program.
To evaluate the performance of each algorithm, random state space models of various sizes were generated using Matlab's \textsf{\small drss()} command.
For each system, we use as many noise channels as state variables and half as many input channels as state variables.
The analysis was performed for systems of varying size and a fixed steering horizon of 32 time steps, as well as for varying time horizons for an $8 \times 8$ system.
The results are summarized in Tables \ref{ssize_analysis} and \ref{horizon_analysis}, respectively. The empty cells correspond to runs where the program ran out of memory.
The simulations were carried out in Matlab 2022 running on an 11\textsuperscript{th} Gen. Intel Core i7-11800H and 16 GB of RAM.
\begin{table}[ht!]
\centering
\caption{Run-time comparison for various state space sizes for a time horizon of $N = 32$ time steps. Run time is measured in [sec] and problem size is the total number of decision variables of the resulting optimization problem.}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\multirow{2}{0.5cm}{\textbf{$n$}} & \multicolumn{2}{c|}{\textbf{Approach 1, \cite{bakolas2018finite}}} & \multicolumn{2}{c|}{\textbf{Approach 2, \cite{okamoto2019stochastic}}} & \multicolumn{2}{c|}{\textbf{Proposed approach}}\\
\cline{2-7}
& \textbf{p. size} & \textbf{r. time} & \textbf{p. size} & \textbf{r. time} & \textbf{p. size} & \textbf{r. time} \\
\hline
4 & 3200 & 93.28 & 256 & 0.20 & 884 & 0.03 \\ \hline
8 & 10496 & - & 1024 & 2.91 & 3536 & 0.18 \\ \hline
16 & 37376 & - & 4096 & 138.07 & 14144 & 2.59 \\ \hline
32 & 140288 & - & 16384 & - & 56576 & 151.76 \\ \hline
\end{tabular}
\label{ssize_analysis}
\end{table}
\begin{table}[h!]
\centering
\caption{Run-time comparison for various time horizons for a system of order $n = 8$. Run time is measured in [sec] and problem size is the total number of decision variables of the resulting optimization problem.}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\multirow{2}{0.5cm}{\textbf{$N$}} & \multicolumn{2}{c|}{\textbf{Approach 1, \cite{bakolas2018finite}}} & \multicolumn{2}{c|}{\textbf{Approach 2, \cite{okamoto2019stochastic}}} & \multicolumn{2}{c|}{\textbf{Proposed approach}}\\
\cline{2-7}
& \textbf{p. size} & \textbf{r. time} & \textbf{p. size} & \textbf{r. time} & \textbf{p. size} & \textbf{r. time} \\
\hline
8 & 640 & 3.57 & 256 & 0.12 & 848 & 0.04 \\ \hline
16 & 2306 & 76.74 & 512 & 0.70 & 1744 & 0.08 \\ \hline
32 & 8704 & - & 1024 & 3.33 & 3536 & 0.17 \\ \hline
64 & 33792 & - & 2048 & 19.27 & 7120 & 0.36 \\ \hline
128 & 133120 & - & 4096 & - & 14288 & 0.75 \\ \hline
256 & 528384 & - & 8192 & - & 28624 & 1.60 \\ \hline
\end{tabular}
\label{horizon_analysis}
\end{table}
It is clear that the proposed approach outperforms the state-of-the-art algorithms significantly, by over an order of magnitude for almost all cases.
Also, it is worth noting that problem \eqref{convex_eq} is a linear semidefinite program, while the formulations of \cite{bakolas2018finite} and \cite{okamoto2019stochastic} result in quadratic semidefinite programs, which need to be converted to linear ones using suitable relaxations, further increasing the number of decision variables as well as the complexity of the problem.
Finally, problem \eqref{convex_eq} involves $N-1$ LMIs of dimensions $p \times p$ as opposed to a single large LMI of dimensions $(N+2)n \times (N+2)n$ for the terminal covariance constraint used in methods \cite{bakolas2018finite, okamoto2019stochastic}.
As suggested in \cite{aps2019mosek}, multiple smaller LMIs can be solved more efficiently compared to a single larger one due to the resulting sparsity of the constraints.
This also explains why Approach~2 of~\cite{okamoto2019stochastic}, although it results in smaller problem sizes than the proposed approach, still has significantly larger solution times.
\section*{ACKNOWLEDGMENT}
The authors would like to sincerely thank Dr. Fengjiao Liu for her constructive comments and discussion on the paper, and Ujjwal Gupta for his help with the quadrotor numerical example.
This work was supported by NASA ULI award 80NSSC20M0163 and ONR award N00014-18-1-2828.
\bibliographystyle{ieeetr}
|
{
"arxiv_id": "2302.14241",
"language": "en",
"timestamp": "2023-03-01T02:06:26",
"url": "https://arxiv.org/abs/2302.14241",
"yymm": "2302"
} | \section{Introduction}
Across many disciplines, one often has to work with a complex network given only local information. While exact objectives may vary, they usually reduce to the problem of discovering the graph efficiently. Consider the following general setup. An agent is placed on a site in an unknown network, and at each given moment, they may travel to a neighboring site of their current position. Naturally, one is interested in the evolution of the set of visited vertices and how it approximates the properties of the underlying network. One can visualize this problem by imagining a person placed in an unknown city and tasked with drawing its map. What is the best strategy to proceed with? Would it help to place several people to do this job together? If one knows that this city looks like a square grid of size $m\times n$, then one can easily traverse it in $mn$ steps. However, it is not easy to devise a deterministic strategy for such a cartographer without knowing the specific properties of the network. Hence, naturally, one would like to implement a randomized strategy. In economics, it may be used to optimize marketing strategies on a social network, also known as seeding (see \emph{e.g.,}~\cites{Akbarpour20,sadler22}). In mathematical biology, where the graph is based on the interaction of the species, random walks are used to explore the communal and hierarchical structures (see \emph{e.g.,}~\cites{Rosvall08, Rosvall11}).
Similarly, in the study of gerrymandering, one is interested in understanding the set of all possible partitions (maps) of a given area into voting districts that satisfy certain conditions. Identifying the set of outliers is a difficult problem. While this set is highly complex, letting two maps be connected in a network if they differ only in a small location equips this set with a natural metric. This enables the exploration of the network with a random walk, leads to practical sampling, and gives further insight into various structural properties (see \emph{e.g.,}~\cite{DeFord21} and references therein).
There have been considerable efforts to understand random sets arising from random walks on graphs. For instance, the fluctuation behavior of the set of vertices visited by random walks was studied in~\cites{DE51, JP70clt, JO68, LeGall} for $\dZ^d$, and~\cite{DK21} for the discrete torus $\dZ^d_n$. Sznitman~\cite{Sznitman10} introduced the random interlacement model, constructed using a Poisson point process on the set of doubly infinite nearest--neighbor paths on $\dZ^d$. The percolative properties of the range of a simple random walk on $\dZ^d_n$ have been investigated in~\cites{Sznitman10, TW11}, combining the random interlacement model with coupling techniques. Another example is the competition of random walks on a graph (or the coloring of a graph with random walks), studied in~\cites{Gomes96, Dicker06, Miller13}. In a competitive dynamic, each walker is associated with the set of vertices it visited before others. Hence, several randomly growing sets compete for the area in this model.
This paper proposes a collaborative dynamic among several independent random walks on a graph. In contrast with the competition of random walks described above, a vertex is considered discovered if at least one of the random walks has visited it. We investigate the average size of this set compared to that of a single random walk and its relation to the distribution of starting positions. In particular, Theorem~\ref{thm: k-collab} says that $k$ independent identical walkers with lifespans $t_1,t_2,\ldots,t_k$, started from the stationary distribution independently of each other, on average discover a higher proportion of the graph than a single walker, also started at the stationary distribution, with the lifespan $t_1+t_2+\cdots+t_k$. This holds for any finite connected graph $G$, any $k\in \dN$, very mild assumptions on $t_i$ (if any), and any time-homogeneous random walk with a stationary distribution on $G$. In Lemma~\ref{thm: k-collab vs star} we show that $k$ walkers started at independently sampled vertices following the same distribution also, on average, discover a higher proportion of vertices than the same number of walkers started from a single common vertex sampled from that distribution (``star'' shape).
Our results lead to various natural questions, including a quantitative version of the inequality from Theorem~\ref{thm: k-collab}, stochastic dominance of one collaboration scheme over the other, and how the single walker scheme compares to the ``star" shaped one. We explicitly state and discuss these questions in Section~\ref{sec: discussion}. First, we review some basic results about reversible Markov chains.
\subsection{Markov chains on graphs and networks}
It is well-known that a reversible Markov chain on a finite state space can be seen as a random walk on an edge-weighted network. Let $G=(V,E)$ be a finite undirected connected graph. For each edge $e\in E$, we assign the weight $c(e)>0$, also known as the \textit{conductance} of $e$. For $x,y\in V$ we write $x\sim y$ if $(x,y)\in E$. In this paper, we consider the Markov chain $(X(t))_{t\ge 0}$ on $G$ with transition matrix $P=(P(x,y))_{x,y\in V}$ given by
\begin{align*}
P(x,y) = \begin{cases}
\frac{c(x,y)}{c(x)}, & (x,y)\in E,\\
0, & \text{otherwise},
\end{cases}
\end{align*}
where $c(x)=\sum_{y:y\sim x}c(x,y)$. One can interpret these weights as encoding the bias of the walker. For example, if one would like to prioritize discovering vertices with higher degrees, one can set $c(x,y)$ to be the indicator that $(x,y)$ is an edge. For further information on random walks on networks, we refer the interested reader to~\cites{LPW, LP}.
Note that $X(t)$ has the unique stationary distribution $\pi$ given by
\begin{align*}
\pi(x) = \frac{c(x)}{\sum_{y\in V}c(y)} \quad\text{ for }x\in V.
\end{align*}
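A minimal Python sketch of this construction, assuming as input a symmetric nonnegative conductance matrix \textsf{\small C} with $C_{xy}>0$ exactly when $x\sim y$:
```python
import numpy as np

def walk_from_conductances(C):
    """Transition matrix and stationary distribution of the weighted walk."""
    c = C.sum(axis=1)          # c(x) = sum_{y: y ~ x} c(x, y)
    P = C / c[:, None]         # P(x, y) = c(x, y) / c(x)
    pi = c / c.sum()           # pi(x) = c(x) / sum_y c(y)
    return P, pi
```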
\subsection{Set up}\label{sec:setup}
Throughout the paper $k$ is a fixed positive integer and $(X_i(t))_{t\in \cI}$, for $i\in[k]:=\{1,2,\ldots,k\}$ and an appropriate index set $\cI$, are \textit{independent} Markov chains that satisfy the following assumption.
\begin{ass}\label{ass:mc}
For all $i\in [k]$, the Markov chains $(X_i(t))_{t\in \cI}$ have the \textit{same} transition matrix on a finite state space $V$ of size $\abs{V}=n$. Each $X_i$ is assigned a corresponding length of trajectory $t_i$, which we call its lifespan. We further assume that the Markov chain is \textit{time-homogeneous, reversible, irreducible, and aperiodic}.
\end{ass}
Equivalently, each $X_i(t)$ can be seen as a random walk on the corresponding network $G=(V,E)$. Let $\pi$ be the stationary distribution for the Markov chain. The range of the $i$-th random walk $X_i(t)$ at time $t$ is denoted by
\begin{align*}
\cR_{i}(t):=\{X_{i}(s)\mid s\le t\} \quad \text{ for } i\in[k].
\end{align*}
If $k=1$, we simply denote by $X(t)=X_1(t)$ and $\cR(t)=\cR_1(t)$.
Let $\nu_i$ be probability measures on $V$, for $i\in[k]$. We use the notations $\pr_{\nu_1,\nu_2,\ldots, \nu_k}$ and $\E_{\nu_1,\ldots, \nu_k}$ to denote the probability and the expectation for $(X_i)_{i\in[k]}$ where $(X_1,X_2,\ldots,X_k)$ starts from $\nu_1\otimes\cdots\otimes\nu_k$. If the $\nu_i$'s are the same distribution $\nu$, we simply write $\pr_{\nu^k}$ and $\E_{\nu^k}$. If we sample the same starting point for all the $k$ random walks $(X_i)_{i\in[k]}$ from a single probability measure $\nu$, the probability and the expectation are denoted by $\pr_\nu$ and $\E_\nu$. We use the notations $\pr_{x\sim \nu}$ and $\E_{x\sim\nu}$ to specify the starting point $x$ chosen from the distribution $\nu$. When a Markov chain starts at a given point $x\in V$, {\emph{i.e.,}} $\nu=\gd_x$, we
denote the corresponding expectation and probability as $\E_x$ and $\pr_x$, respectively.
For some of our results, we further assume that $(X_i(t))_{t\in \cI}$ falls into one of the following three cases.
\begin{enumerate}[label=(\roman*)]
\item for each $i\in [k]$, $(X_i(t))_{t\ge 0}$ are continuous-time Markov chains driven by an exponential clock with intensity $1$, \label{continuous version}
\item for each $i\in [k]$, $(X_i(t))_{t\in \dN}$ are $\half$--lazy Markov chains, {\emph{i.e.,}} at each step with probability $\half$ the particle does not move and otherwise proceeds following its transition matrix,\label{lazy version}
\item for each $i\in [k]$, $(X_i(t))_{t\in \dN}$ are discrete time Markov chains and $t_1+t_2+\cdots+t_k$ is even. \label{even version}
\end{enumerate}
We believe that the last case is a restriction caused by our methods and should not play a significant role in the behavior of the system, see Conjecture~\ref{conj: oddcase}.
\subsection{Main results}\label{sec:main}
\begin{thm}[One vs.~Many - Uniform I.I.D.]\label{thm: k-collab}
Let $k\ge 2$, $t_1,t_2,\ldots,t_k\in \cI$, and let $(X_i(t))_{t\in\cI}$ be Markov chains on a finite connected graph $G$ satisfying Assumption~\ref{ass:mc} and falling into one of the cases~\ref{continuous version},~\ref{lazy version}, or~\ref{even version}. Then we have
\begin{align}\label{eq: main}
\E_{\pi^k} \abs{\bigcup_{i=1}^k\cR_i(t_i)} \ge\E_{\pi} \abs{\cR\left(\sum_{i=1}^k t_i\right)} .
\end{align}
\end{thm}
The result shows that the expected size of the set of vertices covered by multiple random walks is always as large as that of a single random walk. Note that the inequality is not an asymptotic estimate and holds for any timespans $t_1,t_2,\ldots,t_k>0$. The proof is based on the spectral representation of the survival probabilities.
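The inequality can also be probed numerically on small graphs. Below is a Monte Carlo sketch for discrete-time walks, where \textsf{\small P} and \textsf{\small pi} are assumed to come from the construction sketched in the previous subsection, and \textsf{\small mean\_range()} is a hypothetical helper written for illustration, not code from this paper.
```python
import numpy as np

rng = np.random.default_rng(0)

def mean_range(P, pi, lifespans, trials=2000):
    """Monte Carlo estimate of E|union of ranges| for len(lifespans) walkers,
    each started independently from pi."""
    n = len(pi)
    total = 0
    for _ in range(trials):
        visited = np.zeros(n, dtype=bool)
        for t_i in lifespans:
            x = rng.choice(n, p=pi)
            visited[x] = True
            for _ in range(t_i):
                x = rng.choice(n, p=P[x])
                visited[x] = True
        total += visited.sum()
    return total / trials

# e.g., two walkers vs. one walker with the combined (even) total lifespan:
# mean_range(P, pi, [4, 6]) >= mean_range(P, pi, [10]) should hold on average
```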
For some graphs, it might be challenging to compute the stationary measure of a given random walk; hence, one would want to relax the assumption on the distribution of the starting positions.
For example, it is well-known that the distribution of the position after a time of the order of the mixing time, with an additional log factor, is close to the stationary distribution. It is natural to ask whether, if one acquires starting positions in such a fashion, the statement of Theorem~\ref{thm: k-collab} still holds. Propositions~\ref{lem: nonstat} and~\ref{lem:almostind} give affirmative answers. Define the following quantity
\begin{equation}\label{eq:pistar}
\pi_\ast := \min_{x\in V}\frac{\pi(x)}{1-\pi(x)}=\frac{\pi_{\min}}{1-\pi_{\min}}>0,
\end{equation}
where $\pi_{\min}:=\min_{x\in V}\pi(x)$.
Notice that, if $\pi$ is uniform, then $\pi_\ast = (|V|-1)^{-1}$ where $|V|$ is the number of vertices. Suppose $\pi$ is not uniform, then there exists a vertex $y$ such that $\pi(y)<1/|V|$. Since the map $t\mapsto t/(1-t)$ is increasing, we see that $\pi_\ast<(|V|-1)^{-1}$. Thus, we conclude that $\pi_\ast$ is maximized when $\pi$ is uniform, and the deficit of $\pi_\ast$ from its maximum value $(|V|-1)^{-1}$ measures how much $\pi$ is away from uniformity.
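For reference, $\pi_\ast$ is immediate to compute from the stationary vector; a short sketch, assuming \textsf{\small pi} is a NumPy probability vector:
```python
def pi_star(pi):
    """pi_* = pi_min / (1 - pi_min); maximized, at 1/(|V|-1), by the uniform pi."""
    pmin = float(pi.min())
    return pmin / (1.0 - pmin)
```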
\begin{prop}[One vs.~Many - Near Uniform Independent]\label{lem: nonstat}
Suppose the assumptions of Theorem~\ref{thm: k-collab} hold, let $\pi_\ast$ be as in~\eqref{eq:pistar}, and let $\nu_i$, $i\in[k]$, be probability measures on $V$ such that
\begin{align}\label{eq:nu-assumption1/3}
\sup_{x\in V}\left|\frac{\nu_i(x)-\pi(x)}{\pi(x)}\right|\le (1+ \pi_\ast)^{1/k}-1\approx \pi_\ast/k.
\end{align}
Then for every $t_j\in\cI$, $j=1,2,\ldots,k$, we have
\begin{align*}
\E_{\nu_1, \nu_2, \ldots, \nu_k} \abs{\bigcup_{i=1}^k\cR_i(t_i)}
\ge\E_{\pi}\abs{\cR\left(t_1+t_2+\cdots+t_k\right)} .
\end{align*}
\end{prop}
\begin{remark}
Consider the $d$-dimensional discrete torus $\dZ^d_n$ of side-length $n$ for $d\ge 5$. It is well-known that if $t= \ga n^2\log n$, then
\begin{align*}
\max_{x,y\in \dZ^d_n}\left|\frac{\pr_x(X(t)=y)}{\pi(y)}-1\right|\le c e^{-c\ga}
\end{align*}
for some $c>0$, where $\pi$ is the stationary measure. Recall that for $\dZ^d_n$ the size of the vertex set $\abs{V}=n^d$. Since $|\cR(t)|/|V|\le t/n^d=\ga n^{2-d}\log n$, we see that if $n$ and $\ga$ are large enough, then $\pr_x(X(t)=y)$ is close to the stationary measure while the random walk rarely covers $G$ up to time $t$.
\end{remark}
\begin{prop}[One vs.~Many - Near Uniform Dependent]\label{lem:almostind}
Suppose the assumptions of Theorem~\ref{thm: k-collab} hold, let $\pi_\ast$ be as in~\eqref{eq:pistar}, and let $\nu_i,i\in[k]$ be probability measures on $V$ such that for some $\varepsilon\in (0,1)$
\begin{align}\label{eq:nu-assumption1/4}
\sup_{x\in V}\left|\frac{\nu_i(x)-\pi(x)}{\pi(x)}\right|\le (1+\varepsilon \pi_\ast)^{1/k}-1, \text{ for all } i\in[k].
\end{align}
Assume that there is a coupling $\mu$ of $\nu_i,i\in[k]$ such that
\begin{align}\label{eq:almost ind}
\sup_{x_i \in G, i\in[k]}\frac{|\mu(x_1,x_2,\ldots,x_k)-\prod_{i=1}^k\nu_i(x_i)|}{\prod_{i=1}^k\pi(x_i)}\le (1-\varepsilon)\cdot \pi_\ast.
\end{align}
Then for every $t_1, t_2,\ldots,t_k\in\cI$, we have
\begin{align*}
\E_{\mu} \abs{\bigcup_{i=1}^k \cR_i(t_i)}
\ge\E_{\pi} \abs{\cR\left(t_1+t_2+\cdots+t_k\right)}.
\end{align*}
\end{prop}
We remark that Assumptions~\eqref{eq:nu-assumption1/3},~\eqref{eq:nu-assumption1/4}, and~\eqref{eq:almost ind} are quite strong in the sense that $\pi_\ast\le (|V|-1)^{-1}$ and we are interested in the case where the size of the graph $|V|$ is large. However, such assumptions are required since the results in Propositions~\ref{lem: nonstat} and~\ref{lem:almostind} hold for \emph{any} number of random walks, \emph{any} underlying network, and under very mild assumptions on $t_i$, if any (depending on the cases~\ref{continuous version},~\ref{lazy version}, and~\ref{even version}). This observation also suggests that not only stationarity but also uniformity of the starting distribution is crucial in this context. Hence one can also ask if the converse is true in some sense: if multiple random walks are more efficient in covering the graph than a single random walk for any number of random walks and for any lengths of lifespans, then the starting distribution should be almost stationary and almost uniform in a practical sense.
More generally this can be stated as the following question: \emph{how does the average size of the total area covered by $k$ walkers depend on the joint distribution of the initial positions?} See Question~\ref{q:opt}. While the generality of this question makes it challenging, it turned out that we can answer the question for particular schemes. In particular, we show that $k$ random walks started at points, chosen independently from the same distribution $\nu$, on average cover the graph more effectively than the $k$ walkers started at the same point, sampled from $\nu$.
\begin{lem}[Star vs. Many IID]\label{thm: k-collab vs star}
For any $k\in\dN$ and $t_1=t_2=\cdots=t_k\in \dN$, any connected graph $G$ with random walks $(X_i(t))_{t\in\dN}$ on it, and any probability measure $\gn$ on $V$, we have
\begin{align*}
\E_{\gn^k} \abs{\bigcup_{i=1}^k\cR_i(t_i)} \ge\E_{\gn} \abs{\bigcup_{i=1}^k\cR_i(t_i)}.
\end{align*}
\end{lem}
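Continuing the Monte Carlo sketch above, the ``star'' scheme differs only in that all $k$ walkers share one sampled starting point; the helper below is again hypothetical and is meant for a side-by-side comparison with \textsf{\small mean\_range()}.
```python
import numpy as np

rng = np.random.default_rng(1)

def mean_range_star(P, pi, t, k, trials=2000):
    """Monte Carlo estimate of E|union of ranges| when all k walkers of
    lifespan t start from a single common point sampled from pi."""
    n = len(pi)
    total = 0
    for _ in range(trials):
        visited = np.zeros(n, dtype=bool)
        x0 = rng.choice(n, p=pi)
        for _ in range(k):
            x = x0
            visited[x] = True
            for _ in range(t):
                x = rng.choice(n, p=P[x])
                visited[x] = True
        total += visited.sum()
    return total / trials
```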
We have shown that, under appropriate assumptions, multiple random walks started from independent (almost) stationary distributions are better than a single random walk or than multiple walks started at a common vertex. A natural further step is to compare a single random walk with multiple random walks sharing a single starting position; see Question~\ref{q:star vs path}. The following theorem answers this question in the case of the $d$-dimensional discrete torus $\dZ^d_n$, $k=3$, provided that at least one lifespan is long enough.
\begin{thm}\label{thm:starone}
Let $G=\dZ^d_n$, $d\ge 3$, and $t_1, t_2, t_3\in\dN$.
Consider three independent simple random walks $X_1, X_2, X_3$ starting at $0$.
Then there exists a constant $C>0$ such that if $t_1\ge C n^2\log n$, then
\begin{align*}
\E_{0} \abs{\cR_1(t_1)\cup \cR_2(t_2)\cup \cR_3(t_3)}\ge \E_{0} \abs{\cR\left(t_1+t_2+t_3\right)}.
\end{align*}
\end{thm}
The main ingredients of the proof are time reversal and the fact that the mixing time is of order $n^2$. One may extend the result to more general settings where time reversal and mixing properties are available. Moreover, the case when $k\ge 4$ remains open.
\section{Proof of Main Results}
For $A\subset V$ define $\tau_{i}(A)$ as the hitting time of the $i$-th random walk $X_i$ to $A$, \emph{i.e.,}
\begin{align*}
\tau_{i}(A):=\min\{t\ge 0\mid X_{i}(t)\in A\}.
\end{align*}
If $A=\{x\}$, we simply write $\gt_i(x)$. Let $\mathcal{V}_i(t)$ be the set of vertices of $V$ not visited by $X_i$ until time $t$. We call $\mathcal{V}_i(t)$ the vacant set at time $t$. Note that the range $\cR_i(t)=V\setminus \mathcal{V}_i(t)$ and the size of $\mathcal{V}_i(t)$ can be written as
\begin{align*}
|\mathcal{V}_i(t)| = \sum_{x\in V}\vone_{\{\gt_i(x)>t\}},
\end{align*}
which will be frequently used in what follows.
We first give the proof of Theorem~\ref{thm: k-collab}.
\begin{proof}[Proof of Theorem~\ref{thm: k-collab}]
We present the proof in the case of discrete Markov chains, {\emph{i.e.,}} cases~\ref{lazy version} and~\ref{even version}. The continuous case is analogous.
Let $y\in V$ be fixed and $|V|=n$. Let $\wh{P}$ be the matrix obtained from $P$ by removing the row and the column corresponding to $y$, that is, $\wh{P}(\xi, \eta) = P(\xi, \eta)$ for $\xi, \eta \in V \setminus\{y\}$. By reversibility, the matrix $A(x,\xi)=\pi(x)^{\half}\pi(\xi)^{-\half}\wh{P}(x,\xi)$ is symmetric.
It follows from the Spectral theorem that there exist eigenvalues $\gl_v$ and orthonormal eigenvectors $\varphi_v$ for $v\in V\setminus\{y\}$ for $A$.
By the spectral representation, we have
\begin{align*}
\pr_x(\gt(y)>t)
&= \pr_x(X_s\neq y, s=0,1,\ldots,t)\\
&= \sum_{\xi\in V\setminus\{y\}} \wh{P}^t(x,\xi)\\
&= \sum_{\xi\in V\setminus\{y\}}\pi(\xi)^{\half}\pi(x)^{-\half}A^t(x,\xi)
= \sum_{v, \xi\in V\setminus\{y\}}\pi(\xi)^{\half}\pi(x)^{-\half}(\gl_v)^t \varphi_v(x)\ol{\varphi_v(\xi)}.
\end{align*}
Thus,
\begin{align*}
\pr_{\pi}(\gt(y)>t)
&= \sum_{x,v, \xi\in V\setminus\{y\}} \gl_v^t\cdot \varphi_v(x)\ol{\varphi_v(\xi)} \pi(\xi)^{\half}\pi(x)^{\half}
= \sum_{v \in V\setminus\{y\}}\ga_v \cdot \gl_v^t,
\end{align*}
where
\begin{align*}
\ga_v = \Big|\sum_{\xi\in V\setminus\{y\}}\varphi_v(\xi)\pi(\xi)^{\half}\Big|^2\ge 0.
\end{align*}
Note that if $t=0$, we have $\pr_{\pi}(\gt(y)>t)=1-\pi(y)=\sum_v \ga_v$. Define a random variable $W$ by
\begin{align*}
\pr(W=\lambda_v) = \frac{\ga_v}{1-\pi(y)},\quad\text{ for } v\in V\setminus\{y\},
\end{align*}
then
\begin{align*}
(1-\pi(y))^{-1}\pr_{\pi}(\gt(y)>t) =\E[W^t].
\end{align*}
For the next step we need to show that $\E[W^t]\E[W^s]\,\le\, \E[W^{t+s}]$.
In cases~\ref{continuous version} and~\ref{lazy version} the eigenvalues of the transition matrix are nonnegative. By the eigenvalue interlacing theorem, we see that $\gl_v$ are nonnegative for all $v\in V\setminus\{y\}$, that is, $W$ only takes nonnegative values. Let $t,s\in\dN$ and let $W'$ be an independent copy of $W$; then it follows from the monotonicity of the maps $x^t$, $x^s$ for $x\ge 0$ that
\begin{align*}
0\le \E[(W^t-(W')^t)(W^s-(W')^s)] = 2(\E[W^{t+s}]-\E[W^t]\E[W^s]).
\end{align*}
Thus, we have that
\begin{align}\label{ineq:moments}
\frac{\pr_{\pi}(\gt(y)>t)\pr_{\pi}(\gt(y)>s) }{(1-\pi(y))^{2} }
&= \E[W^t]\E[W^s]\,\le\, \E[W^{t+s}]
= \frac{\pr_{\pi}(\gt(y)>t+s)}{1-\pi(y)}
\end{align}
and so
\begin{align}\label{prob-ineq-1}
\pr_{\pi}(\gt(y)>t) \pr_{\pi}(\gt(y)>s)
\le (1-\pi(y))\pr_{\pi}(\gt(y)>t+s)
\le \pr_{\pi}(\gt(y)>t+s).
\end{align}
By iteration, we get
\begin{align}
\prod_{i=1}^k \pr_{\pi}(\gt(y)>t_i)
\le (1-\pi(y))^{k-1}\pr_{\pi}\left(\gt(y)>t_1+t_2+\cdots+t_k\right).
\end{align}
It remains to address the case~\ref{even version}. The odd values can be grouped in pairs if the sum $\sum_{i=1}^kt_i$ is even. For each pair of odd natural numbers $t$ and $s$ by monotonicity of the maps $x^t$, $x^s$ for $x\ge 0$, the inequality~\eqref{ineq:moments} holds. Hence replacing each pair of $\E[W^t]\E[W^s]$ by $\E[W^{t+s}]$ allows us to upper bound the product of such moments as another product of only even moments.
$$\prod_{i=1}^k \pr_{\pi}(\gt(y)>t_i) =(1-\pi(y))^{k}\prod_{i=1}^k \E[W^{t_i}]\le(1-\pi(y))^{k}\prod_{j=1}^{\ell} \E[W^{t_j'}],$$
where all $t_j'$'s are even. Thus we can replace $W$ by $\abs{W}$. The rest of the argument is the same as in cases~\ref{continuous version} and~\ref{lazy version}.
This completes the proof.
\end{proof}
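The survival probabilities appearing in this proof can be computed exactly from the substochastic matrix $\wh{P}$, giving a direct numerical check of the key inequality \eqref{prob-ineq-1}. The sketch below assumes \textsf{\small P} and \textsf{\small pi} as in the earlier snippets and, in line with cases~\ref{continuous version}--\ref{even version}, a lazy chain or even values of $t$ and $s$.
```python
import numpy as np

def survival(P, pi, y, t):
    """P_pi(tau(y) > t): stationary mass avoiding vertex y for t steps."""
    keep = [i for i in range(len(pi)) if i != y]
    Phat = P[np.ix_(keep, keep)]   # P with row and column y removed
    mass = pi[keep]
    for _ in range(t):
        mass = mass @ Phat
    return float(mass.sum())

# check, e.g. for the lazy chain P_lazy = (np.eye(len(pi)) + P) / 2:
# survival(P_lazy, pi, 0, t) * survival(P_lazy, pi, 0, s)
#     <= survival(P_lazy, pi, 0, t + s)
```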
\begin{proof}[Proof of Proposition~\ref{lem: nonstat}]
Since for each $i\in[k]$ we have that
\begin{align*}
\left|\pr_{\nu_i}(\gt(y)>t)-\pr_{\pi}(\gt(y)>t)\right|
&= \left|\sum_{x\in V}(\nu_i(x)-\pi(x))\pr_{x}(\gt(y)>t)\right|\\
&\le \sup_{x\in V}\left|\frac{\nu_i(x)-\pi(x)}{\pi(x)}\right|\pr_{\pi}(\gt(y)>t).
\end{align*}
Assumption~\eqref{eq:nu-assumption1/3} implies that for $i\in[k],$ $$\pr_{\nu_i}(\gt(y)>t)\le (1+\pi_\ast)^{1/k}\cdot \pr_{\pi}(\gt(y)>t).$$ It then follows from~\eqref{prob-ineq-1} that
\begin{align}
\begin{split}\label{eq: gap}
\prod_{i=1}^k\pr_{\nu_i}(\gt(y)>t_i)
&\le (1+ \pi_\ast)\cdot \prod_{i=1}^k\pr_{\pi}(\gt(y)>t_i)\\
&\le \frac{1}{1-\pi(y)}\cdot \prod_{i=1}^k\pr_{\pi}(\gt(y)>t_i)
\le \pr_{\pi}(\gt(y)>t_1+t_2+\cdots+t_k).
\end{split}
\end{align}
By iteration, we complete the proof.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{lem:almostind}]
We will prove the result for $k=2$; the general $k$ case follows similarly. For $x\in V$, Assumption~\eqref{eq:almost ind} and inequality~\eqref{eq: gap} yield that
\begin{align*}
&\abs{\pr_{\mu}(\gt_1(x)>t_1, \gt_2(x)>t_2)-\pr_{\nu_1}(\gt_1(x)>t_1)\pr_{\nu_2}(\gt_2(x)>t_2)}\\
&\qquad\le \sum_{y,z\in G}\pr_{y}(\gt_1(x)>t_1)\pr_{z}(\gt_2(x)>t_2) \cdot \abs{\mu(y,z)-\nu_1(y)\nu_2(z)}\\
&\qquad\le (1-\varepsilon)\cdot \pi_\ast\cdot \pr_{\pi}(\gt_1(x)>t_1)\pr_{\pi}(\gt_2(x)>t_2).
\end{align*}
By a similar argument to the one in the proof of Proposition~\ref{lem: nonstat} and Assumption~\eqref{eq:nu-assumption1/4}, we have that $$\pr_{\nu_i}(\gt(y)>t)\le \sqrt{1+ \varepsilon\pi_\ast}\cdot \pr_{\pi}(\gt(y)>t).$$
Thus
\begin{align*}
\pr_{\mu}(\gt_1(x)>t_1, \gt_2(x)>t_2)
&\le \pr_{\nu_1}(\gt_1(x)>t_1)\pr_{\nu_2}(\gt_2(x)>t_2) \\
&\qquad +\varepsilon\pi_\ast\cdot \pr_{\pi}(\gt_1(x)>t_1)\pr_{\pi}(\gt_2(x)>t_2)\\
&\le (1+ \pi_\ast)\cdot \pr_{\pi}(\gt_1(x)>t_1)\pr_{\pi}(\gt_2(x)>t_2)\le \pr_{\pi}(\gt(x)>t_1+t_2).
\end{align*}
This completes the proof.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{thm: k-collab vs star}]
The case when $G$ is a general graph and $t=t_1=\cdots=t_k$ follows directly from Jensen's inequality. Indeed,
\begin{align*}
\E_{x\sim\gn} \abs{\bigcup_{i=1}^k\cR_i(t)}
&= \abs{V}-\E_{x\sim \gn} \sum_{y\in V}\prod_{i=1}^k \vone_{\{\gt_i(y)>t\}} \\
&= \abs{V}-\sum_{y\in V}\E_{x\sim \gn} \big(\pr_x(\gt(y)>t)\big)^k\\
&\le \abs{V}-\sum_{y\in V}(\pr_\gn(\gt(y)>t))^k
= \E_{\gn^k} \abs{\bigcup_{i=1}^k\cR_i(t)} .
\end{align*}
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:starone}]
Suppose $t_1\ge \ga T_{\mathrm{mix}}$, where $\ga>0$ may depend on $n$ and $T_{\mathrm{mix}}$ is the mixing time. It is well-known that $T_{\mathrm{mix}}=\Theta(n^2)$. By time reversal, it is equivalent to consider the ranges of two random walks $Y_1$ and $Y_2$ such that $Y_1$ starts from an ``almost'' stationary measure, $Y_1(0)=X_1(t_1)$, and $Y_2$ starts from $Y_2(0)=Y_1(t_1)$, see Figure~\ref{fig:timereversal}. If $t_1$ is large enough, it follows from the mixing property that the starting location $Y_2(0)$ is ``almost'' stationary and ``almost'' independent of the random walk $Y_1$. Indeed, since for some constant $c>0$ we have that $$\sup_{x\in V}|\pr_0(Y_1(t_1)=x)-\pi(x)|\le O(n^{-d}e^{-c\ga}),$$ it is easy to see that if $\ga=c'\log n$ for some $c'>0$, then the distributions of $Y_1(0)$ and $Y_2(0)$ satisfy Assumptions~\eqref{eq:nu-assumption1/4} and~\eqref{eq:almost ind}. Thus, applying Proposition~\ref{lem:almostind} we conclude that three random walks starting at the same site cover, on average, more of the discrete torus than a single random walk with the combined lifespan, as desired.\end{proof}
\tikzset{
Big dot/.style={
circle, inner sep=0pt,
minimum size=2.5mm, fill=black
}
}
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}[thick, scale=0.7]
\node[Big dot] (1) at (-5,0) {};
\node[Big dot] (2) at (5,0) {};
\node[Big dot] (3) at (-6.4,2) {};
\node[Big dot] (V52) at (-1,-3) {};
\node[Big dot] (V72) at (-7,-1) {};
\draw plot [smooth] coordinates {(-5,0) (-6,1.2) (-4,1.8) (-6.4,2)};
\draw plot [smooth] coordinates {(-5,0) (-3,0.2) (-2,-2) (-1,-3)};
\draw plot [smooth] coordinates {(-5,0) (-5.5,-0.5) (-6,-2) (-7,-1)};
\draw (-4.3,-0.2) node[below] {$X_i(0)\sim \pi$};
\draw (-1,-3.2) node[below] {$X_1(t_1)$};
\draw (-6.7,2) node[below] {$X_2(t_2)$};
\draw (-7,-0.8) node[above] {$X_3(t_3)$};
\node[Big dot] (r3) at (3.6,2) {};
\node[Big dot] (V52) at (9,-3) {};
\node[Big dot] (V72) at (3,-1) {};
\draw plot [smooth] coordinates {(5,0) (4,1.2) (6,1.8) (3.6,2)};
\draw plot [smooth] coordinates {(5,0) (7,0.2) (8,-2) (9,-3)};
\draw plot [smooth] coordinates {(5,0) (4.5,-0.5) (4,-2) (3,-1)};
\draw (5.4,-0.2) node[below] {$Y_1(t_1)$};
\draw (9.3,-3.2) node[below] {$Y_1(0)\sim \pi$};
\draw (3.4,2) node[below] {$Y_2(t_2)$};
\draw (2.6,-0.8) node[above] {$Y_1(t_1+t_3)$};
\draw (-2,-0.6) node[below] {$t_1$};
\draw (8,-0.6) node[below] {$t_1$};
\draw (-6.2,-2)node[below] {$t_3$};
\draw (3.8,-2)node[below] {$t_3$};
\draw (-3.7,2)node[below] {$t_2$};
\draw (6.3,2)node[below] {$t_2$};
\end{tikzpicture}
\end{center}
\caption{Depiction of time reversal procedure from the proof of Theorem~\ref{thm:starone}. On the left there is a sketch of trajectories of $3$ independent random walks $X_i$ with lifespans $t_1,t_2$ and $t_3$ started at the same point sampled from stationary distribution. On the right there are two trajectories of random walks $Y_1$ and $Y_2$. $Y_1$ is started from a stationary point and has lifespan $t_1+t_3$, while $Y_2$ is started from $Y_1(t_1)$ and has a lifespan of $t_2$.}
\label{fig:timereversal}
\end{figure}
\section{Open questions and discussion}\label{sec: discussion}
In this section, we discuss further questions and state conjectures, some of which are based on simulations performed by Tyler M. Gall in the case when $G$ is a two-dimensional torus and by Andrew Yin in the case when $G$ is a random graph.
As mentioned in Section~\ref{sec:main}, we believe that the assumption of case~\ref{even version}, which states that the total lifespan $T:=\sum_{i=1}^kt_i$ of the random walks $(X_i(t))_{t\in[t_i]}$ has to be even, is unnecessary, and the result should hold for any $t_1, t_2,\ldots,t_k$. However, this assumption is used only in the inequality~\eqref{ineq:moments}. While allowing the total time to be odd does not seem to make a drastic difference, the inequality becomes difficult to justify. Hence we leave this part as a conjecture.
\begin{conj}\label{conj: oddcase}
The inequality~\eqref{eq: main} holds for any total lifespan of the discrete Markov chains.
\end{conj}
It is natural to study the difference between the left-hand and right-hand sides of~\eqref{eq: main} as a function of the total lifespan $T$. When $T=0$, the difference is at most $k-1$;
on the other hand, when $T$ is larger than the cover time of the graph $G$, the difference is $0$.
\begin{figure}[htb]
\centering
\includegraphics[width=0.49\linewidth]{2collab_edited.png}
\includegraphics[width=0.5\linewidth]{3collab_edited.png}
\caption{Simulation of the averaged difference of the number of covered vertices (blue smooth curves) of $G_{n,p}$ with $n=100$, $p=0.1$ by $2$ random walks (left) and $3$ random walks (right) of equal lengths started at a uniformly chosen vertex, versus a single random walk of length $T=k\cdot c\cdot n^2$, plotted against $c$. Orange dashed curves are fitted exponential curves.}
\label{fig:expdecay}
\end{figure}
Based on simulations, see Figure~\ref{fig:expdecay}, we ask the following question.
\begin{ques}
Under the assumptions of Theorem~\ref{thm: k-collab}, is it true that
\begin{align}\label{eq: exp}
\E_{\pi^{k}} \abs{\bigcup_{i=1}^k\cR_i(t_i)} -\E_{\pi} \abs{\cR\left(\sum_{i=1}^kt_i\right)} \approx k\cdot \exp\left(-f(G)\cdot \sum_{i=1}^kt_i\right),
\end{align}
where $\pi$ is the stationary distribution and $f(G)$ is a function that depends on the graph $G$ and possibly on the number of walkers $k$?
\end{ques}
In addition to a quantitative bound on the gap in the inequality~\eqref{eq: main}, it is natural to study the fluctuation behavior of the quantities on both sides of the inequality.
While the direct comparison between the variances of the two models is still open, we remark that in the case of the discrete torus $\dZ^d_n$, $d\ge 3$, partial answers are known.
The fluctuation behavior of the set of vertices covered by multiple random walks on the discrete torus $\dZ^d_n$, $d\ge 3$, as $n$ goes to $\infty$, was investigated in~\cite{DK21}. Indeed, if $t_1=t_2=\cdots=t_k=cn^d$, $c>0$, $d\ge 5$, and $\gs_{n,k}^2$ is the variance of $\big|\bigcup_{i=1}^k \cR_i(cn^d)\big|$, then it was proven in~\cite{DK21} that $n^{-d}\gs_{n,k}^2$ converges to $\nu_d(2kc/G(0))$, where $\nu_d$ is an explicit function and $G(\cdot)$ is the Green's function on $\dZ^d$. Similar results hold for $d=3,4$. One can also extend this to general vertex-transitive graphs under some assumptions, such as the hyper-cube $\dZ^n_2$ and the Cayley graphs of the symmetric groups, see~\cite{DK21}*{Remark 1.7}. The result in~\cite{DK21} implies that on the discrete torus $\dZ^d_n$, $d\ge 3$, the ranges of the two models in~\eqref{eq: main} have the same order of asymptotic variances if $t_1=\cdots=t_k=cn^d$ for some $c>0$.
We expect Lemma~\ref{thm: k-collab vs star} to hold more generally, for arbitrary values of $t_i$ and for a general class of graphs such as vertex-transitive ones. For a transitive graph, it is clear that if for $x,y\in V$ the probability $\pr_x(\tau(y)\le t)$ is monotone with respect to $\mathrm{dist}_G(x,y)$, then the result follows from the FKG inequality. Such monotonicity does hold for simple graphs such as a cycle; however, it is not established for general graphs. Hence we leave it as a question.
\begin{ques}\label{q:k-collab vs star}
Under what conditions on $k,t_1,t_2,\ldots,t_k\in \dN$, a graph $G$, and the probability measure $\nu$ do we have that
\begin{align*}
\E_{\gn^k} \abs{\bigcup_{i=1}^k\cR_i(t_i)} \ge\E_{\gn} \abs{\bigcup_{i=1}^k\cR_i(t_i)}?
\end{align*}
\end{ques}
Furthermore, it is of interest to understand how the distribution of the starting positions affects the performance compared to a single random walk with a combined lifespan. Theorem~\ref{thm: k-collab} and Proposition~\ref{lem: nonstat} state that if the starting distribution of each walker is close to stationary and independent of the others, then such a collaboration, on average, covers more of the graph than a single walk. On the other hand, if $k$ walkers start at a single vertex ({\emph{i.e.,}} the star shape), then for small values of $t_1,t_2,\ldots,t_k\in \dN$ they would interfere with each other, and hence a single walk with lifespan $T=\sum_{i=1}^kt_i$ should visit more vertices. As the values of $t_i$ increase and approach the mixing time, the positions of the walkers become closer to independent, and they should start covering the graph more and more efficiently. By that time, however, not much of the graph may be left unexplored. This suggests that one might want to consider the rate of acquiring new vertices and study the total time $T$ at which $k$ walkers start to explore faster than a single random walk.
\begin{ques}\label{q:star vs path}
Under what conditions on $k,t_1,t_2,\ldots,t_k\in \dN$ and for what measure $\boldsymbol{\nu}$ on $V^k$ do we have
\begin{align*}
\E_{\boldsymbol{\nu}} \abs{\bigcup_{i=1}^k\cR_i(t_i)}
\ge\E_{\pi} \abs{\cR\left(\sum_{i=1}^k t_i\right)}?
\end{align*}
\end{ques}
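To make the two quantities in Question~\ref{q:star vs path} concrete, the following Python sketch (purely illustrative; all names are ours and not part of this paper) estimates both expectations by Monte Carlo simulation on the cycle $\dZ_n$, with independent uniform, hence stationary, starting points:
\begin{verbatim}
import random

def walk_range(n, start, steps):
    """Vertices visited by a lazy simple random walk on the cycle Z_n."""
    pos, visited = start, {start}
    for _ in range(steps):
        pos = (pos + random.choice((-1, 0, 0, 1))) % n  # lazy step
        visited.add(pos)
    return visited

def compare(n, lifespans, trials=2000):
    """Estimate E|union of k ranges| vs. E|range of one walk of length T|."""
    total = sum(lifespans)
    multi = single = 0.0
    for _ in range(trials):
        union = set()
        for t in lifespans:  # independent uniform (stationary) starts
            union |= walk_range(n, random.randrange(n), t)
        multi += len(union)
        single += len(walk_range(n, random.randrange(n), total))
    return multi / trials, single / trials

print(compare(n=200, lifespans=[100] * 5))
\end{verbatim}
Replacing the uniform starts by a common starting vertex gives a quick empirical handle on the star-shaped case discussed above.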
One can also look at the proposed problem from the point of view of the seeding problem, \emph{i.e.,}~what initial conditions allow one to cover the graph most efficiently. Sometimes in diffusion models on graphs, it is more beneficial to start at vertices with the highest degrees or at the furthest distance from each other in the graph metric.
\begin{ques}\label{q:opt}
For a connected graph $G$, for which product measures $\nu_1\otimes\nu_2\otimes\cdots\otimes\nu_k$ do $k$ random walks started from them cover the graph, on average, most effectively?
\end{ques}
This article is concerned with inequalities between the average sizes of the ranges under the corresponding strategies for capturing the graph. It is of great interest to extend this study to stochastic dominance, under suitable assumptions, if at all possible.
\begin{ques}
Let $t_1,t_2,\ldots, t_k\in\dN$ and $t=\sum_{i=1}^k t_i$. Is it true that for any $y>0$
\begin{align*}\pr_{\pi^{k}}\left(\abs{\bigcup_{i=1}^k\cR_i(t_i)}\ge y\right) \ge\pr_{\pi}\left(|\cR(t)|\ge y\right)?
\end{align*}
\end{ques}
Finally, it is likely that non-reversible Markov chains, such as the non-backtracking random walk, would outperform the random walks considered in this work. While it seems intuitive that results similar to ours should hold, our analysis relies heavily on reversibility.
\begin{ques}
Is it possible to extend Theorem~\ref{thm: k-collab} to the non-reversible setting, \emph{e.g.,}\ a non-backtracking random walk, various biased random walks, or self-avoiding random walks with a re-sampling rule for when the particle cannot continue?
\end{ques}
\vskip.1in
\noindent{\bf Acknowledgments.}
We thank our undergraduate students, Tyler M.~Gall and Andrew Yin, for helping with simulations and for many insightful conversations. We also thank the Illinois Geometry Lab at UIUC for providing a platform for undergraduate researchers.
|
{
"arxiv_id": "2302.14260",
"language": "en",
"timestamp": "2023-03-01T02:07:10",
"url": "https://arxiv.org/abs/2302.14260",
"yymm": "2302"
} | \section*{Acknowledgement}
This work was partly supported by Institute of Information \& communications Technology Planning \& Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2019-0-01906, Artificial Intelligence Graduate School Program (POSTECH) and No.2022-0-00959, (part2) Few-Shot learning of Causal Inference in Vision and Language for Decision Making) and National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (2022R1C1C1013366, 2022R1F1A1064569).
\bibliographystyle{unsrtnat}
\section{Evaluating Intervention Strategies}
\label{sec:results}
\subsection{Experiment Settings}
\textbf{Dataset}\quad
We experiment with three datasets: (1) CUB \citep{wah2011caltech} -- the standard dataset used to study CBMs, (2) SkinCon \citep{daneshjouskincon} -- a medical dataset used to build interpretable models, and (3) Synthetic -- the synthetic datasets we generate based on different causal graphs to conduct a wide range of controlled experiments.
Extensive details of these datasets including preprocessing, label characteristics, data splits, and the generation process are provided in \cref{app:datasets}.
\textbf{Implementation}\quad
We follow the standard implementation protocols as in previous works.
The full details including model architectures and optimization hyperparameters are provided in \cref{app:implementation}.
Our code is available at \url{https://github.com/ssbin4/Closer-Intervention-CBM}.
\textbf{Training}\quad
We consider the following training strategies similarly to \citet{koh2020concept}:
\begin{itemize}[noitemsep, leftmargin=10pt, topsep=0pt]
\item \textsc{ind}:
$g$ and $f$ are trained \underline{ind}ependently of each other.
$f$ always takes ground-truth concept values as input.
\item \textsc{seq}:
$g$ and $f$ are trained \underline{seq}uentially, $g$ first and $f$ next.
$f$ takes predicted concept values as input from trained $g$.
\item \textsc{jnt}:
$g$ and $f$ are trained \underline{j}oi\underline{nt}ly at the same time as a multi-objective. This results in increased initial task accuracy but comes with the price of decreased intervention effectiveness \citep{koh2020concept}.
\item \textsc{jnt+p}:
similar to \textsc{jnt} but the input to $f$ is sigmoid-activated \underline{p}robability distribution rather than logits.
\end{itemize}
\textbf{Conceptualization}\quad
We consider different forms of concept predictions as input to the target predictor at inference (a code sketch follows the list):
\begin{itemize}[noitemsep, leftmargin=10pt, topsep=0pt]
\item \textsc{soft}:
$f$ takes real values of $\hat{c} \in [0, 1]^k$ as \underline{soft} representation of concepts \citep{koh2020concept}.
\item \textsc{hard}:
$f$ takes binary values of $\hat{c} \in \{0, 1\}^k$ as \underline{hard} representation of concepts based on $\mathbbm{1}[\hat{c} \geq 0.5]$ \citep{mahinpei2021promises}.
This prevents information leakage \citep{havasi2022addressing} in exchange for decreased prediction performance.
\item \textsc{samp}:
$m$ random \underline{samp}les are drawn by treating the soft concept prediction scores as a probability distribution, and the target prediction is made as an ensemble, \emph{i.e.}, $\hat{y} = \frac{1}{m} \sum_{i=1}^{m} f(\hat{c})$ where $\hat{c}$ is the binarized concept prediction \citep{havasi2022addressing}.
We use $m=5$ for the experiments.
\end{itemize}
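As a minimal sketch of the three conceptualization modes above, assuming a generic target predictor \texttt{f} that maps a concept vector to class probabilities (all names here are ours, not the actual implementation):
\begin{verbatim}
import numpy as np

def conceptualize(c_soft, f, mode="soft", m=5, rng=None):
    """Turn soft concept scores c_soft in [0,1]^k into a target
    prediction via f, under the SOFT / HARD / SAMP modes."""
    rng = rng or np.random.default_rng(0)
    if mode == "soft":                      # real-valued scores as-is
        return f(c_soft)
    if mode == "hard":                      # thresholded binary concepts
        return f((c_soft >= 0.5).astype(float))
    if mode == "samp":                      # ensemble of m binarized draws
        draws = rng.random((m, c_soft.size)) < c_soft
        return np.mean([f(d.astype(float)) for d in draws], axis=0)
    raise ValueError(mode)
\end{verbatim}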
\begin{figure*}[!t]
\begin{subfigure}{0.19\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/cost_a0.01_b100.pdf}
\caption{$\alpha = 0.01$}
\label{fig:cost_alpha0.01}
\end{subfigure}%
\hspace*{\fill}
\begin{subfigure}{0.19\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/cost_a0.03_b100.pdf}
\caption{$\alpha = 0.03$}
\label{fig:cost_alpha0.03}
\end{subfigure}%
\hspace*{\fill}
\begin{subfigure}{0.19\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/cost_a0.05_b100.pdf}
\caption{$\alpha = 0.05$}
\label{fig:cost_alpha0.05}
\end{subfigure}%
\hspace*{\fill}
\begin{subfigure}{0.19\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/cost_a0.1_b100.pdf}
\caption{$\alpha = 0.1$}
\label{fig:cost_alpha0.1}
\end{subfigure}%
\hspace*{\fill}
\begin{subfigure}{0.19\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/cost_a1.0_b100.pdf}
\caption{$\alpha = 1.0$}
\label{fig:cost_alpha1}
\end{subfigure}%
\caption{
Effect of $\alpha$ on intervention.
We fix $\tau_i = 1, \beta = 100, k=100$.
\textsc{ucp} and \textsc{ectp}, the criteria that performed strongly in the earlier evaluations, become less effective as $\alpha$ decreases.
Here, the kinked shapes are due to the relatively high initial cost of the first intervention before $n$ becomes large.
}
\label{fig:cost_result_alpha}
\end{figure*}
\subsection{Evaluating Concept Selection Criteria}
\label{sec:results-criteria}
We first evaluate the intervention effectiveness of concept selection criteria and present the results in \cref{fig:result-main}.
Across all datasets, we find that the current practice of random intervention (\textsc{rand}) is easily outperformed by the other alternatives in almost all cases with a significant margin.
Specifically, in the CUB experiment, correcting $20$ concepts by random intervention reduces the task error by less than $4\%$, whereas correcting the same number based on the uncertainty of concept predictions (\textsc{ucp}) leads to more than a $16\%$ error reduction.
To put it differently, \textsc{rand} requires intervening on $43$ concepts to reduce the error by half, whereas \textsc{ucp} needs to fix only $12$ concepts to achieve the same reduction.
In the SkinCon experiment, selecting concepts based on the expected change in target prediction (\textsc{ectp}) leads the way among others, and yet, the scale of improvements over \textsc{rand} is not as large.
Note also that the strategy of fixing concepts with the largest loss first (\textsc{lcp}) performs exceptionally well in all cases.
This is, however, due to the help of ground-truth knowledge of the concepts, which is unavailable in practice.
Nonetheless, we believe this can serve as an indicator to guide a better intervention strategy which we defer to future work.
\subsubsection{Reflecting cost of intervention}
As we discussed in \cref{sec:cost}, the cost of intervention may differ by concept selection criteria.
Taking into account this aspect, we set up experiments where we can evaluate the intervention effectiveness in terms of the theoretical cost.
Specifically, we model the relationships between $\tau_i$, $\tau_g$, and $\tau_f$ as $\tau_i = \alpha \tau_g$ and $\tau_g = \beta \tau_f$; that is, the cost of intervention (\emph{e.g.}, the time to fix a concept) is proportional to the cost of making inference on $g$ with factor $\alpha$, and likewise $\tau_g$ is proportional to $\tau_f$ with factor $\beta$.
Then we can evaluate the cost-reflected intervention effectiveness with respect to arbitrary unit ($\upsilon$), and from which, we can further show how it transforms by controlling $\alpha$ and $\beta$.
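As a sketch of this cost model, the cumulative cost of a criterion after intervening on $n$ concepts can be computed directly from \cref{tab:intervention_methods_cost}; the helper below is ours (hypothetical), not part of the paper's code:
\begin{verbatim}
def intervention_cost(criterion, n, k, tau_f=1.0, alpha=0.05, beta=100.0):
    """Theoretical cost of intervening on n concepts, with
    tau_g = beta * tau_f and tau_i = alpha * tau_g as in the text."""
    tau_g = beta * tau_f
    tau_i = alpha * tau_g
    fixed = {            # one-off inference cost per criterion
        "rand":  0.0,
        "ucp":   tau_g,
        "lcp":   tau_g,
        "cctp":  tau_g + 2 * tau_f,
        "ectp":  tau_g + (2 * k + 1) * tau_f,
        "eudtp": tau_g + (2 * k + 1) * tau_f,
    }[criterion]
    return fixed + n * tau_i

# e.g., intervention_cost("ucp", n=20, k=100) vs. the same for "rand"
\end{verbatim}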
First, the result of changing $\alpha$ is plotted in \cref{fig:cost_result_alpha}.
As $\alpha$ becomes smaller, \textsc{rand} becomes very effective compared to, for instance, \textsc{ucp}.
This makes sense because with small $\alpha$, $\tau_i$ becomes relatively small and $\tau_g$ dominates the cost of \textsc{ucp}, which is $\mathcal{O}(\tau_g + n \tau_i)$ as seen in \cref{tab:intervention_methods_cost}; therefore, \textsc{ucp} is penalized in terms of intervention effectiveness.
In contrast, when $\alpha$ becomes larger, $\tau_g$ takes up little of the cost of \textsc{ucp} as $n$ increases, which in turn recovers the effectiveness of \textsc{ucp}.
Intuitively, the former can happen when the model $g$ is large (\emph{i.e.}, large $\tau_g$), and the latter can happen when intervention is conducted on hard examples (\emph{i.e.}, large $\tau_i$).
We also experiment on changing $\beta$ to control the relative cost between $\tau_g$ and $\tau_f$.
As a result, we find that when $\beta$ is small \textsc{ectp} can perform poorly while \textsc{ucp} can be effective as it does not require $\tau_f$.
Furthermore, we extend this analysis to the CUB experiment with more realistic settings where $\tau_g$ and $\tau_f$ are set based on the wall-clock times of running each model, and $\tau_i$ is set based on the actual concept annotation time provided in the dataset.
All of these results are put in \cref{app:cost} with detailed analysis for space reasons.
\subsection{Analyzing Intervention Levels}
\label{sec:restuls-levels}
\begin{figure*}[!t]
\begin{subfigure}{0.22\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/result_level_gs.pdf}
\caption{\textsc{G+S} level}
\label{fig:cub_result_level_gs_main}
\end{subfigure}
\hspace*{\fill}
\begin{subfigure}{0.22\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/level_is_gs_ucp.pdf}
\caption{(\textsc{I+S}) vs. (\textsc{G+S})}
\label{fig:cub_level_is_gs_ucp_main}
\end{subfigure}
\hspace{12mm}
\begin{subfigure}{0.22\linewidth}
\includegraphics[width=\linewidth]{figures/cub/result_level_ib.pdf}
\caption{\textsc{I+B} level}
\label{fig:cub_result_level_ib_main}
\end{subfigure}
\hspace*{\fill}
\begin{subfigure}{0.22\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/level_is_ib_ucp.pdf}
\caption{(\textsc{I+S}) vs. (\textsc{I+B})}
\label{fig:cub_level_is_ib_ucp_main}
\end{subfigure}
\caption{
Comparing the effects of different intervention levels using the CUB dataset.
Here, intervention counts denote the number of intervened groups and average number of intervened concepts for \textsc{g} and \textsc{b}, respectively.
We fix the selection criterion to be \textsc{ucp} in (b) and (d) while all other cases are provided in \cref{sec:results-others-levels}.
}
\label{fig:cub_level}
\end{figure*}
As seen in \cref{fig:cub_result_level_gs_main}, most criteria still remain more effective than \textsc{rand} in group-wise single (\textsc{G + S}) intervention.
Specifically, \textsc{rand} needs $39.3\%$ ($11$ out of $28$), while \textsc{ucp} needs $25.0\%$ ($7$ out of $28$) of the groups to be intervened to decrease the task error by half.
However, \textsc{cctp} does not outperform \textsc{rand} this time.
We also find a similar pattern for the batch case \textsc{G + B} (see \cref{fig:cub_result_level} in \cref{sec:results-others-levels}).
We suspect that calculating the mean of the scores loses some discriminative information in some selection criteria and perhaps a different surrogate needs to be designed.
In addition, we find that group-wise intervention is in general less effective than its individual counterpart with the same budget of intervention expense (see \cref{fig:cub_level_is_gs_ucp_main}).
Intuitively, correcting concepts within the same group may not provide as rich information as selecting concepts across different groups with the same intervention counts.
Nonetheless, we remark that group-wise intervention can potentially be cost-effective when concepts within the same group are mutually exclusive, which depends on how the concepts are annotated during the creation of datasets.
The proposed concept selection criteria also remain effective for batch intervention (\textsc{b}) as seen in \cref{fig:cub_result_level_ib_main}.
Interestingly, batch intervention turns out to be more effective than single (\textsc{s}) as well, as seen in \cref{fig:cub_level_is_ib_ucp_main}.
This trend holds true for other criteria besides \textsc{ucp} except for \textsc{cctp} and extends to group-wise batch (\textsc{G+B}) intervention (see \cref{sec:results-others-levels} for full results).
\begin{figure*}[!t]
\begin{subfigure}{0.22\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/training_eudtp.pdf}
\caption{Training on CUB}
\label{fig:cub_training_eudtp_main}
\end{subfigure}%
\hspace*{\fill}
\begin{subfigure}{0.22\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/skincon/training_eudtp.pdf}
\caption{Training on SkinCon}
\label{fig:skincon_training_eudtp_main}
\end{subfigure}%
\hspace{12mm}
\begin{subfigure}{0.22\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/skincon/conceptualization_rand.pdf}
\caption{Conceptualization: \textsc{rand}}
\label{fig:skincon_conceptualization_rand_main}
\end{subfigure}%
\hspace*{\fill}
\begin{subfigure}{0.22\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/skincon/conceptualization_ucp.pdf}
\caption{Conceptualization: \textsc{ucp}}
\label{fig:skincon_conceptualization_ucp_main}
\end{subfigure}%
\caption{
Comparing the effects of different training strategies (a,b) and conceptualization methods (c, d).
We choose \textsc{eudtp} as the concept criterion for (a,b) and SkinCon as the dataset for (c, d).
We provide all other results in \cref{sec:results-others-training,sec:results-others-conceptualization}.
}
\label{fig:training_and_conceptualization}
\end{figure*}
\subsection{Considering Training and Conceptualization}
\label{sec:exp_train_inf_strategies}
\textbf{Effect of training scheme}\quad
As seen in \cref{fig:cub_training_eudtp_main}, intervention is in general the most effective under the \textsc{ind} training scheme.
We believe that this is because $f$ is not trained with the ground-truth concept labels in the case of \textsc{seq} and \textsc{jnt}(+\textsc{p}), and fixing concept predictions for these schemes may not work as well.
We also find that \textsc{eudtp} becomes much less effective under \textsc{seq} or \textsc{jnt} than other alternatives and actually underperforms \textsc{rand} (see \cref{sec:results-others-training}).
Hence, the effectiveness of a criterion can depend on which training strategy to use, implying the need of comprehensive evaluations for newly developed criteria.
For the SkinCon dataset, however, intervening on the concepts under the \textsc{seq, jnt, jnt + p} strategies actually increases the average task error regardless of the concept selection criterion.
Specifically, training under \textsc{jnt} already achieves low task error and applying intervention does not help reduce it further (see \cref{fig:skincon_training_eudtp_main}).
We hypothesize that this is due to some inherent characteristics of the dataset as well as the limited concepts provided in the bottleneck, resulting in a negative influence on making correct task predictions with binarized concepts.
This can potentially correspond to the known issue of information leakage in CBMs \citep{mahinpei2021promises, havasi2022addressing}.
\textbf{Effect of conceptualization}\quad
We find that \textsc{hard} and \textsc{samp} may begin with high task error compared to \textsc{soft} as expected.
However, when making use of the developed concept selection criteria such as \textsc{ucp}, the gap between these conceptualization methods decreases much faster with more intervention compared to \textsc{rand} as seen in \cref{fig:skincon_conceptualization_rand_main,fig:skincon_conceptualization_ucp_main}.
This result is consistent across different training strategies and datasets (see \cref{sec:results-others-conceptualization}).
\subsection{Summary}
We provide a summary of key findings in this section below.
\begin{itemize}[noitemsep, leftmargin=10pt, topsep=0pt]
\item Utilizing various information for concept selection can make the intervention procedure much more effective than the naive random baseline.
\item In theory, the effectiveness of criteria can change when taking into account the cost of intervention.
\item The intervention effectiveness is affected non-trivially by different intervention levels, and individual batch-wise intervention (\textsc{I+B}) seems to be the most effective in general.
\item The effectiveness of each criterion can change substantially with the choice of training strategy, to the degree that it can become even worse than the random baseline.
\item Intervention based on effective criteria can close the performance gap between different conceptualization methods much faster than the random baseline.
\end{itemize}
\newpage
\section{Conclusion}
The intervention procedure of CBMs has been unattended in previous work despite its critical impact on practitioners.
In this work, we study a wide range of aspects regarding the procedure and provide an in-depth analysis for the first time in the literature.
Specifically, we develop various concept selection criteria that can be used for intervention and demonstrate that their behaviors can vary quite significantly based on an array of factors including intervention levels, cost, training, conceptualization, and data characteristics.
We also find several pitfalls in the current practices that need a careful addressing to be deployed in realistic settings.
We plan to investigate further on developing more effective and reliable intervention strategies in future work.
\section{Analyzing Intervention with Synthetic Data}
\label{sec:ablation-dataset}
We have observed that intervention can often yield different results across datasets.
Precisely, intervening on all concepts decreases the task error down to $0\%$ on CUB, whereas the amount of decrease is much smaller on SkinCon, where the average task error remains high at around $29\%$.
Also, the relative order of effectiveness between concept selection criteria can vary.
We find that it is difficult to unravel these findings if only experimenting on real datasets as in previous work \citep{koh2020concept,chauhan2022interactive, sheth2022learning, zarlenga2022concept}.
To provide an in-depth analysis, we develop a framework to generate synthetic datasets based on three different causal graphs that control the following factors: input noise, hidden concepts, and concept diversity.
\subsection{Generating Synthetic Data}
\begin{figure}
\centering
\begin{subfigure}{0.2\linewidth}
\includegraphics[width=\linewidth]{figures/synthetic/graph_noisyinput.pdf}
\caption{Noisy input}
\label{fig:synthetic_graph_noisyinput}
\end{subfigure}%
\hspace*{3em}
\begin{subfigure}{0.2\linewidth}
\includegraphics[width=\linewidth]{figures/synthetic/graph_hidden.pdf}
\caption{Hidden concept}
\label{fig:synthetic_graph_hidden}
\end{subfigure}
\hspace*{3em}
\begin{subfigure}{0.2\linewidth}
\includegraphics[width=\linewidth]{figures/synthetic/graph_diverse.pdf}
\caption{Diverse concept}
\label{fig:synthetic_graph_diverseconcept}
\end{subfigure}
\caption{
Causal graphs for generating synthetic datasets.
$z$, $h$, and $d$ represent factors of input noise, hidden concepts, and concept diversity, respectively.
The full details of the data generation process are provided in \cref{sec:synthetic_dataset}.
}
\label{fig:synthetic_graphs}
\end{figure}
\begin{figure}
\begin{floatrow}
\ffigbox{%
\begin{subfigure}{0.32\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/inputnoise_ucp.pdf}
\caption{Noisy input}
\label{fig:synthetic_inputnoise_ucp}
\end{subfigure}
\begin{subfigure}{0.32\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/hidden_ucp.pdf}
\caption{Hidden concept}
\label{fig:synthetic_hidden_ucp}
\end{subfigure}
\begin{subfigure}{0.32\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/diversity_ucp.pdf}
\caption{Diverse concept}
\label{fig:synthetic_diversity_ucp}
\end{subfigure}
\vspace{1em}
}{%
\caption{
Effects of data on intervention with \textsc{ucp}.
Each plot is with different values of the variance of noise ($z$), the ratio of hidden concepts ($h$), and the probability to perturb the concept values ($d$), respectively.
}
\label{fig:synthetic_dataset_characteristics}
}
\ffigbox{%
\begin{subfigure}{0.48\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/result_subgroupsize_1.pdf}
\caption{$\gamma = 1$}
\label{fig:synthetic_subgroupsize_1}
\end{subfigure}
\begin{subfigure}{0.48\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/result_subgroupsize_10.pdf}
\caption{$\gamma = 10$}
\label{fig:synthetic_subgroupsize_10}
\end{subfigure}
}{%
\caption{
Intervention effectiveness with different sub-group size $\gamma$.
The relative order of effectiveness between selection criteria changes significantly according to $\gamma$.
}
\label{fig:synthetic_subgroupsize}
}
\end{floatrow}
\end{figure}
\textbf{\textsc{Case 1}: Noisy input}\quad
Real-world data contains a lot of random noise coming from various sources (\emph{e.g.}, lighting).
We construct a causal graph to consider this case where the Gaussian noise is added on input data (see \cref{fig:synthetic_graph_noisyinput}).
\textbf{\textsc{Case 2}: Hidden concept}\quad
When a subset of concepts is unknown or hidden, the target prediction is made incomplete using only the available concepts, since the necessary representations are not fully captured in the bottleneck layer.
We design a causal graph for this case and generate synthetic data for which some concepts that are necessary to make correct target predictions are hidden on purpose (see \cref{fig:synthetic_graph_hidden}).
\textbf{\textsc{Case 3}: Diverse concept}\quad
Examples within the same class can have different values for the same concept in realistic settings.
For instance, simple concept-level noise or fine-grained sub-classes (\emph{e.g.}, \texttt{`black swan'} and \texttt{`white swan'} for the \texttt{`swan'} class) can produce such diverse concept values.
We construct a causal graph to generate such data for which concept values can vary probabilistically and inputs are produced according to these concepts (see \cref{fig:synthetic_graph_diverseconcept}).
\subsection{Results}
\label{sec:synthetic-result}
First, we display the effect of input noise in \cref{fig:synthetic_inputnoise_ucp}.
The initial task error increases with the level of noise ($z$) due to the poor performance on concept prediction.
Specifically, we need $17$ intervention counts to decrease the task error by half with extremely noisy data ($z=2.0$) while correcting only $2$ concepts yields the same effect for a moderate level of noise case ($z=0.5$).
In contrast, the initial task error is already near $0\%$ with an extremely small level of noise ($z=0.1$) where we do not need intervention at all.
Next, we evaluate the effect of hidden concepts in \cref{fig:synthetic_hidden_ucp}.
The final task error increases with more hidden concepts, and thus intervention becomes less effective.
Specifically, the error is still high around $13\%$ when half of the concepts are hidden ($h=50\%$) while it reaches zero error without hidden concepts ($h=0\%$).
This is due to the fact that the target prediction cannot be made with complete information when there exist hidden concepts, which is often the case for constructing CBMs in realistic settings.
We also find that generating more diverse concept values within the same class increases both initial and final task error, making intervention less effective (see \cref{fig:synthetic_diversity_ucp}).
This is because learning discriminative representations for target prediction would be a lot more difficult.
To circumvent this issue, many previous works preprocess the data so as to force concepts within the same class to have the same value.
However, this may have an adverse effect on model fairness as we discuss in \cref{sec:pitfalls}.
Furthermore, we discover that different sub-group sizes can change the relative ordering of intervention effectiveness between concept selection criteria.
Here, we define a sub-group as classes with similar concept values and denote its size as $\gamma$.
Interestingly, \textsc{eudtp} becomes less effective with a small group size ($\gamma = 1$) even compared to \textsc{rand} whereas it becomes the most effective when $\gamma=10$ except for \textsc{lcp} as seen in \cref{fig:synthetic_subgroupsize}.
We believe that it is because classes within the same sub-group are classified more easily by decreasing uncertainty in target prediction using \textsc{eudtp} when $\gamma$ is large.
The result indicates that the behavior of a criterion can vary significantly across different datasets and again demonstrates the necessity of a comprehensive evaluation of newly developed criteria.
We refer to \cref{sec:results-others-dataset} for results on the effect of some other factors on intervention.
\section{Intervention Strategies}
\label{sec:int_strategies}
\subsection{Preliminary}
\label{subsec:preliminary}
Let $x \in \mathbb{R}^{d}$, $c \in \{0,1\}^k$, $y \in \mathcal{Y}$ be input data, binary concepts, and target response, respectively;
here, $d$ and $k$ denote the dimensionality of the input data and the cardinality of the concepts, and we assume $\mathcal{Y}$ encodes a categorical distribution for classification tasks.
Given some input data (\emph{e.g.}, an image), a CBM first predicts its concepts (\emph{e.g.}, attributes present in the given image) using a concept predictor $g$ and subsequently the target response (\emph{e.g.}, the class of the image) using a target predictor $f$:
\emph{i.e.}, first $\hat{c} = g(x)$ and then $\hat{y} = f(\hat{c})$, where $\hat{c}$ and $\hat{y}$ are the predictions of concepts and target response.
In this process, one can intervene on a set of concepts $\mathcal{S} \subseteq \{1, \cdots, k\}$ so that the final prediction can be made based on rectified concept values, \emph{i.e.}, $\hat{y} = f(\tilde{c})$ where $\tilde{c} = \{\hat{c}_{\setminus\mathcal{S}}, c_{\mathcal{S}}\}$ denotes the updated concept values partly rectified on $\mathcal{S}$ with $\hat{c}_{\setminus\mathcal{S}}$ referring to the predicted concept values excluding $\mathcal{S}$.
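In code, this test-time intervention step amounts to overwriting the selected entries of $\hat{c}$ with ground truth; a minimal sketch with our own (hypothetical) names:
\begin{verbatim}
import numpy as np

def intervene(c_hat, c_true, S):
    """Rectify predicted concepts on the index set S with ground truth."""
    c_tilde = np.array(c_hat, dtype=float)
    idx = list(S)
    c_tilde[idx] = np.asarray(c_true, dtype=float)[idx]
    return c_tilde

# y_hat = f(intervene(c_hat, c_true, S={3, 17}))
\end{verbatim}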
\subsection{Concept Selection Criteria}
\label{sec:concept_selection_criteria}
How should one select which concepts to intervene on?
This is a fundamental question to be answered in order to legitimize CBMs in practice, since intervention incurs the cost of employing experts, which increases with the number of intervened concepts $|\mathcal{S}|$.
In principle, one would select the concept whose correction leads to the largest increase in task performance.
To address this question and investigate the effectiveness of the intervention procedure in current practice, we develop various concept selection criteria, each defining a selection score $s_i$ for the $i$-th concept.
Then, intervention is performed on concepts in decreasing order of these scores.
\vspace{1.5em}
\textbf{Random} (\textsc{rand})\quad
It selects concepts uniformly at random as in \citet{koh2020concept}.
We can treat this method as assigning a random score for each concept, \emph{i.e.}, $s_i \sim \mathcal{U}_{[0, 1]}$.
It will serve as a baseline to study the effectiveness of concept selection criteria.
\textbf{Uncertainty of concept prediction} (\textsc{ucp})\quad
It selects concepts with the highest uncertainty of concept prediction.
Specifically, it defines $s_i = \mathcal{H} (\hat{c_i})$ where $\mathcal{H}$ is the entropy function.
When the concepts are binary, this ordering is equivalent to using $s_i = 1/\lvert\hat{c}_i - 0.5\rvert$ as in \citet{lewis1994heterogeneous,lewis1995sequential}.
Intuitively, uncertain concepts may have an adverse influence on making the correct target prediction, and thus, they are fixed first by this criterion.
\textbf{Loss on concept prediction} (\textsc{lcp})\quad
It selects concepts with the largest loss on concept prediction compared to the ground-truth.
Specifically, it defines $s_i = \lvert \hat{c}_i - c_i \rvert$.
This scheme can be advantageous to increasing task performance since a low concept prediction error is likely to lead to a correct target prediction.
Nonetheless, this score is unavailable in practice as the ground-truth is unknown at test time.
\begin{figure}
\begin{floatrow}
\capbtabbox{%
\centering
\footnotesize
\begin{tabular}{l c c c}
\toprule
Criteria & $N_g$ & $N_f$ & Cost in complexity\\
\midrule
\textsc{rand} & $0$ & $0$ & $\mathcal{O} \big( n \tau_i \big)$\\
\textsc{ucp} & $1$ & $0$ & $\mathcal{O} \big( \tau_g + n \tau_i \big)$\\
\textsc{lcp} & $1$ & $0$ & $\mathcal{O} \big( \tau_g + n \tau_i \big)$\\
\textsc{cctp} & $1$ & $2$ & $\mathcal{O} \big( \tau_g + 2 \tau_f + n \tau_i \big)$\\
\textsc{ectp} & $1$ & $2k + 1$ & $\mathcal{O} \big( \tau_g + (2k + 1) \tau_f + n \tau_i \big)$\\
\textsc{eudtp} & $1$ & $2k + 1$ & $\mathcal{O} \big( \tau_g + (2k + 1) \tau_f + n \tau_i \big)$\\
\bottomrule
\end{tabular}
}{%
\vspace{2em}
\caption{
Theoretical cost of employing concept selection criteria for intervening on $n$ number of predicted concepts.
$N_g$ and $N_f$ refer to the number of forward/backward passes to run $g$ and $f$, respectively.
}
\label{tab:intervention_methods_cost}
}
\ffigbox{%
\centering
\begin{subfigure}{0.43\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/diagram/level_ind_group.pdf}
\caption{Individual vs. Group}
\label{fig:level_ind_group}
\end{subfigure}
\hspace*{\fill}
\begin{subfigure}{0.53\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/diagram/level_single_batch.pdf}
\caption{Single vs. Batch}
\label{fig:level_single_batch}
\end{subfigure}
}{%
\caption{
Different levels of intervention conducted on concepts.
Each number represents the order of intervention.
}
\label{fig:intervention_levels}
}
\end{floatrow}
\end{figure}
\textbf{Contribution of concept on target prediction} (\textsc{cctp})\quad
It selects concepts with the highest contribution on target prediction.
Specifically, it sums up the contribution as $s_i = \sum _{j=1} ^ {M} \big\lvert \hat{c}_i \frac{\partial f_j}{\partial \hat{c}_i} \big\rvert$ where $f_j$ is the output related to $j$-th target class and $M$ is the number of classes.
This scheme is inspired by methods to explain neural network predictions \citep{selvaraju2017grad}.
\textbf{Expected change in target prediction} (\textsc{ectp})\quad
It selects concepts with the highest expected change in the target predictive distribution with respect to intervention.
Specifically, it defines $s_i = (1 - \hat{c}_i) D_{\text{KL}}(\hat{y}_{\hat{c}_i = 0} \Vert \hat{y}) + \hat{c}_i D_{\text{KL}} (\hat{y}_{\hat{c}_i = 1} \Vert \hat{y})$ where $D_{\text{KL}}$ refers to the Kullback-Leibler divergence, and $\hat{y}_{\hat{c}_i = 0}$ and $\hat{y}_{\hat{c}_i = 1}$ refer to the new target prediction with $\hat{c}_i$ being intervened to be $0$ and $1$, respectively.
The intuition behind this scheme is that it would be better to intervene on those concepts whose rectification leads to a large expected change in target prediction \citep{settles2007multiple}.
\textbf{Expected uncertainty decrease in target prediction} (\textsc{eudtp})\quad
It selects concepts with the largest expected entropy decrease in target predictive distribution with respect to intervention.
Specifically, it defines $s_i = (1 - \hat{c}_i) \mathcal{H}(\hat{y}_{\hat{c}_i=0}) + \hat{c}_i\mathcal{H}(\hat{y}_{\hat{c}_i=1}) - \mathcal{H}(\hat{y})$.
Intuitively, it penalizes the concepts whose expected decrease in the target prediction entropy is low when intervened \citep{guo2007optimistic}.
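The following is a minimal sketch of the above scores for a generic target predictor \texttt{f} mapping a concept vector to class probabilities; \textsc{cctp}, which requires gradients of $f$, is omitted, and all names are ours rather than the actual implementation:
\begin{verbatim}
import numpy as np

def _H(p):
    """Entropy of a categorical distribution."""
    p = np.clip(p, 1e-12, 1.0)
    return float(-np.sum(p * np.log(p)))

def _kl(p, q):
    p, q = np.clip(p, 1e-12, 1.0), np.clip(q, 1e-12, 1.0)
    return float(np.sum(p * np.log(p / q)))

def selection_scores(c_hat, f, criterion, c_true=None, rng=None):
    """Per-concept scores s_i, used in decreasing order as in the text."""
    k = c_hat.size
    if criterion == "rand":
        return (rng or np.random.default_rng()).random(k)
    if criterion == "ucp":              # uncertainty of concept prediction
        return 1.0 / np.abs(c_hat - 0.5).clip(1e-12)
    if criterion == "lcp":              # oracle: needs ground truth
        return np.abs(c_hat - c_true)
    y_hat = f(c_hat)
    s = np.zeros(k)
    for i in range(k):
        c0, c1 = c_hat.copy(), c_hat.copy()
        c0[i], c1[i] = 0.0, 1.0
        y0, y1 = f(c0), f(c1)
        if criterion == "ectp":         # expected change in prediction
            s[i] = (1 - c_hat[i]) * _kl(y0, y_hat) + c_hat[i] * _kl(y1, y_hat)
        elif criterion == "eudtp":      # expected entropy decrease
            s[i] = (1 - c_hat[i]) * _H(y0) + c_hat[i] * _H(y1) - _H(y_hat)
    return s
\end{verbatim}
The loop over all $k$ concepts with two forward passes each is what gives \textsc{ectp} and \textsc{eudtp} their $(2k+1)\tau_f$ term in the cost analysis below.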
\subsubsection{Cost of Intervention}
\label{sec:cost}
Note that the cost of intervention may differ by the choice of concept selection criteria.
Specifically, let the theoretical cost of intervening on a concept be $\tau_i$ (\emph{e.g.}, the time for an expert to look at the input and fix its attribute), and the theoretical cost of making inference on $g$ and $f$ be $\tau_g$ and $\tau_f$, respectively.
Then, the total cost of utilizing \textsc{cctp} for intervening on $n$ number of concepts, for example, would be $\mathcal{O} (\tau_g + 2\tau_f + n\tau_i)$; here we assume that the cost of the backward pass on $f$ is the same as $\tau_f$.
We summarize the cost of all concept selection criteria in \cref{tab:intervention_methods_cost}.
\subsection{Levels of Intervention}
We find that intervention can be done at different levels given some auxiliary information about the structure of concepts or economic constraints put on practitioners.
For example, it is often the case that datasets used to train CBMs have the grouping information for related concepts \citep{wah2011caltech}.
Another situation worth consideration is where one has access to a batch of data to process with a budget constraint, and the goal is to maximize the overall task performance while minimizing the intervention effort (\emph{e.g.}, examining medical images in a hospital).
Taking into account these scenarios, we extend the intervention procedure at various levels to study the effectiveness of concept selection criteria.
\textbf{Individual vs. Group intervention}\quad
Intervention can be done depending on concept association (see \cref{fig:level_ind_group}):
\begin{itemize}[noitemsep, leftmargin=10pt, topsep=0pt]
\item Individual (\textsc{i}):
Concepts are assumed to be independent of each other and thus selected individually one at a time.
\item Group (\textsc{g}):
A group of related concepts is selected at once whose association information is subject to datasets.
The selection score is computed by taking the average of the selection scores of the individual concepts within the group.
\end{itemize}
\begin{figure*}[t!]
\begin{subfigure}{0.3\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/result_main.pdf}
\caption{CUB}
\label{fig:cub_result_main}
\end{subfigure}
\hspace*{\fill}
\begin{subfigure}{0.3\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/skincon/result_main.pdf}
\caption{SkinCon}
\label{fig:skincon_result_main}
\end{subfigure}
\hspace*{\fill}
\begin{subfigure}{0.3\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/result_main.pdf}
\caption{Synthetic}
\label{fig:synthetic_result_main}
\end{subfigure}
\caption{
Intervention effectiveness of concept selection criteria (task error vs. number of concepts corrected by intervention) measured on \textsc{I+S} level.
A more effective method would reduce the error more for the same number of concepts intervened.
}
\label{fig:result-main}
\end{figure*}
\textbf{Single vs. Batch intervention}\quad
Intervention can be done depending on data accessibility (see \cref{fig:level_single_batch}):
\begin{itemize}[noitemsep, leftmargin=10pt, topsep=0pt]
\item Single (\textsc{s}):
Every test case is allocated the same amount of intervention budget (\emph{e.g.}, intervention counts).
This could be useful for online systems where each test data comes in sequentially, and experts need to process as many cases as possible under a budget constraint.
\item Batch (\textsc{b}):
A batch of test cases shares a total intervention budget.
This scheme could be particularly useful when the concept prediction is imbalanced toward easy cases, and one wants to focus on intervening on hard cases so as to maximize the overall task performance (a minimal sketch follows this list).
\end{itemize}
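A minimal sketch of the batch (\textsc{b}) budget allocation above, assuming a precomputed score matrix (hypothetical names; the actual implementation may differ):
\begin{verbatim}
import numpy as np

def allocate_batch(score_matrix, budget):
    """Spend a shared budget on the globally highest-scoring
    (example, concept) pairs across the whole batch.

    score_matrix: (num_examples, k) array of selection scores.
    Returns a list of (example_idx, concept_idx) pairs to intervene on.
    """
    flat = np.argsort(score_matrix, axis=None)[::-1][:budget]
    return [np.unravel_index(i, score_matrix.shape) for i in flat]
\end{verbatim}
In contrast, the single (\textsc{s}) level simply takes the top-scoring concepts row by row with a fixed per-example budget.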
\section{Introduction}
\begin{figure}
\begin{floatrow}
\ffigbox{%
\centering
\begin{subfigure}[b]{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/diagram/cbm_figure.pdf}
\vspace{-0.1em}
\caption{Diagram of CBMs}
\label{fig:cbm_fig}
\end{subfigure}
\hspace*{1em}
\begin{subfigure}[b]{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/yerr_by_cerr.pdf}
\caption{Task vs. Concept errors}
\label{fig:yerr_by_cerr}
\end{subfigure}
\label{fig:cbm_intro}
}{%
\caption{
(a)
Given input data CBMs first predict its concepts ($g: x \rightarrow c$), and then based on which it makes a subsequent prediction for the target response ($f: c \rightarrow y$).
(b)
The task error increases rapidly as more mistakes are made in concept prediction (on the CUB dataset);
\emph{e.g.}, making a single mistake yields a $25\%$ increase in task error.
}
}
\capbtabbox{%
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{l c c c c c c}
\toprule
Work & Selection & Cost & Level & Imp. & Data & Rel. \\
\midrule
\citet{koh2020concept} & \xmark & \xmark & $\triangle$ & $\triangle$ & \xmark & \xmark \\
\citet{chauhan2022interactive} & \checkmark & $\triangle$ & $\triangle$ & $\triangle$ & \xmark & \xmark\\
\citet{sheth2022learning} & \checkmark & \xmark & $\triangle$ & \xmark & \xmark & \xmark \\
Ours & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark\\
\bottomrule
\end{tabular}
}
}{%
\vspace{3em}
\caption{
Comparison between the studies on intervention strategy of CBMs.
\emph{Selection} and \emph{Cost} represent concept selection criteria and their analysis in terms of theoretical cost as will be discussed in \cref{sec:results-criteria}.
We study the effects of \emph{Level, Implementation} and \emph{Data} on intervention effectiveness in \cref{sec:restuls-levels,sec:exp_train_inf_strategies,sec:ablation-dataset}.
\emph{Reliability} of intervention practice is discussed in \cref{sec:pitfalls}.
}
\label{tab:comparison_works}
}
\end{floatrow}
\end{figure}
While deep learning has made rapid strides in recent years \citep{lecun2015deep,jordan2015machine}, the standard neural network models are not quite explainable, in that their decision-making process is neither straightforward to account for nor easy to control.
To tackle this issue, various interpretable models have been proposed including, for example, those using concept activation vectors \citep{kim2018interpretability,ghorbani2019towards}, relating pixel contributions to image classification \citep{zhou2016learning,selvaraju2017grad}, or building intrinsically interpretable architectures \citep{alvarez2018towards}.
Concept bottleneck models (CBMs) are among these to empower interpretability \citep{koh2020concept,bahadori2020debiasing,margeloiu2021concept,mahinpei2021promises,sawada2022concept,zarlenga2022concept}.
Unlike standard end-to-end models, CBMs work in two steps:
they first predict human-interpretable properties of a given input called \emph{concepts}, and based on which, they subsequently make the final prediction for the given task.
For instance, CBMs may classify the species of a bird based on its wing pattern or leg color rather than straight from the raw pixel values (see \cref{fig:cbm_fig}).
Revisited recently by \citet{koh2020concept}, this classic idea further facilitates human-model interaction in addition to plain interpretability, in that it allows one to \emph{intervene} on the predicted concepts at test time, such that the subsequent prediction is made based on the rectified concept values.
Notably, such intervention must be treated attentively as we find that correcting only a small number of mistakes on mis-predicted concepts can lead to a significant increase in the task performance (see \cref{fig:yerr_by_cerr}).
Considering the high cost of intervention, \emph{i.e.}, having domain experts go over each concept requires tremendous effort, this result further indicates the necessity of efficient intervention procedures to ensure the utility of CBMs.
Despite the great potential, the intervention procedure of CBMs has not been studied much in the literature, quite surprisingly.
For example, previous works tend to focus on increasing task performance \citep{sawada2022concept, zarlenga2022concept} and addressing the problem of confounding factors \citep{bahadori2020debiasing} or information leakage \citep{margeloiu2021concept,mahinpei2021promises,havasi2022addressing,marconato2022glancenets}.
While a few concurrent works suggest new intervention methods \citep{chauhan2022interactive, sheth2022learning}, we find that many critical aspects of the intervention procedure still remain unexplored (see \cref{tab:comparison_works}).
Our contributions are summarized as follows.
First of all, we develop various concept selection criteria as new intervention strategies, improving the intervention performance of CBMs quite dramatically given the same amount of intervention counts.
We also provide extensive evaluations to analyze these criteria under a wide variety of experimental settings considering the theoretical cost of each criterion, levels of intervention related to test-time environments, and how to train these models or conceptualize the concept predictions.
We further develop a new framework to generate synthetic data using diverse causal graphs and conduct fully controlled experiments to verify the effectiveness of intervention on varying data.
These results reveal that data characteristics as well as intervention granularity can affect the intervention procedure quite significantly.
Finally, we identify some pitfalls of the current intervention practices, which helps to take a step toward building trustworthy and responsible interpretable models.
\section{Pitfalls of Intervention Practices}
\label{sec:pitfalls}
So far we have focused on analyzing the effectiveness of intervention procedure in many aspects.
In this section, we add another dimension, namely, reliability and fairness of the current intervention practices, to help advance toward trustworthy and responsible machine learning models.
\vspace{-0.4em}
\subsection{Nullifying Void Concepts Increases Task Error}
\label{sec:nvc}
\vspace{-0.2em}
Does intervention always help target prediction?
Contrary to expectation, we find that the answer is no, and in fact, intervention can rather increase the task error.
To verify this, we set up an ablation experiment where intervention is conducted only on the cases for which all concepts are predicted correctly with zero error; ideally intervention should have no effect in this case.
The results are quite the opposite as presented in \cref{fig:cub_nvc}.
The task error keeps increasing with more intervention, and the prediction error reaches more than seven times that with no intervention.
It turns out that this catastrophic failure is due to nullifying void concepts (\textsc{nvc}), a common practice of treating unsure concepts by simply setting them to zero.
For example, just because the wing part of a bird species is invisible does not necessarily mean that the concept `\texttt{wing color:black}' should be zero valued;
this bird can fall in the class of `\texttt{Black\_Tern}' whose wing color is actually black.
We identify that this seemingly plausible tactic can in fact mistreat invalid concepts, and therefore applying \textsc{nvc} at intervention should be avoided for invalid cases.
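For concreteness, the \textsc{nvc} tactic corresponds to the following sketch (hypothetical names; shown only to pin down the practice):
\begin{verbatim}
import numpy as np

def nvc_intervene(c_hat, c_true, visible, S):
    """Intervene on S, but set concepts not visible in the image to 0
    (the NVC practice) instead of their unknown true values."""
    c_tilde = np.array(c_hat, dtype=float)
    for i in S:
        c_tilde[i] = c_true[i] if visible[i] else 0.0
    return c_tilde
\end{verbatim}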
\vspace{-0.5em}
\begin{figure}
\begin{floatrow}
\ffigbox{%
\begin{subfigure}{0.31\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/nvc_rand.pdf}
\caption{\textsc{rand}}
\label{fig:cub_nvc_rand_main}
\end{subfigure}
\hspace*{0.1em}
\begin{subfigure}{0.31\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/nvc_ucp.pdf}
\caption{\textsc{ucp}}
\label{fig:cub_nvc_ucp}
\end{subfigure}
\hspace*{0.1em}
\begin{subfigure}{0.31\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/nvc_ectp.pdf}
\caption{\textsc{ectp}}
\label{fig:cub_nvc_ectp}
\end{subfigure}
}{%
\caption{
Effect of \textsc{nvc} on task error.
Intervention is done on the CUB images for which concept prediction is $100\%$ accurate, and yet, \textsc{nvc} keeps on increasing the task error.
}
\label{fig:cub_nvc}
}
\hspace{1em}
\ffigbox{%
\begin{subfigure}{0.38\linewidth}
\includegraphics[width=\linewidth]{figures/cub/mv_rand.pdf}
\caption{Advantage of \textsc{mv}}
\label{fig:cub_mv_rand_main}
\end{subfigure}%
\hspace*{\fill}
\begin{subfigure}{0.55\linewidth}
\includegraphics[width=\linewidth]{figures/cub/fairness_prediction.pdf}
\caption{Disadvantage of \textsc{mv}}
\label{fig:fairness_prediction}
\end{subfigure}
}{%
\caption{
Effects of majority voting (\textsc{mv}) on target prediction.
(a)
While it helps decrease task error on intervention,
(b)
it yields biased predictions against minorities.
}
\label{fig:fairness_example}
}
\end{floatrow}
\end{figure}
\subsection{Majority Voting Neglects Minorities}
\label{sec:fairness}
\vspace{-0.2em}
Another common practice often taken by the community \citep{koh2020concept,zarlenga2022concept,havasi2022addressing} is to coalesce concept values among the same class by forcing them to have their majority votes (\textsc{mv}).
As a preprocessing, this tactic can dramatically improve the task performance as we demonstrate in \cref{fig:cub_mv_rand_main}.
This is quite obvious by now as with our Synthetic experiment results in \cref{sec:synthetic-result} where we show that high concept diversity can deteriorate the target prediction performance.
However, it turns out that \textsc{mv} can have a negative impact on model fairness by ignoring minority samples.
As a concrete example, consider the CUB dataset in which the majority of images of `\texttt{black tern}' class have black underparts while some minority samples have white underparts.
When \textsc{mv} is used in this case, we find that the task prediction is misled to yield black for the underparts color concept, even if the bird's true underparts color is white, because otherwise it would be an incorrect prediction (see \cref{fig:fairness_prediction}).
In other words, target predictions become biased toward the majority.
This tactic also forces intervention at test time to be misconducted with the majority votes, which are neither available in practice nor considered fair.
We defer addressing the trade-off between performance and fairness to future work.
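For reference, the \textsc{mv} preprocessing itself is a per-class majority vote over concept values; a minimal sketch (hypothetical names):
\begin{verbatim}
import numpy as np

def majority_vote(C, labels):
    """Replace each example's concepts by its class-wise majority vote.

    C: (n, k) binary concept matrix; labels: (n,) class indices.
    Ties are resolved toward 1 here (an arbitrary choice).
    """
    C_mv = C.copy()
    for cls in np.unique(labels):
        idx = labels == cls
        C_mv[idx] = (C[idx].mean(axis=0) >= 0.5).astype(C.dtype)
    return C_mv
\end{verbatim}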
\section{Related Work}
Since the seminal work of \citet{koh2020concept}, CBMs have evolved in many different ways.
\citet{bahadori2020debiasing} develop a debiased CBM to remove the impact of confounding information to secure causality.
\citet{sawada2022concept} augment CBMs with unsupervised concepts to improve task performance.
\citet{mahinpei2021promises,margeloiu2021concept} suggest addressing the information leakage problem in CBMs to improve interpretability of learned concepts, while \citet{marconato2022glancenets, havasi2022addressing} design new CBMs based on disentangled representations or autoregressive models.
\citet{zarlenga2022concept} proposes to learn semantically meaningful concepts using concept embedding models to push the accuracy-interpretability trade-off.
Both \citet{chauhan2022interactive} and \citet{sheth2022learning} present uncertainty based intervention methods to determine which concepts to intervene on.
We remark that previous work is mostly focused on developing CBM variants for high task performance from model-centric perspectives, whereas our work provides in-depth analyses and comprehensive evaluations on the intervention procedure of the standard CBMs in greater granularity.
\section{Datasets}
\label{app:datasets}
\subsection{CUB}
CUB \citep{wah2011caltech} is the standard dataset used to study CBMs in the previous works \citep{koh2020concept, zarlenga2022concept,havasi2022addressing, sawada2022concept}.
There are $5994$ and $5794$ examples for train and test sets in total, in which each example consists of the triplet of (image $x$, concepts $c$, label $y$) of a bird species.
All the concepts have binary values; for example, the `\texttt{wing color:black}' for a given bird image can be either $1$ (for true) or $0$ (for false).
Following previous works \citep{koh2020concept,sawada2022concept,zarlenga2022concept}, we perform so-called majority voting as pre-processing so that images of the same class always have the same concept values; for example, if more than half of the crow images have true value for the concept `\texttt{wing color:black}' then this process converts all concept labels for images belonging to the crow class to have the same true value.
Since the original concept labels are too noisy, this procedure helps to increase the overall performance.
However, it can be potentially harmful to model fairness in some cases as we address in \cref{sec:fairness}.
We also remove concepts that are too sparse (\emph{i.e.}, concepts that are present in fewer than $10$ classes), which results in $112$ out of $312$ concepts remaining.
It is suggested in \citet{koh2020concept} that including these sparse concepts in the concept layer makes it hard to predict their values as the positive training examples are too scarce.
\subsection{SkinCon}
SkinCon \citep{daneshjouskincon} is a medical dataset which can be used to build interpretable machine learning models.
The dataset provides densely annotated concepts for $3230$ images from Fitzpatrick 17k skin disease dataset \citep{groh2021evaluating}, which makes a triplet of (image $x$, concepts $c$, disease label $y$) of a skin lesion for each example.
Since training and test sets are not specified in the SkinCon dataset, we randomly split the dataset into $70\%, 15\%, 15\%$ of training, validation, and test sets respectively.
The dataset provides various levels of class labels ranging from individual disease labels with $114$ classes to binary labels representing if the skin is benign or malignant.
Following the experiments with Post-hoc CBM \citep{yuksekgonul2022post} introduced in \citet{daneshjouskincon}, we use the binary labels for the target task and only use $22$ concepts which are present in at least $50$ images.
Since the binary class labels are highly imbalanced ($87\%$ vs. $13\%$), we train the target predictor $f$ with weighted loss and use the average of per-class error as the metric instead of overall error for a fair comparison.
\subsection{Synthetic dataset}
\label{sec:synthetic_dataset}
\begin{algorithm}[H]
\begin{algorithmic}[1]
\STATE Sample $p_i \sim \mathcal{N} (\mu_\alpha = s, \sigma_\alpha)$ for $i = \{1, 2, \cdots, k\}$
\FOR{group $\ell = 0, 1, \cdots, k/\gamma - 1$}
\STATE Sample $\zeta_i \sim \mathcal{U}_{[0,1]}$ and set $\ell_i = \mathbbm{1} [\zeta_i \geq p_i]$ for $i = \{1, 2, \cdots, k\}$
\FOR{$y = 1, \cdots, \gamma$}
\STATE Sample $i_y \in \{1, 2, \cdots, k\}$ uniformly at random without replacement
\STATE Set $c_i^j = 1 - \ell_i$ if $i = i_y$ and $c_i^j = \ell_i$ otherwise, where the class index $j = \gamma \ell + y$
\ENDFOR
\ENDFOR
\STATE Generate $W_x \in \mathbb{R}^{k \times k}$ with each element distributed according to the unit normal distribution $\mathcal{N}(0, \sigma_w)$
\FOR{class $j = 1, \cdots, k$}
\STATE Generate $\nu$ samples for class $j$ as $x = W_x \cdot c^j + z$ where $z \sim \mathcal{N} (0, \sigma_z)$
\ENDFOR
\end{algorithmic}
\caption{Generating synthetic data}
\label{alg:synthetic_full_algorithm}
\end{algorithm}
We generate the synthetic data following \cref{alg:synthetic_full_algorithm} to test the effect of dataset characteristics on intervention.
Here, we assume that all examples within the same class share the same concept values and denote the $i$-th concept value of $j$-th class as $c_i^j$.
For simplicity, we assume that the dimensionality of inputs and the number of target classes are the same as the number of concepts $k$, following \citet{bahadori2020debiasing}.
In line $1$, $\mu_\alpha = s$ and $p_i = P(c_i = 0)$ each represent the overall sparsity level of the concepts (proportion of concepts with value $0$) and the probability of $i$-th concept taking value $0$, respectively.
Since about $80\%$ of the concepts have value $0$ in the CUB dataset, we set $\mu_\alpha = 0.8$ accordingly.
In line $4$, we divide classes into $k/\gamma$ sub-groups of size $\gamma$ to make those within the same group have similar concept values.
Note that the classes within each sub-group only differ by two concept values as seen in line $6$.
We set $\gamma=2, k = 100, \nu = 100, \sigma_\alpha = 0.1, \sigma_w = 0.1, z_\alpha = 0.8$ unless stated otherwise.
We randomly divide the generated examples into $70\%$ of training sets, $15\%$ of validation sets, and $15\%$ of test sets.
To generate the data with hidden concepts, we randomly pick $h\%$ of the concepts and remove them from the concept layer of CBMs.
For training the models and intervention experiments, we only consider the remaining concepts.
For the data with concept diversity, we randomly flip the concept values generated in line $6$ with a given probability, \emph{i.e.}, from $0$ to $1$ and vice versa.
Then, we generate the input values accordingly with the new concepts in line $11$.
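For reference, a compact Python rendering of \cref{alg:synthetic_full_algorithm} might look as follows; this is a sketch under our reading of the algorithm, with $\sigma_z$ and the seed chosen for illustration:
\begin{verbatim}
import numpy as np

def generate_synthetic(k=100, gamma=2, nu=100, mu_alpha=0.8,
                       sigma_alpha=0.1, sigma_w=0.1, sigma_z=0.5, seed=0):
    """Sketch of Algorithm 1: class-wise concepts c^j and inputs x."""
    rng = np.random.default_rng(seed)
    p = rng.normal(mu_alpha, sigma_alpha, size=k)   # p_i = P(c_i = 0)
    C = np.zeros((k, k))                            # one concept row per class
    for ell in range(k // gamma):                   # sub-groups of size gamma
        proto = (rng.random(k) >= p).astype(float)  # group prototype l
        flips = rng.choice(k, size=gamma, replace=False)
        for y in range(gamma):
            j = gamma * ell + y
            C[j] = proto
            C[j, flips[y]] = 1.0 - C[j, flips[y]]   # flip one distinct concept
    W = rng.normal(0.0, sigma_w, size=(k, k))       # line 9
    X = np.vstack([W @ C[j] + rng.normal(0.0, sigma_z, size=k)
                   for j in range(k) for _ in range(nu)])
    labels = np.repeat(np.arange(k), nu)
    return X, np.repeat(C, nu, axis=0), labels
\end{verbatim}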
\section{Architectures and Training}
\label{app:implementation}
\paragraph{CUB}
For the CUB dataset, we use Inception-v3 \citep{szegedy2016rethinking} pretrained on Imagenet \citep{deng2009imagenet} for the concept predictor $g$ and $1$-layer MLP for the target predictor $f$ respectively following the standard setup as in \citet{koh2020concept}.
Here, both $g$ and $f$ are trained with the same training hyperparameters as in \citet{koh2020concept}.
We used $\lambda=0.01$ for \textsc{jnt} and \textsc{jnt+p} whose values were directly taken from \citet{koh2020concept}.
For the experiments without majority voting (\cref{fig:cub_mv} in \cref{sec:results-others-fairness}), we use Inception-v3 pretrained on ImageNet for $g$ and a 2-layer MLP with hidden dimensionality $200$ for $f$ so that it can describe more complex functions.
We searched the best hyperparameters for both $g$ and $f$ over the same sets of values as in \citet{koh2020concept}.
Specifically, we tried initial learning rates of $[0.01, 0.001]$, constant learning rate and decaying the learning rate by $0.1$ every $[10, 15, 20]$ epoch, and the weight decay of $[0.0004, 0.00004]$.
After finding the optimal values of hyperparameters whose validation accuracy is the best, we trained the networks with the same values again over 5 different random seeds on the training and validation sets.
\paragraph{SkinCon}
For the SkinCon dataset, we fine-tune Deepderm \citep{daneshjou2022disparities} for the concept predictor $g$, which is the Inception-v3 network trained on the data in \citet{esteva2017dermatologist}, and train $1$-layer MLP for the target predictor $f$.
We select hyperparameters that achieve the best performance (in terms of overall accuracy and average per-class accuracy for $g$ and $f$ respectively) in the validation set.
Specifically, we tried initial learning rates of $[0.0005, 0.001, 0.005]$, and constant learning rate and decaying the learning rate by $0.1$ every $50$ epoch.
Here, we did not use the weight decay factor.
For \textsc{jnt} and \textsc{jnt+p} training strategies, we tried concept loss weight $\lambda$ of $[0.01, 0.1, 1.0, 5.0]$, but all of the values failed to decrease the task error at intervention.
As in the CUB dataset, we trained the networks with the best hyperparameters over $5$ different random seeds on the training and validation sets.
\paragraph{Synthetic}
For the synthetic datasets, we use a $3$-layer MLP with hidden layer sizes $\{100, 100\}$ for $g$ and a single linear layer for $f$, similar to \citet{zarlenga2022concept}.
For all the experiments, we tried constant learning rates of $[0.01, 0.1, 1.0]$ without learning rate decay or weight decay, and trained the networks with the best hyperparameters over $5$ different random seeds on the training sets.
We used $\lambda=0.1$ for \textsc{jnt} and \textsc{jnt+p} whose values were determined by grid search over $[0.01, 0.1, 1.0]$.
\section{More on Reflecting Cost of Intervention}
\label{app:cost}
\begin{figure}[!t]
\begin{subfigure}{0.19\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/cost_a1.0_b1.pdf}
\caption{$\beta = 1$}
\label{fig:cost_beta1}
\end{subfigure}%
\hspace*{\fill}
\begin{subfigure}{0.19\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic//cost_a1.0_b3.pdf}
\caption{$\beta = 3$}
\label{fig:cost_beta3}
\end{subfigure}%
\hspace*{\fill}
\begin{subfigure}{0.19\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/cost_a1.0_b5.pdf}
\caption{$\beta = 5$}
\label{fig:cost_beta5}
\end{subfigure}%
\hspace*{\fill}
\begin{subfigure}{0.19\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/cost_a1.0_b10.pdf}
\caption{$\beta = 10$}
\label{fig:cost_beta10}
\end{subfigure}%
\hspace*{\fill}
\begin{subfigure}{0.19\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/cost_a1.0_b100.pdf}
\caption{$\beta = 100$}
\label{fig:cost_beta100}
\end{subfigure}%
\caption{
Effect of $\beta$ on intervention.
We fix $\tau_i = 1, \alpha = 1, k=100$.
\textsc{ectp}, the concept selection criterion that performed strongly in the preceding evaluations, becomes less effective as $\beta$ decreases.
}
\label{fig:cost_result_beta}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[scale=0.3]{figures/cub/cost.pdf}
\caption{
Comparison between concept selection criteria in terms of the intervention cost for the CUB.
Here, the cost comprises the concept annotation time and the inference times of $g$ and $f$, measured in seconds.
}
\label{fig:cost_cub}
\end{figure}
As $\beta$ becomes smaller, \textsc{ucp} becomes the most effective criterion, outperforming for instance \textsc{ectp} (see \cref{fig:cost_result_beta}).
This is because with small $\beta$, $\tau_g$ becomes marginal in the cost of \textsc{ectp}, $\mathcal{O}(\tau_g + (2k+1)\tau_f + n \tau_i)$, and the $(2k+1)\tau_f$ term dominates; the intervention effectiveness of \textsc{ectp} is therefore increasingly penalized with growing $k$, compared to \textsc{ucp}, which does not require $\tau_f$.
We also find that in this small $\beta$ regime, \textsc{ectp} can perform poorly even compared to \textsc{rand}.
In addition, we experiment with a more realistic setting for the CUB, where we set $\tau_i$ to the concept annotation time (in seconds) provided with the dataset and $\tau_g, \tau_f$ to the wall-clock inference times.
Specifically, we set $\tau_i \approx 0.7$ by dividing the annotation time by the number of concepts within the group and taking the average.
$\tau_g \approx 18.7 \times 10^{-3}$ and $\tau_f \approx 0.03 \times 10^{-3}$ are obtained by measuring the inference time on an RTX 3090 GPU and averaging over $300$ repetitions.
In this setting, $\tau_i$ dominates the others, i.e., $\alpha$ is large, and the relative effectiveness between the criteria remains the same, as seen in \cref{fig:cost_cub}.
Nonetheless, we remark that the result can change with different model sizes or GPU environments.
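For reference, here is a small sketch of this cost accounting using the per-intervention complexities quoted above; the $\alpha$/$\beta$ weighting of the full cost model in the main text is not reproduced, so the totals are illustrative only.
\begin{verbatim}
# Measured values reported above (in seconds).
tau_i, tau_g, tau_f = 0.7, 18.7e-3, 0.03e-3

def cost_ectp(k, n):
    # ECTP: one concept-predictor pass, (2k+1) target-predictor passes,
    # plus annotating the n concepts in the selected group.
    return tau_g + (2 * k + 1) * tau_f + n * tau_i

def cost_ucp(k, n):
    # UCP uses only the concept predictor's uncertainties; no tau_f term.
    return tau_g + n * tau_i

for k in (100, 1000, 10000):
    print(k, cost_ectp(k, n=4), cost_ucp(k, n=4))
# The tau_f term grows linearly in k for ECTP, which is why ECTP is
# increasingly penalized relative to UCP once compute time dominates.
\end{verbatim}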
\section{More Results on the Effect of Intervention Levels on Intervention}
\label{sec:results-others-levels}
\begin{figure}[!th]
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/result_main.pdf}
\caption{\textsc{i+s}}
\label{fig:cub_result_level_is}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/result_level_gs.pdf}
\caption{\textsc{g+s}}
\label{fig:cub_result_level_gs}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/result_level_ib.pdf}
\caption{\textsc{i+b}}
\label{fig:cub_result_level_ib}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/result_level_gb.pdf}
\caption{\textsc{g+b}}
\label{fig:cub_result_level_gb}
\end{subfigure}
\caption{Comparison between intervention criteria under different levels for the CUB.}
\label{fig:cub_result_level}
\end{figure}
\begin{figure}[!th]
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/level_is_gs_rand.pdf}
\caption{\textsc{rand}}
\label{fig:cub_level_is_gs_rand}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/level_is_gs_ucp.pdf}
\caption{\textsc{ucp}}
\label{fig:cub_level_is_gs_ucp}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/level_is_gs_lcp.pdf}
\caption{\textsc{lcp}}
\label{fig:cub_level_is_gs_lcp}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/level_is_gs_cctp.pdf}
\caption{\textsc{cctp}}
\label{fig:cub_level_is_gs_cctp}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/level_is_gs_ectp.pdf}
\caption{\textsc{ectp}}
\label{fig:cub_level_is_gs_ectp}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/level_is_gs_eudtp.pdf}
\caption{\textsc{eudtp}}
\label{fig:cub_level_is_gs_eudtp}
\end{subfigure}
\caption{
Comparison between \textsc{i+s} vs. \textsc{g+s} for the CUB.
}
\label{fig:cub_level_is_gs_all}
\end{figure}
\begin{figure}[!th]
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/level_ib_gb_rand.pdf}
\caption{\textsc{rand}}
\label{fig:cub_level_ib_gb_rand}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/level_ib_gb_ucp.pdf}
\caption{\textsc{ucp}}
\label{fig:cub_level_ib_gb_ucp}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/level_ib_gb_lcp.pdf}
\caption{\textsc{lcp}}
\label{fig:cub_level_ib_gb_lcp}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/level_ib_gb_cctp.pdf}
\caption{\textsc{cctp}}
\label{fig:cub_level_ib_gb_cctp}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/level_ib_gb_ectp.pdf}
\caption{\textsc{ectp}}
\label{fig:cub_level_ib_gb_ectp}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/level_ib_gb_eudtp.pdf}
\caption{\textsc{eudtp}}
\label{fig:cub_level_ib_gb_eudtp}
\end{subfigure}
\caption{
Comparison between \textsc{i+b} vs. \textsc{g+b} for the CUB.
}
\label{fig:cub_level_ib_gb_all}
\end{figure}
\begin{figure}[!th]
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/level_is_ib_rand.pdf}
\caption{\textsc{rand}}
\label{fig:cub_level_is_ib_rand}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/level_is_ib_ucp.pdf}
\caption{\textsc{ucp}}
\label{fig:cub_level_is_ib_ucp}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/level_is_ib_lcp.pdf}
\caption{\textsc{lcp}}
\label{fig:cub_level_is_ib_lcp}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/level_is_ib_cctp.pdf}
\caption{\textsc{cctp}}
\label{fig:cub_level_is_ib_cctp}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/level_is_ib_ectp.pdf}
\caption{\textsc{ectp}}
\label{fig:cub_level_is_ib_ectp}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/level_is_ib_eudtp.pdf}
\caption{\textsc{eudtp}}
\label{fig:cub_level_is_ib_eudtp}
\end{subfigure}
\caption{
Comparison between \textsc{i+s} vs. \textsc{i+b} for the CUB.
}
\label{fig:cub_level_is_ib_all}
\end{figure}
\begin{figure}[!th]
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/level_gs_gb_rand.pdf}
\caption{\textsc{rand}}
\label{fig:cub_level_gs_gb_rand}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/level_gs_gb_ucp.pdf}
\caption{\textsc{ucp}}
\label{fig:cub_level_gs_gb_ucp}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/level_gs_gb_lcp.pdf}
\caption{\textsc{lcp}}
\label{fig:cub_level_gs_gb_lcp}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/level_gs_gb_cctp.pdf}
\caption{\textsc{cctp}}
\label{fig:cub_level_gs_gb_cctp}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/level_gs_gb_ectp.pdf}
\caption{\textsc{ectp}}
\label{fig:cub_level_gs_gb_ectp}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/level_gs_gb_eudtp.pdf}
\caption{\textsc{eudtp}}
\label{fig:cub_level_gs_gb_eudtp}
\end{subfigure}
\caption{
Comparison between \textsc{g+s} vs. \textsc{g+b} for the CUB.
For \textsc{g+b}, each point is plotted when the average number of intervened concepts per image first exceeds each integer value.
}
\label{fig:cub_level_gs_gb_all}
\end{figure}
The comparison between \textsc{i+s} and \textsc{g+s} using different concept selection criteria is presented in \cref{fig:cub_level_is_gs_all}.
Individual intervention is in general more effective than group-wise intervention, except for the \textsc{rand} criterion.
We find similar results for the comparison between \textsc{i+b} and \textsc{g+b} (see \cref{fig:cub_level_ib_gb_all}).
We also note that \textsc{cctp} becomes less effective in \textsc{g} levels as seen in \cref{fig:cub_result_level}.
Batch intervention is either more effective than or at least as competitive as single intervention across different concept selection criteria, as seen in \cref{fig:cub_level_is_ib_all}.
In \cref{fig:cub_level_gs_gb_all}, we observe that \textsc{g+b} is also more effective than the \textsc{g+s} level.
\textsc{cctp} does not show much difference between \textsc{s} and \textsc{b}.
This is because the target predictor $f$ is a simple linear layer in our experiments, and thus $\frac{\partial f_j}{\partial \hat{c}_i} = w_{ij}$ is fixed for all examples, where $w_{ij}$ is the weight from the $i$-th concept to the $j$-th class in $f$.
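To make this example-independence concrete, here is a toy sketch with a randomly initialized linear $f$; the gradient-times-activation form of the \textsc{cctp} score used below is a plausible reading and may differ in detail from the definition in the main text.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
k, num_classes = 6, 3
W = rng.normal(size=(k, num_classes))    # weights of the linear f
b = np.zeros(num_classes)

def cctp_scores(c_hat):
    # With a linear f, df_j/dc_i = W[i, j] for every example, so the
    # gradient factor of the score never varies across inputs.
    j = int(np.argmax(c_hat @ W + b))    # predicted class
    return np.abs(W[:, j] * c_hat)

for c_hat in (rng.random(k), rng.random(k)):
    print(cctp_scores(c_hat))            # only c_hat changes the scores
\end{verbatim}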
\section{More Results on the Effect of Training Strategies on Intervention}
\label{sec:results-others-training}
\begin{figure*}[!th]
\centering
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/result_main.pdf}
\caption{\textsc{ind}}
\label{fig:cub_result_training_ind}
\end{subfigure}%
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/result_training_seq.pdf}
\caption{\textsc{seq}}
\label{fig:cub_result_training_seq}
\end{subfigure}%
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/result_training_jnt.pdf}
\caption{\textsc{jnt}}
\label{fig:cub_result_training_jnt}
\end{subfigure}%
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/result_training_jntp.pdf}
\caption{\textsc{jnt + p}}
\label{fig:cub_result_training_jntp}
\end{subfigure}%
\caption{
Comparison between concept selection criteria using different training strategies for the CUB.
For \textsc{jnt, jnt + p}, we present the results when $\lambda=0.01$.
}
\label{fig:cub_result_training}
\end{figure*}
\begin{figure*}[!th]
\hspace*{\fill}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/training_rand.pdf}
\caption{\textsc{rand}}
\label{fig:cub_training_rand}
\end{subfigure}%
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/training_ucp.pdf}
\caption{\textsc{ucp}}
\label{fig:cub_training_ucp}
\end{subfigure}%
\hspace*{\fill}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/training_lcp.pdf}
\caption{\textsc{lcp}}
\label{fig:cub_training_lcp}
\end{subfigure}%
\hspace*{\fill}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/training_cctp.pdf}
\caption{\textsc{cctp}}
\label{fig:cub_training_cctp}
\end{subfigure}%
\hspace*{\fill}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/training_ectp.pdf}
\caption{\textsc{ectp}}
\label{fig:cub_training_ectp}
\end{subfigure}%
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/training_eudtp.pdf}
\caption{\textsc{eudtp}}
\label{fig:cub_training_eudtp}
\end{subfigure}%
\caption{
Comparison between different training strategies for a fixed concept selection criterion for the CUB.
}
\label{fig:cub_training_comparison}
\end{figure*}
\begin{figure*}[!th]
\centering
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/result_main.pdf}
\caption{\textsc{ind}}
\label{fig:synthetic_result_training_ind}
\end{subfigure}%
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/result_training_seq.pdf}
\caption{\textsc{seq}}
\label{fig:synthetic_result_training_seq}
\end{subfigure}%
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/result_training_jnt.pdf}
\caption{\textsc{jnt}}
\label{fig:synthetic_result_training_jnt}
\end{subfigure}%
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/result_training_jntp.pdf}
\caption{\textsc{jnt + p}}
\label{fig:synthetic_result_training_jntp}
\end{subfigure}%
\caption{
Comparison between concept selection criteria using different training strategies for the Synthetic.
For \textsc{jnt, jnt + p}, we present the results when $\lambda=0.1$.
}
\label{fig:synthetic_result_training}
\end{figure*}
\begin{figure*}[!th]
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/training_rand.pdf}
\caption{\textsc{rand}}
\label{fig:synthetic_training_rand}
\end{subfigure}%
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/training_ucp.pdf}
\caption{\textsc{ucp}}
\label{fig:synthetic_training_ucp}
\end{subfigure}%
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/training_lcp.pdf}
\caption{\textsc{lcp}}
\label{fig:synthetic_training_lcp}
\end{subfigure}%
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/training_cctp.pdf}
\caption{\textsc{cctp}}
\label{fig:synthetic_training_cctp}
\end{subfigure}%
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/training_ectp.pdf}
\caption{\textsc{ectp}}
\label{fig:synthetic_training_ectp}
\end{subfigure}%
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/training_eudtp.pdf}
\caption{\textsc{eudtp}}
\label{fig:synthetic_training_eudtp}
\end{subfigure}%
\caption{
Comparison between different training strategies for a fixed concept selection criterion for the Synthetic.
}
\label{fig:synthetic_training_comparison}
\end{figure*}
The results for the CUB dataset are presented in \cref{fig:cub_result_training}.
Note that \textsc{eudtp} becomes even less effective than \textsc{rand} in \textsc{seq, jnt}.
For the synthetic datasets, \textsc{eudtp} also becomes much less effective as in the CUB dataset (see \cref{fig:synthetic_result_training}).
Note that when using the \textsc{jnt} or \textsc{jnt+p} training schemes, \textsc{lcp} may not be the best choice: the target predictor $f$ is not trained with the ground-truth concept values, so rectifying the concept with the highest prediction loss does not always guarantee a decrease in the task error.
Comparisons between different training strategies for a fixed concept selection criterion on the CUB and Synthetic datasets are presented in \cref{fig:cub_training_comparison,fig:synthetic_training_comparison}, respectively.
\section{More Results on the Effect of Conceptualization Methods on Intervention}
\label{sec:results-others-conceptualization}
\begin{figure}[!th]
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/conceptualization_ind_rand.pdf}
\caption{\textsc{rand}}
\label{fig:cub_conceptualization_ind_rand}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/conceptualization_ind_ucp.pdf}
\caption{\textsc{ucp}}
\label{fig:cub_conceptualization_ind_ucp}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/conceptualization_ind_lcp.pdf}
\caption{\textsc{lcp}}
\label{fig:cub_conceptualization_ind_lcp}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/conceptualization_ind_cctp.pdf}
\caption{\textsc{cctp}}
\label{fig:cub_conceptualization_ind_cctp}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/conceptualization_ind_ectp.pdf}
\caption{\textsc{ectp}}
\label{fig:cub_conceptualization_ind_ectp}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/conceptualization_ind_eudtp.pdf}
\caption{\textsc{eudtp}}
\label{fig:cub_conceptualization_ind_eudtp}
\end{subfigure}
\caption{
Intervention results under different conceptualization methods using various concept selection criteria.
Here, we used \textsc{ind} training strategy for the CUB.
}
\label{fig:cub_conceptualization_ind}
\end{figure}
\begin{figure}[!th]
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/conceptualization_jntp_rand.pdf}
\caption{\textsc{rand}}
\label{fig:cub_conceptualization_jntp_rand}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/conceptualization_jntp_ucp.pdf}
\caption{\textsc{ucp}}
\label{fig:cub_conceptualization_jntp_ucp}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/conceptualization_jntp_lcp.pdf}
\caption{\textsc{lcp}}
\label{fig:cub_conceptualization_jntp_lcp}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/conceptualization_jntp_cctp.pdf}
\caption{\textsc{cctp}}
\label{fig:cub_conceptualization_jntp_cctp}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/conceptualization_jntp_ectp.pdf}
\caption{\textsc{ectp}}
\label{fig:cub_conceptualization_jntp_ectp}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cub/conceptualization_jntp_eudtp.pdf}
\caption{\textsc{eudtp}}
\label{fig:cub_conceptualization_jntp_eudtp}
\end{subfigure}
\caption{
Intervention results under different conceptualization methods using various concept selection criteria.
Here, we used \textsc{jnt + p} training strategy for the CUB.
}
\label{fig:cub_conceptualization_jntp}
\end{figure}
\begin{figure}[!th]
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/skincon/conceptualization_rand.pdf}
\caption{\textsc{rand}}
\label{fig:skincon_conceptualization_ind_rand}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/skincon/conceptualization_ucp.pdf}
\caption{\textsc{ucp}}
\label{fig:skincon_conceptualization_ind_ucp}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/skincon/conceptualization_lcp.pdf}
\caption{\textsc{lcp}}
\label{fig:skincon_conceptualization_ind_lcp}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/skincon/conceptualization_cctp.pdf}
\caption{\textsc{cctp}}
\label{fig:skincon_conceptualization_ind_cctp}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/skincon/conceptualization_ectp.pdf}
\caption{\textsc{ectp}}
\label{fig:skincon_conceptualization_ind_ectp}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/skincon/conceptualization_eudtp.pdf}
\caption{\textsc{eudtp}}
\label{fig:skincon_conceptualization_ind_eudtp}
\end{subfigure}
\caption{
Intervention results under different conceptualization methods using other concept selection criteria.
Here, we used \textsc{ind} training strategy for the SkinCon.
}
\label{fig:skincon_conceptualization}
\end{figure}
\begin{figure}[!th]
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/conceptualization_ind_rand.pdf}
\caption{\textsc{rand}}
\label{fig:synthetic_conceptualization_ind_rand}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/conceptualization_ind_ucp.pdf}
\caption{\textsc{ucp}}
\label{fig:synthetic_conceptualization_ind_ucp}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/conceptualization_ind_lcp.pdf}
\caption{\textsc{lcp}}
\label{fig:synthetic_conceptualization_ind_lcp}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/conceptualization_ind_cctp.pdf}
\caption{\textsc{cctp}}
\label{fig:synthetic_conceptualization_ind_cctp}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/conceptualization_ind_ectp.pdf}
\caption{\textsc{ectp}}
\label{fig:synthetic_conceptualization_ind_ectp}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/conceptualization_ind_eudtp.pdf}
\caption{\textsc{eudtp}}
\label{fig:synthetic_conceptualization_ind_eudtp}
\end{subfigure}
\caption{
Intervention results under different conceptualization methods using various concept selection criteria.
Here, we used \textsc{ind} training strategy for the synthetic dataset.
}
\label{fig:synthetic_conceptualization_ind}
\end{figure}
\begin{figure}[!th]
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/conceptualization_jntp_rand.pdf}
\caption{\textsc{rand}}
\label{fig:synthetic_conceptualization_jntp_rand}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/conceptualization_jntp_ucp.pdf}
\caption{\textsc{ucp}}
\label{fig:synthetic_conceptualization_jntp_ucp}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/conceptualization_jntp_lcp.pdf}
\caption{\textsc{lcp}}
\label{fig:synthetic_conceptualization_jntp_lcp}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/conceptualization_jntp_cctp.pdf}
\caption{\textsc{cctp}}
\label{fig:synthetic_conceptualization_jntp_cctp}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/conceptualization_jntp_ectp.pdf}
\caption{\textsc{ectp}}
\label{fig:synthetic_conceptualization_jntp_ectp}
\end{subfigure}
\begin{subfigure}{0.16\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/conceptualization_jntp_eudtp.pdf}
\caption{\textsc{eudtp}}
\label{fig:synthetic_conceptualization_jntp_eudtp}
\end{subfigure}
\caption{
Intervention results under different conceptualization methods using various concept selection criteria.
Here, we used \textsc{jnt + p} training strategy for the synthetic dataset.
}
\label{fig:synthetic_conceptualization_jntp}
\end{figure}
Across all the datasets and concept selection criteria, utilizing effective criteria closes the gap between different conceptualization strategies much faster than the \textsc{rand} criterion, as seen in \cref{fig:cub_conceptualization_ind,fig:cub_conceptualization_jntp,fig:skincon_conceptualization,fig:synthetic_conceptualization_ind,fig:synthetic_conceptualization_jntp}.
\section{More Results on the Effect of Data on Intervention}
\label{sec:results-others-dataset}
\begin{figure}[!th]
\centering
\begin{subfigure}{0.22\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/result_inputnoise_2.0.pdf}
\caption{Data with extremely high input noise}
\label{fig:synthetic_result_inputnoise2.0}
\end{subfigure}
\hspace*{10mm}
\begin{subfigure}{0.22\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/result_diversity_0.3.pdf}
\caption{Data with extremely high concept diversity}
\label{fig:synthetic_result_diversity3.0}
\end{subfigure}
\caption{Intervention results on data with extremely high input noise (variance of $2.0$) or extremely high concept diversity (perturbation probability of $30\%$).
In these cases, the proposed concept selection criteria work less effectively.}
\label{fig:synthetic_result_extreme}
\end{figure}
\begin{figure}[!th]
\centering
\begin{subfigure}{0.22\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/sparsity_cctp.pdf}
\caption{\textsc{cctp} with different concept sparsity levels}
\label{fig:synthetic_sparsity_cctp}
\end{subfigure}
\hspace*{10mm}
\begin{subfigure}{0.22\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/synthetic/similarity_ucp.pdf}
\caption{\textsc{ucp} with different sub-group sizes}
\label{fig:synthetic_similarity_ucp}
\end{subfigure}
\caption{
(a)
\textsc{cctp} becomes more effective with a higher concept sparsity level.
(b)
Final task error increases, but intervention becomes more effective with larger sub-group sizes.
}
\label{fig:synthetic_characteristics_others}
\end{figure}
We find that on data with extremely high input noise or extremely high concept diversity, the developed concept selection criteria become less effective in general, with a larger gap from \textsc{lcp} (see \cref{fig:synthetic_result_extreme}).
Specifically, \textsc{ucp} becomes less effective than the other criteria in these cases.
We hypothesize that high concept prediction uncertainty no longer leads to high prediction loss when the concept predictor $g$ achieves very low accuracy.
We also evaluate the effect of concept sparsity levels, i.e., the probability of each concept having value $0$, using the \textsc{cctp} criterion.
Note that intervention becomes less effective as the sparsity level approaches $50\%$, as seen in \cref{fig:synthetic_sparsity_cctp}.
To understand why, recall that the criterion aggregates the contribution of each concept to the target label prediction.
When the sparsity level is high and most concepts have value $0$, target prediction is determined by only a few concepts and \textsc{cctp} can work effectively by first intervening on the concept with the highest contribution.
In contrast, as the level gets closer to $50\%$, target prediction is determined by almost half of the concepts and contribution on target prediction becomes no longer a discriminative feature of the concepts, thus decreasing the effectiveness of the criterion.
In addition, with a larger sub-group size $\gamma$, the final task error increases but intervention becomes more effective (see \cref{fig:synthetic_similarity_ucp}).
Specifically, we need $12$ interventions to halve the task error for the data with $\gamma = 1$, whereas correcting $5$ concepts achieves the same effect for $\gamma = 10$.
This is because, when $\gamma$ is large, intervention can decrease the task error much faster for misclassified examples by distinguishing them from similar classes.
\section{More Results on Fairness of Majority Voting}
\label{sec:results-others-fairness}
\begin{figure}[!th]
\centering
\begin{subfigure}{0.24\textwidth}
\includegraphics[width=\linewidth]{figures/cub/mv_rand.pdf}
\caption{\textsc{rand}}
\label{fig:cub_mv_rand}
\end{subfigure}%
\hspace*{10mm}
\begin{subfigure}{0.24\textwidth}
\includegraphics[width=\linewidth]{figures/cub/mv_ucp.pdf}
\caption{\textsc{ucp}}
\label{fig:cub_mv_ucp}
\end{subfigure}%
\hspace*{10mm}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=\linewidth]{figures/cub/mv_lcp.pdf}
\caption{\textsc{lcp}}
\label{fig:cub_mv_lcp}
\end{subfigure}
\caption{
Comparison of test-time intervention results with and without using majority voting.
}
\label{fig:cub_mv}
\end{figure}
When we do not use majority voting on the CUB dataset, intervention actually increases the task error, as seen in \cref{fig:cub_mv}.
Specifically, intervention does not decrease the task error at all with \textsc{rand} or \textsc{ucp}.
Even with the \textsc{lcp} criterion, intervention does not reduce the task error as much as with majority voting, and the error starts to increase after about $10$ concepts have been intervened on.
See \cref{app:implementation} for the training details. |
{
"arxiv_id": "2302.14333",
"language": "en",
"timestamp": "2023-03-01T02:09:50",
"url": "https://arxiv.org/abs/2302.14333",
"yymm": "2302"
} | \section{Introduction}
Standard cosmology has been facing a great challenge since the end of the last century. The observational evidence since 1998 \cite{r1, r2, r3, r4, r5} is not in favour of decelerated expansion (the prediction of standard cosmology) but rather in favour of accelerated expansion. So far, two options have been proposed by cosmologists to accommodate this observational evidence. One possibility is to introduce some exotic matter (known as dark energy (DE)) within the framework of Einstein gravity. This mysterious matter component is of totally unknown nature except for its large negative pressure. At first, cosmologists took the cosmological constant as the DE candidate. But due to two severe drawbacks (namely the discrepancy between its predicted and observed values and the coincidence problem \cite{r6}), the cosmological constant is not a well-accepted DE model; rather, dynamical DE models \cite{r7, r8, r9, r10} are widely used in the literature. This work is an example of using a dynamical DE model. Usually, a scalar field having potential $V(\phi)$ is chosen as the DE candidate so that the pressure component $p_{\phi}=\frac{1}{2}\dot{\phi}^2-V(\phi)$ can evolve to the negative value required for the observed accelerated expansion. Here, the scalar field (chosen as dynamical DE) is nonminimally coupled to dark matter (DM) through an interference term in the action \cite{r11}. As a result, there is a new source term in the matter conservation equation. This kind of DE model is termed a chameleon field. This model is quite useful for obtaining accelerated expansion of the universe and other interesting cosmological consequences \cite{r12} (for details see Ref. \cite{r13}).
On the other hand, since the last century, symmetry analysis has played a significant role in studying global continuous symmetries (i.e., translations and rotations) as well as local gauge symmetries, internal symmetries of the space-time (in cosmology), and permutation symmetry in quantum field theory \cite{r14, r15}. In particular, geometrical symmetries of the space-time and symmetries of the physical system have a great role in analyzing any physical motion. From the perspective of Noether symmetry, the conserved charge has been used to identify the actual one among similar physical processes. Furthermore, in the Noether symmetry approach, the Noether integral (i.e., the first integral) has been chosen as a tool for the simplification of a system of differential equations or for the integrability of the system \cite{r16, r17, r18, r19, r20, r21}.
In addition, an advantage of using Noether symmetry for any physical system involving arbitrary physical parameters or arbitrary functions of the field variables is that the symmetry analysis uniquely determines these physical parameters or the arbitrary functions involved (for details see Ref. \cite{r22}). Also, in the recent past, symmetry analysis has been used for physical systems in Riemannian spaces \cite{r23, r24, r25, r26, r27, r28, r29, r1.3}.
Moreover, Noether symmetry analysis has opened new windows in studying quantum cosmology with suitable operator ordering, and the Wheeler-DeWitt (WD) equation so constructed on the minisuperspace is associated with Lie point symmetries. It is possible to obtain a subset of the general solutions of the WD equation with oscillatory behaviour \cite{r30, r31} by imposing Noether symmetries. The Noether symmetries with the Hartle criterion can identify those classical trajectories in minisuperspace \cite{r32, r33} which are solutions of the cosmological evolution equations, i.e., one may consider Noether symmetries as a bridge between quantum cosmology and the classical observable universe.
This work is another example of the extensive use of Noether symmetry analysis in both classical and quantum cosmology, applied to a chameleon field DE model. By imposing Noether symmetry on the Lagrangian and making a canonical transformation of the dynamical variables, it is possible to obtain classical solutions of the coupled nonlinear Einstein field equations. The WD equation is constructed for the present chameleon DE cosmological model in the background of FLRW space-time, and Noether symmetry is used as a tool to solve the WD equation. The plan of the paper is as follows: a brief overview of the Noether symmetry approach is given in Section II, whereas Section III presents the Noether symmetry and cosmological solutions of the chameleon field DE model; Section IV deals with quantum cosmology in the minisuperspace approach as a general prescription; the formation of the WD equation in the present cosmological model and its possible solution with Noether symmetry are presented in Section V; finally, the paper ends with a conclusion in Section VI.
\section{A Brief Overview of Noether Symmetry Approach}
Noether's first theorem states that any physical system is associated with some conserved quantities provided the Lagrangian of the system is invariant with respect to the Lie derivative \cite{r34, r35} along an appropriate vector field ($\mathcal{L}_{\overrightarrow{V}}f=\overrightarrow{V}(f)$). By imposing these symmetry constraints, the evolution equations of the physical system can either be solvable or simplified to a great extent \cite{r36, r37}.
For a point like canonical Lagrangian $L[q^{\alpha}(x^i),\dot{q}^{\alpha}(x^i)]$, the Euler-Lagrange equations
\begin{equation}
\partial_{j}\left(\frac{\partial L}{\partial\partial_{j}q^{\alpha}}\right)=\frac{\partial L}{\partial q^{\alpha}}\label{eqn1}
\end{equation}
can be contracted with some unknown functions $\lambda^{\alpha}(q^{\beta})$ as follows:
\begin{equation}
\lambda^{\alpha}\bigg[\partial_{j}\left(\frac{\partial L}{\partial\partial_{j}q^{\alpha}}\right)-\frac{\partial L}{\partial q^{\alpha}}\bigg]=0\label{eqn2}
\end{equation}
i.e,
\begin{equation}
\lambda^{\alpha}\frac{\partial L}{\partial q^{\alpha}}+(\partial_{j}\lambda^{\alpha})\left(\frac{\partial L}{\partial\partial_{j}q^{\alpha}}\right)=\partial_{j}\left(\lambda^{\alpha}\frac{\partial L}{\partial\partial_{j}q^{\alpha}}\right)\nonumber
\end{equation}
Thus
\begin{equation}
\mathcal{L}_{\overrightarrow{X}}L=\lambda^{\alpha}\frac{\partial L}{\partial q^{\alpha}}+(\partial_{j}\lambda^{\alpha})\frac{\partial L}{\partial\left(\partial_{j}q^{\alpha}\right)}=\partial_{j}\left(\lambda^{\alpha}\frac{\partial L}{\partial\partial_{j}q^{\alpha}}\right)\label{eqn3}
\end{equation}
So according to Noether theorem the vector field \cite{r38, r39}
\begin{equation}
\overrightarrow{X}=\lambda^{\alpha}\frac{\partial}{\partial q^{\alpha}}+\left(\partial_{j}\lambda^{\alpha}\right)\frac{\partial}{\partial\left(\partial_{j}q^{\alpha}\right)}\label{eqn4}
\end{equation}
can be chosen appropriately so that the Lagrangian of the system is invariant along the vector field, i.e., $\mathcal{L}_{\overrightarrow{X}}L=0$; consequently, the physical system is said to be invariant under the Noether symmetry, with $\overrightarrow{X}$ the infinitesimal generator of the symmetry. It is to be noted that the above symmetry vector as well as the Lagrangian is defined on the tangent space of configurations: $TQ\{q^{\alpha}, \dot{q}^{\alpha}\}$. In general, the Noether symmetry approach is very relevant for identifying conserved quantities of a physical system. The above symmetry condition is associated with a constant of motion for a Lagrangian having conserved phase flux along the vector field $\overrightarrow{X}$. Furthermore, from Eq.(\ref{eqn3}) this symmetry criterion is associated with a constant of motion of the system \cite{r16, r17, r36}
\begin{equation}
Q^i=\lambda^{\alpha}\frac{\partial L}{\partial\left(\partial_{i}q^{\alpha}\right)}\label{eqn5}
\end{equation}
satisfying
\begin{equation}
\partial_{i}Q^i=0\label{eqn6}
\end{equation}
So $Q^i$ is identified as the Noether current or conserved current. Furthermore, the energy function associated with the system is
\begin{equation}
E=\dot{q}^{\alpha}\frac{\partial L}{\partial\dot{q}^{\alpha}}-L\label{eqn7}
\end{equation}
The energy function (also known as the Hamiltonian of the system) is a constant of motion provided there is no explicit time dependence in the Lagrangian \cite{r16, r17, r36}. Moreover, if the conserved current due to the Noether symmetry has some physical meaning \cite{r16, r17, r36}, then symmetry analysis can identify reliable models. In the following, we shall show how symmetry analysis simplifies the present coupled cosmological model so that classical cosmological solutions can be obtained easily.
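As an elementary illustration of the charge (\ref{eqn5}), the following symbolic sketch (our check, restating the textbook free-particle example rather than anything specific to the present model) verifies that the momentum conjugate to a translational symmetry is conserved on-shell.
\begin{verbatim}
import sympy as sp

t = sp.symbols('t')
m = sp.symbols('m', positive=True)
x, y = sp.Function('x')(t), sp.Function('y')(t)

# Free-particle Lagrangian with a translational symmetry along x.
L = sp.Rational(1, 2) * m * (x.diff(t)**2 + y.diff(t)**2)

lam = {x: 1, y: 0}                                  # generator X = d/dx
Q = sum(lam[q] * sp.diff(L, q.diff(t)) for q in (x, y))
print(Q)                                            # m * x'(t)

# On-shell (Euler-Lagrange equation for x) the charge is conserved.
EL_x = sp.diff(sp.diff(L, x.diff(t)), t) - sp.diff(L, x)
print(sp.simplify(Q.diff(t) - EL_x))                # 0
\end{verbatim}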
In the context of quantum cosmology, Hamiltonian formulation is very useful and Noether symmetry condition is rewritten as follows: \cite{r37}
\begin{equation}
\mathcal{L}_{\overrightarrow{X}_{H}}H=0\label{eqn8}
\end{equation}
with
\begin{equation}
{\overrightarrow{X}_{H}}=\dot{q}\frac{\partial}{\partial q}+\ddot{q}\frac{\partial}{\partial\dot{q}}\nonumber
\end{equation}
In minisuperspace models of quantum cosmology, symmetry analysis determines appropriate interpretation of the wave function. The conserved canonically conjugate momenta due to Noether symmetry can be written as follows:
\begin{equation}
\Pi_{l}=\frac{\partial L}{\partial \dot{q}^l}={\sum}_{l}\label{eqn9}
\end{equation}
$l=1,2,...,m,$
where `$m$' denotes the number of symmetries. Also, the operator version (i.e., quantization) of Eq. (\ref{eqn9}), i.e.,
\begin{equation}
-i\partial_{q^l}\ket{\psi}={\sum}_{l}\ket{\psi}\label{eqn10}
\end{equation}
identifies a translation along the $q^l$-axis through symmetry analysis. Also, Eq. (\ref{eqn10}) has an oscillatory solution for a real conserved quantity $\sum_{l}$, i.e.,
\begin{equation}
\ket{\psi}=\sum_{l=1}^{m}e^{i\sum_{l}q^l}\ket{\phi(q^k)} , k<n,\label{eqn11}
\end{equation}
where the index `$k$' stands for the directions along which there is no symmetry, with $n$ the dimension of the minisuperspace. Thus the oscillatory part of the wave function implies the existence of a Noether symmetry, and the conjugate momenta along the symmetry directions should be conserved, and vice-versa \cite{r41}. Due to the symmetries, the first integrals of motion identify the classical trajectories. In fact, for a 2D minisuperspace, it is possible to obtain a complete solution of the system by Noether symmetry.
\section{Noether Symmetry and Cosmological Solutions to Chameleon Field DE Model}
This section is devoted to study chameleon field DE cosmological model. This model consists of a canonical scalar field (having self-interaction potential) nonminimally coupled to DM. So the potential function and the coupling function are the two unknown functions of the scalar field. The action integral of the model has the explicit form \cite{r42, r43}
\begin{equation}
I=\int\left[\frac{R}{16\pi G}+\frac{1}{2}\phi_{,\mu}\phi^{,\mu}-V(\phi)+f(\phi)L_{m}\right]\sqrt{-g}d^4x\label{eq1}
\end{equation}
where as usual $R$ is the Ricci scalar, $G$ is the Newtonian gravitational constant and $\phi$ is the chameleon scalar field having potential $V(\phi)$. Here, $L_{m}$ is the Lagrangian for DM which is nonminimally coupled to the chameleon scalar field with $f(\phi)$ (an analytic function), the coupling function. By choosing the DM to be an ideal gas, the matter Lagrangian can be chosen as $L_m\simeq\rho_{m}$ \cite{r1.1}.
In the background of flat FLRW space-time the point-like Lagrangian for the above cosmological model takes the following form:
\begin{equation}
L(a,\dot{a},\phi,\dot{\phi})=3a\dot{a}^2-a^3\left(\frac{\dot{\phi}^2}{2}-V(\phi)\right)-\rho_{m} f(\phi)a^{3}\label{eq2}
\end{equation}
Now the Euler-Lagrange equations (i.e, the Einstein field equations) for the Lagrangian (\ref{eq2}) are given by
\begin{eqnarray}
3H^2=\rho_{m}f(\phi)+\frac{1}{2}\dot{\phi}^2+V(\phi)\label{eq3},\\
{2\dot{H}+3H^2=-\frac{1}{2}\dot{\phi}^2+V(\phi)+\rho_{m}\omega f(\phi)},\label{eq4}
\end{eqnarray}
where an over dot indicates differentiation with respect to the cosmic time `$t$'. {Furthermore, the equation of motion $T^{\mu\nu}_{;\nu}=0$ for the cosmological fluid with energy momentum tensor $T_{\mu\nu}=T^{(\phi)}_{\mu\nu}+T^{(m)}_{\mu\nu}$ is given by}
\begin{equation}
{ \dot{\phi}\ddot{\phi}+3H\dot{\phi}^2+V'(\phi)\dot{\phi}+\rho_{m}f'(\phi)\dot{\phi}+\dot{\rho_{m}}f(\phi)+3H(1+\omega)\rho_{m}f(\phi)=0}\label{eq5}
\end{equation}
One may note that among these three evolution equations (\ref{eq3})-(\ref{eq5}), only two are independent, while (\ref{eq3}) is termed the constraint equation.
{As the present cosmological model contains the interaction term $f(\phi)$, one has $\left(T^{(\phi)\mu\nu}\right)_{;\nu}=-Q$, $\left(T^{(m)\mu\nu}\right)_{;\nu}=Q$, or equivalently}
\begin{equation}
{\ddot{\phi}+3H\dot{\phi}+V'(\phi)+f'(\phi)\rho_{m}=-\frac{Q}{\dot{\phi}}}\label{eq17.1}
\end{equation}
{and}
\begin{equation}
{ \dot{\rho_{m}}f(\phi)+3H(1+\omega)\rho_{m}f(\phi)=Q}\label{eq18.1}
\end{equation}
{As the present model reduces to that of Weyl integrable gravity for $f(\phi)=f_0e^{\lambda\phi}$ \cite{r1.2}, setting $Q=\alpha\rho_{m}f'(\phi)\dot{\phi}$ (where $\alpha$ is a nonzero constant), Eqs. (\ref{eq17.1}) and (\ref{eq18.1}) become}
\begin{equation}
{\ddot{\phi}+3H\dot{\phi}+V'(\phi)+(1+\alpha)f'(\phi)\rho_{m}=0}\label{eq17.2}
\end{equation}
and
\begin{equation}
{ \dot{\rho_{m}}f(\phi)+3H(1+\omega)\rho_{m}f(\phi)-\alpha f'(\phi)\dot{\phi}\rho_{m}=0}\label{eq18.2}
\end{equation}
{The matter conservation equation (\ref{eq18.2}) can be integrated to have }
\begin{equation}
{\rho_{m}(t)=\rho_{0}a^{-3(1+\omega)}\{f(\phi)\}^{\alpha}}\label{eq21.1}
\end{equation}
{Thus the scalar field evolution equation (i.e, the modified Klein-Gordon equation) (\ref{eq17.2}) becomes}
\begin{equation}
{ \ddot{\phi}+3H\dot{\phi}+V'(\phi)=-\rho_{0}a^{-3(1+\omega)}\frac{d}{d\phi}\left[\{f(\phi)\}^{\alpha+1}\right]}\label{eq22.1}.
\end{equation}
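The solution (\ref{eq21.1}) is easy to verify symbolically; the following sketch (our verification, not part of the original derivation) substitutes it back into Eq. (\ref{eq18.2}) with the Weyl-type coupling $f(\phi)=f_0e^{\lambda\phi}$.
\begin{verbatim}
import sympy as sp

t, rho0, f0, lam, alpha, omega = sp.symbols(
    't rho_0 f_0 lambda alpha omega', positive=True)
a = sp.Function('a')(t)
phi = sp.Function('phi')(t)
H = a.diff(t) / a

f = f0 * sp.exp(lam * phi)                          # coupling function
rho_m = rho0 * a**(-3 * (1 + omega)) * f**alpha     # candidate solution

# Left-hand side of the matter conservation equation:
lhs = (rho_m.diff(t) * f + 3 * H * (1 + omega) * rho_m * f
       - alpha * f.diff(t) * rho_m)
print(sp.simplify(lhs))                             # 0
\end{verbatim}
Here the term $\alpha f'(\phi)\dot{\phi}\rho_{m}$ is written as $\alpha\,\rho_m\,df/dt$ via the chain rule.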
The configuration space for the present model is a 2D space $(a,\phi)$ and the infinitesimal generator for the Noether symmetry takes the form
\begin{equation}
\overrightarrow{X}=p\frac{\partial}{\partial a}+q\frac{\partial}{\partial \phi}+\dot{p}\frac{\partial}{\partial \dot{a}}+\dot{q}\frac{\partial}{\partial\dot{\phi}},\label{eq6}
\end{equation}
where $p=p(a,\phi)$ and $q=q(a,\phi)$ are the unknown coefficients with
$\dot{p}=\frac{\partial p}{\partial a}\dot{a}+\frac{\partial p}{\partial\phi}\dot{\phi}$ and similarly for $\dot{q}$.
These coefficients of the Noether symmetry vector are determined from an overdetermined system of partial differential equations, obtained by imposing the Noether symmetry condition on the Lagrangian, i.e.,
\begin{equation}
\mathcal{L}_{\overrightarrow{X}}L=0\nonumber
\end{equation}
i.e,
\begin{eqnarray}
p+2a\frac{\partial p}{\partial a}&=&0\nonumber\\
3p+2a\frac{\partial q}{\partial\phi}&=&0\nonumber\\
6\frac{\partial p}{\partial\phi}-a^2\frac{\partial q}{\partial a}&=&0\label{eq7}
\end{eqnarray}
with a differential equation for the potential and coupling function as follows:
\begin{equation}
{3\rho_{0}\omega pa^{-3\omega-1}F(\phi)+3pa^2V(\phi)+qa^3 V'(\phi)-\rho_{0} a^{-3\omega}qF'(\phi)=0}\label{eq8}
\end{equation}
{where ${F(\phi)=\{f(\phi)\}^{\alpha+1}}$}.
The above set of partial differential equations (\ref{eq7}) is solvable using the method of separation of variables, i.e., $p(a,\phi)=p_{1}(a)p_{2}(\phi)$, $q(a,\phi)=q_{1}(a)q_{2}(\phi)$, as
\begin{equation}
p=a^{-\frac{1}{2}}\left(c_{p}e^{m\phi}+c_{q}e^{-m\phi}\right)\nonumber
\end{equation}
\begin{equation}
{q=-4ma^{-\frac{3}{2}}\left(c_{p}e^{m\phi}-c_{q}e^{-m\phi}\right)}\label{eq9}
\end{equation}
where $m^2=\frac{3}{8}$ and $c_{p}$, $c_{q}$ are arbitrary constants. Using the above solutions (\ref{eq9}) in (\ref{eq8}), the solutions for $V(\phi)$ and $f(\phi)$ take the form (with $\omega=-1$)
\begin{equation}
{ V(\phi)-\rho_{0}F(\phi)=k\left(c_{p}e^{m\phi}-c_{q}e^{-m\phi}\right)^2}\label{eq27.1}
\end{equation}
{where $k$ is a positive integration constant.}
Thus, the infinitesimal generator of the Noether symmetry is determined (except for arbitrary integration constants) by imposing symmetry condition which in turn determines a relation between the potential function and the coupling function.
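Since the overdetermined system (\ref{eq7}) is easy to mis-transcribe, the following symbolic sketch (our verification) substitutes the solution (\ref{eq9}) back into all three equations.
\begin{verbatim}
import sympy as sp

a, phi, cp, cq = sp.symbols('a phi c_p c_q', positive=True)
m = sp.sqrt(sp.Rational(3, 8))                      # m^2 = 3/8

p = a**sp.Rational(-1, 2) * (cp*sp.exp(m*phi) + cq*sp.exp(-m*phi))
q = -4*m * a**sp.Rational(-3, 2) * (cp*sp.exp(m*phi) - cq*sp.exp(-m*phi))

eqs = [p + 2*a*sp.diff(p, a),                       # first equation of (7)
       3*p + 2*a*sp.diff(q, phi),                   # second equation of (7)
       6*sp.diff(p, phi) - a**2*sp.diff(q, a)]      # third equation of (7)
print([sp.simplify(e) for e in eqs])                # [0, 0, 0]
\end{verbatim}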
Another important issue related to Noether symmetry is the conserved quantities associated with it. In general, for a field theory in curved space there is no well-defined notion of energy. However, the conserved quantity derived from Noether's theorem is the energy-momentum tensor. In particular, when the system has a time-like Killing vector there is an associated conserved energy. Though FLRW space-time has no time-like Killing vector field, the Lagrangian density is explicitly time-independent. Hence, in analogy with a point-like Lagrangian, it is possible to define an energy which is conserved in nature. Thus, in the context of Noether symmetry, the present cosmological model has two conserved quantities, namely the conserved charge (defined in Eq. (\ref{eqn5})) and the conserved energy (defined in Eq. (\ref{eqn7})), having the explicit forms
\begin{equation}
{Q=6\dot{a}a^{\frac{1}{2}}\left(c_{p}e^{m\phi}+c_{q}e^{-m\phi}\right)+a^3\dot{\phi}\bigg\{4ma^{-\frac{3}{2}}\left(c_{p}e^{m\phi}-c_{q}e^{-m\phi}\right)\bigg\}}\nonumber
\end{equation}
\begin{equation}
{E=3a\dot{a}^2-\frac{1}{2}a^3\dot{\phi}^2-a^3V(\phi)+\rho_{0}F(\phi)a^{-3\omega}}\label{eq12}
\end{equation}
Usually, associated with a Noether symmetry there is a conserved current (defined in Eq. (\ref{eqn5})), whose time component, integrated over the spatial volume, gives a conserved charge. But in the present context, as all the variables depend on time only, $Q$ defined in (\ref{eq12}) is the Noether charge. Moreover, the above conserved charge can be expressed geometrically as the inner product of the infinitesimal generator with the Cartan one-form \cite{r44} as follows:
\begin{equation}
Q=i_{\overrightarrow{X}}\theta_{L}\label{eq13}
\end{equation}
where $i_{\overrightarrow{X}}$ denotes the inner product with the vector field $\overrightarrow{X}$ and
\begin{equation}
\theta_{L}=\frac{\partial L}{\partial \dot{a}}da+\frac{\partial L}{\partial\dot{\phi}}d\phi\label{eq14}
\end{equation}
is termed the Cartan one-form.
On the other hand, this geometric inner-product representation is useful for finding cyclic variables in the Lagrangian. In the context of solving the coupled nonlinear evolution equations, the determination of cyclic variables is very useful, since not only the Lagrangian but also the evolution equations are simplified to a great extent.
In the present context the transformation of the 2D augmented space: $(a,\phi)\rightarrow(u,v)$ transform the symmetry vector as
\begin{equation}
\overrightarrow{X_{T}}=\left(i_{\overrightarrow{X}}du\right)\frac{\partial}{\partial u}+\left(i_{\overrightarrow{X}}dv\right)\frac{\partial}{\partial v}+\left\{\frac{d}{dt}\left(i_{\overrightarrow{X}}du\right)\right\}\frac{d}{d\dot{u}}+\left\{\frac{d}{dt}\left(i_{\overrightarrow{X}}dv\right)\right\}\frac{d}{d\dot{v}}\label{eq15}
\end{equation}
Geometrically, $\overrightarrow{X_{T}}$ may be interpreted as the lift of a vector field on the augmented space. Now, without any loss of generality we restrict the above point transformation to \cite{r44}
\begin{equation}
i_{\overrightarrow{X}}du=1~~\mbox{and}~~ i_{\overrightarrow{X}}dv=0\label{eq16}
\end{equation}
so that
\begin{equation}
\overrightarrow{X_{T}}=\frac{\partial}{\partial u}~~\mbox{and}~~\frac{\partial L_{T}}{\partial u}=0\label{eq17}
\end{equation}
i.e, $u$ is the cyclic variable. The above geometric process of identification of cyclic variables can be interpreted so as to choose the transformed infinitesimal generator along any co-ordinate line (identified as the cyclic variable) \cite{r45}.
Now, the explicit form of the above point transformation (\ref{eq16}) is a set of first-order linear partial differential equations whose solutions are as follows:\\
\textbf{\underline{Case I:}} $c_{p}=c_{q}$
\begin{eqnarray}
u&=&\frac{2}{3}a^{\frac{3}{2}}\cosh{m\phi},\nonumber\\
v&=&a^{\frac{3}{2}}\sinh{m\phi}\label{eq18}.
\end{eqnarray}
\textbf{\underline{Case II:}} $c_{p}\neq c_{q}$
\begin{eqnarray}
u&=&\frac{1}{6c_{p}c_{q}}a^{\frac{3}{2}}\left(c_{p}e^{m\phi}+c_{q}e^{-m\phi}\right),\nonumber\\
v&=&a^{\frac{3}{2}}\left(c_{p}e^{m\phi}-c_{q}e^{-m\phi}\right)\label{eq19}.
\end{eqnarray}
The simplified Lagrangian in the new variables has the following forms:
\begin{eqnarray}
L&=&3\dot{u}^2-\frac{4}{3}\dot{v}^2+4kc_{p}^2v^2\label{l1},~~~~~ (\mbox{Case I})\\
&=&12c_{p}c_{q}\dot{u}^2-\frac{1}{3c_{p}c_{q}}\dot{v}^2+kv^2\label{l2} ~~~~~ (\mbox{Case II}).
\end{eqnarray}
The conserved quantities in the new variables can be expressed as follows:
\begin{eqnarray}
Q&=&6\dot{u}\nonumber\\
E&=&3\dot{u}^2-\frac{4}{3}\dot{v}^2-4kc_{p}^2v^2\nonumber ~~~~~(\mbox{Case I})
\end{eqnarray}
and
\begin{eqnarray}
Q&=&24c_{p}c_{q}\dot{u}\nonumber\\
E&=&12c_{p}c_{q}\dot{u}^2-\frac{1}{3c_{p}c_{q}}\dot{v}^2-kv^2\nonumber ~~~~~(\mbox{Case II})
\end{eqnarray}
Now solving the Euler-Lagrange equations for the new Lagrangian, the new augmented variables have the following forms:
\begin{eqnarray}
u&=&At+B\nonumber\\
v&=&k_{1}\cos\sqrt{3k}c_{p}t+k_{2}\sin\sqrt{3k}c_{p}t\nonumber ~~~~~~ (\mbox{Case I})
\end{eqnarray}
and
\begin{eqnarray}
u&=&rt+s\nonumber\\
v&=&k'_{1}\cos\sqrt{3c_{p}c_{q}k}\,t+k'_{2}\sin\sqrt{3c_{p}c_{q}k}\,t\nonumber ~~~~~(\mbox{Case II})
\end{eqnarray}
Hence, the cosmic scale factor and the chameleon scalar field have the following explicit expressions:
\begin{eqnarray}
a(t)&=&\left[\frac{9}{4}\left(At+B\right)^2-\left(k_{1}\cos\sqrt{3k}c_{p}t+k_{2}\sin\sqrt{3k}c_{p}t\right)^2\right]^{\frac{1}{3}}\nonumber\\
\phi(t)&=&\frac{2\sqrt{2}}{\sqrt{3}}\tanh^{-1}\left[\frac{2\left(k_{1}\cos\sqrt{3k}c_{p}t+k_{2}\sin\sqrt{3k}c_{p}t\right)}{3(At+B)}\right]\nonumber ~~~~~(\mbox{Case I})
\end{eqnarray}
and
\begin{eqnarray}
a(t)&=&\left[9c_{p}c_{q}\left(rt+s\right)^2-\frac{1}{4c_{p}c_{q}}\left(k'_{1}\cos\sqrt{3c_{p}c_{q}k}t+k'_{2}\sin\sqrt{3c_{p}c_{q}k}t\right)^2\right]^\frac{1}{3}\nonumber\\
\phi(t)&=&\frac{2\sqrt{2}}{\sqrt{3}}\ln\frac{6c_{p}c_{q}(rt+s)+\left(k'_{1}\cos\sqrt{3c_{p}c_{q}k}t+k'_{2}\sin\sqrt{3c_{p}c_{q}k}t\right)}{2c_{p}\left[9c_{p}c_{q}\left(rt+s\right)^2-\frac{1}{4c_{p}c_{q}}\left(k'_{1}\cos\sqrt{3c_{p}c_{q}k}t+k'_{2}\sin\sqrt{3c_{p}c_{q}k}t\right)^2\right]^\frac{1}{2}}\nonumber ~~~~~(\mbox{Case II})
\end{eqnarray}
In the above solutions $(A, B, k_{1}, k_{2})$ and $(r, s, k'_{1}, k'_{2})$ are arbitrary integration constants.
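The acceleration $\ddot{a}/a$ shown in Figs. \ref{fig1} and \ref{fig2} can be reproduced numerically from these closed-form solutions; the sketch below evaluates Case I with purely illustrative constants (not the values used for the figures).
\begin{verbatim}
import numpy as np

A, B, k, c_p = 1.0, 1.0, 1.0, 1.0       # illustrative constants
k1, k2 = 0.5, 0.5
w = np.sqrt(3.0 * k) * c_p

def a_case1(t):
    osc = k1 * np.cos(w * t) + k2 * np.sin(w * t)
    return (2.25 * (A * t + B)**2 - osc**2)**(1.0 / 3.0)

t = np.linspace(0.0, 10.0, 2001)
a = a_case1(t)
acc = np.gradient(np.gradient(a, t), t) / a   # finite-difference ddot(a)/a
print(acc[::400])                             # sample values along the curve
\end{verbatim}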
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{f1.eps}\\
\caption{Graphical representation of $\frac{\ddot{a}}{a}$ with respect to cosmic time $t$ when $c_{p} = c_{q}$.}
\label{fig1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{f2.eps}\\
\caption{Represents $\frac{\ddot{a}}{a}$ with respect to cosmic time $t$ when $c_{p} \neq c_{q}$.}
\label{fig2}
\end{figure}\\
\section{Quantum Cosmology in the Minisuperspace Approach: A General Prescription}
Minisuperspaces are considered as restrictions of the geometrodynamics of superspace, and physically important and interesting models are defined on minisuperspaces. In cosmology, the simplest and most widely used minisuperspace models have homogeneous and isotropic metrics and matter fields; consequently, the lapse function is homogeneous (i.e., $N=N(t)$) while the shift function vanishes identically. So in a 4D manifold, using the $(3+1)$-decomposition, the metric can be written as follows:
\begin{equation}
ds^2=-N^2(t)dt^2+h_{ab}(x,t)dx^adx^b\label{eqq1}
\end{equation}
and the Einstein-Hilbert action can be written as follows:
\begin{equation}
I(h_{ab},N)=\frac{m_p^2}{16\pi}\int dt~d^3xN\sqrt{h}\left[k_{ab}k^{ab}-k^2+{}^{(3)}R-2\Lambda\right]\label{eqq2},
\end{equation}
where $k_{ab}$ is the extrinsic curvature of the $3$-space, $k=k_{ab}h^{ab}$ is the trace of the extrinsic curvature, ${}^{(3)}R$ is the curvature scalar of the three space and $\Lambda$ is the cosmological constant.
Now, due to the homogeneity of the three space, the metric $h_{ab}$ is characterized by a finite number of functions of time $q^{\alpha}(t)$, $\alpha=0,1,2,...,n-1$, and the above action can be written in the form of a relativistic point particle with a self-interacting potential in an $n$D curved space-time as \cite{r42, r43}
\begin{equation}
I\left(q^{\alpha}(t),N(t)\right)=\int_{0}^{1}dtN\left[\frac{1}{2N^2}f_{\alpha\beta}(q)\dot{q}^{\alpha}\dot{q}^{\beta}-V(q)\right]\label{eqq3}
\end{equation}
So the equation of motion of the (equivalent) relativistic particle can be written as (considering variation of the action with respect to the field variables $q^{\alpha}(t)$)
\begin{equation}
\frac{1}{N}\frac{d}{dt}\left(\frac{\dot{q}^{\alpha}}{N}\right)+\frac{1}{N^2}\Gamma^{\alpha}_{\mu\nu}\dot{q}^{\mu}\dot{q}^{\nu}+f^{\alpha\beta}\frac{\partial V}{\partial q^{\beta}}=0\label{eqq4}
\end{equation}
with $\Gamma^{\alpha}_{\beta\gamma}$ being the Christoffel symbols in the minisuperspace. There is also a constraint equation, obtained by variation with respect to the lapse function, as follows:
\begin{equation}
\frac{1}{2N^2}f_{\alpha\beta}\dot{q}^{\alpha}\dot{q}^{\beta}+V(q)=0\label{eqq5}
\end{equation}
For canonical quantization scheme one has to switch over to Hamiltonian formulation. The momenta canonical to $q^{\alpha}$ are given by
\begin{equation}
p_{\alpha}=\frac{\partial L}{\partial \dot{q}^{\alpha}}=f_{\alpha\beta}\frac{\dot{q}^{\beta}}{N},\label{eqq6}
\end{equation}
so the Hamiltonian is defined as follows:
\begin{equation}
H=p_{\alpha}\dot{q}^{\alpha}-L=N\left[\frac{1}{2}f^{\alpha\beta}p_{\alpha}p_{\beta}+V(q)\right]=N\mathcal{H}\label{eqq7}
\end{equation}
with $f^{\alpha\beta}$ being the inverse metric. Using the definition of $p_{\alpha}$ (i.e, equation (\ref{eqq6})) into the constraint equation (\ref{eqq5}) one obtains
\begin{equation}
\mathcal{H}(q^{\alpha},p_{\alpha})\equiv\frac{1}{2}f^{\alpha\beta}p_{\alpha}p_{\beta}+V(q)=0\label{eqq8}
\end{equation}
Now, writing $p_{\alpha}$ as $-i\hbar\dfrac{\partial}{\partial q^{\alpha}}$ in the quantization scheme and applying the operator version of the above constraint equation (\ref{eqq8}) to a time-independent function (the wave function of the universe), one gets the WD equation of quantum cosmology as follows:
\begin{equation}
\mathcal{H}\left(q^{\alpha},-i\hbar\frac{\partial}{\partial q^{\alpha}}\right)\psi(q^{\alpha})=0\label{eqq9}
\end{equation}
In general, the minisuperspace metric depends on $q^{\alpha}$, so the above WD equation has an operator-ordering problem. However, by requiring the quantization on the minisuperspace to be covariant, one may resolve this operator-ordering problem. Furthermore, in the context of quantum cosmology, for a probability measure there exists a conserved current for the hyperbolic type of partial differential equation as follows:
\begin{equation}
\overrightarrow{J}=\frac{i}{2}(\psi^{*}\nabla\psi-\psi\nabla\psi^{*})\label{eqq10}
\end{equation}
with $\overrightarrow{\nabla}\cdot\overrightarrow{J}=0$. Here, $\psi$ is a solution of the hyperbolic-type WD differential equation. Thus, it is possible to define a probability measure on the minisuperspace as follows:
\begin{equation}
dp=|\psi(q^{\alpha})|^2dV\label{eqq11}
\end{equation}
where $dV$ is a volume element on minisuperspace.
\section{Formation of WD Equation in the Present Cosmological Model and possible solution with Noether Symmetry}
In the present cosmological model, the 2D configuration space $\{a,\phi\}$ has the conjugate momenta
\begin{eqnarray}
p_{a}&=&\frac{\partial L}{\partial\dot{a}}=6a\dot{a}\nonumber\\
p_{\phi}&=&\frac{\partial L}{\partial\dot{\phi}}=-a^3\dot{\phi}\label{eq20}
\end{eqnarray}
So the Hamiltonian of the system (also known as Hamiltonian constraint) can be expressed as follows:
\begin{equation}
{\mathcal{H}=\frac{1}{12a}p_{a}^2-\frac{1}{2a^3}p_{\phi}^2-a^3V(\phi)+\rho_{0} F(\phi)a^{-3\omega}}\label{eq21}
\end{equation}
with equivalent Hamilton's equations of motion
\begin{eqnarray}
\dot{a}&=&\frac{1}{6a}p_{a}\nonumber\\
\dot{\phi}&=&-\frac{1}{a^3}p_{\phi}\nonumber\\
{\dot{p_{a}}}&=&{\frac{1}{12a^2}p_{a}^2-\frac{3}{2a^4}p_{\phi}^2+3a^2V(\phi)+3\rho_{0}\omega F(\phi)a^{-3\omega-1}}\nonumber\\
{\dot{p_{\phi}}}&=&{a^3V'(\phi)-\rho_{0} F'(\phi)a^{-3\omega}}\label{eq22}
\end{eqnarray}
Furthermore, the Lagrangian (i.e., Eq. (\ref{eq2})) of the system can be interpreted geometrically by dividing it into two parts: the first two terms form the kinetic part, and the remaining two terms constitute the dynamical part. The kinetic part may be viewed as a 2D pseudo-Riemannian space with line element
\begin{equation}
ds^2=-6ada^2+a^2d\phi^2\label{eq23}
\end{equation}
This 2D Lorentzian manifold $(a,\phi)$ is known as minisuperspace (in quantum cosmology). The wave function of the universe in quantum cosmology is a solution of the WD equation, a second-order hyperbolic partial differential equation defined over minisuperspace and it is the operator version of the Hamiltonian constraint.
Furthermore, in the WKB approximation one can write the wave function as $\psi(x^k) \sim e^{i\delta(x^k)}$, and hence the WD equation (\ref{eqq9}) becomes a first-order nonlinear partial differential equation, which is nothing but the (null) Hamilton-Jacobi (H-J) equation in the same geometry.
In quantizing the model one has to construct the WD equation $\hat{\mathcal{H}}\psi(a,\phi)=0$, with $\hat{\mathcal{H}}$ the operator version of the Hamiltonian (\ref{eq21}) and $\psi(a,\phi)$ the wave function of the universe. In the course of conversion to the operator version there arises a problem related to the ordering of a variable and its conjugate momentum \cite{r46}. In the first term of the Hamiltonian (\ref{eq21}) there is a product of `$a$' and `$p_a$', so one has to treat the operator ordering carefully, with $p_a \rightarrow -i\partial_a$,~ $p_{\phi} \rightarrow -i\partial_{\phi}$. As a result there is a two-parameter family of WD equations
\begin{equation}
{\bigg[-\frac{1}{12} \frac{1}{a^l}\frac{\partial}{\partial a}\frac{1}{a^m}\frac{\partial}{\partial a}\frac{1}{a^n}+\frac{1}{2a^3}\frac{\partial^2}{\partial \phi^2}-a^3 V(\phi)+\rho_0 F(\phi)a^{-3\omega}\bigg]\psi(a, \phi)=0}\label{eq24}
\end{equation}
with the triplet of real numbers $(l, m, n)$ satisfying $l+m+n=1$. Due to the infinitely many possible choices of $(l, m, n)$, there is an infinite number of possible orderings. Also, the semi-classical limit, namely the Hamilton-Jacobi equation (obtained by substituting $\psi=\exp(is)$), does not depend on the above triplet. In fact, the following choices are commonly used:\\
i) $l=2, m=-1, n=0$ : D'Alembert operator ordering.\\
ii) $l=0=n, m=1$: Vilenkin operator ordering.\\
iii) $l=1, m=0=n$: no ordering.\\
Thus factor ordering affects the behaviour of the wave function, while semi-classical results are not influenced by the above ordering problem. Now, choosing the third option (i.e., no ordering), the WD equation for the present model has the following explicit form:
\begin{equation}
{\bigg[-\frac{1}{12a} \frac{\partial^2}{\partial a^2}+\frac{1}{2a^3}\frac{\partial^2}{\partial \phi^2}-a^3 V(\phi)+\rho_0 F(\phi)a^{-3\omega}\bigg]\psi(a, \phi)=0}\label{eq25}
\end{equation}
The general solution of the above second-order hyperbolic partial differential equation is known as the wave function of the universe. This solution can be constructed as a superposition of the eigenfunctions of the above WD operator as follows \cite{r41}:
\begin{equation}
\psi(a, \phi)=\int W(Q)\psi(a, \phi, Q)~dQ\label{eq26}
\end{equation}
with $\psi$ being an eigenfunction of the WD operator, $W(Q)$ a weight function, and $Q$ the conserved charge. Now it is desirable to have a wave function in quantum cosmology that is consistent with the classical theory. In other words, one has to construct a coherent wave packet having good asymptotic behaviour in the minisuperspace and peaking around the classical trajectory. As the minisuperspace variables $\{a, \phi\}$ are highly coupled in the WD operator, it is not possible to obtain any explicit solution of the WD equation even with the separation-of-variables method. Thus one may analyze the present model in the context of quantum cosmology using the new variables $(u, v)$ (obtained by point transformation) in the augmented space.\\\\
{\bf{\underline{Case-I}}}: $c_p=c_q$\\
In this case the Lagrangian is given by Eq. (\ref{l1}) for which $u$ is the cyclic variable. So one has
\begin{eqnarray}
p_u&=&\frac{\partial L}{\partial \dot{u}}=6\dot{u}=\mbox{Conserved}\nonumber\\
p_v&=&\frac{\partial L}{\partial \dot{v}}=-\frac{8}{3}\dot{v}\label{eq27}
\end{eqnarray}
Hence, the Hamiltonian of the system takes the form
\begin{equation}
\mathcal{H}=\frac{1}{12}p_{u}^2-\frac{3}{16}p_{v}^2-4kc_p^2v^2\label{eq28}
\end{equation}
Thus the WD equation takes the following form:
\begin{equation}
\bigg[-\frac{1}{12} \frac{\partial^2}{\partial u^2}+\frac{3}{16}\frac{\partial^2}{\partial v^2}-4kc_p^2v^2\bigg]\chi(u, v)=0\label{eq29}
\end{equation}
The operator version of the conserved momentum in Eq. (\ref{eq27}) can be written as follows:
\begin{equation}
i \frac{\partial \chi(u, v)}{\partial u}=\Sigma_0 ~\chi(u, v)\label{eq30}
\end{equation}
Now writing $\chi(u, v)=A(u) B(v)$, one has
\begin{eqnarray}
i\frac{dA}{du}&=&\Sigma_0~A\nonumber\\
\mbox{i.e.,}~A(u)&=&A_0 \exp(-i\Sigma_0 ~u) \label{eq31}
\end{eqnarray}
with $A_0$ being the constant of integration. Using Eq. (\ref{eq31}) the WD equation (\ref{eq29}) becomes a differential equation in $B$ as follows:
\begin{eqnarray}
\frac{3}{16} \frac{d^2B}{dv^2}-4kc_p^2v^2B+\frac{\Sigma_0^2}{12}B&=&0\nonumber\\
\mbox{i.e.,}~\frac{d^2B}{dv^2}-(\lambda v^2-\mu)B&=&0\label{eq32}
\end{eqnarray}
with $\lambda=\frac{64}{3} kc_p^2,~\mu=\frac{4}{9}\Sigma_0^2$.\\
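For illustration, Eq. (\ref{eq32}) can also be integrated directly. The following is a minimal Python sketch (not part of the original analysis) using SciPy's initial-value solver; the values of $\lambda$, $\mu$ and the initial data are illustrative placeholders.
\begin{verbatim}
# Minimal sketch: integrate B'' = (lam*v**2 - mu)*B from eq. (32).
# lam, mu and the initial conditions are illustrative placeholders.
import numpy as np
from scipy.integrate import solve_ivp

lam, mu = 2.0, 1.0
rhs = lambda v, Y: [Y[1], (lam * v**2 - mu) * Y[0]]   # Y = [B, B']
sol = solve_ivp(rhs, (0.0, 3.0), [1.0, 0.0], dense_output=True, rtol=1e-8)

v = np.linspace(0.0, 3.0, 200)
B = sol.sol(v)[0]                                     # B(v) on a grid
\end{verbatim}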
{\bf{\underline{Case-II}}}: $c_p \neq c_q$\\
The Lagrangian of the system (given by Eq. (\ref{l2})) shows that the variable `$u$' is cyclic, and the conserved momentum has the following expression:
\begin{equation}
p_u=\frac{\partial L}{\partial \dot{u}}=24 c_p c_q \dot{u}=\Lambda_0,~\mbox{a constant}
\label{eq35}
\end{equation}
while the momentum conjugate to the variable `$v$' is given by
\begin{equation}
p_v=\frac{\partial L}{\partial \dot{v}}=-\frac{2}{3c_p c_q}\dot{v}\label{eq36}
\end{equation}
Hence, the Hamiltonian of the system is expressed as follows:
\begin{equation}
\mathcal{H}=\frac{1}{48c_p c_q}p_u^2-\frac{3c_pc_q}{4}p_v^2-kv^2\label{eq37}
\end{equation}
and consequently the WD equation takes the following form:
\begin{equation}
\bigg[-\frac{1}{48c_p c_q}\frac{\partial^2}{\partial u^2}+\frac{3c_pc_q}{4}\frac{\partial^2}{\partial v^2}-kv^2\bigg]\xi(u, v)=0\label{eq38}
\end{equation}
As before, the operator version of the conserved momentum gives
\begin{eqnarray}
\xi(u, v)&=&C(u)D(v)\nonumber\\
\mbox{with}~C(u)&=&C_0\exp (-i\Lambda_0 u)\label{eq39}
\end{eqnarray}
Thus from the above WD equation (\ref{eq38}) and using the separation of variables, the differential equation for $D$ reduces to
\begin{equation}
\frac{d^2D}{dv^2}-(l v^2-s)D=0\label{eq40}
\end{equation}
with $l=\frac{4k}{3c_p c_q} ,~s=\frac{\Lambda_0^2}{36 c_p^2 c_q^2}$.\\
The solution of this second-order differential equation takes the following form:
\begin{eqnarray}
{D(v)}&=&{C_1 \sqrt{v}\,J_{\frac{1}{4}}\bigg(\frac{1}{2}\sqrt{-l}\,v^2\bigg)+C_2\sqrt{v}\,Y_{\frac{1}{4}}\bigg(\frac{1}{2}\sqrt{-l}\,v^2\bigg)\qquad\mbox{when } s=0,}\nonumber\\
&=&{\frac{C_1M_{\frac{s}{4\sqrt{l}},\frac{1}{4}}\big(\sqrt{l}\,v^2\big)+C_2W_{\frac{s}{4\sqrt{l}},\frac{1}{4}}\big(\sqrt{l}\,v^2\big)}{\sqrt{v}}\qquad\mbox{when } s\neq 0,} \label{eq42}
\end{eqnarray}
{where $J$ and $Y$ are the usual Bessel functions and $M$ and $W$ are the Whittaker functions. We have represented the wave function graphically for zero and nonzero $s$ in Figs. \ref{fig3} and \ref{fig4}, respectively. From the figures, we see that at $u=0$, $v=0$ (i.e., near the initial singularity) the wave function has a finite nonzero value. Therefore, in the present model it is possible to avoid the Big Bang singularity using quantum cosmology near the initial singularity. }
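For completeness, the closed-form solution (\ref{eq42}) can also be evaluated numerically with mpmath's built-in Whittaker functions. Below is a hedged Python sketch for the $s\neq 0$ branch; the values of $l$, $s$, $C_1$ and $C_2$ are illustrative placeholders (with $l>0$ assumed).
\begin{verbatim}
# Sketch: evaluate D(v) from eq. (42) for s != 0 via mpmath's Whittaker
# functions; l, s, C1, C2 are illustrative placeholders, l > 0 assumed.
import mpmath as mp

l, s, C1, C2 = 1.0, 0.5, 1.0, 0.0
k = s / (4 * mp.sqrt(l))          # first Whittaker index, s/(4*sqrt(l))

def D(v):
    z = mp.sqrt(l) * v**2
    return (C1 * mp.whitm(k, mp.mpf(1) / 4, z)
            + C2 * mp.whitw(k, mp.mpf(1) / 4, z)) / mp.sqrt(v)

print(D(0.5))                     # sample value of the wave function
\end{verbatim}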
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{DV.eps}\\
\caption {The wave function when $s=0$}
\label{fig3}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Ds.eps}\\
\caption {Graphical representation of the wave function when $s\neq 0$}
\label{fig4}
\end{figure}
\section{Conclusion}
This work is an example where symmetry analysis, particularly Noether symmetry, has been extensively used in both classical and quantum cosmology. Here, a chameleon field DE model has been considered in the background of a homogeneous and isotropic flat FLRW space--time. Although the full quantum theory is described on the infinite-dimensional superspace, here we confine ourselves to the minisuperspace, which is a 2D Lorentzian manifold.\\
Although the Einstein field equations are nonlinear coupled differential equations, using a transformation in the augmented space and introducing a geometric inner product it is possible to identify the cyclic variable(s), so that the field equations simplify to a great extent and consequently classical cosmological solutions can be evaluated. There are two sets of solutions for two different choices of the arbitrary constants involved. Both solutions describe an expanding model of the universe with accelerating and decelerating phases (depending on the choices of the arbitrary constants involved). In particular, the present model describes the decelerating phase only for the choice $c_{p}=c_{q}$ (Fig. \ref{fig1}), while for the choice $c_{p} \neq c_{q}$ the model makes a transition from a decelerating phase to an accelerating phase and then returns to a decelerating phase (Fig. \ref{fig2}).\\
On the other hand, the application of Noether symmetry to the minisuperspace shows the path for solving the WD equation. The conserved momentum due to the Noether symmetry, after converting to its quantum version, gives an oscillatory solution to the WD equation and consequently yields the semi-classical limit of quantum cosmology. Furthermore, the nonoscillatory part of the WD equation is an ordinary differential equation whose solution is given in terms of Bessel or Whittaker functions. The graphical presentation of this part of the solution is shown in Figs. \ref{fig3} and \ref{fig4}, which clearly shows that the present quantum cosmological model can overcome the big bang singularity, i.e., the present model may describe the early era of evolution without any singularity. Finally, one may conclude that Noether symmetry analysis is very useful in describing quantum cosmology in the minisuperspace model and also leads to a possible solution of the WD equation.
\frenchspacing
|
{
"arxiv_id": "2302.14322",
"language": "en",
"timestamp": "2023-03-01T02:09:35",
"url": "https://arxiv.org/abs/2302.14322",
"yymm": "2302"
} | \section{Introduction and Preliminaries}
In recent years, mathematical analysis has paid a lot of attention to the theory of special matrix functions and polynomials. The matrix analogues of hypergeometric functions play a significant role in applied and pure analysis. These functions appear in the study of statistics \cite{Constantine}, probability theory \cite{Seaborn} and Lie theory \cite{James}. The matrix analogue of the Gauss hypergeometric function was introduced by J\'odar and Cort\'es \cite{Jodar1}, who studied its integral representation and matrix differential equation. Later, Abdalla \cite{Abdalla,Abdalla1} and Dwivedi and Sahai \cite{Dwivedi,Dwivedi1} studied various properties, such as integral and differential relations, finite sum formulas, and generating functions of special matrix functions. In particular, these functions play a vital role in solving numerous problems of mathematical physics, engineering and the mathematical sciences.\\
Throughout this work, we consider the space $\mathbf{C}^{R\times R}$ of complex square matrices of order $R$. For any matrix $P\in\mathbf{C}^{R\times R}$, $\sigma(P)$ is the spectrum of $P$ and
\begin{align}
a(P) = \max\left\lbrace \Re(z) : z\in\sigma(P)\right\rbrace, \quad b(P) = \min\left\lbrace \Re(z) : z\in\sigma(P)\right\rbrace,
\end{align}
where $a(P)$ is the spectral abscissa of $P$ and $b(P) = - a(-P)$. A matrix $P$ in $\mathbf{C}^{R\times R}$ is positive stable if $\Re(\lambda)>0$ for all $\lambda\in\sigma(P)$, where $\sigma(P)$ is the set of all eigenvalues (the spectrum) of $P$. The two-norm of $P$ is given by
\begin{align*}
||P|| = \sup_{x\neq 0} \frac{||Px||_2}{||x||_2} = \max\left\lbrace \sqrt{\lambda}: \lambda \in \sigma(P^*P) \right\rbrace,
\end{align*}
where for any vector $x\in\mathbf{C}^R$, $||x||_2 = (x^*x)^{1/2}$ is the Euclidean norm of $x$ and $P^*$ denotes the conjugate transpose of $P$. $I$ and $\mathbf{0}$ stand for the identity matrix and the null matrix in $\mathbf{C}^{R\times R}$, respectively. Taking into account the Schur decomposition of a matrix $P$ \cite{Golud}, we have
\begin{align*}
\left\| e^{Pt} \right\| \leq e^{t a(P)} \sum_{u=0}^{R-1} \frac{\left(\left\| P \right\| R^{1/2}t\right)^u}{u!} \quad (t\geq 0),
\end{align*}
which yields
\begin{align*}
\left\| t^P \right\| \leq \left\| e^{P \ln t} \right\| \leq t^{a(P)} \sum_{u=0}^{R-1} \frac{\left(\left\| P \right\| R^{1/2} \ln t\right)^u}{u!} \quad (t\geq 1).
\end{align*}
If $f(z)$ and $g(z)$ are holomorphic functions of the complex variable $z$, which are defined in an open set $\Omega$ of the complex plane, and $P$ is a matrix in $\mathbf{C}^{R\times R}$ with $\sigma(P)\subset \Omega$, then from the properties of the matrix functional calculus \cite{Dunford}, it follows that
\begin{align*}
f(P)g(P) = g(P)f(P).
\end{align*}
Furthermore, if $Q\in \mathbf{C}^{R\times R}$ with $\sigma(Q)\subset \Omega$, and if $PQ=QP$, then
\begin{align}
f(P)g(Q) = g(Q)f(P).
\end{align}
The logarithmic norm of a matrix $P\in \mathbf{C}^{R\times R}$ is defined as (see \cite{Cortes,HuGD}),
\begin{align}
\mu(P) = \lim_{k\to 0} \frac{\left\| I+kP\right\|-1}{k} = \max \left\lbrace z \ | \ z \in \sigma\left(\frac{P+P^*}{2}\right)\right\rbrace
\end{align}
Let $\tilde{\mu}(P)$ be the number such that
\begin{align}
\tilde{\mu}(P) = -\mu(-P) = \min \left\lbrace z \ | \ z \in \sigma\left(\frac{P+P^*}{2}\right)\right\rbrace.
\end{align}
The reciprocal gamma function $\Gamma^{-1}(z) = 1/\Gamma(z)$ is an entire function of the complex variable $z$. The image of $\Gamma^{-1}(z)$ acting on $P$, denoted by $\Gamma^{-1}(P)$, is a well-defined matrix. If $P+nI$ is invertible for all integers $n\geq 0$, then the reciprocal gamma matrix function satisfies (see \cite{Jodar3})
\begin{align}
\Gamma^{-1}(P) = P(P+I)\dots (P+(n-1)I) \ \Gamma^{-1}(P+nI), \ \ n\geq 1.
\end{align}
By applications of the matrix functional calculus, the Pochhammer symbol \cite{Jodar3} for $P\in \mathbf{C}^{R\times R}$ is given by
\begin{align}\label{PCH1}
(P)_m = \begin{cases}
I, & \text{if} \; m=0,\\
P(P+I)\dots(P+(m-1)I), & \text{if} \; m\geq 1,
\end{cases}
\end{align}
which gives
\begin{align}\label{PCH2}
(P)_m = \Gamma^{-1}(P) \ \Gamma(P+mI), \quad m\geq 1.
\end{align}
If $P\in \mathbf{C}^{R\times R}$ is a positive stable matrix and $m\geq 1$ is an integer, then the gamma matrix function can be represented in the following limit form \cite{Jodar1}:
\begin{align}
\Gamma(P) = \lim_{m\to \infty} (m-1)! \ (P)_m^{-1} \ m^P.
\end{align}
Let $P$ and $Q$ be two positive stable matrices in $\mathbf{C}^{R\times R}$. The gamma matrix function $\Gamma(P)$ and the beta matrix function $B(P,Q)$ have been defined in \cite{Jodar1,Jodar3}, as follows
\begin{align}
\Gamma(P) = \int_0^\infty e^{-t} \ t^{P-1} dt; \quad t^{P-1} = \exp((P-I)\ln t),
\end{align}
and
\begin{align}
B(P,Q) = \int_0^1 t^{P-1} \ (1-t)^{Q-1} dt.
\end{align}
Let $P$ and $Q$ be commuting matrices in $\mathbf{C}^{R\times R}$ such that the matrices $P+nI$, $Q+nI$ and $P+Q+nI$ are invertible for every integer $n\geq 0$; then, according to \cite{Jodar3}, we have
\begin{align}
B(P,Q) = \Gamma(P) \Gamma(Q) \left[ \Gamma(P+Q)\right]^{-1}.
\end{align}
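This relation is straightforward to check numerically. The sketch below is an illustration (not part of the original text): it builds a pair of commuting positive stable matrices from a shared eigenbasis, evaluates the matrix gamma function through the matrix functional calculus (\texttt{scipy.linalg.funm}), and compares a quadrature of the beta integral with the closed form.
\begin{verbatim}
# Numerical check of B(P,Q) = Gamma(P) Gamma(Q) Gamma(P+Q)^{-1}, assuming
# commuting positive stable matrices built from a shared eigenbasis.
import numpy as np
from scipy.linalg import expm, funm, inv
from scipy.special import gamma

rng = np.random.default_rng(0)
S = rng.standard_normal((2, 2))                 # shared eigenbasis
P = S @ np.diag([1.5, 2.0]) @ inv(S)
Q = S @ np.diag([2.5, 1.2]) @ inv(S)
Gamma = lambda M: funm(M, gamma)                # matrix functional calculus

# Beta matrix integral via Gauss-Legendre quadrature on (0, 1)
nodes, weights = np.polynomial.legendre.leggauss(120)
t, w = 0.5 * (nodes + 1.0), 0.5 * weights
I2 = np.eye(2)
B = sum(wi * expm((P - I2) * np.log(ti)) @ expm((Q - I2) * np.log(1.0 - ti))
        for ti, wi in zip(t, w))

rhs = Gamma(P) @ Gamma(Q) @ inv(Gamma(P + Q))
print(np.max(np.abs(B - rhs)))                  # small residual expected
\end{verbatim}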
\section{Main results}
Dwivedi and Sahai \cite{Dwivedi} introduced a natural generalization of the hypergeometric matrix function, called the generalized hypergeometric matrix function, defined by
\begin{align}\label{sahai1}
{}_pF_q\left(P_1,\dots,P_p;Q_1,\dots,Q_q;z\right) = \sum_{m=0}^\infty\frac{(P_1)_m\dots(P_p)_m(Q_1)_m^{-1}\dots (Q_q)_m^{-1}}{m!} z^m,
\end{align}
where $P_i,Q_j \in \mathbf{C}^{R\times R}$, $1\leq i \leq p$, $1 \leq j\leq q$, are such that the matrices $Q_j+mI$, $1 \leq j\leq q$, are invertible for all integers $m\geq 0$. \\
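For concreteness, the series (\ref{sahai1}) can be prototyped directly from the Pochhammer definition (\ref{PCH1}). The Python sketch below is illustrative only, with a simple truncation in place of a convergence test; it assumes the $Q_j+mI$ are invertible.
\begin{verbatim}
# Sketch of the generalized hypergeometric matrix function via the
# truncated series (sahai1); Ps, Qs are lists of square numpy arrays.
import math
import numpy as np
from numpy.linalg import inv

def poch(P, m):
    """Matrix Pochhammer symbol (P)_m = P(P+I)...(P+(m-1)I)."""
    out = np.eye(P.shape[0])
    for k in range(m):
        out = out @ (P + k * np.eye(P.shape[0]))
    return out

def hyp_matrix(Ps, Qs, z, terms=60):
    R = Ps[0].shape[0]
    total = np.zeros((R, R))
    for m in range(terms):
        term = np.eye(R)
        for P in Ps:
            term = term @ poch(P, m)
        for Q in Qs:
            term = term @ inv(poch(Q, m))
        total = total + term * z**m / math.factorial(m)
    return total
\end{verbatim}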
In the present work, we consider an Euler-type integral and present some integral representations of the ${}_3F_2(\cdot)$ matrix function using suitable adjustments of the matrix parameters.
\begin{theorem}\label{thm1}
Let $P, Q$ and $R$ be commuting matrices in $\mathbf{C}^{R\times R}$ such that $Q,R$ and $R-Q$ are positive stable. Then, for $|z|<1$, we have the following integral representation:
\begin{align}\label{thm1a}
{}_3F_2\left(P,\frac{Q}{2},\frac{Q+I}{2};\frac{R}{2},\frac{R+I}{2};z\right) = \frac{\Gamma(R)}{\Gamma(Q)\Gamma(R-Q)} \int_0^1 u^{Q-I} (1-u)^{R-Q-I} (1-zu^2)^{-P} du.
\end{align}
\end{theorem}
\begin{proof}
From the left-hand side of (\ref{thm1a}), we find that
\begin{align}\label{thm1b}
{}_3F_2\left(P,\frac{Q}{2},\frac{Q+I}{2};\frac{R}{2},\frac{R+I}{2};z\right) = \sum_{m=0}^\infty \frac{\left(P\right)_m \left(\frac{Q}{2}\right)_m\left(\frac{Q+I}{2}\right)_m}{\left(\frac{R}{2}\right)_m\left(\frac{R+I}{2}\right)_m} \frac{z^m}{m!}.
\end{align}
Using the relation
\begin{align*}
\left(P\right)_{2m} = 2^{2m} \left(\frac{P}{2}\right)_m\left(\frac{P+I}{2}\right)_m,
\end{align*}
equation (\ref{thm1b}) becomes (after writing $(Q)_{2m}\left((R)_{2m}\right)^{-1}$ as a beta matrix integral and interchanging summation and integration)
\begin{align*}
{}_3F_2\left(P,\frac{Q}{2},\frac{Q+I}{2};\frac{R}{2},\frac{R+I}{2};z\right) &= \sum_{m=0}^\infty \frac{\left(P\right)_m \left(Q\right)_{2m}}{\left(R\right)_{2m}} \frac{z^m}{m!} \\
&= \frac{\Gamma(R)}{\Gamma(Q)\Gamma(R-Q)} \int_0^1 u^{Q-I} (1-u)^{R-Q-I} \sum_{m=0}^\infty \frac{\left(P\right)_m (zu^2)^m}{m!} \ du \\
&= \frac{\Gamma(R)}{\Gamma(Q)\Gamma(R-Q)} \int_0^1 u^{Q-I} (1-u)^{R-Q-I} (1-zu^2)^{-P} du.
\end{align*}
This completes the proof of Theorem \ref{thm1}.
\end{proof}
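Since all matrices in Theorem \ref{thm1} commute, a quick sanity check on the eigenvalues reduces (\ref{thm1a}) to its scalar form. The mpmath sketch below (with arbitrary illustrative eigenvalues satisfying the positivity conditions) compares both sides.
\begin{verbatim}
# Scalar spot-check of (thm1a) on eigenvalues of commuting matrices;
# p, q, r, z are illustrative values with q, r, r-q > 0 and |z| < 1.
import mpmath as mp

p, q, r, z = 0.7, 1.3, 2.4, 0.35
lhs = mp.hyp3f2(p, q / 2, (q + 1) / 2, r / 2, (r + 1) / 2, z)
rhs = (mp.gamma(r) / (mp.gamma(q) * mp.gamma(r - q))
       * mp.quad(lambda u: u**(q - 1) * (1 - u)**(r - q - 1)
                 * (1 - z * u**2)**(-p), [0, 1]))
print(lhs, rhs)   # both sides agree to working precision
\end{verbatim}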
\begin{corollary}
Under the conditions stated in Theorem \ref{thm1}, the following integral relation holds true:
\begin{align}
{}_3F_2\left(-kI,\frac{Q}{2},\frac{Q+I}{2};\frac{R}{2},\frac{R+I}{2};z\right) = \frac{\Gamma(R)}{\Gamma(Q)\Gamma(R-Q)} \int_0^1 u^{Q-I} (1-u)^{R-Q-I} (1-zu^2)^{kI} du.
\end{align}
\end{corollary}
\begin{theorem}\label{thm2}
Let $P, Q$ and $R$ be commuting matrices in $\mathbf{C}^{R\times R}$ such that $R,R-P,R-Q$ and $R-Q-P$ are positive stable. Then the following integral representation holds true:
\begin{align}\label{thm2a}
{}_3F_2\left(P,\frac{Q}{2},\frac{Q+I}{2};\frac{R}{2},\frac{R+I}{2};1\right) = \frac{\Gamma(R)\Gamma(R-Q-P)}{\Gamma(R-P)\Gamma(R-Q)} \ {}_2F_1\left(P, Q; R-P;-1\right).
\end{align}
\end{theorem}
\begin{proof}
Substituting $z=1$ in (\ref{thm1a}), it becomes
\begin{align*}
{}_3F_2\left(P,\frac{Q}{2},\frac{Q+I}{2};\frac{R}{2},\frac{R+I}{2};1\right) &= \frac{\Gamma(R)}{\Gamma(Q)\Gamma(R-Q)} \int_0^1 u^{Q-I} (1-u)^{R-Q-P-I} (1+u)^{-P} du \\
&= \frac{\Gamma(R)}{\Gamma(Q)\Gamma(R-Q)} \sum_{m=0}^\infty {-P \choose m} \int_0^1 u^{Q+(m-1)I} (1-u)^{R-Q-P-I} \ du \\
&= \frac{\Gamma(R)}{\Gamma(Q)\Gamma(R-Q)} \sum_{m=0}^\infty {-P \choose m} \frac{\Gamma(Q+mI)\Gamma(R-Q-P)}{\Gamma(R-P+mI)} \\
&= \frac{\Gamma(R)\Gamma(R-Q-P)}{\Gamma(R-P)\Gamma(R-Q)} \sum_{m=0}^\infty \frac{(P)_m (Q)_m}{(R-P)_m} \frac{(-1)^m}{m!} \\
&= \frac{\Gamma(R)\Gamma(R-Q-P)}{\Gamma(R-P)\Gamma(R-Q)} \ {}_2F_1\left(P, Q; R-P;-1\right),
\end{align*}
which completes the proof of Theorem \ref{thm2}.
\end{proof}
\begin{corollary}\label{cor1}
Under the conditions stated in Theorem \ref{thm2}, the following integral relation holds true:
\begin{align*}
{}_3F_2\left(-nI,\frac{Q}{2},\frac{Q+I}{2};\frac{R}{2},\frac{R+I}{2};1\right) = \frac{\left(R-Q\right)_n}{\left(R\right)_n} \ {}_2F_1\left(-nI, Q; R+nI;-1\right).
\end{align*}
\end{corollary}
\begin{theorem}\label{thm3}
Let $P, Q$ and $R$ be commuting matrices in $\mathbf{C}^{R\times R}$ such that $Q,R$ and $R-Q$ are positive stable. Then the following integral representation holds true:
\begin{align}\label{thm3a}
{}_3F_2\left(P,\frac{Q}{2},\frac{Q+I}{2};\frac{R}{2},\frac{R+I}{2};\frac{1}{2}\right) = 2^P \sum_{m=0}^\infty {-P \choose m} \frac{(R-Q)_m}{(R)_m} \times {}_2F_1\left(-mI,Q;R+mI;-1\right).
\end{align}
\end{theorem}
\begin{proof}
Substituting $z=\frac{1}{2}$ in Theorem \ref{thm1}, we obtain
\begin{align*}
&{}_3F_2\left(P,\frac{Q}{2},\frac{Q+I}{2};\frac{R}{2},\frac{R+I}{2};\frac{1}{2}\right) \\ & \quad = \frac{\Gamma(R)}{\Gamma(Q)\Gamma(R-Q)} \int_0^1 u^{Q-I} (1-u)^{R-Q-I} \left(1-\frac{1}{2} u^2\right)^{-P} du\\
& \quad = \frac{2^P \ \Gamma(R)}{\Gamma(Q)\Gamma(R-Q)} \int_0^1 u^{Q-I} (1-u)^{R-Q-I} \left(2-u^2\right)^{-P} du \\
& \quad = \frac{2^P \ \Gamma(R)}{\Gamma(Q)\Gamma(R-Q)} \sum_{m=0}^\infty {-P \choose m} \int_0^1 u^{Q-I} (1-u)^{R-Q+(m-1)I} \left(1+u\right)^{m} du \\
& \quad = \frac{2^P \ \Gamma(R)}{\Gamma(Q)\Gamma(R-Q)} \sum_{m=0}^\infty \sum_{k=0}^m {-P \choose m} {m \choose k} \int_0^1 u^{Q+(k-1)I} (1-u)^{R-Q+(m-1)I} du \\
& \quad = 2^P \sum_{m=0}^\infty \sum_{k=0}^m {-P \choose m} {m \choose k} \frac{(Q)_k(R-Q)_m}{(R)_{k+m}}.
\end{align*}
Using the transformation $(R)_{k+m} = (R)_m (R+m)_k$, we arrive at
\begin{align*}
&{}_3F_2\left(P,\frac{Q}{2},\frac{Q+I}{2};\frac{R}{2},\frac{R+I}{2};\frac{1}{2}\right) \\ & \quad = 2^P \sum_{m=0}^\infty {-P \choose m} \frac{(R-Q)_m}{(R)_m} \sum_{k=0}^m {m \choose k} \frac{(Q)_k}{(R+m)_k}\\
& \quad = 2^P \sum_{m=0}^\infty {-P \choose m} \frac{(R-Q)_m}{(R)_m} \times {}_2F_1\left(-mI,Q;R+mI;-1\right).
\end{align*}
This completes the proof of Theorem \ref{thm3}.
\end{proof}
\begin{corollary}\label{cor2}
Under the conditions stated in Theorem \ref{thm3}, the following integral relation holds true:
\begin{align}
{}_3F_2\left(P,\frac{Q}{2},\frac{Q+I}{2};\frac{R}{2},\frac{R+I}{2};\frac{1}{2}\right) = 2^P \sum_{m=0}^\infty {-P \choose m} \ {}_3F_2\left(-mI,\frac{Q}{2},\frac{Q+I}{2};\frac{R}{2},\frac{R+I}{2};1\right)
\end{align}
\end{corollary}
Now we generalize Theorem \ref{thm1} in the following form, using a suitable adjustment of the argument in the ${}_3F_2(\cdot)$ matrix function.
\begin{theorem}\label{thm6}
Let $P, Q$ and $R$ be commuting matrices in $\mathbf{C}^{R\times R}$ such that $Q,R$ and $R-Q$ are positive stable and $w\in\mathbf{R} \backslash \left\lbrace 0,-1\right\rbrace$. Then the following integral representation holds true:
\begin{align}\nonumber
& {}_3F_2\left(P,\frac{Q}{2},\frac{Q+I}{2};\frac{R}{2},\frac{R+I}{2};\frac{1}{w+1}\right) \\ & \qquad\qquad\qquad = \left(\frac{w+1}{w}\right)^P \sum_{m=0}^\infty {-P \choose m} w^{-m} {}_3F_2\left(-mI,\frac{Q}{2},\frac{Q+I}{2};\frac{R}{2},\frac{R+I}{2};1\right). \label{thm6a}
\end{align}
\end{theorem}
\begin{proof}
Substituting $z = \frac{1}{w+1}$ in Theorem \ref{thm1}, we have
\begin{align*}
& {}_3F_2\left(P,\frac{Q}{2},\frac{Q+I}{2};\frac{R}{2},\frac{R+I}{2};\frac{1}{w+1}\right) \\ &\quad = \frac{\Gamma(R)}{\Gamma(Q)\Gamma(R-Q)} \int_0^1 u^{Q-I} (1-u)^{R-Q-I} \left(1-\frac{u^2}{w+1}\right)^{-P} du \\
&\quad = \frac{(w+1)^P \ \Gamma(R)}{\Gamma(Q)\Gamma(R-Q)} \int_0^1 u^{Q-I} (1-u)^{R-Q-I} \left(w+1-u^2\right)^{-P} du \\
&\quad = \left(\frac{w+1}{w}\right)^P \frac{\Gamma(R)}{\Gamma(Q)\Gamma(R-Q)} \int_0^1 u^{Q-I} (1-u)^{R-Q-I} \left(1+\frac{1-u^2}{w}\right)^{-P} du \\
&\quad = \left(\frac{w+1}{w}\right)^P \frac{\Gamma(R)}{\Gamma(Q)\Gamma(R-Q)} \int_0^1 u^{Q-I} (1-u)^{R-Q-I} \sum_{m=0}^\infty {-P \choose m} \left(\frac{1-u^2}{w}\right)^m du \\
&\quad = \left(\frac{w+1}{w}\right)^P \frac{\Gamma(R)}{\Gamma(Q)\Gamma(R-Q)} \sum_{m=0}^\infty {-P \choose m} w^{-m}\int_0^1 u^{Q-I} (1-u)^{R-Q+(m-1)I} \left(1+u\right)^m du \\
&\quad = \left(\frac{w+1}{w}\right)^P \frac{\Gamma(R)}{\Gamma(Q)\Gamma(R-Q)} \sum_{m=0}^\infty \sum_{k=0}^m {-P \choose m} {m \choose k} w^{-m}\int_0^1 u^{Q+(k-1)I} (1-u)^{R-Q+(m-1)I} du \\
&\quad = \left(\frac{w+1}{w}\right)^P \sum_{m=0}^\infty \sum_{k=0}^m {-P \choose m} {m \choose k} w^{-m} \frac{(Q)_k (R-Q)_m}{(R)_m (R+m)_k}\\
&\quad = \left(\frac{w+1}{w}\right)^P \sum_{m=0}^\infty {-P \choose m} w^{-m} \frac{(R-Q)_m}{(R)_m} \sum_{k=0}^m \frac{(-1)^k (-m)_k}{k!} \frac{(Q)_k}{(R+m)_k} \\
&\quad = \left(\frac{w+1}{w}\right)^P \sum_{m=0}^\infty {-P \choose m} w^{-m} \frac{(R-Q)_m}{(R)_m} {}_2F_1\left(-mI,Q;R+mI;-1 \right).
\end{align*}
Using Corollary \ref{cor1}, this yields the right hand side of Theorem \ref{thm6}.
\end{proof}
\begin{remark}
It is interesting to observe that for $w=1$, Theorem \ref{thm6} reduces to Theorem \ref{thm3} and Corollary \ref{cor2}, while for $w=-2$ it reduces to the following result, asserted by Corollary \ref{cor3}.
\end{remark}
\begin{corollary}\label{cor3}
Under the conditions stated in Theorem \ref{thm6}, the following integral relation holds true:
\begin{align*}
& {}_3F_2\left(P,\frac{Q}{2},\frac{Q+I}{2};\frac{R}{2},\frac{R+I}{2};-1\right) = \left(\frac{1}{2}\right)^P \sum_{m=0}^\infty {-P \choose m} (-2)^{-m} {}_3F_2\left(-mI,\frac{Q}{2},\frac{Q+I}{2};\frac{R}{2},\frac{R+I}{2};1\right).
\end{align*}
\end{corollary}
Next we generalize Theorem \ref{thm1} in the obvious way, using a suitable adjustment of the matrix parameters by introducing a sequence of $q$ parameters in the ${}_3F_2(\cdot)$ matrix function.
\begin{theorem}\label{thm4}
Let $P, Q$ and $R$ be commuting matrices in $\mathbf{C}^{R\times R}$ such that $Q,R$ and $R-Q$ are positive stable. Then, for $|z|<1$, the following integral representation holds true:
\begin{align}\nonumber
&{}_{q+1}F_q\left(P,\frac{Q}{q},\frac{Q+I}{q},\dots,\frac{Q+(q-1)I}{q};\frac{R}{q},\frac{R+I}{q},\dots,\frac{R+(q-1)I}{q};z\right) \\ & \qquad\qquad\qquad\qquad\quad = \frac{\Gamma(R)}{\Gamma(Q)\Gamma(R-Q)} \int_0^1 u^{Q-I} (1-u)^{R-Q-I} (1-zu^q)^{-P} du. \label{thm4a}
\end{align}
\end{theorem}
\begin{proof}
Using the following relation:
\begin{align*}
\left(P\right)_{mn} = m^{mn} \left(\frac{P}{m}\right)_n \left(\frac{P+I}{m}\right)_n \dots \left(\frac{P+(m-1)I}{m}\right)_n,
\end{align*}
we can proceed with a proof similar to that of Theorem \ref{thm1}.
\end{proof}
Now for $q=3$ and $z=1$, Theorem \ref{thm4} leads to the following result asserted by Theorem \ref{thm5}.
\begin{theorem}\label{thm5}
Let $P, Q$ and $R$ be commuting matrices in $\mathbf{C}^{R\times R}$ such that $R,R-P,R-Q$ and $R-Q-P$ are positive stable. Then the following integral representation holds true:
\begin{align}\nonumber
& {}_4F_3\left(P,\frac{Q}{3},\frac{Q+I}{3},\frac{Q+2I}{3};\frac{R}{3},\frac{R+I}{3},\frac{R+2I}{3};1\right) \\ & \qquad = \frac{\Gamma(R)\Gamma(R-Q-P)}{\Gamma(R-P)\Gamma(R-Q)} \sum_{m=0}^\infty \frac{(-1)^m (P)_m (Q)_m}{m! (R-P)_m} \ {}_2F_1\left(-mI,Q+mI;R-P+mI;-1\right). \label{thm5a}
\end{align}
\end{theorem}
\begin{proof}
Substituting $q=3$ and $z=1$ in Theorem \ref{thm4}, we have
\begin{align*}
& {}_4F_3\left(P,\frac{Q}{3},\frac{Q+I}{3},\frac{Q+2I}{3};\frac{R}{3},\frac{R+I}{3},\frac{R+2I}{3};1\right) \\ & \quad= \frac{\Gamma(R)}{\Gamma(Q)\Gamma(R-Q)} \int_0^1 u^{Q-I} (1-u)^{R-Q-I} (1-u^3)^{-P}du \\
& \quad= \frac{\Gamma(R)}{\Gamma(Q)\Gamma(R-Q)} \int_0^1 u^{Q-I} (1-u)^{R-Q-P-I} (1+u+u^2)^{-P} du \\
& \quad= \frac{\Gamma(R)}{\Gamma(Q)\Gamma(R-Q)} \sum_{m=0}^\infty \frac{(P)_m (-1)^m}{m!} \int_0^1 u^{Q+(m-1)I} (1-u)^{R-Q-P-I} (1+u)^{m} du \\
& \quad= \frac{\Gamma(R)}{\Gamma(Q)\Gamma(R-Q)} \sum_{m=0}^\infty \sum_{k=0}^m \frac{(P)_m (-1)^m}{m!} {m \choose k} \int_0^1 u^{Q+(m+k-1)I} (1-u)^{R-Q-P-I} du \\
& \quad= \frac{\Gamma(R)\Gamma(R-Q-P)}{\Gamma(R-P)\Gamma(R-Q)} \sum_{m=0}^\infty \sum_{k=0}^m \frac{(P)_m (-1)^m}{m!} {m \choose k} \frac{(Q)_m (Q+m)_k}{(R-P)_m(R-P+m)_k} \\
& \quad= \frac{\Gamma(R)\Gamma(R-Q-P)}{\Gamma(R-P)\Gamma(R-Q)} \sum_{m=0}^\infty \frac{(-1)^m (P)_m (Q)_m}{m! (R-P)_m} \ {}_2F_1\left(-mI,Q+mI;R-P+mI;-1\right).
\end{align*}
This completes the proof.
\end{proof}
\begin{theorem}\label{thm7}
Let $P, Q$ and $R$ be commuting matrices in $\mathbf{C}^{R\times R}$ such that $Q,R$ and $R-Q$ are positive stable. Then, for $|z|<1$, the following integral representation holds true:
\begin{align}\nonumber
&{}_{q+1}F_q\left(P,\frac{Q}{q},\frac{Q+I}{q},\dots,\frac{Q+(q-1)I}{q};\frac{R}{q},\frac{R+I}{q},\dots,\frac{R+(q-1)I}{q};z\right) \\ & \qquad = \frac{\Gamma(R)}{\Gamma(Q)\Gamma(R-Q)} \sum_{m=0}^\infty {R-Q-I \choose m} \frac{(-1)^m}{Q+mI} \ {}_2F_1\left(P,\frac{Q+mI}{2};\frac{Q+mI}{2}+I;z\right). \label{thm7a}
\end{align}
\end{theorem}
\begin{proof}
From Theorem \ref{thm4}, we have
\begin{align*}
&{}_{q+1}F_q\left(P,\frac{Q}{q},\frac{Q+I}{q},\dots,\frac{Q+(q-1)I}{q};\frac{R}{q},\frac{R+I}{q},\dots,\frac{R+(q-1)I}{q};z\right) \\ & \quad= \frac{\Gamma(R)}{\Gamma(Q)\Gamma(R-Q)} \int_0^1 u^{Q-I} (1-u)^{R-Q-I} (1-zu^q)^{-P} du \\
& \quad= \frac{\Gamma(R)}{\Gamma(Q)\Gamma(R-Q)} \int_0^1 u^{Q-I} \left[ \sum_{m=0}^\infty {R-Q-I \choose m} (-u)^m \right] (1-zu^q)^{-P} du \\
& \quad= \frac{\Gamma(R)}{\Gamma(Q)\Gamma(R-Q)} \sum_{m=0}^\infty {R-Q-I \choose m} (-1)^m \int_0^1 u^{Q+(m-1)I} (1-zu^q)^{-P} du.
\end{align*}
Substituting $s=u^q$ in the integral on the right-hand side of the above equation yields
\begin{align*}
&{}_{q+1}F_q\left(P,\frac{Q}{q},\frac{Q+I}{q},\dots,\frac{Q+(q-1)I}{q};\frac{R}{q},\frac{R+I}{q},\dots,\frac{R+(q-1)I}{q};z\right) \\ & \quad= \frac{\Gamma(R)}{\Gamma(Q)\Gamma(R-Q)} \sum_{m=0}^\infty {R-Q-I \choose m} \frac{(-1)^m}{q} \frac{\Gamma\left(\frac{Q+mI}{q}\right)\Gamma(1)}{\Gamma\left(\frac{Q+mI}{q}+I\right)} \ {}_2F_1\left(P,\frac{Q+mI}{q};\frac{Q+mI}{q}+I;z\right).
\end{align*}
Now using the Pochhammer matrix symbols (\ref{PCH2}) yields the right hand side of Theorem \ref{thm7}.
\end{proof}
\section*{Statements \& Declarations}
\subsection*{Funding} Not applicable.
\subsection*{Conflicts of interest/Competing interests} The authors declare that they have no competing interests.
\subsection*{Authors contributions}
Both authors contributed equally to reading and writing the manuscript.
\section*{References}
\begin{enumerate}
\bibitem{Abdalla} M. Abdalla, ``On the incomplete hypergeometric matrix functions", Ramanujan J. {\bf 43}(3), 663-678 (2017).
\bibitem{Abdalla1} M. Abdalla, ``Special matrix functions: characteristics, achievements and future directions", Linear and Multilinear Algebra {\bf 68}(1), 1-28 (2020).
\bibitem{Constantine} A.G. Constantine and R.J. Muirhead, ``Partial differential equations for hypergeometric functions of two argument matrices", J. Multivariate. Anal. {\bf 2}, 332-338 (1972).
\bibitem{Cortes} J.C. Cort\'es and L. J\'odar, ``Asymptotics of the modified Bessel and incomplete Gamma matrix functions", Appl. Math. Lett. {\bf 16}(6), 815-820 (2003).
\bibitem{Dunford} N. Dunford and J. Schwartz, Linear operators, part-I (New York (NY): Addison-Wesley, 1957).
\bibitem{Dwivedi} R. Dwivedi and V. Sahai, ``On the hypergeometric matrix functions of two variables", Linear Multilinear Algebra. {\bf 66}(9), 1819-1837 (2017).
\bibitem{Dwivedi1} R. Dwivedi and V. Sahai, ``On the basic hypergeometric matrix functions of two variables", Linear and Multilinear Algebra. {\bf 67}(1), 1-19 (2019).
\bibitem{Golud} G.H. Golub and C.F. Van Loan, Matrix computations (London: The Johns Hopkins Press Ltd, 1996).
\bibitem{HuGD} G.D. Hu and M. Liu, ``The weighted logarithmic matrix norm and bounds of the matrix exponential", Linear Algebra Appl. {\bf 390}, 145-154 (2004).
\bibitem{James} A.T. James, Special functions of matrix and single argument in statistics. In: Askey RA, editor. Theory and application of special functions (New York, Academic Press, 1975).
\bibitem{Jodar1} L. J\'odar and J.C. Cort\'es, ``On the hypergeometric matrix function", J. Comp. Appl. Math. {\bf 99}(1-2), 205-217 (1998).
\bibitem{Jodar3} L. J\'odar and J.C. Cort\'es, ``Some properties of gamma and beta matrix functions", Appl. Math. Lett. {\bf 11}(1), 89-93 (1998).
\bibitem{Seaborn} J.B. Seaborn, Hypergeometric functions and their applications (New York (NY), Springer, 1991).
\end{enumerate}
\end{document} |
{
"arxiv_id": "2302.14148",
"language": "en",
"timestamp": "2023-03-01T02:02:38",
"url": "https://arxiv.org/abs/2302.14148",
"yymm": "2302"
} | \section{Introduction}
\label{sec:intro}
We are now at the beginning of a booming era for radio astronomy, with a worldwide effort to gear up for the Square Kilometre Array (SKA) -- a revolutionary radio telescope with capabilities for sub-arcsecond resolution and ultra-deep sensitivity. Pathfinding radio interferometers -- such as the Murchison Widefield Array (MWA; \citealp{2013PASA...30....7T}), the LOw Frequency ARray (LOFAR; \citealp{2013A&A...556A...2V}), ASKAP \citep{2007PASA...24..174J, 2008ExA....22..151J, 2021PASA...38....9H}, and MeerKAT \citep{2016mks..confE...1J} -- are paving the way by affording us the opportunity to expand our capabilities in detection, calibration, and image reconstruction of the unknown radio sky. Ongoing wide-field radio continuum surveys -- such as the LOFAR Two-metre Sky Survey (LoTSS; \citealp{2017A&A...598A.104S,2019A&A...622A...1S}), the LOFAR LBA Sky Survey (LoLSS; \citealp{2021A&A...648A.104D}), the MeerKAT MIGHTEE survey \citep{Taylor_2017}, and the ASKAP EMU survey \citep{2011PASA...28..215N} -- will be used in conjunction to gather statistics on millions of radio galaxies and thousands of galaxy clusters, leading to new and exciting results on cosmic magnetism, cosmic rays, dark matter, dark energy, and the evolution of large-scale structure in the Universe. The quest to convert the sheer quantity of radio data from these surveys into science-ready images has propelled radio astronomers toward developing innovative, state-of-the-art calibration \citep[\emph{e.g.}][]{2015MNRAS.449.2668S,2016MNRAS.460.2385W,2016ApJS..223....2V} and imaging \citep[\emph{e.g.}][]{2017MNRAS.471..301O,2018A&A...611A..87T, 2018MNRAS.473.1038P} techniques throughout the last decade. Through the implementation of these trailblazing techniques, results from LoTSS, the ASKAP-EMU Pilot Survey \citep[][]{2021PASA...38...46N}, and MIGHTEE are already revealing extraordinary extragalactic radio sources that have not been previously detected, some of which are so complex in their physical properties that they challenge current taxonomies and origination theories \citep[\emph{e.g.}][]{2020ApJ...897...93B, 2021A&A...647A...3B, 2022A&A...657A..56K}.
Real intensity structure in radio sources -- such as Galactic HI emission, supernova remnants, extended radio galaxies, and galaxy cluster radio halos and relics -- often exhibits both compact and diffuse components. A complex radio source might include bright, collimated threads of emission that appear embedded within fainter, dispersed lobes. For instance, intracluster radio sources, generated by large-scale turbulence and shocks, tend to exhibit high-resolution filamentary structures (tracing well-aligned magnetic field lines) as well as low-surface-brightness diffuse structure, which can span Mega-parsec scales (see the radio relic of Abell 2256 as one iconic example -- \citealp[\emph{e.g.}][]{2014ApJ...794...24O,2022ApJ...927...80R}). When imaging such complex radio emission, state-of-the-art CLEAN-based algorithms \citep[\emph{e.g.}][]{1974A&AS...15..417H,1980A&A....89..377C,1984AJ.....89.1076S,1988A&A...200..312W,2008ISTSP...2..793C, 2014MNRAS.444..606O} are usually implemented with data weighting schemes to adjust the sensitivity to either compact or diffuse emission. By modifying the shape of the synthesised beam and placing weights (such as a $uv$-taper) to short-baseline data, the sensitivity to fainter, more extended components of radio emission can be enhanced, albeit with a considerable loss of resolution. Consequently, such imaging methods often fail to accurately reconstruct both compact and diffuse components simultaneously in a single image.
In the last decade, progress in compressed sensing research has led to the development of optimisation-based algorithms to reconstruct true signal from partial, undersampled visibility data. This methodology was first applied and shown to be effective for radio interferometry imaging by \citet{2009MNRAS.395.1733W} and has since led to the development of several designated imaging algorithms \citep[e.g.][]{2011A&A...528A..31L, Carrillo12,Garsden2015,2015A&A...576A...7D}. Such innovative approaches -- relying on sophisticated sparsity-based image models -- have demonstrated their success at capturing both compact and diffuse components in the reconstructed radio images, albeit with an increase in computational expense. One such state-of-the-art optimisation-based class of methods is the ``Sparsity Averaging Reweighted Analysis'' (SARA) family \citep[][see the \href{https://basp-group.github.io/Puri-Psi/}{Puri-Psi} web-page for more details]{Carrillo12,Dabbech18,Abdulaziz19,thouvenin22a}. The monochromatic SARA algorithm, initially proposed by \citet{Carrillo12}, leverages state-of-the-art algorithmic structures from optimisation theory to enforce non-negativity and sparsity in a highly redundant sparsity dictionary
of the sought radio image under noise-driven data fidelity constraints \citep{onose2016,onose17}. Evolved versions of SARA include Faceted HyperSARA for wide-band imaging, shipped with spectral and spatial faceting functionalities to handle large image dimensions \citep{thouvenin22a,thouvenin22b}, Polarized SARA for polarisation imaging \citep{birdi18}, and a sparsity-based joint calibration and imaging framework \citep{repetti17,repettiProc17,birdi20,Dabbech21}.
In addition to the precision and robustness requirements of RI imaging algorithms, the need for scalability is more critical than ever in light of the extreme data volumes, wide fields-of-view, and broad frequency bandwidths offered by modern radio arrays. In this context, we have recently proposed a parallel and automated framework for wide-field, high-resolution, high-dynamic range monochromatic intensity imaging \citep{Dabbech22}. The framework encapsulates two imaging algorithms at the interface of optimisation theory and deep learning, recently proposed by \citet{Terris22}: (i) the unconstrained SARA (uSARA) algorithm relying on a handcrafted image model enforced via an optimisation-based denoiser, and (ii) the AI for Regularization in Imaging (AIRI) algorithm relying on an implicitly learnt image model promoted via denoising deep neural networks. The framework offers scalable implementation of both imaging algorithms through two key features: memory-efficient, parallel operators and automated parameter selection.
This article is Part I of a series aiming to showcase and validate the uSARA algorithm on imperfectly calibrated RI observations from the ASKAP radio telescope. In Part II, we expand upon this work to include a validation of the AIRI algorithm. In both articles, we aim to study the imaging performance of uSARA and AIRI in comparison to Multi-scale CLEAN \citep{2008ISTSP...2..793C} via the {\tt WSClean} imager \citep{2014MNRAS.444..606O}, both in terms of reconstruction quality and computational efficiency. For a coherent comparative analysis of the three imaging algorithms -- uSARA, AIRI, and {\tt WSClean} -- both Part I and Part II utilise the same RI data from publicly available ASKAP observations collected during Early Science and Pilot surveys and processed through the ASKAPsoft pipeline \citep{2021PASA...38....9H}. Targeted fields-of-view -- hosting extended, diffuse, and complex radio sources -- were carefully selected to test the precision and scalability capabilities of our imaging framework. A comprehensive summary of the considered imaging framework (with interchangeable denoising algorithms), including the wide-field measurement operator model, its distribution through parallelisation, and its implementation using high-performance computing systems (HPC), is provided by \citet{Dabbech22}. A fully detailed analysis of the framework's scalability is the subject of a forthcoming article.
The remainder of this article is structured as follows. In Section~\ref{sec:methods}, we present an overview of the investigated imaging framework from the algorithmic structure underpinning uSARA to the parallelisation and automation functionalities ensuring the computational scalability of the framework to large image and data dimensions in the context of wide-field imaging. In Section~\ref{sec:data}, we provide details of the scrutinised ASKAP data and the imaging settings of uSARA and the CLEAN-based benchmark algorithm. Reconstruction results of our primary targets of interest are exposed in Section~\ref{sec:results} and discussed in Section~\ref{sec:disc}. Section~\ref{sec:comp-cost} documents and discusses the computational cost of the imaging framework. Finally, conclusions are drawn in Section~\ref{sec:con}.
\section{Methods}\label{sec:methods}
In this section, we present the RI data model in the context of wide-field monochromatic intensity imaging. We provide an overview of the uSARA imaging algorithm and its underpinning algorithmic structure, and summarise the scalability features of its encompassing framework \citep{Terris22,Dabbech22}.
\subsection{Data Model}
\label{ssec:data}
In the absence of instrumental and atmospheric perturbations, RI visibilities measured at a given observing wavelength are noisy Fourier components of the radio sky, modulated by the so-called $w$-effect, a varying chirp-like phase induced by the non-coplanarity of the radio array. With no loss of generality, the data model can be discretised, such that the measured visibilities $\ensuremath{\boldsymbol{y}}\in \mathbb{C}^M$ are modelled from the sought intensity image ${\ensuremath{\boldsymbol{x}}}\in\mathbb{R}^N_+$ as follows
\begin{equation}
\label{eq:invpb}
\ensuremath{\boldsymbol{y}}=\bm{\Phi} {\ensuremath{\boldsymbol{x}}}+{\ensuremath{\boldsymbol{n}}},
\end{equation}
where $\ensuremath{\boldsymbol{n}} \in \mathbb{C}^M$ is a realisation of a zero-mean random Gaussian noise with a standard deviation $\tau >0$. The operator $\bm{\Phi}\in \mathbb{C}^{M\times N}$ is the measurement operator encompassing the Fourier sampling and the $w$-effect, and on some occasions, a data weighting scheme (\emph{e.g.}~ Briggs weighting; \citealp{briggs95}) to improve the effective resolution of the observation (due to the highly non-uniform density profile of the RI sampling). More precisely, the Direct Fourier transform being intractable due to the large amounts of data, the incomplete Fourier sampling is modelled via the non-uniform fast Fourier transform (NUFFT) \citep{Fessler2003,onose2016}.
The $w$-effect is taken into account via a hybrid model combining $w$-stacking \citep{2014MNRAS.444..606O} and $w$-projection \citep{Cornwell2008}, whereby RI data are grouped by their $w$-coordinates into $P$ $w$-stacks. For each data point, the chirp-like phase is decomposed into two terms: a modulation of its associated $w$-stack injected in the measurement operator via image-domain multiplication, and a compact Fourier kernel encoding the resulting $w$-offset modulation, injected via Fourier-domain convolution \citep{dabbech17}. As a final consideration, data weighting schemes -- typically derived from the sampling profile -- are often applied in combination with a noise-whitening operation.
Under these considerations, the measurement operator is decomposed into computationally and memory-efficient blocks as the vertical concatenation of the operators $\big(\bm{\Phi}_p\big)_{1\leq p\leq P}$, where for each $w$-stack $p\in\{1,\dots,P\}$, the associated measurement operator $\bm{\Phi}_p \in \mathbb{C}^{M_p \times N} $ is given by $\bm{\Phi}_p= \bm{\Theta}_p{\mathbf{G}}_p {\mathbf{F}} {\mathbf{Z}}_p$ \citep{Dabbech22}. More specifically, the operator $\bm{\Theta}_p\in \mathbb{R}^{M_p \times M_p}$ is a diagonal matrix encoding the considered data-weighting scheme. The sparse matrix $\mathbf{G}_p\in \mathbb{C}^{M_p \times N^\prime}$ is the de-gridding matrix, encompassing convolutions between the NUFFT interpolation kernels and the compact $w$-kernels correcting for the associated $w$-offsets in the Fourier plane. Note that estimates of direction-dependent effects (DDEs) can also be encoded as additional convolutions in the rows of the de-gridding matrix, when available. $\mathbf{F} \in \mathbb{C}^{N^\prime \times N^\prime}$ is the Discrete Fourier transform and the operator $\mathbf{Z}_p \in \mathbb{C}^{N^\prime \times N}$ encodes the $w$-modulation of the $p^{\textrm{th}}$ $w$-stack, the zero-padding operator for a finer grid of the Fourier plane, and the correction for the convolution with the approximate NUFFT interpolation kernels.
\subsection{uSARA algorithm}
Image formation from the noisy and incomplete RI measurements $\ensuremath{\boldsymbol{y}}$ is an ill-posed inverse problem. Here we consider the unconstrained SARA imaging algorithm from optimisation theory \citep{Terris22}. The algorithm provides an estimate of the radio sky as the minimiser of an objective function posed as the sum of two terms: a data fidelity term $f$, emanating from the nature of the noise, and a regularisation term $r$ encoding a prior knowledge of the image to address the ill-posedness of the inverse problem. The minimisation task is of the form
\begin{equation}
\label{eq:objfun}
\underset{\ensuremath{\boldsymbol{x}}\in \mathbb{R}^N}{\mathrm{minimise}} ~~ f(\ensuremath{\boldsymbol{x}}; \ensuremath{\boldsymbol{y}})+ \lambda r(\ensuremath{\boldsymbol{x}}),
\end{equation}
where $\lambda>0$ is the regularisation parameter controlling the balance between the two terms.
Given the Gaussian nature of the noise affecting the RI data, $f$ is naturally set to $f(\ensuremath{\boldsymbol{x}}; \ensuremath{\boldsymbol{y}})=1/2\|\bm{\Phi}\ensuremath{\boldsymbol{x}}-\ensuremath{\boldsymbol{y}}\|_2^2$, with $\| .\|_2$ denoting the $\ell_2$ norm of its argument vector.
The uSARA regularisation function $r$ is a multi-term non-differentiable function composed of a non-convex log-sum function enforcing average sparsity in an overcomplete dictionary $\bm{\Psi} \in \mathbb{R}^{N\times B}$, consisting in the normalised concatenation of nine orthogonal bases, and a non-negativity constraint \citep{Carrillo12,Terris22}, which reads
\begin{equation}
r(\ensuremath{\boldsymbol{x}}) = \rho \sum_{j=1}^{B} \log\left(\rho^{-1}\left|\left(\bm{\Psi}^\dagger \ensuremath{\boldsymbol{x}}\right)_{j}\right|+1\right)+\iota_{\mathbb{R}^N_+}(\ensuremath{\boldsymbol{x}}),
\label{eq:sara_prior}
\end{equation}
where $\left(.\right)_j$ denotes the $j^\text{th}$ coefficient of its argument vector, and $(\cdot)^\dagger$ stands for the adjoint of its argument operator. The parameter $\rho>0$ prevents the argument of the logarithm from reaching zero and can be set to the estimate of the noise level in the sparsity domain \citep{thouvenin22a}. The non-negativity constraint is encoded via the indicator function of the real positive orthant, given by $\iota_{{\mathbb{R}^N_{+}}} (\ensuremath{\boldsymbol{x}})=+\infty$ if $\ensuremath{\boldsymbol{x}} \notin {\mathbb{R}^N_{+}}$ and 0 otherwise. As such, the resulting minimisation task is non-convex and is addressed in an iterative manner. More specifically, the problem is approached by solving a sequence of surrogate convex minimisation tasks whereby $r$ is replaced by a convex regularisation function $g$ of the form $g(\ensuremath{\boldsymbol{x}})= \|\ensuremath{\boldsymbol{\mathsf{W}}} \bm{\Psi}^\dagger \ensuremath{\boldsymbol{x}}\|_1+\iota_{\mathbb{R}^N_+}(\ensuremath{\boldsymbol{x}})$, substituting the log-sum function with the $\ell_1$ function, denoted by $\|\cdot\|_1$, and weighted by the diagonal matrix $\ensuremath{\boldsymbol{\mathsf{W}}} \in\mathbb{R}^{B \times B}$. Each of the surrogate weighted minimisation tasks is of the form
\begin{equation}
\label{eq:objfun2}
\underset{\ensuremath{\boldsymbol{x}}\in \mathbb{R}^N}{\mathrm{minimise}} ~~ f(\ensuremath{\boldsymbol{x}}; \ensuremath{\boldsymbol{y}})+ \lambda g(\ensuremath{\boldsymbol{x}}),
\end{equation}
where the convex and non-differentiable function $g$ is redefined through the update of its underlying weighting matrix $\ensuremath{\boldsymbol{\mathsf{W}}}$ from the solution of its preceding task \citep{Carrillo12,Terris22}. The convex minimisation task is solved approximately (\emph{i.e.} for a finite number of iterations $K>0$) using the forward-backward (FB) iterative scheme \citep{repetti20,Terris22}, and the overall procedure benefits from convergence guarantees \citep{repetti21}.
The FB iterative scheme relies on two-step image updates: a `forward' gradient descent step calling for the gradient of the data fidelity function $f$, given by $\nabla f(\ensuremath{\boldsymbol{x}} )= \text{Re}\{\bm{\Phi}^\dagger\bm{\Phi}\} \ensuremath{\boldsymbol{x}} -\text{Re}\{\bm{\Phi}^\dagger\ensuremath{\boldsymbol{y}}\}$, followed by a `backward' denoising step using the proximal operator of the convex regularisation function $g$ \citep[see ][for the mathematical details]{Terris22} such that for any $k \in \mathbb{N}$
\begin{equation}
\label{eq:fb}
\ensuremath{\boldsymbol{x}}^{(k+1)} = \operatorname{prox}_{\gamma \lambda g}\left( \ensuremath{\boldsymbol{x}}^{(k)} -\gamma \nabla f(\ensuremath{\boldsymbol{x}}^{(k)}) \right).
\end{equation}
Let $L>0$ denote the Lipschitz constant of $\nabla f$ given by $L=\| \text{Re}\{\bm{\Phi}^\dagger \bm{\Phi}\}\|_S$, with $\| .\|_S$ denoting the spectral norm of its argument operator. The step-size $\gamma$ satisfies the condition $0<\gamma<2/L$ to guarantee the convergence of the iterative scheme. Finally, the proximal operator $\operatorname{prox}_{\gamma \lambda g}$, not benefiting from closed-form solutions, is computed sub-iteratively, involving soft-thresholding operations in the sparsity dictionary $\bm{\Psi}$ by $\gamma \lambda$.
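As a hedged illustration of the iteration \eqref{eq:fb}, the sketch below implements FB for a single weighted $\ell_1$ task with the identity as sparsity dictionary -- a drastic simplification of the SARA dictionary $\bm{\Psi}$, with no reweighting and a closed-form proximal step; it reuses the toy \texttt{Phi}/\texttt{Phi\_adj} callbacks introduced earlier.
\begin{verbatim}
# Minimal forward-backward sketch for (eq:objfun2) with g = ||.||_1 plus
# nonnegativity, using the identity dictionary (simplification of uSARA).
import numpy as np

def soft(v, t):                                  # prox of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(y, Phi, Phi_adj, L, lam, n_iter=200):
    gamma = 1.98 / L                             # step size, 0 < gamma < 2/L
    x = np.zeros_like(Phi_adj(y))
    for _ in range(n_iter):
        grad = Phi_adj(Phi(x) - y)               # forward: gradient of f
        x = np.maximum(soft(x - gamma * grad, gamma * lam), 0.0)  # backward
    return x
\end{verbatim}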
\subsection{A scalable and automated imaging framework}\label{sec:regparam}
To address the scalability requirement to large data and image sizes, our imaging framework provides automated parameter choice and parallel and memory-efficient models of the operators and functions involved~\citep{Dabbech22}, summarised in what follows.
\paragraph*{Regularisation parameter selection.}
The choice of the regularisation parameter $\lambda$, balancing data fidelity and image regularisation, is of paramount importance as it affects the solution of the minimisation task \eqref{eq:objfun}. Considering $\sigma>0$, the estimate of the standard deviation of the image domain noise, \citet{Terris22} proposed to equate the soft-thresholding parameter $\gamma \lambda$, involved in the denoising step, to the estimate of the standard deviation of the sparsity domain noise given by $\sigma/3 $ (the factor three emanates from the normalisation of the sparsity dictionary). In the case when a data-weighting scheme is adopted to compensate for the non-uniform density profile of the sampling (\emph{e.g.} Briggs weighting), additional correlation is induced in the image domain noise. Under this consideration, $\sigma$ can be obtained as
\begin{equation}
\label{eq:sigma}
\sigma=\eta \tau/\sqrt{2L}, \textrm{~with~} \eta={\| \text{Re}\{\bm{\Phi}^\dagger \bm{\Theta}^2\bm{\Phi}\}\|^{1/2}_S}/{\sqrt{L}},
\end{equation}
where the data-weighting operator $\bm{\Theta}\in \mathbb{R}^{M\times M}$ is a diagonal per block matrix, whose diagonal matrix-blocks are the data-weighting matrices $\big(\bm{\Theta}_p\big)_{1\leq p\leq P}$. The correction factor $\eta$ reduces to one otherwise. In our experiments, Briggs weighting was applied to the data in imaging, and the resulting values of $\eta$ were found to be in the interval $[0.3,0.6]$. The regularisation parameter $\lambda$ can be set around
\begin{equation}
\label{eq:heuristic}
\lambda \simeq \tau {\| \text{Re}\{\bm{\Phi}^\dagger \bm{\Theta}^2\bm{\Phi}\}\|^{1/2}_S} /(3\sqrt{2}\gamma L ),
\end{equation}
with the step size fixed to $\gamma=1.98/L$, ensuring the convergence of the FB algorithm.
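The spectral norm $L$ entering both the step size and the heuristic \eqref{eq:heuristic} is typically estimated with a handful of power iterations; a generic sketch (assuming matrix-free operator callbacks such as those in the toy example above) is given below.
\begin{verbatim}
# Power-method estimate of L = ||Re{Phi^dagger Phi}||_S (generic sketch).
import numpy as np

def spectral_norm(Phi, Phi_adj, shape, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)
    x /= np.linalg.norm(x)
    for _ in range(n_iter):
        x = Phi_adj(Phi(x))                    # apply Re{Phi^dagger Phi}
        x /= np.linalg.norm(x)
    return np.linalg.norm(Phi_adj(Phi(x)))     # ~ largest eigenvalue L
\end{verbatim}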
\paragraph*{Denoiser Faceting.}
In light of the sub-iterative nature of the denoising operator underpinning uSARA, distribution and parallelisation of the sparsity operator $\bm{\Psi}$ are required, not only to handle the large image dimensions of interest but also to accelerate the computation. For this aim, we have adopted a faceted implementation of the operator $\bm{\Psi}$ \citep{Prusa2012}, enabling image facet denoising. The number of facets $F$ is derived from the number of CPU cores of the computing architecture on which the algorithm is to be deployed, and constraints on image facet dimensions from the wavelet transforms underpinning the sparsity operator $\bm{\Psi}$.
\paragraph*{Automated parallelisation of the measurement operator.}
\label{ssec:prallel} Three key features are supported in our implementation of the measurement operator to ensure its scalability to large data sizes \citep{Dabbech22}.
Firstly, the choice of the number of $w$-stacks, $P$, defining the decomposition of the measurement operator into the operators $\bm{\Phi}_p$ is automated via a planning strategy taking into consideration the computational cost derived from the complexity of the application of the measurement operator $\bm{\Phi}$ and the memory constraints of the computing architecture. Secondly, memory-efficient encoding of the resulting operators $\bm{\Phi}^\dagger_p \bm{\Phi}_p$, called for in FB, can be achieved through a data dimensionality reduction functionality via visibility-gridding, whereby the de-gridding and gridding operations underpinning $\bm{\Phi}^\dagger_p \bm{\Phi}_p$ are explicitly encoded via the sparse holographic matrices $\ensuremath{\boldsymbol{\mathsf{H}}}_p ={\ensuremath{\boldsymbol{\mathsf{G}}}}^\dagger_p \bm{\Theta}^2_p \ensuremath{\boldsymbol{\mathsf{G}}}_p$. By doing so, the dimensionality of the measurement operator is effectively driven solely by the image size. The feature is enabled when the memory required to host the de-gridding matrices exceeds the available resources. Thirdly, further decomposition of each operator $\bm{\Phi}_p^\dagger\bm{\Phi}_p$ into smaller blocks is enabled via a data-clustering step such that each data block corresponds to the aggregation of radially-neighbouring Fourier modes, identified under memory constraints. The number of CPU cores allocated for the forward step of the FB algorithmic structure is derived from the number of identified data clusters. These computing resources are initially used to compute the de-gridding matrices ($\ensuremath{\boldsymbol{\mathsf{G}}}_{p})$ or the holographic matrices ($\ensuremath{\boldsymbol{\mathsf{H}}}_{p})$, underpinning the operator $\bm\Phi^\dagger \bm\Phi$ (only once), and are later used to host them and apply them at each FB iteration.
\begin{table*}
\centering
\begin{center}
\begin{tabular}{ccccccc}
\textbf{SB -- Beam} & \textbf{Band} & \textbf{Sources} & \textbf{R.A., Dec.} & \textbf{Redshift} & \textbf{Features} \\
& [MHz] & & (J2000) & & \\
\hline
\multirow{2}{4em}{8275--15} & \multirow{2}{5em}{870 - 1158} & Abell 3391 & 06h26m22.8s, $-53^{\circ}41'44''$ & $z=0.056^{(1)}$ & North MC, FR I \\
& & Abell 3395 & 06h27m14.4s, $-54^{\circ}28'12''$ & $z=0.0518^{(1)}$ & South MC, FR I, radio phoenix\\
\hline
\multirow{2}{4em}{9351--12} & \multirow{2}{5em}{800 - 1088} & PKS B2014-558 & 20h18m01.1s, $-55^{\circ}39'31''$ & $z = 0.061^{(2)}$ & X-shaped RG \\
& & SPT-CL J2023-5535 & 20h23m24.5s, $-55^{\circ}35'32''$ & $z=0.23^{(3)}$ & radio halo, radio relic \\
\hline
\multirow{2}{4em}{9442--35} & \multirow{2}{5em}{800 - 1088} & Abell 3785 & \multirow{2}{10em}{21h34m28s, $-53^{\circ}37'$} & \multirow{2}{6em}{$z = 0.0773^{(4)}$} & complex RGs \\
& & PKS 2130-538 & & & ``the dancing ghosts'' \\
\hline
\end{tabular}
\caption{Selected observations from the ASKAP Early Science and EMU Pilot Surveys for imaging. Datasets are labelled by their ASKAP Scheduling Block (SB) and primary beam pointing ID. Multiple sources of interest are covered per dataset, as listed, with notes in the `Features' column. Redshifts are from (1) \citealp[][]{2004AJ....128.1558S} (2) \citealp[][]{Huchra_2012} (3) \citealp[][]{2021ApJS..253....3H} (4) \citealp[][]{1999ApJS..125...35S}. MC: Merging cluster. FR I: Fanaroff-Riley class I. RG: Radio galaxy.} \label{tab: obs}
\end{table*}
\begin{table}
\begin{tabular}{ccc}
\textbf{Parameters} & Sub-band Images & Full-band Image$^{\star\star}$\\
\hline
Number of SPWs & 8 & 1 \\
Bandwidth & 36 MHz & 288 MHz\\
\hline
Field of View & $3.36^{\circ\,\star}$ & $2.5^{\circ}$\\
& $2.5^{\circ\,\star\star}$ & \\
\hline
Image Size & $5500 \times 5500^{\,\star}$ & $4096 \times 4096$\\
& $4096 \times 4096^{\,\star\star}$ & \\
\hline
Cell Size & \multicolumn{2}{c}{$2.2$ arcsec pixel$^{-1}$} \\
\hline
$uv$-range & \multicolumn{2}{c}{$>60$ m} \\
\hline
Data weighting & \multicolumn{2}{c}{Briggs Robust $-0.25$} \\
\hline
\end{tabular}
\caption{Imaging settings of uSARA and {\tt WSClean}. The notation ($^{\star}$) refers to the settings of the fields SB8275-15 and SB9351-12 and ($^{\star\star}$) refers to the settings of the field SB9442-35. \label{tab: imgparams}}
\end{table}
\section{Data, imaging, and analysis}\label{sec:data}
In this section, we provide a full description of the scrutinised data, including pre-processing and calibration steps. We provide the imaging settings of uSARA and {\tt WSClean}, and outline the computing architecture and resources required to run both algorithms. Finally, we provide procedures for a coherent comparative analysis.
\subsection{ASKAP Observations}
ASKAP consists of 36 12-metre parabolic dish antennas, spanning six kilometres, at the Murchison Radio Observatory in Western Australia. ASKAP's original design includes Phased Array Feeds (PAFs; \citealp{2006ESASP.626E.663H}) at the focus of each antenna, built as dual-polarisation chequerboard grids sensing signals in the frequency range of 700--1800 MHz. Signals received by each of the 188-element-sensor grids are cross-correlated to simultaneously form 36 separate primary beams (or pointings) on the sky \citep{2014PASA...31...41H, 2016PASA...33...42M}. This PAF technology gives ASKAP an instantaneous field-of-view (FoV) of 30 square degrees, making it the most rapid surveying radio telescope in the world \citep{2009IEEEP..97.1507D}. ASKAP's EMU Survey will map the radio continuum over the entire southern sky (up to a northern declination of +30$^{\circ}$) at a resolution of $\sim 10$ arcsec with sensitivities reaching $\sim 10~\mu$Jy beam$^{-1}$, and is projected to detect more than 70 million galaxies. Science goals of the EMU collaboration include testing fundamental models for dark energy \citep[\emph{e.g.}][]{2012MNRAS.424..801R}, detecting the warm-hot intergalactic medium \citep[\emph{e.g.}][]{2020PASA...37...32H}, tracing active galactic nuclei (AGN) and star formation up to high redshifts \citep[\emph{e.g.}][]{2017ApJ...842...95M}, and mapping the radio continuum of the Galactic Plane and Centre \citep[\emph{e.g.}][]{2021MNRAS.502...60R}.
The ASKAP Early Science Broadband Survey (Project: AS034, PI: Lisa Harvey-Smith) began in 2018 with the aim of testing observations using the full ASKAP array while covering scientifically interesting areas of sky, such as the Galaxy And Mass Assembly 23-hour field (GAMA 23). The EMU Pilot Survey \citep[EMU-PS;][Project: AS101, PI: Ray Norris]{2021PASA...38...46N}, centred on RA, Dec: 21h00m00s, -55$^{\circ}$00$'$00$''$, was carried out in 2019 covering an area of 270 deg$^2$ overlapping a portion of sky covered by the first data release of the Dark Energy Survey (DES; \citealp{2018ApJS..239...18A}). From these early ASKAP data releases, we have selected three individual Scheduling Block (SB) beam observations covering extended, morphologically complex radio sources which represent robust test cases for image reconstruction with uSARA. ASKAP SBs contain 36 measurement sets each, corresponding to the 36 primary beam pointings. Therefore, to ensure maximum signal-to-noise, we have chosen the single-beam measurement sets which were most closely centred on our primary targets of interest (beam 15 of SB8275, beam 12 of SB9351, and beam 35 of SB9442). Beam observations for each SB are carried out with the same specifications over a 10-hour total integration time. The observing band varies slightly between the selected Early Science data (SB8275, central frequency 1013 MHz) and EMU-PS data (SB9351 and SB9442, central frequency 943 MHz), yet both have instantaneous bandwidths of 288 MHz with physical channel intervals of 1 MHz. See Table~\ref{tab: obs} for further details on the selected observations.
The three selected SB-beam measurement sets, containing calibrated visibilities, were generated through the ASKAPsoft pipeline \citep{2021PASA...38....9H}. For direction-independent calibration, ASKAPsoft includes bandpass calibration using the standard calibrator PKS B1934-638 -- observed for five minutes in each beam before or after the science target. Further calibration includes a cycle of phase-only self-calibration for every 1 MHz of data. Each beam observation in an SB is calibrated and imaged independently, and as a final step, ASKAPsoft stitches images together to form mosaics covering the full 30-square-degree FoV. For our imaging purposes, we used the ASKAP data products after ASKAPsoft processing, with no further flagging or calibration. Prior to imaging, we shifted the pointing centres of our selected beam observations by their beam offsets. Although most RI packages assume a Stokes parameter $I$ (intensity) of $I = (XX + YY)/2$ (where $X$ and $Y$ represent the instrumental polarisations), ASKAPsoft uses the IAU (International Astronomical Union) definition $I = XX + YY$. We did not apply a factor two correction for the IAU intensity convention in any of our final image reconstructions; therefore, flux densities in our images are halved when compared to the values found in ASKAPsoft mosaic images.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Images/A3391_wsclean.pdf}
\caption{SB8275-15 -- {\tt WSClean}: Full FoV image covering the merging cluster system Abell 3391-95, at the first sub-band (SPW:1, centred at 887 MHz). This monochromatic image is a {\tt WSClean} restored image with a synthesised beam of $9.4 \times 10.9$ arcsec and rms noise of $\sigma_{\rm meas}\approx 50~\mu$Jy beam$^{-1}$ ($2~\mu$Jy pixel$^{-1}$). Panel (a) centred on the FR-I radio galaxy in A3391; panel (b) centred on a cluster-member FR-II radio galaxy; (c) panels centred on the FR-I and the diffuse source in A3395. Middle (c) panel: r-band optical image from DES overlaid with the {\tt WSClean} restored image, demarcated
by blue contours at levels $\{2^{n+1}\}_{1 \leq n \leq 10} ~\mu$Jy pixel$^{-1}$. Rightmost (c) panel: spectral index map obtained with the first six sub-band images of {\tt WSClean} after smoothing with a common circular Gaussian beam of 20 arcsec. All sub-band images are combined into the GIF \texttt{`SB8275-15\_WSClean'} provided in \citet{askapdataset}. \label{A3391wsclean}}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Images/A3391_main_uSARA.pdf}
\caption{SB8275-15 -- uSARA: Full FoV image covering the merging cluster system Abell 3391-95, at the first sub-band (SPW:1, centred at 887 MHz). This monochromatic image is a uSARA model with a pixel resolution of $2.2 \times 2.2$ arcsec. Panels are the same as described in Figure~\ref{A3391wsclean}. Middle (c) panel: r-band optical image from DES overlaid with uSARA model image, demarcated by blue contours at levels $\{2^n\}_{0 \leq n \leq 10} ~\mu$Jy pixel$^{-1}$. Rightmost (c) panel: Spectral index map obtained with the first six sub-band images of uSARA after smoothing with a common circular Gaussian beam of 5 arcsec. All sub-band images are combined into the GIF \texttt{`SB8275-15\_uSARA'} provided in \citet{askapdataset}. \label{A3391usara}}
\end{figure*}
\begin{table*}
\begin{tabular}{| *{9}{c|} }
\hline
\hline
\textbf{A3395 Phoenix} & $S_{887~{\rm MHz}}$ & $S_{923~{\rm MHz}}$ & $S_{959~{\rm MHz}}$ & $S_{995~{\rm MHz}}$ & $S_{1031~{\rm MHz}}$ & $S_{1067~{\rm MHz}}$ & $S_{1103~{\rm MHz}}$ & $S_{1139~{\rm MHz}}$ \\
\hline
uSARA model & 29.3 & 24.3 & 14.5 & 10.4 & 13.5 & 15.2 & 7.3 & 7.4 \\
\hline
{\tt WSClean} restored image & 30.2 & 25.1 & 25.2 & 20.0 & 25.7 & 21.6 & 10.0 & 8.3 \\
\hline
{\tt WSClean} smoothed model & 27.2 & 22.2 & 23.8 & 18.6 & 23.4 & 19.7 & 9.5 & 7.1 \\
\hline
\end{tabular}
\caption{Integrated flux density values in [mJy] of the diffuse phoenix source in Abell 3395 for each SPW imaged with uSARA and {\tt WSClean}.
\label{tab:fluxA3391}}
\end{table*}
\subsection{Imaging Settings}
To perform monochromatic sub-band imaging, we split the selected wide-band data into effective channels, or spectral windows (SPWs), across the full frequency band. More precisely, data from all three fields were binned uniformly into eight spectral windows with a bandwidth of 36 MHz each, under the assumption of nearly flat spectral behaviour within a spectral window. Data sizes per spectral window range from $\sim0.8$ to $\sim1.2$ GB. To further demonstrate the scalability of uSARA, we also imaged the third field (SB9442-35) over the full frequency band (covering 288 MHz) to form a single monochromatic image from $\sim7.5$ GB of data. We chose not to generate full-band monochromatic images for the other two fields (SB8275-15 and SB9351-12) since they host sources known to exhibit very steep spectral behaviour \citep[specifically, the galaxy clusters Abell 3391 and SPT-CL 2023-5535; ][]{2021A&A...647A...3B,2020ApJ...900..127H} and because both fields contain bright, large-scale artefacts.
According to the ASKAP Science Observation Guide\footnote{\url{https://confluence.csiro.au/display/askapsst/ASKAP+Survey+Science}}, the recommended image size is set by the full width at half maximum (FWHM) of the primary beam. ASKAP beam pointings have a primary beam FoV approximated by a circular Gaussian with FWHM of $1.09\lambda_{\rm obs} / D$ \citep{2021PASA...38....9H}, where the observing wavelength, $\lambda_{\rm obs}$, is about 0.3 metres (at 1000 MHz) and $D$ is the diameter of a single ASKAP dish (12 m). We calculated the FWHM of a given beam pointing to be $\sim 1.56^{\circ}$ at the middle of the frequency band. For the first two selected ASKAP fields (SB8275-15 and SB9351-12), we noted the presence of bright sources lying just outside of the primary beam that created sidelobes, and decided to image a FoV covering twice the FWHM. For the third FoV (SB9442-35), we did not find any bright sources directly outside of the primary FWHM and chose to image a FoV covering 1.6 times the FWHM. In both cases, the imaged FoV is well beyond the FWHM of the primary beam. With a maximum baseline of 6268~m, and a native instrumental resolution between 9 and 12 arcsec over the bandwidth, we selected a cell size of 2.2 arcsec pixel$^{-1}$, corresponding to a super-resolution factor of $2$ at the highest frequency. Under these considerations, the reconstructed images of fields SB8275-15 and SB9351-12 are $5500 \times 5500$ pixels in size, and those of the field SB9442-35 are $4096 \times 4096$ pixels. Unlike the output image of the CLEAN-based algorithm -- by design restricted to the instrumental resolution through the application of the restoring beam to its estimated non-physical model image -- the uSARA image retains super-resolution at the pixel size. On a final note, although the FWHM of the primary beam is used to determine the imaged FoVs, no primary beam correction is applied to the reconstructions in this work.
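As a back-of-envelope check of these numbers, the following Python snippet reproduces the quoted FWHM and image sizes, with values as assumed above.
\begin{verbatim}
# Numerical check of the primary beam FWHM and imaged FoVs quoted above.
import numpy as np

c = 299792458.0          # speed of light [m/s]
D = 12.0                 # ASKAP dish diameter [m]
nu_mid = 1.0e9           # mid-band frequency [Hz] (~0.3 m wavelength)
fwhm_deg = np.degrees(1.09 * (c / nu_mid) / D)
print(f"FWHM ~ {fwhm_deg:.2f} deg")            # ~1.56 deg

cell_arcsec = 2.2
for factor in (2.0, 1.6):                      # imaged FoV multipliers
    fov_deg = factor * fwhm_deg
    npix = fov_deg * 3600.0 / cell_arcsec
    print(f"{factor} x FWHM -> {fov_deg:.2f} deg, ~{npix:.0f} pixels")
# ~5100 and ~4090 pixels, i.e. close to the adopted image sizes of
# 5500 x 5500 and 4096 x 4096 pixels.
\end{verbatim}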
Systematic imaging was carried out on the three selected ASKAP fields with both uSARA and {\tt WSClean} using the same imaging settings where applicable. A summary of the imaging settings is listed in Table~\ref{tab: imgparams}. Our parallelised and automated imaging framework implemented in MATLAB, and the C++ software {\tt WSClean}, were both run on Cirrus\footnote{\url{http://www.cirrus.ac.uk}}, a UK Tier2 high-performance computing (HPC) service, comprising 280 compute nodes. Each node has 256~GB of memory and 36 CPU cores (with two hyperthreads each). Parameter choice in both algorithms is summarised in the following paragraphs.
\paragraph*{{\tt{WSClean}} parameters.}
The state-of-the-art {\tt WSClean} imager corrects for the $w$-effect via the $w$-stacking approach \citep{2014MNRAS.444..606O, 2017MNRAS.471..301O}. Monochromatic images of each of the selected ASKAP fields were made with Multi-scale CLEAN, using a gain factor of 0.8. Auto-masking was enabled down to 2.5 times the standard deviation of the {\tt WSClean} residual image, computed at the start of every major deconvolution iteration using the median absolute deviation estimator. The stopping criteria were set to a maximum iteration number of $10^6$ and a `cleaning' threshold equal to the estimated standard deviation of the residual image. Briggs robust weighting, with robust parameter $-0.25$, was used in all the imaging experiments. These weights were stored in the {\tt IMAGING WEIGHT} column of the measurement set tables and were extracted and utilised in uSARA imaging. The number of $w$-stacks considered by {\tt WSClean} is set automatically based on the theoretical bound derived in \citet{2014MNRAS.444..606O} and the available compute resources (see Sec.~\ref{sec:comp-cost} for more details). For future reference, we note that CLEAN reconstructions are the so-called restored images, obtained as the sum of the non-physical model image convolved with the associated restoring beam, and the residual image. To support our flux density analysis, we also consider the {\tt WSClean} model images convolved with the restoring beam, referred to as smoothed model images.
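For reference, these settings map onto a {\tt WSClean} invocation roughly as sketched below. This is an assumed reconstruction rather than the exact command used; in particular, we read the quoted gain factor of 0.8 as the major-cycle gain, and file names are hypothetical.
\begin{verbatim}
# Assumed WSClean invocation for a sub-band of SB8275-15 (illustrative).
import subprocess

args = [
    "wsclean",
    "-multiscale",               # multi-scale CLEAN
    "-mgain", "0.8",             # gain factor of 0.8 (major-cycle gain)
    "-auto-mask", "2.5",         # masking at 2.5 x residual std. dev.
    "-auto-threshold", "1",      # stop at 1 x residual std. dev.
    "-niter", "1000000",         # maximum iteration number of 10^6
    "-weight", "briggs", "-0.25",
    "-size", "5500", "5500",
    "-scale", "2.2asec",
    "-minuvw-m", "60",           # uv-range > 60 m (Table 2)
    "-name", "SB8275-15_SPW1",   # hypothetical output prefix
    "SB8275-15.ms",              # hypothetical measurement set path
]
subprocess.run(args, check=True)
\end{verbatim}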
\paragraph*{uSARA parameters.}\label{sec:usara_params}
Since the uSARA algorithm is implemented in MATLAB, data and associated information were extracted from the measurement set tables as a collection of MAT files using a dedicated Python script relying on the `Pyxis' library (standing for Python Extensions for Interferometry Scripting, as part of the {MeqTrees software package;}~\citealp{Noordam10}). We recall that the Briggs data weights generated by {\tt{WSClean}} were also considered in uSARA imaging. Concerning the uSARA measurement operator, the number of $w$-stacks used in the different imaging experiments was set in an automated manner, via the planning step supported by our imaging framework (see Tables~\ref{tab:timeA3391}--\ref{tab:time9442} for details). In the reconstruction of the sub-band images of all three fields, the operator $\bm\Phi^\dagger \bm\Phi$ was encoded via the underpinning sparse de-gridding matrices ($\ensuremath{\boldsymbol{\mathsf{G}}}_{p}$). For the full-band monochromatic imaging experiment of the field SB9442-35, the resulting large data size triggered the dimensionality reduction feature. The operator $\bm\Phi^\dagger \bm\Phi$ was therefore encoded via its underpinning holographic matrices ($\ensuremath{\boldsymbol{\mathsf{H}}}_{p}$), reducing the memory requirements to host it by nearly a factor of 5. The regularisation parameter $\lambda$ was fixed to the heuristic value proposed in \eqref{eq:heuristic} for the full-band image of the field SB9442-35. However, some adjustment was found necessary to achieve a high reconstruction quality in terms of resolution and sensitivity for all sub-band images of the three selected fields, whereby $\lambda$ was set to 0.7--0.8 times the heuristic value. Higher values of the regularisation parameter resulted in somewhat smoother reconstructions and the non-recovery of some fainter point sources, whereas lower values led to images highly contaminated by calibration artefacts. The applied adjustment factor may be partly attributed to the imperfection of the DIE calibration and the lack of DDE calibration, affecting the accuracy of the measurement operator. Nonetheless, it is generally consistent with the findings of the theoretical study of uSARA's heuristic in the context of simulated RI data \citep{Terris22}. Finally, the stopping criteria of uSARA were set to their default values, including a maximum of 10 re-weighted minimisation tasks, and a relative variation between the image iterates of $0.0005$.
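For illustration, an equivalent extraction can be sketched with the {\tt python-casacore} library; the actual script relies on Pyxis, and the file names below are hypothetical.
\begin{verbatim}
# Minimal sketch of extracting visibilities and Briggs weights from a
# measurement set with python-casacore (the actual pipeline uses Pyxis).
import numpy as np
from casacore.tables import table
from scipy.io import savemat

ms = table("SB8275-15.ms")              # hypothetical MS path
uvw = ms.getcol("UVW")                  # (n_rows, 3) baseline coords [m]
vis = ms.getcol("DATA")                 # (n_rows, n_chan, n_corr)
flag = ms.getcol("FLAG")
weight = ms.getcol("IMAGING_WEIGHT")    # Briggs weights from WSClean
ms.close()

# Stokes I from the first and last correlations (XX and YY); note the
# IAU convention I = XX + YY used by ASKAPsoft, discussed above.
stokes_i = 0.5 * (vis[..., 0] + vis[..., -1])
good = ~(flag[..., 0] | flag[..., -1])  # keep unflagged samples only

savemat("SB8275-15_SPW1.mat",           # MAT file read by the framework
        {"y": stokes_i[good], "w": weight[good], "uvw": uvw})
\end{verbatim}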
\subsection{Quantitative Analysis}
Focusing on the target sources of the selected observations, we provide their flux measurements and in-band spectral index maps obtained by the two imaging algorithms. Estimated model images from uSARA are in units of [Jy pixel$^{-1}$], whereas {\tt WSClean} restored images are, by design, in units of [Jy beam$^{-1}$]. For the sake of comparison, we normalised {\tt WSClean} restored images by the area of the associated restoring Gaussian beam, denoted by $A_{\rm beam}$ and given by
\begin{equation}
A_{\rm beam} = \frac{\pi \, B_{\rm MAJ} \, B_{\rm MIN}}{4 \ln 2},
\end{equation}
where $B_{\rm MAJ}$ and $ B_{\rm MIN}$ are the respective major and minor axes of the restoring beam in pixel units (\emph{i.e.} normalised by the cell size of 2.2 arcsec pixel$^{-1}$).
Diffuse emission of particular interest in our study presents a complex morphology, with edges often blended into the background noise, as seen in {\tt WSClean} maps. As is common practice, we measure the total flux of diffuse structure within manually generated regions which roughly follow the $\sim2\sigma_{\rm meas}$ contours of the source, where $\sigma_{\rm meas}$ is the root-mean-square (rms) noise measured in a nearby region devoid of sources in the {\tt WSClean} restored map. Regions were hand-drawn in the visualisation software SAOImageDS9 \citep{ds903} to closely follow the contours of recovered signal in both the uSARA and {\tt WSClean} maps, such that the same region was used when measuring the flux density of a given diffuse source. Flux density measurements from uSARA images are expected to be lower than those measured from the {\tt WSClean} restored images, due to the bias introduced by the {\tt WSClean} residual map. For a more accurate comparison of flux density, we also provide measurements from {\tt WSClean} smoothed model images. Note that uncertainties on flux densities are not reported since the Early Science and Pilot Survey ASKAP observations are not yet validated against standard flux catalogues. All reported flux measurements and statistics were obtained using the SAOImageDS9 software.
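The beam-area normalisation and region-based flux measurement can be sketched as follows, assuming images loaded as 2D arrays with a boolean region mask; the actual measurements were made in SAOImageDS9.
\begin{verbatim}
# Integrated flux density within a region mask; a minimal sketch.
import numpy as np

def beam_area_pixels(bmaj_arcsec, bmin_arcsec, cell_arcsec=2.2):
    """Gaussian beam area in pixels: pi * Bmaj * Bmin / (4 ln 2)."""
    bmaj = bmaj_arcsec / cell_arcsec
    bmin = bmin_arcsec / cell_arcsec
    return np.pi * bmaj * bmin / (4.0 * np.log(2.0))

def integrated_flux(image, region_mask, beam_area_pix=None):
    """Sum pixels in the region; if the image is in Jy/beam (restored
    image), divide by the beam area to obtain Jy."""
    total = image[region_mask].sum()
    if beam_area_pix is not None:
        total /= beam_area_pix
    return total

# e.g. the SPW:1 restored beam of SB8275-15 (9.4 x 10.9 arcsec) covers
# beam_area_pixels(9.4, 10.9) ~ 24 pixels.
\end{verbatim}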
Spectral index maps were created to show how the electron energy distribution, and consequently the spectral energy distribution, varies across the morphology of sources of interest. Firstly, the sub-band maps were smoothed via convolution with a 2D circular Gaussian kernel with axes of 5 arcsec for uSARA images and 20 arcsec for {\tt WSClean} images. The spectral index maps were then obtained by fitting a first-order polynomial ${\rm log}(S_{\nu}) = -\alpha\, {\rm log}(\nu) + c$, where $S_{\nu}$ is the flux density for a given beam area at frequency $\nu$, $c$ is a constant, and $\alpha > 0$ is the spectral index. Only the first six sub-bands were considered when generating these maps, since the last two sub-bands were consistently found to recover less diffuse signal for the primary targets of interest (possibly attributed to the steepness of their spectra).
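A minimal sketch of this per-pixel fit is given below, assuming the sub-band maps have already been smoothed to a common resolution and aligned on the same pixel grid.
\begin{verbatim}
# Per-pixel spectral index fit over the first six sub-band images.
import numpy as np

def spectral_index_map(cube, freqs_mhz, threshold=0.0):
    """cube: (n_band, ny, nx) array of sub-band maps on a common beam.
    Fits log10(S) = -alpha * log10(nu) + c and returns the alpha map."""
    log_nu = np.log10(np.asarray(freqs_mhz, dtype=float))
    n_band, ny, nx = cube.shape
    flat = cube.reshape(n_band, -1)
    alpha = np.full(ny * nx, np.nan)
    ok = (flat > threshold).all(axis=0)   # fit only well-detected pixels
    if ok.any():
        # First-order polynomial fit; the slope gives -alpha.
        coeffs = np.polyfit(log_nu, np.log10(flat[:, ok]), deg=1)
        alpha[ok] = -coeffs[0]
    return alpha.reshape(ny, nx)
\end{verbatim}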
\section{Results}\label{sec:results}
In this section, we showcase high-resolution, high-fidelity images of our three selected fields produced by the uSARA algorithm and compare them to images made with multi-scale {\tt WSClean}. Selected images are presented in Figures~\ref{A3391wsclean}--\ref{9442usara}, showing the full imaged fields-of-view of our chosen ASKAP observations, and include zoomed-in views focusing on complex radio emission of interest together with associated optical images and spectral index maps. In \citet{askapdataset}, we provide FITS files of all spectral windows of the three selected fields imaged with both algorithms. For each field, images are also combined into animated GIF files to show how the recovered emission changes over the full frequency band. In what follows, we provide a detailed comparison of the morphology and flux density of specific sources between the uSARA and {\tt WSClean} images.
\subsection{First field: SB8275-15}
Beam 15 of SB8275 covers a FoV containing the complex merging galaxy cluster system Abell 3391 - Abell 3395. This field has been recently observed with the eROSITA X-ray space telescope where a warm gas bridge has been discovered between the cluster pair as part of a 15 Mpc intergalactic filament \citep{2021A&A...647A...2R}. The field also contains multiple bent-tail and Fanaroff-Riley class I and II (FR-I \& FR-II; \citealp{1974MNRAS.167P..31F}) radio galaxies, some belonging to the cluster system. A recent paper utilising mosaic images of SB8275 has confirmed more than 20 giant radio galaxies (at various redshifts) in the 30 deg$^2$ field \citep{2021A&A...647A...3B}.
In Figures~\ref{A3391wsclean} \&~\ref{A3391usara}, we present our images of the full FoV (3.36$^{\circ}$) of the first sub-band (SPW:1) of SB8275-15, imaged with {\tt WSClean} and uSARA, respectively. Both figures include zoomed-in views of the FR-I in Abell 3391 (a: top right panels), a FR-II cluster member in the east (b: middle right panels), and radio sources in Abell 3395 (c: bottom panels). The FR-I radio galaxies at the centre of Abell 3391 in the north and Abell 3395 in the south (see Table~\ref{tab: obs} for source names) are reconstructed with superior resolution by uSARA. This is most evident in the appearance of `braiding' and gaps in the plasma of the jets, which are not resolved in the {\tt WSClean} map. The FR-I in Abell 3391 is the brightest source in the field, with a peak pixel flux of 20 mJy as measured in the SPW:1 uSARA image, and 12 mJy as measured in the SPW:1 {\tt WSClean} image. Calibration errors in this field manifest as strong ring-like artefacts emanating from these bright FR-I radio galaxies. \citet{2021A&A...647A...3B} successfully carried out additional direction-dependent calibration to reduce the effect of these large-scale artefacts; however, we performed no such additional calibration prior to imaging these data, and therefore the extended radial artefacts remain in our final images. Over the full frequency band, source morphology and the structure of artefacts change per spectral window (see associated GIFs in \citealp{askapdataset}).
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Images/SPT2023_wsclean.pdf}
\caption{SB9351-12 -- {\tt WSClean}: Full FoV image covering the merging cluster SPT2023 and the X-shaped radio galaxy PKS 2014-55, at the first sub-band (SPW:1, centred at 817 MHz). This monochromatic image is a {\tt WSClean} restored image with a synthesised beam of $10.1 \times 16.4$ arcsec and rms noise of $\sigma_{\rm meas}\approx 60~\mu$Jy beam$^{-1}$ ($1.6~\mu$Jy pixel$^{-1}$). Panel (a) centred on the merging galaxy cluster SPT2023; panel (b) centred on a field containing compact and point sources; (c) panels centred on the X-shaped radio galaxy PKS 2014-55. Middle (c) panel: r-band optical image from DES overlaid with the {\tt WSClean} restored image, demarcated
by blue contours at the levels $\{1.6 \times 2^n\}_{1 \leq n \leq 10} ~\mu$Jy pixel$^{-1}$. Rightmost (c) panel: spectral index map obtained with the first six sub-band images of {\tt WSClean} after smoothing with a common circular Gaussian beam of 20 arcsec. All sub-band images are combined into the GIF \texttt{`SB9351-12\_WSClean'} provided in \citet{askapdataset}.
\label{SPT2023wsclean}}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Images/SPT2023_main_uSARA.pdf}
\caption{SB9351-12 -- uSARA: Full FoV image covering the merging cluster SPT2023 and the X-shaped radio galaxy PKS 2014-55, at the first sub-band (SPW:1, centred at 817 MHz). This monochromatic image is a uSARA model with a pixel resolution of $2.2 \times 2.2$ arcsec. Panels are the same as described in Figure~\ref{SPT2023wsclean}. Middle (c) panel: r-band optical image from DES overlaid with the uSARA model image, demarcated by blue contours at the levels $\{2^n\}_{1 \leq n \leq 10} ~\mu$Jy pixel$^{-1}$. Rightmost (c) panel: spectral index map obtained with the first six sub-band images of uSARA after smoothing with a common circular Gaussian beam of 5 arcsec. All sub-band images are combined into the GIF \texttt{`SB9351-12\_uSARA'} provided in \citet{askapdataset}.
\label{SPT2023usara}}
\end{figure*}
\begin{table*}
\begin{tabular}{| *{9}{c|} }
\hline
\hline
\textbf{X-Shaped RG} & $S_{817~{\rm MHz}}$ & $S_{853~{\rm MHz}}$ & $S_{889~{\rm MHz}}$ & $S_{925~{\rm MHz}}$ & $S_{961~{\rm MHz}}$ & $S_{997~{\rm MHz}}$ & $S_{1033~{\rm MHz}}$ & $S_{1069~{\rm MHz}}$ \\
\hline
uSARA model & 758.8 & 680.3 & 591.7 & 526.7 & 454.0 & 335.2 & 286.8 & 268.2 \\
\hline
{\tt WSClean} restored image & 756.4 & 721.4 & 629.3 & 546.2 & 511.8 & 409.5 & 369.0 & 352.1 \\
\hline
{\tt WSClean} smoothed model & 743.7 & 707.1 & 613.8 & 532.7 & 498.9 & 399.7 & 357.5 & 308.4 \\
\hline
\end{tabular}
\caption{Integrated flux density values in [mJy] of the X-shaped radio galaxy PKS 2014-55 for each SPW imaged with uSARA and {\tt WSClean}.
\label{tab:fluxXshape}}
\end{table*}
\begin{table*}
\begin{tabular}{| *{9}{c|} }
\hline
\hline
\textbf{SPT2023 Relic} & $S_{817~{\rm MHz}}$ & $S_{853~{\rm MHz}}$ & $S_{889~{\rm MHz}}$ & $S_{925~{\rm MHz}}$ & $S_{961~{\rm MHz}}$ & $S_{997~{\rm MHz}}$ & $S_{1033~{\rm MHz}}$ & $S_{1069~{\rm MHz}}$ \\
\hline
uSARA model & 4.9 & 4.7 & 3.5 & 3.6 & 2.9 & 1.7 & 1.5 & 1.3 \\
\hline
{\tt WSClean} restored image & 5.7 & 5.8 & 4.3 & 4.1 & 3.8 & 3.5 & 3.0 & 3.4 \\
\hline
{\tt WSClean} smoothed model & 5.0 & 5.1 & 3.8 & 3.6 & 3.6 & 2.9 & 2.6 & 2.7 \\
\hline
\end{tabular}
\caption{Integrated flux density values in [mJy] of the radio relic in SPT2023 for each SPW imaged with uSARA and {\tt WSClean}. Central frequency of each SPW is listed in MHz.
\label{tab:fluxSPT2023}}
\end{table*}
\subsubsection{Abell 3395}
The southern cluster Abell 3395 is made up of two sub-clusters, with separate X-ray peaks \citep{2021A&A...647A...2R}, indicating that this cluster is undergoing its own merger within the larger merging system. West of the central FR-I in Abell 3395, there is a faint, diffuse source with a dim core connected by arms that extend to the north-west and south-west. As reported by \citet{2021A&A...647A...3B}, the peak intensity of this dim core does not clearly coincide with a host galaxy visible in optical maps, and therefore the source cannot be classified as a typical radio galaxy associated with a host AGN. The diffuse source is possibly a so-called radio `phoenix' \citep[see][for classification]{2004rcfg.proc..335K}, re-ignited fossil plasma from past AGN activity which is no longer active. In the middle bottom panels of Figures~\ref{A3391wsclean} \&~\ref{A3391usara}, {\tt WSClean} and uSARA images are overlaid as contours on an r-band optical image from DES DR1 \citep{2018ApJS..239...18A}. As seen in these optical overlays, a cluster member galaxy sits just south of the dim radio core, at RA, Dec 06h25m09.95s, -54$^{\circ}$30'34.4'', raising the possible scenario that old AGN emission from this galaxy has drifted, or has been disturbed and shifted by hot gas in the cluster environment, leading to the observed faint emission. The structure of this phoenix candidate appears more clearly defined in our uSARA image, while its edges are much more blended into the background noise in the {\tt WSClean} image. Most notably, the north-west and south-west limbs of the phoenix clearly stand out in the uSARA reconstruction, although they appear very faint and undefined in the {\tt WSClean} map.
We measure the flux density of this candidate phoenix source using identical polygonal regions hand-drawn to closely follow the total recovered signal in each of the uSARA and {\tt WSClean} sub-band images, such that the same region is used to measure between the two imaging algorithms. In the {\tt WSClean} map of SPW:1, this polygonal region closely traces the $2\sigma_{\rm meas}$ contour line, where $\sigma_{\rm meas} = 2~\mu$Jy pixel$^{-1}$. Since the morphology of the phoenix source changes dramatically over the frequency band, a polygonal region was drawn for each spectral window. Interestingly, in the uSARA map of SPW:1 we find that the border of the phoenix's recovered signal can be traced by a $\sim 1~\mu$Jy pixel$^{-1}$ contour level (see middle bottom panel of Figure~\ref{A3391usara}), well below the estimated standard deviation of the noise in the image domain $\sigma$ (see Eq.~\ref{eq:sigma}) calculated as $6~\mu$Jy pixel$^{-1}$. This finding remains consistent through subsequent spectral windows, indicating that the uSARA algorithm successfully recovers real signal below the estimated noise levels. In the {\tt WSClean} maps, due to the blending of diffuse emission with background noise, the border of the phoenix is more clearly defined by contour lines at $2$ to $3\sigma_{\rm meas}$.
For each sub-band, the measured flux densities from both uSARA and {\tt WSClean} images are listed in Table~\ref{tab:fluxA3391}. For the first two spectral windows, uSARA flux measurements of the candidate phoenix are greater; however, for all subsequent spectral windows, the {\tt WSClean} flux measurements are consistently greater. This is likely due to the fact that the measured flux density of this region in the {\tt WSClean} map is also integrated over the noise, which may increase for higher spectral windows and is amplified by artefacts emanating from the two bright FR I sources in the field. Indeed, lower flux densities are measured from the {\tt WSClean} smoothed model images, bringing them closer to uSARA values, particularly at the lower end of the frequency band. For such a faint source, we can see how the flux measurement from {\tt WSClean} restored images can be easily overestimated when mixed with a noisy background signal.
As apparent in Table~\ref{tab:fluxA3391}, the flux of the phoenix source reconstructed by uSARA drops off dramatically as the frequency increases, indicating a steep spectral index ($\alpha > 1$), as confirmed in \citet{2021A&A...647A...3B}. However, this fading over the frequency band is less dramatic in our {\tt WSClean} results, indicating that {\tt WSClean} may be biased when wide-band imaging is deactivated. Comparing the spectral index maps of the phoenix (shown in the bottom right panels of Figures~\ref{A3391wsclean} \&~\ref{A3391usara}), the general trend of steepening over the source morphology is similar. In the uSARA map, a steeper spectral index is seen in the phoenix's north-west limb. Interestingly, both spectral index maps show the phoenix hosting a steep core ($\alpha \sim 1.5$) surrounded by a halo of flatter emission ($\alpha < 1$), in contrast to the structure seen in total intensity. This steep core is more clearly defined in the uSARA spectral index map. The fact that the brightest portion of the core is steeper in its spectral index than the surrounding fainter emission provides further evidence that this source may be an AGN remnant or phoenix. The emission around the potentially dormant core may exhibit a flatter spectral index because it has undergone gentle re-energisation \citep[\textit{e.g.}][]{2017SciA....3E1634D} from turbulence or small-scale shocks in the intracluster medium. Likewise, gentle re-energisation may explain why the ultra-steep emission in the north-west and south-west limbs is visible only at the lower end of the band (\textit{i.e.} subtle re-brightening of old AGN emission).
\subsection{Second field: SB9351-12}
Beam 12 of SB9351 covers a field containing the massive, merging galaxy cluster SPT-CL J2023-5535 (hereafter SPT2023) near the centre of the FoV and the X-shaped radio galaxy PKS 2014-55 on the western edge of the FoV. Since this beam observation lies on the western border of SB9351's full 30-square-degree field, we were unable to choose another beam observation that hosted the X-shaped radio galaxy closer to the pointing centre. Both the cluster and the radio galaxy have been recently studied: \citet{2020ApJ...900..127H} announced the discovery of a radio halo and radio relic in the merging cluster SPT2023 using the same EMU-PS observation we have selected, and \citet{2020MNRAS.495.1271C} used MeerKAT total intensity and polarisation observations to investigate the peculiar X-shaped morphology of PKS 2014-55.
In Figures~\ref{SPT2023wsclean} \&~\ref{SPT2023usara}, we present our images of the full FoV (3.36$^{\circ}$) of the first sub-band (SPW:1) of SB9351-12, imaged with {\tt WSClean} and uSARA, respectively. Both figures include zoomed-in views of the galaxy cluster SPT2023 (a: top right panels), a field of compact and point sources (b: middle right panels), and the X-shaped radio galaxy (c: bottom panels). The bright quasar RX J2024.3-5723 at the southern edge of the pointing introduces radial ring-type artefacts, which propagate up to 1 deg in the field.
In each of the zoomed-in views, uSARA shows higher resolution and more definition in the reconstruction of both compact and diffuse emission. However, the very faint emission of the radio halo in SPT2023 is not clearly recovered in the uSARA image. It is also apparent that some of the faintest point sources are missing from the uSARA image (see (b) panels of Figures~\ref{SPT2023wsclean} \& ~\ref{SPT2023usara}). This loss of the faintest point sources is likely attributed to the choice of the uSARA regularisation parameter. A lower value would enable the recovery of more of these point sources, but would also increase the amplitude of recovered calibration artefacts.
\subsubsection{X-shaped Radio Galaxy}
As apparent when comparing panels (c) of Figures~\ref{SPT2023wsclean} \& ~\ref{SPT2023usara}, the X-shaped radio galaxy exhibits more clearly defined borders in our uSARA image. In the middle (c) panels of the same figures, {\tt WSClean} and uSARA emission of the X-shaped radio galaxy are overlaid as contours on an r-band optical image from DES DR1 \citep{2018ApJS..239...18A}. Again, we find that the border of the recovered uSARA signal of the X-shaped radio galaxy traces a contour level at $\sim 1~\mu$Jy pixel$^{-1}$, well below the estimated standard deviation of the image noise for this sub-band: $\sigma = 6~\mu$Jy pixel$^{-1}$. In contrast, the diffuse edges of the east and west wings of the X-shaped radio galaxy blend into background noise in the {\tt WSClean} map, such that the border is more clearly defined by $3\sigma_{\rm meas}$ contour lines, where $\sigma_{\rm meas} = 1.6~\mu$Jy pixel$^{-1}$.
The total flux density of the radio galaxy is measured by summing the integrated flux density from three separate regions: the east wing, the core, and the west wing. The totalled flux density measurements from hand-drawn polygon regions for the lobes (roughly tracing emission bounded by the $2\sigma_{\rm meas}$ contour line in {\tt WSClean} images) and customised ellipse regions\footnote{Since the {\tt WSClean} image is convolved with the restoring beam, point sources are much more extended than they appear in our uSARA images. Therefore, we use differently sized ellipse regions to measure the flux density of the core of PKS 2014-55.} for the core are listed in Table~\ref{tab:fluxXshape}. The polygonal regions covering PKS 2014-55 were modified for each spectral window to more accurately follow the source morphology over the frequency band; however, identical regions were used to measure between the two imaging algorithms. As recorded in Table~\ref{tab:fluxXshape}, the flux density falls off as the frequency increases, indicating a steepening of the spectral index for this source. Except for the first spectral window, we see again that the flux is consistently greater in the {\tt WSClean} sub-band images, likely due to integration over the noise in the {\tt WSClean} map. When measuring from {\tt WSClean} smoothed model images, we find that the {\tt WSClean} flux densities decrease, bringing them more in line with uSARA measurements.
Spectral index maps constructed from {\tt WSClean} and uSARA sub-band images are shown in the bottom right panels of Figures~\ref{SPT2023wsclean} \&~\ref{SPT2023usara}. The general trend of steepening and flattening is consistent between the two maps, with more patches of flatter emission occurring in the lower portion of the east wing. This flattening is indicative of turbulent ``hot-spots'', coinciding with brightening seen in the intensity maps. Our uSARA spectral index map shows a dramatic steepening on the edges of the wings, but this is likely to be an artificial steepening since the diffuse structure at the edges is not recovered as well at higher frequencies (see the associated GIF available in \citealp{askapdataset}, demonstrating how the source structure changes with frequency).
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Images/9442_wsclean.pdf}
\caption{SB9442-35 -- {\tt WSClean}: Full FoV image covering PKS 2130-538, formed using the full-band data (centred at 943 MHz). This monochromatic image is a {\tt WSClean} restored image with a synthesised beam of $11.8 \times 9.4$ arcsec and rms noise of $\sigma_{\rm meas}\approx 30~\mu$Jy beam$^{-1}$ ($1~\mu$Jy pixel$^{-1}$). Panel (a) centred on a field containing extended and point-like radio galaxies; panel (c) centred on the star-forming galaxy NGC 7090; (b) panels centred on ``the dancing ghosts'' (PKS 2130-538). Leftmost (b) panel: image made with the full-band data (centred at 943 MHz); middle (b) panel: image made with only the first sub-band of data (SPW:1, centred at 817 MHz), shown for a comparison of sensitivity; rightmost (b) panel: spectral index map made with the first six sub-band images of {\tt WSClean} after smoothing with a common circular Gaussian beam of 20 arcsec. All sub-band images are combined into the GIF \texttt{`SB9442-35\_WSClean'} provided in \citet{askapdataset}. \label{9442wsclean}}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Images/9442_main_uSARA.pdf}
\caption{SB9442-35 -- uSARA: Full FoV image covering PKS 2130-538, formed using the full-band data (centred at 943 MHz). This monochromatic image is a uSARA model with a pixel resolution of $2.2 \times2.2$ arcsec. Panels are the same as described in Figure~\ref{9442wsclean}. Rightmost (b) panel: spectral index map made with the first six sub-band images of uSARA after smoothing with a common circular Gaussian beam of 5 arcsec. All sub-band images are combined into the GIF \texttt{`SB9442-35\_uSARA'} provided in \citet{askapdataset}. \label{9442usara}}
\end{figure*}
\begin{table*}
\begin{tabular}{| *{10}{c|} }
\hline
\hline
\textbf{Dancing Ghosts} & $S_{\rm full~band,~943~MHz}$ & $S_{817~{\rm MHz}}$ & $S_{853~{\rm MHz}}$ & $S_{889~{\rm MHz}}$ & $S_{925~{\rm MHz}}$ & $S_{961~{\rm MHz}}$ & $S_{997~{\rm MHz}}$ & $S_{1033~{\rm MHz}}$ & $S_{1069~{\rm MHz}}$ \\
\hline
uSARA model & 117.2 & 131.0 & 126.0 & 121.2 & 116.3 & 112.1 & 107.7 & 104.6 & 101.8 \\
\hline
{\tt WSClean} restored image & 115.5 & 128.8 & 124.5 & 120.0 & 115.3 & 111.2 & 107.2 & 104.2 & 101.0\\
\hline
{\tt WSClean} smoothed model
& 115.5 & 128.6 & 124.3 & 119.7 & 115.2 & 111.1 & 107.1 & 104.0 & 101.0 \\
\hline
\end{tabular}
\caption{Integrated flux density values in [mJy] of ``the dancing ghosts'' PKS 2130-538 for each SPW imaged with uSARA and {\tt WSClean}.
\label{tab:fluxPKS2130-53}}
\end{table*}
\subsubsection{SPT-CL J2023-5535}
As shown in panel (a) of Figure~\ref{SPT2023wsclean}, the radio halo in SPT2023 is barely recovered by {\tt WSClean}, appearing as a very faint increase in the noise across the inner regions of the cluster. The SPT2023 radio relic is apparent in the {\tt WSClean} map as a small, elongated arc at the western side of the cluster (we refer the reader to \citet{2020ApJ...900..127H} for a more detailed analysis of these cluster sources). Our uSARA image does not recover diffuse emission resembling a radio halo (see panel (a) of Figure~\ref{SPT2023usara}), likely due to the choice of regularisation parameter dampening signal below the estimated noise level; nonetheless, the radio relic appears more clearly defined and brighter than in the {\tt WSClean} image. We note that the classification of the halo and relic from \citet{2020ApJ...900..127H} was made using a full-band (288~MHz bandwidth) multi-frequency-synthesis image and that the signal in our individual sub-band images (36~MHz bandwidth) is much weaker. We report flux density measurements for the recovered relic source in SPT2023 in Table~\ref{tab:fluxSPT2023}. Flux density measurements of the relic from {\tt WSClean} smoothed model images are more in line with uSARA measurements, except for the last three sub-bands, where uSARA shows a decrease in flux.
\subsection{Third field: SB9442-35}
Beam 35 of SB9442 is centred on the complex radio source PKS 2130-538. This peculiar source, nicknamed ``the dancing ghosts,'' is shaped by the jets and lobes of two radio galaxies in the Abell 3785 galaxy cluster. With the published catalogue and initial results of EMU-PS, \citet{2021PASA...38...46N} included an analysis of the morphology of PKS 2130-538. Two AGN hosts contribute to the observed emission: a radio galaxy in the north at the centre of a bright filamentary arch, and a second radio galaxy in the south at the centre of a smaller arch on the eastern lobe. Despite the advantage of ASKAP's resolution -- revealing previously unseen structure in PKS 2130-538 -- \citet{2021PASA...38...46N} point out that it is still unclear whether these two radio galaxies are superimposed or actually interacting with each other.
Similarly to the previous two fields, eight sub-band images were reconstructed and used to obtain the flux density measurements, and the first six sub-band images were used to generate spectral index maps \citep[see associated GIF provided in][]{askapdataset}. We also formed a single full-band monochromatic image, for increased sensitivity and to demonstrate the scalability of our imaging framework. In Figures~\ref{9442wsclean} \&~\ref{9442usara}, we present our monochromatic images of the full FoV (2.5$^{\circ}$) of SB9442-35, formed from the full-band data using {\tt WSClean} and uSARA, respectively. The figures also include zoomed-in views of a region of background sources (a: top right panels), the star-forming galaxy NGC 7090 (c: mid-right panels), and ``the dancing ghosts'' (b: bottom panels). Unlike the two previous fields, the SB9442-35 images do not exhibit large-amplitude calibration artefacts. In fact, only a few compact radio galaxies, catalogued by the Sydney University Molonglo Sky Survey (SUMSS; \citealp{sumss}), give rise to localised ring-like artefacts, which do not hamper the recovery of our targets of interest.
Overall, there is a clear difference between {\tt WSClean} and uSARA in terms of resolution. While uSARA recovers structure at higher resolution, the diffuse components of the extended radio galaxy in panel (a) and of the star-forming galaxy in panel (c) are also fully recovered, on par with the {\tt WSClean} map. However, the faintest point sources (near the noise level in the {\tt WSClean} image) in panel (a) are not fully recovered by uSARA. Again, we attribute this to the choice of the uSARA regularisation parameter.
\subsubsection{The Dancing Ghosts}
We focus on the complex emission of PKS 2130-538, displayed in panels (b) of Figures~\ref{9442wsclean} \&~\ref{9442usara}. The left and middle panels represent a zoomed-in view of the ``dancing ghosts'' from the full-band image and the SPW:1 sub-band image, respectively. Looking at the {\tt WSClean} images, as expected, the full-band restored image exhibits lower background noise and slightly more detail in the source, in comparison to the sub-band map. As for the uSARA images, a clear improvement in both resolution and sensitivity can be observed in the full-band image. The bridges, which consist of the jets from the northern and southern AGN, are more tightly collimated in the full-band image when compared to the sub-band image. Several faint point sources, missing from the sub-band image, also emerge. Interestingly, one can notice that the filamentary structure extending from the eastern lobe is more clearly defined and brighter in the uSARA reconstruction. This structure has a similar appearance to the synchrotron threads that were recently discovered \citep{2020A&A...636L...1R,Dabbech22}. Recovery of this structure is an exciting result, as we may find more evidence of magnetic threads branching from and connecting extended lobes in radio galaxies with ultra-sensitive, super-resolved images.
The bottom right panels of Figures~\ref{9442wsclean} \&~\ref{9442usara} show spectral index maps of the dancing ghosts, inferred from the sub-band images. The uSARA spectral index map contains much more detailed information on the spectra of the turbulent lobes of the northern AGN. We also observe that the higher intensity hot-spots exhibit flatter spectral indices, with steepening as the emission traces southward. The second AGN at the centre of the south-east bridge shows a flat core, as expected, and a second set of jets that bend back toward the north-east lobe of the first AGN. In the {\tt WSClean} maps, the second south-east lobe appears to blend back into the emission of the first north-east lobe; however, with uSARA we are able to see a distinct separation of these two portions, indicating that the emission may be somewhat superimposed when seen in projection. The steepness of the spectra in this region does not indicate any re-energising from one lobe pushing onto the other, as it stays fairly consistent in a steep range of $1 < \alpha < 2$ in both spectral index maps. The north-east thread exhibits a sharp, collimated spectral trend from steep to slightly flat and back to steep -- in contrast to the turbulent lobes -- with the index in our uSARA map decreasing from $\alpha \simeq 2.9$ to $\alpha \simeq 0.9$ and then increasing again to $\alpha \simeq 3.2$ when following the thread from west to east.
In Table~\ref{tab:fluxPKS2130-53}, flux densities for the dancing ghosts are reported. The integrated flux density was measured from identical polygonal regions tracing the full recovered signal in both uSARA and {\tt WSClean} maps, including the eastern and western lobes, the northern arch, the south-eastern jet, and the north-eastern filament. Unlike the other diffuse sources of interest from the previous fields, the uSARA flux densities are much more consistent with the {\tt WSClean} flux densities. This presents an interesting case that may be explained by i) the source having overall flatter spectral behaviour, and ii) the lower noise level due to the absence of large-scale calibration artefacts in this field. At higher spectral windows, uSARA flux densities are even slightly greater than those of {\tt WSClean}, opposing the trend seen for the fainter, steeper diffuse sources of interest in the previous fields. This may indicate that the faintest and steepest sources, with surface brightness near estimated noise levels, are more difficult to recover with uSARA, given our current choice of regularisation parameter. In both uSARA and {\tt WSClean} full-band images, the borders of the dancing ghosts are clearly defined by a contour level at $3~\mu$Jy pixel$^{-1}$, which is $3\sigma_{\rm meas}$ in the {\tt WSClean} full-band map (where $\sigma_{\rm meas} = 1~\mu$Jy pixel$^{-1}$). The uSARA imaging framework calculated the estimated standard deviation of the noise in the image domain as $\sigma = 8~\mu$Jy pixel$^{-1}$ for the full-band data; however, both uSARA and {\tt WSClean} images have captured signal below this value.
\section{Discussion}\label{sec:disc}
In this section, we discuss some specific points regarding the experiments performed with our novel automated and parallelised imaging framework, underpinned by the uSARA imaging algorithm \citep{Terris22,Dabbech22}. In comparison to the widely-used {\tt WSClean} imager, our uSARA-ASKAP images host greater detail at high resolution, resolving never-before-seen structure at scales down to 2.2 arcsec.
Arguably the most exciting feature of our reconstructed uSARA maps is that they successfully capture both compact and diffuse radio emission. Standard CLEAN imaging methods often require a compromise on resolution in order to gain sensitivity to diffuse emission, necessitating multiple maps at various resolutions to accurately recover emission on separate spatial scales. In many scientific publications and radio surveys, imaging results are consequently separated into one high-resolution map and another low-resolution map, in which point sources and compact emission may be subtracted (usually through a crude technique that can leave holes in the image). Here, we have demonstrated that uSARA reconstructs images with both high resolution and sensitivity to diffuse emission, enabling advanced scientific analyses of physically complex sources. Thanks to uSARA's super-resolution and superior sensitivity to diffuse and extended components, we are also able to generate highly-detailed spectral index maps, which aid in the classification of our targeted sources.
Moreover, we argue that uSARA can substantially boost the sensitivity and resolution of existing surveys. Unlike {\tt WSClean}, no residual image is added to the final uSARA reconstruction. It is therefore highly likely that the flux densities of low surface-brightness sources measured from uSARA images are a closer approximation to the true source flux.
\paragraph*{Calibration errors.}
Several of the ASKAPsoft mosaic continuum images from early science and pilot fields show that calibrated data are still affected by radial artefacts that propagate -- up to several degrees in some cases -- from bright radio sources. For the ASKAP data considered in this work, the largest source of DDEs (manifested as antenna-based image modulations) is most likely attributed to ASKAP's synthesised beams. ASKAP's beamforming hardware applies digitised weights to the 188 elemental receivers and sums them to form a single primary beam at 1 MHz intervals. Currently, ASKAP uses a standard algorithm maximising the signal-to-noise ratio to calculate the beamforming weights \citep[\emph{e.g.}][]{Jeffs2008,Ivashina2011}. Holographic observations of ASKAP's beam patterns \citep[see][for details]{Hotan16} show that their sensitivities vary over frequency from antenna to antenna. The complex-valued sensitivity pattern of the PAF beams therefore introduces DDEs which need to be modelled. Furthermore, antenna pointing errors can introduce direction-dependent antenna gains. ASKAPsoft corrects only for DIEs, and imperfections of the calibration undermine the accuracy of the measurement operator model. Consequently, reconstructed images can exhibit imaging artefacts and, more seriously, suffer from severely limited dynamic ranges. Examples of such artefacts can be seen in the field SB8275-15 containing the merging cluster system A3391-95, where large-amplitude ringing artefacts surround the two bright FR-I radio galaxies (at the centres of the Abell clusters) in both the uSARA and {\tt WSClean} reconstructions. In spite of the lack of DDE calibration, uSARA overall exhibits higher reconstruction quality than CLEAN. On a further note, uSARA can easily be plugged as the imaging module into a joint calibration and imaging framework \citep{Dabbech21}.
\paragraph*{In-band spectral index maps.}
Spectral index maps obtained from sub-band monochromatic imaging with uSARA have proven to be more detailed than those obtained with {\tt WSClean}. Since all sub-band images share a common cell size at least two times smaller than the observation's nominal resolution at the highest sub-band, the spectral index maps could be inferred using a small blurring kernel, preserving their high level of detail in comparison with the {\tt WSClean} spectral index maps.
We have found that the classification of some sources is more clearly defined based on the spectral behaviour exhibited in uSARA maps. Interestingly, some sources have shown steeper spectral indices in uSARA spectral index maps when compared to their {\tt WSClean} counterparts, which may result from the increased sensitivity brought by uSARA. However, this spectral trend warrants further investigation by moving to wide-band deconvolution for a more precise spectral analysis.
\section{Computational performance}
\label{sec:comp-cost}
To assess the computational performance of our imaging framework, we report specific information for all uSARA imaging experiments in Tables~\ref{tab:timeA3391}--\ref{tab:time9442}. Details of the measurement operator are listed in these tables, including the number of processed visibilities $M$, the number of $w$-stacks $P$, the memory requirements to host its underlying sparse matrices ${{m}_{\ensuremath{\boldsymbol{\mathsf{H}}}/\ensuremath{\boldsymbol{\mathsf{G}}}}}$, and the computational cost in CPU core hours of the $\bm{\Phi}^\dagger \bm{\Phi}$ pre-computation, C\textsubscript{$\bm{\Phi}^\dagger \bm{\Phi}$}. We also report the total compute time T\textsubscript{Image} and the computational cost in CPU core hours C\textsubscript{Image} of the deconvolution in the same tables. For comparison purposes, we report both T\textsubscript{Image} and C\textsubscript{Image} of the {\tt WSClean} runs in Table~\ref{tab:time-wsclean}. For uSARA, the number of $w$-stacks retained in each experiment from the planning step determines the sparsity of the de-gridding/holographic matrices underpinning the measurement operator, and consequently the memory requirements to host $\bm{\Phi}^\dagger \bm{\Phi}$. The decomposition of the data and associated measurement operator into smaller blocks, and the resulting number of deployed CPU cores, are inferred from the data-clustering step (see Section~\ref{ssec:prallel}). From Tables~\ref{tab:timeA3391}--\ref{tab:time9442}, one can notice that, in general, a larger number of visibilities increases the computational cost of the measurement operator's pre-computation. Specific to the FB iterations of uSARA, the compute time of the forward step is dominated by the Fourier transforms performed, while that of the backward step is driven by its sub-iterative nature. Both steps, being parallelised, are of comparable computing time, with the latter taking about 1.6 times longer on average.
Although the number of $w$-stacks considered in {\tt WSClean} is significantly larger, the reported computational time and cost in CPU core hours of uSARA are about 20 times higher on average. The superior computational performance of the standard {\tt WSClean} RI imager is attributed to (i) its fast approximate data fidelity steps, whereby visibility gridding and de-gridding operations are conducted only a few times, and (ii) its simplistic image model, in particular given the overall spatial compactness of the radio emission in the selected ASKAP fields, forcing multi-scale CLEAN to operate only on small scales. However, the simple regularisation approach underpinning {\tt WSClean} comes at the expense of lower imaging quality.
Even though {\tt WSClean} is about one order of magnitude faster than our imaging algorithm in its current MATLAB prototype implementation, a substantial improvement of uSARA's computational performance is expected when it is migrated to a production implementation using C++ and parallel libraries.
\begin{table}
\begin{tabular}{cccccccc}
\hline
\hline
\textbf{SB8275} & $M$ & $P$ & $F$ & ${{m}_{\ensuremath{\boldsymbol{\mathsf{G}}}}}$ & C\textsubscript{$\bm{\Phi}^\dagger \bm{\Phi}$} & C\textsubscript{Image}& T\textsubscript{Image} \\
& $\times 10^6$ & & & [GB] &[CPUh]&[CPUh]& [h] \\
\hline
\textbf{SPW:1} & 56.8 & 9 & 42 & 153 & 92 & 268 & 5.5 \\
\hline
\textbf{SPW:2} & 75.8 & 10 & 49 & 186 & 101 & 313 & 5.4 \\
\hline
\textbf{SPW:3} & 74.5 & 9 & 56 & 216 & 93 & 378 & 5.7 \\
\hline
\textbf{SPW:4} & 76.9 & 9 & 64 & 228 & 112 & 400 & 5.9 \\
\hline
\textbf{SPW:5} & 76.0 & 9 & 64 & 241 & 112 & 422 & 6.1 \\
\hline
\textbf{SPW:6} & 76.9 & 8 & 64 & 277 & 113 & 481 & 6.2 \\
\hline
\textbf{SPW:7} & 69.2 & 9 & 64 & 239 & 115 & 397 & 5.8 \\
\hline
\textbf{SPW:8} & 66.3 & 9 & 64 & 244 & 103 & 467 & 6.9 \\
\hline
\end{tabular}
\caption{SB8275-15 -- uSARA: settings and computational cost of the reconstruction of the sub-band images. $M$ is the number of measured visibilities; $P$ is the number of $w$-stacks; $F$ is the number of image facets used in the uSARA denoiser; $m_{\ensuremath{\boldsymbol{\mathsf{G}}}}$ [GB] refers to the memory required to host the de-gridding matrices; C\textsubscript{$\bm{\Phi}^\dagger \bm{\Phi}$} [CPUh] is the computational cost of the pre-computation of the operator $\bm{\Phi}^\dagger \bm{\Phi}$ in CPU core hours; C\textsubscript{Image} [CPUh] is the computational cost of the deconvolution in CPU core hours; T\textsubscript{Image} is the time in hours of the deconvolution. \label{tab:timeA3391}}
\end{table}
\begin{table}
\begin{tabular}{cccccccc}
\hline
\hline
\textbf{SB9351} & $M$ & $P$ & $F$ & ${{m}_{\ensuremath{\boldsymbol{\mathsf{G}}}}}$ & C\textsubscript{$\bm{\Phi}^\dagger \bm{\Phi}$} & C\textsubscript{Image}& T\textsubscript{Image} \\
& $\times 10^6$ & & & [GB] &[CPUh]&[CPUh]& [h] \\
\hline
\textbf{SPW:1} & 60.8 & 7 & 36 & 140 & 55 & 358 & 8.1 \\
\hline
\textbf{SPW:2} & 50.3 & 6 & 36 & 141 & 54 & 367 & 8.3 \\
\hline
\textbf{SPW:3} & 43.7 & 6 & 36 & 131 & 55 & 354 & 8.4 \\
\hline
\textbf{SPW:4} & 49.8 & 7 & 36 & 134 & 58 & 342 & 8.1 \\
\hline
\textbf{SPW:5} & 60.7 & 9 & 36 & 131 & 67 & 349 & 8.1 \\
\hline
\textbf{SPW:6} & 53.2 & 8 & 36 & 137 & 69 & 364 & 8.5 \\
\hline
\textbf{SPW:7} & 63.0 & 9 & 42 & 147 & 78 & 388 & 8.2 \\
\hline
\textbf{SPW:8} & 50.6 & 8 & 36 & 139 & 72 & 379 & 8.8 \\
\hline
\end{tabular}
\caption{SB9351-12 -- uSARA: settings and computational cost of the reconstruction of the sub-band images. See the caption of Table~\ref{tab:timeA3391} for the complete list of the acronyms. \label{tab:timeSPT2023}}
\end{table}
\begin{table}
\begin{tabular}{cccccccc}
\hline
\hline
\textbf{SB9442} & $M$ & $P$ & $F$ & ${{m}_{\ensuremath{\boldsymbol{\mathsf{H}}}/\ensuremath{\boldsymbol{\mathsf{G}}}}}$ & C\textsubscript{$\bm{\Phi}^\dagger \bm{\Phi}$} & C\textsubscript{Image}& T\textsubscript{Image} \\
&$\times10^6$ & & &[GB] &[CPUh]&[CPUh]&[h] \\
\hline
\textbf{Full-band}& 467 & 19 & 64 & 80 & 281 & 770 & 4.3 \\
\hline
\textbf{SPW:1} & 50 & 6 & 36 & 94 & 40 & 210 & 5.2 \\
\hline
\textbf{SPW:2} & 62 & 7 & 36 & 107 & 50 & 221 & 5.5 \\
\hline
\textbf{SPW:3} & 49 & 7 & 36 & 90 & 44 & 230 & 5.7 \\
\hline
\textbf{SPW:4} & 51 & 8 & 36 & 86 & 44 & 237 & 5.9 \\
\hline
\textbf{SPW:5} & 57 & 8 & 36 & 101 & 47 & 225 & 5.6 \\
\hline
\textbf{SPW:6} & 61 & 8 & 36 & 112 & 50 & 215 & 5.4 \\
\hline
\textbf{SPW:7} & 67 & 9 & 36 & 118 & 59 & 229 & 5.7 \\
\hline
\textbf{SPW:8} & 67 & 9 & 36 & 123 & 55 & 229 & 5.4 \\
\hline
\end{tabular}
\caption{SB9442-35 -- uSARA: settings and computational cost of the reconstruction of sub-band and full-band images. See the caption of Table~\ref{tab:timeA3391} for the complete list of the acronyms. In the full-band imaging experiment, the operator $\bm{\Phi}^\dagger \bm{\Phi}$ is encoded via its underlying holographic matrices. We therefore report the memory occupied by these matrices, denoted by $m_{\ensuremath{\boldsymbol{\mathsf{H}}}}$ [GB]. For all sub-band imaging experiments, we report the memory occupied by the de-gridding matrices denoted by $m_{\ensuremath{\boldsymbol{\mathsf{G}}}}$ [GB]. \label{tab:time9442}}
\end{table}
\begin{table*}
\begin{tabular}{cccccccccc}
\hline
\hline
& & \textbf{SB8275} & & &\textbf{SB9351} & & &\textbf{SB9442} &\\
\hline
& $P$ & C\textsubscript{Image}& T\textsubscript{Image}&$P$ &C\textsubscript{Image}&T\textsubscript{Image}&$P$&C\textsubscript{Image} & T\textsubscript{Image} \\
& & [CPUh] & [h] & & [CPUh] & [h] & & [CPUh] & [h] \\
\hline
\textbf{Full-band} & -- & -- & -- & -- & -- & -- & 72 & 58 & 0.8 \\
\hline
\textbf{SPW:1} & 89 & 22 & 0.6 & 72 & 22 & 0.6 & 72 & 14 & 0.4 \\
\hline
\textbf{SPW:2} & 92 & 22 & 0.6 & 72 & 18 & 0.5 & 72 & 11 & 0.3 \\
\hline
\textbf{SPW:3} & 96 & 22 & 0.6 & 75 & 14 & 0.4 & 72 & 11 & 0.3 \\
\hline
\textbf{SPW:4} & 99 & 22 & 0.6 & 78 & 18 & 0.5 & 72 & 7 & 0.2 \\
\hline
\textbf{SPW:5} & 103 & 22 & 0.6 & 81 & 18 & 0.5 & 72 & 11 & 0.3 \\
\hline
\textbf{SPW:6} & 107 & 22 & 0.6 & 84 & 18 & 0.5 & 72 & 11 & 0.3 \\
\hline
\textbf{SPW:7} & 110 & 22 & 0.6 & 87 & 22 & 0.6 & 72 & 11 & 0.3 \\
\hline
\textbf{SPW:8} & 114 & 25 & 0.7 & 90 & 18 & 0.5 & 72 & 11 & 0.3 \\
\hline
\end{tabular}
\caption{{\tt{WSClean}} settings and computational cost of all sub-band and full-band images: for each experiment, the number of $w$-stacks ($P$), the computational cost of the deconvolution in CPU core hours (C\textsubscript{Image}), and the imaging time in hours (T\textsubscript{Image}) are reported. Each experiment used a single node on Cirrus, leveraging its 36 CPU cores. Since each CPU core has two threads, 72 \textit{virtual} cores were detected, and consequently, the minimum number of $w$-stacks was set automatically to 72.\label{tab:time-wsclean}}
\end{table*}
\section{Conclusions}\label{sec:con}
This article presents a comprehensive study of a recently proposed monochromatic wide-field imaging framework for RI, validated on GB-sized, imperfectly calibrated data from Early Science and Pilot Survey ASKAP observations. The framework's underlying sparsity-based imaging algorithm, uSARA, yields image reconstructions with both super-resolution and high sensitivity to diffuse radio emission. Relying on a highly parallelised and automated implementation of the operators and functions involved, the imaging framework enables the formation of wide-field radio maps with large image dimensions.
Focusing on science cases characterised by complex morphology exhibiting both compact and faint diffuse emission, we have carried out the validation of the uSARA imaging algorithm through flux density measurements and spectral index maps generated from sub-band monochromatic images, in comparison with the widely-used {\tt WSClean} imager. In spite of the large-scale artefacts due to imperfect calibration present in some of the selected fields, the uSARA images show better reconstruction quality overall in comparison to {\tt WSClean}, both in terms of resolution and sensitivity. Our uSARA-ASKAP images exhibit more detailed structure of the targeted radio emission -- most clearly seen in the super-resolved jets and lobes of radio galaxies in each of the selected FoVs. In addition to high-resolution structure, faint diffuse emission has also been captured by uSARA, revealing more extended emission of intracluster radio sources that appears blended into the background noise in sub-band {\tt WSClean} maps.
An advantageous result of our super-resolved uSARA-ASKAP images is the ability to generate more detailed spectral index maps. High-resolution structure, resembling turbulent emission in the radio lobes of several imaged radio galaxies, appears to closely trace small changes in the steepness of the observed spectra. Furthermore, each of our primary sources of interest exhibits steeper spectra in the uSARA spectral index maps, attributed to the increase in sensitivity and resolution delivered by the algorithm. Nonetheless, our spectral analysis of the target sources remains preliminary and warrants a deeper study using wide-band imaging. Planned upgrades to the uSARA framework -- which will incorporate joint DDE calibration \citep{repetti17} and wide-band deconvolution \citep{thouvenin22a} -- promise more robust future images and, consequently, more precise spectral information across all frequency channels.
Regarding the scalability of the proposed imaging framework, we have demonstrated that its fully automated and parallel measurement operators facilitate image reconstruction from data up to 7.5~GB in size. Yet, in its current MATLAB implementation, its computational cost remains higher than that of the benchmark imager {\tt WSClean}. Nevertheless, its migration to C++, leveraging parallel libraries, can substantially boost its computational efficiency, thus narrowing the computational gap with the state-of-the-art imager.
In the sequel to this series, Part II: ``AIRI validated on ASKAP data,'' we investigate uSARA's sister algorithm AIRI, the second deconvolution algorithm built into our parallel automated imaging framework. AIRI differs from uSARA by exploiting learned Deep Neural Network (DNN) denoisers in lieu of uSARA's proximal operator in the underpinning FB algorithmic structure \eqref{eq:fb}. AIRI was recently demonstrated on MeerKAT data \citep{Dabbech22}. The algorithm will be validated on the same challenging ASKAP data as here, with the aim of further demonstrating its potential to deliver higher imaging precision and faster reconstruction than uSARA.
\section*{Acknowledgements}
The first two authors contributed equally to this work. This work was supported by the UK Research and Innovation under the EPSRC grants EP/T028270/1 and EP/T028351/1, and the STFC grant ST/W000970/1. The research used Cirrus, a UK National Tier-2 HPC Service at EPCC funded by the University of Edinburgh and EPSRC (EP/P020267/1). ASKAP, from which the data under scrutiny originate, is part of the Australia Telescope National Facility managed by CSIRO. This project used public archival data from the Dark Energy Survey (DES).
\section*{Data and Code Availability}
The ASKAP data underlying this article (calibrated visibilities and mosaic images of Scheduling Blocks) are made publicly available for viewing and download on the \href{https://data.csiro.au/collections/#domain/casdaObservation/search/}{CSIRO ASKAP Science Data Archive} (CASDA; \citealp{2017ASPC..512...73C}), and can be accessed with the unique Project Identifiers AS034 and AS101. The reconstructed images in FITS format as well as the GIF files showing the imaged fields over the spectral windows are made available in \citet{askapdataset}. The uSARA and AIRI code will become available in a later release of the Puri-Psi library for RI imaging.
\bibliographystyle{mnras}
|
{
"arxiv_id": "2302.14227",
"language": "en",
"timestamp": "2023-03-01T02:05:46",
"url": "https://arxiv.org/abs/2302.14227",
"yymm": "2302"
} |
\section{Symbols and Notations}
\label{sec:appendix}
\captionof{table}{\small Symbols and Notations}
\begin{table}[H]
\centering
\captionsetup{width=1\linewidth}
\begin{tabular}[c]{l l}
\hline
$u(\cdot)$ & PDE solution \\
$\boldsymbol{\theta}$ & PINN learnable parameters \\
$u_{\boldsymbol{\theta}}(\cdot)$ & PINN PDE prediction \\
$\mathcal{R}(\cdot)$ & PDE residual \\
s-d & Stacked-decomposition \\
w-s & Window-sweeping \\
$N_{i}$ & Number of interface points \\
$N_{r}$ & Number of residual collocation points \\
$N_{ic}$ & Number of initial condition points \\
$N_{b}$ & Number of boundary points \\
$M$ & Order of Fourier feature encoding \\
$dS$ & Number of sub-networks training at once \\
$n$ & Total number of time-slabs \\
$\Omega$ & Spatial domain of interest \\
$T$ & Temporal domain of interest \\
$\mathbf{x}$ & Spatial value, $\mathbf{x} \in \Omega$ \\
$t$ & Temporal value, $t \in T$ \\
FT & Fine tuning \\
TL & Transfer learning \\
BC & Boundary Conditions \\
bc & Backward-compatibility \\
IC & Initial condition \\
ic & Interface condition \\
\hline
\end{tabular}
\end{table}
\section{Auxiliary Results}
\label{sec:auxiliary}
\subsection{Convection: Fine Tuning vs. Transfer Learning}
\label{sec:ft_vs_tl_appendix}
Figure \ref{fig:Ft-vs-TR} shows the learnable-parameter distributions of each layer in the final time-slab network for the results reported in Table \ref{tb:convec}. The respective models in the table are s-d PINN (n = 10, dS = 1, ic = $C^p$) + FT and s-d PINN (n = 10, dS = 1, ic = $C^p$) + TL. We observe that when the first two layers of the network are frozen during transfer learning, the final three layers must over-adjust to compensate for the reduced expressivity of the network. This can be seen in the distribution plots: the parameters under fine tuning stay within roughly $[-1, 1]$, while those under transfer learning spread to $[-4, 4]$. In turn, this leads to longer training times for transfer learning compared to fine tuning, as the model with greater expressivity converges to the solution more quickly, whereas the model with frozen layers must reach more extreme parameter values to satisfy the solution. Note that this observation applies only to temporal decomposition with PINNs and is in no way a comment on the trade-off between fine tuning and transfer learning in other applications.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{FT-vs-TR.png}
\caption{\small s-d PINN (n = 10, dS = 1, ic = $C^p$) learnable parameter distributions at the end of training for the final time-slab for results reported in Table \ref{tb:convec}.}
\label{fig:Ft-vs-TR}
\end{figure}
\subsection{Allen-Cahn: Training Dynamics of Stacked-Decomposition \& Window-Sweeping}
\label{sec:dynamics_sd_vs_ws_appendix}
In Figure \ref{fig:sd_vs_ws}, the training dynamics are reported for stacked-decomposition and window-sweeping using settings that recover time-marching with fine tuning and bc-PINNs, respectively. The loss in both jumps as either a new subdomain or subnetwork is added in time-marching, or as the residual subdomain moves forward, leaving the prior subdomain to be considered backward compatible in bc-PINNs. The difference is that for stacked-decomposition, unless n = dS, the initial conditions will eventually no longer be considered in the loss minimization; the information is purely stored and propagated through later-in-time subdomains and interfaces. With window-sweeping, the initial condition is always included during optimization. Conversely, backward-compatibility, if used, does not take effect until after the initial training near the starting time, as seen in the plots. Since only one network is used in window-sweeping, the residual and, therefore, the prediction will be continuous, whereas stacked-decomposition will have visible discontinuities at the interfaces. However, multiple networks have more expressivity than a single one, as long as training challenges do not occur and interfaces are well respected, leading to smaller residuals. This is particularly true near the final time, as the residuals there contribute minimally to a global network but significantly to a local one.
\label{ssec:loss_convergence-appendix}
\begin{figure}[H]
\centering
\includegraphics[width=0.85\textwidth]{sd_vs_ws_loss.png}
\caption{\small Plots of loss as a function of training epochs and the full domain PDE residual at the end of training for results reported in Table \ref{tb:allen-cahn} (Top) s-d PINN (n = 10, dS = 1, ic = $C^p$) + FT on the Allen-Cahn problem. (Bottom) w-s PINN (kernel = ``uniform'', width = dt = $0.1$)$^c$ on the Allen-Cahn problem.}
\label{fig:sd_vs_ws}
\end{figure}
\subsection{Allen-Cahn: Causal Weights}
\label{sec:causal_weights_appendix}
A loss tolerance was used to propagate all stacked-decomposition and window-sweeping methods, eliminating the fixed-epoch conditions and vastly reducing computational cost. To fairly compare accuracies and training times, the loss tolerance is consistent between settings. A value of $10^{-7}$ is used for Allen-Cahn, for which a lower tolerance increases training time with no improvement in accuracy, while a higher tolerance decreases accuracy; this trade-off is discussed throughout the manuscript. We find that for higher causality parameters, such as $\epsilon = (10, 100)$ described in the original paper, the change in loss drops below $10^{-8}$ within a few hundred iterations. For smaller values, the method does not enforce causality strongly enough to overcome the training challenge. This sensitivity is addressed in the original paper by using a cascading $\epsilon$ with increasing steepness. In effect, this sweeps across the domain five times instead of once, which is not in the scope of our study, although our window-sweeping method can be used for multiple sweeps in the same way.
Therefore, we provide self-contained results for this setting, so the reported values are not misinterpreted as advocating for or against a setting. The main contribution of the paper is to provide a unified framework in which many methods can be described, improve scalability, and generally overcome unmodified PINN training challenges. To this end, the change-in-loss tolerance is removed, and a termination condition of $\min_{i} w_{i} > \delta = 0.95$ is used.
Although causal weights appear continuous, due to its implementation in \cite{wang2022respecting}, the scheme is also broken up into time snapshots like any other kernel in our window-sweeping method. This is due to the loss being formed by the mean of the mean squared error for each snapshot, unlike the standard PINN residual loss, which is the mean squared error of all points in the domain. Therefore, this formulation acts differently than the weight masks we employ in the linear and error function kernels of window-sweeping, where all the points are considered separately.
For non-grid sampling, we attempted to run the original implementation; however, when weighting without snapshots, the spatial correlation is broken, causing the method to fail. We therefore adapt the method to non-grid sampling, in this case Latin Hypercube Sampling (LHS), by treating it similarly to a grid in terms of the algorithm. Given a $100 \times 100$ grid on $T = [0,1]$, a sequence of 100 weights is generated, representing an equidistant sampling of 100 spatial points at every 0.01 increment in time. For LHS, we simply order the set of 10,000 points in time and separate them into 100 time-ordered sets of 100 points each. This gives a weighting scheme similar to grid sampling, since the mean over each set of 100 points is used, whereas before there were individual weights for each point. This modification is not restricted to grids and still retains spatial correlation if the sampling is dense enough.
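The following short sketch illustrates this grouping (the sampler and array names are ours, for illustration only):
\begin{verbatim}
import numpy as np
from scipy.stats import qmc

# Order 10,000 LHS samples in time and split them into 100
# time-ordered sets of 100 points each, mimicking grid snapshots.
sampler = qmc.LatinHypercube(d=2, seed=0)
pts = sampler.random(10_000)            # columns: (x, t) in [0, 1]^2
pts = pts[np.argsort(pts[:, 1])]        # sort all points by time
snapshots = pts.reshape(100, 100, 2)    # 100 sets of 100 points

# One causal weight is then applied to the mean residual of each
# 100-point set, mirroring the per-snapshot grid formulation.
\end{verbatim}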
In Table \ref{tb:allen-cahn-causal}, window-sweeping with the causal weights kernel is used to solve the Allen-Cahn problem for a single pass of $\epsilon$ = 10. The modification made to alleviate the grid-sampling restriction has no adverse effect on the method's performance. Additionally, by utilizing the null-set segmentation, applied by only adding future sets (out of 100) when $\min_{i} w_{i} > 0.05$, we reduce the training time by not predicting the residual of points with negligible weights later in time. This can be extended to using the bc-set segmentation to further improve training time, as shown for the other window-sweeping kernels. Under these settings, we do not achieve the $10^{-3}$ relative $L_2$ errors reported in the original paper, due to several factors. First, we use a less restrictive termination condition on $\delta$ such that the training time is in the same realm as the other kernels, which use the change-in-loss tolerance. We find that the difference in training time between $\delta = 0.95$ and $0.99$ is substantial. The success, in terms of accuracy rather than cost, of the causal weights reported for Allen-Cahn in \cite{wang2022respecting} is likely largely due to the 10-100$\times$ increase in iterations and the 10-100$\times$ increase in network parameters $\boldsymbol{\theta}$, as well as other modifications. All window-sweeping kernels reported achieve comparable results under the chosen settings. We make no assertion as to which method performs best in the extreme training limit, in terms of accuracy or cost, as that is not within the scope of this study.
\begin{table}[H]
\centering
\captionsetup{width=1\linewidth}
\caption{\small Table of $L_2$ relative error and training time for different window-sweeping settings. All methods use M = 10 unless otherwise stated. Note that window-sweeping is abbreviated w-s. $^a$(bc-set = off, null-set = off), $^b$(bc-set = off, null-set = on)} \label{tb:allen-cahn-causal}
\begin{tabular}[c]{l | c | c}
\toprule
Model settings & Relative $L_2$ Error & Training time \\
\hline
w-s PINN (kernel = causal weights, $\epsilon$ = 10)$^a$ + Grid sample & $3.72 \times 10^{-2}$ & 1,495 \\
w-s PINN (kernel = causal weights, $\epsilon$ = 10)$^a$ & $3.37 \times 10^{-2}$ & 1,452 \\
w-s PINN (kernel = causal weights, $\epsilon$ = 10)$^b$ & $4.03 \times 10^{-2}$ & 967 \\
\bottomrule
\end{tabular}
\end{table}
\subsection{KdV (long-time) PINN Prediction}
\label{ssec:kdv_long-appendix}
In Figure \ref{fig:kdv_long_PINN}, while the PINN solves the KdV problem for $T = [0,1]$, it fails for $T = [0,5]$. Interestingly, the issue is not the zero-solution challenge seen in the long-time convection problem, but rather the incorrect-propagation challenge seen in the Allen-Cahn problem. This is likely because the traveling wave in the convection problem is extended by the periodic conditions. Since that feature is not present in the KdV problem, the increased training difficulty of a larger temporal domain manifests itself as incorrect information propagation instead.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{kdv_longTime.png}
\caption{\small PINN prediction for the long-time KdV problem in Section \ref{ssec:kdv}, representative of the incorrect-propagation training challenge.}
\label{fig:kdv_long_PINN}
\end{figure}
\section{Fourier Feature encoding ($\mathcal{C}^{\infty}$ periodic conditions)}
\label{sec:fourier_appendix}
Following the work from \cite{dong2021method, wang2022respecting}, we can exactly enforce $\mathcal{C}^{\infty}$ periodic boundary conditions by applying a Fourier feature encoding to the spatial input of the network. The spatial encoding is
\begin{align}
v(x) = \{ 1, \cos(\omega x), \sin(\omega x), \ldots, \cos(M\omega x), \sin(M\omega x)\}
\end{align}
where $\omega = \frac{2 \pi}{L}$, $L = x_{max} - x_{min}$, and $M$ is a non-negative integer representing the highest sinusoidal frequency of the encoding. A higher $M$ leads to higher-frequency components in the output after passing through nonlinear activation functions, which may be helpful in PDE problems with high-frequency solution components, such as the Allen-Cahn problem considered here. All choices of $M$ are shown to be $\mathcal{C}^{\infty}$ periodic in Lemma 2.1 of \cite{dong2021method}.
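As a minimal sketch, the encoding can be implemented as follows (PyTorch; the function name and shapes are ours, for illustration):
\begin{verbatim}
import math
import torch

def fourier_encode(x, M, L):
    # Fourier feature encoding v(x) enforcing C-infinity periodicity
    # on a spatial domain of period L = x_max - x_min.
    omega = 2 * math.pi / L
    feats = [torch.ones_like(x)]
    for m in range(1, M + 1):
        feats += [torch.cos(m * omega * x), torch.sin(m * omega * x)]
    return torch.cat(feats, dim=-1)     # shape: (..., 2M + 1)

v = fourier_encode(torch.rand(128, 1), M=10, L=2.0)
\end{verbatim}
The encoded features $v(x)$ then replace the raw spatial coordinate as the network input.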
\section{Background}
\label{sec:background}
\subsection{Physics-Informed Neural Networks (PINNs)}
\label{ssec:pinns}
Physics-Informed Neural Networks (PINNs) were originally proposed in \cite{raissi2019physics,raissi2017physicsI,raissi2017physicsII}
as a neural-network-based alternative to traditional PDE discretizations. In the original PINNs work, when presented with a PDE specified over a domain $\Omega$ with boundary conditions on $\partial \Omega$ and initial conditions at $t=0$ (in the case of time-dependent PDEs), the solution is computed (i.e., the differential operator is satisfied) at a collection of collocation points. First, we rewrite our PDE system in residual form as $\mathcal{R}(u) = \mathcal{S} - \frac{\partial}{\partial t} u - \mathcal{F}(u)$, where $\mathcal{S}$ is the source term/function and $\mathcal{F}$ is a nonlinear operator. The PINN formulation is expressed as follows: given a neural network function $u_{\boldsymbol{\theta}}({\bf x},t)$ with specified activation functions and learnable parameters $\boldsymbol{\theta}$, whose degrees of freedom derive from the width and depth of the network, find $\boldsymbol{\theta}$ that minimizes the loss function:
\begin{equation}
MSE = MSE_u + MSE_r \label{eq:mse}
\end{equation}
\noindent where
\begin{eqnarray}
MSE_u &=& \frac{1}{N_u} \sum_{i=1}^{N_u} \| u_{\boldsymbol{\theta}}(x_u^i,t_u^i) - u^i \|^2 \\
MSE_r &=& \frac{1}{N_r} \sum_{i=1}^{N_r} \| \mathcal{R}(u_{\boldsymbol{\theta}}(x_r^i,t_r^i)) \|^2 \,\,\,
\end{eqnarray}
\noindent where $\{x_u^i,t_u^i,u^i\}_{i=1}^{N_u}$ denote the initial and boundary training data on $u({\bf x},t)$ and $\{x^i_r,t^i_r\}_{i=1}^{N_r}$ specify the collocation points for evaluation of the residual term $\mathcal{R}(u_{\boldsymbol{\theta}})$. The loss $MSE_u$ corresponds to the initial and boundary data, whereas $MSE_r$ enforces the structure imposed by the differential operator at a finite set of collocation points. For periodic boundary conditions, exact enforcement can be used by encoding the spatial input as Fourier features \cite{dong2021method}, in which case $MSE_u$ represents only the initial-condition loss. Additional terms can be added for PINN variants, such as interface terms in the case of domain decomposition \cite{JagtapK, JAGTAP2020113028}. Often, term-wise or point-wise weights are added to Equation \ref{eq:mse} to improve training \cite{mcclenny2020self, wang2022and}. This loss-function minimization approach fits naturally into the traditional deep learning framework \cite{DeepLearning}. Various optimization procedures are available, including Adam \cite{kingma2014adam}, L-BFGS \cite{liu1989limited}, etc. The procedure produces a neural network $u_{\boldsymbol{\theta}}({\bf x},t)$ that balances the weak imposition of the initial and boundary conditions against satisfaction of the PDE residual.
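A minimal sketch of this loss in PyTorch is given below, using the convection equation $u_t + \beta u_x = 0$ (i.e., $\mathcal{S}=0$ and $\mathcal{F}(u) = \beta u_x$) as a concrete example; the architecture and sampling are illustrative choices, not those of any specific experiment in this paper.
\begin{verbatim}
import math
import torch

net = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 1))
beta = 1.0

def grad(out, var):
    return torch.autograd.grad(out, var, torch.ones_like(out),
                               create_graph=True)[0]

def pinn_loss(x_u, t_u, u_data, x_r, t_r):
    # MSE_u: initial/boundary data term.
    mse_u = ((net(torch.cat([x_u, t_u], 1)) - u_data) ** 2).mean()
    # MSE_r: residual R(u) = -u_t - beta * u_x at collocation points.
    x_r = x_r.clone().requires_grad_(True)
    t_r = t_r.clone().requires_grad_(True)
    u = net(torch.cat([x_r, t_r], 1))
    res = -grad(u, t_r) - beta * grad(u, x_r)
    return mse_u + (res ** 2).mean()

x_u = torch.rand(64, 1)   # initial-condition points at t = 0
loss = pinn_loss(x_u, torch.zeros(64, 1),
                 torch.sin(2 * math.pi * x_u),
                 torch.rand(256, 1), torch.rand(256, 1))
\end{verbatim}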
\subsection{Related work}
\label{ssec:related-work}
Previous works have attempted to address training issues in a variety of ways. In this section, we review relevant work that will be used as the foundation for our hypotheses and new training methods.
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{related_models}
\caption{\small Illustrations of related models with time represented along the horizontal direction for which it progresses left to right. (A) Adaptive time-sampling. (B) Backward-compatibility. (C) Time-marching. (D) XPINNs.}
\label{fig:related}
\end{figure}
\textbf{Adaptive time-sampling:} In \cite{wight2020solving}, a strategy is proposed that splits the domain into equally sized ``time-slabs''. For a single network, the collocation points form the sequential union of the subsets in each time-slab on which the network is continuously being trained, as seen in Figure \ref{fig:related} (A). This method is essentially a start-up procedure because it is equivalent to a standard PINN when all slab subsets have been added. This method is shown to improve training accuracy and may provide a computational speedup since only a subset of the entire spatiotemporal sampling is active in the training phase until the final slab is added. Unnecessary collocation points are expensive to add, particularly for long-time integration and higher order derivatives, because PDE residuals must be calculated for each one.
\textbf{Time-marching:} In \cite{wight2020solving} and more recently \cite{krishnapriyan2021characterizing} and \cite{bihlo2022physics}, a training procedure is proposed in which the time-slabs are trained sequentially with the prior slab's end-time predictions used as the next initial conditions as seen in Figure \ref{fig:related} (C). Although the method's name differs between the three papers, we will refer to it here as time-marching. Since prior subnetworks stop training once a new slab is added, this enforces causality on the scale of the size of the time-slab. Internally, for each time-slab, causality is not enforced.
\textbf{bc-PINN:} In \cite{MATTEY2022114474}, a different sequential model is proposed that, while also broken up into time-slabs, uses only one network for the entire domain. Similar to adaptive time-sampling in \cite{wight2020solving}, the difference here is that for prior time-slabs, the prediction of the converged network is taken as a data term and forms the loss with future network predictions, as seen in Figure \ref{fig:related} (B). This is termed ``backward-compatibility (bc)'' since it ensures the network does not change its prediction for prior times and is the means by which the method enforces causality. As in the time-marching scheme, this causality is enforced only on the scale of the time-slabs. Additionally, although not touched upon in the paper, this approach reduces the computational cost on a per-iteration basis since prior collocation point residuals do not need to be continually computed.
\textbf{Causal weights:} In \cite{wang2022respecting}, conforming to causality is directly confronted and put forward as a leading contributor to successful PINN training. Similar to bc-PINNs, this approach is proposed for a single network, although it is later combined with time-marching for the final numerical results on difficult chaotic problems. Unlike the previous two methods, time-slabs are not used, and instead, causality is enforced by a clever weighting mask over all collocation points. This mask is inversely exponentially proportional to the magnitude of cumulative residual losses from prior times, as shown in Equation \ref{eq:causal-weights}. One drawback is that the results are sensitive to the new causality hyperparameter $\epsilon$, so an annealing strategy for training with $\epsilon$ is used. However, this requires multiple passes over the entire domain with different $\epsilon$, significantly increasing the computational cost and not guaranteeing convergence. Despite this, its application is shown to be successful on challenging problems.
\begin{linenomath}\begin{align}
\mathcal{L}_r \left( \boldsymbol{\theta} \right) = \frac{1}{N_t}\sum_{i=1}^{N_t} \exp \left( -\epsilon \sum_{k=1}^{i-1} \mathcal{L}_r \left( t_k,\boldsymbol{\theta} \right) \right) \mathcal{L}_r \left( t_i,\boldsymbol{\theta} \right) \label{eq:causal-weights}.
\end{align}\end{linenomath}
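A short sketch of this weighting, given per-snapshot residual losses, is as follows (names are ours; the weights are treated as constants during backpropagation, consistent with the annealed weighting described above):
\begin{verbatim}
import torch

def causal_residual_loss(snapshot_losses, eps=10.0):
    # snapshot_losses: 1-D tensor of residual losses L_r(t_i) over
    # time snapshots. Weight w_i = exp(-eps * sum_{k<i} L_r(t_k)).
    cum = torch.cumsum(snapshot_losses, 0) - snapshot_losses
    w = torch.exp(-eps * cum).detach()   # weights held fixed in backprop
    return (w * snapshot_losses).mean(), w
\end{verbatim}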
\textbf{XPINN:} In \cite{JagtapK}, a generalized domain decomposition framework is proposed that allows for multiple subnetworks over different spatiotemporal subdomains to be stitched together and trained in parallel, as shown in Figure \ref{fig:related} (D). This method is not causal and suffers from similar training problems as standard PINNs. These problems, in some cases, become more prevalent as the interfaces and separate networks make for a more difficult optimization problem, specifically with respect to information propagation. While the idea of stitching together subdomains in time is made possible by XPINNs, time-marching and stitching together subdomains are not mutually exclusive. Time-marching is sequential, but the networks are stitched together by the hard constraint of the final end-time prediction of the prior network used as the following initial condition. We will refer to this as the solution continuity interface condition for first-order in time problems. More precisely, it would be $MSE(u_1 - u_2)$, or in the case of XPINNs, discontinuous enforcement by way of $MSE(u_{avg} - u_1)$ + $MSE(u_{avg} - u_2)$ where $u_{avg} = \frac{u_1+u_2}{2}$. This is extendable to second-order in time problems by adding the same forms for $u_t$ and so on for higher order in time derivative terms. While XPINNs also constrain residual continuity, this constraint is unnecessary for well-posedness when decomposing into time-slabs, such as in the prior methods discussed. In this case, the stitching between XPINNs and time-marching is the same, the difference being that the subnetworks in XPINNs are trained in parallel, and the subnetworks in time-marching are trained sequentially \cite{shukla2021parallel}.
\subsection{Causality classification}
\label{ssec:causality-class}
Given these prior works, we seek a generalization across all possible methods in order to categorize them. We therefore propose the notions of hard causality and soft causality. Hard causality describes a method in which causality cannot be violated, whether continuously or discretely. Soft causality describes a method in which causality can be violated, but the network is predisposed toward obeying it in some way. This most commonly reflects the fact that, through optimization, a network has been guided to local minima that loosely obey causality; a perturbation in the optimization may cause the network to find different minima that violate this proposition, but this is unlikely. We therefore categorize the previously described methods in Table \ref{table:causality-class}.
\begin{center}
\captionof{table}{\small A classification of PINN causality enforcement methods}
\scalebox{0.8}{
\begin{tabular}{| c | c | c | c | c |}
\cline{2-5}
\multicolumn{1}{c|}{} & \textbf{Soft Causality} & \textbf{Hard Causality} & \textbf{Soft + Hard Causality} & \textbf{non-Causal}\\
\hline
\textbf{Time-slab scale} & Adaptive time-sampling \cite{wight2020solving} & Time-marching \cite{wight2020solving, krishnapriyan2021characterizing, bihlo2022physics} & & XPINN \cite{JagtapK} \\
& & bc-PINN \cite{MATTEY2022114474} & & \\
\hline
\textbf{Sampling scale} & Causal weights \cite{wang2022respecting} & & & \\
\hline
\end{tabular}}
\label{table:causality-class}
\end{center}
Notice that hard-causality methods are defined only in terms of time-slabs, whereas causal weighting is a continuous form of causality. However, in the continuous case, current methods must still compute residuals for the entire domain in which they are used. There is a gap in methodology for enforcing hard causality on the sampling scale, as well as for methods that combine the two. We take inspiration from this classification to propose stacked-decomposition, which fills this gap and forms a smooth connection between a standard XPINN and time-marching, allowing for what we call causal XPINNs, which overcome the training issues present in their standard form. Additionally, we use ideas from transfer learning to greatly speed up training with time-slab schemes. We also propose a window-sweeping collocation point algorithm that combines hard and soft causality constraints to not only speed up training, by limiting the number of collocation residuals in the domain that need to be calculated, as in adaptive time-sampling and bc-PINNs, but also enforce causality continuously. Finally, these methods can be combined not only to provide very accurate solutions, as in \cite{wang2022respecting}, which combines time-marching and causal weights to solve previously out-of-reach forward PINN problems, but also to greatly reduce the computational cost even when causality is not needed to address training challenges.
\section{Summary}
\label{sec:conclusion}
We have introduced a unified framework to describe existing and new causality-enforcing PINN methods. We have showcased examples in which PINNs and their temporal decompositions can struggle to train well without modification, and how settings under the proposed framework overcome these issues. Additionally, we introduce adaptive propagation strategies based on a change-in-loss tolerance, compared to previous versions of the methods, which use fixed numbers of optimization iterations. We achieve a reduction in training time, and therefore improved scalability, over an unmodified PINN on problems without training challenges. We also investigate many nuanced model decisions, such as transferring layer parameters or how boundary conditions are enforced, among others, to help guide decision-making. In future work, we will consider second-order in time problems such as the wave and Boussinesq equations, which have separate considerations when decomposing. In addition, we hope to adapt our strategies to second-order problems with only zeroth-order information at the initial and final times. This contrasts with the standard setup of zeroth- and first-order information at the initial condition, and poses unique information-propagation considerations, as the standard causality approach of moving forward in time would not apply.
\section{Introduction}
\label{sec:introduction}
Physics-informed neural networks (PINNs) have emerged as a popular framework for solving partial differential equations (PDEs). The most ubiquitously used PINN implementation at present is the meshless, continuous-time approach in \cite{raissi2019physics}. This approach is often selected due to its flexibility in discretization and has been shown to be successful across a wide class of application domains \cite{mathews2021uncovering,kissas2020machine,10.1371/journal.pcbi.1007575,WANG2021109914,shukla2020physics,Chen:20,10.3389/fphy.2020.00042}. However, the user community has observed that the continuous-time approach suffers from various training challenges not experienced by the discrete-time approach. In this work, we are motivated to keep as much discretization flexibility as the continuous-time approach allows while benefiting from the properties of the discrete-time approach.
We will return to this trade-off when proposing our new window-sweeping collocation point algorithm. As continuous-time PINNs have become the default form, future mentions of PINNs will refer to this approach unless explicitly stated otherwise.
Training (i.e., optimization) remains the primary challenge when using the continuous-time approach for forward problems. A significant amount of the research on PINNs revolves around improving the ease of training in some way \cite{krishnapriyan2021characterizing, wight2020solving, hu2021extended, mojgani2022lagrangian}. However, PINNs for inverse problems have shown great success on a range of applications and do not pose the same training issues as forward problems \cite{jagtap2022deep,jagtap2022physics,9664609,inverse_groundwater,chen2020physics,mishra2020estimates,thakur2023temporal}.
We therefore focus solely on forward problems in this paper, as they are often a principal building block in solving inverse problems and the more challenging direction to train. Information propagation drives many training challenges in forward PINN problems; in the inverse form, this is an entirely different problem for which forward methods might not be applicable. However, the development of forward-problem methods for PINNs will also help drive improvements in solving inverse problems as we gain a better understanding of PINNs in general, and this is to be studied in future work.
The strategies to enhance PINN training include diverse approaches such as adaptive sampling \cite{lu2021deepxde, daw2022rethinking, subramanian2022adaptive}, adaptive weighting \cite{wang2022respecting, mcclenny2020self}, adaptive activation functions \cite{jagtap2020adaptive, jagtap2022important}, additional loss terms \cite{yu2022gradient}, domain decomposition \cite{JagtapK, JAGTAP2020113028, meng2020ppinn, hu2022augmented}, and network-architecture modifications to obey characteristics \cite{mojgani2022lagrangian, braga2022characteristics}. A thorough summary of PINN training challenges and their proposed solutions is provided in \cite{mojgani2022lagrangian}. Recently, \cite{shin2020convergence} proposed the mathematical foundation of PINNs for linear partial differential equations, whereas \cite{mishra2022estimates} presented an estimate on the generalization error of the PINN methodology. The first comprehensive theoretical analysis of PINNs, as well as extended PINNs (XPINNs), for a prototypical nonlinear PDE, the Navier-Stokes equations, has been presented in \cite{de2022error}. The optimization process of PINNs not only limits the lower bound on accuracy but also, in some cases, causes the network to be unable to learn over the entire domain.
Training difficulties in PINNs can arise for various reasons, some of the most common being poor sampling, unequal loss-term weights, or a poor optimization scheme.
Even with a well-tuned PINN, ``stiff'' PDEs with sharp transitions \cite{wang2021understanding}, multi-scale problems \cite{wang2021eigenvector}, or highly nonlinear time-varying PDEs \cite{MATTEY2022114474} can still pose problems for the standard PINN.
Our first contribution is an experimental study and classification of training challenges in PINNs and their root causes. Furthermore, we relate these training challenges to information propagation during training, as well as their manifestation in temporal domain decomposition strategies such as XPINNs. In doing this, we put forward a new form of training challenge for XPINNs. Our next contribution is the introduction of a new unified framework to address some of these challenges and highlight the current methodological gaps in PINN time-causality enforcement. PINNs and their variants are numerous and ever-increasing. Setting aside the myriad of PINN topics, time-causality considerations alone have several different approaches. A central concern facing the PINN community is the rapid development of new methods without a supporting framework between them. It is time-consuming to reimplement and retune the dozens of PINN variants (e.g., the ``alphabet'' of PINN variants: cPINNs, hpPINNs, bcPINNs, etc.) and other PINN approaches for any specific problem or for use as baselines. Therefore, our approach is backward compatible with all prior methods in this regime and can easily incorporate new variants in the future. In this framework, we also bridge the gap between methods such as time-marching and XPINNs and incorporate ideas to speed up existing methods. This is done by partitioning the subdomain into collocation point sets, requiring little or no computational cost, as well as by incorporating transfer learning concepts.
The paper is organized as follows: In Section \ref{sec:background}, we first summarize PINNs and work related to time-causality, which can be described in similar terms. We introduce a classification of these prior works and discuss the current gap in methodology. In Section \ref{sec:motivation}, we analyze different types of training challenges and their relation to information propagation and decomposition. We then propose, in Section \ref{sec:methods}, a unified framework for causality-enforcing methods. Two new methods are proposed, stacked-decomposition and window-sweeping, to be used in combination with each other. These methods describe the prior work covered as well as new variants. In Section \ref{sec:results}, we provide computational performance results on PDE problems with known training difficulties. We summarize and conclude our results in Section \ref{sec:conclusion}.
\section{Unified Causality-enforcing Framework}
\label{sec:methods}
To address these decomposition challenges and unify previous causal strategies, we propose two new methods that cover all aspects of causality enforcement shown in Table \ref{table:causality-class2}. Combined, these two methods impose soft and hard constraints on both the time-slab and sampling scales. We also introduce ways to improve temporal decomposition, such as transfer learning.
\begin{center}
\captionof{table}{\small A classification of PINN causality enforcement methods with our proposed stacked-decomposition and window-sweeping methods.}
\scalebox{0.8}{
\begin{tabular}{| c | c | c | c | c |}
\cline{2-5}
\multicolumn{1}{c|}{} & \textbf{Soft Causality} & \textbf{Hard Causality} & \textbf{Soft + Hard Causality} & \textbf{non-Causal}\\
\hline
\textbf{Time-slab scale} & Adaptive time-sampling \cite{wight2020solving} & Time-marching \cite{wight2020solving, krishnapriyan2021characterizing, bihlo2022physics} & \textit{Stacked-decomposition} & XPINN \cite{JagtapK} \\
& & bc-PINN \cite{MATTEY2022114474} & & \\
\hline
\textbf{Sampling scale} & Causal weights\cite{wang2022respecting} & & \textit{Window-sweeping} & \\
\hline
\end{tabular}}
\label{table:causality-class2}
\end{center}
\subsection{Stacked-decomposition}
\begin{figure}[H]
\centering
\includegraphics[width=0.85\linewidth]{stacked-decomp2}
\caption{\small Illustration of the proposed stacked-decomposition method compared with the existing time-marching and XPINN methods. }
\label{fig:stacked-xpinn}
\end{figure}
As seen in Figure \ref{fig:stacked-xpinn}, stacked-decomposition is parameterized by $n$ and $dS$. The length that a subdomain spans in time is then inferred from the total time domain for each problem and the number of partitions $n$. For $dS = 1$, stacked-decomposition is equivalent to time-marching. For $dS = n$ with XPINN interface conditions and all domains active at the start of training, stacked-decomposition is equivalent to the traditional XPINN approach. An additional term we define is causal $dS$, which specifies whether the $dS$ networks should all be trainable at the start or whether a warm-up procedure is used (starting at one and increasing to $dS$). When used with $dS = n$, we refer to this model as a ``causal XPINN''. In this configuration, later time-slabs are added as the prior slab reaches convergence, and the entire set of subnetworks continues to train. A causal XPINN arrives at the standard XPINN configuration once all subnetworks have been added. However, because of the warm-up procedure, it avoids the training challenge described in Section \ref{ssec:motivation}: future networks do not train to the zero-solution and are only added once the information in the previous slab has propagated to the final time in the subdomain. A main benefit of XPINNs is that they can be parallelized and, therefore, handle large-scale problems. In this regard, as subnetworks are added to causal XPINNs, they can be parallelized, introducing no limitation or cost. This contrasts with time-marching, in which all prior networks must conclude training and run in sequence. Therefore, stacked-decomposition can describe an ideal middle ground in which we benefit from the causality of time-marching, to avoid possible training difficulties, and the parallel training of XPINNs. The method also describes a new set of models when $1 < dS < n$, which may be useful for large-scale problems with time-history effects, where training the full domain at once is expensive but the information in prior domains is still useful. A minimal self-contained sketch of this training loop is given below. In the future, adaptive methods for determining $n$ a priori or during training will be considered, since time-scale correlation or local complexity may change with time.
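The following self-contained toy recovers the $dS = 1$ (time-marching) setting of stacked-decomposition, with fine-tuning initialization and a change-in-loss tolerance, using the ODE $u'(t) = \cos(t)$, $u(0)=0$, as a stand-in problem; it illustrates the loop structure only and is not the released package.
\begin{verbatim}
import math
import torch

def make_subnet():
    return torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                               torch.nn.Linear(32, 1))

n, tol, t0, t1 = 4, 1e-7, 0.0, 2 * math.pi
nets, u_init = [], torch.zeros(1, 1)              # IC: u(0) = 0
for s in range(n):
    a = t0 + s * (t1 - t0) / n
    b = t0 + (s + 1) * (t1 - t0) / n
    net = make_subnet()
    if nets:                                      # fine-tuning initialization
        net.load_state_dict(nets[-1].state_dict())
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    t_r = torch.linspace(a, b, 64).reshape(-1, 1).requires_grad_(True)
    prev = float("inf")
    for _ in range(20_000):
        u = net(t_r)
        u_t = torch.autograd.grad(u, t_r, torch.ones_like(u),
                                  create_graph=True)[0]
        loss = ((u_t - torch.cos(t_r)) ** 2).mean() \
             + ((net(torch.full((1, 1), a)) - u_init) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
        if abs(prev - loss.item()) < tol:         # propagate on stagnation
            break
        prev = loss.item()
    u_init = net(torch.full((1, 1), b)).detach()  # C^0 interface value
    nets.append(net)
\end{verbatim}
For $dS > 1$, multiple subnetworks remain trainable simultaneously, with interface terms coupling adjacent slabs as described next.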
\subsubsection{Interface Conditions}
Attempting to bridge the gap between temporal decomposition strategies, we must explain the differences in interface conditions in the loss term. Time-marching schemes use the final time prediction of the previous time-slab as the initial condition of the next time-slab. For first-order time problems, this condition is simply the solution continuity given by
\begin{align}
&\mathcal{L}_{i}(\boldsymbol{\theta}^{-}, \boldsymbol{\theta}^{+}) = \frac{1}{N_i} \sum_{i=1}^{N_i} \vert u_{\boldsymbol{\theta}^{-}}(x_i, t) - u_{\boldsymbol{\theta}^{+}}(x_i, t) \vert^2.
\label{eq:interface-ic}
\end{align}
We generalize this and refer to it as the $\mathcal{C}^p$ continuity where $p$ is the order in time minus one. For problems considered in this paper, it will be $\mathcal{C}^0$ and, as such, equivalent to the solution continuity. Traditional XPINNs use interface conditions of discontinuous solution continuity and residual continuity given by the following loss terms:
\begin{align}
\begin{split}
&\mathcal{L}_{i_{avg}}(\boldsymbol{\theta}^{-}, \boldsymbol{\theta}^{+}) = \frac{1}{N_i} \left( \sum_{i=1}^{N_i} \left( \vert u_{avg}(x_i, t) - u_{\boldsymbol{\theta}^{+}}(x_i, t) \vert^2 + \vert u_{avg}(x_i, t) - u_{\boldsymbol{\theta}^{-}}(x_i, t) \vert^2 \right) \right) \\
& \quad \quad \equiv \mathcal{L}_{i_{avg}}(\boldsymbol{\theta}^{-}, \boldsymbol{\theta}^{+}) = \frac{1}{2N_i} \sum_{i=1}^{N_i} \vert u_{\boldsymbol{\theta}^{-}}(x_i, t) - u_{\boldsymbol{\theta}^{+}}(x_i, t) \vert^2 \leftarrow u_{avg} = \frac{u_{\boldsymbol{\theta}^{-}} + u_{\boldsymbol{\theta}^{+}}}{2} \label{eq:interface-avg}
\end{split}
\\
&\mathcal{L}_{i_{\mathcal{R}}}(\boldsymbol{\theta}^{-}, \boldsymbol{\theta}^{+}) = \frac{1}{N_i} \sum_{i=1}^{N_i} \vert \mathcal{R} \left( u_{\boldsymbol{\theta}^{-}}(x_i, t) \right) - \mathcal{R} \left( u_{\boldsymbol{\theta}^{+}}(x_i, t) \right) \vert^2.
\label{eq:interface-res}
\end{align}
However, the discontinuous (averaged) continuity condition reduces to the direct continuity condition up to a constant factor, and given that tuning loss terms and weights has been extensively studied and is part of the XPINN framework \cite{wang2022and, JagtapK}, we make no distinction between these two terms, as loss-term weighting will override the factor-of-one-half difference. Finally, since we are decomposing in time, there is no complex geometry for which we must compute the normal, as in cPINNs \cite{JAGTAP2020113028}. Therefore, residual continuity is not necessary in time, since we can use the solution continuity, which is equivalent to the initial condition for a new domain and makes the problem well-posed. Gradient-based interface terms may also become prohibitively expensive as the number of concurrently trained subdomains increases. However, it may be helpful in training to include multiple interface terms, as studied in \cite{2022arXiv221012669L}, so it is left to the user and the problem to define which terms to include, such as residual continuity, so long as they are well-posed.
\rmk{Straight lines for time-slabs are used for convenience since it is common for time-marching schemes. However, if an irregular shape is used, the same $\mathcal{C}^p$ continuity can be used and is still well-posed without any modification.}
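As a concrete illustration, the interface terms for two adjacent subnetworks can be sketched as follows (the \texttt{residual} callable is a placeholder for the problem-specific PDE residual; inclusion of the residual-continuity term is optional, as discussed above):
\begin{verbatim}
import torch

def interface_losses(net_m, net_p, x_i, t_i, residual=None):
    # C^0 continuity between two subnetworks at shared
    # interface points (x_i, t_i).
    inp = torch.cat([x_i, t_i], 1)
    l_c0 = ((net_m(inp) - net_p(inp)) ** 2).mean()
    l_res = None
    if residual is not None:   # optional residual continuity
        l_res = ((residual(net_m, x_i, t_i)
                  - residual(net_p, x_i, t_i)) ** 2).mean()
    return l_c0, l_res
\end{verbatim}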
\subsubsection{Transfer learning}
Transfer learning fits naturally into our framework when multiple networks are stacked sequentially in time. A variation of this application was used in \cite{bihlo2022physics} for time-marching; however, it was only briefly touched upon and not thoroughly studied as we do here. We further note that there is no need to retrain the network from scratch when a network that already obeys the initial or interface condition is known. In terms of stacked-decomposition, it is easy to see that regardless of whether $dS = 1$, in which case there are initial conditions, or $dS > 1$, in which case there are interface conditions, initializing the following network with the prior network will result in this term being exactly zero when added. This aspect goes beyond simply having a good starting point for optimization, since we are transferring to a new domain that shares predictions with the model being transferred. Residual loss terms beyond the starting subdomain time will not be zero, as this region is an extrapolation of the prior subdomain; however, it will be closer to convergence than randomizing the weights.
In this framework, we allow the flexibility of transferring any combination of layers and holding constant any combination of transferred layers. More precisely, we define the terms ``transfer learning'' and ``fine tuning'' to aid in this explanation. Traditionally, transfer learning refers not only to initializing the learnable parameters of one network with another but also to holding some number of layers constant, which reduces the per-iteration cost. In contrast, we refer to fine tuning as the initialization of the learnable parameters while still allowing the full network to be trainable. We claim this is an important distinction for this application because scales and solution dynamics may change over time, meaning that holding some layers constant may inhibit the expressivity of the network and its ability to accurately fit the true solution. Let us take the final linear combination of the network as basis functions and consider the nonlinear Allen-Cahn problem in \cite{raissi2019physics}. For the time-marching model, the basis can be seen to sharpen from the first to the final subdomain in Figure \ref{fig:transfer-basis}.
\begin{figure}[H]
\centering
\includegraphics[scale = 0.3]{transfer-basis2}
\caption{\small Spatial basis at the center of the time-slab given by the final layer of the PINN with time-marching on the Allen-Cahn problem in \cite{raissi2019physics}. The basis changes considerably between the first and last time-slabs, indicating true transfer learning would not work as the scales change in time for this problem. The distribution of learnable parameters is also shown not to change significantly despite the change in basis.}
\label{fig:transfer-basis}
\end{figure}
The overall network-parameter distribution for each layer stays close to constant despite the drastic change in output basis, meaning this alone is not a good indicator of what is being learned. While fine tuning can still improve training in this case, transfer learning would inhibit it, as we need earlier layers in the network to change so that the final basis can more accurately fit the smaller scales that form as time goes on in this problem.
\subsection{Window-sweeping collocation points}
As seen in Figure \ref{fig:window-sweep}, a soft causality window is moved through time, which acts as a weight mask on the collocation points. Unlike stacked decomposition, this method is defined by a set of point weights moving forward in time in a single PINN. We find this view can describe many previous and new methods.
\begin{figure}[H]
\centering
\includegraphics[scale=0.4]{window_sweep}
\caption{\small (A) Illustration of window-sweeping method and its corresponding collocation point subsets. (B) Window-sweep propagation over training time. (B.1) Error function kernel with a steep transition. (B.2) Error function kernel with a smooth transition.}
\label{fig:window-sweep}
\end{figure}
Inspired by causal weighting in \cite{wang2022respecting}, this transition can be defined in many ways, which we will colloquially call the kernel. One option is to define it using the causal weighting scheme but to add upper- and lower-bound cutoffs that move points into the prior-time set of backward-compatibility points and the future set of points not yet included in the training. The backward-compatibility set acts as a hard causality constraint, in addition to the computational benefit of not requiring the expensive PDE residual. Causal weights have shown great performance on difficult PDE problems; however, they set future collocation-point weights to (near) zero until prior residuals have been satisfied, wasting time predicting and computing gradients for points that contribute negligibly to the overall loss landscape and, therefore, to optimization. The inclusion of the null-set bound removes this inefficiency until the points are useful. In the user algorithm, the addition or absence of these sets is variable so that the causal weights method can be recovered. Since causal weights are explicitly based on prior residuals, this cutoff on the upper bound is known without having to perform any operations on future points and therefore incurs no additional cost. Other kernels considered in this paper are shown in Table \ref{tb:window-kernel}; a sketch of the error-function kernel is given after the table. Depending on the problem and hardware capacity, larger or smaller weighted domains can be considered, as shown in Figure \ref{fig:window-sweep} (B) with the error-function kernel. Using the uniform weight kernel, bc-PINNs can be recovered when the width is set to dt. Future work will consider modifying this method to solve second-order time problems with initial and final conditions on $u$ that require information to propagate in both directions.
\begin{center}
\captionof{table}{\small Window-sweeping kernel hyperparameters. The dt tolerance, similar to the tolerance in stacked-decomposition, is a bound on the change in loss required for the point-set bounds to move in time by the defined dt. This is analogous to wave speed but for information propagation as a function of PINN training.}
\scalebox{0.85}{
\begin{tabular}{| c | c |}
\hline
Kernel & Hyperparameters \\
\hline
Uniform & [width, dt, dt tolerance, scale]\\
Linear & [width, dt, dt tolerance, scale]\\
Error Function & [steepness, dt, dt tolerance, scale, cutoff tolerance]\\
Causal Weights & [$\epsilon$, cutoff tolerance]\\
\hline
\end{tabular}}
\label{tb:window-kernel}
\end{center}
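As a minimal sketch, the error-function kernel and its induced point sets can be written as follows (the names, cutoff convention, and steepness scale are ours, for illustration):
\begin{verbatim}
import torch

def erf_window(t, t_front, steepness=20.0, cutoff=1e-3):
    # Weight mask decaying ahead of the moving front t_front.
    w = 0.5 * (1 - torch.erf(steepness * (t - t_front)))
    null_set = w < cutoff        # skip residual evaluation here
    bc_set = w > 1 - cutoff      # candidates for backward-compatibility
    return w, bc_set, null_set
\end{verbatim}
During training, \texttt{t\_front} advances by dt whenever the change in loss falls below the dt tolerance, sweeping the weighted window across the temporal domain.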
\subsection{User Algorithm}
\begin{algorithm}
\caption{\small Order of operations for proposed temporal propagation strategies for PINNs}
\begin{algorithmic}
\STATE 1. Choose stacked-decomposition parameters \textbf{[$\mathbf{n}$, $\mathbf{dS}$, \textbf{causal} $\mathbf{dS}$, \textbf{tolerance}]}
\bindent
\STATE 1.1 Choose interface conditions \textbf{[residual continuity, $\mathbf{\mathcal{C}^p}$ continuity, other]}
\STATE 1.2 Choose transfer learning parameters \textbf{[number of layers, trainability of layers]}
\eindent
\STATE 2. Choose window-sweeping parameters \textbf{[bc-set, null-set, weighting kernel, kernel hyperparameters]}
\STATE Run Model
\end{algorithmic}
\label{algo:user}
\end{algorithm}
With this user algorithm, we attempt to capture as many temporal PINN training techniques as possible as subsets of the options. Additionally, the algorithm allows for a full range of variants, combinations, and improvements. To highlight this fact, we define the existing models listed in Section \ref{ssec:related-work} in terms of the choices in Algorithm \ref{algo:user}. A subtle but large improvement is the addition of a tolerance to stacked-decomposition, which the user sets to define the change in loss required before a new subdomain is added. This minimizes the cost of unnecessary training time incurred in the original papers for time-marching, bc-PINNs, etc., which evaluated a fixed number of iterations before moving to the next time-slab. Using a tolerance also reduces hyperparameter tuning, as an underestimate of iterations may lead to an incorrect solution and an overestimate is expensive.
\rmk{Other methods such as adaptive weighting and sampling techniques (self-adaptive weights, RAR, Evo, and self-supervision adaptive sampling \cite{mcclenny2020self, lu2021deepxde, daw2022rethinking, subramanian2022adaptive}), or reformulating the network architecture to obey characteristics (LPINN, CINN \cite{mojgani2022lagrangian, braga2022characteristics}) can be used along with this framework, but do not fall into our unification of like methods.}
\begin{center}
\captionof{table}{\small Existing PINN methods and their corresponding recovered settings described under Algorithm \ref{algo:user}}
\scalebox{0.85}{
\begin{tabular}{| c | c | c | c | c |}
\hline
Existing Method & Step 1. & Step 1.1 & Step 1.2 & Step 2. \\
\hline
PINN & [1, 1, off, Any] & [None] & [None] & [None] \\
Adaptive time-sampling & [1, 1, off, Any] & [None] & [None] & [off, on, uniform, width = dt, scale = 1]\\
Time-Marching & [n, 1, off, Any] & [$\mathcal{C}^p$] & [None] & [None] \\
bc-PINN & [1, 1, off, Any] & [None] & [None] & [on, on, uniform, width = dt, scale = 1]\\
Causal weights & [1, 1, off, Any] & [None] & [None] & [off, off, causal weights, $\epsilon$]\\
XPINN & [n, n, off, Any] & [Residual, $u_{avg}$] & [None] & [None]\\
\hline
\end{tabular}}
\label{table:algo-subsets}
\end{center}
Additionally, a code package is included with this paper, which allows for easy configuration of options for new and existing problems using PyTorch for first-order-in-time PDEs.\footnote{The code and data accompanying this manuscript will be made publicly available at \url{https://github.com/mpenwarden/dtPINN} after publication.}
\section{Training Challenges: PINNs and their temporal decompositions}
\label{sec:motivation}
\subsection{Information Propagation}
\label{ssec:motivation}
In this work, we attempt to bridge the gap between many prior works and approaches to improving PINN training. Many of these approaches are predicated on conforming to causality. Although PINNs are technically well-posed when training over the entire spatiotemporal domain represented by the set of collocation points (when properly set up), the information must still propagate from the sources of information, such as initial conditions (IC) and boundary conditions (BC). We will split this discussion into two parts: first, classifying training difficulties, also called ``failure modes'' in \cite{krishnapriyan2021characterizing, daw2022rethinking, wang2021understanding}, which up until now have been homogeneously grouped together; second, analyzing training difficulties relating to temporal decomposition given the prior classification.
\subsubsection{Types of Training Challenges}
\label{ssec:failure_mode_types}
Let us consider two forward PDE problems. First, consider the convection problem posed in \cite{krishnapriyan2021characterizing} with enough collocation points that a standard PINN can solve it well. Second, consider the commonly used Allen-Cahn problem, which PINNs struggle to solve well without modification \cite{raissi2019physics, wight2020solving, MATTEY2022114474, wang2022respecting}. In Figure \ref{fig:failure_mode}, PINN results for three distinct types of challenges are shown in comparison to the time-marching PINN method, which yields a close approximation to the exact solution; they are discussed as follows:
\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{failure_modes.png}
\caption{\small Training challenges for unmodified PINNs and the comparatively accurate solution with time-marching. (A) Convection problem with extended temporal domain on $T = [0, 5]$. (B) Convection problem with fewer collocation points on $T = [0, 1]$. (C) Allen-Cahn problem on $T = [0, 1]$.}\label{fig:failure_mode}
\end{figure}
\textbf{Zero-solution:} The zero-solution mode is reproducible using the long-time convection problem, which extends the temporal domain to $T = [0,5]$, shown in Figure \ref{fig:failure_mode} (A). The number of residual collocation points is proportionally increased so as not to influence the result. Given periodic conditions, there is no information later in time, which results in the PINN converging to a zero-solution. This challenge occurs because the zero-solution minimizes the loss due to the PDE residual containing only derivative terms (i.e., any constant function is in the null-space of the operator). We can see that the initial condition, the only source of information, propagates in the direction of its characteristic curve. However, due to the periodic conditions, the information must travel far before being ``completed'' in the sense that it reaches some end-point such as Dirichlet boundary conditions or the end of the time domain. When this happens, the solution can be refined. Until this happens, the propagation of information must overcome the zero-solution in the sense that the network resists the introduction of information from the initial condition.
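The null-space observation can be verified directly; the following illustrative PyTorch snippet checks that a constant field incurs zero residual for the convection operator $u_t + 30 u_x$ used later in Section \ref{sec:results}.
\begin{verbatim}
import torch

# A constant field lies in the null-space of a derivative-only residual.
t = torch.rand(1000, 1, requires_grad=True)
x = torch.rand(1000, 1, requires_grad=True)
u = 0.0 * t + 0.0 * x + 1.0  # stand-in for a network collapsed to a constant
u_t = torch.autograd.grad(u.sum(), t, retain_graph=True)[0]
u_x = torch.autograd.grad(u.sum(), x)[0]
residual = u_t + 30.0 * u_x
print(residual.abs().max())  # tensor(0.): the PDE loss cannot penalize it
\end{verbatim}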
\begin{figure}[H]
\centering
\includegraphics[width=0.75\linewidth]{PINN_convergence.png}
\caption{\small (Left) Plot of loss as a function of training epochs. (Right) The full domain PDE residual at the end of training. (Top) PINN on the convection problem with $T_{end} = 1$. (Bottom) PINN on the convection problem with $T_{end} = 5$.}\label{fig:PINN_converg}
\end{figure}
In terms of the loss landscape, the zero-solution skews it, making it shallow, such that information propagates infinitesimally slowly once far enough away from the initial condition. This is shown in Figure \ref{fig:PINN_converg}, where the loss and PDE residual of a trained PINN on the convection problem with $T_{end} = 1$ are shown on the top, compared to $T_{end} = 5$ on the bottom. Both models are run with a termination tolerance of $10^{-7}$ measuring the change in loss per iteration. In the loss for the converged PINN, the drastic drop at around $7,500$ iterations is when the ``front'' of propagation from the initial condition reaches the end of the time domain. Then, the solution refines and converges to the correct solution, minimizing the PDE residual in the domain. For a long-time problem, we can see that the residual at later times is exactly zero and therefore resists the information being propagated. Additionally, despite the variation of magnitude in the prediction, the gradients, and therefore the residual, are quite uniform where the feature exists. That is to say: there is no directionality in the residual minimization at this point. Therefore, the model tries to maintain a trade-off between the loss resulting from the initial condition and its nearby collocation point residuals not being obeyed, along with the zero-solution later in time. This results in the solution petering out to zero, never converging as it gets stuck between these two effects.
Finally, as seen by the time-marching solution to this problem, enforcing causality can help alleviate this issue since it does not allow the network to converge to the zero-solution later in time, for which the solution will not be unique until all information needed for the true solution has reached it.
\rmk{Some causal enforcement methods that still allow residual minimization later in time, such as the Lagrangian network reformulation \cite{mojgani2022lagrangian}, may improve but not fully overcome this problem for an arbitrarily long temporal domain, as the zero-solution would still be allowed.} \\
\textbf{No Propagation:} This problem is reproducible by using too few residual collocation points in the convection problem, shown in Figure \ref{fig:failure_mode} (B). This issue is the same as the one observed in \cite{krishnapriyan2021characterizing} for this problem. In Figure \ref{fig:failure_mode} (B), $2,500$ collocation points are used, whereas, in the rest of the paper, $10,000$ are used for every nondimensionalized length of one in the temporal domain. When a larger number of points is used, we find we can consistently solve this problem with a standard PINN. Therefore, we classify this training challenge by its apparent failure to propagate any information, as the initial condition features abruptly stop, indicating that the point density is too small; this allows a constant solution to prevail in the rest of the domain. Overcoming this challenge through increased and adaptive sampling is investigated in more detail in \cite{daw2022rethinking}. \\
\textbf{Incorrect Propagation:} Incorrect propagation is reproducible by trying to solve the Allen-Cahn problem with a PINN, regardless of standard model tuning, as seen in Figure \ref{fig:failure_mode} (C). This challenge arises when strong enforcement of causality is needed, such as in chaotic problems shown in \cite{wang2022respecting}; by not enforcing it, the PINN converges to an incorrect solution. It is distinct from the zero-solution challenge since a solution is arrived at quickly, but not the correct one.
\rmk{Interestingly, the training challenge for the long-time solution of the KdV problem is incorrect propagation instead of the zero-solution seen in long-time convection. This result is described in \ref{ssec:kdv_long-appendix}.}
\subsubsection{Temporal Decomposition Challenges}
\label{ssec:temporal_failure}
Let us now consider the convection problem with $T_{end} = 1$. In Figure \ref{fig:failure_decomp}, this PDE problem is run with a PINN (A), an XPINN (B), and an XPINN (more accurately, a multi-domain PINN \cite{2022arXiv221012669L}) using only solution continuity conditions at the interfaces (C). All models contain the same point sets, loss term weights, etc., with the addition of interface sets in the decomposition models. Unless stated otherwise, exact periodic boundary enforcement is used with $M = 1$, as described in \ref{sec:fourier_appendix}, where $M$ is the order of the Fourier feature encoding.
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{failure_convec2}
\caption{\small Convection problem on $T = [0, 1]$. (A) PINN. (B) XPINN. (C) XPINN with only the solution continuity interface \cite{2022arXiv221012669L}. (1) 500 Adam + 2,500 L-BFGS. (2) 500 Adam + 10,000 L-BFGS. (3) 500 Adam + 2,500 L-BFGS + Dirichlet BC. (4) 500 Adam + Dirichlet BC. (5) 500 Adam + 2,500 L-BFGS + Weak BC. (6) 500 Adam + 2,500 L-BFGS + Dirichlet \& Weak BC.}
\label{fig:failure_decomp}
\end{figure}
In Figure \ref{fig:failure_decomp} (A.1), due to periodic boundary conditions, the solution propagates from the initial condition, whereas the rest of the domain converges to the zero-solution because it must satisfy the PDE residual but has no unique information. The collocation points inside the domain where information has not yet propagated provide no benefit despite taking computational time to compute the PDE residual, which can be crippling if the problem has high sampling density, is high dimensional, or is a long-time problem, since the number of point-wise predictions and gradients is ever increasing. In the case of domain decomposition approaches like XPINNs and cPINNs, where all networks are trained at once, this can, in fact, cause training challenges where there were none with a standard PINN, even though parallelization can help alleviate the training cost. To highlight this, in (A.2), the PINN is run for more L-BFGS iterations and converges appropriately.
In Figure \ref{fig:failure_decomp} (B.1) \& (C.1), the solution struggles to propagate information through the first interface (in this case, at dt = $0.1$). Since all networks are trained concurrently from the start, the ones at later times become stuck in the local minimum of the trivial zero-solution. This problem is intuitive to understand and is the same issue discussed in Section \ref{ssec:failure_mode_types} with respect to the long-time convection problem for a PINN. However, the issue is exacerbated here since, later in time, networks do not have direct access to the initial condition information; only through the interfaces, once the information has reached them, is a unique solution defined.
In (B.2) and (C.2), little has changed with the addition of more training iterations. The models will not overcome this challenge with more training. Causality enforcement must be introduced to alleviate this issue.
A standard XPINN (B), which has a residual continuity term, further intensifies training issues because the interface also has a zero-solution challenge. We claim that for temporal decomposition, $\mathcal{C}^p$ continuity should be used instead of the standard XPINN continuity conditions, which perform worse in all scenarios of our study. This effect worsens when using periodic conditions because they allow for the zero-solution more readily. Furthermore, periodic conditions are the boundary conditions used by most papers focusing on PINN ``failure mode'' problems \cite{krishnapriyan2021characterizing, wang2022respecting}, despite not being identified as a contributing factor. In (C.3), applying Dirichlet boundary conditions to the domain-decomposed model with solution continuity allows the correct solution to be obtained, whereas in (B.3), the XPINN interface conditions still cause propagation issues. To a lesser extent, Dirichlet boundary conditions are also easier to train with than periodic ones for PINNs, as seen in (A.3), which converges while (A.1) has not, despite equivalent training iterations.
Finally, in (5) and (6), setups are repeated using weakly imposed, instead of exact (by way of Fourier feature encoding), periodic boundary conditions. Weakly imposed boundary conditions result in the same set of correct and incorrect solutions as before. Previous work implies that exact enforcement of periodic conditions may alleviate training issues, but we find that regardless of the enforcement, the problems can persist. Only different boundary conditions, such as Dirichlet, change the result.
For these reasons, time-marching, with the same amount and density of collocation points, helps alleviate the trivial zero-solution challenge for temporal decomposition and can be described under the lens of information propagation. Time-marching, in effect, removes the collocation points later in time from optimization, not allowing the model to train itself into a trivial solution later in time, even though multiple subnetworks are similarly used in XPINNs. The resistance to propagating information is an optimization and uniqueness issue, as the null-space is an acceptable solution to the optimization problem. Despite the PDE being violated between the true solution and the zero-solution, the model does not train out of the local minimum. In the case of domain decomposition, the interface is an ideal location to violate the PDE and stop information from propagating, whereas for a PINN on the long-time convection problem, this violation happens over a large time span as the feature gradually weakens.
\rmk{Information propagation is not fully understood and depends on multiple PINN aspects, such as the optimizer, sampling method, etc., which are not all studied here. For example, in the 3D Euler equations, characteristic information is complicated, making methods such as LPINN \cite{mojgani2022lagrangian} and CINN \cite{braga2022characteristics} difficult to apply.}
\section{Numerical Experiments}
\label{sec:results}
In this section, we demonstrate the efficacy of our proposed framework on various forward PDE problems. With these results, we seek to highlight the flexibility and variability of our framework in easy-to-define models with simple user settings. We do not advocate for one method over another in terms of accuracy or runtime but rather provide a thorough comparison of a subset of all possible choices. Ground truth solutions are generated using the Chebfun package \cite{Driscoll2014} with a spectral Fourier discretization with 512 modes and a fourth-order stiff time-stepping scheme (ETDRK4) \cite{COX2002430} with time-step size $10^{-5}$. The training set is composed of $10,000$ residual collocation points ($N_r$), chosen using Latin hypercube sampling (LHS), and $200$ uniformly spaced boundary points ($N_b$) for every nondimensionalized length of one in the temporal domain. Each neural network comprises 4 hidden layers of 50 neurons each. The initial condition and each interface consist of 200 uniformly spaced points ($N_{ic}$ \& $N_i$). All models use Fourier feature encoding, described in \ref{sec:fourier_appendix}, unless weak boundary conditions are stated. If Fourier feature encoding is not used, the spatiotemporal input is normalized between $[-1, 1]$. Causal $dS$ is used for all stacked-decomposition models unless otherwise stated. The total loss for any given model can be written as
\begin{align}
MSE = \lambda_r MSE_r + \lambda_{BC} MSE_{BC} + \lambda_{IC} MSE_{IC} + \lambda_{bc} MSE_{bc} + \lambda_{i} MSE_{i}
\end{align}
where $MSE_{\#}$ is $0$ if unused, and $\lambda_r = 1$, $\lambda_{BC} = \lambda_{IC} = \lambda_{bc} = \lambda_{i} = 100$ unless stated otherwise. These experiments were run on an Intel Core i7-5930K processor with Windows 10 OS. The test performance is reported in relative $L_2$ error given by
\begin{align}
\frac{||u-u_{\theta}||_2}{||u||_2}
\end{align}
as well as wall-clock training time.
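For completeness, the metric corresponds to the following one-line sketch (our own helper):
\begin{verbatim}
import torch

def rel_l2_error(u_pred, u_exact):
    # Relative L2 error of the prediction over the test grid.
    return (torch.linalg.norm(u_pred - u_exact)
            / torch.linalg.norm(u_exact)).item()
\end{verbatim}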
\rmk{For both stacked-decomposition and window-sweeping methods, loss tolerances can be decreased to potentially gain accuracy at the cost of additional training time. The parameter choices made provide a reasonable trade-off. As with all machine learning methods, the choice of tunable hyperparameters will depend on the intended use, and the results reported cannot be completely exhaustive of all training possibilities. Our goal is to provide overarching insights, not to prescribe the correct settings for each scenario.}
\subsection{Convection equation}
Let us consider the following convection problem
\begin{linenomath}\begin{align}
& \frac{\partial u}{\partial t} + 30\frac{\partial u}{\partial x} = 0, \; (t,x)\in [0,1] \times [0,2 \pi]
\label{eq:convection}
\end{align}\end{linenomath}
subject to periodic boundary conditions and an initial condition $u(0,x) = \sin(x)$. The exact solution and point sets are shown in Figure \ref{fig:convec_exact}.
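Since the initial profile is simply advected at speed 30 under periodic boundaries, the reference solution can be generated in closed form, as in the following sketch:
\begin{verbatim}
import numpy as np

def u_exact(t, x):
    # Exact solution of u_t + 30 u_x = 0 with u(0, x) = sin(x) and
    # periodic boundaries on [0, 2*pi]: the profile advects at speed 30.
    return np.sin(x - 30.0 * t)
\end{verbatim}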
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{convec+points.png}
\caption{\small (Left) Exact solution. (Right) Plot of 10 subdomains delineating the individual initial condition, boundary condition, interface, and subdomain collocation point sets.}
\label{fig:convec_exact}
\end{figure}
\begin{table}[H]
\centering
\captionsetup{width=1\linewidth}
\caption{\small Table of $L_2$ relative error and training time for different stacked-decomposition settings. M = 1 unless weak boundary conditions are used. Note stacked-decomposition is abbreviated s-d, interface condition as ic, residual continuity as rc, fine tuning as FT, and transfer learning as TL.}
\begin{tabular}[c]{l | c | c}
\toprule
Model settings & Relative $L_2$ Error & Training time (sec) \\
\hline
PINN & $8.28 \times 10^{-3}$ & 1,020 \\
PINN + weak BC & $2.94 \times 10^{-2}$ & 780 \\
s-d PINN (n = 10, dS = 1, ic = $C^p$) & $1.23 \times 10^{-2}$ & 1,141 \\
s-d PINN (n = 10, dS = 3, ic = $C^p$) & $4.47 \times 10^{-3}$ & 4,240 \\
s-d PINN (n = 10, dS = 1, ic = $C^p$) + weak BC & $7.69 \times 10^{-2}$ & 547 \\
s-d PINN (n = 10, dS = n, ic = $u_{avg}$ + rc) + FT & $3.90 \times 10^{-2}$ & 21,443 \\
s-d PINN (n = 10, dS = 1, ic = $C^p$) + FT & $7.43 \times 10^{-3}$ & 703 \\
s-d PINN (n = 10, dS = 3, ic = $C^p$) + FT & $5.13 \times 10^{-3}$ & 2,261 \\
s-d PINN (n = 10, dS = n, ic = $C^p$) + FT & $4.11 \times 10^{-3}$ & 5,066 \\
s-d PINN (n = 10, dS = 1, ic = $C^p$) + FT + weak BC & $3.44 \times 10^{-2}$ & 420 \\
s-d PINN (n = 10, dS = 1, ic = $C^p$) + TL & $1.96 \times 10^{-2}$ & 1,342 \\
s-d PINN (n = 10, dS = 1, ic = $C^p$) + TL + weak BC & $1.62 \times 10^{-2}$ & 490 \\
\bottomrule
\end{tabular}
\label{tb:convec}
\end{table}
In Table \ref{tb:convec}, many variations of stacked-decomposition are run for the convection problem. First, a standard PINN is able to solve the problem with relatively good accuracy and cost. We also observe that, unlike all results for the standard XPINN in Figure \ref{fig:failure_decomp}, the causal XPINN with fine tuning (Table line 6) can converge to the correct solution, albeit with great computational cost. Therefore, we have demonstrated that even under the most unfavorable conditions, such as periodic boundaries and XPINN interfaces, causal enforcement and transfer learning are able to overcome the zero-solution issue.
Another result is that varying $dS$ from 1 to $n$ acts as a spectrum of trade-offs between accuracy and cost. Looking at the results with fine tuning applied, $dS$ = 1, which is equivalent to time-marching, converged the fastest since only one network trains at a time, lowering the cost. As $dS$ increases to three and then $n$, the cost increases, but the accuracy improves, as training networks concurrently allows them to better resolve the solution and any discrepancies at the interfaces. Distributed parallel training \cite{shukla2021parallel} can reduce this additional cost while retaining the improved accuracy.
We observe that weak boundary condition enforcement takes less time to reach convergence but is significantly less accurate. We also observe that true transfer learning is not appropriate for temporal decomposition, but fine tuning is. This issue is described in more detail in \ref{sec:ft_vs_tl_appendix}. In summary, stacked-decomposition, particularly with $dS$ = 1 and fine tuning, can outperform the standard PINN in accuracy and cost. This is significant because even for problems in which the unmodified PINN does not fail, the framework improves the scalability of PINNs and yields improvement even on a short-time problem with relatively few points and little training.
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{convec_convergence.png}
\caption{\small Relative $L_2$ error in 16 subdomains for various numbers of decomposition partitions. (Left) Adam optimizer only until convergence. (Right) 500 Adam warm-up iterations, then L-BFGS optimization until convergence.}
\label{fig:convec_convergence}
\end{figure}
To investigate the effect of increasing the number of subdomains in causal, temporal decomposition, we systematically compare the relative $L_2$ error over subdomain sets for various settings. Starting with the single domain (PINN), we decompose the domain into n = 2, 4, 8, 10, 14, and 16 subdomains uniformly in time and report the relative $L_2$ error for each in 16 uniform subdomains. The temporal decomposition strategy is s-d PINN (n = \#, dS = 1, ic = $C^p$) + FT. Distinct from other experiments performed, we also consider optimizer choice in this study to provide insight into a main point of contention in PINN training: Adam vs. L-BFGS. In Figure \ref{fig:convec_convergence} (Left), it is clear that for a single-domain PINN with only Adam optimization, the loss function gets stuck in a suboptimal local minimum. As we introduce more subdomains, the relative $L_2$ error decreases. Eventually, the relative error converges, i.e., there is no improvement in predictive accuracy even after further decomposing the subdomain. Therefore, we observe that causal, temporal decomposition can overcome training challenges due to poor optimizer choice, as well as those previously discussed in Section \ref{ssec:motivation}.
Figure \ref{fig:convec_convergence} (Right) uses a warm-up of 500 Adam iterations before switching to L-BFGS. This warm-up is known to reduce the failure of L-BFGS in the early stage of training. In contrast to Adam-only optimization, the error is relatively constant across the number of subdomains. We report training times in this case because all methods converge. Training times are not reported for Adam-only training since it is misleading to analyze them when some cases fail and some do not. We can see that even when training challenges are not present, causal, temporal decomposition can improve training time and, therefore, the scalability of PINNs to larger and more expensive problems. However, there appears to be an ideal subdomain number, which will be problem-specific, and going beyond what is necessary increases run time with no benefit here. This is likely due to the interplay between the cost of refined learning of the network when the loss changes slowly, which must happen in all subnetworks, versus the benefit of convergence speed for smaller domains.
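A minimal sketch of this warm-up schedule, assuming a closure-style loss function, reads:
\begin{verbatim}
import torch

def warmup_then_lbfgs(model, loss_fn, warmup_iters=500):
    # Short Adam warm-up to stabilize early training, then full-batch
    # L-BFGS until its tolerance-based termination.
    adam = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(warmup_iters):
        adam.zero_grad()
        loss = loss_fn(model)
        loss.backward()
        adam.step()
    lbfgs = torch.optim.LBFGS(model.parameters(), max_iter=10_000,
                              tolerance_change=1e-7)
    def closure():
        lbfgs.zero_grad()
        loss = loss_fn(model)
        loss.backward()
        return loss
    lbfgs.step(closure)
\end{verbatim}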
\subsection{Allen-Cahn equation}
Let us consider the following Allen-Cahn problem
\begin{linenomath}\begin{align}
& \frac{\partial u}{\partial t} - 0.0001\frac{\partial^2 u}{\partial x^2} + 5u\left(u^2-1\right) = 0, \; (t,x)\in [0,1] \times [-1,1]
\label{eq:allen-cahn}
\end{align}\end{linenomath}
subject to periodic boundary conditions and an initial condition $u(0,x) = x^2 \cos(\pi x)$. Figure \ref{fig:exact_allen_cahn} shows the exact solution and the (normalized) singular value spectra of temporal snapshots for different data sets representative of decomposition and point weighting schemes. The lens of Kolmogorov n-widths, approximated by the rate of decay of these singular values, is proposed as an \textit{a priori} PINN convergence estimate in \cite{mojgani2022lagrangian}. We use this lens to view the Allen-Cahn problem with different time-slab sizes in addition to the window-sweeping weighting scheme with the error function kernel. As described in \cite{mojgani2022lagrangian}, a faster decay rate of the singular values of a set of snapshots should correlate with an increase in the rate of training convergence. We observe that smaller time-slabs have faster decay, which empirically aligns with faster training, potentially leading to reduced training times. The smooth error function kernel corresponds with zero-valued weights past $t = 0.1$. Therefore, compared to the decay over $t \in [0, 0.1]$, which has no weightings over this region, the window-sweeping method has a faster drop-off, indicating it is even easier to train.
\rmk{Although a smaller subdomain or weighting scheme considering fewer points may converge faster, the overall training time of the domain does not exactly extrapolate from this. This is due to the ``overhead'' cost of achieving the lower loss tolerances many more times, which increases as the convergence rate increases, creating a trade-off.}
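The spectra in Figure \ref{fig:exact_allen_cahn} can be reproduced in a few lines; the snapshot-matrix convention below is an assumption of ours.
\begin{verbatim}
import numpy as np

def normalized_spectrum(U):
    # U[i, j] = u(t_j, x_i): columns are temporal snapshots. A faster
    # decay of the normalized singular values is used as a proxy for a
    # smaller Kolmogorov n-width and hence easier PINN training.
    s = np.linalg.svd(U, compute_uv=False)
    return s / s[0]
\end{verbatim}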
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{allen+SVD.png}
\caption{\small (Left) Exact solution. (Right) Study of (normalized)
singular value spectra of temporal snapshots (formalized by \cite{mojgani2022lagrangian}) for the Allen-Cahn problem.}
\label{fig:exact_allen_cahn}
\end{figure}
\begin{table}[H]
\centering
\captionsetup{width=1\linewidth}
\caption{\small Table of $L_2$ relative error and training time for different window-sweeping settings. All methods use M = 10 unless otherwise stated. The loss tolerance used to propagate all methods is $10^{-7}$. Note that window-sweeping is abbreviated w-s. $^a$(bc-set = on, null-set = on) $^b$(bc-set = off, null-set = on).} \label{tb:allen-cahn}
\begin{tabular}[c]{l | c | c}
\toprule
Model settings & Relative $L_2$ Error & Training time \\
\hline
PINN & $5.11 \times 10^{-1}$ & 3,421 \\
s-d PINN (n = 10, dS = 1, ic = $C^p$) + FT & $2.77 \times 10^{-2}$ & 798 \\
w-s PINN (kernel = uniform, width = dt = $0.1$)$^b$ & $6.57\times 10^{-2}$ & 875 \\
w-s PINN (kernel = uniform, width = dt = $0.1$)$^a$: M = 1 & $2.25\times 10^{-2}$ & 448 \\
w-s PINN (kernel = uniform, width = dt = $0.1$)$^a$ & $1.73\times 10^{-2}$ & 466 \\
w-s PINN (kernel = uniform, width = $2$dt = $0.1$)$^b$ & $3.33\times 10^{-2}$ & 1,053 \\
w-s PINN (kernel = uniform, width = $2$dt = $0.1$)$^a$ & $1.58\times 10^{-2}$ & 574 \\
w-s PINN (kernel = linear, width = $4$dt = $0.1$)$^a$ & $3.45\times 10^{-2}$ & 994 \\
w-s PINN (kernel = error function, steep, dt = $0.0125$)$^a$ & $3.62\times 10^{-2}$ & 534 \\
w-s PINN (kernel = error function, smooth, dt = $0.0125$)$^a$ & $4.29\times 10^{-2}$ & 564 \\
\bottomrule
\end{tabular}
\end{table}
In Table \ref{tb:allen-cahn}, many variations of window-sweeping are run for the Allen-Cahn problem. Unlike the convection problem considered above, an unmodified PINN does not sufficiently solve this problem. The third row setting recovers adaptive time-sampling and the fifth row recovers bc-PINNs, as described in Table \ref{table:algo-subsets}. First, we find that all methods are able to overcome the training challenge encountered by the unmodified PINN. We also find that adding the backward-compatibility set, instead of continuously training on prior point sets, vastly decreases the training time with no adverse effect on the accuracy.
Uniform weights perform well compared to soft causality enforcement via weighting schemes, as used by methods such as causal weights \cite{wang2022respecting} and extended to the unified window-sweeping method by way of the linear and error function kernels and an equivalent causal weighting scheme. Under the loss tolerance setting of $10^{-7}$ used here, the causal weights kernel reaches this tolerance without sufficient training. Due to the sensitivity of its tunable causality parameter ($\epsilon$), as noted in the original paper, we present self-contained results for this kernel in \ref{sec:causal_weights_appendix}. We extend the method to non-grid sampling and reduce training time using the null-set segmentation of window-sweeping. We also find that for uniform weights, reducing the dt size such that new sets overlap with prior ones slightly improves accuracy at increased cost. The primary motivation for the model settings reported is to showcase how simple it is to modify the proposed framework to produce new models, not to conclude which method is the ``best,'' since different settings may be ideal for different problems. We also note the improved scalability of this approach, particularly in the application of the change-in-loss tolerances to propagate the methods. As a comparison, in bc-PINNs \cite{MATTEY2022114474}, the authors use 50,000 Adam iterations per segment and then L-BFGS iterations until tolerance termination, leading to hundreds of thousands of iterations. We report almost identical relative $L_2$ errors and use a total of around 12,000 iterations. This modification, used in both stacked-decomposition and window-sweeping, allows us to achieve more accurate solutions than unmodified PINNs in less time.
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{M1_vs_M10.png}
\caption{\small Point-wise error of w-s PINN (kernel = uniform, width = dt = $0.1$)$^a$ reported in Table \ref{tb:allen-cahn}. (Left) M = 10. (Right) M = 1.}
\label{fig:w-s_M1vsM10}
\end{figure}
To investigate the effect of the Fourier feature encoding frequency, we run the window-sweeping model with settings equivalent to recovering bc-PINNs, using encodings of M = 1 and M = 10, shown in Figure \ref{fig:w-s_M1vsM10}. This encoding is used in the paper introducing causal weights \cite{wang2022respecting} with M = 10. We find that a higher-order encoding can better resolve sharper features, similar to adaptive activation functions \cite{jagtap2020adaptive}; however, the error manifests itself elsewhere at this fidelity of training. As seen on the left side of the figure, the discontinuities that begin to form at the end of time around $\pm 0.5$ have low point-wise error for $M = 10$, although the error manifests itself in the relatively smooth $x = 0$ region. This is opposed to $M = 1$, which struggles at the discontinuities. For higher levels of training, the higher-order encoding will help resolve smaller scales in the solution space. However, at these stopping tolerances, we report similar accuracies for both encoding choices.
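One common way to realize such an encoding, which makes the network exactly periodic in $x$, is sketched below; the period argument and helper name are ours, and the appendix describes the version actually used.
\begin{verbatim}
import math
import torch

def fourier_encode(t, x, M, period=2.0):
    # Order-M Fourier feature encoding: replacing x by sin/cos pairs
    # makes any downstream network exactly periodic in x.
    feats = [t]
    w = 2.0 * math.pi / period
    for m in range(1, M + 1):
        feats += [torch.sin(m * w * x), torch.cos(m * w * x)]
    return torch.cat(feats, dim=-1)
\end{verbatim}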
\subsection{Korteweg–de Vries equation}
\label{ssec:kdv}
Let us consider the following Korteweg–de Vries (KdV) problem
\begin{linenomath}\begin{align}
& \frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} + 0.0025 \frac{\partial^3 u}{\partial x^3} = 0, \; (t, x)\in [0, T] \times [-1, 1] \label{eq:kdv}
\end{align}\end{linenomath}
subject to periodic boundary conditions and an initial condition $u(0,x) = \cos(\pi x)$. The exact solution for a short and long-time domain is shown in Figure \ref{fig:exact_kdv}.
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{kdv_longTime2.png}
\caption{\small Exact solution of Korteweg–de Vries delimiting the respective $T = [0, 1]$ and $T = [0, 5]$ problems.}
\label{fig:exact_kdv}
\end{figure}
\begin{table}[H]
\centering
\captionsetup{width=1\linewidth}
\caption{\small Table of $L_2$ relative error and training time for a combination of stacked-decomposition and window-sweeping. A change in loss tolerance of $10^{-7}$ is used for all methods, with the condition in the combined form that w-s must finish propagating before s-d propagates, ensuring each subdomain is sufficiently trained given the equivalent tolerances on both methods. $^a$(bc-set = on, null-set = on), $\dag$(width = dt = $0.02$), $\ddag$(width = dt = $0.1$).}
\begin{tabular}[c]{l | c | c}
\toprule
Model settings & Relative $L_2$ Error & Training time \\
\hline
\multicolumn{3}{c}{\multirow{2}{*}{$T\in [0, 1]$}} \\
\multicolumn{3}{c}{} \\
\hline
PINN & $5.40 \times 10^{-2}$ & 2,030 \\
s-d PINN (n = 10, dS = 1, ic = $C^p$) + FT & $1.43 \times 10^{-2}$ & 780 \\
w-s PINN (kernel = uniform, width = dt = $0.1$)$^a$ & $1.84 \times 10^{-2}$ & 1,287 \\
s-d + w-s$\dag$ PINN & $2.37 \times 10^{-2}$ & 1,806 \\
\hline
\multicolumn{3}{c}{\multirow{2}{*}{$T\in [0, 5]$}} \\
\multicolumn{3}{c}{} \\
\hline
PINN & $9.85 \times 10^{-1}$ & 15,224 \\
s-d PINN (n = 10, dS = 1, ic = $C^p$) + FT & $1.84 \times 10^{-1}$ & 3,566 \\
w-s PINN (kernel = uniform, width = dt = $0.5$)$^a$ & $8.12 \times 10^{-2}$ & 16,262 \\
s-d + w-s$\ddag$ PINN & $5.15 \times 10^{-2}$ & 7,493 \\
\bottomrule
\end{tabular}
\label{tb:kdv}
\end{table}
In Table \ref{tb:kdv}, the results for an instance of stacked-decomposition and window-sweeping are reported separately and in conjunction with one another. We observe that unmodified PINNs train well for the short time domain but encounter training difficulties on the long-time domain, as shown in \ref{ssec:kdv_long-appendix}. Although the baseline PINN trains for $T = [0, 1]$, an improvement in accuracy is still achieved by s-d and w-s PINNs with well-performing settings. We also note reduced training time in all configurations over unmodified PINNs. However, we do not observe any benefit in the combination of s-d + w-s for this domain, likely because the lower accuracy bound for the network size and tolerances is already achieved by the methods separately. An increase in training time, therefore, follows, as there is more ``overhead'' cost from using smaller window-sweeping time steps inside stacked-decomposition subnetworks.
In contrast, for the more difficult long-time problem, the methods on their own struggle to solve the problem well, as do unmodified PINNs. While s-d on this large domain is fast, the accuracy is poor, and w-s alone takes longer to train due to the change-in-loss tolerance. However, combining both yields an increase in accuracy while keeping the training time low relative to an unmodified PINN. For the w-s PINN alone, since the width and dt are large, the L-BFGS optimizer is likely to fail and cause NaNs. This is similar to why Adam is used early in training for any PINN before L-BFGS: if the domain change is too large, the optimizer is unstable. In this case, to keep the steps taken to a consistent 10, dt = 0.5 is too large and causes optimization issues. Therefore, we perform 500 Adam iterations every time the window-sweeping scheme is propagated to ensure the stability of the L-BFGS optimizer. This additional step also adds training time to the method. This is not necessary or performed for the w-s$\ddag$ setting. Figure \ref{fig:kdv_long_pwError} shows the point-wise error of the three methods used on the long-time problem. Aside from overall accuracy differences that can be inferred from the reported table values, all methods yield the highest errors at the latest time. This shows how important strongly respecting causality is, as any early deviation will lead to greater deviations later in time, regardless of the method. As the domain of a problem or the number of collocation points increases, our framework yields greater improvements.
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{kdv_long_pwError.png}
\caption{\small Point-wise error for $T = [0, 5]$ reported in Table \ref{tb:kdv}. (A) s-d PINN. (B) w-s PINN. (C) s-d + w-s$\ddag$ PINN.}
\label{fig:kdv_long_pwError}
\end{figure}
|
{
"arxiv_id": "2302.14178",
"language": "en",
"timestamp": "2023-03-01T02:03:55",
"url": "https://arxiv.org/abs/2302.14178",
"yymm": "2302"
} | \section{Introduction}\label{SEC1}
\subsection{Stochastic linear wave equation with L\'evy noise}
Stochastic partial differential equations (SPDEs)
have been studied intensively in the last 30 years, using different approaches.
In the semigroup approach (developed in \cite{DZ92}) or
the variational approach (pioneered in \cite{Par75} and
developed further in \cite{KR79}),
the solution and the noise are processes,
which evolve in time and take values in a Hilbert space.
The random field approach
(initiated by Walsh \cite{Walsh86} and developed further by Dalang \cite{Dalang99})
deviates significantly from these approaches by
proposing a different framework for
viewing the noise and the solution.
In Walsh-Dalang's approach, the solution is a space-time indexed process
(i.e. a random field) and
the noise is a process
indexed by subsets of the space-time domain (or functions on this domain).
We refer the readers to \cite{PR07, DKMNX09, RL18}
for an overview of the study of SPDEs using these approaches; see
also the paper \cite{DQS11}
for their close connections.
Regardless of the approach,
one can think of the noise (and the initial condition) as the input,
and the solution as the output.
One of the fundamental problems for SPDEs
is the well-posedness problem (i.e. existence, uniqueness, and stability
under perturbation
of the initial data and/or the noise).
And probabilists have been driven to study/discover
new properties of the SPDE solutions, for example, {\it stationarity, ergodicity}, and
{\it intermittency property} (i.e. exponential growth of the $p$-th moment
for large time), to name a few.
Various classes of processes have been proposed as models for the noise
perturbing a partial differential equation,
often derived by an analogy with the noises appearing in the classical SDEs:
Brownian motion, L\'evy processes, and fractional Brownian motions.
But the introduction of the infinite dimensional (and spatial) component
changes drastically the problem and leads to new challenges.
The class of SPDEs perturbed by L\'evy noise
have been studied extensively in the monograph \cite{PZ07}
using the semigroup approach,
where they are naturally interpreted as extensions of SDEs
driven by L\'evy processes.
In the present article, we will take Walsh-Dalang's random field perspective
and study the following stochastic linear wave equation with a multiplicative
{\it L\'evy noise} on $\mathbb{R}_+\times\mathbb{R}$:
\noindent
\begin{align}
\begin{cases}
&\partial_t^2 u(t,x) = \partial_x^2 u (t,x)+u(t,x)\dot{L}(t,x),
\,\,\, (t, x)\in (0,\infty) \times \mathbb{R} \\
&\text{$u(0,x)=1$ and $\partial_t u(0,x)=0$, $x\in\mathbb{R}$,}
\end{cases}
\label{wave}
\end{align}
\noindent
where $\dot{L}$ denotes a space-time L\'evy noise and the product
$u \dot{L}$ is interpreted in It\^o sense.
This equation is also known as the {\it hyperbolic Anderson model},
by an analogy of the parabolic Anderson model with the wave operator
$\partial_t^2- \partial_x^2$ replaced by the heat operator $\partial_t - \partial_x^2$.
Let us briefly set up the framework.
Let $\mathcal{B}_0(\mathbb{R}_+\times\mathbb{R})$ denote the collection of
Borel sets $A$ of $\mathbb{R}_+ \times \mathbb{R}$ with $\textup{Leb}(A)<\infty$,
where $\textup{Leb}$ denotes the Lebesgue measure on $\mathbb{R}_+\times\mathbb{R}$.
Let
\noindent
\begin{align}
Z=\mathbb{R}_{+} \times \mathbb{R} \times \mathbb{R}_0, \quad \mathcal{Z}= \textup{Borel $\sigma$-algebra on $Z$},
\,\,\, {\rm and} \,\,\,
m={\rm Leb} \times {\rm Leb} \times \nu,
\notag
\end{align}
\noindent
where
the space $\mathbb{R}_0:=\mathbb{R}\setminus\{0\}$ is equipped with the distance $d(x,y)=|x^{-1}-y^{-1}|$, and $\nu$
is a $\sigma$-finite measure on $\mathbb{R}_0$ subject to
\noindent
\begin{align}
\label{LevyM}
\int_{\mathbb{R}_0}\min (1,|z|^2)\nu(dz)<\infty.
\end{align}
\noindent
Let $N$ be a Poisson random measure on the space $(Z,\mathcal{Z})$ with intensity $m$,
and let $\widehat{N}=N-m$ be the compensated version of $N$;
see Definition \ref{def:PRM} for more details.
Fix $b\in\mathbb{R}$.
For $A\in\mathcal{B}_0(\mathbb{R}_+\times\mathbb{R})$,
we define
\noindent
\begin{align}\label{LbA}
\begin{aligned}
L_b(A)
&\equiv \int_{\mathbb{R}_+\times\mathbb{R}} \mathbf 1_A(t,x) \dot{L}_b(t, x)dtdx
\equiv \int_{\mathbb{R}_+\times\mathbb{R}} \mathbf 1_A(t,x) L_b(dt, dx) \\
&= b \cdot \textup{Leb}(A)+\int_{A \times \{|z|\leq 1\}}z \widehat{N}(dt,dx,dz)+\int_{A \times \{|z|> 1\}}z N(dt,dx,dz),
\end{aligned}
\end{align}
\noindent
which is an {\it infinitely divisible} random variable with
\begin{align}
\begin{aligned}
\mathbb{E}\big[ e^{i \lambda L_b(A)} \big]
& = \exp\bigg(
i \lambda b \textup{Leb}(A) + \textup{Leb}(A) \int_{|z|\leq 1} (e^{i \lambda z} - 1 - i\lambda z) \nu(dz) \\
&\qquad\qquad\qquad +\textup{Leb}(A) \int_{|z| > 1} (e^{i \lambda z} - 1 ) \nu(dz)
\bigg)
\end{aligned}
\label{IDlaw}
\end{align}
for any $\lambda\in\mathbb{R}$.
And one can easily verify that for any $p>0$,
\noindent
\begin{align}
\mathbb{E}\big[ | L_b(A) |^p \big]<\infty
\,\,\,
\Longleftrightarrow
\,\,\, M_p:=\int_{|z|>1}|z|^p\nu(dz)<\infty.
\label{def:MP}
\end{align}
See Appendix \ref{APP} for the proof. In particular,
$L_b(A)$ has finite variance if and only if $M_2<\infty$.
And throughout this paper,
\begin{center}
we always assume that $M_2 <\infty$.
\end{center}
\noindent
By choosing $b = - \int_{|z|>1} z\nu(dz)$,\footnote{This
integral is finite due to the condition \eqref{LevyM} and $M_2<\infty$.}
we put
\begin{align}
L(A) = \int_{A\times \mathbb{R}_0} z \widehat{N}(dt, dx, dz),
\label{LA1}
\end{align}
which has mean zero and differs from \eqref{LbA} by a constant.
We say that
\begin{center}
$\{L(A): A \in \mathcal{B}_0(\mathbb{R}_+ \times \mathbb{R})\}$ is
a (pure-jump) {\em space-time L\'evy white noise}.
\end{center}
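For instance, the standard $L^2(m)$-isometry for integrals with respect to the compensated measure $\widehat{N}$ (recalled in Section \ref{SEC2}) yields
\noindent
\begin{align*}
\mathbb{E}\big[ L(A) L(B) \big]
= \textup{Leb}(A\cap B)\int_{\mathbb{R}_0} z^2 \nu(dz)
\quad \text{for any } A, B\in\mathcal{B}_0(\mathbb{R}_+\times\mathbb{R}),
\end{align*}
\noindent
which justifies the terminology ``white noise''.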
%
%
%
Note that \eqref{LbA}
is the analogue of the {\em L\'evy-It\^o decomposition}
(\cite[Theorem 19.2]{Sato99})
of a classical L\'evy process
$X=\{X(t)\}_{t\geq 0}$ without Gaussian component, whereas \eqref{IDlaw} is the analogue of the
{\em L\'evy-Khintchine formula} (\cite[Theorem 8.1]{Sato99}).
In this classical setting, there is no space component $x\in\mathbb{R}$, and
the corresponding Poisson random measure on $\mathbb{R}_{+} \times \mathbb{R}_0$
with intensity $\textup{Leb}\times\nu$ contains information about the location
and the size of the jumps of $X$.
That being said, we also call $\nu$ the jump intensity measure
for the space-time L\'evy noise $L$.
In \cite{BN16}, the first author and Ndongo proved
the existence, uniqueness, and intermittency property
for the stochastic nonlinear wave equation
in dimension $d=1$,
i.e. with $u\dot{L}$ replaced by $\sigma(u)\dot{L}$,
where $\sigma:\mathbb{R}\to\mathbb{R}$ is Lipschitz.
For a general L\'evy noise, the existence of the solution of
the wave equation in dimension $d\leq 2$ was established in \cite{B21},
together with some path properties.
In this article, we consider the hyperbolic Anderson model \eqref{wave}
and establish the first ergodicity and central limit theorem in
a finite-variance setting, namely, when $M_2<\infty$.
In view of the condition \eqref{LevyM}, we assume the following
equivalent condition throughout this paper:
\noindent
\begin{align}
m_2:=\int_{\mathbb{R}_0} |z|^2 \nu(dz)<\infty.
\label{m2}
\end{align}
\noindent
$\bullet$ {\bf Mild solution.}
We say that $u$ is a (mild) {\em solution}
to hyperbolic Anderson model \eqref{wave} if
$u=\{ u(t,x): (t,x)\in\mathbb{R}_+\times\mathbb{R}\}$
is a predictable\footnote{Predictability is defined with respect
to the filtration generated by the noise $L$; see \eqref{filtraF}. } process
with $u(0,x)=1$
for any $x\in \mathbb{R}$ such that
for any $t>0$ and $x \in \mathbb{R}$, we have
\noindent
\begin{align*}
u(t,x)=1+\int_0^t \int_{\mathbb{R}}G_{t-s}(x-y)u(s,y)L(ds,dy),
\end{align*}
\noindent
almost surely,
where
\begin{align}
G_t(x)=\frac{1}{2} \mathbf 1_{\{|x|<t\}}
\label{FSol}
\end{align} is the fundamental solution
to the deterministic wave equation on $\mathbb{R}_{+} \times \mathbb{R}$,
and the stochastic integral is interpreted in the It\^o sense,
which is a particular case of the Kabanov-Skorohod integral;
see Lemma \ref{lem:CE2} (iv).
This mild formulation
was introduced in \cite{Walsh86},
being motivated by the Duhamel's principle in PDE theory.
Since the stochastic integral has zero-mean,
\[
\mathbb{E}[u(t,x) ] =1 \,\,\, \mbox{for any $(t,x)\in\mathbb{R}_+\times\mathbb{R}$}.
\]
Throughout this paper, we make
the following convention:
\noindent
\begin{align}
\text{$G_t(x)=0$ for all $t\leq 0$ and $x \in \mathbb{R}$}.
\label{convention}
\end{align}
By Theorem 1.1 of \cite{BN16}, the equation \eqref{wave}
has a unique solution satisfying
\[
\sup_{(t,x) \in [0,T] \times \mathbb{R}}\mathbb{E}[|u(t,x)|^2] <\infty \quad \mbox{for any} \ T>0.
\]
\noindent
The same theorem shows that if in addition, there exists some finite $p\geq 2$ such that
\begin{equation}
\label{mp}
m_p:=\int_{\mathbb{R}_0}|z|^p\nu(dz)<\infty,
\end{equation}
then
\begin{equation}
\label{KPT}
K_p(T):=\sup_{(t,x)\in [0,T] \times \mathbb{R}} \big( \mathbb{E}[|u(t,x)|^p] \big)^{\frac1p}<\infty \quad \mbox{for any} \ T>0.
\end{equation}
See \cite{BN16,BN17} for more details.
It is also known that due to the linearity of the noise in $u$,
the solution $u(t,x)$ to \eqref{wave} admits the following Wiener chaos expansion:
\noindent
\begin{align}
u(t,x) = \sum_{n\geq 0} I_n(F_{t,x,n}) ,
\label{u1}
\end{align}
\noindent
where $F_{t,x,0}=1$ and for $n\in\mathbb{N}_{\geq 1}$,
the (non-symmetric) kernel $F_{t,x,n}(\pmb{t_n},\pmb{x_n}, \pmb{z_n})$ is given by
\noindent
\begin{align}
F_{t,x,n}(\pmb{t_n},\pmb{x_n}, \pmb{z_n})
&=G_{t-t_n}(x-x_n)z_n \ldots G_{t_2-t_1}(x_2-x_1)z_1
\mathbf 1_{\{ t> t_n > \cdots > t_1>0\}};
\label{KER:F}
\end{align}
\noindent
see \cite{BN17} and see also Subsection \ref{SUB22}.
From the orthogonality relation (see \eqref{int3c}) with $\widetilde{F}_{t,x,n}$ denoting
the symmetrization of $F_{t,x,n}$ (see \eqref{defh2}), we see that
\noindent
\begin{align}
\begin{aligned}
{\rm Cov}(u(t,x),u(s, y))
=\sum_{n\geq 1}n! \langle \widetilde{F}_{t,x,n},\widetilde{F}_{s,y,n} \rangle_{L^2(Z^n)}
\end{aligned}
\label{rho_t}
\end{align}
and it is not difficult to see from \eqref{KER:F} that the covariance \eqref{rho_t}
depends on $(x,y)$ only via the difference $x-y$.
This hints that
for any fixed $t\in\mathbb{R}_+$, the process $\{u(t,x)\}_{x \in\mathbb{R}}$ is stationary.
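Indeed, since each factor in \eqref{KER:F} depends on the spatial variables only through differences, the change of variables $x_i \mapsto x_i + y$ (together with the translation invariance of the Lebesgue measure) shows that, for any shift $y\in\mathbb{R}$,
\noindent
\begin{align*}
\langle \widetilde{F}_{t,x+y,n},\widetilde{F}_{s,x'+y,n} \rangle_{L^2(Z^n)}
= \langle \widetilde{F}_{t,x,n},\widetilde{F}_{s,x',n} \rangle_{L^2(Z^n)}.
\end{align*}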
In fact, as we will see in Lemma \ref{lem:stat},
the process $\{u(t,x)\}_{x \in\mathbb{R}}$ is strictly stationary in the sense that
for any $x_1, ... , x_m, y \in\mathbb{R}$ with any $m\in\mathbb{N}_{\geq 2}$,
\noindent
\begin{center}
$( u(t, x_1 +y), ... , u(t, x_m + y) ) = ( u(t, x_1), ... , u(t, x_m) ) $ in law.
\end{center}
Then, it is natural to define an associated family of shifts $\{\theta_y\}_{y\in\mathbb{R}}$
by setting
\[
\theta_y\big( \{ u(t, x)\}_{x\in\mathbb{R}} \big) := \{ u(t, x+y)\}_{x\in\mathbb{R}},
\]
which preserve the law of the (spatial) process. Then,
the following question arises:
\begin{align}
\textit{Are the invariant sets for $\{\theta_y\}_{y\in\mathbb{R}}$ trivial?
\textup{(i.e.} is $u(t,\bullet)$ spatially ergodic?\textup{)}}
\label{quest}
\end{align}
\noindent
One can refer to, for example, the book \cite{Peter89} for
an introduction to ergodic theory.
\medskip
To the best of the authors' knowledge, the question \eqref{quest} of spatial ergodicity
has not been investigated for the hyperbolic Anderson model
\eqref{wave} driven by L\'evy noise. See the work \cite{NZ20} by Nualart
and the second author for the study of the stochastic nonlinear wave
equation driven by Gaussian noises, and see also \cite{CKNP21} for
a similar study of parabolic SPDEs.
In this paper, we present the first ergodicity result
for the equation \eqref{wave}, and thus answer the question affirmatively;
see Theorem \ref{thm:main} (i).
Consequently, the spatial ergodicity implies the following first-order
fluctuation (of `law of large numbers' type): letting
\noindent
\begin{align}
F_R(t) := \int_{-R}^R \big( u(t,x) - 1 \big)dx,
\label{FRT}
\end{align}
we have
\noindent
\begin{align}
\frac{F_R(t)}{R}
\to 0 \,\, \text{in $L^2(\Omega)$ as $R\to\infty$;}
\label{LLN}
\end{align}
in fact this convergence also holds almost surely. See Remark \ref{rem:erg} (b). After establishing the first-order fluctuation,
it is natural to investigate the second-order fluctuation:
we will show that $F_R(t) $ (with $t>0$) admits Gaussian fluctuation as $R\to\infty$;
see Theorem \ref{thm:main} (ii). The central limit theorems (CLT) therein are of
quantitative nature, described by Wasserstein distance and Kolmogorov distance.
We are also able to obtain a functional CLT (see part (iii) in Theorem \ref{thm:main}).
\subsection{Main results}
Now we are ready to state the main theorem in this paper.
\begin{theorem} \label{thm:main}
Recall the definition of $m_p$ in \eqref{mp}.
Let $u$ solve the hyperbolic Anderson model \eqref{wave}. Then,
the following statements hold.
\smallskip
\noindent
{\rm (i)} Fix $t\in\mathbb{R}_+$ and assume that $m_p< \infty$ for any finite $p\geq 2$.
Then,
$\{ u(t,x): x\in\mathbb{R}\}$ is ergodic.
\smallskip
\noindent
{\rm (ii)} Assume $m_4 < \infty$ and fix $t\in(0,\infty)$.
Then, the spatial integral $F_R(t)$,
defined in \eqref{FRT},
admits Gaussian fluctuation as $R\to\infty$. More precisely, letting
$\sigma_R(t) = \sqrt{\textup{Var} (F_R(t) ) }$, we have that $F_R(t)/\sigma_R(t)$
converges in law to the standard normal distribution $\mathcal{N}(0,1)$,
with $\sigma_R(t)\sim \sqrt{R}$ as $R\to\infty$.
Moreover, the following rates of convergence hold:
\noindent
\begin{align}
\textup{dist}\Big( \frac{F_R(t)}{ \sigma_R(t) } , \mathcal{N}(0,1) \Big)
\lesssim \frac{1}{\sqrt{R}},
\label{CLT}
\end{align}
\noindent
where the implicit constant in \eqref{CLT} does not depend on $R$
and one can choose the distributional metric $\textup{dist}$ to be one of the
following:
Fortet-Mourier distance, $1$-Wasserstein distance, and Kolmogorov distance;
see Subsection \ref{SUB23} for the definitions of these distances.
\smallskip
\noindent
{\rm (iii)} Assume $m_4 < \infty$. Then, for any fixed $R\geq 1$,
the process $\{ F_R(t) \}_{t\in\mathbb{R}_+}$ admits a
locally $\beta$-H\"older continuous modification for any $\beta\in(0, \frac34)$.
Let $\mathcal{G} := \{\mathcal{G}_t\}_{t\in\mathbb{R}_+}$ denote a real
centered continuous Gaussian process
with covariance structure $\Sigma$ given as in \eqref{COV_S}.
Then, the process $\{ \frac{1}{\sqrt{R}}F_R(t) \}_{t\in\mathbb{R}_+}$
converges in law to $\mathcal{G}$ in the space $C(\mathbb{R}_+; \mathbb{R})$ as $R\to\infty$.\footnote{The
space $C(\mathbb{R}_+; \mathbb{R})$ consists of continuous functions from $\mathbb{R}_+$ to $\mathbb{R}$.
Equipped with the compact-open topology (the topology of uniform
convergence on compact sets), the space $C(\mathbb{R}_+; \mathbb{R})$ is Polish
(i.e. a complete separable metrizable topological space).
}
\end{theorem}
\begin{remark} \label{rem:main}
\rm
(a) In view of \eqref{LevyM} and interpolation, one can deduce that
\[
m_p < \infty \Longrightarrow m_q <\infty
\]
for $2\leq q\leq p < \infty$. In particular, the condition $m_4<\infty$
implies the finiteness of both $m_3$ and $m_2$.
\smallskip
\noindent
(b) The condition `$m_4 <\infty$' ensures that the solution $u(t,x)$ has
finite fourth moment. This condition arises in one of the key steps in the proof,
which is the estimation of the $L^2(\Omega)$-norm of the Malliavin derivative of the
solution (see Proposition \ref{prop:dec})
and the use of H\"older's inequality in places like \eqref{ga1a}.
We would like to point out that this finite `fourth moment condition'
is mild in view of recent work
on the fourth moment theorems on a Poisson space; see
\cite{DP18, DVZ18}. The proof of the functional CLT in part (iii) consists of
the usual two steps: showing convergence of the finite-dimensional distributions and
proving tightness. Note that for the tightness part, the condition
`$m_2 < \infty$'
suffices, in view of \eqref{Rosen2a} with $p=2$ and
a criterion of Kolmogorov-Chentsov (see e.g. \cite[Theorem 23.7]{Kall21}).
See also Subsection \ref{SUB42}.
\smallskip
\noindent
(c) The limiting Gaussian process $\mathcal{G}$ is almost surely locally $\beta$-H\"older
continuous for any $\beta\in(0, \frac34)$ provided $m_4<\infty$.
This may not be easily derived from its covariance structure \eqref{COV_S}.
Alternatively, one can prove it via a limiting argument,
combined with Theorem \ref{thm:main} (ii) and Proposition \ref{Prop:Rosen} (ii).
\end{remark}
\begin{remark} \label{rem:erg} \rm
\smallskip
\noindent
(a) To obtain the spatial ergodicity in part (i) of Theorem \ref{thm:main},
we exploit a criterion from \cite{CKNP21} that uses a Poincar\'e-type
inequality. The use of the Poincar\'e inequality inevitably leads
us to assume `$m_p<\infty$ for any finite $p\geq 2$', due to
the inherent discreteness of the Poisson setting.
It is not clear to us whether or not the spatial ergodicity in part (i) holds merely
under the assumption `$m_4<\infty$'.
\smallskip
\noindent
(b) Assume only $m_2<\infty$. Then, we can easily deduce the $L^2$-convergence in
\eqref{LLN} (as $R\to\infty$) from the bound \eqref{Rosen2b} in
Proposition \ref{Prop:Rosen}.
Moreover, if we assume $m_{2+\varepsilon} < \infty$ for some small $\varepsilon>0$,
then we can also
obtain the almost sure convergence as $R\in\mathbb{N}\to\infty$:
we first deduce from \eqref{Rosen2a} with $p=2+\varepsilon$ that
\begin{align}
\sum_{k\in\mathbb{N}} \mathbb{E}\Big[ \frac{|F_k(t) |^{2+\varepsilon}}{k^{2+\varepsilon}} \Big]
\lesssim \sum_{k\in\mathbb{N}} \frac{1}{k^{1+\frac{\varepsilon}{2}}} <\infty,
\notag
\end{align}
\noindent
and thus from Fubini's theorem, it follows that $\sum_{k\in\mathbb{N}} \frac{|F_k(t)|^{2+\varepsilon}}{k^{2+\varepsilon}}$
is finite almost surely, which implies that $F_k(t)/k \to 0$ almost surely as $k\in\mathbb{N} \to \infty$.
\end{remark}
Theorem \ref{thm:main} presents the first result of spatial ergodicity
and the (quantitative) central limit theorem for SPDEs driven by
space-time L\'evy noise. Our work is motivated by
a recent line of investigations for SPDEs with Gaussian noise.
In \cite{HNV20}, Huang, Nualart, and Viitasaari
initiated the study of central limit theorems for SPDEs in Dalang-Walsh's
random field framework. More precisely, they established
the first Gaussian fluctuation result
for the spatial integral of the solution
to a stochastic nonlinear heat equation driven by space-time Gaussian white noise.
Since then, we have witnessed a rapidly growing
literature on similar CLT results for heat equations with various
Gaussian homogeneous
noises; see, for example,
\cite{HNVZ20,NZejp, CKNP21, CKNP22, CKNPjfa,
ANTV22, NXZ22,NZ22, PU22, LP22}.
Meanwhile, such a program was carried out by Nualart,
the second author, and their collaborators
to investigate the stochastic nonlinear wave equation driven by Gaussian noises;
see \cite{DNZ20,BNZ21, NZ22, NZ20, BNQSZ}.
In the present article, we carry out a similar program for the SPDE
with L\'evy noises,
by first investigating the hyperbolic Anderson model \eqref{wave}
with multiplicative
space-time L\'evy white noise.
Our approach is built on some recent results of
Malliavin calculus on the Poisson space
(see \cite{PSTU10, PT13, Last16, LPS16, DP18, DP18b, DVZ18, LP18, LMS22}).
As such, we choose to first consider the finite-variance setting,
in which we develop an $L^2$ theory of Malliavin calculus
associated with the space-time L\'evy white noise.
Our main tool is a second-order Poincar\'e inequality due to \cite{LPS16}
(Propositions \ref{prop:Wass} and \ref{prop:Kol} below)
combined with some key estimates for the Malliavin derivatives of the solution
(relations \eqref{D1est} and \eqref{D2est} below).
The latter are obtained using the explicit chaos expansions of
these Malliavin derivatives,
and a connection with the solution to the stochastic wave equation
with delta initial velocity
(which is studied in Section \ref{SUB31} and may be of independent interest).
In the case of the stochastic nonlinear wave equation
(with $u\dot{L}$ replaced by $\sigma(u)\dot{L}$ in \eqref{wave}),
the solution does not have an explicit chaos expansion,
and one needs novel ideas
for establishing similar CLT results.
We plan to investigate this problem in the future.
Another interesting and more challenging direction is
to investigate the infinite-variance setting;
for example, one may begin with the hyperbolic Anderson model
\eqref{wave} with $L$ being the $\alpha$-stable L\'evy noise.
We expect that some {\it noncentral} limit theorems would arise.
In the recent work \cite{DT22},
Dhoyer and Tudor considered
a stochastic heat equation with Rosenblatt noise and
established a noncentral limit theorem with the limiting process
being a Rosenblatt process that lives
in the second Gaussian Wiener chaos and thus has moments of all orders. We expect it to be much more difficult
to obtain the conjectured noncentral limit theorem in the aforementioned
infinite-variance setting.
At the end of this introduction, let us also mention that the stochastic
heat equation with multiplicative L\'evy noises $\sigma(u) \dot{L}$, with $\sigma$ Lipschitz,
has been studied in a series of recent papers.
The existence of the solution was proved in \cite{chong17},
the weak intermittency property was established in \cite{chong-kevei19},
some path properties were obtained in \cite{CDH19},
and the exact tail behavior was described in \cite{chong-kevei22} in the case
of additive noise (i.e. when $u\dot{L}$ is replaced by $\dot{L}$).
Uniqueness and strong intermittency of the solution were obtained in \cite{BCL}
in the case of multiplicative noise when $\sigma(u)=u$.
All these results are valid for a general L\'evy noise with
possibly infinite variance (such as the $\alpha$-stable L\'evy noise).
It is therefore natural and interesting to study limit theorems for the solution to the stochastic heat equation driven by this type of L\'evy noise.
\bigskip
\noindent
$\bullet$ {\bf Organization of this paper.} In Section \ref{SEC2},
we introduce the framework,
and include some basic definitions and results regarding:
stochastic analysis on the Poisson space, Poincar\'e inequalities,
and moment inequalities.
In Section \ref{SEC3}, we present moment estimates for the
Malliavin derivatives of the solution.
Section \ref{SEC4} is devoted to the proof of Theorem \ref{thm:main}.
\section{Preliminaries}\label{SEC2}
\subsection{Notations} \label{SUB21} By $a\lesssim b$, we mean that
$a\leq C b$ for some positive finite constant $C$ that does not
depend on $(a, b)$. We write $a\sim b$ if $a\lesssim b$ and $b\lesssim a$.
For conciseness, we write $a\wedge b = \min(a, b)$ and
$a\vee b = \max(a,b)$
for any $a,b\in\mathbb{R}$. Throughout this paper, we fix a sufficiently rich probability
space $(\Omega, \mathcal{F}, \mathbb{P})$, on which all the random objects in this
paper are defined. We denote by $\mathbb{E}$ the associated expectation operator.
For a real-valued random variable
$X\in L^p(\Omega, \mathcal{F}, \mathbb{P})$, we write
$\| X\|_p := \| X \|_{L^p(\Omega)} = ( \mathbb{E}[ | X|^p ] )^{\frac1p}$ for finite $p\geq 1$,
while $\| X\|_\infty$ is defined as the essential supremum of $X$.
To indicate that two random objects
$X, Y$
have the same distribution,
we write $X\stackrel{\rm(law)}{=} Y$; and we write $Y\sim \mathcal{N}(0,1)$ to mean that
$Y$ is a standard Gaussian random variable.
We denote by $\sigma\{X\}$ the $\sigma$-algebra generated
by the random object $X$. For example, $L^2(\Omega, \sigma\{ N\}, \mathbb{P})$
denotes the space of real-valued, square-integrable random variables
that are measurable with respect to $\sigma\{N\}$.
\begin{definition}\label{def:PRM}
We say $N= \{ N(A)\}_{ A\in\mathcal{Z}}$ is a Poisson random measure with
intensity measure $m$, provided the following conditions are satisfied:
\begin{itemize}
\item for each $A\in\mathcal{Z}$, the random variable $N(A)$
follows the Poisson distribution with mean $m(A)$.\footnote{If $m(A)=\infty$,
we set $N(A)=\infty$ almost surely.}
\item for any finite sequence $A_1, ... , A_k\in\mathcal{Z}$ of pairwise
disjoint sets, the random variables $N(A_1), ... , N(A_k)$ are independent.
\end{itemize}
For $A\in\mathcal{Z}$ with $m(A)<\infty$, we define $\widehat{N}(A) = N(A) - m(A)$ and we
call it the compensated Poisson random measure on $(Z, \mathcal{Z}, m)$.
\end{definition}
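For instance, if $m(A)<\infty$, then $\widehat{N}(A)$ is a centered random variable with $\textup{Var}\big( \widehat{N}(A) \big) = m(A)$, since a Poisson random variable with mean $\lambda$ has variance $\lambda$; and for disjoint sets $A, B\in\mathcal{Z}$ of finite measure, $\mathbb{E}[ \widehat{N}(A) \widehat{N}(B) ] = 0$ by independence.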
Assume that $M_2<\infty$ (see \eqref{def:MP} and \eqref{m2}),
and
let $L=\{L(A): A \in \mathcal{B}_0(\mathbb{R}_+ \times \mathbb{R})\}$ be the
finite-variance L\'evy space-time white noise given
as in \eqref{LA1}. We set $L(1_{A})=L(A)$,
and we extend this definition by linearity to simple functions.
Then, by approximation, for any function
$\varphi \in L^2(\mathbb{R}_+ \times \mathbb{R})$,
we define the stochastic integral
$L(\varphi)=\int_{\mathbb{R}_+ \times \mathbb{R}}\varphi(t,x) L(dt,dx)$.
Note that
\noindent
\begin{align}
\label{L1}
L(\varphi)=\int_{\mathbb{R}_+ \times \mathbb{R} \times \mathbb{R}_0} \varphi(t,x) z \widehat{N}(dt,dx,dz).
\end{align}
\noindent
Similarly to the Gaussian white noise, this integral satisfies an isometry property:
\[
\mathbb{E}[L(\varphi)L(\psi)]=m_2 \langle \varphi,\psi \rangle_{L^2(\mathbb{R}_+ \times \mathbb{R})}
\]
\noindent
with $m_2$ as in \eqref{m2}.
Moreover, the family $\{L_t(A)=L([0,t]\times A): t\geq 0, A \in \mathcal{B}_0(\mathbb{R})\}$
is a worthy martingale measure, as defined in \cite{Walsh86}.
The It\^o-type stochastic integral $\int_0^t \int_{\mathbb{R}}X(s,x)L(ds,dx)$
with respect to $L$ is well-defined for any predictable process
$X=\{X(t,x): t\geq 0,x\in \mathbb{R}\}$ with
\[
\mathbb{E} \int_0^t \int_{\mathbb{R}}|X(s,x)|^2 dxds<\infty \quad \mbox{for any $t>0$},
\]
and is related to the It\^o-type stochastic integral with respect to $\widehat{N}$
as follows:
\[
\int_0^t \int_{\mathbb{R}}X(s,x)L(ds,dx)
=
\int_0^t \int_{\mathbb{R}}\int_{\mathbb{R}_0}X(s,x)z \widehat{N}(ds,dx,dz).
\]
\noindent
Predictability is defined with respect to the filtration $\mathbb{F}$ induced by $N$, given by \eqref{filtraF} below. More concretely,
a predictable process is a process that is measurable with respect to the
predictable $\sigma$-field on $\mathbb{R}_+ \times \mathbb{R}\times\mathbb{R}_0$,
which is the
$\sigma$-field generated by linear combinations of elementary
processes of the form
\noindent
\begin{align}
V(t,x, z)=Y \mathbf 1_{(a,b]}(t) \mathbf 1_{A\times\Gamma}(x, z),
\label{simpleX}
\end{align}
\noindent
where $0<a<b$,
$A\times\Gamma \in \mathcal{B}(\mathbb{R}) \times\mathcal{B}(\mathbb{R}_0)$ satisfies
$\textup{Leb}(A)+\nu(\Gamma)<\infty$,
and $Y$ is a bounded $\mathcal{F}_a$-measurable random variable.\footnote{We can additionally
restrict $Y$ to be Malliavin differentiable. This additional restriction
will be used in the proof of Lemma \ref{lem:CE2} (iv).}
We refer readers to \cite{B15,BN16}, and Section 8.7 of \cite{PZ07}
for more details about integration with respect to $L$ and $\widehat{N}$.
Recall that the stochastic integral $L(\varphi)$ given by \eqref{L1}
is a centered and square-integrable random variable with
\noindent
\begin{align*}
\textup{Var}\big( L(\varphi) \big)
&= \int_{\mathbb{R}_{+} \times \mathbb{R} \times \mathbb{R}_{0}} | \varphi(t,x)z |^2 \, dt dx \nu(dz) \\
&= m_2 \| \varphi \|_{L^2(\mathbb{R}_+\times \mathbb{R})}^2
\end{align*}
\noindent
with $m_2$ as in \eqref{m2}.
Note that $L(\varphi)$ lives in the first Poisson Wiener chaos associated
to the Poisson random measure $N$
and it coincides with the first-order Wiener-It\^o-Poisson integral
$I_1( \varphi\otimes z )$. Let us now construct $I_1(\phi)$
for a deterministic function $\phi\in L^2(Z, \mathcal{Z}, m)$.
First, there is a sequence of simple functions $\{ \phi_n\}_n$ of the form
\noindent
\begin{align}
\phi_n = \sum_{j=1}^{M_n} \alpha_j \mathbf 1_{A_j\times B_j\times C_j }
\label{int0}
\end{align}
\noindent
with $\alpha_j\in\mathbb{R}$, $M_n\in\mathbb{N}$, and
$(A_j, B_j, C_j) \in \mathcal{B}(\mathbb{R}_+) \times \mathcal{B}(\mathbb{R})\times \mathcal{B}(\mathbb{R}_0) $
with finite measure,
such that $\phi_n$ converges to $\phi$ in $L^2(Z, \mathcal{Z}, m)$
as $n\to\infty$. Then,
\begin{align}
I_1(\phi_n) := \sum_{j=1}^{M_n} \alpha_j \widehat{N}( A_j\times B_j\times C_j )
\label{int1}
\end{align}
\noindent
is well defined with $\| I_1(\phi_n) \|_2 = \| \phi_n\|_{ L^2(Z, \mathcal{Z}, m)}$,
and thus
\begin{align}
I_1(\phi) = \lim_{n\to\infty} I_1(\phi_n) \,\,\, \text{in $L^2(\mathbb{P})$}
\label{int2}
\end{align}
is well defined.\footnote{It is clear that
the definition of $I_1(\phi)$ in \eqref{int2} does not depend on
the choice of approximating sequence $\{\phi_n\}_n$.
The same comment applies to the definition of $I_k(h)$ in \eqref{int4}.}
The set $\mathbb{C}_1= \{ I_1(\phi): \phi\in L^2(Z, \mathcal{Z}, m) \}$
is called the first Poisson Wiener chaos associated with $N$ (or $\widehat{N}$).
See Subsection \ref{SUB22} for higher-order Poisson Wiener chaoses.
We denote by $\mathcal{F}^0_t$ the $\sigma$-algebra
generated by the random variables
$N([0,s]\times A \times B)$ with $s \in [0,t]$ and $\textup{Leb}(A)+ \nu(B)<\infty$.
Let $\mathcal{F}_t = \sigma\big( \mathcal{F}_t^0 \cup \mathcal{N} \big)$ be the
$\sigma$-algebra generated by $\mathcal{F}_t^0$ and the set $\mathcal{N} $ of $\mathbb{P}$-null sets.
This gives us a filtration
\begin{align}
\mathbb{F}:= \{ \mathcal{F}_t: t\in\mathbb{R}_+\}.
\label{filtraF}
\end{align}
It is not difficult to see from \eqref{int0}, \eqref{int1}, and an approximation argument
that for $\phi\in L^2(Z, \mathcal{Z}, m)$,
\noindent
\begin{align}
\mathbb{E}\big[ I_1(\phi) \vert \mathcal{F}_t \big] = I_1( \phi \mathbf 1_{[0,t] \times\mathbb{R}\times\mathbb{R}_0} ).
\label{CE1}
\end{align}
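For instance, for $\phi = \mathbf 1_{[0,T]\times A\times B}$ with $T > t$, $\textup{Leb}(A)<\infty$, and $\nu(B)<\infty$, the identity \eqref{CE1} reads
\[
\mathbb{E}\big[ \widehat{N}([0,T]\times A \times B) \,\vert\, \mathcal{F}_t \big] = \widehat{N}([0,t]\times A \times B),
\]
which is simply the martingale property of $s\mapsto \widehat{N}([0,s]\times A\times B)$.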
For conciseness of notations, we denote by $\mathfrak{H}$ the Hilbert space
$L^2(Z, \mathcal{Z}, m)$ and by $\mathfrak{H}^{\otimes n}$ the $n$-th tensor product
of $\mathfrak{H}$ for any integer $n\geq 1$.
We often write $\pmb{x_n} = (x_1,
\dots, x_n)$ for an element in $\mathbb{R}_+^n$, $\mathbb{R}^{n}$, or $\mathbb{R}_0^n$;
$d\pmb{x_n}$ is an abbreviation for $dx_1 \cdots dx_n$,
and
$\nu(d\pmb{z_n}) = \nu(dz_1)\cdots \nu(dz_n)$.
From time to time, we
write $\xi = (r, y, z)$ to denote a point in $Z$
and $m(d\xi) = dr dy \nu(dz)$.
For a function $h\in \mathfrak{H}^{\otimes n}$,
we often write
\begin{align}
h(\pmb{\xi_n}) = h(\pmb{t_n}, \pmb{x_n},\pmb{z_n})
= h(t_1, x_1,z_1, \dots, t_n, x_n, z_n),
\label{defh1}
\end{align}
\noindent
whenever no confusion appears.
For $h$ as in \eqref{defh1}, we define its canonical symmetrization $\widetilde{h}$
by setting
\noindent
\begin{align}
\begin{aligned}
\widetilde{h}(\pmb{\xi_n}) &=
\widetilde{h}(\pmb{t_n}, \pmb{x_n},\pmb{z_n}) \\
&= \frac{1}{n!} \sum_{\pi\in \mathfrak{S}_n} h( \xi_{\pi(1)}, \ldots, \xi_{\pi(n)} ) \\
&= \frac{1}{n!} \sum_{\pi\in \mathfrak{S}_n}
h( t_{\pi(1)}, x_{\pi(1)}, z_{\pi(1)}, \ldots, t_{\pi(n)}, x_{\pi(n)}, z_{\pi(n)} ),
\end{aligned}
\label{defh2}
\end{align}
\noindent
where $\mathfrak{S}_n$ denotes the set of permutations over $\{1, ..., n\}$.
Let $\mathfrak{H}^{\odot n}$ denote the symmetric subspace of $\mathfrak{H}^{\otimes n}$.
That is, $\mathfrak{H}^{\odot n}$ consists of all elements $h\in \mathfrak{H}^{\otimes n}$
with $h = \widetilde{h}$.
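For instance, when $n=2$, we have $\widetilde{h}(\xi_1, \xi_2) = \frac12\big[ h(\xi_1,\xi_2) + h(\xi_2,\xi_1) \big]$; in particular, the symmetrization of $\mathbf 1_{F_1\times F_2}$, with $F_1, F_2\in\mathcal{Z}$ of finite measure, is $\frac12\big( \mathbf 1_{F_1\times F_2} + \mathbf 1_{F_2\times F_1}\big)\in\mathfrak{H}^{\odot 2}$.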
To ease the notations, we introduce the cut-off of a function $h\in\mathfrak{H}^{\otimes n}$
in the temporal variable:
\begin{align}
h^t(\xi_1, ... ,\xi_n) = h(t_1, x_1, z_1, ... , t_n, x_n, z_n)\mathbf 1_{[0,t]^n}(t_1, ..., t_n).
\label{ht1}
\end{align}
With the above notation, we can rewrite \eqref{CE1} as
$\mathbb{E}[ I_1(\phi) | \mathcal{F}_t ] = I_1(\phi^t)$.
\subsection{Basic stochastic analysis on the Poisson space} \label{SUB22}
Let $N$ be the Poisson random measure on $(Z, \mathcal{Z}, m)$ as in Subsection \ref{SUB21}.
A well-known theorem due to K. It\^o states that the space of square-integrable
random variables measurable with respect to the Poisson random measure $N$ can be written as a direct sum
of mutually orthogonal subspaces:
\begin{align}
L^2(\Omega, \sigma\{N\}, \mathbb{P}) = \bigoplus_{k\in\mathbb{N}_{\geq 0}} \mathbb{C}_k ,
\label{CD1}
\end{align}
where $\mathbb{C}_k$ is called the $k$-th Poisson Wiener chaos associated to $N$;
see \cite{Ito56, Last16, NN18}.
Let us begin with the construction of Poisson Wiener chaoses $\mathbb{C}_k$, $k\in\mathbb{N}_{\geq 0}$.
\medskip
\noindent
$\bullet$ {\bf Poisson Wiener chaoses.}
The zero-th chaos $\mathbb{C}_0\simeq \mathbb{R}$
is the set of (almost surely)
constant random variables in $L^2(\Omega, \sigma\{N\}, \mathbb{P})$.
We have already defined the first Poisson Wiener chaos
\[
\mathbb{C}_1:= \big\{ I_1(\phi): \phi\in \mathfrak{H} \big\} ,
\]
\noindent
where $I_1(\phi)$ is defined as in \eqref{int1}-\eqref{int2},
and we recall that $\mathfrak{H} = L^2(Z, \mathcal{Z}, m)$.
Now we define $\mathbb{C}_k$ for $k\geq 2$.
First, we denote by $\mathcal{E}^0_k$ the set of simple functions of the form
\noindent
\begin{align}
h(\xi_1, ... , \xi_k) = \sum_{i_1, ..., i_k =1}^m \beta_{i_1,..., i_k}
\mathbf 1_{F_{i_1} \times \cdots \times F_{i_k}}(\xi_1, ... , \xi_k),
\label{int3a}
\end{align}
\noindent
where $m\in\mathbb{N}_{\geq 1}$, $F_1, ... , F_m\in\mathcal{Z}$ are pairwise disjoint
sets of finite measure, and the coefficients $\beta_{i_1, ... , i_k}$
vanish whenever any two of the indices $i_1, ... , i_k$ are equal.
It is known that because of the atom-less nature\footnote{Even though $\nu$ may
have atoms, the product measure $m = \textup{Leb}\times\textup{Leb}\times \nu$ on $(Z, \mathcal{Z})$ does
not have any atom.}
of the $\sigma$-finite measure space $(Z, \mathcal{Z}, m)$, the set $\mathcal{E}^0_k$
is dense in $\mathfrak{H}^{\otimes k} \equiv L^2(Z^k)$; see, for example,
\cite[page 10]{Nua06}.
Since $\mathbf 1_{F_i}$ can be further
approximated by functions as in \eqref{int0},
we will then work with the dense subset $\mathcal{E}_k$ of $\mathfrak{H}^{\otimes k}$
that consists of simple functions $h\in\mathcal{E}_k^0$ as in \eqref{int3a}
such that $F_i = A_i \times B_i \times C_i$
for some $(A_i, B_i, C_i)\in\mathcal{B}(\mathbb{R}_+)\times\mathcal{B}(\mathbb{R})\times \mathcal{B}(\mathbb{R}_0)$
with $m(F_i)<\infty$,
$i=1,2, ...,m$.
For such a simple function $h\in\mathcal{E}_k$ as in \eqref{int3a},
we define
\noindent
\begin{align}
I_k(h) = \sum_{i_1, ... , i_k=1}^m \beta_{i_1, ... , i_k}
\prod_{j=1}^k \widehat{N}( A_{i_j} \times B_{i_j} \times C_{i_j} ),
\label{int3b}
\end{align}
\noindent
and the following properties hold, as one can easily verify:
\begin{itemize}
\item[(i)] for $h\in \mathcal{E}_k$, $I_k(h) = I_k( \widetilde{h})$,
with $\widetilde{h}$ denoting the canonical symmetrization of $h$;
see \eqref{defh2};
\item[(ii)]
for $h_1\in \mathcal{E}_k$ and $h_2\in\mathcal{E}_\ell$ ($k, \ell\in\mathbb{N}_{\geq 1}$),
\noindent
\begin{align}
\mathbb{E}[ I_k(h_1)I_\ell(h_2) ] = k! \mathbf 1_{\{ k= \ell\}}
\langle \widetilde{h}_1, \widetilde{h}_2 \rangle_{\mathfrak{H}^{\otimes k}}.
\label{int3c}
\end{align}
\item[(iii)]
for $h\in \mathcal{E}_k$ as in \eqref{int3a}, $I_k(h)$ as in \eqref{int3b}, and
for $t \in( 0,\infty)$, we have
\noindent
\begin{align*}
\begin{aligned}
\mathbb{E}[ I_k(h) | \mathcal{F}_t ]
&= \sum_{i_1, ... , i_k=1}^m \beta_{i_1, ... , i_k}
\prod_{j=1}^k \widehat{N}\big( ( A_{i_j} \cap [0,t] ) \times B_{i_j} \times C_{i_j} \big) \\
&= I_k (h^t ) ,
\end{aligned}
\end{align*}
\noindent
where $h^t$ is introduced in \eqref{ht1}.
\end{itemize}
The relation \eqref{int3c} in property (ii) is known as the orthogonality relation,
and the case $k=\ell$ gives the modified isometry on $\mathcal{E}_k$; this
allows one to define, for any $h\in\mathfrak{H}^{\otimes k}$,
\noindent
\begin{align}
I_k(h) := \lim_{n\to\infty} I_k(h_n)\,\,\, \text{in $L^2(\mathbb{P})$},
\label{int4}
\end{align}
\noindent
where $h_n\in\mathcal{E}_k$ converges to $h$ in $\mathfrak{H}^{\otimes k}$ as $n\to\infty$.
This defines the $k$-th Poisson Wiener chaos associated to $N$:
\begin{align}
\mathbb{C}_k := \{ I_k(h) : h\in \mathfrak{H}^{\otimes k} \} = \{ I_k(h) : h\in \mathfrak{H}^{\odot k} \}.
\notag
\end{align}
We call $ I_k(h) $ the $k$-th multiple integral of $h$ with respect to
the compensated Poisson random measure $\widehat{N}$.
Note that the properties (i)-(ii) still hold for general functions $h, h_1\in \mathfrak{H}^{\otimes k}$
and $h_2\in\mathfrak{H}^{\otimes \ell}$:
\noindent
\begin{align}
\begin{aligned}
\mathbb{E}[ I_k(h_1)I_\ell(h_2) ] &= k! \mathbf 1_{\{ k= \ell\}}
\langle \widetilde{h}_1, \widetilde{h}_2 \rangle_{\mathfrak{H}^{\otimes k}}.
\label{int3d}
\end{aligned}
\end{align}
\noindent
The chaos decomposition \eqref{CD1} then
reads as follows: for any $F\in L^2(\Omega, \sigma\{N\}, \mathbb{P})$,
\noindent
\begin{align}
F = \mathbb{E}[F] + \sum_{n=1}^\infty I_n( f_n),
\label{CD3}
\end{align}
\noindent
where $f_n\in \mathfrak{H}^{\odot n}$, $n\in\mathbb{N}_{\geq 1}$, are uniquely determined by $F$
up to a null set with respect to $m$;
see also \cite[Section 4]{Last16}.
Using \eqref{int3d},
we have
\begin{align}
\textup{Var}(F) = \sum_{n=1}^\infty n! \| f_n \|^2_{\mathfrak{H}^{\otimes n}} < \infty.
\label{CD4}
\end{align}
Unlike in the Gaussian setting, elements in a Poisson chaos may not
have moments of all orders, and the product of two random variables in Poisson chaoses
need not belong to a finite sum of chaoses.
\medskip
\noindent
$\bullet$ {\bf Product formula.}
For $f \in \mathfrak{H}^{\otimes n}$ and $g \in \mathfrak{H}^{\otimes m}$ with $m,n\in\mathbb{N}_{\geq 1}$,
we define the {\it modified contractions} as follows:
\begin{itemize}
\item[(i)] $f\star^0_0 g = f\otimes g$ is the usual tensor product of $f$
and $g$;
\item[(ii)] for $1\leq k\leq n\wedge m$, $f\star^0_k g$ is a real measurable function
on $Z^{m+n-k}$, given by
\noindent
\begin{align}
\begin{aligned}
&(\zeta_1, ... , \zeta_k, \xi_1, ... , \xi_{n-k}, \theta_1, ... , \theta_{m-k}) \\
&\qquad \longmapsto
f(\zeta_1, ... , \zeta_k, \xi_1, ... , \xi_{n-k}) g(\zeta_1, ... , \zeta_k, \theta_1, ... , \theta_{m-k}),
\end{aligned}
\label{rule1}
\end{align}
\noindent
where $\zeta_1, ... , \zeta_k, \xi_1, ... , \xi_{n-k}, \theta_1, ... , \theta_{m-k}$ are points in
$Z = \mathbb{R}_+\times\mathbb{R}\times\mathbb{R}_0$.
\item[(iii)] for $1\leq \ell \leq k \leq n\wedge m$, $f\star^\ell_k g$ is a real measurable function
on $Z^{m+n-k-\ell}$, given by
\noindent
\begin{align}
\begin{aligned}
&(\zeta_1, ... , \zeta_{k-\ell}, \xi_1, ... , \xi_{n-k}, \theta_1, ... , \theta_{m-k}) \\
& \longmapsto
\int_{Z^\ell}f(\gamma_1, ... , \gamma_\ell, \zeta_1, ... , \zeta_{k-\ell}, \xi_1, ... , \xi_{n-k})
g(\gamma_1, ... , \gamma_\ell, \zeta_1, ... , \zeta_{k-\ell}, \theta_1, ... , \theta_{m-k})
\, m(d{\pmb\gamma_{\pmb\ell}}) .
\end{aligned}
\label{rule2}
\end{align}
\end{itemize}
In other words, $f\star^\ell_k g$ is obtained by first identifying $k$ arguments of $f$
with $k$ arguments of $g$, and then integrating out $\ell$ of these identified arguments,
according to the rules \eqref{rule1}-\eqref{rule2}.
When $k=\ell$ in \eqref{rule2}, $f\star^k_k g$ coincides with
the usual $k$-contraction $f\otimes_k g$
and by Cauchy-Schwarz's inequality, $f\star^k_k g\in\mathfrak{H}^{\otimes n+m-2k}$; see, for example, \cite[Appendix B]{NP12}. However, for $\ell < k$,
$f\star^\ell_k g$ may not belong to $\mathfrak{H}^{\otimes n+m-k-\ell}$.
For example, given $f\in\mathfrak{H}$, $f\star^0_1 f \in \mathfrak{H}= L^2(Z, \mathcal{Z},m)$ if and only if
$f\in L^4(Z, \mathcal{Z}, m)$.
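For instance, for $f, g\in\mathfrak{H}$ (so that $n = m = 1$), the contraction $(f\star^0_1 g)(\zeta) = f(\zeta) g(\zeta)$ is the pointwise product, while
\[
f\star^1_1 g = \int_Z f(\gamma) g(\gamma)\, m(d\gamma) = \langle f, g\rangle_{\mathfrak{H}}
\]
is a constant.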
The next result gives a product formula for elements of Poisson Wiener chaoses.
It was first proved by Kabanov for $m=1$ (see \cite[Theorem 2]{Kab75})
and extended by Surgailis to a product of several elements of chaoses
(see \cite[Proposition 3.1]{Sur84}).
The form that we present below corresponds to \cite[(9.22)]{NN18}
and Proposition 5 in \cite[page 22]{Last16}; see also \cite[Theorem 2.2]{DP18b}.
\begin{proposition}[Product Formula]
\label{prop:prod}
Let $f \in \mathfrak{H}^{\odot n}$ and $g \in \mathfrak{H}^{\odot m}$ be such that
$f \star_{k}^{\ell} g\in \mathfrak{H}^{\otimes (m+n-k-\ell)}$
for any $k=1,\ldots, n\wedge m$ and $\ell=0,1,\ldots,k$.
Then,
\[
I_n(f) I_m(g)
= \sum_{k=0}^{n\wedge m}k! \binom{n}{k}\binom{m}{k}\sum_{\ell=0}^{k}\binom{k}{\ell}I_{n+m-k-\ell}(f \star_{k}^{\ell} g ).
\]
\end{proposition}
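For example, for $f, g\in\mathfrak{H}$ with $f\star^0_1 g = fg \in\mathfrak{H}$, Proposition \ref{prop:prod} reduces to
\[
I_1(f) I_1(g) = I_2(f\otimes g) + I_1(fg) + \langle f, g\rangle_{\mathfrak{H}},
\]
where the last term is $I_0( f\star^1_1 g )$.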
When $f\star^1_k g = 0$, we deduce from the definition of modified contractions
that $f\star^\ell_k g = 0$ for all $\ell = 2, ... , k$.
In this case, we have a simpler
form of the product formula.
\begin{proposition} \label{prop:prod2}
Let $f \in \mathfrak{H}^{\otimes n}$ and $g \in \mathfrak{H}^{\otimes m}$ be not necessarily symmetric
such that $\widetilde{f} \star_{k}^{\ell} \widetilde{g}\in \mathfrak{H}^{\otimes(n+m-k-\ell)}$
for any $k=1, \ldots, n\wedge m$ and $\ell=1,\ldots,k$.
Suppose $\widetilde{f}\star_{k}^{1} \widetilde{g}=0$ for any $k=1,\ldots,n\wedge m$.
Then,
\[
I_n(f)I_m(g)=I_{n+m}(f \otimes g)+\sum_{k=1}^{n\wedge m}k! \binom{n}{k}\binom{m}{k}I_{n+m-k}(\widetilde{f} \star_{k}^{0} \widetilde{g}).
\]
\end{proposition}
\begin{proof} As $(I_n(f), I_m(g) ) = (I_n(\widetilde{f}), I_m(\widetilde{g}) )$,
the desired product formula follows from Proposition \ref{prop:prod},
the fact that $\widetilde{f} \star^\ell_k \widetilde{g} = 0$ for all $1\leq \ell \leq k$,
and by noting that
$\widetilde{f} \otimes \widetilde{g}$ and
$f \otimes g$ have the same symmetrization.
\qedhere
\end{proof}
\medskip
\noindent
$\bullet$ {\bf Malliavin derivatives.} Let $\textup{dom}(D)$ denote the set of random variables
$F$ as in \eqref{CD3} with the symmetric kernels $\{f_n\}_n$ satisfying
\begin{align}
\sum_{n=1}^\infty n! n \| f_n\|^2_{\mathfrak{H}^{\otimes n}} < \infty.
\notag
\end{align}
For such a random variable $F\in\textup{dom}(D)$, we define the Malliavin derivative
$DF$ of $F$ to be a $\mathfrak{H}$-valued random variable, given by
\begin{align}
D_\xi F = \sum_{n=1}^\infty n I_{n-1}(f_n(\xi, \bullet) ), \,\, \, \xi\in Z,
\label{CD5b}
\end{align}
\noindent
where for fixed $\xi\in Z$, $f_n(\xi, \bullet)\in\mathfrak{H}^{\odot (n-1)}$.
By the orthogonality relation \eqref{int3c}, we have
\[
\mathbb{E}\big[ \| DF \|_{\mathfrak{H}}^2 \big] = \sum_{n=1}^\infty n! n \| f_n\|^2_{\mathfrak{H}^{\otimes n}} <\infty.
\]
Comparing this equality with \eqref{CD4} yields
the following Poincar\'e inequality:
\begin{align}
\textup{Var}(F) \leq \mathbb{E}\big[ \| DF \|_{\mathfrak{H}}^2 \big]
\label{Poi1}
\end{align}
for any $F\in\textup{dom}(D)$,
with equality if and only if $F\in\mathbb{C}_0 \oplus \mathbb{C}_1$.
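For instance, if $F = c + I_1(\phi)$ with $c\in\mathbb{R}$ and $\phi\in\mathfrak{H}$, then $D_\xi F = \phi(\xi)$ is deterministic, and
\[
\textup{Var}(F) = \| \phi\|^2_{\mathfrak{H}} = \mathbb{E}\big[ \| DF \|^2_\mathfrak{H} \big],
\]
so that \eqref{Poi1} indeed holds with equality on $\mathbb{C}_0\oplus\mathbb{C}_1$.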
Similarly, we can define the second Malliavin derivative $D^2 F$ as follows:
for $F$ as in \eqref{CD3},
\noindent
\begin{align}
\begin{aligned}
D^2_{\zeta,\xi} F &:= D_\xi D_\zeta F
= \sum_{n=2}^\infty n(n-1) I_{n-2}( f_{n}(\zeta, \xi, \bullet) ),
\end{aligned}
\label{CD6}
\end{align}
provided the above series in \eqref{CD6} converges in $L^2(\mathbb{P})$.
That is, the domain of $D^2$ is given by
\[
\textup{dom}(D^2) = \Big\{ \text{$F$ as in \eqref{CD3}} :
\sum_{n=2}^\infty n^2 n! \| f_n \|^2_{\mathfrak{H}^{\otimes n}} < \infty \Big\} .
\]
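For instance, for $F = I_2(h)$ with $h\in\mathfrak{H}^{\odot 2}$, we have $D_\xi F = 2 I_1( h(\xi, \bullet) )$ and $D^2_{\zeta, \xi} F = 2 h(\zeta, \xi)$, which is deterministic; in particular, every element of $\mathbb{C}_2$ belongs to $\textup{dom}(D^2)$.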
\medskip
\noindent
$\bullet$ {\bf Kabanov-Skorohod integral $\delta$.} This is the adjoint operator of $D$,
characterized by the following duality relation:
\noindent
\begin{align}
\mathbb{E}[ \langle DF, V \rangle_\mathfrak{H} ] = \mathbb{E}[ F \delta(V) ],
\label{dualR}
\end{align}
\noindent
for any $F\in\textup{dom}(D)$.
In view of Riesz's representation theorem, we let $\textup{dom}(\delta)$ be the set of
$V\in L^2(\Omega ; \mathfrak{H})$ such that there is some finite constant $C=C(V) > 0$ such that
\[
\big| \mathbb{E}[ \langle DF, V \rangle_\mathfrak{H} ] \big| \leq C \| F \|_2
\]
for any $F\in\textup{dom}(D)$.
Then, the duality relation \eqref{dualR} holds for any $(F, V)\in \textup{dom}(D)\times \textup{dom}(\delta)$.
Suppose $V\in L^2(\Omega; \mathfrak{H})$. Then, for $m$-almost every $\xi\in Z$,
$V(\xi)\in L^2(\mathbb{P})$ by Fubini's theorem. Hence, by the chaos decomposition,
we can write
\begin{align}
V(\xi) = \mathbb{E}[ V(\xi) ] + \sum_{n=1}^\infty I_n\big(h_n(\xi, \bullet) \big) ,
\label{CD7}
\end{align}
\noindent
where $h_n(\xi, \bullet)\in \mathfrak{H}^{\odot n}$ for fixed $\xi$, although $h_n$ itself
may not be symmetric in all of its $(n+1)$ arguments,
and we write $h_0(\xi) = \mathbb{E}[ V(\xi) ]$.
Note that $V\in L^2(\Omega; \mathfrak{H})$ forces $h_n\in \mathfrak{H}^{\otimes (n+1)}$ for every $n$.
Assume first that there are finitely many chaoses in the above series \eqref{CD7}:
\noindent
\begin{align}
\text{$h_n(\xi,\bullet) = 0$ for $n\geq M$.}
\label{CD7b}
\end{align}
\noindent
Then, for $F\in\textup{dom}(D)$ having the form \eqref{CD3},
we deduce from \eqref{CD5b}, \eqref{CD7}, Fubini's theorem
and the orthogonality relation \eqref{int3c} that
\noindent
\begin{align}
\begin{aligned}
\mathbb{E} \big[ \langle DF, V \rangle_\mathfrak{H} \big]
&= \mathbb{E} \int_Z \Big( \sum_{n=1}^\infty n I_{n-1}\big(f_n(\xi, \bullet )\big) \Big)
\Big( \sum_{m=0}^M I_m\big( h_m(\xi, \bullet) \big) \Big) m(d \xi) \\
&= \int_Z \sum_{n=1}^M n! \langle f_n(\xi, \bullet ), h_{n-1}(\xi, \bullet)
\rangle_{\mathfrak{H}^{\otimes (n-1)}} m(d\xi) \\
&= \sum_{n=1}^M n! \langle f_n, h_{n-1} \rangle_{\mathfrak{H}^{\otimes n}}
= \sum_{n=1}^M n! \langle f_n, \widetilde{h}_{n-1} \rangle_{\mathfrak{H}^{\otimes n}} ,
\end{aligned}
\label{CD8}
\end{align}
\noindent
which, together with Cauchy-Schwarz's inequality,
implies that
\noindent
\begin{align}
\begin{aligned}
\big| \mathbb{E} \big[ \langle DF, V \rangle_\mathfrak{H} \big] \big|
&\leq \bigg(\sum_{n=1}^M n! \| f_n\|^2_{\mathfrak{H}^{\otimes n}} \bigg)^{\frac12}
\bigg(\sum_{n=1}^M n! \| \widetilde{h}_{n-1}\|^2_{\mathfrak{H}^{\otimes n}} \bigg)^{\frac12} \\
&\leq \| F\|_2
\bigg(\sum_{n=1}^M n! \| \widetilde{h}_{n-1}\|^2_{\mathfrak{H}^{\otimes n}} \bigg)^{\frac12}.
\end{aligned}
\label{CD9}
\end{align}
In particular, we proved that for $V\in L^2(\Omega; \mathfrak{H})$ satisfying \eqref{CD7b},
$V$ belongs to $\textup{dom}(\delta)$;\footnote{This
also tells us that $\textup{dom}(\delta)$ is dense in $L^2(\Omega; \mathfrak{H})$.}
and in this case, we deduce again from \eqref{CD8}
and \eqref{int3c} that
\noindent
\begin{align}
\begin{aligned}
\mathbb{E} \big[ \langle DF, V \rangle_\mathfrak{H} \big]
& = \sum_{n=1}^M \mathbb{E}\big[ I_n( f_n) I_n( \widetilde{h}_{n-1} ) \big] \\
&= \mathbb{E} \bigg[ F \sum_{n=1}^M I_n(\widetilde{h}_{n-1} ) \bigg]
\end{aligned}
\label{CD9b}
\end{align}
for any $F\in\textup{dom}(D)$, and thus,
\begin{align}
\delta(V) = \sum_{n=1}^\infty I_n( h_{n-1} ).
\label{CD9c}
\end{align}
One can easily generalize this particular case of \eqref{CD7b}
to the following result, whose proof
is omitted.
\begin{lemma}\label{lem:dl}
Suppose $V\in L^2(\Omega; \mathfrak{H})$ has the expression \eqref{CD7}
with
\[
\sum_{n=1}^\infty n! \| \widetilde{h}_{n-1}\|^2_{\mathfrak{H}^{\otimes n}} <\infty.
\]
Then, $V\in\textup{dom}(\delta)$ and $\delta(V)$ is given as in \eqref{CD9c}.
\end{lemma}
As a consequence, for a deterministic function $\phi\in \mathfrak{H}$,
we have
\begin{align}\label{EXT1}
\delta(\phi) = I_1(\phi).
\end{align}
The following lemma generalizes \eqref{CE1}
and the conditional expectation identity in property (iii) above; it also shows that
the It\^o integral is a particular case
of the Kabanov-Skorohod integral and provides
a Clark-Ocone formula;
see Theorems 10.2.7 and 10.4.1 in \cite{NN18}
for the results for the classical L\'evy processes.
\begin{lemma} \label{lem:CE2}
{\rm (i)} Suppose that the assumptions in Lemma \ref{lem:dl} hold and fix $t\in(0, \infty)$.
Recall also the notation \eqref{ht1}.
Then, $V^t \in \textup{dom}(\delta)$
and
\[
\mathbb{E}\big[ \delta(V) | \mathcal{F}_t \big] = \delta(V^t) = \sum_{n=1}^\infty I_n( h^t_{n-1} ).
\]
\smallskip
\noindent
{\rm (ii)} Suppose $F\in\textup{dom}(D)$ is $\mathcal{F}_t$-measurable
for some fixed $t \in(0,\infty)$.
Then, $D_{s, y, z} F = 0$ almost surely
for almost every $(s, y, z)\in (t,\infty)\times\mathbb{R}\times\mathbb{R}_0$.
\smallskip
\noindent
{\rm (iii)} Suppose $F\in\textup{dom}(D)$ is $\mathcal{F}_t$-measurable
for some fixed $t \in(0,\infty)$.
Then, the following Clark-Ocone formula holds:
\begin{align}
F = \mathbb{E}[F] + \delta(V),
\notag
\end{align}
\noindent
where $V(r, y, z) = \mathbb{E}\big[ D_{r, y, z} F | \mathcal{F}_r \big]$ belongs to $\textup{dom}(\delta)$.
\smallskip
\noindent
{\rm (iv)} Suppose $V\in L^2(\Omega; \mathfrak{H})$ is $\mathbb{F}$-predictable. Then,
$V\in\textup{dom}(\delta)$ and
$\delta(V)$ coincides with the It\^o integral of $V$ against the compensated Poisson random
measure $\widehat{N}$:
\begin{align}\label{EXT0}
\delta(V) = \int_0^\infty \int_{\mathbb{R}} \int_{\mathbb{R}_0} V(t, x, z) \widehat{N}(dt, dx, dz).
\end{align}
\end{lemma}
\begin{proof} By going through \eqref{CD8}, \eqref{CD9}, and \eqref{CD9b}
with $M=\infty$ and $V^t$ in place of $V$,
we get $V^t\in \textup{dom}(\delta)$ and
\begin{align}
\delta(V^t) =\sum_{n=1}^\infty I_n( h^t_{n-1} ).
\label{pst1}
\end{align}
On the other hand, since the conditional expectation is a bounded operator
on $L^2(\mathbb{P})$, we deduce from \eqref{int3d} that
\noindent
\begin{align*}
\mathbb{E}\big[ \delta(V) | \mathcal{F}_t \big]
& = \sum_{n=1}^\infty \mathbb{E} \big[ I_n(h_{n-1}) | \mathcal{F}_t \big]
= \sum_{n=1}^\infty I_n(h^t_{n-1}),
\end{align*}
which, together with \eqref{pst1}, concludes the proof of (i).
\smallskip
Next, we prove (ii). Applying part (i)
and the duality relation \eqref{dualR}
several times, we deduce that
\noindent
\begin{align*}
\mathbb{E}\big[ \langle DF, V \rangle_\mathfrak{H} \big]
&= \mathbb{E}[ F \delta(V) ] = \mathbb{E}\big[ F \mathbb{E}( \delta(V) | \mathcal{F}_t ) \big] \\
&=\mathbb{E}\big[ F \delta(V^t) \big] = \mathbb{E}\big[ \langle DF, V^t \rangle_\mathfrak{H} \big]
\end{align*}
for any $V\in\textup{dom}(\delta)$. It follows that
\[
\mathbb{E} \big[ \langle (DF)^t, V \rangle_\mathfrak{H} \big] = 0
\]
for any $V\in\textup{dom}(\delta)$. Then, the density of $\textup{dom}(\delta)$ in $L^2(\Omega; \mathfrak{H})$
implies that $(DF)^t = 0$ almost surely. Therefore, part (ii) is proved.
\smallskip
Now we prove the Clark-Ocone formula in (iii);
see also Theorem 10.4.1 in \cite{NN18}.
Assume that $F$ has the form \eqref{CD3}. Then,
\noindent
\begin{align}
\begin{aligned}
V(r,y,z) &= \mathbb{E}\big[ D_{r, y, z} F | \mathcal{F}_r \big] \\
& = \sum_{n=1}^\infty n \mathbb{E}\big[ I_{n-1}(f_n(r,y,z, \bullet) ) | \mathcal{F}_r \big] \\
&= \sum_{n=1}^\infty n I_{n-1}(f^r_n(r,y,z, \bullet) ) .
\end{aligned}
\label{def:V}
\end{align}
\noindent
Put
\[
h_n(t_1, y_1, z_1, ... , t_n, y_n, z_n) = n f_n^{t_1}(t_1, y_1, z_1, ... , t_n, y_n, z_n ).
\]
Then, (omitting the dummy variables $y_i, z_i$ to ease the notation)
\begin{align*}
\widetilde{h}_n(t_1, t_2, ... , t_n)
&= \frac{1}{n!} \sum_{\sigma\in\mathfrak{S}_n}
n f_n^{t_{\sigma(1)} }(t_{\sigma(1)}, t_{\sigma(2)}, ..., t_{\sigma(n)} ) \\
&= \frac{1}{(n-1)!} \sum_{k=1}^n \sum_{\sigma\in\mathfrak{S}_n}\mathbf 1_{\{\sigma(1)=k\}}
f_n(t_{\sigma(1)}, t_{\sigma(2)}, ..., t_{\sigma(n)} ) \mathbf 1_{\{ t_k \geq t_i, \, \forall i\neq k \}} \\
& = f_n(t_1, ... , t_n) \,\, \, \text{almost everywhere, since $f_n\in\mathfrak{H}^{\odot n}$.}
\end{align*}
\noindent
Therefore, we deduce from Lemma \ref{lem:dl} that
$V$, given as in \eqref{def:V}, belongs to $\textup{dom}(\delta)$
and
\begin{align*}
\delta(V) &= \sum_{n=1}^\infty I_n(\widetilde{h}_n) = \sum_{n=1}^\infty I_n(f_n) \\
&= F - \mathbb{E}[F].
\end{align*}
Finally, we prove the statement (iv). First we consider the case where
$V$ is an elementary process as in \eqref{simpleX}:
\[
V(t,x,z) =Y \mathbf 1_{(a, b]\times A\times\Gamma}(t, x, z)
\]
with $Y\in\textup{dom}(D)$ bounded and $\mathcal{F}_a$-measurable, $0<a<b$, and
$\textup{Leb}(A) + \nu(\Gamma) <\infty$.
In this case,
\noindent
\begin{align*}
\textup{RHS of \eqref{EXT0}}
= Y \widehat{N}\big( (a, b] \times A \times\Gamma\big)
= Y \delta\big( \mathbf 1_{(a,b]\times A\times\Gamma} \big),
\end{align*}
\noindent
where the last equality follows from \eqref{EXT1}.
Let $F$ be any bounded random variable in $\textup{dom}(D)$.
Note also that, by the product rule for the Malliavin derivative
(see Remark \ref{rem:add1} (ii), (iv)), we have $YF\in\textup{dom}(D)$ with
\[
Y D_\xi F = D_\xi(YF) - FD_\xi Y - (D_\xi F)(D_\xi Y),
\]
and hence
\begin{align*}
\langle DF, V \rangle_\mathfrak{H}
&= \langle Y DF, \mathbf 1_{(a,b]\times A\times\Gamma} \rangle_\mathfrak{H} \\
&= \langle D(YF), \mathbf 1_{(a,b]\times A\times\Gamma} \rangle_\mathfrak{H}
- \langle FDY, \mathbf 1_{(a,b]\times A\times\Gamma} \rangle_\mathfrak{H}
- \langle (DF)(DY), \mathbf 1_{(a,b]\times A\times\Gamma} \rangle_\mathfrak{H},
\end{align*}
and moreover, by part (ii) of Lemma \ref{lem:CE2},
we get
\begin{align}\label{EXT2}
\langle DF, V \rangle_\mathfrak{H}
&= \langle D(YF), \mathbf 1_{(a,b]\times A\times\Gamma} \rangle_\mathfrak{H}.
\end{align}
\noindent
Thus, we deduce from the duality relation
\eqref{dualR} with \eqref{EXT1} and \eqref{EXT2} that
\noindent
\begin{align*}
\mathbb{E}\big[ \langle D F, V \rangle_\mathfrak{H} \big]
&= \mathbb{E}\big[ \langle D(YF), \mathbf 1_{(a,b]\times A\times\Gamma} \rangle_\mathfrak{H} \big] \\
&= \mathbb{E}\big[ F Y \widehat{N}\big( (a, b]\times A\times\Gamma \big) \big]
\end{align*}
for any bounded Malliavin differentiable $F$,
which implies \eqref{EXT0} with $V\in\textup{dom}(\delta)$.
For a general process $V\in L^2(\Omega; \mathfrak{H})$ that is predictable,
there is a sequence $\{ V^{(k) } \}_{k\geq 1}$ of elementary processes (i.e. linear combinations of functions as
in \eqref{simpleX}) such that
\[
\| V^{(k) } - V \|_{L^2(\Omega; \mathfrak{H})} \to 0
\]
as $k\to\infty$; see e.g. \cite{B15}.
By the previous step, we know that \eqref{EXT0}
holds for $V = V^{(k) }$, $k\geq 1$;
moreover, by the It\^o isometry, $\delta(V^{(k) })$ converges in $L^2(\Omega)$ to some limit $G$.
Applying the duality relation \eqref{dualR} again, we see that
$\delta$ is a closed operator, so that $V$, being the $L^2(\Omega; \mathfrak{H})$-limit of
$V^{(k) } \in \textup{dom}(\delta)$, also belongs to $\textup{dom}(\delta)$:
for any $F\in\textup{dom}(D)$,
\noindent
\begin{align*}
\mathbb{E}\big[ \langle DF, V \rangle_\mathfrak{H} \big]
&=\lim_{k\to\infty} \mathbb{E}\big[ \langle DF, V^{(k) } \rangle_\mathfrak{H} \big] \\
&=\lim_{k\to\infty} \mathbb{E}\big[ F \delta( V^{(k) } ) \big]
= \mathbb{E}\big[ F G \big].
\end{align*}
\noindent
It follows that $V\in\textup{dom}(\delta)$ and $\delta(V) = G$. This concludes
the proof of Lemma \ref{lem:CE2}.
\qedhere
\end{proof}
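As a quick illustration of the Clark-Ocone formula in part (iii), take $F = I_2(h)$ with $h\in\mathfrak{H}^{\odot 2}$ satisfying $h = h^t$, so that $F$ is $\mathcal{F}_t$-measurable with $\mathbb{E}[F]=0$. Then $D_{r,y,z} F = 2 I_1( h(r,y,z,\bullet) )$, and the Clark-Ocone integrand is
\[
V(r, y, z) = \mathbb{E}\big[ D_{r,y,z} F \,\vert\, \mathcal{F}_r \big] = 2 I_1\big( h^r(r,y,z,\bullet) \big);
\]
the computation \eqref{def:V} in the proof of part (iii) (with $f_2 = h$ and $f_n = 0$ for $n\neq 2$) then confirms that $\delta(V) = I_2(h) = F - \mathbb{E}[F]$.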
\begin{lemma} \label{lem:Fubi}
Let $(E, \mu)$ be a finite measure space.
\smallskip
\noindent
{\rm (i)} Suppose that $F(\theta)\in\textup{dom}(D)$ for every $\theta\in E$, and that
\noindent
\begin{align}
\mathbb{E} \int_E \big( | F(\theta) |^2 + \| D F(\theta) \|^2_\mathfrak{H} \big) \mu(d\theta) <\infty.
\label{Fubi1}
\end{align}
\noindent
Then, $\int_E F(\theta) \mu(d\theta)$ belongs to $\textup{dom}(D)$ with
\[
D_\xi \int_E F(\theta) \mu(d\theta) = \int_E D_\xi F(\theta) \mu(d\theta)
\]
almost surely for $m$-almost every $\xi\in Z$.
\smallskip
\noindent
{\rm (ii)} \textup{(Stochastic Fubini's theorem)} Suppose that $G(\theta) \in \textup{dom}(\delta)$ for each $\theta\in E$, that
$\int_{E} G(\theta) \mu(d\theta)$ also belongs to $\textup{dom}(\delta)$,
and that
\noindent
\begin{align}
\mathbb{E} \int_E \big( | \delta( G(\theta) ) |^2 + \| G(\theta) \|_{\mathfrak{H}}^2\big) \mu(d\theta) < \infty.
\label{Fubi2}
\end{align}
\noindent
Then,
\noindent
\begin{align}
\int_{E} \delta\big( G(\theta) \big) \mu(d\theta)
= \delta \bigg( \int_{E} G(\theta) \mu(d\theta) \bigg).
\label{Fubi3}
\end{align}
\end{lemma}
\begin{proof}
(i) Suppose $F(\theta)\in\textup{dom}(D)$ admits the chaos expansion
\begin{align}
F(\theta) = f_0(\theta) + \sum_{n=1}^\infty I_n( f_n(\theta)),
\notag
\end{align}
\noindent
where $f_n(\theta)\in \mathfrak{H}^{\odot n}$ for every $n\in\mathbb{N}_{\geq 1}$ and for every $\theta\in E$.
Then, the condition \eqref{Fubi1} implies that
\noindent
\begin{align}
\sum_{n\geq 1} n! n \int_E \| f_n(\theta)\|^2_{\mathfrak{H}^{\otimes n}} \mu(d\theta ) <\infty.
\label{Fubi5}
\end{align}
Fix any $g\in\mathfrak{H}^{\odot n}$ with $n\geq 1$. Then,
we deduce from the modified isometry \eqref{int3c} and Fubini's theorem
with \eqref{Fubi5} that
\noindent
\begin{align*}
\mathbb{E}\bigg[ I_n(g) I_n\Big( \int_E f_n(\theta) \mu(d\theta) \Big) \bigg]
&= n! \int_E \langle f_n(\theta), g \rangle_{\mathfrak{H}^{\otimes n}} \mu(d\theta) \\
&= \int_E \mathbb{E}\big[ I_n(g) I_n( f_n(\theta) ) \big] \mu(d\theta) \\
&= \mathbb{E}\bigg[ I_n(g) \int_E I_n\big( f_n(\theta) \big) \mu(d\theta) \bigg],
\end{align*}
\noindent
which implies that almost surely
\begin{align}
I_n\Big( \int_E f_n(\theta) \mu(d\theta) \Big)
= \int_E I_n\big( f_n(\theta) \big) \mu(d\theta).
\label{Fubi6}
\end{align}
It is straightforward to generalize the above argument to
show that
\noindent
\begin{align}
\int_E F(\theta) \mu(d\theta) = \int_E f_0(\theta)\mu(d\theta)
+ \sum_{n=1}^\infty I_n\Big( \int_E f_n(\theta) \mu(d\theta) \Big),
\notag
\end{align}
\noindent
which, together with \eqref{Fubi5}-\eqref{Fubi6} and the orthogonality
relation \eqref{int3c}, implies that
$\int_E F(\theta) \mu(d\theta)$ belongs to $\textup{dom}(D)$
and
\noindent
\begin{align*}
D_\xi \int_E F(\theta) \mu(d\theta)
&= \sum_{n=1}^\infty n I_{n-1}\Big( \int_E f_n(\theta, \xi, \bullet) \mu(d\theta) \Big) \\
&= \sum_{n=1}^\infty n \int_E I_{n-1} \big( f_n(\theta, \xi, \bullet)\big) \mu(d\theta) \\
&= \sum_{n=1}^\infty \int_E D_\xi I_{n} \big( f_n(\theta)\big) \mu(d\theta)
= \int_E D_\xi F(\theta) \mu(d\theta)
\end{align*}
\noindent
almost surely. This proves (i).
Next, we prove (ii). Let $F\in\textup{dom}(D)$.
Then, we deduce from duality relation \eqref{dualR}
and Fubini's theorem with the condition \eqref{Fubi2}
that
\noindent
\begin{align}
\begin{aligned}
\mathbb{E}\bigg[ F \delta \Big(\int_E G(\theta) \mu(d\theta) \Big) \bigg]
&= \mathbb{E}\Big\langle DF, \int_E G(\theta) \mu(d\theta) \Big\rangle_{\mathfrak{H}} \\
&= \mathbb{E} \int_Z D_\xi F \int_E G(\theta, \xi) \mu(d\theta) m(d\xi) \\
&= \int_E \mathbb{E} \langle D F, G(\theta) \rangle_\mathfrak{H} \,\mu(d\theta) \\
&= \int_E \mathbb{E} \big[ F \delta( G(\theta) ) \big] \,\mu(d\theta) \\
&= \mathbb{E} \bigg[ F \int_E \delta( G(\theta) ) \,\mu(d\theta) \bigg].
\end{aligned}
\label{Fubi8}
\end{align}
Since $\textup{dom}(D)$ is dense in $L^2(\Omega, \sigma\{N\}, \mathbb{P})$,
we obtain \eqref{Fubi3} from \eqref{Fubi8}.
\qedhere
\end{proof}
We conclude this subsection with a remark on the add-one cost operator
$D^+_\xi$ that coincides with Malliavin derivative operator $D$
on $\textup{dom}(D)$.
\begin{remark} \label{rem:add1} \rm
(i) In this paper, we are mainly concerned with distributional properties,
and thus, in view of \cite[Corollary 3.7]{LP18}, we will assume that the Poisson
random measure $N$ is a proper simple point process of the form
\[
N = \sum_{n=1}^\kappa \delta_{Z_n},
\]
where $Z_n$ are random variables with values in $Z$, $\kappa$ is a
random variable with values in $\mathbb{N}_{\geq 1}\cup\{+\infty\}$,
and $\delta_z$ is the Dirac mass at $z\in Z$.
Therefore,
we can view $N$ as a random variable with values in $\mathbf{N}_\sigma$, the set of all
$\sigma$-finite point measures $\chi$ on $(Z, \mathcal{Z})$ with
$\chi(B)\in\mathbb{N}_{\geq 0} \cup\{ +\infty\}$ for all $B\in\mathcal{Z}$.
Let $\mathscr{N}_\sigma$ be the smallest $\sigma$-algebra
that makes the mapping
\[
\chi\in \mathbf{N}_\sigma \mapsto \chi(B)\in[0,\infty]
\]
measurable for each $B\in\mathcal{Z}$.
\smallskip
\noindent
(ii) For a real-valued random variable $F$ that is $\sigma\{N\}$-measurable,
we can write $F = \mathfrak{f}(N)$ for some (unique) representative $\mathfrak{f}: \mathbf{N}_\sigma\to \mathbb{R}$
that is $\mathscr{N}_\sigma$-measurable. Then, the add-one cost operator is given by
\[
D^+_\xi F := \mathfrak{f}(N+\delta_\xi) - \mathfrak{f}(N).
\]
It is known that $F\in\textup{dom}(D)$ and $D^+F = DF$ provided
$\mathbb{E} \int_Z |D^+_\xi F |^2 m(d\xi) < \infty$;
see, for example, \cite[Lemma 3.1]{PT13}.
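For instance, for $F = N(B)$ with $B\in\mathcal{Z}$ and $m(B)<\infty$, the representative is $\mathfrak{f}(\chi) = \chi(B)$, so that
\[
D^+_\xi F = (N+\delta_\xi)(B) - N(B) = \mathbf 1_B(\xi)
\]
and $\mathbb{E}\int_Z | D^+_\xi F |^2 m(d\xi) = m(B) <\infty$; hence $F\in\textup{dom}(D)$ with $D_\xi F = \mathbf 1_B(\xi)$, and the Poincar\'e inequality \eqref{Poi1} is saturated: $\textup{Var}(N(B)) = m(B)$.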
\smallskip
\noindent
(iii) Suppose that $F = \mathfrak{f}(N) \in\textup{dom}(D)$ and $\phi: \mathbb{R}\to\mathbb{R}$ is Lipschitz continuous
with Lipschitz constant $\textup{Lip}(\phi)$.
Then, one has $\phi(F)\in\textup{dom}(D)$. Indeed,
\begin{align*}
\big| D^+_\xi \phi(F) \big| &= \big|\phi( \mathfrak{f}(N+\delta_\xi)) - \phi( \mathfrak{f}(N)) \big|
\leq \textup{Lip}(\phi) | D^+_\xi F |,
\end{align*}
which, together with (ii), implies that $\phi(F)\in\textup{dom}(D)$
with $|D_\xi \phi(F)| \leq \textup{Lip}(\phi)| D_\xi F|$.
This leads to a generalization of the Poincar\'e inequality
\eqref{Poi1}:
\begin{align}
\textup{Var} ( \phi(F) ) \leq \textup{Lip}^2(\phi) \mathbb{E} [ \| DF \|_\mathfrak{H}^2 ].
\label{Poi1b}
\end{align}
Moreover, for any $F\in\textup{dom}(D)$ and any $M>0$, the truncated random variable $F_M := ( M \wedge F) \vee (-M)$
is bounded and belongs to $\textup{dom}(D)$.
\smallskip
\noindent
(iv) Let $\mathcal{A}$ be the set of bounded random variables $F\in \textup{dom}(D)$
with
\[
\mathbb{E} \int_Z | D_\xi F |^p m(d\xi) <\infty \,\,\, \text{for all $p\in[1,\infty)$}.
\]
Then, $\mathcal{A}$ is stable under multiplication.
Indeed, for $F = \mathfrak{f}(N), G= \mathfrak{g}(N)\in \mathcal{A}$,
\noindent
\begin{align}
\begin{aligned}
D^+_\xi (FG)
& = \mathfrak{f}(N+\delta_\xi) \mathfrak{g}(N + \delta_\xi) - \mathfrak{f}(N) \mathfrak{g}(N) \\
&= \big[ \mathfrak{f}(N+\delta_\xi) - \mathfrak{f}(N) \big] \mathfrak{g}(N) + \mathfrak{f}(N) \big[ \mathfrak{g}(N+\delta_\xi) - \mathfrak{g}(N)\big] \\
&\qquad\qquad + \big[ \mathfrak{f}(N+\delta_\xi) - \mathfrak{f}(N) \big] \cdot \big[ \mathfrak{g}(N+\delta_\xi) - \mathfrak{g}(N) \big] \\
& = FD^+_\xi G + GD^+_\xi F + (D^+_\xi F) D^+_\xi G,
\end{aligned}
\notag
\end{align}
\noindent
so that $D^+(FG)\in L^2(\Omega; \mathfrak{H})$. This implies $FG\in\textup{dom}(D)$ with
\begin{align*}
\mathbb{E}\int_Z |D_\xi (FG) |^p m(d\xi)
&\lesssim \| F\|_\infty^p \, \mathbb{E} \int_Z | D_\xi G |^p m(d\xi)
+ \| G\|_\infty^p \, \mathbb{E} \int_Z | D_\xi F |^p m(d\xi) \\
&\quad
+ \bigg( \mathbb{E} \int_Z | D_\xi F |^{2p} m(d\xi) \bigg)^{\frac12}
\bigg( \mathbb{E} \int_Z | D_\xi G |^{2p} m(d\xi) \bigg)^{\frac12} <\infty.
\end{align*}
Similarly, for $F_i\in\textup{dom}(D) $ with
\[
\| F_i\|_\infty + \mathbb{E} \int_Z | D_\xi F_i|^4 m(d\xi) < \infty, \,\, i=1,2,
\]
\noindent
one can show
that $F_1F_2\in\textup{dom}(D)$ with
\begin{align}
D_\xi (F_1F_2)
= F_1D_\xi F_2 + F_2D_\xi F_1 + (D_\xi F_1)D_\xi F_2
\label{F12}
\end{align}
almost surely for $m$-almost every $\xi\in Z$.
\end{remark}
\subsection{Poincar\'e inequalities} \label{SUB23}
Recall from the Poincar\'e inequality \eqref{Poi1} (see also \eqref{Poi1b})
that the variance $\textup{Var}(F)$ of a Malliavin differentiable random variable $F$
is controlled by the first Malliavin derivative $DF$.
That is, if $\| DF \|_\mathfrak{H}$ is typically small, then
the random variable $F$ has small fluctuations.
It was first in a paper by Chatterjee \cite{Ch09} that
a possible second-order extension of the (Gaussian) Poincar\'e
inequality was investigated.
Suppose $F = g(X_1, ... , X_m) $ is a nice function of i.i.d. standard normal
random variables $\{ X_i\}_{i=1}^m$.
If the squared operator norm of
the Hessian matrix $\nabla^2 g(X_1, ... , X_m)$ is typically small compared
to the gradient $\nabla g(X_1, ... , X_m)$, then $F$ is close to a linear combination of the $X_i$'s
and thus
approximately Gaussian,
with the proximity measured in the total-variation distance \eqref{distTV};
see Theorem 2.2 in \cite{Ch09}. This quantitative bound is
now known as the second-order Gaussian Poincar\'e inequality.
It has been generalized by Nourdin, Peccati, and Reinert \cite{NPR09}
to the case where $F$ is a general Malliavin differentiable
random variable (with respect to an isonormal Gaussian process)
and may depend on infinitely many coordinates (e.g.
$F= g( \{X_i\}_{i\in\mathbb{N}})$). See also Vidotto's improvement in
\cite{Vid20}. In a recent joint work \cite{BNQSZ} with Nualart and Quer-Sardanyons,
we implemented this second-order Gaussian Poincar\'e
inequality to prove the quantitative central limit theorem
for the stochastic wave equation driven by colored-in-time Gaussian noise.
See also the study of the stochastic heat equation in
\cite{NXZ22} by Nualart, Xia, and the second author.
\medskip
\noindent
$\bullet$ {\bf Second-order Poincar\'e inequality on the Poisson space.}
In \cite{LPS16}, Last, Peccati, and Schulte extended the second-order
Gaussian Poincar\'e inequality to the Poisson setting.
Let us first introduce several distances for distributional approximation.
Suppose $F, G$ are real random variables with
distribution measures $\mu$ and $\nu$, respectively.
\medskip
\noindent
(i) $d_{\rm FM}$ denotes the Fortet-Mourier metric, also known
as the bounded Wasserstein distance:
\noindent
\begin{align}
\begin{aligned}
d_{\rm FM}(F, G) &= d_{\rm FM}(\mu, \nu) \\
&= \sup\big\{ | \mathbb{E}[ h(F) ] - \mathbb{E}[ h(G) ] | :
\|h \|_\infty + \textup{Lip}(h) \leq 1 \big\}.
\end{aligned}
\notag
\end{align}
\noindent
It is well known that $d_{\rm FM}$ characterizes weak convergence
of probability measures on $\mathbb{R}$.
\medskip
\noindent
(ii) $d_{\rm Wass}$ denotes the
$1$-Wasserstein distance:
\noindent
\begin{align}
\begin{aligned}
d_{\rm Wass}(F, G) &= d_{\rm Wass}(\mu, \nu) \\
&= \sup\big\{ | \mathbb{E}[ h(F) ] - \mathbb{E}[ h(G) ] | :
\textup{Lip}(h) \leq 1 \big\}.
\end{aligned}
\notag
\end{align}
\noindent
It is trivial that $d_{\rm Wass}(F, G) \geq d_{\rm FM}(F, G)$.
\medskip
\noindent
(iii) $d_{\rm Kol}$ denotes the Kolmogorov distance:
\noindent
\begin{align}
\begin{aligned}
d_{\rm Kol}(F, G) &= d_{\rm Kol}(\mu, \nu) \\
&= \sup\big\{ | \mathbb{E}[ \mathbf 1_{(-\infty, t] }(F) ] - \mathbb{E}[ \mathbf 1_{(-\infty, t] }(G) ] | :
t\in\mathbb{R} \big\} \\
&= \sup\big\{ | \mathbb{P}(F\leq t) - \mathbb{P}(G\leq t) | :
t\in\mathbb{R} \big\}.
\end{aligned}
\notag
\end{align}
\noindent
The Kolmogorov distance is a natural metric for studying normal approximation,
in view of the fact that for a sequence of real-valued random variables $\{F_n\}_{n\in\mathbb{N}}$,
$F_n$ converges in law to a standard normal random variable $Y$ (i.e.
$d_{\rm FM}(F_n, Y)\to 0$)
if and only if
$d_{\rm Kol}(F_n, Y) \to 0$ as $n\to\infty$; see \cite[Proposition C.3.2]{NP12}.
It is also well known that
\begin{align}
d_{\rm Kol}(F, Y) \leq \sqrt{ d_{\rm Wass}(F, Y)},
\label{KolWass}
\end{align}
\noindent
when $Y \sim \mathcal{N}(0,1)$;
see, for example, \cite[Proposition 1.2]{Ross11}.
\medskip
\noindent
(iv) The aforementioned total-variation distance is defined by
\begin{align}
\begin{aligned}
d_{\rm TV}(F, G) &= d_{\rm TV}(\mu, \nu) \\
&= \sup\big\{ | \mathbb{P}(F\in B) - \mathbb{P}(G\in B) | :
B\in\mathcal{B}(\mathbb{R}) \big\}.
\end{aligned}
\label{distTV}
\end{align}
\noindent
It is trivial that $d_{\rm TV}(F, G) \geq d_{\rm Kol}(F, G)$.
Convergence in the total-variation distance is much stronger than weak convergence.
For example, let $\{Y_i\}_{i\in\mathbb{N}}$ be i.i.d. Poisson random variables
with mean $1$ and put $F_n:=\frac{1}{\sqrt{n}} (Y_1+ ... + Y_n - n)$,
which is an element
of the first Poisson Wiener chaos $\mathbb{C}_1$.
Then $F_n$ converges in law to $Y\sim\mathcal{N}(0,1)$ as $n\to\infty$, while due to the discrete nature
of $F_n$, $d_{\rm TV}(F_n, Y) = 1$ for all $n$. For this reason, we will not consider the
total-variation distance for our quantitative central limit theorems.
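To summarize the comparisons above: for a standard Gaussian target $Y\sim\mathcal{N}(0,1)$,
\[
d_{\rm FM}(F, Y) \leq d_{\rm Wass}(F, Y)
\quad\text{and}\quad
d_{\rm Kol}(F, Y) \leq \min\Big\{ d_{\rm TV}(F, Y),\, \sqrt{ d_{\rm Wass}(F, Y) } \Big\},
\]
which explains why bounds in the Wasserstein distance (Proposition \ref{prop:Wass} below) also control the Kolmogorov distance, albeit with a possible loss in the rate.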
\bigskip
Now we are ready to state the second-order Poincar\'e inequality from
\cite{LPS16}.
\begin{proposition} \label{prop:Wass} \textup{(\cite[Theorem 1.1]{LPS16})}
Let $F\in\textup{dom}(D)$ with $\mathbb{E}[F]=0$ and $\textup{Var}(F)=1$. Then,
\noindent
\begin{align}
d_{\rm FM}(F, Y) \leq d_{\rm Wass}(F, Y) \leq \gamma_1 + \gamma_2 + \gamma_3,
\label{2nd:Wass}
\end{align}
\noindent
where $Y\sim \mathcal{N}(0,1)$ and
\noindent
\begin{align}
\begin{aligned}
\gamma_1&:= 2\bigg( \int_{Z^3} \| (D^+_{\xi_1} F) (D^+_{\xi_2}F) \|_2 \,
\| (D^+_{\xi_1} D^+_{\xi_3} F) (D^+_{\xi_2} D^+_{\xi_3} F) \|_2 \,
m(d\xi_1)m(d\xi_2) m(d\xi_3) \bigg)^{\frac12}
\\
\gamma_2&:= \bigg( \int_{Z^3} \| (D^+_{\xi_1} D^+_{\xi_3} F)
(D^+_{\xi_2} D^+_{\xi_3} F) \|^2_2 \,
m(d\xi_1)m(d\xi_2) m(d\xi_3) \bigg)^{\frac12} \\
\gamma_3&:= \int_{Z} \mathbb{E}\big[ | D^+_\xi F |^3 \big] m(d\xi).
\end{aligned}
\label{gamma123}
\end{align}
\end{proposition}
Recall from Remark \ref{rem:add1} that $D^+$ denotes the add-one cost operator
that coincides with the Malliavin derivative operator $D$ on $\textup{dom}(D)$.
The quantities $\gamma_1, \gamma_2$ control the size of the fluctuations
of the second-order difference operator in a relative and an absolute
way, so that a small value of $\gamma_1 + \gamma_2$
forces $F$ to be close to its projection onto the first Poisson Wiener chaos $\mathbb{C}_1$.
A small value of $\gamma_3$ heuristically indicates that this projection onto $\mathbb{C}_1$
is close in distribution to a Gaussian random variable.
In view of the bound \eqref{KolWass}, we deduce from \eqref{2nd:Wass}
that
\[
d_{\rm Kol}(F, Y) \leq \sqrt{\gamma_1 + \gamma_2 + \gamma_3},
\]
which would likely lead to sub-optimal rates.
\begin{proposition} \label{prop:Kol} \textup{(\cite[Theorem 1.2]{LPS16})}
Let $F\in\textup{dom}(D)$ with $\mathbb{E}[F]=0$ and $\textup{Var}(F)=1$. Then,
\noindent
\begin{align}
d_{\rm Kol}(F, Y) \leq \gamma_1 + \gamma_2 + \gamma_3 + \gamma_4 + \gamma_5+ \gamma_6,
\label{2nd:Kol}
\end{align}
\noindent
where $Y\sim \mathcal{N}(0,1)$, $\gamma_1,\gamma_2, \gamma_3$
are as in \eqref{gamma123},
and
\noindent
\begin{align}
\begin{aligned}
\gamma_4&:= \tfrac{1}{2} \| F\|_4 \int_Z \| D^+_\xi F \|_4^3 \, m(d\xi)
\\
\gamma_5&:= \bigg( \int_{Z} \| D^+_\xi F \|_4^4 \, m(d\xi) \bigg)^{\frac12} \\
\gamma_6&:= \bigg( \int_{Z^2} \Big[ 6 \| D^+_{\xi_1} F \|_4^2 \| D^+_{\xi_1} D^+_{\xi_2}F\|_4^2
+ 3 \| D^+_{\xi_1} D^+_{\xi_2}F\|_4^4 \Big] \, m(d\xi_1 ) m(d\xi_2 ) \bigg)^{\frac12} .
\end{aligned}
\label{gamma456}
\end{align}
\end{proposition}
\noindent
$\bullet$ {\bf Multivariate second-order Poincar\'e inequality.} The above univariate
second-order Poincar\'e inequality will be applied to show the quantitative central
limit theorems in Theorem \ref{thm:main} (ii). The following multivariate
version by Schulte and Yukich \cite{SY19} will be used to
show the convergence of finite-dimensional distributions in
Theorem \ref{thm:main} (iii).
\begin{proposition}\label{prop:mPoi}\textup{(\cite[Theorem 1.1]{SY19})}
Let $F = (F_1, ... , F_m)$ be an $m$-dimensional random vector
such that $F_i\in\textup{dom}(D)$ has zero mean for each $i$. Then,
for any $C^3$ function $h: \mathbb{R}^m \to \mathbb{R}$ such that the absolute
values of the second and third partial derivatives are bounded by one,
we have the following bound
\noindent
\begin{align}
\big| \mathbb{E}[ h(F)] - \mathbb{E}[ h(Z) ] \big|
\leq \frac{m}{2} \sum_{i,j=1}^m \big| \sigma_{i,j} - {\rm Cov}(F_i, F_j) \big| +
m \tau_1 + \frac{m}{2} \tau_2 + \frac{m^2}{4}\tau_3,
\notag
\end{align}
\noindent
where $Z$ is a centered Gaussian vector on $\mathbb{R}^m$ with covariance matrix
$\Sigma= \{ \sigma_{i,j} \}_{i,j=1}^m$
and
\noindent
\begin{align}
\begin{aligned}
\tau_1
&:= \bigg( \sum_{i,j=1}^m \int_{Z^3} \| (D^+_{\xi_1} F_i) (D^+_{\xi_2}F_i) \|_2 \\
&\qquad\qquad \cdot
\| (D^+_{\xi_1} D^+_{\xi_3} F_j) (D^+_{\xi_2} D^+_{\xi_3} F_j ) \|_2 \,
m(d\xi_1)m(d\xi_2) m(d\xi_3) \bigg)^{\frac12} \\
\tau_2
&:= \bigg( \sum_{i,j=1}^m \int_{Z^3}
\| (D^+_{\xi_1} D^+_{\xi_3} F_i) (D^+_{\xi_2} D^+_{\xi_3} F_i ) \|_2 \\
&\qquad\qquad \cdot
\| (D^+_{\xi_1} D^+_{\xi_3} F_j) (D^+_{\xi_2} D^+_{\xi_3} F_j ) \|_2 \,
m(d\xi_1)m(d\xi_2) m(d\xi_3) \bigg)^{\frac12}
\\
\tau_3
&:= \sum_{i=1}^m \int_Z \mathbb{E}\big[ | D^+_{\xi} F_i |^3 \big] m(d\xi).
\end{aligned}
\label{tau123}
\end{align}
\end{proposition}
\medskip
\subsection{Moment inequalities}
Recall the definition of $G_t$ from \eqref{FSol} and define
\begin{align}
\begin{aligned}
\varphi_{t, R}(r,y) &:= \int_{-R}^R G_{t-r}(x-y)dx.
\end{aligned}
\label{Rosen6b}
\end{align}
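Since $G_t(x) = \frac12 \mathbf 1_{\{ |x| < t\}}$ in the present setting (see \eqref{LG1} below), $\varphi_{t,R}(r,y)$ is half the length of the interval $[-R, R]\cap \big( y-(t-r),\, y+(t-r) \big)$; in particular, $0\leq \varphi_{t,R}(r,y) \leq (t-r)\wedge R$ for $0 < r < t$.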
\noindent
We record below a few simple facts.
\begin{lemma} \label{lem:G}
{\rm (i)} For $t\in\mathbb{R}_+$, we have
\noindent
\begin{align}
\begin{aligned}
\int_{\mathbb{R}} G_t(y)dy &= t .
\end{aligned}
\label{factsG}
\end{align}
\smallskip
\noindent
{\rm (ii)} For $t \geq s >0$, we have $0 \leq \varphi_{t,R} - \varphi_{s,R} \leq t -s$
and
\noindent
\begin{align}
\begin{aligned}
\int_\mathbb{R} \big[ \varphi_{t, R}(r,y) - \varphi_{s, R}(r,y) \big] dy &= 2(t-s)R
\end{aligned}
\label{Rosen6c}
\end{align}
\noindent
for any $r \in (0, s]$.
\smallskip
\noindent
{\rm (iii)} For $0< s<t$, we have
\noindent
\begin{align}
\begin{aligned}
\int_s^t \int_{\mathbb{R}} \varphi^2_{t, R}(r,y) \, drdy
&\leq \frac{4}{3} R (t-s)^3
\\
\int_s^t \int_{\mathbb{R}} \varphi^4_{t, R}(r,y) \, drdy
&
\leq 2 R^2 (t-s)^4.
\end{aligned}
\label{Rosen7c}
\end{align}
As a consequence, we have
\noindent
\begin{align}
\int_s^t \int_{\mathbb{R}} \varphi^p_{t, R}(r,y) \, drdy
\leq
\begin{cases}
2^{\frac p2} R^{\frac p2} (t-s)^{2 + \frac p 2 } \,\,\, &\text{for $p\in[2,4]$} \\
2^{p-1} (t-s)^p R^2 &\text{for $p\in(4,\infty)$.}
\end{cases}
\label{Rosen7z}
\end{align}
\end{lemma}
\begin{proof} (i) is trivial. Let us prove (ii) now.
Let $t \geq s \geq 0$. Then,
\begin{align}
\varphi_{t,R}(r,y) - \varphi_{s,R}(r,y)
= \frac{1}{2} \int_{-R}^R \mathbf 1_{\{ s-r \leq |x-y | < t-r \}} dx,
\label{LG1}
\end{align}
which implies that
$ \varphi_{t,R}(r,y) - \varphi_{s,R}(r,y) \in [0,t-s]$ for any $(r, y)\in\mathbb{R}_+\times\mathbb{R}$.
It is also easy to see from \eqref{LG1}
that for $0< r \leq s$
\noindent
\begin{align*}
\int_\mathbb{R} \big[ \varphi_{t, R}(r,y) - \varphi_{s, R}(r,y) \big] dy
& = \frac{1}{2} \int_{-R}^R \bigg( \int_\mathbb{R} \mathbf 1_{\{ s-r \leq |x-y | < t-r \}} dy \bigg)dx \\
&= 2(t-s)R.
\end{align*}
That is, the equality \eqref{Rosen6c} is proved.
To prove the first bound in part (iii),
we write
\noindent
\begin{align}
\begin{aligned}
& \int_s^t \int_{\mathbb{R}} \varphi^2_{t, R}(r,y) \, drdy \\
&\quad
= \int_s^t \int_{\mathbb{R}} \int_{-R}^R \int_{-R}^R G_{t-r}(x_1-y) G_{t-r}(x_2-y) dx_1dx_2 dr dy \\
&\quad
\leq \int_s^t \int_{-R}^R \bigg[ \int_{-R}^R G_{2t-2r}(x_1-x_2)
\bigg( \int_{\mathbb{R}} G_{t-r}(x_2-y) dy \bigg)
dx_1 \bigg] dx_2 dr \\
&\quad
\leq \int_s^t 2(t-r)^2 \cdot 2R dr = \frac{4}{3} R (t-s)^3,
\end{aligned}
\label{Rosen7d}
\end{align}
\noindent
where the second step in \eqref{Rosen7d} follows from the triangle inequality
\[
\mathbf 1_{\{ |x_1- y| <t-r \}}\cdot \mathbf 1_{\{ |x_2- y| <t-r \}}
\leq \mathbf 1_{\{ |x_1- x_2| <2t-2r \}}\cdot \mathbf 1_{\{ |x_2- y| <t-r \}}.
\]
Similarly,
\noindent
\begin{align}
\begin{aligned}
& \int_s^t \int_{\mathbb{R}} \varphi^4_{t, R}(r,y) \, drdy \\
&\quad
= \int_s^t \int_{\mathbb{R}} \int_{[-R,R]^4} \prod_{j=1}^4 G_{t-r}(x_j-y) d\pmb{x_4} dr dy \\
&\quad
\leq \int_s^t \int_{[-R, R]^2} G_{2t-2r}(x_1-x_2) \bigg[ \int_{[-R,R]^2} G_{2t-2r}(x_2-x_3)
G_{2t-2r}(x_3-x_4) \\
&\qquad\qquad
\cdot \bigg( \int_{\mathbb{R}} G_{t-r}(x_4-y) dy\bigg) dx_4 dx_3 \bigg] dx_2 dx_1 dr \\
&\quad
\leq \int_s^t 2R^2 \cdot 4(t-r)^3 dr =
2 R^2 (t-s)^4.
\end{aligned}
\label{Rosen7e}
\end{align}
\noindent
It remains to show the inequality \eqref{Rosen7z}.
The case $p\in[2,4]$ follows from the inequalities in \eqref{Rosen7c}
by interpolation (i.e. an application of H\"older's inequality).
For an integer $p\geq 4$, one can repeat the steps in \eqref{Rosen7e}
to arrive at
\noindent
\begin{align*}
\begin{aligned}
& \int_s^t \int_{\mathbb{R}} \varphi^p_{t, R}(r,y) \, drdy
\leq \int_s^t 2R^2 \cdot [2(t-r)]^{p-2} (t-r) dr
\leq 2^{p-1} (t-s)^p R^2,
\end{aligned}
\end{align*}
\noindent
and therefore,
the general case follows by interpolation.
This concludes the proof.
\qedhere
\end{proof}
Finally, we end this section with a consequence
of Rosenthal's inequality; see Theorem 2.1, Theorem 2.3, and Corollary 2.5 in
\cite{BN16}.
\begin{proposition} \label{Prop:Rosen}
Recall the definition of $G_t$ from \eqref{FSol}.
Then, the following statements hold.
\smallskip
\noindent
{\rm (i)}
Let $\{\Phi(s,y)\}_{ (s, y) \in \mathbb{R}_+ \times \mathbb{R}}$ be a predictable process such that
\noindent
\begin{align}
\mathbb{E}\int_0^t \int_{\mathbb{R}}G_{t-s}^2(x-y)|\Phi(s,y)|^2 dyds<\infty.
\label{Rosen1}
\end{align}
\noindent
Suppose \eqref{mp} holds for some finite $p\geq 2$. Then,
\noindent
\begin{align}
\begin{aligned}
&\mathbb{E} \bigg[ \Big| \int_0^t \int_{\mathbb{R}}G_{t-s}(x-y)\Phi(s,y)L(ds,dy) \Big|^p \bigg] \\
& \qquad\quad
\leq C_{p}(t) \int_0^t \int_{\mathbb{R}}G_{t-s}^p(x-y)\mathbb{E} \big[ |\Phi(s,y)|^p \big] dsdy,
\end{aligned}
\label{Rosen2}
\end{align}
where $C_{p}(t)=2^{p-1}B_p^p\big( m_2^{\frac p2} t^{p-2} + m_p \big)$
with $B_p$ the constant in Rosenthal's inequality.
\smallskip
\noindent
{\rm (ii)} Suppose $m_p<\infty$ for some finite $p\geq 2$. Recall $F_R(t)$ from \eqref{FRT}.
Then, for any finite $T > 0$,
there is some constant $A_T$ only depending on $T$
such that
\noindent
\begin{align}
\| F_R(t) - F_R(s) \|^p_p \leq A_{T} \cdot R^{\frac{p}{2}} |t-s|^p
\label{Rosen2a}
\end{align}
\noindent
for any $t,s\in [0,T]$ and for any $R\geq 1$.
\noindent
In particular, it holds for any $R\geq 1$ that
\noindent
\begin{align}
\sup_{t\leq T}\| F_R(t) \|^p_p \leq A_{T} \cdot R^{\frac{p}{2}} T^p.
\label{Rosen2b}
\end{align}
\end{proposition}
\begin{proof}
Fix $t\in(0,\infty)$. We first prove the bound \eqref{Rosen2} in part (i).
By Theorem 2.3 in \cite{BN16}
and the condition \eqref{Rosen1},
the process $\{Y_r\}_{r\in[0,t]}$, given by
\begin{align*}
Y_r & = \int_0^r \int_{\mathbb{R}}G_{t-s}(x-y)\Phi(s,y)L(ds,dy) \\
&= \int_0^r \int_{\mathbb{R}} \int_{\mathbb{R}_0} G_{t-s}(x-y)\Phi(s,y) z \widehat{N}(ds,dy, dz), \,\, r\in[0,t],
\end{align*}
\noindent
has a c\`adl\`ag (i.e. right continuous with left limits) modification,
which is a
martingale with
\noindent
\begin{align}
\begin{aligned}
\| Y_t\|^p_p &= \Big\| \int_0^t \int_{\mathbb{R}}G_{t-s}(x-y)\Phi(s,y)L(ds,dy) \Big\|_p^p \\
&\leq B^p_p \bigg[
\Big\| \int_0^t \int_{\mathbb{R}} \int_{\mathbb{R}_0} G^2_{t-s}(x-y)\Phi^2(s,y) |z|^2 dsdy\nu(dz)
\Big\|_{\frac{p}{2}}^{\frac12} \\
&\qquad\qquad\qquad
+ \bigg( \mathbb{E} \int_0^t \int_{\mathbb{R}} \int_{\mathbb{R}_0} G^p_{t-s}(x-y)|\Phi|^p(s,y) |z|^p ds dy\nu(dz) \bigg)^{\frac1p} \bigg]^p ,
\end{aligned}
\label{Rosen3}
\end{align}
\noindent
where $B_p$ is the constant in Rosenthal's inequality;
see Theorem 2.1 in \cite{BN16}.
Then, we deduce from \eqref{Rosen3}, $|a+b|^p \leq 2^{p-1} ( |a|^p + |b|^p)$,
and Minkowski's inequality with \eqref{m2} and \eqref{mp} that
\noindent
\begin{align}
\begin{aligned}
\| Y_t\|^p_p
&\leq 2^{p-1} B^p_p \bigg[ m_2^{\frac p 2}
\bigg( \int_0^t \int_{\mathbb{R}} G^2_{t-s}(x-y) \| \Phi(s,y)\|_p^2 dsdy \bigg)^{\frac p2} \\
&\qquad\qquad\qquad
+ m_p \int_0^t \int_{\mathbb{R}} G^p_{t-s}(x-y) \| \Phi (s,y)\|_p^p dsdy \bigg].
\end{aligned}
\label{Rosen4}
\end{align}
Note that $G_{t-s}(x-y) = 0$ for $|x-y| \geq t-s$ and
\begin{align}
\int_0^t \int_\mathbb{R} \mathbf 1_{\{ | x-y| < t-s\}} dsdy = t^2.
\label{Rosen5}
\end{align}
\noindent
Thus, it follows from Jensen's inequality with \eqref{Rosen5} that
\noindent
\begin{align}
\begin{aligned}
& \bigg( \int_0^t \int_{\mathbb{R}} G^2_{t-s}(x-y) \| \Phi(s,y)\|_p^2 dsdy \bigg)^{\frac p2} \\
&\quad \leq ( t^2 )^{\frac{p}{2} - 1} \int_0^t \int_{\mathbb{R}} G^p_{t-s}(x-y) \| \Phi(s,y)\|_p^p dsdy.
\end{aligned}
\label{Rosen6}
\end{align}
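Indeed, the integrand on the left-hand side of \eqref{Rosen6} vanishes off the set $\{(s,y): |x-y| < t-s\}$ (since $G_{t-s}(x-y)=0$ for $|x-y|\geq t-s$), whose measure is $t^2$ by \eqref{Rosen5}; applying Jensen's inequality to the convex function $u\mapsto u^{\frac p2}$ and the normalized measure $t^{-2}\mathbf 1_{\{|x-y|<t-s\}}\, dsdy$ yields
\noindent
\begin{align*}
\bigg( \frac{1}{t^2} \int_0^t \int_{\mathbb{R}} G^2_{t-s}(x-y) \| \Phi(s,y)\|_p^2 \, dsdy \bigg)^{\frac p2}
\leq \frac{1}{t^2} \int_0^t \int_{\mathbb{R}} G^p_{t-s}(x-y) \| \Phi(s,y)\|_p^p \, dsdy,
\end{align*}
\noindent
which is \eqref{Rosen6} after multiplying both sides by $t^p$.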
Hence, the desired inequality \eqref{Rosen2} in part (i) follows
from \eqref{Rosen4} and \eqref{Rosen6}.
\medskip
Now we prove the difference estimate \eqref{Rosen2a} in part (ii).
Without losing any generality, we assume $0 \leq s < t \leq T$.
By Lemma \ref{lem:Fubi},
we can rewrite $F_R(t)$ as
\begin{align*}
F_R(t) = \int_0^t \int_\mathbb{R} \int_{\mathbb{R}_0} \varphi_{t, R}(r,y) u(r,y) z \widehat{N}(dr, dy, dz)
\end{align*}
\noindent
with $\varphi_{t,R}$ as in \eqref{Rosen6b}.
Note that we can write
\begin{align}
\begin{aligned}
F_R(t) - F_R(s)
& = \int_0^s \int_{\mathbb{R}\times\mathbb{R}_0} \big[ \varphi_{t, R}(r,y) - \varphi_{s, R}(r,y) \big]
u(r,y) z \widehat{N}(dr, dy, dz) \\
&\qquad
+ \int_s^t \int_{\mathbb{R}\times\mathbb{R}_0} \varphi_{t, R}(r,y) u(r,y) z \widehat{N}(dr, dy, dz)\\
&:= \mathbf{T}_1 + \mathbf{T}_2.
\end{aligned}
\label{Rosen6ba}
\end{align}
\noindent
As in \eqref{Rosen4},
we can deduce from Rosenthal's inequality
(Theorem 2.3 in \cite{BN16}), Minkowski's inequality, \eqref{KPT}, the fact that $\varphi_{t,R}-\varphi_{s,R} \in [0,t-s]$, and \eqref{Rosen6c},
that
\noindent
\begin{align}
\begin{aligned}
\| \mathbf{T}_1 \|_p^p
&\leq 2^{p-1} B_p^p \bigg[ m_2^{\frac p2 } \bigg( \int_0^s \int_{\mathbb{R}}
\big| \varphi_{t, R}(r,y) - \varphi_{s, R}(r,y) \big|^2 \| u(r,y) \|_p^2 \, drdy \bigg)^{\frac{p}{2}} \\
&\qquad\qquad
+ m_p \int_0^s \int_{\mathbb{R}}
\big| \varphi_{t, R}(r,y) - \varphi_{s, R}(r,y) \big|^p \| u(r,y) \|_p^p \, drdy \bigg] \\
&\leq 2^{p-1} B_p^p \big[ m_2^{\frac p2 } \cdot (2t)^{\frac{p}{2}} K_p^p(t) (t-s)^p R^{\frac{p}{2}}
+ m_p \cdot 2t K_p^p(t) (t-s)^p R \big] \\
&\lesssim K_p^p(t)(t+t^{\frac{p}{2}}) (t-s)^p R^{\frac{p}{2}} \,\,\, \text{for $R\geq 1$}
\end{aligned}
\label{Rosen7a}
\end{align}
\noindent
and
\noindent
\begin{align}
\begin{aligned}
\| \mathbf{T}_2 \|_p^p
&\leq 2^{p-1}B_p^p \bigg[ m_2^{\frac p2 } \bigg( \int_s^t \int_{\mathbb{R}}
\varphi^2_{t, R}(r,y) \| u(r,y) \|_p^2 \, drdy \bigg)^{\frac p2} \\
&\qquad\qquad
+ m_p \int_s^t \int_{\mathbb{R}}
\varphi^p_{t, R}(r,y) \| u(r,y) \|_p^p\, drdy \bigg] \\
&\lesssim K_p^p(t) \bigg( \int_s^t \int_{\mathbb{R}}
\varphi^2_{t, R}(r,y) \, drdy \bigg)^{\frac{p}{2}}
+ K_p^p(t) \int_s^t \int_{\mathbb{R}}
\varphi^p_{t, R}(r,y) \, drdy.
\end{aligned}
\label{Rosen7b}
\end{align}
\noindent
Therefore, we can deduce from \eqref{Rosen6ba}, \eqref{Rosen7a},
and \eqref{Rosen7b} with \eqref{Rosen7z}
that
\[
\big\| F_R(t) - F_R(s)\big\|_p^p
\lesssim K_p^p(t)[ 1 + t + t^{\frac p2} ] R^{\frac p2} | t-s|^p
\]
for $R\geq 1$. This proves the bound \eqref{Rosen2a},
and thus the uniform bound \eqref{Rosen2b} by noting that
$F_R(0)=0$.
Hence, the proof of Proposition \ref{Prop:Rosen} is completed.
\qedhere
\end{proof}
\medskip
\section{Malliavin derivatives of the hyperbolic Anderson model} \label{SEC3}
In this section, we will establish $L^p(\Omega)$-bounds for Malliavin derivatives
of hyperbolic Anderson model \eqref{wave}.
As an intermediate step,
we will first study the stochastic wave equation with delta initial velocity in
Subsection \ref{SUB31}.
\subsection{Stochastic wave equation with delta initial velocity} \label{SUB31}
In this subsection, we study the following stochastic wave equation:
\noindent
\begin{align}
\begin{cases}
\partial_t^2 v (t,x)
=\partial_x^2 v(t,x)+ v(t,x)\dot{L}(t,x), \quad t> r, \ x \in \mathbb{R} \\
v(r,\cdot) = 0, \quad
\partial_t v(r,\cdot) = z \delta_{y} ,
\end{cases}
\label{wave_dl}
\end{align}
\noindent
where $(r, y, z)\in\mathbb{R}_+\times\mathbb{R}\times\mathbb{R}_0$ is fixed
and $\dot{L}$ is the space-time L\'evy white noise as in \eqref{wave}.
We say that a predictable process $v=v^{(r,y,z)}$ is
a solution to the equation \eqref{wave_dl}
provided that:
\begin{itemize}
\item[(i)]
$v(r,x)=0$ for any $x \in \mathbb{R}$,
\item[(ii)]
for any $t> r$ and $x \in \mathbb{R}$, the following equation holds almost surely:
\noindent
\begin{align}
v(t,x)=G_{t-r}(x-y)z + \int_r^t \int_{\mathbb{R}}G_{t-s}(x-y') v(s,y')L(ds,dy'),
\label{wave_dl2}
\end{align}
\noindent
where the stochastic integral in \eqref{wave_dl2} is interpreted in It\^o sense and
coincides with the Kabanov-Skorohod integral $\delta( H)$ with
$H(s, y', z') =G_{t-s}(x-y') v(s,y') z'$.
\end{itemize}
As we will see shortly,
the solution $v^{(r,y,z)}$ is related to the Malliavin derivative $D_{r,y,z}u(t,x)$,
via relation \eqref{dec1}.
\begin{proposition} \label{prop:dl}
Fix $(r, y,z)\in\mathbb{R}_+\times\mathbb{R}\times\mathbb{R}_0$
and suppose $m_2<\infty$ as in \eqref{m2}. Then the following statements hold.
\smallskip
\noindent
{\rm (i)} The equation \eqref{wave_dl} has a unique solution $v=v^{(r,y,z)}$.
Moreover, if $m_p<\infty$ for some $p\geq 2$ as in \eqref{mp},
we have for any $T>0$ that
\noindent
\begin{align}
\sup_{ r \leq t \leq T}\, \sup_{x,y \in \mathbb{R}}
\| v^{(r,y,z)}(t,x)\|_p \leq C_{T, p}|z|,
\label{wave_dl3}
\end{align}
\noindent
where $C_{T,p}>0$ is a constant that only depends on $T$ and $p$
\textup{(}see \eqref{wave_dl9a}\textup{)}.
\smallskip
\noindent
{\rm (ii)} Let $t>r$ and $x \in \mathbb{R}$. Then, $v^{(r,y,z)}(t,x)$ admits the following
chaos expansion in $L^2(\Omega)$:
\noindent
\begin{align}
v^{(r,y,z)}(t,x)= G_{t-r}(x-y)z
+ \sum_{n \geq 1} I_n\big( G_{t,x,n+1} (r,y,z; \bullet ) \big),
\label{wave_dl4}
\end{align}
\noindent
where\footnote{That is,
$G_{t,x,k+1}(r, y, z; \bullet)
= F_{t,x,k+1}(\pmb{t_{k+1}},\pmb{x_{k+1}}, \pmb{z_{k+1}}) |_{(t_1, x_1, z_1) = (r,y,z)} $ with $F_{t,x,n}$ given by \eqref{KER:F}.}
\noindent
\begin{align}
\begin{aligned}
&G_{t,x,n+1}(r, y, z; \pmb{t_n},\pmb{x_n}, \pmb{z_n}) \\
& = G_{t-t_n}(x-x_n) G_{t_n-t_{n-1}}(x_n-x_{n-1})\cdots G_{t_2-t_1}(x_2-x_1)G_{t_1-r}(x_1-y) z \prod_{j=1}^n z_j .
\end{aligned}
\label{KER:F2}
\end{align}
\end{proposition}
\begin{proof} (i) Throughout this proof, we fix $T>0$ and omit the
fixed superscripts
$r, y, z$.
Consider the sequence $\{v_n\}_{n\geq 0}$ of Picard iterations defined as follows:
\begin{itemize}
\item we set $v_n(r,x)=0$ for any $x \in \mathbb{R}$ and $n\in\mathbb{N}_{\geq 0}$;
\item for $t>r$, we let $v_0(t,x)=G_{t-r}(x-y)z$ and
\noindent
\begin{align}
v_{n+1}(t,x) = G_{t-r}(x-y) z + \int_r^t \int_{\mathbb{R}}G_{t-s}(x-y')v_n(s,y') L(ds,dy')
\label{wave_dl6}
\end{align}
for any $n\in\mathbb{N}_{\geq 0}$.
\end{itemize}
Defining $v_{-1}(t,x)=0$, we see that
\[
v_{n+1}(t,x)-v_n(t,x)=\int_r^t \int_{\mathbb{R}}G_{t-s}(x-y') \big[ v_n(s,y')-v_{n-1}(s,y')\big] L(ds,dy').
\]
for any $n\in\mathbb{N}_{\geq 0}$, $t\geq r$, and $x \in \mathbb{R}$.
Then, we can deduce from Proposition \ref{Prop:Rosen} with \eqref{FSol}
and \eqref{factsG} that
\noindent
\begin{align}
\begin{aligned}
&\mathbb{E}\big[ |v_{n+1}(t,x)-v_n(t,x)|^p \big] \\
&\quad
\leq C_p(t) \int_r^t \int_{\mathbb{R}}G_{t-s}^p(x-y') \mathbb{E} \big[ |v_n(s,y')-v_{n-1}(s,y')|^p \big] ds dy' \\
&\quad
\leq C_p(t) 2^{1-p}t \int_r^t \bigg(\sup_{y' \in \mathbb{R}} \mathbb{E}\big[ |v_n(s,y')-v_{n-1}(s,y')|^p\big] \bigg) ds .
\end{aligned}
\label{wave_dl7}
\end{align}
\noindent
Letting $H_n(t):= \sup\big\{ \mathbb{E}\big[ |v_n(t,x)-v_{n-1}(t,x)|^p\big]: x\in\mathbb{R}\big\}$, we obtain from \eqref{wave_dl7} that
\begin{align}
H_{n+1}(t)\leq C_p(T) T 2^{1-p} \int_r^t H_n(s) ds \,\,\, \mbox{for all $t \in [r,T]$.}
\label{wave_dl8}
\end{align}
Note that
\begin{align}
M:=\sup_{t \in [r,T]}H_0(t)=\sup_{t \in [r,T]}\sup_{x\in \mathbb{R}}G_{t-r}^p(x-y) |z|^p = 2^{-p}|z|^p.
\label{wave_dl9}
\end{align}
Therefore, iterating \eqref{wave_dl8} with \eqref{wave_dl9} yields
\noindent
\begin{align}
H_{n+1}(t) \leq \frac{ \big( C_p(T) T 2^{1-p} \big)^{n+1} t^{n+1} }{(n+1)!} M
\,\,\,\text{for $t\in[r, T]$},
\notag
\end{align}
\noindent
and thus,
\begin{align}
\sum_{n\geq 0} \sup_{(t,x)\in[r, T]\times\mathbb{R}} \| v_n(t,x)-v_{n-1}(t,x) \|_p \leq C_{T,p} |z| <\infty,
\label{wave_dl9a}
\end{align}
\noindent
where $C_{T,p} $ is a constant that depends on $T$ and $p$.
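For instance, writing $b := C_p(T)\, T^2\, 2^{1-p}$ and recalling \eqref{wave_dl9}, one admissible choice is
\[
C_{T,p} = \frac{1}{2} \sum_{n\geq 0} \Big( \frac{b^n}{n!} \Big)^{\frac1p},
\]
which is a finite quantity by the ratio test.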
This proves that $\{v_n(t,x)\}_{n\geq 1}$ is Cauchy in $L^p(\Omega)$,
uniformly in $(t,x) \in [r,T] \times \mathbb{R}$.
Its limit $v$ is the unique solution to \eqref{wave_dl} with
\noindent
\begin{align}
\sup_{(t,x)\in [r,T]\times \mathbb{R}}\|v(t,x)\|_p \leq C_{T,p}|z|.
\label{wave_dl9b}
\end{align}
The case $p=2$ gives exactly the first part of (i). For the other part with general $p\geq 2$,
the uniform bound \eqref{wave_dl3} is exactly \eqref{wave_dl9b},
since the bound in \eqref{wave_dl9b} does not depend on $r$ or $y$.
\medskip
(ii) From part (i), we know that $v(t,x)$ is the $L^2(\Omega)$-limit of $v_{n+1}(t,x)$
as $n\to\infty$. We will show that $v_{n+1}(t,x)$ lives in finitely many chaoses
with some explicit expression for each $n$, and then
the chaos expansion \eqref{wave_dl4} for $v(t,x)$ follows by
sending $n$ to infinity.
Recall $v_0(t,x) = G_{t-r}(x-y) z$ and
\noindent
\begin{align}
v_{n+1}(t,x) = G_{t-r}(x-y) z + \delta(V_{t,x,n}),
\label{wave_dl6b}
\end{align}
\noindent
where
\begin{align}
V_{t,x,n}(s, y', z') := \mathbf 1_{(r,t)}(s)G_{t-s}(x-y') v_n(s, y')z' .
\label{def_Vn}
\end{align}
\noindent
In what follows, we first show that for each $n\in\mathbb{Z}_{\geq -1}$,
$v_{n+1}(t,x)$ admits the following chaos expansion
\begin{align}
v_{n+1}(t,x) = G_{t-r}(x-y)z + \sum_{k=1}^{n+1} I_k( G_{t,x,k+1}(r, y, z; \bullet ) ),
\label{claimV}
\end{align}
\noindent
where
$G_{t,x,k+1}(r, y, z; \bullet)$ is as in \eqref{KER:F2}.
To prove \eqref{claimV}, we proceed with mathematical induction.
The base case where $n=-1$ is trivial.
And for the case where $n=0$, we deduce from \eqref{wave_dl6} and the base case
that
\begin{align*}
v_1(t,x) &= G_{t-r}(x-y)z + \int_r^t \int_\mathbb{R} \int_{\mathbb{R}_0} G_{t-s}(x-y') G_{s-r}(y'-y) z'
\widehat{N}(ds, dy', dz') \\
& = G_{t-r}(x-y)z + I_1(G_{t, x, 2}(r, y, z; \bullet)).
\end{align*}
That is, the claim \eqref{claimV} also holds for $n=0$.
Now assume \eqref{claimV}
holds for $n = m$ with $m\geq 0$.
Then, we write by using \eqref{wave_dl6b} with \eqref{def_Vn},
and the induction hypothesis
that
\noindent
\begin{align}
\begin{aligned}
v_{m+2}(t,x)
&= G_{t-r}(x-y)z + \delta\big( V_{t,x,m+1} \big)
\end{aligned}
\notag
\end{align}
with
\noindent
\begin{align}
\begin{aligned}
V_{t,x,m+1}(s, y', z')
&= \mathbf 1_{(r,t)}(s)G_{t-s}(x-y') z' v_{m+1}(s, y') \\
&= \mathbf 1_{(r,t)}(s)G_{t-s}(x-y') z'
\Big[ G_{s-r}(y'-y)z + \sum_{k=1}^{m+1} I_k\big( G_{s,y' ,k+1}(r, y, z; \bullet ) \big) \Big] \\
&= \mathbf 1_{\{r < s < t \}} G_{t,x, 2}(r, y, z; s, y', z') \\
&\qquad + \sum_{k=1}^{m+1}
I_k\big( \mathbf 1_{(r,t)}(s)G_{t-s}(x-y') z' \widetilde{G}_{s,y' ,k+1}(r, y, z; \bullet ) \big),
\end{aligned}
\label{wave_dl10}
\end{align}
\noindent
where $\widetilde{G}_{s,y' ,k+1}(r, y, z; \bullet ) $ denotes the symmetrization of
the function
$G_{s,y' ,k+1}(r, y, z; \bullet )$.
Note that the kernel of the $k$-th multiple integral in \eqref{wave_dl10}
can be rewritten as follows:
\noindent
\begin{align}
\begin{aligned}
& \mathbf 1_{(r,t)}(s)G_{t-s}(x-y') z' \widetilde{G}_{s,y' ,k+1}(r, y, z; \pmb{t_k},\pmb{y_k}, \pmb{z_k}) \\
&\quad
= G_{t-s}(x-y') z' \frac{1}{k!} \sum_{\sigma\in\mathfrak{S}_k}
G_{s- t_{\sigma(k)}}(y' - y_{\sigma(k)} ) z_{\sigma(k)} \\
&\qquad\quad\cdot G_{t_{\sigma(k)} -t_{\sigma(k-1)} }( y_{\sigma(k)}-y_{\sigma(k-1)} ) z_{\sigma(k-1)} \cdots G_{t_{\sigma(1)} - r }( y_{\sigma(1)} - y ) z \\
&= \frac{1}{k!} \sum_{\pi\in\mathfrak{S}_{k}}
G_{t- t_{\pi(k+1)}}(x - y_{\pi(k+1)} ) z_{\pi(k+1)} \\
&\qquad\quad\cdot G_{t_{\pi(k+1)} -t_{\pi(k)} }( y_{\pi(k+1)}-y_{\pi(k)} ) z_{\pi(k)} \cdots G_{t_{\pi(1)} - r }( y_{\pi(1)} - y ) z
\end{aligned}
\label{wave_dl11}
\end{align}
\noindent
with $(t_{\pi(k+1)}, y_{\pi(k+1)}, z_{\pi(k+1)} ) = (s, y', z')$
and the convention \eqref{convention}, where we point out that
the second sum in \eqref{wave_dl11} can be viewed
as a sum running over all permutations $\pi\in\mathfrak{S}_{k+1}$
such that $ t_{\pi(k+1)}=s$ is the biggest time among all
$\{ t_{\pi(j)}: j=1, ..., k+1\}$. Therefore, the symmetrization
of the function \eqref{wave_dl11}
\noindent
\begin{align*}
&(s, y', z', \pmb{t_k},\pmb{y_k}, \pmb{z_k} )
\equiv ( \pmb{t_{k+1} }, \pmb{y_{k+1}}, \pmb{z_{k+1}} ) \\
&\qquad\qquad
\longmapsto
\mathbf 1_{(r,t)}(s)G_{t-s}(x-y') z' \widetilde{G}_{s,y' ,k+1}(r, y, z; \pmb{t_k},\pmb{y_k}, \pmb{z_k})
\end{align*}
coincides with $\widetilde{G}_{t,x, k+2}(r,y,z; \pmb{t_{k+1} }, \pmb{y_{k+1}}, \pmb{z_{k+1}} )$.
As a consequence, we deduce from \eqref{wave_dl10} and
Lemma \ref{lem:dl} that
$V_{t, x, m+1}\in\textup{dom}(\delta)$
with
\[
\delta\big( V_{t,x,m+1} \big) = \sum_{k=1}^{m+2} I_k\big( \widetilde{G}_{t,x, k+1}(r,y,z; \bullet ) \big)
=\sum_{k=1}^{m+2} I_k\big( G_{t,x, k+1}(r,y,z; \bullet ) \big).
\]
Hence, we just proved that the claim \eqref{claimV} holds for
$n=m+1$, and thus for all $n$.
Then, the proof of part (ii) is concluded by
sending $n$ to infinity.
Finally, the proof of Proposition \ref{prop:dl} is completed.
\qedhere
\end{proof}
\subsection{Estimates of Malliavin derivatives} \label{SUB32}
In this subsection, our goal is to derive the $L^p(\Omega)$-bound
for the Malliavin derivatives of the solution to the hyperbolic
Anderson model \eqref{wave}.
From the chaos expansion \eqref{u1} with \eqref{KER:F},
we deduce that
\noindent
\begin{align}
\label{chaos-D}
D_{r,y,z}u(t,x) = \sum_{n\geq 1}n I_{n-1}\big(\widetilde{F}_{t,x,n}(r, y, z, \bullet)\big),
\end{align}
where $\widetilde{F}_{t,x,n}(r,y,z, \pmb{t_{n-1}},\pmb{x_{n-1}},\pmb{z_{n-1}})$
is obtained by first symmetrizing the kernel $F_{t,x,n}$ and then
putting $(r,y,z)$ in any of the $n$ `arguments'.\footnote{Here we view $(r,y,z)\in Z$
as one argument.}
It is not difficult to see that
\noindent
\begin{align}
\widetilde{F}_{t,x,n}(r,y,z, \bullet )
=\frac{1}{n} \sum_{j=1}^{n} H_{t,x,n}^{(j)}(r,y,z; \bullet ),
\notag
\end{align}
\noindent
where $H_{t,x,n}^{(j)}(r,y,z; \bullet)$ is the symmetrization of the function
$F_{t,x,n}^{(j)}(r,y,z; \bullet)$ given by
\noindent
\begin{align}
\begin{aligned}
F_{t,x,n}^{(j)}(r,y,z; \pmb{t_{n-1}, x_{n-1}},\pmb{z_{n-1}})
&= G_{t-t_{n-1}}(x-x_{n-1})z_{n-1}\ldots
G_{t_j-r}(x_j-y)z \\
&\qquad \cdot G_{r-t_{j-1}}(y-x_{j-1})z_{j-1}\ldots G_{t_2-t_1}(x_2-x_1)z_1;
\end{aligned}
\notag
\end{align}
that is, $F_{t,x,n}^{(j)}(r,y,z; \bullet)$ is obtained from $F_{t,x,n}$ by
putting $(r,y,z)$ at the $j$-th argument.
And it follows immediately that
\noindent
\begin{align}
F_{t,x,n}^{(j)}(r,y,z; \bullet )= F_{r,y,j-1} \otimes G_{t,x, n-j+1}(r,y,z; \bullet )
\label{decomp1}
\end{align}
with $G_{t,x,n-j+1}(r,y,z; \bullet )$ as in \eqref{KER:F2};
see also \cite[page 784]{BNQSZ}.
With the above notations, we can write
\begin{align}
\begin{aligned}
D_{r,y, z} u(t,x)
& = \sum_{n\geq 1} \sum_{j=1}^{n} I_{n-1}\big( F_{t,x,n}^{(j)}(r,y,z;\bullet)\big) \\
&= \sum_{n\geq 1} \sum_{j=1}^{n}
I_{n-1}\big( F_{r,y,j-1} \otimes G_{t,x,n-j+1}(r,y,z; \bullet ) \big).
\end{aligned}
\label{D-chaos}
\end{align}
Similarly, we can obtain the following chaos expansion
for the second Malliavin derivative:
for $r_1 < r_2 \leq t$,
\noindent
\begin{align}
\begin{aligned}
&D^2_{\pmb{r_2}, \pmb{y_2}, \pmb{z_2}} u(t, x)
\equiv D_{r_1, y_1, z_1} D_{r_2, y_2, z_2} u(t, x) \\
&\quad = \sum_{n= 2}^\infty \sum_{1\leq i < j \leq n}
I_{n-2} \big( F_{r_1, y_1, i-1} \otimes G_{r_2, y_2, j- i}(r_1, y_1,z_1; \bullet)
\otimes G_{t,x, n-j+1}(r_2, y_2,z_2; \bullet) \big);
\end{aligned}
\label{D2-chaos}
\end{align}
\noindent
while for $r_2 < r_1 \leq t$, we can get a similar equality by noting that
$D^2_{\pmb{r_2}, \pmb{y_2}, \pmb{z_2}} u(t, x) $ is almost surely
symmetric in those two arguments $(r_1, y_1, z_1)$ and $(r_2, y_2,z_2)$.
Now we are ready to state the main result in this subsection.
\begin{proposition} \label{prop:dec}
Suppose $m_2<\infty$ as in \eqref{m2}.
Then, $u(t,x) \in \textup{dom}(D^2)$ \textup{(}i.e. twice Malliavin differentiable\textup{)}
and
the following statements hold.
\smallskip
\noindent
{\rm (i)} Fix $(r, y,z)\in (0,t]\times\mathbb{R}\times\mathbb{R}_0$ and recall
the notation $v^{(r,y,z)}$ from Proposition \ref{prop:dl}.
Then,
\noindent
\begin{align}
D_{r,y,z}u(t,x)=u(r,y)v^{(r,y,z)}(t,x) \,\,\, \text{almost surely.}
\label{dec1}
\end{align}
\smallskip
\noindent
{\rm (ii)} Fix $(r_1,y_1,z_1), (r_2, y_2, z_2)\in \mathbb{R}_+\times\mathbb{R}\times\mathbb{R}_0$
with $r_1 < r_2\leq t$. Then,
\noindent
\begin{align}
D_{r_1,y_1,z_1} D_{r_2, y_2, z_2} u(t,x)
=u(r_1, y_1) v^{(r_1,y_1,z_1)}(r_2, y_2) v^{(r_2, y_2,z_2)}(t,x).
\label{dec2}
\end{align}
\smallskip
\noindent{\rm (iii)} Let $m_p<\infty$ for some finite $p\geq 2$ as in \eqref{mp}
and let $T \in(0, \infty)$.
Then, for any $0< r < t \leq T$ and for any $(y,z)\in \mathbb{R}\times\mathbb{R}_0$,
we have
\noindent
\begin{align}
\|D_{r,y,z}u(t,x)\|_p \leq C'_{T,p} G_{t-r}(x-y)|z|,
\label{D1est}
\end{align}
\noindent
where $C'_{T,p}$ is some constant that
only depends on $(T, p)$; see \eqref{CKTP}.
For any $(r_1,y_1,z_1)$, $(r_2, y_2, z_2)\in \mathbb{R}_+\times\mathbb{R}\times\mathbb{R}_0$,
we have
\noindent
\begin{align}
\begin{aligned}
&\|D_{r_1,y_1,z_1 } D_{r_2, y_2,z_2} u(t,x) \|_p \\
&\quad \leq C''_{T,p} |z_1z_2| \times
\begin{cases}
G_{t-r_2}(x-y_2)G_{r_2-r_1}(y_2-y_1) & \mbox{if $r_1<r_2$} \\
G_{t-r_1}(x-y_1)G_{r_1-r_2}(y_1-y_2) & \mbox{if $r_2< r_1$,}
\end{cases}
\end{aligned}
\label{D2est}
\end{align}
\noindent
where $C''_{T,p}$ is some constant that
only depends on $(T, p)$; see \eqref{CKTP2}.
\end{proposition}
\begin{remark}\label{rem:dec}\rm
(a)
Note that in part (iii), the assumption \eqref{mp} is used to
guarantee the uniform $L^p(\Omega)$-bounds of $v^{(r, y,z)}$
that are further applied in the steps
\eqref{CKTP} and \eqref{CKTP2},
although this assumption is not reflected
in the expression of the bounds \eqref{D1est} and \eqref{D2est}.
\smallskip
\noindent
(b) The upper bounds in \eqref{D1est}-\eqref{D2est} are optimal in the sense
that we can get matching lower bounds. More precisely, using the orthogonality relation
\eqref{int3c} and \eqref{chaos-D},
we can get
\begin{align*}
\| D_{r,y,z} u(t,x) \|_2 \geq G_{t-r}(x-y)|z|;
\end{align*}
and similarly, we can get (with the convention \eqref{convention} in mind)
\[
\| D^2_{\pmb{r_2,y_2,z_2}} u(t,x) \|_2 \geq \big[G_{t-t_1}(x-y_1) G_{t_1-t_2}(y_1-y_2)
+ G_{t-t_2}(x-y_2) G_{t_2-t_1}(y_2-y_1) \big] \cdot |z_1z_2|.
\]
\end{remark}
\begin{proof}[Proof of Proposition \ref{prop:dec}]
We first prove the decomposition \eqref{dec1} in part (i).
Recall the chaos expansion \eqref{D-chaos}. Note that
the kernels $F_{r, y, j-1}$ and $G_{t,x, n-j+1}(r, y, z; \bullet)$ in
\eqref{D-chaos} and \eqref{decomp1} have disjoint temporal supports,
which implies immediately that
\noindent
\begin{align}
\begin{aligned}
\widetilde{F}_{r, y, j-1} \star^0_k \widetilde{G}_{t,x, n-j+1}(r, y, z; \bullet) & =0 \\
\widetilde{F}_{r, y, j-1} \star^1_k \widetilde{G}_{t,x, n-j+1}(r, y, z; \bullet) & =0
\end{aligned}
\label{dec3}
\end{align}
for $1\leq k \leq (j-1) \wedge (n-j)$, where $\widetilde{G}_{t,x, n-j+1}(r, y, z; \bullet)$
denotes the symmetrization of $G_{t,x, n-j+1}(r, y, z; \bullet)$ given by \eqref{KER:F2}.
Thus, we can deduce from \eqref{D-chaos},
Proposition \ref{prop:prod2} with \eqref{dec3}, \eqref{u1}, and
\eqref{wave_dl4} in Proposition \ref{prop:dl} that
\noindent
\begin{align}
\begin{aligned}
D_{r,y, z} u(t,x)
&= \sum_{n\geq 1} \sum_{j=1}^{n}
I_{j-1}( F_{r,y,j-1} ) I_{n-j}\big( G_{t,x,n-j+1}(r,y,z; \bullet ) \big) \\
&= \bigg( \sum_{j=1}^\infty I_{j-1}( F_{r,y,j-1} ) \bigg)
\sum_{n\geq 0} I_{n}\big( G_{t,x,n+1}(r,y,z; \bullet ) \big) \\
&= u(r, y) \cdot v^{(r, y,z)}(t,x).
\end{aligned}
\notag
\end{align}
That is, the decomposition \eqref{dec1} holds. Moreover,
due to the disjoint temporal supports of $F_{r, y, j-1}$ and $G_{t,x, n-j+1}(r, y, z; \bullet)$,
we obtain that the random variables $u(r,y)$ and $v^{(r, y,z)}(t,x)$
are independent, and thus, together with
the uniform bound \eqref{wave_dl3} in Proposition \ref{prop:dl},
we can further get
\noindent
\begin{align}
\begin{aligned}
\|D_{r,y, z} u(t,x) \|_p &= \| u(r,y) \|_p \|v^{(r, y,z)}(t,x)\|_p \\
&\leq K_p(T) C_{T,p} G_{t-r}(x-y) |z|,
\end{aligned}
\label{CKTP}
\end{align}
\noindent
where $C_{T,p}$ and $K_p(T)$ are as in \eqref{wave_dl9a} and
\eqref{KPT} respectively. This proves the bound \eqref{D1est}
in part (iii) with $C'_{T,p} = K_p(T) C_{T,p}$.
\medskip
Next, we prove \eqref{dec2} in part (ii). Similarly,
we can rewrite the chaos expansion \eqref{D2-chaos} with $r_1 < r_2$
as follows:
\noindent
\begin{align}
\begin{aligned}
&D_{r_1, y_1, z_1} D_{r_2, y_2, z_2} u(t, x) \\
& = \sum_{n= 2}^\infty \sum_{1\leq i < j \leq n}
I_{i-1} \big( F_{r_1, y_1, i-1} \big)
I_{j-i-1} \big( G_{r_2, y_2, j- i}(r_1, y_1,z_1; \bullet) \big) \\
&\qquad\qquad\cdot
I_{n-j} \big( G_{t,x, n-j+1}(r_2, y_2,z_2; \bullet) \big) \\
&= \bigg( \sum_{i\geq 1} I_{i-1} \big( F_{r_1, y_1, i-1} \big) \bigg)
\bigg( \sum_{j\geq 0} I_{j} \big( G_{r_2, y_2, j+1}(r_1, y_1,z_1; \bullet)\big) \bigg) \\
&\qquad\qquad \cdot \sum_{n\geq 0}I_{n} \big( G_{t,x, n+1}(r_2, y_2,z_2; \bullet) \big) \\
&= u(r_1, y_1) v^{(r_1, y_1, z_1)}(r_2, y_2) v^{(r_2, y_2, z_2)}(t, x),
\end{aligned}
\notag
\end{align}
which is exactly the decomposition \eqref{dec2} in part (ii).
And it is also clear that the random variables
$u(r_1, y_1)$, $v^{(r_1, y_1, z_1)}(r_2, y_2)$,
and $v^{(r_2, y_2, z_2)}(t, x)$
are independent.
Therefore, we deduce from \eqref{KPT} and \eqref{wave_dl9a}
that
\begin{align}
\begin{aligned}
&\|D_{r_1, y_1, z_1} D_{r_2, y_2, z_2} u(t, x) \|_p \\
&\quad
=\| u(r_1, y_1)\|_p \| v^{(r_1, y_1, z_1)}(r_2, y_2) \|_p \| v^{(r_2, y_2, z_2)}(t, x)\|_p \\
&\quad
\leq K_p(T) C^2_{T,p} G_{r_2- r_1}(y_2-y_1) G_{t-r_2}(x-y_2) |z_1 z_2|.
\end{aligned}
\label{CKTP2}
\end{align}
This proves \eqref{D2est} with $C''_{T,p} = K_p(T) C^2_{T,p}$ when $r_1 < r_2$.
Note that when $r_1 > r_2$, the proof is identical and thus omitted.
Hence the proof of Proposition \ref{prop:dec} is completed.
\qedhere
\end{proof}
\section{Proof of main results} \label{SEC4}
\subsection{Spatial ergodicity}
We first establish the following strict stationarity.
\begin{lemma} \label{lem:stat}
Let $t>0$ and $(x_1,\ldots,x_k,y) \in \mathbb{R}^{k+1}$. Then,
\begin{align}
\big( u(t,x_1),\ldots,u(t,x_k) \big) \stackrel{\rm(law)}{=} \big( u(t,x_1+y),\ldots,u(t,x_k+y) \big).
\label{stat1}
\end{align}
\end{lemma}
\begin{proof} To show \eqref{stat1}, it suffices to prove
\begin{align}
\sum_{i=1}^k c_i u(t,x_i) \stackrel{\rm(law)}{=} \sum_{i=1}^k c_i u(t, x_i +y)
\label{stat2}
\end{align}
for any $(c_1, ... , c_k)\in\mathbb{R}^k$. By a limiting argument with \eqref{u1}-\eqref{KER:F},
we can reduce the verification of \eqref{stat2}
to showing
\begin{align}
\sum_{i=1}^k c_i \sum_{n=1}^M I_n(F_{t,x_i,n}) \stackrel{\rm(law)}{=} \sum_{i=1}^k c_i \sum_{n=1}^M I_n(F_{t, x_i+y,n})
\notag
\end{align}
for any $(c_1, ... , c_k)\in\mathbb{R}^k$ and for any $M\in\mathbb{N}_{\geq 1}$.
Note that
\[
F_{t,x +y ,n}(\pmb{t_n},\pmb{x_n}, \pmb{z_n})
= F_{t,x,n}(\pmb{t_n},\pmb{x_n}-y, \pmb{z_n})
\]
with $\pmb{x_n}-y := (x_1 - y, x_2 - y, ..., x_n - y)$. This motivates us to
define a Poisson random measure $N_y$ on $Z$
by setting
\[
N_y(A\times B \times C) = N( A \times B_y \times C)
\quad{\rm with}\quad B_y:= \{ b-y: b\in B\}
\]
for every $(A, B, C)\in \mathcal{B}(\mathbb{R}_+)\times \mathcal{B}(\mathbb{R}) \times \mathcal{B}(\mathbb{R}_0)$.
Then, it follows from the translational invariance of Lebesgue measure that
\begin{align}
N_y \stackrel{\rm(law)}{=} N.
\label{stat4}
\end{align}
Let $I^y_n$ denote the $n$-th multiple integral with respect to the compensated
version of $N_y$; see Subsection \ref{SUB22}.
Therefore, we deduce from the definition of multiple integrals with
\eqref{int3a}, \eqref{int3b}, and \eqref{int4} that
\begin{align*}
\sum_{i=1}^k c_i \sum_{n=1}^M I_n(F_{t, x_i+y,n})
& = \sum_{i=1}^k c_i \sum_{n=1}^M I^y_n(F_{t, x_i,n}) \\
&\stackrel{\rm(law)}{=} \sum_{i=1}^k c_i \sum_{n=1}^M I_n(F_{t, x_i,n}),
\end{align*}
\noindent
where the last step is a consequence of \eqref{stat4}.
Hence the proof of Lemma \ref{lem:stat} is completed now.
\qedhere
\end{proof}
The above Lemma \ref{lem:stat}
indicates that $\{u(t,x)\}_{x\in\mathbb{R}}$ is strictly stationary for every $t\in\mathbb{R}_+$.
The main goal of this subsection is to show the (spatial)
ergodicity of $\{u(t,x)\}_{x\in\mathbb{R}}$, and thus
answer the question \eqref{quest} affirmatively.
To achieve this goal,
we exploit a criterion from \cite{CKNP21} (see Lemma \ref{lem:CKNP})
and take advantage of tools from Malliavin calculus on the Poisson space.
In particular, we need the $L^p(\Omega)$-bound \eqref{D1est}
for the Malliavin derivatives of the solution $u(t,x)$.
Let us first recall the following criterion from \cite[Lemma 7.2]{CKNP21}.
\begin{lemma}\label{lem:CKNP}
A strictly stationary process $\{ Y(x)\}_{x\in\mathbb{R}}$
is ergodic provided that
\begin{align}
\lim_{R\to\infty} \frac{1}{R^2}
\textup{Var}\bigg( \int_{-R}^R \prod_{j=1}^k g_j\big( Y(x + \zeta_j) \big) dx \bigg)
=0
\label{ERG1}
\end{align}
\noindent
for all integer $k\geq 1$, every $\zeta_1, ... , \zeta_k\in\mathbb{R}$,
and all bounded Lipschitz continuous functions $g_1, ... , g_k : \mathbb{R}\to\mathbb{R}$.
\end{lemma}
In fact, the statement in \cite[Lemma 7.2]{CKNP21} requires the functions
$g_j$'s to be Lipschitz continuous with $g_j(0) = 0$ and $\textup{Lip}(g_j) =1$
for each $j$.
But as one can easily see from the proof of \cite[Lemma 7.2]{CKNP21},
\begin{itemize}
\item[(i)]
\eqref{ERG1} holds true if $g_j$'s are merely Lipschitz
continuous;
\item[(ii)] the final step in the proof of \cite[Lemma 7.2]{CKNP21} proceeds
by taking the $g_j$'s to be sine and cosine functions
in order to build complex exponentials and conclude the proof
with von Neumann's $L^2$ version of the ergodic theorem.
\end{itemize}
This explains why we only need to consider
bounded Lipschitz continuous functions $g_j$'s in Lemma \ref{lem:CKNP}.
\begin{proof}[Proof of Theorem \ref{thm:main} \textup{(i)}]
By Lemma \ref{lem:stat}, $\{ u(t,x)\}_{x\in\mathbb{R}}$ is strictly stationary.
Then, we need to verify the condition \eqref{ERG1} in Lemma \ref{lem:CKNP}
to show the spatial ergodicity.
\smallskip
Recall from \eqref{FSol} that $G_t(x) = \tfrac{1}{2} \mathbf 1_{\{ |x| < t \}}$,
and we assume $m_p < \infty$ for any finite $p\geq 2$.
By the Poincar\'e inequality \eqref{Poi1}, Lemma \ref{lem:Fubi}, and Minkowski's inequality, we can first write
\noindent
\begin{align}
\begin{aligned}
& \textup{Var}\bigg( \int_{-R}^R \prod_{j=1}^k g_j\big( u(t, x + \zeta_j) \big) dx \bigg) \\
&\quad
\leq \mathbb{E} \bigg[ \Big\| \int_{-R}^R D \prod_{j=1}^k g_j\big( u(t, x + \zeta_j) \big) dx \Big\|_{L^2(Z,\mathcal{Z}, m)}^2 \bigg]\\
&\quad
= \int_{(0,t)\times\mathbb{R}\times\mathbb{R}_0}
\bigg\| \int_{-R}^R D_{s, y, z} \prod_{j=1}^k g_j\big( u(t, x + \zeta_j) \big) dx \bigg\|^2_2
dsdy \nu(dz) \\
&\quad
\leq
\int_{(0,t)\times\mathbb{R}\times\mathbb{R}_0}
\bigg( \int_{-R}^R \Big\| D_{s, y, z} \prod_{j=1}^k g_j\big( u(t, x + \zeta_j) \big) \Big\|_2 dx
\bigg)^2 dsdy \nu(dz),
\end{aligned}
\label{ERG2}
\end{align}
\noindent
where the equality in \eqref{ERG2} follows essentially
from the fact that $D_{r, y, z}u(t,x)=0$ when $r\geq t$;
and this fact can be derived easily
from the explicit chaos expansion \eqref{D-chaos}.
Now we can deduce from Remark \ref{rem:add1} (iii)-(iv), Proposition \ref{prop:dec},
and the assumption $m_p<\infty$
that
$ g_j ( u(t, x + \zeta_j) ) \in \mathcal{A}$ for each $j$,
and thus the product of these functions
also belongs to $\mathcal{A}$.
Moreover, it follows from \eqref{F12} and Proposition \ref{prop:dec}
that
\noindent
\begin{align*}
\| D_{s, y, z} \prod_{j=1}^2 g_j\big( u(t, x + \zeta_j) \big) \|_p
&\lesssim G_{t-s}(x+ \zeta_1 - y)|z| + G_{t-s}(x+ \zeta_2 - y)|z| \\
&\quad + G_{t-s}(x+ \zeta_1 -y) G_{t-s}(x+ \zeta_2 -y) |z|^2;
\end{align*}
and a further induction yields
\begin{align}
\| D_{s, y, z} \prod_{j=1}^k g_j\big( u(t, x + \zeta_j) \big) \|_p
\lesssim \sum_{J\subset \{ 1, ... ,k\}} \mathbf 1_{\{ | J| \geq 1 \}} |z|^{|J|} \prod_{j\in J} G_{t-s}(x+ \zeta_j - y).
\label{ERG3}
\end{align}
Therefore, together with the fact that $G_t$ is uniformly bounded
and that $m_p <\infty$ for any finite $p\geq 2$,
it follows from \eqref{ERG2} and \eqref{ERG3} that
we can reduce the proof of \eqref{ERG1} to
showing for any $\zeta\in\mathbb{R}$ that
\noindent
\begin{align}
\frac{1}{R^2} \int_{\mathbb{R}_+\times\mathbb{R}}
\bigg( \int_{-R}^R G_{t-s}(x+\zeta -y) dx \bigg)^2 dsdy \to 0 \,\, \text{as $R\to\infty$.}
\label{ERG4}
\end{align}
It is clear that with $\varphi_{t,R}$ as in \eqref{Rosen6b}, the first bound in \eqref{Rosen7c}
(applied with $s=0$, after using the translation invariance of the $dy$-integral) gives
\begin{align}
\begin{aligned}
\text{LHS of \eqref{ERG4}}
&= \frac{1}{R^2} \int_0^t \int_\mathbb{R} \varphi^2_{t,R}(s, \zeta -y) ds dy
\leq \frac{4t^3}{3R}
\ \to 0 \,\,\, \text{as $R\to\infty$.}
\end{aligned}
\notag
\end{align}
\noindent
This proves \eqref{ERG4} and hence the spatial ergodicity of $\{u(t,x)\}_{x\in\mathbb{R}}$.
\qedhere
\end{proof}
\medskip
\subsection{Central limit theorems} \label{SUB42}
Recall from \eqref{FRT} the definition of the spatial integral
$F_R(t)$. In view of Lemma \ref{lem:Fubi}, we can write
\begin{align}
F_R(t) =\int_{-R}^R \big[ u(t,x) -1 \big] dx
= \sum_{n=1}^\infty I_n\bigg( \int_{-R}^R F_{t,x, n} dx \bigg)
\notag
\end{align}
with $F_{t,x,n }$ as in \eqref{KER:F}.
This subsection is divided into three parts: in Part $\hspace{0.5mm}\text{I}\hspace{0.5mm}$,
we establish the limiting covariance structure of the process
$\{F_R(t)\}_{t\in\mathbb{R}_+}$, and in particular we get the limiting
variance at fixed time $t > 0$ that will be used
in Part $\text{I \hspace{-2.8mm} I} $;
and Part $\text{I \hspace{-2.8mm} I} $ is devoted to the proof of Theorem \ref{thm:main} \rm (ii),
while we prove the functional CLT (Theorem \ref{thm:main} \rm (iii))
in Part $\text{I \hspace{-2.9mm} I \hspace{-2.9mm} I}$.
\medskip
\noindent
$\bullet$ {\bf Part $\hspace{0.5mm}\text{I}\hspace{0.5mm}$: Limiting covariance structure.}
The arguments towards the limiting covariance structure are
similar to those in \cite[Subsection 4.1.1]{BNQSZ}, and thus
we only sketch the key steps below.
We begin with the covariance of $u(t,x)$ and $u(s,y)$:
\begin{align}
\begin{aligned}
\mathbb{E}[ u(t,x) u(s,y) ] -1
& = \sum_{n\geq 1} n! \langle \widetilde{F}_{t,x,n}, \widetilde{F}_{s,y,n}\rangle_\mathfrak{H} \\
& = \sum_{n\geq 1} n! m_2^n \langle \widetilde{f}_{t,x,n},
\widetilde{f}_{s, y,n}\rangle_{L^2(\mathbb{R}_+\times\mathbb{R})^{\otimes n}},
\end{aligned}
\label{COV1}
\end{align}
\noindent
where $f_{t,x,n}$, given as in \cite[equations (1.7), (1.8)]{BNQSZ}, is determined
by
\begin{align}
F_{t,x,n}(\pmb{t_n}, \pmb{x_n},\pmb{z_n}) = f_{t,x, n}(\pmb{t_n}, \pmb{x_n})
\prod_{j=1}^n z_j .
\label{KER:f}
\end{align}
Note that the inner product on $L^2(\mathbb{R}_+\times\mathbb{R})$ corresponds
to the case in \cite[Subsection 4.1.1]{BNQSZ},
where the temporal and spatial correlation kernels are both Dirac
masses at zero (i.e. $\gamma_0 = \gamma = \delta_0$, which corresponds
to the space-time Gaussian white noise on $\mathbb{R}_+\times\mathbb{R}$).
Let us continue with \eqref{COV1} and write as in \cite[page 797]{BNQSZ}
that
\noindent
\begin{align}
\begin{aligned}
\mathbb{E}[ u(t,x) u(s,y) ] -1
& = \sum_{n\geq 1} \frac{m_2^n}{n!} \Phi_n(t,s; x-y),
\end{aligned}
\label{COV2}
\end{align}
\noindent
where
\begin{align}
\Phi_n(t,s; x-y) : = (n!)^2
\langle \widetilde{f}_{t,x,n}, \widetilde{f}_{s, y,n}\rangle_{L^2(\mathbb{R}_+\times\mathbb{R})^{\otimes n}} \geq 0
\label{COV2b}
\end{align}
\noindent
depends on $(x,y)$ via their difference, which is a consequence
of the stationarity (see Lemma \ref{lem:stat}). It follows
from \eqref{COV2} and dominated convergence theorem
that
\noindent
\begin{align}
\begin{aligned}
\frac{1}{R}\mathbb{E}\big[ F_R(t) F_R(s) \big]
& = \sum_{n\geq 1} \frac{2 m_2^n}{n!} \int_\mathbb{R} \frac{ \textup{Leb}( [-R, R] \cap [-R - \textup{w}, R -\textup{w} ]) }{2R} \Phi_n(t,s; \textup{w} ) d\textup{w} \\
&\longrightarrow
\sum_{n\geq 1} \frac{2 m_2^n}{n!} \int_\mathbb{R} \Phi_n(t,s; \textup{w} ) d\textup{w},
\end{aligned}
\label{COV3}
\end{align}
\noindent
provided the last series is finite. In the case where $\gamma_0 = \gamma = \delta_0$,
we have $\gamma(\mathbb{R}) =1$, $\Gamma_t : = \int_{-t}^t \gamma_0(s)ds =1$,
and the spectral density $\varphi$ associated with $\gamma$ is identically one.
Therefore, similar to the expression on page 801 in \cite{BNQSZ},
we get
\noindent
\begin{align}
\begin{aligned}
\sum_{n\geq 1} \frac{2 m_2^n}{n!} \int_\mathbb{R} \Phi_n(t,s; \textup{w} ) d\textup{w}
& \leq
m_2 t^3 + 2\pi t^2 \sum_{n\geq 2} \frac{m_2^n C_t^n}{n!} \\
& < \infty,
\end{aligned}
\label{COV4}
\end{align}
\noindent
where $C_t := 2(t^2 \vee 1) \int_\mathbb{R} \frac{1}{1+x^2} dx$ (see the equation below
\cite[equation (4.17)]{BNQSZ}).
Therefore, using the notations in \eqref{KER:f}, \eqref{COV2b} and the bound
\eqref{COV4} with \eqref{COV3}, we get the limiting covariance structure $\Sigma$ of
$\{ \frac{1}{\sqrt{R}} F_R(t) \}_{t\in\mathbb{R}_+}$ described by
\begin{align}
\Sigma_{t,s} := \sum_{n\geq 1} \frac{2 m_2^n}{n!} \int_\mathbb{R} \Phi_n(t,s; \textup{w} ) d\textup{w},
\,\,\, (t,s)\in\mathbb{R}_+^2.
\label{COV_S}
\end{align}
In particular, we have for any fixed $t\in(0,\infty)$,
\begin{align}
\sigma_R(t) := \sqrt{ \textup{Var} \big( F_R(t) \big) } \sim \sqrt{R}
\label{COV_s}
\end{align}
as $R\to\infty$.
\medskip
\noindent
$\bullet$ {\bf Part $\text{I \hspace{-2.8mm} I} $: Quantitative central limit theorems.}
\begin{proof}[Proof of Theorem \ref{thm:main} \rm (ii)]
Recall that $D^+ = D$ on $\textup{dom}(D)$
and $D^+ D^+ = D^2$ on $\textup{dom}(D^2)$.
And it is easy to see from Proposition \ref{prop:dec} (iii) and Lemma \ref{lem:Fubi}
that $F_R(t)\in \textup{dom}(D^2)$
with
\noindent
\begin{align}
\begin{aligned}
\| D_{r, y, z} F_R(t) \|_4
&\leq \int_{-R}^R \| D_{r, y, z} u(t,x) \|_4 \, dx \\
&\lesssim |z| \cdot \int_{-R}^R G_{t-r}(x-y) dx \\
&= \varphi_{t,R}(r,y) |z|
\end{aligned}
\label{Q1}
\end{align}
\noindent
and
\noindent
\begin{align}
\begin{aligned}
\| D_{r_1, y_1, z_1}D_{r_2, y_2, z_2} F_R(t) \|_4
&\leq \int_{-R}^R \| D_{r_1, y_1, z_1}D_{r_2, y_2, z_2} u(t,x) \|_4 \, dx \\
&\lesssim |z_1z_2| \cdot \int_{-R}^R \widetilde{f}_{t,x, 2}(r_1, y_1, r_2, y_2) dx \\
&\leq |z_1z_2| \cdot t,
\end{aligned}
\label{Q2}
\end{align}
\noindent
where $\varphi_{t,R}$ is as in \eqref{Rosen6b} and $f_{t,x,2}$ is as in \eqref{KER:f}
with
\begin{align}
\begin{aligned}
&\widetilde{f}_{t,x,2}(r_1, y_1, r_2, y_2 ) \\
&\quad =
\frac{1}{2} \big[ G_{t-r_1}(x-y_1) G_{r_1- r_2}(y_1 -y_2) +
G_{t-r_2}(x-y_2) G_{r_2- r_1}(y_2 -y_1) \big]
\label{tf2}
\end{aligned}
\end{align}
\noindent
with the convention \eqref{convention} in mind.
In what follows, we apply Proposition \ref{prop:Wass} and Proposition \ref{prop:Kol}
to derive the desired quantitative CLTs. More precisely,
we will compute the six quantities $\gamma_1, ... , \gamma_6$
as in \eqref{gamma123}-\eqref{gamma456} with $F = F_R(t) / \sigma_R(t) $.
To ease notation, we write $\xi_i = (r_i, y_i, z_i)\in Z$
and $m(d\xi_i) = dr_i dy_i \nu(dz_i)$ for $i=1,2,3$.
And in what follows, we assume $m_4 <\infty$.
\medskip
\noindent
$\bullet$ {\bf Estimation of $\gamma_1$.}
By H\"older's inequality and \eqref{Q1}-\eqref{Q2} with \eqref{COV_s}, we have
\noindent
\begin{align}
\begin{aligned}
\gamma_1^2
&\leq \frac{4}{\sigma^4_R(t)} \int_{Z^3}
\| D_{r_1, y_1,z_1} F_R(t) \|_4
\| D_{r_2, y_2,z_2} F_R(t) \|_4
\| D_{r_1, y_1,z_1} D_{r_3, y_3,z_3} F_R(t) \|_4 \\
&\qquad\qquad \cdot
\| D_{r_2, y_2,z_2} D_{r_3, y_3,z_3} F_R(t) \|_4 \, m(d\xi_1)m(d\xi_2)m(d\xi_3) \\
&\lesssim \frac{m_2^3}{R^2}
\int_{\mathbb{R}_+^3\times\mathbb{R}^3} d\pmb{r_3} d\pmb{y_3}
\int_{[-R, R]^4} d\pmb{x_4}
G_{t-r_1}(x_1 - y_1)
G_{t-r_2}(x_2 - y_2) \\
&\qquad\qquad \cdot
\widetilde{f}_{t, x_3, 2} (r_1, y_1, r_3, y_3 )
\widetilde{f}_{t, x_4, 2} (r_2, y_2, r_3, y_3 ),
\end{aligned}
\label{ga1a}
\end{align}
where $\widetilde{f}_{t, x, 2}$ is as in \eqref{tf2}.
Now consider the integration over $\mathbb{R}_+^3$. It can be divided into six parts
depending on the relative order of $r_1$, $r_2$, and $r_3$.
That is, the last integral in \eqref{ga1a} can be rewritten as a sum of six sub-integrals.
Because the argument for each sub-integral is identical, we only consider
the following one that corresponds to $r_1> r_2 > r_3$:
\noindent
\begin{align}
\begin{aligned}
&\frac{m_2^3}{R^2}
\int_{\mathbb{R}_+^3\times\mathbb{R}^3} d\pmb{r_3} \mathbf 1_{\{r_1 > r_2 > r_3 \} }d\pmb{y_3}
\int_{[-R, R]^4} d\pmb{x_4}
G_{t-r_1}(x_1 - y_1)
G_{t-r_2}(x_2 - y_2) \\
&\qquad\qquad \cdot
\widetilde{f}_{t, x_3, 2} (r_1, y_1, r_3, y_3 )
\widetilde{f}_{t, x_4, 2} (r_2, y_2, r_3, y_3 ) \\
&\quad
\lesssim
\frac{1}{R^2}
\int_{[0,t]^3} d\pmb{r_3} \int_{\mathbb{R}^3} d\pmb{y_3}
\int_{[-R, R]^4} d\pmb{x_4}
G_{t-r_1}(x_1 - y_1)
G_{t-r_2}(x_2 - y_2) \\
&\qquad\qquad \cdot
G_{t-r_1}(x_3-y_1) G_{r_1- r_3}(y_1 -y_3)
G_{t-r_2}(x_4-y_2) G_{r_2- r_3}(y_2 -y_3) \\
&\quad
\lesssim \frac{1}{R},
\end{aligned}
\label{ga1b}
\end{align}
where the last step in \eqref{ga1b} is obtained by
applying the following simple inequalities in order:
\noindent
\begin{align}
\begin{aligned}
& \int_{[-R, R]^3} dx_1dx_3 dx_4 G_{t-r_1}(x_1 - y_1) G_{t-r_1}(x_3-y_1)G_{t-r_2}(x_4-y_2)
\leq t^3 \\
& \int_\mathbb{R} dy_2 \int_\mathbb{R} dy_3 \int_\mathbb{R} dy_1 G_{r_1 - r_3 }(y_1- y_3) G_{r_2 - r_3}(y_2- y_3) G_{t-r_2}(x_2-y_2) \leq t^3.
\end{aligned}
\label{ga1c}
\end{align}
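After these two integrations, only the factors $\int_{[0,t]^3} d\pmb{r_3} \leq t^3$ and $\int_{-R}^R dx_2 = 2R$ remain, so the expression in \eqref{ga1b} is bounded by $2t^9/R$ up to the suppressed constants.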
Combining \eqref{ga1a} and \eqref{ga1b} yields
\begin{align}
\gamma_1 \lesssim \frac{1}{\sqrt{R}}.
\label{ga1d}
\end{align}
\noindent
$\bullet$ {\bf Estimation of $\gamma_2$.}
Similarly, we deduce from H\"older's inequality
and \eqref{Q1}-\eqref{Q2} with \eqref{COV_s} that
\noindent
\begin{align}
\begin{aligned}
\gamma_2^2
&\leq \frac{1}{\sigma^4_R(t)} \int_{Z^3}
\| D_{r_1, y_1, z_1} D_{r_3, y_3, z_3}F_R(t)\|_4^2
\| D_{r_2, y_2, z_2} D_{r_3, y_3, z_3}F_R(t)\|_4^2 \\
&\qquad\qquad\qquad m(d\xi_1) m(d\xi_2) m(d\xi_3)
\\
&\lesssim \frac{m_2^2 m_4}{R^2}
\int_{[0,t]^3} d\pmb{r_3} \int_{\mathbb{R}^3} d\pmb{y_3}
\int_{[-R, R]^4} d\pmb{x_4}
\widetilde{f}_{t, x_1, 2} (r_1, y_1, r_3, y_3 ) \\
&\qquad \cdot \widetilde{f}_{t, x_2, 2} (r_1, y_1, r_3, y_3 )
\widetilde{f}_{t, x_3, 2} (r_2, y_2, r_3, y_3 ) \widetilde{f}_{t, x_4, 2} (r_2, y_2, r_3, y_3 ).
\end{aligned}
\label{ga2a}
\end{align}
Consider the integral restricted to $r_1 > r_2 > r_3$ now:
\noindent
\begin{align}
\begin{aligned}
&\int_{[0,t]^3} d\pmb{r_3} \mathbf 1_{\{ r_1 > r_2 > r_3\}} \int_{\mathbb{R}^3} d\pmb{y_3}
\int_{[-R, R]^4} d\pmb{x_4}
\widetilde{f}_{t, x_1, 2} (r_1, y_1, r_3, y_3 ) \widetilde{f}_{t, x_2, 2} (r_1, y_1, r_3, y_3 ) \\
&\qquad\quad\cdot
\widetilde{f}_{t, x_3, 2} (r_2, y_2, r_3, y_3 ) \widetilde{f}_{t, x_4, 2} (r_2, y_2, r_3, y_3 ) \\
&\lesssim \int_{[0,t]^3} d\pmb{r_3} \int_{\mathbb{R}^3} d\pmb{y_3}
\int_{[-R, R]^4} d\pmb{x_4} G_{t-r_1}(x_1 - y_1) G^2_{r_1 - r_3}(y_1-y_3)
G_{t-r_1}(x_2 - y_1) \\
&\qquad\quad
\cdot G_{t-r_2}(x_3 - y_2) G^2_{r_2 - r_3}(y_2-y_3)
G_{t-r_2}(x_4 - y_2).
\end{aligned}
\label{ga2b}
\end{align}
Note that $G^2_t(x) = \frac{1}{2} G_t(x)$, and then we can bound the above expression
as in \eqref{ga1b}. More precisely, we first integrate out $dx_1 dx_3 dx_4$ using
the first inequality in \eqref{ga1c}, and then integrate out $dy_2$, $dy_3$, $dy_1$
(in this order) so as to obtain
\noindent
\begin{align}
\eqref{ga2b}\lesssim R, \,\, \text{and thus $\gamma_2 \lesssim \frac{1}{\sqrt{R} }$. }
\label{ga2c}
\end{align}
\medskip
\noindent
$\bullet$ {\bf Estimation of $\gamma_3$.} Now we deduce from \eqref{Q1} and \eqref{COV_s}
with \eqref{Rosen6b} that
\noindent
\begin{align}
\begin{aligned}
\gamma_3
& = \frac{1}{\sigma^3_R(t)} \int_Z \| D_{r, y, z}F_R(t) \|_3^3 dr dy \nu(dz) \\
&\lesssim \frac{m_3}{R^{\frac32}} \int_0^t \int_\mathbb{R} \varphi_{t,R}^3(r,y)
dr dy \\
&\lesssim \frac{1}{\sqrt{R}},
\end{aligned}
\label{ga3}
\end{align}
\noindent
where the last step follows from \eqref{Rosen7c} and the fact that $\varphi_{t,R}(r,y) \leq t$.
\medskip
Therefore, we can deduce from Proposition \ref{prop:Wass}
with \eqref{ga1d}, \eqref{ga2c}, and \eqref{ga3} that
\[
d_{\rm FM}\Big( \frac{F_R(t)}{\sigma_R(t)}, \mathcal{N}(0,1) \Big)
\leq d_{\rm Wass}\Big( \frac{F_R(t)}{\sigma_R(t)}, \mathcal{N}(0,1) \Big) \lesssim \frac{1}{\sqrt{R}}.
\]
Next, we continue to estimate $\gamma_4, \gamma_5, \gamma_6$
for getting the Kolmogorov bound.
\medskip
\noindent
$\bullet$ {\bf Estimation of $\gamma_4$.} First we note that the bounds in \eqref{ga3}
also imply that
\noindent
\begin{align}
\begin{aligned}
\int_Z \big\| D_\xi \frac{F_R(t)}{\sigma_R(t)}\big\|_4^3 \, m(d\xi) \lesssim \frac{1}{\sqrt{R} },
\end{aligned}
\label{ga4a}
\end{align}
\noindent
while we deduce from Proposition \ref{Prop:Rosen} (ii)
that
\begin{align}
\| F_R(t)\|_4 \lesssim \sqrt{R} \,\,
\text{for $R\geq 1$.}
\label{ga4b}
\end{align}
Therefore, it follows from \eqref{ga4a} and \eqref{ga4b} with \eqref{COV_s}
that
\noindent
\begin{align}
\begin{aligned}
&\gamma_4 \leq \frac{1}{2} \big\| \frac{F_R(t)}{\sigma_R(t)}\big\|_4
\int_Z \big\| D_\xi \frac{F_R(t)}{\sigma_R(t)} \big\|_4^3 \, m(d\xi) \\
&\quad
\lesssim \frac{1}{\sqrt{R} } \,\,\, \text{for $R\geq 1$.}
\end{aligned}
\label{ga4c}
\end{align}
\medskip
\noindent
$\bullet$ {\bf Estimation of $\gamma_5$.} Similarly as in the estimation of $\gamma_3$,
we can deduce from \eqref{Q1}, \eqref{COV_s}, \eqref{Rosen6b},
and the first inequality in \eqref{Rosen7c} with
$\varphi_{t,R}^4(r,y) \leq t^2 \varphi_{t,R}^2(r,y)$
that
\noindent
\begin{align}
\begin{aligned}
\gamma_5^2
& = \frac{1}{\sigma_R^4(t)} \int_Z \| D_\xi F_R(t) \|_4^4 m(d\xi) \\
& \lesssim \frac{m_4}{R^2} \int_0^t \int_\mathbb{R} \varphi_{t,R}^4(r,y) dr dy \\
&\lesssim \frac{1}{R}.
\end{aligned}
\label{ga5}
\end{align}
\medskip
\noindent
$\bullet$ {\bf Estimation of $\gamma_6$.} Note first from \eqref{Q2} and \eqref{tf2}
that
\begin{align}
\int_{-R}^R\widetilde{f}_{t,x,2}(r_1, y_1, r_2, y_2 ) dx \leq t.
\label{Q3}
\end{align}
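Indeed, since $G \leq \frac12$ and $\int_{\mathbb{R}} G_{t-r}(x-y)\, dx = t-r \leq t$ for $r\in[0,t]$, the expression \eqref{tf2} integrates to
\[
\int_{-R}^R \widetilde{f}_{t,x,2}(r_1, y_1, r_2, y_2 )\, dx
\leq \frac{1}{2} \Big[ \frac{t}{2} + \frac{t}{2} \Big] = \frac{t}{2} \leq t.
\]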
\noindent
Then, we deduce from \eqref{Q1}, \eqref{Q2}, and \eqref{Q3} with \eqref{Rosen6b}, \eqref{tf2},
and \eqref{COV_s} that
\noindent
\begin{align}
\begin{aligned}
\gamma_6^2
&\leq \frac{6}{\sigma_R^4(t)} \int_{Z^2}
\Big(
\| D_{r_1, y_1, z_1} F_R(t) \|_4^2 \cdot \| D_{r_1, y_1, z_1} D_{r_2, y_2, z_2} F_R(t)\|_4^2 \\
&\qquad\qquad\qquad
+ \| D_{r_1, y_1, z_1} D_{r_2, y_2, z_2} F_R(t)\|_4^4 \Big) m(d\xi_1)m(d\xi_2) \\
&\lesssim \frac{1}{R^2} \int_{[0,t]^2\times\mathbb{R}^2} m_2 m_4 \varphi_{t,R}^2(r_1, y_1)
\bigg( \int_{-R}^R\widetilde{f}_{t,x,2}(r_1, y_1, r_2, y_2 ) dx \bigg)^2
dr_1 dr_2 dy_1 dy_2 \\
&\quad
+ \frac{1}{R^2} \int_{[0,t]^2\times\mathbb{R}^2} m^2_4
\bigg( \int_{-R}^R\widetilde{f}_{t,x,2}(r_1, y_1, r_2, y_2 ) dx \bigg)^4
dr_1 dr_2 dy_1 dy_2 \\
&\lesssim \frac{1}{R^2} \int_{[0,t]^2\times\mathbb{R}^2}
\int_{-R}^R\widetilde{f}_{t,x,2}(r_1, y_1, r_2, y_2 ) dx
dr_1 dr_2 dy_1 dy_2 \\
&\lesssim \frac1R.
\end{aligned}
\label{ga6a}
\end{align}
\noindent
Therefore, it follows from Proposition \ref{prop:Kol}
with \eqref{ga1d}, \eqref{ga2c}, \eqref{ga3}, \eqref{ga4c}, \eqref{ga5},
and \eqref{ga6a} that
\[
d_{\rm Kol}\Big( \frac{F_R(t)}{\sigma_R(t)}, \mathcal{N}(0,1) \Big) \lesssim \frac{1}{\sqrt{R}}.
\]
Hence the proof of part (ii) in Theorem \ref{thm:main} is completed.
\qedhere
\end{proof}
\medskip
\noindent
$\bullet$
{\bf Part $\text{I \hspace{-2.9mm} I \hspace{-2.9mm} I}$: Functional central limit theorems.}
In this part, we present the proof of Theorem \ref{thm:main} (iii).
The proof consists of two steps: we first show the
convergence in finite-dimensional distributions and then
conclude this section by proving the tightness of the family of processes
$\{ \frac{1}{\sqrt{R}} \{F_R(t)\}_{t\in\mathbb{R}_+} : R\geq 1\}$.
\medskip
\noindent
$\bullet$ {\bf Step 1: Convergence in finite-dimensional distributions.}
Fix any $0< t_1 < ... < t_m <\infty$ with $m\in\mathbb{N}_{\geq 2}$.
We need to show that
\[
\Big( \frac{1}{\sqrt{R}} F_R(t_1), ... , \frac{1}{\sqrt{R}} F_R(t_m)\Big)
\]
converges in law to a centered Gaussian vector on $\mathbb{R}^m$
with covariance matrix $(\Sigma_{t_i, t_j})_{i,j=1,..., m}$,
where $\Sigma$ is as in \eqref{COV_S}.
In view of the multivariate second-order Poincar\'e inequality
(Proposition \ref{prop:mPoi}), it suffices to prove the asymptotic negligibility
of quantities $\tau_1, \tau_2, \tau_3$ in \eqref{tau123}.
Indeed, these three quantities have the same forms as $\gamma_1, \gamma_2, \gamma_3$
in \eqref{ga1a}, \eqref{ga2a}, and \eqref{ga3} respectively.
So the same computations can be carried out verbatim and are hence omitted here.
\medskip
\noindent
$\bullet$ {\bf Step 2: Tightness.} We first deduce from Proposition \ref{Prop:Rosen} (ii)
and Kolmogorov's continuity theorem
(see e.g. \cite[Theorem 4.23]{Kall21})
that for each $R \geq 1$, the process $F_R:=\{ F_R(t)\}_{t\in\mathbb{R}_+}$
admits a continuous modification that is almost surely
locally $\beta$-H\"older continuous for any $\beta\in(0, \frac34)$.
Moreover, the bound \eqref{Rosen2a} in Proposition \ref{Prop:Rosen} (ii),
together with the tightness criterion of Kolmogorov-Chentsov
(see e.g. \cite[Theorem 23.7]{Kall21}),
implies that
\[
\big\{ \frac{1}{\sqrt{R} } F_R \big\}_{R\geq 1}
\]
is a tight family of continuous processes;
that is, a tight family of random variables
with values in $C(\mathbb{R}_+; \mathbb{R})$.
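Indeed, for $p\geq 2$ with $m_p<\infty$, dividing \eqref{Rosen2a} by $R^{\frac p2}$ gives
\[
\mathbb{E}\Big[ \big| R^{-\frac12} F_R(t) - R^{-\frac12} F_R(s) \big|^p \Big] \leq A_T\, |t-s|^p
\,\,\, \text{for all $t,s\in[0,T]$ and $R\geq 1$,}
\]
which is the moment condition of the criterion, uniformly in $R$, with exponents $a=p$ and $b=p-1>0$ (note also that $F_R(0)=0$).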
Combining the above two steps, we conclude the desired functional
CLT, which completes the proof of Theorem \ref{thm:main}.
\hfill $\square$
\section{Introduction}\label{sec:introduction}
With the advances in data collection and storage capabilities, massive multidimensional and multiway tensor data are being generated in a wide range of emerging applications~\cite{KB09}. Tensor computations and optimizations have been an active research area in the recent decade. Computing tensor norms is evidently essential in modelling various tensor optimization problems. One typical example is tensor completion (see e.g.,~\cite{YZ16}) in which the tensor nuclear norm is commonly used as the {\color{black}convex surrogate} of the tensor rank. However, most tensor norms are NP-hard to compute~\cite{HL13}, such as the spectral norm~\cite{HLZ10} and the nuclear norm~\cite{FL18} when the order of a tensor is more than two, a sharp contrast to matrices (tensors of order two) whose spectral and nuclear norms are easy to compute, e.g., using singular value decompositions.
The tensor spectral norm~\cite{L05} is commonly known as the maximization of a multilinear form over Cartesian products of unit spheres, a standard higher-order generalization of the matrix spectral norm. Taking a tensor $\mathcal{T}=(t_{ijk})\in\mathbb{R}^{n\times n\times n}$ of order three as an example, its spectral norm
\begin{equation}\label{eq:3tnorm}
\|\mathcal{T}\|_{\sigma}= \max \left\{\mathcal{T}(\boldsymbol{x},\boldsymbol{y},\boldsymbol{z}): \|\boldsymbol{x}\|_2=\|\boldsymbol{y}\|_2=\|\boldsymbol{z}\|_2=1,\,\boldsymbol{x},\boldsymbol{y},\boldsymbol{z}\in\mathbb{R}^n\right\},
\end{equation}
where
$
\mathcal{T}(\boldsymbol{x},\boldsymbol{y},\boldsymbol{z}) = \sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{k=1}^{n} t_{ijk} x_iy_jz_k
$
is a trilinear form of $(\boldsymbol{x},\boldsymbol{y},\boldsymbol{z})$. This is equivalent to the problem of the best rank-one approximation of the tensor $\mathcal{T}$, well known in the tensor community:
\[\min\left\{\| \mathcal{T}-\lambda\,\boldsymbol{x}\otimes \boldsymbol{y} \otimes \boldsymbol{z} \|_{\textnormal{F}}: \lambda\in\mathbb{R}, \|\boldsymbol{x}\|_2=\|\boldsymbol{y}\|_2=\|\boldsymbol{z}\|_2=1,\,\boldsymbol{x},\boldsymbol{y},\boldsymbol{z}\in\mathbb{R}^n\right\},
\]
where $\|\bullet\|_{\textnormal{F}}$ stands for the Frobenius norm and $\otimes$ stands for the vector outer product, meaning that $\boldsymbol{x}\otimes \boldsymbol{y} \otimes \boldsymbol{z}$ is a rank-one tensor.
Although the tensor spectral norm is NP-hard to compute, it is easy to obtain feasible solutions of~\eqref{eq:3tnorm} to approximate this norm. There has been a long line of research works~\cite{S11,ZQY12,LHZ12,HJLZ14,HS14} on approximation algorithms of~\eqref{eq:3tnorm} in the optimization community since the seminal work of He et al.~\cite{HLZ10}. The best known worst-case bound to approximate~\eqref{eq:3tnorm} in polynomial time is $\Omega\left({\sqrt\frac{\ln n}{n}}\right)$~\cite{S11,HJLZ14}. One simple approach for this bound is a naive randomized algorithm in~\cite{HJLZ14} (a code sketch is given after the two steps below):
\begin{enumerate}
\item Sample a vector $\boldsymbol{v}$ uniformly on the sphere\footnote{\color{black}The $n$ in $\mathbb{S}^n$ refers to the dimension of the space in which this sphere of dimension $n-1$ lives.} $\mathbb{S}^n:=\{\boldsymbol{x}\in\mathbb{R}^n:\|\boldsymbol{x}\|_2=1\}$ and compute the spectral norm of the resulting matrix, i.e., $\max_{\|\boldsymbol{x}\|_2=\|\boldsymbol{y}\|_2=1}\mathcal{T}(\boldsymbol{x},\boldsymbol{y},\boldsymbol{v})$;
\item Repeat the above procedure independently until the largest objective value from all samples hits the desired bound.
\end{enumerate}
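To make the sampling scheme concrete, the following Python sketch (using NumPy) implements the two steps above; the function name, the fixed sample budget, and the use of a normalized Gaussian vector as the uniform spherical sample are our own illustrative choices and not part of~\cite{HJLZ14}.
\begin{verbatim}
import numpy as np

def sampled_spectral_norm(T, num_samples=1000, seed=None):
    # Randomized lower bound on the spectral norm of a third-order
    # tensor T of shape (n, n, n): sample v uniformly on the unit
    # sphere, reduce to a matrix spectral norm, keep the best value.
    rng = np.random.default_rng(seed)
    n = T.shape[2]
    best = 0.0
    for _ in range(num_samples):
        v = rng.standard_normal(n)
        v /= np.linalg.norm(v)  # normalized Gaussian = uniform on sphere
        # M[i, j] = sum_k T[i, j, k] * v[k], the matrix slice along v
        M = np.tensordot(T, v, axes=([2], [0]))
        best = max(best, np.linalg.norm(M, 2))  # matrix spectral norm
    return best
\end{verbatim}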
If we were able to sample all vectors on the unit sphere for $\boldsymbol{z}$, then this approach would certainly find $\max_{\|\boldsymbol{x}\|_2=\|\boldsymbol{y}\|_2=\|\boldsymbol{z}\|_2=1}\mathcal{T}(\boldsymbol{x},\boldsymbol{y},\boldsymbol{z})$. It is obviously not possible to cover the unit sphere by enumerating unit vectors. However, if we are allowed some tolerance, say an approximation ratio $\tau\in(0,1]$, then a sample unit vector $\boldsymbol{v}$ can be identified with a spherical cap
\[\mathbb{B}^n(\boldsymbol{v},\tau):=\left\{\boldsymbol{x}\in\mathbb{S}^n:\boldsymbol{x}^{\textnormal{T}}\boldsymbol{v}\ge\tau\right\}\]
with the angular radius $\theta=\arccos\tau$. In this setting, $\boldsymbol{v}$ is able to generate a $\tau$-approximate solution if and only if the spherical cap $\mathbb{B}^n(\boldsymbol{v},\tau)$ includes an optimal $\boldsymbol{z}$ in~\eqref{eq:3tnorm}. Alternatively, if we have a collection of sample unit vectors whose corresponding spherical caps jointly cover the whole sphere, then the best one in this collection can generate a $\tau$-approximate solution. In fact, the above algorithm does imply a randomized cover of the sphere whose covering volume is at least $1-\epsilon$ for any $\epsilon>0$ with high probability. However, this is much weaker than what we need here and {\color{black}cannot even guarantee the existence of a full cover}. One of the major contributions of this paper is to find a reasonable number of spherical caps that cover the sphere, deterministically and explicitly.
{\color{black}There are certainly many decision problems over spheres.} Among them, many are hard problems for which approximate solutions are commonly acceptable, such as in wireless communications~\cite{VH20} and spherical facility location~\cite{X95}. There are even harder problems {\color{black}where} sphere covering seems irrelevant but can indeed be helpful. One of these problems is {\color{black}computing the tensor nuclear norm.}
Taking $\mathcal{T}\in\mathbb{R}^{n\times n\times n}$ again as an example, its nuclear norm is
\begin{equation}\label{eq:3nnorm}
\|\mathcal{T}\|_*=\min\left\{\sum_{i=1}^r|\lambda_i| : \mathcal{T}=\sum_{i=1}^r\lambda_i\, \boldsymbol{x}_i\otimes\boldsymbol{y}_i\otimes\boldsymbol{z}_i,\,\lambda_i\in\mathbb{R},\, \|\boldsymbol{x}_i\|_2=\|\boldsymbol{y}_i\|_2=\|\boldsymbol{z}_i\|_2=1,\, r\in\mathbb{N} \right\}.
\end{equation}
{\color{black}The decomposition of $\mathcal{T}$ into rank-one tensors in~\eqref{eq:3nnorm} is known as a CANDECOMP/PARAFAC (CP) decomposition~\cite{H27}}. {\color{black}While CP decompositions usually require the number of rank-one terms to be minimum, there is no such constraint in~\eqref{eq:3nnorm}.} In fact, the tensor nuclear norm and spectral norm are dual to each other (see e.g.,~\cite{LC14}), i.e.,
\[
\|\mathcal{T}\|_*=\max_{\|\mathcal{X}\|_\sigma \le 1}\langle\mathcal{T},\mathcal{X}\rangle \mbox{ and } \|\mathcal{T}\|_\sigma=\max_{\|\mathcal{X}\|_* \le 1}\langle\mathcal{T},\mathcal{X}\rangle,
\]
where $\langle\cdot,\cdot\rangle$ stands for the Frobenius inner product. Computing or approximating the tensor nuclear norm is much harder, no matter whether {\color{black}one uses the definition~\eqref{eq:3nnorm} or the dual formulation---the corresponding feasibility problem is not easy at all. The situation is different for the tensor spectral norm, as feasibility in~\eqref{eq:3tnorm} is trivial. There are various methods~\cite{DDV00,RK00,KB09,CHLZ12,VVM12,NW14,JMZ15,DCD16} to compute the tensor spectral norm in practice, but there is only one known method~\cite{N17} to compute the tensor nuclear norm, to the best of our knowledge.} This crucial fact has given rise to alternative concepts for the tensor nuclear norm in practice, such as the average of the nuclear norms of the matrix flattenings obtained in three different ways. In terms of approximating the tensor nuclear norm, the best polynomial-time worst-case approximation bound is $\Omega\left(\frac{1}{\sqrt{n}}\right)$ via matrix flattenings~\cite{H15} or partitions into matrix slices~\cite{L16}. This bound is worse than the best known one $\Omega\left({\sqrt\frac{\ln n}{n}}\right)$ for the tensor spectral norm. It is natural to expect to achieve this bound for the dual norm of the tensor spectral norm. As another major contribution of this paper, via a certain reformulation and the convex optimization approach proposed in~\cite{HJL22}, we are able to bridge the gap between the primal and dual norms, with the help of constructions of spherical caps for sphere covering.
Covering a sphere by identical spherical caps has been studied in computational geometry since the pioneering work of Rogers~\cite{R58}. Instead of describing spherical caps via the angular radius, the caps are measured in normalized volume in the study. Specifically, by defining the normalized volume of a spherical cap to be its true volume over the volume of $\mathbb{S}^n$ (in this sense the normalized volume of $\mathbb{S}^n$ is one), sphere covering asks for a given positive integer $m$, what is the smallest $\delta$ such that there are $m$ spherical caps with normalized volume $\delta$ covering $\mathbb{S}^n$? The quantity $\delta m$ is called the density of the covering. Studying the bounds of this density has been the main research topic along this line. An upper bound of $O\left(n\ln n\right)$ for the covering density was obtained by Rogers~\cite{R63} for sufficiently small $\delta$. This remains the best known asymptotic upper bound although there were improvements made in terms of the constant of the asymptotic bound and for any $\delta$ in~\cite{BW03,D07}.
For the lower bound of the covering density, there is no clear answer in general other than the trivial one, i.e., $\delta m\ge 1$. Rogers~\cite{R58} stated that the density of a covering cannot beat a natural strategy based on tiling $\mathbb{R}^n$ with regular simplices, known as the simplex bound, whose value remains conjectural and unproven. Rogers~\cite{R58} computed that for $\delta\rightarrow 0$ the density is close to $\frac{n}{e\sqrt{e}}$, and it is believed that the density is $\Omega(n)$. Several special cases of the simplex bound have been confirmed, either for very small $\delta$ or for $\delta$ in the large cap regime; see~\cite{JM20} and references therein. Other than the two trivial cases $m=1$ and $m=2$, which correspond to $\delta=1$ and $\delta=\frac{1}{2}$, respectively, perhaps the first nontrivial work along this line is due to Lusternik and Schnirelmann~\cite{B06}: if $n$ open or closed sets cover $\mathbb{S}^n$, then one of them contains a pair of antipodal points. This implies that if $\delta<\frac{1}{2}$ then $m\ge n+1$, an obvious lower bound of $\Omega(n)$ for any universal constant $\delta<\frac{1}{2}$.
There are two optimization problems in the literature that are relevant to sphere covering. Sphere coverage verification is to decide whether a given set of spherical caps covers the sphere or not. Petkovi\'{c} et al.~\cite{PPL12} showed that sphere coverage verification is NP-hard and proposed a recursive algorithm based on quadratic optimization. The spherical discrepancy problem is to find the furthest point in $\mathbb{S}^n$ from a given set of points $\{\boldsymbol{v}_1,\boldsymbol{v}_2,\dots,\boldsymbol{v}_m\}\subseteq \mathbb{S}^n$, i.e., $\min_{\boldsymbol{x}\in\mathbb{S}^n}\max_{1\le i\le m}\boldsymbol{x}^{\textnormal{T}}\boldsymbol{v}_i$. Spherical discrepancy is also NP-hard, since sphere coverage verification is precisely its decision version. Jones and McPartlon~\cite{JM20} proposed a multiplicative weights-based algorithm that obtains an approximation bound up to lower order terms.
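Since exact sphere coverage verification is NP-hard, in practice one may only probe the spherical discrepancy heuristically. The sketch below (our own illustrative code, not the multiplicative weights algorithm of~\cite{JM20}) estimates $\min_{\boldsymbol{x}\in\mathbb{S}^n}\max_{1\le i\le m}\boldsymbol{x}^{\textnormal{T}}\boldsymbol{v}_i$ by Monte Carlo sampling; the sampled minimum only upper bounds the true discrepancy.
\begin{verbatim}
import numpy as np

def discrepancy_estimate(V, samples=100000, seed=0):
    # V: m x n array whose rows are unit vectors v_1, ..., v_m.
    # Estimate min over x in S^n of max_i x^T v_i by sampling x.
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((samples, V.shape[1]))
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    return (X @ V.T).max(axis=1).min()
\end{verbatim}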
Although there is extensive research on the density of sphere covering and related problems, it does not exactly serve the purpose of our study in this paper. Asymptotic bounds on normalized volumes are not aligned with the goal of obtaining approximation bounds based on inner products between unit vectors. The upper bounds obtained in~\cite{BW03} are existence results via a randomized approach. The construction in~\cite{RS09} works only in the large cap regime for $\delta=e^{-\sqrt{n}}$, which results in a number of caps exponential in $n$. A recent work on spherical discrepancy minimization~\cite{JM20} gave an algorithm that generates spherical caps sequentially until a covering is reached, but the running time to generate a single cap is $O(n^{10})$. Our goal is to achieve a good balance between the approximation quality, measured by $\cos\theta$ for the angular radius $\theta$, and the number of caps, which should not be too large, say bounded by a polynomial function of $n$. More importantly, we hope to obtain explicit constructions of spherical caps to cover the unit sphere. These will be of great benefit to the algorithms and optimization community, apart from our applications in approximating tensor norms. The products of our simple and explicit constructions, together with some trivial and known constructions, are summarized in Table~\ref{table:list}.
\begin{table}[h]
\centering
\small\begin{tabular}{|p{2.6in}|p{1.3in}|p{1.3in}|}
\hline
Set of $\boldsymbol{v}$'s for $\mathbb{B}^n(\boldsymbol{v},\tau)$ & $\tau$ for $\mathbb{B}^n(\boldsymbol{v},\tau)$ & Number of $\mathbb{B}^n(\boldsymbol{v},\tau)$'s \\
\hline
Any $\{\boldsymbol{v}\}$ where $\boldsymbol{v}\in\mathbb{S}^n$ & $-1$ & $1$ \\
Any $\{\boldsymbol{v},-\boldsymbol{v}\}$ where $\boldsymbol{v}\in\mathbb{S}^n$ & $0$ & $2$ \\
Any regular simplex inscribed in $\mathbb{S}^n$ & $1/n$ & $n+1$ \\
Any basis of $\mathbb{R}^n$ with their negations & $1/\sqrt{n}$ & $2n$ \\
$\mathbb{H}^n_1$ (Section~\ref{sec:h1}), $\mathbb{H}^n_4$ and $\mathbb{H}^n_5$ (Section~\ref{sec:h4}) & $\Omega\big(\sqrt{\ln n/ n}\big)$ & $O(n^\alpha)$ for $\alpha > 1$
\\
$\mathbb{H}^n_2$ (Section~\ref{sec:h2}) & $\Omega\big(1/\sqrt{\ln n}\big)$ & $O(3^n)$ \\
$\mathbb{H}^n_3$ (Section~\ref{sec:h3}) & $\Omega\big(1\big)$ & $O(\beta^n)$ for $\beta > 4$\\
Grid points in spherical coordinates & $1-O\big(n/m^2\big)$ & $O(m^{n-1})$ \\
\hline
\end{tabular}
\caption{Constructions of spherical caps to cover the unit sphere}\label{table:list}
\end{table}
This paper is organized as follows. After introducing some uniform notations, we propose various constructions of spherical caps for sphere covering and bound the ratio $\tau$ and the number of caps for each construction in Section~\ref{sec:h}. We work around a key ratio $\Omega\left(\sqrt{\frac{\ln n}{n}}\right)$, which is the largest possible if the number of spherical caps is $O(n^\alpha)$ for some universal constant $\alpha>1$, progressing from randomization (Section~\ref{sec:h1}) to deterministic covering (Section~\ref{sec:h4}), with some interesting byproducts (Sections~\ref{sec:h2} and~\ref{sec:h3}). In Section~\ref{sec:tensor}, we apply the covering results to approximate tensor norms. Specifically, we propose the first implementable and deterministic algorithm achieving the best known approximation bound for the tensor spectral norm and related polynomial optimization problems in Section~\ref{sec:snorm}. A deterministic algorithm with an improved approximation bound for the tensor nuclear norm is proposed in Section~\ref{sec:nnorm}. Numerical performance of the proposed algorithms is reported in Section~\ref{sec:numerical}. Finally, some concluding remarks are given in Section~\ref{sec:remark}.
\subsection*{Some uniform notations}
Throughout this paper we uniformly adopt lowercase letters (e.g., $x$), boldface lowercase letters (e.g., $\boldsymbol{x}=\left(x_i\right)$), capital letters (e.g., $X=\left(x_{ij}\right)$), and calligraphic letters (e.g., $\mathcal{X}=\left(x_{i_1i_2\dots i_d}\right)$) to denote scalars, vectors, matrices, and higher-order (order three or more) tensors, respectively. Denote $\mathbb{R}^{n_1\times n_2\times\dots\times n_d}$ to be the space of real tensors of order $d$ with dimension $n_1\times n_2\times\dots\times n_d$. The same notation applies for a vector space and a matrix space when $d=1$ and $d=2$, respectively. Denote $\mathbb{N}$ to be the set of positive integers.
The Frobenius inner product between two tensors $\mathcal{U},\mathcal{V}\in\mathbb{R}^{n_1\times n_2\times\dots\times n_d}$ is defined as
\[
\langle\mathcal{U}, \mathcal{V}\rangle := \sum_{i_1=1}^{n_1}\sum_{i_2=1}^{n_2} \dots\sum_{i_d=1}^{n_d} u_{i_1i_2\dots i_d} v_{i_1i_2\dots i_d}.
\]
Its induced Frobenius norm is naturally defined as $\|\mathcal{T}\|:=\sqrt{\langle\mathcal{T},\mathcal{T}\rangle}$. The two terms automatically apply to tensors of order two (matrices) and tensors of order one (vectors) as well. This is the conventional norm (a norm without a subscript) used throughout the paper.
All blackboard bold capital letters denote sets, such as $\mathbb{R}^n$, the unit sphere $\mathbb{S}^n$, a spherical cap $\mathbb{B}^n(\boldsymbol{v},\tau)$, the standard basis $\mathbb{E}^n:=\{\boldsymbol{e}_1,\boldsymbol{e}_2,\dots,\boldsymbol{e}_n\}$ of $\mathbb{R}^n$, where the superscript $n$ indicates that the concerned set is a subset of $\mathbb{R}^n$. Three vector operations are used, namely the outer product $\otimes$, the Kronecker product $\boxtimes$, and appending vectors $\vee$. Specifically, if $\boldsymbol{x}\in\mathbb{R}^{n_1}$ and $\boldsymbol{y}\in\mathbb{R}^{n_2}$, then
\begin{align*}
\boldsymbol{x}\otimes\boldsymbol{y}&=\boldsymbol{x}\boldsymbol{y}^{\textnormal{T}}\in\mathbb{R}^{n_1\times n_2} \\
\boldsymbol{x}\boxtimes\boldsymbol{y}&=(x_1\boldsymbol{y}^{\textnormal{T}},x_2\boldsymbol{y}^{\textnormal{T}},\dots,x_{n_1}\boldsymbol{y}^{\textnormal{T}})^{\textnormal{T}}\in\mathbb{R}^{n_1n_2} \\
\boldsymbol{x}\vee\boldsymbol{y}&=(x_1,x_2,\dots,x_{n_1},y_1,y_2,\dots,y_{n_2})^{\textnormal{T}}\in\mathbb{R}^{n_1+n_2}.
\end{align*}
These three operators also apply to vector sets via element-wise operations.
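For concreteness, the three operations correspond to standard NumPy primitives (an illustrative sketch):
\begin{verbatim}
import numpy as np

x = np.array([1.0, 2.0])         # x in R^2
y = np.array([3.0, 4.0, 5.0])    # y in R^3

outer  = np.outer(x, y)          # outer product: a 2 x 3 matrix
kron   = np.kron(x, y)           # Kronecker product: a vector in R^6
append = np.concatenate([x, y])  # appending: a vector in R^5
\end{verbatim}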
As a convention, the notation $\Omega(f(n))$ denotes a quantity of the same order of magnitude as $f(n)$, i.e., there are positive universal constants $\alpha,\beta$ and $n_0$ such that $\alpha f(n)\le\Omega(f(n))\le\beta f(n)$ for all $n\ge n_0$.
\section{Sphere covering by spherical caps}\label{sec:h}
This section is devoted to explicit constructions of spherical caps to cover $\mathbb{S}^n$ in $\mathbb{R}^n$ for $n\ge2$.
Although this is more commonly denoted by $\mathbb{S}^{n-1}$ in the literature, our notation is to emphasize that the sphere resides in the space of $\mathbb{R}^n$ and to better understand the constructions via Kronecker products.
Recall that for $\boldsymbol{v}\in\mathbb{S}^n$ and $-1\le\tau\le 1$, $\mathbb{B}^n(\boldsymbol{v}, \tau)=\left\{\boldsymbol{x}\in\mathbb{S}^n: \boldsymbol{x}^{\textnormal{T}}\boldsymbol{v} \ge \tau\right\}$ is a closed spherical cap with the angular radius $\arccos\tau$. Obviously, $\mathbb{B}^n(\boldsymbol{v}, -1)=\mathbb{S}^n$, $\mathbb{B}^n(\boldsymbol{v}, 0)$ is a hemisphere, and $\mathbb{B}^n(\boldsymbol{v}, 1)$ is a single point. A set of unit vectors $\mathbb{H}^n=\{\boldsymbol{v}_i\in\mathbb{S}^n:i=1,2,\dots,m\}$ is called a $\tau$-hitting set with cardinality $m$ if $\bigcup_{i=1}^m \mathbb{B}^n\left(\boldsymbol{v}_i, \tau\right) = \mathbb{S}^n$, i.e., the $m$ spherical caps cover the unit sphere. Denote all $\tau$-hitting sets of $\mathbb{S}^n$ with cardinality no more than $m$ to be
\[
\mathbb{T}(n,\tau,m):=\left\{\mathbb{H}^n\subseteq\mathbb{S}^n: \mathbb{H}^n \mbox{ is a $\tau$-hitting set},\,|\mathbb{H}^n|\le m\right\}.
\]
It is easy to see the monotonicity, i.e.,
\[
\begin{array}{ll}
\mathbb{T}(n,\tau_2,m)\subseteq\mathbb{T}(n,\tau_1,m) &\mbox{if } \tau_1\le\tau_2 \\
\mathbb{T}(n,\tau,m_1)\subseteq\mathbb{T}(n,\tau,m_2) &\mbox{if } m_1\le m_2.
\end{array}
\]
We will be working around $\tau$-hitting sets with $\tau=\Omega\left(\sqrt{\frac{\ln n}{n}}\right)$. This is the largest possible if the cardinality of the hitting set is bounded by $O(n^\alpha)$ with some universal constant {\color{black}$\alpha>1$}; see e.g.~\cite{HJLZ14}. Other useful $\tau$-hitting sets with larger $\tau$'s are also constructed as byproducts that are of independent interest. The aim is to construct hitting sets with the cardinality as small as possible. Let us first look at some elementary ones.
It is obvious that for any $\boldsymbol{v}\in\mathbb{S}^n$,
\[
\{\boldsymbol{v}\}\in\mathbb{T}(n,-1,1) \mbox{ and } \{\boldsymbol{v},-\boldsymbol{v}\}\in\mathbb{T}(n,0,2)
\]
both attaining the minimum cardinality. For $\tau>0$, the famous Lusternik-Schnirelmann theorem~\cite{B06} rules out any possible $\tau$-hitting set with cardinality no more than $n$. There is an elegant construction of $\frac{1}{n}$-hitting sets with cardinality $n+1$. If $\boldsymbol{v}_1,\boldsymbol{v}_2,\dots,\boldsymbol{v}_{n+1}$ are the vertices of a regular simplex centered at the origin and inscribed in $\mathbb{S}^n$, then
\[
\left\{\boldsymbol{v}_1,\boldsymbol{v}_2,\dots,\boldsymbol{v}_{n+1}\right\}\in\mathbb{T}\left(n,\frac{1}{n},n+1\right).
\]
The detailed construction is more easily carried out in $\mathbb{R}^{n+1}$ and is left to interested readers.
Raising $\tau$ to $\frac{1}{\sqrt{n}}$ without increasing the number of vectors too much, one has for any basis $\{\boldsymbol{v}_1,\boldsymbol{v}_2,\dots,\boldsymbol{v}_n\}$ of $\mathbb{R}^n$,
\[
\left\{\pm\boldsymbol{v}_1,\pm\boldsymbol{v}_2,\dots,\pm\boldsymbol{v}_n\right\}\in\mathbb{T}\left(n,\frac{1}{\sqrt{n}},2n\right).
\]
However, slightly increasing this threshold, say to $\sqrt{\frac{\ln n}{n}}$, will significantly increase the cardinality of a hitting set. As mentioned earlier, if the cardinality is bounded by a polynomial function of $n$, then the largest possible $\tau$ is $\Omega\left(\sqrt{\frac{\ln n}{n}}\right)$.
Toward the extreme case that $\tau$ is close to one, the longitude and latitude of the Earth provide a clue. For any $\boldsymbol{x}=(x_1,x_2,\dots,x_n)^{\textnormal{T}}\in\mathbb{S}^n$, we denote its spherical coordinates to be $(\varphi_1,\varphi_2,\dots,\varphi_{n-1})$ with $\varphi_1,\varphi_2,\dots,\varphi_{n-2}\in[0,\pi]$ and $\varphi_{n-1}\in[0,2\pi)$ such that
\[ \begin{aligned}x_{1}&=\cos\varphi _{1}\\x_{2}&=\sin\varphi _{1}\cos\varphi _{2}\\ x_{3}&=\sin\varphi _{1}\sin\varphi _{2}\cos\varphi _{3}\\
&\;\;\vdots \\x_{n-1}&=\sin\varphi _{1}\dots \sin\varphi _{n-2}\cos\varphi _{n-1}\\x_{n}&=\sin\varphi _{1}\dots \sin\varphi _{n-2}\sin\varphi _{n-1}.\end{aligned}\]
If we let $\mathbb{D}_1=\left\{\frac{k\pi}{m}:k=0,1,\dots,m-1\right\}$ and $\mathbb{D}_{2}=\left\{\frac{k\pi}{m}:k=0,1,\dots,2m-1\right\}$, then the grid points in spherical coordinates (see~\cite[Lemma 3.1]{HJL22}) {\color{black}are}
\begin{equation}\label{eq:grid}
\mathbb{H}^n_0(m):=\left\{\boldsymbol{x}\in\mathbb{S}^n: \varphi_1,\varphi_2,\dots,\varphi_{n-2}\in\mathbb{D}_1,\,\varphi_{n-1}\in\mathbb{D}_2 \right\} \in \mathbb{T}\left( n,1-\frac{\pi^2(n-1)}{8m^2},2m^{n-1} \right).
\end{equation}
To see why $\mathbb{H}^n_0(m)$ is such a hitting set, note that for any $\boldsymbol{z}\in\mathbb{S}^n$ with spherical coordinates $\varphi(\boldsymbol{z})$, there must exist $\boldsymbol{x}\in\mathbb{H}^n_0(m)$ with spherical coordinates $\varphi(\boldsymbol{x})$ such that
$$
\|\boldsymbol{x}-\boldsymbol{z}\| \le \|\varphi(\boldsymbol{x})-\varphi(\boldsymbol{z})\|\le \frac{1}{2}\cdot\frac{\pi}{m}\cdot\sqrt{n-1}=\frac{\pi\sqrt{n-1}}{2m}.
$$
Since $\|\boldsymbol{x}\|=\|\boldsymbol{z}\|=1$, the above further leads to
$$
\boldsymbol{x}^{\textnormal{T}}\boldsymbol{z} = \frac{1}{2}\left(2-\|\boldsymbol{x}-\boldsymbol{z}\|^2\right)\ge \frac{1}{2}\left(2-\frac{\pi^2(n-1)}{4m^2}\right)=
1-\frac{\pi^2(n-1)}{8m^2}.
$$
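A minimal sketch of the grid construction (illustrative Python; practical only for small $n$ and $m$, as the grid has $2m^{n-1}$ points):
\begin{verbatim}
import numpy as np
from itertools import product

def spherical_grid(n, m):
    # H^n_0(m): phi_1,...,phi_{n-2} range over D_1 = {k*pi/m, k < m}
    # and phi_{n-1} over D_2 = {k*pi/m, k < 2m}; requires n >= 2.
    D1 = [k * np.pi / m for k in range(m)]
    D2 = [k * np.pi / m for k in range(2 * m)]
    pts = []
    for phis in product(*([D1] * (n - 2) + [D2])):
        x, s = np.empty(n), 1.0
        for i, phi in enumerate(phis):
            x[i] = s * np.cos(phi)   # x_i = sin(...)*cos(phi_i)
            s *= np.sin(phi)
        x[n - 1] = s                 # trailing product of sines
        pts.append(x)
    return np.array(pts)
\end{verbatim}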
\subsection{Randomized $\Omega\big(\sqrt{\ln n/n}\big)$-hitting sets}\label{sec:h1}
It is instructive to consider randomized hitting sets via the uniform distribution on $\mathbb{S}^n$. This is also important as it guarantees the existence of $\Omega\left(\sqrt{\frac{\ln n}{n}}\right)$-hitting sets. The following probability bound (see e.g.,~\cite{HJLZ14,BGKKLS98}) provides insight into such hitting sets.
\begin{lemma} \label{thm:inner}
For any $\gamma\in(0,\frac{n}{\ln n})$, if $\boldsymbol{u}$ and $\boldsymbol{v}$ are drawn independently and uniformly on $\mathbb{S}^n$, then there is a constant $\delta_\gamma$ depending on $\gamma$ only, such that
\[
\textnormal{Prob}\,\left\{\boldsymbol{u}^{\textnormal{T}}\boldsymbol{v}\ge\sqrt{\frac{\gamma\ln n}{n}}\right\}\ge\frac{\delta_\gamma}{n^{2\gamma}\sqrt{\ln n}}.
\]
\end{lemma}
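The bound can be checked empirically. By rotational invariance one may fix $\boldsymbol{v}=\boldsymbol{e}_1$, so the event depends only on the first coordinate of a uniform point; a sketch (our illustrative code):
\begin{verbatim}
import numpy as np

def tail_probability(n, gamma, trials=200000, seed=0):
    # Empirical Prob{ u^T v >= sqrt(gamma ln n / n) } with v = e_1.
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((trials, n))
    U /= np.linalg.norm(U, axis=1, keepdims=True)
    return (U[:, 0] >= np.sqrt(gamma * np.log(n) / n)).mean()
\end{verbatim}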
In fact, it is not difficult to cover $1-\epsilon$ of the volume of the unit sphere for any $\epsilon>0$ by applying Lemma~\ref{thm:inner} with the union bound; see~\cite{HJLZ14}. However, this is a much weaker statement than Theorem~\ref{thm:h1} below. In particular, covering $1-\epsilon$ of the volume of $\mathbb{S}^n$ for any given $\epsilon >0$ does not even guarantee the existence of a full cover. The following randomized hitting set has cardinality $O(n^\alpha)$ for some constant $\alpha>1$.
\begin{theorem}\label{thm:h1}
For any $\epsilon>0$ and $\gamma\in(0,\frac{n}{\ln n})$, there is a constant $\kappa_\gamma>0$ depending on $\gamma$ only, such that
\[\mathbb{H}^n_1(\gamma,\epsilon):=\left\{\boldsymbol{z}_i : \boldsymbol{z}_i \mbox{ i.i.d.\ uniform on } \mathbb{S}^n,\ i=1,2,\dots, \left\lceil \kappa_\gamma n^{2\gamma}\sqrt{\ln n}\left(n\ln n+\ln\frac{1}{\epsilon}\right)\right\rceil \right\}\]
satisfies
\[\textnormal{Prob}\,\left\{\mathbb{H}^n_1(\gamma,\epsilon)\in\mathbb{T}\left(n, \sqrt{\frac{\gamma\ln n}{2n}}, \left\lceil \kappa_\gamma n^{2\gamma}\sqrt{\ln n}\left(n\ln n+\ln\frac{1}{\epsilon}\right)\right\rceil \right)\right\}\ge1-\epsilon.\]
\end{theorem}
\begin{proof}
The sphere covering is established in two steps, a spherical grid $\mathbb{H}^n_0$ to cover the whole sphere and the randomized hitting set $\mathbb{H}^n_1$ to cover the grid.
According to~\eqref{eq:grid} one has $\mathbb{H}^n_0(m)\in\mathbb{T}\left( n,1-\frac{\pi^2(n-1)}{8m^2},2m^{n-1} \right)$. Let $m\ge n$. For any $\boldsymbol{x}\in\mathbb{S}^n$, there exists $\boldsymbol{y}\in \mathbb{H}^n_0(m)$ such that $\boldsymbol{x}^{\textnormal{T}}\boldsymbol{y}\ge 1-\frac{\pi^2(n-1)}{8m^2}$. By Lemma~\ref{thm:inner}, for any $\boldsymbol{z}_i\in\mathbb{H}^n_1(\gamma,\epsilon)$, there exists a $\delta_\gamma$ depending only on $\gamma$ such that
$\textnormal{Prob}\,\left\{\boldsymbol{y}^{\textnormal{T}}\boldsymbol{z}_i \ge \sqrt{\frac{\gamma\ln n}{n}}\right\} \ge \frac{\delta_\gamma}{n^{2\gamma}\sqrt{\ln n}}$,
i.e., $\textnormal{Prob}\,\left\{\boldsymbol{y}^{\textnormal{T}}\boldsymbol{z}_i < \sqrt{\frac{\gamma\ln n}{n}}\right\} \le 1-\frac{\delta_\gamma}{n^{2\gamma}\sqrt{\ln n}}$.
Denote $t=|\mathbb{H}^n_1(\gamma,\epsilon)|$. By the independence of $\boldsymbol{z}_i$'s, we have
\[
\textnormal{Prob}\,\left\{\boldsymbol{y}\notin\bigcup_{i=1}^t\mathbb{B}^n\left(\boldsymbol{z}_i,\sqrt{\frac{\gamma\ln n}{n}}\right)\right\} = \textnormal{Prob}\,\left\{\max_{1\le i\le t}\boldsymbol{y}^{\textnormal{T}}\boldsymbol{z}_i < \sqrt{\frac{\gamma\ln n}{n}}\right\} \le \left(1-\frac{\delta_\gamma}{n^{2\gamma}\sqrt{\ln n}}\right)^t.
\]
Since $\left|\mathbb{H}^n_0(m)\right|=2m^{n-1}$ and the points of $\mathbb{H}^n_0(m)$ are fixed, the probability that $\bigcup_{i=1}^t\mathbb{B}^n\left(\boldsymbol{z}_i,\sqrt{\frac{\gamma\ln n}{n}}\right)$ fails to cover at least one point of $\mathbb{H}^n_0(m)$ is no more than $2m^{n-1}\left(1-\frac{\delta_\gamma}{n^{2\gamma}\sqrt{\ln n}}\right)^t$. In other words,
\begin{align*}
\textnormal{Prob}\,\left\{\mathbb{H}^n_0(m)\subseteq\bigcup_{i=1}^t\mathbb{B}^n\left(\boldsymbol{z}_i,\sqrt{\frac{\gamma\ln n}{n}}\right)\right\}
\ge 1-2m^{n-1}\left(1-\frac{\delta_\gamma}{n^{2\gamma}\sqrt{\ln n}}\right)^t.
\end{align*}
By noticing that $m\ge n\ge2$, it is not difficult to verify that if $t\ge \frac{n^{2\gamma}\sqrt{\ln n}}{\delta_\gamma}\left(n\ln m+\ln\frac{1}{\epsilon}\right)$, then the right-hand side of the above is at least $1-\epsilon$.
To summarize, if $\mathbb{H}^n_0(m)\subseteq\bigcup_{i=1}^t\mathbb{B}^n\left(\boldsymbol{z}_i,\sqrt{\frac{\gamma\ln n}{n}}\right)$, then for any $\boldsymbol{x}\in\mathbb{S}^n$, there exists $\boldsymbol{y}\in\mathbb{H}^n_0(m)$ such that $\boldsymbol{y}^{\textnormal{T}}\boldsymbol{x}\ge 1-\frac{\pi^2(n-1)}{8m^2}$ and further there exists $\boldsymbol{z}\in\mathbb{H}^n_1(\gamma,\epsilon)$ such that $\boldsymbol{z}^{\textnormal{T}}\boldsymbol{y}\ge\sqrt{\frac{\gamma\ln n}{n}}$. If we are able to verify $\boldsymbol{z}^{\textnormal{T}}\boldsymbol{x}\ge\sqrt{\frac{\gamma\ln n}{2n}}$, then we must have $\bigcup_{i=1}^t\mathbb{B}^n\left(\boldsymbol{z}_i,\sqrt{\frac{\gamma\ln n}{2n}}\right)=\mathbb{S}^n$. This finally leads to
\[
\textnormal{Prob}\,\left\{ \bigcup_{i=1}^t\mathbb{B}^n\left(\boldsymbol{z}_i,\sqrt{\frac{\gamma\ln n}{2n}}\right)=\mathbb{S}^n \right\}
\ge \textnormal{Prob}\,\left\{\mathbb{H}^n_0(m)\subseteq\bigcup_{i=1}^t\mathbb{B}^n\left(\boldsymbol{z}_i,\sqrt{\frac{\gamma\ln n}{n}}\right)\right\} \ge 1-\epsilon.
\]
In order to show that $\boldsymbol{z}^{\textnormal{T}}\boldsymbol{x}\ge\sqrt{\frac{\gamma\ln n}{2n}}$, we let $\theta_1=\arccos (\boldsymbol{y}^{\textnormal{T}}\boldsymbol{x})$ and $\theta_2=\arccos (\boldsymbol{z}^{\textnormal{T}}\boldsymbol{y})$. Since $\cos\theta_1\ge1-\frac{\pi^2(n-1)}{8m^2}\ge 1-\frac{3}{2m}$, one has
$|\sin\theta_1| \le \sqrt{1-\left(1-\frac{3}{2m}\right)^2}\le\sqrt{\frac{3}{m}}$. Therefore,
\begin{equation}\label{eq:randombound}
\boldsymbol{z}^{\textnormal{T}}\boldsymbol{x} \ge \cos(\theta_1+\theta_2)
=\cos\theta_1\cos\theta_2-\sin\theta_1\sin\theta_2
\ge \left(1-\frac{3}{2m}\right)\cdot \sqrt{\frac{\gamma\ln n}{n}}-\sqrt{\frac{3}{m}}\cdot 1\ge \sqrt{\frac{\gamma\ln n}{2n}}
\end{equation}
if $n\ge n_0$ for some $n_0$ that depends on $\gamma$ only. By choosing $m=n$ in $\mathbb{H}^n_0(m)$ and $\kappa_\gamma=\frac{1}{\delta_\gamma}$, we obtain the desired $t$ for $n\ge n_0$.
To finish the final piece for the remaining $n\le n_0$, we may enlarge $m$ in $\mathbb{H}^n_0(m)$ in order for~\eqref{eq:randombound} to hold. If we correspondingly choose $\kappa_\gamma= \frac{\ln m}{\delta_\gamma \ln n}$, this will ensure
\[
\kappa_\gamma n^{2\gamma}\sqrt{\ln n}\left(n\ln n+\ln\frac{1}{\epsilon}\right)= \frac{\ln m}{\ln n}\cdot\frac{n^{2\gamma}\sqrt{\ln n}}{\delta_\gamma }\left(n\ln n+\ln\frac{1}{\epsilon}\right) \ge \frac{n^{2\gamma}\sqrt{\ln n}}{\delta_\gamma}\left(n\ln m+\ln\frac{1}{\epsilon}\right).
\]
Taking the largest such $\kappa_\gamma$ over the finitely many $n\le n_0$ provides the final $\kappa_\gamma$, which depends only on $n_0$ and hence on $\gamma$ only.
\end{proof}
Theorem~\ref{thm:h1} not only provides a simple construction with varying $\gamma$ but also trivially implies the existence of hitting sets in $\mathbb{T}\left(n, \Omega\left(\sqrt{\frac{\ln n}{n}}\right), O(n^\alpha)\right)$. Although $\mathbb{H}^n_1(\gamma,\epsilon)$ is a full sphere covering with probability $1 - \epsilon$ for any small $\epsilon>0$, it cannot be used to derive deterministic algorithms, and in some scenarios even its feasibility may be in question, as we will see when approximating the tensor nuclear norm in Section~\ref{sec:nnorm}. Moreover, verifying whether $\mathbb{H}^n_1(\gamma,\epsilon)$ covers the sphere or not---the sphere coverage verification problem---is NP-hard~\cite{PPL12}. Therefore, explicit and deterministic constructions of hitting sets in $\mathbb{T}\left(n, \Omega\left(\sqrt{\frac{\ln n}{n}}\right), O(n^\alpha)\right)$ are important. To get this job done, let us first look at two types of $\tau$-hitting sets with larger $\tau$.
\subsection{An $\Omega\big(1/{\sqrt{\ln n}}\big)$-hitting set}\label{sec:h2}
As the hitting ratio $\tau$ goes beyond $\Omega\left(\sqrt{\frac{\ln n}{n}}\right)$, we have to give up the polynomiality of $n$. Let us consider
\begin{equation*}
\mathbb{H}^n_2:=\left\{\frac{\boldsymbol{z}}{\|\boldsymbol{z}\|}\in\mathbb{S}^n: \boldsymbol{z}\in\{-1,0,1\}^n,\,\|\boldsymbol{z}\|\neq 0\right\}.
\end{equation*}
It is obvious that $|\mathbb{H}^n_2|<3^n$. We need to work out how large $\tau$ is for this $\tau$-hitting set; this is essentially Theorem~\ref{thm:h2} below. Interestingly, some results in matroid theory will be used in the proof.
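Before the analysis, we note that constructing $\mathbb{H}^n_2$ explicitly is immediate; a sketch (illustrative Python, feasible only for small $n$) follows. Since no two distinct nonzero vectors in $\{-1,0,1\}^n$ normalize to the same point, the cardinality is exactly $3^n-1$.
\begin{verbatim}
import numpy as np
from itertools import product

def hitting_set_h2(n):
    # All nonzero {-1,0,1}^n vectors, projected onto the unit sphere.
    pts = []
    for z in product([-1.0, 0.0, 1.0], repeat=n):
        z = np.array(z)
        nrm = np.linalg.norm(z)
        if nrm > 0:
            pts.append(z / nrm)
    return np.array(pts)
\end{verbatim}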
To begin with, let $\mathbb{I}:= \{1, 2, \dots, n\}$ and its power set $2^{\mathbb{I}}:= \{\mathbb{D}: \mathbb{D} \subseteq \mathbb{I} \}$. For any $\mathbb{D}\in 2^{\mathbb{I}}$, define
\[
\mathbb{Y}^n_\mathbb{D}:=\left\{\boldsymbol{y}\in\mathbb{R}^n : y_i \in\{-1,1\} \mbox{ for } i \in \mathbb{D} \mbox{ and } y_i = 0 \mbox{ for } i \in \mathbb{I}\setminus\mathbb{D}\right\},
\]
and denote $\mathbb{Y}^n =\bigcup_{\mathbb{D} \in 2^{\mathbb{I}}\setminus\{\emptyset\}} \mathbb{Y}^n_{\mathbb{D}}$. It is easy to see that $\mathbb{H}^n_2=\left\{\frac{\boldsymbol{y}}{\|\boldsymbol{y}\|}:\boldsymbol{y}\in\mathbb{Y}^n\right\}$. Our goal is to establish a lower bound of
$\min_{\boldsymbol{x} \in \mathbb{S}^n}\max_{\boldsymbol{z} \in \mathbb{H}^n_2} \boldsymbol{x}^{\textnormal{T}} \boldsymbol{z}$.
\begin{theorem}\label{thm:h2}
It holds that
\begin{equation}\label{eq:h2thm}
\min_{\boldsymbol{x} \in \mathbb{S}^n}\max_{\boldsymbol{z} \in \mathbb{H}^n_2} \boldsymbol{x}^{\textnormal{T}} \boldsymbol{z}=\min_{\boldsymbol{x} \in \mathbb{S}^n}\max_{\boldsymbol{y} \in \mathbb{Y}^n} \frac{\boldsymbol{x}^{\textnormal{T}} \boldsymbol{y}}{\|\boldsymbol{y}\|} = \min_{\boldsymbol{x} \in \mathbb{S}^n}\max_{\mathbb{D} \in 2^{\mathbb{I}}\setminus \{\emptyset\}}\max_{\boldsymbol{y} \in \mathbb{Y}^n_{\mathbb{D}}} \frac{\boldsymbol{x}^{\textnormal{T}} \boldsymbol{y}}{\|\boldsymbol{y}\|} =\min_{\boldsymbol{x} \in \mathbb{S}^n}\max_{\mathbb{D} \in 2^{\mathbb{I}}\setminus \{\emptyset\}} \sum_{i \in \mathbb{D}} \frac{|x_i|} {\sqrt{|\mathbb{D} |}} \ge \frac{2}{\sqrt{\ln n+5 }}.
\end{equation}
\end{theorem}
It is straightforward to verify all the equalities in~\eqref{eq:h2thm}. To show the inequality, let us consider the following optimization problem
\[
\max\left\{\|\boldsymbol{x}\|^2 : \sum_{i \in \mathbb{D}} |x_i| \le \alpha\sqrt{|\mathbb{D}|} \mbox{ for all } \mathbb{D} \in {2^{\mathbb{I}}\setminus\{\emptyset\}} \right\},
\]
where $\alpha\ge0$ is a given constant. This is equivalent to
\begin{equation}\label{eq:polymatroid}
\max\left\{\|\boldsymbol{x}\|^2 : \boldsymbol{x}\ge{\bf0},\,\sum_{i \in \mathbb{D}} x_i \le \alpha\sqrt{|\mathbb{D}|} \mbox{ for all } \mathbb{D} \in2^{\mathbb{I}} \right\},
\end{equation}
which is to maximize a strictly convex quadratic function over a polyhedron
\[\mathbb{X}^n =\left\{ \boldsymbol{x} \in \mathbb{R}^n_{+} : \sum_{i \in \mathbb{D}} x_i \le \alpha\sqrt{|\mathbb{D}|} \mbox{ for all } \mathbb{D} \in {2^{\mathbb{I}}}\right\}.\]
Therefore, the optimal solution of~\eqref{eq:polymatroid} must be attained at some extreme point of $\mathbb{X}^n$. To compute the optimal value, we now characterize extreme optimal points of~\eqref{eq:polymatroid}. We need the following two technical results as preparation.
\begin{lemma}\label{thm:polymatroid}
Let $g: 2^{\mathbb{I}} \to \mathbb{R}$ be defined by $g(\mathbb{D}) = \alpha \sqrt{|\mathbb{D}|}$ with $\alpha >0$. Then $\mathbb{X}^n$ is a polymatroid with respect to the rank function $g$ and the index set $\mathbb{I}$.
\end{lemma}
\begin{proof}
It suffices to show that $g$ is a rank function, i.e., normalized, nondecreasing and submodular. Obviously, $g(\emptyset)=0$ and $g(\mathbb{D}_1) = {\color{black}\alpha \sqrt{|\mathbb{D}_1|} \le \alpha \sqrt{|\mathbb{D}_2|} }= g(\mathbb{D}_2)$ whenever $\mathbb{D}_1 \subseteq \mathbb{D}_2 \subseteq \mathbb{I}$. It remains to show the submodularity
\[
g(\mathbb{D}_1 \cup \mathbb{D}_2) + g(\mathbb{D}_1 \cap \mathbb{D}_2) \le g(\mathbb{D}_1) + g(\mathbb{D}_2)\quad \forall\;\mathbb{D}_1, \mathbb{D}_2 \subseteq \mathbb{I}.
\]
If we let $|\mathbb{D}_1| = a$, $|\mathbb{D}_2\setminus \mathbb{D}_1| = b$, and $|\mathbb{D}_1 \cap \mathbb{D}_2|=c$, then the above inequality is equivalent to
\[
\sqrt{a+b}+ \sqrt{c} \le \sqrt{a} + \sqrt{b+c}\quad \forall\;a\ge c\ge0, b\ge0.
\]
This is actually implied by
\[
\sqrt{a+b}-\sqrt{a} = \frac{b}{\sqrt{a+b}+\sqrt{a}} \le \frac{b}{\sqrt{b+c} +\sqrt{c}} = \sqrt{b+c} -\sqrt{c}.
\]
\end{proof}
The next result is well known regarding an optimal solution of maximizing a linear function over a polymatroid; see e.g.,~\cite{E03}.
\begin{lemma}\label{thm:matroid-greedy}
Consider the linear program
\begin{equation*}
\max\left\{\boldsymbol{a}^{\textnormal{T}}\boldsymbol{x} : \boldsymbol{x}\ge{\bf0},\,\sum_{i\in\mathbb{D}} x_i \le g(\mathbb{D}) \mbox{ for all } \mathbb{D} \in 2^{\mathbb{I}} \right\}
\end{equation*}
where $\boldsymbol{a} \in \mathbb{R}^n_+$ and $g$ is a rank function. Let $\pi = (\pi_1,\pi_2,\dots, \pi_n)$ be a permutation of $\mathbb{I}$ with $a_{\pi_1} \ge a_{\pi_2} \ge \dots \ge a_{\pi_n}\ge0$. An optimal solution $\boldsymbol{x}$ to the linear program can be obtained by letting
\[
x_{\pi_i}=\left\{
\begin{array}{ll}
g(\{ \pi_i \}) & i=1\\
g(\{ \pi_1,\dots, \pi_i \}) - g(\{ \pi_1,\dots, \pi_{i-1} \}) & i=2,\dots,n.
\end{array}
\right.
\]
\end{lemma}
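Since the rank function here depends only on $|\mathbb{D}|$, the greedy solution of Lemma~\ref{thm:matroid-greedy} takes a particularly simple form; a sketch (illustrative code, with $g$ supplied as a function of the cardinality and $g(0)=0$):
\begin{verbatim}
import numpy as np

def greedy_polymatroid(a, g):
    # Greedy optimum of max a^T x over {x >= 0 : sum_{i in D} x_i
    # <= g(|D|) for all D subsets of I}; requires g(0) = 0.
    order = np.argsort(-np.asarray(a))  # indices with a descending
    x = np.zeros(len(a))
    for rank, i in enumerate(order, start=1):
        x[i] = g(rank) - g(rank - 1)
    return x

alpha = 1.0
x = greedy_polymatroid([0.9, 0.5, 0.7], lambda s: alpha * np.sqrt(s))
# x = [alpha, alpha*(sqrt(3)-sqrt(2)), alpha*(sqrt(2)-1)]
\end{verbatim}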
We can now characterize extreme optimal points and upper bound the optimal value of~\eqref{eq:polymatroid}.
\begin{proposition}\label{thm:matroid-value-bound}
The optimal value of~\eqref{eq:polymatroid} is no more than $\left(\frac{\ln n+5}{4} \right) \alpha^2 $.
\end{proposition}
\begin{proof}
Denote $\boldsymbol{z}$ to be an optimal solution of~\eqref{eq:polymatroid}.
In fact, $\boldsymbol{z}$ is the unique optimal solution to the linear program
\begin{equation}\label{prob:LP-matroid}
\max_{\boldsymbol{x}\in\mathbb{X}^n} \boldsymbol{z}^{\textnormal{T}}\boldsymbol{x}.
\end{equation}
If this is not true, we then have another $\boldsymbol{y} \in \mathbb{X}^n$ with $\boldsymbol{y}\neq\boldsymbol{z}$ and ${\boldsymbol{z}}^{\textnormal{T}} \boldsymbol{y} \ge \boldsymbol{z}^{\textnormal{T}}\boldsymbol{z}= \|\boldsymbol{z}\|^2$. By the Cauchy--Schwarz inequality, $\|\boldsymbol{z}\|^2\le\boldsymbol{z}^{\textnormal{T}}\boldsymbol{y}\le\|\boldsymbol{z}\|\cdot\|\boldsymbol{y}\|$, hence $\|\boldsymbol{y}\|\ge\|\boldsymbol{z}\|$, and equality throughout would force $\boldsymbol{y}=\boldsymbol{z}$. This implies that $\|\boldsymbol{y}\|^2 >\|\boldsymbol{z}\|^2$, invalidating the optimality of $\boldsymbol{z}$ to~\eqref{eq:polymatroid}.
Applying Lemma~\ref{thm:polymatroid} and Lemma~\ref{thm:matroid-greedy} to~\eqref{prob:LP-matroid} with $g(\mathbb{D}) = \alpha\sqrt{|\mathbb{D}|}$ and $\boldsymbol{a} = \boldsymbol{z} \in \mathbb{R}^n_+$ and choosing a permutation $\pi$ with $z_{\pi_1} \ge z_{\pi_2} \ge \dots \ge z_{\pi_n}\ge0$, one has
\[
z_{\pi_i}=\left\{
\begin{array}{ll}
\alpha & i=1\\
\alpha\left(\sqrt{i}-\sqrt{i-1}\right) & i=2,\dots,n.
\end{array}
\right.
\]
As a consequence,
\begin{eqnarray*}
\frac{\|\boldsymbol{z}\|^2}{\alpha^2} =1+\sum_{i=2}^n\left(\sqrt{i}-\sqrt{i-1}\right)^2
=1+\sum_{i=2}^n\left(\frac{1}{\sqrt{i}+\sqrt{i-1}}\right)^2
\le 1 + \sum_{i=2}^n\frac{1}{4(i-1)}
\le\frac{\ln n+5}{4},
\end{eqnarray*}
which shows that the optimal value of~\eqref{eq:polymatroid}, $\|\boldsymbol{z}\|^2$, is upper bounded by $\left(\frac{\ln n+5}{4} \right)\alpha^2$.
\end{proof}
We are ready to finish the final piece, i.e., to show the inequality in~\eqref{eq:h2thm}. If this is not true, then there is an $\boldsymbol{x} \in \mathbb{S}^n$ such that
$$\max_{\mathbb{D} \in {2^{\mathbb{I}}\setminus\{\emptyset\}}} \sum_{i \in \mathbb{D}} \frac{|x_i|} {\sqrt{|\mathbb{D} |}}\le \frac{2\beta }{\sqrt{\ln n+5 }} \mbox{ with } 0<\beta <1.$$
This means that $|\boldsymbol{x}|\in\mathbb{R}^n_+$ with $\|\boldsymbol{x}\|^2=1$ is a feasible solution to~\eqref{eq:polymatroid} for $\alpha = \frac{2\beta}{\sqrt{\ln n+5 }}$. However, according to Proposition~\ref{thm:matroid-value-bound}, the optimal value of this problem is no more than
\[
\left(\frac{\ln n+5}{4} \right) \alpha^2 = \left(\frac{\ln n+5}{4} \right) \frac{4\beta^2}{\ln n +5} = \beta^2 <1,
\]
giving rise to a contradiction. Finally, we conclude this part as below.
\begin{corollary}\label{thm:h2col}
It holds that
\begin{equation*}
\mathbb{H}^n_2\in\mathbb{T}\left(n,\frac{2}{\sqrt{\ln n+5}},3^n\right).
\end{equation*}
\end{corollary}
\subsection{$\Omega(1)$-hitting sets}\label{sec:h3}
$\mathbb{H}^n_2$ is simple and almost sufficient to help construct deterministic $\Omega\left(\sqrt{\frac{\ln n}{n}}\right)$-hitting sets, whose story will be revealed in the next subsection. To make this final small but important step, $\Omega(1)$-hitting sets are needed, and a finely tuned version of $\mathbb{H}^n_2$ is in order.
\begin{algorithm}\label{alg:h3}
Given $\mathbb{S}^n$ and two parameters $\alpha\ge1$ and $\beta\ge\alpha+1$, construct $\mathbb{H}^n_3(\alpha,\beta)$.
\vspace{-0.2cm}\noindent\hrulefill
\begin{enumerate}
\item Let $m=\left\lceil\log_\beta \alpha n \right\rceil$ and partition $\mathbb{I}$ into disjoint subsets $\mathbb{I}_1,\mathbb{I}_2,\dots,\mathbb{I}_m$ such that
\begin{equation}\label{eq:partition}
|\mathbb{I}_1|=n-\sum_{k=2}^m|\mathbb{I}_k| \mbox{ and } |\mathbb{I}_k|=\left\lfloor \frac{\alpha n}{\beta^{k-1}} \right\rfloor
\mbox{ for } k=2,3,\dots, m.
\end{equation}
\item For any partition $\{\mathbb{I}_1,\mathbb{I}_2,\dots,\mathbb{I}_m\}$ of $\mathbb{I}$ satisfying~\eqref{eq:partition}, construct a set of vectors
\[\mathbb{Z}^n(\mathbb{I}_1,\mathbb{I}_2,\dots,\mathbb{I}_m)=\left\{\boldsymbol{z}\in\mathbb{R}^n: z_i\in\left\{\pm1,\pm \beta^{\frac{k-1}{2}}\right\} \mbox{ if } i\in\mathbb{I}_k \mbox{ for } k=1,2,\dots,m \right\}.\]
\item Put all $\mathbb{Z}^n(\mathbb{I}_1,\mathbb{I}_2,\dots,\mathbb{I}_m)$'s together to form\[\mathbb{Z}^n=\bigcup_{\{\mathbb{I}_1,\mathbb{I}_2,\dots,\mathbb{I}_m\}\mbox{ is a partition of $\mathbb{I}$ satisfying~\eqref{eq:partition}}}\mathbb{Z}^n(\mathbb{I}_1,\mathbb{I}_2,\dots,\mathbb{I}_m).\]
\item Project the vectors in $\mathbb{Z}^n$ onto the unit sphere, i.e.,
$\mathbb{H}^n_3(\alpha,\beta):=\left\{\frac{\boldsymbol{z}}{\|\boldsymbol{z}\|}\in\mathbb{S}^n: \boldsymbol{z}\in \mathbb{Z}^n\right\}.$
\end{enumerate}
\vspace{-0.2cm}\hrulefill
\end{algorithm}
We see from the first step of Algorithm~\ref{alg:h3} that
\[
\sum_{k=2}^m |\mathbb{I}_k| \le \sum_{k=2}^m \frac{\alpha n}{\beta^{k-1}} =\frac{\alpha n}{\beta-1}-\frac{\alpha n}{\beta^{m-1}(\beta-1)}\le\frac{\alpha n}{\beta-1},
\]
implying that
\begin{equation}\label{eq:i1}
|\mathbb{I}_1|=|\mathbb{I}|-\sum_{k=2}^m |\mathbb{I}_k|\ge n-\frac{\alpha n}{\beta-1}=\left(1-\frac{\alpha}{\beta-1}\right)n \ge0.
\end{equation}
Therefore, the feasibility of the construction is guaranteed. On the other hand, as $m=\left\lceil\log_\beta \alpha n \right\rceil$, we have
\[
\sum_{k=2}^m (|\mathbb{I}_k|+1)\ge \sum_{k=2}^m \frac{\alpha n}{\beta^{k-1}}
= \frac{\alpha n}{\beta-1}-\frac{\alpha n}{\beta^{m-1}(\beta-1)}
\ge \frac{\alpha n}{\beta-1}-\frac{\beta}{\beta-1} \ge\frac{\alpha n}{\beta-1} -2,
\]
implying that
\begin{equation}\label{eq:i2}
|\mathbb{I}_1|=n-\sum_{k=2}^m |\mathbb{I}_k| = n+m-1 -\sum_{k=2}^m (|\mathbb{I}_k|+1) \le n+\log_\beta \alpha n-\frac{\alpha n}{\beta-1} +2
\le \left(1-\frac{\alpha}{\beta-1}\right)n +\log_\beta n+3.
\end{equation}
\begin{theorem}
For any $\boldsymbol{x}\in\mathbb{S}^n$, there exists $\boldsymbol{z}\in\mathbb{H}^n_3(\alpha,\beta)$ such that $\boldsymbol{z}^{\textnormal{T}}\boldsymbol{x}\ge \frac{\alpha-1}{\sqrt{\alpha\beta(\alpha+1)}}$.
\end{theorem}
\begin{proof}
For any given $\|\boldsymbol{x}\| = 1$, define the index sets
\begin{align}
\mathbb{D}_0(\boldsymbol{x})&=\left\{ i \in \mathbb{I} : |x_i| \le \frac{1}{\sqrt{\alpha n}} \right\} \nonumber \\
\mathbb{D}_k(\boldsymbol{x})&=\left\{ i \in \mathbb{I} : \sqrt{\frac{\beta^{k-1}}{\alpha n}} < |x_i| \le \sqrt{\frac{\beta^k}{\alpha n}} \right\}\quad k=1,2, \dots, m. \label{eq:ekx}
\end{align}
For any entry $x_i$ of $\boldsymbol{x}$, $|x_i|\le 1\le \sqrt{\frac{\beta^m}{\alpha n}}$, and so $\{\mathbb{D}_0(\boldsymbol{x}),\mathbb{D}_1(\boldsymbol{x}),\dots,\mathbb{D}_m(\boldsymbol{x})\}$ is a partition of $\mathbb{I}$.
We first estimate $|\mathbb{D}_k(\boldsymbol{x})|$ for $k\ge2$. {\color{black}It is} obvious that for $k\ge2$,
\[
\frac{\beta^{k-1}}{\alpha n} |\mathbb{D}_k(\boldsymbol{x})|= \sum_{i \in \mathbb{D}_k(\boldsymbol{x})}\frac{\beta^{k-1}}{\alpha n} < \sum_{i \in \mathbb{D}_k(\boldsymbol{x})}|x_i|^2 \le 1.
\]
This implies that $|\mathbb{D}_k(\boldsymbol{x})| < \frac{\alpha n}{\beta^{k-1}}$, i.e., $|\mathbb{D}_k(\boldsymbol{x})| \le\left\lfloor \frac{\alpha n}{\beta^{k-1}}\right\rfloor$ for $k=2,3, \dots, m$. Hence, there exists a partition $\{\mathbb{I}_1,\mathbb{I}_2,\dots,\mathbb{I}_m\}$ of $\mathbb{I}$ satisfying~\eqref{eq:partition} such that $\mathbb{D}_k(\boldsymbol{x})\subseteq \mathbb{I}_k$ for $k=2,3, \dots, m$. Furthermore, we may find a vector $\boldsymbol{z}\in \mathbb{Z}^n(\mathbb{I}_1,\mathbb{I}_2,\dots,\mathbb{I}_m)$ such that
\[
z_i = \left\{
\begin{array}{ll}
\textnormal{sign}\,(x_i) & i \in {\color{black}\mathbb{D}_0(\boldsymbol{x})} \\
\textnormal{sign}\,(x_i)\beta^{\frac{k-1}{2}} & i\in \mathbb{D}_k(\boldsymbol{x}) \mbox{ for } {\color{black}k=1,2,\dots,m,}
\end{array}
\right.
\]
where the $\textnormal{sign}\,$ function takes $1$ for nonnegative reals and $-1$ for negative reals.
In the following, we shall estimate $\boldsymbol{z}^{\textnormal{T}}\boldsymbol{x}$ and $\|\boldsymbol{z}\|$. First of all,
\[
\sum_{i \in \mathbb{D}_0(\boldsymbol{x})}{x_i}^2 \le \sum_{i \in \mathbb{D}_0(\boldsymbol{x})}\frac{1}{\alpha n} \le \frac{1}{\alpha} \mbox{ and } \sum_{k=1}^{m} \sum_{i \in \mathbb{D}_k(\boldsymbol{x})} {x_i}^2 = \sum_{i\in\mathbb{I}} {x_i}^2 - \sum_{i \in \mathbb{D}_0(\boldsymbol{x})}{x_i}^2 \ge 1-\frac{1}{\alpha}.
\]
Next, we have
\[
\sum_{i \in \mathbb{D}_0(\boldsymbol{x})}{z_i}^2 = |\mathbb{D}_0(\boldsymbol{x})| \le n \mbox{ and }
\sum_{k=1}^{m} \sum_{i \in \mathbb{D}_k(\boldsymbol{x})} {z_i}^2
=\sum_{k=1}^{m} \sum_{i \in \mathbb{D}_k(\boldsymbol{x})} \beta^{k-1}
< \sum_{k=1}^{m} \sum_{i \in \mathbb{D}_k(\boldsymbol{x})} \alpha n{x_i}^2 \le \alpha n.
\]
Summing the above two inequalities would give us $\|\boldsymbol{z}\|^2\le (\alpha+1)n$ since $\{\mathbb{D}_0(\boldsymbol{x}),\mathbb{D}_1(\boldsymbol{x}),\dots,\mathbb{D}_m(\boldsymbol{x})\}$ is a partition of $\mathbb{I}$.
Lastly, as $\textnormal{sign}\,(x_i)=\textnormal{sign}\,(z_i)$ for every $i\in\mathbb{I}$,
\[
\boldsymbol{z}^{\textnormal{T}} \boldsymbol{x} \ge \sum_{k=1}^{m} \sum_{i \in \mathbb{D}_k(\boldsymbol{x})} \beta^{\frac{k-1}{2}} |x_i|
\ge \sum_{k=1}^{m} \sum_{i \in \mathbb{D}_k(\boldsymbol{x})} \sqrt{\frac{\alpha n}{\beta}}|x_i|^2
= \sqrt{\frac{\alpha n}{\beta}} \sum_{k=1}^{m} \sum_{i \in \mathbb{D}_k(\boldsymbol{x})} |x_i|^2
\ge \sqrt{\frac{\alpha n}{\beta}} \left(1-\frac{1}{\alpha}\right),
\]
where the second inequality is due to the upper bound in~\eqref{eq:ekx}.
To conclude, we find $\frac{\boldsymbol{z}}{\|\boldsymbol{z}\|}\in\mathbb{H}^n_3(\alpha,\beta)$ such that
\[
\boldsymbol{x}^{\textnormal{T}} \frac{\boldsymbol{z}}{\|\boldsymbol{z}\|} \ge \sqrt{\frac{\alpha n}{\beta}} \left(1-\frac{1}{\alpha}\right) \cdot\frac{1}{\sqrt{(\alpha+1)n}} = \frac{\alpha-1}{\sqrt{\alpha\beta(\alpha+1)}},
\]
proving the desired inequality.
\end{proof}
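The proof is constructive: given $\boldsymbol{x}\in\mathbb{S}^n$, the covering direction can be computed directly. A sketch of this rounding (illustrative Python, with the bands $\mathbb{D}_k(\boldsymbol{x})$ detected by magnitude):
\begin{verbatim}
import numpy as np

def round_to_h3(x, alpha, beta):
    # Build z with z_i = sign(x_i) * beta^{(k-1)/2} for x_i in band
    # D_k(x) (band D_0 uses the value 1); return the unit vector
    # z/||z||, which satisfies
    # x^T z/||z|| >= (alpha-1)/sqrt(alpha*beta*(alpha+1)).
    n = len(x)
    z = np.empty(n)
    for i, xi in enumerate(x):
        s = 1.0 if xi >= 0 else -1.0      # sign(0) = 1 by convention
        if xi * xi <= 1.0 / (alpha * n):  # band D_0(x)
            z[i] = s
        else:                             # band D_k(x) with k >= 1
            k = int(np.ceil(np.log(alpha * n * xi * xi) / np.log(beta)))
            z[i] = s * beta ** ((k - 1) / 2.0)
    return z / np.linalg.norm(z)
\end{verbatim}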
We also need to estimate the cardinality of $\mathbb{H}^n_3(\alpha,\beta)$. To simplify the display, we now replace $\beta$ by $\gamma+1$ where $\gamma\ge\alpha$ in the rest of this subsection.
\begin{proposition}
Given two constants $1\le\alpha\le\gamma$, one has
\begin{equation} \label{eq:t}
\left|\mathbb{H}^n_3(\alpha,\gamma+1)\right|\le \left( 2^{\frac{\gamma+\alpha}{\gamma}}\alpha^{-\frac{\alpha}{\gamma}} (\gamma+1)^{\frac{\alpha(\gamma+1)}{\gamma^2}} \left(\frac{\gamma}{\gamma-\alpha}\right)^{\frac{\gamma-\alpha}{\gamma}}+o(1)\right)^n.
\end{equation}
\end{proposition}
\begin{proof}
For any given partition $\{\mathbb{I}_1,\mathbb{I}_2,\dots,\mathbb{I}_m\}$ of $\mathbb{I}$ satisfying~\eqref{eq:partition}, the coordinate $z_i$ can take $2$ values if $i\in\mathbb{I}_1$ and $4$ values if $i\in\mathbb{I}_k$ for $k\ge2$. By counting the number of such partitions and accounting for possible overlaps after projecting onto $\mathbb{S}^n$, one has
\[
\left|\mathbb{H}^n_3(\alpha,\gamma+1)\right| \le \frac{n!}{\prod_{k=1}^m|\mathbb{I}_k|!}2^{|\mathbb{I}_1|}\prod_{k=2}^m4^{|\mathbb{I}_k|}.
\]
By~\eqref{eq:i1}, we have
\[
2^{|\mathbb{I}_1|}\prod_{k=2}^m4^{|\mathbb{I}_k|} = 2^{|\mathbb{I}_1|}4^{\sum_{k=2}^m|\mathbb{I}_k|}= 2^{-|\mathbb{I}_1|}4^{\sum_{k=1}^m|\mathbb{I}_k|} \le 2^{-(1-\frac{\alpha}{\gamma})n} 4^n = 2^{(1+\frac{\alpha}{\gamma})n}.
\]
It remains to estimate $\frac{n!}{\prod_{k=1}^m|\mathbb{I}_k|!}$ based on the following bounds from~\eqref{eq:partition},~\eqref{eq:i1} and~\eqref{eq:i2} with $\beta = \gamma+1$:
\[
\eta_1 n \le |\mathbb{I}_1|\le \eta_1 n +\log_{\gamma+1} n+3
\mbox{ and } |\mathbb{I}_k|=\left\lfloor \eta_k n \right\rfloor \mbox{ for }k=2,3,\dots,m,
\]
where $\eta_1=1-\frac{\alpha}{\gamma}$ and $\eta_k=\frac{\alpha}{(\gamma+1)^{k-1}}$ for $k=2,3,\dots,m$, as well as $\sum_{k=1}^m|\mathbb{I}_k|=n$, $m=\left\lceil\log_{\gamma+1} \alpha n \right\rceil$ and $1\le\alpha\le\gamma$.
We first notice that
$$
\left|n-\sum_{k=1}^m\eta_k n\right|=\left|\sum_{k=1}^m|\mathbb{I}_k|- \sum_{k=1}^m\eta_k n \right|
\le \left||\mathbb{I}_1| - \eta_1 n \right| + \sum_{k=2}^m\left||\mathbb{I}_k| - \eta_k n \right|\le O(\ln n)+m-1=O(\ln n).
$$
We then consider the function $f(x)=x \ln x - x$, which is increasing and convex over $[1,\infty)$ since $f'(x)=\ln x$ and $f''(x)=\frac{1}{x}$. Therefore, for any $x\ge1$ and $y>0$,
$\frac{f(x+y)-f(x)}{y}\le f'(x+y)=\ln(x+y)$, implying that $f(x+y)-f(x)\le y \ln (x+y)$. Applying this fact to $|\mathbb{I}_k|$ for $k=1,2,\dots,m$, we obtain
\[
f(|\mathbb{I}_1|)-f(\eta_1n) = O(\ln^2 n) \mbox{ and } f(\eta_kn) - f(|\mathbb{I}_k|)= O(\ln n) \mbox{ for } k=2,3,\dots,m.
\]
These, together with the Stirling approximation $\ln(n!)=f(n)+O(\ln n)$, lead to
\[
\ln(|\mathbb{I}_1|!)=f(\eta_1n)+ O(\ln^2 n)
\mbox{ and }
\ln(|\mathbb{I}_k|!)=f(\eta_kn) + O(\ln n) \mbox{ for } k=2,3,\dots,m.
\]
We are ready to estimate $\frac{n!}{\prod_{k=1}^m|\mathbb{I}_k|!}$ within a deviation of $o(n)$ as follows:
\begin{align*}
\ln \frac{n!}{\prod_{k=1}^m|\mathbb{I}_k|!} &= \ln (n!) - \sum_{k=1}^{m} \ln (|\mathbb{I}_k|!)\\
&= n\ln n - n - \sum_{k=1}^{m} \eta_kn \ln (\eta_kn) + \sum_{k=1}^{m} \eta_kn + o(n)\\
&= \left(n- \sum_{k=1}^{m} \eta_kn\right)(\ln n-1) - \sum_{k=2}^{m} \eta_kn \ln \eta_k - \eta_1n \ln \eta_1 + o(n)\\
&= - n \sum_{k=2}^{m} \frac{\alpha}{(\gamma+1)^{k-1}} \ln \frac{\alpha}{(\gamma+1)^{k-1}} -n \frac{\gamma-\alpha}{\gamma} \ln \frac{\gamma-\alpha}{\gamma} + o(n)\\
&= - n\sum_{k=2}^{m} \frac{\alpha \ln \alpha}{(\gamma +1)^{k-1}} + n\sum_{k=2}^{m} \frac{\alpha(k-1)}{(\gamma +1)^{k-1}}\ln (\gamma +1) +\left( \frac{\gamma - \alpha}{\gamma} \ln \frac{\gamma}{\gamma - \alpha}\right) n + o(n)\\
&= -\left(\frac{\alpha}{\gamma}\ln\alpha \right) n+ \left(\frac{\alpha(\gamma+1)}{\gamma^2}\ln(\gamma+1) \right)n + \left( \frac{\gamma - \alpha}{\gamma} \ln \frac{\gamma}{\gamma - \alpha}\right) n + o(n),
\end{align*}
where the last equality is due to the fact that $m=\left\lceil\log_{\gamma+1} \alpha n \right\rceil$ implies
\begin{align*}
\sum_{k=2}^{m} \frac{1}{(\gamma +1)^{k-1}} &= \frac{1}{\gamma} -\frac{1}{\gamma(\gamma+1)^{m-2}}=\frac{1}{\gamma}+o(1)\\
\sum_{k=2}^{m} \frac{k-1}{(\gamma +1)^{k-1}} &= \frac{\gamma+1}{\gamma^2} -\frac{m}{\gamma^2(\gamma+1)^{m-2}} + \frac{m-1}{\gamma^2(\gamma+1)^{m-1}} =\frac{\gamma+1}{\gamma^2}+o(1).
\end{align*}
Finally, by combining the upper bound of $2^{|\mathbb{I}_1|}\prod_{k=2}^m4^{|\mathbb{I}_k|}$, we have $|\mathbb{H}^n_3(\alpha,\gamma+1)|\le t^n$ where
\[
t = 2^{\frac{\gamma+\alpha}{\gamma}}\alpha^{-\frac{\alpha}{\gamma}} (\gamma+1)^{\frac{\alpha(\gamma+1)}{\gamma^2}} \left(\frac{\gamma}{\gamma-\alpha}\right)^{\frac{\gamma-\alpha}{\gamma}}e^{o(1)}= 2^{\frac{\gamma+\alpha}{\gamma}}\alpha^{-\frac{\alpha}{\gamma}} (\gamma+1)^{\frac{\alpha(\gamma+1)}{\gamma^2}} \left(\frac{\gamma}{\gamma-\alpha}\right)^{\frac{\gamma-\alpha}{\gamma}}+o(1).
\]
\end{proof}
In a {\color{black}nutshell}, we have the following.
\begin{corollary}\label{thm:h3col}
For any given $1\le\alpha\le\gamma$,
\begin{equation}\label{eq:h3}
\mathbb{H}^n_3(\alpha,\gamma+1) \in \mathbb{T}\left(n, \frac{\alpha-1}{\sqrt{\alpha(\alpha+1)(\gamma+1)}}, \left( 2^{\frac{\gamma+\alpha}{\gamma}}\alpha^{-\frac{\alpha}{\gamma}} (\gamma+1)^{\frac{\alpha(\gamma+1)}{\gamma^2}} \left(\frac{\gamma}{\gamma-\alpha}\right)^{\frac{\gamma-\alpha}{\gamma}}+o(1) \right)^n \right).
\end{equation}
\end{corollary}
For a fixed $\alpha\ge1$, the largest $\frac{\alpha-1}{\sqrt{\alpha(\alpha+1)(\gamma+1)}}$ is $\frac{\alpha-1}{(\alpha+1)\sqrt{\alpha}}$, achieved when $\gamma=\alpha$. Correspondingly,
\[
|\mathbb{H}^n_3(\alpha,\alpha+1)|\le \left(4\alpha^{-1}(\alpha+1)^{\frac{\alpha+1}{\alpha}}+o(1)\right)^n
\le (4+\epsilon)^n \mbox{ for some large $\alpha$.}
\]
If we maximize $\frac{\alpha-1}{(\alpha+1)\sqrt{\alpha}}$ over $\alpha$ to achieve the best $\tau$ for the $\tau$-hitting set, the maximum is $\sqrt{\frac{1}{2}(5\sqrt{5}-11)} \approx 0.30028$, attained at $\alpha=2+\sqrt{5} \approx 4.236$. For reference, we list the $\tau$ and the cardinality for some $\mathbb{H}^n_3(\alpha,\alpha+1)$ in Table~\ref{table:h3}.
\begin{table}[h]
\centering
\small\begin{tabular}{|l|p{0.4in}p{0.4in}p{0.4in}p{0.4in}p{0.4in}p{0.4in}p{0.4in}|}
\hline
$\alpha$ & 17.42 & 7.64 & 4.75 & $4.24$ & 4.00 & 3.00 & 2.00 \\
\hline
$\tau$ for the $\tau$-hitting set & 0.213 & 0.278 & 0.299 & 0.30028 & 0.300 & 0.288 & 0.235 \\
$t$ for $|\mathbb{H}^n_3(\alpha,\alpha+1)|\le t^n$ & 5.00 & 6.00 & 7.00 & 7.31 & 7.48 & 8.47 & 10.40\\
\hline
\end{tabular}
\caption{Properties of $\mathbb{H}^n_3(\alpha,\alpha+1)$ for some $\alpha$.}\label{table:h3}
\end{table}
If we are interested in minimizing the upper bound of $|\mathbb{H}^n_3(\alpha,\gamma+1)|$ in~\eqref{eq:t}, then by fixing $\alpha$ and choosing $\gamma$ sufficiently large, the bound can even be $(2+\epsilon)^n$. However, this makes sense only if $\gamma\ll n$. Moreover, the corresponding $\tau=\frac{\alpha-1}{\sqrt{\alpha(\alpha+1)(\gamma+1)}}$ decreases quickly as $\gamma$ grows.
\subsection{$\Omega\big(\sqrt{\ln n/n}\big)$-hitting sets}\label{sec:h4}
The hitting sets in Sections~\ref{sec:h2} and~\ref{sec:h3} can be used to construct new hitting sets which in fact derandomize the constructions in Section~\ref{sec:h1}. Recall that $\mathbb{E}^n=\{\boldsymbol{e}_1,\boldsymbol{e}_2,\dots,\boldsymbol{e}_n\}$ is the standard basis of $\mathbb{R}^n$, $\boxtimes$ denotes the Kronecker product, and $\vee$ denotes vector appending.
\begin{lemma}\label{thm:kron}
If a hitting set $\mathbb{H}^{n_1}\in\mathbb{T}(n_1,\tau,m)$ with $\tau\ge0$, then $\mathbb{E}^{n_2}\boxtimes\mathbb{H}^{n_1}\in\mathbb{T}\left(n_1n_2, \frac{\tau}{\sqrt{n_2}},mn_2\right)$.
\end{lemma}
\begin{proof}
First, for any $\boldsymbol{e}_i\in\mathbb{E}^{n_2}$ and $\boldsymbol{z}\in\mathbb{H}^{n_1}$, one has $\|\boldsymbol{e}_i\boxtimes\boldsymbol{z}\|=\|\boldsymbol{e}_i\|\cdot\|\boldsymbol{z}\|=1$. Thus, $\mathbb{E}^{n_2}\boxtimes\mathbb{H}^{n_1}\subseteq\mathbb{S}^{n_1n_2}$. For any $\boldsymbol{x}\in\mathbb{S}^{n_1n_2}$, let
$\boldsymbol{x}=\boldsymbol{x}_1\vee\boldsymbol{x}_2\vee\dots\vee\boldsymbol{x}_{n_2}$
where $\boldsymbol{x}_k\in\mathbb{R}^{n_1}$ for $k=1,2,\dots,n_2$. Since $\sum_{k=1}^{n_2}\|\boldsymbol{x}_k\|^2=\|\boldsymbol{x}\|^2=1$, there exists an $\boldsymbol{x}_i$, such that $\|\boldsymbol{x}_i\|^2\ge\frac{1}{n_2}$.
Observing that $\frac{\boldsymbol{x}_i}{\|\boldsymbol{x}_i\|}\in\mathbb{S}^{n_1}$, there exists $\boldsymbol{y}\in\mathbb{H}^{n_1}$ such that $\boldsymbol{y}^{\textnormal{T}}\frac{\boldsymbol{x}_i}{\|\boldsymbol{x}_i\|}\ge\tau$. Therefore, we have $\boldsymbol{e}_i\boxtimes\boldsymbol{y}\in \mathbb{E}^{n_2}\boxtimes\mathbb{H}^{n_1}$ satisfying
\[
(\boldsymbol{e}_i\boxtimes\boldsymbol{y})^{\textnormal{T}}\boldsymbol{x}=\boldsymbol{y}^{\textnormal{T}}\boldsymbol{x}_i\ge\tau\|\boldsymbol{x}_i\|\ge \frac{\tau}{\sqrt{n_2}}.
\]
Finally, by noticing possible overlaps, one has $|\mathbb{E}^{n_2}\boxtimes\mathbb{H}^{n_1}|\le |\mathbb{E}^{n_2}|\cdot|\mathbb{H}^{n_1}|\le n_2m$.
\end{proof}
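The lifting in Lemma~\ref{thm:kron} is essentially a one-liner; a sketch (illustrative Python):
\begin{verbatim}
import numpy as np

def kron_lift(n2, H):
    # E^{n2} Kronecker H: all e_i (x) z for e_i in the standard
    # basis of R^{n2} and rows z of H; at most n2*|H| unit vectors.
    E = np.eye(n2)
    return np.array([np.kron(e, z) for e in E for z in H])
\end{verbatim}
For example, \texttt{kron\_lift(n, np.array([[1.0], [-1.0]]))} returns $\{\pm\boldsymbol{e}_1,\pm\boldsymbol{e}_2,\dots,\pm\boldsymbol{e}_n\}$.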
\begin{lemma}\label{thm:append}
If two hitting sets $\mathbb{H}^{n_1}\in\mathbb{T}(n_1,\tau_1,m_1)$ and $\mathbb{H}^{n_2}\in\mathbb{T}(n_2, \tau_2,m_2)$ with $\tau_1,\tau_2>0$, then $$\left(\mathbb{H}^{n_1}\vee{\bf0}_{n_2}\right) \bigcup\left({\bf0}_{n_1}\vee\mathbb{H}^{n_2}\right)\in \mathbb{T}\left(n_1+n_2, \frac{\tau_1\tau_2}{\sqrt{{\tau_1}^2+{\tau_2}^2}},m_1+m_2\right).$$
\end{lemma}
\begin{proof}
For any $\boldsymbol{x}\in\mathbb{S}^{n_1+n_2}$, let $\boldsymbol{x}=\boldsymbol{x}_1\vee\boldsymbol{x}_2$ where $\boldsymbol{x}_1\in\mathbb{R}^{n_1}$ and $\boldsymbol{x}_2\in\mathbb{R}^{n_2}$. If one of them is a zero vector, say $\boldsymbol{x}_1={\bf 0}_{n_1}$, then $\|\boldsymbol{x}_2\|=1$. There exists $\boldsymbol{y}\in\mathbb{H}^{n_2}$ such that $\boldsymbol{y}^{\textnormal{T}}\boldsymbol{x}_2\ge\tau_2$, and so
\[
\langle {\bf 0}_{n_1}\vee\boldsymbol{y}, \boldsymbol{x}_1\vee\boldsymbol{x}_2\rangle = \boldsymbol{y}^{\textnormal{T}}\boldsymbol{x}_2\ge\tau_2\ge \frac{\tau_1\tau_2}{\sqrt{{\tau_1}^2+{\tau_2}^2}}.
\]
If both $\boldsymbol{x}_1$ and $\boldsymbol{x}_2$ are nonzero, then $\frac{\boldsymbol{x}_k}{\|\boldsymbol{x}_k\|}\in\mathbb{S}^{n_k}$ for $k=1,2$. There exist $\boldsymbol{y}_k\in\mathbb{H}^{n_k}$ with $\frac{\boldsymbol{y}_k^{\textnormal{T}}\boldsymbol{x}_k}{\|\boldsymbol{x}_k\|}\ge\tau_k$ for $k=1,2$. We have
\begin{align*}
\langle \boldsymbol{y}_1\vee{\bf0}_{n_2}, \boldsymbol{x}_1\vee\boldsymbol{x}_2 \rangle = \boldsymbol{y}_1^{\textnormal{T}}\boldsymbol{x}_1&\ge \tau_1\|\boldsymbol{x}_1\|\\
\langle {\bf0}_{n_1}\vee\boldsymbol{y}_2, \boldsymbol{x}_1\vee\boldsymbol{x}_2 \rangle = \boldsymbol{y}_2^{\textnormal{T}}\boldsymbol{x}_2&\ge \tau_2\|\boldsymbol{x}_2\|.
\end{align*}
As $\|\boldsymbol{x}_1\|^2+\|\boldsymbol{x}_2\|^2=\|\boldsymbol{x}\|^2=1$, we must have either $\|\boldsymbol{x}_1\|\ge\frac{\tau_2}{\sqrt{{\tau_1}^2+{\tau_2}^2}}$ or $\|\boldsymbol{x}_2\|\ge\frac{\tau_1}{\sqrt{{\tau_1}^2+{\tau_2}^2}}$. In any case, we have
\[
\max\{\tau_1\|\boldsymbol{x}_1\|,\tau_2\|\boldsymbol{x}_2\|\}
\ge
\frac{\tau_1\tau_2}{\sqrt{{\tau_1}^2+{\tau_2}^2}},
\]
implying that either $\boldsymbol{y}_1\vee{\bf0}_{n_2}$ or ${\bf0}_{n_1}\vee\boldsymbol{y}_2$ is close enough to $\boldsymbol{x}$.
\end{proof}
We are ready to construct new hitting sets using $\mathbb{H}^n_2\in\mathbb{T}\left(n,\Omega\left(\frac{1}{\sqrt{\ln n}}\right),3^n\right)$ in Section~\ref{sec:h2} and $\mathbb{H}^n_3\in\mathbb{T}\left(n,\mu,\nu^n\right)$ with universal constants $\mu,\nu>0$, a handy shorthand for~\eqref{eq:h3} in Section~\ref{sec:h3}.
\begin{theorem}\label{thm:h4}
Given integer $n\ge2$, let $n_1=\lceil\ln n\rceil$, $n_2=\lfloor\frac{n}{n_1}\rfloor$, and $n_3=n-n_1n_2$. One has
\begin{equation}\label{eq:h4}
\mathbb{H}^n_4:=\left(\left(\mathbb{E}^{n_2}\boxtimes\mathbb{H}^{n_1}_2\right)\vee{\bf0}_{n_3}\right) \bigcup \left({\bf0}_{n_1n_2}\vee\mathbb{H}^{n_3}_2\right)
\in\mathbb{T}\left(n,\Omega\left(\sqrt{\frac{\ln n}{n\ln\ln n}}\right),O(n^{1+\ln3})\right).
\end{equation}
\end{theorem}
\begin{proof}
First, as $n_3=n-n_1\lfloor\frac{n}{n_1}\rfloor<n_1$, we have
\begin{equation}\label{eq:n123}
n_1\le\ln n+1,\; n_2\le\frac{n}{\ln n}, \mbox{ and } n_3\le\ln n.
\end{equation}
According to Corollary~\ref{thm:h2col} and Lemma~\ref{thm:kron},
\[
\mathbb{E}^{n_2}\boxtimes\mathbb{H}^{n_1}_2\in\mathbb{T}\left(n_1n_2, \Omega\left(\frac{1}{\sqrt{n_2\ln n_1}}\right),n_23^{n_1}\right)
\subseteq \mathbb{T}\left(n_1n_2, \Omega\left(\sqrt{\frac{\ln n}{n\ln\ln n}}\right),\frac{3n^{1+\ln3}}{\ln n}\right).
\]
Besides, one has
\[
\mathbb{H}^{n_3}_2\in\mathbb{T}\left(n_3,\Omega\left(\frac{1}{\sqrt{\ln n_3}}\right),3^{n_3}\right)
\subseteq \mathbb{T}\left(n_3,\Omega\left(\frac{1}{\sqrt{\ln \ln n}}\right),n^{\ln 3}\right).
\]
Noticing that $\frac{1}{\sqrt{\ln \ln n}}\ge \sqrt{\frac{\ln n}{n\ln\ln n}}$,~\eqref{eq:h4} can be obtained by applying Lemma~\ref{thm:append}.
\end{proof}
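Assembled from the illustrative helpers sketched earlier (\texttt{hitting\_set\_h2} and \texttt{kron\_lift}, both our own code), the whole construction of $\mathbb{H}^n_4$ is a few lines:
\begin{verbatim}
import numpy as np

def hitting_set_h4(n):
    # n = n1*n2 + n3 with n1 = ceil(ln n); lift H^{n1}_2 by E^{n2},
    # pad with zeros, and append the padded copy of H^{n3}_2.
    n1 = int(np.ceil(np.log(n)))
    n2, n3 = n // n1, n - n1 * (n // n1)
    A = kron_lift(n2, hitting_set_h2(n1))       # covers S^{n1 n2}
    A = np.hstack([A, np.zeros((len(A), n3))])  # append 0_{n3}
    if n3 > 0:
        H3 = hitting_set_h2(n3)
        B = np.hstack([np.zeros((len(H3), n1 * n2)), H3])
        A = np.vstack([A, B])
    return A
\end{verbatim}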
Although $\Omega\left(\sqrt{\frac{\ln n}{n\ln\ln n}}\right)$ is slightly lower than $\Omega\left(\sqrt{\frac{\ln n}{n}}\right)$, the construction of $\mathbb{H}^n_4$ in~\eqref{eq:h4} is very simple (using $\mathbb{H}^n_2$) and enjoys a low cardinality $O(n^{1+\ln3})\le O(n^{2.1})$. {\color{black} We remark that it is even possible to construct an $\Omega\left(\sqrt{\frac{\ln n}{n\ln\ln n}}\right)$-hitting set with a lower cardinality $O(n^{1.5})$. This can be done by using $\mathbb{H}^{n_1}_0(m)$ with $m=\lceil\sqrt{\ln n}\rceil$ and $n_1=\lceil\frac{\ln n}{\ln\ln n}\rceil$ in place of $\mathbb{H}^{n_1}_2$ in constructing $\mathbb{H}^n_4$ in Theorem~\ref{thm:h4}}. We leave the details to interested readers. In order to remove the ${\color{black}\frac{1}{\sqrt{\ln\ln n}}}$ factor, we need to make use of $\mathbb{H}^n_3(\alpha,\beta)$.
\begin{theorem}\label{thm:h5}
Given integer $n\ge2$, let $n_1=\lceil\ln n\rceil$, $n_2=\lfloor\frac{n}{n_1}\rfloor$, and $n_3=n-n_1n_2$. By choosing any $\mathbb{H}^n_3(\alpha,\beta)\in\mathbb{T}\left(n,\mu,\nu^n\right)$ in~\eqref{eq:h3} with $\alpha\ge1$ and $\beta\ge\alpha+1$, one has
\[
\mathbb{H}^n_5(\alpha,\beta):=\left(\left(\mathbb{E}^{n_2}\boxtimes\mathbb{H}^{n_1}_3(\alpha,\beta)\right)\vee{\bf0}_{n_3}\right) \bigcup \left({\bf0}_{n_1n_2}\vee\mathbb{H}^{n_3}_3(\alpha,\beta)\right)
\in\mathbb{T}\left(n,\mu\sqrt{\frac{\ln n}{n+\ln n}},O(n^{1+\ln\nu})\right).
\]
\end{theorem}
\begin{proof}
The proof is similar to that of Theorem~\ref{thm:h4} by noticing~\eqref{eq:n123} and applying Lemma~\ref{thm:kron} and Lemma~\ref{thm:append}. We only need to carry out the calculations.
The $\tau$ for the $\tau$-hitting set $\mathbb{E}^{n_2}\boxtimes\mathbb{H}^{n_1}_3(\alpha,\beta)$ is $\frac{\mu}{\sqrt{n_2}}$ and that for $\mathbb{H}^{n_3}_3(\alpha,\beta)$ is $\mu$. By Lemma~\ref{thm:append}, the $\tau$ for $\mathbb{H}^n_5(\alpha,\beta)$ is
$
\frac{\frac{\mu}{\sqrt{n_2}}\cdot\mu}{\sqrt{\frac{\mu^2}{n_2}+\mu^2}} = \frac{\mu}{\sqrt{n_2+1}} \ge \mu\sqrt{\frac{\ln n}{n+\ln n}}.
$
For the cardinality, one has $|\mathbb{E}^{n_2}\boxtimes\mathbb{H}^{n_1}_3(\alpha,\beta)|\le n_2\nu^{n_1}\le \frac{\nu n^{1+\ln \nu}}{\ln n}$ and $|\mathbb{H}^{n_3}_3(\alpha,\beta)|\le \nu^{n_3}\le n^{\ln\nu}$. Adding up these two would give $O(n^{1+\ln\nu})$.
\end{proof}
With the cardinality $O(n^{1+\ln\nu})$ in place, it is natural to select the best $\mu$ for $\mathbb{H}^{n}_3(\alpha,\beta)$ with $\beta = \gamma +1$ in~\eqref{eq:h3}. According to Table~\ref{table:h3}, the largest $\mu=0.30028$ with $\nu=7.31$ is obtained when $\alpha=2+\sqrt{5}$ and $\beta=3+\sqrt{5}$. This results in a cardinality $O(n^{1+\ln\nu})\le O(n^3)$. To conclude, we have
\begin{equation}\label{eq:h6}
\mathbb{H}^n_5(2+\sqrt{5},3+\sqrt{5})\in\mathbb{T}\left(n,0.3\sqrt{\frac{\ln n}{n}},O(n^3)\right).
\end{equation}
{\color{black}
Before concluding this section, we remark that the estimated $\tau$ serves as a lower bound and $m$ as an upper bound for the proposed hitting sets. We evaluate their exact values for one example ($n=6$) by numerical computation, shown in Table~\ref{table:list2}, where the Greek letters in the estimated values are unspecified constants. Due to the randomness of $\mathbb{H}^n_1$, we run ten trials for each $m$ and report the corresponding $\tau$ as an interval.
\begin{table}[h]
\centering
\small\begin{tabular}{|l|cc|cc|}
\hline
Hitting set in $\mathbb{S}^6$ & Exact $\tau$ & Exact $m$ & Estimated $\tau$ & Estimated $m$ \\
\hline
A regular simplex in $\mathbb{S}^6$ & 0.167 & 7 & 0.167 & 7 \\
$\mathbb{E}^6\cup(-\mathbb{E}^6)$ & 0.408 & 12 & 0.408 & 12\\
$\mathbb{H}^6_1(\gamma_1,\epsilon_1)$ & [0.331, 0.442] & 27 & $\omega_1\cdot 0.546$ & $o_1\cdot 6^{\alpha_1}$ \\
$\mathbb{H}^6_1(\gamma_2,\epsilon_2)$ & [0.521, 0.592] & 60 & $\omega_2\cdot 0.546$ & $o_2\cdot 6^{\alpha_2}$ \\
$\mathbb{H}^6_4$ & 0.546 & 27 & $\omega_3\cdot 0.546$ & $o_3\cdot 6^{2.792}$ \\
$\mathbb{H}^6_5(2+\sqrt{5},3+\sqrt{5})$ & 0.544 & 36 & $\omega_4\cdot 0.546$ & $o_4\cdot 6^{2.989}$ \\
$\mathbb{H}^6_2$ & 0.835 & 728 & $\omega_5\cdot 0.747$ & $3^6=729$ \\
$\mathbb{H}^6_3(2+\sqrt{5},3+\sqrt{5})$ & 0.820 & 16896 & $\omega_6\cdot 1.000$ & $7.31^6=152582$\\
\hline
\end{tabular}
\caption{Exact and theoretical estimates of $\tau$ and $m$ for hitting sets in $\mathbb{S}^6$}\label{table:list2}
\end{table}
For our main constructions of $\Omega\left(\sqrt{\frac{\ln n}{n}}\right)$-hitting sets, namely $\mathbb{H}_1^n$, $\mathbb{H}_4^n$ and $\mathbb{H}_5^n$, comparisons of $\tau$ and $m$ for a few small $n$'s are shown in Table~\ref{table:list3}. We observe that random hitting sets tend to outperform the deterministic ones as $n$ increases, although they are worse for small $n$.
\begin{table}[h]
\centering
\small\begin{tabular}{|l|cc|cc|cc|cc|}
\hline
& \multicolumn{2}{c|}{$n=6$} & \multicolumn{2}{c|}{$n=8$} & \multicolumn{2}{c|}{$n=12$} & \multicolumn{2}{c|}{$n=15$} \\
Hitting set & $\tau$ & $m$& $\tau$ & $m$ & $\tau$ & $m$ & $\tau$ & $m$ \\
\hline
$\mathbb{H}^n_1$ & [0.331, 0.442] & 24 & [0.272, 0.384] & 32 & [0.368, 0.410] & 104 & [0.320, 0.391] & 130 \\
$\mathbb{H}^n_1$ & [0.521, 0.592] & 60 & [0.460, 0.502] & 80 & [0.441, 0.484] & 184 & [0.428, 0.452] & 235 \\
$\mathbb{H}^n_4$ & 0.546 & 24 & 0.485 & 32 & 0.4653 & 104 & 0.431 & 130 \\
$\mathbb{H}^n_5$ & 0.544 & 36 & 0.489 & 48 & 0.4713 & 256 & 0.433 & 320 \\
\hline
\end{tabular}
\caption{Exact $\tau$ and $m$ for $\Omega\big(\sqrt{\ln n/ n}\big)$-hitting sets in $\mathbb{S}^n$}
\label{table:list3}
\end{table}
}
\section{Approximating tensor norms}\label{sec:tensor}
In this section we apply explicit constructions for sphere covering, in particular the deterministic $\Omega\left(\sqrt{\frac{\ln n}{n}}\right)$-hitting sets in Section~\ref{sec:h4}, to derive new approximation methods for the tensor spectral norm and nuclear norm. Let us formally define the approximation bound for tensor norms.
\begin{definition}
A tensor norm $\|\bullet\|_\omega$ can be approximated with an approximation bound $\tau\in(0,1]$, if there exists a polynomial-time algorithm that computes a quantity $\omega_\mathcal{T}$ for any tensor instance $\mathcal{T}$ in the concerned space, such that $\tau \|\mathcal{T}\|_{\omega} \le \omega_\mathcal{T}\le \|\mathcal{T}\|_{\omega}$.
\end{definition}
Obviously the larger the $\tau$, the better the approximation bound. We consider the tensor space $\mathbb{R}^{n_1\times n_2 \times \dots \times n_d}$ of order $d\ge3$ and assume without loss of generality that $2\le n_1\le n_2 \le\dots\le n_d$.
\subsection{Approximation bound for tensor spectral norm}\label{sec:snorm}
Given a tensor $\mathcal{T}\in\mathbb{R}^{n_1\times n_2 \times \dots \times n_d}$, let us denote (recall that $\otimes$ stands for the outer product)
\begin{equation}\label{eq:mform}
\mathcal{T}(\boldsymbol{x}_1,\boldsymbol{x}_2,\dots,\boldsymbol{x}_d) = \left\langle \mathcal{T}, \boldsymbol{x}_1\otimes\boldsymbol{x}_2\otimes\dots\otimes\boldsymbol{x}_d \right\rangle = \sum_{i_1=1}^{n_1}\sum_{i_2=1}^{n_2}\dots\sum_{i_d=1}^{n_d} t_{i_1i_2\dots i_d} (x_1)_{i_1}(x_2)_{i_2}\dots (x_d)_{i_d}
\end{equation}
to be the multilinear function of the vector arguments $(\boldsymbol{x}_1,\boldsymbol{x}_2,\dots,\boldsymbol{x}_d)$ where $\boldsymbol{x}_k\in\mathbb{R}^{n_k}$ for $k=1,2,\dots,d$. If any vector argument, say $\boldsymbol{x}_1$, is missing and replaced by $\bullet$, then $\mathcal{T}(\bullet,\boldsymbol{x}_2,\boldsymbol{x}_3,\dots,\boldsymbol{x}_d)\in\mathbb{R}^{n_1}$ becomes a vector. Similarly, $\mathcal{T}(\bullet,\bullet,\boldsymbol{x}_3,\boldsymbol{x}_4,\dots,\boldsymbol{x}_d)\in\mathbb{R}^{n_1\times n_2}$ is a matrix, and so on.
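For concreteness, the multilinear form~\eqref{eq:mform} and its partial contractions can be evaluated in a few lines of Python; the sketch below is illustrative only and uses a random order-three tensor.
\begin{verbatim}
# A minimal sketch (illustrative only): evaluating the multilinear
# form T(x1, x2, x3) and a partial contraction T(., x2, x3) in numpy.
import numpy as np

rng = np.random.default_rng(0)
n1, n2, n3 = 4, 5, 6
T = rng.standard_normal((n1, n2, n3))
x1, x2, x3 = (rng.standard_normal(n) for n in (n1, n2, n3))

value = np.einsum('ijk,i,j,k->', T, x1, x2, x3)  # scalar T(x1,x2,x3)
v = np.einsum('ijk,j,k->i', T, x2, x3)           # vector T(.,x2,x3)
assert np.isclose(value, x1 @ v)
\end{verbatim}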
\begin{definition}\label{def:snorm}
For a given tensor $\mathcal{T}\in\mathbb{R}^{n_1\times n_2\times\dots\times n_d}$, the spectral norm of $\mathcal{T}$ is defined as
\begin{equation} \label{eq:defsnorm}
\|\mathcal{T}\|_{\sigma}:=\max\left\{\mathcal{T}(\boldsymbol{x}_1,\boldsymbol{x}_2,\dots,\boldsymbol{x}_d): \|\boldsymbol{x}_k\|=1,\,\boldsymbol{x}_k\in\mathbb{R}^{n_k},\, k=1,2,\dots,d\right\}.
\end{equation}
\end{definition}
The tensor spectral norm was proposed by Lim~\cite{L05} in terms of singular values of a tensor. In light of~\eqref{eq:mform}, $\|\mathcal{T}\|_{\sigma}$ is the maximal value of the Frobenius inner product between $\mathcal{T}$ and a rank-one tensor whose Frobenius norm is one since $\|\boldsymbol{x}_1\otimes\boldsymbol{x}_2\otimes\dots\otimes\boldsymbol{x}_d\|=\prod_{k=1}^d\|\boldsymbol{x}_k\|=1$.
When $d=2$,~\eqref{eq:defsnorm} reduces to the matrix spectral norm, i.e., the largest singular value of the matrix, which can be computed in polynomial time (e.g., via singular value decompositions). He et al.~\cite{HLZ10} showed that~\eqref{eq:defsnorm} is NP-hard when $d\ge3$. They also proposed the first polynomial-time algorithm with a worst-case approximation bound $\left(\prod_{k=1}^{d-2}\frac{1}{n_k}\right)^{\frac{1}{2}}$. The best known approximation bound for the tensor spectral norm is $\Omega\left(\left(\prod_{k=1}^{d-2}\frac{\ln n_k}{n_k}\right)^{\frac{1}{2}}\right)$ by So~\cite{S11}. However, the method in~\cite{S11} relies on the equivalence between convex optimization and membership oracle queries via the ellipsoid method, and it is computationally impractical. There is also a simple but randomized algorithm attaining the same best bound proposed in~\cite{HJLZ14}. In this subsection we present the first easily implementable and deterministic algorithm, based on sphere covering, with the same approximation bound $\Omega\left(\left(\prod_{k=1}^{d-2}\frac{\ln n_k}{n_k}\right)^{\frac{1}{2}}\right)$. To state an explicit bound without the $\Omega$ notation, we use $\mathbb{H}^n_5(2+\sqrt{5},3+\sqrt{5})\in\mathbb{T}\left(n,0.3\sqrt{\frac{\ln n}{n}},O(n^3)\right)$ in~\eqref{eq:h6}.
\begin{algorithm}\label{alg:snorm}
Given $\mathcal{T}\in\mathbb{R}^{n_1\times n_2\times\dots\times n_d}$, find approximate spectral norm of $\mathcal{T}$.
\vspace{-0.2cm}\noindent\hrulefill
\begin{enumerate}
\item Enumerate $\boldsymbol{z}_k\in\mathbb{H}^{n_k}_5(2+\sqrt{5},3+\sqrt{5})$ for $k=1,2,\dots,d-2$ and solve the resulting matrix spectral norm problem $\max\left\{\mathcal{T}(\boldsymbol{z}_1,\boldsymbol{z}_2,\dots,\boldsymbol{z}_{d-2},\boldsymbol{x}_{d-1},\boldsymbol{x}_{d}):\|\boldsymbol{x}_{d-1}\|=\|\boldsymbol{x}_{d}\|=1 \right\}$ whose optimal solution is denoted by $(\boldsymbol{z}_{d-1},\boldsymbol{z}_d)$.
\item Compare all the objective values in the first step and output the largest one.
\end{enumerate}
\vspace{-0.2cm}\hrulefill
\end{algorithm}
It is obvious that Algorithm~\ref{alg:snorm} runs in polynomial time as $|\mathbb{H}^{n_k}_5(2+\sqrt{5},3+\sqrt{5})|=O({n_k}^3)$ and the matrix spectral norm is polynomial-time computable. Moreover, the corresponding approximate solution $(\boldsymbol{z}_1,\boldsymbol{z}_2,\dots,\boldsymbol{z}_{d-2})$ is universal, i.e., $\boldsymbol{z}_k\in\mathbb{H}^{n_k}_5(2+\sqrt{5},3+\sqrt{5})$ is independent of the data $\mathcal{T}$ for $k=1,2,\dots,d-2$.
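To make the procedure concrete, the following Python sketch implements Algorithm~\ref{alg:snorm}; it is not the authors' code, and the routine \texttt{hitting\_set(n)} is a hypothetical placeholder returning the points of any hitting set on $\mathbb{S}^n$ as rows of an array.
\begin{verbatim}
# A minimal sketch (illustrative only) of the enumeration algorithm,
# assuming a user-supplied routine hitting_set(n) that returns a
# hitting set of S^n as rows of a 2-D array.
import itertools
import numpy as np

def approx_spectral_norm(T, hitting_set):
    dims, d = T.shape, T.ndim
    grids = [hitting_set(n) for n in dims[:d - 2]]
    best = 0.0
    for zs in itertools.product(*grids):
        M = T
        for z in zs:                          # contract modes 1..d-2
            M = np.tensordot(z, M, axes=(0, 0))
        # exact matrix spectral norm for the last two modes
        best = max(best, np.linalg.svd(M, compute_uv=False)[0])
    return best
\end{verbatim}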
\begin{theorem}\label{thm:snorm}
Algorithm~\ref{alg:snorm} is a deterministic polynomial-time algorithm that approximates $\|\mathcal{T}\|_\sigma$ with a worst-case approximation bound $0.3^{d-2}\left(\prod_{k=1}^{d-2}\frac{\ln n_k}{n_k}\right)^{\frac{1}{2}}$ for any $\mathcal{T}\in\mathbb{R}^{n_1\times n_2\times\dots\times n_d}$, i.e., $\boldsymbol{z}_k\in\mathbb{H}^{n_k}_5(2+\sqrt{5},3+\sqrt{5})$ for $k=1,2,\dots,d-2$ and $\boldsymbol{z}_k\in\mathbb{S}^{n_k}$ for $k=d-1,d$ can be found such that
\[
0.3^{d-2}\left(\prod_{k=1}^{d-2}\frac{\ln n_k}{n_k}\right)^{\frac{1}{2}} \|\mathcal{T}\|_\sigma \le \mathcal{T}(\boldsymbol{z}_1,\boldsymbol{z}_2,\dots,\boldsymbol{z}_d)\le \|\mathcal{T}\|_\sigma.
\]
\end{theorem}
\begin{proof}
Let us denote $\tau_k=0.3\sqrt{\frac{\ln n_k}{n_k}}$ for $k=1,2,\dots,d-2$. Let $(\boldsymbol{y}_1,\boldsymbol{y}_2,\dots,\boldsymbol{y}_d)$ be an optimal solution of~\eqref{eq:defsnorm}, i.e., $\mathcal{T}(\boldsymbol{y}_1,\boldsymbol{y}_2,\dots,\boldsymbol{y}_d)=\|\mathcal{T}\|_\sigma$.
For the vector $\boldsymbol{v}_1 = \mathcal{T}(\bullet,\boldsymbol{y}_{2},\boldsymbol{y}_{3},\dots,\boldsymbol{y}_{d})$, either $\|\boldsymbol{v}_1\|=0$ or there exists $\boldsymbol{z}_1\in\mathbb{H}^{n_1}_5(2+\sqrt{5},3+\sqrt{5})$ such that $\boldsymbol{z}_1^{\textnormal{T}}\frac{\boldsymbol{v}_1}{\|\boldsymbol{v}_1\|}\ge\tau_1$. In either case, one has
\[
\mathcal{T}(\boldsymbol{z}_1,\boldsymbol{y}_{2},\boldsymbol{y}_{3},\dots,\boldsymbol{y}_{d}) = \boldsymbol{z}_1^{\textnormal{T}} \boldsymbol{v}_1 \ge \tau_1 \|\boldsymbol{v}_1\|
\ge \tau_1 \boldsymbol{y}_1^{\textnormal{T}}\boldsymbol{v}_1=\tau_1 \mathcal{T}(\boldsymbol{y}_1,\boldsymbol{y}_2,\dots,\boldsymbol{y}_{d}).
\]
Similarly, for each $k=2,3,\dots,d-2$ taken in increasing order, there exists $\boldsymbol{z}_k\in\mathbb{H}^{n_k}_5(2+\sqrt{5},3+\sqrt{5})$ such that
\[
\mathcal{T}(\boldsymbol{z}_1,\dots,\boldsymbol{z}_{k-1},\boldsymbol{z}_k,\boldsymbol{y}_{k+1},\dots,\boldsymbol{y}_{d}) = \boldsymbol{z}_k^{\textnormal{T}} \boldsymbol{v}_k \ge \tau_k \|\boldsymbol{v}_k\|
\ge \tau_k \boldsymbol{y}_k^{\textnormal{T}}\boldsymbol{v}_k=\tau_k \mathcal{T}(\boldsymbol{z}_1,\dots,\boldsymbol{z}_{k-1},\boldsymbol{y}_k,\boldsymbol{y}_{k+1},\dots,\boldsymbol{y}_{d}),
\]
where $\boldsymbol{v}_k=\mathcal{T}(\boldsymbol{z}_1,\dots,\boldsymbol{z}_{k-1},\bullet,\boldsymbol{y}_{k+1},\dots,\boldsymbol{y}_{d})$. By applying the above inequalities recursively, we obtain
\[
\mathcal{T}(\boldsymbol{z}_1,\boldsymbol{z}_2,\dots,\boldsymbol{z}_{d-2},\boldsymbol{y}_{d-1},\boldsymbol{y}_{d})\ge \left(\prod_{k=1}^{d-2}\tau_k\right) \mathcal{T}(\boldsymbol{y}_1,\boldsymbol{y}_2\dots,\boldsymbol{y}_{d}) = \left(\prod_{k=1}^{d-2}\tau_k\right) \|\mathcal{T}\|_\sigma.
\]
The first step of Algorithm~\ref{alg:snorm} must have enumerated this $(\boldsymbol{z}_1,\boldsymbol{z}_2,\dots,\boldsymbol{z}_{d-2})$ and computed corresponding $\boldsymbol{z}_{d-1}\in\mathbb{S}^{n_{d-1}}$ and $\boldsymbol{z}_d\in\mathbb{S}^{n_d}$, such that
\begin{align*}
\mathcal{T}(\boldsymbol{z}_1,\boldsymbol{z}_2,\dots,\boldsymbol{z}_d)& =\max_{\|\boldsymbol{x}_{d-1}\|=\|\boldsymbol{x}_{d}\|=1} \mathcal{T}(\boldsymbol{z}_1,\boldsymbol{z}_2,\dots,\boldsymbol{z}_{d-2},\boldsymbol{x}_{d-1},\boldsymbol{x}_{d})\\
&\ge \mathcal{T}(\boldsymbol{z}_1,\boldsymbol{z}_2,\dots,\boldsymbol{z}_{d-2},\boldsymbol{y}_{d-1},\boldsymbol{y}_{d}) \\
&\ge \left(\prod_{k=1}^{d-2}\tau_k\right) \|\mathcal{T}\|_\sigma.
\end{align*}
Finally, the best one found by the second step must be no less than the above $\mathcal{T}(\boldsymbol{z}_1,\boldsymbol{z}_2,\dots,\boldsymbol{z}_d)$.
\end{proof}
A closely related problem to the tensor spectral norm~\eqref{eq:defsnorm} is sphere constrained homogeneous polynomial optimization $\max\left\{p(\boldsymbol{x}):\|\boldsymbol{x}\|=1\right\}$ where $p(\boldsymbol{x})$ is a homogeneous polynomial function of degree $d$. In other words, there is a symmetric (entries are invariant under permutations of indices) tensor $\mathcal{T}\in\mathbb{R}^{n\times n\times\dots\times n}$ of order $d$ such that $p(\boldsymbol{x})=\mathcal{T}(\boldsymbol{x},\boldsymbol{x},\dots,\boldsymbol{x})$. This is a widely applicable optimization problem but is also NP-hard when the degree of the polynomial $d\ge3$~\cite{N03}. The current best approximation bound for this problem is $\Omega\left(\left(\frac{\ln n}{n}\right)^{d/2-1}\right)$, obtained by {\color{black} a randomized algorithm~\cite{HJLZ14} or a deterministic but not implementable algorithm~\cite{S11}, as the latter relies on the equivalence between convex optimization and membership oracle queries via the ellipsoid method. In fact, it is not difficult to obtain an easily implementable deterministic algorithm with the same best approximation bound with the help of the polarization formula~\cite[Lemma 1]{HLZ10} below.}
\begin{lemma}
Let $\mathcal{T}\in\mathbb{R}^{n\times n\times\dots\times n}$ be a symmetric tensor of order $d$ and $p(\boldsymbol{x})=\mathcal{T}(\boldsymbol{x},\boldsymbol{x},\dots,\boldsymbol{x})$. If $\xi_1,\xi_2,\dots,\xi_d$ are i.i.d.~symmetric Bernoulli random variables (taking values $\pm1$ with equal probability), then
\[
{\bf\sf E} \left[\left(\prod_{i=1}^d\xi_i\right)p\left(\sum_{k=1}^d\xi_k\boldsymbol{x}_k\right) \right] =d!\mathcal{T}(\boldsymbol{x}_1,\boldsymbol{x}_2,\dots,\boldsymbol{x}_d).
\]
\end{lemma}
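As a quick sanity check, the polarization identity can be verified numerically; the Python sketch below (illustrative only) enumerates all $2^d$ sign patterns for $d=3$ so that the expectation is computed exactly.
\begin{verbatim}
# A minimal sketch (illustrative only): verifying the polarization
# formula for d = 3 by exact enumeration of all sign patterns.
import itertools, math
import numpy as np

rng = np.random.default_rng(1)
n, d = 5, 3
A = rng.standard_normal((n, n, n))
T = sum(A.transpose(p) for p in itertools.permutations(range(3))) / 6.0

p = lambda x: np.einsum('ijk,i,j,k->', T, x, x, x)
xs = rng.standard_normal((d, n))

total = 0.0
for signs in itertools.product([1, -1], repeat=d):
    xi = np.array(signs)
    total += np.prod(xi) * p(xi @ xs)   # xi @ xs = sum_k xi_k x_k
total /= 2 ** d                         # exact expectation over signs

target = math.factorial(d) * np.einsum('ijk,i,j,k->', T, *xs)
assert np.isclose(total, target)
\end{verbatim}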
We only state the results but leave the details to interested readers.
\begin{theorem}
Let $p(\boldsymbol{x})$ be a homogeneous polynomial function of dimension $n$ and degree $d\ge3$. If $d$ is odd, then there is a deterministic polynomial-time approximation algorithm which outputs $\boldsymbol{z}\in\mathbb{S}^n$, such that
\[
p(\boldsymbol{z})\ge 0.3^{d-2}d!d^{-d}\left(\frac{\ln n}{n}\right)^{d/2-1} \max_{\|\boldsymbol{x}\|=1}p(\boldsymbol{x}).
\]
If $d$ is even, then there is a deterministic polynomial-time approximation algorithm which outputs $\boldsymbol{z}\in\mathbb{S}^n$, such that
\[
p(\boldsymbol{z})-\min_{\|\boldsymbol{x}\|=1}p(\boldsymbol{x}) \ge 0.3^{d-2}d!d^{-d}\left(\frac{\ln n}{n}\right)^{d/2-1} \left(\max_{\|\boldsymbol{x}\|=1}p(\boldsymbol{x})-\min_{\|\boldsymbol{x}\|=1}p(\boldsymbol{x})\right).
\]
\end{theorem}
\subsection{Approximation bound for tensor nuclear norm}\label{sec:nnorm}
We now study the approximation for the tensor nuclear norm.
\begin{definition}\label{def:nnorm}
For a given tensor $\mathcal{T}\in\mathbb{R}^{n_1\times n_2\times\dots\times n_d}$, the nuclear norm of $\mathcal{T}$ is defined as
\begin{equation} \label{eq:ndecomp}
\|\mathcal{T}\|_*:=\min\left\{\sum_{i=1}^r|\lambda_i| : \mathcal{T}=\sum_{i=1}^r\lambda_i\, \boldsymbol{x}_1^{(i)}\otimes\boldsymbol{x}_2^{(i)}\otimes\dots \otimes\boldsymbol{x}_d^{(i)}, \|\boldsymbol{x}_k^{(i)}\|=1\mbox{ for all $i$ and $k$}, \, r\in\mathbb{N} \right\}.
\end{equation}
\end{definition}
{\color{black}From~\eqref{eq:ndecomp}, we see that the tensor nuclear norm is the minimum of the sum of Frobenius norms of rank-one tensors in any CP decomposition. A CP decomposition of $\mathcal{T}$ that attains $\|\mathcal{T}\|_*$ is called a nuclear decomposition of $\mathcal{T}$~\cite{FL18}}. When $d=2$, the tensor nuclear norm reduces to the matrix nuclear norm, which is the sum of all singular values. Similar to the role played by the matrix nuclear norm in matrix rank minimization problems, the tensor nuclear norm is the convex envelope of the tensor rank and is widely used in tensor completion~\cite{GRY11,YZ16}.
The tensor nuclear norm and the tensor spectral norm are dual norms to each other; a proof can be found in~\cite{LC14,CL20}.
\begin{lemma} \label{thm:dual}
For given tensors $\mathcal{T}$ and $\mathcal{Z}$ in the same tensor space, it follows that
\begin{equation}
\|\mathcal{T}\|_{\sigma}=\max_{\|\mathcal{Z}\|_*\le 1}\langle\mathcal{T},\mathcal{Z}\rangle
\mbox{ and }
\|\mathcal{T}\|_*=\max_{\|\mathcal{Z}\|_{\sigma}\le 1}\langle\mathcal{T},\mathcal{Z}\rangle. \label{eq:def2nnorm}
\end{equation}
\end{lemma}
Computing the tensor nuclear norm is also NP-hard when $d\ge3$, as shown by Friedland and Lim~\cite{FL18}. In fact, it is much harder than computing the tensor spectral norm. From the definition~\eqref{eq:ndecomp}, finding a CP decomposition is not an easy task even for a given $r$, and from the dual formulation~\eqref{eq:def2nnorm}, checking the feasibility of $\|\mathcal{Z}\|_{\sigma}\le 1$ is also NP-hard. Perhaps the only known method is due to Nie~\cite{N17}, which is based on the sum-of-squares relaxation and only works for symmetric tensors of low dimensions. In terms of polynomial-time approximation bounds, the best bound is $\prod_{k=1}^{d-2}\frac{1}{\sqrt{n_k}}$. There are two methods to achieve this bound: one via matrix flattenings of the tensor~\cite{H15} and the other via partitioning the tensor into matrix slices~\cite{L16}. This bound is {\color{black}worse} than the best one for the tensor spectral norm. Let us now bridge the gap using an idea similar to grid sampling in~\cite{HJL22}.
To better illustrate our main idea, we discuss the details for a tensor $\mathcal{T}\in\mathbb{R}^{n_1\times n_2\times n_3}$ of order three. According to the dual formulation~\eqref{eq:def2nnorm},
\begin{align}
\|\mathcal{T}\|_*& = \max \left\{\langle\mathcal{T},\mathcal{Z}\rangle: \|\mathcal{Z}\|_{\sigma}\le 1\right\}\nonumber \\
&= \max \left\{\langle\mathcal{T},\mathcal{Z}\rangle: \mathcal{Z}(\boldsymbol{x},\boldsymbol{y},\boldsymbol{z})\le1 \mbox{ for all } \|\boldsymbol{x}\|=\|\boldsymbol{y}\|=\|\boldsymbol{z}\|=1\right\} \nonumber \\
&= \max \left\{\langle\mathcal{T},\mathcal{Z}\rangle: \max_{\|\boldsymbol{y}\|=\|\boldsymbol{z}\|=1}\mathcal{Z}(\boldsymbol{x},\boldsymbol{y},\boldsymbol{z})\le1 \mbox{ for all } \|\boldsymbol{x}\|=1\right\}. \label{eq:sdp0}
\end{align}
Notice that for a given $\boldsymbol{x}$, the constraint $\max_{\|\boldsymbol{y}\|=\|\boldsymbol{z}\|=1} \mathcal{Z}(\boldsymbol{x},\boldsymbol{y},\boldsymbol{z}) \le 1$ is equivalent to $\|\mathcal{Z}(\boldsymbol{x},\bullet,\bullet)\|_\sigma\le1$, i.e., the largest singular value of the matrix $\mathcal{Z}(\boldsymbol{x},\bullet,\bullet)$ is no more than one. This can be equivalently represented by $I\succeq\mathcal{Z}(\boldsymbol{x}, \bullet ,\bullet)\mathcal{Z}(\boldsymbol{x}, \bullet ,\bullet)^{\textnormal{T}}$. Here a symmetric matrix $A\succeq O$ {\color{black}where $O$ is the zero matrix} means that $A$ is positive semidefinite and $A\succeq B$ means that $A-B \succeq O$. Applying the Schur complement, we then have
\[
\max_{\|\boldsymbol{y}\|=\|\boldsymbol{z}\|=1} \mathcal{Z}(\boldsymbol{x},\boldsymbol{y},\boldsymbol{z}) \le 1
\Longleftrightarrow I\succeq\mathcal{Z}(\boldsymbol{x}, \bullet ,\bullet)\mathcal{Z}(\boldsymbol{x}, \bullet ,\bullet)^{\textnormal{T}} \Longleftrightarrow
\left[\begin{array}{cc}
I & \mathcal{Z}(\boldsymbol{x}, \bullet ,\bullet) \\
\mathcal{Z}(\boldsymbol{x}, \bullet ,\bullet)^{\textnormal{T}} & I
\end{array} \right] \succeq O.
\]
By combining with~\eqref{eq:sdp0} we obtain an equivalent formulation of the tensor nuclear norm
\begin{equation}\label{eq:sdp2}
\|\mathcal{T}\|_* = \max \left\{\langle\mathcal{T},\mathcal{Z}\rangle: \left[\begin{array}{cc}
I & \mathcal{Z}(\boldsymbol{x}, \bullet ,\bullet) \\
\mathcal{Z}(\boldsymbol{x}, \bullet ,\bullet)^{\textnormal{T}} & I
\end{array} \right] \succeq O \mbox{ for all } \|\boldsymbol{x}\|= 1\right\}.
\end{equation}
Obviously there is no way to enumerate all $\boldsymbol{x}$ in $\mathbb{S}^{n_1}$ in~\eqref{eq:sdp2}, but sphere covering is indeed helpful in this scenario. If we replace $\|\boldsymbol{x}\|=1$ with $\boldsymbol{x}\in\mathbb{H}^{n_1}$ for some deterministic $\mathbb{H}^{n_1}\in\mathbb{T}(n_1,\tau,O({n_1}^\alpha))$ with some universal constant $\alpha$,~\eqref{eq:sdp2} is then relaxed to
\[
\max \left\{\langle\mathcal{T},\mathcal{Z}\rangle: \left[\begin{array}{cc}
I & \mathcal{Z}(\boldsymbol{x}, \bullet ,\bullet) \\
\mathcal{Z}(\boldsymbol{x}, \bullet ,\bullet)^{\textnormal{T}} & I
\end{array} \right] \succeq O \mbox{ for all } \boldsymbol{x}\in\mathbb{H}^{n_1}\right\}.
\]
This becomes a semidefinite program with $O({n_1}^\alpha)$ positive semidefinite constraints.
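For illustration, the relaxed program above can be set up with an off-the-shelf modeling tool; the Python sketch below uses the \texttt{cvxpy} package (a choice made here purely for illustration) and models the order-three tensor variable $\mathcal{Z}$ by its $n_1$ frontal slices so that $\mathcal{Z}(\boldsymbol{x},\bullet,\bullet)=\sum_i x_i Z_i$.
\begin{verbatim}
# A minimal sketch (illustrative only) of the relaxed semidefinite
# program, assuming H holds the hitting-set points as rows.
import cvxpy as cp
import numpy as np

def sdp_relaxation(T, H):
    n1, n2, n3 = T.shape
    Z = [cp.Variable((n2, n3)) for _ in range(n1)]  # frontal slices
    cons = []
    for x in H:
        M = sum(x[i] * Z[i] for i in range(n1))     # Z(x, ., .)
        cons.append(cp.bmat([[np.eye(n2), M],
                             [M.T, np.eye(n3)]]) >> 0)
    obj = cp.Maximize(sum(cp.sum(cp.multiply(T[i], Z[i]))
                          for i in range(n1)))      # <T, Z>
    return cp.Problem(obj, cons).solve()            # the value u
\end{verbatim}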
\begin{algorithm} \label{alg:3nnorm}
Given $\mathcal{T}\in\mathbb{R}^{n_1\times n_2\times n_3}$, find approximate nuclear norm of $\mathcal{T}$.
\vspace{-0.2cm}\noindent\hrulefill
\begin{enumerate}
\item Choose a $\tau$-hitting set $\mathbb{H}^{n_1}\in\mathbb{T}(n_1,\tau,O({n_1}^\alpha))$ and solve the semidefinite program
\begin{equation}\label{eq:sdp3}
u = \max \left\{\langle\mathcal{T},\mathcal{Z}\rangle: \left[\begin{array}{cc}
I & \mathcal{Z}(\boldsymbol{x}, \bullet ,\bullet) \\
\mathcal{Z}(\boldsymbol{x}, \bullet ,\bullet)^{\textnormal{T}} & I
\end{array} \right] \succeq O \mbox{ for all } \boldsymbol{x}\in\mathbb{H}^{n_1}\right\}.
\end{equation}
\item Output $\tau u$.
\end{enumerate}
\vspace{-0.2cm}\hrulefill
\end{algorithm}
\begin{theorem}\label{thm:3nnorm}
For any $\mathbb{H}^{n_1}\in\mathbb{T}(n_1,\tau,O({n_1}^\alpha))$, Algorithm~\ref{alg:3nnorm} is a deterministic polynomial-time algorithm that approximates $\|\mathcal{T}\|_*$ with a worst-case approximation bound $\tau$.
\end{theorem}
\begin{proof}
Denote $\mathcal{Y}$ to be an optimal solution of~\eqref{eq:sdp3}. It is easy to see that~\eqref{eq:sdp3} is a relaxation of the maximization problem~\eqref{eq:sdp2} since $\mathbb{H}^{n_1}\subseteq\mathbb{S}^{n_1}$. Therefore, $u=\langle \mathcal{T},\mathcal{Y}\rangle\ge\|\mathcal{T}\|_*$.
For any $\boldsymbol{y},\boldsymbol{z}$, denote $\boldsymbol{v}=\mathcal{Y}(\bullet,\boldsymbol{y},\boldsymbol{z})$ and we have either $\|\boldsymbol{v}\|=0$ or there exists $\boldsymbol{x}\in\mathbb{H}^{n_1}$ such that $\boldsymbol{x}^{\textnormal{T}}\frac{\boldsymbol{v}}{\|\boldsymbol{v}\|}\ge\tau$, both leading to $\mathcal{Y}(\boldsymbol{x},\boldsymbol{y},\boldsymbol{z})=\boldsymbol{x}^{\textnormal{T}}\boldsymbol{v}\ge\tau\|\boldsymbol{v}\|=\tau \|\mathcal{Y}(\bullet,\boldsymbol{y},\boldsymbol{z})\|$. Therefore,
\[
\max_{\boldsymbol{x} \in \mathbb{H}^{n_1},\|\boldsymbol{y}\|=\|\boldsymbol{z}\|=1} \mathcal{Y}(\boldsymbol{x},\boldsymbol{y},\boldsymbol{z})
\ge \tau
\max_{\|\boldsymbol{y}\|=\|\boldsymbol{z}\|=1} \|\mathcal{Y}(\bullet,\boldsymbol{y},\boldsymbol{z})\|
=\tau \max_{\|\boldsymbol{x}\|=\|\boldsymbol{y}\|=\|\boldsymbol{z}\|=1}\mathcal{Y}(\boldsymbol{x},\boldsymbol{y},\boldsymbol{z})=\tau\|\mathcal{Y}\|_\sigma.
\]
By the feasibility of $\mathcal{Y}$ in~\eqref{eq:sdp3}, $\|\mathcal{Y}(\boldsymbol{x},\bullet,\bullet)\|_\sigma\le1$ for all $\boldsymbol{x}\in\mathbb{H}^{n_1}$, implying that
\[\|\tau\mathcal{Y}\|_\sigma=\tau\|\mathcal{Y}\|_\sigma\le \max_{\boldsymbol{x} \in \mathbb{H}^{n_1},\|\boldsymbol{y}\|=\|\boldsymbol{z}\|=1} \mathcal{Y}(\boldsymbol{x},\boldsymbol{y},\boldsymbol{z}) =
\max_{\boldsymbol{x} \in \mathbb{H}^{n_1}} \| \mathcal{Y}(\boldsymbol{x},\bullet,\bullet)\|_\sigma \le 1.
\]
This means that $\tau\mathcal{Y}$ is a feasible solution to the dual formulation~\eqref{eq:def2nnorm}, and so
\[
\|\mathcal{T}\|_*=\max_{\|\mathcal{Z}\|_\sigma\le 1}\langle \mathcal{T},\mathcal{Z}\rangle \ge \langle \mathcal{T},\tau\mathcal{Y}\rangle = \tau \langle \mathcal{T},\mathcal{Y}\rangle=\tau u\ge\tau\|\mathcal{T}\|_*.
\]
\end{proof}
Compared to Algorithm~\ref{alg:snorm}, which requires (possibly large) enumeration followed by comparison, Algorithm~\ref{alg:3nnorm} only needs to solve one semidefinite program, albeit a large one if $\mathbb{H}^{n_1}$ is large. We emphasize that $\mathbb{H}^{n_1}$ in Algorithm~\ref{alg:3nnorm} {\color{black} needs to be a deterministic $\tau$-hitting set in order to achieve a feasible solution of $\|\mathcal{Z}\|_\sigma\le1$ in~\eqref{eq:def2nnorm} with the desired approximation bound $\tau$ in Theorem~\ref{thm:3nnorm}. Although a randomized hitting set $\mathbb{H}^{n_1}_1(\gamma,\epsilon)$ can be used in Algorithm~\ref{alg:3nnorm}, it is likely that $\tau\mathcal{Y}$ in the proof of Theorem~\ref{thm:3nnorm} is not feasible for~\eqref{eq:def2nnorm}. However, $\langle \mathcal{T},\mathcal{Y}\rangle$ could still be a good upper bound of $\|\mathcal{T}\|_*$ in this case.} Let us now extend Algorithm~\ref{alg:3nnorm} to a general tensor of order $d$.
\begin{algorithm} \label{alg:nnorm}
Given $\mathcal{T}\in\mathbb{R}^{n_1\times n_2\times \dots\times n_d}$, find approximate nuclear norm of $\mathcal{T}$.
\vspace{-0.2cm}\noindent\hrulefill
\begin{enumerate}
\item Choose $\mathbb{H}^{n_k}\in\mathbb{T}(n_k,\tau_k,O({n_k}^{\alpha_k}))$ for $k=1,2,\dots,d-2$ and solve the semidefinite program
\[
u = \max \left\{\langle\mathcal{T},\mathcal{Z}\rangle: \left[\begin{array}{cc}
I & \mathcal{Z}(\boldsymbol{x}_1,\boldsymbol{x}_2,\dots,\boldsymbol{x}_{d-2}, \bullet ,\bullet) \\
\mathcal{Z}(\boldsymbol{x}_1,\boldsymbol{x}_2,\dots,\boldsymbol{x}_{d-2}, \bullet ,\bullet)^{\textnormal{T}} & I
\end{array} \right] \succeq O \mbox{ for all } \boldsymbol{x}_k\in\mathbb{H}^{n_k}\right\}.
\]
\item Output $u \prod_{k=1}^{d-2}\tau_k$.
\end{enumerate}
\vspace{-0.2cm}\hrulefill
\end{algorithm}
We state the final theorem that obtains an improved approximation bound for the tensor nuclear norm using the hitting set $\mathbb{H}^n_5(2+\sqrt{5},3+\sqrt{5})$ in~\eqref{eq:h6}. This bound finally matches the current best one for the tensor spectral norm; see Theorem~\ref{thm:snorm}. The proof is similar to that of Theorem~\ref{thm:3nnorm} and is omitted.
\begin{theorem}\label{thm:nnorm}
By choosing $\mathbb{H}^{n_k}_5(2+\sqrt{5},3+\sqrt{5})$ for $k=1,2,\dots,d-2$, Algorithm~\ref{alg:nnorm} is a deterministic polynomial-time algorithm that approximates $\|\mathcal{T}\|_*$ with a worst-case approximation bound $0.3^{d-2}\left(\prod_{k=1}^{d-2}\frac{\ln n_k}{n_k}\right)^{\frac{1}{2}}$ for any $\mathcal{T}\in\mathbb{R}^{n_1\times n_2\times\dots\times n_d}$.
\end{theorem}
\subsection{Numerical performance of approximation methods}\label{sec:numerical}
{\color{black}
We now test the numerical performance of the methods for approximating tensor norms, in complement to the theoretical results established earlier. All the experiments are conducted on a Linux server (Ubuntu 20.04) with an Intel Xeon Platinum 8358 @ 2.60GHz and 512GB of RAM. The computations are implemented in Python 3. The semidefinite optimization solver\footnote{https://docs.mosek.com/latest/pythonfusion/tutorial-sdo-shared.html} in the MOSEK Fusion API for Python 9.3.13 is called whenever semidefinite programs are involved.
We first test Algorithm~\ref{alg:snorm} to approximate the tensor spectral norm using examples in Nie and Wang~\cite[Examples 3.12, 3.13 and 3.14]{NW14}. The semidefinite relaxation method in~\cite{NW14} works well in practice and usually finds optimal values. This also enables us to check the true approximation bounds in practice rather than the conservative theoretical bounds. The results for the first two examples are shown in Table~\ref{table:00}. For Example 3.13, the method in~\cite{NW14} calls the fmincon function in MATLAB for a local improvement. This is the benchmark optimal value used to compute the approximation bounds. We also apply the classic alternating least squares (ALS) method~\cite{KB09} as a local improvement starting from the approximate solutions obtained by Algorithm~\ref{alg:snorm}. Whenever a local improvement method is applied, the corresponding indicator is appended with a `+' sign.
\begin{table}[h]
\centering
\small
\setlength{\tabcolsep}{5pt}
\centering
\begin{tabular}{|l|l|cccccc|}
\hline
Example &Method & CPU & CPU+ & Value & Value+ & Bound & Bound+\\
\hline
Ex 3.12&\cite{NW14} & 0.703 & & 2.8167 & & 1.0000& \\
&Alg~\ref{alg:snorm} & 0.000 & 0.000 & 2.2076 & 2.8167 & 0.7837 & 1.0000\\
\hline
Ex 3.13&\cite{NW14} & 0.545 & 0.612 & 0.9862 & 1.0000 & 0.9862 & 1.0000\\
&Alg~\ref{alg:snorm} & 0.000 & 0.250 & 0.8397 & 1.0000 & 0.8397 & 1.0000\\
\hline
\end{tabular}
\caption{Numerical results for Examples 3.12 and 3.13 in~\cite{NW14}}\label{table:00}
\end{table}
The results for Example 3.14 in~\cite{NW14} are shown in Table~\ref{table:01}. In this example, the method in~\cite{NW14} attained global optimality directly without applying the local improvement. We also list the theoretical approximation bound $\sqrt{\frac{\ln n}{n}}$ (without showing the constant disguised under the $\Omega$) of our algorithm for comparison.
\begin{table}[h]
\small
\setlength{\tabcolsep}{5pt}
\centering
\begin{tabular}{|l|l|ccccccc|}
\hline
$n$ &Method & CPU & CPU+ & Value & Value+ & Bound & Bound+ & $\sqrt{\ln n /n}$\\
\hline
5 & \cite{NW14} & 0.997 & & 6.0996 & & 1.0000 & & \\
& Alg~\ref{alg:snorm} & 0.020 & 0.050 & 4.3058 & 6.0996 & 0.7059 & 1.0000 & 0.5674\\\hline
10 & \cite{NW14} & 1.411 & & 14.7902 & & 1.0000 & & \\
& Alg~\ref{alg:snorm} & 0.320 & 1.920 & 8.4779 & 14.7902 & 0.5732 & 1.0000 & 0.4799\\\hline
15 & \cite{NW14} & 3.696 & & 25.4829 & & 1.0000 & & \\
& Alg~\ref{alg:snorm} & 1.670 & 3.680 & 11.4022 & 25.4829 & 0.4474 & 1.0000 & 0.4249\\\hline
20 & \cite{NW14} & 8.763 & & 33.7020 & & 1.0000 & & \\
& Alg~\ref{alg:snorm} & 4.870 & 20.120 & 13.3617 & 33.7020 & 0.3964 & 1.0000 & 0.3870\\\hline
25 & \cite{NW14} & 37.535 & & 46.7997 & & 1.0000 & & \\
& Alg~\ref{alg:snorm} & 50.310 & 110.000 & 19.5674 & 46.7997 & 0.4181 & 1.0000 & 0.3588\\\hline
30 & \cite{NW14} & 52.994 & & 64.9106 & & 1.0000 & & \\
& Alg~\ref{alg:snorm} & 101.380 & 152.160 & 24.5234 & 64.9106 & 0.3778 & 1.0000 & 0.3367\\\hline
35 & \cite{NW14} & 111.547 & & 80.7697 & & 1.0000 & & \\
& Alg~\ref{alg:snorm} & 197.510 & 350.360 & 28.6220 & 80.7697 & 0.3543 & 1.0000 & 0.3187\\\hline
40 & \cite{NW14} & 241.565 & & 95.0878 & & 1.0000 & & \\
& Alg~\ref{alg:snorm} & 362.230 & 548.350 & 33.7020 & 95.0878 & 0.3307 & 1.0000 & 0.3037\\
\hline
\end{tabular}
\caption{Numerical results for Example 3.14 in~\cite{NW14}}\label{table:01}
\end{table}
As observed from the numerical results of these three examples, Algorithm~\ref{alg:snorm} alone does not attain optimality, in contrast to a practical method, but with the help of the ALS method global optimality is obtained for all the test instances. The approximation bounds calculated from these numerical instances are better than the theoretical approximation bounds shown in Section~\ref{sec:snorm}. In terms of computational time, compared with the method in~\cite{NW14}, Algorithm~\ref{alg:snorm} runs faster for low dimensions but its running time increases quickly as the dimension of the problem grows.
To systematically verify and compare with the theoretical approximation bounds obtained by our algorithms, we now test randomly generated tensors whose spectral and nuclear norms can be easily obtained. In particular, let
\begin{equation} \label{eq:orthdecomp}
\mathcal{T}=\sum_{i=1}^r \lambda_i\, \boldsymbol{x}_i\otimes \boldsymbol{y}_i\otimes \boldsymbol{z}_i \mbox{ with } \lambda_i>0 \mbox{ and } \|\boldsymbol{x}_i\|=\|\boldsymbol{y}_i\|=\|\boldsymbol{z}_i\|=1 \mbox{ for }i=1,2,\dots,r,
\end{equation}
where $(\boldsymbol{x}_i^{\textnormal{T}}\boldsymbol{x}_j)(\boldsymbol{y}_i^{\textnormal{T}}\boldsymbol{y}_j)=\boldsymbol{z}_i^{\textnormal{T}}\boldsymbol{z}_j=0$ if $i\neq j$. This is a special type of orthogonally decomposable tensors. With the special structure of $\mathcal{T}$ in~\eqref{eq:orthdecomp}, it is not difficult to see that $\|\mathcal{T}\|_\sigma=\max_{1\le i\le r}\lambda_i$ and $\|\mathcal{T}\|_*=\sum_{i=1}^r\lambda_i$. The components of $\mathcal{T}$ in~\eqref{eq:orthdecomp}, i.e., the $\lambda_i$'s, $\boldsymbol{x}_i$'s, $\boldsymbol{y}_i$'s and $\boldsymbol{z}_i$'s, are generated from i.i.d.~standard normal distributions and made positive (by taking the absolute value) or orthogonal as necessary.
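One possible way to generate such test tensors in Python is sketched below (illustrative only, assuming $r\le\min\{n_1,n_3\}$): the $\boldsymbol{x}_i$'s and $\boldsymbol{z}_i$'s are taken as orthonormal rows from QR factorizations, which suffices for the orthogonality condition above.
\begin{verbatim}
# A minimal sketch (illustrative only): random test tensors with known
# spectral norm max(lam) and nuclear norm sum(lam); needs r <= min(n1, n3).
import numpy as np

def random_orth_tensor(n1, n2, n3, r, seed=2):
    rng = np.random.default_rng(seed)
    lam = np.abs(rng.standard_normal(r))                 # positive weights
    X = np.linalg.qr(rng.standard_normal((n1, r)))[0].T  # orthonormal rows
    Y = rng.standard_normal((r, n2))
    Y /= np.linalg.norm(Y, axis=1, keepdims=True)        # unit norm only
    Z = np.linalg.qr(rng.standard_normal((n3, r)))[0].T  # orthonormal rows
    return np.einsum('i,ia,ib,ic->abc', lam, X, Y, Z), lam
\end{verbatim}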
We apply Algorithm~\ref{alg:snorm} to approximate the spectral norm and Algorithm~\ref{alg:3nnorm} to approximate the nuclear norm for $n\times 10\times 10$ tensors and $10\times n\times n$ tensors, both with varying $n$. Instead of the deterministic hitting set $\mathbb{H}_5$ used in the original algorithms, we replace it with a randomized hitting set $\mathbb{H}_1$ that is numerically more stable and efficient. The results are shown in Tables~\ref{table:4} and~\ref{table:6} for the spectral norm and in Tables~\ref{table:10} and~\ref{table:12} for the nuclear norm. For each type of tensors with a fixed size, say $5\times 10\times 10$, we randomly generate 200 instances and find an approximate solution of the spectral norm by Algorithm~\ref{alg:snorm}, whose approximation bound is then computed since the optimal value is known. We then use the approximate solution as a starting point to apply the ALS method as a local improvement. As before, the corresponding indicator is appended with a `+' sign when a local improvement is involved.
The same setting is implemented for the tensor nuclear norm by Algorithm~\ref{alg:3nnorm} except that (1) there is no local improvement method to improve our approximate solution and (2) we do not multiply the output solution $\mathcal{Y}$ by $\tau$, as $\tau$ involves an $\Omega$, but directly use $\langle \mathcal{T},\mathcal{Y} \rangle$ to obtain the bound (see the proof of Theorem~\ref{thm:3nnorm}), so the bound is larger than one. In this scenario, the closer to one, the better the bound.
\begin{table}[h]
\small
\setlength{\tabcolsep}{5pt}
\centering
\begin{tabular}{|l|cccccc|}
\hline
$n$ & 5 & 10 & 20 & 30 & 40 & 50\\
\hline
$\sqrt{\ln n/n}$ & 0.5674 & 0.4799 & 0.3870 & 0.3367 & 0.3037 & 0.2797 \\
\hline
Min bound & 0.6317 & 0.6344 & 0.5751 & 0.5263 & 0.4663 & 0.4602 \\
Min bound+ & 0.6921 & 0.6500 & 0.5847 & 0.5371 & 0.4768 & 0.8603 \\
Max bound & 0.9879 & 0.9611 & 0.8572 & 0.7653 & 0.7025 & 0.6579 \\
Max bound+ & 1.0000 & 1.0000 & 1.0000 & 1.0000 & 1.0000 & 1.0000 \\
Mean bound & 0.8898 & 0.8219 & 0.6786 & 0.6459 & 0.5778 & 0.5278 \\
Mean bound+ & 0.9895 & 0.9896 & 0.9859 & 0.9825 & 0.9758 & 0.9932 \\
\% of optimality+ & 92.0\% & 92.0\% & 91.0\% & 87.5\% & 84.5\% & 89.0\% \\
Mean CPU+ & 0.02 & 0.23 & 0.78 & 6.84 & 12.47 & 18.39 \\
\hline
\end{tabular}
\caption{Approximating spectral norm by Algorithm 3.3 (using $\mathbb{H}^n_1$) for $n\times 10\times 10$ tensors}\label{table:4}
\end{table}
\begin{table}[h]
\small
\setlength{\tabcolsep}{5pt}
\centering
\begin{tabular}{|l|cccccc|}
\hline
$n$ & 5 & 10 & 20 & 30 & 40 & 50\\
\hline
Min bound & 0.6574 & 0.5016 & 0.5094 & 0.6858 & 0.5133 & 0.5905 \\
Min bound+ & 0.6746 & 0.5321 & 0.5109 & 0.7236 & 0.5261 & 0.6099 \\
Max bound & 0.9451 & 0.9453 & 0.9472 & 0.9375 & 0.9819 & 0.9620 \\
Max bound+ & 1.0000 & 1.0000 & 1.0000 & 1.0000 & 1.0000 & 1.0000 \\
Mean bound & 0.8271 & 0.8292 & 0.8308 & 0.8280 & 0.8326 & 0.8295 \\
Mean bound+ & 0.9899 & 0.9826 & 0.9893 & 0.9933 & 0.9814 & 0.9851 \\
\% of optimality+ & 90.0\% & 85.5\% & 91.0\% & 93.5\% & 88.5\% & 90.0\% \\
Mean CPU+ & 0.06 & 0.23 & 0.76 & 1.81 & 3.12 & 5.53 \\
\hline
\end{tabular}
\caption{Approximating spectral norm by Algorithm 3.3 (using $\mathbb{H}^{10}_1$) for $10\times n\times n$ tensors}\label{table:6}
\end{table}
\begin{table}[H]
\small
\setlength{\tabcolsep}{5pt}
\centering
\begin{tabular}{|l|cccccc|}
\hline
$n$ & 5 & 10 & 20 & 30 & 40 & 50\\
\hline
$\sqrt{n/\ln n}$& 1.7626 & 2.0840 & 2.5838 & 2.9699 & 3.2929 & 3.5751 \\
\hline
Min bound & 1.1791 & 1.2521 & 1.7417 & 1.8815 & 2.1672 & 2.5568 \\
Max bound & 1.4998 & 1.5263 & 2.0248 & 2.0187 & 2.2854 & 3.2108 \\
Mean bound & 1.3078 & 1.4135 & 1.9055 & 1.9522 & 2.2221 & 2.9763 \\
Mean CPU+ & 0.69 & 13.31 & 101.49 & 1957.03 & 5365.99 & 11609.46 \\
\hline
\end{tabular}
\caption{Approximating nuclear norm by Algorithm 3.9 (using $\mathbb{H}^{n}_1$) for $n\times 10\times 10$ tensors}\label{table:10}
\end{table}
\begin{table}[h]
\small
\setlength{\tabcolsep}{5pt}
\centering
\begin{tabular}{|l|cccccc|}
\hline
$n$ & 5 & 10 & 20 & 30 & 40 & 50 \\
\hline
Min bound & 1.2999 & 1.3110 & 1.3008 & 1.3225 & 1.3638 & 1.3511 \\
Max bound & 1.5275 & 1.5303 & 1.5257 & 1.5405 & 1.5099 & 1.5726 \\
Mean bound & 1.4120 & 1.4112 & 1.4148 & 1.4148 & 1.4200 & 1.4239 \\
Mean CPU+ & 2.06 & 13.23 & 160.03 & 941.46 & 6618.63 & 9488.85 \\
\hline
\end{tabular}
\caption{Approximating nuclear norm by Algorithm 3.9 (using $\mathbb{H}^{10}_1$) for $10\times n\times n$ tensors}\label{table:12}
\end{table}
From the above tables, we see that the exact approximation bounds obtained from numerical instances outperform the theoretical bound $\Omega\left(\sqrt{\frac{\ln n}{n}}\right)$ for both the spectral and nuclear norms. For the latter, it clearly beats the previously known best bound $\Omega\left(\frac{1}{\sqrt{n}}\right)$. For the spectral norm, running the ALS method starting with our approximate solutions leads to global optimality for most randomly generated tensor instances.
}
\section{Concluding remarks}\label{sec:remark}
We constructed hitting sets, i.e., collections of spherical caps that cover the unit sphere, with adjustable parameters for different levels of approximations and cardinalities, summarized in Table~\ref{table:list}. These ready-to-use constructions can be applied to various decision making problems on spheres or related problems. By applying the covering results we proposed easily implementable and deterministic algorithms to approximate the tensor spectral norm with the currently best known approximation bound. The algorithms can be extended to provide approximate solutions for sphere constrained homogeneous polynomial optimization problems. Deterministic algorithms with an improved approximation bound for the tensor nuclear norm were proposed as well. This newly improved bound attains the best known one for the tensor spectral norm.
For $1\le p\le \infty$, the tensor spectral $p$-norm~\cite{L05} generalizes the tensor spectral norm in which the unit sphere $\|\boldsymbol{x}\|=1$ is replaced by the $L_p$-sphere $\|\boldsymbol{x}\|_p=1$. The tensor nuclear $p$-norm can also be defined similarly~\cite{FL18}. Hou and So~\cite{HS14} studied related $L_p$-sphere constrained homogeneous polynomial optimization problems and proposed approximation bounds. It is natural to ask whether one can construct $L_p$-sphere coverings and apply them to approximate the tensor spectral and nuclear $p$-norms. The answer is likely affirmative, but the problem remains challenging. In fact, one can construct randomized hitting sets using ideas similar to those in Section~\ref{sec:h1} to show the $L_p$ version of Theorem~\ref{thm:h1}, but deterministic constructions remain difficult.
Perhaps a more interesting problem is to explicitly construct hitting sets for the binary hypercube $\{1,-1\}^n$ with different levels of approximations and cardinalities. It will have wider applications, particularly in discrete optimization and graph theory. We leave these to future works.
\section*{Acknowledgments}
The research is partially supported by the National Natural Science Foundation of China (Grants 71771141, 72171141, 71825003, 72150001, 72192832 and 11831002) and Program for Innovative Research Team of Shanghai University of Finance and Economics. The authors would like to thank the anonymous referees for their insightful comments that helped to improve this paper from its original version.
|
{
"arxiv_id": "2302.14193",
"language": "en",
"timestamp": "2023-03-01T02:04:27",
"url": "https://arxiv.org/abs/2302.14193",
"yymm": "2302"
} |
\section{Introduction}\label{sec:introduction}
Dynamic 3D scene understanding based on captured 3D point cloud data is
a critical enabling technology in 3D vision systems. 3D scene flow
aims at finding the point-wise 3D displacement between consecutive point
cloud scans. With the increase in the availability of point cloud data,
especially those acquired via the LiDAR scanner, 3D scene flow
estimation directly from point clouds is an active research topic
nowadays. 3D scene flow estimation finds rich applications in 3D
perception tasks such as semantic segmentation, action recognition, and
inter-prediction in compressing sequences of LiDAR scans.
Today's solutions to 3D scene flow estimation mostly rely on supervised
or self-supervised deep neural networks (DNNs) that learn to predict the
point-wise motion field from a pair of input point clouds via end-to-end
optimization. One of the important components of these methods is to
learn flow embedding by analyzing spatio-temporal correlations among
regions of the two point clouds. After the successful demonstration of
such an approach in FlowNet3D \cite{liu2019flownet3d}, there has been an
increasing number of papers on this topic exploiting and combining
other ideas such as point convolutions and the attention mechanism.
These DNN-based methods work well in an environment that meets the local
scene rigidity assumption. They usually outperform classical
point-correspondence-based methods. On the other hand, they have a
large number of parameters and rely on large training datasets. For the
3D scene flow estimation problem, it is non-trivial to obtain dense
point-level flow annotations. Thus, it is challenging to adopt the
heavily supervised learning paradigm with real-world data. Instead,
methods are typically trained first on synthetic datasets with ground
truth flow information and later fine-tuned on real-world datasets.
This makes the training process very complicated.
In this paper, we develop a green and interpretable 3D scene flow
estimation method for the autonomous driving scenario and name it
``PointFlowHop''. We decompose our solution into vehicle ego-motion and
object motion modules. Scene points are classified as static and moving.
Moving points are grouped into moving objects and a rigid flow model is
established for each object. Furthermore, the flow in local regions is
refined assuming local scene rigidity. The PointFlowHop method adopts the
green learning (GL) paradigm \cite{kuo2022green}. It is built upon
related recent work, GreenPCO \cite{kadam2022greenpco}, and preceding
foundation works such as R-PointHop \cite{kadam2022r}, PointHop
\cite{zhang2020pointhop}, and PointHop++ \cite{zhang2020pointhop++}.
The task-agnostic nature of the feature learning process in prior art
enables scene flow estimation through seamless modification and
extension. Furthermore, a large number of operations in PointFlowHop are
not performed during training. The ego-motion and object-level motion is
optimized in inference only. Similarly, the moving points are grouped
into objects only during inference. This makes the training process much
faster and the model size very small. The decomposition of 3D scene
flow into object-wise rigid motion and/or ego-motion components is not
entirely novel. However, our focus remains in developing a GL-based
solution with improved overall performance, including accuracy, model
sizes and computational complexity.
The novelty of our work lies in two aspects. First, it expands the scope
of existing GL-based point cloud data processing techniques. GL-based
point cloud processing has so far been developed for object-level
understanding \cite{kadam2023s3i, kadam2020unsupervised, kadam2022pcrp,
zhang2020unsupervised, zhang2020pointhop++, zhang2020pointhop} and
indoor scene understanding \cite{kadam2022r, zhang2022gsip}. This work
addresses the more challenging problem of outdoor scene understanding at
the point level. This work also expands the application scenario of
R-PointHop, where all points are transformed using one single rigid
transformation. For 3D scene flow estimation, each point has its own
unique flow vector. Furthermore, we show that a single model can learn
features for ego-motion estimation as well as object-motion estimation,
which are two different but related tasks. This allows model sharing and
opens doors to related tasks such as joint scene flow estimation and
semantic segmentation. Second, our work highlights the over-parameterized
nature of DL-based solutions, which demand larger model sizes and higher
computational complexity in both training and testing. The overall
performance of PointFlowHop suggests a new point cloud processing
pipeline that is extremely lightweight and mathematically transparent.
To summarize, there are three major contributions of this work.
\begin{itemize}
\item We develop a lightweight 3D scene classifier that identifies
moving points and further clusters and associates them into moving
object pairs.
\item We optimize the vehicle ego-motion and object-wise motion based
on point features learned using a single task-agnostic feedforward
PointHop++ model.
\item PointFlowHop outperforms representative benchmark methods in the
scene flow estimation task on two real world LiDAR datasets with fewer
model parameters and lower computational complexity measured by FLOPs
(floating-point operations).
\end{itemize}
The rest of the paper is organized as follows. Related work is reviewed
in Sec. \ref{sec:review}. The PointFlowHop method is proposed in Sec.
\ref{sec:method}. Experimental results are presented in Sec.
\ref{sec:experiments}. Finally, concluding remarks and possible future
extensions are given in Sec. \ref{sec:conclusion}.
\section{Related Work}\label{sec:review}
\subsection{Scene Flow Estimation}
Early work on 3D scene flow estimation uses 2D optical flow estimation
followed by triangulation such as that given in \cite{vedula1999three}.
The Iterative Closest Point (ICP) \cite{besl1992method} and the
non-rigid registration work, NICP \cite{amberg2007optimal}, can operate
on point clouds directly. A series of seminal image- and point-based
methods for scene flow estimation relying on similar ideas was proposed
in the last two decades. The optical flow is combined with dense stereo
matching for flow estimation in \cite{huguet2007variational}. A
variational framework that predicts the scene flow and depth is proposed
in \cite{basha2013multi}. A piecewise rigid scene flow estimation
method is investigated in \cite{vogel2013piecewise}. Similarly, the
motion of rigidly moving 3D objects is examined in
\cite{menze2015object}. Scene flow based on Lucas-Kanade tracking
\cite{lucas1981iterative} is studied in \cite{quiroga2012scene}. An
exhaustive survey on 2D optical flow and 3D scene flow estimation
methods has been done by Zhai et al. \cite{zhai2021optical}.
Deep-learning-based (DL-based) methods have been popular in the field of
computer vision in the last decade. For DL-based 3D scene flow
estimation, FlowNet3D \cite{liu2019flownet3d} adopts the feature
learning operations from PointNet++ \cite{qi2017pointnet++}. HPLFlowNet
\cite{gu2019hplflownet} uses bilateral convolution layers and projects
point clouds to an ordered permutohedral lattice. PointPWC-Net
\cite{wu2020pointpwc} takes a self-supervised learning approach that
works in a coarse-to-fine manner. FLOT \cite{puy2020flot} adopts a
correspondence-based approach based on optimal transport. HALFlow
\cite{wang2021hierarchical} uses a hierarchical network structure with
an attention mechanism. The Just-Go-With-the-Flow method
\cite{mittal2020just} uses self-supervised learning with the nearest
neighbor loss and the cycle consistency loss.
DL-based methods that attempt to simplify the flow estimation problem
using ego-motion and/or object-level motion have also been investigated.
For example, Rigid3DSceneFlow \cite{gojcic2021weakly} reasons the scene
flow at the object level (rather than the point level). Accordingly,
the flow of scene background is analyzed via ego-motion and that of a
foreground object is described by a rigid model. RigidFlow
\cite{li2022rigidflow} enforces the rigidity constraint in local regions
and performs rigid alignment in each region to produce rigid pseudo
flow. SLIM \cite{baur2021slim} uses a self-supervised loss function to
separate moving and stationary points.
\subsection{Green Point Cloud Learning}
Green Learning (GL) \cite{kuo2022green} has started to gain attention as
an alternative to Deep Learning (DL) in recent years. Typically, GL
consists of three modules: 1) unsupervised representation learning, 2)
supervised feature learning, and 3) supervised decision learning. The
unsupervised representation learning in the first module is rooted in
the derivation of data-driven transforms such as the Saak
\cite{kuo2018data} and the Saab \cite{kuo2019interpretable} transforms.
Both the training and inference processes in GL adopt a feedforward data
processing path without backpropagation. The optimization is
statistics-based, and it is carried out at each module independently.
The learning process is lightweight, making it data and computation
resource friendly. GL-based methods have been developed for a wide
variety of image processing and computer vision tasks
\cite{kuo2022green}.
Green Point Cloud learning \cite{liu20213d} was first introduced in
PointHop \cite{zhang2020pointhop}. The unsupervised representation
learning process involves constructing a local point descriptor via
octant space partitioning followed by dimensionality reduction via the
Saab transform. These operations together are called one PointHop unit.
It is the fundamental building block in a series of follow-up works along
with other task-specific modules. PointHop++ \cite{zhang2020pointhop++}
replaces the Saab transform with its efficient counterpart called the
Channel-wise Saab transform \cite{chen2020pixelhop++}. PointHop and
PointHop++ adopt a multi-hop learning system for point cloud
classification, whereby the learned point representations are aggregated
into a global feature vector and fed to a classifier. The multi-hop
learning architecture is analogous to the hierarchical deep learning
architecture. The multi-hop architecture helps capture the information
from short-, mid-, and long-range point cloud neighborhoods.
More recently, R-PointHop \cite{kadam2022r}, GSIP \cite{zhang2022gsip}
and GreenPCO \cite{kadam2022greenpco} demonstrate green learning
capabilities on more challenging large-scale point clouds for indoor
scene registration, indoor segmentation, and odometry tasks,
respectively. R-PointHop finds corresponding points between the source
and target point clouds using the learned representations and then
estimates the 3D rotation and translation to align the source with the
target. In GreenPCO, a similar process is adopted to incrementally
estimate the object's trajectory. Additional ideas presented in GreenPCO
include a geometry-aware point cloud sampling scheme that is suitable
for LiDAR data. Other noteworthy green point cloud learning works
include SPA \cite{kadam2020unsupervised}, UFF
\cite{zhang2020unsupervised}, PCRP \cite{kadam2022pcrp}, and
S3I-PointHop \cite{kadam2023s3i}.
\begin{figure*}[!t]
\centering
\includegraphics[scale=0.45]{Architecture.png}
\caption{An overview of the PointFlowHop method, which consists of six
modules: 1) ego-motion compensation, 2) scene classification, 3) object
association, 4) object refinement, 5) object motion estimation, and 6)
scene flow initialization and refinement.} \label{fig:architecture}
\end{figure*}
\section{Proposed PointFlowHop Method}\label{sec:method}
The system diagram of the proposed PointFlowHop method is shown in Fig.
\ref{fig:architecture}. It takes two consecutive point clouds $X_t \in
\mathbb{R}^{n_t \times 3}$ and $X_{t+1} \in \mathbb{R}^{n_{t+1} \times 3}$
as the input and calculates the point-wise flow $\bar{f_t} \in
\mathbb{R}^{n_t \times 3}$ for the points in $X_t$.
PointFlowHop decomposes the scene flow estimation problem into two
subproblems: 1) determining vehicle's ego-motion $(T_{ego})$ and 2)
estimating the motion of each individual object (denoted by
$(T_{obj_i})$ for object $i$). It first proceeds by determining and
compensating for the ego-motion and classifying scene points as moving
or static in modules 1 and 2, respectively. Next, moving points are
clustered and associated as moving objects in modules 3 and 4, and the
motion of each object is estimated in module 5. Finally, the flow
vectors of static and moving points are jointly refined. These steps
are detailed below.
\subsection{Module 1: Ego-motion Compensation}
The $i^{th}$ point in $X_t$ has coordinates $(x_t^i,y_t^i,z_t^i)$.
Suppose this point is observed at $(x_{t+1}^i,y_{t+1}^i,z_{t+1}^i)$ in
$X_{t+1}$. These point coordinates are expressed in the respective LiDAR
coordinate systems centered at the vehicle position at time $t$ and
$t+1$. Since the two coordinate systems may not overlap due to the
vehicle's motion, the scene flow vector, $\bar{f_t}^i$, of the $i^{th}$
point cannot be simply calculated as a vector difference. Hence, we
begin by aligning the two coordinate systems or, in other words, we
compensate for the vehicle's motion (called ego-motion).
The ego-motion compensation module in PointFlowHop is built upon a
recently proposed point cloud odometry estimation method, called
GreenPCO \cite{kadam2022greenpco}. It is briefly reviewed below for
self-containedness. GreenPCO determines the vehicle trajectory
incrementally by analyzing consecutive point cloud scans. It is
conducted with the following four steps. First, the two point clouds are
sampled using the geometry-aware sampling method, which selects points
spatially spread out with salient local surfaces based on the eigen
features \cite{hackel2016fast}. Second, the sampled points from the two
point clouds are divided into four views - front, left, right and rear
based on the azimuthal angles. Third, point features are extracted using
PointHop++ \cite{zhang2020pointhop++}. The features are used to find
matching points between the two point clouds in each view. Last, the
pairs of matched points are used to estimate the vehicle trajectory.
These steps are repeated as the vehicle advances in the environment. The
diagram of the GreenPCO method is depicted in Fig. \ref{fig:greenpco}.
Ego-motion estimation in PointFlowHop involves a
single iteration of GreenPCO whereby the vehicle's motion from time $t$
to $t+1$ is estimated. Then, the ego-motion can be represented by the 3D
transformation, $T_{ego}$, which consists of a 3D rotation and 3D
translation. Afterward, we use $T_{ego}$ to warp $X_t$ to $\Tilde{X}_{t}$, making it
in the same coordinate system as that of $X_{t+1}$. Then, the flow vector
can be computed by
\begin{equation}\label{eq:flow_vector}
\bar{f_t}^i=(x_{t+1}^i-\Tilde{x}_t^i,y_{t+1}^i-\Tilde{y}_t^i,z_{t+1}^i
-\Tilde{z}_t^i),
\end{equation}
where $(\Tilde{x}_t^i,\Tilde{y}_t^i,\Tilde{z}_t^i)$ is the warped
coordinate of the $i^{th}$ point.
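A minimal sketch of this warping step in Python is given below (illustrative only), assuming the estimated ego-motion $T_{ego}=(R,t)$ is available and points are matched by index.
\begin{verbatim}
# A minimal sketch (illustrative only): warp X_t by the estimated
# ego-motion (R, t) and compute the point-wise flow vectors defined above.
import numpy as np

def ego_compensated_flow(X_t, X_t1, R, t):
    # X_t, X_t1: (n, 3) matched points; R: (3, 3); t: (3,)
    X_t_warped = X_t @ R.T + t    # into the coordinate system of X_t1
    return X_t1 - X_t_warped      # point-wise flow vectors
\end{verbatim}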
\begin{figure*}[!t]
\centering
\includegraphics[scale=0.65]{Architecture_GreenPCO.png}
\caption{An overview of the GreenPCO method \cite{kadam2022greenpco}.} \label{fig:greenpco}
\end{figure*}
\subsection{Module 2: Scene Classification}
After compensating for ego-motion, the resulting $\Tilde{X}_t$ and
$X_{t+1}$ are in the same coordinate system (i.e., that of $X_{t+1}$).
Next, we coarsely classify scene points in $\Tilde{X}_t$ and $X_{t+1}$
into two classes, moving and static. Generally speaking, the moving
points may belong to objects such as cars, pedestrians, mopeds, etc.,
while the static points correspond to objects like buildings, poles,
etc. The scene flow of moving points can be analyzed later while static
points can be assigned a zero flow (or equal to the ego-motion depending
on the convention of the coordinate systems used). This means that the
later stages of PointFlowHop would process fewer points.
For the scene classifier, we define a set of shape and motion features
that are useful in distinguishing static and moving points. These features
are explained below.
\begin{itemize}
\item Shape features \\
We reuse the eigen features \cite{hackel2016fast} calculated in the
ego-motion estimation step. They summarize the distribution of
neighborhood points using covariance analysis. The analysis provides a
4-dimensional feature vector comprising of linearity, planarity, eigen
sum and eigen entropy.
\item Motion feature \\
We first voxelize $\Tilde{X}_t$ and $X_{t+1}$ with a voxel size of 2 meters.
Then, the motion feature for each point in $\Tilde{X}_t$ is the distance
to the nearest voxel center in $X_{t+1}$, and vice versa, for each point
in $X_{t+1}$.
\end{itemize}
The 5-dimensional (shape and motion) feature vector is fed to a binary
XGBoost classifier. For training, we use the point-wise class labels
provided by the SemanticKITTI \cite{behley2019iccv} dataset. We observe
that the 5D shape/motion feature vector is sufficient for decent
classification. The classification accuracy on the SemanticKITTI dataset
is 98.82\%. Furthermore, some of the misclassified moving points are
reclassified in the subsequent object refinement step.
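The sketch below illustrates how the 5-dimensional feature and the classifier could be assembled in Python (illustrative only; the function names and shapes are hypothetical, and the eigen features are assumed precomputed).
\begin{verbatim}
# A minimal sketch (illustrative only) of the 5-D shape/motion feature
# and the binary XGBoost classifier; eigen_feats is assumed to be a
# precomputed (n, 4) array of eigen features.
import numpy as np
from scipy.spatial import cKDTree
from xgboost import XGBClassifier

def motion_feature(points, other_points, voxel=2.0):
    # Distance to the nearest voxel center of the other scan.
    idx = np.unique(np.floor(other_points / voxel), axis=0)
    centers = (idx + 0.5) * voxel
    return cKDTree(centers).query(points)[0]

def scene_features(points, other_points, eigen_feats):
    m = motion_feature(points, other_points)[:, None]
    return np.hstack([eigen_feats, m])               # (n, 5)

clf = XGBClassifier(n_estimators=100, max_depth=6)
# clf.fit(scene_features(Xt, Xt1, eig_t), labels)    # SemanticKITTI labels
\end{verbatim}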
\subsection{Module 3: Object Association}
We simplify the problem of motion analysis on moving points by grouping
moving points into moving objects. To discover objects from moving
points, we use the Density-based Spatial Clustering for Applications
with Noise (DBSCAN) \cite{ester1996density} algorithm. Simply speaking,
DBSCAN iteratively clusters points based on the distance parameter ($eps$)
and the minimum points ($minPts$) parameter. Parameter $eps$ gives the
maximum Euclidean distance between two points for them to be considered
neighbors.
Parameter $minPts$ determines the minimum number of points to form a
cluster. Some examples of the objects discovered using PointFlowHop are
colored in Fig. \ref{fig:clusters}.
Points belonging to distinct objects may get clustered together. We put
the points marked as ``outliers'' by DBSCAN in the set of static points.
The DBSCAN algorithm is run on $\Tilde{X}_t$ and $X_{t+1}$ separately.
Later, we use cluster centroids to associate objects between
$\Tilde{X}_t$ and $X_{t+1}$. That is, for each centroid in
$\Tilde{X}_t$, we locate its nearest centroid in $X_{t+1}$.
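A minimal Python sketch of this step is given below (illustrative only; the $eps$ and $minPts$ values are placeholders rather than the tuned ones).
\begin{verbatim}
# A minimal sketch (illustrative only): object discovery with DBSCAN
# and nearest-centroid association between the two scans.
import numpy as np
from sklearn.cluster import DBSCAN

def discover_objects(moving_pts, eps=1.0, min_samples=10):
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(moving_pts)
    objects = [moving_pts[labels == k] for k in set(labels) - {-1}]
    outliers = moving_pts[labels == -1]   # put back with static points
    return objects, outliers

def associate(objects_t, objects_t1):
    # Match each object at time t to the nearest centroid at t+1.
    c1 = np.array([o.mean(axis=0) for o in objects_t1])
    return [(o, objects_t1[np.argmin(
        np.linalg.norm(c1 - o.mean(axis=0), axis=1))]) for o in objects_t]
\end{verbatim}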
\subsection{Module 4: Object Refinement}
Next, we perform an additional refinement step to recover some of the
points misclassified during scene classification and potential inlier
points marked as outliers during object association. This is done using the nearest
neighbor rule within a defined radius neighborhood. For each point
classified as a moving point, we re-classify static points lying within
the neighborhood as moving points. The object refinement operation is
conducted on $\Tilde{X}_t$ and $X_{t+1}$.
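A possible implementation of this rule with a k-d tree is sketched below (illustrative only; the radius value is a placeholder).
\begin{verbatim}
# A minimal sketch (illustrative only): static points within a radius
# of any moving point are re-labeled as moving.
import numpy as np
from scipy.spatial import cKDTree

def refine_moving(points, is_moving, radius=0.5):
    tree = cKDTree(points[is_moving])
    near = tree.query_ball_point(points[~is_moving], r=radius)
    flip = np.fromiter((len(nb) > 0 for nb in near),
                       dtype=bool, count=len(near))
    out = is_moving.copy()
    out[np.flatnonzero(~is_moving)[flip]] = True
    return out
\end{verbatim}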
The refinement step is essential for two reasons. First, an imbalanced
class distribution between static and moving points usually leads the
XGBoost classifier to favor the dominant class (i.e., the static
points). As a result, the precision and recall for moving points can be
low in spite of the high classification accuracy.
step, it is difficult to select good values for $eps$ and $minPts$ that
are robust in all scenarios for the sparse LiDAR point clouds. This may
lead to some points being marked as outliers by DBSCAN. Overall, the
performance gains of our method reported in Sec. \ref{sec:experiments}
are a result of the combination of all steps and not due to a single
step in particular.
\begin{figure}[!t]
\centering
\includegraphics[scale=0.525]{Objects.png}
\caption{Objects clustered using the DBSCAN algorithm are shown
in different colors.}\label{fig:clusters}
\end{figure}
\subsection{Module 5: Object Motion Estimation}
We determine the motion between each pair of associated objects in this
step. For that, we follow an approach similar to that of a point cloud
rigid registration method, R-PointHop \cite{kadam2022r}. The objective
of R-PointHop is to register the source point cloud with the target
point cloud. The block diagram of R-PointHop is illustrated in Fig.
\ref{fig:rpointhop}. It includes the following two major steps. First,
the source and target point clouds are fed to a sequence of R-PointHop
units for hierarchical feature learning (or multiple hops) in the
feature learning step. Point clouds are downsampled between two hops by
iteratively selecting the farthest points. The R-PointHop unit comprises
constructing a local point descriptor followed by the channel-wise Saab
transform \cite{chen2020pixelhop++}. Second, the point features are
used to find pairs of corresponding points. The optimal rigid
transformation that aligns the two point clouds is then solved as an
energy minimization problem \cite{schonemann1966generalized}.
For object motion estimation in PointFlowHop, the features of refined
moving points from $\Tilde{X}_t$ and $X_{t+1}$ are extracted using the
trained PointHop++ model. We reuse the same model from the ego-motion
estimation step here. While four hops with intermediate downsampling are
used in R-PointHop, the PointHop++ model in PointFlowHop only involves
two hops without downsampling to suit the LiDAR data. Since
$\Tilde{X}_{t}^{obj_i}$ and $X_{t+1}^{obj_i}$ are two sets of points
belonging to object $i$, we find corresponding points between the two
point clouds using the nearest neighbor search in the feature space.
The correspondence set is further refined by selecting the top
correspondences based on: 1) the minimum feature distance criterion and
2) the ratio test (the ratio of the feature distances to the first and
second best corresponding points). The refined correspondence set
is then used to estimate the object motion as follows.
First, the mean coordinates of the corresponding points in
$\Tilde{X}_{t}^{obj_i}$ and $X_{t+1}^{obj_i}$ are found by:
\begin{equation}
\Bar x_t^{obj_i}=\frac{1}{N_{obj_i}}\sum\limits_{j=1}^{N_{obj_i}}
\Tilde{x}_t^{obj_{ij}}, \quad \Bar
x_{t+1}^{obj_i}=\frac{1}{N_{obj_i}}\sum\limits_{j=1}^{N_{obj_i}}
x_{t+1}^{obj_{ij}}.
\end{equation}
Then, the $3\times 3$ covariance matrix is computed using the pairs of
corresponding points as
\begin{equation}
K(\Tilde{X}_{t}^{obj_i},X_{t+1}^{obj_i})=\sum\limits_{j=1}^{N_{obj_i}}
(\Tilde{x}_t^{obj_{ij}}-\Bar x_t^{obj_i})(x_{t+1}^{obj_{ij}}-\Bar x_{t+1}^{obj_i})^T.
\end{equation}
The Singular Value Decomposition of $K$ gives matrices $U$ and $V$, which
are formed by the left and right singular vectors, respectively. Mathematically,
we have
\begin{equation}
K(\Tilde{X}_{t}^{obj_i},X_{t+1}^{obj_i})= USV^T.
\end{equation}
Following the orthogonal Procrustes formulation \cite{schonemann1966generalized},
the optimal motion of $\Tilde{X}_{t}^{obj_i}$ can be expressed in the form of a
rotation matrix $R^{obj_i}$ and a translation vector $t^{obj_i}$. They
can be computed as
\begin{equation}
R^{obj_i}=VU^T, \quad t^{obj_i}=-R^{obj_i}\Bar x_t^{obj_i}+\Bar x_{t+1}^{obj_i}.
\end{equation}
Since $(R^{obj_i}, t^{obj_i})$ form the object motion model for object $i$,
it is denoted as $T_{obj_i}$.
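The estimation above is the classical Kabsch/Procrustes solution and can be transcribed directly, assuming \texttt{src} and \texttt{tgt} are $(N,3)$ arrays of corresponding points from $\Tilde{X}_{t}^{obj_i}$ and $X_{t+1}^{obj_i}$; the reflection guard is a standard safeguard not spelled out in the text.
\begin{verbatim}
import numpy as np

def estimate_rigid_motion(src, tgt):
    # covariance K of centered correspondences
    src_mean, tgt_mean = src.mean(axis=0), tgt.mean(axis=0)
    K = (src - src_mean).T @ (tgt - tgt_mean)
    U, S, Vt = np.linalg.svd(K)
    R = Vt.T @ U.T                 # R = V U^T
    if np.linalg.det(R) < 0:       # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = tgt_mean - R @ src_mean    # t = -R xbar_t + xbar_{t+1}
    return R, t
\end{verbatim}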
Actually, once we find the corresponding point $x_{t+1}^{obj_{ij}}$
of $\Tilde{x}_t^{obj_{ij}}$, the flow vector at $\Tilde{x}_t^{obj_{ij}}$
may simply be set to the difference
$$
x_{t+1}^{obj_{ij}}-\Tilde{x}_t^{obj_{ij}}.
$$
However, this point-wise flow vector can be too noisy, and it is
preferable to use a flow model for the object as a whole rather than for
each point. The object flow model found using SVD after establishing
correspondences is optimal in the mean-square sense over all
corresponding points and, hence, more robust. It rests on the reasonable
assumption that a rigid transformation exists between the two objects.
\begin{figure*}[!t]
\centering
\includegraphics[scale=0.45]{Architecture_RPointHop.PNG}
\caption{An overview of the R-PointHop method \cite{kadam2022r}.} \label{fig:rpointhop}
\end{figure*}
\subsection{Module 6: Flow Initialization and Refinement}
In the last module, we apply the object motion model $T_{obj_i}$ to
$\Tilde{X}_{t}^{obj_i}$ and align it with ${X}_{t+1}^{obj_i}$. Since the
static points do not have any motion, they are not further transformed.
We denote the new transformed point cloud as $\Tilde{X}_{t}'$. At this
point, we have obtained an initial estimate of the scene flow for each
point in $X_t$. For static points, the flow is given by the ego-motion
transformation $T_{ego}$. For moving points, it is the composition of
the ego-motion and the corresponding object's motion, $T_{ego}\cdot T_{obj_i}$.
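A minimal sketch of this initialization, assuming rotation--translation pairs packed into $4\times4$ homogeneous matrices; note that with column-vector conventions, applying the ego-motion first and the object motion second corresponds to the matrix product $T_{obj_i}\,T_{ego}$ acting on the original coordinates.
\begin{verbatim}
import numpy as np

def to_homogeneous(R, t):
    # pack a rotation matrix and translation vector into a 4x4 transform
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def initial_flow(points, T_ego, T_obj=None):
    # flow = transformed coordinates minus original ones;
    # moving points are additionally warped by their object's motion
    T = T_ego if T_obj is None else T_obj @ T_ego
    hom = np.hstack([points, np.ones((len(points), 1))])
    return (T @ hom.T).T[:, :3] - points
\end{verbatim}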
\begin{figure}[!t]
\centering
\includegraphics[scale=0.72]{Flow.png}
\caption{Flow estimation results using PointFlowHop: input point
clouds (left) and warped output using flow vectors (right).} \label{fig:flow}
\end{figure}
In this module, we refine the flow for all points in $\Tilde{X}_{t}'$
using the Iterative Closest Point (ICP) \cite{besl1992method} algorithm
in small non-overlapping regions. In each region, the points in
$\Tilde{X}_{t}'$ falling within it are aligned with corresponding points
in ${X}_{t+1}$. The flow refinement step ensures a tighter alignment and
is a common post-processing operation in several related tasks. Finally,
the flow vectors for ${X}_{t}$ are calculated as the difference between
the transformed and initial coordinates. Exemplar pairs of input and scene
flow compensated point clouds using PointFlowHop are shown in Fig.
\ref{fig:flow}.
\section{Experiments}\label{sec:experiments}
In this section, we report experimental results on real world LiDAR
point cloud datasets. We choose two datasets, stereoKITTI
\cite{menze2015joint, menze2018object} and Argoverse
\cite{chang2019argoverse}, since they represent challenging scenes in
autonomous driving environments. StereoKITTI has 142 pairs of point
clouds. The ground
truth flow of each pair is derived from the 2D disparity maps and the
optical flow information. There are 212 test samples for Argoverse whose
flow annotations were given in \cite{pontes2020scene}. We use per-point
labels from the SemanticKITTI dataset \cite{behley2019iccv} to train our
scene classifier.
Following prior art, we measure the performance using the following
metrics; a minimal implementation sketch is given after the list:
\begin{itemize}
\item {\em 3D end point error (EPE3D).} It is the mean Euclidean distance
between the estimated and the ground truth flow.
\item {\em Strict accuracy (Acc3DS).} It is the percentage of points for
which EPE3D is less than 0.05m or the relative error is less than 0.05.
\item {\em Relaxed accuracy (Acc3DR).} It gives the ratio of points for
which EPE3D is less than 0.1m or the relative error is less than 0.1.
\item {\em Percentage of Outliers.} It is the ratio of points
for which EPE3D is greater than 0.3m or the relative error is greater
than 0.1. This is reported for the StereoKITTI dataset only.
\item {\em Mean angle error (MAE).} It is the mean of the angle errors
between the estimated and the ground truth flow vectors over all
points, expressed in radians. This is reported for the Argoverse
dataset only.
\end{itemize}
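A minimal sketch of these metrics, assuming \texttt{pred} and \texttt{gt} are $(N,3)$ arrays of estimated and ground-truth flow vectors; the thresholds follow the definitions above.
\begin{verbatim}
import numpy as np

def scene_flow_metrics(pred, gt, eps=1e-8):
    err = np.linalg.norm(pred - gt, axis=1)         # per-point EPE3D
    rel = err / (np.linalg.norm(gt, axis=1) + eps)  # relative error
    cos = np.sum(pred * gt, axis=1) / (
        np.linalg.norm(pred, axis=1) * np.linalg.norm(gt, axis=1) + eps)
    return {
        "EPE3D": err.mean(),
        "Acc3DS": np.mean((err < 0.05) | (rel < 0.05)),
        "Acc3DR": np.mean((err < 0.1) | (rel < 0.1)),
        "Outliers": np.mean((err > 0.3) | (rel > 0.1)),
        "MAE": np.arccos(np.clip(cos, -1.0, 1.0)).mean(),  # radians
    }
\end{verbatim}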
\subsection{Performance Benchmarking}
The scene flow estimation results on stereoKITTI and Argoverse are
reported in Table \ref{tab:kitti} and Table \ref{tab:argoverse},
respectively. For comparison, we show the performance of several
representative methods proposed in the past few years. Overall, the
EPE3D, Acc3DS and Acc3DR values are significantly better for stereoKITTI
as compared to the Argoverse dataset. This is because Argoverse is a
more challenging dataset. Furthermore, PointFlowHop outperforms all
benchmarking methods in almost all evaluation metrics on both datasets.
\begin{table}[htbp]
\centering
\caption{Comparison of scene flow estimation results on the
stereoKITTI dataset, where the best performance number is shown in
boldface.} \label{tab:kitti}
\renewcommand\arraystretch{1.3}
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\begin{tabular}{c | c c c c c c c c} \hline
Method & EPE3D (m)$\downarrow$ & Acc3DS $\uparrow$ & Acc3DR $\uparrow$ & Outliers $\downarrow$ \\ \hline
FlowNet3D \cite{liu2019flownet3d} & 0.177 & 0.374 & 0.668 & 0.527 \\
HPLFlowNet \cite{gu2019hplflownet} & 0.117 & 0.478 & 0.778 & 0.410 \\
PointPWC-Net \cite{wu2020pointpwc} & 0.069 & 0.728 & 0.888 & 0.265 \\
FLOT \cite{puy2020flot} & 0.056 & 0.755 & 0.908 & 0.242 \\
HALFlow \cite{wang2021hierarchical} & 0.062 & 0.765 & 0.903 & 0.249 \\
Rigid3DSceneFlow \cite{gojcic2021weakly} & 0.042 & 0.849 & 0.959 & 0.208 \\
PointFlowHop (Ours) & \bf{0.037} & \bf{0.938} & \bf{0.974} & \bf{0.189} \\ \hline
\end{tabular}
\end{table}
\begin{table}[htbp]
\centering
\caption{Comparison of scene flow estimation results on the Argoverse
dataset, where the best performance number is shown in boldface.}
\label{tab:argoverse}
\renewcommand\arraystretch{1.3}
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\begin{tabular}{c | c c c c c c c c} \hline
Method & EPE3D (m) $\downarrow$ & Acc3DS $\uparrow$ & Acc3DR $\uparrow$ & MAE (rad) $\downarrow$ \\ \hline
FlowNet3D \cite{liu2019flownet3d} & 0.455 & 0.01 & 0.06 & 0.736 \\
PointPWC-Net \cite{wu2020pointpwc} & 0.405 & 0.08 & 0.25 & 0.674 \\
Just Go with the Flow \cite{mittal2020just} & 0.542 & 0.08 & 0.20 & 0.715 \\
NICP \cite{amberg2007optimal} & 0.461 & 0.04 & 0.14 & 0.741 \\
Graph Laplacian \cite{pontes2020scene} & 0.257 & 0.25 & 0.48 & 0.467 \\
Neural Prior \cite{li2021neural} & 0.159 & 0.38 & 0.63 & \bf{0.374} \\
PointFlowHop (Ours) & \bf{0.134} & \bf{0.39} & \bf{0.71} & 0.398 \\ \hline
\end{tabular}
\end{table}
\subsection{Ablation Study}
In this section, we assess the role played by each individual module
of PointFlowHop using the stereoKITTI dataset as an example.
{\bf Ego-motion compensation.} First, we may replace GreenPCO
\cite{kadam2022greenpco} with ICP \cite{besl1992method} for ego-motion
compensation. The results are presented in Table
\ref{tab:ablation_study_ego}. We see a sharp decline in performance with
ICP. The substitution makes the new method much worse than all
benchmarking methods. While the naive ICP could be replaced with other,
more advanced model-free methods, we prefer GreenPCO since its trained
PointHop++ model is needed again later in the pipeline.
\begin{table}[htbp]
\centering
\caption{Ego-motion compensation -- ICP vs. GreenPCO.} \label{tab:ablation_study_ego}
\renewcommand\arraystretch{1.3}
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\begin{tabular}{c | c c c c c c c c} \hline
Ego-motion Method & EPE3D $\downarrow$ & Acc3DS $\uparrow$ & Acc3DR $\uparrow$ & Outliers $\downarrow$ \\ \hline
ICP \cite{besl1992method} & 0.574 & 0.415 & 0.481 & 0.684 \\ \hline
GreenPCO \cite{kadam2022greenpco} & \bf{0.037} & \bf{0.938} & \bf{0.974} & \bf{0.189} \\ \hline
\end{tabular}
\end{table}
\begin{table}[htbp]
\centering
\caption{Performance gain due to object refinement.} \label{tab:ablation_study_object}
\renewcommand\arraystretch{1.3}
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\begin{tabular}{c | c c c c c c c c} \hline
Object Refinement & EPE3D $\downarrow$ & Acc3DS $\uparrow$ & Acc3DR $\uparrow$ & Outliers $\downarrow$ \\ \hline
\ding{55} & 0.062 & 0.918 & 0.947 & 0.208 \\ \hline
\cmark & \bf{0.037} & \bf{0.938} & \bf{0.974} & \bf{0.189} \\ \hline
\end{tabular}
\end{table}
\begin{table}[htbp]
\centering
\caption{Performance gain due to flow refinement.} \label{tab:ablation_study_flow}
\renewcommand\arraystretch{1.3}
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\begin{tabular}{c | c c c c c c c c} \hline
Flow Refinement & EPE3D $\downarrow$ & Acc3DS $\uparrow$ & Acc3DR $\uparrow$ & Outliers $\downarrow$ \\ \hline
\ding{55} & 0.054 & 0.862 & 0.936 & 0.230 \\ \hline
\cmark & \bf{0.037} & \bf{0.938} & \bf{0.974} & \bf{0.189} \\ \hline
\end{tabular}
\end{table}
{\bf Performance Gain Due to Object Refinement.} Next, we compare
PointFlowHop with and without the object refinement step. The results
are shown in Table \ref{tab:ablation_study_object}. We see consistent
performance improvement in all evaluation metrics with the object
refinement step. On the other hand, the performance of PointFlowHop is
still better than that of all benchmarking methods except
Rigid3DSceneFlow \cite{gojcic2021weakly} (see Table \ref{tab:kitti})
even without object refinement.
{\bf Performance Gain Due to Flow Refinement.} Finally, we compare
PointFlowHop with and without the flow refinement step in Table
\ref{tab:ablation_study_flow}. It is not surprising that flow refinement
is crucial in PointFlowHop. However, one may argue that the refinement
step could be included in any of the discussed methods as a
post-processing operation. While this argument is valid, even without
flow refinement PointFlowHop is still better than almost all
benchmarking methods (see Table \ref{tab:kitti}). Between object refinement and flow refinement,
flow refinement seems slightly more important if we consider all four
evaluation metrics jointly.
\subsection{Complexity Analysis}
The complexity of a machine learning method can be examined from
multiple angles, including training time, the number of model parameters
(i.e., the model size) and the number of floating point operations
(FLOPs) during inference. These metrics are valuable complements to
performance measures such as prediction accuracy and error. Furthermore,
since some model-free methods (e.g., LOAM \cite{zhang2014loam}) and the
recently proposed KISS-ICP \cite{vizzo2023kiss} can offer
state-of-the-art results for related tasks such as odometry and
Simultaneous Localization and Mapping (SLAM), the complexity of
learning-based methods deserves
additional attention.
To this end, PointFlowHop offers impressive benefits as compared to
representative DL-based solutions. Training in PointFlowHop only
involves the ego-motion compensation and shape classification steps.
For object motion estimation, the PointHop++ model obtained from the
ego-motion compensation step is reused, while the remaining operations
in PointFlowHop are parameter-free and performed only at inference time.
Table \ref{tab:parameters_time} provides details about the number of
parameters of PointFlowHop. It adopts the PointHop++ architecture with
two hops. The first hop has 13 kernels of dimension 88 while the second
hop has 104 kernels of dimension 8. For XGBoost, it has 100 decision
tree estimators, each of which has a maximum depth of 3. We also report
the training time of PointFlowHop in the same table, where the training
is conducted on an Intel(R) Xeon(R) E5-2620 v3 CPU at 2.40 GHz.
\begin{table}[htbp]
\centering
\caption{The number of trainable parameters and training time of the
proposed PointFlowHop.} \label{tab:parameters_time}
\renewcommand\arraystretch{1.3}
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\begin{tabular}{c | c | c} \hline
& Number of Parameters & Training time \\ \hline
Hop 1 & 1144 & \multirow{2}{*}{20 minutes} \\
Hop 2 & 832 & \\ \hline
XGBoost & 2200 & 12 minutes \\ \hline
\bf{Total} & \bf{4176} & \bf{32 minutes} \\ \hline
\end{tabular}
\end{table}
While we do not measure the training time of other methods ourselves, we
use \cite{pontes2020scene} as a reference to compare our training time
with others. It took the authors of \cite{pontes2020scene} about 18
hours to train and fine-tune the FlowNet3D \cite{liu2019flownet3d}
method for the KITTI dataset and about 3 days for the Argoverse dataset.
We expect comparable times for other methods. Thus, PointFlowHop is
extremely efficient in this context. While the Graph Laplacian
method \cite{pontes2020scene} offers a variant where the scene flow is
entirely optimized at runtime (non-learning based), its performance
is inferior to ours as shown in Table \ref{tab:argoverse}.
\begin{table}[htbp]
\centering
\caption{Comparison of model sizes (in terms of the number of parameters)
and computational complexity of inference (in terms of FLOPs) of four
benchmarking methods.}\label{tab:complexity}
\renewcommand\arraystretch{1.3}
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\begin{tabular}{c | c | c} \hline
Method & Number of Parameters & FLOPs \\ \hline
FlowNet3D \cite{liu2019flownet3d} & 1.23 M (308X) & 11.67 G (61X) \\
PointPWC Net \cite{wu2020pointpwc} & 7.72 M (1930X) & 17.46 G (92X) \\
FLOT \cite{puy2020flot} & 110 K (28X) & 54.65 G (288X) \\
PointFlowHop (Ours) & \bf{4 K} (1X) & \bf{190 M} (1X) \\ \hline
\end{tabular}
\end{table}
Finally, we compare the model sizes and computational complexity of four
benchmarking methods in Table \ref{tab:complexity}. It is apparent that
PointFlowHop demands significantly fewer parameters than the other methods.
Furthermore, we compute the number of floating-point operations (FLOPs)
of PointFlowHop analytically during inference and report it in Table
\ref{tab:complexity}. While calculating the FLOPs, we consider input
point clouds containing 8,192 points. Thus, the normalized FLOPs per
point is 23.19K. We conclude from the above discussion that
PointFlowHop offers a green and high-performance solution to 3D scene
flow estimation.
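As a quick sanity check of the totals quoted above (a sketch; the XGBoost parameter count is taken as reported):
\begin{verbatim}
params = 13 * 88 + 104 * 8 + 2200   # Hop 1 + Hop 2 + XGBoost
assert params == 4176               # matches Table on trainable parameters
flops_per_point = 190e6 / 8192      # ~23.19K FLOPs per point
\end{verbatim}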
\section{Conclusion and Future Work}\label{sec:conclusion}
A green and interpretable 3D scene flow estimation method called
PointFlowHop was proposed in this work. PointFlowHop takes two
consecutive LiDAR point cloud scans and determines the flow vectors for
all points in the first scan. It decomposes the flow into the vehicle's
ego-motion and the motions of individual objects in the scene. The
superior performance of PointFlowHop over benchmarking DL-based methods
was demonstrated on the stereoKITTI and Argoverse datasets. Furthermore,
PointFlowHop offers the advantages of fewer trainable parameters and
fewer FLOPs during inference.
One future research direction is to extend PointFlowHop for the 3D
object detection task. Along this line, we may detect moving objects
using PointFlowHop and derive 3D bounding boxes around them. The
clustered points obtained by PointFlowHop may act as an initialization
in the object detection process. Another interesting problem to pursue
is simultaneous flow estimation and semantic segmentation. The
task-agnostic nature of our representation learning can be useful.
\section*{Acknowledgments}
This work was supported by a research gift from Tencent Media Lab. The
authors also acknowledge the Center for Advanced Research Computing
(CARC) at the University of Southern California for providing computing
resources that have contributed to the research results reported within
this publication. URL: https://carc.usc.edu.
\bibliographystyle{unsrt}
|
{
"arxiv_id": "2302.14152",
"language": "en",
"timestamp": "2023-03-01T02:02:41",
"url": "https://arxiv.org/abs/2302.14152",
"yymm": "2302"
} | \section*{Keywords}
Quantum gravity, asymptotic safety, renormalization group, Wetterich equation, Reuter fixed point, Einstein-Hilbert truncation, phase diagram
\section{Introduction}
\label{sec.intro}
\setcounter{footnote}{0}
Our theoretical understanding of nature rests on two pillars. The electroweak and strong forces and their interactions with the elementary particles are described by the standard model of particle physics. This theory is formulated as a relativistic quantum field theory in Minkowski space. The description of gravity is provided by general relativity, a classical field theory which encodes the gravitational interactions in the dynamics of spacetime. Conceptually, these theories are on very different footing and
the construction of a framework unifying gravity with the laws of quantum mechanics is
one of the key open questions in theoretical high-energy physics to date.
An important insight along these lines is that the quantization techniques successful in the case of the standard model of particle physics do not extend to gravity in a straightforward way: the perturbative quantization of general relativity leads to a perturbatively non-renormalizable quantum field theory with new infinities appearing at every order in perturbation theory \cite{'tHooft:1974bx,Goroff:1985sz,Goroff:1985th,vandeVen:1991gw}. This has led to the advance of several physics principles which deviate from the principles of continuum quantum field theory in more or less radical ways, see \cite{Armas:2021yut,Loll:2022ibq} for recent non-technical accounts.
The gravitational asymptotic safety program is one particular line of quantum gravity research. The program is conservative in the sense that it strives for a consistent and predictive theory of the gravitational interactions within the framework of quantum field theory by seeking a non-perturbative high-energy completion. Its core assumptions are that the gravitational degrees of freedom are encoded in the spacetime metric also at trans-Planckian scales. Moreover, the theory retains invariance under coordinate transformations.\footnote{This assumption distinguishes the gravitational asymptotic safety program from Ho\v{r}ava-Lifshitz gravity \cite{Horava:2009uw} where this symmetry requirement is reduced to foliation-preserving diffeomorphisms, see \cite{Rechenberger:2012dt} for a pedagogical discussion.} The asymptotic safety hypothesis then stipulates that
\begin{enumerate}
\item these ingredients give rise to an interacting renormalization group fixed point -- called the Reuter fixed point.
\item this fixed point controls the gravitational dynamics at trans-Planckian scales.
\end{enumerate}
From a phenomenological perspective one also requires that the renormalization group flow emanating from the Reuter fixed point connects to a low-energy regime where the dynamics matches the one of general relativity to a good approximation.
We stress that the central element of the gravitational asymptotic safety program -- the existence of the Reuter fixed point coming with suitable properties -- is not an input. It must be established based on first-principle computations. At the technical level, this requires tools applicable to quantum field theory beyond the realm of perturbation theory. This is a highly non-trivial endeavor. It took about 20 years from Weinberg's first formulation of the asymptotic safety hypothesis \cite{Weinberg:1976xy,Weinberg:1980gg} to the advent of renormalization group techniques which could be used to investigate this hypothesis in a systematic way \cite{Reuter:1996cp}.
Nowadays, there are two complementary computational approaches which naturally lend themselves to the exploration of the asymptotic safety mechanism in the context of gravity. Causal Dynamical Triangulations \cite{Ambjorn:2012jv,Loll:2019rdj} and Euclidean Dynamical Triangulations \cite{Ambjorn:2013eha,Coumbe:2014nea,Rindlisbacher:2015ewa,Bassler:2021pzt,Asaduzzaman:2022kxz} use Monte Carlo techniques to investigate the phase space of quantum geometries resulting from the gravitational path integral. In this setting, the Reuter fixed point may manifest itself as a second-order phase transition \cite{Ambjorn:2011cg} which allows to take the continuum limit in a controlled way. Alternatively, the Reuter fixed point can manifest itself in (approximate) solutions of the Wetterich equation \cite{Wetterich:1992yh,Morris:1993qb}.
This chapter will provide a basic introduction to the ideas underlying the gravitational asymptotic safety program (Sec.\ \ref{sec.ea}) before introducing the Wetterich equation \cite{Wetterich:1992yh,Morris:1993qb} and its adaptation to gravity \cite{Reuter:1996cp} as one of the main computational tools in the program (Sec.\ \ref{sec.frge}). Sec.\ \ref{sec.eh} illustrates how this tool is used in practical computations by working out the example of the Einstein-Hilbert truncation in a modern, background-independent way. Sec.\ \ref{sec.conclusion} provides our conclusions and brief comments on renormalization group techniques implemented by other approaches to quantum gravity.
We stress that the exposition in this chapter is necessarily incomplete since it seeks to provide a concise introduction to the gravitational asymptotic safety program and the functional renormalization group which is accessible to a broader quantum gravity audience. For further details the reader is invited to consult the text books \cite{Percacci:2017fkn,Reuter:2019byg}, lecture notes \cite{Nagy:2012ef,Reichert:2020mja}, and general reviews \cite{Niedermaier:2006wt,Codello:2008vh,Reuter:2012id}. General introductions to the functional renormalization group are provided in \cite{Berges:2000ew,Gies:2006wv,Pawlowski:2005xe,Dupuis:2020fhh} and there are topical reviews focusing on asymptotic safety in the presence of matter fields \cite{Eichhorn:2018yfc}, the fluctuation approach to asymptotic safety \cite{Pawlowski:2020qer}, and its applications in the context of black holes \cite{Koch:2014cqa} and cosmology \cite{Bonanno:2017pkg}. Open issues have been discussed in the community report \cite{Bonanno:2020bil}.
\section{The Asymptotic Safety Mechanism}
\label{sec.ea}
The insight that gravity could be asymptotically safe dates back to the seminal work of Weinberg \cite{Weinberg:1976xy,Weinberg:1980gg}. This initial proposal advocated asymptotic safety as a mechanism which renders physical scattering amplitudes finite (but non-vanishing) at energy scales exceeding the Planck scale. Motivated by computations showing that gravity in $d=2+\epsilon$ spacetime dimensions possesses a non-trivial renormalization group (RG) fixed point \cite{Gastmans:1977ad,Christensen:1978sc}, it was suggested that this family of fixed points admits an analytic continuation up to $d=4$ where the corresponding fixed point should provide the high-energy completion of the gravitational interactions. The link between scattering amplitudes being finite and the RG fixed point builds on the insight that at such a fixed point all dimensionless quantities remain finite. If the fixed point controls the high-energy behavior, this property will also carry over to scattering amplitudes, which by themselves are dimensionless objects. This heuristic argument implies that it is not necessary that all dimensionless couplings remain finite. It suffices that the subset of couplings entering into physical observables (called essential couplings) attain their fixed-point values, as this is sufficient to ensure that the observables are well-behaved. A more detailed analysis of this scenario within the amplitude approach to asymptotic safety \cite{Draper:2020bop,Draper:2020knh,Knorr:2021iwv} revealed that there must be intricate relations between couplings and propagators. Most likely, these arise as a consequence of quantum scale symmetry realized at the fixed point \cite{Wetterich:2019qzx}.
The starting point for developing the idea of Asymptotic Safety is the functional integral over all Euclidean metrics,
\be\label{eq.Zdef}
Z = \int \cD h \, e^{-S[h]} \, ,
\ee
which would allow to determine all physical quantities of interest. In this respect, Asymptotic Safety shares the same starting point as Monte Carlo approaches to quantum gravity, foremost the Causal Dynamical Triangulation \cite{Ambjorn:2012jv,Loll:2019rdj} and Euclidean Dynamical Triangulation \cite{Ambjorn:2013eha,Coumbe:2014nea,Rindlisbacher:2015ewa,Bassler:2021pzt,Asaduzzaman:2022kxz} programs as well as Quantum Regge Calculus \cite{Rocek:1981ama,Hamber:2009mt}.
The functional renormalization group then recasts the problem of performing this functional integral into the problem of solving a functional differential equation, the Wetterich equation for the effective average action $\Gamma_k$ \cite{Wetterich:1992yh,Morris:1993qb,Reuter:1993kw,Reuter:1996cp} (derived in Sec.\ \ref{sec.frge}):
\be\label{eq.Wetterich}
k \p_k \Gamma_k = \frac{1}{2} {\rm Tr}\left[ \left( \Gamma^{(2)}_k + \cR_k \right)^{-1} k \p_k \cR_k \right] \, .
\ee
Here $k$ is the coarse-graining scale and the trace contains an integration over loop momenta. The Wetterich equation implements the Wilsonian picture of renormalization in the following way: The regulator $\cR_k$ appearing on the right-hand side separates the fluctuations into low- and high-momentum modes with respect to $k$. The change of $\Gamma_k$ is then governed by integrating out quantum fluctuations with momenta $p^2 \approx k^2$. In this way, one arrives at a formulation that is much better behaved than the initial problem of solving the functional integral \eqref{eq.Zdef} in one stroke.
By construction, the propagators and vertices in the effective average action $\Gamma_k$ include the quantum corrections due to the high-momentum fluctuations. In this sense, it provides an effective description of physics at length scales $l \sim k^{-1}$. This makes $\Gamma_k$ a quite complicated object. Its natural habitat is the theory space $\cT$. By definition, this space consists of all action functionals $A[\cdot]$ which can be constructed from the field content of the theory and meet its symmetry requirements. In the context of gravity, where the field content is given by (Euclidean) spacetime metrics $g_{\mu\nu}$, prototypical examples for these building blocks include the terms appearing in the Einstein-Hilbert action,
\be\label{Oexamples}
\cO_1 = \int d^dx \sqrt{g} \, , \qquad \cO_2 = \int d^dx \sqrt{g} R \, ,
\ee
where $\sqrt{g} \equiv \sqrt{\det(g)}$ and $R$ is the Ricci scalar constructed from $g_{\mu\nu}$ (also see Table \ref{tab.derivativeexp} for further examples). Given a basis $\{\cO_i\}$ for these monomials, the effective average action can be expanded in this basis
\be\label{eq.expansionG}
\Gamma_k = \sum_i \, \bar{u}^i(k) \, \cO_i \, .
\ee
The dependence on the coarse-graining scale is captured by the dimensionful couplings $\bar{u}^i(k)$. For the purpose of studying RG flows it is useful to trade these dimensionful couplings with their dimensionless counterparts obtained by rescaling with $k$,
\be\label{eq.dimless}
u^i(k) \equiv \bar{u}^i(k) \, k^{-d_i} \, ,
\ee
where $d_i \equiv [\bar{u}^i]$ is the mass-dimension of the coupling. The couplings $u^i$ then serve as coordinates on $\cT$.
Evaluating \eqref{eq.Wetterich} for the expansion \eqref{eq.expansionG} gives the component form of the functional renormalization group equation
\be\label{def.betafcts}
k \p_k \, u^i(k) = \beta^i(\{u^j\}) \, .
\ee
The beta functions $\beta^i(\{u^j\})$ capture the dependence of the dimensionless couplings on the coarse-graining scale. Dimensional analysis entails that the functions $\beta^i(\{u^j\})$ are independent of $k$, since this is the only dimensionful object in the construction. Thus Eq.\ \eqref{def.betafcts} constitutes an infinite-dimensional system of coupled, autonomous, first order differential equations. Its solutions are called RG trajectories. The problem of performing the functional integral \eqref{eq.Zdef} is then translated into finding globally well-defined RG trajectories
\be\label{def.rgtraject}
k \rightarrow \Gamma_k \, , \qquad k \in [0,\infty] \, ,
\ee
which exist for all values of the coarse-graining scale $k$.
\begin{figure}[t!]
\centering
\includegraphics[width=.9\textwidth]{theoryspace}
\caption{\label{fig:theoryspace} Illustration of theory space and its structures: by definition, the theory space contains all action functionals $A[\cdot]$ which can be constructed from a given field content and obey the desired symmetries. The theory space comes with a vector field, the beta functions $\beta$. The integral curves of this vector field (RG trajectories) are exemplified by the black solid curve. The example emanates from a fixed point (red dot) with one UV-attractive (Re$\theta_I > 0$) and one UV-repulsive (Re$\theta_I < 0$) eigendirection. The endpoint of the RG trajectory at $k=0$ coincides with the effective action $\Gamma$. Conventionally, all arrows point towards a lower coarse-graining scale, i.e., in the direction of integrating out fluctuation modes.}
\end{figure}
By definition, RG fixed points $\{u^j_*\}$ are stationary points of the system \eqref{def.betafcts}, satisfying
\be\label{eq.fpcond}
\beta^i(\{u^j_*\}) = 0 \, , \qquad \forall \, i \, .
\ee
As a consequence, it takes an infinite amount of ``RG-time'' for an RG trajectory to actually reach the fixed point. In this way fixed points can provide a well-defined limit $k\rightarrow \infty$ in which all dimensionless couplings $u^i(k) \rightarrow u^i_*$ remain finite. Thus, fixed points are natural candidates for providing a well-defined high-energy completion of a theory. It is this concept that underlies the Wilsonian picture of renormalization.
At this point it is interesting to inquire about the conditions for an RG trajectory being dragged into a fixed point as $k \rightarrow \infty$. This question is closely related to the predictive power of the construction. In the vicinity of a fixed point $\{u^i_*\}$, the properties of the RG flow can be studied by linearizing the system \eqref{def.betafcts},
\be\label{linflow}
k \p_k u^i(k) = \sum_j B^i{}_j \, \left(u^j(k) - u_*^j \right) + O(u^2) \, .
\ee
Here
\be\label{def.B}
B^i{}_j \equiv \left. \frac{\p}{\p u^j} \beta^i \right|_{u = u^*}
\ee
is the stability matrix associated with the fixed point. The solutions of \eqref{linflow} are readily given in terms of the right-eigenvectors $V_I$ and stability coefficients $\theta_I$ of $B$,
\be\label{def.theta}
\sum_j B^i{}_j \, V^j_I = - \theta_I \, V_I^i \, , \qquad \forall \, I \, ,
\ee
and take the form
\be\label{sollin}
u^i(k) = u^i_* + \sum_J C_J \, V_J^i \left( \frac{k_0}{k} \right)^{\theta_J} \, .
\ee
Here $C_J$ are constants of integration and $k_0$ denotes an arbitrary reference scale.
Inspecting \eqref{sollin} reveals that eigendirections with Re$(\theta_I) > 0$ are attracted by the fixed point as $k \rightarrow \infty$ while the ones with Re$(\theta_I)<0$ are repulsive in this limit. The corresponding scaling operators are called ``UV-relevant'' and ``UV-irrelevant'', respectively. This suggests splitting the set $\{C_I\}$ according to
\be\label{Crel}
\{C_I^{\rm relevant}\} = \{ C_I \, | \, {\rm Re}(\theta_I) > 0 \} \, , \qquad \{C_I^{\rm irrelevant}\} = \{ C_I \, | \, {\rm Re}(\theta_I) < 0 \} \, .
\ee
The case ${\rm Re}(\theta_I) = 0$ corresponds to a marginal direction. Determining whether such a direction is UV-attractive or UV-repulsive requires going beyond the linear approximation \eqref{sollin} and will not be discussed in detail here.
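In practice, the stability matrix \eqref{def.B} and the coefficients $\theta_I$ are often evaluated numerically. The following sketch does this for a purely illustrative two-coupling system; the beta functions and fixed-point values below are hypothetical and not those of gravity.
\begin{verbatim}
import numpy as np

def beta(u):
    # hypothetical beta functions for two dimensionless couplings
    g, lam = u
    return np.array([2.0 * g - 3.0 * g**2,
                     -2.0 * lam + g - g * lam])

def stability_matrix(beta, u_star, h=1e-6):
    # finite-difference Jacobian B^i_j = d beta^i / d u^j
    n = len(u_star)
    B = np.zeros((n, n))
    for j in range(n):
        du = np.zeros(n); du[j] = h
        B[:, j] = (beta(u_star + du) - beta(u_star - du)) / (2 * h)
    return B

u_star = np.array([2.0 / 3.0, 0.25])   # a zero of the toy beta functions
theta = -np.linalg.eigvals(stability_matrix(beta, u_star))
\end{verbatim}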
The condition that the fixed point controls the UV-behavior of the RG-trajectory then enforces $C_I^{\rm irrelevant} = 0$, for all $I$. The solutions meeting this condition span the UV-critical hypersurface of the fixed point. The $\{C_I^{\rm relevant}\}$ are the free parameters of the construction and label the solutions within this hypersurface. Their value is unconstrained by demanding a well-defined UV-completion and must be determined by other theoretical considerations or experimental input. This discussion also shows that fixed points with a lower-dimensional UV-critical hypersurface have a higher predictive power.
Up to this point, our discussion of a high-energy completion referred to a generic renormalization group fixed point. It is then customary to distinguish between a Gaussian fixed point (GFP) and a non-Gaussian fixed point (NGFP). The defining property of the former is that the critical exponents of its stability matrix agree with the canonical mass-dimensions of the corresponding couplings, $\theta_I = d_I$. This signals that the underlying theory is the free theory. At an NGFP, the stability coefficients receive quantum corrections,
\be\label{theta.NGFP}
\theta_I = d_I + \text{quantum corrections} \, .
\ee
The latter indicate that the theory linked to the fixed point is interacting. Notably, this definition of a Gaussian and non-Gaussian fixed point is not based on the values $\{u_*^i\}$. Since the spectrum of the stability matrix is invariant under a redefinition $u^i \mapsto \tilde{u}^i(\{u^j\})$, this characterization is independent of a specific choice of ``coordinate system'' on $\cT$. An important subset of NGFPs are ``almost-Gaussian'' NGFPs. In this case the quantum corrections in \eqref{theta.NGFP} are weak in the sense that the $\theta_I$ are dominated by their classical part. This implies that classical power-counting is still a valid guiding principle for determining whether a scaling operator is relevant or irrelevant. Beyond the class of ``almost Gaussian'' NGFPs, there could also be fixed points where the critical exponents are dominated by quantum effects. The systematic investigation of this possibility is beyond the scope of most current searches for RG fixed points based on functional renormalization group equations though.\footnote{Some insights on potential stability patterns associated with such fixed points have recently been discussed based on the composite operator equation \cite{Houthoff:2020zqy,Kurov:2020csd}, indicating that studying such fixed points requires approximations at a significant level of complexity as well as dedicated search strategies.} Depending on whether the high-energy completion is provided by a GFP or an NGFP, the theory is termed ``asymptotically free'' or ``asymptotically safe''. A prototypical example of the former case is Quantum Chromodynamics while the latter case is realized by gravity in $d=2+\epsilon$ spacetime dimensions \cite{Gastmans:1977ad,Christensen:1978sc}.
We conclude this section with two clarifications. For a globally well-defined RG trajectory, the solutions \eqref{def.rgtraject} interpolate between the microscopic dynamics determined by the RG fixed point for $k\rightarrow\infty$ and the standard effective action $\lim_{k \rightarrow 0} \Gamma_k = \Gamma$. \emph{All physics should then be extracted from $\Gamma$ using its quantum corrected propagators and vertices.} Similarly to \eqref{eq.expansionG}, $\Gamma$ can be expanded in a basis of the theory space
\be\label{EA.exp}
\Gamma = \sum_i \, \bar{u}^i_{\rm eff} \, \cO_i \, ,
\ee
with the relation between the couplings being $\bar{u}^i_{\rm eff} = \lim_{k \rightarrow 0} \bar{u}^i(k)$. This expansion is similar to the one encountered in effective field theory where the $\cO_i$ are organized according to their canonical mass-dimension and the sum is truncated at a given order. The key difference to the effective field theory approach is that the RG flow determines the effective couplings in terms of the free coefficients $\{C_I^{\rm relevant}\}$:
\be\label{predictions}
\bar{u}^i_{\rm eff} = \bar{u}^i_{\rm eff}(C_I^{\rm relevant}) \, .
\ee
Provided that there are more couplings $\bar{u}^i_{\rm eff}$ than free parameters $C_I^{\rm relevant}$, the high-energy completion induces a (potentially infinite) number of relations between the effective couplings. These provide predictions which can be confronted with theoretical consistency requirements and experimental data. On this basis one can deduce whether a given RG fixed point leads to low-energy physics compatible with nature. This also allows to falsify the construction, provided that the properties of the fixed point and its UV-critical surface are known at a sufficient level of detail.
We also stress that the dependence of couplings on the coarse-graining scale $k$ should not be identified with the running of a coupling with respect to a physical energy scale, see \cite{Donoghue:2019clr,Bonanno:2020bil} for instructive examples. Generically, the couplings appearing in \eqref{predictions} are not constant but come in the form of form factors depending on the momenta of the fields in a non-trivial way. In the simplest case (cf.\ \eqref{eq.formfactors}) this dependence contains a single momentum scale
\be
\bar{u}^i_{\rm eff} \rightarrow \bar{u}^i_{\rm eff}(p^2) \, .
\ee
In practice, the value of the coupling is then measured at a fixed momentum scale $\mu^2$. The non-trivial $p$-dependence then induces the ``running'' of the coupling with respect to its value determined at the reference scale. In this simplest case, this is the logarithmic running of a dimensionless coupling seen in perturbation theory, but the momentum dependence can be significantly more involved than that.
\section{The Functional Renormalization Group}
\label{sec.frge}
The basic idea of a functional renormalization group equation (FRGE) is to recast the functional integral over quantum fluctuations in terms of a functional differential equation. The FRGE implements Wilson's modern viewpoint on renormalization \cite{Wilson:1973jj}: in contrast to a perturbative approach based on evaluating Feynman diagrams, quantum fluctuations are not integrated over in one stroke. Instead they are integrated out ``shell-by-shell'' in momentum space starting with the most energetic ones. This leads to a one-parameter family of effective actions $\Gamma_k$ whose propagators and vertices already contain the quantum corrections from fluctuations with momenta $p^2 \gtrsim k^2$. The textbook effective action $\Gamma$ is recovered in the limit where all fluctuations are integrated out, $\Gamma = \lim_{k \rightarrow 0} \Gamma_k$.
The FRGE most frequently used in hands-on computations is the Wetterich equation \cite{Wetterich:1992yh,Morris:1993qb,Reuter:1996ub,Reuter:1996cp}. This section reviews its construction for scalar fields (Sec.\ \ref{sect.31}) before extending the formalism to gravity (Sec.\ \ref{sect.32}). The most common non-perturbative approximation techniques to this equation are introduced in Sec.\ \ref{sect.34} and important extensions giving structural insights to the gravitational renormalization group flow are summarized in Sec.\ \ref{sect.33}.
\subsection{The Wetterich equation for scalar field theory}
\label{sect.31}
The Wetterich equation is a universal tool for studying the RG flow of theories built from essentially any field content \cite{Dupuis:2020fhh}. In order to introduce this tool with the absolute minimum of technicalities, we first focus on a real scalar field $\varphi$ living on a $d$-dimensional Euclidean spacetime $(\mathbb{R}^d, \delta_{\mu\nu})$. For pedagogical reasons, we first review the construction of the effective action in this setting before introducing the effective average action and its FRGE.
We start from the generating functional of correlation functions (path integral)
\be\label{defZ}
Z[J] \equiv \frac{1}{N} \int \cD\varphi \, \exp\left\{ -S[\varphi] + \int d^dx \, J(x) \varphi(x) \right\} \, .
\ee
Here $N \equiv \int \cD\varphi \, \exp\left\{ -S[\varphi]\right\}$ is a normalization factor and $J(x)$ a source coupling to the quantum field. The dynamics of the field is governed by the bare action $S[\varphi]$ which is kept arbitrary at this point. Generically, this generating functional diverges and we implicitly assume that it has been suitably regularized by including an UV-cutoff. Eq.\ \eqref{defZ} allows to construct expectation values of operators $\cO$
\be\label{defnpt}
\left\langle \cO[\varphi] \right\rangle \equiv \frac{1}{N} \int \cD\varphi \, \cO[\varphi] \, \exp\left\{ -S[\varphi] \right\} \, .
\ee
In particular, expectation values of operators polynomial in $\varphi$ can be obtained by taking functional derivatives with respect to the source and subsequently setting $J$ to zero
\be\label{defcorrelators}
\langle \varphi(x_1) \cdots \varphi(x_n) \rangle = \left. \frac{\delta^n Z[J]}{\delta J(x_1) \cdots \delta J(x_n)} \right|_{J=0} \, .
\ee
Here, the normalization factors are chosen such that $\langle \unit \rangle = 1$. Based on the path integral \eqref{defZ}, one obtains the functional $W[J]$ generating all connected Green's functions by setting
\be\label{defW}
Z[J] \equiv e^{W[J]} \, .
\ee
We then introduce the mean field $\phi(x)$ as the expectation value of $\varphi(x)$:
\be\label{defmean}
\phi(x) = \langle \varphi(x) \rangle = \frac{\delta W[J]}{\delta J(x)} \, .
\ee
Finally, one constructs the effective action $\Gamma[\phi]$ as the Legendre transform of $W[J]$. If the relation \eqref{defmean} can be solved for the source, giving $J[\phi]$, it takes the form\footnote{In the general case, the effective action is obtained as the Legendre-Fenchel transform $\Gamma[\phi] = \sup_{J(x)}\left( \int d^dx \, J[\phi](x) \phi(x) - W[J[\phi]] \right)$. In the sequel, formulas are understood to include the supremum if needed.}
\be\label{defeffectiveaction}
\Gamma[\phi] = \int d^dx \, J[\phi](x) \phi(x) - W[J[\phi]] \, .
\ee
The fact that $W[J]$ and $\Gamma[\phi]$ are related by a Legendre transform implies that
\be\label{prop.inverse}
\int d^dy \, \frac{\delta^2 W[J]}{\delta J(x_1) \delta J(y)} \, \frac{\delta^2 \Gamma[\phi]}{\delta \phi(y) \delta \phi(x_2)} = \delta^d(x_1 - x_2) \, .
\ee
The effective action provides the equation of motion for the mean field in the presence of a source,
\be
\frac{\delta \Gamma[\phi]}{\delta \phi(x)} = J(x) \, .
\ee
Higher order functional derivatives generate the one-particle irreducible ($1$PI) $n$-point functions
\be
\Gamma^{(n)}[\phi] \equiv \frac{\delta^n \Gamma[\phi]}{\delta \phi(x_1) \cdots \delta\phi(x_n)} = \langle \varphi(x_1) \cdots \varphi(x_n) \rangle_{1 {\rm PI}} \, .
\ee
Eq.\ \eqref{prop.inverse} then entails that the second functional derivative of $\Gamma[\phi]$ encodes the quantum corrected propagator
\be
\left( \Gamma^{(2)}(x_1,x_2) \right)^{-1} = W^{(2)}(x_1,x_2) = G(x_1,x_2) \, .
\ee
Scattering processes are described by tree-level Feynman diagrams constructed from the propagators and vertices extracted from $\Gamma[\phi]$. In this sense, the effective action is the quantum analog of the classical action, since it encodes the quantum physics at tree level. Determining $\Gamma[\phi]$ is therefore often considered equivalent to solving the quantum theory.
The construction of the effective average action $\Gamma_k[\phi]$ proceeds along very similar lines. The key modification occurs at the level of the generating functional \eqref{defZ} which is supplemented by an IR-regulator
\be\label{defreg}
\Delta S_k[\varphi] = \frac{1}{2} \int d^dx \, \varphi(x) R_k(-\p^2) \varphi(x) \, .
\ee
The purpose of this extra ingredient is to provide a $k$-dependent mass-term for quantum fluctuations with momenta $p^2 \ll k^2$. In the simplest case, this is implemented by requiring that the regulator $R_k(p^2)$ satisfies
\be\label{reg.prop}
R_k(p^2) \approx
\left\{
\begin{array}{ll}
k^2 & \quad \text{for} \; p^2 \ll k^2 \, , \\
0 & \quad \text{for} \; p^2 \gg k^2 \, .
\end{array}
\right.
\ee
Examples of regulators used in practical computations include the (smooth) exponential cutoff,
\be\label{Rexp}
R_k(p^2) = p^2 \left( \exp(p^2/k^2) - 1 \right)^{-1} \, ,
\ee
and Litim-type regulators,
\be\label{Rlitim}
R_k(p^2) = (k^2 - p^2) \Theta(1-p^2/k^2) \, ,
\ee
where $\Theta(x)$ is the Heaviside step function. Adding \eqref{defreg} to the weight in the generating functional \eqref{defZ} induces a dependence on the scale $k$
\be\label{defZk}
Z_k[J] = \frac{1}{N} \int \cD\varphi \, \exp\left\{ -S[\varphi] - \Delta S_k[\varphi] + \int d^dx \, J(x) \varphi(x) \right\} \, .
\ee
The effect is that the contribution of modes with $p^2 \ll k^2$ to the generating functional becomes suppressed while the modes with $p^2 \gg k^2$ are integrated out in the usual way. Thus $k$ acquires a natural interpretation as a coarse-graining scale, marking the scale down to which microscopic quantum fluctuations have been integrated out in the generating functional.
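Both regulator shape functions above can be written in a few lines; a minimal sketch for scalar momentum arguments, where the small-$x$ guard only serves to realize the $p^2 \to 0$ limit $R_k \to k^2$ numerically.
\begin{verbatim}
import numpy as np

def R_exponential(p2, k2):
    # R_k(p^2) = p^2 / (exp(p^2/k^2) - 1); tends to k^2 as p^2 -> 0
    x = p2 / k2
    safe = p2 / np.expm1(np.maximum(x, 1e-12))
    return np.where(x < 1e-12, k2, safe)

def R_litim(p2, k2):
    # R_k(p^2) = (k^2 - p^2) Theta(1 - p^2/k^2)
    return np.where(p2 < k2, k2 - p2, 0.0)
\end{verbatim}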
Following the steps leading to the effective action, we then define the (now $k$-dependent) generating functional for connected Green's functions $W_k[J]$ by
\be\label{defWk}
Z_k[J] = \exp[W_k[J]] \, .
\ee
By definition, the effective average action is then given by a modified Legendre transform of $W_k[J]$:
\be\label{def.eaa}
\Gamma_k[\phi] \equiv \int d^dx \, J[\phi](x) \phi(x) - W_k[J] - \Delta S_k[\phi] \, .
\ee
For $k=0$ the IR regulator in the definition of $W_k[J]$ as well as $\Delta S_k[\phi]$ vanish and \eqref{def.eaa} agrees with the definition of the effective action \eqref{defeffectiveaction}:
\be
\lim_{k \rightarrow 0} \Gamma_k[\phi] = \Gamma[\phi] \, .
\ee
The key virtue of the effective average action is that its $k$-dependence is governed by a functional renormalization group equation, the Wetterich equation. This equation is formally exact in the sense that no approximations are made in its derivation. The construction of the Wetterich equation then proceeds along the following lines. We start by introducing the RG time $t \equiv \ln k/k_0$, with $k_0$ being an arbitrary reference scale, so that $\p_t = k \p_k$. We then consider the auxiliary generating functional
\be
\tilde{\Gamma}_k[\phi] \equiv \int d^dx \, J[\phi](x) \, \phi(x) - W_k[J] \, .
\ee
Taking a partial derivative of this definition with respect to RG time yields
\be\label{ptgammatilde}
\begin{split}
\p_t \tilde{\Gamma}_k[\phi] = & - \p_t W_k[J]= \frac{1}{2} \int d^dx \int d^dy \; \langle \varphi(x) \varphi(y) \rangle \; \p_t R_k(x,y) \, .
\end{split}
\ee
Here we have used that $\p_t W_k[J] = \p_t \ln Z_k[J]$ with
\be
\begin{split}
\p_t \ln Z_k[J]
& \, = - \frac{1}{2 Z_k} \int \cD \varphi \int d^dx \int d^dy \, \varphi(x) \, \p_t R_k(x,y) \, \varphi(y) \, \times \\ & \qquad \times \exp\left\{ -S[\varphi] - \Delta S_k[\varphi] + \int d^dx \, J(x) \varphi(x) \right\} \\
& \, = - \frac{1}{2} \int d^dx \int d^dy \, \, \langle \varphi(x) \varphi(y) \rangle \, \p_t R_k(x,y) \, \, ,
\end{split}
\ee
in the second step. We then introduce the ($k$-dependent) mean field
\be
\phi(x) = \langle \varphi(x) \rangle = \frac{\delta W_k[J]}{\delta J(x)} \, ,
\ee
together with the two-point functions
\be
\langle \varphi(x) \varphi(y) \rangle_{\rm c} \equiv \frac{\delta^2 W_k[J]}{\delta J(x) \delta J(y)}
\, , \qquad \tilde{\Gamma}_k^{(2)}(x,y) \equiv \frac{\delta^2 \tilde{\Gamma}_k[\phi]}{\delta \phi(x) \delta \phi(y)} \, .
\ee
Since $\tilde{\Gamma}_k[\phi]$ and $W_k[J]$ are again related by a Legendre transform, their second functional derivatives are again each other's inverses, cf.\ Eq.\ \eqref{prop.inverse}. This allows to express the two-point function appearing in the relation \eqref{ptgammatilde} in terms of $\tilde{\Gamma}_k^{(2)}(x,y)$
\be
\begin{split}
\langle \varphi(x) \varphi(y) \rangle = & \, \langle \varphi(x) \varphi(y) \rangle_{\rm c} + \langle \varphi(x) \rangle \, \langle \varphi(y) \rangle \, \\
= & \left( \tilde{\Gamma}_k^{(2)}(x,y) \right)^{-1} + \phi(x) \phi(y) \, .
\end{split}
\ee
Here we used the definition of the (now $k$-dependent) mean field when recasting the last term. Substituting this relation into \eqref{ptgammatilde} then yields
\be
\p_t \tilde{\Gamma}_k = \frac{1}{2} \int d^dx \int d^dy \, \left[ \left( \tilde{\Gamma}_k^{(2)}(x,y) \right)^{-1} \, \p_t R_k(x,y) \right] + \p_t \Delta S_k[\phi] \, .
\ee
Bringing the second term to the left-hand side and using that $\Gamma_k = \tilde{\Gamma}_k - \Delta S_k$ allows to rewrite this equation in terms of the effective average action
\be\label{FRGEint}
\p_t \Gamma_k[\phi] = \frac{1}{2} \int d^dx \int d^dy \, \left[ \, \left( \Gamma_k^{(2)} + R_k \right)^{-1} \, \p_t R_k \, \right] \, .
\ee
Noticing that the integrals on the right-hand side actually correspond to taking the trace of the argument, we arrive at the Wetterich equation in its iconic form
\be\label{FRGE}
\p_t \Gamma_k = \frac{1}{2} {\rm Tr}\left[ \left( \Gamma_k^{(2)} + R_k \right)^{-1} \, \p_t R_k \right] \, .
\ee
The Wetterich equation exhibits several remarkable features arising from the interplay of $R_k(p^2)$ in the numerator and denominator of the trace argument. In the propagator term $\left( \Gamma_k^{(2)} + R_k \right)^{-1}$, the regulator provides a mass to the fluctuations, ensuring the absence of IR-singularities as long as $k$ is finite. In the numerator, the condition $R_k(p^2) \rightarrow 0$ for $p^2 \gg k^2$ entails that the trace argument vanishes for high-momentum modes. As a consequence the right-hand side is IR and UV-finite and any UV-regulator implicit in the definition of the initial functional integral can be removed trivially.\footnote{In some practical computations, as e.g.\ in the computation of spectral flows \cite{Braun:2022mgx}, one may want to resort to regulators $R_k(p^2)$ where this fall-off property in the UV does not hold. In this case, the flow equation must be supplemented by additional counterterms absorbing the UV-divergences.}
The regulator structure furthermore entails that the trace argument is peaked at momenta $p^2 \approx k^2$. Hence the flow of $\Gamma_k[\phi]$ is driven by integrating out quantum fluctuations whose momenta are comparable to the coarse-graining scale $k$. In this way the Wetterich equation implements the Wilsonian picture of renormalization, integrating out quantum fluctuations shell-by-shell in momentum space. Notably, Eq.\ \eqref{FRGE} allows to start from any initial condition $\Gamma_\Lambda$ and integrate its RG flow towards the infrared. Thus the Wetterich equation does not require specifying a bare action a priori; a corresponding bare action can instead be reconstructed from the RG flow via the reconstruction problem \cite{Manrique:2008zw}.
We also observe that the combination of propagator and regulator within the trace induces a projective feature. Any $k$-independent rescaling of the fluctuation field affects the regulator and propagator in the same way, so that such rescalings drop out from the right-hand side of the equation. This renders the flow equation invariant with respect to certain classes of field redefinitions.
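To make the structure of the trace concrete, it can be evaluated for the simplest example of a free massive scalar, $\Gamma^{(2)}_k = p^2 + m^2$, with the Litim regulator \eqref{Rlitim}: inside the cutoff one has $\Gamma^{(2)}_k + R_k = k^2 + m^2$ and $\p_t R_k = 2k^2$, while the integrand vanishes for $p^2 > k^2$. A minimal numerical sketch of the resulting radial momentum integral:
\begin{verbatim}
import numpy as np
from math import gamma
from scipy.integrate import quad

def rhs_wetterich(k, m2, d=4):
    # (1/2) Tr[(Gamma^(2) + R_k)^(-1) dt R_k] for a free massive scalar
    sphere = 2 * np.pi**(d / 2) / gamma(d / 2)   # area of unit (d-1)-sphere
    integrand = lambda p: (sphere * p**(d - 1) / (2 * np.pi)**d
                           * 2 * k**2 / (k**2 + m2))
    value, _ = quad(integrand, 0.0, k)           # dt R_k vanishes for p > k
    return 0.5 * value
\end{verbatim}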
\subsection{The Wetterich equation for gravity}
\label{sect.32}
In the previous section, we derived the Wetterich equation \eqref{FRGE} for a real scalar field. Its extension to gauge fields and fermions is conceptually straightforward. In the context of gravity the construction faces two conceptual obstacles though. Firstly, our understanding of classical gravity based on general relativity indicates that gravitational interactions are mediated through the curvature of spacetime. This implies that spacetime itself becomes a dynamical and, in the context of the quantum theory, also fluctuating object. Hence, the concept of a fixed, non-dynamical spacetime providing the stage for the dynamics is lost at this point. This raises the question about how to define the coarse-graining scale $k$. Secondly, gravity shares some properties of a gauge theory. The Einstein-Hilbert action, for example, is invariant under coordinate transformations which act on the metric according to
\be\label{eq.coordtrafo}
\delta g_{\mu\nu} \equiv \cL_v g_{\mu\nu} = v^\rho \p_\rho g_{\mu\nu} + (\p_\mu v^\rho) g_{\rho\nu} + (\p_\nu v^\rho) g_{\rho\mu} \, .
\ee
Here $\cL_v$ denotes the Lie derivative along the generating vector field $v^\mu$. In order to ensure that the generating functional $Z_k$ sums over physically inequivalent configurations only, one has to introduce a suitable gauge-fixing condition. By construction, the gauge-fixing term breaks the invariance under the transformations \eqref{eq.coordtrafo}. As a consequence, the effective (average) action may lose this symmetry, leading to a proliferation of interaction monomials which could be generated along the RG flow.
Following the seminal work by Reuter \cite{Reuter:1996cp}, both of these conceptual difficulties can be overcome by resorting to the background field method. This procedure splits the (Euclidean) quantum metric $g_{\mu\nu}$ into a generic (but non-fluctuating) background metric $\gb_{\mu\nu}$ and fluctuations around this background $h_{\mu\nu}$. There is no requirement that the latter are small. The decomposition can then be implemented either through a linear or an exponential split (see \cite{Ohta:2016npm,Ohta:2016jvw} for a detailed discussion):
\be\label{eq.backgrounddecomp}
g_{\mu\nu} = \gb_{\mu\nu} + h_{\mu\nu} \, , \qquad g_{\mu\nu} = \gb_{\mu\alpha} \left( e^h \right)^\alpha{}_\nu \, .
\ee
Here we follow the standard convention that indices are raised and lowered with the background metric, i.e., $h^\mu{}_\nu = \gb^{\mu\alpha} h_{\alpha\nu}$, etc. While these decompositions agree to leading order in the fluctuation field, they actually define different theories, since they do not cover the same space of quantum fluctuations. Heuristically, this can be argued based on the observation that the linear split allows for $g_{\mu\nu}$ and $\gb_{\mu\nu}$ having different signatures while in the exponential split this is not the case \cite{Demmel:2015zfa}. This is also confirmed by computing properties of the Reuter fixed point in $d=2+\epsilon$ dimensions \cite{Nink:2015lmq}. In the following, we will adopt the linear split for simplicity.
The background metric then allows to quantize metric fluctuations along the lines of quantum field theory in a curved spacetime. Moreover, it allows to circumvent the conceptual difficulties discussed above as follows. Firstly, it provides the basis for separating fluctuations into ``high-'' and ``low-''momentum modes relative to the coarse-graining scale in a purely geometric way. Taking the background to be compact and introducing the Laplacian $\Delta \equiv - \gb^{\mu\nu} \Db_\mu \Db_\nu$ constructed from the background metric, one can obtain the ordered set of eigenmodes
\be\label{eigenmodes}
\Delta h^{n}_{\mu\nu} = E_n \, h^{n}_{\mu\nu} \, , \qquad n=0,1,\cdots \, ,
\ee
with $E_0 \le E_1 \le E_2 \le \cdots$. Fluctuations with $E_n \lesssim k^2$ are then considered ``long-range'' and are suppressed by the regulator while ``short-range'' fluctuations characterized by $E_n \gtrsim k^2$ are integrated out without suppression factor. Practically, this is achieved by generalizing \eqref{defreg} to
\be\label{defreggrav}
\Delta S_k[h;\gb] = \frac{1}{2} \int d^dx \sqrt{\gb} \left[ h_{\mu\nu}(x) \, \cR_k^{\mu\nu\alpha\beta}(\Delta) \, h_{\alpha\beta}(x) \right] \, .
\ee
The switch from $R_k$ to $\cR_k$ anticipates that, in general, the regulator is a matrix in field space carrying a non-trivial tensor structure. Note that \eqref{defreggrav} is quadratic in the fluctuation field: specifically, $\cR_k^{\mu\nu\alpha\beta}(\Delta)$ is independent of the fluctuation field and depends on $\gb_{\mu\nu}$ only. This property is essential in order to arrive at an FRGE of the form \eqref{FRGE}.
Secondly, the linear split allows to realize the transformation \eqref{eq.coordtrafo} in two distinct ways. Quantum gauge transformations ($Q$) keep $\gb_{\mu\nu}$ fixed and attribute the transformation of $g_{\mu\nu}$ to the fluctuation field
\be\label{def.Qtrafo}
\delta^Q \gb_{\mu\nu} = 0 \, , \qquad \delta^Q h_{\mu\nu} = \cL_v(\gb_{\mu\nu}+h_{\mu\nu}) \, .
\ee
It is this transformation that must be gauge-fixed. In addition, one can define background gauge transformations ($\delta^B$) where each field transforms as a tensor of the corresponding rank
\be\label{eq.Btrafo}
\delta^B \gb_{\mu\nu} = \cL_v \gb_{\mu\nu} \, , \qquad \delta^B h_{\mu\nu} = \cL_v h_{\mu\nu} \, .
\ee
This transformation can be maintained as an auxiliary symmetry by resorting to the class of background covariant gauges. Following the Faddeev-Popov procedure, the gauge-fixing is implemented by supplementing the gravitational action $S[g]$ by a gauge-fixing term
\be\label{eq.gf}
S^{\rm gf}[h;\gb] = \frac{1}{2\alpha} \int d^dx \sqrt{\gb} \, \gb^{\mu\nu} F_\mu \, F_\nu \, .
\ee
Here, $\alpha$ is a free parameter and the gauge-fixing condition $F_\mu[h;\gb]$ transforms as a rank-one tensor with respect to \eqref{eq.Btrafo}.
The gauge-fixing term is accompanied by the action for the Faddeev-Popov ghost and anti-ghost fields $C^\mu$ and $\bar{C}_\mu$
\be\label{Sghost}
S^{\rm ghost}[h,\Cb,C;\gb] = - \sqrt{2} \int d^dx \sqrt{\gb} \, \Cb_\mu \, \gb^{\mu\nu} \, \frac{\delta F_\nu}{\delta h_{\alpha\beta}} \, \cL_C(\gb_{\alpha\beta} + h_{\alpha\beta}) \, .
\ee
This action exponentiates the Faddeev-Popov determinant
\be
\det \cM = \det \left[ \frac{\delta F_\mu}{\delta v^\nu} \right] = \int \cD C^\mu \cD \bar{C}_\nu \, e^{- \int \Cb \cM C} \, .
\ee
At this point we have all the ingredients to write down the analogue of the generating functional \eqref{defWk} in the context of gravity
\be\label{defZkgrav}
\begin{split}
\exp(W_k[J;\gb]) = \frac{1}{N} \int &\, \cD h_{\alpha\beta} \cD C^\mu \cD \bar{C}_\nu \, \exp\Big\{
-S[\gb+h] - S^{\rm gf}[h;\gb] \\
&
- S^{\rm ghost}[h,\Cb,C;\gb] - \Delta S_k[h,\Cb,C;\gb] + S^{\rm source}
\Big\} \, .
\end{split}
\ee
Here $S[g]$ denotes a generic action built from the metric $g_{\mu\nu}$, invariant under \eqref{eq.coordtrafo}, $S^{\rm gf}[h;\gb]$ and $S^{\rm ghost}[h,\Cb,C;\gb]$ are the gauge-fixing and ghost actions given in Eqs.\ \eqref{eq.gf} and \eqref{Sghost}, and $\Delta S_k[h,\Cb,C;\gb]$ is the IR regulator \eqref{defreggrav} extended by a $k$-dependent mass term for the ghost fields. Finally,
\be\label{Ssource}
S^{\rm source} = \int d^dx \sqrt{\gb} \left\{ t^{\mu\nu} h_{\mu\nu} + \bar{\sigma}_\mu C^\mu + \sigma^\mu \Cb_\mu \right\}
\ee
introduces sources for the quantum field, which we collectively label by $J \equiv (t^{\mu\nu}, \sigma^\mu, \bar{\sigma}_\mu)$.
The construction of the effective average action then proceeds analogously to the scalar case. Taking functional derivatives of $W_k[J;\gb]$ with respect to the sources gives the expectation values of the fluctuation fields
\be\label{def.flucexp}
\langle h_{\mu\nu} \rangle = \frac{1}{\sqrt{\gb}} \frac{\delta W_k}{\delta t^{\mu\nu}} \, , \quad
\langle \Cb_\mu \rangle = \frac{1}{\sqrt{\gb}} \frac{\delta W_k}{\delta \sigma^\mu} \, , \quad
\langle C^\mu \rangle = \frac{1}{\sqrt{\gb}} \frac{\delta W_k}{\delta \bar{\sigma}_\mu} \, .
\ee
In a slight abuse of notation we then use the same labels for the mean- and quantum fields, identifying
\be\label{notationsimple}
h_{\mu\nu} = \langle h_{\mu\nu} \rangle \, , \qquad
C^\mu = \langle C^\mu \rangle \, , \qquad
\Cb_\mu = \langle \Cb_\mu \rangle \, , \qquad g_{\mu\nu} = \langle \gb_{\mu\nu} + h_{\mu\nu} \rangle \, .
\ee
We then assume again that the field-source relations \eqref{def.flucexp} can be solved for the sources as functions of the mean field. The effective average action is then again defined as the modified Legendre transform of $W_k$:
\be\label{def.eaagrav}
\Gamma_k[\Phi;\gb] = \int d^dx \sqrt{\gb} \left\{ t^{\mu\nu} h_{\mu\nu} + \bar{\sigma}_\mu C^\mu + \sigma^\mu \Cb_\mu \right\} - W_k[J;\gb] - \Delta S_k[\Phi;\gb] \, .
\ee
Here we used $\Phi = (h_{\mu\nu},\Cb_\mu,C^\mu)$ to denote the collection of expectation values.
The key property of the effective average action \eqref{def.eaagrav} is that its dependence on the coarse-graining scale $k$ is again governed by a formally exact functional renormalization group equation taking the form \eqref{FRGE}. Its derivation essentially follows the one for the scalar theory. Taking the derivative of \eqref{def.eaagrav} with respect to the RG time $t$ and expressing the right-hand side in terms of the Hessian of $\Gamma_k[\Phi;\gb]$ one finds \cite{Reuter:1996cp}
\be\label{frge.grav1}
\begin{split}
\p_t \Gamma_k[\Phi;\gb] = & \, \frac{1}{2} {\rm Tr}\left[ \left( \Gamma_k^{(2)} + \cR_k \right)^{-1}_{hh} \left( \p_t \cR_k \right)_{hh} \right] \\
& - \frac{1}{2} {\rm Tr} \left[ \left\{ \left(\Gamma_k^{(2)} + \cR_k \right)^{-1}_{\Cb C} - \left( \Gamma_k^{(2)} + \cR_k \right)^{-1}_{C\Cb} \right\} (\p_t \cR_k)_{\Cb C}\right] \, .
\end{split}
\ee
Here the matrix elements constituting the Hessian of $\Gamma_k$ are defined via
\be\label{eq.hessians}
\left( \Gamma_k^{(2)} \right)_{ij}(x,y) \equiv \frac{1}{\sqrt{\gb(x)} \sqrt{\gb(y)}} \,
\frac{\delta^2 \Gamma_k}{\delta \Phi^i(x) \delta \Phi^j(y)} \, .
\ee
For the Grassmann-valued (anti-commuting) fields in the ghost sector, we adopt the convention that matrix elements are defined in terms of left-derivatives, i.e.,
\be\label{eq.hessianghost}
\left( \left( \Gamma_k^{(2)} \right)_{\Cb C} \right)_\mu{}^\nu(x,y) = \frac{1}{\sqrt{\gb(x)}} \frac{\delta}{\delta C^\mu(x)} \frac{1}{\sqrt{\gb(y)}} \frac{\delta}{\delta \Cb_\nu(y)} \, \Gamma_k[\Phi;\gb] \, .
\ee
Introducing a supertrace STr which includes a sum over all fluctuation fields as well as a minus sign for Grassmann-valued degrees of freedom, Eq.\ \eqref{frge.grav1} can again be written in compact form,
\be\label{frge.grav2}
\p_t \Gamma_k[\Phi;\gb] = \frac{1}{2} {\rm STr}\left[ \left( \Gamma_k^{(2)} + \cR_k \right)^{-1} \, \p_t \cR_k \right] \, .
\ee
This equation maintains all the properties discussed in the context of the scalar theory. It is the central result of this section and constitutes the starting point for investigating the Wilsonian renormalization group flow of gravity. Notably, its use is not limited to the case where the gravitational degrees of freedom are encoded in metric fluctuations. It is also applicable to formulations building on different sets of degrees of freedom, including unimodular gravity, the Hilbert-Palatini formulation, the Arnowitt-Deser-Misner (ADM) decomposition of the metric degrees of freedom, and also Ho\v{r}ava-Lifshitz gravity. This makes \eqref{frge.grav2} a powerful and rather universal tool to study the quantum properties of gravity beyond perturbation theory, and its use in practical computations will be discussed in Sec.\ \ref{sec.eh}.
At this point the following conceptual clarifications are in order. At first sight the introduction of a background metric seems to contradict the idea of background independence intrinsic to general relativity. This is not the case though. Keeping $\gb_{\mu\nu}$ generic essentially corresponds to quantizing the theory in all backgrounds simultaneously. Subsequently, one can then invoke a dynamical principle determining $\gb_{\mu\nu}$. In this way one retains background independence even in the presence of a background metric. This viewpoint underlies the concept of self-consistent backgrounds developed in \cite{Becker:2014pea,Pagani:2019vfm}.
\subsection{Common approximation schemes}
\label{sect.34}
The Wetterich equation \eqref{frge.grav2} constitutes a formally exact equation. Finding exact solutions to it is equivalent to carrying out the functional integral \eqref{eq.Zdef}. This is extremely ambitious, however, and can usually not be achieved in closed form. Thus, one has to resort to approximations.
Probably, the most prominent approximation is perturbation theory. In this case the standard result is recovered by neglecting the $k$-dependence of $\Gamma_k^{(2)}$ on the right-hand side of Eq.\ \eqref{frge.grav2} and approximating $\Gamma_k^{(2)} \rightarrow S_\Lambda^{(2)}$ with $S_\Lambda$ the bare action defined at the UV-scale $\Lambda$. This approximation turns the trace into a total derivative
\be
\p_t \Gamma_k \simeq \frac{1}{2} \p_t {\rm Tr} \left[ \ln \left( S_\Lambda^{(2)} + \cR_k \right) \right] \, .
\ee
Here and in the following we use $\simeq$ to indicate an approximation of the exact flow.
Integrating this equation from the UV-scale down to $k=0$ and assuming that the regulator vanishes at the boundaries then yields the standard formula for the one-loop effective action
\be
\Gamma^{\rm 1-loop} = S_\Lambda + \frac{1}{2} {\rm Tr} \left[ \ln S_\Lambda^{(2)} \right] \, .
\ee
The investigation of RG fixed points typically builds on non-perturbative approximation schemes though. The basic idea is to start from the exact flow and project it onto a subspace spanned by a finite (or even infinite) set of interaction monomials $\cO_i$. In the setup introduced in Sec.\ \ref{sec.ea}, this amounts to truncating the sum in eq.\ \eqref{eq.expansionG} to a finite set
\be
\Gamma_k \simeq \sum_{i=1}^N \, \bar{u}_i(k) \, \cO_i \, .
\ee
These types of approximations can be set up systematically, either in the form of a derivative expansion or a vertex expansion. These commonly used non-perturbative approximation schemes will be discussed in Sects.\ \ref{sect.341} and \ref{sect.342}, respectively.
\subsubsection{Derivative and curvature expansion}
\label{sect.341}
When developing non-perturbative approximation schemes, it is important to appreciate that $\Gamma_k$ depends on two metric arguments $g_{\mu\nu}$ and $\gb_{\mu\nu}$. The dependence on $g_{\mu\nu}$ can be traded for the fluctuations $h_{\mu\nu}$ by substituting the linear split \eqref{eq.backgrounddecomp}. Structurally, it is then convenient to organize the contributions in $\Gamma_k$ according to their transformation properties with respect to the background and quantum gauge transformations
\be\label{eq.gammaexp}
\Gamma_k[g, \gb, \bar{C}, C] = \bar{\Gamma}_k[g] + \widehat{\Gamma}_k[g,\gb] + \Gamma^{\rm gf}_k[g,\gb] + \Gamma_k^{\rm ghost}[g, \gb, \bar{C}, C] \, .
\ee
Here $\Gamma_k^{\rm gf}[g,\gb]$ and $\Gamma_k^{\rm ghost}[g, \gb, C, \bar{C}]$ are the standard gauge-fixing and ghost terms. The subscript $k$ thereby indicates that these sectors can contain $k$-dependent couplings, as, e.g., a wave-function renormalization for the ghost fields. The contribution $\bar{\Gamma}_k[g]$ collects all terms constructed from $g_{\mu\nu}$ only. By construction, $\bar{\Gamma}_k[g]$ is then invariant with respect to both background and quantum gauge transformations
\be
\delta^B \bar{\Gamma}_k[g] = 0 \, , \qquad \delta^Q \bar{\Gamma}_k[g] = 0 \, .
\ee
The terms contained in $ \widehat{\Gamma}_k[g,\gb]$ genuinely depend on both arguments. It collects the ``off-diagonal'' contributions and satisfies
\be
\widehat{\Gamma}_k[g,g] = 0 \, .
\ee
A rather broad class of approximations based on \eqref{eq.gammaexp} truncates the effective average action by setting $ \widehat{\Gamma}_k[g,\gb] \simeq 0$. Commonly, these approximations are referred to as \emph{single-metric approximations} \cite{Manrique:2009uh,Manrique:2010am,Manrique:2010mq}. Most approximations along these lines also work with a classical ghost sector, setting $\Gamma_k^{\rm ghost}[g, \gb, \bar{C}, C] \simeq S^{\rm ghost}[g, \gb, \bar{C}, C]$.
Building on the results by Fulling, King, Wybourne, Cummins \cite{Fulling:1992vm} (further elaborated on in \cite{Decanini:2008pr}), one can systematically construct a basis $\cO_i[g]$ in which $\bar{\Gamma}_k[g]$ can be expanded. The explicit construction of the independent basis elements needs to take into account redundancies due to the Bianchi identity $D_{[\mu} R_{\alpha\beta]\gamma\delta} = 0$. In addition, low-dimensional cases are subject to additional simplifications, e.g., due to the vanishing of the Weyl tensor in $d=3$.
The symmetries of $\bar{\Gamma}_k[g]$ dictate that the corresponding monomials are built from the Riemann tensor $R_{\mu\nu\rho\sigma}$, its contractions, and covariant derivatives $D_\mu$ acting on the curvature tensors. Convenient building blocks for the basis elements are then provided either by the Riemann basis
\be\label{eq.riemannbasis}
\cO_i[g] = \cO_i[\sqrt{g},R,R_{\mu\nu},R_{\mu\nu\rho\sigma},D_\mu]
\ee
or the Weyl basis
\be\label{eq.weylbasis}
\cO_i[g] = \cO_i[\sqrt{g},R,R_{\mu\nu},C_{\mu\nu\rho\sigma},D_\mu] \, .
\ee
The two choices are related by the identity
\be\label{RiemannToWeyl}
C_{\mu\nu\rho\sigma} = R_{\mu\nu\rho\sigma} - \frac{2}{d-2}\left( g_{\mu[\rho} R_{\sigma]\nu} - g_{\nu[\rho} R_{\sigma]\mu} \right) + \frac{2}{(d-1)(d-2)} R g_{\mu[\rho} g_{\sigma]\nu} \, .
\ee
In terms of structural aspects, it is often useful to work in the Weyl basis, since this choice disentangles the contributions of the higher-derivative terms to the flat-space graviton propagator.
The expansion of $\bar{\Gamma}_k[g]$ can be organized systematically by counting the number of spacetime derivatives $n$ contained in the monomial $\cO_i[g] \equiv \int d^dx \sqrt{g} \tilde{\cO}_l^n[g]$. The index $i$ is thereby traded for the pair $\{n,l\}$, where $l$ enumerates the basis elements occurring at a fixed order $n$. This scheme is called the \emph{derivative expansion} of $\bar{\Gamma}_k[g]$; the basis elements appearing at the lowest orders are collected in Table \ref{tab.derivativeexp}.
\begin{table}[t!]\centering
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{c|cc|ccc}
\backslashbox{$n$}{$l$}
& \hspace{7mm} $1$ \hspace{7mm} & \hspace{7mm} $2$ \hspace{7mm} & \hspace{7mm} $3$ \hspace{7mm} & \hspace{7mm} $4$ \hspace{7mm} & \hspace{7mm} $\cdots$ \hspace{7mm} \\ \hline \hline
$\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ \\
\hspace{3mm} $8$ \hspace{3mm} & $R \Delta^2 R$ & $C_{\mu\nu\rho\sigma} \Delta^2 C^{\mu\nu\rho\sigma}$ & $R^4$ & $R^2 \, R_{\mu\nu} \, R^{\mu\nu}$ & $\cdots$ \\
$6$ & $R \Delta R$ & $C_{\mu\nu\rho\sigma} \Delta C^{\mu\nu\rho\sigma}$ & $R^3$ & $R \, R_{\mu\nu} \, R^{\mu\nu}$ & $+6$ more \\
$4$ & $R^2$ & $C_{\mu\nu\rho\sigma} C^{\mu\nu\rho\sigma}$ & $E$ & & \\
$2$ & $R$ & & & & \\
$0$ & $\mathbb{1}$ & & & & \\ \hline \hline
\end{tabular}
\caption{Illustration of the interaction monomials $\tilde{\cO}_l^n[g]$ appearing in the derivative expansion of $\bar{\Gamma}_k[g]$ at order $n$ using the Weyl basis \eqref{eq.weylbasis}. The terms listed in the middle contribute to the graviton propagator in a four-dimensional flat background. Terms in the right-most block contribute terms proportional to the background curvature in $\Gamma_k^{(2)}[h=0;\gb]$ and may be interpreted as ``potential terms''. Furthermore, $E= R_{\mu\nu\rho\sigma} R^{\mu\nu\rho\sigma} - 4 R_{\mu\nu}R^{\mu\nu} + R^2$ denotes the integrand of the Gauss-Bonnet term, which is topological in $d=4$.}\label{tab.derivativeexp}
\end{table}
The number of independent basis elements increases significantly with each order in the derivative expansion. This expansion scheme provides a good ordering principle when studying the ``low-energy'' properties of the theory. For fixed points which are Gaussian or ``almost-Gaussian'', power-counting also provides a good guide as to whether a given operator is relevant or irrelevant.
A conceptual shortcoming of the derivative expansion is that truncating the series of terms contributing to the gravitational propagator induces potentially spurious poles \cite{Becker:2017tcx}. The reason is that the approximation intrinsic to the derivative expansion leads to inverse propagators which are polynomial in the momentum. Hence it is difficult to address questions about stability and the potential presence of ghosts within this approximation \cite{Platania:2020knd,Platania:2022gtt}.
As stressed in \cite{Knorr:2019atm}, this feature can be bypassed by switching to a curvature expansion. The basic idea is to collect the covariant derivatives appearing in interaction monomials in operator-valued functions, called \emph{form factors}. These capture the dependence of propagators and interaction vertices on the (generalized) momenta of the fields and can also be defined in an arbitrary curved background spacetime. Building on the examples given in Table \ref{tab.derivativeexp}, the form factors appearing at the lowest non-trivial order in the curvature expansion arise from combining the terms in the columns with $l=1$ and $l=2$:
\be\label{eq.formfactors}
\begin{split}
& \sum_{n=0} \bar{u}^R_n(k) \, R \, \Delta^n \, R \mapsto R \, W^R_k(\Delta) \, R \, , \\
& \sum_{n=0} \bar{u}^C_n(k) \, C_{\mu\nu\rho\sigma} \, \Delta^n \, C^{\mu\nu\rho\sigma} \mapsto C_{\mu\nu\rho\sigma} \, W^C_k(\Delta) \, C^{\mu\nu\rho\sigma} \, .
\end{split}
\ee
Notably, there are only two form factors appearing at second order in the spacetime curvature. A potential third function $R_{\mu\nu} \, W^{\rm Ric}_k(\Delta) \, R^{\mu\nu}$ can be mapped to \eqref{eq.formfactors} and higher-curvature terms by applying the Bianchi identity. The functions $W^C_{k=0}(\Delta)$ and $W^R_{k=0}(\Delta)$ fix the graviton propagator in a flat background. From Table \ref{tab.derivativeexp}, it is also apparent that there is no form factor at first order in the curvature expansion. Any derivatives acting on $R$ would lead to a surface term. As a consequence, Newton's constant $G_0$ (and also the cosmological constant $\Lambda_0$) cannot carry a dependence on the physical momenta of the field.
The $k$-dependence of a form factor can again be obtained by substituting the corresponding ansatz for $\Gamma_k$ into the Wetterich equation and projecting the flow on the corresponding subspace. In general, this results in a non-linear integro-differential equation for the unknown functions, see Table \ref{tab.diffeqs}. Solving these equations either numerically or by employing pseudospectral methods then allows one to obtain information on the graviton propagator and the momentum dependence of interaction vertices, see \cite{Bosma:2019aiu} for pioneering work in this direction.
\begin{table}[t!]
\centering
\renewcommand{\arraystretch}{1.7}
\begin{tabular}{ccc}
approximation of $\Gamma_k$ & \hspace{3mm} structure of RG flow \hspace{3mm} & \hspace{3mm} fixed points \hspace{3mm} \\ \hline \hline
finite number of $\cO_i$ & ODEs & algebraic \\
\begin{tabular}{@{}c@{}}field dependent functions \\[-1.4ex] $f_k(R_1, \cdots, R_n)$\end{tabular} &
\begin{tabular}{@{}c@{}}PDEs \\[-1.4ex] $(n+1)$ variables\end{tabular} &
\begin{tabular}{@{}c@{}}PDEs \\[-1.4ex] $n$ variables\end{tabular} \\
\begin{tabular}{@{}c@{}}momentum-dependent form factors \\[-1.4ex] $f_k(p_1, \cdots, p_n)$\end{tabular} &
\begin{tabular}{@{}c@{}}IDEs \\[-1.4ex] $(n+1)$ variables\end{tabular} &
\begin{tabular}{@{}c@{}}IDEs \\[-1.4ex] $n$ variables \end{tabular} \\ \hline \hline
\end{tabular}
\caption{Summary of the mathematical structures capturing the flow of $\Gamma_k$
in different classes of approximations. Depending on the scale-dependent terms
retained in $\Gamma_k$, the projected flow equations are non-linear ordinary differential
equations (ODEs), partial differential equations (PDEs), or (partial) integro-
differential equations (IDEs). Since fixed functionals are $k$-stationary solutions, their
structure is encoded in differential equations which contain one variable less than the
\mbox{corresponding} flow equation.}\label{tab.diffeqs}
\end{table}
\subsubsection{Incorporating higher-order interaction vertices}
\label{sect.342}
The background approximation evaluates the Wetterich equation at zeroth order in the fluctuation field. This class of approximations can then be extended systematically by taking into account higher orders of the fluctuation field. This is the idea behind the bimetric computations initiated in \cite{Manrique:2009uh,Manrique:2010am,Manrique:2010mq} and the fluctuation approach reviewed in \cite{Pawlowski:2020qer}. It can be implemented systematically by performing a vertex expansion of $\Gamma_k[h;\gb]$ in powers of the fluctuation field:\footnote{The discussion of the ghost contributions follows the same lines, but is suppressed for the sake of readability.}
\be\label{eq.vertex}
\Gamma_k[h;\gb] = \sum_{n,l} \frac{1}{n!} \int d^dx \; \Gamma_k^{l;\mu_1\nu_1 \cdots \mu_n \nu_n}[\gb] \; h_{\mu_1\nu_1} \cdots h_{\mu_n\nu_n} \, .
\ee
Here $l$ enumerates the set of independent tensor structures contracting $n$ powers of the fluctuation fields. Note that all dependence on the background metric is stored in $\Gamma_k^{l;\mu_1\nu_1 \cdots \mu_n \nu_n}[\gb]$. Similarly to \eqref{eq.riemannbasis} and \eqref{eq.weylbasis}, the vertices can be built from $\sqrt{\gb}$, background curvature tensors and their contractions, as well as the background covariant derivative. By construction $\Gamma_k^{l;\mu_1\nu_1 \cdots \mu_n \nu_n}[\gb]$ transforms as a tensor of the corresponding rank with respect to background gauge transformations. Since the expansion captures contributions from both $\bar{\Gamma}_k[g]$ and $\widehat{\Gamma}_k[g,\gb]$, quantum gauge invariance is broken and the classification of admissible vertices is significantly more complicated than in the single-metric case. Prototypical examples of terms appearing in the vertex expansion can be obtained from expanding the gauge-fixed Einstein-Hilbert action in powers of $h_{\mu\nu}$. Explicit examples can then be found in Eqs.\ \eqref{eq.gammaquad2} and \eqref{eq.flucttensors}.
The $k$-dependence of the vertices appearing in \eqref{eq.vertex} can again be obtained from the Wetterich equation. Taking functional derivatives of \eqref{eq.Wetterich} with respect to the fluctuation fields gives a hierarchy of equations of the schematic form
\be\label{eq.vertexflow}
\p_t \Gamma^{(n)}_k[\gb] = \text{Flow}\left[ \Gamma^{(2)}_k[\gb], \cdots, \Gamma^{(n+2)}_k[\gb] \right] \, .
\ee
Here the superscript indicates the $n$-th functional derivative of $\Gamma_k$ with respect to the fluctuation fields, cf.\ \eqref{eq.hessians}. Background computations evaluate this hierarchy at zeroth order in $n$. Note that the right-hand side also depends on the higher-order vertices $\Gamma^{(n+1)}_k[\gb]$ and $\Gamma^{(n+2)}_k[\gb]$. The truncation of the system to a finite set of tensor structures then requires an assumption on these higher-order vertices in order to close the system. A typical strategy is to approximate the couplings appearing at the orders $(n+1)$ and $(n+2)$ by the ones appearing at the lower orders in the hierarchy.
In practice, computations maintaining information about the fluctuation fields have mainly been carried out in a flat background, setting $\gb_{\mu\nu} = \delta_{\mu\nu}$. This choice gives access to powerful momentum space techniques and the hierarchy \eqref{eq.vertexflow} can then be evaluated by employing standard Feynman diagram techniques. In particular, eq.\ \eqref{eq.vertex} simplifies to
\be\label{eq.vertex2}
\Gamma_k[h;\delta] = \sum_{n,l} \frac{1}{n!} \left( \prod_{i=1}^n \int \frac{d^dp_i}{(2\pi)^d} \right) \, \Gamma_k^{l;\mu_1\nu_1 \cdots \mu_n \nu_n}(p_1,\cdots, p_n) \, h_{\mu_1\nu_1}(p_1) \cdots h_{\mu_n\nu_n}(p_n) \, ,
\ee
where the $p_i$ are the momenta of the fluctuation fields. This has led to significant insights into the momentum dependence of the graviton two-point function \cite{Christiansen:2014raa,Bonanno:2021squ} and has allowed resolving the momentum dependence of three- and four-point vertices \cite{Christiansen:2015rva,Denz:2016qks}.
\subsection{Further developments}
\label{sect.33}
The discussion of the Wetterich equation and its properties mainly followed the initial constructions \cite{Wetterich:1992yh,Reuter:1993kw,Reuter:1996cp}. We complete our exposition by briefly introducing two recent developments, the \emph{minimal essential scheme} \cite{Baldazzi:2021ydj} (Sec.\ \ref{sect.331}) and the $N$-type cutoffs \cite{Becker:2020mjl,Becker:2021pwo} (Sec.\ \ref{sect.332}).
\subsubsection{The minimal essential scheme}
\label{sect.331}
Ultimately, the goal of the gravitational asymptotic safety program is the construction of observables. From this perspective, it turns out that the theory space introduced in Sec.\ \ref{sec.ea}, spanned by all possible interaction monomials $\cO_i$, contains redundancies in the sense that not all couplings appearing in this basis will also enter into the observables. A prototypical example is the wave-function renormalization of a field, which drops out from the construction of scattering amplitudes. On this basis one distinguishes between \emph{essential couplings} which enter into the expressions for physical observables and \emph{inessential couplings} whose values can be changed without affecting the predictions of the theory.
Typically, a change in an inessential coupling can be absorbed into a reparameterization of the dynamical variables. Considering an infinitesimal change in the field, $\chi \mapsto \chi + \xi[\chi]$, the underlying action transforms as\footnote{We use the ``$\cdot$'' to indicate an integral over spacetime and potentially a sum over internal indices labeling the fields.}
\be\label{fieldrep}
S[\chi] \mapsto S[\chi] + \xi[\chi] \cdot \frac{\delta}{\delta \chi} S[\chi] \, .
\ee
This underlies the general statement that operators which are proportional to the equations of motion can be removed by a field redefinition and are thus linked with inessential couplings \cite{tHooft:1973pz}. Generically, one can also consider finite frame transformations to a new field parameterization,
\be\label{eq.frame1}
\phi(x) = \phi[\chi](x),
\ee
requiring that the map is quasi-local and invertible.
Implementing the procedure of removing inessential couplings at the level of the functional renormalization group is slightly more complicated. Since the corresponding couplings depend on the coarse-graining scale $k$, the field-redefinitions required in this process inherit this scale-dependence. Thus the frame transformation \eqref{eq.frame1} is promoted to be $k$-dependent
\be\label{eq.frame2}
\phi_k(x) = \phi_k[\chi](x) \, .
\ee
This effect can be accommodated by formulating the Wetterich equation in a frame-covariant way \cite{Pawlowski:2005xe}
\be\label{FRGE.framecov}
\left( \p_t + \Psi_k[\phi] \, \frac{\delta}{\delta \phi} \right) \Gamma_k[\phi]
= \frac{1}{2} {\rm Tr} \left[ \left(\Gamma^{(2)}_k + \cR_k \right)^{-1} \left( \p_t + 2 \, \Psi_k[\phi] \frac{\delta}{\delta \phi} \right) \cR_k \right] \, .
\ee
The renormalization group kernel
\be\label{RGkernel}
\Psi_k[\phi] \equiv \p_t \phi_k[\chi]
\ee
thereby accounts for the $k$-dependence of the frame transformation.
In order to illustrate the workings of the minimal essential scheme, we return to the example of a scalar field theory. Explicitly, we set
\be\label{eq.gammaex1}
\Gamma_k[\chi] = \int d^dx \left\{ \frac{Z_k}{2} \chi \left[ -\p^2 + m_k^2 \right] \chi + \frac{Z_k^2 \lambda_k}{12} \chi^4 + \cdots \right\} \, .
\ee
Here $m_k$ and $\lambda_k$ are scale-dependent couplings, $Z_k$ is the wave-function renormalization of the field, and the dots symbolize additional interaction terms. The wave-function renormalization constitutes an inessential coupling and we seek to remove it by a $k$-dependent frame transformation. Inspecting \eqref{eq.gammaex1} indicates that this can be achieved by a $k$-dependent frame transformation which is linear in the field
\be
\phi_k = Z_k^{1/2} \, \chi \, .
\ee
The kernel \eqref{RGkernel} then evaluates to $\Psi_k[\phi] = - \frac{1}{2} \eta_k \phi_k$ where $\eta_k \equiv - \p_t \ln Z_k$ is the anomalous dimension of the field. Evaluating \eqref{FRGE.framecov} then yields
\be\label{eq.gammaex2}
\left( \p_t - \frac{1}{2} \eta_k \, \phi \, \frac{\delta}{\delta \phi} \right) \Gamma_k[\phi]
= \frac{1}{2} {\rm Tr} \left[ \left(\Gamma^{(2)}_k[\phi] + \cR_k \right)^{-1} \left( \p_t \cR_k - \eta_k \, \cR_k \right) \right] \, .
\ee
The new functional $\Gamma_k[\phi]$ is then independent of $Z_k$. More precisely, the inessential coupling has been fixed to $Z_k=1$ at all scales. The result \eqref{eq.gammaex2} furthermore shows that $\eta_k$ depends on the essential couplings of the theory only.
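As a quick consistency check of the kernel computed above, one may verify the identity $\Psi_k[\phi] = -\tfrac{1}{2}\eta_k\,\phi_k$ symbolically. The following sketch (our own, using the computer algebra package sympy) does so for the linear frame transformation $\phi_k = Z_k^{1/2}\chi$:
\begin{verbatim}
import sympy as sp

# Verify Psi_k = d_t phi_k[chi] = -(1/2) eta_k phi_k for phi_k = Z_k^(1/2) chi
t, chi = sp.symbols('t chi')
Z = sp.Function('Z', positive=True)(t)
phi = sp.sqrt(Z)*chi
eta = -sp.diff(sp.log(Z), t)        # anomalous dimension eta_k = -d_t ln Z_k
Psi = sp.diff(phi, t)               # RG kernel, cf. eq. (RGkernel)
print(sp.simplify(Psi + eta*phi/2))   # -> 0
\end{verbatim}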
As pointed out in \cite{Baldazzi:2021ydj} and illustrated by our explicit example above, the use of the frame-covariant flow equation in combination with the minimal-essential scheme may lead to significant technical simplifications when constructing solutions to the flow equation. In practice, these simplifications can be exploited systematically by parameterizing the kernel $\Psi_k[\phi]$ in terms of $k$-dependent $\gamma$-functions \cite{Baldazzi:2021orb,Knorr:2022ilz}. The freedom gained in this way can then be used to fix the inessential coupling constants to specific values. The scale-dependence of the theory is then captured by the $\beta$-functions (governing the $k$-dependence of the essential couplings) and the $\gamma$-functions (governing the $k$-dependence of the inessential ones). Both sets of equations depend on the essential couplings only. The last property then simplifies the search for RG fixed points in a significant way.
\subsubsection{Flows in terms of $N$-type cutoffs}
\label{sect.332}
Recently, a novel regularization scheme via dimensionless $N$-type cutoffs has been introduced
\cite{Becker:2020mjl,Becker:2021pwo,Banerjee:2023ztr}, which may constitute a more physical alternative to the usually employed dimensionful UV cutoffs. The motivation for the introduction of a scale-free regularization scheme is the construction of regularized quantum systems which have the potential of being physically realizable themselves. In this way, physical properties of the theory, which conventionally are studied in the quantum field theory limit, could already be probed at the level of the regularized system. Moreover, this scale-free regularization scheme is designed in such a way that self-consistent background geometries can easily be accessed.
Schematically, the $N$-type cutoff regularizes the path integral \eqref{eq.Zdef} as follows. One expands the field in the eigenbasis of a suitable self-adjoint operator, e.g., the background Laplacian, such that the corresponding eigenvalues increase with $n \in \mathbb{N}$ (or $n \in \mathbb{R}^+$), cf.\ Eq.\ \eqref{eigenmodes}. Then the path integral is regularized by restricting the domain of integration to the field modes $h^n$ with $n \le N$. As a result, one obtains $N$-sequences of regularized quantum systems, which in principle are physically realizable.
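The following minimal sketch (an illustration with a hypothetical spectrum, not taken from \cite{Becker:2020mjl,Becker:2021pwo}) shows the resulting structure: truncating a spectral sum at $n \le N$ produces an $N$-sequence of finite, regularized quantities whose $N \rightarrow \infty$ behavior can then be studied:
\begin{verbatim}
import numpy as np

# N-type cutoff: keep only the eigenmodes with n <= N; here we take the
# hypothetical eigenvalues E_n = n^2 and a sample spectral function f.
def regularized_sum(N, f):
    n = np.arange(N + 1, dtype=float)
    return f(n**2).sum()

for N in (10, 100, 1000):
    print(N, regularized_sum(N, lambda E: 1.0/(1.0 + E)))
\end{verbatim}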
As a first application, the self-consistent spherical background geometries stemming from summing up the vacuum energy of a scalar field \cite{Becker:2020mjl} as well as of metric fluctuations \cite{Becker:2021pwo} have been studied. The striking result, which is due to background independence, is that the self-consistent scalar curvatures $R(N)$ vanish for $N \rightarrow \infty$ in both cases. This is precisely the opposite of the commonly perceived cosmological constant problem, according to which the background curvature, and therewith the total cosmological constant, should diverge when removing the UV regulator. Another striking result of this regularization scheme \cite{Becker:2020mjl} is that $N$-type cutoffs offer an explanation of the microscopic degrees of freedom counted by the Bekenstein-Hawking entropy of de Sitter space.
\section{The Einstein-Hilbert truncation}
\label{sec.eh}
We proceed by giving an explicit example, illustrating how the Wetterich equation \eqref{frge.grav2} is used to extract non-perturbative information about the gravitational RG flow. The discussion is based on the arguably simplest approximation for the effective average action $\Gamma_k$, the Einstein-Hilbert truncation. Starting from the seminal paper \cite{Reuter:1996cp}, this projection has been studied in detail in a series of works \cite{Souma:1999at,Reuter:2001ag,Lauscher:2001ya,Litim:2003vp,Gies:2015tca}. It still forms an integral part of studying the RG flow in many gravity-matter systems. The present exposition differs from the historical computations where the background metric has been set to that of the maximally symmetric $d$-sphere $S^d$. Instead, we combine the idea of the universal RG machine \cite{Benedetti:2010nr,Groh:2011vn} with off-diagonal heat-kernel techniques \cite{Gorbar:2002pw,Gorbar:2003yt,Decanini:2005gt,Benedetti:2010nr,Codello:2012kq} and carry out the derivation of the beta functions \emph{without specifying the background metric} $\gb_{\mu\nu}$. This stresses the background-independent nature of the computation and emphasizes the modern viewpoint on evaluating the FRGE in the context of gravity. In order to keep technical complications to a minimum, we adopt the harmonic gauge. The beta functions resulting from this setting are computed in Sec.\ \ref{eh.beta}, and the resulting fixed point structure and phase diagram are presented in Sec.\ \ref{eh.phase}. Generalizations of this computation, resorting to additional field decompositions and to modified gauge-fixing and regularization prescriptions, have been studied in \cite{Gies:2015tca} and corroborate the findings reviewed in this section.
\subsection{Deriving the beta functions}
\label{eh.beta}
The Einstein-Hilbert (EH) truncation works in the background approximation. Thus the flow is obtained at zeroth order in the fluctuation fields. As a consequence only terms of zeroth and second order in the fluctuations are needed in the evaluation of the Wetterich equation. The projection of the flow equation tracks the scale-dependence of the (background) Newton's coupling $G_k$ and the cosmological constant $\Lambda_k$. The gravitational part of the effective average action is approximated by the Einstein-Hilbert action
\be\label{eq.einsteinhilbert}
\Gamma_k^{\rm EH}[g] = \frac{1}{16\pi G_k} \int d^dx \sqrt{g} \left(-R + 2 \Lambda_k \right) \, ,
\ee
with the couplings depending on the coarse-graining scale $k$. In view of the upcoming computation, it is convenient to introduce the dimensionless counterparts of Newton's coupling and the cosmological constant as well as the anomalous dimension of Newton's coupling
\be\label{dimlessvars}
g_k \equiv k^{d-2} G_k \, , \qquad \lambda_k \equiv k^{-2} \Lambda_k \, , \qquad
\eta_N(k) \equiv (G_k)^{-1} \p_t G_k \, .
\ee
Furthermore, geometrical quantities constructed from $\gb_{\mu\nu}$ are distinguished by a bar. For example, $\Db_\mu$ denotes the covariant derivative constructed from the background metric.
In order to obtain well-defined propagators, $\Gamma^{\rm EH}_k$ must be supplemented by a gauge-fixing term and the corresponding ghost action. Concretely, we implement a background gauge-fixing
\be\label{eq.gaugefix}
\Gamma_k^{\rm gf}[h;\gb] = \frac{1}{32 \pi G_k \alpha} \int d^dx \sqrt{\gb} \, \gb^{\mu\nu} F_\mu F_\nu \, ,
\ee
where the gauge-fixing condition is taken to be linear in the fluctuation field
\be\label{eq.gaugefixlinear}
F_\mu[h;\gb] = \left[ \delta_\mu^\alpha \Db^\beta - \beta \gb^{\alpha\beta} \Db_\mu \right] \, h_{\alpha\beta} \, .
\ee
Here $\alpha$ and $\beta$ are two gauge parameters which can largely be chosen arbitrarily \cite{Gies:2015tca}. The ghost action accompanying \eqref{eq.gaugefix} is found in the standard way and reads
\be\label{eq.ghostaction}
S^{\rm ghost}[h,\bar{C},C;\gb] = - \sqrt{2} \int d^dx \sqrt{\gb} \, \Cb_\mu \, \cM[g,\gb]^\mu{}_\nu \, C^\nu \, ,
\ee
with the Faddeev-Popov operator being
\be\label{eq.FPop}
\cM[g,\gb]^\mu{}_\nu = \gb^{\mu\rho} \Db^\sigma \left(g_{\rho\nu} D_\sigma + g_{\sigma\nu} D_\rho \right) - 2 \beta \gb^{\rho\sigma} \Db^\mu g_{\sigma\nu} D_\rho \, .
\ee
Landau-type gauge fixings correspond to the limit $\alpha \rightarrow 0$ (with $\beta = 1/d$ being a preferred choice implementing the geometric gauge). The harmonic gauge adopted in the present computation sets $\alpha = 1$ (Feynman-type gauge) and $\beta = 1/2$. This has the technical advantage that all derivatives appear in the form of the background Laplace operator $\Delta = - \gb^{\mu\nu} \Db_\mu \Db_\nu$.
For the background computation ahead, it suffices to know the ghost-action to second order in the fluctuation fields. Adopting harmonic gauge and evaluating $\cM[g,\gb]^\mu{}_\nu|_{g = \gb}$ shows that the relevant contributions are captured by
\be\label{eq.ghost2}
S^{\rm ghost}[h=0,\bar{C},C;\gb] = \sqrt{2} \int d^dx \sqrt{\gb} \, \Cb_\mu \, \left[ \, \delta^\mu_\nu \Delta - \Rb^\mu{}_\nu \, \right] \, C^\nu \, .
\ee
Here, we used the commutator of two background-covariant derivatives acting on vectors in order to combine the last two terms in \eqref{eq.FPop} into the background Ricci tensor $\Rb^\mu{}_\nu$. The approximation for the effective average action then combines the $\bar{\Gamma}_k[g]$ given in \eqref{eq.einsteinhilbert} with the gauge-fixing term \eqref{eq.gaugefix} and the ghost action \eqref{eq.ghostaction}
\be\label{eq.ans2}
\Gamma_k[h,\bar{C},C;\gb] \simeq \Gamma_k^{\rm EH}[g] + \Gamma_k^{\rm gf}[h;\gb] + S^{\rm ghost}[h,C,\bar{C};\gb] \, .
\ee
At this stage a comment on the projection prescription is in order. Substituting \eqref{eq.ans2} into the left-hand side of the flow equation and setting $g=\gb$ afterwards indicates that the scale-dependence of $G_k$ and $\Lambda_k$ can be read off from the coefficients multiplying
\be\label{eq.projectionspace}
\cO_0 = \int d^dx \sqrt{\gb} \, , \qquad \cO_1 = \int d^dx \sqrt{\gb} \Rb \, .
\ee
All other interaction monomials spanning the gravitational theory space do not contribute to the computation. This entails the following profound consequence. Eq.\ \eqref{eq.projectionspace} corresponds to a derivative expansion truncated at first order in the spacetime curvature. Hence all terms containing two or more curvature tensors are outside the subspace spanned by our approximation. Moreover, \eqref{eq.projectionspace} does not contain derivatives of a curvature tensor, so there is no need to track such terms in the present computation. These considerations allow us to formulate projection rules, stating that
\be\label{eq.projectrule}
\Db_\mu \Rb_{\alpha\beta\gamma\delta} \simeq 0 \, , \qquad O(\Rb^2) \simeq 0 \, .
\ee
We stress that these rules should not be read as restrictions on $\gb_{\mu\nu}$. They merely identify structures which do not contribute to the computation. As a corollary of these relations, we conclude that we can freely commute covariant derivatives and curvature tensors, since the commutators just produce terms outside of the projection spanned by \eqref{eq.projectionspace}.
The first step in evaluating the trace appearing within the FRGE \eqref{frge.grav2} consists in expanding \eqref{eq.ans2} to second order in the fluctuation fields. In the ghost-sector the result is already given in \eqref{eq.ghost2}. For the gravitational fluctuations, we expand
\be
\Gamma_k[\gb+h,\gb] = \Gamma_k[\gb,\gb] + O(h) + \Gamma^{\rm quad}_k[h;\gb] + O(h^3) \, .
\ee
The relevant coefficient $\Gamma^{\rm quad}_k[h;\gb]$ is readily found using computer algebra packages like xAct \cite{Brizuela:2008ra} and has the form
\be\label{eq.gammaquad2}
\Gamma^{\rm quad}_k = \frac{1}{32 \pi G_k} \int d^dx \sqrt{\gb} \; \frac{1}{2} \, h_{\mu\nu} \left[ K^{\mu\nu}{}_{\alpha\beta} \left( \Delta - 2 \Lambda_k \right) + V^{\mu\nu}{}_{\alpha\beta} \right] \, h^{\alpha\beta} \, .
\ee
Here the ``kinetic'' and ``potential'' parts have the explicit form
\be\label{eq.flucttensors}
\begin{split}
K^{\mu\nu}{}_{\alpha\beta} = & \, \frac{1}{2} \left( \delta^\mu_\alpha \delta^\nu_\beta + \delta^\mu_\beta \delta^\nu_\alpha - \gb^{\mu\nu} \gb_{\alpha\beta} \right) \, , \\
V^{\mu\nu}{}_{\alpha\beta} = & \, \Rb \, K^{\mu\nu}{}_{\alpha\beta} + \left( \gb^{\mu\nu} \Rb_{\alpha\beta} + \Rb^{\mu\nu} \gb_{\alpha\beta} \right) - 2 \delta^{(\mu}_{(\alpha} \Rb^{\nu)}_{\beta)} - 2 \Rb^{(\mu}{}_{(\alpha}{}^{\nu)}{}_{\beta)} \, . \\
\end{split}
\ee
The potential $V$ collects all terms containing the spacetime curvature and is of first order in a curvature expansion.
In the next step, we would like to diagonalize the kinetic terms in the quadratic form \eqref{eq.gammaquad2}. This can be achieved by decomposing $h_{\mu\nu}$ into component fields, resorting to the transverse-traceless decomposition \cite{York:1973ia,Lauscher:2001ya}. In the present case, it suffices to split the fluctuations into their trace- and traceless part
\be\label{eq.fielddec}
h_{\mu\nu} = \hh_{\mu\nu} + \frac{1}{d} \gb_{\mu\nu} h \, , \qquad \gb^{\mu\nu} \hh_{\mu\nu} = 0 \, .
\ee
Substituting this decomposition into \eqref{eq.gammaquad2} then yields
\be\label{eq.gammaquad}
\begin{split}
\Gamma^{\rm quad}_k[h;\gb] = \frac{1}{32 \pi G_k} & \int d^dx \sqrt{\gb} \bigg[
\frac{1}{2} \hh_{\mu\nu} \left[ \Delta - 2 \Lambda_k + \Rb \right] \hh^{\mu\nu} \\
& - \left(\frac{d-2}{4d} \right) \, h \, \left[ \Delta - 2 \Lambda_k + \frac{d-4}{d} \Rb \right] \, h \\
& - \Rb_{\mu\nu} \hh^{\mu\alpha} \hh_{\alpha}{}^\nu - \Rb_{\mu\nu\alpha\beta} \hh^{\mu\alpha} \hh^{\nu\beta} + \frac{d-4}{d} \Rb_{\mu\nu} \, h \, \hh^{\mu\nu}
\bigg] \, .
\end{split}
\ee
At this point, we are ready to specify the explicit form of the regulator $\cR_k$. We dress up the Laplacians according to
\be
\Delta \mapsto \Delta + R_k \, ,
\ee
where $R_k(\Delta) = k^2 R^{(0)}(\Delta/k^2)$ is the dimensionful cutoff function and $R^{(0)}(z)$ the corresponding profile. In the nomenclature of the review \cite{Codello:2008vh} this corresponds to a cutoff of type I. This choice implements the initial idea of supplying the fluctuation field with a $k$-dependent mass term. The resulting $\cR_k$ is then diagonal in field space with its matrix elements given by
\be\label{eq.regeh}
\cR_k^{\hh\hh} = \frac{1}{32 \pi G_k} R_k \, \unit_{2T} \, , \quad
\cR_k^{hh} = - \frac{1}{32 \pi G_k} \left( \frac{d-2}{2d} \right) R_k \, , \quad
\cR_k^{\bar{C}C} = \sqrt{2} R_k \, \unit_1 \, .
\ee
Here
\be\label{eq.units}
\unit_{2T}^{\mu\nu}{}_{\alpha\beta} = \frac{1}{2}\left(\delta^\mu_\alpha \delta^\nu_\beta + \delta^\mu_\beta \delta^\nu_\alpha
\right) - \frac{1}{d} \gb^{\mu\nu} \gb_{\alpha\beta} \, , \qquad \unit_1^\mu{}_\nu = \delta^\mu_\nu \, ,
\ee
are the units on the space of symmetric traceless two-tensors (2T) and vectors (1), respectively.
We now proceed by constructing the inverse of the regularized Hessian. For the gravitational degrees of freedom, we encounter the two-by-two matrix
\be\label{eq.gamma2reg}
\left[\Gamma^{(2)}_k + \cR_k \right]^{ij} = \left[
\begin{array}{cc}
K_{2T}(\Delta) \, \unit_{2T} + V_{2T} & V_\times \\
V_\times^\dagger & K_{0}(\Delta) \, \unit_{0} + V_{0}
\end{array}
\right] \, .
\ee
Here $i,j = \{\hh,h\}$ labels the fields and we suppress all spacetime indices for the sake of readability. The explicit forms of the kinetic functions $K$ and the potentials $V$ follow from Eqs.\ \eqref{eq.gammaquad} and \eqref{eq.ghost2} and are given by
\be\label{eq.kinetic}
\begin{split}
K_{2T}(\Delta) = & \, \frac{1}{32 \pi G_k} \left(\Delta + R_k - 2 \Lambda_k\right) \, , \\ K_{0}(\Delta) = & \, -\frac{1}{32 \pi G_k} \left(\frac{d-2}{2d} \right) \, \left(\Delta + R_k - 2 \Lambda_k \right) \, , \\
K_{1}(\Delta) = & \,\sqrt{2} \, \left( \Delta + R_k \right) \, ,
\end{split}
\ee
and
\be\label{eq.potentials}
\begin{split}
& V_{2T}^{\mu\nu}{}_{\alpha\beta} = \frac{1}{32 \pi G_k} \left(\Rb \, \unit_{2T}^{\mu\nu}{}_{\alpha\beta} - 2 \Rb^{(\mu}_{(\alpha} \delta^{\nu)}_{\beta)} - 2 \Rb_{(\alpha}{}^{(\mu}{}_{\beta)}{}^{\nu)} \right) \, , \\
& V_0 = - \frac{1}{32 \pi G_k} \, \left(\frac{d-2}{2d} \right) \left(\frac{d-4}{d}\right) \, \Rb \, , \\
& V_\times {}_{\mu\nu} =
\frac{1}{32 \pi G_k} \left( \frac{d-4}{d} \right) \, \left( \Rb_{\mu\nu} - \frac{1}{d} \gb_{\mu\nu} \Rb \right) \, , \\
& V_1{}^\mu{}_\nu = - \sqrt{2} \, \Rb^\mu{}_\nu \, .
\end{split}
\ee
Constructing the inverse of \eqref{eq.gamma2reg} builds on the exact inversion formula for block matrices
\be\label{eq.matinv}
\left[
\begin{array}{cc}
A & B \\
C & D
\end{array}
\right]^{-1} =
\left[
\begin{array}{cc}
\left(A - B D^{-1} C \right)^{-1} & - A^{-1} B \left( D-CA^{-1} B \right)^{-1} \\
-D^{-1} C \left(A-B D^{-1} C \right)^{-1} & \left(D - C A^{-1} B \right)^{-1}
\end{array}
\right] \, .
\ee
Since the potentials \eqref{eq.potentials} contain at least one power of the spacetime curvature, each entry can be constructed as a power series in $V$. The projection prescription \eqref{eq.projectrule} then indicates that it is sufficient to retain the terms up to one power of $V$. This implies, in particular, that the off-diagonal terms $V_\times$ do not enter into the present computation since they start to contribute at second order in $V$ only. Taking into account that the regulator $\p_t \cR_k$ is diagonal in field space, it is sufficient to consider the diagonal entries in \eqref{eq.matinv}. Explicitly, the corresponding inverses are given by
\be\label{eq.propagators}
\begin{split}
\left[ \Gamma_k^{(2)} + \cR_k \right]^{-1}_{\hh\hh} \simeq & \, \frac{1}{K_{2T}} - \frac{1}{K_{2T}} V_{2T} \frac{1}{K_{2T}} + O(V^2)
\, , \\
\left[ \Gamma_k^{(2)} + \cR_k \right]^{-1}_{hh} \simeq & \, \frac{1}{K_{0}} - \frac{1}{K_{0}} V_{0} \frac{1}{K_{0}} + O(V^2) \, .
\end{split}
\ee
Based on these preliminary considerations, we can now write down the projected flow equation
\be\label{eq.traces}
\begin{split}
\p_t \Gamma_k = & \, \frac{1}{2} {\rm Tr}_{2T} \left[ \frac{1}{K_{2T}} \p_t \cR_k^{\hh\hh} \right] - \frac{1}{2} {\rm Tr}_{2T} \left[\frac{1}{K_{2T}} V_{2T} \frac{1}{K_{2T}} \p_t \cR_k^{\hh\hh} \right] \\
& \, + \frac{1}{2} {\rm Tr}_{0} \left[ \frac{1}{K_{0}} \p_t \cR_k^{hh} \right] - \frac{1}{2} {\rm Tr}_{0} \left[ \frac{1}{K_{0}} V_{0} \frac{1}{K_{0}} \p_t \cR_k^{hh} \right] \\
& \, - {\rm Tr}_{1} \left[ \frac{1}{K_1} \p_t \cR_k^{\bar{C}C} \right] + {\rm Tr}_{1} \left[ \frac{1}{K_1} V_1 \frac{1}{K_1} \p_t \cR_k^{\bar{C}C} \right] \, .
\end{split}
\ee
Here the subscripts $s = \{ 2T, 0, 1\}$ indicate that the traces are taken over symmetric traceless two-tensors, scalars, and vectors, respectively.
Structurally, the traces \eqref{eq.traces} can be separated into traces without and with an operator insertion $V$. In order to evaluate the resulting expressions of the first type, we use the early-time expansion of the heat-kernel \cite{Vassilevich:2003xt}
\be\label{eq.heatearly}
{\rm Tr}_s\left[ e^{-s \Delta} \right] = \frac{1}{(4\pi s)^{d/2}} \, {\rm tr}(\unit_s) \, \int d^dx \sqrt{\gb} \left( 1 + \frac{1}{6} s \Rb \right) + O(\Rb^2) \, .
\ee
The trace tr$(\unit_s)$ counts the number of independent field components in each sector, i.e.,
\be
\begin{split}
{\rm tr}(\unit_0) = 1 \, , \quad
{\rm tr}(\unit_1) = d \, , \quad
{\rm tr}(\unit_{2T}) = \frac{1}{2}(d-1)(d+2) \, .
\end{split}
\ee
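The leading term of the expansion can be illustrated numerically. The following sketch (our own check, performed on a flat one-dimensional background, a circle of circumference $L$, where the curvature corrections are absent) compares the exact mode sum with the leading heat-kernel behavior:
\begin{verbatim}
import numpy as np

# On a circle the Laplacian eigenvalues are (2 pi n/L)^2 with n in Z, so
# Tr e^{-s Delta} -> L/(4 pi s)^(1/2) as s -> 0.
L, s = 2*np.pi, 1e-2
n = np.arange(-5000, 5001)
exact = np.exp(-s*(2*np.pi*n/L)**2).sum()
print(exact, L/np.sqrt(4*np.pi*s))   # both 17.7245...
\end{verbatim}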
The heat-kernel \eqref{eq.heatearly} can readily be extended to traces including functions of the Laplacian $W(\Delta)$. Formally introducing the (inverse) Laplace transform $\widetilde{W}(s)$ through $W(z) = \int_0^\infty ds \, \widetilde{W}(s) \, e^{-sz}$, we write
\be\label{eq.trw}
{\rm Tr}_s\left[W(\Delta)\right] = \int_0^\infty ds \, \widetilde{W}(s) \, {\rm Tr}_s\left[e^{-s\Delta}\right] \, .
\ee
Substituting the early-time expansion \eqref{eq.heatearly} then yields
\be\label{tr.Ws}
{\rm Tr}_s\left[W(\Delta)\right] = \frac{1}{(4\pi)^{d/2}} \, {\rm tr}(\unit_s) \, \int d^dx \sqrt{\gb} \, \left( Q_{d/2}[W] \, + \frac{1}{6} \, Q_{d/2-1}[W] \, \Rb \right) + O(\Rb^2) \, ,
\ee
where the $Q$-functionals are defined by
\be
\begin{split}
Q_n[W] \equiv \int_0^\infty ds \, s^{-n} \, \widetilde{W}(s) \, .
\end{split}
\ee
These functionals can be re-written in terms of the original function $W(z)$:
\be
\begin{split}
Q_n[W] = & \, \frac{1}{\Gamma(n)} \int_0^\infty dz \, z^{n-1} W(z) \, , \qquad n > 0 \, , \\
Q_0[W] = & W(0) \, .
\end{split}
\ee
For $n < 0$ one can always choose an integer $m$ such that $n+m > 0$. Integrating by parts, one then establishes that
\be
Q_n[W] = \frac{(-1)^m}{\Gamma(n+m)} \int_0^\infty dz \, z^{n+m-1} \, W^{(m)}(z) \, , \qquad n < 0, \quad n+m > 0 \, .
\ee
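These representations can be cross-checked for a simple test function. The following sketch (our own) uses $W(z) = e^{-az}$, whose inverse Laplace transform is $\widetilde{W}(s) = \delta(s-a)$, so that $Q_n[W] = a^{-n}$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# Q_n[W] = (1/Gamma(n)) int_0^infty z^(n-1) W(z) dz for n > 0, checked
# against the closed form a^(-n) for the test function W(z) = exp(-a z)
a, n = 2.0, 1.5
val, _ = quad(lambda z: z**(n - 1)*np.exp(-a*z), 0, np.inf)
print(val/gamma(n), a**(-n))   # both 0.35355...
\end{verbatim}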
At this point we note that the functions $W(z)$ appearing in \eqref{eq.traces} have the generic form
\be\label{eq.Wgeneric}
W(z) = \frac{G_k}{(z +R_k + w)^p} \, \p_t \left(\frac{1}{G_k} R_k \right) \, .
\ee
In this case, it is then convenient to trade the dimensionful $Q$-functionals for the dimensionless threshold functions
\be\label{def.threshold}
\begin{split}
\Phi^p_n(w) \equiv & \, \frac{1}{\Gamma(n)} \int_0^\infty dz \, z^{n-1} \frac{R^{(0)} - z R^{(0)^\prime}(z)}{(z + R^{(0)} + w)^p} \, , \\
\widetilde{\Phi}^p_n(w) \equiv & \, \frac{1}{\Gamma(n)} \int_0^\infty dz \, z^{n-1} \frac{R^{(0)}(z)}{(z + R^{(0)}(z) + w)^p} \, ,
\end{split}
\ee
where $R^{(0)}(p^2/k^2)$ is the dimensionless profile function associated with the regulator $R_k(p^2) = k^2 R^{(0)}(p^2/k^2)$. It is then readily verified that
\be\label{Q-master}
Q_n\left[ \frac{G_k}{(z +R_k + w)^p} \, \p_t \left(\frac{1}{G_k} R_k \right) \right] = k^{2(n-p+1)} \left( 2 \Phi^p_n(w/k^2) - \eta_N \widetilde{\Phi}^p_n(w/k^2) \right) \, ,
\ee
where the anomalous dimension $\eta_N$ has been introduced in \eqref{dimlessvars}. For the traces in the ghost sector, the $G_k$-dependence in \eqref{eq.Wgeneric} is absent, so that the terms proportional to $\eta_N$ do not appear in this sector.
Starting from \eqref{eq.traces} the traces without potential insertions are readily evaluated by combining Eq.\ \eqref{tr.Ws} with the result for the $Q$-functionals \eqref{Q-master}. The traces including the insertion of a potential can be evaluated along the same lines. Formally, such traces can be evaluated using the off-diagonal heat-kernel formulas provided in \cite{Benedetti:2010nr}. Significant simplifications arise, however, because the potentials \eqref{eq.potentials} do not contain any covariant derivatives and, owing to the projection prescription \eqref{eq.projectrule}, can be treated as covariantly constant. In this case the relevant contributions are given by the leading term in the early-time expansion \eqref{eq.heatearly} with ${\rm tr}\left[\unit_s\right] \rightarrow {\rm tr}\left[V_s\right]$. A brief computation establishes that
\be\label{trint}
\begin{split}
{\rm tr}_{1}\left[V_{1}\right] = & - \sqrt{2} \Rb \, , \\
{\rm tr}_{2T}\left[V_{2T}\right] = & \frac{1}{32 \pi G_k} \frac{(d+2)(d^2-3d+4)}{2d} \, \Rb \, .
\end{split}
\ee
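These traces can be checked numerically. The following sketch (our own verification) evaluates ${\rm tr}_{2T}[V_{2T}]$, stripped of the $(32\pi G_k)^{-1}$ prefactor, in $d=4$ on a maximally symmetric background, where $\Rb_{\mu\alpha\nu\beta} = \tfrac{\Rb}{d(d-1)}(\gb_{\mu\nu}\gb_{\alpha\beta} - \gb_{\mu\beta}\gb_{\alpha\nu})$ and $\Rb_{\mu\nu} = \tfrac{\Rb}{d}\gb_{\mu\nu}$:
\begin{verbatim}
import numpy as np

d, Rbar = 4, 1.0
g = np.eye(d)
c = Rbar/(d*(d - 1))
Riem = c*(np.einsum('mn,ab->manb', g, g)
          - np.einsum('mb,an->manb', g, g))        # Rbar_{m a n b}
Ric = (Rbar/d)*g

# projector onto symmetric traceless two-tensors, cf. the unit 1_2T
P = 0.5*(np.einsum('ma,nb->mnab', g, g) + np.einsum('mb,na->mnab', g, g)) \
    - (1.0/d)*np.einsum('mn,ab->mnab', g, g)

Ifull = np.einsum('ma,nb->mnab', g, g)
raw = Rbar*Ifull - 2*np.einsum('ma,nb->mnab', Ric, g) \
      - 2*np.einsum('manb->mnab', Riem)            # cf. eq. (eq.potentials)
V = np.einsum('mnpq,pqrs,rsab->mnab', P, raw, P)   # symmetrized, trace-free
print(np.einsum('mnmn', V), (d + 2)*(d**2 - 3*d + 4)/(2*d)*Rbar)  # both 6.0
\end{verbatim}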
The remaining terms in \eqref{eq.traces} are then found by pulling the contributions \eqref{trint} out of the operator trace and evaluating the latter by again combining Eqs.\ \eqref{tr.Ws} and \eqref{Q-master}. In this way one obtains the explicit form of the right-hand side of \eqref{eq.traces}. Reading off the coefficients multiplying the interaction monomials \eqref{eq.projectionspace} gives the equations governing the scale-dependence of the dimensionful couplings $G_k$ and $\Lambda_k$.
In order to study the renormalization group fixed points of the system, it is then natural to convert the dimensionful couplings to their dimensionless counterparts \eqref{dimlessvars}. The scale-dependence of $g_k$ and $\lambda_k$ is encoded in the beta functions
\be\label{eq.betadef}
\p_t g_k = \beta_g(g_k,\lambda_k) \, , \qquad \p_t \lambda_k = \beta_\lambda(g_k,\lambda_k) \, .
\ee
The explicit computation yields \cite{Reuter:1996cp}
\be\label{eq.betafinal}
\begin{split}
\beta_g(g,\lambda) = & \left(d-2+ \eta_N \right) \, g \, , \\
\beta_\lambda(g,\lambda) = & - (2-\eta_N) \lambda + \frac{g}{2(4\pi)^{d/2-1}}\Big(
2d(d+1) \Phi^1_{d/2}(-2\lambda) \\ & \qquad \qquad - 8d \Phi^1_{d/2}(0) - d(d+1) \eta_N \widetilde{\Phi}^1_{d/2}(-2\lambda)
\Big) \, .
\end{split}
\ee
The anomalous dimension $\eta_N$ takes the form
\be
\eta_N(g,\lambda) = \frac{g B_1(\lambda)}{1-g B_2(\lambda)} \, ,
\ee
with
\be
\begin{split}
B_1(\lambda) = & \frac{1}{3} (4\pi)^{1-d/2} \Big(
d(d+1) \Phi^1_{d/2-1}(-2\lambda) -6d(d-1) \Phi^2_{d/2}(-2\lambda)
\\ & \qquad \qquad \qquad
- 4d \Phi^1_{d/2-1}(0) -24 \Phi^2_{d/2}(0) \Big) \, , \\
B_2(\lambda) = & - \frac{1}{6} (4\pi)^{1-d/2} \Big(
d(d+1) \widetilde{\Phi}^1_{d/2-1}(-2\lambda) - 6d(d-1) \widetilde{\Phi}^2_{d/2}(-2\lambda)
\Big) \, .
\end{split}
\ee
At this point we have completed the explicit derivation of the beta functions \eqref{eq.betafinal} governing the scale-dependence of $\{g_k,\lambda_k\}$. The result agrees with the initial derivation \cite{Reuter:1996cp}, employing a maximally symmetric background. The present derivation shows, however, that \emph{this result is actually background independent}. Apart from general properties related to the existence of the heat-kernel \eqref{eq.heatearly}, we never specified an explicit background $\gb_{\mu\nu}$. Assuming its mere existence is sufficient to arrive at the final result.
\subsection{Fixed points, RG trajectories, and phase diagram}
\label{eh.phase}
The beta functions \eqref{eq.betadef} encode the dependence of Newton's coupling and the cosmological constant on the coarse-graining scale $k$. These have been derived for a generic regulator $R_k$. In order to investigate the resulting fixed point structure and phase diagram, we specify the regulator to be of Litim-type \eqref{Rlitim}. In this case, the integrals appearing in the threshold functions \eqref{def.threshold} can be evaluated analytically, yielding
\be\label{eq.threshlitim}
\Phi^p_n(w)^{\rm Litim} = \frac{1}{\Gamma(n+1)} \frac{1}{(1+w)^p} \, , \quad
\widetilde{\Phi}^p_n(w)^{\rm Litim} = \frac{1}{\Gamma(n+2)} \frac{1}{(1+w)^p} \, .
\ee
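The closed-form expressions are easily verified numerically from the definition \eqref{def.threshold}; a minimal sketch (our own check):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# For the Litim profile R0(z) = (1-z) theta(1-z) the integrands of the
# threshold functions collapse on 0 < z < 1: R0 - z R0' = 1 and
# z + R0(z) + w = 1 + w, with no contribution from z > 1.
def Phi(n, p, w):
    return quad(lambda z: z**(n - 1), 0, 1)[0]/(gamma(n)*(1 + w)**p)

def Phit(n, p, w):
    return quad(lambda z: z**(n - 1)*(1 - z), 0, 1)[0]/(gamma(n)*(1 + w)**p)

n, p, w = 2, 1, -0.3
print(Phi(n, p, w),  1/(gamma(n + 1)*(1 + w)**p))   # both 0.71428...
print(Phit(n, p, w), 1/(gamma(n + 2)*(1 + w)**p))   # both 0.23809...
\end{verbatim}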
Upon substituting these expressions, the flow of $\{g_k,\lambda_k\}$ is governed by the coupled, non-linear, autonomous, first-order differential equations \eqref{eq.betadef} with
\be\label{beta-litim}
\begin{split}
\beta_g = & \left(2 + \eta_N \right) \, g \, , \\
\beta_\lambda = & - (2-\eta_N) \lambda + \frac{g}{8\pi}\Big(
\frac{20}{1-2\lambda} - 16 - \frac{10}{3} \, \eta_N \, \frac{1}{1-2\lambda}
\Big) \, .
\end{split}
\ee
and
\be
\eta_N = \frac{g \left(\frac{5}{1-2 \lambda } -\frac{9}{(1-2 \lambda )^2} -7\right)}{3 \pi \left(1 + \frac{g}{12 \pi } \left(\frac{5}{1-2 \lambda }-\frac{6}{(1-2 \lambda )^2}\right)\right)} \, .
\ee
Here we have specified $d=4$ for explicitness.
We then determine the fixed points of this system. Since the truncation retains a finite number of interaction monomials $\cO_i$ only, this search turns into the algebraic problem of finding the roots of the system $\{\beta_\lambda = 0, \beta_g = 0\}$, cf.\ Table \ref{tab.diffeqs}. Inspecting \eqref{beta-litim}, one finds two fixed points
\be\label{fp.pos}
\begin{array}{ll}
\text{GFP:} & \quad \{g_* = 0, \; \lambda_* = 0\} \, , \\[1.3ex]
\text{NGFP:} & \quad \{g_* = 0.707, \; \lambda_* = 0.193 \} \, .
\end{array}
\ee
These correspond to a free and interacting theory, respectively. The NGFP is \emph{the projection} of the Reuter fixed point onto the subspace spanned by the ansatz \eqref{eq.einsteinhilbert}.
The stability properties of the RG flow in the vicinity of these fixed points are readily obtained by evaluating the stability matrix \eqref{def.B} for the beta functions \eqref{beta-litim}. This yields
\be\label{fp.stab}
\begin{array}{ll}
\text{GFP:} & \quad \{\theta_1 = 2, \; \theta_2 = -2\} \, , \\[1.3ex]
\text{NGFP:} & \quad \{\theta_{1,2} = 1.48 \pm 3.04 i \} \, .
\end{array}
\ee
Thus the GFP constitutes a saddle point with one UV-attractive and one UV-repulsive eigendirection. Analyzing the corresponding eigenvectors shows that RG trajectories with a non-vanishing Newton's coupling are repelled by the GFP as $k\rightarrow\infty$. Hence this fixed point cannot act as the UV-completion of gravity. In contrast, the NGFP is UV-attractive for both $g_k$ and $\lambda_k$. The complex stability coefficients indicate that the RG flow spirals into the fixed point as $k \rightarrow \infty$. Thus this fixed point acts as UV-completion for the RG trajectories entering its vicinity.
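Both the fixed-point position \eqref{fp.pos} and the stability coefficients \eqref{fp.stab} can be reproduced numerically from the beta functions \eqref{beta-litim}. A minimal sketch (our own, assuming the standard convention that the stability coefficients are minus the eigenvalues of the stability matrix):
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

def betas(x):
    g, lam = x
    u = 1 - 2*lam
    eta = g*(5/u - 9/u**2 - 7)/(3*np.pi) \
          / (1 + g*(5/u - 6/u**2)/(12*np.pi))
    return np.array([(2 + eta)*g,
                     -(2 - eta)*lam
                     + g/(8*np.pi)*((20 - (10/3)*eta)/u - 16)])

fp = fsolve(betas, [0.5, 0.2])
print(fp)                                  # [0.707..., 0.193...]

eps = 1e-7                                 # numerical stability matrix
Bmat = np.column_stack([(betas(fp + eps*e) - betas(fp))/eps
                        for e in np.eye(2)])
print(-np.linalg.eigvals(Bmat))            # 1.48 +/- 3.04 i
\end{verbatim}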
\begin{table}[t!]\centering
\begin{minipage}[c]{0.48\linewidth}
\centering
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{cccccc}
\hspace{8mm} & \hspace{2mm} $d$ \hspace{2mm} & \hspace{2mm} $g_*$ \hspace{2mm} & \hspace{2mm} $\lambda_*$ \hspace{2mm} & \hspace{2mm} $\theta_1$ \hspace{2mm} & \hspace{2mm} $\theta_2$ \hspace{2mm} \\ \hline \hline
GFP & $d$ & $0$ & $0$ & $2-d$ & $2$ \\
\hline
NGFP & $2+\epsilon$ & $\frac{3}{38} \epsilon$ & $-\frac{3}{38} \epsilon$ & $\epsilon$ & $2 + \frac{1}{19} \epsilon$ \\
NGFP & $3$ & $0.20$ & $0.06$ & \multicolumn{2}{c}{$1.15 \pm 0.83i$} \\
NGFP & $4$ & $0.71$ & $0.19$ & \multicolumn{2}{c}{$1.48 \pm 3.04i$} \\
NGFP & $5$ & $2.85$ & $0.24$ & \multicolumn{2}{c}{$2.69 \pm 5.15i$} \\ \hline \hline \vspace{2mm}
\end{tabular}
\end{minipage}
\begin{minipage}[c]{0.48\linewidth}
\centering
\includegraphics[width=52mm]{gstard}
\end{minipage}
\caption{\label{tab.NGFPd} Characteristics of the family of NGFPs in various dimensions $d$ \cite{Souma:1999at,Reuter:2001ag}. The table on the left gives the position and stability coefficients of the fixed points for selected values of $d$. The diagram to the right displays that the family emerges from the Gaussian fixed point in $d=2+\epsilon$ and continuously connects to the Reuter fixed point in $d=4$. Whether there is an upper critical dimension where the family of fixed points ceases to exist is currently an open question. }
\end{table}
Treating the dimension $d$ as a continuous parameter, one can trace the properties of the NGFP under an analytic continuation of the spacetime dimension. The results are summarized in Table \ref{tab.NGFPd}. The table in the left panel gives the position and stability coefficients of the NGFP for selected values of $d$ while the diagram in the right panel shows the position $g_*(d)$. The latter illustrates that the family of NGFPs emerges from the GFP in $d=2+\epsilon$ dimensions. It can then be analytically continued up to $d=4$. Thus the Reuter fixed point is the analytic continuation of the NGFP seen in the $\epsilon$-expansion around the free theory at the lower critical dimension $d=2$ \cite{Souma:1999at,Reuter:2001ag}. The system \eqref{beta-litim} suggests that the NGFP continues to exist in all dimensions $d > 4$ as well \cite{Litim:2003vp}. At $d \gtrsim 5$, however, the existence of the fixed point becomes a regulator-dependent statement \cite{Reuter:2001ag}. Hence, it is currently unclear whether there is an upper critical dimension where the family of NGFPs ceases to exist.
The system \eqref{beta-litim} is readily integrated numerically. The resulting phase diagram is governed by the interplay of the fixed points \eqref{fp.pos} and shown in Fig.\ \ref{fig:phasespace}.
\begin{figure}[t!]
\centering
\includegraphics[width=.9\textwidth]{phasespace}
\caption{\label{fig:phasespace} Phase diagram constructed from integrating the beta functions \eqref{eq.betadef} in $d=4$. All arrows point towards lower coarse-graining scale $k$. The GFP and NGFP are marked by the black dots while the magenta line displays the locus where the anomalous dimension $\eta_N$ diverges. The NGFP acts as a UV-attractor capturing all trajectories in its vicinity. Lowering the coarse-graining scale, the flow undergoes a crossover towards the GFP. The separatrix connecting the two fixed points is highlighted in blue. This solution leads to a vanishing cosmological constant $\Lambda_0 = 0$. Trajectories to the left of this line, exemplified by the orange trajectory, have been classified as Type Ia and are characterized by $\Lambda_0 < 0$. RG trajectories to its right constitute the Type IIIa solutions (represented by the green trajectory). They terminate at a finite value of $k$ and lead to positive values $\Lambda_k > 0$. (Initially constructed in \cite{Reuter:2001ag}).}
\end{figure}
Here the magenta line indicates the position of a singular locus where $\eta_N$ diverges. The physically relevant part of the phase diagram consists of the RG trajectories which emanate from the NGFP in the UV and cross over to the GFP as $k$ decreases. A special role is thereby played by the separatrix (blue line) connecting the two fixed points. This trajectory leads to a vanishing cosmological constant $\lim_{k\rightarrow 0} \Lambda_k = 0$. The trajectories to the left of this line are classified as Type Ia (orange line). Their characteristic feature is a negative cosmological constant, $\lim_{k\rightarrow 0} \Lambda_k < 0$. The trajectories to the right of the separatrix (green line) have been labeled Type IIIa. They terminate at the singular locus at a finite value of $k$. In the vicinity of the GFP they exhibit a regime where $\Lambda_k$ is constant and positive. It is expected that nature is described by an RG trajectory within this class \cite{Reuter:2004nx}. This trajectory is special in the sense that it almost hits the GFP; only at the very last moment does it take a turn flowing away from the fixed point. In this way the trajectory accommodates the tiny value of the cosmological constant observed in cosmology. Thus, the Einstein-Hilbert truncation does not predict the value of the cosmological constant: it is a function of the free parameters labeling the RG trajectories leaving the NGFP. The cosmological constant then has the role of an experimental input which is used to identify the RG trajectory realized in nature.
\begin{figure}[t!]
\centering
\includegraphics[width=.45\textwidth]{Gkflow}
\includegraphics[width=.45\textwidth]{Lambdakflow}
\caption{\label{fig:coarsegrainingdimful} Dependence of the dimensionful Newton's coupling (left panel) and cosmological constant (right panel) on the coarse-graining scale along a typical RG trajectory of Type IIIa. The flow interpolates between the classical regime ($k \ll 1$) where $G_k$ and $\Lambda_k$ are constant and the fixed point regime ($k \gg 1$) where $G_k \propto k^{-2}$ and $\Lambda_k \propto k^2$. By definition, the cross-over between these scaling regimes occurs at the Planck scale $M_{\rm Pl} \equiv G_0^{-1/2}$ which is generated dynamically when flowing away from the NGFP. Notably, $\Lambda_k$ exhibits an intermediate scaling regime where $\Lambda_k \propto k^4$. All quantities are measured in units of $M_{\rm Pl}$. (Adaptation from \cite{Gubitosi:2018gsl}).}
\end{figure}
At this stage, it is instructive to pick a generic RG trajectory of Type IIIa and illustrate the $k$-dependence of the dimensionful couplings. The resulting flow of $G_k$ and $\Lambda_k$ is exemplified in Fig.\ \ref{fig:coarsegrainingdimful} where all dimensionful quantities are given in units of the Planck scale $M_{\rm Pl} = G_0^{-1/2}$. For $k > 1$ the scale-dependence is governed by the NGFP while for $k < 1$ the flow is controlled by the GFP. As a result, the flow of the couplings interpolates between the scaling regimes
\be\label{fp.crossover}
\begin{array}{llll}
\text{NGFP:} & \quad G_k \simeq g_* k^{-2} \, , & \quad \Lambda_k \simeq \lambda_* k^2 & \quad k > 1 \, , \\[1.2ex]
\text{GFP:} & \quad G_k \simeq G_0 \, , & \quad \Lambda_k \simeq \Lambda_0 & \quad k < 1 \, .
\end{array}
\ee
The crossover between the two regimes occurs at the Planck scale. This scale is generated dynamically when moving away from the NGFP. Classical general relativity (in the sense of a low-energy effective field theory) is then recovered in the vicinity of the GFP.
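A trajectory of this type can be generated by integrating the flow in the RG time $t = \ln k$. A minimal sketch, reusing the function \texttt{beta} from the listing above (the perturbation seeding the trajectory and the termination criterion are illustrative choices):
\begin{verbatim}
from scipy.integrate import solve_ivp

def stop(t, x):          # halt before the eta_N singularity at
    return 0.49 - x[1]   # lambda = 1/2 (Type IIIa termination)
stop.terminal = True

sol = solve_ivp(lambda t, x: beta(x), [25.0, -25.0],
                [g_star + 1e-4, l_star + 1e-4],
                events=stop, rtol=1e-10, atol=1e-12)

k = np.exp(sol.t)
G_k = sol.y[0] / k**2         # dimensionful Newton coupling
Lambda_k = sol.y[1] * k**2    # dimensionful cosmological constant
\end{verbatim}
Plotting $G_k$ and $\Lambda_k$ against $k$ (after rescaling to units of $M_{\rm Pl}$) reproduces the qualitative crossover behavior shown in Fig.\ \ref{fig:coarsegrainingdimful}.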
We conclude by stressing that the exact fixed point action $\Gamma_*$ associated with the Reuter fixed point does (most likely) not coincide with the Einstein-Hilbert action. While this may be suggested by the analysis of this section, one has to account for the fact that we have been working within a projection of the full theory space to this two-dimensional subspace. Additional contributions to $\Gamma_*$, as, e.g., higher-derivative terms, are not visible in this analysis.
\subsection{Further reading}
\label{sec.further-reading}
The Einstein-Hilbert truncation discussed in this section constitutes the starting point for understanding the theory space of gravity, its RG fixed points, and their mutual relations. By now, this exploration has made significant progress in moving beyond this basic example. At the level of the background approximation, $f(R)$-type projections have been studied in polynomial approximations to very high order \cite{Codello:2007bd,Machado:2007ea,Falls:2013bv,Falls:2017lst,Falls:2018ylp}, and it has been shown that the NGFP also persists once the two-loop counterterm identified by Goroff and Sagnotti \cite{Goroff:1985sz,Goroff:1985th} is included in the projection \cite{Gies:2016con}. Within the fluctuation approach there has been significant progress on understanding the momentum structure of the graviton propagator \cite{Christiansen:2014raa,Bonanno:2021squ} and resolving the momentum-dependence of three- and four-point vertices \cite{Christiansen:2015rva,Denz:2016qks}. In parallel, a program geared towards developing asymptotically safe amplitudes has been initiated in \cite{Draper:2020bop}. Covering these developments in detail is beyond the scope of this introductory chapter and the interested reader may consult the more advanced chapters of this volume for further information.
\section{Concluding comments}
\label{sec.conclusion}
The Wetterich equation \eqref{eq.Wetterich} constitutes an essential tool in developing the gravitational asymptotic safety program. Starting from its adaptation to gravity \cite{Reuter:1996cp}, it has provided substantial evidence for the existence of a viable interacting renormalization group fixed point -- the Reuter fixed point -- which could provide a consistent and predictive high-energy completion of the gravitational interactions.
The present chapter focused on the case where the gravitational degrees of freedom are carried by the metric field. The applicability of the functional renormalization group and in particular the Wetterich equation is not limited to this setting though. It has readily been extended to other sets of fields which, at the classical level, encode the same gravitational dynamics as general relativity. Notably, this includes the case where the gravitational degrees of freedom are encoded in the vielbein (``tetrad only''-formulation) \cite{Harst:2012ni,Dona:2012am}, the Palatini formalism \cite{Harst:2014vca,Harst:2015eha,Pagani:2015ema,Gies:2022ikv}, the Arnowitt-Deser-Misner decomposition \cite{Manrique:2011jc,Rechenberger:2012dt,Biemans:2016rvp,Biemans:2017zca,Houthoff:2017oam}, and unimodular gravity \cite{Eichhorn:2013xr,Eichhorn:2015bna,Percacci:2017fsy,deBrito:2020xhy}. While the exploration of the corresponding theory spaces is far less developed than the one for the metric theory, there are indications that these settings also possess interacting renormalization group fixed points suitable for rendering the construction asymptotically safe. In the case of unimodular gravity, there are arguments that the theory is in the same universality class as the metric formulation \cite{deBrito:2021pmw}. Whether the other fixed points are in the universality class of the Reuter fixed point is an open question though.
Notably, there has also been progress aiming at the implementation of the renormalization group on discrete geometries. In the context of the Causal Dynamical Triangulation program \cite{Ambjorn:2012jv,Loll:2019rdj}, renormalization group flows have been constructed in \cite{Ambjorn:2014gsa}. The underlying idea is to pick an observable whose value is held constant when varying the parameters of the Monte Carlo simulation. This led to the surprising conclusion that the phase-transition line expected to provide the high-energy completion of the theory actually appears to be approached in the infrared.
So far, our discussion has focused on gravitational degrees of freedom only. It is rather straightforward to extend this construction by including additional matter fields as well as all the building blocks of the standard model of particle physics \cite{Eichhorn:2018yfc,Eichhorn:2022gku}. Many of the gravity-matter systems investigated to date exhibit interacting renormalization group fixed points whose properties are very similar to the ones found for the Reuter fixed point. Since the beta functions encoding the fixed point structure of these systems depend on the number of matter fields in a continuous way, it is likely that the Reuter fixed point is part of a continuous web of interacting fixed points. Since it is unlikely that these encode the same universality class, it is suggestive to refer to these as deformed Reuter fixed points, highlighting that the systems actually realize different (albeit related) universal behaviors. A detailed summary of the state-of-the-art in investigating asymptotically safe gravity-matter systems is beyond the scope of this elementary introduction and we refer to the recent reviews \cite{Eichhorn:2018yfc,Eichhorn:2022gku} as well as other chapters of this book. In short, it is conceivable that the asymptotic safety mechanism may lead to a unified theory incorporating the standard model of particle physics and gravity within the framework of a relativistic quantum field theory. This exciting perspective certainly warrants further investigation.
\section*{Acknowledgements}
My understanding of the functional renormalization group and its applications in the context of gravity has benefited enormously from countless discussions with many colleagues. It is therefore my pleasure to thank
J.\ Ambj{\o}rn,
D.\ Becker,
W.\ Beenakker,
A.\ Bonanno,
L.\ Bosma,
T.\ Budd,
L.\ Buoninfante,
J.\ Donoghue,
T.\ Draper,
A.\ Eichhorn,
R.\ Ferrero,
G.\ Gubitosi,
R.\ Kleiss,
A.\ Koshelev,
R.\ Loll,
M.\ Niedermaier,
R.\ Ooijer,
C.\ Pagani,
J.\ M.\ Pawlowski,
R.\ Percacci,
A.\ D.\ Pereira,
S.\ Pirlo,
A.\ Platania,
M.\ Reichert,
M.\ Schiffer,
O.\ Zanusso,
and C.\ Wetterich for sharing their insights and views. In addition, I want to thank M.\ Becker and A.\ Ferreiro for their insightful comments on the manuscript and B.\ Knorr and C.\ Ripken for their close collaboration on many aspects presented in this review. Finally, I would like to thank M.\ Reuter for introducing me to this subject and his continual advice and support.
\section{INTRODUCTION}
Surgical robots for minimally invasive surgery (MIS)
enable surgeons to operate with greater flexibility and precision, thus reducing incision size, recovery time, and scarring.
Their widespread adoption into surgical specialties such as urology, gynecology, and general surgery has opened up new fields of interdisciplinary research.
Gesture segmentation and classification have been among these research areas, with both supervised
\cite{ahmidi2017dataset,lea2016temporal,lea2016segmental,dipietro2016recognizing, dipietro2019segmenting, funke2019using} and unsupervised learning \cite{noy2015unsupervised, fard2016soft,krishnan2017transition, jones2019zero, clopton2017temporal} approaches developed for gesture recognition. However, these approaches either rely on black-box deep learning models that are hard to verify and need extensive training data, or do not capture the human-interpretable contextual information of the gestures.
The JIGSAWS dataset \cite{gao2014jhu} with its surgical gesture labels has been the foundation of many advancements in surgical gesture recognition \cite{van2021gesture}, surgical process modeling \cite{ahmidi2017dataset}, skill assessment \cite{tao2012sparse, varadarajan2009data}, error detection \cite{yasar2020real,Li2022Runtime}, and autonomy \cite{ginesi2021overcoming}.
However, unlike annotations for surgical instrument segmentation, annotations for surgical workflow such as gestures need guidance from surgeons \cite{Kitaguchi2021Artificial}.
Labeling using descriptive gesture definitions is tedious and subjective, leaving uncertainty as to exactly when gestures start and end, and can introduce annotation errors that adversely impact machine learning models and analyses \cite{van2021gesture,hutchinson2021analysis}.
Recent studies using the JIGSAWS dataset have found errors in $\sim$2-10\% of the gesture labels \cite{van2020multi, hutchinson2021analysis}.
As emphasized in \cite{van2021gesture}, larger labeled datasets using a common surgical language are needed to support collaboration and comparative analysis.
Some recent works have focused on finer-grained surgical actions such as action triplets \cite{meli2021unsupervised, li2022sirnet, nwoye2022rendezvous} and motion primitives \cite{COMPASS} based on the interactions between robotic tools and objects in the surgical environment. \cite{COMPASS} presented a formal framework for modeling surgical tasks with a unified set of motion primitives that cause changes in surgical context captured from the physical environment.
These motion primitives were shown to be generalizable across different surgical tasks and can be used to combine data from different datasets.
\cite{COMPASS} suggests a relation between context and existing gesture labels, but does not define direct relations between the two.
Furthermore, despite limited availability of datasets that include kinematic data from surgical robots, datasets for instrument and object segmentation in MIS procedures are plentiful and have been the subject of imaging competitions \cite{allan20192017,allan20202018}. We propose methods that leverage the abundance of data with image annotations for surgical instruments and important surgical objects to address the challenges of manual labeling and relate surgical context to gestures.
Our goal is to develop an automated, independent, and explainable way of generating gesture transcripts based on video data that does not rely on expensive training data on gestures. Such a method would be easier to verify by humans/experts and can be used as the ground truth for evaluating the black-box gesture recognition models that directly detect gestures from kinematic data.
\begin{figure*}[ht!]
\centering
\begin{subfigure}{0.6\textwidth}
\centering
\includegraphics[trim = 0in 3.8in 7.1in 0in, clip, width=\textwidth]{Figures/hierarchy_context.pdf}
\caption{}
\label{fig:hierarchy}
\end{subfigure}
\hfill
\begin{subfigure}{0.38\textwidth}
\centering
\includegraphics[trim = 6.3in 3.8in 2.75in 0in, clip, width=\textwidth]{Figures/hierarchy_context.pdf}
\caption{}
\label{fig:context}
\end{subfigure}
\vspace{-0.5em}
\caption{\ref{fig:hierarchy} Surgical hierarchy and relation between gestures and context in a suturing task. \ref{fig:context} State variables and object encodings that comprise context for the JIGSAWS tasks (see Figure \ref{fig:pipeline}). In the Suturing and Needle Passing tasks, a needle is used to throw four sutures through the fabric and rings, respectively, while two knots are tied in the Knot Tying task.
}
\vspace{-1.5em}
\label{fig:hierarchyandcontext}
\end{figure*}
The main contributions of the paper are as follows:
\begin{itemize}
\item We present a method for the automated inference of surgical context based on detecting important surgical tool and object interactions using image segmentation.
\item We propose two methods for automated translation of context labels to gesture labels based on a knowledge-based finite state machine model and a data-driven machine learning model.
\item We use the JIGSAWS dataset as a case study to demonstrate that our proposed approach results in shorter labeling time using the segmentation masks.
\end{itemize}
\section{PRELIMINARIES}
\subsection{Surgical Process Modeling}
Surgical process modeling \cite{neumuth2011modeling}
defines how surgical procedures can be decomposed into steps, tasks, and gestures as shown in Figure \ref{fig:hierarchy}. Gestures are defined as actions with semantic meaning for a specific intent and involve particular tools and objects. Thus, they explicitly include the surgical context, capturing important states and interactions in the physical environment. The formal framework in \cite{COMPASS} extended this hierarchy to further include the finer-grained motion primitives (or verbs in action triplets~\cite{nwoye2022rendezvous, neumuth2006acquisition}) as the atomic units of surgical activity (e.g., grasp, push) that lead to changes in context, without explicitly including the semantics of physical context (e.g. needle through tissue).
\subsection{Surgical Context}
Surgical context is defined as a set of state variables describing the status of a task and interactions among the surgical instruments, objects, and anatomical structures in the physical environment \cite{yasar2019context, yasar2020real, COMPASS}. As shown in Figure \ref{fig:context}, the first four state variables represent objects held by or in contact with the surgical instruments and are the general state variables for all tasks.
The fifth state variable is task-specific and represents task progress; i.e., the needle's relation to the fabric or ring in the Suturing and Needle Passing tasks, or the knot's status in the Knot Tying task. Figure \ref{fig:context} shows the general and task-specific state variables with their possible values in the Suturing and Knot Tying tasks of the JIGSAWS dataset.
In Figure \ref{fig:context}, the example context of $ 00202 $ in the Suturing task means that the right grasper is holding the needle and the needle is in the fabric.
The COMPASS dataset \cite{COMPASS} has context labels for all three tasks in the JIGSAWS dataset based on consensus among three annotators.
However, it does not provide translations from context or motion primitives to gestures, which limits comparisons to existing works. Manual labeling was needed to create the context labels, which remains subjective and time consuming despite achieving near-perfect agreement with expert surgeons.
However, \cite{COMPASS} showed that high quality surgical workflow labels can be generated by examining state variables that comprise the context. With recent improvements in surgical scene segmentation, we show that context can be detected automatically from video data.
\subsection{Surgical Scene Segmentation}
\begin{figure*}[t!]
\centering
\includegraphics[trim = 0in 3.25in 0.5in 0in, clip, width=\textwidth]{Figures/ICRA_pipeline_9.pdf}
\vspace{-1.5em}
\caption{Pipeline for automatic context inference based on segmentation of video data and context to gesture translation.
}
\vspace{-1.5em}
\label{fig:pipeline}
\end{figure*}
To advance analysis on video data and provide insights on surgeon performance, the 2017 and 2018 EndoVis workshops at MICCAI introduced a challenge to perform robotic instrument and scene segmentation using images from a da Vinci Xi robot in porcine procedures \cite{allan20192017}. Various models have been proposed in the challenge, but segmenting all objects in a surgical scene has been challenging.
The DeepLab V3+ model \cite{chen2018encoder}
achieved the best overall performance in \cite{allan20202018} (see Table \ref{tb:segmentation}). Other DeepLab models \cite{chen2017deeplab, chen2018encoder} have also shown promise in surgical tool and object segmentation.
Most existing works on robot instrument or surgical scene segmentation were based on real surgery videos using publicly available datasets such as MICCAI EndoVis 17 \cite{allan20192017}, MICCAI EndoVis 18 \cite{allan20202018} and Cata7 \cite{ni2019raunet}. Popular frameworks include UNet \cite{ronneberger2015u}, TernausNet \cite{iglovikov2018ternausnet}, and LinkNet \cite{chaurasia2017linknet}.
Surgical scene segmentation in the dry-lab settings with the JIGSAWS dataset was done in \cite{andersen2021real} and \cite{papp2022surgical}, but we go further by segmenting additional objects and using tool and object segmentation for context inference.
Although surgical scene segmentation and instrument tracking can be used for skill assessment \cite{jin2018tool}, they have not yet been used for automatic context and gesture inference. Hence, our approach could be used as an independent source to evaluate context or gesture segmentation models trained using kinematic data.
Further, we aim to integrate data-driven segmentation with knowledge-driven context inference and context to gesture translation to perform gesture recognition. Compared to the above deep learning approaches for gesture recognition, this approach enables improvements by integrating human input. Our method also benefits from the availability of large open source image segmentation datasets that provide pretrained weights for segmentation models and could also improve segmentation performance via fine-tuning on smaller datasets.
\section{METHODS}
This section presents our overall pipeline for the automated inference of surgical context and translation to gesture labels based on the video data as depicted in Figure \ref{fig:pipeline}. Surgical context
can be inferred from the video or kinematic data by estimating the values of the state variables. In this work, we specifically focus on context inference solely based on video data as an independent method to verify gestures predicted from kinematic data or when kinematic data is not available. Our methods are presented for a case study of the JIGSAWS dataset~\cite{JIGSAWS} using the context labels from \cite{COMPASS}, but are applicable to other datasets and sets of gestures.
\subsection{Tool and Object Segmentation}
The detection of general and task-specific state variables for surgical context requires identifying the status and relative distance of the instruments and the objects of interest in a task. As shown in Figure \ref{fig:context} for the JIGSAWS tasks, these include the left and right graspers, needle, thread, and rings.
We modified the Deeplab V3 model \cite{chen2017deeplab} to perform binary segmentation that classifies the background vs. one object class in the video frames of a task trial. Specifically, we train separate binary classification models to classify background vs. left grasper, background vs. right grasper, background vs. needle, background vs. thread, and background vs. ring. The input to each model is a matrix $A_{H \times W \times 3}$ representing an RGB image of a video frame with Height (H) and Width (W). The output is a binary matrix $M_{H \times W}$ representing the segmentation mask with 0 for the background class, and 1 for the segmented object class. We need to infer the intersections between objects for generating context, which cannot be done with the existing multi-class segmentation models that classify each pixel to a single object class.
Binary segmentation models for each object class enable the analysis of intersections and overlaps among separate object masks to infer interactions between objects.
For each object, we combine the data from all tasks to train a single model to classify that object in all tasks. We leveraged transfer learning by initializing the model with a ResNet-50 \cite{he2016deep} backbone pre-trained on the COCO dataset \cite{lin2014microsoft}. We obtained tool and object annotations for the JIGSAWS dataset and used a subset of 70 videos for fine-tuning the model. However, the test set for the whole pipeline was significantly limited since much of the data from JIGSAWS was needed to train the image segmentation models. We trained our models for up to 20 epochs using Adam optimization \cite{kingma2014adam} with a learning rate of $10^{-5}$.
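A minimal sketch of this setup based on torchvision (the pretrained-weights flag and the bare training step are illustrative assumptions standing in for the exact configuration used in our experiments):
\begin{verbatim}
import torch
from torchvision.models.segmentation import deeplabv3_resnet50
from torchvision.models.segmentation.deeplabv3 import DeepLabHead

# one binary model per object class: {background, object}
model = deeplabv3_resnet50(weights='DEFAULT')   # pretrained backbone
model.classifier = DeepLabHead(2048, num_classes=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
criterion = torch.nn.CrossEntropyLoss()

def train_step(images, masks):
    # images: (B, 3, H, W) float; masks: (B, H, W) int64 in {0, 1}
    optimizer.zero_grad()
    logits = model(images)['out']               # (B, 2, H, W)
    loss = criterion(logits, masks)
    loss.backward()
    optimizer.step()
    return loss.item()
\end{verbatim}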
\subsection{Automated Context Inference}
The masks from the segmentation models provide us with information about the area and position of the instruments and objects, which enables state variable estimation at each frame. By calculating intersections and distances between the object masks in a given frame, we can detect interactions such as \textit{contact} and \textit{hold} as shown in Figure \ref{fig:context}.
In the mask matrices $M_{H \times W}$ generated by the segmentation models, each element $m_{hw} \in \{0,1\}$ indicates if the pixel $(h,w)$ belongs to an object mask. We first perform a pre-processing step on $M$ to eliminate the noise around masks such as the needles and threads.
Contour extraction is done to help eliminate the rough edges of the masks and improve intersection detection. This step uses the OpenCV library \cite{opencv_library} to iteratively construct contours around every element $m_{hw} \in M$, thus reducing the input matrix to a list of points $p \in C \subset M $ for each instrument class where $C$ is the boundary of $M$.
Using simplified polygons instead of binary masks greatly reduces the time needed to calculate intersections and distances between objects for each frame.
We experimentally determined that dropping polygons with areas under 15 pixel units squared and smoothing the polygons using the Ramer–Douglas–Peucker (RDP) algorithm \cite{RAMER1972244,douglas1973algorithms} results in better accuracy on the training set.
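A minimal sketch of this pre-processing step (OpenCV's \texttt{approxPolyDP} implements the RDP simplification; the tolerance value \texttt{rdp\_eps} is an assumed parameter, not one reported above):
\begin{verbatim}
import cv2
import numpy as np

def mask_to_polygons(mask, min_area=15.0, rdp_eps=2.0):
    # mask: binary (H, W) array; returns simplified contour polygons
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    polygons = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue    # drop noise blobs below 15 pixel units squared
        polygons.append(cv2.approxPolyDP(c, rdp_eps, closed=True))
    return polygons
\end{verbatim}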
Next, we detect overlaps between masks by taking a list of valid polygons and calculating a feature vector $v$ of distances ($D$) and intersection areas ($Inter$) between pairs of input masks. The input polygons Left Grasper $(LG)$, Right Grasper $(RG)$, Thread $(T)$ are common for all tasks.
Task-specific objects are the Needle $(N)$ appearing in Needle Passing and Suturing, the manually labeled Tissue Points $(Ts)$ representing the markings on the tissue where the needle makes contact in Suturing, and the Rings $(R)$ in Needle Passing.
We define the distance functions $D(I, J)$ and $d(i,j)$ and the intersection function $Inter(I,J)$ to, respectively, calculate the pixel distance between two object masks $I$ and $J$, the pixel distance between the individual polygons $i_1, j_1, ...$ that constitute an object mask, and the area of intersection between two object masks $I$ and $J$. For any object polygon $I$ composed of several polygon segments $i_1, i_2, ..., i_n$, the distance to any other object $J$ can be calculated as: $D(I, J) = \text{average}([d (i,j) \text{ for } i \in I \text{ and } j \in J])$. The intersection function $Inter(I,J)$ is implemented using a geometric intersection algorithm from the Shapely \cite{shapely} library. We also define the components $I.x, I.y$ for an object $I$ as the horizontal and vertical coordinates of the midpoint of its polygon, calculated as the average of every point in $I$.
The Boolean grasper state $ (\alpha) $ was determined from the manually labeled pixel coordinates of the grasper jaw ends: if the distance between them was less than 18 pixels, the grasper was considered closed ($ \neg \alpha $); otherwise it was open ($ \alpha $).
\setlength{\abovedisplayskip}{-3pt}
\setlength{\belowdisplayskip}{-3pt}
\small
\begin{align}
\text{Left Hold} &
\begin{cases} \label{equn:LH}
2 & \text{if } D(LG,N)<1 \wedge \neg \alpha \\
3 & \text{if } Inter(LG,T)>0 \wedge \neg \alpha \\
0 & \text{otherwise}
\end{cases}
\\
\text{Left Contact} &
\begin{cases} \label{equn:LC}
2 & \text{if } D(LG,N)<1 \wedge \alpha \\
3 & \text{if } Inter(LG,T)>0 \wedge \alpha \\
0 & \text{otherwise}
\end{cases}
\\
\text{Needle} &
\begin{cases} \label{equn:N}
2 & \text{if } (Inter(Ts,N) > 0 \wedge N.x < Ts.x) \\
1 & \text{if } (Inter(Ts,N) = 0 \vee N.x\geq Ts.x) \wedge \\
& (D(RG,T)>1 \vee D(LG,N)>1) \\
0 & \text{otherwise}
\end{cases}
\end{align}
\normalsize
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{6pt}
The feature vector $v=< D(LG,N),Inter(LG,T),...>$ (see Figure \ref{fig:pipeline}) is then used to estimate the values of different state variables using a set of task-specific functions.
An example set of functions is shown in Equations \ref{equn:LH}-\ref{equn:N} for the state variables relating to the left robot arm and needle in Suturing task. A similar set of functions are used for the right arm. For example, if the distance between the left grasper and needle is less than one pixel ($D(LG,N)<1$) and the grasper is closed ($\neg \alpha$), then a value of 2 is estimated for the \textit{Left Hold} variable. Or the \textit{Needle} state is detected as touching (2) when the relative horizontal distance of the needle polygon $(N.x)$ is less than the average (midpoint) of the tissue points $(Ts.x)$ and these two objects intersect ($Inter > 0$).
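A minimal sketch of these functions using Shapely (each object mask is assumed to be represented as a list of shapely polygons, and \texttt{grasper\_open} encodes $\alpha$):
\begin{verbatim}
import numpy as np
from shapely.geometry import Polygon

def D(I, J):
    # average pairwise distance between the polygon segments of two masks
    return float(np.mean([i.distance(j) for i in I for j in J]))

def Inter(I, J):
    # total area of geometric intersection between two object masks
    return sum(i.intersection(j).area for i in I for j in J)

def left_hold(LG, N, T, grasper_open):
    # cf. the Left Hold case distinction above:
    # 2 = holding needle, 3 = holding thread, 0 = neither
    if D(LG, N) < 1 and not grasper_open:
        return 2
    if Inter(LG, T) > 0 and not grasper_open:
        return 3
    return 0

# example: two square masks two pixels apart
lg = [Polygon([(0, 0), (5, 0), (5, 5), (0, 5)])]
needle = [Polygon([(7, 0), (9, 0), (9, 2), (7, 2)])]
print(D(lg, needle), left_hold(lg, needle, [], grasper_open=False))
\end{verbatim}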
The input sample rate of the context to gesture translation was 3Hz, so the final estimated variables were downsampled from 30Hz to 3Hz using a rolling mode for each state variable with a window of 10 frames.
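A sketch of this step (a block-wise mode at stride 10, which matches a rolling mode with window 10 sampled every 10 frames up to boundary handling):
\begin{verbatim}
from collections import Counter

def downsample_states(states_30hz, window=10):
    # 30 Hz -> 3 Hz: most frequent value in each window of 10 frames
    s = list(states_30hz)
    return [Counter(s[i:i + window]).most_common(1)[0][0]
            for i in range(0, len(s) - window + 1, window)]
\end{verbatim}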
\subsection{Context to Gesture Translation}
The last step in our pipeline translates the automatically generated context labels into gesture labels.
The input to the translation model is a 2-dimensional time series matrix $\chi_{State \times n}$, where $State$ represents the 5 state variables describing the context (see Figure \ref{fig:context}) and $n$ represents the total number of samples in the trial. %
We map each time step $State_t$ to a corresponding gesture $G_i$ in the JIGSAWS dataset. The translation output is a 1-dimensional time series $Y_{n} \in \{\mathbb{G}\}$ with each time step mapped to a gesture.
We present two approaches based on domain knowledge and data.
\subsubsection{Finite State Machine Model}
\begin{figure}[b!]
\vspace{-1.75em}
\centering
\includegraphics[trim = 0in 5.45in 8.85in 0in, clip, width=0.49\textwidth]{Figures/suturing_context_gesture_graph_2.pdf}
\caption{Grouping and mapping of context to gestures in the grammar graph of the Suturing task. \textcolor{red}{*} denotes transitions due to duration limits as follows: G2$>$6.0 s $\rightarrow$ G3, G3$>$11.1 s $\rightarrow$ G6, G4$>$5.2 s $\rightarrow$ G2, G6$>$6.1 s $\rightarrow$ G4.}
\label{fig:contextgestures}
\vspace{-0.25em}
\end{figure}
Our first approach relies on a finite state machine (FSM) defined based on the knowledge of surgical tasks which directly relates context to gestures and is more explainable than deep learning models. The grammar graphs from \cite{ahmidi2017dataset} for each task were overlaid on top of the ideal context models from \cite{COMPASS} so that each gesture could be mapped into the groups of contextual changes that happen as the result of executing the gesture (see Figure \ref{fig:contextgestures} for the Suturing task). For example, G2 (positioning needle) corresponds to a change from a `0' to a `1' in the fifth state variable.
Or G4 (transferring needle from left to right) is the context sequence $ 20000 \rightarrow 20020 \rightarrow 20200 \rightarrow 02200 \rightarrow 00200 $ which means the needle is initially held in the left grasper, then touched and grasped by the right grasper, and released by the left grasper. In Figure \ref{fig:contextgestures}, the G4 and G8 groupings overlap since G8 (orienting needle) is performed by passing the needle from the right to the left grasper and back to the right grasper while changing its orientation.
Given the context transcript of a trial, the FSM is evaluated for each context and a transition to the next gesture is detected if the input context is part of the next gesture.
The FSM for each task was initialized in the `Start' state since not all of the trials started with G1. Also, G11 was assumed to be last and so it was appended to the gestures following the last detected gesture.
In addition, in the Suturing and Needle Passing tasks, G9 and G10 had low rates of occurrence and were not included in the final translation. This allowed us to focus only on state changes involving the needle and thus ignore grasps and touches of the thread and rings with the added benefit of simplifying the FSMs and limiting the total number of valid context changes.
We also consider gesture duration as a trigger for transitions between gestures. If the current gesture's duration exceeds a certain threshold based on the average
duration of that gesture class, a transition to the next gesture is enforced. This is to address the cases where a gesture transition does not happen due to inaccuracies in context detection. For example, the segmentation models tend to have lower accuracy in detecting the needle and thread states, leading to not detecting transitions that are dependent on those states.
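A minimal sketch of such an FSM (the transition table is an illustrative fragment of the Suturing grammar in Figure \ref{fig:contextgestures}, not the complete set; the duration limits are those listed in the caption):
\begin{verbatim}
TRANSITIONS = {             # (current gesture, context) -> next gesture
    ('Start', '00000'): 'G1',
    ('G1', '00201'): 'G2',  # fifth state variable flips 0 -> 1
    ('G2', '00202'): 'G3',
    # ... remaining transitions from the task grammar graph
}
DURATION_LIMITS = {'G2': ('G3', 6.0), 'G3': ('G6', 11.1),
                   'G4': ('G2', 5.2), 'G6': ('G4', 6.1)}

def translate(contexts, dt=1.0 / 3.0):   # contexts sampled at 3 Hz
    gesture, elapsed, out = 'Start', 0.0, []
    for ctx in contexts:
        nxt = TRANSITIONS.get((gesture, ctx))
        if nxt is None and gesture in DURATION_LIMITS:
            forced, limit = DURATION_LIMITS[gesture]
            if elapsed > limit:          # duration-triggered transition
                nxt = forced
        if nxt is not None:
            gesture, elapsed = nxt, 0.0
        else:
            elapsed += dt
        out.append(gesture)
    return out
\end{verbatim}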
\subsubsection{LSTM Model}
Our second approach for translation of context to gesture transcripts relies on sequential deep learning methods to learn relationships in the data that are not captured by the FSM models. We trained an LSTM model to perform automated context to gesture translation for each task. We chose the LSTM model for its ability to learn temporal features. Specifically, we used a simple double layer LSTM network with 64 hidden units for the Suturing and Needle Passing tasks and 256 hidden units for the Knot Tying task. We used Adam optimization \cite{kingma2014adam} and the cross entropy loss function to train the models. The hidden layers, number of hidden units and learning rates were determined by hyperparameter tuning. The final models were trained with the best model configurations and used to perform inference on the automatically generated context labels using the segmentation masks in the test set. Note that the LSTM model is a black box model and does not provide transparency like the FSM model in the previous section.
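A sketch of this architecture in PyTorch (the number of gesture classes is a placeholder; the hidden size follows the configuration for Suturing and Needle Passing):
\begin{verbatim}
import torch.nn as nn

class ContextToGestureLSTM(nn.Module):
    # two stacked LSTM layers over the 5 state variables,
    # with a per-frame linear head producing gesture logits
    def __init__(self, num_states=5, hidden=64, num_gestures=10):
        super().__init__()
        self.lstm = nn.LSTM(num_states, hidden, num_layers=2,
                            batch_first=True)
        self.head = nn.Linear(hidden, num_gestures)

    def forward(self, x):        # x: (batch, time, num_states)
        h, _ = self.lstm(x)
        return self.head(h)      # (batch, time, num_gestures)
\end{verbatim}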
\section{EXPERIMENTAL EVALUATION}
\subsection{Experimental Setup}
We use an 80/20 train/test split of the JIGSAWS dataset for evaluating our pipeline. The original videos are 30Hz and we obtained binary masks for the tools and objects at 2Hz, which we then used to train/test the segmentation models.
The LSTM networks are trained with the 3Hz context labels from \cite{COMPASS}.
We evaluate both the FSM and LSTM for context to gesture translation with the test set context labels.
The experiments were conducted on a PC with an Intel Core i7 CPU \@ 3.60GHz, 32GB RAM, and an NVIDIA GeForce RTX 2080 Ti GPU running Ubuntu 18.04.2 LTS.
\subsection{Metrics}
The following metrics were used to evaluate the pipeline.
\textbf{Accuracy:}
Accuracy is the number of samples with correct labels divided by the total number of samples in a trial.
\textbf{Edit Score:}
Edit score is calculated using Equation \ref{equn:edit} from \cite{lea2016temporal} where the normalized Levenshtein edit distance, $ edit(G, P) $, quantifies the number of insertions, deletions, and replacements needed to transform the sequence of predicted labels $ P $ to match the ground truth sequence of labels $ G $. This is then normalized by the maximum length of the sequences so that a higher edit score is better.
\small
\begin{equation}
\text{Edit Score} = (1-\frac{edit(G, P)}{max(len(G), len(P))}) \times 100
\label{equn:edit}
\end{equation}
\normalsize
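For reference, a direct implementation of this metric on segment-level label sequences (consecutive duplicate labels collapsed beforehand) reads:
\begin{verbatim}
def edit_score(G, P):
    # Levenshtein distance between ground truth G and prediction P,
    # normalized by the longer sequence, cf. the equation above
    m, n = len(G), len(P)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1): d[i][0] = i
    for j in range(n + 1): d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i-1][j] + 1,                    # deletion
                          d[i][j-1] + 1,                    # insertion
                          d[i-1][j-1] + (G[i-1] != P[j-1])) # substitution
    return (1 - d[m][n] / max(m, n)) * 100
\end{verbatim}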
\textbf{Intersection over Union (IOU):}
Mean IOU, as calculated in Equation \ref{equn:IOU}, is the standard for assessing the segmentation and translation models \cite{lin2014microsoft}.
\small
\begin{equation}
IOU = TP/(TP+FP+FN)
\label{equn:IOU}
\end{equation}
\normalsize
Each predicted segment is matched to a corresponding segment in the ground truth. Then, the average IOU for each class is calculated and the mean of class IOUs is returned.
\subsection{Results}
\begin{table}[t!]
\centering
\caption{Tool and object segmentation performance on the test set (mean IOU for each object class) on the MICCAI18 (M) and JIGSAWS Suturing (S), Needle Passing (NP), and Knot Tying (KT) tasks.}
\vspace{-0.5em}
\label{tb:segmentation}
\begin{tabular}{P{0.275\linewidth} P{0.06\linewidth} P{0.06\linewidth} P{0.06\linewidth} P{0.075\linewidth} P{0.075\linewidth} P{0.05\linewidth}}
\toprule
\multirow{2}{*}{Model} & \multirow{2}{*}{Data} & \multicolumn{ 2}{c}{Graspers} & \multicolumn{ 3}{c}{Objects} \\
& & Left & Right & Needle & Thread & Ring \\
\midrule
Deeplab V3+ \cite{allan20202018} & \multirow{2}{*}{M~\cite{allan20202018}} & \multicolumn{ 2}{c}{0.78} & 0.014 & \textbf{0.48} & N/A \\
U-net \cite{allan20202018} & & \multicolumn{ 2}{c}{0.72} & 0.02 & 0.33 & N/A \\ \midrule
Mobile-U-Net~\cite{andersen2021real} & S & \multicolumn{ 2}{c}{\textbf{0.82}} & \textbf{0.56} & N/A & N/A \\ \midrule
\multirow{2}{*}{Trained UNet \cite{papp2022surgical}} & S &\multicolumn{ 2}{c}{0.69} & N/A & N/A & N/A \\
& NP &\multicolumn{ 2}{c}{0.66} & N/A & N/A & N/A \\
Trained LinkNet34 & KT & \multicolumn{ 2}{c}{0.80} & N/A & N/A & N/A \\ \midrule
\multirow{3}{*}{Deeplab V3 (ours)} & S & 0.71 & \textbf{0.64} & \textbf{0.19} & \textbf{0.52} & N/A \\
& NP & 0.61 & 0.49 & 0.09 & 0.25 & \textbf{0.37} \\
& KT & \textbf{0.74} & 0.61 & N/A & 0.44 & N/A \\
\bottomrule
\end{tabular}
\vspace{-2.5em}
\end{table}
\subsubsection{Tool and Object Segmentation}
\label{sec:res tool and obj seg}
\begin{table*}[ht!]
\centering
\caption{State variable IOU with consensus context using predicted masks from DeepLab V3 and ground truth masks}
\vspace{-0.5em}
\label{tab:generation}
\begin{tabular}{P{1.8cm} P{0.7cm} P{1cm} P{0.7cm} P{1cm} P{1cm} | P{0.7cm} P{0.7cm} P{1cm} P{0.7cm} P{1cm} P{1cm} | P{0.7cm} }
& \multicolumn{6}{c}{Predicted Masks} & \multicolumn{6}{c}{Ground Truth Masks} \\
\cmidrule(lr){2-7} \cmidrule(lr){8-13}
& Left Hold & Left Contact & Right Hold & Right Contact & Needle or Knot & Avg & Left Hold & Left Contact & Right Hold & Right Contact & Needle or Knot & Avg \\
\cmidrule(lr){1-7} \cmidrule(lr){8-13}
Suturing & 0.48 & 0.75 & \textbf{0.60} & 0.87 & 0.30 & 0.60 & 0.52 & 0.77 & \textbf{0.61} & 0.87 & 0.39 & 0.63\\
Needle Passing & 0.40 & \textbf{0.97} & 0.18 & \textbf{0.95} & \textbf{0.39}& 0.58 & 0.42 & \textbf{0.97} & 0.19 & \textbf{0.94} & \textbf{0.41} & 0.59\\
Knot Tying & \textbf{0.75} & 0.72 & 0.57 & 0.78 & \textbf{0.59} & \textbf{0.68} & \textbf{0.83} & 0.77 & \textbf{0.61} & 0.79 & \textbf{0.62} & \textbf{0.72}\\
\cmidrule(lr){1-7} \cmidrule(lr){8-13}
Avg & 0.54 & \textbf{0.81} & 0.45 & \textbf{0.87} & 0.43 & & 0.59 & \textbf{0.84} & 0.47 & \textbf{0.87} & 0.47 & \\
\cmidrule(lr){1-7} \cmidrule(lr){8-13}
\end{tabular}
\vspace{-0.5em}
\end{table*}
\begin{table*}[h!]
\centering
\caption{Gestures translated from the automatic context inference given masks from the Deeplab V3 models }
\vspace{-0.5em}
\label{tab:c2g_all}
\begin{tabular}{P{1.75cm} P{1.75cm} P{1.65cm} P{1.65cm} P{1.65cm} P{1.65cm} P{1.65cm} P{0.65cm}}
& & \multicolumn{3}{c}{Gestures from Predicted Context Labels} & \multicolumn{3}{c}{Gestures from Consensus Context Labels} \\
\cmidrule(lr){3-5} \cmidrule(l){6-8}
Task & Model & Accuracy (\%) & Edit Score & IOU & Accuracy (\%) & Edit Score & IOU \\ \midrule
\multirow{2}{*}{Suturing} & FSM & \textbf{40.8} & \textbf{67.1} & \textbf{0.28} & \textbf{66.3} & \textbf{84.4} & \textbf{0.48} \\
& LSTM & 25.4 & 24.9 & 0.17 & 38.8 & 34.7 & 0.26\\
\cmidrule(lr){1-2} \cmidrule(lr){3-5} \cmidrule(l){6-8}
Suturing (vid) & Zero-shot \cite{jones2019zero} & \textbf{56.6} & \textbf{61.7} & & & & \\
Suturing (kin) & TSSC-DL \cite{clopton2017temporal} & 49.7 & 32.8 & & & & \\
\cmidrule(lr){1-2} \cmidrule(lr){3-5} \cmidrule(l){6-8}
\multirow{2}{*}{Needle Passing} & FSM & \textbf{18.0} & \textbf{76.2} & \textbf{0.12} & \textbf{70.1} & \textbf{88.7} & \textbf{0.54}\\
& LSTM & 14.8 & 20.1 & 0.02 & 17.0 & 20.0 & 0.04\\
\cmidrule(lr){1-2} \cmidrule(lr){3-5} \cmidrule(l){6-8}
\multirow{2}{*}{Knot Tying} & FSM & \textbf{42.9} & \textbf{70.7} & \textbf{0.43} & \textbf{54.6} & \textbf{91.5} & \textbf{0.43}\\
& LSTM & 36.5 & 23.8 & 0.17 & 50.8 & 49.2 & 0.28 \\
\bottomrule
\end{tabular}%
\vspace{-1.5em}
\end{table*}
Table \ref{tb:segmentation} shows the performance of our segmentation models in comparison to the related work.
Although the MICCAI 18 challenge \cite{allan20202018} dataset is from real porcine procedures, and differs from the JIGSAWS dataset collected from dry-lab experiments, it has similar objects including the clasper (similar to the graspers in JIGSAWS), needle and thread.
The Deeplab V3+ model achieved the best performance on the thread class. The top models from MICCAI 18 do not perform as well as our binary models on the needle and thread classes in the Suturing task.
However, the Mobile-U-Net \cite{andersen2021real} achieved the highest performance for grasper and needle segmentation in the JIGSAWS Suturing task.
\cite{papp2022surgical} reported tool segmentation IOUs for all the JIGSAWS tasks with up to 0.8 for KT using a Trained LinkNet34, but did not do object segmentation.
Among the JIGSAWS tasks, we achieved the best performance in Suturing for the right grasper, needle and thread, while the model performance on the Needle Passing task was the worst. This is likely due to Needle Passing's background having less contrast with the foreground compared to the other two tasks, as shown in Figure \ref{fig:pipeline}. We can also see that the needle and thread masks are thinner compared to the grasper masks. So, the mask boundary errors could contribute to a lower score for the needle and thread classes.
The estimated time for segmenting the whole JIGSAWS dataset is 8.6 hours.
\subsubsection{Automated Context Inference}
\label{sec:res Automated Context Labeling}
Table \ref{tab:generation} shows the performance of the context inference method in terms of IOU achieved for each state variable with the predicted segmentation masks and the ground truth masks from crowd-sourcing.
The left column of Table \ref{tab:generation} shows that left and right contact have higher IOUs compared to left and right hold, and the needle or knot state has the lowest IOU.
This is because errors in estimating the position of the grasper jaw ends affect accurate inference of the hold state, while contact is relatively simple by finding if the two masks intersect. Better performance in detecting contact compared to hold states is also observed in the right column of Table \ref{tab:generation}, where ground truth segmentation masks are used.
Hence, the lower performance of the left hold and right hold could primarily be due to the difficulty in detecting these states.
For the needle/knot state, we need to detect if the needle is in the fabric/tissue for the Suturing task, in/out of the ring for the Needle Passing task, and if the knot is loose or tight in the Knot Tying task. Detecting the state of the needle and knot is difficult even with the ground truth segmentation masks in the right column of Table \ref{tab:generation}. This is because the needle and thread have the lowest segmentation performance compared to graspers as shown in Table \ref{tb:segmentation}.
The total time to perform automatic context inference is estimated to be about 30 seconds for the whole JIGSAWS dataset.
\subsubsection{Context to Gesture Translation}
\label{sec:res Context to Gesture Translation}
The right column of Table \ref{tab:c2g_all} shows the performance of the FSM and LSTM methods in translating ground truth context labels to gestures.
The FSM model achieves higher accuracies and edit scores than the LSTM.
The left column of Table \ref{tab:c2g_all} shows the performance of the overall pipeline with automated context labels. We see that using automated context from predicted masks degrades the performance of both models because the segmentation models perform poorly at generating masks for the needle and for all tools and objects in Needle Passing. This effect is propagated through the pipeline, resulting in low accuracies and IOUs.
The FSM generally outperforms the LSTM, likely due to its knowledge-based structure and its limits on gesture durations, which prevent the model from becoming stuck in any one gesture even with degraded context labels.
The FSM pipeline achieves accuracies lower than unsupervised models from \cite{jones2019zero} and \cite{clopton2017temporal} for Suturing, but outperforms them in terms of edit score.
These observations suggest that there are benefits to incorporating knowledge into context to gesture translation that can make the model more robust to degraded context labels.
However, the FSM is manually developed based on domain knowledge and relies on defined inputs and transitions while the LSTM requires labeled data for training. The time to generate the entire JIGSAWS gesture translation from context is less than 3 minutes for both models.
\section{DISCUSSION AND CONCLUSIONS}
Our overall pipeline for automated inference of surgical context and translation to gesture labels can perform automatic gesture inference given video segmentation masks.
It can be used as an efficient and fast inference method by significantly shortening manual gesture labeling time ($\sim$9 hours vs. $\sim$26 hours for the case study of the JIGSAWS dataset).
We rely on models pre-trained on general images and publicly-available datasets which lowers the
cost of manually labeling video data and makes our model generalizable to other datasets and tasks. In this case study on JIGSAWS, our binary segmentation models achieve comparable performance to state-of-the-art models on the grasper and thread classes, and better performance on the needle class. However, they do not perform well enough for the needle and thread classes which are important for detecting surgical context and limit overall performance.
Given ground truth segmentation masks, automated context inference achieves 85\% IoU for states such as left/right contact, but only $\sim$45\% IoU for the needle/knot state, so it does not perform equally well for all states.
The FSM and LSTM models for context-to-gesture translation perform better given ground truth context labels than given predicted context, which may be due to imperfect models at each stage of the pipeline and error propagation.
Manual annotations for the grasper end points and tissue points were still needed, so future work will develop models for their automatic segmentation.
This method also relies on 2D images to infer context from a 3D environment, which can complicate contact state detection.
Future work will improve the performance and robustness of the pipeline and apply it to runtime error detection \cite{yasar2020real, Li2022Runtime}.
\section*{ACKNOWLEDGMENT}
This work was supported in part by the National Science Foundation grants DGE-1829004 and CNS-2146295.
\subsection{Derivation of the 1-RSB free entropy}\label{subsec_app:1rsb_derivation}
We perform here, for completeness of our presentation, the textbook calculation of the spherical perceptron free entropy at the one-RSB level.
We start again from eq.~\eqref{eq:Phir_final}, which we rewrite using a Gaussian transformation as:
\begin{align}\label{eq:Phir_1rsb_1}
\Phi(\alpha,\beta,r) &= \sup_{\bQ}\Big[\frac{1}{2} \log \det \bQ + \alpha \log \int_{\bbR^r} \frac{\rd \bu \rd \bv}{(2\pi)^{r}} e^{-\frac{1}{2} \sum_{a,b} Q^{ab} v^a v^b -\beta \sum_{a=1}^r \theta(u^a) + i \sum_{a=1}^r u^a v^a}\Big].
\end{align}
We assume a 1RSB ansatz given in eq.~\eqref{eq:rhoq_Q_1RSB}, with $q_1 > q_0$, and $m \in \{1,\cdots,r\}$ with $m \, | \, r$ the Parisi parameter (i.e.\ the size of the diagonal blocks
in the ultrametric $\bQ$).
More precisely we have, with $k \coloneqq r / m$:
\begin{align}\label{eq:1RSB_explicit}
\begin{dcases}
Q_{aa} &= 1, \\
Q_{ab} &= q_1 \textrm{ if } a \neq b \textrm{ and } \Big\lceil \frac{a}{m} \Big\rceil = \Big\lceil \frac{b}{m} \Big\rceil, \\
Q_{ab} &= q_0 \textrm{ otherwise.}
\end{dcases}
\end{align}
\textbf{The entropic contribution --} We focus on the first term of eq.~\eqref{eq:Phir_1rsb_1}.
It is elementary algebra to check that under the ansatz of eq.~\eqref{eq:1RSB_explicit}, the spectrum of $\bQ$ is:
\begin{align*}
\mathrm{Sp}(\bQ) &= \{1-q_1\}^{r-k} \cup \{1-mq_0 + (m-1)q_1\}^{k-1} \cup \{1+(r-m)q_0 + (m-1)q_1\}.
\end{align*}
In particular, this yields:
\begin{align*}
\log \det \bQ &= r \frac{m - 1}{m} \log (1-q_1) + \frac{r-m}{m} \log [1-mq_0 + (m-1)q_1] + \log[1+(r-m)q_0 + (m-1)q_1].
\end{align*}
And thus:
\begin{align}\label{eq:entropic_1rsb}
\partial_r[\log \det \bQ]_{r = 0 } &= \frac{m - 1}{m} \log (1-q_1) + \frac{1}{m} \log [1-mq_0 + (m-1)q_1] + \frac{q_0}{[1-mq_0 + (m-1)q_1]}.
\end{align}
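As a quick sanity check of the expression for $\log \det \bQ$ above, one can build the matrix $\bQ$ explicitly at integer $r$ (with $m \,|\, r$) and compare against a direct numerical evaluation; a minimal sketch with arbitrary test values:
\begin{verbatim}
# Check of the closed-form log det of the 1-RSB matrix (test values).
import numpy as np

def Q_1rsb(r, m, q0, q1):
    """Hierarchical matrix: r/m diagonal blocks of size m."""
    Q = np.full((r, r), q0)
    for x in range(r // m):
        Q[x*m:(x+1)*m, x*m:(x+1)*m] = q1
    np.fill_diagonal(Q, 1.0)
    return Q

r, m, q0, q1 = 12, 3, 0.3, 0.7
lhs = np.linalg.slogdet(Q_1rsb(r, m, q0, q1))[1]
rhs = (r*(m-1)/m * np.log(1-q1)
       + (r-m)/m * np.log(1 - m*q0 + (m-1)*q1)
       + np.log(1 + (r-m)*q0 + (m-1)*q1))
print(lhs, rhs)   # agree to machine precision
\end{verbatim}
Since the closed form is affine and logarithmic in $r$, its validity at integer $r$ supports the $r$-derivative taken in eq.~\eqref{eq:entropic_1rsb}.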
\textbf{The interaction contribution --}
We focus now on the second term $\alpha G_{2,r}(\bQ)$ in eq.~\eqref{eq:Phir_1rsb_1}, with:
\begin{align*}
G_{2,r}(\bQ) &\coloneqq \log \int_{\bbR^r} \frac{\rd \bu \rd \bv}{(2\pi)^{r}} e^{-\frac{1}{2} \sum_{a,b} Q^{ab} v^a v^b -\beta \sum_{a=1}^r \theta(u^a) + i \sum_{a=1}^r u^a v^a}.
\end{align*}
Under the 1-RSB ansatz, it becomes:
\begin{align*}
G_{2,r}(\bQ) &= \log \int_{\bbR^r} \frac{\rd \bu \rd \bv}{(2\pi)^{r}} e^{-\frac{1-q_1}{2} \sum_{a} (v^a)^2 - \frac{q_0}{2} \big(\sum_a v^a\big)^2
- \frac{q_1 - q_0}{2} \sum_{x=0}^{k-1} \big(\sum_{l=1}^m v^{mx + l}\big)^2 - \sum_{a} [\beta\theta(u^a) - i u^a v^a]}.
\end{align*}
Introducing Gaussian transformations based on the formula $e^{-x^2/2} = \int \mcD z \, e^{-i z x}$ to decouple the replicas, we obtain:
\begin{align*}
&G_{2,r}(\bQ) = \log \int \mcD \xi \prod_{x=0}^{k-1} \int \mcD z_x \int_{\bbR^r} \frac{\rd \bu \rd \bv}{(2\pi)^{r}} \exp\Big\{-\frac{1-q_1}{2} \sum_{a} (v^a)^2 - i \sqrt{q_0} \xi \sum_a v^a \\
&- i\sqrt{q_1 - q_0} \sum_{x=0}^{k-1} z_x \sum_{l=1}^m v^{mx + l} -\beta \sum_{a} \theta(u^a) + i \sum_{a} u^a v^a\Big\}, \\
&= \log \int \mcD \xi \Bigg\{ \int \mcD z \Bigg[\int \frac{\rd u \rd v}{2\pi} \exp\Big\{-\frac{1-q_1}{2} v^2 - i v [\sqrt{q_0} \xi + \sqrt{q_1 - q_0} z] -\beta \theta(u) + i u v\Big\} \Bigg]^m \Bigg\}^{\frac{r}{m}}.
\end{align*}
Using this Gaussian transformation trick, we were able to decouple the replicas and obtain an expression that is analytic in $r$.
This allows us to take the $r \downarrow 0$ limit (keeping $m$ fixed), and we reach:
\begin{align*}
&\partial_r \big[G_{2,r}(\bQ)\big]_{r=0} \\
&= \frac{1}{m} \int \mcD \xi \log \Bigg\{ \int \mcD z \Bigg[\int \frac{\rd u \rd v}{2\pi} \exp\Big\{-\frac{1-q_1}{2} v^2 + i v [u - \sqrt{q_0} \xi - \sqrt{q_1 - q_0} z] -\beta \theta(u)\Big\} \Bigg]^m \Bigg\}.
\end{align*}
Performing the Gaussian integrals, and recalling the definition $H(x) \coloneqq \int_x^\infty \mcD u$, we reach:
\begin{align}\label{eq:energetic_1rsb}
\partial_r \big[G_{2,r}(\bQ)\big]_{r=0} &= \frac{1}{m} \int \mcD \xi \log \Bigg\{ \int \mcD z \Bigg[1- (1-e^{-\beta}) H\Big(- \frac{\sqrt{q_0} \xi + \sqrt{q_1 - q_0} z}{\sqrt{1-q_1}}\Big) \Bigg]^m \Bigg\}.
\end{align}
Combining eq.~\eqref{eq:entropic_1rsb} and eq.~\eqref{eq:energetic_1rsb}, we reach
eq.~\eqref{eq:phi_1rsb}.
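Both contributions are straightforward to evaluate numerically. As an illustration, a minimal sketch computing the right-hand side of eq.~\eqref{eq:energetic_1rsb} by Gauss-Hermite quadrature (all parameter values below are arbitrary test inputs, not saddle-point values):
\begin{verbatim}
# Evaluate the 1-RSB interaction term by Gauss-Hermite quadrature.
import numpy as np
from scipy.special import erfc

def H(x):                                   # H(x) = int_x^infty Du
    return 0.5 * erfc(x / np.sqrt(2))

def G2_prime(beta, m, q0, q1, nodes=80):
    t, w = np.polynomial.hermite_e.hermegauss(nodes)
    w = w / np.sqrt(2*np.pi)                # weights of the measure Dz
    xi0, z = t[:, None], t[None, :]
    arg = -(np.sqrt(q0)*xi0 + np.sqrt(q1 - q0)*z) / np.sqrt(1 - q1)
    inner = (1 - (1 - np.exp(-beta)) * H(arg))**m
    return np.dot(w, np.log(inner @ w)) / m

print(G2_prime(beta=4.0, m=0.5, q0=0.3, q1=0.8))
\end{verbatim}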
\subsection{Zero-temperature limit}\label{subsec_app:zerotemp_1rsb}
Recall that in the $\beta \to \infty$ limit we have the scaling (see e.g.\ \cite{franz2017universality})
\begin{align}\label{eq:scaling_1rsb_zerotemp}
m \sim \frac{c_m}{\beta} \hspace{1cm} \textrm{and} \hspace{1cm}
1 - q_1 \sim \frac{\chi_\ORSB}{\beta},
\end{align}
while $q_0$ has a limit in $(0,1)$ as $\beta \to \infty$.
In the remainder of this section, we write $\chi_\ORSB = \chi$ to lighten the notation.
The asymptotics of the determinant term in eq.~\eqref{eq:phi_1rsb} can be worked out:
\begin{align*}
&\frac{m - 1}{2m} \log (1-q_1) + \frac{1}{2m} \log [1-mq_0 + (m-1)q_1] + \frac{q_0}{2[1-mq_0 + (m-1)q_1]} \nonumber \\
&\simeq \frac{\beta}{2} \Big[\frac{q_0}{\chi+c_m(1-q_0)} + \frac{1}{c_m} \log \Big(\frac{\chi + c_m(1-q_0)}{\chi}\Big)\Big].
\end{align*}
The limit of the other term in eq.~\eqref{eq:phi_1rsb} can also be computed, using that:
\begin{align*}
\Bigg[1- (1-e^{-\beta}) H\Big(- \frac{\sqrt{q_0} \xi_0 + \sqrt{q_1 - q_0} \xi_1}{\sqrt{1-q_1}}\Big) \Bigg]^m &= \exp\{m f_\beta(u)\},
\end{align*}
with $u \coloneqq \sqrt{q_0} \xi_0 + \sqrt{1-q_0} \xi_1$ (using $q_1 \to 1$), and $f_\beta(h) \coloneqq \log (1 - (1-e^{-\beta}) H[-h/\sqrt{1-q_1}])$.
We described the expansion of $f_\beta(h)/\beta$ for large $\beta$ in eq.~\eqref{eq:expansion_fbeta} (simply replacing $\chi_\RS$ by $\chi_\ORSB$).
We reach then:
\begin{align}\label{eq:expansion_1rsb_basis_term}
\Bigg[1- (1-e^{-\beta}) H\Big(- \frac{\sqrt{q_0} \xi_0 + \sqrt{q_1 - q_0} \xi_1}{\sqrt{1-q_1}}\Big) \Bigg]^m &\simeq
\begin{dcases}
1 &\textrm{ if } u < 0, \\
e^{-c_m} &\textrm{ if } u > \sqrt{2\chi}, \\
e^{-\frac{c_m u^2}{2 \chi}} &\textrm{ if } u \in (0,\sqrt{2\chi}).
\end{dcases}
\end{align}
Anticipating what follows, we introduce the auxiliary functions (in which $u = u(\xi_0,\xi_1) \coloneqq \sqrt{q_0}\xi_0 + \sqrt{1-q_0} \xi_1$):
\begin{align}\label{eq:def_nabc}
\begin{dcases}
n(\xi_0) &\coloneqq \int_{u \leq 0} \mcD \xi_1 + e^{-c_m} \int_{u > \sqrt{2 \chi}} \mcD \xi_1 + \int_{0<u< \sqrt{2 \chi}} \mcD \xi_1 e^{- \frac{c_m}{2\chi} u^2}, \\
a(\xi_0) &\coloneqq \int_{0< u < \sqrt{2\chi}} \mcD \xi_1 \, e^{-\frac{c_m}{2\chi}u^2} \, u, \\
b(\xi_0) &\coloneqq \int_{0< u < \sqrt{2\chi}} \mcD \xi_1 \, e^{-\frac{c_m}{2\chi}u^2} \, u^2, \\
c(\xi_0) &\coloneqq e^{-c_m} \int_{u > \sqrt{2\chi}} \mcD \xi_1 + \frac{1}{2 \chi} \int_{0 < u < \sqrt{2\chi}} \mcD \xi_1 \, u^2 \, e^{-\frac{c_m}{2 \chi} u^2}.
\end{dcases}
\end{align}
By integration by parts, all these functions can be expressed in terms of elementary functions and $H(x) \coloneqq \int_x^\infty \mcD \xi$.
Moreover, note that we have the identities:
\begin{subnumcases}{\label{eq:dn}}
\label{eq:dn_dq}
\partial_{q_0}n(\xi_0) = -\frac{c_m}{2\chi(1-q_0)} \Big[\frac{\xi_0}{\sqrt{q_0}} a(\xi_0) - b(\xi_0)\Big],& \\
\label{eq:dn_dchi}
\partial_{\chi}n(\xi_0) = \frac{c_m}{2\chi^2} b(\xi_0), & \\
\label{eq:dn_dcm}
\partial_{c_m}n(\xi_0) = -c(\xi_0). &
\end{subnumcases}
Eqs.~\eqref{eq:dn_dchi} and \eqref{eq:dn_dcm} can be obtained directly from the definition of eq.~\eqref{eq:def_nabc}.
For eq.~\eqref{eq:dn_dq}, we found it more convenient to differentiate the finite-$\beta$ integral one can write for $n(\xi_0)$ using eq.~\eqref{eq:expansion_1rsb_basis_term}, and then take its large-$\beta$ limit.
We leave the details of this derivation to the reader.
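These definitions and identities are nonetheless easy to check numerically. A minimal sketch (arbitrary test values) evaluating $n$ and $b$ by brute-force quadrature over $\xi_1$ and verifying eq.~\eqref{eq:dn_dchi} by finite differences:
\begin{verbatim}
# Finite-difference check of d n / d chi = (c_m / 2 chi^2) b.
import numpy as np

def n_and_b(xi0, q0, chi, cm, grid=200001):
    x1 = np.linspace(-12.0, 12.0, grid)
    D = np.exp(-x1**2/2) / np.sqrt(2*np.pi) * (x1[1] - x1[0])
    u = np.sqrt(q0)*xi0 + np.sqrt(1 - q0)*x1
    mid = (u > 0) & (u < np.sqrt(2*chi))
    gauss = np.exp(-cm * u**2 / (2*chi))
    n = (np.sum(D * (u <= 0))
         + np.exp(-cm) * np.sum(D * (u >= np.sqrt(2*chi)))
         + np.sum(D * mid * gauss))
    b = np.sum(D * mid * gauss * u**2)
    return n, b

xi0, q0, chi, cm, eps = 0.4, 0.3, 0.2, 1.5, 1e-6
n_p, _ = n_and_b(xi0, q0, chi + eps, cm)
n_m, _ = n_and_b(xi0, q0, chi - eps, cm)
_, b0 = n_and_b(xi0, q0, chi, cm)
print((n_p - n_m) / (2*eps), cm * b0 / (2*chi**2))   # should agree
\end{verbatim}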
Using the expansion of eq.~\eqref{eq:expansion_1rsb_basis_term}, we obtain the limit of the second term of eq.~\eqref{eq:phi_1rsb}:
\begin{align*}
&\int \mcD \xi_0 \log \Bigg\{ \int \mcD \xi_1 \Bigg[1- (1-e^{-\beta}) H\Big(- \frac{\sqrt{q_0} \xi_0 + \sqrt{q_1 - q_0} \xi_1}{\sqrt{1-q_1}}\Big) \Bigg]^m \Bigg\} \simeq \int \mcD \xi_0 \log n(\xi_0).
\end{align*}
In the end, we have computed the limit of the free energy at the 1-RSB level, i.e.\
$f^\star_\ORSB(\alpha) \coloneqq - \lim_{\beta \to \infty} \Phi_\ORSB(\alpha,\beta)/\beta$:
\begin{align}\label{eq:phi_1rsb_zero_temp}
f^\star_\ORSB(\alpha) &=
-\frac{q_0}{2[\chi+c_m(1-q_0)]} - \frac{1}{2c_m} \log \Big(\frac{\chi + c_m(1-q_0)}{\chi}\Big) - \frac{\alpha}{c_m} \int \mcD \xi_0 \log n(\xi_0),
\end{align}
in which one must implicitly maximize over $(c_m, q_0, \chi)$.
Note that an equivalent expression can be obtained using the limit of the average energy,
since $e^\star_\ORSB(\alpha,\beta = \infty) = \lim_{\beta \to \infty} [-\partial_\beta \Phi_\ORSB(\alpha, \beta)] = f^\star_\ORSB(\alpha)$.
Performing expansions in a similar way to the RS computations described in Appendix~\ref{subsec_app:zerotemp_RS},
we reach:
\begin{align}\label{eq:fstar_1rsb_app}
f^\star_\ORSB(\alpha) &= e^\star_\ORSB(\alpha,\beta = \infty) = \alpha \int \mcD \xi_0 \frac{1}{n(\xi_0)} e^{-c_m} \int_{u > \sqrt{2\chi}} \mcD \xi_1.
\end{align}
Let us emphasize that eq.~\eqref{eq:fstar_1rsb_app} is an identity involving the parameters $(c_m, q_0, \chi)$, which have to be found by maximizing eq.~\eqref{eq:phi_1rsb_zero_temp}.
\subsection{Numerical procedure}\label{subsec_app:1rsb_numerical}
Let us summarize here the equations that allow one to find the $1$-RSB prediction for the injectivity threshold, using the set of auxiliary functions of eq.~\eqref{eq:def_nabc}.
One simply differentiates the limit of the free energy functional given in eq.~\eqref{eq:phi_1rsb_zero_temp} with respect to $(q_0, \chi,c_m)$,
using eq.~\eqref{eq:dn}.
More precisely, at a given value of $\alpha > 2$, one must find $q_0 \in (0,1)$ and $\chi,c_m > 0$ satisfying the following set of three equations:
\begin{align}\label{eq:final_eq_1rsb_zerotemp}
\begin{dcases}
&\frac{q_0}{[\chi + (1-q_0)c_m]^2} = \frac{\alpha}{c_m \chi (1-q_0)}
\int \mcD \xi_0 \frac{1}{n(\xi_0)} \Big[\frac{\xi_0}{\sqrt{q_0}} a(\xi_0) - b(\xi_0) \Big], \\
&\frac{\chi + (1-q_0)^2 c_m}{\chi[(1-q_0) c_m + \chi]^2}
= \frac{\alpha}{\chi^2}\int \mcD \xi_0 \frac{b(\xi_0)}{n(\xi_0)}, \\
&\frac{c_m (1-q_0) [\chi + c_m (1-2q_0)]}{2[\chi + c_m(1-q_0)]^2} -\frac{1}{2} \log\frac{\chi+c_m(1-q_0)}{\chi}
= \alpha \int \mcD \xi_0 \, \Big\{ \log n(\xi_0) + c_m \frac{c(\xi_0)}{n(\xi_0)}\Big\}.
\end{dcases}
\end{align}
Once the solution to eq.~\eqref{eq:final_eq_1rsb_zerotemp} is found, the large-$\beta$ limit of the energy follows from either eq.~\eqref{eq:phi_1rsb_zero_temp} or eq.~\eqref{eq:fstar_1rsb_app}.
\myskip
Following standard practice in the statistical physics literature, we use auxiliary variables to implement an iterative scheme for solving eq.~\eqref{eq:final_eq_1rsb_zerotemp}.
Namely, we iterate the first two equations of eq.~\eqref{eq:final_eq_1rsb_zerotemp} as:
\begin{align}\label{eq:final_eq_1rsb_zerotemp_what}
\begin{dcases}
A_0^t &= \frac{\alpha}{c_m \chi^t (1-q_0^t)}
\int \mcD \xi_0 \frac{1}{n_t(\xi_0)} \Big[\frac{\xi_0}{\sqrt{q_0^t}} a_t(\xi_0) - b_t(\xi_0) \Big], \\
A_1^t &= \frac{\alpha}{(\chi^t)^2}\int \mcD \xi_0 \frac{b_t(\xi_0)}{n_t(\xi_0)}, \\
q_0^{t+1} &= F_1(A_0^t, A_1^t, c_m^t), \\
\chi^{t+1} &= F_2(A_0^t,A_1^t, c_m^t),
\end{dcases}
\end{align}
in which we added a time index $t$ to the auxiliary functions to highlight their dependence on $q_0^t,\chi^t,c_m^t$.
Moreover, the functions $F_1,F_2$ are defined as the unique roots (in $(q_0,\chi)$) of the equations
\begin{align*}
A_0 = \frac{q_0}{[\chi + (1-q_0)c_m]^2} \hspace{1cm} \textrm{and} \hspace{1cm}
A_1 = \frac{\chi + (1-q_0)^2 c_m}{\chi[(1-q_0) c_m + \chi]^2},
\end{align*}
such that $q_0 \in (0,1)$ and $\chi \geq 0$.
Note that this implies that
\begin{align}\label{eq:chi_from_q0_1rbs}
\chi = \frac{A_0 c_m (1-q_0)^2}{q_0 A_1 - A_0}.
\end{align}
Therefore, in order for the solution to exist we must have $A_0 < A_1$, and then the solution satisfies $q_0 > A_0 / A_1$.
The remaining equation on $q_0$ can be written as:
\begin{align}\label{eq:1rsb_zerotemp_remaining_q0}
A_0 &= \frac{(A_1 q_0 - A_0)^2}{q_0 c_m^2 (1-q_0)^2 (A_1 - A_0)^2}.
\end{align}
We solve eq.~\eqref{eq:1rsb_zerotemp_remaining_q0} for $q_0$ with a polynomial equation solver, and keep the unique solution in $(0,1)$ such that the
corresponding $\chi$ in eq.~\eqref{eq:chi_from_q0_1rbs} satisfies $\chi \geq 0$, i.e.\ such that $q_0 > A_0 / A_1$.
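Concretely, eq.~\eqref{eq:1rsb_zerotemp_remaining_q0} rearranges into a cubic in $q_0$, so any standard root finder applies; a minimal sketch of this root-selection step (the values of $A_0, A_1, c_m$ are arbitrary test inputs):
\begin{verbatim}
# Solve C q0 (1-q0)^2 = (A1 q0 - A0)^2 with C = A0 cm^2 (A1-A0)^2.
import numpy as np

def solve_q0(A0, A1, cm):
    C = A0 * cm**2 * (A1 - A0)**2
    roots = np.roots([C, -2*C - A1**2, C + 2*A0*A1, -A0**2])
    roots = roots[np.abs(roots.imag) < 1e-12].real
    return roots[(roots > A0/A1) & (roots < 1)]   # so that chi >= 0

A0, A1, cm = 0.2, 0.9, 1.3
for q0 in solve_q0(A0, A1, cm):
    chi = A0 * cm * (1 - q0)**2 / (q0*A1 - A0)    # eq. (chi from q0)
    print(q0, chi)
\end{verbatim}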
\myskip
At a given iteration $t$, we iterate eq.~\eqref{eq:final_eq_1rsb_zerotemp_what} for the value $c_m = c_m^t$.
We then do a binary search to solve the last equation of eq.~\eqref{eq:final_eq_1rsb_zerotemp} and find $c_m^{t+1}$.
We found this procedure to converge very quickly (see the attached code \cite{github_repo}), and it yields the 1RSB curves in Fig.~\ref{fig:chi_estar_T0}
and the prediction of eq.~\eqref{eq:alphainj_1RSB}.
\subsection{Entropic contribution}\label{subsec_app:entropic}
We start with the first ``entropic'' term in eq.~\eqref{eq:Phir_frsb_1}.
Its expression under a full-RSB ansatz is given in eq.~(23) of \cite{franz2017universality},
itself taken from Appendix~A.II of \cite{mezard1991replica}.
However, the derivation is itself very interesting and will be useful for the other term in eq.~\eqref{eq:Phir_frsb_1}, so we first detail it here.
\myskip\textbf{Derivation for $r > 0$ --}
We focus on the entropic term, which we may write as:
\begin{align}\label{eq:entropic_term_1}
\frac{1}{2} \log \det \bQ &= -\log \int_{\bbR^r} \frac{\rd \bu}{(2\pi)^{r/2}} \exp\Big\{-\frac{1}{2} \bu^\intercal \bQ \bu\Big\}.
\end{align}
We fix a $k$-RSB ansatz, cf.\ Fig.~\ref{fig:q_frsb}, and we will take in the end the limit $k \to \infty$.
We have in the $r \downarrow 0$ limit $m_{-1} \coloneqq r \leq m_0 \leq m_1 \leq \cdots \leq m_{k-1} \leq m_k = 1$,
and the parameters $q_0 \leq q_1 \leq \cdots \leq q_k \leq q_{k+1} = 1$.
Recall that in this ansatz, the hierarchical overlap matrix $\{Q_{ab}\}$ can be written as:
\begin{align*}
\bQ &= \sum_{i=0}^{k+1} (q_i - q_{i-1}) \bJ^{(r)}_{m_{i-1}},
\end{align*}
with $\bJ_{m}^{(r)}$ the block-diagonal matrix with $r/m$ diagonal blocks of size $m$, each equal to the all-ones matrix.
In order to compute the integral of eq.~\eqref{eq:entropic_term_1}, we use a simple yet very powerful identity introduced in \cite{duplantier1981comment},
valid for any matrix $\{Q_{ab}\}$ (not necessarily a hierarchical RSB matrix):
\begin{align}\label{eq:duplantier}
\exp\Big\{-\frac{1}{2} \sum_{a,b=1}^r Q_{ab} u_a u_b\Big\}
&= \exp\Bigg(\frac{1}{2} \sum_{a,b=1}^r Q_{ab} \frac{\partial^2}{\partial h_a \partial h_b}\Bigg)\Bigg[\prod_{c=1}^r \exp(-i u_c h_c)\Bigg]_{\bh = 0}.
\end{align}
This identity can be shown by Taylor-expanding the exponential involving the differential operator.
Using it in eq.~\eqref{eq:entropic_term_1} we get:
\begin{align}\label{eq:entropic_term_2}
- \frac{1}{2} \log \det \bQ &= \log \int_{\bbR^r} \frac{\rd \bu}{(2\pi)^{r/2}} \exp\Bigg(\frac{1}{2} \sum_{a,b=1}^r Q_{ab} \frac{\partial^2}{\partial h_a \partial h_b}\Bigg)\Bigg[\prod_{c=1}^r \exp(-i u_c h_c)\Bigg]_{\bh = 0}.
\end{align}
Note that $\bu$ does not appear in the differential operator, so that one can exchange the differential operator and the integral over $\bu$.
Integrating with respect to $\bu$ then yields, using the Fourier representation of the delta distribution (we denote $\partial_a = \partial/\partial h_a$):
\begin{align}\label{eq:entropic_term_3}
&\hspace{-0.3cm}- \frac{1}{2} \log \det \bQ = \log \Bigg\{ (2\pi)^{r/2} \exp\Bigg(\frac{1}{2} \sum_{a,b=1}^r Q_{ab} \partial_a \partial_b\Bigg)\Bigg[\prod_{c=1}^r \delta(h_c)\Bigg]_{\bh = 0}\Bigg\}, \nonumber \\
&\hspace{-0.3cm}= \log \Bigg\{ (2\pi)^{r/2} \exp\Bigg(\frac{1}{2}\sum_{i=0}^{k+1} (q_i - q_{i-1}) \sum_{a,b=1}^r (\bJ^{(r)}_{m_{i-1}})_{ab} \partial_a \partial_b \Bigg)\Bigg[\prod_{c=1}^r \delta(h_c)\Bigg]_{\bh = 0}\Bigg\}, \nonumber \\
&\hspace{-0.3cm}= \log \Bigg\{ (2\pi)^{r/2} \exp\Bigg(\frac{1}{2}\sum_{i=0}^{k} (q_i - q_{i-1}) \sum_{a,b=1}^r (\bJ^{(r)}_{m_{i-1}})_{ab} \partial_a \partial_b\Bigg) \exp\Bigg(\frac{1-q_k}{2} \sum_{a=1}^r \partial^2_a\Bigg)\Bigg[\prod_{c=1}^r \delta(h_c)\Bigg]_{\bh = 0}\Bigg\}.
\end{align}
We now use the following crucial identity, valid for any $\omega \geq 0$ and smooth function $f$, which can be shown by Taylor-expanding $f$ around $h$ inside the integral on the right-hand side:
\begin{align}\label{eq:identity_convolution}
\exp \Big(\frac{\omega}{2} \partial_h^2\Big) f[h] &= [\gamma_\omega \star f](h) = \int \frac{\rd z}{\sqrt{2 \pi \omega}} e^{-\frac{z^2}{2\omega}} f(h-z).
\end{align}
Here we denoted $\gamma_\omega(x) = e^{-x^2/(2\omega)} / \sqrt{2\pi \omega}$, and $\gamma_0(x) = \delta(x)$.
Using eq.~\eqref{eq:identity_convolution} inside eq.~\eqref{eq:entropic_term_3} we reach:
\begin{align}\label{eq:entropic_term_4}
&- \frac{1}{2} \log \det \bQ = \log \Bigg\{ (2\pi)^{r/2} \exp\Bigg(\frac{1}{2}\sum_{i=0}^{k} (q_i - q_{i-1}) \sum_{a,b=1}^r (\bJ^{(r)}_{m_{i-1}})_{ab} \partial_a \partial_b\Bigg) \Bigg[\prod_{c=1}^r \gamma_{1-q_k}(h_c)\Bigg]_{\bh = 0}\Bigg\}.
\end{align}
We will iteratively apply the differential operator in the exponential, starting from $i = 0$ up to $i = k$.
We will make use of another important identity, which is just a consequence of simple differential calculus combined with eq.~\eqref{eq:identity_convolution},
and valid for any $p,n \in \bbN$ and smooth $R(h_1, \cdots, h_n)$:
\begin{align}\label{eq:identity_diff_calculus}
\begin{dcases}
\Bigg[\Big(\sum_{a=1}^n \frac{\partial}{\partial h_a}\Big)^p R(h_1, \cdots, h_n)\Bigg]_{h_a = h} &= \frac{\partial^p}{\partial h^p} [h \mapsto R(h,h,\cdots,h)], \\
\exp\Bigg(\frac{\omega}{2} \Big(\sum_{a=1}^n \partial_a\Big)^2 \Bigg)[R(h_1, \cdots, h_n)]_{h_a = h} \!\!\!\! &=e^{\frac{\omega}{2} \frac{\partial^2}{\partial h^2}} R(h,\cdots,h) = \gamma_\omega \star [h \mapsto R(h,\cdots,h)].
\end{dcases}
\end{align}
Let us now come back to eq.~\eqref{eq:entropic_term_4}. We separate the term $i = 0$, and we have,
using eq.~\eqref{eq:identity_diff_calculus}:
\begin{align}\label{eq:entropic_term_5}
\nonumber
&\exp\Bigg(\frac{1}{2}\sum_{i=0}^{k} (q_i - q_{i-1}) \sum_{a,b=1}^r (\bJ^{(r)}_{m_{i-1}})_{ab} \partial_a \partial_b\Bigg) \Bigg[\prod_{c=1}^r \gamma_{1-q_k}(h_c)\Bigg]_{\bh = 0} \\
&= \exp\Bigg(\frac{q_0}{2} \Big(\sum_a \partial_a\Big)^2\Bigg) [\Xi(\bh)]_{\bh = 0}
= \gamma_{q_0} \star [h \mapsto \Xi(h, \cdots, h)]_{h = 0},
\end{align}
with $\Xi(\bh)$ defined as:
\begin{align*}
\Xi(\bh) &\coloneqq \exp\Bigg(\frac{1}{2}\sum_{i=1}^{k} (q_i - q_{i-1}) \sum_{a,b=1}^r (\bJ^{(r)}_{m_{i-1}})_{ab} \partial_a \partial_b\Bigg)\Bigg[\prod_{c=1}^r \gamma_{1-q_k}(h_c)\Bigg].
\end{align*}
Note that $\Xi(\bh)$ factorizes over the inner diagonal blocks of size $m_0$, and we have
$\Xi(h,\cdots,h) = \zeta(h)^{r/{m_0}}$, with
\begin{align}
\label{eq:def_f_entropic_contribution}
\zeta(h)&\coloneqq \exp\Bigg(\frac{1}{2}\sum_{i=1}^{k} (q_i - q_{i-1}) \sum_{a,b=1}^{m_0} (\bJ^{(m_0)}_{m_{i-1}})_{ab} \partial_a \partial_b\Bigg)\Bigg[\prod_{c=1}^{m_0} \gamma_{1-q_k}(h_c)\Bigg]_{h_c = h}.
\end{align}
Therefore, putting it back into eq.~\eqref{eq:entropic_term_5} and using eq.~\eqref{eq:identity_diff_calculus}, we have:
\begin{align*}
&\exp\Bigg(\frac{1}{2}\sum_{i=0}^{k} (q_i - q_{i-1}) \sum_{a,b=1}^r (\bJ^{(r)}_{m_{i-1}})_{ab} \partial_a \partial_b\Bigg) \Bigg[\prod_{c=1}^r \gamma_{1-q_k}(h_c)\Bigg]_{\bh = 0}
\! \! \! = [\gamma_{q_0} \star \zeta^{r/m_0}](h = 0),
\end{align*}
with $\zeta(h)$ defined in eq.~\eqref{eq:def_f_entropic_contribution}.
This procedure can then be repeated iteratively on the diagonal blocks, all the way to the innermost ones.
Eq.~\eqref{eq:entropic_term_4} then becomes:
\begin{align}\label{eq:entropic_term_7}
&- \frac{1}{2} \log \det \bQ = \log\Big[ (2\pi)^{r/2} \, \gamma_{q_0} \star g^{r/m_0} (m_0, h=0) \Big],
\end{align}
with the functions $g(m_i,h)$ iteratively defined as:
\begin{align}\label{eq:iterative_construction_g}
\begin{dcases}
g(m_k = 1, h) &= \gamma_{1-q_k}(h), \\
g(m_{i-1}, h) &= \gamma_{q_i - q_{i-1}} \star g^{m_{i-1}/m_i}(m_i, h).
\end{dcases}
\end{align}
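Note that this finite-$k$ recursion is directly implementable on a grid; a minimal sketch using an FFT-based Gaussian convolution of the type detailed in Section~\ref{subsec_app:convolutions} (the sequences $(q_i)$ and $(m_i)$ below are arbitrary test values):
\begin{verbatim}
# Iterate g(m_{i-1}) = gamma_{q_i - q_{i-1}} * g^{m_{i-1}/m_i}(m_i).
import numpy as np

Hbox, N = 12.0, 4096
h = np.linspace(-Hbox, Hbox, N, endpoint=False)
freq = np.fft.fftfreq(N, d=h[1] - h[0])

def gauss_conv(f, w):
    """Convolution with the centered Gaussian of variance w (FFT)."""
    return np.fft.ifft(np.fft.fft(f) * np.exp(-2*np.pi**2*w*freq**2)).real

q = [0.2, 0.5, 0.8, 0.95]       # q_0 <= ... <= q_k   (test values)
m = [0.1, 0.3, 0.6, 1.0]        # m_0 <= ... <= m_k = 1
g = np.exp(-h**2/(2*(1 - q[-1]))) / np.sqrt(2*np.pi*(1 - q[-1]))
for i in range(len(q) - 1, 0, -1):
    g = gauss_conv(np.maximum(g, 1e-300)**(m[i-1]/m[i]), q[i] - q[i-1])
# g now approximates g(m_0, h), entering eq. (entropic_term_7)
\end{verbatim}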
We now take the $k \to \infty$ (full-RSB) limit in eq.~\eqref{eq:iterative_construction_g}.
In this limit we can approximate any non-decreasing function $q(x)$ (see Fig.~\ref{fig:q_frsb}):
taking for $(m_i)_{i=0}^k$ a regular grid on $x \in [0, 1]$,
we have $m_0 \to 0$, $m_{k-1} \to 1$, and for all $i = 0, \cdots, k-1$, $m_i \to x$ with $m_{i} - m_{i-1} = \rd x$.
Moreover $q_k \to q(1)$, $q_0 \to q(0)$, and $q_{i+1} - q_{i} = \dot{q}(x) \rd x$.
We sometimes also use the notation $q_m = q(0)$, $q_M = q(1)$.
To make things clearer, we denote derivatives w.r.t.\ $x$ with dots, and those w.r.t.\ $h$ with the usual prime.
The second line of eq.~\eqref{eq:iterative_construction_g} becomes, at first order in $\rd x$ (recall the crucial eq.~\eqref{eq:identity_convolution}),
for $x \in (0,1)$:
\begin{align*}
g(x, h) - \rd x \, \dot{g}(x,h) &= e^{\frac{\dot{q}(x)}{2} \rd x \, \partial_h^2} \Big[g - \frac{\rd x}{x} g \log g\Big](x,h) = \Big(1 + \frac{\dot{q}(x)}{2} \rd x \, \partial_h^2\Big) \Big[g - \frac{\rd x}{x} g \log g\Big](x,h).
\end{align*}
Comparing the terms at first order in $\rd x$, we reach the PDE:
\begin{align}\label{eq:Parisi_PDE_g}
\dot{g}(x,h) &= -\frac{\dot{q}(x)}{2} g''(x,h) + \frac{1}{x} g \log g (x,h), \hspace{0.5cm} x \in (0, 1).
\end{align}
It is convenient to rewrite eq.~\eqref{eq:Parisi_PDE_g} in terms of $f(x,h) \coloneqq (1/x) \log g(x,h)$, which yields the \emph{Parisi PDE}:
\begin{align}\label{eq:Parisi_PDE_f}
\begin{dcases}
f(1,h) &= \log \gamma_{1-q(1)}(h), \\
\dot{f}(x,h) &= - \frac{\dot{q}(x)}{2} \big[f''(x,h) + x f'(x,h)^2\big], \hspace{0.5cm} x \in (0,1).
\end{dcases}
\end{align}
The boundary condition in the first line was given by eq.~\eqref{eq:iterative_construction_g}:
$g(1, h) = \gamma_{1-q(1)}(h)$.
\myskip
\textbf{Remark: universality of the Parisi PDE --}
As can be already hinted by the calculation above and the method of \cite{duplantier1981comment}, the Parisi PDE
described in eq.~\eqref{eq:Parisi_PDE_f} is actually extremely general: the specificities of the term that we wish to compute only appear
in the boundary conditions at $x = 1$, while the evolution equation is only dependent on the ultrametric structure of the problem.
We will see a clear example of this when computing the energetic contribution to the free entropy.
\myskip
\textbf{The $r \to 0$ limit --}
Taking the $r \to 0$ limit in eq.~\eqref{eq:entropic_term_7} yields finally:
\begin{align*}
- \frac{1}{2} \partial_r [\log \det \bQ]_{r = 0} = \frac{\log 2\pi}{2} + \gamma_{q_m} \star f (0,h = 0).
\end{align*}
\textbf{Solution to the Parisi PDE for the entropic contribution --}
Fortunately, with the boundary condition that we have here, the Parisi PDE of eq.~\eqref{eq:Parisi_PDE_f} is analytically solvable.
Indeed $g(x,h)$ always remains (up to a scaling) a centered Gaussian function of $h$; equivalently, we can look for a solution of the form
\begin{align*}
f(x,h) &= \frac{1}{x} \log C(x) + \frac{1}{x} \log \gamma_{\omega(x)}(h),
\end{align*}
with $\omega(1) = 1 - q(1)$ and $C(1) = 1$.
After some algebra, this yields simple ODEs for $\omega$ and $C$, which are easily verified to be solved by:
\begin{align*}
\begin{dcases}
\omega(x) &= \frac{1 - x q(x) - \int_x^1 q(u) \rd u}{x} = \frac{\lambda(x)}{x}, \\
\log C(x) &= \frac{x}{2} \int_x^1 \frac{\rd u}{u^2} [1 + \log 2\pi \omega(u)].
\end{dcases}
\end{align*}
Here $\lambda(x)$ is the function defined in eq.~\eqref{eq:def_lambdax}.
In particular, for every $x \in (0,1)$, we have:
\begin{align*}
\gamma_{q_m} \star f(x,h=0) &= \frac{1}{2} \int_x^1 \frac{\rd u}{u^2} \Big[1 + \log 2\pi \frac{\lambda(u)}{u}\Big] - \frac{1}{2 x} \log \Big[2 \pi \frac{\lambda(x)}{x}\Big] - \frac{q_m}{2 \lambda(x)}.
\end{align*}
We can take the limit of this equation as $x \to 0$.
With our notations, we have $\lambda(0) = 1 - \langle q \rangle$, $\lambda(1) = 1 - q_M$ and $\dot{\lambda}(u) = - u \dot{q}(u)$.
By integration by parts, we reach:
\begin{align*}
\gamma_{q_m} \star f(x,h=0) &= \frac{1}{2} \int_x^1 \frac{\rd u}{u} \Big[\frac{- u \dot{q}(u)}{\lambda(u)} - \frac{1}{u}\Big]- \frac{1}{2} [1 + \log 2\pi(1-q_M)] + \frac{1}{2 x} - \frac{q_m}{2 \lambda(x)}, \nonumber \\
&= -\frac{1}{2} \int_x^1 \rd u \frac{\dot{q}(u)}{\lambda(u)} - \frac{1}{2} \log 2\pi(1-q_M) - \frac{q_m}{2 \lambda(x)}.
\end{align*}
\noindent
\textbf{Final result for the entropic contribution --}
Therefore, taking the limit $x \to 0$, we have
\begin{align}\label{eq:entropic_frsb}
&\partial_r [\log \det \bQ]_{r = 0} = -\log 2\pi - 2 \gamma_{q_m} \star f (0,h = 0) = \log (1-q_M) + \frac{q_m}{1-\langle q \rangle} + \int_0^1 \rd u \frac{\dot{q}(u)}{\lambda(u)}.
\end{align}
Note that eq.~\eqref{eq:entropic_frsb} is also equivalent to a formula given in Appendix II of \cite{mezard1991replica}
as can be seen by integration by parts:
\begin{align}\label{eq:entropic_frsb_2}
\partial_r \big[\log \det \bQ \big]_{r=0} &= \log (1-\langle q \rangle) + \frac{q_m}{1 - \langle q \rangle} - \int_0^1 \frac{\rd x}{x^2} \log \frac{\lambda(x)}{ 1- \langle q \rangle}.
\end{align}
In the integration by parts, one uses $\lambda(0) = 1 - \langle q \rangle$, $\lambda(1) = 1 - q_M$, and $\lambda(u) = \lambda(0) + \mathcal{O}(u^2)$.
\subsection{Energetic contribution}\label{subsec_app:energetic}
The second part of the free entropy is the energetic contribution, i.e.\ $\alpha G_{2,r}(\bQ)$, with
\begin{align*}
G_{2,r}(\bQ) &\coloneqq \log \int_{\bbR^r} \frac{\rd \bu \rd \bv}{(2\pi)^{r}} e^{-\frac{1}{2} \sum_{a,b} Q^{ab} v^a v^b -\beta \sum_{a=1}^r \theta(u^a) + i \sum_{a=1}^r u^a v^a}.
\end{align*}
Again using the identity of eq.~\eqref{eq:duplantier}, we have:
\begin{align*}
G_{2,r}(\bQ) &= \log \int_{\bbR^r} \frac{\rd \bu \rd \bv}{(2\pi)^{r}} e^{-\beta \sum_{a=1}^r \theta(u^a) + i \sum_{a=1}^r u^a v^a} e^{\frac{1}{2} \sum_{a,b} Q^{ab} \partial_a \partial_b} \Bigg[\prod_{c=1}^r e^{-i v^c h_c}\Bigg]_{\bh=0}, \\
&= \log e^{\frac{1}{2} \sum_{a,b} Q^{ab} \partial_a \partial_b} \Bigg[e^{-\beta \sum_{a=1}^r \theta(h_a)}\Bigg]_{\bh=0}.
\end{align*}
One can notice that this equation is extremely similar to eq.~\eqref{eq:entropic_term_3},
except that the function $\delta(h)$ has been replaced with $e^{-\beta \theta(h)}$. However, the whole procedure described above to obtain the Parisi PDE is unchanged, since it did not depend on the specifics of this function:
the PDE itself remains the same, and only the boundary condition at $x = 1$ differs.
In the end, this yields:
\begin{align*}
\partial_r [G_{2,r}(\bQ)]_{r = 0} &= \gamma_{q(0)} \star f(x = 0, h = 0),
\end{align*}
with $f(x,h)$ given as the solution to the Parisi PDE with specific boundary condition at $x=1$:
\begin{align}\label{eq:Parisi_PDE_f_energetic}
\begin{dcases}
f(1,h) &= \log [\gamma_{1-q(1)} \star e^{-\beta \theta}](h), \\
\dot{f}(x,h) &= - \frac{\dot{q}(x)}{2} \big[f''(x,h) + x f'(x,h)^2\big], \hspace{0.5cm} x \in (0,1).
\end{dcases}
\end{align}
Note that one can equivalently write this PDE in terms of the parameter $q$ rather than $x$ by a change of variable $q = q(x)$, as described e.g.\ in \cite{urbani2018statistical}.
\subsection{Recovering the RS result from the full RSB equations}\label{subsec_app:rs_from_rsb}
In this paragraph we show that eq.~\eqref{eq:frsb_eq_rs} is equivalent to eq.~\eqref{eq:q_RS_eq_new}.
We denote $q_0 = q$ coherently with the RS computation.
One computes easily that
\begin{align*}
\gamma_{1-q} \star e^{-\beta \theta} (h) &= 1 - (1-e^{-\beta}) H \Big(\frac{-h}{\sqrt{1-q}}\Big).
\end{align*}
In particular, we have:
\begin{align*}
\Big[\gamma_{1-q} \star e^{-\beta \theta}\Big]' (h) &= \frac{1-e^{-\beta}}{\sqrt{1-q}} H' \Big(\frac{-h}{\sqrt{1-q}}\Big).
\end{align*}
Therefore eq.~\eqref{eq:frsb_eq_rs} reads:
\begin{align*}
\frac{q}{(1-q)^2} &= \alpha \int \rd h \frac{e^{-\frac{h^2}{2q}}}{(1-q)\sqrt{2 \pi q}}
\Bigg\{\frac{(1-e^{-\beta}) H' \Big(\frac{-h}{\sqrt{1-q}}\Big)}{1 - (1-e^{-\beta}) H \Big(\frac{-h}{\sqrt{1-q}}\Big)}\Bigg\}^2,
\end{align*}
or equivalently:
\begin{align}\label{eq:frsb_to_rs_1}
\frac{q}{1-q} &= \alpha \int \mcD \xi
\Bigg\{\frac{(1-e^{-\beta}) H' \Big(\xi \sqrt{\frac{q}{1-q}}\Big)}{1 - (1-e^{-\beta}) H \Big(\xi \sqrt{\frac{q}{1-q}}\Big)}\Bigg\}^2.
\end{align}
Since $H'(x) = - e^{-x^2/2}/\sqrt{2\pi}$, letting
$f(\xi) \coloneqq 1 - (1-e^{-\beta}) H [\xi \sqrt{q/(1-q)}]$ we can rewrite eq.~\eqref{eq:frsb_to_rs_1}
and use an integration by parts:
\begin{align*}
\frac{q}{1-q} &= -\alpha (1-e^{-\beta}) \sqrt{\frac{1-q}{q}} \int \rd \xi \frac{e^{-\frac{\xi^2}{2(1-q)}}}{2\pi} \Big[-\frac{f'(\xi)}{f(\xi)^2}\Big], \\
&= -\alpha (1-e^{-\beta}) \sqrt{\frac{1-q}{q}} \int \rd \xi \frac{e^{-\frac{\xi^2}{2(1-q)}}}{2\pi} \frac{\xi}{1-q} \frac{1}{f(\xi)}, \\
&= \alpha (1-e^{-\beta}) \sqrt{\frac{1}{q(1-q)}} \int \mcD \xi \frac{\xi H' \Big(\xi \sqrt{\frac{q}{1-q}}\Big)}{1 - (1-e^{-\beta}) H \Big(\xi \sqrt{\frac{q}{1-q}}\Big)},
\end{align*}
which is equivalent to eq.~\eqref{eq:q_RS_eq_new}.
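This chain of manipulations can also be confirmed numerically; a minimal sketch checking that the initial and final integrals agree (arbitrary test values for $\beta$ and $q$):
\begin{verbatim}
# Check the integration by parts above at beta = 3, q = 0.6.
import numpy as np
from scipy.special import erfc

H = lambda x: 0.5 * erfc(x / np.sqrt(2))
Hp = lambda x: -np.exp(-x**2/2) / np.sqrt(2*np.pi)    # H'(x)

beta, q = 3.0, 0.6
s = np.sqrt(q / (1 - q))
xi = np.linspace(-12, 12, 400001)
D = np.exp(-xi**2/2) / np.sqrt(2*np.pi) * (xi[1] - xi[0])
f = 1 - (1 - np.exp(-beta)) * H(xi * s)
lhs = np.sum(D * ((1 - np.exp(-beta)) * Hp(xi*s) / f)**2)
rhs = ((1 - np.exp(-beta)) / np.sqrt(q*(1 - q))
       * np.sum(D * xi * Hp(xi*s) / f))
print(lhs, rhs)   # should agree
\end{verbatim}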
\subsection{Technicalities of the derivation of the algorithmic procedure}\label{subsec_app:derivation_algorithmic_frsb}
\noindent
We give here some details on the arising of eqs.~\eqref{eq:frsb_procedure_iii}-\eqref{eq:frsb_procedure_vi}.
Recall that here all the quantities are considered at zero-temperature, with the scaling of eq.~\eqref{eq:frsb_zerotemp_scaling}.
\begin{itemize}[leftmargin=*]
\item Eq.~\eqref{eq:frsb_procedure_iii} is a general relation between $q^{-1}$, $f$ and $\Lambda$ when eq.~\eqref{eq:frsb_eqs} is satisfied. It is explained for instance in \cite{franz2017universality},
see eq.~(B.4).
\item Eq.~\eqref{eq:frsb_procedure_iv} is a consequence of the general relation between $q^{-1}(x)$ (the function corresponding to the overlap matrix $\bQ^{-1}$) and $q(x)$ in the full RSB ansatz,
which is (see e.g.\ eq.~(B.9) in \cite{franz2017universality}):
\begin{align*}
\frac{1}{\lambda(x)} - \frac{1}{\lambda(0)} &= - x q^{-1}(x) + \int_0^x \rd y \, q^{-1}(y) \hspace{1cm} \textrm{and} \hspace{1cm} \lambda(0) = \sqrt{-\frac{q(0)}{q^{-1}(0)}}.
\end{align*}
Discretization of this relation yields eq.~\eqref{eq:frsb_procedure_iv}.
\item One can invert eq.~\eqref{eq:def_lambdax} to obtain $q(x)$ as a function of $\lambda(x)$ via:
\begin{align*}
q(x) &= 1 - \frac{\lambda(x)}{x} + \int_x^1 \frac{\rd y}{y^2} \lambda(y).
\end{align*}
It is the discretization of this equation that yields eq.~\eqref{eq:frsb_procedure_v}.
\item Eq.~\eqref{eq:frsb_procedure_vi} is a consequence of the boundary condition of eq.~\eqref{eq:frsb_eq_1} taken at $x = 1$, followed by a change of variable from $x$ to $q$ in the arguments
of the functions $\Lambda,f,\lambda$. After these steps, eq.~\eqref{eq:frsb_eq_1} becomes, for the \emph{unrescaled variables} and any $\beta \geq 0$:
\begin{align*}
\frac{q(0)}{\lambda(q(0))^2} + \int_{q(0)}^{q(1)} \frac{\rd p}{\lambda(p)^2} &= \alpha \int \rd h \Lambda(q(1), h) f'(q(1), h)^2.
\end{align*}
After taking the $\beta \to \infty$ limit, this yields for the variables that are rescaled as $\beta \to \infty$ according to eq.~\eqref{eq:frsb_zerotemp_scaling} (dropping the $\infty$ subscript):
\begin{align*}
\frac{q_0}{\lambda(q_0)^2} + \int_{q_0}^{1} \frac{\rd p}{\lambda(p)^2} &= \alpha \int \rd h \Lambda(1, h) f'(1, h)^2.
\end{align*}
Moreover, $f'(1,h) = - (h / \chi) \indi \{h \in (0,\sqrt{2\chi})\}$.
Therefore, rescaling $t = h / \sqrt{2\chi}$ (and writing, with a slight abuse of notation, $\Lambda(1,h) = \Lambda(1,t)$), we have:
\begin{align}\label{eq_app:frsb_iii_1}
\frac{q_0}{\lambda(q_0)^2} + \int_{q_0}^{1} \frac{\rd p}{\lambda(p)^2} &= \frac{2^{3/2}\alpha}{\sqrt{\chi}} \int_0^{1} \rd t \Lambda(1, t) \, t^2.
\end{align}
We focus on the left-hand side of this last equation, in the $k$-RSB ansatz.
We first use that $\lambda(q) = \chi + \int_q^1 \rd p \, x(p)$, a simple consequence of eq.~\eqref{eq:def_lambdax} after a change of variables and rescaling.
Therefore, we have:
\begin{align*}
\int_{q_0}^{1} \frac{\rd p}{\lambda(p)^2} &= \sum_{i=0}^{k-1} \int_{q_i}^{q_{i+1}} \frac{\rd p}{\Big[\chi + \sum_{j=i+1}^{k-1} (q_{j+1} - q_j) x_j + (q_{i+1} - p) x_i\Big]^2}, \\
&= \sum_{i=0}^{k-1} \frac{(q_{i+1} - q_i)}{\Big[\chi + \sum_{j=i+1}^{k-1} (q_{j+1} - q_j) x_j\Big] \Big[\chi + \sum_{j=i}^{k-1} (q_{j+1} - q_j) x_j\Big]}.
\end{align*}
Using the convention $q_{-1} = 0$ and $x_{-1} = 0$, we therefore reach from eq.~\eqref{eq_app:frsb_iii_1} that:
\begin{align*}
\sum_{i=0}^k \frac{(q_i - q_{i-1})}{\Big[\chi + \sum_{j=i+1}^k (q_j - q_{j-1}) x_{j-1}\Big]\Big[\chi + \sum_{j=i}^k (q_j - q_{j-1}) x_{j-1}\Big]}
&= \frac{2^{3/2}\alpha}{\sqrt{\chi}} \int_0^{1} \rd t \, \Lambda( 1,t) \, t^2,
\end{align*}
which is precisely eq.~\eqref{eq:frsb_procedure_vi}.
\end{itemize}
\subsection{Numerical results of the procedure}\label{subsec_app:numerical_results_frsb}
In Fig.~\ref{fig_app:convergence_frsb} we present the results of typical iterations of the algorithmic procedure described above.
For different values of $\alpha$ and the RSB parameter $k$ we show the evolution of the estimates of $f^\star(\alpha)$, the susceptibility $\chi$, and the function $q(x)$, along the iterations.
In all the cases implemented we observe power-law convergence to the solution, and very consistent results when varying the parameters used in the algorithm (in particular when increasing $k$).
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{convergence_FRSB_alpha_6.0_k_200_xmax_10.0_H_40.0_c_30.0.pdf}
\includegraphics[width=0.9\textwidth]{convergence_FRSB_alpha_6.8_k_200_xmax_10.0_H_40.0_c_30.0.pdf}
\caption{
Illustration of the convergence of the Full RSB procedure for different values of $\alpha$,
for $k =200, c = 30, H = 40$ (see Section~\ref{subsec_app:convolutions} for the definitions of $H,c$).
On the left we show the convergence of $\chi$ and $f^\star(\alpha)$ along the iterations, as well as (in inset) the evolution of the error $\|q_t - q_{t-1}\|_\infty$ up to the threshold $10^{-4}$ we took for convergence.
On the right, we show the evolution of $q(x)$ along the iterations. We find very consistent behaviors when varying the parameter $k$, indicating that our simulations indeed capture well the Full RSB limit.
We use $x_\mathrm{max} = 10$, well validated by the functions $q(x)$ we obtain.
\label{fig_app:convergence_frsb}}
\end{figure}
\subsection{Some details on the implementation of convolutions}\label{subsec_app:convolutions}
\subsubsection{Convolutions via DFTs}
In order to implement the algorithmic procedure of Section~\ref{subsec:zero_temp_algorithmic_frsb}, we use a discrete Fourier transform approach.
We refer to \cite{getreuer2013survey} for a review on Gaussian convolution algorithms.
The goal is to compute the convolution of a centered Gaussian $\gamma_\omega$ with variance $\omega > 0$ and a function
$f(h)$:
\begin{align*}
\gamma_\omega \star f(h) &= \int \rd z \, \gamma_\omega(z) \, f(h-z).
\end{align*}
We fix $N \in \bbN^\star$ and $H > 0$, and we consider a grid $h_\mu = \mu H / N$, with $\mu \in \{-N, \cdots, N\}$.
In order to leverage analytical formulas for the DFT of the Gaussian, we use a Shannon-Whittaker interpolation for $f$, i.e.\
we approximate $f$ as:
\begin{align*}
f(h) &\simeq \sum_{\nu=-N}^N f_\nu \, \varphi\Big[\frac{Nh}{H} - \nu\Big],
\end{align*}
with $\varphi(x) = \mathrm{sinc}(x) = \sin(\pi x)/ (\pi x)$. Since $\varphi(\nu) = 0$ for all $\nu \in \bbZ^\star$, and $\varphi(0) = 1$, we have
$f_\mu = f(h_\mu)$.
This approximation transfers into an approximation for $\gamma_\omega \star f$ as:
\begin{align*}
\gamma_\omega \star f(h) \simeq \sum_{\nu=-N}^N f_\nu \, (\gamma_\omega \star \varphi_\nu)(h),
\end{align*}
with $\varphi_\nu(h) \coloneqq \varphi(Nh / H - \nu)$.
Thus we have, with $(\gamma_\omega \star f)_\mu = \gamma_\omega \star f(h_\mu)$,
and using $\varphi_\nu(x) = \varphi_0(x - h_\nu)$:
\begin{align}\label{eq:dft_primal}
(\gamma_\omega \star f)_\mu \simeq \sum_{\nu=-N}^N f_\nu (\gamma_\omega \star \varphi_\nu)_\mu = \sum_{\nu=-N}^N f_\nu (\gamma_\omega \star \varphi_0)_{\mu-\nu}.
\end{align}
Note that we naturally extended $(\gamma_\omega \star \varphi_0)_\mu$ to all $\mu \in \bbZ$, since these coefficients have an analytic expression.
In the same way, we extend $f_\nu = 0$ if $|\nu| > N$.
For a general sequence $(f_\nu)_{\nu=-N}^N$, we define its Discrete Fourier Transform (DFT) as, for $k \in \{0, \cdots, 2N\}$:
\begin{align}
\label{eq:def_dft}
\begin{dcases}
\hat{f}_k &= \sum_{\mu=-N}^N e^{-\frac{2\pi ik (\mu + N)}{2N+1}} f_\mu = e^{-\frac{2\pi i k N}{2N+1}} \sum_{\mu=-N}^N e^{-\frac{2\pi i k\mu }{2N+1}} f_\mu, \\
f_\mu &= \frac{1}{2N+1} \sum_{k=0}^{2N} e^{\frac{2\pi i k (\mu + N)}{2N+1}} \hat{f}_k.
\end{dcases}
\end{align}
Taking the DFT of eq.~\eqref{eq:dft_primal}, one finds:
\begin{align}\label{eq:dft_convolution_1}
\widehat{\gamma_\omega \star f}_k &\simeq e^{\frac{2\pi i k N}{2N+1}} \, \hat{f}_k \, (\widehat{\gamma_\omega \star \varphi_0})_k.
\end{align}
Moreover, we define the Fourier transform as $\tilde{f}(\xi) \coloneqq \int \rd x \, f(x) \, e^{- 2 i \pi x \xi}$, and have
then easily $\tilde{\varphi}_0(\xi) = (H/N) \indi\{|\xi| \leq N/(2H)\}$.
The Fourier transform of the convolution is $\widetilde{f \star g}(\xi) = \tilde{f}(\xi) \tilde{g}(\xi)$.
This yields, via inverse Fourier transformation:
\begin{align*}
(\gamma_\omega \star \varphi_0)_\mu &= \frac{H}{N} \int_{|\xi| \leq \frac{N}{2H}} \rd \xi \, e^{- 2 \pi^2 \omega \xi^2 + \frac{2i \pi H \mu \xi}{N}}
= \int_{|\xi| \leq \frac{1}{2}} \rd \xi \, e^{- \frac{2 \pi^2 N^2 \omega \xi^2}{H^2} + 2i \pi \mu \xi}.
\end{align*}
Therefore, we have by eq.~\eqref{eq:def_dft}:
\begin{align*}
(\widehat{\gamma_\omega \star \varphi_0})_k &= e^{-\frac{2\pi i k N}{2N+1}} \int_{|\xi| \leq \frac{1}{2}} \rd \xi \, e^{- \frac{2 \pi^2 N^2 \omega \xi^2}{H^2}}\sum_{\mu=-N}^N e^{-\frac{2\pi i k\mu }{2N+1} + 2i \pi \mu \xi}.
\end{align*}
Taking $N \gg 1$, the term on the right is well approximated by the Dirac comb:
\begin{align*}
\sum_{\mu=-N}^N e^{-\frac{2\pi i k\mu }{2N+1} + 2i \pi \mu \xi} &\simeq \sum_{n \in \bbZ} \delta\Big(\xi - \frac{k}{2N+1} - n\Big).
\end{align*}
However, since $|\xi| \leq 1/2$ and $k \in \{0,\cdots,2N\}$, this implies:
\begin{align*}
(\widehat{\gamma_\omega \star \varphi_0})_k &\underset{N \to \infty}{\simeq} e^{-\frac{2\pi i k N}{2N+1}}\times
\begin{dcases}
\, e^{- \frac{2 \pi^2 N^2 \omega}{H^2} \Big[\frac{k}{2N+1}\Big]^2} \hspace{1cm} &\textrm{if } k \leq N, \\
\, e^{- \frac{2 \pi^2 N^2 \omega}{H^2} \Big[\frac{k}{2N+1} - 1\Big]^2} \hspace{1cm} &\textrm{if } k > N.
\end{dcases}
\end{align*}
Plugging it back into eq.~\eqref{eq:dft_convolution_1}, we finally obtain the formula we use for the DFT of the convolution $\gamma_\omega \star f$:
\begin{align*}
\widehat{\gamma_\omega \star f}_k &\simeq \hat{f}_k \times
\begin{dcases}
\,e^{- \frac{2 \pi^2 N^2 \omega}{H^2} \Big[\frac{k}{2N+1}\Big]^2} \hspace{1cm} &\textrm{if } k \leq N, \\
\,e^{- \frac{2 \pi^2 N^2 \omega}{H^2} \Big[\frac{k}{2N+1} - 1\Big]^2} \hspace{1cm} &\textrm{if } k > N.
\end{dcases}
\end{align*}
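A minimal \texttt{numpy} sketch of this scheme (test function and parameters are arbitrary): the two branches of the multiplier are exactly the standard FFT frequency ordering, and the phase factors of eq.~\eqref{eq:def_dft} cancel as shown above, so plain forward/inverse FFTs on the naturally ordered samples suffice. We compare against brute-force quadrature at $h = 0$:
\begin{verbatim}
# DFT-based Gaussian convolution, checked against direct quadrature.
import numpy as np

N, Hbox, omega = 512, 20.0, 0.3
mu = np.arange(-N, N + 1)
f = 1.0 / (1.0 + np.exp(-3 * mu * Hbox / N))    # smooth test function

k = np.arange(2*N + 1)
branch = np.where(k <= N, k/(2*N + 1.0), k/(2*N + 1.0) - 1.0)
mult = np.exp(-2 * np.pi**2 * N**2 * omega / Hbox**2 * branch**2)
conv = np.fft.ifft(np.fft.fft(f) * mult).real    # (gamma_w * f)(h_mu)

z = np.linspace(-8, 8, 40001)                    # brute force at h = 0
gauss = np.exp(-z**2/(2*omega)) / np.sqrt(2*np.pi*omega)
direct = np.sum(gauss / (1 + np.exp(3*z))) * (z[1] - z[0])
print(conv[N], direct)                           # should agree closely
\end{verbatim}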
\subsubsection{Taking a large enough value of \texorpdfstring{$N$}{N}}
Note that in order for the Gaussian convolutions to be numerically well defined, we need the
spacing in the grid we take on $h$ to be much smaller than the standard deviation of the Gaussians, that is we need for any $(q(x), q(x) + \rd x \, \dot{q}(x))$:
\begin{align*}
\frac{H}{N} \ll \sqrt{\frac{\rd x \, \dot{q}(x)}{2 \chi}}.
\end{align*}
Note that, as shown in \cite{franz2017universality}, and as one can also verify from Fig.~\ref{fig:q_T0}, we have the following scaling
as $x \to \infty$: $q(x) \sim 1 - A / x^2$, with $A > 0$. Therefore $\dot{q}(x_\mathrm{max}) \sim 2 A / x_\mathrm{max}^3$.
Since we take $\rd x \sim x_\mathrm{max}/k$ in our numerical procedure, we have that in order for our procedure to be valid
we need
\begin{align*}
\frac{H}{N} \ll \sqrt{\frac{A}{k \chi x_\mathrm{max}^2}}.
\end{align*}
In practice, we find typically $\chi/A \sim 10^{-1}$, so that we will impose $N \gg N_0$, with
\begin{align*}
N_0 &\coloneqq \sqrt{k} \, H \, x_\mathrm{max}.
\end{align*}
In practice, we consider $N = c N_0$ (we often take $c = 30$) with a constant $c \gg 1$ in order to be well into the regime $N \gg N_0$, and still have a reasonable computational time.
\subsection{Bounds on the injectivity threshold}\label{subsec_app:numerics_frsb_threshold}
Let us detail the results of our numerical computation of $\alpha_\inj^\FRSB$, illustrated in Fig.~\ref{fig:alpha_inj_frsb_summary}.
For a given value of all parameters of the algorithm detailed in Section~\ref{subsec:zero_temp_algorithmic_frsb}, we ran
Brent's method to find the zero of $f^\star_\FRSB(\alpha) - 1$, with a tolerance of $10^{-4}$ on the value of $\alpha$.
For all values of $\alpha$, we iterated the FRSB equations until $\norm{q^{t+1} - q^{t}}_\infty \leq \epsilon = 10^{-5}$.
We ran this procedure for different values of $k \in \{30, 50, 100, 200\}$, $x_\mathrm{max} \in \{10,11,12,13,14,15\}$,
$H \in \{40,60\}$, and $c = 30$ (recall that $H$ and $c$ are defined in Section~\ref{subsec_app:convolutions}).
In Fig.~\ref{fig:alpha_inj_frsb_summary} the runs with different values of $H$ and $c$ are aggregated.
\subsection{Proof of Proposition~\ref{prop:injectivity_random_intersection}}\label{subsec_app:proof_random_intersection}
Note that if $m < n$, then
$\bx \in \bbR^n \mapsto \bW \bx \in \bbR^m$ is not injective,
$C_{m,n} = \bbR^m$, and eq.~\eqref{eq:pmn_random_intersection} holds trivially.
We thus assume $m \geq n$. Since $\bx \in \bbR^n \mapsto \bW \bx \in \bbR^m$ is then a.s.\ injective, $\bW \bbR^n$ is a (random) $n$-dimensional subspace of $\bbR^m$.
Moreover, by rotation invariance of the Gaussian distribution, it is uniformly sampled.
Thus it is enough to show that $p_{m,n} = \bbP[(\bW \bbR^{n}) \cap C_{m,n} = \{0\}]$.
In the end, it suffices to show the following lemma, elements of whose proof can be found in \cite{puthawala2022globally,paleka2021injectivity,clum2022topics}; we repeat it for completeness:
\begin{lemma}\label{lemma:technical_random_intersection}
\noindent
Almost surely under the law of $\bW$, the following two statements are equivalent:
\begin{itemize}
\item[$(i)$] $\varphi_\bW$ is injective.
\item[$(ii)$] $(\bW \bbR^{n}) \cap C_{m,n} = \{0\}$.
\end{itemize}
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lemma:technical_random_intersection} --]
In what follows, we assume that the following event holds:
\begin{align*}
E(\bW) \coloneqq \{\textrm{All choices of $n$ distinct rows of $\bW$ are linearly independent vectors in $\bbR^n$}\}.
\end{align*}
It is easy to see that $\bbP[E(\bW)] = 1$, since every set of $n$ independent standard Gaussian vectors in $\bbR^n$ is linearly independent almost surely.
\myskip
Let us show first that $(ii) \Rightarrow (i)$.
Recall $\mathrm{ReLU}(x) = \max(0,x)$.
Note that for any $a \leq b \in \bbR$, $\mathrm{ReLU}(a) = \mathrm{ReLU}(b)$
implies that $\mathrm{ReLU}$ is constant on $[a,b]$.
Assume that $\varphi_\bW(\bx) = \varphi_\bW(\by)$,
and consider $\bz = (\bx + \by) / 2$; then $\varphi_\bW(\bz) = \varphi_\bW(\bx)$ by the observation above.
Moreover, for all $\mu \in [m]$ such that $(\bW \bz)_\mu > 0$, then $(\bW \bx)_\mu = (\bW \by)_\mu = (\bW \bz)_\mu$.
On the other hand, if $(\bW \bz)_\mu = 0$, then necessarily $(\bW \bx)_\mu = (\bW \by)_\mu = 0$, since
$(\bW \bx)_\mu \leq 0 \Leftrightarrow (\bW \by)_\mu \leq 0$.
By $(ii)$, $\bW \bz$ has at least $n$ non-negative coordinates, so the argument above implies that there exist at least
$n$ values of $\mu \in [m]$ such that $(\bW \bx)_\mu = (\bW \by)_\mu$.
Since $E(\bW)$ stands, this shows that $\bx = \by$.
\myskip
Let us now show $(i) \Rightarrow (ii)$.
We divide $\bbR^n$ into equivalence classes defined by the relation
$\bx \sim \by \Leftrightarrow \forall \mu \in [m], (\bW \bx)_\mu > 0 \Leftrightarrow (\bW \by)_\mu > 0$.
These equivalence classes $\mcR_S$ are defined by a subset $S$ of $[m]$,
so that for all $\bx \in \mcR_S$, $(\bW \bx)_\mu > 0 \Leftrightarrow \mu \in S$.
Assume that there exists $\bx \in \bbR^n$ such that $\bW \bx \neq 0$ and $\bW \bx$ has strictly fewer than $n$ positive coordinates, i.e.\
$\bx \in \mcR_S$ with $|S| < n$. On $\mcR_S$, $\varphi_\bW$ is a linear transformation with $|S| < n$ linearly independent rows $\{\bW_\mu\}_{\mu \in S}$,
so its image has dimension smaller than $n$.
Lemma~\ref{lemma:technical_equivalence_classes} below, which implies that $\mcR_S$ has full dimension $n$, then shows that $\varphi_\bW$ is not injective on $\mcR_S$. Having proved the contrapositive, we infer $(i) \Rightarrow (ii)$.
\end{proof}
\begin{lemma}\label{lemma:technical_equivalence_classes}
\noindent
The following statement is true almost surely:
for all $S \subseteq [m]$, either $\mcR_S = \{0\}$, or there exists $\bx \in \mcR_S$ and $\varepsilon > 0$ such that
$B_2(\bx, \varepsilon) \subseteq \mcR_S$.
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lemma:technical_equivalence_classes} --]
By a union bound over all $S \subseteq [m]$, it suffices to show this statement a.s.\ for any fixed $S \subseteq [m]$.
Let us assume that $\mcR_S \neq \{0\}$. The following statement implies the conclusion of Lemma~\ref{lemma:technical_equivalence_classes}:
\begin{align}\label{eq:to_show_equivalence_class}
\bbP \Big\{\forall \bx \in \mcR_S, \, \exists \mu \notin S \textrm{ s.t. } \bW_\mu \cdot \bx = 0\Big\} = 0.
\end{align}
Indeed, one can then a.s.\ find an element $\bx \in \mcR_S$ such that $(\bW \bx)_\mu > 0$ for all $\mu\in S$ and $(\bW \bx)_\mu < 0$ for all $\mu\notin S$, therefore $B_2(\bx, \varepsilon) \subseteq \mcR_S$ for sufficiently small $\varepsilon$.
We now show eq.~\eqref{eq:to_show_equivalence_class}.
\myskip
Assume that there exists $\bx \in \mcR_S \backslash\{0\}$ with $k_\bx \geq 1$, where $\nu_1, \cdots, \nu_{k_\bx} \in [m]$ denote all the indices such that
$\bW_{\nu_i} \cdot \bx = 0$. Note that by $E(\bW)$ (which holds a.s.) and since $\bx \neq 0$ we must have $k_\bx < n$. Thus, since $\{\bW_{\nu_i}\}_{i=1}^{k_\bx}$ are linearly independent on $E(\bW)$,
we can then fix $\by \in \big(\{\bW_{\nu_i}\}_{i=1}^{k_\bx-1} \big)^\perp$ such that $\bW_{\nu_{k_\bx}} \cdot \by < 0$.
Consider $\bx' = \bx + \delta \by$ with arbitrary $\delta > 0$.
By hypothesis, $\bW_{\nu_i} \cdot \bx' = 0$ for all $i \in [k_\bx-1]$. Moreover, for $\delta$ small enough,
$\bW_\mu \cdot \bx'$ has the same sign as $\bW_\mu \cdot \bx$ if $\mu \notin \{\nu_1, \cdots, \nu_{k_\bx}\}$.
Finally, $\bW_{\nu_{k_\bx}} \cdot \bx' = \delta \bW_{\nu_{k_\bx}} \cdot \by < 0$. In the end, taking $\delta$ small enough, we have found
$\bx' \in \mcR_S$ with $k_{\bx'} = k_{\bx} - 1$. Iterating this procedure, we have shown that a.s.\ there exists a point $\bx \in \mcR_S$ such that $k_\bx = 0$, which implies eq.~\eqref{eq:to_show_equivalence_class}.
\end{proof}
\subsection{Proof of Lemma~\ref{lemma:cover}}\label{subsec_app:proof_cover}
Let us first recall Cover's theorem \cite{cover1965geometrical}.
We use the $\mathrm{sign}(x)$ function, with the convention $\sign(0) = 0$.
We say that a set of vectors $\{\bW_1,\cdots,\bW_m\}$ in $\bbR^n$ is in \emph{general position} if every subset of at most $n$ of these vectors is linearly independent.
Cover's theorem is an exact formula for the number of dichotomies\footnote{A dichotomy is a binary labeling of the vectors.} of this set that are realizable by a linear separation:
\begin{theorem}[Cover \cite{cover1965geometrical}]\label{thm:cover}
\noindent
Let $\bW_1,\cdots,\bW_m \in \bbR^n$ be in general position. Then
\begin{align*}
\sum_{\bepsilon \in \{\pm 1\}^m} \indi \Big\{ \exists \bx \in \mcS^{n-1} \textrm{ realizing } \forall \mu \in [m] \, : \, \mathrm{sign}(\bW_\mu \cdot \bx) = \varepsilon_\mu \Big\} = 2 \sum_{k=0}^{n-1} \binom{m-1}{k}.
\end{align*}
\end{theorem}
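Before turning to the proof, note that Theorem~\ref{thm:cover} is easy to check empirically for small $m,n$: sampling many points on the sphere almost surely hits every realizable dichotomy as the number of samples grows. A minimal sketch:
\begin{verbatim}
# Empirical count of realizable dichotomies vs. Cover's formula.
import numpy as np
from math import comb

rng = np.random.default_rng(0)
m, n = 6, 3
W = rng.standard_normal((m, n))      # rows a.s. in general position
patterns = {tuple(np.sign(W @ rng.standard_normal(n)).astype(int))
            for _ in range(200000)}
print(len(patterns), 2 * sum(comb(m - 1, k) for k in range(n)))
\end{verbatim}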
Let us now show that Theorem~\ref{thm:cover} implies Lemma~\ref{lemma:cover}.
We assume $\alpha < 3$, so in particular we can fix $\delta > 0$ such that $m \leq (3-\delta) (n-1)$ for $n$ large enough.
We denote $\tilde{m} = m - (n-1) \leq (2-\delta) n$.
Since $\bW$ is a Gaussian matrix, the set $\{\bW_\mu\}_{\mu \in [\tilde{m}]}$ is a.s.\ in general position.
Moreover, by sign invariance,
for any $\bepsilon \in \{\pm 1\}^m$ we have:
\begin{align*}
\bbP_\bW \Big\{\exists \bx \in \mcS^{n-1} \, : \, \forall \mu \in [\tilde{m}] , \, \mathrm{sign}(\bW_\mu \cdot \bx) = \varepsilon_\mu \Big\} &=
\bbP_\bW \Big\{\exists \bx \in \mcS^{n-1} \, : \, \forall \mu \in [\tilde{m}] , \, \mathrm{sign}(\bW_\mu \cdot \bx) = -1 \Big\}.
\end{align*}
For $\bepsilon$ uniformly sampled in $\{\pm 1\}^m$ (independently of $\bW$),
we denote $\bbP_{\bW,\bepsilon}$ the joint probability law of $(\bW, \bepsilon)$, and $\bbP_\bepsilon$ the law of $\bepsilon$.
The previous remark on sign invariance allows us to deduce:
\begin{align*}
&\bbP_\bW \Big\{\exists \bx \in \mcS^{n-1} \, : \, \forall \mu \in [\tilde{m}] , \, \mathrm{sign}(\bW_\mu \cdot \bx) = -1 \Big\} \\
&=
\bbP_{\bW,\bepsilon} \Big\{\exists \bx \in \mcS^{n-1} \, : \, \forall \mu \in [\tilde{m}] , \, \mathrm{sign}(\bW_\mu \cdot \bx) = \varepsilon_\mu \Big\}, \\
&=
\EE_{\bW} \Big[\bbP_\bepsilon\Big(\exists \bx \in \mcS^{n-1} \, : \, \forall \mu \in [\tilde{m}] , \, \mathrm{sign}(\bW_\mu \cdot \bx) = \varepsilon_\mu \Big | \bW\Big)\Big], \\
&=
\frac{1}{2^{\tilde{m}}} \times 2 \sum_{k=0}^{n-1} \binom{\tilde{m}-1}{k},
\end{align*}
by Theorem~\ref{thm:cover}. Since $\tilde{m} \leq (2-\delta) n$, it is then elementary to check that this implies
\begin{align*}
\lim_{n \to \infty} \bbP_\bW \Big\{\exists \bx \in \mcS^{n-1} \, : \, \forall \mu \in [\tilde{m}] , \, \mathrm{sign}(\bW_\mu \cdot \bx) = -1 \Big\} &= 1.
\end{align*}
The proof is then finished by noticing that if $\bx$ satisfies $\mathrm{sign}(\bW_\mu \cdot \bx) = -1$ for all $\mu \in [\tilde{m}]$,
it must satisfy $E_\bW(\bx) \leq m - \tilde{m} < n$, and using eq.~\eqref{eq:pmn_minimum}.
\subsection{Proof of Theorem~\ref{thm:free_entropy_concentration}}\label{subsec_app:proof_thm_fe_concentration}
It is easy to see that if $\bW, \bW'$ are two matrices for which $\bW'_{\nu} = \bW_{\nu}$ for all $\nu \in [m] \backslash \{\mu\}$, then
\begin{align*}
|\Phi_n(\bW, \beta) - \Phi_n(\bW',\beta)| &\leq \frac{2 \beta}{n}.
\end{align*}
The theorem is then a simple consequence of McDiarmid's inequality (see e.g.\ Theorem~6.2 of \cite{boucheron2013concentration}).
\subsection{Proof of Corollary~\ref{cor:sufficient_non_injectivity}}\label{subsec_app:proof_cor_sufficient_non_inj}
By a dominated convergence argument, we have:
\begin{align*}
\partial_\beta \{\EE_\bW \Phi_n(\bW, \beta)\} &= - \frac{1}{n} \EE_\bW \Big\{\frac{\int_{\mcS^{n-1}} \mu_n(\rd \bx) \, E_\bW(\bx) \, e^{-\beta E_\bW(\bx)}}{\int_{\mcS^{n-1}} \mu_n(\rd \bx) \, e^{-\beta E_\bW(\bx)}}\Big\}.
\end{align*}
Therefore
\begin{align*}
-\beta^2 \partial_\beta \{\EE_\bW \Phi_n(\bW, \beta)/\beta\} &= \frac{1}{n}\EE_\bW \Big\{\frac{\int_{\mcS^{n-1}} \mu_n(\rd \bx) \, \beta E_\bW(\bx) \, e^{-\beta E_\bW(\bx)}}{\int_{\mcS^{n-1}} \mu_n(\rd \bx) \, e^{-\beta E_\bW(\bx)}}\Big\} + \EE_\bW \Phi_n(\bW, \beta).
\end{align*}
Recall the definition of the Gibbs measure $\bbP_{\beta,\bW}$ in eq.~\eqref{eq:def_Gibbs}. It is easy to see that
the previous equation relates directly to the entropy of $\bbP_{\beta,\bW}$, i.e.\
\begin{align*}
\beta^2 \partial_\beta \{\EE_\bW \Phi_n(\bW, \beta)/\beta\} &= \frac{1}{n} \EE_\bW \int \rd \bbP_{\beta,\bW}(\bx) \, \log \frac{\rd \bbP_{\beta,\bW}}{\rd \mu_n}(\bx) = \frac{1}{n} \EE_\bW D_{\rm KL}(\bbP_{\beta,\bW} \| \mu_n) \geq 0.
\end{align*}
In the language of statistical physics, this is a rewriting of the fact that the temperature derivative of the free energy is given by (minus) the entropy.
In particular, for any $n$, $\beta \mapsto -\EE_\bW \Phi_n(\bW, \beta)/\beta$ is non-increasing, and
in the limit this shows that $\beta \mapsto -\Phi(\alpha,\beta)/\beta$ is non-increasing.
The positivity of this function follows from $\Phi_n(\bW, \beta) \leq 0$, since $E_\bW(\bx) \geq 0$.
\myskip
Let us now assume that there exists some $\beta < \infty$ such that $- \Phi(\alpha,\beta) < \beta$.
In particular, fixing $\delta > 0$, for $n$ large enough we have $\EE_{\bW} \Phi_n(\bW,\beta) \geq - \beta + \delta$.
Using Theorem~\ref{thm:free_entropy_concentration}, for large enough $n$, this implies
\begin{align*}
\bbP_\bW[\Phi_n(\bW, \beta) \leq - \beta] &\leq 2 \exp\Big\{- \frac{n [\beta + \EE_\bW \Phi_n(\bW,\beta)]^2}{2 \alpha_n \beta^2}\Big\}, \\
&\leq 2 \exp\Big\{- C(\alpha,\beta) n\Big\},
\end{align*}
for some $C(\alpha, \beta) > 0$.
In particular, using eq.~\eqref{eq:bound_Phi_energy} and Proposition~\ref{prop:injectivity_random_intersection}:
\begin{align}\label{eq:bound_pmn}
p_{m,n} &= \bbP_\bW \Big[\min_{\bx \in \mcS^{n-1}} E_{\bW}(\bx) \geq n\Big]
\leq \bbP_\bW \Big[\Phi_n(\bW, \beta) \leq - \beta\Big]\leq 2 \exp\Big\{-C(\alpha,\beta) n \Big\}.
\end{align}
The claim follows.
\subsection{Proof of Theorem~\ref{thm:bound_Gordon}}\label{subsec_app:bound_Gordon}
\textbf{Remark --} In what follows we consider $m = \alpha n$ with $\alpha > 0$;
the proof straightforwardly generalizes to the original assumption $m/n \to \alpha > 0$.
For lightness of presentation, we work in this simplified setting.
\myskip
First, note that given Lemma~\ref{lemma:cover}, we can assume $\alpha \geq 3$ in what follows.
Using Proposition~\ref{prop:injectivity_random_intersection}, we want to characterize
\begin{align*}
G(\bW) &\coloneqq \min_{\bx \in \mcS^{n-1}} e_\bW(\bx) = \min_{\bx \in \mcS^{n-1}} \frac{1}{n} \sum_{\mu=1}^m \indi\{\bW_\mu \cdot \bx > 0\},
\end{align*}
in which $\bW = \{\bW_\mu\}_{\mu=1}^m \iid \mcN(0,\Id_n)$.
The minimum of this function is attained since it
takes finitely many values.
Introducing an auxiliary variable $z_\mu \coloneqq \bW_\mu \cdot \bx$, and a Lagrange multiplier $\blambda \in \bbR^m$ to fix this relation,
the problem is equivalent by strong duality to
\begin{align}\label{eq:ground_state_duality}
\nonumber
G(\bW) &= \min_{\bx \in \mcS^{n-1}} \inf_{\bz \in \bbR^m} \sup_{\blambda \in \bbR^m} \Big\{\blambda^\intercal \bW \bx - \blambda^\intercal \bz + \frac{1}{n} \sum_{\mu=1}^m \indi\{z_\mu > 0\}\Big\}, \\
&= \inf_{\bz \in \bbR^m} \min_{\bx \in \mcS^{n-1}} \sup_{\blambda \in \bbR^m} \Big\{\blambda^\intercal \bW \bx - \blambda^\intercal \bz + \frac{1}{n} \sum_{\mu=1}^m \indi\{z_\mu > 0\}\Big\}.
\end{align}
Note that the infimum over $\bz$ in eq.~\eqref{eq:ground_state_duality} is actually done over $\bz \in \bW \mcS^{n-1}$, since the supremum over $\blambda$ becomes $+\infty$ for $\bz \neq \bW \bx$.
Letting $\|\bW\|_{\rm op} \coloneqq \max_{\bx \in \mcS^{n-1}} \|\bW \bx\|_2$,
we know by classical concentration inequalities (see e.g.\ \cite{vershynin2018high} - Theorem 4.4.5) and since $m = \alpha n$,
that $\bbP[\|\bW\|_{\mathrm{op}} \geq K \sqrt{n}] \leq e^{-n}$, for some constant $K > 1$ (that might depend on $\alpha$).
Let us denote $B(K) \coloneqq \{\bz \in \bbR^m \, : \|\bz \|_2 \leq K \sqrt{n}\}$.
By the argument above and the law of total probability, for all $t > 0$,
\begin{align}\label{eq:ub_G_GK}
\bbP[G(\bW) \leq t] \leq \bbP[G_K(\bW) \leq t] + e^{-n},
\end{align}
with
\begin{align}\label{eq:ground_state_duality_K}
G_{K}(\bW) \coloneqq \inf_{\substack{\bz \in B(K)}} \min_{\bx \in \mcS^{n-1}} \sup_{\blambda \in \bbR^m} \Big\{\blambda^\intercal \bW \bx - \blambda^\intercal \bz + \frac{1}{n} \sum_{\mu=1}^m \indi\{z_\mu > 0\}\Big\}.
\end{align}
Moreover, we will approximate $\indi\{z > 0\}$ by continuous functions; we let, for any $\delta \geq 0$:
\begin{align}\label{eq:def_gdelta}
\ell_\delta(x) \coloneqq
\begin{cases}
0 &\textrm { if } x \leq 0, \\
1 &\textrm { if } x > \delta, \\
x/\delta &\textrm { if } x \in (0, \delta].
\end{cases}
\end{align}
Since $\ell_\delta(x) \leq \indi\{x > 0\}$, it is clear that:
\begin{align}\label{eq:ground_state_duality_smoothed}
G_K(\bW) &\geq G_{\delta,K}(\bW) \coloneqq \inf_{\substack{\bz \in B(K)}} \min_{\bx \in \mcS^{n-1}} \sup_{\blambda \in \bbR^m} \Big\{\blambda^\intercal \bW \bx - \blambda^\intercal \bz + \frac{1}{n} \sum_{\mu=1}^m \ell_\delta(z_\mu)\Big\}.
\end{align}
We now make use of the Gaussian min-max theorem \cite{gordon1988milman,thrampoulidis2015regularized}:
\begin{proposition}[Gaussian min-max theorem]\label{prop:gaussian_minmax}
\noindent
Let $\bW \in \bbR^{m \times n}$ be a matrix with i.i.d.\ standard normal entries, and $\bg \in \bbR^m,\bh \in \bbR^n$ two independent vectors
with i.i.d.\ $\mcN(0,1)$ coordinates.
Let $\mcS_\bv, \mcS_\bu$ be two compact subsets respectively of $\bbR^n$ and $\bbR^m$, and let
$\psi : \mcS_\bv \times \mcS_\bu \to \bbR$ be a continuous function.
We define the two optimization problems:
\begin{align*}
\begin{dcases}
C(\bW) &\coloneqq \min_{\bv \in \mcS_\bv} \max_{\bu \in \mcS_{\bu}} \Big\{\bu^\intercal \bW \bv+ \psi(\bv, \bu)\Big\}, \\
\mcC(\bg, \bh) &\coloneqq \min_{\bv \in \mcS_\bv} \max_{\bu \in \mcS_{\bu}} \Big\{\norm{\bu} \bh^\intercal \bv + \norm{\bv} \bg^\intercal \bu+ \psi(\bv, \bu)\Big\}.
\end{dcases}
\end{align*}
Then, for all $t \in \bbR$, one has
\begin{align}\label{eq:gordon_bound}
\bbP_\bW[C(\bW) \leq t] &\leq 2 \, \bbP_{\bg,\bh}[\mcC(\bg,\bh) \leq t].
\end{align}
\end{proposition}
\textbf{Remark I --}
It is easy to see from the proof of \cite{gordon1988milman,thrampoulidis2015regularized} that
the statement of the theorem also holds if $\bW$ is a block matrix of the form
\begin{align*}
\bW = \begin{pmatrix}
\bW_1 & 0 \\
0 & 0
\end{pmatrix},
\end{align*}
with $\bW_1 \in \bbR^{m_1 \times n_1}$ having i.i.d.\ $\mcN(0,1)$ elements.
Noting that $\bu^\intercal \bW \bv = \bu_1^\intercal \bW_1 \bv_1$, where $\bu_1 \in \bbR^{m_1}$ and $\bv_1 \in \bbR^{n_1}$ denote the corresponding blocks of $\bu$ and $\bv$,
the definition of
the auxiliary problem that appears in the theorem is then modified as:
\begin{align}\label{eq:auxiliary_pb_gordon_block}
\mcC(\bg, \bh) \coloneqq \min_{\bv \in \mcS_\bv} \max_{\bu \in \mcS_{\bu}} \Big\{\norm{\bu_1} \bh^\intercal \bv_1 + \norm{\bv_1} \bg^\intercal \bu_1+ \psi(\bv, \bu)\Big\},
\end{align}
for $\bg \sim \mcN(0, \Id_{m_1})$, $\bh \sim \mcN(0, \Id_{n_1})$.
\myskip
\textbf{Remark II --}
The full result of \cite{thrampoulidis2015regularized} actually includes the proof of a converse bound to eq.~\eqref{eq:gordon_bound} when the function $\psi$ is convex-concave, and the sets $\mcS_\bv, \mcS_\bu$ are convex.
Here, we do not expect such a converse bound to be true, since the solution is conjecturally described by the full-RSB equations, and we will see that
the upper bound of eq.~\eqref{eq:gordon_bound} corresponds to the replica-symmetric (RS) solution.
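\myskip
\textbf{Remark III (numerical check) --}
Although not used anywhere in our arguments, the inequality of eq.~\eqref{eq:gordon_bound} is easy to probe numerically.
In the simplest instance $\psi \equiv 0$, $\mcS_\bv = \mcS^{n-1}$, $\mcS_\bu = \mcS^{m-1}$, one has $C(\bW) = \sigma_{\min}(\bW)$ (the smallest singular value of $\bW$) and $\mcC(\bg,\bh) = \norm{\bg} - \norm{\bh}$.
The following Python snippet (the choices of $m$, $n$, sample sizes and of the grid of values of $t$ are arbitrary) compares the two sides of eq.~\eqref{eq:gordon_bound} by Monte Carlo:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
m, n, trials = 60, 20, 2000

# Primary problem with psi = 0: C(W) = min_{|v|=1} max_{|u|=1} u.Wv = sigma_min(W)
sig = np.array([np.linalg.svd(rng.standard_normal((m, n)), compute_uv=False)[-1]
                for _ in range(trials)])
# Auxiliary problem: min_v max_u {|u| h.v + |v| g.u} = |g| - |h|
aux = np.array([np.linalg.norm(rng.standard_normal(m))
                - np.linalg.norm(rng.standard_normal(n)) for _ in range(trials)])

for t in np.linspace(2.0, 4.5, 6):
    print(f"t={t:.2f}  P[C(W)<=t]={np.mean(sig <= t):.3f}"
          f"  2*P[C_aux<=t]={2 * np.mean(aux <= t):.3f}")
\end{verbatim}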
\myskip
Let us first state a lemma that simplifies the auxiliary problem:
\begin{lemma}[Auxiliary problem simplification]\label{lemma:ao_simplification}
\noindent
For any $\delta> 0$, and any $A \in (0,\infty]$, we define the auxiliary optimization problem, for $\bg \in \bbR^m,\bh \in \bbR^n$:
\begin{align}\label{eq:def_CAd_gh}
\mcC_{A,\delta}(\bg, \bh) &\coloneqq \inf_{\bz \in \bbR^m} \min_{\bx \in \mcS^{n-1}} \max_{\norm{\blambda} \leq A} \Big\{\norm{\blambda} \bh^\intercal \bx + \bg^\intercal \blambda - \blambda^\intercal \bz + \frac{1}{n} \sum_{\mu=1}^m \ell_\delta(z_\mu)\Big\}.
\end{align}
Then $A \mapsto \mcC_{A, \delta}(\bg, \bh)$ is non-decreasing and one has:
\begin{align}\label{eq:def_Cd_gh}
\lim_{A \to \infty} \mcC_{A, \delta}(\bg, \bh) &= \min_{\substack{\bz \in \bbR^m \\ \norm{\bz} \leq \norm{\bh}}} \Big\{\frac{1}{n} \sum_{\mu=1}^m \ell_\delta(g_\mu - z_\mu)\Big\}.
\end{align}
\end{lemma}
Note that we added a constraint on $\norm{\blambda}$ in the auxiliary problem, so that the set of $\blambda$ considered is compact. This allows us to deduce, using Proposition~\ref{prop:gaussian_minmax} (and Remark~I below it) in eqs.~\eqref{eq:ground_state_duality_smoothed} and \eqref{eq:ub_G_GK}:
\begin{lemma}\label{lemma:application_minmax}
\noindent
For all $\delta > 0$, and all $t \in \bbR$, one has
\begin{align*}
\bbP_\bW[G(\bW) \leq t] &\leq 2 \bbP_{\bg,\bh}[\mcC_{\delta}(\bg,\bh) \leq t] + e^{-n},
\end{align*}
with $\mcC_{\delta}(\bg, \bh)$ the RHS of eq.~\eqref{eq:def_Cd_gh}, and $\bg, \bh$ vectors with i.i.d.\ $\mcN(0,1)$ coordinates.
\end{lemma}
Lemmas~\ref{lemma:ao_simplification} and \ref{lemma:application_minmax} are proven
in Section~\ref{subsubsec:proof_lemma_application_minimax}.
We are now ready to prove Theorem~\ref{thm:bound_Gordon}.
Note that by weak duality:
\begin{align*}
\mcC_{\delta}(\bg, \bh) &= \inf_{\bz \in \bbR^m} \sup_{\kappa \geq 0} \Big\{\frac{\kappa}{n} (\norm{\bz}^2 - \norm{\bh}^2) + \frac{1}{n} \sum_{\mu=1}^m \ell_\delta(g_\mu - z_\mu)\Big\}, \\
& \geq \sup_{\kappa \geq 0} \inf_{\bz \in \bbR^m} \Big\{\frac{\kappa}{n} (\norm{\bz}^2 - \norm{\bh}^2) + \frac{1}{n} \sum_{\mu=1}^m \ell_\delta(g_\mu - z_\mu)\Big\} \coloneqq \mathcal{M}_{\delta}(\bg,\bh).
\end{align*}
Therefore $\bbP[\mcC_{\delta}(\bg,\bh) \leq t] \leq \bbP[\mcM_{\delta}(\bg,\bh) \leq t]$.
Moreover, since $\norm{\bz}^2 = \sum_\mu z_\mu^2$, the infimum over $\bz$ decouples over the coordinates, and one has:
\begin{align}\label{eq:M_gh}
\mcM_{\delta}(\bg,\bh) &= \sup_{\kappa \geq 0} \Big\{- \frac{\kappa}{n} \norm{\bh}^2 + \frac{1}{n} \sum_{\mu=1}^m \inf_{z \in \bbR} \{\kappa z^2 + \ell_\delta(g_\mu - z) \}\Big\}.
\end{align}
Let us show
\begin{align}\label{eq:to_show_Mgh}
\mcM_{\delta}(\bg, \bh) \pto \mcM_{\delta} \coloneqq \sup_{\kappa \geq 0} \Big\{- \kappa + \alpha \int_{\bbR} \mcD x \, \Big[\inf_{z \in \bbR} \{\kappa z^2 + \ell_\delta(x - z) \}\Big]\Big\}.
\end{align}
We can assume $\norm{\bh}^2/n \geq 1/2$, an event that has probability $1-\smallO_n(1)$.
Denoting $f(\kappa,\bg,\bh)$ the maximized function in eq.~\eqref{eq:M_gh}, we then have\footnote{Indeed, $\inf_{z \in \bbR} \{\kappa z^2 + \ell_\delta(x - z) \} \leq \ell_\delta(x) \leq 1$.}
$f(\kappa,\bg,\bh) \leq 2 \alpha - \kappa/2$ and $f(0, \bg, \bh) \geq 0$, so that we can write $\mcM_{\delta}(\bg,\bh) = \max_{0\leq\kappa\leq 4\alpha} f(\kappa,\bg,\bh)$.
Letting
\begin{align*}
f_\infty(\kappa) \coloneqq - \kappa + \alpha \int_{\bbR} \mcD x \, \Big[\inf_{z \in \bbR} \{\kappa z^2 + \ell_\delta(x - z) \}\Big],
\end{align*}
we have then for all $\kappa \in [0,4\alpha]$:
\begin{align}\label{eq:f_finfty_kappa}
&|f(\kappa,\bg,\bh) - f_\infty(\kappa)| \leq 4 \alpha \Big|\frac{\norm{\bh}^2}{n} - 1\Big| + \frac{\alpha}{m} \Bigg|\sum_{\mu=1}^m (X_\mu - \EE[X_\mu])\Bigg|, \\
\nonumber
&X_\mu \coloneqq \inf_{z \in \bbR} \{\kappa z^2 + \ell_\delta(g_\mu - z) \}.
\end{align}
Note that $\{X_\mu\}$ are i.i.d.\ random variables, and one shows easily that $X_{\mu} \in [0,1]$, so by Hoeffding's inequality, for all $t > 0$:
\begin{align*}
\bbP\Bigg[\frac{1}{m} \Bigg|\sum_{\mu=1}^m (X_\mu - \EE[X_\mu])\Bigg| \geq t\Bigg] &\leq 2 e^{- 2 m t^2}.
\end{align*}
Plugging it in eq.~\eqref{eq:f_finfty_kappa} and using the concentration of $\|\bh\|^2/n$, we reach that for all $t > 0$:
\begin{align*}
\lim_{n\to \infty}\sup_{\kappa \in [0, 4\alpha]}\bbP[|f(\kappa,\bg,\bh) - f_\infty(\kappa)| \geq t] &= 0.
\end{align*}
Since $\kappa \mapsto f(\kappa,\bg,\bh)$ and $\kappa \mapsto f_\infty(\kappa)$ are concave (as sums of a linear function and of infima of affine functions of $\kappa$), this pointwise convergence upgrades to uniform convergence in probability on the compact set $[0,4\alpha]$, so that
$\max_{0\leq\kappa\leq 4\alpha} f(\kappa,\bg,\bh) \pto \max_{0\leq\kappa\leq 4\alpha} f_\infty(\kappa)$, and therefore
eq.~\eqref{eq:to_show_Mgh} follows.
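Though not needed for the proof, eq.~\eqref{eq:to_show_Mgh} is easy to illustrate numerically.
In the Python sketch below (all parameter choices are arbitrary), the inner infimum over $z$ is approximated by a grid search, and the Gaussian integral in $f_\infty$ by Gauss--Hermite quadrature; a few independent draws of $\mcM_\delta(\bg,\bh)$ cluster around the predicted limit:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
alpha, delta, n = 3.0, 0.1, 500
m = int(alpha * n)
kappas = np.linspace(0.0, 4 * alpha, 61)
zgrid = np.linspace(-10.0, 10.0, 1001)

def ell(x):
    # the piecewise-linear surrogate ell_delta of eq. (def_gdelta)
    return np.clip(x / delta, 0.0, 1.0)

def inner_inf(g, kappa):
    # X_mu = inf_z {kappa z^2 + ell_delta(g_mu - z)}, by grid search over z
    return (kappa * zgrid[None, :]**2
            + ell(g[:, None] - zgrid[None, :])).min(axis=1)

# deterministic limit: the Gaussian expectation by Gauss-Hermite quadrature
nodes, weights = np.polynomial.hermite_e.hermegauss(151)
weights = weights / np.sqrt(2.0 * np.pi)
print("limit:", max(-k + alpha * weights @ inner_inf(nodes, k) for k in kappas))

# a few independent draws of the finite-size objective M_delta(g, h)
for _ in range(4):
    g, h = rng.standard_normal(m), rng.standard_normal(n)
    print("draw:", max(-k * (h @ h) / n + inner_inf(g, k).sum() / n
                       for k in kappas))
\end{verbatim}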
By Lemma~\ref{lemma:application_minmax} we have then shown that for any $t, \delta > 0$,
\begin{align}\label{eq:implication_Md_G}
\mcM_{\delta} > t \Rightarrow \lim_{n \to \infty} \bbP[G(\bW) > t] = 1.
\end{align}
We will then conclude by considering the limit $\delta \to 0$:
\begin{lemma}\label{lemma:Md_limit}
\noindent
We have $\lim_{\delta \to 0} \mcM_{\delta} = \mcM$,
with
\begin{align}\label{eq:def_M}
\nonumber
\mcM &\coloneqq \sup_{\kappa \geq 0} \Big\{- \kappa + \alpha \int_{\bbR} \mcD x \, \Big[\inf_{z \in \bbR} \{\kappa z^2 + \indi\{x > z\} \}\Big]\Big\}, \\
&=\sup_{\kappa \geq 0} \Big\{- \kappa + \alpha \int_{1/\sqrt{\kappa}}^\infty \mcD x + \alpha \kappa \int_0^{1/\sqrt{\kappa}} \mcD x \, x^2 \Big\}.
\end{align}
\end{lemma}
Moreover, the maximum in eq.~\eqref{eq:def_M} is attained at $\kappa^\star$ such that:
\begin{align*}
1 &= \alpha \int_0^{1/\sqrt{\kappa^\star}} \mcD x \, x^2.
\end{align*}
The value of the maximum is then given by:
\begin{align*}
\mcM &= \alpha \int_{1/\sqrt{\kappa^\star}}^\infty \mcD x.
\end{align*}
We recognize the replica-symmetric prediction of eq.~\eqref{eq:fstar_RS}, with $\kappa = (2 \chi_\RS)^{-1}$!
By Lemma~\ref{lemma:Md_limit} and eq.~\eqref{eq:implication_Md_G},
we showed that $\mcM > t$ implies that $\bbP[G(\bW) > t] \to 1$ as $n \to \infty$.
Applying it for $t = 1$ ends the proof of Theorem~\ref{thm:bound_Gordon}. \qed
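For concreteness, $\mcM$ is easily evaluated numerically.
Using the primitive $\int_0^s \mcD x \, x^2 = \Phi(s) - \tfrac{1}{2} - s \phi(s)$ (with $\Phi, \phi$ the standard Gaussian CDF and PDF, so that $H(s) = 1 - \Phi(s)$), the following Python sketch computes $\kappa^\star(\alpha)$ and $\mcM(\alpha)$, and locates the value of $\alpha$ at which $\mcM$ crosses $1$, i.e.\ the smallest $\alpha$ for which the argument above yields a nontrivial conclusion (the brackets passed to the root finder are illustrative assumptions):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def kappa_star(alpha):
    # solve alpha * int_0^{1/sqrt(kappa)} x^2 Dx = 1 for s = 1/sqrt(kappa),
    # using int_0^s x^2 Dx = Phi(s) - 1/2 - s*phi(s)
    I = lambda s: norm.cdf(s) - 0.5 - s * norm.pdf(s)
    s = brentq(lambda s: alpha * I(s) - 1.0, 1e-6, 50.0)
    return 1.0 / s**2

def M(alpha):
    # M(alpha) = alpha * H(1/sqrt(kappa_star)), cf. eq. (def_M)
    return alpha * norm.sf(1.0 / np.sqrt(kappa_star(alpha)))

print(M(5.0), M(10.0))
print("M crosses 1 at alpha =", brentq(lambda a: M(a) - 1.0, 2.1, 30.0))
\end{verbatim}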
\subsection{Proof of Lemmas~\ref{lemma:ao_simplification}, \ref{lemma:application_minmax} and \ref{lemma:Md_limit}}\label{subsubsec:proof_lemma_application_minimax}
\begin{proof}[Proof of Lemma~\ref{lemma:ao_simplification} --]
First note that in eq.~\eqref{eq:def_CAd_gh}, writing $\blambda = \tau \bbe$ with $\norm{\bbe} = 1$, one can perform the supremum over $\bbe$:
\begin{align*}
\mcC_{A, \delta}(\bg, \bh) &= \inf_{\bz \in \bbR^m} \min_{\bx \in \mcS^{n-1}} \max_{\tau \in [0,A]} \Big\{ \tau \bh^\intercal \bx + \tau \norm{\bg - \bz} + \frac{1}{n} \sum_{\mu=1}^m \ell_\delta(z_\mu)\Big\}.
\end{align*}
The maximum over $\tau \in [0,A]$ and the minimum over $\bx$ can be carried out explicitly:
\begin{align*}
\mcC_{A, \delta}(\bg, \bh) &=
\inf_{\bz \in \bbR^m} \Big\{\frac{1}{n} \sum_{\mu=1}^m \ell_\delta(z_\mu) + \min_{\bx \in \mcS^{n-1}} [A(\bh^\intercal \bx + \| \bg - \bz\|)\indi\{\bh^\intercal \bx+\| \bg - \bz\| > 0 \}] \Big\}, \\
&=
\inf_{\bz \in \bbR^m} \Big\{\frac{1}{n} \sum_{\mu=1}^m \ell_\delta(z_\mu) + A(-\|\bh\| + \| \bg - \bz\|)\indi\{-\|\bh \|+\| \bg - \bz\| > 0 \} \Big\}.
\end{align*}
Letting $\bz' = \bg - \bz$ (and renaming $\bz'$ into $\bz$), this yields:
\begin{align*}
\mcC_{A, \delta}(\bg, \bh) &= \min\Bigg\{\min_{\substack{\bz \in \bbR^m \\ \norm{\bz} \leq \norm{\bh}}} \Big[\frac{1}{n} \sum_{\mu=1}^m \ell_\delta(g_\mu - z_\mu)\Big] \, , \, \inf_{\substack{\bz \in \bbR^m \\ \norm{\bz} > \norm{\bh}}} \Big[\frac{1}{n} \sum_{\mu=1}^m \ell_\delta(g_\mu - z_\mu) + A (\norm{\bz} - \norm{\bh}) \Big]\Bigg\}.
\end{align*}
We now show that:
\begin{align}\label{eq:to_show_A_infty}
\lim_{A \to \infty} \inf_{\substack{\bz \in \bbR^m \\ \norm{\bz} > \norm{\bh}}} \Big[\frac{1}{n} \sum_{\mu=1}^m \ell_\delta(g_\mu - z_\mu) + A (\norm{\bz} - \norm{\bh}) \Big]
&\geq \min_{\substack{\bz \in \bbR^m \\ \norm{\bz} \leq \norm{\bh}}} \Big[\frac{1}{n} \sum_{\mu=1}^m \ell_\delta(g_\mu - z_\mu)\Big],
\end{align}
which ends the proof.
Notice that the infimum in the left-hand side of eq.~\eqref{eq:to_show_A_infty} is obviously a non-decreasing function of $A$, so that it indeed has a limit (possibly $+\infty$).
Moreover, we can restrict the infimum to $\norm{\bz} \leq \norm{\bh} + \alpha / A$, since trivially for all $A$ one has (recall $\ell_\delta \leq 1$):
\begin{align*}
\inf_{\substack{\bz \in \bbR^m \\ \norm{\bh}<\norm{\bz} \leq \norm{\bh} + \alpha / A}} \Big[\frac{1}{n} \sum_{\mu=1}^m \ell_\delta(g_\mu - z_\mu) &+ A (\norm{\bz} - \norm{\bh}) \Big]
\overset{(\rm a)}{\leq} \inf_{\substack{\bz \in \bbR^m \\ \norm{\bh}<\norm{\bz} \leq \norm{\bh} + \alpha / A}} \Big[\alpha + A (\norm{\bz} - \norm{\bh}) \Big], \\
&\leq \alpha \overset{(\rm b)}{\leq}
\inf_{\substack{\bz \in \bbR^m \\ \norm{\bz} > \norm{\bh} + \alpha / A}} \Big[\frac{1}{n} \sum_{\mu=1}^m \ell_\delta(g_\mu - z_\mu) + A (\norm{\bz} - \norm{\bh}) \Big],
\end{align*}
in which we used in $(\rm a)$ and $(\rm b)$ that $\ell_\delta(x)\in[0,1]$.
We let $\varepsilon > 0$, and for all $A > 0$ we fix $\tilde{\bz}^{(A)} \in \bbR^m$ with $\|\tilde{\bz}^{(A)}\| \in (\|\bh\| , \|\bh\| + \alpha/A]$ such that:
\begin{align*}
\frac{1}{n} \sum_{\mu=1}^m \ell_\delta(g_\mu - \tilde{z}^{(A)}_\mu) + A (\norm{\tilde{\bz}^{(A)}} - \norm{\bh})
&\leq
\inf_{\substack{\bz \in \bbR^m \\ \norm{\bz}>\norm{\bh} }} \Big[\frac{1}{n} \sum_{\mu=1}^m \ell_\delta(g_\mu - z_\mu) + A (\norm{\bz} - \norm{\bh}) \Big] + \varepsilon.
\end{align*}
Since $\norm{\tilde{\bz}^{(A)}} \leq \norm{\bh} + \alpha / A$, we can extract a convergent subsequence $\bz^{(k)} \coloneqq \tilde{\bz}^{(A(k))}$, with $\bz^{(k)} \to \bz^*$ as $k \to \infty$,
and $\norm{\bh} < \norm{\bz^{(k)}} \leq \norm{\bh} + \alpha/A(k)$, with $A(k) \to \infty$.
Therefore $\norm{\bz^*} = \norm{\bh}$. Moreover:
\begin{align*}
\frac{1}{n}\sum_{\mu=1}^m \ell_\delta(g_\mu - z^*_\mu) &= \lim_{k \to \infty} \frac{1}{n}\sum_{\mu=1}^m \ell_\delta(g_\mu - z^{(k)}_\mu), \\
&\leq \liminf_{k \to \infty} \Big[\frac{1}{n} \sum_{\mu=1}^m \ell_\delta(g_\mu - z^{(k)}_\mu) + A(k) (\norm{\bz^{(k)}} - \norm{\bh}) \Big], \\
&\leq \liminf_{k \to \infty}\Big\{\inf_{\substack{\bz \in \bbR^m\\ \norm{\bz}>\norm{\bh}}} \Big[\frac{1}{n} \sum_{\mu=1}^m \ell_\delta(g_\mu - z_\mu) + A(k) (\norm{\bz} - \norm{\bh}) \Big]\Big\} + \varepsilon, \\
&\leq
\lim_{A \to \infty} \inf_{\substack{\bz \in \bbR^m \\ \norm{\bz} > \norm{\bh}}} \Big[\frac{1}{n} \sum_{\mu=1}^m \ell_\delta(g_\mu - z_\mu) + A (\norm{\bz} - \norm{\bh}) \Big] + \varepsilon.
\end{align*}
Since $\varepsilon > 0$ is arbitrary, the claim of eq.~\eqref{eq:to_show_A_infty} follows.
\end{proof}
\myskip
\begin{proof}[Proof of Lemma~\ref{lemma:application_minmax} --]
Recall eq.~\eqref{eq:ground_state_duality_smoothed}. In particular, for any $A, \delta, K > 0$ we have:
\begin{align}\label{eq:def_GAdK}
G_K(\bW) \geq G_{A, \delta, K}(\bW) &\coloneqq \inf_{\bz \in B(K)} \min_{\bx \in \mcS^{n-1}} \max_{\norm{\blambda} \leq A} \Big\{\blambda^\intercal \bW \bx - \blambda^\intercal \bz + \frac{1}{n} \sum_{\mu=1}^m \ell_\delta(z_\mu)\Big\}.
\end{align}
By eq.~\eqref{eq:ub_G_GK} and eq.~\eqref{eq:def_GAdK}, we have:
\begin{align}\label{eq:domination_G_GAdK}
\bbP_\bW[G(\bW) \leq t] &\leq \bbP_\bW[G_K(\bW) \leq t] + e^{-n} \leq \bbP_\bW[G_{A, \delta, K}(\bW)\leq t ] + e^{-n}.
\end{align}
Using Proposition~\ref{prop:gaussian_minmax} (since all sets are compact and functions involved are continuous, see in particular Remark~I below it)
we have, for all $t \in \bbR$:
\begin{align*}
\bbP_\bW[G_{A, \delta, K}(\bW)\leq t ] &\leq 2 \bbP_{\bg,\bh}[\mcC_{A, \delta, K}(\bg,\bh) \leq t],
\end{align*}
in which $\mcC_{A, \delta, K}(\bg, \bh)$ is defined as in eq.~\eqref{eq:def_CAd_gh}, restricting furthermore the infimum to $\bz \in B(K)$.
In particular, $\mcC_{A, \delta, K}(\bg, \bh) \geq \mcC_{A, \delta}(\bg, \bh)$.
Therefore by eq.~\eqref{eq:domination_G_GAdK}:
\begin{align}
\label{eq:domination_G_CAd}
\bbP_\bW[G(\bW) \leq t]&\leq 2 \bbP_{\bg,\bh}[\mcC_{A, \delta}(\bg,\bh) \leq t] + e^{-n}.
\end{align}
Note that $\bbP_{\bg,\bh}[\mcC_{A, \delta}(\bg,\bh) \leq t] = \EE_{\bg,\bh} [\indi\{\mcC_{A, \delta}(\bg,\bh) \leq t\}]$,
and moreover by Lemma~\ref{lemma:ao_simplification}\footnote{We use here the fact that $A \mapsto \mcC_{A, \delta}(\bg, \bh)$ is non-decreasing.}:
\begin{align*}
\lim_{A \to \infty} \indi\{\mcC_{A, \delta}(\bg,\bh) \leq t\} = \indi\{\mcC_\delta(\bg,\bh) \leq t\}.
\end{align*}
Taking the $A \to \infty$ limit in eq.~\eqref{eq:domination_G_CAd} and using the dominated convergence theorem
ends the proof of Lemma~\ref{lemma:application_minmax}.
\end{proof}
\myskip
\begin{proof}[Proof of Lemma~\ref{lemma:Md_limit} --]
For $\delta\geq 0$, we define
\begin{align*}
f_\delta(\kappa) &\coloneqq - \kappa + \alpha \int_{\bbR} \mcD x \, \Big[\inf_{z \in \bbR} \{\kappa z^2 + \ell_\delta(x - z) \}\Big],
\end{align*}
so that $\mcM_\delta = \sup_{\kappa \geq 0} f_\delta(\kappa)$ for $\delta>0$, and
$\mcM = \sup_{\kappa \geq 0} f_0(\kappa)$.
Notice first that eq.~\eqref{eq:def_M} follows from the following identity, that can be easily checked:
\begin{align*}
\inf_{z \in \bbR} \{\kappa z^2 + \indi\{z < x\} \} &= \indi\{\sqrt{\kappa} x \geq 1\} + \indi\{\sqrt{\kappa} x \in (0,1)\} \kappa x^2.
\end{align*}
Lemma~\ref{lemma:Md_limit} will follow if we can show:
\begin{align}\label{eq:to_show_Md_M}
\lim_{\delta \to 0} \sup_{\kappa \geq 0} |f_\delta(\kappa) - f_0(\kappa)| = 0.
\end{align}
Notice that for all $\delta > 0$ and all $x \in \bbR$, we have
$\indi\{x > \delta \} \leq \ell_\delta(x) \leq \indi\{x > 0\}$.
In particular,
\begin{align*}
\inf_{z \in \bbR} \{\kappa z^2 + \indi\{z < x - \delta\} \} \leq \inf_{z \in \bbR} \{\kappa z^2 + \ell_\delta(x - z) \} \leq \inf_{z \in \bbR} \{\kappa z^2 + \indi\{z < x\} \}.
\end{align*}
One computes easily the left and right sides of this inequality:
\begin{align*}
\begin{dcases}
\inf_{z \in \bbR} \{\kappa z^2 + \ell_\delta(x - z) \} &\leq \indi\{\sqrt{\kappa} x \geq 1\} + \indi\{\sqrt{\kappa} x \in (0,1)\} \kappa x^2, \\
\inf_{z \in \bbR} \{\kappa z^2 + \ell_\delta(x - z) \} &\geq \indi\{\sqrt{\kappa} (x-\delta) \geq 1\} + \indi\{\sqrt{\kappa} (x-\delta) \in (0,1)\} \kappa (x-\delta)^2.
\end{dcases}
\end{align*}
Therefore we reach:
\begin{align*}
|f_\delta(\kappa) - f_0(\kappa)| &\leq \alpha \int_{1/\sqrt{\kappa}}^{1/\sqrt{\kappa} + \delta} \mcD x + \alpha \kappa \int_{0}^{1/\sqrt{\kappa}} \frac{\rd x}{\sqrt{2 \pi}} \, x^2 \, \Big[e^{-x^2/2} - e^{-(x+\delta)^2/2}\Big], \\
&\leq \alpha \int_0^\delta \mcD x + \alpha \int_{0}^{1/\sqrt{\kappa}} \frac{\rd x}{\sqrt{2 \pi}} \, \Big[e^{-x^2/2} - e^{-(x+\delta)^2/2}\Big], \\
&\leq \alpha \int_0^\delta \mcD x + \alpha \int_{0}^{\infty} \frac{\rd x}{\sqrt{2 \pi}} \, \Big[e^{-x^2/2} - e^{-(x+\delta)^2/2}\Big], \\
&\leq 2\alpha \int_0^\delta \mcD x,
\end{align*}
which goes to $0$ as $\delta \to 0$, uniformly in $\kappa$. This ends the proof.
\end{proof}
\subsection{Zero-temperature limit of the replica-symmetric solution}\label{subsec_app:zerotemp_RS}
In this section, we derive eqs.~\eqref{eq:chi_RS} and \eqref{eq:fstar_RS}.
Our arguments will sometimes be informal, and a rigorous treatment would demand more care.
\myskip
Recall that we have the expansion of eq.~\eqref{eq:q_RS_zerotemp}, with
$\chi_\RS$ the zero-temperature susceptibility of the system.
In this section, we often drop the $\RS$ subscript on quantities to lighten the notations.
We use the expansion of $H(x) = \int_x^\infty \mcD u$ for large $x \gg 1$:
\begin{align}\label{eq:expansion_H}
H(x) &= \frac{e^{-\frac{x^2}{2}}}{\sqrt{2 \pi}} \Big[\frac{1}{x} + \mathcal{O}_{x \to \infty}\Big(\frac{1}{x^3}\Big)\Big].
\end{align}
\textbf{Computation of $f^\star_\RS(\alpha)$ --}
We start by deriving eq.~\eqref{eq:fstar_RS}.
As one can check from eq.~\eqref{eq:phi_RS}, $\Phi_\RS(\alpha,\beta)$ is a differentiable function of $\beta$,
so by L'Hospital's rule we have $f^\star_\RS(\alpha) = \lim_{\beta\to\infty} e^\star_\RS(\alpha, \beta)$,
with $e^\star_\RS(\alpha,\beta) \coloneqq - \partial_\beta \Phi_\RS(\alpha,\beta)$.
We have from eq.~\eqref{eq:q_RS_zerotemp}:
\begin{align*}
\sqrt{\frac{q}{1-q}} &= \sqrt{\frac{\beta}{\chi}} + \mathcal{O}(\beta^{-1/2}).
\end{align*}
We compute the limit of the integrand in eq.~\eqref{eq:estar_beta} (changing variables $\xi \to - \xi$):
\begin{align*}
\frac{e^{-\beta} H \Big(-\xi \sqrt{\frac{q}{1-q} }\Big)}{1 - (1 - e^{-\beta}) H \Big(-\xi \sqrt{\frac{q}{1-q} }\Big)}
&\simeq
\frac{e^{-\beta} H \Big(-\xi \sqrt{\frac{\beta}{\chi} }\Big)}{1 - (1 - e^{-\beta}) H \Big(-\xi \sqrt{\frac{\beta}{\chi} }\Big)}.
\end{align*}
We separate three cases, and use the expansion of eq.~\eqref{eq:expansion_H} to reach that at leading order
in $\beta$:
\begin{align}\label{eq:expansion_estar_RS}
\frac{e^{-\beta} H \Big(-\xi \sqrt{\frac{\beta}{\chi} }\Big)}{1 - (1 - e^{-\beta}) H \Big(-\xi \sqrt{\frac{\beta}{\chi} }\Big)}
&\simeq \begin{dcases}
\frac{\sqrt{\chi}}{\sqrt{2 \pi \beta} |\xi|}e^{-\beta - \frac{\beta \xi^2}{2 \chi}} \to_{\beta \to \infty} 0 & \textrm{ if } \xi < 0, \\
\frac{\xi \sqrt{2 \pi \beta}}{\sqrt{\chi}} e^{-\beta + \frac{\beta \xi^2}{2 \chi}} \to_{\beta \to \infty} 0 & \textrm{ if } \xi \in (0, \sqrt{2 \chi}), \\
1 & \textrm{ if } \xi > \sqrt{2 \chi}.
\end{dcases}
\end{align}
Using the pointwise limit above, we reach (as we mentioned above, a more careful argument would need to be carried out to make this expansion rigorous)
\begin{align*}
e^\star(\alpha,\beta)
&\simeq \alpha \int_{\sqrt{2\chi}}^\infty \, \mcD \xi = \alpha H[\sqrt{2\chi}].
\end{align*}
In the end, we reach eq.~\eqref{eq:fstar_RS}:
\begin{align*}
f^\star_\RS(\alpha) = \lim_{\beta \to \infty} e^\star_\RS(\alpha,\beta) &= \alpha H[\sqrt{2 \chi_\RS}].
\end{align*}
\textbf{Computing $\chi$ --}
It now remains to find $\chi$ as a function of $\alpha$, from eq.~\eqref{eq:q_RS_eq_new}.
Plugging in the expansion of eq.~\eqref{eq:q_RS_zerotemp} we find (changing $\xi \to - \xi$):
\begin{align}\label{eq:chi_RS_1}
\frac{1}{\sqrt{\chi}}
&= -\alpha \lim_{\beta \to \infty} \frac{1}{\sqrt{\beta}} \int \mcD \xi \frac{(1-e^{-\beta}) \xi H'\Big(-\xi \sqrt{\frac{q}{1-q}} \Big)}{1-(1-e^{-\beta}) H\Big(-\xi \sqrt{\frac{q}{1-q}} \Big)}.
\end{align}
In the same way as in eq.~\eqref{eq:expansion_estar_RS}, we can show:
\begin{align}\label{eq:expansion_chi_RS}
- \frac{1}{\sqrt{\beta}} \frac{(1-e^{-\beta}) \xi H'\Big(-\xi \sqrt{\frac{q}{1-q}} \Big)}{1-(1-e^{-\beta}) H\Big(-\xi \sqrt{\frac{q}{1-q}} \Big)}
&\simeq \begin{dcases}
\frac{\xi}{\sqrt{2 \pi\beta}} \, e^{-\frac{\beta \xi^2}{2 \chi}} \to_{\beta \to \infty} 0 & \textrm{ if } \xi < 0, \\
\frac{\xi^2}{\sqrt{\chi}} & \textrm{ if } \xi \in (0, \sqrt{2 \chi}), \\
\frac{\xi}{\sqrt{2 \pi\beta}} \, e^{\beta-\frac{\beta \xi^2}{2 \chi}} \to_{\beta \to \infty} 0 & \textrm{ if } \xi > \sqrt{2 \chi}.
\end{dcases}
\end{align}
Therefore, we reach from eq.~\eqref{eq:chi_RS_1} that, as $\beta \to \infty$:
\begin{align*}
\alpha \int_0^{\sqrt{2\chi}} \mcD \xi \, \xi^2 &= 1,
\end{align*}
which is eq.~\eqref{eq:chi_RS}.
\subsection{Stability of the replica-symmetric solution}\label{subsec_app:stability_rs}
In this section we follow Appendix~4 of \cite{engel2001statistical} (see also e.g.\ \cite{urbani2018statistical}) to characterize the stability of the RS solution in replica space.
This gives rise to the so-called de Almeida-Thouless (AT) condition \cite{de1978stability,gardner1988optimal}, a stability criterion
expressed in terms of the so-called \emph{replicon eigenvalues}.
\myskip
We start again from the general expression of eq.~\eqref{eq:Phir_final}:
$\Phi(\alpha,\beta;r) = \sup_{\bQ} G_r(\bQ)$, with
\begin{align}\label{eq:Gr}
G_r(\bQ) \coloneqq \frac{1}{2} \log \det \bQ + \alpha \log \int_{\bbR^r} \frac{\rd \bz}{(2\pi)^{r/2} \sqrt{\det \bQ}} e^{-\frac{1}{2} \bz^\intercal \bQ^{-1} \bz -\beta \sum_{a=1}^r \theta(z^a)} = G_{1,r}(\bQ) + \alpha G_{2,r}(\bQ).
\end{align}
In what follows, we compute the Hessian of $G_r(\bQ)$ taken at the replica-symmetric point.
\subsubsection{The derivatives of \texorpdfstring{$G_{1,r}$}{G1r}}
The derivatives of $G_{1,r}(\bQ)$ can be worked out in terms of the matrix elements of $\bQ^{-1}$ (here $a<b$ and $c < d$):
\begin{align*}
\begin{dcases}
\frac{\partial G_{1,r}}{\partial Q_{ab}} &= Q^{-1}_{ab}, \\
\frac{\partial^2 G_{1,r}}{\partial Q_{ab} \partial Q_{cd}} &= -[Q^{-1}_{ac} Q^{-1}_{bd} + Q^{-1}_{ad} Q^{-1}_{bc}].
\end{dcases}
\end{align*}
Recall that at the replica symmetric point with $Q_{ab} = q$ and $Q_{aa} = 1$ we have
\begin{align}\label{eq:Qm1_RS}
\begin{dcases}
Q^{-1}_{aa} &= \frac{1 + (r-2)q}{(1-q) [1+(r-1)q]}, \\
Q^{-1}_{ab} &= -\frac{q}{(1-q) [1+(r-1)q]}.
\end{dcases}
\end{align}
Therefore (taking the notations of \cite{engel2001statistical}):
\begin{align}\label{eq:HessG1_RS}
\Bigg[\frac{\partial^2 G_{1,r}}{\partial Q_{ab} \partial Q_{cd}}\Bigg]_{\mathrm{RS}} &=
\begin{dcases}
P_1 & \textrm{ if } a = c \, ; b = d , \\
Q_1 & \textrm{ if the pairs } (a,b) \textrm{ and } (c,d) \textrm{ share exactly one index}, \\
R_1 & \textrm{ if all indices are distinct},
\end{dcases}
\end{align}
in which $P_1,Q_1,R_1$ are defined as:
\begin{align*}
\begin{dcases}
P_1 &\coloneqq -\Bigg(\frac{1 + (r-2)q}{(1-q) [1+(r-1)q]}\Bigg)^2 - \Bigg(\frac{q}{(1-q) [1+(r-1)q]}\Bigg)^2, \\
Q_1 &\coloneqq -\Bigg(\frac{1 + (r-2)q}{(1-q) [1+(r-1)q]}\Bigg)\Bigg(-\frac{q}{(1-q) [1+(r-1)q]}\Bigg) - \Bigg(\frac{q}{(1-q) [1+(r-1)q]}\Bigg)^2, \\
R_1 &\coloneqq - 2 \Bigg(\frac{q}{(1-q) [1+(r-1)q]}\Bigg)^2.
\end{dcases}
\end{align*}
We now take the limit $r \downarrow 0$. With a slight abuse of notation, we still denote the limits by $P_1,Q_1,R_1$:
\begin{align}\label{eq:PQR_1}
\begin{dcases}
P_1 &= \frac{-1 + 4 q(1-q) - q^2}{(1-q)^4}, \\
Q_1 &= \frac{q (1-q) - 2 q^2}{(1-q)^4}, \\
R_1 &= -\frac{2 q^2}{(1-q)^4}.
\end{dcases}
\end{align}
\subsubsection{The derivatives of \texorpdfstring{$G_{2,r}$}{G2r}}
We now turn to $G_{2,r}(\bQ)$, that we rewrite using a Gaussian transformation:
\begin{align}\label{eq:G2_alternate}
G_{2,r}(\bQ) &= \log \int_{\bbR^r} \frac{\rd \bu \rd \bv}{(2\pi)^{r}} e^{-\frac{1}{2} \sum_{a,b} Q_{ab} v^a v^b -\beta \sum_{a=1}^r \theta(u^a) + i \sum_{a=1}^r u^a v^a}.
\end{align}
This form is more suitable for computing the Hessian with respect to $\bQ$.
In order to write the formulas compactly, we introduce the following average for any function of $\{v^a\}$:
\begin{align*}
\langle g(\{v^a\}) \rangle_r &\coloneqq \Bigg\{\frac{\int_{\bbR^r} \rd \bu \, \rd \bv\, g(\{v^a\}) \, e^{-\frac{1}{2} \sum_{a,b} Q_{ab} v^a v^b -\beta \sum_{a=1}^r \theta(u^a) + i \sum_{a=1}^r u^a v^a}}{
\int_{\bbR^r} \rd \bu \, \rd \bv \, e^{-\frac{1}{2} \sum_{a,b} Q_{ab} v^a v^b -\beta \sum_{a=1}^r \theta(u^a) + i \sum_{a=1}^r u^a v^a}}\Bigg\}_{\mathrm{RS}}.
\end{align*}
With this definition, we have from eq.~\eqref{eq:G2_alternate}:
\begin{align}\label{eq:HessG2_1}
&\Bigg[\frac{\partial^2 G_{2,r}}{\partial Q_{ab} \partial Q_{cd}}\Bigg]_{\mathrm{RS}} =
\langle v^a v^b v^c v^d \rangle_r - \langle v^a v^b \rangle_r \langle v^c v^d \rangle_r,
\end{align}
in which $a < b$ and $c < d$.
One can easily see that this Hessian has the same ``replica-symmetric'' structure as the one of $G_{1,r}$:
\begin{align}\label{eq:HessG2_RS}
\Bigg[\frac{\partial^2 G_{2,r}}{\partial Q_{ab} \partial Q_{cd}}\Bigg]_{\mathrm{RS}} &=
\begin{dcases}
P_2 & \textrm{ if } a = c \, ; b = d , \\
Q_2 & \textrm{ if the pairs } (a,b) \textrm{ and } (c,d) \textrm{ share exactly one index}, \\
R_2 & \textrm{ if all indices are distinct}.
\end{dcases}
\end{align}
We compute these three terms separately in the limit $r \to 0$.
In order to simplify the results, we introduce the notation $\EE \langle g(v) \rangle$, with $\EE$ the expectation over $\xi \sim \mcN(0,1)$, and
\begin{align*}
\langle g(v) \rangle \coloneqq \frac{\int \rd u \, \rd v\,g(v) \, e^{-\frac{1-q}{2} v^2 - \beta \theta(u) + i v [u - \sqrt{q}\xi]}}{\int \rd u \, \rd v\, e^{-\frac{1-q}{2} v^2 - \beta \theta(u) + i v [u - \sqrt{q}\xi]}}.
\end{align*}
From this definition and eq.~\eqref{eq:HessG2_1}, one can check (using the same trick to decouple the replicas we used in the RS calculation, cf.\ Section~\ref{subsec:rs}) that we have,
as $r \to 0$:
\begin{align}\label{eq:PQR_2}
\begin{dcases}
P_2 &= \EE[\langle v^2\rangle^2] - \EE[\langle v\rangle^2]^2, \\
Q_2 &= \EE[\langle v^2\rangle \langle v\rangle^2] - \EE[\langle v\rangle^2]^2, \\
R_2 &= \EE[ \langle v\rangle^4] - \EE[\langle v\rangle^2]^2.
\end{dcases}
\end{align}
\subsubsection{de Almeida-Thouless condition for replica-symmetric stability}\label{subsubsec_app:AT_condition}
Classical replica studies \cite{engel2001statistical} show that for a Hessian having the form of eqs.~\eqref{eq:HessG1_RS} or \eqref{eq:HessG2_RS},
the linear stability of the RS local maximum is given by the sign of the ``replicon'' eigenvalue $P-2Q+R$.
More precisely, the AT condition for the stability of the RS solution in replica space reads here:
\begin{align*}
\lambda_3 = [P_1 - 2Q_1+R_1] + \alpha [P_2 - 2Q_2 +R_2]\leq 0.
\end{align*}
By eqs.~\eqref{eq:PQR_1} and \eqref{eq:PQR_2} we get:
\begin{align}\label{eq:AT_condition}
\frac{1}{(1-q)^2} \geq \alpha \EE\Big[\big(\langle v^2 \rangle - \langle v \rangle^2\big)^2\Big].
\end{align}
In order to make eq.~\eqref{eq:AT_condition} more explicit, we compute the right-hand side using the identity $\langle v^2 \rangle - \langle v \rangle^2 = - q^{-1} \partial^2_\xi \log \mcZ(\xi)$,
with
\begin{align*}
\mcZ(\xi) &\coloneqq \int \frac{\rd u \, \rd v}{2\pi} \, e^{-\frac{1-q}{2} v^2 - \beta \theta(u) + i v [u - \sqrt{q}\xi]}.
\end{align*}
This integral is easy to work out:
\begin{align*}
\log \mcZ(\xi) &= \log \int \frac{\rd u}{\sqrt{2\pi(1-q)}} \, e^{- \beta \theta(u) - \frac{1}{2 (1-q)} (u - \sqrt{q}\xi)^2}
= \log \Big[1 - (1-e^{-\beta}) H \Big(- \xi \sqrt{\frac{q}{1-q} }\Big)\Big].
\end{align*}
Let us define $f_\beta(h) \coloneqq \log (1 - (1-e^{-\beta}) H[-h/\sqrt{1-q}])$,
so that $\log \mcZ(\xi) = f_\beta(\sqrt{q} \xi)$.
Then $\langle v^2 \rangle - \langle v \rangle^2 = - f_\beta''(\sqrt{q}\xi)$.
The AT condition for the stability of the replica-symmetric solution is then expressed easily as a function of $(\alpha,q)$
at any $\beta \geq 0$ as
\begin{align}\label{eq:AT_explicit}
\frac{1}{\alpha} \geq (1-q)^2 \int \mcD\xi f_\beta''(\sqrt{q} \xi)^2.
\end{align}
\subsubsection{The \texorpdfstring{$\beta \to \infty$}{beta to infinity} limit}
We now take the limit $\beta \to \infty$ in eq.~\eqref{eq:AT_explicit}, introducing the zero-temperature susceptibility $\chi_\RS = \chi$ (cf.\ eq.~\eqref{eq:q_RS_zerotemp}).
Using the same expansions as in eqs.~\eqref{eq:expansion_estar_RS} and \eqref{eq:expansion_chi_RS}
we have as $\beta \to \infty$:
\begin{align}\label{eq:expansion_fbeta}
\frac{1}{\beta} f_\beta(h) &\simeq \begin{cases}
0 & \textrm{ if } h < 0, \\
-1 & \textrm{ if } h > \sqrt{2\chi}, \\
-\frac{h^2}{2\chi} & \textrm{ otherwise }.
\end{cases}
\end{align}
Therefore, for large $\beta$ we have $f_\beta''(h) \simeq - \beta \chi^{-1} \indi\{h \in (0,\sqrt{2\chi})\}$.
Since $(1-q)^2 \simeq \chi^2 / \beta^2$, the RS stability condition~\eqref{eq:AT_explicit} becomes, in the $\beta \to \infty$ limit:
\begin{align}\label{eq:AT_explicit_zero_temp}
\alpha \int_0^{\sqrt{2\chi}} \mcD \xi &\leq 1.
\end{align}
However, recall that in the zero-temperature limit, the RS susceptibility $\chi$ is given by the solution to eq.~\eqref{eq:chi_RS}:
\begin{align*}
\alpha \int_0^{\sqrt{2\chi}} \mcD \xi \, \xi^2 &= 1,
\end{align*}
which can be turned easily by integration by parts into:
\begin{align*}
\alpha \int_0^{\sqrt{2\chi}} \mcD \xi &= 1 + \alpha \sqrt{\frac{\chi}{\pi}} e^{-\chi} > 1,
\end{align*}
in which the inequality holds throughout the ``UNSAT'' phase $\alpha > 2$, in which $\chi < \infty$.
Therefore, eq.~\eqref{eq:AT_explicit_zero_temp} is \emph{never} satisfied for any $\alpha > 2$: at zero-temperature, the replica-symmetric solution is never linearly stable!
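This conclusion is also easy to check numerically: the short Python snippet below (the values of $\alpha$ are arbitrary) solves eq.~\eqref{eq:chi_RS} for $\chi$ and evaluates the left-hand side of eq.~\eqref{eq:AT_explicit_zero_temp}, which indeed always exceeds $1$:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

# int_0^s x^2 Dx = Phi(s) - 1/2 - s*phi(s), with s = sqrt(2*chi)
I = lambda s: norm.cdf(s) - 0.5 - s * norm.pdf(s)
for alpha in (2.5, 4.0, 6.7, 10.0, 100.0):
    s = brentq(lambda s: alpha * I(s) - 1.0, 1e-8, 60.0)  # eq. (chi_RS)
    print(f"alpha={alpha:7.1f}  chi={s**2 / 2:8.4f}  "
          f"alpha*int_0^s Dxi = {alpha * (norm.cdf(s) - 0.5):.4f}")
\end{verbatim}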
\subsection{The full-RSB prediction for the free entropy}
The full-RSB calculation is detailed in Appendix~\ref{sec_app:frsb}, and quite closely follows a similar derivation
presented in \cite{franz2017universality,urbani2018statistical}.
\myskip
\textbf{Notations --}
Before stating the result, let us introduce some notation.
For any $\sigma \geq 0$, we let $\gamma_{\sigma^2}(h) \coloneqq \exp\{-h^2/(2\sigma^2)\} / \sqrt{2 \pi \sigma^2}$ denote the PDF of $\mcN(0,\sigma^2)$.
For two functions $a, b : \bbR \to \bbR$, we denote $(a \star b)(h) = \int \rd u \, a(u) b(h-u)$ their convolution.
For a function $f(x,h)$ with $x \in [0,1]$ and $h \in \bbR$, we always consider convolutions in the $h$ variable, e.g.\ the notation
$\gamma_{\sigma^2} \star f(x,h)$ denotes the function $(\gamma_{\sigma^2} \star f)(x,h) = \int \rd u \, \gamma_{\sigma^2}(u) f(x, h - u)$.
Moreover, we denote derivatives in the $x$ variable with a dot and derivatives in the $h$ variable with a prime, e.g.\ $\dot{f} = \partial_x f$ and $f''= \partial^2_h f$.
\myskip
Let us now state the results of the full-RSB calculation.
We obtain the following formula for the free entropy:
\begin{align}\label{eq:phi_frsb}
\Phi_\FRSB(\alpha,\beta) &= \inf_{\{q(x)\}} \Bigg\{\frac{1}{2} \log (1-q(1)) + \frac{q(0)}{2(1-\langle q \rangle)} + \frac{1}{2} \int_0^1 \rd u \frac{\dot{q}(u)}{\lambda(u)} + \alpha (\gamma_{q(0)} \star f) (0, 0)\Bigg\}.
\end{align}
Here, we denoted $\langle q \rangle = \int_0^1 \rd u \, q(u)$ and we defined the auxiliary function:
\begin{align}\label{eq:def_lambdax}
\lambda(x) &\coloneqq 1 - x q(x) - \int_x^1 \rd y \, q(y).
\end{align}
Moreover, $f(x,h)$ is taken to be the solution of the \emph{Parisi PDE}:
\begin{align}\label{eq:Parisi_PDE}
\begin{dcases}
f(1,h) &= \log{\gamma_{1-q(1)} \star e^{-\beta \theta}(h)}, \\
\dot{f}(x,h) &= - \frac{\dot{q}(x)}{2} \big[f''(x,h) + x f'(x,h)^2\big], \hspace{0.5cm} x \in (0,1).
\end{dcases}
\end{align}
Similar equations were derived and analyzed in \cite{franz2017universality,urbani2018statistical}.
These works followed a long series of important papers on
the spherical perceptron and its connection to the packing of hard spheres \cite{charbonneau2014fractal,franz2015universal,rainone2015following}.
Note that these works consider a shift $\sigma$ in the perceptron activation, so that here we are in the $\sigma = 0$ setting of their results.
Moreover, their energy function is slightly different from eq.~\eqref{eq:def_energy}, as it contains a multiplicative quadratic term.
\myskip
\textbf{The positive-temperature FRSB equations --}
In order to impose the Parisi PDE constraint on the function $f(x,h)$ in eq.~\eqref{eq:phi_frsb}, we use a functional Lagrange multiplier $\Lambda(x,h)$.
This yields that the free entropy $\Phi_\FRSB(\alpha,\beta)$ is given by the extremization with respect to $q(x), \Lambda(x,h), f(x,h)$ of:
\begin{align}\label{eq:frsb_fentropy_2}
\Phi_\FRSB(\alpha,\beta) &= \frac{1}{2} \log (1-q(1)) + \frac{q(0)}{2(1-\langle q \rangle)} + \frac{1}{2} \int_0^1 \rd u \frac{\dot{q}(u)}{\lambda(u)} + \alpha \gamma_{q(0)} \star f (0,0) \nonumber \\
& - \alpha \int \rd h \, \Lambda(1,h) [f(1,h) - \log \gamma_{1-q(1)} \star e^{-\beta \theta}(h)] \nonumber \\
& + \alpha \int_{0}^1 \rd x \int \rd h \, \Lambda(x,h) [\dot{f}(x,h) + \frac{\dot{q}(x)}{2} (f''(x,h) + x f'(x,h)^2)].
\end{align}
Differentiating these equations with respect to $\Lambda(x,h)$ yields the Parisi PDE of eq.~\eqref{eq:Parisi_PDE} (as it should),
while differentiation w.r.t.\ $q(x)$ and $f(x,h)$ respectively yield:
\begin{subnumcases}{\label{eq:frsb_eqs}}
\label{eq:frsb_eq_1}
\frac{q(0)}{\lambda(0)^2} + \int_0^x \rd u \frac{\dot{q}(u)}{\lambda(u)^2} = \alpha \int \rd h \, \Lambda(x,h) f'(x,h)^2, &\\
\label{eq:frsb_eq_2}
\dot{\Lambda}(x,h) = \frac{\dot{q}(x)}{2} \Big[\Lambda''(x,h) - 2 x (f'(x,h) \Lambda(x,h))'\Big], &\\
\label{eq:frsb_eq_3}
\Lambda(0, h) = \gamma_{q(0)}(h). &
\end{subnumcases}
Finally, differentiation w.r.t.\ $\beta$ yields the average energy:
\begin{align}\label{eq:estar_frsb}
e^\star_\FRSB(\alpha, \beta) &\coloneqq - \partial_\beta \Phi_\FRSB(\alpha,\beta) = \alpha \int \rd h \, \Lambda(1,h) \frac{\big[\gamma_{1-q(1)} \star \theta e^{-\beta \theta}\big](h)}{\big[\gamma_{1-q(1)} \star e^{-\beta \theta}\big](h)}.
\end{align}
\myskip
\textbf{A sanity check: the RS solution --}
In the RS assumption, we have $q(x) = q_0$ for all $x$. In particular, this implies that
$\dot{q}(x) = 0$, and $q(0) = \langle q \rangle = q_0$. Moreover, it is easy to see that in this case,
since $\dot{q}(x) = 0$, we have $f(x,h) = f(1,h) = \log \gamma_{1-q_0} \star e^{-\beta \theta}(h)$ for all $x$, and similarly $\Lambda(x,h) = \gamma_{q_0}(h)$.
Therefore, eq.~\eqref{eq:frsb_eq_1} becomes:
\begin{align}\label{eq:frsb_eq_rs}
\frac{q_0}{(1-q_0)^2} &= \alpha \int \rd h \gamma_{q_0}(h) \Bigg[\frac{\big(\gamma_{1-q_0} \star e^{-\beta \theta}\big)'(h)}{\gamma_{1-q_0} \star e^{-\beta \theta}(h)}\Bigg]^2.
\end{align}
One can check (the derivation is presented in Appendix~\ref{subsec_app:rs_from_rsb}) that this equation is equivalent to eq.~\eqref{eq:q_RS_eq_new}: we found back the RS solution!
\subsection{Zero-temperature limit and algorithm for the injectivity threshold}
\label{subsec:zero_temp_algorithmic_frsb}
\textbf{The zero-temperature limit --}
In the zero-temperature limit, the scaling of the FRSB equations in the ``UNSAT'' phase of a slightly different spherical perceptron
has been shown in \cite{franz2017universality} to be very similar to that of the SK model. We conjecture that this scaling remains the same in our model.
More precisely, in the $\beta \to \infty$ limit, letting $\lambda(q) \coloneqq \lambda[x(q)]$ and $f(q,h) \coloneqq f(x(q), h)$, we assume:
\begin{align}\label{eq:frsb_zerotemp_scaling}
\begin{cases}
q_M &\simeq 1 - \chi / \beta, \\
\beta x(q) &\to x_\infty(q), \\
\beta \lambda(q) &\to \lambda_\infty(q), \\
\beta^{-1} f(q,h) &\to f_\infty(q, h).
\end{cases}
\end{align}
Moreover, $\Lambda(q,h) \coloneqq \Lambda(x(q),h)$ remains finite.
In particular, since $x(q = 1) = 1$ by definition (see Fig.~\ref{fig:q_frsb}), we have that $x_\infty(q)$ now extends up to $+\infty$.
We define $q_\infty(x)$ as the inverse function to $x_\infty(q)$, and then we can define
all functions in terms of $x$, e.g.\ $f_\infty(x, h) \coloneqq f_\infty(q_\infty(x), h)$.
In this limit, all eqs.~\eqref{eq:frsb_eq_1},\eqref{eq:frsb_eq_2},\eqref{eq:frsb_eq_3} scale very naturally, and the
Parisi PDE of eq.~\eqref{eq:Parisi_PDE} as well. The only non-trivial part is the boundary condition at $x = 1$,
which becomes
\begin{align*}
f_\infty(x \to +\infty, h) &= \frac{1}{\beta}\log \gamma_{\chi/\beta} \star e^{-\beta \theta}(h) + \smallO(1).
\end{align*}
The scaling of the right hand-side can be worked out exactly:
\begin{align}\label{eq:f_xinfty}
f_\infty(x \to +\infty, h) &= \begin{cases}
0 & \textrm{ if } h < 0, \\
-1 & \textrm{ if } h > \sqrt{2\chi}, \\
-\frac{h^2}{2\chi} & \textrm{ otherwise }.
\end{cases}
\end{align}
Similarly, we can work out the zero-temperature limit of eq.~\eqref{eq:estar_frsb}, and we get:
\begin{align*}
f^\star_\FRSB(\alpha) &= \lim_{\beta \to \infty} e^\star_\FRSB(\alpha, \beta) = \alpha \int_{\sqrt{2\chi}}^\infty \Lambda_\infty(x \to \infty,h) \rd h.
\end{align*}
\myskip
\textbf{Algorithmic procedure --}
In this paragraph, for the clarity of the presentation, all quantities are considered in the zero-temperature limit, and we drop the $\infty$ subscripts.
\myskip
The procedure we use is relatively similar to the finite-temperature one described in Appendix~B of \cite{franz2017universality}, but is done at zero temperature,
and at fixed $x$ rather than fixed $q$ (as we found this choice to be numerically more stable).
In order to increase numerical precision, we rescale $h$ and use $t = h / \sqrt{2\chi}$, which allows us to handle small values of the susceptibility $\chi$.
Our algorithmic procedure is as follows:
\myskip
\begin{itemize}
\item \textbf{Before starting --} Pick $k$ large enough, $x_\mathrm{max} \gg 1$, and a grid $0 < x_0 < x_1 < \cdots < x_{k-1} = x_\mathrm{max} < x_k = \infty$.
\item \textbf{Initialization --} Start from a guess $\chi > 0$ and $0 \leq q_0 \leq q_1 \leq \cdots \leq q_{k-1} < q_k = 1$.
\item[$(i)$] Find the functions $f(x_i,t)$ via the procedure:
\begin{align}\label{eq:frsb_procedure_i}
f(x_k = \infty, t) &=
\begin{dcases}
0 & \textrm{ if } t \leq 0 , \\
-1 & \textrm{ if } t \geq 1, \\
- t^2 & \textrm{ if } t \in (0,1).
\end{dcases}, \\
f(x_i,t) &= \frac{1}{x_i} \log \Big[\gamma_{\frac{q_{i+1} - q_i}{2\chi}} \star e^{x_i f}(x_{i+1},t)\Big].
\end{align}
\item[$(ii)$] Find $\Lambda(q_i,t)$ via the procedure:
\begin{align}\label{eq:frsb_procedure_ii}
\begin{dcases}
\Lambda(x_0, t) &= \gamma_{q_0}(\sqrt{2\chi}t), \\
\Lambda(x_i,t) &= e^{x_{i-1} f(x_i,t)}\, \gamma_{\frac{q_i - q_{i-1}}{2\chi}} \star \Big[\Lambda \cdot e^{-x_{i-1} f}\Big](x_{i-1},t).
\end{dcases}
\end{align}
\item[$(iii)$] Compute $q^{-1}_i$ (the hierarchical elements of $\bQ^{-1}$, not $1/q_i$) using, for all $i \in \{0,\cdots,k\}$:
\begin{align}\label{eq:frsb_procedure_iii}
q_i^{-1} &= - \frac{\alpha}{\sqrt{2\chi}} \int \rd t \, \Lambda(x_i,t) \, f'(x_i, t)^2.
\end{align}
\item[$(iv)$] Update $\lambda_i = \lambda(q_i)$ via
\begin{align}\label{eq:frsb_procedure_iv}
\begin{dcases}
\lambda_0 &= \sqrt{-\frac{q_0}{q_0^{-1}}}, \\
\frac{1}{\lambda_i} &= \frac{1}{\lambda_{i-1}} - x_{i-1} (q_i^{-1} - q_{i-1}^{-1}).
\end{dcases}
\end{align}
\item[$(v)$] Update $\{q_i\}_{i=0}^k$ with $q_k = 1$ and
\begin{align}\label{eq:frsb_procedure_v}
q_i &= 1 - \frac{\lambda_i}{x_i} - \sum_{j=i+1}^k \Big(\frac{1}{x_j} - \frac{1}{x_{j-1}}\Big) \lambda_j.
\end{align}
\item[$(vi)$]
Update $\chi$ by solving the equation (with $q_{-1} = 0$ and $x_{-1} = 0$):
\begin{align}\label{eq:frsb_procedure_vi}
\sum_{i=0}^k \frac{\sqrt{\chi}(q_i - q_{i-1})}{\Big[\chi + \sum_{j=i+1}^k (q_j - q_{j-1}) x_{j-1}\Big]\Big[\chi + \sum_{j=i}^k (q_j - q_{j-1}) x_{j-1}\Big]}
&= 2^{3/2}\alpha \int_0^{1} \rd t \, \Lambda( 1,t) \, t^2.
\end{align}
\item Iterate steps $(i) \to (vi)$ until convergence.
\item \textbf{Final value for the energy --} We then compute the ground state energy as:
\begin{align*}
f^\star_\FRSB(\alpha) &= \alpha \sqrt{2\chi} \int_{1}^\infty \rd t \, \Lambda(1,t).
\end{align*}
\end{itemize}
The procedure is done for $k$ large enough so that the result does not vary with $k$ and approaches the $k \to \infty$ limit.
Steps~$(i)$ and $(ii)$ are a discretization of the zero-temperature limits of the PDEs of eqs.~\eqref{eq:Parisi_PDE} and \eqref{eq:frsb_eq_2}, arising from the $k$-RSB ansatz (see Appendix~\ref{sec_app:frsb}).
We give more details on the derivation of steps $(iii) - (vi)$ in Appendix~\ref{subsec_app:derivation_algorithmic_frsb}, leveraging results of \cite{franz2017universality}.
\myskip
The different convolutions with Gaussians are done using an analytical formula for the Discrete Fourier Transform (DFT) of a Gaussian under a Shannon-Whittaker interpolation,
and fast Fourier transform techniques. More details on this point are given in Appendix~\ref{subsec_app:convolutions}.
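\myskip
As an illustration of steps $(i)$ and $(ii)$, the following Python sketch performs a single downward/upward sweep for a \emph{fixed, arbitrary} guess of $(\chi, \{q_i\})$.
The grids, the guess, and the use of \texttt{scipy.ndimage.gaussian\_filter1d} for the Gaussian convolutions (instead of the DFT-based scheme described above) are all simplifying assumptions; the full procedure would iterate steps $(i)$--$(vi)$ to convergence before evaluating the energy:
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter1d

# One sweep of steps (i)-(ii) for a fixed, arbitrary guess of (chi, {q_i}).
alpha, chi, k = 7.0, 0.05, 50
x = np.geomspace(0.1, 15.0, k)         # grid x_0 < ... < x_{k-1} = x_max
q = np.linspace(0.2, 1.0, k + 1)       # guess, with q_k = 1
t = np.linspace(-6.0, 7.0, 4001)       # rescaled field t = h / sqrt(2 chi)
dt = t[1] - t[0]

def conv(F, var):
    # Gaussian convolution gamma_var * F on the t-grid (edge values extended)
    return gaussian_filter1d(F, sigma=np.sqrt(var) / dt, mode="nearest")

# boundary condition at x_k = infinity, cf. eq. (frsb_procedure_i)
f_inf = np.where(t <= 0, 0.0, np.where(t >= 1.0, -1.0, -t**2))

# step (i): sweep down from x_{k-1} to x_0
fs, f = [None] * k, f_inf
for i in range(k - 1, -1, -1):
    f = np.log(conv(np.exp(x[i] * f), (q[i + 1] - q[i]) / (2 * chi))) / x[i]
    fs[i] = f

# step (ii): sweep up from x_0 to q_k = 1, cf. eq. (frsb_procedure_ii)
lam = np.exp(-chi * t**2 / q[0]) / np.sqrt(2 * np.pi * q[0])
for i in range(1, k):
    lam = np.exp(x[i - 1] * fs[i]) * conv(lam * np.exp(-x[i - 1] * fs[i - 1]),
                                          (q[i] - q[i - 1]) / (2 * chi))
lam = np.exp(x[k - 1] * f_inf) * conv(lam * np.exp(-x[k - 1] * fs[k - 1]),
                                      (q[k] - q[k - 1]) / (2 * chi))

# energy functional of this (non-converged) guess, as in the last item above
print("energy of the guess:", alpha * np.sqrt(2 * chi) * dt * lam[t >= 1.0].sum())
\end{verbatim}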
\myskip
\textbf{Implementation and results --}
We present our results for $f^\star_\FRSB$ and the zero-temperature susceptibility $\chi$ in Fig.~\ref{fig:chi_estar_T0}, and the zero-temperature overlap distribution function $q(x)$
for various values of $\alpha$ in Fig.~\ref{fig:q_T0}.
\begin{figure}[th]
\centering
\includegraphics[width=0.8\textwidth]{overlap_function_T0.pdf}
\caption{$T=0$ limit of the overlap function $q(x)$ for the RS, 1RSB and FRSB solutions, for various values of $\alpha$.
We compare the predictions for the different
forms of the function $q(x)$ corresponding to the assumed level of replica symmetry breaking.
\label{fig:q_T0}}
\end{figure}
In particular, the full-RSB prediction for the injectivity threshold is $\alpha_\inj^\mathrm{FRSB} \simeq 6.698$.
We ran a more precise binary search procedure for computing the value of this transition,
which we detail in Appendix~\ref{subsec_app:numerics_frsb_threshold}. A summary of its result is presented
in Fig.~\ref{fig:alpha_inj_frsb_summary}, and it yields the bound we conjecture in Result~\ref{result:frsb_result}:
\begin{align}\label{eq:fRSB_inj_threshold}
6.6979 \leq \alpha_\inj^\mathrm{FRSB} \leq 6.6981.
\end{align}
\begin{figure}[th]
\centering
\includegraphics[width=1.0\textwidth]{alpha_inj_summary.pdf}
\caption{\label{fig:alpha_inj_frsb_summary}
Computation of $\alpha_\inj^\FRSB$ using the FRSB algorithmic procedure.
For different values of $x_\mathrm{max}$ and $k$ we give an interval numerically found to contain $\alpha_\inj^\FRSB$.
In Result~\ref{result:frsb_result} we took the interval of values obtained with $k = 200$ and $x_\mathrm{max} = 15$.
We give more details on the numerical procedure in Appendix~\ref{subsec_app:numerics_frsb_threshold}.
}
\end{figure}
Note that this bound is compatible with the hierarchy described in eq.~\eqref{eq:hierarchy_upper_bounds_alpha_inj}.
Moreover, the 1-RSB predictions are found to be very close (but not equal) to the exact FRSB results.
This can be intuitively visualized by the fact that $q(x)$ is relatively well approximated by a step function, which corresponds to the 1RSB ansatz, cf.\ Fig.~\ref{fig:q_T0}.
We emphasize however that the full-RSB algorithmic procedure above does not allow one to recover the $1$-RSB result directly, even when used with $k = 1$:
indeed, it implicitly relies on taking $k$ large enough that one does not have to optimize over the variables $x_1, \cdots, x_k$, so that they can be kept fixed.
\myskip
\textbf{Remark: convexity of the Parisi functional --} In the context of mixed $p$-spin models, the so-called Parisi functional, i.e.\ the functional whose infimum we take in eq.~\eqref{eq:phi_frsb}, has been shown to be strictly convex,
and thus to have a unique minimizer \cite{auffinger2015parisi}.
This is conjectured to hold as well in our setting, however there is no rigorous guarantee that our iterative procedure should converge to a global minimizer.
However, our numerical simulations are compatible with this conjecture: as we detail in Appendix~\ref{subsec_app:numerical_results_frsb}, we find the iterative procedure
to converge to a consistent solution for all initializing points.
Moreover, our procedure exhibits polynomial convergence (see Fig.~\ref{fig_app:convergence_frsb}):
this suggests that there is an accumulation of near-zero eigenvalues in the Hessian of the Parisi functional close to the minimum (otherwise we would observe exponential convergence), and thus that the Parisi functional is strictly but not strongly convex.
\subsection{Injectivity and (random) neural networks}
Our focus is on framing injectivity as a statistical physics problem and exploring parallels and discrepancies with the aforementioned conjecture based on integral geometry\footnote{If formal injectivity were itself the goal, we could simply replace ReLU by Leaky ReLU and reduce the problem to injectivity of matrices. In that case the interesting quantity to study may be the inverse Lipschitz constant.}.
But a study of injectivity has a variety of motivations in contemporary machine learning.
Inferring $\bx$ from $\varphi_{\bW}(\bx)$ is an ill-posed problem unless $\varphi_{\bW}$ is injective. The question thus arises naturally when applying neural networks to model forward and inverse maps in inverse problems \cite{puthawala2022globally, arridge2019solving}. There has been considerable interest in inverting generative models on their range to regularize ill-posed inverse problems \cite{bora2017compressed} and in building injective generative models \cite{brehmer2020flows,kothari2021trumpets,ross2021tractable}. Normalizing flows are designed to be invertible with efficiently computable inverses; similar feats can be achieved with injective maps, even with ReLU activations, while retaining favorable approximation-theoretic properties \cite{puthawala2022globally,puthawala2022universal,kothari2021trumpets}. In finite dimension injective maps are (locally) Lipschitz \cite{stefanov2009linearizing}. There is significant work on estimating and controlling the Lipschitz constants of deep neural networks; see for example \cite{fazlyab2019efficient,jordan2020exactly,gouk2021regularisation} and references therein.
\myskip
Applications abound beyond inverse problems: certain injective generative models can provably be trained with sample complexity which is polynomial in image dimension \cite{bai2018approximability}; a message-passing graph neural network is as powerful as the Weisfeiler--Lehman test, but only if the aggregation function is injective \cite{xu2018powerful}; injective ReLU networks are universal approximators of \textit{any} map with a sufficiently high-dimensional output space \cite{puthawala2022globally} as well as of densities on manifolds \cite{puthawala2022universal}.
\myskip
There is an analogy between random neural networks and random matrices.
Just as results for random matrices help us understand general matrices and have implications throughout mathematics, physics, engineering, and computer science, random neural networks yield insight into general neural networks.
This is the perspective of recent work on ``nonlinear random matrix theory'' for machine learning \cite{pennington2017nonlinear,louart2018random}.
We mention two other examples from this emerging line of research: neural networks at initialization have been used to theoretically study batch normalization \cite{daneshmand2020batch} and properties of gradients in deep networks \cite{hanin2020products}.
\subsection{Injectivity and random geometry}
\paragraph{Notation --}
$\bbN^\star = \bbZ_{> 0}$ denotes the positive integers.
We say that an event occurs with high probability (w.h.p.) when its probability is $1 - \smallO(1)$ as the dimension $n \to \infty$.
We denote by $\mu_n$ the uniform probability measure on the Euclidean unit sphere $\mcS^{n-1}$ in $\bbR^n$.
The symbol $\pto$ refers to convergence in probability, and $\mcD \xi$ is the standard Gaussian measure on $\bbR$, as usual in physics.
\myskip
Our first tool is a proposition proved in Appendix~\ref{subsec_app:proof_random_intersection}, stated as Proposition~4.10 in \cite{paleka2021injectivity} and Proposition~37 in \cite{clum2022topics}, which is a simple consequence of Theorem~1 of \cite{puthawala2022globally}.
It connects injectivity to random geometry:
\begin{proposition}[Injectivity and random geometry]
\label{prop:injectivity_random_intersection}
\noindent
The probability $p_{m,n}$ that $\varphi_\bW$ is injective is
\begin{align}\label{eq:pmn_random_intersection}
p_{m,n} &= \bbP_V\big[V \cap C_{m,n} = \{0\} \big],
\end{align}
where $V$ is a uniformly random $n$-dimensional subspace of $\bbR^m$, and $C_{m,n}$ is the set of vectors in $\bbR^m$ with strictly less than $n$ strictly positive coordinates.
\end{proposition}
\textbf{Remark --} Since $V \cap C_{m,n}$ is a cone, we can equivalently ask in eq.~\eqref{eq:pmn_random_intersection} that $V \cap C_{m,n} \cap \mcS^{m-1}$ be an empty set.
\myskip
Recall that we will study injectivity for large matrices $\bW$ in the proportional growth asymptotics,
\begin{equation*}
n \to \infty, \quad m / n \to \alpha.
\end{equation*}
In what follows we will only consider the case $m \geq n$ (and therefore $\alpha \geq 1$): for $m < n$, even the linear map $\bx \mapsto \bW \bx$ is not injective, implying that $p_{m,n} = 0$.
\myskip
The expression on the right-hand side of \eqref{eq:pmn_random_intersection} immediately evokes Gordon's ``escape through a mesh'' theorem which bounds the probability that a uniformly-sampled random subspace intersects a closed subset of the sphere $A \subseteq \mcS^{m-1}$ in terms of the Gaussian width of $A$ \cite{gordon1988milman}.
Applying Gordon's theorem is natural but we could only use it to show that $p_{m,n} \to 1$ when $\alpha \geq \alpha_\inj^\mathrm{mesh} \simeq 23.54$, which is suboptimal, given that previous work \cite{puthawala2022globally} proves injectivity
w.h.p.\ when $\alpha \geq 9.091$; see Appendix~\ref{sec_app:mesh} for details.
A more refined analysis of the random subspace--set intersection based on the phase transition in the expected Euler characteristic yields a sharp injectivity threshold prediction of $\alpha_\inj^\mathrm{Euler} \simeq 8.34$ \cite{paleka2021injectivity},
see Section~\ref{subsec:related_work}.
Here we refute this prediction and conjecture a new threshold based on a different geometric intuition.
\subsection{Statistical physics and the spherical perceptron}\label{subsec:statphys}
\paragraph{Injectivity as energy minimization --}
The random subspace $V$ of Proposition~\ref{prop:injectivity_random_intersection} is constructed as the column space of the random matrix $\bW$,
which has dimension $n$ with probability $1$ when $m \geq n$.
If $V' \coloneqq \bW(\mcS^{n-1})$ is the image of the $n$-dimensional unit sphere,
we have that $\bbP[V \cap C_{m,n} = \{0\}] = \bbP[V' \cap C_{m,n} = \emptyset]$.
Moreover, for any $\bx \in \mcS^{n-1}$ we can define $E_\bW(\bx)$ as the total number of positive coordinates of $\bW \bx$, and $e_\bW(\bx)$ as a normalization of this quantity:
\begin{align}\label{eq:def_energy}
E_\bW(\bx) &\coloneqq \sum_{\mu=1}^m \theta[(\bW \bx)_\mu], \hspace{2cm} e_\bW(\bx) \coloneqq \frac{E_\bW(\bx)}{n},
\end{align}
where $\theta(x) = \indi(x > 0)$ is the Heaviside step function, with the convention $\theta(0) = 0$.
Since $C_{m,n}$ is the set of all vectors in $\bbR^m$ with strictly less than $n$ (strictly) positive coordinates,
one has immediately that $\bW \bx \in C_{m,n} \Leftrightarrow E_\bW(\bx) < n$.
Therefore, by Proposition~\ref{prop:injectivity_random_intersection}, $p_{m,n}$ can be rewritten as\footnote{
The minimum is always reached since $E_\bW(\mcS^{n-1})$ is a finite set.
}
\begin{align}\label{eq:pmn_minimum}
p_{m,n} = \bbP_{\bW} \Big[\min_{\bx \in \mcS^{n-1}} E_{\bW}(\bx) \geq n\Big].
\end{align}
Eqs.~\eqref{eq:pmn_random_intersection} and \eqref{eq:pmn_minimum} express two different geometric intuitions. The former one lives in $\bbR^m$ (recall that $m \geq n$) and it is about an intersection of a random $n$-dimensional subspace and a certain nonconvex union of orthants. The latter one lives in $\bbR^n$ and it is about the existence of a halfspace which contains less than $n$ (out of $m$) random vectors. The two intuitions naturally encourage different analytic tools.
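The reformulation of eq.~\eqref{eq:pmn_minimum} can be made concrete in the (very non-asymptotic) special case $n = 2$: $E_\bW(\bx)$ is then piecewise constant on the circle, changing only at angles where some $\bW_\mu \cdot \bx = 0$, so its minimum can be computed exactly by evaluating it on each arc.
The following Python sketch (sample sizes arbitrary) estimates $p_{m,2}$ for a few values of $\alpha$, illustrating the second intuition: injectivity requires every open half-plane through the origin to contain at least $n$ of the rows $\bW_\mu$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def min_energy(W):
    # exact min of E_W(x) over the circle S^1 (n = 2 only): E_W is piecewise
    # constant, changing only at angles phi_mu +/- pi/2; evaluate arc midpoints
    phi = np.arctan2(W[:, 1], W[:, 0])
    b = np.sort(np.concatenate([phi + np.pi / 2, phi - np.pi / 2]) % (2 * np.pi))
    gaps = np.diff(np.append(b, b[0] + 2 * np.pi))
    mids = b + gaps / 2
    X = np.stack([np.cos(mids), np.sin(mids)])   # one direction x per arc
    return (W @ X > 0).sum(axis=0).min()

def p_injective(alpha, n=2, trials=5000):
    m = int(alpha * n)
    return np.mean([min_energy(rng.standard_normal((m, n))) >= n
                    for _ in range(trials)])

for a in (3, 5, 8, 12, 20):
    print(a, p_injective(a))
\end{verbatim}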
\paragraph{Statistical physics of disordered systems --}
The right-hand side of eq.~\eqref{eq:pmn_minimum} is reminiscent of quantities
that theoretical physicists have been tackling since the 1970s, in the field of \emph{physics of disordered systems}.
In these disordered models (also known as \emph{spin glasses}), one wishes to minimize an energy function like $E_\bW$,
which is itself a function of random interactions (also called \emph{quenched} disorder), represented in our case by $\bW$. We recommend the famous book by Mézard, Parisi, and Virasoro for a beautiful review of the early breakthroughs of the physics of spin glasses \cite{mezard1987spin}.
\myskip
Given this short description, we can see that eq.~\eqref{eq:pmn_minimum} fits the framework of these studies:
the energy function given in eq.~\eqref{eq:def_energy} defines a model known in the statistical physics literature as the
\emph{spherical perceptron} (sometimes referred to as the Gardner--Derrida perceptron \cite{gardner1988optimal} when $E_\bW(\bx)$ is given by eq.~\eqref{eq:def_energy}).
We now develop this perhaps unexpected point of view
on the injectivity of random layers in neural networks.
\paragraph{Cover's theorem and the bound $\alpha_\inj \geq 3$ --}
Cover's theorem \cite{cover1965geometrical} leads to a first natural bound for $\alpha_\inj$.
It proves that for $\alpha < 2$, there exists with high probability (as $n \to \infty$) $\bx \in \mcS^{n-1}$ s.t.\ $E_\bW(\bx) = 0$ (that is, the constraint satisfaction problem $E_\bW(\bx) = 0$ is satisfiable w.h.p.).
This can be shown to imply that $\alpha_\inj \geq 3$:
\begin{lemma}[Cover's lower bound for injectivity]\label{lemma:cover}
\noindent
Assume $\alpha < 3$. Then as $n,m \to \infty$ the ReLU layer is non injective with high probability,
i.e., $\lim_{n \to \infty} p_{m,n} = 0$.
\end{lemma}
Such arguments are classical, and we detail the proof of Lemma~\ref{lemma:cover} for completeness in Appendix~\ref{subsec_app:proof_cover}\footnote{In a nutshell, by Cover's theorem there is w.h.p.\ an $\bx$ at obtuse angle with the top $2n$ rows of $\bW$. Even if all the remaining $m - 2n$ rows form acute angles with $\bx$, we need at least $n$ such rows for injectivity.}.
Results about the perceptron based on Cover's theorem were greatly extended by Gardner and Derrida \cite{gardner1988space,gardner1988optimal} using non-rigorous tools, and then later rigorously justified by Shcherbina and Tirozzi \cite{shcherbina2002volume,shcherbina2003rigorous} and Stojnic \cite{stojnic2013another}.
In the constraint satisfaction problem (CSP) view on the perceptron, $\alpha = 2$ is sometimes referred to as the \emph{Gardner capacity}, which marks the limit between the satisfiable (SAT) and unsatisfiable (UNSAT) phases.
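As an aside, the $\alpha = 2$ transition invoked above can be seen at finite size: for Gaussian $\bW$ (almost surely in general position), the probability that the CSP $E_\bW(\bx) = 0$ is satisfiable is given exactly by Wendel's classical formula $2^{-m+1}\sum_{k=0}^{n-1}\binom{m-1}{k}$, which equals Cover's count of linearly separable dichotomies divided by $2^m$.
A few lines of Python (dimensions arbitrary) exhibit the sharpening of the transition as $n$ grows; at $\alpha = 2$ the probability is exactly $1/2$:
\begin{verbatim}
from math import comb

def p_sat(m, n):
    # Wendel's theorem: probability that some x satisfies W_mu . x < 0 for
    # all mu (i.e. min_x E_W(x) = 0), for W in general position
    return sum(comb(m - 1, k) for k in range(n)) / 2 ** (m - 1)

for n in (25, 100, 400):
    print(n, [round(p_sat(int(a * n), n), 4) for a in (1.5, 1.9, 2.0, 2.1, 2.5)])
\end{verbatim}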
\paragraph{Thermal relaxation: the Gibbs--Boltzmann distribution --}
Statistical physicists characterize the landscape of the (random) energy function $E_\bW(\bx)$ by considering the
\emph{Gibbs--Boltzmann} distribution $\bbP_{\beta,\bW}$, defined for any \emph{inverse temperature} $\beta \geq 0$ as
\begin{align}\label{eq:def_Gibbs}
\rd \bbP_{\beta,\bW}(\bx) &\coloneqq \frac{1}{\mcZ_n(\bW, \beta)} e^{-\beta E_\bW(\bx)}\mu_n(\rd \bx). \hspace{1cm} (\bx \in \mcS^{n-1})
\end{align}
Informally, the parameter $\beta \geq 0$ interpolates between two extremes: the infinite-temperature ($\beta = 0$) regime, in which the Gibbs measure is uniform on the sphere, and
the zero-temperature ($\beta \to \infty$) limit, in which the Gibbs measure is concentrated on the global minima of the energy function $E_\bW(\bx)$.
Studying the properties of the Gibbs measure for $n \to \infty$ at various $\beta$ (remaining finite when $n \to \infty$) yields deep insight about the landscape of the corresponding energy function \cite{ellis2006entropy}\footnote{The Gibbs distribution is also the invariant measure of stochastic optimization procedures such as Langevin dynamics.}.
In particular, many of our results will be based on an analysis of the large $n$ limit of the \emph{free entropy}, which is defined as
the logarithm of the normalization in eq.~\eqref{eq:def_Gibbs}:
\begin{align}\label{eq:def_phi}
\Phi_n(\bW,\beta) &\coloneqq \frac{1}{n} \log \mcZ_n(\bW, \beta) = \frac{1}{n} \log \int_{\mcS^{n-1}} \mu_n(\rd \bx) \, e^{-\beta E_\bW(\bx)}.
\end{align}
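As a concrete illustration, the following minimal Python sketch (independent of the paper's repository, and only indicative at very small $n$ and $\beta$, since the integral becomes dominated by exponentially rare low-energy configurations as $n$ grows) estimates $\Phi_n(\bW,\beta)$ by plain Monte-Carlo sampling of $\mu_n$; here we use that $E_\bW(\bx)$ counts the indices $\mu$ with $(\bW\bx)_\mu > 0$, as in the replicated energy of eq.~\eqref{eq:Phir} below.
\begin{verbatim}
# Independent sketch, not from the paper's repository.
import numpy as np

rng = np.random.default_rng(0)

def free_entropy_mc(W, beta, n_samples=500_000):
    # Plain Monte-Carlo estimate of Phi_n(W, beta) of eq. (def_phi).
    m, n = W.shape
    x = rng.standard_normal((n_samples, n))
    x /= np.linalg.norm(x, axis=1, keepdims=True)  # uniform samples on S^{n-1}
    E = (x @ W.T > 0).sum(axis=1)                  # E_W(x) = #{mu : (W x)_mu > 0}
    return np.log(np.exp(-beta * E).mean()) / n

n, alpha, beta = 10, 4.0, 0.5
W = rng.standard_normal((int(alpha * n), n))
print(free_entropy_mc(W, beta))
\end{verbatim}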
\paragraph{Universality of the free entropy --}
Following classical arguments based on the Lindeberg exchange method \cite{chatterjee2006generalization}, one can
show that the free entropy $\Phi(\alpha, \beta)$ is universal for all matrices $\bW$
with independent zero-mean entries with unit variance and uniformly bounded third moment.
In particular, all our conjectures and theorems on the free entropy can be stated in this more general case.
We note that in a recent line of work, similar universality properties have been generalized to matrices with independent rows (see, e.g.,
\cite{montanari2022universality,gerace2022gaussian}
and references therein) under a ``one-dimensional CLT'' condition.
In particular, \cite{gerace2022gaussian} conjectures that the ground state energy $f^\star(\alpha) = \lim_{n \to \infty} \{\min_\bx e_\bW(\bx)\}$ (shown in Fig.~\ref{fig:chi_estar_T0})
is universal with respect to the distribution of $\bW$ in a much wider class than matrices with independent elements: we leave the investigation of this conjecture and its implications on injectivity for future work.
\subsection{Related work}\label{subsec:related_work}
\paragraph{Average Euler characteristic prediction --}
We follow here closely the presentation of \cite{paleka2021injectivity} (see also \cite{clum2022topics}).
Proposition~\ref{prop:injectivity_random_intersection} is reminiscent of the kinematic formulas in integral geometry
\cite{schneider2008stochastic}, which
allow one to compute expressions of the type $\EE[F(V \cap C)]$, when $V$ is a uniformly-sampled random $n$-dimensional subspace, and
\begin{itemize}
\item[$(i)$] $C$ is a finite union of convex cones.
\item[$(ii)$] $F$ is an additive function, i.e.,\ it satisfies for any $A, B \subseteq \bbR^{m}$ that $F(A \cup B) + F(A \cap B) = F(A) + F(B)$.
\end{itemize}
Recall that we can write eq.~\eqref{eq:pmn_random_intersection} as $p_{m,n} = \EE[\indi_\mcS(V \cap C_{m,n})]$, with $\indi_\mcS(A) \coloneqq \indi\{A \cap \mcS^{m-1} \neq \emptyset\}$ the indicator that $A$ intersects the sphere $\mcS^{m-1}$.
While $C_{m,n}$ is indeed a finite union of orthants (and thus of convex cones), $\indi_\mcS$ is not additive.
However, it follows from Groemer's extension theorem \cite{schneider2008stochastic} that
the unique additive function on finite unions of convex cones that agrees with $\indi_\mcS$ on convex cones
is the (spherical) Euler characteristic $\chi_\mcS(A) \coloneqq \chi(A \cap \mcS^{m-1})$.
A possible heuristic is thus to approximate $p_{m,n} = \EE[\indi_\mcS(V \cap C_{m,n})]$ by
\begin{align}\label{eq:def_qmn}
q_{m,n} \coloneqq \EE[\chi_\mcS(V \cap C_{m,n})],
\end{align}
in order to apply the kinematic formulas.
We refer to \cite{paleka2021injectivity} for more discussion on the validity of this heuristic.
In particular, let us note that this strategy has also been used to estimate
the probability of excursions of random fields,
see \cite{adler2007random}.
Using the kinematic formulas, one can obtain an explicit formula for $q_{m,n}$.
Estimating its limit as $n, m \to \infty$ is involved, and a non-rigorous calculation performed in \cite{paleka2021injectivity}
leads to the conjecture:
\begin{align}\label{eq:conj_lim_qmn}
\begin{dcases}
\limsup_{n \to \infty} \frac{1}{n} \log q_{m,n} < 0 & \textrm{ for } \alpha < \alpha_\inj^\Eul , \\
\liminf_{n \to \infty} \frac{1}{n} \log q_{m,n} > 0 & \textrm{ for } \alpha > \alpha_\inj^\Eul,
\end{dcases}
\end{align}
for a sharp threshold $\alpha_\inj^\Eul \simeq 8.34$, which we will call the average Euler characteristic prediction for injectivity.
Checking the validity of this heuristic approach as a prediction for the behavior of $p_{m,n}$ was one of the motivations of our work.
\paragraph{Physics and mathematics of the perceptron --}
Motivated in particular by the relation of the perceptron to continuous constraint satisfaction problems (e.g.\ to soft sphere packing),
studies of the spherical perceptron in physics and mathematics are numerous. Without aiming at being exhaustive, and rather primarily referring to works relevant for our presentation,
these studies include \cite{gardner1988optimal,gardner1988space,franz2017universality} in the physics literature,
while the spherical perceptron has also been studied with mathematically rigorous techniques \cite{shcherbina2002volume}, \cite[Chapter 3]{talagrand2010mean}, \cite[Chapter 8]{talagrand2011mean}, \cite{stojnic2013another,stojnic2013negative,montanari2021tractability}.
In particular, the satisfiability threshold $\alpha = 2$ has been rigorously determined.
The techniques however do not apply to the unsatisfiable (UNSAT) regime, which is the one that is relevant in this paper. One reason for this is that the satisfiability question can be formulated in terms of a convex Hamiltonian, while in the unsatisfiable regime one is interested in a Hamiltonian given by the number of half-spaces a point is contained in, which is not convex. This precludes the straightforward use of these rigorous techniques to study the injectivity question. A rigorous sharp characterization of the unsatisfiable phase remains an important open problem.
We refer to \cite{bolthausen2022gardner} for a summary of current advances on the spherical perceptron, from both the physics and the mathematics points of view.
\paragraph{Other related work --}
Puthawala et al.\ derived a suite of results on injectivity of neural networks, including a simple analysis of random ReLU layers \cite{puthawala2022globally}.
By combining ideas related to Cover's theorem with union bounds over row selections from $\bW$ and concentration of measure, they proved upper and lower bounds on the injectivity threshold, the upper bound being later improved by Paleka \cite{paleka2021injectivity} and Clum \cite{clum2022topics}.
We summarize them in the following theorem:
\begin{theorem}[Known bounds for injectivity \cite{puthawala2022globally,paleka2021injectivity,clum2022topics}]\label{thm:known_bounds}
\begin{equation*}
\Big(\alpha \leq 3.3 \Rightarrow \lim_{n \to \infty} p_{m,n} = 0\Big) \quad \textrm{and} \quad \Big(\alpha \geq 9.091 \Rightarrow \lim_{n \to \infty} p_{m,n} = 1\Big).
\end{equation*}
\end{theorem}
By Proposition \ref{prop:injectivity_random_intersection}, the injectivity threshold can be characterized as a phase transition in the probability that a random subspace intersects a certain union of orthants.
Similar characterizations arise in the study of convex relaxations of sparse linear regression and other high-dimensional convex optimization problems with random data.
Amelunxen et al.\ connect the probability of success of these optimization problems to random convex constraint satisfaction problems, namely the probability that two random convex cones have a common ray \cite{amelunxen2014living}.
They prove that this probability exhibits a sharp phase transition in terms of scalar values known as the \emph{statistical dimension} of the cones.
Unfortunately, these results are limited to convex cones, whereas the union of orthants from Proposition \ref{prop:injectivity_random_intersection} is non-convex.
\subsection{Main results}\label{subsec:summary_results}
Recall that we study a high-dimensional regime in which $n \to \infty$
and $m = m(n)$ satisfies $m(n)/n \to \alpha > 0$.
We will sometimes use the notation $\alpha_n \coloneqq m(n)/n$.
The proofs of the rigorous statements in this section are given in Appendix~\ref{sec_app:proofs}.
\subsubsection{Relating the free entropy to injectivity}
Our starting point is eq.~\eqref{eq:pmn_minimum} in Proposition~\ref{prop:injectivity_random_intersection} (recall that $e_\bW(\bx) = E_\bW(\bx) / n$):
\begin{align*}
p_{m,n} = \bbP_{\bW} \Big[\min_{\bx \in \mcS^{n-1}} e_{\bW}(\bx) \geq 1\Big].
\end{align*}
Recall the definition of the free entropy in eq.~\eqref{eq:def_phi}.
We immediately have
\begin{align}\label{eq:bound_Phi_energy}
-\frac{\Phi_n(\bW,\beta)}{\beta} \geq \min_{\bx \in \mcS^{n-1}} e_{\bW}(\bx),
\end{align}
which formalizes the fact that the Gibbs distribution is a relaxation of the uniform distribution on the global minima of $E_\bW$: since $\mu_n$ is a probability measure, $\mcZ_n(\bW,\beta) \leq e^{-\beta \min_\bx E_\bW(\bx)}$.
Our strategy is to use eq.~\eqref{eq:bound_Phi_energy} to characterize injectivity.
This involves two challenging steps:
\begin{itemize}
\item[$(i)$]
Make the inequality of eq.~\eqref{eq:bound_Phi_energy} as tight as possible:
as we explain below,
conjecturally, when taking $n \to \infty$ and then $\beta \to \infty$, eq.~\eqref{eq:bound_Phi_energy} becomes an equality.
While we are not able to prove this statement, we will use it to conjecture a sharp transition for injectivity in terms of the aspect ratio $\alpha$.
Moreover, without assuming that this conjecture holds, we will also use eq.~\eqref{eq:bound_Phi_energy}
to prove upper bounds on the injectivity threshold.
\item[$(ii)$]
Computing the large system limit $n \to \infty$ of $\Phi_n(\bW, \beta)$ on the left-hand side of eq.~\eqref{eq:bound_Phi_energy}.
This is a central object in the physics of disordered systems, and we will provide a conjecture for its limiting value, as well as rigorous upper bounds.
Our results leverage a long line of work combining probability theory with heuristic predictions of statistical physics.
\end{itemize}
\myskip
The following statement is classical in the theory of disordered systems and a direct consequence of
celebrated concentration inequalities \cite{boucheron2013concentration}; for instance, resampling a single row of $\bW$ changes $\Phi_n$ by at most $\beta/n$, so the bounded-differences inequality applies.
It bounds the probability that the free entropy deviates from its mean (with respect to the disorder $\bW$):
\begin{theorem}[Free entropy concentration]\label{thm:free_entropy_concentration}
\noindent
For any $\beta \geq 0$ and $n \geq 1$, we have,
\begin{align*}
\bbP_\bW[|\Phi_n(\bW,\beta) - \EE_\bW \Phi_n(\bW,\beta)| \geq t] &\leq 2 \exp \Big\{ - \frac{n t^2}{2 \alpha_n \beta^2} \Big\}.
\end{align*}
\end{theorem}
Combined with the bound of eq.~\eqref{eq:bound_Phi_energy}, this already allows us to state a sufficient condition for
non-injectivity with high probability.
We summarize this in the following corollary, proved in Appendix~\ref{subsec_app:proof_cor_sufficient_non_inj}.
\begin{corollary}[Sufficient condition for non-injectivity]\label{cor:sufficient_non_injectivity}
\noindent
We denote $\Phi(\alpha,\beta) = \liminf_{n \to \infty} \EE_\bW \Phi_n(\bW, \beta)$.
It has the following properties:
\begin{itemize}
\item[$(i)$] $\beta \mapsto - \Phi(\alpha,\beta)/\beta$ is a positive non-increasing function of $\beta > 0$.
\item[$(ii)$] Its limit as $\beta \to \infty$ satisfies
\begin{align*}
\lim_{\beta \to \infty} \Big[- \frac{\Phi(\alpha,\beta)}{\beta} \Big] &< 1 \Rightarrow \lim_{n \to \infty} p_{m,n} = 0,
\end{align*}
that is, the limit being smaller than 1 implies non-injectivity w.h.p.\ as $n,m \to \infty$\footnotemark.
\end{itemize}
\end{corollary}
\footnotetext{The proof actually shows that $p_{m,n}$ goes to zero exponentially fast in $n$, see eq.~\eqref{eq:bound_pmn}.}
\paragraph{Existence of the limit --}
While we expect the limit of $\EE_\bW \Phi_n(\bW, \beta)$ as $n \to \infty$ to exist, or, in other words, $\Phi(\alpha, \beta)$ to be defined not only as a $\liminf$, this fact is far from trivial.
In the spin glass literature, this has historically been shown using interpolation methods due to Guerra, by showing sub-additivity of the free entropy in the system size
\cite{guerra2002thermodynamic,talagrand2010mean} for mean-field spin glass models possessing certain convexity properties.
Guerra's technique, however, fails beyond this setting, e.g.\ in bipartite (or other multi-species) spin glass models \cite{panchenko2015free}.
On the other hand, even in some mean-field spin glasses, including spherical $p$-spins,
the existence of the limit was only shown as a corollary of the much stronger asymptotically tight two-sided bound allowing to precisely relate the value of the limit to the Parisi formula, i.e.\ the prediction of statistical physics \cite{talagrand2006free,chen2013aizenman}\footnote{However an approximate sub-additivity property has recently been shown to be enough to deduce the convergence of the free entropy in this case \cite{subag2022convergence}.}.
In the spherical perceptron we consider here, the existence of this limit is, to the best of our knowledge, still a conjecture.
\myskip
Following the statistical physics intuition about the asymptotic tightness of eq.~\eqref{eq:bound_Phi_energy}, we conjecture the following.
\begin{conjecture}[Tightness of the free entropy bound]\label{conj:tightness_criterion}
\noindent
The bound of Corollary~\ref{cor:sufficient_non_injectivity} is tight, i.e.,
\begin{align*}
\begin{dcases}
\lim_{\beta \to \infty} \Big[- \frac{\Phi(\alpha,\beta)}{\beta}\Big] &< 1 \Rightarrow \lim_{n \to \infty} p_{m,n} = 0, \\
\lim_{\beta \to \infty} \Big[- \frac{\Phi(\alpha,\beta)}{\beta}\Big] &> 1 \Rightarrow \lim_{n \to \infty} p_{m,n} = 1.
\end{dcases}
\end{align*}
\end{conjecture}
\paragraph{A generalized conjecture --}
Conjecture \ref{conj:tightness_criterion} is a weakened version of a more general conjecture one can make from the definition of $\Phi(\alpha,\beta)$,
which largely motivates the study of free entropies in statistical physics.
First, assume that the limit defining $\Phi(\alpha,\beta)$ is well defined, so that $\Phi(\alpha,\beta) = \lim_{n \to \infty} \EE_\bW \Phi_n(\bW,\beta)$.
As $\beta \to \infty$, we expect the configurations that have dominating mass under the Gibbs measure of eq.~\eqref{eq:def_Gibbs}
to have the smallest energy, i.e., to be the ground state configurations.
Therefore, the stronger conjecture that motivates our use of $\Phi(\alpha,\beta)$ to characterize injectivity is that as $\beta \to \infty$, the bound of eq.~\eqref{eq:bound_Phi_energy} is actually an equality.
In a nutshell, this conjecture can be stated as ($\plim$ denotes limit in probability):
\begin{align}\label{eq:relation_Phi_ground_state}
\lim_{\beta \to \infty} -\frac{\Phi(\alpha,\beta)}{\beta} &=
\plim_{n \to \infty} \Big\{\min_{\bx \in \mcS^{n-1}} e_\bW(\bx) \Big\}.
\end{align}
Note that such a statement also assumes the concentration of the ground state energy on a value independent of $\bW$ as $n \to \infty$.
Generally, the concentration of the intensive energy $e_\bW(\bx)$ under the Gibbs measure at any given $\beta \geq 0$
can be deduced from the existence of the limiting free entropy and its differentiability in $\beta$ \cite{auffinger2018concentration}\footnote{Unfortunately, proving these properties often requires the full power of the so-called Parisi formula for the limit of the free entropy,
which must first be proven as we discuss later.}.
\paragraph{A remark on discretization --}
A subtlety in establishing eq.~\eqref{eq:relation_Phi_ground_state} arises from the continuous nature of the
variable $\bx$: one needs to rule out the existence of sets of ``super-exponentially'' small volume that might contain the global minima of $e_\bW$.
In discrete models this issue is often not present. For example, replacing $\int_{\mcS^{n-1}} \mu_n(\rd \bx)$ by $2^{-n} \sum_{\bx \in \{\pm 1\}^n}$ in eq.~\eqref{eq:def_phi} yields a model called the \emph{binary (or Ising) perceptron}, for which it is easy to see that
\begin{align}\label{eq:small_temp_discrete}
\min_{\bx \in \{\pm 1\}^n} e_{\bW}(\bx) \leq - \frac{\Phi_n(\bW, \beta)}{\beta} &\leq \min_{\bx \in \{\pm 1\}^n} e_{\bW}(\bx) + \frac{\log 2}{\beta},
\end{align}
so that the generalized conjecture of eq.~\eqref{eq:relation_Phi_ground_state} follows from the concentration and existence of the limit of the free entropy.
In our spherical model one could hope to approximate $\mcS^{n-1}$ by a sufficiently fine $\varepsilon$-net, so that the value of $\Phi_n(\bW, \beta)$ is well approximated by
averaging over the points of this net, and such that a two-sided bound similar to eq.~\eqref{eq:small_temp_discrete} holds.
Let us briefly describe such an approach.
Considering an arbitrary fixed vector $\bx \in \mcS^{n-1}$, it is clear that with high probability there exists $\mu \in [n]$ s.t.\ $|\bW_\mu \cdot \bx| \leq 1$\footnote{Since $\{\bW_\mu \cdot \bx\}_{\mu=1}^n \iid \mcN(0,1)$.}.
From this, one easily deduces that there exists a small rotation $\by = \bR \bx$ of $\bx$ (in the direction of $\pm \bW_\mu / \|\bW_\mu\|$), with angle $\mcO(1/\sqrt{n})$, such that $(\by \cdot \bW_\mu)(\bx \cdot \bW_\mu) < 0$, while $\|\by - \bx\|_2 \lesssim 1/\sqrt{n}.$
This (very) rough estimation shows that
$\varepsilon \lesssim 1/\sqrt{n}$ is necessary to approximate the minimum of $e_\bW$ over $\mcS^{n-1}$ by the minimum over a Euclidean-distance net.
However it is well known that such a net needs to have cardinality at least $(1 / \varepsilon)^n$ \cite{van2014probability}.
Thus under this discretization the term $\log 2/\beta$ in the upper bound of eq.~\eqref{eq:small_temp_discrete} becomes $\Omega(\log \varepsilon^{-1} / \beta) = \Omega(\log n/\beta)$.
Therefore we would need to consider diverging inverse temperatures $\beta = \beta(n) \gtrsim \log n$ in the discretized system for its free entropy to provably approximate the ground state energy.
A rigorous computation of the free entropy on this net with diverging $\beta$ would be challenging: since our results are based on heuristic methods of statistical physics assuming Conjecture~\ref{conj:tightness_criterion}
(with the exception of a rigorous upper bound), we leave the analysis of a possible discretization for future work.
\subsubsection{Predictions of full replica symmetry breaking theory}
Computing $\Phi(\alpha,\beta)$ is in general intractable rigorously.
We will show in Theorem~\ref{thm:bound_Gordon} that we can still derive meaningful rigorous bounds, but before describing that result we first introduce another conjecture, stemming from non-rigorous methods of statistical physics.
This conjecture, which we call a \emph{Parisi formula} as usual in spin glass models,
and that we derive in Section~\ref{sec:full_rsb} using the non-rigorous \emph{replica method} of statistical physics,
gives us a (heuristic) means to
\textit{exactly} compute $\Phi(\alpha,\beta)$.
\begin{conjecture}[Parisi formula]\label{conj:parisi_formula}
\noindent
$\Phi(\alpha,\beta)$ is given by the \emph{full replica symmetry breaking} (FRSB) prediction of statistical physics, discussed in Section~\ref{sec:full_rsb}.
More precisely, we have $\Phi(\alpha,\beta) = \Phi_\FRSB(\alpha,\beta)$, cf.\ eq.~\eqref{eq:phi_frsb}, with the following interpretation:
\begin{itemize}
\item[$(i)$] We have the ``Parisi formula'':
\begin{align}\label{eq:frsb_general}
\Phi_\FRSB(\alpha,\beta) = \inf_{q \in \mcF} \mcP[q;\alpha,\beta],
\end{align}
with $\mcF$ the set of non-decreasing functions from $[0, 1]$ to $[0,1]$, and
$\mcP[q;\alpha,\beta]$ a functional of $q$, whose expression is given in eq.~\eqref{eq:phi_frsb}.
\item[$(ii)$] The infimum in eq.~\eqref{eq:frsb_general} is attained at a $q^\star \in \mcF$ that is the functional inverse of the CDF of a probability distribution $\rho^\star$ on $[0,1]$, such that for any continuous bounded function $f$ we have
\begin{align}\label{eq:overlap_distribution}
\lim_{n \to \infty} \EE_{\bW} \Big[\EE_{(\bx, \bx') \sim \bbP_{\beta,\bW}^{\otimes 2}} f(\bx \cdot \bx')\Big] \, &= \int f(u) \, \rho^\star(\rd u),
\end{align}
where $\bbP_{\beta,\bW}$ is the Gibbs measure defined in eq.~\eqref{eq:def_Gibbs}.
\end{itemize}
\end{conjecture}
Eq.~\eqref{eq:overlap_distribution} shows that in the Parisi formula of eq.~\eqref{eq:frsb_general},
the functional parameter $q \in \mcF$ can be interpreted as the \emph{average overlap distribution} of the system.
Intuitively speaking, the ``alignment'' $\bx \cdot \bx'$ of two independent draws $\bx, \bx'$ of the Gibbs measure $\bbP_{\beta, \bW}$ (sharing the same matrix $\bW$)
will, on average, be distributed according to $\rho^\star$ as $n \to \infty$. The fact that the large-size limit of the system is characterized by this overlap distribution (therefore called an ``order parameter'' in statistical physics)
is one of the most important predictions of the replica symmetry breaking theory of Parisi, and
we will further discuss this theory in the following.
\paragraph{Rigorous approaches --}
While the most general full replica symmetry breaking framework is widely believed to yield exact predictions in the asymptotic limit,
proving these predictions is a field of probability theory in itself. Indeed, significant progress has been made
in some mean-field spin glass models, see e.g.\ \cite{talagrand2010mean,panchenko2014parisi},
or in the context of inference problems and the study of computational-to-statistical gaps \cite{bandeira2018notes,barbier2019optimal},
but proving the validity of the replica symmetry breaking procedure in more generality remains one of the important open problems in a rigorous description of the physics of disordered systems.
In particular, in the spherical perceptron considered here, the general full-RSB prediction is still a conjecture beyond the satisfiable phase.
\myskip
Based on Conjectures~\ref{conj:tightness_criterion} and \ref{conj:parisi_formula}, we can design a statistical physics program to characterize the injectivity of the ReLU layer:
\begin{itemize}
\item[$(i)$] For any $\beta \geq 0$, compute $\Phi(\alpha,\beta) = \lim_{n \to \infty} \EE_\bW \log \mcZ_n(\bW, \beta)/n$,
as given by the Parisi formula of Conjecture~\ref{conj:parisi_formula}.
\item[$(ii)$] Compute analytically the limit $f^\star(\alpha) \coloneqq - \lim_{\beta\to\infty} \Phi(\alpha,\beta)/\beta$.
This is a non-decreasing function of $\alpha$ (adding rows to $\bW$ can only increase the energy of every $\bx$), and moreover $f^\star(2) = 0$.
\item[$(iii)$] $\varphi_\bW$ is (typically) injective if $f^\star(\alpha) > 1$, and non-injective
if $f^\star(\alpha) < 1$. In particular, if $f^\star$ is continuous and strictly increasing (which we numerically observe), the injectivity threshold $\alpha_\inj$ is characterized by
\begin{align}\label{eq:criterion_alphainj}
\alpha_\inj &= [f^\star]^{-1}(1).
\end{align}
\end{itemize}
We perform this procedure in detail in Section~\ref{sec:full_rsb},
and it yields the main result of this section.
\begin{result}[``Full-RSB'' conjecture]\label{result:frsb_result}
\noindent
Assume Conjectures~\ref{conj:tightness_criterion} and \ref{conj:parisi_formula} hold.
Denote by $A$ the event ``$\varphi_\bW$ (cf.\ eq.~\eqref{eq:relu_layer}) is injective'',
and let $p_{m,n} = \bbP_\bW[A]$.
There exists a constant $\alpha_\inj^{\FRSB} \in (6.6979, 6.6982)$,
obtained via the Full-RSB prediction of Conjecture~\ref{conj:parisi_formula}, such that:
\begin{itemize}[leftmargin=25pt]
\item[$(i)$] If $\limsup_{n \to \infty} (m/n) < \alpha_\inj^\FRSB$, then $\lim_{n \to \infty} p_{m,n} = 0$.
\item[$(ii)$] If $\liminf_{n \to \infty} (m/n) > \alpha_\inj^\FRSB$, then $\lim_{n \to \infty} p_{m,n} = 1$.
\end{itemize}
\end{result}
\subsubsection{Additional bounds}
\paragraph{The replica hierarchy of upper bounds --}
In Conjecture~\ref{conj:parisi_formula}, the FRSB prediction is given as
\begin{align}\label{eq:def_Phi_FRSB}
\Phi_\FRSB(\alpha,\beta) = \inf_{ q \in \mcF } \mcP[q; \alpha,\beta],
\end{align}
and we saw that the function $q: [0,1] \to [0,1]$ could be interpreted in terms of an \emph{overlap distribution}.
Restricting the infimum to atomic overlap distributions with $k+1$ atoms (or, equivalently, letting $x \mapsto q(x)$ be a step function with $k+1$ steps)
yields a sequence of upper bounds indexed by $k \geq 0$:
\begin{align}\label{eq:hierarchy_upper_bounds_phi}
\Phi \overset{\conj}{=} \Phi_\FRSB = \lim_{k \to \infty} \Phi_{k-\RSB} \leq \cdots \leq \Phi_{k-\RSB} \leq \cdots \leq \Phi_{\ORSB} \leq \Phi_{\RS},
\end{align}
in which the ``$k-\RSB$'' functional is given by eq.~\eqref{eq:def_Phi_FRSB}, with the infimum restricted to step functions with $k+1$ steps, and we suppressed the dependence of all quantities on $\alpha$ and $\beta$ to lighten notation. Let $k^\star$ be the smallest $k$ such that $\Phi_\FRSB(\alpha,\beta) = \Phi_{k^\star-\RSB}(\alpha,\beta)$. If $1 \leq k^\star < \infty$, we say that the system is \emph{$k^\star$-th step replica symmetry breaking};
if $k^\star = 0$ the system is called \emph{replica-symmetric} (RS); if $k^\star$ does not exist the system is said to exhibit \emph{full replica symmetry breaking}.
We will clarify the meaning of ``replica symmetry breaking'' in Section~\ref{sec:upper_bounds}.
Finally, note that by using Corollary~\ref{cor:sufficient_non_injectivity} and Conjecture~\ref{conj:tightness_criterion},
eq.~\eqref{eq:hierarchy_upper_bounds_phi} transfers into a hierarchy of upper bounds for the injectivity threshold:
\begin{align}\label{eq:hierarchy_upper_bounds_alpha_inj}
\alpha_\inj \overset{\conj}{=} \alpha_\inj^\mathrm{FRSB} \leq \cdots \leq \alpha_\inj^\mathrm{(k+1)-RSB} \leq \alpha_\inj^\mathrm{k-RSB} \leq \cdots \leq \alpha_\inj^\mathrm{1RSB} \leq \alpha_\inj^\mathrm{RS},
\end{align}
in which $\alpha_\inj^{k-\RSB}$ is the value of $\alpha$ at which $f^\star_{k-\RSB}(\alpha) \coloneqq \lim_{\beta \to \infty}[-\Phi_{k-\RSB}(\alpha,\beta)/\beta]$ crosses $1$.
In particular, as we will see in Section~\ref{sec:upper_bounds}, one can compute $\alpha_\inj^\RS \simeq 7.65$.
Note that increasing $k$ and in particular going to the full-RSB solution, which conjecturally solves the problem, only takes us further from the Euler characteristic prediction $\alpha_\inj^\Eul \simeq 8.34$.
In Fig.~\ref{fig:chi_estar_T0} we illustrate the predictions at the RS, 1-RSB, and Full-RSB levels.
\paragraph{Proving the replica-symmetric bound --}
We can state another rigorous characterization.
Using Gordon's min-max theorem \cite{gordon1985some,thrampoulidis2018precise},
we can prove that the replica-symmetric prediction is an upper bound on the injectivity threshold:
\begin{theorem}[Replica-symmetric upper bound for the injectivity threshold]\label{thm:bound_Gordon}
\noindent
Assume that $\alpha > \alpha_\inj^\RS \simeq 7.65$. Then $p_{m,n} \to 1$ as $n,m \to \infty$, that is, $\varphi_\bW$ is injective w.h.p.
\end{theorem}
Note that for the computation of the Gardner capacity of the so-called ``positive'' perceptron, the replica-symmetric prediction has been shown to be tight also using Gordon's inequality \cite{stojnic2013another}, because
one can rewrite the associated min-max problem using a \emph{convex function} on \emph{convex sets}.
In the unsatisfiable phase we consider here
the solution is conjecturally full-RSB, and the replica-symmetric bound of Theorem~\ref{thm:bound_Gordon} is not expected to be tight.
\myskip
Theorem~\ref{thm:bound_Gordon} is proved in Appendix~\ref{subsec_app:bound_Gordon}. It improves upon the earlier upper bounds of Theorem~\ref{thm:known_bounds}, and it disproves the Euler characteristic based threshold prediction $\alpha_\inj^\Eul \simeq 8.34$~\cite{paleka2021injectivity,clum2022topics,clum_paleka_bandeira_mixon}.
Finally, let us mention two other bounds one can obtain on the injectivity threshold.
\paragraph{The annealed bound --}
A classical approach in statistical physics to upper-bound the free entropy $\Phi(\alpha,\beta)$ is an \emph{annealed} calculation.
Namely, one uses Jensen's inequality to write
\begin{align*}
\EE_\bW \Phi_n(\bW,\beta) &= \frac{1}{n} \EE_\bW \log \mcZ_n(\bW, \beta) \leq \frac{1}{n} \log \EE_\bW \mcZ_n(\bW, \beta) \stackrel{n \to \infty}{\longrightarrow} \Phi_{\annealed}(\alpha,\beta).
\end{align*}
This gives us an additional upper bound $\Phi(\alpha,\beta) \leq \Phi_\annealed(\alpha,\beta)$\footnote{However, it never improves over the replica-symmetric one, since it is a general fact that $\Phi_\RS(\alpha,\beta) \leq \Phi_\annealed(\alpha,\beta)$.} and a corresponding upper bound for the injectivity threshold $\alpha_\inj \leq \alpha_\inj^\annealed$.
We leave to the reader the exercise to show $\Phi_\annealed(\alpha,\beta) = \alpha \log [(1+e^{-\beta})/2]$. In particular, we have $ -\Phi_{\annealed}(\alpha,\beta)/\beta\to 0$ as $\beta \to \infty$ for any $\alpha>0$, and therefore $\alpha_\inj^\annealed = +\infty$: here, the result of the annealed calculation is completely uninformative.
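For completeness, let us sketch the computation: for any fixed $\bx \in \mcS^{n-1}$ the components $(\bW \bx)_\mu$ are i.i.d.\ $\mcN(0,1)$, so that
\begin{align*}
\EE_\bW \mcZ_n(\bW,\beta) &= \int_{\mcS^{n-1}} \mu_n(\rd \bx) \prod_{\mu=1}^m \EE_{z \sim \mcN(0,1)}\big[e^{-\beta \theta(z)}\big] = \Big(\frac{1+e^{-\beta}}{2}\Big)^m,
\end{align*}
since $\EE_z[e^{-\beta \theta(z)}] = \bbP[z \leq 0] + e^{-\beta}\,\bbP[z > 0] = (1+e^{-\beta})/2$; taking $\frac{1}{n}\log$ and letting $n \to \infty$ yields the claimed expression.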
\paragraph{An additional lower bound --}
In Fig.~\ref{fig:chi_estar_T0} we also show in green a region that is discarded for the injectivity threshold by a non-rigorous lower bound $\alpha_\inj \geq \alpha_{\AT} \simeq 5.32$,
based on the de Almeida-Thouless criterion \cite{de1978stability} of statistical physics. We detail its origin in Section~\ref{subsec:rs}, and its calculation in Appendix~\ref{sec_app:rs_lower_bound}.
While it is not mathematically rigorous, proving it would follow from a rigorous computation of the free entropy in the ``high-temperature'' (or small $\beta$) phase in which replica symmetry is conjectured to hold.
In many models, this turned out to be possible to handle more easily than the complete full-RSB conjecture, so we mention it for completeness.
We also note that it improves over the lower bound $\alpha_\inj \geq 3.3$ of Theorem~\ref{thm:known_bounds}.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{chi_estar_T0.pdf}
\caption{$T=0$ limit of the RS, 1RSB and FRSB solutions, as a function of $\alpha$.
We compare the predictions for the ground state energy $f^\star(\alpha) = \lim_{\beta \to \infty} [-\Phi(\alpha,\beta)/\beta]$ and the zero-temperature susceptibility $\chi$ (see Sections~\ref{sec:upper_bounds} and \ref{sec:full_rsb}).
The green area is forbidden for $\alpha_\inj$ by the replica-symmetric lower bound of eq.~\eqref{eq:lower_bound_RS_stability}.
\label{fig:chi_estar_T0}}
\end{figure}
\subsection{Structure of the paper and open problems}
Section~\ref{sec:upper_bounds} serves in large part a pedagogical purpose: it introduces the unacquainted reader
to the (mostly non-rigorous) results of statistical physics known in the spin glass literature under the umbrella of the replica method and replica symmetry breaking.
There we detail the replica computation in the spherical perceptron and the emergence of replica symmetry breaking,
and derive the replica symmetric and one-step replica symmetry breaking predictions for the injectivity threshold.
In Section~\ref{sec:full_rsb} we discuss the full-replica symmetry breaking prediction for the free entropy, and we derive an efficient algorithmic procedure
to solve the zero-temperature full-RSB equations. We discuss the numerical behavior of this algorithm, and use it to derive the numerical estimate of the injectivity threshold in Result~\ref{result:frsb_result}.
As mentioned, the proofs of our rigorous results (in particular Theorem~\ref{thm:bound_Gordon}) are given in Appendix~\ref{sec_app:proofs},
and other analytical or numerical details and technical arguments will be deferred to the other appendices.
\myskip
Let us finally mention a few open directions that stem from our analysis.
\paragraph{Deep networks --}
A natural extension of our results would be to analyze a composition of multiple ReLU layers. Denoting still by $n$ the input dimension, Theorem~\ref{thm:bound_Gordon} guarantees
injectivity w.h.p.\ if the size $k_L$ of the $L$-th layer satisfies $k_L > n (\alpha_\inj^\RS)^L$.
However, this is far from optimal: leveraging the structure of the image space of a ReLU layer, \cite{paleka2021injectivity,clum2022topics}
have shown that $k_L \geq n (C_1 + C_2 L \log L)$ (for some constants $C_1, C_2 > 0$) is enough to guarantee injectivity; this may be further improved using arguments based on random projections \cite{puthawala2022globally}.
An interesting open question is whether the techniques we develop here (and in particular the replica symmetry breaking framework) can be extended to predict exact injectivity transitions in the
multi-layer case.
\paragraph{Stability of the inverse --}
While the injectivity question is non-trivial only for non-injective $\sigma$ -- such as ReLU -- in eq.~\eqref{eq:relu_layer},
a natural extension would be to estimate the Lipschitz constant of the inverse of $\varphi_\bW$ on its range, either
in the injective phase we described for $\sigma = \mathrm{ReLU}$, or for any $\alpha > 0$ when $\sigma$ is injective.
Whether this question can be tackled using statistical physics tools similar to the ones we used here is an interesting open direction.
\paragraph{Improvement over Theorem~\ref{thm:bound_Gordon} --}
One can consider a closely-related model called the \emph{negative} perceptron by
replacing $\theta(x) = \indi\{x > 0\}$ by
$\indi\{x \geq \kappa\}$ with $\kappa < 0$ in the energy of eq.~\eqref{eq:def_energy}.
In this model, even computing the Gardner capacity conjecturally requires the full-RSB prediction.
However, \cite{stojnic2013negative,montanari2021tractability} have
made a refined use of Gordon's inequality to improve over the replica-symmetric upper bound for the capacity.
While similar ideas might be able to improve the upper bound of Theorem~\ref{thm:bound_Gordon}, it is not immediate to implement them, since the
method used in \cite{montanari2021tractability} relies on the min-max problem being formulated over unit-norm vectors, which is not the case here.
Since Theorem~\ref{thm:bound_Gordon} already allows us to disprove the Euler characteristic prediction, we leave such an improvement for later work.
\paragraph{Large deviations of sublevel sets --}
The non-validity of the average Euler characteristic prediction also leads to interesting
predictions on the energy landscape of the perceptron. Indeed, the quantity $q_{m,n}$ of eq.~\eqref{eq:def_qmn}
is the mean Euler characteristic of a sublevel set $U$ of the perceptron, more precisely
$q_{m,n} = \EE_\bW[\chi(U)]$, with $U \coloneqq \{\bx \in \mcS^{n-1} \, : \, e_\bW(\bx) \leq 1\}$.
Recall that $\alpha_\inj^\Eul \simeq 8.34$ while $\alpha_\inj^\RS \simeq 7.65$.
According to Theorem~\ref{thm:bound_Gordon}, for all $\alpha \in (\alpha_\inj^\RS,\alpha_\inj^\Eul)$ (and conjecturally in $(\alpha_\inj^\FRSB,\alpha_\inj^\Eul)$) the set $U$ is typically empty:
however its average Euler characteristic is exponentially large!
A possible explanation for this discrepancy is that there exist large deviations events with probability $\exp(-n I_1)$ in which the set $U$ is not only non-empty, but has Euler characteristic $\exp(n I_2)$.
A natural conjecture is that $I_2 > I_1$ for $\alpha < \alpha_\inj^\Eul$ and $I_2 < I_1$ for
$\alpha > \alpha_\inj^\Eul$.
Exploring these large deviations further could thus explain the error made in the Euler characteristic approach.
\paragraph{Numerical code and reproducibility --}
All figures and numerical results in this paper are fully reproducible. The JAX \cite{jax2018github} code is available in a \href{https://github.com/AnMaillard/Injectivity_ReLu_layer}{GitHub repository} \cite{github_repo}.
\paragraph{Acknowledgments --} A.M.\ and A.B.\ thank D.\ Paleka, C.\ Clum, and D.\ Mixon for several discussions related to this paper. A.M.\ is grateful to F.\ Krzakala, L.\ Zdeborov\'a, B.\ Loureiro and P.\ Urbani for insightful discussions.
\section{Introduction}\label{sec:introduction}
\input{introduction.tex}
\section{The replica hierarchy of upper bounds}\label{sec:upper_bounds}
\input{upper_bounds.tex}
\section{The full-RSB solution: exact injectivity threshold}\label{sec:full_rsb}
\input{full_rsb.tex}
\bibliographystyle{alpha}
\subsection{General principles of the replica method}
The replica method is based on the \emph{replica trick}, a heuristic use of the following formula, for any random variable $X > 0$ (assuming that all the moments written hereafter are well-defined):
\begin{align}\label{eq:replica_trick}
\EE \log X &= \lim_{r \to 0} \frac{\EE X^r - 1}{r} = \frac{\partial}{\partial r} [\log \EE X^r]_{r = 0}.
\end{align}
While the replica trick is most often described as the first equality in eq.~\eqref{eq:replica_trick}, we will here use the second (and equivalent) equality.
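Indeed, $\partial_r \log \EE X^r = \EE[X^r \log X] / \EE X^r$, which evaluated at $r = 0$ gives exactly $\EE \log X$.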
Assuming that
$\Phi(\alpha, \beta) \coloneqq \lim_{n \to \infty} \EE \, \Phi_n(\bW, \beta)$ is well defined, we reach:
\begin{align}\label{eq:replica_trick_phi}
\Phi(\alpha,\beta) &= \lim_{n \to \infty} \frac{\partial}{\partial r} \Big[\frac{1}{n} \log \EE_\bW \big\{\mcZ_n(\bW, \beta)^r\big\}\Big]_{r = 0}.
\end{align}
So far, eq.~\eqref{eq:replica_trick_phi} is not really surprising.
The replica method is based on several heuristics, and leverages the fact that it is often possible to compute the RHS of eq.~\eqref{eq:replica_trick_phi}
for \emph{integer $r$}. More precisely, the replica method proceeds as follows:
\myskip
\fbox{\begin{minipage}{0.98\textwidth}
\textbf{Replica method}
\begin{itemize}[leftmargin=25pt]
\item[$(i)$] Assume that the limits $n \to \infty$ and $r \to 0$ can be inverted in eq.~\eqref{eq:replica_trick_phi},
i.e.\ that we have $\Phi(\alpha,\beta) = \partial_r [\Phi(\alpha,\beta;r)]_{r = 0}$, with
\begin{align}\label{eq:def_Phir}
\Phi(\alpha,\beta;r) &\coloneqq \lim_{n \to \infty} \frac{1}{n} \log \EE_\bW \big\{\mcZ_n(\bW, \beta)^r\big\}.
\end{align}
\item[$(ii)$] Compute $\Phi(\alpha,\beta;r)$ for \emph{integer $r$}, i.e.\ the asymptotics of the moments of $\mcZ_n(\bW, \beta)$.
\item[$(iii)$] Use these values to analytically expand $\{\Phi(\alpha,\beta;r)\}_{r \in \bbN}$ to all $r \geq 0$.
\item[$(iv)$] Compute $\Phi(\alpha,\beta) = \partial_r [\Phi(\alpha,\beta;r)]_{r = 0}$ from the analytic continuation above.
\end{itemize}
\end{minipage}}
\myskip
Note that step $(i)$, although \emph{a priori} non-rigorous, can sometimes be put on rigorous ground using convexity arguments, cf.\ e.g.\ page 146 of \cite{talagrand2010mean} in the context of the Sherrington-Kirkpatrick (or SK) model.
The arguably ``most heuristic'' step is $(iii)$, as there is in general no guarantee of the uniqueness of the analytic continuation (and it is often not unique!).
The choice of the conjecturally correct continuation was proposed by Parisi in a remarkable series of papers \cite{parisi1979infinite,parisi1980order,parisi1980sequence}, one of the most important contributions for which he was awarded the Nobel Prize in Physics in 2021, and we will describe this choice in the following sections.
In the SK model originally studied by Parisi
his prediction was ultimately proven to be correct by Talagrand \cite{talagrand2006parisi} and generalized by Panchenko \cite{panchenko2014parisi},
leveraging notably interpolation techniques that originated with Guerra \cite{guerra2003broken}.
An actual rigorous treatment of the replica method itself remains out of reach,
and in the spherical perceptron considered here replica predictions have not been proven, with the exception of the satisfiable phase \cite{shcherbina2003rigorous,stojnic2013another}.
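Before turning to the perceptron, a toy example may help (a minimal sketch, not part of the paper's code): for a log-normal $X = e^{\mu + \sigma Z}$ with $Z \sim \mcN(0,1)$ one has $\log \EE X^r = r\mu + r^2\sigma^2/2$ for all $r$, the analytic continuation is unambiguous, and steps $(ii)$--$(iv)$ recover $\partial_r[\log \EE X^r]_{r=0} = \mu = \EE \log X$.
\begin{verbatim}
# Independent sketch, not from the paper's repository.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 0.3, 0.5
X = np.exp(mu + sigma * rng.standard_normal(1_000_000))

# Step (ii): moments at integer r.
rs = np.arange(1, 5)
log_moments = np.log([np.mean(X**r) for r in rs])

# Step (iii): continue analytically, fitting log E[X^r] = a*r + b*r^2.
A = np.stack([rs, rs**2], axis=1).astype(float)
a, b = np.linalg.lstsq(A, log_moments, rcond=None)[0]

# Step (iv): the derivative at r = 0 of the continuation is a ~ E log X.
print(a, np.mean(np.log(X)))  # both close to mu = 0.3
\end{verbatim}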
\subsection{First steps of the replica method}
Let us now perform step $(ii)$ of the replica method.
From now on, we relax the level of rigor and sometimes adopt notations closer to the theoretical physics literature, since the core of the method is heuristic.
Fixing $r \in \bbN^\star$ we have (recall eq.~\eqref{eq:def_phi}):
\begin{align}\label{eq:Phir}
\nonumber
\Phi(\alpha,\beta;r) &= \lim_{n \to \infty}\frac{1}{n} \log \EE_\bW \Bigg\{\Bigg(\int_{\mcS^{n-1}} \mu_n(\rd \bx) \, e^{-\beta E_\bW(\bx)}\Bigg)^r\Bigg\}, \\
&= \lim_{n \to \infty}\frac{1}{n} \log \int \prod_{a=1}^r \mu_n(\rd \bx^a) \, \EE_\bW \Bigg\{ \prod_{a=1}^r \exp\Big\{- \beta \sum_{\mu=1}^m \theta\Big[(\bW \bx^a)_\mu\Big]\Big\} \Bigg\}.
\end{align}
We have used Fubini's theorem in eq.~\eqref{eq:Phir}.
We see the appearance of a set $\{\bx^a\}_{a=1}^r$ of independent samples from the Gibbs measure $\bbP_{\beta,\bW}$, sharing the \emph{same} realization of the matrix $\bW$:
we call such independent samples \emph{replicas}, following the statistical physics nomenclature.
The expectation with respect to $\bW$ in eq.~\eqref{eq:Phir} can be performed, since at fixed $\{\bx^a\}$, $\bz^a \coloneqq \bW \bx^a$ are jointly Gaussian vectors
with covariance $\EE[z^a_\mu z^b_\nu] = \delta_{\mu \nu} Q^{ab}$, where we introduced the \emph{overlap matrix} $Q^{ab} \coloneqq \bx^a \cdot \bx^b$ (note that $Q^{aa} = 1$, $Q^{ab}=Q^{ba}$, and that the matrix $\{\bx^a \cdot \bx^b\}$ is almost surely invertible under $\mu_n^{\otimes r}$).
Therefore we have:
\begin{align*}
\EE_\bW \prod_{a=1}^r e^{- \beta \sum_{\mu=1}^m \theta[(\bW \bx^a)_\mu]} &= I_\beta(\bQ)^m,
\end{align*}
in which we defined
\begin{align}\label{eq:def_Ibeta_Q}
I_\beta(\bQ) \coloneqq \int_{\bbR^r} \frac{\rd \bz}{(2\pi)^{r/2} \sqrt{\det \bQ}} e^{-\frac{1}{2} \bz^\intercal \bQ^{-1} \bz} e^{-\beta \sum_{a=1}^r \theta(z^a)}.
\end{align}
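As a quick numerical sanity check of this Gaussian identity (a sketch, independent of the paper's repository), one can compare both sides at small $r$ and $m$:
\begin{verbatim}
# Independent sketch, not from the paper's repository.
import numpy as np

rng = np.random.default_rng(2)
n, m, r, beta = 8, 3, 2, 0.5

# Fixed replicas x^1, ..., x^r on the sphere, and their overlap matrix Q.
x = rng.standard_normal((r, n))
x /= np.linalg.norm(x, axis=1, keepdims=True)
Q = x @ x.T

# Left-hand side: average over the Gaussian matrix W.
W = rng.standard_normal((200_000, m, n))
z = np.einsum('smn,an->sma', W, x)             # z^a_mu = (W x^a)_mu
lhs = np.mean(np.exp(-beta * (z > 0).sum(axis=(1, 2))))

# Right-hand side: I_beta(Q)^m, sampling z ~ N(0, Q) directly.
zQ = rng.multivariate_normal(np.zeros(r), Q, size=200_000)
rhs = np.mean(np.exp(-beta * (zQ > 0).sum(axis=1))) ** m
print(lhs, rhs)  # the two estimates agree up to Monte-Carlo error
\end{verbatim}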
One can thus write eq.~\eqref{eq:Phir} as:
\begin{align*}
\Phi(\alpha,\beta;r) &= \lim_{n \to \infty} \frac{1}{n} \log \Bigg[\int \Big\{\prod_{a< b} \rd Q^{ab}\Big\} J(\bQ) \times I_\beta(\bQ)^m \Bigg],
\end{align*}
with $J(\bQ)$ defined as the PDF of the overlap matrix $\bQ(\{\bx^a\})$ (for $\{\bx^a\} \sim \mu_n^{\otimes r}$) evaluated at $\bQ$:
\begin{align}\label{eq:def_JQ}
J(\bQ) &\coloneqq \int \prod_{a=1}^r \mu_n(\rd \bx^a) \prod_{a < b} \delta(Q^{ab} - \bx^a \cdot \bx^b) = n^{\frac{r(r-1)}{2}} \frac{\int \prod_{a=1}^r \rd \bx^a \prod_{a \leq b} \delta(n Q^{ab} - \bx^a \cdot \bx^b)}{\int \prod_{a=1}^r \rd \bx^a \,\delta(n - \| \bx^a \|^2)},
\end{align}
in which we used that $Q^{aa} = 1$ and we re-normalized $\bx^a$ by $\sqrt{n}$.
One way to compute the numerator in eq.~\eqref{eq:def_JQ} is to use an exponential tilting method, by the following argument: for any symmetric $\bLambda \in \bbR^{r \times r}$ positive-definite, we have
\begin{align}\label{eq:denominator_JQ}
\nonumber
&\frac{1}{n} \log \int \prod_{a=1}^r \rd \bx^a \prod_{a \leq b} \delta(n Q^{ab} - \bx^a \cdot \bx^b) \\
&= \frac{1}{2} \Tr[\bLambda \bQ] + \frac{1}{n} \log \int \prod_{a=1}^r \rd \bx^a \,
\prod_{a \leq b} \delta(n Q^{ab} - \bx^a \cdot \bx^b)
\, e^{-\frac{1}{2} \sum_{a,b} \Lambda^{ab} \bx^a \cdot \bx^b}.
\end{align}
The idea is to pick $\bLambda$ so that under the probability distribution $P_\bLambda(\{\bx^a\}) \propto \exp\{-\frac{1}{2} \sum_{a,b} \Lambda^{ab} \bx^a \cdot \bx^b\}$, we have
with high probability $\bx^a \cdot \bx^b /n \to Q^{ab}$ as $n \to \infty$.
Since $P_\bLambda$ is Gaussian, one finds $\bLambda = \bQ^{-1}$ as the correct choice.
Heuristically, the argument then goes as follows: for $\bLambda = \bQ^{-1}$, the constraint terms in eq.~\eqref{eq:denominator_JQ} are satisfied as $n \to \infty$, so that we can remove the Dirac deltas without affecting the asymptotic value of the integral.
Performing the same calculation in the denominator (for which $\bLambda = \Id_r$ is now the correct choice), one reaches:
\begin{align}\label{eq:JQ_1}
\frac{1}{n} \log J(\bQ) &= \frac{1}{2} \Tr[\bQ^{-1} \bQ] + \log \int_{\bbR^r} \prod_{a=1}^r \rd x^a \, e^{-\frac{1}{2} \sum_{a,b} (\bQ^{-1})^{ab} x^a x^b} - \frac{r(1+\log 2\pi)}{2} + \smallO_n(1).
\end{align}
Such ``exponential tilting'' arguments can be made rigorous, and are classical e.g.\ in the theory of large deviations \cite{dembo1998large}.
Another (equivalent) way to obtain eq.~\eqref{eq:JQ_1} is to introduce the Fourier transform of the Dirac delta in eq.~\eqref{eq:def_JQ}, and perform a saddle-point method over the parameters of the Fourier integral, see e.g.\ \cite{castellani2005spin} or \cite{urbani2018statistical}.
The Gaussian integral in eq.~\eqref{eq:JQ_1} can be computed:
\begin{align*}
\frac{1}{n} \log J(\bQ) &= \frac{1}{2} \log \det \bQ + \smallO_n(1).
\end{align*}
This yields:
\begin{align}\label{eq:Phir_before_Laplace}
\Phi(\alpha,\beta;r) &= \lim_{n \to \infty} \frac{1}{n} \log \Bigg[\int \Big\{\prod_{a< b} \rd Q^{ab}\Big\} \exp \{n F_n(\bQ)\} \Bigg],
\end{align}
with
\begin{align*}
F_n(\bQ) &\coloneqq \frac{1}{2} \log \det \bQ + \alpha \log I_\beta(\bQ) + \smallO_n(1).
\end{align*}
It is crucial that in many physical models, the average of the replicated partition function can be written as in
eq.~\eqref{eq:Phir_before_Laplace}, as a function of a low-dimensional parameter (recall that $\bQ$ is an $r \times r$ matrix, and that $r$ is a fixed positive integer).
In physics, one refers to the overlap matrix $\bQ$ as the \emph{order parameter} of the problem: a low-dimensional quantity that allows one to characterize the macroscopic
behavior of our high-dimensional system (similarly to the average magnetization in a ferromagnet, for instance).
\myskip
Applying Laplace's method to the integral in eq.~\eqref{eq:Phir_before_Laplace}, we finally reach:
\begin{align}\label{eq:Phir_final}
\Phi(\alpha,\beta;r) &= \sup_{\bQ} \Big[\frac{1}{2} \log \det \bQ + \alpha \log I_\beta(\bQ)\Big],
\end{align}
where the supremum is over $r \times r$ symmetric positive-definite matrices such that $Q^{aa} = 1$, and
recall that $I_\beta(\bQ)$ is defined in eq.~\eqref{eq:def_Ibeta_Q}.
Note that we completely removed the high dimensionality of the problem!
The remaining task is to perform step $(iii)$ of the replica method, i.e.\ to analytically continue $\Phi(\alpha,\beta;r)$ to any $r > 0$.
This is the crucial difficulty of the replica method (and the main reason why it is ill-posed mathematically in general), which was solved by Parisi \cite{parisi1979infinite,parisi1980order,parisi1980sequence}.
\subsection{The replica-symmetric solution}\label{subsec:rs}
The functional in eq.~\eqref{eq:Phir_final} is symmetric: one can permute the different replicas of the system (and correspondingly swap the rows and columns of $\bQ$)
without changing the value of the functional. This has led physicists to first assume that the supremum in eq.~\eqref{eq:Phir_final} is attained by a matrix $\bQ$ that is also invariant under permutations, i.e.\ that satisfies $Q^{ab} = q$ for all $a \neq b$.
This \emph{replica-symmetric} assumption was historically the first one considered to find a solution
to the SK model \cite{sherrington1975solvable}.
We will see how it allows to complete the final steps of the replica method.
\myskip
Note that replica symmetry can be put on a firmer mathematical ground, using the following characterization.
\myskip
\fbox{\begin{minipage}{0.98\textwidth}
\textbf{Replica symmetry (RS) --}
Let $Q(\bx, \bx') \coloneqq \bx \cdot \bx'$,
and recall the Gibbs distribution $\bbP_{\beta,\bW}$ of eq.~\eqref{eq:def_Gibbs}.
Replica symmetry amounts to assuming that the random variable $Q(\bx,\bx')$ concentrates when $\bx,\bx'$ are sampled independently from $\bbP_{\beta,\bW}$ (with the \emph{same} $\bW$), in the following sense:
\begin{equation}\label{eq:overlap_concentration}
\lim_{n \to \infty} \EE_\bW \Big[ \EE_{(\bx,\bx') \sim \bbP_{\beta,\bW}^{\otimes 2}} \Big\{(Q(\bx, \bx') - \EE Q )^2\Big\}\Big] = 0,
\end{equation}
with the shorthand $\EE Q \coloneqq \EE_\bW [\EE_{(\bx,\bx') \sim \bbP_{\beta,\bW}^{\otimes 2}}(Q(\bx, \bx'))]$.
\end{minipage}}
\myskip
In particular, under the RS ansatz, we can write the off-diagonal elements of the overlap matrix appearing in eq.~\eqref{eq:Phir_final}
as $Q^{ab} = q = \EE_\bW [\EE_{(\bx,\bx') \sim \bbP_{\beta,\bW}^{\otimes 2}}(\bx \cdot \bx')]$, in which $\bx, \bx'$ are two independent samples under the Gibbs measure with
quenched noise $\bW$ (two \emph{replicas} of the system), and $a \neq b$. Therefore, we also have
$q = \EE_\bW [\norm{ \EE_{\bx \sim \bbP_{\beta,\bW}}(\bx)}^2]$, which implies in particular that $q \in [0,1]$.
\myskip
Let us now finish the replica calculation under a replica symmetric assumption, going back to eq.~\eqref{eq:Phir_final}.
By simple linear algebra calculations, the RS ansatz implies, for all $a \neq b$:
\begin{align*}
\begin{dcases}
Q^{-1}_{ab} &= - \frac{q}{(1-q)[1+(r-1)q]},\\
Q^{-1}_{aa} - Q^{-1}_{ab} &= \frac{1}{1-q},
\end{dcases}
\end{align*}
and moreover
\begin{align}\label{eq:detQ_RS}
\log \det \bQ &= (r-1) \log [1-q] + \log [1+(r-1)q].
\end{align}
Plugging the form of $\bQ^{-1}$ we have:
\begin{align}
\label{eq:Gauss_weight_RS}
\nonumber
\exp\Big\{-\frac{1}{2} \bz^\intercal \bQ^{-1} \bz\Big\} &=
\exp\Big\{-\frac{1}{2(1-q)} \sum_{a=1}^r (z^a)^2 + \frac{q}{2 (1-q)[1+(r-1)q]} \Big(\sum_{a=1}^r z^a\Big)^2 \Big\} \\
&= \int \mcD \xi \exp\Big\{-\frac{1}{2(1-q)} \sum_{a=1}^r (z^a)^2 + \sqrt{\frac{q}{(1-q)[1+(r-1)q]}} \Big(\sum_{a=1}^r z^a\Big) \xi \Big\}.
\end{align}
Recall that $\mcD \xi$ is the standard Gaussian measure on $\bbR$,
and we have used the identity $\exp(x^2/2) = \int \mcD \xi \exp(x \xi)$.
Plugging eqs.~\eqref{eq:detQ_RS} and \eqref{eq:Gauss_weight_RS} in eq.~\eqref{eq:Phir_final} we reach, for $r \in \bbN^\star$:
\begin{align}\label{eq:phir_RS}
\Phi_\mathrm{RS}(\alpha,\beta;r) &= \sup_{q \in [0,1]} \Phi_\mathrm{RS}(\alpha,\beta;r, q) = \sup_{q \in [0,1]} \Big[- \frac{\alpha-1}{2} \Big[(r-1) \log [1-q] + \log [1+(r-1)q]\Big] \\
&\nonumber + \alpha \log \int \mcD \xi \Bigg[\int_{\bbR} \frac{\rd z}{\sqrt{2\pi}} e^{-\frac{1}{2(1-q)} z^2 + \sqrt{\frac{q}{(1-q)[1+(r-1)q]}} z \xi -\beta \theta(z)} \Bigg]^r \Big].
\end{align}
One can now begin to see how the replica-symmetric ansatz yields an analytical continuation of $\Phi_\mathrm{RS}(\alpha,\beta;r)$ for all $r > 0$.
A final non-trivial (and very non-rigorous) technicality of the replica method, which we do not detail here, is that when we analytically expand the function above to $r < 1$,
theoretical physicists argue that maximizers of eq.~\eqref{eq:phir_RS} are continued into \emph{minima} of $\Phi_\RS(r,q)$.
We refer to \cite{mezard1987spin} for a detailed discussion: a rough intuition is that one performs a Laplace method over the $r(r-1)/2$ variables $\{Q^{ab}\}_{a<b}$, which becomes a negative number of variables (!) for $r < 1$, turning the supremum into an infimum.
In the present case this phenomenon can be easily observed, see Fig.~\ref{fig:inversion_min_max}.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{inversion_max_min_RS.pdf}
\caption{
The function $\Phi_\RS(r, q) - \Phi_\RS(r, q^\star(r))$ as a function
of $q \in [0,1]$, for different values of $r$ close to $1$, and $q^\star(r)$ the unique solution to $\partial_q \Phi_\RS(r, q) = 0$. We observe that $q^\star(r)$
is a global maximum for $r > 1$, and becomes a global minimum for $r < 1$.
Here $\alpha = 5$ and $\beta = 1$.
\label{fig:inversion_min_max}}
\end{figure}
Under the replica-symmetric ansatz, we therefore obtain:
\begin{align*}
\Phi_\mathrm{RS}(\alpha,\beta) &= \partial_r [\Phi_\mathrm{RS}(\alpha,\beta;r)]_{r=0} \nonumber \\
&= \inf_{q \in [0,1]} \Big\{
- \frac{\alpha-1}{2} \Big[\log [1-q] + \frac{q}{1-q}\Big]
+ \alpha \int \mcD \xi \log \int \frac{\rd z}{\sqrt{2\pi}} e^{-\frac{1}{2(1-q)} z^2 + \frac{\sqrt{q}}{1-q} z \xi -\beta \theta(z)} \Big\}.
\end{align*}
The inner integral is easy to work out:
\begin{align*}
\int_{\bbR} \frac{\rd z}{\sqrt{2\pi}} e^{-\frac{1}{2(1-q)} z^2 + \frac{\sqrt{q}}{1-q} z \xi -\beta \theta(z)} &= \sqrt{1-q} \, e^{\frac{q}{2(1-q)} \xi^2} \Big[1-(1-e^{-\beta}) H\Big(-\xi \sqrt{\frac{q}{1-q}} \Big) \Big],
\end{align*}
where $H(x) \coloneqq \int_x^\infty \mcD u = [1 - \mathrm{erf}(x/\sqrt{2})]/2$. In particular, $H'(x) = - e^{-x^2/2}/\sqrt{2 \pi}$.
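This closed form can be verified numerically; a minimal sketch (not from the paper's code):
\begin{verbatim}
# Independent sketch, not from the paper's repository.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

q, beta, xi = 0.4, 1.3, 0.8
b = np.sqrt(q) / (1 - q) * xi
lhs = quad(lambda z: np.exp(-z**2 / (2*(1-q)) + b*z - beta*(z > 0))
           / np.sqrt(2*np.pi), -40, 40, points=[0])[0]
# H(x) = P(N(0,1) > x) is norm.sf in scipy.
rhs = (np.sqrt(1-q) * np.exp(q * xi**2 / (2*(1-q)))
       * (1 - (1 - np.exp(-beta)) * norm.sf(-xi * np.sqrt(q/(1-q)))))
print(lhs, rhs)  # agree up to quadrature error
\end{verbatim}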
Then:
\begin{align}\label{eq:phi_RS}
\Phi_\RS(\alpha,\beta) &= \inf_{q \in [0,1]} \Big\{ \frac{1}{2} \Big[\log [1-q] + \frac{q}{1-q}\Big]+ \alpha \int \mcD \xi \log \Big[1-(1-e^{-\beta}) H\Big(\xi \sqrt{\frac{q}{1-q}} \Big) \Big] \Big\}.
\end{align}
For any $\beta \geq 0$, the minimizing $q$ is thus given by the solution to
\begin{align}\label{eq:q_RS_eq_new}
\frac{q^{3/2}}{\sqrt{1-q}} &=
\alpha \int \mcD \xi \frac{(1-e^{-\beta}) \xi H'\Big(\xi \sqrt{\frac{q}{1-q}} \Big)}{1-(1-e^{-\beta}) H\Big(\xi \sqrt{\frac{q}{1-q}} \Big)}
\end{align}
that minimizes the functional of eq.~\eqref{eq:phi_RS}.
The quantity $e^\star(\alpha,\beta) \coloneqq - \partial_\beta \Phi_\RS(\alpha,\beta)$ is called the \emph{average intensive energy}:
as can be seen from eq.~\eqref{eq:def_phi}, $n e^\star(\alpha,\beta)$ is the average number of positive components of
$\bW \bx$ when $\bx$ is sampled from the Gibbs measure of eq.~\eqref{eq:def_Gibbs}.
At the replica-symmetric level it is given by:
\begin{align}\label{eq:estar_beta}
e_\RS^\star(\alpha,\beta) &= \alpha e^{-\beta} \int_\bbR \mcD \xi \frac{H\Big(\xi \sqrt{\frac{q}{1-q}} \Big)}{1-(1-e^{-\beta}) H\Big(\xi \sqrt{\frac{q}{1-q}} \Big) }.
\end{align}
In particular, one sees that for $\beta = 0$ we have $q = 0$ and $e^\star(\alpha,\beta = 0) = \alpha/2$, which is the typical number of positive components of a random $m$-dimensional Gaussian vector (divided by $n$).
\myskip
\textbf{The zero-temperature limit --}
For any $\alpha > 2$ (i.e.\ in the UNSAT phase), one can check from eq.~\eqref{eq:q_RS_eq_new} that $q \to 1$ as $\beta \to \infty$.
This means that the replica-symmetric ansatz predicts that, as $\beta \to \infty$, the Gibbs measure concentrates on the global minima of $E_\bW(\bx)$,
and that (at fixed $\bW$) the distance between any two such minima goes to $0$ as $n \to \infty$\footnote{As we will see in Section~\ref{sec:full_rsb}, while the replica-symmetry assumption turns out to be wrong, this prediction remains correct!}.
One can also see from this equation (cf.\ \cite{gardner1988optimal,franz2017universality}) that the solution $q$ admits an expansion of the form:
\begin{align}\label{eq:q_RS_zerotemp}
q &= 1 - \frac{\chi_\RS}{\beta} + \mathcal{O}(\beta^{-2}),
\end{align}
where $\chi_\RS$ is the so-called zero-temperature susceptibility.
Plugging this expansion in the equations above, we recover the result of \cite{gardner1988optimal} (we detail the computations in Appendix~\ref{subsec_app:zerotemp_RS}).
We find that $\chi_\RS$ is the unique solution to:
\begin{align}\label{eq:chi_RS}
\alpha \int_0^{\sqrt{2\chi_\RS}} \mcD \xi \, \xi^2 &= 1,
\end{align}
and $f^\star_\RS(\alpha) = \lim_{\beta \to \infty} [-\Phi_\RS(\alpha,\beta) / \beta]$ is given as (recall $H(x) = \int_x^\infty \mcD u$):
\begin{align}\label{eq:fstar_RS}
f^\star_\RS(\alpha) &= \alpha H[\sqrt{2 \chi_\RS}].
\end{align}
\myskip
\textbf{Replica-symmetric prediction for $\alpha_\inj$ --}
Recall the criterion of eq.~\eqref{eq:criterion_alphainj} for the injectivity threshold.
Eqs.~\eqref{eq:chi_RS} and \eqref{eq:fstar_RS}
are easy to analyze numerically, and they yield that $f^\star_\RS(\alpha) = 1$ for:
\begin{align}\label{eq:alpha_inj_RS}
\alpha_\inj^\mathrm{RS} &\simeq 7.64769,
\end{align}
in which $\mathrm{RS}$ stands for the replica-symmetric assumption.
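As a numerical check of eq.~\eqref{eq:alpha_inj_RS}, the following short Python sketch (independent of the paper's JAX repository) solves eqs.~\eqref{eq:chi_RS} and \eqref{eq:fstar_RS}, using the closed form $\int_0^a \mcD\xi \, \xi^2 = \Phi_\mcN(a) - \frac{1}{2} - a \varphi_\mcN(a)$ obtained by integration by parts, with $\Phi_\mcN, \varphi_\mcN$ the standard Gaussian CDF and PDF:
\begin{verbatim}
# Independent sketch, not from the paper's repository.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def gauss_second_moment(a):
    # int_0^a Dxi xi^2 = Phi(a) - 1/2 - a*phi(a) (integration by parts).
    return norm.cdf(a) - 0.5 - a * norm.pdf(a)

def chi_rs(alpha):
    # Unique chi solving alpha * int_0^{sqrt(2 chi)} Dxi xi^2 = 1, eq. (chi_RS).
    a = brentq(lambda a: alpha * gauss_second_moment(a) - 1.0, 1e-6, 50.0)
    return 0.5 * a**2

def f_star_rs(alpha):
    # f*_RS(alpha) = alpha * H(sqrt(2 chi_RS)), with H(x) = P(N(0,1) > x).
    return alpha * norm.sf(np.sqrt(2.0 * chi_rs(alpha)))

print(brentq(lambda al: f_star_rs(al) - 1.0, 3.0, 20.0))  # ~ 7.64769
\end{verbatim}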
\myskip
\textbf{Instability of the replica-symmetric solution and the need for a different ansatz --}
An important check of the validity of the replica-symmetric ansatz is that it is indeed a maximum of the functional given in eq.~\eqref{eq:Phir_final} (or a minimum when $r < 1$, as we discussed).
This can be verified locally, by considering the Hessian of this function, and looking at the sign of its eigenvalues when $r \to 0$. The stability criterion
is called the \emph{de Almeida-Thouless} (dAT) condition \cite{de1978stability}, and we derive it in Appendix~\ref{subsec_app:stability_rs} for any inverse temperature $\beta \geq 0$, cf.\ eq.~\eqref{eq:AT_explicit}.
However, we also show that this condition is never satisfied in the limit $\beta \to \infty$, for any $\alpha > 2$.
This suggests that the correct solution actually breaks the replica symmetry!
Formally, the functional of eq.~\eqref{eq:Phir_final} exhibits a well-known physical phenomenon known as
spontaneous symmetry breaking: while the function to maximize is invariant under the group of permutations of the $r$ replicas, any particular maximum is not invariant under this symmetry.
\myskip
\textbf{A replica-symmetric lower bound --}
In Appendix~\ref{sec_app:rs_lower_bound}, we detail a way to use the replica-symmetric prediction at finite $\beta \geq 0$, combined with the stability
analysis of Appendix~\ref{subsec_app:stability_rs}, to obtain a lower bound on $\alpha_\inj$:
\begin{align}\label{eq:lower_bound_RS_stability}
\alpha_\inj \geq \alpha_\AT\simeq 5.3238,
\end{align}
in which the definition and calculation of $\alpha_\AT$ can be deduced solely from the replica-symmetric calculation: we refer to Appendix~\ref{sec_app:rs_lower_bound} for more details on this bound, which is
shown as a light green area in Fig.~\ref{fig:chi_estar_T0}.
\subsection{The overlap distribution and replica symmetry breaking}\label{subsec:rsb_discussion}
Since we must go beyond replica symmetry, we have to understand what can happen when the overlap concentration of eq.~\eqref{eq:overlap_concentration}
fails.
We define $q \equiv \bx \cdot \bx'$, in which $\bx,\bx'$ are independent samples under the Gibbs measure of eq.~\eqref{eq:def_Gibbs}, with the \emph{same quenched noise} $\bW$,
and we will study the law of $q$ \emph{averaged over $\bW$}, which we will denote $\rho_n(q)$.
\myskip
A natural possibility is that, while the random variable $q$ no longer concentrates, its average distribution $\rho_n(q)$ still
converges (weakly) to an asymptotic law $\rho(q)$ (for $q \in [0,1]$) as $n \to \infty$.
Replica symmetry then corresponds to the case $\rho(q) = \delta(q - q_0)$.
But how does an arbitrary $\rho(q)$ transfer to an $r \times r$ overlap matrix $\bQ$ maximizing eq.~\eqref{eq:Phir_final}?
Actually, the other way (going from $\bQ$ to $\rho(q)$) is easier to formalize. Indeed, for the same $\bW$, let us draw two independent samples $\bx, \bx'$ under the Gibbs measure (two ``replicas'').
On average, their overlap is distributed as the off-diagonal elements of the overlap matrix, i.e.\ we have (one can formalize this argument, see e.g.\ \cite{montanari2022short})
\begin{align*}
\rho(q) \simeq \frac{1}{r(r-1)}\sum_{a \neq b} \delta(q - Q^{ab}).
\end{align*}
However, recall that our physical system is not represented by the overlap matrix $\bQ$ at finite $r$, but rather by its $r \to 0$ limit,
so we should take this limit as well to get the $\rho(q)$ that describes our original physical system (even though taking the $r \to 0$ limit of an $r \times r$ matrix shatters much of our intuition!).
More concretely, the overlap distribution $\rho(q)$ is related to the overlap matrix $\bQ$ by:
\begin{align}\label{eq:rho_q_general}
\rho(q) &= \lim_{r \to 0} \frac{1}{r(r-1)}\sum_{a \neq b} \delta(q - Q^{ab}).
\end{align}
\textbf{One-step replica symmetry breaking --}
To build back our intuition a bit, let us look at the simplest possible $\rho(q)$ beyond the RS ansatz, that is, let us assume that
$\rho(q) = m \delta(q-q_0) + (1-m) \delta(q-q_1)$, with $m \in [0,1]$, and $q_0 \leq q_1$.
One brilliant realization of Parisi \cite{parisi1979infinite,parisi1980order,parisi1980sequence} was that this distribution arises from an \emph{ultrametric} overlap matrix $\bQ$, i.e.\ one that has the following form:
that has the following form:
\begin{align}\label{eq:rhoq_Q_1RSB}
\rho(q) = m \delta(q - q_0) + (1-m) \delta(q-q_1) \quad ``\Longleftrightarrow" \quad \bQ = \begin{pmatrix}
1 & q_1 & q_1 && & & \\
q_1 & 1 & q_1 && \cdots & q_0 & \cdots &\\
q_1 & q_1 & 1 && & &\\
& & &\ddots & & &\\
& & & &1 &q_1 & q_1 \\
\cdots & q_0 & \cdots && q_1 & 1 & q_1 \\
& & && q_1 & q_1 & 1
\end{pmatrix}.
\end{align}
Let us detail how to go from the $\bQ$ shown in eq.~\eqref{eq:rhoq_Q_1RSB} to the $\rho(q)$ that we want.
We denote by $x \in \{1,\cdots,r\}$ the size of the diagonal blocks in this matrix $\bQ$. Then:
\begin{align}\label{eq:Pq_1RSB_finite_r}
\frac{1}{r(r-1)}\sum_{a \neq b} \delta(q - Q^{ab}) &= \frac{x-1}{r-1} \delta(q-q_1) + \frac{r-x}{r-1} \delta(q-q_0).
\end{align}
Now arises an issue: since we take the $r \downarrow 0$ limit, and $x \in \{1,\cdots,r\}$ is an integer, how should we proceed?
Comparing eq.~\eqref{eq:Pq_1RSB_finite_r} with our target $\rho(q)$ gives us a possible answer (which turns out to be the correct one \cite{mezard1987spin}): relaxing the constraint that $x \in \{1,\cdots,r\}$, and taking the limit $r \downarrow 0$ independently of $x$, we reach:
\begin{align*}
\rho(q) &= (1-x) \delta(q-q_1) + x \delta(q-q_0),
\end{align*}
i.e.\ exactly the $\rho(q)$ we wanted to build, with $x = m$, which has now become a real parameter in $[0,1]$.
This type of distribution $\rho(q)$ (and by extension the corresponding $\bQ$ in eq.~\eqref{eq:rhoq_Q_1RSB})
is called One-Step Replica Symmetry Breaking (1RSB).
\myskip
\textbf{General replica symmetry breaking --}
More generally, one can represent a distribution with a finite support of $(k+1)$ elements as $\rho(q) = \sum_{i=0}^k (m_i - m_{i-1}) \delta(q-q_i)$, with weights
$m_0 \leq m_1 \leq \cdots \leq m_{k-1} \leq m_k$, using the conventions $m_{-1} = 0, m_k = 1$.
This distribution is called ``$k$-step replica symmetry breaking'' ($k$-RSB),
and in this ansatz, the overlap matrix $\{Q_{ab}\}$ can be written as a hierarchical generalization of eq.~\eqref{eq:rhoq_Q_1RSB} (with the convention $q_{-1} = 0$ and $q_{k+1} = 1$):
\begin{align*}
\bQ &= \sum_{i=0}^{k+1} (q_i - q_{i-1}) \bJ^{(r)}_{m_{i-1}},
\end{align*}
with $\bJ_m^{(r)}$ the block-diagonal matrix with $r/m$ blocks of size $m$, each diagonal block being the all-ones matrix.
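For concreteness, here is a small Python sketch (a hypothetical helper of ours, assuming each block size divides the previous one) that builds the finite-$r$ matrix $\bQ$ from the lists of block sizes and overlaps:
\begin{verbatim}
# Hypothetical helper: finite-r Parisi matrix. At finite r, the
# outermost "size" m_{-1} equals r; it shrinks to 0 as r -> 0.
import numpy as np

def ones_block_diag(r, m):
    # J_m^{(r)}: block-diagonal matrix with r/m all-ones blocks of size m.
    return np.kron(np.eye(r // m), np.ones((m, m)))

def parisi_matrix(r, sizes, qs):
    # sizes = [r, m_0, ..., m_k = 1], qs = [0, q_0, ..., q_k, 1].
    Q = np.zeros((r, r))
    for i, m in enumerate(sizes):
        Q += (qs[i + 1] - qs[i]) * ones_block_diag(r, m)
    return Q

print(parisi_matrix(6, sizes=[6, 3, 1], qs=[0.0, 0.2, 0.6, 1.0]))
\end{verbatim}
For $r = 6$, block sizes $(6, 3, 1)$ and $(q_0, q_1) = (0.2, 0.6)$, this reproduces the 1-RSB matrix of eq.~\eqref{eq:rhoq_Q_1RSB}.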
Once again, the integers $\{m_i\}_{i=0}^k$ become elements of $[0,1]$ in the $r \downarrow 0$ limit.
As in the replica-symmetric case discussed above, the limit $r \downarrow 0$ also turns the maximum over $\{m_i,q_i\}$ into an infimum \cite{mezard1987spin}.
In the end, the $k$-RSB prediction for the free entropy is of the form:
\begin{align}\label{eq:Phi_kRSB_general}
\Phi_{k-\mathrm{RSB}}(\alpha,\beta) &= \inf_{0\leq q_0 \leq \cdots \leq q_k\leq 1} \, \inf_{0 < m_0 < \cdots < m_{k-1} < m_k = 1} \mcP[\{m_i\}, \{q_i\};\alpha,\beta].
\end{align}
It is common to represent the right hand side as a function of a step function $q(x)$ for $x \in [0,1]$, uniquely defined by $\{m_i\}$ and $\{q_i\}$, see Fig.~\ref{fig:q_frsb} (left, blue curve).
We write then the argument of the RHS of eq.~\eqref{eq:Phi_kRSB_general} as $\mcP[\{q(x)\};\alpha,\beta]$.
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{q_frsb.pdf}
\caption{
Illustration of the finite-RSB and full-RSB structure in the functions $\rho(q)$ (right) and $q(x)$ (left).
In terms of the overlap distribution, the $k$-RSB ansatz (in blue) corresponds to $\rho(q) = \sum_{i=0}^k (m_{i} - m_{i-1}) \delta(q - q_i)$, with the convention $m_{-1} = 0$.
In orange, the full-RSB distribution is $\rho(q) = \int_0^1 \delta(q-q(x)) \, \rd x$, and is assumed to have two delta peaks at the edges of its support
$q \in \{q_m, q_M\}$, with masses $\{x_m,(1-x_M)\}$ (see the equation on the right figure).
$x(q)$ is the functional inverse of $q(x)$.
\label{fig:q_frsb}}
\end{figure}
This allows us to consider completely generic distributions $\rho(q)$ (or equivalently functions $q(x)$), by taking the $k \to \infty$ limit of eq.~\eqref{eq:Phi_kRSB_general}.
This generic procedure is called ``Full Replica Symmetry Breaking'' (Full RSB), and was introduced by Parisi in \cite{parisi1979infinite}.
It yields for the free entropy a formula of the type:
\begin{align}\label{eq:Phi_FRSB_general}
\Phi_\FRSB(\alpha,\beta) &= \inf_{\{q(x)\}} \{\mcP[\{q(x)\};\alpha,\beta]\}.
\end{align}
Such formulas are usually called Parisi formulas in the spin glass literature.
Note that in many disordered models, the overlap distribution $\rho(q)$ has been observed to have two points of positive mass at the edges of its bulk (see Fig.~\ref{fig:q_frsb}, right).
This leads to the generic characterization of the function $q(x)$ as (see Fig.~\ref{fig:q_frsb} left, orange curve):
\begin{align*}
\begin{cases}
q(x) = q_m & \textrm{ if } x \in [0,x_m], \\
q(x) & \textrm{ if } x \in [x_m,x_M], \\
q(x) = q_M & \textrm{ if } x \in [x_M, 1].
\end{cases}
\end{align*}
This is purely a convention that often turns out to be convenient
and does not remove any generality as one can always set $x_m = 0$ and $x_M = 1$.
\myskip
\textbf{Relation between $\rho(q)$ and $q(x)$ --}
For an overlap distribution with a well-defined density $\rho(q)$, one has the relation $\rho(q) = x'(q)$, with $x(q) \in [0,1]$ the CDF of the overlap, and $x \mapsto q(x)$ is then the functional inverse of $q \mapsto x(q)$. For instance, for the 1-RSB distribution above, $x(q) = 0$ for $q < q_0$, $x(q) = m$ for $q \in [q_0, q_1)$, and $x(q) = 1$ for $q \geq q_1$.
\medskip\noindent
\textbf{RSB and the form of the Gibbs measure --}
Interestingly, one can interpret the level of RSB as an assumption on the structure of the level sets of the Gibbs measure (or the global minima of the energy, when $\beta = \infty$).
Roughly speaking, 1-RSB corresponds to an organization of the mass of the Gibbs measure into clusters.
Inside each cluster two solutions typically have overlap $q_1$, while solutions belonging to two different clusters have a typical overlap $q_0$.
This hierarchy can be iterated inside each cluster, which gives rise to the 2-RSB structure.
Iterating even further, the level of RSB corresponds to the depth of this hierarchical structure of clusters, which is known as \emph{ultrametric} \cite{mezard1984nature,panchenko2013parisi}.
Ultrametricity and RSB provide a beautiful mathematical representation of the free energy landscape of spin glass models,
which has also enabled the design of efficient algorithms \cite{alaoui2020algorithmic,alaoui2021optimization,subag2021following,montanari2021optimization,auffinger2022optimization}.
\myskip
A thorough description of all the consequences of replica symmetry breaking would be beyond our scope:
the major reference on this topic is \cite{mezard1987spin}, and we also invite the reader to consult \cite{talagrand2010mean}, as well as the very recent lecture notes \cite{montanari2022short}, for discussions in a more mathematically-friendly language.
\subsection{One-step replica symmetry breaking}\label{subsec:1rsb}
We start by generalizing the calculation we made in Section~\ref{subsec:rs} to the more general one-RSB ansatz we described above.
We give the results here, while the calculation is detailed in Appendix~\ref{sec_app:1rsb}.
The final result is given as an infimum over three parameters $\{m, q_0, q_1\}$ (see Fig.~\ref{fig:q_frsb} for their interpretation):
\begin{align}\label{eq:phi_1rsb}
\Phi_\ORSB(\alpha,\beta) &= \inf_{m,q_0,q_1} \Bigg[
\frac{m - 1}{2m} \log (1-q_1) + \frac{1}{2m} \log [1-mq_0 + (m-1)q_1] + \frac{q_0}{2[1-mq_0 + (m-1)q_1]} \nonumber \\
&+ \frac{\alpha}{m} \int \mcD \xi_0 \log \Bigg\{ \int \mcD \xi_1 \Bigg[1- (1-e^{-\beta}) H\Big(- \frac{\sqrt{q_0} \xi_0 + \sqrt{q_1 - q_0} \xi_1}{\sqrt{1-q_1}}\Big) \Bigg]^m \Bigg\} \Bigg].
\end{align}
Note that when $q_1 = q_0$ or when $m = 1$, the overlap distribution $\rho(q)$ reduces to a single delta peak, and we consistently retrieve the replica-symmetric solution of eq.~\eqref{eq:phi_RS}.
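As an illustration, the bracket of eq.~\eqref{eq:phi_1rsb} can be evaluated at fixed $(m, q_0, q_1)$ with Gauss--Hermite quadrature; the sketch below is ours (assuming $0 \leq q_0 < q_1 < 1$ and $0 < m \leq 1$), and the infimum over $(m, q_0, q_1)$ can then be taken with any off-the-shelf optimizer:
\begin{verbatim}
# Sketch (ours): evaluate the bracket of eq. (phi_1rsb) at fixed
# (m, q0, q1), assuming 0 <= q0 < q1 < 1, 0 < m <= 1, beta >= 0.
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.stats import norm

X, W = hermgauss(80)                  # int D xi f(xi) ~ sum W f(X)
X, W = np.sqrt(2.0) * X, W / np.sqrt(np.pi)

def phi_1rsb(m, q0, q1, alpha, beta):
    D = 1.0 - m * q0 + (m - 1.0) * q1
    quad = (m - 1.0) / (2.0 * m) * np.log(1.0 - q1) \
        + np.log(D) / (2.0 * m) + q0 / (2.0 * D)
    # H is the Gaussian tail, i.e. norm.sf; rows: xi0 nodes, cols: xi1.
    arg = -(np.sqrt(q0) * X[:, None]
            + np.sqrt(q1 - q0) * X[None, :]) / np.sqrt(1.0 - q1)
    inner = ((1.0 - (1.0 - np.exp(-beta)) * norm.sf(arg)) ** m) @ W
    return quad + alpha / m * (W @ np.log(inner))
\end{verbatim}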
\myskip
\textbf{The zero-temperature limit and the injectivity threshold --}
In Appendix~\ref{subsec_app:zerotemp_1rsb}, we detail how to take the $\beta \to \infty$ limit in $\Phi_\ORSB(\alpha,\beta)$,
and to obtain the function $f^\star_\ORSB(\alpha) \coloneqq \lim_{\beta \to \infty} [-\Phi_\ORSB(\alpha, \beta)/\beta]$. In Appendix~\ref{subsec_app:1rsb_numerical} we present the numerical procedure we used to solve the resulting equations.
We reach the light blue curve in Fig.~\ref{fig:chi_estar_T0} for $f^\star_\ORSB(\alpha)$, and in particular
we have
\begin{align}\label{eq:alphainj_1RSB}
\alpha_\inj^\ORSB \simeq 6.7157.
\end{align}
\myskip
\textbf{Validity of the 1-RSB assumption --}
While the 1-RSB ansatz is a natural extension of the previous replica symmetric assumption,
the results of \cite{franz2017universality} (which studies the same model with a slightly different energy function) strongly suggest that for any $\alpha > 2$, at low enough temperatures
the system undergoes a continuous transition from a RS to a Full RSB phase, without any finite level of RSB at intermediate temperatures\footnote{
In particular, a stability analysis of the 1-RSB ansatz, similar to what we did in Appendix~\ref{subsec_app:stability_rs}, would yield that it becomes unstable at the same temperature as the RS
ansatz.
}.
This motivates us to compute the complete Full RSB picture in Section~\ref{sec:full_rsb}.
Nevertheless, we will see that the 1-RSB prediction of eq.~\eqref{eq:alphainj_1RSB} is already very accurate.
\section{Introduction: Open-world AI}
Many classical adversarial AI tasks, such as game playing, take place in ``closed-world'' domains where all aspects of the domain---the types of entities, their properties, their actions, and the overall domain dynamics---are fixed. They are typically known to the agents before they start their task performance, and they do not change during task execution. Examples of such domains are ``perfect information games'' such as Chess, Go, or Ms.~Pac-Man, where the rules of the game, the goals of the players, and the entire state of the game are always known by all agents \cite{brown2018depthlimited,nash1,perez2016general}. This characteristic simplifies the game AI design by limiting novelties to instances of known types (e.g., a chess move with the bishop that a player has not seen before), thus allowing the development of the game AI without needing to anticipate any unknown scenarios within the bounds of the system (e.g., a novel piece with novel rules being introduced).
In contrast, agents operating in an ``open-world'' must be able to handle changes to entities and domain rules. Specifically, in the context of open-world games, the rules, the state, and the actions of other players might only be partially known or could change at any time. The agent thus must discover these changes while playing the game \cite{boney2020learning,inbook}. Especially {\em interactive novelties}, where agents interact with each other and with the environment, present a challenge to any agent designed under a \textit{closed-world} assumption \cite{PONSEN200759,boardgame1,boardgame3}. In open-world environments, the effects of actions and interactions can change during task execution; therefore, making the wrong move or wrongfully interacting with other agents can cause the agent to fail the task.
To tackle the challenges of interactive novelties in adversarial open worlds, we propose general methods and architectural mechanisms that allow AI agents to detect, characterize, and adapt to interactive novelties in adversarial games. We develop a general novelty-handling framework, as well as symbolic logical reasoning methods to detect, learn, and adapt to novelties in \textit{open-world} environments. Our main contributions are (1) an architectural framework to handle interactive novelties in an adversarial environment, and (2) new logical reasoning approaches to characterize novelties and accommodate them during planning (expanding the current state space, action space, and expected action effects).
\section{Background and Related Work}
Recent applications of multi-agent environments such as multiplayer games \cite{Peng}, Poker \cite{billings1998opponent}, social robotic systems \cite{Barakova}, and adversarial attack and defense \cite{REN2020346} involve adversarial elements and complex agent behaviors. Therefore, learning how to adapt to opponents' strategies becomes an essential task for current AI architectures. Unlike collaborative AI, where all the agents work together to pursue a team goal, adversarial AI agents must learn other agents' behaviors to develop suitable strategies to maximize their own goals. This paper uses the open-world Monopoly environment as the primary test bed. Monopoly contains several main characteristics of an adversarial environment, such as unknown opponents' behaviors, stochastic elements (e.g., dice rolls, community cards, and chance cards), and novelties in the game. These characteristics can be found in many real-world domains, such as stock market forecasting, self-driving vehicles, or cybersecurity.
Current cognitive architecture systems such as probabilistic graphical models \cite{koller2009probabilistic,sucar2015probabilistic} provide an excellent tool that combines graph theory and probability theory to enable efficient probabilistic reasoning and learning. The model is widely used in the AI community as one of the main tools to generate state-of-the-art results. These results show the capability of the model to handle some of the challenges in traditional cognitive architecture, such as perception, interaction, and adaptation. However, these approaches are not explicitly developed to deal with \textit{open-world} environments: even though they have shown excellent results in \textit{closed-world} environments, addressing open-world and interactive novelty remains a challenge.
Over the past two decades, many research studies have attempted to tackle the challenge of open-world AI. However, the challenge of integrating a general intelligence system capable of detecting and adapting to an open-world environment remains unsolved \cite{DBLP:journals/corr/abs-2106-02204,AIBenGoertzel,Gizzi2021TowardCP,sarathy2020spotter}. Several challenges of integrating general AI systems have been pointed out in previous studies, such as the difficulty of integrating the requisite capabilities (e.g., detecting novelty and adapting to it), and the difficulty of measuring the performance of the agent towards human-like behavior \cite{AIBenGoertzel}. Reinforcement learning (RL) methods have been proposed as a solution for open-world environments \cite{Padakandla_2020,choi,ALA12-hester, arulkumaran2017deep} in recent years. These methods use past and present experience to learn a new representation of the world or attempt to construct a suitable control policy in dynamically-changing environments. However, RL and deep RL struggle to adapt to small environmental changes: small pixel changes in Atari arcade games can cause an RL agent to break down and fail to complete the task, and adaptation to novelties may often take as long as training the agent from scratch \cite{goelnovelgridworlds}. Recent works in the explainable AI (XAI) literature have looked at answering contrastive queries \cite{sreedharan2020bridging}, which could very well be about potential novelties in the world. However, applying such a line of work to detecting open-world novelties would require an agent (assumed to be a human in the loop in XAI) to formulate and initiate queries to elicit the presence of novelties. Similarly, XAI works that initiate an explanatory dialogue \cite{verma2022advice} depend on the human in the loop (instead of automated detection and characterization) to analyze and detect open-world novelties. Finally, there are works that learn terms in the user's vocabulary \cite{soni2022preference}; the user can then use these terms to advise the agent on accommodating the novelty.
Current approaches in cognitive AI systems, such as the Cognitive-Affective State System (CASS) and the Sigma Cognitive Architecture, have attempted to address the open-world AI challenge \cite{DBLP:journals/corr/abs-2101-02231,JiSigma,AIBenGoertzel}. Both architectures have been constructed to solve the problem without updating their core components or characterizing the novelty. These approaches may improve the overall performance of the AI; however, neither architecture is able to apprehend specific changes in the environment and accommodate them. More development is needed for these architectures to perform well in a realistic open-world environment, where parts of the information can change, such as adversary mental models, transition functions, and agents' actions and interactions.
\section{Preliminaries}
\subsection{n-Person Non-Cooperative Stochastic Turn-Based Games}
We consider $\mathcal{M} = \langle n, S, \{A_i\}_{i \in n}, T, R,
\gamma \rangle$ as the non-cooperative stochastic game environment
consisting of a finite, nonempty state space $S$; $n$
players, $\{1,2,\cdots,n\}$; a finite action set $A_i$ for each
player, $\{A_1, A_2, A_3, \cdots, A_n \}$; a set of conditional
transition probabilities between states $T$, such that $T(s,
a_1,a_2,\cdots,a_n,s') = P(s'|s,a_1,\cdots,a_n)$; and a reward function
$R: S \times A \rightarrow \mathbb{R}$, where $A
= A_1 \times A_2 \times A_3 \times \cdots \times A_n$. An $n$-person
stochastic game is turn-based if at each state there is exactly one
player who determines the next state. In order to formulate the
problem, we extend the action sets $A_i$, $i \in \{1,2,\cdots,n\}$,
to be state dependent: for each particular state $s$, each player $i$
has a restricted action set $A_{ir}$, and there is at most one $i \in
\{1,2,\cdots,n\}$ such that $|A_{ir}| > 1$.
At the beginning of the game, all players start at the same initial state $s_0 \in S$. Each player $i$ independently performs an action $a_i^1 \in A_i$. Given $s_0$ and the selected joint action $a^1 = \{a_1^1, a_2^1,\cdots, a_n^1\} \in A$, the next state $s_1$ is derived from $s_0$ and $a^1$ with probability $P(s_1|s_0,a^1)$. Then each player independently performs an action $a_i^2$, and the next state $s_2$ is derived from $s_1$ and $a^2$ with probability $P(s_2|s_1,a^2)$. The game continues in this fashion for an infinite number of steps, or until the goal is reached. The game therefore generates a random history $h = \{s_0, a^1, s_1, a^2,\dots\} \in H = S \times A \times S \times A \cdots$. Based on a partial history $h'= \{s_0, a^1, s_1, a^2,\dots,s_k\}$, we can derive a conditional distribution, the so-called strategy $\pi_i(h') \in P(A_i)$, where $P(A_i)$ is the set of probability measures on $A_i$. A strategy set $\pi = \{\pi_1, \pi_2,\dots,\pi_n\}$, consisting of a strategy $\pi_i$ for each player $i$, is used to determine the next action $a_i^{k+1}$. Finally, the reward function $R$ is specified based on the transitions of the game, and $\gamma \in (0,1]$ is the discount factor, which determines the relative importance of immediate and long-term future rewards.
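As a concrete (purely illustrative) representation, the tuple $\mathcal{M}$ can be encoded as a simple container; all names and types below are our assumptions, not part of any evaluation code:
\begin{verbatim}
# Hypothetical container for the tuple M = <n, S, {A_i}, T, R, gamma>;
# the state and action types are placeholders.
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

State = int                 # placeholder state type
Action = str                # placeholder action type
JointAction = Tuple[Action, ...]

@dataclass
class StochasticTurnBasedGame:
    n: int                                          # number of players
    states: List[State]                             # finite state space S
    actions: Dict[int, List[Action]]                # action set A_i per player
    transition: Callable[[State, JointAction, State], float]  # T(s, a, s')
    reward: Callable[[State, JointAction], float]   # R: S x A -> R
    gamma: float                                    # discount factor in (0, 1]
\end{verbatim}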
\subsection{Interactive Novelty}
In general, novelty refers to a change in the environment that the agent can apprehend neither from its own knowledge base nor from its past experience. In this paper, we address the challenge of detecting and accommodating interactive novelty. More specifically, interactive novelty refers to changes in agents' actions, interactions, and relations.
\begin{itemize}
\item \textbf{Novelty Level 1 [Action]}: New classes or attributes of external agent behavior.
\item \textbf{Novelty Level 2 [Interaction]}: New classes or attributes of dynamic, local properties of behaviors impacting multiple entities.
\item \textbf{Novelty Level 3 [Relation]}: New classes or attributes of static, local properties of the relationships between multiple entities.
\end{itemize}
We denote $\mathcal{C} = \{C_1, C_2, \cdots, C_n\} \subseteq A$ as the interaction set of the agent, representing the agent's capability to interact with other agents or with the environment. The relation set $\mathcal{L} = \{L_1, L_2, \cdots, L_n\} \subseteq H$ represents the relationship of the agent with other agents or with the environment, such that the relationship is shown as a part of the history, or action sequence. Each action $a_i$ in the action set $A$ is defined by a preconditions set $\delta(a_i)$ and an effects set $\beta(a_i)$. The preconditions set $\delta(a_i)$ of an action $a_i$ includes all the conditions that need to be satisfied in order to execute the action, while the effects set $\beta(a_i)$ indicates the expected results after a successful execution of action $a_i$.
The set of interactive novelties $\mathcal{N}$ consists of all the changes that can occur in the action set, interaction set, and relation set. In this scenario, action novelty refers to changes in the action space, action preconditions, or action effects. We denote $A' = \{A_1', A_2', \cdots, A_n'\}$ as the new action set, which contains all actions unknown to the agent, such that $A' \cap A = \emptyset$ and $A' \not\subseteq \mathcal{KB}$, where $\mathcal{KB}$ is the agent's knowledge base. We assume that the preconditions $\delta'$ and effects $\beta'$ of the new action set $A'$ are completely unknown to the agent, and both must be discovered through the agent's interactions. Similarly, we can define the new interaction set $\mathcal{C}'$ and relation set $\mathcal{L}'$, and formulate interaction novelty and relation novelty accordingly.
\subsection{Problem Formulation: Interactive Novelty Detection and Adaptation}
The integrated architecture allows us to map all the essential information about the environment (Section~3.1) into the knowledge base $\mathcal{KB}$. Based on this information, we can construct the strategy $\pi$ using the truncated-rollout MCTS solver. However, because interactive novelties may occur throughout the course of the game, the plan must be adjusted in order to accommodate new actions, interactions, or relations.
As described in Section~3.1, the pre-novelty environment is represented as a non-cooperative stochastic turn-based game:
$$\mathcal{M} = \langle n, S, \{A_i\}_{i \in n}, T, R, \gamma \rangle$$
In order to detect and accommodate interactive novelties, we define a detection function $d(s,a,s')$ that determines whether there is any unexpected change in the environment after the agent selects an action $a$ in state $s$ and observes the next state $s'$, or whether a new action was performed. In addition, an identification function $\iota(s,a,s')$ characterizes the cause of the change based on logical reasoning. The purpose of these functions is to represent the new environment after novelty (post-novelty) $\mathcal{M}'$, such that $$\mathcal{M}' = \langle n, S', \{A_i'\}_{i \in n}, T', R', \gamma \rangle$$
\noindent where $S'$ is the new post-novelty state space, $\{A_i'\}_{i \in n}$ is the new finite action set for each agent $i$ in the environment, $T'$ is the new conditional transition function, and $R'$ is the new post-novelty reward function. From the new model of the post-novelty world $\mathcal{M}'$, we modify the current strategy set $\pi$ in order to adapt to the changes in the environment, as described in the next section.
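To make the roles of $d$ and $\iota$ concrete, the following minimal Python sketch (ours; all identifiers are illustrative) captures a simplified form of the logic detailed later in \textit{Algorithm 1}:
\begin{verbatim}
# Illustrative sketch of d(s, a, s') and iota(s, a, s'); `expected_next`
# comes from the knowledge base KB, `known_actions` is the action space A.
def d(s, a, s_next, expected_next, known_actions):
    # Novelty suspected if the observation deviates from the KB-derived
    # expectation, or if an action outside the known action space occurred.
    return s_next != expected_next or (
        a is not None and a not in known_actions)

def iota(s, a, s_next, expected_next, known_actions):
    # Coarse characterization (Algorithm 1 distinguishes further cases).
    if a is not None and a not in known_actions:
        return "action"
    if s_next != expected_next:
        return "relation"
    return None
\end{verbatim}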
\section{Adversarial Domain: Open-World Monopoly}
\subsection{Environment Implementation}
Monopoly is a multi-player adversarial board game where all players start at the same position. The game can support up to 4 players, as shown in Figure~\ref{board}. All players roll dice to move across the board. The ultimate goal of the game is to be the last player standing after bankrupting the other players. This objective can be achieved by buying properties and railroads, monopolizing color sets, and developing houses on properties. If one player lands on a property owned by another player, they are charged rent or a fee. After monopolizing color sets and developing houses and hotels, players can charge higher fees when other players land on their properties. The game includes different surprise factors such as chance cards, community cards, jail, auctions, and the ability to trade between agents. These elements can completely change the game; hence, any action in the game needs to be adapted to dice rolls, community cards, chance cards, and the decisions of other players. These game characteristics make integrated planning and execution more challenging. In the game simulator, novelties can be injected on top of the standard game to study how the agent detects and accommodates these changes \cite{mono}. The Open-World Monopoly environment was developed by the third-party team that ran the evaluation. Unlike traditional Monopoly, where we can fully observe all the states and actions of other agents, Open-World Monopoly does not allow us to monitor all the actions and interactions that occur between our turns \cite{KEJRIWAL2021102364}; the environment is thus partially observable.
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{Board.jpg}
\caption{Classic Monopoly Board}
\label{board}
\vspace{-1em}
\end{figure}
\subsection{Interactive Novelties in Monopoly}
\label{iNovelty}
We implement the three categories of interactive novelty discussed in Section~3.2 into the classic Monopoly game. Some illustrative examples of these novelties are described below:
\begin{itemize}
\item \textbf{Action Novelty}: This class of novelty can be illustrated through a stay-in-jail action. For this novelty, a player can stay in jail as long as they want; however, while voluntarily staying in jail, the player must pay a certain fee to the bank each time they receive rent.
\item \textbf{Interaction Novelty}: We illustrate interaction novelty through a loan interaction between two agents. For example, a player can send a loan request to another player and pay the loan back over a specific amount of time that both parties agree on.
\item \textbf{Relation Novelty}: We illustrate relation novelty through a relation property, where we enforce a relation of homogeneity between properties in a specific monopolized color group. The player must homogeneously improve a monopolized set of properties within a given move. For example, imagine the player has 3 orange properties (a monopolized set). In the default game, the player could set up a house on the first property and leave the second one unimproved. Under this novelty, if the player improves the first property within a move, they must also improve the second and third, so that the properties are ``homogeneously'' improved at the end of the move. Failure to do so leads to the improvement being revoked at the end of the move.
\end{itemize}
\section{The Architectural Framework}
The architecture includes four main components: the environment
interface, the novelty handling component, a knowledge base, and a
planning agent, as shown in Figure~\ref{architecture}. The Novelty
Handler component was integrated into the ``Agent Development
Environment'' (ADE) \cite{Andronache2006AdeA}, which allows for the
development of different integrated architectures. The Knowledge Base
of the agent is constructed by the \textbf{Belief Stack},
\textbf{Action Stack}, \textbf{Interaction Stack}, and
\textbf{Relation Stack}. The Planning Agent component develops and
operates the plan based on the information in the knowledge base and
the goal. The Monopoly Interface connects to the Monopoly API so that
novelties can be injected into the environment. These novelties are
detected and characterized by the novelty handler component. The
component is developed using Answer Set Programming (ASP), a
declarative programming paradigm oriented towards challenging search problems
\cite{baral_2003, baral-etal-2004-using,
DBLP:journals/corr/GebserKKS14}. After the novelties are determined,
the novelty handler updates the new actions, effects, or states to the
knowledge base. When the agent receives the updated information, the
planning agent then reconstructs the plan according to the new
knowledge base.
\begin{figure*}[t]
\centering
\includegraphics[scale=0.55]{Monopoly_Architecture.png}
\caption{The overall architecture of the novelty handling framework in an adversarial environment}
\label{architecture}
\end{figure*}
\subsection{Novelty Detection}
We record the information of the game as provided by the game environment and compare it with our ``expectation'' of the game board state. This ``expectation'' state is derived from the agent's knowledge base of the game, including expected states, actions, action preconditions, and action effects. The game environment provides us with the actual game board states and the actions that have occurred between the current time step and the previous time our agent performed an action (e.g., after our agent lands on a property and buys it, all other agents get to act in order until it is our agent's turn again). When we notice a discrepancy between our expected state and the actual state, we surmise that something must have changed within the game, i.e., a novelty may have been introduced, which makes some aspects of our domain representation incorrect. Such unpredicted changes require the agent to update its knowledge base accordingly (e.g., a new action is added to the action space of Monopoly). An example of the novelty-handling component is shown in \textit{Algorithm 1}. The evaluation is run in a tournament setup (many games per tournament), discussed in Section~\ref{results}; therefore, when the agent detects a novelty in the current game, this novelty information is used to adapt in the following games.
\begin{algorithm}[ht]
\caption{Novelty Detection Pipeline}
\begin{algorithmic}[1]
\State Initialization:
\State State space $\mathcal{S}$, Action space $\mathcal{A}$, Expected state $\mathcal{S}'$
\State $d(s_t,a_t, s'_t)$ = $False$
\State $\iota(s_t, a_t, s'_t)$ = $None$
\State $t$ = 0 \Comment{Time step}
\While{Game does not end}
\If {$a_t = None$} \Comment{No action was performed}
\If {$S_{t+1} \neq S'_{t+1}$ }
\State $d(s_t, a_t, s'_t)$ = $True$ \Comment{Novelty Detected}
\State $\iota(s_t, a_t, s'_t)$ = $Relation$ \Comment{Novelty Characterization}
\Else
\State $d(s_t,a_t, s'_t)$ = $False$
\State $\iota(s_t, a_t, s'_t)$ = $None$
\EndIf
\Else
\If {$a_t \notin \mathcal{A}$} \Comment{Unknown Action}
\State $d(s_t, a_t, s'_t)$ = $True$ \Comment{Novelty Detected}
\State $\iota(s_t, a_t, s'_t)$ = $Action$ \Comment{Novelty Characterization}\\
\Comment {\textit{Case 1: All precondition $\delta(a_{t+1})$ for action $a_{t+1}$ are satisfied but action $a_{t+1}$ is not executable}}
\ElsIf {$\delta(a_{t+1})$ == $True \land a_{t+1} == False$}
\State $d(s_t,a_t, s'_t)$ = $True$ \Comment{Novelty Detected}
\State $\iota(s_t, a_t, s'_t)$ = $Action$ \Comment{Novelty Characterization}\\
\Comment {\textit{Case 2: At least one precondition for action $a_{t+1}$ is not satisfied but action $a_{t+1}$ is executable}}
\ElsIf {$\delta(a_{t+1}) == False \land a_{t+1} == True$}
\State $d(s_t,a_t, s'_t)$ = $True$ \Comment{Novelty Detected}
\State $\iota(s_t, a_t, s'_t)$ = $Action$ \Comment{Novelty Characterization}
\ElsIf{$\ldots$}
\State $\ldots$ \Comment{More cases of interactive novelty}
\Else
\State $d(s_t,a_t, s'_t)$ = $False$
\State $\iota(s_t, a_t, s'_t)$ = $None$
\EndIf
\EndIf
\State t = t + 1
\EndWhile \\
\Return $d(s_t,a_t, s'_t)$, $\iota(s_t, a_t, s'_t)$
\end{algorithmic}
\end{algorithm}
\subsection{Novelty Characterization}
Next, the agent uses a novelty identification module to characterize
the novelty. This module has several sub-modules (which can be run in
parallel), each focused on determining a specific novelty type. Each novelty identification sub-module uses the same ASP code
(except for two changes) that is used for hypothetical reasoning about the effect of an action. The first change is that a particular parameter
(the focus of that specific sub-module), which was originally
a fact, is now replaced by ``choice'' rules of ASP that enumerate
the different values that the parameter can take. The second change is
that constraints are added to remove possible answer sets where the
predicted game board state does not match the observed game board
state. The resulting program's answer sets give us the parameter values which reconcile the predicted game board state and the
observed game board state. If there is only one answer set and thus a
unique parameter value, then if this value is different from the value
we had earlier, we have identified a novelty. Now we can
update our ASP code that was used for hypothetical reasoning by simply
replacing the earlier value of the parameter with the new value.
Below we first give a glimpse of how ASP can be used for reasoning
about the next state and how that code can be minimally modified to
infer a novelty.
To reason about the next state, the ASP code will first define the game
parameters through facts such as the following:
\begin{verbatim}
dice_value(1..6).
player(player1;player2).
cash(1..1000).
asset("B&O_Railroad").
penalty(50).
\end{verbatim}
Then rules of the following form are used to define actions and fluents.
\begin{verbatim}
action(sell_property(P,X)) :- player(P), asset(X).
fluent(asset_owned(P,V)) :- player(P), asset(V).
\end{verbatim}
Properties of actions, such as their pre-conditions, and their effects are defined using rules of the following kind:
\begin{verbatim}
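% Executability: a player can sell a property only if they own it.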
:- occurs(sell_property(P,V), T), player(P),
asset(V), time(T), not holds(asset_owned(P,V),T).
not_holds(asset_owned(P,V),T+1) :-
holds(asset_owned(P,V),T),
occurs(sell_property(P,V),T),
player(P), asset(V),time(T).
not_holds(asset_mortgaged(P,V),T+1) :-
holds(asset_owned(P,V),T),
occurs(sell_property(P,V), T),
player(P), asset(V), time(T).
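% Cash effects: an unmortgaged asset sells for its full price; a
% mortgaged one (rules further below) sells for its mortgage price.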
holds(current_cash(P,X+Y),T+1) :-
holds(current_cash(P,X),T),
occurs(sell_property(P,V),T),
not holds(asset_mortgaged(P,V),T),
asset_price(V,Y), player(P), asset(V), time(T).
not_holds(current_cash(P,X),T+1) :-
holds(current_cash(P,X),T),
occurs(sell_property(P,V),T),
not holds(asset_mortgaged(P,V),T),
asset_price(V,Y), player(P), asset(V), time(T).
holds(current_cash(P,X+Y),T+1) :-
holds(current_cash(P,X),T),
occurs(sell_property(P,V),T),
holds(asset_mortgaged(P,V),T),
asset_m_price(V,Y), player(P), asset(V), time(T).
not_holds(current_cash(P,X),T+1) :-
holds(current_cash(P,X),T),
occurs(sell_property(P,V),T),
holds(asset_mortgaged(P,V),T),
asset_m_price(V,Y), player(P), asset(V), time(T).
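% Paying the jail fine requires being in jail and at least 50 in cash.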
:- occurs(pay_jail_fine(P), T), player(P), time(T),
not holds(in_jail(P), T).
:- occurs(pay_jail_fine(P), T), player(P), time(T),
not holds(current_cash(P, _), T).
:- occurs(pay_jail_fine(P), T), player(P), time(T),
holds(current_cash(P,X),T), X < 50.
not_holds(in_jail(P), T+1) :- holds(in_jail(P), T),
occurs(pay_jail_fine(P), T),
player(P), time(T).
not_holds(current_cash(P, X), T+1) :-
holds(current_cash(P,X),T), holds(in_jail(P), T),
occurs(pay_jail_fine(P), T), player(P), time(T).
holds(current_cash(P, X-50), T+1) :-
holds(current_cash(P,X),T), holds(in_jail(P), T),
occurs(pay_jail_fine(P), T), player(P), time(T).
\end{verbatim}
The inertia rules are expressed as follows:
\begin{verbatim}
holds(F,T+1) :- fluent(F), holds(F,T),
not not_holds(F,T+1), time(T).
not_holds(F,T+1) :- fluent(F), not_holds(F,T),
not holds(F,T+1), time(T).
\end{verbatim}
The initial state is defined using holds facts with respect to time step 0, such as:
\begin{verbatim}
holds(in_jail(player1), 0).
holds(current_cash(player1,500),0).
\end{verbatim}
An action occurrence at time step 0 is then defined as a fact in the following form.
\begin{verbatim}
occurs(pay_jail_fine(player1),0).
\end{verbatim}
Now when a complete ASP program with rules and facts of the above kind is run, we get an answer set from which we can determine the state of the world at time step 1.
Suppose that the answer set has the facts:
\begin{verbatim}
holds(in_jail(player1), 0).
occurs(pay_jail_fine(player1),0).
holds(current_cash(player1,500),0).
holds(current_cash(player1,450),1).
\end{verbatim}
while our next observation gives us:
\begin{verbatim}
obs(current_cash(player1,477),1).
\end{verbatim}
Our prediction that player1's current\_cash is 450 at time step 1 (after paying the expected fine of 50) differs from our observation that it is 477. This suggests there is a novelty, which can be determined by the following two simple rules.
\begin{verbatim}
discrepancy(F,T) :- fluent(F), time(T),
                    holds(F,T), not obs(F,T).
discrepancy(F,T) :- fluent(F), time(T),
                    not holds(F,T), obs(F,T).
\end{verbatim}
While the above could have been implemented in any language (including Python, in which we also implemented the simulator), having it in ASP makes it easier for us to take the next step, which is to find out what the novelty is.
To do so, we modify the above ASP code by adding the following and removing ``penalty(50)'' (referring to the jail fine in the Monopoly game) from the original code.
\begin{verbatim}
oneto500(1..500).
1 { penalty(X) : oneto500(X)} 1.
:- obs(current_cash(P,X),1),
holds(current_cash(P,Y),1), X!=Y, player(P).
\end{verbatim}
In the above, the first fact and the choice rule define the range of penalties that we explore. With just those two rules, we would obtain multiple answer sets, with the penalty ranging from 1 to 500.
The constraint (the last ASP rule) then eliminates all the answer sets where the observation about current\_cash does not match the holds prediction. In the answer set that remains, we get the penalty value that makes the observation match the prediction, thus allowing us to identify the novelty with respect to the penalty. In this particular case, the program will have the answer set with ``penalty(23)'', thus characterizing the novelty that the penalty is now 23.
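Outside ASP, the same search can be mimicked in a few lines; the following Python analogue (ours, purely illustrative) enumerates candidate penalties and keeps those consistent with the observation:
\begin{verbatim}
# Brute-force analogue of the ASP choice rule plus constraint above.
cash_before, cash_observed = 500, 477
candidates = [p for p in range(1, 501)             # explored penalty range
              if cash_before - p == cash_observed] # matching constraint
print(candidates)  # [23]: the jail fine is now 23
\end{verbatim}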
\subsection{Novelty Accommodation}
Since novelties in the state (features, dynamics, actions) mean the
agent would have to replan often and would have to do so based on the
most updated information, we were interested in developing an online
planning algorithm to determine the best action. However, with
environments that are both \textit{long-horizon} and
\textit{stochastic}, using online planning approaches like Monte-Carlo tree search quickly becomes intractable. To address
this problem, we formulate a truncated-rollout-based algorithm that
uses updated domain dynamics (learned from detected novelties) for a
few steps of the rollout and then uses a state evaluation function to
approximate the return for the rest of that rollout. In our evaluation
function, we use both domain-specific components and a more general
heuristic to approximate the return from the state after the truncated
rollout. Furthermore, to ensure the agent adapts to the detected novelties, we made both the environment simulator used for rollouts and the evaluation function sufficiently flexible and conditioned on the environment attributes; we only used a few tuned constants. Thus,
whenever a novelty was detected, we updated the relevant attributes in
our simulator and evaluation function before running our algorithm to
decide our actions. Using this approach, we are able to incorporate
novel information into our decision-making process and adapt
efficiently. An example of the whole process is shown in
\textit{Algorithm 2}.
We now provide a detailed description of the rollout algorithm and the evaluation function. In our algorithm, when choosing the next best action in a given state, we execute multiple rollouts for each possible action and compute the mean return value for each action. Each rollout is terminated either when some terminal state is reached or when some number $k$ of actions have been taken. The rollouts use the updated domain dynamics of the environment. Due to the potentially high branching factor, we keep these rollouts short (which also limits the effect of errors in our characterization of any novelties).
However, to infer some approximation of the long-term value of an action, we use an evaluation function. Our evaluation function consists of two components: one that is domain-specific and one that is heuristic-based and can be applied to any domain in general. The heuristic component involves relaxing the domain and performing another rollout on the relaxed domain for some depth $l$. Examples of relaxations include limiting adversarial agents' actions and determinizing the domain dynamics; for instance, in the case of the Monopoly domain, we prevent the agent from taking any buying or trading actions. The domain-specific component computes the value of the state as the sum of two terms, $\mathcal{M}_\text{assets}$ and $\mathcal{M}_{\text{monopoly}}$, where $\mathcal{M}_\text{assets}$ is the value of all the assets the agent owns, and $\mathcal{M}_{\text{monopoly}}$ is the maximum rent that the agent would get if it gains a monopoly over any color, scaled down by how far the agent is from obtaining that monopoly.
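A compact Python sketch of this truncated-rollout selection is given below (ours; the simulator \texttt{step}, the terminal test \texttt{is\_terminal}, and the state value \texttt{evaluate} are assumed interfaces standing in for the components described above, not the exact implementation):
\begin{verbatim}
import random

def choose_action(state, legal_actions, step, is_terminal, evaluate,
                  k=5, n_rollouts=50):
    # step(s, a) -> (s', r) uses the updated (post-novelty) dynamics;
    # evaluate(s) approximates the return beyond the truncation point.
    best_action, best_value = None, float("-inf")
    for a in legal_actions:
        total = 0.0
        for _ in range(n_rollouts):
            s, ret = step(state, a)
            for _ in range(k - 1):          # truncate after k steps
                if is_terminal(s):
                    break
                # simplification: reuse the root's legal actions here
                s, r = step(s, random.choice(legal_actions))
                ret += r
            total += ret + evaluate(s)      # approximate the tail return
        if total / n_rollouts > best_value:
            best_action, best_value = a, total / n_rollouts
    return best_action
\end{verbatim}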
\begin{algorithm}[ht]
\caption{New Action Effect Novelty Handling}
\begin{algorithmic}[1]
\State Initialization: state space $S$, action space $A$, precondition set of all actions $\delta(a)$,
\State $d(s_t,a_t)$ = $False$
\State $\iota(s_t, a_t)$ = $None$
\State $t$ = 0 \Comment{Time step}
\While{Game does not end}
\State Given $S_t \times a_t \overset{\beta(a_t)}{\rightarrow} S_{t+1}$
\If {[$\beta(a_t) \notin \beta(a)$] $\cup$ $(S_{t+1} \neq S_{t}')$}
\State $d(s_t,a_t)$ = $True$
\State $\iota(s_t, a_t)$ = $Action$
\Else
\State $d(s_t,a_t)$ = $False$
\State $\iota(s_t, a_t)$ = $None$
\EndIf
\If {$\iota(s_t, a_t)$ = $Action$}
\State{\textit{Update Action Set If Needed}}
\State $A.insert(a_t)$
\State{\textit{Update Action Effect Set If Needed}}
\State $\beta(a).insert(\beta(a_t))$
\State{\textit{Update Action Precondition Set If Needed}}
\State $\delta(a).insert(\delta(a_t))$
\State $\mathcal{M}_{assets} \leftarrow \mathcal{M}'_{assets}$ \Comment{Update assets value}
\State $\mathcal{R}_{s} \leftarrow \mathcal{R}'_{s}$ \Comment{Update short-term gain}
\State $\mathcal{R}_{l} \leftarrow \mathcal{R}'_{l}$ \Comment{Update long-term gain}
\State $\mathcal{M}_{Monopoly} \leftarrow \mathcal{M}'_{Monopoly}$ \Comment{Update Monopoly beneficial value}
\EndIf
\State t = t + 1
\EndWhile \\
\Return $d(s_t,a_t)$, $\iota(s_t, a_t)$
\end{algorithmic}
\end{algorithm}
\section{Evaluation \& Results}
\label{results}
\subsection{External Evaluation}
In an effort to maintain the integrity of the evaluation, all information about the novelties was hidden from our team, and all information about our architecture and methodologies was hidden from the evaluation team. The external evaluations were performed by the third-party team that originally created the Open-World Monopoly domain. Our agent was evaluated on the three interactive novelty types: \textit{action}, \textit{interaction}, and \textit{relation}. For each type of interactive novelty, more than 20 different novelties were introduced during the evaluation process. Each novelty level also has three difficulty levels (shown in \textit{Table 1}). The difficulty level expresses how hard the novelty is to detect and use. For instance, if the novelty involved an action, an \textit{easy} difficulty means the action was available for the agent to perform without any preconditions; a \textit{medium} difficulty means the action can be detected by observing other agents; a \textit{hard} difficulty means the agent can only act under specific circumstances, and it may require the agent to explore the environment to learn the action. In total, the agent was evaluated on more than 60 novelties. At least 600 tournaments (100 games per tournament) were run to measure our agent's performance. Tournaments started with traditional Monopoly games (no novelty); at some unannounced point in the tournament (e.g., in the $5^{th}$ game), a novelty was introduced. To avoid ambiguity between novelties, only one specific novelty at a time was injected into a tournament. In our internal evaluation, ASP performed excellently in novelty detection and characterization; however, due to the characteristics of ASP, the run time of the novelty-handling component can be very high. Moreover, given the nature of the game of Monopoly (a game can go on indefinitely) and limited computational resources, we decided to use Python to implement the solver requirements and leverage direct access to the simulator, instead of using ASP to first model and subsequently detect novelties, in order to optimize the run time for the external evaluation.
Our agent was evaluated based on four different metrics. M1 is the percentage of correctly detected trials (CDT): the percentage of trials that have at least one True Positive and no False Positives (FP). M2 is the percentage of FP (the agent reports novelty when no novelty exists). M3 is the novelty reaction performance (NRP) before the novelty was introduced (pre-novelty). M4 is the NRP after the novelty was introduced (post-novelty). To measure the NRP, our agent was evaluated against a heuristic agent which embeds some of the most common strategies in Monopoly (e.g., target specific colors, never buy certain properties, always keep a cash reserve). The NRP of the agent is computed by the following formula:
$$ NRP = \frac{\mathcal{W}_{agent}}{\mathcal{W}_{baseline}} $$
where $\mathcal{W}_{agent}$ is the win rate of our agent (pre-novelty for M3, post-novelty for M4) and $\mathcal{W}_{baseline}$ is $65\%$. For example, M3 $= 141.54\%$ corresponds to a pre-novelty win rate of about $92\%$.
\begin{center}
\begin{table}[t]
\begin{tabular}{ |p{1cm}||p{2cm}| p{2cm}| p{2cm}| }
\hline
\multicolumn{4}{|c|}{Novelty Level 1: Action} \\
\hline
Metrics & Easy & Medium & Hard \\
\hline
& Mean & Mean & Mean \\
\hline
M1 & $100\% $ & $100\% $ & $100\% $ \\
M2 & $0\% $ & $0\% $& $0\% $\\
M3 & $ 141.54\% $ & $ 129.23\% $ & $ 136.92\% $\\
M4 & $ 151.79\% $ & $ 135.38\% $ & $ 143.08\% $\\
\hline
\multicolumn{4}{|c|}{Novelty Level 2: Interaction} \\
\hline
M1 & $100 \% $ & $100\% $ & $100\% $ \\
M2 & $0\% $ & $0\% $& $0\% $\\
M3 & $ 124.31\% $ & $ 142.77\% $ & $ 121.85\% $\\
M4 & $ 130.46\% $ & $ 134.15\% $ & $ 113.23\% $\\
\hline
\multicolumn{4}{|c|}{Novelty Level 3: Relation} \\
\hline
M1 & $100\% $ & $100\% $ & $80\% $ \\
M2 & $0\% $ & $0\% $ & $0\% $\\
M3 & $ 147.08\% $ & $ 132.31\% $ & $ 150.15\% $\\
M4 & $ 146.46\% $ & $ 121.85\% $ & $ 145.23\% $\\
\hline
\end{tabular}
\caption{External evaluation results for each novelty level and difficulty}
\vspace{-3em}
\end{table}
\vspace{-1.5em}
\end{center}
The results suggest that our cognitive architecture provides strong solutions for the game regardless of the complexity of the environment and the level of novelty. Furthermore, our agent achieved a perfect precision score ($100\%$ CDT and $0\%$ FP) at all difficulty levels of action and interaction novelties, and a nearly perfect precision score on relation novelties. However, the agent missed $20\%$ of the novelties at the hard difficulty level. These detection failures are due to the nature of the relation novelty category: these novelty types can only be detected when a specific action is executed. Due to the stochasticity of Monopoly, the agent would sometimes not perform that specific action throughout the entire evaluation; to identify a relation novelty, the agent may need to perform a particular action in a specific state to reveal the novelty. For example, in the relation property novelty scenario (discussed in Section~\ref{iNovelty}), the novelty only manifests when the agent monopolizes the green color group (one of the most challenging color groups to monopolize due to the cost of each property). The agent may then fail to detect the novelty because it never monopolizes the green color group during testing. The M3 and M4 NRP scores at all novelty levels show that our agent outperformed the baseline agent both before and after novelties were introduced. The scores in \textit{Table 1} indicate that our cognitive architecture and accommodation strategies allow the planning agent to handle interactive novelties effectively.
\subsection{Internal Evaluation}
\subsubsection{Agent Performance Without Novelty Accommodation}
To understand the effectiveness of the novelty handler components (detection, characterization, and accommodation), we conduct experiments on all the novelties and record the win rate of the MCTS agent with and without the support of the novelty handler, using a random number of games for each tournament. Table~\ref{table1} shows the overall performance of the MCTS agent with the novelty handler against the MCTS agent without it. The results suggest that the MCTS agent supported by the novelty handler outperforms the vanilla MCTS agent, with an average win-rate difference of more than $10\%$. Furthermore, the results indicate that the novelty handler components play an essential role in the agent's performance in an adversarial open-world domain. Although some novelties have a substantial effect on the game while others barely affect it (nuisance novelties), the novelty handler mechanism still proves effective in enhancing the agent's performance. For example, a restricted-color novelty can significantly affect the agent's strategies for buying and trading properties, whereas other novelties, such as changes to house selling or property rates, can have minimal effects on the game.
\begin{table}[H]
\begin{tabular}{|p{2cm}||p{2.5cm}|p{2.5cm}|}
\hline
Novelty & Win rate of adaptive MCTS agent & Win rate of non-adaptive MCTS agent\\
\hline
& Mean $\pm$ SD & Mean $\pm$ SD \\
\hline
Action & $83.22\% \pm 5.33\%$ & $76.38\% \pm 6.313\%$ \\
\hline
Relation & $81.86\% \pm 7.26\%$ & $68.45\% \pm 5.164\%$ \\
\hline
Interaction & $89.6\% \pm 9.01\%$ & $72.5\% \pm 7.692\%$ \\
\hline
\end{tabular}
\caption{Evaluation Results Of Agent's Performance With and Without Novelty Handler}
\label{table1}
\end{table}
\subsubsection{Agent Performance Against Existing Methods}
To gauge the performance level of our agent, we compare it against other Monopoly-playing agents. For this experiment, we evaluate our agent against the hybrid deep reinforcement learning agent of \cite{Bonjour2021DecisionMI}, which combines proximal policy optimization (PPO) and double deep Q-learning (DDQN). The authors compared the standard reinforcement learning approach to their hybrid approach, and their experimental results show that the hybrid agents outperform traditional RL agents; notably, the hybrid PPO agent has a win rate of $91\%$ against a fixed-policy agent developed based on the Monopoly world champion's strategy. In our evaluation, we ran two instances of our agent against one of the fixed-policy agents and the hybrid deep reinforcement learning agent. The results, shown in Table~\ref{table2}, demonstrate our agent's dominant performance against the hybrid reinforcement learning approach: our two agent instances have a combined win rate of more than $85\%$ in the tournament, compared to about $12\%$ for the hybrid learning agent.
\begin{table}[H]
\begin{tabular}{|p{1.3cm}||p{1.8cm}|p{1.8cm}|p{1.8cm}|}
\hline
& Our Agent 1 & Our Agent 2 & Hybrid Agent\\
\hline
& Mean $\pm$ SD & Mean $\pm$ SD & Mean $\pm$ SD \\
\hline
Win ratio & $43.65\% \pm 1.42\%$ & $42.43\% \pm 1.33\%$ & $12.13\% \pm 1.51\%$\\
\hline
\end{tabular}
\caption{Evaluation Results Of Agent's Performance Against The Hybrid Deep Reinforcement Learning Agent}
\label{table2}
\end{table}
\section{Conclusion}
Our work presented a new agent architecture for interactive novelty handling in an adversarial environment that can detect, characterize, and accommodate novelties. Our architecture is modeled on the human cognitive process of dealing with environmental changes. First, we use ASP to detect and characterize interactive novelties (action, interaction, and relation). Then, we update the agent's knowledge base with the detected novelties. Finally, we utilize the truncated-rollout MCTS agent to accommodate the novelty. The external evaluation results support the cognitive architecture's effectiveness in handling different levels of interactive novelty.
However, the architecture has potential limitations in novelty characterization and in learning agent behavior. One limitation is its capability to learn the opponents' behaviors: our cognitive architecture does not explicitly model an opponent's strategy in order to detect changes in other agents' behaviors and adapt accordingly. To address this limitation, we propose two additional models that could become part of the novelty handler component. The first approach is to model the opponents' behavior using probabilistic reasoning \cite{PEARL198877, 1988i, JP1988ix}; in these models, we can learn the action probability distribution conditioned on the game state, which helps us detect any change in opponents' behaviors.
Secondly, we would like to model the opponents' behavior using reinforcement learning. Recent applications of reinforcement learning show promising results in learning opponents' behavior without knowing the opponents' observations and actions during either training or execution \cite{DBLP:journals/corr/abs-2001-10829,DBLP:journals/corr/abs-2011-07290}. Ultimately, we believe improving the model's capability to predict other agents' behaviors is the biggest area for growth.
\begin{acks}
This work was funded in part by DARPA grant W911NF-20-2-0006. We would like to thank Mayank Kejriwal, Shilpa Thomas, Min-Hsueh Chiu and other members of the University of Southern California team for the Monopoly simulator and agent evaluation.
\end{acks}
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
\label{Introduction}
Lorentz invariance (LI), a continuous symmetry ensuring that physical results are independent of boosts and rotations, is one of the key characteristics at the heart of modern physics. Conventionally, LI is related to the scale-free nature of spacetime. Experimentally this is well-supported, and there is no reason to believe that LI is broken at the currently accessible energies \cite{Mattingly:2005re, Liberati:2013xla} (for a detailed list of results see \cite{Kostelecky:2008ts}). However, it is expected that the coupling constants in quantum field theories are energy-dependent; namely, unlike at low energy scales, the coupling constants of the Lorentz-breaking terms may become significant at high energy. Searching for Lorentz symmetry breaking (LSB) is one of the hot topics in theoretical physics, since it is strictly related to fundamental issues such as quantum gravity and string theory \cite{Kostelecky:1990pe, Kostelecky:1994rn}. Even though the nature of quantum gravity is not determined, it is well-known that space and time should break down at small distances (around the Planck length), where they can no longer be treated as a classical continuum \cite{Amelino-Camelia:2008aez}. Therefore, there is a pervasive belief that LI cannot be a well-established symmetry at all scales of nature, and that it becomes invalid as one approaches the fundamental scales \cite{Collins:2006bw}. In modern analyses aimed at detecting experimental deviations from LI, a phenomenological framework known as the Standard-Model Extension (SME) is utilized, in which LI-violating effects are introduced by spontaneous symmetry breaking \cite{Colladay:1998fq}. The SME is an effective field theory for the development of low-energy experiments, containing all possible LI- and Charge-Parity-Time (CPT)-violating coefficients\footnote{Due to the deep connection of LI with the CPT theorem, if the former is violated, the latter is challenged, too.}, and it includes not only Special Relativity but the Standard Model and General Relativity as well \cite{Colladay:1996iz}. Overall, the presence of both Nambu-Goldstone and massive Higgs bosons in theories with spontaneous LSB provides us with rich scenarios exhibiting multiple phenomenological effects \cite{Bluhm:2004ep, Kostelecky:2005ic}. In this respect, by implementing spontaneous LSB in a curved space-time via background vector fields, it became possible to construct some well-known alternatives to General Relativity, such as the Einstein-Aether theory \cite{Jacobson:2000xp} and the Bumblebee Gravity (BG) model\footnote{An advantage of the BG model, compared to the Einstein-Aether theory, is that it does not share the difficulties of Einstein-Aether in the perturbative description \cite{Petrov:2020wgy}.} \cite{Bluhm:2004ep,Bertolami:2005bh}.
Concerning BG, in which we are interested here, the LSB mechanism occurs owing to the nonzero vacuum condensation of a vector field (the bumblebee field), indicating a preferred frame \cite{Kostelecky:2003fs, Bluhm:2007xzd}. The most peculiar feature of the BG model, which has attracted a lot of attention, is that it has no local $U(1)$ gauge symmetry, yet the propagation of massless vector modes is still allowed \cite{Bluhm:2008yt}. Indeed, in the BG model the massless photon modes arise as Nambu-Goldstone bosons when spontaneous LSB occurs. Bumblebee vector fields can be a source of cosmological anisotropies, since they generate a preferred axis \cite{Liang:2022hxd}. This means, in turn, that a fraction of the anisotropies observed in the universe can be ascribed to LI violation \cite{Maluf:2021lwh}. In this regard, tight constraints on the Lorentz-violating bumblebee vector field have recently been derived by implementing big bang nucleosynthesis and gravitational baryogenesis in the early Universe within BG-based cosmology \cite{Khodadi:2022mzt}.
Ref. \cite{Capelo:2015ipa} shows that spontaneous LSB arising from the background bumblebee field can address the dark energy issue. Unlike the general SME, which suffers from the lack of an exact gravitational solution, in the BG model it is possible to find a Schwarzschild-like solution \cite{Casana:2017jkc} (see also the spherically symmetric solution in the recent Ref. \cite{Filho:2022yrk}, which shows the effect of the LSB on both the temporal and radial components), as well as a Kerr-like black hole \cite{Ding:2019mal} (traversable wormhole solutions have been found in \cite{Ovgun:2018xys}).
The Kerr-like black hole solution derived in \cite{Ding:2019mal} was re-evaluated in \cite{Maluf:2022knd}. The conclusion is that, at present, there is as yet no full rotating black hole solution for the Einstein-bumblebee theory. A slowly rotating metric has been derived in \cite{Ding:2020kfr}, whose validity is confirmed in \cite{Maluf:2022knd}, too.
In these solutions, the LSB arises from a nonzero vacuum expectation value (VEV) of the bumblebee vector field coupled to the space-time curvature. These solutions allow one to study various aspects of the role of spontaneous LSB in the physics of compact objects, such as black holes and wormholes (e.g., see Refs. \cite{Gomes:2018oyd,Oliveira:2018oha, Liu:2019mls, Maluf:2020kgf, KumarJha:2020ivj, Jha:2020pvk, Khodadi:2021owg, Jha:2021eww, Jiang:2021whw, Wang:2021gtd, Khodadi:2022dff, Gu:2022grg}). It is worth mentioning that the study of causality in Lorentz-violating models is a relevant topic: in Refs. \cite{Santos:2014nxm, Jesus:2020lsv}, the authors have studied the Gödel-type solution of the BG model. By confronting this model with astrophysical bodies, such as stars, tight constraints on the LSB parameter were derived \cite{Paramos:2014mda}.
The aim of this paper is to study the modification induced by BG on the energy deposition rate (EDR) of neutrinos emitted from a thin accretion disk~\cite{Asano:2000dq,Asano:2000ib}. More precisely, if the disk is hot enough and the accretion rate of the black hole obeys the condition $\dot{M}\sim (0.1-1) M_{\odot}$ s$^{-1}$, then the disk is a source of neutrinos ($\nu$) and anti-neutrinos ($\bar{\nu}$), which partially annihilate above the disk and convert into electrons ($e^{-}$) and positrons ($e^{+}$) as follows \cite{Popham:1998ab, Birkl:2006mu, Chen:2006rra, Zalamea:2010ax}
\begin{equation}\label{nunuee}
\nu{\bar \nu}\to e^+ e^- \,.
\end{equation}
This process has significant consequences for cosmological gamma-ray burst (GRB) jets, which are the most luminous objects in the universe. Accretion disks around black holes are the favourite candidates for the central engine of GRBs. Such configurations are formed by the merging of compact objects (neutron star-neutron star, black hole-neutron star) or by supernovae.
The hot accretion disk in these systems is the source of neutrinos/antineutrinos, and their annihilation into electrons/positrons may power GRBs.
However, in order for the relativistic fireball to produce GRBs, it must contain an extremely small baryon density. The baryon density above the accretion disk is low near the rotation axis, so neutrino pair annihilation there can produce a clean fireball; this alleviates the baryon contamination problem, which hinders the creation of relativistic shocks and the emission of gamma-rays (see \cite{Asano:2000dq} and references therein). More precisely, the jets produced via neutrino annihilation are, in essence, cones relatively free of baryons. Note, finally, that relativistic effects are important in the emission of neutrinos from the hot accretion disk and in the pair annihilation: essentially, the gravitational redshift, the bending of the neutrino trajectories, and the redshift due to rotation. The EDR is affected by these effects when the Kerr geometry is modified by LI-violating corrections, as we will see in what follows.
GRBs are, in essence, powerful cosmic explosions with characteristic durations of seconds. They are commonly divided into two classes, short and long GRBs, with timescales of $\sim 1$ s and $(1-100)$ s, respectively \cite{Woosley:1993wj,Galama:1998ea,Stanek:2003tw}. Despite the uncertainty about the origin of these two categories, the evidence suggests that the former most likely originate from the merger of compact binaries, such as a double neutron star and/or a binary system including a neutron star and a black hole \cite{Eichler:1989ve,Narayan:1992iy,Nakar:2007yr}, while the collapse of the core of massive (Wolf-Rayet) stars can be the origin of the latter \cite{Woosley:1993wj}. Based on the observed gamma-ray luminosity $L$, the luminosity of long-duration GRBs should not exceed $L\sim 10^{53}$ erg/s (more accurately, $10^{52-53}$ erg/s) \cite{Bloom:2003eq}. Overall, the order of magnitude of the luminosity of short and long GRBs is expected to be the same \cite{Leng:2014dfa}.
%
In recent years, however, a new population of ultra-long GRBs has been investigated, with durations $\sim 10^{3-4}$ s and correspondingly lower luminosities, $\sim 10^{49-50}$ erg/s (e.g., see Refs. \cite{Thone:2011yf,Gendre:2012wj,Levan:2013gcz}).
The relation between the maximum energy of a neutrino-powered jet and the burst duration shows that the energy deposition falls off rapidly as the burst lasts longer \cite{Leng:2014dfa}.
Observations indicate that the process (\ref{nunuee}), through the creation of relativistic $e^{\mp}$-dominated jets, is a possible candidate to explain GRBs observed from galaxies containing a supermassive black hole at their center, e.g., see \cite{Zalamea:2010ax}. In \cite{Co86,Co87,Goodman:1986we,Eichler:1989ve,1993AcA....43..183J} it has been shown that the process (\ref{nunuee}) can deposit an energy $\gtrsim 10^{51}$ erg above the neutrino-sphere of a type II supernova \cite{Goodman:1986we}. In \cite{Salmonson:1999es, Salmonson:2001tz} it has been shown that, taking into account the effects of the strong gravitational field in a Schwarzschild space-time, the efficiency of the process (\ref{nunuee}) for collapsing neutron stars is enhanced (by up to a factor of $30$) compared to the Newtonian case.
The same analysis, around a thin and isothermal accretion disk in a Schwarzschild or Kerr metric, was performed in \cite{Asano:2000ib,Asano:2000dq}.
%
%
The neutrino annihilation luminosity from the disk has also been calculated; see, e.g., \cite{Mallick:2008iv,Mallick:2009nvq,Chan:2009mw,Kovacs:2009dv,Kovacs:2010zp,Zalamea:2008dq,Harikae:2010yt}.
Time-dependent models of black-hole accretion disks, such as remnants of neutron-star mergers or collapse engines, have been investigated, for example, in Refs. \cite{Harikae:2010yt,Ruffert:1998qg,Popham:1998ab,DiMatteo:2002iex,Fujibayashi:2017xsz,Just:2015dba,Foucart:2018gis,Foucart:2020qjb}.
The principal outcome of these studies is that the neutrino-pair annihilation process, when analyzed in a curved background described by General Relativity, is not efficient enough to power GRBs. There is another scenario for extracting energy from the disk or the black hole to launch GRB jets, namely the well-known magnetohydrodynamical one (e.g., \cite{Katz:1997bh,Meszaros:1996ww}). Within this scenario, the Blandford-Znajek process is a more promising mechanism for launching jets. However, the main open issue of this energy extraction model is whether the magnetic flux arising from the collapse of the star is sufficient to power the jet, which has not yet been proven \cite{Komissarov:2009dn}. Note that throughout this paper we will only address the neutrino effects, without considering the contribution of other potential forms of energy deposition, such as Blandford-Znajek and magnetic reconnection.
The process (\ref{nunuee}), with neutrinos and antineutrinos emitted from the surface of a neutron star, has also been investigated in the framework of extended theories of gravity \cite{Lambiase:2020iul, Lambiase:2020pkc, Lambiase:2022ywp}.
Accepting the idea that the environment around a black hole is potentially a cleaner place for the launch of a relativistic jet, we consider a BG-based rotating black hole with broken LI, surrounded by a thin accretion disk from which neutrinos are emitted \cite{Asano:2000dq}. Inspired by Refs. \cite{Asano:2000dq, Asano:2000ib,10.1143/PTPS.136.235}, we assume an idealized, semi-analytical, stationary-state model, independent of the details of the disk formation. Note that self-gravitational effects are not taken into account, and the disk is described by an inner and an outer edge.
The plan of our work is as follows. In Sec.~\ref{BMOD} we overview the modified slowly rotating Kerr black hole solution addressed by the BG model.
In Sec.~\ref{Formulation} we present the model used for computing the energy deposition from the thin disk. In Sec.~\ref{Results} we characterize the effects of the theories beyond General Relativity on the EDR of neutrino pair annihilation near the rotational axis of the gravitational source. We shall consider two profiles for the disk temperature: $T=const$ and $T\propto r^{-1}$.
Finally, in Sec.~\ref{Conclusion} we summarize our results.
\begin{table*}[t]
\begin{tabular}{|c|c|c|}
\hline
Allowed range of $l$ and $a$ & Scenario& Ref. \\
\hline
$(-1,0.6]$ for $a \in [0.5,1)$& BG with Kerr-Sen-like solution in light of the Event Horizon Telescope ($M87^*$) \footnote{It is essential to note that the shadow of the BG Kerr-like black hole solution introduced in \cite{Ding:2019mal} is not distinguishable from its standard counterpart; see \cite{Vagnozzi:2022moj} for more details.} & \cite{Jha:2021eww} \\ \hline
$[-0.23,0.06]$ for $a \in [0.28,0.31]$& BG with Kerr-like solution in light of quasi-periodic oscillations & \cite{Wang:2021gtd} \\
\hline
$[-0.56,6.5]$ for $a \in [0.32,0.81]$& - & \cite{Wang:2021gtd} \\
\hline
$[-0.7,10.8]$ for $a \in [0,4]$& - & \cite{Wang:2021gtd} \\
\hline
\end{tabular}
\caption{The allowed ranges of $l$ obtained by confronting BG black holes with observational data. The constraints reported in the third to fifth rows come, respectively, from three observational data sets: GRO J1655-40 \cite{Motta:2013wga}, XTE J1550-564 \cite{Orosz:2011ki}, and GRS 1915+105 \cite{Reid:2014ywa}, within the $1\sigma$ level.}
\label{tab:rangel}
\end{table*}
\section{Modified slowly rotating Kerr black hole solution with a background bumblebee field}
\label{BMOD}
In this Section we briefly review the slowly rotating Kerr black hole solution obtained from the nonminimal coupling of the background bumblebee field to gravity. In this setting, the spontaneous LSB occurs in a curved space-time, and the metric tensor must couple to the vector field. This leads to the bumblebee action \cite{Bluhm:2004ep,Bertolami:2005bh} (in units $c=G_N=1$)
\begin{eqnarray}\label{BAction}
S &=&\int d^{4}x\sqrt{-g}\bigg( \frac{1}{16\pi}\left(R+\xi B^{\mu}B^{\nu}R_{\mu\nu}\right) \\
& & \qquad
-\frac{1}{4}B^{\mu\nu}B_{\mu\nu}-V\left(
B^{\mu}\right) \bigg)\,.
\nonumber
\end{eqnarray}
Here $B_\mu$ is the bumblebee vector field with mass dimension $M$,
$B_{\mu\nu}=\partial_{\mu}B_{\nu}-\partial_{\nu}B_{\mu}$ the bumblebee field strength with mass dimension $M^{2}$, $\xi$ the nonminimal coupling constant between
the background bumblebee field and gravity with mass dimension $M^{-2}$, and finally, $V$ the potential, which depends on the bumblebee field only through the combination
\[
V\left(B^{\mu}\right)=V\!\left(B_{\mu}B^{\mu}\pm b^{2}\right)\,,
\]
and is minimized when its argument vanishes. The potential is thus such that $B_\mu$ may acquire a nonzero VEV, $\langle B^{\mu}\rangle=b^{\mu}$ with $b_{\mu}b^{\mu}=\mp b^{2}$ (spontaneous LSB in the gravitational sector \cite{Kostelecky:2003fs, Bluhm:2004ep}).
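A commonly adopted smooth choice, quoted here only as an illustration (the analysis below does not depend on the specific form), is the quadratic potential
\[
V=\frac{\lambda}{2}\left(B_{\mu}B^{\mu}\pm b^{2}\right)^{2}\,,
\]
with $\lambda$ a positive self-coupling constant: it vanishes at its minimum $B_{\mu}B^{\mu}=\mp b^{2}$, so the field relaxes to the LI-violating vacuum.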
The slowly rotating black hole metric derived from the BG in the Boyer-Lindquist coordinates $x^\mu=(t,r,\theta,\phi)$ is given by (see Ref. \cite{Ding:2020kfr} for details)
\begin{eqnarray}
ds^{2}&=&-\left( 1-\frac{2M}{r}\right) dt^{2}-\frac{4Ma\sin^{2} \theta }{r}dtd\phi \nonumber\\
&&+\frac{r^{2}}{\tilde{\Delta}}dr^{2}
+r^{2}d\theta^{2}+r^2\sin^{2} \theta d\phi^{2}, \label{metricBmod}
\end{eqnarray}
where
\begin{eqnarray}
\tilde{\Delta} &=& \frac{r^{2}-2Mr}{l+1}\,,\qquad l\neq-1\,. \label{Deltametr}
\end{eqnarray}
The metric deviates softly from that of the standard slowly rotating Kerr black hole through the appearance of the dimensionless LSB parameter $l=\xi b^{2}$, so that the final form of the metric tensor reads
\begin{equation}\label{metricg}
g_{\mu\nu}=\left(
\begin{array}[c]{cccc}
-\left( 1-\frac{2M}{r}\right) & 0 & 0 & -\frac{2Ma \sin^{2}\theta}{r} \\
0 & \frac{r^{2}}{\tilde{\Delta}} & 0 & 0 \\
0 & 0 & r^{2} & 0 \\
-\frac{2Ma\sin^{2}\theta}{r} & 0 & 0 & r^2\sin^{2} \theta
\end{array}\right)\,.
\end{equation}
As is clear, the parameter $l$ leaves a distinguishable imprint on the metric via $\tilde{\Delta}$, so that the underlying metric differs from the standard slowly rotating Kerr metric. In other words, the BG-based metric at hand differs softly from the standard slowly rotating Kerr metric by a factor $(l+1)$ in the $g_{11}$ component, i.e., $g_{11}=\frac{l+1}{1-2M/r}$.
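As a quick consistency check, which follows in one line from Eqs. (\ref{metricBmod}) and (\ref{Deltametr}), the coordinate position of the horizon is insensitive to $l$:
\[
g^{11}=\frac{\tilde{\Delta}}{r^{2}}=\frac{r-2M}{(l+1)\,r}=0
\quad\Longrightarrow\quad r_{h}=2M\,,\qquad \forall\, l\neq-1\,,
\]
to the slow-rotation order considered here. The LSB parameter therefore rescales the radial sector without shifting the horizon, and its imprint on the observables below enters through $g_{11}$ and the bending of the neutrino trajectories.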
Since the LSB parameter $l$ plays a constructive role in the results of our analysis, it is worthwhile to discuss it in some detail. In essence, the sign of $l$ is inherited from the sign of the nonminimal coupling constant $\xi$ between the background bumblebee vector field and gravity. Since there is no consensus in the literature, we deal with both signs, negative \cite{Wang:2021gtd} and positive \cite{Paramos:2014mda, Casana:2017jkc}. Given that BG is a subclass of the SME, it has been shown via a Parameterized Post-Newtonian (PPN) analysis that the spacelike background bumblebee vector field $b_{\mu}$ is matched to the dimensionless tensor $s^{\mu\nu}$ of the SME; see Ref. \cite{Bailey:2006fd} for more details. Namely, the LSB parameter $l$ can be limited via the constraints imposed on the Lorentz-violating coefficients of the SME, from which most of the upper bounds on the different combinations of $s^{\mu\nu}$ have been extracted (see the review paper \cite{Hees:2016lyw}). The recent paper \cite{Khodadi:2022pqh} contains a summarized list of the most important physical frameworks used to derive upper bounds on $s^{\mu\nu}$. Moreover, stringent constraints on $l$ have been directly inferred in Refs. \cite{Paramos:2014mda, Casana:2017jkc} by considering BG in astrophysical settings and in some classic tests. In this regard, more recent works such as \cite{Maluf:2021lwh,Khodadi:2022mzt} are also recommended.
The common feature of all these constraints, whether obtained directly or through the connection with the Lorentz-violating coefficients of the SME, is that they have been derived in weak-field regimes with gravitational redshift $\ll 1$. However, Lorentz violation effects appear at fundamental scales, as pointed out in the Introduction. The environment around compact objects such as black holes no longer belongs to the weak-field regime, since its redshift is $\gtrsim 1$. So it is reasonable to expect that, as the redshift of the gravitational field under investigation increases (as around black holes), the current constraints derived in the weak-gravity regime may change \cite{Psaltis:2008bb}. In light of these points, one can safely relax the above-mentioned constraints on $l$ for the framework at hand. This also applies to the frameworks reported in Table~\ref{tab:rangel}, where some BG black hole scenarios are directly compared with observational data.
%
%
It is also worth stressing that Ref. \cite{Gu:2022grg} recently used the blurred reflection features in the X-ray spectra of the galactic black hole EXO 1846-031 to constrain $l$. Although this attempt was unsuccessful due to the degeneracy between the rotation parameter of the black hole and the LSB parameter, it is expected that this problem can be fixed by combining other observations in future analyses. An important point about the above-mentioned constraints on $l$ in the BG framework is that they are based on the Kerr-like solution \cite{Ding:2019mal}, which is only valid in the slowly rotating limit ($a^2\ll1$) \cite{Maluf:2022knd,Ding:2020kfr}. This means that these constraints cannot be considered reliable beyond the slow-rotation approximation.
\section{Energy deposition rate from $\nu{\bar \nu}$ annihilation}
\label{Formulation}
Let us consider a black hole with a thin accretion disk around it that emits neutrinos \cite{Asano:2000dq}. We will confine ourselves to the case of an idealized, semi-analytical, stationary state model, which is independent of details regarding the disk formation. The disk is described by an inner and outer edge, with corresponding radii defined by $R_{\mathrm{in}}$ and $R_{\mathrm{out}}$, respectively. Self-gravitational effects are neglected. We consider the generic metric
\begin{equation}
g_{\mu\nu}= \left(\begin{matrix} g_{00}& 0& 0& g_{03}\\ 0& g_{11}& 0& 0\\ 0& 0& g_{22}& 0\\ g_{03}& 0& 0& g_{33}
\end{matrix}\right) \,.
\end{equation}
The Hamiltonian of a test particle reads
\begin{equation}
2\mathcal{H}=-E\dot{t}+L\dot{\phi}+g_{11}\dot{r}^2=\delta_1 \,,
\end{equation}
where $\delta_1=0, 1$ refers to null geodesics and massive particles, respectively, while $E$ and $L$ are the energy and the angular momentum of the test particle moving around the rotational axis of the black hole. The non-vanishing components of the 4-velocity are \cite{Prasanna:2001ie}
\begin{align}
U^{3}&=\dot{\phi}=E\left(\frac{L}{E}+\frac{1}{2}\frac{g_{03}}{g_{00}}\right)\left(g_{33}-\frac{1}{2}\frac{g_{03}^2}{g_{00}}\right)^{-1} \,, \nonumber \\
U^0&=\dot{t}=-\frac{E}{g_{00}}
\left[1+\frac{g_{03}}{2}
\left(\frac{L}{E}+
\frac{g_{03}}{2g_{00}}\right)
\left(g_{33}-\frac{g_{03}^2}{2g_{00}}\right)^{-1}\right] \,, \nonumber \\
\dot{r}^2&=
\frac{E\dot{t}-L\dot{\phi}}
{g_{11}} \,. \nonumber
\end{align}
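For the null geodesics relevant to neutrinos ($\delta_1=0$), the last relation follows in one line from the Hamiltonian constraint: $-E\dot{t}+L\dot{\phi}+g_{11}\dot{r}^2=0$ gives $g_{11}\dot{r}^{2}=E\dot{t}-L\dot{\phi}$.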
We are interested in the energy deposition rate near the rotational axis, at $\theta=0^{\circ}$. We use the value $\theta=0^{\circ}$ for evaluating the energy emitted in a half cone of $\Delta \theta\sim10^{\circ}$. The accretion disk extends from $R_{\mathrm{in}}=2R_{\mathrm{ph}}$ to $R_{\mathrm{out}}=30M$, with $R_{\mathrm{ph}}$ the photosphere radius. Moreover, it can be shown that the following relation for the impact parameter holds~\cite{Asano:2000dq}
\begin{equation}
\rho_\nu=\sqrt{g_{00}(r_0,0)g_{22}(r_0,0)} \,,
\label{rho}
\end{equation}
with $r_0$ the point of closest approach to the centre before the particle arrives at $\theta=0$. Finally, from the metric (\ref{metricg}), the equation of the trajectory becomes \cite{Asano:2000dq}
\begin{equation}
\int\frac{d\theta}{\sqrt{1-(\tilde{a}/\rho_\nu)^2\sin^2\theta}}=
\int \frac{dr'}{\sqrt{\frac{g_{22}^2(r',0)}{\rho^2_{\nu}}-\frac{g_{22}(r',0)}{g_{11}(r',0)}}}\,. \nonumber
\end{equation}
In this relation one takes into account that the neutrinos are emitted from the position $(R,\pi/2)$, with $R\in [R_{\mathrm{in}},R_{\mathrm{out}}]$, and arrive at $(r,0)$. The energy deposition rate of neutrino pair annihilation is given by \cite{Asano:2000dq}
\begin{equation}
\frac{dE_0(r)}{dtdV}=\frac{21\pi^4}{4}\zeta(5)KG_F^2T^9_{\mathrm{eff}}(2R_{\mathrm{ph}})F(r, T_0) \,,
\label{trajectory}
\end{equation}
where $G_F$ is the Fermi constant, $k$ is the Boltzmann constant, and $T_{\mathrm{eff}}(2R_{\mathrm{ph}})$ is the effective temperature at radius $2R_{\mathrm{ph}}$ (the temperature observed in the comoving frame),
\begin{equation}
K=\frac{1\pm 4\sin^2\theta_W+8\sin^4\theta_W}{6\pi} \,,
\end{equation}
with the $+$ sign for $\nu_e$ and the $-$ sign for $\nu_{\mu/\tau}$, and $\theta_W$ the Weinberg angle. The function $F(r, T_0)$ is reported in the Appendix (Eq. \ref{F(r)}).
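For orientation, inserting the measured value of the weak mixing angle, $\sin^2\theta_W\simeq0.23$ (an input taken from particle physics data, not derived here), gives
\[
K_{\nu_e}=\frac{1+4(0.23)+8(0.23)^{2}}{6\pi}\simeq0.124\,,\qquad
K_{\nu_{\mu/\tau}}=\frac{1-4(0.23)+8(0.23)^{2}}{6\pi}\simeq0.027\,,
\]
so the electron-neutrino channel dominates the deposition rate by a factor of about $4.7$.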
%
%
%
Here $T_0$ is the temperature observed at infinity:
\begin{align}
&T_0(R)=\frac{T_{\rm{eff}}\left(R,\frac{\pi}{2}\right)}{\gamma}\sqrt{g_{00}\left(R,\frac{\pi}{2}\right)-\frac{g_{03}^2(R,\frac{\pi}{2})}{g_{33}(R,\frac{\pi}{2})}} \,\ , \\
&\gamma=\frac{1}{\sqrt{1-v^2/c^2}} \,\ , \\
&\frac{v^2}{c^2}=\frac{g_{33}^2(r,\pi/2)\left(\Omega_K-\omega\right)^2}{g_{03}^2(r,\pi/2)-g_{00}(r,\pi/2)g_{33}(r,\pi/2)} \,\ , \\
&\Omega_K=\frac{-g_{03,r}+\sqrt{(g_{03,r})^2-g_{00,r}g_{33,r}}}{g_{33,r}}\Big|_{(r,\pi/2)} \,\ , \\
&\omega=-\frac{g_{03}(r,\pi/2)}{g_{33}(r,\pi/2)} \,\ .
\end{align}
where $T_{\mathrm{eff}}$ is the effective temperature measured by a local observer, and all the quantities are evaluated at $\theta=\pi/2$. In our treatment we ignore the reabsorption of the deposited energy by the black hole, and we consider both the case of an isothermal disk, that is
\[
T_{\mathrm{eff}}=const\,,
\]
and the case of a temperature gradient \cite{Asano:2000dq}, for which $T_{\mathrm{eff}}$, in the simplest acceptable model, is given by (for details, see \cite{10.1143/PTPS.136.235})
\begin{equation}\label{tdippr}
T_{\mathrm{eff}}(r)\propto \frac{2R_{\mathrm{ph}}}{r} \,.
\end{equation}
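For concreteness, the redshift factor relating $T_{\mathrm{eff}}$ to $T_0$ can be evaluated numerically. The snippet below is a minimal sketch (the function names and the sample values of $M$, $a$, and $l$ are our own illustrative choices): it implements the equatorial components of the metric (\ref{metricg}) and the chain $\Omega_K \to \omega \to \gamma \to T_0/T_{\mathrm{eff}}$; with the $(-,+,+,+)$ signature of Eq.~(\ref{metricg}), the argument of the square root in $T_0$ is taken with the overall sign that keeps it positive outside the horizon.
\begin{verbatim}
import numpy as np

M, a, l = 1.0, 0.3, 0.3   # geometrized units (G = c = 1); sample values

# Equatorial (theta = pi/2) components of the BG metric, Eq. (metricg)
def g00(r): return -(1.0 - 2.0 * M / r)
def g03(r): return -2.0 * M * a / r
def g33(r): return r ** 2
def g11(r): return (l + 1.0) / (1.0 - 2.0 * M / r)  # carries the (1+l) factor

def d_dr(f, r, h=1.0e-6):
    # central finite difference for the radial derivatives g_{mu nu, r}
    return (f(r + h) - f(r - h)) / (2.0 * h)

def T0_over_Teff(r):
    # Keplerian angular velocity Omega_K and frame-dragging omega
    OmK = ((-d_dr(g03, r)
            + np.sqrt(d_dr(g03, r) ** 2 - d_dr(g00, r) * d_dr(g33, r)))
           / d_dr(g33, r))
    om = -g03(r) / g33(r)
    v2 = g33(r) ** 2 * (OmK - om) ** 2 / (g03(r) ** 2 - g00(r) * g33(r))
    gamma = 1.0 / np.sqrt(1.0 - v2)
    # magnitude of g00 - g03^2/g33 (negative with this signature)
    return np.sqrt(-(g00(r) - g03(r) ** 2 / g33(r))) / gamma

print(T0_over_Teff(8.0))   # ~0.80 at r = 8M for these sample values
\end{verbatim}
Note that $l$ enters only through $g_{11}$, i.e., through the trajectory equation and $F(r,T_0)$, while the redshift factor itself is $l$-independent in this metric.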
\section{Applications to the Bumblebee metric}
\label{Results}
In this section, we calculate the emitted energy with the procedure shown in Sec.~\ref{Formulation} for the bumblebee metric in Eq.~(\ref{metricg}). We analyze two different cases, corresponding to the isothermal model and the temperature gradient model. We find an enhancement of the EDR for positive values of the LSB parameter $l$ ($l>0$) entering Eq. (\ref{metricg}).
\subsection{Isothermal model}
For our analysis, it is convenient to define the function
\begin{equation}\label{G(r)Gae}
G(r)=F(r, T_0)\frac{r^2+a^2}{4M^2} \,,
\end{equation}
where $F(r, T_0)$ is given in Eq. (\ref{F(r)}).
The function $G(r)$ is essential for evaluating the EDR and, therefore, the energy available for a GRB explosion.
The EDR is estimated for the infinitesimal angle $d\theta$, considering a characteristic angle $\Delta \theta \simeq 10^{\circ}$ and a temperature $T_{\rm{eff}}= 10~\mathrm{MeV}$. The explicit EDR expression is given by \cite{Asano:2000dq}
\begin{widetext}
\begin{equation}
\frac{dE_0}{dt}\simeq 4.41\times 10^{48}\left(\frac{\Delta \theta}{10^\circ}\right)^2\left(\frac{T_\mathrm{eff}(R_{\mathrm{in}})}{10~\mathrm{MeV}}\right)^9\left(\frac{2M}{10~\mathrm{km}}\right)\int_{R_{\mathrm{in}}}^{R_{\mathrm{out}}}\frac{G(r)}{2M}dr~\mathrm{erg~s^{-1}} \,\ .
\label{value}
\end{equation}
\end{widetext}
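For orientation, Eq.~(\ref{value}) can be wrapped in a short routine; the sketch below is illustrative only, and the value assigned to the dimensionless integral $\int G(r)\,dr/2M$ is a placeholder rather than an outcome of our computation.
\begin{verbatim}
def edr_erg_per_s(T_eff_MeV, two_M_km, G_integral, dtheta_deg=10.0):
    # Eq. (value): prefactor times opening-angle, temperature
    # and mass scalings, times the dimensionless G(r) integral
    return (4.41e48 * (dtheta_deg / 10.0) ** 2
            * (T_eff_MeV / 10.0) ** 9
            * (two_M_km / 10.0) * G_integral)

# placeholder integral, chosen only to land in the short/long-GRB range:
print(edr_erg_per_s(10.0, 10.0, G_integral=1.0e4))   # ~4.4e52 erg/s
\end{verbatim}
The steep ninth-power dependence on $T_{\mathrm{eff}}$ dominates the uncertainty budget, while the metric enters through the dimensionless integral of $G(r)$.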
\begin{figure}[t]
\centering
\includegraphics[scale=0.7]{TC.pdf}
\includegraphics[scale=0.7]{avtc0.pdf}
\caption{Plot of $G(r)$ against $r/M$ for the isothermal disk. \textbf{Upper graph:} different values of $l$ with the spin parameter fixed at $a=0.3$. \textbf{Lower graph:} different values of the spin parameter $a$ for $l<0$ and $l>0$.}
\label{Gr}
\end{figure}
Before going into the analysis of EDR, a comment is in order.
The function $F(r,T_0)$ defined in Eq. (\ref{F(r)}) is proportional to $g_{11}^4$. From (\ref{metricg}) it follows that $g_{11}\propto 1/\tilde{\Delta}$ and $\tilde{\Delta}\propto 1/(1+l)$, resulting in $g_{11}\propto (1+l)$. Therefore, an enhancement factor $(1+l)^4$ appears in the computation of the EDR. Moreover, the LSB parameter $l$ also contributes, via the modifications of the metric (\ref{metricg}), to the trajectory equation, leading to important changes in the neutrino angular momentum at the rotational axis.
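To make the size of this factor concrete (leaving aside the trajectory-induced corrections, which it does not capture):
\[
(1+l)^{4}\big|_{l=0.3}\simeq 2.86\,,\qquad (1+l)^{4}\big|_{l=-0.3}\simeq 0.24\,,
\]
so a moderate positive LSB parameter already boosts the local deposition rate by a factor of a few, while a negative one suppresses it by a comparable amount.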
In Fig.~\ref{Gr} we show the behavior of $G(r)$ in the BG metric and in its standard counterpart. In the upper graph we plot curves with $l<0$, $l>0$ and $l=0$, fixing the spin parameter to $a=0.3$. As a common feature, $G(r)$ initially increases with the distance, reaches a maximum, and then decreases, due to the interplay between temperature and redshift effects. However, the presence of a positive LSB parameter increases $G(r)$ compared to $l=0$, while for $l<0$ the function $G(r)$ decreases. In the lower graph of Fig.~\ref{Gr} we investigate the role of the slowly rotating parameter $a$ in the behavior of $G(r)$ (and subsequently of the EDR) in the interplay with $l<0$ and $l>0$. As one can see, the spin parameter $a$ does not appreciably affect the curves for either negative or positive values of the LSB parameter $l$. This means that, for the isothermal disk model in an LSB-based metric such as BG, the rotation of the black hole has no effective role in the EDR.
This can be seen more clearly in the contour plot of the parameter space $l$-$a$ in Fig.~\ref{Con1}. As is evident, the EDR$^{BG}$ increases when moving from $l<0$ to $l>0$, independently of the value of the spin parameter. Indeed, the main contribution to the increase of the EDR, which makes the process more efficient for powering GRB jets compared to the standard case, comes from a positive LSB parameter $l>0$ embedded in the background. In other words, the rotational energy has no role in sourcing the energy of GRBs.
The parameter scan performed over $l$-$a$ within the ranges $0\leq a\leq0.3$ and $-0.5<l\leq0.5$ clearly shows that the model with $T=const$ can successfully describe, for the BG-based black hole, the observed gamma-ray luminosity associated with short and long GRBs ($\sim10^{52-53}$ erg/s) if the LSB parameter falls in the range $-0.1<l\leq0.3$ (see Fig.~\ref{Con1}). This result is independent of the value of the spin parameter of the black hole. In general, in the model with $T=const$, in order not to produce GRBs with energy higher than the observed one (for the short and long cases), one has to set $l \leq0.3$.
\begin{figure}[t]
\centering
\includegraphics[scale=0.8]{ContourEn.pdf}
\caption{Contour plot of the parameter space $l$-$a$ showing the EDR$^{BG}$ for the BG-based slowly rotating Kerr black hole in the isothermal disk model. The color scale is logarithmic, $10^{n}$, with $n$ ranging from below $52$ to beyond $53$.}
\label{Con1}
\end{figure}
\subsection{Temperature gradient model}
\label{Bumblebee - Temperature gradient}
In the case of a temperature gradient, the function $G(r)$ is again calculated by using Eq.~(\ref{F(r)}), taking into account that the temperature varies with $r$ ($T_{\rm eff}\propto r^{-1}$), as well as with $\theta_{\nu(\bar{\nu})}$. In the upper graph of Fig.~\ref{Gr-Tvar} we show $G(r)$ vs $r/M$ for different values of the LSB parameter $l$ and for a given value of the slowly rotating parameter of the black hole. Similarly to the isothermal disk model, moving from $l<0$ to $l>0$ yields an evident enhancement of the EDR induced by the bumblebee metric as compared to General Relativity ($l=0$).
Compared to the upper graph in Fig.~\ref{Gr}, one finds that the total energy deposited is less than that expected from the isothermal model. In the lower graph of Fig.~\ref{Gr-Tvar} we probe, case by case, the role of the spin. Unlike in the former model, the spin parameter $a$ has a mild effect on the behavior of $G(r)$ for both $l<0$ and $l>0$: the EDR for the largest value of $a$ compatible with the condition $a^2\ll1$ is slightly larger than for lower values.
%
\begin{figure}[t]
\centering
\includegraphics[scale=0.7]{TV.pdf}
\includegraphics[scale=0.7]{avtr.pdf}
\caption{Plot of $G(r)$ against $r/M$ for a disk with temperature $T\propto 2R_{\mathrm{ph}}/r$. \textbf{Upper graph:} different values of $l$ with the spin parameter fixed at $a=0.5$. \textbf{Lower graph:} different values of the spin parameter $a$ for $l<0$ and $l>0$.}
\label{Gr-Tvar}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.8]{Contour_Tvar_En.pdf}
\caption{The same as the contour plot in Fig.~\ref{Con1}, but for the temperature gradient disk model; here the color scale is logarithmic, $10^{n}$, with $n$ ranging from below $49$ to beyond $50$.}
\label{Con2}
\end{figure}
%
%
This mild dependence on the spin parameter $a$ can be traced in the contour plot showing the EDR$^{BG}$ for the temperature gradient model (Fig.~\ref{Con2}). As in the case of an isothermal disk, here we also see that $l>0$ induces an enhancement of the EDR. Differently from the case $T=const$, and ignoring the mild dependence on $a$, it is clear that neutrino annihilation in the environment of the BG-based Kerr black hole with the accretion disk profile $T\propto r^{-1}$ can only explain ultra-long GRBs ($\sim10^{49-50}$ erg/s), provided $-0.4\leq l\leq0.1$ (see Fig.~\ref{Con2}).
As a consequence, this accretion disk profile, covering both negative and positive values of the LSB parameter $l$, is a suitable candidate for the description of the measured luminosity of ultra-long GRB jets.
\section{Conclusion}
\label{Conclusion}
It is not yet clear which mechanism is ultimately responsible for launching gamma-ray burst (GRB) jets. The central engine powering these highly energetic jets is usually sought in two well-known models: the magnetohydrodynamical one and neutrino-antineutrino annihilation ($\nu {\bar \nu}\to e^+ e^-$). These two models, in essence, have been proposed for the extraction of energy from a composite black hole/accretion disk system.

Concerning the latter (the mechanism considered in this paper), if the condition $\dot{M}\sim (0.1-1) M_{\odot}$ s$^{-1}$ for the accretion rate of the black hole is satisfied, and the temperature of the disk is high enough, the disk is expected to act as an efficient neutrino emitter. In this way, the EDR arising from neutrino-antineutrino annihilation at the jet can account for the energetic bursts: the enormous energy released into $e^+ e^-$ pairs through the annihilation process supplies the energy to power highly energetic photons.

In this paper, we have studied GRB jets generated by neutrino pair annihilation when this process occurs in a slowly rotating Kerr black hole metric (near the rotational axis) modified by the Lorentz symmetry breaking (LSB) parameter $l$. The latter comes from the nonzero VEV of the background bumblebee field $B_\mu$. By employing two idealized models of the accretion disk, one with constant temperature $T_{\rm eff}=const$ and the other with a temperature gradient $T_{\rm eff}\sim r^{-1}$, we have shown that in the presence of an LSB parameter $l>0$ there is an enhancement of the EDR associated with the neutrino-antineutrino annihilation into electron-positron pairs, which in turn powers the GRB jets. Concerning the first accretion disk model, embedding $l>0$ into the spacetime results in an improved situation compared to the standard slowly rotating Kerr black hole, compatible with the observed gamma-ray luminosity associated with short and long GRB jets. In this regard, by performing a contour plot analysis of the EDR in the parameter space $l$-$a$, we have extracted the upper bound $l\leq 0.3$ on the LSB parameter. The same analysis for the second model has shown that, in the range $-0.4\leq l\leq0.1$, the neutrino EDR can account for the observed luminosity of ultra-long GRBs. Moreover, we point out that in both models the allowed range of $l$ is mostly independent of the spin parameter $a$. In other words, the additional contribution to the EDR of GRB jets arising from neutrino-antineutrino annihilation around a BG-based slowly rotating Kerr black hole comes merely from the bumblebee vector field embedded in the spacetime background. This is a worthwhile result, since it highlights the constructive role of fundamental modifications of gravity in explaining the GRBs observed in the universe.
As a final comment, a natural follow-up of the present work would be to implement a realistic simulation of the disk temperature in the metric (\ref{metricg}), by first constructing quasi-stationary disk models as in Ref.~\cite{Popham:1998ab} and then carrying out a self-consistent multi-dimensional simulation.
\begin{acknowledgments}
M.Kh. thanks Shiraz University Research Council. The work of G.L. and L.M. is supported by the Italian Istituto Nazionale di Fisica Nucleare (INFN) through the ``QGSKY'' project and by Ministero dell'Istruzione, Universit\`a e Ricerca (MIUR).
The computational work has been executed on the IT resources of the ReCaS-Bari data center, which have been made available by two projects financed by the MIUR (Italian Ministry for Education, University and Research) in the ``PON Ricerca e Competitività 2007-2013'' Program: ReCaS (Azione I - Interventi di rafforzamento strutturale, PONa3\_00052, Avviso 254/Ric) and PRISMA (Asse II - Sostegno all'innovazione, PON04a2A).
\end{acknowledgments}
\section{Introduction}
\label{Introduction}
The Lorentz invariant (LI), a continuous symmetry representing the physical results independent of the boost and rotation, is one of the key characteristics at the heart of modern physics. Conventionally, LI is related to the scale-free nature of spacetime. Experimentally this is well-supported, and there is no reason to believe that LI is broken at the currently accessible energies \cite{Mattingly:2005re, Liberati:2013xla} (for a detailed list of results see \cite{Kostelecky:2008ts}). However, it is expected that the coupling constants in quantum field theories are energy-dependent. Namely, unlike low energy scales, the coupling constants addressing the Lorentz-breaking terms become significant at high energy. Searching for a Lorentz symmetry breaking (LSB) is one of the hot topics in theoretical physics since it is strictly related to fundamental issues, such as quantum gravity and string theory \cite{Kostelecky:1990pe, Kostelecky:1994rn}. Even though the nature of quantum gravity is not determined, it is well-known that space and time should break down at small distances (around Planck's length), where they can no longer be treated as a classical continuum \cite{Amelino-Camelia:2008aez}. Therefore, there is a pervasive belief that LI cannot be a well-established symmetry at all scales of nature, and it becomes invalid by approaching the fundamental scales \cite{Collins:2006bw}. In the modern analyses for detecting experimental deviations from LI, a phenomenological framework, known as Standard-Model Extension (SME), is utilized in which LI violating effects are introduced by spontaneous symmetry breaking \cite{Colladay:1998fq}. SME is an effective field theory for the development of low-energy experiments containing all possible LI and Charge-Parity-Time (CPT) violating coefficients\footnote{For the deep connection of LI with CPT theorem, in the case of violating the former, the latter is challenged, too.} that include not only Special Relativity but the Standard Model and General Relativity as well \cite{Colladay:1996iz}. Overall, the presence of both Nambu-Goldstone and massive Higgs bosons in theories with spontaneous LSB provide us with rich scenarios for exhibiting multiple phenomenological effects \cite{Bluhm:2004ep, Kostelecky:2005ic}. In this respect, by implementing the spontaneous LSB into a curved space-time via background vector fields, it became possible to construct some well-known alternatives to General Relativity such as the Einstein-Aether theory \cite{Jacobson:2000xp} and the Bumblebee Gravity (BG) model\footnote{An advantage of BG model, compared to Einstein-Aether theory, is that it has no difficulties of Einstein-Aether in the perturbative description \cite{Petrov:2020wgy}.} \cite{Bluhm:2004ep,Bertolami:2005bh}.
Concerning the BG, to which we are interested in, the LSB mechanism occurs owing to the nonzero vacuum condensation of a vector field (the bumblebee field), indicating a preferred frame \cite{Kostelecky:2003fs, Bluhm:2007xzd}. The most peculiar feature of the BG model that has attracted a lot of attention is that it has no local $U(1)$ gauge symmetry, but the propagation of massless vector modes is still allowed \cite{Bluhm:2008yt}. Indeed, in the BG model, the massless photon modes arise as Nambu-Goldstone bosons when the spontaneously LSB occurs. Bumblebee vector fields can be a source of cosmological anisotropies since they generate a preferred axis \cite{Liang:2022hxd}. This means, in turn, that a fraction of the anisotropies observed in the universe can be ascribed to LI violation \cite{Maluf:2021lwh}. In this regard, it has been recently derived, by implementing the big bang nucleosynthesis and gravitational baryogenesis governing in early Universe within the BG-based cosmology, some tight constraints for the Lorentz-violating bumblebee vector field \cite{Khodadi:2022mzt}.
Ref. \cite{Capelo:2015ipa} shows that spontaneous LSB arising from the bumblebee field in the background can describe the dark energy issue. Unlike general SME that suffers from the lack of an exact gravitational solution, in the BG model it is possible to find a Schwarzschild-like solution \cite{Casana:2017jkc} (see also the spherically symmetric solution in recent Ref. \cite{Filho:2022yrk} which shows the effect of the LSB on both the temporal and radial components), as well as a Kerr-like black hole \cite{Ding:2019mal} (traversable wormhole solutions have been found in \cite{Ovgun:2018xys}).
A Kerr-like black hole solution derived in \cite{Ding:2019mal} was re-evaluated in \cite{Maluf:2022knd}. The conclusion is that, at the present, there is no yet full rotating black hole solution for the Einstein-bumblebee theory. The case of slowly rotating metric has been derived in \cite{Ding:2020kfr}, whose validity is confirmed in \cite{Maluf:2022knd}, too.
In these solutions, the LSB arises from a nonzero vacuum expectation value (VEV) of the bumblebee vector field coupled to the space-time curvature. These solutions allow to study various aspects of the role of spontaneously LSB on the physics of compact objects, such as black holes and wormholes (e.g, see Refs. \cite{Gomes:2018oyd,Oliveira:2018oha, Liu:2019mls, Maluf:2020kgf, KumarJha:2020ivj, Jha:2020pvk, Khodadi:2021owg, Jha:2021eww, Jiang:2021whw, Wang:2021gtd, Khodadi:2022dff, Gu:2022grg}). It would be interesting to mention that the study of causality in Lorentz-violating models is a relevant topic. In Refs. \cite{Santos:2014nxm, Jesus:2020lsv}, the authors have studied the Gödel-type solution for the BG model. In the light of confronting this model with astrophysical bodies, such as stars, were derived tight constraints for the LSB parameter \cite{Paramos:2014mda}.
The aim of this paper is to study the modification induced by BG on the energy deposition rate (EDR) of neutrinos emitted from a thin accretion disk~\cite{Asano:2000dq,Asano:2000ib}. More exactly, if the disk is hot enough and the accretion rate of black hole obeys the condition $\dot{M}\sim (0.1-1) M_{\odot}$ s$^{-1}$, then
the disk is a source of neutrinos ($\nu$) and anti-neutrinos ($\bar{\nu}$), which partially annihilate above the disk and convert into electron ($e^{-}$) and positron ($e^{+}$), as follows \cite{Popham:1998ab, Birkl:2006mu, Chen:2006rra, Zalamea:2010ax}
\begin{equation}\label{nunuee}
\nu{\bar \nu}\to e^+ e^- \,.
\end{equation}
This process has significant consequences for the cosmological gamma-ray bursts (GRBs) jets, which are the most luminous objects in the universe. Accretion disks around BHs are the favourite candidates for the central engine of GRBs. Such a configurations are formed by the merging of compact objects (neutron-neutron stars, BH-neutron stars) or by supernova.
The hot accretion disk in these systems is the source of neutrinos/antineutrinos, and their annihilation into electrons/positrons may power GRBs.
However, in order that the relativistic fireball produces GRBs, the fireball must contain an extremely small amount of the baryon density. The latter, above the accretion disk, is low near the rotation axis so that the neutrino pair annihilation has the possibility of producing a clean fireball which allows to solve the baryon contamination problem, which hinders the creation of relativistic shocks and the emission of gamma-rays (see \cite{Asano:2000dq} and references therein). More exactly, the jets produced via neutrino annihilation, in essence, are cones relatively free of baryons. Note, finally, that in the processes of emission of neutrinos from the hot accretion disk and pair annihilation are important the relativistic effects that essentially take into account the gravitational redshift, the bending of the neutrino trajectories, and the redshift due to rotation. The EDR is affected by these effects when Kerr geometry is modified by LI violating corrections, as will see in what follows.
GRBs, in essence, are powerful cosmic explosions of the characteristic duration of seconds. Commonly are labeled into two classes: short and long GRBs for timescales, $\sim 1$ s and $(1-100)$ s, respectively \cite{Woosley:1993wj,Galama:1998ea,Stanek:2003tw}. Despite the uncertainty about the origin of these two categories, the evidence suggests that the former most likely comes from the merger of compact binaries such as a double neutron star and/or a binary system, including a neutron star and black hole \cite{Eichler:1989ve,Narayan:1992iy,Nakar:2007yr}, while the collapse of the core of massive stars (Wolf–Rayet) can be the origin of the latter \cite{Woosley:1993wj}. Based on the observed gamma-ray luminosity $L$, the luminosity of the long-duration type of GRBs should not exceed $L\sim 10^{53}$ erg/s (more accurate $10^{52-53}$ erg/s) \cite{Bloom:2003eq}. Overall, the order of magnitude of the luminosity of short and long GRBs is expected to be the same \cite{Leng:2014dfa}.
%
In recent years, however, a new population of ultra-long GRBs with timescale duration $\sim 10^{3-4}$ s that reduces the luminosity up to $\sim 10^{49-50}$ erg/s
(e.g, see Refs. \cite{Thone:2011yf,Gendre:2012wj,Levan:2013gcz}) has been investigated.
The relation of the maximum energy of a neutrino-powered jet as a function of the burst duration shows that the energy deposition falls down rapidly as the burst lasts longer \cite{Leng:2014dfa}.
Observations indicate that the process (\ref{nunuee}), due to the creation of relativistic $e^{\mp}$-dominated jets, can be a possible candidate to explain GRBs observed from galaxies containing the supermassive black hole in their center, e.g, see \cite{Zalamea:2010ax}. In \cite{Co86,Co87,Goodman:1986we,Eichler:1989ve,1993AcA....43..183J} it has been shown that the process (\ref{nunuee}) can deposit an energy $\gtrsim 10^{51}$ erg above the neutrino-sphere of a type II supernova \cite{Goodman:1986we}. In \cite{Salmonson:1999es, Salmonson:2001tz} it has been shown that taking into account the effects of the strong gravitational field in a Schwarzschild space-time, the efficiency of the process (\ref{nunuee}), for collapsing neutron stars, enhances (up to a factor of $30$) compared to the Newtonian case.
The same analysis around a thin and isothermal accretion disk for a Schwarzschild or Kerr metric was performed in \cite{Asano:2000ib,Asano:2000dq},
%
%
The neutrino annihilation luminosity from the disk has been also calculated, e.g., see \cite{Mallick:2008iv,Mallick:2009nvq,Chan:2009mw,Kovacs:2009dv,Kovacs:2010zp,Zalamea:2008dq,Harikae:2010yt}.
Time-dependent models of black-hole accretion disks, such as remnants of neutron-star mergers or collapse engines, have been investigated, for example, in Refs. \cite{Harikae:2010yt,Ruffert:1998qg,Popham:1998ab,DiMatteo:2002iex,Fujibayashi:2017xsz,Just:2015dba,Foucart:2018gis,Foucart:2020qjb}.
The principal output of these studies is that the neutrino-pair annihilation process, when analyzed in curved background described by General Relativity, is not efficient enough to power GRBs. There is another scenario for energy extraction from disk or black hole to launch the GRBs jets, the well-known magnetohydrodynamical (e.g., \cite{Katz:1997bh,Meszaros:1996ww}). According to this scenario, the Blandford-Znajek process is a more promising mechanism for launching jets. However, the main issue of this energy extraction model is whether (yet has been not proven) the magnetic flux arising from the collapse of the star is sufficient to power the jet or not \cite{Komissarov:2009dn}. Note that throughout this paper, we will only address the neutrino effects without considering the contribution of other potential forms of energy deposition, such as Blandford-Znajek and magnetic reconnection.
The process (\ref{nunuee}), with neutrino-antineutrino emitted from the surface of a neutron star, has been also investigated in the framework of extended theories of gravity \cite{Lambiase:2020iul, Lambiase:2020pkc, Lambiase:2022ywp}.
By admitting this idea that the environment around a black hole potentially is a cleaner place for the launch of a relativistic jet, we consider the BG-based rotating black hole with broken LI surrounded by a thin accretion disk from which neutrinos are emitted \cite{Asano:2000dq}. Inspired by
Refs. \cite{Asano:2000dq, Asano:2000ib,10.1143/PTPS.136.235}, we assume an idealized, semi-analytical, stationary state model, independent of details regarding the disk formation. Note that the self-gravitational effects are not taken into account, and the disk is described by an inner and outer edge.
The plan of our work is as follows. In Sec.~\ref{BMOD} we overview the modified slowly rotating Kerr black hole solution addressed by the BG model.
In Sec.~\ref{Formulation} we present the model used for computing the energy deposition from the thin disk. In Sec.~\ref{Results} we characterize the effects of the theories beyond General Relativity on the EDR of neutrino pair annihilation near the rotational axis of the gravitational source. We shall consider two profiles for the disk temperature: $T=const$ and $T\propto r^{-1}$.
Finally, in Sec.~\ref{Conclusion} we summarize our results.
\begin{table*}[t]
\begin{tabular}{|c|c|c|}
\hline
Allowed range of $l$ and $a$ & Scenario& Ref. \\
\hline
$(-1,0.6]$ for $a \in [0.5,1)$& BG with with Kerr-Sen-like solution in light of Event Horizon Telescope ($M87^*$) \footnote{Note to this point is essential that the shadow of BG with Kerr-like black hole solution introduced in \cite{Ding:2019mal} is not distinguishable from its standard counterpart, see \cite{Vagnozzi:2022moj} for more details.} & \cite{Jha:2021eww} \\ \hline
$[-0.23,0.06]$ for $a \in [0.28,0.31]$& BG with Kerr-like solution in light of quasi-periodic oscillations & \cite{Wang:2021gtd} \\
\hline
$[-0.56,6.5]$ for $a \in [0.32,0.81]$& - & \cite{Wang:2021gtd} \\
\hline
$[-0.7,10.8]$ for $a \in [0,4]$& - & \cite{Wang:2021gtd} \\
\hline
\end{tabular}
\caption{ The allowed ranges of $l$, that have been obtained via confronting the BG black hole with observational data. The constraint reported in the third to fifth rows, respectively come from the three observational data: GRO J1655-40 \cite{Motta:2013wga}, XTE J1550-564 \cite{Orosz:2011ki}, and GRS 1915+105 \cite{Reid:2014ywa}, within $1\sigma$ level.}
\label{tab:rangel}
\end{table*}
\section{modified slowly rotating Kerr black hole solution with a background bumblebee field}
\label{BMOD}
In this Section, we shortly review the slowly rotating Kerr black hole solution obtained from the nonminimal coupling of the background bumblebee field to gravity. For that, the spontaneous LSB occurs in a curved space-time and the metric tensor must couple to the vector field. This leads to the bumblebee action \cite{Bluhm:2004ep,Bertolami:2005bh} (in units $c=G_N=1$)
\begin{eqnarray}\label{BAction}
S &=&\int d^{4}x\sqrt{-g}\bigg( \frac{1}{16\pi}\left(R+\xi B^{\mu
}B^{\upsilon}R_{\mu\nu}\right) \\
& & \qquad
-\frac{1}{4}B^{\mu\nu}B_{\mu\nu}-V\left(
B^{\mu}\right) \bigg)\,.
\nonumber
\end{eqnarray}
Here $B_\mu$ is the bumblebee vector field with mass dimension $M$,
$B_{\mu\nu}=\partial_{\mu}B_{\nu}-\partial_{\nu}B_{\mu}$ the bumblebee-field strength with mass dimension $M^{2}$, $\xi$ the nonminimally coupling constant between
the background bumblebee field and gravity with mass dimension $M^{-2}$, and finally, $V\left(B^{\mu}\right)$ the potential defined as
\[
V\left(B^{\mu}\right)=B_{\mu}B^{\mu}\pm b^{2}\,.
\]
The potential $V(B^{\mu})$ is such that $B_\mu$ may acquire a nonzero VEV $\langle B^{\mu}\rangle=b^{\mu}$ (spontaneous LSB in the gravitational sector \cite{Kostelecky:2003fs, Bluhm:2004ep}).
The slowly rotating black hole metric derived from the BG in the Boyer-Lindquist coordinates $x^\mu=(t,r,\theta,\phi)$ is given by (see Ref. \cite{Ding:2020kfr} for details)
\begin{eqnarray}
ds^{2}&=&-\left( 1-\frac{2M}{r}\right) dt^{2}-\frac{4Ma\sin^{2} \theta }{r}dtd\phi \nonumber\\
&&+\frac{r^{2}}{\tilde{\Delta}}dr^{2}
+r^{2}d\theta^{2}+r^2\sin^{2} \theta d\varphi^{2}, \label{metricBmod}
\end{eqnarray}
where
\begin{eqnarray}
\tilde{\Delta} &=& \frac{r^{2}-2Mr}{l+1}\,,~~~l\neq-1 \label{Deltametr}
\end{eqnarray}
It gets modified by a soft deviation from the standard slowly rotating Kerr due to the appearance of dimensionless LSB parameter ($l=\xi b^2$), so that the final form of the metric tensor reads
\begin{equation}\label{metricg}
g_{\mu\upsilon}=\left(
\begin{array}[c]{cccc}
-\left( 1-\frac{2M}{r}\right) & 0 & 0 & -\frac{2Ma \sin^{2}\theta}{r} \\
0 & \frac{r^{2}}{\tilde{\Delta}} & 0 & 0 \\
0 & 0 & r^{2} & 0 \\
-\frac{2Ma\sin^{2}\theta}{r} & 0 & 0 & r^2\sin^{2} \theta
\end{array}\right)\,,
\end{equation}
As it is clear, the parameter $l$ leaves a distinguishable imprint on the metric via $\tilde{\Delta}$. As a result, the underlying metric differs from the standard slowly rotating Kerr metric. In other words, the BG-based metric at hand differs softly from the standard slowly rotating Kerr metric by a factor $(l+1)$ in the standard definition of component of $g_{11}$ i.e., $\frac{l+1}{1-2M/r}$.
Due to the play of the constructive role of LSB parameter $l$ on the results of our analysis, it is worthwhile to discuss it in a bit of detail. In essence, the sign of $l$ comes from the sign of the nonminimal coupling constant ($\xi$) between the background bumblebee vector field and gravity. Since there is no consensus in the literature, we deal with both signs negative \cite{Wang:2021gtd} and positive \cite{Paramos:2014mda, Casana:2017jkc}. Given the fact that the BG is a subclass of SME, it is shown that via Parameterized Post-Newtonian (PPN) analysis, the spacelike background bumblebee vector field $b_{\mu}$ is matched to dimensionless tensor $s^{\mu\nu}$ in SME, see Ref. \cite{Bailey:2006fd} for more details. Namely, the LSB parameter $l$ can be limited, via constraints imposed on Lorentz violating coefficients of the SME, to find most of the upper bounds extracted on the different combinations of $s^{\mu\nu}$ (see the review paper \cite{Hees:2016lyw}). In the recent paper \cite{Khodadi:2022pqh} there is a summarized list of the most important physical frameworks to derive upper bounds on the $s^{\mu\nu}$. Moreover, stringent constraints on $l$ have been directly inferred in Refs. \cite{Paramos:2014mda, Casana:2017jkc} by considering BG in the framework of astrophysics and some classic tests. In this regard, it is recommended to visit some more recent works such as \cite{Maluf:2021lwh,Khodadi:2022mzt} as well.
The common feature of all these constraints, whether directly or through connection with Lorentz violating coefficients of the SME, is that they have been derived in the weak-field regimes with the gravitational redshift $\ll 1$. However, the Lorentz violation effects appear at fundamental scales, as pointed out in the Introduction. The environment around compact objects, such as a black hole, no longer belongs to the weak-field regime since its redshift is $\gtrsim 1$. So it is reasonable to imagine that by increasing the redshift of the gravitational field under investigation (as around black holes), the current constraints derived in the weak-gravity regime may change \cite{Psaltis:2008bb}. In light of these points, one can safely relax the above-mentioned constraints on $l$ for the framework at hand. This also occurs in the frameworks reported in Table.~\ref{tab:rangel} where some BG black hole scenarios are directly compared with observational data.
%
%
It might be interesting to stress that in Ref. \cite{Gu:2022grg} were used newly the blurred reflection traits in the X-ray spectra of galactic black hole EXO 1846–031 to constrain $l$. Despite the lack of success to do it due to the degeneracy between the rotation parameter of the black hole and the LSB parameter, it is expected to fix this problem by combining other observations in future analysis. An important point to note about the above-mentioned constraints for $l$ in the framework of BG is that they come from taking the Kerr-like solution \cite{Ding:2019mal}, which is just valid in the slow rotating limit ($a^2\ll1$) \cite{Maluf:2022knd,Ding:2020kfr}. It means that these constraints can not be reliable beyond slow-rotating approximation.
\section{Energy deposition rate from $\nu{\bar \nu}$ annihilation}
\label{Formulation}
Let us consider a black hole with a thin accretion disk around it that emits neutrinos \cite{Asano:2000dq}. We will confine ourselves to the case of an idealized, semi-analytical, stationary state model, which is independent of details regarding the disk formation. The disk is described by an inner and outer edge, with corresponding radii defined by $R_{\mathrm{in}}$ and $R_{\mathrm{out}}$, respectively. Self-gravitational effects are neglected. We consider the generic metric
\begin{equation}
g_{\mu\nu}= \left(\begin{matrix} g_{00}& 0& 0& &g_{03}\\0& g_{11}& 0& &0\\0& 0& g_{22}& &0\\g_{03}& 0& 0& &g_{33}
\end{matrix}\right) \,.
\end{equation}
The Hamiltonian of a test particle reads
\begin{equation}
2\mathcal{H}=-E\dot{t}+L\dot{\phi}+g_{11}\dot{r}^2=\delta_1 \,,
\end{equation}
where $\delta_1=0, 1$ refers to null geodesics and massive particles, respectively, $E$ and $L$ are the energy and angular momentum of the test particles moving around the rotational axis of the black hole. The non-vanishing components of the 4-velocity are \citep{Prasanna:2001ie}
\begin{align}
U^{3}&=\dot{\phi}=E\left(\frac{L}{E}+\frac{1}{2}\frac{g_{03}}{g_{00}}\right)\left(g_{33}-\frac{1}{2}\frac{g_{03}^2}{g_{00}}\right)^{-1} \,, \nonumber \\
U^0&=\dot{t}=-\frac{E}{g_{00}}
\left[1+\frac{g_{03}}{2}
\left(\frac{L}{E}+
\frac{g_{03}}{2g_{00}}\right)
\left(g_{33}-\frac{g_{03}^2}{2g_{00}}\right)^{-1}\right] \nonumber \\
\dot{r}^2&=
\frac{E\dot{t}-L\dot{\phi}}
{g_{11}} \,. \nonumber
\end{align}
We are interested in the energy deposition rate near the rotational axis at $\theta=0^{\circ}$. We use the value $\theta=0^{\circ}$ for evaluating the energy emitted in a half cone of $\Delta \theta\sim10^{\circ}$. The accretion disk extends from $R_{in}=2R_{\mathrm{ph}}$ to $R_{out}=30M$, with $R_{\mathrm{ph}}$ the photosphere radius. Moreover, it can be shown that the following relation for the impact parameter holds~\cite{Asano:2000dq}
\begin{equation}
\rho_\nu=\sqrt{g_{00}(r_0,0)g_{22}(r_0,0)} \,,
\label{rho}
\end{equation}
with $r_0$ the nearest position between the particle and the centre before arriving at $\theta=0$. Finally, from the metric (\ref{metricg}), the equation of the trajectory becomes \cite{Asano:2000dq}
\begin{equation}
\int\frac{d\theta}{\sqrt{1-(\tilde{a}/\rho_\nu)^2\sin^2\theta}}=
\int \frac{dr'}{\sqrt{\frac{g_{22}^2(r',0)}{\rho^2_{\nu}}-\frac{g_{22}(r',0)}{g_{11}(r',0)}}}\,. \nonumber
\end{equation}
In this relation one takes into account that the neutrinos are emitted from the position $(R,\pi/2)$, with $R\in [R_{in},R_{out}]$, and arrive at $(r,0)$. The energy deposition rate of neutrino pair annihilation is given by \cite{Asano:2000dq}
\begin{equation}
\frac{dE_0(r)}{dtdV}=\frac{21\pi^4}{4}\zeta(5)KG_F^2T^9_{\mathrm{eff}}(R_{2R_{ph}})F(r, T_0) \,,
\label{trajectory}
\end{equation}
where $G_F$ is the Fermi constant, $k$ is the Boltzmann constant, $T_{\mathrm{eff}}(2R_{\mathrm{ph}})$ is the effective temperature at radius $2R_{ph}$ (the temperature observed in the comoving frame),
\begin{equation}
K=\frac{1\pm 4\sin^2\omega_W+8\sin^4\omega_W}{6\pi} \,,
\end{equation}
with the $+$ sign for $\nu_e$ and the $-$ sign for $\nu_{\mu/\tau}$, $F(r, T_0)$ is reported in the Appendix (Eq. \ref{F(r)}),
%
%
%
with $T_0$ the temperature observed at infinity
\begin{align}
&T_0(R)=\frac{T_{\rm{eff}}\left(R,\frac{\pi}{2}\right)}{\gamma}\sqrt{g_{00}\left(R,\frac{\pi}{2}\right)-\frac{g_{03}^2(R,\frac{\pi}{2})}{g_{33}(R,\frac{\pi}{2})}} \,\ , \\
&\gamma=\frac{1}{\sqrt{1-v^2/c^2}} \,\ , \\
&\frac{v^2}{c^2}=\frac{g_{33}^2(r,\pi/2)\left(\Omega_K-\omega\right)^2}{g_{03}^2(r,\pi/2)-g_{00}(r,\pi/2)g_{33}(r,\pi/2)} \,\ , \\
&\Omega_K=\frac{-g_{03,r}+\sqrt{(g_{03,r})^2-g_{00,r}g_{33,r}}}{g_{33,r}}\Big|_{(r,\pi/2)} \,\ , \\
&\omega=-\frac{g_{03}(r,\pi/2)}{g_{33}(r,\pi/2)} \,\ .
\end{align}
where $T_{\mathrm{eff}}$ is the effective temperature measured by a local observer and all the quantities are evaluated at $\theta=\pi/2$. In the treatment we ignore the reabsorption of the deposited energy by the black hole, and we consider both the case of an isothermal disk, that is
\[
T_{\mathrm{eff}}=const\,,
\]
and the case of a temperature gradient \cite{Asano:2000dq}, for which $T_{\mathrm{eff}}$, in the simplest acceptable model, is given by (for details, see \cite{10.1143/PTPS.136.235})
\begin{equation}\label{tdippr}
T_{\mathrm{eff}}(r)= T_{\mathrm{eff}}(2R_{\mathrm{ph}})\,\frac{2R_{\mathrm{ph}}}{r} \,.
\end{equation}
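For orientation, the two disk models and the red-shift correction entering $T_0(R)$ can be written compactly as below; this is a minimal sketch, with the reference temperature \texttt{T\_in} and the function layout being our assumptions.
\begin{verbatim}
import numpy as np

def T_isothermal(r, T_in=10.0):
    # Isothermal disk: T_eff constant along the disk (MeV).
    return T_in

def T_gradient(r, R_ph, T_in=10.0):
    # Gradient disk, Eq. (tdippr): T_eff ~ 2 R_ph / r,
    # normalized to T_in at the inner edge r = 2 R_ph.
    return T_in * 2.0 * R_ph / r

def T_observed(T_eff_local, g00, g03, g33, v2_over_c2):
    # Temperature at infinity: local T_eff corrected by the
    # Lorentz factor gamma and the gravitational red-shift.
    gamma = 1.0 / np.sqrt(1.0 - v2_over_c2)
    return (T_eff_local / gamma) * np.sqrt(g00 - g03**2 / g33)
\end{verbatim}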
\section{Applications to the Bumblebee metric}
\label{Results}
In this section, we calculate the emitted energy with the procedure shown in Sec.~\ref{Formulation} for the bumblebee metric in Eq.~(\ref{metricg}). We analyze two different cases, corresponding to the isothermal model and the temperature gradient model. We find an enhancement of the EDR for positive values of the LSB parameter $l$ ($l>0$) entering Eq. (\ref{metricg}).
\subsection{Isothermal model}
For our analysis, it turns out convenient to define the function
\begin{equation}\label{G(r)Gae}
G(r)=F(r, T_0)\frac{r^2+a^2}{4M^2} \,,
\end{equation}
where $F(r, T_0)$ is given in Eq. (\ref{F(r)}).
The function $G(r)$ is essential for evaluating the EDR and, therefore, the energy viable for a GRB explosion.
The EDR is estimated over the infinitesimal angle $d\theta$, taking a characteristic angle $\Delta \theta \simeq 10^{\circ}$ and temperature $T_{\mathrm{eff}}= 10~\mathrm{MeV}$. The explicit EDR expression is given by \cite{Asano:2000dq}
\begin{widetext}
\begin{equation}
\frac{dE_0}{dt}\simeq 4.41\times 10^{48}\left(\frac{\Delta \theta}{10^\circ}\right)^2\left(\frac{T_\mathrm{eff}(R_{\mathrm{in}})}{10~\mathrm{MeV}}\right)^9\left(\frac{2M}{10~\mathrm{km}}\right)\int_{R_{\mathrm{in}}}^{R_{\mathrm{out}}}\frac{G(r)}{2M}dr~\mathrm{erg~s^{-1}} \,\ .
\label{value}
\end{equation}
\end{widetext}
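Numerically, Eq.~(\ref{value}) amounts to a single quadrature over the disk. The sketch below only collects the bookkeeping and assumes a user-supplied implementation of $G(r)$:
\begin{verbatim}
from scipy.integrate import quad

def edr_erg_per_s(G, R_in, R_out, M, T_eff_in=10.0, dtheta_deg=10.0):
    # Energy deposition rate of Eq. (value), in erg/s.
    # G : callable G(r); M in km (so that 2M/10 km is dimensionless).
    integral = quad(lambda r: G(r) / (2.0 * M), R_in, R_out)[0]
    return (4.41e48 * (dtheta_deg / 10.0)**2
            * (T_eff_in / 10.0)**9
            * (2.0 * M / 10.0)
            * integral)
\end{verbatim}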
\begin{figure}[t]
\centering
\includegraphics[scale=0.7]{TC.pdf}
\includegraphics[scale=0.7]{avtc0.pdf}
\caption{Plot of $G(r)$ against $r/M$ for the isothermal disk. \textbf{Upper graph:} Different values of $l$ with the spin parameter fixed at $a=0.3$. \textbf{Lower graph:} Different values of the spin parameter $a$ for $l<0$ and $l>0$.}
\label{Gr}
\end{figure}
Before going into the analysis of EDR, a comment is in order.
The function $F(r,T_0)$ defined in Eq.~(\ref{F(r)}) is proportional to $g_{11}^4$. From (\ref{metricg}) it follows that $g_{11}\propto 1/\tilde{\Delta}$ and $\tilde{\Delta}\propto 1/(1+l)$, so that $g_{11}\propto (1+l)$. Therefore, the enhancement factor $(1+l)^4$ appears in the computation of the EDR. Moreover, the LSB parameter $l$ also contributes, via the modifications in the metric (\ref{metricg}), to the trajectory equation, leading to important changes in the neutrino angular momentum at the rotational axis.
In Fig.~\ref{Gr} we show the behavior of $G(r)$ in the BG metric, together with its standard counterpart. In the upper graph, we plot curves with $l<0$, $l>0$ and $l=0$, fixing the spin parameter to $a=0.3$. As a common feature in both cases, $G(r)$ initially increases with the distance, reaches a maximum value, and then decreases due to the interplay between temperature and red-shift effects. However, we see that a positive LSB parameter increases $G(r)$ compared to $l=0$, while for $l<0$ the function $G(r)$ decreases. In the lower graph of Fig.~\ref{Gr}, we investigate the role of the spin parameter $a$ in the behavior of $G(r)$ (and hence of the EDR) in interplay with $l<0$ and $l>0$. As one can see, the spin parameter $a$ does not affect the curves for either negative or positive values of the LSB parameter $l$.
This means that for the isothermal disk model in an LSB-based metric such as BG, the rotation of the black hole plays no effective role in the EDR.
This can be seen more clearly in the contour plot of the $l-a$ parameter space in Fig.~\ref{Con1}. As is evident, EDR$^{BG}$ increases when moving from $l<0$ to $l>0$, independently of the value of the spin parameter. Indeed, the main contribution to the increase of the EDR, which makes the process of powering GRB jets more efficient than in the standard case, comes from a positive LSB parameter $l>0$ embedded in the background. In other words, rotational energy plays no role in sourcing the energy of GRBs.
The parameter scan over $l-a$ within the ranges $0\leq a\leq0.3$ and $-0.5<l\leq0.5$ clearly shows that the $T=\mathrm{const}$ model can successfully describe, for the BG-based black hole, the observed gamma-ray luminosity associated with short and long GRBs ($\sim10^{52-53}$ erg/s), provided the LSB parameter falls in the range $-0.1<l\leq0.3$ (see Fig.~\ref{Con1}). This result is independent of the value of the spin parameter of the black hole. In general, in the $T=\mathrm{const}$ model, in order not to produce GRBs with energy higher than observed (for both short and long cases), one has to set $l \leq0.3$.
\begin{figure}[t]
\centering
\includegraphics[scale=0.8]{ContourEn.pdf}
\caption{A contour plot of the $l-a$ parameter space showing the EDR$^{BG}$ for the BG-based slowly rotating Kerr black hole in the isothermal disk model. The color scale is logarithmic, $10^{n}$, with $n$ ranging from below $52$ to beyond $53$.}
\label{Con1}
\end{figure}
\subsection{Temperature gradient model}
\label{Bumblebee - Temperature gradient}
In the case of a temperature gradient, the function $G(r)$ is again calculated using Eq.~(\ref{F(r)}), taking into account that the temperature varies with $r$ ($T_{\rm eff}\propto r^{-1}$), as well as with $\theta_{\nu(\bar{\nu})}$. In the upper graph of Fig.~\ref{Gr-Tvar}, we show $G(r)$ vs $r/M$ for different values of the LSB parameter $l$ and for a given value of the spin parameter of the black hole. Similar to the isothermal disk model, moving from $l<0$ to $l>0$ produces an evident enhancement of the EDR induced by the bumblebee metric as compared to General Relativity ($l=0$).
Compared to the upper graph in Fig.~\ref{Gr}, one finds that the total energy deposited is smaller than that expected from the isothermal model. In the lower graph of Fig.~\ref{Gr-Tvar}, we probe, case by case, the role of the spin. Unlike in the former model, the spin parameter $a$ has a mild effect on the behavior of $G(r)$ for both $l<0$ and $l>0$: the EDR for the largest spin value satisfying the condition $a^2\ll1$ is slightly larger than for lower values.
%
\begin{figure}[t]
\centering
\includegraphics[scale=0.7]{TV.pdf}
\includegraphics[scale=0.7]{avtr.pdf}
\caption{Plot of $G(r)$ against $r/M$ for a disk with a temperature $T\propto 2R_{\mathrm{ph}}/r$. \textbf{Upper graph:}
Different values of $l$ with the spin parameter fixed at $a=0.5$. \textbf{Lower graph:} Different values of the spin parameter $a$ for $l<0$ and $l>0$.}
\label{Gr-Tvar}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.8]{Contour_Tvar_En.pdf}
\caption{The same contour plot as in Fig.~\ref{Con1}, with a logarithmic color scale $10^{n}$ ($n$ ranging from below $49$ to beyond $50$), this time for the temperature gradient disk model.}
\label{Con2}
\end{figure}
%
%
This mild dependence on the spin parameter $a$ can be traced in the contour plot showing the EDR$^{BG}$ for the temperature gradient model (Fig.~\ref{Con2}). As in the case of an isothermal disk, we see here too that the presence of $l>0$ induces an enhancement of the EDR. Differently from the $T=\mathrm{const}$ case, and ignoring the mild dependence on $a$, it is clear that neutrino annihilation in the environment of the Kerr black hole derived from the BG metric, with the accretion disk profile $T\propto r^{-1}$, can only explain ultra-long GRBs ($\sim10^{49-50}$ erg/s), provided $-0.4\leq l\leq0.1$ (see Fig.~\ref{Con2}).
As a consequence, this accretion disk profile, covering both negative and positive values of the LSB parameter $l$, is a suitable candidate for describing the measured luminosity of ultra-long GRB jets.
\section{Conclusion}
\label{Conclusion}
It is not yet clear which mechanism is conclusively responsible for launching gamma-ray burst (GRB) jets. The central engine powering these highly energetic jets is usually described by two well-known models: magneto-hydrodynamical energy extraction and neutrino–antineutrino annihilation ($\nu {\bar \nu}\to e^+ e^-$). Both models, in essence, have been proposed for extracting energy from a composite black hole/accretion disk system.
Concerning the latter (the mechanism considered in this paper), if the accretion rate of the black hole satisfies $\dot{M}\sim (0.1-1)\, M_{\odot}$ s$^{-1}$ and the disk temperature is sufficiently high, the disk is expected to act as an efficient neutrino emitter. In this way, the EDR arising from neutrino–antineutrino annihilation at the jet can account for the energetic bursts: the enormous energy released into $e^+ e^-$ pairs by the annihilation process supplies the energy needed to power highly energetic photons.
In this paper, we have studied GRB jets generated by neutrino pair annihilation in the case in which this process occurs in a slowly rotating Kerr black hole metric (near the rotational axis) modified by the Lorentz symmetry breaking (LSB) parameter $l$. The latter comes from the non-zero VEV of the background bumblebee field $B_\mu$. By employing two idealized models of the accretion disk, one with constant temperature $T_{\rm eff}=\mathrm{const}$ and the other with a temperature gradient $T_{\rm eff}\sim r^{-1}$, we have shown that in the presence of an LSB parameter $l>0$ there is an enhancement of the EDR associated with neutrino-antineutrino annihilation into electron-positron pairs, which powers the GRB jets. Concerning the first model of the accretion disk, embedding $l>0$ into the spacetime results in an improved situation, compared to the standard slowly rotating Kerr black hole, which is compatible with the observed gamma-ray luminosity associated with short and long GRB jets. In this regard, through a contour-plot analysis of the EDR in the $l-a$ parameter space, we have extracted the upper bound $l\leq 0.3$ for the LSB parameter. The same analysis for the second model has shown that, in the range $-0.4\leq l\leq0.1$, the neutrino EDR can account for the observed luminosity of ultra-long GRBs. Moreover, we point out that in both models the allowed range of $l$ is essentially independent of the spin parameter $a$. In other words, the additional contribution to the EDR of GRB jets arising from neutrino–antineutrino annihilation around a BG-based slowly rotating Kerr black hole comes merely from the bumblebee vector field embedded in the spacetime background. This is worthwhile, since it addresses the constructive role of fundamental modifications in explaining the GRBs observed in the universe.
As a final comment, a natural follow-up of the present work would be to implement a realistic simulation of the disk temperature in the metric (\ref{metricg}), by first constructing quasi-stationary disk models as in Ref.~\cite{Popham:1998ab} and then developing a self-consistent multi-dimensional simulation model.
\begin{acknowledgments}
M.Kh. thanks Shiraz University Research Council. The work of G.L. and L.M. is supported by the Italian Istituto Nazionale di Fisica Nucleare (INFN) through the ``QGSKY'' project and by Ministero dell'Istruzione, Universit\`a e Ricerca (MIUR).
The computational work has been executed on the IT resources of the ReCaS-Bari data center, which have been made available by two projects financed by the MIUR (Italian Ministry for Education, University and Research) in the ``PON Ricerca e Competitività 2007-2013'' Program: ReCaS (Azione I - Interventi di rafforzamento strutturale, PONa3\_00052, Avviso 254/Ric) and PRISMA (Asse II - Sostegno all'innovazione, PON04a2A).
\end{acknowledgments}
|
{
"arxiv_id": "2302.14141",
"language": "en",
"timestamp": "2023-03-01T02:02:25",
"url": "https://arxiv.org/abs/2302.14141",
"yymm": "2302"
} |
\section{Introduction}
\label{intro}
Financial time series are known for their rich dynamics, which incorporates heteroskedasticity, skewness and fat tails, among other stylized facts. For such reason, distributional forecasting of stock returns is a highly non-trivial task. Multiple models have been developed to model heteroskedastic time series. Pioneering works in such strand of literature include the development of the Autoregressive Conditional Heteroskedasticity (ARCH) model \citep{engle1982} and the Generalized Autoregressive Conditional Heteroskedasticity (GARCH) model \citep{bollerslev1986}. These simple models are able to capture the persistence in the evolution of volatility present in financial time series, and to produce associated volatility clusters.
The use of Hidden Markov Models (HMM), also referred to as regime-switching models, also gained traction in the literature since the seminal work of \cite{hamilton1989new}. Such an approach involves modeling return distributions for which parameters are modulated through a Markov chain, which therefore produces mixtures of distributions. Other papers combined the two aforementioned approaches and developed regime-switching GARCH models, see for instance \cite{gray1996modeling} and \cite{klaassen2002improving}, which allow for time-varying parameters in the mixture distributions, and thus for more volatility persistence.
\blfootnote{Financial support from Mitacs, Quantolio and NSERC (Godin: RGPIN-2017-06837) is gratefully acknowledged.}
More recently, these econometric models were extended in the machine learning literature through the introduction of recurrent mixture density networks (RMDN). The model presented by \cite{schittenkopf_forecasting_2000} is a non-linear RMDN, which is extended by \cite{Nikolaev2012}, who include linear nodes in the hidden layer and propose to use the RTRL algorithm for training the RMDN. We refer to this last model as the RMDN-GARCH. RMDNs can capture the time-varying shape of the conditional distribution of a time series in a semi-parametric fashion, and represent non-linear dependencies of the mean and variance with respect to previous observations from the time series. However, mixture density networks are notoriously difficult to train and are sensitive to the initialization method; they can easily get stuck in bad local minima during training~\citep{hepp_mixture_2022}.
We propose a small alteration to the architecture of the RMDN-GARCH proposed by \cite{Nikolaev2012}, which makes use of some of the proposed improvements to the MDN architecture~\citep{Guillaumes2017MixtureDN}. We also propose a novel training method, which relies on a pretraining procedure where only a subset of weights from the RMDN are updated in initial backpropagation iterations, with all other weights being temporarily frozen. Such a procedure allows approaching the predictive behavior of the GARCH model in early training stages and thus surpassing it in later training iterations. Numerical experiments presented hereby show that in the absence of pretraining, the RMDN-GARCH sometimes even fails to reach GARCH performance in-sample, which is indicative of bad training, since the GARCH model is nested within the RMDN-GARCH model. While our method does not guarantee convergence to a global minimum, it significantly improves the robustness of the model to the initial values of the network's parameters.
\section{The RMDN Model Trained by Backpropagation}
\label{model}
In this section, we propose a small modification to the architecture of the RMDN-GARCH model of \cite{Nikolaev2012}; the modification consists in changing the final activation function of the variance network of the RMDN, and reflects recently proposed improvements to the architecture of mixture density networks~\citep{Guillaumes2017MixtureDN}. We refer to the new architecture as the ELU-RMDN. Following the work of \cite{Nikolaev2012}, this recurrent mixture density network is used to forecast the conditional density of stock returns through mixtures of Gaussian distributions. The training method that we propose also differs from the RTRL method used by \cite{Nikolaev2012}.
The architecture of the ELU-RMDN can be separated into three parts, which are used to estimate the parameters of a mixture of $N$ components. For simplicity, we restrict the model to a lag of 1 for neural network inputs. In this paper, we only consider a mixture of Gaussian distributions. The mixture distribution is characterized by three sets of parameters: the weight of each of the components $\hat{\eta}_{t+1} = (\hat{\eta}_{1,t+1},\hat{\eta}_{2,t+1}, ..., \hat{\eta}_{N,t+1})$, their conditional mean $\hat{\mu}_{t+1} = (\hat{\mu}_{1,t+1}, \hat{\mu}_{2,t+1}, ..., \hat{\mu}_{N,t+1})$ and their conditional variance $\hat{\sigma}_{t+1}^2 = (\hat{\sigma}_{1,t+1}^2, \hat{\sigma}_{2,t+1}^2, ..., \hat{\sigma}_{N,t+1}^2)$. The conditional mean and variance of the mixture distribution are respectively given by $\overline{\mu}_{t+1} = \sum_{i=1}^N \hat{\eta}_{i,t+1}\hat{\mu}_{i,t+1} $ and $\overline{\sigma}_{t+1}^2 = \sum_{i=1}^N \hat{\eta}_{i,t+1}(\hat{\sigma}_{i,t+1}^2 + (\hat{\mu}_{i,t+1}-\overline{\mu}_{t+1})^2)$. The input of the ELU-RMDN is a time series $\{r_t\}_{t=1}^T$. The first part of the ELU-RMDN is the mixing network, which takes as input $r_{t}$ and estimates the weights $\hat{\eta}_{t+1}$ of the mixture components.
The output layer of this network is followed by a softmax activation function $s_i(y_1,\ldots,y_N) = \frac{e^{y_i}}{\sum_{j=1}^N e^{y_j}}$ to ensure that the condition $\sum_{i=1}^N \hat{\eta}_{i, t+1}=1$ is satisfied. The estimate $\hat{\eta}_{t+1}$ is thus defined as
\begin{equation}
\hat{\eta}_{i, t+1}= s_i \left(u_{n, 1} (U_{1,1}r_t +U_{1,0})+\sum_{k=2}^K u_{n, k} \tanh(U_{k, 1} r_t +U_{k,0}) + u_{n, 0}, \, n=1,\ldots,N \right)
\end{equation}
where $U$ are the input-to-hidden layer weights and $u$ are the hidden-to-output layer weights. The second part of the model is the mean-level network, which takes as input $r_{t}$ and estimates the mean $\hat{\mu}_{i,t+1}$ for each mixture component $i$. The estimate $\hat{\mu}_{t+1}$ is thus defined through
\begin{equation}
\hat{\mu}_{i, t+1}= v_{i, 1} (V_{1,1}r_t +V_{1,0})+ \sum_{k=2}^K v_{i, k} \tanh(V_{k, 1} r_t +V_{k,0})+ v_{i, 0}
\end{equation}
where $V$ are the input-to-hidden layer weights and $v$ are the hidden-to-output layer weights. The third part of the ELU-RMDN is the variance recurrent network, which estimates $\hat{\sigma}_{t+1}^2$, the conditional variance of the various components of the mixture. As in the GARCH model, this network takes as input the residual $e_{t}^2 = (r_t - \overline{\mu}_{t})^2$ and the last estimate for the conditional variance $\hat{\sigma}_{t}^2$. The output layer is fed through what we refer to as the positive exponential linear unit, defined as $e(x, \alpha) = \mathrm{ELU}(x, \alpha) +1 +\epsilon$, where $\epsilon$ is a very small number and $\mathrm{ELU}(x, \alpha)$ is the exponential linear unit defined as
\begin{equation}
ELU(x, \alpha) = \begin{cases}
x & \text{if } x >0\\
\alpha (e^x - 1) & \text{otherwise}
\end{cases}
\end{equation}
The hyperparameter $\alpha$ controls the saturation value for negative inputs~\citep{elu}. The ELU activation function improves the numerical stability of MDNs and ensures that the estimated variance is always greater than 0~\citep{Guillaumes2017MixtureDN, deepandshallow}. We refer to this activation function as the ``positive exponential linear unit'' throughout this paper. The use of this activation function is what differentiates our architecture from that of \cite{Nikolaev2012}. The estimate for $\hat{\sigma}_{i,t+1}^2$ is
\begin{align}
\hat{\sigma}_{i,t+1}^2&= e \bigg(w_{i, 1} (W_{1,1}e_{t}^2 +W_{1,0})+ \sum_{k=2}^K w_{i, k} \tanh(W_{k, 1} e_t^2 +W_{k,0})\\
\nonumber &+ w_{i, K+1} (W_{K+1,1}\hat{\sigma}_{i,t}^2 +W_{K+1,0})+ \sum_{k=K+2}^{2K} w_{i, k} \tanh(W_{k, 1} \hat{\sigma}_{i,t}^2 +W_{k,0})+ w_{i, 0} \bigg)
\end{align}
where $w$ are the hidden-to-output weights and $W$ are the input-to-hidden weights.
\footnote{Note that to address identifiability issues, some of the weights can be set to either $0$ or $1$ without loss of generality. For instance, we can set $U_{1,1}=V_{1,1}=W_{1,1}=1$ and $U_{1,0}=V_{1,0}=W_{1,0}=0$.}
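For reference, a minimal PyTorch rendering of the positive exponential linear unit is given below; the value of $\epsilon$ is an assumption.
\begin{verbatim}
import torch.nn.functional as F

def positive_elu(x, alpha=1.0, eps=1e-6):
    # ELU shifted by 1 + eps; for alpha <= 1 the output is strictly
    # positive, which makes it safe to use as a variance estimate.
    return F.elu(x, alpha=alpha) + 1.0 + eps
\end{verbatim}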
A detailed graphical representation of the architecture is presented in Figure \ref{fig:architecture}.
\begin{figure}[!h]
\centering
\include{rmdn_normal_diagram}
\caption{Architecture of the proposed RMDN}
\label{fig:architecture}
\end{figure}
It is worth noting that using a linear node in the hidden layer implies that the AR(1)-GARCH(1,1) model is a particular case of the ELU-RMDN model. Consider an AR(1)-GARCH(1,1) defined as
\begin{align}
r_{t+1} & = \mu_{t+1} + \sigma_{t+1} e_{t+1} \\
\nonumber\mu_{t+1} &= a_0 + a_1 r_t\\
\nonumber\sigma_{t+1}^2 &= \alpha_0 + \alpha_1 e_{t}^2 +\beta_1 \sigma_{t}^2
\end{align}
where $e_t\sim N(0,1)$ denotes an innovation. Now consider an ELU-RMDN with only $N=1$ component and with only the linear nodes in the hidden layer of each network. We achieve this by setting $V_i=0$ for $1<i \leq K$ and $W_i = 0$ for $1<i \leq K$ and for $K+2<i \leq 2K$.
Since only one component is considered, we can ignore the mixture weights $\hat{\eta}_{t+1}$. The RMDN-GARCH can then be reformulated mathematically as
\begin{align}
r_{t+1} &= \mu_{t+1} +\sigma_{t+1}e_{t+1}\\
\nonumber\hat{\mu}_{t+1} &= v_1 (V_{1}r_t +V_{0}) +v_0\\
\nonumber\hat{\sigma}_{t+1}^2&= e(w_{1} (W_{1}e_{t}^2 +W_{0}) + w_{2} (W_{2}\hat{\sigma}_{t}^2 +W_{3}) +w_{0})
\end{align}
Such a representation of the variance $\hat{\sigma}_{t+1}^2$ is equivalent to the conditional variance of a GARCH(1,1) when the input of the ELU is greater than $0$. The conditional mean $\hat{\mu}_{t+1}$ also simplifies to an expression equivalent to an AR(1) model. This implies that the proposed ELU-RMDN reduces to an AR(1)-GARCH(1,1) when the input of the ELU is greater than 0.
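To make the nesting explicit, the following sketch (with an illustrative parameter layout of our choosing) evaluates the single-component, linear-only variance update and indicates where the GARCH(1,1) recursion is recovered.
\begin{verbatim}
import torch
import torch.nn.functional as F

def linear_variance_update(e2_prev, sigma2_prev, w, W, alpha=1.0, eps=1e-6):
    # Single-component, linear-only ELU-RMDN variance update.
    # w : (w0, w1, w2); W : 2x2 tensor (layout is illustrative).
    # For a positive pre-activation z, positive-ELU(z) = z + 1 + eps,
    # so the update is affine in e2_prev and sigma2_prev: a GARCH(1,1)
    # recursion with alpha1 = w[1]*W[0,0], beta1 = w[2]*W[1,0] and the
    # remaining constants absorbed into alpha0.
    z = (w[0] + w[1] * (W[0, 0] * e2_prev + W[0, 1])
              + w[2] * (W[1, 0] * sigma2_prev + W[1, 1]))
    return F.elu(torch.as_tensor(z), alpha=alpha) + 1.0 + eps
\end{verbatim}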
In this paper the model is implemented using PyTorch \citep{paszke2017automatic}. The ELU-RMDN is trained by minimizing the negative log-likelihood. Assuming a mixture of $N$ Gaussian components and denoting by $\phi(\cdot,\mu,\sigma^2)$ the density of a Gaussian distribution with mean $\mu$ and variance $\sigma^2$, the negative log-likelihood $-\ell$ is
\begin{eqnarray}
-\ell &=& \sum_{t=1}^T -\log\sum_{i=1}^N \hat{\eta}_{i,t}\,\phi(r_t,\hat{\mu}_{i,t}, \hat{\sigma}_{i,t}^2) \notag
\\ &=& \sum_{t=1}^T -\log\sum_{i=1}^N \exp \biggl[ \log \hat{\eta}_{i,t}- \frac{1}{2} \log 2\pi - \log \hat{\sigma}_{i, t} - \frac{1}{2} \frac{(r_t - \hat{\mu}_{i,t})^2}{\hat{\sigma}_{i, t}^2}\biggr]. \label{loglikreformul}
\end{eqnarray}
The formulation \eqref{loglikreformul} makes it possible to use the \textit{logsumexp} trick from \cite{Blanchard2016} to avoid underflow issues.
We use the \textit{logsumexp} implementation available in the PyTorch library \citep{paszke2017automatic} as a direct replacement of the expression $\log \sum_{i=1}^N \exp$ in the loss function.
We train the model using the Adam optimizer \citep{Adam2014} without weight regularization.
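As a minimal sketch of the loss in Eq.~\eqref{loglikreformul} (tensor shapes are our convention):
\begin{verbatim}
import math
import torch

def mixture_nll(r, eta, mu, sigma):
    # r: (T,) returns; eta, mu, sigma: (T, N) mixture parameters.
    # torch.logsumexp over the component dimension avoids underflow.
    log_comp = (torch.log(eta)
                - 0.5 * math.log(2.0 * math.pi)
                - torch.log(sigma)
                - 0.5 * ((r.unsqueeze(1) - mu) / sigma) ** 2)
    return -torch.logsumexp(log_comp, dim=1).sum()
\end{verbatim}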
\section{Pretraining Method}
As previously noted, a well-known problem with mixture density networks is their propensity to hit (potentially bad) local minima~\citep{hepp_mixture_2022}. This section details a method for pretraining the ELU-RMDN, which makes it possible to avoid local minima that would result in a log-likelihood lower than that of the GARCH model. Indeed, as mentioned above, the ELU-RMDN model is a generalization of the Gaussian AR-GARCH model, with the latter being nested within the former. A consequence of this relationship between the GARCH and the ELU-RMDN is that the optimized log-likelihood of the ELU-RMDN should always be greater than or equal to that of an AR-GARCH.\footnote{Although the ELU-RMDN of this paper is designed to nest the AR(1)-GARCH(1,1), the method proposed can easily generalize to encompass AR(p)-GARCH(P,Q) models.} However, in early experiments the ELU-RMDN did not always converge to a minimum with a log-likelihood greater than that of the GARCH. The model was particularly unstable and exhibited the so-called \textit{persistent NaN problem} often encountered with mixture density networks, as defined by \cite{Guillaumes2017MixtureDN}.
We propose a pretraining procedure for the ELU-RMDN that makes use of the linear nodes of the hidden layer of each neural network. The intuition behind our approach is that the parameters of a linear ELU-RMDN should be a good starting point for the parameters of a non-linear ELU-RMDN. The pretraining starts by freezing the non-linear nodes of the hidden layer of each neural network. This is accomplished by setting the gradients of the nodes with non-linear activation functions in the hidden layer to 0 every time the gradient is computed. The model is then trained using all the nodes in the hidden layer.
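One way to realize this freezing is a gradient hook that zeroes the rows of the hidden-layer weights corresponding to the non-linear nodes; the sketch below is ours, and the index layout is hypothetical.
\begin{verbatim}
def freeze_nonlinear_rows(weight, nonlinear_idx):
    # weight: hidden-layer weight tensor of one sub-network;
    # nonlinear_idx: row indices of the tanh nodes (hypothetical).
    # The hook zeroes their gradients, so only the linear part of
    # the network is updated during the pretraining epochs.
    def hook(grad):
        grad = grad.clone()
        grad[nonlinear_idx] = 0.0
        return grad
    return weight.register_hook(hook)  # handle.remove() to unfreeze
\end{verbatim}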
\section{Performance Evaluation}
We present numerical experiments comparing the performance of the proposed training approach based on the proposed linear pretraining to that of an ELU-RMDN that is initialized randomly and that is not pretrained.
\subsection{Methodology}
The performance of the two training methods is evaluated on daily returns of 10 stocks selected randomly from the S\&P 500 universe. We train the model on the period extending between September 20, 2017 and September 10, 2021. We first initialize all of the ELU-RMDN parameters randomly, with the exception of the parameters of the output node of the variance recurrent network and the biases of all networks, which are initialized to 1. However, as the pretraining method only trains the linear nodes of the neural network, we initialize the weights of the non-linear nodes to zero. This step ensures that the model converges to the optimal linear model.
Both models are initialized randomly using 10 different seeds (one per training run) selected randomly between 0 and 50000. For the pretrained ELU-RMDN, we pretrain the ELU-RMDN for 20 epochs and then train all the nodes for 300 epochs. The ELU-RMDN without pretraining is trained for 300 epochs. The performance of the two training methods is evaluated by splitting the training runs for each stock into 2 groups, which we define as ``Not Converged'' and ``Converged''. The ``Not Converged'' group consists of the training runs for which the log-likelihood is equal to NaN or is unreasonable (smaller than $-100,\!000$). NaN values are due to the ELU-RMDN evaluating the parameters of the mixture to NaN, which can occur, for instance, when the gradient explodes. The ``Converged'' group comprises the remainder of the training runs, which exhibit a reasonable log-likelihood. We also evaluate, for each stock, the average log-likelihood across runs after training for a GARCH(1,1) with an underlying Gaussian distribution and for the ELU-RMDN with and without pretraining.
\subsection{Results}
We report the convergence status for all the training groups in Table \ref{tab:is_table}. We report the average log-likelihood in Table \ref{tab:in_sample_nll}.
\begin{table}[!ht]
\include{is_table}
\caption{Results of In-Sample Convergence Tests across 10 stocks}
\label{tab:is_table}
\end{table}
The results show that the ELU-RMDN with pretraining converges in all cases, while the ELU-RMDN without pretraining does not converge in the majority of cases. The results presented in Table \ref{tab:in_sample_nll} show that using the pretraining leads on average to a likelihood greater than that of the GARCH model, with the exception of the training runs performed on the EMN stock. However, after retraining the model on the EMN return series with a smaller learning rate, the model was able to converge to a negative log-likelihood well below that of the GARCH. This suggests that the choice of learning rate is also an important factor in the convergence of the ELU-RMDN.
\begin{table}[!ht]
\centering
\include{in_sample_nll}
\caption{Average Log-Likelihood across 10 stocks}
\label{tab:in_sample_nll}
\end{table}
\section{Conclusion and Future Work}
In this paper, we propose a novel method for training a recurrent mixture density network. Through empirical analysis on stock return data, we show that our pretraining method improves the robustness of our model to bad local minima. Our results also show that the ELU-RMDN trained using our pretraining method does not suffer from frequently obtaining NaN values, which are often obtained in absence of pretraining. Further research could focus on the application of pretraining methods to other neural networks for which a linear counterpart model exists, such as the autoencoder whose linear counterpart is principal component analysis (PCA).
\bibliographystyle{unsrtnat}
|
{
"arxiv_id": "2302.14163",
"language": "en",
"timestamp": "2023-03-01T02:03:18",
"url": "https://arxiv.org/abs/2302.14163",
"yymm": "2302"
} | \section{Introduction}
Since its inception, semantic segmentation has received a great deal of attention, and remarkable progress has been made in the field. The advent of deep learning led to a new era, with methods aiming to surpass human performance. However, most semantic segmentation methods are trained with dense pixel-level annotations and require large numbers of examples for every category or task.
Humans, in contrast, are able to recognize, or at least have some context about, new objects without ever having seen them before. Although natural to humans, this task requires a complex understanding of the semantic meaning of a class/category never seen by a learner and the capacity to generalize knowledge gained from seen classes. Most fully supervised methods, although performing comparably with humans on seen objects, struggle with this generalisation to unseen objects. Obtaining large numbers of fully annotated samples for every target class can be extremely expensive and often impractical, so models need to be able to generalize to unseen classes.
To bridge the gap between human and artificial learning, increasing attention is being directed to more challenging and data-efficient settings like Open Vocabulary Semantic Segmentation (OVSS)~\cite{xu2021simple,ma2022open}. In this paper, we focus on closely related standard OVSS settings like Generalized Zero-Shot Segmentation (GZSS), Zero-Shot Segmentation (ZSS), Few-Shot Segmentation (FSS) and Cross-dataset Segmentation in an inductive setting (unlabeled pixels and novel class names are not observed during training), as opposed to the transductive setting (unlabeled pixels and novel class names may be observed during training). In ZSS~\cite{bucher2019zero, ding2022decoupling, li2022languagedriven, baek2021exploiting, liu2022open}, a model is provided with a set of base (seen) classes to learn from and is then expected to perform well on the novel (unseen) classes it does not have access to. A commonly used setting, Generalised ZSS (GZSS), further imposes the expectation that the model, in addition to novel classes, should retain its performance on base classes as well. Similar to how humans can relate visual understanding of classes with similar-in-meaning names or categories, ZSS methods generalize semantic visual information using the semantic textual information provided by language models. Another, slightly relaxed, data-efficient setting is FSS~\cite{wang2019panet, Mao_Zhang_Wang_Zhang_Xiang_Pan_2022, kang2022integrative, lang2022learning, iqbal2022msanet, 10.1007/978-3-031-16452-1_8,pandey2022robust}, where the model is expected to generalize to unseen classes but is additionally given a few support images with annotated unseen target classes. Typical FSS methods demonstrate admirable performance using support samples ranging from one to five examples for every unseen category.
Besides ZSS and FSS, many other problem settings aim to reduce the burden of large-scale annotations. A particularly interesting approach is Weakly Supervised Segmentation (WSS)~\cite{jiang2022l2g, zhang2020reliability, zhou2022regional, lee2021railroad}, where costly pixel-level annotations for training classes are replaced with relatively inexpensive weak labels like scribbles and bounding boxes. A particularly challenging setting here is that of image tags, where every image is accompanied by only the information of classes present in it. Without any information allowing the model to localize objects, this setting is perhaps the \textit{hardest} for WSS.
\begin{figure}[!h]
\centering
\includegraphics[width=0.75\linewidth]{figures/setting.pdf}
\caption{An overview of the problem setting explored. A common training procedure is used, where a set of seen classes with \textit{only image-level labels} is exposed to the model. The same model is used to evaluate the WGZSS and WFSS settings. In WGZSS, the model needs to segment the classes ``person'' and ``tvmonitor'', which it had not seen during training, along with the seen classes. In WFSS, each task has a corresponding target class label in the support and the model segments only that class in the query. For WGZSS, no labels are available during testing, while only image-level support labels are present during WFSS testing, as opposed to FSS, which has pixel-level support labels.}
\label{fig:setting}
\end{figure}
In this paper, we explore the challenging and practical problems of weakly supervised ZSS (WZSS) and weakly supervised FSS (WFSS). With an expectation of generalization to unseen classes and reliance on only weak image-level labels, these settings greatly reduce the annotation cost and assess a method's performance in challenging scenarios commonly faced by humans. A clear rift is evident between existing ZSS and FSS methods, where ZSS methods leverage language model-based learning and try to learn mappings between visual and textual features, while FSS methods tend to employ matching-based approaches that search semantically similar features between support and queries. We argue that when using weak labels, the Few-Shot tasks can also be de-coupled into WSS for learning to segment seen categories and ZSS for generalizing this learning to unseen categories. With this, we propose Weakly-Supervised Language-Guided Segmentation Network (WLSegNet), a unified method that can perform both WZSS and WFSS with a single training procedure. We also benchmark weakly supervised Cross-dataset segmentation setting where we train with weak image-level labels on one dataset (like MS COCO) and test on novel classes of a completely different dataset (like PASCAL VOC).
\begin{figure}[!h]
\centering
\includegraphics[width=0.8\linewidth]{figures/prompt_learn.png}
\caption{An overview of the proposed prompt learning by WLSegNet. The fixed and the learned prompts overfit on seen classes during training, while WLSegNet prompts employ mean features from the image batch to make prompts image-aware as opposed to class-aware, thereby avoiding overfitting on seen classes and generalising well to unseen classes. Also, as WLSegNet utilizes a batch of images instead of a single image, the proposed prompt learning is more computationally efficient.}
\label{fig:our_prompt_overview}
\end{figure}
We further address limitations like overfitting on seen classes and large computational requirements reported by existing prompt-based learning methods~\cite{zhou2022learning, zhou2022conditional}. We employ batch aggregate (mean) image features to make learnable prompts image-aware, while maintaining low computational requirements.
The learned prompts avoid overfitting on seen classes and generalise well to novel classes without aid from external datasets or fine-tuning. An overview of our proposed prompt learning approach is shown in Figure~\ref{fig:our_prompt_overview}. In summary, our contributions are fourfold:
\begin{itemize}
\item We propose a model to perform WZSS, WFSS and Cross-dataset segmentation in a unified manner in an inductive setting. To the best of our knowledge, we are the first to tackle these challenging yet impactful problems together, avoiding fine-tuning and the use of external datasets, with a frozen vision-language model (CLIP~\cite{radford2021learning}).
\item We propose an optimal pipeline that decouples the WZSS and WFSS problems into WSS and ZSS. This facilitates the optimization of WSS, mask proposal generation, and vision-language models, separately.
\item We propose a novel mean instance aware prompt learning method that makes prompts more generalizable and less prone to overfitting while scaling the prompt learning stage to larger batch sizes and improving its computation speed.
\item We perform extensive experiments on widely used PASCAL VOC and COCO datasets for WZSS and WFSS, beating previous weakly supervised baselines by large margins and obtaining results competitive with methods using strong supervision. We benchmark Cross-dataset segmentation by training with image-level labels of the COCO dataset and testing on novel classes of PASCAL VOC.
\end{itemize}
\section{Related Work}
\subsection{Zero-Shot Segmentation}
Existing zero-shot methods are broadly generative or discriminative, some of which further incorporate self-training to capture latent features of novel classes. Generative methods include ZS3Net~\cite{bucher2019zero} which trains a generator to generate synthetic features for unseen classes that are used along with real features of seen classes to train the classifier head, and CagNet~\cite{gu2020context} where-in feature generation is guided by contextual-information present in the image. Discriminative methods like SPNET~\cite{xian2019semantic} and LSeg~\cite{li2022languagedriven} map the pixel-level features of an image with the word embeddings of its class obtained from pre-trained word encoders such as word2vec~\cite{mikolov2013efficient} or fastText~\cite{bojanowski2017enriching}. STRICT~\cite{pastore2021closer} employs SPNET as a pseudo-label generator coupled with consistency regularization to improve Zero-Shot performance. Recent methods such as ZegFormer~\cite{ding2022decoupling} and SimSeg~\cite{xu2021simple} first learn to generate class-agnostic mask proposals using MaskFormer~\cite{cheng2021per}, and then classify proposal regions using knowledge of pre-trained vision-language models such as CLIP~\cite{radford2021learning}. ZegCLIP~\cite{zhou2022zegclip} proposes a one-stage approach that directly extends CLIP’s zero-shot prediction capability from image to pixel level.
\subsection{Weakly Supervised Segmentation}
Weakly Supervised Segmentation methods deal with the practical setting of generating segmentation masks with models trained with weak forms of supervision such as bounding-box~\cite{dai2015boxsup, oh2021background,lee2021bbam}, scribbles~\cite{lin2016scribblesup,vernaza2017learning,liang2022tree}, points~\cite{bearman2016s} and image-level labels. Here we focus on WSS methods using image-level labels only. A commonly used strategy is to train a classifier and then use Class Activation Maps (CAMs) to obtain pixel-level pseudo labels. Some recent methods try to expand the initial seed regions highlighted by CAMs via adversarial erasing~\cite{wei2017object,kumar2017hide,hou2018self}, region growing~\cite{kolesnikov2016seed,huang2018weakly,wang2018weakly}, random-walk~\cite{vernaza2017learning,ahn2018learning,ahn2019weakly} and stochastic inference~\cite{lee2019ficklenet} to name a few. Some methods refine initially coarse attention maps by trying to maximize object region and minimise background coverage. These include EPS~\cite{lee2021railroad} which uses supervision of saliency-maps from off-the-shelf saliency detectors to guide learning and RSCM~\cite{jo2022recurseed} which incorporates high-order feature-correlation and improves masks through seed recursion. CIAN~\cite{fan2020cian} propagates pixel-wise affinity to pixels in the neighbourhood. RCA~\cite{zhou2022regional} maintains a memory bank for storing object features across the images in the dataset which serves as a support during pseudo-label generation. L2G~\cite{jiang2022l2g} transfers features learned by local region-wise classifiers to a global classification network, thus capturing greater detail and obtaining higher-quality attention maps.
\subsection{Semantic Embeddings and Language Models}
Transfer of knowledge from seen to unseen classes requires auxiliary information.
Such information can be provided by the semantic embeddings of class names obtained from word encoders like word2vec~\cite{mikolov2013efficient} and fastText~\cite{bojanowski2017enriching}, which are trained on large-scale word datasets without human annotation, exploiting the distributional property that words frequently occurring in similar contexts have close feature representations. More recently, transformer-based vision-language models such as CLIP~\cite{radford2021learning} and ALIGN~\cite{jia2021scaling} have been pre-trained on large-scale image-text pairs from the web in a contrastive manner for zero-shot classification. The key idea when retrieving features in CLIP is to pass a sentence containing a class name along with context information, which may be a predefined prompt template~\cite{ding2022decoupling} or learned~\cite{zhou2022learning,zhou2022conditional}. \cite{zhou2022learning} shows that dataset-specific context information in prompt templates improves Zero-Shot classification accuracy using CLIP. In our work, we adopt CLIP and propose a novel prompt-learning technique that incorporates instance-specific context using a batch mean of input features in addition to dataset-specific context learning. A parallel line of works~\cite{ghiasi2022scaling, liang2022open,mukhoti2022open,luo2022segclip,xu2023learning,ren2023viewco} uses image-level labels/captions and pre-trains/fine-tunes vision-language or language pre-training models (like ALIGN, CLIP, BERT~\cite{devlin2018bert}), which
require large-scale external datasets or perform transductive segmentation~\cite{zhou2022extract}. Fusioner~\cite{ma2022open}, a cross-modality fusion module, explicitly bridges a variety of self-supervised pre-trained visual/language models for open-vocabulary semantic segmentation.
\subsection{Zero and Few-shot Segmentation with Weak Supervision}
Very few works have explored the practical setting of Zero and Few-Shot segmentation with weak annotations. \cite{lee2022pixel} follows a meta-learning approach for few-shot segmentation. For a support image and a given weak label, it generates CAMs for a set of seen classes using a pre-trained network, then performs a weighted summation of these with the weights proportional to similarities of the textual features obtained by word2vec. Similarly, \cite{shen2022dual} first proposed the setting of WZSS using only image labels for seen classes as supervision. Another line of work is open-world segmentation \cite{liu2022open} where models are trained using large-scale image captioning datasets without a need for dense pixel annotations. We take inspiration from previous works and propose a novel pipeline that unifies both Zero-Shot and Few-Shot segmentation using only weak labels as supervision.
\section{Methodology}
\label{sec:methodology}
\subsection{Problem Setting}
The task of WFSS includes train $\mathcal{D}_{\mathrm{train}}$ and test $\mathcal{D}^{F}_{\mathrm{test}}$ weakly labelled datasets having non-overlapping class sets.
The test dataset $\mathcal{D}^{F}_{\mathrm{test}}$ consists of a set of episodes with each episode containing $N$-way $K$-shot tasks with support and query sets.
The support set $\mathcal{S}_i$ has $K$ image $(\mathcal{I}_{\mathcal{S}})$ and image-level label $(L_{\mathcal{S}})$ pairs with a total of $N$ semantic classes i.e. $\mathcal{S}_i =\{(\mathcal{I}_{\mathcal{S}}^{k},L_{\mathcal{S}}^{k})\}$ where $L^{k}_{\mathcal{S}}$ is the ground-truth \textit{image tag} for $k$-th shot,
and $k=1,2, ..., K$.
The query set $\mathcal{Q}_i$ has $N_{\mathcal{Q}}$ images $(\mathcal{I}_{\mathcal{Q}})$. The objective in each test episode $i$ is to obtain high-quality segmentation predictions for the query set $\mathcal{Q}_i$, relying on the weakly labelled support set $\mathcal{S}_i$, both of whose classes are never seen during training.
The training dataset $\mathcal{D}_{\mathrm{train}}$ consists of a set of images and their corresponding image tags (weak labels). A common approach in FSS is to break down $\mathcal{D}_{\mathrm{train}}$ into episodes having support and query sets and then train episodically using metric learning.
However, there is no restriction on the use of $\mathcal{D}_{\mathrm{train}}$, i.e. a method may instead decide to use the images of support and query sets in a non-episodic way. WZSS is logically an extension of WFSS in that it is simply an N-way 0-shot WFSS task. However, it differs in some critical aspects in its formulation. WZSS consists of $\mathcal{D}_{\mathrm{train}}$ and $\mathcal{D}^{Z}_{\mathrm{test}}$, where the training dataset is identical to that in WFSS but the testing dataset $\mathcal{D}^{Z}_{\mathrm{test}}$ simply consists of a set of images and the set of distinct classes $C_{test}$. Let the set of all distinct classes present in the training dataset $\mathcal{D}_{\mathrm{train}}$ be $C_{train}$. Depending on the nature of $C_{train}$ and $C_{test}$, two different settings are possible. The first, default WZSS setting is when $C_{train} \cap C_{test} = \phi$, i.e. the classes during testing are disjoint from the classes seen during training. The second setting of generalised Zero-Shot (WGZSS) holds when $C_{train} \subset C_{test}$, i.e. the test dataset contains both seen and unseen classes.
\begin{figure*}[]
\centering
\includegraphics[width=0.75\linewidth]{figures/weak_sup.pdf}
\caption{Weakly supervised segmentation using image-level labels.}
\label{fig:weak}
\end{figure*}
\subsection{Pseudo-label Generation Module}
We adopt L2G~\cite{jiang2022l2g}, a weakly supervised semantic segmentation method, for this task. Traditional CAMs obtained from classification networks tend to highlight only the discriminative regions of an object, making them unsuitable for semantic segmentation. L2G transfers the knowledge learnt by multiple local regional classification networks to a single global classification network, thus expanding and improving the region of focus in attention maps.
In this method, a multi-target classification network is trained on the set of seen classes $C_{train}$. For a given input image $\mathcal{I}$, the corresponding pseudo-segmentation mask $\mathcal{M}_{pseudo}$ is obtained from CAMs produced by the network. We denote the fully trained model as the Pseudo-label Generation (PLG) Module. The masks generated by the PLG are then used to train the Class-Agnostic Mask Generation (CAMG) module. A few samples of the attention maps learnt can be seen in Figure~\ref{fig:weak}.
While we use L2G~\cite{jiang2022l2g} as our pseudo-label generator, we would like to point out that it can easily be replaced by another WSS method without changing the overall architecture much due to the decoupled design.
Thus, rather than being constrained by L2G~\cite{jiang2022l2g}, we can leverage advances in the field of WSS to further improve the performance of WLSegNet. We do such an experiment and observe that with RCA~\cite{zhou2022regional} as the pseudo-label generator, the performance does not change drastically.
\subsection{Class-Agnostic Mask Generation (CAMG)}
The task of Zero-Shot segmentation is broken down into class-agnostic aggregation of pixels through the CAMG module followed by CLIP classification of aggregated regions (segments). Similar to past works, we adopt MaskFormer~\cite{cheng2021per}, a segmentation model that generates mask proposals for different objects present in the image irrespective of the class of any object. Specifically, for a given image $\mathcal{I}$ fed to the CAMG module, a set $\mathcal{M}$ of $n$ class-agnostic binary mask proposals are generated, such that $\mathcal{M} = \{m_1,m_2, \ldots, m_n\}$. During training, only the pseudo-labels obtained from PLG are used as supervision in MaskFormer’s Mask Loss.
For each mask or segment proposal $m \in \mathcal{M}$, we create a corresponding input proposal $i$ by multiplying input image $\mathcal{I}$ with $m$ to zero out the background in corresponding segments. The input proposals $\mathcal{I}_p = \{i_1, i_2, \ldots, i_n$\} thus obtained are then passed to CLIP for segment classification.
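A minimal sketch of this masking step (tensor shapes are our convention):
\begin{verbatim}
import torch

def make_input_proposals(image, masks):
    # image: (3, H, W); masks: (n, H, W) binary mask proposals.
    # Zeroes out the background of each proposal, producing the n
    # input proposals fed to the CLIP image encoder.
    return image.unsqueeze(0) * masks.unsqueeze(1)  # (n, 3, H, W)
\end{verbatim}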
Instead of MaskFormer~\cite{cheng2021per}, the CAMG module can use other methods like gPb-UCM~\cite{arbelaez2010contour} and Selective Search~\cite{uijlings2013selective} as well. Since~\cite{xu2021simple} showed superior performance with MaskFormer, we perform all our experiments with it.
\subsection{CLIP language model}
For Zero-Shot classification of an image using CLIP, we first feed the CLIP text encoder with a corresponding text label in the form of a natural sentence for each class present. This can be a simple prompt such as “photo of a \{class\}” where \{class\} is replaced by the appropriate class name. However, such simple fixed prompts fail to capture context information present in the image and this affects the performance of the downstream task of segment classification.
\begin{figure}[]
\centering
\includegraphics[width=.75\linewidth]{figures/pr_prompt_learning.jpg}
\caption{Comparison of the proposed prompt learning strategy in WLSegNet with prior works. A) Fixed prompts~\cite{ding2022decoupling}, B) Learned prompts~\cite{zhou2022learning} with dataset-specific context vectors, C) WLSegNet (Ours): Batch mean instance features ($\mu^B$) incorporated into learned context vectors. $\lambda$ controls the extent of $\mu^B$ added to context vector $V$. Red-dotted arrow denotes backpropagation to update the context vector. During prompt learning, CLIP encoders are frozen.
}
\label{fig:prompt}
\end{figure}
\subsubsection{Prompt Learning}
To overcome the limitations of fixed prompts, many recent learned prompt works \cite{zhou2022learning, deng2022learning} propose to incorporate dataset-specific context information by using learnable prompt context vectors that can be catered to work best for each particular class. These learned prompts are biased towards seen classes. \cite{zhou2022conditional} further improved upon this by using an additional input-conditional token, making the prompts less sensitive to class shift and thus, more generalizable to unseen classes. However, they report increased computation and resulting restrictions on the batch size. We overcome these limitations and propose a mean instance aware prompt learning strategy to learn better and more generalizable prompts. A brief overview of the different prompt strategies can be seen in Figure~\ref{fig:prompt} for a better comparison.
Specifically, our prompt learning approach learns a context vector $V$ to capture context information of the dataset. $V$ is constructed such that it contains $k$ prompt tokens each of dimension $d$, expressed by $V = [v]_1[v]_2...[v]_k$. Class prompt proposal $V_c$ is then obtained by concatenating the context vector $V$ with the class embedding of class $c$ such that $V_c = [v]_1[v]_2...[v]_k[w_c]$ where $w_c$ represents the class embedding. For a given image $\mathcal{I}$ in input batch $B$ of size $b$, let $x$ be the image embedding obtained from the pre-trained CLIP Image Encoder. Image embedding $x$ is passed through a shallow neural network represented by $h_\theta(.)$ to obtain the instance-wise features $f = h_\theta(x)$. We then obtain the mean batch feature prototype $\mu^B$ as shown in Eq~(\ref{eq:mean}).
\begin{equation}
\label{eq:mean}
\mu^{B}=\frac{1}{b} \sum f_{i}, i \in\{1,2, \ldots, b\}
\end{equation}
Finally, for a given batch the mean instance aware class prompt $V_c^B$ is obtained as shown in Eq~(\ref{eq:lambda}), which is fed to the pre-trained CLIP text encoder $g(.)$ for Zero-Shot classification.
\begin{equation} \label{eq:lambda}
V_c^B = V_c + \lambda\, G^B
\end{equation}
$G^B$ is a $d \times (k+1)$ matrix whose $(k+1)$ columns are each equal to $\mu^{B}$. The extent of $\mu^B$ added to $V_c$ is controlled by the hyperparameter $\lambda$.
The class prediction probability for the given image is computed as shown in Eq~(\ref{eq:preduction}), where $t_c^B$ represents the text embedding of $V_c^B$ obtained from the CLIP text encoder, $C$ is the number of classes, $sim$ is cosine similarity and $\tau$ is the temperature coefficient.
\begin{equation}\label{eq:preduction}
p(y = c \mid x)=\frac{\exp \left(sim\left(x, t_c^B\right) / \tau\right)}{\sum_{i=1}^{C} \exp \left(sim\left(x,t_i^B\right) / \tau\right)}
\end{equation}
The class predictions, combined with the mask proposals, are then aggregated to obtain the semantic segmentation output. This yields more generalizable prompts compared to the existing methods shown in Figure~\ref{fig:prompt}. WLSegNet facilitates scaling of prompt learning to larger batch sizes and is thereby computationally faster than \cite{zhou2022conditional}, while also being less prone to overfitting on seen classes.
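The following sketch summarizes Eqs.~(\ref{eq:mean})--(\ref{eq:preduction}); here \texttt{h\_theta} and \texttt{text\_encoder} are stand-ins for the shallow network $h_\theta(\cdot)$ and the frozen CLIP text encoder, and the tensor layout is our assumption.
\begin{verbatim}
import torch
import torch.nn.functional as F

def class_probs(x, V, class_emb, h_theta, text_encoder,
                lam=0.01, tau=0.01):
    # x: (b, d) CLIP image embeddings; V: (k, d) context tokens;
    # class_emb: (C, d) class token embeddings.
    mu_B = h_theta(x).mean(dim=0)               # Eq. (eq:mean)
    ctx = V + lam * mu_B                        # shift context tokens
    prompts = torch.cat(
        [ctx.unsqueeze(0).expand(class_emb.size(0), -1, -1),
         (class_emb + lam * mu_B).unsqueeze(1)],
        dim=1)                                  # (C, k+1, d), Eq. (eq:lambda)
    t = text_encoder(prompts)                   # (C, d) text embeddings
    logits = F.cosine_similarity(x.unsqueeze(1),
                                 t.unsqueeze(0), dim=-1) / tau
    return logits.softmax(dim=-1)               # Eq. (eq:preduction)
\end{verbatim}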
\begin{figure*}[]
\centering
\includegraphics[width=1\linewidth]{figures/PR_fig_1.pdf}
\caption{WLSegNet consists of a \textbf{(A)} Pseudo-Label Generation (PLG) module that uses image-level supervision to generate a pseudo-segmentation mask $\mathcal{M}_{pseudo}$ for input image $\mathcal{I}$ among the seen classes in $C_{train}$, \textbf{(C)} a prompt learning module that incorporates batch mean instance features with learnable context vectors to generate instance-aware prompts, and \textbf{(B, D)} where $\mathcal{M}_{pseudo}$ generated by the PLG module governs training of the Class-Agnostic Mask Generation (CAMG) module, which produces a set of class-agnostic mask proposals $\mathcal{M}$. The mask proposals in $\mathcal{M}$ are multiplied with $\mathcal{I}$ and passed to CLIP (frozen) for segment classification among the $C_{test}$ classes, along with the learned class prompts. The segmented masks and corresponding class predictions thus obtained are aggregated to produce the final segmentation mask. $\mathrm{S}_{ij}$ is the cosine similarity between the image embedding $x_i$ corresponding to the $i^\mathrm{th}$ mask proposal and the text embedding $t_j$ corresponding to class $j$.}
\label{fig:methodology}
\end{figure*}
\subsection{Mask Aggregation}
For every class $c$, the mean instance aware class prompt $V_c^B$ is passed to the CLIP text encoder to obtain semantic embedding $t_c^B$, which is used as the weights for classifying the segment embeddings of $\mathcal{M}$ obtained from the CLIP Image Encoder. Note that some regions in the different proposals may overlap. Thus, the segmentation map $\mathcal{Z}$ is obtained by aggregating the different classified proposals as shown in Eq~(\ref{eq:aggregation}).
\begin{equation}\label{eq:aggregation}
\mathcal{Z}_{j}(q)=\frac{\sum_i m_{i}^{p}(q) C_{i}^{p}(j)}{\sum_k \sum_i m_{i}^{p}(q) C_{i}^{p}(k)}
\end{equation}
Here, $m_i^p(q)$ represents the predicted probability of pixel $q$ belonging to the $i$-th mask proposal $m_i$, and ${C}_i^p(j)$ is the predicted probability of mask proposals $m_i$ belonging to $j$-th category. This pixel-wise class probability $\mathcal{Z}_{j}(q)$ is the final semantic segmentation output.
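A compact sketch of Eq.~(\ref{eq:aggregation}) (tensor shapes are our convention):
\begin{verbatim}
import torch

def aggregate(mask_probs, class_probs, eps=1e-8):
    # mask_probs: (n, H, W) per-pixel proposal probabilities m_i^p(q);
    # class_probs: (n, C) per-proposal class probabilities C_i^p(j).
    # Returns Z: (C, H, W) pixel-wise class probabilities.
    num = torch.einsum('nhw,nc->chw', mask_probs, class_probs)
    return num / (num.sum(dim=0, keepdim=True) + eps)
\end{verbatim}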
\subsection{Weak Zero and Few-Shot Inference}
Our method unifies both Zero-Shot and Few-Shot segmentation objectives with common training but different strategies for inference. During (generalized) Zero-Shot testing, for each input image the model segments pixels into seen and unseen classes. The prompts used by CLIP are kept the same for all images, containing one prompt for each class. On the other hand, in the Few-Shot evaluation, the only classes predicted for a particular query are those present in the weak label of the support of a particular task. Thus, the set of prompts used by CLIP varies across different tasks. This subtle difference can also be seen in Figure~\ref{fig:setting}. Additionally for Few-Shot inference, we utilize saliency maps generated from off-the-shelf saliency detectors, as done in prior WSS works like EPS~\cite{lee2021railroad}. The saliency maps help refine the prediction of the difficult-to-describe background class while maintaining predictions of the foreground.
An overview of the complete procedure can be seen in Figure~\ref{fig:methodology}. Note how PLG training is completely decoupled from the rest of the method. This structure is desirable: because generating good pseudo-labels for seen classes and generalizing to unseen classes are separate tasks, the two components can be developed and optimized independently.
Besides, the unified approach greatly reduces the training cost since a single training is sufficient for evaluation on WGZSS, WZSS, WFSS and Cross-dataset settings.
\section{Implementation Details}
We use one Nvidia A100 GPU for our experiments on PASCAL VOC and six Nvidia 1080 GPUs for our experiments on MS COCO. For pixel pseudo-labelling, we use the ResNet38 backbone commonly used in the WSS literature. For mask proposals, we use a ResNet50 backbone for COCO and a ResNet101 backbone for PASCAL VOC, while for the CLIP language model, we use a ViT-B/16 backbone. CLIP remains frozen during our training, and we initialise it with weights pre-trained on publicly available image-caption data. All COCO experiments use a batch size of 32, while PASCAL VOC experiments use a batch size of 16. We set the tradeoff hyperparameter $\lambda$ to 0.01 and the temperature $\tau$ to 0.01. For $h_\theta(.)$, we use two fully connected layers with a ReLU in between. The embedding dimensions of both image and text are 512, and the sizes of the dense layers are chosen so that the dimensions match. Other relevant hyperparameters are kept the same as in previous works~\cite{jiang2022l2g, xu2021simple}. Our implementation is available at \url{https://github.com/mustafa1728/WLSegNet}.
\section{Experiments and Results}
\subsection{Datasets}
\input{tables/pascal_gzss}
\input{tables/pascal_zss}
\input{tables/coco_zss}
We perform experiments on the PASCAL VOC and MS COCO datasets, keeping settings similar to previous works. Our evaluation metrics closely follow the conventions used in \cite{lee2022pixel}.
\subsubsection{PASCAL VOC 2012}
This dataset consists of 11185 training images and 1449 validation images, with a total of 20 semantic classes. To compare with WFSS and WZSS methods, we use the Pascal-$5^i$ splits commonly used in the FSS literature: from the 20 classes, 4 folds are created such that in each fold, 15 classes are seen during training and 5 are reserved as novel classes for testing. Most previous works on generalised ZSS use a fixed set of seen and unseen classes (classes 1-15 seen, 16-20 unseen); we use the same split when comparing with WGZSS and GZSS methods. While a model predicts all classes (seen and unseen) in these generalised settings, ZSS and WZSS on a particular fold of Pascal-$5^i$ involve predicting only unseen classes. This differs from the unseen-mIOU in GZSS primarily in the number of classes being predicted at a time, and one can expect similar performances in both. The model is trained on the training images with seen classes retained and unseen classes ignored. Evaluation is performed on the validation images with the novel classes (and, for WGZSS, also the seen classes) retained.
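For concreteness, the fold structure follows the standard Pascal-$5^i$ convention from the FSS literature; the small sketch below encodes our understanding of that convention.

```python
def pascal_5i_split(fold):
    """Standard Pascal-5^i convention: fold i holds out classes
    5*fold+1 .. 5*fold+5 of PASCAL VOC's 20 classes as novel."""
    unseen = list(range(5 * fold + 1, 5 * fold + 6))
    seen = [c for c in range(1, 21) if c not in unseen]
    return seen, unseen

# fold 3 reserves classes 16-20 as novel -- the fixed split used by most
# generalised ZSS works (classes 1-15 seen, 16-20 unseen)
print(pascal_5i_split(3))  # ([1, 2, ..., 15], [16, 17, 18, 19, 20])
```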
\subsubsection{MS COCO 2014}
This dataset consists of a total of 82081 training images and 40137 validation images, with a total of 80 semantic classes. Similar to PASCAL VOC, we employ the COCO-$20^i$ splits used in literature with 4 folds created by splitting classes into 60 seen and 20 unseen classes.
\subsection{Results and Discussion}
We select pixel-level and weakly supervised Zero- and Few-Shot segmentation methods as our baselines. We also compare with Open Vocabulary Segmentation methods like SimSeg~\cite{xu2021simple} and Fusioner~\cite{ma2022open}, which perform segmentation in an inductive setting without pre-training or fine-tuning the vision-language/language models on large-scale external datasets.
\subsubsection{Weakly Supervised Zero-Shot Segmentation}
The performance of our approach in the WGZSS and WZSS settings can be seen in Table~\ref{tab:wgzss_pascal}, Table~\ref{tab:wzss_pascal} and Table~\ref{tab:wzss_coco}. This domain is highly under-explored, and no baselines strictly follow the same setting. Nonetheless, we compare WLSegNet with strongly supervised methods. As Table~\ref{tab:wgzss_pascal} shows, WLSegNet, using only image labels, beats 6 of the 9 baselines that use dense pixel labels. DSG~\cite{shen2022dual} works on the same WZSS setting we explore, and our method outperforms it by 28.8, 37 and 39 mIOU points on the seen, unseen and harmonic mIOU measures, respectively. DSG~\cite{shen2022dual} does not report results on COCO or COCO-stuff, so we have no comparable baseline for this dataset. Nevertheless, our results on PASCAL VOC and COCO are comparable with strongly supervised baselines, as can be seen in Table~\ref{tab:wzss_pascal} and Table~\ref{tab:wzss_coco}, respectively.
\input{tables/pascal_fss}
\input{tables/coco_fss}
\begin{table}[!h]
\begin{minipage}{.5\linewidth}
\centering
\caption{2-way 1-shot FSS on Pascal-5$^i$.}
\setlength{\tabcolsep}{4pt}
\scalebox{0.6}{
\begin{tabular}{clccccc}
\toprule
\multirow{2}{*}{Sup} & \multirow{2}{*}{Method} & \multicolumn{4}{c}{Fold mIOU} & \multirow{2}{*}{mean}\\
& & 0 & 1 & 2 & 3 & \\
\midrule
\multirow{2}{*}{Pix} & Pix-MetaNet & 36.5 & 51.8 & 48.5 & 38.9 & 43.9 \\
& PANet & - & - & - & - & \textbf{45.1} \\
\midrule
\multirow{3}{*}{Img} & PANet & 24.5 & 33.6 & 26.3 & 20.3 & 26.2 \\
& Pix-MetaNet & 31.5 & 46.7 & 41.4 & 31.2 & 37.7 \\
& Ours (WLSegNet) & \textbf{50.9} & \textbf{52.9} & \textbf{45.5} & \textbf{53.4} & \textbf{50.7} \\
\bottomrule
\end{tabular}
}
\label{tab:2way_pascal}
\end{minipage}%
\begin{minipage}{.5\linewidth}
\centering
\caption{2-way 1-shot FSS on COCO-20$^i$.}
\setlength{\tabcolsep}{4pt}
\scalebox{0.6}{
\begin{tabular}{clccccc}
\toprule
\multirow{2}{*}{Sup} & \multirow{2}{*}{Method} & \multicolumn{4}{c}{Fold mIOU} & \multirow{2}{*}{mean}\\
& & 0 & 1 & 2 & 3 & \\
\midrule
\multirow{1}{*}{Pix} & Pix-MetaNet & 18.2 & 12.2 & 9.1 & 6.5 & 11.5 \\
\midrule
\multirow{2}{*}{Img} & Pix-MetaNet & 17.4 & 9.5 & 10.4 & 7.1 & 11.1 \\
& Ours (WLSegNet) & \textbf{38.0} & \textbf{33.8} & \textbf{29.4} & \textbf{31.4} & \textbf{33.1}\\
\bottomrule
\end{tabular}
}
\label{tab:2way_coco}
\end{minipage}
\end{table}
\begin{figure}[!h]
\centering
\includegraphics[width=1\linewidth]{figures/1way5shotPascal.pdf}
\caption{1-way 5-shot FSS on Pascal-5$^i$. Orange bars represent strong supervision while blue bars represent weak supervision via image labels.}
\label{fig:1way5shotpascal}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=1\linewidth]{figures/1way5shotCoco.pdf}
\caption{1-way 5-shot FSS on COCO-20$^i$. Orange bars represent strong supervision while blue bars represent weak supervision via image labels.}
\label{fig:1way5shotcoco}
\end{figure}
\begin{figure}[!h]
\begin{minipage}{0.5\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{figures/2way5shotPascal.pdf}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{figures/2way5shotcoco.pdf}
\end{minipage}
\caption{2-way 5-shot FSS on Pascal-5$^i$ (left) and COCO-20$^i$ (right). Orange bars represent strong supervision while blue bars represent weak supervision via image labels.}
\label{fig:2way5shotPC}
\end{figure}
\subsubsection{Weakly Supervised Few-Shot Segmentation}
The performance of our approach in WFSS can be seen in Table~\ref{tab:wfss_pascal} and Table~\ref{tab:wfss_coco}. As the results show, we beat all methods using weak supervision by at least 7\% mIOU on PASCAL VOC and at least 30\% mIOU on COCO. Besides the commonly used 1-way 1-shot setting, we also experiment with 2-way 1-shot FSS in Table~\ref{tab:2way_pascal} and Table~\ref{tab:2way_coco}, where we again beat weakly supervised baselines by large margins: at least 13 and 22 mIOU points on PASCAL VOC and COCO, respectively. In the \{1,2\}-way 5-shot settings, WLSegNet clearly outperforms the image-level baselines while remaining competitive with methods that use pixel-level supervision, as observed from Figure~\ref{fig:1way5shotpascal} to Figure~\ref{fig:2way5shotPC}.
\subsubsection{Cross-dataset Segmentation}
Following the setting of~\cite{Boudiaf_2021_CVPR}, we evaluate the performance of WLSegNet on the novel classes of PASCAL VOC with the COCO-trained model \textit{without fine-tuning}. These experiments test the ability of WLSegNet to handle the domain shift between the classes of the two datasets in the WZSS and WFSS settings. The novel PASCAL VOC classes are shown in Table~\ref{tab:cross-dataset}; the categories in fold $i$ are the novel classes in PASCAL VOC after removing the seen classes of the corresponding training split of fold $i$ of COCO-20$^i$. We benchmark the performance of WLSegNet in the Cross-dataset setting in Table~\ref{tab:cross_data_res}. The results show that even under domain shift, the generalizable prompts learned with WLSegNet deliver performance competitive with the pixel-supervised methods.
\begin{table}[!h]
\centering
\setlength{\tabcolsep}{8pt}
\caption{Novel classes in each fold of the PASCAL VOC dataset in the Cross-dataset segmentation setting.}
\scalebox{0.6}{
\begin{tabularx}{\linewidth}{LLLL}
\toprule
fold 0 & fold 1 & fold 2 & fold 3 \\
\midrule
aeroplane, boat, chair,
diningtable, dog, person & bicycle, bus,
horse, sofa & bird, car, pottedplant,
sheep, train, tvmonitor & bottle, cat,
cow, motorbike \\
\bottomrule
\end{tabularx}}
\label{tab:cross-dataset}
\end{table}
\begin{table}[!h]
\centering
\caption{
ZSS and FSS on novel classes of PASCAL VOC when the model is trained on COCO-20$^i$. The second best results are underlined.
}
\scalebox{0.6}
{
\centering
\begin{tabular}{clcccccc}
\toprule
\multirow{2}{*}{Supervision} & \multirow{2}{*}{Method} & \multirow{2}{*}{Setting} & \multicolumn{4}{c}{Fold mIOU} & \multirow{2}{*}{mean}\\
& & & 20$^0$ & 20$^1$ & 20$^2$ & 20$^3$ & \\
\midrule
\multirow{2}{*}{Pixel Labels} & LSeg~\cite{li2022languagedriven} & zero-shot & \underline{24.6} & - & \underline{34.7} & 35.9 & 31.7\\
& Fusioner~\cite{ma2022open} & zero-shot & \textbf{39.9} & \textbf{70.7} & \textbf{47.8} & \textbf{67.6} & \textbf{56.5}\\
\midrule
\multirow{1}{*}{Image Labels} & Ours (WLSegNet) & weak zero-shot & 17.6 & \underline{50.3} & 19.5 & \underline{52.4} & \underline{34.9} \\
\midrule
\multirow{3}{*}{Pixel Labels} & RPMM~\cite{10.1007/978-3-030-58598-3_45} & 1-way 1-shot & 36.3 & 55.0 & 52.5 & 54.6 & 49.6\\
& PFENet~\cite{9154595} & 1-way 1-shot & 43.2 & \textbf{65.1} & \textbf{66.5} & \textbf{69.7} & \textbf{61.1}\\
& CWT~\cite{Lu_2021_ICCV} & 1-way 1-shot & \textbf{53.5} & \underline{59.2} & \underline{60.2} & \underline{64.9} & \underline{59.5}\\
\midrule
\multirow{1}{*}{Image Labels} & Ours (WLSegNet) & weak 1-way 1-shot & \underline{44.1} & 44.2 & 37.1 & 60.3 & 46.4 \\
\bottomrule
\end{tabular}
}
\label{tab:cross_data_res}
\end{table}
\begin{figure*}[!h]
\centering
\includegraphics[width=0.7\linewidth]{figures/vis_ZSS.pdf}
\caption{Predicted masks with different prompt learning strategies in Weak Generalized Zero-Shot Segmentation (WGZSS) setting on PASCAL VOC.}
\label{fig:vis_wzss}
\end{figure*}
\begin{figure*}[!h]
\centering
\includegraphics[width=0.72\linewidth]{figures/1way_vis_pascal.pdf}
\caption{1-way 1-shot predicted masks on Pascal-5$^i$.}
\label{fig:1way_vis_pascal}
\end{figure*}
\subsubsection{Qualitative Analysis}
We visualize the masks predicted by WLSegNet in the different settings; the results in weak Few-Shot and Zero-Shot Segmentation are compelling. As observed from Figure~\ref{fig:vis_wzss}, the proposed prompt learning strategy captures complex objects in the weak Generalized Zero-Shot Segmentation (WGZSS) setting, where other strategies fail to segment the desired seen/unseen classes. Similarly, in the comparatively harder 2-way setting (Figure~\ref{fig:2way_vis}) on a large-scale dataset like COCO, WLSegNet segments target classes of widely varying sizes, closely matching the Ground Truth (GT).
\begin{figure*}[!h]
\centering
\includegraphics[width=0.72\linewidth]{figures/1way_vis.pdf}
\caption{1-way 1-shot predicted masks on COCO-20$^i$.}
\label{fig:1way_vis}
\end{figure*}
\begin{figure*}[!h]
\centering
\includegraphics[width=0.72\linewidth]{figures/2way_vis.pdf}
\caption{2-way 1-shot predicted masks on COCO-20$^i$.}
\label{fig:2way_vis}
\end{figure*}
\begin{figure}[!h]
\begin{minipage}{0.5\textwidth}
\centering
\includegraphics[width=1\linewidth]{figures/ablation_wgzss.pdf}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{figures/lambda.png}
\end{minipage}
\caption{Ablation with different prompting strategies.}
\label{fig:ablation}
\end{figure}
\subsection{Ablation Studies}
We perform further experiments to understand the relative contributions of the various components of our approach. First, we experiment with different strategies for obtaining text prompts: fixed prompts, where a single prompt template is used for all classes; ImageNet prompts, where a template is chosen randomly from the 80 prompts designed for ImageNet; learned prompts, similar to those used in \cite{xu2021simple}; and finally ours. Figure~\ref{fig:ablation} (left) shows that our prompt learning method performs better on the unseen classes, resulting in the highest harmonic mIOU. In Figure~\ref{fig:ablation} (right), we analyse the performance for different values of the hyperparameter $\lambda$ used in Eq~(\ref{eq:lambda}).
While experimenting with different batch sizes, we observe that the method is not sensitive to the batch size. We evaluate the performance of WLSegNet (Table~\ref{tab:vary_camg_clip_plg}) while varying the mask proposal generation method in the CAMG module, the CLIP backbone, and the pseudo-label generation method in the PLG module. These experiments guided the design and optimization of the CAMG and PLG modules and the selection of the CLIP backbone architecture. Finally, we visualize the features of images obtained from the image encoder together with the text features of the prompts of different classes in Figure~\ref{fig:tsne}. The image features roughly form clusters in the feature space, and the text features (shown in red) corresponding to the classes are roughly aligned with the centres of these clusters. The clusters are better formed and the text features better aligned in the plot on the right, further demonstrating the generalizability of the learned prompts. All these ablation studies are performed on the PASCAL VOC dataset in the WGZSS setting.
\begin{table}[!h]
\centering
\caption{WLSegNet performance (harmonic mIOU) with different mask proposal generation methods for the Class-Agnostic Mask Generation (CAMG) module, different CLIP backbones and different pseudo-label generation methods for the Pseudo Label Generation (PLG) module in the weak Generalized Zero-Shot Segmentation (WGZSS) setting on the PASCAL VOC dataset.}
\scalebox{0.6}{
\begin{tabular}{cc|cc|cc}
\toprule
CAMG & harmonic mIOU & CLIP backbone & harmonic mIOU & PLG & harmonic mIOU \\
\midrule
GPB-UCM~\cite{arbelaez2010contour} & 36.3 & ResNet50 & 58.8 & RCA~\cite{zhou2022regional} & 68.7\\
Selective Search~\cite{uijlings2013selective} & 36.6 & ResNet101 & 53.4 & L2G~\cite{jiang2022l2g} & 70.8\\
MaskFormer~\cite{cheng2021per} & 70.8 & ViT-B/16 & 70.8 & - & -\\
\bottomrule
\end{tabular}
}
\label{tab:vary_camg_clip_plg}
\end{table}
\begin{figure}[!h]
\centering
\includegraphics[width=0.49\linewidth]{figures/tsne_old.png}
\includegraphics[width=0.49\linewidth]{figures/tsne_ours.png}
\caption{t-SNE plots showing image and text features obtained from different prompt learning methods. The left figure is obtained using SimSeg~\cite{xu2021simple} and the right from WLSegNet.}
\label{fig:tsne}
\end{figure}
\section{Conclusion}
Data-efficient problem settings like Open Vocabulary Semantic Segmentation (OVSS) are of utmost importance because many real-world scenarios pose similar annotation constraints. Extensive research is devoted to developing novel methods that require significantly lower annotation costs while maintaining the expected standards of performance. We explore one such challenging domain (OVSS), where a model is expected to generalize to a wide range of classes it never sees during training while relying only on relatively inexpensive weak annotations and vision-language models like CLIP. In a unified approach to weakly supervised Zero- and Few-Shot segmentation, we overcome certain limitations reported by existing works and learn a label-efficient model with prompts that are highly generalizable to unseen classes. The superior performance of our method is corroborated by extensive experimentation on two large-scale datasets. We hope this work will promote further research in this relatively under-explored domain and provide a strong baseline against which to benchmark new methods.
\bibliographystyle{elsarticle-num}
\section{\textbf{Introduction}}
The report of ferroelectric sputtered AlScN in 2019 \cite{jap19_fichtner_alscn_ferroelectric} indicated the tantalizing possibility of introducing ferroelectric barriers into GaN RF and mm-wave transistors. One can pursue this either by integrating sputtered layers on epitaxial channels, or by direct epitaxy of AlScN on GaN transistors \cite{apl17_nrl_mbe_scaln_hemt}. Direct epitaxial AlScN on GaN revealed high piezoelectric coefficients \cite{apl20_joe_scaln_mbe}. When the leakage was reduced with increased Sc source purity \cite{aplMat21_joe_ScAlN_multilayer_purity}, epitaxial hi-K dielectric constants up to 20 were discovered \cite{apl22_casamento_AlScN_hi_K}. These epitaxial AlScN layers exhibited a low coercive field of $\sim$ 1 MV/cm \cite{arxiv21_joe_alscn_ferro_lowEc} compared to those of non-epitaxial sputtered layers reported in \cite{jap19_fichtner_alscn_ferroelectric}. Other reports by epitaxy showed higher coercive fields \cite{apl21_michigan_mbe_scaln_ferroelectric}, and memory functionality \cite{aelm22_umich_ferro_nitride_memory}. The RF and mm-wave properties of AlScN barrier HEMTs have been studied \cite{kazior2019,green2019,cheng2021}, but ferroelectric transistor behavior has not been reported.
Here we report the ferroelectric gating behavior of epitaxial AlScN barrier HEMTs and contrast their ultra-high-current, high-speed, and sub-Boltzmann performance with a control HEMT having a non-ferroelectric AlN barrier. The measured FerroHEMT behavior is consistent with the low coercive field of epitaxial AlScN. All measurements presented in this work were performed at room temperature.
\section{\textbf{Epitaxial Growth}}
Fig. \ref{fig1}(a) shows a test layer structure comprising a 200 nm unintentionally doped (UID) GaN layer grown by MBE, followed by a 100 nm AlScN layer with 18\% Sc. A polarization-induced 2D electron gas (2DEG) with a density of $1.8 \times 10^{13}$/cm$^2$ and a mobility of $377$ cm$^2$/V$\cdot$s was observed in this heterostructure by room-temperature Hall-effect measurement. Using a deposited top electrode with the 2DEG as the bottom electrode, applying positive-up-negative-down (PUND) voltage pulses, and tracking the resulting currents, we extract the polarization-electric field (P-E) loops shown in Fig. \ref{fig1}(b), which indicate that the epitaxial AlScN layer on UID GaN is ferroelectric with a coercive field $E_c \sim 0.9$ MV/cm.
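For readers unfamiliar with PUND analysis, the sketch below illustrates the standard extraction of the switched polarization from the pulse currents; the variable names are our assumptions, and this is generic textbook processing rather than the authors' measurement script.

```python
import numpy as np

def pund_polarization(t, i_P, i_U, area):
    """Generic PUND analysis sketch (not the authors' exact processing).

    t:    shared time axis (s) for the two positive-going pulses
    i_P:  current during the first ('P') pulse: switching + non-switching response
    i_U:  current during the second ('U') pulse: non-switching response only
    area: electrode area (cm^2)

    The P - U difference isolates the ferroelectric switching current;
    its time integral is the switched charge, giving roughly 2*Pr.
    """
    dQ = np.trapz(i_P - i_U, t)  # switched charge (C)
    return dQ / area             # ~2*Pr in C/cm^2
```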
A control AlN/GaN HEMT and an AlScN/AlN/GaN FerroHEMT structure with a targeted Sc composition of 14\% were grown by MBE directly on semi-insulating 6H-SiC substrates, with a 300 nm AlN nucleation layer and a 1 $\mu$m GaN buffer layer, using the methods reported in \cite{apl22_casamento_AlScN_hi_K}, in order to study the ferroelectric gating behavior. Figs. \ref{fig2}(a) and (c) show the corresponding energy band diagrams from the top surface, calculated using self-consistent Schr\"{o}dinger-Poisson solutions. 2DEG channels are expected at the heterojunctions, as indicated by $\psi_{1}^2$ at the respective depths. To form Ohmic contacts to these 2DEGs, lithographically defined etching and regrowth of heavily doped n+ GaN ($N_d \sim$ 10$^{20}$/cm$^3$ Si) was performed by MBE. Figs. \ref{fig2}(b) and (d) show the 2DEG densities, mobilities, and sheet resistances measured by room-temperature Hall effect on various dies of the wafer. The 2DEG densities are consistent with the calculated values.
\section{\textbf{Device design and fabrication}}
Figs. \ref{fig3}(a) and (b) show the resistances between the metal and the regrown n$^+$GaN (low) and between the n$^+$GaN and the 2DEG (moderate, not exceptionally low). Figs. \ref{fig4}(a) and (b) show the control AlN barrier HEMT and the AlScN barrier FerroHEMT cross sections, respectively. Fig. \ref{fig4}(c) shows the representative device process flow for HEMTs with regrown contacts. A SiO$_2$/Cr hard mask defined the source/drain regions. Ti/Au source/drain contacts were deposited on the n$^+$GaN, and a Ni/Au gate was deposited. An electron beam lithography (EBL) process was used to fabricate T-gate devices for RF and mm-wave characterization. Fig. \ref{fig4}(d) and its inset show SEM images of the final transistor structures. Gate lengths ranged from 90 nm to 18 $\mu$m.
\section{\textbf{Results and discussion}}
Figs. \ref{fig5}(a) and (b) show the measured $I_d - V_{ds}$ output characteristics and $I_{d}-V_{gs}$ transfer characteristics of the control AlN/GaN HEMT sample respectively. An on current of $\sim 1.5 $ A/mm with an on resistance of $R_{on}=1.34$ $\Omega \cdot$mm, an on/off ratio of $10^{6}$ limited by gate leakage, with a threshold voltage of $\sim -3.5$ V were observed. Fig. \ref{fig5}(c) shows a peak transconductance of $\sim 0.5$ S/mm with a barrier thickness of 2 nm GaN + 2 nm AlN. The subthreshold slope shown in Fig. \ref{fig5}(d) indicates very good normal transistor behavior, grazing the ideal Boltzmann limit of $\sim 60$ mV/decade. The control HEMT thus exhibits excellent performance, as borne out by its RF performance discussed subsequently.
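For reference, the Boltzmann limit quoted above follows directly from thermionic carrier statistics; the evaluation below is a textbook room-temperature estimate added here for completeness, not a result of this work:
$$\mathrm{SS}_{\min} = \ln(10)\,\frac{k_B T}{q} \approx 2.303 \times 25.9\ \mathrm{mV} \approx 59.6\ \mathrm{mV/decade} \quad \text{at } T = 300\ \mathrm{K}.$$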
Fig. \ref{fig6} shows the transistor characteristics when a 5 nm thick epitaxial AlScN barrier layer is added between the AlN and the GaN cap layer, as indicated in Figs. \ref{fig4}(a)-(b). From Fig. \ref{fig6}(a), the maximum $I_d$ still reaches $\sim 1.5$ A/mm at the same gate voltage, {\em in spite of a more than doubled barrier thickness} of 2 nm GaN + 5 nm AlScN + 2 nm AlN compared to the control sample. This is a result of the high-K dielectric property of AlScN reported in \cite{apl22_casamento_AlScN_hi_K}. The $I_d - V_{ds}$ curves indicate a higher output conductance, but a far larger difference from the control sample is observed in the transfer characteristics in Fig. \ref{fig6}(b): a counterclockwise (CCW) hysteresis loop develops in the subthreshold characteristics. A sub-Boltzmann steep slope of 23.6 mV/decade is observed for the on$\rightarrow$off voltage sweep. Such repeatable loops are observed in multiple devices. This translates to a hysteretic transconductance curve, as seen in Fig. \ref{fig6}(c), and a hysteretic drain current, as seen in Fig. \ref{fig6}(d). Note that though the peak transconductance is lower than that of the control HEMT, the hi-K AlScN helps maintain a high value despite the more than doubled barrier thickness.
The hysteresis window of the threshold voltage is between $1.0 - 2.0$ V as seen in Figs. \ref{fig6}(c) and (d). These observations are strong signatures of both high-K dielectric, and ferroelectric gating behavior. Based on the AlScN barrier thickness, a voltage drop $E_{c} \times t_{AlScN} \sim 0.5 $ V across the AlScN layer is consistent with a low $E_c \sim 1.0$ MV/cm. A large portion of the gate voltage still drops across the GaN and AlN layers between the gate metal and the 2DEG channel.
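As a unit check (our own arithmetic using the quoted values):
$$E_c \times t_{\mathrm{AlScN}} \approx \left(1.0\ \mathrm{MV/cm}\right)\times\left(5\ \mathrm{nm}\right) = \left(10^{8}\ \mathrm{V/m}\right)\times\left(5\times 10^{-9}\ \mathrm{m}\right) = 0.5\ \mathrm{V}.$$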
Fig. \ref{fig7}(a) shows the hysteresis loop measured on Die 7 of Fig. \ref{fig2}(d). Several CCW hysteresis loops, for different voltage steps of the measurement, are shown in Fig. \ref{fig7}(a), and the corresponding subthreshold slopes are plotted in Figs. \ref{fig7}(b) and (c). While the majority of the sub-Boltzmann steep slopes are observed in the left-going voltage sweeps, some also appear in the right-going sweeps. It is well known that trapping behavior can lead to false conclusions of ferroelectricity; however, a CCW loop in an n-channel device is strong evidence of a ferroelectric FET. Moreover, the absence of such behavior in the control HEMT sample, together with the symmetry, voltage width, and sub-Boltzmann slopes of the FerroHEMT, conclusively indicates ferroelectric gating in the all-epitaxial AlScN/GaN devices.
Fig. \ref{fig8}(a) shows the high-frequency characteristics of the AlN barrier control HEMT. It exhibits excellent cutoff frequencies of $f_{T}/f_{MAX} = 126/304$ GHz for a gate length of $L_g = 90$ nm. The device dimensions and bias conditions are indicated in the plot. Fig. \ref{fig8}(b) shows the measured values for a FerroHEMT of similar dimensions but different bias conditions, owing to the shifted threshold characteristics. It exhibits lower cutoff frequencies of $f_{T}/f_{MAX} = 78/156$ GHz. Even though the values for the AlScN FerroHEMT are lower than for the control AlN barrier HEMT, they are the highest reported to date for AlScN barrier transistors, as indicated in Fig. \ref{fig8}(c). Moreover, they are the fastest {\em ferroelectric} transistors reported to date: a maximum FerroHEMT $f_{MAX} = 168$ GHz is measured. The higher speed of the control sample is due to its (expected) higher maximum $g_{m,ext} = 0.616$ S/mm, compared to $g_{m,ext} = 0.475$ S/mm for the FerroHEMT, indicating that higher-speed FerroHEMTs are possible with further scaling. Fig. \ref{fig8}(d) shows the scaling of the output current $I_{d}^{max}$ in the AlScN FerroHEMTs as $L_g$ is scaled from 18 $\mu$m to 90 nm. An exceptionally high value of 2.5 A/mm is observed for an {\em optical} gate length of 1 $\mu$m. The deep submicron EBL gates reach record values of 4 A/mm at 90 nm gate length. These record high on-currents are enabled by the high 2DEG density generated by the large polarization difference between GaN and the epitaxial AlScN layers.
\section{\textbf{Conclusions and future work}}
The moderate FerroHEMT channel mobility in Fig. \ref{fig2}(d) can be improved by 3$\times$ for better RF and mm-wave performance. The high-K value of AlScN will increase the breakdown voltage in FerroHEMTs by reducing the gate leakage \cite{apl22_casamento_AlScN_hi_K}. A negative DIBL effect has been predicted \cite{edl19_sayeef_ferro_negative_dibl}; realizing it should be achievable with careful geometrical design of future epitaxial FerroHEMTs. Thus, FerroHEMTs with the thinnest epitaxial ferroelectric barriers show steep subthreshold slopes and deliver the highest on-currents (4 A/mm) and highest speeds ($f_{MAX}>$ 150 GHz) of all ferroelectric transistors. They represent a new class of transistors with the potential to blur the boundaries between memory, logic, and communication devices.
\begin{figure*}
\centering
\includegraphics[width=0.58\linewidth]{fig1.png}
\caption{(a) Epitaxial AlScN/GaN heterostructure indicating a polarization-induced 2DEG at the heterojunction. (b) Measured $P-E$ loops on diodes with top electrode and the 2DEG of (a) as the bottom electrode. Ferroelectric loops are observed for the heterostructure.}
\label{fig1}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=1.0\linewidth]{Fig2_Hall.png}
\caption{(a) Energy band diagram of control HEMT. (b) Hall-effect data of control HEMT. (c) Energy band diagram of FerroHEMT. (d) Hall-effect data of FerroHEMT. Though there are some non-uniformities, the measured 2DEG densities are consistent with the calculations.}
\label{fig2}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=1.0\linewidth]{Fig3_TLM.png}
\caption{(a) Contact resistances for the control HEMT between the metal and the regrown n$^+$GaN regions and the n$^+$GaN regions and 2DEGs. (b) Corresponding contact resistances for the FerroHEMT structure. }
\label{fig3}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=1.0\linewidth]{Fig4_Device_X-sections.png}
\caption{(a) Control HEMT cross section schematic showing the regrown contacts, gate, and channel regions. (b) Corresponding schematic for the FerroHEMT has an extra 5 nm AlScN layer in the barrier layer. (c) The representative device process flow of III-Nitride HEMTs with regrown contacts. (d) SEM image of a processed HEMT, and inset shows Zoomed-in image of a T-gate.}
\label{fig4}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.95\linewidth]{Fig5_DC_characteristic_AlN_HEMT_control_sample.png}
\caption{Measured characteristics of the control HEMT with an AlN barrier: (a) Output characteristics, (b) Transfer characteristics in the log and (inset) linear scale, (c) Transconductance vs. $V_{gs}$, and (d) subthreshold slope vs. $I_d$. }
\label{fig5}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.9\linewidth]{Fig6_DC_characteristic_ScAlN_HEMT.png}
\caption{Measured characteristics of the FerroHEMT with an AlScN barrier: (a) Output characteristics, (b) Transfer characteristics in the log scale showing a counterclockwise hysteresis and sub-Boltzmann slope, (c) Transconductance vs. $V_{gs}$ showing hysteresis, and (d) Linear $I_d$ vs $V_{gs}$ showing hysteresis.}
\label{fig6}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.83\linewidth]{Fig7_Subthreshold_Slope.png}
\caption{(a) Subthreshold characteristics of a FerroHEMT showing counterclockwise hysteresis loops and steep slopes for various bias steps. Sub-Boltzmann behavior observed in the subthreshold slopes for (b) 20 mV gate sweep steps and (c) 10 mV gate sweep steps. }
\label{fig7}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.95\linewidth]{Fig8_Small-Signal.png}
\caption{Measured cutoff frequencies at $L_{g}=90$ nm of (a) the Control HEMT, and (b) the AlScN barrier FerroHEMT. (c) Speed benchmarks of AlScN HEMTs with previous reports~\cite{kazior2019,green2019,cheng2021}. (d) Scaling of output current with gate length. In addition to being the fastest AlScN HEMTs, the FerroHEMT data reported here also represents the highest speed and highest on-current ferroelectric transistors realized to date.}
\label{fig8}
\end{figure*}
\section*{Acknowledgment}
This work was supported in part by the SRC and DARPA through the Joint University Microelectronics Program (JUMP), by the DARPA TUFEN program, and performed at the Cornell NanoScale Facility, an NNCI member supported by NSF Grant No. NNCI-2025233.
\section{Introduction}
Over a century ago, before modern game theory was fully established,
\'Emile Borel proposed a family of zero-sum games
inspired by two armies competing
over multiple battlefields \cite{borel1921theorie}.
\begin{definition}[Colonel Blotto]
Two players, $A$ and $B$, are competing over $\nitem$ different
\emph{fronts}. The players have $\nplayer_a, \nplayer_b$ units of
effort at their disposal respectively. Each player wins a front if
they allocate more effort to the front than their opponent does,
and each player wishes to win as many
fronts as possible. Are there Nash stable arrangements of effort over
fronts, and if so, which are they?
\end{definition}
The name ``Colonel Blotto''
derives from the fact that a colonel controls multiple
individual soldiers, which they are able to allocate across the
battlefields in order to serve their overall objective. The Colonel
Blotto game has been the focus of extensive exploration, including
variants that allow for battlefields to have different values, for
effort to be allocated probabilistically, and for smoother utility
functions \cite{golman2009general, hart2008discrete, OSORIO2013164}.
The game has also found application in areas far removed from warfare
\cite{merolla2005play}.
For example, in national politics, two political parties must decide
how to allocate campaign budgets across multiple contested states;
in marketing competition,
two competing national retail chains must decide where to
locate individual stores across different metropolitan areas \cite{merolla2005play}.
Such situations have the structure of a Colonel Blotto game, with
the political parties or retail chains as the colonels, and the
geographic locations as the fronts where the competition takes place.
Despite the different contexts, these scenarios all share
a fundamental structure, with two centralized entities
organizing effort over multiple fronts.
\paragraph{\bf Decentralized conflicts.}
By contrast, modern-day conflicts often involve players that may share
common goals, but are not organized by a single actor.
Focusing on political competition, which will be one of
our primary motivating domains, there are many agents interested
in influencing voters in an election --- activists, local organizers,
political bloggers, everyday voters with large social media presences ---
and while they may be collectively backing one of two political parties,
many of them are not being explicitly directed in their actions by
these parties.
(A similar type of decentralization can take place in warfare itself,
with guerrilla warfare involving
allied groups that nevertheless choose their actions separately.)
How might we model this type of political {\em viewpoint competition}?
We could imagine that there is a large collection of agents, each of
whom is interested in taking part in a conflict with $\nitem$ ``fronts.''
Each agent controls only one unit of effort, and can choose to devote
that effort to one of the fronts.
There is no centralized ``colonel'' to direct the agents, but there
is an underlying structure that determines which agents are more or less
closely aligned.
In particular, each agent $i$ has a {\em type}, which we can think of as
a viewpoint, bias, or political position; this type is represented by a
real number $\beta_i$.
After each agent chooses a front to participate in, the outcome of
the conflict on each given front is determined by an {\em outcome function}
that takes the multiset of types at that front and determines a
real-valued outcome.
We can think of the fronts as issues, for example;
each agent can devote effort to influencing public opinion on
one of these issues, and the resulting value of public opinion on an issue
is some aggregation of the viewpoints of the agents who devote
effort to this issue.
We will be particularly interested in two of the most natural
outcome functions for aggregating viewpoints in this setting:
the {\em median} (in which
the outcome on a front is the median of the types there) and the
{\em mean} (in which the outcome is the mean of the types).
Agents want the outcomes on each front (even the ones
where they don't participate) to match their types;
thus, each agent experiences a cost equal to a weighted average of
the distances between the outcome on each front and the agent's type.
(The agent's payoff is simply the negative of this cost.)
Here the weights in this average can be viewed as representing
the relative importance of the different fronts.
We will refer to this type of game as {\em Private Blotto};
like Colonel Blotto, it involves conflict over multiple fronts,
but it is fundamentally different because it is designed to
model decentralized conflict where each individual agent makes
their own choice about which front
to participate in.\footnote{In the military,
a \emph{private} is an enlisted soldier at the base of the hierarchy.
This reflects our setting, which views the individual
soldiers as the strategic actors, rather than the coordinating colonel
who commands the army.}
We summarize the discussion above in the following definition.
\begin{definition}[Private Blotto]
Multiple agents are competing as the players in a game
over $\nitem$ different \enquote{fronts}.
Each agent can choose exactly one front to compete in, and
each agent has a {\em type} corresponding to a real number.
An outcome function (for example, the median or mean) determines
the outcome value on each front.
An agent's cost is equal to a weighted average distance
between the outcome on each front and the agent's type.
\end{definition}
For this class of games, we can ask a number of basic questions.
Given a set of agents specified by their types, and a number of fronts,
are there Nash stable arrangements of agents over fronts?
And how does the choice of outcome function affect the
existence of stable arrangements?
We will see that these questions are already rich and complex
even when all agents belong to one of two types, just as
Colonel Blotto games already exhibit rich structure with just
two colonels.
Thus, we will focus much of our attention on this two-type case,
when there are real numbers $\beta_a$ and $\beta_b$, and each agent
has a type equal to either $\beta_a$ or $\beta_b$.
In this case, the median outcome function is simply majority rule:
the type with more agents on a given front becomes the outcome
for that front.
The mean outcome for a front
lies somewhere in the interval between $\beta_a$
and $\beta_b$, at a weighted average of these two endpoints determined
by the number of agents of each type at this front.
We now provide some more detail on settings that can be
modeled by the Private Blotto game, and then we give an
overview of our results.
\subsection{Motivating examples}\label{sec:examples}
Our Private Blotto formulation finds applicability in numerous modern-day settings. Here, we will describe a few key application areas in more detail.
Political contests and issue-based activism have historically been an application area for Colonel Blotto \cite{merolla2005play}. We argue that Private Blotto may be an even more natural fit for this setting. Here, the $\nitem$ fronts might represent issues or political campaigns, while the agents might be activist groups or donors, which may share similar goals but be unable (for logistical or legal reasons) to coordinate their actions. Differing types reflect differing political leanings, capturing a continuum of preferences among groups.
Crowdsourcing on social media has become a growing area of societal and academic interest in recent years \cite{yasseri2021can, wojcik2022birdwatch, birdsdontfactcheck, prollochs2022community}. Typically, this setting involves asking social media users to provide labels on different items, such as whether certain news articles are misinformation. In this setting, the $\nitem$ fronts might be news articles or items that need to be labeled, and each agent is a social media user choosing which item to label. Here, agent types might reflect partisan bias, which has been shown to affect which news articles users choose to fact-check in Twitter's Birdwatch tool \cite{wojcik2022birdwatch, birdsdontfactcheck, prollochs2022community}. In this setting, our model explores the dynamics of how voluntary, biased users might label multiple fronts.
Finally, Private Blotto can also be used to model military engagements, the most traditional application of Colonel Blotto games. In the Private Blotto formulation, each agent might be an individual soldier, guerrilla member, or other actor acting without coordination from a central organizer. In this way, Private Blotto naturally models more modern, asymmetric warfare. Agents on the same \enquote{side} militarily are of the same type, and agent types allow us to model interactions between three or more militaries or combatant groups, reflecting potential alliances or more nuanced conflicts between different agents.
\subsection{Overview of results}
We are interested in which instances of Private Blotto admit
{\em stable arrangements}, by which we mean a choice of front by
each agent that forms a pure Nash equilibrium of the game:
no agent has an incentive to unilaterally change fronts.
We also prove results characterizing the properties of stable arrangements.
We will find it useful to divide our analysis of the model into two
main cases, depending on whether the number of agents is smaller or
larger than the number of fronts.
In the motivation from political viewpoint competition, these cases
correspond to different settings, each meaningful:
a small number of fronts relative to the number of agents can
correspond to a large collection of political actors dividing
their influence over a small number of key issues,
while a large number of fronts relative to the number of agents
arises in the crowdsourced settings discussed above,
where individuals are asked to label a large universe
of news articles or social media posts for misinformation.
When the number of fronts is small compared to the number of agents,
we focus on the setting in which there are two types $\beta_a$ and $\beta_b$.
Here, let $\nplayer_a$ be the number of agents of type $\beta_a$, and
$\nplayer_b$ be the number of agents of type $\beta_b$.
For both the median and the mean outcome functions, it is natural to ask:
for which pairs $(\nplayer_a,\nplayer_b)$ must there always exist a stable arrangement,
and for which pairs is there no stable arrangement?
In the case of the median outcome function,
we find that for every number of fronts $\nitem$,
the set of pairs $(\nplayer_a,\nplayer_b)$ that fail to yield a stable arrangement
is finite, and we can exactly characterize this set of pairs for each $\nitem$
as a certain {\em median-critical region} in the $\nplayer_a$-$\nplayer_b$ plane.
In the case of the mean outcome function, we show computationally
that the subset of the $\nplayer_a$-$\nplayer_b$ plane admitting stable arrangements
has a complex structure that seems to lack a closed-form description,
even with just two fronts with equal weights;
we also show that with two fronts, when a stable arrangement does exist,
it must divide the agents in an almost-proportional fashion.
We prove additional results about the structure of stable arrangements ---
how the agents divide across fronts in these arrangements --- through
the lens of {\em misallocated effort}.
If we adopt the normative principle that fronts (representing issues
for public debate) should receive a number of agents proportional
to their weights, then our result for the mean outcome
function shows that there is very little misallocation of effort
relative to this proportionality principle.
In contrast, we find that for the median outcome function, there can
be very high levels of misallocation, with stable arrangements
in which large numbers of agents choose fronts with relatively low weight.
Finally, we consider the case in which there are more fronts than agents;
intuitively, these are settings where the agents have the ability
to spread out arbitrarily thinly across fronts if they so choose,
and where some fronts will inevitably be left empty.
Here we identify sufficient conditions for stable
arrangements to exist, and characterize the smallest number of
agents $\nplayer$ for which there are instances without stable arrangements.
With more fronts than agents, our results for median and mean are
more similar to each other than they were for small numbers of fronts;
this connects intuitively with the fact that each front will tend
to have relatively few agents, leading to less of a distinction
between the behavior of the median and mean.
One difference between the functions is that $\nplayer = 4$ is the minimum
number of agents for which there can be instances lacking stable
arrangements for the mean outcome function --- every instance with
at most three agents has a stable outcome --- whereas
$\nplayer = 3$ is the corresponding minimum for the median outcome function.
\section{Related works}\label{sec:related}
\subsection{Colonel Blotto}
Colonel Blotto games (first proposed in \cite{borel1921theorie}) are a game-theoretic model in which two players, $A$ and $B$, compete to allocate effort across multiple \enquote{battlefields} or \enquote{fronts}, which may vary in how much each player values them. Typically, a player wins a front if they exert more effort there, and one main question of interest is when Nash equilibria of this system exist. The literature on Colonel Blotto games is extremely broad, so we will focus on a few of the most relevant papers. Recently, \cite{ahmadinejad2019duels} gave a polynomial-time algorithm for computing equilibria of the standard Colonel Blotto game, as well as of related zero-sum games.
First, we will highlight some of the most commonly-studied variants of Colonel Blotto games.
\cite{golman2009general} proposes the \enquote{General Blotto} game, which generalizes Colonel Blotto to permit multiple player types with smooth utility functions over fronts and over combinations of fronts. \cite{hart2008discrete} proposes the \enquote{General Lotto} game, where each player selects a probabilistic distribution of effort over the fronts and receives utility given by the probability that a randomly drawn level of their effort beats their opponent's random draw. Separately, \cite{OSORIO2013164} proposes the \enquote{Lottery Blotto} game, where players allocate effort deterministically, but the player that allocates greater effort to a front wins it probabilistically rather than deterministically. This formulation is related to Tullock contest success functions (originally proposed in \cite{Tullock2001}, and also studied in \cite{OSORIO2013164, skaperdas1996contest}), where players win each contest with probability related to their effort (similar to our mean outcome function). \cite{attackdefense} similarly studies contest functions where fronts are connected in a network and an asymmetric \enquote{attacker} and \enquote{defender} allocate resources across these fronts.
Next, we will highlight the papers that are closest to ours. \cite{schwartz2014heterogeneous} gives Nash stability results for the Colonel Blotto game where players vary in their strength (amount of resources) and fronts vary in their value, so long as there are at least three fronts with each value. \cite{kovenock2012coalitional} studies a limited form of coalitions where exactly two players $A$ and $B$ may form an alliance before playing a common opponent $C$. \cite{boix2020multiplayer} proposes and studies the \enquote{multi-player Colonel Blotto game}, which extends the classical Colonel Blotto structure to more than 2 players. \cite{anbarci2020proportional} studies a variant of Colonel Blotto with more than two competing forces, but where fronts are presented sequentially, rather than simultaneously. \cite{mazur2017partial} studies Nash equilibria of Colonel Blotto games with exactly 2 fronts, but where outcome functions are constrained to be a polynomial function of the difference of each type's allocation across the fronts.
In general, Colonel Blotto games and their variants differ from ours in a few ways. First, rather than assuming that a single player controls multiple units of effort, we assume each agent acts independently (a private citizen as opposed to a soldier). Because any of these agents could \enquote{win}, this dramatically increases the number of potential outcomes. Second, we assume that agents have some degree of similarity in their goals: agent $A$ may be more closely aligned with $B$ than with $C$, for example. Finally, we study a more general class of settings than is typically studied in Colonel Blotto, allowing for arbitrary numbers and valuations of fronts, as well as more general notions of winning (all-or-nothing, as well as a smoother fractional utility).
\subsection{Crowdsourcing}
The area of crowdsourcing has been studied experimentally and theoretically in a wide range of papers; again, we focus on those most closely related to ours. Some papers, such as \cite{hettiachchi2022survey, zhang2017consensus}, study how to assign crowd workers to multiple tasks in order to maximize the expected accuracy of labels.
A more nascent branch of crowdsourcing considers the case where crowd workers have agency over which items they choose to label. \cite{6195597} studies reputation-based mechanisms to incentivize crowd workers to exert effort on the tasks they are assigned. Our model is especially relevant to fact-checking on social media sites that allow voluntary labels by (potentially biased) users, such as Facebook, Wikipedia, and Twitter \cite{yasseri2021can}, and especially Twitter's Birdwatch tool \cite{wojcik2022birdwatch}. \cite{birdsdontfactcheck, prollochs2022community} study how partisan affiliation, among other factors, affects which tweets users on Twitter choose to fact-check. \cite{saeed2022crowdsourced} compares the accuracy of labels produced by voluntary, biased crowdsource workers to expert labels.
Our paper differs from most crowdsourcing work in that it models crowd workers as voluntary, potentially biased agents with agency over which items they choose to label. Our work also differs stylistically in its primary focus on Nash equilibria of such systems, which have received little prior attention.
\section{Model}\label{sec:model}
In this section, we make our theoretical model more precise. We assume there are $\nitem$ \emph{fronts} that $\nplayer$ total \emph{agents} are competing over. We allow each front $i \in [\nitem]$ to have a different weight or \emph{value} $\weight_i$, which is common knowledge to all agents. Each agent controls exactly 1 unit of effort: they may choose which front to compete in, but may not coordinate with other agents. However, agents come in $\ntype$ \emph{types}. Two agents of the same type have perfectly aligned incentives: when present on the same front, they work towards the same outcome, and when on different fronts, two agents of the same type are interchangeable. Each type has a real-valued \emph{bias} $\beta_\type \in \mathbb{R}$ that describes how similar or dissimilar it is to other types. For example, an agent of type $\beta_1 = 1$ is closer to an agent of type $\beta_2 = -1$ than to one of type $\beta_3 = 5$.
\subsection{Outcome functions}
Once agents are arrayed on a front, the outcome of the battle is governed by an \emph{outcome function} $f(\cd)$. In this paper, we focus on two outcome functions: the \emph{median} and the \emph{mean}. Given a set of agents $S_i$ on front $i$, the median outcome function returns the median of their biases, $\mathrm{med}(\{\bias_{\type} : \type \in S_i\})$.
If there are an even number of agents on a particular front, the median function averages the middle two biases. Note that for the traditional two-type ($\ntype = 2$) Colonel Blotto setting, the median outcome function is equivalent to \enquote{winner-take-all}: whichever type has more agents on the front wins it. For $\ntype > 2$ types (and an odd number of agents), the front's outcome equals the median bias $\bias_j$ for some $j \in S_i$, and every other agent $k$ experiences a cost of $\abs{\bias_j - \bias_k}$: the magnitude of the difference between their biases.
On the other hand, the mean outcome function returns the mean of the biases:
$\frac{1}{\abs{S_i}}\sum_{\type \in S_i}\bias_{\type}$. This models a scenario where the final outcome of the front depends on the distribution of agent biases, not solely the median agent. Again, every agent experiences cost given by their distance to the final outcome: $\abs{\frac{1}{\abs{S_i}}\sum_{\type \in S_i}\bias_{\type} - \bias_k}$.
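To fix ideas, here is a minimal Python sketch of the two outcome functions (the function names are ours):

```python
from statistics import median

def median_outcome(biases):
    # statistics.median already averages the middle two values for an
    # even count, matching the convention described above
    return median(biases)

def mean_outcome(biases):
    return sum(biases) / len(biases)

# e.g. two type-A agents (beta_a = 1) and one type-B agent (beta_b = -1)
print(median_outcome([1, 1, -1]))  # 1 -> type A "wins" the front outright
print(mean_outcome([1, 1, -1]))    # 0.333... -> the outcome reflects the mix
```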
\subsection{Agent cost}
Even though each agent only participates in a single front, they still have preferences over all of the fronts. An agent of type $\type$ weights the cost of each front according to weights $\{\weight_i\}$ satisfying $\sum_{i \in [\nitem]} \weight_i = 1$. If a front has no agents on it ($S_i = \emptyset$), then we assume each agent experiences a cost $\costun\geq 0$ for leaving the front empty. This cost is independent of agent bias, but is scaled by the front's weight. We include this feature to model settings where agents may choose to leave a front empty (potentially to join a more heavily weighted or contested front), but suffer some non-zero cost in doing so. We can write the total cost as:
$$\sum_{i \in [\nitem], \abs{S_i}>0}\weight_i \cd \abs{f(S_i) - \bias_{\type}} + \sum_{i \in [\nitem], \abs{S_i} =0} \weight_i \cd \costun$$
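The displayed cost translates directly into code; the sketch below (with assumed variable names) computes it for a single agent, using either outcome function from the previous sketch:

```python
def total_cost(beta, fronts, weights, outcome_fn, c_un):
    """Total cost for one agent with bias `beta`.

    fronts:  one list of biases per front (an empty list = unoccupied front)
    weights: front weights w_i, summing to 1
    c_un:    cost charged for every empty front
    """
    cost = 0.0
    for S_i, w_i in zip(fronts, weights):
        if S_i:                                    # occupied front
            cost += w_i * abs(outcome_fn(S_i) - beta)
        else:                                      # empty front
            cost += w_i * c_un
    return cost
```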
We will say that an arrangement of agents onto fronts is \emph{stable} if it satisfies Nash stability (no agent can unilaterally decrease its cost):
\begin{definition}[Nash stability]
An arrangement of agents on fronts is (Nash) stable in the Private Blotto game if no agent can reduce its cost by switching from one front to another. In notation, this requires that for every agent of type $\type$ with $\bias_{\type} \in S_j$ on front $j$ and every other front $k \ne j$,
\begin{align*}
\weight_j \cd \abs{f(S_j) - \bias_{\type}} + & \weight_k \cd \p{\abs{f(S_k) - \bias_{\type}} \cd \mathbbm{1}[\abs{S_k} >0] + \costun \cd \mathbbm{1}[\abs{S_k} =0] }\\
\leq \weight_j \cd \p{\abs{f(S_j \setminus \beta_{\type}) - \bias_{\type}} \cd \mathbbm{1}[\abs{S_j} >1] + \costun \cd \mathbbm{1}[\abs{S_j} =1] } + & \weight_k \cd \abs{f(S_k \cup \bias_{\type}) - \bias_{\type}}
\end{align*}
\end{definition}
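Stability can be checked by brute force: for each agent (one representative per type suffices, since same-type agents are interchangeable), compare its current cost against the cost after every unilateral move. A minimal sketch, reusing total_cost(...) from the previous sketch:

```python
def is_stable(fronts, weights, outcome_fn, c_un):
    """Brute-force Nash stability check over all unilateral deviations."""
    for j, S_j in enumerate(fronts):
        for beta in set(S_j):                  # one representative per type
            base = total_cost(beta, fronts, weights, outcome_fn, c_un)
            for k in range(len(fronts)):
                if k == j:
                    continue
                moved = [list(S) for S in fronts]
                moved[j].remove(beta)          # the agent leaves front j ...
                moved[k].append(beta)          # ... and joins front k
                if total_cost(beta, moved, weights, outcome_fn, c_un) < base - 1e-12:
                    return False               # a profitable deviation exists
    return True
```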
Nash stability is a natural definition of stability to study in this case because agents are assumed to model self-interested actors with the ability to move between fronts in order to reduce their cost. However, Nash stable solutions are not (in general) guaranteed to exist. In this paper, our main focus will be characterizing possible Nash stable arrangements.
\section{More agents than fronts: $\nplayer \geq \nitem$}\label{sec:moreagents}
In analyzing the Private Blotto game, we will find it helpful to divide our analysis into two main regimes: when there are more agents than fronts (this section), and when there are fewer agents than fronts (Section \ref{sec:feweragent}). The setting with more agents than fronts models numerous small agents competing over a limited set of fronts. Relating back to the examples in Section \ref{sec:examples}, it might model combat with a small number of physical fronts and a much larger number of combatants, or, for political issues, a relatively small set of major divisive issues debated by many political actors. In this section, we will make one assumption in our analysis:
\begin{assump}\label{assump:twotypes}
For the $\nplayer\geq \nitem$ setting, we assume agents come in $\ntype=2$ types, $A$ and $B$, with biases $\beta_a, \beta_b$.
\end{assump}
Traditionally, Colonel Blotto games have mainly focused on the two-player case, which is analogous to our two-type case in Private Blotto \cite{borel1921theorie}. We present the two-type case in this section because it already admits a surprising amount of richness and complexity, while still being relatively clean. We relax this assumption in our analysis of the $\nplayer < \nitem$ case in Section \ref{sec:feweragent}; exploring further relaxations would be an interesting avenue for future analysis of the Private Blotto game. Without loss of generality, we will always name the two types so that $\nplayer_a \geq \nplayer_b$ (there are at least as many type $A$ as type $B$ agents).
Lemma \ref{lem:nounlabeled} shows why Assumption \ref{assump:twotypes} is helpful: so long as it holds, and given a sufficiently high cost for leaving a front empty, agents' incentives become independent of the bias values $\bias_a, \bias_b$.
\begin{restatable}{lemma}{nounlabeled}
\label{lem:nounlabeled}
Given Assumption \ref{assump:twotypes}, so long as
$$\costun\geq \frac{\max_{i \in \nitem}\weight_i}{\min_{j \in \nitem}\weight_j} \cd \abs{\bias_a - \bias_b} \cd \frac{1}{2}$$
then no agent wishes to leave a front empty. As a result, agent strategy becomes independent of biases $\bias_a, \bias_b$ and relies solely on the number of agents of each type on each front, $\{a_i, b_i\}, \ i \in [\nitem]$.
\end{restatable}
The proof of Lemma \ref{lem:nounlabeled}, along with all other proofs in this paper, can be found in Appendix \ref{app:proofs}.
Given Lemma \ref{lem:nounlabeled}, all of our results for the remainder of the section hold for every possible value of the biases.
For the rest of this section, unless stated otherwise, we will assume that the preconditions of Lemma \ref{lem:nounlabeled} hold, which ensures that no front will be left empty. This assumption is mainly made for ease of analysis: if it is relaxed, then the value of $\costun$ causes minor changes in the stable arrangements, primarily for small numbers of agents $\nplayer$.
\subsection{Median outcome}\label{sec:moreagentsmedian}
First, in this section we explore Private Blotto with $\nplayer\geq \nitem$ with the median outcome function. We can view this setting as exploring the $\nplayer_a, \nplayer_b$ plane, studying for which values of $\nplayer_a, \nplayer_b$ a Nash stable arrangement exists, as well as constructively producing an example of a stable arrangement. Our results will be a function of the total number of fronts, $\nitem$, the common weights $\{\weight_i\}$ on each of these fronts, as well as $\nplayer_a, \nplayer_b$, the number of agents of types $A$ and $B$ respectively. (Recall that given Lemma \ref{lem:nounlabeled} all of our results will be independent of the biases $\bias_a, \bias_b$.)
The first question we will ask is whether the number of unstable \enquote{points} ($(\nplayer_a, \nplayer_b)$ pairs) is infinite or finite. Lemma \ref{lem:median2mstable} addresses this question by showing that whenever there are more than twice as many agents as there are fronts, there always exists a stable arrangement. This directly implies that, holding the number of fronts $\nitem$ constant, there must only be a finite number of unstable points.
\begin{lemma}\label{lem:median2mstable}
If $\nplayer_a + \nplayer_b \geq 2 \cd \nitem+1$ (or $\nplayer_a + \nplayer_b =2 \cd \nitem$ with $\nplayer_a, \nplayer_b$ even), then there always exists a stable arrangement.
\end{lemma}
\begin{proof}
We will prove this result constructively by producing an algorithm that always arrives at a stable arrangement.
Informally, this algorithm works by putting 2 type $B$ players on every front, stopping once either a) front $\nitem-1$ is reached, or b) all of the type $B$ players have been assigned, or c) there are 3 type $B$ players left (which are then all assigned to the current front). Then, the algorithm places at least 2 type $A$ players on each front, again stopping once a) front $\nitem$ is reached, or b) all of the type $A$ players have been allocated, or c) there are 3 type $A$ players left. This is a stable arrangement because for each front, each type wins by at least 2, so no single agent acting alone can change the outcome. For a formal description, see Algorithm \ref{alg:allocatemore}.
If $\nplayer_a + \nplayer_b = 2 \cd \nitem$ and both $\nplayer_a, \nplayer_b$ are even, then this algorithm will put exactly 2 agents on each front. If $\nplayer_a + \nplayer_b \geq 2 \cd \nitem +1$ and both $\nplayer_a, \nplayer_b$ are even, then this will put at least 2 agents on each front. If both $\nplayer_a, \nplayer_b$ are odd, then their sum is even, so $\nplayer_a + \nplayer_b \geq 2 \cd \nitem +2$; in this case $\frac{\nplayer_b -1}{2}$ fronts will have type $B$ agents and $\frac{\nplayer_a - 1}{2}$ fronts will have type $A$ agents, covering $\frac{\nplayer_a+ \nplayer_b - 2}{2} = \frac{\nplayer_a +\nplayer_b}{2} - 1 \geq \frac{2 \cd \nitem + 2}{2} - 1 = \nitem$ fronts in total. The mixed-parity case is similar and covers at least $\frac{\nplayer_a + \nplayer_b - 1}{2} \geq \nitem$ fronts, as desired.
\end{proof}
\begin{algorithm}
\caption{Algorithm for stable arrangement, not in median-critical region, given $\nplayer_a + \nplayer_b \geq 2 \cd \nitem+1$ or $\nplayer_a + \nplayer_b = 2 \cd \nitem $ with $\nplayer_a, \nplayer_b$ even. }
\label{alg:allocatemore}
Set $a_i = b_i = 0 \ \forall i \in [\nitem]$\\
\While{$\sum_{i \in [\nitem]}b_i < \nplayer_b$}{
\For{$j \in [\nitem-1]$}{
\lIf(If on last front, allocate all remaining players){$j = \nitem-1$}{$b_j = \nplayer_b - \sum_{i \in [\nitem]} b_i, \ j^* = j$}
\lElseIf(If at least 4 players left, allocate 2 on this front){$\nplayer_b - \sum_{i \in [\nitem]}b_i\geq 4$}{$b_j=2$}
\lElse(If at most 3 are left, allocate them all and stop){$b_j=\nplayer_b - \sum_{i \in [\nitem]}b_i$, $j^*=j$}
}
}
\While{$\sum_{i \in [\nitem]}a_i < \nplayer_a$}{
\For(Start with the next front after type $B$ is allocated){$j \in [j^*+1, \nitem]$}{
\lIf(If on last front, allocate all remaining players){$j = \nitem$}{$a_j = \nplayer_a - \sum_{i \in [\nitem]}a_i$}
\lElseIf(If at least 4 players left, allocate 2 on this front){$\nplayer_a - \sum_{i \in [\nitem]}a_i\geq 4$}{$a_j=2$}
\lElse(If at most 3 are left, allocate them all and stop){$a_j=\nplayer_a - \sum_{i \in [\nitem]}a_i$}
}
}
\end{algorithm}
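For reference, a Python transcription of Algorithm \ref{alg:allocatemore} might look as follows (a sketch under our reading of the algorithm, 0-indexed, with our own variable names):
\begin{verbatim}
def allocate_more(n_a, n_b, m):
    """Sketch of the allocation above. Assumes n_a + n_b >= 2m + 1,
    or n_a + n_b == 2m with n_a, n_b both even."""
    a, b = [0] * m, [0] * m
    j = 0
    while sum(b) < n_b:                  # two type-B players per front
        left = n_b - sum(b)
        if j == m - 2 or left <= 3:      # front m-1 reached or <= 3 left
            b[j] = left
            break
        b[j] = 2
        j += 1
    j = (j + 1) if n_b > 0 else 0        # type A starts on the next front
    while sum(a) < n_a:
        left = n_a - sum(a)
        if j == m - 1 or left <= 3:      # last front reached or <= 3 left
            a[j] = left
            break
        a[j] = 2
        j += 1
    return a, b
\end{verbatim}
For example, \texttt{allocate\_more(5, 4, 3)} returns \texttt{([0, 0, 5], [2, 2, 0])}, in which every front is won by a margin of at least two.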
Lemma \ref{lem:median2mstable} described conditions where a stable arrangement exists, but left open the question of whether it is possible to characterize more specifically when stable arrangements fail to exist, a question that we will address next. In order to do so, we will need a more specific definition:
\begin{definition}[Median-critical region]
A set of parameters $(\nplayer_a, \nplayer_b)$ is in the median-critical region if they satisfy:
$$\nplayer_a + \nplayer_b \leq 2 \cd \nitem \text{ and } \nitem < \nplayer_a \text{ and } \nplayer_b < \nplayer_a - \nitem$$
and symmetrically if the roles of $\nplayer_a, \nplayer_b$ are reversed.
\end{definition}
Lemma \ref{lem:medianneverstable} immediately shows the importance of this definition: any set of parameters within the median-critical region must always result in an unstable arrangement.
\begin{restatable}{lemma}{medianneverstable}
\label{lem:medianneverstable}
For any set of parameters within the median-critical region with median outcome function and cost satisfying Lemma \ref{lem:nounlabeled}, there is never a stable arrangement of agents onto fronts, no matter the weights.
\end{restatable}
\begin{proof}[Proof sketch]
To start out with, we will describe a few arrangements where a deviation is always possible. The second half of this proof will show that if we have any arrangement satisfying the preconditions (parameters within the median-critical region), then at least one of these cases must occur, meaning that the arrangement must be unstable.
For notation, we will use $\{a_1\geq b_1\}$ to denote an arrangement where there are at least as many type $A$ players as type $B$ on front 1, for example. We will only refer to fronts 1 and 2 for convenience, but these results hold for any fronts.
\begin{equation*}
C1: \quad \{a_1 = b_1 - 1\}, \{a_2 \geq b_2 + 2\} \text{ or } \{b_1 = a_1-1\}, \{b_2 \geq a_2 + 2\}
\end{equation*}
This gives an arrangement where type $A$ loses by 1 on front 1 and wins by at least 2 on front 2. This gives a deviation because any $a$ player from front 2 could move to compete in front 1 and strictly reduce their cost (they now tie in the first and still win in the second). Similar reasoning holds for the other case: the type $B$ player from front 2 could move to compete in front 1 and strictly reduce their cost.
\begin{equation*}
C2: \quad \{a_1 = b_1\}, \{a_2 \geq b_2+2\} \text{ or } \{a_1 = b_1\}, \{b_2 \geq a_2 + 2\}
\end{equation*}
Here, the players tie on front 1 and type $A$ wins by at least 2 on front 2. This gives a deviation because any $a$ player from front 2 could move to compete in front 1 and strictly reduce their cost (they now win front 1 and still win front 2). Similar reasoning holds for the second case.
\begin{equation*}
C3: \quad \{a_1 \geq b_1 + 1\}, \{a_2 = b_2 + 1\} \text{ or } \{b_1 \geq a_1 + 1\}, \{b_2 = a_2 + 1\}
\end{equation*}
Here, type $A$ wins by at least one on front 1 and wins by exactly one on front 2. This gives a deviation because type $B$ is losing in front 1, and can move to front 2 where it will tie (and still lose front 1). Similar reasoning holds for the second case.
The full proof concludes by proving that any $\nplayer_a, \nplayer_b$ within the median-critical region must satisfy one of the cases, meaning that the arrangement must be unstable.
\end{proof}
Finally, we address the question of $(\nplayer_a, \nplayer_b)$ pairs with $\nplayer_a + \nplayer_b \leq 2 \cd \nitem$ (so that they are not addressed by Lemma \ref{lem:median2mstable}), but which also do not fall in the median-critical region (so they are not addressed by Lemma \ref{lem:medianneverstable}). Lemma \ref{lem:medianstableequal} examines this case and constructively shows that it is always possible to find a stable arrangement of agents onto fronts, assuming fronts have equal weights.
\begin{restatable}{lemma}{medianstableequal}
\label{lem:medianstableequal}
Any other number of agents ($\nplayer_a, \nplayer_b$) with $\nplayer_a + \nplayer_b \leq 2 \cd \nitem$ (besides those in the median-critical region) always has a stable arrangement, \emph{given equal weights} and cost satisfying Lemma \ref{lem:nounlabeled}.
\end{restatable}
\begin{proof}[Proof sketch]
We will show constructively that it is possible to create an arrangement satisfying the following criteria:
\begin{itemize}
\item For every front where there is more than 1 agent, type $A$ and type $B$ tie exactly.
\item Every other front has exactly one agent, which can be either type $A$ or type $B$.
\end{itemize}
This type of construction is stable by the following reasoning: none of the single agents can move (they cannot leave a front empty). Additionally, no agent on a front with multiple agents wishes to leave - they would go from winning a single front and losing another, to losing on that front and tying on another, which gives equal costs when weights are equal.
Informally, we will describe how this algorithm works. If $\nplayer_a + \nplayer_b = \nitem$, then we place exactly one agent on each front, which satisfies the construction criteria.
If $\nplayer_a + \nplayer_b > \nitem$, and $\nplayer_a+ \nplayer_b - \nitem$ is odd, then the algorithm places an equal number of players of each type on front 1, and places exactly one player on every remaining front. On the other hand, if $\nplayer_a+ \nplayer_b - \nitem$ is even, then the algorithm places an equal number of players of each type on front 1, places exactly 1 player of each type on front 2, and then places exactly one player on every remaining front. The distinction between whether $\nplayer_a+ \nplayer_b - \nitem$ is even or odd is necessary to ensure that the total number of players placed sums up to $\nplayer_a + \nplayer_b$. Algorithm \ref{alg:allocatefewer} describes this algorithm formally.
\end{proof}
\begin{algorithm}
\caption{Algorithm for stable arrangement, not in median-critical region, given $\nplayer_a + \nplayer_b \leq 2 \cd \nitem$}
\label{alg:allocatefewer}
Set $a_i = b_i = 0 \ \forall i \in [\nitem]$\\
\If(Put exactly 1 player per front){$\nplayer_a + \nplayer_b = \nitem$}{
$a_i = 1 \ \forall i \in [\nplayer_a], b_i = 1 \ \forall i \in [\nplayer_a + 1, \nitem]$
}\uElseIf{$\nplayer_a+ \nplayer_b - \nitem$ is odd}{
$x = 0.5 \cd (\nplayer_a+ \nplayer_b - \nitem+1)$\\
$a_1 = b_1 = x$ Exact tie on first front\\
$a_i = 1 \ \forall i \in [2, \nplayer_a - x + 1]$ Put remaining $\nplayer_a -x$, $\nplayer_b-x$ players each on a single front. \\
$b_i = 1 \ \forall i \in [\nplayer_a - x + 2, \nitem]$\\
}\uElse{
$x = 0.5 \cd (\nplayer_a+ \nplayer_b - \nitem)$\\
$a_1 = b_1 = x$ Exact tie on first and second front. \\
$a_2 = b_2 = 1$\\
$a_i = 1 \ \forall i \in [3, \nplayer_a - x + 1]$ Put remaining $\nplayer_a -x -1, \nplayer_b -x -1$ agents each on a single front. \\
$b_i = 1 \ \forall i \in [\nplayer_a - x + 2, \nitem]$ \\
}
\end{algorithm}
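Similarly, Algorithm \ref{alg:allocatefewer} can be sketched in Python (0-indexed, our variable names):
\begin{verbatim}
def allocate_fewer(n_a, n_b, m):
    """Sketch of the allocation above. Assumes m <= n_a + n_b <= 2m
    and (n_a, n_b) outside the median-critical region."""
    a, b = [0] * m, [0] * m
    excess = n_a + n_b - m
    if excess == 0:                      # exactly one agent per front
        for i in range(n_a):
            a[i] = 1
        for i in range(n_a, m):
            b[i] = 1
    elif excess % 2 == 1:
        x = (excess + 1) // 2
        a[0] = b[0] = x                  # exact tie on front 0
        for i in range(1, 1 + n_a - x):  # one agent per remaining front
            a[i] = 1
        for i in range(1 + n_a - x, m):
            b[i] = 1
    else:
        x = excess // 2
        a[0] = b[0] = x                  # exact ties on fronts 0 and 1
        a[1] = b[1] = 1
        for i in range(2, 1 + n_a - x):
            a[i] = 1
        for i in range(1 + n_a - x, m):
            b[i] = 1
    return a, b
\end{verbatim}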
Lemma \ref{lem:exmednotstab}, below, demonstrates why the equal weights condition in Lemma \ref{lem:medianstableequal} is necessary. If the weight of one front is even slightly higher than that of another, there are parameter settings where no stable arrangement exists.
\begin{restatable}{lemma}{exmednotstab}
\label{lem:exmednotstab}
Set $\nitem = 4, \nplayer_a = \nplayer_b = 3$, with $\weight_1 = \weight_2 + \epsilon$ and $\weight_2 = \weight_3 = \weight_4$, and cost satisfying Lemma \ref{lem:nounlabeled}. Then, the arrangement proposed by Lemma \ref{lem:medianstableequal} is not stable, and moreover, there is no possible stable arrangement.
\end{restatable}
\begin{proof}[Proof sketch]
In this proof, we will use the notation
\begin{equation}\label{eq:exarrange}
\{a, b\}, \{a, b\}, \{a\},\{b\}
\end{equation}
to illustrate that there are 4 fronts in total, two of which with exactly 2 players on it (one of each type), and two fronts with exactly one player (one front with a single type $A$ player, and one front with a single type $B$ player).
Lemma \ref{lem:medianstableequal} would suggest that the arrangement in Equation \ref{eq:exarrange} would be stable. If $\weight_1 = \weight_2$, this would indeed be the case: none of the singleton players could move, and none of the $\{a, b\}$ players wish to move. If a player of type $A$ from front 2 moved to compete in front 1, they would go from experiencing cost $0.5 \cd \weight_1 + 0.5 \cd \weight_2 + \weight_4$ to experiencing cost $\weight_2 + \weight_4$, which is identical when $\weight_1=\weight_2$. However, when $\weight_1 > \weight_2$, this move does strictly reduce cost, meaning that the original arrangement was unstable. The full proof, in Appendix \ref{app:proofs}, concludes by showing that no other possible arrangement is stable.
\end{proof}
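The deviation computation in the proof sketch is easy to check numerically; a minimal sketch (unit bias gap, $\weight_2 = \weight_3 = \weight_4 = 0.25$ and $\weight_1 = \weight_2 + \epsilon$):
\begin{verbatim}
eps = 0.01
w = [0.25 + eps, 0.25, 0.25, 0.25]       # w1 = w2 + eps
# Type-A player on front 2 of {a,b},{a,b},{a},{b}:
before = 0.5 * w[0] + 0.5 * w[1] + w[3]  # tie, tie, win, lose
after = w[1] + w[3]                      # after moving to front 1:
print(after < before)                    # win, lose, win, lose -> True
\end{verbatim}
The gap \texttt{before - after} equals $0.5 \cd \epsilon$, so the move is strictly profitable for any $\epsilon > 0$.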
Finally, Lemma \ref{lem:medianstablerelaxequal} extends Lemma \ref{lem:medianstableequal} by relaxing the requirement that the front weights be exactly equal. Instead, this proof shows that it is sufficient to have the two fronts with highest weight be equal, while no other two fronts differ in weight by more than a factor of 2. Taken together, these lemmas characterize the stability of the Private Blotto game with median outcome function and more agents than fronts.
\begin{restatable}{lemma}{medianstablerelaxequal}
\label{lem:medianstablerelaxequal}[Extension of Lemma \ref{lem:medianstableequal}]
Any other number of agents ($\nplayer_a, \nplayer_b$) with $\nplayer_a + \nplayer_b \leq 2 \cd \nitem$ (besides those in the median-critical region) always has a stable arrangement, given cost satisfying Lemma \ref{lem:nounlabeled} and weights in descending order satisfying
$\weight_0 = \weight_1 \text{ and } \weight_i \leq 2 \cd \weight_j \ \forall i, j \in [\nitem]$.
\end{restatable}
\subsection{Mean outcome}
Next, in this section we will explore the setting where again there are more agents than fronts ($\nplayer\geq\nitem$), but where instead the mean outcome function is used. In Section \ref{sec:moreagentsmedian} we were able to completely (constructively) classify the stable arrangements for median outcome. By contrast, in this section we will show that the pattern of stable arrangements is much more chaotic.
First, we explore the most direct question of when a stable arrangement exists. Even this question turns out to be surprisingly challenging. Figure \ref{fig:stable} numerically explores when a stable arrangement exists for $\nitem =2$ fronts\ifarxiv\footnote{Code to reproduce figures and numerical examples is available at \url{https://github.com/kpdonahue/private_blotto}.}\else\footnote{Code to reproduce figures and numerical examples can be provided in the final version.}\fi. The axes represent the total number of type $A$ and type $B$ agents that are present, with a red dot appearing at point $(\nplayer_a,\nplayer_b)$ if no possible stable arrangement involving that number of agents exists. The plots in different panels of the figure vary in the weights given to each of the $\nitem=2$ fronts. Note that the patterns across each of the plots seem to vary wildly. For equal weights ($\weight_0 = \weight_1 = 0.5$), there appears to be a structured pattern in when a stable arrangement exists and when one does not. As the weights become imbalanced, this pattern immediately disintegrates, resulting in a series of chaotic clusters.
\begin{figure}
\centering
\includegraphics[width=2.5in]{stable_0p5_01_23_23.png}
\includegraphics[width=2.5in]{stable_0p51_01_23_23.png}
\includegraphics[width=2.5in]{stable_0p52_01_23_23.png}
\includegraphics[width=2.5in]{stable_0p55_01_23_23.png}
\includegraphics[width=2.5in]{stable_0p6_01_23_23.png}
\includegraphics[width=2.5in]{stable_0p9_01_23_23.png}
\caption{Figures illustrating when stable arrangements of agents onto fronts exist for mean outcome. Within each figure, the x-axis gives the total number of type $A$ players ($\nplayer_a$), while the y-axis gives the total number of type $B$ players ($\nplayer_b$). For clarity, we restrict our attention to $\nplayer_a \geq \nplayer_b$, but identical results would hold for the opposite case. At each point, a red dot indicates that no stable arrangement exists; otherwise a stable arrangement does exist. Each figure describes different weights between the $\nitem=2$ fronts (e.g. $[0.5, 0.5]$ indicates equal weights, $[0.6, 0.4]$ indicates a weight of 0.6 on the first front and 0.4 on the second, etc). While the first figure with equal weights seems to show a specific pattern, this rapidly disintegrates into a more chaotic picture once weights become even slightly uneven (see $[0.51, 0.49]$ and higher). }
\label{fig:stable}
\end{figure}
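The search behind Figure \ref{fig:stable} can be reproduced by exhaustive enumeration. A sketch, reusing the \texttt{is\_stable} helper sketched earlier:
\begin{verbatim}
def unstable_points(n_max, w, outcome="mean"):
    """(n_a, n_b) pairs (with n_a >= n_b) admitting no stable
    arrangement on two fronts with weights w."""
    bad = []
    for n_a in range(1, n_max + 1):
        for n_b in range(1, n_a + 1):
            splits = ((a1, b1) for a1 in range(n_a + 1)
                               for b1 in range(n_b + 1))
            if not any(is_stable([a1, n_a - a1], [b1, n_b - b1],
                                 w, outcome) for a1, b1 in splits):
                bad.append((n_a, n_b))
    return bad

# e.g. unstable_points(15, [0.51, 0.49]) recovers the red dots
# of the corresponding panel
\end{verbatim}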
However, the unpredictable patterns in Figure \ref{fig:stable} actually obscure a deeper symmetry within the model. As Theorem \ref{thrm:meanbowl} below shows, if players were allowed to be allocated fractionally over fronts, then the stable arrangement would always be exactly the proportional allocation over fronts (according to each front's weight):
\begin{restatable}{theorem}{meanbowl}
\label{thrm:meanbowl}
For $\nitem$ fronts with two types of agents, $A$ and $B$ with mean outcome and $\costun$ satisfying the conditions of Lemma \ref{lem:nounlabeled}, if players are allowed to be allocated fractionally over fronts, then the stable arrangement is always given by $a_i = \weight_i \cd \nplayer_a, b_i = \weight_i \cd \nplayer_b$.
\end{restatable}
Note that, in general, Theorem \ref{thrm:meanbowl} does \emph{not} imply that stable arrangements for the standard (integer-valued) Private Blotto game will be close to proportional. While Theorem \ref{thrm:meanbowl} can be extended to show that the fractional Private Blotto game is convex, it is known that, in general, the minimum of a convex function restricted to integer values may be arbitrarily far from the minimum of the same function over the reals. However, it turns out that, for the Private Blotto game with equal weights, it \emph{is} true that integer-valued stable arrangements are \enquote{close} to proportional. This idea is formalized in Theorem \ref{thrm:meanclose}:
\begin{restatable}{theorem}{meanclose}
\label{thrm:meanclose}
Assume $\nitem$ fronts with equal weights and two types of agents, $A$ and $B$ with mean outcome and $\costun$ satisfying the conditions of Lemma \ref{lem:nounlabeled}. Then, any arrangement that is stable must be \enquote{close} to proportional: $\abs{a_i - \weight_i \cd \nplayer_a} \leq 1, \abs{b_i - \weight_i \cd \nplayer_b} \leq 1$ for $i \in [\nitem]$.
\end{restatable}
\begin{proof}[Proof sketch]
In order to prove this result, we will show that any arrangement that is not \enquote{close} to proportional must have at least one agent that wishes to move. An arrangement is \enquote{close} to proportional if:
$$\abs{a_i - \weight_i \cd \nplayer_a} \leq 1, \abs{b_i - \weight_i \cd \nplayer_b} \leq 1 \ \forall i \in [\nitem]$$
At a high level, our proof technique involves finding a \enquote{large} front (with more agents) and a \enquote{small} front (with fewer agents). We will then pick whichever agent type is more represented in the larger front, and move exactly one agent to the smallest front. We will then give three sets of sufficient conditions so that this strictly reduces cost for the agent type that moves. We will then show that if none of the three sufficient conditions are met, all fronts must be \enquote{close} to proportional.
\noindent \textbf{Selecting the fronts:} First, we will consider all pairs of fronts $j, k$ such that $a_j + b_j \geq a_k + b_k$. One feature of this pair that we will consider is the \emph{gap in type prevalence}, which is given by:
$$\text{ type A: } \frac{a_j}{a_j + b_j} - \frac{a_k}{a_k + b_k} \quad \text{ type B: } \frac{b_j}{a_j + b_j} - \frac{b_k}{a_k + b_k}$$
Note that if one type has positive gap in type prevalence, then the other has negative gap in type prevalence, because:
$$\frac{a_j}{a_j + b_j} - \frac{a_k}{a_k + b_k} = -1 + \frac{a_j}{a_j + b_j} + 1- \frac{a_k}{a_k + b_k} = \frac{-b_j}{a_j + b_j} + \frac{b_k}{a_k + b_k}$$
WLOG, we will assume that type $A$ has positive or 0 gap in type prevalence. If the gap in type prevalence is 0 (both fronts have exactly equal proportions of player types), then we will again WLOG assume that type $A$ makes up a larger share of players, or $\frac{a_j}{a_j + b_j} = \frac{a_k}{a_k + b_k} \geq \frac{b_j}{a_j + b_j} = \frac{b_k}{a_k + b_k}$. Given this assumption, we will show that type $A$ can always reduce its cost by moving a single agent from front $j$ to front $k$ (unless all fronts are \enquote{close} to proportional).
\textbf{Costs:}
Type $A$'s cost associated with the fronts $j$ and $k$ is:
$$\frac{b_j}{a_j + b_j} + \frac{b_k}{a_k + b_k}$$
Its cost associated with these fronts after moving a single agent from $a_j$ is given by:
$$\frac{b_j}{a_j + b_j-1} + \frac{b_k}{a_k + b_k+1}$$
which implies that its cost decreases with this move whenever:
$$\frac{b_j}{a_j + b_j} + \frac{b_k}{a_k + b_k} -\frac{b_j}{a_j + b_j-1} - \frac{b_k}{a_k + b_k+1} >0$$
Or:
\begin{equation}\label{eq:stableeq}
b_k \cd (a_j+ b_j) \cd (a_j + b_j -1) > b_j \cd (a_k + b_k) \cd (a_k + b_k +1)
\end{equation}
Equation \ref{eq:stableeq} is the central condition we will be studying in this proof. First, we will present several sufficient conditions for when Equation \ref{eq:stableeq} is satisfied, so a player wishes to move. Then, we will show that whenever none of those sufficient conditions are satisfied, all fronts must be \enquote{close} to proportional.
\noindent \textbf{Sufficient condition 1: Fronts differ by at least 2, positive gap in type prevalence}
There are a few conditions where we can immediately see that Equation \ref{eq:stableeq} is satisfied. First, we divide this equation into two separate components. The first is given by:
\begin{equation}\label{eq:overprop} b_k \cd (a_j+ b_j) \geq b_j \cd (a_k + b_k) \quad \Rightarrow \quad \frac{b_k}{a_k + b_k} \geq \frac{b_j}{a_j + b_j}\end{equation}
By prior reasoning, this is satisfied by the assumption that $\frac{a_j}{a_j + b_j} \geq \frac{a_k}{a_k + b_k}$ as given by type $A$ having positive gap in type prevalence. \\
Next, we will consider the second component of Equation \ref{eq:stableeq}:
\begin{equation}\label{eq:more1}a_j + b_j -1 \geq a_k + b_k +1 \quad \Rightarrow \quad a_j + b_j \geq a_k + b_k +2 \end{equation}
Note that this is \emph{not} required by how we selected fronts (all we require is $a_j + b_j \geq a_k + b_k$). However, in the event that Equation \ref{eq:more1} also holds, and at least one of Equation \ref{eq:overprop} or Equation \ref{eq:more1} is satisfied strictly, then Equation \ref{eq:stableeq} holds strictly and type $A$ players have an incentive to move from front $j$ to front $k$. \\
In the full proof, we consider other sufficient conditions where Equation \ref{eq:more1} and \ref{eq:overprop} are not both strictly satisfied, and yet a type $A$ player still wishes to move from front $j$ to $k$. Finally, we conclude by showing that any arrangement that fails to satisfy one of the sufficient conditions must have all fronts having arrangements that are \enquote{close} to proportional.
\end{proof}
Theorem \ref{thrm:meanclose} suggests the following hypothesis, which is that the integer-valued Private Blotto game with \emph{arbitrary} weights over fronts still results in arrangements that are \enquote{close} to proportional.
\begin{hypothesis}\label{hyp:close}
Any stable arrangement in the integer Private Blotto game with mean outcome function must be \enquote{close} to proportional: $\abs{a_i - \weight_i \cd \nplayer_a} \leq 1, \abs{b_i - \weight_i \cd \nplayer_b} \leq 1$ for $i \in [\nitem]$.
\end{hypothesis}
From numerical simulations Hypothesis \ref{hyp:close} appears to hold for our functional form; we have not found counter-examples computationally, and we conjecture that it always holds.
\subsection{Misallocated effort}\label{sec:misallocate}
Finally, in this section we will compare the stable arrangements given either median or mean outcome functions. In particular, we will consider the question of how \enquote{bad} stable arrangements might be. This question has been formalized in a variety of ways in previous papers, including Price of Anarchy or Price of Stability \cite{koutsoupias2009worst, anshelevich2008price}. For example, in a congestion routing game, Price of Anarchy would measure the total congestion for all players in the worst-case stable arrangement, as compared to the arrangement that minimizes total congestion.
However, Private Blotto is modeling a fundamentally different game. In the political debate scenario, for example, Private Blotto is modeling different political actors competing over topics for which they have truly different viewpoints. In this case, society doesn't necessarily have a concept of which outcome is \enquote{right}. However, we might have normative preferences over how we wish these debates to unfold. One concrete example might be something like \enquote{political topics should be debated in rough proportion to how important they are to all actors}. For example, if a legislative body spends a week debating the naming of a bridge and only a morning debating the contents of a vital spending bill, we might view that allocation of effort as wasteful. This intuition of \enquote{misallocated effort} is formalized below:
\begin{definition}
Given an arrangement of agents onto fronts, we define its \enquote{misallocated effort} as the number of agents above or below the allocation proportional to the weights. That is, misallocated effort is given by:
$$\sum_{i \in [\nitem]}\abs{\weight_i \cd \nplayer_a - a_i} + \abs{\weight_i \cd \nplayer_b - b_i}$$
\end{definition}
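In code, this quantity is a one-liner (a sketch, same conventions as the earlier sketches):
\begin{verbatim}
def misallocated_effort(a, b, w):
    """Total deviation of (a, b) from the weight-proportional
    allocation of each type."""
    n_a, n_b = sum(a), sum(b)
    return sum(abs(wi * n_a - ai) + abs(wi * n_b - bi)
               for ai, bi, wi in zip(a, b, w))
\end{verbatim}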
One question we will explore is the maximum possible misallocated effort, among any stable arrangements. The results of Lemma \ref{lem:nounlabeled} require that each front has at least one agent, which Lemma \ref{lem:maxmisallocate} uses to give an immediate upper bound to misallocated effort.
\begin{lemma}\label{lem:maxmisallocate}
Given $\costun$ satisfying Lemma \ref{lem:nounlabeled}, each front must have at least one agent, which means that (regardless of the outcome function used) misallocated effort is upper bounded by $\nplayer_a + \nplayer_b - \nitem$.
\end{lemma}
Next, we will obtain tighter bounds for mean and median outcome functions, respectively. First, Lemma \ref{lem:meanmisallocate} shows that a much tighter bound on misallocated effort is possible for mean outcome functions.
\begin{lemma}\label{lem:meanmisallocate}
For mean outcomes, if Hypothesis \ref{hyp:close} holds then misallocated effort is upper bounded by $2 \cd \nitem$.
\end{lemma}
\begin{proof}
This is a direct consequence of Hypothesis \ref{hyp:close}:
\begin{equation*} \sum_{i \in [\nitem]}\abs{\weight_i \cd \nplayer_a - a_i} + \abs{\weight_i \cd \nplayer_b - b_i} \leq \sum_{i \in [\nitem]} 2 = 2 \cd \nitem \end{equation*}
\end{proof}
Next, Lemma \ref{lem:medianmisallocate} shows that worst-case misallocated effort can be much higher for the median outcome function, especially in the case where there are many more agents than fronts. Lemma \ref{lem:maxmisallocate} tells us that misallocated effort is likely to be highest for large $\nplayer_a, \nplayer_b$. Lemma \ref{lem:medianmisallocate} looks at this specific case and shows that misallocated effort could be as high as $\nplayer_a \cd \p{1 - \frac{1}{\nitem}}$, which can be much greater than the $2 \cd \nitem$ bound given in Lemma \ref{lem:meanmisallocate}.
\begin{lemma}\label{lem:medianmisallocate}
For median outcomes, worst-case misallocated effort is \emph{lower}-bounded by $0.25 \cd \nplayer$, given $\nplayer = \nplayer_a + \nplayer_b \geq 2 \cd \nitem$.
\end{lemma}
\begin{proof}
We rely on the stable arrangement constructed in the proof of Lemma \ref{lem:median2mstable} (Algorithm \ref{alg:allocatemore}). For $\nplayer_a + \nplayer_b \geq 2 \cd \nitem$, the arrangement starts by placing 2 type $B$ players on each front, up until either we run out of type $B$ players or we reach the $\nitem-1$st front. We then place the type $A$ players on the remaining fronts. In the case that $\frac{\nplayer_b}{2} \geq \nitem-1$, this implies that a single front has all $\nplayer_a$ type $A$ agents. Taking equal weights $\weight_i = \frac{1}{\nitem}$, misallocated effort is lower bounded by
$\nplayer_a - \frac{\nplayer_a}{\nitem} = \nplayer_a \cd \p{1- \frac{1}{\nitem}}$
Because we have required $\nplayer_a \geq \nplayer_b$ and $\nitem \geq 2$, this bound is at least: $0.5 \cd \nplayer \cd \p{1 - 0.5} = 0.25 \cd \nplayer$.
\end{proof}
Overall, the results of this analysis imply that mean outcome functions, rather than median ones, give sharper guarantees that any stable arrangement that exists will result in agents roughly arranging themselves across fronts in proportion to their overall value.
\section{Fewer agents than fronts: $\nplayer< \nitem$}\label{sec:feweragent}
In Section \ref{sec:moreagents}, we examined the case with more agents than fronts: $\nplayer\geq \nitem$. In this section, we will explore the other possibility, with fewer agents than fronts. A motivating example for this scenario could be voluntary crowdsourcing on a social media platform, such as Twitter's BirdWatch \cite{wojcik2022birdwatch, birdsdontfactcheck, prollochs2022community}. Here, there might be many more relevant items (e.g. news articles or tweets) than there are active labelers. Our goal here is to model the set of stable arrangements, again comparing median and mean outcome functions.
In this section, we will drop Assumption \ref{assump:twotypes}, as all of our results extend easily to arbitrary numbers of agent types $\ntype >2$. Because $\nplayer< \nitem$, some fronts will inevitably need to be left empty. Because of this, we will drop the lower bound in Lemma \ref{lem:nounlabeled} and allow the cost for leaving a front empty $\costun$ to be set arbitrarily. Since Lemma \ref{lem:nounlabeled} no longer applies, in this section we will see that agent biases $\{\bias_i\}$ are relevant for agents' strategies.
At a high level, Section \ref{sec:moreagents} showed a wide divergence in behavior between median and mean outcome functions. In this section, we will show that the setting of $\nplayer < \nitem$ gives much more similar results between the two outcome functions, though with some differences. The intuition is that both median and mean outcome functions behave identically for fronts with only 1 or 2 agents present. Because there are relatively few agents compared to the number of fronts, most arrangements will have 1 or 2 agents per front, unless $\costun$ is very small or the differences in biases between types are very large.
First, we analyze the most natural arrangement: where there is exactly one agent on each of the most valuable $\nplayer$ fronts, with the remaining ones left empty. Lemma \ref{lem:cuNsmallNE} gives conditions where this type of arrangement is stable, showing that these conditions are identical for both median and mean outcome functions.
\begin{restatable}{lemma}{cuNsmallNE}
\label{lem:cuNsmallNE}
Assume that $\nplayer < \nitem$. WLOG, relabel the fronts in descending order of weight, so $\weight_1 \geq \weight_2 \geq \ldots \geq \weight_{\nitem}$. Next, order the agents in descending order of absolute value of bias, so that $\abs{\beta_1} \geq \abs{\beta_2} \geq \ldots \geq \abs{\beta_{\nplayer}}$. Define $\beta_{i^*}$ to be the first element that has an opposite sign from $\beta_1$ (if they are all the same sign, then set $i^* = \nplayer$). Then, place exactly one agent on each front in this order, so that agents with larger bias magnitude sit on fronts with larger weight, and leave the remaining fronts empty. This arrangement is stable (for either median or mean outcome functions) so long as:
$$\abs{\beta_1 - \beta_{i^*}} \leq 2\cd \frac{\weight_{\nplayer}}{\weight_1} \cd \costun$$
\end{restatable}
Next, Lemma \ref{lem:2stable} studies another natural scenario: where there are exactly two players and at least two fronts. Here, the lemma shows that for both median and mean outcome functions, players either both prefer to compete in the same front, or else both prefer to compete in separate fronts.
\begin{lemma}\label{lem:2stable}
For $\nplayer = 2$, $\nitem \geq 2$ with equal weights and either median or mean outcome functions, the two players either both prefer competing on the same front or both prefer each competing on separate fronts. Because their preferences are perfectly aligned, this implies that one of the two arrangements is always stable.
\end{lemma}
\begin{proof}
Consider two players with biases $\beta_a, \beta_b$. The player of type $A$ weakly prefers competing on the same front whenever:
\begin{align*}
\abs{f(\{\beta_a, \beta_b\}) - \beta_a} +\costun & \leq 0 + \abs{f(\beta_b) - \beta_a}\\
\costun & \leq 0.5 \cd \abs{\beta_b - \beta_a}
\end{align*}
Similarly, the player of type $B$ weakly prefers competing on the same front whenever:
\begin{align*}
\abs{f(\{\beta_a, \beta_b\}) - \beta_b} + \costun & \leq \abs{f(\beta_a) - \beta_b} + 0\\
0.5 \cd \abs{\beta_b - \beta_a} + \costun & \leq \abs{\beta_b - \beta_a}
\end{align*}
These two conditions are identical (both reduce to $\costun \leq 0.5 \cd \abs{\beta_b - \beta_a}$), proving the result.
\end{proof}
\subsection{Median outcome}
The lemmas above described specific conditions for when a Nash stable arrangement exists. Next, we will explore the complementary condition by showing when (for median outcome functions) no stable arrangement exists. Lemma \ref{lem:medianunstable} gives a constructive example of settings where any possible arrangement of agents onto fronts is unstable.
\begin{restatable}{lemma}{medianunstable}
\label{lem:medianunstable}
For any $\nplayer, \nitem$ such that $2 < \nplayer<\nitem$, with median outcome, there exist biases $\{\beta_i\}$ and costs $\costun$ such that no NE exists.
\end{restatable}
\begin{proof}[Proof sketch]
Set parameters as follows:
\begin{itemize}
\item $\nitem$ fronts, all with equal weight.
\item $\nplayer$ players, with given biases: 1 with bias $1$, and $\nplayer-1$ with bias $-0.5$.
\item Cost of $\costun = 0.3$ for leaving a front empty.
\end{itemize}
The full proof considers every possible arrangement of these players and shows that at least one player can reduce its total cost by moving.
\end{proof}
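For the smallest instance $\nplayer = 3$, $\nitem = 4$, the claim can be verified exhaustively. A sketch, under the conventions that each front has unit weight (so that $\costun = 0.3$ is on the same scale as the biases) and that the median of an even-sized set is the midpoint of its middle values:
\begin{verbatim}
from itertools import product
from statistics import median

def player_cost(assign, betas, p, m, c_u=0.3):
    """Cost to player p: |median outcome - beta_p| on each
    nonempty front, plus c_u for each empty front."""
    total = 0.0
    for front in range(m):
        on = [betas[q] for q in range(len(betas)) if assign[q] == front]
        total += c_u if not on else abs(median(on) - betas[p])
    return total

betas, m = [1.0, -0.5, -0.5], 4
stable = [s for s in product(range(m), repeat=len(betas))
          if all(player_cost(s, betas, p, m) <= 1e-12 +
                 min(player_cost(s[:p] + (k,) + s[p+1:], betas, p, m)
                     for k in range(m) if k != s[p])
                 for p in range(len(betas)))]
print(stable)   # expected: [] -- every arrangement admits a deviation
\end{verbatim}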
\subsection{Mean outcome}
Next, we will consider the case with mean outcome function. Lemma \ref{lem:meanunstable} follows a similar approach to Lemma \ref{lem:medianunstable}, showing constructively that there exist parameters where no stable arrangement exists. However, as compared to Lemma \ref{lem:medianunstable}, it has the added requirement of $\nplayer \geq 4$.
\begin{restatable}{lemma}{meanunstable}
\label{lem:meanunstable}
For any $\nplayer, \nitem$ such that $4 \leq \nplayer<\nitem$, with mean outcome, there exists parameters such that no NE exists.
\end{restatable}
\begin{proof}[Proof sketch]
The parameters are set identically to those in Lemma \ref{lem:medianunstable}, except with a cost of $\costun \in (0.125, 0.25)$ for leaving a front empty. The proof follows similarly to that of Lemma \ref{lem:medianunstable}, exploring multiple possible arrangements and showing that none can be stable.
\end{proof}
The mean outcome function case explored in Lemma \ref{lem:meanunstable} is not exactly analogous to the median case in Lemma \ref{lem:medianunstable}: there is a gap at $\nplayer<4$ players. Lemma \ref{lem:meanstablen3m4} closes this gap by showing that it is inevitable: any possible set of $\nplayer =3$ agents must have a stable arrangement, given mean outcomes and equal weights. Note that the case with equal weights is the most interesting one: if weights are allowed to differ, then a stable arrangement can easily be found by setting one front to have extremely high weight, resulting in all agents competing in that front.
\begin{restatable}{lemma}{meanstablen}
\label{lem:meanstablen3m4}
For $\nplayer=3, \nitem \geq 4$ with mean outcome and equal weights, there is always a stable point.
\end{restatable}
\begin{proof}[Proof sketch]
There are five possible arrangements of 3 agents:
$$\{a\}, \{ b\}, \{c\} \quad \{a, b\}, \{c\} \quad \{a, c\}, \{b\} \quad \{b, c\}, \{a\} \quad \{a, b, c\}$$
We will use the notation
$$\{a, b\}, \{\}\rightarrow_a\{a\}, \{b\}$$
to mean that a player of type $A$ gets strictly lower cost when it leaves a front it is competing in with type $B$ to compete in an empty front. We will say $\{a, b\}, \{\}\not \rightarrow_a\{a\}, \{b\}$
when the player of type $A$ does not get strictly lower cost in $\{a\}, \{b\}$ and say $\{a\}, \{b\} \rightarrow_a \{a, b\}, \{\}$ when the player of type $A$ gets strictly lower cost in $\{a, b\}$.
Often, to be concise, we will drop $\{\}$ terms and simply write $\{a, b\} \rightarrow_a \{a\}, \{b\}$
as the second front is left empty on the lefthand side.
In this proof, we will use the results of Lemma \ref{lem:2stable} to show that for any pair of agents, they either would both prefer to be competing in the same front or else both prefer to be competing in different fronts. Next, we will show that if any two pairs of agents prefer competing in the same front (as opposed to competing in different fronts), then it is the two agents with most dissimilar biases (e.g. $i, j$ given $\max_{i, j \in \{a, b, c\}} \abs{\beta_i - \beta_j}$). In order to show this, we will write out the cost that an agent of type $A$ gets for every possible arrangement. Because agents are interchangeable, this gives costs for every other agent in different arrangements, up to relabeling.
\begin{align*}\label{eq:costs3}
& \{a\}, \{b\}, \{c\} \quad & \abs{\beta_b - \beta_a} + \abs{\beta_c - \beta_a}\\
& \{a, b\}, \{c\} \quad & 0.5 \cd \abs{\beta_b - \beta_a} + \abs{\beta_c - \beta_a} + \costun\\
& \{a, c\}, \{b\} \quad & 0.5 \cd \abs{\beta_a - \beta_c} + \abs{\beta_b - \beta_a} + \costun\\
& \{b, c\}, \{a\} \quad & 0.5 \cd \abs{\beta_b+ \beta_c - 2 \cd \beta_a} + \costun\\
& \{a, b, c\} \quad & \frac{1}{3} \cd \abs{\beta_b+ \beta_c - 2 \cd \beta_a} + 2 \cd \costun
\end{align*}
The second and third lines in the equations above show us that
$$\{a, b\}, \{c\} \rightarrow_a \{a, c\}, \{b\}$$
exactly whenever:
$$0.5 \cd \abs{\beta_b - \beta_a} + \abs{\beta_c - \beta_a} + \costun > 0.5 \cd \abs{\beta_a - \beta_c} + \abs{\beta_b - \beta_a} + \costun$$
$$ \abs{\beta_c - \beta_a} > \abs{\beta_b - \beta_a} $$
which is whenever $\beta_c$ is further from $\beta_a$ than $\beta_b$ is. In this analysis, WLOG we will say that $\beta_b, \beta_c$ are the most dissimilar (because agents are interchangeable, this is true up to relabeling). Note that this implies that $\{a, c\}, \{b\}$ and $\{a, b\}, \{c\}$ can never be stable, because both $b, c$ would prefer being together to being with $a$.
Next, we can analyze some cases: \\
\textbf{Case 1:} If $\{b, c\}, \{a\} \rightarrow_{b, c} \{a\}, \{b\}, \{c\}$, then $\{a\}, \{b\}, \{c\}$ is stable. \\
We know from our prior reasoning that if the two most dissimilar players do not wish to compete in the same front, then no other pair of players do. This means that the arrangement with one agent per front is stable.
\noindent \textbf{Case 2:} If:
$$\{a\}, \{b\}, \{c\} \rightarrow_{b, c} \{b, c\}, \{a\} \text{ and } \{b, c\}, \{a\} \not \rightarrow_{a} \{a, b, c\}$$
then $\{b, c\}, \{a\}$ is stable. This is because the $\{b, c\}$ pair doesn't wish to split up and the type $A$ player doesn't wish to join.
\noindent \textbf{Case 3:} If no player wishes to leave $\{a, b, c\}$ to compete in an empty front, then $\{a, b, c\}$ is stable. \\
This is true simply by the statement: if no player wishes to move, then this arrangement must be stable.
\noindent \textbf{Case 4:} In this case, we will assume that:
$$\{a\}, \{b\}, \{c\} \rightarrow_{b, c} \{b, c\}, \{a\} \text{ and } \{b, c\}, \{a\} \rightarrow_{a} \{a, b, c\}$$
We will also assume that at least one of $b, c$ wishes to leave $\{a, b, c\}$. WLOG, we will assume it is $c$ (up to relabeling).
Note that this implies none of these arrangements can be stable:
$$\{a\}, \{b\}, \{c\} \quad \{b, c\}, \{a\} \quad \{a, b, c\}$$
Additionally, by our prior reasoning, if players $b, c$ prefer competing together to separate, then we know that no other pair can be stable:
$$\{a, b\}, \{c\} \quad \{a, c\}, \{b\}$$
This set of preferences yields the following cycle of deviations:
$$\{b, c\}, \{a\} \rightarrow_a \{a, b, c\} \rightarrow_c \{a, b\}, \{c\} \rightarrow_{b, c} \{a\}, \{b, c\}$$
We will show that this type of cycle cannot exist.
Note that the first move occurs when:
\begin{align*}
\abs{\frac{\beta_b + \beta_c}{2} - \beta_a} & > \abs{\frac{\beta_b + \beta_c + \beta_a}{3} - \beta_a} + \costun\\
\frac{1}{6} \cd \abs{\beta_b + \beta_c - 2 \beta_a} & > \costun
\end{align*}
The second move occurs when:
\begin{align*}
\abs{\frac{\beta_a + \beta_b + \beta_c}{3} - \beta_c} + \costun & > \abs{\frac{\beta_a + \beta_b}{2} - \beta_c}\\
\costun & > \frac{1}{6} \cd \abs{\beta_a + \beta_b - 2 \beta_c}
\end{align*}
And the third move occurs when:
\begin{align*}
0.5 \cd \abs{\beta_a - \beta_b} + \abs{\beta_c - \beta_b} & > \abs{\beta_a - \beta_b} + 0.5 \cd \abs{\beta_c - \beta_b}\\
0.5 \cd \abs{\beta_b - \beta_c} & > 0.5 \cd \abs{\beta_a - \beta_b}
\end{align*}
Putting these together, this implies:
\begin{equation}\label{eq:firsteqn4}
\abs{\beta_a + \beta_b - 2 \beta_c} < \abs{\beta_b + \beta_c - 2 \beta_a}
\end{equation}
\begin{equation}\label{eq:secondeqn4}
\abs{\beta_a - \beta_b}< \abs{\beta_b - \beta_c}
\end{equation}
In the full proof, we complete our analysis by showing that it is impossible to set $\beta_a, \beta_b, \beta_c$ so as to satisfy both of these inequalities simultaneously: writing $u = \beta_a - \beta_b$ and $v = \beta_b - \beta_c$, Equations \ref{eq:firsteqn4} and \ref{eq:secondeqn4} read $\abs{u + 2v} < \abs{2u + v}$ and $\abs{u} < \abs{v}$, yet $(u+2v)^2 - (2u+v)^2 = 3(v^2 - u^2) > 0$ whenever $\abs{u} < \abs{v}$, a contradiction.
\end{proof}
\section{Discussion}\label{sec:discussion}
In this paper, we have proposed and analyzed the Private Blotto game, a multi-player game involving competition over fronts with different values. We focused on the impact of the outcome function on whether Nash stable arrangements exist. For the case with more agents than fronts, in Section \ref{sec:moreagents} we showed the existence of a median-critical set of parameters such that no stable arrangement is possible within this set. We additionally showed constructively that a stable arrangement is always possible for any set of parameters outside of the median-critical set (assuming weights on fronts that are equal or bounded in difference). However, stable arrangements may involve high degrees of \enquote{misallocated effort}, where agents are distributed across fronts in ways that are far from proportional to the fronts' values. For mean outcome rules, we showed that any stable arrangement must involve bounded misallocated effort, but the patterns in which stable arrangements exist and fail to exist do not have a clean theoretical characterization. In Section \ref{sec:feweragent} we analyzed the complementary case with fewer agents than fronts. In this case, median and mean outcome functions have much more similar results. For both, we constructively show conditions for the existence of a stable solution, as well as showing that there exist parameters such that no stable arrangement can exist.
While in this work we addressed the primary question of Nash stability, there are multiple interesting extensions to the Private Blotto game. One natural extension could be to explore a combined Colonel Blotto and Private Blotto game, with some of the \enquote{types} being centrally-coordinated Colonel Blotto players while others are distributed Private Blotto players. The Private Blotto players would be strictly weaker than the Colonel Blotto ones, but might be able to perform competitively given increased numbers of agents. Another generalization of Private Blotto could allow agents to coordinate in groups of up to $k \geq 1$ agents. For example, for $k=2$, an arrangement would fail to be Nash stable if two agents, working together, could move and both improve their utility. This modification would only make Nash stability harder to achieve, but could be interesting to analyze. Finally, some Colonel Blotto games allow for players to differ in their subjective valuations or weights for each front - if, for example, certain issues are more valuable to certain political groups than to others \cite{7943422}. Extending this model could allow for interesting variations in patterns of Nash stability.
\ifarxiv
\subsubsection*{Acknowledgements}
This work was supported in part by a Simons Investigator Award, a Vannevar Bush Faculty Fellowship, MURI grant W911NF-19-0217, AFOSR grant FA9550-19-1-0183, ARO grant W911NF19-1-0057, a Simons Collaboration grant, a grant from the MacArthur Foundation, and NSF grant DGE-1650441. We are extremely grateful to Maria Antoniak, Sarah Dean, Jason Gaitonde, and the AI, Policy, and Practice working group at Cornell for invaluable discussions.
\else
\begin{acks}
This work was supported in part by a Simons Investigator Award, a Vannevar Bush Faculty Fellowship, MURI grant W911NF-19-0217, AFOSR grant FA9550-19-1-0183, ARO grant W911NF19-1-0057, a Simons Collaboration grant, a grant from the MacArthur Foundation, and NSF grant DGE-1650441. We are extremely grateful to Maria Antoniak, Sarah Dean, Jason Gaitonde, and the AI, Policy, and Practice working group at Cornell for invaluable discussions.
\end{acks}
\fi
\bibliographystyle{ACM-Reference-Format}
|
{
"arxiv_id": "2302.14205",
"language": "en",
"timestamp": "2023-03-02T02:12:39",
"url": "https://arxiv.org/abs/2302.14205",
"yymm": "2302"
} | \section{Introduction}
\label{intro} We consider the stability of the \emph{multi-solitons} of the Benjamin-Ono (BO) equation
\begin{equation}\tag{BO}\label{eq: BO}
u_t+Hu_{xx}+2uu_x=0, \qquad u(x,t) \in \R, \; (x,t) \in\R\times \R.
\end{equation}
Here $u=u(x,t)$ represents the amplitude of wave, and $H$ is the Hilbert transform given by
\begin{equation}\label{Hilbert}
Hu(x,t)=\frac{1}{\pi} \text{P.V.}\int_{-\infty}^{\infty}\frac{u(y,t)}{ y-x}\rmd y,
\end{equation}
where P.V. indicates that the integral is to be computed in the principal value sense.
The BO equation \eqref{eq: BO}, formulated by Benjamin \cite{benjamin1967internal} and Ono \cite{ono1975algebraic}, is used to model long internal gravity waves
in a two-layer fluid. Typical setup of the models requires the wave amplitudes to be much smaller than the depth of the upper layer, which in turn is small compared with the wavelengths, while the lower layer has infinite depth. One can also observe (see \cite{ablowitz1991solitons}) that by passing to the deep water limit, the BO equation \eqref{eq: BO} can be formally obtained from the following Intermediate Long Wave (ILW) equation (as $\delta\rightarrow+\infty$),
\begin{equation}\tag{ILW}\label{eq: ILW}
u_t+\frac{1}{\delta}u_x+Tu_{xx}+2uu_x=0, \ \
(Tf)(x)=\frac{1}{2\delta}\text{P.V.}\int_{-\infty}^{\infty}\coth \frac{\pi(y-x)}{2\delta}f(y)\rmd y.
\end{equation}
whereas the shallow water limit (as $\delta\rightarrow0$) of the ILW equation gives the Korteweg-de Vries (KdV) equation
\begin{equation}\tag{KdV}\label{KdV}
u_t + \frac{\delta}{3}u_{xxx}+2uu_x=0.
\end{equation}
\eqref{eq: BO} has much in common with \eqref{KdV}. A key difference is that \eqref{eq: BO} involves a singular integro-differential operator $H$, and this leads to solitons that only have algebraic decay for \eqref{eq: BO}, as opposed to exponential decay for \eqref{KdV}.
\eqref{eq: BO} can be written as an infinite-dimensional completely integrable Hamiltonian
dynamical system with infinitely many conservation laws and a suitable Lax-pair formulation \cite{KLM99,GK21,S21}. In particular, the following quantities are conserved formally along the flow of \eqref{eq: BO}:
\begin{eqnarray}
&&H_0(u):=\frac12\int_{\mathbb R}u\rmd x,\label{momentum}\\
&&H_1(u):=\frac12\int_{\mathbb R}u^2\rmd x,\label{mass}\\
&&H_2(u):=-\frac{1}{2}\int_{\mathbb R}\left(uHu_x +\frac23u^3\right)\rmd x, \label{ener}\\
&&H_3(u):=\frac{2}{3}\int_{\mathbb R}\left(u_x^2+\frac32u^2Hu_x+\frac12u^4\right)\rmd x.\label{conser2}
\end{eqnarray}
\eqref{eq: BO} may be viewed as a Hamiltonian system of the form
\begin{equation}\label{eq:BO Hamiltonian}
u_t=\mathcal{J}\frac{\delta H_2(u)}{\delta u},
\end{equation}
where $\mathcal{J}$ is the operator $\partial_x$, and $\frac{\delta H_2(u)}{\delta u}$ (or simply $H'_2(u)$) refers to the variational derivative of $H_2$ as follows
$$\bigg(\frac{\partial}{\partial\epsilon}H_{2}(u+\epsilon v)\bigg)\mid_{\epsilon=0}
=\int^\infty_{-\infty}\frac{\delta H_{2}}{\delta u}(x)v(x)dx.
$$
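For instance, using the anti-self-adjointness of $H$ and the fact that $H$ commutes with $\partial_x$, a direct computation from \eqref{ener} gives
$$\frac{\delta H_2(u)}{\delta u}=-\left(Hu_x+u^2\right),\qquad
\partial_x\frac{\delta H_2(u)}{\delta u}=-\left(Hu_{xx}+2uu_x\right),$$
so that \eqref{eq:BO Hamiltonian} with $\mathcal{J}=\partial_x$ indeed reproduces \eqref{eq: BO}.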
However, unlike the KdV equation \eqref{KdV}, the bi-Hamiltonian structure of \eqref{eq: BO} is considerably more involved \cite{FS88}. Since the BO equation is formulated in terms of two spatial operators, $\partial_x$ and the Hilbert transform $H$, \eqref{eq: BO} shares many features with completely integrable equations in two spatial dimensions. Let the subscript $12$ denote dependence on $x_1:=x$ and $x_2$; then for arbitrary functions $f_{12}$ and $g_{12}$, let us define the following bilinear form:
\begin{equation}\label{eq:bilinear form}
\langle f_{12},g_{12}\rangle:=\int_{\R^2}f_{12}g^\ast_{12}\rmd x_1\rmd x_2,
\end{equation}
where the asterisk superscript denotes the complex conjugate throughout the rest of this manuscript.
Define the operators (in $L^2(\R^2,\mathbb C)$ with domain $H^1(\R^2,\mathbb C)$)
\begin{equation}\label{eq:functions}
\mathcal{u}^{\pm}_{12}:=u_1\pm u_2+i(\partial_{x_1}\mp\partial_{x_2}),\ u_j=u(x_j,t),\ j=1,2,
\end{equation}
then two compatible Hamiltonian operators associated with the BO equation are given by
\begin{equation}\label{eq:Hamilton operator}
\mathcal{J}^{(1)}_{12}:=\mathcal{u}^{-}_{12},\ \quad \mathcal{J}^{(2)}_{12}:=\big(i\mathcal{u}^{-}_{12}H_{12}-\mathcal{u}^{+}_{12}\big)\mathcal{u}^{-}_{12},
\end{equation}
where the operator $H_{12}$ is a generalized Hilbert transformation as follows
\begin{equation}\label{eq:extended Hilbert }
\big(H_{12}f_{12}\big)(x_1,x_2):=\frac{1}{\pi} \text{P.V.}\int_{-\infty}^{\infty}\frac{F(y,x_1-x_2)}{y-(x_1+x_2)}\rmd y,
\end{equation}
with $f_{12}(x_1,x_2)=F(x_1+x_2,x_1-x_2)$.
Then the BO hierarchy can be represented as follows \cite{FS88}:
\begin{eqnarray}
&&u_t=\frac{i}{2n}\int_{\R}\delta(x_1-x_2)\big(\mathcal{R}_{12}^\star\big)^n\mathcal{u}^{-}_{12}\cdot1\rmd x_2 \nonumber\\&&=\frac{i}{2n}\int_{\R}\delta(x_1-x_2)\mathcal{u}^{-}_{12}\mathcal{R}^n_{12}\cdot1\rmd x_2
=\mathcal{J}\frac{\delta H_n(u)}{\delta u}, \quad n\in \mathbb N. \label{eq:BO hierarchy}
\end{eqnarray}
where $\star$ denotes the adjoint with respect to the bilinear form \eqref{eq:bilinear form}. The recursion operator $\mathcal{R}_{12}$ and the adjoint recursion operator $\mathcal{R}_{12}^\star$ are defined by
\begin{equation}\label{eq:recursion operator}
\mathcal{R}_{12}:=\big(\mathcal{J}^{(1)}_{12}\big)^{-1}\mathcal{J}^{(2)}_{12},\ \ \mathcal{R}^\star_{12}:=\mathcal{J}^{(2)}_{12}\big(\mathcal{J}^{(1)}_{12}\big)^{-1}=i\mathcal{u}^{-}_{12}H_{12}-\mathcal{u}^{+}_{12}
\end{equation}
and in view of \eqref{eq:recursion operator}, they satisfy the following well-coupling condition
\begin{equation}\label{eq:well coupling}
\mathcal{R}^\star_{12}\mathcal{J}^{(1)}_{12}=\mathcal{J}^{(1)}_{12} \mathcal{R}_{12}.
\end{equation}
The first few equations of the BO hierarchy are then
\begin{eqnarray*}
&& u_t-u_x=0, \quad\text{for} \quad n=1,\quad \eqref{eq: BO}, \quad\text{for} \quad n=2;\\
&& u_{t}+\frac{4}{3}\left(u^3+\frac{3}{2}uHu_x+\frac{3}{2}H(uu_x)-u_{xx}\right)_x=0,\quad\text{for} \quad n=3.
\end{eqnarray*}
The energy space, where $H_2(u)$ is well-defined, is $H^{\frac12}(\R)$. The existence of global weak solutions $u\in C([0,+\infty);H^{\frac12}(\R))\cap C^1([0,+\infty);H^{-\frac32}(\R))$ with initial data in the energy space was proved by Saut \cite{S79}. For strong $H^{s}(\R)$-solutions, Ionescu and Kenig \cite{IK07} showed global well posedness for $s\geq0$ (see also the works of Tao \cite{T04} and Molinet and Pilod \cite{MP12} for global well posedness results in $H^1(\R)$). Such solutions conserve $H_1$ and other conservation laws for suitable $s\geq0$. Concerning the weak continuity of the BO flow map, we refer to the work of \cite{CK10}. Breakthroughs have been made for the sharp low regularity well posedness theory of the (m)KdV and NLS equations \cite{KT18,KV19}, where continuous families of conservation laws below $L^2$ are established. For \eqref{eq: BO}, conservation laws in $H^s(\R)$ were obtained by Talbut \cite{T21} for any $s>-\frac12$, and the sharp low regularity global well posedness in $H^s(\R)$ with $s>-\frac12$ has been shown by G{\'e}rard, Kappeler and Topalov \cite{GKT20} on the torus. However, the counterpart result on the real line is still open.
The BO soliton profiles are often realized as minimizers of constrained functionals defined on the energy space. In particular, \eqref{eq: BO} has solitons
of the form
\begin{equation}\label{eq:so}
u(t,x) =Q_{c}(x-ct-x_0),\ \ Q_{c}(s)=\frac{2c}{c^2s^2+1},\ c>0,\ x_0\in \R.
\end{equation}
By inserting \eqref{eq:so} into \eqref{eq: BO}, one has that $Q_{c}>0$ satisfies the following ODE
\begin{equation}\label{eq:stationary}
-HQ_c'-Q_c^2+cQ_c=0,\quad c>0.
\end{equation}
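As a quick consistency check, the sign convention \eqref{Hilbert} yields the Lorentzian pair $HQ_c(s)=-\frac{2c^2s}{c^2s^2+1}$, whence $-HQ_c'(s)=\frac{2c^2(1-c^2s^2)}{(c^2s^2+1)^2}$ and
$$-HQ_c'-Q_c^2+cQ_c=\frac{2c^2(1-c^2s^2)-4c^2+2c^2(c^2s^2+1)}{(c^2s^2+1)^2}=0.$$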
Amick and Toland \cite{AT91} and Frank and Lenzmann \cite{FL13} showed that \eqref{eq:stationary} possesses a unique (up to symmetries) nontrivial $L^\infty$ solution. \eqref{eq: BO} exhibits even more complex solutions called {\em{multi-solitons}}. The $N$-soliton solution is
characterized by the $2N$ parameters $c_j$ and $x_{j}\ (j=1, 2, ..., N)$
where $N$ is an arbitrary positive integer, and
\begin{equation}\label{1.6a}
U^{(N)}(t,x)=U^{(N)}(x-c_1t-x_{1}, x-c_2t-x_{2}, ..., x-c_Nt-x_{N}).
\end{equation}
Here $c_j$ are wave speeds satisfying the conditions $c_j>0$, $c_j\not=c_k$ for
$j\not=k \ (j, k=1, 2, ..., N)$, and $x_{j}$ is the initial position of the $j$-th soliton.
$U^{(N)}$ has an explicit expression in terms of a tau function $f$ \cite{M06},
\begin{equation}\label{1.7a}
U^{(N)}=i\frac{\partial}{\partial x}{\rm ln}\ \frac{f^*}{f}, \ f={\rm det}\ F,
\end{equation}
where $F=(f_{jk})_{1\leq j,k\leq N}$ is an $N\times N$ matrix with elements
\begin{equation}\label{1.7b}
f_{jk}=\left(x-c_jt-x_j+\frac{i}{ c_j}\right)\delta_{jk}
-\frac{2i}{c_j-c_k}(1-\delta_{jk}).
\end{equation}
Here, $f^*$ is the complex conjugate of $f$ and $\delta_{jk}$ is the Kronecker delta. The expression \eqref{1.7a} shows that the BO multi-solitons exhibit no phase shift after soliton collisions. Moreover, for large time $t$, the BO $N$-soliton can be represented by a superposition of $N$ algebraic solitons as follows
\begin{equation}\label{eq:asympotic behavior}
\lim_{t\rightarrow +\infty}\|U^{(N)}(t)-\sum_{j=1}^NQ_{c_j}(\cdot-c_jt-x_j)\|_{H^s(\R)}=0,\quad\ s\in \mathbb{N}.
\end{equation}
The stability of solitons and multi-solitons may be classified into the following four categories: i) spectral stability, ii) dynamical stability, iii) orbital (nonlinear) stability, iv) asymptotic stability. In accordance with this classification, we shall briefly review some known results associated with the stability of the BO solitons and multi-solitons. A spectral stability analysis of the solitons has been given by Chen and Kaup \cite{CK80}; the orbital (i.e. up to translations) stability of one soliton in the energy space $H^{\frac12}(\R)$ was established in \cite{BBSDB83,W87}. Moreover, the stability of solitons for two classes of nonlinear dispersive equations (consisting of \eqref{eq: ILW} and BBM equations with general power type nonlinearity) was also investigated in \cite{W87}; see also \cite{B72} for earlier stability results.
Orbital stability of double solitons in $H^1(\R)$ as critical points of the constrained Hamiltonian $H_{3}(u)$ was shown in \cite{LN}. The stability in $H^{\frac12}(\R)$ of sums of widely separated solitons was considered in \cite{GTT09,KM09}, and the asymptotic stability of sums of $N$ solitons was established by Kenig and Martel \cite{KM09} by employing the approach of \cite{MMT02}. For the generalized Benjamin-Ono equation, there are interesting results concerning the asymptotic stability and blow up of solutions \cite{KMR11,MP17}. The existence and uniqueness (for mass supercritical BO) of strongly interacting multi-solitons (multi-pole type solutions) for a generalized BO equation has been shown recently by the authors \cite{LW23}. For \eqref{eq: BO}, there are no multi-pole solutions, since its eigenvalue problem possesses only finitely many simple eigenvalues \cite{W16}. We refer to \cite{S19} for a very nice exposition of the above related issues.
In this manuscript we aim to show the following dynamical stability of arbitrary $N$-solitons of the BO equation. As the BO equation behaves more like a $2$d integrable system, our approach opens the way to treating the stability problems of multi-solitons for other completely integrable models like \eqref{eq: ILW} (and even for some 2d integrable models like the KP-I equation). Moreover, our approach also gives alternative proofs of the stability of multi-solitons of the KdV and mKdV equations \cite{MS93,LW}. The main result of this manuscript is as follows.
\begin{theorem}\label{thm1.1}
Given $N\in\mathbb N,$ $N\geq1$, a collection of wave speeds ${\mathbf c}=(c_1,\cdot\cdot\cdot,c_N)$ with $0<c_1<\cdot\cdot\cdot<c_N$ and a collection of spatial translations $\mathbf x=(x_1,\cdot\cdot\cdot,x_N)\in\R^N$, let $U^{(N)}(\cdot,\cdot;{\mathbf c},{\mathbf x})$ be the corresponding multi-soliton of \eqref{eq: BO}. For any $\epsilon>0$, there exists $\delta>0$ such that for any $u_0\in H^{\frac{N}{2}}(\R)$, the following stability property holds. If
\[
\|u_0-U^{(N)}(0,\cdot;{\mathbf c},{\mathbf x})\|_{H^{\frac{N}{2}}}<\delta,
\]
then for any $t\in\R$ the corresponding solution $u$ of~\eqref{eq: BO} verifies
\[
\inf_{\tau\in\R,\ {\mathbf y}\in\R^N}\|u(t)-U^{(N)}(\tau,\cdot;{\mathbf c},{\mathbf y})\|_{H^{\frac{N}{2}}}<\epsilon.
\]
\end{theorem}
As a direct consequence, we give a new proof of the orbital stability of the double solitons in \cite{LN}. The main differences lie in the spectral analysis part in Section \ref{sec_4} (see Corollary \ref{co3.10} and Remark \ref{re:4.3} for details).
\begin{corollary}\label{Co1.1}\cite{LN}
The double solitons $U^{(2)}(t,x)$ of \eqref{eq: BO} are orbitally stable in $H^1({\R})$.
\end{corollary}
\begin{remark} \label{re1.4} There are some interesting results on the stability and asymptotic stability of trains of $N$ solitons for the BO equation obtained in \cite{GTT09,KM09}. This type of stability (which holds also for other non-integrable
models, see \cite{MMT02} for subcritical gKdV equations) usually does not include the dynamical stability of $N$-solitons as in Theorem \ref{thm1.1}. We obtain the stability of the whole orbit of $N$-solitons for all time by minimizing the conserved quantities.
\end{remark}
The approach used in this paper originates from the stability analysis of the
multi-soliton solutions of the KdV equation by means of a
variational argument \cite{MS93}. We first demonstrate, with the help of the inverse scattering transform (IST), that the Lyapunov functional $S_N$ of the BO $N$-soliton profile $U^{(N)}(x)=U^{(N)}(0,x)$ is given by
\begin{equation}\label{1.9a}
S_N(u)=H_{N+1}(u)+\sum_{n=1}^N\mu_nH_{n}(u),
\end{equation}
where the $\mu_n$ are Lagrange multipliers which will be expressed in terms of the elementary
symmetric functions of $c_1,c_2, ..., c_N$. See Section \ref{sepecsec2}
for the details.
Then we show that $U^{(N)}$ is realized as a critical point of the
functional $S_N$.
Using \eqref{1.9a}, this condition can be written as the following Euler-Lagrange equation
\begin{equation}\label{1.10}
\frac{\delta H_{N+1}(u)}{\delta u}+\sum_{n=1}^N\mu_n\frac{\delta H_{n}(u)}{\delta u}=0, \ {\rm at}\ u=U^{(N)}.
\end{equation}
The dynamical stability of $U^{(N)}$ then follows from characterizing $U^{(N)}(x)$ as a minimizer of
the functional $H_{N+1}$
subject to the following $N$ constraints
\begin{equation}\label{eq:constrain2}
H_{n}(u)=H_{n}(U^{(N)}), \quad n=1, 2, ..., N,
\end{equation}
and consequently
the self-adjoint second variation operator of $S_N$
\begin{eqnarray}\label{eq:linearized n-soliton operator}
\mathcal L_N:=S_N''(U^{(N)}),
\end{eqnarray}
is strictly positive once the directions corresponding to the constraints are modulated out. Recall that the second variation operator $\mathcal L_N$ satisfies
$$\bigg(\frac{\partial^2}{\partial\epsilon^2}S_{N}(U^{(N)}+\epsilon v)\bigg)\Big|_{\epsilon=0}
=\frac12\int^\infty_{-\infty}v(x)\mathcal L_Nv(x)dx.
$$
We mention here that, since the Hilbert transform $H$ is involved, $\mathcal L_N$ is highly nonlocal.
As a byproduct of the proof of Theorem \ref{thm1.1}, one can locate the negative eigenvalues of the isoinertial operator $\mathcal L_N$ defined in \eqref{eq:linearized n-soliton operator}; a counterpart statement for the KdV equation was shown in \cite{W22}. In particular, the discrete eigenvalues are continuous with respect to the wave speeds, as follows.
\begin{theorem}\label{thm1.3}
The linearized operator $\mathcal {L}_N$ around the $N$-solitons possesses $[\frac{N+1}{2}]$ negative eigenvalues $\nu_k$, $k=1,2,\cdot\cdot\cdot,[\frac{N+1}{2}]$, where $[x]$ is the largest integer not exceeding $x$. Moreover, for each $k$ there exists a constant $C_k>0$, independent of the wave speeds $c_1,\cdot\cdot\cdot,c_N$, such that
\begin{eqnarray}\label{eigenvalue1.3}
\nu_k=-C_kc_{2k-1}\prod_{j\neq 2k-1}^N(c_j-c_{2k-1}),\quad k=1,2,\cdot\cdot\cdot,[\frac{N+1}{2}].
\end{eqnarray}
\end{theorem}
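For instance, reading off \eqref{eigenvalue1.3} directly: for $N=1$ the product is empty and the single negative eigenvalue is $\nu_1=-C_1c_1$; for $N=2$ one still has only $\nu_1=-C_1c_1(c_2-c_1)$; and for $N=3$ there are two, $\nu_1=-C_1c_1(c_2-c_1)(c_3-c_1)$ and $\nu_2=-C_2c_3(c_1-c_3)(c_2-c_3)$. Note that, thanks to the ordering $0<c_1<\cdot\cdot\cdot<c_N$, each product $\prod_{j\neq 2k-1}^N(c_j-c_{2k-1})$ contains exactly $2k-2$ negative factors, so that it is positive and $\nu_k<0$, as it must be.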
The ideas developed by Maddocks and Sachs have been successfully implemented to obtain stability results in various settings. Neves and Lopes~\cite{LN} proved the stability of double solitons of the BO equation, but it seems that their approach does not handle arbitrary $N$-solitons. Le Coz and the second author~\cite{LW} proved the stability of $N$-solitons of the mKdV equation; meanwhile, a quasi-linear integrable model, the Camassa-Holm equation, was considered by the second author and Liu \cite{LW20}, where stability of smooth multi-solitons is proved by employing some inverse scattering techniques.
We also mention the work of Kapitula~\cite{K07}, which is devoted to the stability of $N$-solitons of a large class of integrable systems, including in particular the cubic nonlinear Schr\"odinger equation. Very recently, a variational approach was used by Killip and Visan~\cite{KV20} to obtain the stability of KdV multi-solitons in $H^{-1}(\R)$. Stability results in low regularity $H^s$ with $s>-\frac12$ were also obtained by Koch and Tataru~\cite{KoTa20} for multi-solitons of both the modified Korteweg-de Vries equation and the cubic nonlinear Schr\"odinger equation; their proof relies on an extensive analysis of an iterated B\"acklund transform. The major difference between the approach of \cite{LW} and the approaches of \cite{MS93,LN} lies in the analysis of spectral properties. Indeed, the spectral analysis of Maddocks and Sachs and many of their continuators relies on an extension of Sturm-Liouville theory to higher order differential operators (see~\cite[Section 2.2]{MS93}). As the BO equation is nonlocal, Neves and Lopes~\cite{LN} were led to introduce a new strategy relying on the isoinertial properties of the linearized operators $\mathcal {L}_N$ around the $N$-solitons for $N=2$. That is to say, the spectral information of $\mathcal {L}_2$ is independent of time $t$. Therefore, one can choose a convenient $t$ at which to calculate the inertia, and the best one can do is to calculate the inertia $in(\mathcal {L}_2(t))$ as $t$ goes to $\infty$. However, the spectral analysis in \cite{LN} of the higher order linearized operators around one soliton cannot be applied for large $N$.
To handle this issue, in \cite{LW} we adapted the ideas of \cite{MS93} and \cite{LN} and developed a method to treat the spectral analysis of linearized operators around arbitrary $N$-solitons. The main ingredient is to establish certain conjugate operator identities which yield the spectral information of the linearized operator around the multi-solitons. Such conjugate operator identities are established by employing the recursion operator of the equations. In particular, let $\varphi_{c}$ be the one-soliton profile with wave speed $c>0$ of the KdV or mKdV equation, and denote the conservation laws of these equations by $H_{K,n}$ for $n\geq1$ (the subscript $K$ refers to the (m)KdV). Then the linearized operator around the one soliton $H_{K,n+1}''(\varphi_{c})+cH_{K,n}''(\varphi_{c})$ can be diagonalized to its constant coefficient counterpart by employing the following auxiliary operators $M$ and $M^t$:
\begin{eqnarray*}
M:=\varphi_{c}\partial_x\left(\frac{\cdot}{\varphi_{c}}\right),\quad M^t=-\frac{1}{\varphi_{c}}\partial_x\left(\varphi_{c}\,\cdot\,\right),
\end{eqnarray*}
the following conjugate operator identity holds:
\begin{eqnarray}\label{Mt03}
M\bigg(H''_{K,n+1}(\varphi_{c})+cH''_{K,n}(\varphi_{c})\bigg)M^t=M^t\bigg((-\partial_x^2)^{n-1}(-\partial_x^2+c)\bigg)M.
\end{eqnarray}
The recursion operator plays an important role in showing \eqref{Mt03}, as the identity cannot be computed by hand when $n$ is large. Such a method is valid for a large class of 1d completely integrable models which possess explicit recursion operators. However, the BO equation is more similar to a $2$d completely integrable model and has no explicit recursion operator (see \eqref{eq:recursion operator}). Indeed, as stated in Zakharov
and Konopelchenko \cite{ZK84}, recursion operators seem to exist only in $1$d integrable systems. Hence, the approach in \cite{LW} cannot be directly applied to the BO equation.
Extending the spectral theory of Neves and Lopes~\cite{LN} to an arbitrary number $N$ of composing solitons leads to increasing technical complexity (inherent to the fact that the number of composing solitons is now arbitrary), but no major difficulty arises here since this has been done in~\cite{LW}. Our main task was then to implement this spectral theory for the multi-solitons of~\eqref{eq: BO}. At that level, we had to overcome major obstacles coming from the non-locality of the linearized operators. The conjugate type operator identities \eqref{Mt03} are usually wrong or very difficult to check. To deal with the arbitrary $N$ case, it is necessary to acquire a deeper understanding of the relationships between $N$-solitons, the variational principle that they satisfy, and the spectral properties of the operators obtained by linearization of the conserved quantities around them. In particular, we need a good knowledge of the spectral information of the higher order linearized Hamiltonians $L_n:=H_{n+1}''(Q_c)+cH_n''(Q_c)$ for all $n\geq1$. To obtain the spectral information of such higher order linearized operators, to the best of our knowledge, there is no approach in the literature other than the conjugate operator identity approach. In addition, as stated before, it is impossible to prove the conjugate type operator identities \eqref{Mt03} for large $n$, since \eqref{eq: BO} possesses no explicit recursion operator (the conjugate type operator identity is quite involved even for $n=2$, where it was achieved by brute force in~\cite{LN}).
To overcome this difficulty, we present a new idea here. Our approach for the spectral analysis of the linearized operators $L_n$ is as follows. Firstly, we derive the spectral information of the operator $\mathcal{J}L_n$, which is easier to obtain than that of $L_n$; the reason is that the operator $\mathcal{J}L_n$ commutes with the adjoint recursion operator. The spectral analysis of the adjoint recursion operator is possible since we can solve the eigenvalue problem of the BO equation. Secondly, we show that the eigenfunctions of $\mathcal{J}L_n$, together with a generalized kernel of $\mathcal{J}L_n$, form an orthogonal basis of $L^2(\R)$, which can be viewed as a completeness or closure relation. Lastly, we calculate the quadratic form $\langle L_nz,z\rangle$ for functions $z$ decomposed in the above basis, from which the spectral information of $L_n$ can be derived directly. We believe this approach can even be applied to some $2$d integrable models like the KP-I equation.
The remainder of the paper is organized as follows. In Section \ref{sepecsec2}, we summarize some basic properties
of the Hamiltonian formulation of the BO equation and present some results obtained with the help of the IST,
which provide the necessary machinery for carrying out the spectral analysis. In Section \ref{sec_3}, for the sake of completeness, an invariant of the multi-solitons and an abstract framework are introduced to handle the spectral analysis of the linearized operator ${\mathcal L}_N$ around the BO $N$-solitons. Section \ref{sec_4} is devoted to a detailed spectral analysis of $\mathcal L_N$, the Hessian operator of $S_N$. The proofs of Theorem \ref{thm1.1}, the dynamical stability of the $N$-soliton
solutions of the BO equation, and of Theorem~\ref{thm1.3} will be given in Section \ref{sec_5}.
\section{Background results for the BO equation }\label{sepecsec2}
\setcounter{equation}{0}
In this section we collect some preliminaries for the proof of Theorem \ref{thm1.1}. This section is divided into four parts. In the first part, we review some basic properties of the Hilbert transform $H$ and the generalized Hilbert transform $H_{12}$ defined in \eqref{eq:extended Hilbert }. Secondly, we present the eigenvalue problem equivalent to the BO equation and its basic facts; through the inverse scattering transform, the conservation laws and trace formulas of the BO equation are derived. In Subsection \ref{sec_2.3}, we recall the Euler-Lagrange equation of the BO multi-solitons from \cite{M06}, which provides a variational characterization of the $N$-soliton profile $U^{(N)}(x)$. Subsection \ref{sec_2.4} is devoted to the investigation of the bi-Hamiltonian formulation of the BO equation; the recursion operators are introduced for the computation of the conservation laws at the multi-solitons. Moreover, an iteration formula for the linearized operators $H_{n+1}''(Q_c)+cH_n''(Q_c)$ for all $n\in\mathbb N$ is established; it turns out that investigating the properties of the recursion operators (even though they are not explicit) constitutes the major difficulty of the spectral analysis.
\subsection{Some properties of the Hilbert transform}\label{sec_2.0}
For the reader's convenience, we review here some elementary properties of the Hilbert transform $H$ and the generalized Hilbert transform $H_{12}$ (defined in \eqref{eq:extended Hilbert }) that figure in the forthcoming analysis.
If $f\in L^2(\R)$, then $Hf\in L^2(\R)$ and the Fourier transform of $Hf$ is
\[ \widehat{Hf}(\xi)=i\,{\rm sgn}(\xi)\hat{f}(\xi), \ \text{where}\ {\rm sgn}(\xi)\xi=|\xi|, \ \text{for all }\ \xi\in \R.\]
It is clear that $H^2f=-f$ for $f\in L^2(\R)$ and $H\partial_xf=\partial_xHf$ for $f\in H^1(\R)$. Moreover, the operator $H$ is skew-adjoint in the sense that
\[\langle Hf,g\rangle=-\langle f,Hg\rangle,\]
and maps even functions into odd functions and conversely.
A useful property bears upon the Hilbert transform of a function $f^+$ ($f^-$) analytic in the upper (lower) half complex plane and vanishing at $\infty$; in this case, one has
\begin{equation}\label{Hul}
Hf^{\pm}=\pm i f^{\pm}.
\end{equation}
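For instance, $f^{\pm}(x)=\frac{1}{x\pm i}$ extend analytically to the upper (lower) half plane and vanish at $\infty$, so that $Hf^{\pm}=\pm if^{\pm}$. In the normalization of \eqref{eq:so}, the algebraic soliton $Q_1(x)=\frac{2}{1+x^2}$ decomposes as $Q_1=i\big(f^+-f^-\big)$, whence \eqref{Hul} immediately gives $HQ_1=-(f^++f^-)=-\frac{2x}{1+x^2}$; computations of this type recur throughout Section \ref{sec_4}.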
There is a parallel theory for the generalized Hilbert transform $H_{12}$ \eqref{eq:extended Hilbert }; for more details we refer to \cite{FS88}. Let $f_{12}=f(x_1,x_2)\in L^2(\R^2,\mathbb C)$ be a function depending on $x_1=x$ and $x_2$; then
\[
H^2_{12}=-1, \ H^{\ast}_{12}=-H_{12}, \ \text{and}\ \partial_{x_j}H_{12}f_{12}=H_{12}\partial_{x_j}f_{12},\ j=1,2.
\]
Moreover, for any $g\in L^2(\R)$, there holds
\[
H_{12}g(x_j)=H_jg(x_j),\ H_jf(x_i,x_j):=\frac{1}{\pi} \text{P.V.}\int_{-\infty}^{\infty}\frac{f(x_i,y)}{ y-x_j}\rmd y,\ i\neq j.
\]
If $f_{12}^{(\pm)}:=\pm\frac{1}{2}(1\mp iH_{12})f_{12}$, then $f_{12}^{(+)}$ and $f_{12}^{(-)}$ are holomorphic for $\imp(x_1 +x_2)>0$
and $\imp(x_1+x_2)<0$, respectively. Moreover, one has
\begin{equation}\label{Hu2}
H_{12}\big(f_{12}^{(+)}-f_{12}^{(-)}\big)=i\big(f_{12}^{(+)}+f_{12}^{(-)}\big).
\end{equation}
\subsection{Eigenvalue problem and conservation laws}\label{sec_2.1}
The IST has been applied successfully to solve Cauchy
problems for the BO equation and to obtain its special solutions. Here, we summarize some background results
related to the IST theory of the BO equation necessary for our stability analysis and refer to \cite{FA83,CW90,KLM99,M06,W16,W17} for more details.
The eigenvalue problem associated with the IST of the BO equation
is as follows
\begin{eqnarray}
&&i\phi_x^++\lambda(\phi^+-\phi^-)=-u\phi^+,\ \lambda\in \R;\label{2.1}\\
&&i\phi_t^{\pm}-2i\lambda\phi_x^{\pm}+\phi_{xx}^{\pm}-2iP_{\pm}(u_x)\phi^{\pm}=-\gamma\phi^{\pm}.\label{2.1'}
\end{eqnarray}
This is the Lax pair of the BO equation. Here $\phi^+$ ($\phi^-$) is the boundary value of an analytic
function in $\mathbb C^+$ ($\mathbb C^-$), $u\in \R$ is rapidly decreasing at infinity, $\lambda$ is the
eigenvalue (or the spectral parameter) and $\gamma$ is an arbitrary constant. $P_{\pm}$ are the projection operators defined by
$P_{\pm}:=\pm\frac{1}{2}(1\mp iH)$, so that $P_+-P_-=1$. We define the Jost
solutions of \eqref{2.1}, which are analytic functions in $\mathbb C^+$, by their behavior at infinity: as
$x\rightarrow +\infty$,
\begin{equation}\label{2.2}
|N(x,\lambda)- e^{i\lambda x}|\rightarrow0,
\ |\bar N(x,\lambda)-1|\rightarrow0
\end{equation}
and for $x\rightarrow -\infty$
\begin{equation}\label{2.3}
|M(x,\lambda)- 1|\rightarrow0,
\ |\bar M(x,\lambda) - e^{i\lambda x}|\rightarrow0.
\end{equation}
The above Jost solutions satisfy the following relations
\begin{eqnarray}
&&N_x-i\lambda N=iP_+(uN); \nonumber\\
&&\bar N_x-i\lambda \bar N=iP_+(u\bar N)-i\lambda;\label{2.4b}\\
&&M_x-i\lambda M=iP_+(u M)-i\lambda; \nonumber\\
&&\bar M_x-i\lambda \bar M=iP_+(u\bar M).\nonumber
\end{eqnarray}
The Jost solutions are related by
\begin{equation}\label{2.5}
M=\bar N+\beta N,
\end{equation}
where $\beta$ is a reflection coefficient given by
\begin{equation*}
\beta(\lambda)=i\int_{\R}u(y)M(y,\lambda)e^{-i\lambda y}\rmd y.
\end{equation*}
The asymptotic behaviors of $N,\bar{N}$ and $M$ are summarized as follows \cite{KLM99}:
\begin{eqnarray}
&&|N(x,\lambda)-\frac{1}{\Gamma(\lambda)}e^{i\lambda x}|\rightarrow0,\ x\rightarrow -\infty,\ \Gamma(\lambda):=e^{\frac{1}{2\pi i}\int_0^{\lambda}\frac{|\beta(k)|^2}{k}\rmd k}; \label{2.N}\\
&&|\bar N(x,\lambda)-(1-\frac{\beta(\lambda)}{\Gamma(\lambda)}e^{i\lambda x})|\rightarrow 0,\ x\rightarrow -\infty;\label{2.barN}\\
&&|M(x,\lambda)-( 1+\beta(\lambda)e^{i\lambda x})|\rightarrow0,\ x\rightarrow +\infty.\label{2.M}
\end{eqnarray}
There exist discrete eigenfunctions $\Phi_j(x)\in P_{+}(H^1(\R))$ for negative eigenvalues
$\lambda=\lambda_j$, $j=1, 2, ..., N$ ($N$ must be finite and each $\lambda_j$ is simple \cite{W16}), which satisfy the equation
\begin{equation}\label{2.6a}
\Phi_{j,x}-i\lambda_j\Phi_j=iP_+(u\Phi_j), \ (j=1, 2, ..., N),
\end{equation}
with the boundary conditions
\begin{equation}\label{2.6b}
\Phi_j \rightarrow \frac{1}{x}, \ x \rightarrow +\infty,
\ (j=1, 2, ..., N).
\end{equation}
By employing the Fredholm theory, Fokas and Ablowitz \cite{FA83} showed that, in the suitable complex $\lambda$ plane, as $\lambda$ approaches any of the bound state eigenvalues $\lambda_j$, $j=1,2,\cdot\cdot\cdot,N$, each of the functions $\bar N$ and $M$ approaches the following limit
\[\bar N(x,\lambda\rightarrow \lambda_j)\rightarrow M(x,\lambda\rightarrow \lambda_j)\rightarrow - \frac{i\Phi_j(x)}{\lambda-\lambda_j}+(x+\gamma_j)\Phi_j(x)+\cdot\cdot\cdot,\]
where the complex-valued quantity $\gamma_j$ is a necessary piece of the scattering data called the {\em{normalization constant}}. Moreover, one has
\begin{equation}\label{im normal}
\imp \gamma_j=-\frac{1}{2\lambda_j}=\frac{1}{c_j}.
\end{equation}
The set
\begin{equation} \label{eq sacttering data}
\mathcal{S}:=\{
\beta(\lambda) \quad (\lambda>0),\quad \lambda_j,\quad
\gamma_j,\quad j=1,\ldots,N\}
\end{equation}
is called the scattering data.
In particular, when $u$ is a soliton potential \eqref{eq:so}, one has that $\beta(\lambda)\equiv0$ and the corresponding Jost solutions can be computed explicitly. In this case, one has
\[
\lambda_1=-\frac{c}{2},\ \gamma_1=-x_0+\frac{i}{c}.
\]
Then from \eqref{2.6a} and \eqref{2.6b}, we have
\begin{eqnarray}
&&\Phi_1(x)=\frac{1}{x+\gamma_1};\label{phi1}\\
&&\bar{N}(x,\lambda)=M(x,\lambda)=1-\frac{i\Phi_1(x)}{\lambda-\lambda_1};\label{phi2}\\
&& N(x,\lambda)=e^{i\lambda x}\big(1+\frac{i}{\lambda_1}\Phi_1(x)\big).\label{phi3}
\end{eqnarray}
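One checks directly that $\Phi_1$ in \eqref{phi1} satisfies the boundary condition \eqref{2.6b}, since $\Phi_1(x)=\frac{1}{x+\gamma_1}\rightarrow\frac{1}{x}$ as $x\rightarrow +\infty$, and that $\imp \gamma_1=\frac{1}{c}$, in agreement with \eqref{im normal}.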
Let us compute the conservation laws of the BO equation. It follows from \eqref{2.4b} and \eqref{2.1'} (by choosing $\gamma=0$) that
\begin{equation}\label{2.7}
\bar N_t-2\lambda \bar N_{x}-i\bar N_{xx}-2(P_+u_x)\bar N =0,
\end{equation}
therefore, the integral $\int^\infty_{-\infty}u(x,t)\bar N(x,t)dx$
is independent of time. Expanding $\bar N$ in powers of $\lambda^{-1}$
\begin{equation*}
\bar N=\sum_{n=0}^\infty\frac{(-1)^n\bar N_{n+1}}{ \lambda^n}, \ \bar N_1=1,
\end{equation*}
and inserting above into \eqref{2.4b}, we obtain the following recursion relations of
$\bar N_n$:
\begin{equation}\label{2.8b}
\bar N_{n+1}=i\bar N_{n,x}+P_+(u\bar N_n), \ n\geq 1.
\end{equation}
Therefore, the higher order conservation laws can be calculated as follows
\begin{equation}\label{2.9}
I_n(u)=(-1)^n\int^\infty_{-\infty}u\bar N_ndx.
\end{equation}
In terms of the
scattering data $\beta$ and $\lambda_j$, the {\em{trace identities}} describing the conservation laws $I_n$ read as follows
\begin{equation}\label{2.10}
I_n(u)=(-1)^n\left\{2\pi\sum_{j=1}^N(-\lambda_j)^{n-1}+\frac{(-1)^n}{2\pi}
\int^\infty_0\lambda^{n-2}|\beta(\lambda)|^2d\lambda\right\},
\ (n=1, 2, ...),
\end{equation}
where $u\in L^2(\R,(1+x^2)dx)\cap L^{\infty}(\R)$.
The first term on the right-hand side of \eqref{2.10} is the contribution
from solitons and the second term comes from radiations.
In terms of $I_n$, the conservation laws $H_n$ presented in Section \ref{intro} satisfy the following relation
\begin{equation}\label{HI relation}
H_n=\frac{2^{n-1}}{n}I_{n+1}, \ \text{for all} \ n\geq 1.
\end{equation}
The first four conservation laws $H_n$ after $H_0$ are already given by \eqref{mass}, \eqref{ener} and \eqref{conser2}.
In view of \eqref{2.10}, the corresponding trace identity for $H_n$ can be expressed as follows
\begin{equation}\label{2.10'}
H_n=\frac{(-1)^{n+1}}{n}\left\{\pi\sum_{j=1}^N(-2\lambda_j)^{n}+\frac{(-1)^{n+1}}{ 2\pi}
\int^\infty_0(2\lambda)^{n-1}|\beta(\lambda)|^2d\lambda\right\},
\ (n=1, 2, ...).
\end{equation}
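In particular, for a reflectionless potential such as the $N$-soliton profile (for which $\beta\equiv0$), only the soliton contribution survives, and \eqref{2.10'} reduces, upon writing $-2\lambda_j=c_j$, to
\begin{equation*}
H_n(U^{(N)})=\frac{(-1)^{n+1}}{n}\pi\sum_{j=1}^Nc_j^{n},\quad (n=1, 2, ...).
\end{equation*}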
Similar to the KdV equation case, the BO conservation laws are in involution,
i.e., the $H_n$ $(n=0,1, 2, ...)$ commute with each other with respect to the following Poisson bracket:
\begin{equation*}
\int^\infty_{-\infty}\left(\frac{\delta H_n}{ \delta u}(x)\right)\mid_{u=U^{(N)}}
\frac{\partial}{\partial x}\left(\frac{\delta H_m}{\delta u}(x)\right)\mid_{u=U^{(N)}}dx=0,
\ n, m=0,1, 2, ...\ .
\end{equation*}
Notice that $H_0$ is the unique Casimir function of \eqref{eq: BO}.
\par
\subsection{ The Euler-Lagrange equation of the $N$-solitons profile}\label{sec_2.3}
In order to show the dynamical stability of the BO $N$-solitons,
we need the formulas of the variational derivatives of $H_n$ evaluated at
the $N$-soliton potential $U^{(N)}(t,x)$. Using the explicit expression \eqref{1.7a} for the BO $N$-solitons, it would in theory be possible to verify by hand, for any given $N$, that they also satisfy variational principles. However, the calculations would
rapidly become unmanageable as $N$ grows. In the following, we review an algebraic proof by Matsuno \cite{M06} that the multi-solitons indeed verify a variational principle. Notice that our conservation laws are slightly modified (see \eqref{HI relation}) with respect to the conservation laws in \cite{M06,LN}. The variational principle for multi-solitons has been commonly accepted since the work of Lax \cite{L68} for the KdV equation but is rarely proved; we also refer to \cite{LW} for an analytic proof of the variational principle for the mKdV $N$-solitons.
In order to derive the Euler-Lagrange equation that the multi-solitons satisfy, we first compute the variational derivatives of the scattering data with respect to the
potential. Then we show that the $N$-solitons profile $U^{(N)}(0,x)$ of the BO
equation \eqref{eq: BO} satisfies \eqref{1.10} if one prescribes the Lagrange multipliers $\mu_n$
appropriately, which provides a variational characterization of $U^{(N)}(x)$, and thus of $U^{(N)}(t,x)$.
In particular, the variational derivative of the discrete eigenvalues with respect to the
potential (at $N$-solitons profile) is as follows
\begin{equation}\label{2.12}
\left(\frac{\delta \lambda_j}{ \delta u}(x)\right)|_{u=U^{(N)}}=\frac{1}{2\pi\lambda_j}\Phi_j^*(x)\Phi_j(x),
\ j=1, 2, ..., N.
\end{equation}
Here, the eigenfunction $\Phi_j$ corresponding to the discrete spectrum $\lambda_j$
satisfies the system of linear algebraic equations
\begin{equation}\label{2.13}(x+\gamma_j)\Phi_j+i\sum_{k\neq j}^N\frac{1}{ \lambda_j-\lambda_k}\Phi_k=1,
\ j=1, 2, ..., N,
\end{equation}
where $\gamma_j=-x_{j}-\frac{i}{2\lambda_j}$ and $x_{j}$ are real constants.
Recall that $\lambda_j$ are related to the wave speeds $c_j$ by the relations $\lambda_j=-\frac{c_j}{2}, j=1, 2, ..., N$.
Notice that the reflection coefficient $\beta(\lambda)$ becomes zero for reflectionless potential $u=U^{(N)}$,
we can derive from \eqref{2.10'} and \eqref{2.12} the following variational derivatives of $H_n$ (with respect to $u$) evaluated at
the $N$-soliton potential
\begin{equation}\label{2.14}\left(\frac{\delta H_n}{ \delta u}(x)\right)|_{u=U^{(N)}}=(-1)^{n+1}
2\sum_{j=1}^N(-2\lambda_j)^{n-2}\Phi_j^*(x)\Phi_j(x),
\ n= 1,2, 3, ..., N.
\end{equation}
The $N$-solitons profile $U^{(N)}(0,x)$ has the following two useful expressions \cite{M06}:
\begin{eqnarray}
&&U^{(N)}=i\sum_{j=1}^N(\Phi_j-\Phi_j^*),\ \
U^{(N)}=-\sum_{j=1}^N\frac{1}{\lambda_j}\Phi_j^*\Phi_j.\label{2.16}
\end{eqnarray}
In view of \eqref{2.16}, one sees that $U^{(N)}(x)>0$, since the discrete eigenvalues $\lambda_j=-\frac{c_j}{2}$ are negative.
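As an illustration, for $N=1$ the system \eqref{2.13} reduces to $(x+\gamma_1)\Phi_1=1$, so that $\Phi_1(x)=\big(x-x_1+\frac{i}{c_1}\big)^{-1}$, and the second expression in \eqref{2.16} recovers the algebraic one-soliton profile \eqref{eq:so}:
\[
U^{(1)}(x)=\frac{2}{c_1}\Phi_1^*(x)\Phi_1(x)=\frac{2c_1}{1+c_1^2(x-x_1)^2}>0.
\]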
The following relation concerning the variational derivative of $\beta$ with respect to $u$
is useful in evaluating the contribution of the continuous part to the functional $H_n$:
\begin{equation*}\label{2.17}
\frac{\delta\beta(\lambda)}{\delta u}(x)=iM(x,\lambda)N^*(x,\lambda).
\end{equation*}
For the $N$-soliton potential $u=U^{(N)}$, one has $\beta\equiv 0$ and therefore $M\equiv\bar N $ by \eqref{2.5}.
The function $MN^*$ satisfies the following orthogonality conditions
\begin{equation}\label{2.18}\int^\infty_{-\infty}M(x,\lambda)N^*(x,\lambda)\frac{\partial}{\partial x}
\left(\Phi_j^*(x)\Phi_j(x)\right)dx=0, \quad j=1, 2, ..., N.
\end{equation}
Similarly, the variational derivative of the normalization constants $\gamma_j$ ($j=1,2,\cdot\cdot\cdot,N$) with respect to $u$
is
\begin{eqnarray}
\frac{\delta \gamma_j}{\delta u}(x)&=&-\frac{1}{2\pi \lambda_j^2}(x+\gamma_j)\Phi_j^*\Phi_j+i\sum_{l\neq j}\frac{\Phi_j^*\Phi_l-\Phi_l^*\Phi_j}{2\pi \lambda_j(\lambda_l-\lambda_j)^2}\nonumber\\&+&\frac{1}{4\pi^2i \lambda_j}\int_0^{+\infty}\frac{\big(\beta(\lambda)\Phi_j^*N-\beta^\ast(\lambda)\Phi_jN^*\big)\rmd \lambda}{(\lambda-\lambda_j)^2}.\label{deriva gamma}
\end{eqnarray}
The results presented above are derived
via the IST of the BO equation, especially through the analysis of the eigenvalue problem \eqref{2.1} of the Lax pair; we refer to \cite{FA83,KLM99,M06} for more details.
The variational characterization of the BO $N$-solitons profile has been established in Matsuno \cite{M06}. For the sake of completeness, we recall the statement and proof as follows.
\begin{proposition} \label{pr2.2}\cite{M06} The profiles of the BO $N$-solitons $U^{(N)}$ satisfy \eqref{1.10} if the Lagrange multipliers $\mu_n$ are symmetric functions of the wave speeds $c_1,c_2,\cdot\cdot\cdot,c_N$ which satisfy the following:
\[
\prod_{n=1}^N(x+c_n)=x^N+\sum_{n=1}^N\mu_nx^{N-n}, \quad x\in \R.
\]
In particular,
$\mu_n$ are given by the following Vieta's formulas: for $k=1,\dots,N$
\begin{equation}
\mu_{N+1-k}=\sum_{1\leq i_{1}<\cdots<i_{k}\leq
N}\left(\prod_{j=1}^{k}c_{i_j}\right).\label{eq:vieta}
\end{equation}
\end{proposition}
\begin{proof} Let $\Psi_j=\Phi_j^*\Phi_j$ be squared eigenfunctions and $c_j=-2\lambda_j$ be the wave speeds.
We deduce from \eqref{1.10} and \eqref{2.14} the following linear relation among the $\Psi_j$:
\begin{equation*}\label{3.1}
\sum_{j=1}^Nc_j^{N-1}\Psi_j + \sum_{n=1}^N(-1)^{N-n+1}\mu_n
\sum_{j=1}^Nc_j^{n-2}\Psi_j=0.
\end{equation*}
In view of the fact that $\Psi_j$ are functionally independent,
$\mu_n$ must satisfy the following system of linear algebraic equations:
\begin{equation*}\label{3.2}
\sum_{n=1}^N(-1)^{N-n}c_j^{n-1}\mu_n=c_j^{N}, (j=1, 2, ..., N).
\end{equation*}
As a consequence, we see that for each $j=1,\dots, N$, we have
\[
(-c_j)^{N}+\sum_{n=1}^N\mu_{n}(-c_j)^{n-1}=0,
\]
which reveals that $-c_j$ are the roots of the
$N$-th order polynomial with coefficients $1,\mu_N,\dots,
\mu_1$. The relations between the roots of a polynomial and its
coefficients are well-known to be described by Vieta's formulas as in~\eqref{eq:vieta}.
Moreover, if we denote by $\sigma_s$ $(1\leq s\leq N)$ the elementary symmetric functions of $c_1, c_2, ..., c_N$ as follows
\begin{equation}\label{3.7a}
\sigma_1=\sum_{j=1}^Nc_j, \ \sigma_2=\sum_{j<k}^Nc_jc_k, ...,
\ \sigma_N=\prod_{j=1}^Nc_j,
\end{equation}
then we derive a simple expression of $\mu_n$ as follows
\begin{equation}\label{3.11}
\mu_n=\sigma_{N-n+1}, \quad n=1, 2, ..., N.
\end{equation}
The proof is concluded.
\end{proof}
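For instance, for $N=2$, \eqref{3.11} gives $\mu_1=\sigma_2=c_1c_2$ and $\mu_2=\sigma_1=c_1+c_2$, so that the functional \eqref{1.9a} reads
\[
S_2(u)=H_{3}(u)+(c_1+c_2)H_2(u)+c_1c_2H_1(u),
\]
recovering the characterization of the double solitons as constrained critical points of $H_3$ used in \cite{LN}.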
\subsection{Bi-Hamiltonian formulation of \eqref{eq: BO}}\label{sec_2.4}
From \eqref{eq:BO hierarchy}, we can define the recursion operator through the following relation for the variational derivatives of the conservation laws $H_n(u):H^{\frac{n-1}{2}}(\R)\rightarrow \R $ $(n\in \mathbb N)$ with respect to $u$:
\begin{equation} \label{eq:recursion2}
\frac{\delta H_{n+1}(u)}{\delta u}=\mathcal{R}(u)\frac{\delta H_{n}(u)}{\delta u}.
\end{equation}
Unlike in the KdV case, the recursion operator $\mathcal{R}(u)$ is implicit and should be understood through \eqref{eq:recursion operator}.
The adjoint operator of $\mathcal{R}(u)$ is
\begin{equation}\label{eq:adjoint of R}
\mathcal{R}^{\star}(u)=\mathcal{J}\mathcal{R}(u)\mathcal{J}^{-1},
\end{equation}
and it is not difficult to see that the operators $\mathcal{R}(u)$ and $\mathcal{R}^{\star}(u)$ satisfy
\begin{equation}\label{eq:R Rstar}
\mathcal{R}^{\star}(u)\mathcal{J}=\mathcal{J}\mathcal{R}(u).
\end{equation}
The above definitions of the recursion operators are reasonable since $\mathcal{R}(u)$ maps variational derivatives of conservation laws of \eqref{eq: BO} onto variational derivatives of conservation laws, while $\mathcal{R}^{\star}(u)$ maps infinitesimal generators of symmetries of \eqref{eq: BO} onto infinitesimal generators of symmetries.
The starting symmetry of \eqref{eq: BO} is $u_x$ \cite{FF81}, therefore, \eqref{eq:adjoint of R} is well-defined since
\[\big(\mathcal{R}^{\star}(u)\big)^n u_x=\mathcal{J}\big(\mathcal{R}(u)\big)^nH_1'(u)=\mathcal{J}\big(\mathcal{R}(u)\big)^nu, \quad n\in \mathbb N.\]
For future reference, we need to show that the above definition of $\mathcal{R}(u)$ is unique and differentiable with respect to $u$. For the KdV equation \eqref{KdV}, the recursion operator is explicit, and its uniqueness and smoothness can be checked directly. In particular, consider \eqref{KdV} with $\delta=3$ and functions in the Schwartz space $\mathcal S(\R)$ for simplicity; the recursion operator of \eqref{KdV} is
$\mathcal{R}_K(u):=-\partial_x^2-\frac{2}{3}u-\frac{2}{3}\partial_x^{-1}u\partial_x$, so that $\mathcal{R}'_K(u)=-\frac{2}{3}-\frac{2}{3}\partial_x^{-1}(\cdot\,\partial_x)$.
\begin{proposition} \label{pr2.0}
Given $u\in H^{k+1}(\R)$ with $k\geq0$, there exists a unique linear operator
$$\mathcal{R}(u): H^{k+1}(\R)\rightarrow H^{k}(\R),$$
such that \eqref{eq:recursion2} and \eqref{eq:R Rstar} hold true. Moreover, $\mathcal{R}(u)$ is differentiable with respect to $u$.
\end{proposition}
\begin{proof} The idea is to relate the recursion operators $\mathcal{R}(u)$ and $\mathcal{R}_{12}$ \eqref{eq:recursion operator}. Consider $u\in\mathcal S(\R)$, then from \eqref{eq:BO hierarchy} and \eqref{eq:recursion2}, one has
\begin{eqnarray*}
&&\mathcal S(\R)\ni H'_{n+1}(u)=\frac{i}{2(n+1)}\mathcal{J}^{-1}\int_{\R}\delta(x_1-x_2)u^{-}_{12}\mathcal{R}_{12}^{n+1}\cdot1\rmd x_2\\
&&=\mathcal{R}(u)H'_{n}(u)=\mathcal{R}(u)\frac{i}{2n}\mathcal{J}^{-1}\int_{\R}\delta(x_1-x_2)u^{-}_{12}\mathcal{R}_{12}^{n}\cdot1\rmd x_2.
\end{eqnarray*}
The uniqueness of $\mathcal{R}(u)$ follows by an induction argument over $n$. Moreover, one infers that $\mathcal{R}(u)\sim-H\partial_x +L(u)$, where the remainder term $L(u):\mathcal S(\R)\rightarrow \mathcal S(\R)$ is differentiable. Therefore, $\mathcal{R}(u)$ is also differentiable and $\mathcal{R}'(u)\sim L'(u)$.
\end{proof}
It will be shown in Section \ref{sec_4} that understanding the spectral information of the (adjoint) recursion operators $\mathcal{R}(u)$ and $\mathcal{R}^{\star}(u)$ is essential in showing the (spectral) stability of the BO multi-solitons.
We first observe that the differential equation \eqref{eq:stationary} verified by the soliton profile and the bi-Hamiltonian structure \eqref{eq:recursion2} imply that the $1$-soliton $Q_c(x-ct-x_0)$ with speed $c>0$ satisfies, for all $n\geq2$ and for any $t\in\R$, the following variational principle
\begin{eqnarray}\label{eq:1-sol variaprinciple}
&&H_{n+1}'(Q_c)+c H'_{n}(Q_c)=\mathcal{R}(Q_c)\big(H_{n}'(Q_c)+c H'_{n-1}(Q_c)\big)\nonumber\\&&=\cdot\cdot\cdot=\mathcal{R}^{n-1}(Q_c)\big(H_{2}'(Q_c)+c H'_{1}(Q_c)\big)=0,
\end{eqnarray}
\eqref{eq:1-sol variaprinciple} holds true since the functions $H_{n}'(Q_c)+c H'_{n-1}(Q_c)$ belong to $H^1(\R)$, which is contained in the domain of $\mathcal{R}(Q_c)$.
For future reference, we calculate here the quantities $H_j(Q_c)$ related to the $1$-soliton profile $Q_c$. Instead of applying the trace identity \eqref{2.10'} for $H_n$ directly, we multiply \eqref{eq:1-sol variaprinciple} by $\frac{\rmd Q_c}{\rmd c}$; then for each $n$ one has
\[
\frac{\rmd H_{n+1}(Q_c)}{\rmd c}=-c\frac{\rmd H_{n}(Q_c)}{\rmd c}=\cdot\cdot\cdot=(-c)^n\frac{\rmd H_{1}(Q_c)}{\rmd c}=(-1)^n\pi c^{n},
\]
and therefore by induction $\lim_{c\rightarrow0} H_{n}(Q_c)=0$ and
\begin{equation}\label{eq:jHamliton}
H_{n+1}(Q_c)=(-1)^n\frac{\pi}{n+1}c^{n+1}.
\end{equation}
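As a consistency check, the trace identity \eqref{2.10'} applied to the reflectionless one-soliton potential ($\beta\equiv0$, $-2\lambda_1=c$) gives $H_{n}(Q_c)=\frac{(-1)^{n+1}}{n}\pi c^{n}$, which is exactly \eqref{eq:jHamliton}.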
Let us recall that the soliton $Q_c(x-ct-x_0)$ \eqref{eq:so} is a solution of the BO equation. For simplicity, we denote $Q_c$ by $Q$. Then by \eqref{eq:recursion2}, we have
\begin{equation}\label{eq:recursion4}
H'_{n+1}(Q)=\mathcal{R}(Q) H'_{n}(Q).
\end{equation}
To analyze the second variation of the actions,
we linearize the relation \eqref{eq:recursion2} by setting $u=Q+\varepsilon z$, and obtain a relation between the linearized operators $H''_{n+1}(Q)+cH''_{n}(Q)$ and $H''_{n}(Q)+cH''_{n-1}(Q)$ for all $n\geq2$. One has
\begin{proposition} \label{pr2.3} Suppose that $Q$ is a soliton profile of the BO equation with speed $c>0$. If $z\in H^{n}(\R)$ for $n\geq1$, then there holds
the following iterative operator identity
\begin{equation}\label{eq:recursion Hamiltonianre}
\big(H''_{n+1}(Q)+cH''_{n}(Q)\big)z=\mathcal{R}(Q)\big(H''_{n}(Q)+cH''_{n-1}(Q)\big)z.
\end{equation}
\end{proposition}
\begin{proof} Let $u=Q+\varepsilon z$, by \eqref{eq:recursion2} and the definition of Gateaux derivative, one has
\begin{equation}\label{eq:recursion Hamiltonian2}
H''_{n+1}(Q)z=\mathcal{R}(Q)(H''_{n}(Q)z)+\big(\mathcal{R}'(Q)z\big)(H_n'(Q)),
\end{equation}
then by \eqref{eq:recursion Hamiltonian2}
\[
\big(H''_{n+1}(Q)+cH''_{n}(Q)\big)z=\mathcal{R}(Q)\bigg(\big(H''_{n}(Q)+cH''_{n-1}(Q)\big)z\bigg)+\big(\mathcal{R}'(Q)z\big)\big(H_n'(Q)+cH_{n-1}'(Q)\big).
\]
Notice that, by Proposition \ref{pr2.0}, $\mathcal{R}'(Q)$ is well-defined; then \eqref{eq:recursion Hamiltonianre} follows directly from \eqref{eq:1-sol variaprinciple}.
\end{proof}
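Iterating \eqref{eq:recursion Hamiltonianre} down to $n=1$ yields, for $z\in H^{n}(\R)$,
\[
\big(H''_{n+1}(Q)+cH''_{n}(Q)\big)z=\mathcal{R}^{n-1}(Q)\big(H''_{2}(Q)+cH''_{1}(Q)\big)z,
\]
the operator counterpart of \eqref{eq:1-sol variaprinciple}; this is the form in which Proposition \ref{pr2.3} will be used in Section \ref{sec_4}.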
\section{Invariance of the Inertia}\label{sec_3}
The tools presented in this section have been developed by Lax \cite{L75}, Neves and Lopes \cite{LN}, and Le Coz and the second author \cite{LW}. It is noted that the work of Neves and Lopes was devoted to the spectral analysis of linearized operators around the double solitons only; \cite{LW} extends their results, by employing the recursion operator to prove the conjugate operator identities \eqref{Mt03}, to the spectral analysis of linearized operators around $N$-solitons with $N$ an arbitrary integer. For the sake of completeness, we give the most relevant elements of
the statement only and refer to \cite{L75,LN,LW} for the details of the proof and further discussions.
Let $X$ be a real Hilbert space. We first define the inertia of a
self-adjoint operator with positive essential spectrum.
\begin{definition}\label{def:inertia}
Let $L:D(L)\subset X\to X$ be a self-adjoint operator.
Assume that there exists $\delta>0$ such that the spectrum of $L$ in $(-\infty,\delta)$ consists of a finite number of eigenvalues with finite geometric multiplicities.
The \emph{inertia} of $L$, denoted by $in(L)$, is the pair
$(n,z)$, where $n$ is the number of negative eigenvalues of $L$ (counted with geometric multiplicities) and $z$ is
the dimension of the kernel of $L$.
\end{definition}
\subsection{Isoinertial family of operators}
We will be working with linearized operators around a multi-soliton, which fit in the following more generic framework.
Consider the abstract evolution equation
\begin{equation}\label{eq:evolution eq}
u_t=f(u),
\end{equation}
for $u : \R \rightarrow X$, and recall that the following framework was set in \cite{L75,LN}. Let $X_2\subset X_1\subset X$ be Hilbert spaces
and $V : X_1\rightarrow\R$ be such that the following assumptions are verified.
(H1) $X_2\subset X_1\subset X$ are continuously embedded. The embedding from $X_2$ to $X_1$ is denoted by $i$.
(H2) The functional $V: X_1\rightarrow\R$ is $\mathcal{C}^3$.
(H3) The function $f: X_2\rightarrow X_1$ is $\mathcal{C}^2$.
(H4) For any $u\in X_2$, we have
\[
V'(u)f(u)=0.
\]
Moreover, given $u \in \mathcal{C}^1(\R, X_1)\cap \mathcal{C}(\R, X_2)$ a strong solution of \eqref{eq:evolution eq}, we assume that there exists a self-adjoint
operator $L(t) : D(L) \subset X\rightarrow X$ with domain $D(L)$ independent of $t$ such that for $h, k \in Z$, where $Z\subset D(L)\cap X_2$
is a dense subspace of $X$, we have
\[
\langle L(t)h,k\rangle=V''(u(t))(h,k).
\]
We also consider the operators $B(t):D(B)\subset X\rightarrow X$ such that for any $h\in Z$ we have
\[
B(t)h=-f'(u(t))h.
\]
Then we assume moreover that
(H5) The closed operators $B(t)$ and $B^\ast(t)$ have a common domain $D(B)$ which is independent of $t$. The
Cauchy problems
\[
u_t=B(t)u,\quad v_t=B^\ast(t)v,
\]
are well-posed in $X$ for positive and negative times.
We then have the following result (see \cite{L75,LN}).
\begin{proposition} \label{pr5.1}
Let $u\in \mathcal{C}^1(\R, X_1)\cap \mathcal{C}(\R, X_2)$ be a strong solution of \eqref{eq:evolution eq} and assume that (H1)-(H5) are
satisfied. Then the following assertions hold.
$\bullet$ Invariance of the set of critical points. If there exists $t_0 \in\R$ such that $V'(u(t_0))=0$, then $V'(u(t))=0$
for any $t\in \R$.
$\bullet$ Invariance of the inertia. Assume that $u$ is such that $V'(u(t))=0$ for all $t\in \R$. Then the inertia
$in(L(t))$ of the operator $L(t)$ representing $V''(u(t))$ is independent of $t$.
\end{proposition}
\subsection{Calculation of the inertia}
Given a $t$-dependent family of operators whose inertia we are interested in,
Proposition \ref{pr5.1} allows us to choose a specific $t$ at which to perform the calculation of the inertia. This is however in most
situations not sufficient, as we would like to let $t$ go to infinity and relate the inertia of our family to the
inertia of the asymptotic objects that we obtain. This is made possible by the following framework.
Let $X$ be a real Hilbert space. Let $N \in\mathbb{N}$ and $(\tau_n^j)$ be sequences of isometries of $X$ for $j = 1,\cdot\cdot\cdot, N$. For
brevity in notation, we denote the composition of an isometry $\tau_n^k$ and the inverse of $\tau_n^j$ by
\[
\tau_n^{k/j}:=\tau_n^k(\tau_n^j)^{-1}.
\]
Let $A, (B^j)_{j=1,...,N}$ be linear operators and $(R_n)$ be a sequence of linear operators. Define the sequences of
operators based on $(B^j)$ and $(\tau_n^j)$ by
\[
B_n^j=(\tau_n^j)^{-1}B^j(\tau_n^j).
\]
Define the operator $L_n: D(A)\subset X \rightarrow X $ by
\[
L_n=A+\sum_{j=1}^NB_n^j+R_n.
\]
We make the following assumptions.
(A1) For all $j=1,\cdot\cdot\cdot, N$ and $n\in\mathbb{N}$, the operators $A, A + B^j , A + B_n^j$ and $L_n$ are self-adjoint
with the same domain $D(A)$.
(A2) The operator $A$ is invertible. For all $j=1,\cdot\cdot\cdot, N$ and $n\in\mathbb{N}$, the operator $A$ commutes with $\tau_n^j$ (i.e. $A=(\tau_n^j)^{-1}A(\tau_n^j)$).
(A3) There exists $\delta>0$ such that for all $j=1,\cdot\cdot\cdot, N$ and $n\in\mathbb{N}$, the essential spectra of $A, A + B^j , A + B_n^j$ and $L_n$ are contained in $(\delta, +\infty)$.
(A4) For every $\lambda\in \cap_{j=1}^N\rho(A+B^j)$ and for all $j=1,\cdot\cdot\cdot, N$ the operators $A(A+B^j-\lambda I)^{-1}$ are bounded.
(A5) In the operator norm, $\|R_nA^{-1}\|\rightarrow 0$ as $n\rightarrow+\infty$.
(A6) For all $u\in D(A)$ and $j,k=1,\cdot\cdot\cdot,N$ with $j\neq k$, one has
\[
\lim_{n\rightarrow +\infty}\|\tau_n^{j/k}B^k\tau_n^{k/j}u\|_X=0.
\]
(A7) For all $u\in X$ and $j,k=1,\cdot\cdot\cdot,N$ and $j\neq k$ we have $\tau_n^{j/k}u\rightharpoonup 0$ weakly in X as $n\rightarrow\infty$.
(A8) For all $j=1,\cdot\cdot\cdot, N$, the operator $B^jA^{-1}$ is compact.
\begin{theorem} \label{th5.2}
Assume that assumptions (A1)-(A8) hold and let $\lambda<\delta$. The following assertions hold.
$\bullet$ If $\lambda\in \cap_{j=1}^N\rho(A+B^j)$, then there exists $n_\lambda\in \mathbb{N}$ such that for all $n>n_\lambda$ we have $\lambda\in \rho(L_n)$.
$\bullet$ If $\lambda\in \cup_{j=1}^N\sigma(A+B^j)$, then there exists $\varepsilon_0>0$ such that for all $0<\varepsilon<\varepsilon_0$ there exists $n_\varepsilon\in \mathbb{N}$ such
that for all $n > n_\varepsilon$ we have
\[
dim(Range(P_{\lambda,\varepsilon}(L_n)))=\sum_{j=1}^Ndim(Range(P_{\lambda,\varepsilon}(A+B^j))),
\]
where $P_{\lambda,\varepsilon}(L)$ is the spectral projection of $L$ corresponding to the circle of center $\lambda$
and radius $\varepsilon$.
\end{theorem}
\begin{corollary} \label{co5.3}
Under the assumptions of Theorem \ref{th5.2}, if there exists $n_L$ such that for all $n > n_L$ we have
\[
dim(ker(L_n))\geq\sum_{j=1}^Ndim(ker(A+B^j)),
\]
then for all $n > n_L$ we have
\[
in(L_n)=\sum_{j=1}^Nin(A+B^j).
\]
Moreover, a non-zero eigenvalue of $L_n$ cannot approach $0$ as $n\rightarrow\infty$.
\end{corollary}
Theorem \ref{th5.2} and Corollary \ref{co5.3} were proved in \cite{LN} for the case $N=2$. For the proof of the general $N \in\mathbb{N}$ case, we refer to \cite{LW} for details.
For the dynamical stability problem of the BO $N$-solitons $U^{(N)}(t)$ considered here, the above abstract framework can be applied with $X_1=H^{\frac{N-1}{2}}(\R)$, $X_2=H^{\frac{N}{2}}(\R)$ and $V(u)=S_N(u)$ defined in \eqref{1.9a}. By Proposition \ref{pr5.1}, the inertia
$in(L(t))$ of the operator $$L(t):=V''(U^{(N)}(t))=S''_N(U^{(N)}(t))$$ is independent of $t$.
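A word on how the second framework will be instantiated: as in \cite{LN,LW}, the isometries $\tau_n^j$ may be taken to be the spatial translations following each soliton along a sequence of times $t_n\rightarrow\infty$, so that each localized operator $A+B^j$ becomes the linearized operator around a single soliton profile $Q_{c_j}$, while $R_n$ collects interaction terms that vanish as the solitons separate; this instantiation is carried out in Section \ref{sec_4}.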
\bigskip
\section{Spectral Analysis} \label{sec_4}
\setcounter{equation}{0}
Let $U^{(N)}(t,x)$ be the BO $N$-solitons and $ U^{(N)}(x)=U^{(N)}(0,x)$ the $N$-soliton profile. In this section, we will use the subscript {\em{od}} to denote the space of odd functions and the subscript {\em{ev}} to denote the space of even functions.
A detailed spectral analysis of the linearized operator $\mathcal {L}_N$ around the $N$-solitons (defined in \eqref{eq:linearized n-soliton operator}) will be presented by employing the (adjoint) recursion operators defined in Section \ref{sepecsec2}.
The combination of two main arguments allows us to obtain the spectral information of $\mathcal {L}_N$. First, we have shown in Section~\ref{sec_3}
that a form of iso-spectral property holds for the linearized operators $\mathcal L_N$
around the multi-solitons $U^{(N)}(t,x)$, in the sense that the inertia (i.e. the number
of negative eigenvalues and the dimension of the kernel, see
Definition~\ref{def:inertia}) is preserved along the
time evolution. Second, at large time, the linearized operator can be
viewed as a collection of decoupled linearized operators
around each of the soliton profiles composing the multi-soliton, and
the spectrum of the linearized operator around the multi-soliton converges to
the union of the spectra of the linearized operators around each
soliton.
More precisely, the linearized operators around the multi-solitons
fit into the framework of Proposition \ref{pr5.1} (see also Theorem 3 in \cite{LN}), and we conclude that the inertia $in(\mathcal {L}_N(t))$ of $\mathcal {L}_N(t)$
is independent of $t$. Therefore, we can choose a convenient $t$ at which to calculate the inertia, and
the best we can do is to calculate the inertia $in(\mathcal {L}_N(t))$ as $t$ goes to $\infty$. More precisely,
the $N$-soliton $U^{(N)}(t,x)$ splits into $N$ one-solitons $Q_{c_j}(x-c_jt-x_j)$ far apart, see \eqref{eq:asympotic behavior}. By Theorem \ref{th5.2} we infer that, as $t$ goes to $\infty$, the spectrum $\sigma(\mathcal {L}_N(t))$ of $\mathcal {L}_N(t)$ converges
to the union of the spectra $\sigma(L_{N,j})$ of $L_{N,j}:=S_N''(Q_{c_j})$. In this section, we show that the linearized operator $\mathcal {L}_N$ around the $N$-solitons $U^{(N)}$ has exactly $[\frac{N+1}{2}]$ negative eigenvalues and that the dimension of its null space equals $N$, namely, $in(\mathcal {L}_N(t))=([\frac{N+1}{2}],N)$.
This result follows from the following inertia properties of the operators $L_{N,j}$:\\
--for $j=2k-1$ odd, $in(L_{N,j})=(1,1)$, i.e., $L_{N,2k-1}$ has exactly one negative eigenvalue;\\
--for $j=2k$ even, $in(L_{N,j})=(0,1)$, i.e., $L_{N,2k}\geq0$ is nonnegative.
In view of its expression, $L_{N,j}$ is a weighted sum of the operators $$H_{n+1}''(Q_{c_j})+c_jH_{n}''(Q_{c_j}) \quad \text{for}\ n=1,2,\cdot\cdot\cdot,N.$$ In particular, by Proposition \ref{pr2.3}, it can be factorized in the following way
\begin{eqnarray}\label{formula of L}
L_{N,j}=\sum_{n=1}^N\sigma_{j,N-n}\bigg(H_{n+1}''(Q_{c_j})+c_jH_{n}''(Q_{c_j})\bigg)=\left(\prod_{k=1,k\neq j}^{N}(\mathcal R(Q_{c_j})+c_k)\right)\bigg(H_2''(Q_{c_j})+c_j H_1''(Q_{c_j})\bigg),
\end{eqnarray}
where the $\sigma_{j,k}$ are the elementary symmetric functions of $c_1,c_2,\cdot\cdot\cdot,c_{j-1},c_{j+1},\cdot\cdot\cdot,c_N$, namely,
$$\sigma_{j,0}=1, \ \sigma_{j,1}=\sum_{l=1,l\neq j}^Nc_l, \ \sigma_{j,2}=\sum_{l<k,\,k,l\neq j}c_lc_k, ...,
\ \sigma_{j,N-1}=\prod_{l=1,l\neq j}^Nc_l.$$
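For instance, for $N=2$ and $j=1$, \eqref{formula of L} reads
\[
L_{2,1}=H_3''(Q_{c_1})+(c_1+c_2)H_2''(Q_{c_1})+c_1c_2H_1''(Q_{c_1})=\big(\mathcal R(Q_{c_1})+c_2\big)\big(H_2''(Q_{c_1})+c_1H_1''(Q_{c_1})\big),
\]
which is precisely $S_2''(Q_{c_1})$ with the multipliers $\mu_n$ of Proposition \ref{pr2.2}.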
\subsection{The spectrum of $L_{1,c}$}
Let us deal with the linearized operator around the one-soliton profile $Q_{c}$; the associated operator is
\begin{equation}\label{eq:linearized operator}
\mathcal {L}_1=L_{1,c}=H_{2}''(Q_c)+cH_{1}''(Q_c)=-H\partial_x+c-2Q_c.
\end{equation}
It is the purpose of this subsection to give an account of the spectral analysis of the operator $L_{1,c}$. We view $L_{1,c}$ as an unbounded, self-adjoint operator on $L^2(\R)$ with domain $H^1(\R)$; we refer to \cite{BBSDB83} for some details of the following spectral analysis.
Key spectral properties of $L_{1,c}$ follow by differentiating \eqref{eq:stationary} with respect to $x_0$ and with respect to $c$. In particular, for the normalized wave speed $c=1$, we have
\begin{equation}\label{eq:eigen kernel}
L_{1,1}Q'_1=0, \quad L_{1,1}(Q_1+xQ'_1)=-Q_1, \quad \eta_0:=\frac{1}{\sqrt{\pi}}Q'_1.
\end{equation}
There also hold the following two identities
\[
L_{1,1}Q_1=-Q_1^2,\quad L_{1,1}Q_1^2=-Q_1-Q_1^2,
\]
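Here the first identity follows at once from \eqref{eq:stationary} with $c=1$: since $-HQ_1'=Q_1^2-Q_1$, one gets $L_{1,1}Q_1=-HQ_1'+Q_1-2Q_1^2=-Q_1^2$; the second can be checked in the same spirit from the explicit profile and \eqref{Hul}.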
It follows that, for any $\alpha,\beta\in\R$,
\[L_{1,1}\big(\alpha Q_1+\beta Q_1^2\big)=-\beta Q_1-(\alpha+\beta)Q_1^2.\]
By selecting $\alpha$ and $\beta$ suitably, we can obtain the two eigenfunctions associated with the discrete eigenvalues of $L_{1,1}$.
Specifically, to obtain eigenfunctions we need $\frac{\beta}{\alpha}=\frac{\alpha+\beta}{\beta}$, i.e., the ratio $r=\frac{\beta}{\alpha}$ solves the quadratic equation $r^2=r+1$, with solutions
$\frac{\beta}{\alpha}=\frac{1\pm \sqrt{5}}{2}$. Taking $\alpha=1,\beta=\frac{1+ \sqrt{5}}{2}$ or $\alpha=1,\beta=\frac{1- \sqrt{5}}{2}$, we obtain the negative eigenvalue, one of the positive eigenvalues, and their eigenfunctions as follows:
\begin{eqnarray}
&&\lambda_{-}=-\frac{1+ \sqrt{5}}{2}, \ \eta_{-}=N_-\big(2Q_1+(1+ \sqrt{5})Q_1^2\big),\ L_{1,1}\eta_{-}=\lambda_{-}\eta_{-};\label{ pos eigen}\\
&&\lambda_{+}=\frac{\sqrt{5}-1}{2}, \ \eta_{+}=N_+\big(2Q_1+(1- \sqrt{5})Q_1^2\big),\ L_{1,1}\eta_{+}=\lambda_{+}\eta_{+};\label{nega eigen}\\
&& N_{\pm}:=\frac{\big(1\pm\sqrt{5}\big)\big(\sqrt{5}\pm 2\big)^{\frac1 2}}{4 (\sqrt{5}\pi)^{\frac1 2}}.\nonumber
\end{eqnarray}
The remaining (non-discrete) eigenvalue of $L_{1,1}$ is $1$, with eigenfunction
\begin{equation}\label{eq:eigen 1}
\eta_1(x)=\frac{1}{\sqrt{\pi}}\big(Q'_1+xQ_1\big),\ L_{1,1}\eta_1=\eta_1.
\end{equation}
Now we consider the essential spectrum of $L_{1,1}$. Since $Q_1$ decays to zero at infinity, by Weyl's essential spectrum theorem the essential spectrum of $L_{1,1}$ is the interval $[1,+\infty)$. For $\lambda>0$, let $\eta(x,\lambda)$ satisfy $L_{1,1}\eta=(\lambda+1)\eta$ with $\eta$ bounded as $x\rightarrow\pm\infty$. Following a standard approach, we represent $\eta$ in the form
\begin{equation}\label{eq:eta repre}
\eta=\eta^{(+)}+\eta^{(-)},
\end{equation}
where $\eta^{(+)}(z)$ is analytic in the upper half complex plane and bounded as $\imp z\rightarrow+\infty$, whilst $\eta^{(-)}(z)$ is analytic in the lower half complex plane and bounded as $\imp z\rightarrow-\infty$. Since $L_{1,1}$ is real and the potential $Q_1(z)=Q_1^\ast(z^\ast)$, we can presume that
\begin{equation}\label{eq:eta decom}
\psi(z,\lambda)=\eta^{(+)}(z,\lambda)=\big(\eta^{(-)}(z^\ast,\lambda)\big)^\ast.
\end{equation}
By \eqref{Hul} and substituting \eqref{eq:eta repre} into $L_{1,1}\eta=(\lambda+1)\eta$, we have
\[
i\eta^{(+)}_z-i\eta^{(-)}_z+\big(2Q_1(z)+\lambda\big)\big(\eta^{(+)}+\eta^{(-)}\big)=0,
\]
which by \eqref{eq:eta decom} is equivalent to
\[
i\psi_z+\big(2Q_1(z)+\lambda\big)\psi=0,
\]
the solution of which is
\[\psi(z)=\frac{1}{\sqrt{2\pi}}\frac{z-i}{z+i}e^{i\lambda z}.\]
The generalized eigenfunctions of $L_{1,1}$ are thus given by \eqref{eq:eta repre} and \eqref{eq:eta decom}; the explicit formula is
\[
\eta(x,\lambda)=\sqrt{\frac{2}{\pi}}\frac{\big(x^2-1\big)\cos(\lambda x)+2x \sin(\lambda x)}{x^2+1}.
\]
For $j,k\in \sigma:=\{-,0,+,1\}$, the four functions $\eta_j(x)$ defined in \eqref{eq:eigen kernel}, \eqref{ pos eigen}, \eqref{nega eigen} and \eqref{eq:eigen 1}, combined with the generalized eigenfunctions $\psi(x,\lambda)$ of \eqref{eq:eta decom}, satisfy the following $L^2$-inner product relations:
\begin{eqnarray}\label{completeness in L22}
&&\langle \eta_j,\eta_k\rangle=\delta_{jk};\nonumber\\
&&\langle\psi(\cdot,\lambda),\psi^\ast(\cdot,\lambda')\rangle=\delta(\lambda-\lambda'),\ \ \langle\psi(\cdot,\lambda),\eta_j\rangle=0;\nonumber\\
&&\int_0^{+\infty}\bigg(\psi(x,\lambda)\psi^\ast(y,\lambda)+\psi^\ast(x,\lambda)\psi(y,\lambda)\bigg)\rmd \lambda+\sum_{j\in\sigma}\eta_j(x)\eta_j(y)=\delta(x-y).\label{completeness in L2}
\end{eqnarray}
\eqref{completeness in L2} expresses the completeness of the implied eigenfunction expansion in $L^2(\R)$. In particular, any function $f\in L^2(\R)$ can be decomposed in the above basis as follows:
\begin{eqnarray} \label{decomposition of z1}
&& f(x)=\int_0^{+\infty}\bigg(\tilde{\alpha}(\lambda)\psi(x,\lambda)+\tilde{\alpha}^\ast(\lambda)\psi^\ast(x,\lambda)\bigg)\rmd \lambda +\sum_{j\in\sigma}\tilde{\alpha}_j\eta_j(x) ,\\
&& \tilde{\alpha}(\lambda):=\langle f,\psi^\ast(\lambda)\rangle,\ \ \tilde{\alpha}_j:=\langle f,\eta_j\rangle,\ j\in \sigma=\{-,0,+,1\}.\nonumber
\end{eqnarray}
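As a minimal illustration of the last step of our strategy in the base case $n=1$, observe that each $\psi(\cdot,\lambda)$ is itself an eigenfunction of $L_{1,1}$: by \eqref{Hul}, $-H\partial_x\psi=-i\psi_x=(2Q_1+\lambda)\psi$, whence $L_{1,1}\psi=(1+\lambda)\psi$. Taking $f$ real in \eqref{decomposition of z1} and using the orthogonality relations above (together with $\langle\psi(\cdot,\lambda),\psi(\cdot,\lambda')\rangle=0$, obtained by closing the contour in the upper half plane), the quadratic form diagonalizes as
\[
\langle L_{1,1}f,f\rangle=\lambda_{-}\tilde\alpha_{-}^{2}+\lambda_{+}\tilde\alpha_{+}^{2}+\tilde\alpha_{1}^{2}+2\int_0^{+\infty}(1+\lambda)|\tilde\alpha(\lambda)|^{2}\rmd \lambda,
\]
which exhibits the single negative direction $\eta_-$ and the one-dimensional kernel spanned by $\eta_0$, i.e. $in(L_{1,1})=(1,1)$.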
To obtain the spectrum of the operator $L_{N,j}$ \eqref{formula of L}, let us consider the spectral analysis of the linearized operators
\begin{eqnarray}\label{formula of Ln}
L_n:=H''_{n+1}(Q)+cH''_{n}(Q),
\end{eqnarray} for all integers $n\geq1$. Here and in the rest of this section we write, for simplicity, $Q$ for $Q_c$. It is natural to consider the quadratic form $\langle L_nz,z\rangle$ with $z(x)$ decomposed as in \eqref{decomposition of z1}. However, this is quite involved, as the eigenfunctions of the operator $L_1=L_{1,c}$ \eqref{eq:linearized operator} need not be eigenfunctions of $L_n$ for $n\geq2$. The main ingredient of our spectral analysis of $L_n$ is the observation that $\mathcal{J}L_n$ shares the same eigenfunctions as $\mathcal{J}L_1$. To deal with this spectral problem, the core is the following operator identities relating the recursion operator $\mathcal{R}(Q)$ and the adjoint recursion operator $\mathcal{R}^{\star}(Q)$ (see \eqref{eq:adjoint of R}).
\begin{lemma} \label{le3.5} The recursion operator $\mathcal{R}(Q)$, the adjoint recursion operator $\mathcal{R}^{\star}(Q)$ and the linearized operator $L_n$ for all integers $n\geq1$ satisfy the following operator identities.
\begin{eqnarray}
&&L_n\mathcal{J}\mathcal{R}(Q)=\mathcal{R}(Q)L_n\mathcal{J}, \label{operator identity1}\\
&&\mathcal{J}L_n\mathcal{R}^{\star}(Q)=\mathcal{R}^{\star}(Q)\mathcal{J}L_n,\label{operator identity2}
\end{eqnarray}
where $\mathcal{J}$ is the operator $\partial_x$.
\end{lemma}
\begin{proof}
We only need to prove \eqref{operator identity2}, since taking adjoints in \eqref{operator identity2} yields \eqref{operator identity1}. Notice that from
Proposition \ref{pr2.3}, the operator $\mathcal{R}(Q)L_n=L_{n+1}$ is self-adjoint. This in turn implies that $$(\mathcal{R}(Q)L_n)^\star=\mathcal{R}(Q)L_n=L_n\mathcal{R}^\star(Q).$$
On the other hand, in view of \eqref{eq:R Rstar}, one has
$$
\mathcal{J}L_n\mathcal{R}^\star(Q)=\mathcal{J}\mathcal{R}(Q)L_n=\mathcal{R}^\star(Q)\mathcal{J}L_n,
$$
which is the advertised result of the lemma.
\end{proof}
\begin{remark}
Identities of the type \eqref{operator identity1} and \eqref{operator identity2} hold around any solution of the BO equation. In particular, let $U^{(N)}$ be the BO $N$-soliton profile and $\mathcal{L}_N$ be the second variation operator defined in \eqref{eq:linearized n-soliton operator}. Then it is easy to verify (similarly to Lemma \ref{le3.5}) that the following operator identities hold true
\begin{eqnarray}
&&\mathcal{L}_N\mathcal{J}\mathcal{R}(U^{(N)})=\mathcal{R}(U^{(N)})\mathcal{L}_N\mathcal{J};\label{operator identity3}\\
&&\mathcal{J}\mathcal{L}_N\mathcal{R}^{\star}(U^{(N)})=\mathcal{R}^{\star}(U^{(N)})\mathcal{J}\mathcal{L}_N.\label{operator identity4}
\end{eqnarray}
\end{remark}
An immediate consequence of the factorization results \eqref{operator identity1} and \eqref{operator identity2} is that the (adjoint) recursion operator $\mathcal{R}(Q)$ (resp. $\mathcal{R}^\star(Q)$) and $L_n\mathcal{J}$ (resp. $\mathcal{J}L_n$) commute. It then turns out that the operators $\mathcal{J}L_n$ and $\mathcal{R}^\star(Q)$ share the same eigenfunctions, and $L_n\mathcal{J}$ shares the same eigenfunctions as the recursion operator $\mathcal{R}(Q)$. It will be possible to derive the precise eigenvalues of the operators $L_n\mathcal{J}$ and $\mathcal{J}L_n$ by analyzing the asymptotic behaviors of the corresponding eigenfunctions.
Our approach to the spectral analysis of the linearized operator $L_n$ is as follows. Firstly, we derive the spectrum of the operator $\mathcal{J}L_n$, which is easier to obtain than that of $L_n$; the idea, motivated by \eqref{operator identity2}, is to reduce to the spectrum of the adjoint recursion operator $\mathcal{R}^\star(Q)$. We then show that the eigenfunctions of $\mathcal{R}^\star(Q)$ ($\mathcal{J}L_n$), together with a generalized kernel of $\mathcal{J}L_n$, form an orthogonal basis of $L^2(\R)$, which can be viewed as a completeness relation. Finally, we calculate the quadratic form $\langle L_nz,z\rangle$ for functions $z$ decomposed in this basis, from which the inertia of $L_n$ can be computed directly.
\subsection{The spectrum of the recursion operator around the BO one soliton}
The spectra of the recursion operator $\mathcal{R}(Q)$ and of its adjoint $\mathcal{R}^\star(Q)$ are essential for analyzing the linearized operator $L_n$ defined in \eqref{formula of Ln}. Note that the recursion operators are nonlocal and not even explicit, which are major obstacles to studying them directly. However, by employing the properties of the squared eigenfunctions of the eigenvalue problem \eqref{2.1}, one obtains the following result.
\begin{lemma} \label{le3.6} The recursion operator $\mathcal{R}(Q)$, defined on $L^2(\R)$ with domain $H^{1}(\R)$, has only one discrete eigenvalue $-c$, associated with the eigenfunction $Q$; the essential spectrum is the interval $[0,+\infty)$, and the corresponding eigenfunctions have no spatial decay and are not in $L^2(\R)$. Moreover, the kernel of $\mathcal{R}(Q)$ is spanned by $\big(N\bar{N}^\ast\big)(x,0)$, where $N(x,\lambda)$ and $\bar{N}(x,\lambda)$ are defined in \eqref{phi3} and \eqref{phi2}.
\end{lemma}
\begin{proof} Consider the Jost solutions of the spectral problem \eqref{2.1} with the potential $u=Q$ and the asymptotic expressions in \eqref{2.2}, \eqref{2.3}, \eqref{2.N}, \eqref{2.barN} and \eqref{2.M}. In this case, \eqref{2.1} possesses only one discrete eigenvalue $\lambda_1=-\frac{c}{2}<0$, which generates the soliton profile $Q$. The key ingredient in the analysis is to find the eigenvalues of $\mathcal{R}_{12}(Q)$ in \eqref{eq:recursion operator} around the soliton profile $Q$, as $\mathcal{R}(Q)$ is not explicit. It is then found (using the properties of the generalized Hilbert transform presented in Subsection \ref{sec_2.0} and $Q_{12}^-Q_{12}^+=Q_{12}^+Q_{12}^-$) that, for $\lambda>0$, the following identities hold:
\begin{eqnarray}
&&\bigg(Q_{12}^+-iQ_{12}^-H_{12}\bigg)\bigg(Q_{12}^-(N(x_1,\lambda)\bar{N}^\ast(x_2,\lambda))\bigg)=-4\lambda Q_{12}^-(N(x_1,\lambda)\bar{N}^\ast(x_2,\lambda)),\label{eq:eigen ident1}\\
&&\bigg(Q_{12}^+-iQ_{12}^-H_{12}\bigg)\bigg(Q_{12}^-(N^\ast(x_1,\lambda)\bar{N}(x_2,\lambda))\bigg)=-4\lambda Q_{12}^-(N^\ast(x_1,\lambda)\bar{N}(x_2,\lambda)),\label{eq:eigen ident2}\\
&&\bigg(Q_{12}^+-iQ_{12}^-H_{12}\bigg)\bigg(Q_{12}^-(\Phi_1(x_1)\Phi_1^\ast(x_2))\bigg)=-4\lambda_1 Q_{12}^-(\Phi_1(x_1)\Phi_1^\ast(x_2)),\label{eq:eigen ident3}
\end{eqnarray}
where $\bar{N}^\ast(x_2)$ and $\Phi_1^\ast(x_2)$ satisfy the adjoint eigenvalue problem of \eqref{2.1} with potential $u=Q$ (i.e., replace $i,x$ by $-i,x_2$ in \eqref{2.1}). Recall that $Q_{12}^{\pm}=Q(x)\pm Q(x_2)+i(\partial_{x}\mp\partial_{x_2})$ is defined similarly to \eqref{eq:functions}. Then \eqref{eq:eigen ident1}, \eqref{eq:eigen ident2} and \eqref{eq:eigen ident3} reveal that
\begin{eqnarray}
&&\mathcal{R}_{12}(Q)\big(N(x_1,\lambda)\bar{N}^\ast(x_2,\lambda)\big)=4\lambda\big(N(x_1,\lambda)\bar{N}^\ast(x_2,\lambda)\big),\label{eq:eigen ident4}\\
&&\mathcal{R}_{12}(Q)\big(N^\ast(x_1,\lambda)\bar{N}(x_2,\lambda)\big)=4\lambda\big(N^\ast(x_1,\lambda)\bar{N}(x_2,\lambda)\big),\label{eq:eigen ident5}\\
&&\mathcal{R}_{12}(Q)\big(\Phi_1(x_1)\Phi_1^\ast(x_2)\big)=4\lambda_1\big(\Phi_1(x_1)\Phi_1^\ast(x_2)\big)=-2c\big(\Phi_1(x_1)\Phi_1^\ast(x_2)\big).\label{eq:eigen ident6}
\end{eqnarray}
In view of the extra factor $\frac12$ in the bi-Hamiltonian structure \eqref{eq:BO hierarchy}, one sees that the squared eigenfunctions $N\bar{N}^\ast$ and $N^\ast\bar{N}$ satisfy
\begin{eqnarray}
&&\mathcal{R}(Q)\big(N\bar{N}^\ast\big)(x,\lambda)=2\lambda\big(N\bar{N}^\ast\big)(x,\lambda),\quad \text{for} \ \lambda>0, \label{eq:Reigen} \\ &&\mathcal{R}(Q)\big(N^\ast\bar{N}\big)(x,\lambda)=2\lambda\big(N^\ast\bar{N}\big)(x,\lambda),\quad \text{for} \ \lambda>0, \label{eq:Reigen0} \\
&&\mathcal{R}(Q)\big(\Phi_1\Phi_1^\ast\big)(x)=2\lambda_1\big(\Phi_1\Phi_1^\ast\big)(x)=-c\big(\Phi_1\Phi_1^\ast\big)(x).\label{eq:R eigen1}
\end{eqnarray}
Identity \eqref{eq:R eigen1} together with $\Phi_1\Phi_1^\ast=\frac{c}{2}Q$ reveals that $\mathcal{R}(Q)Q=-cQ$. Moreover, differentiating \eqref{eq:R eigen1} with respect to $c$ yields
\begin{equation*}\label{eq:generalized kernel}
\mathcal{R}(Q)\frac{\partial Q}{\partial c}=-Q-c\frac{\partial Q}{\partial c}.
\end{equation*}
On account of \eqref{eq:Reigen} and \eqref{eq:Reigen0}, the essential spectrum of $\mathcal{R}(Q)$ is given by $2\lambda\geq0$, i.e., it equals the interval $[0,+\infty)$.
The associated generalized eigenfunctions $\big(N\bar{N}^\ast\big)(x,\lambda)$ and $\big(N^\ast\bar{N}\big)(x,\lambda)$ possess no spatial decay and are not in $L^2(\R)$, as can be seen from \eqref{phi2} and \eqref{phi3}.
On the other hand, a direct computation shows that the kernel of $\mathcal{R}(Q)$ is attained at $\lambda=0$; in view of \eqref{phi1}, \eqref{phi2} and \eqref{phi3}, the associated eigenfunction is $$\big(N\bar{N}^\ast\big)(x,0)=|N(x,0)|^2\notin L^2(\R).$$ The proof of the lemma is completed.
\end{proof}
\begin{remark}
The spectral information of $\mathcal{R}(Q)$ presented in Lemma \ref{le3.6} reveals that $\mathcal{R}(Q)$ is essentially invertible in $H^s(\R)$ for any $s\geq0$.
\end{remark}
Similar to the proof of Lemma \ref{le3.6}, we have the following result concerning the spectrum of the composite operators $\mathcal{R}^n(Q)$ for $n\geq2$.
\begin{corollary} \label{le3.8} The composite operator $\mathcal{R}^n(Q)$ defined in $L^2(\R)$ with domain $H^{n}(\R)$ has only one eigenvalue $(-c)^n$, associated with the eigenfunction $Q$; the essential spectrum is the interval $[0,+\infty)$, and the corresponding generalized eigenfunctions do not have spatial decay and are not in $L^2(\R)$.
\end{corollary}
We now consider the adjoint recursion operator $\mathcal{R}^\star(Q)$. In view of the factorization \eqref{operator identity2}, it shares the same eigenfunctions as $\mathcal{J}L_n$ and is thus more relevant to the spectral stability problem for solitons. Recall from \eqref{eq:adjoint of R} that
$$
\mathcal{R}^{\star}(u)=J\mathcal{R}(u)J^{-1}.
$$
The spectral information of $\mathcal{R}^{\star}(Q)$ can be derived as follows.
\begin{lemma} \label{le3.81} The adjoint recursion operator $\mathcal{R}^\star(Q)$ defined in $L^2(\R)$ with domain $H^{1}(\R)$ has only one eigenvalue $-c$, associated with the eigenfunction $Q_x$; the essential spectrum is the interval $[0,+\infty)$, and the corresponding eigenfunctions do not have spatial decay and are not in $L^2(\R)$. Moreover, the kernel of $\mathcal{R}^\star(Q)$ is spanned by $\big(N\bar{N}^\ast\big)_x(x,0)$.
\end{lemma}
\begin{proof}
Consider the Jost solutions of the spectral problem \eqref{2.1} with potential $Q$ and the asymptotic formulas in \eqref{2.2}, \eqref{2.3}, \eqref{2.N}, \eqref{2.barN} and \eqref{2.M}. The soliton profile $Q$ is generated by the eigenvalue $\lambda_1=-\frac{c}{2}$. As in the proof of Lemma \ref{le3.6}, we determine the eigenvalues of $\mathcal{R}^\star_{12}(Q)$ in \eqref{eq:recursion operator} around the soliton profile $Q$, since $\mathcal{R}^\star(Q)$ is not explicit. It then follows from \eqref{eq:eigen ident1}, \eqref{eq:eigen ident2} and \eqref{eq:eigen ident3} that for $\lambda>0$, one has
\begin{eqnarray*}
&&\mathcal{R}^\star_{12}(Q)\big(Q_{12}^-N(x_1)\bar{N}^\ast(x_2)\big)=4\lambda\big(Q_{12}^-N(x_1)\bar{N}^\ast(x_2)\big),\label{eq:eigen ident7}\\
&&\mathcal{R}^\star_{12}(Q)\big(Q_{12}^-N^\ast(x_1)\bar{N}(x_2)\big)=4\lambda\big(Q_{12}^-N^\ast(x_1)\bar{N}(x_2)\big),\label{eq:eigen ident8}\\
&&\mathcal{R}^\star_{12}(Q)\big(Q_{12}^-\Phi_1(x_1)\Phi_1^\ast(x_2)\big)=4\lambda_1\big(Q_{12}^-\Phi_1(x_1)\Phi_1^\ast(x_2)\big)=-2c\big(Q_{12}^-\Phi_1(x_1)\Phi_1^\ast(x_2)\big).\label{eq:eigen ident9}
\end{eqnarray*}
As a consequence, the following relations hold:
\begin{eqnarray}
&&\mathcal{R}^\star(Q)\big(N\bar{N}^\ast\big)_x(x,\lambda)=2\lambda\big(N\bar{N}^\ast\big)_x(x,\lambda),\quad \text{for} \ \lambda>0,\label{eq:Reigen2} \\ &&\mathcal{R}^\star(Q)\big(N^\ast\bar{N}\big)_x(x,\lambda)=2\lambda\big(N^\ast\bar{N}\big)_x(x,\lambda),\quad \text{for} \ \lambda>0,\label{eq:Reigen21} \\
&&\mathcal{R}^\star(Q)\big(\Phi_1\Phi_1^\ast\big)_x(x)=2\lambda_1\big(\Phi_1\Phi_1^\ast\big)_x(x)=-c\big(\Phi_1\Phi_1^\ast\big)_x(x),\label{eq:Reigen3}\\
&&\mathcal{R}^\star(Q)\frac{\partial Q_x}{\partial c}=-Q_x-c\frac{\partial Q_x}{\partial c}.\label{eq:Reigen4}
\end{eqnarray}
By \eqref{eq:Reigen3}, one has $\mathcal{R}^\star(Q)Q_x=-cQ_x$, so $-c$ is the only discrete eigenvalue. In view of \eqref{eq:Reigen2} and \eqref{eq:Reigen21}, the essential spectrum of $\mathcal{R}^\star(Q)$ is $2\lambda\geq0$, which is the interval $[0,+\infty)$.
The associated generalized eigenfunctions $\big(N\bar{N}^\ast\big)_x(x,\lambda)$ possess no spatial decay and are not in $L^2(\R)$, as can be seen from \eqref{phi2} and \eqref{phi3}.
Similarly, the kernel of $\mathcal{R}^\star(Q)$ is attained at $\lambda=0$ and is spanned by $\big(N\bar{N}^\ast\big)_x(x,0)$. This completes the proof of Lemma \ref{le3.81}.
\end{proof}
\subsection{The spectrum of linearized operators $\mathcal{J}L_n$, $L_n\mathcal{J}$ and $L_n$}
In this subsection our attention is focused on the spectral analysis of the linearized operators $\mathcal{J}L_n$, $L_n\mathcal{J}$ and $L_n$. The main ingredients are \eqref{operator identity2} and the observation that the eigenfunctions of $\mathcal{J}L_n$ together with its generalized kernel element form an orthogonal basis in $L^2(\R)$ (see \eqref{decomposition of z} below). It follows that the spectrum of $\mathcal{J}L_n$ lies on the imaginary axis, which directly implies the spectral stability of the BO solitons.
Let us first deal with the case $n=1$. Recalling from \eqref{2.2} that $|(N\bar{N}^\ast)(x,\lambda)- e^{i\lambda x}|\rightarrow0$ as $x\rightarrow +\infty$, we can summarize the spectral information of $\mathcal{J}L_1$ as follows:
\begin{eqnarray*}
&&\mathcal{J}L_1\big(N\bar{N}^\ast\big)_x=i(\lambda^2+c\lambda)\big(N\bar{N}^\ast\big)_x,\quad \text{for} \ \lambda>0;\label{eq:BOessentia} \\
&&\mathcal{J}L_1\big(N^\ast\bar{N}\big)_x=-i(\lambda^2+c\lambda)\big(N^\ast\bar{N}\big)_x,\quad \text{for} \ \lambda>0;\label{eq:BOessentia'} \\
&&\mathcal{J}L_1\big(\Phi_1\Phi_1^{\ast}\big)_x=\frac{c}{2}\mathcal{J}L_1Q_x=0;\label{eq:BOkernel}\\
&&\mathcal{J}L_1\frac{\partial Q}{\partial c}=-Q_x.\label{eq:BOgenera kernel}
\end{eqnarray*}
Similarly, the key spectral information of the operator $L_1\mathcal{J}$ is the following:
\begin{eqnarray*}
&&L_1\mathcal{J}\big(N\bar{N}^\ast\big)=i(\lambda^2+c\lambda)\big(N\bar{N}^\ast\big), \quad \text{for} \ \lambda>0 ; \label{eq:BOessentia2} \\
&&L_1\mathcal{J}\big(N^\ast\bar{N}\big)=-i(\lambda^2+c\lambda)\big(N^\ast\bar{N}\big), \quad \text{for} \ \lambda>0 ; \label{eq:BOessentia2'} \\
&&L_1\mathcal{J}\big(\Phi_1\Phi_1^{\ast}\big)=\frac{c}{2}L_1Q_x=0;\label{eq:BOkerne2}\\
&&L_1\mathcal{J}\partial_x^{-1}\frac{\partial Q}{\partial c}= L_1\frac{\partial Q}{\partial c}=-Q.\label{eq:BOgenera kerne2}
\end{eqnarray*}
Here the function $\partial_x^{-1}\frac{\partial Q}{\partial c}\in L^2(\R)$ is well defined since $\frac{\partial Q}{\partial c}=\frac{2(1-c^2x^2)}{(c^2x^2+1)^2}\in H^1(\R)$.
The eigenfunctions presented above are expressed in terms of the squared eigenfunctions of the eigenvalue problem \eqref{2.1} of the BO equation with potential $u=Q$. In this case, $\beta(\lambda)=0$ for $\lambda>0$, there exists only one discrete eigenvalue $\lambda_1=-\frac{c}{2}$, and the Jost solutions are explicitly given by \eqref{phi1}, \eqref{phi2} and \eqref{phi3}. The squared eigenfunctions generate the following two function sets. The first set
\begin{equation}\label{eq:set JL}
\{\big(N\bar{N}^\ast\big)_x(x,\lambda),\ \big(N^\ast\bar{N}\big)_x(x,\lambda) \quad \text{for} \quad \lambda>0;\quad Q_x; \quad \frac{\partial Q}{\partial c} \}
\end{equation}
consists of linearly independent eigenfunctions and a generalized kernel element of the operator $\mathcal{J}L_1$. Moreover, they are essentially orthogonal under the $L^2$-inner product.
The second set
\begin{equation}\label{eq:set LJ}
\{\big(N\bar{N}^\ast\big)(x,\lambda),\ \big(N^\ast\bar{N}\big)(x,\lambda) \quad \text{for} \quad \lambda>0;\quad Q; \quad \partial_x^{-1}\frac{\partial Q}{\partial c} \}
\end{equation}
consists of linearly independent eigenfunctions and a generalized kernel element of the operator $L_1\mathcal{J}$. Noticing that the function $\frac{\partial Q}{\partial c}$ is even and using the asymptotic behaviors of the Jost solutions in \eqref{2.2}, \eqref{2.3}, \eqref{2.N}, \eqref{2.barN} and \eqref{2.M}, for $\lambda,\lambda'>0$ one can compute the inner products of the elements of the sets \eqref{eq:set JL} and \eqref{eq:set LJ} as follows (see \cite{KLM99}):
\begin{eqnarray}
&&\int_{\R}\big(N\bar{N}^\ast\big)_x(x,\lambda)\big(N^\ast\bar{N}\big)(x,\lambda')\rmd x=-2\pi i\lambda\delta(\lambda-\lambda');\label{eq:product1} \\
&&\int_{\R}\big(N^\ast\bar{N}\big)_x(x,\lambda)\big(N\bar{N}^\ast\big)(x,\lambda')\rmd x=2\pi i\lambda\delta(\lambda-\lambda');\label{eq:product1'} \\
&&\int_{\R}\big(N\bar{N}^\ast\big)_x(x,\lambda)\big(N\bar{N}^\ast\big)(x,\lambda')\rmd x=\int_{\R}\big(N^\ast\bar{N}\big)_x(x,\lambda)\big(N^\ast\bar{N}\big)(x,\lambda')\rmd x=0;\label{eq:product1''} \\
&&\int_{\R}Q_x\partial_x^{-1}\big(\frac{\partial Q}{\partial c}\big)\rmd x=-\int_{\R}Q\frac{\partial Q}{\partial c}\rmd x=-\frac{\rmd H_1(Q)}{\rmd c}=-\pi,\label{eq:product2}\\
&&\int_{\R}\frac{\partial Q}{\partial c}Q\rmd x=\frac{\rmd H_1(Q)}{\rmd c}=\pi.\label{eq:product3}
\end{eqnarray}
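The last two identities can also be checked directly from the explicit one-soliton profile. Assuming the standard algebraic soliton $Q_c(x)=\frac{2c}{1+c^2x^2}$ (which is consistent with the expression for $\frac{\partial Q}{\partial c}$ displayed above) and the normalization $H_1(u)=\frac{1}{2}\int_{\R}u^2\,\rmd x$, one computes
\begin{equation*}
\int_{\R}Q^2\,\rmd x=\int_{\R}\frac{4c^2}{(1+c^2x^2)^2}\,\rmd x=4c^2\cdot\frac{\pi}{2c}=2\pi c,
\qquad\text{hence}\qquad
\int_{\R}\frac{\partial Q}{\partial c}\,Q\,\rmd x=\frac{1}{2}\frac{\rmd}{\rmd c}\int_{\R}Q^2\,\rmd x=\pi,
\end{equation*}
in agreement with \eqref{eq:product3} and with the trace formula $H_1(Q)=\pi c$ appearing in the proof of Lemma \ref{lem:2.2.} below.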
The corresponding closure or completeness relation is
\begin{eqnarray}\label{eq:closure relation}
&&\frac{1}{2\pi i}\int_0^{+\infty}\bigg(\big(N\bar{N}^\ast\big)_x(x,\lambda)\big(N^\ast\bar{N}\big)(y,\lambda)-\big(N^\ast\bar{N}\big)_x(x,\lambda)\big(N\bar{N}^\ast\big)(y,\lambda)\bigg)\frac{\rmd \lambda}{\lambda}\nonumber\\
&&+\frac{1}{\pi}\bigg(Q(y)\frac{\partial Q(x)}{\partial c}-Q_x\partial_y^{-1}\frac{\partial Q(y)}{\partial c}\bigg)=\delta(x-y),
\end{eqnarray}
which indicates that any function $z$ vanishing as $x\rightarrow\pm\infty$ can be expanded over the two bases \eqref{eq:set JL} and \eqref{eq:set LJ}. In particular, we have the following decomposition of the function $z$:
\begin{eqnarray} \label{decomposition of z}
&& z(x)=\int_0^{+\infty}\bigg(\alpha(\lambda)\big(N\bar{N}^\ast\big)_x(x,\lambda)+\alpha^\ast(\lambda)\big(N^\ast\bar{N}\big)_x(x,\lambda)\bigg)\rmd \lambda+\beta Q_x+\gamma \frac{\partial Q}{\partial c},\\
&&\alpha(\lambda)=\frac{1}{2\pi i\lambda}\langle\big(N^\ast\bar{N}\big)(y,\lambda),z(y)\rangle,\ \beta= \frac{1}{\pi}\langle\partial_y^{-1}\frac{\partial Q(y)}{\partial c},z(y)\rangle,\ \gamma=\frac{1}{\pi}\langle Q(y),z(y)\rangle.
\end{eqnarray}
Similarly, one can also decompose the function $z(x)$ on the second set \eqref{eq:set LJ} by multiplying \eqref{eq:closure relation} by $z(x)$ and integrating in $x$.
We now consider the operator $\mathcal{J}L_n$. Recall that $L_n=H''_{n+1}(Q)+cH''_n(Q)$ given by \eqref{formula of Ln} is defined in $L^2(\R)$ with domain $H^{n}(\R)$; the symbol of its principal (constant coefficient) part is $$\big(H''_{n+1}(0)+cH''_n(0)\big)^{\wedge}(\xi)=\frac{2^n}{n+1}\widehat{(-H\partial_x)^n}+\frac{2^{n-1}c}{n}\widehat{(-H\partial_x)^{n-1}}=\frac{2^n}{n+1}|\xi|^n+\frac{2^{n-1}c}{n}|\xi|^{n-1}.$$
It thus transpires that the symbol of the principal part of the operator $\mathcal{J}L_n$ is
\begin{equation}\label{eq:symbol JL}
\varrho_{n,c}(\xi):=i\frac{2^n}{n+1}|\xi|^{n}\xi+i\frac{2^{n-1}c}{n}|\xi|^{n-1}\xi.
\end{equation}
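For instance, specializing \eqref{eq:symbol JL} to the first two members of the hierarchy gives
\begin{equation*}
\varrho_{1,c}(\xi)=i\big(|\xi|\xi+c\xi\big),\qquad
\varrho_{2,c}(\xi)=i\Big(\frac{4}{3}|\xi|^{2}\xi+c|\xi|\xi\Big);
\end{equation*}
evaluated at $\xi=\lambda>0$, the first expression yields $\varrho_{1,c}(\lambda)=i(\lambda^2+c\lambda)$, which recovers the relations for $\mathcal{J}L_1$ displayed above.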
We have the following statement concerning the spectrum of the operator $\mathcal{J}L_n$.
\begin{proposition} \label{pr3.5-3}
The essential spectrum of $\mathcal{J}L_n$ (defined in $L^2(\R)$ with domain $H^{n+1}(\R)$) for $n\geq1$ is $i\R$; the kernel is spanned by the function $Q_x$ and the generalized kernel is spanned by $\frac{\partial Q}{\partial c}$.
\end{proposition}
\begin{proof}
The proof is by direct verification. We compute the spectrum of the operator $\mathcal{J}L_n$ directly by employing the squared eigenfunctions as follows
\begin{eqnarray}
&&\mathcal{J}L_n\big(N\bar{N}^\ast\big)_x=\varrho_{n,c}(\lambda)\big(N\bar{N}^\ast\big)_x,\quad \text{for} \ \lambda>0;\label{eq:essentia} \\
&&\mathcal{J}L_n\big(N^\ast\bar{N}\big)_x=\varrho^\ast_{n,c}(\lambda)\big(N^\ast\bar{N}\big)_x,\quad \text{for} \ \lambda>0;\label{eq:essentia'}\\
&&\mathcal{J}L_n\big(\Phi_1\Phi_1^{\ast}\big)_x=\frac{c}{2}\mathcal{J}L_nQ_x=0,\label{eq:kernel}\\
&&\mathcal{J}L_n\frac{\partial Q}{\partial c}=(-1)^{n}c^{n-1}Q_x.\label{eq:genera kernel}
\end{eqnarray}
In view of \eqref{eq:symbol JL}, \eqref{eq:essentia} and \eqref{eq:essentia'}, the essential spectrum of $\mathcal{J}L_n$ consists of $\pm\varrho_{n,c}(\lambda)$ for $\lambda>0$, which fills the whole imaginary axis. In view of \eqref{eq:kernel} and \eqref{eq:genera kernel}, the kernel and the generalized kernel of $\mathcal{J}L_n$ are spanned by $Q_x$ and $\frac{\partial Q}{\partial c}$, respectively. The proof of Proposition \ref{pr3.5-3} is completed.
\end{proof}
For the adjoint of $\mathcal{J}L_n$, namely the operator $-L_n\mathcal{J}$, we have the following result on its spectrum.
\begin{proposition} \label{le3.82}
The essential spectrum of $L_n\mathcal{J}$ (defined in $L^2(\R)$ with domain $H^{n+1}(\R)$) for $n\geq1$ is $i\R$; the kernel is spanned by the function $Q$ and the generalized kernel is spanned by $\partial_x^{-1}\big(\frac{\partial Q}{\partial c}\big)$.
\end{proposition}
\begin{proof}
One can compute the spectrum of the operator $L_n\mathcal{J}$ directly by employing the squared eigenfunctions as follows
\begin{eqnarray}
&&L_n\mathcal{J}\big( N\bar{N}^\ast\big)=L_n\big(N\bar{N}^\ast\big)_x=\varrho_{n,c}(\lambda) N\bar{N}^\ast, \quad \text{for} \ \lambda>0;\label{eq:essentia2} \\
&&L_n\mathcal{J}\big( N^\ast\bar{N}\big)=L_n\big(N^\ast\bar{N}\big)_x=\varrho^\ast_{n,c}(\lambda)N^\ast\bar{N},\quad \text{for} \ \lambda>0; \label{eq:essentia2'} \\
&&L_n\mathcal{J}\Phi_1\Phi_1^{\ast}=\frac{c}{2} L_nQ_x=0;\label{eq:kerne2}\\
&&L_n\mathcal{J}\partial_x^{-1}\big(\frac{\partial Q}{\partial c}\big)=L_n\big(\frac{\partial Q}{\partial c}\big)=(-1)^{n}c^{n-1}Q.\label{eq:genera kerne2}
\end{eqnarray}
In view of \eqref{eq:symbol JL}, \eqref{eq:essentia2} and \eqref{eq:essentia2'}, the essential spectrum of $L_n\mathcal{J}$ consists of $\pm\varrho_{n,c}(\lambda)$ for $\lambda>0$, which fills the whole imaginary axis. In view of \eqref{eq:kerne2} and \eqref{eq:genera kerne2}, the kernel and the generalized kernel of $L_n\mathcal{J}$ are spanned by $Q$ and $\partial_x^{-1}\frac{\partial Q}{\partial c}$, respectively. The proof is concluded.
\end{proof}
With the decomposition \eqref{decomposition of z} of the function $z(x)$ at hand, we can compute the quadratic form associated with the operator $L_n$ and extract its spectral information.
The following statement describes the full spectrum of the linearized operator $L_n=H''_{n+1}(Q)+cH''_n(Q)$ for $n\geq1$.
\begin{lemma} \label{le3.9} For $n\geq1$ and any $z\in H^{\frac{n}{2}}_{od}(\R)$, we have $\langle L_{n}z,z\rangle\geq0$, and $\langle L_{n}z,z\rangle=0$ if and only if $z$ is a multiple of $Q_x$. In $H^{\frac{n}{2}}_{ev}(\R)$ and for odd $n$, the operator $L_{n}$ has exactly one negative eigenvalue and zero is no longer an eigenvalue; in $H^{\frac{n}{2}}_{ev}(\R)$ and for even $n$, the operator $L_{n}$ has no negative eigenvalue.
\end{lemma}
\begin{proof}
For any $z(x)\in H^{\frac{n}{2}}(\R)$ we have the decomposition \eqref{decomposition of z}, so we can evaluate the quadratic form $\langle L_nz,z \rangle$ as follows:
\begin{eqnarray} \label{quadratic}
&&\langle L_nz,z \rangle=\langle\int_0^{+\infty}\bigg(\alpha(\lambda)L_n\big(N\bar{N}^\ast\big)_x(x,\lambda)+\alpha^\ast(\lambda)L_n\big(N^\ast\bar{N}\big)_x(x,\lambda)\bigg)\rmd \lambda,\nonumber\\&& \int_0^{+\infty}\bigg(\alpha(\lambda)\big(N\bar{N}^\ast\big)_x(x,\lambda)+\alpha^\ast(\lambda)\big(N^\ast\bar{N}\big)_x(x,\lambda)\bigg)^\ast\rmd \lambda\rangle\nonumber\\&&+2\gamma\langle \int_0^{+\infty}\bigg(\alpha(\lambda)L_n\big(N\bar{N}^\ast\big)_x(x,\lambda)+\alpha^\ast(\lambda)L_n\big(N^\ast\bar{N}\big)_x(x,\lambda)\bigg)\rmd \lambda,\frac{\partial Q}{\partial c}\rangle\nonumber\\&&+\gamma^2\langle L_n\frac{\partial Q}{\partial c},\frac{\partial Q}{\partial c}\rangle=I+II+III.
\end{eqnarray}
First it is noticed from \eqref{eq:essentia2} and the zero inner product property of the two sets \eqref{eq:set JL} and \eqref{eq:set LJ} that
\begin{eqnarray}\label{second }
II&=&2\gamma\langle \int_0^{+\infty}\bigg(\alpha(\lambda)L_n\big(N\bar{N}^\ast\big)_x(x,\lambda)+\alpha^\ast(\lambda)L_n\big(N^\ast\bar{N}\big)_x(x,\lambda)\bigg)\rmd \lambda,\frac{\partial Q}{\partial c}\rangle\nonumber\\&=&2\gamma\int_0^{+\infty}\langle \alpha(\lambda)\varrho_{n,c}(\lambda)\big(N\bar{N}^\ast\big)(x,\lambda)+\alpha^\ast(\lambda)\varrho^\ast_{n,c}(\lambda)\big(N^\ast\bar{N}\big)(x,\lambda),\frac{\partial Q}{\partial c}\rangle \rmd \lambda\nonumber\\&=&0.
\end{eqnarray}
For the third term of \eqref{quadratic}, a direct computation shows that,
\begin{eqnarray} \label{third }
III=\gamma^2\langle (-1)^nc^{n-1}Q,\frac{\partial Q}{\partial c}\rangle=\gamma^2(-1)^nc^{n-1}\frac{\rmd H_1(Q)}{\rmd c}=\pi\gamma^2 (-1)^{n}c^{n-1}.
\end{eqnarray}
To deal with the first term in \eqref{quadratic}, using \eqref{eq:essentia2} and \eqref{eq:product1} yields that
\begin{eqnarray} \label{first term }
&&I=\langle\int_0^{+\infty}\bigg(\alpha(\lambda)L_n\big(N\bar{N}^\ast\big)_x(x,\lambda)+\alpha^\ast(\lambda)L_n\big(N^\ast\bar{N}\big)_x(x,\lambda)\bigg)\rmd \lambda,\nonumber\\&& \int_0^{+\infty}\bigg(\alpha^\ast(\lambda)\big(N^\ast\bar{N}\big)_x(x,\lambda)+\alpha(\lambda)\big(N\bar{N}^\ast\big)_x(x,\lambda)\bigg)\rmd \lambda\rangle\nonumber\\
&=&\langle\int_0^{+\infty}\bigg(\alpha(\lambda)\varrho_{n,c}(\lambda)\big(N\bar{N}^\ast\big)(x,\lambda)+\alpha^\ast(\lambda)\varrho^\ast_{n,c}(\lambda)\big(N^\ast\bar{N}\big)(x,\lambda)\bigg)\rmd \lambda,\nonumber\\&& \int_0^{+\infty}\bigg(\alpha^\ast(\lambda)\big(N^\ast\bar{N}\big)_x(x,\lambda)+\alpha(\lambda)\big(N\bar{N}^\ast\big)_x(x,\lambda)\bigg)\rmd \lambda\rangle\nonumber\\
&=& \int_{\R_+^2}\varrho_{n,c}(\lambda)\alpha(\lambda)\alpha^\ast(\lambda')\langle\big(N\bar{N}^\ast\big)(x,\lambda),\big(N^\ast\bar{N}\big)_x(x,\lambda')\rangle\rmd \lambda\rmd \lambda'\nonumber\\
&+&\int_{\R_+^2}\varrho^\ast_{n,c}(\lambda)\alpha^\ast(\lambda)\alpha(\lambda')\langle\big(N^\ast\bar{N}\big)(x,\lambda),\big(N\bar{N}^\ast\big)_x(x,\lambda')\rangle\rmd \lambda\rmd \lambda'\nonumber\\
&=& \int_0^{+\infty}2\pi i\lambda\big(\varrho^\ast_{n,c}(\lambda)-\varrho_{n,c}(\lambda)\big)|\alpha(\lambda)|^2\rmd \lambda\nonumber\\
&=&2^{n+1}\pi\int_0^{+\infty}|\alpha(\lambda)|^2\lambda^{n+1}\big(\frac{2\lambda}{n+1}+\frac{c}{n}\big)\rmd \lambda\geq0,
\end{eqnarray}
where $I=0$ holds if and only if $\alpha(\lambda)=0$. Combining \eqref{first term }, \eqref{second } and \eqref{third }, one has
\begin{eqnarray} \label{quadratic form}
\langle L_nz,z\rangle=2^{n+1}\pi\int_0^{+\infty}|\alpha(\lambda)|^2\lambda^{n+1}\big(\frac{2\lambda}{n+1}+\frac{c}{n}\big)\rmd \lambda+\pi\gamma^2(-1)^{n}c^{n-1}.
\end{eqnarray}
For $z\in H^{\frac{n}{2}}_{od}(\R)$ we have $\gamma=0$, so \eqref{quadratic form} and \eqref{first term } show that $\langle L_{n}z,z\rangle\geq0$. Moreover, $\langle L_{n}z,z\rangle=0$ implies $\alpha(\lambda)=0$, and therefore $z=\beta Q_x$, i.e., $z$ is a multiple of $Q_x$.
If $z\in H^{\frac{n}{2}}_{ev}(\R)$, then $\beta=0$. In the hyperplane $\gamma=0$, $\langle L_{n}z,z\rangle\geq0$ and $\langle L_{n}z,z\rangle=0$ if and only if $\alpha(\lambda)=0$, in which case $z=0$. Therefore $\langle L_{n}z,z\rangle>0$ in the hyperplane $\gamma=0$, which implies that $L_n$ can have at most one negative eigenvalue. If $n$ is odd, then $L_n\frac{\partial Q}{\partial c}=-c^{n-1}Q$ and $\langle L_n\frac{\partial Q}{\partial c},\frac{\partial Q}{\partial c}\rangle=-c^{n-1}\frac{\rmd H_1(Q)}{\rmd c}=\pi (-1)^{n}c^{n-1}<0$; therefore, $L_n$ has exactly one negative eigenvalue. If $n$ is even, then \eqref{quadratic form}, or equivalently $\langle L_n\frac{\partial Q}{\partial c},\frac{\partial Q}{\partial c}\rangle=(-1)^{n}c^{n-1}\frac{\rmd H_1(Q)}{\rmd c}=\pi c^{n-1}>0$, shows that $L_n$ has no negative eigenvalue. This completes the proof of Lemma \ref{le3.9}.
\end{proof}
\begin{remark}\label{re:4.0}
Lemma \ref{le3.9} states that for $k\in\mathbb{N}$, the inertia of the operators $L_n$ satisfies $in(L_{2k})=(0,1)$ and $in(L_{2k-1})=(1,1)$. One can verify, by Weyl's essential spectrum theorem, that the essential spectrum of $L_n$ ($n\geq2$) is the interval $[0,+\infty)$. From $L_{2k}=\mathcal{R}(Q)L_{2k-1}$ or \eqref{quadratic form}, one infers that the operator $L_{2k}$ has a positive eigenvalue $\nu=O(c^{2k})$ (with $L^2$-eigenfunctions), which is embedded in its continuous spectrum.
\end{remark}
As a direct consequence of Lemma \ref{le3.9}, one obtains the following spectral information on the higher order linearized operators $\mathcal {T}_{n,j}:=H''_{n+2}(Q_{c_j})+(c_1+c_2)H''_{n+1}(Q_{c_j})+c_1c_2H''_n(Q_{c_j})$ (defined in $L^2(\R)$ with domain $H^{2}(\R)$) with $n\geq1$, $j=1,2$ and $c_1\leq c_2$, which are closely related to the stability problem for the double solitons $U^{(2)}$. Following the same lines as the proof of Lemma \ref{le3.9}, we have
\begin{corollary} \label{co3.10} For $n\geq1$ and $c_1=c_2=c$, we have $\mathcal {T}_{n,1}=\mathcal {T}_{n,2}\geq0$, and the eigenvalue zero is double, with eigenfunctions $Q_c'$ and $\frac{\partial Q_c}{\partial c}$.
For $n\geq1$ odd and $c_1<c_2$, the operator $\mathcal {T}_{n,1}$ has one negative eigenvalue and $\mathcal {T}_{n,2}\geq0$. For $n\geq1$ even and $c_1<c_2$, the operator $\mathcal {T}_{n,1}\geq0$ and $\mathcal {T}_{n,2}$ has one negative eigenvalue.
In both cases, $\mathcal {T}_{n,j}$ has zero as a simple eigenvalue with associated eigenfunction $Q'_{c_j}$.
\end{corollary}
\begin{proof}
Similar to the proof of Lemma \ref{le3.9}, we study the quadratic form associated with the operator $\mathcal {T}_{n,j}$, with $z$ possessing the decomposition \eqref{decomposition of z}. One can verify that
\begin{equation}\label{direction}
\mathcal {T}_{n,j}\frac{\partial Q_{c_j}}{\partial c_j}=(c_j-c_k)(-c_j)^{n-1}Q_{c_j}, \quad\text{for}\quad k\neq j \ \text{and}\ j,k=1,2.
\end{equation}
In particular, if $c_1=c_2=c$, the function $\frac{\partial Q_{c}}{\partial c}$ belongs to the kernels of $\mathcal {T}_{n,1}$ and $\mathcal {T}_{n,2}$. Since $Q'_{c}$ always belongs to these kernels as well, the zero eigenvalue is double, with eigenfunctions $Q_c'$ and $\frac{\partial Q_c}{\partial c}$. The non-negativity of $\mathcal {T}_{n,1}$ and $\mathcal {T}_{n,2}$ follows from the same argument as in Lemma \ref{le3.9}.
If $c_1<c_2$, then by \eqref{direction} and following the same lines as the proof of Lemma \ref{le3.9}, the operator $\mathcal {T}_{2k+1,1}$ has a negative eigenvalue and $\mathcal {T}_{2k+1,2}\geq0$, their zero eigenvalues being simple with associated eigenfunctions $Q'_{c_j}$; similarly, $\mathcal {T}_{2k,1}\geq0$ and $\mathcal {T}_{2k,2}$ has a negative eigenvalue.
\end{proof}
\begin{remark}\label{re:4.3}
The linearized operator $\mathcal{L}_2$ defined in \eqref{eq:linearized n-soliton operator} around the double solitons profile $U^{(2)}$ can be represented as follows:
\[\mathcal{L}_2=-\frac43\partial_x^2+2HU_x+2UH\partial_x+2H(U_x\cdot)+2U\partial_x+4U^{2}+(c_1+c_2)(-H\partial_x-2U)+c_1c_2, \ U:=U^{(2)},
\]
which possesses the following property: the spectrum $\sigma(\mathcal{L}_2)$ tends to the union of $\sigma(\mathcal {T}_{1,1})$ and $\sigma(\mathcal {T}_{1,2})$ as $t$ goes to infinity. Since from Corollary \ref{co3.10} we know the inertias $in(\mathcal {T}_{1,1})=(1,1)$ and $in(\mathcal {T}_{1,2})=(0,1)$, it follows that
$$in(\mathcal{L}_2)=in(\mathcal {T}_{1,1})+in(\mathcal {T}_{1,2})=(1,2).$$
In this sense, Corollary \ref{co3.10} in the case $n=1$ gives an alternative proof of Theorem 9 in \cite{LN}, which is the key spectral property in showing the orbital stability of the double solitons of the BO equation.
\end{remark}
\subsection{The spectrum of linearized operator around the BO $N$-solitons}
In order to prove Theorem \ref{thm1.1}, we need the spectral information of the operator $\mathcal L_N$ in \eqref{eq:linearized n-soliton operator}. More precisely, the inertia of $\mathcal L_N$, denoted $in(\mathcal L_N)$, has to be determined. The aim of this subsection is to show the following result.
\begin{lemma}\label{lemma3.6}
The operator $\mathcal L_N$ defined in $L^2(\R)$ with domain $H^{\frac{N}{2}}(\R)$ verifies the following spectral property
\begin{equation}\label{inertia L}
in(\mathcal L_N)=\big(n(\mathcal L_N),z(\mathcal L_N)\big)=\big([\frac{N+1}{2}],N\big).
\end{equation}
\end{lemma}
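For orientation, \eqref{inertia L} gives
\begin{equation*}
in(\mathcal L_1)=(1,1),\qquad in(\mathcal L_2)=(1,2),\qquad in(\mathcal L_3)=(2,3),\qquad in(\mathcal L_4)=(2,4),
\end{equation*}
in agreement with Remark \ref{re:4.0} for $N=1$ and Remark \ref{re:4.3} for $N=2$.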
To this aim, for $j=1,2,\ldots,N$, recall that $L_{N,j}=S_N''(Q_{c_j})$ is defined in \eqref{formula of L}. By Theorem \ref{th5.2} (see Theorem 4 in \cite{LN} for the case $N=2$), the spectrum of $\mathcal L_N$ tends to the union of the spectra of the $L_{N,j}$, that is, $\sigma(\mathcal L_N)\rightarrow\bigcup_{ j=1}^N\sigma(L_{N,j})$ as $t\rightarrow+\infty$. The result \eqref{inertia L} then follows directly from the following statement concerning the inertia of the operators $L_{N,j}$, $j=1,2,\ldots,N$.
\begin{proposition} \label{pr3.11} (1) $L_{N,2k-1}$ (defined in $L^2(\R)$ with domain $H^{N}(\R)$) has zero as a simple eigenvalue and exactly one negative eigenvalue for $1\leq k\leq[\frac{N+1}{2}]$, i.e., $in(L_{N,2k-1})=(1,1)$;
\ \ \ (2) $L_{N,2k}$ (defined in $L^2(\R)$ with domain $H^{N}(\R)$) has zero as a simple eigenvalue and no negative eigenvalue for $1\leq k\leq[\frac{N}{2}]$, i.e., $in(L_{N,2k})=(0,1)$.
\end{proposition}
\begin{proof} The proof follows the same lines as the proof of Lemma \ref{le3.9}.
We consider the operator $L_{N,j}=S_{N}''(Q_{c_j})$ for $1\leq j \leq N$ and compute the quadratic form $\langle L_{N,j}z,z\rangle$ for $z$ decomposed as in \eqref{decomposition of z}. Recall from \eqref{formula of L} that $L_{N,j}$ is a combination of the operators $H_{n+1}''(Q_{c_j})+c_jH_{n}''(Q_{c_j})$, where the coefficients $\sigma_{j,k}>0$ are the elementary symmetric functions of $c_1,c_2,\ldots,c_{j-1},c_{j+1},\ldots,c_N$. Moreover, one has
\begin{equation}\label{eigenvalue}
L_{N,j}\frac{\partial Q_{c_j}}{\partial c_j}=-\prod_{k\neq j}^N(c_k-c_j)Q_{c_j}:=\Gamma_jQ_{c_j}.
\end{equation}
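Before evaluating the quadratic form, note that the sign of $\Gamma_j$ follows at once from the ordering $0<c_1<c_2<\cdot\cdot\cdot<c_N$: the product $\prod_{k\neq j}(c_k-c_j)$ contains $j-1$ negative factors (those with $k<j$) and $N-j$ positive ones, so that
\begin{equation*}
\operatorname{sign}\Gamma_j=-(-1)^{j-1}=(-1)^{j},
\end{equation*}
i.e., $\Gamma_j<0$ for odd $j$ and $\Gamma_j>0$ for even $j$. This dichotomy is used repeatedly below.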
The quadratic form $\langle L_{N,j}z,z\rangle$ (for $z\in H^{\frac{N}{2}}(\R)$) can be evaluated similarly to \eqref{quadratic} as follows:
\begin{eqnarray*}
&&\langle L_{N,j}z,z\rangle=\langle\int_0^{+\infty}\bigg(\alpha(\lambda)L_{N,j}\big(N\bar{N}^\ast\big)_x(x,\lambda)+\alpha^\ast(\lambda)L_{N,j}\big(N^\ast\bar{N}\big)_x(x,\lambda)\bigg)\rmd \lambda,\nonumber\\&& \int_0^{+\infty}\bigg(\alpha(\lambda)\big(N\bar{N}^\ast\big)_x(x,\lambda)+\alpha^\ast(\lambda)\big(N^\ast\bar{N}\big)_x(x,\lambda)\bigg)^\ast\rmd \lambda\rangle\nonumber\\&&+2\gamma\langle \int_0^{+\infty}\bigg(\alpha(\lambda)L_{N,j}\big(N\bar{N}^\ast\big)_x(x,\lambda)+\alpha^\ast(\lambda)L_{N,j}\big(N^\ast\bar{N}\big)_x(x,\lambda)\bigg)\rmd \lambda,\frac{\partial Q_{c_j}}{\partial c_j}\rangle\nonumber\\&&+\gamma^2\langle L_{N,j}\frac{\partial Q_{c_j}}{\partial c_j},\frac{\partial Q_{c_j}}{\partial c_j}\rangle=\sum_{n=1}^N\bigg(2^{n+1}\pi\sigma_{j,N-n}\int_0^{+\infty}|\alpha(\lambda)|^2\lambda^{n+1}\big(\frac{2\lambda}{n+1}+\frac{c_j}{n}\big)\rmd \lambda\bigg)+\pi\gamma^2\Gamma_j.
\end{eqnarray*}
One can check that the symbol of the principal part of $L_{N,j}$, evaluated at $\lambda$, is
\begin{eqnarray}\label{symbol2}
\widehat{S_N''(0)}(\lambda)&=&\sum_{n=1}^N\sigma_{j,N-n}\,\rho_{n,c_j}(\lambda)>0,\qquad \rho_{n,c_j}(\lambda):=\frac{2^n}{n+1}\lambda^{n}+\frac{2^{n-1}c_j}{n}\lambda^{n-1}.
\end{eqnarray}
Then the first term of the quadratic form $\langle L_{N,j}z,z\rangle$ is nonnegative and vanishes if and only if $\alpha(\lambda)=0$.
If $j$ is even, then in view of the definition \eqref{eigenvalue} of $\Gamma_j$, one has $\Gamma_j>0$; hence $\langle L_{N,j}z,z\rangle\geq0$, with equality if and only if $\alpha(\lambda)=0$ and $\gamma=0$, which indicates that $z=\beta Q'_{c_j}$. Hence $L_{N,j}\geq0$ and zero is a simple eigenvalue with associated eigenfunction $Q'_{c_j}$.
If $j$ is odd, then $\Gamma_j<0$, and we investigate $z$ in $H^{\frac{N}{2}}_{od}(\R)$ and $H^{\frac{N}{2}}_{ev}(\R)$ separately. If $z\in H^{\frac{N}{2}}_{od}(\R)$, then $\gamma=0$, so $\langle L_{N,j} z,z\rangle\geq0$ with equality if and only if $\alpha(\lambda)=0$; in that case $z=\beta Q'_{c_j}$, which indicates that zero is a simple eigenvalue with associated eigenfunction $Q'_{c_j}$.
If $z\in H^{\frac{N}{2}}_{ev}(\R)$, then $\beta=0$. In the hyperplane $\gamma=0$, $\langle L_{N,j}z,z\rangle\geq0$ and $\langle L_{N,j}z,z\rangle=0$ if and only if $\alpha(\lambda)=0$. Therefore $\langle L_{N,j}z,z\rangle>0$ in the hyperplane $\gamma=0$, which implies that $L_{N,j}$ can have at most one negative eigenvalue. Since $L_{N,j}\frac{\partial Q_{c_j}}{\partial c_j}=\Gamma_jQ_{c_j}$ with $\Gamma_j<0$ and $$\langle L_{N,j}\frac{\partial Q_{c_j}}{\partial c_j},\frac{\partial Q_{c_j}}{\partial c_j}\rangle=\Gamma_j\frac{\rmd H_1(Q_{c_j})}{\rmd c_j}<0,$$ the operator $L_{N,j}$ has exactly one negative eigenvalue. This yields the desired result, as stated in Proposition \ref{pr3.11}.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lemma3.6}]
From the invariance of inertia stated in Corollary \ref{co5.3} and the results of Proposition \ref{pr3.11}, we know that
\[
in(\mathcal L_N)=\big(n(\mathcal L_N),z(\mathcal L_N)\big)=\sum_{j=1}^Nin(L_{N,j})=\big([\frac{N+1}{2}],N\big).
\]
The proof is concluded.
\end{proof}
\begin{remark}\label{re:4.6}
In view of \eqref{operator identity3} and \eqref{operator identity4}, one may also investigate the spectrum of the operator $\mathcal{J}\mathcal L_N$ to show the spectral stability of the BO $N$-solitons. The idea is similar to the case $N=1$: by employing the eigenvalue problem \eqref{2.1}, one derives the eigenvalues and the associated eigenfunctions of the recursion operator around the $N$-soliton profile, then shows that these eigenfunctions, together with their derivatives with respect to the eigenvalues $\lambda_j$ ($j=1,2,\ldots,N$), form a basis in $L^2(\R)$. Finally, by a direct evaluation of the quadratic form $\langle\mathcal L_Nz,z\rangle$, one can again derive the inertia of the operator $\mathcal L_N$.
\end{remark}
\section{Proof of the main results} \label{sec_5}
\setcounter{equation}{0}
This section is devoted to the proofs of Theorem~\ref{thm1.1} and Theorem~\ref{thm1.3}. To this aim, we will show that multi-solitons of \eqref{eq: BO} verify a stability criterion established by Maddocks and Sachs \cite{MS93}. Recall the variational principle \eqref{1.10} satisfied by the BO $N$-soliton $U^{(N)}(t,x)$: the gradient of the functional \eqref{1.9a} vanishes when evaluated at $u=U^{(N)}$. In general, $U^{(N)}$ is not a minimum of $S_N$; rather, it is at best a constrained and nonisolated minimum of the following variational problem:
\begin{eqnarray*}
\min H_{N+1}(u(t)) \quad\quad \text{subject to} \quad H_{j}(u(t))= H_{j}(U^{(N)}(t)), \quad j=1,2,...,N.
\end{eqnarray*}
Now recall the self-adjoint second variation operator $\mathcal L_N(t)$ defined in \eqref{eq:linearized n-soliton operator} and denote by
$$n(\mathcal L_N(t))$$ the number of negative eigenvalues of $\mathcal L_N(t)$. Observe that the objects defined above are a priori time-dependent. We also define the $N\times N$ Hessian matrix by
\begin{eqnarray}\label{3.31}
D(t):=\big\{\frac{\partial^2S_N(U^{(N)}(t))}{\partial \mu_i\partial \mu_j}\big\},
\end{eqnarray}
and denote by $$p(D(t))$$ the number of positive eigenvalues of $D(t)$. Since $S_N$ is a conserved quantity for the flow of \eqref{eq: BO}, the matrix $D(t)$ is independent of $t$. The proof of Theorem \ref{thm1.1} relies on the following result, which was obtained by Maddocks and Sachs \cite{MS93}.
\begin{proposition} \label{pr2.1}
Suppose that
\begin{equation}\label{n=p}
n(\mathcal L_N)=p(D).
\end{equation}
Then there exists a constant $C>0$ such that $U^{(N)}$ is a non-degenerate unconstrained
minimum of the augmented Lagrangian (Lyapunov functional)
\begin{equation}\label{eq:augmented Lagrangian}
\Delta(u):=S_N(u)+\frac{C}{2}\sum_{j=1}^N\big(H_j(u)-H_{j}(U^{(N)})\big)^2.
\end{equation}
As a consequence, $U^{(N)}(t,x)$ is dynamically stable.
\end{proposition}
\begin{proof}
The functional $S_N$ depends only on the wave speeds ${\mathbf c}$ and not on $t$ or ${\mathbf x}$. Hence, by construction of the augmented Lagrangian $\Delta$, any $N$-soliton with parameters ${\mathbf c}$ is a critical point of $\Delta$.
Moreover, there exists $\gamma>0$ (which, as well as $C$, can be chosen independently of ${\mathbf x}$) such that
for any $U^{(N)}(\cdot,\cdot;{\mathbf c},{\mathbf x})$ and for any $h \in H^{\frac{N}{2}}(\R)$ such that
\[
\langle\nabla_{{\mathbf x}}U^{(N)}(t,\cdot;{\mathbf c},{\mathbf x}),h\rangle=0,
\]
one has
\[
\langle \Delta''(U^{(N)}(t,\cdot;{\mathbf c},{\mathbf x}))h,h\rangle\geq \gamma \|h\|^2_{H^{\frac{N}{2}}}.
\]
Now for any $u\in H^{\frac{N}{2}}(\R)$ such that
\[\inf_{{\mathbf y}\in \R^N}\|u-U^{(N)}(t,\cdot;{\mathbf c},{\mathbf y})\|_{H^{\frac{N}{2}}}<\varepsilon,\]
there exists ${\mathbf y}_u\in \R^N$ such that
\begin{eqnarray*}
&&\inf_{{\mathbf y}\in \R^N}\|u-U^{(N)}(t,\cdot;{\mathbf c},{\mathbf y})\|^2_{H^{\frac{N}{2}}}\leq \frac{2}{\gamma}\bigg(\Delta(u)-\Delta(U^{(N)}(t,\cdot;{\mathbf c},{\mathbf y}_u))\bigg)\\&&=\frac{2}{\gamma}\bigg(\Delta(u)-\Delta(U^{(N)}(t,\cdot;{\mathbf c},{\mathbf x}))\bigg)=\frac{2}{\gamma}\bigg(\Delta(u_0)-\Delta(U^{(N)}(0,\cdot;{\mathbf c},{\mathbf x}))\bigg)\\&&\leq C\|u_0-U^{(N)}(0,\cdot;{\mathbf c},{\mathbf x})\|^2_{H^{\frac{N}{2}}}\leq C\delta^2<\varepsilon.
\end{eqnarray*}
Here we used the conservation of the augmented Lagrangian $\Delta$ under the \eqref{eq: BO} flow. Hence, given initial data $u_0$ sufficiently close to an $N$-soliton profile $U^{(N)}(0,\cdot;{\mathbf c},{\mathbf x})$, the closeness to the $N$-soliton manifold with speeds ${\mathbf c}$ is preserved for all time.
\end{proof}
Hence, to complete the proof of Theorem~\ref{thm1.1}, it is sufficient to verify~\eqref{n=p}. We start with the count of the number of positive eigenvalues of the Hessian matrix $D$.
\begin{lemma}
\label{lem:2.2.}
For all
finite values of the parameters ${\mathbf{c}},{\mathbf{x}}$ with $0<c_1 < \cdots<c_N$, we have
\[
p(D)= [\frac{N+1}{2}].
\]
\end{lemma}
\begin{proof}
The Hessian matrix $D$ is defined by \eqref{3.31}. It is a real symmetric matrix,
whose elements can be calculated explicitly for the $N$-solitons.
Indeed, since $N$-solitons are reflectionless potentials, one takes $\beta=0$ in \eqref{2.10'}, and the $n$-th conserved quantity corresponding to $u=U^{(N)}$ reduces to
$$H_n(U^{(N)})=\pi (-1)^{n+1}\sum_{l=1}^N\frac{c_l^{n}}{n}.$$
If we regard $S_N$ as a function of the $\mu_j$ ($j=1, 2, \ldots, N$), then from \eqref{1.9a} and \eqref{1.10}, one has
$$\frac{\partial S_N}{\partial \mu_j}=H_{j}, \quad j=1, 2, ..., N. $$
Hence the elements of the matrix $D$ are as follows
\begin{equation}\label{B3}
d_{jk}:=\frac{\partial H_{j}}{\partial \mu_k}
=\pi (-1)^{j+1}\sum_{l=1}^Nc_l^{j-1}\frac{\partial c_l}{\partial \mu_k}.
\end{equation}
Let $A=(a_{jk})_{1\leq j,k\leq N}$ and $B=(b_{jk})_{1\leq j,k\leq N}$ be $N\times N$ matrices with
elements
$$a_{jk}=\pi (-1)^{j+1}c_k^{j-1}, $$
\begin{equation}\label{B4b}
b_{jk}=\frac{\partial \mu_j}{ \partial c_k}=\frac{\partial \sigma_{N-j+1}}{\partial c_k},
\end{equation}
respectively. Note that the right-hand side of \eqref{B4b} follows from \eqref{3.11}. Using the above definition,
we can rewrite \eqref{B3} in the form
\begin{equation}\label{B5}
D=AB^{-1},
\end{equation}
if $B^{-1}$ exists. $B$ is invertible since by the definition \eqref{3.7a} of $\sigma_{N-j+1}$, \eqref{B4b}
and $c_j\not=c_k$ for $j\not=k$, a simple calculation immediately leads to
$$\det B=\prod_{1\leq j<k\leq N}(c_k-c_j)\neq0.$$
It now follows from \eqref{B5} that
$$B^TDB=B^TA.$$
From Sylvester's law of inertia, one deduces that the number of
positive eigenvalues of $D$ coincides with that of $B^TA$. We claim that $B^TA$ is a diagonal matrix
since the $(j, k)$ element of $B^TA$ becomes
\begin{equation}\label{B8}
(B^TA)_{jk}=\pi\sum_{l=1}^N(-1)^{l+1}\frac{\partial\sigma_{N-l+1}}{\partial c_j}c_k^{l-1}=\pi\,\delta_{jk}\prod_{l\neq j}(c_l-c_j).
\end{equation}
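As a concrete illustration of \eqref{B8}, for $N=3$ one has
\begin{equation*}
B^TA=\pi\,\mathrm{diag}\Big((c_2-c_1)(c_3-c_1),\ (c_1-c_2)(c_3-c_2),\ (c_1-c_3)(c_2-c_3)\Big),
\end{equation*}
whose diagonal entries are respectively positive, negative and positive for $0<c_1<c_2<c_3$, so that the number of positive eigenvalues equals $2=\left[\frac{3+1}{2}\right]$.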
Since the $j$-th diagonal entry $\pi\prod_{l\neq j}(c_l-c_j)$ contains exactly $j-1$ negative factors, its sign is $(-1)^{j-1}$; it is therefore positive precisely when $j$ is odd, which happens $\left[\frac{N+1}{2}\right]$ times. Hence the number of positive eigenvalues of $B^TA$ equals $\left[\frac{N+1}{2}\right]$, and by Sylvester's law of inertia we conclude that $p(D)=\left[\frac{N+1}{2}\right]$.
The proof of this lemma is completed.
\end{proof}
\begin{proof} [Proof of Theorem \ref{thm1.1}]
By Lemma \ref{lem:2.2.} and Lemma \ref{lemma3.6}, one has $n(\mathcal L_N)=p(D)=[\frac{N+1}{2}]$. Theorem \ref{thm1.1} then follows directly from Proposition \ref{pr2.1}, since $U^{(N)}(t,x)$ is a (non-isolated) unconstrained minimizer of the augmented Lagrangian \eqref{eq:augmented Lagrangian}, which therefore serves as a Lyapunov function.
\end{proof}
Now we are ready to prove Theorem \ref{thm1.3}.
\begin{proof}[Proof of Theorem \ref{thm1.3}]
The linearized operator around the $N$-solitons, $\mathcal {L}_N=S''_N(U^{(N)})$, possesses $[\frac{N+1}{2}]$ negative eigenvalues, as verified in \eqref{inertia L}. Next, we need to prove \eqref{eigenvalue1.3}. By Theorem \ref{th5.2} we infer that, as $t$ goes to $\infty$, the spectrum $\sigma(\mathcal {L}_N(t))$ of $\mathcal {L}_N(t)$ converges
to the union of the spectra $\sigma(\mathcal {L}_{N,j})$ of $\mathcal {L}_{N,j}=S_N''(Q_{c_j})$, namely
\[
\sigma(\mathcal {L}_N(t))\rightarrow \bigcup_{j=1}^N\sigma(\mathcal {L}_{N,j}), \quad \text{as} \quad t\rightarrow +\infty.
\]
Moreover, for each $N$, Theorem \ref{th5.2} confirms that the operators $\mathcal {L}_N(t)$ are isoinertial, and the spectrum $\sigma(\mathcal {L}_N(t))$ is independent of $t$. Therefore, the negative eigenvalues of $\mathcal {L}_N$ are exactly the negative eigenvalues of the operators $\mathcal {L}_{N,j}$, $j=1,2,\ldots,N$. In view of Proposition \ref{pr3.11}, $\mathcal {L}_{N,j}$ possesses a negative eigenvalue precisely when $j=2k-1$ with $1\leq k\leq [\frac{N+1}{2}]$. We will show that these negative eigenvalues are exactly the $\nu_k$ in \eqref{eigenvalue1.3}. We proceed by induction on $N$: the case $N=1$ is verified in \eqref{ pos eigen}, with associated negative eigenvalue $\nu_1=-\frac{\sqrt{5}+1}{2}c$. Suppose now that \eqref{eigenvalue1.3} holds for $N=K$, namely, the negative eigenvalues of $\mathcal {L}_{K}$ are
\begin{eqnarray}\label{Keigenvalue}
\nu^K_k:=-Cc_{2k-1}\prod_{j\neq 2k-1}^K(c_j-c_{2k-1}),\quad k=1,2,\cdot\cdot\cdot,[\frac{K+1}{2}].
\end{eqnarray}
If $N=K+1$ is even, then $[\frac{K+1}{2}]=[\frac{K+2}{2}]$, and for $k=1,2,\ldots,[\frac{K+1}{2}]$, one has
\begin{equation}\label{Kplus1 and K}
\mathcal {L}_{K+1,2k-1}=S_{K+1}''(Q_{c_{2k-1}})=\left(\mathcal{R}(Q_{c_{2k-1}})+c_{K+1}\right)I_{K}''(Q_{c_{2k-1}}).
\end{equation}
By Lemma \ref{le3.6}, the operator $\mathcal{R}(Q_{c_{2k-1}})+c_{K+1}$ has the eigenvalue $c_{K+1}-c_{2k-1}>0$, and its continuous spectrum is $[c_{K+1},+\infty)$, with generalized eigenfunctions not in $L^2(\R)$. Therefore, the negative eigenvalues of the operators $\mathcal {L}_{K+1,2k-1}$, $k=1,2,\ldots,[\frac{K+2}{2}]$, are
\begin{eqnarray}\label{K+1eigenvalue}
\nu^{K+1}_k:&=&\big(c_{K+1}-c_{2k-1}\big)\nu^K_k=-C\big(c_{K+1}-c_{2k-1}\big)c_{2k-1}\prod_{j\neq 2k-1}^K(c_j-c_{2k-1})\nonumber\\&=&-Cc_{2k-1}\prod_{j\neq 2k-1}^{K+1}(c_j-c_{2k-1}),\quad k=1,2,\cdot\cdot\cdot,[\frac{K+1}{2}],
\end{eqnarray}
where the constant $C>0$ may differ from the one in \eqref{Keigenvalue}.
If $N=K+1$ is odd, then $[\frac{K+1}{2}]+1=[\frac{K+2}{2}]$. For $k=1,2,\ldots,[\frac{K+1}{2}]$, by the same argument, the first $[\frac{K+1}{2}]$ negative eigenvalues of $\mathcal {L}_{K+1}$ are given by \eqref{K+1eigenvalue}. Now we compute the last negative eigenvalue, whose existence was established in Lemma \ref{lemma3.6}. Since
\begin{equation}\label{Kplus1 and K2}
\mathcal {L}_{K+1,K+1}=S_{K+1}''(Q_{c_{K+1}})=\left(\mathcal{R}(Q_{c_{K+1}})+c_{j}\right)\tilde{S}_{K}''(Q_{c_{K+1}}),
\end{equation}
where $\tilde{S}_{K}$ is the action obtained from $S_K$ by replacing one wave speed $c_j$, $1\leq j\leq K$, with $c_{K+1}$.
By the assumption in \eqref{Keigenvalue}, the discrete eigenvalue of the operator $\tilde{S}_{K}''(Q_{c_{K+1}})$ is
\[
-Cc_{K+1}\prod_{l\neq j}^K(c_l-c_{K+1}).
\]
By Lemma \ref{le3.6}, the operator $\mathcal{R}(Q_{c_{K+1}})+c_{j}$ has the eigenvalue $c_j-c_{K+1}<0$, its continuous spectrum is the interval $[c_{j},+\infty)$, and the corresponding generalized eigenfunctions are not in $L^2(\R)$. Therefore, the last negative eigenvalue of $\mathcal {L}_{K+1}$ is
\begin{equation}\label{K+1eigen}
\nu^{K+1}_{[\frac{K+2}{2}]}:=\big(c_j-c_{K+1}\big)\big(-Cc_{K+1}\prod_{l\neq j}^K(c_l-c_{K+1})\big)=-Cc_{K+1}\prod_{l=1}^K(c_l-c_{K+1}).
\end{equation}
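For the reader's convenience, the first instances produced by the recursion \eqref{K+1eigenvalue} and \eqref{K+1eigen} read
\begin{equation*}
N=2:\ \nu_1=-Cc_1(c_2-c_1);\qquad
N=3:\ \nu_1=-Cc_1(c_2-c_1)(c_3-c_1),\quad \nu_2=-Cc_3(c_1-c_3)(c_2-c_3),
\end{equation*}
all of which are negative for $0<c_1<c_2<c_3$, in agreement with $n(\mathcal L_N)=[\frac{N+1}{2}]$.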
The proof of Theorem \ref{thm1.3} is concluded by combining \eqref{K+1eigenvalue} and \eqref{K+1eigen}.
\end{proof}
\section*{Acknowledgements}
Y. Lan acknowledges the support of the China National Natural Science Foundation under grant number 12201340; Z. Wang acknowledges the support of the China National Natural Science Foundation under grant number 11901092 and of the Guangdong Natural Science Foundation under grant number 2023A1515010706. Z. Wang is also indebted to Prof. Stefan Le Coz for stimulating discussions.
|
{
"arxiv_id": "2302.14229",
"language": "en",
"timestamp": "2023-03-01T02:06:08",
"url": "https://arxiv.org/abs/2302.14229",
"yymm": "2302"
} | \section{Introduction}
Cross-Lingual Summarization (CLS) aims to provide a target-language (\emph{e.g.}, German) summary for a lengthy document in a different source language (\emph{e.g.}, English)~\cite{Leuski2003CrosslingualCE,wan-etal-2010-cross,yao-etal-2015-phrase,zhu-etal-2019-ncls,zhu-etal-2020-attend,ladhak-etal-2020-wikilingua,perez-beltrachini-lapata-2021-models,bai-etal-2021-cross,Liang2022AVH,feng-etal-2022-msamsum,Hasan2021CrossSumBE,Wang2022ClidSumAB,Wang2022ASO,wang2022understanding,liu-etal-2022-assist,zheng2022long,aumiller-etal-2022-eur}. This task helps people efficiently grasp the gist of documents written in foreign languages.
In recent years, a number of powerful multi-lingual pre-trained generative models have been proposed one after another, such as mBART~\cite{Liu2020MultilingualDP}, mBART-50~\cite{Tang2020MultilingualTW}, mT5~\cite{xue-etal-2021-mt5} and BLOOM~\cite{scao2022bloom}. The parameters in these models have gradually increased from million levels (580M in mT5-base and 610M in mBART-Large) to billion levels (3.7B in mT5-XL, 13B in mT5-XXL and 176B in BLOOM), facilitating various research topics (\emph{e.g.}, machine translation and CLS) in the multi-lingual world. In addition, large language models (LLMs) have been key to strong performance when transferring to new tasks by simply conditioning on a few input-label pairs (\emph{in-context learning})~\cite{dong2022survey,min-etal-2022-rethinking} or short sentences describing crucial reasoning steps (\emph{chain-of-thoughts})~\cite{fu2022complexity,zhang2022automatic}.
More recently, ChatGPT, an intelligent human-machine dialogue LLM, has attracted great attention from both the research communities and industries. Similar to InstructGPT~\cite{ouyang2022training}, ChatGPT is created by fine-tuning a GPT-3.5 series model via reinforcement learning from human feedback (RLHF)~\cite{christiano2017deep}.
With the emergence of ChatGPT, there is growing interest in leveraging this model for various NLP tasks~\cite{qin2023chatgpt,jiao2023chatgpt,bang2023multitask,Yang2023ExploringTL}.
However, the exploration of ChatGPT in CLS is still lacking.
In this technical report, we present a preliminary evaluation of ChatGPT's zero-shot CLS performance.
Specifically, we design various prompts to guide ChatGPT to perform CLS from different paradigms (\emph{i.e.}, end-to-end and pipeline).
To further exploit the interaction capability of ChatGPT, we follow~\citet{bang2023multitask} and adopt an interactive prompt to let ChatGPT provide more concise summaries.
To provide a deeper understanding of ChatGPT's CLS performance, we compare it with (1) the most advanced GPT-3.5 model (\emph{i.e.}, text-davinci-003) and (2) fine-tuned mBART-50.
Experimental results on three CLS benchmark datasets, that cover three domains (news, how-to guide and dialogue) and two cross-lingual directions (En$\Rightarrow$Zh and En$\Rightarrow$De)\footnote{Since a CLS dataset might contain multiple source and target languages, we use ``X$\Rightarrow$Y'' to indicate the source language and target language are X and Y, respectively. En: English; Zh: Chinese; De: German.}, show that ChatGPT outperforms GPT-3.5 but is worse than fine-tuned mBART-50 in terms of ROUGE scores and BERTScore.
Furthermore, we conduct case studies to show that ChatGPT could absorb the core idea of the given source-language documents and generate fluent and concise target-language summaries.
Our main contributions are concluded as follows:
\begin{itemize}[leftmargin=*,topsep=0pt]
\setlength{\itemsep}{0pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item To our knowledge, we are the first to leverage LLMs to perform zero-shot cross-lingual summarization.
\item We design various prompts to guide ChatGPT to perform CLS from different paradigms, and evaluate the results on three widely-used CLS datasets covering different domains.
\end{itemize}
\begin{figure*}[t]
\centerline{\includegraphics[width=0.98\textwidth]{figure/prompts.pdf}}
\caption{An illustration of all prompts used to guide ChatGPT to perform cross-lingual summarization.}
\label{fig:prompts}
\end{figure*}
\section{ChatGPT for Cross-Lingual Summarization}
\label{sec: evaluation_method}
We systematically explore 6 prompts to guide ChatGPT to perform CLS. They are listed below, illustrated with an example from an English document to a Chinese summary (cf. Figure~\ref{fig:prompts}); a minimal programmatic sketch of how such prompts can be issued follows the list:
\vspace{0.5ex}
\begin{itemize}[leftmargin=*,topsep=0pt]
\item \textbf{ChatGPT (e2e)}: The end-to-end (e2e) prompt is ``\textit{Please summarize the following text in Chinese:} \texttt{[English Doc]}'', where \texttt{[English Doc]} indicates a given English document. In this way, ChatGPT will directly output the corresponding target-language summary.
\item \textbf{ChatGPT (e2e+interact)}: Recent work~\cite{bang2023multitask} shows that ChatGPT usually generates overly long summaries perhaps due to the RLHF tuning. To further explore the CLS ability of ChatGPT, after prompting with the e2e prompt, we simply input another prompt ``\textit{Please make the Chinese summary shorter}'' (named interactive prompt) to ChatGPT in the next round.
\item \textbf{ChatGPT (Trans-Sum)}: The translate-then-summarize (Trans-Sum) prompt is ``\textit{Please first translate the following text to Chinese and then summarize the translated text in Chinese: }\texttt{[English Doc]}''. With the help of this prompt, ChatGPT performs CLS in a pipeline manner that first translates the given document from the source language to the target language, and then summarizes the translated document.
\item \textbf{ChatGPT (Trans-Sum+interact)}: Similar to ChatGPT (e2e+interact), after prompting with the Trans-Sum prompt, we additionally input the interactive prompt to ChatGPT in the next round to obtain shorter summaries.
\item \textbf{ChatGPT (Sum-Trans)}: The summarize-then-translate (Sum-Trans) prompt is ``\textit{Please first summarize the following text and then translate the summary to Chinese: }\texttt{[English Doc]}''. This prompt also guides ChatGPT to perform CLS in a pipeline manner that first summarizes the given document and then translates the output summary to the target language.
\item \textbf{ChatGPT (Sum-Trans+interact)}: The Sum-Trans and the interactive prompts are input to ChatGPT successively to obtain brief target-language summaries.
\end{itemize}
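To make the querying procedure concrete, the sketch below shows how the end-to-end and interactive prompts could be issued programmatically through the OpenAI chat API. It is a minimal illustration rather than the procedure used in this report (our experiments were conducted manually on the ChatGPT web platform), and the \texttt{gpt-3.5-turbo} model name and the \texttt{openai} client calls are assumptions of the sketch.
\begin{verbatim}
# Minimal sketch; assumed API and model name -- the experiments in this
# report were actually run manually on the ChatGPT web platform.
import openai  # openai-python v0.27 style client

E2E_PROMPT = "Please summarize the following text in Chinese: {doc}"
INTERACTIVE_PROMPT = "Please make the Chinese summary shorter"

def e2e_interact_summary(english_doc):
    # Round 1: end-to-end CLS prompt.
    messages = [{"role": "user",
                 "content": E2E_PROMPT.format(doc=english_doc)}]
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo", messages=messages)
    summary = reply["choices"][0]["message"]["content"]
    # Round 2: interactive prompt asking for a shorter summary.
    messages += [{"role": "assistant", "content": summary},
                 {"role": "user", "content": INTERACTIVE_PROMPT}]
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo", messages=messages)
    return reply["choices"][0]["message"]["content"]
\end{verbatim}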
\begin{table*}[t]
\centering
\resizebox{0.98\textwidth}{!}
{
\begin{tabular}{lcccccc}
\toprule[1pt]
\multicolumn{1}{c}{Dataset} & Src Lang. & Trg Lang. & Domain & Example & Doc. Length & Sum. Length \\ \midrule[1pt]
CrossSum~\cite{Hasan2021CrossSumBE} & English & Chinese & News & 3981 / 497 / 50 out of 497 & 814.2 & 35.6 \\ \midrule
\multirow{2}{*}{WikiLingua~\cite{ladhak-etal-2020-wikilingua}} & \multirow{2}{*}{English} & Chinese & \multirow{2}{*}{How-to guide} & 13211 / 1886 / 50 out of 3775 & 538.6 & 53.2 \\
& & German & & 40839 / 5833 / 50 out of 11669 & 526.1 & 63.4 \\ \midrule
\multirow{2}{*}{XSAMSum~\cite{Wang2022ClidSumAB}} & \multirow{2}{*}{English} & Chinese & \multirow{2}{*}{Dialogue} & 14732 / 818 / 50 out of 819 & 140.1 & 27.6 \\
& & German & & 14732 / 818 / 50 out of 819 & 140.1 & 31.7 \\ \bottomrule[1pt]
\end{tabular}
}
\caption{Statistics of CLS datasets used in experiments. ``\emph{Src Lang.}'' and ``\emph{Trg Lang.}'' denote the source and the target languages. ``\emph{Doc. Length}'' and ``\emph{Sum. Length}'' show the average length of source documents and target summaries (token level). ``\emph{Example}'' lists the number of samples in each dataset w.r.t. training, validation and test sets.}
\label{table:statistics}
\end{table*}
\begin{table*}[t]
\centering
\resizebox{0.98\textwidth}{!}
{
\begin{tabular}{llcccccccccccccccccccc}
\toprule[1pt]
& \multicolumn{1}{c}{\multirow{2}{*}{Method}} & \multicolumn{4}{c}{CrossSum (En$\Rightarrow$Zh)} & \multicolumn{4}{c}{WikiLingua (En$\Rightarrow$Zh)} & \multicolumn{4}{c}{WikiLingua (En$\Rightarrow$De)} & \multicolumn{4}{c}{XSAMSum (En$\Rightarrow$Zh)} & \multicolumn{4}{c}{XSAMSum (En$\Rightarrow$De)} \\
\cmidrule(r){3-6}\cmidrule(r){7-10}\cmidrule(r){11-14}\cmidrule(r){15-18}\cmidrule(r){19-22} \multicolumn{1}{c}{} & & R-1 & R-2 & R-L & B-S & R-1 & R-2 & R-L & B-S & R-1 & R-2 & R-L & B-S & R-1 & R-2 & R-L & B-S & R-1 & R-2 & R-L & B-S \\ \midrule[1pt]
& mBART-50 & 26.1 & 7.4 & 22.1 & 65.4 & 32.1 & 10.4 & 26.8 & 68.5 & 26.8 & 7.7 & 20.5 & 62.5 & 40.6 & 14.4 & 33.9 & 74.5 & 42.4 & 18.9 & 35.4 & 73.7 \\ \midrule[1pt]
\multirow{7}{*}{\rotatebox[origin=c]{90}{\small{Zero-Shot}} $\begin{dcases} \\ \\ \\ \\ \\ \end{dcases}$} & Text-davinci-003 & 18.7 & 3.6 & 14.7 & 60.2 & 23.6 & 3.8 & 17.8 & 60.9 & 18.8 & 2.6 & 12.2 & 60.7 & 24.4 & 8.0 & 20.7 & 63.4 & 35.5 & 12.4 & 27.3 & 62.4 \\
& ChatGPT (e2e) & 14.2 & 3.3 & 10.3 & 60.3 & 20.9 & 5.6 & 15.5 & 62.7 & 16.9 & 2.1 & 10.7 & 60.1 & 21.3 & 5.5 & 17.1 & 63.5 & 32.0 & 10.3 & 24.5 & 61.4 \\
& ChatGPT (e2e+interact) & 22.1 & 3.8 & 15.6 & 61.8 & 28.4 & 6.5 & 22.1 & 64.5 & \textbf{22.4} & 2.8 & 14.7 & \textbf{61.3} & 27.2 & 6.9 & 22.9 & 67.5 & \textbf{39.6} & \textbf{16.0} & \textbf{31.4} & \textbf{64.3} \\
& ChatGPT (Trans-Sum) & 15.8 & 3.3 & 11.9 & 60.9 & 24.8 & 5.4 & 19.1 & 62.9 & 19.4 & 2.4 & 12.6 & 60.0 & 26.0 & 7.3 & 21.2 & 66.4 & 33.2 & 9.6 & 25.3 & 61.1 \\
& ChatGPT (Trans-Sum+interact) & \textbf{22.6} & \textbf{4.1} & \textbf{16.9} & \textbf{62.7} & 26.1 & 5.3 & 19.7 & 63.7 & 21.6 & 2.4 & 15.1 & 60.8 & 27.4 & 6.7 & 22.4 & 67.1 & 39.4 & 13.5 & 29.4 & 63.3 \\
& ChatGPT (Sum-Trans) & 16.5 & 3.8 & 12.0 & 60.8 & 27.2 & 7.3 & 20.3 & 64.3 & 21.3 & \textbf{3.5} & 14.4 & 60.9 & 26.8 & 7.7 & 21.3 & 66.7 & 31.7 & 8.8 & 23.5 & 60.8 \\
& ChatGPT (Sum-Trans+interact) & 21.6 & 3.5 & 15.5 & 61.7 & \textbf{30.1} & \textbf{8.1} & \textbf{22.4} & \textbf{64.9} & 21.4 & 3.1 & \textbf{15.4} & 60.6 & \textbf{31.4} & \textbf{11.5} & \textbf{28.1} & \textbf{70.1} & 35.9 & 13.2 & 29.0 & 62.8 \\ \bottomrule[1pt]
\end{tabular}
}
\caption{Experimental results on CrossSum, WikiLingua and XSAMSum.}
\label{table:experiments}
\end{table*}
\section{Experiments}
We conduct experiments on the ChatGPT platform\footnote{\url{https://chat.openai.com/}} between February 17 and February 19, 2023.
\subsection{Experimental Setup}
\noindent \textbf{Datasets.} We evaluate ChatGPT on the following three CLS datasets: CrossSum (En$\Rightarrow$Zh)~\cite{Hasan2021CrossSumBE}, WikiLingua (En$\Rightarrow$Zh/De)~\cite{ladhak-etal-2020-wikilingua} and XSAMSum (En$\Rightarrow$Zh/De)~\cite{Wang2022ClidSumAB}. CrossSum is collected from the BBC news website and contains 3,981 English news reports paired with Chinese summaries. WikiLingua involves 18,887 English how-to guides paired with Chinese summaries, and 58,375 English how-to guides paired with German summaries. XSAMSum contains 16,369 English dialogues paired with both Chinese and German summaries. The detailed statistics of these datasets are listed in Table~\ref{table:statistics}. Since ChatGPT can only be interacted with manually, evaluating its performance is time-consuming. Thus, we randomly sample 50 documents from the test set of each CLS dataset for evaluation, as sketched below.
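For reproducibility, the 50-document subsets can be drawn as in the following sketch (the seed value is an assumption for illustration; this report does not fix a particular seed).
\begin{verbatim}
# Draw the 50-document evaluation subset from a test set.
# The seed is illustrative only; no particular seed is claimed here.
import random

def sample_test_subset(test_examples, k=50, seed=0):
    rng = random.Random(seed)
    return rng.sample(test_examples, k)
\end{verbatim}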
\vspace{0.5ex}
\noindent \textbf{Metrics.} We adopt ROUGE-1/2/L (R-1/2/L)~\cite{Lin2004ROUGEAP} and BERTScore (B-S)~\cite{Zhang2020BERTScoreET} in our experiments. The ROUGE scores measure the lexical overlap between the generated summaries and corresponding references based on the unigram, bigram and longest common subsequence, while the BERTScore measures the semantic similarity. For ROUGE scores, we use \textit{multi-lingual rouge}\footnote{\url{https://github.com/csebuetnlp/xl-sum/tree/master/multilingual_rouge_scoring}} toolkit. For BERTScore, we use \textit{bert-score}\footnote{\url{https://github.com/Tiiiger/bert_score}} toolkit, and the score is calculated based on \textit{bert-base-multilingual-cased}\footnote{\url{https://huggingface.co/bert-base-multilingual-cased}} model.
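Concretely, given the generated summaries and their references, the two metrics can be computed along the following lines; the \texttt{lang} argument of the multilingual ROUGE scorer is an assumption based on that toolkit, while \texttt{bert\_score.score} with \texttt{model\_type} follows the \textit{bert-score} package.
\begin{verbatim}
# Sketch of the automatic evaluation. `rouge_scorer` refers to the
# multilingual fork from the xl-sum repository (its `lang` argument is
# an assumption); BERTScore uses the bert-score package.
from rouge_score import rouge_scorer
from bert_score import score as bert_score

def evaluate(candidates, references, lang="chinese"):
    scorer = rouge_scorer.RougeScorer(
        ["rouge1", "rouge2", "rougeL"], lang=lang)
    rouge = [scorer.score(ref, cand)  # reference first, then candidate
             for ref, cand in zip(references, candidates)]
    p, r, f1 = bert_score(candidates, references,
                          model_type="bert-base-multilingual-cased")
    return rouge, f1.mean().item()
\end{verbatim}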
\vspace{0.5ex}
\noindent \textbf{Baselines.} We also compare ChatGPT with the following models to provide deeper analyses: (1) \emph{mBART-50}~\cite{Tang2020MultilingualTW}, a multi-lingual version of BART~\cite{lewis-etal-2020-bart} with the vanilla transformer encoder-decoder architecture~\cite{vaswani2017attention}. This model has been pre-trained on large-scale multi-lingual unlabeled corpora with BART-like denoising objectives. (2) \emph{Text-davinci-003} is the most advanced GPT-3.5 model. We test the zero-shot CLS performance of text-davinci-003 with the same e2e prompt described in Section~\ref{sec: evaluation_method}.
\begin{table}[t]
\centering
\resizebox{0.48\textwidth}{!}
{
\begin{tabular}{lccccc}
\toprule[1pt]
\multicolumn{1}{c}{\multirow{2}{*}{Method}} & CrossSum & \multicolumn{2}{c}{WikiLingua} & \multicolumn{2}{c}{XSAMSum} \\
\cmidrule(r){2-2}\cmidrule(r){3-4}\cmidrule(r){5-6} \multicolumn{1}{c}{} & En$\Rightarrow$Zh & En$\Rightarrow$Zh & En$\Rightarrow$De & En$\Rightarrow$Zh & En$\Rightarrow$De \\ \midrule[1pt]
Text-davinci-003 & 83.3 & 78.5 & 149.1 & 61.8 & 62.5 \\ \midrule[1pt]
e2e & 183.7 & 176.6 & 273.5 & 68.6 & 75.3 \\
e2e+interact & 66.4 & 50.0 & 80.7 & 28.7& 42.5 \\
TransSum & 155.1 & 82.1 & 149.3 & 48.2 & 60.9 \\
TransSum+interact & 63.4 & 46.2 & 70.0 & 30.3 & 41.1 \\
SumTrans & 132.7 & 94.3 & 124.2 & 54.9 & 68.1 \\
SumTrans+interact & 57.8 & 50.1 & 71.6 & 29.3 & 37.5 \\ \bottomrule[1pt]
\end{tabular}
}
\caption{The average length (token level) of the generated summaries on the test set of each dataset.}
\label{table:output_lens}
\end{table}
\subsection{Main Results}
Table~\ref{table:experiments} shows the experimental results. We first analyze the effects of different prompts which guide ChatGPT to perform CLS, and then make comparisons between ChatGPT and baselines.
\vspace{0.5ex}
\noindent \textbf{ChatGPT.} Without the interactive prompt, we find that ChatGPT (e2e) typically performs worse than the pipelines, \emph{i.e.}, ChatGPT (Trans-Sum) and ChatGPT (Sum-Trans), perhaps for the following reasons: (1) Directly performing CLS requires both translation and summarization abilities, and it remains challenging for a single model to exercise both simultaneously. (2) Different from pipelines that completely separate CLS into translation and summarization sub-tasks, when generating the target-language summaries, ChatGPT (Trans-Sum) and ChatGPT (Sum-Trans) are also conditioned on both the original input and the intermediate products (target-language documents in Trans-Sum and source-language summaries in Sum-Trans). Thus, the pipelined ChatGPT can take more information into account when making its final judgments.
With the help of the interactive prompt, the interaction capability of ChatGPT can be further exploited. As shown in Table~\ref{table:output_lens}, more concise summaries are generated after inputting the interactive prompt: \emph{e.g.}, ChatGPT (e2e) generates 183.7 tokens on average on CrossSum, while ChatGPT (e2e+interact) generates only 66.4 tokens.
\vspace{0.5ex}
\noindent \textbf{ChatGPT vs. Text-davinci-003.} Compared with ChatGPT (e2e), text-davinci-003 generates more concise target-language summaries and achieves better performance. This phenomenon also exists in monolingual summarization~\cite{qin2023chatgpt}. After using the interactive prompt, ChatGPT outperforms text-davinci-003 by a large margin (\emph{e.g.}, 22.6 R-1 vs. 18.7 R-1 on CrossSum), indicating ChatGPT's superior zero-shot CLS ability.
\vspace{0.5ex}
\noindent \textbf{ChatGPT vs. mBART-50.} Compared with the fine-tuned mBART-50, ChatGPT achieves lower ROUGE scores as well as BERTScore.
One possible reason is that ChatGPT is not aware of the text style of the golden summaries when performing zero-shot CLS on each dataset. However, lower automatic scores do not necessarily indicate worse performance.
For example, as discussed by~\citet{goyal2022news}, the news summaries generated by GPT-3 achieve lower ROUGE scores than fine-tuned methods but higher human evaluation scores.
Thus, the comparison between ChatGPT and fine-tuned mBART-50 in CLS needs more carefully-designed human evaluation, which we leave for future work.
\begin{figure*}[t]
\centerline{\includegraphics[width=0.95\textwidth]{figure/cases.pdf}}
\caption{Comparison of summaries generated by ChatGPT and baselines. The first case and the second case come from XSAMSum~\cite{Wang2022ClidSumAB} and CrossSum~\cite{Hasan2021CrossSumBE}, respectively.}
\label{fig:cases}
\end{figure*}
\subsection{Case Study}
Figure~\ref{fig:cases} shows summaries generated by ChatGPT and the baselines. We can see that, with the help of the interactive prompt, ChatGPT can capture the core idea of the given source-language (\emph{i.e.}, English) documents and generate fluent and concise target-language (\emph{i.e.}, Chinese) summaries.
\section{Related Work}
Given documents in one language, cross-lingual summarization (CLS) generates summaries in another language.
Early work typically focuses on pipeline methods~\cite{Leuski2003CrosslingualCE,Orasan2008EvaluationOA,wan-etal-2010-cross,wan-2011-using,yao-etal-2015-phrase}, \emph{i.e.}, translation and then summarization or summarization and then translation.
Recently, with the availability of large-scale CLS datasets~\cite{zhu-etal-2019-ncls,ladhak-etal-2020-wikilingua,perez-beltrachini-lapata-2021-models,Wang2022ClidSumAB,zheng2022long},
many researchers have shifted their attention to end-to-end CLS models.
According to a comprehensive CLS review~\cite{Wang2022ASO}, the end-to-end models involve multi-task learning~\cite{cao-etal-2020-jointly,Bai2021BridgingTG,Liang2022AVH}, knowledge distillation~\cite{Nguyen2021ImprovingNC}, resource-enhanced~\cite{zhu-etal-2020-attend,Jiang2022ClueGraphSumLK} and pre-training~\cite{xu-etal-2020-mixed,chi-etal-2021-mt6} frameworks. However, none of them explores the performance of LLMs on CLS. To our knowledge, we are the first to explore \emph{whether LLMs can perform zero-shot CLS} and \emph{how well they do so}.
Recently, there has been growing interest in leveraging ChatGPT for various NLP tasks. \citet{bang2023multitask} and \citet{qin2023chatgpt} conduct systematic investigations of ChatGPT's performance on various downstream tasks. \citet{jiao2023chatgpt} and \citet{Yang2023ExploringTL} evaluate ChatGPT on machine translation and aspect-based text summarization, respectively.
\section{Conclusion}
In this technical report, we preliminarily design various prompts to guide ChatGPT to perform CLS, and compare ChatGPT with the most advanced GPT-3.5 model (\emph{i.e.}, text-davinci-003).
We find that ChatGPT can combine its summarization and translation abilities to perform zero-shot CLS, and that it outperforms GPT-3.5. Specifically, ChatGPT by default tends to produce lengthy target-language summaries with more detailed information. After prompting with the interactive prompt, ChatGPT can balance informativeness and conciseness, and provides better summaries.
As for the comparison with fine-tuned mBART-50, ChatGPT achieves lower automatic evaluation scores, but the overall comparison needs to be further judged by humans.
\section*{Limitations}
While we evaluate the performance of ChatGPT on the cross-lingual summarization task, there are some limitations worth noting: (1) We only evaluate a lower bound of ChatGPT's CLS performance. Future work could explore better prompts to obtain better results. (2) This report only uses two cross-lingual directions (En$\Rightarrow$Zh and En$\Rightarrow$De) in experiments, and all of the involved languages are high-resource languages. The performance of ChatGPT on low-resource languages still needs to be explored. According to~\citet{jiao2023chatgpt}, the machine translation ability of ChatGPT is limited on low-resource languages. We conjecture that the same situation might exist in CLS. (3) In the future, we would like to conduct human evaluation to provide further analyses.
|
{
"arxiv_id": "2302.14142",
"language": "en",
"timestamp": "2023-03-01T02:02:25",
"url": "https://arxiv.org/abs/2302.14142",
"yymm": "2302"
} | \section{Introduction}
The Kolkata Paise Restaurant Problem (KPRP) was first introduced in 2007 \cite{CMC07} during work on the Kolkata Paise Hotel Problem. Since then, it has been studied extensively \cite{BGCN12,CCGM17,CG19,GC17,DSC11,BMM13,BM21,SC20,GDCM12,CCCM15,CRS22,KPA22,CMC07,MK17,R13,Y10,GCCC14} in the econophysics literature. In its simplest form, we assume $N \gg 1$ agents will choose among $N$ restaurants. Choice is governed by a distribution determined by an implicit ranking of the restaurants. The ranking represents the payoff of eating at a given restaurant. If two or more agents select the same restaurant, then the restaurant randomly chooses which agent to serve.
A broad overview of KPRP can be found in \cite{CCGM17,BMM13,CCCM15}. When all restaurants are ranked equally (i.e., have payoff $1$) and agents choose a restaurant at random, the expected payoff to each agent is easily seen to approach $1-1/e$ as $N \to \infty$. Using stochastic strategies and resource utilization models, the mean payoff can be increased to $\sim 0.8$ \cite{GCMC10}. Identifying strategies to improve on the uncoordinated outcome is a central problem in KPRP.
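This baseline is easy to verify numerically; the following sketch (our illustration, not taken from the cited works) simulates the symmetric random-choice game:
\begin{verbatim}
# Monte Carlo check that the fraction of agents served in the
# symmetric random-choice KPRP approaches 1 - 1/e for large N.
import numpy as np

rng = np.random.default_rng(0)
N, trials = 10_000, 20
served = []
for _ in range(trials):
    choices = rng.integers(0, N, size=N)  # each agent picks a restaurant
    served.append(len(np.unique(choices)) / N)  # one meal per occupied restaurant
print(np.mean(served), 1 - np.exp(-1))    # both close to 0.632
\end{verbatim}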
KPRP is an example of an anti-coordination game (such as Hawk-Dove) \cite{M12}. Other examples of this class of game are minority games \cite{CZ98,HZDH12} and the El Farol bar problem \cite{arthur1994, decara1999, FGH02, challet2004}. These types of games also emerge in models of channel sharing in communications systems \cite{AFGJ13,AFGJ13a,GK14}.
Learning in KPRP is considered in \cite{CRS22,GCMC10,GSC10}, with both classical and quantum learning considered in \cite{CRS22}. Quantum versions of the problem are considered in \cite{CRS22,R13,Y10}, and its relevance to other areas of physical modelling is considered in \cite{BM21,GCCC14,MK17,GDCM12}, with phase transitions considered recently in \cite{BGCN12,SC20}. Distributed and coordinated solutions to optimizing agent payoff are discussed in \cite{CG19,GC17,DSC11,KPA22}.
In this paper, we use evolutionary game theory to study a group formation problem within the context of KPRP. We assume that some subset of the population of $N$ individuals forms a dining club. Individuals in the dining club coordinate their actions and will choose distinct restaurants from each other, thus increasing the odds that any individual within the dining club will eat. In this context, we show the following results:
\begin{enumerate}
\item When all restaurants are ranked equally, membership in the dining club is globally stable. That is, asymptotically all players join the dining club (in the limit as $N \to \infty$).
\item When the dining club taxes its members by collecting food for redistribution to those members who did not eat, there is an optimal tax rate that ensures all members are equally well-fed.
\item When non-club members can choose to deceptively share in the communal food (freeload) of the dining club, a new unstable fixed point emerges. The fixed point corresponding to a population where all members join the dining club remains stable, but is no longer globally stable. We characterize the basin of attraction in this case. This effectively introduces a public goods game into the KPRP.
\item We then use numerical analysis to study the case where two dining clubs are active. We numerically illustrate the existence of equilibrium surfaces where multiple dining clubs can exist simultaneously along with non-group members as a result of group taxation (food sharing), cheating (freeloading), and cheating detection.
\end{enumerate}
The remainder of this paper is organized as follows: In \cref{sec:Math}, we analyse an evolutionary model of KPRP with a dining club. We study resource distribution through taxation and cheating in \cref{sec:Cheating}. Cheating is modelled in an evolutionary context in \cref{sec:EvolveCheating}. KPRP with multiple dynamic clubs is studied numerically in \cref{sec:MultipleClubs}. Finally, in \cref{sec:Conclusions} we present conclusions and future directions.
\section{Mathematical Analysis}\label{sec:Math}
We first study KPRP with a single dining club. Let $g$ be the size of the dining club and let $n$ be the size of the free population with total population given by $N = g+n$. The probability that an individual in the dining club eats is given by
\begin{equation*}
p_g(n, g) = \sum_{k=0}^n \binom{n}{k}\left(\frac{n+g-1}{n+g}\right)^{n-k}\left(\frac{1}{n+g}\right)^k\frac{1}{k+1},
\end{equation*}
while the probability that a free individual eats is given by
\begin{equation*}
p_n(n,g) =
\sum_{k=0}^{n-1} \binom{n-1}{k}\left(\frac{n}{n+g}\frac{1}{k+1} +
\frac{g}{n+g}\frac{1}{k+2} \right)\left(\frac{n+g-1}{n+g}\right)^{n-k-1}\left(\frac{1}{n+g}\right)^k.
\end{equation*}
If we assume $g = \alpha n$ and sum over $k$, then we can rewrite $p_g(n,g)$ in closed form as
\begin{equation*}
p_g(n, \alpha) = \frac{\left(1-\frac{1}{\alpha n+n}\right)^n \left((\alpha +1) n
\left(\left(\frac{1}{\alpha n+n-1}+1\right)^n-1\right)+1\right)}{n+1}.
\end{equation*}
Likewise, $p_n(n,g)$ can be written as
\begin{equation*}
p_n(n,\alpha) = \frac{\left(1-\frac{1}{\alpha n+n}\right)^n}{n+1}
\left\{ \alpha ^2 n +\alpha
n -\alpha -n-1
-\left[(\alpha
+1) ((\alpha -1) n-1) \left(\frac{1}{\alpha n+n-1}+1\right)^n\right]\right\}.
\end{equation*}
If we compute the limit as $n \to \infty$, this yields the asymptotic probabilities
\begin{equation}
p_g(\alpha) = \lim_{n \to \infty} p_g(n,\alpha) = \left(1-e^{-\frac{1}{\alpha +1}}\right) (\alpha +1),
\label{eqn:pg}
\end{equation}
and
\begin{equation}
p_n(\alpha) = \lim_{n \to \infty} p_n(n,\alpha) = -\alpha ^2+e^{-\frac{1}{\alpha +1}} \left(\alpha ^2+\alpha -1\right)+1.
\label{eqn:pn}
\end{equation}
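These limits can be checked numerically from the finite-$n$ closed forms; a minimal sketch, with an illustrative value of $\alpha$:
\begin{verbatim}
# Numeric check of the n -> infinity limit of p_g(n, alpha).
import numpy as np

def pg_finite(n, a):
    return (1 - 1/(a*n + n))**n * (
        (a + 1)*n*((1 + 1/(a*n + n - 1))**n - 1) + 1) / (n + 1)

def pg_limit(a):
    return (1 - np.exp(-1/(a + 1)))*(a + 1)

a = 0.5
for n in (10, 100, 10_000):
    print(n, pg_finite(n, a))
print("limit:", pg_limit(a))   # finite-n values converge to this
\end{verbatim}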
For the remainder of this section and the next, we assume an infinite population. While it was easier to work with $g = \alpha n$ for the previous computation, for further analysis it is simpler to express $g$ as a fraction of the total population. Let
\begin{equation*}
\beta = \frac{g}{n+g} = \frac{\alpha}{1+\alpha}.
\end{equation*}
Substituting
\begin{equation}
\alpha = \frac{\beta}{1-\beta}.
\label{eqn:alpha}
\end{equation}
into \cref{eqn:pg,eqn:pn} yields the simplified forms,
\begin{align*}
p_g(\beta) &= \frac{1-e^{\beta -1}}{1-\beta}\quad \text{and}\\
p_n(\beta) &= \frac{-2 e \beta -e^{\beta } ((\beta -3) \beta +1)+e}{e (\beta -1)^2}.
\end{align*}
A simple plot shows that $p_g(\beta) \geq p_n(\beta)$ for all $\beta\in[0,1]$.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\textwidth]{Figures/EatingProbAsBeta.pdf}
\caption{A plot of $p_g(\beta)$ and $p_n(\beta)$ shows that it is always better for an individual to join the dining club than to remain independent.}
\label{fig:pnpg}
\end{figure}
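The inequality can also be verified on a grid directly from the closed forms; a minimal sketch:
\begin{verbatim}
# Grid check that p_g(beta) >= p_n(beta) on [0, 1).
import numpy as np

def p_g(b):
    return (1 - np.exp(b - 1))/(1 - b)

def p_n(b):
    e = np.e
    return (-2*e*b - np.exp(b)*((b - 3)*b + 1) + e)/(e*(b - 1)**2)

b = np.linspace(0.0, 0.999, 1000)
assert np.all(p_g(b) >= p_n(b) - 1e-12)  # equality only at the endpoints
\end{verbatim}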
Let $S_g$ be a random variable denoting the meal size for an individual in the dining club, and let $S$ be a random variable denoting the meal size for a randomly chosen member of the population. Then the probability of eating $p_g(\beta)$ is now easily seen as the expected meal size $\mean{S_g}$, with a meal size of $1$ corresponding to eating and a meal size of $0$ corresponding to not eating. Using this interpretation, and equating meal size with fitness, we assume the rate of growth of the dining club is given by
\begin{equation*}
\dot{g} = g\mean{S_g} = g p_g(\beta).
\end{equation*}
From \cite{GB17}, it follows that the proportion $\beta$ must follow the replicator dynamic
\begin{equation}
\dot{\beta} = \beta \left[p_g(\beta) - \bar{p}(\beta)\right] = \beta \left(\mean{S_g} - \mean{S}\right).
\label{eqn:BetaReplicator}
\end{equation}
The population mean $\bar{p}(\beta) = \mean{S}$ can be computed as
\begin{equation*}
\bar{p}(\beta) = \mean{S} = \frac{\alpha p_g(\alpha) + p_n(\alpha)}{1+\alpha},
\end{equation*}
and converted to an expression in $\beta$ using \cref{eqn:pg,eqn:pn,eqn:alpha} as,
\begin{equation*}
\mean{S} = \bar{p}(\beta) = e^{\beta -1} (\beta -1)+1.
\end{equation*}
Let
\begin{equation*}
r(\beta) = p_g(\beta) - \bar{p}(\beta) = \mean{S_g} - \mean{S}
= \frac{1-e^{\beta -1}}{1-\beta}
- \left(e^{\beta -1} (\beta -1)+1 \right),
\end{equation*}
be the growth rate of $\beta$. Then $r(0) = 0$ and we see that
$\lim_{\beta \to 1} r(\beta) = 0$. That is, \cref{eqn:BetaReplicator} has two fixed points. From \cref{fig:pnpg}, we must have $r(\beta) > 0$ for $0 < \beta < 1$. This is illustrated in \cref{fig:beta} (left). It follows that $\beta(t)$ is described by a non-logistic sigmoid, as shown in \cref{fig:beta} (right).
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Figures/rbeta.pdf}\quad
\includegraphics[width=0.45\textwidth]{Figures/beta.pdf}\quad
\caption{(Left) The growth rate $r(\beta)$ is a unimodal positive function with zeros at $\beta = 0$ and $\beta = 1$. (Right) The solution curve for $\beta(t)$ assuming $\beta(0) = 0.01$.}
\label{fig:beta}
\end{figure}
We conclude that the decision to join the dining club is an evolutionarily stable strategy: the fixed point $\beta = 1$ is globally asymptotically stable, while the fixed point $\beta = 0$ is unstable.
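The sigmoid in \cref{fig:beta} (right) can be reproduced by integrating the replicator equation directly; a minimal sketch, assuming SciPy is available:
\begin{verbatim}
# Integrate the replicator equation beta' = beta (p_g - pbar)
# from beta(0) = 0.01; beta(t) climbs toward 1 along a sigmoid.
import numpy as np
from scipy.integrate import solve_ivp

def p_g(b):
    return (1 - np.exp(b - 1))/(1 - b)

def pbar(b):
    return np.exp(b - 1)*(b - 1) + 1

sol = solve_ivp(lambda t, b: b*(p_g(b) - pbar(b)),
                (0, 200), [0.01])
print(sol.y[0, -1])   # approaches 1 (all agents join the club)
\end{verbatim}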
\section{Social Safety Nets and Deceptive Free Loading}\label{sec:Cheating}
Suppose the dining club imposes a \textit{food tax} on its members at the rate $\kappa \in [0,1]$ so that if a diner is successful in obtaining food, then he reserves $\kappa \times 100\%$ of his meal to be shared with club members who choose a restaurant that is occupied by an independent individual. If we assume these resources are pooled and then shared equally, the expected meal size (normalized to the interval $[0,1]$) available for a club member who cannot obtain food on his own is given by
\begin{equation}
\tilde{p}_g(\beta) = \frac{gp_g(\beta)\kappa}{g - gp_g(\beta)} = \frac{p_g(\beta)\kappa}{1 - p_g(\beta)}.
\label{eqn:tildepg}
\end{equation}
Note that sharing (for any value of $\kappa$) does not affect the expected meal size obtained by a group member, since we have the expected meal size
\begin{equation}
\mean{S_g} = (1-\kappa) p_g(\beta) + [1-p_g(\beta)]\frac{p_g(\beta)\kappa}{1 - p_g(\beta)} = p_g(\beta).
\label{eqn:ES}
\end{equation}
We can construct a tax-rate that depends on $\beta$ and ensures all participants in the dining club receive the same meal size. Setting $\tilde{p}_g(\beta) = 1 - \kappa$ and solving, we obtain:
\begin{equation}
\kappa^* = 1 - p_g(\beta).
\label{eqn:OptimalKappa}
\end{equation}
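Explicitly, setting $\tilde{p}_g(\beta) = 1-\kappa$ in \cref{eqn:tildepg} gives $\kappa p_g(\beta) = (1-\kappa)\left[1 - p_g(\beta)\right]$; the $\kappa p_g(\beta)$ terms cancel and $\kappa^* = 1 - p_g(\beta)$ follows. At this rate, every club member, successful or not, receives the same meal size $1-\kappa^* = p_g(\beta)$.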
Thus, as $\beta$ increases, the tax decreases. As a result of \cref{eqn:ES}, the right-hand side of \cref{eqn:BetaReplicator} remains unchanged and the decision to join the dining club is still evolutionarily stable, even in the presence of sharing. That is, $\beta = 1$ is still globally asymptotically stable.
Suppose a proportion $\phi \in [0,1]$ of the independent population \textit{that does not eat} can deceptively pose as club members, thereby sharing in the communally available food. In the presence of a food tax, the resulting decision to join the dining club now becomes a public goods problem. Then the expected meal size to anyone receiving shared food is given by
\begin{equation*}
\tilde{p}_g(\beta) = \frac{\kappa gp_g(\beta)}{n[1-p_n(\beta)]\phi + [1-p_g(\beta)]g} = \frac{\alpha \kappa p_g(\beta)}{\phi [1-p_n(\beta)] + \alpha[1-p_g(\beta)]},
\end{equation*}
where $\alpha$ is defined in terms of $\beta$ in \cref{eqn:alpha}. Let $S_n$ be the random variable denoting the expected meal size for an independent member of the population. Then as a function of $\kappa$ and $\phi$,
\begin{align}
\mean{S_g} &= (1-\kappa ) p_g(\beta)+ [1-p_g(\beta)]\frac{\alpha \kappa p_g(\beta) }{\alpha [1-p_g(\beta)]+ [1-p_n(\beta)] \phi }\; \text{and} \label{eqn:SgFreeloader}\\
\mean{S_n} &= p_n(\beta) + [1-p_n(\beta)]\phi\frac{\alpha \kappa p_g(\beta) }{\alpha [1-p_g(\beta)] + [1-p_n(\beta)]\phi }.
\label{eqn:SnFreeloader}
\end{align}
It is possible but unwieldy to compute $r(\beta,\phi) = \mean{S_g} - \mean{S}$ using the expected meal size with deception rate $\phi$ and group size $\beta$. Plotting sample curves for $r(\beta,\phi)$ shows that the growth rate now changes sign at some value $\beta(\phi)$; see \cref{fig:RateBetaPhi} (left).
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\textwidth,valign=t]{Figures/RateOfBetaPhi.pdf}\quad
\includegraphics[width=0.45\textwidth,valign=t]{Figures/BetaOfPhi.pdf}
\caption{(Left) The rate function $r(\beta,\phi)$ for varying values of $\phi$ shows that $r$ changes sign as a function of $\beta$. (Right) The solution curve for $\beta^*$ as a function of $\phi$ such that $r(\beta^*,\phi) = 0$.}
\label{fig:RateBetaPhi}
\end{figure}
As a consequence of this, the replicator equation for $\beta$ is given by
\begin{equation*}
\dot{\beta} = \beta\left(\mean{S_g} - \mean{S}\right).
\end{equation*}
These dynamics exhibit a new unstable equilibrium point, illustrating a bifurcation in parameter $\phi$ with numerically computed bifurcation diagram shown in \cref{fig:RateBetaPhi} (right). An example solution flow (for various initial conditions) is shown in \cref{fig:ExampleFlow}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\textwidth]{Figures/FlowExample.pdf}
\caption{Here $\phi = 0.1$ and we show the instability of the interior fixed point. With $\beta(0) > \beta^*$ all members of the population are eventually driven to join the dining club. If $\beta(0) < \beta^*$, the dining club fails as a result of freeloading.}
\label{fig:ExampleFlow}
\end{figure}
We can compute $\beta^* \approx 0.577$ for $\phi = 1$. This is particularly interesting because we have essentially constructed a public goods problem in which joining the dining club enforces a taxation rate of $\kappa = 1 - p_g(\beta)$ on the members, who are then guaranteed (the public good of) a meal each day. The presence of freeloaders destabilizes the group formation process, but does not guarantee that a group cannot form. Since $\beta^*(\phi)$ is monotonically increasing, it follows that if $\phi$ grows slowly enough so that at any time $\beta(t) > \beta^*[\phi(t)]$, then the dining club will grow to include the entire population. If $\beta(t) < \beta^*[\phi(t)]$, then the dining club collapses. We impose an evolutionary dynamic on the freeloaders in the next section to study this effect.
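The bifurcation curve $\beta^*(\phi)$ can be recomputed by root-finding. Since $\mean{S_g} - \mean{S} = (1-\beta)\left(\mean{S_g} - \mean{S_n}\right)$, the interior fixed point is where $\mean{S_g} = \mean{S_n}$; a minimal sketch, with $\kappa = 1 - p_g(\beta)$:
\begin{verbatim}
# Root-finding for the interior fixed point beta*(phi): since
# <S_g> - <S> = (1 - beta)(<S_g> - <S_n>), it suffices to solve
# <S_g> = <S_n>, with kappa = 1 - p_g(beta).
import numpy as np
from scipy.optimize import brentq

def p_g(b):
    return (1 - np.exp(b - 1))/(1 - b)

def p_n(b):
    e = np.e
    return (-2*e*b - np.exp(b)*((b - 3)*b + 1) + e)/(e*(b - 1)**2)

def gap(b, phi):
    a = b/(1 - b)                       # alpha(beta)
    k = 1 - p_g(b)                      # taxation rate
    share = a*k*p_g(b)/(a*(1 - p_g(b)) + (1 - p_n(b))*phi)
    Sg = (1 - k)*p_g(b) + (1 - p_g(b))*share
    Sn = p_n(b) + (1 - p_n(b))*phi*share
    return Sg - Sn

print(brentq(lambda b: gap(b, 1.0), 0.01, 0.99))   # ~0.577
\end{verbatim}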
\section{Evolving Freeloaders}\label{sec:EvolveCheating}
If we divide the population into three groups, dining club members ($g$), non-dining club freeloaders ($f$) and non-dining club non-freeloaders ($h$), we can construct an evolutionary dynamic for the freeloaders. Let $\chi$ be the proportion of the population that is not in the dining club and will freeload (cheaters), and let $\eta = 1 - \beta - \chi$ be the proportion of the population that is not in the dining club and not freeloading (honest). Then the population of freeloaders is $\chi(n + \alpha n)$. The expected meal size to any agent accepting communal food is then
\begin{multline}
\frac{\kappa gp_g(\beta)}{g[1-p_g(\beta)] + [1-p_n(\beta)]\chi(n+\alpha n)} =
\frac{\kappa \alpha p_g(\beta)}{\alpha [1-p_g(\beta)] + [1-p_n(\beta)]\chi(1+\alpha)} =\\ \frac{\kappa \alpha p_g(\beta)}{\alpha [1-p_g(\beta)] + [1-p_n(\beta)]\chi(1-\beta)^{-1}}.
\label{eqn:NewDistribution}
\end{multline}
Let $S_g$ be as before, and let $S_f$ be the random variable denoting the meal size for an individual in the freeloading group and $S_h$ be the random variable denoting meal size for an individual from the non-freeloading non-dining club group. It follows from \cref{eqn:SgFreeloader,eqn:SnFreeloader,eqn:NewDistribution} that
\begin{align*}
&\mean{S_g} = (1-\kappa ) p_g(\beta)+ [1-p_g(\beta)]\frac{\alpha \kappa p_g(\beta) }{\alpha [1-p_g(\beta)]+ [1-p_n(\beta)] \chi(1-\beta)^{-1} },\\
&\mean{S_f} = p_n(\beta) + [1-p_n(\beta)]\frac{\alpha \kappa p_g(\beta) }{\alpha [1-p_g(\beta)] + [1-p_n(\beta)]\chi(1-\beta)^{-1} }, \text{ and}\\
&\mean{S_h} = p_n(\beta).
\end{align*}
Here, we have replaced $\phi$ with its definition in terms of $\chi$ and $\beta$. Employing the same reasoning we used to obtain \cref{eqn:BetaReplicator}, we can construct replicator equations for proportions $\beta$, $\chi$ and $\eta$.
The population mean meal size is
\begin{equation*}
\mean{S} = \chi\mean{S_f} +\beta\mean{S_g} + \eta\mean{S_h}.
\end{equation*}
The dynamics of $\eta$ (the non-freeloading, non-dining club group) are extraneous, and we can focus on the two-dimensional system
\begin{align*}
&\dot{\beta} = \beta\left(\mean{S_g} - \mean{S}\right)\\
&\dot{\chi} = \chi\left(\mean{S_f} - \mean{S}\right),
\end{align*}
which does not depend on the value of $\eta$.
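A minimal sketch of integrating this planar system from a given initial condition (the closed forms for $p_g$ and $p_n$ are repeated so the snippet is self-contained):
\begin{verbatim}
# Integrate the (beta, chi) replicator system; the closed forms
# for p_g and p_n are repeated so the snippet is self-contained.
import numpy as np
from scipy.integrate import solve_ivp

def p_g(b):
    return (1 - np.exp(b - 1))/(1 - b)

def p_n(b):
    e = np.e
    return (-2*e*b - np.exp(b)*((b - 3)*b + 1) + e)/(e*(b - 1)**2)

def rhs(t, y):
    b, c = y
    a = b/(1 - b)
    k = 1 - p_g(b)
    share = a*k*p_g(b)/(a*(1 - p_g(b)) + (1 - p_n(b))*c/(1 - b))
    Sg = (1 - k)*p_g(b) + (1 - p_g(b))*share
    Sf = p_n(b) + (1 - p_n(b))*share
    Sh = p_n(b)
    S = c*Sf + b*Sg + (1 - b - c)*Sh
    return [b*(Sg - S), c*(Sf - S)]

sol = solve_ivp(rhs, (0, 400), [0.3, 0.3], rtol=1e-8)
print(sol.y[:, -1])   # tends to (1,0) or (0,1), depending on the basin
\end{verbatim}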
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\textwidth]
{Figures/PhasePortrait.pdf}
\caption{A phase portrait of the two-dimensional system showing the dynamics of $(\beta,\chi)$. The red curve shows a numerically computed boundary between the basin of attraction of $(\beta,\chi) = (1,0)$ and $(\beta,\chi) = (0,1)$.}
\label{fig:PhasePortrait}
\end{figure}
\cref{fig:PhasePortrait} shows the dynamics of this evolutionary system. It is straightforward to compute that when $\beta = 0$, then $\mean{S_g} - \mean{S} = \mean{S_f} - \mean{S} = 0$ for all values of $\chi \in [0,1]$. Thus, the dynamics freeze on the left boundary of the simplex
\begin{equation*}
\Delta_2 = \left\{(\beta, \chi) \in \mathbb{R}^2 : \beta + \chi \leq 1,\;\beta \geq 0,\;\chi \geq 0\right\}.
\end{equation*}
There is a single hyperbolic saddle on the boundary of $\Delta_2$ that can be numerically computed as $(\beta,\chi) \approx (0.578, 0.422)$. The two boundary equilibria $(\beta,\chi) = (1,0)$ and $(\beta,\chi) = (0,1)$ are both locally asymptotically stable. Thus, the long-run population behaviour is dependent on the initial conditions. We can numerically construct a curve of initial conditions showing this dichotomous behaviour. This is shown in \cref{fig:InitConditions} and as the red curve in \cref{fig:PhasePortrait}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\textwidth]{Figures/InitialConditionStability.pdf}
\caption{Numerically computed curve showing the boundary between the stable and unstable dining club strategy for varying initial conditions.}
\label{fig:InitConditions}
\end{figure}
As $\beta_0$ approaches $\beta^* \approx 0.578$, corresponding to the equilibrium point for $\phi = 1$, the curve stops because $\chi_0$ would need to lie outside the simplex to cause the dining club to collapse. It is interesting to note that the phase portrait illustrates trajectories in which both $\beta$ and $\chi$ are increasing up to a point, followed by either the collapse of the dining club (while $\chi$ continues to increase) or the collapse of the freeloading group, as all population members join the dining club (and $\beta$ continues to increase).
\section{Numerical Results on Multiple Dining Clubs}\label{sec:MultipleClubs}
We now consider KPRP with two dining clubs. We model three groups of agents $\mathcal{F}$, $\mathcal{G}_1$ and $\mathcal{G}_2$ for free agents, dining club one and dining club two respectively. We estimate $\mean{S_{g_1}}$, $\mean{S_{g_2}}$ and $\mean{S_f}$ using Monte Carlo simulation. This Monte Carlo simulation is then embedded into a larger dynamic process for updating the groups.
In the Monte Carlo simulation, the free agent group acts normally, choosing a restaurant randomly. The members of the dining clubs also choose restaurants randomly, but with the constraint that no two agents in a dining club may choose the same restaurant. Since we are studying this system numerically, we introduce two kinds of taxation policies.
\begin{enumerate}
\item Policy I: We assume a given tax rate $\kappa$ with no redistribution; i.e., the tax goes to maintain the dining club in some form.
\item Policy II: Agents within the dining club are taxed at a rate $\kappa$ given by \cref{eqn:OptimalKappa}, and food is redistributed to club members who do not eat (and possibly freeloaders).
\end{enumerate}
If they do not obtain food on a given day, agents in the free market will, with probability $1$, randomly choose a dining club in which to eat; that is, we assume $\phi = 1$. We also introduce a probability $\rho$ that cheaters will be caught. If a cheater is caught, their food is not distributed and becomes waste.
In the dynamic model that follows, we refer to the process of simulating groups eating over several days by the function $\texttt{MonteCarlo}(\mathcal{F},\mathcal{G}_1,\mathcal{G}_2, \kappa, \rho)$. The system dynamics of our simulation are then described by the following steps:
\begin{algorithmic}[1]
\STATE \underline{\textbf{Input:}} $\mathcal{F}$, $\mathcal{G}_1$, $\mathcal{G}_2$.
\WHILE{There is at least one agent in each group}
\STATE Compute $(\mean{S_{g_1}}, \mean{S_{g_2}}, \mean{S_f}) = \texttt{MonteCarlo}(\mathcal{F},\mathcal{G}_1,\mathcal{G}_2,\kappa,\rho)$.
\STATE Set $\mathcal{P} = \mathcal{F} \cup \mathcal{G}_1 \cup \mathcal{G}_2$.
\STATE Choose two agents $i$ and $j$ at random from $\mathcal{P}$.
\STATE Let $\texttt{Group}(i)$ (resp. $\texttt{Group}(j)$) be the group to which $i$ (resp. $j$) belongs.
\STATE Let $p_i$ (resp. $p_j$) be the probability that $i$ (resp. $j$) eats.
\IF{$p_i > p_j$}
\STATE Move $j$ to $\texttt{Group}(i)$
\ELSIF{$p_j > p_i$}
\STATE Move $i$ to $\texttt{Group}(j)$
\ENDIF
\STATE Remove $i$ and $j$ from $\mathcal{P}$.
\IF {$|\mathcal{P}| > 1$}
\STATE \textbf{goto} 5
\ELSE
\STATE \textbf{goto} 3
\ENDIF
\ENDWHILE
\end{algorithmic}
It is clear that, in the dynamics simulated by this model, there are three equilibria, corresponding to the cases in which all agents are in $\mathcal{F}$, $\mathcal{G}_1$, or $\mathcal{G}_2$.
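A compact Python paraphrase of this scheme is sketched below; for brevity, \texttt{MonteCarlo} here only estimates eating probabilities (no taxation or cheating detection), the probabilities are re-estimated at every pairwise update, and the population sizes are illustrative.
\begin{verbatim}
# Compact paraphrase of the update scheme. MonteCarlo here only
# estimates eating probabilities (no taxation/cheating detection),
# probabilities are re-estimated at each pairwise update, and the
# population sizes are illustrative.
import random
from collections import Counter

def monte_carlo(groups, days=50):
    N = sum(len(g) for g in groups)
    ate = Counter()
    for _ in range(days):
        picks = {}
        for gi, g in enumerate(groups):
            if gi == 0:   # free agents choose independently
                rs = [random.randrange(N) for _ in g]
            else:         # club members choose distinct restaurants
                rs = random.sample(range(N), len(g))
            for agent, r in zip(g, rs):
                picks.setdefault(r, []).append(agent)
        for agents in picks.values():   # each restaurant serves one agent
            ate[random.choice(agents)] += 1
    return {a: ate[a]/days for g in groups for a in g}

groups = [set(range(40)), set(range(40, 70)), set(range(70, 100))]
for _ in range(2000):
    p = monte_carlo(groups)
    i, j = random.sample(range(100), 2)
    gi = next(k for k, g in enumerate(groups) if i in g)
    gj = next(k for k, g in enumerate(groups) if j in g)
    if p[i] > p[j]:
        groups[gj].discard(j); groups[gi].add(j)
    elif p[j] > p[i]:
        groups[gi].discard(i); groups[gj].add(i)
print([len(g) for g in groups])   # one group tends to dominate
\end{verbatim}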
\subsection{Simulation Results}
For each simulation, we divide 100 agents into $\mathcal{F}$, $\mathcal{G}_1$ and $\mathcal{G}_2$. To construct an approximation of the basins of attraction for the three equilibrium populations, we ran the simulation using 1000 replications for every possible (discrete) starting condition on $|\mathcal{F}|$, $|\mathcal{G}_1|$ and $|\mathcal{G}_2|$.
\paragraph{Tax Policy I:} We explore the effect of varying $\kappa$ from $0.05$ to $0.2$. To manage simulation time, we executed the while loop at most 10,000 times. If all players had not joined a single community by then, we declared this a failed run, suggesting slow convergence from this initial condition. Almost all experiments resulted in a dominant group (either the free agents or one of the dining clubs) being formed. This is illustrated in \cref{fig:Policy1}.
\begin{figure}[htbp]
\centering
\subfloat[$\kappa = 0.05$]{\includegraphics[width=0.42\textwidth]{Figures/tax=0.05.pdf}}
\subfloat[$\kappa = 0.07$]{\includegraphics[width=0.42\textwidth]{Figures/tax=0.07.pdf}}\\
\subfloat[$\kappa = 0.1$]{\includegraphics[width=0.42\textwidth]{Figures/tax=0.1.pdf}}
\subfloat[$\kappa = 0.15$]{\includegraphics[width=0.42\textwidth]{Figures/tax=0.15.pdf}}\\
\subfloat[$\kappa = 0.2$]{
\includegraphics[width=0.45\textwidth]{Figures/tax=0.2.pdf}
}\\
\includegraphics[width=0.55\textwidth]{Figures/Legend.pdf}
\caption{Basins of attraction for various tax rates are shown in a ternary plot. The different colours indicate where the model converges from the given starting point.
}
\label{fig:Policy1}
\end{figure}
Let $\beta_1$ and $\beta_2$ be the proportion of the population in dining clubs one and two, respectively, and let $\nu = 1 - \beta_1 - \beta_2$ be the free group proportion. Then the dynamics can be projected to the two-dimensional unit simplex $\Delta_2$ embedded in $\mathbb{R}^3$ with coordinates $(\beta_1,\beta_2,\nu)$. When the simulation converges, we can determine the $\omega$-limit set of trajectories leaving (near) an initial condition $(\beta_1^0,\beta_2^0,\nu^0)$. \cref{fig:Policy1} shows that the size of the tax rate $\kappa$ is correlated with the size of the basin of attraction for the free agent group. The dynamics roughly partition the simplex into three basins of attraction, with the basins of attraction for the two dining clubs exhibiting symmetry, as expected. On the boundaries of these regions, we expect that unstable coexistence of multiple groups would be possible. This is qualitatively similar to the unstable fixed point identified in \cref{fig:ExampleFlow}.
\paragraph{Tax Policy II:}
In a second set of experiments, we let $\rho$ vary between $0$ and $1$ and used \cref{eqn:OptimalKappa} to set the tax policy. The cheating probability was fixed at $\phi = 1$. As before, we executed the while loop at most 10,000 times. If all players had not joined a single community by then, we declared this a failed run, suggesting slow convergence from this initial condition. Basins of attraction for various fixed points are shown in \cref{fig:Policy2}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\textwidth]{Figures/tax.pdf}
\includegraphics[width=0.45\textwidth]{Figures/tax0.5.pdf}\\
\includegraphics[width=0.4\textwidth]{Figures/Legend.pdf}
\caption{(Left) We show the basins of attraction when the probability that a cheater is caught is set at $0.5$. (Right) Basins of attraction when the probability that a cheater is caught is $1$.}
\label{fig:Policy2}
\end{figure}
It is interesting to note that there is a substantial number of failed cases between the clubs. This suggests an area of slow dynamics and possibly the existence of a slow manifold. Constructing a mathematical model of this scenario is reserved for future work, since it is unclear exactly how the dynamics change in this region.
\section{Conclusions and Future Directions}\label{sec:Conclusions}
In this paper, we studied the Kolkata Paise Restaurant Problem (KPRP) with dining clubs. Agents in a dining club mutually agree to visit separate restaurants, thereby increasing the probability that they eat (obtain a resource). An evolutionary game model was formulated describing the choice to join the dining club. We showed that joining the dining club is an evolutionarily stable strategy, even when members are taxed (in food) and resources are distributed. When cheating was introduced to the non-dining club members, i.e. the non-dining club members could deceptively benefit from the communal food collected by the dining club, a new unstable fixed point appears. We analysed this bifurcation as well as the decision to cheat using the resulting replicator dynamic. Numeric experiments on two dining clubs show that the behaviour in this case is similar to the case with one dining club, but may exhibit richer dynamics.
There are several directions for future research. Studying the theoretical properties of two (or more) dining clubs is clearly of interest. Adding many groups (i.e., so that the number of groups is a proportion of the number of players) might lead to unexpected phenomena. Also, allowing groups to compete for membership (by varying tax rates) might create interesting dynamics. As part of this research, investigation of the dynamics on the boundary both in theory and through numeric simulation would be of interest. A final area of future research would be to investigate the effect of taxing cheaters who are caught, thus allowing them to eat, but discouraging them from cheating. Determining the impact on the basins of attraction in this case would be the primary research objective.
\section*{Acknowledgements}
A.H., A.B., and C.G. were supported in part by the National Science Foundation under grant DMS-1814876.
\bibliographystyle{elsarticle-num}
|
{
"arxiv_id": "2302.14140",
"language": "en",
"timestamp": "2023-03-01T02:02:24",
"url": "https://arxiv.org/abs/2302.14140",
"yymm": "2302"
} | \section{Introduction}
Black hole event horizons are one of the most intriguing objects in physics. Thinking about their existence frequently leads us to reconsider what we think we understand about basal concepts such as causality, locality, predictability, information, and unitarity. In recent years, the nature of event horizons has been intensively studied in both mathematical physics and astrophysics, and this has led us to interesting speculations about the structure of spacetime in their vicinity: the puzzle of information loss \cite{information}, the firewall paradox \cite{firewall}, the black hole complementarity \cite{complementarity}, the proposal of fuzzballs \cite{fuzzball}, and the discovery of conformal symmetries in the near horizon geometry \cite{KerrCFT} are among the most interesting subjects that have attracted attention in high energy physics research in the last decades. In the astrophysical context, on the other hand, the new possibility of having observational access at the scale of the event horizon initiated a new era: the quantitative study of the black hole shadow \cite{shadow}, the analysis of the inner disk dynamics \cite{dynamics}, the observation of the photon ring \cite{ring} and of the polarization produced by the magnetic field near the horizon \cite{EHT} allow relativistic astrophysics to be tested in a regime hitherto unimagined. All this motivates the detailed study of the physical processes that take place in the vicinity of black hole event horizons.
Among all the interesting phenomena that take place near the event horizons is the Meissner effect \cite{King}; that is, the fact that, under certain conditions, black holes in magnetic fields behave as superconductors do. Near extremality, when their Hawking temperature goes to zero, spinning black holes tend to expel the magnetic field lines from their event horizons. This is analogous to the Meissner-Ochsenfeld effect in superconductors \cite{Meissner}; namely, the expulsion of the magnetic field from a superconducting material during its transition to the superconducting state. It has been argued that studying the Meissner effect in black holes might be of importance in the astrophysical context \cite{Penna, Mac, Mac1, Mac2, Mac3, Mac4}, especially if it is considered in connection to the so-called Blandford-Znajek process \cite{BZ}, that is, the process by which energy can be extracted from the rotation of a black hole and transferred to the generation of relativistic jets. In the Blandford-Znajek mechanism, the ergosphere plays an important role, causing the magnetosphere within it to rotate, ultimately resulting in the extraction of angular momentum from the black hole. Since the Blandford-Znajek mechanism is the favoured explanation for the jet generation in quasars and other sources, concern arises as to whether the Meissner effect could quench the power of the jets in the case of rapidly rotating black holes, as the magnetic field is necessary for the entire process to develop. This has recently been discussed in the literature \cite{Penna}, where it has been argued that the feeding process can actually continue all the way to the extremal limit, and therefore the jets are not necessarily turned off by the Meissner effect. This seems to be in agreement with relativistic magnetohydrodynamics simulations as well as with observations of near-extremal black hole candidates.
While lately there have been important advances in the near horizon magnetohydrodynamics, further analytic studies of the role played by the magnetic field in the zone close to the spinning black holes are necessary. Here, we will study the black hole Meissner effect from the near horizon perspective and in the regime in which the magnetic field is fully backreacting on the spacetime geometry. By resorting to methods recently developed in the literature \cite{Laura, Extended}, which allow to compute conserved charges in the near horizon region, we will investigate the properties of the black hole horizon when in its Meissner state. We will show that, when in such state, the horizon exhibits infinite-dimensional symmetries: two sets of supertranslation symmetries as well as a symmetry generated by the local conformal group. The supertranslations are generated by two infinite sets of currents, one of which comes from local dilations of the advanced null coordinate $v$ at the horizon $H$, and the other from local gauge transformations that preserve the electromagnetic field configuration at the horizon. As we will show, the evaluation of the Noether charges associated to the zero modes of these symmetries correctly reproduces the black hole physical charges and its thermodynamics. This represents a concrete application of the techniques developed in \cite{Puhm} and it extends the results of \cite{Lucho} to the case of arbitrary values of the black hole charges. In addition, we will elaborate on the charges computation at the horizon: we will show that the horizon charges admit to be written as Komar integrals on $H$. Besides, we will explicitly show that the near horizon charges can be written as flux integrals, explaining in this way the agreement with the computations performed with more traditional methods, some of which require to handle the asymptotic conditions at large distance. This analysis will enable us to derive the thermodynamics of black holes in magnetic environments in a remarkably succinct manner, and then we will apply this to the case of black holes exhibiting Meissner effect.
Our paper is organized as follows: In section 2, we study the near horizon geometry of the Kerr-Newman black hole immersed in a backreacting magnetic field. This is given by a limit of the Ernst-Wild solution to Einstein-Maxwell equation, which describes an electrically charged spinning black hole embedded in a Melvin universe. We study the symmetries of the solution and show that it admits an infinite-dimensional set of asymptotic Killing vectors that preserve the near horizon boundary conditions. In section 3, we analyze the Noether charges associated to the near horizon symmetries of the magnetized black holes. We show that these {\it horizon charges} can be expressed as Komar integrals and admit to be written as flux integrals. The latter proves the validity of the Gauss law and explains the success of the near horizon method. In section 4, we apply the study of the Noether charges and thermodynamics to the case of event horizons of black holes when in their Meissner states. We derive their thermodynamics and we discuss the emergence of an infinite-dimensional symmetry in their vicinity.
The black hole Meissner effect was also studied recently in references \cite{me1, me2, me3, me4, me5, me6, me7, me8, me9, me11, Ruffini, Bicak, Bicak2, Mac, Voro, Budin, backreaction}; see also references thereof; from the near horizon perspective, it was studied in \cite{AstorinoKerrCFT, otroKerrCFT, ne1, ne2, ne3, ne4}, although with approaches different from ours.
\section{Magnetized black holes}
\subsection{Kerr-Newman black holes in an external field}
The spacetime geometry describing a black hole immersed in an external backreacting magnetic field $B$ is given by the Ernst-Wild solution to the Einstein-Maxwell equations \cite{Ernst, ErnstWild, Wild}, which might be compared with the solutions in the linear approximation \cite{Wald, King}, cf. \cite{Bicak, Bicak2}. The full backreacting solution is characterized by three parameters, $m$, $a$, and $q$, which are related in an intricate way with the mass, the angular momentum, and the electric charge. The solution can be thought of as a Kerr-Newman black hole embedded in a magnetic Melvin universe \cite{Melvin}: when $B=0$ the parameters $m$, $a$ and $q$ do agree with the mass, the angular momentum per unit of mass and the electric charge, respectively, while when $B\neq 0$, because of the backreaction, the relation between the three parameters and the {\it physical} conserved charges is more involved -- non-linear --. The precise relation has been debated in the literature, especially in connection to the mass and the first law of the black hole thermodynamics, cf. \cite{Gibbons, Gibbons2, Booth, Astorino, AstorinoKerrCFT, Lucho}; here we will contribute to that discussion.
Let us first consider the case $q=0$. We do this for the following reasons: First, while the full solution with three non-vanishing parameters can be written down analytically, the expression is more cumbersome and makes it more difficult to visualize the geometry. Second, in our previous paper \cite{Lucho} we considered the case $a=0$ with $q\neq 0$ in Eddington-Finkelstein type coordinates similar to those we will consider here, so that the expressions for the near horizon geometry can easily be found there. Third, the full expression in Boyer-Lindquist type coordinates can also be found in the original papers \cite{Ernst, ErnstWild}. Fourth, we will introduce the parameter $q$ later, in Section 3, as it is crucial to investigate the Meissner effect.
The solution with $m \neq 0 \neq a$ and $q=0$ in Boyer-Lindquist coordinates takes the form
\begin{equation}
ds^2 = \Lambda (r,\theta)\, R^2(r,\theta) \Big( - \frac{f(r)}{\Sigma (r,\theta) }\, dt^2 + \frac{dr^2}{f(r)} + d\theta^2 \Big) + \frac{\Sigma (r,\theta) \sin^2\theta }{\Lambda(r,\theta) R^2(r,\theta)} \, ( d\phi - \omega(r,\theta)\, dt)^2\label{Uno}
\end{equation}
with the metric functions
\begin{equation}
f(r) = r^2 + a^2 - 2mr \quad , \quad R^2(r,\theta ) = r^2 +a^2 \cos^2\theta \quad , \quad \Sigma (r,\theta ) = (r^2+a^2)^2 - a^2 f(r)\, \sin^2\theta \nonumber
\end{equation}
along with
\begin{align}
&\Lambda (r,\theta ) = 1 + \frac{B^2 \sin^2\theta }{2R^2(r,\theta )} \Big((r^2+a^2)^2 - a^2 f(r)\, \sin^2\theta \Big) + \frac{B^4}{R^2(r,\theta )} \Big[ \frac{R^2(r,\theta ) \sin^4\theta }{16} (r^2+a^2)^2 \nonumber \\
& \ \ \ \ \ \ \ \ \ \ + \frac{m a^2 r}{4} (r^2+a^2) \sin^6\theta + \frac{m^2 a^2}{4} \Big( r^2 (\cos^2\theta - 3)^2 \cos^2\theta + a^2(1+\cos^2\theta )^2 \Big) \Big]\, , \nonumber \\
&\omega (r,\theta )= \frac{2mra}{\Sigma (r,\theta )} + \frac{B^4}{\Sigma (r,\theta )} \Big[\frac{a^3}{2} m^3 r (3+\cos^4\theta ) + \frac{am^2}{4} \Big( r^4 (3-6\cos^2\theta + \cos^4\theta )
\\
& \ \ \ \ \ \ \ \ \ \ + 2a^2 r^2 (3 \sin^2\theta - 2\cos^4\theta )- a^4(1+\cos^4\theta ) \Big) +\frac{amr}{8} (r^2+a^2) \nonumber \\
& \ \ \ \ \ \ \ \ \ \ \
\times \, \Big( r^2 (3+6\cos^2\theta - \cos^4\theta ) - a^2 (1-6\cos^2\theta - 3\cos^4\theta )\Big) \Big]\, ;\nonumber
\end{align}
here, $t\in \mathbb{R}, \, r\in \mathbb{R}_{\neq 0 } $, and $\phi , \, \theta $ are two angular variables that chart the constant-$t$ surfaces of the event horizon. $B$, $m$ and $a$ are integration constants; the fourth integration constant, $q$, will be introduced later. We will denote by $r_0$ the radial location of the black hole event horizon, which exists provided $m^2\geq a^2+q^2$. This solution is usually referred to as the Kerr-Newman-Melvin black hole, or the Kerr-Newman black hole in a Melvin universe. It is worth pointing out that, due to the $B$-dependent non-linear relation between the physical charges of the black hole and the parameters appearing in the metric, it turns out that even for $q=0$ the solution above describes an electrically charged rotating black hole. In fact, a specific relation between $m$, $a$, $B$ and $q$ is necessary for the solution to describe a spinning neutral black hole (see (\ref{LaQarga}) below). The Melvin universe \cite{Melvin} corresponds to $m=a=q=0$ -- with $B$ being the external field that fills all space --, while the Kerr-Newman solution is obtained when $B=0$. We use units $G=c=1$.
The solution of the electromagnetic field reads
\begin{equation}
A = [\Phi_0(r,\theta)\, -\, \omega (r,\theta)\, \Phi_3(r,\theta)]\, dt + \Phi_3(r,\theta)\, d\phi
\end{equation}
with
\begin{equation}
\begin{split}
\Phi_0 =& -\frac{a B^3}{8\Sigma (r,\theta)} \Big[4 a^4 m^2 + 2 a^4 m r - 24 a^2 m^2 r (m + r) - 6 m r^5 - 6r f(r)\, 2m (r^2+a^2) \cos^2\theta \\
&- 4 a^2 m r^3 - 12m^2 r^4 + f(r)\, (2mr^3 + 4a^2 m^2 - 6a^2 mr)\cos^4\theta \Big]\, , \\
\Phi_3 = & \frac{B}{R^2(r,\theta) \Lambda (r,\theta)} \Big[ \frac{\Sigma (r,\theta)}{2} \sin^2\theta + B^2 \Big( \frac{a^2}{2} m^2[ r^2 (3-\cos^2\theta )^2 \cos^2\theta + a^2(1+\cos^2 \theta )^2] \\
&+ \frac{a^2}{2} mr (r^2 + a^2) \sin^6\theta + \frac{R^2(r,\theta) }{8} (r^2 + a^2)^2 \sin^4\theta \Big) \Big]
\end{split}\label{Omega}
\end{equation}
We emphasize that the solution is fully backreacting, so that it corresponds to an exact electrovacuum solution to the Einstein-Maxwell equations. Its geometry, as well as its thermodynamic properties, has been extensively analyzed in the literature; for the latter, we refer to the relatively recent paper \cite{Astorino}. The geometry exhibits special features, such as non-compact ergoregions \cite{Gibbons} and horizons with sections of non-constant curvature, among others. It is related to other well-known solutions to Einstein-Maxwell theory, and it can also be generalized, for example, by including dyonic charges and acceleration. It is also related to interesting solutions in higher dimensions.
\subsection{Near horizon limit}
Now, let us study the near horizon limit of the solution (\ref{Uno})-(\ref{Omega}). Our goal is to express the spacetime geometry and the gauge field configuration in a system of coordinates like the one introduced in \cite{Laura, Extended}. This would enable us to show that, in the vicinity of their horizons, spinning magnetized black holes exhibit infinite-dimensional symmetries. More precisely, if we could prove that the Ernst-Wild solution can be written in the Eddington-Finkelstein type coordinates introduced in \cite{Laura, Extended} to perform the near horizon expansion, then we would {\it ipso facto} prove that there exists an infinite-dimensional isometry group that preserves the near horizon boundary conditions for these black holes.
Since $\omega(r,\theta)$ does not vanish at $r=r_0$, the first step to achieve our goal is to consider a boost $d\phi \rightarrow d\phi + c\, dt$ to produce a shift $\omega(r,\theta) \to \tilde{\omega }(r,\theta)=\omega(r,\theta) - \omega(r_0)$, with
\begin{equation}
\omega(r_0) \equiv \omega_0 = \frac{a}{2 m r_0} + \frac{B^4 a}{8 r_0} \Big(3 r_0^3 + 3 a^2 r_0 + 2 a^2 m\Big)
\end{equation}
being a constant. This suffices to reach a comoving frame in which the angular velocity vanishes at the horizon.
Next, we perform the change of coordinates
\begin{align}
v \, =\, & t + \int \frac{dr'}{f(r')} \, {\sqrt{\Sigma(r',\theta)}} \label{v_MKN}\\
\varphi \, =\, & \phi - \omega_0\, v + \int_{r_0}^{r} \frac{dr'}{f(r')} {(\omega(r',\theta) - \omega_0) \sqrt{\Sigma(r', \theta)} }
\label{phi_MKN}
\end{align}
which yields
\begin{align}
&dv\, =\, dt + \frac{\sqrt{\Sigma (r,\theta )}}{f(r)} dr + \gamma(r,\theta ) d\theta\\
&d\varphi \, =\, d\phi - \omega_0\, dv + (\omega(r,\theta )\, - \omega_0) \frac{\sqrt{\Sigma (r,\theta )}}{f(r)} dr + h(r,\theta)\, d\theta
\end{align}
with
\begin{equation}
\gamma(r,\theta) = \int_{r_0}^r \frac{dr'}{f(r')}\, {\partial_{\theta}\sqrt{\Sigma(r',\theta)}} , \quad h(r,\theta) = \int_{r_0}^r \frac{dr'}{f(r')} \, {\partial_{\theta}\Big((\omega(r',\theta) - \omega_0) \sqrt{\Sigma (r',\theta)}\Big)}
\end{equation}
With this, the metric takes the form
\begin{equation}
\begin{split}
&ds^2 = \Lambda (r,\theta) R^2(r,\theta) \Big[ - \frac{f(r)}{\Sigma (r,\theta)} \, dv^2 + \Big( 1 - \frac{\gamma^2(r,\theta) f(r)}{\Sigma (r,\theta)} \Big) d\theta^2 + \frac{2}{\sqrt{\Sigma (r,\theta)}} dr dv
\\
& \ \ \
+ 2 \frac{\gamma (r,\theta) f(r)}{\Sigma (r,\theta)} dv d\theta + 2 \frac{\gamma (r,\theta)}{\sqrt{\Sigma (r,\theta)}} dr d\theta \Big] + \frac{\Sigma (r,\theta) \sin^2\theta }{\Lambda (r,\theta) R^2(r,\theta)} \Big[ d\varphi - (\omega (r,\theta) - \omega_0) dv \\
& \ \ \ + ((\omega (r,\theta) - \omega_0) \gamma (r,\theta) - h (r,\theta)) d\theta \Big]^2
\end{split}
\end{equation}
This coordinate system is regular at the horizon, on which it defines constant-$v$ slices with restricted metric
\begin{equation}
ds^2 _{{|}_{H}} = \Lambda_0(\theta) R_0(\theta )^2 d\theta^2 + \frac{\Sigma_0 \sin^2\theta }{R_0^2(\theta )\Lambda_0(\theta)} d\varphi^2\, ,
\end{equation}
where we denoted $R_0(\theta)=R(r_0,\theta)$, $\Lambda_0(\theta)=\Lambda(r_0,\theta)$ and $\Sigma_0=\Sigma (r_0,\theta)$.
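Note that the determinant of this induced metric is $\Sigma_0 \sin^2\theta$ and, since $f(r_0)=0$, $\Sigma_0 = (r_0^2+a^2)^2$ is actually $\theta$-independent; the horizon area is therefore $4\pi (r_0^2+a^2)$, formally as in the Kerr case. A quick symbolic check of this (our sketch, for $q=0$):
\begin{verbatim}
# Symbolic check (sympy): f(r0) = 0 implies Sigma_0 = (r0^2+a^2)^2,
# so sqrt(det g^(0)_AB) = (r0^2+a^2) sin(theta) and the horizon
# area is 4 pi (r0^2 + a^2), with no explicit B-dependence.
import sympy as sp

m, a, r, th = sp.symbols('m a r theta', positive=True)
f = r**2 + a**2 - 2*m*r
Sigma = (r**2 + a**2)**2 - a**2*f*sp.sin(th)**2
r0 = m + sp.sqrt(m**2 - a**2)        # outer horizon for q = 0

print(sp.simplify(f.subs(r, r0)))                          # 0
print(sp.simplify(Sigma.subs(r, r0) - (r0**2 + a**2)**2))  # 0

area = (r0**2 + a**2)*sp.integrate(sp.sin(th), (th, 0, sp.pi))*2*sp.pi
print(sp.simplify(area - 4*sp.pi*(r0**2 + a**2)))          # 0
\end{verbatim}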
On $H$, we find the null vector $\ell = \partial_{v}$ and we can thus look for a vector $n$, also null, normalized such that $n_{\mu } \ell^{\mu } = 1$; namely,
\begin{equation}\label{vectores}
n = \frac{\sqrt{\Sigma_0}}{\Lambda_0 (\theta ) R_0^2 (\theta ) } \partial_r
\end{equation}
Following the construction in \cite{Moncrief, Booth2}, we can consider a family of geodesics that cross $H$ with $n^{\mu}$ as tangent vector, parameterized by an affine parameter $\rho$ such that $\rho_{|H} = 0$. Up to order $\mathcal{O}(\rho^2)$, this congruence of curves defines the vector field
\begin{equation}
\Xi^{\mu }(v,\theta,\varphi, \rho) = \{v, r_0,\theta,\varphi \}^{\mu } + \rho \, n^{\mu } + \frac{\rho^2}{2} {\partial_{\rho }^2 \Xi^{\mu }}_{{|}_{\rho = 0}} + \mathcal{O}(\rho^3)
\end{equation}
with the second derivative being defined by the geodesic equation $
{\partial_{\rho }^2 \Xi^{\alpha}}_{{|}_{\rho = 0}} = - \Gamma_{\mu \nu}^{\alpha} n^{\mu} n^{\nu}$. Given the way in which $n$ has been defined, we have the following gauge conditions for the radial components of the spacetime metric $g_{\rho v}= n _{\mu} \ell^{\mu}= 1, \, g_{\rho \rho}= n _{\mu} n^{\mu} =0, g_{\rho A}=n_{\mu} e_{A}^{\mu} =0$, with $A=1,2$ referring to the coordinates on the constant-$v$ slices of the horizon; we will often use the notation $z^A=\{ \varphi \, , \, \theta \}$ to refer to the angular coordinates.
To obtain the other metric components up to order $\mathcal{O}(\rho )$, it is sufficient to consider the variation, up to that order, in the direction generated by the affine parameter $n=\partial_{\rho} $; namely
\begin{equation}
g_{\mu \nu} = g_{\mu \nu}^{(0)} + g_{\mu \nu}^{(1)}\, \rho + \mathcal{O}(\rho^2)
\end{equation}
with $g_{\mu \nu}^{(0)} = g_{\mu \nu\, |\rho = 0}$ and $ g_{\mu \nu}^{(1)} = ( \mathcal{L}_{n} g )_{\mu \nu\, | \rho = 0}$. Let us be reminded that the horizon, $H$, which is located at $r=r_0$, in these new coordinates would be $\rho =0$. At order $\mathcal{O}(\rho )$, the non-vanishing components of the metric are
\begin{equation}
\begin{split}
&g_{v v}^{(1)} = -\frac{f'(r_0)}{\Sigma_0^{1/2}}\\
&g_{v \theta}^{(1)} =- \frac{1}{\Lambda_0 (\theta ) R_0^2(\theta )} \partial_{\theta} \big( \Lambda R^2 \big) _{|r=r_0}\\
&g_{\theta \theta}^{(1)} = \frac{\Sigma_0^{1/2}}{\Lambda_0(\theta ) R_0^2(\theta )}\, \partial_{r} \big( \Lambda R^2 \big)_{|r=r_0}\\
&g_{v \varphi}^{(1)} = -\frac{\Sigma_0^{3/2} \sin^2\theta }{\Lambda_0^2(\theta ) R_0^4(\theta )} \, (\partial_{r} \omega )_{|r=r_0}\\
&g_{\varphi \varphi}^{(1)} = \frac{\Sigma_0^{1/2} \sin^2\theta}{\Lambda_0(\theta ) R_0^2(\theta )} \, \partial_{r} \Big( {\Sigma}{\Lambda^{-1} R^{-2}} \Big) _{|r=r_0}\\
&g_{\theta \varphi}^{(1)} = -\frac{\Sigma_0^{2} \sin^2\theta }{\Lambda_0^2(\theta ) R_0^4(\theta )}\, \partial_{\theta} \Big({\omega}{f}^{-1} \Big) _{|r=r_0}\, ;
\end{split}
\end{equation}
and then we have $g_{vv}^{(0)}=0$, $g_{v\varphi }^{(0)}=0$, $g_{v\theta }^{(0)}=0$, as well as $g_{\rho \mu }^{(0)}=0$ with $\mu =0,1,2,3$. $\kappa =-\frac 12 g^{(1)}_{v v }$ is the surface gravity at the horizon, ultimately associated to the Hawking temperature $T= \kappa/(2\pi )$ ($k_B=\hbar =1$). -- For the extremal configuration the surface gravity vanishes ($\kappa = 0$), and in that case the analysis of the charges has to be done separately.-- In the new coordinate system, the metric takes a form that satisfies the asymptotic boundary conditions at the horizon considered in \cite{Laura, Extended}. This means that the magnetized black hole geometry admits infinite asymptotic Killing vectors preserving the near horizon form. The next step would be to verify whether the gauge field also admits the correct asymptotic conditions, cf. \cite{Puhm}. In order to check that, let us express the electromagnetic potential $A = A_{t} \, dt + A_{\phi} \, d\phi$ in the coordinates introduced in (\ref{v_MKN})-(\ref{phi_MKN}). This yields
\begin{align}
\nonumber
A = A_{t} \hspace{0.2em} dv - \frac{\sqrt{\Sigma (r, \theta )}}{f(r)} \big[A_{t} + (\omega (r, \theta ) - \omega_0) A_{\varphi}\big]\hspace{0.2em} dr + A_{\varphi} \hspace{0.2em} d\varphi - A_{\varphi} h (r, \theta ) \hspace{0.2em} d\theta
\end{align}
Next, we can use a residual gauge freedom to make $A_{\rho} = 0$ at $H$. That is, we perform the gauge transformation $A \rightarrow A + d\zeta$ with
\begin{equation}
\zeta = \int_{r_0}^{r} {dr'} \frac{\sqrt{\Sigma(r',\theta)}}{f(r')} \Big[ A_{t}(r',\theta) + (\omega (r',\theta)- \omega_0) A_{\varphi}(r',\theta)\Big]
\end{equation}
which allows to write
\begin{equation}
A = [\Phi_0 (r,\theta) - (\omega (r,\theta)-\omega_0) \Phi_3 (r,\theta) ] \hspace{0.2em} dv + \Phi_3 (r,\theta) \hspace{0.2em} d\varphi + [\Phi_3 (r,\theta) h(r,\theta) - \partial_{\theta}\zeta] \hspace{0.2em} d\theta\label{La20}
\end{equation}
This expression does satisfy the right asymptotic conditions for the electromagnetic field at the horizon; namely,
\begin{align}\label{potencialA}
&A_{v} = A_{v}^{(0)} + \rho \hspace{0.2em} A_{v}^{(1)}(v,z^{A}) + \mathcal{O}(\rho^2)\\
\nonumber
&A_{B} = A_{B}^{(0)}(z^{A}) + \rho \hspace{0.2em} A_{B}^{(1)}(v,z^{A}) + \mathcal{O}(\rho^2)
\end{align}
where $A_{v}^{(0)}$ is a fixed constant, $A_{B}^{(0)}$ with $B=1,2$ only depend on the angular variables $z^B=\{\varphi , \theta \}$, and $A_{\rho} = 0$. To obtain the potential at order $\mathcal{O}(\rho )$, we follow a similar procedure as before: we expand the electromagnetic potential around $r\simeq r_0$ as follows $A_{\mu} = A_{\mu}^{(0)} + \rho A_{\mu}^{(1)} + \mathcal{O}(\rho^2)$, where $A_{\mu}^{(0)} = A_{\mu\, |H}$ and $ A_{\mu}^{(1)} = (\mathcal{L}_{n} \, A )_{\mu\, |H}$; see (\ref{LaBelow}) below. In this way, we obtain
\begin{equation}
\begin{split}
&A_{v}^{(0)} = \Phi_0(r_0, \theta )\\
&A_{\varphi}^{(0)} = \Phi_3(r_0, \theta ) - \Phi_3(r_0, 0 )\\
&A_{\theta}^{(0)} = 0
\end{split}
\end{equation}
along with
\begin{equation}
\begin{split}
&A_{v}^{(1)} =- \frac{{\Sigma_0^{1/2}}}{\Lambda_0(\theta ) R_0^2(\theta ) } \Big( \Phi_3 \, \partial_r \omega - \partial_r \Phi_0 \Big) _{|r=r_0}\\
& A_{\varphi}^{(1)} = \, \, \frac{{\Sigma_0^{1/2}}}{\Lambda_0(\theta ) R_0^2(\theta ) } (\partial_r \Phi_3) _{|r=r_0} \\
& A_{\theta}^{(1)} = -\frac{{\Sigma_0^{1/2}}}{\Lambda_0(\theta ) R_0^2(\theta ) } \Big( \partial_r \partial_{\theta} \zeta - \Phi_3 \, \partial_r h \Big) _{|r=r_0}
\end{split}
\end{equation}
where we have added a constant to $A_{\varphi}$ using the remnant gauge freedom; with this, the potential vanishes both at the north and at the south pole. In this way, we have explicitly shown that the magnetized black hole geometry obeys the horizon boundary conditions discussed in \cite{Laura, Extended, Puhm}, and therefore the magnetized horizon of (\ref{Uno}) enjoys asymptotic infinite-dimensional supertranslation and superrotation symmetries.
\subsection{Near horizon symmetries}
In the next section we will consider the Noether charges associated with the infinite-dimensional symmetries we just discussed. In preparation for doing so, let us review the form of the asymptotic Killing vectors and gauge transformations that generate such symmetries: the near horizon expansion discussed above corresponds to the expansion around $r\simeq r_0$, i.e. $\rho \simeq 0$, in powers of $\rho $, and the diffeomorphisms and gauge transformations that preserve such asymptotia are known to be of the form
\begin{equation}
\delta g_{\mu \nu} = \mathcal{L}_{\chi }g_{\mu \nu } \, , \ \ \ \ \delta A_{\mu} = \mathcal{L}_{\chi } A_{\mu} +\partial_{\mu} \epsilon\label{LaBelow}
\end{equation}
with
\begin{equation}
\chi ^v = T(z^A) +\mathcal{O}(\rho )\, , \ \ \
\chi ^{\rho } = \mathcal{O}(\rho )\, , \ \ \
\chi^A = Y^A (z^B) +\mathcal{O}(\rho )\, ,
\end{equation}
and
\begin{equation}
\epsilon = U(z^A) -T(z^A)A_v^{(0)} +\mathcal{O}(\rho ) \, ,\label{NoB}
\end{equation}
where $T(\varphi , \theta ), \, Y^{\varphi }(\varphi , \theta ), \, Y^{\theta }(\varphi , \theta ),$ and $ U(\varphi , \theta )$ are arbitrary functions of the angular coordinates. The Fourier modes of the expansion of these functions in the angular variables $z^A$ on $H$ generate an infinite-dimensional current algebra in semidirect sum with another set of supertranslations and two copies of Witt algebra; see \cite{Extended} for details. In the next section, we construct the charges associated to the symmetries generated by (\ref{LaBelow})-(\ref{NoB}).
\section{Conserved charges}
\subsection{Noether charges on the horizon}
The symmetries discussed above have the following associated Noether charges \cite{Glenn, Glenn2}
\begin{equation}
Q[T,Y^A, U] = -\frac{1}{16\pi }\int_{H} dS \Big( T\, g_{vv}^{(1)}+Y^A (g_{vA}^{(1)} + 4A_{A}^{(0)} A_v^{(1)} )+4U\, A_v^{(1)} \Big)\label{LACARGASSS}
\end{equation}
where
\begin{equation}
dS= \sqrt{\det g_{AB}^{(0)}}\, d\varphi d\theta \,
\end{equation}
is the measure of the constant-$v$ slices on $H$; we will occasionally write $d\varphi d\theta=d^2z$. The subindex $H$ in the integral in (\ref{LACARGASSS}) refers to these constant-$v$ slices: the charges are computed at the horizon, integrating over the constant-$v$ slices. The values associated with the zero modes are
\begin{equation}
S=\frac{2\pi }{ \kappa}Q[1,0,0] \, , \ \ \ \ j=Q[0,\delta^{A}_{\varphi }, 0] \, , \ \ \ \ e=Q[0,0,1] \, ;
\end{equation}
they are the entropy, the angular momentum and the electric charge, respectively. While the entropy (in the non-extremal case) is associated with rigid translations $\partial_v$ on $H$, the angular momentum is the charge associated with $\partial_{\varphi }$, also defined on $H$.
\subsection{Angular momentum and Komar integrals}
While the analysis performed above is valid in general, the explicit expressions we wrote in Section 2 correspond to the particular case $q=0$. Now, let us consider the most general expressions. The explicit solution with arbitrary parameters $m$, $a$, $q$ and $B$ can be found in \cite{Ernst, ErnstWild, Gibbons, Gibbons2, Booth, Astorino}, and the near horizon analysis with $q\neq 0$ was done in \cite{Lucho} for the case $a=0$. As we will see, the total angular momentum explicitly depends on $q$ and $B$, and not only on $a$. When $a\neq 0 \neq q$ the metric functions depend on both parameters and, as in the Kerr-Newman solution, the horizon location $r_0$ depends not only on the mass but also on both the angular momentum and the electric charge: $f(r)=r^2+a^2-2mr+q^2$, so that $r_0=m+ \sqrt{m^2-a^2-q^2}$.
Computing the charge $Q[0,\delta^{A}_{\varphi }, 0]$, which corresponds to the angular momentum, amounts to calculating the integral
\begin{equation}\label{J}
j = -\frac{1}{16 \pi}\int_{H} dS\, \big(g_{v \varphi}^{(1)} + 4 A_{\varphi}^{(0)} A_{v}^{(1)} \big)
\end{equation}
on the horizon. Despite being a concrete analytic expression, in the case of the magnetized horizon the evaluation of (\ref{J}) is quite cumbersome; see \cite{Lucho} for the explicit computation in the case $q\neq0=a$. What we will rather do here is to prove that (\ref{J}) can be expressed as a Komar integral on $H$; this will allow us to compare with the results in the literature. In order to do so, let us separate (\ref{J}) into two contributions: a first contribution coming from the spacetime geometry, which corresponds to the first term in the integrand, and a second contribution coming from the electromagnetic field, which corresponds to the second term in the integrand. Each of these contributions will be shown to match the corresponding Komar expression. As for the first one, it is possible to show that it matches the integral formula
\begin{equation}
J_{K} = \frac{1}{16 \pi} \int_{H} *\, dK
\end{equation}
with $K^{\mu}$ being the rotational Killing vector $\partial_{\varphi}$ and $dK $ stands for the exterior derivative of $K_{\mu}= g_{\mu \varphi}$, namely
\begin{equation}
dK = \partial_{\rho}g_{v \varphi}\, d\rho \wedge dv + \partial_{\theta}g_{v \varphi} \, d\theta \wedge dv + \partial_{\theta}g_{\varphi \varphi}\, d\theta \wedge d\varphi +
\partial_{\rho}g_{\theta \varphi}\, d\rho \wedge d\theta + \partial_{\rho}g_{\varphi \varphi}\, d\rho \wedge d\varphi
\end{equation}
with Hodge dual $*dK_{\mu \nu } = \frac{1}{2} \sqrt{- g}\, dK^{\alpha \beta} \epsilon_{\alpha \beta \nu \mu }$, whose explicit expressions can be found, for example, in \cite{Booth}. To compare with our expression (\ref{J}), we evaluate the component $*dK_{\theta \varphi }$ on the horizon, namely
\begin{equation}
*dK_{\theta \varphi \, |H} = \sqrt{\text{det} g^{(0)}_{AB}}\, \epsilon_{v \rho \theta \varphi}\, dK^{v \rho}_{|H} = - \sqrt{\text{det} g^{(0)}_{AB}}\, \partial_{\rho}(g_{v \varphi})_{|H}\, ,
\end{equation}
and, by expanding in $\rho $, we get
\begin{equation}
J_{K} = -\frac{1}{16 \pi} \int_{{H}} dS \, g_{v \varphi}^{(1)} \, .
\end{equation}
That is to say, the first contribution to the horizon charge (\ref{J}) is found to agree with a Komar integral. Now, let us consider the second contribution. For asymptotically flat spacetimes the entire contribution to the angular momentum would be $j=J_K$; however, in the Melvin universe the gauge field configuration does not vanish at infinity and the second term in (\ref{J}) does contribute to the Komar integral; it contributes the term \cite{Booth}
\begin{equation}
J_{EM} = \int_{{H}} dS\, K^{\alpha} \mathcal{J}_{\alpha} \qquad \text{with} \qquad \mathcal{J}_{\alpha} = \frac{1}{4\pi} \ell^{\mu} n^{\nu} F_{\mu \nu} (g^{\beta}_{\alpha} + \ell^{\beta} n_{\alpha} + \ell_{\alpha} n^{\beta}) A_{\beta}
\end{equation}
where now $K^{\alpha }$ is a rotational Killing vector, $\ell$ and $n$ are the two transversal null vectors on $H$, and the integrand $E_{\perp} \equiv \ell^{\mu} n^{\nu} F_{\mu \nu}$ is the transversal component of the electric field. Evaluating this expression on $H$, where we can use $\ell = \partial_{v}$ and $n = \partial_{\rho}$, we obtain
\begin{equation}
J_{EM} = \frac{1}{4\pi}\int_{{H}} dS\, (F_{v \rho} A_{\varphi}) _{|H} = - \frac{1}{4\pi}\int_{{H}} dS\, A^{(1)}_{v} A^{(0)}_{\varphi} \, ,
\end{equation}
which exactly reproduces the second term in (\ref{J}). In other words, we have shown that the contribution to the horizon charge (\ref{LACARGASSS}) that corresponds to the angular momentum can be written as a sum of Komar integrals on the horizon; namely
\begin{equation}
Q[0,\delta^{A}_{\varphi }, 0]= J_{K} + J_{EM} \, .
\end{equation}
In \cite{Lucho}, the angular momentum of the black hole immersed in a magnetic Melvin universe was computed from the near horizon perspective for the case $q\neq a=0$, resulting in
\begin{equation}
j_{|a=0}=-q^3B(1+\frac 14 q^2B^2)\, .
\end{equation}
This result was found to be consistent with the angular momentum computed by other methods in \cite{Gibbons, Gibbons2, Astorino, Booth, AstorinoKerrCFT}, which in the general case reads
\begin{equation}
j= am-q^3B-\frac 32 amq^2B^2 - (2qa^2m^2+\frac 14 q^5)B^3-(a^3m^3+\frac{3}{16}q^4am)B^4\,
.\label{LaJota}
\end{equation}
The angular momentum turns out to be a finite expansion in powers of $B$, which it is sometimes convenient to write as a polynomial in $am$ or as a polynomial in $q$.
We notice from (\ref{LaJota}) that when $B=0$ the angular momentum reduces to the standard result $j=am$ of Kerr-Newman black holes. Also, we notice that when $q=0$ the angular momentum receives a contribution from the external magnetic field, yielding $j=am(1-a^2m^2B^4)$. When $a=0$ the angular momentum is $j=-q^3B(1+\frac 14 q^2B^2)$. It is also worth noticing that parity and charge conjugation symmetry are reflected in the fact that expression (\ref{LaJota}) is invariant under the transformations $\{ a\to -a,\, j\to -j,\, q\to \mp q ,\, B\to \pm B\}$ and under the transformation $\{q\to - q ,\, B\to - B \}$.
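These special cases follow from (\ref{LaJota}) by direct substitution; namely
\begin{equation}
j_{|q=0}= am-a^3m^3B^4=am(1-a^2m^2B^4)\, , \qquad j_{|a=0}=-q^3B-\tfrac 14 q^5B^3=-q^3B(1+\tfrac 14 q^2B^2)\, ,
\end{equation}
the latter reproducing the near horizon result of \cite{Lucho} quoted above.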
\subsection{Wald entropy}
Now, we can analyze the other conserved charges, one of them corresponding to the black hole entropy. As shown by Wald \cite{WaldEntropy}, the black hole entropy can be expressed as a Noether charge computed at the horizon. It was observed in \cite{Laura} that one of the charges (\ref{LACARGASSS}), the one corresponding to the zero mode $T=1$, which realizes rigid translations in the coordinate $v$, actually reproduces the Wald entropy charge. In other words, the charge associated to the Killing vector $\chi = \partial_v$ gives the Bekenstein-Hawking entropy formula multiplied by the Hawking temperature, namely
\begin{equation}
Q[1,0,0]=-\frac{1 }{16\pi }\int_H dS\, g_{vv}^{(1)}= \frac{ \kappa }{2\pi }\, \frac{A}{4 } = TS\, ,\label{entro}
\end{equation}
($G=c= k_B =1$), with $A$ being the area of the horizon. The second equality in (\ref{entro}) simply follows from $g_{vv}^{(1)}$ being constant. When evaluating the component $g_{vv}^{(1)}$ above, it is worth noticing that, when $q\neq 0$, $f'(r_0)=2\sqrt{m^2-a^2-q^2}$.
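This value follows directly from the horizon data: since $f(r)=r^2+a^2-2mr+q^2$ and $r_0=m+\sqrt{m^2-a^2-q^2}$, one has
\begin{equation}
f'(r_0)=2r_0-2m=2\sqrt{m^2-a^2-q^2}\, ,
\end{equation}
which, as expected, vanishes in the extremal limit $m^2=a^2+q^2$.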
\subsection{Electric charge and Gauss law}
Now, let us move to the electric charge. The claim is that it corresponds to
\begin{equation}\label{sec3eq0}
Q[0,0,1] = - \frac{1}{4 \pi}\int_H dS\, A_{v}^{(1)}\, ,
\end{equation}
evaluated on $H$. In order to prove that this gives the correct result, we can compare with the canonical form, namely with the computation of the electric charge as the integral of the dual 2-form $*F$ over a 2-dimensional surface that encloses the black hole at a distance. In simple words, the Gauss law in the black hole background should give us the total electric charge of the system
\begin{equation}\label{sec3eq1}
e = \frac{1}{4 \pi}\int *F \, .
\end{equation}
What we will show here is that the integral representations (\ref{sec3eq0}) and (\ref{sec3eq1}) actually coincide. The strategy is simple: since the flux (\ref{sec3eq1}) can be taken over any 2-dimensional constant-$r$ and constant-$t$ surface, while in contrast (\ref{sec3eq0}) is defined as an integral on the constant-$v$ sections of $H$, we will first compute the charge (\ref{sec3eq1}) at fixed $r = r_{0}$. Moreover, we will work in the gauge where $\omega(r_{0},\theta) = 0$ to impose the required near horizon boundary conditions. In advanced coordinates and in the original gauge, the electromagnetic potential (\ref{La20}) has the form
\begin{equation}
A = [\Phi_0 (r,\theta) - (\omega (r,\theta)-\omega_0) \Phi_3 (r,\theta) ] \hspace{0.2em} dv + \Phi_3 (r,\theta) \hspace{0.2em} d\varphi + \Phi_3 (r,\theta) h(r,\theta) \hspace{0.2em} d\theta \, ,
\end{equation}
which in terms of the original Boyer-Lindquist type coordinates reads
\begin{equation}\label{sec3eq2}
A = \Big[\Phi_0 (r,\theta ) - \omega (r,\theta ) \Phi_3 (r,\theta )\Big] dt + \Phi_3 (r,\theta )d\phi \equiv A_{t} \, dt + A_{\phi}\, d\phi \, ;
\end{equation}
this expression can be found in appendix B of \cite{Gibbons}; the angular frequency $\omega $ is shifted with respect to the function defined in Eq. (B.8) therein in order to fix the right boundary conditions. With this, we compute the field strength
\begin{align}\label{sec3eq3}
F_{\mu \nu } dx^{\mu } \wedge dx^{\nu } = \partial_{r}A_{t} \, dr\wedge dt + \partial_{\theta}A_{t}\, d\theta \wedge dt + \partial_{r}A_{\varphi}\, dr\wedge d\varphi +\partial_{\theta}A_{\varphi}\, d\theta \wedge d\varphi
\end{align}
from which we get the components of its dual $*F_{\mu \nu } = \frac{1}{2} \sqrt{- g}F^{\alpha \beta} \epsilon_{\alpha \beta \mu \nu }$. Since we are interested in writing the integral (\ref{sec3eq1}) as the flux through a surface defined at constant $r$ and constant $t$, the only component we need to look at is $*F_{\theta \varphi} $. For the spacetime metric given by (\ref{Uno}) -- supplemented with the dependence on $q$--, after some algebra one finds that $*F_{\theta \varphi} $ evaluated at the horizon takes the form
\begin{equation}\label{sec3eq71}
*F_{\theta \varphi \,|H } = -\epsilon_{t r \theta \varphi} \, \frac{\Sigma_0 \sin\theta}{\Lambda _0(\theta ) R_0^2(\theta )} \Big( \partial_{r}A_{t}\Big) _{|r=r_0}
\end{equation}
which can be written as
\begin{equation}\label{sec3eq7}
*F_{\theta \varphi \, |H} = \sqrt{\Sigma_0} \sin\theta \, \times \, \frac{\sqrt{\Sigma_0}}{\Lambda_0(\theta ) R_0^2(\theta )} \Big( \Phi_3 \partial_{r}\omega - \partial_{r}\Phi_0 \Big) _{|r=r_0}\, .
\end{equation}
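The step from (\ref{sec3eq71}) to (\ref{sec3eq7}) uses $A_t=\Phi_0-\omega \Phi_3$ together with the gauge choice $\omega(r_0,\theta)=0$; indeed,
\begin{equation}
-(\partial_{r}A_{t})_{|r=r_0} = \Big( \Phi_3 \,\partial_{r}\omega +\omega\, \partial_{r}\Phi_3 - \partial_{r}\Phi_0 \Big)_{|r=r_0}= \Big( \Phi_3 \,\partial_{r}\omega - \partial_{r}\Phi_0 \Big) _{|r=r_0}\, ,
\end{equation}
since the term proportional to $\omega$ drops at the horizon in this gauge.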
It turns out that this equation is both simple and remarkable: while the first factor in (\ref{sec3eq7}) is the square root of the determinant of the induced metric on the 2-dimensional surface, the second factor gives the correct contribution of the electric potential; namely
\begin{equation}
\sqrt{\Sigma_0} \sin\theta=\sqrt{\det g^{(0)}_{AB}} \, , \ \ \text{and}\ \ \ \frac{\sqrt{\Sigma_0}}{\Lambda_0(\theta ) R_0^2(\theta )} ( \partial_{r}A_{t})_{|r=r_0}=A_{v\, }^{(1)}.
\end{equation}
Therefore, the equivalence of the two methods is completely proven; we find
\begin{align}\label{sec3eq8}
e &= \frac{1}{4 \pi}\int *F = - \frac{1}{4 \pi}\int_H dS\, A_{v}^{(1)} = Q[0,0,1] \, .
\end{align}
In terms of the black hole parameters, the electric charge takes the following form
\begin{equation}\label{LaQarga}
e = q \Big(1-\frac 14 q^2B^2\Big) +2amB\, .
\end{equation}
From this we observe that when $B=0$ the electric charge reduces to the standard result for the Kerr-Newman geometry, namely $e=q$. We also notice that when $B\neq 0$ the physical charge $e$ receives a contribution from both the spin and the external magnetic field; it exhibits a non-linear dependence on $q$ and it yields a finite value $e=2amB$ in the case $q=0$. Due to the non-linear term in $q$, when $a=0$ the electric charge can still be zero for non-vanishing values of $q$ provided the condition $q=\pm 2/B$ is satisfied. This yields a critical value $B_c$ of the field for which $e=0$ with $q\neq 0$, namely $|B_c|=2/|q|$; at this value $j$ does not necessarily vanish. It is also worth noticing that parity and charge conjugation symmetry are reflected in the fact that expression (\ref{LaQarga}) is invariant under the transformations $\{ q\to -q,\, e\to -e,\, a\to \mp a ,\, B\to \pm B\}$ and under the transformation $\{a\to - a ,\, B\to - B \}$.
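The critical value can be checked directly from (\ref{LaQarga}): for $a=0$ and $q=\pm 2/B$ one has $q^2B^2=4$, so that
\begin{equation}
e = q\Big(1-\tfrac 14 q^2B^2\Big) = q\,(1-1) = 0\, ,
\end{equation}
i.e. the physical charge vanishes even though the parameter $q$ does not.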
In conclusion, we can say that from (\ref{LACARGASSS}) we obtained the right conserved charges of the charged spinning black hole immersed in a backreacting magnetic field. This analysis, however, was valid for the case of non-extremal black holes. As explained in \cite{Extended}, the first term in the expression (\ref{LACARGASSS}) for the charges gets modified in the extremal limit, which is the case we are mainly interested in, as it is the one in which the Meissner effect can occur. The method to compute the horizon charges in the extremal case has been worked out in \cite{Puhm}, and in Section 4 we will discuss its application to analyze the black hole in the Meissner state.
\subsection{Thermodynamics of magnetized black holes}
Before moving to analyze the Meissner effect, let us review the thermodynamics in the case of magnetized black holes for generic values of $a$, $q$ and $m$. In the ensemble defined by keeping the external field fixed, the first law of black hole mechanics takes the form
\begin{equation}
dM=\frac{\kappa}{2\pi} dS +\Omega \,dj + \Phi\, de \, ,\label{1spp}
\end{equation}
where the explicit expressions for the angular velocity $\Omega$ and the electric potential $\Phi$ at the horizon can be found in \cite{Astorino}. $M$ in (\ref{1spp}) is the Christodoulou-Ruffini mass, which takes a cumbersome expression in terms of the parameters $m,\, a,\, q$ and $B$. More precisely, $M^2$ is a polynomial of degree 4 in $B$ with coefficients that depend on $m,\, a$ and $q$; see Eq. (44) in \cite{Astorino}. In terms of this mass and the other physical charges, the constraint $m^2\geq a^2+q^2$, which is the condition for avoiding naked singularities, translates into
\begin{equation}
M^2\geq \frac 12 \Big( e^2+\sqrt{e^4+4j^2}\Big)\, ,
\end{equation}
which reduces to the standard inequalities $M\geq |e|$ and $M\geq |j|/M$ in the cases $j=0$ and $e=0$, respectively.
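Indeed, the two limiting cases give
\begin{equation}
M^2\geq \tfrac 12 \big( e^2+\sqrt{e^4}\,\big)=e^2 \qquad \text{and} \qquad M^2\geq \tfrac 12 \sqrt{4j^2}=|j|\, ,
\end{equation}
respectively.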
In addition to the first law (\ref{1spp}), the black hole quantities obey the Smarr type formula
\begin{equation}
M=\frac{\kappa}{\pi}S+2\Omega\, j + \Phi\, e\, ,
\end{equation}
which notably simplifies when the black hole is in the Meissner state, where $e=0$ and $M^2=|j|$.
\section{Meissner effect}
\subsection{The phenomenon}
Now, let us go back to the Meissner effect: By considering the solution of Maxwell equations describing a Kerr black hole in a background magnetic field, first studied by Wald in \cite{Wald} in the probe approximation, King et al. made in \cite{King} a remarkable observation: as they approach extremality, spinning black holes expel the lines of magnetic field. More precisely, they found that the flux of magnetic field through the event horizon hemisphere decreases monotonically from $4\pi m^2B$ to zero as the angular momentum increases. In their own words, the lines of force of the magnetic field seem to experience a centrifugal repulsion as the hole is spun up. This is the black hole Meissner effect, and it is
confirmed by the analysis of the full backreacting solution \cite{backreaction, Voro, Budin}. In fact, the Meissner effect is a quite generic feature of stationary axisymmetric solutions \cite{Bicak, Bicak2, Mac}, having been observed, for example, in charged black holes \cite{Mac, Bicak, Ruffini} and in extended solutions in higher dimensions \cite{me11}.
We will focus on uncharged black holes. The Meissner effect then takes place when the black hole is maximally rotating. According to our charge computation, the zero electric charge condition is $Q[0,0,1]=0$, which reads
\begin{equation}
a= \frac{1}{2m} \Big( \frac 14 q^3B -\frac qB \Big) \label{LaA}
\end{equation}
On the other hand, the extremality condition is
\begin{equation}
m=\sqrt{q^2+a^2}\label{LaB}
\end{equation}
One can explicitly verify that, provided both (\ref{LaA}) and (\ref{LaB}) are satisfied, the magnetic field vanishes at the horizon. To see this, we can take a look at the radial component of the magnetic field at the horizon, which is given by
\begin{equation}
B_{r|H}=\frac{1}{\sqrt{\Sigma_0 }\sin \theta }\, ({\partial }_{ \theta }\Phi_3 )_{|r=r_0}\, ,\label{Berre}
\end{equation}
and the azimuthal component of the magnetic field at the horizon, which is
\begin{equation}
B_{\theta |H}=\frac{\sqrt{f(r_0)}}{\sqrt{\Sigma_0 }\sin \theta }\, ({\partial }_{ r }\Phi_3 )_{|r=r_0}\, .\label{Btita}
\end{equation}
It turns out that, when both (\ref{LaA}) and (\ref{LaB}) hold, both (\ref{Berre}) and (\ref{Btita}) vanish; that is,
\begin{equation}
B_{r|H}=0\, , \ \ \ \ B_{\theta |H}=0\, .
\end{equation}
A particular case in which this happens is $m=\pm q$ with $q=\pm 2/B$, which yields $a=0$ and $|j|=4/|B|=2|q|$. However, this is a much more general phenomenon, which occurs whenever $e=0$ and $m^2=a^2+q^2$. We will focus on solutions that are continuously connected with the Kerr-Newman solution when $B\to 0$. For such solutions, the validity of the neutral condition (\ref{LaA}) together with the extremality condition (\ref{LaB}) implies the following relation between the parameters $m$, $q$ and $B$
\begin{equation}
B_{\sigma }=\frac{4m\, \text{sign} (a)}{q^3} \Big( \sqrt{m^2-q^2}+\sigma m-\sigma \frac{q^2}{2m} \Big) \, ,
\end{equation}
with $\sigma =\pm 1$ indicating two different (Meissner) branches; cf. Eqs. (50)-(52) in \cite{Astorino}. Noticing that, for the extremal configuration, $\text{sign}(a)\sqrt{m^2-q^2}=a$, the value of the magnetic field in each branch can be written as
\begin{eqnarray}
B_{\sigma} = {2\sigma \, \text{sign}(qa) } \, \frac{(m+ \sigma |a|)^{ 1/2}} {(m- \sigma |a|)^{ 3/2}}\, . \label{Bsigma}
\end{eqnarray}
This expresses that, when $B=0$, the extremality condition for the neutral black hole reduces to $m=|a|$; and it also shows the existence of branches that are not continuously connected to the Kerr-Newman solution in the limit $B\to 0$. Of course, each branch ($\sigma =\pm 1$) of solutions of (\ref{Bsigma}) remains unchanged when changing $\{B_{\sigma },q,a\}\to \{-B_{\sigma },\pm q,\mp a\}$ or $\{B_{\sigma },q,a\}\to \{B_{\sigma },- q,- a\}$, as it follows from (\ref{LaA}).
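For concreteness, the $\sigma =+1$ case of (\ref{Bsigma}) can be obtained directly: using $\text{sign}(a)\sqrt{m^2-q^2}=a$ and $q^2=m^2-a^2=(m-|a|)(m+|a|)$ at extremality,
\begin{equation}
B_{+}=\frac{4m\, \text{sign} (a)}{q^3} \Big( |a|+ m- \frac{q^2}{2m} \Big)=\frac{2\, \text{sign} (a)}{q^3}\big( m+|a|\big)^2 = 2\, \text{sign}(qa)\, \frac{(m+ |a|)^{ 1/2}} {(m- |a|)^{ 3/2}}\, ,
\end{equation}
where in the last step we used $q^3=\text{sign}(q)\,(m-|a|)^{3/2}(m+|a|)^{3/2}$.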
It is worth pointing out that, when (\ref{LaA}) and (\ref{LaB}) hold, the azimuthal component of the magnetic field also vanishes, and not only the radial one. This remark is important because previous analyses of the Meissner effect were based on the observation that the flux of the magnetic field through a hemisphere of the horizon vanishes, which of course suffices to verify the expulsion of the magnetic lines from the black hole. However, the vanishing of the azimuthal component in the near horizon limit can only be observed by explicitly computing the components of the field and not by computing the flux. Figure \ref{fig:1} shows the lines of magnetic field and its strength close to the event horizon.
\begin{figure}
\centering
\includegraphics[scale=0.50]{Meissner_Fig1.jpg}
\caption{Lines of magnetic field close to the event horizon for a black hole exhibiting the Meissner effect (right) and for one that is not in its Meissner state (left). Red indicates the zones where the magnetic field is weaker, while yellow indicates where the magnetic field is stronger. The parameters of the black hole in the Meissner state (right) are such that $|j|=M^2$ and $e=0$, while in the other configuration (left) $0<|j|<M^2$ and $|e|> 0$.} \label{fig:1}
\end{figure}
\subsection{Thermodynamics of the Meissner state}
The neutral Meissner state we are interested in is characterized by (\ref{LaA}) and (\ref{LaB}), which in terms of the black hole charges corresponds to
\begin{equation}
e=0 \, , \ \ \ M^2=|j|\, .\label{urol}
\end{equation}
In such an extremal uncharged state, the Smarr formula reduces to $M={2\Omega j}$, which, using the explicit expression for the angular velocity at the horizon, yields a remarkably simple expression for the product of the squared black hole mass and its entropy, namely
\begin{equation}
M^2S={2\pi j^2}.
\end{equation}
Therefore, for the Meissner state we get that the mass and the entropy are
\begin{equation}
M=\sqrt{|j|} \, , \ \ \ \ \ S={2\pi |j|}\, .
\end{equation}
As we will see, this entropy can be reproduced with our near horizon approach.
\subsection{Charged black holes and Meissner effect}
Non-rotating black holes ($j=0$) in the Melvin background are also known to exhibit the Meissner effect \cite{Mac}, and we do observe it, as depicted in Figure \ref{fig:2}.
\begin{figure}
\centering
\includegraphics[scale=0.50]{Meissner_Fig2.jpg}
\caption{Lines of magnetic field close to the event horizon for the case of an electrically charged black hole exhibiting the Meissner effect. The colour code is the same as in the previous figure: red indicates the zones where the magnetic field is weaker, while yellow indicates where the magnetic field is stronger. The parameters of the black hole are such that $j=0$ and $M=|e|$.} \label{fig:2}
\end{figure}
While here we are mainly interested in the neutral extremal black holes, let us write down the charges in the electrically charged case for completeness: In fact, using the explicit expression for the electric potential at the horizon \cite{Astorino}, we obtain remarkably simple formulae for the mass and the entropy of the magnetized extremal non-spinning black hole ($j=0$); namely
\begin{equation}
M=|e| \, , \ \ \ \ \ S={\pi e^2}\, .
\end{equation}
\subsection{Symmetries and charges of the Meissner state}
In Section 2, we have shown that the Ernst-Wild solution to Einstein-Maxwell equations, describing a spinning black hole immersed in a magnetic Melvin universe, can be accommodated in a coordinate system that fulfills the near horizon asymptotic conditions studied in \cite{Puhm}. While we showed this in full generality, for arbitrary values of $m, q, a$ and $B$, the computation of the Noether charges (\ref{LACARGASSS}) was done for the non-extremal case ($\kappa \neq 0$), the extremal case being somewhat special. In order to generalize the horizon charge computation to the extremal case, we revisit the results of \cite{Extended, Puhm}: On the constant-$v$ slices of the horizon, we consider a conformal metric
\begin{equation}
g_{AB}^{(0)} = \Theta\, \gamma_{AB}\, ,
\end{equation}
where $\gamma_{AB}$ is the metric of constant curvature on the 2-sphere and $A,B=1,2$ either refer to the coordinates $z^A=\{\varphi , \, \theta \}$ or to complex coordinates $z^A=\{z,\, \bar{z}\}$. The conformal factor $\Theta$ is given by an arbitrary function of these coordinates, say of $z$ and $\bar{z}$. Explicitly, we have
\begin{equation}
g_{AB}^{(0)} = \frac{2\Theta (z,\bar{z} )}{(1+|z|^2)^2}(\delta^{z}_{A}\delta^{\bar{z}}_{B} +\delta^{\bar{z}}_{A}\delta^{z}_{B})\, .
\end{equation}
In the extremal case ($\kappa =0$) the boundary conditions at the horizon are preserved by the asymptotic Killing vectors of the form
\begin{equation}
\xi = D(z,\bar z )\, v\partial_v + Y(z)\partial_z + \bar Y (\bar z )\partial_{\bar z } +\mathcal{O}(\rho )\label{blaudiff}
\end{equation}
and by the gauge parameter
\begin{equation}
\epsilon = U(z,\bar z ) -D(z,\bar z )\, vA^{(0)}_v +\mathcal{O}(\rho )\, ,\label{blaudifff}
\end{equation}
with $D(z,\bar z )$, $U(z,\bar z )$, $Y^A=\{Y(z), \bar{Y}(\bar z )\}$, $A=1,2$, being four arbitrary functions. Notice that $D=const$ corresponds to dilations of the null direction $v$; in the non-extremal case this gets replaced by local translations in $v$; this is the reason why diffeomorphisms generated by $D(z,\bar{z}) \, v\, \partial_v$ have non-vanishing commutator with the horizon supertranslations generated by $\chi = T(z,\bar z )\, \partial_v$, cf. \cite{Extended, Puhm}; schematically, the structure $[T,D]=D$ corresponds to translations and dilations on $H$. The transformations (\ref{blaudiff})-(\ref{blaudifff}) generate the change in the fields
\begin{equation}
\delta_{\xi , \epsilon} g_{\mu \nu }=\mathcal{L}_{\xi }g_{\mu \nu}\, , \ \ \ \delta_{\xi , \epsilon} A_{\mu }=\mathcal{L}_{\xi }A_{\mu }+\partial_{\mu }\epsilon \, ,
\end{equation}
preserving the correct boundary conditions at the horizon ($\rho = 0$). The conserved charges associated to these symmetries are given by \cite{Extended}
\begin{equation}
Q[D, Y^{A}, U]= - \frac{1}{16\pi G }\int dS\, \Big( -2D+Y^{B}(g_{vB}^{(1)}+4A_{B}^{(0)}A_v^{(1)})+4UA^{(1)}_v \Big)\, .
\end{equation}
While the charges $Q[D(z,\bar{z}),0,0]$ and $Q[0, 0,U(z,\bar{z})]$ generate two commuting copies of the level-0 current algebra $\hat{u}(1)_0$, i.e. supertranslations with no central extension, the charges $Q[0, \delta^A_{z}Y(z),0]$ and $Q[0,\delta^A_{\bar{z}}\bar{Y}(\bar{z}), 0]$ generate two copies of the 2-dimensional local conformal (Virasoro) algebra with vanishing central charge, i.e. two commuting copies of Witt algebra generated by $L(z)=Y^z(z)\partial_{z}$ and $\bar{L}(\bar{z})=Y^{\bar{z}}(\bar{z})\partial_{\bar{z}}$, with the Fourier expansion $L(z)=\sum_{n\in \mathbb{Z}}L_nz^n\,\partial_{{z}}$, $\bar{L}(\bar{z})=\sum_{n\in \mathbb{Z}}\bar{L}_n\bar{z}^n\,\partial_{\bar{z}}$ expressing the extended $SL(2, \mathbb{R})$ structure $[L,L]=L$, along with the non-diagonal piece $[L,D]=D$. The transformations generated by diffeomorphisms and gauge transformations associated with functions $D(z,\bar{z})$ and $U(z,\bar{z})$ form infinite-dimensional Abelian ideals of the full algebra; see \cite{Puhm} for details. One can verify that the charge $Q[1,0,0]$ reproduces the entropy of the extremal magnetized black hole; more precisely, we find
\begin{equation}
Q[1,0,0] = \frac{1}{2\pi }\, \frac{A}{4}\, ,\label{ultimate}
\end{equation}
which is the entropy multiplied by a factor $1/(2\pi )$. For the neutral black hole in the Meissner state, we find that (\ref{ultimate}) yields
\begin{equation}
Q[1,0,0] = |j|\, .
\end{equation}
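This is consistent with the thermodynamics of the Meissner state derived above: since $S=A/4=2\pi |j|$ there, (\ref{ultimate}) indeed gives $Q[1,0,0]=S/(2\pi )=|j|$.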
Notice that the factor $1/(2\pi )$ in (\ref{ultimate}) is reminiscent of the one appearing in the Kerr/CFT computation of the entropy \cite{AstorinoKerrCFT, otroKerrCFT}; in Kerr/CFT that factor is interpreted as coming from the left moving temperature in the Frolov-Thorne vacuum \cite{KerrCFT}. Understanding the precise connection between Kerr/CFT and our near-horizon calculations would be very interesting. The relation with other scenarios involving magnetic fields and extremal black holes, such as the analysis of the force-free electrodynamics done in \cite{Lupsasca:2014pfa}, is also worth studying.
\section{Conclusions}
In this paper we have explicitly shown that Kerr-Newman black holes immersed in an external magnetic field exhibit infinite-dimensional symmetries in the near horizon limit. To show this, we applied the method developed in \cite{Laura, Extended, Puhm} to the Ernst-Wild solution to Einstein-Maxwell equations, which describes a spinning, electrically charged black hole embedded in a magnetic Melvin universe. By carefully adapting the formalism of \cite{Puhm} to the case of magnetized black holes, we wrote the asymptotic near horizon expansion for the spacetime metric and the gauge field; we showed that it corresponds to the boundary conditions yielding supertranslation and superrotation asymptotic symmetries at the horizon. Then, we showed that the Noether charges associated with the zero modes of these symmetries reproduce the physical variables of the magnetized black hole and its thermodynamics. This represents a generalization of the results of \cite{Lucho} to arbitrary values of the black hole parameters. In addition, we elaborated on the horizon symmetry computation by proving that the Noether charge associated with the angular momentum computed at the horizon can be expressed as the sum of two Komar integrals, one corresponding to the geometry contribution and one to the gauge field contribution. While the latter vanishes in asymptotically flat spacetimes, it does contribute to the angular momentum when the black hole is embedded in the Melvin magnetic background. We also showed the validity of the Gauss law by expressing the horizon charge associated with the electric charge as a flux integral. Then, we focused on the case in which the spinning black hole is neutral and approaches extremality: this is the case in which the event horizon exhibits the Meissner effect. As explained in \cite{Extended}, for such a configuration the horizon charges change, although the theory still exhibits an infinite-dimensional symmetry. This symmetry is still a combination of local conformal transformations and two sets of supertranslations. The latter correspond to superdilations of the null coordinate on the horizon and to gauge transformations that preserve the gauge field configuration at the horizon. The computation of the charges associated with the zero modes of these symmetries allowed us to perform the analysis of the thermodynamics of the event horizon in its Meissner state, reproducing the results in the literature.
To conclude, let us mention that an interesting future direction of this line of research would be to understand the near horizon description of the magnetized horizons from the perspective of \cite{Donnay}. There, the authors show that the geometry of a black hole horizon can be described as a Carrollian geometry emerging from an ultra-relativistic limit in the near-horizon region. Extending the formalism of \cite{Donnay} to study the dynamics of the magnetic field from the near horizon perspective is a natural next step.
\[\]
The authors thank Marco Astorino, Sasha Brenner, Gregory Gabadadze, Andrei Gruzinov, and Juan Laurnagaray for discussions. This work has been partially supported by grants PIP-(2017)-1109, PICT-(2019)-00303, PIP-(2022)-11220210100685CO, PIP-(2022)-11220210100225CO, PICT-(2021)-GRFTI-00644.
|
{
"arxiv_id": "2302.14195",
"language": "en",
"timestamp": "2023-03-01T02:04:29",
"url": "https://arxiv.org/abs/2302.14195",
"yymm": "2302"
} | \subsection{Configurations of (pre) \nuevamacro{\text{\sc acn}}{es}}
We state some further notions and results on (pre) \nuevamacro{\text{\sc acn}}{es}, along the same lines as what we have done in \cite{lics}.
\input{aux-ipt}
\begin{definition}\label{de:ca-conf}
Let $C = \langle S, T, F, I, \mathsf{m}\rangle$ be a p\nuevamacro{\text{\sc acn}}. A set of
transitions $X\subseteq T$ is a \emph{configuration} of $C$ if
\begin{enumerate}
\item $\forall t\in X$. $\histtwo{t}{\lessdot}\subseteq X$
(\emph{left closedness} with respect to $\lessdot$); and
\item $\textcolor{black}{\leadsto}\cup\lessdot$ is acyclic on $X$.
\end{enumerate}
The set of configurations of a p\nuevamacro{\text{\sc acn}}{} $C$ is denoted by $\Conf{C}{p\nuevamacro{\text{\sc acn}}}$.
\end{definition}
If the net considered is an \nuevamacro{\text{\sc acn}}{}, the condition that $\textcolor{black}{\leadsto}\!\cup\lessdot$ is acyclic on $X$ can
be written as $\forall t, t'\in X.\ \neg (t {\natural} t')$, because of the peculiar form
of the $\textcolor{black}{\ \mbox{\reflectbox{$\leadsto$}}\ }$ relation, as $\textcolor{black}{\leadsto} = \textcolor{black}{\ \mbox{\reflectbox{$\leadsto$}}\ }$.
Notice that there is a close correspondence between the configurations of a p\nuevamacro{\text{\sc acn}}\
and reachable markings: any reachable marking determines a
configuration of the net and \emph{vice versa}.
\begin{proposition}\label{pr:ca-reach-markings-are-conf}
Let $C = \langle S, T, F, I, \mathsf{m}\rangle$ be a p\nuevamacro{\text{\sc acn}}{}. Then,
\begin{enumerate}
\item if $m'\in\reachMark{C}$ then $\pre{m'}\in\Conf{C}{p\nuevamacro{\text{\sc acn}}}$; and
\item if $X\in\Conf{C}{p\nuevamacro{\text{\sc acn}}}$ then $\mathsf{m}-\pre{X}+\post{X}\in\reachMark{C}$.
\end{enumerate}
\end{proposition}
\ifreport
\begin{proof}
Consider $m'\in\reachMark{C}$, then there exists a firing sequence
$\sigma$ such that $m' = \lead{\sigma}$.
%
We prove on the length of the firing sequence that $\pre{m'}\in\Conf{C}{p\nuevamacro{\text{\sc acn}}}$.
%
If $\len{\sigma} = 0$ then $\lead{\sigma} = \mathsf{m}$ and $\pre{\mathsf{m}} = \emptyset$
which is a configuration in $\Conf{C}{p\nuevamacro{\text{\sc acn}}}$.
%
Assume it holds for firing sequences of length $n$ and consider
$m' = \lead{\sigma}$ with $\len{\sigma} = n+1$.
%
We have that $\sigma(n)\trans{t}m'$ for some $t\not\in \pre{\sigma(n)} = X'\in\Conf{C}{p\nuevamacro{\text{\sc acn}}}$
and, since $X'$ is a configuration, it can be partially ordered. Now,
as $\sigma(n)\trans{t}$, we have that for all $t'\lessdot t$, $t'\in X'$ because
$\inib{t}\cap\pre{t'}\neq\emptyset$, and this implies that
$\histtwo{t}{\lessdot}\subseteq X'\cup\setenum{t}$.
%
$\textcolor{black}{\leadsto}\cup\lessdot$ is acyclic on $X'$; moreover, $\forall t'\in X'$ it must be that
$\neg (t'\textcolor{black}{\leadsto} t)$,
since $\post{t'}\subseteq \sigma(n)$ and $t'\textcolor{black}{\leadsto} t$ would contradict $\sigma(n)\trans{t}$. But then
$\textcolor{black}{\leadsto}\cup\lessdot$ is acyclic also on $X'\cup\setenum{t} = \pre{m'}$.
%
We conclude that $m'\in\reachMark{C}$ implies $\pre{m'}\in\Conf{C}{p\nuevamacro{\text{\sc acn}}}$.
For the second point, we observe that, as $X$ is a configuration,
its elements can be partially ordered, thus $X = \setenum{t_1, \dots, t_n}$ and
if $t_i \lessdot t_j$ or $t_i \ \textcolor{black}{\leadsto}\ t_j$ then $i < j$.
%
We prove that $\mathsf{m}=m_0\trans{t_1}m_1\trans{t_2}\cdots m_{n-1}\trans{t_n}m_n$ is
a firing sequence and $\forall i \in \setenum{1, \dots n}$ we have
$\pre{m_i} = \setenum{t_1, \dots, t_i}$ and $m_{i-1}\trans{t_i}$.
%
Clearly the empty configuration corresponds to $m_0 = \mathsf{m}$, and
$m_0\trans{t_1}$ as $\histtwo{t_1}{\lessdot} = \emptyset$ and $t_1$ is not prevented by any
transition. Assume that, $\forall i \in \setenum{1, \dots n-1}$, both
$\pre{m_i} = \setenum{t_1, \dots, t_i}$ and $m_{i-1}\trans{t_i}$ hold, and
$\pre{m_{n-1}} = X\setminus \setenum{t_n}$. Now $m_{n-1}\trans{t_n}$,
as $\histtwo{t_n}{\lessdot}\subseteq X\setminus \setenum{t_n}$, which implies
that $\inib{t_n}\cap m_{n-1} = \emptyset$, and also, for all $t'\textcolor{black}{\leadsto} t_n$, we
have that $t'\not\in X\setminus \setenum{t_n}$.
%
We can then conclude that $m = \mathsf{m} -\pre{X} + \post{X}$ is a reachable marking.
\end{proof}
\fi
If $X \in \Conf{C}{p\nuevamacro{\text{\sc acn}}}$ the transitions in
$X$ can be partially ordered with respect to $\lessdot \cup \ \textcolor{black}{\ \mbox{\reflectbox{$\leadsto$}}\ }$,
which means that $X = \setenum{t_1, \dots, t_n}$ and
if $t_i \lessdot t_j$ or $t_i \ \textcolor{black}{\leadsto}\ t_j$ then $i < j$,
where $t \ \textcolor{black}{\ \mbox{\reflectbox{$\leadsto$}}\ }\ t'$ iff $t' \textcolor{black}{\leadsto} t$.
\begin{lemma}\label{lm:ordering-configuration}
Let $C = \langle S, T, F, I, \mathsf{m}\rangle$ be a p\nuevamacro{\text{\sc acn}}{} and let $X\subseteq T$ be a
\emph{configuration} of $C$. Then $X$ can be partially ordered with respect
to $\textcolor{black}{\leadsto}\cup\lessdot$.
\end{lemma}
\ifreport
\begin{proof}
Take $X\in \Conf{C}{p\nuevamacro{\text{\sc acn}}}$.
As $X$ is acyclic with respect to $\textcolor{black}{\leadsto}\cup\lessdot$,
we have that $\hat{\leq} = (\textcolor{black}{\leadsto}\cup\lessdot)^{\ast} \cap (X \times X)$ is a partial
order, and the thesis follows.
\end{proof}
\fi
We stress that all the results are still valid if the nets are \nuevamacro{\text{\sc acn}}{es}, as conflict inheritance along the
$\lessdot$ relation does not play any role here.
\subsubsection{Morphisms and configurations:}
We end this part by observing that morphisms preserve configurations, which is proven using the lemma
below.
\begin{lemma}\label{lm:cause-preservation}
Let $(f_S,f_T) : C_0 \rightarrow C_1$ be an \nuevamacro{\text{\sc acn}}-morphism. Then
\begin{itemize}
\item for all $t\in T_0$, if $f_T(t) \neq \bot$ then
$\histtwo{f_T(t)}{\lessdot_1} \subseteq f_T(\histtwo{t}{\lessdot_0})$; and
\item for all $t_0, t_0'\in T_0$ such that $f_T(t_0) \neq \bot\neq f_T(t_0')$, if
$f_T(t_0) \textcolor{black}{\ \mbox{\reflectbox{$\leadsto$}}\ }_1\ f_T(t_0')$ then $t_0 \textcolor{black}{\ \mbox{\reflectbox{$\leadsto$}}\ }_0\ t_0'$.
\end{itemize}
\end{lemma}
\ifreport
\begin{proof}
Take $t\in T_0$ such that $f_T(t) \neq \bot$ and consider $\histtwo{f_T(t)}{\lessdot_1}$.
For each $t_0\in \histtwo{f_T(t)}{\lessdot_1}$ we have that either $\pre{t_0}\cap\inib{f_T(t)} \neq\emptyset$
or there exist $t_1,\dots, t_n \in \histtwo{f_T(t)}{\lessdot_1}$ such that
$(\pre{t_0},t_1), (\pre{t_1},t_2), \dots, (\pre{t_n},f_T(t)) \in I_1$;
but as $(f_S,f_T)$ is an \nuevamacro{\text{\sc acn}}-morphism,
either there exists an $s_0\in S_0$
such that $\post{s_0} = t_0'$, $f_T(t_0') = t_0$ and $(s_0, t_0')\in I_0$,
or there exist $s_0, s_1, \dots, s_n$ such that $\post{s_i} = t_i'$,
$f_T(t_i') = t_i$ and $(s_i, t_i')\in I_0$, which implies that
$f_T(t_0') = t_0\in f_T(\histtwo{t}{\lessdot_0})$. This proves the inclusion.
The other item is implied by the fact that \nuevamacro{\text{\sc acn}}-morphisms reflect inhibitor arcs.
\end{proof}
\fi
\begin{proposition}
Let $(f_S,f_T) : C_0 \rightarrow C_1$ be an \nuevamacro{\text{\sc acn}}-morphism. If $X\in\Conf{C_0}{p\nuevamacro{\text{\sc acn}}}$ then
$f_T(X)\in\Conf{C_1}{p\nuevamacro{\text{\sc acn}}}$.
\end{proposition}
\ifreport
\begin{proof}
Take $X\in\Conf{C_0}{p\nuevamacro{\text{\sc acn}}}$ and consider $f_T(X)$. For each $t_1\in f_T(X)$ there exists
$t_0\in X$ such that
$f_T(t_0) = t_1$. As $\histtwo{f_T(t_0)}{\lessdot_1} \subseteq f_T(\histtwo{t_0}{\lessdot_0})$
we have that $\histtwo{t_1}{\lessdot_1} \subseteq f_T(X)$.
Assume now that $\textcolor{black}{\leadsto}_1\!\cup\lessdot_1$ is not acyclic on $f_T(X)$. But, as
the two relations are induced by inhibitor arcs which are reflected, this would imply that
also $\textcolor{black}{\leadsto}_0\!\cup\lessdot_0$ is cyclic on $X$ which contradicts the assumption that $X$ is
a configuration. Therefore $f_T(X)\in\Conf{C_1}{p\nuevamacro{\text{\sc acn}}}$.
\end{proof}
\fi
\begin{corollary}
Let $(f_S,f_T) : C_0 \rightarrow C_1$ be an \nuevamacro{\text{\sc acn}}-morphism and let $C_0, C_1$ be two \nuevamacro{\text{\sc acn}}{es}. If $X\in\Conf{C_0}{\nuevamacro{\text{\sc acn}}}$ then
$f_T(X)\in\Conf{C_1}{\nuevamacro{\text{\sc acn}}}$.
\end{corollary}
\subsection{Causal Nets and Asymmetric Causal Nets}
In \cite{lics} we introduced the notion of \emph{causal net} to show that it is the proper
kind of net corresponding to the reversible prime event structures of \cite{rpes}. In that
paper we proved that each occurrence net, the classical counterpart of prime event structures in
net terms, can be seen as a causal net.
Here we show that each causal net can be seen as an asymmetric causal net, which implies that
the tight correspondence between causal nets and reversible prime event structures can be
transferred to \nuevamacro{r\text{\sc acn}}. In \cite{lics}, however, we did not consider causal nets in categorical terms.
\input{pcnversuspacn}
\subsection{Auxiliary definitions}
We start by fixing notation and introducing auxiliary definitions concerning nets that will be useful in the following.
Given an \textsf{fs}\ $\sigma$, $\mathcal{X}_{\sigma}$ is the set of all sequences of multisets of
transitions that \emph{agree} with $\sigma$, namely the set
$\setcomp{\theta}{\len{\theta} = \len{\sigma}\ \land\ (\sigma(i)\trans{\theta(i)}\sigma(i+1)\
\mathit{with}\ i<\len{\sigma})}$, and we write
$X_{\sigma} = \setcomp{\sum_{i=1}^{\len{\theta}} \theta(i)}{\theta\in \mathcal{X}_{\sigma}}$
for the set of multisets of transitions associated with an \textsf{fs}.
Each multiset in $X_{\sigma}$ is a \emph{state} of the net, and we write
\(
\states{N} = \bigcup_{\sigma\in\firseq{N}{\mathsf{m}}} X_{\sigma}
\)
for the set of states of the net $N$; each
$\theta\in \mathcal{X}_{\sigma}$ is an \emph{execution} of the net.
\subsection{Equivalence of the definitions of \nuevamacro{r\textsc{aes}es}}
The definition of reversible asymmetric event structure given in \cite{rpes} and
\cite{GPY:categories} has just two relations which, as we already said, comprise
both the forward and the reverse causality and prevention.
To avoid confusion we call them \emph{standard reversible asymmetric event structures}.
\begin{definition}\label{de:raes-old}
A \emph{standard reversible asymmetric event structure} (s\nuevamacro{r\textsc{aes}}) is a quadruple
$\mathsf{K} = (E, U, \prec, \lhd)$
where $E$ is the set of events and
\begin{enumerate}
\item\label{cond:1biso} $U \subseteq E$ is the set of \emph{reversible} events;
\item\label{cond:3biso} $\prec \subseteq E \times (E\cup\un{U})$ is an irreflexive causation relation;
\item\label{cond:2biso} $\lhd \subseteq (E\cup\un{U}) \times E$ is an
irreflexive \emph{precedence} relation
such that for all $\alpha\in E\cup\un{U}.\ \setcomp{e\in E}{e\prec \alpha}$ is finite and
acyclic with respect to $\lhd\cup\prec$;
\item\label{cond:4biso} $\forall u\in U.\ u \prec \un{u}$;
\item\label{cond:5biso} for all $e\in E$ and $\alpha\in E\cup\un{U}$, if $e\prec \alpha$
then not $\alpha\lhd e$; and
\item\label{cond:6biso} the relation $\ensuremath{\prec\!\!\prec}$,
defined as $e \ensuremath{\prec\!\!\prec} e'$ when $e \prec e'$ and if $e = u$,
with $u\in U$, then
$\un{u}\lhd e'$, is such that
\begin{itemize}
\item $e\ensuremath{\prec\!\!\prec} e'$ implies $e\lhd e'$;
\item it is a transitive relation; and
\item if $e \# e'$ and $e \ensuremath{\prec\!\!\prec} e''$ then $e' \# e''$, where $\# = \lhd\cap\rhd$.
\end{itemize}
\end{enumerate}
\end{definition}
In this definition $\prec$ comprises the forward causality (which we called \emph{causation})
and the reverse causality, and $\lhd$ comprises the weak causality (forward relation)
and the prevention involving the undoing of events.
Just observing that the relations on subsets of events reduces
always to the part concerning \emph{forward} relations, and that the last condition of the previous
definition
can be rewritten just requiring that,
when restricted to forward events only,
sustained causation and the restriction of the prevention to these events is an \nuevamacro{\textsc{aes}}{},
it is straightforward to see that the two proposition below hold.
\begin{proposition}\label{pr:raesdefcorruno}
Let $\mathsf{K} = (E, U, \prec, \lhd)$ be an s\nuevamacro{r\textsc{aes}}, then
$\mathsf{H} = (E, U, <_{\mathsf{H}}, \nearrow_{\mathsf{H}}, \prec_{\mathsf{H}}, \lhd_{\mathsf{H}})$ is an
\nuevamacro{r\textsc{aes}}, where $<_{\mathsf{H}} = \prec\cap(E\times E)$, $\nearrow_{\mathsf{H}} = \lhd\cap(E\times E)$,
$\prec_{\mathsf{H}} = \prec\cap(E\times \un{U})$ and $\lhd_{\mathsf{H}} = \lhd\cap(\un{U}\times E)$.
\end{proposition}
\begin{proposition}\label{pr:raesdefcorrdue}
Let $\mathsf{H} = (E, U, <, \nearrow, \prec, \lhd)$ be an \nuevamacro{r\textsc{aes}}, then
$\mathsf{K} = (E, U, <\cup\prec, \nearrow\cup\lhd)$ is an
s\nuevamacro{r\textsc{aes}}.
\end{proposition}
The proofs of both propositions are trivial and omitted.
\subsection{Asymmetric Event Structures}\label{sec:aes}
An \nuevamacro{\textsc{aes}} consists of a set of events and two relations: \emph{causality} ($<$) and
\emph{weak causality} or \emph{precedence} ($\nearrow$). If $e$ weakly causes $e'$,
written $e \nearrow e'$, then $e$ cannot occur after $e'$; i.e., if both events occur in
a computation, then $e$ precedes $e'$. In this case we say that $e'$
is in an \emph{asymmetric} conflict with $e$.
Events $e$ and $e'$ are in {\em (symmetric) conflict}, written $e \# e'$, iff $e\nearrow e'$ and $e'\nearrow e$; intuitively, they cannot take place in the same
computation.
\begin{definition}\label{de:aes}
An \emph{Asymmetric Event Structure} ({\nuevamacro{\textsc{aes}}}) is a triple $\mathsf{G} = (E, <, \nearrow)$ where
\begin{enumerate}
\item \label{def:aes-countable}
$E$ is a countable set of \emph{events};
\item \label{def:aes-finite-causes}
$<\ \subseteq E\times E$ is an irreflexive
partial order, called \emph{causality}, defined such that
$\forall e\in E$. $\hist{e} = \setcomp{e'\in E}{e' \leq e}$ is finite; and
\item $\nearrow\ \subseteq E\times E$, called {\em weak causality}, is defined such that for all $e, e'\in E$:
\begin{enumerate}
\item
\label{def:aes-reflect-causality}
$e < e'\ \Rightarrow\ e \nearrow e'$;
\item\label{def:aes-acyclic-causality}
${\nearrow} \cap {(\hist{e}\times \hist{e})}$ is acyclic; and
\item
\label{def:aes-conflict-inh}
if $e \# e'$ and $e' < e''$ then $e \# e''$.
\end{enumerate}
\end{enumerate}
\end{definition}
Each event has a finite set of causes (Condition~\ref{def:aes-finite-causes}).
Moreover, weak causality is consistent with causality:
if $e$ is a cause of $e'$, then $e$ should be also a weak cause of $e'$ (Condition~\ref{def:aes-reflect-causality}); and
there cannot be circular dependencies on the causes of an event (Condition~\ref{def:aes-acyclic-causality}).
Finally, (symmetric) conflicts ($\#$) are required to be inherited along causality (Condition~\ref{def:aes-conflict-inh}).
\begin{example}\label{ex:aes}
\begin{figure}[t]
\begin{subfigure}{.15\textwidth}
\scalebox{0.80}{\input{figures/exaes-aes.tex}}
\caption{\nuevamacro{\textsc{aes}} $\mathsf{G}$}\label{fig:aesuno}
\end{subfigure}\qquad
\begin{subfigure}{.15\textwidth}
\scalebox{0.80}{\input{figures/exaes-aesbis.tex}}
\caption{\nuevamacro{\textsc{aes}} $\mathsf{G}'$}\label{fig:aesdue}
\end{subfigure}\qquad
\begin{subfigure}{.25\textwidth}
\scalebox{0.80}{\input{figures/exaes-confaes.tex}}
\caption{$\Conf{\mathsf{G}}{\nuevamacro{\textsc{aes}}}$}
\label{fig:aesconfuno}
\end{subfigure}\qquad
\begin{subfigure}{.25\textwidth}
\scalebox{0.80}{\input{figures/exaes-confaesbis.tex}}
\caption{$\Conf{\mathsf{G}'}{\nuevamacro{\textsc{aes}}}$}
\label{fig:aesconfdue}
\end{subfigure}
\caption{}
\end{figure}
Consider $\mathsf{G}= (E, <, \nearrow)$ in Fig.~\ref{fig:aesuno} where
$E =\setenum{a,b,c}$, $a < c$, $b < c$ and $a \nearrow b, a \nearrow c, b \nearrow c$.
$E$ is countable~(Condition~\ref{def:aes-countable}) and $<$ is an irreflexive partial order such that
every event has a finite set of causes~(Condition~\ref{def:aes-finite-causes}), as
$\hist{c} = \{a,b,c\}$, $\hist{a} = \setenum{a}$ and $\hist{b} = \setenum{b}$, furthermore
$a < c$ implies $a \nearrow c$ and $b < c$ implies $b \nearrow c$ (Condition~\ref{def:aes-reflect-causality}).
It is immediate to check that
$\nearrow$ is acyclic on $\hist{a}, \hist{b}$ and $\hist{c}$
(Condition~\ref{def:aes-acyclic-causality}).
%
In the \nuevamacro{\textsc{aes}}{} $\mathsf{G}' = (E', <', \nearrow')$ of Fig.~\ref{fig:aesdue}, the causality relation
contains just $a <' c$ and the weak causality is
$a \nearrow' b, b \nearrow' a, a \nearrow' c, b\nearrow' c, c\nearrow' b$.
In this case $\nearrow'$ induces a symmetric conflict among
$a$ and $b$ and one among $b$ and $c$, hence $b \#' a$ and $b \#' c$, and also
the inheritance of conflicts along the causality relation
is verified (Condition \ref{def:aes-conflict-inh}).
\end{example}
\begin{definition}\label{de:aes-conf}
A \emph{configuration} of an \nuevamacro{\textsc{aes}}{} $\mathsf{G} = (E, <, \nearrow)$ is a set $X\subseteq E$ of events
such that
\begin{enumerate}
\item \label{def:conf-aes-well-founded}
$\nearrow$ is well-founded on $X$;
\item \label{def:conf-aes-left-close}
$\forall e\in X.\ \hist{e}\subseteq X$; and
\item \label{def:conf-aes-finite-prec}
$\forall e\in X$ the set $\setcomp{e'\in X}{e'\nearrow e}$ is finite.
\end{enumerate}
The set of configurations of $\mathsf{G}$ is denoted by $\Conf{\mathsf{G}}{\nuevamacro{\textsc{aes}}}$.
\end{definition}
A configuration consists of a set of events representing a possible partial execution of $\mathsf{G}$.
Condition~\ref{def:conf-aes-well-founded} implies that events in $X$ are not in conflict, since there are no circular
weak-causal dependencies.
By Condition~\ref{def:conf-aes-left-close}, $X$ contains all the causes of the events in $X$, i.e.,
an event may occur only if its causes have occurred.
Although a configuration may be infinite (e.g., a non-terminating execution),
each event has a finite set of weak causes (Condition~\ref{def:conf-aes-finite-prec}).
Given $X, Y\subseteq E$ such that $X\subseteq Y$, we say that $Y$ \emph{extends} $X$ if
$\neg (e' \nearrow e)$ for all $e\in X$ and $e'\in Y\setminus X$.
If $X$ and $Y$ are configurations,
then $Y$ can be reached from $X$.
\begin{example}\label{ex:conf-aes}
The set of configurations of the \nuevamacro{\text{\sc aes}es} $\mathsf{G}$ and $\mathsf{G}'$ in \Cref{ex:aes} are
shown in
Figs.~\ref{fig:aesconfuno} and~\ref{fig:aesconfdue}, respectively. The arrows represent
the extension relation defined above.
It is straightforward to check that these configurations satisfy the conditions in Definition~\ref{de:aes-conf}.
\end{example}
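To make the conditions of Definition~\ref{de:aes-conf} concrete, the following is a minimal, self-contained sketch in Python, offered purely as an illustration, with the data of the \nuevamacro{\textsc{aes}}{} $\mathsf{G}$ of Example~\ref{ex:aes} hard-coded; it enumerates the configurations of a finite \nuevamacro{\textsc{aes}}{}. For finite sets of events, well-foundedness of $\nearrow$ reduces to acyclicity, and Condition~\ref{def:conf-aes-finite-prec} holds trivially.
\begin{verbatim}
# A brute-force enumeration of the configurations of the finite AES G
# of the running example; a sketch for illustration only.
from itertools import chain, combinations

E  = {"a", "b", "c"}
lt = {("a", "c"), ("b", "c")}               # causality < (transitively closed)
wc = {("a", "b"), ("a", "c"), ("b", "c")}   # weak causality

def hist(e):
    # causal history [e] = {e' | e' <= e}; lt is already transitively closed
    return {e} | {x for (x, y) in lt if y == e}

def acyclic(rel, X):
    # Kahn-style check: repeatedly remove nodes with no incoming edge left
    nodes = set(X)
    edges = {(x, y) for (x, y) in rel if x in X and y in X}
    changed = True
    while changed:
        changed = False
        for n in list(nodes):
            if all(s not in nodes for (s, t) in edges if t == n):
                nodes.discard(n)
                changed = True
    return not nodes   # all nodes removed iff there was no cycle

def is_configuration(X):
    left_closed = all(hist(e) <= X for e in X)   # Condition 2
    return left_closed and acyclic(wc, X)        # Conditions 1 and 3

subsets = chain.from_iterable(
    combinations(sorted(E), k) for k in range(len(E) + 1))
print([set(S) for S in subsets if is_configuration(set(S))])
# the five configurations of G: {}, {a}, {b}, {a,b}, {a,b,c}
\end{verbatim}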
A morphism between \nuevamacro{\text{\sc aes}es} is a (possibly partial) mapping of events that preserves computations (i.e., configurations).
\begin{definition}\label{de:aes-morphisms}
Let $\mathsf{G}_0 = (E_0, <_0, \nearrow_0)$ and $\mathsf{G}_1 = (E_1, <_1, \nearrow_1)$ be
\nuevamacro{\text{\sc aes}es}. An \nuevamacro{\textsc{aes}}{}-morphism $f : \mathsf{G}_0\rightarrow \mathsf{G}_1$ is a
partial function $f : E_0 \rightarrow E_1$ such that for all $e,e'\in E_0$
\begin{enumerate}
\item \label{def:aes-morphism-preserve}
if $f(e)\neq \bot$ then $\hist{f(e)}\subseteq f(\hist{e})$; and
\item if $f(e)\neq \bot \neq f(e')$ then
\begin{enumerate}
\item \label{def:aes-morphism-reflect}
$f(e)\nearrow_1 f(e')$ implies $e\nearrow_0 e'$; and
\item \label{def:aes-morphism-identify}
$f(e) = f(e')$ and $e\neq e'$ implies $e \#_0 e'$.
\end{enumerate}
\end{enumerate}
\end{definition}
An \nuevamacro{\textsc{aes}}{}-morphism preserves the causes of each mapped event (Condition~\ref{def:aes-morphism-preserve}) and reflects its weak causes (Condition~\ref{def:aes-morphism-reflect}). Moreover,
two different events can be mapped to the same event only when they are in conflict (Condition~\ref{def:aes-morphism-identify}).
The above conditions ensure that morphisms preserve computations (i.e., configurations), as stated below.
\begin{proposition}[{\cite[Lemma 3.6]{BCM01IC}}]\label{pr:aesmorph-conf}
Let $f : \mathsf{G}_0\rightarrow \mathsf{G}_1$ be an \nuevamacro{\textsc{aes}}{}-morphism and $X$ a configuration of $\mathsf{G}_0$, i.e., $X\in \Conf{\mathsf{G}_0}{\nuevamacro{\textsc{aes}}}$.
Then, $f(X)\in \Conf{\mathsf{G}_1}{\nuevamacro{\textsc{aes}}}$.
\end{proposition}
Since \nuevamacro{\textsc{aes}}{}-morphisms compose~\cite{BCM01IC}, \nuevamacro{\text{\sc aes}es} and \nuevamacro{\textsc{aes}}-morphisms form a category, which we denote by
$\mathbf{AES}$.
\endinput
\section{Relating models}\label{sec:adjunctions}
In this section we study the relationship between the categories of (reversible) asymmetric causal nets and (reversible) asymmetric event structures by providing functors and showing
that a suitable adjunction arises.
\input{relatingmodels}
\section{Discussion}\label{sec:disc}
In this paper we complete previous efforts~\cite{PhilippouP22a,PhilippouP22,lics} aimed at relating classes of reversible event structures with
classes of Petri nets: Firstly, we account for the full class of \nuevamacro{r\textsc{aes}es} instead of proper subclasses (\nuevamacro{r\textsc{aes}es} being the
most general reversible event structures considered in the literature). Secondly,
the correspondence is established according to the standard technique of exhibiting a coreflection between suitable categories.
Besides the theoretical relevance of establishing a correspondence between these two different models, such
connection may be exploited in concrete scenarios.
One of the successful applications of reversibility is causal-consistent
\emph{reversible debugging}~\cite{GiachinoLM14,Lanese0PV18}.
Reversible debuggers extend classical ones with the possibility of going back in the execution of a program, so as to find the source of a bug in an effective way.
So far this technique has been applied to message passing concurrent systems (e.g., actor-like languages), but not to shared-memory based concurrency.
Consider the following code snippet, consisting of a two-threaded program that accesses dynamically allocated memory.
\input{exampleC}
The behaviour of the program can be thought of in terms of events. Take $a$ as the event corresponding to the initialisation of $x$ at line $13$, $b$ for the instruction at line $8$ and $c$ for the one at line $17$.
It is clear that both $b$ and $c$ causally depend on $a$ ($a < b$ and $a<c$); while $c$ can happen after $b$, $b$ cannot happen after $c$, that is $b \nearrow c$.
Moreover, the reversal of a complete execution of the program should ensure that $c$ is reversed (i.e., the memory is allocated) before $b$ is reversed, hence $\underline{b} \triangleleft c$.
Consider instead a version of the program in which $c$ is executed outside a critical section (e.g., without acquiring and releasing the lock).
In this case, the execution may raise a \textit{segmentation fault} error. When debugging such a faulty execution, the programmer would observe that the execution violates
$b \nearrow c$ because $b$ happened after $c$.
On the one hand, an execution can be visualised in terms of events, i.e., the programmer can be provided with a high-level description of the current state of the system (a configuration of the event structure) along with the relevant dependencies. On the other hand, the instrumented execution of the program and of its reversal can be handled by the underlying operational model (i.e., a reversible causal net).
Also, one could think of the undoing of an event as a backward breakpoint. That is, one could trigger the undoing of an event from the net and then the debugger will execute all the necessary backward steps in the code, to undo such event.
The seamless integration of \nuevamacro{r\textsc{aes}} and \nuevamacro{r\text{\sc acn}} with causal-consistent reversible debuggers
can be a nice exploitation of our results, which we will consider in future work.
\section{Introduction}
Reversible models of concurrent computation~\cite{revbook} have gained momentum in recent years, as witnessed by the variety of models available today: RCCS~\cite{rccs}, CCSK~\cite{ccsk}, rho$\pi$~\cite{rhopi}, R$\pi$~\cite{rpi}, reversible Petri nets~\cite{revpt}, reversible event structures~\cite{rpes}, to name a few.
As expected, the attention has turned to the question of how these models relate to each other (see e.g.~\cite{LaneseMM21,MedicMPY20,lics}).
This work addresses this goal by revisiting, in the context of reversibility, the connection between
{\em Event Structures} (\nuevamacro{\text{\sc es}es})~\cite{Win:ES} and {\em Petri Nets} (\nuevamacro{\text{\sc pn}s}) established by Winskel~\cite{NielsenPW79}.
On the event structure side, we focus on the \emph{reversible Asymmetric Event Structures} (\nuevamacro{r\textsc{aes}es}) introduced in~\cite{rpes}, which are a
reversible counterpart of
\emph{Asymmetric Event Structures} (\nuevamacro{\text{\sc aes}es})~\cite{BCM01IC}, which in turn are a
generalisation of {\em Prime Event Structures} ({\pes{}es}\xspace)~\cite{NielsenPW79}.
A \nuevamacro{\textsc{pes}} describes a computational process as a set of events whose occurrence is constrained by two relations: \emph{causality} and (symmetric) \emph{conflicts}.
A simple \nuevamacro{\textsc{pes}} is depicted in Fig.~\ref{fig:exintro-es}, where
causality ($<$) is drawn with straight lines (to be read from bottom
to top) and conflicts ($\#$) with curly lines. In
this case, $b$ causally depends on $a$ (i.e., $a < b$) meaning that $b$ cannot occur if $a$ does not occur first; additionally, $b$ and
$c$ are in conflict (i.e., $b \# c$) meaning that $b$ and $c$ are mutually exclusive and cannot occur in the same execution of the process.
The behaviour of a \nuevamacro{\textsc{pes}} can be understood in
terms of a transition system defined over {\em configurations} (i.e.,
sets of events), as illustrated in Fig.~\ref{fig:exintro-conf}. For
instance, the transition $\emptyset \rightarrow \{a,c\}$ indicates
that the initial state $\emptyset$ (i.e., no event has been executed
yet) may evolve to $\{a,c\}$ by concurrently executing $a$
and $c$.
Neither $\{b\}$ nor $\{a,b,c\}$ are configurations because
$b$ cannot occur without $a$; and
$b$ and $c$ cannot happen in the same run.
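To make the notion of configuration concrete, the following Python sketch (an illustration of ours, not part of the formal development) enumerates the configurations of $\mathsf{P}$ by brute force: a set of events qualifies exactly when it is downward closed under causality and conflict-free.
\begin{verbatim}
# Sketch: configurations of the PES P (a < b, b # c) by brute force.
from itertools import combinations

events   = ['a', 'b', 'c']
causes   = {('a', 'b')}    # a < b
conflict = {('b', 'c')}    # b # c (symmetric)

def is_configuration(X):
    closed = all(c in X for (c, e) in causes if e in X)
    free   = all(not (e1 in X and e2 in X) for (e1, e2) in conflict)
    return closed and free

confs = [set(X) for r in range(len(events) + 1)
                for X in combinations(events, r)
                if is_configuration(set(X))]
print(confs)   # [set(), {'a'}, {'c'}, {'a', 'b'}, {'a', 'c'}]
\end{verbatim}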
\begin{figure}[t]
\begin{subfigure}{.15\textwidth}
\vspace*{.1cm}
\scalebox{0.70}{\input{figures/exintro-es.tex}}
\caption{\nuevamacro{\textsc{pes}} $\mathsf{P}$}\label{fig:exintro-es}
\end{subfigure}
\begin{subfigure}{.35\textwidth}
\vspace*{.1cm}
\centerline{\scalebox{0.70}{\input{figures/exintro-conf.tex}}}
\caption{Transition system of $\mathsf{P}$}\label{fig:exintro-conf}
\end{subfigure}
\quad
\begin{subfigure}{.18\textwidth}
\centerline{\scalebox{0.70}{\input{figures/exintro-aes-es.tex}}}
\caption{\nuevamacro{\textsc{aes}} $\mathsf{G}$}\label{fig:exintro-aes-es}
\end{subfigure}
\begin{subfigure}{.18\textwidth}
\centerline{\scalebox{0.70}{\input{figures/exintro-aes.tex}}}
\caption{\nuevamacro{\textsc{aes}} $\mathsf{G}'$}\label{fig:exintro-aes}
\end{subfigure}
\vspace*{1cm}
\begin{subfigure}{.35\textwidth}
\vspace*{.1cm}
\centerline{\scalebox{0.70}{\input{figures/exintro-conf-aes.tex}}}
\caption{Transition system of $\mathsf{G}'$}\label{fig:exintro-aesconf}
\end{subfigure}
\quad
\begin{subfigure}{.20\textwidth}
\centerline{\scalebox{0.70}{\input{figures/exintro-raes.tex}}}
\caption{\nuevamacro{r\textsc{aes}}{} $\mathsf{H}$}\label{fig:exintro-raes}
\end{subfigure}
\quad
\begin{subfigure}{.40\textwidth}
\vspace*{-.3cm}
\centerline{\scalebox{0.70}{\input{figures/exintro-rconf.tex}}}
\caption{Transition system of ${\mathsf{H}}$}\label{fig:exintro-rconf-raes}
\end{subfigure}
\caption{\nuevamacro{\textsc{pes}}{}, \nuevamacro{\textsc{aes}}{}, \nuevamacro{r\textsc{aes}}{} and their transition systems.}
\end{figure}
In order to accommodate asymmetries that may arise, e.g., in shared-memory concurrency,
\nuevamacro{\text{\sc aes}es} relax the notion of conflicts by considering instead \emph{weak causality}.
Intuitively, an event $e$ weakly causes the event $e'$
(written $e \nearrow e'$) if $e'$ can happen after $e$ but $e$ cannot happen after $e'$.
This can be considered as an asymmetric conflict
because $e'$ forbids $e$ to take place, but not the other way round.
However, symmetric conflicts can be recovered by making a pair of conflicting events weakly cause each other.
For instance, the \nuevamacro{\textsc{pes}} $\mathsf{P}$ in Fig.~\ref{fig:exintro-es} can be rendered as the \nuevamacro{\textsc{aes}} $\mathsf{G}$ in Fig.~\ref{fig:exintro-aes-es}, where weak causality is depicted with
red, dashed arrows. Now the conflict between $b$ and $c$ is represented as $b \nearrow c$ and $c \nearrow b$.
Unsurprisingly, the transition system associated with $\mathsf{G}$ coincides with that of $\mathsf{P}$ in Fig.~\ref{fig:exintro-conf}.
Differently, the \nuevamacro{\textsc{aes}} $\mathsf{G}'$ relaxes the conflict between $b$ and $c$ by making it asymmetric: we keep
$b \nearrow c$ but drop $c \nearrow b$. Hence, $c$ can be added
to the configuration $\setenum{a, b}$ but
$b$ cannot be added to $\setenum{a, c}$, as rendered in the transition system in
Fig.~\ref{fig:exintro-aesconf}.
A model of reversible computation embodies two different flows of computation, i.e.,
in addition to the description of the standard {\em forward} execution, there is a {\em backward} flow that
expresses the way in which the effects of the forward computation can be undone.
In \nuevamacro{r\textsc{aes}es},
the backward flow is described in terms of a set of reversing
events, each of them representing the undoing of some event of the forward flow.
For the \nuevamacro{\textsc{aes}} $\mathsf{G'}$ in Fig.~\ref{fig:exintro-aes}, with $\{\underline a, \underline b\}$
we indicate that $a$ and $b$ are reversible, while $c$ is not.
Two relations, dubbed
\emph{reverse causation} ($\prec$) and \emph{prevention} ($\triangleleft$), describe the backward flow and
regulate the
way in which reversing events occur:
$\prec$ prescribes the events required for the undoing
while $\triangleleft$ stipulates those that preclude it.
The \nuevamacro{r\textsc{aes}} $\mathsf{H}$ in Fig.~\ref{fig:exintro-raes} consists of the
forward flow defined by $\mathsf{G'}$ extended with a backward flow represented
by blue arrows: solid arrows correspond to reverse causation and dashed ones to prevention.
Then,
{$a \prec \underline{a}$} says that {$\underline{a}$} can be executed
(meaning $a$ can be undone) only when $a$ has occurred (the pair {$b \prec \underline{b}$} is analogous).
The prevention arc
{$\underline{a}\triangleleft c$} states that {$a$} can be
reversed only if {$c$} has not occurred.
The transition system associated with $\mathsf{H}$ is in Fig.~\ref{fig:exintro-rconf-raes}: transitions corresponding to the forward flow (i.e., the ones in black mimicking those in Fig.~\ref{fig:exintro-aesconf}) add events to configurations; on the contrary, reversing transitions (in blue) remove events from configurations.
For instance, the transition $\{a,b\} \rightarrow \{a\}$ accounts for the fact that $b$ is reversible and can always be undone.
Note that $a$ can be reversed both in $\{a\}$ and $\{a,b\}$ but not in $\{a,c\}$ since the
prevention relation (i.e., $\underline{a}\triangleleft c$) forbids $a$ to be reversed if $c$ has already occurred.
Interestingly, $a$ can be reversed in $\{a,b\}$ leading to the configuration $\{b\}$, which is not reachable by the
forward flow (black arrows). This is known as out-of-causal order reversibility \cite{PhillipsUY13}.
\begin{figure*}[bt]
\begin{subfigure}{.25\textwidth}
\centerline{\scalebox{0.70}{\input{figures/exintroabis.tex}}}
\centerline{\textcolor{black}{$a < b$} and $b \# c$}
\caption{$N_P$\label{ex:introa}}
\end{subfigure}\qquad
\begin{subfigure}{.25\textwidth}
\centerline{\scalebox{0.70}{\input{figures/exintrobbis.tex}}}
\centerline{\textcolor{red}{$a < b$}, \textcolor{red}{$b\nearrow c$} and \textcolor{red}{$c\nearrow b$}}
\caption{$N_R$\label{ex:introb}}
\end{subfigure}
\begin{subfigure}{.48\textwidth}
\centerline{\scalebox{0.70}{\input{figures/exintroc.tex}}}
\centerline{\textcolor{red}{$\underline a \triangleleft c$}, \textcolor{red}{$\un a\prec a$} and \textcolor{red}{$\un b\prec b$}}
\caption{$N'_R$\label{ex:introc}}
\end{subfigure}
\caption{Occurrence net $N_P$ and (Reversible) causal nets $N_R$ and $N'_R$}
\label{fig:exintro}
\end{figure*}
Since their introduction~\cite{Win:ES}, \nuevamacro{\text{\sc es}es} have played a central role in the development of
denotational semantics for concurrency models; in particular, for \nuevamacro{\text{\sc pn}s}.
It is well-known that different classes of \nuevamacro{\text{\sc es}es} correspond to different classes of \nuevamacro{\text{\sc pn}s}~\cite{Win:ES,BCM01IC,flowEvent}. Moreover, different relations in \nuevamacro{\text{\sc es}es} translate into different operational mechanisms of \nuevamacro{\text{\sc pn}s}.
Causality and conflicts are typically modelled in nets via places \emph{shared} among transitions.
However, shared places fall short when translating other kinds of dependencies, such as weak causality, which
require contextual arcs~\cite{BCM01IC}.
Reversible \nuevamacro{\text{\sc es}es} have introduced further questions about the required features of their operational counterpart, suggesting that shared places are not always a suitable choice~\cite{PhilippouP22a,PhilippouP22,lics}.
It has been recently shown that the operational model behind (reversible) {\pes{}es}\xspace can be recovered as a subclass
of contextual Petri nets, called {\em (reversible) Causal Nets} ({\rcn}s\xspace), in which causality is modelled via inhibitor arcs~\cite{lics}
rather than relying on a shared place in which one transition produces a token to be consumed
by another one. Inhibitor arcs neither produce nor consume tokens but check for the \emph{absence} of them.
This idea is rendered by the nets in Fig.~\ref{fig:exintro}.
We recall that a \nuevamacro{\text{\sc pn}} gives an operational description of a computation in terms of {\em transitions} (depicted as boxes)
that consume and produce {\em tokens} (bullets) in different {\em places} (circles). According
to the black arrows in Fig.~\ref{ex:introa}, the transition $a$ consumes a token from $s_1$ producing a token in $s_4$; similarly, $b$ consumes from $s_4$, $s_2$ and $s_3$ producing in $s_5$. Note that $b$ and $c$ are in mutual exclusion (i.e., conflict) because
they both compete for the shared resource (i.e., token) in $s_3$.
The arc connecting $s_4$ to $b$ indicates that $b$ cannot be fired if $s_4$ does not contain any token; consequently, $b$ can happen
only after $a$ has produced the token in $s_4$. The causal relation between $a$ and $b$ arises because
of $s_4$.
$N_P$ in Fig.~\ref{ex:introa} is the \emph{classical} operational counterpart of the \nuevamacro{\text{\sc es}} $\mathsf{P}$ in Fig.~\ref{fig:exintro-es}.
However, we note that both the causalities and conflicts in $N_P$ can be alternatively represented by using inhibitor arcs instead of shared places,
as shown in Fig.~\ref{ex:introb}. The inhibitor arc (depicted as a red line ending with a circle) between $s_1$ and $b$ models $a < b$, whereas the inhibitor
arcs $(s_5,c)$ and $(s_6,b)$ represent the symmetric conflict between $b$ and $c$.
The net in Fig.~\ref{ex:introb} can be made reversible by adding a reversing transition for each
reversible event, as shown in Fig.~\ref{ex:introc} (in gray).
The added transitions $\underline a$ and
$\underline b$ respectively reverse the effects of $a$ and $b$: each of them consumes (produces) the tokens produced (resp., consumed) by the associated forward transition. Inhibitor arcs are also used for modelling reverse causation and prevention as, e.g., the inhibitor arc connecting $\underline a$ with $s_6$ stands for $\underline a\triangleleft c$.
In this paper we generalise the ideas presented in~\cite{lics}
so as to be able to deal with \nuevamacro{r\textsc{aes}es} (and not just with the subclass of reversible {\pes{}es}\xspace).
This is achieved by using inhibitor arcs
to represent not only causal dependencies but also (symmetric) conflicts.
In this way,
all the dependencies between transitions are modelled \emph{uniformly} by
using inhibitor arcs.
Concretely, we identify a subclass of r\textsc{cn}{es}, dubbed {\em reversible Asymmetric Causal Nets} (\nuevamacro{r\text{\sc acn}es}),
which are the operational counterpart of \nuevamacro{r\textsc{aes}es}.
We show that the correspondence is tight by following the long tradition of comparing concurrency models in
categorical terms \cite{NielsenPW79,Win:ES,Win:PNAMC}.
To do so we first turn \nuevamacro{r\textsc{aes}es} and \nuevamacro{r\text{\sc acn}es} into categories by providing suitable notions of morphisms,
then we introduce two functors relating these categories, and finally we show that the functor
that associates \nuevamacro{r\textsc{aes}es} to \nuevamacro{r\text{\sc acn}es} is the left adjoint of
the one that gives \nuevamacro{r\text{\sc acn}es} out of \nuevamacro{r\textsc{aes}es}.
Besides establishing a correspondence between \nuevamacro{r\textsc{aes}es} and \nuevamacro{r\text{\sc acn}es}, this allows us to
reinterpret the results in~\cite{lics} categorically.
\subsection{Nets with inhibitor arcs}\label{sec:inet}
We summarise the basics of Petri nets with inhibitor arcs along the
lines of \cite{MR:CN,BBCP:rivista}.
We write $\ensuremath{\mathbb{N}}$ for the set of natural numbers.
%
A \emph{multiset} over a set $A$ is a function $m : A
\rightarrow \ensuremath{\mathbb{N}}$.
%
We assume the usual operations of
union ($+$) and difference ($-$) on multisets, and
write $m \subseteq m'$ if $m(a) \leq m'(a)$ for all $a \in A$.
%
We shall write $\flt{m}$ for the underlying set of a multiset $m$, i.e., the
multiset defined such that $\flt{m}(a)
= 1$ if $m(a) > 0$ and $\flt{m}(a) = 0$ otherwise.
%
We often confuse a multiset $m$ with the set
$\setcomp{a\in A}{m(a) \neq 0}$ when $m = \flt{m}$.
In such cases, we write $a\in m$ instead of $m(a) \neq 0$,
and $m\subseteq A$ if $m(a) = 1$ implies $a\in A$.
%
Furthermore, we will use standard operations on sets, such as $\cap$, $\cup$ or
$\setminus$.
We write $\mu A$ for the set of all multisets over $A$ and
$\ensuremath{\mathsf{0}}$ for the unique multiset over $A$ such that $\flt{\ensuremath{\mathsf{0}}} = \emptyset$.
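For readers who prefer executable notation, these operations can be rendered with Python's \texttt{collections.Counter}; this is a sketch of ours, where the truncated-at-zero difference of \texttt{Counter} matches $m - m'$ whenever $m' \subseteq m$.
\begin{verbatim}
# Sketch: finite multisets over A as collections.Counter.
from collections import Counter

m1, m2 = Counter({'a': 2, 'b': 1}), Counter({'a': 1})

union      = m1 + m2                          # pointwise sum (+)
difference = m1 - m2                          # pointwise difference (-)
included   = all(m2[x] <= m1[x] for x in m2)  # m2 included in m1
flat       = set(x for x in m1 if m1[x] > 0)  # underlying set of m1
zero       = Counter()                        # the empty multiset 0
\end{verbatim}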
\begin{definition}\label{de:contextualnet}
A \emph{Petri net with inhibitor arcs} ({\sc{ipt}}\xspace for short) is a tuple
$N = \langle S, T, F, I, \mathsf{m}\rangle$, where
$S$ is a set of \emph{places}, $T$ is a set
of \emph{transitions} such that $S \cap T = \emptyset$, $F \subseteq
(S\times T)\cup (T\times S)$ is the \emph{flow} relation, $I \subseteq S\times T$
is the \emph{inhibiting} relation, and
$\mathsf{m}\in \mu S$ is the \emph{initial marking}.
\end{definition}
Given an {\sc{ipt}}\xspace $N = \langle S, T, F, I, \mathsf{m}\rangle$ and $x\in
S\cup T$, the {\em pre-} and {\em postset} of $x$ are respectively
defined as the (multi)sets $\pre{x} = \setcomp{y}{(y,x)\in F}$ and
$\post{x} = \setcomp{y}{(x,y)\in F}$.
If $x\in S$ then $\pre{x} \in \mu T$ and $\post{x} \in \mu T$;
analogously, if $x\in T$ then $\pre{x}\in\mu S$ and $\post{x} \in \mu
S$.
The {\em inhibitor set} of a transition $t$ is the (multi)set
$\inib{t} = \setcomp{s}{(s,t)\in I}$.
The definitions of $\pre{\cdot}, \post{\cdot}, \inib{\cdot}$ generalise
straightforwardly to multisets of transitions.
\begin{figure}[t]
\begin{subfigure}{.23\textwidth}
\scalebox{0.70}{\input{examples/example-cn-a}}
\caption{$N_1$}
\label{fig:pcn-a}
\end{subfigure}
\qquad
\begin{subfigure}{.18\textwidth}
\scalebox{0.70}{\input{examples/example-cn-b}}
\caption{$N_2$}
\label{fig:pcn-b}
\end{subfigure}
\qquad
\begin{subfigure}{.2\textwidth}
\scalebox{0.70}{\input{examples/example-cn-c}}
\caption{$N_3$}
\label{fig:pcn-c}
\end{subfigure}
\quad
\begin{subfigure}{.2\textwidth}
\scalebox{0.70}{\input{examples/example-cn-d}}
\caption{$N_4$}
\label{fig:pcn-d}
\end{subfigure}
\caption{}
\label{fig:pcn}
\end{figure}
\begin{example}\label{ex:cont-net1}
Fig.~\ref{fig:pcn} introduces some \nuevamacro{\text{\sc ipt}s}.
The {\sc{ipt}}\xspace $N_1$ in Fig.~\ref{fig:pcn-a} has six places (named $s_i$) and three transitions $a$, $b$, and $c$; the initial marking is
$ \mathsf{m} = \setenum{s_1, s_2, s_3}$. For instance, the transition $b$ consumes a token from $s_2$ and produces a token in $s_5$ and it is inhibited by $s_1$, i.e., its pre-, post-, and inhibiting sets are respectively $\pre b = \setenum{s_2}$, $\post b = \setenum{s_5}$
and $\inib b = \setenum{s_1}$.
\end{example}
A (multiset of) transition(s) $A\in \mu T$ is {\em enabled at a marking}
$m\in \mu S$, written $m\trans{A}$, if ${\pre{A}}
\subseteq m$, $\inib{A} \cap\ \flt{m} = \emptyset$ and
$\forall t\in \flt{A}.\ \inib{t}\cap \post{(A-\setenum{t})} = \emptyset$.
Intuitively, $A$ is enabled at $m$ if $m$ contains the tokens to be consumed by
$A$ (${\pre{A}} \subseteq m$) and none of the transitions in $A$ is inhibited in $m$ ($\inib{A} \cap\ \flt{m} = \emptyset$); the last condition avoids cases in which a transition in $A$ produces
tokens that inhibit other transitions in $A$. Observe
that the multiset $\ensuremath{\mathsf{0}}$ is enabled at every marking.
If $A$ is enabled at $m$ then it can
\emph{fire} and its firing produces the marking $m' = m - \pre{A} +
\post{A}$, which is written $m\trans{A}m'$.
Hereafter, we assume that each transition $t$ is defined such
that $\pre{t}\neq\emptyset$, i.e., it cannot fire \emph{spontaneously}
without consuming tokens.
A marking $m$ is \emph{reachable} if there exists a sequence of firings $m_i\trans{A_i}m_{i+1}$ originating from the
initial marking and leading to $m$;
$\reachMark{N}$ stands for the set of reachable markings of $N$.
An {\sc{ipt}}\xspace $N$ is {\em safe} if every reachable marking
is a set, i.e., $\forall m\in \reachMark{N}.m = \flt{m}$.
From now on, we will only consider safe \nuevamacro{\text{\sc ipt}s}.
\begin{example}
Consider $N_4$ in Fig.~\ref{fig:pcn-d}. Both $b$ and $c$ are enabled
at marking $m = \setenum{s_1, s_2, s_3}$. On the contrary, $a$ is not enabled because it is
inhibited by the token in the place $s_2$. The firing
of $b$ on $m$ produces the marking $m' = \setenum{s_1, s_3, s_5}$, i.e., $m\trans{b} m'$. The transition $a$ is enabled at $m'$, while $c$ is disabled and cannot be fired because of the token in $s_5$. The firing of $a$ on $m'$ produces $m'' = \setenum{s_3, s_4, s_5}$.
The reachable markings of $N_4$
are $\setenum{s_1, s_2, s_3}$, $\setenum{s_1, s_3, s_5}$, $\setenum{s_1, s_2, s_6}$ and $\setenum{s_3, s_4, s_5}$.
\end{example}
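The firing rule and the computation of reachable markings are easily mechanised. The following Python sketch is ours and encodes a hypothetical reconstruction of $N_4$, with pre-, post- and inhibiting sets chosen to be consistent with the example above (the actual figure may differ in inessential details); it restricts attention to single-transition steps, for which the third clause of the enabling condition is trivially satisfied.
\begin{verbatim}
# Sketch: single-transition firing in a safe IPT and its reachable
# markings, on an assumed reconstruction of N_4.
pre  = {'a': {'s1'}, 'b': {'s2'}, 'c': {'s3'}}
post = {'a': {'s4'}, 'b': {'s5'}, 'c': {'s6'}}
inib = {'a': {'s2', 's6'}, 'b': {'s6'}, 'c': {'s4', 's5'}}
m0   = frozenset({'s1', 's2', 's3'})

def enabled(t, m):        # pre(t) included in m, inib(t) disjoint from m
    return pre[t] <= m and not (inib[t] & m)

def fire(t, m):           # m' = m - pre(t) + post(t)
    return frozenset((m - pre[t]) | post[t])

def reachable(m0):
    seen, todo = {m0}, [m0]
    while todo:
        m = todo.pop()
        for t in pre:
            if enabled(t, m):
                m1 = fire(t, m)
                if m1 not in seen:
                    seen.add(m1); todo.append(m1)
    return seen

print(sorted(sorted(m) for m in reachable(m0)))
# [['s1','s2','s3'], ['s1','s2','s6'], ['s1','s3','s5'], ['s3','s4','s5']]
\end{verbatim}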
\section{Appendix}
\input{appendix}
\end{document}
\subsection{Configurations of \nuevamacro{r\text{\sc acn}}{es}}
The notion of configuration changes since we cannot resort to the causality relations (either the \emph{forward}
or the \emph{reverse} one) and the prevention relation; hence we simply require that each configuration
is reachable via a firing sequence.
\begin{definition}\label{de:prcn-configuration}
Let $\arcn{V}{\bwdset{}} = \langle S, T, F, I, \mathsf{m}\rangle$ be a \nuevamacro{r\text{\sc acn}}{}. A \emph{configuration}
of $\arcn{V}{\bwdset{}}$ is any subset $X\subseteq \fwdset{}$ of forward transitions such that
there exists a firing sequence $\mathsf{m}\trans{A_1}\dots m'$ such that $X = \pre{m'}\cap \fwdset{}$.
\end{definition}
\begin{example}
Consider the net $V_2$ in Fig.~\ref{fig:rcn-b}; one of its configurations is $\setenum{a, c}$, which is
obtained by executing first $b$, followed by $a$, then $c$, and finally undoing $b$ (executing $\un{b}$).
%
This configuration is reachable only by undoing $b$: since $b$ is a cause of $c$, $b$ must be
present in order to execute $c$.
\end{example}
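Under an encoding of nets like the one used in the earlier \nuevamacro{\text{\sc ipt}}{} sketch, Definition~\ref{de:prcn-configuration} becomes a one-liner over reachable markings: in a (p)\nuevamacro{\text{\sc acn}}{} every place is in the postset of at most one transition, so a marked output place identifies the forward transition that produced it. A sketch of ours:
\begin{verbatim}
# Sketch: configurations of a rACN as X = pre(m') restricted to the
# forward transitions, ranging over the reachable markings.
def configurations(markings, post, forward):
    return {frozenset(t for t in forward if post[t] <= m)
            for m in markings}
\end{verbatim}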
\subsubsection{Configurations and morphisms:}
We end this part by observing that morphisms preserve configurations.
\begin{proposition}
Let $(f_S,f_T) : \arcn{V}{\bwdset{0}} \rightarrow \arcn{V}{\bwdset{1}}$ be a \nuevamacro{r\text{\sc acn}}-morphism. Assume that $X\in\Conf{\arcn{V}{\bwdset{0}}}{\nuevamacro{r\text{\sc acn}}}$, then
$f_T(X)\in\Conf{\arcn{V}{\bwdset{1}}}{\nuevamacro{r\text{\sc acn}}}$.
\end{proposition}
\ifreport
\begin{proof}
It is enough to observe that configurations in \nuevamacro{r\text{\sc acn}}{} are characterised via reachable markings and
\nuevamacro{r\text{\sc acn}}-morphisms preserve them.
\end{proof}
\fi
\section{Reversible Asymmetric Causal Nets}\label{sec:rcn}
We now introduce Asymmetric Causal Nets as a subclass of nets with inhibitor arcs
in which standard notions of causality and conflicts among transitions are modelled
via inhibitor arcs.
We first recall the notion of nets with inhibitor arcs, and then introduce asymmetric causal nets and their reversible
versions. We develop suitable notions of morphisms,
which turn asymmetric causal nets and
reversible asymmetric causal nets into categories.
\input{ipt}
\input{revised-causal-nets-defs}
\ifreport
\input{acn-configuration}
\fi
\input{revised-morphisms}
\ifreport
\input{acn-morph-configuration}
\fi
\input{revised-rev-causal-nets-defs}
\ifreport
\input{racn-configurations}
\fi
\input{revised-rev-morphisms}
\ifreport
\input{racn-morph-configuration}
\fi
\section{Reversible Asymmetric Event Structures}\label{sec:raes}
In this section we recall the basics of {\em Asymmetric Event Structures} (\nuevamacro{\text{\sc aes}es})~\cite{BCM01IC}
and their reversible version introduced in~\cite{rpes,GPY:categories}.
\input{asymmes}
\input{revasymmes}
\subsection{Reversible {\nuevamacro{\text{\sc aes}es}}}
We now summarise \emph{reversible} \nuevamacro{\text{\sc aes}es}, which were introduced in~\cite{rpes,GPY:categories}
as the reversible counterpart of \nuevamacro{\text{\sc aes}es}.
Given a set $U$ of events and $u\in U$, we write $\un{u}$ for
the undoing of $u$, and $\un{U} = \setcomp{\un{u}}{u\in U}$
for the set of \emph{undoings} of $U$.
\begin{definition}\label{de:raes}
A \emph{Reversible Asymmetric Event Structure} (\nuevamacro{r\textsc{aes}}) is a sextuple
$\mathsf{H} = (E, U, <, \nearrow, \prec, \lhd)$
where $E$ is the set of events, $U \subseteq E$ is the set of \emph{reversible} events, and
\begin{enumerate}
\item\label{cond:2} $\nearrow\ \subseteq E \times E$, called {\em weak causality};
\item\label{cond:2bis} $\lhd \subseteq \un{U} \times E$, called
\emph{prevention};
\item\label{cond:3} $<\ \subseteq E \times E$, called {\em causation}, is an irreflexive relation defined
such that for all $e\in E$, $\histtwo{e}{<} = \setcomp{e'\in E}{e' \leq e}$ is finite and
($\nearrow\cup <$) is acyclic on $\histtwo{e}{<}$;
\item $\prec\ \subseteq E \times \un{U}$, called {\em reverse causation}, is defined such that
\begin{enumerate}
\item \label{cond:4} $\forall u\in U.\ u \prec \un{u}$;
\item \label{cond:3bis} for all $u\in U$, $\histtwo{u}{\prec} = \setcomp{e\in E}{e \prec \un{u}}$ is
finite and ($\nearrow\cup <$) is acyclic on $\histtwo{u}{\prec}$;
\end{enumerate}
\item\label{cond:5b} for all
$e\in E, \un{u}\in \un{U}.$ $e\prec \un{u}\ \Rightarrow \neg(\un{u}\lhd e)$; and
\item\label{cond:6}
$(E,\ensuremath{\prec\!\!\prec}, \nearrow)$ with $ {\ensuremath{\prec\!\!\prec}} = {<} \cap {\{(e,e')\ |\ e\not\inU \textit{ or } \un{e}\lhd e'\}}$ is
an \nuevamacro{\textsc{aes}}.
\end{enumerate}
\end{definition}
An \nuevamacro{r\textsc{aes}} is defined in terms of a set of events $E$; the ones in $U$ are reversible.
Causation ($<$) and weak causality ($\nearrow$) specify the forward flow, while reverse causation ($\prec$, drawn as solid blue arrows)
and prevention ($\lhd$, as dashed blue arrows) describe the backward flow. Weak causality plays the same role as in \nuevamacro{\text{\sc aes}es}:
$e\nearrow e'$ states that $e$ cannot occur after $e'$. Prevention constrains instead the undoing of events:
$\un{e}\lhd e'$ indicates that $e$ cannot be undone if $e'$ has occurred. Causation (like causality in \nuevamacro{\text{\sc aes}es}) indicates causal dependencies.
As in \nuevamacro{\text{\sc aes}es}, every event has
a finite set of causes $\histtwo{e}{<}$, which does not contain cyclic dependencies according to causation and weak causality (Condition~\ref{cond:3}).
Acyclicity on $\histtwo{e}{<}$ implies that for all $e, e'\in E$, $e < e'\ \Rightarrow \neg(e'\nearrow e)$.
Reverse causation specifies the causes for the undoing of each event: $e \prec \un{e'}$ states that $e'$ can be undone only if $e$ has occurred. Hence, Condition~\ref{cond:4} simply states that an event $u$ can be undone only if it has occurred. Condition~\ref{cond:3bis}, which is analogous to Condition~\ref{cond:3}, establishes that each undoing has a finite set of causes.
Causation and weak causality have to be consistent, i.e., an event cannot have precedence over some of its causes. Condition~\ref{cond:5b} states an analogous requirement for the backward flow.
The last condition is the most subtle one. Note that the definition of \nuevamacro{r\textsc{aes}} does not require $(E, <, \nearrow)$ to be an \nuevamacro{\textsc{aes}},
in particular, because conflicts may not be inherited along causation.
This is essential
for accommodating out-of-causal order reversibility (see the example below).
The definition instead considers the \emph{sustained causation} relation $\ensuremath{\prec\!\!\prec}$, which is obtained
by removing from causation all those pairs $e<e'$ where $e$ can be undone even when $e'$ has occurred (i.e.,
$\un{e}\lhd e'$ does not hold).
It should be noted that $\ensuremath{\prec\!\!\prec}$ coincides with $<$ when $U = \emptyset$; hence $(E, \emptyset, <, \nearrow, \emptyset, \emptyset)$ is indeed an \nuevamacro{\textsc{aes}}.
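Computationally, sustained causation is a simple filter on causation. A sketch of ours, with each relation encoded as a set of pairs and prevention given as pairs $(u, e)$ standing for $\un{u}\lhd e$:
\begin{verbatim}
# Sketch: sustained causation from causation lt, reversible events U and
# prevention prev (pairs (u, e) meaning undo(u) is prevented by e).
def sustained(lt, U, prev):
    return {(e, e1) for (e, e1) in lt
            if e not in U or (e, e1) in prev}

# e.g. sustained({('a','c'), ('b','c')}, {'b','c'}, set()) == {('a','c')}
\end{verbatim}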
\begin{figure}[t]
\begin{subfigure}{.15\textwidth}
\scalebox{0.80}{\input{figures/exaes-raes.tex}}
\caption{\nuevamacro{r\textsc{aes}}{} $\mathsf{H}$}\label{fig:raesuno}
\end{subfigure}\qquad
\begin{subfigure}{.15\textwidth}
\scalebox{0.80}{\input{figures/exaes-raesbis.tex}}
\caption{\nuevamacro{r\textsc{aes}}{} $\mathsf{H}'$}\label{fig:raesdue}
\end{subfigure}\qquad
\begin{subfigure}{.25\textwidth}
\scalebox{0.80}{\input{figures/exaes-confraes.tex}}
\caption{$\Conf{\mathsf{H}}{\nuevamacro{r\textsc{aes}}}$}
\label{fig:raesconfuno}
\end{subfigure}\quad
\begin{subfigure}{.25\textwidth}
\scalebox{0.80}{\input{figures/exaes-confraesbis.tex}}
\caption{$\Conf{\mathsf{H}'}{\nuevamacro{r\textsc{aes}}}$}
\label{fig:raesconfdue}
\end{subfigure}
\caption{}
\end{figure}
\begin{example}\label{ex:raes}
Consider $\mathsf{H} = (E, U, <, \nearrow, \prec, \lhd)$ shown in Fig.~\ref{fig:raesuno}.
The (forward) events are $\setenum{a,b,c}$ where $b$ is the only reversible one ($U =\setenum{b}$).
\emph{Causation} is defined such that $a < c$ and $b < c$, and weak causality states
$a \nearrow c, b \nearrow c, b \nearrow a$. \emph{Reverse causation} is such that
$a \prec \un{b}$ and $b \prec \un{b}$
and prevention is $\un{b}\lhd c$. Note that $c$ is caused by $a$ and $b$, with
$b$ a weak cause of $a$ (or, equivalently, $a$ is in asymmetric conflict with $b$). Moreover, $b$ can be reversed
only when $a$ is present and $c$ has not been executed.
In this case, {\em sustained causation} coincides with causation, because the only reversible event $b$
cannot be reversed if $c$ (which causally depends on $b$) is present. It is routine to check that
$(E, \ensuremath{\prec\!\!\prec}, \nearrow)$ is an \nuevamacro{\textsc{aes}} and that
$\mathsf{H}$ is an \nuevamacro{r\textsc{aes}}{}.
Consider $\mathsf{H}' = (E, U', <, \nearrow, \prec', \lhd')$ depicted in Fig.~\ref{fig:raesdue},
which has the same set of events, causation and weak causality as $\mathsf{H}$ but also takes $c$ as
reversible (i.e., $U' = \setenum{b, c}$). The reverse causation is extended with the pairs $c \prec' \un{c}$,
$a \prec' \un{b}$, and $b \prec' \un{b}$.
Observe that $b$ can be reversed even if the event $c$ (which depends on $b$) has occurred.
This is known as out-of-causal order reversibility.
In this case, sustained causation consists only of $a \ensuremath{\prec\!\!\prec}' c$, i.e., the pair $b<c$ is removed because $b$ can be reversed even though $c$ (which causally depends on $b$)
has occurred. It can be checked that $(E, \ensuremath{\prec\!\!\prec}', \nearrow)$ is an \nuevamacro{\textsc{aes}}{} and $\mathsf{H'}$ an \nuevamacro{r\textsc{aes}}{}.
\end{example}
\begin{remark}
For the sake of the presentation, \cref{de:raes} differs in style from the original
definition in~\cite{rpes},
where causation and reverse causation are glued together, and also weak causality and
prevention are combined
in a single relation. Moreover, we explicitly require $(E,\ensuremath{\prec\!\!\prec}, \nearrow)$ to be an \nuevamacro{\textsc{aes}} instead of
restating conditions.
%
We provide a discussion in the Appendix.
\end{remark}
The definition of configurations of \nuevamacro{r\textsc{aes}es} has an operational flavour,
which relies on the notion of enabling.
Let $\mathsf{H} = (E, U, <, \nearrow, \prec, \lhd)$ be an \nuevamacro{r\textsc{aes}} and $X \subseteq E$ a set of events
such that
$\nearrow$ is acyclic on $X$. For $A\subseteq E$ and $B\subseteq U$, we say $A\cup\un{B}$ is
\emph{enabled} at $X$ if $A\cap X = \emptyset$, $B\subseteq X$, $\nearrow$ is acyclic on $A \cup X$ and
\begin{itemize}
\item for every $e\in A$, if $e' < e$ then $e'\in X\setminus B$ and if $e \nearrow e'$ then
$e'\not\in X\cup A$; and
\item for every $u\in B$, if $e'\prec \un{u}$ then $e'\in X\setminus(B\setminus\setenum{u})$ and
if $\un{u}\lhd e'$ then $e'\not\in X\cup A$.
\end{itemize}
The former condition ensures that $X$ contains all the causes of the events to be added (i.e., those in $A$)
but none of their preventing events. The latter states that $X$ contains the reverse causes of the events to
be undone (i.e., those in $B$) but none of the preventing ones. If $A\cup\un{B}$ is
\emph{enabled} at $X$ then $X' = (X\setminus B)\cup A$ can be reached from $X$, which is written
$X\ \xlongrightarrow{A\cup\underline{B}}\ X'$.
The first condition above can be restated as $\forall e\in A. \histtwo{e}{<}\subseteq X$ and
$X\cup A$ extends $X$.
\begin{definition}\label{de:rpes-conf}
Let $\mathsf{H} = (E, U, <, \nearrow, \prec, \lhd)$ be an \nuevamacro{r\textsc{aes}}{} and
$X\subseteq E$ a set of events that is well-founded with respect to $(\nearrow \cup <)$.
We say that $X$ is a \emph{(reachable) configuration} if there exist two sequences
of sets $A_i$ and $B_i$, for $i=1,\ldots,n$, such that
\begin{itemize}
\item $A_i\subseteq E$ and $B_i\subseteq U$ for all $i$, and
\item $X_i\ \xlongrightarrow{A_i\cup\underline{B_i}}\ X_{i+1}$ for all $1\leq i\leq n$, with
$X_1 = \emptyset$ and $X_{n+1}=X$.
\end{itemize}
The set of configurations of $\mathsf{H}$ is denoted by $\Conf{\mathsf{H}}{\nuevamacro{r\textsc{aes}}}$.
\end{definition}
\begin{example}\label{ex:raes-conf}
The configurations of the \nuevamacro{r\textsc{aes}es} in Example~\ref{ex:raes} (and how they are reached) are depicted in
\cref{fig:raesconfuno,fig:raesconfdue}. Observe that $\setenum{a,c}$, which is a configuration
of $\mathsf{H}'$ but not of $\mathsf{H}$, is reached from the configuration
$\setenum{a, b, c}$ by the undoing of $b$.
\end{example}
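The enabling relation and the reachable configurations can also be checked mechanically. The following Python sketch is ours: it encodes the \nuevamacro{r\textsc{aes}}{} $\mathsf{H}$ of Example~\ref{ex:raes}, implements the two enabling clauses above (omitting the acyclicity check on $\nearrow$, which holds trivially here), and confirms that $\setenum{a,c}$ is not a configuration of $\mathsf{H}$.
\begin{verbatim}
# Sketch: enabling of A u undo(B) at X for the rAES H.
from itertools import combinations

E, U = {'a', 'b', 'c'}, {'b'}
lt   = {('a', 'c'), ('b', 'c')}              # causation  e < e'
wc   = {('a', 'c'), ('b', 'c'), ('b', 'a')}  # weak causality
rc   = {('a', 'b'), ('b', 'b')}              # (e, u): e rev-causes undo(u)
prev = {('b', 'c')}                          # (u, e): undo(u) prevented by e

def enabled(X, A, B):
    if (A & X) or not (B <= X):
        return False
    fwd = all(c in X - B for e in A for (c, e1) in lt if e1 == e) and \
          all(f not in X | A for e in A for (e1, f) in wc if e1 == e)
    bwd = all(c in X - (B - {u}) for u in B for (c, u1) in rc if u1 == u) and \
          all(f not in X | A for u in B for (u1, f) in prev if u1 == u)
    return fwd and bwd

def subsets(S):
    return [frozenset(c) for r in range(len(S) + 1)
            for c in combinations(sorted(S), r)]

seen, todo = {frozenset()}, [frozenset()]
while todo:
    X = todo.pop()
    for A in subsets(E - X):
        for B in subsets(U & X):
            if (A or B) and enabled(X, A, B):
                X1 = (X - B) | A
                if X1 not in seen:
                    seen.add(X1); todo.append(X1)

print(sorted(sorted(X) for X in seen))
# [[], ['a'], ['a','b'], ['a','b','c'], ['b']] -- {a,c} unreachable in H
\end{verbatim}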
\begin{definition}\label{de:raes-morphisms}
Let $\mathsf{H}_i = (E_i, U_i, <_i, \nearrow_i, \prec_i, \lhd_i)$ with $i=0,1$ be two \nuevamacro{r\textsc{aes}es}.
An \nuevamacro{r\textsc{aes}}{}-morphism $f : \mathsf{H}_0\rightarrow \mathsf{H}_1$ is an \nuevamacro{\textsc{aes}}{}-morphism
$f : (E_0, <_0, \nearrow_0) \rightarrow (E_1, <_1, \nearrow_1)$ such that
\begin{itemize}
\item $f(U_0)\subseteq U_1$;
\item for all $u \in U_0$, if $f(u)\neq \bot$ then
$\histtwo{\un{f({u})}}{\prec_1}\subseteq f(\histtwo{\un{u}}{\prec_0})$; and
\item for all $e \in E_0$ and ${u}\in{U_0}$,
if $f(e)\neq \bot \neq f(u)$ then
$\un{f({u})}\lhd_1 f(e)\ \Rightarrow \un{u}\lhd_0 e$.
\end{itemize}
\end{definition}
Observe that $(E_0, <_0, \nearrow_0)$ and $(E_1, <_1, \nearrow_1)$ are not necessarily \nuevamacro{\text{\sc aes}es}, but here
it is enough that $f$ acts on causation and weak causality as an \nuevamacro{\textsc{aes}}{}-morphism does.
Recall that \nuevamacro{r\textsc{aes}}{}-morphisms preserve the causes (and the reverse causes) of each event (resp., reversing event),
and reflect preventions.
\begin{proposition}\label{pr:raesmorph-conf}
Let $f : \mathsf{H}_0\rightarrow \mathsf{H}_1$ be an \nuevamacro{r\textsc{aes}}{}-morphism and let
$X\in \Conf{\mathsf{H}_0}{\nuevamacro{r\textsc{aes}}}$.
Then $f(X)\in \Conf{\mathsf{H}_1}{\nuevamacro{r\textsc{aes}}}$.
\end{proposition}
\ifreport
\begin{proof}
It is enough to show that, given $X\subseteq E_0$ such that $\nearrow_0$ is acyclic on $X$,
$A\cup\un{B}$ is enabled at $X$, with $A\subseteq E_0$ and $B\subseteq U_0$, and
$X\ \xlongrightarrow{A\cup\un{B}}\ X'$, with $X' = (X\setminus B)\cup A$, then
$f(A)\cup\un{f(B)}$ is enabled at $f(X)$. First observe that $f(A)\cap f(X) = \emptyset$, as
$A \cap X = \emptyset$ and $f(B) \subseteq f(X)$ as $B \subseteq X$. Furthermore
$\nearrow_1$ is acyclic on $f(X\cup A) = f(X)\cup f(A)$.
Suppose it is not; then there exists a sequence of events
$f(e_0), \dots, f(e_n)$ in $f(X\cup A)$ such that $f(e_i) \nearrow_1 f(e_{i+1})$, with $0\leq i < n$,
and $f(e_n) \nearrow_1 f(e_0)$; but as $f: (E_0, <_0, \nearrow_0)\rightarrow (E_1, <_1, \nearrow_1)$
is an \nuevamacro{\textsc{aes}}-morphism, we have that $e_i\nearrow_0 e_{i+1}$ for all $0\leq i < n$ and
$e_n\nearrow_0 e_0$, contradicting the
acyclicity of $\nearrow_0$ on $X\cup A$. Consider now an $e\in A$ such that $f(e)$ is defined and
take $e' <_0 e$ such that $f(e')$ is defined. As $f$ is an \nuevamacro{\textsc{aes}}-morphism, if
$e' \in X\setminus B$ then also $f(e') \in f(X)\setminus f(B)$; and if $f(e) \nearrow_1 f(e')$
then again $e\nearrow_0 e'$, which implies that $e'\not\in X\cup A$ and hence
$f(e')\not\in f(X)\cup f(A)$. Finally, consider $u\in B$ such that $f(u)$ is defined, and take
$e' \prec_0 \un{u}$ with $f(e')$ defined. As
$\histtwo{\un{f({u})}}{\prec_1}\subseteq f(\histtwo{\un{u}}{\prec_0})$,
we have that $f(e')\in f(X)\setminus (f(B)\setminus\setenum{f(u)})$ since
$e'\in X\setminus (B\setminus\setenum{u})$. Consider now $\un{f(u)} \lhd_1 f(e')$: we
have that $f(e')\not\in f(X)\cup f(A)$, as $\un{f(u)} \lhd_1 f(e')$ implies that
$\un{u} \lhd_0 e'$ and hence $e'\not\in X\cup A$. Hence $f(X)\ \xlongrightarrow{f(A)\cup\un{f(B)}}\ f(X')$.
Observing that configurations are the subsets of events reachable from the empty one, we have the thesis.
\end{proof}
\fi
As shown in \cite{GPY:categories}, \nuevamacro{r\textsc{aes}}{}-morphisms compose;
hence \nuevamacro{r\textsc{aes}es} and \nuevamacro{r\textsc{aes}}{}-morphisms form a category, denoted by
$\mathbf{RAES}$.
Moreover, $\mathbf{AES}$ is a full subcategory of $\mathbf{RAES}$.
\endinput
\subsection{Asymmetric Causal Nets}
In this section we introduce {\em Asymmetric Causal Nets}, a class of \nuevamacro{\text{\sc ipt}s} that
generalises the {\em Causal Nets} of \cite{lics} to account for asymmetric conflicts.
Roughly speaking, we focus on {\sc{ipt}}\xspace{s} where
all the dependencies between transitions arise because of \emph{inhibitor} arcs.
As in causal nets,
$\post{t} \cap \pre{t'} = \emptyset$ holds for all $t, t'$, i.e., if a place appears in
the preset of a transition, it does not appear in the postset of any transition, and vice
versa. Consequently, the flow relation induces an empty causal relation.
However, causality can be recovered from inhibitor arcs. Intuitively,
a transition $t$ connected via an inhibitor arc to some place in
the preset of another transition $t'$ cannot be fired before $t'$ (if we assume
that the preset of $t'$ is marked).
This is the case, e.g., of the transitions $a$ and $b$ in Fig.~\ref{fig:pcn-d}, where
$a$ can be fired only after $b$.
The induced (immediate) causality relation $\lessdot$ is defined by
$t \lessdot t'$ iff $\pre{t}\cap\inib{t'}\neq\emptyset$,
i.e., the firing of $t$ consumes (at least) one of the tokens that
inhibit the firing of $t'$.
Additionally, asymmetric causal nets impose places not to be shared between
the presets and postsets of transitions, i.e.,
$\post{t} \cap \post{t'} \neq \emptyset\ \lor\
\pre{t} \cap \pre{t'} \neq \emptyset$ implies $t = t'$ for all $t,t'$.
As a consequence, the flow relation does not introduce forward or
backward conflicts, which need to be recovered from inhibitor arcs.
Note that a transition $t$ inhibited by some place in the postset of
another transition $t'$ cannot be fired if $t'$ has been fired, i.e., $t'$
prevents $t$. This is the case, e.g., of
$b$ and $a$ in Fig.~\ref{fig:pcn-b}, in which $b$ cannot be fired
after $a$ (i.e., $b$ is prevented by $a$).
Hence, the induced prevention relation $\textcolor{black}{\ \mbox{\reflectbox{$\leadsto$}}\ }$ is defined by
$t \textcolor{black}{\ \mbox{\reflectbox{$\leadsto$}}\ } t'$ iff $\post{t}\cap\inib{t'}\neq\emptyset$.
We shall write $\textcolor{black}{\leadsto}$ for the inverse of $\textcolor{black}{\ \mbox{\reflectbox{$\leadsto$}}\ }$.
Observe that $\textcolor{black}{\leadsto}$ is analogous to the weak causality of an \nuevamacro{\textsc{aes}}{}:
if $t'\ \textcolor{black}{\leadsto}\ t$ then $\post{t}\cap\inib{t'}\neq\emptyset$; hence
$t'$ cannot be fired if $t$ has been fired; however, $t$ can be fired after $t'$. %
As in \nuevamacro{\textsc{aes}}, \emph{symmetric} conflicts are recovered from
prevention, i.e., $t$ and $t'$ are in \emph{symmetric} conflict,
written $t {\natural} t'$, whenever $t \textcolor{black}{\ \mbox{\reflectbox{$\leadsto$}}\ } t'$ and $t\ \textcolor{black}{\leadsto}\ t'$.
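These induced relations can be read directly off a net presented, as in the earlier sketches, by its \texttt{pre}, \texttt{post} and \texttt{inib} maps; a sketch of ours:
\begin{verbatim}
# Sketch: the induced causality, prevention and symmetric conflict.
def causality(pre, inib):      # t <. t'  iff  pre(t) meets inib(t')
    return {(t, u) for t in pre for u in inib
            if t != u and pre[t] & inib[u]}

def prevention(post, inib):    # t prevents t'  iff  post(t) meets inib(t')
    return {(t, u) for t in post for u in inib
            if t != u and post[t] & inib[u]}

def sym_conflict(post, inib):  # t natural t'  iff  they prevent each other
    p = prevention(post, inib)
    return {(t, u) for (t, u) in p if (u, t) in p}

# On the reconstruction of N_4 from the earlier sketch:
#   causality(pre, inib)     == {('b', 'a')}
#   sym_conflict(post, inib) == {('a','c'), ('c','a'), ('b','c'), ('c','b')}
\end{verbatim}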
\begin{definition}\label{de:pre-acausal-net}
Let $C = \langle S, T, F, I, \mathsf{m}\rangle$ be an {\sc{ipt}}\xspace.
$C$ is a
\emph{pre asymmetric causal net} (p\nuevamacro{\text{\sc acn}}) if the following conditions hold:
%
\begin{enumerate}
\item\label{pcn:cond1} $\forall t, t'\in T.\ \post{t}\cap\pre{t'} = \emptyset$;
\item\label{pcn:cond2} $\forall t\in T$. $\card{\pre{t}} = \card{\post{t}} = 1$ and $\mathsf{m} = \pre{T}$;
\item\label{pcn:cond3} $\lessdot^{+}$ is a partial order;
\item\label{pcn:cond4} $\forall t\in T.\ \histtwo{t}{\lessdot} = \setcomp{t'\in T}{t'\lessdot^{\ast} t}$ is finite and
$(\textcolor{black}{\leadsto}\cup\lessdot)$ is acyclic on $\histtwo{t}{\lessdot}$; and
\item\label{def:causality-saturated} for all $t,t'\in T$. $t \lessdot^{+} t'$ implies $t \lessdot t'$.
\end{enumerate}
\end{definition}
The first condition states that
causal dependencies do not arise because of the flow
relation.
The second one implies that a place may appear in the preset/postset of at most one transition
and that the places in the presets of all transitions are initially marked.
Since $\lessdot$ is meant to model causal dependencies, we require its transitive closure $\lessdot^{+}$ to be a
partial order (third condition). %
The fourth condition requires each transition to have a finite set of causes; hence, $\lessdot^{+}$ is a well-founded partial order. By requiring $(\textcolor{black}{\leadsto}\cup\lessdot)$ to be acyclic on the causes of every transition $t$ we ensure that
the causes can be ordered so to satisfy causality and prevention. More precisely, we exclude situations in which (i) prevention contradicts causality (e.g., $t \lessdot t'$ and $t' \textcolor{black}{\leadsto} t$), (ii) there are circular chains of prevention (i.e., $t_0 \textcolor{black}{\leadsto} t_1\textcolor{black}{\leadsto} \ldots \textcolor{black}{\leadsto} t_n\textcolor{black}{\leadsto} t_0$), where symmetric conflicts are a particular case, and (iii) self-blocked transitions (i.e., $\lessdot$ needs to be irreflexive and hence ${\pre{t}} \cap{\inib{t}}=\emptyset$ for all $t$).
The last condition imposes that all dependencies among transitions have to be
explicitly represented in the structure of the net, namely
causality is saturated (Condition~\ref{def:causality-saturated}).
\begin{definition}\label{de:acausal-net}
Let $C = \langle S, T, F, I, \mathsf{m}\rangle$ be a p\nuevamacro{\text{\sc acn}}. Then $C$ is an \emph{asymmetric causal net} (\nuevamacro{\text{\sc acn}}) whenever for all $t, t', t''\in T$. $t{\natural} t' \land\ t'\lessdot t''$ imply $t {\natural} t''$.
\end{definition}
The added condition implies that there are inhibitor arcs for all inherited symmetric conflicts.
\begin{example}\label{ex:relations-explained}
The \nuevamacro{\text{\sc ipt}s} in Fig.~\ref{fig:pcn} are \nuevamacro{\text{\sc acn}s}. The first two conditions of
\cref{de:pre-acausal-net} hold for the four nets because transitions do not share places in their pre and postsets. Moreover, the places in the presets of all transitions are the only ones that are initially marked.
%
For $N_1$ (Fig.~\ref{fig:pcn-a}), we have that $a \lessdot b$ and $b \lessdot c$; consequently, $\lessdot^{+}$ is a
total order, and causality is saturated as we also have $a \lessdot c$.
Moreover, $\textcolor{black}{\ \mbox{\reflectbox{$\leadsto$}}\ }$ is empty (because none of the transitions has an inhibitor arc connected to the postset of
another transition). Hence, the fourth condition also holds.
%
For the net $N_2$ in Fig.~\ref{fig:pcn-b}, the causality relation is empty while prevention contains the unique
pair $a \textcolor{black}{\ \mbox{\reflectbox{$\leadsto$}}\ } b$: $b$ cannot be fired after $a$ but $a$ can be executed after $b$. Fig.~\ref{fig:pcn-c} shows a similar net in which $a$ and $b$ are in symmetric conflict ($a {\natural} b$): the execution of one prevents the other.
Fig.~\ref{fig:pcn-d} shows a net where $b \lessdot a$, $b {\natural} c$ and $a {\natural} c$, and
conflicts
are inherited along the causality relation $\lessdot$.
\end{example}
\subsection{Morphisms for (pre) Asymmetric Causal Nets}
We define a suitable notion of morphisms for (p)\nuevamacro{\text{\sc acn}s},
which takes into account that inhibitor arcs correspond to two different kinds of dependencies: causality
and prevention.
In particular, the inhibitor arcs that represent prevention demand a peculiar treatment of
markings when compared with classical notions of morphisms for nets~\cite{Win:ES,Win:PNAMC}.
We first introduce some technical machinery.
A partial mapping $f : A \to B$ extends naturally to a mapping from $\mu A$ to
$\mu B$ by stipulating that $f(\sum_{a\in A}n_a\cdot a) = \sum_{b\in B}(\sum_{a\in f^{-1}(b)} n_a)\cdot b$.
%
A multirelation $f$ is a multiset on $A \times B$;
it is \emph{finitary} if $\setcomp{b\in B}{f(a,b) > 0}$ is finite for all $a$.
A multirelation $f$ is a relation if $f = \flt{f}$, i.e., if it is a set.
The composition of two multirelations $f$ on $A\times B$ and $g$ on $B \times C$, written
$\composizione{f}{g}$, is the multirelation on $A \times C$ defined such that
$[\composizione{f}{g}](a,c) = \sum_{b\in B} f(a,b)\cdot g(b,c)$ for all $a\in A, c\in C$.
%
A finitary multirelation $f$ on $A\times B$ induces a partial mapping
$\mu f : \mu A\rightarrow\mu B$ defined such that
$\mu f(\sum_{a\in A}n_a\cdot a) = \sum_{b\in B}\sum_{a\in A}(n_a\cdot f(a,b))\cdot b$.
Conversely, a partial mapping $\mu f : \mu A\rightarrow\mu B$ induces a mapping
$\mrflt{f} : \mu A\rightarrow \mu B$ defined such that
$\mrflt{f}(m) = \flt{\mu f(m)}$.
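In particular, the composition of multirelations is just a sum over the middle component; a small sketch of ours, representing a multirelation on $A\times B$ as a \texttt{Counter} indexed by pairs:
\begin{verbatim}
# Sketch: multirelation composition [f;g](a,c) = sum_b f(a,b) * g(b,c).
from collections import Counter

def compose(f, g):
    h = Counter()
    for (a, b), n in f.items():
        for (b2, c), k in g.items():
            if b == b2:
                h[(a, c)] += n * k
    return h
\end{verbatim}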
%
\begin{definition}\label{de:pca-morphism}
Let $C_0 = \langle S_0, T_0, F_0, I_0, \mathsf{m}_0\rangle$ and
$C_1 = \langle S_1, T_1, F_1, I_1, \mathsf{m}_1\rangle$ be p\nuevamacro{\text{\sc acn}s}. An \nuevamacro{\text{\sc acn}}-morphism is a
pair $(f_S,f_T)$ consisting of a relation $f_S \subseteq S_0\times S_1$ and a partial function
$f_T : T_0 \rightarrow T_1$ defined such that
\begin{enumerate}
\item\label{cond:c} for all $t\in T_0$ if $f_T(t) \neq \bot$ then
\begin{enumerate}
\item\label{cond:preset-preserved} $\pre{f_T(t)} = \mu f_S(\pre{t})$ and $\post{f_T(t)} = \mu f_S(\post{t})$;
\item\label{cond:inhibitor-reflected} $\forall (s,f_T(t))\in I_1$. $\forall s'\in f_S^{-1}(s)$.
$(s',t)\in I_0$; and
\end{enumerate}
\item\label{cond:e} $\forall t, t'\in T_0$ if $f_T(t) \neq \bot\neq f_T(t')$ then
$f_T(t) = f_T(t')\ \Rightarrow\ t\ {\natural}_0\ t'$.
\item\label{cond:b} $\forall s_1\in S_1.\ \forall s_0, s_0'\in f_S^{-1}(s_1).$
$s_0\neq s_0'$ implies
$\post{s_0}\ {\natural}_0\ \post{s_0'}$
or $\pre{s_0}\ {\natural}_0\ \pre{s_0'}$;
\item\label{cond:a} $\mrflt{f_S}(\mathsf{m}_0) = \mathsf{m}_1$.
\end{enumerate}
\end{definition}
The first two conditions are quite standard for morphisms between safe nets: presets and postsets are preserved (Condition~\ref{cond:preset-preserved}), inhibitor arcs
are reflected (Condition~\ref{cond:inhibitor-reflected}).
Condition~\ref{cond:inhibitor-reflected} implies that $\inib{f_T(t)} \subseteq \mrflt{f_S}(\inib{t})$.
Furthermore, only conflicting transitions can be identified (Condition~\ref{cond:e}).
Differently from usual requirements, Condition~\ref{cond:b} allows $f_S$
to \emph{identify} places in the preset of different transitions.
This may be seen as problematic because a place in the target of the morphism may represent different tokens in the source, which in
principle may evolve independently.
However, according to Condition~\ref{cond:b}, $f_S$ may only identify places connected to transitions that are in (symmetric) conflict; and moreover
the mapping induced on markings is $\mrflt{f_S}$ (Condition~\ref{cond:a}).
We observe that the notion of morphisms is not influenced by the inheritance of symmetric conflicts along
the causality relation, hence they do apply to \nuevamacro{\text{\sc acn}}{es} as well.
\begin{example}\label{ex:arnmorph}
Consider the \nuevamacro{\text{\sc acn}s} $C_0$ and $C_1$ in Fig.~\ref{fig:acn-morph}.
The morphism $(f_S,f_T): C_0\rightarrow C_1$ is as follows.
The mapping $f_T$ for transitions is
$f_T(a) = a'$, $f_T(b) = b'$ and $f_T(c) = c' = f_T(d)$, i.e., it identifies the conflicting
transitions $c$ and $d$.
The relation on places is defined as expected, i.e., $f_S(s^{0}_{i},s^{1}_i)$ for $1 \leq i \leq 6$,
$f_S(s^{0}_{7},s^{1}_3)$, and $f_S(s^{0}_{8},s^{1}_6)$.
The inhibitor arc $(s^{1}_{3},b')$ of
$C_1$ is reflected in the one $(s^{0}_{3},b)$ of $C_0$. Note that the remaining
arcs of $C_0$ are not preserved by the mapping.
\begin{figure}[t]
\qquad\quad
\begin{subfigure}{.30\textwidth}
{\scalebox{0.7}{\input{examples/example-acn-morph1}}}
\caption{$C_0$}
\label{fig:acn-morph1}
\end{subfigure}
\qquad
\begin{subfigure}{.20\textwidth}
{\scalebox{0.7}{\input{examples/example-acn-morph2}}}
\caption{$C_1$}
\label{fig:acn-morph2}
\end{subfigure}
\qquad
\begin{subfigure}{.20\textwidth}
{\scalebox{0.7}{\input{examples/example-arcn-c}}}
\caption{$V$}
\label{fig:racn-c}
\end{subfigure}
\caption{}\label{fig:acn-morph}
\end{figure}
\end{example}
We now show that \nuevamacro{\text{\sc acn}}-morphisms preserve the token game, namely if $m\trans{A}m'$ then $\mrflt{f_S}(m)\trans{\mu f_{T}(A)}\mrflt{f_S}(m')$.
\begin{proposition}\label{pr:morph-preserve-token-game}
Let $(f_S,f_T) : C_0 \rightarrow C_1$ be an \nuevamacro{\text{\sc acn}}-morphism,
and $m\trans{A}m'$ be a step. Then
$\mrflt{f_S}(m)\trans{\mu f_{T}(A)}\mrflt{f_S}(m')$.
\end{proposition}
\ifreport
\begin{proof}
The nets under consideration are p\nuevamacro{\text{\sc acn}s}, hence
each transition has exactly one place in its preset and one in its postset, i.e.,
$\pre{t}$ and $\post{t}$ are singletons.
This implies that $\pre{\mu f_T(t)} = \mu f_S(\pre{t}) = \mrflt{f_S}(\pre{t})$ and
$\post{ \mu f_T(t)} = \mrflt{f_S}(\post{t})$.
We observe that $\mu f_S(\inib{t}) \neq \mrflt{f_S}(\inib{t})$ means that two marked
places in $\mu f_S(\inib{t})$ are identified, but we know that this can happen only if the
two places are either in the presets or in the postsets of two conflicting transitions.
Now, two conflicting transitions are never enabled together, and the same holds for two transitions
such that one prevents the other; therefore, if $m\trans{A}m'$ then
$\mrflt{f_S}(m)\trans{\mu f_T(A)}$ and $\mrflt{f_S}(m)\trans{\mu f_T(A)}\mrflt{f_S}(m')$.
In fact, $\mrflt{f_S}(m)\trans{\mu f_T(A)}$ since $\pre{\mu f_T(A)} = \mu f_S(\pre{A}) = \mrflt{f_S}(\pre{A})$.
Now consider an inhibitor arc $(s,f_T(t))$ with $t\in A$: for each
$s'$ such that $(s',s)\in f_S$ there must be an inhibitor arc $(s',t)$, and this implies
that, as $m\trans{A}$, also $\mrflt{f_S}(m)\trans{\mu f_T(A)}$.
By the same arguments we have that $\mrflt{f_S}(m)\trans{\mu f_T(A)}\mrflt{f_S}(m')$
as $\mrflt{f_S}(m') = \mrflt{f_S}(m) - \mrflt{f_S}(\pre{A}) + \mrflt{f_S}(\post{A})$.
\end{proof}
\fi
\nuevamacro{\text{\sc acn}}-morphisms preserve behaviours and are closed under composition.
\begin{lemma}\label{lm:morph-compose}
Let $(f_S,f_T) : C_0 \rightarrow C_1$ and $(g_S,g_T) : C_1 \rightarrow C_2$ be two \nuevamacro{\text{\sc acn}}-morphisms.
Then $(\composizione{f_S}{g_S},\composizione{f_T}{g_T}) : C_0 \rightarrow C_2$ is an \nuevamacro{\text{\sc acn}}-morphism as well.
\end{lemma}
\ifreport
\begin{proof}
We know that $\mrflt{\composizione{f_S}{g_S}}(m_0) = m_2$ as
$\mrflt{f_S}(m_0) = m_1$ and $\mrflt{g_S}(m_1) = m_2$.
%
We check that $\forall s_2\in S_2$, $\forall s_0, s_0'\in f_S^{-1}(g_S^{-1}(s_2)).$ either
$\post{s_0}\ {\natural}_0\ \post{s_0'}$ or $\pre{s_0}\ {\natural}_0\ \pre{s_0'}$.
Now either there is a place $s_1\in g_S^{-1}(s_2)$ such that $s_0$ and $s_0'$ are in $f_S^{-1}(s_1)$
or there are two places $s_1, s_1'\in g_S^{-1}(s_2)$ and $s_0$ and $s_0'$ are in
$f_S^{-1}(\setenum{s_1, s_1'})$. In the first case
$\post{s_0}\ {\natural}_0\ \post{s_0'}$ or $\pre{s_0}\ {\natural}_0\ \pre{s_0'}$ holds as $(f_S,f_T)$ is
an \nuevamacro{\text{\sc acn}}-morphism;
in the second case we have $\post{s_1}\ {\natural}_1\ \post{s_1'}$ or $\pre{s_1}\ {\natural}_1\ \pre{s_1'}$
as $(g_S,g_T)$ is an \nuevamacro{\text{\sc acn}}-morphism; but conflicts are reflected, hence
also $\post{s_0}\ {\natural}_0\ \post{s_0'}$ or $\pre{s_0}\ {\natural}_0\ \pre{s_0'}$ holds.
%
The other conditions are standard, and we can conclude that \nuevamacro{\text{\sc acn}}-morphisms compose.
\end{proof}
\fi
We denote by $\mathbf{pACN}$ the category of p\nuevamacro{\text{\sc acn}s} and \nuevamacro{\text{\sc acn}}-morphisms. This category has a full subcategory whose objects are \nuevamacro{\text{\sc acn}s}, denoted by $\mathbf{ACN}$, which is
the category of our interest.
\subsection{Reversible Asymmetric Causal Nets}
In this section we introduce the notion of Reversible Asymmetric Causal Nets by following the
approach in \cite{lics}: we extend \nuevamacro{\text{\sc acn}s} with {\em backward} transitions that are
responsible for the undoing / reversing of the effects of
previously executed {\em forward} transitions (i.e., the ordinary transitions). We assume that
the set $T$ of transitions of a net is partitioned into a set $\fwdset{}$ of forward transitions and a set $\bwdset{}$
of backward transitions. Moreover, every backward transition ${\underline t}\in\bwdset{}$ undoes the effect of one and only
one forward transition ${t}\in\fwdset{}$. On the other hand, there may be forward transitions that are irreversible.
With a slight abuse of notation, we write ${t}$ for a forward transition and ${\underline t}$ for the associated reversing transition (when it exists).
\begin{definition}\label{de:reversible-causal-net}
An {\sc{ipt}}\xspace $V = \langle S, T, F, I, \mathsf{m}\rangle$ is a
\emph{reversible Asymmetric Causal Net}
(\nuevamacro{r\text{\sc acn}}) if there
exists a partition $\setenum{\fwdset{} , \bwdset{}}$ of $T$, with $\fwdset{}$ the forward transitions and
$\bwdset{}$
the backward ones, such that:
\begin{enumerate}
\item\label{rcn:cond1} $\resarcn{V}{}{\fwdset{}} = \langle S, \fwdset{}, F_{|\fwdset{}\times\fwdset{}}, I_{|\fwdset{}\times\fwdset{}},
\mathsf{m}\rangle$ is a p\nuevamacro{\text{\sc acn}}\ net;
\item\label{rcn:cond2}
$\forall {\underline t}\in\bwdset{}.\ \exists!\ {t}\in \fwdset{}$
such that $\post{{t}} = \pre{{\underline t}}$,
$\pre{{t}} = \post{{\underline t}}$, and $\pre{{t}}\subseteq\inib{{\underline t}}$;
\item\label{rcn:cond3}
$\forall {\underline t}\in\bwdset{}$.
$K_{{\underline t}} = \setcomp{{t}' \in \fwdset{}}{\inib{{\underline t}} \cap
\pre{{t}'{}} \neq \emptyset}$ is finite and $\textcolor{black}{\leadsto}$ is acyclic on $K_{{\underline t}}$;
\item\label{rcn:cond4} $\forall {\underline t}\in\bwdset{}.\ \forall {t}\in\fwdset{}$ if
$\pre{t}\cap \inib{{\underline t}}{} \neq \emptyset$ then $\post{t}\cap \inib{{\underline t}}{} = \emptyset$;
\item\label{rcn:cond6} $\forall {t}, {t}', {t}''\in \fwdset{} .\ {t}\ {\natural}\ {t}'\ \land\
{t}'\lll {t}''\ \Rightarrow\ {t}\ {\natural}\ {t}''$ with $\lll$ being the transitive closure of $\lessdot \cap \{({t}, {t}')\ |\ {\underline t}\not\in\bwdset{}\ \textit{or}\ \inib{{\underline t}}\cap\post{{t}'}\neq\emptyset \}$.
\end{enumerate}
\end{definition}
By Condition~\ref{rcn:cond1}, the underlying net $\resarcn{V}{}{\fwdset{}}$ containing just the
forward transitions of $V$ is a p\nuevamacro{\text{\sc acn}}. The requirement that it is a p\nuevamacro{\text{\sc acn}} and not an \nuevamacro{\text{\sc acn}} is
motivated by the fact that the conflict relation is not always inherited along $\lessdot$, which plays the role of causation,
since reversing transitions may allow potentially conflicting transitions to be both executed.
Condition~\ref{rcn:cond2} establishes that each backward transition ${\underline t}$ reverses
exactly one forward transition ${t}$; consequently, ${\underline t}$ consumes the
tokens produced by ${t}$ (i.e., $\post{{t}} = \pre{{\underline t}}$) and produces
the tokens consumed by ${t}$ (i.e., $\post{{\underline t}} = \pre{{t}}$).
The condition $\pre{{t}}\subseteq\inib{{\underline t}}$ amounts to saying that ${\underline t}$
(i.e., the reverse of ${t}$) cannot be fired if ${t}$ has not been executed; in other words,
a transition can be reversed only if it has been fired.
Condition \ref{rcn:cond3} can be read as requiring a finite set of causes for the undoing of each transition; in fact,
$K_{{\underline t}}$ contains all the forward transitions ${t}'$ that enable the
execution of ${\underline t}$.
By Condition~\ref{rcn:cond4}, if a backward transition ${\underline t}'$
causally depends on the forward transition ${t}$ (i.e.,
$\pre{t}\cap \inib{{\underline t}'}{} \neq \emptyset$) then ${\underline t}'$ cannot be prevented by the
same transition ${t}$, i.e., $\post{t}\cap \inib{{\underline t}'}{} = \emptyset$, since otherwise ${\underline t}'$ would be blocked.
The relation $\lll$ in Condition~\ref{rcn:cond6}, which is analogous to the \emph{sustained
causation} of \nuevamacro{r\textsc{aes}}, coincides with causality except for the cases in which a cause can be
reversed even when a causally-dependent transition has been fired. Conflicts
have to be inherited along $\lll$, not along $\lessdot$.
Sometimes we write $\arcn{V}{\bwdset{}}$ for a \nuevamacro{r\text{\sc acn}}{} $V$
whose set of backward transitions is $\bwdset{}$.
The
inhibitor arcs of an {\sc{ipt}}\xspace $V$ induce the
relations $\lessdot$ and $\textcolor{black}{\leadsto}$ corresponding to the forward flow (defined on $\fwdset{}\times\fwdset{}$) and two new relations involving the backward transitions, namely \emph{reverse causation} $\prec$,
defined by $t \prec \un{t'}$ iff
$\pre{t}\cap\inib{\un t'}\neq \emptyset$, and \emph{prevention} $\lhd$, defined by
$\un{t'} \lhd t$ iff $\post{t}\cap\inib{\un t'}\neq \emptyset$.
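As for the forward relations, $\prec$ and $\lhd$ can be computed directly from the inhibitor arcs; a sketch of ours, keeping the inhibitor sets of backward transitions in a separate map indexed by the forward transition they undo:
\begin{verbatim}
# Sketch: backward relations of a rACN. inib_bwd[t] is the inhibitor set
# of the backward transition undoing t (only for reversible t).
def reverse_causation(pre, inib_bwd):  # (t1, t): t1 rev-causes undo(t)
    return {(t1, t) for t1 in pre for t in inib_bwd
            if pre[t1] & inib_bwd[t]}

def bwd_prevention(post, inib_bwd):    # (t, t1): undo(t) prevented by t1
    return {(t, t1) for t1 in post for t in inib_bwd
            if post[t1] & inib_bwd[t]}
\end{verbatim}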
\begin{figure}[bt]
\begin{subfigure}{.30\textwidth}
\scalebox{0.70}{\input{examples/example-arcn-a.tex}}
\caption{$V_1$}
\label{fig:rcn-a}
\end{subfigure}
\quad
\begin{subfigure}{.30\textwidth}
\scalebox{0.70}{\input{examples/example-arcn-a-forw.tex}}
\caption{$V_{1_{\setenum{a,b,c}}}$}
\label{fig:rcn-a-forw}
\end{subfigure}
\quad
\begin{subfigure}{.30\textwidth}
\scalebox{0.70}{\input{examples/example-arcn-b.tex}}
\caption{$V_2$}
\label{fig:rcn-b}
\end{subfigure}
\caption{}
\label{fig:rcn}
\end{figure}
\begin{example}
It is easy to check that the net in Fig.~\ref{fig:rcn-a} is a \nuevamacro{r\text{\sc acn}}. In
$V_1$ there is an asymmetric conflict between $a$ and $b$, and the reversing of $b$ (i.e., $\un{b}$) can happen only if $a$ and $b$ have been fired but $c$ has not. If we remove the transition
$\un{b}$ (and the connected inhibitor arcs) we obtain the \nuevamacro{\text{\sc acn}}{} in Fig.~\ref{fig:rcn-a-forw}.
In the net $V_2$, $c$ can also be reversed (by $\un{c}$). If we remove the transitions $\un{b}$
and $\un{c}$ (and the connected inhibitor arcs) from $V_2$ we obtain the \nuevamacro{\text{\sc acn}}{} in
Fig.~\ref{fig:rcn-a-forw}.
\end{example}
\begin{example}
Consider the net $V$ in Fig.~\ref{fig:acn-morph}: if we remove the reversing transition $\un{b}$, we do not
obtain an \nuevamacro{\text{\sc acn}}, as the conflict between $a$ and $b$ is not inherited along $b\lessdot c$. Requiring such
inheritance would, however, contradict the fact that after executing $b$ followed by $c$ and then reversing $b$
(i.e., firing $\un{b}$) we can execute $a$, which is a legal computation in a \nuevamacro{r\text{\sc acn}}.
\end{example}
\subsection{Morphisms for Reversible Asymmetric Causal Nets}
The notion of morphisms for \nuevamacro{r\text{\sc acn}s} is given below.
\begin{definition}\label{de:rev-morphism}
Let $\arcn{V}{\bwdset{0}} = \langle S_0, T_0, F_0, I_0, \mathsf{m}_0\rangle$ and
$\arcn{V}{\bwdset{1}} = \langle S_1, T_1, F_1, I_1, \mathsf{m}_1\rangle$ be two \nuevamacro{r\text{\sc acn}s}.
A \nuevamacro{r\text{\sc acn}}-morphism is a
pair $(f_S,f_T)$ consisting of a relation $f_S\subseteq S_0\times S_1$ and a
partial function $f_T : T_0 \rightarrow T_1$ satisfying the following conditions
\begin{enumerate}
\item $f_T(\fwdset{0})\subseteq \fwdset{1}$ and $f_T(\bwdset{0})\subseteq \bwdset{1}$;
\item \label{pacamorf-in-restriction}
$(f_S,f_T|_{\fwdset{0}}) : \resarcn{V}{}{\fwdset{0}} \rightarrow \resarcn{V}{}{\fwdset{1}}$
is a \nuevamacro{\text{\sc acn}}-morphism,
\item $\forall {\underline t}\in \bwdset{0}$, with ${t}$ the unique forward transition reversed by ${\underline t}$: if $f_T(t) \neq \bot$
then
\begin{enumerate}
\item \label{mapping-rev} $f_T({\underline t}) \neq \bot$
and $f_T({\underline t}) = \un{f_T({t})}$; and
\item \label{non-collapsing-inhib}
$\forall (s,f_T(t))\in I_1.\ \forall s'\in f_S^{-1}(s).\ (s',t)\in I_0$.
\end{enumerate}
\end{enumerate}
\end{definition}
The first condition stipulates that forward (resp. backward) transitions are mapped to forward (resp. backward) transitions.
By Condition~\ref{pacamorf-in-restriction}, when restricting to forward transitions (i.e., considering the restriction of $f_T$ to ${\fwdset{0}}$) we obtain an \nuevamacro{\text{\sc acn}}{}-morphism between the underlying nets
$\resarcn{V}{}{\fwdset{0}}$ and
$\resarcn{V}{}{\fwdset{1}}$ (i.e., the nets consisting only of forward transitions).
Condition~\ref{mapping-rev} establishes that the transition ${\underline t}$ that reverses $t$
should be mapped to the transition $\un{f_T({t})}$ that reverses $f_T({t})$.
Finally, Condition~\ref{non-collapsing-inhib} avoids collapsing inhibitor arcs, i.e., the identification
of different causes that may prevent the reversing of a transition.
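The morphism conditions are likewise easy to check mechanically. The following minimal Python sketch (a toy encoding of ours, with hypothetical transition names) tests the first condition and Condition~\ref{mapping-rev} for a total mapping:
\begin{verbatim}
fwd0, bwd0 = {"a", "b"}, {"ub"}
fwd1, bwd1 = {"a'", "b'"}, {"ub'"}
reverser0 = {"ub": "b"}      # ub reverses b in the source net
reverser1 = {"ub'": "b'"}    # ub' reverses b' in the target net

f_T = {"a": "a'", "b": "b'", "ub": "ub'"}

# First condition: forward to forward, backward to backward.
assert all(f_T[t] in fwd1 for t in fwd0 if t in f_T)
assert all(f_T[t] in bwd1 for t in bwd0 if t in f_T)

# Condition (a): if the reversed transition is mapped, its reverser
# must be mapped to the reverser of its image.
for ut, t in reverser0.items():
    if t in f_T:
        assert ut in f_T and reverser1[f_T[ut]] == f_T[t]
\end{verbatim}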
\begin{figure}[tb]
\begin{subfigure}{.45\textwidth}
{\scalebox{0.7}{\input{examples/example-racn-morph1}}}
\caption{$V_0$}
\label{fig:racn-morph1}
\end{subfigure}
\qquad
\begin{subfigure}{.45\textwidth}
{\scalebox{0.7}{\input{examples/example-racn-morph2}}}
\caption{$V_1$}
\label{fig:racn-morph2}
\end{subfigure}
\caption{Two \nuevamacro{r\text{\sc acn}s}}\label{fig:racn-morph}
\end{figure}
\begin{example}\label{ex:revmorph}
A \nuevamacro{r\text{\sc acn}}{}-morphism $(f_S,f_T) : V_0 \rightarrow V_1$ for the \nuevamacro{r\text{\sc acn}s} $V_0$ and $V_1$ in Fig.~\ref{fig:racn-morph} is the following.
Take the multirelation $f_S$ on places as in \Cref{ex:arnmorph} and define the mapping on transitions $f_T$ such that
it coincides with the one in \Cref{ex:arnmorph} on forward
transitions, whereas it is
$f_T(\un{a}) = \un{a}'$ on the reversing transition $\un{a}$. Clearly $\un{f_T(a)} = f_T(\un{a})$.
%
The other conditions straightforwardly hold.
\end{example}
The following two results state that \nuevamacro{r\text{\sc acn}}-morphisms preserve behaviours and that they
are closed under composition.
\begin{proposition}\label{pr:rev-morph-preserve-token-game}
Let $(f_S,f_T) : \arcn{V}{\bwdset{0}} \rightarrow \arcn{V}{\bwdset{1}}$ be a \nuevamacro{r\text{\sc acn}}{}-morphism.
If $m\trans{A}m'$ in $\arcn{V}{\bwdset{0}}$, then
$\mrflt{f_S}(m)\trans{\mu f_{T}(A)}\mrflt{f_S}(m')$ in $\arcn{V}{\bwdset{1}}$.
\end{proposition}
\ifreport
\begin{proof}
It is enough to observe that a \nuevamacro{r\text{\sc acn}}{}-morphism acts on markings as the associated
\nuevamacro{\text{\sc acn}}{}-morphism and, since inhibitor arcs are reflected, the enabling conditions
for $\mrflt{f_S}(m)\trans{\mu f_{T}(A)}$ hold. As a consequence, the token game is preserved.
\end{proof}
\fi
\begin{lemma}\label{lm:rcnmorph-compose}
Let $(f_S,f_T) : V_0 \rightarrow V_1$ and $(g_S,g_T) : V_1 \rightarrow V_2$ be two \nuevamacro{r\text{\sc acn}}{}-morphisms.
Then $(\composizione{f_S}{g_S},\composizione{f_T}{g_T}) : V_0 \rightarrow V_2$ is a \nuevamacro{r\text{\sc acn}}{}-morphism as
well.
\end{lemma}
\ifreport
\begin{proof}
The proof follows the same lines as the one of \Cref{lm:morph-compose}.
\end{proof}
\fi
Hence, \nuevamacro{r\text{\sc acn}s} and \nuevamacro{r\text{\sc acn}}-morphisms form a category, which we denote by
$\mathbf{RACN}$.
\section{Reductions}\label{sec:modtw_reduction}
\input{tex_arxiv/modtw_st_reduction}
\input{tex_arxiv/modtw_cds_reduction}
\input{tex_arxiv/modtw_cvc_algo}
\input{tex_arxiv/modtw_fvs_algo}
\section{Lower Bounds}\label{sec:modtw_lb}
In this section, we prove the tight lower bounds for \textsc{Connected Vertex Cover}\xspace and \textsc{Feedback Vertex Set}\xspace parameterized by twinclass-pathwidth, cf.~\cref{thm:intro_lower_bounds}. The construction principle follows the style of Lokshtanov et al.~\cite{LokshtanovMS18}. On a high level, that means the resulting graphs can be interpreted as a \emph{matrix of blocks}, where each block spans several rows and columns. Every row is a long \emph{path-like gadget} that simulates a constant number of variables of the \textsc{Satisfiability}\xspace instance and which contributes 1 unit of twinclass-pathwidth. The number of simulated variables is tied to the running time we want to rule out. For technical reasons, we consider bundles of rows simulating a \emph{variable group} of appropriate size. Every column corresponds to a clause and consists of gadgets that \emph{decode} the states on the path gadgets and check whether the resulting assignment satisfies the clause.
In both lower bounds, the main technical contribution is the design of the \emph{path gadgets}, whereas the design of the \emph{decoding gadgets} can be adapted from known constructions. The main challenge in the construction of the path gadgets is that the appearance of twinclasses \emph{restricts} the design space: we \emph{cannot} attach separate gadgets to each vertex in the twinclass, but only gadgets to read the state of the twinclass as a \emph{whole}. To interface with the decoding gadgets, each path gadget contains a \emph{clique-like} center containing one vertex per desired state of the path gadget. An additional complication is the \emph{transitioning} of the state throughout a long path, where the presence of twinclasses means that we have \emph{less control} over the transitioning compared to the sparse case, e.g., when simply parameterizing by pathwidth.
\input{tex_arxiv/modtw_cvc_lb}
\input{tex_arxiv/modtw_fvs_lb}
\bibliographystyle{splncs04}
\section{Cut and Count for Modular-Treewidth}
\label{sec:modtw_cutandcount}
\subsection{General Approach}
In this section, we give an overview of the cut-and-count-technique and adapt it to parameterization by modular-treewidth. If we solve a problem on a graph $G = (V,E)$ involving connectivity constraints, we can make the following general definitions. We let $\mathcal{S} \subseteq \powerset{U}$ denote the set of \emph{solutions}, living over some \emph{universe} $U$, and we have to determine whether $\mathcal{S}$ is empty or not. The cut-and-count-technique does so in two parts:
\begin{itemize}
\item \textbf{Cut part:} By \emph{relaxing} the connectivity constraints, we obtain a set $\mathcal{S} \subseteq \mathcal{R} \subseteq \powerset{U}$ of candidate solutions that are not necessarily connected. The set $\mathcal{Q}$ will contain pairs $(X, C)$ consisting of a candidate solution $X \in \mathcal{R}$ and a consistent cut $C$ of $X$, as defined in \cref{dfn:cons_cut}.
\item \textbf{Count part:} We compute $|\mathcal{Q}|$ modulo some power of $2$ such that all non-connected candidates $X \in \mathcal{R} \setminus \mathcal{S}$ cancel, because each of them is consistent with a number of cuts that is divisible by this power of $2$. Hence, only connected candidates $X \in \mathcal{S}$ remain.
\end{itemize}
The main definition and property for the cut-and-count-technique are as follows.
\begin{dfn}[\cite{CyganNPPRW22}]
\label{dfn:cons_cut}
A cut $(V_L, V_R)$ of an undirected graph $G = (V, E)$ is \emph{consistent} if $u \in V_L$ and $v \in V_R$ implies $\{u,v\} \notin E$. A \emph{consistently cut subgraph} of $G$ is a pair $(X, (X_L, X_R))$ such that $X \subseteq V$ and $(X_L, X_R)$ is a consistent cut of $G[X]$. We denote the set of consistently cut subgraphs of $G$ by $\mathcal{C}(G)$.
\end{dfn}
\begin{lem}[\cite{CyganNPPRW22}]
\label{thm:cons_cut}
Let $X$ be a subset of vertices. The number of consistently cut subgraphs $(X, (X_L, X_R))$ is equal to $2^{\mathtt{cc}(G[X])}$.
\end{lem}
\begin{proof}
By the definition of a consistently cut subgraph $(X, (X_L, X_R))$ we have for every connected component $C$ of $G[X]$ that either $C \subseteq X_L$ or $C \subseteq X_R$. Hence, there are two choices for every connected component and we obtain $2^{\mathtt{cc}(G[X])}$ different consistently cut subgraphs $(X, (X_L, X_R))$. \qed
\end{proof}
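The count in \cref{thm:cons_cut} can be confirmed by brute force on a small instance. The following Python sketch (our own illustration, over an assumed toy graph) enumerates all cuts of a fixed $X$ and compares the number of consistent ones with $2^{\mathtt{cc}(G[X])}$:
\begin{verbatim}
from itertools import product

def components(vertices, edges):
    """Count connected components via depth-first search."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    seen, count = set(), 0
    for s in vertices:
        if s in seen:
            continue
        count, stack = count + 1, [s]
        while stack:
            u = stack.pop()
            if u not in seen:
                seen.add(u)
                stack.extend(adj[u] - seen)
    return count

X = [1, 2, 3, 4, 5]
E = [(1, 2), (2, 3), (4, 5)]    # G[X] has two components
consistent = 0
for sides in product("LR", repeat=len(X)):
    side = dict(zip(X, sides))
    # a cut is consistent iff no edge of G[X] crosses it
    consistent += all(side[u] == side[v] for u, v in E)

assert consistent == 2 ** components(X, E)   # 4 == 2^2
\end{verbatim}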
The cut-and-count-approach can fail if $|\mathcal{S}|$ is divisible by the considered power of $2$, as then even the connected solutions would cancel each other out. The isolation lemma, \cref{thm:isolation}, allows us to avoid this problem at the cost of randomization: we sample a weight function $\mathbf{w} \colon U \rightarrow [N]$ and instead count pairs of a fixed weight; the isolation lemma then tells us that, with high probability, some weight is attained by a unique solution, which therefore cannot cancel.
\begin{dfn}
A function $\mathbf{w} \colon U \rightarrow \mathbb{Z}$ \emph{isolates} a set family $\mathcal{F} \subseteq \powerset{U}$ if there is a unique $S' \in \mathcal{F}$ with $\mathbf{w}(S') = \min_{S \in \mathcal{F}} \mathbf{w}(S)$, where for subsets $X$ of $U$ we define $\mathbf{w}(X) = \sum_{u \in X} \mathbf{w}(u)$.
\end{dfn}
\begin{lem}[Isolation Lemma, \cite{MulmuleyVV87}]
\label{thm:isolation}
Let $\emptyset \neq \mathcal{F} \subseteq \powerset{U}$ be a set family over a universe $U$. Let $N \in \mathbb{N}$ and for each $u \in U$ choose a weight $\mathbf{w}(u) \in [N]$ uniformly and independently at random. Then
$\mathbb{P}[\mathbf{w} \text{ isolates } \mathcal{F}] \geq 1 - |U|/N$.
\end{lem}
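As a quick empirical illustration (our own sanity check, not part of the formal argument), one can sample weight functions and estimate how often a family is isolated:
\begin{verbatim}
import random
from itertools import combinations

U = list(range(8))
F = [set(c) for c in combinations(U, 3)]   # a nonempty family over U
N, trials, isolated = 4 * len(U), 10_000, 0

for _ in range(trials):
    w = {u: random.randint(1, N) for u in U}
    weights = sorted(sum(w[u] for u in S) for S in F)
    if len(weights) == 1 or weights[0] < weights[1]:
        isolated += 1

# The lemma guarantees P[w isolates F] >= 1 - |U|/N = 0.75 here;
# the observed rate is typically well above this bound.
print(isolated / trials)
\end{verbatim}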
\cref{thm:cons_cut} distinguishes disconnected candidates from connected candidates via the number of consistent cuts for the respective candidate. We determine this number not for a single relaxed solution, but for all of them with a fixed weight.
To apply the cut-and-count-technique for modular-treewidth, we first study how connectivity interacts with the modular structure. Typically, we consider vertex sets $X$ contained in some module ${\module^{\uparrow}} \in {\mathcal{M}_{\mathrm{tree}}^*}(G)$ that intersect at least two child modules of ${\module^{\uparrow}}$, i.e., $|\pi_{\module^{\uparrow}}(X)| \geq 2$. When $|\pi_{\module^{\uparrow}}(X)| = 1$, we can recurse in the modular decomposition tree until at least two child modules are intersected or we arrive at an easily solvable special case. The following exchange argument shows that the connectivity of $G[X]$ is not affected by the precise intersection $X \cap M$, $M \in \mathtt{children}({\module^{\uparrow}})$, but only whether $X \cap M$ is empty or not.
\begin{lem}
\label{thm:module_exchange_connected}
Let ${\module^{\uparrow}} \in {\mathcal{M}_{\mathrm{tree}}^*}(G)$ and $X \subseteq {\module^{\uparrow}}$ be a subset with $|\pi_{\module^{\uparrow}}(X)| \geq 2$ and such that $G[X]$ is connected. For any module $M \in \mathtt{children}({\module^{\uparrow}})$ with $X \cap M \neq \emptyset$ and $\emptyset \neq Y \subseteq M$, the graph $G[(X \setminus M) \cup Y]$ is connected.
\end{lem}
\begin{proof}
Since $G[X]$ is connected and intersects at least two modules, there has to be a module $M' \in \mathtt{children}({\module^{\uparrow}})$ adjacent to $M$ such that $X \cap M' \neq \emptyset$. The edges between $Y$ and $X \cap M'$ induce a biclique and hence all incident vertices must be connected to each other. Fix a vertex $u \in X \cap M$ and consider any $w \in X \setminus M$; then $G[X]$ contains a $u,w$-path $P$ such that the vertex $v$ after $u$ on $P$ is in $X \setminus M$. For any $y \in Y$, we obtain a $y,w$-path $P_y$ in $G[(X \setminus M) \cup Y]$ by replacing $u$ with $y$ in $P$. Finally, consider two vertices $u,w \in X \setminus M$; then there is a $u,w$-path $P$ in $G[X]$. If $P$ does not intersect $M$, then $P$ is also a path in $G[(X \setminus M) \cup Y]$. Otherwise, we can assume that $P$ contains exactly one vertex $v$ of $M$ and simply replace $v$ with some $y \in Y$ to obtain a $u,w$-path $P'$ in $G[(X \setminus M) \cup Y]$. Hence, $G[(X \setminus M) \cup Y]$ is connected as claimed. \qed
\end{proof}
Building upon \cref{thm:module_exchange_connected}, we can reduce checking the connectivity of $G[X]$ to the quotient graph at ${\module^{\uparrow}}$, as ${\qgraph{\pmodule}}$ is isomorphic to the induced subgraph of $G$ obtained by picking one vertex from each child module of ${\module^{\uparrow}}$.
\begin{lem}
\label{thm:quotient_connected}
Let ${\module^{\uparrow}} \in {\mathcal{M}_{\mathrm{tree}}^*}(G)$ and $X \subseteq {\module^{\uparrow}}$ with $|{\modprojection_\pmodule}(X)| \geq 2$, i.e., $X$ intersects at least two modules in $\mathtt{children}({\module^{\uparrow}})$. It holds that $G[X]$ is connected if and only if ${\qgraph{\pmodule}}[{\modprojection_\pmodule}(X)]$ is connected.
\end{lem}
\begin{proof}
For every module $M \in \mathtt{children}({\module^{\uparrow}})$ with $X \cap M \neq \emptyset$, pick a vertex $v_M \in X \cap M$ and define $X' = \{v_M : X \cap M \neq \emptyset, M \in \mathtt{children}({\module^{\uparrow}})\} \subseteq X$. Note that $G[X']$ is isomorphic to ${\qgraph{\pmodule}}[{\modprojection_\pmodule}(X)]$. Hence, we are done if we can show that $G[X]$ is connected if and only if $G[X']$ is connected. If $G[X]$ is connected, then so is $G[X']$ by repeatedly applying \cref{thm:module_exchange_connected}.
For the converse, suppose that $G[X']$ is connected. We argue that every $v \in X \setminus X'$ is adjacent to some $w \in X'$ and then it follows that $G[X]$ is connected as well. There is some $M \in \mathtt{children}({\module^{\uparrow}})$ with $v \in M$ and $v_M \in X'$ by definition of $X'$. Since $|X'| \geq 2$ and $G[X']$ is connected, there is a neighbor $w \in X'$ of $v_M$ in $G[X']$ and $w = v_{M'}$ for some $M' \in \mathtt{children}({\module^{\uparrow}}) \setminus \{M\}$. The vertex $w$ has to be a neighbor of $v$ because $M$ is a module and $w \notin M$. \qed
\end{proof}
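The lemma can be checked exhaustively on a toy modular graph. In the following Python sketch (our own illustration; the modules and their names are assumptions) three child modules form a path in the quotient graph, and the equivalence is verified for every admissible $X$:
\begin{verbatim}
from itertools import combinations

modules = {"M1": {0, 1}, "M2": {2, 3}, "M3": {4}}
qedges = [("M1", "M2"), ("M2", "M3")]      # quotient graph: a path
edges = [(u, v) for a, b in qedges
         for u in modules[a] for v in modules[b]]

def connected(vs, es):
    vs = set(vs)
    seen, stack = set(), [min(vs)]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        stack += [w for e in es if u in e for w in e if w in vs]
    return seen == vs

V = sorted(set.union(*modules.values()))
for r in range(1, len(V) + 1):
    for Xt in combinations(V, r):
        X = set(Xt)
        proj = {m for m, mm in modules.items() if mm & X}
        if len(proj) < 2:    # the lemma assumes >= 2 child modules
            continue
        assert (connected(X, [e for e in edges if set(e) <= X])
                == connected(proj, [e for e in qedges
                                    if set(e) <= proj]))
\end{verbatim}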
\cref{thm:quotient_connected} tells us that we do not need to consider \emph{heterogeneous} cuts, i.e., $(X, (X_L, X_R)) \in \mathcal{C}(G)$ with $X_L \cap M \neq \emptyset$ and $X_R \cap M \neq \emptyset$ for some module $M \in \Pi_{mod}(G)$, because checking connectivity can be reduced to a set that contains at most one vertex per module.
\begin{dfn}
Let ${\module^{\uparrow}} \in {\mathcal{M}_{\mathrm{tree}}^*}(G)$. We say that a cut $(X_L, X_R)$, with $X_L \cup X_R \subseteq {\module^{\uparrow}}$, is \emph{${\module^{\uparrow}}$-homogeneous} if $X_L \cap M = \emptyset$ or $X_R \cap M = \emptyset$ for every $M \in \mathtt{children}({\module^{\uparrow}})$. We may just say that $(X_L, X_R)$ is \emph{homogeneous} when ${\module^{\uparrow}}$ is clear from the context. We define for every subgraph $G'$ of $G$ the set $\homcuts{{\module^{\uparrow}}}(G') = \{(X, (X_L, X_R)) \in \mathcal{C}(G') : (X_L, X_R) \text{ is ${\module^{\uparrow}}$-homogeneous}\}.$
\end{dfn}
Combining \cref{thm:cons_cut} with \cref{thm:quotient_connected}, the connectivity of $G[X]$ can be determined by counting ${\module^{\uparrow}}$-homogeneous consistent cuts of $G[X]$ modulo 4.
\begin{lem}
\label{thm:hom_cut}
Let ${\module^{\uparrow}} \in {\mathcal{M}_{\mathrm{tree}}^*}(G)$ and $X \subseteq {\module^{\uparrow}}$ with $|{\modprojection_\pmodule}(X)| \geq 2$. It holds that $|\{(X_L, X_R) : (X, (X_L, X_R)) \in \homcuts{{\module^{\uparrow}}}(G)\}| = 2^{\mathtt{cc}({\qgraph{\pmodule}}[{\modprojection_\pmodule}(X)])}$ and $G[X]$ is connected if and only if $|\{(X_L, X_R) : (X, (X_L, X_R)) \in \homcuts{{\module^{\uparrow}}}(G)\}| \neq 0 \mod 4$.
\end{lem}
\begin{proof}
Fix ${\module^{\uparrow}} \in {\mathcal{M}_{\mathrm{tree}}^*}(G)$ and $X \subseteq {\module^{\uparrow}}$ with $|{\modprojection_\pmodule}(X)| \geq 2$. For any set $S \subseteq {\module^{\uparrow}}$, we write $S^q = {\modprojection_\pmodule}(S)$ in this proof. We will argue that the map $(X_L, X_R) \mapsto (X^q_L, X^q_R)$ is a bijection between $\{(X_L, X_R) : (X, (X_L, X_R)) \in \homcuts{{\module^{\uparrow}}}(G)\}$ and $\{(Y_L, Y_R) : (X^q, (Y_L, Y_R)) \in \mathcal{C}({\qgraph{\pmodule}})\}$. First of all, notice that $(X^q_L, X^q_R)$ is a cut of ${\qgraph{\pmodule}}[X^q]$ because $(X_L, X_R)$ is homogeneous. Furthermore, $(X^q_L, X^q_R)$ is a consistent cut, since any edge $\{v^q_{M_1}, v^q_{M_2}\}$ crossing $(X^q_L, X^q_R)$ would give rise to an edge $\{u_1, u_2\}$, $u_i \in M_i$, $i \in [2]$, crossing $(X_L, X_R)$ which contradicts the assumption that $(X_L, X_R)$ is a consistent cut.
For injectivity, consider $(X, (X_L, X_R))$, $(X, (Z_L, Z_R)) \in \homcuts{{\module^{\uparrow}}}(G)$ such that $(X^q_L, X^q_R) = (Z^q_L, Z^q_R)$. Since they are homogeneous cuts, we can compute
$$X_L = \bigcup_{{v^q_\module} \in X^q_L} X \cap M = \bigcup_{{v^q_\module} \in Z^q_L} X \cap M = Z_L$$
and similarly $X_R = Z_R$. For surjectivity, note that every $(Y_L, Y_R)$ with $(X^q, (Y_L, Y_R)) \in \mathcal{C}({\qgraph{\pmodule}})$ is the image of the homogeneous cut $(X, (\bigcup_{v^q_M \in Y_L} X \cap M, \bigcup_{v^q_M \in Y_R} X \cap M))$.
Finally, we can apply \cref{thm:cons_cut} to $X^q \subseteq V({\qgraph{\pmodule}})$ to obtain, via the bijection, that $|\{(X, (X_L, X_R)) \in \homcuts{{\module^{\uparrow}}}(G)\}| = 2^{\mathtt{cc}({\qgraph{\pmodule}}[X^q])}$. Hence, ${\qgraph{\pmodule}}[X^q]$ is connected if and only if $|\{(X, (X_L, X_R)) \in \homcuts{{\module^{\uparrow}}}(G)\}| \neq 0 \mod 4$. The statement then follows by \cref{thm:quotient_connected}. \qed
\end{proof}
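In the same spirit, the following self-contained Python sketch (again our own illustration on an assumed toy modular graph) counts the homogeneous consistent cuts of a fixed $X$ and checks both the $2^{\mathtt{cc}}$ count and the mod-$4$ connectivity criterion:
\begin{verbatim}
from itertools import product

modules = {"M1": {0, 1}, "M2": {2, 3}, "M3": {4}}
qedges = [("M1", "M2"), ("M2", "M3")]
edges = [(u, v) for a, b in qedges
         for u in modules[a] for v in modules[b]]

def cc(vs, es):
    """Number of connected components."""
    vs, seen, n = set(vs), set(), 0
    for s in vs:
        if s in seen:
            continue
        n, stack = n + 1, [s]
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            stack += [w for e in es if u in e for w in e if w in vs]
    return n

X = {0, 1, 2, 4}                        # meets all three child modules
EX = [e for e in edges if set(e) <= X]
proj = {m for m, mm in modules.items() if mm & X}
QE = [e for e in qedges if set(e) <= proj]

hom = 0
for sides in product("LR", repeat=len(X)):
    side = dict(zip(sorted(X), sides))
    consistent = all(side[u] == side[v] for u, v in EX)
    homogeneous = all(len({side[v] for v in mm & X}) <= 1
                      for mm in modules.values() if mm & X)
    hom += consistent and homogeneous

assert hom == 2 ** cc(proj, QE)         # two homogeneous consistent cuts
assert hom % 4 != 0                     # hence G[X] is connected
\end{verbatim}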
\subsection{Connected Vertex Cover}
\label{sec:modtw_cvc_lb}
This subsection is devoted to proving that \textsc{Connected Vertex Cover}\xspace parameterized by twinclass-pathwidth cannot be solved in time $\Oh^*((5-\varepsilon)^{\tcpw(G)})$ for any $\varepsilon > 0$ unless the SETH\xspace fails. We first design the path gadget and analyze it in isolation; afterwards we present the complete construction. The decoding gadgets are directly adapted from the lower bound for \textsc{Connected Vertex Cover}\xspace parameterized by pathwidth given by Cygan et al.~\cite{CyganNPPRW11arxiv}.
\newpage
\subsubsection{Path Gadget Construction and Analysis}
\paragraph{Root.} We create a vertex $\hat{r}$ called the \emph{root} and attach a vertex $\hat{r}'$ of degree 1 to ensure that every connected vertex cover contains $\hat{r}$. Given a subset $X \subseteq V(G)$ with $\hat{r} \in X$, a vertex $v \in X$ is \emph{root-connected} in $X$ if there is a $v,\hat{r}$-path in $G[X]$. We just say \emph{root-connected} if $X$ is clear from the context. Note that $G[X]$ is connected if and only if all vertices of $X$ are root-connected in $X$.
\paragraph{States.} We define the three atomic states $\mathbf{atoms} = \{\mathbf{0}, \mathbf{1}_0, \mathbf{1}_1\}$ and define the two predicates $\mathtt{sol}, \mathtt{conn} \colon \mathbf{atoms} \rightarrow \{0,1\}$ by $\mathtt{sol}(\mathbf{a}) = [\mathbf{a} \in \{\mathbf{1}_0, \mathbf{1}_1\}]$ and $\mathtt{conn}(\mathbf{a}) = [\mathbf{a} = \mathbf{1}_1]$. The atom $\mathbf{0}$ means that a vertex is not inside the partial solution; $\mathbf{1}_1$ and $\mathbf{1}_0$ indicate that a vertex is inside the partial solution and the subscript indicates whether it is root-connected or not. Building on these atomic states, we define five states consisting of four atomic states each:
\begin{itemize}
\item $\mathbf{s}^1 = (\mathbf{0}_{\phantom{0}}, \mathbf{0}_{\phantom{0}}, \mathbf{1}_1, \mathbf{1}_1)$,
\item $\mathbf{s}^2 = (\mathbf{1}_0, \mathbf{0}_{\phantom{0}}, \mathbf{1}_1, \mathbf{1}_0)$,
\item $\mathbf{s}^3 = (\mathbf{1}_1, \mathbf{0}_{\phantom{0}}, \mathbf{1}_0, \mathbf{1}_0)$,
\item $\mathbf{s}^4 = (\mathbf{1}_0, \mathbf{1}_0, \mathbf{1}_1, \mathbf{0}_{\phantom{0}})$,
\item $\mathbf{s}^5 = (\mathbf{1}_1, \mathbf{1}_1, \mathbf{1}_0, \mathbf{0}_{\phantom{0}})$.
\end{itemize}
Why the states are numbered in this way will become clear later.
We collect the five states in the set $\mathbf{states} = \{\mathbf{s}^1, \ldots, \mathbf{s}^5\}$ and use the notation $\mathbf{s}^\ell_i \in \mathbf{atoms}$, $i \in [4]$, $\ell \in [5]$, to refer to the $i$-th coordinate of state $\mathbf{s}^\ell$.
\paragraph{Path gadget.} The path gadget $P$ is constructed as follows. We create 15 \emph{central} vertices $w_{\ell, i}$, $\ell \in [5]$, $i \in [3]$, in 5 sets $W_\ell = \{w_{\ell, 1}, w_{\ell, 2}, w_{\ell, 3}\}$ of size 3 and each set will form a twinclass. We create 2 \emph{input} vertices $u_1, u_2$, 4 \emph{cost} vertices $w_{+,1}, \ldots, w_{+,4}$, 5 \emph{clique} vertices $v_1, \ldots, v_5$, and 5 \emph{complement} vertices $\bar{v}_1, \ldots, \bar{v}_5$. Furthermore, for every $f \in [4]$, we create 2 \emph{auxiliary} vertices $a_{1,f}, a_{2,f}$, 2 \emph{indicator} vertices $b_{0,f}, b_{1,f}$, and 2 \emph{connectivity} vertices $c_{0,f}, c_{1,f}$. Finally, we create 4 further auxiliary vertices $\bar{a}_{1,1}, \bar{a}_{2,1}, \bar{a}_{1,2}, \bar{a}_{2,2}$ and 4 further connectivity vertices $\bar{c}_{0,1}, \bar{c}_{1,1}, \bar{c}_{0,2}, \bar{c}_{1,2}$. The vertices $a_{1,4}$ and $\bar{a}_{1,2}$ will also be called \emph{output} vertices.
We add edges such that the central sets $W_\ell$, $\ell \in [5]$, are pairwise adjacent twinclasses, i.e. they induce a complete 5-partite graph, and such that the clique vertices $v_\ell$, $\ell \in [5]$, form a clique. Each complement vertex $\bar{v}_\ell$, $\ell \in [5]$, is made adjacent to $W_\ell$ and to $v_\ell$. The cost vertices $w_{+,1}$ and $w_{+,2}$ are made adjacent to $W_1$; $w_{+,3}$ is made adjacent to $W_2$; and $w_{+,4}$ is made adjacent to $W_3$.
For every $f \in [4]$, we add edges $\{a_{1,f}, a_{2,f}\}$, $\{a_{2,f}, b_{1,f}\}$, $\{b_{1,f}, b_{0,f}\}$, $\{b_{0,f}, a_{1,f}\}$, forming a $C_4$, and the edges $\{a_{1,f}, c_{1,f}\}$ and $\{c_{0,f}, c_{1,f}\}$. For every $i \in [2]$, we add edges $\{\bar{a}_{1,i}, \bar{a}_{2,i}\}$, $\{\bar{a}_{1,i}, \bar{c}_{1,i}\}$, $\{\bar{c}_{0,i}, \bar{c}_{1,i}\}$. The input vertices $u_1$ and $u_2$ are made adjacent to each $a_{1,f}$ for $f \neq 4$ and they are made adjacent to $\bar{a}_{1,1}$.
All vertices except $\{a_{1,f} : f \in [4]\} \cup \{\bar{a}_{1,i}, \bar{a}_{2,i} : i \in [2]\} \cup \{u_1, u_2\}$ are made adjacent to the root $\hat{r}$. Finally, we describe how to connect the central vertices to the rest. Each twinclass $W_\ell$, $\ell \in [5]$, is made adjacent to $b_{[\mathbf{s}^\ell_2 = \mathbf{0}], f}$ and to $c_{[\mathbf{s}^\ell_1 = \mathbf{s}^\ell_2],f}$ for all $f \in [4]$ and $W_\ell$ is also made adjacent to $\bar{c}_{[\mathbf{s}^\ell_1 \neq \mathbf{1}_0], 1}$ and $\bar{c}_{[\mathbf{s}^\ell_1 \neq \mathbf{1}_1], 2}$. The construction is depicted in \cref{fig:cvc_modtw_path_1} and \cref{fig:cvc_modtw_path_2}.
\begin{figure}[h]
\centering
\scalebox{0.9}{\tikzfig{pictures/cvc_modtw_path_gadget_new}}
\caption{Vertices depicted with a rectangle are adjacent to the root vertex $\hat{r}$. The graph in the black dashed rectangle appears thrice with the same connections to the remaining vertices. The vertices inside the cyan dashed rectangle induce a complete 5-partite graph. The dashed circles at the central vertices indicate the number of cost vertices attached to this set and the dashed vertices and edges at the right indicate how to connect to the next copy of the path gadget.}
\label{fig:cvc_modtw_path_1}
\vspace*{-0.3cm}
\end{figure}
We emphasize that the graphs $P[\{a_{1,f}, a_{2,f}, b_{0,f}, b_{1,f}, c_{0,f}, c_{1,f}\} \cup \bigcup_{\ell \in [5]} W_\ell]$, $f \in [4]$, are all isomorphic to each other; however, the first three are also adjacent to the input vertices $u_1$ and $u_2$, whereas the fourth one is not. To study the path gadget $P$, we mostly consider the parts in \cref{fig:cvc_modtw_path_1}; the parts in \cref{fig:cvc_modtw_path_2} are considerably simpler and will later allow us to simply attach the standard decoding gadget already used by Cygan et al.~\cite{CyganNPPRW11arxiv} for \textsc{Connected Vertex Cover}\xspace parameterized by pathwidth.
\begin{figure}[h]
\centering
\scalebox{0.9}{\tikzfig{pictures/cvc_modtw_path_gadget_new_2}}
\caption{The remaining parts of the path gadget $P$ which will be connected to the decoding gadget. All vertices that are depicted with a rectangle are adjacent to the root vertex $\hat{r}$. The vertices inside the cyan dashed rectangle induce a complete 5-partite graph or a clique respectively. Only the clique vertices have neighbors outside of $P$.}
\label{fig:cvc_modtw_path_2}
\vspace*{-0.5cm}
\end{figure}
For the upcoming lemmas, we assume that $G$ is a graph that contains $P + \hat{r}$ as an induced subgraph and that only the input vertices $u_1, u_2$, the output vertices $a_{1,4}, \bar{a}_{1,2}$, and the clique vertices $v_\ell, \ell \in [5]$, have neighbors outside this copy of $P + \hat{r}$. Furthermore, we assume that $\{u_1, u_2\}$ is a twinclass in $G$. Let $X$ be a vertex cover of $G$ with $\hat{r} \in X$. We study the behavior of such vertex covers on $P$; we will abuse notation and write $X \cap P$ instead of $X \cap V(P)$.
Observe that the set
\begin{align*}
M = \, & \{\{a_{1,f}, a_{2,f}\}, \{b_{0,f}, b_{1,f}\}, \{c_{0,f}, c_{1,f}\} : f \in [4]\} \\
\cup\, & \{\{\bar{a}_{1,i}, \bar{a}_{2,i}\}, \{\bar{c}_{0,i}, \bar{c}_{1,i}\} : i \in [2]\} \\
\cup\, & \{\{v_\ell, \bar{v}_\ell\} : \ell \in [5]\}
\end{align*}
is a matching in $P$ of size $4 \cdot 3 + 2 \cdot 2 + 5 = 21$.
\begin{lem}\label{thm:cvc_modtw_path_gadget_lb}
We have that $|\{\ell \in [5] : W_\ell \subseteq X\}| \geq 4$ and $|X \cap P| \geq |M| + 4 \cdot 3 = 33$.
If $G[X]$ is connected, then $|X \cap P| \geq |M| + 4 \cdot 3 + 2 = 35$ and in case of equality, $|X \cap \{u_1, u_2, w_{+,1}, \ldots, w_{+,4}\}| = 2$ and there is a unique $\ell \in [5]$ such that $W_\ell \not\subseteq X$.
\end{lem}
\begin{proof}
The vertex set $\bigcup_{\ell \in [5]} W_\ell$ induces a complete $5$-partite graph disjoint from the matching $M$. Any vertex cover must contain at least 4 of the 5 partition classes completely, otherwise there is an edge that is not covered, and since each class is of size 3, this accounts for $4 \cdot 3 = 12$ further vertices. This shows that $|X \cap P| \geq |M| + 4 \cdot 3 = 33$.
If $X$ completely contains all $W_\ell$, $\ell \in [5]$, then it immediately follows that $|X \cap P| \geq 36$, so if $|X \cap P| = 35$, then there is a unique $\ell \in [5]$ such that $W_\ell \not\subseteq X$. If $\ell = 1$, then we must have $w_{+,1}, w_{+,2} \in X$, so $|X \cap P| \geq 35$. Before we proceed with the remaining proof, notice that $A_f = \{a_{1,f}, a_{2,f}, b_{0,f}, b_{1,f}\}$ induces a $C_4$ for all $f \in [4]$, so if $|X \cap A_f| = 2$, then $X \cap A_f \in \{\{a_{1,f}, b_{1,f}\}, \{a_{2,f}, b_{0,f}\}\}$, i.e., $X$ must pick an antipodal pair from $A_f$.
For the remainder of the proof, assume that $G[X]$ is connected. Suppose that $X \cap \{u_1, u_2\} = \emptyset$; then $a_{1,f} \in X$ for all $f \in [3]$ and each $a_{1,f}$ must be root-connected in $X$. If $\ell \in \{2,3\}$, then $b_{1,f}, c_{0,f} \in X$, so whichever neighbor of $a_{1,f}$ we choose for the sake of root-connectedness, the size of $X$ increases by one for every $f \in [3]$. If $\ell \in \{4,5\}$, then $b_{0,f} \in X$, so $a_{1,f}$ is root-connected, but we need to pick another vertex of $A_f$ to cover the remaining edge induced by $A_f$, again increasing the size of $X$. In summary, we obtain $|X \cap P| \geq 36$ if $\ell > 1$ and $X \cap \{u_1, u_2\} = \emptyset$.
Suppose that $|X \cap \{u_1, u_2\}| = 1$ and without loss of generality $u_1 \in X$ and $u_2 \notin X$. Again, we must have $a_{1,f} \in X$ for all $f \in [3]$. If $\ell \in \{2, 3\}$, we have that $w_{+,3} \in X$ or $w_{+,4} \in X$. If $\ell \in \{4, 5\}$, we again see that $|X \cap A_f| \geq 3$ for all $f \in [3]$ and hence $|X \cap P| \geq 37$, so $|X \cap P| \geq 35$ in either case.
By the previous arguments, we see that $|X \cap P| = 35$ and $X \cap \{u_1, u_2\} = \emptyset$ implies that $\ell = 1$; $|X \cap P| = 35$ and $|X \cap \{u_1, u_2\}| = 1$ implies that $\ell \in \{2,3\}$; $|X \cap P| = 35$ and $|X \cap \{u_1, u_2\}| = 2$ implies that $\ell \in \{4,5\}$. So, the equation $|X \cap \{u_1, u_2, w_{+,1}, \ldots, w_{+,4}\}| = 2$ follows. \qed
\end{proof}
We want to study the connected vertex covers on $P$ locally, but connectivity is not a local property. However, through our assumption, we know that any vertex in $G[X]$ that is not root-connected in $X \cap (P + \hat{r})$ has to be root-connected through the input or output vertices. In particular, although the clique vertices $v_\ell, \ell \in [5]$, may be adjacent to vertices outside of $P + \hat{r}$, any path leaving $P + \hat{r}$ through some clique vertex immediately yields a path to $\hat{r}$ in $P + \hat{r}$, since the clique vertices are adjacent to $\hat{r}$. This motivates distinguishing whether a vertex of $P + \hat{r}$ is root-connected already within $P + \hat{r}$ or only via a path that leaves $P$.
Let $Y \subseteq V(G)$, we define $\mathbf{state}_Y \colon V(G) \rightarrow \mathbf{atoms}$ by
\begin{equation*}
\mathbf{state}_Y(v) = \begin{cases}
\mathbf{0} & \text{if } v \notin Y, \\
\mathbf{1}_0 & \text{if } v \in Y \text{ and $v$ is not root-connected in $Y \cup \{\hat{r}\}$}, \\
\mathbf{1}_1 & \text{if } v \in Y \text{ and $v$ is root-connected in $Y \cup \{\hat{r}\}$}. \\
\end{cases}
\end{equation*}
For $Y \subseteq V(P)$, we define $\mathbf{state}(Y) = (\mathbf{state}_Y(u_1), \mathbf{state}_Y(u_2), \mathbf{state}_Y(\bar{a}_{1,2}), \mathbf{state}_Y(a_{1,4}))$.
We say that a vertex subset $Y \subseteq V(G)$ is \emph{canonical} with respect to the twinclass $\{u_1, u_2\}$ if $u_2 \in Y$ implies $u_1 \in Y$; we will just say that $Y$ is canonical if $\{u_1, u_2\}$ is clear from the context. Since $\{u_1, u_2\}$ is a twinclass, we can always assume that we are working with a canonical subset.
\begin{lem}\label{thm:cvc_modtw_path_gadget_tight}
If $X$ is canonical, $G[X]$ is connected, and $|X \cap P| \leq 35$, then $|X \cap P| = 35$ and there is a unique $\ell \in [5]$ such that $v_\ell \notin X$ and we have that $\mathbf{state}(X \cap P) = \mathbf{s}^\ell$.
\end{lem}
\begin{proof}
\cref{thm:cvc_modtw_path_gadget_lb} implies that $|X \cap P| = 35$, $|X \cap \{u_1, u_2, w_{+,1}, \ldots, w_{+,4}\}| = 2$, that $X$ contains exactly one endpoint of each edge in $M$ and that there is a unique $\ell \in [5]$ such that $W_\ell \not\subseteq X$. To cover all edges between $W_\ell$ and $\bar{v}_\ell$, we must have that $\bar{v}_\ell \in X$ and $v_\ell \notin X$, since $\{\bar{v}_\ell, v_\ell\} \in M$. Furthermore, we must have $X \cap \{v_1, \ldots, v_5\} = \{v_1, \ldots, v_5\} \setminus \{v_\ell\}$, because otherwise $X$ does not cover the clique induced by $v_1, \ldots, v_5$. Hence, the uniqueness of $v_\ell$ follows.
Recall that $A_f = \{a_{1,f}, a_{2,f}, b_{0,f}, b_{1,f}\}$ induces a $C_4$ and $|X \cap A_f| = 2$ because $A_f$ contains two edges of $M$, hence we have that $X \cap A_f \in \{\{a_{1,f}, b_{1,f}\}, \{a_{2,f}, b_{0,f}\}\}$ for all $f \in [4]$.
We claim that $\mathbf{state}_{(X \cap P) \setminus \{u_1, u_2\}}(a_{1,f}) = \mathbf{s}^\ell_4$ for all $f \in [4]$. Observe that $\mathbf{s}^\ell_2 = \mathbf{0} \Leftrightarrow \mathbf{s}^\ell_4 \neq \mathbf{0}$ and $\mathbf{s}^\ell_1 = \mathbf{s}^\ell_2 \Leftrightarrow \mathbf{s}^\ell_4 \neq \mathbf{1}_0$. Hence, by construction $W_\ell$ is adjacent to $b_{[s^\ell_4 \neq \mathbf{0}],f}$ and $c_{[\mathbf{s}^\ell_4 \neq \mathbf{1}_0], f}$, so $b_{[s^\ell_4 \neq \mathbf{0}],f}, c_{[\mathbf{s}^\ell_4 \neq \mathbf{1}_0], f} \in X$ to cover the edges incident to $W_\ell$. So, we see that $a_{1,f} \in X \Leftrightarrow b_{1,f} \in X \Leftrightarrow \mathbf{s}^\ell_4 \neq \mathbf{0}$ as desired. Concerning the root-connectivity of $a_{1,f}$ in $(X \cap P) \setminus \{u_1, u_2\}$, we know that the adjacent vertices $a_{2,f}$ and $b_{0,f}$ are not in $X$ when $a_{1,f}$ is in $X$, due to $A_f$ inducing a $C_4$, hence $a_{1,f}$ can only be root-connected via $c_{1,f}$. Finally, we see that $c_{1,f} \in X \Leftrightarrow \mathbf{s}^\ell_4 \neq \mathbf{1}_0$. This proves the claim.
The claim implies that $\mathbf{state}_{X \cap P}(a_{1,4}) = \mathbf{s}^\ell_4$ as desired. We proceed by computing $\mathbf{state}_{(X \cap P) \setminus \{u_1, u_2\}}(\bar{a}_{1,i})$ for $i \in \{1,2\}$. Due to the degree-$1$ neighbor $\bar{a}_{2,i}$, we see that $\bar{a}_{1,i} \in X$ because $X$ is a connected vertex cover. The vertex $\bar{a}_{1,i}$ can only be root-connected via $\bar{c}_{1,i}$ and because $\bar{c}_{1,i}$ is an endpoint of a matching edge, we see that $\bar{c}_{1,i} \in X$ if and only if $\bar{c}_{1,i}$ is adjacent to $W_\ell$. For $i = 1$, we have that
\begin{equation*}
\mathbf{state}_{(X \cap P) \setminus \{u_1, u_2\}}(\bar{a}_{1,1}) = \mathbf{1}_1 \Leftrightarrow \bar{c}_{1,1} \in X \Leftrightarrow \mathbf{s}^\ell_1 \neq \mathbf{1}_0 \Leftrightarrow \ell \in \{1,3,5\}.
\end{equation*}
For $i = 2$, we have that
\begin{equation*}
\mathbf{state}_{(X \cap P) \setminus \{u_1, u_2\}}(\bar{a}_{1,2}) = \mathbf{1}_1 \Leftrightarrow \bar{c}_{1,2} \in X \Leftrightarrow \mathbf{s}^\ell_1 \neq \mathbf{1}_1 \Leftrightarrow \mathbf{s}^\ell_3 = \mathbf{1}_1.
\end{equation*}
In particular, we have shown that $\mathbf{state}_{X \cap P}(\bar{a}_{1,2}) = \mathbf{s}^\ell_3$ as desired.
It remains to show that $\mathbf{state}_{X \cap P}(u_1) = \mathbf{s}^\ell_1$ and $\mathbf{state}_{X \cap P}(u_2) = \mathbf{s}^\ell_2$. Due to $|X \cap \{u_1, u_2, w_{+,1}, \ldots, w_{+,4}\}| = 2$ and $X$ being canonical, we see that
\begin{equation*}
X \cap \{u_1, u_2\} = \begin{cases}
\emptyset, & \ell = 1, \\
\{u_1\}, & \ell \in \{2,3\}, \\
\{u_1, u_2\}, & \ell \in \{4,5\}.
\end{cases}
\end{equation*}
Hence, we only have to determine the root-connectivity of $u_1$ and possibly $u_2$ in $X \cap P$ for $\ell > 1$. They can only obtain root-connectivity via $a_{1,1}$, $a_{1,2}$, $a_{1,3}$, or $\bar{a}_{1,1}$. By the previous calculations, at least one of these is root-connected in $(X \cap P) \setminus \{u_1, u_2\}$ if and only if $\mathbf{s}^\ell_3 = \mathbf{1}_0$ or $\mathbf{s}^\ell_4 = \mathbf{1}_1$, which happens precisely when $\ell \in \{3,5\}$ as desired (as $\ell = 1$ is excluded). \qed
\end{proof}
\begin{lem}\label{thm:cvc_modtw_state_exists}
For every $\ell \in [5]$, there exists a canonical vertex cover $X_P^\ell$ of $P$ such that $|X_P^\ell| = 35$, $X_P^\ell \cap \{v_1, \ldots, v_5\} = \{v_1, \ldots, v_5\} \setminus \{v_\ell\}$, and $\mathbf{state}(X_P^\ell) = \mathbf{s}^\ell$. If $X$ is a vertex cover of $G$ with $\hat{r} \in X$, $X \cap P = X_P^\ell$, and $\mathbf{state}_X(\{u_1$, $u_2$, $\bar{a}_{1,2}$, $a_{1,4}\}) \subseteq \{\mathbf{0}, \mathbf{1}_1\}$, then every vertex of $X_P^\ell$ is root-connected in $X$.
\end{lem}
\begin{proof}
We claim that
\begin{equation*}
X_P^\ell = \left(\bigcup_{k \in [5] \setminus \{\ell\}} W_k \cup \{v_k\}\right) \cup \{\bar{a}_{1,1}, \bar{a}_{1,2}\} \cup \{a_{2 - [\mathbf{s}_2^\ell = 0], f} : f \in [4]\} \cup U_\ell \cup N(W_\ell),
\end{equation*}
where $U_1 = \emptyset$, $U_2 = U_3 = \{u_1\}$, $U_4 = U_5 = \{u_1, u_2\}$, is the desired vertex cover. Clearly, $X_P^\ell$ is canonical. By construction of $P$, we compute that
\begin{equation*}
N(W_\ell) = \{\bar{v}_\ell, \bar{c}_{[s_1^\ell \neq \mathbf{1}_0], 1}, \bar{c}_{[s_1^\ell \neq \mathbf{1}_1], 2}\} \cup \{b_{[\mathbf{s}_2^\ell = 0],f}, c_{[\mathbf{s}_1^\ell = \mathbf{s}_2^\ell],f} : f \in [4]\} \cup W_{+,\ell},
\end{equation*}
where $W_{+,1} = \{w_{+,1}, w_{+,2}\}, W_{+,2} = \{w_{+,3}\}, W_{+,3} = \{w_{+,4}\}, W_{+,4} = W_{+,5} = \emptyset$. Note that $|U_\ell| + |W_{+,\ell}| = 2$ and hence $|X_P^\ell| = 35$ for all $\ell \in [5]$.
We proceed by verifying that $X_P^\ell$ is a vertex cover of $P$. The only non-trivial edges to consider are $\{a_{1,f}, c_{1,f}\}$, $f \in [4]$, and the edges between $\{u_1, u_2\}$ and $\{a_{1,f} : f \in [3]\}$. If $a_{1,f} \notin X_P^\ell$, then $\mathbf{s}_2^\ell \neq \mathbf{0}$ which also implies that $\mathbf{s}_1^\ell = \mathbf{s}_2^\ell$ and hence $c_{1,f} \in X_P^\ell$, so the edge $\{a_{1,f}, c_{1,f}\}$, $f \in [4]$, is covered in all cases. If $1 \leq \ell \leq 3$, then $\mathbf{s}_2^\ell = \mathbf{0}$, so $a_{1,f} \in X_P^\ell$ for all $f \in [4]$. If $4 \leq \ell \leq 5$, then $u_1, u_2 \in X$, so in either case the edges between $\{u_1, u_2\}$ and $\{a_{1,f} : f \in [3]\}$ are covered.
Moving on to the second part, assume that $X$ is a vertex cover of $G$ with $\hat{r} \in X$, $X \cap P = X_P^\ell$, and $\mathbf{state}_X(\{u_1, u_2, \bar{a}_{1,2}, a_{1,4}\}) \subseteq \{\mathbf{0}, \mathbf{1}_1\}$. We only have to consider the vertices in $X_P^\ell \setminus N(\hat{r}) \subseteq \{a_{1,f} : f \in [4]\} \cup \{\bar{a}_{1,1}, \bar{a}_{1,2}\}$. The statement immediately follows if $u_1$ or $u_2$ is root-connected in $X$, because they are adjacent to all vertices in $\{a_{1,f} : f \in [3]\} \cup \{\bar{a}_{1,1}\}$ and $a_{1,4}$ and $\bar{a}_{1,2}$ are handled by assumption. It remains to consider the case $u_1, u_2 \notin X$ which corresponds to $\ell = 1$, so we see that $a_{1,f}, c_{1,f} \in X$ for all $f \in [4]$ and $\bar{c}_{1,1} \in X$. Then, $a_{1,f}$ is root-connected via $c_{1,f}$ and $\bar{a}_{1,1}$ is root-connected via $\bar{c}_{1,1}$. \qed
\end{proof}
In the complete construction, we create long paths by repeatedly concatenating the path gadgets $P$. To study the \emph{state transitions} between two consecutive path gadgets, suppose that we have two copies $P^1$ and $P^2$ of $P$ such that the vertices $a_{1,4}$ and $\bar{a}_{1,2}$ in $P^1$ are joined to the vertices $u_1$ and $u_2$ in $P^2$. We denote the vertices of $P^1$ with a superscript $1$ and the vertices of $P^2$ with a superscript $2$, e.g., $a^1_{1,4}$ refers to the vertex $a_{1,4}$ of $P^1$. Again, suppose that $P^1$ and $P^2$ are embedded as induced subgraphs in a larger graph $G$ with a root vertex $\hat{r}$ and that only the vertices $u_1^1, u_2^1, a_{1,4}^2, \bar{a}_{1,2}^2$ and the clique vertices $v^1_\ell, v^2_\ell$, $\ell \in [5]$, have neighbors outside of $P^1 + P^2 + \hat{r}$. Let $X$ be a connected vertex cover of $G$ with $\hat{r} \in X$.
\begin{lem}\label{thm:cvc_modtw_path_transition}
Suppose that $X$ is canonical with respect to $\{u_1^1, u_2^1\}$ and $\{u_1^2, u_2^2\}$, that $G[X]$ is connected and that $|X \cap P^1| \leq 35$ and $|X \cap P^2| \leq 35$, then $\mathbf{state}(X \cap P^1) = \mathbf{s}^{\ell_1}$ and $\mathbf{state}(X \cap P^2) = \mathbf{s}^{\ell_2}$ with $\ell_1 \leq \ell_2$.
Additionally, for each $\ell \in [5]$, the set $X^\ell = X^\ell_{P^1} \cup X^\ell_{P^2}$ is a vertex cover of $P^1 + P^2$ with $\mathbf{state}_{X^\ell}(\{u_1^1, u_2^1, a_{1,4}^2, \bar{a}_{1,2}^2\}) \subseteq \{\mathbf{0}, \mathbf{1}_1\}$.
\end{lem}
\begin{proof}
By \cref{thm:cvc_modtw_path_gadget_tight}, we see that there are $\ell_1, \ell_2 \in [5]$ such that $\mathbf{state}(X \cap P^1) = \mathbf{s}^{\ell_1}$ and $\mathbf{state}(X \cap P^2) = \mathbf{s}^{\ell_2}$. It remains to show that $\ell_1 \leq \ell_2$.
Define $U^1 = \{a_{1,4}^1, \bar{a}_{1,2}^1\}$ and $U^2 = \{u_1^2, u_2^2\}$ and $U = U^1 \cup U^2$. By the assumption on how $P^1 + P^2 + \hat{r}$ can be connected to the rest of the graph $G$, one can see that any path from $U$ to $\hat{r}$ passes through some vertex in $(V(P^1) \cup V(P^2)) \cap N(\hat{r})$. Hence, we can determine whether the vertices of $X \cap U$ are root-connected in $X$ by just considering the graph $P^1 + P^2 + \hat{r}$.
\newcommand{\bar{\state}}{\bar{\mathbf{s}}}
Consider the state pairs $\bar{\state}^1 = (\mathbf{state}_{X \cap P^1}(\bar{a}_{1,2}^1), \mathbf{state}_{X \cap P^1}(a_{1,4}^1)) = (\mathbf{s}^{\ell_1}_3, \mathbf{s}^{\ell_1}_4)$ and $\bar{\state}^2 = (\mathbf{state}_{X \cap P^2}(u_1^2), \mathbf{state}_{X \cap P^2}(u_2^2)) = (\mathbf{s}^{\ell_2}_1, \mathbf{s}^{\ell_2}_2)$. We claim that whenever $\ell_1 > \ell_2$ there is some edge in $G[U]$ that is not covered by $X$ or there is a vertex in $X \cap U$ that is not root-connected in $X$. There is an uncovered edge in $G[U]$ if and only if both $\bar{\state}^1$ and $\bar{\state}^2$ each contain at least one $\mathbf{0}$. This shows that $(\ell_1, \ell_2) \notin \{4,5\} \times [3]$. Some vertex in $X \cap U$ is not root-connected in $X$ if and only if either $\bar{\state}^1$ or $\bar{\state}^2$ contains a $\mathbf{1}_0$ and the other one only contains two $\mathbf{0}$s or if both contain no $\mathbf{1}_1$ at all. This shows that $(\ell_1, \ell_2) \notin \{(5,4),(3,2),(3,1),(2,1)\}$ and concludes the proof of the first part.
For the second part, notice that $\mathbf{state}(X_{P^1}^\ell) = \mathbf{state}(X_{P^2}^\ell) = \mathbf{s}^\ell$ by \cref{thm:cvc_modtw_state_exists} and using the same approach as in the last paragraph, we see that for $\ell = \ell_1 = \ell_2$ all edges in $G[U]$ are covered and all vertices in $X^\ell$ are root-connected in $X^\ell$. \qed
\end{proof}
\cref{thm:cvc_modtw_path_transition} is the reason for the chosen numbering of the elements of $\mathbf{states}$. We say that a \emph{cheat} occurs if $\ell_1 < \ell_2$. When creating arbitrarily long chains of the path gadget $P$, \cref{thm:cvc_modtw_path_transition} tells us that at most $|\mathbf{states}| - 1 = 4 = \Oh(1)$ cheats may occur on such a chain.
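The transition analysis can be verified mechanically. The following Python sketch (our own encoding of $\mathbf{states}$ and of the two failure conditions from the proof of \cref{thm:cvc_modtw_path_transition}) confirms that every pair with $\ell_1 > \ell_2$ is ruled out, while every diagonal pair $(\ell, \ell)$ is admissible:
\begin{verbatim}
Z, LO, HI = "0", "1_0", "1_1"          # the atoms 0, 1_0, 1_1
states = [(Z, Z, HI, HI), (LO, Z, HI, LO), (HI, Z, LO, LO),
          (LO, LO, HI, Z), (HI, HI, LO, Z)]

def fails(l1, l2):
    p1 = states[l1 - 1][2:]            # (s_3, s_4) of the left gadget
    p2 = states[l2 - 1][:2]            # (s_1, s_2) of the right gadget
    uncovered = Z in p1 and Z in p2    # both pairs contain a 0
    not_rooted = ((LO in p1 and set(p2) == {Z}) or
                  (LO in p2 and set(p1) == {Z}) or
                  HI not in p1 + p2)   # no 1_1 at all
    return uncovered or not_rooted

assert all(fails(l1, l2) for l1 in range(1, 6)
           for l2 in range(1, 6) if l1 > l2)      # no cheats l1 > l2
assert not any(fails(l, l) for l in range(1, 6))  # equal states fine
\end{verbatim}
Note that the failure conditions also exclude some pairs with $\ell_1 < \ell_2$; this is harmless, as the lemma only asserts $\ell_1 \leq \ell_2$ as a necessary condition.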
\subsubsection{Complete Construction}
\newcommand{\nregions}{{4\ngrps\grpsize + 1}}
\newcommand{\ncols}{{\nclss(\nregions)}}
\newcommand{\mathbf{h}}{\mathbf{h}}
\begin{figure}
\centering
\tikzfig{pictures/cvc_modtw_decoding_gadget}
\caption{The decoding gadget for group $i \in [t]$ and column $\ell \in [{\nclss(\nregions)}]$. The clause gadget for column $\ell$ consists of $o^\ell$ and $\bar{o}^\ell$ and represents clause $C_{\ell'}$, where $\ell' = (\ell - 1) \mod m$. In this figure the truth assignment for group $i$ corresponding to $(2,1,\ldots) \in [5]^p$ satisfies clause $C_{\ell'}$.}
\label{fig:cvc_modtw_decoding}
\end{figure}
\paragraph{Setup.}
Assume that \textsc{Connected Vertex Cover}\xspace can be solved in time $\Oh^*((5 - \varepsilon)^{\tcpw(G)})$ for some $\varepsilon > 0$. Given a \textsc{Satisfiability}\xspace-instance $\sigma$ with $n$ variables and $m$ clauses, we construct an equivalent \textsc{Connected Vertex Cover}\xspace instance with twinclass-pathwidth approximately $n \log_5(2)$ so that the existence of such an algorithm for \textsc{Connected Vertex Cover}\xspace would imply that CNF-SETH\xspace is false.
We pick an integer $\beta$ only depending on $\varepsilon$; the precise choice of $\beta$ will be discussed at a later point. The variables of $\sigma$ are partitioned into groups of size at most $\beta$, resulting in $t = \lceil n / \beta \rceil$ groups. Furthermore, we pick the smallest integer $p$ that satisfies $5^p \geq 2^\beta$. We now begin with the construction of the \textsc{Connected Vertex Cover}\xspace instance $(G = G(\sigma, \beta), \overline{b})$.
We create the root vertex $\hat{r}$ and attach a leaf $\hat{r}'$ which forces $\hat{r}$ into any connected vertex cover.
For every group $i \in [t]$, we create $p$ long path-like gadgets $P^{i, j}$, $j \in [p]$, where each $P^{i, j}$ consists of ${\nclss(\nregions)}$ copies $P^{i, j, \ell}$, $\ell \in [{\nclss(\nregions)}]$, of the path gadget $P$ and consecutive copies are connected by a join. More precisely, the vertices in some $P^{i, j, \ell}$ inherit their names from $P$ and the superscript of $P^{i, j, \ell}$ and for every $i \in [t]$, $j \in [p]$, $\ell \in [{\nclss(\nregions)} - 1]$, the output vertices $a_{1,4}^{i, j, \ell}$ and $\bar{a}_{1,2}^{i, j, \ell}$ are joined to the input vertices $u_1^{i, j, \ell + 1}$ and $u_2^{i, j, \ell + 1}$ of the next path gadget. The ends of each path $P^{i, j}$, namely the vertices $u_1^{i, j, 1}$, $u_2^{i, j, 1}$, $a_{1,4}^{i, j, {\nclss(\nregions)}}$, $\bar{a}_{1,2}^{i, j, {\nclss(\nregions)}}$ are made adjacent to the root $\hat{r}$.
For every group $i \in [t]$ and column $\ell \in [{\nclss(\nregions)}]$, we create a \emph{decoding gadget} $D^{i,\ell}$ in the same style as Cygan et al.~\cite{CyganNPPRW11arxiv} for \textsc{Connected Vertex Cover}\xspace parameterized by pathwidth. Every variable group $i$ has at most $2^\beta$ possible truth assignments and by choice of $p$ we have that $5^p \geq 2^\beta$, so we can find an injective mapping $\kappa \colon \{0,1\}^\beta \rightarrow [5]^p$ which assigns to each truth assignment $\tau \in \{0,1\}^\beta$ a sequence $\kappa(\tau) \in [5]^p$; see the sketch after this paragraph. For each sequence $\mathbf{h} = (h_1, \ldots, h_p) \in [5]^p$, we create vertices $x^{i, \ell}_\mathbf{h}$, $\bar{x}^{i, \ell}_\mathbf{h}$, $y^{i, \ell}_\mathbf{h}$ and edges $\{x^{i, \ell}_\mathbf{h}, \bar{x}^{i, \ell}_\mathbf{h}\}$, $\{x^{i, \ell}_\mathbf{h}, y^{i, \ell}_\mathbf{h}\}$, $\{y^{i, \ell}_\mathbf{h}, \hat{r}\}$. Furthermore, we add the edge $\{x^{i, \ell}_\mathbf{h}, v^{i, j, \ell}_{h_j}\}$ for all $\mathbf{h} = (h_1, \ldots, h_p) \in [5]^p$ and $j \in [p]$. Finally, we create two adjacent vertices $z^{i, \ell}$ and $\bar{z}^{i, \ell}$ and edges $\{z^{i, \ell}, y^{i, \ell}_\mathbf{h}\}$ for all $\mathbf{h} \in [5]^p$. For every group $i \in [t]$ and column $\ell \in [{\nclss(\nregions)}]$, we bundle the path gadgets $P^{i,j,\ell}$, $j \in [p]$, and the decoding gadget $D^{i, \ell}$ into the \emph{block} $B^{i, \ell}$.
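The bookkeeping behind this construction is easy to make concrete. The following Python sketch (our own, with hypothetical helper names) computes $t$ and $p$ as above and realizes one possible injective $\kappa$ via base-$5$ encoding:
\begin{verbatim}
from itertools import product

def parameters(n, beta):
    t = -(-n // beta)            # number of groups, ceil(n / beta)
    p = 0                        # smallest p with 5^p >= 2^beta
    while 5 ** p < 2 ** beta:
        p += 1
    return t, p

def kappa(tau, p):
    """Encode a 0/1 tuple tau via its base-5 digits, padded to length p."""
    value = sum(b << i for i, b in enumerate(tau))
    digits = []
    for _ in range(p):
        value, d = divmod(value, 5)
        digits.append(d + 1)     # shift digits 0..4 into [5] = {1,...,5}
    return tuple(digits)

t, p = parameters(n=100, beta=7)     # t = 15 groups, p = 4
assignments = list(product((0, 1), repeat=7))
assert len({kappa(tau, p) for tau in assignments}) == len(assignments)
\end{verbatim}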
Lastly, we construct the \emph{clause gadgets}.
We number the clauses of $\sigma$ by $C_0, \ldots, C_{m - 1}$. For every column $\ell \in [{\nclss(\nregions)}]$, we create an adjacent pair of vertices $o^{\ell}$ and $\bar{o}^{\ell}$. Let $\ell' \in [0, m - 1]$ be the remainder of $(\ell - 1)$ modulo $m$. For every $i \in [t]$, $\mathbf{h} \in \kappa(\{0,1\}^\beta)$, we add the edge $\{o^{\ell}, y^{i, \ell}_\mathbf{h}\}$ whenever $\kappa^{-1}(\mathbf{h})$ is a truth assignment for variable group $i$ that satisfies clause $C_{\ell'}$. See \cref{fig:cvc_modtw_decoding} for a depiction of the decoding and clause gadgets and \cref{fig:cvc_modtw_schematic} for a high-level view of the whole construction.
\begin{figure}
\centering
\scalebox{.75}{\tikzfig{pictures/cvc_modtw_schematic}}
\caption{The matrix structure of the constructed graph. Every $m$ columns form a region.}
\label{fig:cvc_modtw_schematic}
\end{figure}
\begin{lem}\label{thm:cvc_modtw_sat_to_sol}
If $\sigma$ is satisfiable, then there exists a connected vertex cover $X$ of $G = G(\sigma, \beta)$ of size $|X| \leq (35t p + (5^p + 2)t + 1){\nclss(\nregions)} + 1 = \overline{b}$.
\end{lem}
\begin{proof}
Let $\tau$ be a satisfying truth assignment of $\sigma$ and let $\tau^i$ denote the restriction of $\tau$ to the $i$-th variable group for every $i \in [t]$ and let $\kappa(\tau^i) = \mathbf{h}^i = (h^i_1, \ldots, h^i_p)$ be the corresponding sequence.
The connected vertex cover is given by
\begin{equation*}
X = \{\hat{r}\} \cup \bigcup_{\ell \in [{\nclss(\nregions)}]} \left(\{o^\ell\} \cup \bigcup_{i \in [t]} \left(\{y^{i, \ell}_{\mathbf{h}^i}, z^{i,\ell}\} \cup \bigcup_{\mathbf{h} \in [5]^p} \{x^{i,\ell}_\mathbf{h}\} \cup \bigcup_{j \in [p]} X_{P^{i,j,\ell}}^{h^i_j} \right) \right),
\end{equation*}
where $X_{P^{i,j,\ell}}^{h^i_j}$ refers to the sets given by \cref{thm:cvc_modtw_state_exists}.
Clearly, $|X| = \overline{b}$, so it remains to prove that $X$ is a connected vertex cover. By \cref{thm:cvc_modtw_state_exists} and the second part of \cref{thm:cvc_modtw_path_transition} all edges induced by the path gadgets are covered by $X$ and all vertices on the path gadgets that belong to $X$ are root-connected, except for possibly the vertices at the ends, i.e. $\bigcup_{i \in [t]} \bigcup_{j \in [p]} \{u^{i,j,1}_1, u^{i,j,1}_2, a^{i,j,{\nclss(\nregions)}}_{1,4}, \bar{a}^{i,j,{\nclss(\nregions)}}_{1,2}\}$, but these are contained in the neighborhood of $\hat{r}$ by construction.
Fix $i \in [t]$, $\ell \in [{\nclss(\nregions)}]$, and consider the corresponding decoding gadget. Since $z^{i, \ell} \in X$ and $x^{i,\ell}_\mathbf{h} \in X$ for all $\mathbf{h} \in [5]^p$, all edges induced by the decoding gadget and all edges between the decoding gadget and the path gadgets are covered by $X$. Furthermore, since $o^{\ell} \in X$, all edges inside the clause gadget and all edges between the clause gadget and the decoding gadgets are covered by $X$. Hence, $X$ has to be a vertex cover of $G$.
It remains to prove that the vertices in the decoding and clause gadgets that belong to $X$ are also root-connected. Again, fix $i \in [t]$, $\ell \in [{\nclss(\nregions)}]$, and $\mathbf{h} = (h_1, \ldots, h_p) \in [5]^p \setminus \{\mathbf{h}^i\}$. Since $\mathbf{h} \neq \mathbf{h}^i$, there is some $j \in [p]$ such that $v^{i,j,\ell}_{h_j} \in X$ by \cref{thm:cvc_modtw_state_exists} which connects $x_\mathbf{h}^{i, \ell}$ to the root $\hat{r}$. The vertices $x_{\mathbf{h}^i}^{i, \ell}$ and $z^{i, \ell}$ are root-connected via $y^{i, \ell}_{\mathbf{h}^i} \in X$.
We conclude by showing that $o^\ell$ is root-connected for all $\ell \in [{\nclss(\nregions)}]$. Since $\tau$ is a satisfying truth assignment of $\sigma$, there is some variable group $i \in [t]$ such that $\tau^i$ already satisfies clause $C_{\ell'}$, where $\ell'$ is the remainder of $\ell - 1$ modulo $m$. By construction of $G$ and $X$, the vertex $y^{i, \ell}_{\mathbf{h}^i} \in X$ is adjacent to $o^\ell$, since $\kappa(\tau^i) = \mathbf{h}^i$, and connects $o^\ell$ to the root $\hat{r}$. This shows that all vertices of $X$ are root-connected, so $G[X]$ has to be connected. \qed
\end{proof}
\begin{lem}\label{thm:cvc_modtw_sol_to_sat}
If there exists a connected vertex cover $X$ of $G = G(\sigma, \beta)$ of size $|X| \leq (35t p + (5^p + 2)t + 1){\nclss(\nregions)} + 1 = \overline{b}$, then $\sigma$ is satisfiable.
\end{lem}
\begin{proof}
We assume without loss of generality that $X$ is canonical with respect to each twinclass $\{u_1^{i, j, \ell}, u_2^{i, j, \ell}\}$, $i \in [t]$, $j \in [p]$, $\ell \in [{\nclss(\nregions)}]$.
We begin by arguing that $X$ has to satisfy $|X| = \overline{b}$. First, we must have that $\hat{r} \in X$, because $\hat{r}$ has a neighbor of degree 1. By \cref{thm:cvc_modtw_path_gadget_lb}, we have that $|X \cap P^{i,j,\ell}| \geq 35$ for all $i \in [t]$, $j \in [p]$, $\ell \in [{\nclss(\nregions)}]$. In every decoding gadget, i.e.\ one for every $i \in [t]$ and $\ell \in [{\nclss(\nregions)}]$, the set $\{z^{i,\ell}\} \cup \{x_\mathbf{h}^{i, \ell} : \mathbf{h} \in [5]^p\}$ has to be contained in $X$, since every vertex in this set has a neighbor of degree 1. Furthermore, to connect $z^{i,\ell}$ to $\hat{r}$, at least one of the vertices $y^{i,\ell}_\mathbf{h}$, $\mathbf{h} \in [5]^p$, has to be contained in $X$. Hence, $X$ must contain at least $5^p + 2$ vertices per decoding gadget. Lastly, $o^\ell \in X$ for all $\ell \in [{\nclss(\nregions)}]$, since $o^\ell$ has a neighbor of degree 1. Since we have only considered disjoint vertex sets, this shows that $|X| = \overline{b}$ and all of the previous inequalities have to be tight, in particular for every $i \in [t]$ and $\ell \in [{\nclss(\nregions)}]$, there is a unique $\mathbf{h} \in [5]^p$ such that $y^{i,\ell}_\mathbf{h} \in X$.
By \cref{thm:cvc_modtw_path_gadget_tight}, we know that $X$ assumes one of the five possible states on each $P^{i,j,\ell}$. Fix some $P^{i,j} = \bigcup_{\ell \in [{\nclss(\nregions)}]} P^{i,j,\ell}$ and note that due to \cref{thm:cvc_modtw_path_transition} the state can change at most four times along $P^{i,j}$. Such a state change is called a \emph{cheat}. Let $\gamma \in [0, 4 t p]$ and define the $\gamma$-th \emph{region} $R^\gamma = \bigcup_{i \in [t]} \bigcup_{j \in [p]} \bigcup_{\ell = \gamma m + 1}^{(\gamma + 1) m} P^{i,j,\ell}$. Since there are ${4\ngrps\grpsize + 1}$ regions and $t p$ many paths, there is at least one region $R^\gamma$ such that no cheat occurs in $R^\gamma$. We consider region $R^\gamma$ for the rest of the proof and read off a satisfying truth assignment from this region.
For $i \in [t]$, let $\mathbf{h}^{i} = (h^i_1, \ldots, h^i_p) \in [5]^p$ such that $v^{i,j,\gamma m + 1}_{h^i_j} \notin X$ for all $j \in [p]$; this is well-defined by \cref{thm:cvc_modtw_path_gadget_tight}. Since $R^\gamma$ does not contain any cheats, the definition of $\mathbf{h}^i$ is independent of which column $\ell \in [\gamma m + 1, (\gamma + 1)m]$ we consider. For every $i \in [t]$ and $\ell \in [\gamma m + 1, (\gamma + 1)m]$, we claim that $y^{i, \ell}_\mathbf{h} \in X$ if and only if $\mathbf{h} = \mathbf{h}^i$. We have already established that for every $i$ and $\ell$, there is exactly one $\mathbf{h}$ such that $y^{i, \ell}_\mathbf{h} \in X$. Consider the vertex $x^{i, \ell}_{\mathbf{h}^i} \in X$, its neighbors in $G$ are $v^{i, 1, \ell}_{h^i_1}, v^{i, 2, \ell}_{h^i_2}, \ldots, v^{i, p, \ell}_{h^i_p}$, $\bar{x}^{i,\ell}_{\mathbf{h}^i}$, and $y^{i,\ell}_{\mathbf{h}^i}$. By construction of $\mathbf{h}^i$ and the tight allocation of the budget, we have $(N(x^{i, \ell}_{\mathbf{h}^i}) \setminus \{y^{i, \ell}_{\mathbf{h}^i}\}) \cap X = \emptyset$. Therefore, $X$ has to include $y^{i, \ell}_{\mathbf{h}^i}$ to connect $x^{i, \ell}_{\mathbf{h}^i}$ to the root $\hat{r}$. This shows the claim.
For $i \in [t]$, we define the truth assignment $\tau^i$ for group $i$ by taking an arbitrary truth assignment if $\mathbf{h}^i \notin \kappa(\{0,1\}^\beta)$ and setting $\tau^i = \kappa^{-1}(\mathbf{h}^i)$ otherwise. By setting $\tau = \bigcup_{i \in [t]} \tau^i$ we obtain a truth assignment for all variables and we claim that $\tau$ satisfies $\sigma$. Consider some clause $C_{\ell'}$, $\ell' \in [0, m - 1]$, and let $\ell = \gamma m + \ell' + 1$. We have already argued that $o^\ell \in X$ and to connect $o^\ell$ to the root $\hat{r}$, there has to be some $y^{i, \ell}_{\mathbf{h}} \in N(o^\ell) \cap X$. By the previous claim, $\mathbf{h} = \mathbf{h}^i$ for some $i \in [t]$ and therefore $\tau^i$, and also $\tau$, satisfy clause $C_{\ell'}$ due to the construction of $G$. Because the choice of $C_{\ell'}$ was arbitrary, $\tau$ has to be a satisfying assignment of $\sigma$. \qed
\end{proof}
\begin{lem}\label{thm:cvc_modtw_bound}
The constructed graph $G = G(\sigma, \beta)$ has $\tcpw(G) \leq t p + 3 \cdot 5^p + \Oh(1)$ and a path decomposition of $G^q = G / \Pi_{tc}(G)$ of this width can be constructed in polynomial time.
\end{lem}
\begin{proof}
By construction, all sets $\{u_1^{i,j,\ell}, u_2^{i,j,\ell}\}$, $i \in [t]$, $j \in [p]$, $\ell \in [{\nclss(\nregions)}]$, are twinclasses. Let $G'$ be the graph obtained by contracting each of these twinclasses, denoting the resulting vertex by $u^{i,j,\ell}$, then $G^q$ is a subgraph of $G'$. We will show that $\tcpw(G) = \pw(G^q) \leq \pw(G') \leq t p + 3 \cdot 5^p + \Oh(1)$ by giving an appropriate strategy for the mixed-search-game on $G'$ and applying \cref{thm:mixed_search}.
\begin{algorithm}
Place searchers on $\hat{r}$ and $\hat{r}'$\;
Place searchers on $u^{i,j,1}$ for all $i \in [t]$, $j \in [p]$\;
\For{$\ell \in [{\nclss(\nregions)}]$}
{
Place searchers on $o^\ell$ and $\bar{o}^\ell$\;
\For{$i \in [t]$}
{
Place searchers on all vertices of the decoding gadget $D^{i, \ell}$\;
\For{$j \in [p]$}
{
Place searchers on all vertices of $P^{i,j,\ell} - \{u_1^{i,j,\ell}, u_2^{i,j,\ell}\}$\;
Remove searcher from $u^{i,j,\ell}$ and place it on $u^{i,j,\ell + 1}$\;
Remove searchers on $P^{i,j,\ell} - \{u_1^{i,j,\ell}, u_2^{i,j,\ell}\}$\;
}
Remove searchers on $D^{i, \ell}$\;
}
Remove searchers on $o^\ell$ and $\bar{o}^\ell$\;
}
\caption{Mixed-search-strategy for $G'$}
\label{algo:cvc_modtw_search_game}
\end{algorithm}
\vspace*{-.5cm}
The mixed-search-strategy for $G'$ described in \cref{algo:cvc_modtw_search_game} proceeds column by column and group by group within each column. The maximum number of placed searchers occurs on line $8$ and is $2 + t p + 2 + (3 \cdot 5^p + 2) + 61$, which is $t p + 3 \cdot 5^p + \Oh(1)$ as claimed. \qed
\end{proof}
\begin{thm}
No algorithm can solve \textsc{Connected Vertex Cover}\xspace, given a path decomposition of $G^q = G / \Pi_{tc}(G)$ of width $k$, in time $\Oh^*((5 - \varepsilon)^k)$ for some $\varepsilon > 0$, unless CNF-SETH\xspace fails.
\end{thm}
\begin{proof}
Suppose there is an algorithm $\mathcal{A}$ that solves \textsc{Connected Vertex Cover}\xspace in time $\Oh^*((5 - \varepsilon)^k)$ for some $\varepsilon > 0$ given a path decomposition of $G^q = G / \Pi_{tc}(G)$ of width $k$. We define $\delta_1 < 1$ such that $(5 - \varepsilon)^{\log_5(2)} = 2^{\delta_1}$ and, given $\beta$, we define $\delta_2$ such that $(5 - \varepsilon)^{1 / \beta} = 2^{\delta_2}$. By picking $\beta$ large enough, we can ensure that $\delta = \delta_1 + \delta_2 < 1$. We show how to solve \textsc{Satisfiability}\xspace using $\mathcal{A}$ in time $\Oh^*(2^{\delta n})$, where $n$ is the number of variables, thus contradicting CNF-SETH\xspace.
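Solving the two defining equations explicitly gives
\begin{equation*}
\delta_1 = \log_5(2) \log_2(5 - \varepsilon) = \log_5(5 - \varepsilon) < 1 \qquad \text{and} \qquad \delta_2 = \frac{\log_2(5 - \varepsilon)}{\beta},
\end{equation*}
so $\delta_1$ is a constant strictly smaller than $1$ and $\delta_2$ vanishes as $\beta$ grows; any $\beta > \log_2(5 - \varepsilon) / (1 - \delta_1)$ ensures $\delta < 1$.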
Given a \textsc{Satisfiability}\xspace instance $\sigma$, construct $G = G(\sigma, \beta)$ and the path decomposition from \cref{thm:cvc_modtw_bound} in polynomial time, as we have $\beta = \Oh(1)$ and hence $p = \Oh(1)$. We run $\mathcal{A}$ on $G$ and return its answer. This is correct by \cref{thm:cvc_modtw_sat_to_sol} and \cref{thm:cvc_modtw_sol_to_sat}. Due to \cref{thm:cvc_modtw_bound}, the running time is
\begin{alignat*}{6}
\phantom{\leq} \,\, & \Oh^*\left( (5 - \varepsilon)^{t p + 3 \cdot 5^p + \Oh(1)} \right)
& \,\,\leq\,\, & \Oh^*\left( (5 - \varepsilon)^{t p} \right)
& \,\,\leq\,\, & \Oh^*\left( (5 - \varepsilon)^{\lceil \frac{n}{\beta} \rceil p} \right) \\
\leq \,\, & \Oh^*\left( (5 - \varepsilon)^{\frac{n}{\beta} p} \right)
& \,\,\leq\,\, & \Oh^*\left( (5 - \varepsilon)^{\frac{n}{\beta} \lceil \log_5(2^\beta) \rceil} \right)
& \,\,\leq\,\, & \Oh^*\left( (5 - \varepsilon)^{\frac{n}{\beta} \log_5(2^\beta)} (5 - \varepsilon)^{\frac{n}{\beta}} \right)\\
\leq \,\, & \Oh^*\left( 2^{\delta_1 \beta \frac{n}{\beta}} 2^{\delta_2 n} \right)
& \,\,\leq\,\, & \Oh^*\left( 2^{(\delta_1 + \delta_2) n} \right)
& \,\,\leq\,\, & \Oh^*\left( 2^{\delta n} \right),
\end{alignat*}
hence completing the proof. \qed
\end{proof}
\section{Preliminaries}
\label{sec:modtw_prelims}
For two integers $a,b$ we write $a \equiv_c b$ to indicate equality modulo $c \in \mathbb{N}$. We use Iverson's bracket notation: for a boolean predicate $p$, we have that $[p]$ is $1$ if $p$ is true and $0$ otherwise. For a function $f$ we denote by $f[v \mapsto \alpha]$ the function $(f \setminus \{(v, f(v))\}) \cup \{(v, \alpha)\}$, viewing $f$ as a set. By $\mathbb{F}_2$ we denote the field of two elements. For $n_1, n_2 \in \mathbb{Z}$, we write $[n_1, n_2] = \{x \in \mathbb{Z} : n_1 \leq x \leq n_2\}$ and $[n_2] = [1, n_2]$. For a function $f\colon V \rightarrow \mathbb{Z}$ and a subset $W \subseteq V$, we write $f(W) = \sum_{v \in W} f(v)$. Note that for functions $g\colon A \rightarrow B$, where $B \not\subseteq \mathbb{Z}$, and a subset $A' \subseteq A$, we still denote the \emph{image of $A'$ under $g$} by $g(A') = \{g(v) : v \in A'\}$. If $f \colon A \rightarrow B$ is a function and $A' \subseteq A$, then $f\big|_{A'}$ denotes the \emph{restriction} of $f$ to $A'$ and for a subset $B' \subseteq B$, we denote the \emph{preimage of $B'$ under $f$} by $f^{-1}(B') = \{a \in A : f(a) \in B'\}$. The \emph{power set} of a set $A$ is denoted by $\powerset{A}$.
\subsubsection*{Graph Notation.}
We use common graph-theoretic notation and assume familiarity with the essentials of parameterized complexity. Let $G = (V, E)$ be an undirected graph. For $X \subseteq V$, we denote by $G[X]$ the subgraph of $G$ induced by $X$. The \emph{open neighborhood} of $v \in V$ is given by $N_G(v) = \{u \in V : \{u,v\} \in E\}$, whereas the \emph{closed neighborhood} is given by $N_G[v] = N_G(v) \cup \{v\}$. For $X \subseteq V$, we define $N_G[X] = \bigcup_{v \in X} N_G[v]$ and $N_G(X) = N_G[X] \setminus X$. The degree of $v \in V$ is denoted $\deg_G(v) = |N_G(v)|$. For two disjoint vertex subsets $A, B \subseteq V$, we define $E_G(A,B) = \{\{a,b\} \in E(G) : a \in A, b \in B\}$ and adding a \emph{join} between $A$ and $B$ means adding an edge between every vertex in $A$ and every vertex in $B$. We denote the \emph{number of connected components} of $G$ by $\mathtt{cc}(G)$. A \emph{cut} of $G$ is a partition $V = V_L \cup V_R$, $V_L \cap V_R = \emptyset$, of its vertices into two parts.
\subsubsection*{Tree Decompositions.} A \emph{path/tree decomposition} of a graph $G=(V,E)$ is a pair $(\TT, (\mathbb{B}_t)_{t \in V(\TT)})$, where $\TT$ is a path/tree and every \emph{bag} $\mathbb{B}_t \subseteq V$, $t \in V(\TT)$, is a set of vertices such that the following properties are satisfied:
\begin{itemize}
\item every vertex $v \in V$ is contained in some bag $v \in \mathbb{B}_t$,
\item every edge $\{v,w\} \in E$ is contained in some bag, i.e., $\{v,w\} \subseteq \mathbb{B}_t$ for some $t \in V(\TT)$,
\item for every vertex $v$, the set $\{t \in V(\TT) : v \in \mathbb{B}_t\}$ is connected in $\TT$.
\end{itemize}
The \emph{width} of a path/tree decomposition $(\TT, (\mathbb{B}_t)_{t \in
V(\TT)})$ is $\max_{t \in V(\TT)} |\mathbb{B}_t| - 1$. The \emph{pathwidth/treewidth} of a graph $G$, denoted $\pw(G)$ or $\tw(G)$ respectively, is the minimum width of a path/tree decomposition of $G$. For dynamic programming algorithms on tree decompositions, it is convenient to use \emph{very nice tree decompositions}~\cite{CyganNPPRW22}, further refining the \emph{nice tree decompositions} of Kloks~\cite{Kloks94}.
\begin{dfn}
A tree decomposition $(\TT, (\mathbb{B}_t)_{t \in V(\TT)})$ is \emph{very nice} if it is rooted at the \emph{root node} $\hat{r} \in V(\TT)$ with $\mathbb{B}_{\hat{r}} = \emptyset$ and every bag $\mathbb{B}_t$ has one of the following types:
\begin{itemize}
\item \textbf{Leaf bag:} $t$ has no children and $\mathbb{B}_t = \emptyset$.
\item \textbf{Introduce vertex $v$ bag:} $t$ has exactly one child $t'$ and $\mathbb{B}_t = \mathbb{B}_{t'} \cup \{v\}$ with $v \notin \mathbb{B}_{t'}$.
\item \textbf{Forget vertex $v$ bag:} $t$ has one child $t'$ and $\mathbb{B}_t = \mathbb{B}_{t'} \setminus \{v\}$ with $v \in \mathbb{B}_{t'}$.
\item \textbf{Introduce edge $\{v,w\}$ bag:} $t$ is labeled with an edge $\{v,w\} \in E$ and $t$ has one child $t'$ which satisfies $\{v,w\} \subseteq \mathbb{B}_t = \mathbb{B}_{t'}$.
\item \textbf{Join bag:} $t$ has exactly two children $t_1$ and $t_2$ with $\mathbb{B}_t = \mathbb{B}_{t_1} = \mathbb{B}_{t_2}$.
\end{itemize}
Furthermore, we require that every edge in $E$ is introduced exactly once.
\end{dfn}
\begin{lem}[\cite{CyganNPPRW22}]\label{thm:very_nice_tree_decomposition}
Any tree decomposition of $G$ can be converted into a very nice tree decomposition of $G$ with the same width in polynomial time.
\end{lem}
\subsubsection*{Quotients and Twins.}
Let $\Pi$ be a partition of $V(G)$. The \emph{quotient graph} $G / \Pi$ is given by $V(G / \Pi) = \Pi$ and $E(G / \Pi) = \{\{B_1, B_2\} \subseteq \Pi : B_1 \neq B_2, \exists u \in B_1, v \in B_2 \colon \{u,v\} \in E(G)\}$. We say that two vertices $u, v$ are \emph{twins} if $N(u) \setminus \{v\} = N(v) \setminus \{u\}$. The equivalence classes of this relation are called \emph{twinclasses} and we let $\Pi_{tc}(G)$ denote the partition of $V(G)$ into twinclasses. If $N(u) = N(v)$, then $u$ and $v$ are \emph{false twins} and if $N[u] = N[v]$, then $u$ and $v$ are \emph{true twins}. Every twinclass of size at least 2 consists of only false twins or only true twins. A false twinclass induces an independent set and a true twinclass induces a clique.
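To make the twin relation concrete, the following Python sketch computes $\Pi_{tc}(G)$; the adjacency-set representation and all helper names are our own illustrative choices. It uses the fact stated above: two vertices are twins exactly if they are false twins (equal open neighborhoods) or true twins (equal closed neighborhoods).
\begin{verbatim}
def twinclasses(adj):
    # adj: dict mapping each vertex to the set of its neighbors.
    # u, v are twins iff N(u) \ {v} = N(v) \ {u}, which holds iff
    # N(u) = N(v) (false twins) or N[u] = N[v] (true twins).
    parent = {v: v for v in adj}
    def find(v):                      # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    groups = {}
    for v in adj:
        open_nb = frozenset(adj[v])             # N(v)
        closed_nb = frozenset(adj[v] | {v})     # N[v]
        for key in (("open", open_nb), ("closed", closed_nb)):
            if key in groups:                   # same neighborhood => twins
                parent[find(v)] = find(groups[key])
            else:
                groups[key] = v
    classes = {}
    for v in adj:
        classes.setdefault(find(v), set()).add(v)
    return list(classes.values())
\end{verbatim}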
\subsubsection*{Lifting to Twinclasses.}
The \emph{twinclass-treewidth} and \emph{twinclass-pathwidth} of $G$ are defined by $\tctw(G) = \tw(G/\Pi_{tc}(G))$ and $\tcpw(G) = \pw(G/\Pi_{tc}(G))$, respectively. The parameters twinclass-treewidth and twinclass-pathwidth have been considered before under the name modular treewidth and modular pathwidth~\cite{Lampis20,Mengel16,PaulusmaSS16}. We use the prefix \emph{twinclass} instead of \emph{modular} to distinguish from the quotient graph arising from a \emph{modular partition} of $G$.
\subsubsection*{Modular Decomposition.}
A vertex set $M \subseteq V(G)$ is a \emph{module} of $G$ if $N(v) \setminus M = N(w) \setminus M$ for every pair $v, w \in M$ of vertices in $M$. Equivalently, for every $u \in V(G) \setminus M$ it holds that $M \subseteq N(u)$ or $M \cap N(u) = \emptyset$. In particular, every twinclass is a module. We let $\mathcal{M}(G)$ denote the set of all modules of $G$. The modules $\emptyset$, $V(G)$, and all singletons are called \emph{trivial}. A graph that only admits trivial modules is called \emph{prime}. If $M \neq V(G)$, then we say that $M$ is \emph{proper}. For two disjoint modules $M_1, M_2 \in \mathcal{M}(G)$, either $\{\{v,w\} : v \in M_1, w \in M_2\} \subseteq E(G)$ or $\{\{v,w\} : v \in M_1, w \in M_2\} \cap E(G) = \emptyset$; in the first case, $M_1$ and $M_2$ are \emph{adjacent} and in the second case, they are \emph{nonadjacent}.
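The second characterization of a module yields a direct membership test; the following lines are again only an illustrative sketch over the same adjacency-set representation as above.
\begin{verbatim}
def is_module(adj, M):
    # M is a module iff every vertex outside M sees all of M or none of M.
    M = set(M)
    for u in set(adj) - M:
        seen = len(adj[u] & M)
        if seen != 0 and seen != len(M):
            return False
    return True
\end{verbatim}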
A module $M$ is \emph{strong} if for every module $M' \in \mathcal{M}(G)$ we have that $M \cap M' = \emptyset$, $M \subseteq M'$, or $M' \subseteq M$; so strong modules intersect other modules only in a trivial way. Let ${\mathcal{M}_s}(G)$ denote the set of all strong modules of $G$. The defining property of strong modules implies that ${\mathcal{M}_s}(G)$ is a \emph{laminar set family}. Hence, if we order ${\mathcal{M}_{\mathrm{tree}}}(G) = {\mathcal{M}_s}(G) \setminus \{\emptyset\}$ by inclusion, the associated Hasse diagram (with an edge from $M_1 \in {\mathcal{M}_{\mathrm{tree}}}(G)$ to $M_2 \in {\mathcal{M}_{\mathrm{tree}}}(G)$ if $M_1 \subsetneq M_2$ and there is no $M_3 \in {\mathcal{M}_{\mathrm{tree}}}(G)$ with $M_1 \subsetneq M_3 \subsetneq M_2$) is a rooted tree, called the \emph{modular decomposition (tree)} of $G$. We freely switch between viewing ${\mathcal{M}_{\mathrm{tree}}}(G)$ as a set family or as the modular decomposition tree of $G$; in the latter case, we usually speak of \emph{nodes} of the modular decomposition tree.
Every graph $G$ with at least two vertices can be uniquely partitioned into a set of inclusion-maximal non-trivial strong modules $\Pi_{mod}(G) = \{M_1, \ldots, M_\ell\}$, with $\ell \geq 2$, called \emph{canonical modular partition}. For $M \in {\mathcal{M}_{\mathrm{tree}}}(G)$ with $|M| \geq 2$, let $\mathtt{children}(M) = \Pi_{mod}(G[M])$ as the sets in $\Pi_{mod}(G[M])$ are precisely the children of $M$ in the modular decomposition tree; if $|M| = 1$, then $\mathtt{children}(M) = \emptyset$. We write ${\mathcal{M}_{\mathrm{tree}}^*}(G) = {\mathcal{M}_{\mathrm{tree}}}(G) \setminus \{\{v\} : v \in V\}$. Forming the \emph{quotient graph $\qgraph{M} = G[M] / \Pi_{mod}(G[M])$ at $M \in {\mathcal{M}_{\mathrm{tree}}^*}(G)$}, there are three cases:
\begin{thm}[\cite{Gallai67}]\label{thm:gallai_modular}
For $M \in {\mathcal{M}_{\mathrm{tree}}^*}(G)$, exactly one of the following holds:
\begin{itemize}
\item \textbf{Parallel node}: $G[M]$ is not connected and $\qgraph{M}$ is an independent set,
\item \textbf{Series node}: the complement $\overline{G[M]}$ is not connected and $\qgraph{M}$ is a clique,
\item \textbf{Prime node}: $\Pi_{mod}(G[M])$ consists of the inclusion-maximal proper modules of $G[M]$ and $\qgraph{M}$ is prime.
\end{itemize}
\end{thm}
We collect the graphs that appear as prime quotient graphs in the modular decomposition of $G$ in the family ${\mathcal{H}_p}(G) = \{\qgraph{M} : M \in {\mathcal{M}_{\mathrm{tree}}^*}(G), \text{$\qgraph{M}$ is prime}\}$. The modular decomposition tree can be computed in time $\Oh(n+m)$, see e.g.\ Tedder et al.~\cite{TedderCHP08} or the survey by Habib and Paul~\cite{HabibP10}.
Let $M \in {\mathcal{M}_{\mathrm{tree}}}(G) \setminus \{V\}$ and ${\module^{\uparrow}} \in {\mathcal{M}_{\mathrm{tree}}}(G)$ be its \emph{parent module}. We have that $M \in \Pi_{mod}(G[{\module^{\uparrow}}])$, hence $M$ appears as a vertex of the quotient graph ${\qgraph{\pmodule}}$; we will also denote this vertex by ${v^q_\module}$. Note that ${\qgraph{\pmodule}}$ is the only quotient graph in the modular decomposition of $G$ where $M$ appears as a vertex. So, we implicitly know that ${v^q_\module} \in V({\qgraph{\pmodule}})$ without having to specify ${\module^{\uparrow}}$. To each quotient graph $\qgraph{{\module^{\uparrow}}} = G[{\module^{\uparrow}}] / \Pi_{mod}(G[{\module^{\uparrow}}])$, ${\module^{\uparrow}} \in {\mathcal{M}_{\mathrm{tree}}^*}(G)$, appearing in the modular decomposition, we also associate a \emph{canonical projection} ${\modprojection_\pmodule} \colon {\module^{\uparrow}} \rightarrow V(\qgraph{{\module^{\uparrow}}})$ with ${\modprojection_\pmodule}(v) = {v^q_\module}$ whenever $v \in M \in \Pi_{mod}(G[{\module^{\uparrow}}])$.
\subsubsection*{Lifting to Modules.}
Many graph problems can be solved by working only on ${\mathcal{H}_p}(G)$. Hence, we consider the values of standard graph parameters on ${\mathcal{H}_p}(G)$. We define the \emph{modular-width} of $G$ by $\mw(G) = \max(2, \max_{H \in {\mathcal{H}_p}(G)} |V(H)|)$, the \emph{modular-pathwidth} by $\modpw(G) = \max(2, \max_{H \in {\mathcal{H}_p}(G)} \pw(H))$, and the \emph{modular-treewidth} by $\modtw(G) = \max(2, \max_{H \in {\mathcal{H}_p}(G)} \tw(H))$. By combining an algorithm to compute the modular decomposition tree with an algorithm to compute treewidth, we obtain the following.
\begin{thm}\label{thm:compute_modtw_transfer}
If $\mathcal{A}_{\tw}$ is an algorithm that given an $n$-vertex graph $G$ and an integer $k$, in time $\Oh(f(k)n^c)$, $c \geq 1$, either outputs a tree decomposition of width at most $g(k)$ or determines that $\tw(G) > k$, then there is an algorithm $\mathcal{A}_{\modtw}$ that given an $n$-vertex $m$-edge graph $G$ and an integer $k$, in time $\Oh(f(k)n^c + m)$ either outputs a tree decomposition of width at most $g(k)$ for every prime quotient graph $\qgraph{M} \in {\mathcal{H}_p}(G)$ or determines that $\modtw(G) > k$.
\end{thm}
\begin{proof}
The algorithm $\mathcal{A}_{\modtw}$ works as follows. We first compute the modular decomposition tree of $G$ in time $\Oh(n + m)$ with, e.g., the algorithm of Tedder et al.~\cite{TedderCHP08} and obtain the family of prime quotient graphs ${\mathcal{H}_p}(G)$. Since the modular decomposition tree has $n$ leaves and every internal node has at least two children, we obtain that $|{\mathcal{M}_{\mathrm{tree}}}(G)| \leq 2n$. This also implies that $\sum_{H \in {\mathcal{H}_p}(G)} |V(H)| \leq 2n$, since the vertices of the quotient graph $G^q_M$ at $M \in {\mathcal{M}_{\mathrm{tree}}^*}(G)$ are precisely the children of $M$ in the modular decomposition tree. We run $\mathcal{A}_{\tw}$ on every $H \in {\mathcal{H}_p}(G)$ and bound the running time of this step, up to constant factors, as follows:
\begin{equation*}
\sum_{H \in {\mathcal{H}_p}(G)} f(k) |V(H)|^c \leq f(k) n^{c-1} \sum_{\mathclap{H \in {\mathcal{H}_p}(G)}} |V(H)| \leq 2 f(k) n^{c}
\end{equation*}
The algorithm is clearly correct, so this concludes the proof. \qed
\end{proof}
\begin{cor}
There is an algorithm that, given an $n$-vertex $m$-edge graph $G$ and an integer $k$, in time $\Oh(2^{\Oh(k)}n + m)$ either outputs a tree decomposition of width at most $2k+1$ for every prime quotient graph $\qgraph{M} \in {\mathcal{H}_p}(G)$ or determines that $\modtw(G) > k$.
\end{cor}
\begin{proof}
We apply \cref{thm:compute_modtw_transfer} with the algorithm of Korhonen~\cite{Korhonen21} that satisfies $f(k) = 2^{\Oh(k)}$ and $g(k) = 2k+1$. \qed
\end{proof}
\subsubsection*{Associated Subgraphs for Modular-Treewidth.} Given a very nice tree decomposition $(\TT^q_{{\module^{\uparrow}}}, (\mathbb{B}^q_t)_{t \in V(\TT^q_{{\module^{\uparrow}}})})$ of the quotient graph $\qgraph{{\module^{\uparrow}}}$, we associate to every node $t \in V(\TT^q_{{\module^{\uparrow}}})$ a subgraph $G^q_t =(V^q_t, E^q_t)$ of $\qgraph{{\module^{\uparrow}}}$ as follows:
\begin{itemize}
\item $V^q_t$ contains all ${v^q_\module} \in V(\qgraph{{\module^{\uparrow}}})$ such that there is a descendant $t'$ of $t$ in $\TT^q_{{\module^{\uparrow}}}$ with ${v^q_\module} \in \mathbb{B}^q_{t'}$,
\item $E^q_t$ contains all $\{v^q_{M_1}, v^q_{M_2}\} \in E(\qgraph{{\module^{\uparrow}}})$ that were introduced by a descendant of $t$ in $\TT^q_{{\module^{\uparrow}}}$.
\end{itemize}
Based on the vertex subsets of the quotient graph $\qgraph{{\module^{\uparrow}}}$, we define vertex subsets of the original graph $G[{\module^{\uparrow}}]$ as follows: $\mathbb{B}_t = {\modprojection^{-1}_\pmodule}(\mathbb{B}^q_t) = \bigcup_{v^q_M \in \mathbb{B}^q_t} M$ and $V_t = {\modprojection^{-1}_\pmodule}(V^q_t) = \bigcup_{v^q_M \in V^q_t} M$. We also transfer the edge set as follows
\begin{equation*}
E_t = \bigcup_{{v^q_\module} \in V^q_t} E(G[M]) \cup \smashoperator[r]{\bigcup_{\{v^q_{M_1}, v^q_{M_2}\} \in E^q_t}} \{\{u_1, u_2\} : u_1 \in M_1 \wedge u_2 \in M_2\},
\end{equation*}
allowing us to define the graph $G_t = (V_t, E_t)$ associated to any node $t \in V(\TT^q_{\module^{\uparrow}})$.
\subsubsection*{Clique-Expressions and Clique-Width.}
A \emph{labeled graph} is a graph $G = (V,E)$ together with a \emph{label function} $\mathtt{lab} \colon V \rightarrow \mathbb{N} = \{1, 2, 3, \ldots\}$; we usually omit mentioning $\mathtt{lab}$ explicitly. A labeled graph is \emph{$k$-labeled} if $\mathtt{lab}(v) \leq k$ for all $v \in V$. We consider the following four operations on labeled graphs:
\begin{itemize}
\item the \emph{introduce}-operation $\intro{\ell}(v)$ which constructs a single-vertex graph whose unique vertex $v$ has label $\ell$,
\item the \emph{union}-operation $G_1 \oplus G_2$ which constructs the disjoint union of two labeled graphs $G_1$ and $G_2$,
\item the \emph{relabel}-operation $\relab{i}{j}(G)$ which changes the label of all vertices in $G$ with label $i$ to label $j$,
\item the \emph{join}-operation $\join{i}{j}(G)$, $i \neq j$, which adds an edge between every vertex in $G$ with label $i$ and every vertex in $G$ with label $j$.
\end{itemize}
A valid expression that only consists of introduce-, union-, relabel-, and join-operations is called a \emph{clique-expression}. The graph constructed by a clique-expression $\mu$ is denoted $G_\mu$ and the label function is denoted $\mathtt{lab}_\mu \colon V(G_\mu) \rightarrow \mathbb{N}$. We associate to a clique-expression $\mu$ the syntax tree $T_\mu$ in the natural way and to each node $t \in V(T_\mu)$ the corresponding operation. For any node $t \in V(T_\mu)$ the subtree rooted at $t$ induces a \emph{subexpression} $\mu_t$. When a clique-expression $\mu$ is fixed, we define $G_t = G_{\mu_t}$ and $\mathtt{lab}_t = \mathtt{lab}_{\mu_t}$ for any $t \in V(T_\mu)$. We say that a clique-expression $\mu$ is a \emph{$k$-clique-expression} or just \emph{$k$-expression} if $(G_t, \mathtt{lab}_t)$ is $k$-labeled for all $t \in V(T_\mu)$. The \emph{clique-width} of a graph $G$, denoted by $\cw(G)$, is the minimum $k$ such that there exists a $k$-expression $\mu$ with $G = G_\mu$. A clique-expression $\mu$ is \emph{linear} if in every union-operation the second graph consists only of a single vertex. Accordingly, we also define the \emph{linear-clique-width} of a graph $G$, denoted $\lcw(G)$, by only considering linear clique-expressions.
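The four operations can be prototyped directly; the following Python sketch (illustrative names and representation, not an implementation from the literature) evaluates the operations bottom-up to the labeled graph they construct.
\begin{verbatim}
class LabeledGraph:
    def __init__(self):
        self.lab = {}          # vertex -> label
        self.edges = set()     # frozensets of size 2

def intro(v, ell):             # introduce: one vertex with label ell
    G = LabeledGraph()
    G.lab[v] = ell
    return G

def union(G1, G2):             # disjoint union (vertex sets assumed disjoint)
    G = LabeledGraph()
    G.lab = {**G1.lab, **G2.lab}
    G.edges = G1.edges | G2.edges
    return G

def relab(i, j, G):            # relabel all label-i vertices to label j
    for v, l in G.lab.items():
        if l == i:
            G.lab[v] = j
    return G

def join(i, j, G):             # add all edges between label-i and label-j
    assert i != j
    Vi = [v for v, l in G.lab.items() if l == i]
    Vj = [v for v, l in G.lab.items() if l == j]
    G.edges |= {frozenset((u, v)) for u in Vi for v in Vj}
    return G
\end{verbatim}
For instance, $\join{1}{2}(\intro{1}(a) \oplus \intro{2}(b))$, i.e., \texttt{join(1, 2, union(intro('a', 1), intro('b', 2)))} in the sketch, evaluates to the single edge $\{a,b\}$.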
\subsubsection*{Strong Exponential-Time Hypothesis.}
The \emph{Strong Exponential-Time Hypothesis} (SETH\xspace) \cite{CalabroIP09,ImpagliazzoPZ01} concerns the complexity of $q$-\textsc{Satisfiability}\xspace, i.e., \textsc{Satisfiability}\xspace where every clause contains at most $q$ literals. We define $c_q = \inf \{\delta : q\text{-\textsc{Satisfiability}\xspace can be solved in time } \Oh(2^{\delta n}) \}$ for all $q \geq 3$. The \emph{Exponential-Time} \emph{Hypothesis} (ETH) of Impagliazzo and Paturi~\cite{ImpagliazzoP01} posits that $c_3 > 0$, whereas the Strong Exponential-Time Hypothesis states that $\lim_{q \rightarrow \infty} c_q = 1$; equivalently, for every $\delta < 1$, there is some $q$ such that $q$-\textsc{Satisfiability}\xspace cannot be solved in time $\Oh(2^{\delta n})$.
For one of our lower bounds, the following weaker variant of SETH\xspace, also called CNF-SETH\xspace, is sufficient.
\begin{cnj}[CNF-SETH\xspace]
\label{conj:cnfseth}
For every $\varepsilon > 0$, there is no algorithm solving \textsc{Satisfiability}\xspace with $n$ variables and $m$ clauses in time $\Oh(\poly(m)(2-\varepsilon)^n)$.
\end{cnj}
\subsection{Parameter Relationships}
\begin{lem}\label{thm:cw_modpw}
For any graph $G$, we have $\cw(G) \leq \modpw(G) + 2$. An appropriate clique-expression can be computed in polynomial time given optimal path decompositions of the graphs in ${\mathcal{H}_p}(G)$.
\end{lem}
\begin{proof}
We construct a clique-expression $\mu$ for $G$ using at most $\modpw(G) + 2$ labels by working bottom-up along the modular decomposition tree. More precisely, we inductively construct $(\modpw(G)+2)$-expressions $\mu_M$ for every $G[M]$, $M \in {\mathcal{M}_{\mathrm{tree}}}(G)$.
As the base case, we consider the leaves of the modular decomposition tree which correspond to singleton modules $\{v\}$, $v \in V$, and therefore each $\mu_{\{v\}}$ simply consists of a single introduce-operation. For any internal node $M$ of the modular decomposition tree with $\Pi_{mod}(G[M]) = \{M_1, \ldots, M_\ell\}$, we inductively assume that the clique-expressions $\mu_i := \mu_{M_i}$ for $G[M_i]$, $i \in [\ell]$, have already been constructed. Furthermore, we assume without loss of generality that every $\mu_i$ relabels all vertices to label $1$ at the end. We now distinguish between the node type of $M$ in the modular decomposition tree. If $M$ is a parallel node, then we obtain $\mu_M$ by successively taking the union of all $\mu_i$, $i \in [\ell]$.
If $M$ is a series node, then we set $\mu'_{1} := \mu_1$ and $\mu'_{i+1} := \relab{2}{1}(\join{1}{2}(\mu'_{i} \oplus \relab{1}{2}(\mu_{i+1})))$ for all $i \in [\ell-1]$ and $\mu_M = \mu'_\ell$. So, we add one child module after the other and add all edges to the previous child modules using two labels.
If $M$ is a prime node, then we consider an optimal path decomposition $(\TT^q, (\mathbb{B}^q_t)_{t \in V(\TT^q)})$ of the quotient graph $\qgraph{M} = G[M] / \Pi_{mod}(G[M])$. By \cref{thm:very_nice_tree_decomposition}, we can assume that it is a very nice path decomposition. We inductively construct clique-expressions $\mu'_t$ for every $t \in V(\TT^q)$ such that every module in the current bag has a private label and all forgotten modules get label $\ell_{max} := \modpw(G) + 2$. Since every bag contains at most $\modpw(G) + 1$ modules, all smaller labels may be used as private labels. If $\hat{r}$ denotes the root node of $\TT^q$, then we set $\mu_M = \mu'_{\hat{r}}$. The base case is given by the leaf node with $\mathbb{B}^q_t = \emptyset$, where $\mu'_t$ is simply the empty expression.
For an introduce vertex node $t$ introducing vertex $v^q_{M_i}$, with child $s$, let $\ell_i$ denote the smallest label that is not in use at the end of $\mu'_{s}$ and set $\mu'_t = \mu'_{s} \oplus \relab{1}{\ell_i}(\mu_i)$.
For an introduce edge node $t$ introducing edge $\{v^q_{M_i}, v^q_{M_j}\}$, with child $s$, let $\ell_i$ and $\ell_j$ denote the labels of $M_i$ and $M_j$ respectively in $\mu'_{s}$ and set $\mu'_t = \join{\ell_i}{\ell_j}(\mu'_{s})$.
For a forget vertex node $t$, which forgets vertex $v^q_{M_i}$, with child $s$, we let $\ell_i$ denote the label of $M_i$ in $\mu'_{s}$ and set $\mu'_t = \relab{\ell_i}{\ell_{max}}(\mu'_{s})$. \qed
\end{proof}
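Using the operations sketched after the definition of clique-expressions, the series-node composition from the proof above reads, schematically, as follows; we assume each already-evaluated expression carries all its vertices on label $1$.
\begin{verbatim}
def series_node(parts):
    # parts: already-evaluated expressions mu_1, ..., mu_ell,
    # each with all vertices labeled 1.
    G = parts[0]
    for nxt in parts[1:]:
        # mu'_{i+1} = relab_{2->1}(join_{1,2}(mu'_i + relab_{1->2}(mu_{i+1})))
        G = relab(2, 1, join(1, 2, union(G, relab(1, 2, nxt))))
    return G
\end{verbatim}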
Note that \cref{thm:cw_modpw} can only hold for modular-pathwidth and not modular-treewidth, as already for treewidth, Corneil and Rotics~\cite{CorneilR05} show that for every $k$ there exists a graph $G_k$ with treewidth $k$ and clique-width exponential in $k$.
\begin{lem}
For any graph $G$, we have $\modpw(G) \leq \max(2, \tcpw(G))$ and $\modtw(G) \leq \max(2, \tctw(G))$.
\end{lem}
\begin{proof}
Since parallel and series nodes do not affect $\modpw(G)$ or $\modtw(G)$, it is sufficient to consider the prime nodes. Let $G[M]$, $M \in {\mathcal{M}_{\mathrm{tree}}^*}(G)$, be some internal prime node in the modular decomposition tree of $G$. We want to show that $\pw(\qgraph{M}) = \pw(G[M] / \Pi_{mod}(G[M])) \leq \pw(G / \Pi_{tc}(G)) = \tcpw(G)$ and similarly for the treewidth. We claim that $\qgraph{M}$ is a subgraph of $G / \Pi_{tc}(G)$ which implies the desired inequalities.
Since $M$ is a module, we see that the twinclasses of $G[M]$ have the form $C \cap M$, where $C$ is a twinclass of $G$. Therefore, the graph $G[M] / \Pi_{tc}(G[M])$ is an induced subgraph of $G / \Pi_{tc}(G)$. Furthermore, every proper twinclass of $G[M]$ is also a proper module of $G[M]$. By \cref{thm:gallai_modular}, $\Pi_{mod}(G[M])$ must consist of all inclusion-maximal proper modules of $G[M]$. Thus, $\Pi_{tc}(G[M])$ refines $\Pi_{mod}(G[M])$, and by picking for every module in $\Pi_{mod}(G[M])$ one twinclass contained in it, we see that $\qgraph{M} = G[M] / \Pi_{mod}(G[M])$ is isomorphic to an induced subgraph of $G[M] / \Pi_{tc}(G[M])$, which shows our claim. \qed
\end{proof}
\begin{thm}[\cite{HegerfeldK22}]
\label{thm:hierarchy_cliquewidth}
For a graph $G$, we have $\cw(G) \leq \lcw(G) \leq \tcpw(G) + 4 \leq \pw(G) + 4$.
\end{thm}
\subsubsection*{Mixed-search.} To prove that the graphs in our lower bound constructions have small pathwidth, it is easier to use a \emph{search game} characterization instead of directly constructing a path decomposition. The search game corresponding to pathwidth is the \emph{mixed-search-game}. In such a game, the graph $G$ represents a system of tunnels where all edges are contaminated by a gas. The objective is to clear all edges of this gas. An edge can be cleared by either placing searchers at both of its endpoints or by moving a searcher along the edge. If there is a path from an uncleared edge to a cleared edge without any searchers on the vertices or edges of the path, then the cleared edge is recontaminated. A \emph{search strategy} is a sequence of operations of the following types: a searcher can be placed on or removed from a vertex, and a searcher on a vertex can be moved along an incident edge and placed on the other endpoint. We say that a search strategy is \emph{winning} if after its termination all edges are cleared. The \emph{mixed-search-number} of a graph $G$, denoted $\ms(G)$, is the minimum number of searchers required for a winning strategy of the mixed-search-game on $G$.
\begin{lem}[\cite{TakahashiUK95}]
\label{thm:mixed_search}
We have that $\pw(G) \leq \ms(G) \leq \pw(G) + 1$.
\end{lem}
\section{Independent Set Parameterized by Modular-Treewidth}
\label{sec:modtw_vc_algo}
Let $G = (V,E)$ be a graph with a cost function $\mathbf{c} \colon V \rightarrow \mathbb{N} \setminus \{0\}$. We show how to compute for every $M \in {\mathcal{M}_{\mathrm{tree}}}(G)$ an independent set $X_M \subseteq M$ of $G[M]$ of maximum cost in time $\Oh^*(2^{\modtw(G)})$, given an optimal tree decomposition of every prime quotient graph in the modular decomposition of $G$.
\begin{lem}\label{thm:is_modular_structure}
If $X$ is an independent set of $G$, then for every module $M \in \Pi_{mod}(G)$ either $X \cap M = \emptyset$ or $X \cap M$ is a non-empty independent set of $G[M]$. Furthermore, $X^q := \pi_V(X)$ is an independent set of $G^q := G^q_V = G / \Pi_{mod}(G)$.
\end{lem}
\begin{proof}
If $G[X \cap M]$ contains an edge, then so does $G[X]$, hence the first part is trivially true. If $G^q[X^q]$ contains an edge $\{{v^q_\module}, v^q_{M'}\}$, then $M$ and $M'$ are adjacent modules with $X \cap M \neq \emptyset$ and $X \cap M' \neq \emptyset$, so $X$ cannot be an independent set. \qed
\end{proof}
Proceeding bottom-up along the modular decomposition tree of $G$, we make use of \cref{thm:is_modular_structure} to compute $X_M$ for all $M \in {\mathcal{M}_{\mathrm{tree}}}(G)$. As the base case, we consider singleton modules, i.e., $M = \{v\}$ for some $v \in V$. Clearly, $X_{\{v\}} = \{v\}$ is an independent set of maximum cost of $G[\{v\}]$ in this case. Otherwise, inductively assume that we have computed an independent set $X_M$ of maximum cost of $G[M]$ for all $M \in \Pi_{mod}(G)$ and we want to compute an independent set $X_V$ of maximum cost of $G$.
\subsubsection*{Parallel and series nodes.} If $G^q$ is a parallel or series node in the modular decomposition tree, i.e., $G^q$ is an independent set or clique respectively, then we give a special algorithm to compute $X_V$ that does not use a tree decomposition. If $G^q$ is a parallel node, then we simply set $X_V = \bigcup_{M \in \Pi_{mod}(G)} X_M$. If $G^q$ is a series node, then any independent set may intersect at most one module $M \in \Pi_{mod}(G)$, else the set would immediately induce an edge. Thus, we set in this case $X_V = \arg \max_{X_M} \mathbf{c}(X_M)$, where the maximum ranges over all $X_M$ with $M \in \Pi_{mod}(G)$.
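For concreteness, these two trivial combination rules look as follows in the same sketch style as before; \texttt{children\_sets} holds the precomputed sets $X_M$ and \texttt{c} the cost function.
\begin{verbatim}
def combine_parallel(children_sets):
    # parallel node: no edges between modules, so take the union
    return set().union(*children_sets)

def combine_series(children_sets, c):
    # series node: modules are pairwise joined, so an independent
    # set intersects at most one module; keep a most expensive X_M
    return max(children_sets, key=lambda X: sum(c[v] for v in X))
\end{verbatim}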
\subsubsection*{Prime nodes.} If $G^q = (V^q, E^q)$ is a prime node, then we are given a tree decomposition $(\TT^q, (\mathbb{B}^q_t)_{t \in V(\TT^q)})$ of $G^q$ of width at most $k$, which we can assume to be \emph{very nice} by \cref{thm:very_nice_tree_decomposition}. We perform dynamic programming along this tree decomposition. By \cref{thm:is_modular_structure}, every module in the currently considered bag naturally has one of two possible states: it can be empty (state $\mathbf{0}$) or non-empty (state $\mathbf{1}$), in which case we take an independent set of maximum cost inside it. Given that we have already computed the maximum independent sets $X_M$ for each $M \in \Pi_{mod}(G)$, we define the partial solutions of the dynamic programming as follows.
For each node $t \in V(\TT^q)$ of the tree decomposition, we define $\mathcal{A}_t$ as the family consisting of all $X \subseteq V_t = \pi_V^{-1}(V^q_t)$ such that the following properties hold for all $M \in \Pi_{mod}(G)$:
\begin{itemize}
\item $X \cap M \in \{\emptyset, X_M\}$,
\item if $X \cap M \neq \emptyset$, then $X \cap M' = \emptyset$ for all $\{{v^q_\module}, v^q_{M'}\} \in E(G^q_t)$.
\end{itemize}
Given a \emph{$t$-signature} $f \colon \mathbb{B}^q_t \rightarrow \mathbf{states} := \{\mathbf{0}, \mathbf{1}\}$, we define the subfamily $\mathcal{A}_t[f] \subseteq \mathcal{A}_t$ consisting of all $X \in \mathcal{A}_t$ such that the following properties hold for all ${v^q_\module} \in \mathbb{B}^q_t$:
\begin{itemize}
\item $f({v^q_\module}) = \mathbf{0}$ implies that $X \cap M = \emptyset$,
\item $f({v^q_\module}) = \mathbf{1}$ implies that $X \cap M = X_M$.
\end{itemize}
For each $t \in V(\TT^q)$ and $t$-signature $f \colon \mathbb{B}^q_t \rightarrow \mathbf{states}$, we compute $A_t[f] := \max_{X \in \mathcal{A}_t[f]} \mathbf{c}(X)$ by dynamic programming along the tree decomposition using the following recurrences depending on the bag type of node $t$.
\subsubsection*{Leaf bag.}
In the base case, $t$ is a leaf node of the tree decomposition, i.e., $t$ has no children and $\mathbb{B}_t = \mathbb{B}^q_t = \emptyset$. Here, we simply have $\mathcal{A}_t = \{\emptyset\}$ and hence $A_t[\emptyset] = 0$ for the empty $t$-signature.
\subsubsection*{Introduce vertex bag.}
We have that $\mathbb{B}^q_t = \mathbb{B}^q_s \cup \{{v^q_\module}\}$ and ${v^q_\module} \notin \mathbb{B}^q_s$, where $s$ is the only child node of $t$. We extend every $s$-signature by one of the two possible states for ${v^q_\module}$ and update the cost if necessary. Note that no edges incident to ${v^q_\module}$ are introduced yet. Hence, the recurrence is given by
\begin{align*}
A_t[f[{v^q_\module} \mapsto \mathbf{0}]] & = A_s[f], \\
A_t[f[{v^q_\module} \mapsto \mathbf{1}]] & = A_s[f] + \mathbf{c}(X_M),
\end{align*}
where $f$ is an $s$-signature.
\subsubsection*{Introduce edge bag.}
Let the introduced edge be denoted $\{{v^q_\module}, v^q_{M'}\}$. We have that $\{{v^q_\module}, v^q_{M'}\} \subseteq \mathbb{B}^q_t = \mathbb{B}^q_s$, where $s$ is the only child node of $t$. The recurrence only needs to filter out all partial solutions $X$ that intersect both $M$ and $M'$, since these cannot be independent sets. Hence, the recurrence is given by
\begin{equation*}
A_t[f] = [f({v^q_\module}) = \mathbf{0} \vee f(v^q_{M'}) = \mathbf{0}]A_s[f],
\end{equation*}
where $f$ is a $t$-signature.
\subsubsection*{Forget vertex bag.}
We have that $\mathbb{B}^q_t = \mathbb{B}^q_s \setminus \{{v^q_\module}\}$ and ${v^q_\module} \in \mathbb{B}^q_s$, where $s$ is the only child node of $t$. We simply try both states for the forgotten module $M$ and take the maximum, so the recurrence is given by
\begin{equation*}
A_t[f] = \max(A_s[f[{v^q_\module} \mapsto \mathbf{0}]], A_s[f[{v^q_\module} \mapsto \mathbf{1}]]),
\end{equation*}
where $f$ is a $t$-signature.
\subsubsection*{Join bag.}
We have that $\mathbb{B}^q_t = \mathbb{B}^q_{s_1} = \mathbb{B}^q_{s_2}$, where $s_1$ and $s_2$ are the two children of $t$. For each $t$-signature $f$, we can simply combine a best partial solution compatible with $f$ at $s_1$ with one at $s_2$, but we do have to account for overcounting in the cost. We have that $V^q_{s_1} \cap V^q_{s_2} = \mathbb{B}^q_t$, so these partial solutions can only overlap in the current bag. Hence, the recurrence is given by
\begin{equation*}
A_t[f] = A_{s_1}[f] + A_{s_2}[f] - \sum_{{v^q_\module} \in f^{-1}(\mathbf{1})} \mathbf{c}(X_M),
\end{equation*}
where $f$ is a $t$-signature.
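The recurrences above translate almost verbatim into code. The following Python sketch is illustrative only: the node representation (fields \texttt{kind}, \texttt{bag}, \texttt{children}, \texttt{data}) and the use of $-\infty$ for signatures without valid partial solutions are our assumptions, not notation from this paper. A signature is encoded as the frozenset of bag modules with state $\mathbf{1}$, and \texttt{cost[M]} stands for $\mathbf{c}(X_M)$.
\begin{verbatim}
NEG = float('-inf')

def solve(node, cost):
    if node.kind == 'leaf':
        return {frozenset(): 0}                 # A_t[empty signature] = 0
    if node.kind == 'join':
        A1 = solve(node.children[0], cost)
        A2 = solve(node.children[1], cost)
        # the cost of X_M for state-1 modules in the bag is counted twice
        return {f: A1[f] + A2[f] - sum(cost[M] for M in f) for f in A1}
    A = solve(node.children[0], cost)
    if node.kind == 'introduce':                # introduce vertex v^q_M
        M, out = node.data, {}
        for f, val in A.items():
            out[f] = val                        # state 0: M stays empty
            out[f | {M}] = val + cost[M]        # state 1: put X_M into X
        return out
    if node.kind == 'edge':                     # introduce edge {M1, M2}
        M1, M2 = node.data
        return {f: val if not (M1 in f and M2 in f) else NEG
                for f, val in A.items()}
    if node.kind == 'forget':                   # forget vertex v^q_M
        M, out = node.data, {}
        for f, val in A.items():
            g = f - {M}                         # max over both states of M
            out[g] = max(out.get(g, NEG), val)
        return out

# at the root r, whose bag is empty, solve(r, cost)[frozenset()]
# is the maximum cost of an independent set of the prime node
\end{verbatim}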
\subsubsection*{Lexicographic maximum independent set.}
When using this algorithm as a subroutine, we want to find an independent set $X$ that lexicographically maximizes $(\tilde{\mathbf{c}}(X), \tilde{\mathbf{w}}(X))$, where $\tilde{\mathbf{c}}\colon V \rightarrow [1, N_c]$ and $\tilde{\mathbf{w}}\colon V \rightarrow [1, N_w]$ are some given cost and weight functions with maximum values $N_c$ and $N_w$, respectively. Setting $\mathbf{c}(v) = (|V|+1) N_w \tilde{\mathbf{c}}(v) + \tilde{\mathbf{w}}(v)$ for all $v \in V$, we can simulate this setting with a single cost function $\mathbf{c}$ and recover $\tilde{\mathbf{w}}(X) = \mathbf{c}(X) \mod (|V|+1)N_w$ and $\tilde{\mathbf{c}}(X) = (\mathbf{c}(X) - \tilde{\mathbf{w}}(X)) / ((|V|+1)N_w)$. Alternatively, we may augment the dynamic programming to remember which arguments in the recurrences lead to the maximum, reconstruct the independent set $X$, and simply compute the values $\tilde{\mathbf{c}}(X)$ and $\tilde{\mathbf{w}}(X)$ directly.
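A quick sanity check of this encoding, in the same sketch style; the concrete numbers are arbitrary. The decoding works because $\tilde{\mathbf{w}}(X) \leq |V| N_w < (|V|+1) N_w$.
\begin{verbatim}
n, N_w = 10, 100                  # |V| and the maximum weight value
enc = lambda c, w: (n + 1) * N_w * c + w   # combined cost of one vertex

# c(X) aggregates linearly over X, so the same decoding applies:
total = enc(3, 57) + enc(2, 13)   # two vertices: c~(X) = 5, w~(X) = 70
w_X = total % ((n + 1) * N_w)               # recovers w~(X) = 70
c_X = (total - w_X) // ((n + 1) * N_w)      # recovers c~(X) = 5
assert (c_X, w_X) == (5, 70)
\end{verbatim}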
\begin{thm}
\label{thm:is_mtw_algo}
Let $G = (V, E)$ be a graph, $\tilde{\mathbf{c}}\colon V \rightarrow [1, N_c]$ be a cost function, and $\tilde{\mathbf{w}}\colon V \rightarrow [1, N_w]$ be a weight function.
If $N_c, N_w \leq |V|^{\Oh(1)}$, then there exists an algorithm that given a tree decomposition of width $k$ for every prime quotient graph in the modular decomposition tree of $G$, computes an independent set $X$ of $G$ lexicographically maximizing $(\tilde{\mathbf{c}}(X), \tilde{\mathbf{w}}(X))$ in time $\Oh^*(2^k)$.
\end{thm}
\begin{proof}
We first transform $\tilde{\mathbf{c}}$ and $\tilde{\mathbf{w}}$ into a single cost function $\mathbf{c}$ as described and then run the algorithm described in this section. Note that $\mathbf{c}$ is also polynomially bounded by $|V|$. The modular decomposition tree of $G$ contains at most $2|V|$ nodes. The base case, parallel nodes, and series nodes are handled in polynomial time. For every prime node, we perform the dynamic programming along the given tree decomposition in time $\Oh^*(2^k)$. Hence, the theorem follows. \qed
\end{proof}
\section{Problem Definitions}
\label{sec:problems}
\textbf{Connected Vertex Cover}
\begin{itemize}
\item[]\hspace*{-0.5cm} \textbf{Input:} An undirected graph $G = (V, E)$, a cost function $\mathbf{c}\colon V \rightarrow \mathbb{N} \setminus \{0\}$ and an integer $\overline{b}$.
\item[]\hspace*{-0.5cm} \textbf{Question:} Is there a set $X \subseteq V$, $\mathbf{c}(X) \leq \overline{b}$, such that $G - X$ contains no edges and $G[X]$ is connected?
\end{itemize}
\noindent\textbf{Connected Dominating Set}
\begin{itemize}
\item[]\hspace*{-0.5cm} \textbf{Input:} An undirected graph $G = (V, E)$, a cost function $\mathbf{c}\colon V \rightarrow \mathbb{N} \setminus \{0\}$ and an integer $\overline{b}$.
\item[]\hspace*{-0.5cm} \textbf{Question:} Is there a set $X \subseteq V$, $\mathbf{c}(X) \leq \overline{b}$, such that $N[X] = V$ and $G[X]$ is connected?
\end{itemize}
\noindent\textbf{(Node) Steiner Tree}
\begin{itemize}
\item[]\hspace*{-0.5cm} \textbf{Input:} An undirected graph $G = (V, E)$, a set of terminals $K \subseteq V$, a cost function $\mathbf{c}\colon V \rightarrow \mathbb{N} \setminus \{0\}$ and an integer $\overline{b}$.
\item[]\hspace*{-0.5cm} \textbf{Question:} Is there a set $X \subseteq V$, $\mathbf{c}(X) \leq \overline{b}$, such that $K \subseteq X$ and $G[X]$ is connected?
\end{itemize}
\noindent\textbf{Feedback Vertex Set}
\begin{itemize}
\item[]\hspace*{-0.5cm} \textbf{Input:} An undirected graph $G = (V, E)$, a cost function $\mathbf{c}\colon V \rightarrow \mathbb{N} \setminus \{0\}$ and an integer $\overline{b}$.
\item[]\hspace*{-0.5cm} \textbf{Question:} Is there a set $X \subseteq V$, $\mathbf{c}(X) \leq \overline{b}$, such that $G - X$ contains no cycles?
\end{itemize}
\noindent\textbf{Vertex Cover}
\begin{itemize}
\item[]\hspace*{-0.5cm} \textbf{Input:} An undirected graph $G = (V, E)$, a cost function $\mathbf{c}\colon V \rightarrow \mathbb{N} \setminus \{0\}$ and an integer $\overline{b}$.
\item[]\hspace*{-0.5cm} \textbf{Question:} Is there a set $X \subseteq V$, $\mathbf{c}(X) \leq \overline{b}$, such that $G - X$ contains no edges?
\end{itemize}
\noindent\textbf{Dominating Set}
\begin{itemize}
\item[]\hspace*{-0.5cm} \textbf{Input:} An undirected graph $G = (V, E)$, a cost function $\mathbf{c}\colon V \rightarrow \mathbb{N} \setminus \{0\}$ and an integer $\overline{b}$.
\item[]\hspace*{-0.5cm} \textbf{Question:} Is there a set $X \subseteq V$, $\mathbf{c}(X) \leq \overline{b}$, such that $N[X] = V$?
\end{itemize}
\noindent\textbf{Satisfiability}
\begin{itemize}
\item[]\hspace*{-0.5cm} \textbf{Input:} A boolean formula $\sigma$ in conjunctive normal form.
\item[]\hspace*{-0.5cm} \textbf{Question:} Is there a satisfying assignment $\tau$ for $\sigma$?
\end{itemize}
\noindent\textbf{$q$-Satisfiability}
\begin{itemize}
\item[]\hspace*{-0.5cm} \textbf{Input:} A boolean formula $\sigma$ in conjunctive normal form with clauses of size at most $q$.
\item[]\hspace*{-0.5cm} \textbf{Question:} Is there a satisfying assignment $\tau$ for $\sigma$?
\end{itemize}
\subsection{Connected Dominating Set}
\label{sec:modtw_cds_reduction}
In the \textsc{Connected Dominating Set}\xspace problem, we are given a graph $G = (V,E)$, a cost function $\mathbf{c} \colon V \rightarrow \mathbb{N} \setminus \{0\}$, and an integer $\overline{b}$ and we have to decide whether there exists a subset of vertices $X \subseteq V$ with $\mathbf{c}(X) \leq \overline{b}$ such that $N_G[X] = V$ and $G[X]$ is connected. We assume that $G$ is connected, otherwise the answer is trivially no, and that the costs $\mathbf{c}(v)$, $v \in V$, are at most polynomial in $|V|$.
\textsc{Connected Dominating Set}\xspace can be solved by essentially considering only the first quotient graph. First, we will have to handle some edge cases though. If the first quotient graph $G^q = \qgraph{V} = G / \Pi_{mod}(G)$ contains a \emph{universal} vertex ${v^q_\module} \in V(G^q)$, i.e., $N_{G^q}[{v^q_\module}] = V(G^q)$, then there could be a connected dominating set $X$ of $G$ that is fully contained in $M$. We search for such a connected dominating set by recursively solving \textsc{Connected Dominating Set}\xspace on $G[M]$. At some point, we arrive at a graph where the first quotient graph does not contain a universal vertex, or at the one-vertex graph. In the latter case, the answer is trivial. Otherwise, the structure of connected dominating sets allows us to solve the problem on the quotient graph $G^q$.
\begin{lem}
\label{thm:universal_implies_series}
If $|V| \geq 2$, then $G^q$ contains a universal vertex if and only if $G^q$ is a clique.
\end{lem}
\begin{proof}
The reverse direction is simple: every vertex of a clique is a universal vertex.
For the forward direction, suppose that $G^q$ contains a universal vertex $v^q_{M_0}$; in particular, $G^q$ cannot be a parallel node. Consider the set $M = V(G) \setminus M_0$ and notice that $M$ has to be a module of $G$, because $v^q_{M_0}$ is a universal vertex in $G^q$. If $G^q$ were a prime node, then all modules in $\Pi_{mod}(G)$ are inclusion-maximal proper modules by \cref{thm:gallai_modular}; since every module in $\Pi_{mod}(G) \setminus \{M_0\}$ is contained in the proper module $M$, maximality forces each of them to equal $M$, so $|\Pi_{mod}(G)| \leq 2$, which contradicts that $G^q$ is prime. Therefore, the only remaining possibility is that $G^q$ is a series node, i.e., $G^q$ is a clique. \qed
\end{proof}
\begin{lem}
\label{thm:cds_mod_structure}
If $G^q$ is a prime node, then no connected dominating set $X$ of $G$ is contained in a single module $M \in \Pi_{mod}(G)$. Furthermore, for any optimum connected dominating set $X$ of $G$ and module $M \in \Pi_{mod}(G)$ it holds that either $X \cap M = \emptyset$ or $X \cap M = \{v_M\}$, where $v_M$ is some vertex of minimum cost in $M$.
\end{lem}
\begin{proof}
By \cref{thm:universal_implies_series}, $G^q$ cannot contain a universal vertex. Suppose that $X \subseteq M$ for some $M \in \Pi_{mod}(G)$. Since $v^q_M \in V(G^q)$ is not a universal vertex, there exists a module $M' \in \Pi_{mod}(G) \setminus \{M\}$ that is not adjacent to $M$, hence $X$ cannot dominate the vertices in $M'$ and thus cannot be a connected dominating set.
For the statement about optimum connected dominating sets, suppose that $X$ is a connected dominating set of $G$ and $\mathbf{c}(X \cap M) > \mathbf{c}(v_M) > 0$, where $v_M$ is some vertex of minimum cost in $M$, for some $M \in \Pi_{mod}(G)$. The set $X' = (X \setminus M) \cup \{v_M\}$ satisfies $\mathbf{c}(X') < \mathbf{c}(X)$ and \cref{thm:module_exchange_connected} shows that $G[X']$ is connected. Since $X$ is a connected dominating set intersecting at least two modules, there has to be a module $M' \in \Pi_{mod}(G)$ that is adjacent to $M$ and satisfies $X \cap M' \neq \emptyset$. Since $M \neq M'$, there is some $v \in X' \cap M' \neq \emptyset$ which dominates all vertices in $M$. Hence, $X'$ is a dominating set as well.
Repeatedly applying this argument shows the statement about optimum connected dominating sets. \qed
\end{proof}
\begin{prop}[\cite{CyganNPPRW22}]
\label{thm:cds_tw_algo}
There exists an algorithm that given a tree decomposition of width at most $k$ for $G$ and a weight function $\mathbf{w}$ isolating the optimum connected dominating sets solves \textsc{Connected Dominating Set}\xspace in time $\Oh^*(4^k)$. If $\mathbf{w}$ is not isolating, then the algorithm may return false negatives.
\end{prop}
\begin{proof}
The algorithm presented by Cygan et al.~\cite{CyganNPPRW22} can be easily augmented to handle positive vertex costs in this running time under the assumption that the costs $\mathbf{c}(v)$, $v \in V$, are at most polynomial in $|V|$. Notice that the only source of randomness in the algorithm of Cygan et al.\ is the sampling of a weight function. If we are already given an isolating weight function, the algorithm will always succeed. \qed
\end{proof}
As for \textsc{Steiner Tree}\xspace, the strategy is again to essentially just call the known algorithm for \textsc{Connected Dominating Set}\xspace parameterized by treewidth on the quotient graphs. However, a single call will not be sufficient in the case of \textsc{Connected Dominating Set}\xspace; to still obtain the same success probability, we will analyze the behavior of isolating weight functions under the following reduction.
Let $(G, \mathbf{c}, \overline{b})$ be a \textsc{Connected Dominating Set}\xspace instance such that $G^q$ is a prime node and let $\mathbf{w} \colon V \rightarrow \mathbb{N}$ be a weight function. In each $M \in \Pi_{mod}(G)$ pick a vertex $v^{\mathbf{c}, \mathbf{w}}_M$ that lexicographically minimizes $(\mathbf{c}(v), \mathbf{w}(v))$ among all vertices $v \in M$. We construct the \textsc{Connected Dominating Set}\xspace instance $(G^q, \mathbf{c}^q, \overline{b})$ with $\mathbf{c}^q(v^q_M) = \mathbf{c}(v^{\mathbf{c}, \mathbf{w}}_M)$ for all $v^q_M \in V(G^q)$ and define the weight function $\mathbf{w}^q(v^q_M) = \mathbf{w}(v^{\mathbf{c}, \mathbf{w}}_M)$ for all $v^q_M \in V(G^q)$.
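The reduction to the quotient instance is a simple selection step; the sketch below uses assumed representations (modules as a dict from quotient-vertex names to vertex lists) in the same illustrative style as before.
\begin{verbatim}
def quotient_instance(modules, c, w):
    # For each module M pick v_M minimizing (c(v), w(v)) lexicographically,
    # and give the quotient vertex v^q_M that vertex's cost and weight.
    c_q, w_q, rep = {}, {}, {}
    for name, M in modules.items():
        v = min(M, key=lambda v: (c[v], w[v]))
        rep[name] = v            # remember v^{c,w}_M to lift X^q back to X
        c_q[name], w_q[name] = c[v], w[v]
    return c_q, w_q, rep
\end{verbatim}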
\begin{lem}
\label{thm:cds_modtw_reduction}
Let $(G, \mathbf{c}, \overline{b})$ be a \textsc{Connected Dominating Set}\xspace instance such that $G^q$ is a prime node, let $\mathbf{w} \colon V \rightarrow \mathbb{N}$ be a weight function, and let $(G^q, \mathbf{c}^q, \overline{b})$ and $\mathbf{w}^q$ be defined as above. The following statements hold:
\begin{enumerate}
\item If $X$ is an optimum connected dominating set of $(G, \mathbf{c})$, then $X^q = \pi_V(X)$ is a connected dominating set of $G^q$ with $\mathbf{c}^q(X^q) = \mathbf{c}(X)$.
\item If $X^q$ is an optimum connected dominating set of $(G^q, \mathbf{c}^q)$, then $X = \{v^{\mathbf{c}, \mathbf{w}}_M : v^q_M \in X^q\}$ is a connected dominating set of $G$ with $\mathbf{c}(X) = \mathbf{c}^q(X^q)$.
\item If $\mathbf{w}$ isolates the optimum connected dominating sets of $(G, \mathbf{c})$, then $\mathbf{w}^q$ isolates the optimum connected dominating sets of $(G^q, \mathbf{c}^q)$.
\end{enumerate}
\end{lem}
\begin{proof}
First, notice that the subgraph $G' = (V', E')$ of $G$ induced by $\{v^{\mathbf{c},\mathbf{w}}_M : M \in \Pi_{mod}(G)\}$ is isomorphic to $G^q$.
\begin{enumerate}
\item Let $X$ be an optimum connected dominating set of $(G, \mathbf{c})$ and set $X^q = \pi_V(X)$. We compute
\begin{equation*}
\mathbf{c}^q(X^q) = \sum_{v^q_M \in X^q} \mathbf{c}^q(v^q_M) = \sum_{\substack{M \in \Pi_{mod}(G):\\X \cap M \neq \emptyset}} \mathbf{c}(v^{\mathbf{c},\mathbf{w}}_M) = \sum_{\substack{M \in \Pi_{mod}(G):\\X \cap M \neq \emptyset}} \mathbf{c}(X \cap M) = \mathbf{c}(X),
\end{equation*}
where the penultimate equality follows from \cref{thm:cds_mod_structure} and the choice of $v^{\mathbf{c}, \mathbf{w}}_M$. Furthermore, we can assume $X \cap M = \{v^{\mathbf{c}, \mathbf{w}}_M\}$ whenever $X \cap M \neq \emptyset$ by \cref{thm:cds_mod_structure}. Then, the isomorphism between $G^q$ and $G'$ also maps $X^q$ to $X$ and hence $X^q$ has to be a connected dominating set of $G^q$.
\item Suppose that $X^q$ is an optimum connected dominating set of $(G^q, \mathbf{c}^q)$. Defining $X$ as above, we see that $X^q$ satisfies $X^q = \pi_V(X)$. By \cref{thm:universal_implies_series}, $G^q$ contains no universal vertex, hence $|X^q| \geq 2$ and $X$ must intersect at least two modules. Therefore, we can apply \cref{thm:quotient_connected} to see that $G[X]$ is connected. The isomorphism between $G^q$ and $G'$ shows that $X$ must dominate all vertices in $V'$.
For any vertex $v \in V \setminus (X \cup V')$ and its module $v \in M \in \Pi_{mod}(G)$, we claim that there exists a module $M' \in \Pi_{mod}(G)$ such that $v^{\mathbf{c}, \mathbf{w}}_{M'} \in X$ dominates $v$. If $X \cap M = \emptyset$, then there exists an adjacent module $M'$ with $X \cap M' \neq \emptyset$, because the vertex $v^{\mathbf{c}, \mathbf{w}}_M \in V'$ must be dominated by $X$. If $X \cap M \neq \emptyset$, a module $M'$ with the same properties exists, because $X$ intersects at least two modules and $G[X]$ is connected. In either case, $v^{\mathbf{c}, \mathbf{w}}_{M'}$ must dominate the vertex $v$ by the module property, hence $X$ is a connected dominating set of $G$. It remains to compute
\begin{equation*}
\mathbf{c}(X) = \sum_{v^{\mathbf{c},\mathbf{w}}_M \in X} \mathbf{c}(v^{\mathbf{c},\mathbf{w}}_M) = \sum_{v^q_M \in X^q} \mathbf{c}^q(v^q_M) = \mathbf{c}^q(X^q).
\end{equation*}
\item The first two statements show that connected dominating sets in $(G, \mathbf{c})$ and $(G^q, \mathbf{c}^q)$ have the same optimum cost. Suppose that $\mathbf{w}$ is a weight function that isolates the optimum connected dominating sets of $(G, \mathbf{c})$ and let $X$ be the optimum connected dominating set that is isolated by $\mathbf{w}$. Therefore, $X$ lexicographically minimizes $(\mathbf{c}(X), \mathbf{w}(X))$ among all connected dominating sets of $G$. By \cref{thm:cds_mod_structure}, we know that $X \cap M = \{v'_M\}$ whenever $X \cap M \neq \emptyset$, where $v'_M$ is a vertex of minimum cost in $M$.
We claim that $v'_M = v^{\mathbf{c}, \mathbf{w}}_M$ for all modules $M \in \Pi_{mod}(G)$ with $X \cap M \neq \emptyset$. By definition of $v^{\mathbf{c}, \mathbf{w}}_M$, we must have $\mathbf{w}(v'_M) \geq \mathbf{w}(v^{\mathbf{c}, \mathbf{w}}_M)$. If $\mathbf{w}(v'_M) > \mathbf{w}(v^{\mathbf{c}, \mathbf{w}}_M)$, then we could reduce the weight of $X$ by exchanging $v'_M$ with $v^{\mathbf{c}, \mathbf{w}}_M$, contradicting the minimality of $(\mathbf{c}(X), \mathbf{w}(X))$. If $\mathbf{w}(v'_M) = \mathbf{w}(v^{\mathbf{c}, \mathbf{w}}_M)$ and $v'_M \neq v^{\mathbf{c}, \mathbf{w}}_M$, then $X$ cannot be the isolated connected dominating set, because by exchanging $v'_M$ and $v^{\mathbf{c}, \mathbf{w}}_M$ we would obtain another connected dominating set of the same cost and weight. This proves the claim.
Using the claim, we compute
\begin{equation*}
\mathbf{w}^q(X^q) = \sum_{v^q_M \in X^q} \mathbf{w}^q(v^q_M) = \sum_{\substack{M \in \Pi_{mod}(G):\\X \cap M \neq \emptyset}} \mathbf{w}(v^{\mathbf{c}, \mathbf{w}}_M) = \mathbf{w}(X).
\end{equation*}
Finally, consider any other optimum connected dominating set $Y^q \neq X^q$ of $G^q$. Setting $Y = \{v^{\mathbf{c}, \mathbf{w}}_M : v^q_M \in Y^q\} \neq X$, we obtain $Y^q = \pi_V(Y)$ and $\mathbf{c}(Y) = \mathbf{c}^q(Y^q) = \mathbf{c}^q(X^q) = \mathbf{c}(X)$, hence $\mathbf{w}^q(Y^q) = \mathbf{w}(Y) > \mathbf{w}(X) = \mathbf{w}^q(X^q)$, where the inequality follows because $\mathbf{w}$ isolates the optimum connected dominating sets of $(G, \mathbf{c})$. This shows that $\mathbf{w}^q$ isolates the optimum connected dominating sets of $(G^q, \mathbf{c}^q)$. \qed
\end{enumerate}
\end{proof}
\begin{thm}
\label{thm:cds_modtw_algo}
There exists a Monte-Carlo algorithm that given a tree decomposition of width at most $k$ for every prime node in the modular decomposition of $G$ solves \textsc{Connected Dominating Set}\xspace in time $\Oh^*(4^k)$. The algorithm cannot give false positives and may give false negatives with probability at most $1/2$.
\end{thm}
\begin{proof}
We begin by sampling a weight function $\mathbf{w} \colon V \rightarrow [2|V|]$. By \cref{thm:isolation}, $\mathbf{w}$ isolates the optimum connected dominating sets of $(G, \mathbf{c})$ with probability at least $1/2$. The algorithm proceeds top-down through the modular decomposition tree of $G$, but we only recurse further if the current node is a series node. Each recursive call is determined by some ${\module^{\uparrow}} \in {\mathcal{M}_{\mathrm{tree}}}(G)$ and we have to determine in this call if a connected dominating set $X$ of $G[{\module^{\uparrow}}]$ with $\mathbf{c}(X) \leq \overline{b}$ exists, i.e., solve the \textsc{Connected Dominating Set}\xspace instance $(G[{\module^{\uparrow}}], \mathbf{c}\restrict{{\module^{\uparrow}}}, \overline{b})$. The weight function $\mathbf{w}$ is passed along by considering its restriction, i.e., $\mathbf{w}\restrict{{\module^{\uparrow}}}$.
Let $\mathcal{A}_{\tw}$ denote the algorithm from \cref{thm:cds_tw_algo}. Our algorithm may perform several calls to $\mathcal{A}_{\tw}$, where each call may return false negatives when the considered weight function is not isolating. We return to the error analysis after finishing the description of the modular-treewidth algorithm.
We begin by explaining the three base cases. If $|{\module^{\uparrow}}| = 1$, then we let ${\module^{\uparrow}} = \{v_{\module^{\uparrow}}\}$ and check whether $\mathbf{c}(v_{\module^{\uparrow}}) \leq \overline{b}$ and return yes or no accordingly. Otherwise, we have $|{\module^{\uparrow}}| \geq 2$ and can consider ${\qgraph{\pmodule}}$. If ${\qgraph{\pmodule}}$ is a parallel node, then the answer is trivially no. If ${\qgraph{\pmodule}}$ is a prime node, then we can invoke \cref{thm:cds_modtw_reduction} to reduce the \textsc{Connected Dominating Set}\xspace instance $(G[{\module^{\uparrow}}], \mathbf{c}\restrict{{\module^{\uparrow}}}, \overline{b})$ to a \textsc{Connected Dominating Set}\xspace instance on the quotient graph ${\qgraph{\pmodule}}$. We are given a tree decomposition of ${\qgraph{\pmodule}}$ of width at most $k$ by assumption. We run $\mathcal{A}_{\tw}$ on the quotient instance together with the weight function from \cref{thm:cds_modtw_reduction} and return its result.
Finally, suppose that ${\qgraph{\pmodule}}$ is a series node. In this case, any set $X$ of size $2$ that intersects two different modules $M \in \mathtt{children}({\module^{\uparrow}}) = \Pi_{mod}(G[{\module^{\uparrow}}])$ is a connected dominating set of $G[{\module^{\uparrow}}]$. We compute all those sets by brute force in polynomial time and return yes if any of them satisfies $\mathbf{c}(X) \leq \overline{b}$. Otherwise, we need to recurse into the modules $M \in \mathtt{children}({\module^{\uparrow}})$, because any connected dominating set of $G[M]$ will also be a connected dominating set of $G[{\module^{\uparrow}}]$. We return yes if at least one of these recursive calls returns yes. This concludes the description of the algorithm and we proceed with the error analysis now.
The only source of errors is that we may call $\mathcal{A}_{\tw}$ with a non-isolating weight function, but this can only yield false negatives and hence the modular-treewidth algorithm cannot give false positives either. Even if the sampled weight function is isolating, this may not be the case for the restrictions $\mathbf{w}\restrict{{\module^{\uparrow}}}$, ${\module^{\uparrow}} \in {\mathcal{M}_{\mathrm{tree}}}(G)$. Nonetheless, we show that if $\mathbf{w}$ is isolating, then the modular-treewidth algorithm does not return an erroneous result. To do so, we show that if $\mathbf{w}\restrict{\module^{\uparrow}}$ is isolating at a series node, then the weight function in the branch containing the isolated optimum connected dominating set must be isolating as well.
To be precise, suppose that ${\qgraph{\pmodule}}$ is a series node and that $\mathbf{w}\restrict{{\module^{\uparrow}}}$ isolates $X^*$ among the optimum connected dominating sets of $(G[{\module^{\uparrow}}], \mathbf{c}\restrict{{\module^{\uparrow}}})$. We claim that $\mathbf{w}\restrict{M}$, $M \in \mathtt{children}({\module^{\uparrow}})$, isolates $X^*$ among the optimum connected dominating sets of $(G[M], \mathbf{c}\restrict{M})$ if $X^* \subseteq M$. This follows by a simple exchange argument: if $\mathbf{w}\restrict{M}$ is not isolating, i.e., there is some optimum connected dominating set $X \neq X^*$ of $(G[M], \mathbf{c}\restrict{M})$ with $\mathbf{w}(X) = \mathbf{w}(X^*)$, then $X$ is also an optimum connected dominating set of $(G[{\module^{\uparrow}}], \mathbf{c}\restrict{{\module^{\uparrow}}})$, contradicting that $\mathbf{w}\restrict{{\module^{\uparrow}}}$ is isolating $X^*$. If $X^*$ intersects multiple modules $M \in \mathtt{children}({\module^{\uparrow}})$, then $X^*$ is found deterministically among the sets of size $2$.
As $\mathbf{w}$ is isolating with probability at least $1/2$, this concludes the error analysis. Furthermore, we spend time at most $\Oh^*(4^k)$ per module $M \in {\mathcal{M}_{\mathrm{tree}}}(G)$. Therefore, the theorem statement follows. \qed
\end{proof}
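The overall control flow of the algorithm from the proof above can be summarized as follows; \texttt{solve\_tw} stands in for the treewidth algorithm $\mathcal{A}_{\tw}$ of \cref{thm:cds_tw_algo}, \texttt{quotient\_instance} is the reduction sketched earlier, and all other helper names are our own assumptions.
\begin{verbatim}
def solve_cds(M_up, c, w, budget):
    # M_up: vertex set of the current module; node_type, quotient_graph,
    # cross_module_pairs, and solve_tw are assumed helper functions.
    if len(M_up) == 1:                       # base case: single vertex
        (v,) = M_up
        return c[v] <= budget
    kind, children = node_type(M_up)         # 'parallel', 'series', 'prime'
    if kind == 'parallel':
        return False                         # G[M_up] is disconnected
    if kind == 'prime':                      # reduce to the quotient graph
        c_q, w_q, rep = quotient_instance(children, c, w)
        return solve_tw(quotient_graph(M_up), c_q, w_q, budget)
    # series node: every pair from two different modules is a connected
    # dominating set; otherwise recurse into the child modules
    for u, v in cross_module_pairs(children):
        if c[u] + c[v] <= budget:
            return True
    return any(solve_cds(M, c, w, budget) for M in children.values())
\end{verbatim}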
Cygan et al.~\cite{CyganNPPRW11arxiv} have shown that \textsc{Connected Dominating Set}\xspace cannot be solved in time $\Oh^*((4 - \varepsilon)^{\pw(G)})$ for some $\varepsilon > 0$, unless SETH\xspace fails. Since $\modtw(G) \leq \tw(G) \leq \pw(G)$, this shows that the running time of \cref{thm:cds_modtw_algo} is tight.
\section{Connected Vertex Cover Algorithm}
\label{sec:modtw_cvc_algo}
In the \textsc{Connected Vertex Cover}\xspace problem, we are given a graph $G = (V,E)$, a cost function $\mathbf{c}\colon V \rightarrow \mathbb{N} \setminus \{0\}$, and an integer $\overline{b}$ and we have to decide whether there exists a subset of vertices $X \subseteq V$ with $\mathbf{c}(X) \leq \overline{b}$ such that $G - X$ contains no edges and $G[X]$ is connected. We will assume that the values of the cost function $\mathbf{c}$ are polynomially bounded in the size of the graph $G$. We also assume that $G$ is connected and contains at least two vertices, hence $|\Pi_{mod}(G)| \geq 2$ and $G^q := G^q_V = G / \Pi_{mod}(G)$ cannot be edgeless.
To solve \textsc{Connected Vertex Cover}\xspace, we begin by computing some optimum (possibly non-connected) vertex cover $Y_M$ with respect to $\mathbf{c}{\big|_M}$ for every module $M \in \Pi_{mod}(G)$ such that $G[M]$ contains at least one edge. If $G[M]$ contains no edges, then we set $Y_M = \{v^*_M\}$, where $v^*_M \in M$ is a vertex minimizing the cost inside $M$, i.e., $v^*_M := \arg \min_{v \in M} \mathbf{c}(v)$. The vertex covers can be computed in time $\Oh^*(2^{\modtw(G)})$ by using the algorithm from \cref{thm:is_mtw_algo}.
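This precomputation could be organized as in the following sketch, where modules are given as tuples of vertices; \texttt{min\_vertex\_cover} stands in for the algorithm of \cref{thm:is_mtw_algo} and \texttt{has\_edge} for an edge test in $G[M]$, both of which are assumptions of this sketch.
\begin{verbatim}
def precompute_covers(modules, cost, has_edge, min_vertex_cover):
    # For each module M, store an optimum vertex cover of G[M] if
    # G[M] has an edge, and a cheapest single vertex otherwise.
    Y = {}
    for M in modules:
        if has_edge(M):
            Y[M] = min_vertex_cover(M)   # optimum w.r.t. cost on M
        else:
            Y[M] = (min(M, key=lambda v: cost[v]),)
    return Y
\end{verbatim}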
\begin{dfn}
Let $X \subseteq V$ be a vertex subset. We say that $X$ is \emph{nice} if for every module $M \in \Pi_{mod}(G)$ it holds that $X \cap M \in \{\emptyset, Y_M, M\}$.
\end{dfn}
We will show that it is sufficient to only consider nice vertex covers via some exchange arguments. This allows us to only consider a constant number of states per module in the dynamic programming algorithm.
\begin{lem}
\label{thm:cvc_mod_structure}
If there exists a connected vertex cover $X$ of $G$ that intersects at least two modules in $\Pi_{mod}(G)$, then there exists a connected vertex cover $X'$ of $G$ that is nice and intersects at least two modules in $\Pi_{mod}(G)$ with $\mathbf{c}(X') \leq \mathbf{c}(X)$.
\end{lem}
\begin{proof}
Let $X$ be the given connected vertex cover. Via exchange arguments, we will see that we can find a nice connected vertex cover with at most the same cost. Suppose that there is a module $M \in \Pi_{mod}(G)$ such that $G[M]$ contains no edges and $1 \leq |X \cap M| < |M|$. We claim that $X' = (X \setminus M) \cup \{v^*_M\}$ is a connected vertex cover with $\mathbf{c}(X') \leq \mathbf{c}(X)$. For any module $M' \in \Pi_{mod}(G)$ adjacent to $M$, we must have that $X' \cap M' = X \cap M' = M'$, else there would be an edge between $M$ and $M'$ that is not covered by $X$. In particular, all edges incident to $M$ are already covered by $X \setminus M = X' \setminus M$. By \cref{thm:module_exchange_connected}, $X'$ is connected and we have that $\mathbf{c}(X') \leq \mathbf{c}(X)$ due to the choice of $v^*_M$.
If $M \in \Pi_{mod}(G)$ is a module such that $G[M]$ contains at least one edge, then we consider two cases. If $\mathbf{c}(X \cap M) < \mathbf{c}(Y_M)$, then $X \cap M$ cannot be a vertex cover of $G[M]$ and hence $X$ would not be a vertex cover of $G$. If $\mathbf{c}(Y_M) \leq \mathbf{c}(X \cap M) < \mathbf{c}(M)$, then we claim that $X' = (X \setminus M) \cup Y_M$ is a connected vertex cover with $\mathbf{c}(X') \leq \mathbf{c}(X)$. By assumption, we have $\mathbf{c}(X') \leq \mathbf{c}(X)$. We must have that $X \cap M \neq M$, therefore, as before, $X$ and $X'$ must fully contain all modules adjacent to $M$ to cover all edges leaving $M$. Since $G[M]$ contains at least one edge, we have that $Y_M \neq \emptyset$ and $G[X']$ must be connected by \cref{thm:module_exchange_connected}.
By repeatedly applying these arguments to $X$, we obtain the claim. \qed
\end{proof}
The next lemma enables us to handle connected vertex covers that are contained in a single module with polynomial-time preprocessing.
\begin{lem}
\label{thm:cvc_single_module}
A vertex set $X \subseteq V$ is a connected vertex cover of $G$ with $X \subseteq M$ for some module $M \in \Pi_{mod}(G)$ if and only if $X = M$, all edges of $G$ are incident to $M$, and $G[M]$ is connected.
\end{lem}
\begin{proof}
The reverse direction is trivial. We will show the forward direction. Since $G$ is connected and $|\Pi_{mod}(G)| \geq 2$, there exists a module $M' \in \Pi_{mod}(G)$ adjacent to $M$. If $X \neq M$, then there exists an edge between $M$ and $M'$ that is not covered by $X$. If there is an edge in $G$ not incident to $M$, then clearly $X$ cannot cover all edges. Clearly, $G[X] = G[M]$ must be connected. \qed
\end{proof}
Before going into the main algorithm, we handle the edge case of series nodes. The following lemma shows that there are only a polynomial number of interesting cases for series nodes, hence we can check them by brute force in polynomial time.
\begin{lem}
\label{thm:cvc_series}
If $G^q$ is a clique of size at least two, then for any vertex cover $X$ there is some $M' \in \Pi_{mod}(G)$ such that for all other modules $M' \neq M \in \Pi_{mod}(G)$, we have $X \cap M = M$.
\end{lem}
\begin{proof}
Suppose there are two modules $M_1 \neq M_2 \in \Pi_{mod}(G)$ such that $X \cap M_1 \neq M_1$ and $X \cap M_2 \neq M_2$. These modules are adjacent, because $G^q$ is a clique, and thus $X$ cannot be a vertex cover, since there exists an uncovered edge between $M_1 \setminus X$ and $M_2 \setminus X$. \qed
\end{proof}
\subsection{Dynamic Programming for Prime Nodes}
It remains to handle the case that $V(G)$ is a prime node, i.e., the quotient graph $G^q$ is prime. Due to \cref{thm:cvc_single_module}, we now only need to look for connected vertex covers that intersect at least two modules in $\Pi_{mod}(G)$. Hence, we can make use of \cref{thm:cvc_mod_structure} and \cref{thm:hom_cut}. We are given a tree decomposition $(\TT^q, (\mathbb{B}^q_t)_{t \in V(\TT^q)})$ of the quotient graph $G^q := G^q_V = G / \Pi_{mod}(G)$ of width $k$ and by \cref{thm:very_nice_tree_decomposition}, we can assume that it is a very nice tree decomposition.
To solve \textsc{Connected Vertex Cover}\xspace on $G$, we perform dynamic programming along the tree decomposition $\TT^q$ using the cut-and-count-technique. \cref{thm:hom_cut} allows us to work directly on the quotient graph. We begin by presenting the cut-and-count-formulation of the problem. For any subgraph $G'$ of $G$, we define the \emph{relaxed solutions} $\mathcal{R}(G') = \{X \subseteq V(G') : X \text{ is a nice vertex cover of $G'$}\}$ and \emph{the cut solutions} $\mathcal{Q}(G') = \{(X, (X_L, X_R)) \in \homcuts{V}(G') : X \in \mathcal{R}(G')\}$.
For the isolation lemma, cf.\ \cref{thm:isolation}, we sample a weight function $\mathbf{w}\colon V \rightarrow [2n]$ uniformly at random. We will need to track the cost $\mathbf{c}(X)$, the weight $\mathbf{w}(X)$, and the number of intersected modules $|\pi_V(X)|$ of each partial solution $(X, (X_L, X_R))$. Accordingly, we define $\mathcal{R}^{{\overline{c}}, {\overline{w}}, {\overline{m}}}(G') = \{X \in \mathcal{R}(G') : \mathbf{c}(X) = {\overline{c}}, \mathbf{w}(X) = {\overline{w}}, |\pi_V(X)| = {\overline{m}}\}$ and $\mathcal{Q}^{{\overline{c}}, {\overline{w}}, {\overline{m}}}(G') = \{(X, (X_L, X_R)) \in \mathcal{Q}(G') : X \in \mathcal{R}^{{\overline{c}}, {\overline{w}}, {\overline{m}}}(G')\}$ for all subgraphs $G'$ of $G$, ${\overline{c}} \in [0, \mathbf{c}(V)], {\overline{w}} \in [0, \mathbf{w}(V)], {\overline{m}} \in [0, |\Pi_{mod}(G)|]$.
As discussed, to every node $t \in V(\TT^q)$ we associate a subgraph $G^q_t = (V^q_t, E^q_t)$ of $G^q$ in the standard way, which in turn gives rise to a subgraph $G_t = (V_t, E_t)$ of $G$. The subgraphs $G_t$ grow module by module and are considered by the dynamic program, hence we define $\mathcal{R}^{{\overline{c}}, {\overline{w}}, {\overline{m}}}_t = \mathcal{R}^{{\overline{c}}, {\overline{w}}, {\overline{m}}}(G_t)$ and $\mathcal{Q}^{{\overline{c}}, {\overline{w}}, {\overline{m}}}_t = \mathcal{Q}^{{\overline{c}}, {\overline{w}}, {\overline{m}}}(G_t)$ for all ${\overline{c}}$, ${\overline{w}}$, and ${\overline{m}}$. We will compute the sizes of the sets $\mathcal{Q}^{{\overline{c}}, {\overline{w}}, {\overline{m}}}_t$ by dynamic programming over the tree decomposition $\TT^q$, but to do so we need to parameterize the partial solutions by their state on the current bag.
Disregarding the side of the cut, \cref{thm:cvc_mod_structure} tells us that each module $M \in \Pi_{mod}(G)$ has one of three possible states for a given $X \in \mathcal{R}^{{\overline{c}}, {\overline{w}}, {\overline{m}}}_t$, namely $X \cap M \in \{\emptyset, Y_M, M\}$. Since we are considering homogeneous cuts, there are two possibilities if $X \cap M \neq \emptyset$: $X \cap M$ is contained either in the left side of the cut or in the right side. Thus, there are five choices in total. We define $\mathbf{states} = \{\mathbf{0}, \mathbf{1}_L, \mathbf{1}_R, \mathbf{A}_L, \mathbf{A}_R\}$ with $\mathbf{1}$ denoting that the partial solution contains at least one vertex, but not all, from the module and with $\mathbf{A}$ denoting that the partial solution contains all vertices of the module; the subscript denotes the side of the cut.
A function of the form $f\colon \mathbb{B}^q_t \rightarrow \mathbf{states}$ is called a \emph{$t$-signature}. For every node $t \in V(\TT^q)$, cost ${\overline{c}} \in [0, \mathbf{c}(V)]$, weight ${\overline{w}} \in [0, \mathbf{w}(V)]$, number of modules ${\overline{m}} \in [0, |\Pi_{mod}(G)|]$, and $t$-signature $f$, the family $\mathcal{A}^{{\overline{c}}, {\overline{w}}, {\overline{m}}}_t(f)$ consists of all $(X, (X_L, X_R)) \in \mathcal{Q}^{{\overline{c}}, {\overline{w}}, {\overline{m}}}_t$ that satisfy for all $v^q_M \in \mathbb{B}^q_t$:
\begin{align*}
f(v^q_M) = \mathbf{0}_{\phantom{L}} & \leftrightarrow _{\phantom{L}}X \cap M = \emptyset, \\
f(v^q_M) = \mathbf{1}_L & \leftrightarrow X_L \cap M = Y_M \neq M, &
f(v^q_M) = \mathbf{1}_R & \leftrightarrow X_R \cap M = Y_M \neq M, \\
f(v^q_M) = \mathbf{A}_L & \leftrightarrow X_L \cap M = M, &
f(v^q_M) = \mathbf{A}_R & \leftrightarrow X_R \cap M = M.
\end{align*}
Recall that by considering homogeneous cuts, we have that $X_L \cap M = \emptyset$ or $X_R \cap M = \emptyset$ for every module $M \in \Pi_{mod}(G)$. We use the condition $Y_M \neq M$ for the states $\mathbf{1}_L$ and $\mathbf{1}_R$ to ensure a well-defined state for modules of size 1. Note that the sets $\mathcal{A}^{{\overline{c}}, {\overline{w}}, {\overline{m}}}_t(f)$, ranging over $f$, partition $\mathcal{Q}^{{\overline{c}}, {\overline{w}}, {\overline{m}}}_t$ due to considering nice vertex covers and homogeneous cuts.
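For concreteness, the state of a module under a homogeneous cut can be read off as in the following sketch; the encoding of states as short strings is our own illustration and not part of the formal development.
\begin{verbatim}
def state_of(M, Y_M, X_L, X_R):
    # State of module M (all arguments are sets) under the cut
    # (X_L, X_R), or None if the configuration is not admissible
    # for a nice vertex cover.
    L, R = X_L & M, X_R & M
    if not L and not R:
        return "0"
    if L and R:
        return None              # violates homogeneity
    side, S = ("L", L) if L else ("R", R)
    if S == M:
        return "A" + side        # modules of size 1 end up here
    if S == Y_M and Y_M != M:
        return "1" + side
    return None                  # X is not nice on M
\end{verbatim}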
Our goal is to compute the size of $\mathcal{A}^{{\overline{c}}, {\overline{w}}, {\overline{m}}}_{\hat{r}}(\emptyset) = \mathcal{Q}^{{\overline{c}}, {\overline{w}}, {\overline{m}}}_{\hat{r}} = \mathcal{Q}^{{\overline{c}}, {\overline{w}}, {\overline{m}}}(G)$, where $\hat{r}$ is the root vertex of the tree decomposition $\TT^q$, modulo 4 for all ${\overline{c}}$, ${\overline{w}}$, ${\overline{m}}$. By \cref{thm:hom_cut}, there is a connected vertex cover $X$ of $G$ with $\mathbf{c}(X) = {\overline{c}}$ and $\mathbf{w}(X) = {\overline{w}}$ if the result is nonzero modulo $4$.
We present the recurrences for the various bag types to compute $A^{{\overline{c}}, {\overline{w}}, {\overline{m}}}_t(f) = |\mathcal{A}^{{\overline{c}}, {\overline{w}}, {\overline{m}}}_t(f)|$; if not stated otherwise, then $t \in V(\TT^q)$, ${\overline{c}} \in [0, \mathbf{c}(V)]$, ${\overline{w}} \in [0, \mathbf{w}(V)]$, ${\overline{m}} \in [0, |\Pi_{mod}(G)|]$, and $f$ is a $t$-signature. We set $A^{{\overline{c}}, {\overline{w}}, {\overline{m}}}_t(f) = 0$ whenever at least one of ${\overline{c}}$, ${\overline{w}}$, or ${\overline{m}}$ is negative.
\subsubsection*{Leaf bag.} We have that $\mathbb{B}^q_t = \mathbb{B}_t = \emptyset$ and $t$ has no children. The only possible $t$-signature is $\emptyset$ and the only possible partial solution is $(\emptyset, (\emptyset, \emptyset))$. Hence, we only need to check the tracker values:
\begin{equation*}
A_t^{{\overline{c}}, {\overline{w}}, {\overline{m}}}(\emptyset) = [{\overline{c}} = 0] [{\overline{w}} = 0] [{\overline{m}} = 0].
\end{equation*}
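In an implementation, each table can be a dictionary mapping a key $({\overline{c}}, {\overline{w}}, {\overline{m}}, f)$ to a count modulo $4$, with the signature $f$ encoded, say, as a frozen set of (vertex, state) pairs. Under this hypothetical representation, the leaf case is a one-liner:
\begin{verbatim}
def leaf_table():
    # Only the empty signature with all trackers equal to zero has
    # exactly one partial solution, the empty set with the empty cut.
    return {(0, 0, 0, frozenset()): 1}
\end{verbatim}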
\subsubsection*{Introduce vertex bag.} We have $\mathbb{B}^q_t = \mathbb{B}^q_s \cup \{v^q_M\}$, where $s \in V(\TT^q)$ is the only child of $t$ and $v^q_M \notin \mathbb{B}^q_s$. Hence, $\mathbb{B}_t = \mathbb{B}_s \cup M$. We have to consider all possible interactions of a partial solution with $M$; since we are considering nice vertex covers, these interactions are quite restricted. To formulate the recurrence, we let, as an exceptional case, $f$ be an $s$-signature here and not a $t$-signature. Since no edges of the quotient graph $G^q$ incident to $v^q_M$ are introduced yet, we only have to check some edge cases and update the trackers when introducing $v^q_M$:
\begin{equation*}
\begin{array}{lcll}
A^{{\overline{c}}, {\overline{w}}, {\overline{m}}}_t(f[v^q_M \mapsto \mathbf{0}]) & = & [G[M] \text{ is edgeless}] & A^{{\overline{c}}, {\overline{w}}, {\overline{m}}}_s(f), \\
A^{{\overline{c}}, {\overline{w}}, {\overline{m}}}_t(f[v^q_M \mapsto \mathbf{1}_L]) & = & [|M| > 1] & A^{{\overline{c}} - \mathbf{c}(Y_M), {\overline{w}} - \mathbf{w}(Y_M), {\overline{m}} - 1}_s(f), \\
A^{{\overline{c}}, {\overline{w}}, {\overline{m}}}_t(f[v^q_M \mapsto \mathbf{1}_R]) & = & [|M| > 1] & A^{{\overline{c}} - \mathbf{c}(Y_M), {\overline{w}} - \mathbf{w}(Y_M), {\overline{m}} - 1}_s(f), \\
A^{{\overline{c}}, {\overline{w}}, {\overline{m}}}_t(f[v^q_M \mapsto \mathbf{A}_L]) & = & & A^{{\overline{c}} - \mathbf{c}(M), {\overline{w}} - \mathbf{w}(M), {\overline{m}} - 1}_s(f), \\
A^{{\overline{c}}, {\overline{w}}, {\overline{m}}}_t(f[v^q_M \mapsto \mathbf{A}_R]) & = & & A^{{\overline{c}} - \mathbf{c}(M), {\overline{w}} - \mathbf{w}(M), {\overline{m}} - 1}_s(f). \\
\end{array}
\end{equation*}
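Read forward, the introduce recurrence adds to the trackers instead of subtracting from them. A minimal sketch, continuing the dictionary representation from above, where \texttt{cost} and \texttt{weight} are hypothetical helpers summing over vertex sets:
\begin{verbatim}
def introduce_vertex(A_s, vM, M, Y_M, cost, weight, edgeless):
    # Extend the child table A_s by module M (quotient vertex vM);
    # no quotient edges incident to vM are checked yet.
    A_t = {}
    for (c, w, m, f), val in A_s.items():
        if edgeless:                                  # state 0
            A_t[(c, w, m, f | {(vM, "0")})] = val
        if len(M) > 1:                                # states 1L, 1R
            for s in ("1L", "1R"):
                A_t[(c + cost(Y_M), w + weight(Y_M), m + 1,
                     f | {(vM, s)})] = val
        for s in ("AL", "AR"):                        # states AL, AR
            A_t[(c + cost(M), w + weight(M), m + 1,
                 f | {(vM, s)})] = val
    return A_t
\end{verbatim}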
\subsubsection*{Introduce edge bag.} Let $\{v^q_{M_1}, v^q_{M_2}\}$ denote the introduced edge. We have that $\{v^q_{M_1}, v^q_{M_2}\} \subseteq \mathbb{B}^q_t = \mathbb{B}^q_s$. The edge $\{v^q_{M_1}, v^q_{M_2}\}$ corresponds to adding a join between the modules $M_1$ and $M_2$. We need to filter all solutions whose states at $M_1$ and $M_2$ are not consistent with $M_1$ and $M_2$ being adjacent. There are essentially two possible reasons: either not all edges between $M_1$ and $M_2$ are covered, or the introduced edges go across the homogeneous cut. We implement this via the helper function $\cons\colon \mathbf{states} \times \mathbf{states} \rightarrow \{0,1\}$ which is defined by $\cons(\mathbf{s}_1, \mathbf{s}_2) = [\{\mathbf{s}_1, \mathbf{s}_2\} \cap \{\mathbf{A}_L, \mathbf{A}_R\} \neq \emptyset][\mathbf{s}_1 \in \{\mathbf{1}_L, \mathbf{A}_L\} \rightarrow \mathbf{s}_2 \notin \{\mathbf{1}_R, \mathbf{A}_R\}][\mathbf{s}_1 \in \{\mathbf{1}_R, \mathbf{A}_R\} \rightarrow \mathbf{s}_2 \notin \{\mathbf{1}_L, \mathbf{A}_L\}]$ or, equivalently, the following table:
\begin{equation*}
\begin{array}{l|ccccc}
\cons & \mathbf{0} & \mathbf{1}_L & \mathbf{1}_R & \mathbf{A}_L & \mathbf{A}_R \\
\hline
\mathbf{0} & 0 & 0 & 0 & 1 & 1 \\
\mathbf{1}_L & 0 & 0 & 0 & 1 & 0 \\
\mathbf{1}_R & 0 & 0 & 0 & 0 & 1 \\
\mathbf{A}_L & 1 & 1 & 0 & 1 & 0 \\
\mathbf{A}_R & 1 & 0 & 1 & 0 & 1
\end{array}
\end{equation*}
The recurrence is then simply given by
\begin{equation*}
A^{{\overline{c}}, {\overline{w}}, {\overline{m}}}_t(f) = \cons(f(v^q_{M_1}), f(v^q_{M_2})) A^{{\overline{c}}, {\overline{w}}, {\overline{m}}}_s(f).
\end{equation*}
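The helper function $\cons$ translates directly into code; the following sketch reproduces the table above (using our string encoding of the states).
\begin{verbatim}
def cons(s1, s2):
    # All joining edges must be covered: one module is fully taken ...
    covered = "AL" in (s1, s2) or "AR" in (s1, s2)
    # ... and no joining edge may cross the homogeneous cut.
    pair = {s1, s2}
    crossing = any(pair >= p for p in ({"1L", "1R"}, {"1L", "AR"},
                                       {"AL", "1R"}, {"AL", "AR"}))
    return int(covered and not crossing)
\end{verbatim}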
\subsubsection*{Forget vertex bag.} We have that $\mathbb{B}^q_t = \mathbb{B}^q_s \setminus \{v^q_M\}$, where $v^q_M \in \mathbb{B}^q_s$ and $s \in V(\TT^q)$ is the only child of $t$. Here, we only need to forget the state at $v^q_M$ and accumulate the contributions from the different states ${v^q_\module}$ could assume; as the states are disjoint, no overcounting happens:
\begin{equation*}
A^{{\overline{c}}, {\overline{w}}, {\overline{m}}}_t(f) = \sum_{\mathbf{s} \in \mathbf{states}} A^{{\overline{c}}, {\overline{w}}, {\overline{m}}}_s(f[v^q_M \mapsto \mathbf{s}]).
\end{equation*}
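A sketch of the forget step in the same representation; since every partial solution has exactly one state at $v^q_M$, the summation counts each solution exactly once.
\begin{verbatim}
from collections import defaultdict

def forget_vertex(A_s, vM):
    A_t = defaultdict(int)
    for (c, w, m, f), val in A_s.items():
        g = frozenset(p for p in f if p[0] != vM)  # drop state of vM
        A_t[(c, w, m, g)] = (A_t[(c, w, m, g)] + val) % 4
    return A_t
\end{verbatim}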
\subsubsection*{Join bag.} We have $\mathbb{B}^q_t = \mathbb{B}^q_{s_1} = \mathbb{B}^q_{s_2}$, where $s_1, s_2 \in V(\TT^q)$ are the children of $t$. Two partial solutions, one at $s_1$ and the other at $s_2$, can be combined when the states agree on all $v^q_M \in \mathbb{B}^q_t$. Since we update the trackers already at introduce vertex bags, we need to take care that the values of the modules in the bag are not counted twice. To this end, define $S^f = \bigcup_{v^q_M \in f^{-1}(\{\mathbf{1}_L, \mathbf{1}_R\})} Y_M \cup \bigcup_{v^q_M \in f^{-1}(\{\mathbf{A}_L, \mathbf{A}_R\})} M$ for all $t$-signatures $f$. This definition satisfies $X \cap \mathbb{B}_t = S^f$ for all $(X, (X_L, X_R)) \in \mathcal{A}^{{\overline{c}}, {\overline{w}}, {\overline{m}}}_t(f)$. Then, the recurrence is given by
\begin{equation*}
A_t^{{\overline{c}}, {\overline{w}}, {\overline{m}}}(f) = \sum_{\substack{{\overline{c}}_1 + {\overline{c}}_2 = {\overline{c}} + \mathbf{c}(S^f) \\ {\overline{w}}_1 + {\overline{w}}_2 = {\overline{w}} + \mathbf{w}(S^f)}} \sum_{{\overline{m}}_1 + {\overline{m}}_2 = {\overline{m}} + (|\mathbb{B}^q_t| - |f^{-1}(\mathbf{0})|)} A_{s_1}^{{\overline{c}}_1, {\overline{w}}_1, {\overline{m}}_1}(f) A_{s_2}^{{\overline{c}}_2, {\overline{w}}_2, {\overline{m}}_2}(f).
\end{equation*}
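A sketch of the join step; \texttt{cost\_S}, \texttt{weight\_S}, and \texttt{nonzero} are hypothetical helpers returning $\mathbf{c}(S^f)$, $\mathbf{w}(S^f)$, and $|\mathbb{B}^q_t| - |f^{-1}(\mathbf{0})|$ for a signature $f$.
\begin{verbatim}
from collections import defaultdict

def join(A_s1, A_s2, cost_S, weight_S, nonzero):
    # Only entries with equal signatures combine; the doubly counted
    # trackers of the shared modules S^f are subtracted again.
    A_t = defaultdict(int)
    for (c1, w1, m1, f), v1 in A_s1.items():
        for (c2, w2, m2, g), v2 in A_s2.items():
            if f != g:
                continue
            key = (c1 + c2 - cost_S(f), w1 + w2 - weight_S(f),
                   m1 + m2 - nonzero(f), f)
            A_t[key] = (A_t[key] + v1 * v2) % 4
    return A_t
\end{verbatim}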
\begin{lem}\label{thm:cvc_mtw_prime}
If $G^q$ is prime, then there exists a Monte-Carlo algorithm that, given a tree decomposition for $G^q$ of width at most $k$ and the sets $Y_M$ for all $M \in \Pi_{mod}(G)$, determines whether there is a connected vertex cover $X$ of $G$ with $\mathbf{c}(X) \leq \overline{b}$ intersecting at least two modules of $\Pi_{mod}(G)$ in time $\Oh^*(5^k)$. The algorithm cannot give false positives and may give false negatives with probability at most $1/2$.
\end{lem}
\begin{proof}
The algorithm samples a weight function $\mathbf{w}\colon V \rightarrow [2n]$ uniformly at random. Using the recurrences, we compute the values $A^{{\overline{c}}, {\overline{w}}, {\overline{m}}}_{\hat{r}}(\emptyset)$ modulo 4 for all ${\overline{c}} \in [0, \mathbf{c}(V)]$, ${\overline{w}} \in [0, \mathbf{w}(V)]$, ${\overline{m}} \in [2, |\Pi_{mod}(G)|]$. Setting $\mathcal{S}^{{\overline{c}}, {\overline{w}}, {\overline{m}}} = \{X \in \mathcal{R}^{{\overline{c}}, {\overline{w}}, {\overline{m}}}(G) : G[X] \text{ is connected}\}$, we have that $|\mathcal{Q}^{{\overline{c}}, {\overline{w}}, {\overline{m}}}(G)| = |\mathcal{Q}^{{\overline{c}}, {\overline{w}}, {\overline{m}}}_{\hat{r}}| = A^{{\overline{c}}, {\overline{w}}, {\overline{m}}}_{\hat{r}}(\emptyset) = \sum_{X \in \mathcal{R}^{{\overline{c}}, {\overline{w}}, {\overline{m}}}(G)} 2^{\mathtt{cc}(G[X])} \equiv_4 2|\mathcal{S}^{{\overline{c}}, {\overline{w}}, {\overline{m}}}|$ by \cref{thm:hom_cut}. By \cref{thm:isolation}, $\mathbf{w}$ isolates the set of optimum nice connected vertex covers intersecting at least two modules of $\Pi_{mod}(G)$ with probability at least $1/2$. If $\mathbf{w}$ is isolating and ${\overline{c}}$ denotes the optimum value, then there exist choices of ${\overline{w}}$ and ${\overline{m}}$ such that $|\mathcal{S}^{{\overline{c}}, {\overline{w}}, {\overline{m}}}| = 1$ and hence $A^{{\overline{c}}, {\overline{w}}, {\overline{m}}}_{\hat{r}}(\emptyset) \not\equiv_4 0$. The algorithm searches for the smallest such ${\overline{c}}$ and returns true if ${\overline{c}} \leq \overline{b}$. Note that if a connected vertex cover $X$ intersecting at least two modules with $\mathbf{c}(X) \leq \overline{b}$ exists, then so does a nice one by \cref{thm:cvc_mod_structure}. If ${\overline{c}} > \overline{b}$, the algorithm returns false.
It remains to prove the correctness of the provided recurrences and the running time of the algorithm. We first consider the running time. Since a very nice tree decomposition has polynomially many nodes and since the cost function $\mathbf{c}$ is assumed to be polynomially bounded, there are $\Oh^*(5^k)$ table entries to compute. Furthermore, it is easy to see that every recurrence can be computed in polynomial time, hence the running time of the algorithm follows. We proceed by proving the correctness of the recurrences.
If $t$ is a leaf node, then we have that $V_t = \emptyset$ and hence $\mathcal{Q}_t^{{\overline{c}}, {\overline{w}}, {\overline{m}}}$ can contain at most $(\emptyset, (\emptyset, \emptyset))$, and we have that $\mathbf{c}(\emptyset) = \mathbf{w}(\emptyset) = |\pi_V(\emptyset)| = 0$, which is checked by the recurrence.
If $t$ is an introduce vertex node introducing $v^q_M$, consider $(X,(X_L, X_R)) \in \mathcal{A}_t^{{\overline{c}}, {\overline{w}}, {\overline{m}}}(f[{v^q_\module} \mapsto \mathbf{s}])$, where $f$ is some $s$-signature and $\mathbf{s} \in \mathbf{states}$. We have that $(X \setminus M, (X_L \setminus M, X_R \setminus M)) \in \mathcal{A}_s^{{\overline{c}}', {\overline{w}}', {\overline{m}}'}(f)$ for ${\overline{c}}' = \mathbf{c}(X \setminus M)$, ${\overline{w}}' = \mathbf{w}(X \setminus M)$, ${\overline{m}}' = |\pi_V(X \setminus M)|$. Depending on $\mathbf{s}$, we argue that this sets up a bijection between $\mathcal{A}_t^{{\overline{c}}, {\overline{w}}, {\overline{m}}}(f[{v^q_\module} \mapsto \mathbf{s}])$ and $\mathcal{A}_s^{{\overline{c}}', {\overline{w}}', {\overline{m}}'}(f)$. The injectivity of this map follows in general by observing that $\mathbf{s}$ completely determines the interaction of $(X, (X_L, X_R))$ with $M$.
\begin{itemize}
\item $\mathbf{s} = \mathbf{0}$: We have $X \cap M = \emptyset$, which implies that $G[M]$ does not contain an edge, as $X$ cannot be a vertex cover of $G_t$ otherwise. In this case, the mapping is essentially the identity mapping, hence the trackers do not change and it is clearly bijective.
\item $\mathbf{s} = \mathbf{1}_L$: We have $X \cap M = X_L \cap M = Y_M \neq M$ and $X_R \cap M = \emptyset$. Due to $\emptyset \neq Y_M \neq M$, we have that $|M| > 1$. As $X \cap M = Y_M$, we update the trackers according to $Y_M$. Note that any $(X', (X'_L, X'_R)) \in \mathcal{A}_s^{{\overline{c}}', {\overline{w}}', {\overline{m}}'}(f)$ is hit by $(X' \cup Y_M, (X'_L \cup Y_M, X'_R)) \in \mathcal{A}_t^{{\overline{c}}, {\overline{w}}, {\overline{m}}}(f[{v^q_\module} \mapsto \mathbf{s}])$, which relies on the fact that no edges incident to ${v^q_\module}$ have been introduced yet, so that neither the vertex cover property nor the consistent cut property can be violated when extending by $Y_M$.
\item $\mathbf{s} = \mathbf{1}_R$: analogous to the previous case.
\item $\mathbf{s} = \mathbf{A}_L$: We have $X \cap M = X_L \cap M = M$ and $X_R \cap M = \emptyset$. Hence, we update the trackers according to $M$. For surjectivity, we see that $(X', (X'_L, X'_R)) \in \mathcal{A}_s^{{\overline{c}}', {\overline{w}}', {\overline{m}}'}(f)$ is hit by $(X' \cup M, (X'_L \cup M, X'_R)) \in \mathcal{A}_t^{{\overline{c}}, {\overline{w}}, {\overline{m}}}(f[{v^q_\module} \mapsto \mathbf{s}])$, which again relies on the fact that no edges incident to ${v^q_\module}$ have been introduced yet.
\item $\mathbf{s} = \mathbf{A}_R$: analogous to the previous case.
\end{itemize}
If $t$ is an introduce edge bag introducing edge $\{v^q_{M_1}, v^q_{M_2}\}$, then $\mathcal{Q}_t^{{\overline{c}}, {\overline{w}}, {\overline{m}}} \subseteq \mathcal{Q}_s^{{\overline{c}}, {\overline{w}}, {\overline{m}}}$ and we need to filter out all $(X,(X_L,X_R)) \in \mathcal{Q}_s^{{\overline{c}}, {\overline{w}}, {\overline{m}}} \setminus \mathcal{Q}_t^{{\overline{c}}, {\overline{w}}, {\overline{m}}}$. A partial solution $(X, (X_L, X_R)) \in \mathcal{Q}_s^{{\overline{c}}, {\overline{w}}, {\overline{m}}}$ has to be filtered if and only if an edge between $M_1$ and $M_2$ is not covered or an edge between $X \cap M_1$ and $X \cap M_2$ connects both sides of the homogeneous cut. These criteria are implemented by the function $\cons$; the first case corresponds to $\cons(\mathbf{s}_1, \mathbf{s}_2) = 0$ for all $\mathbf{s}_1, \mathbf{s}_2 \in \{\mathbf{0}, \mathbf{1}_L, \mathbf{1}_R\}$ and the second case corresponds to $\cons(\mathbf{s}_1, \mathbf{s}_2) = 0$ whenever $\mathbf{s}_1 \neq \mathbf{0} \neq \mathbf{s}_2$ and the cut subscripts of $\mathbf{s}_1$ and $\mathbf{s}_2$ disagree.
If $t$ is a forget vertex bag forgetting $v^q_M$, then $\mathcal{Q}_t^{{\overline{c}}, {\overline{w}}, {\overline{m}}} = \mathcal{Q}_s^{{\overline{c}}, {\overline{w}}, {\overline{m}}}$ and every $(X,(X_L,X_R)) \in \mathcal{Q}_t^{{\overline{c}}, {\overline{w}}, {\overline{m}}}$ is counted by some $A_s^{{\overline{c}}, {\overline{w}}, {\overline{m}}}(f[v^q_M \mapsto \mathbf{s}])$ with $\mathbf{s}$ being the appropriate state; as already noted, the states are disjoint, so no solution is counted twice.
If $t$ is a join bag, then $V_t = V_{s_1} \cup V_{s_2}$ and $\mathbb{B}_t = \mathbb{B}_{s_1} = \mathbb{B}_{s_2} = V_{s_1} \cap V_{s_2}$. Since $G_{s_1}$ and $G_{s_2}$ are subgraphs of $G_t$, any $(X, (X_L, X_R)) \in \mathcal{A}_t^{{\overline{c}}, {\overline{w}}, {\overline{m}}}(f)$ splits into $(X^1, (X^1_L, X^1_R)) \in \mathcal{A}_{s_1}^{{\overline{c}}_1, {\overline{w}}_1, {\overline{m}}_1}(f)$ and $(X^2, (X^2_L, X^2_R)) \in \mathcal{A}_{s_2}^{{\overline{c}}_2, {\overline{w}}_2, {\overline{m}}_2}(f)$, where $X^i = X \cap V_{s_i}$, $X^i_L = X_L \cap V_{s_i}$, $X^i_R = X_R \cap V_{s_i}$ for $i \in [2]$. Since $S^f = X\cap \mathbb{B}_t = X^1\cap \mathbb{B}_t = X^2\cap \mathbb{B}_t$, some overcounting occurs when adding up e.g.\ the costs ${\overline{c}}_1$ and ${\overline{c}}_2$. This is accounted for by the equation ${\overline{c}}_1 + {\overline{c}}_2 = {\overline{c}} + \mathbf{c}(S^f)$ and similarly for the weights and the number of modules hit by $X$. Vice versa, the union of the graphs $G_{s_1}$ and $G_{s_2}$ yields $G_t$, and any $(X^1, (X^1_L, X^1_R)) \in \mathcal{A}_{s_1}^{{\overline{c}}_1, {\overline{w}}_1, {\overline{m}}_1}(f)$ and $(X^2, (X^2_L, X^2_R)) \in \mathcal{A}_{s_2}^{{\overline{c}}_2, {\overline{w}}_2, {\overline{m}}_2}(f)$ must agree on $\mathbb{B}_t$, since the behavior on $\mathbb{B}_t$ is completely specified by $f$. Therefore, one can argue that $(X^1 \cup X^2, (X^1_L \cup X^2_L, X^1_R \cup X^2_R)) \in \mathcal{A}_t^{{\overline{c}}, {\overline{w}}, {\overline{m}}}(f)$. \qed
\end{proof}
Putting everything together, we obtain the following algorithm.
\begin{thm}
\label{thm:cvc_mtw_algo}
There exists a Monte-Carlo algorithm that given a tree decomposition of width at most $k$ for every prime quotient graph $H \in {\mathcal{H}_p}(G)$, solves \textsc{Connected Vertex Cover}\xspace in time $\Oh^*(5^k)$. The algorithm cannot give false positives and may give false negatives with probability at most $1/2$.
\end{thm}
\begin{proof}
If $|V(G)| = 1$, then $\emptyset$ is a connected vertex cover and we can always answer true. Otherwise, we first compute the sets $Y_M$ for all $M \in \Pi_{mod}(G)$ in time $\Oh^*(2^k)$ using \cref{thm:is_mtw_algo}. Using \cref{thm:cvc_single_module}, we then check in polynomial time if there is any connected vertex cover $X$ of $G$ contained in a single module with $\mathbf{c}(X) \leq \overline{b}$. If yes, then we return true. Otherwise, we proceed based on the node type of $V(G)$ in the modular decomposition of $G$.
If $V(G)$ is a parallel node, i.e., $G^q$ is an independent set of size at least two, then $G$ cannot be connected, contradicting our assumption. If $V(G)$ is a series node, i.e., $G^q$ is a clique of size at least two, then we solve the problem in polynomial time using \cref{thm:cvc_mod_structure} and \cref{thm:cvc_series}, which tell us that there are only $3 |\Pi_{mod}(G)|$ possible solutions to consider.
If $G^q$ is prime, then it remains to search for connected vertex covers intersecting at least two modules and hence we can invoke \cref{thm:cvc_mtw_prime}. This completes the proof. \qed
\end{proof}
Note that \cref{thm:cvc_mtw_algo} gets a tree decomposition for \emph{every} quotient graph as input, whereas \cref{thm:cvc_mtw_prime} only requires a tree decomposition for the topmost quotient graph. This is due to the fact that the algorithm in \cref{thm:is_mtw_algo} to compute the vertex cover $Y_M$ of $G[M]$ for every $M \in {\mathcal{M}_{\mathrm{tree}}}(G)$ requires a decomposition for every quotient graph, but the vertex covers are enough information to enable us to solve \textsc{Connected Vertex Cover}\xspace by just considering the topmost quotient graph.
\section{Feedback Vertex Set Algorithm}
\label{sec:modtw_fvs_algo}
\newcommand{\mathcal{F}_{opt}}{\mathcal{F}_{opt}}
\newcommand{\mathbf{v}}{\mathbf{v}}
\newcommand{\mathbf{is}}{\mathbf{is}}
\newcommand{\mathbf{if}}{\mathbf{if}}
\newcommand{{\overline{v}}}{{\overline{v}}}
\newcommand{{\overline{e}}}{{\overline{e}}}
\newcommand{D}{D}
\newcommand{{\overline{d}}}{{\overline{d}}}
\newcommand{\mathbf{req}}{\mathbf{req}}
\newcommand{\varphi}{\varphi}
\newcommand{{\two_{\mathcal{I}}}}{{\mathbf{2}_{\mathcal{I}}}}
\newcommand{{\two_{\mathcal{E}}}}{{\mathbf{2}_{\mathcal{E}}}}
The cut-and-count-technique applies more naturally to the dual problem \textsc{Induced Forest}\xspace instead of \textsc{Feedback Vertex Set}\xspace, so we choose to study the dual problem. An instance of \textsc{Induced Forest}\xspace consists of a graph $G = (V, E)$ and a budget $\overline{b} \in \mathbb{N}$, and the task is to decide whether there exists a vertex set $X \subseteq V$ with $|X| \geq \overline{b}$ such that $G[X]$ is a forest. As our algorithm is quite technical, we only consider the case of unit costs here to reduce the amount of technical detail.
For \textsc{Connected Vertex Cover}\xspace, it was sufficient to essentially only look at the first quotient graph, because we did not have to compute \emph{connected} vertex covers for the subproblems, only usual vertex covers. However, for \textsc{Induced Forest}\xspace this is not the case; here, we do need to compute an induced forest in each module $M \in {\mathcal{M}_{\mathrm{tree}}}(G)$. This essentially means that we need a \emph{nested} dynamic programming algorithm; one \emph{outer dynamic program (outer DP)} along the modular decomposition tree and one \emph{inner dynamic program (inner DP)} along the tree decompositions of the quotient graphs solving the subproblems of the outer DP.
The inner DP will again be using the cut-and-count-technique and can therefore produce erroneous results due to the randomization. We will carefully analyze where errors can occur and see that a single global sampling of an isolating weight function will be sufficient, even though some subproblems might be solved incorrectly. For this reason, the notation in this section will more closely track which node of the modular decomposition we are working on, as the setup in the \textsc{Connected Vertex Cover}\xspace algorithm would be too obfuscating here.
\subsubsection*{Notation.} ${\module^{\uparrow}} \in {\mathcal{M}_{\mathrm{tree}}}(G)$ will denote the parent module and represents the current subproblem to be solved by the inner DP. The inner DP will work on the quotient graph ${\qgraph{\pmodule}} = G[{\module^{\uparrow}}] / \Pi_{mod}(G[{\module^{\uparrow}}])$ whose vertices correspond to modules $M \in \mathtt{children}({\module^{\uparrow}}) = \Pi_{mod}(G[{\module^{\uparrow}}])$; associated to the quotient graph ${\qgraph{\pmodule}}$ is the projection ${\modprojection_\pmodule}\colon {\module^{\uparrow}} \rightarrow V({\qgraph{\pmodule}})$. By ${v^q_\module} \in {\qgraph{\pmodule}}$ we refer to the vertex in the quotient graph corresponding to $M$. At times, it will be useful not to specify the parent module; we then say that two modules $M_1, M_2 \in {\mathcal{M}_{\mathrm{tree}}}(G)$ are \emph{siblings} if there is some ${\module^{\uparrow}}$ such that $M_1, M_2 \in \mathtt{children}({\module^{\uparrow}})$, i.e., they have the same parent. For a module $M \in {\mathcal{M}_{\mathrm{tree}}}(G)$, we let $\nsib(M)$ denote the family of sibling modules of $M$ that are adjacent to $M$ and we define $\mathcal{N}_{\mathtt{all}}(M) = \{ M' \in {\mathcal{M}_{\mathrm{tree}}}(G) : M \cap M' = \emptyset, E_G(M, M') \neq \emptyset\}$, i.e., the family of all strong modules that are adjacent to $M$.
\subsection{Structure of Optimum Induced Forests}
We begin by studying the structure of optimum induced forests with respect to the modular decomposition. Let $\mathcal{F}_{opt}(G)$ be the family of maximum induced forests of $G$. We start by giving some definitions to capture the structure of induced forests with respect to the modular decomposition.
\begin{dfn}\label{dfn:fvs_modtw_marking}
Let $X \subseteq V(G)$ be a vertex subset. We associate with $X$ a \emph{module-marking} $\varphi_X \colon {\mathcal{M}_{\mathrm{tree}}}(G) \rightarrow \{\mathbf{0}, \mathbf{1}, {\two_{\mathcal{I}}}, {\two_{\mathcal{E}}}\}$ defined by
\begin{equation*}
\varphi_X(M) =
\begin{cases}
\mathbf{0}, & \text{if $|X \cap M| = 0$}, \\
\mathbf{1}, & \text{if $|X \cap M| = 1$}, \\
{\two_{\mathcal{I}}}, & \text{if $|X \cap M| \geq 2$ and $G[X \cap M]$ contains no edge}, \\
{\two_{\mathcal{E}}}, & \text{if $|X \cap M| \geq 2$ and $G[X \cap M]$ contains at least one edge}.
\end{cases}
\end{equation*}
\end{dfn}
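A direct transcription of \cref{dfn:fvs_modtw_marking} as a sketch; \texttt{edges} is assumed to list the edges of $G$ as $2$-element sets, and the string encoding of the four marks is ours.
\begin{verbatim}
def marking(X, M, edges):
    # Module-marking of X on the module M.
    S = set(X) & set(M)
    if len(S) <= 1:
        return str(len(S))                  # "0" or "1"
    has_edge = any(set(e) <= S for e in edges)
    return "2E" if has_edge else "2I"       # at least two vertices
\end{verbatim}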
We use module-markings to describe the states taken by an induced forest $X$ on the modules $M \in {\mathcal{M}_{\mathrm{tree}}}(G)$.
Ordering $\mathbf{0} < \mathbf{1} < {\two_{\mathcal{I}}} < {\two_{\mathcal{E}}}$, note that every module-marking $\varphi_X$ is \emph{monotone} in the following sense: for all $M_1, M_2 \in {\mathcal{M}_{\mathrm{tree}}}(G)$ the inclusion $M_1 \subseteq M_2$ implies that $\varphi_X(M_1) \leq \varphi_X(M_2)$.
Any induced forest has to satisfy some local properties relative to the modules which are captured by the following definition.
\begin{dfn}\label{dfn:fvs_modtw_nice}
Let $X \subseteq V(G)$ be a vertex subset. We say that $X$ is \emph{forest-nice} if for every $M \in {\mathcal{M}_{\mathrm{tree}}}(G)$ the following properties hold:
\begin{itemize}
\item If $\varphi_X(M) = {\two_{\mathcal{I}}}$, then $\varphi_X(\mathcal{N}_{\mathtt{all}}(M)) \subseteq \{\mathbf{0}, \mathbf{1}\}$ and $|\nsib(M) \cap \varphi_X^{-1}(\mathbf{1})| \leq 1$.
\item If $\varphi_X(M) = {\two_{\mathcal{E}}}$, then $\varphi_X(\mathcal{N}_{\mathtt{all}}(M)) \subseteq \{\mathbf{0}\}$.
\end{itemize}
\end{dfn}
The ``degree-condition'' $|\nsib(M) \cap \varphi_X^{-1}(\mathbf{1})| \leq 1$ deliberately refers only to the sibling modules: we can have arbitrarily long chains of modules $v \in M_1 \subseteq M_2 \subseteq \cdots \subseteq M_\ell$, so no useful statement would be possible if we instead considered all modules.
\begin{lem}\label{thm:forest_nice}
Every induced forest $X \subseteq V(G)$ of $G$ is forest-nice.
\end{lem}
\begin{proof}
Consider any $M \in {\mathcal{M}_{\mathrm{tree}}}(G)$ with $|X \cap M| \geq 2$. If there were some module $M' \in \mathcal{N}_{\mathtt{all}}(M)$ with $|X \cap M'| \geq 2$, then $G[X \cap (M \cup M')]$ contains a cycle of size 4 as all edges between $M$ and $M'$ exist in $G$, hence such $M'$ cannot exist. If, additionally, $G[X \cap M]$ contains an edge, then any $M' \in \mathcal{N}_{\mathtt{all}}(M)$ with $X \cap M' \neq \emptyset$ would necessarily lead to a cycle of size 3 in $G[X \cap (M \cup M')]$, hence such $M'$ cannot exist. Finally, suppose that $\varphi_X(M) = {\two_{\mathcal{I}}}$ and two neighboring sibling modules $M_1 \neq M_2 \in \nsib(M)$ with $\varphi_X(M_1) = \varphi_X(M_2) = \mathbf{1}$ exist. We must have $M_1 \cap M_2 = \emptyset$ and therefore a cycle of size 4 would exist in $G[X \cap (M \cup M_1 \cup M_2)]$, which is again not possible. \qed
\end{proof}
The modular structure allows us to perform the following exchange arguments.
\begin{lem}\label{thm:forest_exchange}
Let $X$ be an induced forest of $G$ and $M \in {\mathcal{M}_{\mathrm{tree}}}(G)$.
\begin{enumerate}
\item If $\varphi_X(M) = {\two_{\mathcal{I}}}$ and $Y$ is an independent set of $G[M]$, then $(X \setminus M) \cup Y$ is an induced forest of $G$.
\item If $\varphi_X(M) = {\two_{\mathcal{E}}}$ and $Y$ is an induced forest of $G[M]$, then $(X \setminus M) \cup Y$ is an induced forest of $G$.
\end{enumerate}
\end{lem}
\begin{proof}
We set $X' = (X \setminus M) \cup Y$ in both cases. Since $X' \setminus M = X \setminus M$, there cannot be any cycle in $G[X' \setminus M]$. Also there cannot be any cycle in $G[X \cap M] = G[Y]$ by assumption.
\begin{enumerate}
\item Suppose there is a cycle $C'$ in $G[X']$. By the previous arguments, we must have $C' \cap M \neq \emptyset$ and $C' \setminus M \neq \emptyset$. We will argue that such a cycle would give rise to a cycle $C$ in $G[X]$, contradicting the assumption that $X$ is an induced forest. Let $v_1, \ldots, v_\ell, v_1$ be the sequence of vertices visited by $C'$ and let $v_{i_1}, \ldots, v_{i_r}$ with $1 \leq i_1 < \cdots < i_r \leq \ell$ denote the vertices of $C'$ that are in $M$. If some edge of $C'$, say $\{v_1,v_2\}$ without loss of generality, is contained in $G[X' \setminus M]$, pick some $u \in X \cap M$ and consider the cycle $C$ given by the vertex sequence $v_1, v_2, \ldots, v_{i_1 - 1}, u, v_{i_r + 1}, \ldots, v_\ell, v_1$; $C$ is a cycle of $G[X]$ as the edges $\{v_{i_1 - 1}, u\}$ and $\{u, v_{i_r + 1}\}$ exist in $G$, because $u, v_{i_1 - 1}, v_{i_r + 1} \in M$. If no such edge exists in $C'$, then $C'$ is a cycle in the biclique with parts $X' \cap M$ and $N_G(X' \cap M)$, in particular $|C' \cap M| \geq 2$ and $|C' \setminus M| \geq 2$. Since $|X \cap M| \geq 2$ by assumption and $|X \cap M| = |X' \cap M| \geq |C' \setminus M| \geq 2$, it follows that $G[X]$ contains a biclique with parts of size at least two and hence $G[X]$ must contain a cycle.
\item Since $X$ is forest-nice by \cref{thm:forest_nice}, $\varphi_X(M) = {\two_{\mathcal{E}}}$ implies that $\varphi_{X'}(\mathcal{N}_{\mathtt{all}}(M)) = \varphi_X(\mathcal{N}_{\mathtt{all}}(M)) \subseteq \{\mathbf{0}\}$, and therefore $(X' \cap M, X' \setminus M)$ is a consistent cut of $G[X']$. Therefore any cycle $C$ in $G[X']$ must be fully contained in either $X' \cap M$ or $X' \setminus M$, but we ruled out each of these cases previously. Hence, $G[X']$ contains no cycle. \qed
\end{enumerate}
\end{proof}
\cref{thm:forest_exchange} allows us to see that maximum induced forests must make locally optimal choices inside each module. We capture these local choices with the following two definitions.
\begin{dfn}\label{dfn:fvs_modtw_opt_sub}
Let $X \subseteq V(G)$ be a vertex subset. We say that $X$ has \emph{optimal substructure} if for every $M \in {\mathcal{M}_{\mathrm{tree}}}(G)$ the following properties hold:
\begin{itemize}
\item If $\varphi_X(M) = {\two_{\mathcal{I}}}$, then $X \cap M$ is a maximum independent set of $G[M]$.
\item If $\varphi_X(M) = {\two_{\mathcal{E}}}$, then $X \cap M$ is a maximum induced forest of $G[M]$.
\end{itemize}
\end{dfn}
\begin{dfn}\label{dfn:fvs_modtw_promotion}
Let $X \subseteq V(G)$ be a vertex subset. We say that $X$ has the \emph{promotion property} if for every $M \in {\mathcal{M}_{\mathrm{tree}}}(G)$ with $|X \cap M| \geq 2$ and $\varphi_X(\mathcal{N}_{\mathtt{all}}(M)) = \{\mathbf{0}\}$, we have that $X \cap M$ is a maximum induced forest of $G[M]$.
\end{dfn}
While we could have subsumed the promotion property as part of the definition of optimal substructure, we define it separately as it has more involved implications on the dynamic program and deserves separate care.
\begin{lem}\label{thm:fvs_opt_sub}
Every maximum induced forest of $G$, i.e., $X \in \mathcal{F}_{opt}(G)$, has optimal substructure and the promotion property.
\end{lem}
\begin{proof}
\cref{thm:forest_nice} already shows that $X$ is forest-nice. If $X$ would not have optimal substructure, then we can invoke \cref{thm:forest_exchange} to obtain a larger induced forest $X'$, hence $X$ would not be a maximum induced forest.
We prove a strengthened exchange argument to show the promotion property. We claim that for any induced forest $X$ of $G$, module $M \in {\mathcal{M}_{\mathrm{tree}}}(G)$ with $\varphi_X(M) \in \{{\two_{\mathcal{I}}}, {\two_{\mathcal{E}}}\}$ and $\varphi_X(\mathcal{N}_{\mathtt{all}}(M)) \subseteq \{\mathbf{0}\}$, and induced forest $Y$ of $G[M]$, the set $X' = (X \setminus M) \cup Y$ is again an induced forest of $G$. Suppose that $G[X']$ contains a cycle $C'$. By assumption on $X$, $C'$ cannot be contained in $G[X' \setminus M] = G[X \setminus M]$. By assumption on $Y$, $C'$ cannot be contained in $G[X' \cap M] = G[Y]$. Therefore, $C'$ must intersect $X' \cap M$ and $X' \setminus M$ simultaneously. However, $\varphi_{X'}(\mathcal{N}_{\mathtt{all}}(M)) = \varphi_X(\mathcal{N}_{\mathtt{all}}(M)) \subseteq \{\mathbf{0}\}$ implies that $(X' \cap M, X' \setminus M)$ is a consistent cut of $G[X']$ and hence such a cycle $C'$ cannot exist. Therefore $X'$ is also an induced forest. If an induced forest $X$ violates the promotion property, then we can invoke this exchange argument to see that $X$ cannot be a maximum induced forest. \qed
\end{proof}
Since any induced forest $X$ is forest-nice, the condition $\varphi_X(M) = {\two_{\mathcal{E}}}$ implies $\varphi_X(\mathcal{N}_{\mathtt{all}}(M)) \subseteq \{\mathbf{0}\}$ and therefore the second condition of optimal substructure also follows from the promotion property.
The requirement $|X \cap M| \geq 2$ in the promotion property could also be removed. However, the dynamic programming on quotient graphs will only apply the underlying exchange argument when $|X \cap M| \geq 2$ holds; therefore, we already add this requirement here.
Note that a forest-nice vertex subset $X$ does not necessarily induce a forest, as a cycle could be induced by the modules $M \in \Pi_{mod}(G)$ with $\varphi_X(M) = \mathbf{1}$.
\subsection{Application of Isolation Lemma}
We will again use the cut-and-count-technique and the isolation lemma to solve \textsc{Induced Forest}\xspace parameterized by modular-treewidth. However, since \textsc{Induced Forest}\xspace is a maximization problem, we feel it is more natural to use a maximization version of the isolation lemma as we must closely investigate when isolation transfers to subproblems. Let us define the appropriate terminology.
\begin{dfn}
A function $\mathbf{w} \colon U \rightarrow \mathbb{Z}$ \emph{max-isolates} a set family $\mathcal{F} \subseteq 2^U$ if there is a unique $S' \in \mathcal{F}$ with $\mathbf{w}(S') = \max_{S \in \mathcal{F}} \mathbf{w}(S)$, where for subsets $X$ of $U$ we define $\mathbf{w}(X) = \sum_{u \in X} \mathbf{w}(u)$.
\end{dfn}
\begin{lem}[Adapt proof of \cite{MulmuleyVV87} or \cite{TaShma15}]
\label{thm:max_isolation}
Let $\mathcal{F} \subseteq 2^U$ be a nonempty set family over a universe $U$. Let $N \in \mathbb{N}$ and for each $u \in U$ choose a weight $\mathbf{w}(u) \in [N]$ uniformly and independently at random. Then
$\mathbb{P}[\mathbf{w} \text{ max-isolates } \mathcal{F}] \geq 1 - |U|/N$.
\end{lem}
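The statement can be checked empirically on small families; the following self-contained sketch samples weights from $[2|U|]$, so by \cref{thm:max_isolation} a single sample max-isolates with probability at least $1/2$.
\begin{verbatim}
import random

def max_isolates(w, family):
    # True iff exactly one set in the family attains the maximum weight.
    best = max(sum(w[u] for u in S) for S in family)
    return sum(sum(w[u] for u in S) == best for S in family) == 1

U = range(6)
family = [{0, 1}, {1, 2}, {3, 4, 5}]
w = {u: random.randint(1, 2 * len(U)) for u in U}
print(max_isolates(w, family))
\end{verbatim}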
Due to \cref{thm:forest_nice} and \cref{thm:fvs_opt_sub}, we want our algorithm to compute maximum independent sets and maximum induced forests of $G[M]$ for every $M \in {\mathcal{M}_{\mathrm{tree}}}(G)$. The maximum independent sets can be computed deterministically within the required time using \cref{thm:is_mtw_algo}. To compute the maximum induced forests, however, we essentially want to call our algorithm recursively, but the algorithm is randomized. Doing this naively and sampling a weight function for each call would exponentially decrease the success probability depending on the depth of the modular decomposition tree.
To circumvent this issue, we sample a global weight function only once and let the subproblems inherit this weight function, observing that for all ``important'' subproblems the inherited weight function is max-isolating if the global weight function is (for appropriate choices of set families).
We define $\mathcal{F}_{opt}(G, \mathbf{s})$, where $\mathbf{s} \in \{\mathbf{0}, \mathbf{1}, {\two_{\mathcal{I}}}, {\two_{\mathcal{E}}}\}$, as the family of maximum sets $X$ subject to $G[X]$ being a forest and $\varphi_X(V(G)) \leq \mathbf{s}$. Hence, we have that $\mathcal{F}_{opt}(G, {\two_{\mathcal{E}}}) = \mathcal{F}_{opt}(G)$ and $\mathcal{F}_{opt}(G, {\two_{\mathcal{I}}})$ is the family of maximum independent sets of $G$ and $\mathcal{F}_{opt}(G, \mathbf{1})$ is the family of singleton sets.
\begin{lem}\label{thm:fvs_descendant_isolation}
Let $N \in \mathbb{N}$ and assume that $\mathbf{w} \colon V(G) \rightarrow [N]$ is a weight function that max-isolates $\mathcal{F}_{opt}(G)$. Let $X \in \mathcal{F}_{opt}(G)$ be the set that is max-isolated by $\mathbf{w}$. For every $M \in {\mathcal{M}_{\mathrm{tree}}}(G)$, we have that $\mathbf{w}\big|_M$ max-isolates $X \cap M$ in $\mathcal{F}_{opt}(G[M], \varphi_X(M))$.
\end{lem}
\begin{proof}
$X$ has optimal substructure due to \cref{thm:fvs_opt_sub}, therefore we have $X \cap M \in \mathcal{F}_{opt}(G[M], \varphi_X(M))$ for all $M \in {\mathcal{M}_{\mathrm{tree}}}(G)$. Suppose there is some $M \in {\mathcal{M}_{\mathrm{tree}}}(G)$ such that $\mathbf{w}\big|_M$ does not max-isolate $\mathcal{F}_{opt}(G[M], \varphi_X(M))$, then there is some $X \cap M \neq Y \in \mathcal{F}_{opt}(G[M], \varphi_X(M))$ with $\mathbf{w}(Y) \geq \mathbf{w}(X \cap M)$. By \cref{thm:forest_exchange}, $X' = (X \setminus M) \cup Y$ must satisfy $X' \in \mathcal{F}_{opt}(G)$, $X' \neq X$, and $\mathbf{w}(X') \geq \mathbf{w}(X)$. However, then $\mathbf{w}$ cannot max-isolate $X$ in $\mathcal{F}_{opt}(G)$. \qed
\end{proof}
We remark that the previous lemma allows for the possibility that, e.g., $\mathbf{w}\big|_M$ max-isolates $\mathcal{F}_{opt}(G[M], {\two_{\mathcal{I}}})$, but $\mathbf{w}\big|_M$ does not max-isolate $\mathcal{F}_{opt}(G[M], {\two_{\mathcal{E}}}) = \mathcal{F}_{opt}(G[M])$, which can lead to our algorithm not finding an optimum induced forest for this subinstance.
\subsection{Detecting Acyclicity}
Let us describe how to check whether a forest-nice subset $X$ induces a forest. Forest-niceness essentially allows us to only consider the induced subset on a quotient graph, which we then handle by lifting cut-and-count. However, being forest-nice is a global property in the sense that it considers the whole modular decomposition tree. We first introduce a local version of forest-nice that only considers the children of a parent module ${\module^{\uparrow}} \in {\mathcal{M}_{\mathrm{tree}}^*}(G)$:
\newcommand{{\widetilde{G}^q}}{{\widetilde{G}^q}}
\begin{dfn}\label{dfn:fvs_modtw_nice_local}
Let ${\module^{\uparrow}} \in {\mathcal{M}_{\mathrm{tree}}^*}(G)$, let ${\widetilde{G}^q}$ be a subgraph of ${\qgraph{\pmodule}}$, and let $X \subseteq {\module^{\uparrow}}$ with $X^q := {\modprojection_\pmodule}(X) \subseteq V({\widetilde{G}^q})$. We say that $X$ is \emph{${\module^{\uparrow}}$-forest-nice with respect to ${\widetilde{G}^q}$} if the following properties hold for all ${v^q_\module} \in V({\widetilde{G}^q})$:
\begin{itemize}
\item If $\varphi_X(M) = {\two_{\mathcal{I}}}$, then $\deg_{{\widetilde{G}^q}[X^q]}({v^q_\module}) \leq 1$ and $\varphi_X(M') \in \{\mathbf{0}, \mathbf{1}\}$ for all $v^q_{M'} \in N_{{\widetilde{G}^q}}({v^q_\module})$.
\item If $\varphi_X(M) = {\two_{\mathcal{E}}}$, then $\varphi_X(M') = \mathbf{0}$ for all $v^q_{M'} \in N_{{\widetilde{G}^q}}({v^q_\module})$.
\end{itemize}
In the case ${\widetilde{G}^q} = {\qgraph{\pmodule}}$, we simply say that $X$ is \emph{${\module^{\uparrow}}$-forest-nice}.
\end{dfn}
As the (very nice) tree decomposition of ${\qgraph{\pmodule}}$ adds edges one-by-one, we need to account for changes in the neighborhoods of vertices in the local definition of forest-niceness via ${\widetilde{G}^q}$. Otherwise, \cref{dfn:fvs_modtw_nice_local} is essentially the same definition as \cref{dfn:fvs_modtw_nice}, but only considering the child modules of ${\module^{\uparrow}}$. In particular, if $X$ is forest-nice, then $X \cap {\module^{\uparrow}}$ is ${\module^{\uparrow}}$-forest-nice for all ${\module^{\uparrow}} \in {\mathcal{M}_{\mathrm{tree}}^*}(G)$.
The next lemma essentially shows that in a ${\module^{\uparrow}}$-forest-nice set $X$ no cycles intersecting some module $M \in \mathtt{children}({\module^{\uparrow}})$ in more than one vertex exist, hence all possible cycles can already be seen in the quotient graph.
\begin{lem}\label{thm:forest_quotient}
Let ${\module^{\uparrow}} \in {\mathcal{M}_{\mathrm{tree}}^*}(G)$ and $X \subseteq {\module^{\uparrow}}$ be ${\module^{\uparrow}}$-forest-nice and suppose that $G[X \cap M]$ is a forest for all modules $M \in \mathtt{children}({\module^{\uparrow}})$ and define $X^q = {\modprojection_\pmodule}(X)$. Then, $G[X]$ is a forest if and only if ${\qgraph{\pmodule}}[X^q]$ is a forest.
\end{lem}
\begin{proof}
The graph ${\qgraph{\pmodule}}[X^q]$ can be considered a subgraph of $G[X]$, so if ${\qgraph{\pmodule}}[X^q]$ is not a forest, then neither is $G[X]$.
For the other direction, suppose that $G[X]$ contains a cycle $C$. It cannot be that $C \subseteq X \cap M$ for some $M \in \mathtt{children}({\module^{\uparrow}})$, since $G[X \cap M]$ contains no cycle by assumption. It also cannot be that $G[C \cap M]$ contains an edge for some $M \in \mathtt{children}({\module^{\uparrow}})$, since ${\module^{\uparrow}}$-forest-niceness would then imply that $C$ is contained in $M$, which we just ruled out. If $|C \cap M| \geq 2$ for some $M \in \mathtt{children}({\module^{\uparrow}})$, then ${\module^{\uparrow}}$-forest-niceness implies that $C$ intersects at most one neighboring sibling module $M'$, and only with $|C \cap M'| \leq 1$; since $G[C \cap M]$ cannot contain an edge, the vertices in $C \cap M$ then have degree at most one in $C$, so $C$ cannot be a cycle.
Finally, we must have $|C \cap M| \leq 1$ for all $M \in \mathtt{children}({\module^{\uparrow}})$, but any such cycle $C$ clearly gives rise to a cycle $C^q = {\modprojection_\pmodule}(C)$ in ${\qgraph{\pmodule}}[X^q]$, too. \qed
\end{proof}
\begin{lem}[Lemma~4.5 in \cite{CyganNPPRW22}]
\label{thm:forest_check}
Let $G$ be a graph with $n$ vertices and $m$ edges. Then, $G$ is a forest if and only if $\mathtt{cc}(G) \leq n - m$ if and only if $\mathtt{cc}(G) = n - m$.
\end{lem}
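\cref{thm:forest_check} underlies the standard union--find forest test: adding the edges one by one, the component count starts at $n$ and drops by one per edge unless an edge closes a cycle, so exactly $\mathtt{cc}(G) = n - m$ components remain if and only if $G$ is a forest. A minimal sketch:
\begin{verbatim}
def is_forest(n, edges):
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:       # edge inside a component closes a cycle,
            return False   # i.e., cc(G) > n - m
        parent[ru] = rv
    return True
\end{verbatim}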
One could use the \emph{marker technique} already used by Cygan et al.~\cite{CyganNPPRW22} for the treewidth-parameterization together with \cref{thm:forest_check} to obtain a cut-and-count algorithm, but the marker technique results in several further technical details to take care of. The marker technique can be avoided by working modulo higher powers of two instead of only modulo two, which was also done by Nederlof et al.~\cite{NederlofPSW20} when applying cut-and-count to edge-based problems parameterized by treedepth. We also do so to obtain a cleaner presentation of our algorithm.
\begin{lem}
\label{thm:forest_hom_cuts}
Let ${\module^{\uparrow}} \in {\mathcal{M}_{\mathrm{tree}}^*}(G)$ and $X \subseteq {\module^{\uparrow}}$ be ${\module^{\uparrow}}$-forest-nice and suppose that $G[X \cap M]$ is a forest for all modules $M \in \mathtt{children}({\module^{\uparrow}})$. Let $X^q = {\modprojection_\pmodule}(X)$ and let ${\overline{v}} = |X^q|$ and ${\overline{e}} = |E({\qgraph{\pmodule}}[X^q])|$. Then, $G[X]$ is a forest if and only if $|\{(X_L, X_R) : (X, (X_L, X_R)) \in \homcuts{{\module^{\uparrow}}}(G)\}| \not\equiv_{2^{{\overline{v}} - {\overline{e}} + 1}} 0$.
\end{lem}
\begin{proof}
By \cref{thm:hom_cut}, we have that $|\{(X_L, X_R) : (X, (X_L, X_R)) \in \homcuts{{\module^{\uparrow}}}(G)\}| = 2^{\mathtt{cc}({\qgraph{\pmodule}}[X^q])}$. By \cref{thm:forest_check}, we see that ${\qgraph{\pmodule}}[X^q]$ is a forest if and only if $|\{(X_L, X_R) : (X, (X_L, X_R)) \in \homcuts{{\module^{\uparrow}}}(G)\}| \not\equiv_{2^{{\overline{v}} - {\overline{e}} + 1}} 0$. The lemma then follows via \cref{thm:forest_quotient}. \qed
\end{proof}
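As a small sanity check of \cref{thm:forest_hom_cuts}: if ${\qgraph{\pmodule}}[X^q]$ is a path on three vertices, then ${\overline{v}} = 3$, ${\overline{e}} = 2$, and the count of homogeneous cuts is $2^{\mathtt{cc}} = 2 \not\equiv_{2^{3 - 2 + 1}} 0$, correctly certifying a forest; if ${\qgraph{\pmodule}}[X^q]$ is a triangle, then ${\overline{v}} = {\overline{e}} = 3$ and $2 \equiv_{2^{3 - 3 + 1}} 0$, correctly rejecting it.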
\subsection{Outer DP: Candidate Forests}
\newcommand{\fixv}[1]{{Y^{\mathbf{1}}_{#1}}}
\newcommand{\fixis}[1]{{Y^{{\two_{\mathcal{I}}}}_{#1}}}
\newcommand{\fixif}[1]{{Y^{{\two_{\mathcal{E}}}}_{#1}}}
\newcommand{\nv}[1]{{\#^{\mathbf{v}}_{#1}}}
\newcommand{\nis}[1]{{\#^{\mathbf{is}}_{#1}}}
\newcommand{\nif}[1]{{\#^{\mathbf{if}}_{#1}}}
\newcommand{\cv}[1]{{c^{\mathbf{1}}_{#1}}}
\newcommand{\cis}[1]{{c^{{\two_{\mathcal{I}}}}_{#1}}}
\newcommand{\cif}[1]{{c^{{\two_{\mathcal{E}}}}_{#1}}}
\newcommand{\wv}[1]{{w^{\mathbf{1}}_{#1}}}
\newcommand{\wis}[1]{{w^{{\two_{\mathcal{I}}}}_{#1}}}
\newcommand{\wif}[1]{{w^{{\two_{\mathcal{E}}}}_{#1}}}
\newcommand{\mathcal{R}}{\mathcal{R}}
\newcommand{\mathcal{Q}}{\mathcal{Q}}
\newcommand{{\ctarget, \wtarget, \vtarget, \etarget}}{{{\overline{c}}, {\overline{w}}, {\overline{v}}, {\overline{e}}}}
\newcommand{\cwpairs}[1]{P_{#1}}
Fix an \textsc{Induced Forest}\xspace instance $(G = (V, E), \overline{b})$ and a weight function $\mathbf{w} \colon V \rightarrow [N]$ throughout this section. To solve \textsc{Induced Forest}\xspace parameterized by modular-treewidth, we perform dynamic programming in two ways: we proceed bottom-up along the modular decomposition tree of $G$ and to compute the table entries for the node corresponding to module ${\module^{\uparrow}} \in {\mathcal{M}_{\mathrm{tree}}^*}(G)$, we use the tables of the children $\mathtt{children}({\module^{\uparrow}}) = \Pi_{mod}(G[{\module^{\uparrow}}])$ and perform dynamic programming along the tree decomposition of the associated quotient graph ${\qgraph{\pmodule}} = G[{\module^{\uparrow}}] / \Pi_{mod}(G[{\module^{\uparrow}}])$.
For every module $M \in {\mathcal{M}_{\mathrm{tree}}}(G)$, we have the following data precomputed:
\begin{itemize}
\item a singleton set $\fixv{M}$ in $M$ that maximizes $\mathbf{w}(\fixv{M})$ and its weight $\wv{M} = \mathbf{w}(\fixv{M})$,
\item a maximum independent set $\fixis{M}$ of $G[M]$ that maximizes $\mathbf{w}(\fixis{M})$, the size $\cis{M} = |\fixis{M}|$ and the weight $\wis{M} = \mathbf{w}(\fixis{M})$ of such an independent set.
\end{itemize}
The vertex data can clearly be precomputed in polynomial time and the independent set data can be precomputed in time $\Oh^*(2^{\modtw(G)})$ by running the \textsc{Independent Set}\xspace algorithm from \cref{thm:is_mtw_algo}.
\subsubsection*{Candidate Forests.} We will recursively define for each module ${\module^{\uparrow}} \in {\mathcal{M}_{\mathrm{tree}}}(G)$ the \emph{${\module^{\uparrow}}$-candidate forest} $\fixif{{\module^{\uparrow}}}$ (which depends on the fixed weight function $\mathbf{w}$). Among all induced forests $X$ of $G[{\module^{\uparrow}}]$ found by the algorithm, the forest $\fixif{{\module^{\uparrow}}}$ lexicographically maximizes $(|X|, \mathbf{w}(X))$. Due to the randomization in the cut-and-count-technique, however, it can happen that $\fixif{{\module^{\uparrow}}}$ is not a maximum induced forest of $G[{\module^{\uparrow}}]$. We will see that if we sampled an isolating weight function $\mathbf{w}$, then no errors will occur for the ``important'' subproblems, hence still allowing us to find a maximum induced forest of the whole graph.
The definition of $\fixif{{\module^{\uparrow}}}$ is mutually recursive with the definition of the solution family that will be defined afterwards.
\subsubsection*{Properties of Candidate Forests.} We highlight several properties of the candidate forests that are important for the algorithm.
\begin{itemize}
\item The base case is given by $\fixif{\{v\}} = \{v\}$ for all $v \in V(G)$.
\item $\fixif{{\module^{\uparrow}}}$ is an induced forest of $G[{\module^{\uparrow}}]$.
\item If $G[{\module^{\uparrow}}]$ contains no edge, then $\fixif{{\module^{\uparrow}}} = \fixis{{\module^{\uparrow}}}$.
\item If $G[{\module^{\uparrow}}]$ contains an edge, then $|\fixif{{\module^{\uparrow}}}| > |\fixis{{\module^{\uparrow}}}|$.
\end{itemize}
Given $\fixif{M}$ for all $M \in \mathtt{children}({\module^{\uparrow}})$, we can describe how to compute $\fixif{{\module^{\uparrow}}}$. This step depends on which kind of node ${\module^{\uparrow}}$ corresponds to in the modular decomposition. We first handle the degenerate cases of a parallel or series node and then proceed with the much more challenging case of a prime node.
\subsubsection{Computing Candidate Forests in Parallel and Series Nodes.}\label{sec:fvs_parallel_series}
If ${\module^{\uparrow}} \in {\mathcal{M}_{\mathrm{tree}}^*}(G)$ is a parallel node, i.e., ${\qgraph{\pmodule}}$ is an independent set, then \cref{thm:forest_nice} and \cref{thm:fvs_opt_sub} tell us to simply take a maximum induced forest inside each child module $M \in \mathtt{children}({\module^{\uparrow}})$. Hence, we set $\fixif{{\module^{\uparrow}}} = \bigcup_{M \in \mathtt{children}({\module^{\uparrow}})} \fixif{M}$ and accordingly $\cif{{\module^{\uparrow}}} = \sum_{M \in \mathtt{children}({\module^{\uparrow}})} \cif{M}$ and $\wif{{\module^{\uparrow}}} = \sum_{M \in \mathtt{children}({\module^{\uparrow}})} \wif{M}$.
If ${\module^{\uparrow}} \in {\mathcal{M}_{\mathrm{tree}}^*}(G)$ is a series node, then we first analyze the structure of maximum induced forests with respect to a series node.
\begin{lem}\label{thm:fvs_modtw_series}
Let ${\module^{\uparrow}} \in {\mathcal{M}_{\mathrm{tree}}^*}(G)$ and $X$ be a maximum induced forest of $G[{\module^{\uparrow}}]$. If ${\module^{\uparrow}}$ is a series module, i.e., the quotient graph ${\qgraph{\pmodule}}$ is a clique, then one of the following statements holds:
\begin{itemize}
\item $X \subseteq M$ for some $M \in \mathtt{children}({\module^{\uparrow}})$ and $X$ is a maximum induced forest of $G[M]$.
\item $X \subseteq M_1 \cup M_2$ for some $M_1 \neq M_2 \in \mathtt{children}({\module^{\uparrow}})$ and $X \cap M_1$ is a maximum independent set of $G[M_1]$ and $|X \cap M_2| = 1$.
\end{itemize}
\end{lem}
\begin{proof}
Suppose that $X$ intersects three different modules in $\mathtt{children}({\module^{\uparrow}})$; since these modules are pairwise adjacent, $X$ would induce a triangle. Hence, $X$ can intersect at most two different modules. By \cref{thm:forest_nice} and \cref{thm:fvs_opt_sub}, $X$ is forest-nice, has optimal substructure and satisfies the promotion property. If $X$ intersects only a single module $M$, then the first statement follows due to the promotion property. If $X$ intersects two modules, then the second statement follows due to $X$ being forest-nice and optimal substructure. \qed
\end{proof}
Given the maximum independent sets $\fixis{M}$ for all $M \in \mathtt{children}({\module^{\uparrow}})$, we can in polynomial time compute an optimum induced forest $\tilde{Y}_{\module^{\uparrow}}$ of $G[{\module^{\uparrow}}]$ subject to the second condition in \cref{thm:fvs_modtw_series}. We compare the induced forest $\tilde{Y}_{\module^{\uparrow}}$ and the forests $\fixif{M}$, $M \in \mathtt{children}({\module^{\uparrow}})$, lexicographically by their cost and weight and, motivated by \cref{thm:fvs_modtw_series}, we let $\fixif{{\module^{\uparrow}}}$ be the winner of this comparison.
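For illustration, the series-node step fits in a few lines of code. The following Python sketch is not part of the formal development; it assumes hypothetical per-child records with keys \texttt{if\_set}, \texttt{is\_set}, and \texttt{v\_set} standing for $\fixif{M}$, $\fixis{M}$, and $\fixv{M}$, each given as a \texttt{frozenset} of vertices, and a weight dictionary \texttt{w}.

\begin{verbatim}
# Minimal sketch of the series-node step (illustrative, assumed names).
def lex_key(vertex_set, w):
    return (len(vertex_set), sum(w[v] for v in vertex_set))

def series_candidate(children, w):
    # Option 1: the best candidate forest inside a single child module.
    y1 = max((c["if_set"] for c in children), key=lambda s: lex_key(s, w))
    # Option 2: a maximum independent set in one child plus a single
    # vertex in another child (all child modules are pairwise adjacent,
    # so any third intersected module would close a triangle).
    y2 = max((c1["is_set"] | c2["v_set"]
              for i, c1 in enumerate(children)
              for j, c2 in enumerate(children) if i != j),
             key=lambda s: lex_key(s, w))
    # The candidate forest is the lexicographic winner of both options.
    return max(y1, y2, key=lambda s: lex_key(s, w))
\end{verbatim}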
\subsubsection{Computing Candidate Forests in Prime Nodes.}\label{sec:fvs_prime_candidate}
To compute the ${\module^{\uparrow}}$-candidate forest $\fixif{{\module^{\uparrow}}}$ when ${\module^{\uparrow}}$ is a prime node, we will use the cut-and-count technique and dynamic programming along the given tree decomposition of the quotient graph ${\qgraph{\pmodule}}$. Before going into the details, we give the formal definitions needed to describe the partial solutions of the dynamic programming and the subproblem that has to be solved. This will already allow us to define the induced forest $\fixif{{\module^{\uparrow}}}$ and prove the correctness of the outer loop involving the modular decomposition. We first introduce some ``local'' versions of \cref{dfn:fvs_modtw_opt_sub} and \cref{dfn:fvs_modtw_promotion}.
\begin{dfn}\label{dfn:fvs_modtw_opt_sub_local}
Let ${\module^{\uparrow}} \in {\mathcal{M}_{\mathrm{tree}}^*}(G)$ and $X \subseteq {\module^{\uparrow}}$, we say that $X$ has \emph{${\module^{\uparrow}}$-substructure} if for all $M \in \mathtt{children}({\module^{\uparrow}})$ we have that $\varphi_X(M) \neq \mathbf{0}$ implies $X \cap M = Y^{\varphi_X(M)}_{M}$.
\end{dfn}
Comparing the definition of \emph{${\module^{\uparrow}}$-substructure} to \emph{optimal substructure}, we see that in ${\module^{\uparrow}}$-substructure we only consider the child modules and require the choice of a specified vertex, maximum independent set, or induced forest, respectively. Note that, due to the previously discussed issue, $\fixif{M}$ is not necessarily a maximum induced forest.
\begin{dfn}\label{dfn:fvs_modtw_promotion_local}
Let ${\module^{\uparrow}} \in {\mathcal{M}_{\mathrm{tree}}^*}(G)$ and $X \subseteq {\module^{\uparrow}}$, we say that $X$ satisfies the \emph{${\module^{\uparrow}}$-promotion property} if for all modules $M \in \mathtt{children}({\module^{\uparrow}})$ with $|X \cap M| \geq 2$ and $\varphi_X(\nsib(M)) = \{\mathbf{0}\}$ it holds that $X \cap M = \fixif{M}$.
\end{dfn}
\cref{dfn:fvs_modtw_promotion_local}, unlike \cref{dfn:fvs_modtw_nice_local}, does not need to account for the current subgraph of ${\qgraph{\pmodule}}$ as promotion is only checked for modules that have already been forgotten by the tree decomposition, i.e., all incident edges have already been added, and for non-introduced modules $M$, we simply have $X \cap M = \emptyset$.
We can now define the solution family considered by our algorithm.
\begin{dfn}
The family $\mathcal{R}_{\module^{\uparrow}}$ consists of all $X \subseteq {\module^{\uparrow}}$ such that $X$ is ${\module^{\uparrow}}$-forest-nice wrt.\ ${\qgraph{\pmodule}}$, has ${\module^{\uparrow}}$-substructure, and satisfies the ${\module^{\uparrow}}$-promotion property. Given ${\overline{c}} \in [0, |{\module^{\uparrow}}|]$, ${\overline{w}} \in [0, \mathbf{w}({\module^{\uparrow}})]$, ${\overline{v}} \in [0, |\mathtt{children}({\module^{\uparrow}})|]$, ${\overline{e}} \in [0, {\overline{v}} - 1]$, the family $\mathcal{R}_{\module^{\uparrow}}^{{\ctarget, \wtarget, \vtarget, \etarget}}$ consists of all $X \in \mathcal{R}_{\module^{\uparrow}}$ with
\begin{itemize}
\item $|X| = {\overline{c}}$ and $\mathbf{w}(X) = {\overline{w}}$,
\item $|X^q| = {\overline{v}}$ and $|E({\qgraph{\pmodule}}[X^q])| = {\overline{e}}$, where $X^q = {\modprojection_\pmodule}(X)$.
\end{itemize}
We also define $\mathcal{S}_{\module^{\uparrow}}^{\ctarget, \wtarget, \vtarget, \etarget} = \{X \in \mathcal{R}_{\module^{\uparrow}}^{\ctarget, \wtarget, \vtarget, \etarget} : G[X] \text{ is a forest}\}$.
\end{dfn}
By pairing elements of $\mathcal{R}_{\module^{\uparrow}}^{\ctarget, \wtarget, \vtarget, \etarget}$ with homogeneous cuts, we can use the cut-and-count technique to decide whether $\mathcal{S}_{\module^{\uparrow}}^{\ctarget, \wtarget, \vtarget, \etarget}$ is empty or not.
\begin{dfn}
The family $\mathcal{Q}_{\module^{\uparrow}}$ consists of all $(X, (X_L, X_R)) \in \homcuts{{\module^{\uparrow}}}(G)$ with $X \in \mathcal{R}_{\module^{\uparrow}}$.
Similarly, $\mathcal{Q}_{\module^{\uparrow}}^{{\ctarget, \wtarget, \vtarget, \etarget}}$ consists of all $(X, (X_L, X_R)) \in \homcuts{{\module^{\uparrow}}}(G)$ with $X \in \mathcal{R}_{\module^{\uparrow}}^{{\ctarget, \wtarget, \vtarget, \etarget}}$.
\end{dfn}
The crucial property of $\mathcal{Q}_{\module^{\uparrow}}^{{\ctarget, \wtarget, \vtarget, \etarget}}$ is given by the following lemma.
\begin{lem}\label{thm:fvs_nonzero_implies_forest}
Let ${\module^{\uparrow}} \in {\mathcal{M}_{\mathrm{tree}}^*}(G)$. It holds that $|\mathcal{Q}_{\module^{\uparrow}}^{{\ctarget, \wtarget, \vtarget, \etarget}}| \equiv_{2^{{\overline{v}} - {\overline{e}} + 1}} 2^{{\overline{v}} - {\overline{e}}} |\mathcal{S}_{\module^{\uparrow}}^{{\ctarget, \wtarget, \vtarget, \etarget}}|$.
\end{lem}
\begin{proof}
Consider any $X \in \mathcal{R}_{\module^{\uparrow}}^{\ctarget, \wtarget, \vtarget, \etarget}$ and let $X^q = {\modprojection_\pmodule}(X)$. If $G[X]$ is a forest, then so is ${\qgraph{\pmodule}}[X^q]$ by \cref{thm:forest_quotient} and we have that $X$ contributes exactly $2^{{\overline{v}} - {\overline{e}}}$ objects to $\mathcal{Q}_{\module^{\uparrow}}^{\ctarget, \wtarget, \vtarget, \etarget}$ by \cref{thm:hom_cut} and \cref{thm:forest_check}. By \cref{thm:forest_hom_cuts}, we see that if $G[X]$ is not a forest, then $X$ contributes a multiple of $2^{{\overline{v}} - {\overline{e}} + 1}$ objects to $\mathcal{Q}_{\module^{\uparrow}}^{\ctarget, \wtarget, \vtarget, \etarget}$, which therefore cancel. \qed
\end{proof}
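The counting argument can be verified directly on small instances: the number of consistent cuts of a graph equals $2$ raised to its number of connected components, which is $2^{{\overline{v}} - {\overline{e}}}$ precisely for forests and a multiple of $2^{{\overline{v}} - {\overline{e}} + 1}$ otherwise. The following brute-force Python sketch is purely illustrative and ignores the module structure and homogeneity; it only demonstrates the cancellation behavior.

\begin{verbatim}
from itertools import product

def consistent_cuts(vertices, edges):
    # Count partitions (X_L, X_R) of the vertex set such that no edge
    # runs between X_L and X_R (brute force over all 2^|V| assignments).
    count = 0
    for assignment in product([0, 1], repeat=len(vertices)):
        side = dict(zip(vertices, assignment))
        if all(side[u] == side[v] for u, v in edges):
            count += 1
    return count

# A triangle is not a forest: v = 3, e = 3, so 2^(v-e) = 1, and the
# count 2 is a multiple of 2^(v-e+1) = 2, i.e., it cancels modulo it.
print(consistent_cuts([1, 2, 3], [(1, 2), (2, 3), (1, 3)]))  # 2
# A path is a forest: v = 3, e = 2, and the count equals 2^(v-e) = 2.
print(consistent_cuts([1, 2, 3], [(1, 2), (2, 3)]))          # 2
\end{verbatim}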
From the sets $\mathcal{Q}_{\module^{\uparrow}}^{{\ctarget, \wtarget, \vtarget, \etarget}}$ for a fixed ${\module^{\uparrow}} \in {\mathcal{M}_{\mathrm{tree}}^*}(G)$, we can finally give the recursive definition of the ${\module^{\uparrow}}$-candidate forest $\fixif{{\module^{\uparrow}}}$.
\begin{dfn}
\label{dfn:recursive_candidate_forest}
Let ${\module^{\uparrow}} \in {\mathcal{M}_{\mathrm{tree}}^*}(G)$ such that ${\qgraph{\pmodule}}$ is prime. The set of \emph{attained cost-weight-pairs} $\cwpairs{{\module^{\uparrow}}}$ consists of all pairs $({\overline{c}}, {\overline{w}})$ such that there exist ${\overline{v}}$ and ${\overline{e}}$ with $|\mathcal{Q}_{\module^{\uparrow}}^{{\ctarget, \wtarget, \vtarget, \etarget}}| \not\equiv_{2^{{\overline{v}} - {\overline{e}} + 1}} 0$. We denote the lexicographic maximum pair in $\cwpairs{{\module^{\uparrow}}}$ by $({\overline{c}}_{max}, {\overline{w}}_{max})$. \cref{thm:fvs_nonzero_implies_forest} guarantees the existence of an induced forest $Y$ of $G[{\module^{\uparrow}}]$ with $|Y| = {\overline{c}}_{max}$ and $\mathbf{w}(Y) = {\overline{w}}_{max}$. If ${\overline{c}}_{max} > |\fixis{{\module^{\uparrow}}}|$, then the \emph{${\module^{\uparrow}}$-candidate forest} $\fixif{{\module^{\uparrow}}}$ is an arbitrary induced forest among these, else we greedily extend $\fixis{{\module^{\uparrow}}}$ by some vertices, without introducing cycles, to obtain $\fixif{{\module^{\uparrow}}}$. We set $\cif{{\module^{\uparrow}}} = |\fixif{{\module^{\uparrow}}}|$ and $\wif{{\module^{\uparrow}}} = \mathbf{w}(\fixif{{\module^{\uparrow}}})$.
\end{dfn}
The algorithm does not know the exact set $\fixif{{\module^{\uparrow}}}$, hence no issue is caused by the arbitrary choice; the algorithm only knows the values $\cif{{\module^{\uparrow}}}$ and $\wif{{\module^{\uparrow}}}$, and the set $\fixif{{\module^{\uparrow}}}$ is only used in the analysis of the algorithm. We will see that the choice of $\fixif{{\module^{\uparrow}}}$ is unique when $\mathbf{w}\restrict{{\module^{\uparrow}}}$ isolates the optimum induced forests of $G[{\module^{\uparrow}}]$; otherwise the choice might not be unique. Only in the latter case can ${\overline{c}}_{max} \leq |\fixis{{\module^{\uparrow}}}|$ occur: since ${\qgraph{\pmodule}}$ is prime, the graph $G[{\module^{\uparrow}}]$ must contain some edges and hence there exists a larger induced forest that is not an independent set.
Note that $\fixif{{\module^{\uparrow}}}$ is always an induced forest, but $G[\fixif{{\module^{\uparrow}}}]$ does not necessarily contain an edge, i.e., $\fixif{{\module^{\uparrow}}}$ may be an independent set or even a single vertex if ${\module^{\uparrow}}$ is a parallel node or a singleton. This means that for some $X \subseteq V(G)$ with $X \cap {\module^{\uparrow}} = \fixif{{\module^{\uparrow}}}$, we only know $\varphi_X({\module^{\uparrow}}) \leq {\two_{\mathcal{E}}}$ and not necessarily $\varphi_X({\module^{\uparrow}}) = {\two_{\mathcal{E}}}$.
The complete outer DP is summarized in \cref{algo:outer_dp}.
\begin{algorithm}
\uIf{${\module^{\uparrow}}$ is a parallel node}{
$\fixif{{\module^{\uparrow}}} := \bigcup_{M \in \mathtt{children}({\module^{\uparrow}})} \fixif{M}$\;
}
\uElseIf{${\module^{\uparrow}}$ is a series node}{
pick $Y_1$ among all $\fixif{M}$, $M \in \mathtt{children}({\module^{\uparrow}})$, to lex.\ maximize $(|Y_1|, \mathbf{w}(Y_1))$\;
pick $Y_2 \in \{\fixis{M_1} \cup \fixv{M_2} : M_1 \neq M_2 \in \mathtt{children}({\module^{\uparrow}})\}$ to lex.\ max.\ $(|Y_2|, \mathbf{w}(Y_2))$\;
pick $\fixif{{\module^{\uparrow}}}$ as a winner of the lex.\ comparison $(|Y_1|, \mathbf{w}(Y_1))$ vs $(|Y_2|, \mathbf{w}(Y_2))$\;
}
\Else{
compute $|\mathcal{Q}^{{\ctarget, \wtarget, \vtarget, \etarget}}_{\module^{\uparrow}}|$ for all ${\ctarget, \wtarget, \vtarget, \etarget}$ using treewidth-based DP\;
construct $\cwpairs{{\module^{\uparrow}}} = \left\{({\overline{c}}, {\overline{w}}) : \text{ there are ${\overline{v}}, {\overline{e}}$ such that } |\mathcal{Q}^{{\ctarget, \wtarget, \vtarget, \etarget}}_{\module^{\uparrow}}| \not\equiv_{2^{{\overline{v}} - {\overline{e}} + 1}} 0 \right\}$\;
let $({\overline{c}}_{max}, {\overline{w}}_{max}) \in \cwpairs{{\module^{\uparrow}}}$ be the lexicographic maximum\;
\uIf{${\overline{c}}_{max} > |\fixis{{\module^{\uparrow}}}|$}{
pick any $\fixif{{\module^{\uparrow}}}$ among induced forests $Y$ of $G[{\module^{\uparrow}}]$ with $|Y| = {\overline{c}}_{max}$ and $\mathbf{w}(Y) = {\overline{w}}_{max}$\;
}
\Else{
obtain $\fixif{{\module^{\uparrow}}}$ by greedily extending $\fixis{{\module^{\uparrow}}}$ by vertices without creating cycles\;
}
}
\caption{Outer DP to compute $\fixif{{\module^{\uparrow}}}$.}
\label{algo:outer_dp}
\end{algorithm}
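The following Python skeleton summarizes the traversal and case distinction of \cref{algo:outer_dp}. It is an illustration, not a complete implementation: the three helper functions are stubs for the routines developed in this section, and the node attributes are hypothetical names.

\begin{verbatim}
# Skeleton of the outer DP over the modular decomposition tree;
# helpers are stubs for the routines described in the text.
def series_step(node): ...                 # series-node step above
def treewidth_dp(node): ...                # DP along the tree decomposition
def best_attained_pair(counts, node): ...  # lex. max attained (c, w) pair

def outer_dp(node):
    for child in node.children:            # bottom-up over the tree
        outer_dp(child)
    if node.kind == "leaf":                # singleton module {v}
        node.c_if, node.w_if = 1, node.weight
    elif node.kind == "parallel":          # disjoint union of child forests
        node.c_if = sum(c.c_if for c in node.children)
        node.w_if = sum(c.w_if for c in node.children)
    elif node.kind == "series":
        node.c_if, node.w_if = series_step(node)
    else:                                  # prime quotient graph
        node.c_if, node.w_if = best_attained_pair(treewidth_dp(node), node)
\end{verbatim}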
\subsubsection{Correctness of Outer DP.}
Assuming an algorithm that computes the values $|\mathcal{Q}_{{\module^{\uparrow}}}^{{\ctarget, \wtarget, \vtarget, \etarget}}|$ for all prime ${\qgraph{\pmodule}}$ and all ${\ctarget, \wtarget, \vtarget, \etarget}$, we obtain an algorithm that implicitly computes $\fixif{{\module^{\uparrow}}}$ for all ${\module^{\uparrow}} \in {\mathcal{M}_{\mathrm{tree}}}(G)$ by starting with $\fixif{\{v\}} = \{v\}$ for all $v \in V$ and performing bottom-up dynamic programming along the modular decomposition tree using the appropriate algorithm based on the node type. While the precise set $\fixif{{\module^{\uparrow}}}$ is not known to the algorithm, it knows the value $\cif{{\module^{\uparrow}}} = |\fixif{{\module^{\uparrow}}}|$. The algorithm returns positively if $\cif{V} \geq \overline{b}$ and negatively otherwise. As we ensure that $\fixif{{\module^{\uparrow}}}$ is an induced forest for all ${\module^{\uparrow}} \in {\mathcal{M}_{\mathrm{tree}}}(G)$, the algorithm does not return false positives. The next lemma concludes the discussion of the outer DP and implies that the algorithm answers correctly, assuming that the weight function $\mathbf{w}$ isolates the maximum induced forests of $G$.
\begin{lem}[Main Correctness Lemma]\label{thm:fvs_mtw_main_correctness}
Suppose that $\mathbf{w}$ max-isolates $X_*$ in $\mathcal{F}_{opt}(G)$. The following properties hold for all ${\module^{\uparrow}} \in {\mathcal{M}_{\mathrm{tree}}}(G)$:
\begin{enumerate}
\item $\mathbf{s}({\module^{\uparrow}}) := \varphi_{X_*}({\module^{\uparrow}}) \neq \mathbf{0}$ implies that $X_* \cap {\module^{\uparrow}} = Y^{\mathbf{s}({\module^{\uparrow}})}_{\module^{\uparrow}}$, (${\module^{\uparrow}}$-substructure for all ${\module^{\uparrow}}$)
\item $\mathbf{s}({\module^{\uparrow}}) = \varphi_{X_*}({\module^{\uparrow}}) = {\two_{\mathcal{E}}}$ implies that $\fixif{{\module^{\uparrow}}}$ is a maximum induced forest of $G[{\module^{\uparrow}}]$,
\item $\mathbf{s}({\module^{\uparrow}}) = \varphi_{X_*}({\module^{\uparrow}}) = {\two_{\mathcal{E}}}$ implies that $X_* \cap {\module^{\uparrow}} \in \mathcal{R}_{{\module^{\uparrow}}}$.
\end{enumerate}
\end{lem}
\begin{proof}
Notice that for singleton modules only the first property is relevant and is trivially true. By \cref{thm:forest_nice} and \cref{thm:fvs_opt_sub}, $X_*$ is forest-nice, has optimal substructure and the promotion property. By \cref{thm:fvs_descendant_isolation}, it follows that $\mathbf{w}\big|_{{\module^{\uparrow}}}$ max-isolates $X_* \cap {\module^{\uparrow}}$ in $\mathcal{F}_{opt}(G, \mathbf{s}({\module^{\uparrow}}))$ for all ${\module^{\uparrow}} \in {\mathcal{M}_{\mathrm{tree}}}(G)$. Since $X_*$ is forest-nice, $X_* \cap {\module^{\uparrow}}$ must be ${\module^{\uparrow}}$-forest-nice for all ${\module^{\uparrow}} \in {\mathcal{M}_{\mathrm{tree}}^*}(G)$ as the quotient graph ${\qgraph{\pmodule}}$ captures when two sibling modules $M, M' \in \mathtt{children}({\module^{\uparrow}})$ are adjacent.
We proceed by proving the first property whenever $\mathbf{s}({\module^{\uparrow}}) \neq {\two_{\mathcal{E}}}$. Fix some ${\module^{\uparrow}}$ with $\mathbf{s}({\module^{\uparrow}}) \in \{\mathbf{1}, {\two_{\mathcal{I}}}\}$. We have $X_* \cap {\module^{\uparrow}}, Y^{\mathbf{s}({\module^{\uparrow}})}_{{\module^{\uparrow}}} \in \mathcal{F}_{opt}(G[{\module^{\uparrow}}], \mathbf{s}({\module^{\uparrow}}))$ by optimal substructure and definition. By choice of $Y^{\mathbf{s}({\module^{\uparrow}})}_{{\module^{\uparrow}}}$, we have that $\mathbf{w}(X_* \cap {\module^{\uparrow}}) \leq \mathbf{w}(Y^{\mathbf{s}({\module^{\uparrow}})}_{{\module^{\uparrow}}})$. By max-isolation of $X_* \cap {\module^{\uparrow}}$ it follows that $\mathbf{w}(X_* \cap {\module^{\uparrow}}) = \mathbf{w}(Y^{\mathbf{s}({\module^{\uparrow}})}_{{\module^{\uparrow}}})$ and even $X_* \cap {\module^{\uparrow}} = Y^{\mathbf{s}({\module^{\uparrow}})}_{{\module^{\uparrow}}}$.
The remainder of the proof is an induction along the modular decomposition tree. As the base case, we consider modules ${\module^{\uparrow}} \in {\mathcal{M}_{\mathrm{tree}}^*}(G)$ with $\mathbf{s}({\module^{\uparrow}}) = {\two_{\mathcal{E}}}$ and $\mathbf{s}(M) \neq {\two_{\mathcal{E}}}$ for all $M \in \mathtt{children}({\module^{\uparrow}})$. For the base case, we have already shown that $X_* \cap M = Y^{\mathbf{s}(M)}_M$ for all $M \in \mathtt{children}({\module^{\uparrow}})$, hence $X_* \cap {\module^{\uparrow}}$ has ${\module^{\uparrow}}$-substructure in this case.
We continue with the ${\module^{\uparrow}}$-promotion property in the base case. Suppose it is violated for some $M \in \mathtt{children}({\module^{\uparrow}})$, i.e., $\mathbf{s}(\nsib(M)) \subseteq \{\mathbf{0}\}$ and $X_* \cap M = \fixis{M} \neq \fixif{M}$ (using ${\module^{\uparrow}}$-substructure). By definition of $\fixif{M}$, we have that $\fixif{M}$ is an induced forest of $G[M]$ and $\fixif{M} \neq \fixis{M}$ if and only if $|\fixif{M}| > |\fixis{M}|$. We claim that $M$ must then also violate the promotion property of $X_*$. For this it remains to establish that $\mathbf{s}(\mathcal{N}_{\mathtt{all}}(M)) \subseteq \{\mathbf{0}\}$. We have $\mathbf{s}(\nsib(M)) \subseteq \{\mathbf{0}\}$ by assumption; this shows that $\mathbf{s}(M') = \mathbf{0}$ for all $M' \in \mathcal{N}_{\mathtt{all}}(M)$ with $M' \subseteq {\module^{\uparrow}}$. Every module $M' \in \mathcal{N}_{\mathtt{all}}(M)$ with $M' \not\subseteq {\module^{\uparrow}}$ must be disjoint from ${\module^{\uparrow}}$ and hence $M' \in \mathcal{N}_{\mathtt{all}}({\module^{\uparrow}})$, which implies that $\mathbf{s}(M') = \mathbf{0}$ since $X_*$ is forest-nice.
For the base case, we have now established that $X_* \cap {\module^{\uparrow}} \in \mathcal{R}_{{\module^{\uparrow}}}$, as we have verified that $X_* \cap {\module^{\uparrow}}$ is ${\module^{\uparrow}}$-forest-nice wrt.\ ${\qgraph{\pmodule}}$, has ${\module^{\uparrow}}$-substructure, and has the ${\module^{\uparrow}}$-promotion property. We can now proceed by showing the first and second property for the base case when $\mathbf{s}({\module^{\uparrow}}) = {\two_{\mathcal{E}}}$. Note that the second property follows from the first one by optimal substructure of $X_*$, so we only have to prove the first property.
If ${\qgraph{\pmodule}}$ is a parallel or series node, then the analysis in \cref{sec:fvs_parallel_series} shows that $\fixif{{\module^{\uparrow}}} \in \mathcal{F}_{opt}(G[{\module^{\uparrow}}])$. Since also $X_* \cap {\module^{\uparrow}} \in \mathcal{F}_{opt}(G[{\module^{\uparrow}}])$ and both maximize their weight (by definition and max-isolation), the isolation of $X_* \cap {\module^{\uparrow}}$ implies $X_* \cap {\module^{\uparrow}} = \fixif{{\module^{\uparrow}}}$. If ${\qgraph{\pmodule}}$ is a prime node, then we set $X^q_* = {\modprojection_\pmodule}(X_* \cap {\module^{\uparrow}})$ and ${\overline{c}}_* = |X_* \cap {\module^{\uparrow}}|$, ${\overline{w}}_* = \mathbf{w}(X_* \cap {\module^{\uparrow}})$, ${\overline{v}}_* = |X_*^q|$, ${\overline{e}}_* = |E({\qgraph{\pmodule}}[X_*^q])|$. Hence, we have that $X_* \cap {\module^{\uparrow}} \in \mathcal{R}^{{\overline{c}}_*, {\overline{w}}_*, {\overline{v}}_*, {\overline{e}}_*}_{\module^{\uparrow}}$ and $X_* \cap {\module^{\uparrow}} \in \mathcal{S}^{{\overline{c}}_*, {\overline{w}}_*, {\overline{v}}_*, {\overline{e}}_*}_{\module^{\uparrow}}$. By max-isolation of $X_* \cap {\module^{\uparrow}}$, we therefore have $|\mathcal{S}^{{\overline{c}}_*, {\overline{w}}_*, {\overline{v}}_*, {\overline{e}}_*}_{\module^{\uparrow}}| = 1$ and \cref{thm:fvs_nonzero_implies_forest} shows that $|\mathcal{Q}^{{\overline{c}}_*, {\overline{w}}_*, {\overline{v}}_*, {\overline{e}}_*}_{\module^{\uparrow}}| \not\equiv_{2^{{\overline{v}}_* - {\overline{e}}_* + 1}} 0$, so $({\overline{c}}_*, {\overline{w}}_*) \in \cwpairs{{\module^{\uparrow}}}$. Moreover, $({\overline{c}}_*, {\overline{w}}_*)$ must be the lexicographic maximum in $\cwpairs{{\module^{\uparrow}}}$, since every pair in $\cwpairs{{\module^{\uparrow}}}$ is attained by some induced forest of $G[{\module^{\uparrow}}]$ by \cref{thm:fvs_nonzero_implies_forest}, whereas $X_* \cap {\module^{\uparrow}}$ is a maximum induced forest of $G[{\module^{\uparrow}}]$ that maximizes the weight among all maximum induced forests. Therefore, \cref{dfn:recursive_candidate_forest} must pick $\fixif{{\module^{\uparrow}}} = X_* \cap {\module^{\uparrow}}$; we must have $|X_* \cap {\module^{\uparrow}}| > |\fixis{{\module^{\uparrow}}}|$, since $G[{\module^{\uparrow}}]$ contains an edge and $X_* \cap {\module^{\uparrow}} \in \mathcal{F}_{opt}(G[{\module^{\uparrow}}])$. This concludes the proof of the base case.
Now, when proving the three properties for some ${\module^{\uparrow}} \in {\mathcal{M}_{\mathrm{tree}}^*}(G)$, we can inductively assume that they hold for all $M \in \mathtt{children}({\module^{\uparrow}})$. The argument for the inductive step is essentially the same as for the base case; however, $\mathbf{s}(M) = {\two_{\mathcal{E}}}$ can now occur, and for this case we can apply the already proven properties. The first two properties for the child modules allow us to establish $X_* \cap {\module^{\uparrow}} \in \mathcal{R}_{\module^{\uparrow}}$ even in the inductive step. From that point on, the same argument considering the sets $\mathcal{Q}^{{\ctarget, \wtarget, \vtarget, \etarget}}_{\module^{\uparrow}}$ can be followed to also obtain the first and second property for ${\module^{\uparrow}}$. \qed
\end{proof}
\subsection{Dynamic Programming along Tree Decomposition}\label{sec:fvs_prime_dp}
We now need to show how to compute the values $|\mathcal{Q}_{{\module^{\uparrow}}}^{{\ctarget, \wtarget, \vtarget, \etarget}}|$ modulo $2^{{\overline{v}} - {\overline{e}} + 1}$ for all ${\ctarget, \wtarget, \vtarget, \etarget}$ when ${\qgraph{\pmodule}}$ is prime, from which we can then obtain the ${\module^{\uparrow}}$-candidate forest $\fixif{{\module^{\uparrow}}}$ and proceed through the modular decomposition. We will compute these values by performing dynamic programming along the tree decomposition of the quotient graph ${\qgraph{\pmodule}} = G[{\module^{\uparrow}}] / \mathtt{children}({\module^{\uparrow}})$.
\subsubsection*{Precomputed Data.} Let us fix some ${\module^{\uparrow}} \in {\mathcal{M}_{\mathrm{tree}}^*}(G)$ and recap the data that is available from solving the previous subproblems. For every $M \in \mathtt{children}({\module^{\uparrow}})$, we know the values
\begin{itemize}
\item $\cv{M} = |\fixv{M}| = 1$, $\wv{M} = \mathbf{w}(\fixv{M})$,
\item $\cis{M} = |\fixis{M}|$, $\wis{M} = \mathbf{w}(\fixis{M})$,
\item $\cif{M} = |\fixif{M}|$, $\wif{M} = \mathbf{w}(\fixif{M})$.
\end{itemize}
The algorithm also knows the sets $\fixv{M}$ and $\fixis{M}$, but not the sets $\fixif{M}$; the latter will only be used in the analysis. Furthermore, we are given a tree decomposition $(\TT^q_{{\module^{\uparrow}}}, (\mathbb{B}^q_t)_{t \in V(\TT^q_{{\module^{\uparrow}}})})$ of the quotient graph ${\qgraph{\pmodule}}$ of width $k$ which can be assumed to be very nice by \cref{thm:very_nice_tree_decomposition}. To lighten the notation, we do not annotate the bags $\mathbb{B}^q_t$ with ${\module^{\uparrow}}$, but keep in mind that there is a different tree decomposition for each quotient graph.
\begin{dfn}
Let $t \in V(\TT^q_{\module^{\uparrow}})$ be a node of the tree decomposition $\TT^q_{\module^{\uparrow}}$. The set of \emph{relaxed solutions} $\mathcal{R}_{t, {\module^{\uparrow}}}$ consists of the vertex subsets $X \subseteq V_t = {\modprojection^{-1}_\pmodule}(V^q_t)$ that satisfy the following properties:
\begin{itemize}
\item $X$ is ${\module^{\uparrow}}$-forest-nice with respect to $G^q_t$,
\item $X$ has ${\module^{\uparrow}}$-substructure,
\item $\forall M \in \mathtt{children}({\module^{\uparrow}})\colon \varphi_X(M) = {\two_{\mathcal{E}}} \rightarrow (M \subseteq V_t \setminus \mathbb{B}_t \vee \text{$G[M]$ is a clique of size at least 2})$,
\item $\forall {v^q_\module} \in V^q_t \setminus \mathbb{B}^q_t\colon (|X \cap M| \geq 2 \wedge \deg_{G^q_t[{\modprojection_\pmodule}(X)]}(v^q_M) = 0) \rightarrow X \cap M = \fixif{M}$.
\end{itemize}
\end{dfn}
Let $\hat{r}$ be the root node of the tree decomposition $\TT^q_{\module^{\uparrow}}$; we want this definition to achieve $\mathcal{R}_{\hat{r}, {\module^{\uparrow}}} = \mathcal{R}_{\module^{\uparrow}}$. Hence, the first two properties are a natural requirement. The third and fourth property lead to the ${\module^{\uparrow}}$-promotion property at the root node $\hat{r}$ and are more intricate to facilitate the dynamic program. To be precise, since the bag $\mathbb{B}^q_{\hat{r}}$ at the root node $\hat{r}$ is empty, the third property is trivially satisfied and the fourth property turns into the ${\module^{\uparrow}}$-promotion property.
We exclude the current bag from consideration because we only want to check whether a module $M$ is isolated in $X$ once all incident edges have been introduced. This is certainly the case when $M$ leaves the current bag, i.e., when it is forgotten. If $M$ is isolated at this point, we can safely replace the independent set $\fixis{M}$ inside $M$ by the induced forest $\fixif{M}$, which cannot decrease the size of $X$. This means that, with the exception of modules inducing a clique, no module $M$ in the current bag satisfies $\varphi_X(M) = {\two_{\mathcal{E}}}$.
A naive dynamic programming routine would not use promotion and would instead track in which modules of the current bag the solution chooses an induced forest (and not just an independent set). By using promotion, we can save this state and only handle the remaining states, namely choosing no vertex, a single vertex, or an independent set. Thereby, we obtain an improved running time.
Due to \cref{thm:fvs_nonzero_implies_forest}, we want to count for each $X \in \mathcal{R}_{t, {\module^{\uparrow}}}$ the number of consistent homogeneous cuts. Before considering cuts, each module $M$ in the considered bag has four possible states. The intersection with $X$ can be empty, contain a single vertex, or contain at least two vertices, and in the latter case we distinguish whether $X$ intersects a neighboring module or not. To count the homogeneous cuts naively, we would split all states except the empty state into two states, one for each side of a cut, thus obtaining seven total states. However, it turns out that tracking the cut side is not necessary when $X$ intersects $M$ in at least two vertices. When $M$ is isolated, we can simply count it twice, and otherwise $M$ inherits the cut side from the unique neighboring module that is also intersected by $X$. Hence, five states suffice and we define the cut solutions accordingly.
\begin{dfn}
Let $t \in V(\TT^q_{{\module^{\uparrow}}})$ be a node of the tree decomposition $\TT^q_{{\module^{\uparrow}}}$. The set of \emph{cut solutions} $\mathcal{Q}_{t, {\module^{\uparrow}}}$ consists of pairs $(X, (X_L, X_R))$ such that $X \in \mathcal{R}_{t, {\module^{\uparrow}}}$ and $(X_L, X_R)$ is ${\module^{\uparrow}}$-homogeneous and a consistent cut of $G_t[X \setminus (\mathrm{iso}_t(X) \cap \mathbb{B}_t)]$, where $\mathrm{iso}_t(X) = \bigcup \{M \in \mathtt{children}({\module^{\uparrow}}) : |X \cap M| \geq 2, \deg_{G^q_t[{\modprojection_\pmodule}(X)]}({v^q_\module}) = 0\}$.
\end{dfn}
In the case of isolated modules, it is easier to account for the cut side only when forgetting the module. Hence, the cuts considered in the definition of $\mathcal{Q}_{t, {\module^{\uparrow}}}$ do not cover such modules that belong to the current bag $\mathbb{B}_t$. Again, for the root node $\hat{r}$ of the tree decomposition $\TT^q_{{\module^{\uparrow}}}$ this extra property will be trivially satisfied as the associated bag is empty. The definition is again built in such a way that $\mathcal{Q}_{\hat{r}, {\module^{\uparrow}}} = \mathcal{Q}_{\module^{\uparrow}}$.
Our dynamic programming algorithm has to track certain additional data of a solution $X$, namely its size ${\overline{c}} = |X|$, its weight ${\overline{w}} = \mathbf{w}(X)$ for the isolation lemma, the number ${\overline{v}} = |{\modprojection_\pmodule}(X)|$ of intersected modules, and the number ${\overline{e}} = |E(G^q_t[{\modprojection_\pmodule}(X)])|$ of induced edges in the currently considered subgraph $G^q_t$ of the quotient graph ${\qgraph{\pmodule}}$. We need ${\overline{v}}$ and ${\overline{e}}$ to apply \cref{thm:forest_hom_cuts}. Accordingly, we define
$\mathcal{R}_{t, {\module^{\uparrow}}}^{{\ctarget, \wtarget, \vtarget, \etarget}} = \{X \in \mathcal{R}_{t, {\module^{\uparrow}}} : {\overline{c}} = |X \setminus \mathbb{B}_t|, {\overline{w}} = \mathbf{w}(X \setminus \mathbb{B}_t), {\overline{v}} = |{\modprojection_\pmodule}(X) \setminus \mathbb{B}^q_t|, {\overline{e}} = |E(G^q_t[{\modprojection_\pmodule}(X)])|\}$ and $\mathcal{Q}_{t, {\module^{\uparrow}}}^{{\ctarget, \wtarget, \vtarget, \etarget}} = \{(X, (X_L, X_R)) \in \mathcal{Q}_{t, {\module^{\uparrow}}} : X \in \mathcal{R}_{t, {\module^{\uparrow}}}^{{\ctarget, \wtarget, \vtarget, \etarget}}\}$. Note that we exclude the current bag in these counts, except for ${\overline{e}}$, hence we have to update these counts when we forget a module. This choice simplifies some recurrences in the algorithm, otherwise updating the counts would be a bit cumbersome due to promotion.
Finally, we can define the table that is computed at each node $t \in V(\TT^q_{{\module^{\uparrow}}})$ by our dynamic programming algorithm. Every module $M$ in the current bag has one of five states for a given solution $X$, these states are denoted by $\mathbf{states} = \{\mathbf{0}, \mathbf{1}_L, \mathbf{1}_R, \mathbf{2}_0, \mathbf{2}_1\}$. The bold number refers to the size of the intersection $X \cap M$, i.e., $\mathbf{0}$ if $X \cap M = \emptyset$, $\mathbf{1}$ if $|X \cap M| = 1$, and $\mathbf{2}$ if $|X \cap M| \geq 2$. For $\mathbf{1}$, we additionally track whether the module belongs to the left ($\mathbf{1}_L$) or right side ($\mathbf{1}_R$) of the considered homogeneous cut. For $\mathbf{2}$, we additionally track how many neighboring modules are intersected by $X$, due to the definition of ${\module^{\uparrow}}$-forest-nice this number is either zero ($\mathbf{2}_0$) or one ($\mathbf{2}_1$). As argued before, we will not have any modules $M$ with $\varphi_X(M) = {\two_{\mathcal{E}}}$ in the current bag unless $M$ induces a clique.
We remark that there is an edge case when the graph $G[M]$ is a clique of size at least 2, as in that case the maximum independent sets of $G[M]$ are simply singletons, which are captured by the states $\mathbf{1}_L$ and $\mathbf{1}_R$. As we do not track the degree of such states, we cannot safely perform promotion for them. Instead, in this exceptional case, we directly introduce induced forests inside $M$ with the state $\mathbf{2}_1$.
\begin{dfn}
Let $t \in V(\TT^q_{{\module^{\uparrow}}})$ be a node of the tree decomposition $\TT^q_{{\module^{\uparrow}}}$. A function $f \colon \mathbb{B}^q_t \rightarrow \mathbf{states}$ is called a \emph{$t$-signature}. Let $(X, (X_L, X_R)) \in \mathcal{Q}_{t, {\module^{\uparrow}}}$ and $X^q = {\modprojection_\pmodule}(X)$. We say that $(X,(X_L,X_R))$ is \emph{compatible} with a $t$-signature $f$ if the following properties hold for every $v^q_M \in \mathbb{B}^q_t$:
\begin{itemize}
\item $f(v^q_M) = \mathbf{0}$ implies that $\varphi_X(M) = \mathbf{0}$,
\item $f(v^q_M) = \mathbf{1}_L$ implies that $\varphi_X(M) = \mathbf{1}$ and $X \cap M \subseteq X_L$,
\item $f(v^q_M) = \mathbf{1}_R$ implies that $\varphi_X(M) = \mathbf{1}$ and $X \cap M \subseteq X_R$,
\item $f(v^q_M) = \mathbf{2}_0$ implies that $\varphi_X(M) = {\two_{\mathcal{I}}}$ and $\deg_{G^q_t[X^q]}(v^q_M) = 0$,
\item $f(v^q_M) = \mathbf{2}_1$ and $G[M]$ is not a clique implies that $\varphi_X(M) = {\two_{\mathcal{I}}}$ and $\deg_{G^q_t[X^q]}(v^q_M) = 1$,
\item $f(v^q_M) = \mathbf{2}_1$ and $G[M]$ is a clique implies that $\varphi_X(M) = {\two_{\mathcal{E}}}$.
\end{itemize}
For a $t$-signature $f$, we let $\mathcal{A}_{t,{\module^{\uparrow}}}(f)$ denote the set of all $(X, (X_L, X_R)) \in \mathcal{Q}_{t,{\module^{\uparrow}}}$ that are compatible with $f$. Similarly, we define $\mathcal{A}_{t,{\module^{\uparrow}}}^{{\ctarget, \wtarget, \vtarget, \etarget}}(f)$ for given ${\overline{c}} \in [0, |{\module^{\uparrow}}|]$, ${\overline{w}} \in [0, \mathbf{w}({\module^{\uparrow}})]$, ${\overline{v}} \in [0, |\mathtt{children}({\module^{\uparrow}})|]$, and ${\overline{e}} \in [0, {\overline{v}} - 1]$.
\end{dfn}
Fix a parent module ${\module^{\uparrow}} \in {\mathcal{M}_{\mathrm{tree}}^*}(G)$ and for every node $t \in V(\TT^q_{{\module^{\uparrow}}})$, $t$-signature $f$, and appropriate ${\ctarget, \wtarget, \vtarget, \etarget}$, define the value $A_t^{{\ctarget, \wtarget, \vtarget, \etarget}}(f) = |\mathcal{A}_{t, {\module^{\uparrow}}}^{{\ctarget, \wtarget, \vtarget, \etarget}}(f)|$. Whenever at least one of ${\ctarget, \wtarget, \vtarget, \etarget}$ is negative, we assume that $A_t^{{\ctarget, \wtarget, \vtarget, \etarget}}(f) = 0$. We will now describe the dynamic programming recurrences to compute $A_t^{{\ctarget, \wtarget, \vtarget, \etarget}}(f)$ for all choices of $t$, $f$, ${\ctarget, \wtarget, \vtarget, \etarget}$ based on the type of the node $t$ in the very nice tree decomposition $\TT^q_{{\module^{\uparrow}}}$.
\subsubsection*{Leaf bag.} We have that $V^q_t = \mathbb{B}^q_t = \emptyset$ and $t$ has no child. Therefore, the only candidate is $(\emptyset, (\emptyset, \emptyset))$ and we simply need to check if the trackers ${\ctarget, \wtarget, \vtarget, \etarget}$ agree with that:
\begin{equation*}
A_t^{{\ctarget, \wtarget, \vtarget, \etarget}}(f) = [{\overline{c}} = {\overline{w}} = {\overline{e}} = {\overline{v}} = 0]
\end{equation*}
\subsubsection*{Introduce vertex bag.} We have that $\mathbb{B}^q_t = \mathbb{B}^q_s \cup \{{v^q_\module}\}$, where ${v^q_\module} \notin \mathbb{B}^q_s$ and $s$ is the only child of $t$. For the sake of the write-up, we assume that $f$ is an $s$-signature here. The recurrence is straightforward with the exception of handling the clique case:
\begin{equation*}
A_t^{{\ctarget, \wtarget, \vtarget, \etarget}}(f[{v^q_\module} \mapsto \mathbf{s}]) =
\begin{cases}
A_s^{{\ctarget, \wtarget, \vtarget, \etarget}}(f), & \text{if } \mathbf{s} \in \{\mathbf{0}, \mathbf{1}_L, \mathbf{1}_R\}, \\
[\text{$G[M]$ is not a clique}] A_s^{{\ctarget, \wtarget, \vtarget, \etarget}}(f), & \text{if } \mathbf{s} = \mathbf{2}_0, \\
[|M| > 1 \text{ and $G[M]$ is a clique}] A_s^{{\ctarget, \wtarget, \vtarget, \etarget}}(f), & \text{if } \mathbf{s} = \mathbf{2}_1.
\end{cases}
\end{equation*}
If $G[M]$ is a clique, then $\varphi_X(M) = {\two_{\mathcal{I}}}$ can never be satisfied. So, we will directly generate solutions with $\varphi_X(M) = {\two_{\mathcal{E}}}$ in this case. If $G[M]$ is not a clique, such solutions will only be generated at forget nodes by \emph{promotion}. Recall that no edges incident to $M$ have been introduced yet, which in particular rules out the case that $f({v^q_\module}) = \mathbf{2}_1$ when $G[M]$ is not a clique, and the trackers are only updated when we forget a module.
\subsubsection*{Introduce edge bag.} We have that $\{v^q_{M_1}, v^q_{M_2}\} \subseteq \mathbb{B}^q_t = \mathbb{B}^q_s$, where $\{v^q_{M_1}, v^q_{M_2}\}$ denotes the introduced edge and $s$ is the only child of $t$.
Define helper functions $\newedge, \cons \colon \mathbf{states} \times \mathbf{states} \rightarrow \{0,1\}$ by $\newedge(\mathbf{s}_1, \mathbf{s}_2) = [\mathbf{s}_1 \neq \mathbf{0} \wedge \mathbf{s}_2 \neq \mathbf{0}]$ and $\cons$ is given by the following table:
\begin{equation*}
\begin{array}{l|ccccc}
\cons & \mathbf{0} & \mathbf{1}_L & \mathbf{1}_R & \mathbf{2}_0 & \mathbf{2}_1 \\
\hline
\mathbf{0} & 1 & 1 & 1 & 1 & 1 \\
\mathbf{1}_L & 1 & 1 & 0 & 0 & 1 \\
\mathbf{1}_R & 1 & 0 & 1 & 0 & 1 \\
\mathbf{2}_0 & 1 & 0 & 0 & 0 & 0 \\
\mathbf{2}_1 & 1 & 1 & 1 & 0 & 0
\end{array}
\end{equation*}
The $\cons$-function is used to filter partial solutions that have incompatible states at the newly introduced edge. There are three reasons why states might be incompatible: they belong to different sides of the cut, they directly induce a cycle, or they do not correctly account for the degree in the graph induced by the partial solution.
Furthermore, given a $t$-signature $f$, we define the $s$-signature $\tilde{f}$ as follows. We set $\tilde{f} := f$ if $\cons(f(v^q_{M_1}), f(v^q_{M_2})) = 0$ or $\newedge(f(v^q_{M_1}), f(v^q_{M_2})) = 0$ or $\mathbf{2}_1 \notin \{f(v^q_{M_1}), f(v^q_{M_2})\}$. Otherwise, the introduced edge changes the state from $\mathbf{2}_0$ to $\mathbf{2}_1$ at one of its endpoints, i.e., without loss of generality $f(v^q_{M_1}) = \mathbf{2}_1$ and $f(v^q_{M_2}) \in \{\mathbf{1}_L, \mathbf{1}_R\}$ (otherwise, swap the roles of $M_1$ and $M_2$), and we set $\tilde{f} := f[v^q_{M_1} \mapsto \mathbf{2}_0]$. Finally, the recurrence is given by
\begin{equation*}
A_t^{{\ctarget, \wtarget, \vtarget, \etarget}}(f) = \cons(f(v^q_{M_1}), f(v^q_{M_2})) A_s^{{\overline{c}}, {\overline{w}}, {\overline{v}}, {\overline{e}} - \newedge(f(v^q_{M_1}), f(v^q_{M_2}))}(\tilde{f}).
\end{equation*}
Observe that we update the edge count, if necessary, in this recurrence. We remark that if $f(v^q_{M_1}) = \mathbf{2}_1$ and $f(v^q_{M_2}) \in \{\mathbf{1}_L, \mathbf{1}_R\}$ and $G[M_1]$ is a clique, we should filter as well, because this means $\varphi_X(M_1) = {\two_{\mathcal{E}}}$ and hence $v^q_{M_1}$ should not receive incident edges in $G^q_t[{\modprojection_\pmodule}(X)]$. One could explicitly adapt the recurrence for this case or instead, as we do, observe that since $\varphi_X(M_1) = {\two_{\mathcal{I}}}$ is impossible, all entries $A_s^{{\ctarget, \wtarget, \vtarget, \etarget}}(\tilde{f})$ will be zero due to $\tilde{f}(v^q_{M_1}) = \mathbf{2}_0$ and hence we do not generate any partial solutions for this case anyway.
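For concreteness, the filters $\cons$ and $\newedge$ can be transcribed directly into code. The following Python sketch encodes the table from above, with states represented by hypothetical string names; it is illustrative only.

\begin{verbatim}
# The cons-table and newedge-filter of the introduce-edge recurrence,
# transcribed from the text; states are encoded as strings.
STATES = ["0", "1L", "1R", "2_0", "2_1"]

CONS = {  # CONS[s1][s2] = 1 iff states s1, s2 are compatible on an edge
    "0":   {"0": 1, "1L": 1, "1R": 1, "2_0": 1, "2_1": 1},
    "1L":  {"0": 1, "1L": 1, "1R": 0, "2_0": 0, "2_1": 1},
    "1R":  {"0": 1, "1L": 0, "1R": 1, "2_0": 0, "2_1": 1},
    "2_0": {"0": 1, "1L": 0, "1R": 0, "2_0": 0, "2_1": 0},
    "2_1": {"0": 1, "1L": 1, "1R": 1, "2_0": 0, "2_1": 0},
}

def newedge(s1, s2):
    # The introduced edge is counted iff both endpoints are intersected.
    return int(s1 != "0" and s2 != "0")

# Example: a module in state 2_0 must stay isolated, so an edge to an
# intersected neighboring module is filtered out.
assert CONS["2_0"]["1L"] == 0 and newedge("2_0", "1L") == 1
\end{verbatim}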
\subsubsection*{Forget vertex bag.} We have that $\mathbb{B}^q_t = \mathbb{B}^q_s \setminus \{{v^q_\module}\}$, where ${v^q_\module} \in \mathbb{B}^q_s$ and $s$ is the only child of $t$. Recall that $\cis{M}$, $\cif{M}$, $\wv{M}$, $\wis{M}$, $\wif{M}$ denote the size or weight of a singleton set, maximum independent set, or the candidate forest inside $M$, respectively. The recurrence is given by:
\begin{equation*}
\begin{array}{lclcl}
A_t^{{\ctarget, \wtarget, \vtarget, \etarget}}(f) & = & & & A_s^{{\ctarget, \wtarget, \vtarget, \etarget}}(f[v^q_M \mapsto \mathbf{0}]) \\
& + & & & A_s^{{\overline{c}} - 1, {\overline{w}} - \wv{M}, {\overline{v}} - 1, {\overline{e}}}(f[v^q_M \mapsto \mathbf{1}_L]) \\
& + & & & A_s^{{\overline{c}} - 1, {\overline{w}} - \wv{M}, {\overline{v}} - 1, {\overline{e}}}(f[v^q_M \mapsto \mathbf{1}_R]) \\
& + & 2 & \cdot & A_s^{{\overline{c}} - \cif{M}, {\overline{w}} - \wif{M}, {\overline{v}} - 1, {\overline{e}}}(f[v^q_M \mapsto \mathbf{2}_0]) \\
& + & [\text{$G[M]$ is not a clique}] & \cdot & A_s^{{\overline{c}} - \cis{M}, {\overline{w}} - \wis{M}, {\overline{v}} - 1, {\overline{e}}}(f[v^q_M \mapsto \mathbf{2}_1]) \\
& + & 2[\text{$|M| > 1$ and $G[M]$ is a clique}] & \cdot & A_s^{{\overline{c}} - \cif{M}, {\overline{w}} - \wif{M}, {\overline{v}} - 1, {\overline{e}}}(f[v^q_M \mapsto \mathbf{2}_1])
\end{array}
\end{equation*}
As $M$ leaves the current bag, we need to update the trackers ${\overline{c}}$, ${\overline{w}}$, and ${\overline{v}}$. The first three cases are straightforward, but the latter three deserve an explanation. If $M$ had state $\mathbf{2}_0$ before, then $M \subseteq \mathrm{iso}_s(X)$ and $G[M]$ cannot be a clique, so we want to promote the independent set in $M$ to an induced forest and also track the cut side now. Since $M$ remains isolated, both cut sides are possible, explaining the factor 2. If $G[M]$ is not a clique and $M$ had state $\mathbf{2}_1$ before, then we keep the independent set in $M$ and its cut side is already tracked. If instead $G[M]$ is a clique and $M$ had state $\mathbf{2}_1$ before, then $M \subseteq \mathrm{iso}_s(X)$ and we take an edge (the maximum induced forest of a clique) inside $M$ and need to track its cut side now, which again explains the factor 2.
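The forget-vertex recurrence can be read off line by line, as in the following Python sketch; the accessor \texttt{A\_s} (returning 0 for out-of-range trackers) and the per-module record \texttt{m} are hypothetical stand-ins for the child table and the precomputed data.

\begin{verbatim}
# Sketch of the forget-vertex recurrence; f maps bag vertices to state
# strings, vq is the forgotten vertex, m carries the assumed fields
# c_is, w_is, c_if, w_if, w_v, is_clique, size of the module M.
def forget(A_s, f, vq, m, c, w, v, e):
    def with_state(s):
        g = dict(f); g[vq] = s; return g
    total  = A_s(c, w, v, e, with_state("0"))
    total += A_s(c - 1, w - m.w_v, v - 1, e, with_state("1L"))
    total += A_s(c - 1, w - m.w_v, v - 1, e, with_state("1R"))
    # Promotion: an isolated module is upgraded to its candidate forest
    # and may lie on either cut side, hence the factor 2.
    total += 2 * A_s(c - m.c_if, w - m.w_if, v - 1, e, with_state("2_0"))
    if not m.is_clique:    # keep the independent set, cut side is known
        total += A_s(c - m.c_is, w - m.w_is, v - 1, e, with_state("2_1"))
    if m.size > 1 and m.is_clique:  # take an edge inside the clique
        total += 2 * A_s(c - m.c_if, w - m.w_if, v - 1, e, with_state("2_1"))
    return total
\end{verbatim}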
\subsubsection*{Join bag.} We have that $\mathbb{B}^q_t = \mathbb{B}^q_{s_1} = \mathbb{B}^q_{s_2} = V^q_{s_1} \cap V^q_{s_2}$, where $s_1$ and $s_2$ are the two children of $t$. To state the recurrence for the join bag, we first introduce the \emph{induced forest join} $\oplus_{\mathbf{if}} \colon \mathbf{states} \times \mathbf{states} \rightarrow \mathbf{states} \cup \{\bot\}$, where $\bot$ stands for an undefined value, which is defined by the following table:
\begin{equation*}
\begin{array}{l|ccccc}
\oplus_{\mathbf{if}} & \mathbf{0} & \mathbf{1}_L & \mathbf{1}_R & \mathbf{2}_0 & \mathbf{2}_1 \\
\hline
\mathbf{0} & \mathbf{0} & \bot & \bot & \bot & \bot \\
\mathbf{1}_L & \bot & \mathbf{1}_L & \bot & \bot & \bot \\
\mathbf{1}_R & \bot & \bot & \mathbf{1}_R & \bot & \bot \\
\mathbf{2}_0 & \bot & \bot & \bot & \mathbf{2}_0 & \mathbf{2}_1 \\
\mathbf{2}_1 & \bot & \bot & \bot & \mathbf{2}_1 & \bot
\end{array}
\end{equation*}
When combining two partial solutions, one coming from child $s_1$ and the other coming from $s_2$, we want to ensure that they have essentially the same states on $\mathbb{B}^q_t = V^q_{s_1} \cap V^q_{s_2}$. However, for the state $\mathbf{2}_1$ (if the considered module does not induce a clique), we need to decide which child contributes the incident edge in the quotient graph and ensure that the other child does not contribute an additional edge. This is implemented by the operation $\oplus_{\mathbf{if}}$. Given some set $S$ and functions $f, g \colon S \rightarrow \mathbf{states}$, we abuse notation and let $f \oplus_{\mathbf{if}} g \colon S \rightarrow \mathbf{states} \cup \{\bot\}$ denote the function obtained from $f$ and $g$ by pointwise application of $\oplus_{\mathbf{if}}$. We also define $\oplus_{\mathbf{2}} = \oplus_{\mathbf{if}} \big|_{\{\mathbf{2}_0, \mathbf{2}_1\} \times \{\mathbf{2}_0, \mathbf{2}_1\}}$ and similarly extend it to functions.
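A compact encoding of $\oplus_{\mathbf{if}}$, using \texttt{None} for the undefined value $\bot$, reads as follows (an illustrative Python sketch with the same string state names as before).

\begin{verbatim}
# The induced forest join, transcribed from the table above.
def join_if(s1, s2):
    if s1 == s2 and s1 in ("0", "1L", "1R", "2_0"):
        return s1
    if {s1, s2} == {"2_0", "2_1"}:
        return "2_1"   # exactly one child contributes the incident edge
    return None        # undefined: the partial solutions do not combine
\end{verbatim}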
For any module $M$ with ${v^q_\module} \in \mathbb{B}^q_t$ that induces a clique, the state $\mathbf{2}_1$ behaves differently and should agree on both children. Hence, we define $\widetilde{\bag}^q_t = \{{v^q_\module} \in \mathbb{B}^q_t : G[M] \text{ is a clique}\}$. We can now state a first version of the recurrence, which will be transformed further to enable efficient computation. The preliminary recurrence is given by
\begin{equation*}
A_t^{{\ctarget, \wtarget, \vtarget, \etarget}}(f) = \sum_{\substack{{\overline{c}}_1 + {\overline{c}}_2 = {\overline{c}} \\ {\overline{w}}_1 + {\overline{w}}_2 = {\overline{w}}}} \sum_{\substack{{\overline{v}}_1 + {\overline{v}}_2 = {\overline{v}} \\ {\overline{e}}_1 + {\overline{e}}_2 = {\overline{e}}}} \smashoperator[r]{\sum_{\substack{f_1, f_2 \colon \mathbb{B}^q_t \setminus \widetilde{\bag}^q_t \rightarrow \mathbf{states} \colon \\ f_1 \oplus_{\mathbf{if}} f_2 = f}}} A_{s_1}^{{\overline{c}}_1, {\overline{w}}_1, {\overline{v}}_1, {\overline{e}}_1}(f_1 \cup f\restrict{\widetilde{\bag}^q_t}) A_{s_2}^{{\overline{c}}_2, {\overline{w}}_2, {\overline{v}}_2, {\overline{e}}_2}(f_2 \cup f\restrict{\widetilde{\bag}^q_t}),
\end{equation*}
where we ensure that all states agree for modules inducing cliques and otherwise apply the induced forest join $\oplus_{\mathbf{if}}$.
To compute this recurrence quickly, we separately handle the part of $\oplus_{\mathbf{if}}$ that essentially checks for equality and reduce the remaining part to already known results. Given a $t$-signature $f \colon \mathbb{B}^q_t \rightarrow \mathbf{states}$, we define $D^=_t(f) := \widetilde{\bag}^q_t \cup f^{-1}(\{\mathbf{0}, \mathbf{1}_L, \mathbf{1}_R\})$ and $D^{\neq}_t(f) := \mathbb{B}^q_t \setminus D^=_t(f)$. We decompose $f$ into $f^= := f \restrict{D^=_t(f)}$ and $f^{\neq} := f \restrict{D^{\neq}_t(f)}$.
We fix the values ${\ctarget, \wtarget, \vtarget, \etarget}$ and a function $g \colon S \rightarrow \mathbf{states}$ where $\widetilde{\bag}^q_t \subseteq S \subseteq \mathbb{B}^q_t$ is some subset of the current bag containing the clique modules. We claim that the entries $A_t^{{\ctarget, \wtarget, \vtarget, \etarget}}(f)$ for all $t$-signatures $f$ with $f^= = g$ (including $D^=_t(f) = S$) can be computed in time $\Oh^*(2^{|\mathbb{B}^q_t \setminus S|})$. We branch on $\vec{\mathbf{x}}_1 = ({\overline{c}}_1, {\overline{w}}_1, {\overline{v}}_1, {\overline{e}}_1)$, which determines the values $\vec{\mathbf{x}}_2 = ({\overline{c}}_2, {\overline{w}}_2, {\overline{v}}_2, {\overline{e}}_2)$, and define the auxiliary table $T_g^{\vec{\mathbf{x}}_1, \vec{\mathbf{x}}_2}$ indexed by $h\colon \mathbb{B}^q_t \setminus S \rightarrow \{\mathbf{2}_0, \mathbf{2}_1\}$ as follows
\begin{equation*}
T_g^{\vec{\mathbf{x}}_1, \vec{\mathbf{x}}_2}(h) = {\sum_{\substack{h_1, h_2 \colon \mathbb{B}^q_t \setminus S \rightarrow \{\mathbf{2}_0, \mathbf{2}_1\} \colon \\ h_1 \oplus_{\mathbf{2}} h_2 = h}}} A_{s_1}^{{\overline{c}}_1, {\overline{w}}_1, {\overline{v}}_1, {\overline{e}}_1}(g \cup h_1) A_{s_2}^{{\overline{c}}_2, {\overline{w}}_2, {\overline{v}}_2, {\overline{e}}_2}(g \cup h_2).
\end{equation*}
Since $\oplus_{\mathbf{2}}$ is essentially the same as addition over $\{0,1\}$ with $1 + 1$ being undefined, we can compute all entries of $T_g^{\vec{\mathbf{x}}_1, \vec{\mathbf{x}}_2}$ in time $\Oh^*(2^{|\mathbb{B}^q_t \setminus S|})$ by the work of, e.g., van Rooij~\cite[Theorem 2]{Rooij21} using fast subset convolution and the fast Fourier transform. Then, for every $t$-signature $f$ with $f^= = g$, we obtain $A_t^{{\ctarget, \wtarget, \vtarget, \etarget}}(f)$ by summing $T_g^{\vec{\mathbf{x}}_1, \vec{\mathbf{x}}_2}(f^{\neq})$ over all $\vec{\mathbf{x}}_1 + \vec{\mathbf{x}}_2 = ({\overline{c}}, {\overline{w}}, {\overline{v}}, {\overline{e}})$. Since there are only polynomially many choices for $\vec{\mathbf{x}}_1$ and $\vec{\mathbf{x}}_2$, this proves the claim.
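Identifying a function $h \colon \mathbb{B}^q_t \setminus S \rightarrow \{\mathbf{2}_0, \mathbf{2}_1\}$ with the set of positions mapped to $\mathbf{2}_1$, the $\oplus_{\mathbf{2}}$-join is exactly a subset convolution. The following Python sketch shows the naive $\Oh^*(3^{n})$ submask enumeration for illustration; the actual algorithm would instead use the $\Oh^*(2^{n})$ ranked transform of van Rooij~\cite{Rooij21}.

\begin{verbatim}
# Naive subset convolution: tables a, b are indexed by bitmasks over n
# positions; t[s] sums a[s1]*b[s2] over disjoint s1, s2 with union s.
def subset_convolution_naive(a, b, n):
    t = [0] * (1 << n)
    for s in range(1 << n):
        sub = s
        while True:                 # enumerate all submasks of s
            t[s] += a[sub] * b[s ^ sub]
            if sub == 0:
                break
            sub = (sub - 1) & s
    return t

# Tiny check for n = 2 with all-ones inputs: t[s] = 2^{|s|}.
print(subset_convolution_naive([1, 1, 1, 1], [1, 1, 1, 1], 2))
# -> [1, 2, 2, 4]
\end{verbatim}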
In conclusion, to compute $A_t^{{\ctarget, \wtarget, \vtarget, \etarget}}(f)$ for all ${\ctarget, \wtarget, \vtarget, \etarget}$, $f$, we need time
\begin{align*}
\sum_{\widetilde{\bag}^q_t \subseteq S \subseteq \mathbb{B}^q_t} \sum_{g \colon S \rightarrow \{\mathbf{0}, \mathbf{1}_L, \mathbf{1}_R\}} \Oh^*(2^{|\mathbb{B}^q_t \setminus S|}) & \leq \sum_{S \subseteq \mathbb{B}^q_t} \Oh^*(3^{|S|}2^{|\mathbb{B}^q_t \setminus S|}) = \Oh^*\left(\sum_{i = 0}^{|\mathbb{B}^q_t|} \binom{|\mathbb{B}^q_t|}{i} 3^i 2^{|\mathbb{B}^q_t| - i}\right) \\
& = \Oh^*((3 + 2)^{|\mathbb{B}^q_t|}) = \Oh^*(5^k).
\end{align*}
\begin{lem}\label{thm:fvs_mtw_prime_algo}
Let ${\module^{\uparrow}} \in {\mathcal{M}_{\mathrm{tree}}^*}(G)$ be a prime node and $\mathbf{w}\colon V \rightarrow [2|V|]$ a weight function. Given a tree decomposition of ${\qgraph{\pmodule}}$ of width $k$ and the sets $\fixv{M}$, $\fixis{M}$ and values $\cif{M}$, $\wif{M}$ for all $M \in \mathtt{children}({\module^{\uparrow}})$, the values $|\mathcal{Q}^{\ctarget, \wtarget, \vtarget, \etarget}_{\module^{\uparrow}}|$ can be computed in time $\Oh^*(5^k)$ for all ${\ctarget, \wtarget, \vtarget, \etarget}$.
\end{lem}
\begin{proof}
From the sets $\fixv{M}$ and $\fixis{M}$, we directly obtain the values $\wv{M}$, $\cis{M}$, $\wis{M}$ for all $M \in \mathtt{children}({\module^{\uparrow}})$. We then transform the given tree decomposition into a very nice tree decomposition $(\TT^q_{{\module^{\uparrow}}}, (\mathbb{B}^q_t)_{t \in V(\TT^q_{{\module^{\uparrow}}})})$ using \cref{thm:very_nice_tree_decomposition} and run the dynamic programming algorithm described before to compute the values $A_{\hat{r}}^{{\ctarget, \wtarget, \vtarget, \etarget}}(\emptyset)$, where $\hat{r}$ is the root of $\TT^q_{{\module^{\uparrow}}}$, for all appropriate values of ${\ctarget, \wtarget, \vtarget, \etarget}$. Assuming the correctness of the recurrences, we have that $A_{\hat{r}}^{{\ctarget, \wtarget, \vtarget, \etarget}}(\emptyset) = |\mathcal{A}_{\hat{r}, {\module^{\uparrow}}}^{{\ctarget, \wtarget, \vtarget, \etarget}}(\emptyset)| = |\mathcal{Q}_{\module^{\uparrow}}^{{\ctarget, \wtarget, \vtarget, \etarget}}|$ by definition and the degeneration of the conditions at $\hat{r}$.
For the running time, note that for every $t \in V(\TT^q_{{\module^{\uparrow}}})$, there are at most $\Oh^*(5^k)$ table entries $A_{t}^{{\ctarget, \wtarget, \vtarget, \etarget}}(f)$ and the recurrences can be computed in polynomial time except for the case of join bags. In the case of a join bag, we have shown how to compute all table entries simultaneously in time $\Oh^*(5^k)$. By \cref{thm:very_nice_tree_decomposition} the tree decomposition $\TT^q_{{\module^{\uparrow}}}$ has a polynomial number of nodes, hence the running time follows and it remains to sketch the correctness of the dynamic programming recurrences.
For leaf bags, the correctness follows by observing that $\mathcal{A}_t(\emptyset) = \mathcal{Q}_{t, {\module^{\uparrow}}} = \{(\emptyset, (\emptyset, \emptyset))\}$. So, we start by considering introduce vertex bags. We set up a bijection between $\mathcal{A}_{t}(f[{v^q_\module} \mapsto \mathbf{s}])$ and $\mathcal{A}_{s}(f)$ depending on $\mathbf{s} \in \mathbf{states}$. We map $(X, (X_L, X_R)) \in \mathcal{A}_{s}(f)$ to
\begin{itemize}
\item $(X, (X_L, X_R))$ if $\mathbf{s} = \mathbf{0}$,
\item $(X \cup \fixv{M}, (X_L \cup \fixv{M}, X_R))$ if $\mathbf{s} = \mathbf{1}_L$ ($\mathbf{1}_R$ is analogous),
\item $(X \cup \fixis{M}, (X_L, X_R))$ if $\mathbf{s} = \mathbf{2}_0$ and $G[M]$ is not a clique,
\item $(X \cup \fixif{M}, (X_L, X_R))$ if $\mathbf{s} = \mathbf{2}_1$ and $G[M]$ is a clique of size at least 2.
\end{itemize}
In the last two cases, we have $M \subseteq \mathrm{iso}_t(X)$, so we do not need to track the cut side. Using ${\module^{\uparrow}}$-substructure it is possible to verify that these mappings constitute bijections. The case that $\mathbf{s} = \mathbf{2}_1$ and $G[M]$ is not a clique is impossible, since no edges incident to ${v^q_\module}$ are introduced yet. The case that $\mathbf{s} = \mathbf{2}_0$ and $G[M]$ is a clique is impossible, since any subset of $M$ of size at least two has to induce an edge.
For introduce edge bags, we highlight the case that $\tilde{f}(v^q_{M_1}) = \mathbf{2}_0$ and $f(v^q_{M_1}) = \mathbf{2}_1$, where $M_1$ needs to inherit the cut side from $M_2$. Formally, a partial solution $(X, (X_L, X_R)) \in A_s^{{\overline{c}}, {\overline{w}}, {\overline{v}}, {\overline{e}} - 1}(\tilde{f})$ with $f(v^q_{M_2}) = \mathbf{1}_L$ is bijectively mapped to $(X, (X_L \cup (X \cap M), X_R)) \in A_t^{{\ctarget, \wtarget, \vtarget, \etarget}}(f)$ and analogously when $f(v^q_{M_2}) = \mathbf{1}_R$. We have already argued the correct handling of the clique case when presenting the recurrence. The remaining cases are straightforward.
We proceed with forget vertex bags. First, we observe that all considered cases are disjoint, hence no overcounting occurs. The handling of the cases $\mathbf{0}$, $\mathbf{1}_L$, and $\mathbf{1}_R$ is standard and we omit further explanation. For isolated modules, we need to track the cut side when we forget them; since both sides are possible, we multiply by the factor 2. Furthermore, we need to perform the promotion when we forget a module with state $\mathbf{2}_0$. The most involved case is $\fixif{M} \neq \fixis{M}$ and $G[M]$ is not a clique; then we perform promotion on the isolated module $M$, swapping $\fixis{M}$ with $\fixif{M}$, and now have to track the cut side of $M$, again yielding the factor 2. Formally, if $f$ is a $t$-signature and $(X, (X_L, X_R)) \in \mathcal{A}_s(f[{v^q_\module} \mapsto \mathbf{2}_0])$, then $G[M]$ is not a clique and we obtain the solutions $((X \setminus M) \cup \fixif{M}, (X_L \cup \fixif{M}, X_R)) \in \mathcal{A}_t(f)$ and $((X \setminus M) \cup \fixif{M}, (X_L, X_R \cup \fixif{M})) \in \mathcal{A}_t(f)$.
For the join bags, we have that $V^q_{s_1} \cap V^q_{s_2} = \mathbb{B}^q_t$, so the behavior on the intersection is completely described by the signature $f$. Every $(X,(X_L,X_R)) \in \mathcal{A}_t(f)$ splits into a solution $(X^1, (X^1_L, X^1_R)) \in \mathcal{A}_{s_1}(f_1)$ at $s_1$ and a solution $(X^2, (X^2_L, X^2_R)) \in \mathcal{A}_{s_2}(f_2)$ at $s_2$, where for $i \in [2]$ we set $X^i = X \cap V_{s_i}$, $X^i_L = (X_L \cap V_{s_i}) \setminus (\mathrm{iso}_{s_i}(X^i) \cap \mathbb{B}_{s_i})$, $X^i_R = (X_R \cap V_{s_i}) \setminus (\mathrm{iso}_{s_i}(X^i) \cap \mathbb{B}_{s_i})$ and
\begin{equation*}
f_i({v^q_\module}) = \begin{cases}
f({v^q_\module}), & \text{if } f({v^q_\module}) \neq \mathbf{2}_1, \\
\mathbf{2}_1, & \text{if $f({v^q_\module}) = \mathbf{2}_1$ and $G[M]$ is clique of size $\geq 2$}, \\
\mathbf{2}_d, & \text{if $f({v^q_\module}) = \mathbf{2}_1$ and $G[M]$ is not a clique and $\deg_{G^q_{s_i}[{\modprojection_\pmodule}(X^i)]}({v^q_\module}) = d$}.
\end{cases}
\end{equation*}
For a non-clique module with state $\mathbf{2}_1$, the edge leading to degree 1 is present at one of the child nodes $s_1$ or $s_2$, but not at the other one. At the child where the edge is not present, the module has state $\mathbf{2}_0$ and is isolated; therefore we do not track the cut side there and have to account for this in the definitions of $X^i_L$ and $X^i_R$. This map can be seen to be a bijection between $\mathcal{A}_t(f)$ and $\bigcup_{f_1, f_2} \mathcal{A}_{s_1}(f_1) \times \mathcal{A}_{s_2}(f_2)$, where the union is over all $f_1, f_2 \colon \mathbb{B}^q_t \rightarrow \mathbf{states}$ such that $f\restrict{\widetilde{\bag}^q_t} = f_1\restrict{\widetilde{\bag}^q_t} = f_2\restrict{\widetilde{\bag}^q_t}$ and $f\restrict{\mathbb{B}^q_t \setminus \widetilde{\bag}^q_t} = f_1\restrict{\mathbb{B}^q_t \setminus \widetilde{\bag}^q_t} \oplus_{\mathbf{if}} f_2\restrict{\mathbb{B}^q_t \setminus \widetilde{\bag}^q_t}$, which is implemented by the join-recurrence once we account for the trackers ${\overline{c}}$, ${\overline{w}}$, ${\overline{v}}$, and ${\overline{e}}$. As every edge is introduced exactly once and the other trackers are only computed for forgotten vertices, no overcounting happens here and we only have to consider how the trackers are distributed between $s_1$ and $s_2$. We also remark that the correctness here requires that the promotion property is only applied to forgotten modules, which have received all incident edges already. \qed
\end{proof}
Finally, we have assembled all ingredients to prove the desired theorem.
\begin{thm}
There exists a Monte-Carlo algorithm that, given a tree decomposition of width $k$ for every prime quotient graph in the modular decomposition of $G$, solves \textsc{Feedback Vertex Set}\xspace in time $\Oh^*(5^k)$. The algorithm cannot give false positives and may give false negatives with probability at most $1/2$.
\end{thm}
\begin{proof}
Solving the complementary problem \textsc{Induced Forest}\xspace, we begin by computing the sets $\fixv{{\module^{\uparrow}}}$ and $\fixis{{\module^{\uparrow}}}$ for all ${\module^{\uparrow}} \in {\mathcal{M}_{\mathrm{tree}}}(G)$ in time $\Oh^*(2^k)$ using \cref{thm:is_mtw_algo}. We sample a weight function $\mathbf{w}\colon V \rightarrow [2n]$ uniformly at random, which max-isolates some $X_*$ in $\mathcal{F}_{opt}(G)$ with probability at least $1/2$ by \cref{thm:max_isolation}. We generate the sets $\fixif{{\module^{\uparrow}}}$ for the base cases ${\module^{\uparrow}} = \{v\}$, $v \in V$.
By bottom-up dynamic programming along the modular decomposition, we inductively compute the values $\cif{{\module^{\uparrow}}}$ and $\wif{{\module^{\uparrow}}}$, ${\module^{\uparrow}} \in {\mathcal{M}_{\mathrm{tree}}^*}(G)$, given the values $\cif{M}$ and $\wif{M}$ for all $M \in \mathtt{children}({\module^{\uparrow}})$. To do so, we distinguish whether ${\module^{\uparrow}}$ is a parallel, series, or prime node. In the first two cases, we can compute these values in polynomial time by \cref{sec:fvs_parallel_series}.
In the prime case, we compute the values $|\mathcal{Q}^{\ctarget, \wtarget, \vtarget, \etarget}_{\module^{\uparrow}}|$ in time $\Oh^*(5^k)$ using \cref{thm:fvs_mtw_prime_algo}. From these values, we can obtain the values $\cif{{\module^{\uparrow}}}$ and $\wif{{\module^{\uparrow}}}$ by the description in \cref{sec:fvs_prime_candidate} in polynomial time. As the modular decomposition has a polynomial number of nodes, the running time follows.
If $\cif{V} \geq \overline{b}$, then the algorithm returns true and otherwise the algorithm returns false. It remains to prove the correctness of this step, assuming that the weight function $\mathbf{w}$ is isolating. By \cref{thm:fvs_mtw_main_correctness}, we have that $\fixif{V}$ is a maximum induced forest of $G[V] = G$ if $\mathbf{w}$ is isolating, and since $\cif{V} = |\fixif{V}|$ this shows that the algorithm is correct in this case. Since we always ensure that $\fixif{V}$ is an induced forest, but not necessarily maximum, even if $\mathbf{w}$ is not isolating, the algorithm cannot return false positives. \qed
\end{proof}
\subsection{Feedback Vertex Set}
\label{sec:modtw_fvs_lb}
This subsection is devoted to proving that \textsc{Feedback Vertex Set}\xspace parameterized by twinclass-pathwidth cannot be solved in time $\Oh^*((5-\varepsilon)^{\tcpw(G)})$ for any $\varepsilon > 0$ unless the SETH\xspace fails. The main challenge is the design of the path gadget. The decoding gadgets are adapted from the lower bound constructions for \textsc{Odd Cycle Transversal}\xspace by Hegerfeld and Kratsch~\cite{HegerfeldK22}, which rely on \emph{arrows} that are adapted from Lokshtanov et al.~\cite{LokshtanovMS18}. We remark that our construction will rely on false twinclasses and not true twinclasses, because already in the algorithm for \textsc{Feedback Vertex Set}\xspace it can be seen that true twinclasses only admit four distinct states instead of the desired five.
\renewcommand{{4\ngrps\grpsize + 1}}{{4\ngrpsp + 1}}
\renewcommand{{\nclss(\nregions)}}{{m({4\ngrps\grpsize + 1})}}
\renewcommand{\mathbf{h}}{\mathbf{h}}
\newcommand{A}{A}
\subsubsection*{Triangle edges.}
Given two vertices $u$ and $v$, by \emph{adding a triangle edge between $u$ and $v$} we mean that we add a new vertex $w_{\{u,v\}}$ and the edges $\{u, v\}$, $\{u, w_{\{u,v\}}\}$, $\{w_{\{u,v\}}, v\}$, so that the three vertices $u$, $v$, $w_{\{u,v\}}$ induce a triangle. The vertex $w_{\{u,v\}}$ will not receive any further neighbors in the construction. Any feedback vertex set $X$ has to intersect $\{u, v, w_{\{u,v\}}\}$ and since $w_{\{u,v\}}$ has degree $2$, we can always assume that $w_{\{u,v\}} \notin X$. In this way, a triangle edge naturally implements a logical \emph{or} between $u$ and $v$: at least one of the two must be deleted.
\subsubsection*{Arrows.}
Given two vertices $u$ and $v$, by \emph{adding an arrow from $u$ to $v$} we mean that we add three vertices $x_{uv}$, $y_{uv}$, $z_{uv}$ and the edges $\{u, x_{uv}\}$, $\{u, y_{uv}\}$, $\{x_{uv}, y_{uv}\}$, $\{y_{uv}, z_{uv}\}$, $\{y_{uv}, v\}$, $\{z_{uv}, v\}$, i.e., we are essentially adding two consecutive triangle edges between $u$ and $v$. The resulting graph is denoted by $A(u,v)$ and $u$ is the \emph{tail} and $v$ the \emph{head} of the arrow. None of the vertices in $V(A(u,v)) \setminus \{u,v\}$ will receive any further neighbors in the construction. The construction of an arrow is symmetric, but the direction will be relevant for constructing a cycle packing that witnesses a lower bound on the size of a feedback vertex set.
We use arrows to propagate deletions throughout the graph. Let $X$ be a feedback vertex set. If $u \notin X$, then we can resolve both triangles simultaneously by putting $y_{uv}$ into $X$. If $u \in X$, then the first triangle is already resolved and we can safely put $v$ into $X$, hence propagating the deletion from $u$ to $v$. The former solution is called the \emph{passive} solution of the arrow and the latter is the \emph{active} solution. Using simple exchange arguments, we see that it is sufficient to only consider feedback vertex sets that on each arrow either use the passive solution or the active solution.
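For concreteness, the two gadgets can be generated as in the following minimal Python sketch (using \texttt{networkx}; the function names are ours and the snippet is only a reading aid, not part of the formal construction). Vertex identifiers are tuples so that the fresh private vertices never clash with existing ones.
\begin{verbatim}
import networkx as nx

def add_triangle_edge(G: nx.Graph, u, v):
    """Add the triangle edge between u and v: a fresh degree-2
    vertex w_{u,v} together with the triangle {u, v, w_{u,v}}."""
    w = ("tri", u, v)
    G.add_edges_from([(u, v), (u, w), (w, v)])
    return w

def add_arrow(G: nx.Graph, u, v):
    """Add an arrow from tail u to head v, i.e., two consecutive
    triangle edges via the fresh vertices x_{uv}, y_{uv}, z_{uv}."""
    x, y, z = ("arr-x", u, v), ("arr-y", u, v), ("arr-z", u, v)
    G.add_edges_from([(u, x), (u, y), (x, y),   # first triangle
                      (y, z), (y, v), (z, v)])  # second triangle
    return x, y, z
\end{verbatim}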
\subsubsection*{Setup.}
Assume that \textsc{Feedback Vertex Set}\xspace can be solved in time $\Oh^*((5 - \varepsilon)^{\tcpw(G)})$ for some $\varepsilon > 0$. Given a $q$-\textsc{Satisfiability}\xspace-instance $\sigma$ with $n$ variables and $m$ clauses, we construct an equivalent \textsc{Feedback Vertex Set}\xspace instance with twinclass-pathwidth approximately $n \log_5(2)$ so that the existence of such an algorithm for \textsc{Feedback Vertex Set}\xspace would imply that SETH\xspace is false.
We pick an integer $\beta$ only depending on $\varepsilon$; the precise choice of $\beta$ will be discussed at a later point. The variables of $\sigma$ are partitioned into groups of size at most $\beta$, resulting in $t = \lceil n / \beta \rceil$ groups. Furthermore, we pick the smallest integer $p$ that satisfies $5^p \geq 2^\beta$. We now begin with the construction of the FVS instance $(G = G(\sigma, \beta), \overline{b})$.
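As a small sanity check on these parameters, the following sketch (our own helper, not part of the construction) computes $t$ and $p$ from $n$ and $\beta$.
\begin{verbatim}
import math

def group_parameters(n, beta):
    """t = number of variable groups; p = smallest integer
    with 5**p >= 2**beta, the number of paths per group."""
    t = math.ceil(n / beta)
    p = 1
    while 5 ** p < 2 ** beta:
        p += 1
    return t, p

# Example: beta = 7 yields p = 4, since 5**3 = 125 < 2**7 = 128 <= 5**4.
\end{verbatim}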
\subsubsection*{Root.} We create a distinguished vertex $\hat{r}$ called the \emph{root} which will be connected to several vertices throughout the construction. Given a vertex subset $Y \subseteq V(G)$ with $\hat{r} \in Y$, we say that a vertex $v \in Y$ is \emph{root-connected} in $Y$ if there is a $v,\hat{r}$-path in $G[Y]$. We will just say \emph{root-connected} if $Y$ is clear from the context. The construction and choice of budget will ensure that the root vertex $\hat{r}$ cannot be deleted by the desired feedback vertex sets.
\begin{figure}
\centering
\scalebox{0.9}{\tikzfig{pictures/fvs_modtw_path}}
\caption{The superscripts in vertex names are omitted and the edges between the auxiliary vertices, connectivity vertices and clique vertices are not drawn directly for visual clarity. All vertices that are depicted with a rectangle are adjacent to the root vertex $\hat{r}$. The thick green edges denote triangle edges. The vertices inside the dashed rectangle induce a 5-clique or a complete 5-partite graph using triangle edges. The edges from the output vertices to the next pair of input vertices are hinted at.}
\label{fig:fvs_modtw_path}
\vspace*{-0.5cm}
\end{figure}
\subsubsection*{Path gadgets.}
For every $i \in [t]$, $j \in [p]$, $\ell \in [{\nclss(\nregions)}]$, we create a path gadget $P^{i, j, \ell}$ that consists of two \emph{input} vertices $u^{i, j, \ell}_1$, $u^{i, j, \ell}_2$ forming a false twinclass; four \emph{auxiliary} vertices $a^{i, j, \ell}_1$, $\ldots$, $a^{i, j, \ell}_4$; two \emph{connectivity} vertices $c^{i, j, \ell}_0$, $c^{i, j, \ell}_1$; five \emph{clique} vertices $v^{i, j, \ell}_1$, \ldots, $v^{i, j, \ell}_5$; and ten \emph{output} vertices in pairs of two $b^{i, j, \ell}_{1,1}$, $b^{i, j, \ell}_{1,2}$, $b^{i, j, \ell}_{2,1}$, $b^{i, j, \ell}_{2,2}$, \ldots, $b^{i, j, \ell}_{5,2}$. We add a join between the input vertices $u^{i, j, \ell}_1$, $u^{i, j, \ell}_2$ and the first three auxiliary vertices $a^{i, j, \ell}_1$, $a^{i, j, \ell}_2$, $a^{i, j, \ell}_3$, furthermore we add the edges $\{a^{i, j, \ell}_2, a^{i, j, \ell}_3\}$, $\{a^{i, j, \ell}_1, c^{i, j, \ell}_1\}$, and $\{b^{i, j, \ell}_{1,1}, b^{i, j, \ell}_{1,2}\}$. The vertices $c^{i, j, \ell}_1$ and $b^{i, j, \ell}_{2,1}$ are made adjacent to the root $\hat{r}$. We add triangle edges between $a^{i, j, \ell}_4$ and the other auxiliary vertices $a^{i, j, \ell}_1$, $a^{i, j, \ell}_2$, $a^{i, j, \ell}_3$ and we add a triangle edge between $c^{i, j, \ell}_0$ and $c^{i, j, \ell}_1$. We add a triangle edge between every pair of distinct clique vertices $v^{i, j, \ell}_\varphi$, $\varphi \in [5]$, and every pair of output vertices $b^{i, j, \ell}_{\varphi, \gamma}$ and $b^{i, j, \ell}_{\varphi', \gamma'}$ with $\varphi \neq \varphi' \in [5]$ and $\gamma, \gamma' \in \{1,2\}$. For all $\varphi \in [5]$, we add a triangle edge between $v^{i, j, \ell}_\varphi$ and every $b^{i, j, \ell}_{\psi, \gamma}$ for $\psi \in [5] \setminus \{\varphi\}$ and $\gamma \in \{1,2\}$.
We finish the construction of $P^{i, j, \ell}$ by describing how to connect the clique vertices $v^{i, j, \ell}_\varphi$, $\varphi \in [5]$, to the left side of $P^{i, j, \ell}$. For each $\varphi \in [5]$, we add triangle edges between $v^{i, j, \ell}_\varphi$ and one or several \emph{target} vertices on the left side of $P^{i, j, \ell}$. The target vertices, depending on $\varphi \in [5]$, are
\begin{itemize}
\item for $\varphi = 1$: $a^{i, j, \ell}_4$ and $c^{i, j, \ell}_1$;
\item for $\varphi = 2$: $a^{i, j, \ell}_3$, $a^{i, j, \ell}_4$, and $c^{i, j, \ell}_1$;
\item for $\varphi = 3$: $a^{i, j, \ell}_3$, $a^{i, j, \ell}_4$, and $c^{i, j, \ell}_0$;
\item for $\varphi = 4$: $a^{i, j, \ell}_1$, $a^{i, j, \ell}_2$, $a^{i, j, \ell}_3$, and $c^{i, j, \ell}_1$;
\item for $\varphi = 5$: $a^{i, j, \ell}_2$, $a^{i, j, \ell}_3$, $a^{i, j, \ell}_4$, and $c^{i, j, \ell}_1$.
\end{itemize}
Finally, for $\ell \in [{\nclss(\nregions)} - 1]$, we connect $P^{i, j, \ell}$ to $P^{i, j, \ell + 1}$ by adding a join between the output pair $b^{i, j, \ell}_{\varphi, 1}$, $b^{i, j, \ell}_{\varphi, 2}$ and the next input vertices $u^{i, j, \ell + 1}_1$, $u^{i, j, \ell + 1}_2$ for every $\varphi \in \{1, 2, 3\}$ and we join the vertex $b^{i, j, \ell}_{4, 1}$ to $u^{i, j, \ell + 1}_1$ and $u^{i, j, \ell + 1}_2$. This concludes the description of the path gadgets, cf.~\cref{fig:fvs_modtw_path}.
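Since the gadget has many parts, the following Python sketch (continuing the helpers from above; again only a reading aid under our own naming, not a verified implementation) assembles a single path gadget.
\begin{verbatim}
from itertools import combinations, product

def add_path_gadget(G, root, key):
    """Build one path gadget P^{i,j,l} for key = (i, j, l);
    returns a dict from local names to vertex identifiers."""
    names = ["u1", "u2", "a1", "a2", "a3", "a4", "c0", "c1"]
    names += ["v%d" % f for f in range(1, 6)]
    names += ["b%d%d" % (f, g) for f in range(1, 6) for g in (1, 2)]
    V = {name: (name,) + key for name in names}
    # join of the input twinclass with the first three auxiliaries
    G.add_edges_from((V[u], V[a]) for u in ("u1", "u2")
                     for a in ("a1", "a2", "a3"))
    # plain edges and the two attachments to the root
    G.add_edges_from([(V["a2"], V["a3"]), (V["a1"], V["c1"]),
                      (V["b11"], V["b12"]),
                      (V["c1"], root), (V["b21"], root)])
    for a in ("a1", "a2", "a3"):          # a4 vs. a1, a2, a3
        add_triangle_edge(G, V["a4"], V[a])
    add_triangle_edge(G, V["c0"], V["c1"])
    for f, g in combinations(range(1, 6), 2):   # clique vertices
        add_triangle_edge(G, V["v%d" % f], V["v%d" % g])
    outs = list(product(range(1, 6), (1, 2)))   # output vertices
    for (f, g), (f2, g2) in combinations(outs, 2):
        if f != f2:
            add_triangle_edge(G, V["b%d%d" % (f, g)],
                              V["b%d%d" % (f2, g2)])
    for f in range(1, 6):     # v_f vs. outputs of all other indices
        for f2, g in outs:
            if f2 != f:
                add_triangle_edge(G, V["v%d" % f], V["b%d%d" % (f2, g)])
    targets = {1: ("a4", "c1"), 2: ("a3", "a4", "c1"),
               3: ("a3", "a4", "c0"), 4: ("a1", "a2", "a3", "c1"),
               5: ("a2", "a3", "a4", "c1")}
    for f, ts in targets.items():         # left-side targets of v_f
        for tgt in ts:
            add_triangle_edge(G, V["v%d" % f], V[tgt])
    return V
\end{verbatim}
The joins from the output pairs $b_{\varphi,1}, b_{\varphi,2}$ ($\varphi \in \{1,2,3\}$) and from $b_{4,1}$ to the next gadget's input pair are then added between consecutive gadgets exactly as described above.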
\begin{figure}
\centering
\tikzfig{pictures/fvs_modtw_decoding}
\caption{A depiction of the decoding $D^{i,\ell,\mathbf{h}}$ and clause gadget $Z^\ell$ with $\mathbf{h} = (2,1,\ldots)$. The red triangle is part of the packing $\mathcal{P}$. The arrows point in the direction of the deletion propagation.}
\label{fig:fvs_modtw_decoding}
\end{figure}
\subsubsection*{Decoding gadgets.}
For every group $i \in [t]$, column $\ell \in [{\nclss(\nregions)}]$, and state sequence $\mathbf{h} = (h_1, \ldots, h_p) \in [5]^p$, we create a decoding gadget $D^{i, \ell, \mathbf{h}}$ consisting of $4p$ vertices $x^{i, \ell, \mathbf{h}}_1$, $\ldots$, $x^{i, \ell, \mathbf{h}}_{4p}$; a distinguished vertex $\hat{x}^{i, \ell, \mathbf{h}}$; and two vertices $y^{i, \ell, \mathbf{h}}_1$ and $y^{i, \ell, \mathbf{h}}_2$. We add the edges $\{y^{i, \ell, \mathbf{h}}_1, y^{i, \ell, \mathbf{h}}_2\}$, $\{y^{i, \ell, \mathbf{h}}_1, \hat{x}^{i, \ell, \mathbf{h}}\}$, $\{y^{i, \ell, \mathbf{h}}_2, \hat{x}^{i, \ell, \mathbf{h}}\}$ and for every $\gamma \in [4p]$, the edges $\{y^{i, \ell, \mathbf{h}}_1, x^{i, \ell, \mathbf{h}}_\gamma\}$ and $\{y^{i, \ell, \mathbf{h}}_2, x^{i, \ell, \mathbf{h}}_\gamma\}$, hence $\{y^{i, \ell, \mathbf{h}}_1, y^{i, \ell, \mathbf{h}}_2, x^{i, \ell, \mathbf{h}}_\gamma\}$ induces a triangle for every $\gamma \in [4p]$. The path gadgets $P^{i, j, \ell}$ with $j \in [p]$ are connected to $D^{i, \ell, \mathbf{h}}$ as follows. For every clique vertex $v^{i, j, \ell}_\varphi$ with $\varphi \in [5] \setminus \{h_j\}$, we pick a private vertex $x^{i, \ell, \mathbf{h}}_\gamma$, $\gamma \in [4p]$, and add an arrow from $v^{i, j, \ell}_\varphi$ to $x^{i, \ell, \mathbf{h}}_\gamma$. Since there are precisely $4p$ such $v^{i, j, \ell}_\varphi$ for fixed $i$, $\ell$, and $\mathbf{h}$, this construction works out. For every $i \in [t]$, $\ell \in [{\nclss(\nregions)}]$, the \emph{block} $B^{i,\ell}$ consists of the path gadgets $P^{i,j,\ell}$, $j \in [p]$, and the decoding gadgets $D^{i,\ell,\mathbf{h}}$, $\mathbf{h} \in [5]^p$. See \cref{fig:fvs_modtw_decoding} for a depiction of the decoding gadget.
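A decoding gadget can be sketched along the same lines (again hypothetical helper code; \texttt{clique\_vertex(j, phi)} is assumed to return the vertex $v^{i,j,\ell}_\varphi$ of the corresponding path gadget):
\begin{verbatim}
def add_decoding_gadget(G, key, p, clique_vertex, h):
    """Build D^{i,l,h} for key = (i, l, h): the triangle fan
    through x_1, ..., x_{4p} and an arrow from every clique
    vertex v^{i,j,l}_phi with phi != h[j] to a private x-vertex."""
    xs = [("x",) + key + (g,) for g in range(4 * p)]
    xhat, y1, y2 = ("xhat",) + key, ("y1",) + key, ("y2",) + key
    G.add_edges_from([(y1, y2), (y1, xhat), (y2, xhat)])
    G.add_edges_from((y, x) for y in (y1, y2) for x in xs)
    private = iter(xs)          # exactly 4p arrows arrive here
    for j in range(p):          # h is 0-indexed in this sketch
        for phi in range(1, 6):
            if phi != h[j]:
                add_arrow(G, clique_vertex(j, phi), next(private))
    return xhat
\end{verbatim}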
\subsubsection*{Mapping truth assignments to state sequences.}
Every variable group $i \in [t]$ has at most $2^\beta$ possible truth assignments. By choice of $p$, we have that $5^p \geq 2^\beta$, hence we can fix an injective mapping $\kappa \colon \{0,1\}^\beta \rightarrow [5]^p$ that maps truth assignments $\tau \in \{0,1\}^\beta$ to state sequences $\mathbf{h} \in [5]^p$.
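One concrete choice for $\kappa$ (ours, for illustration) reads the assignment as a binary number and emits its $p$ base-$5$ digits:
\begin{verbatim}
def kappa(tau, p):
    """Injective map {0,1}^beta -> [5]^p, valid whenever
    5**p >= 2**len(tau); states are shifted to 1..5."""
    num = int("".join(map(str, tau)) or "0", 2)
    digits = []
    for _ in range(p):
        num, d = divmod(num, 5)
        digits.append(d + 1)
    assert num == 0, "requires 5**p >= 2**beta"
    return tuple(digits)

# Example: kappa((1, 0, 1), 2) == (1, 2), since 0b101 = 5 = 0 + 1*5.
\end{verbatim}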
\subsubsection*{Clause cycles.}
We number the clauses of $\sigma$ by $C_0, \ldots, C_{m - 1}$. For every column $\ell \in [{\nclss(\nregions)}]$, we create a cycle $Z^\ell$ consisting of $q 5^p$ vertices $z^\ell_\gamma$, $\gamma \in [q 5^p]$. Let $\ell'$ be the remainder of $\ell - 1$ modulo $m$. For every group $i \in [t]$ and state sequence $\mathbf{h} \in [5]^p$, we add an arrow from $\hat{x}^{i, \ell, \mathbf{h}}$ to a private $z^\ell_\gamma$ if $\mathbf{h} \in \kappa(\{0,1\}^\beta)$ and $\kappa^{-1}(\mathbf{h})$ is a truth assignment for variable group $i$ that satisfies clause $C_{\ell'}$. Since $\sigma$ is a $q$-\textsc{Satisfiability}\xspace instance, every clause intersects at most $q$ variable groups. Every variable group has at most $2^\beta \leq 5^p$ possible truth assignments, hence $q 5^p$ is a sufficient number of vertices for this construction to work out. See \cref{fig:fvs_modtw_schematic} for a depiction of the high-level structure.
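The clause cycles with their incoming arrows can be sketched as follows (hypothetical helper code; \texttt{xhats} maps $(i, \mathbf{h})$ to the vertex $\hat{x}^{i,\ell,\mathbf{h}}$ and \texttt{satisfies} is the predicate from the text, returning false when $\mathbf{h}$ is not in the image of $\kappa$):
\begin{verbatim}
def add_clause_cycle(G, ell, q, p, xhats, satisfies):
    """Build Z^ell: a cycle on q * 5**p vertices plus an arrow
    from xhat^{i,ell,h} to a private cycle vertex whenever the
    assignment decoded from h satisfies this column's clause."""
    zs = [("z", ell, g) for g in range(q * 5 ** p)]
    G.add_edges_from(zip(zs, zs[1:] + zs[:1]))  # the cycle itself
    private = iter(zs)
    for (i, h), xhat in xhats.items():
        if satisfies(i, h):
            add_arrow(G, xhat, next(private))
\end{verbatim}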
\begin{figure}
\centering
\scalebox{0.75}{\tikzfig{pictures/fvs_modtw_schematic}}
\caption{The matrix structure of the constructed graph. Every $m$ columns form a region.}
\label{fig:fvs_modtw_schematic}
\vspace*{-1cm}
\end{figure}
\subsubsection*{Packing.}
We construct a vertex-disjoint packing $\mathcal{P}$ that will witness a lower bound on the size of any feedback vertex set in the constructed graph $G$. The packing $\mathcal{P}$ consists of the following subgraphs:
\begin{itemize}
\item the triangle edge between $c^{i, j, \ell}_0$ and $c^{i, j, \ell}_1$ for all $i \in [t]$, $j \in [p]$, $\ell \in [{\nclss(\nregions)}]$,
\item the graph induced by the clique vertices $v^{i, j, \ell}_\varphi$, $\varphi \in [5]$, and the triangle edges between them for all $i \in [t]$, $j \in [p]$, $\ell \in [{\nclss(\nregions)}]$,
\item the graph induced by the output vertices $b^{i, j, \ell}_{\varphi, \gamma}$, $\varphi \in [5]$, $\gamma \in \{1,2\}$, and the triangle edges between them for all $i \in [t]$, $j \in [p]$, $\ell \in [{\nclss(\nregions)}]$,
\item the graph induced by the input vertices $u^{i, j, \ell}_1$, $u^{i, j, \ell}_2$ and the auxiliary vertices $a^{i, j, \ell}_1$, $\ldots$, $a^{i, j, \ell}_4$ and the triangle edges between them for all $i \in [t]$, $j \in [p]$, $\ell \in [{\nclss(\nregions)}]$,
\item the triangle induced by $\hat{x}^{i, \ell, \mathbf{h}}, y^{i, \ell, \mathbf{h}}_1, y^{i, \ell, \mathbf{h}}_2$ for all $i \in [t]$, $\ell \in [{\nclss(\nregions)}]$, $\mathbf{h} \in [5]^p$,
\item the second triangle in every arrow $A(u,v)$, i.e., the triangle containing the head $v$ if the arrow was constructed from $u$ to $v$.
\end{itemize}
Observe that in the construction of $G$ at most the tail of an arrow is incident with any of the other subgraphs in $\mathcal{P}$, hence the subgraphs in $\mathcal{P}$ are indeed vertex-disjoint.
Let $n_A$ be the number of arrows in $G$; we define
\begin{equation*}
\mathrm{cost}_\mathcal{P} = (1 + 4 + 8 + 3)t p {\nclss(\nregions)} + t {\nclss(\nregions)} 5^p + n_A,
\end{equation*}
where the four summands of the first term count, per path gadget, the connectivity triangle edge, the clique vertices, the output vertices, and the input and auxiliary vertices, respectively.
\begin{lem} \label{thm:fvs_modtw_packing}
Let $X$ be a feedback vertex set of $G$, then $|X| \geq \mathrm{cost}_\mathcal{P}$.
\end{lem}
\begin{proof}
We first apply the standard exchange arguments for triangle edges and arrows to $X$, obtaining a feedback vertex set $X'$ of $G$ with $|X'| \leq |X|$ that never contains the degree-2 vertex in a triangle edge and always uses the passive or active solution on any arrow.
For every triangle in $\mathcal{P}$, the feedback vertex set $X'$ must clearly contain at least one vertex of that triangle. Fix $i \in [t]$, $j \in [p]$, $\ell \in [{\nclss(\nregions)}]$ for the rest of the proof. Consider the graph induced by the clique vertices $v^{i, j, \ell}_\varphi$, $\varphi \in [5]$, and suppose that there are $\varphi \neq \psi \in [5]$ such that $v^{i, j, \ell}_\varphi, v^{i, j, \ell}_\psi \notin X'$, then the triangle edge between these two vertices is not resolved by assumption on $X'$. Hence, $X'$ contains at least four of the vertices $v^{i, j, \ell}_\varphi$, $\varphi \in [5]$. Similarly, consider the graph induced by the output vertices $b^{i, j, \ell}_{\varphi, \gamma}$, $\varphi \in [5]$, $\gamma \in \{1,2\}$, and suppose that there are $\varphi \neq \psi \in [5]$, $\gamma, \gamma' \in \{1,2\}$ such that $b^{i, j, \ell}_{\varphi, \gamma}, b^{i, j, \ell}_{\psi, \gamma'} \notin X'$, then the triangle edge between these two vertices is not resolved by assumption on $X'$. Hence, $X'$ contains at least eight of these vertices, in particular four out of five pairs $b^{i, j, \ell}_{\varphi, 1}$, $b^{i, j, \ell}_{\varphi, 2}$, $\varphi \in [5]$, must be completely contained in $X'$.
It remains to show that $X'$ contains at least three vertices in the subgraph induced by the input vertices $u^{i, j, \ell}_1$, $u^{i, j, \ell}_2$ and the auxiliary vertices $a^{i, j, \ell}_1$, $\ldots$, $a^{i, j, \ell}_4$. First, observe that $X'$ has to contain all of the first three auxiliary vertices $a^{i, j, \ell}_1$, $a^{i, j, \ell}_2$, $a^{i, j, \ell}_3$ or the last auxiliary vertex $a^{i, j, \ell}_4$, otherwise there is an unresolved triangle edge incident to the last auxiliary vertex $a^{i, j, \ell}_4$. We distinguish three cases based on $\alpha = |X' \cap \{u^{i, j, \ell}_1, u^{i, j, \ell}_2\}|$. If $\alpha = 2$, we are done by the first observation. If $\alpha = 1$, there is a triangle induced by $a^{i, j, \ell}_2$, $a^{i, j, \ell}_3$, and the remaining input vertex which needs to be resolved. Hence, $a^{i, j, \ell}_2 \in X'$ or $a^{i, j, \ell}_3 \in X'$ and due to the first observation $X'$ has to contain at least one further vertex. Finally, if $\alpha = 0$, note that the graph induced by the input vertices and the first three auxiliary vertices contains a $K_{2,3}$, so $X'$ has to contain at least two of the first three auxiliary vertices and due to the first observation $X'$ has to contain at least one further vertex, hence we are done. \qed
\end{proof}
\begin{lem}\label{thm:fvs_modtw_sat_to_sol}
If $\sigma$ is satisfiable, then there is a feedback vertex set $X$ of $G$ with $|X| \leq \mathrm{cost}_\mathcal{P}$.
\end{lem}
\begin{proof}
Let $\tau$ be a satisfying truth assignment of $\sigma$ and let $\tau^i$ be the induced truth assignment for variable group $i \in [t]$. Each truth assignment $\tau^i$ corresponds to a state sequence $\kappa(\tau^i) = \mathbf{h}^i = (h^i_1, \ldots, h^i_p)$ which we will use to construct the feedback vertex set $X$. On every path gadget $P^{i, j, \ell}$, $i \in [t]$, $j \in [p]$, $\ell \in [{\nclss(\nregions)}]$, we consider five different types of solutions $X^{i,j,\ell}_\varphi$, $\varphi \in [5]$, which we will define now:
\begin{itemize}
\item $X^{i,j,\ell}_1 = \{v^{i,j,\ell}_\varphi, b^{i,j,\ell}_{\varphi, 1}, b^{i,j,\ell}_{\varphi, 2} : \varphi \in [5] \setminus\{1\}\} \cup \{c^{i,j,\ell}_1\} \cup \{a^{i,j,\ell}_4\} \cup \{u^{i,j,\ell}_1, u^{i,j,\ell}_2\}$
\item $X^{i,j,\ell}_2 = \{v^{i,j,\ell}_\varphi, b^{i,j,\ell}_{\varphi, 1}, b^{i,j,\ell}_{\varphi, 2} : \varphi \in [5] \setminus\{2\}\} \cup \{c^{i,j,\ell}_1\} \cup \{a^{i,j,\ell}_3, a^{i,j,\ell}_4\} \cup \{u^{i,j,\ell}_1\}$
\item $X^{i,j,\ell}_3 = \{v^{i,j,\ell}_\varphi, b^{i,j,\ell}_{\varphi, 1}, b^{i,j,\ell}_{\varphi, 2} : \varphi \in [5] \setminus\{3\}\} \cup \{c^{i,j,\ell}_0\} \cup \{a^{i,j,\ell}_3, a^{i,j,\ell}_4\} \cup \{u^{i,j,\ell}_1\}$
\item $X^{i,j,\ell}_4 = \{v^{i,j,\ell}_\varphi, b^{i,j,\ell}_{\varphi, 1}, b^{i,j,\ell}_{\varphi, 2} : \varphi \in [5] \setminus\{4\}\} \cup \{c^{i,j,\ell}_1\} \cup \{a^{i,j,\ell}_1, a^{i,j,\ell}_2, a^{i,j,\ell}_3\} \cup \emptyset$
\item $X^{i,j,\ell}_5 = \{v^{i,j,\ell}_\varphi, b^{i,j,\ell}_{\varphi, 1}, b^{i,j,\ell}_{\varphi, 2} : \varphi \in [5] \setminus\{5\}\} \cup \{c^{i,j,\ell}_1\} \cup \{a^{i,j,\ell}_2, a^{i,j,\ell}_3, a^{i,j,\ell}_4\} \cup \emptyset$
\end{itemize}
The feedback vertex set on the path gadgets $P^{i,j,\ell}$ is given by
\begin{equation*}
X_P = \bigcup_{i \in [t]} \bigcup_{j \in [p]} \bigcup_{\ell \in [{\nclss(\nregions)}]} X^{i, j, \ell}_{h^i_j}.
\end{equation*}
On the decoding gadgets $D^{i, \ell, \mathbf{h}}$, we define
\begin{equation*}
X_D = \bigcup_{i \in [t]} \bigcup_{\ell \in [{\nclss(\nregions)}]} \left(\{\hat{x}^{i, \ell, \mathbf{h}^i}\} \cup \{y^{i, \ell, \mathbf{h}}_1 : \mathbf{h} \in [5]^p \setminus \{\mathbf{h}^i\}\} \right).
\end{equation*}
We obtain the desired feedback vertex set $X$ by starting with $X_P \cup X_D$ and propagating the deletions throughout $G$ using the arrows, i.e., if the tail $u$ of an arrow $A(u,v)$ is in $X$, then we choose the active solution on this arrow and otherwise we choose the passive solution. Since $|X^{i, j, \ell}_\varphi| = 16$ for all $i \in [t]$, $j \in [p]$, $\ell \in [{\nclss(\nregions)}]$, $\varphi \in [5]$, we compute that $|X_P| = 16 t p {\nclss(\nregions)}$ and for $X_D$, we see that $|X_D| = t {\nclss(\nregions)} 5^p$ and hence $|X| = \mathrm{cost}_\mathcal{P}$ as desired, since we perform one additional deletion per arrow.
It remains to show that $X$ is a feedback vertex set of $G$, i.e., that $G - X$ is a forest. First, notice that the passive solution of an arrow $A(u,v)$ disconnects $u$ from $v$ inside $A(u,v)$ and that the remainder of $A(u,v) - \{u,v\}$ cannot partake in any cycles. The active solution of an arrow $A(u,v)$ deletes $u$ and $v$, so that the three remaining vertices of the arrow form a single connected component. Since the path gadgets $P^{i, j, \ell}$ are connected to the decoding gadgets $D^{i, \ell, \mathbf{h}}$ only via arrows and also the decoding gadgets are only connected to the clause cycles $Z^\ell$ via arrows, $X$ disconnects these three types of gadgets from each other and we can handle each type separately.
We begin with the decoding gadgets $D^{i, \ell, \mathbf{h}}$, $i \in [t]$, $\ell \in [{\nclss(\nregions)}]$, $\mathbf{h} \in [5]^p$. Every $D^{i, \ell, \mathbf{h}}$ is in its own connected component in $G - X$, since one can only enter or leave $D^{i, \ell, \mathbf{h}}$ via an arrow. Every cycle in $D^{i, \ell, \mathbf{h}}$ intersects $y^{i, \ell, \mathbf{h}}_1$ which is in $X$ if $\mathbf{h} \neq \mathbf{h}^i$. Hence, it remains to consider the case $\mathbf{h} = \mathbf{h}^i$. In this case, $X$ contains $\hat{x}^{i, \ell, \mathbf{h}^i}$ by definition of $X_D$ and we claim that $x^{i, \ell, \mathbf{h}^i}_\gamma \in X$ for all $\gamma \in [4p]$ due to propagation via arrows. By construction of $G$, every $x^{i, \ell, \mathbf{h}^i}_\gamma$, $\gamma \in [4p]$, is the head of an arrow $A(v^{i, j, \ell}_\varphi, x^{i, \ell, \mathbf{h}^i}_\gamma)$ for some $j \in [p]$ and $\varphi \in [5] \setminus \{h^i_j\}$, but every such $v^{i, j, \ell}_\varphi$ is in $X$ by definition of $X_P$. Hence, these deletions are propagated to the $x^{i, \ell, \mathbf{h}^i}_\gamma$, $\gamma \in [4p]$, and the only remaining vertices of $D^{i, \ell, \mathbf{h}^i}$ are $y^{i, \ell, \mathbf{h}^i}_1$ and $y^{i, \ell, \mathbf{h}^i}_2$, which clearly induce an acyclic graph.
We continue with the clause cycles $Z^\ell$, $\ell \in [{\nclss(\nregions)}]$. Again, each clause cycle $Z^\ell$ is in its own connected component in $G - X$ and $Z^\ell$ consists of a single large cycle with vertices $z^\ell_\gamma$, $\gamma \in [q 5^p]$. We claim that $X$ propagates a deletion to at least one of these $z^\ell_\gamma$. Let $\ell'$ be the remainder of $\ell - 1$ modulo $m$. Since $\tau$ satisfies $\sigma$ and in particular clause $C_{\ell'}$, there is some variable group $i \in [t]$ such that already $\tau^i$ satisfies clause $C_{\ell'}$. By construction of $G$, there is an arrow $A(\hat{x}^{i, \ell, \mathbf{h}^i}, z^\ell_\gamma)$ for some $\gamma \in [q 5^p]$ because $\kappa(\tau^i) = \mathbf{h}^i$. By definition of $X_D$, we have that $\hat{x}^{i, \ell, \mathbf{h}^i} \in X$ and a deletion is indeed propagated to $z^\ell_\gamma$, thus resolving the clause cycle $Z^\ell$.
It remains to show that there is no cycle in $G - X$ intersecting a path gadget $P^{i,j,\ell}$, $i \in [t]$, $j \in [p]$, $\ell \in [{\nclss(\nregions)}]$. All path gadgets are connected to each other via the root vertex $\hat{r}$ and furthermore consecutive path gadgets $P^{i,j,\ell}$ and $P^{i,j,\ell + 1}$ are connected via the joins between them. We first show that there is no cycle in $G - X$ that is completely contained in a single path gadget $P^{i, j, \ell}$. It is easy to see that each $X \cap P^{i,j,\ell} = X^{i,j,\ell}_{h^i_j}$ contains at least one vertex per triangle edge in $P^{i, j, \ell}$. Any further cycle that could remain in $P^{i,j,\ell}$ can only involve the vertices $u^{i,j,\ell}_1$, $u^{i,j,\ell}_2$, $a^{i,j,\ell}_1$, $a^{i,j,\ell}_2$, and $a^{i,j,\ell}_3$. These vertices induce a $K_{2,3}$ plus the edge $\{a^{i,j,\ell}_2, a^{i,j,\ell}_3\}$ in $G$. In each $X^{i,j,\ell}_\varphi$, $\varphi \in [5]$, one side of the biclique $K_{2,3}$ is contained completely with the exception of at most one vertex and $a^{i,j,\ell}_2$ and $a^{i,j,\ell}_3$ only remain together if the other side is contained completely. Hence, no cycle remains there either.
Observe that $P^{i,j,\ell}$ is separated from any $P^{i,j,\ell'}$ with $\ell' \notin \{\ell - 1, \ell, \ell + 1\}$ in $G - (X \cup \{\hat{r}\})$, because $X$ contains at least one endpoint of each triangle edge between the clique vertices $v^{i, j, \ell}_\varphi$, $\varphi \in [5]$, and the output vertices $b^{i, j, \ell}_{\varphi, \gamma}$, $\varphi \in [5]$, $\gamma \in \{1,2\}$. Hence, any cycle in $G - (X \cup \{\hat{r}\})$ would have to involve two consecutive path gadgets. Furthermore, $\{u^{i,j,\ell + 1}_1, u^{i,j,\ell + 1}_2\}$ is a separator of size two between $P^{i,j,\ell}$ and $P^{i,j,\ell + 1}$ in $G - (X \cup \{\hat{r}\})$, so any cycle involving both path gadgets has to contain $u^{i,j,\ell + 1}_1$ and $u^{i,j,\ell + 1}_2$. Therefore, we only have to consider the partial solutions $X^{i,j,\ell}_4 \cup X^{i,j,\ell+1}_4$ and $X^{i,j,\ell}_5 \cup X^{i,j,\ell+1}_5$ as otherwise at least one of $u^{i,j,\ell + 1}_1$ and $u^{i,j,\ell + 1}_2$ will be deleted. In both cases, the connected component of $G-X$ containing $u^{i,j,\ell + 1}_1$ and $u^{i,j,\ell + 1}_2$ induces a path on three vertices plus some pendant edges from the triangle edges. Hence, there is no cycle in $G - (X \cup \{\hat{r}\})$.
We are left with showing that $G - X$ contains no cycle containing the root vertex $\hat{r}$. We do so by arguing that each vertex in $G - X$ has at most one path to $\hat{r}$ in $G - X$. The neighbors of $\hat{r}$ are the vertices $b^{i,j,\ell}_{2,1}$ and $c^{i,j,\ell}_1$ for all $i \in [t]$, $j \in [p]$, $\ell \in [{\nclss(\nregions)}]$. It is sufficient to show that there is no path between any of these neighbors in $G - (X \cup \{\hat{r}\})$. By the same argument as in the previous paragraph, we only have to consider consecutive path gadgets $P^{i,j,\ell}$ and $P^{i,j,\ell + 1}$. By resolving the triangle edges between the clique vertices $v^{i,j,\ell}_\varphi$, $\varphi \in [5]$, and the output vertices $b^{i,j,\ell}_{\varphi, \gamma}$, $\varphi \in [5]$, $\gamma \in \{1,2\}$, all paths in $G - \hat{r}$ between $b^{i,j,\ell}_{2,1}$ and $c^{i,j,\ell}_1$ are intersected by $X$. Similarly for paths in $G - \hat{r}$ between $c^{i,j,\ell}_1$ and one of the vertices $c^{i,j,\ell + 1}_1$ or $b^{i,j,\ell + 1}_{2,1}$ and paths between $b^{i,j,\ell}_{2,1}$ and $b^{i,j,\ell + 1}_{2,1}$.
It remains to consider paths in $G - (X \cup \{\hat{r}\})$ between $b^{i,j,\ell}_{2,1}$ and $c^{i,j,\ell+1}_1$. We distinguish based on the chosen partial solution $X^{i,j,\ell}_\varphi \cup X^{i,j,\ell + 1}_\varphi$, $\varphi \in [5]$. For $\varphi \neq 3$, we see that $c^{i,j,\ell}_1 \in X$. For $\varphi = 3$, we see that $b^{i,j,\ell}_{2,1} \in X$. Hence, no such path can exist and $X$ has to be a feedback vertex set. \qed
\end{proof}
We say that a vertex subset $X \subseteq V(G)$ is \emph{canonical} with respect to the twinclass $\{u^{i,j,\ell}_1, u^{i,j,\ell}_2\}$ if $u^{i,j,\ell}_2 \in X$ implies $u^{i,j,\ell}_1 \in X$. Since $\{u^{i,j,\ell}_1, u^{i,j,\ell}_2\}$ is a twinclass, we can always assume that we are working with a canonical subset.
Given a vertex subset $X \subseteq V(G) \setminus \{\hat{r}\}$ that is canonical with respect to each twinclass $\{u^{i,j,\ell}_1, u^{i,j,\ell}_2\}$, we define $\mathbf{state}_X \colon [t] \times [p] \times [{\nclss(\nregions)}] \rightarrow \{\mathbf{2}, \mathbf{1}_0, \mathbf{1}_1, \mathbf{0}_0, \mathbf{0}_1\}$ by
\begin{equation*}
\mathbf{state}_X(i,j,\ell) =
\begin{cases}
\mathbf{2}, & \text{if $|X \cap \{u^{i,j,\ell}_1, u^{i,j,\ell}_2\}| = 2$}, \\
\mathbf{1}_0, & \text{if $X \cap \{u^{i,j,\ell}_1, u^{i,j,\ell}_2\} = \{u^{i,j,\ell}_1\}$ and} \\
& \text{ $u^{i,j,\ell}_2$ is not root-connected in $(P^{i,j,\ell} + \hat{r}) - X$}, \\
\mathbf{1}_1, & \text{if $X \cap \{u^{i,j,\ell}_1, u^{i,j,\ell}_2\} = \{u^{i,j,\ell}_1\}$ and} \\
& \text{ $u^{i,j,\ell}_2$ is root-connected in $(P^{i,j,\ell} + \hat{r}) - X$}, \\
\mathbf{0}_0, & \text{if $X \cap \{u^{i,j,\ell}_1, u^{i,j,\ell}_2\} = \emptyset$ and} \\
& \text{ $u^{i,j,\ell}_1$ and $u^{i,j,\ell}_2$ are not connected in $(P^{i,j,\ell} + \hat{r}) - X$}, \\
\mathbf{0}_1, & \text{if $X \cap \{u^{i,j,\ell}_1, u^{i,j,\ell}_2\} = \emptyset$ and} \\
& \text{ $u^{i,j,\ell}_1$ and $u^{i,j,\ell}_2$ are connected in $(P^{i,j,\ell} + \hat{r}) - X$}.
\end{cases}
\end{equation*}
Due to the assumption that $X$ is canonical, we see that $\mathbf{state}_X$ is well-defined. We remark that the meaning of the subscript differs between the cases: if exactly one vertex of the twinclass is in $X$, the subscript records whether the surviving input vertex is root-connected, whereas if no vertex is in $X$, it records whether the two input vertices are connected to each other. We also introduce the notation $\mathbf{s}^1 = \mathbf{2}$, $\mathbf{s}^2 = \mathbf{1}_0$, $\mathbf{s}^3 = \mathbf{1}_1$, $\mathbf{s}^4 = \mathbf{0}_0$, and $\mathbf{s}^5 = \mathbf{0}_1$.
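For illustration, the state of a canonical set $X$ on a path gadget can be evaluated mechanically (a sketch with \texttt{networkx}; the argument names are ours):
\begin{verbatim}
import networkx as nx

def state_of(gadget_plus_root, X, u1, u2, root):
    """Evaluate state_X(i,j,l); gadget_plus_root is the graph
    P^{i,j,l} + root, and X is canonical (u2 in X implies u1 in X)."""
    H = gadget_plus_root.subgraph(set(gadget_plus_root) - set(X))
    k = len({u1, u2} & set(X))
    if k == 2:
        return "2"
    if k == 1:  # by canonicity, exactly u1 was deleted
        return "1_1" if nx.has_path(H, u2, root) else "1_0"
    return "0_1" if nx.has_path(H, u1, u2) else "0_0"
\end{verbatim}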
\begin{lem}\label{thm:fvs_modtw_sol_to_sat}
If there is a feedback vertex set $X$ of $G$ of size $|X| \leq \mathrm{cost}_\mathcal{P}$, then $\sigma$ is satisfiable.
\end{lem}
\begin{proof}
Due to \cref{thm:fvs_modtw_packing}, we immediately see that $|X| = \mathrm{cost}_\mathcal{P}$ and $X \cap V(H)$ has to be a minimum feedback vertex set of $H$ for any $H \in \mathcal{P}$. So, $X$ contains precisely one vertex of each triangle in $\mathcal{P}$ and satisfies the \emph{packing equations} for all $i \in [t]$, $j \in [p]$, $\ell \in [{\nclss(\nregions)}]$:
\begin{itemize}
\item $|X \cap \{v^{i,j,\ell}_\varphi : \varphi \in [5]\}| = 4$,
\item $|X \cap \{b^{i,j,\ell}_{\varphi, 1}, b^{i,j,\ell}_{\varphi,2} : \varphi \in [5]\}| = 8$,
\item $|X \cap \{u^{i,j,\ell}_1, u^{i,j,\ell}_2, a^{i,j,\ell}_1, a^{i,j,\ell}_2, a^{i,j,\ell}_3, a^{i,j,\ell}_4\}| = 3$.
\end{itemize}
In particular, this also implies that $X$ cannot contain the root vertex $\hat{r}$.
Furthermore, due to the standard exchange arguments for triangle edges and arrows, we can assume for any triangle edge between $u$ and $v$ that $X$ contains $u$ or $v$ and for any arrow $A(u,v)$ that $X$ uses the passive solution or the active solution on $A(u,v)$. Finally, we can assume that $X$ is canonical with respect to each twinclass $\{u^{i,j,\ell}_1, u^{i,j,\ell}_2\}$, i.e., $u^{i,j,\ell}_2 \in X$ implies that $u^{i,j,\ell}_1 \in X$.
We begin by studying the structure of $X \cap P^{i,j,\ell}$ for any $i \in [t]$, $j \in [p]$, $\ell \in [{\nclss(\nregions)}]$. For fixed $i,j,\ell$, there is a unique $\varphi \in [5]$ such that $v^{i,j,\ell}_\varphi \notin X$ due to the packing equations. Hence, we must have $X \cap \{b^{i,j,\ell}_{\psi, 1}, b^{i,j,\ell}_{\psi,2} : \psi \in [5]\} = \{b^{i,j,\ell}_{\psi, 1}, b^{i,j,\ell}_{\psi,2} : \psi \in [5] \setminus \{\varphi\}\}$ due to the packing equations and the triangle edges between $v^{i,j,\ell}_\varphi$ and the output vertices $\{b^{i,j,\ell}_{\psi, 1}, b^{i,j,\ell}_{\psi,2} : \psi \in [5] \setminus \{\varphi\}\}$.
For the left side of a path gadget $P^{i,j,\ell}$, we claim that $v^{i,j,\ell}_\varphi \notin X$ implies that $\mathbf{state}_X(i,j,\ell) = \mathbf{s}^{\varphi'}$ with $\varphi' \geq \varphi$. For $\varphi = 1$ there is nothing to show. One can see that $(\varphi', \varphi) \notin ([3] \times \{4,5\}) \cup (\{1\} \times \{2,3\})$ by considering the size of $X \cap \{u^{i,j,\ell}_1, u^{i,j,\ell}_2, a^{i,j,\ell}_1, a^{i,j,\ell}_2, a^{i,j,\ell}_3, a^{i,j,\ell}_4\}$ in those cases: Due to the triangle edges between the clique vertices $v^{i,j,\ell}_\psi$, $\psi \in [5]$ and auxiliary vertices $a^{i,j,\ell}_\gamma$, $\gamma \in [4]$, we see that $X$ contains at least two auxiliary vertices if $\varphi \geq 2$ and at least three if $\varphi \geq 4$. Using the packing equations, we see that this implies $|X \cap \{u^{i,j,\ell}_1, u^{i,j,\ell}_2\}| \leq 1$ if $\varphi \geq 2$ and $X \cap \{u^{i,j,\ell}_1, u^{i,j,\ell}_2\} = \emptyset$ if $\varphi \geq 4$, but the listed cases contradict this. It remains to handle the two cases $(\varphi', \varphi) = (2,3)$ and $(\varphi', \varphi) = (4,5)$. In the first case, the triangle edges between the vertex $v^{i,j,\ell}_3$ and the vertices $a^{i,j,\ell}_3$, $a^{i,j,\ell}_4$, $c^{i,j,\ell}_0$ together with the packing equations imply that $u^{i,j,\ell}_2$, $a^{i,j,\ell}_1$, $c^{i,j,\ell}_1 \notin X$, but then $\mathbf{state}_X(i,j,\ell) = \mathbf{1}_1 = \mathbf{s}^3 \neq \mathbf{s}^{\varphi'}$ because $u^{i,j,\ell}_2$, $a^{i,j,\ell}_1$, $c^{i,j,\ell}_1$, $\hat{r}$ is a path in $(P^{i,j,\ell} + \hat{r}) - X$. In the second case, the triangle edges between $v^{i,j,\ell}_5$ and the auxiliary vertices $a^{i,j,\ell}_2$, $a^{i,j,\ell}_3$, $a^{i,j,\ell}_4$ together with the packing equations imply that $u^{i,j,\ell}_1$, $u^{i,j,\ell}_2$, $a^{i,j,\ell}_1 \notin X$ and hence $\mathbf{state}_X(i,j,\ell) = \mathbf{0}_1 = \mathbf{s}^5 \neq \mathbf{s}^{\varphi'}$. This proves the claim.
Next, we claim that for any $i \in [t]$, $j \in [p]$, and $\ell_1, \ell_2 \in [{\nclss(\nregions)}]$ with $\ell_1 < \ell_2$, that the unique $\varphi_1 \in [5]$ and $\varphi_2 \in [5]$ such that $v^{i,j,\ell_1}_{\varphi_1} \notin X$ and $v^{i,j,\ell_2}_{\varphi_2} \notin X$ satisfy $\varphi_1 \geq \varphi_2$. We can assume without loss of generality that $\ell_2 = \ell_1 + 1$. By the previous arguments, we know that $b^{i,j,\ell_1}_{\varphi_1, 1}, b^{i,j,\ell_1}_{\varphi_1,2} \notin X$ and $\mathbf{state}_X(i,j,\ell_2) = \mathbf{s}^{\varphi'}$ with $\varphi' \geq \varphi_2$, so we are done if we can show that $\varphi_1 \geq \varphi'$. We do so by arguing that $G - X$ contains a cycle in all other cases, thus contradicting that $X$ is a feedback vertex set. If $\varphi_1 < \varphi'$ and $(\varphi_1, \varphi') \notin \{(2,3),(4,5)\}$, then $G[\{b^{i,j,\ell_1}_{\varphi_1,1}, b^{i,j,\ell_1}_{\varphi_1,2}, u^{i,j,\ell_1 + 1}_1, u^{i,j,\ell_1 + 1}_2\} \setminus X]$ simply contains a cycle. If $(\varphi_1, \varphi') = (2,3)$, then there is a cycle passing through the root $\hat{r}$ in $G - X$ visiting $\hat{r}$, $b^{i,j,\ell_1}_{2,1}$, $u^{i,j,\ell_1 + 1}_2$, and then uses the path to $\hat{r}$ inside $(P^{i,j,\ell_1 + 1} + \hat{r}) - X$ which exists due to $\mathbf{state}_X(i,j,\ell_1 + 1) = \mathbf{s}^{\varphi'} = \mathbf{s}^3 = \mathbf{1}_1$. If $(\varphi_1, \varphi') = (4,5)$, then there is a cycle in $G - X$ visiting $u^{i,j,\ell_1 + 1}_1$, $b^{i,j,\ell_1}_{4,1}$, $u^{i,j,\ell_1 + 1}_2$, and then uses the path between $u^{i,j,\ell_1 + 1}_2$ and $u^{i,j,\ell_1 + 1}_1$ in $(P^{i,j,\ell_1 + 1} + \hat{r}) - X$ which exists due to $\mathbf{state}_X(i,j,\ell_1 + 1) = \mathbf{s}^{\varphi'} = \mathbf{s}^5 = \mathbf{0}_1$. This shows the claim.
We say that $X$ \emph{cheats} from $P^{i,j,\ell}$ to $P^{i,j,\ell + 1}$ if $v^{i,j,\ell}_{\varphi_1}, v^{i,j,\ell + 1}_{\varphi_2} \notin X$ with $\varphi_1 > \varphi_2$. By the previous claim, there can be at most four cheats for fixed $i$ and $j$. For $\gamma \in [{4\ngrps\grpsize + 1}]$, we define the $\gamma$-th \emph{column region} $R^\gamma = [(\gamma - 1)m + 1, \gamma m]$. Since there are $t p$ paths, each admitting at most four cheats, there are at most $4 t p$ cheats in total, but there are $4 t p + 1$ column regions; hence, by the pigeonhole principle, some column region $R^\gamma$ contains no cheats, i.e., for all $i \in [t]$, $j \in [p]$, $\ell_1, \ell_2 \in R^\gamma$, $\varphi \in [5]$, we have $v^{i,j,\ell_1}_\varphi \notin X$ if and only if $v^{i,j,\ell_2}_\varphi \notin X$. Fix this $\gamma$ for the remainder of the proof.
We obtain sequences $\mathbf{h}^i = (h^i_1, \ldots, h^i_p) \in [5]^p$, $i \in [t]$, by defining $h^i_j \in [5]$ as the unique number satisfying $v^{i,j,\gamma m}_{h^i_j} \notin X$. Since $R^\gamma$ contains no cheats, note that we would obtain the same sequences if we use any column $\ell \in R^\gamma \setminus \{\gamma m\}$ instead of column $\gamma m$ in the definition. We obtain a truth assignment $\tau^i$ for variable group $i$ by setting $\tau^i = \kappa^{-1}(\mathbf{h}^i)$ if $\mathbf{h}^i \in \kappa(\{0,1\}^\beta)$ and otherwise picking an arbitrary truth assignment.
We claim that $\tau = \tau^1 \cup \cdots \cup \tau^t$ is a satisfying assignment of $\sigma$. To prove this claim, we begin by showing for all $i \in [t]$, $\ell \in R^\gamma$, $\mathbf{h} \in [5]^p$, that $\hat{x}^{i, \ell, \mathbf{h}} \in X$ implies $\mathbf{h} = \mathbf{h}^i$. Suppose that $\mathbf{h} = (h_1, \ldots, h_p) \neq \mathbf{h}^i$, then there is some $j \in [p]$ with $h_j \neq h^i_j$. There is an arrow from $v^{i,j,\ell}_{h^i_j} \notin X$ to some $x^{i, \ell, \mathbf{h}}_{\gamma'}$, $\gamma' \in [4p]$ (we write $\gamma'$ to avoid a clash with the fixed region index $\gamma$), but $X$ uses the passive solution on this arrow and hence $x^{i, \ell, \mathbf{h}}_{\gamma'} \notin X$ as well, otherwise the packing equation for the second triangle in the arrow would be violated. To resolve the triangle in $D^{i, \ell, \mathbf{h}}$ induced by $\{x^{i, \ell, \mathbf{h}}_{\gamma'}, y^{i, \ell, \mathbf{h}}_1, y^{i, \ell, \mathbf{h}}_2\}$, we must have $y^{i, \ell, \mathbf{h}}_1 \in X$ or $y^{i, \ell, \mathbf{h}}_2 \in X$. Hence, we must have $\hat{x}^{i, \ell, \mathbf{h}} \notin X$ in either case, as otherwise the packing equation for the triangle induced by $\{\hat{x}^{i, \ell, \mathbf{h}}, y^{i, \ell, \mathbf{h}}_1, y^{i, \ell, \mathbf{h}}_2\}$ would be violated. This proves the subclaim.
Consider any clause $C_{\ell'}$, $\ell' \in [0, m - 1]$; we now argue that $\tau$ satisfies clause $C_{\ell'}$. The clause cycle $Z^\ell$ with $\ell = (\gamma - 1)m + \ell' + 1 \in R^\gamma$ corresponds to clause $C_{\ell'}$ and since $X$ is a feedback vertex set, there exists some $z^\ell_\eta \in X \cap Z^\ell$, $\eta \in [q 5^p]$. By construction of $G$, there is at most one arrow incident to $z^\ell_\eta$. If there is no incident arrow, then $z^\ell_\eta$ is not contained in any of the subgraphs in the packing $\mathcal{P}$ and hence $z^\ell_\eta \in X$ contradicts $|X| = \mathrm{cost}_\mathcal{P}$. So, there is exactly one arrow incident to $z^\ell_\eta$ and by construction of $G$, this arrow comes from some $\hat{x}^{i, \ell, \mathbf{h}}$. We must have $\hat{x}^{i, \ell, \mathbf{h}} \in X$ as well, because $X$ uses the active solution on this arrow. The previous claim implies that $\mathbf{h} = \mathbf{h}^i$. Finally, such an arrow only exists, by construction, if $\kappa^{-1}(\mathbf{h}) = \kappa^{-1}(\mathbf{h}^i) = \tau^i$ satisfies clause $C_{\ell'}$, so $\tau$ must satisfy $C_{\ell'}$ as well. In this step we use that the definition of $\mathbf{h}^i$ is independent of the considered column in region $R^\gamma$. Since the choice of $C_{\ell'}$ was arbitrary, this shows that $\sigma$ is satisfiable. \qed
\end{proof}
\begin{lem}\label{thm:fvs_modtw_bound}
The graph $G = G(\sigma, \beta)$ has $\tcpw(G) \leq t p + (4 p + 3 + q) 5^p + \Oh(1)$ and a path decomposition of $G^q = G / \Pi_{tc}(G)$ of this width can be constructed in polynomial time.
\end{lem}
\begin{proof}
By construction, all sets $\{u_1^{i,j,\ell}, u_2^{i,j,\ell}\}$, $i \in [t]$, $j \in [p]$, $\ell \in [{\nclss(\nregions)}]$, are twinclasses. Let $G'$ be the graph obtained by contracting each of these twinclasses, denoting the resulting vertex by $u^{i,j,\ell}$, then $G^q$ is a subgraph of $G'$. We will show that $\tcpw(G) = \pw(G^q) \leq \pw(G') \leq t p + (4 p + 3 + q) 5^p + \Oh(1)$ by giving an appropriate strategy for the mixed-search-game on $G'$ and applying \cref{thm:mixed_search}.
\begin{algorithm}
Handling of arrows: whenever a searcher is placed on the tail $u$ of an arrow $A(u,v)$, we place searchers on all vertices of $A(u,v)$ and immediately afterwards remove the searchers from $A(u,v) - \{u,v\}$ again\;
Place searcher on $\hat{r}$\;
Place searchers on $u^{i,j,1}$ for all $i \in [t]$, $j \in [p]$\;
\For{$\ell \in [{\nclss(\nregions)}]$}
{
Place searchers on all vertices of the clause cycle $Z^\ell$\;
\For{$i \in [t]$}
{
\For{$\mathbf{h} \in [5]^p$}
{
Place searchers on all vertices of the decoding gadget $D^{i, \ell, \mathbf{h}}$\;
}
\For{$j \in [p]$}
{
Place searchers on all vertices of $P^{i,j,\ell} - \{u_1^{i,j,\ell}, u_2^{i,j,\ell}\}$\;
Remove searcher from $u^{i,j,\ell}$ and place it on $u^{i,j,\ell + 1}$\;
Remove searchers on $P^{i,j,\ell} - \{u_1^{i,j,\ell}, u_2^{i,j,\ell}\}$\;
}
\For{$\mathbf{h} \in [5]^p$}
{
Remove searchers on $D^{i, \ell, \mathbf{h}}$\;
}
}
Remove searchers on $Z^\ell$\;
}
\caption{Mixed-search-strategy for $G'$}
\label{algo:fvs_modtw_search_game}
\end{algorithm}
The mixed-search-strategy for $G'$ is described in \cref{algo:fvs_modtw_search_game} and the central idea is to proceed column by column and group by group in each column. The maximum number of placed searchers occurs on line $10$ and is divided into one searcher for $\hat{r}$; one searcher for each $(i,j) \in [t] \times [p]$; $q 5^p$ searchers for the current $Z^\ell$; $(4p + 3) 5^p$ searchers for all $D^{i, \ell, \mathbf{h}}$ with the current $i$ and $\ell$; $\Oh(1)$ searchers for the current $P^{i,j,\ell}$; and $\Oh(1)$ searchers to handle an arrow $A(u,v)$. Note that arrows can be handled sequentially, i.e., there will be at any point in the search-strategy at most one arrow $A(u,v)$ with searchers on $A(u,v) - \{u,v\}$. Furthermore, note that whenever we place a searcher on the tail $u$ of an arrow $A(u,v)$, we have already placed a searcher on the head $v$ of the arrow. \qed
\end{proof}
\begin{thm}
There is no algorithm that solves \textsc{Feedback Vertex Set}\xspace, given a path decomposition of $G^q = G/\Pi_{tc}(G)$ of width $k$, in time $\Oh^*((5 - \varepsilon)^k)$ for some $\varepsilon > 0$, unless SETH\xspace fails.
\end{thm}
\begin{proof}
Assume that there exists an algorithm $\mathcal{A}$ that solves \textsc{Feedback Vertex Set}\xspace in time $\Oh^*((5 - \varepsilon)^k)$ for some $\varepsilon > 0$ given a path decomposition of $G^q = G / \Pi_{tc}(G)$ of width $k$. We define $\delta_1 < 1$ such that $(5 - \varepsilon)^{\log_5(2)} = 2^{\delta_1}$ and, given $\beta$, we define $\delta_2$ such that $(5 - \varepsilon)^{1 / \beta} = 2^{\delta_2}$. By picking $\beta$ large enough, we can ensure that $\delta = \delta_1 + \delta_2 < 1$. We will show how to solve $q$-\textsc{Satisfiability}\xspace using $\mathcal{A}$ in time $\Oh^*(2^{\delta n})$, where $n$ is the number of variables, for all $q$, thus contradicting SETH\xspace.
Given a $q$-\textsc{Satisfiability}\xspace instance $\sigma$, we construct $G = G(\sigma, \beta)$ and the path decomposition from \cref{thm:fvs_modtw_bound} in polynomial time; note that we have $q = \Oh(1)$, $\beta = \Oh(1)$, and hence $p = \Oh(1)$. We then run $\mathcal{A}$ on $G$ and return its answer. This is correct by \cref{thm:fvs_modtw_sat_to_sol} and \cref{thm:fvs_modtw_sol_to_sat}. Due to \cref{thm:fvs_modtw_bound}, we have that $\tcpw(G) \leq t p + f(q, p)$ for some function $f(q, p)$ and hence we can bound the running time by
\begin{alignat*}{6}
\phantom{\leq} \,\, & \Oh^*\left( (5 - \varepsilon)^{t p + f(q, p)} \right)
& \,\,\leq\,\, & \Oh^*\left( (5 - \varepsilon)^{t p} \right)
& \,\,\leq\,\, & \Oh^*\left( (5 - \varepsilon)^{\lceil \frac{n}{\beta} \rceil p} \right) \\
\leq \,\, & \Oh^*\left( (5 - \varepsilon)^{\frac{n}{\beta} p} \right)
& \,\,\leq\,\, & \Oh^*\left( (5 - \varepsilon)^{\frac{n}{\beta} \lceil \log_5(2^\beta) \rceil} \right)
& \,\,\leq\,\, & \Oh^*\left( (5 - \varepsilon)^{\frac{n}{\beta} \log_5(2^\beta)} (5 - \varepsilon)^{\frac{n}{\beta}} \right)\\
\leq \,\, & \Oh^*\left( 2^{\delta_1 \beta \frac{n}{\beta}} 2^{\delta_2 n} \right)
& \,\,\leq\,\, & \Oh^*\left( 2^{(\delta_1 + \delta_2) n} \right)
& \,\,\leq\,\, & \Oh^*\left( 2^{\delta n} \right),
\end{alignat*}
hence completing the proof. \qed
\end{proof}
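As a quick numerical sanity check of the parameter choices in the last proof, the following snippet (ours, purely illustrative) computes $\delta_1$ and $\delta_2$ explicitly:
\begin{verbatim}
import math

def seth_deltas(eps, beta):
    """delta_1 with (5 - eps)^(log_5 2) = 2^delta_1 and
    delta_2 with (5 - eps)^(1 / beta) = 2^delta_2."""
    delta1 = math.log2(5 - eps) / math.log2(5)  # < 1 for eps > 0
    delta2 = math.log2(5 - eps) / beta          # -> 0 as beta grows
    return delta1, delta2

# Example: seth_deltas(0.5, 100) ~ (0.934, 0.022), so delta < 1.
\end{verbatim}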
\section{Introduction}
Connectivity constraints are a very natural form of global constraints in the realm of graph problems. We study connectivity problems from a fine-grained parameterized perspective. The starting point is an influential paper of Cygan et al.~\cite{CyganNPPRW22} introducing the cut-and-count-technique which yields randomized algorithms with running time $\Oh^*(\alpha^{\tw})$\footnote{The $\Oh^*$-notation hides polynomial factors in the input size.}, for some constant \emph{base} $\alpha > 1$, for connectivity problems parameterized by \emph{treewidth} ($\tw$). The obtained bases $\alpha$ were proven to be optimal assuming the Strong Exponential-Time Hypothesis\footnote{The hypothesis that for every $\delta < 1$, there is some $q$ such that $q$-\textsc{Satisfiability}\xspace cannot be solved in time $\Oh(2^{\delta n})$, where $n$ is the number of variables.} (SETH) \cite{CyganNPPRW11arxiv}.
Since dense graphs cannot have small treewidth, the results for treewidth do not help for graphs with dense structure. A well-known tool to capture dense structure is the \emph{modular decomposition} of a graph, which recursively partitions the graph into \emph{modules} whose members have the same neighborhood outside of the module. Contracting these modules, we obtain a \emph{quotient graph} describing the adjacencies between the modules. Having isolated the dense part to the modules, measuring the complexity of the quotient graph by standard graph parameters such as treewidth yields e.g.\ the parameter \emph{modular-treewidth} ($\modtw$), a natural intermediate step between treewidth and clique-width. While modular-treewidth is not as general as clique-width, the algorithms for computing treewidth transfer to modular-treewidth, yielding e.g.\ reasonable constant-factor approximations for modular-treewidth in single-exponential time, whereas for clique-width we are currently only able to obtain approximations with exponential error.
We obtain the first tight running times for connectivity problems parameterized by modular-treewidth. To do so, we lift the algorithms using the cut-and-count-technique from treewidth to modular-treewidth. A crucial observation is that all vertices inside a module will be connected by choosing a single vertex from a neighboring module. In some cases, this observation is strong enough to lift the treewidth-based algorithms to modular-treewidth for free, i.e., the base $\alpha$ of the running time does not increase, showing that we can deal with a greater generality in input structure at no cost in complexity for these problems.
\begin{thm}[informal]\label{thm:intro_reductions}
There are one-sided error Monte-Carlo algorithms that, given a decomposition witnessing modular-treewidth $k$, can solve
\begin{itemize}
\item \textsc{Steiner Tree}\xspace in time $\Oh^*(3^k)$,
\item \textsc{Connected Dominating Set}\xspace in time $\Oh^*(4^k)$.
\end{itemize}
\end{thm}
These bases are optimal under SETH, by known results of Cygan et al.~\cite{CyganNPPRW11arxiv}.
However, in other cases the interplay of the connectivity constraint and the remaining problem constraints does increase the complexity for modular-treewidth compared to treewidth. In these cases, we provide new algorithms adapting the cut-and-count-technique to this more intricate setting.
\begin{thm}[informal]\label{thm:intro_algos}
There are one-sided error Monte-Carlo algorithms that, given a decomposition witnessing modular-treewidth $k$, can solve
\begin{itemize}
\item \textsc{Connected Vertex Cover}\xspace in time $\Oh^*(5^k)$,
\item \textsc{Feedback Vertex Set}\xspace in time $\Oh^*(5^k)$.
\end{itemize}
\end{thm}
Both problems can be solved in time $\Oh^*(3^k)$ parameterized by treewidth~\cite{CyganNPPRW22}. In contrast, \textsc{Vertex Cover}\xspace (without the connectivity constraint) has complexity $\Oh^*(2^k)$ with respect to treewidth~\cite{LokshtanovMS18} and modular-treewidth simultaneously.
For these latter two problems, we provide new lower bounds to show that the bases are optimal under SETH. However, we do not need the full power of the modular decomposition to prove the lower bounds. The modular decomposition allows for \emph{recursive} partitioning, when instead allowing for only a single level of partitioning and limited complexity inside the modules, we obtain parameters called \emph{twinclass-pathwidth} ($\tcpw$) and \emph{twinclass-treewidth}.
\begin{thm}\label{thm:intro_lower_bounds}
Unless SETH fails, the following statements hold for any $\varepsilon > 0$:
\begin{itemize}
\item \textsc{Connected Vertex Cover}\xspace cannot be solved in time $\Oh^*((5-\varepsilon)^{\tcpw})$.
\item \textsc{Feedback Vertex Set}\xspace cannot be solved in time $\Oh^*((5-\varepsilon)^{\tcpw})$.
\end{itemize}
\end{thm}
The obtained results on connectivity problems parameterized by modular-treewidth are situated in the larger context of a research program aimed at determining the optimal running times for connectivity problems relative to width-parameters of differing generality, thus quantifying the price of generality in this setting. The known results are summarized in \cref{table:conn_time_overview}. Beyond the results for treewidth by Cygan et al.~\cite{CyganNPPRW11arxiv,CyganNPPRW22}, Bojikian et al.~\cite{BojikianCHK23} obtain tight results for the more restrictive \emph{cutwidth} by either providing faster algorithms resulting from combining cut-and-count with the rank-based approach or by showing that the same lower bounds already hold for cutwidth. Hegerfeld and Kratsch~\cite{HegerfeldK23cw} consider \emph{clique-width} and obtain tight results for \textsc{Connected Vertex Cover}\xspace and \textsc{Connected Dominating Set}\xspace. Their algorithms combine cut-and-count with several nontrivial techniques to speed up dynamic programming on clique-expressions, where the interaction between cut-and-count and clique-width can yield more involved states compared to modular-treewidth, as clique-width is more general. These algorithms are complemented by new lower bound constructions following similar high-level principles as for modular-treewidth, but allow for more flexibility in the gadget design due to the mentioned generality. However, the techniques of Hegerfeld and Kratsch~\cite{HegerfeldK23cw} for clique-width yield tight results for fewer problems compared to the present work; in particular, the optimal bases for \textsc{Steiner Tree}\xspace and \textsc{Feedback Vertex Set}\xspace parameterized by clique-width are currently not known.
\newcommand{\sexp}[1]{$\Oh^*(#1^k)$}%
\begin{table}%
\centering
\begin{tabular}{l|cccc}%
Parameters & cutwidth & treewidth & modular-tw & clique-width\\%
\hline%
\textsc{Connected Vertex Cover}\xspace & \sexp{2} & \sexp{3} & \sexp{5} & \sexp{6} \\%
\textsc{Connected Dominating Set}\xspace & \sexp{3} & \sexp{4} & \sexp{4} & \sexp{5} \\%
\textsc{Steiner Tree}\xspace & \sexp{3} & \sexp{3} & \sexp{3} & ? \\%
\textsc{Feedback Vertex Set}\xspace & \sexp{2} & \sexp{3} & \sexp{5} & ? \\%
\hline%
References & \cite{BojikianCHK23} & \cite{CyganNPPRW11arxiv,CyganNPPRW22} & here & \cite{HegerfeldK23cw}%
\end{tabular}%
\caption{Optimal running times of connectivity problems with respect to various width-parameters listed in increasing generality. The results in the penultimate column are obtained in this paper. The ``?'' denotes cases, where an algorithm with single-exponential running time is known by Bergougnoux and Kant\'e~\cite{BergougnouxK19a}, but a gap between the lower bound and algorithm remains.}\label{table:conn_time_overview}%
\vspace*{-1cm}
\end{table}%
\subsubsection*{Related work.}
We survey some more of the literature on parameterized algorithms for connectivity problems relative to dense width-parameters. Bergougnoux~\cite{Bergougnoux19} has applied cut-and-count to several width-parameters based on structured neighborhoods such as clique-width, rank-width, or mim-width. Building upon the rank-based approach of Bodlaender et al.~\cite{BodlaenderCKN15}, Bergougnoux and Kant\'e~\cite{BergougnouxK19a} obtain single-exponential running times $\Oh^*(\alpha^{\cw})$ for a large class of connectivity problems parameterized by clique-width ($\cw$). The same authors~\cite{BergougnouxK21} also generalize this approach to other dense width-parameters via structured neighborhoods. All these works deal with general \textsc{Connected $(\sigma, \rho)$-Dominating Set} problems capturing a wide range of problems; this generality of problems (and parameters) comes at the cost of yielding running times that are far from optimal for specific problem-parameter-combinations, e.g., the first article~\cite{Bergougnoux19} is the most optimized for clique-width and obtains the running time $\Oh^*((2^{4 + \omega})^{\cw}) \geq \Oh^*(64^{\cw})$, where $\omega$ is the matrix multiplication exponent~\cite{AlmanW21}, for \textsc{Connected Dominating Set}\xspace. Bergougnoux et al.~\cite{BergougnouxDJ23} obtain XP algorithms parameterized by mim-width for problems expressible in a logic that can also capture connectivity constraints. Beyond dense width-parameters, cut-and-count has also been applied to the parameters branchwidth~\cite{PinoBR16} and treedepth~\cite{HegerfeldK20,NederlofPSW20}.
Our version of modular-treewidth was first used by Bodlaender and Jansen for \textsc{Maximum Cut}~\cite{BodlaenderJ00}. Several papers~\cite{Lampis20,Mengel16,PaulusmaSS16} also use the name modular-treewidth, but use it to refer to what we call \emph{twinclass-treewidth}. In particular, Lampis~\cite{Lampis20} obtains tight results under SETH for \textsc{$q$-Coloring} with respect to twinclass-treewidth and clique-width. Hegerfeld and Kratsch~\cite{HegerfeldK22} obtain tight results for \textsc{Odd Cycle Transversal}\xspace parameterized by twinclass-pathwidth and clique-width and for \textsc{Dominating Set}\xspace parameterized by twinclass-cutwidth. Kratsch and Nelles~\cite{KratschN22} combine modular decompositions with tree-depth in various ways and obtain parameterized algorithms for various efficiently solvable problems.
\subsubsection*{Organization.}
In \cref{sec:modtw_prelims} we discuss the general preliminaries and in \cref{sec:modtw_cutandcount} the cut-and-count-technique. We prove \cref{thm:intro_reductions} in \cref{sec:modtw_reduction}. \Cref{sec:modtw_cvc_algo} contains the \textsc{Connected Vertex Cover}\xspace algorithm of \cref{thm:intro_algos} and \cref{sec:modtw_fvs_algo} contains the \textsc{Feedback Vertex Set}\xspace algorithm. \Cref{sec:modtw_cvc_lb} contains the \textsc{Connected Vertex Cover}\xspace lower bound of \cref{thm:intro_lower_bounds} and \cref{sec:modtw_fvs_lb} the \textsc{Feedback Vertex Set}\xspace lower bound. \Cref{sec:modtw_vc_algo} contains an algorithm for \textsc{Vertex Cover}\xspace used as a subroutine. The problem definitions can be found in \cref{sec:problems}.
\subsection{Steiner Tree}
\label{sec:modtw_st_reduction}
In the \textsc{(Node)} \textsc{Steiner Tree}\xspace problem, we are given a graph $G = (V,E)$, a set of \emph{terminals} $K \subseteq V$, a cost function $\mathbf{c} \colon V \rightarrow \mathbb{N} \setminus \{0\}$, and an integer $\overline{b}$ and we have to decide whether there exists a subset of vertices $X \subseteq V$ such that $K \subseteq X$, $G[X]$ is connected, and $\mathbf{c}(X) \leq \overline{b}$.
We assume that $G$ is a connected graph, otherwise the answer is trivially no if the terminals are distributed across several connected components, or we can just look at the connected component containing all terminals. We also assume that $G[K]$ is not connected, as otherwise $X = K$ is trivially an optimal solution. Furthermore, we assume that the costs $\mathbf{c}(v)$, $v \in V$, are at most polynomial in $|V|$.
For \textsc{Steiner Tree}\xspace, it is sufficient to consider the topmost quotient graph $G^q := G^q_V = G / \Pi_{mod}(G)$, unless there is a single module $M \in \Pi_{mod}(G) = \mathtt{children}(V)$ containing all terminals. In this edge case, we either find a solution of size $|K| + 1$ by taking a vertex in a module adjacent to $M$, or we consider the graph $G[M]$, allowing us to recurse into the module $M$.
We first consider the case that all terminals are contained in a single module $M \in \Pi_{mod}(G)$. The next lemma shows that we can either find a solution of size $|K| + 1$, which can be computed in polynomial time, or it suffices to consider the graph $G[M]$.
\begin{lem}
\label{thm:st_all_terminals_in_module}
If there is a module $M \in \Pi_{mod}(G)$ of $G$ such that $K \subseteq M$, then there is an optimum Steiner tree $X$ satisfying $X \subseteq M$, or there is an optimum Steiner tree $X$ satisfying $|X| = |K| + 1$.
\end{lem}
\begin{proof}
Consider a Steiner tree $X$ such that $X \not\subseteq M$, then $X$ has to contain at least one vertex $v$ inside a module $M' \in \Pi_{mod}(G)$ adjacent to $M$. We claim that $X' = K \cup \{v\}$ is a Steiner tree with $\mathbf{c}(X') \leq \mathbf{c}(X)$. Clearly, $X' \subseteq X$, and since the costs are positive we have that $\mathbf{c}(X') \leq \mathbf{c}(X)$. Since $K \subseteq M$, the vertex $v$ is adjacent to all terminals $K$ and $G[X']$ is connected, hence $X'$ is a Steiner tree.
If there is no optimum Steiner tree $X$ satisfying $X \subseteq M$, then by applying the previous argument to an optimum Steiner tree, we obtain an optimum Steiner tree $X$ satisfying $|X| = |K| + 1$. \qed
\end{proof}
After recursing until no module $M \in \Pi_{mod}(G)$ contains all terminals (and updating $G$ accordingly), we can apply the following reduction to solve the problem if the quotient graph is prime. Let $(G, K, \mathbf{c}, \overline{b})$ be a \textsc{Steiner Tree}\xspace instance such that $|\pi_V(K)| \geq 2$ and $G^q = G / \Pi_{mod}(G)$ is prime. We consider the \textsc{Steiner Tree}\xspace instance $(G^q, K^q, \mathbf{c}^q, \overline{b}^q)$ where $K^q = \pi_{V}(K)$, $\mathbf{c}^q(v^q_M) = \mathbf{c}(K \cap M) = \sum_{v \in K \cap M} \mathbf{c}(v)$ if $K \cap M \neq \emptyset$ and $\mathbf{c}^q(v^q_M) = \min_{v \in M} \mathbf{c}(v)$ otherwise, and $\overline{b}^q = \overline{b}$.
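The quotient instance can be assembled directly from the modular partition, as in the following sketch (our own helper; \texttt{modules} lists the vertex sets in $\Pi_{mod}(G)$, indexed in the same order as the quotient vertices $v^q_M$):
\begin{verbatim}
def quotient_instance(modules, cost, K, b_bar):
    """Build (K^q, c^q, b^q) for the quotient Steiner Tree instance."""
    Kq, cq = set(), {}
    for idx, M in enumerate(modules):
        KM = M & K
        if KM:                     # module contains terminals
            Kq.add(idx)
            cq[idx] = sum(cost[v] for v in KM)
        else:                      # cheapest representative
            cq[idx] = min(cost[v] for v in M)
    return Kq, cq, b_bar           # the budget is unchanged
\end{verbatim}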
\begin{lem}\label{thm:st_mod_reduction}
Suppose that $(G, K, \mathbf{c}, \overline{b})$ is a \textsc{Steiner Tree}\xspace instance such that no module $M \in \Pi_{mod}(G)$ contains all terminals $K$ and $G^q$ is prime.
Then, the answer to the \textsc{Steiner Tree}\xspace instance $(G, K, \mathbf{c}, \overline{b})$ is positive if and only if the answer to the \textsc{Steiner Tree}\xspace instance $(G^q, K^q, \mathbf{c}^q, \overline{b}^q)$ is positive.
\end{lem}
\begin{proof}
If $X$ is an optimum Steiner tree of $(G, K, \mathbf{c}, \overline{b})$, then we claim that $X^q = \pi_V(X)$ is a Steiner tree of $(G^q, K^q, \mathbf{c}^q, \overline{b}^q)$ with $\mathbf{c}^q(X^q) \leq \mathbf{c}(X)$. We have that $K^q = \pi_V(K)$, so $K \subseteq X$ implies that $K^q \subseteq X^q$. By \cref{thm:quotient_connected}, we see that $G^q[X^q]$ is connected as well. By definition of $X^q$ and $\mathbf{c}^q$, we have for all $v^q_M \in X^q$ that $\mathbf{c}^q(v^q_M) \leq \mathbf{c}(X \cap M)$ and hence $\mathbf{c}^q(X^q) \leq \mathbf{c}(X) \leq \overline{b} = \overline{b}^q$.
If $X^q$ is an optimum Steiner tree of $(G^q, K^q, \mathbf{c}^q, \overline{b}^q)$, then we claim that $X = K \cup \{v_M : v^q_M \in X^q, K \cap M = \emptyset\}$, where $v_M = \arg \min_{v \in M} \mathbf{c}(v)$, is a Steiner tree of $(G, K, \mathbf{c}, \overline{b})$ with $\mathbf{c}(X) \leq \mathbf{c}^q(X^q)$. We have that $K \subseteq X$ by definition of $X$ and for the costs we compute that $\mathbf{c}(X) = \mathbf{c}(K) + \mathbf{c}(X \setminus K) = \mathbf{c}^q(K^q) + \mathbf{c}^q(X^q \setminus K^q) = \mathbf{c}^q(X^q) \leq \overline{b}^q = \overline{b}$. Note that $X^q$ satisfies $X^q = \pi_V(X)$ by definition of $X$. Therefore, \cref{thm:quotient_connected} implies that $G[X]$ is connected and $X$ is a Steiner tree of $G$. \qed
\end{proof}
\begin{prop}[\cite{CyganNPPRW22}]
\label{thm:st_tw_algo}
There exists a Monte-Carlo algorithm that given a tree decomposition of width at most $k$ for $G$ solves \textsc{Steiner Tree}\xspace in time $\Oh^*(3^k)$. The algorithm cannot give false positives and may give false negatives with probability at most $1/2$.
\end{prop}
\begin{proof}
The algorithm presented by Cygan et al.~\cite{CyganNPPRW11} can be easily augmented to handle positive vertex costs in this running time under the assumption that the costs $\mathbf{c}(v)$, $v \in V$, are at most polynomial in $|V|$. \qed
\end{proof}
By recursing, applying \cref{thm:st_tw_algo} to solve the reduced instance from \cref{thm:st_mod_reduction}, and handling parallel and series nodes, we obtain the following.
\begin{thm}
\label{thm:st_modtw_algo}
There exists a Monte-Carlo algorithm that given a tree decomposition of width at most $k$ for every prime node in the modular decomposition of $G$ solves \textsc{Steiner Tree}\xspace in time $\Oh^*(3^k)$. The algorithm cannot give false positives and may give false negatives with probability at most $1/2$.
\end{thm}
\begin{proof}
If no module $M \in \Pi_{mod}(G)$ contains all terminals $K$, then we want to invoke \cref{thm:st_mod_reduction}. If $G^q$ is a parallel node, then the answer is trivially no. If $G^q$ is a series node, then $G[K]$ is already connected, but we have assumed that this is not the case. Hence, by \cref{thm:gallai_modular} $G^q$ must be a prime node and we can indeed invoke \cref{thm:st_mod_reduction}, so it suffices to solve the \textsc{Steiner Tree}\xspace instance $(G^q, K^q, \mathbf{c}^q, \overline{b}^q)$. By definition of modular-treewidth, we have $\tw(G^q) \leq \modtw(G) \leq k$ and we are given a corresponding tree decomposition of $G^q$. Hence, we can simply run the algorithm of \cref{thm:st_tw_algo} and return its result.
If some module $M \in \Pi_{mod}(G)$ contains all terminals $K$, then due to \cref{thm:st_all_terminals_in_module} we first compute in polynomial time an optimum Steiner tree $X_1$ of $G$ subject to $|X_1| = |K| + 1$ by brute force. If $\mathbf{c}(X_1) \leq \overline{b}$, then we answer yes. Otherwise, we repeatedly recurse into the module $M$ until we reach a node $G^q_* = G_* / \Pi_{mod}(G_*)$ in the modular decomposition of $G$ such that no $M_* \in \Pi_{mod}(G_*)$ contains all terminals $K$. We can then solve the \textsc{Steiner Tree}\xspace instance $(G_*, K, \mathbf{c}\big|_{V(G_*)}, \overline{b})$ like in the first paragraph and return its answer. Note that this recursion can never lead to a $G_*$ with $|V(G_*)| = 1$ as that would imply $|K| = 1$, which contradicts the assumption that $G[K]$ is not connected.
As we call \cref{thm:st_tw_algo} at most once, we obtain the same error bound. \qed
\end{proof}
Cygan et al.~\cite{CyganNPPRW11arxiv} have shown that \textsc{Steiner Tree}\xspace cannot be solved in time $\Oh^*((3 - \varepsilon)^{\pw(G)})$ for any $\varepsilon > 0$, unless SETH\xspace fails. Since $\modtw(G) \leq \tw(G) \leq \pw(G)$, this shows that the running time of \cref{thm:st_modtw_algo} is tight.
|
{
"arxiv_id": "2302.14204",
"language": "en",
"timestamp": "2023-03-01T02:05:09",
"url": "https://arxiv.org/abs/2302.14204",
"yymm": "2302"
} | \section{Introduction}
\label{sec:intro}
Deep learning has shown extraordinary performance in recognizing and discriminating different sounds in recent years. However, such good performance relies on a large amount of high-quality labeled data. Although few-shot learning has been proposed to learn robust classifiers from only a few examples, most existing works only apply to image classification tasks~\cite{matchingnetwork,prototypical,relationnet,maml,imprinting,closerlook,activation,TPN}. Collecting a large amount of labeled image data is time-consuming and expensive, and collecting audio annotations is even more difficult. For example, it is intuitive for humans to label an image with ``dog'' by looking at the entire image at a glance; however, it usually takes much longer to annotate audio with ``dog barking'', as it takes more effort to listen to and understand the entire audio clip. Furthermore, it is almost impossible for humans to annotate an audio clip by only looking at its spectrogram. Additionally, humans rely more heavily on visual cues than audio cues, so it is sometimes difficult to give precise labels by only listening to audio clips, a classic example being the confusion between ``baby crying'' and ``cat meowing''~\cite{meowingcry}. All the above-mentioned challenges create a great demand for few-shot audio classification algorithms.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{new1.png}
\vspace{-0.1in}
\caption{Illustration of our HalluAudio idea. Detailed structure information in images is utilized as ``concepts'' to improve few-shot learning performance, but it lacks effectiveness due to the additional labeling cost and the restricted scope of ``concepts''. However, for audio data, the frequency domain in the spectrogram is discriminative. We hallucinate the high-frequency and low-frequency areas as hidden concepts lying in the spectrogram, which are utilized to improve the few-shot audio classification.}
\label{fig:illustration}
\vspace{-0.1in}
\end{figure}
However, there are only a handful of works addressing few-shot audio classification~\cite{fsactraining,fsacstudy,fsaed,metaaudio}. Among those, most attempted to directly apply general few-shot learning methods like Prototypical Network~\cite{prototypical} and MAML~\cite{maml} to audio data. Beyond that, a very limited number of works tried to develop new methods for few-shot audio classification. For example, \cite{audiognn} proposed an attentional GNN for audio classification, \cite{attentionsimi} developed an attention similarity module, and \cite{raresoundeventdetection} integrated CTM~\cite{ctm}, TPN~\cite{TPN}, and MixUp~\cite{mixup} with audio data augmentation to build a task-adaptive module. Nonetheless, all these methods still focus only on the extracted unstructured embedding space rather than on the audio spectrogram itself, just as in the most common approaches to few-shot image classification. In other words, those methods may be reasonable for handling visual images but are not designed to exploit the special modality of the audio spectrogram in image format.
In terms of the images themselves rather than their embeddings, \cite{conceptlearner} is the first meta-learning work to exploit the utility of concepts in images. What are {\it concepts} in images? They are items carrying structured knowledge, such as the head, wing, and tail of a bird. Given those human-interpretable concepts, \cite{conceptlearner} is able to improve performance in a straightforward way while also introducing reasoning into the recognition, which sets it apart from methods that only target the unstructured embedding space and are prone to being a black box.
Although audio spectrograms can be presented in the same format as visual images and fed into similar neural networks, it is unclear whether interpretable concepts can be used for audio spectrograms. First and foremost, do ``real'' structured concepts that humans can recognize even exist in audio spectrograms? We can easily recognize the head, wings, or tail of a bird, but it remains unexplored whether there is a similar pattern in the audio spectrogram. Secondly, a strong prerequisite for using those interpretable concepts is that samples belonging to similar classes share that structured knowledge. For example, sparrows and terns both have heads, wings, and tails, whereas laptops have none of those concepts, so there is a barrier to using concepts when classifying laptops and sparrows. Lastly, annotating the bounding boxes and labels for the concepts in images requires a large amount of extra work. Consequently, this notably restricts the utilization of structured concepts: only a very limited number of datasets provide such extra detailed labels. For example, the CUB dataset~\cite{cub2011} has the detailed locations of 15 concepts in each image, without which it is not feasible to learn those structured concepts.
Motivated by those challenges in audio spectrograms, we propose HalluAudio, a meta-learning method that hallucinates high-frequency and low-frequency parts of the audio spectrogram as structured concepts and then utilizes those concepts to build frequency-specific learners. More specifically, the high-frequency prototype and the low-frequency prototype are constructed from the high-frequency part and the low-frequency part of the spectrogram, respectively. HalluAudio then aggregates the high-frequency, low-frequency, and original prototypes as the representation of a given audio spectrogram. With this way of ``hallucinating'' the audio spectrogram, the previously mentioned challenges are addressed as follows: (1) it provides a practical way of depicting concepts for audio spectrograms; (2) it does not rely on the assumption that samples belong to similar classes, because every audio spectrogram has concepts in the high- and low-frequency areas; (3) it needs no extra labeling work, because the high- and low-frequency areas can be derived from specific Hz ranges in the spectrogram. To the best of our knowledge, this is the first method directly exploring the pattern in the audio spectrogram for few-shot audio classification, which is essentially different from methods leveraging the unstructured embedding space.
\section{Proposed Method}
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{rebuttal_flowchart.png}
\vspace{-0.1 in}
\caption{The proposed HalluAudio in a 3-way 1-shot setting. We compute three types of distance and aggregate them for a final decision with the prototypical network. The first type is the distance between the query audio's embedding and the prototypes of the support audios. The second and third types are computed between the embeddings of the query audio in the high-frequency and low-frequency domains and the corresponding prototypes of the support audios, respectively. The high-frequency and low-frequency domains in the spectrogram serve as two hallucinated concepts. }
\label{fig:flowchart}
\vspace{-0.1 in}
\end{figure}
\subsection{Problem Definition}
In few-shot audio classification, the goal is to train a robust classifier for novel-class audio data given only a few examples. During training, we are given a training dataset $\cD_{base}$ containing many-shot audios from base classes $\cC_{base}$. During testing, we are given a test dataset $\cD_{novel}$ containing few-shot audios from novel classes $\cC_{novel}$ where $\cC_{base} \cap \cC_{novel}=\emptyset$. In an $N$-way $K$-shot task, we are given a support set $\cS=\{(\cX_s,\cY_s)\}$ and one query sample ${\mathbf x}_q$ \cite{attentionsimi}, where $\cX_s$ consists of $N\times K$ audios $({\mathbf x}_1,{\mathbf x}_2,\dots,{\mathbf x}_{N\times K})$, $\cY_s$ are their class labels $(y_1,y_2,\dots,y_{N\times K})$ and ${\mathbf x}_q$ belongs to those $N$ classes.
\subsection{Method Description}
As summarized in Figure \ref{fig:flowchart}, the input of the neural networks for a given audio file's waveform $\w_l$ is the log mel spectrogram ${\mathbf x}_l$, which represents the amplitude of the audio in $T$ $\times$ $F$ dimensions, where $T$ is the time-domain range and $F$ is the frequency range in mel-scale. In our HalluAudio, we hallucinate frequencies from different ranges as the structured concepts embedded in the log mel spectrogram. More specifically, we denote the frequency hallucination group as $\cM=\{\m^{(n)}\}_{n=1}^N$, where $\m^{(n)}$ is the $n$-th binary vector masking the frequency area, and $N$ is the number of masks. With this, the $n$-th frequency prototype for class $k$ is $\p_k^{(n)}=\frac{1}{|\cS_k|}\sum_{({\mathbf x}_l,y_l)\in\cS_k}{f^{(n)}({\mathbf x}_l\cdot \m^{(n)})}$, where $f^{(n)}(\cdot)$ is the feature extractor for the $n$-th hallucination. Then, the final probability of a given ${\mathbf x}$ belonging to class $k$ combines the results from the original spectrogram and the different frequency groups:
$$\frac{\exp \left( -d\left(f({\mathbf x}),\p_k\right)-\sum_n{d\left(f^{(n)}({\mathbf x}\cdot \m^{(n)}),\p_k^{(n)}\right)} \right )}{\sum_{k'}\exp \left(-d\left(f({\mathbf x}),\p_{k'}\right)-\sum_n{d\left(f^{(n)}({\mathbf x}\cdot \m^{(n)}),\p_{k'}^{(n)}\right)} \right )},$$
where $f(\cdot)$ is the feature extractor for the whole spectrogram, which has the same structure as $f^{(n)}(\cdot)$, $\p_k$ is the prototype of the whole spectrogram, $\p_k=\frac{1}{|\cS_k|}\sum_{({\mathbf x}_l,y_l)\in\cS_k}{f({\mathbf x}_l)}$, and $d(\cdot)$ is the Euclidean distance.
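To make the aggregation concrete, the following NumPy sketch computes the combined class probabilities; \texttt{embed}, \texttt{embed\_n}, and \texttt{masks} are illustrative stand-ins for $f(\cdot)$, $f^{(n)}(\cdot)$, and $\cM$, and this is not our released code.
\begin{verbatim}
import numpy as np

# Illustrative sketch of the prototype aggregation. `support` is a list of
# spectrograms with `labels`; each mask broadcasts over the frequency axis.
def class_probabilities(x, support, labels, embed, embed_n, masks):
    classes = sorted(set(labels))
    logits = []
    for k in classes:
        S_k = [s for s, y in zip(support, labels) if y == k]
        p_k = np.mean([embed(s) for s in S_k], axis=0)   # whole-spectrogram prototype
        dist = np.linalg.norm(embed(x) - p_k)            # Euclidean distance d(.,.)
        for n, m in enumerate(masks):                    # frequency-concept prototypes
            p_kn = np.mean([embed_n[n](s * m) for s in S_k], axis=0)
            dist += np.linalg.norm(embed_n[n](x * m) - p_kn)
        logits.append(-dist)
    logits = np.array(logits)
    probs = np.exp(logits - logits.max())                # numerically stable softmax
    return probs / probs.sum()
\end{verbatim}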
\noindent{\bf Remark:} We point out that a similar operation of setting time and frequency areas to zero, introduced in SpecAug~\cite{specaugment}, is only a form of data augmentation, which aims to add noise to the log mel spectrogram to improve robustness. Our method differs significantly from this augmentation technique in both idea and motivation.
\section{Experiments}
In this section, we evaluate our proposed HalluAudio and conduct an ablation study on the widely adopted ESC-50 dataset and our curated dataset from Kaggle18 for few-shot audio classification.
\subsection{Dataset Configuration}
The current research on few-shot audio classification lacks common agreement on criteria for dataset choice, processing, and evaluation metrics. To be consistent with \cite{attentionsimi} and fit the current research focus on fixed-length data, we choose the ESC-50 dataset~\cite{esc50}, which contains 50 classes and 40 samples per class with a fixed length of 5 seconds. In addition, we curate a balanced fixed-length dataset from the Kaggle18 dataset, which originally consists of variable-length data of 11,073 audio clips from 41 classes of the Audioset Ontology~\cite{audioset}.
All audio samples from the ESC-50 and Kaggle18 datasets are down-sampled from 44.1 kHz to 16 kHz. We extract log mel spectrograms using $librosa$~\cite{librosa}. The number of Mel bands is set to 128. The highest frequency is set to 8000 Hz.
The hop size is set to 502 for ESC-50 and 201 for Kaggle18 to generate spectrograms of dimension 160$\times$128. The power spectrogram is converted to decibel units. Because \cite{attentionsimi} does not provide enough details for generating the log mel spectrogram for ESC-50, our generated log mel spectrograms could differ slightly from the files provided with their code.
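For reference, a minimal sketch of this extraction with $librosa$ follows; the file name is a placeholder, and any parameter not stated above is left at its library default.
\begin{verbatim}
import librosa

# Sketch of the described feature extraction; "sample.wav" is a placeholder.
y, sr = librosa.load("sample.wav", sr=16000)     # down-sample to 16 kHz
mel = librosa.feature.melspectrogram(
    y=y, sr=sr,
    n_mels=128,        # number of Mel bands
    fmax=8000,         # highest frequency in Hz
    hop_length=502,    # 201 for the Kaggle18 clips
)
log_mel = librosa.power_to_db(mel)               # power spectrogram to decibels
# log_mel has shape (128, ~160) for a 5-second clip; transpose for 160 x 128.
\end{verbatim}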
\subsection{Training and Evaluation}
Given the very limited public code in this domain, we strictly follow the training pipeline used by \cite{attentionsimi}. Note that the episode-building strategy in \cite{attentionsimi} is slightly different from the one commonly used in few-shot image classification. During the testing stage, each sample serves as a query, and the corresponding N-way K-shot supports are randomly sampled from the rest of the test data to build an episode. To get more reliable results and confidence intervals, we conduct the sampling 50 times instead of only once as in \cite{attentionsimi}. A minimal sketch of this episode-building procedure is given below.
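The following sketch is illustrative only; details such as how the remaining $N-1$ classes are drawn are assumptions.
\begin{verbatim}
import random

# Illustrative episode builder for the described testing strategy.
# `test_data` is a list of (sample, label) pairs.
def build_episodes(test_data, n_way, k_shot, repeats=50):
    episodes = []
    for _ in range(repeats):
        for query, q_label in test_data:
            rest = [(x, y) for x, y in test_data if x is not query]
            other = list({y for _, y in rest} - {q_label})
            classes = [q_label] + random.sample(other, n_way - 1)
            support = {c: random.sample([x for x, y in rest if y == c], k_shot)
                       for c in classes}
            episodes.append((support, query, q_label))
    return episodes
\end{verbatim}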
We use the same network backbone as \cite{attentionsimi}, which is also adopted in \cite{knowledge}. This backbone consists of 3 blocks, each composed of a 3$\times$3 convolutional layer, batch normalization, a ReLU layer, and a 4$\times$4 max pooling layer. The initial learning rate is set to $0.01$ and SGD is used for optimization with the weight decay set to 0.0001. For ESC-50, we use the same strategy as \cite{attentionsimi}, in which the learning rate is divided by 10 after every 20 epochs. For Kaggle18, the learning rate is divided by 10 after every 30 epochs. Both are trained for 60 epochs.
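A minimal PyTorch sketch of this optimization setup follows, where \texttt{model} (the 3-block backbone) and \texttt{train\_one\_epoch} (the episodic training loop) are assumed to be defined elsewhere.
\begin{verbatim}
import torch

# Sketch of the stated schedule; `model` and `train_one_epoch` are assumed.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.StepLR(
    optimizer, step_size=20, gamma=0.1)  # step_size=30 for Kaggle18

for epoch in range(60):                  # both datasets train for 60 epochs
    train_one_epoch(model, optimizer)
    scheduler.step()
\end{verbatim}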
\begin{table*}[t]
\centering
\begin{tabular}{c|c|c|c|c|c}
\hline
Dataset & Method & 5-way 1-shot & 5-way 5-shot & 10-way 1-shot & 10-way 5-shot \\
\hline
\multirow{2}*{ESC-50} & Baseline \cite{attentionsimi} & 69.77 $\pm$ 0.62 & 83.47 $\pm$ 0.48 &
54.51 $\pm$ 0.66&
71.36 $\pm$ 0.56\\
~ & HalluAudio & {\bf 71.88 $\pm$ 0.60} & {\bf 86.46 $\pm$ 0.46} &
{\bf 57.12 $\pm$ 0.64}&
{\bf 75.20 $\pm$ 0.58}\\
\hline
\multirow{2}*{Kaggle18} & Baseline \cite{attentionsimi} & 57.58 $\pm$ 0.63 &
70.69 $\pm$ 0.55&
43.67 $\pm$ 0.62 &58.12 $\pm$ 0.58 \\
~& HalluAudio &{\bf 59.35 $\pm$ 0.65} & {\bf 73.92 $\pm$ 0.57} & {\bf 44.50 $\pm$ 0.64} & {\bf 61.80 $\pm$ 0.61}\\
\hline
\end{tabular}
\caption{Accuracy (in \%) on ESC-50 and Kaggle18 datasets with 95\% confidence interval. Note that the Baseline result is not identical to \cite{attentionsimi} because of the log mel spectrogram parameters and the number of testing sampling rounds, as mentioned in Section 3.2.}
\label{tab:mainresult}
\end{table*}
\begin{table*}[tp]
\centering
\begin{tabular}{c|c|c|c|c|c}
\hline
Dataset & Method & 5-way 1-shot & 5-way 5-shot & 10-way 1-shot & 10-way 5-shot \\
\hline
\multirow{4}*{ESC-50} & Baseline \cite{attentionsimi} & 69.77 $\pm$ 0.62 & 83.47 $\pm$ 0.48 &
54.51 $\pm$ 0.66&
71.36 $\pm$ 0.56\\
~ & Time Concept & 70.66 $\pm$ 0.61
& 82.89 $\pm$ 0.48
&55.67 $\pm$ 0.67
&70.60 $\pm$ 0.53 \\
~ &Gain(time) & 0.89 & -0.58 & 1.16 & -0.76\\
~ & {\bf Gain(freq.)} & {\bf 2.11} & {\bf 2.99} & {\bf 2.61} & {\bf 3.84} \\
\hline
\multirow{4}*{Kaggle18} & Baseline \cite{attentionsimi} & 57.58 $\pm$ 0.63 &
70.69 $\pm$ 0.55&
43.67 $\pm$ 0.62 &58.12 $\pm$ 0.58 \\
~& Time Concept & 58.13 $\pm$ 0.65 & 71.30 $\pm$ 0.57& 43.95 $\pm$ 0.61 & 58.77 $\pm$ 0.60\\
~ &Gain(time) & 0.55 & 0.61 & 0.28 & 0.65\\
~ & {\bf Gain(freq.)} & {\bf 1.77} & {\bf 3.23} & {\bf 0.83} & {\bf 3.68} \\
\hline
\end{tabular}
\caption{Ablation study of hallucinating concepts in time domain vs. frequency domain in the spectrogram. Taking concepts in the time domain with the same network does not notably improve the performance and it even harms the results in some cases.
}
\label{tab:timedomain}
\vspace{-0.1 in}
\end{table*}
\begin{figure}
\centering
\includegraphics[width=0.3\textwidth]{threelineplot.png}
\vspace{-0.1 in}
\caption{HalluAudio (Frequency Concept) vs. Time Concept vs. Baseline in 5-way K-shot settings.}
\label{fig:threeline}
\vspace{-0.2 in}
\end{figure}
\subsection{Experimental Results}
Table \ref{tab:mainresult} shows the results and confidence intervals for the baseline (prototypical network) and our proposed HalluAudio. It clearly shows that our method outperforms the baseline by a large margin. For ESC-50, the gains are 2.11\%, 2.99\%, 2.61\%, and 3.84\% for 5-way 1-shot, 5-way 5-shot, 10-way 1-shot, and 10-way 5-shot, respectively. For the Kaggle18 dataset, the gains are 1.77\%, 3.23\%, 0.83\%, and 3.68\%, respectively.
To validate that the gain comes from hallucinating high and low frequencies as concepts rather than from the additional weights in the network, we conduct an ablation study that hallucinates the time domain as concepts instead. In particular, we hallucinate the first half of the time axis as one concept and the second half as another concept. Notably, the network constructed with these ``time'' concepts has the same number of weights as the network using frequency concepts. As shown in Table \ref{tab:timedomain}, for ESC-50, although there is a small improvement for 1-shot, the time concepts make a negative contribution for 5-shot. This reflects the intuition that audio with uncertain patterns carries little structured information in the time domain. On Kaggle18, hallucinating the time domain improves the performance slightly because we curate fixed-length audio clips around the peak of the waveform; in this case, the first half of the spectrogram roughly corresponds to the starting period of the audio and the second half to the ending period. However, the clearly inferior performance of time concepts compared with frequency concepts strongly supports the rationale of our method. For a more comprehensive comparison, we add the results of the three methods in 5-way K-shot settings for ESC-50 in Figure \ref{fig:threeline}.
\subsection{Frequency Importance}
To better show the reasoning behind the hallucination of frequency areas, we calculate the frequency importance for some representative classes. Specifically, we select 5 representative classes and their 5-way 5-shot episodes. Given a query, we classify it by the distance only between (1) its high-frequency embedding and the support samples' high-frequency prototypes; (2) its low-frequency embedding and the support samples' low-frequency prototypes. In this way, we obtain two counts of correctly classified queries over all episodes: $Q_{high}$ and $Q_{low}$.
Given these numbers, we calculate the frequency importance as $\frac{Q_{high}}{Q_{low}}$.
A ratio greater than 1 means the high frequency is more important, and a ratio less than 1 means the low frequency is more important. As shown in Figure \ref{fig:ratio}, the ratio matches common sense: birds' chirping carries more information in the high-frequency area, whereas a thunderstorm is depicted more in the low-frequency area. Furthermore, we show some examples in Figure \ref{fig:birdthun} that match our analysis.
\begin{figure}[]
\centering
\includegraphics[width=0.38\textwidth]{ratio.png}
\vspace{-0.1in}
\caption{The frequency importance in representative classes.}
\label{fig:ratio}
\vspace{-0.1in}
\end{figure}
\begin{figure}[]
\centering \includegraphics[width=0.4\textwidth]{rebuttal_pre.png}
\vspace{-0.1in}
\caption{Illustration of high/low-frequency concepts for bird-chirping and thunderstorm. Bird-chirping has more similar patterns in high-frequency concepts, whereas
thunderstorm is mostly depicted by low-frequency concepts.}
\label{fig:birdthun}
\vspace{-0.1in}
\end{figure}
\section{Conclusion}
We have proposed a simple yet effective method for few-shot audio classification. Our method, dubbed HalluAudio, hallucinates high-frequency and low-frequency areas in the spectrogram as structured concepts. Compared with the real concepts used for few-shot image classification, hallucinated concepts take advantage of the special format of the spectrogram, need no extra labeling work, and impose no restrictions on specific classes. Extensive experiments on the ESC-50 and Kaggle18 datasets demonstrate the effectiveness of our proposed solution. To the best of our knowledge, this is the first work utilizing the specificity of the audio spectrogram format with interpretability in few-shot audio classification, and it opens a new horizon in this area.
\vfill\pagebreak
\bibliographystyle{IEEEbib}
|
{
"arxiv_id": "2302.14213",
"language": "en",
"timestamp": "2023-03-01T02:05:23",
"url": "https://arxiv.org/abs/2302.14213",
"yymm": "2302"
} | \section{Introduction}
Storyline visualizations are a popular way of visualizing characters and their interactions through time. They were popularized by Munroe's xkcd comic~\cite{munroe_movie_2009} (see~\cref{fig:xkcd} for a storyline describing a movie as a series of scenes through time, in which the characters participate). A character is drawn using an $x$-monotone curve, and the vertical ordering of the character curves varies from left to right. A scene is represented by closely gathering the curves of characters involved in said scene at the relevant spot on the $x$-axis, which represents time. Storylines attracted significant interest in visualization research, especially the question of designing automated methods to create storylines adhering to certain quality criteria~\cite{LiuWWLL13,OgawaM10,TanahashiM12}.
\begin{figure}[th!]
\centering
\includegraphics[width=\linewidth]{xkcd1.png}
\caption{The xkcd comic showing a storyline of the Star Wars movie.
}
\label{fig:xkcd}
\end{figure}
\looseness=-1
While different design optimization goals can be specified, most theoretical research has been focused on crossing minimization~\cite{GronemannJLM16,KostitsynaNP0S15} and variants like block crossing minimization~\cite{DijkFFLMRSW17,DijkLMW17}.
This problem is \NP-hard~\cite{KostitsynaNP0S15,DijkFFLMRSW17} and is commonly solved using ILP and SAT formulations~\cite{GronemannJLM16,DijkLMW17}; \new{it has many similarities with the metro line crossing minimization problem~\cite{BekosKPS07,agm-mcp-08,fp-mcmhatc-13,bnuw-miecw-07}}.
Recently a new model for storylines was proposed by Di Giacomo et al.~\cite{GiacomoDLMT20} that allows for one character to be part of multiple interactions at the same point in time, by modeling each character as a tree rather than a curve. Using this model, it is possible to represent data sets which have a more loosely defined ordering of interactions.
Furthermore, authorship networks have been a popular application for storyline visualizations~\cite{GiacomoDLMT20,herrmann_2022}. In this paper we introduce \emph{time interval} storylines, an alternative approach to visualize data sets with less precise temporal attributes. In the time interval model, a set of discrete, totally ordered timestamps is given, which serve to label disjoint time intervals (e.g., the timestamp 2021 represents all interactions occurring between January and December of the year 2021). Each interval is represented in a storyline as a horizontal section in which all interactions with the same timestamp occur. The horizontal ordering within this section, however, no longer corresponds to a temporal ordering (see \cref{fig:model}). For example, an authorship network often sorts publications by year.
In a traditional storyline model, the complete temporal ordering of the interactions must be provided.
\new{Previous models like the one by van Dijk et al.~\cite{DijkLMW17} can place multiple disjoint interactions in the same vertical layer, but the assignment of interactions to the totally ordered set of layers must be given as input.
Unlike the traditional model, we have no pre-specified assignment of interactions to layers, but interactions with the same timestamp can be assigned to any layer within the time interval of this timestamp.}
\begin{figure}
\centering
\includegraphics{storyline.pdf}
\caption{\textbf{\textsf{(a)}} A classic storyline with blue character lines. Interactions are shown in gray; they happen at specific timestamps and have a duration. \textbf{\textsf{(b)}} A time interval storyline. The horizontal orange segment shows a slice; every interaction on this segment has the same timestamp. A layer is highlighted in red, containing two interactions with the same timestamp that share no character.
}
\label{fig:model}
\end{figure}
\subparagraph*{Problem setting.}
We are given a triple $\mathcal{S}=(\mathcal{C},\mathcal{I},T)$, of characters $\mathcal{C}=\{c_1,\dots,c_n\}$, interactions $\mathcal{I}=\{I_1,\dots,I_m\}$, and totally ordered timestamps $T=\{t_1,\dots,t_p\}$ as input. Each interaction $(C_j,t) = I_j\in \mathcal{I}$ consists of a set~$C_j\subseteq \mathcal{C}$ of characters involved in the interaction and a timestamp~$t\in T$ at which the interaction~$I_j$ occurred, respectively denoted by $\texttt{char}(I_j)=C_j$ and $\texttt{time}(I_j)=t$.
A subset of interactions can form a \emph{layer}~$\ell$, when for every pair of interactions $I,I'$ in~$\ell$, $\texttt{time}(I)=\texttt{time}(I')$.
A time interval storyline is composed of a sequence of layers to which interactions are assigned. Intuitively, a layer represents a column in the storyline visualization, in which interactions are represented as vertical stacks. Thus, to each layer we associate a vertical ordering of~$\mathcal{C}$.
Consider the set $S$ containing all interactions with timestamp $t$; we call the union of layers containing $S$ a \emph{slice}.
Characters are represented with curves passing through each layer at most once. To represent an interaction $I=(C,t)$ in a layer $\ell$, the ordering of the characters in $\ell$ must be such that the characters of $C$ appear consecutively in that ordering.
For a pair~$I, I'$ of interactions in the same layer, it must hold that $\texttt{char}(I)\cap\texttt{char}(I')=\emptyset$.
\looseness=-1
For a layer $\ell$, we denote the set of interactions by $\texttt{inter}(\ell)$ and the timestamp of a layer by $\texttt{time}(\ell)$ (with slight abuse of notation). We focus on combinatorial storylines, as opposed to geometric storylines, meaning that our algorithm should output a (horizontal) ordering $o_L(\mathcal{S})$ of layers, and for each layer $\ell$, a (vertical) ordering $o_c(\ell)$ of the characters, and all interactions must occur in some layer.
For two interactions $I,I'$ such that $\texttt{time}(I)<\texttt{time}(I')$, let~$\ell$ and $\ell'$ be the layers of $I$ and $I'$, respectively. Then $\ell$ must be before $\ell'$ in~$o_L(\mathcal{S})$.
A character is \emph{active} in a layer if it appears in the character ordering for that layer.
A character must be active in a contiguous range of layers including the first and last interaction it is involved in.
\subparagraph*{Contributions.}
In this paper we introduce the time interval storylines model, as well as two methods to compute layer and character orderings. In~\cref{sec:thesis_algs} we introduce an algorithmic pipeline based on ILP formulations and heuristics that computes time interval storylines. We further present an ILP formulation that outputs a crossing-minimal time interval storyline in~\cref{sec:ilp}. Lastly in~\cref{sec:eval}, we experimentally evaluate our pipeline and ILP formulation.
\section{Computing combinatorial storylines}
\subsection{A pipeline heuristic}\label{sec:thesis_algs}
As the traditional storyline crossing minimization problem is a restricted version of the time interval formulation, our problem is immediately \NP-hard~\cite{KostitsynaNP0S15}. Thus, we first aim to design an efficient heuristic to generate time interval storylines, which consists of the following stages.
\begin{enumerate}[(i)]
\item Initially, we assign each interaction to a layer,
\item then, we compute a horizontal ordering~$o_L(\mathcal{S})$ of the layers obtained in step (i), and
\item finally, we compute a vertical ordering~$o_c(\ell)$ of the characters for each layer~$\ell\in o_L(\mathcal{S})$.
\end{enumerate}
For step (i), the assignment is obtained using graph coloring. For each $t\in T$, we create a conflict graph $G_t=(\mathcal{I}_t,E)$ where $\mathcal{I}_t\subseteq\mathcal{I}$ and $I\in \mathcal{I}_t$ if and only if $\texttt{time}(I)=t$. Two interactions are connected by an edge if they share at least one character. Each color class then corresponds to a set of interactions which share no characters and can appear together in a layer.
We solve this problem using a straightforward ILP formulation based on variables $x_{v,c}=1$ if color $c$ is assigned to vertex $v$ and $0$ otherwise.
We can choose to limit the size of each color class \new{by adding an upper bound on the number of interactions assigned to each color}, which forces fewer interactions per layer. \new{While this allows us to limit the height of each slice, it} likely results in more layers.
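For illustration, one way to state this coloring ILP is the following PuLP sketch; this is not the implementation used in \cref{sec:eval} (which relies on CPLEX), and the data representation is an assumption.
\begin{verbatim}
import pulp

# Illustrative coloring ILP for step (i). `interactions` are the vertices of
# the conflict graph G_t, `conflicts` its edges, `cap` the optional bound on
# the number of interactions per color class (i.e., per layer).
def color_layers(interactions, conflicts, max_colors, cap=None):
    colors = range(max_colors)
    prob = pulp.LpProblem("layer_coloring", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (interactions, colors), cat="Binary")
    used = pulp.LpVariable.dicts("used", colors, cat="Binary")
    prob += pulp.lpSum(used[c] for c in colors)            # minimize #layers
    for v in interactions:
        prob += pulp.lpSum(x[v][c] for c in colors) == 1   # one color per vertex
    for u, v in conflicts:
        for c in colors:
            prob += x[u][c] + x[v][c] <= 1                 # no shared character
    for c in colors:
        for v in interactions:
            prob += x[v][c] <= used[c]                     # link to `used`
        if cap is not None:
            prob += pulp.lpSum(x[v][c] for v in interactions) <= cap
    prob.solve()
    return {v: next(c for c in colors if x[v][c].value() == 1)
            for v in interactions}
\end{verbatim}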
To compute a horizontal ordering of the layers in step (ii), we use a traveling salesperson (TSP) model. Concretely, for the slice corresponding to the timestamp $t$, we create a complete weighted graph $G=(\mathcal{L},E)$, where $\mathcal{L}$ corresponds to all the layers $\ell$ such that $\texttt{time}(\ell)=t$.
For each edge $e$ between a pair of layers $\ell$ and $\ell'$ in $\mathcal{L}$, we associate a weight $w_e$, estimating the number of crossings that may occur if the two layers are consecutive as follows.
Minimizing \new{the} crossings \new{of the curves representing the characters} is \NP-complete~\cite{garey83,KostitsynaNP0S15}, thus we propose two heuristics to estimate the number of crossings.
First, we propose to use set similarity measures to describe how similar the interactions in two layers $\ell$ and $\ell'$ are: If $\ell$ and $\ell'$ both have an interaction that contains the same set of characters, then no crossing should be induced by the curves corresponding to those characters, when these two layers are consecutive (see~\cref{fig:pattern}a).
Second, we consider pattern matching methods that guess how many crossings could be induced by a certain ordering of the characters. There are certain patterns of interactions between two layers for which a crossing is unavoidable (see~\cref{fig:pattern}b). We count how many of these patterns occur between each pair of layers in $G$ and set the weight of the corresponding edge to that crossing count.
\subparagraph*{Set similarity.}
We propose the use of the Rand index to evaluate how similar two layers are to one another. For layers $\ell$ and $\ell'$, the Rand index is calculated in the following manner.
\begin{itemize}
\item We count how many character pairs are together in an interaction in $\ell$ and $\ell'$ ($n_1$).
\item We count how many character pairs are in different interactions in $\ell$ and in $\ell'$ ($n_2$).
\item We count how many character pairs are in different interactions in $\ell$ and in the same interaction in $\ell'$ ($n_3$).
\item We count how many character pairs are together in an interaction in $\ell$ and not in $\ell'$ ($n_4$).
\end{itemize}
The Rand index~\cite{rand1971objective} is then given by the value $\frac{n_1+n_2}{n_1+n_2+n_3+n_4}$. The closer this value is to one, the more interactions between $\ell$ and $\ell'$ have a similar set of characters.
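A direct, unoptimized sketch of this computation follows; here a layer is a list of character sets, and counting a character pair absent from a layer as ``apart'' is an assumption of this sketch.
\begin{verbatim}
from itertools import combinations

# Illustrative Rand-index computation between two layers.
def rand_index(layer_a, layer_b):
    def together(layer, pair):
        return any(pair <= inter for inter in layer)
    chars = set().union(*layer_a, *layer_b)
    n1 = n2 = n3 = n4 = 0
    for c, d in combinations(sorted(chars), 2):
        in_a, in_b = together(layer_a, {c, d}), together(layer_b, {c, d})
        if in_a and in_b:
            n1 += 1            # together in both layers
        elif not in_a and not in_b:
            n2 += 1            # apart in both layers
        elif not in_a and in_b:
            n3 += 1            # apart in l, together in l'
        else:
            n4 += 1            # together in l, apart in l'
    total = n1 + n2 + n3 + n4
    return (n1 + n2) / total if total else 1.0
\end{verbatim}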
\subparagraph*{Pattern matching.}
Consider the four interactions $I_1,I_2,I_3,I_4$, all having timestamp $t$. We set $\texttt{char}(I_1)=\{a,b\}$, $\texttt{char}(I_2)=\{c,d\}$, $\texttt{char}(I_3)=\{a,c\}$ and $\texttt{char}(I_4)=\{b,d\}$. Naturally these four cannot all be in the same layer, so we assume $I_1$ and $I_2$ are in layer $\ell_1$ and $I_3$ and $I_4$ are in layer $\ell_2$. Then, if $\ell_1$ and $\ell_2$ are consecutive, there is necessarily a crossing induced by the curves of characters $b$ and $c$ or of $a$ and $d$. We count how often this pattern occurs between each layer pair in the same slice.
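A rough brute-force sketch of this pattern count follows; requiring the two pairs to land in two different interactions of the other layer is made explicit, and the exact counting convention is an assumption.
\begin{verbatim}
from itertools import combinations

# Illustrative pattern count used as a TSP edge weight between two layers;
# a layer is a list of character sets.
def pattern_weight(layer_a, layer_b):
    count = 0
    for I1, I2 in combinations(layer_a, 2):
        for a, b in combinations(sorted(I1), 2):
            for c, d in combinations(sorted(I2), 2):
                # both pairings of {a,b} x {c,d} force a crossing if matched
                for p, q in (({a, c}, {b, d}), ({a, d}, {b, c})):
                    J1 = next((J for J in layer_b if p <= J), None)
                    J2 = next((J for J in layer_b if q <= J), None)
                    if J1 is not None and J2 is not None and J1 is not J2:
                        count += 1
    return count
\end{verbatim}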
\begin{figure}
\centering
\includegraphics{patternmatching.pdf}
\caption{\textbf{\textsf{(a)}} The orange and light green characters are together in two interactions, which increases similarity between $\ell$ and $\ell'$, but the two blue characters are once together and once apart, which decreases similarity. \textbf{\textsf{(b)}} An example of an unavoidable crossing pattern.
}
\label{fig:pattern}
\end{figure}
\medskip
To finish step (ii), we solve the \new{path formulation of the} TSP problem on $G$ and find a horizontal ordering of the layers for each time slice. We have now obtained a traditional storyline, in which each interaction belongs to a specific layer, and all layers are totally ordered. Thus, we can solve step (iii) using the state-of-the-art crossing minimization ILP by Gronemann et al.~\cite{GronemannJLM16}.
We call the pipeline variants \ensuremath{\mathrm{P_s}}\ and \ensuremath{\mathrm{P_p}}, when using the set similarity heuristic and the pattern matching heuristic in step (ii), respectively.
\subsection{ILP formulations}\label{sec:ilp}
Crossing minimization in storylines is generally solved using ILP formulations~\cite{GronemannJLM16, DijkFFLMRSW17}. We propose two formulations to handle slices, which build on the ideas of Gronemann et al.~\cite{GronemannJLM16}.
Both formulations will give us an assignment of interactions to layers, that are already totally ordered, and an ordering of characters per layer.
For each timestamp $t\in T$, let $\mathcal{L}_t$ be a set of $|\{I\mid \texttt{time}(I)=t\}|$ layers, corresponding to the number of interactions at $t$, and let $\mathcal{L}=\bigcup_{t\in T}\mathcal{L}_t$. In the first formulation we assume that a character $c$ is active in all layers between the first and the last timestamp, inclusively, \new{at which there exists an interaction $I$ such that $c\in \texttt{char}(I)$}.
In the second formulation we will introduce additional variables that model whether a character really needs to be active, \new{since, in fact, character curves do not need to be active before their first interaction or after their last interaction}.
\new{In contrast to the pipeline approach, the presented ILP formulations are able to find the crossing-minimal solution for the explored search space.}
For both formulations we also present an adaptation that allows for the minimum number of layers.
\subparagraph*{First formulation.}
Let $\mathcal{C}_\ell$ be the characters that appear in layer $\ell\in \mathcal{L}$, as discussed before.
First we introduce for each $t\in T$ the binary variables $y_{\ell,I}$ for $\ell\in \mathcal{L}_t$ and $I\in \mathcal{I}$ where $\texttt{time}(I)=t$. These should be one iff interaction $I$ is assigned to layer $\ell$.
This is realized by constraints of type~\eqref{eq:interactionassign}.
If two different interactions $I$ and $I'$ share a character they cannot be in the same layer, realized by type~\eqref{eq:interactionintersection} constraints.
\begin{align}
\sum_{\ell\in \mathcal{L}_t}y_{\ell,I}&=1&t\in T,I\in \mathcal{I},\texttt{time}(I)=t\label{eq:interactionassign}\\
y_{\ell,I}+y_{\ell,I'}&\le 1&\texttt{time}(I)=\texttt{time}(I')=t,\texttt{char}(I)\cap \texttt{char}(I')\ne\emptyset,\ell\in \mathcal{L}_t\label{eq:interactionintersection}
\end{align}
Next we introduce binary ordering variables $x_{\ell,c_i,c_j}$ for each layer $\ell\in \mathcal{L}$ and $c_i,c_j\in \mathcal{C}_{\ell}$ with $i<j$. Variable $x_{\ell,c_i,c_j}$ should be one iff $c_i$ comes before $c_j$ on layer $\ell$.
Standard transitivity constraints \eqref{eq:transitivity} (see e.g.~\cite{DBLP:journals/mp/GrotschelJR85a}) ensure that the binary variables induce a total order.
\begin{align}
0\le x_{\ell,c_i,c_j}+x_{\ell,c_j,c_h}-x_{\ell,c_i,c_h}&\le 1&c_i,c_j,c_h\in \mathcal{C}_\ell,i<j<h\label{eq:transitivity}
\end{align}
The crux is now to model the assignment of some interaction $I$ to some layer $\ell$, linking the $x$- and $y$-variables together.
This is done with so-called tree-constraints~\cite{GronemannJLM16}:
Let $\ell\in \mathcal{L}_t$, $I\in \mathcal{I}$ with $\texttt{time}(I)=t$ and $c_i,c_j,c_k\in \mathcal{C}_\ell$ such that $i<j$, $c_i,c_j\in \texttt{char}(I)$, and $c_k\not\in\texttt{char}(I)$.
If $i<j<k$ we add constraints \eqref{eq:tree1} and \eqref{eq:tree2}, which ensure that $c_k$ is either before or after both $c_i$ and $c_j$.
\begin{align}
x_{\ell,c_i,c_k}&\le x_{\ell,c_j,c_k}+\new{1-y_{\ell,I}}\label{eq:tree1}\\
x_{\ell,c_j,c_k}&\le x_{\ell,c_i,c_k}+\new{1-y_{\ell,I}}\label{eq:tree2}
\end{align}
Similarly, if $k<i<j$ we add constraints of type~\eqref{eq:tree3} and~\eqref{eq:tree4}.
\begin{align}
x_{\ell,c_k,c_i}&\le x_{\ell,c_k,c_j}+\new{1-y_{\ell,I}}\label{eq:tree3}\\
x_{\ell,c_k,c_j}&\le x_{\ell,c_k,c_i}+\new{1-y_{\ell,I}}\label{eq:tree4}
\end{align}
Lastly, if $i<k<j$ we add constraints
of type~\eqref{eq:tree5} and~\eqref{eq:tree6}, which force $c_k$ to lie before both or after both $c_i$ and $c_j$; with the given orientation of the variables this amounts to $x_{\ell,c_i,c_k}+x_{\ell,c_k,c_j}=1$ whenever $I$ is assigned to $\ell$.
\begin{align}
x_{\ell,c_i,c_k}+x_{\ell,c_k,c_j}&\ge y_{\ell,I}\label{eq:tree5}\\
x_{\ell,c_i,c_k}+x_{\ell,c_k,c_j}&\le 2-y_{\ell,I}\label{eq:tree6}
\end{align}
Finally, to \new{optimize} the number of crossings we have to provide an objective function.
For this we introduce binary variables $z_{\ell,c_i,c_j}$ for all layers $\ell$ but the rightmost one and all $c_i,c_j\in\mathcal{C}_{\ell}\cap\mathcal{C}_{\ell'}$ where $\ell'$ is the adjacent layer of $\ell$ to the right. Variable $z_{\ell,c_i,c_j}$ should be one iff the character lines of $c_i$ and $c_j$ cross between layers $\ell$ and $\ell'$.
Linking variables $z_{\ell,c_i,c_j}$ is done by introducing the constraints corresponding to setting $z_{\ell,c_i,c_j}\ge x_{\ell,c_i,c_j}\new{\oplus} x_{\ell',c_i,c_j}$ where $x\oplus y$ denotes the exclusive-or relation of two binary variables $x$ and $y$. This is done with the following constraints.
\begin{align}
z_{\ell,c_i,c_j}\ge x_{\ell,c_i,c_j}-x_{\ell',c_i,c_j}\label{eq:xor1}\\
z_{\ell,c_i,c_j}\ge x_{\ell',c_i,c_j}-x_{\ell,c_i,c_j}\label{eq:xor2}
\end{align}
The objective is then to simply minimize $\sum z_{\ell,c_i,c_j}$. A solution to the ILP model is then transformed into a storyline realization of the input. We call this formulation \ensuremath{\mathrm{ILP1}}.
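For illustration, the crossing-counting part of \ensuremath{\mathrm{ILP1}}\ could be set up in gurobipy as sketched below; the ordering variables \texttt{x} and the list \texttt{consecutive} of adjacent layer pairs with their shared characters are assumed to be built beforehand, and this is not the exact implementation used in \cref{sec:eval}.
\begin{verbatim}
from itertools import combinations
import gurobipy as gp
from gurobipy import GRB

# Sketch of the z-variables and objective of ILP1. `x[l, i, j]` are the
# binary ordering variables from above; `consecutive` lists triples
# (layer, next layer, shared characters), both assumed precomputed.
m = gp.Model("ilp1_crossings")
z = {}
for l, l_next, shared in consecutive:
    for i, j in combinations(sorted(shared), 2):
        z[l, i, j] = m.addVar(vtype=GRB.BINARY)
        # the two xor constraints linking z to the ordering variables
        m.addConstr(z[l, i, j] >= x[l, i, j] - x[l_next, i, j])
        m.addConstr(z[l, i, j] >= x[l_next, i, j] - x[l, i, j])
m.setObjective(gp.quicksum(z.values()), GRB.MINIMIZE)
\end{verbatim}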
In the above formulation we have one layer for each interaction, which does not utilize the potential of having multiple interactions in one layer. We can, however, minimize the number of layers beforehand, using the graph coloring problem as in \cref{sec:thesis_algs}.
If we need $q$ colors for timestamp $t$, we let $\mathcal{L}_t$ only consist of $q$ layers. This can of course result in more crossings in the end. We call this adapted formulation \ensuremath{\mathrm{ILP1ML}}.
\subparagraph*{Second Formulation.}
In the above models \ensuremath{\mathrm{ILP1}}\ and \ensuremath{\mathrm{ILP1ML}}, a character is contained in all layers between the first and last timestamp at which it appears in an interaction.
We also present a second ILP formulation called \ensuremath{\mathrm{ILP2}}\ that accounts for the active range of a character. This formulation contains the same variables as \ensuremath{\mathrm{ILP1}}\ plus the binary variables $a_{c,\ell}$ for all $c\in\mathcal{C}$ and all $\ell\in\mathcal{L}$ such that $c\in\mathcal{C}_\ell$. Note that $c\in \mathcal{C}_\ell$ alone does not mean that $c$ appears in layer $\ell$ when transforming a solution of the ILP into a storyline realization; this is only the case if additionally $a_{c,\ell}$ is one.
The formulation \ensuremath{\mathrm{ILP2}}\ contains the constraints for layers \eqref{eq:interactionassign} and \eqref{eq:interactionintersection}, the transitivity-constraints \eqref{eq:transitivity}, the tree-constraints \eqref{eq:tree1},\eqref{eq:tree2}, and \eqref{eq:tree3}-\eqref{eq:tree6}, and the objective function from formulation \ensuremath{\mathrm{ILP1}}.
Additionally, it contains the following constraints.
First, if character $c$ appears in some interaction $I$ that is present in layer $\ell$, then $c$ has to be active in $\ell$ (see \eqref{eq:activeinteraction}).
\begin{align}
a_{c,\ell}&\ge y_{\ell,I}&I\in\mathcal{I},c\in\mathcal{C},\ell\in\mathcal{L},c\in\mathcal{C}_\ell,\texttt{time}(I)=\texttt{time}(\ell)\label{eq:activeinteraction}
\end{align}
Now for each three different layers $\ell,\ell'$, and $\ell''$ such that $\ell$ appears first, $\ell'$ appears second, and $\ell''$ appears third in the ordering of layers, let $c\in \mathcal{C}_{\ell}\cap \mathcal{C}_{\ell'}\cap \mathcal{C}_{\ell''}$. We have to ensure that if $c$ is active in $\ell$ and $\ell''$, it also has to be active in $\ell'$, done with constraints of type~\eqref{eq:activepropag}.
\begin{align}
a_{c,\ell'}+1&\ge a_{c,\ell}+a_{c,\ell''}\label{eq:activepropag}
\end{align}
Lastly, we only have to count crossings between two character lines in two layers if both characters are active in the corresponding layers.
Hence, let $c_i$ and $c_j$ be two different characters and let $\ell$ and $\ell'$ be two consecutive layers such that $c_i,c_j\in\mathcal{C}_\ell\cap\mathcal{C}_{\ell'}$.
We transform the xor-constraints \eqref{eq:xor1} and \eqref{eq:xor2} into the new constraints \eqref{eq:xornew1} and \eqref{eq:xornew2}.
Notice that none of the constraints have an effect on the $z$-variable, if at least one of the $a$-variables is zero.
\begin{align}
z_{\ell,c_i,c_j}\ge x_{\ell,c_i,c_j}-x_{\ell',c_i,c_j}-4+a_{c_i,\ell}+a_{c_i,\ell'}+a_{c_j,\ell}+a_{c_j,\ell'}\label{eq:xornew1}\\
z_{\ell,c_i,c_j}\ge x_{\ell',c_i,c_j}-x_{\ell,c_i,c_j}-4+a_{c_i,\ell}+a_{c_i,\ell'}+a_{c_j,\ell}+a_{c_j,\ell'}\label{eq:xornew2}
\end{align}
We can then introduce the fewest possible number of layers as for \ensuremath{\mathrm{ILP1}}, resulting in formulation \ensuremath{\mathrm{ILP2ML}}.
\section{Experimental Evaluation}\label{sec:eval}
We performed a set of experiments to evaluate all of our presented algorithms.
In the following we describe the datasets, implementation and setup, and elaborate on the results.
\subsection{Datasets}\label{sec:appdata}
Our dataset consists of 7 instances provided by different sources. All of them come from work on storyline visualizations. General statistics on these datasets can be found in \cref{tab:datastata}.
The datasets gdea10 and gdea20 are from the work of Gronemann et al.~\cite{GronemannJLM16} and consist of a set of publications from 1994 to 2019. These publications are the interactions, and the characters are the union of all authors of these publications. The publication years of the interactions serve as their timestamps.
Similarly, the datasets ubiq1 and ubiq2 contain publication data from the work by Di Giacomo et al.~\cite{GiacomoDLMT20} on storylines with ubiquitous characters.
Interactions, characters, and timestamps are determined as in the gdea10 and gdea20 datasets.
The last three datasets are interactions between characters of a book. These consist of the first chapter of \emph{Anna Karenina} (anna1), the first chapter of \emph{Les Misérables} (jean1), and the entirety of \emph{Huckleberry Finn} (huck). The timestamps of interactions are taken as the scenes of the book (or chapter).
\begin{table}[!t]
\centering
\caption{Statistics for the input data sets. The column $|\mathcal{L}|$-coloring shows the number of layers achieved by the graph coloring pipeline step, which is used for the algorithms \ensuremath{\mathrm{P_s}}, \ensuremath{\mathrm{P_p}}, \ensuremath{\mathrm{ILP1ML}}, and \ensuremath{\mathrm{ILP2ML}}.}
\label{tab:datastata}
\begin{tabular}{l | >{\centering\arraybackslash}p{2cm} | >{\centering\arraybackslash}p{2cm}| >{\centering\arraybackslash}p{2cm}| >{\centering\arraybackslash}p{2cm}}
\toprule
& $|\mathcal{I}|$ & $|\mathcal{C}|$ & $|T|$ & $|\mathcal{L}|$-coloring\\\midrule
gdea10 & 41 & 9 & 16 & 35 \\
gdea20 & 100 & 19 & 17 & 47 \\
ubiq1 & 41 & 5 & 19 & 41 \\
ubiq2 & 45 & 5 & 18 & 38 \\
anna1 & 58 & 41 & 34 & 53 \\
jean1 & 95 & 40 & 65 & 88 \\
huck & 107 & 74 & 43 & 81 \\ \bottomrule
\end{tabular}
\end{table}
\subsection{Implementation and Setup}
All implementations were done in Python3. For the graph coloring step of the algorithms we used a simple ILP formulation. For the TSP formulation in the algorithms \ensuremath{\mathrm{P_p}}\ and \ensuremath{\mathrm{P_s}}\ we used a simple subtour elimination formulation. As the graph coloring and TSP steps of our algorithms take negligible time when compared to the crossing minimization, we do not provide more details here.
All ILP formulations in the algorithms \ensuremath{\mathrm{P_p}}\ and \ensuremath{\mathrm{P_s}}\ were solved using CPLEX 22.1 and were run on a local machine with Linux and an Intel i7-8700K 3.70GHz CPU. All formulations in the algorithms \ensuremath{\mathrm{ILP1}}, \ensuremath{\mathrm{ILP1ML}}, \ensuremath{\mathrm{ILP2}}, and \ensuremath{\mathrm{ILP2ML}}\ were solved using Gurobi 9.5.1 and were run on a cluster with an AMD EPYC 7402, 2.80GHz 24-core CPU. No multithreading was used in any algorithm.
The timeout for all algorithms was set to 3600 seconds. If we run into a timeout during one of our algorithms we report the best feasible solution (in \cref{tab:crossings}) and the gap to the best lower bound found by the ILP solver (in \cref{tab:time}).
\subsection{Results}
Next, we provide the results on layer minimization, number of crossings and runtime. Examples for storylines produced by our algorithms are given in \cref{sec:examplestorylines}.
\subparagraph*{Layer minimization.}
\cref{tab:datastata} shows the number of layers that was achieved by the graph coloring pipeline step ($|\mathcal{L}|$-coloring).
The layer minimization step is applied by the algorithms \ensuremath{\mathrm{P_s}}, \ensuremath{\mathrm{P_p}}, \ensuremath{\mathrm{ILP1ML}}, and \ensuremath{\mathrm{ILP2ML}}.
When comparing these numbers with the upper bound of the number of interactions $|\mathcal{I}|$ we can see that for some of the datasets we could reduce the number of layers significantly. There is only one dataset where we could not reduce the number of required layers.
\begin{table}[!t]
\centering
\caption{The number of crossings achieved by the different algorithms for all the datasets. If a number is red, then the approach timed out and we only report the best upper bound of a feasible solution given by the ILP solver.}
\label{tab:crossings}
\begin{tabular}{l | >{\centering\arraybackslash}p{1.6cm} | >{\centering\arraybackslash}p{1.6cm}| >{\centering\arraybackslash}p{1.6cm}| >{\centering\arraybackslash}p{1.6cm}| >{\centering\arraybackslash}p{1.6cm}| >{\centering\arraybackslash}p{1.6cm}}
\toprule
& \ensuremath{\mathrm{P_s}} & \ensuremath{\mathrm{P_p}} &\ensuremath{\mathrm{ILP1}} &\ensuremath{\mathrm{ILP1ML}} &\ensuremath{\mathrm{ILP2}} & \ensuremath{\mathrm{ILP2ML}}\\\midrule
gdea10 & 6 & 6 & 7 & 7 & 6 & 6 \\
gdea20 & 38 & 38 & \textcolor{red}{44} & \textcolor{red}{42} & \textcolor{red}{796} & \textcolor{red}{34} \\
ubiq1 & 10 & 9 & 10 & 10 & 8 & 8 \\
ubiq2 & 19 & 17 & 15 & 15 & 14 & 15 \\
anna1 & 19 & 19 & 23 & 23 & 16 & 16 \\
jean1 & 12 & 12 & \textcolor{red}{23} & \textcolor{red}{25} & \textcolor{red}{3} & \textcolor{red}{3} \\
huck & 44 & 42 & \textcolor{red}{58} & \textcolor{red}{55} & \textcolor{red}{59} & \textcolor{red}{45} \\
\bottomrule
\end{tabular}
\end{table}
\subparagraph*{Crossings.}
\cref{tab:crossings} shows the number of crossings for all algorithms and datasets. If an algorithm times out we provide the number of crossings for the best feasible solution. We can see that the ILP-formulations produce \new{fewer} crossings than the pipeline-approaches whenever they do not time out.
Further, even if they time out, sometimes the best feasible solution is better than the solution produced by the pipeline-approaches.
It is also the case that the ILP formulations \ensuremath{\mathrm{ILP2}}\ and \ensuremath{\mathrm{ILP2ML}}\ using activity-variables achieve fewer crossings than the formulations \ensuremath{\mathrm{ILP1}}\ and \ensuremath{\mathrm{ILP1ML}}\ without these additional variables. Another interesting observation is that the layer minimization does not have a negative effect on the number of crossings in most cases (see \ensuremath{\mathrm{ILP1}} vs.\ \ensuremath{\mathrm{ILP1ML}}\ and \ensuremath{\mathrm{ILP2}}\ vs. \ensuremath{\mathrm{ILP2ML}}).
\new{The size of the solution space depends on the combination of the number of characters, the number of interactions, and the number of layers, so it is hard to quantify the effect of a single input parameter on the runtime. Generally, however, increasing any of these parameters enlarges the solution space and thus also the runtime of our algorithms.}
\begin{table}[!t]
\centering
\caption{The runtime for the different algorithms and datasets in seconds. If an approach timed out (3600 $s$), we report the relative gap between best lower and upper bound in percent given by the ILP solver in red. If the gap is 100\% then no lower bound greater than zero was found, otherwise it is calculated as $(\textsc{UB}-\textsc{LB})/\textsc{UB}$ where $\textsc{UB}$ is the upper bound and $\textsc{LB}$ is the lower bound.}
\label{tab:time}
\begin{tabular}{l|>{\centering\arraybackslash}p{1.6cm} | >{\centering\arraybackslash}p{1.6cm}| >{\centering\arraybackslash}p{1.6cm}| >{\centering\arraybackslash}p{1.6cm}| >{\centering\arraybackslash}p{1.6cm}| >{\centering\arraybackslash}p{1.6cm}}
\toprule & \ensuremath{\mathrm{P_s}} & \ensuremath{\mathrm{P_p}} &\ensuremath{\mathrm{ILP1}} &\ensuremath{\mathrm{ILP1ML}} &\ensuremath{\mathrm{ILP2}} & \ensuremath{\mathrm{ILP2ML}}\\\midrule
gdea10 & 4 & 4 & 32 & 10 & 40 & 12 \\
gdea20 & 114 & 141 & \textcolor{red}{100\%} & \textcolor{red}{88\%} & \textcolor{red}{100\%} & \textcolor{red}{91\%} \\
ubiq1 & 3 & 3 & 45 & 45 & 24 & 25 \\
ubiq2 & 4 & 4 & 1039 & 176 & 1291 & 281 \\
anna1 & 16 & 16 & 1826 & 330 & 1108 & 289 \\
jean1 & 19 & 19 & \textcolor{red}{95\%} & \textcolor{red}{92\%} & \textcolor{red}{100\%} & \textcolor{red}{33\%} \\
huck & 136 & 127 & \textcolor{red}{84\%} & \textcolor{red}{78\%} & \textcolor{red}{95\%} & \textcolor{red}{73\%} \\
\bottomrule
\end{tabular}
\end{table}
\subparagraph*{Runtime.}
\cref{tab:time} reports the runtimes of the algorithms. If an algorithm times out, we report the gap between the best feasible solution and the best lower bound found by the solver. As expected, the runtimes of the ILP-algorithms are much higher than those of the pipeline-algorithms.
Further, minimizing the number of layers seems to positively affect the runtime of the ILP-approaches. This is due to the fact that having fewer layers also reduces the search space of the ILP-formulations. The optimality gaps of the ILP-approaches on instances that timed out are quite high, so we do not expect that the solvers would find an optimal solution for these instances in feasible time. Additional experiments even showed that most of these instances could not be solved optimally by our ILP-formulations even after one week.
\subsection{Example Storylines}\label{sec:examplestorylines}
\Cref{fig:remainingexamples} shows storylines created by the different algorithms for the anna1 dataset. We have applied a simple wiggle-height minimization post-processing algorithm to the purely combinatorial output of our algorithms. This post-processing algorithm assigns actual $x$- and $y$-coordinates to the character lines, and is based on a similar approach that was applied to drawing directed graphs \cite{DBLP:journals/tse/GansnerKNV93}. Notice that in the outputs for \ensuremath{\mathrm{ILP1}}\ and \ensuremath{\mathrm{ILP1ML}}\ character lines sometimes start before their first interaction and end after their last interaction. This never happens for the other algorithms. But those algorithms have the problem that character curves are often very short, making it hard to follow these curves. Further, some characters only appear for one layer (see the last layer of all shown storylines). In follow-up visualizations we should add some offset to these curves to make them visible. \new{We refrain from a qualitative judgement about which of the visualizations looks best, but generally having few crossings increases the visual clarity of the visualizations.}
\begin{figure}
\centering
\begin{tabular}[b]{l}
{\centering
\includegraphics[width=0.9\linewidth]{Figures/anna1_rand.pdf} }\\
\vspace{\abovecaptionskip}
\small \textbf{\textsf{(a)}} Algorithm \ensuremath{\mathrm{P_s}}: 53 layers and 19 crossings.
\end{tabular}
\medskip
\begin{tabular}[b]{l}
{\centering
\includegraphics[width=0.9\linewidth]{Figures/anna1_pattern.pdf} }\\
\vspace{\abovecaptionskip}
\small \textbf{\textsf{(b)}} Algorithm \ensuremath{\mathrm{P_p}}: 53 layers and 19 crossings.
\end{tabular}
\medskip
\begin{tabular}[b]{l}
{\centering
\includegraphics[width=0.9\linewidth]{Figures/anna1_ilp1.pdf} }\\
\vspace{\abovecaptionskip}
\small \textbf{\textsf{(c)}} Algorithm \ensuremath{\mathrm{ILP1}}: 53 layers and 23 crossings.
\end{tabular}
\medskip
\begin{tabular}[b]{l}
{\centering
\includegraphics[width=0.9\linewidth]{Figures/anna1_ilp1ml.pdf} }\\
\vspace{\abovecaptionskip}
\small \textbf{\textsf{(d)}} Algorithm \ensuremath{\mathrm{ILP1ML}}: 53 layers and 23 crossings.
\end{tabular}
\medskip
\begin{tabular}[b]{l}
{\centering
\includegraphics[width=0.9\linewidth]{Figures/anna1_ilp2.pdf} }\\
\vspace{\abovecaptionskip}
\small \textbf{\textsf{(e)}} Algorithm \ensuremath{\mathrm{ILP2}}: 58 layers and 16 crossings.
\end{tabular}
\medskip
\begin{tabular}[b]{l}
{\centering
\includegraphics[width=0.9\linewidth]{Figures/anna1_ilp2ml.pdf} }\\
\vspace{\abovecaptionskip}
\small \textbf{\textsf{(f)}} Algorithm \ensuremath{\mathrm{ILP2ML}}: 53 layers and 16 crossings.
\end{tabular}
\caption{The storylines for dataset anna1 and the algorithms \ensuremath{\mathrm{P_s}}, \ensuremath{\mathrm{P_p}}, \ensuremath{\mathrm{ILP1}}, \ensuremath{\mathrm{ILP1ML}}, \ensuremath{\mathrm{ILP2}}, and \ensuremath{\mathrm{ILP2ML}}. The $x$-axes are labeled by the scenes of the book, which are separated by dashed gray lines and correspond to the time intervals. Interactions are visualized as black vertical bars; the characters of the book are shown as $x$-monotone curves.}
\label{fig:remainingexamples}
\end{figure}
\section{Conclusion}
We introduced storylines with time intervals, which capture many real-world datasets, such as authorship networks with multiple papers per year. We also provide two methods to compute the vertical and horizontal orderings in these storylines. Preliminary experiments show that these storylines allow for more effective width-minimization, by assigning multiple interactions to one (vertical) layer, and more effective crossing-minimization, by exploiting different horizontal orderings of interactions.
Further research directions include optimizing our current methods and extending the model further, for example with overlapping slices.
|
{
"arxiv_id": "2302.14192",
"language": "en",
"timestamp": "2023-03-01T02:04:27",
"url": "https://arxiv.org/abs/2302.14192",
"yymm": "2302"
} | \section{Introduction}
In recent years, modern deep learning models have set the state-of-the-art (SOTA) in various applications. However, they tend to give high confidence to out-of-distribution (OOD) samples. For instance, during the inference of a classification task, an instance can be assigned to one of the classes with an overconfident prediction even though the sample does not belong to any class that appears in the training set. Since these models make closed-world assumptions, their predictions in real-world scenarios such as medical and autonomous-driving applications may have undesirable consequences. Hence, to mitigate overconfident predictions on OOD data, various OOD detection studies \cite{b24, b25, b26, b27, b28, b29, b30, b31} have been proposed. The same problem also exists for radar-based applications.
Radars, thanks to their insensitivity to illumination and environmental conditions like fog and rain, and their ability to preserve privacy, have attracted attention for several applications such as presence sensing \cite{b32}, human activity classification \cite{b34}, and gesture recognition \cite{b33}. However, none of the previous works consider how to act when a sample from a different distribution appears at evaluation time. For instance, in \cite{b35}, the authors aim to detect abnormal human activities like fainting and falling in a toilet cabin. However, they do not consider the case when a non-human moving object like a robot vacuum cleaner appears in the toilet. Ideally, the system should say, "I do not know that object," but instead it reports either a normal or an abnormal activity, labels that apply only to human targets. This study aims to create an OOD detector that works with Range Doppler Images (RDIs) of a 60 \si{\GHz} L-shaped FMCW radar. The OOD detector aims to detect moving objects other than a walking human.
An OOD detector works as follows: it first assigns a score to the test sample \textbf{x} using a score function $S$. Then, using validation data containing only in-distribution (ID) samples, a threshold $\tau$ is calculated. Finally, if $S(\textbf{x}) < \tau$, the sample is treated as ID, otherwise as OOD; a minimal code sketch of this rule is given after the list below. To determine the scores, some detectors rely on softmax probabilities \cite{b1, b2}, some use the logits \cite{b7}, and some exploit intermediate feature representations \cite{b4, b5, b6, b31}. In this work, we propose two scores: the patch-based reconstruction (PB-REC) and latent energy (PB-LSE) scores. For PB-REC, we use a patch-based autoencoder architecture. For PB-LSE, on the other hand, we use the latent representation of the autoencoder, mapping it from a 1D vector $(\mathbb{R}^D)$ to a scalar $(\mathbb{R})$. With the help of a patch-based training (only with ID samples) and inference strategy, we focus on the local features of RDIs. Thus, the local features of an ID sample are better reconstructed, leading to better OOD detection during inference. With the same idea, for a single RDI, we obtain as many latent codes as there are patches at inference time, which means that we get compressed but more detailed information from each patch. We combine the latent codes to obtain the final energy score (PB-LSE). This combination makes the assigned energy score more distinguishable for ID and OOD samples since we have more information about the data. The experiments in Section \ref{experiments} also confirm the proposed idea. Our key contributions are as follows:
\begin{itemize}
\item We propose a reconstruction-based OOD detector. Our detector uses two scoring functions that produce patch-based reconstruction and latent energy scores. We achieve an AUROC of 90.72\% on our dataset, consisting of several ID and OOD samples.
\item Compared to the baseline and other SOTA methods, our method achieves superior results. Moreover, when we use only the energy scores, we still obtain an AUROC of 90.72\%. Thus, we can differentiate between ID and OOD samples without the decoder, using only the encoder part of the network. The encoder is only 641 kB in size, making our detector very suitable for embedded usage.
\end{itemize}
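To make the thresholding rule above concrete, the following minimal sketch (in Python) fits $\tau$ on ID-only validation scores and applies the decision rule at test time; the 95\% quantile and the function names are illustrative assumptions, not part of our implementation.
\begin{verbatim}
import numpy as np

def fit_threshold(val_id_scores, quantile=0.95):
    # tau is chosen from ID-only validation scores; the 95% quantile
    # is an assumed choice here, not the paper's exact procedure.
    return np.quantile(val_id_scores, quantile)

def is_ood(score, tau):
    # S(x) < tau -> ID, otherwise OOD.
    return score >= tau
\end{verbatim}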
\section{Related Work}
The existing studies on OOD detection are mainly shaped by the image domain, especially for classification tasks. As a baseline method, Hendrycks \& Gimpel published their work \cite{b1}, which is based on the maximum softmax probability (MSP). Their main claim is that, during inference, deep neural networks assign higher confidence values to ID samples than to OOD samples. Therefore, with simple thresholding, ID and OOD samples can be differentiated. The ODIN approach \cite{b2} improves on \cite{b1} by introducing temperature scaling and input perturbations to increase the softmax scores of ID samples. This method is further extended by a model-ensembling approach \cite{b3}. The Mahalanobis distance based detector (MAHA) \cite{b4} detects OOD samples using intermediate feature representations of DNNs. It fits a class-conditional Gaussian distribution to each ID class and assigns an OOD score to a test sample using the Mahalanobis distance of the sample to each of these distributions. It also uses layer ensembling and input perturbation like ODIN \cite{b2}. Similarly, \cite{b5, b6} utilize intermediate representations to detect OOD samples. Liu et al. \cite{b7} created an energy-based OOD detection method by applying the $LogSumExp$ ($LSE$) function to the logit layer and stated that ID samples have lower energy scores than OOD samples. The methods above work with any pre-trained model, but some of them require OOD samples to tune their hyper-parameters.
In the Outlier Exposure (OE) technique by Hendrycks et al. \cite{b8}, a limited number of OOD samples are used for either fine-tuning or training from scratch with a new loss, which pushes the softmax probabilities of the OODs to the uniform distribution. In the inference step, they only use regular cross-entropy loss and propose a score-based OOD detector. Similar studies \cite{b9, b10} exploit OE while training their deep architectures with novel loss functions. In \cite{b11}, a GAN architecture is used with a special loss function to generate artificial OOD samples to be used for OE. In the literature, many other studies \cite{b12, b13} utilize generative models like GANs and Normalizing Flows to detect OODs.
Some studies \cite{b2, b14} emphasize the importance of gradient information for the OOD detection task. For example, in GradNorm \cite{b14}, the magnitudes of the gradients are utilized to distinguish between IDs and OODs. The gradients are calculated by backpropagating the KL divergence between the softmax output and uniform distribution. In this logic, the ID samples have higher magnitudes than OODs.
There are also some reconstruction-based OOD detection methods. For example, \cite{b15} uses an AE and, as an OOD score, combines the reconstruction loss with a Mahalanobis distance in the latent space. Some other studies \cite{b16, b17} benefit from the Variational Autoencoder (VAE) and its latent representation to detect OODs.
Most of the works in this area are based on convolutional neural network (CNN) architectures. However, in recent years, some works \cite{b18, b19, b20} based on pre-trained transformers with attention mechanisms, like ViT \cite{b21} and BERT \cite{b22}, have been proposed as OOD detectors and achieve SOTA results on some benchmark datasets.
In the radar domain, there are few OOD detection studies utilizing DNNs. One study \cite{b23} compares a few conventional and deep learning methods for low-resolution radar micro-Doppler signatures using a synthetically generated dataset. In \cite{b36}, a hand gesture recognition system is proposed that leverages FMCW radar technology and includes the ability to detect OOD input data. \cite{b37} proposes a meta-reinforcement learning approach for robust radar tracking with OOD detection support.
\section{System Design}
\subsection{Radar Configuration}
Our sensor is based on Infineon's BGT60TR13C chipset, a 60 \si{\GHz} L-shaped FMCW radar. It consists of one transmit (Tx) and three receive (Rx) antennas. The radar configuration is listed in Table \ref{tab:radar_conf}. The Tx antenna transmits $N_c$ chirp signals, and the Rx antennas receive the reflected signals. The transmitted and reflected signals are mixed, and this mixture produces the intermediate frequency (IF) signal. The final raw Analogue-to-Digital Converter (ADC) data is acquired by low-pass filtering and digitizing the IF signal. Each chirp has $N_s$ samples, so after rearranging, a frame has dimensions $N_{Rx} \times N_c \times N_s$ and is ready for further digital signal processing.
\subsection{Pre-processing}
In our network, we use RDIs as inputs; therefore, we apply the following pre-processing steps to our raw data:
\begin{itemize}
\item \textbf{Range FFT}: We apply a range FFT with a Chebyshev window at 100 \si{\dB} along fast time, with mean removal, to obtain single-channel range data from the three Rx channels.
\item \textbf{MTI}: Following, we apply simple frame-wise moving target identification (MTI) on range data to remove any static object in the field of view of the radar.
\item \textbf{Doppler FFT}: We then perform a Doppler FFT with a Chebyshev window at 100 \si{\dB} along slow time for each range sample and obtain our final RDI of dimension $64 \times 64$.
\end{itemize}
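The following NumPy/SciPy sketch illustrates these three steps for one raw frame. The way the three Rx channels are combined and the exact MTI filter are not fully specified above, so averaging the channels and subtracting the previous frame's range data are assumptions.
\begin{verbatim}
import numpy as np
from scipy.signal.windows import chebwin

def rdi_pipeline(frame, prev_range=None):
    # frame: (N_Rx=3, N_c=64, N_s=128) raw ADC data -> (64, 64) RDI.
    n_rx, n_c, n_s = frame.shape
    # Range FFT on fast time: mean removal + 100 dB Chebyshev window.
    x = (frame - frame.mean(axis=-1, keepdims=True)) * chebwin(n_s, at=100)
    rng = np.fft.fft(x, axis=-1)[..., : n_s // 2]  # keep 64 range bins
    rng = rng.mean(axis=0)                         # combine Rx channels (assumed)
    # Frame-wise MTI: subtract previous frame to remove static targets (assumed).
    mti = rng if prev_range is None else rng - prev_range
    # Doppler FFT on slow time, zero Doppler shifted to the center.
    win_d = chebwin(n_c, at=100)[:, None]
    rdi = np.fft.fftshift(np.fft.fft(mti * win_d, axis=0), axes=0)
    return np.abs(rdi), rng                        # RDI and state for next frame
\end{verbatim}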
\begin{table}[h]
\caption{\small FMCW Radar Configuration Parameters }
\centering
\begin{tabular}{@ {\extracolsep{10pt}} ccc}
\toprule
\centering
Configuration name & Symbol & Value \\
\midrule
Number of Transmit Antennas & $N_{Tx}$ & 1 \\
Number of Receive Antennas & $N_{Rx}$ & 3 \\
Sampling frequency & $f_s$ & 2 \si{\MHz} \\
Number of chirps per frame & $N_c$ & 64 \\
Number of samples per chirp & $N_s$ & 128 \\
Frame Period & $T_f$ & 50 \si{\ms} \\
Chirp to chirp time & $T_c$ & 391.55 \si{\us} \\
Ramp start frequency & $f_{min}$ & 60.1 \si{\GHz}\\
Ramp stop frequency & $f_{max}$ & 61.1 \si{\GHz}\\
Bandwidth & $B$ & 1 \si{\GHz}\\
\bottomrule
\end{tabular}
\label{tab:radar_conf}
\end{table}
\begin{figure}[htbp]
\centerline{\includegraphics[width=\linewidth]{RDI-Process-Chart-eps-converted-to.pdf}}
\caption{The diagram of the pre-processing steps to obtain RDIs}
\label{fig:preprocess}
\end{figure}
\section{Problem Statement and Method}
In this work, we aim to build a system that is able to say "I do not know" for moving targets (OOD) other than a walking human (ID) using RDIs. In principle, this is a binary (0--1) classification task: ID samples should be classified as one, while OODs need to be labeled as zero. However, the OOD detection task differs from a typical binary classification application since there may be infinitely many OOD objects that need to be correctly detected. That is why we test our pipeline with a large number of distinct OOD objects that are likely to appear in indoor environments like offices and homes. We are therefore confident that our pipeline would also perform well on OODs not used in our study.
\subsection{Architecture and Training}
For our reconstruction-based OOD detection pipeline, we use a patch-based AE architecture. The AE aims first to compress the input data to a latent space with smaller dimensions than the input sample, and from the latent space, it tries to reconstruct the input without losing information.
Our patch-based AE architecture consists of an encoder and a decoder. The encoder includes three 2D convolution layers with ReLU activation, each followed by a pooling layer. Each convolution has a kernel size of $3 \times 3$; the first has 16 filters, while the others have 32 and 64, respectively. With a flattening and a dense layer, the encoder maps the input to its 128-dimensional latent space. The decoder first feeds the latent code sequentially through dense and reshaping layers. It then applies four transposed 2D convolutions, with an up-sampling layer after each of the first three. The first three of these layers have ReLU activation and a kernel size of $3 \times 3$; the first has 64 filters, while the others have 32 and 16. To reconstruct the input with the same dimensionality, the final convolution layer has a $3 \times 3$ kernel with one filter and sigmoid activation.
For the training, we only use ID samples. We divide each RDI $(64,64,1)$ into four equally sized $(32,32,1)$ patches and feed our network with the patches separately while keeping their actual positions in one compact RDI. With this technique, the system focuses on local features instead of learning the entire RDI at once. Together with the Adam optimizer, we use binary cross-entropy as the loss function, computed between the input RDI and the concatenated reconstructions.
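The following Keras sketch mirrors the description above; strides, padding, the decoder's reshape target, and computing the loss per patch (rather than on the re-concatenated RDI, as we do) are simplifying assumptions.
\begin{verbatim}
from tensorflow.keras import layers, Model

def build_patch_ae(latent_dim=128):
    inp = layers.Input(shape=(32, 32, 1))          # one RDI patch
    x = inp
    for f in (16, 32, 64):                         # three conv+pool stages
        x = layers.Conv2D(f, 3, activation="relu", padding="same")(x)
        x = layers.MaxPooling2D(2)(x)              # 32 -> 16 -> 8 -> 4
    z = layers.Dense(latent_dim)(layers.Flatten()(x))   # latent code (128)

    y = layers.Dense(4 * 4 * 64, activation="relu")(z)
    y = layers.Reshape((4, 4, 64))(y)
    for f in (64, 32, 16):                         # transpose conv + upsampling
        y = layers.Conv2DTranspose(f, 3, activation="relu", padding="same")(y)
        y = layers.UpSampling2D(2)(y)              # 4 -> 8 -> 16 -> 32
    out = layers.Conv2DTranspose(1, 3, activation="sigmoid", padding="same")(y)

    ae, encoder = Model(inp, out), Model(inp, z)
    ae.compile(optimizer="adam", loss="binary_crossentropy")
    return ae, encoder

def to_patches(rdis):
    # Split (N, 64, 64, 1) RDIs into four (N, 32, 32, 1) quadrants.
    return (rdis[:, :32, :32], rdis[:, :32, 32:],
            rdis[:, 32:, :32], rdis[:, 32:, 32:])
\end{verbatim}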
\subsection{OOD Detection}
To detect OOD data, we propose two scores: a reconstruction score and a latent energy score. During inference, each instance in the RDI sequence is again divided into four patches, and each patch is fed to the pre-trained DNN model. The full model produces a reconstruction for each patch. We combine the reconstructed patches based on their positions and calculate the mean of the reconstruction mean squared error (MSE); this constitutes the reconstruction score (\ref{eq:1}). The encoder part of the model gives four latent codes, one per patch. We first sum them, then apply the LogSumExp ($LSE$) function to the summed latent code (a 1D vector) to calculate a latent energy score (\ref{eq:2}). We then use these scores to assess the performance of our method on different metrics. The overall pipeline is presented in Figure \ref{fig:pipline}.
\begin{equation}
\small {S_r(\textbf{X}) = mean(\sum_{z=1}^q\frac{1}{hw} \sum_{i=1}^h \sum_{j=1}^w (D(E(\textbf{P}_z))_{ij}-(\textbf{P}_z)_{ij})^2 )}
\label{eq:1}
\end{equation}
\begin{equation}
\small {S_e(\textbf{X}) = \log\left[\sum_{j=1}^k \exp(\sum_{t=1}^q(\textbf{z}_t(j))) \right]}
\label{eq:2}
\end{equation}
where $\textbf{X} \in \mathbb{R}^{64 \times 64}$ is the input RDI and consists of four patches $\{\textbf{P}_1,\textbf{P}_2,\textbf{P}_3,\textbf{P}_4\}$. $E$ and $D$ are the encoder and decoder, respectively. $\textbf{z}_t(j)$ is the $j$-th element of $\textbf{z} \in \mathbb{R}^{k}$ for patch $\textbf{P}_t$. $h$ and $w$ are the dimensions of a patch $(32, 32)$, $q$ is the number of patches ($4$), and $k$ is the length of the latent vector ($128$).
Energy-based models (EBMs) are used in the literature for OOD tasks. They mainly depend on a function that maps a $D$-dimensional data vector to a scalar value called the energy. \cite{b7} utilizes this idea by applying $LSE$ on the logit layer, which involves the class label information of a $K$-class classification task. We are inspired by \cite{b7} and apply $LSE$ to the latent code, which holds all the information about the input in a compressed form. The main difference is that they use classification-based architectures and obtain the energy scores just before the softmax layer, which includes information from all classes, including the actual class the data belongs to. We, on the other hand, use a reconstruction-based architecture and obtain our energy scores from the latent representation of an AE. Hence, our energy scores are derived from the data itself and only its actual class, and are expected to be more informative. Since we train our network only with ID samples, we expect lower energy scores, as well as lower reconstruction scores, for ID samples than for OOD samples.
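Given the trained models, both scores of (\ref{eq:1}) and (\ref{eq:2}) can be computed per RDI as sketched below; the model objects follow the sketch in the previous subsection and are assumed names.
\begin{verbatim}
import numpy as np
from scipy.special import logsumexp

def ood_scores(rdi, ae, encoder):
    # rdi: one (64, 64, 1) input, split into its four quadrants.
    patches = np.stack([rdi[:32, :32], rdi[:32, 32:],
                        rdi[32:, :32], rdi[32:, 32:]])
    recon = ae.predict(patches, verbose=0)
    s_rec = np.mean((recon - patches) ** 2)        # PB-REC, Eq. (1)
    z = encoder.predict(patches, verbose=0)        # (4, 128) latent codes
    s_energy = logsumexp(z.sum(axis=0))            # PB-LSE, Eq. (2)
    return s_rec, s_energy
\end{verbatim}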
\begin{figure}[htbp]
\centerline{\includegraphics[width=\linewidth]{E-D-Process-Chart-1-eps-converted-to.pdf}}
\caption{The overall pipeline of the proposed method.}
\label{fig:pipline}
\end{figure}
\section{Experiments}
\label{experiments}
\subsection{Dataset}
The RF data is collected using Infineon's BGT60TR13C 60 \si{\GHz} FMCW radar sensor with four individuals in 10 distinct indoor places, including office and house rooms. The radar is placed at a height of two meters and tilted 15 degrees downward. Our dataset consists of ID and OOD samples. ID samples comprise human walking activity: the person in the radar field of view walks freely, without supervision, at distances varying from 1 to 5 m. The free walks include arbitrary movements, such as forward-backward walks, walks from left to right and right to left, and circular and diagonal walks. The OOD samples consist of various moving objects seen in indoor environments, including a table fan, a remote-controlled (RC) toy car, swinging plants, swinging blinds, swinging laundry, a swinging paper bag, a vacuum cleaner, and a robot vacuum cleaner. Each activity is recorded for around 1 to 2 minutes. After the pre-processing, we obtain a set of RDI data from the recordings of 10 separate rooms and use six rooms for training and the remaining four for the test phase. For training, we use 20000 ID frames from six different rooms and four different individuals, balanced across individuals and rooms. For inference, we use many ID (10000) and OOD (12000) samples from four different rooms, balanced across individuals, other moving objects, and rooms. Radar-based models tend to learn the room itself, so an application may perform well in one room but poorly in others; that is, the system may overfit to the room. In our experiments, none of the rooms utilized during training appear in testing, which reflects the robustness of our pipeline. The dataset details are provided in Table \ref{tab:data}.
\begin{table}[h]
\caption{\small Number of ID and OOD RDI data samples for each activity used in the training and test. }
\centering
\begin{tabular}{@ {\extracolsep{4pt}} c|c|c}
\toprule
\centering
ID Activity & \# of Training Samples & \# of Test Samples \\
\midrule
Walking & 20000 & 10000\\
\midrule
\midrule
OOD Activity \\
\midrule
Table Fan & -&2000 \\
RC Toy Car & -&2000 \\
Swinging Laundry & -&2000\\
Robot Vacuum Cleaner &- & 1500\\
Vacuum Cleaner & -&1500 \\
Swinging Blinds & -&1000\\
Swinging Plants & -&1000\\
Swinging Paper Bag & -&1000\\
\midrule
Total OOD & - & 12000 \\
\midrule
Total & 20000& 22000 \\
\bottomrule
\end{tabular}
\label{tab:data}
\end{table}
\subsection{Evaluation Metrics and Experimental Results}
In OOD tasks, accuracy is not a reliable metric, so researchers commonly evaluate their OOD detectors using the following metrics:
\begin{itemize}
\item \textbf{AUROC} is the area under the receiver operating characteristic (ROC) curve.
\item \textbf{AUPR\_IN} is defined as the area under the precision-recall curve when ID samples are assumed to be positives.
\item \textbf{AUPR\_OUT} is the area under the precision-recall curve when OOD samples are assumed to be positives.
\end{itemize}
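These metrics can be computed directly from the two score populations, e.g., with scikit-learn as sketched below; we use average precision as the common approximation of the area under the PR curve and assume that higher scores indicate OOD.
\begin{verbatim}
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def ood_metrics(scores_id, scores_ood):
    y = np.concatenate([np.zeros(len(scores_id)), np.ones(len(scores_ood))])
    s = np.concatenate([scores_id, scores_ood])
    auroc = roc_auc_score(y, s)                      # OOD as positive class
    aupr_out = average_precision_score(y, s)
    aupr_in = average_precision_score(1 - y, -s)     # ID as positive class
    return auroc, aupr_in, aupr_out
\end{verbatim}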
\begin{table}[h]
\caption{\small \textbf{Main Results.} Comparison with the baseline (AE) and other SOTA methods. All values are percentages.}
\centering
\begin{tabular}{@ {\extracolsep{10pt}} cccc}
\toprule
\centering
Method & AUROC & AUPR\_IN & AUPR\_OUT\\
\midrule
Baseline (AE) & 75.44 & 78.07 & 76.80
\\ \midrule
MSP (LeNet5) \cite{b1} & 88.66 & 81.03 & 93.26\\
Energy (LeNet5) \cite{b7} & 88.85 & 82.49&93.31 \\
ODIN (LeNet5) \cite{b2} & 88.86 & 82.50 & 93.32 \\
OE (LeNet5) \cite{b8} & 50.34 & 32.31 & 67.80 \\ \midrule
MSP (ResNet-34) \cite{b1} & 50.07 & 29.00 & 71.17 \\
Energy (ResNet-34) \cite{b7} & 68.09 & 64.90&74.29 \\
ODIN (ResNet-34) \cite{b2} & 68.68 & 65.35 & 74.67 \\
OE (ResNet-34) \cite{b8} & 17.38 & 20.61 & 49.69 \\ \midrule
MSP (ResNet-50) \cite{b1} & 50.13 & 28.96 & 70.78 \\
Energy (ResNet-50) \cite{b7} & 89.82 & 79.48&94.36 \\
ODIN (ResNet-50) \cite{b2} & 89.86 & 79.51 &\textbf{ 94.40 } \\
OE (ResNet-50) \cite{b8} & 7.99 & 18.89 & 47.21 \\ \midrule
PB-REC (Ours) & 88.84 & 83.61 & 92.16 \\
\textbf{PB-LSE} (Ours) & \textbf{90.72} & \textbf{87.67} & 92.81 \\
\bottomrule
\end{tabular}
\label{tab:main_results}
\end{table}
Table \ref{tab:main_results} demonstrates the superiority of the proposed method in terms of AUROC and AUPR\_IN over the baseline and other SOTA methods. The baseline method (AE) uses the same architecture as our proposed methods but, instead of using patches, it directly uses the full RDIs to train the AE. In the OOD detection phase, the baseline method only uses reconstruction scores calculated as the mean of the reconstruction MSE between the input and reconstructed RDIs. For the MSP, Energy, ODIN, and OE methods, LeNet5 \cite{lecun1998gradient}, ResNet-34, and ResNet-50 \cite{resnet34} backbones are used. These architectures are trained in a one-class classification manner using only ID samples, and the trained models are used to apply the SOTA methods.
\section{Conclusion}
OOD detection has a crucial place in academia and industry since detecting unknown samples is of great importance, especially for safety-critical applications like medical and autonomous driving. In the literature, there are several OOD detection methods, and they produce impressive results. In the radar domain, however, this field has not been studied broadly. Therefore, in this study, we addressed the OOD detection problem on top of a short-range FMCW radar. We propose two score values, a reconstruction score from an autoencoder and a latent energy score from the latent representation of the same autoencoder, to detect OOD samples, i.e., all moving objects in the field of view of the radar other than a walking person. With our approach, we reach an AUROC of 90.72\%, which outperforms the baseline as well as the other compared SOTA methods. Our experiments also demonstrate the effectiveness of the patch-based strategy: thanks to it, we obtain better reconstructions and more informative latent codes, resulting in more distinguishable reconstruction and energy scores for ID and OOD samples. Additionally, with its small size of 641 kB, our detector is appropriate for embedded devices.
|
{
"arxiv_id": "2302.14201",
"language": "en",
"timestamp": "2023-03-01T02:04:41",
"url": "https://arxiv.org/abs/2302.14201",
"yymm": "2302"
} |
\section{Appendix}
\section{Understanding the need for unconfirmed submarine category}\label{unconfirmed_submarine_explanation}
Submarine cables tend to be viewed as long-haul cables that connect disjoint (non-connected) landmasses. It is important to note, however, that this is not always the case: there are many domestic and small inter-country submarine cables, and Figure~\ref{potential_land_submarine_cables} shows two such examples. Since a terrestrial cable could also cover the path taken by these submarine cables, Nautilus introduces a category called \emph{unconfirmed submarine} to account for the possibility that such a path is served by a submarine cable. This category captures such submarine cables and assigns a lower prediction score, as a terrestrial cable along that path remains possible.
\begin{figure}[ht]
\centering
\begin{subfigure}{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/brazil_festoon.png}
\caption[]%
{{\small Brazilian Festoon is an entirely domestic submarine cable that connects various locations in Brazil's coastline}}
\end{subfigure}
\vspace{3mm}
\hfill
\begin{subfigure}{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/adria_1_cable.png}
\caption[]%
{{\small Adria-1 is a cable that connects Albania, Croatia, and Greece.}}
\end{subfigure}
\caption{Examples of submarine cables where potentially a terrestrial cable could be used to connect the landing points in the cable}
\label{potential_land_submarine_cables}
\end{figure}
\section{Parallel Cables} \label{parallel_cables_analysis}
We define parallel cables as cables whose landing points on either end are in close proximity to each other. Cables tend to follow the shortest path that avoids obstacles and hazards like undersea mountains or regions with high seismic activity; consequently, in regions with high demand and capacity requirements, cables are often laid in parallel. An example of parallel cables is shown in Figure~\ref{fig:parallel_cables}. Hence, to identify the exact cable a particular IP link takes, Nautilus relies on the idea that links of an AS are more likely to be on a submarine cable that the same AS owns. Using this, Nautilus is able to significantly reduce the number of cables it finally predicts.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/parallel_cables.png}
\caption{An example of parallel cables: Yellow and Grace Hopper, which run essentially in parallel between Bude, UK, and Bellport, NY, US. While these cables share the exact same landing points on both ends, cables only need landing points in close proximity on both ends to count as parallel}
\label{fig:parallel_cables}
\end{figure}
\section{SoL validation performance analysis} \label{sol_validation_analysis}
To analyze the gains with SoL validation, we again utilize the ground truth data generated by Gharaibeh et al.~\cite{geolocation_problem} and normalize the results based on the total number of IPs. Figure~\ref{fig:sol_validation} contains the results of this experiment. As observed from this figure, although with SoL validation some fraction of IPs have no valid geolocation output, the overall geolocation fidelity improves by up to 2\%. This validates the overall performance gains of SoL validation. Additionally, to identify the best SoL threshold, we examine the normalized geolocation accuracy and observe that the 0.05 threshold performs best. With a lower SoL threshold, we impose a harder constraint on the margin of error of traceroute delay annotations; by increasing the threshold, we loosen this constraint. As observed, with a threshold of 0 the accuracy drops drastically because more geolocation sources produce no output for a given IP. On the other hand, with a threshold of 0.1, we start observing a decrease in normalized geolocation accuracy as more IPs may have erroneous geolocations.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/Fig-sol-validation.pdf}
\caption{Comparison of normalized geolocation accuracy without SoL and with different SoL thresholds. SoL validation with a 0.05 threshold gives the best-normalized geolocation accuracy}
\label{fig:sol_validation}
\end{figure}
\section{Cable Failures} \label{cable_failures_appendix}
Figures~\ref{fig:yemen_cables} and ~\ref{fig:kdcn} show the submarine cables that experienced an outage during the Yemen outage and Papua New Guinea earthquake respectively.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/yemen_cables.png}
\caption{The 2 cables (FALCON shown in red and SeaMeWe-5 shown in blue) with a landing point in Al Hudaydah which was damaged by airstrike}
\label{fig:yemen_cables}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/kumul_domestic_cable.png}
\caption{The Kumul domestic cable in Papua New Guinea that was damaged by an earthquake in the region near Madang}
\label{fig:kdcn}
\end{figure}
\section{Conclusion}
While the submarine cable network is a critical component of the Internet infrastructure, there currently exists no tool for generating a unique mapping between network (IP) layer links and submarine cables. We design Nautilus, a framework that utilizes publicly available traceroute measurements, the submarine cable map, geolocation, and AS owner information to generate a mapping of IP links to submarine cables with a confidence score for each prediction. Nautilus generates a mapping for 91\% of all active cables and 82\% of all identified submarine (definite and unconfirmed) links while ensuring that it predicts a small number of cables for each link. The good prediction accuracy achieved in validation experiments underscores the importance of Nautilus for submarine network research. By filling a gap in Internet cartography, Nautilus also opens an avenue for research on end-to-end analysis of the Internet that stitches together both terrestrial and submarine segments.
\section{Datasets} \label{data_sources}
In this section, we present (i) a brief overview and statistics on the data sources used by Nautilus, and (ii) the rationale behind the selection of specific thresholds in the design.
\begin{table*}
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{lcccccccc}
\toprule
\multirow{2}{*}{ } &
\multicolumn{4}{c}{IPv4} &
\multicolumn{4}{c}{IPv6} \\
\cmidrule(lr){2-5}
\cmidrule(lr){6-9}
& RIPE 5051 & RIPE 5151 & CAIDA (v4) & \textbf{Total} & RIPE 6052 & RIPE 6152 & CAIDA (v6) & \textbf{Total} \\
\midrule
Total traceroutes & 15.3 M & 15.3 M & 86.2 M & \textbf{116.8 M} & 7.13 M & 7.13 M & 105 M & \textbf{119.3 M}\\
\# links & 3.4 M & 3.9 M & 2.1 M & \textbf{5.8 M} & 1.83 M & 1.23 M & 1.17 M & \textbf{3.08 M} \\
\# unique links & 1.09 M & 1.23 M & 724 K & & 441 K & 345 K & 1.42 M & \\
\# link endpoint & 790 K & 937 K & 709 K & \textbf{1.11 M} & 424 K & 337 K & 1.38 M & \textbf{1.64 M} \\
\bottomrule
\end{tabular}%
}
\caption{RIPE Atlas and CAIDA traceroute dataset characteristics}
\label{traceroute_table}
\end{table*}
\subsection{Data Sources}
\subsubsection{Traceroutes}
\label{subsubsec:traceroutes}
Nautilus relies on traceroutes collected by CAIDA's Ark and RIPE Atlas. From RIPE Atlas, Nautilus uses long-running traceroute measurements---(5051 \& 5151) and (6052 \& 6152) for IPv4 and IPv6, respectively. These measurements execute traceroutes between a subset of $\approx$ 1000 RIPE probes every 15 minutes. The CAIDA dataset has /24 (IPv4) and /48 (IPv6) prefix measurements~\cite{caida_24_probing, caida_48_probing}. CAIDA initiates ICMP-based Paris traceroutes from a collection of 15-30 anchor probes to a random IP address in every /24 and /48 address space. We use RIPE Atlas traceroutes over a period of two weeks, March 15-29, 2022, with approximately 30M IPv4 and 14M IPv6 traceroutes. We use CAIDA traceroutes collected over a single cycle (1647) corresponding to about ten days, March 13-23, 2022, containing 86M IPv4 and 105M IPv6 traceroutes. We chose two weeks because beyond that period the number of unique IPs and links grew only marginally, at around 3-4\% per day. Table~\ref{traceroute_table} gives a detailed breakdown of the datasets. The final dataset includes 5.8M IPv4 and 3.08M IPv6 unique links. Only 5.11M IPv4 and 2.34M IPv6 links pass the SoL validation.
\subsubsection{Submarine Cable Information}
Nautilus uses the Submarine Cable map provided by TeleGeography. This map offers information such as landing points (terrestrial termination points of submarine cables), cable owners, the Ready For Service (RFS) year, and the length of $\approx$ 480 submarine cables. In Nautilus, we only consider the 450 cables with an RFS earlier than 2022. Additionally, there are $\approx$ 1200 valid landing points, and the information database lists 407 unique cable owners.
\vspace{-2mm}
\subsubsection{Geolocation}
For the 1.11M IPv4 and 1.64M IPv6 unique addresses identified from traceroutes, we utilize the following 11 geolocation sources: RIPE IPMap~\cite{ripe_ipmap}, CAIDA geolocation~\cite{caida_itdk}, Maxmind~\cite{maxmind}, IP2Location~\cite{ip2location}, IPinfo~\cite{ipinfo}, DB-IP~\cite{db-ip}, IPregistry~\cite{ipregistry}, IPGeolocation~\cite{ipgeolocation}, IPapi~\cite{ipapi}, IPapi.co~\cite{ipapi_co}, and IPdata~\cite{ipdata} to generate a geolocation output for a given IP. For the 1.11 million IPv4 addresses, we obtain geolocation results for 91 K IPs from RIPE IPMap, 573 K IPs from CAIDA geolocation, 1.05 M IPs from Maxmind, and 1.07 M IPs from all other private databases. Similarly, for 1.64 million IPv6 addresses, we have the results for 18 K IPs from RIPE IPMap, 1.64 million IPs from Maxmind, and 1.62 million IPs from other private databases.
\vspace{-2mm}
\subsubsection{IP to ASN mapping}
For generating an IP to AS mapping, we use four sources: RADB servers~\cite{radb_server}, the Routinator RPKI validator~\cite{routinator_whois}, Cymru WhoIS~\cite{cymru_whois}, and CAIDA AS2Org~\cite{caida_as_to_org}. All sources used in IP to ASN mapping typically trace their origins back to the organization data stored with the Regional Internet Registries (RIRs). For the 1.11M IPv4 addresses, we obtain matches for 470K IPs from the Whois RADB servers, 1.05M IPs from the Routinator servers, 391K IPs from the CAIDA AS2Org mapping, and 1.11M IPs from Cymru WhoIS. Similarly, for the 1.64M IPv6 addresses, we obtain matches for 511K IPs from the Whois RADB servers, 1.56M IPs from the Routinator servers, and 1.64M IPs from Cymru WhoIS. Upon merging the results from all sources, we obtain a mapping for all 1.11M IPv4 and 1.64M IPv6 addresses.
In summary, we obtain 5.8M (IPv4) and 3.08M (IPv6) unique links with 1.11M (IPv4) and 1.64M (IPv6) unique IPs from the traceroutes collected from RIPE Atlas and CAIDA over a period of approximately 15 days. We obtain valid geolocation for $\approx$ 99\% of all the unique IPs and a valid AS number mapping for all the IPs.
\subsection{Design Choices}
\subsubsection{Performance with multiple geolocation sources} \label{multiple_geolocation_performance_gains}
We validate the geolocation accuracy of our eleven geolocation sources and Nautilus at three granularities (city, country, and continent) by testing on the ground-truth data generated by Gharaibeh et al.~\cite{geolocation_problem}. We normalize the results based on the total number of IPs in the ground truth data. Note that some sources do not produce an output for certain data points. Figure~\ref{fig:geolocation_accuracies} shows the results of this validation. Compared to the other geolocation sources, Nautilus has at least a $\approx$ 5\% improvement in geolocation accuracy at the city level and around a 3\% gain at the country and continent levels. The lower city-level accuracy of RIPE and CAIDA is due to the lack of geolocation data for many IP addresses in these datasets.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/Fig-geolocation-source-accuracies.pdf}
\caption{The accuracy of various geolocation sources at City, Country \& Continent granularity. The accuracies are normalized based on the total number of IPs in the ground truth data. Nautilus geolocation is determined by the cluster with the largest cluster score.}
\label{fig:geolocation_accuracies}
\end{figure}
\vspace{-2mm}
\subsubsection{SoL validation threshold} \label{sol_validation_threshold}
Nautilus employs SoL validation to improve geolocation fidelity. Using the same ground-truth source~\cite{geolocation_problem}, we evaluate the normalized geolocation accuracy without SoL validation and with SoL validation at various thresholds. Figure~\ref{fig:sol_validation} in Appendix~\ref{sol_validation_analysis} contains the results for this validation. We empirically derive that the SoL threshold of 0.05 (or 5\% error margin) gives the highest geolocation accuracy, with an improvement of 2\% over the accuracy without SoL validation.
\vspace{-2mm}
\subsubsection{Clustering \& Geolocation threshold in B-O-N classification} \label{cluster_geolocation_threshold}
Nautilus employs auto-clustering with a threshold of 20 km, since fewer than 25\% of the landing points are within a 20 km radius of another landing point, and below this threshold clustering fails. The geolocation threshold is 0.6, i.e., if any cluster contains at least 60\% of the total valid geolocation outputs, we consider the IP geolocation to be good. We arrive at this 0.6 threshold empirically: (i) at least half the geolocation sources agree, and (ii) it offers the best normalized geolocation accuracy when evaluated against the ground-truth data~\cite{geolocation_problem}.
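A sketch of this clustering step with scikit-learn's DBSCAN (haversine metric over coordinates in radians) is shown below; \texttt{min\_samples} is an assumed parameter.
\begin{verbatim}
import numpy as np
from sklearn.cluster import DBSCAN

EARTH_RADIUS_KM = 6371.0

def max_cluster_score(latlon_deg, eps_km=20.0, geo_threshold=0.6):
    # latlon_deg: per-source geolocations of one IP, shape (n_sources, 2).
    pts = np.radians(latlon_deg)                 # haversine expects radians
    labels = DBSCAN(eps=eps_km / EARTH_RADIUS_KM,
                    metric="haversine", min_samples=1).fit_predict(pts)
    best = max(set(labels), key=lambda l: np.sum(labels == l))
    score = np.sum(labels == best) / len(labels) # fraction in largest cluster
    return score, score >= geo_threshold         # cluster score, "good" flag
\end{verbatim}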
\vspace{-2mm}
\subsubsection{Geolocation radius threshold in geolocation-based cable mapping} \label{geolocation_radius_threshold}
After geolocating the IPs, Nautilus employs a radius-based query to identify landing points. Nautilus performs this query recursively with an initial radius of 500 km and a step size of 50 km, up to a maximum radius of 1000 km. We arrive at these thresholds based on the average country radius being $\approx$ 460 km and thus an average country diameter of $\approx$ 920 km. Hence, we set the maximum to 1000 km to include the possibility of a link endpoint being mapped to a more well-known PoP within the country.
\vspace{-2mm}
\subsubsection{Prediction score \& PaCT threshold} \label{pact_threshold}
Nautilus' prediction score is a weighted sum of the cluster score $C_i$ (fraction of elements in an IP's geolocation cluster), the distance-based score $d_i$ (normalized distance between the IP geolocation at endpoint $i$ and the corresponding submarine landing point location), and the ownership score $O_i$ of endpoint $i$ (1 if the ASN of the IP address at endpoint $i$ matches the ASN of any of the cable's owners, 0 otherwise), weighted in the ratio 5:4:1 and scaled by the category factor $f$ (0.5 for definitely submarine links and 0.25 for unconfirmed links). Thus, the prediction score $S \in [0,1]$ is computed as:
\vspace{-3.5mm}
\begin{equation}
S = f \cdot \left[0.5 \, (C_1 + C_2) + 0.4 \, (d_{1} + d_{2}) + 0.1 \, (O_1 + O_2)\right]
\end{equation}
Nautilus additionally uses a PaCT threshold of 0.05 (or 5\% margin) to prune unlikely cables and determine the final list of candidate cables. The 5:4:1 ratio and the 0.05 PaCT threshold values were selected based on empirical evaluation using validation experiments.
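In code, the score and the PaCT pruning look roughly as follows; treating the 5\% margin as an absolute 0.05 offset on the $[0,1]$ score (rather than a relative margin) is our assumption.
\begin{verbatim}
def prediction_score(c1, c2, d1, d2, o1, o2, f):
    # Weighted 5:4:1 sum, scaled by the category factor f
    # (0.5 definitely submarine, 0.25 unconfirmed).
    return f * (0.5 * (c1 + c2) + 0.4 * (d1 + d2) + 0.1 * (o1 + o2))

def prune_with_pact(cable_scores, pact=0.05):
    # Keep every cable within the PaCT margin of the best score.
    best = max(cable_scores.values())
    return {c: s for c, s in cable_scores.items() if s >= best - pact}
\end{verbatim}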
\section{Design} \label{design}
In this section, we present the design of Nautilus, a framework for cross-layer IP link to submarine cable mapping (Figure~\ref{system_diagram}). We defer the discussion on data sources and data-dependent design parameters to \S~\ref{data_sources}. Before delving into the details, we discuss the high-level workflow of Nautilus.
\begin{itemize}[leftmargin=*,nolistsep]
\item First, Nautilus extracts IP links from traceroutes (\circledsmall{1}).
\item Next, the geolocation module (i) geolocates a given IP and performs SoL validation (\circledsmall{2}), (ii) classifies a given IP link as definitely submarine, definitely terrestrial, or potentially submarine (\circledsmall{3}), and (iii) generates a cable mapping for a given IP link based on geolocation information (\circledsmall{4}).
\item Third, the cable owner module (i) obtains the AS number corresponding to each IP address and retrieves all cables belonging to those ASes (\circledsmall{5}), and (ii) generates a cable mapping based on this cable ownership information (\circledsmall{6}).
\item The aggregation phase combines the mappings generated by the geolocation and cable owner modules to produce a final cable mapping with a prediction score (\circledsmall{7}).
\end{itemize}
We present a running example of Nautilus outputs for a single link at various stages of the workflow in Figure~\ref{running_example}.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figures/system_design_v6.png}
\caption{Nautilus System Architecture. Each stage in the pipeline is earmarked with a number that indicates the order of processing information.}
\label{system_diagram}
\vspace{10mm}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figures/running_example_v5.png}
\caption{A running example to illustrate the outputs at each stage of Nautilus. These tables show the outputs for a single link extracted from a traceroute between RIPE probes in Taiwan and Los Angeles, US.}
\label{running_example}
\end{figure*}
\subsection{IP \& Link Extraction} \label{ip_link_extraction}
The first step in the Nautilus pipeline involves the extraction of IP links from traceroutes. Using RIPE Atlas~\cite{ripe_atlas} and CAIDA~\cite{CAIDA} datasets (\S~\ref{subsubsec:traceroutes}), Nautilus captures a large number of IP links, ensuring extensive coverage of submarine cables in the analysis. Nautilus processes the traceroutes from both sources by first eliminating traceroutes labeled as invalid or loop. It then extracts IP links that satisfy the following criteria: (i) both link endpoints have non-private IP addresses (geolocation is not possible for a private IP address), and (ii) the link endpoints are consecutive hops in the given traceroute (some routers disable ping and appear as * in traceroutes). Based on the extracted IP links, we generate a list of unique IPs. Figure~\ref{running_example}A shows two hops in a traceroute and Figure~\ref{running_example}B shows the IPs and links extracted from it.
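A minimal sketch of this extraction over one already-parsed traceroute, given as an ordered list of (hop index, IP-or-None) pairs, is shown below; parsing of the RIPE/CAIDA result formats is omitted.
\begin{verbatim}
import ipaddress

def extract_links(hops):
    # hops: [(hop_index, ip_string_or_None), ...] for one traceroute.
    links = []
    for (i1, ip1), (i2, ip2) in zip(hops, hops[1:]):
        if ip1 is None or ip2 is None or i2 != i1 + 1:
            continue                       # skip '*' hops and gaps
        a, b = ipaddress.ip_address(ip1), ipaddress.ip_address(ip2)
        if not (a.is_private or b.is_private):
            links.append((ip1, ip2))       # both endpoints are public
    return links
\end{verbatim}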
\subsection{Geolocation Module} \label{geolocation_module}
Nautilus uses geolocation as a critical tool for establishing the IP link to submarine cable mapping. However, relying on a single IP geolocation service has two issues: (i) the coverage of the various geolocation services differs significantly, and not all IPs can be geolocated using a single source, and (ii) geolocation services are known to be inaccurate, especially at the city level. To overcome these challenges, Nautilus first estimates the geolocation using eleven geolocation services and validates it using Speed-of-Light (SoL) based tests. Second, the geolocation information and an additional criterion of land connectivity are used to classify the extracted IP link into one of seven categories. Finally, a cable mapping is generated based on the IP geolocation information and the landing point information of submarine cables.
\vspace{-1mm}
\subsubsection{Geolocation of IPs \& SoL validation}\hfill\\
\vspace{-4mm}
\parab{Geolocation of IPs.} Prior work relied on limited geolocation services for mapping. However, IP geolocation is known to be inaccurate at the city level~\cite{geolocation_problem}. Furthermore, not all sources produce geolocation for all IPs. This especially holds true for geolocation sources that employ strict constraints. We propose the use of multiple geolocation services to improve geolocation fidelity. Nautilus integrates eleven sources to estimate the geolocation of a given IP. We demonstrate the performance gains achieved with multiple geolocation sources in \S~\ref{multiple_geolocation_performance_gains}.
\vspace{-0.25mm}
\parab{Speed-of-Light (SoL) validation.} To improve geolocation fidelity, Nautilus employs a Speed-of-Light (SoL) based validation and eliminates geolocations that do not satisfy the SoL constraint. The speed of light in optical fiber dictates the minimum delay that is theoretically possible between two geolocations. Using the well-known geolocations of RIPE and CAIDA probes and the measured delay to a given IP in a traceroute, we eliminate any candidate geolocation for the IP that fails the SoL constraint. While SoL is a strict constraint, to accommodate issues with the delay annotations in traceroutes (especially at the sub-millisecond scale), we allow a 5\% margin of error. We validate the performance gains of SoL validation and justify the selection of the SoL threshold of 0.05 (or 5\% margin of error) in \S~\ref{sol_validation_threshold}. Figure~\ref{running_example}C shows the output of geolocation and SoL validation.
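The SoL check can be sketched as below, using the haversine distance and a fiber propagation speed of roughly two thirds of the speed of light; the exact formulation used by Nautilus may differ.
\begin{verbatim}
from math import radians, sin, cos, asin, sqrt

C_FIBER_KM_PER_MS = 299792.458 * (2 / 3) / 1000   # ~200 km/ms in fiber

def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))

def passes_sol(probe_loc, candidate_loc, rtt_ms, margin=0.05):
    # The measured RTT must be able to cover the round-trip fiber
    # distance to the candidate location, within a 5% margin.
    dist_km = haversine_km(*probe_loc, *candidate_loc)
    return 2 * dist_km / C_FIBER_KM_PER_MS <= rtt_ms * (1 + margin)
\end{verbatim}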
\subsubsection{Link (B-O-N-U-S) Classification} \label{bonus_classification}
Identifying the links that potentially traverse a submarine cable is a critical step in mapping IP links to submarine cables. Latency-based classification of transoceanic IP links requires a threshold value, and determining the ideal threshold is difficult due to the large variation in submarine cable lengths. Hence Nautilus relies on a geolocation-based approach called B-O-N-U-S classification to categorize the IP links into seven categories based on (i) confidence in geolocation accuracy and (ii) the potential of a link using a submarine cable. Table~\ref{bonus_classification_table} shows the various categories from the B-O-N-U-S classification and Figure~\ref{running_example}D shows an example output.
\begin{table}[]
\centering
\small
\begin{tabularx}{\linewidth}{
>{\centering\arraybackslash}X |
>{\centering\arraybackslash}X
>{\centering\arraybackslash}X
>{\centering\arraybackslash}X }
\textit{{\# of endpoints with good geolocation}} & \emph{Definitely \textbf{S}ubmarine} & \emph{\textbf{U}nconfirmed Submarine} & \emph{Definitely Terrestrial} \\ \hline
\textit{\textbf{B}oth} & S, B & U, B & T \\
\textit{\textbf{O}ne} & S, O & U, O & T \\
\textit{\textbf{N}one} & S, N & U, N & T \\
\end{tabularx}
\vspace{1mm}
\caption{B-O-N-U-S Classification categories}
\vspace{-7mm}
\label{bonus_classification_table}
\end{table}
\parab{Confidence in Geolocation (B-O-N Classification).} Based on the confidence in the geolocation of the IP link endpoints, Nautilus classifies the link into one of three categories: B (both ends have good geolocation), O (one good), N (none good) (Table~\ref{bonus_classification_table}). Nautilus is confident in an IP geolocation when several sources agree. Nautilus employs an auto-clustering algorithm, DBSCAN~\cite{dbscan}, to (i) aggregate the geolocations from the eleven services into clusters, and (ii) estimate a score that represents the confidence of each cluster. We use 20 km as our clustering threshold. We consider the geolocation to be good if the fraction of elements in any one cluster, which we call its cluster score, is greater than the geolocation threshold of 0.6. Finally, we compute the link geolocation accuracy as the sum of the cluster scores at the two endpoints. We explain the selection of these thresholds in \S~\ref{cluster_geolocation_threshold}.
\parab{Submarine Cable Potential (U-S Classification).} Nautilus employs a three-pronged approach to determine the potential of an IP link to traverse a submarine cable. Nautilus classifies a link into definitely submarine (S), unconfirmed or potential submarine (U) (see Appendix~\ref{unconfirmed_submarine_explanation} for more details), or terrestrial (T) categories. Since Nautilus relies on multiple geolocation sources, despite SoL validation and clustering, it is still possible for each IP to have a few potential locations (potentially in different countries or continents, as accuracy is not 100\% even at these coarser granularities). Hence Nautilus retains and examines all combinations of location pairs for IP link endpoints.
First, Nautilus analyzes the continent data of the IP link endpoints. Nautilus tags all IP links interconnecting non-connected continents (like Asia and North America) as \emph{definitely submarine}. IP links not tagged in the first step are then classified based on an iterative country-level neighbor search. Nautilus labels an IP link as \emph{unconfirmed submarine} if, for any combination of candidate countries for the two endpoints, the countries either border each other or share a common land neighbor (like the Netherlands and Denmark). Otherwise, Nautilus labels the IP link as \emph{definitely submarine}. Finally, to capture the IP links that are \emph{definitely terrestrial}, Nautilus employs a two-criteria check: (i) all country combinations indicate both endpoints to be essentially land-locked, or (ii) the straight-line distance between the endpoints is significantly shorter than the distance between the nearest submarine cable landing points (terrestrial termination points of submarine cables).
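A compact sketch of the U-S decision over the candidate country pairs is given below; \texttt{neighbors} and \texttt{land\_connected} stand in for the border and continent-connectivity data that Nautilus derives (assumed inputs), and the definitely-terrestrial checks are omitted.
\begin{verbatim}
def classify_us(pairs, neighbors, land_connected):
    # pairs: candidate (country_a, country_b) combinations of the endpoints.
    # neighbors: country -> set of bordering countries.
    # land_connected(a, b): whether a and b lie on land-connected continents.
    if all(not land_connected(a, b) for a, b in pairs):
        return "S"                       # spans non-connected continents
    for a, b in pairs:
        if a == b or b in neighbors[a] or neighbors[a] & neighbors[b]:
            return "U"                   # border or common land neighbor
    return "S"                           # no terrestrially feasible pair
\end{verbatim}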
\vspace{-1mm}
\subsubsection{Geolocation-based cable mapping} \label{geolocation_cable_mapping}
Nautilus employs radius-based querying at multiple granularities to identify landing points (and consequently submarine cables) and then, relies on a threshold to prune irrelevant submarine cables. To efficiently identify nearby landing points, Nautilus constructs a BallTree~\cite{balltree} based on the submarine cable landing points. BallTree partitions the coordinate space into a set of intersecting hyper-spheres, thus enabling an efficient identification of all landing points within a given radius. A key challenge is the identification of an optimal radius for querying. Nautilus starts the search with an initial radius of 500km and expands the search by 50 km until either a cable match is found or the query radius reaches the maximum radius of 1000km. Finally, to eliminate irrelevant cables, Nautilus computes a distance-based score as the haversine distance of the IP link endpoints to their identified submarine cable landing points. We discuss the radius thresholds in \S~\ref{geolocation_radius_threshold}. The cable set generated for the running example is shown in Figure~\ref{running_example}E.
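The landing-point lookup can be sketched with scikit-learn's BallTree as below; \texttt{lp\_ids} is an assumed array of landing-point identifiers aligned with the tree's rows.
\begin{verbatim}
import numpy as np
from sklearn.neighbors import BallTree

def build_tree(landing_latlon_deg):
    return BallTree(np.radians(landing_latlon_deg), metric="haversine")

def nearby_landing_points(tree, lp_ids, lat, lon,
                          start_km=500, step_km=50, max_km=1000):
    # Expand the query radius until some landing point is found.
    q = np.radians([[lat, lon]])
    r = start_km
    while r <= max_km:
        idx = tree.query_radius(q, r / 6371.0)[0]  # radius in radians
        if len(idx) > 0:
            return [lp_ids[i] for i in idx]
        r += step_km
    return []
\end{verbatim}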
\subsection{Cable Owner Module} \label{organization_module}
Cable owner-based inference can prove crucial in regions with several parallel submarine cables (e.g., US-UK). Nautilus relies on the idea that links of an AS are more likely to be on a submarine cable that the same AS owns. Thus, Nautilus can reduce the number of cable predictions if it can identify an AS that owns both the IP and a submarine cable located in close proximity. Nautilus uses multiple IP to ASN mapping sources to identify the ASN of a given IP (e.g., Figure~\ref{running_example}F). Nautilus employs Natural Language Processing (NLP) techniques to match a cable owner to a list of ASNs. Nautilus incorporates this information to generate an IP to Submarine cable owner map. Finally, Nautilus merges the results from the earlier steps to generate an IP to submarine cable mapping (e.g., Figure~\ref{running_example}G).
\vspace{-1mm}
\subsubsection{IP to Cable Owner Mapping}
To generate an IP to ASN mapping, Nautilus relies on data from four sources. Nautilus uses ASNs instead of AS organization names to standardize the results from multiple sources.
To generate a submarine cable owner to ASN mapping, Nautilus relies on submarine cable owner information from the submarine cable map~\cite{submarine_cable_map}, and on ASN and AS information from the CAIDA ASRank database~\cite{ASRank}. A simplistic approach would involve selecting all ASNs whose organization name has a partial or complete match with the submarine cable owner's name. While this works in some cases, complexity arises in the case of acquisitions and mergers. For example, Lumen (a submarine cable owner) would only be mapped to ASNs owned by Lumen and would miss ASNs owned by CenturyLink and Level 3 (Lumen's acquisitions). Another complex scenario is the usage of abbreviations. For example, Korea Telecom is stored as KT in the Submarine Cable Map~\cite{submarine_cable_map}.
We observe that AS names often reflect the old name of an AS after acquisition. Based on this observation, Nautilus employs a search along three dimensions for a given submarine cable owner. These are: (i) a partial/complete match with AS organization name, (ii) a short form match with AS organization name, and (iii) a match with the standard AS name. Based on the assumption that submarine cables typically are owned by large entities, Nautilus first chooses the entity with the highest AS rank and includes all ASNs owned by this entity. Then for the rest of the matches, a constraint is imposed to select only the ASNs where any one of the validated ASNs is a provider to this given ASN.
\vspace{-1mm}
\subsubsection{Cable owner-based mapping} \label{cable_owner_based_mapping}
Cable owner-based mapping is instrumental in pruning the potential cable set by eliminating many parallel submarine cables. Using the IP to submarine cable owner mapping, we generate an additional ownership score for the IP link, effectively helping Nautilus eliminate parallel cables in the final step (an example of parallel cables is shown in Figure~\ref{fig:parallel_cables} of Appendix~\ref{parallel_cables_analysis}). If the ASN of one endpoint's IP address matches the list of ASNs of the submarine cable owners, then 0.5 is added to the ownership score. Thus, this score is always from the set $\{0, 0.5, 1\}$, corresponding to no match, one endpoint matching, and both endpoints matching, respectively.
\subsection{Aggregation \& Final mapping} \label{final_mapping}
\label{safm}
To generate the final IP link to submarine cable mapping, Nautilus systematically combines the outputs of the geolocation module (\S~\ref{geolocation_module}) and the cable owner module (\S~\ref{organization_module}). Nautilus uses the B-O-N-U-S classification results (excluding all links tagged as \emph{definitely terrestrial}) and generates a cable mapping based on radius-based querying (\S~\ref{geolocation_cable_mapping}). Then, for all the cables mapped in the previous step, the cable owner-based mapping (\S~\ref{cable_owner_based_mapping}) is employed to validate and boost the scores for each IP link. Finally, based on a threshold that accommodates parallel cables, Nautilus generates a pruned list of potential cables for an IP link (e.g., Figure~\ref{running_example}H).
Nautilus computes the prediction score as a weighted sum (with a 5:4:1 ratio) of the three scores computed in the previous stages: (i) the geolocation accuracy score (\S~\ref{bonus_classification}), (ii) the distance-based score (\S~\ref{geolocation_cable_mapping}), and (iii) the ownership score (\S~\ref{cable_owner_based_mapping}). The ownership score is assigned a lower weight primarily for two reasons: (i) an organization could own multiple cables in a specific region, so cable ownership may not always be discriminative, and (ii) many large ISPs do not appear in the owner map (due to either leasing links or incomplete owner information in the database), so a higher weight would bias the mapping in favor of submarine cable owners. For example, Cogent does not own any cables according to the submarine cable map but has transoceanic links, as seen in its network map.
The normalized prediction score is finally weighted based on the B-O-N-U-S category to which the particular link belongs. We determine the top cable mappings for a link by imposing a Parallel Cable Threshold (PaCT) value of 0.05, which restricts the predicted cables to a 5\% margin from the highest mapping score for the given IP link. We discuss the breakdown of our prediction score and the PaCT threshold in \S~\ref{pact_threshold}.
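The following sketch illustrates this aggregation arithmetic under our 5:4:1 weighting and the PaCT threshold; the per-cable component scores are assumed to come from the earlier stages, and the B-O-N-U-S category weighting is omitted for brevity.
\begin{verbatim}
W_GEO, W_DIST, W_OWN = 5, 4, 1  # the 5:4:1 weighting
PACT = 0.05                     # Parallel Cable Threshold

def prune_candidates(candidates):
    # candidates: {cable: (geo_score, dist_score, own_score)} for one
    # link. Returns cables whose normalized score lies within the PaCT
    # margin of the best score.
    total = W_GEO + W_DIST + W_OWN
    scored = {c: (W_GEO * g + W_DIST * d + W_OWN * o) / total
              for c, (g, d, o) in candidates.items()}
    best = max(scored.values())
    return {c: s for c, s in scored.items() if best - s <= PACT}
\end{verbatim}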
\section{Limitations}
Nautilus is by no means a perfect solution for generating an IP link to submarine cable mapping, but rather a first attempt. For generating a cable map, Nautilus relies on geolocation and organizational owner information, but each piece of information has its shortcomings. While we can identify the AS (organizational) owner for a given IP address, the submarine cable map generated by TeleGeography lacks complete owner information. Ideally, there should be a one-to-one map between IP links and submarine cables; in practice, Nautilus predicts a parallel cable set for about 46\% of the IP links. Additionally, Nautilus categorizes more than 45\% of the IP links as unconfirmed and hence assigns a lower confidence score to the mapping generated for these links. An example of this was observed in the Yemen outage case, where Nautilus correctly identified a submarine link between two endpoints within Yemen but also generated a mapping for a likely terrestrial link between Yemen and Saudi Arabia. Similar examples are present in the Papua New Guinea outage. In an ideal framework, all submarine links would be identified perfectly and mapped to a single cable; Nautilus instead predicts fewer than five cables for 93\% of the links while identifying a much larger number of links as potential submarine links.
Another limitation of Nautilus is the geolocation accuracy of the various databases, which at the city level is known to be only 50-60\%~\cite{geolocation_problem}. Though Nautilus employs multiple techniques to curb the impact of geolocation inaccuracies, such as using several geolocation sources and searching for submarine cables with a large initial radius (of 500 km), there is still scope for improvement. Currently, Nautilus equates an MPLS tunnel to an IP link for simplicity and captures the submarine cable potentially used by the tunnel. But without completely unraveling the links hidden by MPLS, some links are bound to have either (i) no cables mapped to them, or (ii) incorrect cables mapped to them. Donnet et al.~\cite{mpls_issues}, in their work from 2012, identified that MPLS tunnels were present in approximately 30\% of their experimental data and indicated a higher presence of MPLS tunnels at the core of the Internet. As submarine cables form the core of inter-continental communication, we can expect a significant presence of MPLS tunnels in our measurements. This could be one of the primary causes for the lack of cable mapping in approximately 20\% of the links.
\section{Mapping Analysis}
\label{mapping_analysis}
In this section, we analyze the mapping generated by Nautilus. In particular, we answer the following questions:
\begin{itemize}[leftmargin=*,nolistsep]
\item How effectively can Nautilus generate a mapping for a given link? (\S~\ref{cma})
\item How does Nautilus mapping compare with prior work: SCN-Crit~\cite{submarine_drivability} and iGDB~\cite{igdb}? (\S~\ref{cpw})
\end{itemize}
\begin{table}[]
\centering
\begin{tabular}{|c|c|c|}
\hline
Category & Count (v4) & Count (v6) \\
\hline
S, B & 511 K & 200 K \\
S, O & 342 K & 172 K \\
S, N & 42 K & 30 K \\
U, B & 1261 K & 490 K \\
U, O & 1112 K & 497 K \\
U, N & 469 K & 406 K \\
Terrestrial & 1377 K & 564 K \\
Unclassified & 672 K & 712 K \\\hline
Total & 5.8 M & 3.08 M \\
\hline
\end{tabular}
\vspace{2mm}
\caption{Count of links under B-O-N-U-S classification. Links that do not have geolocation data or fail SoL validation are in the Unclassified category.}
\label{link_classification_results}
\vspace{-6mm}
\end{table}
\begin{table}
\centering
\resizebox{\linewidth}{!}{%
\begin{tabular}{|c|c||c|c|}
\hline
Country Pairs (v4) & Count & Country Pairs (v6) & Count \\
\hline
US-DE & 40 K & US-CL & 23 K \\
US-UK & 30 K & US-DE & 10.2 K \\
US-FR & 18.5 K & UK-RU & 7.4 K \\
SG-IN & 18.2 K & US-HK & 7.3 K \\
US-NL & 16.9 K & DE-UK & 6.3 K \\
UK-DE & 12.9 K & US-UK & 6.1 K \\
US-SW & 10 K & UK-NL & 5.6 K \\
US-AU & 9.9 K & US-JP & 5.2 K \\
US-BR & 9.8 K & US-BR & 5 K \\
UK-NL & 8.9 K & SW-NL & 4.5 K \\
\hline
\end{tabular}
}%
\vspace{2mm}
\caption{Top 10 country pairs based on definitely submarine links count (using ISO 2-alpha code for countries).}
\label{country_count}
\vspace{-6mm}
\end{table}
\subsection{B-O-N-U-S classification analysis}
Of the 5.8M IPv4 and 3.08M IPv6 links, we generate a classification for $\approx$85\% of links; the remaining 15\% fail due to a lack of geolocation data or invalidation during SoL tests. Nautilus labels approximately 15.5\% of IPv4 and 13\% of IPv6 links as \emph{definitely submarine}, and 24\% of IPv4 and 18.5\% of IPv6 links as \emph{definitely terrestrial}. The remainder are tagged as \emph{unconfirmed submarine}. Table~\ref{link_classification_results} presents a detailed breakdown of each category.
\parab{Top Country Pairs:} We analyze the top 10 country pairs for \emph{definitely submarine} links (Table~\ref{country_count}); this table excludes intra-country links. There is a high overlap between the top country pairs of IPv4 and IPv6. Notably, most of the countries in the top pairs, especially in Europe, house some of the largest IXPs in the world, e.g., Germany (DE-CIX), the Netherlands (AMS-IX), the UK (LINX), and Brazil (IX.br).
\parab{Top Continent Pairs:} Figure~\ref{continent_map} shows that NA-EU accounts for $\approx$38.5\% of all inter-continental links, while EU-AS and NA-AS account for at least $\approx$70K links each. Almost 41\% of all \emph{definitely submarine} links in the Nautilus map are intra-continental, with EU and AS accounting for at least 90K links each.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/world_countries_fig_v1.png}
\caption{Inter-continental link distribution (IPv4 and IPv6 combined). NA-EU makes up around 205 K links, followed by EU-EU and AS-AS (not depicted in the figure) which make up at least 90 K links each.}
\label{continent_map}
\vspace{-2mm}
\end{figure}
\begin{figure*}[ht]
\centering
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/Fig-scores-541_all_v4.pdf}
\caption[]%
{{\small Prediction scores for various categories in IPv4}}
\label{scores_v4}
\end{subfigure}
\hfill
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/Fig-scores-541_all_v6.pdf}
\caption[]%
{{\small Prediction scores for various categories in IPv6}}
\label{scores_v6}
\end{subfigure}
\caption{The distribution of prediction scores for IPv4 and IPv6.}
\label{score_result}
\end{figure*}
\subsection{Cable Mapping Analysis} \label{cma}
We first analyze the coverage of Nautilus' mapping with respect to submarine cables and IP links. We then analyze the distributions for all categories.
\parab{Cable and link coverage:} Based on the generated cable mapping and a PaCT value of 0.05 (\S~\ref{safm}), we find at least one IP link mapped to 91\% of the 448 active cables. Of the $\approx$1200 landing points, we identify at least one submarine cable passing through 1074 of them, accounting for 90\% of all landing points. Most of the missed submarine cables and landing points belong to small or regional cables with a short geographic spread, typically on small islands or in island nations like Indonesia. From the perspective of IP links, we generate a mapping for 82\% and 80\% of submarine links (definitely and unconfirmed submarine links combined) in IPv4 and IPv6, respectively. Mapping fails when geolocation or SoL validation is unsuccessful.
\parab{Predicted Cable Distribution:} The prediction score is critical in finalizing the list of candidate cables for a given IP link. Hence, we analyze the scores in each category by computing a CDF based on the maximum prediction score for each IP link. Figures~\ref{scores_v4} and~\ref{scores_v6} show the results for IPv4 and IPv6, respectively. The CDF plots for IPv4 and IPv6 look similar but have subtle differences; in particular, IPv6 scores are slightly higher than IPv4 scores. We observe that in the best scenario of S, B (definitely submarine links with good geolocation accuracy), Nautilus predicts cables with a prediction score higher than 0.8 for at least 30\% of the links and a score higher than 0.6 for more than 85\% of the links in both IPv4 and IPv6. All other categories have a similar distribution but with lower prediction score bounds depending on the category. Finally, considering all the links, we observe that only 20\% of the links have a score greater than 0.4. This is expected, as most of the links fall under the unconfirmed category, to which Nautilus assigns a lower prediction score.
Accounting for the possibility of parallel cables, Nautilus predicts more than one cable in some scenarios. Using the PaCT value of 0.05 detailed earlier (\S~\ref{safm}), Figures~\ref{count_v4} and \ref{count_v6} show stacked bar plots of the number of cables predicted per link for IPv4 and IPv6, respectively. From these plots, we observe that of all links with a mapping, at least $\approx$54\% have exactly one cable mapped to them for both IPv4 and IPv6 (the best-case scenario), and more than 76\% are mapped to at most two cables. Finally, only $\approx$7\% of the links are mapped to more than five cables.
\begin{figure*}[ht]
\centering
\begin{subfigure}{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/Fig-count_all_v4.pdf}
\caption[]%
{{\small Cable count for various link categories in IPv4}}
\label{count_v4}
\end{subfigure}
\hfill
\begin{subfigure}{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/Fig-count_all_v6.pdf}
\caption[]%
{{\small Cable count for various link categories in IPv6}}
\label{count_v6}
\end{subfigure}
\caption{The distribution of the number of cables predicted for links belonging to each category. Most links are mapped to a single cable.}
\label{count_result}
\end{figure*}
\vspace{-2mm}
\subsection{Comparison with Prior Work} \label{cpw}
As there exists no prior work on uniquely mapping IP links to submarine cables, we compare Nautilus against the two closest related works: (i) Liu et al. (which we call SCN-Crit from here on)~\cite{submarine_drivability} and (ii) iGDB~\cite{igdb}. Since there are no agreed-upon validation criteria for these prior works, we use the number of cable predictions as our benchmark for comparison. Moreover, these works do not validate their mappings, whereas Nautilus attempts to validate its mapping efficacy in \S~\ref{validation}.
\parab{SCN-Crit.} SCN-Crit~\cite{submarine_drivability} runs traceroute measurements from RIPE probes to the top 50 websites for 63 countries to evaluate the impact of submarine cable networks on end users. SCN-Crit uses a drivability metric to determine the presence of a submarine cable in any given path: if there is no drivable route between the two ends of a path, some part of that path must rely on a submarine cable. Finally, SCN-Crit generates a cable mapping based on the submarine cable overlap between the countries in the path.
We reproduce a part of this work by initiating traceroute measurements to the top 50 websites from a RIPE probe within each of the 63 countries, generating approximately 3150 traceroutes (one measurement per website-probe pair). Though this does not capture the entirety of the traceroute measurements carried out in SCN-Crit, which includes multiple locations for the same resource, it acts as a good proxy. For these 3150 traceroutes, Nautilus identifies a cable match for $\approx$70\% of all the websites. Nautilus predicts 35\% fewer cables than SCN-Crit owing to its fine-grained geolocation; SCN-Crit's higher cable count stems primarily from its policy of mapping cables at the country level.
Finally, as SCN-Crit takes a conservative approach to identifying paths with submarine cables (based on its drivability metric), it misses many paths that could potentially utilize a submarine cable. For example, in our validation experiments, we observed that cities within the same country might sometimes use a submarine cable, as seen in the case of an IP link between two cities in Yemen that relies on a submarine cable (\S~\ref{subsec:failures}). SCN-Crit ignores this possibility because these cities are drivable and hence the link fails the drivability constraint. Such links are also common in large countries such as the US.
\parab{iGDB.} iGDB~\cite{igdb} is a framework for generating a mapping between terrestrial cables and the IP layer. iGDB, like Nautilus, relies on geolocation as the critical piece to infer the potential terrestrial cable for a given IP link. Unlike Nautilus, iGDB partitions the Earth into Thiessen polygons based on the nearest city and then overlays this information with the road/rail network to predict potential routes. The paper shows mapping in specific use cases, but as confirmed with the authors of this work, iGDB does not support mapping IP links to submarine cables.
We extend the iGDB framework for mapping over submarine cables. As geolocation scripts are missing from the iGDB codebase, we use CAIDA's geolocation as a proxy; CAIDA also employs geolocation based on the DNS hostname resolutions and IXP locations that iGDB relies on. Using this geolocation and the iGDB framework, we map each IP endpoint to the nearest city cluster. To identify the submarine cables, we use an intersection of submarine cables across landing points within each city cluster for a given IP link. We observe that Nautilus predicts $\approx$75\% fewer cables than iGDB. iGDB's higher number of cable predictions could be due to the high concentration of submarine cable landing points within a few city clusters. Additionally, iGDB generates a mapping for only a fraction (11\%) of links; its strict geolocation constraints and reliance on limited geolocation sources could explain this limited coverage. For example, an IP link between France and the US with an endpoint geolocated to Paris (owing to either geolocation inaccuracies or the use of MPLS tunnels) would not find a cable match, since no landing points fall within the Paris city cluster. Nautilus succeeds in mapping such links thanks to its increasing-radius queries.
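Our reading of this intersection step is sketched below; the landing-point records are hypothetical, and we take the union of cables within each endpoint's city cluster before intersecting across the two endpoints.
\begin{verbatim}
def cluster_cables(cluster_id, landing_points):
    # landing_points: list of {'cluster': ..., 'cables': set} records.
    sets = [lp["cables"] for lp in landing_points
            if lp["cluster"] == cluster_id]
    return set().union(*sets)  # empty if the cluster has no landing point

def igdb_submarine_candidates(src_cluster, dst_cluster, landing_points):
    # Candidate cables for one IP link; empty whenever an endpoint's
    # city cluster contains no landing point (e.g., Paris).
    return (cluster_cables(src_cluster, landing_points)
            & cluster_cables(dst_cluster, landing_points))
\end{verbatim}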
\section{Introduction}
Submarine cables form the backbone of the Internet infrastructure. Today, there are fewer than 500 undersea cables interconnecting various landmasses, yet these cables carry over 99\% of intercontinental traffic~\cite{99_traffic}. Nearly \$10 trillion worth of financial transactions are transmitted over transoceanic cables every day~\cite{cable-10-tril}. By virtue of their role in international connectivity and the global economy, submarine cables are deemed critical components of the Internet infrastructure. There is also a rallying cry to protect submarine cables as a national security priority due to their strategic relevance~\cite{cable-10-tril}. However, these cables are vulnerable to several natural and man-made threats.
Submarine cables are often damaged by natural disasters such as earthquakes~\cite{earthquake_1,earthquake_2,earthquake_3,earthquake-1,earthquake-2,earthquake-3}, typhoons~\cite{typhoon-1,typhoon-2}, and underwater volcanic eruptions~\cite{tonga_cable_issue}. In addition, they are susceptible to accidental physical damage from ship anchors~\cite{anchor-1,anchor-2} and various human marine activities. These cables could also be targeted by malicious states and non-state terrorist organizations~\cite{cable-10-tril, cable-security}. On average, there are over 100 submarine cable failures each year~\cite{submarine_faq}. In the past few months alone, there have been multiple reports of cable failures and repairs around the globe~\cite{cable_failures_1, cable_failures_2, cable_failures_3, cable_failures_4, cable_failures_5, cable_failures_6, multiple_providers_aae1_outage}. Unfortunately, repairing submarine cables, particularly when the damage is located in the deep sea, can take anywhere from a few weeks to months~\cite{tonga_cable_repair}.
The significance of submarine cables in the global economy coupled with the challenges in their restoration during failures underscores the need for a better understanding of the role of submarine cables in end-to-end Internet connectivity. The impact of submarine cable failures is further exacerbated at higher layers of the Internet due to the sharing of each cable by multiple Autonomous Systems (ASes)~\cite{submarine}. When one cable fails, all ASes that have IP links on the cable are affected~\cite{multiple_providers_aae1_outage}. Today, while the physical map of submarine cables is publicly available, we have a limited understanding of the IP links and Internet paths traversing these cables. To bridge this gap, we need a cross-layer map of submarine cables and IP links.
Past work on Internet mapping was largely confined to specific layers of the Internet and did not support cross-layer mapping~\cite{internet_mapping_survey_1, internet_mapping_survey_2}; for example, mapping of physical cables~\cite{physical_1, physical_2, physical_3}, interfaces~\cite{subnet_1, subnet_2, subnet_3, subnet_4}, routers~\cite{router_paper_4, router_paper_8, router_paper_10, router_paper_11}, PoPs~\cite{pop_2, pop_3, pop_6, pop_7}, and ASes~\cite{AS_1, AS_3, AS_8}. Recently, iGDB~\cite{igdb} put forward the first cross-layer mapping framework for the Internet, but iGDB does not currently support cross-layer mapping over transoceanic paths. We find that the mapping techniques used by iGDB, particularly Thiessen polygons around urban regions that aggregate traffic to the area, align well with the layout of long-haul cables on land but are not well-suited for submarine cable mapping: most submarine cable landing points are located far from urban areas. Moreover, accounting for geolocation inaccuracies during mapping, which iGDB does not support, is critical for cross-layer mapping on submarine cables.
In this paper, we present Nautilus, a framework for cross-layer cartography of submarine cables and IP links, using a wide range of publicly available datasets and tools as well as custom mapping techniques. Specifically, Nautilus uses traceroutes, the global submarine cable map, owner information of submarine cables, and IP- and AS-level Internet maps as data sources. Nautilus also leverages eleven geolocation services, speed of light-based geolocation validation, and custom mapping techniques. Using these datasets and tools, Nautilus first identifies IP links that are traversing submarine cables and then maps them to one or more potential cables. Nautilus also assigns a prediction score to each IP link to submarine cable map which denotes the degree of certainty in mapping.
Nautilus leverages long-running RIPE~\cite{ripe_atlas} and CAIDA~\cite{CAIDA} traceroute measurements to extract IP links. Nautilus then classifies a link into definitely submarine, definitely terrestrial, or potential submarine categories based on the geolocation of IP link endpoints and various geolocation-based tests (\S~\ref{ip_link_extraction}). For example, based on the disconnected continents test, a US-UK link is determined to be definitely submarine. Based on the proximity criterion for endpoints, an IP link with endpoints in France and Belgium is classified as potentially submarine. Nautilus relies on eleven geolocation services~\cite{ripe_ipmap,caida_itdk,maxmind,ip2location,ipinfo,ipapi,db-ip,ipregistry,ipgeolocation,ipdata, ipapi_co} in this stage to improve the accuracy of geolocation.
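As a purely hypothetical illustration of the disconnected-continents test (the land-connected pairs and the continent lookup below are placeholders, not Nautilus' actual rule set):
\begin{verbatim}
# Continent pairs joined by land; illustrative and not exhaustive.
LAND_CONNECTED = {frozenset(p) for p in
                  [("EU", "AS"), ("AF", "AS"), ("NA", "SA")]}

def disconnected_continents(cont1, cont2):
    # e.g., ('NA', 'EU') -> True: a US-UK link must cross a submarine cable.
    return cont1 != cont2 and frozenset((cont1, cont2)) not in LAND_CONNECTED
\end{verbatim}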
After link classification, Nautilus employs a two-pronged approach for mapping. The two submodules in Nautilus rely on disjoint data sources and techniques to identify the IP link to cable mapping independently; their outputs are then aggregated to obtain the final mapping. The \textit{Geolocation module} of Nautilus uses IP geolocation and Speed of Light (SoL) based geolocation validation to identify a set of potential cables (\S~\ref{geolocation_module}). The \textit{Cable owner module} leverages the IP to AS mapping of link endpoints and the AS numbers of cable owners to generate another set of potential cable mappings (\S~\ref{organization_module}). Finally, the aggregation phase combines the mappings generated by the two modules into a set of candidate cables with a prediction score per cable (\S~\ref{final_mapping}). The prediction scores express the degree of certainty associated with the mapping.
Using over 100 million traceroutes each, Nautilus extracts 5.8 million (5.8M) IPv4 and 3.08M IPv6 links. Nautilus classifies 15.5\% of v4 links and 13\% of v6 links as definitely submarine. Additionally, 49\% of IPv4 links and 45\% of IPv6 links fall into the potentially submarine category. Nautilus then maps these links (3.7M IPv4 and 1.83M IPv6) to one or more potential cables. For 93\% of the mapped links, Nautilus predicts five or fewer submarine cables, with 54\% of links mapped to a single cable. The IP link to cable map generated by Nautilus identifies at least one mapping for 91\% of all active cables and 90\% of submarine cable landing points. Due to the absence of closely related work, we compare the mapping generated by Nautilus with the cable predictions generated by Liu et al. (which we call SCN-Crit from here on)~\cite{submarine_drivability} and by iGDB~\cite{igdb} extended to submarine cables. We show that Nautilus generates a more accurate mapping to a smaller set of cables (35\% and 75\% fewer cables compared to SCN-Crit and iGDB, respectively).
Unfortunately, there are no public datasets available for ground truth validation of Nautilus' mapping. We contacted over a dozen large ISPs around the globe and received a response from only two tier-1 providers in the US; they pointed out the highly sensitive nature of the data we requested and declined to share it. Hence, we perform validation of Nautilus' mapping with three measurement techniques: (i) analysis of traceroutes collected before, during, and after cable failures to test the presence of links mapped to the failed cables, (ii) targeted traceroute measurements between probes located near submarine cable landing points to test the presence of IP links mapped to the cable, and (iii) comparisons with publicly available physical layer network maps of two providers.
Our validation experiments show that (i) links mapped by Nautilus to a specific cable are missing during an outage while being present before and after the event, (ii) Nautilus' top cable prediction accurately matches the expected cable for 77\% of the links and is a secondary (non-top) choice for 19\% of the links, with only 4\% of the links having no match, and (iii) there is a significant overlap between Nautilus' cable predictions for an operator and the ground-truth physical maps of two providers.
In summary, we make the following contributions:
\begin{itemize}[leftmargin=*,nolistsep]
\item We present Nautilus, a framework for cross-layer mapping of submarine cables and IP links. It uses publicly available geolocation and AS owner databases to generate a mapping with an assigned prediction score (\S~\ref{design}).
\item Nautilus generates a mapping for 82\% of all identified submarine (definite and unconfirmed) links, covering 91\% of all active cables (\S~\ref{cma}).
\item We demonstrate that Nautilus is more accurate and predicts fewer submarine cables for an IP link when compared against related works SCN-Crit and iGDB (\S~\ref{cpw}).
\item We validate Nautilus' mapping accuracy by performing a wide range of validation experiments (\S~\ref{validation}).
\end{itemize}
\section{Related Work}
\parab{Cross-layer mapping on submarine cables:}
Liu et al. (SCN-Crit~\cite{submarine_drivability}) estimated the fraction of paths traversing subsea cables using traceroute measurements to the top 50 websites from 63 countries. This work identified a set of potential submarine cables by (i) geolocating all routers on the path, (ii) identifying potential submarine hops using drivability criteria, and (iii) assigning all cables between the two countries containing the submarine endpoints as the potential set. This set is much larger and coarser-grained than the mapping generated by Nautilus.
iGDB~\cite{igdb} is the first framework that supports cross-layer mapping of the Internet, but it is restricted to terrestrial cables. iGDB maps potential land cable endpoints by dividing urban areas using Thiessen polygons and identifying potential cables between them. Such fine-grained geolocation is a good fit for long-haul land cables but does not work well for submarine cables, as multiple submarine cable endpoints typically fall into the same Thiessen polygon.
\parab{Single-layer mapping:} Past work in Internet mapping mainly focused on individual layers and can be broadly classified as mapping at (i) the physical layer~\cite{physical_1, physical_2, physical_3}, where each link represents a physical cable between two locations, (ii) the interface level~\cite{subnet_1, subnet_2, subnet_3, subnet_4}, (iii) the router level~\cite{router_paper_1, router_paper_2, router_paper_4, router_paper_6, router_paper_8, router_paper_14, router_hot, router_recursive_2}, (iv) the PoP level~\cite{pop_1, pop_2, pop_3, pop_4, pop_5, pop_6, pop_7}, and (v) the AS level~\cite{AS_1, AS_2, AS_3, AS_4, AS_7, AS_8, AS_9}.
The most common methods employed by various mapping architectures are either (i) aggregation-based, or (ii) constraint-based approaches. Examples of aggregation-based solutions include physical layer techniques~\cite{physical_1, physical_2, physical_3} that collect data from various public and private sources and aggregate them to generate the mapping. An example of a constraint-based approach is the router-level mapping solution, HOT~\cite{router_hot}, which uses various network constraint parameters and optimizes an objective function to generate a mapping.
\section{Validation} \label{validation}
In this section, we provide an overview of the techniques we use to validate the map generated by Nautilus. We initially attempted to validate Nautilus' mapping by requesting ground truth information from nearly a dozen large ISPs across the globe. Only two ISPs responded; they were unable to share the data due to its sensitive nature. Hence, we devise the following validation techniques:
\begin{itemize}[leftmargin=*,nolistsep]
\item Validation using submarine cable failures: Ideally, when a submarine cable fails, all links mapped to the failed cable segment should disappear. Hence, we evaluate the mapping accuracy of Nautilus by examining two specific submarine cable failure scenarios.
\item Validation using targeted measurements: We run targeted traceroute measurements between RIPE probes located close to submarine cable landing points. The potential submarine hop (high-latency IP link) is then extracted and compared against the mapping generated by Nautilus.
\item Validation using operator network maps: A few network operators have made their global network map images publicly available. To validate Nautilus, we predict the likely cables each of these network operators uses and compare this against the operator's network map.
\end{itemize}
\vspace{-1mm}
\subsection{Using Submarine Cable Failures}
\label{subsec:failures}
We use publicly available information about cable failures to validate the accuracy of Nautilus' mapping. Ideally, when there is a cable failure, all links passing through the segment of the failed cable should disappear during the outage, while being active before and after an outage. We leverage this idea to perform validation of our mapping.
In stage 1, depending on the nature of the failure, we classify a failure as either a landing point failure (all cables at a specific landing point are affected) or a cable failure (a single cable is affected). Once the landing point or the submarine cable is selected, we extract all IP links from Nautilus' mapping that traverse the failed location/cable. In stage 2, two days' worth of RIPE traceroute data (IPv4 measurements 5051 and 5151) is collected before, during, and after the outage (a total of six days of data for a given failure). Finally, we intersect the IP links from stage 1 and stage 2 for the three periods (\textit{Before\_Failure}, \textit{During\_Failure}, \textit{After\_Failure}). We present this analysis for two recent submarine cable failures.
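The bookkeeping for this validation reduces to set intersections, as in the sketch below (names are illustrative):
\begin{verbatim}
def failure_overlap(mapped_links, period_traces):
    # mapped_links: set of IP links Nautilus maps to the failed cable
    # or landing point; period_traces: {'before': set, 'during': set,
    # 'after': set} of links observed in each traceroute window.
    return {period: len(mapped_links & seen)
            for period, seen in period_traces.items()}
\end{verbatim}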
\parab{Yemen Outage:} The FALCON and SeaMeWe-5 cables experienced a 4-day outage in Yemen in January 2022 due to an airstrike at Al Hudaydah, one of the cable landing points. Both cables connect to the Arabian peninsula, with no parallel cables along the exact cable path, except for a short overlap of the FALCON and SeaMeWe-5 cables between Egypt and Yemen (Figure~\ref{fig:yemen_cables} in Appendix~\ref{cable_failures_appendix}). This example is selected to demonstrate Nautilus' performance in a non-parallel, single landing point, multiple-cable failure scenario. We analyze the traceroutes collected between 19-21 January (\textit{Before\_Failure}), 22-24 January (\textit{During\_Failure}), and 25-27 January (\textit{After\_Failure}). Analyzing the IP links mapped to the FALCON \& SeaMeWe-5 cables in Nautilus' mapping, we observe an overlap of (i) 106 links in \textit{Before\_Failure} traceroutes, (ii) 5 links in \textit{During\_Failure} traceroutes, and (iii) 93 links in \textit{After\_Failure} traceroutes.
This drastic fall in the number of links only during the outage indicates that Nautilus' predictions are accurate. Ideally, as there are no parallel cables, we should observe 0 links during the outage, but we observe 5 active links. Upon closer examination, we find that these links were classified as \emph{unconfirmed submarine}, and Nautilus had assigned a low prediction score to all 5 of them. Another interesting observation is that another link mapped to the FALCON cable, between the cities of Al Hudaydah and Al Ghaydah in Yemen, was active in \textit{Before\_Failure} and \textit{After\_Failure} but missing in \textit{During\_Failure}. This link highlights the advantage of Nautilus retaining unconfirmed submarine links; SCN-Crit would have missed this link because of its drivability criterion.
\parab{Papua New Guinea Earthquake:} Papua New Guinea experienced an earthquake in September 2022, which caused the Kumul Domestic submarine cable system (KDSC) (Figure~\ref{fig:kdcn} in Appendix~\ref{cable_failures_appendix}) to fail around Madang. We collect traceroutes on 5-7 September (\textit{Before\_Failure}) and 12-14 September (\textit{During\_Failure}). As the cable was not fully operational by the date of this experiment, we do not have \textit{After\_Failure} traceroutes. We observe (i) 100 links from Nautilus' mapping in \textit{Before\_Failure} but (ii) only 61 links in \textit{During\_Failure}. Upon closer examination, we observe that all IP links between islands disappeared during the outage. The 61 active IP links belong to the unconfirmed category (due to the possibility of both land and sea hops on an island), and hence the results are not as pronounced as in the Yemen outage example. We also observe that some unconfirmed submarine links (e.g., between Madang and Lae) disappear during the outage, confirming that they traverse the submarine cable even though the cities pass the drivability test.
\vspace{-3.5mm}
\subsection{Using targeted traceroutes}
Another approach we employ to validate Nautilus' mapping is executing traceroutes between a pair of RIPE probes located near cable landing points. We select probe pairs such that both probes are located near the endpoints of a definitely submarine link, and at least one probe's ISP is a cable owner in that region. We initiate traceroutes between these selected probe pairs using the RIPE Atlas infrastructure and extract high-latency IP links from these targeted traceroute measurements.
First, we eliminate all measurements with high-latency links that are not present in Nautilus' mapping, i.e., those traversing a different path (e.g., a link between France and Singapore would be expected to take a cable connecting Europe to Singapore, but due to BGP path selection, the traceroute might follow a circuitous path from France to the US and then to Singapore). After this elimination, we identify 328 IP links that satisfy the criteria mentioned above and compare them with Nautilus' mapping. For $\approx$77\% (252) of these IP links, the top cable prediction from Nautilus matches the expected cable. For $\approx$19\% (63) of the IP links, Nautilus predicts the expected cable, but it is not the top prediction. Finally, for only 13 ($\approx$4\%) of the IP links, the expected cable does not feature in the predicted cable set.
\vspace{-1.5mm}
\subsection{Using operator network maps}
Some large ISPs and network operators make their submarine cable network map images publicly available. We attempt to construct a similar map using Nautilus' mapping and compare the results, performing validation with network maps from Tata Communications~\cite{tata_map} and Vodafone~\cite{vodafone_map}. First, we filter all IP links from Nautilus' mapping that belong to the ASes of an organization. Next, we generate the list of cables to which Nautilus mapped these IP links. We then compare the potential submarine cables identified from the operators' publicly available network maps with Nautilus' cable list. Of the 34 cables present in the Tata Communications network map, 31 are present in Nautilus' predicted cable list. Similarly, Nautilus correctly identifies 52 of the 58 cables present in Vodafone's network map. The missed cables are from regions where Nautilus' data sources lack sufficient coverage.
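This comparison reduces to a set overlap, sketched below under an assumed layout for Nautilus' mapping:
\begin{verbatim}
def operator_overlap(nautilus_map, operator_asns, operator_cables):
    # nautilus_map: {(src_asn, dst_asn): set_of_cables}. Returns the
    # operator-map cables Nautilus recovers and those it misses.
    predicted = {cable
                 for (src, dst), cables in nautilus_map.items()
                 if src in operator_asns or dst in operator_asns
                 for cable in cables}
    return predicted & operator_cables, operator_cables - predicted
\end{verbatim}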
\section{Introduction}
The behavior of fluids perturbed from mechanical and thermal equilibrium is commonly described by classical hydrodynamics. In this approach, dissipative effects are accounted for by phenomenological parameters known as transport coefficients~\cite{landau_fluid_1987}. These quantities, which include the viscosity and thermal conductivity, characterize dissipation that arises due to small perturbations. In particular,
viscosity quantifies the dissipation that arises from a non-uniform velocity of the fluid. Of late, there has been a surge of interest in the application of hydrodynamics to low-dimensional gases and liquids of fermions~\cite{joseph_observation_2011,bandurin_negative_2016,crossno_observation_2016,moll_evidence_2016,levitov_electron_2016,lucas_transport_2016,guo_higher-than-ballistic_2017}.
Viscosity plays a determinative role in a variety of non-equilibrium behaviors displayed by such systems, including transport properties of quantum wires~\cite{andreev_hydrodynamic_2011} as well as the relaxation of the collective modes in cold atomic gases~\cite{vogt_scale_2012}.
For three-dimensional systems, viscous effects are captured by two transport coefficients, the shear and bulk viscosities. In one dimension shear is not defined, and thus the relevant transport coefficient is the bulk viscosity. In the present work, we study the bulk viscosity of a one-dimensional (1D) Fermi gas with an arbitrary single-particle dispersion. To appreciate the central role that the dispersion plays in viscous effects, consider the classical monoatomic gas. As is well-known, its bulk viscosity is suppressed because the single-particle dispersion is quadratic~\cite{lifshitz_physical_1981}. On the other hand, in condensed matter systems the dispersion is altered by lattice effects. These effects, even if small, are expected to significantly enhance the bulk viscosity $\zeta$. One of the key results of the present work is an expression for $\zeta$ for a generic dispersion.
Interactions play a crucial role in viscous dissipation, as they cause the collisions responsible for the relaxation of a system to equilibrium. Interactions also alter the form of the dispersion. Typically, dispersion relations are discussed in the context of single-particle dynamics. However, an \emph{effective} single particle dispersion that accounts for interactions perturbatively can be defined. We derive this effective dispersion and use it to obtain an expression for the bulk viscosity. In an experiment, both interactions and lattice effects can be weak. We thus consider the competition between these effects. In particular, this physics is investigated in the case of a dilute Fermi gas using a tight-binding dispersion as an example.
The application of hydrodynamics requires that the system be close to equilibrium. A unique feature of the relaxation of 1D systems is that not all relaxation processes have rates of the same order of magnitude. Typical thermal excitations relax at a rate that scales as a power of temperature. In contrast, fermion backscattering occurs at a rate $1/\tau_b$, which is exponentially suppressed at low temperatures~\cite{lunde_three-particle_2007,micklitz_transport_2010,matveev_scattering_2012}. When a 1D system is driven at a frequency $\omega \gg 1/\tau_b$, backscattering is frozen out, and thus the system cannot fully equilibrate. As long as $\omega$ is less than all other relaxation rates, the behavior of this system is described by two-fluid hydrodynamics~\cite{matveev_second_2017,matveev_propagation_2018,matveev_two-fluid_2019}, analogous to the well-known theory of superfluid He-4~\cite{landau_fluid_1987,khalatnikov_introduction_2000}. As in the superfluid, accounting for viscous dissipation in the driven Fermi gas requires three bulk viscosities~\cite{matveev_propagation_2018}. We calculate these three transport coefficients for an arbitrary dispersion.
The paper is organized as follows. Section~\ref{sec:boltzmann} introduces our approach for the calculation of the bulk viscosity for a generic dispersion and applies it to the tight-binding dispersion. In Sec.~\ref{sec:finite_frequencies}, we consider the viscous coefficients for a gas in the two-fluid regime. We study the effect of weak interactions on viscous dissipation in Sec.~\ref{sec:interactions}. In Sec.~\ref{sec:scaleinvariance}, we discuss the suppression of the bulk viscosity for specific dispersions. Finally, in Sec.~\ref{sec:discconcl}, we highlight the implications of our results.
\section{Bulk Viscosity and the Boltzmann Equation}
\label{sec:boltzmann}
In this section, we derive the bulk viscosity for a general dispersion $\varepsilon_p$ (Sec.~\ref{sec:boltzmannA}) and apply this result to the tight-binding model (Sec.~\ref{sec:tight-binding}). Viscous dissipation arises when the velocity of the gas is not uniform. In the usual case of a Galilean invariant system, the velocity of an element of the fluid is that of its center of mass. For a generic dispersion, however, mass is not defined. We thus use an alternative definition of velocity based on the equilibrium distribution function. In a translationally invariant Fermi gas, not only particle number and energy but also momentum is conserved, and the equilibrium distribution function takes the form
\begin{equation}
n_p^{(0)} = \frac{1}{ e^{\left(\varepsilon_p - u p - \mu\right)/T} + 1}.
\label{eq:neq}
\end{equation}
Here $p$ is the momentum of the state, while $T$, $u$, and $\mu$ control the average energy, momentum, and particle number. The parameters $T$ and $\mu$ are the temperature and chemical potential of the gas, respectively. The parameter $u$ has the dimension of velocity. In the case of a Galilean invariant system, $\varepsilon_p = p^2/2m$, it coincides with the center of mass velocity of the gas. For generic $\varepsilon_p$, we take $u$ appearing in Eq.~(\ref{eq:neq}) as the definition of the velocity of the gas. In the following we assume for simplicity that $\varepsilon_p$ is even in $p$ and monotonically increasing for positive $p$. At $T \to 0$ such a gas only has a single pair of Fermi points.
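Indeed, for $\varepsilon_p = p^2/2m$ one can make this identification explicit by completing the square in the exponent of Eq.~(\ref{eq:neq}),
\begin{equation}
\varepsilon_p - u p - \mu = \frac{(p - mu)^2}{2m} - \left( \mu + \frac{m u^2}{2} \right),
\end{equation}
so that $n_p^{(0)}$ is simply a Fermi-Dirac distribution shifted in momentum by $mu$, i.e., a gas drifting with center of mass velocity $u$.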
A gas with a spatially varying velocity $u(x)$ is not in equilibrium. For infinitesimal $\partial_x u$, the distribution function is given by
\begin{equation}
n_p = n_p^{(0)} + \delta n_p,
\label{eq:ansatz}
\end{equation}
where $\delta n_p$ is the infinitesimal dissipative part of $n_p$. While the perturbation $\partial_x u \neq 0$ drives the system out of equilibrium, collisions between particles caused by weak interactions tend to restore it, resulting in a non-zero rate of change of the distribution function, which we denote by $\dot{n}_p$. The power dissipated per unit length is given by
\begin{equation}
w = - \nu T \int_{-\infty}^{\infty} \frac{dp}{h} \frac{\dot{n}_p \, \delta n_p}{n_p^{(0)} \left( 1 - n_p^{(0)} \right)} ,
\label{eq:w2}
\end{equation}
where $h$ is Planck's constant and $\nu$ is the degeneracy associated with spin, with $\nu = 2S+1$ for a gas of spin-$S$ fermions. The expression for $w$ was derived~\cite{matveev_viscous_2017,degottardi_viscous_2020} by evaluating $T \dot{s}$, where the entropy density $s$ was expressed in terms of the Fermi occupation numbers. Both $\delta n_p$ and $\dot{n}_p$ vanish for a system in equilibrium and therefore must be proportional to $\partial_x u$. Thus, the power dissipated per unit length (\ref{eq:w2}) has the form
\begin{equation}
w = \zeta \left( \partial_x u \right)^2.
\label{eq:zetadef}
\end{equation}
The proportionality constant $\zeta$ is the bulk viscosity. This definition is directly analogous to the definition of $\zeta$ for a Galilean-invariant fluid~\cite{landau_fluid_1987}. In the absence of Galilean invariance, $\zeta$ is a function of $u$. From now on, we limit ourselves to the study of the bulk viscosity at $u = 0$.
\subsection{The Boltzmann equation approach}
\label{sec:boltzmannA}
We now calculate $\zeta$ using Eqs.~(\ref{eq:w2}) and (\ref{eq:zetadef}). This requires us to obtain $\dot{n}_p$ and $\delta n_p$. In the Boltzmann equation formalism, $\dot{n}_p$ can be expressed in two ways:
\begin{eqnarray}
\dot{n}_p &=& \partial_t n_p + \left( \partial_p \varepsilon_p \right) \partial_x n_p - \left( \partial_x \varepsilon_p \right) \partial_p n_p, \label{eq:boltzmann1} \\
\dot{n}_p &=& I [ n_p ],
\label{eq:boltzmann2}
\end{eqnarray}
where $I[n_p]$ is the collision integral describing the relaxation to equilibrium. The standard Boltzmann equation~\cite{lifshitz_physical_1981} is obtained by equating these two expressions for $\dot{n}_p$. Interactions are responsible for the collisions between the particles. They also alter the effective dispersion $\varepsilon_p$ appearing in Eq.~(\ref{eq:boltzmann1}), an effect that will be considered in Sec.~\ref{sec:interactions}. In this section, the third term on the right hand side of Eq.~(\ref{eq:boltzmann1}) vanishes because $\partial_x \varepsilon_p = 0$. This term will play a role later in the paper.
A system with a velocity gradient is either expanding or contracting. As a result, the temperature $T$ and chemical potential $\mu$ depend on time. This is in contrast to the calculation of thermal conductivity, which can be obtained from a steady-state solution of the Boltzmann equation~\cite{matveev_thermal_2019}.
Substituting the expression (\ref{eq:neq}) for $n_p^{(0)}$ into Eq.~(\ref{eq:boltzmann1}) and allowing for the dependences $u(x)$, $T(t)$, and $\mu(t)$, we find
\begin{equation}
\dot{n}_p = \frac{1}{T} n_p^{(0)} \left(1 - n_p^{(0)} \right) \left[ \frac{ \varepsilon_p - \mu}{T} \partial_t T + \partial_t \mu + p \partial_p \varepsilon_p \partial_x u \right].
\label{eq:boltzmann3}
\end{equation}
This expression can be written more compactly as
\begin{equation}
\dot{n}_p = \frac{1}{T} n_p^{(0)} \left( 1 - n_p^{(0)}\right) \Upsilon(\xi),
\label{eq:leftboltz4}
\end{equation}
where $\xi = \varepsilon_p - \mu$ and
\begin{equation}
\Upsilon(\xi) = \partial_t \mu + \frac{\partial_t T}{T} \xi + \partial_x u \frac{p(\mu+\xi)}{p'(\mu+\xi)}.
\label{eq:upsilon}
\end{equation}
The function $p(\varepsilon)$ is the inverse of $\varepsilon_p$ for positive $p$, and $p'(\varepsilon)$ is its derivative.
Next, we use conservation laws to express $\partial_t \mu$ and $\partial_t T$ in terms of $\partial_x u$. The conservation of particle number, momentum, and energy can be expressed as
\begin{eqnarray}
\int_{-\infty}^{\infty} \frac{dp}{h} \dot{n}_p X_p = 0,
\label{eq:conservation}
\end{eqnarray}
for $X_p = 1, p,$ and $\varepsilon_p$, respectively. While conservation of momentum is trivially satisfied given that the right-hand side of Eq.~(\ref{eq:boltzmann3}) is even in $p$, the conservation of particle number and energy gives two linear relations involving the infinitesimal quantities $\partial_t T$, $\partial_t \mu$, and $\partial_x u$, thus allowing us to express $\dot{n}_p$ in terms of $\partial_x u$ alone. Working to leading order in $T$ with $\xi \sim T$, we obtain
\begin{equation}
\dot{n}_p = \frac{g_p}{2T} \Upsilon''(0) \phi_p,
\label{eq:np}
\end{equation}
where
\begin{equation}
\phi_p = g_p \left( \xi^2 - \frac{\pi^2 T^2}{3} \right),
\label{eq:phi}
\end{equation}
and
\begin{equation}
g_p = \sqrt{n_p^{(0)} \left( 1 - n_p^{(0)} \right)} = \frac{1}{2 \cosh{\frac{\xi}{2T}}}.
\end{equation}
The result for $\dot n_p$ given by Eqs.~(\ref{eq:np}) and (\ref{eq:phi}) holds for any $\Upsilon(\xi)$ in Eq.~(\ref{eq:leftboltz4}), as long as the second derivative $\Upsilon''(0)$ is well defined. For $\Upsilon(\xi)$ given by Eq.~(\ref{eq:upsilon}), we have
\begin{equation}
\Upsilon''(0) = - \chi^{\phantom\dagger}_0 \partial_x u,
\label{eq:upsilonzero}
\end{equation}
where
\begin{equation}
\chi^{\phantom\dagger}_0 = - \left. \left( \frac{p(\varepsilon)}{p'(\varepsilon)} \right)'' \right\vert_{\varepsilon = \mu} = - \left. \frac{1}{\varepsilon_p'} \left( \frac{p \, \varepsilon_p''}{\varepsilon_p'} \right)' \right\vert_{p = p_F}.
\label{eq:chi}
\end{equation}
Here the Fermi momentum $p_F$ is defined by $\varepsilon_{p_F} = \mu$. Of the two equivalent forms for $\chi^{\phantom\dagger}_0$ given in Eq.~(\ref{eq:chi}), the second is more convenient for a given dispersion. The sign in the definition (\ref{eq:chi}) of $\chi_0$ is chosen so that $\chi_0$ is positive for the tight-binding model (Sec.~\ref{sec:tight-binding}).
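As a quick check of this equivalence, note that $p'(\varepsilon) = 1/\varepsilon_p'$ implies $p(\varepsilon)/p'(\varepsilon) = p\, \varepsilon_p'$, while $d/d\varepsilon = (1/\varepsilon_p')\, d/dp$; hence
\[
\left( \frac{p(\varepsilon)}{p'(\varepsilon)} \right)'' = \frac{1}{\varepsilon_p'} \frac{d}{dp} \left( 1 + \frac{p\, \varepsilon_p''}{\varepsilon_p'} \right) = \frac{1}{\varepsilon_p'} \left( \frac{p\, \varepsilon_p''}{\varepsilon_p'} \right)',
\]
which turns the first form of Eq.~(\ref{eq:chi}) into the second.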
The final ingredient necessary to calculate $\zeta$ using Eqs.~(\ref{eq:w2}) and (\ref{eq:zetadef}) is $\delta n_p$. As we now demonstrate, $\delta n_p$ may be obtained from Eq.~(\ref{eq:boltzmann2}) by linearizing the collision integral and inverting it. To do so, it is convenient to symmetrize the collision integral by introducing the function $x_p$ defined by $\delta n_p = g_p x_p$~\cite{buot_relaxation_1972}. For infinitesimal $x_p$, its rate of change is
\begin{equation}
\dot{x}_p = - \hat{\Gamma} x_p,
\label{eq:gamma}
\end{equation}
where the linear operator $\hat{\Gamma}$ is given by
\begin{equation}
\hat{\Gamma} x_p = - \frac{1}{g_p} \left. \frac{d}{d s} I \left[ n_p^{(0)} + s g_p x_p \right] \right|_{s = 0}.
\label{eq:linearized}
\end{equation}
(The parameter $s$ has been used to linearize $I$ in $x_p$.) The operator $\hat{\Gamma}$ is symmetric and thus its eigenvalues are real. Furthermore, the eigenvalues must be nonnegative in order for $n_p^{(0)}$ to represent a stable equilibrium.
We now formally express $x_p$ by inverting the linear operator $\hat{\Gamma}$ appearing in Eq.~(\ref{eq:gamma}). This gives
\begin{equation}
x_p = \frac{\chi^{\phantom\dagger}_0}{2T} \left( \partial_x u \right) \hat{\Gamma}^{-1} \phi_p,
\label{eq:xpinverse}
\end{equation}
where we have used Eqs.~(\ref{eq:np}) and (\ref{eq:upsilonzero})~\footnote{\label{foot:zero} The operator $\hat{\Gamma}^{-1}$ appearing in Eq.~(\ref{eq:xpinverse}) is well-defined as long as it acts on the subspace of eigenvectors with strictly non-zero eigenvalues. Indeed, $\phi_p$ belongs to this subspace because our procedure of using the conservation laws to express $\dot{n}_p$ in terms of $\partial_x u$ alone ensures that $\phi_p$ is orthogonal to the zero modes of $\hat{\Gamma}$.}. Then, Eq.~(\ref{eq:w2}) may be written in the compact form
\begin{equation}
w = \frac{\chi_0^2}{4 T} \langle \phi | \hat{\Gamma}^{-1} | \phi \rangle \left( \partial_x u \right)^2,
\label{eq:w3}
\end{equation}
where the inner product is defined by
\begin{equation}
\langle a | b \rangle = \nu \int_{-\infty}^{\infty} \frac{dp}{h} a_p b_p,
\end{equation}
for generic functions $a_p$ and $b_p$. In particular, substitution of $\phi_p$ given by Eq.~(\ref{eq:phi}) for both $a_p$ and $b_p$ yields
\begin{equation}
\langle \phi | \phi \rangle = \frac{16 \pi^3 \nu T^5}{45 \hbar v_F},
\label{eq:phiinner}
\end{equation}
at $T \ll E_F$. (Here the Fermi energy is $E_F = \varepsilon_{p_F} - \varepsilon_0$.)
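For completeness, we sketch the evaluation leading to Eq.~(\ref{eq:phiinner}): the integrand is peaked at the two Fermi points, where $dp = d\xi/v_F$, so that
\[
\langle \phi | \phi \rangle = \frac{2\nu}{h v_F} \int_{-\infty}^{\infty} \frac{\left( \xi^2 - \frac{\pi^2 T^2}{3} \right)^2}{4 \cosh^2 \frac{\xi}{2T}} \, d\xi.
\]
Using the standard moments $\int_{-\infty}^{\infty} \xi^{2k}\, d\xi / ( 4 \cosh^2 \frac{\xi}{2T} ) = T^{2k+1} \{ 1,\, \pi^2/3,\, 7\pi^4/15 \}$ for $k = 0, 1, 2$, the integral evaluates to $\pi^4 T^5 \left( \tfrac{7}{15} - \tfrac{2}{9} + \tfrac{1}{9} \right) = 16 \pi^4 T^5 / 45$, which reproduces Eq.~(\ref{eq:phiinner}) upon setting $h = 2\pi\hbar$.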
Using Eqs.~(\ref{eq:zetadef}), (\ref{eq:w3}), and (\ref{eq:phiinner}), we obtain for the bulk viscosity
\begin{equation}
\zeta = \frac{4 \pi^3 \nu \chi_0^2 T^4 \tau }{45 \hbar v_F},
\label{eq:zeta}
\end{equation}
where the effective relaxation time $\tau$ is defined by
\begin{equation}
\tau = \frac{\langle \phi | \hat{\Gamma}^{-1} | \phi \rangle}{\langle \phi | \phi \rangle}.
\label{eq:tau}
\end{equation}
This expression for $\tau$ is the average of the inverse decay rates of the eigenmodes of the collision integral (\ref{eq:linearized}) weighted by their overlap with $\phi_p$.
The spectra of decay rates for 1D systems have been studied in a number of different cases~\cite{matveev_thermal_2019,matveev_relaxation_2020,matveev_scattering_2012,degottardi_equilibration_2019}. At low temperatures, the relaxation spectrum of a 1D Fermi gas exhibits two disparate rates. Fermionic backscattering occurs at a rate $1/\tau_b$, which is exponentially small at low temperatures~\cite{lunde_three-particle_2007,levchenko_transport_2010,micklitz_transport_2010,matveev_scattering_2012,matveev_thermal_2019}. All other relevant processes are comparatively fast, with rates that scale as a power of $T$. Importantly, backscattering is associated with a perturbation $x_p$ that is odd in $p$, while $\phi_p$ that appears in the definition (\ref{eq:tau}) is even. Therefore, only the fast modes contribute to Eq.~(\ref{eq:tau}), and the relaxation time $\tau$ scales as a power of temperature~\footnote{In contrast, the thermal conductivity is the response
to a nonzero gradient of temperature which, unlike $\partial_x u$, is odd
with respect to inversion. As a result, it is controlled by the exponentially long backscattering time $\tau_b$~\cite{matveev_relaxation_2020}.}.
For the quadratic dispersion $\varepsilon_p = p^2/2m$, Eq.~(\ref{eq:chi}) yields $\chi^{\phantom\dagger}_0 = 0$, and we recover the well-known result that the bulk viscosity is suppressed in this case~\cite{lifshitz_physical_1981}. We stop short of asserting that $\zeta$ given by Eq.~(\ref{eq:zeta}) vanishes since interactions can alter $\varepsilon_p$, as will be discussed in Secs.~\ref{sec:interactions} and \ref{sec:discconcl}.
In addition to the case of the quadratic dispersion, $\chi^{\phantom\dagger}_0$ also vanishes for the ultrarelativistic dispersion, $\varepsilon_p = c |p|$, cf.~Ref.~\cite{lifshitz_physical_1981}. We are thus led to ask: what is the most general form of $\varepsilon_p$ for which $\chi^{\phantom\dagger}_0$ vanishes for any density? To answer this question, we set the second form of $\chi^{\phantom\dagger}_0$ given in Eq.~(\ref{eq:chi}) equal to zero for all $p_F$. The solution of the resultant third-order differential equation gives a general dispersion of the form~\footnote{\label{foot:special} The condition that $\varepsilon_p$ is even necessitates the use of the absolute value in Eqs.~(\ref{eq:special_dispersion1}) and (\ref{eq:exp}). Additionally, the condition that $\varepsilon_p$ is monotonically increasing for positive $p$ requires that $A B > 0$.}
\begin{equation}
\varepsilon_p = A |p|^B + C.
\label{eq:special_dispersion1}
\end{equation}
This expression has as special cases the quadratic ($B = 2$) and ultrarelativistic ($B = 1$) dispersions. The physics of the vanishing of $\chi^{\phantom\dagger}_0$ is discussed in Sec.~\ref{sec:scaleinvariance}.
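One can also check this algebraically: for the dispersion (\ref{eq:special_dispersion1}) the inverse function is $p(\varepsilon) = [(\varepsilon - C)/A]^{1/B}$, so that
\[
\frac{p(\varepsilon)}{p'(\varepsilon)} = B \left( \varepsilon - C \right)
\]
is linear in $\varepsilon$, and the second derivative in Eq.~(\ref{eq:chi}), and hence $\chi^{\phantom\dagger}_0$, vanishes at every density. For $B = 2$ and $C = 0$ this recovers the quadratic case discussed above.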
\subsection{Tight-binding dispersion}
\label{sec:tight-binding}
The bulk viscosity is sensitive to lattice effects through its dependence on $\chi^{\phantom\dagger}_0$, which in turn depends on $\varepsilon_p$. Here, we evaluate $\chi^{\phantom\dagger}_0$ for the tight-binding model in which the single particle dispersion relation is obtained by assuming that particles hop between neighboring sites. This model has been applied to a number of relevant systems, such as 1D fermions in optical lattices in the deep lattice regime~\cite{ibanez-azpiroz_tight-binding_2013}. In Sec.~\ref{sec:interactions}, we will apply the tight-binding dispersion to study the relative importance of viscous dissipation arising from lattice effects and interactions.
For a 1D lattice with spacing $a$, the tight-binding dispersion is
\begin{equation}
\varepsilon_p = \frac{D}{2} \left[ 1 - \cos \left( \frac{p a}{\hbar} \right) \right],
\label{eq:tight-binding-spectrum}
\end{equation}
where $D$ is the full bandwidth. This dispersion is characterized by the Fermi velocity
\begin{equation}
v_F = \varepsilon_{p_F}' = \frac{D a}{2 \hbar} \sin \left( \frac{p_F a}{\hbar} \right).
\label{eq:vF}
\end{equation}
The effects of the dispersion enter the bulk viscosity (\ref{eq:zeta}) through $v_F$ and $\chi^{\phantom\dagger}_0$. Substituting the tight-binding dispersion (\ref{eq:tight-binding-spectrum}) into Eq.~(\ref{eq:chi}), we obtain
\begin{equation}
\chi^{\phantom\dagger}_0 = \frac{1}{D} \frac{2 p_F a / \hbar - \sin \left( 2 p_F a / \hbar \right)}{\sin^3 \left( p_F a / \hbar \right)}.
\label{eq:chi_tight-binding}
\end{equation}
At $p_F \to 0$, we have $\chi^{\phantom\dagger}_0 \to 4/(3D)$.
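This limit follows from expanding Eq.~(\ref{eq:chi_tight-binding}) in $x = p_F a/\hbar \ll 1$:
\[
\chi^{\phantom\dagger}_0 = \frac{1}{D}\, \frac{(2x)^3/6 + O(x^5)}{x^3 + O(x^5)} \to \frac{4}{3D}.
\]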
It is instructive to consider the behavior of $\chi^{\phantom\dagger}_0$ in the continuum limit. For a given particle density $n = \nu p_F / \pi \hbar$, the latter is achieved by requiring $a \rightarrow 0$ and $D \rightarrow \infty$, while $a^2 D$ is held fixed. In this limit, the dispersion (\ref{eq:tight-binding-spectrum}) approaches the form
\begin{equation}
\varepsilon_p = \frac{p^2}{2 m^\ast}, \quad
m^\ast = \frac{2\hbar^2}{D a^2}.
\label{eq:mstar}
\end{equation}
Since the dispersion (\ref{eq:mstar}) is quadratic, one expects $\chi^{\phantom\dagger}_0$ to vanish. Indeed, because $\chi_0 \rightarrow 4/(3D) \propto a^2$ as $a \rightarrow 0$, we find that $\chi^{\phantom\dagger}_0$ does in fact vanish in the continuum limit.
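As a minimal numerical sanity check (not part of the analytic development), the closed form (\ref{eq:chi_tight-binding}) can be compared with a direct finite-difference evaluation of the second form of Eq.~(\ref{eq:chi}); the sketch below sets $\hbar = a = D = 1$, and the two columns agree to several significant digits.
\begin{verbatim}
import numpy as np

def eps(p):                      # tight-binding dispersion, hbar = a = D = 1
    return 0.5 * (1.0 - np.cos(p))

def chi0_closed(pF):             # Eq. (chi_tight-binding)
    return (2*pF - np.sin(2*pF)) / np.sin(pF)**3

def chi0_numeric(pF, h=1e-3):    # -(1/eps') d/dp [p eps''/eps'] at p = pF
    def ratio(p):
        d1 = (eps(p + h) - eps(p - h)) / (2*h)            # eps'
        d2 = (eps(p + h) - 2*eps(p) + eps(p - h)) / h**2  # eps''
        return p * d2 / d1
    d1F = (eps(pF + h) - eps(pF - h)) / (2*h)
    return -(ratio(pF + h) - ratio(pF - h)) / (2*h) / d1F

for pF in (0.2, 0.5, 1.0, 2.0):
    print(pF, chi0_closed(pF), chi0_numeric(pF))
\end{verbatim}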
\section{Viscous Response at Finite Frequencies}
\label{sec:finite_frequencies}
In our discussion of the relaxation time $\tau$, we noted that the bulk viscosity is not affected by backscattering. This conclusion, though derived for a time independent perturbation $\partial_x u \neq 0$, holds for the time-dependent case as long as the associated frequency obeys $\omega \ll 1/\tau_b$. For such frequencies, the system still comes to the equilibrium state~(\ref{eq:neq}), cf.~Ref.~\cite{matveev_second_2017}.
Since backscattering is the slowest relaxation process at low temperatures, there is a broad range of frequencies $1/\tau_b \ll \omega \ll 1/\tau$, for which backscattering is essentially frozen out, and the numbers of right and left movers are separately conserved~\cite{micklitz_transport_2010}. Because $\omega \ll 1/\tau$, at $T \ll E_F$ fast processes bring the system to \emph{partial} equilibrium~\cite{micklitz_transport_2010}, as described by the distribution function
\begin{equation}
n_p^{(0)} = \frac{1}{e^{\left( \varepsilon_p - u p - \mu - \textrm{sgn}(p) \delta \mu / 2 \right)/T}+1}.
\label{eq:dist2}
\end{equation}
The form of the partially equilibrated distribution function (\ref{eq:dist2}) is dictated by the fact that, at these frequencies, the left and right movers cannot come to diffusive equilibrium and thus are described by the distinct chemical potentials $\mu - \delta \mu/2$ and $\mu + \delta \mu / 2$, respectively.
We now consider the viscous effects that arise from gradients of $u$ and $\delta \mu$. It is necessary to establish the form of the dissipated power $w$ that generalizes Eq.~(\ref{eq:zetadef}). First, we observe that for $\delta \mu$ independent of position, $w$ must reduce to the form (\ref{eq:zetadef}) since the processes that underlie $\zeta$ are still operative for $\omega \ll 1/\tau$. From the argument that led to Eq.~(\ref{eq:zetadef}), it follows that for $\partial_x \delta \mu \neq 0$ and $\partial_x u = 0$, both $\dot{n}_p$ and $\delta n_p$ are proportional to $\partial_x \delta \mu$. Hence, from Eq.~(\ref{eq:w2}) we have that $w \propto ( \partial_x \delta \mu )^2$. From Eq.~(\ref{eq:dist2}), it is clear that both $\partial_x u$ and $\partial_x \delta \mu$ generate perturbations to the distribution function that are odd in momentum, thus allowing for the presence of the cross term $(\partial_x u)(\partial_x \delta \mu)$. Thus, the dissipated power must have the form
\begin{equation}
w = \zeta \left( \partial_x u \right)^2 + \gamma \left( \partial_x \delta \mu \right)^2 + 2 \lambda \left( \partial_x u \right) \left( \partial_x \delta \mu \right),
\label{eq:w4}
\end{equation}
where we have introduced two additional transport coefficients, $\gamma$ and $\lambda$. In order for $w$ to be nonnegative, these coefficients must satisfy $\zeta, \gamma > 0$ and $\lambda^2 \leq \zeta \gamma$.
The appearance of additional transport coefficients $\gamma$ and $\lambda$ at finite frequencies is a result of the breakdown of the single-fluid description of the 1D Fermi gas. Indeed, it was shown recently~\cite{matveev_second_2017,matveev_propagation_2018,matveev_two-fluid_2019} that at low temperatures 1D Fermi systems are described by two-fluid hydrodynamics analogous to that of superfluid He-4~\cite{landau_fluid_1987,khalatnikov_introduction_2000}. Given this correspondence, it is instructive to introduce parallel notation. For the case of superfluid He-4, the viscous coefficients are defined via the mass current~\cite{khalatnikov_introduction_2000}. For a generic dispersion, however, mass current is not a meaningful quantity. Instead, we can express the dissipated power in terms of the particle number current $j_n$, which is defined by
\begin{equation}
j_n = \frac{\nu}{h} \int_{-\infty}^\infty v_p n_p \, dp ,
\label{eq:jn}
\end{equation}
where $v_p = \partial \varepsilon_p / \partial p$. For $T \ll E_F$ and $u \ll v_F$, we have
\begin{equation}
j_n = n u + \frac{\nu}{h} \delta \mu,
\label{eq:jnform}
\end{equation}
where $n$ is the particle density. This expression is obtained by substituting the distribution function (\ref{eq:dist2}) into the definition (\ref{eq:jn}). Using Eq.~(\ref{eq:jnform}), we express $\delta \mu$ in terms of $j_n$ and $u$, thus bringing Eq.~(\ref{eq:w4}) to the form
\begin{eqnarray}
w = \zeta_2 \left( \partial_x u \right)^2 + \zeta_3 \left[ \partial_x \left( j_n - n u \right) \right]^2 \nonumber \\ + 2 \zeta_1 \left[ \partial_x \left( j_n - n u \right) \right] \left( \partial_x u \right),
\label{eq:w6}
\end{eqnarray}
where
\begin{eqnarray}
\zeta_1 = \frac{h}{\nu} \lambda, \quad
\zeta_2 = \zeta, \quad
\zeta_3 = \left( \frac{h}{\nu} \right)^2 \gamma.
\end{eqnarray}
Equation~(\ref{eq:w6}) is the one-dimensional analog of the well known expression for the dissipation rate in superfluid He-4~\cite{khalatnikov_introduction_2000}.
We now calculate the viscosities $\gamma$ and $\lambda$ by following the procedure described in the previous section. In particular, we consider a point in the gas at which both $u$ and $\delta \mu$ vanish but the gradients of these quantities are non-zero. We begin by substituting the distribution function~(\ref{eq:dist2}) into Eq.~(\ref{eq:boltzmann1}). This gives
\begin{eqnarray}
\dot{n}_p = \frac{g_p^2}{T} \left[ \frac{\varepsilon_p - \mu}{T} \partial_t T + \partial_t \mu + p \partial_p \varepsilon_p \partial_x u \right. \nonumber \\ \left. + \frac{1}{2} \textrm{sgn}(p) \partial_p \varepsilon_p \partial_x \delta \mu \right].
\label{eq:leftboltz5}
\end{eqnarray}
The expression for $\dot{n}_p$ may be cast in the form of Eq.~(\ref{eq:leftboltz4}), where now
\begin{equation}
\Upsilon(\xi) = \partial_t \mu + \frac{\partial_t T}{T} \xi + \frac{p(\mu + \xi)}{p'(\mu + \xi)} \partial_x u + \frac{1}{2 p'(\mu + \xi)} \partial_x \delta \mu.
\end{equation}
Applying conservation of energy and particle number (\ref{eq:conservation}), we obtain two linear relations involving $\partial_t T$, $\partial_t \mu$, $\partial_x u$, and $\partial_x \delta \mu$. The quantities $\partial_t T$ and $\partial_t \mu$ may thus be eliminated from Eq.~(\ref{eq:leftboltz4}) in favor of $\partial_x u$ and $\partial_x \delta \mu$. To leading order in $T/E_F$, this procedure yields $\dot{n}_p$ of the form (\ref{eq:np}) where $\phi_p$ is still given by Eq.~(\ref{eq:phi}) but the expression (\ref{eq:upsilonzero}) is replaced by
\begin{equation}
\Upsilon''(0) = - \chi^{\phantom\dagger}_0 \partial_x u - \eta^{\phantom\dagger}_0 \partial_x \delta \mu.
\label{eq:upsilonzero2}
\end{equation}
Here the quantity $\chi^{\phantom\dagger}_0$ is again given by Eq.~(\ref{eq:chi}) and
\begin{equation}
\eta^{\phantom\dagger}_0 = - \left. \left( \frac{1}{2 p'(\varepsilon)} \right)'' \right \vert_{\varepsilon = \mu} = - \left. \frac{1}{2\varepsilon_p'} \left( \frac{\varepsilon_p''}{ \varepsilon_p'} \right)' \right\vert_{p = p_F}.
\label{eq:chitilde}
\end{equation}
We now apply Eqs.~(\ref{eq:w2}), (\ref{eq:np}), and (\ref{eq:upsilonzero2}) to obtain
\begin{equation}
w = \frac{1}{4 T} \left( \chi^{\phantom\dagger}_0 \partial_x u + \eta^{\phantom\dagger}_0 \partial_x \delta \mu \right)^2 \langle \phi | \hat{\Gamma}^{-1} | \phi \rangle.
\label{eq:w5}
\end{equation}
The inner product appearing in this formula can be expressed in terms of $\tau$ using its definition (\ref{eq:tau}). Matching the two forms of $w$ given by Eqs.~(\ref{eq:w4}) and (\ref{eq:w5}), we recover Eq.~(\ref{eq:zeta}) for $\zeta$ and obtain
\begin{eqnarray}
\gamma = \left(\frac{\eta^{\phantom\dagger}_0}{\chi^{\phantom\dagger}_0} \right)^2 \zeta, \quad
\lambda = \frac{\eta^{\phantom\dagger}_0}{\chi^{\phantom\dagger}_0} \zeta. \label{eq:gammalambda}
\end{eqnarray}
We observe that these values of $\gamma$ and $\lambda$ saturate the inequality $\lambda^2 \leq \zeta \gamma$, which is a feature of working to only leading order in $T/E_F$.
Equation~(\ref{eq:special_dispersion1}) gives the general form of the dispersion for which $\chi^{\phantom\dagger}_0$ vanishes at any density. We now derive the analog for $\eta_0$. Setting the right-hand side of Eq.~(\ref{eq:chitilde}) to zero and solving the resultant differential equation, we obtain~\cite{Note3}
\begin{equation}
\varepsilon_p = A \exp\left( B \left| p \right| \right) + C.
\label{eq:exp}
\end{equation}
It is worth mentioning that for $B = c/A$ and $C = - A$ in the limit that $A$ tends to infinity, Eq.~(\ref{eq:exp}) reduces to the ultrarelativistic dispersion $\varepsilon_p = c | p |$. That $\eta^{\phantom\dagger}_0$ vanishes in this case is apparent from Eq.~(\ref{eq:chitilde}) given the presence of $\varepsilon_p''$ in the second expression for $\eta^{\phantom\dagger}_0$. The physics underlying the vanishing of $\eta^{\phantom\dagger}_0$ is discussed in Sec.~\ref{sec:scaleinvariance}.
\section{Weak Interactions}
\label{sec:interactions}
So far, our consideration of interactions has focused exclusively on their role in restoring the gas to equilibrium. These effects enter the expression for the bulk viscosity (\ref{eq:zeta}) through the relaxation time $\tau$ defined by Eq.~(\ref{eq:tau}). However, interactions also alter the effective dispersion $\varepsilon_p$ appearing in Eq.~(\ref{eq:boltzmann1}), cf. Ref.~\cite{degottardi_viscous_2020}. While the resultant correction is small for weak interactions, this becomes a crucial consideration if lattice effects are also weak. In Sec.~\ref{sec:interactionsA}, we calculate the bulk viscosity $\zeta$ of a weakly interacting gas. Then, in Sec.~\ref{sec:interactionsB}, we study the competition between lattice effects and interactions in the context of the tight-binding model.
\subsection{Effect of interactions on the bulk viscosity}
\label{sec:interactionsA}
We consider a weak two-particle interaction described by the Hamiltonian
\begin{equation}
\label{eq:interaction_hamiltonian}
\hat{V} = \frac{1}{2L}\sum_{\substack{p,p',q\\ \sigma,\sigma'}}
V(q)a_{p+q,\sigma}^\dagger a_{p'-q,\sigma'}^\dagger
a_{p',\sigma'}^{}a_{p,\sigma}^{}.
\end{equation}
Here, $V(q)$ is the Fourier transform of the interaction potential, and
$a_{p,\sigma}^{}$ annihilates a fermion of momentum $p$ and $z$-component of spin $\sigma$. To first order, the energy of the state with occupation numbers $n_{p\sigma}$ is
\begin{equation}
\label{eq:E}
E=\sum_{p,\sigma} \varepsilon_p n_{p\sigma}
+\frac{1}{2L}\sum_{\substack{p,p'\\ \sigma,\sigma'}}
[V(0)-V(p-p')\delta_{\sigma,\sigma'}]n_{p\sigma}n_{p'\sigma'}.
\end{equation}
For spinless systems, this expression is applicable as long as $V(0)/\hbar v_F \ll 1$. For systems with spin, spin-charge separation \cite{dzyaloshinskii_correlation_1974,giamarchi_quantum_2004} would seem to preclude the application of perturbation theory. Fortunately, however, such effects are negligible for $p_F V(0)/\hbar \ll T$~\cite{matveev_relaxation_2020,karzig_energy_2010}. Given that $E$ in Eq.~(\ref{eq:E}) has the form of the Fermi liquid expression for interactions between quasiparticles~\cite{lifshitz_statistical_1980}, we may apply the well-established procedure of calculating transport coefficients in Fermi liquid theory~\cite{abrikosov_theory_1959,lifshitz_physical_1981,sykes_transport_1970} as long as we only work to first order in interactions.
In Fermi liquid theory, the quasiparticle energies are given by $\mathcal{E}_{p\sigma} = \delta E/\delta n_{p\sigma}$~\cite{lifshitz_statistical_1980}. Evaluating this quantity using Eq.~(\ref{eq:E}) gives an effective dispersion
\begin{equation}
\mathcal{E}_p = \varepsilon_p + \delta \varepsilon_p,
\label{eq:quasiparticle_energies1}
\end{equation}
where
\begin{equation}
\delta \varepsilon_p = \int \frac{dp'}{h}[\nu V(0)-V(p-p')] n_{p'}^{(0)},
\label{eq:quasiparticle_energies2}
\end{equation}
and the equilibrium distribution function now takes the form~\cite{abrikosov_theory_1959,sykes_transport_1970}
\begin{equation}
n_p^{(0)} = \frac{1}{e^{(\mathcal{E}_p - u p - \mu)/T}+1}.
\end{equation}
In deriving Eq.~(\ref{eq:quasiparticle_energies2}), we have assumed spin degeneracy and summed over spins.
We repeat our calculation of the viscosity given in Sec.~\ref{sec:boltzmann}, taking into account the correction to the dispersion arising from interactions. We work to first order in the interaction strength. In proceeding, care must be taken because $\mathcal{E}_p$ given by Eqs.~(\ref{eq:quasiparticle_energies1}) and (\ref{eq:quasiparticle_energies2}) depends on $u(x)$, $T(t)$, and $\mu(t)$ via $n_p^{(0)}$. We find~\footnote{Because $\mathcal{E}_p$ is defined through $n_p^{(0)}$, it is also a function of $u(x)$. This introduces two additional contributions
to $\dot{n}_p$ (not shown in the text) that cancel each other. These terms arise from the second and third terms on the right-hand side of Eq.~(\ref{eq:boltzmann1}).}
\begin{eqnarray}
\dot{n}_p &=& n_p^{(0)} \left( 1 - n_p^{(0)} \right) \frac{1}{T} \Bigg[ \left(1 - \frac{\partial \mathcal{E}_p}{\partial \mu} \right) \partial_t \mu \nonumber \\
&& + \left( \frac{\mathcal{E}_p - \mu}{T} - \frac{\partial \mathcal{E}_p }{\partial T} \right) \partial_t T + p \frac{\partial \mathcal{E}_p}{\partial p} \partial_x u \Bigg].
\label{eq:boltz1}
\end{eqnarray}
Introducing $\xi = \mathcal{E}_p - \mu$ and eliminating $p$ in favor of $\xi$, we can
cast the quantity $\dot{n}_p$ in the form of Eq.~(\ref{eq:leftboltz4}), where
\begin{eqnarray}
\Upsilon(\xi) &=& \left( 1 - \frac{\partial \delta \varepsilon_{p(\mu + \xi)}}{\partial \mu} \right) \partial_t \mu + \left[ \frac{\xi}{T} - \frac{\partial \delta \varepsilon_{p(\mu + \xi)} }{\partial T} \right] \partial_t T \nonumber \\ && + \frac{p(\mu + \xi)}{p'(\mu + \xi)} \partial_x u.
\label{eq:upsilon3}
\end{eqnarray}
Here $p(\mathcal{E})$ is the inverse function of $\mathcal{E}_p$. The terms $\partial \delta \varepsilon_{p(\mu + \xi)}/ \partial \mu$ and $\partial \delta \varepsilon_{p(\mu + \xi)} / \partial T$ vanish in the non-interacting limit. In evaluating these terms, the corrections that arise from the dependence of $p(\mu+\xi)$ on interactions enter the calculation at second order in interaction strength and thus can be neglected. In contrast, the dependence of $p(\mu + \xi)$ on interactions must be included in the final term of Eq.~(\ref{eq:upsilon3}).
We now repeat the steps leading to the expression (\ref{eq:zeta}) for the bulk viscosity. Conservation of particle number and energy gives two relations involving the infinitesimal quantities $\partial_t T$, $\partial_t \mu$, and $\partial_x u$. Eliminating $\partial_t T$ and $\partial_t \mu$ and working to leading order in $T \sim \xi \ll E_F$, we obtain Eq.~(\ref{eq:np}) with
\begin{equation}
\Upsilon''(0) = - ( \chi^{\phantom\dagger}_0 + \chi^{\phantom\dagger}_1 ) \partial_x u,
\label{eq:upsilonzero3}
\end{equation}
where $\chi^{\phantom\dagger}_0$ is given by Eq.~(\ref{eq:chi}) and
\begin{widetext}
\begin{eqnarray}
\chi^{\phantom\dagger}_1 &=& -\frac{1}{2 \pi \hbar \left( \varepsilon'(p_F) \right)^4} \bigg\{ \left[ 3 p_F \left( \varepsilon''(p_F)\right)^2 - 2 \varepsilon'(p_F) \varepsilon''(p_F) - 2 p_F \varepsilon'(p_F) \varepsilon'''(p_F) \right] \left( V(0) - V(2p_F) \right) \nonumber \\ && + \left[ 3 p_F \varepsilon'(p_F) \varepsilon''(p_F) - \left( \varepsilon'(p_F) \right)^2 \right] V'(2p_F) - 2 p_F \left(\varepsilon'(p_F) \right)^2 V''(2p_F) \bigg\}.
\label{eq:eta}
\end{eqnarray}
\end{widetext}
While the quantity $\chi_1$ receives a contribution from $\partial \delta \varepsilon_{p(\mu + \xi)}/ \partial \mu$, the term $\partial \delta \varepsilon_{p(\mu + \xi)}/ \partial T$ generates terms that are higher order in temperature. Applying Eqs.~(\ref{eq:w2}), (\ref{eq:np}), and (\ref{eq:upsilonzero3}), we find
\begin{equation}
\zeta = \frac{4 \pi^3 \nu (\chi^{\phantom\dagger}_0 + \chi^{\phantom\dagger}_1 )^2 T^4 \tau}{45 \hbar v_F}.
\label{eq:zetaeta}
\end{equation}
In the limit of weak interactions, $\chi^{\phantom\dagger}_1 \rightarrow 0$, we recover the result (\ref{eq:zeta}).
In Sec.~\ref{sec:boltzmannA}, it was found that the parameter $\chi^{\phantom\dagger}_0$ vanishes for a dispersion of the form (\ref{eq:special_dispersion1}) for any positive exponent $B$. For such dispersions, the parameter $\chi_1$ will also vanish for potentials of the form $V(p) \propto |p|^{B-1}$. We defer a discussion of this to Sec.~\ref{sec:scaleinvariance}. It is worth pointing out that $\chi^{\phantom\dagger}_1$ also vanishes for a potential $V(p)$ that is independent of $p$, which in real space corresponds to a delta function interaction potential. For such a potential, the correction $\delta \varepsilon_p$ given by Eq.~(\ref{eq:quasiparticle_energies2}) is a constant and thus represents a trivial shift of the energy $\mathcal{E}_p$.
Our expression (\ref{eq:zetaeta}) for the bulk viscosity is consistent with other results that appear in the literature. In Ref.~\cite{degottardi_viscous_2020}, the bulk viscosity of a gas of spin-$1/2$ fermions with quadratic dispersion was derived. Our expression (\ref{eq:zetaeta}) reproduces the bulk viscosity given in Ref.~\cite{degottardi_viscous_2020} for
$\varepsilon_p = p^2/2m$ and $\nu = 2$. The viscosity of a liquid of spinless fermions was studied for arbitrary interaction strength in Ref.~\cite{matveev_viscous_2017}. We find that to first order in the interaction strength, the results of that work are consistent with those presented here.
\subsection{Competition between interactions and lattice effects}
\label{sec:interactionsB}
In the regime of weak interactions, it is natural to expect that $\chi_1 \ll \chi_0$, i.e., that lattice effects dominate. However, in certain cases of experimental relevance, the dispersion can be nearly quadratic. As discussed in Sec.~\ref{sec:tight-binding}, in such cases $\chi_0$ tends to zero, and $\chi_1$ may be expected to become the dominant contribution to the bulk viscosity (\ref{eq:zetaeta}).
We thus explore the competition between interactions and lattice effects. As an example, we consider the tight-binding model at low fermion density, $n a \ll 1$, so that the dispersion approaches the quadratic form (\ref{eq:mstar}). We further assume that the fermions are spinless ($\nu = 1$) and discuss separately the cases of short- and long-range potentials. For a short-range potential, we have~\footnote{Here, a potential is considered short-range if the small $q$ expansion (\ref{eq:expansion}) applies.}
\begin{equation}
V(q) = V(0) + \frac{1}{2} V''(0) q^2,
\label{eq:expansion}
\end{equation}
where we have assumed that $V(x)$ decays faster than $1/|x|^3$. In this case, the dimensionless interaction strength is
\begin{equation}
\frac{V(0) - V(2p_F)}{\pi \hbar v_F} = - 2 m^\ast n V''(0),
\label{eq:pertcond}
\end{equation}
where the effective mass $m^\ast$ is defined by Eq.~(\ref{eq:mstar}) and the particle density $n = p_F/\pi \hbar$ for $\nu = 1$. The applicability of the weak interaction approximation of Sec.~\ref{sec:interactionsA} requires that the parameter (\ref{eq:pertcond}) be much less than unity. To leading order in $n a \ll 1$, the ratio of $\chi_1$ [Eq.~(\ref{eq:eta})] to $\chi_0$ [Eq.~(\ref{eq:chi})] is given by
\begin{equation}
\frac{\chi_1}{\chi_0} = \frac{5}{2} m^\ast n V''(0).
\label{eq:chi1overchi0}
\end{equation}
We find that the ratio $\chi_1/\chi_0$ is of the same order of magnitude as the small parameter (\ref{eq:pertcond}). Thus, for weak short-range interactions, $\chi_1$ is a small correction to $\chi_0$.
We now consider long-range interactions. As an example, we take
\begin{equation}
V(x) = \frac{e^2}{|x|} - \frac{e^2}{\sqrt{x^2 + 4 d^2}}.
\end{equation}
This is the Coulomb interaction screened at large distances by a gate modeled as a metal plane at a distance $d$ from the system. At $n d \gg 1$, the small parameter of the perturbation theory is given by
\begin{equation}
\frac{V(0) - V(2p_F)}{\pi \hbar v_F} = \frac{2}{\pi^2 n a_B} \log \left( 2 \pi n d \right),
\label{eq:pertcond2}
\end{equation}
where $a_B = \hbar^2/m^\ast e^2$ is the Bohr radius. Neglecting the logarithmic factor, we conclude that perturbation theory holds as long as $n a_B \gg 1$. From Eqs.~(\ref{eq:chi}) and (\ref{eq:eta}), we have
\begin{equation}
\frac{\chi_1}{\chi_0} = - \frac{3}{2 \pi^4 n^3 a_B a^2} \log \left( 2 \pi n d \right).
\label{eq:eta1vseata02}
\end{equation}
Neglecting the logarithm, we find the condition that this ratio greatly exceeds unity can be written as $n^3 a_B a^2 \ll 1$. In summary, the condition $| \chi_1 / \chi_0 | \gg 1$ holds in the perturbative regime provided that
\begin{equation}
\frac{1}{a_B} \ll n \ll \frac{1}{a_B^{1/3} a^{2/3}}.
\label{eq:range}
\end{equation}
For sufficiently weak interactions we have $a_B \gg a$, which guarantees the existence of the range (\ref{eq:range}).
We conclude that for weak interactions, there are scenarios in which either lattice effects or interactions may dominate the viscous behavior of a Fermi gas. As we have seen, this behavior is closely linked to whether the interactions are short- or long-range.
\section{Suppression of Viscosity}
\label{sec:scaleinvariance}
For dispersions of the form~(\ref{eq:special_dispersion1}), the parameter $\chi_0$ vanishes, signalling the suppression of $\zeta$. For the special case of a quadratic dispersion ($B = 2$), this is the analog of the well known result that the bulk viscosity of a three-dimensional classical gas is suppressed~\cite{lifshitz_physical_1981} due to the scale invariance of the fermion dispersion~\cite{matveev_viscous_2017,degottardi_viscous_2020}. In this section, we show that the same argument explains the vanishing of $\chi_0$ for the power-law dispersion~(\ref{eq:special_dispersion1}) with any exponent $B$. Similarly, the parameter $\eta_0$, which appears in the expressions for the viscosities $\gamma$ and $\lambda$ given by Eq.~(\ref{eq:gammalambda}), vanishes for dispersions of the form (\ref{eq:exp}). We will show that this behavior can be explained with similar arguments.
We begin by considering the case of the bulk viscosity $\zeta$ in the case of the power-law dispersion~(\ref{eq:special_dispersion1}), where without loss of generality we can set $C=0$. As in Sec.~\ref{sec:boltzmann}, we consider a gas, initially in thermal equilibrium, subject to an infinitesimal velocity gradient $\partial_x u$. We will show that for dispersions of the form (\ref{eq:special_dispersion1}), this perturbation will not drive the system out of equilibrium. From the continuity equation, the infinitesimal gradient of the velocity $u$ is equivalent to a time-dependent particle density, $\partial_t n = - n \partial_x u$. We consider the scenario in which the particle number $N$ is fixed but the length of the system $L$ is time-dependent. Finite size quantization requires that the momentum of any state $p$ be quantized in units of $\pi\hbar/L$. For a power-law dispersion of the form~(\ref{eq:special_dispersion1}), $\varepsilon_p$ therefore scales as $1/L^B$. To wit, as the system size changes from $L(0) = L_0$ to $L(t) = L$, the energy of a state of momentum $p$ evolves from $\varepsilon_p^{(0)}$ to
\begin{equation}
\varepsilon_{p(t)} = \varepsilon_p^{(0)}\left( \frac{L_0}{L} \right)^{B}.
\label{eq:scaling2}
\end{equation}
Let us assume that at $t = 0$ the system is in equilibrium, and the occupation numbers of the fermion states are given by the Fermi-Dirac distribution
\begin{equation}
n_p = \frac{1}{e^{(\varepsilon_p - \mu)/T}+1}
\label{eq:FD}
\end{equation}
with $\varepsilon_p = \varepsilon_p^{(0)}$. A generic change to $\varepsilon_p$ would violate the relation (\ref{eq:FD}) and thus would drive the system out of equilibrium. However, in the case of the power-law dispersion, $\varepsilon_p$ changes by a factor that does not depend on $p$. As a result, by choosing new values of the temperature and chemical potential according to
\begin{equation}
T = T_0 \left( \frac{L_0}{L} \right)^{B}
\label{eq:Tchange}
\end{equation}
and
\begin{equation}
\mu = \mu_0 \left( \frac{L_0}{L} \right)^{B},
\end{equation}
we find that the distribution function $n_p$ retains its Fermi-Dirac form (\ref{eq:FD}). Thus, the system remains in equilibrium, and the rate $\dot{n}_p$ must vanish.
The above argument applies only in the limit of weak interactions. We now show that for certain types of interactions, the bulk viscosity vanishes regardless of the interaction strength. Consider a Hamiltonian with kinetic energy described by $\varepsilon_p \propto |p|^B$ and interactions given by Eq.~(\ref{eq:interaction_hamiltonian}) with $V(q) \propto |q|^{B-1}$. A system in thermal equilibrium is described by the Gibbs distribution
\begin{equation}
w_n = \frac{1}{Z} e^{-E_n / T},
\label{eq:Gibbs}
\end{equation}
where $w_n$ is the probability of finding the system in a state with energy $E_n$, and $Z$ is the partition function~\cite{landau_statistical_2013}. As above, we take the system to have a time-dependent length $L(t)$ and consider the scaling of the energy $E_n$. The Hamiltonian is composed of an operator describing kinetic energy, which scales as $1/L^B$ in accordance with Eq.~(\ref{eq:scaling2}), and an operator describing interactions. Since the momenta themselves scale as $1/L$ and the interaction carries an explicit factor of $1/L$ in Eq.~(\ref{eq:interaction_hamiltonian}), the interaction Hamiltonian also scales as $1/L^B$ provided that $V(q) \propto |q|^{B-1}$. As a result, the total Hamiltonian scales as $1/L^B$ and thus its eigenvalues $E_n$ must also scale as $1/L^B$. If the system starts in equilibrium, then it will remain in equilibrium with a distribution described by Eq.~(\ref{eq:Gibbs}) and the time-dependent temperature given by Eq.~(\ref{eq:Tchange}). As we saw in Sec.~\ref{sec:interactions}, the contribution to $\chi$ linear in the interactions, $\chi_1$, does indeed vanish in this case. The above argument is more general and shows that the viscosity must vanish to all orders in the interaction strength.
We now consider a system with a dispersion of the form (\ref{eq:exp}) and demonstrate that an infinitesimal gradient of $\delta \mu$ does not drive the system out of equilibrium. We recall that a system with a spatially uniform $\delta \mu$ has an equilibrium distribution (\ref{eq:dist2}), in which we set $u = 0$. This distribution can be formally interpreted as the standard Fermi-Dirac distribution (\ref{eq:FD}) for particles with energies $\varepsilon_{p} \to \varepsilon_p + U(p)$,
where $U(p)$ is the momentum-dependent potential
\begin{equation}
U(p) = - \frac{1}{2} \textrm{sgn}(p) \delta \mu.
\label{eq:potential}
\end{equation}
We now take $\delta \mu$ to have an infinitesimal gradient. This gives rise to a momentum dependent force
\begin{equation}
- \frac{\partial U}{\partial x} = \textrm{sgn}(p) \frac{\partial_x \! \left( \delta \mu \right)}{2}
\end{equation}
acting on a particle with momentum $p$. As a result, the momentum of each particle evolves in time according to
\begin{equation}
p(t) = p + \textrm{sgn}(p) \frac{\partial_x \! \left( \delta \mu \right)}{2} t.
\end{equation}
For the dispersion given by Eq.~(\ref{eq:exp}), the energy of a particular state is given by
\begin{equation}
\varepsilon_{p(t)} = \varepsilon_p^{(0)} e^{B (\partial_x \delta \mu) t/2}.
\end{equation}
[We have set $C$ in Eq.~(\ref{eq:exp}) equal to zero]. Generically, a change in $\varepsilon_p$ drives the system out of equilibrium. But as was the case in Eq.~(\ref{eq:scaling2}), the energies $\varepsilon_{p(t)}$ change by a $p$-independent factor. As long as the temperature and chemical potential have the same time dependences, namely
\begin{equation}
T = T_0 e^{B (\partial_x \delta \mu) t/2}, \quad \mu = \mu_0 e^{B (\partial_x \delta \mu) t/2},
\end{equation}
the system is described by the equilibrium distribution (\ref{eq:FD}). We conclude that an infinitesimal $\partial_x \delta \mu$ does not drive a system described by the dispersion (\ref{eq:exp}) out of equilibrium, consistent with the fact that $\eta_0 = 0$.
\section{Discussion and Conclusions}
\label{sec:discconcl}
We have presented a systematic study of the bulk viscosity for one-dimensional Fermi gases with arbitrary dispersions, and thus the theory can account for lattice effects. The expression for the bulk viscosity, given by Eq.~(\ref{eq:zetaeta}), has the general form
\begin{equation}
\zeta \propto \chi^2 \tau,
\label{eq:zeta2}
\end{equation}
i.e., the bulk viscosity is controlled by two quantities: the relaxation time $\tau$ and the parameter $\chi$. The latter is a measure of the \emph{sensitivity} of the gas to the velocity gradient $\partial_x u$ and quantifies the extent to which this perturbation displaces the gas from equilibrium. For instance, we found that a gas of free fermions with a dispersion given by Eq.~(\ref{eq:special_dispersion1}) is insensitive to gradients of $u$, resulting in $\chi_0 = 0$. (In the absence of interactions, $\chi = \chi_0$.)
To appreciate the central role played by $\chi$, we consider a gas of fermions with a dispersion given by Eq.~(\ref{eq:special_dispersion1}), for which $\chi$ vanishes in the limit of weak interactions due to scale invariance. Given the expression (\ref{eq:zeta2}), we expect that $\zeta$ vanishes in this limit. However, this conclusion is premature since interactions also control the relaxation properties of the system. In the limit that interactions vanish, $\tau$ diverges, and thus the expression (\ref{eq:zeta2}) for the bulk viscosity is indeterminate. To determine the fate of $\zeta$, we must therefore consider the regime of weak but non-vanishing interactions~\cite{matveev_viscous_2017}. While $\chi_0$ vanishes, $\chi_1$, given by Eq.~(\ref{eq:eta}), is proportional to the interaction strength $V$, and thus $\chi \propto V$. On the other hand, the relaxation in one dimension is dominated by three-particle processes, for which $\tau \propto 1/V^4$~\cite{lunde_three-particle_2007,khodas_fermi-luttinger_2007}. Thus, in the limit of weak interactions, the bulk viscosity (\ref{eq:zeta2}) is still large, $\zeta \propto 1/V^2$, but it is suppressed compared with that of Fermi gases with generic dispersions, for which $\zeta \propto 1/V^4$.
The conclusion that interactions in a Fermi gas with power-law dispersion (\ref{eq:special_dispersion1}) result in a nonvanishing viscosity assumes that the interaction will spoil the scale invariance of the system. However, for an interaction that satisfies $V(q) \propto |q|^{B-1}$ where $B$ is the exponent in Eq.~(\ref{eq:special_dispersion1}), the exact many-body energy levels scale as a power of the system size. As a result, the bulk viscosity vanishes regardless of the strength of the interactions. This argument is not limited to 1D systems. An example of a three-dimensional system with zero bulk viscosity is the Fermi gas in the unitary limit~\cite{son_vanishing_2007}.
A peculiarity of one-dimensional quantum systems is that for particular interactions they can possess an infinite number of conserved quantities. Such systems are described by \emph{integrable models}. These systems do not relax, i.e., the relaxation time $\tau$ is formally infinite, and thus the bulk viscosity (\ref{eq:zeta2}) is infinite for integrable models. An exception is the Calogero-Sutherland model~\cite{sutherland_beautiful_2004}, which describes particles with dispersion $\varepsilon_p\propto |p|^B$ and interactions $V(q)\propto |q|^{B-1}$ with $B=2$. As discussed in the previous paragraph, this model is insensitive to the velocity gradient, so $\chi=0$. On the other hand, by virtue of integrability, the relaxation time $\tau$ in Eq.~(\ref{eq:zeta2}) is infinite. The bulk viscosity (\ref{eq:zeta2}) is thus indeterminate.
We also considered the case of Fermi gases driven at finite frequencies. For a broad range of frequencies, these systems fail to come to full equilibrium and are instead described by the distribution (\ref{eq:dist2}) in which the parameter $\delta \mu$ is the difference between the chemical potentials of right and left movers. Position dependence of this parameter, $\partial_x \delta \mu \neq 0$, leads to viscous dissipation, which is described by the quadratic form~(\ref{eq:w4}). This expression defines two additional bulk viscosities, $\gamma$ and $\lambda$. As discussed above, systems of particles with dispersion (\ref{eq:special_dispersion1}) are insensitive to the perturbation $\partial_x u$ in that it does not drive the system out of equilibrium. Similarly, a gas of fermions with dispersion (\ref{eq:exp}) is insensitive to the perturbation $\partial_x \delta \mu$. It is worth noting that gases of fermions obeying the ultra-relativistic dispersion $\varepsilon_p = c|p|$, which is a special case of both Eqs.~(\ref{eq:special_dispersion1}) and (\ref{eq:exp}), are insensitive to both gradients of $u$ and $\delta \mu$.
Finally, given the importance of interactions to viscous properties, it is natural to ask whether the results of this work can be extended beyond the weakly interacting limit. In fact, viscous dissipation of a spinless Luttinger liquid was considered in Ref.~\cite{matveev_viscous_2017}, where the interaction strength was not assumed to be weak. Though the focus of that work was on Galilean invariant systems, much of the discussion applies to arbitrary dispersions. Unfortunately, making a similar generalization to spinful systems is not straightforward given that their relaxation properties are not well understood.
\begin{acknowledgments}
The authors are grateful to A. V. Andreev for stimulating discussions. Work at Argonne National Laboratory was supported by the U.S. Department of Energy, Office of Science, Basic
Energy Sciences, Materials Sciences and Engineering Division.
\end{acknowledgments}
|
{
"arxiv_id": "2302.14124",
"language": "en",
"timestamp": "2023-03-01T02:01:37",
"url": "https://arxiv.org/abs/2302.14124",
"yymm": "2302"
} | \section{Introduction}
Glioblastoma (GBM) is a highly aggressive brain neoplasm with a median survival of $15$ months. Surgical resection and adjuvant chemo-radiotherapy are palliative treatments only~\cite{stupp2005radiotherapy}. The latter induces changes to brain tissue that in turn produce neuroimaging changes similar to tumor recurrence~\cite{kim2010differentiating}, making it critical for clinical management decisions to differentiate between recurring tumor (TP) and treatment effect (TN). Positron emission tomography (PET) with Fluorine-18 fluorodeoxyglucose (18F-FDG), a surrogate marker for glucose metabolism, represents an imaging technique that can provide pathophysiologic and diagnostic data in this clinical setting. The current standard of care for clinical FDG PET is a qualitative, visual analysis performed by comparison to the contra-lateral and other brain regions. Static standardized PET uptake values (SUV) measured at a specific time point post-FDG injection have been widely used as a semi-quantitative measure \cite{nozawa2013glucose}. However, SUV does not reliably differentiate tumor from therapy effect, as it can depend on several factors such as body weight and blood glucose level. Dynamic FDG PET (dPET), an advance over traditional static FDG PET, may prove advantageous in clinical staging. dPET includes novel methods based on a model-corrected blood input function, which accounts for partial volume averaging, to compute parametric rate-of-uptake (Ki) maps that reveal kinetic information.
In this work, we propose a multi-modal tumor classification framework consisting of a dual-encoder 3D convolutional neural network (CNN). We find that the metabolic information from dPET assists MRI in classification and that the CNN combined with multiple image modalities performs better at differentiating tumor progression from treatment effect in human GBM.
\begin{figure*}[h]
\centering
\includegraphics[scale=0.5]{images/prop_model.pdf}
\caption{Proposed multi-modal classification network and data processing pipeline.}
\label{fig:prop_model}
\end{figure*}
\section{Proposed Multi-modal Classification}
In this section, we present our proposed multi-modal classification framework and its components. Our model consists of two modules: (I) Ki Map Generation \& Tumor Extraction and (II) Tumor Classification, as shown in \textbf{Fig.~\ref{fig:prop_model}}. Details of our proposed methodology are introduced as follows.
\vspace{-5pt}
\subsection{Ki Map Generation}
\textbf{PET Motion Correction and MR-PET Registration.} To correct patient head motion during the dynamic PET scan, we perform motion correction and co-registration to MR space as described in \cite{seshadri2021dynamic}. Since static PET is the standard of care for clinical diagnosis, we also compute Standardized Uptake Value (SUV) maps for evaluation.
\textbf{ICA segmentation and IDIF derivation.} To compute parametric glucose rate-of-uptake (Ki) maps representing kinetic information from the co-registered dynamic PET scans, we derive an image-derived blood input function (IDIF) with the internal carotid artery (ICA) as the region of interest. An early reference frame is selected for semi-automated volumetric annotation of the ICA using thresholding and islanding in 3D Slicer. The resultant ICA segmentation is applied across all time frames to compute the average blood time-activity curve, which serves as the IDIF.
\textbf{IDIF correction and graphical Patlak for parametric Ki computation.} To generate the parametric brain PET maps, a model-corrected blood input function (MCIF) is computed by optimizing the ICA-derived IDIF, as described in \cite{seshadri2021dynamic}, to account for partial volume recovery of the blood input.
Finally, the computed MCIF and whole-brain PET data are fed into a graphical Patlak model \cite{patlak1983graphical}. The model performs a voxel-wise linear regression on the data to derive the rate of FDG uptake, Ki, as the slope. By analyzing millions of voxels across the entire PET volume, a parametric 3D Ki map is computed.
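A minimal sketch of this voxel-wise fit is given below; the array names and the simple late-time least-squares slope (with a hypothetical start time \texttt{t\_star}) are our own illustrative choices and stand in for the full pipeline of \cite{seshadri2021dynamic}.
\begin{verbatim}
import numpy as np

def patlak_ki(pet, cp, t, t_star=20.0):
    # pet: (T, X, Y, Z) dynamic PET frames; cp: (T,) MCIF sampled at
    # the frame mid-times t (minutes).  Graphical Patlak:
    #   C(t)/cp(t) = Ki * int_0^t cp dtau / cp(t) + V0,
    # which is linear at late times t >= t_star.
    x = np.array([np.trapz(cp[:i+1], t[:i+1])
                  for i in range(len(t))]) / cp
    late = t >= t_star
    xs = x[late]
    ys = pet[late] / cp[late, None, None, None]
    xm, ym = xs.mean(), ys.mean(axis=0)
    # Least-squares slope for every voxel at once -> 3D Ki map.
    return (((xs - xm)[:, None, None, None] * (ys - ym)).sum(axis=0)
            / ((xs - xm) ** 2).sum())
\end{verbatim}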
\subsection{Tumor Extraction}
To ensure that the model focuses only on tumor voxels rather than brain background voxels, we perform a semi-automated segmentation of tumor volumes using the seed-based region growing segmentation tool in 3D Slicer. This is followed by Gaussian smoothing to obtain a conservative mask of the abnormality, and then verification by clinical experts. These masks are then applied to the co-registered PET Ki, SUV, and T1-weighted MRI images to extract the tumor voxels.
3D parametric PET Ki, static PET SUV, and MR tumor voxels are extracted in the same image space. To perform the extraction of 3D tumor voxels while consistently preserving positional information, as a pre-processing step the images and tumor mask for each subject are co-registered to the SRI24 atlas \cite{rohlfing2008sri24} with dimensions $(240, 240, 155)$ and re-oriented into the Left Posterior Superior (LPS) orientation. This is done by registering the MR to the atlas after temporary $N4$ bias field correction, using mutual information for rigid registration with the Cancer Imaging Phenomics Toolkit (CaPTk) \cite{davatzikos2018cancer}, and then using the computed transformation to bring the other modalities into the same space, after which the mask is applied. The masked images are further center-cropped to the brain, to $(170, 170, 120)$ dimensions, by computing a minimum viable bounding box across all datasets to remove unnecessary background voxels and bring focus to the tumors. In \textbf{Fig. \ref{fig:tumors}}, we visualize the extracted tumor voxels for each modality. We observe a higher signal-to-noise ratio, showcasing distinct tumor metabolic features, within the Ki maps compared to SUVs.
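The minimum-bounding-box crop can be sketched as follows; this is a minimal sketch assuming binary brain masks already co-registered to the atlas space, and the function names are illustrative.
\begin{verbatim}
import numpy as np

def union_bounding_box(brain_masks):
    # Smallest box containing the brain across all co-registered subjects.
    coords = [np.argwhere(m) for m in brain_masks]
    lo = np.min([c.min(axis=0) for c in coords], axis=0)
    hi = np.max([c.max(axis=0) for c in coords], axis=0) + 1
    return lo, hi

def apply_crop(volume, lo, hi):
    # Use the same box for every modality so that voxels stay aligned.
    return volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
\end{verbatim}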
\begin{figure}[htb!]
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[width=8.5cm]{images/s1.png}}
\end{minipage}
\caption{Example processed tumor images from each modality (MR, SUV, Ki) used as network inputs.}
\label{fig:tumors}
\end{figure}
\begin{table}[htb]
\centering
\caption{Classification performance metrics for different image modalities and network architectures.\\}
\begin{tabular}{|l|c|c|c|c|}
\hline
\textbf{Model} & \textbf{Modality} & \textbf{Accuracy} & \textbf{Precision} & \textbf{Recall} \\ \hline
\multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}Single-\\ encoder\end{tabular}} & MR & 0.56 & 0.71 & 0.68 \\ \cline{2-5}
& SUV & 0.65 & 0.78 & 0.72 \\ \cline{2-5}
& Ki & 0.71 & 0.80 & 0.80 \\ \hline
\multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}Dual-\\ channel\end{tabular}} & MR+SUV & 0.53 & 0.72 & 0.60 \\ \cline{2-5}
& Ki+SUV & 0.71 & 0.78 & 0.84 \\ \cline{2-5}
& MR+Ki & 0.71 & 0.80 & 0.65 \\ \hline
\multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}\textbf{Dual-}\\ \textbf{encoder}\end{tabular}} & MR+SUV & 0.62 & 0.75 & 0.72 \\ \cline{2-5}
& Ki+SUV & 0.65 & 0.76 & 0.76 \\ \cline{2-5}
& \textbf{MR+Ki} & \textbf{0.74} & \textbf{0.80} & \textbf{0.84} \\ \hline
\end{tabular}
\label{table:comp}
\end{table}
\vspace{-10pt}
\subsection{Multi-modal Architecture}
After extracting tumor voxels from the different image modalities, we develop a dual-encoder CNN architecture for multi-modal classification. The average dimensions of the tumors are much smaller than the bounding brain region. For these low-dimensional tumor volumes, and given the low number of samples, there is a high probability that employing any SOTA classification network (each with more than $30$M parameters, e.g., ResNet18: $\sim34$M, VGGNet16: $\sim138$M) would overfit. Hence, we develop a custom convolutional neural network with a limited number of encoder layers. For each modality, we utilize a shallow 3D convolutional encoder architecture with $3$ convolutional layers (kernel size $3$; numbers of filters $B\times2$, $B\times4$, and $B\times4$, where the number of base filters $B$ is empirically set to $8$). The output latent feature vectors from the encoder of each modality are flattened and fused by concatenation before being fed into the fully connected layers. Additionally, a dropout layer with a rate of $0.2$ is added after the first dense layer for regularization.
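A PyTorch sketch of this architecture is shown below. The text above specifies the filter counts, the fusion by concatenation, and the dropout rate; the max-pooling between convolutions and the hidden width of $128$ in the dense head are our own assumptions, added for tractability.
\begin{verbatim}
import torch
import torch.nn as nn

B = 8  # base number of filters

def make_encoder():
    # Three 3D conv layers (kernel 3) with B*2, B*4, B*4 filters.
    # The pooling layers are an assumption, not from the text.
    return nn.Sequential(
        nn.Conv3d(1, 2*B, 3), nn.ReLU(), nn.MaxPool3d(2),
        nn.Conv3d(2*B, 4*B, 3), nn.ReLU(), nn.MaxPool3d(2),
        nn.Conv3d(4*B, 4*B, 3), nn.ReLU(), nn.MaxPool3d(2),
        nn.Flatten(),
    )

class DualEncoderCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc_a = make_encoder()  # e.g., MR tumor volume
        self.enc_b = make_encoder()  # e.g., Ki tumor volume
        self.head = nn.Sequential(
            nn.LazyLinear(128), nn.ReLU(),  # hidden width assumed
            nn.Dropout(0.2),  # dropout after the first dense layer
            nn.Linear(128, n_classes),
        )

    def forward(self, a, b):
        # Encode each modality independently, then fuse by concatenation.
        z = torch.cat([self.enc_a(a), self.enc_b(b)], dim=1)
        return self.head(z)
\end{verbatim}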
\vspace{-5pt}
\section{Experimental Evaluation}
\subsection{Dataset}
Our dataset consists of dynamic 18F-FDG PET scans for $26$ subjects with GBM, obtained using a whole-body time-of-flight (TOF) Siemens Biograph mCT scanner with attenuation correction over time. Dynamic acquisition consisted of an intravenous $\sim10$ mCi tracer injection over $10$ seconds with the initiation of a $60$-minute scan in list-mode format. T1-weighted MPRAGE MRI scans are also obtained for each subject using a Siemens 3T MR scanner. The scans comprise a total of $35$ tumor abnormalities across all subjects, along with surgical pathology as the ground truth for each. By considering multiple disjoint tumor regions present among subjects as distinct inputs, we can increase the number of training samples.
\subsection{Training setup}
Training experiments are performed with both single- and multi-modal image input combinations in turn, to perform a comparative evaluation of their classification performance. Adjustments for class imbalance are made by using weighted categorical cross-entropy based on the training distribution of labels (TN:TP balanced class weight ratios $1.95:0.65$). For cross-validation, we select leave-one-out cross-validation (LOOCV), given the low sample size, to obtain a less biased and thorough measure of test metrics while utilizing most of the training data. Training is performed with the Adam optimizer and a learning rate of $10^{-5}$ with a batch size of $2$, consistently across all iterations for each experiment on a single P100 GPU with 16GB VRAM.
For evaluation, we compute the accuracy, recall, and precision over the entire set of test predictions ($35$ test samples) across all leave-one-out iterations.
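A sketch of this pooled LOOCV evaluation is given below; the helper \texttt{train\_one\_model} is hypothetical and stands in for the training loop described above (weighted cross-entropy with the stated class weights, Adam, batch size $2$).
\begin{verbatim}
import numpy as np
import torch
import torch.nn as nn
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score)

def loocv_metrics(mr, ki, labels, train_one_model):
    # mr, ki: (35, 1, X, Y, Z) tensors; labels: (35,) int array.
    loss = nn.CrossEntropyLoss(weight=torch.tensor([1.95, 0.65]))
    preds = np.empty_like(labels)
    for tr, te in LeaveOneOut().split(labels):
        model = train_one_model(mr[tr], ki[tr], labels[tr], loss)
        with torch.no_grad():
            preds[te] = model(mr[te], ki[te]).argmax(dim=1).numpy()
    return (accuracy_score(labels, preds),
            precision_score(labels, preds),
            recall_score(labels, preds))
\end{verbatim}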
\subsection{Results Analysis and Discussion}
\textbf{Table \ref{table:comp}} showcases the classification metrics of our model compared with the baseline experiments. Our metrics clearly show that the combination of anatomical differences ingrained in conventional MRI and metabolic differences captured by parametric PET Ki maps yields fairly high classification performance (best accuracy of $0.74$) compared to the other modality combinations. Moreover, this also shows that the dual-encoder architecture, which encodes the image modalities independently before fusing features, outperforms the simultaneous dual-channel and single-encoder architectures across all test metrics. Comparing image modalities independently trained with single-encoder CNNs, we also observe that Ki alone performs better than SUV and MR, with accuracies of $0.71$, $0.65$, and $0.56$, respectively. Although we do not see drastic improvements, there is an incremental increase in accuracy and recall of 4--5\%. The similar accuracies of the single- and dual-encoder models can, however, be attributed to the low sample size.
Prior works have explored radiomics feature extraction from multi-modal MRI \cite{gao2020deep} and diffusion MRI \cite{park2021differentiation}, along with feature selection and oversampling methods for this task, but they yield limited accuracy and have not evaluated the combination of metabolic image modalities such as PET with these structural ones. Another work follows the same input-function derivation methodologies to compute average tumor Ki and other kinetic rate constants for classification using linear regression models \cite{schetlick2021parametric}. In comparison, not only do we utilize image features directly and develop a more complex deep-learning-based CNN approach, which can scale better with more data, but we also utilize multi-modal combinations involving MRI and perform evaluation against static PET SUV maps for classification.
\section{Conclusion}
Prediction accuracy may still be limited due to the low sample size and the resulting class imbalance. Moreover, the application of data augmentation, e.g., through affine image transformations, is not possible because of the high structural heterogeneity among these tumors; applying random transformations without a better understanding of the differentiating tumor features could alter the associated class. To overcome these problems and train robust image-based classification models that could be deployed in the clinical space, future improvements could include better pre-processing and feature-extraction pipelines as well as machine learning techniques developed for small-scale datasets. Despite the low sample size, and while accounting for class imbalance, the current evaluation metrics elucidate that parametric PET Ki, which models the underlying glucose transport into the tumors, performs better than static PET SUV for the classification of progressing versus necrotic tumor volumes and hence serves as a useful addition, contributing metabolic information to the structural differences incorporated within MRI. For future work, we will consider incorporating deep learning-based automatic segmentation for an end-to-end classification model.
\section{Compliance with Ethical Considerations}
The dynamic FDG PET human imaging studies were approved by the Institutional Review Board (IRB) at the University of Virginia under protocol numbers IRB-HSR \# 17556 and IRB-HSR \# 190096.
\section{Acknowledgements}
The authors acknowledge grant funding from the University of Virginia Brain Institute.
\bibliographystyle{IEEEbib}
|
{
"arxiv_id": "2302.14198",
"language": "en",
"timestamp": "2023-03-01T02:04:33",
"url": "https://arxiv.org/abs/2302.14198",
"yymm": "2302"
} | \section*{Introduction}
In our previous paper \cite{MOA}, we introduced and studied some invariants intended to measure how far from birationally isomorphic two given varieties $X$ and $Y$ of the same dimension might be. These were defined by studying the minimal birational complexity of correspondences between $X$ and $Y$. Following a suggestion of Jordan Ellenberg, the present note continues this line of thought by investigating self-correspondences of a given variety.
Let $X$ be a smooth complex projective variety of dimension $n$. By an auto-correspondence of $X$ we understand a smooth projective variety $Z$ of dimension $n$ sitting in a diagram:
\begin{equation}\begin{gathered} \label{Diagram1.Intro}
\xymatrix{
& Z \ar[dl]_a\ar[dr]^b \\
X & & X,
}
\end{gathered}
\end{equation}
with $a$ and $b$ dominant and hence generically finite.
We assume that $Z$ maps birationally to its image in $X \times X$ (so that general fibres of $a$ and $b$ are identified with subsets of $X$). The \textit{auto-correspondence degree} of $X$ is defined to be
\begin{equation} \label{Def.Autocorrdeg.Intro}
\textnormal{autocorr}(X) \ = \ \min_{Z \ne \Delta} \,\big \{ \deg(a) \cdot \deg(b) \big\},
\end{equation}
the minimum being taken over all such $Z$ excluding those that map to the diagonal. Thus
$\textnormal{autocorr}(X) = 1$ if and only if $X$ admits non-trivial birational automorphisms. By considering the fibre square of a rational covering $X \dashrightarrow \mathbf{P}^n$, one sees that
\[ \textnormal{autocorr}(X) \ \le \ \big( \textnormal{irr}(X) - 1 \big)^2, \]
where the degree of irrationality $\textnormal{irr}(X)$ is defined to be the least degree of such a covering (see \S1, below). Our intuition is that equality holding means that $X$ is ``as far as possible" from having any interesting self-correspondences of low degree.
Our main results are as follows:
\begin{propositionalpha} \label{NewPropositionA}
If $X$ is a very general curve of genus $g \ge 3$, then
\[ \textnormal{autocorr}(X) \ = \ \big( \textnormal{gon}(X) -1 \big)^2, \]
and minimal correspondences arise from the fibre square of a gonal map.
\end{propositionalpha}
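\noindent For instance, a very general curve of genus $3$ has gonality $3$, so in this case $\textnormal{autocorr}(X) = 4$, computed by the correspondence residual to the diagonal in the fibre square of a trigonal pencil.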
\begin{theoremalpha} \label{NewTheoremB}
Let $X \subseteq \mathbf{P}^{n+1}$ be a very general hypersurface of degree $d \ge 2n + 2$. Then
\[
\textnormal{autocorr}(X) \ = \ (d-2)^2 \ = \ \big( \textnormal{irr}(X) -1\big)^2,
\]
and again minimal correspondences are birational to the fibre square of projection from a point.
\end{theoremalpha}
\noindent In fact, we classify all self-correspondences in a slightly wider numerical range: see Theorem \ref{NewTheoremD'}.
If $X$ is a hyperelliptic curve of genus $g$, then $\textnormal{autocorr}(X) = 1$ since $X$ has a non-trivial automorphism whose graph is a non-diagonal copy of $X$ sitting in $X \times X$. David Rydh asked whether there are any unexpected hyperelliptic curves in this product. Our final result asserts that there are not:
\begin{theoremalpha} \label{NewTheoremC}
Let $X$ be a very general hyperelliptic curve of genus $g\geq 2$. The only hyperelliptic curves in $X\times X$ are
\setlength{\parskip}{2pt}
\begin{itemize}
\item the fibers of the projection maps;
\vskip 3pt
\item the diagonal; and
\vskip 3pt
\item the graph of the hyperelliptic involution.
\end{itemize}
In particular, the image of any hyperelliptic curve in $X\times X$ under the Abel-Jacobi map is geometrically degenerate in $J(X)\times J(X)$, i.e.\! it generates a proper subtorus of that product.
\end{theoremalpha}
\noindent By a hyperelliptic curve in $X \times X$, we mean an irreducible curve $Z \subseteq X \times X$ whose normalization is hyperelliptic.
\setlength{\parskip}{.125in minus .08in}
We work throughout over the complex numbers. We are grateful to Jordan Ellenberg and David Rydh for raising the questions addressed here. We also thank Mark Green and Radu Laza for showing us proofs of the lemma in \S 2 before we were able to find a reference in the literature.
It is an honor to dedicate this paper to Claire Voisin on the occasion of her sixtieth birthday. Her influence on both the field as a whole and the work of the two authors has been immense.
\numberwithin{equation}{section}
\section{Preliminaries and Proof of Proposition \ref{NewPropositionA}}
We start with some general remarks about the auto-correspondence degree. Given a smooth complex projective variety $X$ of dimension $n$, its auto-correspondence degree $\textnormal{autocorr}(X)$ is defined as in the Introduction. Evidently this is a birational invariant of $X$.
Note that if $X$ admits a rational covering $X \dashrightarrow \mathbf{P}^n$ of degree $\delta$, then
\[ \textnormal{autocorr}({X}) \ \le \ (\delta -1)^2. \tag{$\ast$} \]
In fact, replacing $X$ with a suitable birational model, we can suppose that $X \longrightarrow \mathbf{P}^n$ is an actual morphism. Then
\[ X \times_{\mathbf{P}^n} X \, \subseteq \, X \times X \]
contains the diagonal $\Delta_X$ as an irreducible component. The union of the remaining components $Z^\prime \subseteq X \times_{\mathbf{P}^n} X$ has degree $(\delta - 1)$ over each of the factors, and ($\ast$) follows. In particular:
\begin{equation} \label{autocorr.bound.eqn}
\textnormal{autocorr}(X) \, \le \, ( \textnormal{irr}(X) - 1 )^2,
\end{equation}
where $\textnormal{irr}(X)$ denotes the minimal degree of such a rational covering $X \dashrightarrow \mathbf{P}^n$. Our main results assert that in several circumstances equality holds in \eqref{autocorr.bound.eqn}, and that the minimal correspondences arise as just described. We will say in this case that $Z$ is residual to the fibre square of a minimal covering of $\mathbf{P}^n$.
As in the earlier works \cite{BCD,BDELU,MOA}, the action of a correspondence on cohomology plays a central role. In the situation of diagram \eqref{Diagram1.Intro}, $Z$ gives rise to endomorphisms
\[
Z_* \, = \, b_* \circ a^* \ \ , \ \ Z^* \, = \, a_* \circ b^* \]
of the Hodge structure $H^n(X)$. We denote by
\[ Z_*^{n,0} \ = \ \textnormal{Tr}_b \circ a^* \ \ , \ \ {Z^*}^{n,0} \ = \ \textnormal{Tr}_a \circ b^*\]
the corresponding endomorphisms of the space $H^{n,0}(X)$ of holomorphic $n$-forms on $X$. In the cases of interest these will act as a multiple of the identity, allowing us to use a variant of the arguments from the cited papers involving Cayley--Bacharach.
We now turn to the proof of Proposition \ref{NewPropositionA}. We suppose then that $X$ is a very general curve of genus $g \ge 3$, and that $Z \to X \times X$ is a correspondence as in \eqref{Diagram1.Intro} that computes the auto-correspondence degree of $X$. The generality hypothesis on $X$ implies first of all that
\[
\textnormal{Pic}(X \times X) \ = \ a^* \textnormal{Pic}(X) \, \oplus \, b^* \textnormal{Pic}(X) \, \oplus \, \mathbf{Z} \cdot \Delta,
\]
and hence the image of $Z$ in $X \times X$ is defined by a section of
\begin{equation} \label{Defining.Eqn.Z.for.Curves}
\big( B \, \boxtimes \, A \big) (-m\Delta)
\end{equation}
for some line bundles $A, B$ on $X$ and some $m \in \mathbf{Z}$. Note that then
\[ \deg(a) \, = \, \deg(A) - m \ \ , \ \ \deg(b) \, = \, \deg(B) - m. \]
Moreover, both maps
\[
Z_*^{1,0} \, , \, {Z^*}^{1,0} \ : \ H^{1,0}(X) \longrightarrow H^{1,0}(X)
\]
are multiplication by $-m$.
We start by proving that \[ \deg(a) \ , \ \deg(b) \ \ge \ \textnormal{gon}(X) -1, \]
which will imply that $\textnormal{autocorr}(X) = (\textnormal{gon}(X) -1)^2$. We may suppose that $m \ne 0$, for if $m = 0$ then we are in the setting of \cite[Example 1.7]{MOA} and $\deg(a), \deg(b) \ge \textnormal{gon}(X)$. Now fix a general point $y \in X$, and suppose that
\[ b^{-1}(y) \ = \ x_1 + \ldots + x_\delta, \]
where $\delta = \deg(b)$. Then for any $\omega \in H^{1,0}(X)$, we have
\[ \omega(x_1) + \ldots + \omega(x_\delta)\, = \, Z_*^{1,0}(\omega)(y) \, = \, -m \cdot \omega(y). \]
It follows that the $\delta + 1$ points $y, x_1, \ldots , x_\delta$ do not impose independent conditions on $H^{1,0}(X)$, and hence they move in at least a pencil. In other words, $\deg(b) + 1 \ge \textnormal{gon}(X)$ and similarly $\deg(a) + 1 \ge \textnormal{gon}(X)$, as required.
Assuming that $\deg(a) = \deg(b) = \textnormal{gon}(X) - 1$, it remains to show that a minimal correspondence arises from the fibre square of a pencil. For this, we first of all rule out the possibility that $m < 0$. In fact, by intersecting $Z$ with the diagonal one finds that
$
\deg(A) \ + \ \deg(B) \ge -m\cdot (2g - 2)$,
and hence, since $\deg(a) + \deg(b) = \deg(A) + \deg(B) - 2m$, we get $\deg(a) + \deg(b) \ge -m \cdot(2g)$. But unless $g = 3$ this is impossible if $m < 0$ since $2 \cdot (\textnormal{gon}(X) -1) \le g+1$. When $g = 3$ one needs to rule out the existence of line bundles $A, B$ of degree $2$ such that $r\big(A(y)\big) = r\big(B(y)\big) \ge 1$ for every $y \in X$, and this follows from the well-known description of pencils of degree three on a smooth plane quartic. (See also Remark \ref{Tang.Correspondence.Remark}.)
Returning to the setting of \eqref{Defining.Eqn.Z.for.Curves}, assume now that $m > 0$. Then
for every $x \in X$:
\[ a^{-1}(x) \, \in \, \linser{ A(-m\cdot x) } \ \ , \ \ b^{-1}(x) \, \in \, \linser{ B(-m\cdot x) }, \]
which implies that
\begin{equation} \label{Eqn.r(A).r(B)} r(A) \, , \, r(B) \ \ge \ m. \end{equation}
We will use this to show that $\deg(a)$ and $\deg(b)$ are minimized when $A$ and $B$ move in pencils.
In fact, write $d = \deg(A)$. If $A$ is non-special, then $r(A) = d-g$, so $\deg(a) = d -m \ge g$ thanks to \eqref{Eqn.r(A).r(B)}, and we get a map of smaller degree from a gonal pencil.\footnote{The case $g = 3$ requires a special argument here that we leave to the reader.} Assume therefore that $A$ is special, so that
\[ A \, \in \, W^m_d(X). \]
We may suppose that $X$ is Brill--Noether general, in which case
\[\rho(m,d,g) \ = \ g-(m+1)(g-d+m)\ \geq \ 0.\]
It follows that
\[(m+1)d \ \geq \ mg+m(m+1)\]
and
\[d \ \geq \ \left(\frac{m}{m+1}\right)g+m,\]
so that
\[\deg(a) \, = \, d-m \ \geq \ \left(\frac{m}{m+1}\right)g.\]
This is minimized when $m = 1$ and similarly for $B$. Thus we can assume that $r(A) = r(B) =1$, and that the image of $Z$ lies in the linear series
\[ \linser{ \big( B \, \boxtimes \, A \big) (-\Delta) } \]
on $X \times X$. But this series is empty unless $A =B$, in which case it consists exactly of the residual to the diagonal in the fibre square of the pencil defined by $A$. This completes the proof.
\begin{remark} \label{Tang.Correspondence.Remark}
Suppose that $X \subseteq \mathbf{P}^2$ is a smooth plane curve of degree $d>3$. Then the correspondence defined as the closure of
\[\left\{(x,y)\in X\times X\mid y\neq x\text{ is in the embedded tangent line to } X \text{ at }x\right\}\]
dominates the first factor with degree $d-2$ but fails to arise from the fiber square of projection from a point on $X$. The degree of the second projection is $d(d-1)$ which is $\gg d-1 = \textnormal{gon}(X)$.
\end{remark}
\section{Proof of Theorem \ref{NewTheoremB}}
In this section we prove the following refinements of Theorem \ref{NewTheoremB} from the Introduction:
\begin{theorem} \label{NewTheoremD}
Let
$X \subseteq \mathbf{P}^{n+1}$
be a very general hypersurface of degree $d \ge 2n + 2$, and consider a self-correspondence $Z$ as in diagram \eqref{Diagram1.Intro}$:$
\[
\xymatrix{
& Z \ar[dl]_a\ar[dr]^b \\
X & & X.
} \]
Assume that $Z$ does not map to the diagonal. Then
\[ \deg(a) \, \ge \, d-2 \ \ , \ \ \deg(b) \, \ge \, d-2, \]
and hence $\textnormal{autocorr}(X) = (d-2)^2$.
\end{theorem}
\begin{theorem} \label{NewTheoremD'}
In the situation of Theorem \ref{NewTheoremD}, assume in addition that $\deg(a)\leq 2d-2n-3$.
\begin{itemize}
\item[$(i).$] If $\deg(b) \le d-2$, then $\deg(a) = d-2$ and $Z$ is birationally residual to the fibre square of projection from a point $x_0 \in X$.
\vskip 8pt
\item[$(ii).$] If $\deg(b) = d-1$, then either:
\vskip 6pt
\begin{enumerate}
\item[$(a).$] $Z$ is birational to the fibre product of two rational mappings $\phi_1 , \phi_2 : X \dashrightarrow \mathbf{P}^{n}$; or
\vskip5pt
\item[$(b).$] There is an $n$-fold $Y$ and a dominant rational mapping $\phi: X \dashrightarrow Y$ of degree $d$ such that $Z$ is birationally residual to the diagonal in the fibre product $X \times{_Y} X$.
\end{enumerate}
\end{itemize}
\end{theorem}
\begin{remark}
The various possibilities in Theorem \ref{NewTheoremD'} actually occur. For example in (ii)(a) one considers projection from two different points in $X$, while (ii)(b) arises for a general projection $X \longrightarrow \mathbf{P}^n$ from a point off $X$. \end{remark}
Turning to the proofs, the arguments follow the line of attack of \cite{BCD,BDELU,MOA}, so we will be relatively brief. Fix $X$ and $Z$ as above, and write
\[\delta_a\ =_{\text{def}}\ \deg (a) \ \text{ and } \ \delta_b\ =_{\text{def}}\ \deg (b).\]
The first point to observe is that we may -- and do -- assume that the endomorphism ring of the Hodge structure $H^n_{\text{pr}}(X,\mathbf{Z})$ is $\mathbf{Z}$.
\begin{lemmann}
If $X$ is a very general hypersurface in $\mathbf{P}^{n+1}$, then
$$\textup{End}(H^n_{\textup{pr}}(X,\mathbf{Z}))=\mathbf{Z}\cdot \textup{Id}.$$
Equivalently, $\textnormal{Hdg}^{n,n}(X\times X)$ is generated by the classes of the diagonal and the products $ h_1^ih_2^{n-i}$ $(1 \le i \le n)$ where $h_j=\textup{pr}_{j}^* c_1(\mathcal{O}(1)|_X)$.
\end{lemmann}
\begin{proof}The lemma follows from the computation of the algebraic monodromy group for the corresponding variations of Hodge structures \cite[\S 10.3]{PS}. In the cases $d=1,2$, the primitive cohomology has rank $0$ and $1$ respectively, so the statement holds trivially. For larger $d$, an element of $\textnormal{GL}(H^n_{\textup{pr}}(X,\mathbf{Z}))$ is a morphism of Hodge structures if and only if it commutes with the orthogonal group ($n$ even) or the symplectic group ($n$ odd). The centralizer of both of these subgroups is $\mathbf{Z}\cdot \textup{Id}$. Alternative arguments were shown to us by Radu Laza and Mark Green.
\end{proof}
It follows from the lemma that
\[ Z_*^{n,0} \, : \, H^{n,0}(X) \longrightarrow H^{n,0}(X)\]
is multiplication by some integer $c$. Note that then ${Z^*}^{n,0} : H^{n,0}(X) \longrightarrow H^{n,0}(X)$ is multiplication by the same integer $c$. In fact, abusively writing $[Z]$ for the class of the image of $Z$ in $H^*(X \times X)$, one has:
$$[Z]\, \in \, \text{Hdg}^{n,n}(X\times X)\ =\ \big \langle \Delta\, , \, h_1^ih_2^{n-i}\, | \, 1\leq i\leq n\big \rangle_{\mathbb{Q}},$$
and of these classes, only $\Delta$ gives rise to a non-zero map
\[H^{n,0}(X)\longrightarrow H^{n,0}(X)\]
under the identification
\[\text{Hdg}^{n,n}(X\times X)\, \cong \, \text{End}_{\mathbf{Q}-\text{HS}}(H^n(X)).\]
Moreover,
\[\Delta_*=\Delta^*=\text{Id}_{H^{n,0}(X)}.\]
As in the previous section, we will need to distinguish between the cases $c=0$ and $c\neq 0$.\\
We start by showing that
\[ \delta_a \, , \, \delta_b \ \ge \ d-2,\]
by an argument parallel to that appearing in \S 1.
First, observe that
\begin{equation} \delta_a
\, ,\,\delta_b \ \ge \ \begin{cases} d-n & \text{ if } c=0\\
d-n-1 &\text{ if } c\neq 0. \end{cases}\label{lowerbounddelta}\end{equation}
Indeed, if $c=0$, $Z$ is a traceless correspondence, so given general $x,y\in X$, the sets $a^{-1}(x)$ and $b^{-1}(y)$ both satisfy the Cayley--Bacharach condition with respect to $H^{n,0}(X)$. Similarly, when $c \ne 0$ the cycle $Z-c\Delta$ is a traceless correspondence, and hence for general $x,y\in X$, the sets $a^{-1}(x)\cup \{x\}$ and $b^{-1}(y)\cup \{y\}$ also both satisfy the Cayley--Bacharach condition. The inequality \eqref{lowerbounddelta} then follows from \cite[Theorem 2.4]{BCD}.\\
We assume next that $\delta_b \le d-1$, aiming for a contradiction when $\delta_b \le d-3$. Fix a general point $y \in X$. The fibre of $Z$ over $y$ sits naturally as a subset of $X$ and hence also $\mathbf{P}^{n+1}$:
\[ Z_y \, =\, \{x_1,\cdots, x_{\delta_b}\}\, =_{\text{def}} \, b^{-1}(y)\, \subseteq \, X\, \subseteq \, \mathbf{P}^{n+1}.\]
Note that if $y$ is general then the points $x_j$ are distinct. Since $\delta_b+1 \le 2d - 2n + 1$, it follows from \cite[Theorem 2.5]{BCD} and the vanishing of $(Z-c\cdot\Delta)_*$ that
the finite set $Z_y$ spans a line $\, \ell_y \, \subseteq \,\mathbf{P}^{n+1}$. In a similar fashion, the generic fibre $a^{-1}(x)$ spans a line $_x \ell $. Furthermore, if $c\neq 0$ the point $y$ lies on $\ell_y$ and $x$ lies on $_x \ell$.
Write
\[X\cdot \ell_y\ =\ \sum_{i=1}^r a_i z_i \, ;\]
we denote by $m(z)$ the multiplicity of $z$ in $X\cdot \ell_y$, and we note that the $x_j$ appear among these points. Observe that $m(x_j)$ does not depend on $j$. Indeed, if $m(x_j)$ were to vary, picking out the $x_j$'s with the highest multiplicity for each $y\in X$ would define a non-trivial multisection of the generically finite map
\[b: Z\longrightarrow X,\] thereby violating irreducibility of $Z$. Moreover, $m(x_j)=1$ for every $j$. Indeed, if $c=0$ then $\delta_b\geq d-n$ and thus $2\delta_b>d$. Since $\sum m(x_j)\leq d$, we see that $m(x_j)=1$. If $c\neq 0$ then $\delta_b\geq d-n-1$ and $2\delta_b+1>d$. Since $y$ is in the support of $X\cdot \ell_y$, we get
\[1+\sum m(x_j)\leq d,\]
which shows $m(x_j)=1$ for all $j$.
\setlength{\parskip}{.125in minus .08in}
Let $\psi: X\dashrightarrow \mathbf{G}(1,n+1)$ be the rational map that associates to a generic $y\in X$ the line $\ell_y$ and denote by $\Gamma_\psi\subset X\times \mathbf{G}(1,n+1)$ the graph of $\psi$. Consider the incidence correspondence
$$I=\{(x,\ell): x\in \ell\}\, \subseteq \, X\times \mathbf{G}(1,n+1).$$
We can assume that $X$ does not contain any lines, and hence the projection $I\longrightarrow \mathbf{G}(1,n+1)$ is finite. Consider the cycle
\[A \ =_{\text{def}} \ \textnormal{pr}_{X\times X, *} \big( \text{pr}_{\mathbf{G}(1,n+1)\times X}^*\Gamma_\psi^t \cdot \text{pr}_{X\times \mathbf{G}(1,n+1)}^*I\, \big )\] on $X \times X$.
So the support of $A$ is the set $\{ (x, y) \mid x \in \ell_y\}$. The image $\ol{Z}$ of $Z$ in $X \times X$ and possibly the diagonal $\Delta$ are among the irreducible components of this cycle, and denoting by $R$ the remaining components we have:
\[ A \ = \ \ol{Z} \, + \, m \Delta \, + \, R. \]
By construction $R$ dominates the second factor, and we assert that it cannot dominate the first. Indeed, were $R$ to dominate both factors, it would define a correspondence violating the degree bounds \eqref{lowerbounddelta}.
Observe next that $A$ acts as the composition
$$[A]^* \, =\, [I]^*\circ\psi_*: H^{n,0}(X)\longrightarrow H^{\bullet}(\mathbf{G}(1,n+1))\longrightarrow H^{n,0}(X),$$
and this composition is zero since $H^{\bullet}(\mathbf{G}(1,n+1))$ is Hodge-Tate. Furthermore
$$[R]^*=0: H^{n,0}(X)\longrightarrow H^{n,0}(X),$$
since $R$ does not dominate the first factor.
Therefore $m=-c$, and in particular $c\leq 0$.
If $c\neq 0$, we contend that $c=-1$. Indeed, given a general point $(x,y)\in Z$, the lines $_x\ell$ and $\ell_y$ pass through $x$ and $y$, and thus
$$_x\ell=\ell_y.$$
Consequently, $_x\ell\cdot X=\ell_y\cdot X$ and by the statements above we see that $x$ and $y$ both appear with multiplicity $1$ in this intersection. Accordingly, the diagonal must appear with multiplicity $1$ in $ A$, and thus $c=-1$.
To finish the proof, we need the following:
\begin{claim}\label{claim}
Every irreducible component of (the support of) $R$ is of the form $x_0\times X$ for some $x_0\in X$.
\end{claim}
\begin{proof}
The proof proceeds exactly as the proof of Theorem A from \cite{MOA}. In brief, if the projection of an irreducible component of $R$ to the first factor is $S$, one shows that sections of the canonical bundle of a desingularization of $S$ do not birationally separate many points. This contradicts computations of Ein \cite{Ein} and Voisin \cite{Voisin} if $\dim S>0$.
\end{proof}
The claim implies that $R$ must be irreducible and reduced since lines meeting $X$ in any fixed zero-dimensional subscheme of $X$ of length 2 do not meet a general point of $X$. It follows that $\delta_b\geq d-2$, and by symmetry that $\delta_a\geq d-2$.\\
Theorem \ref{NewTheoremD'} is also deduced from the claim, as follows:
\noindent\emph{Proof of (i)}:
If $\delta_b=d-2$ we must have $c=-1$ and $R=x_0\times X$ for some $x_0\in X$. Then we have the equality
\[\ol{Z} \ = \ \textnormal{closure}\big \{(x,y)\mid x\neq x_0, y\neq x, x_0, \text{ and } x\in \overline{x_0y}\big \}.\]
Indeed, every irreducible component of the right hand side must dominate the second factor and the degree of the projection of the right hand side to the second factor is $d-2$.\\
\noindent\emph{Proof of (ii)}: If $\delta_b=d-1$ and $c=0$ we will show that $(a)$ is satisfied. There is a point $x_0\in X$ such that $R=x_0\times X$. Consider $(x,y)\in Z$ general and let
\[_x\ell\cap X=\{y_j \mid 0\leq j\leq d-1\}\text{ and }\ell_y\cap X=\{x_j\mid 0\leq j\leq \delta_a\},\]
where $x=x_1$ and $y=y_1$, so that $(x_1,y),\cdots, (x_{d-1},y)$ and $(x,y_1),\cdots, (x,y_{\delta_a})$ are in $Z$. Since $(x,y)\in Z$ was chosen generically, $b^{-1}(y_2)$ consists of $d-1$ points, one of which is $x$. Moreover, $b^{-1}(y_2)$ is contained in a line passing through $x_0$, and thus in the line through $x_0$ and $x$. It follows that
$$b^{-1}(y_2)=\{(x_i,y_2)\mid 1\leq i\leq d-1\}.$$
The same reasoning shows that
$$\{(x_i,y_j)\mid 1\leq i\leq d-1, 1\leq j\leq \delta_a\}\subset Z.$$
Let $\varphi_{1}: X\dashrightarrow \mathbf{P}^n$ be the projection from $x_0$ and consider the map
\begin{align*}\varphi_{2}: X&\dashrightarrow \mathbf{G}(1,n+1)\\
x\ &\longmapsto \ \ \ \ \ {_x\ell} .\end{align*}
The maps $\varphi_1$ and $\varphi_2$ are generically finite of degree $d-1$ and at least $\delta_a$, respectively. Considering degrees in the following diagram, we see that $\varphi_2$ has degree $\delta_a$, and that $(\varphi_1\times\varphi_2)(\ol{Z})\subset \mathbf{P}^n\times \text{Im}(\varphi_2)$ maps birationally to each factor.
\[
\begin{tikzcd}
&Z \ar[dl,swap,"a"] \ar[dr,"b"] \ar[dd,dashed]&\\
X \ar[dd,swap,dashed,"\varphi_1"] & & X \ar[dd,dashed,"\varphi_2"] \\
&(\varphi_1\times\varphi_2)(\ol{Z}) \ar[dl,"\text{pr}_1"] \ar[dr,swap,"\text{pr}_2"]&\\
\mathbf{P}^n&& \text{Im}(\varphi_2)
\end{tikzcd}
\]
Hence, the subvariety
\[(\varphi_1\times\varphi_2)(Z)\subset \mathbf{P}^n\times \text{Im}(\varphi_2)
\] is the graph of a birational isomorphism $\psi: \mathbf{P}^n\dashrightarrow \text{Im}(\varphi_2)$. Accordingly, $\ol{Z}$ is the fiber product of $\varphi_1$ and $\psi^{-1}\circ\varphi_2$.
Finally, if $\delta_b=d-1$ and $c\neq 0$, we show that ($b$) is satisfied. We must have $c=-1$ and $\deg(a)=d-1$. Consider the rational map
\begin{align*}\varphi: X&\dashrightarrow \mathbf{G}(1,n+1)\\
y \ &\longmapsto \ \ \ \ \ \ \ell_y.\end{align*}
Denoting by $U$ an open set on which $\varphi$ is defined, we contend that
\[\ol{Z}\, =\, \overline{\{(x,y)\in U^2: x\neq y, \varphi(x)=\varphi(y)\}}\subset X\times X.\]
Given a generic $(x,y)\in Z$, the line $\ell_y$ coincides with the line $_x\ell$ as they both pass through $x$ and $y$. Write
\[_x\ell\cap X=\ell_y\cap X=\{z_j: 1\leq j\leq d\},\]
where $z_1=x$ and $z_2=y$. For any $j>1$, the point $(x,z_j)$ is on $Z$, and $b^{-1}(z_j)$ is contained in $\ell_{z_j}={_x\ell}=\ell_y$, so that
\[b^{-1}(z_j)=\ell_y\cap X\setminus \{z_j\}\]
and
\[\{(z_i,z_j): i\neq j\}\subset Z.\]
It follows that $\ell_x= {_x\ell}$ for a generic $x\in X$ and that
\[\ol{Z}\, =\, \overline{\{(x,y)\in U^2: x\neq y, \varphi(x)=\varphi(y)\}}\subset X\times X.\]
\section{Proof of Theorem \ref{NewTheoremC}}
Theorem \ref{NewTheoremC} from the Introduction follows easily from the following result:
\begin{proposition}\label{PropositionE}
Let $X$ be a very general hyperelliptic curve of genus $g\geq 3$ and let $Z\subseteq X\times X$ be a hyperelliptic curve. Then the image of (the normalization of) $Z$ under the Abel-Jacobi map is geometrically degenerate, i.e.\! it generates a proper subtorus of $J(X)\times J(X)$.
\end{proposition}
\noindent Note that we do not assume that $Z$ is smooth; to say it is hyperelliptic means that its normalization is so.
We note that some genericity condition is necessary in Theorem \ref{NewTheoremC}. For example, given a hyperelliptic curve $X$, the graph of an automorphism $X\longrightarrow X$ which is neither the identity nor the hyperelliptic involution is a hyperelliptic curve sitting in $X\times X$. The fact that such graphs map to geometrically degenerate curves in $J(X)\times J(X)$ together with Proposition \ref{PropositionE} suggests the following:
\begin{question}
Given an arbitrary hyperelliptic curve $X$ (resp. hyperelliptic curves $X$ and $Y$), does every hyperelliptic curve $Z\subseteq X\times X$ (resp. $Z\subset X\times Y$) map to a geometrically degenerate curve in $J(X)\times J(X)$ (resp. $J(X)\times J(Y)$)?
\end{question}
Let us first show how Theorem \ref{NewTheoremC} follows from Proposition \ref{PropositionE}.
Consider a very general hyperelliptic curve $X$ and a hyperelliptic curve $Z\subset X\times X$ with normalization $Z'$. Abusing notation, we will call the images in $Z$ of the Weierstrass points of $Z'$ the Weierstrass points of $Z$. Such points map to Weierstrass points of $X$ under each projection. Consider a Weierstrass point $(x_0,y_0)\in Z$ and the embedding
\begin{align*}X\times X \ &\longrightarrow \ \ \ J(X)\times J(X)\\
(x,y) \ \ \ &\longmapsto \ ([x]-[x_0],[y]-[x_0]).\end{align*}
By Proposition \ref{PropositionE}, a translate of the image of $Z$ in $J(X)\times J(X)$ is contained in an abelian subvariety of $J(X)\times J(X)$. Since the image of $Z$ passes through
\[\tau\ =_{\text{def}}\ (0,[y_0]-[x_0])\in J(X)[2]\times J(X)[2],\] it is in fact contained in $\tau+A$ for some proper abelian subvariety $A\subset J(X)\times J(X)$. Moreover, since $X$ is very general, the endomorphism ring of the Jacobian of $X$ is $\mathbf{Z}$, and thus there are integers $m,n\in \mathbf{Z}$, $m\geq 0$, such that $Z$ is contained in the image of
\begin{align*} J(X)&\longrightarrow J(X)\times J(X)\\
x \ \ & \longmapsto \ (mx,nx)+\tau.
\end{align*}
Hence,
\[Z\subset \{(x,x')\in X\times X: nx+x_0=mx'+y_0\in J(X) \}\subset X\times X.\]
Equivalently, $Z$ is contained in the fiber of the following map over $y_0-x_0$
\begin{align*}X\times X&\longrightarrow \ \ J(X)\\
(x,x')&\longmapsto mx-nx'.\end{align*}
Considering the differential of the map above and the fact that the Gauss map of $X$ embedded in its Jacobian has degree two, it is easy to see that the only possibility is $n=\pm m$ and $x_0=y_0$.
We have thus shown that $Z$ is contained either in the diagonal of $J(X)$ or in the anti-diagonal of $J(X)$. This completes the proof as the diagonal of $J(X)$ intersects $X\times X$ along the diagonal of $X$ and the anti-diagonal of $J(X)$ intersects $X\times X$ along the graph of the hyperelliptic involution of $X$.
Finally we give the proof of Proposition \ref{PropositionE}.
\begin{proof}
Consider $\mathcal{X}/S$, a locally complete family of hyperelliptic curves of genus $g$ and
\[\mathcal{Z}\subset J(\mathcal{X}/S)\times_SJ(\mathcal{X}/S)\]
a family of hyperelliptic curves such that for very general $s\in S$, the curve $\mathcal{Z}_s$ generates $J(\mathcal{X}_s)\times J(\mathcal{X}_s)$. The idea is to arrive at a contradiction to the observation of Pirola \cite{Pirola} that hyperelliptic curves on abelian varieties are rigid up to translation.
Specifically, specialize to loci $S_\lambda\subset S$ along which $J(\mathcal{X}_s)$ is isogenous to $\mathcal{A}_s^\lambda\times E$, where $E$ is a fixed elliptic curve and $\mathcal{A}^\lambda\longrightarrow S_\lambda$ is a family of abelian $(g-1)$-folds. For each $\lambda$, we have a map
\[p_\lambda: \mathcal{Z}_s\longrightarrow E\times E,\]
which is the composition of the inclusion of $\mathcal{Z}_s$ in $J(\mathcal{X}_s)\times J(\mathcal{X}_s)$ with the isogeny and the projection to the $E\times E$ factor. We assert:
\begin{claim}
The image of $\mathcal{Z}_s$ in $E\times E$ varies with $s\in S_\lambda$.
\end{claim}
\noindent But as we noted this is impossible thanks to \cite{Pirola}, completing the proof.
The Claim is established along the lines of \cite{Voisin.Abel.Var} and \cite{Martin}. Denoting by $\mathcal{G}/S$ the relative Grassmannian of $(g-1)$-planes in $T_{J(\mathcal{X}_s),0}$, \cite{CP} proves the density of the set:
\[\{T_{\mathcal{A}_s^\lambda,0}\, \subseteq \, T_{J(\mathcal{X}_s),0}\mid s\in S_\lambda\}\ \subseteq \ \mathcal{G}.\footnote{In fact, \cite{CP} shows that the locus
\[\{T_{E,0}\subset T_{J(\mathcal{X}_s),0} \mid s\in S_\lambda\}\] is dense in the relative Grassmannian of lines in $T_{J(\mathcal{X}_s),0}$. However, one can use the fact that Jacobians are isomorphic to their duals to get the stated assertion.}\]
By a density argument, one can construct families of smooth curves $\widetilde{\mathcal{Z}}\longrightarrow \mathcal{G}'$ and $\widetilde{\mathcal{Z}}'\longrightarrow \mathcal{G}'$
over a generically finite cover $\mathcal{G}'$ of $\mathcal{G}$ and a morphism
\[p: \widetilde{\mathcal{Z}}\longrightarrow \widetilde{\mathcal{Z}}'\]
satisfying the following:
\begin{itemize}
\item Denoting by $\pi$ the map $\mathcal{G}'\longrightarrow S$, the curve $\widetilde{\mathcal{Z}}_s$ is the normalization of $\mathcal{Z}_{\pi(s)}$.
\item For $s\in \mathcal{G}'$ such that $\pi(s)\in S_\lambda\subset S$, the maps
\[p_\lambda: \mathcal{Z}_s\longrightarrow p_\lambda(\mathcal{Z}_s)\]
and
\[p: \widetilde{\mathcal{Z}}_s\longrightarrow \widetilde{\mathcal{Z}}_s'\]
agree birationally.
\end{itemize}
Now consider the composition
\begin{equation}\label{picmap1}
\text{Pic}^0(J(\mathcal{X}_s)\times J(\mathcal{X}_s))\longrightarrow \text{Pic}^0(\widetilde{\mathcal{Z}}_s)\xrightarrow{p_*}\text{Pic}^0(\widetilde{\mathcal{Z}}_s'),\end{equation}
where the first map is the pullback by the composition
\[\widetilde{\mathcal{Z}}_s\longrightarrow \mathcal{Z}_s\longrightarrow J(\mathcal{X}_s)\times J(\mathcal{X}_s).\]
One easily checks that the composition \eqref{picmap1} cannot be zero. Since $J(\mathcal{X}_s)$ is simple for generic $s\in \mathcal{G}'$, we deduce that the abelian variety $\text{Pic}^0(\widetilde{\mathcal{Z}}_s')$ contains an abelian subvariety isogenous to $J(\mathcal{X}_s)$ for all $s$ in an open set $U\subset \mathcal{G}'$.\\
Finally, consider $\lambda$ such that $\pi^{-1}(S_\lambda)\cap U\neq\emptyset$. If $p_\lambda(\mathcal{Z}_s)$ does not vary with $s\in S_\lambda$, the fixed abelian variety $\text{Pic}^0(\widetilde{\mathcal{Z}}_s')$ contains an abelian subvariety isogenous to $J(\mathcal{X}_s)$ for all $s\in S_\lambda$. This cannot be since the family $J(\mathcal{X}_{S_\lambda}/S_\lambda)$ is not isotrivial.
\end{proof}
|
{
"arxiv_id": "2302.14196",
"language": "en",
"timestamp": "2023-03-01T02:04:29",
"url": "https://arxiv.org/abs/2302.14196",
"yymm": "2302"
} | \section{Background and Motivations}
\label{sec:background}
\subsection{Video Streaming}
Streaming is a technology for delivering content from a server to clients over the internet without requiring a download. Multimedia streaming is one of the most popular and successful streaming services, since it allows the user to watch a video or listen to music almost immediately, without waiting for the file to be downloaded completely. Unlike a file transfer, which keeps the file on the device until the user manually deletes it, streaming data is automatically discarded after the user consumes it.
Video streaming requires a relatively fast internet connection. For services like Hulu, YouTube, and Netflix, 2-3 Mbps are required for SD, 5-8 Mbps for HD, and 12-25 Mbps for UHD. Live streaming uses the same techniques, but it is specifically designed for real-time internet content delivery. Live streaming is popular when it comes to viewing and interacting with live concert shows, gaming broadcasts, and special events or sports.
User Datagram Protocol (UDP) is preferred over Transmission Control Protocol (TCP) for video streaming. Unlike TCP, UDP does not exchange messages to open a connection before transmitting data, and it does not ensure that all data packets arrive or arrive in order. As a result, transmitting data does not take as long as it does via TCP. Even if some packets are lost in transit, so many packets are involved in keeping a video stream playing that the user is unlikely to notice the missing ones.
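As a minimal sketch of this connectionless behaviour, using Python's standard socket API (addresses and payload are placeholders):
\begin{minted}[frame=lines, breaklines, fontsize=\scriptsize]{python}
import socket

# Receiver: bind first so the datagram sent below is not dropped.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 5000))

# Sender: no handshake before transmitting, no acknowledgement afterwards.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"video-frame-fragment", ("127.0.0.1", 5000))

# Datagrams may arrive out of order or not at all; a streaming client
# simply keeps playing with whatever arrives in time.
data, addr = rx.recvfrom(2048)
print(len(data), "bytes from", addr)
\end{minted}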
\subsection{Adaptive Video Streaming}
Traditional progressive video streaming is simply a single video file being streamed over the internet; the video can be stretched or shrunk to fit different screen resolutions, but regardless of the device playing it, the video file is always the same. This technique brings two main problems: (1) a device with a screen resolution higher than the video resolution will likely encounter pixelation; (2) a device with a poor internet connection, which cannot download the video stream quickly enough, will need to pause to receive more video data into the buffer (also referred to as {\em rebuffering}). Either situation makes for a poor video streaming experience for the end user.
Adaptive streaming (also known as {\em Adaptive Bitrate Streaming}), in contrast, is a technique designed to deliver multimedia content to each user in the most efficient way and at the highest possible quality. Specifically, adaptive streaming requires the video streaming server to create a different video file for each target screen size, and it lowers the video quality for devices with slow internet connections.
\subsection{Wireless Network and Access Point}
A wireless network allows devices to connect to the internet without any cables. Access points amplify WiFi signals, so a device can be far from a router yet still be connected to the network. Wireless networks are usually realized and administered using radio communications. Technologies such as Wireless Local Area Networks (WLAN) and 4G target high-data-rate applications, which fits well with the future requirements of wireless video streaming applications.
An access point receives data over wired Ethernet and converts it to a 2.4 GHz or 5 GHz wireless signal, sending and receiving wireless traffic to and from nearby wireless clients. An access point differs from a wireless router in that it has no firewall functions and does not protect the local network against threats from the Internet.
When an access point is set up over a wired connection, it functions as a WiFi base station or, in a mesh WiFi network, as a root access point. With a wired connection to the Internet, an access point can serve as the WiFi base station or root access point for up to four other access points. These additional access points function as WiFi repeaters or, in a mesh WiFi network, as extender access points, and they connect to the Internet through the WiFi base station or root access point over a WiFi connection.
\subsection{ns-3 Network Simulator}
ns-3 \cite{ns-3doc} is a discrete-event network simulator for internet systems, and has been developed to provide an open, extensible network simulation platform, for networking research and education.
ns-3 provides models of how packet data networks work and perform, and provides a simulation engine for users to conduct simulation experiments. Some of the reasons to use ns-3 include to perform studies that are more difficult or not possible to perform with real systems, to study system behavior in a highly controlled, reproducible environment, and to learn about how networks work. Users will note that the available model set in ns-3 focuses on modeling how Internet protocols and networks work, but ns-3 is not limited to Internet systems; several users are using ns-3 to model non-Internet-based systems.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we give an overview of video streaming services and wireless networks. The combination of the two techniques can be applied in a variety of fields. We implemented an adaptive video streaming application in the ns-3 simulator, which can deliver video at different bitrates based on the internet speeds of connected clients. We tested our implementation in various network scenarios, from a simple P2P network with only one client to wireless networks with multiple servers and multiple clients. The results validate the accuracy and efficiency of our adaptive video streaming application and wireless network simulation.
We noticed that mobile devices lose their connection to the server when they move out of the range of the wireless signal. However, we did not observe the transmission rate dropping as the mobile devices moved away from the server, which calls for further investigation into how the different modes work in the ns-3 wireless simulator. It is also worth noting that the simulation assumes the most ideal circumstances, without any obstacles, electromagnetic interference, or air loss, which could make our simulation results differ from real-life scenarios. Therefore, future research should be conducted in more realistic settings to produce more realistic results.
\section{Introduction}
\label{sec:introduction}
Wireless networks are very common in modern life, deployed not only in big companies but also in individual homes. The evolution of the wireless local area network (WLAN) provides fast and convenient network connections. Live video streaming is widely used nowadays, and the number of applications is continuously growing. For example, people use smartphones to watch video streams for entertainment. In the field of security systems, the installation of surveillance cameras can be flexible and economical if wireless networks provide the connections. During search and rescue operations, real-time audio and video communications in a wireless ad-hoc network can save lives.
Live streaming requires a steady stream of information and delivery of packets by a deadline. However, wireless networks have difficulty providing such services reliably, as the range of wireless home networks is typically limited and there is intermittent radio interference from external sources such as microwave ovens or cordless phones. For a mobile node, multi-path fading and shadowing may further increase the variability of the link capacity and of the transmission error rate. The ability to transport multimedia content over a variety of networks with different channel conditions and bandwidth capacities, under various quality-of-service (QoS) requirements, is considered a fundamental challenge for future communication systems. The end-to-end performance of a live video streaming service is affected jointly by video coding, reliable transport, and wireless resource allocation. Therefore, performance-aware adaptation techniques have become a hot research topic, aiming for optimal and dynamic network configurations at all times and under all network conditions.
The paper provides an overview of the video streaming service over wireless networks and presents a simulation platform built on the ns-3 network simulator. The structure of this paper is organized as follows. Section \ref{sec:background} describes the background and motivations of the project. Section \ref{sec:related-work} describes related work. Section \ref{sec:problem} discusses the problem that is studied in the project. Section \ref{sec:method} illustrates the proposed simulation platform. Section \ref{sec:results} presents the experimental results and analysis. Section \ref{sec:conclusion} concludes the paper.
\section{Methodology}
\label{sec:method}
The simulation platform includes three main parts: the video streaming server, the video streaming client, and the simulated point-to-point and wireless networks.
\subsection{Video Streaming Server}
The main task of the video streaming server is to transmit the video data to the client(s). A maximum packet size is set for video data transmission, 1400 bytes in our case. Nevertheless, the size of a video frame can be greater than the maximum packet size; e.g., one high-definition frame is approximately 500 KB. Hence, the server must break such a frame into several packets and deliver each of them to the client (a minimal sketch of this fragmentation follows Algorithm \ref{alg:handlerecv} below).
Since the server is capable of delivering video files to multiple clients, it needs to handle connections from different clients and make sure that each video file is sent to the corresponding client. Algorithm \ref{alg:handlerecv} demonstrates the behavior of the server when handling packets received from a client. The server first receives a socket from the client and obtains the sender's IP address from it. Then the server looks up the hash table of clients to check whether the IP address already exists. If not, a new client is trying to connect, and the server adds it to the hash table (lines 4--9). Otherwise, the server treats the packet as a request from the client to either raise or lower the video resolution (lines 10--14).
\begin{algorithm}
\caption{Server handles packet receiving}
\label{alg:handlerecv}
\begin{algorithmic}[1]
\Function{HandleReceive}{}
\State $socket\gets ReceiveSocket ()$
\State $ipAddr \gets socket.GetSendingIPAddress ()$
\If {$clientTable.Find (ipAddr) == 0$}
\State $client = CreateNewClient ()$
\State $client.sentNumber = 0$
\State $client.videoLevel = 3$
\State $client.ipAddress = ipAddr$
\State $client.scheduleSendEvent ()$
\Else
\State $client = clientTable.Get(ipAddr)$
\State $buffer \gets packet.ReadData()$
\State $newLevel \gets buffer.ReadValue()$
\State $client.UpdateLevel(newLevel)$
\EndIf
\EndFunction
\end{algorithmic}
\end{algorithm}
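The fragmentation itself reduces to simple arithmetic; a minimal sketch (the helper name is chosen for illustration):
\begin{minted}[frame=lines, breaklines, fontsize=\scriptsize]{python}
MAX_PACKET = 1400  # maximum payload per packet, as configured above

def fragment(frame_size, max_packet=MAX_PACKET):
    """Split one video frame into payload sizes of at most max_packet bytes."""
    full, rest = divmod(frame_size, max_packet)
    return [max_packet] * full + ([rest] if rest else [])

# A 22500-byte frame yields 16 packets of 1400 bytes plus one of 100 bytes.
print(fragment(22500))
\end{minted}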
\subsection{Video Streaming Client}
The video streaming client receives the video frames from the server and displays them. The client contains a playback buffer storing the video frames, and it reads frames from the buffer every second. If the buffer does not contain enough frames, the client pauses the video until it has received enough frames from the remote server, and then resumes playback.
The rate controller is based on the size of the playback buffer at the client side, and Algorithm \ref{alg:readbuffer} shows how the client manages the playback buffer. First the client checks whether the buffer holds enough frames to play the video. If not, the client decides whether to request a lower video quality (lines 9--14) or to stop the application (lines 4--7) by comparing the current buffer size with the last recorded size. If the video can be played as expected, the client also examines whether the buffer stores substantially more frames than required for one second, and if so decides whether to request a higher video quality (a plain-Python sketch of this logic follows Algorithm \ref{alg:readbuffer}).
\begin{algorithm}
\caption{Client reads data from buffer}
\label{alg:readbuffer}
\begin{algorithmic}[1]
\Function{ReadBuffer}{}
\If {$curBufferSize < frameRate$}
\If {$lastBufferSize == curBufferSize$}
\State $stopCounter += 1$
\If {$stopCounter \geq 3$}
\State $StopClient()$
\State \Return
\EndIf
\Else
\State $stopCounter = 0$
\State $rebufferCounter += 1$
\If {$rebufferCounter \geq 3$}
\State $RequestLowerQuality()$
\EndIf
\EndIf
\Else
\State $stopCounter = 0$
\State $rebufferCounter = 0$
\State $PlayFromBuffer()$
\If {$curBufferSize > 5 * frameRate$}
\State $RequestHigherQuality()$
\EndIf
\EndIf
\State $ScheduleBufferEvent()$
\EndFunction
\end{algorithmic}
\end{algorithm}
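A plain-Python transcription of Algorithm \ref{alg:readbuffer} is sketched below, assuming the function is scheduled once per second of playback (the state handling and return values are illustrative):
\begin{minted}[frame=lines, breaklines, fontsize=\scriptsize]{python}
def read_buffer(state, frame_rate):
    """One scheduled buffer check; `state` keeps the counters between calls."""
    cur = state["buffer_frames"]
    if cur < frame_rate:                       # fewer frames than one second needs
        if cur == state["last_buffer"]:        # buffer stalled since last check
            state["stop_counter"] += 1
            action = "stop" if state["stop_counter"] >= 3 else "wait"
        else:
            state["stop_counter"] = 0
            state["rebuffer_counter"] += 1
            action = ("request_lower_quality"
                      if state["rebuffer_counter"] >= 3 else "rebuffer")
    else:
        state["stop_counter"] = state["rebuffer_counter"] = 0
        state["buffer_frames"] -= frame_rate   # play one second of video
        action = ("request_higher_quality"
                  if state["buffer_frames"] > 5 * frame_rate else "play")
    state["last_buffer"] = cur
    return action
\end{minted}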
\subsection{Wireless Network}
The wireless network used in the simulation follows the IEEE 802.11 standard. Algorithm \ref{alg:wireless} shows how we simulate the wireless network. Two helper classes, \texttt{YansWifiChannelHelper} and \texttt{YansWifiPhyHelper}, are used to set up the \texttt{Channel} and \texttt{WifiPhy} objects. We keep the default propagation delay model (\texttt{ConstantSpeedPropagationDelayModel}), the default loss model (\texttt{LogDistancePropagationLossModel}), the default error rate model (\texttt{NistErrorRateModel}), and the default \texttt{PHY}-layer configuration and channel model. We then configure the MAC type and its basic settings: the helper classes \texttt{WifiHelper} and \texttt{WifiMacHelper} set up \texttt{WifiMac} and install a \texttt{NetDevice} in each node, and the \texttt{SetRemoteStationManager} method tells the helper which rate control algorithm to use; here we choose the AARF algorithm. We create an IEEE 802.11 service set identifier (SSID) object to set the ``SSID'' attribute of the MAC layer. The MAC created by the helper will not send probe requests, i.e., it does not actively scan for access points and instead associates with the default AP SSID. SSID technology can divide a wireless local area network into several sub-networks, each requiring independent authentication; only users who pass the authentication can enter the corresponding sub-network, which prevents unauthorized users from entering the network.
Next we set up the mobility models: we want the STA nodes to be mobile, roaming within a bounding box, while the AP node remains stationary. ns-3 uses a Cartesian coordinate system to identify node positions, and the helper class is \texttt{MobilityHelper}. The mobility configuration of a mobile node consists of two parts: the initial position distribution and the subsequent movement trajectory model. As the initial position allocator we use \texttt{GridPositionAllocator}, which places the nodes at equal spacing in a two-dimensional Cartesian grid according to the configured row and column parameters. As the movement trajectory model we use \texttt{RandomWalk2dMobilityModel}, in which a node moves in a random direction at a random speed (by default in the range 2--4 m/s) inside a rectangular area of specified size. The AP node instead receives the fixed-position model \texttt{ConstantPositionMobilityModel}, with two-dimensional coordinates $(0, 0)$.
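As a concrete illustration of this mobility behaviour, a minimal random-walk update for one STA node (the $50 \times 50$ m bounding box and one-second step are assumed example values):
\begin{minted}[frame=lines, breaklines, fontsize=\scriptsize]{python}
import math
import random

def walk_step(pos, bounds=(0.0, 50.0, 0.0, 50.0), dt=1.0):
    """Random speed in [2, 4] m/s and random direction, clamped to the box."""
    speed = random.uniform(2.0, 4.0)
    theta = random.uniform(0.0, 2.0 * math.pi)
    x = min(max(pos[0] + speed * dt * math.cos(theta), bounds[0]), bounds[1])
    y = min(max(pos[1] + speed * dt * math.sin(theta), bounds[2]), bounds[3])
    return (x, y)

pos = (25.0, 25.0)            # an STA starting position; the AP stays at (0, 0)
for _ in range(5):
    pos = walk_step(pos)
\end{minted}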
Then we install server and client programs, and set properties for them, e.g., \texttt{MaxFrame = 500, Interval = 0.01s, PacketSize = 1400 bytes}.
We visualized the network topology with PyViz, the live simulation visualizer integrated with ns-3, which draws the simulated nodes and the traffic flowing between them in an interactive Python-based GUI as the simulation runs.
\begin{algorithm}
\caption{Simulate wireless network}
\label{alg:wireless}
\begin{algorithmic}[1]
\Function{WirelessSimulation}{}
\State $wifiStaNodes = CreateNodes(numSta)$
\State $wifiApNodes = CreateNodes(numAp)$
\State $YansWifiChannelHelper\gets Default ()$
\State $YansWifiPhyHelper \gets Default ()$
\State $WifiHelper \gets AarfWifiManager()$
\State $WifiMacHelper \gets SetSsid()$
\State $wifi.Install (phy, mac, $
\State $wifiStaNodes, wifiApNodes)$
\State $wifiApNodes \gets SetMobilityModel() $
\State $setBaseAddress()$
\State $wifiApNodes.install(serverApps)$
\State $wifiStaNodes.install(clientApps)$
\State $PopulateRoutingTables ()$
\EndFunction
\end{algorithmic}
\end{algorithm}
\subsection{Testing Method}
\begin{figure*}[ht]
\centering
\begin{tabular}{c c}
\includegraphics[height=3cm]{./Figures/case1.png} & \quad
\includegraphics[height=4cm]{./Figures/case2.png} \\
(a) & (b) \\
\includegraphics[height=4cm]{./Figures/case3.png} & \quad
\includegraphics[height=4cm]{./Figures/case4.png} \\
(c) & (d) \\
\end{tabular}
\centering
\caption{Different network environments to be tested with the simulation, from wired P2P network to wireless home network.}
\label{fig:paradigm}
\end{figure*}
To verify the developed simulation platform, we will set the link speed and the size of the video packets. By collecting the results from the simulation, we can validate the accuracy and performance of the model by computing the transmission rate and the effective maximum throughput.
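A minimal sketch of this throughput computation (the example numbers are placeholders):
\begin{minted}[frame=lines, breaklines, fontsize=\scriptsize]{python}
def throughput_mbps(total_bytes, t_start, t_end):
    """Application-level throughput: payload bits over elapsed seconds."""
    return total_bytes * 8 / (t_end - t_start) / 1e6

# e.g., 500 frames of roughly 60 KB delivered between t = 1 s and t = 6 s
print(round(throughput_mbps(500 * 60_000, 1.0, 6.0), 1), "Mbps")
\end{minted}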
We will run our simulation on four scenarios, which are shown in Fig. \ref{fig:paradigm}.
\begin{enumerate}
\item[(a)] Wired network with the point-to-point link between one server and one client.
\item[(b)] Wired network with point-to-point connections between one server and two clients.
\item[(c)] Wireless network with one access point (AP) server and one mobile client.
\item[(d)] Wireless network with three AP servers and three mobile clients.
\end{enumerate}
The entire simulation will be performed in a virtual machine whose guest OS is Ubuntu 18.04. The software used in the project includes ns-3 (version 3.30), Python 3.6, and the PyViz GUI.
\section{Problem Definition and Thesis Statement}
\label{sec:problem}
\subsection{Problem Definition}
The project is to design a video streaming application with an adaptive rate controller on top of the UDP protocol. A streaming service must balance two objectives: maximizing video quality and minimizing rebuffering. It is easy to meet either objective on its own. To maximize video quality, a service could just stream at the maximum video rate all the time; of course, this would risk extensive rebuffering. Conversely, to minimize rebuffering, the service could just stream at the minimum video rate all the time, which would lead to low video quality. Our approach is to dynamically change the video rate based on the link speed and the frame buffer of the client, and thus ensure the best viewing experience for the user.
\subsection{Project Objectives}
\begin{figure}[t]
\centering
\includegraphics[scale=1.0]{./Figures/Network.PNG}
\caption{Two different network topologies for multiple video streams sharing
a single-hop wireless network.}
\label{fig:topology}
\end{figure}
The project is designed to provide a simulation of video streaming over wireless home networks in different scenarios of increasing complexity. We first simulate the simple case of delivering a single video stream over a single wireless link, and then the case of sharing a wireless multi-access channel among multiple video streams. We use different video files, such as 480p, 720p, and 1080p, for different screen resolutions and network speeds. The video streaming task is performed over simulated 802.11a home networks. The streaming application transmits video frames in UDP packets and contains an application-layer rate controller that can switch between different versions of the video bitstream based on feedback from the client.
The project aims to simulate the two kinds of network topology for multiple video streams sharing a single-hop wireless network \cite{zhu2007video} shown in Figure \ref{fig:topology}:
\begin{itemize}
\item All streams originate from the same wireless node.
\item The video source nodes are distributed.
\end{itemize}
\section{Related Works}
\label{sec:related-work}
Video streaming is considered one of the most prevalent technologies and has been an active research topic for the past several years.
\cite{zhu2007video} discusses different wireless streaming scenarios, ranging from the simple case of delivering a single video stream over a single wireless link, to sharing a wireless multi-access channel among multiple video streams, to the general case of multiple streams sharing a mesh network. It also introduces two types of network topology for multiple video streams sharing a single-hop wireless network (See Section \ref{sec:problem} for details). By analyzing the results from the simulation, they validate the accuracy and performance of the model by computing the transmission rate and effective maximum throughput.
\cite{fouda2014real} presents a ns3-based real-time video streaming emulation platform for evaluating and optimizing the performance of the LTE networks. Three sample test cases are studied in the paper to verify the developed platform, which are video client mobility, streaming video to multiple clients and handover over the HIL platform. Moreover, it also evaluates multiple streaming protocols such as UDP, RTP, and Dynamic Adaptive Streaming over HTTP (DASH) with the emulation platform.
\cite{huang_buffer-based_2014} introduces a buffer-based approach to rate adaptation that tries to address the challenge of estimating future capacity. The method uses {\em only} the buffer to choose a video rate, and then asks {\em when} capacity estimation is needed. There are usually two separate phases of operation in a video streaming service: a {\em steady-state} phase when the buffer has been built up, and a {\em startup} phase when the buffer is still growing from empty. The paper tests the viability of the approach through a series of experiments spanning millions of real users in a commercial service, and reveals that capacity estimation is unnecessary in steady state; however, simple capacity estimation is crucial during the startup phase.
\cite{liu_when_2020} proposes an integration of wireless multimedia systems and deep learning, which starts by decomposing a wireless multimedia system into three components: end-users, the network environment, and servers. The paper then presents two cases, deep learning-based QoS/QoE prediction and bitrate adjustment. In the former case, an end-to-end, unified DNN architecture is devised to fuse different types of multimedia data and predict the QoS/QoE value. In the latter case, a deep reinforcement learning-based framework is designed to adjust the bitrate according to the viewer's interests. Evaluations on a real wireless dataset show that the deep learning approach can significantly improve the video QoE in terms of average bitrate, rebuffering time, and bitrate variation.
\section{Results}
\label{sec:results}
\subsection{Wireless Network Topology}
We simulated all of the scenarios mentioned previously; the visualization is shown in Fig. \ref{fig:results}. We observe comparable application-level throughput in the four different network environments when the internet speed is configured in a similar way.
\subsection{Video Stream Server}
In the experiment, we use a text file that stores a sequence of video frame sizes to stand in for the video file. A sample text file is given below; the number on each line denotes a frame size in bytes, so the first line of the sample means that the first frame of the video is 22500 bytes.
\begin{minted}[frame=lines, breaklines, fontsize=\scriptsize]{text}
22500
1027
1027
1251
\end{minted}
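Loading such a trace is a one-liner; a minimal sketch (the filename is hypothetical):
\begin{minted}[frame=lines, breaklines, fontsize=\scriptsize]{python}
# One integer per line, each giving a frame size in bytes.
with open("frame_sizes.txt") as f:   # hypothetical filename
    frame_sizes = [int(line) for line in f if line.strip()]
\end{minted}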
The video streaming server is able to break a large frame into several packets and deliver them to the client. The output below corresponds to the first frame in the sample text file.
\begin{minted}[frame=lines, breaklines, fontsize=\scriptsize]{text}
At time 1s server sent 1400 bytes to 10.1.1.2 port 49153
At time 1s server sent 1400 bytes to 10.1.1.2 port 49153
At time 1s server sent 1400 bytes to 10.1.1.2 port 49153
At time 1s server sent 1400 bytes to 10.1.1.2 port 49153
At time 1s server sent 1400 bytes to 10.1.1.2 port 49153
At time 1s server sent 1400 bytes to 10.1.1.2 port 49153
At time 1s server sent 1400 bytes to 10.1.1.2 port 49153
At time 1s server sent 1400 bytes to 10.1.1.2 port 49153
At time 1s server sent 1400 bytes to 10.1.1.2 port 49153
At time 1s server sent 1400 bytes to 10.1.1.2 port 49153
At time 1s server sent 1400 bytes to 10.1.1.2 port 49153
At time 1s server sent 1400 bytes to 10.1.1.2 port 49153
At time 1s server sent 1400 bytes to 10.1.1.2 port 49153
At time 1s server sent 1400 bytes to 10.1.1.2 port 49153
At time 1s server sent 1400 bytes to 10.1.1.2 port 49153
At time 1s server sent 1400 bytes to 10.1.1.2 port 49153
At time 1s server sent 100 bytes to 10.1.1.2 port 49153
\end{minted}
\subsection{Video Stream Client}
The video stream client is required to monitor the status of the playback buffer, and handle different conditions of the buffer.
\begin{itemize}
\item The case of requesting a lower video quality:
\begin{minted}[frame=lines, breaklines, fontsize=\scriptsize]{text}
(......)
At time 3.5 s: Not enough frames in the buffer, rebuffering!
At time 3.61909s client received frame 15 and 56408 bytes from 10.1.1.1 port 5000
At time 3.85341s client received frame 16 and 57350 bytes from 10.1.1.1 port 5000
At time 4.08793s client received frame 17 and 57400 bytes from 10.1.1.1 port 5000
At time 4.33389s client received frame 18 and 60200 bytes from 10.1.1.1 port 5000
At time 4.5 s: Not enough frames in the buffer, rebuffering!
At time 4.58276s client received frame 19 and 60898 bytes from 10.1.1.1 port 5000
At time 4.7944s client received frame 20 and 51800 bytes from 10.1.1.1 port 5000
At time 5.05752s client received frame 21 and 64400 bytes from 10.1.1.1 port 5000
At time 5.27239s client received frame 22 and 52577 bytes from 10.1.1.1 port 5000
At time 5.5 s: Not enough frames in the buffer, rebuffering!
At time 5.50119s: Lower the video quality level!
(......)
\end{minted}
\item The case of requesting a higher video quality:
\begin{minted}[frame=lines, breaklines, fontsize=\scriptsize]{text}
(......)
At time 2.5349s client received frame 120 and 277397 bytes from 10.1.1.1 port 5000
At time 2.55769s client received frame 121 and 278819 bytes from 10.1.1.1 port 5000
At time 2.58043s client received frame 122 and 278288 bytes from 10.1.1.1 port 5000
At time 2.60336s client received frame 123 and 280667 bytes from 10.1.1.1 port 5000
At time 2.61411s client received frame 124 and 131446 bytes from 10.1.1.1 port 5000
At time 2.61411s: Increase the video quality level to 4
(......)
At time 4.11853s client received frame 245 and 120400 bytes from 10.1.1.1 port 5000
At time 4.1286s client received frame 246 and 123200 bytes from 10.1.1.1 port 5000
At time 4.13844s client received frame 247 and 120400 bytes from 10.1.1.1 port 5000
At time 4.14839s client received frame 248 and 121800 bytes from 10.1.1.1 port 5000
At time 4.15835s client received frame 249 and 121800 bytes from 10.1.1.1 port 5000
At time 4.16841s client received frame 250 and 123200 bytes from 10.1.1.1 port 5000
At time 4.16841s: Increase the video quality level to 5
\end{minted}
\end{itemize}
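The control loop behind these traces can be sketched as follows (a minimal illustration of ours; the threshold values, the three-rebuffer back-off rule, and the function name are assumptions for exposition, not the simulation's actual parameters):
\begin{minted}[frame=lines, breaklines, fontsize=\scriptsize]{python}
# Illustrative sketch of the client's buffer-driven quality adaptation.
# Thresholds and the back-off rule are assumed values for exposition.
LOW_BUFFER = 5     # frames: below this, playback rebuffers
HIGH_BUFFER = 30   # frames: above this, request a higher quality
MAX_QUALITY = 5

def on_buffer_check(buffered_frames, quality, rebuffer_count):
    """Called periodically; returns the updated (quality, rebuffer_count)."""
    if buffered_frames < LOW_BUFFER:
        print("Not enough frames in the buffer, rebuffering!")
        rebuffer_count += 1
        if rebuffer_count >= 3:  # repeated rebuffering: back off
            print("Lower the video quality level!")
            quality = max(1, quality - 1)
            rebuffer_count = 0
    elif buffered_frames > HIGH_BUFFER and quality < MAX_QUALITY:
        quality += 1
        print("Increase the video quality level to %d" % quality)
        rebuffer_count = 0
    return quality, rebuffer_count
\end{minted}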
|
{
"arxiv_id": "2302.14144",
"language": "en",
"timestamp": "2023-03-01T02:02:28",
"url": "https://arxiv.org/abs/2302.14144",
"yymm": "2302"
} | \section{Introduction}
The Standard Model (SM) has been established as the consistent description of particle physics at the electroweak (EW) scale. In particular, the Higgs mechanism provides the necessary ingredient to understand the origin of masses of all known fundamental particles. The precision of this picture is currently being tested in large part at the Large Hadron Collider (LHC) and will guide the ongoing efforts to understand in greater detail the role of the Higgs boson in nature and its possible connection to physics beyond the SM.
Although several couplings of the SM Higgs are well understood from an experimental standpoint~\cite{ATLAS:2022vkf,CMS:2022dwd}, the current precision of its couplings to the second-generation fermions remains rather limited. In particular, the decay of the Higgs boson to muon pairs may still deviate from the SM expectation by more than a factor of two~\cite{ATLAS:2020fzp}. On the other hand, the muon anomalous magnetic moment has raised an intriguing puzzle: its recently measured value was found to be more than four standard deviations away from the SM prediction~\cite{Muong-2:2021ojo,Aoyama:2020ynm}. The latter observation has garnered much attention in the literature as a possible hint for new particles in nature. The possibilities range from new gauge forces and fermionic matter to extensions of the Higgs sector, see Refs.~\cite{Czarnecki:2001pv,Freitas:2014pua,Lindner:2016bgg} and references therein.
Among the proposed explanations for the muon anomalous magnetic moment puzzle, those which lead to a \textit{chiral enhancement} in the muon dipole operator are associated with scenarios with the largest mass scales of new physics~\cite{Stockinger:2022ata,Crivellin:2022wzw}. In these models, the typical contribution to the muon magnetic moment scales as $\Delta a_{\mu}\simeq m_{\mu}\lambda^{3}v/16\pi^{2}M^{2}$ where $\lambda$ and $M$ are representative of the couplings and masses of new particles, whereas the naive scaling due to contributions from new particles would be $\Delta a_{\mu}\simeq \lambda^{2}m_{\mu}^{2}/16\pi^{2}M^{2}$. Thus, we see that the enhancement appears as a factor of $\sim \lambda v/m_{\mu}$ compared to the naive scaling. Hints of such models may also exist in $B$ anomalies~\cite{Raby:2017igl,Crivellin:2018qmi,Barman:2018jhz,Arnan:2019uhr,1906.11297}, the recently studied Cabibbo anomaly~\cite{Endo:2020tkb,Crivellin:2020ebi,Crivellin:2022ctt}, Higgs decays~\cite{Crivellin:2020tsz,Crivellin:2021rbq}, and physics of the dark sector~\cite{Kowalska:2017iqv,Calibbi:2018rzv,Jana:2020joi,Athron:2021iuf,Arcadi:2021cwg,Cai:2021nmk}. However, a drawback to these scenarios is that increasingly large scales will be difficult or even impossible to test at the LHC and future colliders. We demonstrate that models which generate a chirally-enhanced contribution to the muon dipole moment through the coupling of the Higgs boson to new particles simultaneously generate a modification of the Higgs coupling to the muon which could be observed through the corresponding modification of the $h\to\mu^{+}\mu^{-}$ branching ratio. Further, we elaborate on our recent proposal to correlate these observations with future measurements of the electric dipole moment of the muon~\cite{Dermisek:2022aec}. In this respect, we argue that while direct evidence for new physics may be out of reach for colliders in these scenarios, the pattern of deviations of SM couplings in the low-energy effective field theory is distinct and therefore sharp conclusions can still be made.
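To attach a number to the chiral enhancement discussed above (a rough estimate of ours, using $v=174$ GeV and $m_{\mu}\simeq 0.106$ GeV): for $\lambda\simeq 1$,
\begin{equation}
\frac{\lambda v}{m_{\mu}}\simeq\frac{174\text{ GeV}}{0.106\text{ GeV}}\simeq 1.6\times 10^{3},
\end{equation}
so the mass scale $M$ that accommodates a given $\Delta a_{\mu}$ grows by a factor of $\sqrt{\lambda v/m_{\mu}}\simeq 40$ compared to the naive scaling.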
We discuss models where new particles are allowed to couple to the muon through the SM Higgs boson at tree or one-loop level. Simplified models with loop-level mixing have been extensively studied previously~\cite{Capdevilla:2020qel,Capdevilla:2021rwo,Crivellin:2021rbq}. To generalize previous observations and encompass all relevant models in a unified way, we outline our arguments using the SM effective field theory (SMEFT) and provide model-independent relations. Using this machinery we show how, in specific models, the correlation of the Higgs decay to muons with the muon dipole moments can be used to set an upper bound on the scale of new physics, which can be stronger than more general constraints from perturbative unitarity or the appearance of unphysical regions of parameter space.
This paper is organized as follows. In Section~\ref{sec:smeft_ops}, we discuss the SMEFT operators relevant for our main results and outline our notation. In Section~\ref{sec:models}, we outline the matching of individual models in SMEFT showing, in particular, the predicted correlations of operators connecting Higgs decays to the muon dipole moments. In Section~\ref{sec:ellipse}, we present our main results and discuss implications for upcoming precision measurements related to the muon, followed by a discussion of non-minimal models in Section~\ref{sec:discussion}. We conclude in Section~\ref{sec:conclusions}. We also provide detailed appendices supporting the approximate formulas appearing in the text.
\section{Effective interactions}
\label{sec:smeft_ops}
We focus on the dimension-six operators in SMEFT which generate modifications of the muon coupling to the Higgs as well as muon dipole moments. In a given basis, these effects are captured by operators coupling left- and right-handed muon fields, $\bar{l}_{L}\mathcal{O}e_{R}$. In the Warsaw basis~\cite{Grzadkowski:2010es}, and including the tree-level muon Yukawa coupling, the effective lagrangian relevant to this discussion comprises four operators
\begin{flalign}
-\mathcal{L}\supset y_{\mu}&\bar{l}_{L}\mu_{R}H + C_{\mu H}\bar{l}_{L}\mu_{R}H\left(H^{\dagger}H\right)+C_{\mu W}\bar{l}_{L}\sigma^{\mu\nu}\mu_{R}\tau^{I}HW^{I}_{\mu\nu}+C_{\mu B}\bar{l}_{L}\sigma^{\mu\nu}\mu_{R}HB_{\mu\nu} + h.c.,
\label{eq:eff_lagrangian}
\end{flalign}
where the doublet components of the muon field are labeled as $l_{L}=(\nu_{\mu}, \mu_{L})^{T}$, $\tau^{I}$ are the Pauli matrices, and the gauge field strength tensors are given by
\begin{flalign}
W^{I}_{\mu\nu}=&\partial_{\mu}W^{I}_{\nu} - \partial_{\nu}W^{I}_{\mu}-g\varepsilon^{IJK}W^{J}_{\mu}W^{K}_{\nu}\\
B_{\mu\nu}=&\partial_{\mu}B_{\nu} - \partial_{\nu}B_{\mu}.
\end{flalign}
We assume that all parameters in Eq.~\ref{eq:eff_lagrangian} can be complex. Other operators sharing the chiral structure $\bar{l}_{L}\mathcal{O}e_{R}$ include only a few cases of four-fermion operators. These operators are relevant to the one-loop renormalization of Eq.~\ref{eq:eff_lagrangian}. For the moment we will restrict our discussion to the tree-level predictions of Eq.~\ref{eq:eff_lagrangian}, and defer a discussion of renormalization group (RG) effects to later sections.
When the Higgs develops a vev
\begin{equation}
H=\begin{pmatrix}0\\
v+\frac{1}{\sqrt{2}}h
\end{pmatrix},\;\;\;\; v=174\text{ GeV},
\end{equation}
triggering EWSB, the first two terms in Eq.~\ref{eq:eff_lagrangian} generate the mass and Higgs coupling to the muon. Written in terms of Dirac spinors, we have
\begin{equation}
-\mathcal{L}\supset m_{\mu}\bar{\mu}\mu + \frac{1}{\sqrt{2}}\left(\lambda^{h}_{\mu\mu}\bar{\mu}P_{R}\mu h + h.c.\right),
\label{eq:mass_lag}
\end{equation}
where $m_{\mu}$ is the physical muon mass and
\begin{flalign}
m_{\mu}=&\left(y_{\mu}v + C_{\mu H}v^{3}\right)e^{-i\phi_{m_{\mu}}},\label{eq:mmu}\\
\lambda_{\mu\mu}^{h}=&\left(y_{\mu} + 3C_{\mu H}v^{2}\right)e^{-i\phi_{m_{\mu}}}.
\label{eq:mmu_lamhmu}
\end{flalign}
The overall phase, $\phi_{m_{\mu}}$, appears through a redefinition of the muon fields, $\bar{\mu}_{L}\mu_{R}\to e^{-i\phi_{m_{\mu}}}\bar{\mu}_{L}\mu_{R}$, to make the mass term real and positive.
Additional corrections to the Higgs coupling to the muon arise from the dimension-six operators $C_{H\Box}(H^{\dagger}H)\Box(H^{\dagger}H)$ or $C_{HD}(H^{\dagger}D^{\mu}H)^{*}(H^{\dagger}D_{\mu}H)$. After EWSB, both operators result in non-canonical corrections to the physical Higgs kinetic term. This leads to a universal shift of all Higgs couplings to SM fermions, and in particular $y_{\mu}\rightarrow y_{\mu}(1+C_{HD}v^{2}-C_{H\Box}v^{2}/4)$ in Eq.~\ref{eq:mmu_lamhmu}, see~\cite{Alonso:2013hga}. Thus, the contribution of these operators to the Higgs coupling to the muon propagates as corrections suppressed by $\sim m_{\mu}/v$ compared to that of $C_{\mu H}$. This suppression is in principle compensated in a given model if it happens that $C_{HD}$ and $C_{H\Box}$ are generated at tree level at the matching scale while $C_{\mu H}$ is generated at one loop, e.g. as in the SM extended with a singlet scalar~\cite{Jiang:2018pbd}. For the models we focus on in the following sections $C_{HD}$ and $C_{H\Box}$ are always generated at one loop. Thus, we will ignore these operators in our main discussion as they will be suppressed by a loop factor in addition to the power counting in $m_{\mu}/v$.
Due to the different combinatorial factor accompanying the corrections proportional to $C_{\mu H}$ in Eqs.~\ref{eq:mmu} and~\ref{eq:mmu_lamhmu}, the Higgs coupling to the muon is necessarily modified compared to that in the SM, $(\lambda^{h}_{\mu\mu})_{SM}=m_{\mu}/v$. In the basis where the muon mass is real and positive, we define
\begin{equation}
R_{h\to\mu^{+}\mu^{-}} \equiv \frac{BR(h\to\mu^{+}\mu^{-})}{BR(h\to\mu^{+}\mu^{-})_{SM}}=\left(\frac{v}{m_{\mu}}\right)^{2}\big|\lambda_{\mu\mu}^{h}e^{-i\phi_{m_{\mu}}}\big|^{2},
\end{equation}
for which the most up-to-date measurements of $h\to\mu\mu$ set an upper limit of $R_{h\to\mu^{+}\mu^{-}} \leq2.2$~\cite{ATLAS:2020fzp,CMS:2020xwi}. Unless the modification to $(\lambda^{h}_{\mu\mu})_{SM}$ is only a pure phase, we see that the decay rate is necessarily modified.
Using Eqs.~\ref{eq:mmu} and~\ref{eq:mmu_lamhmu} gives
\begin{flalign}
R_{h\to\mu^{+}\mu^{-}} = 1 + 4\textrm{Re}\left(\frac{C_{\mu H}v^{3}}{m_{\mu}}e^{-i\phi_{m_{\mu}}}\right) + 4\left[\textrm{Re}\left(\frac{C_{\mu H}v^{3}}{m_{\mu}}e^{-i\phi_{m_{\mu}}}\right)\right]^{2}+ 4\left[\textrm{Im}\left(\frac{C_{\mu H}v^{3}}{m_{\mu}}e^{-i\phi_{m_{\mu}}}\right)\right]^{2}.
\label{eq:R_mu_def}
\end{flalign}
We note that corrections to the muon mass proportional to $\textrm{Re}(C_{\mu H})$ can deviate $R_{h\to\mu^{+}\mu^{-}}$ away from 1 in either direction depending on the sign of $\textrm{Re}(C_{\mu H})$, whereas corrections proportional to $\textrm{Im}(C_{\mu H})$ can only enhance $R_{h\to\mu^{+}\mu^{-}}$.
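Both observations are manifest upon noting that Eq.~\ref{eq:R_mu_def} is a complete square (a simple rewriting):
\begin{equation}
R_{h\to\mu^{+}\mu^{-}} = \left|1 + \frac{2C_{\mu H}v^{3}}{m_{\mu}}e^{-i\phi_{m_{\mu}}}\right|^{2},
\end{equation}
so the real part of $C_{\mu H}$ shifts the coupling along the real axis in either direction, while the imaginary part can only increase the modulus.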
After EWSB the last two terms in Eq.~\ref{eq:eff_lagrangian} combine to generate the electric and magnetic dipole moments of the muon
\begin{equation}
\mathcal{L}\supset -C_{\mu \gamma}ve^{-i\phi_{m_{\mu}}}\bar{\mu}_{L}\sigma^{\mu\nu}\mu_{R}F_{\mu\nu} + h.c.
\end{equation}
with $C_{\mu\gamma}=(C_{\mu B}c_{W}-C_{\mu W}s_{W})$, where $s_{W}\equiv \sin\theta_{W}$ and $c_{W}\equiv \cos\theta_{W}$ with $\theta_{W}$ the Weinberg angle. Defining the electric and magnetic dipole moments in terms of Dirac spinors, in the basis where the muon mass is real and positive,
\begin{equation}
\mathcal{L}\supset \frac{\Delta a_{\mu}e}{4m_{\mu}}\bar{\mu}\sigma^{\mu\nu}\mu F_{\mu\nu} - \frac{i}{2}d_{\mu}\bar{\mu}\sigma^{\mu\nu}\gamma^{5}\mu F_{\mu\nu},
\label{eq:ldipole}
\end{equation}
we have that
\begin{flalign}
\Delta a_{\mu} &=-\frac{4m_{\mu}v}{e}\textrm{Re}[C_{\mu \gamma}e^{-i\phi_{m_{\mu}}}]\label{eq:mdipole}\\
d_{\mu} &= 2v\textrm{Im}[C_{\mu \gamma}e^{-i\phi_{m_{\mu}}}].
\label{eq:edipole}
\end{flalign}
Our convention in defining the dipole moments is such that the electromagnetic (EM) charge unit $e>0$.\;\footnote{It is worth noting the sign conventions appearing in Eqs.~\ref{eq:ldipole},~\ref{eq:mdipole}, and \ref{eq:edipole}. The signs are chosen so that the definition of dipole moments from the Lagrangian, Eq.~\ref{eq:ldipole}, matches that typically used in the literature, accounting for both our convention for the sign of $e$ and using the mostly-minus metric $\eta=\text{diag}(+1,-1,-1,-1)$.} The recent measurement of the muon anomalous magnetic moment provides~\cite{Muong-2:2021ojo,Aoyama:2020ynm}
\begin{equation}
\Delta a_{\mu}=(2.51\pm 0.59)\times 10^{-9}.
\end{equation}
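For orientation (a rough estimate of ours, using $e\simeq 0.30$, $m_{\mu}\simeq 0.106$ GeV, and $v=174$ GeV), inverting Eq.~\ref{eq:mdipole} for the central value and assuming a real $C_{\mu\gamma}$ gives
\begin{equation}
|C_{\mu\gamma}| \simeq \frac{e\,\Delta a_{\mu}}{4m_{\mu}v}\simeq 1\times 10^{-11}\text{ GeV}^{-2}\simeq\frac{1}{(3\times 10^{2}\text{ TeV})^{2}},
\end{equation}
illustrating the very large effective scales probed by the muon magnetic moment.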
For the electric dipole moment, there is currently both a direct limit from the Brookhaven Muon $g-2$ results~\cite{Muong-2:2008ebm}:
\begin{equation}
|d_{\mu}|<1.9\times 10^{-19}\; [e\cdot cm],
\end{equation}
as well as an indirect limit based on the Schiff moments of heavy molecules~\cite{Ema:2021jds}:
\begin{equation}
|d_{\mu}| < 1.9\times 10^{-20}\; [e\cdot cm ],
\end{equation}
where we have quoted the more stringent bound based on ThO. Currently there are two new proposals to improve this limit. The Fermilab (FNAL) Muon $g-2$ collaboration projects~\cite{Lukicov:2019ibv} an improvement down to
\begin{equation}
|d_{\mu}| < 10^{-21}\; [e\cdot cm],
\end{equation}
while a new experiment based on the frozen-spin technique, proposed to be hosted at the Paul Scherrer Institute (PSI), projects~\cite{Adelmann:2021udj}
\begin{equation}
|d_{\mu}| < 6\times 10^{-23}\; [e\cdot cm].
\end{equation}
Other operators involving a single leptonic current of the muon fields, such as $C^{(1)}_{Hl}H^{\dagger}i\overset{\leftrightarrow}{D_{\mu}}H\left(\bar{l}_{L}\gamma^{\mu}l_{L}\right)$, $C^{(3)}_{Hl}H^{\dagger}i\overset{\leftrightarrow}{D_{\mu}^{I}}H\left(\bar{l}_{L}\tau^{I}\gamma^{\mu}l_{L}\right)$, and $C_{He}H^{\dagger}i\overset{\leftrightarrow}{D_{\mu}}H\left(\bar{\mu}_{R}\gamma^{\mu}\mu_{R}\right)$ lead to corrections of the muon couplings to gauge bosons. While the corresponding Wilson coefficients are not central to the main observables that we discuss, they are subject to constraints for a given matching scale, $\Lambda$, as they lead to corrections to electroweak precision observables (EWPO) such as the partial width of the $Z$-boson and the muon lifetime~\cite{Kannike:2011ng,Dermisek:2021ajd}. See also the appendix of Ref.~\cite{Crivellin:2021rbq} for general expressions of couplings for the $Z$- and $W$-bosons to the muon in terms of $C^{(1)}_{Hl}$, $C^{(3)}_{Hl}$, and $C_{He}$.
In the following sections, we will argue that while the observables $\Delta a_{\mu}$, $d_{\mu}$, and $R_{h\to\mu^{+}\mu^{-}}$ need not be related in general, they are tightly correlated in models of new physics which aim to explain $\Delta a_{\mu}$ via a mass-enhanced correction.
\section{Mass enhanced corrections}
\label{sec:models}
Despite the proliferation of effective operators, it is often the case that the effective field theory simplifies in the context of concrete models, where subsets of operators may be dictated by the same couplings, if they are generated at all. In this section, we outline two classes of models which are known to generate mass-enhanced corrections to $\Delta a_{\mu}$ and discuss their matching onto the effective theory. The two classes of models are distinguished by whether the dominant contribution to $C_{\mu H }$ is generated at tree or one-loop level at the matching scale, and we refer to these cases hereafter as \textit{tree models} and \textit{loop models}, respectively. The tree models have been enumerated and studied in connection with $\Delta a_{\mu}$ in~\cite{Kannike:2011ng,Dermisek:2013gta}, whereas similar studies for loop models can be found in~\cite{Calibbi:2018rzv,Crivellin:2018qmi,Crivellin:2021rbq}. Further, a new class of models which generate chirally-enhanced corrections to $\Delta a_{\mu}$ has recently been identified~\cite{Guedes:2022cfy}, which we will refer to as \textit{bridge models}. In~\cite{Dermisek:2022aec}, we argued that $C_{\mu H }$ and $C_{\mu\gamma}$ are sharply correlated at the matching scale in the tree and loop models. Here, we elaborate on this relationship and expand the scope of our arguments to a subset of the bridge models.
\subsection{Tree models}
Models where $C_{\mu H}$ is generated at tree level at the matching scale consist of UV completions with two new fermion fields which have a mutual coupling to the SM Higgs boson and tree-level mixing with the left- and right-handed muon fields. In the left panel of Fig.~\ref{fig:tree_diags}, we show a representative Feynman diagram which generates $C_{\mu H}$ when the heavy fields, $Y$ and $Z$, are integrated out. The Higgs legs on the diagram are generically labeled as $H$. However, gauge invariance will force one leg to be $H^\dagger$ once quantum numbers of the new leptons are chosen. The direction of charge arrows on the internal fermion lines will similarly be enforced in a given model. We consider models where $Y$ and $Z$ are vectorlike leptons and thus have contributions to their mass which do not originate from EWSB.
\begin{figure}[t]
\includegraphics[width=1\linewidth]{tree_diags}
\caption{A representative tree-level diagram generating $C_{\mu H}$ in models with heavy leptons which couple to the muon through the SM Higgs (left), and a corresponding diagram leading to the mass-enhanced correction to $\Delta a_{\mu}$.}
\label{fig:tree_diags}
\end{figure}
Starting from the left diagram in Fig.~\ref{fig:tree_diags}, we see that connecting $H^{\dagger}H$ pairs of the external Higgs legs and dressing the resulting diagram with all possible insertions of a photon leg constructs the contributions to $C_{\mu\gamma}$, as in the right panel of Fig.~\ref{fig:tree_diags}, which lead to mass-enhanced corrections to $\Delta a_{\mu}$.~\footnote{Note that replacing $Y\to l_{L}$ or $Z\to \mu_{R}$ in Fig.~\ref{fig:tree_diags} (left) leads to a tree-level correction to $C_{\mu H}$ which is proportional to the muon Yukawa coupling. We neglect these corrections in the tree models as they are $m_{\mu}/v$ suppressed compared to the corrections we discuss.} Thus, in the tree models it is expected that the same couplings needed to generate $C_{\mu H}$ simultaneously give a contribution to $C_{\mu\gamma}$, and the two Wilson coefficients will be directly related without a free parameter. Indeed, as was studied in~\cite{Kannike:2011ng,Dermisek:2021ajd,Dermisek:2022aec}, the dominant contributions to $C_{\mu H}$ and $C_{\mu\gamma}$ are related, leading to the following correlation between Wilson coefficients at the matching scale
\begin{equation}
C_{\mu\gamma} \simeq \frac{\mathcal{Q}e}{64\pi^{2}}C_{\mu H},
\label{eq:tree_formula}
\end{equation}
where $\mathcal{Q}$ is an integer factor depending only on the quantum numbers of the new leptons. In Table~\ref{table:models}, we list the quantum numbers under $SU(2)_{L}\times U(1)_{Y}$ of the possible pairs of new leptons in tree models, and the corresponding $\mathcal{Q}$-factor relating the Wilson coefficients via Eq.~\ref{eq:tree_formula}. Since these models require couplings of heavy fermions to both the muon and SM Higgs fields to generate these corrections, the possible models are highly limited by the allowed quantum numbers.
\begin{table}[t]
\begin{center}
\begin{tabular}{ |c||c| }
\hline
$SU(2)\times U(1)_{Y}$& $\mathcal{Q}$ \\
\hline
$\mathbf{2}_{-1/2}\oplus\mathbf{1}_{-1}$ & 1 \\
\hline
$\mathbf{2}_{-1/2}\oplus\mathbf{3}_{-1}$ & 5 \\
\hline
$\mathbf{2}_{-3/2}\oplus\mathbf{1}_{-1}$ & 3 \\
\hline
$\mathbf{2}_{-3/2}\oplus\mathbf{3}_{-1}$ & 3 \\
\hline
$\mathbf{2}_{-1/2}\oplus\;\mathbf{3}_{0}$ &1 \\
\hline
\end{tabular}
\caption{Quantum numbers of $Y\oplus Z$ fields under $SU(2)\times U(1)_{Y}$ and corresponding $\mathcal{Q}$-factor relating $C_{\mu\gamma}$ and $C_{\mu H}$ in the five tree models~\cite{Kannike:2011ng}. The hypercharge numbers are normalized so that $Q_{EM}=T^{3}+Y$.}
\label{table:models}
\end{center}
\end{table}
In~\cite{Kannike:2011ng,Dermisek:2021ajd,Dermisek:2022aec}, Eq.~\ref{eq:tree_formula} was presented after lengthy calculations in the mass eigenstate basis involving loops of EW gauge bosons in addition to Higgs-mediated diagrams. To demonstrate that this correlation appears in tree models at the matching scale via the diagrammatic arguments we have just presented, we consider as a specific case the SM extended with a vectorlike doublet and charged singlet leptons, $L_{L}$ and $E_{R}$, whose quantum numbers mirror those of the left- and right-handed muon fields (corresponding to the first row in Table~\ref{table:models}). The most general lagrangian of Yukawa and mass terms is then
\begin{flalign}\nonumber
-\mathcal{L}\supset \;&y_{\mu}\bar{l}_{L}\mu_{R}H + \lambda_{L}\bar{L}_{L}\mu_{R}H + \lambda_{E}\bar{l}_{L}E_{R}H + \lambda\bar{L}_{L}E_{R}H + \bar{\lambda}H^{\dagger}\bar{E}_{L}L_{R}\\
&+M_{L}\bar{L}_{L}L_{R}+M_{E}\bar{E}_{L}E_{R} + h.c.,
\end{flalign}
where the doublet components are labeled as $L_{L,R}=(L^{0}_{L,R},L^{-}_{L,R})^{T}$. We work in the unbroken phase of the SM where the full $SU(2)\times U(1)_{Y}$ is linearly realized. Integrating out heavy leptons at tree level gives
\begin{figure}[t]
\includegraphics[width=1\linewidth]{C_mugamma}
\caption{Dominant contributions to $C_{\mu B,W}$ in the $\mathbf{2}_{-1/2}\oplus\mathbf{1}_{-1}$ tree model. The $B,W^{a}$ fields are attached to all particles in the loop. In these diagrams, arrows on fermion lines denote chiral flow.}
\label{fig:Cmugamma}
\end{figure}
\begin{equation}
C_{\mu H}=\frac{\lambda_{L}\lambda_{E}\bar{\lambda}}{M_{L}M_{E}}.
\end{equation}
To calculate the chirally-enhanced contributions to $\Delta a_{\mu}$ in this model we consider the diagrams constructed as described above, by connecting $H^{\dagger}H$ pairs in all possible ways and dressing the diagram with all possible insertions of the $B_{\mu}$ and $W^{a}_{\mu}$ gauge fields, shown in Fig.~\ref{fig:Cmugamma}. We note that insertions of $B_{\mu}$ and $W^{a}_{\mu}$ on an internal heavy fermion propagator that is not part of the loop merely renormalize the respective gauge charges. Thus, we need only compute diagrams constructed from $B_{\mu}$ and $W^{a}_{\mu}$ insertions on particles in the loop. Since we are interested in corrections when $M_{L,E}\gg v$, propagators of $L_{L,R}$ and $E_{L,R}$ are treated in the heavy mass limit. For the dipole operators, we find
\begin{flalign}
C_{\mu B}=&-\frac{g^{\prime}}{64\pi^{2}}\frac{\lambda_{L}\lambda_{E}\bar{\lambda}}{M_{L}M_{E}}\left[\sum_{j=L,R}\left(Y_{E_{j}}F(x_{E})+2Y_{L_{j}}F(x_{L})\right)+Y_{H}\left(G(x_{E}) + 2G(x_{L})\right)\right]\\
C_{\mu W}=&-\frac{g}{128\pi^{2}}\frac{\lambda_{L}\lambda_{E}\bar{\lambda}}{M_{L}M_{E}}G(x_{E}),
\end{flalign}
where $Y_{L_{L,R}}$, $Y_{E_{L,R}}$, and $Y_{H}$ are the $U(1)_{Y}$ charges of the heavy lepton doublets, singlets, and SM Higgs doublet, respectively,
\begin{flalign}
F(x)&=-\frac{x^{3}-4x^{2}+3x+2x\ln(x)}{2(1-x)^{3}},\label{eq:Ffunc}\\
G(x)&=\frac{x-x^{3}+2x^{2}\ln(x)}{(1-x)^{3}},\label{eq:Gfunc}
\end{flalign}
and $x_{L,E}=M^{2}_{L,E}/m_{h}^{2}$.
After EWSB, and identifying the factor corresponding to $C_{\mu H}$ we have that
\begin{equation}
C_{\mu\gamma}=-\frac{e}{64\pi^{2}}\left[\sum_{j=L,R}\sum_{i=L,E}Q_{i_{j}}F(x_{i}) + Q_{G^{+}}G(x_{L})\right]C_{\mu H},
\end{equation}
where $Q_{L_{L,R},E_{L,R}}=-1$ and $Q_{G^{+}}=+1$ are the EM charges of the charged components of all new fermion degrees of freedom and charged Goldstone in the SM Higgs doublet, respectively. Note that we have performed the calculation in the Feynman gauge where the Goldstones appear as massive particles and we have approximated the Goldstone loops assuming $m_{G^{0}}=m_{G^{\pm}}=m_{h}$.
Taking the limit $x_{i}\gg 1$, $F(x_{i})\to 1/2$, $G(x_{i})\to 1$, we finally obtain
\begin{equation}
C_{\mu \gamma}\simeq\frac{e}{64\pi^{2}}C_{\mu H},
\end{equation}
which reproduces the known result quoted in Table~\ref{table:models} which was originally calculated in the mass eigenstate basis.
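The heavy-mass limits quoted above are easy to check numerically; the following minimal script (ours, independent of the main calculation) evaluates the loop functions and the resulting charge sum:
\begin{verbatim}
# Numerical check of the heavy-mass limits F(x) -> 1/2 and G(x) -> 1,
# and of C_mu_gamma ~ +e/(64 pi^2) C_mu_H in the 2_{-1/2} + 1_{-1} model.
import math

def F(x):
    return -(x**3 - 4*x**2 + 3*x + 2*x*math.log(x)) / (2*(1 - x)**3)

def G(x):
    return (x - x**3 + 2*x**2*math.log(x)) / (1 - x)**3

xL = xE = 1.0e4  # (M / m_h)^2 for M ~ 100 m_h
# bracket = sum over the four heavy-fermion states (each with Q = -1)
# plus the charged Goldstone (Q = +1):
bracket = 2*(-1)*F(xL) + 2*(-1)*F(xE) + (+1)*G(xL)
print(F(xL), G(xL))  # -> approximately 0.5 and 1.0
print(-bracket)      # -> approximately 1.0
\end{verbatim}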
The calculation for other models in Table~\ref{table:models} proceeds similarly, and thus for tree models the generic correlation between $C_{\mu H}$ and the dipole operator can be parameterized as
\begin{equation}
C_{\mu\gamma}\simeq-\frac{e}{64\pi^{2}}\left[\sum_{j=L,R}\sum_{i=\text{charged fermions}}Q_{i_{j}}a_{i_{j}}(x_{L},x_{E}) + Q_{G^{+}}b(x_{L},x_{E})\right]C_{\mu H},
\label{eq:tree_gen}
\end{equation}
where $a_{i_{j}}(x_{L},x_{E})$ are real numbers parameterizing loops with heavy fermions and the physical Higgs, and $b(x_{L},x_{E})$ parameterizes the Goldstone-mediated contributions. In Table~\ref{table:models_loops}, we list all particles charged under $Q_{EM}$ in each of the tree models and their corresponding contribution to the sum in Eq.~\ref{eq:tree_gen}. The $F(x)$ and $G(x)$ loop functions are as defined above, whereas the remaining functions needed to complete the table are given by
\begin{flalign}
A(x,y)&=\frac{xy}{2}\left(\frac{-3y+x(1+x+y)}{(1-x)^{2}(x-y)^{2}}+\frac{2(x^{3}+x^{2}y(x-3)+y^{2})\ln(x)}{(1-x)^{3}(x-y)^{3}}-\frac{2y^{2}\ln(y)}{(1-y)(x-y)^{3}}\right),\\
B(x,y)&=\frac{xy}{2(x-y)}\left(\frac{(1-y)(y-3)-2\ln(y)}{(1-y)^{3}}-\frac{(1-x)(x-3)-2\ln(x)}{(1-x)^{3}}\right)-A(y,x),\\
C(x,y)&=\frac{xy}{2}\left(\frac{x+xy+y-3}{(1-x)^{2}(1-y)^{2}}-\frac{2x\ln(x)}{(1-x)^{3}(x-y)} + \frac{2y\ln(y)}{(1-y)^{3}(x-y)}\right).
\end{flalign}
\begin{table}[t]
\begin{tabular}{ |c|c||c|c|c|c|c|c| }
\hline
&$Q_{EM}$&&$\mathbf{2}_{-1/2}\oplus\mathbf{1}_{-1}$ & $\mathbf{2}_{-1/2}\oplus\mathbf{3}_{-1}$ & $\mathbf{2}_{-3/2}\oplus\mathbf{1}_{-1}$ & $\mathbf{2}_{-3/2}\oplus\mathbf{3}_{-1}$ & $\mathbf{2}_{-1/2}\oplus\;\mathbf{3}_{0}$\\
\hline
$L_{L}^{-}$ & $-1$ & $a_{L_{L}}$& $F(x_{L})$ & $F(x_{L})$ & $F(x_{L}) + B(x_{L},x_{E})$ & $F(x_{L}) + B(x_{L},x_{E})$ & $B(x_{L},x_{E})$\\
\hline
$L_{R}^{-}$ & $-1$ &$a_{L_{R}}$& $F(x_{L})$ & $F(x_{L})$ & $F(x_{L}) + A(x_{L},x_{E})$ & $F(x_{L}) + A(x_{L},x_{E})$ & $A(x_{L},x_{E})$\\
\hline
$L_{L}^{--}$ & $-2$ & $a_{L_{L}}$ & - & - & $F(x_{L})$ & $-F(x_{L}) + 2B(x_{L},x_{E})$ & - \\
\hline
$L_{R}^{--}$ & $-2$ & $a_{L_{R}}$& - & - & $F(x_{L})$ & $-F(x_{L}) + 2A(x_{L},x_{E})$ & - \\
\hline
$E_{L}^{-}$ & $-1$ &$a_{E_{L}}$& $F(x_{E})$ & $F(x_{E})$ & $A(x_{E},x_{L})$ & $A(x_{E},x_{L})$ & $F(x_{E}) + A(x_{E},x_{L})$\\
\hline
$E_{R}^{-}$ & $-1$ &$a_{E_{R}}$& $F(x_{E})$ & $F(x_{E})$ & $B(x_{E},x_{L})$ & $B(x_{E},x_{L})$ & $F(x_{E}) + B(x_{E},x_{L})$\\
\hline
$E_{L}^{--}$ & $-2$ &$a_{E_{L}}$& - & $2F(x_{E})$& - & $2A(x_{E},x_{L})$& - \\
\hline
$E_{R}^{--}$ & $-2$ &$a_{E_{R}}$& - & $2F(x_{E})$& - & $2B(x_{E},x_{L})$& - \\
\hline
$G^{+}$ & $+1$ &$b$& $G(x_{L})$ & $2G(x_{E}) - G(x_{L})$& $G(x_{L})$ & $-G(x_{L}) + 4C(x_{L},x_{E})$ & $\frac{1}{2}G(x_{L}) + C(x_{L},x_{E})$\\
\hline
\end{tabular}
\caption{Particles with non-zero $Q_{EM}$ in tree models and their contribution to Eq.~\ref{eq:tree_gen}. Entries labeled with - indicate the absence of a particle in a given model. The loop functions needed for each contribution are given in the text.}
\label{table:models_loops}
\end{table}
\subsection{Loop Models}
Models where $C_{\mu H}$ is generated at one loop without the muon Yukawa coupling comprise UV completions consisting of either two new fermions and one scalar, or two scalars and one fermion. Following~\cite{Capdevilla:2021rwo}, we refer to these cases as FFS- and SSF-models, respectively. We define the couplings and masses in each model via the lagrangians
\begin{figure}[t]
\includegraphics[width=1\linewidth]{loop_diags}
\caption{Diagrams which generate $C_{\mu H}$ and $C_{\mu \gamma}$ in the FFS- (diagrams with $Y,Z$ fermion lines) and SSF-type (diagrams with an $F$ fermion line) models considered in~\cite{Calibbi:2018rzv,Crivellin:2018qmi,Crivellin:2021rbq}.}
\label{fig:loop_diags}
\end{figure}
\begin{flalign}\nonumber
-\mathcal{L}_{FFS}\supset& \;\lambda_{Z}\bar{l}_{L}Z_{R}\Phi + \lambda_{Y}\bar{Y}_{L}\mu_{R}\Phi^{\dagger} + \lambda_{YZ}\bar{Z}_{L}Y_{R}H + \lambda^\prime_{YZ}\bar{Z}_{R}Y_{L}H\\
&+ M_{Y}\bar{Y}_{L}Y_{R} + M_{Z}\bar{Z}_{L}Z_{R} + M_{\Phi}^{2}|\Phi|^{2} + h.c.,\label{eq:FFS_lag}\\\nonumber
-\mathcal{L}_{SSF}\supset& \;\lambda_{2}\bar{l}_{L}F_{R}\Phi_{2} + \lambda_{1}\bar{F}_{L}\mu_{R}\Phi_{1}^{\dagger} + A\Phi_{2}^{\dagger}\Phi_{1}H\\
&+M_{F}\bar{F}_{L}F_{R} + M_{\Phi_{1}}^{2}|\Phi_{1}|^{2} + M_{\Phi_{2}}^{2}|\Phi_{2}|^{2} + h.c.,\label{eq:SSF_lag}
\end{flalign}
which for a given model is matched to Eq.~\ref{eq:eff_lagrangian}. In the bottom row of Fig.~\ref{fig:loop_diags}, we show the possible diagrams leading to $C_{\mu H}$ where the three Higgs legs are attached either all to the internal fermion line as in the FFS-type models (left), or all on the scalar line as in the SSF-type models (right). For the SSF models we show possible contributions from scalar quartic couplings which may appear depending on details of the model and the scalar potential. Similar diagrams could also possibly appear in FFS models. Charge arrows on internal fermion lines are enforced once specific representations are chosen, as in Fig.~\ref{fig:tree_diags}. It should be noted that in these minimal models there is no tree-level correction to $C_{\mu H}$ even when allowing corrections proportional to the muon Yukawa coupling.
In contrast to the tree models, $C_{\mu\gamma}$ can be constructed starting from the bottom row of Fig.~\ref{fig:loop_diags} by appropriately removing an $H^{\dagger}H$ pair and dressing the resulting diagram with a photon leg.
Following this logic, in the FFS scenarios this amounts to removing a factor of $|\lambda_{YZ}|^{2}$ from diagrams where the Higgs pair couples to the internal fermions, as in Eq.~\ref{eq:FFS_lag}, or a factor of the quartic coupling $\lambda_{\phi}$ (or $\lambda_{\phi}^{\dagger}$) from diagrams with scalar quartic couplings. Thus, schematically the expected relation between $C_{\mu\gamma}$ and $C_{\mu H}$ is given by
\begin{equation}
\left(b_{F}|\lambda_{YZ}|^{2}+b_S\lambda_{\phi}\right)C_{\mu \gamma}\simeq\left(\sum_{F}a_{F}Q_{F} + \sum_{S}a_{S}Q_{S}\right)eC_{\mu H},
\label{eq:loop_WC}
\end{equation}
where $a_{S,F}$ are fixed real numbers parameterizing loops in $C_{\mu\gamma}$ with the photon attached to either a scalar or fermion line, respectively. $b_{S,F}$ are analogous numbers parameterizing loops generating $C_{\mu H}$ with or without a scalar quartic coupling, respectively. In the SSF scenarios the same relation is expected with the replacement $\lambda_{YZ}\rightarrow A/M$, where $M$ is identified as a common scale of all new particles. Possible contributions from a scalar quartic coupling, $\lambda_{\phi}$, could be generated from the SM Higgs quartic coupling or otherwise depending on details of the model. In the limit $\lambda_{\phi}\to 0$, we note that although Eq.~\ref{eq:loop_WC} is more complicated than the relation for the tree models, the fact that $C_{\mu H}$ and $C_{\mu\gamma}$ are implicitly determined by $\lambda_{YZ}$ (or $A$) implies that $C_{\mu H}$ and $C_{\mu\gamma}$ are related by a single parameter.
The FFS and SSF-type models have been studied extensively in~\cite{Calibbi:2018rzv,Crivellin:2018qmi,Crivellin:2021rbq} and matching onto SMEFT in the Warsaw basis is provided in~\cite{Crivellin:2021rbq}. In terms of model building, either case in fact represents an infinite class of models as the couplings alone do not completely determine the quantum numbers of new particles. We follow~\cite{Crivellin:2021rbq} and parameterize the corrections in SSF-type models with the hypercharge of the new fermion, while in the FFS-type models we parameterize corrections in terms of the hypercharge of $\Phi$. However, we recover the results of~\cite{Crivellin:2021rbq} only in the limit $\lambda_{YZ}=\lambda^{\prime}_{YZ}$. We will consider contributions from $\lambda_{YZ}$ and $\lambda^{\prime}_{YZ}$ separately and take the limit $\lambda_{YZ}=\lambda^{\prime}_{YZ}$ as a special case. In each model type, assuming a common scale of new physics (and ignoring contributions from quartic couplings) Eq.~\ref{eq:loop_WC} becomes
\begin{table}[t]
\begin{center}
\begin{tabular}{ |c||c|c|c|c|c| }
\hline
$\begin{matrix}SU(2)\\FFS\\SSF\end{matrix}$& $\mathcal{Q}_{SSF}$ & $\mathcal{Q}_{FFS}$ & $\mathcal{Q}_{FFS}' $ &$ \mathcal{Q}_{FFS}^{(\lambda_{YZ} = \lambda_{YZ}')}$ & $\xi_{eH}$ \\
\hline
$121$ &$-2(1+2Y_{F})/\xi_{eH}$& $-2(3+2Y_{\Phi})/\xi_{eH}$ & $-2/3\xi_{eH}$ & $-2(1+Y_{\Phi})/\xi_{eH}$ & $1$\\
\hline
$212$ & $4Y_{F}/\xi_{eH}$& $4(1+Y_{\Phi})/\xi_{eH}$ & $2/3\xi_{eH}$ &$(1+2Y_{\Phi})/\xi_{eH}$ & $-1$\\
\hline
$323$ &$2(1-6Y_{F})/\xi_{eH}$ & $ -2(5+6Y_{\Phi})/\xi_{eH}$& $-2 / \xi_{eH}$ &$-2(1+3Y_{\Phi})/\xi_{eH}$& $5$\\
\hline
$232$ & $-4(2+3Y_{F})/\xi_{eH}$ & $-4(5+3Y_{\Phi})/\xi_{eH}$ & $-2/\xi_{eH}$ & $-(7+6Y_{\Phi})/\xi_{eH}$ & $5$\\
\hline
\end{tabular}
\caption{$\mathcal{Q}$ factors defining the relations, Eqs.~\ref{eq:loop_Qs1} and ~\ref{eq:loop_Qs2}, for the FFS- and SSF-type models. In the left-most column, we list the models by representations of new particles under $SU(2)_{L}$. In each case, the leftover hypercharge (as described in the text) is listed as a free parameter in the corresponding row. The right-most column gives the $\xi_{eH}$ factors for a given model, which are fixed by the choice of representations.}
\label{table:loop_models}
\end{center}
\end{table}
\begin{flalign}
e\mathcal{Q}^{(\prime)}_{FFS}C_{\mu H}=4|\lambda^{(\prime)}_{YZ}|^{2}C_{\mu\gamma},\label{eq:loop_Qs1}\\
e\mathcal{Q}_{SSF}C_{\mu H}=4\frac{|A|^{2}}{M^{2}}C_{\mu\gamma},
\label{eq:loop_Qs2}
\end{flalign}
where $\mathcal{Q}^{(\prime)}_{FFS}$, $\mathcal{Q}_{SSF}$ and $\xi_{eH}$ are determined by the charges of new particles and are defined in Table~\ref{table:loop_models} for the models emphasized in~\cite{Crivellin:2021rbq}.~\footnote{Note that the Wilson coefficients in~\cite{Crivellin:2021rbq} are defined with opposite sign in the Lagrangian compared to our conventions. While this does not affect the definition of the ratio of coefficients in Eqs.~\ref{eq:loop_Qs1} and ~\ref{eq:loop_Qs2}, the sign difference is relevant for matching $C_{\mu\gamma}$ to $\Delta a_{\mu}$.} See the Appendix for more detailed formulas. In the left-most column we list the $SU(2)_{L}$ representations of $Y_{L,R}-Z_{L,R}-\Phi$ particles in Eq.~\ref{eq:FFS_lag} models or the $\Phi_{1}-\Phi_{2}-F_{L,R}$ particles in Eq.~\ref{eq:SSF_lag}. The leftover hypercharge is left as a free parameter for the $\mathcal{Q}$ factor of a given model in the corresponding row. The $\xi_{eH}$ factors are given in the right-most column. For FFS scenarios we give the $\mathcal{Q}$ factors when only $\lambda_{YZ}$ or $\lambda^{\prime}_{YZ}$ is present in the third and fourth columns, respectively. In the fifth column we consider the limit when $\lambda_{YZ} = \lambda_{YZ}'$. In this case, Eq.~\ref{eq:loop_Qs1} has the same form with the replacement $\mathcal{Q}^{(\prime)}_{FFS}\to\mathcal{Q}_{FFS}^{(\lambda_{YZ} = \lambda_{YZ}')}$.
\subsubsection{Bridge Models}
The bridge models~\cite{Guedes:2022cfy} represent an interesting class of models sitting somewhere in between the tree and loop models. As an example, we consider the two-field extension of the SM with a new lepton singlet $E\sim(1,1,-1)$ and new scalar $S\sim(1,1,0)$. The relevant couplings of the model are given by
\begin{equation}
-\mathcal{L}\supset y_{\mu}\bar{l}_{L}\mu_{R}H + \lambda_{E}\bar{l}_{L}E_{R}H + \lambda_{S}\bar{E}_{L}\mu_{R}S + \bar{\lambda}\bar{E}_{L}E_{R}S + M_{E}\bar{E}_{L}E_{R} + h.c.
\end{equation}
In this model, $C_{\mu H}$ is generated by a tree-level contribution proportional to the muon Yukawa coupling as well as a loop-level contribution through only new-lepton couplings. The corresponding contributions are generated by the left and right diagrams in Fig.~\ref{fig:bridge_diags}, respectively, and give
\begin{figure}[t]
\includegraphics[width=1\linewidth]{bridge_diags}
\caption{Tree and loop diagrams which generate $C_{\mu H}$ in the two-field bridge model introduced in~\cite{Guedes:2022cfy} with a charged fermion singlet, $E_{R}$, and a neutral scalar singlet, $S$.}
\label{fig:bridge_diags}
\end{figure}
\begin{equation}
C_{\mu H} \simeq \frac{y_{\mu}|\lambda_{E}|^{2}}{M_{E}^{2}} - \frac{\lambda_S\lambda_{E}\bar{\lambda}}{16\pi^{2}}\frac{|\lambda_{E}|^{2}}{M_{E}^{2}}D(x),
\label{eq:CmuH_bridge}
\end{equation}
where
\begin{equation}
D(x)=\frac{x^{2}-x-x\ln(x)}{(1-x)^{2}},
\end{equation}
and $x=M_{E}^{2}/M_{S}^{2}$.~\footnote{When $\lambda_{S}\lambda_{E}\bar{\lambda}>0$, there is also the possibility that $C_{\mu H}=0$, although exact cancellation would require some fine-tuning in the model.} Taking $M_{E}\sim M_{S}$, for which $D(x)\to 1/2$, we have that
%
\begin{equation}
\frac{C^{(loop)}_{\mu H}}{C^{(tree)}_{\mu H}}\sim \frac{\lambda_S\lambda_{E}\bar{\lambda}v}{32\pi^{2}m_{\mu}},
\end{equation}
which is maximal when the bound from the muon coupling to the $Z$-boson, $|\lambda_{E}|v/M_{E}\leq 0.03$, is saturated. When this occurs, the loop contribution can dominate for $\lambda_{S},\bar{\lambda}\simeq 0.5 - 1.0$ and a scale of new physics at $M_{E}\sim M_{S} = 1-10$ TeV. Thus, in the two-field extensions of the bridge models the dominant contribution to $C_{\mu H}$ can easily be the one generated by the loop in Fig.~\ref{fig:bridge_diags} (right), for moderate-size new couplings up to the size allowed by perturbativity limits at the scale of new physics.
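As a concrete illustration (our arithmetic, using the numbers above): saturating $|\lambda_{E}|v/M_{E}=0.03$ at $M_{E}\sim M_{S}=10$ TeV gives $\lambda_{E}\simeq 1.7$, so that for $\lambda_{S}\simeq\bar{\lambda}\simeq 1$
\begin{equation}
\frac{C^{(loop)}_{\mu H}}{C^{(tree)}_{\mu H}}\sim\frac{\lambda_{E}v}{32\pi^{2}m_{\mu}}\simeq 9,
\end{equation}
confirming that the loop piece dominates at the upper end of this mass range.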
In this model, $C_{\mu\gamma}$ is constructed from the right diagram by removing the Higgs vertices in the loop and dressing the resulting diagram with an external photon. We then have (cf. Eq.~4.7 of~\cite{Guedes:2022cfy})
\begin{equation}
C_{\mu\gamma} \simeq -e\frac{\lambda_{S}\lambda_{E}\bar{\lambda}}{32\pi^{2}M_{E}^{2}}F(x),
\end{equation}
where $F(x)$ is defined in Eq.~\ref{eq:Ffunc}. In the region of parameters where the loop contribution to $C_{\mu H}$ dominates, we have
\begin{equation}
3|\lambda_{E}|^{2}C_{\mu \gamma}\simeq e C_{\mu H}.
\label{eq:bridge_example}
\end{equation}
Beyond this example, in~\cite{Guedes:2022cfy} it was found that there are five additional such models which generate a mass-enhanced correction to $\Delta a_{\mu}$ via the same topology.~\footnote{There are eight possible models in total. However, two possibilities result in $C_{\mu\gamma}=0$ regardless of the hierarchy of new masses involved. It would be interesting to investigate whether these cases share a similar type of internal symmetry which leads to a ``magic zero'' as in~\cite{Craig:2021ksw,DelleRose:2022ygn}.} In each case, the same arguments correlating $C_{\mu H}$ and $C_{\mu\gamma}$ apply, albeit with a slightly different numerical factor than in Eq.~\ref{eq:bridge_example}. Thus, for the six examples we may write a generic expression as
\begin{equation}
e \mathcal{Q}C_{\mu H}\simeq|\lambda_{F\mu}|^{2}C_{\mu \gamma},
\label{eq:bridge_equation}
\end{equation}
\begin{table}[t]
\begin{center}
\begin{tabular}{ |c||c| }
\hline
$SU(2)\times U(1)_{Y}$& $\mathcal{Q}$ \\
\hline
$\mathbf{1}_{-1}\oplus\mathbf{1}_{0}$ & 1/3 \\
\hline
$\mathbf{1}_{-1}\oplus\mathbf{1}_{-2}$ & 2/3 \\
\hline
$\mathbf{2}_{-1/2}\oplus\mathbf{1}_{0}$ & 1/3 \\
\hline
$\mathbf{2}_{-1/2}\oplus\mathbf{3}_{0}$ & - \\
\hline
$\mathbf{2}_{-1/2}\oplus\;\mathbf{3}_{-1}$ & 3/4 \\
\hline
$\mathbf{3}_{-1}\oplus\;\mathbf{3}_{0}$ & 1/2 \\
\hline
\end{tabular}
\caption{Quantum numbers of $F\oplus S$ under $SU(2)\times U(1)_{Y}$ and the corresponding $\mathcal{Q}$-factor relating $C_{\mu\gamma}$ and $C_{\mu H}$ in the two-field bridge models~\cite{Guedes:2022cfy}.}
\label{table:bridge_models}
\end{center}
\end{table}
where, in a given model, $\lambda_{F\mu}$ should be understood as the coupling of the Higgs which mixes the heavy fermion with either the left- or right-handed muon. In Table~\ref{table:bridge_models}, we provide a list of the possible two-field bridge models which generate a mass-enhanced correction to $\Delta a_{\mu}$. In the left column, we give the quantum numbers of the new fermion and scalar ($F\oplus S$) under $SU(2)\times U(1)_{Y}$ and in the right column we list the corresponding $\mathcal{Q}$-factor in each case appearing in Eq.~\ref{eq:bridge_equation}. For the model $\mathbf{2}_{-1/2}\oplus\mathbf{3}_{0}$, $C_{\mu\gamma}$ happens to vanish in the limit $M_{\mathbf{2}_{-1/2}}=M_{\mathbf{3}_{0}}$. However, for a general spectrum of new particles the relation to $C_{\mu H}$ is given by
\begin{equation}
4|\lambda_{F\mu}|^{2}C_{\mu\gamma}=e\frac{H(x)}{D(x)}C_{\mu H},
\end{equation}
where
\begin{equation}
H(x)=-\frac{x^{3} + 4x^{2} - 5x - 2x(2x + 1)\ln(x)}{(1-x)^3},
\end{equation}
and $\sqrt{x}=M_{\mathbf{2}_{-1/2}}/M_{\mathbf{3}_{0}}$.
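The vanishing of $C_{\mu\gamma}$ for degenerate masses can be verified with a quick numerical check (ours):
\begin{verbatim}
# Check that H(x) -> 0 as x -> 1 (C_mu_gamma vanishes for degenerate
# masses), while D(x) -> 1/2 stays finite.
import math

def D(x):
    return (x**2 - x - x*math.log(x)) / (1 - x)**2

def H(x):
    return -(x**3 + 4*x**2 - 5*x
             - 2*x*(2*x + 1)*math.log(x)) / (1 - x)**3

for x in (0.9, 0.99, 0.999):
    print(x, H(x), D(x))  # H -> 0, D -> 0.5
\end{verbatim}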
We see that in the 2-field models discussed in~\cite{Guedes:2022cfy}, $C_{\mu\gamma}$ is correlated with the loop contribution to $C_{\mu H}$ through only a single parameter, Eq.~\ref{eq:bridge_equation}, similarly to the loop models. Further, the loop contribution to $C_{\mu H}$ can dominate in the regions of parameter space with the largest couplings and highest mass scales. We note that the 3-field bridge models discussed in~\cite{Guedes:2022cfy} do not exhibit this correlation and are the only class of mass-enhanced models suggested as solutions to $\Delta a_{\mu}$ which evade it at the level we have outlined here. However, we do find that in these scenarios a correlation between $C_{\mu H}$ and $C_{\mu\gamma}$ can appear at the two-loop level, by connecting the diagrams generating $C^{(1)}_{Hl}H^{\dagger}i\overset{\leftrightarrow}{D_{\mu}}H\left(\bar{l}_{L}\gamma^{\mu}l_{L}\right)$, $C^{(3)}_{Hl}H^{\dagger}i\overset{\leftrightarrow}{D_{\mu}^{I}}H\left(\bar{l}_{L}\tau^{I}\gamma^{\mu}l_{L}\right)$, or $C_{He}H^{\dagger}i\overset{\leftrightarrow}{D_{\mu}}H\left(\bar{\mu}_{R}\gamma^{\mu}\mu_{R}\right)$ to that of $C_{\mu\gamma}$ on an external fermion line. We do not explore this here, as these corrections are not expected to compete with the tree-level correction, Eq.~\ref{eq:CmuH_bridge}.
\section{The ellipse of dipole moments}
\label{sec:ellipse}
In the tree and loop models discussed above, we have argued that the dominant contribution to the dipole operator is parametrically related to the correction to the muon-Higgs coupling by
\begin{equation}
C_{\mu H}\simeq \frac{k}{e} C_{\mu\gamma},
\label{eq:WC_relation}
\end{equation}
for some numerical factor $k$ which depends on the details of the model. Further, this relation applies to the 2-field bridge models in the regions of parameter space with moderate to large couplings. In the effective theory, regardless of whether the operators are generated in tree or loop models, this relation has important consequences for predictions of the muon dipole moments and the $h\to\mu^{+}\mu^{-}$ branching ratio. Assuming the generic relation, Eq.~\ref{eq:WC_relation}, in Eq.~\ref{eq:R_mu_def} and using Eqs.~\ref{eq:mdipole} and \ref{eq:edipole}, we find that the modification of the Higgs decay to muons defines an ellipse with respect to the electric and magnetic dipole moments
\begin{flalign}
R_{h\to\mu^{+}\mu^{-}}=\left(\frac{\Delta a_{\mu}}{2\omega} - c_{\phi_k}\right)^{2} + \left(\frac{m_{\mu}d_{\mu}}{e\omega} - s_{\phi_k}\right)^{2},
\label{eq:ellipse}
\end{flalign}
where $c_{\phi_k}=\cos \phi_k$, $s_{\phi_k}=\sin \phi_k$, and $\omega = m_{\mu}^{2}/|k|v^{2}$. Thus, in models where $C_{\mu\gamma}$ is generated by mass-enhanced corrections, future measurements of $R_{h\to\mu^{+}\mu^{-}}$ and $\Delta a_{\mu}$ are directly connected to the determination of the electric dipole moment, $d_{\mu}$.
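Eq.~\ref{eq:ellipse} is straightforward to evaluate; the following minimal sketch (ours; input values and the unit conversion are spelled out in the comments) computes $R_{h\to\mu^{+}\mu^{-}}$ for a given model factor $k=|k|e^{i\phi_k}$:
\begin{verbatim}
# Evaluate R_{h -> mu mu} from the ellipse equation for a given
# k = |k| e^{i phi_k}.  d_mu is in GeV^-1 (1 e cm ~ 5.07e13 GeV^-1).
import math

m_mu = 0.1057  # GeV
v = 174.0      # GeV
e = math.sqrt(4*math.pi/137.036)

def R_hmumu(k_abs, phi_k, da_mu, d_mu):
    omega = m_mu**2 / (k_abs * v**2)
    return ((da_mu/(2*omega) - math.cos(phi_k))**2
            + (m_mu*d_mu/(e*omega) - math.sin(phi_k))**2)

# Tree model with Q = 1 (k = 64 pi^2), central Delta a_mu, d_mu = 0:
print(R_hmumu(64*math.pi**2, 0.0, 2.51e-9, 0.0))  # -> ~1.32
\end{verbatim}
For the $\mathcal{Q}=1$ tree model with the central value of $\Delta a_{\mu}$ and $d_{\mu}=0$ this reproduces $R_{h\to \mu^{+}\mu^{-}}\simeq 1.32$, quoted below.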
\begin{figure}[t]
\includegraphics[scale=0.5]{tree_couplings_M_plane_k_contours}
\includegraphics[scale=0.5]{tree_edm_mass_Q_contours}
\caption{Left: Common scale of new physics required in tree models with $\mathcal{Q}=1,3,5$ needed to achieve $\Delta a_{\mu}\pm 1\sigma$ for a given overall size of model couplings, assuming $d_{\mu}=0$. Right: Values of $|d_{\mu}|~[e\cdot cm]$ with respect to the maximum mass scale allowed to give the central value of $\Delta a_{\mu}$ in tree models. Dashed lines show the value of $d_{\mu}$ as given by the $k$ equation when $R_{h\to \mu^{+}\mu^{-}}= 2.2$ for a given model. Hatched regions show the corresponding range for $R_{h\to \mu^{+}\mu^{-}}= 1\pm 10\%$. Projected sensitivity to $|d_{\mu}|$ from Fermilab and PSI are shown with dash-dotted and dotted lines, respectively.}
\label{fig:tree_scales}
\end{figure}
Concretely, for tree models, assuming a single scale of new physics, we have for Eq.~\ref{eq:WC_relation}
\begin{equation}
k=-\frac{64\pi^{2}}{\left[\sum_{j=L,R}\sum_{i=L,E}Q_{i_{j}}F(x_{i}) + Q_{G^{+}}G(x_{L})\right]}=\frac{64\pi^{2}}{\mathcal{Q}},
\label{eq:tree_X_SM}
\end{equation}
where $\mathcal{Q}$ is a numerical factor for a given model as in Table~\ref{table:models}. In the left panel of Fig.~\ref{fig:tree_scales}, we show the mass scale of new physics, assumed to be common among all new particles, required in the tree models with $\mathcal{Q}=1,3,5$ to obtain $\Delta a_{\mu}\pm 1\sigma$ for a given overall size of couplings, defined as $(|\lambda_{L}\lambda_{E}\bar{\lambda}|)^{1/3}$, assuming $d_{\mu}=0$. In the legend, we also quote the predicted range of $R_{h\to \mu^{+}\mu^{-}}$ for a given $\mathcal{Q}$. In each case, we have checked that EW precision constraints are satisfied. For $\mathcal{Q}=1$, mass scales up to $\sim 45$ TeV can explain $\Delta a_{\mu}\pm 1\sigma$ before the overall size of couplings reaches non-perturbative values. For $\mathcal{Q}=5$, mass scales up to slightly above 100 TeV are possible under the same restriction.
In the right panel of Fig.~\ref{fig:tree_scales}, we show the maximum mass scale allowed in tree models which explain the central value of $\Delta a_{\mu}$ as a function of the predicted value of $d_{\mu}$. For each model, a given value for $d_{\mu}$ automatically fixes $R_{h\to\mu^{+}\mu^{-}}$. The dashed lines show the values of $d_{\mu}$ corresponding to $R_{h\to \mu^{+}\mu^{-}}= 2.2$ in a given model. Hatched regions show the corresponding range for $R_{h\to \mu^{+}\mu^{-}}= 1\pm 10\%$. Note that for $\mathcal{Q}=1$, the region where $R_{h\to \mu^{+}\mu^{-}}= 1\pm 10\%$ never occurs. Even for $d_{\mu} = 0$, the smallest value of $R_{h\to \mu^{+}\mu^{-}}$ is 1.32, assuming the central value of $\Delta a_{\mu}$. Interestingly, this implies that if future measurements of Higgs decays reach $R_{h\to \mu^{+}\mu^{-}}= 1\pm 10\%$, a measurement at FNAL or PSI of $|d_{\mu}|\neq 0$ up to their projected limits will rule out models with $\mathcal{Q}=1$, assuming the central value of $\Delta a_{\mu}$. However, values as low as $R_{h\to \mu^{+}\mu^{-}}=0.42$ are possible for $\Delta a_{\mu}-1\sigma$. Projected sensitivity to $|d_{\mu}|$ from Fermilab and PSI are shown with dash-dotted and dotted lines, respectively. Note that while models with $\mathcal{Q}=3,5$ would not be ruled out by FNAL, their range of validity as evaluated by the $k$ equation would be severely restricted, requiring $|d_{\mu}|\simeq 3\times 10^{-22}\; [e\cdot cm]$ and $|d_{\mu}|\simeq 4.5\times 10^{-22}\; [e\cdot cm]$, respectively. Again assuming $R_{h\to \mu^{+}\mu^{-}}= 1\pm 10\%$, if $|d_{\mu}|$ is \textit{not} seen at PSI, models with $\mathcal{Q}=3,5$ would be completely excluded.
For the loop models, we discussed in previous sections minimal UV completions involving either two new fermions and one scalar, or two new scalars and one fermion. For the FFS-type models, assuming a single scale of new physics, we have for Eq.~\ref{eq:WC_relation}
\begin{equation}
k=\frac{\left(b_{F}|\lambda_{YZ}|^{2}+b_S\lambda_{\phi}\right)}{\left(\sum_{F}a_{F}Q_{F} + \sum_{S}a_{S}Q_{S}\right)},
\label{eq:loop_k}
\end{equation}
and similarly in the SSF models with the replacement $\lambda_{YZ}\to A/M$, as discussed above. Ignoring, for the moment, contributions from quartic couplings we can identify the appropriate $k$ factors as
\begin{flalign}
k_{FFS}=&\frac{4}{\mathcal{Q}^{(\prime)}_{FFS}}|\lambda^{(\prime)}_{YZ}|^{2},\\
k_{SSF}=&\frac{4}{\mathcal{Q}_{SSF}}\frac{|A|^{2}}{M^{2}},
\end{flalign}
where the $\mathcal{Q}$'s and $\xi_{eH}$ factors are given in Table~\ref{table:loop_models}. The relation in the FFS models when $\lambda_{YZ} = \lambda_{YZ}'$ as discussed above would have the same form. We discuss corrections due to quartic couplings and other generalizations in the next section.
Finally, in the 2-field extensions of the bridge models we have
\begin{equation}
k=\frac{|\lambda_{F\mu}|^{2}}{\mathcal{Q}},
\end{equation}
where the $\mathcal{Q}$ values are given in Table~\ref{table:bridge_models}. We reiterate that this relation applies in regions of parameters where the loop contribution to $C_{\mu H}$ is dominant. This also corresponds to the region of parameters where a given model gives the largest contribution to $\Delta a_{\mu}$.
\begin{figure}[t]
\includegraphics[scale=0.65]{FFS_k_contours_coupling_mass_plane}
\caption{Contours of $k\mathcal{Q}^{3}$ in the FFS models with $SU(2)$ doublets and $\lambda_{YZ} = \lambda_{YZ}'$, as in the first two rows and fifth column of Table~\ref{table:loop_models}, defined in the plane of the scale of new physics needed to obtain the central value of $\Delta a_{\mu}$ with respect to the overall size of model couplings. The blue shaded region is excluded by the perturbativity limit $|\lambda_{YZ}|\leq \sqrt{4\pi}$ for $\mathcal{Q}=1$. The red, green, and purple regions show the corresponding exclusion regions for $\mathcal{Q}=2$, $\mathcal{Q}=3$ and $\mathcal{Q}=6$, respectively. The red and blue contours are highlighted to show where $R_{h\to \mu^{+}\mu^{-}}=0.9$ and $0.99$ for $\mathcal{Q}=1$, respectively.}
\label{fig:scales1}
\end{figure}
To demonstrate the correlation between $\Delta a_{\mu}$, $R_{h\to\mu^{+}\mu^{-}}$, and $d_{\mu}$ in FFS models we may write the dipole moments with respect to $k$ by
\begin{flalign}
\Delta a_{\mu}=&\frac{1}{96\pi^{2}}\left(\frac{m_{\mu}v}{M^{2}}\right)\left(\frac{k\mathcal{Q}_{FFS}^{3}\xi_{eH}^{2}}{4}\right)^{1/2}\textrm{Re}\left[\lambda_{Y}\lambda_{Z}e^{i\phi_{YZ}}\right],\label{eq:amu_k_loop}\\
|d_{\mu}|=&\frac{e}{192\pi^{2}}\left(\frac{v}{M^{2}}\right)\left(\frac{k\mathcal{Q}_{FFS}^{3}\xi_{eH}^{2}}{4}\right)^{1/2}\textrm{Im}\left[\lambda_{Y}\lambda_{Z}e^{i\phi_{YZ}}\right],\label{eq:dmu_k_loop}
\end{flalign}
where we have assumed a common scale of new physics and the $\xi_{eH}$ factors are defined in Table~\ref{table:loop_models}. For SSF models, the formulas are similar except for an overall sign in the analogous relation for $\Delta a_{\mu}$. We reiterate that the combined factor $\mathcal{Q}^{3}\xi_{eH}^{2}$ depends only on the choice of representations of new particles and is fixed for a given model.
In Fig.~\ref{fig:scales1}, we show contours of $k\mathcal{Q}^{3}$ for the FFS models with $\lambda_{YZ} = \lambda_{YZ}'$ in the plane of the common scale of new physics for a given overall size of couplings, this time defined as $\sqrt{|\lambda_{Y}\lambda_{Z}|}$, where we have eliminated the free parameter $\lambda_{YZ}$ in favor of $k\mathcal{Q}^{3}$. We have used the central value of $\Delta a_{\mu}$ and assumed $d_{\mu}=0$. The contours are drawn for models with $SU(2)$ doublets, as in the first two rows of Table~\ref{table:loop_models}. For models with triplets as in the bottom two rows, the contour labels should be multiplied by a factor of 25, resulting from the modified factors of $\mathcal{Q}^{3}\xi_{eH}^{2}$ in Eqs.~\ref{eq:amu_k_loop} and~\ref{eq:dmu_k_loop}. A larger scale of new physics requires a larger overall size of couplings and hence larger $k\mathcal{Q}^{3}$. For example, we see that a new physics scale of $\sim 10$ TeV would require at most $\mathcal{O}(1)$ couplings. In an FFS model, increasing $k\mathcal{Q}^{3}$ is achieved by adjusting $\lambda_{YZ}$ while respecting the limit on perturbativity at the scale of new physics, $|\lambda_{YZ}|\leq \sqrt{4\pi}$. For $\mathcal{Q}=1$, this coupling becomes nonperturbative in the shaded blue region. Similarly, the red, green, and purple shaded regions show the analogous exclusion regions for models with $\mathcal{Q}=2$, $\mathcal{Q}=3$ and $\mathcal{Q}=6$, respectively. For models with $\lambda_{YZ}=0$ ($\lambda^{\prime}_{YZ}=0$) the contours should be rescaled by $9/4~(1/4)$. The corresponding figure for SSF models looks similar with all contour labels divided by a factor of $4$, resulting from an additional factor of $1/4$ in the square root factors of Eqs.~\ref{eq:amu_k_loop} and~\ref{eq:dmu_k_loop}. We note that $k\mathcal{Q}^{3}\propto |A|^{2}/M^{2}$ in SSF models and there is no analogous perturbativity limit on $A$.
The red and blue contours in Fig.~\ref{fig:scales1} are highlighted to show where $R_{h\to \mu^{+}\mu^{-}}=0.9$ and $0.99$, respectively, when $\mathcal{Q}=1$. Note that these highlighted contours show that increasing precision in $R_{h\to \mu^{+}\mu^{-}}$ provides an upper limit on the scale of new physics which is stronger than that based on perturbativity, and that this upper limit decreases as $R_{h\to \mu^{+}\mu^{-}}\to 1$. Further, recall that $d_{\mu}\neq 0$ can only increase $R_{h\to \mu^{+}\mu^{-}}$ in the models we have discussed. Thus, for $d_{\mu}\neq 0$ it is expected that a given precision on $R_{h\to \mu^{+}\mu^{-}}$ provides an upper limit on the scale of new physics which is strictly less than the corresponding maximum scale shown in Fig.~\ref{fig:scales1}. We show the maximum scale defined in this way in Fig.~\ref{fig:scales2} with respect to $|d_{\mu}|$. The left (right) panel shows contours for the FFS models with $\mathcal{Q}=+1(-1)$. Note that the sign of $\mathcal{Q}$ is always the same as that of $k$. Thus, for $\mathcal{Q}=+1$ Eq.~\ref{eq:ellipse} allows two positive solutions for $k$ when $R_{h\to\mu^{+}\mu^{-}}<1$, whose contours are denoted by the solid and dashed lines. In contrast, Eq.~\ref{eq:ellipse} admits only one positive solution when $R_{h\to\mu^{+}\mu^{-}}>1$ for $\mathcal{Q}=+1$. For $\mathcal{Q}=-1$, there is only one solution for $k$ and we have that $R_{h\to\mu^{+}\mu^{-}}>1$. We also show the projected upper limits on $|d_{\mu}|$ from the experiment at Fermilab (dash-dotted) and PSI (dotted). The gray shaded regions show where all model couplings reach the limit of perturbativity at the scale $M_{max}$.
\begin{figure}[t]
\includegraphics[scale=0.5]{edm_mass_R_contours_FFS_Q1_pos}
\includegraphics[scale=0.5]{edm_mass_R_contours_FFS_Q1_neg}
\caption{Contours of $R_{h\to\mu^{+}\mu^{-}}$ with respect to $|d_{\mu}|$ and the maximum mass scale allowed by the correlation with $\Delta a_{\mu}$ given by the $k$ equation in the FFS models with $\lambda_{YZ} = \lambda_{YZ}'$ and $\mathcal{Q}=+1$ (left) and $\mathcal{Q}=-1$ (right). Projected sensitivity to $|d_{\mu}|$ from Fermilab and PSI are shown with dash-dotted and dotted lines, respectively. The gray shaded region shows where all model couplings would be nonperturbative.}
\label{fig:scales2}
\end{figure}
Focusing first on the left panel, $\mathcal{Q}=+1$, we see that for a given value of $|d_{\mu}|$, the current limit $R_{h\to\mu^{+}\mu^{-}}<2.2$ defines an upper limit on the scale of new physics of up to $\sim 8$ TeV before the model couplings become nonperturbative. Slightly higher scales are allowed depending on the future precision of Higgs decay measurements. In particular, allowing for $R_{h\to\mu^{+}\mu^{-}}$ within 10\% of the SM value, we see that the upper limit on the scale of new physics does not exceed $\sim 14$ TeV for any value of $|d_{\mu}|$. For $\mathcal{Q}=-1$ we see that $M_{max}$ is strictly decreasing as $R_{h\to\mu^{+}\mu^{-}}\to 1$. Similarly, for $R_{h\to\mu^{+}\mu^{-}}$ within 10\% of the SM value, the upper limit on the scale of new physics is $\sim 14$ TeV.
\begin{figure}[t]
\includegraphics[scale=0.5]{edm_mass_Q_pos_contours_FFS}
\includegraphics[scale=0.5]{edm_mass_Q_neg_contours_FFS}\\
\includegraphics[scale=0.5]{edm_mass_Q_pos_contours_SSF}
\includegraphics[scale=0.5]{edm_mass_Q_neg_contours_SSF}
\caption{Contours of $R_{h\to\mu^{+}\mu^{-}}$ with respect to $|d_{\mu}|$ and the maximum mass scale as in Fig.~\ref{fig:scales2} for different values of $\mathcal{Q}$. The top row shows contours in the FFS models with $\lambda_{YZ}=\lambda_{YZ}^{\prime}$ for $\mathcal{Q}>0$ (left) and $\mathcal{Q}<0$ (right). The bottom row shows the corresponding contours in the SSF models.}
\label{fig:scales3}
\end{figure}
In Fig.~\ref{fig:scales3}, we show similar contours as in Fig.~\ref{fig:scales2} for various values of $\mathcal{Q}$. In the top row, we show the upper limits on the scale of new physics in the FFS models for $\mathcal{Q}>0$ (left) and $\mathcal{Q}<0$ (right). In each case we show the upper limit on the scale of new physics for $R_{h\to \mu^{+}\mu^{-}}=0.9$ (dashed) and $R_{h\to \mu^{+}\mu^{-}}=0.99$ (solid). The dotted lines show the regions of nonperturbativity for a given model. We see that, within the parameter space allowed by perturbativity, increasing precision in $R_{h\to \mu^{+}\mu^{-}}$ can have an impact on the scale of new physics allowed for a given prediction of $|d_{\mu}|$. For instance, for models with $\mathcal{Q}=+1$ that satisfy the PSI bound on $|d_{\mu}|$, tightening $R_{h\to \mu^{+}\mu^{-}}=0.9\to0.99$ decreases the maximum scale of new physics providing a solution to $\Delta a_{\mu}$ from $\sim 14$ TeV to $\sim 8$ TeV.
We show the corresponding limits derived in the SSF models in the bottom row. The behavior of the solid and dashed contours follows the same logic as in Fig.~\ref{fig:scales2}. However, as mentioned, there is no analogous region of nonperturbativity for SSF models based on the overall size of couplings. One approach to evaluating the perturbativity limit of these models is to demand unitarity of $2\to 2$ scalar scattering amplitudes in a given model~\cite{Capdevilla:2021rwo,Goodsell:2018tti}. However, this bound depends strongly on the values of scalar quartic couplings in the model, which are beyond our considerations. Rather, we restrict the values of this coupling such that no particles in the scalar spectrum of the model become tachyonic. For SSF models the dotted lines show the bound from this requirement.
\section{Discussion}
\label{sec:discussion}
In the previous sections, we derived the so-called $k$ equation and explored its implications in the context of minimal models for mass-enhanced corrections to the muon anomalous magnetic moment. Here we elaborate on features that could appear in non-minimal models, or when including sub-dominant contributions, and on the expected corrections to the $k$ equation.
Tree models with heavy leptons in which the SM Higgs is only one component of an extended Higgs sector participating in EWSB present only a minor modification to our arguments. In this case, the $k$ equation will generically be parameterized by an additional free parameter related to the mixing in the Higgs sector. For example, in the case of a 2HDM type-II, this is determined by the ratio of vacuum expectation values of the two Higgs doublets, $\tan\beta$, see~\cite{Dermisek:2021ajd,Dermisek:2021mhi}, and the modification to Eq.~\ref{eq:tree_X_SM}, assuming that new leptons are the heaviest particles in the spectrum, becomes
\begin{equation}
k=\frac{64\pi^{2}}{\mathcal{Q}(1+\tan^{2}\beta)}.
\label{eq:tree_X_2HDM}
\end{equation}
For a more in-depth study of this case, including the pattern of corrections assuming arbitrary Higgs masses compared to new leptons and corrections from scalar quartic couplings, see [to appear].
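To get a feel for the size of this modification, one can evaluate Eq.~\ref{eq:tree_X_2HDM} at representative values of $\tan\beta$ (the values chosen here are purely illustrative). For $\mathcal{Q}=1$,
\begin{equation*}
k=\frac{64\pi^{2}}{1+\tan^{2}\beta} \approx 6.3\ (0.25) \quad \text{for } \tan\beta = 10\ (50),
\end{equation*}
so the normalization $k$ is rapidly suppressed at large $\tan\beta$.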
Another example, which is more generic, arises when considering sub-leading contributions to $C_{\mu\gamma}$. For instance, in the tree models corrections of order $(v/M)^{2}$ appear in the mass eigenstate basis from loops of gauge bosons where a mixing angle between light and heavy leptons is necessary to generate the diagram. In this case, we have
\begin{equation}
C_{\mu\gamma} \simeq \frac{k}{e}C_{\mu H} + \Delta.
\end{equation}
Note that if $\phi(\Delta)\neq\phi(C_{\mu H})$ then the overall phase of $C_{\mu\gamma}$ will be different from that of $C_{\mu H}$. This leftover phase, denoted by $\phi_{k}$ in Eq.~\ref{eq:ellipse}, has the effect of shifting the center of the ellipse. The same comment would apply to any non-minimal model with additional fields and couplings which could generate a $\Delta$-like term. For example, in tree models when $C_{\mu\gamma}$, and hence the muon mass, is generated by two sources of chiral enhancement we have
\begin{flalign}
C_{\mu\gamma}=C_{\mu\gamma}^{A}+C_{\mu\gamma}^{B}=\frac{k^{A}}{e}C_{\mu H}^{A} + \frac{k^{B}}{e}C_{\mu H}^{B},
\label{eq:AB_k}
\end{flalign}
where $C_{\mu\gamma}^{A,B}$ and $C_{\mu H}^{A,B}$ denote two independent corrections to either Wilson coefficient. We may then define $k$ by
\begin{flalign}
k^{AB}\equiv (k^{A}C_{\mu H}^{A} + k^{B}C_{\mu H}^{B})/C_{\mu H},
\end{flalign}
where $C_{\mu H}=C_{\mu H}^{A} + C_{\mu H}^{B}$. Then, we have
\begin{flalign}
C_{\mu\gamma} = \frac{k^{AB}}{e}C_{\mu H}.
\label{eq:gen_k}
\end{flalign}
Thus, clearly, if the phases $\phi\left(C_{\mu H}^{A}\right)\neq \phi\left(C_{\mu H}^{B}\right)$ then the phase of $C_{\mu\gamma}$ will be different from that of $C_{\mu H}$. This occurs, for instance, also in the tree models from the subdominant contribution proportional to $y_{\mu}$, if $y_{\mu} \in \mathbb{C}$. Eq.~\ref{eq:gen_k} holds also for loop models where multiple couplings may generate a chiral enhancement simultaneously, e.g. $\lambda_{YZ}$ and $\lambda^{\prime}_{YZ}$ in FFS models. In this case, $k^{AB}$ may not be separable as in Eq.~\ref{eq:AB_k} and will generally be complex, see the Appendix.
In these examples we see that generically $k\in \mathbb{C}$ when including additional corrections in non-minimal models or even from subdominant contributions. The effect of this is either to stretch the ellipse, when $|k|$ is modified, to shift the center of the ellipse, when $\phi_{k}$ is modified, or both.
As a final consideration, we have noted that scalar quartic couplings play a role in both FFS- and SSF-type models. These can also distort the ellipse in the ways we have described, for $\lambda_{\phi}\in \mathbb{C}$ in Eq.~\ref{eq:loop_k}. Additionally, scalar quartic couplings can play a role in tree models with heavy Higgs bosons.
\subsection{RG effects}
\label{sec:RG}
Up to this point our discussion has focused on correlations of dimension-six operators obtained at the matching scale $\Lambda$. Generally speaking, solutions to the measured discrepancy in $\Delta a_{\mu}$ via a chiral enhancement are attractive as they can generate the needed correction assuming perturbative couplings with a mass scale of new physics $\Lambda \gg 1$ TeV, and thus can easily evade direct and indirect constraints. However, with an increasing mass scale of new physics, or even a mild scale with couplings ranging up to the limit of perturbativity, RG-induced effects are expected to become increasingly important. We turn to a discussion of these effects in this section. We have followed the conventions of~\cite{Jenkins:2013zja,Jenkins:2013wua,Alonso:2013hga} where, in particular, $\dot{C}_{i}=16\pi^{2}\mu\,dC_{i}/d\mu$.
Wilson coefficients calculated at the matching scale, $\Lambda$, are evolved to lower energies via the renormalization group. For $\Lambda$ not too far above the weak scale (justifying the leading log expression), Wilson coefficients at $m_{Z}$ are given by
\begin{equation}
C_{i}(m_{Z})\simeq C_{i}(\Lambda) - \frac{1}{16\pi^{2}}\gamma_{ij}C_{j}(\Lambda)\log\left(\Lambda/m_{Z}\right),
\label{eq:RG}
\end{equation}
where $\gamma_{ij}$ is the anomalous dimension leading to the renormalization of $C_{i}$ due to $C_{j}$. The complete set of renormalization group equations for SMEFT in the Warsaw basis is given in~\cite{Jenkins:2013zja,Jenkins:2013wua,Alonso:2013hga}.
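As a minimal numerical sketch of Eq.~\ref{eq:RG} in Python (the anomalous-dimension matrix and Wilson coefficients below are toy placeholders, not the actual SMEFT values), the leading-log evolution can be implemented as follows.
\begin{verbatim}
import numpy as np

def run_leading_log(C_Lambda, gamma, Lambda, mu):
    # Leading-log solution of 16 pi^2 mu dC/dmu = gamma C:
    # C(mu) ~ C(Lambda) - gamma @ C(Lambda) * log(Lambda/mu) / (16 pi^2)
    return C_Lambda - (gamma @ C_Lambda) * np.log(Lambda / mu) / (16 * np.pi**2)

# Toy inputs (placeholders for illustration only):
gamma = np.array([[8.0, 30.0],
                  [0.1, 3.0]])
C_Lambda = np.array([1.0e-6, 1.0e-6])  # Wilson coefficients at the matching scale
print(run_leading_log(C_Lambda, gamma, Lambda=1.0e4, mu=91.2))  # scales in GeV
\end{verbatim}
The off-diagonal entries of $\gamma$ are what drive the operator mixing discussed below.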
The RG evolution of lepton dipole operators has recently been discussed extensively in~\cite{Aebischer:2021uvt}. There it was pointed out that for a large enough scale of new physics RG mixing of the four-fermion operator $C^{(3)}_{lequ}\epsilon_{jk}(\bar{l}_{L})^{j}\sigma_{\mu\nu}\mu_{R}(\bar{q})^{k}\sigma^{\mu\nu}u_{R}$ (contraction is with respect to doublet components) with the dipole operators can be competitive with the tree level term in Eq.~\ref{eq:RG}. This effect is then expected to be relevant when $C^{(3)}_{lequ}$ is generated at tree level at the matching scale, such as what occurs in scalar leptoquark models~\cite{deBlas:2017xtg,Feruglio:2018fxo,Gherardi:2020det,Aebischer:2021uvt}.
With this in mind, the question of the radiative stability of Eq.~\ref{eq:WC_relation} comes into play. Retaining only the effects driven by the SM top Yukawa coupling, the relevant RGEs simplify to
\begin{flalign}
\dot{C}_{\mu H}&\simeq 4N_{C}y_{t}^{3}C^{(1)}_{lequ} + 3N_{C}y_{t}^{2}C_{\mu H},\\
\dot{C}_{\mu\gamma} &\simeq \frac{10}{3}eN_{C}y_{t}C^{(3)}_{lequ} + N_{C}y_{t}^{2}C_{\mu\gamma},
\end{flalign}
where $N_{C}=3$ is the number of colors for SM quarks.
In the absence of four-fermion operators, we see that the RG effects for $C_{\mu H}$ and $C_{\mu \gamma}$ are driven by the operators themselves, and no operator mixing is present in this limit. It is clear, then, that the RG equation for $k$ as defined by Eq.~\ref{eq:WC_relation} will also be proportional to $k$ itself, and we find a simple approximate solution for the running of this parameter
\begin{equation}
k(\mu)\simeq k(\Lambda)\left(\frac{\Lambda}{\mu}\right)^{-\frac{N_{C}y_{t}^{2}}{8\pi^{2}}}.
\end{equation}
Thus, for new physics at $\Lambda = 1 - 10$ TeV, $k(m_{Z})$ receives RG corrections of $\sim10\%$.
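For a rough numerical check (taking $y_{t}\simeq 0.94$ as an illustrative value for the running top Yukawa coupling), the exponent is $N_{C}y_{t}^{2}/8\pi^{2}\approx 0.034$, so
\begin{equation*}
\left(\frac{\Lambda}{m_{Z}}\right)^{-N_{C}y_{t}^{2}/8\pi^{2}} \approx 0.92\ (0.85) \quad \text{for } \Lambda = 1\ (10)\ \text{TeV},
\end{equation*}
i.e., an $8\%$ ($15\%$) shift of $k$, consistent with the quoted $\sim 10\%$ corrections.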
In the opposite limit, where $C^{(1)}_{lequ}$ and $C^{(3)}_{lequ}$ dominate the RG evolution we have
\begin{equation}
\dot{k}\simeq \frac{eN_{C}y_{t}}{C_{\mu\gamma}}\left(4y_{t}^{2}C^{(1)}_{lequ}-\frac{10}{3}kC^{(3)}_{lequ}\right).
\label{eq:k_RG_FF}
\end{equation}
Schematically, we see that in models where $C^{(1)}_{lequ}$ and $C^{(3)}_{lequ}$ are generated at tree level, the running of Eq.~\ref{eq:WC_relation} can be large, $\dot{k}\sim 16\pi^{2}$. It is worth mentioning that in scalar leptoquark models with a single new particle, $C^{(1)}_{lequ} \propto C^{(3)}_{lequ}$ (see, for example, the models discussed in~\cite{Aebischer:2021uvt}). Further, the chirally-enhanced contributions to $C_{\mu\gamma}$ are generated by the same couplings and $C^{(1)}_{lequ} \propto C^{(3)}_{lequ} \propto C_{\mu\gamma}$. Thus, the RG evolution defined by Eq.~\ref{eq:k_RG_FF} generates large corrections to $k$, but is determined solely by SM couplings.
\section{Conclusions}
\label{sec:conclusions}
In this paper, we have argued that in almost all classes of models which generate mass-enhanced corrections to the muon anomalous magnetic moment, the effective operator which generates the muon dipole moment is correlated with the operator which generates a correction to the muon mass, where the latter manifests as deviations from the SM prediction for $h\to\mu^{+}\mu^{-}$. At the matching scale, these operators are dictated by couplings of the SM Higgs to new fermions or new scalars. Thus, the resulting low-energy predictions are sensitive probes of dynamics of Higgs couplings beyond the SM.
The correlation between the dipole operators with corrections to the muon mass leads to a correlation between the ratio of $h\to\mu^{+}\mu^{-}$ compared to that in the SM, $R_{h\to\mu^{+}\mu^{-}}$, and the electric and magnetic dipole moments of the muon, $\Delta a_{\mu}$ and $d_{\mu}$, which we refer to as the $k$-equation. Future measurements of $R_{h\to\mu^{+}\mu^{-}}$ are expected to reach a precision of $\mathcal{O}(1)\%$ within the SM prediction. The correlation to $\Delta a_{\mu}$, assuming that the measured deviation persists, leads to predictions of $d_{\mu}$ which in many models are within reach of upcoming experiments.
Our study shows that in the context of the minimal models we have considered, the pattern of deviation defined by the muon ellipse allows one to set upper limits on the scale of new physics that can be stronger than more general considerations such as perturbative unitarity or tachyonic particles. In particular, we find that for FFS- and SSF-type models which can explain the central value of $\Delta a_{\mu}$, future measurements of $R_{h\to \mu^{+}\mu^{-}}$ reaching 10\% to 1\% can reduce the maximum mass scale of new physics by close to a factor of two within this projected range of precision, conservatively assuming that the limit on $|d_{\mu}|$ is probed at least to the FNAL projection. Thus, we advocate that within the class of solutions to $\Delta a_{\mu}$ involving a chirally-enhanced dipole moment, the correlation of the predictions of $R_{h\to \mu^{+}\mu^{-}}$ and $|d_{\mu}|$ defined by the muon ellipse can have a significant impact on the allowed parameter space in a given model.
For simplicity and breadth, we have restricted our discussion to simplified models which contain the minimal particle content to generate mass-enhanced corrections to $\Delta a_{\mu}$. However, the $k$ equation has applications in a variety of well-motivated UV completions. Vectorlike leptons are commonplace in both non-supersymmetric and supersymmetric GUTs, the spectrum of the FFS models is prototypical in extensions of the SM with scalar leptoquarks (where the fermions in the loop correspond to the left- and right-handed top quarks), and the spectrum and corresponding corrections in the SSF-type models are analogous to SUSY corrections arising from smuon-Bino loops in the minimal supersymmetric SM. Going beyond these examples, our discussion of models with $SU(2)$ triplets, more generic hypercharges, and bridge-type models essentially covers an infinite class of possible models in which the $k$ equation has phenomenological implications.
It is worth noting that in this work we have elaborated on the implications of the correlation provided by the $k$ equation while relying minimally on the details of the models which are needed to generate chirally-enhanced corrections to $C_{\mu\gamma}$ and $C_{\mu H}$. In a more complete setting other constraints may come into play. For example, in loop models we have largely ignored details of the scalar potential and, as we have commented, this may provide further constraints with respect to perturbative unitarity. Similarly, some models may also generate the dimension-5 operator $l_{L}l_{L}HH$, providing a connection to the neutrino sector of the SM and associated phenomenology. These and similar considerations, though, are not generic to the full classes of models we have discussed.
Many puzzles remain regarding the second generation SM fermions. In the near future, the couplings of the muon in particular will be intensely scrutinized. In this work, we have outlined a novel way to consider corrections to $\Delta a_{\mu}$, $R_{h\to\mu^{+}\mu^{-}}$, and $d_{\mu}$ from new physics, informing the pattern of deviations expected in the presence of a signal.
\acknowledgments
The work of R.D. was supported in part by the U.S. Department of Energy under Award No. {DE}-SC0010120. TRIUMF receives federal funding via a contribution agreement with the National Research Council of Canada.
\subsection{Determining $b(x)$ and $h(x)$ on the boundary}\label{subsec_boundary}
In this part,
we would like to determine the jets of the one-form $b$ and the potential $h$ on the subset $(0,T) \times \partial \Omega$ of the boundary, from the first-order linearization ${\partial_{\epsilon_1} \LbF(\ep_1 \bullet) |_{\epsilon_1=0}}$.
This result is proved in \cite{Stefanov2018} for the wave operator with a magnetic field, which corresponds to a slightly different smooth one-form.
Here we present the proof for completeness.
Indeed, by the asymptotic expansion in $(\ref{expand_u})$, we have ${\partial_{\epsilon_1} \LbF(\ep_1 f_1) |_{\epsilon_1=0}} = v_1|_{\partial M}$, where $v_1$ solves the boundary value problem for the linear wave equation (\ref{eq_v1}) with Dirichlet data $f_1$.
This implies ${\partial_{\epsilon_1} \LbF(\ep_1 \bullet) |_{\epsilon_1=0}}$ is the DN map for the linear wave equation.
In \cite{Stefanov2018}, it is proved that the jets of the metric, the magnetic field, and the potential are determined from the DN map in a stable way, up to a gauge transformation, for the linear problem.
Here we assume the metric is known and we would like to recover the jets of $b$ and $h$
on the boundary, up to a gauge transformation, as a special case of \cite{Stefanov2018}.
More explicitly, suppose there are smooth one-forms $b^{(k)}$ and smooth functions $h^{(k)}$, for $k=1,2$.
Consider the corresponding DN map $\Lambda_{b,h,F}^{(k)}$ for the nonlinear problem (\ref{eq_problem}), for $k = 1,2$.
\begin{lm}\label{lm_boundary}
If the DN maps satisfy
\[
\Lambda_{\bone, \hone, \Fone}(f) = \Lambda_{\btwo, \htwo, \Ftwo}(f)
\]
for any $f$ in a small neighborhood of the zero function in $C^6((0,T) \times \partial \Omega)$,
then there exists a smooth function $\varrho$ on $M$ with $\varrho|_{(0,T) \times \partial \Omega} = 1$
such that for $j = 0, 1, 2, \ldots$ we have
\begin{align*}
\partial_\nu^j (\langle b^{(2)}, \nu \rangle)|_{(0,T) \times \partial \Omega} &=
\partial_\nu^j (\langle b^{(1)} - 2\varrho^{-1} \diff \varrho, \nu \rangle)|_{(0,T) \times \partial \Omega},\\
\partial_\nu^j h^{(2)}|_{(0,T) \times \partial \Omega} &=
\partial_\nu^j (h^{(1)} - \langle b^{(1)}, \varrho^{-1} \diff \varrho \rangle - \varrho^{-1} \square_g \varrho)|_{(0,T) \times \partial \Omega}.
\end{align*}
\end{lm}
\begin{proof}
First, we fix some $(y_|, \eta_|) \in T^*(\partial M)$, where $y_| \in (0,T) \times \partial \Omega$ and $\eta_|$ is {a future-pointing timelike covector}.
There exists a unique $(y, \eta) \in L^*_{+, \partial M} M$
such that $(y_|, \eta_|)$ is the orthogonal projection of $(y, \eta)$ to $\partial M$.
In the following, we consider the semi-geodesic normal coordinates $(x_|, x^3)$ near $y \in \partial M$.
The dual variable is denoted by $(\xi_|, \xi_3)$.
Moreover, in this coordinate system the metric tensor $g$ takes the form
\[
g(x) = g_{\alpha \beta} (x)\diff x^\alpha \otimes \diff x^\beta + \diff x^3 \otimes \diff x^3, \quad \alpha, \beta \leq 2.
\]
The normal vector on the boundary is locally given by $\nu = (0,0,0,1)$
and we write $\partial_\nu = \partial_3$.
For more details about the semi-geodesic coordinates see \cite[Lemma 2.3]{Stefanov2018}.
Second, by \cite[Lemma 2.5]{Stefanov2018}, there exist smooth functions $\psi^{(k)}$ with $\psi^{(k)}|_{(0,T) \times \partial \Omega} = 0$ such that in the semi-geodesic normal coordinates,
one has
\begin{align*}
\partial_3^j(\langle b^{(k)} - \diff \psi^{(k)}, \nu \rangle)|_{x^3 = 0} = 0, \quad j = 0,1,2, \ldots.
\end{align*}
We write $b^{(k)}_3(x_|, 0) = \langle b^{(k)}(x_|, 0), \nu \rangle$ and we can assume
\[
\partial_3^j b^{(k)}_3(x_|, 0) = 0, \quad k = 1,2
\]
without loss of generality.
Indeed, if it is not true, we can replace $b^{(k)}$ by $b^{(k)} - \diff \psi^{(k)}$ and $h^{(k)}$ by $h^{(k)} - \langle b^{(k)}, (\varrho^{(k)})^{-1} \diff \varrho^{(k)} \rangle - (\varrho^{(k)})^{-1} \square_g \varrho^{(k)}$,
where we set $\varrho^{(k)} = e^{\frac{1}{2}\psi^{(k)}}$.
By Lemma \ref{lm_gauge}, the linearized DN maps do not change.
Let $(y_|, \eta_|) \in T^*(\partial M)$ be fixed as above.
We focus on a small conic neighborhood $\Gamma_\partial$ of $(y_|, \eta_|)$.
Let $\chi(x_|, \xi_|)$ be a smooth cutoff function homogeneous in $\xi_|$ of degree zero, supported in $\Gamma_\partial$.
Suppose $\chi(x_|, \xi_|)= 1$ near $(y_|, \eta_|)$.
Consider the DN map $\Upsilon^{(k)}$ for the linear problem (\ref{bvp_qg}) with $b^{(k)}, h^{(k)}$, $k = 1,2$.
From the first-order linearization of $\Lambda_{\bk, \Fk, \hk}$ for $k=1,2$, we have
\[
\Upsilon^{(1)}(f) = \Upsilon^{(2)}(f)
\]
for $f \in C^6((0,T) \times \partial \Omega)$ with small data.
In particular,
since there are no periodic null geodesics, one can consider the microlocal version of $\Upsilon^{(k)}$, i.e., the
map from $f_1 \in \mathcal{E}'(\partial M)$ to
$v_1|_{\partial M}$ restricted near $(y_|, \eta_|)$,
with ${\text{\( \WF \)}}(f_1) \subset \Gamma_\partial$ and
\[
\square_g v_1 \in C^\infty(M) \text{ near } y, \quad v_1|_{\partial M} = f_1 \mod C^\infty(M).
\]
In the rest of the proof, we abuse the notation $\Upsilon^{(k)}$ to denote its microlocal version.
We follow the proof of \cite[Theorem 3.2]{Stefanov2018}.
One can choose a specially designed function
\[
h(x_|) =e^{i\lambda x_| \cdot \xi_|} \chi(x_|, \xi_|)
\]
with large parameter $\lambda$,
where $\chi$ is the smooth cutoff function supported near $(y_|, \eta_|)$ that we defined before.
For $k =1,2$, we construct a sequence of geometric optics approximations of the local outgoing solutions near $(y_|, \eta_|)$ of the form
\[
u_{N}^{(k)} (x) = e^{i \lambda \phi^{(k)}(x, \xi_|)} a^{(k)}(x, \xi_|)= e^{i \lambda \phi^{(k)}(x, \xi_|)} \sum_{j=0}^{N} \frac{1}{\lambda^j}a^{(k)}_{j}(x, \xi_|),
\]
where $\phi^{(k)}(x, \xi_|)$ is the phase function
and $a^{(k)}(x, \xi_|) $ is the amplitude with the asymptotic expansion $a^{(k)} = \sum_{j \geq 0} a^{(k)}_j$.
Here we assume each $a^{(k)}_j(x, \xi_|)$ is homogeneous in $\xi_|$ of order $-j$.
We plug the ansatz into the linear equation to compute
\begin{align*}
&\square_g u_{N}^{(k)} + \langle b^{(k)}(x), \nabla u_{N}^{(k)} \rangle + h^{(k)}(x) u_{N}^{(k)}\\
= & e^{i \lambda \phi^{(k)}}
(-\lambda^2|\nabla \phi^{(k)}|_g^2 a^{(k)}
+ \lambda (2i\langle \nabla \phi^{(k)}, \nabla a^{(k)} \rangle
+ i \square_g \phi^{(k)} a^{(k)}
+ i \langle b^{(k)}, \nabla \phi^{(k)} \rangle a^{(k)})\\
&
+ (\square_g a^{(k)} + \langle b^{(k)}, \nabla a^{(k)} \rangle + h^{(k)} a^{(k)})).
\end{align*}
Note the phase functions $\phi^{(k)}$
satisfy the same eikonal equation with the same initial condition
\begin{align*}
\partial_3\phi^{(k)}(x) = \sqrt{-g^{\alpha\beta}(x) \partial_\alpha \phi^{(k)}(x) \partial_\beta \phi^{(k)} (x)}, \text{ for } \alpha, \beta \leq 2, \quad
\phi^{(k)}(x_|,0) = x_|\cdot \xi_|.
\end{align*}
This implies that $\phi^{(1)} = \phi^{(2)}$ and thus we denote them by $\phi$.
Next, the amplitude satisfies the transport equation with the initial condition
\begin{align*}
X^{(k)} a^{(k)}_0 &= 0, \quad a^{(k)}_0 (x_|, 0, \xi_|)
= \chi(x_|, \xi_|),\\
X^{(k)} a^{(k)}_j &= r_j, \quad a^{(k)}_j(x_|, 0, \xi_|) = 0, \text{ for } j > 0.
\end{align*}
Here we write
\[
X^{(k)} = i(2 g^{\alpha \beta} \partial_\alpha \phi\partial_\beta + \langle b^{(k)}, \nabla \phi \rangle + \square_g \phi),
\]
and $r_j$ is a term involving derivatives of $a^{(k)}_{0}, a^{(k)}_{1}, \ldots, a^{(k)}_{j-1}$ and $\phi$ of order no more than $j$.
In semi-geodesic coordinates $(x_|, x^3)$,
one has $g^{3\alpha} = \delta_{3\alpha}$.
Then the first transport equation can be written as
\begin{align}\label{eq_a0}
(2 \partial_3 \phi \partial_3 + \sum_{\alpha, \beta \leq 2} b^{(k)}_\alpha g^{\alpha \beta} \partial_\beta \phi ) a^{(k)}_{0}
+ (\sum_{\alpha, \beta \leq 2} 2 g^{\alpha \beta} \partial_\alpha \phi \partial_\beta + b^{(k)}_3 \partial_3 \phi
+ \square_g \phi) a^{(k)}_{0} = 0.
\end{align}
Here we reorganize the left-hand side into two groups.
When restricting the left-hand side to $x^3 = 0$, we would like to show that the terms in the second group are the same for $k = 1, 2$.
Indeed, recall that we assume $b^{(k)}_3(x_|, 0) = 0$ without loss of generality.
Moreover, with $a^{(k)}_{0}(x_|, 0, \xi_|) = \chi(x_|, \xi_|)$, we have
\[
(\sum_{\alpha, \beta \leq 2} 2 g^{\alpha \beta} \partial_\alpha \phi \partial_\beta + \square_g \phi) a^{(1)}_{0}(x_|, 0, \xi_|) = (\sum_{\alpha, \beta \leq 2} 2 g^{\alpha \beta} \partial_\alpha \phi \partial_\beta + \square_g \phi)a^{(2)}_{0}(x_|, 0, \xi_|).
\]
It follows that
\begin{align}\label{eq_tr0}
(2 \partial_3 \phi \partial_3 + \sum_{\alpha, \beta \leq 2} b^{(1)}_\alpha g^{\alpha \beta} \partial_\beta \phi ) a^{(1)}_{0}
=
(2 \partial_3 \phi \partial_3 + \sum_{\alpha, \beta \leq 2} b^{(2)}_\alpha g^{\alpha \beta} \partial_\beta \phi ) a^{(2)}_{0}
\end{align}
when $x^3 = 0$.
On the other hand, the local DN map is given by
\begin{align*}
\Upsilon^{(k)}(h) = - e^{i\lambda x_| \cdot \xi_|}
(i \lambda \partial_3 \phi(x_|,0,\xi_|)\, a^{(k)}(x_|,0, \xi_|)
+ \sum_{j=0}^N \frac{1}{\lambda^{j}} (\partial_3 - \frac{1}{2} \langle b^{(k)}, \nu \rangle)
a^{(k)}_{j}(x_|,0, \xi_|) + O(\lambda^{-N-1})).
\end{align*}
Recall $a^{(k)}_{0}(x_|,0, \xi_|) = \chi(x_|, \xi_|) = 1$ near $y$ and we have $b^{(k)}_3(x_|,0) = 0$ for $k = 1,2$.
By comparing $\Upsilon^{(k)}$ up to $O(\lambda^{-1})$, we have
\begin{align}\label{eq_Upk}
\partial_3
a^{(1)}_{0}(x_|,0, \xi_|)
=
\partial_3
a^{(2)}_{0}(x_|,0, \xi_|),
\end{align}
since $\langle b^{(k)}, \nu \rangle|_{x^3 = 0} = b^{(k)}_3(x_|,0) = 0$.
Then by an inductive procedure, by comparing $\Upsilon^{(k)}$ up to $O(\lambda^{-j-1})$, we have
\begin{align}\label{eq_a1}
\partial_3
a^{(1)}_{j}(x_|,0, \xi_|)
=
\partial_3
a^{(2)}_{j}(x_|,0, \xi_|).
\end{align}
Note that $\partial_\alpha \phi(x_|, 0) = \xi_\alpha$, when $\alpha = 0,1,2$.
Combining (\ref{eq_tr0}) and (\ref{eq_Upk}), we have
\[
\sum_{\alpha, \beta \leq 2} b^{(1)}_\alpha(x_|, 0) g^{\alpha \beta} \xi_\beta
=\sum_{\alpha, \beta \leq 2} b^{(2)}_\alpha(x_|, 0) g^{\alpha \beta} \xi_\beta.
\]
By perturbing $\xi_|$, i.e., choosing three linearly independent covectors, we can show $b^{(1)}_\alpha(x_|, 0) = b^{(2)}_\alpha(x_|, 0)$ for $\alpha = 0,1,2$.
Next, we would like to determine $h^{(k)}$ and $\partial_3 b^{(k)}$ on the boundary.
The transport equation for $a^{(k)}_1$ can be written as
\begin{align*}
&i(2 \partial_3 \phi \partial_3 + \sum_{\alpha, \beta \leq 2} b^{(k)}_\alpha g^{\alpha \beta} \partial_\beta \phi ) a^{(k)}_{1}
+ i(\sum_{\alpha, \beta \leq 2} 2 g^{\alpha \beta} \partial_\alpha \phi \partial_\beta + b^{(k)}_3 \partial_3 \phi
+ \square_g \phi) a^{(k)}_{1} \\
=& - \square_g a^{(k)}_0 - \langle b^{(k)}, \nabla a^{(k)}_0 \rangle - h^{(k)} a^{(k)}_0.
\end{align*}
Restricting each term above to the boundary, the second group of terms on the left-hand side vanishes, since $a^{(k)}_1(x_|, 0) = 0$.
With $b^{(1)}(x_|, 0) = b^{(2)}(x_|, 0)$ and $a^{(k)}_0(x_|, 0) = 1$ near $y$, we have
\begin{align}\label{eq_a1h}
2i \partial_3 \phi \partial_3 a^{(k)}_1(x_|,0, \xi_|)
= \partial_3^2a^{(k)}_0(x_|,0, \xi_|)- h^{(k)}(x_|, 0).
\end{align}
In addition, we can differentiate the first transport equation (\ref{eq_a0}) on both sides to have
\begin{align*}
& \partial_3\big((2 \partial_3 \phi \partial_3 + \sum_{\alpha, \beta \leq 2} b^{(k)}_\alpha g^{\alpha \beta} \partial_\beta \phi ) a^{(k)}_{0}
+ (\sum_{\alpha, \beta \leq 2} 2 g^{\alpha \beta} \partial_\alpha \phi \partial_\beta + b^{(k)}_3 \partial_3 \phi
+ \square_g \phi) a^{(k)}_{0}\big) = 0\\
\Rightarrow
&( 2 \partial_3 \phi \partial_3^2 + \sum_{\alpha, \beta \leq 2} \partial_3b^{(k)}_\alpha g^{\alpha \beta} \partial_\beta \phi )a^{(k)}_{0}
+ R(\partial\phi, \partial^2 \phi, \partial_3 a^{(k)}_{0}, \partial_3 b^{(k)}_3 ) = 0,
\end{align*}
where $R$ contains all the remaining terms, which depend only on $\partial\phi, \partial^2 \phi, a^{(k)}_0, \partial_3 a^{(k)}_{0}, \partial_3 b^{(k)}_3$.
When restricted to the boundary, these terms are the same for $k = 1, 2$.
This implies that
\begin{align}\label{eq_a0b}
2 \xi_3 \partial_3^2 a^{(1)}_0 (x_|,0, \xi_|)
+ \sum_{\alpha, \beta \leq 2} \xi_\beta g^{\alpha \beta} \partial_3b^{(1)}_\alpha(x_|, 0)
=
2 \xi_3 \partial_3^2 a^{(2)}_0 (x_|,0, \xi_|)
+ \sum_{\alpha, \beta \leq 2} \xi_\beta g^{\alpha \beta} \partial_3b^{(2)}_\alpha(x_|, 0),
\end{align}
where we write $\partial_j \phi = \xi_j$, for $j = 1,2,3$.
Combining (\ref{eq_a1}), (\ref{eq_a1h}), and (\ref{eq_a0b}), we have
\[
2 \xi_3 (h^{(1)} - h^{(2)})(x_|, 0) + \sum_{\alpha, \beta \leq 2} \xi_\beta g^{\alpha \beta} (\partial_3b^{(1)}_\alpha - \partial_3b^{(2)}_\alpha)(x_|, 0)
= 0.
\]
By the eikonal equation, the covector $\xi$ satisfies
$|\xi|_g = 0$, which implies it is lightlike. Then we can perturb the fixed $\xi$ to get four lightlike covectors $\xi^l$, $l=1,2,3,4$, such that the equation above gives us a nondegenerate linear system of four equations.
This implies
\[
h^{(1)}(x_|, 0) = h^{(2)} (x_|, 0), \quad \partial_3b^{(1)}_\alpha(x_|, 0) = \partial_3b^{(2)}_\alpha(x_|, 0).
\]
Then, to determine the higher-order derivatives of $h^{(k)}$ and $b^{(k)}$, we can repeat the same analysis as above.
\end{proof}
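As a plausibility check of the nondegeneracy argument at the end of the proof, the following sketch works in a flat Minkowski example (an assumption of this illustration: full metric $\mathrm{diag}(-1,1,1,1)$ in semi-geodesic coordinates, so $g^{\alpha\beta} = \mathrm{diag}(-1,1,1)$ for $\alpha,\beta \leq 2$). It picks four lightlike covectors with timelike boundary projections and verifies that the resulting $4\times 4$ system for $(h^{(1)}-h^{(2)},\, \partial_3 b^{(1)}_\alpha - \partial_3 b^{(2)}_\alpha)$ is nondegenerate.
\begin{verbatim}
import numpy as np

def lightlike(r, theta):
    # Lightlike covector with timelike boundary projection:
    # xi_| = (1, r cos(theta), r sin(theta)) with r < 1, and
    # xi_3 = sqrt(xi_0^2 - xi_1^2 - xi_2^2) from the eikonal equation.
    xi0, xi1, xi2 = 1.0, r * np.cos(theta), r * np.sin(theta)
    return np.array([xi0, xi1, xi2, np.sqrt(xi0**2 - xi1**2 - xi2**2)])

def row(xi):
    # Coefficients of (h-difference, d3 b_0, d3 b_1, d3 b_2) in
    # 2 xi_3 (h-diff) + sum_{alpha,beta} xi_beta g^{alpha beta} (d3 b_alpha-diff) = 0.
    return np.array([2 * xi[3], -xi[0], xi[1], xi[2]])

params = [(0.0, 0.0), (0.5, 0.0), (0.5, np.pi / 2), (0.5, np.pi)]
M = np.array([row(lightlike(r, th)) for r, th in params])
print(np.linalg.det(M))  # nonzero, so the four equations are nondegenerate
\end{verbatim}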
\subsection{Extending $b(x)$ and $h(x)$}\label{subsec_extension}
In this subsection, we smoothly extend the unknown one-forms $b^{(k)}(x)$ and the unknown potentials $h^{(k)}(x)$ across the boundary, for $k=1,2$.
Recall $V = (0,T) \times \Omega_\mathrm{e} \setminus \Omega$.
As before, we fix some $(y, \eta) \in L^*_{+, \partial M} M$ on the boundary
and consider the semi-geodesic normal coordinates $(x_|, x^3)$ near $y \in \partial M$.
By using a partition of unity, we focus on a small neighborhood of $y$.
First, we extend $b^{(1)}, h^{(1)}$ in a small collar neighborhood of $\partial M$ near $y$.
We denote their extension by $\tilde{b}^{(1)}, \tilde{h}^{(1)}$.
By Lemma \ref{lm_boundary}, there exists a smooth function $\varrho$ on $M$ with $\varrho|_{(0,T) \times \partial \Omega} = 1$
such that the derivatives of $b^{(2)}$ and of $b^{(1)} - 2 \varrho^{-1}\diff \varrho$ coincide to all orders on $\partial M$.
This implies that if we extend $\varrho$ smoothly across the boundary to $\tilde{\varrho}$, then
there exists a smooth extension $\tilde{b}^{(2)}$ of $b^{(2)}$ such that
\[
\tilde{b}^{(2)} = \tilde{b}^{(1)} - 2 \tilde{\varrho}^{-1}\diff \tilde{\varrho}, \quad \text{for any } x \in V.
\]
But note that in $M$, they may not coincide.
Similarly, we extend $h^{(1)}$ smoothly to $\tilde{h}^{(1)}$
and there exists a smooth extension $\tilde{h}^{(2)}$ of $h^{(2)}$ such that
\[
\tilde{h}^{(2)} = \tilde{h}^{(1)} - \langle \tilde{b}^{(1)}, \tilde{\varrho}^{-1} \diff \tilde{\varrho} \rangle - \tilde{\varrho}^{-1} \square_g \tilde{\varrho}, \quad \text{for any } x \in V.
\]
In particular, one can shrink ${M_{\text{e}}}$ if necessary, such that the extensions of $b(x)$ and $h(x)$ are defined in ${M_{\text{e}}}$.
\section{Gauge invariance}\label{sec_gauge}
\begin{lm}\label{lm_gauge}
Let $(M,g)$ be defined as in Theorem \ref{thm}.
Suppose $b(x) \in C^\infty(M; T^*M)$, $h(x) \in C^\infty(M)$, and $F$ is the nonlinear term given by
$\sum_{m=1}^{+\infty} \beta_{m+1}(x) \partial_t^2 (p^{m+1})$,
with smooth $\beta_{m+1}$ for $m \geq 1$.
Let $\varrho \in C^\infty(\Omega)$ be nonvanishing with $\varrho|_{\partial \Omega} = 1$.
We define
\begin{align*}
b^\varrho = b + 2 \varrho^{-1}\diff \varrho, \quad
h^\varrho = h + \langle b(x), \varrho^{-1} \diff \varrho \rangle + \varrho^{-1} \square_g \varrho, \quad
F^\varrho(x, p,\partial_t p, \partial^2_t p) = \sum_{m=1}^{+\infty} \varrho^{{m}} \beta_{m+1} \partial_t^2 (p^{m+1}).
\end{align*}
Then we have
\begin{align*}
\Lambda_{b,h,F}(f)= \Lambda_{b^\varrho, h^\varrho, F^\varrho }(f)
\end{align*}
for any $f$ in a small neighborhood of the zero function in $C^6([0,T] \times \partial \Omega)$.
\end{lm}
\begin{proof}
For a fixed $f$ with small data, let $p$ be the solution to the boundary value problem (\ref{eq_problem}).
We write
$ p = \varrho \tilde{p}$ and we compute
\begin{align*}
\square_g p
&= \varrho \square_g \tilde{p} + 2 \langle \nabla \varrho, \nabla \tilde{p} \rangle + \tilde{p}\square_g \varrho
= \varrho (\square_g \tilde{p} + 2\langle \varrho^{-1} \nabla \varrho, \nabla \tilde{p} \rangle + (\varrho^{-1}\square_g \varrho)\tilde{p}).
\end{align*}
Note that
$\partial_t^2 (p^{m+1})
= \varrho^{m+1} \partial_t^2 (\tilde{p}^{m+1})$,
since we assume $\varrho$ does not depend on $t$.
It follows that
\begin{align*}
\sum_{m=1}^{+\infty} \beta_{m+1} \partial_t^2 (p^{m+1})
= \varrho \sum_{m=1}^{+\infty} \varrho^{m} \beta_{m+1} \partial_t^2 (\tilde{p}^{m+1})
=\varrho F^\varrho (x, \tilde{p},\partial_t \tilde{p}, \partial^2_t \tilde{p}).
\end{align*}
Then we compute
\begin{align*}
&\square_g p
+ \langle b, \nabla p \rangle + h (x) p
- F(x, p,\partial_t p, \partial^2_t p) \\
= &
\varrho (\square_g \tilde{p} + \langle b(x) + 2\varrho^{-1} \nabla \varrho, \nabla \tilde{p} \rangle + (\varrho^{-1} \square_g \varrho
+ \langle b(x), \varrho^{-1} \nabla \varrho \rangle
+ h)\tilde{p} -
F^\varrho (x, \tilde{p},\partial_t \tilde{p}, \partial^2_t \tilde{p})
)\\
= & \varrho (\square_g \tilde{p} + \langle b^\varrho, \nabla \tilde{p} \rangle + h^\varrho \tilde{p} -
F^\varrho (x, \tilde{p},\partial_t \tilde{p}, \partial^2_t \tilde{p})
).
\end{align*}
This implies $\tilde{p}$ is the solution to the nonlinear equation
$\square_g \tilde{p} + \langle b^\varrho, \nabla \tilde{p} \rangle + h^\varrho \tilde{p} -
F^\varrho (x, \tilde{p},\partial_t \tilde{p}, \partial^2_t \tilde{p}) = 0$
with the boundary data $\tilde{p}|_{(0,T) \times \partial \Omega} = p|_{(0,T) \times \partial \Omega} = f$.
Then we have
\begin{align*}
\Lambda_{b,h,F}(f)
=& (\partial_\nu (\varrho \tilde{p}) + \frac{1}{2} \langle b, \nu \rangle \varrho \tilde{p})|_{(0,T) \times \partial \Omega}\\
=&
(\varrho \partial_\nu \tilde{p} + \frac{1}{2} \varrho \langle b, \nu \rangle \tilde{p}
+ \langle \nabla \varrho , \nu \rangle \tilde{p}) |_{(0,T) \times \partial \Omega}\\
= & (\varrho \partial_\nu \tilde{p} + \frac{1}{2} \varrho \langle b^\varrho, \nu \rangle \tilde{p})|_{(0,T) \times \partial \Omega}
= \Lambda_{b^\varrho, h^\varrho, F^\varrho }(f)
\end{align*}
since $\varrho|_{\partial \Omega} = 1$.
\end{proof}
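As a sanity check of the conjugation identity underlying the proof, here is a minimal symbolic verification in $1+1$ dimensions (the flat metric $\mathrm{diag}(1,-1)$, so $\square = \partial_t^2 - \partial_x^2$ and $\langle \nabla u, \nabla w \rangle = \partial_t u\, \partial_t w - \partial_x u\, \partial_x w$, is an assumption of this sketch).
\begin{verbatim}
import sympy as sp

t, x = sp.symbols('t x')
rho = sp.Function('rho')(x)       # gauge factor, time-independent
pt = sp.Function('p')(t, x)       # the rescaled solution \tilde{p}
b0, b1 = sp.Function('b0')(t, x), sp.Function('b1')(t, x)  # components of b
h = sp.Function('h')(t, x)

box = lambda u: sp.diff(u, t, 2) - sp.diff(u, x, 2)
pair = lambda u, w: sp.diff(u, t)*sp.diff(w, t) - sp.diff(u, x)*sp.diff(w, x)
grad_b = lambda u: b0*sp.diff(u, t) - b1*sp.diff(u, x)     # <b, grad u>

# Left side: the operator applied to rho * \tilde{p}, then divided by rho.
lhs = (box(rho*pt) + grad_b(rho*pt) + h*rho*pt) / rho
# Right side: the gauge-transformed operator applied to \tilde{p}.
rhs = (box(pt) + grad_b(pt) + 2*pair(rho, pt)/rho
       + (h + grad_b(rho)/rho + box(rho)/rho)*pt)
print(sp.simplify(lhs - rhs))     # prints 0
\end{verbatim}
The right-hand side is exactly $\square_g \tilde{p} + \langle b^\varrho, \nabla \tilde{p} \rangle + h^\varrho \tilde{p}$ with $b^\varrho$ and $h^\varrho$ as defined in the lemma.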
\section{The third-order and fourth-order linearization}\label{sec_threefour}
In this section, we briefly recall some results in \cite[Section 3, 4, and 5]{UZ_acoustic}.
Let $(x_j, \xi_j)_{j=1}^J \subset L^+V$ be $J$ lightlike vectors, for $J = 3, 4$.
In some cases, we denote this triplet or quadruplet by $(\vec{x}, \vec{\xi})$.
We introduce the definition of regular intersection of three or
four null geodesics at a point $q$, as in \cite[Definition 3.2]{Kurylev2018}.
\begin{df}\label{def_inter}
Let $J = 3$ or $4$.
We say the null geodesics corresponding to
$(x_j, \xi_j)_{j=1}^J$
intersect regularly at a point $q$, if
\begin{enumerate}[(1)]
\item there are $0 < s_j < \rho(x_j, \xi_j)$ such that $q = \gamma_{x_j, \xi_j}(s_j)$, for $j= 1, \ldots, J$,
\item the vectors $\dot{\gamma}_{x_j, \xi_j}(s_j), j= 1, \ldots, J$ are linearly independent.
\end{enumerate}
\end{df}
In this section, we consider lightlike vectors $(x_j, \xi_j)_{j=1}^J$ such that the corresponding null geodesics $\gamma_{x_j, \xi_j}(s)$ intersect regularly at $q \in M^{{o}}$, for $J = 3, 4$.
In addition, we suppose $(x_j, \xi_j)_{j=1}^J$ are causally independent, i.e.,
\begin{align}\label{assump_xj}
x_j \notin J^+(x_k),
\quad \text{ for } j \neq k.
\end{align}
Note the null geodesic $\gamma_{x_j, \xi_j}(s)$ starting from $x_j \in V$ might never intersect $M$, or might enter $M$ more than once.
Thus, we define
\begin{align}\label{def_bpep}
t_j^o = \inf\{s > 0 : \ \gamma_{x_j, \xi_j}(s) \in M \},
\quad t_j^b = \inf\{s > t_j^o : \ \gamma_{x_j, \xi_j}(s) \in {M_{\text{e}}} \setminus M \}
\end{align}
as the first time when it enters $M$ and
the first time when it leaves $M$ from inside,
if such times exist.
As in \cite{Kurylev2018}, to deal with the complications caused by the cut points, we consider the interaction of waves in the open set
\begin{align}\label{def_nxxi}
{\mathcal{N}(\vec{x}, \vec{\xi})} = M \setminus \bigcup_{j=1}^J J^+(\gamma_{x_j, \xi_j}(\rho(x_j, \xi_j))),
\end{align}
which is the complement of the causal future of the first cut points.
In ${\mathcal{N}(\vec{x}, \vec{\xi})}$, any two of the null geodesics $\gamma_{x_j, \xi_j}(\mathbb{R}_+)$ intersect at most once, by \cite[Lemma 9.13]{Beem2017}.
As in \cite{UZ_acoustic}, to deal with the complications caused by the reflection part, we consider the interaction of waves in the open set
\begin{align}\label{def_ntxxi}
{{\mathcal{R}}(\vec{x}, \vec{\xi})} = M \setminus \bigcup_{j=1}^J J^+(\gamma_{x_j, \xi_j}(t_j^b)),
\end{align}
as the complement of the causal future of the point $\gamma_{x_j, \xi_j}(t_j^b) \in \partial M$,
where the null geodesic leaves $M$ from inside for the first time.
\subsection{Distorted plane waves and boundary sources}\label{sub_distorted}
Let $g^+$ be a Riemannian metric on ${M_{\text{e}}}$.
For each $(x_j, \xi_j) \in L^+ {M_{\text{e}}}$ and a small parameter $s_0 >0$,
we define
\begin{align*}
\mathcal{W}({x_j, \xi_j, s_0}) &= \{\eta \in L^+_{x_j} {M_{\text{e}}}: \|\eta - \xi_j\|_{g^+} < s_0 \text{ with } \|\eta \|_{g^+} = \|\xi_j\|_{g^+}\}
\end{align*}
as a neighborhood of $\xi_j$ at the point $x_j$.
We define
\begin{align*}
K({x_j, \xi_j, s_0}) &= \{\gamma_{x_j, \eta}(s) \in {M_{\text{e}}}: \eta \in \mathcal{W}({x_j, \xi_j, s_0}), s\in (0, \infty) \}
\end{align*}
to be the subset of the light cone emanating from $x_j$ by light-like vectors in $\mathcal{W}({x_j, \xi_j, s_0})$.
As $s_0$ goes to zero, the surface $K({x_j, \xi_j, s_0})$ tends to the null geodesic $\gamma_{x_j, \xi_j}(\mathbb{R}_+)$.
Consider the Lagrangian submanifold
\begin{align*}
\Sigma(x_j, \xi_j, s_0) =\{(x_j, r \eta^\flat )\in T^*{M_{\text{e}}}: \eta \in \mathcal{W}({x_j, \xi_j, s_0}), \ r\neq 0 \},
\end{align*}
which is a subset of the conormal bundle $N^*\{x_j\}$.
We define
\begin{align*}
\Lambda({x_j, \xi_j, s_0})
= &\{(\gamma_{x_j, \eta}(s), r\dot{\gamma}_{x_j, \eta}(s)^\flat )\in T^*{M_{\text{e}}}:
\eta \in \mathcal{W}({x_j, \xi_j, s_0}), s\in (0, \infty), r \in \mathbb{R}\setminus \{0 \} \}
\end{align*}
as the flow out from $\mathrm{Char}(\square_g) \cap \Sigma(x_j, \xi_j, s_0)$ by the Hamiltonian vector field of $\square_g$ in the future direction.
Note that $\Lambda({x_j, \xi_j, s_0})$ is the conormal bundle of $K({x_j, \xi_j, s_0})$ near $\gamma_{x_j, \xi_j}(\mathbb{R}_+)$, before the first cut point of $x_j$.
Now we construct point sources $\tilde{f}_j \in \mathcal{I}^{\mu + 1/2}(\Sigma(x_j, \xi_j, s_0))$ at $x_j \in V$.
To construct distorted planes waves in ${M_{\text{e}}}$ from these sources, we would like to smoothly extend the unknown one-form $b(x)$ and the unknown potential $h(x)$ to a small neighborhood of $M$ in ${M_{\text{e}}}$, from the knowledge of the DN map.
Indeed, the jets of $b(x)$ and $h(x)$ are determined by the first-order linearization of $\Lambda_{b,h,F}$, see Section \ref{subsec_boundary}.
For more details about the extension, see Section \ref{subsec_extension}.
Then we consider distorted plane waves
\[
u_j = Q(\tilde{f}_j) \in \mathcal{I}^\mu(\Lambda(x_j, \xi_j, s_0)), \quad j = 1, \ldots, J.
\]
Note that $u_j$ satisfies
\[
(\square_g + \langle b(x), \nabla \rangle + h(x)) u_j \in C^\infty(M)
\]
with nonzero principal symbol along $(\gamma_{x_j, \xi_j}(s), (\dot{\gamma}_{x_j, \xi_j}(s))^\flat )$ for $s > 0$.
Since $u_j \in \mathcal{D}'({M_{\text{e}}})$ has no singularities conormal to $\partial M$,
then its restriction to the submanifold $\partial M$ is well-defined, see \cite[Corollary 8.2.7]{Hoermander2003}.
Thus, we set $f_j = u_j|_{\partial M}$ and let $v_j$ solve the
boundary value problem
\begin{equation}\label{eq_v1}
\begin{aligned}
(\square_g + \langle b(x), \nabla \rangle + h(x) ) v_j &= 0, & \ & \mbox{on } M ,\\
v_j &= f_j, & \ &\mbox{on } \partial M,\\
v_j &= 0, & \ &\mbox{for } t <0.
\end{aligned}
\end{equation}
It follows that $v_j = u_j \mod C^\infty(M)$ and
we call $v_j$ the distorted plane waves.
We would like to consider the nonlinear problem (\ref{eq_problem}) with the Dirichlet data
$
f = \sum_{j=1}^J \epsilon_j f_j.
$
One can write the solution $p$ to (\ref{eq_problem}) as an asymptotic expansion with respect to $v_j$, following the same idea as in \cite[Section 3.6]{UZ_acoustic}.
More explicitly,
let $Q_{\text{bvp}}$ be the solution operator to the boundary value problem \begin{equation}\label{bvp_qg}
\begin{aligned}
(\square_g+ \langle b(x), \nabla \rangle + h(x)) w &= l(x), & \ & \mbox{on } (0, T) \times \Omega ,\\
w &= 0 , & \ & \mbox{on } (0,T) \times \partial \Omega,\\
w &= 0, & \ & \mbox{for } t <0.
\end{aligned}
\end{equation}
That is, we write $w = Q_{\text{bvp}}(l)$ if $w$ solves (\ref{bvp_qg}).
For more details about $Q_{\text{bvp}}$, see \cite[Section 3.5]{UZ_acoustic}.
The same analysis implies that
\begin{align}\label{expand_u}
p &= v + \sum_{m=1}^{+\infty}{Q_{\text{bvp}}( \beta_{m+1}(x) \partial_t^2 (p^{m+1}))}, \nonumber \\
& = v + \sum_{i,j} \epsilon_i \epsilon_j A_2^{ij}
+ \sum_{i,j, k} \epsilon_i\epsilon_j\epsilon_k A_3^{ijk}
+ \sum_{i,j, k,l} \epsilon_i\epsilon_j\epsilon_k \epsilon_l A_4^{ijkl}
+ \dots,
\end{align}
where we write
\begin{align}\label{eq_A}
\begin{split}
A_2^{ij} &= Q_{\text{bvp}}(\beta_2 \partial_t^2(v_iv_j)),\\
A_3^{ijk} &= Q_{\text{bvp}}(2\beta_2 \partial_t^2(v_iA_2^{jk})+\beta_3 \partial_t^2(v_i v_j v_k)),\\
A_4^{ijkl} &= Q_{\text{bvp}}(2 \beta_2 \partial_t^2(v_iA_3^{jkl}) + \beta_2\partial_t^2(A_2^{ij}A_2^{kl}) + 3\beta_3\partial_t^2(v_i v_j A_2^{kl}) + \beta_4\partial_t^2(v_iv_jv_kv_l)).
\end{split}
\end{align}
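To see where these formulas come from, one can collect powers of the $\epsilon_j$. For instance, substituting $p = \sum_j \epsilon_j v_j + O(\epsilon^2)$ into the quadratic part of the nonlinearity gives
\begin{equation*}
\beta_2 \partial_t^2(p^2) = \beta_2 \partial_t^2\Big(\sum_{i,j}\epsilon_i\epsilon_j v_i v_j\Big) + O(\epsilon^3),
\end{equation*}
and matching the $\epsilon_i\epsilon_j$ coefficients in (\ref{expand_u}) yields $A_2^{ij} = Q_{\text{bvp}}(\beta_2 \partial_t^2(v_iv_j))$; the terms $A_3^{ijk}$ and $A_4^{ijkl}$ arise in the same way, with the lower-order terms fed back into the nonlinearity.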
Next, we can analyze the singularities of each term above using the calculus of conormal distributions.
For this purpose, we write $K_j = K(x_j, \xi_j, s_0), \Lambda_j = \Lambda(x_j, \xi_j, s_0)$ and introduce the following notations
\[
\Lambda_{ij} = N^*(K_i \cap K_j), \quad \Lambda_{ijk} = N^*(K_i \cap K_j \cap K_k), \quad \Lambda_q = T^*_q M \setminus 0.
\]
In addition, we define
\[
\Lambda^{(1)} = \cup_{j=1}^J \Lambda_j, \quad \Lambda^{(2)} = \cup_{i<j} \Lambda_{ij}, \quad
\Lambda^{(3)} = \cup_{i<j<k} \Lambda_{ijk}.
\]
Let $\Theta^b_{y, \eta}$ be the broken bicharacteristic arc of $\square_g$ in $T^*M$.
The flow-out of $\Lambda^{(3)} \cap \mathrm{Char}(\square_g)$ under the broken bicharacteristic arcs is denoted by
\[
\Lambda^{(3), b} = \{(z, \zeta) \in T^*M: \ \exists \ (y, \eta) \in \Lambda^{(3)} \text{ such that } (z, \zeta) \in \Theta^b_{y, \eta}
\},
\]
see \cite[Section 3.5]{UZ_acoustic} for more details.
We consider the set
\begin{align*}
\Gamma({\vec{x}, \vec{\xi}}, s_0) = (\Lambda^{(1)} \cup \Lambda^{(2)} \cup \Lambda^{(3)} \cup \Lambda^{(3),b}) \cap T^*M,
\end{align*}
which depends on the parameter $s_0$ by definition.
Then we define
\begin{align}\label{def_Gamma}
\Gamma({\vec{x}, \vec{\xi}}) = \bigcap_{s_0>0}\Gamma({\vec{x}, \vec{\xi}}, s_0)
\end{align}
as the set containing all possible singularities produced by the interaction of at most three distorted plane waves.
\subsection{The third-order linearization}
In this part, we consider the interaction of three distorted plane waves.
Let $(x_j, \xi_j), K_j, \Lambda_j, \tilde{f}_j, f_j, v_j$ be defined as above, for $j =1,2,3$.
Recall we assume the null geodesics corresponding to
$(x_j, \xi_j)_{j=1}^3$ intersect regularly at a fixed point $q \in \mathbb{W}$.
With sufficiently small $s_0$, we can assume the submanifolds $K_1, K_2, K_3$ intersect 3-transversally,
see \cite[Definition 2]{UZ_acoustic}.
Let $p$ solve (\ref{eq_problem}) with the Dirichlet data
$
f = \sum_{j=1}^3 \epsilon_j f_j.
$
We consider
\[
\mathcal{U}_3 = \partial_{\epsilon_1}\partial_{\epsilon_2}\partial_{\epsilon_3} p |_{\epsilon_1 = \epsilon_2 = \epsilon_3=0}.
\]
By Section \ref{sub_distorted}, we have
\begin{align*}
\mathcal{U}_3
= \sum_{(i,j,k) \in \Sigma(3)} Q_{\text{bvp}}(2\beta_2 \partial_t^2(v_iA_2^{jk})+\beta_3 \partial_t^2(v_i v_j v_k)).
\end{align*}
Note that $\mathcal{U}_3$ is not the third-order linearization of $\Lambda_{b,h,F}$, but they are related by
\begin{align*}
\partial_{\epsilon_1}\partial_{\epsilon_2}\partial_{\epsilon_3} \Lambda_{b,h,F} (f) {|_{\epsilon_1 = \epsilon_2 = \epsilon_3=0}} = (\partial_\nu \mathcal{U}_3 + \frac{1}{2}\langle b(x), \nu \rangle \mathcal{U}_3)|_{(0,T) \times \partial \Omega}.
\end{align*}
For convenience, we introduce the trace operator ${R}$ on $\partial M$, as in \cite{Hintz2020}.
It is an FIO and maps distributions in $\mathcal{E}'(M)$ whose singularities are away from $N^*(\partial M)$ to $\mathcal{E}'(\partial M)$, see \cite[Section 5.1]{Duistermaat2010}.
Notice for any timelike covector $(y_|, \eta_|) \in T^* \partial M \setminus 0$, there is exactly one outward pointing lightlike covector $(y, \eta^+)$ and one inward pointing lightlike covector $(y, \eta^-)$ satisfying $y_| = y,\ \eta_| = \eta^\pm|_{T^*_{y} \partial M}$.
The trace operator ${R}$ has
a nonzero principal symbol at such $(y_|, \eta_|, y, \eta^+)$ or $(y_|, \eta_|, y, \eta^-)$.
Combining \cite[Lemma 6]{UZ_acoustic} and \cite[Proposition 5]{UZ_acoustic}, we have the following proposition.
\begin{pp}[{\cite[Proposition 5]{UZ_acoustic}}]\label{pp_uthree}
Let $(y, \eta) \in L^{*}_{\partial M, +}M$ be a covector
lying along the forward null-bicharacteristic starting from
$(q, \zeta) \in \Lambda_{ijk}$.
Suppose $y \in {\mathcal{N}(\vec{x}, \vec{\xi})} \cap {{\mathcal{R}}(\vec{x}, \vec{\xi})}$ and $(y, \eta)$ is away from $\Lambda^{(1)}$.
Then we have
\begin{align*}
&{{ \sigma_{p}}}(\mathcal{U}_3)(y, \eta)
= 2 (2\pi)^{-2} {{ \sigma_{p}}}(Q)(y, \eta, q, \zeta)
(\zeta_0)^2
(-2 \beta^2_2 - \beta_3)
\prod_{j=1}^3{{ \sigma_{p}}}(v_j) (q, \zeta^j).
\end{align*}
Let $(y_|, \eta_|)$ be the projection of $(y, \eta)$ on the boundary.
Moreover, we have
\begin{align}\label{eq_LambdaU3}
{{ \sigma_{p}}}({\partial_{\epsilon_1}\partial_{\epsilon_2}\partial_{\epsilon_3} \Lambda_{b,h,F} |_{\epsilon_1 = \epsilon_2 = \epsilon_3=0}})(y_|, \eta_|) = \iota \langle \nu, \eta \rangle_g
{ \sigma_{p}}(R)(y_|, \eta_|, y, \eta)
{{ \sigma_{p}}}(\mathcal{U}_3)(y, \eta).
\end{align}
\end{pp}
We emphasize that we cannot ignore the term ${{ \sigma_{p}}}(Q)(y, \eta, q, \zeta)$ and
$\prod_{j=1}^3{{ \sigma_{p}}}(v_j) (q, \zeta^j)$ to recover the nonlinear coefficients, since
the unknown one-form $b(x)$ will affect these terms.
In particular,
by (\ref{eq_vps}) we can write
\begin{align*}
\sigma_{{p}}(v_j)(q, \zeta^j) = \frac{\omega(q, \zeta^j)}{\omega(x^o_j, (\xi^o_j)^\sharp)}
\sigma_{{p}}(Q)(q, \zeta^j, x^o_j, (\xi^o_j)^\sharp)
\sigma_{{p}}(v)(x^o_j, (\xi^o_j)^\sharp),
\end{align*}
where $\omega$ is a fixed strictly positive half density on the flow-out and
\[
(x^o_j, (\xi^o_j)^\sharp) = (\gamma_{x_j, \xi_j}(t^o_j), (\dot{\gamma}_{x_j, \xi_j}(t^o_j))^\sharp)
\]
with $t^o_j$ defined in (\ref{def_bpep}).
Thus, for fixed $\zeta^j \in L_q^*M, j=1,2,3$,
we can expect to recover the quantity
\begin{align}\label{eq_M3}
M_3(q, \zeta^{1}, \zeta^{2}, \zeta^{3}) =
(-2 \beta^2_2 - \beta_3) {{ \sigma_{p}}}(Q)(y, \eta, q, \zeta)\prod_{j=1}^3{{ \sigma_{p}}}(Q)(q, \zeta^j, x^o_j, (\xi^o_j)^\sharp)
\sigma_{{p}}(v)(x^o_j, (\xi^o_j)^\sharp),
\end{align}
for more details about the recovery, see Section \ref{sec_recover_b}.
\subsection{The fourth-order linearization}
In this part, we consider the interaction of four distorted plane waves.
Let $(x_j, \xi_j), K_j, \Lambda_j,\tilde{f}_j, f_j, v_j$ be defined as above, for $j =1,2,3,4$.
Now we assume the null geodesics corresponding to
$(x_j, \xi_j)_{j=1}^4$ intersect regularly at a fixed point $q \in \mathbb{W}$.
With sufficiently small $s_0$, we can assume the submanifolds $K_1, K_2, K_3, K_4$ intersect 4-transversally, see \cite[Definition 2]{UZ_acoustic}.
Let $p$ solve (\ref{eq_problem}) with the Dirichlet data
$
f = \sum_{j=1}^4 \epsilon_j f_j.
$
We consider
\[
\Ufour = \partial_{\epsilon_1}\partial_{\epsilon_2}\partial_{\epsilon_3}\partial_{\epsilon_4} p {|_{\epsilon_1 = \epsilon_2 = \epsilon_3=\epsilon_4 = 0}}.
\]
By (\ref{eq_A}), we have
\begin{align*}
\Ufour
= \sum_{(i,j,k,l) \in \Sigma(4)} Q_{\text{bvp}}(2 \beta_2 \partial_t^2(v_iA_3^{jkl}) + \beta_2\partial_t^2(A_2^{ij}A_2^{kl}) + 3\beta_3\partial_t^2(v_i v_j A_2^{kl}) + \beta_4\partial_t^2(v_iv_jv_kv_l)).
\end{align*}
Note that $\Ufour$ is not the fourth-order linearization of $\Lambda_{b,h,F}$, but they are related by
\begin{align*}
\partial_{\epsilon_1}\partial_{\epsilon_2}\partial_{\epsilon_3}\partial_{\epsilon_4} \Lambda_{b,h,F} (f) {|_{\epsilon_1 = \epsilon_2 = \epsilon_3=\epsilon_4 = 0}} = (\partial_\nu \Ufour + \frac{1}{2}\langle b(x), \nu \rangle \Ufour)|_{(0,T) \times \partial \Omega}.
\end{align*}
\begin{pp}[{\cite[Proposition 6]{UZ_acoustic}}]\label{pp_ufour}
Let $(y, \eta) \in L^{*}_{\partial M, +}M$ be a covector
lying along the forward null-bicharacteristic starting from
$(q, \zeta) \in \Lambda_q$.
Suppose $y \in {\mathcal{N}(\vec{x}, \vec{\xi})} \cap {{\mathcal{R}}(\vec{x}, \vec{\xi})}$
and $(y, \eta)$ is away from $\Gamma({\vec{x}, \vec{\xi}})$.
Then we have
\begin{align*}
&{{ \sigma_{p}}}(\Ufour)(y, \eta)
= 2 (2\pi)^{-3} {{ \sigma_{p}}}(Q)(y, \eta, q, \zeta) (\zeta_0)^2 \mathcal{C}(\zeta^{1}, \zeta^{2}, \zeta^{3}, \zeta^{4})
(\prod_{j=1}^4 {{ \sigma_{p}}}(v_j) (q, \zeta^{j})),
\end{align*}
where we write
\begin{align*}
\mathcal{C}(\zeta^{1}, \zeta^{2}, \zeta^{3}, \zeta^{4})
= &\sum_{(i,j,k,l) \in \Sigma(4)}
-(4\frac{(\zeta_0^{i} + \zeta_0^{j} + \zeta_0^{k})^2}{|{\zeta^{i}} + {\zeta^{j}} + {\zeta^{k}}|^2_{g^*}} + \frac{(\zeta_0^{i}+\zeta_0^{l})^2}{| {\zeta^{i}} + {\zeta^{l}}|^2_{g^*}})
\frac{(\zeta_0^{j} + \zeta_0^{k})^2}{|{\zeta^{j}} + {\zeta^{k}}|^2_{g^*}} \beta_2^3 \nonumber \\
& \quad \quad \quad \quad \quad \quad \quad
+ (3 \frac{(\zeta_0^{k}+\zeta_0^{l})^2}{| {\zeta^{k}} + {\zeta^{l}}|^2_{g^*}} + 2\frac{(\zeta_0^{i} + \zeta_0^{j} + \zeta_0^{k})^2}{|{\zeta^{i}} + {\zeta^{j}} + {\zeta^{k}}|^2_{g^*}}) \beta_2 \beta_3
-\beta_4.
\end{align*}
Let $(y_|, \eta_|)$ be the projection of $(y, \eta)$ on the boundary.
Moreover, we have
\begin{align}\label{eq_LambdaU4}
{{ \sigma_{p}}}({\partial_{\epsilon_1}\partial_{\epsilon_2}\partial_{\epsilon_3}\partial_{\epsilon_4} \LbF(f) |_{\epsilon_1 = \epsilon_2 = \epsilon_3 = \ep_4=0}})(y_|, \eta_|) =
\iota \langle \nu, \eta \rangle_g
{ \sigma_{p}}({R})(y_|, \eta_|, y, \eta)
{{ \sigma_{p}}}(\Ufour)(y, \eta).
\end{align}
\end{pp}
Similarly, we cannot ignore the terms ${{ \sigma_{p}}}(Q)(y, \eta, q, \zeta)$ and
$\prod_{j=1}^4{{ \sigma_{p}}}(v_j) (q, {\zeta^{j}})$ to recover the nonlinear coefficients.
In particular,
by (\ref{eq_vps}) we can write
\begin{align*}
\sigma_{{p}}(v_j)(q, {\zeta^{j}}) = \frac{\omega(q, \zeta^j)}{\omega(x_j, \xi_j^\sharp)}
\sigma_{{p}}(Q)(q, {\zeta^{j}}, x_j, \xi_j^\sharp)\sigma_{{p}}(v_j)(x_j, \xi_j^\sharp),
\end{align*}
where $\omega$ is a fixed strictly positive half density on the flow-out.
Thus,
for fixed $\zeta^j \in L_q^*M, j=1,2,3,4$,
we can expect to recover the quantity
\begin{align}\label{eq_M4}
&M_4(q, {\zeta^{1}}, {\zeta^{2}}, {\zeta^{3}}, {\zeta^{4}})\\
&= \mathcal{C}({\zeta^{1}}, {\zeta^{2}}, {\zeta^{3}}, {\zeta^{4}}) {{ \sigma_{p}}}(Q)(y, \eta, q, \zeta)\prod_{j=1}^4{{ \sigma_{p}}}(Q)(q, \zeta^{j}, x^o_j, (\xi^o_j)^\sharp)\sigma_{{p}}(v_j)(x^o_j, (\xi^o_j)^\sharp). \nonumber
\end{align}
\section{Using the nonlinear term}\label{sec_nonlinear}
In Section \ref{sec_recover_b}, our analysis shows that there exists $\varrho \in C^\infty(M)$ with $\varrho|_{\partial M} = 1$ such that
\begin{align*}
b^{(2)} = b^{(1)} + 2\varrho^{-1} \diff \varrho, \quad \beta^{(2)}_{m+1} = \varrho^m \beta^{(1)}_{m+1}.
\end{align*}
In this part, we would like to use the nonlinear term to conclude that $\varrho$ is a smooth function on $\Omega$, when the potential is known.
In other words, it does not depend on $t$, even though $b^{(k)}$ and $\beta^{(k)}_{m+1}$ may depend on $t$.
We consider the nonlinear problem corresponding to $b^{(1)}, h^{(1)}, F^{(1)}$, i.e.,
\begin{equation}\label{eq_nltwo}
\begin{aligned}
\square_g p^{(1)} + \langle b^{(1)}(x), \nabla p^{(1)} \rangle + h^{(1)}(x) p^{(1)} - F^{(1)}(x, p^{(1)},\partial_t p^{(1)},\partial^2_t p^{(1)}) &= 0, & \ & \mbox{in } (0, T) \times \Omega\\
p^{(1)} &= f, & \ &\mbox{on } (0,T) \times \partial \Omega,\\
p^{(1)} = {\partial_t p^{(1)}} &= 0, & \ & \mbox{on } \{t=0\}.
\end{aligned}
\end{equation}
Let $p^{(3)} = \varrho^{-1} p^{(1)}$; then $p^{(3)}$ solves the equation
\begin{align*}
&\square_g (\varrho p^{(3)}) + \langle b^{(1)}(x), \nabla (\varrho p^{(3)}) \rangle + h^{(1)}(x) \varrho p^{(3)}
-\sum_{m=1}^{+\infty} \beta^{(1)}_{m+1}(x) \partial_t^2 ((\varrho p^{(3)})^{m+1})
\\
= &\varrho(\square_g p^{(3)} + \langle b^{(1)} + 2 \varrho^{-1} \diff \varrho, \nabla p^{(3)} \rangle + (h^{(1)} + \varrho^{-1} \square_g \varrho + \langle b^{(1)}, \varrho^{-1} \diff \varrho \rangle)p^{(3)} )\\
&- \sum_{m=1}^{+\infty} \beta^{(1)}_{m+1}(x) (\varrho^{m+1} \partial_t^2 (( p^{(3)})^{m+1})
+ 2\partial_t(\varrho^{m+1}) \partial_t (( p^{(3)})^{m+1})
+ (p^{(3)})^{m+1} \partial_t^2 (\varrho^{m+1})) \\
= & \varrho(\square_g p^{(3)} + \langle b^{(2)}, \nabla p^{(3)} \rangle + (h^{(1)} + \varrho^{-1} \square_g \varrho + \langle b^{(1)}, \varrho^{-1} \diff \varrho \rangle)p^{(3)}
- \sum_{m=1}^{+\infty} \beta^{(2)}_{m+1}(x) \partial_t^2 (( p^{(3)})^{m+1})
+ \varrho^{-1} N_1
+ \varrho^{-1} N_0)\\
= & 0,
\end{align*}
where we introduce the following notations
\begin{align*}
N_1(x, p^{(3)},\partial_t p^{(3)},\partial^2_t p^{(3)}) &= - \sum_{m=1}^{+\infty} 2\beta^{(1)}_{m+1}(x) \partial_t(\varrho^{m+1}) \partial_t (( p^{(3)})^{m+1}),\\
N_0(x, p^{(3)},\partial_t p^{(3)},\partial^2_t p^{(3)}) &= - \sum_{m=1}^{+\infty} \beta^{(1)}_{m+1}(x) \partial^2_t (\varrho^{m+1}) ( p^{(3)})^{m+1}.
\end{align*}
In addition, we have
\begin{align}\label{eq_p3}
p^{(3)}|_{(0,T) \times \partial \Omega} = (\varrho^{-1}p^{(1)})|_{(0,T) \times \partial \Omega} = p^{(1)}|_{(0,T) \times \partial \Omega},
\end{align}
and
\begin{align}\label{eq_pp3}
&((\partial_\nu + \frac{1}{2}\langle b^{(1)}, \nu \rangle) p^{(1)}) |_{(0,T) \times \partial \Omega}\\
=&( (\partial_\nu + \frac{1}{2}\langle b^{(1)}, \nu \rangle) (\varrho p^{(3)})) |_{(0,T) \times \partial \Omega}
= ( (\partial_\nu + \frac{1}{2}\langle b^{(2)}, \nu \rangle) p^{(3)}) |_{(0,T) \times \partial \Omega}. \nonumber
\end{align}
By equation (\ref{eq_p3}), we know that $p^{(3)}$ solves a new nonlinear problem
\begin{equation}\label{eq_nl3}
\begin{aligned}
\square_g p^{(3)} + \langle b^{(2)}(x), \nabla p^{(3)} \rangle + {h^{(2)}(x)} p^{(3)} - F^{(3)}(x, p^{(3)},\partial_t p^{(3)},\partial^2_t p^{(3)}) &= 0, & \ & \mbox{in } (0, T) \times \Omega\\
p^{(3)} &= f, & \ &\mbox{on } (0,T) \times \partial \Omega,\\
p^{(3)} = {\partial_t p^{(3)}} &= 0, & \ & \mbox{on } \{t=0\},
\end{aligned}
\end{equation}
for $f = p^{(1)}|_{(0,T) \times \partial \Omega}$,
where we write
\begin{align}
&F^{(3)}(x, p^{(3)},\partial_t p^{(3)},\partial^2_t p^{(3)}) = F^{(2)}(x, p^{(3)},\partial_t p^{(3)},\partial^2_t p^{(3)}) + \varrho^{-1}N_1 + \varrho^{-1}N_0. \label{eq_nlterm3}
\end{align}
We can define the corresponding DN map
\[
{\Lambda}_{\btwo, \htwo, \Fthree}(f) = ((\partial_\nu + \frac{1}{2}\langle b^{(2)}, \nu \rangle) p^{(3)})|_{(0,T) \times \partial \Omega}.
\]
By equation (\ref{eq_pp3}), we must have
\[
\Lambda_{\bone, \hone, \Fone}(f) = {\Lambda}_{\btwo, \htwo, \Fthree}(f)
\]
for any $f \in C^6((0,T) \times \partial \Omega)$ with sufficiently small data.
This implies
\begin{align}\label{eq_lambda3}
\Lambda_{\btwo, \htwo, \Ftwo}(f) = \Lambda_{\bone, \hone, \Fone}(f) = {\Lambda}_{\btwo, \htwo, \Fthree}(f)
\end{align}
for such $f \in C^6((0,T) \times \partial \Omega)$.
Now we would like to prove that (\ref{eq_lambda3}) implies
$\partial_t \varrho = 0$.
Indeed, we follow the previous analysis and compare the nonlinear terms $F^{(2)}$ and $F^{(3)}$.
Note that by (\ref{assump_h}) the linear parts are the same, and we write them as
\begin{align*}
P = \square_g + \langle b^{(2)}(x), \nabla \rangle + h^{(2)}(x)
\end{align*}
and we denote its parametrix in ${M_{\text{e}}}$ by $Q$.
Our goal is to show
$F^{(2)} (x, p,\partial_t p,\partial^2_t p) = F^{(3)} (x, p,\partial_t p,\partial^2_t p)$, i.e., $N_1 = N_0 = 0$,
using the assumption that the DN maps $\Lambda_{\btwo, \htwo, \Ftwo}, {\Lambda}_{\btwo, \htwo, \Fthree}$
are equal for small data.
In this case, the two nonlinear terms have different forms and therefore we have different asymptotic expansions for them.
The term $F^{(2)}$ has been considered in Section \ref{sub_distorted}.
In the following, we perform the same analysis to the term $F^{(3)}$.
\subsection{The asymptotic expansion of $F^{(3)}$}\label{subsec_assyp2}
Let $f = \sum_{j = 1}^3 \epsilon_j f_j$.
The small boundary data $f_j$ are properly chosen as before.
Let $v_j$ solve the boundary value problem (\ref{eq_v1}) with the boundary source $f_j$, the one-form $b^{(2)}$, and the potential $h^{(2)}$.
In the following, we denote $p^{(3)}$ by $p$ and $\beta^{(2)}_{m+1}$ by $\beta_{m+1}$ for simplicity.
Let $v = \sum_{j=1}^3 \epsilon_j v_j$ and we have
\[
P (p -v) = F^{(3)}(x, p,\partial_t p, \partial^2_t p).
\]
It follows from (\ref{eq_nlterm3}) that
\begin{align*}
p &= v + \sum_{m=1}^{+\infty}
Q_{\text{bvp}}(\beta_{m+1}(x) \partial_t^2 (p^{m+1})
+ 2\varrho^{-1} \beta_{m+1}(x) \partial_t(\varrho^{m+1}) \partial_t (p^{m+1})
+ \varrho^{-1}\beta_{m+1}(x) \partial^2_t (\varrho^{m+1}) p^{m+1})
\nonumber \\
& = v + {B_2 + B_3 + \dots},
\end{align*}
where we rearrange these terms by the order of the $\epsilon$-factors, such that $B_2$ denotes the terms with $\epsilon_i \epsilon_j$,
$B_3$ denotes the terms with $\epsilon_i \epsilon_j \epsilon_k$, for $1 \leq i,j,k \leq 3$.
One can find the expansions of $B_2, B_3$ as
\begin{align*}
B_2 &= Q_{\text{bvp}}(\beta_2 \partial_t^2(v^2)
+ c_2 \partial_t (v^{2})
+ d_2 v^{2})
= A_2 + Q_{\text{bvp}}(
c_2 \partial_t (v^{2})
+ d_2 v^{2}),\\
B_3 &= Q_{\text{bvp}}(2\beta_2 \partial_t^2(vB_2)+\beta_3 \partial_t^2(v^3)
+ c_3 \partial_t (vB_2)
+ c_3 \partial_t (v^3)
+ d_3 vB_2
+ d_3 v^{3}
)\\
& = A_3
+Q_{\text{bvp}}(
2\beta_2
\partial_t^2(vQ_{\text{bvp}}(c_2 \partial_t (v^{2})
+ d_2 v^{2}))
+c_3 \partial_t (vB_2)
+ c_3 \partial_t (v^3)
+ d_3 vB_2
+ d_3 v^{3}
),
\end{align*}
where we write
\[
c_k = 2\varrho^{-1}\beta_{k} \partial_t(\varrho^{k}),
\quad d_k = \varrho^{-1}\beta_{k} \partial^2_t(\varrho^{k})
\] to further simplify the notation.
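To see where these coefficients come from, note that, by the Leibniz rule,
\[
\varrho^{-1}\beta_{k} \partial_t^2 (\varrho^{k} p^{k})
= \varrho^{k-1}\beta_{k} \partial_t^2 (p^{k})
+ c_k \partial_t (p^{k})
+ d_k p^{k},
\]
so $c_k$ and $d_k$ collect exactly the terms where one or two time derivatives fall on $\varrho^{k}$.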
For $N \geq 4$, we write
\[
B_N = Q_{\text{bvp}}(\beta_N \partial_t^2 (v^N)) + \mathcal{Q}_N(\beta_2, \beta_3, \ldots, \beta_{N-1}),
\]
where $\mathcal{Q}_N(\beta_2, \beta_3, \ldots, \beta_{N-1})$ contains all terms only involved with $\beta_2, \beta_3, \ldots, \beta_{N-1}$.
Note that $v$ appears $j$ times in each $B_j, A_j$, $j = 2, 3$.
As before, we introduce the notation $B_2^{ij}$ to denote the result of replacing $v$ by $v_i, v_j$ in $B_2$ in order, and similarly the notation
$B_3^{ijk}$,
such that
\[
B_2 = \sum_{i,j} \epsilon_i \epsilon_j B_2^{ij}, \quad
B_3 = \sum_{i,j, k} \epsilon_i\epsilon_j\epsilon_k B_3^{ijk}.
\]
More explicitly, we have
\begin{align*}
\begin{split}
B_2^{ij} &= A_2^{ij}
+ Q_{\text{bvp}}(c_2 \partial_t (v_{i}v_j)
+ d_2 v_i v_j),\\
B_3^{ijk} &=
A_3^{ijk}
+Q_{\text{bvp}}(
2\beta_2
\partial_t^2(v_iQ_{\text{bvp}}(c_2 \partial_t (v_jv_k) + d_2 v_j v_k))
+c_3 \partial_t (v_i B_2^{jk})
+ c_3 \partial_t (v_i v_j v_k)
+ d_3 v_i B_2^{jk}
+ d_3 v_i v_j v_k
).\\
\end{split}
\end{align*}
\subsection{The third-order linearization}
In this subsection, we consider the third-order linearization of the DN maps for $F^{(2)}, F^{(3)}$.
We define
\[
\mathcal{U}_3^{(2)} =
\partial_{\epsilon_1}\partial_{\epsilon_2}\partial_{\epsilon_3} p^{(2)} |_{\epsilon_1 = \epsilon_2 = \epsilon_3=0}, \quad
\mathcal{U}_3^{(3)} = \partial_{\epsilon_1}\partial_{\epsilon_2}\partial_{\epsilon_3} p^{(3)} |_{\epsilon_1 = \epsilon_2 = \epsilon_3=0}.
\]
Recall that in Section \ref{sub_distorted} we showed that
\[
\mathcal{U}_3^{(2)}
= \sum_{(i,j,k) \in \Sigma(3)} A_3^{ijk}
= \sum_{(i,j,k) \in \Sigma(3)}
Q_{\text{bvp}}(2\beta_2 \partial_t^2(v_iA_2^{jk})+\beta_3 \partial_t^2(v_i v_j v_k)).
\]
The analysis above shows that
\begin{align*}
&\mathcal{U}_3^{(3)}
= \sum_{(i,j,k) \in \Sigma(3)} B_3^{ijk}\\
= & \sum_{(i,j,k) \in \Sigma(3)} A_3^{ijk}
+Q_{\text{bvp}}(
2\beta_2
\partial_t^2(v_iQ_{\text{bvp}}(c_2 \partial_t (v_jv_k) + d_2 v_j v_k))
+c_3 \partial_t (v_i B_2^{jk})
+ c_3 \partial_t (v_i v_j v_k)
+ d_3 v_i B_2^{jk}
+ d_3 v_i v_j v_k
)\\
\coloneqq & \mathcal{U}_3^{(2)} + \mathcal{U}_3^{(3,1)},
\end{align*}
where $\mathcal{U}_3^{(3,1)}$ contains the lower order terms.
Note that $\mathcal{U}_3^{(k)}$ is not the third-order linearization of $\Lambda_{\bk, \hk, \Fk}$ for $k = 2, 3$ but they are related by
\begin{align*}
\partial_{\epsilon_1}\partial_{\epsilon_2}\partial_{\epsilon_3} \Lambda_{\bk, \hk, \Fk}(f) |_{\epsilon_1 = \epsilon_2 = \epsilon_3=0}
= (\partial_\nu \mathcal{U}_3^{(k)} + \frac{1}{2}\langle b^{(k)}, \nu \rangle\mathcal{U}_3^{(k)}) |_{(0,T) \times \partial \Omega}.
\end{align*}
Thus, we have
\begin{align*}
&\partial_{\epsilon_1}\partial_{\epsilon_2}\partial_{\epsilon_3} {\Lambda}_{\btwo, \htwo, \Fthree}(f) |_{\epsilon_1 = \epsilon_2 = \epsilon_3=0}\\
=&
\partial_{\epsilon_1}\partial_{\epsilon_2}\partial_{\epsilon_3} \Lambda_{\btwo, \htwo, \Ftwo}(f) |_{\epsilon_1 = \epsilon_2 = \epsilon_3=0}
+ (\partial_\nu \mathcal{U}_3^{(3,1)} + \frac{1}{2}\langle b^{(2)}, \nu \rangle\mathcal{U}_3^{(3,1)}) |_{(0,T) \times \partial \Omega}.
\end{align*}
Since these DN maps are equal, we must have
\begin{align}\label{eq_DNU3}
(\partial_\nu \mathcal{U}_3^{(3,1)} + \frac{1}{2}\langle b^{(2)}, \nu \rangle \mathcal{U}_3^{(3,1)}) |_{(0,T) \times \partial \Omega} = 0.
\end{align}
\subsection{Analyze $\mathcal{U}_3^{(3,1)}$}
Following the same analysis as before,
we can show that the principal symbol of $\mathcal{U}_3^{(3,1)}$ is given by the terms
\[
{\mathcal{V}}^{(3)}
=
\sum_{(i,j,k) \in \Sigma(3)}
Q_{\text{bvp}}(
c_3 \partial_t (v_i Q_{\text{bvp}}(\beta_2 \partial_t^2 (v_j v_k)))
+ c_3 \partial_t (v_i v_j v_k)
+2\beta_2 \partial_t^2(v_iQ_{\text{bvp}}(c_2 \partial_t (v_{j}v_k)))
).
\]
Then we can compute
\begin{align*}
{ \sigma_{p}}(\mathcal{V}^{(3)})
=&
{ \sigma_{p}}(Q_{\text{bvp}})(y, \eta, q, \zeta)
(\sum_{(i,j,k) \in \Sigma(3)}
c_3 (\zeta_0^{i} + \zeta_0^{j} + \zeta_0^{k})
\frac{\beta_2(\zeta_0^{j} + \zeta_0^{k})^2}{|{\zeta^{j}} + {\zeta^{k}}|^2_{g^*}}\\
&\quad \quad \quad \quad
+ c_3
(\zeta_0^{i} + \zeta_0^{j} + \zeta_0^{k})
+ 2 \beta_2 (\zeta_0^{i} + \zeta_0^{j} + \zeta_0^{k})^2
\frac{c_2(\zeta_0^{j} + \zeta_0^{k})}{|{\zeta^{j}} + {\zeta^{k}}|^2_{g^*}}) \prod_{m=i,j,k}{ \sigma_{p}}(v_m)(q, \zeta^m).
\end{align*}
It follows from (\ref{eq_DNU3}) that
\[
\sum_{(i,j,k) \in \Sigma(3)}
c_3 \beta_2 \frac{(\zeta_0^{j} + \zeta_0^{k})^2}{|{\zeta^{j}} + {\zeta^{k}}|^2_{g^*}}
+ c_3
+ 2 \beta_2 c_2(\zeta_0^{i} + \zeta_0^{j} + \zeta_0^{k})
\frac{(\zeta_0^{j} + \zeta_0^{k})}{|{\zeta^{j}} + {\zeta^{k}}|^2_{g^*}} = 0.
\]
By \cite[Lemma ]{UZ_acoustic}, we have
\[
\sum_{(i,j,k) \in \Sigma(3)}
\frac{(\zeta_0^{j} + \zeta_0^{k})^2}{|{\zeta^{j}} + {\zeta^{k}}|^2_{g^*}} = -1.
\]
This implies
\begin{align}\label{eq_I3}
\sum_{(i,j,k) \in \Sigma(3)}
c_3 (-\beta_2+1)
+ 2 c_2 \beta_2 I_3(\zeta^{1}, \zeta^{2}, \zeta^{3}) = 0,
\end{align}
where we write
\[
I_3(\zeta^{1}, \zeta^{2}, \zeta^{3}) = \sum_{(i,j,k) \in \Sigma(3)}(\zeta_0^{i} + \zeta_0^{j} + \zeta_0^{k})
\frac{(\zeta_0^{j} + \zeta_0^{k})}{|{\zeta^{j}} + {\zeta^{k}}|^2_{g^*}}.
\]
In the following,
we would like to construct two different sets of lightlike covectors
$\zeta^{1}, \zeta^{2}, \zeta^{3}$ such that $I_3$ takes different values. This allows us to form a homogeneous linear system of two equations, which shows that
$c_3(-\beta_2 + 1) = \beta_2 c_2 = 0$.
Indeed, we can prove the following lemma.
\begin{lm}\label{lm_construction3}
For fixed $q \in \mathbb{W}$ and $\zeta, \hat{\zeta}^{(1)} \in L^{*,+}_q M$,
we can find two different sets of nonzero lightlike covectors
\[
(\zeta^{1,k}, \zeta^{2,k}, \zeta^{3,k}), \quad k = 1,2,
\]
such that $\zeta = \sum_{j=1}^3 \zeta^{j,k}$ with { ${\zeta^{j,k}} = \alpha_j \hat{\zeta}^{j}$} for some $\alpha_j$ and the vectors
\[
(1, I_3(\zeta^{1,k}, \zeta^{2,k}, \zeta^{3,k})), \quad k=1,2,
\] are linearly independent.
\end{lm}
\begin{proof}
First we choose local coordinates $x = (x^0, x^1, x^2, x^3)$ at $q$ such that $g$ coincides with the Minkowski metric.
Then we rotate the coordinate system in the spatial variables such that
$\zeta, {\zeta^{j}}, j = 1,2,3$ are in the same plane $\zeta_3 = 0$, since they are linearly dependent.
Without loss of generality, we assume
\[
\zeta = \lambda \hat{\zeta}, \quad \zeta^{1} = \alpha_1 \hat{\zeta}^{1},
\quad \zeta^{2} = \alpha_2 \hat{\zeta}^{2},
\quad \zeta^{3} = \alpha_3 \hat{\zeta}^{3},
\]
where $\lambda, \alpha_1, \alpha_2, \alpha_3$ are determined below, and
\begin{align*}
&\hat{\zeta} = (-1, -\cos \varphi, \sin \varphi, 0), \quad
&\hat{\zeta}^{1} = (-1, 1, 0, 0),\\
&\hat{\zeta}^{2} = (-1, \cos \theta, \sin \theta, 0), \quad
&\hat{\zeta}^{3} = (-1, \cos \theta, -\sin \theta, 0),
\end{align*}
with distinct parameters $\varphi, \theta \in (0, 2\pi)$.
From $\zeta = \sum_{j=1}^3 {\zeta^{j}}$, a direct computation shows that
\begin{align*}
&\lambda = 2 \sin \theta(1 - \cos \theta),
&\alpha_1 = -2 \sin \theta(\cos \varphi + \cos \theta),\\
&\alpha_2 = (1 + \cos \varphi) \sin \theta + (1 - \cos \theta) \sin \varphi,
&\alpha_3 = (1 + \cos \varphi) \sin \theta - (1 - \cos \theta) \sin \varphi.
\end{align*}
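Indeed, writing out $\zeta = \sum_{j=1}^3 \alpha_j \hat{\zeta}^{j}$ componentwise yields the linear system
\[
\alpha_1 + \alpha_2 + \alpha_3 = \lambda, \quad
\alpha_1 + (\alpha_2 + \alpha_3)\cos \theta = -\lambda \cos \varphi, \quad
(\alpha_2 - \alpha_3)\sin \theta = \lambda \sin \varphi,
\]
and one can verify that the stated values satisfy all three equations, using $\alpha_2 + \alpha_3 = 2(1 + \cos \varphi)\sin\theta$ and $\alpha_2 - \alpha_3 = 2(1 - \cos\theta)\sin\varphi$.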
Note that we do not need these explicit forms in the following.
Instead, we compute
\begin{align*}
\langle \hat{\zeta}^{1}, \hat{\zeta}^{2} \rangle_g = \cos \theta - 1,
\quad \langle \hat{\zeta}^{1}, \hat{\zeta}^{3} \rangle_g = \cos \theta - 1,
\quad \langle \hat{\zeta}^{2}, \hat{\zeta}^{3} \rangle_g = 2(\cos^2 \theta - 1).
\end{align*}
With $\zeta$ lightlike, one has
\begin{align*}
&| \alpha_1 \hat{\zeta}^{1} + \alpha_2 \hat{\zeta}^{2} + \alpha_3 \hat{\zeta}^{3}|_{g^*}^2 = 0\\
\Rightarrow \quad &
(\alpha_1 \alpha_2 + \alpha_1 \alpha_3)(\cos \theta - 1) + \alpha_2 \alpha_3 \cdot 2(\cos \theta - 1)(\cos \theta + 1) = 0\\
\Rightarrow \quad &
\frac{\alpha_2 + \alpha_3}{\alpha_2 \alpha_3}
=\frac{1}{\alpha_3} + \frac{1}{\alpha_2}
=-\frac{2(\cos\theta + 1)}{\alpha_1}.
\end{align*}
It follows that
\begin{align*}
{I}_3(\zeta^{1}, \zeta^{2}, \zeta^{3}) &=
-\lambda
(\frac{\alpha_1 + \alpha_2}{2(\cos \theta -1)\alpha_1 \alpha_2}
+ \frac{\alpha_1 + \alpha_3}{2(\cos \theta -1)\alpha_1 \alpha_3}
+ \frac{\alpha_2 + \alpha_3}{4(\cos \theta -1)(\cos \theta + 1)\alpha_2 \alpha_3})\\
& = \frac{-\lambda}{2(\cos \theta -1)}( \frac{\alpha_1 + \alpha_2}{\alpha_1 \alpha_2} + \frac{\alpha_1 + \alpha_3}{\alpha_1 \alpha_3}
- \frac{1}{(\cos \theta + 1)} \cdot \frac{(\cos \theta + 1)}{\alpha_1}) \\
& = \frac{-\lambda}{2(\cos \theta -1)\alpha_1}(\frac{\alpha_1 + \alpha_2}{\alpha_2} + \frac{\alpha_1 + \alpha_3}{\alpha_3}
- {1})\\
& = \frac{-\lambda}{2(\cos \theta -1)\alpha_1}
(\alpha_1(\frac{1}{\alpha_2} + \frac{1}{\alpha_3}) +1)\\
& = \frac{-\lambda}{2(\cos \theta -1)\alpha_1}
(-{2(\cos\theta + 1)} +1)\\
&= \frac{-2 \sin \theta(1 - \cos \theta)}{2(\cos \theta -1)(-2 \sin \theta(\cos \varphi + \cos \theta))}
(-2\cos\theta -1)
= \frac{2\cos\theta +1}{2 (\cos \varphi + \cos \theta)}.
\end{align*}
By fixing $\varphi$ and choosing different $\theta$, we can find two sets of $(\zeta^{1,k}, \zeta^{2,k}, \zeta^{3,k}), k =1,2$ such that
$I_3(\zeta^{1,k}, \zeta^{2,k}, \zeta^{3,k})$ are different.
This proves the lemma.
\end{proof}
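For instance, fixing $\varphi = \pi/2$, the choices $\theta = \pi/3$ and $\theta = 2\pi/3$ are admissible (one checks $\lambda, \alpha_1, \alpha_2, \alpha_3 \neq 0$ in both cases) and give
\[
I_3 = \frac{2\cos(\pi/3) + 1}{2\cos(\pi/3)} = 2, \qquad
I_3 = \frac{2\cos(2\pi/3) + 1}{2\cos(2\pi/3)} = 0,
\]
so the vectors $(1, 2)$ and $(1, 0)$ are linearly independent.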
Thus, from (\ref{eq_I3}) we conclude that $c_3(-\beta_2 + 1) = \beta_2 c_2 = 0$
at any $q \in \mathbb{W}$.
Now we consider the open set $W_2 = \{x \in \mathbb{W}: \beta_2(x) = 0\}^\text{int}$, that is, the interior of the set where $\beta_2(x) = 0$.
For $x \in \mathbb{W} \setminus W_2$, there exists a sequence $x_j$ converging to $x$ as $j \rightarrow \infty$, such that $\beta_2(x_j) \neq 0$.
For each $x_j$, we have $c_2(x_j) = 0$, which implies $\partial_t(\varrho^2)(x_j) = 0$ and therefore $\partial_t \varrho(x_j) = 0$.
By continuity, it follows that $\partial_t \varrho(x) = 0$ for any $x$ in $\mathbb{W} \setminus W_2$.
If $W_2$ is not empty, for $x \in W_2$ we must have $c_3 = 2 \varrho^{-1} \beta_3 \partial_t(\varrho^3) = 0$, since $-\beta_2 + 1= 1$.
For convenience, we define a sequence of open sets
\[
W_k = \{x \in W_{k-1}: \beta_k(x) = 0\}^\text{int}
\]
as a subset of $W_{k-1}$, for $k=3, 4, \ldots$.
Similarly, for any $x \in W_2 \setminus W_3$, there is a sequence $x_j$ converging to $x$ as $j \rightarrow \infty$, such that $\beta_3(x_j) \neq 0$, which implies $\partial_t(\varrho^3)(x_j) = 0$.
Then we must have $\partial_t \varrho = 0$ on $W_2 \setminus W_3$.
If $W_3$ is not empty, for $x \in W_3$ the nonlinear coefficients $\beta_2, \beta_3$ vanish in a small neighborhood of $x$.
In this case, we consider the fourth-order terms in the asymptotic expansion of $F^{(3)}$, with $\beta_2 = \beta_3 = 0$, i.e.,
\begin{align*}
B_4 = Q_{\text{bvp}}(\beta_4\partial_t^2(v^4) + c_4 \partial_t(v^4) + d_4 v^4).
\end{align*}
We consider the fourth-order linearization of the DN maps to obtain
\begin{align*}
(\partial_\nu \Ufour^{(3)} + \frac{1}{2}\langle b^{(2)}, \nu \rangle\Ufour^{(3)}) |_{(0,T) \times \partial \Omega} = 0,
\end{align*}
where the principal part of $\Ufour^{(3)}$ is given by
\[
{ \sigma_{p}}(Q_{\text{bvp}})(y, \eta, q, \zeta)
\sum_{(i,j,k,l) \in \Sigma(4)}
(\zeta_0^{i} + \zeta_0^{j} + \zeta_0^{k} + \zeta_0^{l}) c_4 \beta_4 \prod_{m=i,j,k,l}{ \sigma_{p}}(v_m)(q, \zeta^m).
\]
It follows that $c_4 \beta_4 = 0$, for $x \in W_3$.
The same argument shows that $\partial_t \varrho = 0$ on $W_3 \setminus W_4$.
One can continue this process by considering the $N$-th order linearization, if $W_{N-1}$ is not empty, for $N \geq 4$.
Note that we assume for each $x \in \mathbb{W}$, there exists some index $j$ such that
$\beta_j(x) \neq 0$.
This implies that $x \notin W_j$ for such $j$.
Therefore, we must have $\partial_t\varrho = 0$ on $\mathbb{W}$.
\section{The recovery of the one-form and the nonlinearity}\label{sec_recover_b}
In this section, we would like to recover the one-form $b(x)$
at any point
in the suitable larger set
\[
\mathbb{W} = \bigcup_{y^-, y^+ \in (0,T) \times \partial \Omega} I(y^-, y^+) \cap \intM,
\]
by combining the third-order and fourth-order linearizations of the DN map.
More explicitly, let $q \in \mathbb{W}$ be fixed.
For a covector $\zeta^o \in L_q^{*,\pm}M$, we denote by
\[
N^\pm(\zeta^o, \varsigma) = \{\zeta \in L_q^{*,\pm}M: \|\zeta-\zeta^o\| < \varsigma\}
\]
a conic neighborhood of $\zeta^o$ containing lightlike covectors
with small parameter $\varsigma>0$.
Similarly, we denote the conic neighborhood for a lightlike vector $w \in L^\pm_qM$ by $N^\pm(w, \varsigma)$.
The following lemma in \cite{UZ_acoustic} shows that one can perturb a lightlike vector to choose another one that corresponds to a null geodesic segment without cut points.
Here recall $V = (0,T) \times (\tO \setminus \Omega)$ is the open set where we construct virtual point sources and send distorted plane waves.
\begin{lm}[{\cite[Lemma 4]{UZ_acoustic}}]\label{lm_perturb_zeta1}
Let $q \in \mathbb{W}$ and $\hat{\zeta}^{1} \in L^{*,+}_q M$.
Suppose there is $(x_1, \xi_1) \in L^{+} V$ with
\[
(q, \hat{\zeta}^{1}) = (\gamma_{x_1, \xi_1}(s_1), (\dot{\gamma}_{x_1, \xi_1}(s_1))^\flat), \quad 0 < s_1 < \rho(x_1, \xi_1).
\]
Then we can find $\varsigma >0$ such that for any $\hat{\zeta}^{2} \in N^+(\hat{\zeta}^{1}, \varsigma)$, there exists a vector
$(x_2, \xi_2) \in L^{+} V$ with
\[
(q, \hat{\zeta}^{2}) = (\gamma_{x_2, \xi_2}(s_2), (\dot{\gamma}_{x_2, \xi_2}(s_2))^\flat), \quad 0 < s_2 < \rho(x_2, \xi_2).
\]
Moreover, $(x_1, \xi_1)$ and $(x_2, \xi_2)$ are causally independent.
\end{lm}
\subsection{Construction for the third-order linearization}\label{subsec_three}
In this subsection, we claim that
for any fixed point $q \in \mathbb{W}$,
one can find a set of lightlike vectors $\{(x_j, \xi_j)\}_{j=1}^3$ in $V$ and a lightlike covector $\zeta$ at $q$, which correspond to null geodesics intersecting regularly at $q$.
More precisely, the lightlike vectors $\{(x_j, \xi_j)\}_{j=1}^3$ correspond to three incoming null geodesics,
and the lightlike covector $\zeta$ at $q$ corresponds to the new singularities produced by the interaction of the three distorted plane waves.
When $s_0 >0$ is small enough,
the covector $\zeta$ can be chosen away from the singularities caused by the interaction of at most two waves.
Then $(q, \zeta)$ corresponds to an outgoing null geodesic, and we would like to find a lightlike covector $(y, \eta)$ on the boundary along this null geodesic, before its first cut point.
\begin{claim}\label{cl_const_three1}
Suppose $q \in \mathbb{W}$ and $s_0>0$ is sufficiently small.
Then one can find \[
{ \{(x_j, \xi_j)\}_{j=1}^3} \subset L^+V, \quad
\zeta \in \Lambda_{123}\setminus (\Lambda^{(1)} \cup \Lambda^{(2)}),
\quad (y, \eta) \in L^{*}_{\partial M, +}M,
\]
such that
\begin{enumerate}[(a)]
\item $(x_j, \xi_j), j = 1,2,3$ are causally independent as in (\ref{assump_xj}) and the null geodesics starting from them intersect regularly at $q$ (see Definition \ref{def_inter}),
such that
$\zeta$ is in the span of the covectors $(\dot{\gamma}_{x_j, \xi_j}(s))^\flat$ at $q$;
\item each $\gamma_{x_j, \xi_j}(\mathbb{R}_+)$ hits $\partial M$ exactly once and transversally before it passes $q$;
\item $(y, \eta) \in L^{*}_{\partial M, +}M$ lies in the bicharacteristic from $(q, \zeta)$ and additionally there are no cut points along $\gamma_{q, \zeta^\sharp}(s)$ from $q$ to $y$.
\end{enumerate}
\end{claim}
\begin{proof}
By \cite[Lemma 3.5]{Kurylev2018}, first we pick $\zeta$ and ${\hat{\zeta}^{1}}$ in $L^{*,+}_q M$ such that there exist $(x_1, \xi_1) \in L^{+}V$ and $(\hat{y}, \hat{\eta}) \in L^{*,+}V$ with
\[
(q, \hat{\zeta}^{1}) = (\gamma_{x_1, \xi_1}({s_q}), (\dot{\gamma}_{x_1, \xi_1}(s_q))^\flat), \quad
(\hat{y}, \hat{\eta}) = (\gamma_{q, \zeta^\sharp}(s_{\hat{y}}), (\dot{\gamma}_{q, \zeta^\sharp}(s_{\hat{y}}))^\flat),
\]
for some $ 0 <s_q< \rho(x_1, \xi_1)$ and
$ 0 <s_{\hat{y}}< \rho(q, \zeta)$.
Note that one can find such $(\hat{y}, \hat{\eta})$ by considering the opposite direction, following the proof of \cite[Lemma 3.5]{Kurylev2018}.
Next by Lemma \ref{lm_perturb_zeta1}, one can find two more covectors $\hat{\zeta}^{j}$ at $q$, with $(x_j, \xi_j)$ for $j = 2,3$, such that $(x_j, \xi_j), j=1,2,3$ are linearly independent and causally independent.
Then to prove the rest of (a),
we would like to choose such $\hat{\zeta}^{j}, j=2,3$ satisfying Lemma \ref{lm_perturb_zeta_three} in the following.
To have (b), we can always replace $(x_j, \xi_j)$ by $(\gamma_{x_j, \xi_j}(s_j),\dot{\gamma}_{x_j, \xi_j}(s_j))$ for some $s_j > 0$ if necessary.
Then by \cite[Lemma 2.4]{Hintz2017}, the null geodesic $\gamma_{x_j, \xi_j}(s)$ always hits $\partial M$ transversally before it passes $q$, since the boundary is assumed to be null-convex.
To have (c), recall we have found
$\zeta \in L_q^{*,+}M$ with
$(\hat{y}, \hat{\eta}) = (\gamma_{q, \zeta^\sharp}(s_{\hat{y}}), (\dot{\gamma}_{q, \zeta^\sharp}(s_{\hat{y}}))^\flat) \in L^{*, +}V$ for some
$ 0 <s_{\hat{y}}< \rho(q, \zeta)$.
We define
\[
s_y = \inf \{ s> 0: \gamma_{q, \zeta}(s) \in \partial M \}, \quad (y, \eta) = (\gamma_{q, \zeta}(s_y), (\dot{\gamma}_{q, \zeta}(s_y))^\flat).
\]
Note that $s_y < s_{\hat{y}} < \rho(q, \zeta)$.
In addition, the null geodesic $\gamma_{q, \zeta}(s)$ hits $\partial M$ transversally at $y$.
Thus, $(y, \eta) \in L^{*}_{\partial M, +}M$ and (c) is true for $(y, \eta)$.
\end{proof}
\begin{lm}\label{lm_perturb_zeta_three}
Let $q \in \mathbb{W}$ and $\zeta, \hat{\zeta}^{1} \in L_q^{*,+} M$ be fixed.
Let $(x_1, \xi_1) \in L^{+} V$ satisfying
\[
(q, \hat{\zeta}^{1}) = (\gamma_{x_1, \xi_1}(s_1), (\dot{\gamma}_{x_1, \xi_1}(s_1))^\flat)
\] with $0 < s_1 < \rho(x_1, \xi_1)$.
For sufficiently small $\varsigma >0$,
we can find
$\hat{\zeta}^{2}, \hat{\zeta}^{3} \in N^+(\hat{\zeta}^{1}, \varsigma)$, such that there exist lightlike vectors
$(x_j, \xi_j) \in L^{+} V$ for $j = 2,3$, with
\[
(q, \hat{\zeta}^{j}) = (\gamma_{x_j, \xi_j}(s_j), (\dot{\gamma}_{x_j, \xi_j}(s_j))^\flat), \quad
0 < s_j < \rho(x_j, \xi_j),
\]
and satisfying that $\hat{\zeta}^{1}, \hat{\zeta}^{2}, \hat{\zeta}^{3}$ are linearly independent with
$\zeta$ in their span.
In addition, we can find such $x_1, x_2, x_3$ that are causally independent.
\end{lm}
\begin{proof}
Let $\theta = -\dot{\gamma}_{x_1, \xi_1}(s_1)$ be the past pointing lightlike vector at $q$.
By the same idea of Lemma \ref{lm_perturb_zeta1},
there exists $\varsigma >0$ such that for each $\vartheta \in N^-(\theta, \varsigma)$,
one can find
a vector $(x, \xi) \in L^{+} V$
satisfying $(x, \xi) = (\gamma_{q, \vartheta}(s_x), -\dot{\gamma}_{q, \vartheta}(s_x))$ with $ 0 < s_x < \rho(q, \vartheta)$.
In particular, the proof there shows that $\textbf{t}(x) = \textbf{t}(x_1)$, where $\textbf{t}$ is the time function.
In the following, we would like to choose two more lightlike vectors $\vartheta_j \in N^-(\theta, \varsigma)$ that are linearly independent and additionally $w = \zeta^\sharp \in L^{+}_q M$ is in their span.
For this purpose, first at $q$, we consider local coordinates \[
x = ( x^0, x^1, x^2, x^3)
\]
such that $g$ coincides with the Minkowski metric.
One can rotate the coordinate system in the spatial variables such that $\theta$ and $w$ lie in the same plane $x^3 = 0$.
This indicates that, without loss of generality, we can assume
\[
w = (1, \sqrt{1-r_0^2}, -r_0, 0), \quad \theta = (-1, 1, 0, 0),
\]
where $r_0 \in [-1, 1]$ is a parameter.
We set $\vartheta_1 = \theta$ and choose
\[
\vartheta_2 = (-1, \sqrt{1-s^2}, s, 0), \quad \vartheta_3 = (-1, \sqrt{1-s^2}, -s, 0),
\]
with a sufficiently small parameter $s$.
This is the construction proposed in \cite{Hintz2020}.
One can see that $\vartheta_j$ are linearly independent and $w$ is indeed in the span of $\vartheta_j$, $j = 1, 2, 3$.
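More concretely, solving $w = a \vartheta_1 + b \vartheta_2 + c \vartheta_3$ componentwise gives
\[
b - c = -\frac{r_0}{s}, \qquad
(b + c)(\sqrt{1-s^2} - 1) = 1 + \sqrt{1-r_0^2}, \qquad
a = -(1 + b + c),
\]
which is solvable whenever $s \neq 0$.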
From the analysis above, for each $\vartheta_j$, we can find a vector $(x_j, \xi_j) \in L^+V$
before the first cut point
with $\textbf{t}(x_j) = \textbf{t}(x_1)$ for $j=2,3$.
Thus, $x_1, x_2, x_3$ are causally independent.
Then let $s_j$ be the parameter such that $q = \gamma_{x_j, \xi_j}(s_j)$.
We must have $ 0 < s_j < \rho(x_j, \xi_j)$, since $x_j$ is before the first cut point of $q$ along $\gamma_{q, \vartheta_j}$.
This proves the lemma.
\end{proof}
Next, we claim that one can construct a sequence of lightlike vectors in $V$ and a lightlike covector $\zeta$ at $q$, which satisfy Claim \ref{cl_const_three1}.
More explicitly,
for any fixed $q \in \mathbb{W}$
and sufficiently small $s_0 > 0$,
one can find $(x_1, \xi_1)\in L^+V$ and
sequences of lightlike vectors
\[
(x_{j,k}, \xi_{j,k}) \rightarrow (x_1, \xi_1)
\quad \text{as } k \rightarrow +\infty,
\]
for $j =2, 3$ with
\[
\zeta \in \Lambda_{123}\setminus (\Lambda^{(1)} \cup \Lambda^{(2)}),
\quad (y, \eta) \in L^{*}_{\partial M, +}M,
\]
such that for each fixed $k$, the conditions (a) - (c) in Claim \ref{cl_const_three1} hold.
Indeed, by the proof of Claim \ref{cl_const_three1}, one can find such $(y, \eta)$ satisfying the condition (c).
To satisfy (a) and (b),
by Lemma \ref{lm_perturb_zeta_three},
we choose a sequence of $\varsigma_k$ that converges to zero.
For each $\varsigma_k$ that is sufficiently small,
we can find different
$\hat{\zeta}^{2}, \hat{\zeta}^{3} \in N^+(\hat{\zeta}^{1}, \varsigma_k)$, such that there are lightlike vectors
$(x_{j,k}, \xi_{j,k}) \in L^{+} V$ for $j = 2,3$
satisfying (a) and (b).
Since $\varsigma_k$ goes to zero, $(x_{j,k}, \xi_{j,k})$ converges to $(x_1, \xi_1)$ as $k$ goes to $+\infty$.
For each fixed $k$, we can recover the quantity
$M_3(q, \zeta^{1}, \zeta^{2,k}, \zeta^{3,k})$, see (\ref{eq_M3}).
Since $(x_{j,k}, \xi_{j,k})$ converges to $(x_1, \xi_1)$ as $k \rightarrow +\infty$,
the null geodesics $\gamma_{x_{j,k}, \xi_{j,k}}(s)$ with $j = 2,3$ converge to $\gamma_{x_1, \xi_1}(s)$.
In this case, from a sequence of (\ref{eq_M3}), we expect to recover
\begin{align}\label{eq_m3}
m_3 (q, \zeta, \zeta^1)
= -(2 \beta^2_2 + \beta_3)
{{ \sigma_{p}}}(Q_g)(y, \eta, q, \zeta)
({{ \sigma_{p}}}(Q_g)(q, \zeta, x_1, \xi_1^\sharp)\sigma_{{p}}(v_1)(x_1, \xi_1^\sharp))^3.
\end{align}
\section{Preliminaries}\label{sec_prelim}
\subsection{Lorentzian manifolds}
Recall $(M,g)$ is globally hyperbolic with timelike and null-convex boundary, where $M = \mathbb{R} \times \Omega$.
As in \cite{Hintz2020}, we extend $(M,g)$ smoothly to a slightly larger globally hyperbolic Lorentzian manifold $({M_{\text{e}}}, g_\mathrm{e})$ without boundary,
where ${M_{\text{e}}} = \mathbb{R} \times \Omega_{\mathrm{e}}$
such that $\Omega$ is contained in the interior of the open set $\Omega_{\mathrm{e}}$.
{
See also \cite[Section 7]{Uhlmann2021a} for more details about the extension.
In the following, we abuse the notation and do not distinguish between $g$ and $g_\mathrm{e}$ if there is no confusion caused.
Let \[V = (0,T) \times (\Omega_\mathrm{e} \setminus \Omega)\] be the virtual observation set.
In Section \ref{sec_threefour}, we will use $V$ to construct boundary sources.
}
We recall some notation and preliminaries from \cite{Kurylev2018}.
For $\eta \in T_p^*{M_{\text{e}}}$, the corresponding vector of $\eta$ is denoted by $ \eta^\sharp \in T_p {M_{\text{e}}}$.
The corresponding covector of $v \in T_p {M_{\text{e}}}$ is denoted by $ v^\flat \in T^*_p {M_{\text{e}}}$.
We denote by
\[
L_p {M_{\text{e}}} = \{v \in T_p {M_{\text{e}}} \setminus 0: \ g(v, v) = 0\}
\]
the set of light-like vectors at $p \in {M_{\text{e}}}$ and similarly by $L^*_p {M_{\text{e}}}$ the set of light-like covectors.
The sets of future-pointing (or past-pointing) light-like vectors are denoted by $L^+_p {M_{\text{e}}}$ (or $L^-_p {M_{\text{e}}}$), and those of future-pointing (or past-pointing) light-like covectors are denoted by $L^{*,+}_p {M_{\text{e}}}$ (or $L^{*,-}_p {M_{\text{e}}}$).
We denote the outward (+) and inward (-) pointing tangent bundles by
\begin{equation}\label{def_tbundle}
T_{\partial M, \pm} M = \{(x, v) \in TM|_{\partial M}: \ \pm g(v, n)>0 \},
\end{equation}
where $n$ is the outward pointing unit normal of $\partial M$.
For convenience, we also introduce the notation
\begin{equation}\label{def_Lbundle}
L^{*}_{\partial M, \pm}M =\{(z, \zeta)\in L^* M \text{ such that } (z, \zeta^\sharp)\in T_{\partial M, \pm} M \}
\end{equation}
to denote the lightlike covectors that are outward or inward pointing on the boundary.
The time separation function $\tau(x,y) \in [0, \infty)$ between two points $x < y$ in ${M_{\text{e}}}$
is the supremum of the lengths \[
L(\alpha) = \int_0^1 \sqrt{-g(\dot{\alpha}(s), \dot{\alpha}(s))} ds
\] of
the piecewise smooth causal paths $\alpha: [0,1] \rightarrow {M_{\text{e}}}$ from $x$ to $y$.
If $x<y$ is not true, we define $\tau(x,y) = 0$.
Note that $\tau(x,y)$ satisfies the reverse triangle inequality
\[
\tau(x,y) +\tau(y,z) \leq \tau(x,z), \text{ where } x \leq y \leq z.
\]
For $(x,v) \in L^+{M_{\text{e}}}$, recall the cut locus function
\[
\rho(x,v) = \sup \{ s\in [0, \mathcal{T}(x,v)]:\ \tau(x, \gamma_{x,v}(s)) = 0 \},
\]
where $\mathcal{T}(x,v)$ is the maximal time such that $\gamma_{x,v}(s)$ is defined.
Here we denote by $\gamma_{x, v}$ the unique null geodesic starting from $x$ in the direction $v$.
The cut locus function for past lightlike vector $(x,w) \in L^-{M_{\text{e}}}$ is defined dually with opposite time orientation, i.e.,
\[
\rho(x,w) = \inf \{ s\in [\mathcal{T}(x,w),0]:\ \tau( \gamma_{x,w}(s), x) = 0 \}.
\]
For convenience, we abuse the notation $\rho(x, \zeta)$ to denote $\rho(x, \zeta^\sharp)$ if $\zeta \in L^{*,\pm}{M_{\text{e}}}$.
By \cite[Theorem 9.15]{Beem2017}, the first cut point $\gamma_{x,v}(\rho(x,v))$ is either the first conjugate point or the first point on $\gamma_{x,v}$ where there is another different geodesic segment connecting $x$ and $\gamma_{x,v}(\rho(x,v))$.
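For example, in Minkowski space the null geodesics are straight lines, which contain no conjugate points and intersect each other at most once, so $\rho(x,v) = \mathcal{T}(x,v)$ for every $(x,v) \in L^+{M_{\text{e}}}$.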
In particular, when $g = -\diff t^2 + g_0$, one can prove the following proposition.
\begin{pp}[{\cite[Proposition 2]{UZ_acoustic}}]\label{pp_thm1}
Let $(\Omega, g_0)$ satisfy the assumption (\ref{assum_Omega}) and $g = -\diff t^2 + g_0$, see (\ref{eq_g0}) for the definition of $g_0$.
For any $x'_0 \in \Omega$, one can find a point $q\in \mathbb{W}$ with $q = (t_q, x'_0)$ for some $t_q \in (0, T)$.
\end{pp}
Moreover, with $\langle b(x), \nabla \rangle = b_0(x) \partial_t$, we have
$\Lambda_{b,h,F} = \Lambda_{b_0, F}$.
This implies that Theorem \ref{thm1} follows from Theorem \ref{thm}, since
there is no $\varrho(x) \in C^\infty(M)$ such that
\[
b^{(2)}(x') = b^{(1)}(x') + 2\varrho^{-1} \diff \varrho
\]
satisfying
$
\langle b^{(k)}(x'), \nabla p(x) \rangle = b^{(k)}_0(x') \partial_t p
$
for $k=1,2$.
\subsection{Distributions}
Suppose $\Lambda$ is a conic Lagrangian submanifold in $T^*{M_{\text{e}}}$ away from the zero section.
We denote by $\mathcal{I}^\mu(\Lambda)$ the set of Lagrangian distributions in ${M_{\text{e}}}$ associated with $\Lambda$ of order $\mu$.
In local coordinates, a Lagrangian distribution can be written as an oscillatory integral and we regard its principal symbol,
which is invariantly defined on $\Lambda$ with values in the half density bundle tensored with the Maslov bundle, as a function in the cotangent bundle.
If $\Lambda$ is a conormal bundle of a submanifold $K$ of ${M_{\text{e}}}$, i.e. $\Lambda = N^*K$, then such distributions are also called conormal distributions.
The space of distributions in ${M_{\text{e}}}$ associated with two cleanly intersecting conic Lagrangian manifolds $\Lambda_0, \Lambda_1 \subset T^*{M_{\text{e}}} \setminus 0$ is denoted by $\mathcal{I}^{p,l}(\Lambda_0, \Lambda_1)$.
If $u \in \mathcal{I}^{p,l}(\Lambda_0, \Lambda_1)$, then one has $\WF(u) \subset \Lambda_0 \cup \Lambda_1$ and
\[
u \in \mathcal{I}^{p+l}(\Lambda_0 \setminus \Lambda_1), \quad u \in \mathcal{I}^{p}(\Lambda_1 \setminus \Lambda_0)
\]
away from their intersection $\Lambda_0 \cap \Lambda_1$. The principal symbols of $u$ on $\Lambda_0$ and $\Lambda_1$ can be defined accordingly, and they satisfy certain compatibility conditions on the intersection.
For a more detailed introduction to Lagrangian distributions and paired Lagrangian distributions, see \cite[Section 3.2]{Kurylev2018} and \cite[Section 2.2]{Lassas2018}.
The main references are \cite{MR2304165, Hoermander2009} for conormal and Lagrangian distributions and
{
\cite{Melrose1979,Guillemin1981,Hoop2015,Greenleaf1990,Greenleaf1993}
for paired Lagrangian distributions.}
\subsection{The causal inverse}\label{subsec_Q}
We consider the linear operator
\[
P_o = \square_g
+ \langle b(x), \nabla \rangle + h(x),
\]
on the globally hyperbolic Lorentzian manifold $({M_{\text{e}}},{g_\mathrm{e}})$ without boundary.
Note that here $P_o$ is defined for distributions on ${M_{\text{e}}}$.
More precisely, to apply the calculus in \cite{Hoermander1971},
one needs to consider operators acting on half densities instead of distributions.
In particular, since we deal with subprincipal symbols,
working with half densities fixes certain constants in our analysis.
However, this is not essential for the recovery of the one-form and the nonlinearity.
More precisely, one can consider the half-density $|g|^{\frac{1}{4}}$ and define
\[
P v = |g|^{\frac{1}{4}} \square_g(|g|^{-\frac{1}{4}} v)
+ \langle b(x), |g|^{\frac{1}{4}}\nabla(|g|^{-\frac{1}{4}} v) \rangle + h(x)v,
\]
for $v \in \mathcal{E}'({M_{\text{e}}}; \Omega^{\frac{1}{2}})$, see \cite{Chen2020}.
The principal symbol and the subprincipal symbol are given by
\begin{align}\label{eq_psP}
\sigma_{{p}}(P)(x, \zeta) =g^{ij}\zeta_i \zeta_j,
\quad \quad \quad \sigma_{\mathrm{sub}}(P)(x, \zeta) = \iota \langle b(x), \zeta \rangle.
\end{align}
The characteristic set $\mathrm{Char}(P)$ is the set $\sigma_{{p}}(P)^{-1}(0) \subset T^*{M_{\text{e}}}$.
It is also the set of light-like covectors with the Lorentzian metric $g$.
The Hamilton vector field is
\begin{align*}
H_P = 2g^{ij}\zeta_i \frac{\partial}{\partial x^j}
- \frac{\partial g^{kl}}{\partial x^j}\zeta_k\zeta_l \frac{\partial}{\partial \zeta_j},
\end{align*}
and we consider the corresponding flow $\phi_s: T^*M \rightarrow T^*M$, for $s \in \mathbb{R}$.
We write
\[
\phi_s(x, \zeta) = (x(s), \zeta(s)) = \lambda(s).
\]
The set $\{(x(s), \zeta(s)), \ s \in \mathbb{R}\}$ is the null bicharacteristic $\Theta_{x, \zeta}$ of $P$.
Moreover,
let $\Lambda$ be a conic Lagrangian submanifold in $T^*M \setminus 0$ intersecting $\mathrm{Char}(P)$ transversally.
We use the notation $ \Lambda^g$ to denote the flow-out of $\Lambda \cap \mathrm{Char}(P)$ under the Hamiltonian flow, i.e.,
for any fixed lightlike covector $(x, \zeta) \in \Lambda \cap \mathrm{Char}(P)$, we have $\phi_s(x, \zeta) \in \Lambda^g$ for $s \in \mathbb{R}$.
In addition, the integral curves $x(s), \zeta(s)$ satisfy the equations
\begin{align*}
\dot{x}^j = 2 g^{ij} \zeta_i,
\quad \dot{\zeta}_j = - \frac{\partial g^{kl}}{\partial x^j} \zeta_k \zeta_l,
\end{align*}
where we write ${\diff x}/{\diff s} = \dot{x}$
and ${\diff \zeta}/{\diff s}=\dot{\zeta}$.
This implies
that $x(s)$ is a unique null geodesic on ${M_{\text{e}}}$, starting from $x$ in the direction of $2 \zeta^\sharp$, with
$\zeta_i(s) = \frac{1}{2}g_{ij}\dot{x}^j(s)$.
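As a quick check, along $\mathrm{Char}(P)$ we have
\[
g(\dot{x}, \dot{x}) = g_{jk}(2g^{ij}\zeta_i)(2g^{lk}\zeta_l) = 4 g^{il}\zeta_i \zeta_l = 0,
\]
so the curve $x(s)$ is indeed lightlike.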
Note that $P$ is normally hyperbolic, see \cite[Section 1.5]{Baer2007}.
It has a unique causal inverse $P^{-1}$ according to \cite[Theorem 3.3.1]{Baer2007}.
By \cite{Duistermaat1972}
and \cite[Proposition 6.6]{Melrose1979}, one can symbolically construct a parametrix $Q$, which is the solution operator to the wave equation
\begin{align}\label{eq_lineareq}
P v &= f, \quad \text{ on } {M_{\text{e}}},\\
v & = 0, \quad \text{ on } {M_{\text{e}}} \setminus J^+(\supp(f)), \nonumber
\end{align}
in the microlocal sense.
It follows that $Q \equiv P^{-1}$ up to a smoothing operator.
Let $k_Q(x, \tilde{x}) \in \mathcal{D}'({M_{\text{e}}} \times {M_{\text{e}}}; \Omega^{\frac{1}{2}})$ be the Schwartz kernel of $Q$, i.e.,
\[
Q v(x) = \int k_Q(x, \tilde{x}) v(\tilde{x}) \diff \tilde{x},
\]
and
it is a paired Lagrangian distribution
in $\mathcal{I}^{-\frac{3}{2}, -\frac{1}{2}} (N^*\text{Diag}, (N^*\text{Diag})^g)$.
Here $\text{Diag}$ denotes the diagonal in ${M_{\text{e}}} \times {M_{\text{e}}}$ and $N^*\text{Diag}$ is its conormal bundle.
The notation $(N^*\text{Diag})^g$ is the flow out of
$N^*\text{Diag} \cap \mathrm{Char}(P)$ under the Hamiltonian vector field $H_P$.
We construct the microlocal solution to the equation
\[
P k_Q(x, \tilde{x}) = \delta(x, \tilde{x}) \mod C^\infty({M_{\text{e}}} \times {M_{\text{e}}}; \Omega^{\frac{1}{2}}),
\]
using the proof of \cite[Proposition 6.6]{Melrose1979},
where we regard $P$, a differential operator on $X$, as its lift to $X \times X$ under the first projection $X \times X \rightarrow X$.
The symbol of $Q$ can be found during the construction there.
In particular, the principal symbol of $Q$ along $N^*\text{Diag}$, which satisfies
$
\sigma_p(\delta) = \sigma_p(P) \sigma_p(Q),
$
is nonvanishing.
The principal symbol of $Q$ along $(N^*\text{Diag})^g \setminus N^*\text{Diag}$ solves the transport equation
\begin{align*}
\mathcal{L}_{H_P}\sigma_p(Q) + \iota \sigma_{\mathrm{sub}}(P)\sigma_p(Q) = 0,
\end{align*}
where the Hamiltonian vector field $H_P$ is lifted to $(T^*X \setminus 0) \times (T^*X \setminus 0)$ and
$\mathcal{L}_{H_P}$ is its Lie action on half densities over $(T^*X \setminus 0) \times (T^*X \setminus 0)$.
The initial condition
is given by restricting $\sigma_p(Q)|_{N^*\text{Diag}}$ to $\partial (N^*\text{Diag})^g$;
see \cite[(6.7) Section 4 and 6]{Melrose1979}.
We have the following proposition according to \cite[Proposition 2.1]{Greenleaf1993}, see also \cite[Proposition 2.1]{Lassas2018}.
\begin{pp}
Let $\Lambda$ be a conic Lagrangian submanifold in $T^*M \setminus 0$.
Suppose $\Lambda$ intersects $\mathrm{Char}(P)$ transversally and intersects each bicharacteristic finitely many times.
Then
\[
Q: \mathcal{I}^\mu(\Lambda) \rightarrow \mathcal{I}^{p,l}(\Lambda, \Lambda^g),
\]
where $ \Lambda^g$ is the flow-out of $\Lambda \cap \mathrm{Char}(P)$ under the Hamiltonian flow.
Moreover, for $u \in \mathcal{I}^\mu(\Lambda)$ and $(x, \xi) \in \Lambda^g \setminus \Lambda$, we have
\[
\sigma_p(Q u)(x, \xi) = \sum \sigma_p(Q)(x, \xi, y_j, \eta_j)\sigma_p(u)(y_j, \eta_j),
\]
where the summation is over the points $(y_j, \eta_j) \in \Lambda$ that lie on the bicharacteristics from $(x, \xi)$.
\end{pp}
On the other hand, we can symbolically construct the solution $v$ to (\ref{eq_lineareq}) directly by
[Hormander 2] and \cite[Proposition 6.6]{Melrose1979}, see also \cite[Theorem 3]{Chen2020}.
More precisely, let $\Lambda$ and $\Lambda^g$ be defined as in the proposition above.
When $f \in \mathcal{I}^\mu(\Lambda)$,
the solution $v \in \mathcal{I}^{\mu - \frac{3}{2},-\frac{1}{2}}(\Lambda, \Lambda^g)$ satisfies
\begin{align}
\sigma_{{p}}(v) =
\sigma_{{p}}(P)^{-1} \sigma_{{p}}(f) &\quad \text{on } {\Lambda \cap \mathrm{Char}(P)}, \label{eq_Lambda0} \\
\mathcal{L}_{H_P} \sigma_{{p}}(v) + \iota \sigma_{\mathrm{sub}}(P)\sigma_{{p}}(v) = 0
&\quad \text{on } \Lambda^g, \label{eq_transp}
\end{align}
where the initial condition of (\ref{eq_transp})
is given by restricting (\ref{eq_Lambda0})
to $\partial \Lambda^g$,
see \cite[Section 4 and 6]{Melrose1979} and also \cite[Appendix A]{Chen2020}.
To solve (\ref{eq_transp}) more explicitly,
we fix a strictly positive half density $\omega$ on $\Lambda^g$, which is positively homogeneous of degree ${1}/{2}$.
This half density can be chosen by considering a Riemannian metric $g^+$ on ${M_{\text{e}}}$.
Indeed, $g^+$ induces a Sasaki metric $\varpi$ on $T^*{M_{\text{e}}}$ and one can consider the half density $|\varpi|^{\frac{1}{4}}$.
Now suppose $\sigma_{{p}}(v) = a \omega$, where $a$ is a smooth function on $\Lambda^g$.
Then we have
\[
\mathcal{L}_{H_P}(a \omega) = (H_P a) \omega + a \mathcal{L}_{H_P}(\omega).
\]
The transport equation (\ref{eq_transp}) can be written as
\[
H_P a + (c_\omega + \iota \sigma_{\mathrm{sub}}(P)) a = 0,
\]
where $c_\omega = \omega^{-1} \mathcal{L}_{H_P}(\omega)$.
Recall the Hamiltonian flow $\lambda(s) = (x(s), \zeta(s))$, where $x(s)$ is a null geodesic with $\zeta_i(s) = \frac{1}{2}g_{ij}\dot{x}^j(s)$.
Along $\lambda(s)$, we compute
\begin{align*}
(H_P a)\circ \lambda(s) = \frac{\diff \ }{\diff s}(a \circ \lambda)(s).
\end{align*}
Thus, the transport equation (\ref{eq_transp}) along $\lambda(s)$ is given by
\begin{align*}
\frac{\diff \ }{\diff s}(a \circ \lambda) + (c_\omega + \iota \sigma_{\mathrm{sub}}(P)) a\circ \lambda(s) = 0.
\end{align*}
Using equation (\ref{eq_psP}), we have
\begin{align*}
\frac{\diff \ }{\diff s}(a \circ \lambda) +
(c_\omega \circ x(s) - \langle b(x(s)), \zeta(s) \rangle) (a \circ \lambda)(s) = 0.
\end{align*}
It has a unique solution
\[
a \circ \lambda(s) = a(x(s), \zeta(s)) = a(x, \zeta)
\exp(-\int_0^s (c_\omega \circ x(s') - \langle b(x(s')), \zeta(s') \rangle) \diff s').
\]
We compute
\[
\langle b(x(s')), \zeta(s') \rangle = \langle b(x(s')), \tfrac{1}{2}g_{ij}\dot{x}^j(s')\rangle = \frac{1}{2} \langle b(x(s')), \dot{x}(s')\rangle.
\]
This implies that along $\lambda(s)$,
the principal symbol of $Q$ at $(x(s), \zeta(s), x, \zeta) \in \Lambda^g$ is given by
\begin{align}\label{eq_Qsym}
\sigma_{{p}}(Q)(x(s), \zeta(s), x, \zeta)
&= \exp(-\int_0^s c_\omega \circ x(s') - \frac{1}{2}\langle b(x(s')), \dot{x}(s') \rangle \diff s'),
\end{align}
and we have
\begin{align*}
\sigma_{{p}}(v)(x(s), \zeta(s)) = \frac{\omega(x(s), \zeta(s))}{\omega(x,\zeta)}
\sigma_{{p}}(Q)(x(s), \zeta(s), x, \zeta)\sigma_{{p}}(v)(x, \zeta),
\end{align*}
where $\omega$ is the strictly positive half density on $\Lambda^g$.
By choosing $s_o$ with $0 \leq s_o < s$, we can show that
\begin{align}\label{eq_vps}
\sigma_{{p}}(v)(x(s), \zeta(s)) = \frac{\omega(x(s), \zeta(s))}{\omega(x(s_o),\zeta(s_o))}
\sigma_{{p}}(Q)(x(s), \zeta(s), x(s_o), \zeta(s_o))\sigma_{{p}}(v)(x(s_o), \zeta(s_o)),
\end{align}
where we write
\begin{align*}
\sigma_{{p}}(Q)(x(s), \zeta(s), x(s_o), \zeta(s_o))
&= \exp(-\int_{s_o}^s c_\omega \circ x(s') - \frac{1}{2}\langle b(x(s')), \dot{x}(s') \rangle \diff s').
\end{align*}
In particular, the principal symbol of $Q$ satisfies the equation
\begin{align}\label{eq_Qtransport}
(\frac{\diff \ }{\diff s} + c_\omega \circ x(s) - \frac{1}{2}\langle b(x(s)), \dot{x}(s) \rangle) \sigma_{{p}}(Q)(x(s), \zeta(s), x(s_o), \zeta(s_o)) = 0.
\end{align}
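As an illustration, consider the weak damping case $\langle b(x), \nabla \rangle = b_0(x) \partial_t$ of Theorem \ref{thm1}; if the pairing is taken with respect to $g = -\diff t^2 + g_0$, this corresponds to $b = -b_0 \diff t$, and the factor of (\ref{eq_Qsym}) involving $b$ becomes
\[
\exp\big(\frac{1}{2}\int_0^s \langle b(x(s')), \dot{x}(s') \rangle \diff s'\big)
= \exp\big(-\frac{1}{2}\int_0^s b_0(x(s')) \, \dot{t}(s') \diff s'\big),
\]
so a positive damping coefficient attenuates the principal symbol along future-pointing null geodesics, as expected.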
\section{Introduction}
Nonlinear ultrasound waves are widely used in medical imaging.
The propagation of high-intensity ultrasound waves is modeled by nonlinear wave equations; see \cite{humphrey2003non}.
They have many applications in diagnostic and therapeutic medicine, for example, see \cite{Anvari2015, Aubry2016,Demi2014,Demi2014a,Eyding2020,Fang2019, Gaynor2015, Harrer2003,Hedrick2005,Soldati2014, Szabo2014, Haar2016, Thomas1998,Webb2017}.
In this work, we consider a nonlinear acoustic equation with a damping term and a general nonlinearity.
Let $\Omega$ be a bounded subset in $\mathbb{R}^3$ with smooth boundary.
Let { $x = (t,x') \in \mathbb{R} \times \Omega$} and $c(x') > 0$ be the smooth sound speed of the medium.
Let $p(t,x')$ denote the pressure field of the ultrasound waves.
A model for the pressure field in the medium $\Omega$ with a damping term can be written as (see \cite{Kaltenbacher2021})
\begin{equation*}
\begin{aligned}
\partial_t^2 p - c^2(x') \Delta p
- Dp
- F(x, p,\partial_t p, \partial^2_t p) &= 0, & \ & \mbox{in } (0,T) \times \Omega,\\
p &= f, & \ &\mbox{on } (0,T) \times \partial \Omega,\\
p = {\partial_t p} &= 0, & \ & \mbox{on } \{t=0\},
\end{aligned}
\end{equation*}
where $f$ is the insonation profile on the boundary,
$D$ models the damping phenomenon,
and $F$ is the nonlinear term modeling the nonlinear
response of the medium.
When there are no damping effects, the recovery of the nonlinear coefficients
from the Dirichlet-to-Neumann map (DN map) is studied in \cite{ultra21, UZ_acoustic}.
In particular, in \cite{ultra21} the author considers the nonlinear wave equation of Westervelt type, i.e., with $F(x,p, \partial_t p, \partial^2_t p) = \beta(x') \partial_t^2 (p^2)$,
using the second-order linearization and Gaussian beams.
In \cite{UZ_acoustic}, the recovery of a nonlinear term given by
$F(x,p, \partial_t p, \partial^2_t p) =\sum_{m=1}^{+\infty} \beta_{m+1}(x) \partial_t^2 (p^{m+1})$
from the DN map is considered, using distorted plane waves.
On the other hand, damping effects exist in many applications of medical imaging, physics, and engineering,
for example, see \cite{aanonsen1984distortion}.
The damped or attenuated acoustic equations have been studied in many works, including but not limited to \cite{ang1988strongly,pata2005strongly,kaltenbacher2009global,arrieta1992damped,kaltenbacher2011well,kaltenbacher2012analysis,dekkers2019mathematical,scholle2022weakly,rudenko2022dispersive,meliani2022analysis,
kaltenbacher2021some,k2022parabolic,romanov2020recovering,
romanov2021recovering,
kaltenbacher2011well,
baker2022linear,
kaltenbacher2022simultanenous}.
Among these, the stabilization and control of damped wave equations are considered in \cite{burq2016exponential, burq2015imperfect,burq2006energy,le2021stabilization}.
Most recently, in \cite{fu2022inverse}, the author considers the recovery of a time-dependent weakly damping term and the nonlinearity, using measurements from the initial data to the Neumann boundary data.
The analysis is based on Carleman estimates and Gaussian beams.
In this work, we plan to study the recovery of a general nonlinearity as well as the damping coefficient, when there is a damping term $D = -b_0(x) \partial_t$.
More explicitly, the nonlinear equation is given by
\begin{align}\label{eq_damp}
\partial_t^2 p - c^2(x') \Delta p + b_0(x) \partial_t p
- \sum_{m=1}^{+\infty} \beta_{m+1}(x) \partial_t^2 (p^{m+1}) = 0,
\end{align}
where $b_0(x) \in C^\infty(M)$ and $\beta_{m+1}(x) \in C^\infty(M)$, for $m\geq 1$.
This term is called a weak damping term in some of the literature, and it models a damping mechanism proportional to the velocity.
We consider the boundary measurement given by the DN map
\[
\Lambda_{b_0, F} f = \partial_\nu p|_{(0, T) \times \partial \Omega},
\]
where $\nu$ is the outer unit normal vector to $\partial \Omega$.
\subsection{Main result}
We have the following result for nonlinear acoustic imaging with weak damping effects, which is a special case of our result for a nonlinear acoustic equation with an arbitrary one-form in Theorem \ref{thm}.
First, we suppose the smooth functions $c, b_0, \beta_{m+1}$ are independent of $t$.
\begin{assumption}\label{assum_Omega}
Consider the rays associated to the wave speed $c(x')$ in $\Omega$,
i.e., the geodesics of the Riemannian metric
$g_0 = c^{-2}(x') ((\diff x^1)^2 + (\diff x^2)^2 + (\diff x^3)^2).$
We assume that $\Omega$ is nontrapping and $\partial \Omega$ is strictly convex w.r.t. these rays (geodesics).
Here by nontrapping, we mean there exists $T>0$ such that
\[
\mathrm{diam}_{g_0}(\Omega) = \sup\{\text{lengths of all rays, i.e., geodesics in $(\Omega, g_0)$}\} < T.
\]
\end{assumption}
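For example, if $c \equiv 1$ and $\Omega$ is a Euclidean ball of radius $R$, then the rays are straight chords of length at most $2R$, so any $T > 2R$ satisfies this assumption.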
With this assumption,
we show that the DN map determines the damping term and the nonlinear coefficients, under nonvanishing assumptions on $\beta_2, \beta_3$.
\begin{thm}\label{thm1}
Let $(\Omega, g_0)$ satisfy Assumption \ref{assum_Omega}.
Consider the nonlinear wave equation
\[
\partial_t^2 p^{(k)} - c^2(x') \Delta p^{(k)} + b^{(k)}_0(x') \partial_t p^{(k)}
-\sum_{m=1}^{+\infty} \beta^{(k)}_{m+1}(x') \partial_t^2 ((p^{(k)})^{m+1}) = 0, \quad k = 1,2.
\]
Suppose for $k=1,2$ and
for each $x' \in \Omega$,
there exists $m_k \geq 1$ such that { $\beta^{(k)}_{m_k+1}(x') \neq 0$}.
Assume the quantity $2 (\beta^{(k)}_2)^2 + \beta^{(k)}_3$ does not vanish on any open set of $\Omega$.
If the Dirichlet-to-Neumann maps satisfy
\[
\Lambda_{b^{(1)}_0, \beta^{(1)}}(f) = \Lambda_{b^{(2)}_0, \beta^{(2)}}(f)
\]
for all $f$ in a small neighborhood of the zero functions in $C^6([0,T] \times \partial \Omega)$,
then
\[
b^{(2)}_0 = b^{(1)}_0,
\quad \beta^{(2)}_{m+1} = \beta^{(1)}_{m+1},
\]
for any $x' \in \Omega$ and $m \geq 1$.
\end{thm}
We emphasize that the Westervelt type equation with a weak damping term is covered as a special case by Theorem \ref{thm1}.
This result can be regarded as an example of a more general setting, including the case of a time-dependent damping term and a general nonlinear term.
Recall $M = \mathbb{R} \times \Omega$ and let $M^{{o}}$ be the interior of $M$.
The leading term of the differential operator in (\ref{eq_damp}) corresponds to a Lorentzian metric
\[
g = - \diff t^2 + g_0 = -\diff t^2 + c^{-2}(x')(\diff x')^2
\]
and we have
\[
\square_g p = \partial_t^2 p(t,x')
- c^2(x') \Delta p(t,x')
+ c^3(x') \partial_i(c^{-3}(x')g^{ij})\partial_j p(t,x').
\]
Note that $(M,g)$ is a globally hyperbolic Lorentzian manifold with timelike boundary $\partial M = \mathbb{R} \times \partial \Omega$.
Additionally, we assume $\partial M$ is null-convex, that is,
for any null vector $v \in T_p \partial M$ one has
\begin{equation*}
\kappa(v,v) = g (\nabla_{\nu} v, v) \geq 0,
\end{equation*}
where we denote by $\nu$ the outward pointing unit normal vector field on $\partial M$.
This is true especially when $\partial \Omega$ is convex w.r.t $g_{0}$.
In the following, we consider a globally hyperbolic Lorentzian manifold $(M,g)$ with timelike and null-convex boundary.
We consider the nonlinear acoustic equation
\begin{equation}\label{eq_problem}
\begin{aligned}
\square_g p
+ \langle b(x), \nabla p \rangle + h(x)p
-\sum_{m=1}^{+\infty} \beta_{m+1}(x) \partial_t^2 (p^{m+1}) &= 0, & \ & \mbox{in } (0, T) \times \Omega\\
p &= f, & \ &\mbox{on } (0,T) \times \partial \Omega,\\
p = {\partial_t p} &= 0, & \ & \mbox{on } \{t=0\},
\end{aligned}
\end{equation}
where $b(x) \in C^\infty(M; T^*M)$ is a one-form,
$h(x) \in C^\infty(M)$ is a potential,
and the nonlinear coefficients $\beta_{m+1}(x) \in C^\infty(M)$ for $m \geq 1$.
We consider the boundary measurement for each $f$ given by the DN map
\[
\Lambda_{b,h,F} f = (\partial_\nu p + \frac{1}{2}\langle b(x), \nu \rangle p)|_{(0,T) \times \partial \Omega},
\]
where $\nu$ is the outer unit normal vector to $\partial \Omega$.
Suppose
the nonlinear coefficients
$\beta_{m+1}(x), m \geq 1$ are unknown and the one-form $b(x)$ is unknown.
We consider the inverse problem of recovering $\beta_{m+1}(x)$ and $b(x)$ from $\Lambda_{b,h,F}$, for $m \geq 1$.
We introduce some definitions to state the result.
A smooth path $\mu:(a,b) \rightarrow M$ is timelike if $g(\dot{\mu}(s), \dot{\mu}(s)) <0 $ for any $s \in (a,b)$.
It is causal if $g(\dot{\mu}(s), \dot{\mu}(s)) \leq 0 $ with $\dot{\mu}(s) \neq 0$ for any $s \in (a,b)$.
For $p, q \in M$, we denote by $p < q$ (or $p \ll q$) if $p \neq q$ and there is a future pointing causal (or timelike) curve from $p$ to $q$.
We denote by $p \leq q$ if either $p = q$ or $p<q$.
The chronological future of $p$ is the set $I^+(p) = \{q \in M: \ p \ll q\}$
and the causal future of $p$ is the set $J^+(p) = \{q \in M: \ p \leq q\}$.
Similarly we can define the chronological past $I^-(p)$ and the causal past $J^-(p)$.
For convenience, we use the notation $J(p,q)= J^+(p) \cap J^-(q)$ to denote the causal diamond set and
$I(p,q)$ to denote the set $I^+(p) \cap I^-(q)$.
We consider the recovery of the nonlinear coefficients in a suitable larger set
\[
\mathbb{W} = \bigcup_{y^-, y^+ \in (0,T) \times \partial \Omega} I(y^-, y^+) \cap \intM.
\]
\begin{thm}\label{thm}
Let $(M,g)$ be a globally hyperbolic Lorentzian manifold with timelike and null-convex boundary, where we assume $M = \mathbb{R} \times \Omega$ and $\Omega$ is a 3-dimensional manifold with smooth boundary.
Consider the nonlinear wave equation
\[
\square_g p^{(k)} + \langle b^{(k)}(x), \nabla p^{(k)} \rangle + h^{(k)}(x)p^{(k)} - F^{(k)}(x, p^{(k)},\partial_t p^{(k)},\partial^2_t p^{(k)}) = 0, \quad k=1,2,
\]
where $F^{(k)}$ depends on $x$ smoothly and has the convergent expansion
\begin{align*}
F^{(k)}(x, p^{(k)},\partial_t p^{(k)}, \partial^2_t p^{(k)}) = \sum_{m=1}^{+\infty} \beta^{(k)}_{m+1}(x) \partial_t^2 ((p^{(k)})^{m+1}).
\end{align*}
Suppose
for each $x \in \mathbb{W}$,
there exists $m \geq 1$ such that { $\beta^{(k)}_{m+1}(x) \neq 0$}.
Assume the quantity $2 (\beta^{(k)}_2)^2 + \beta^{(k)}_3$ does not vanish on any open set of $\mathbb{W}$.
If the Dirichlet-to-Neumann maps satisfy
\[
\Lambda_{\bone, \hone, \Fone}(f) = \Lambda_{\btwo, \htwo, \Ftwo}(f)
\]
for all functions $f$ in a small neighborhood of the zero functions in $C^6([0,T] \times \partial \Omega)$,
then there exists $\varrho \in C^\infty(M)$ with $\varrho|_{\partial M} = 1$ such that
\begin{align*}
b^{(2)} = b^{(1)} + 2\varrho^{-1} \diff \varrho, \quad \beta^{(2)}_{m+1} = \varrho^m \beta^{(1)}_{m+1},
\end{align*}
for any $m \geq 1$ and $x \in \mathbb{W}$.
In addition, if we have
\begin{align}\label{assump_h}
h^{(2)} = h^{(1)} + \langle b^{(1)}, \nabla \varrho \rangle + \varrho^{-1} \square_g \varrho,
\end{align}
then $\partial_t \varrho = 0$ for any $x \in \mathbb{W}$.
\end{thm}
This theorem shows the unique recovery of the one-form $b(x)$ and the nonlinear coefficients $\beta_{m+1}$ for $m \geq 1$, from the knowledge of the DN map, up to a gauge transformation, under our assumptions.
On the one hand, without knowing the potential $h(x)$, one can recover $b(x)$ up to an error term $2\varrho^{-1} \diff \varrho$, where $\varrho \in C^\infty(M)$ with $\varrho|_{\partial M} = 1$.
On the other hand, with the assumptions on $h(x)$, we can show this error term is given by some $\varrho \in C^\infty(\Omega)$ with $\varrho|_{\partial \Omega} = 1$, which corresponds to a gauge transformation, for more details see Section \ref{sec_gauge}.
We point out that it would be interesting to consider the recovery of the potential $h(x)$ from the DN map, but this is beyond the scope of this work.
The inverse problems of recovering the metric and the nonlinear term for a semilinear wave equation were considered in \cite{Kurylev2018}, in a globally hyperbolic Lorentzian manifold without boundary.
The main idea is to use the multi-fold linearization and the nonlinear interaction of waves.
By choosing specially designed sources, one can expect to detect the new singularities produced by the interaction of distorted plane waves, from the measurements.
The information about the metric and the nonlinearity is encoded in these new singularities.
One can extract such information from the principal symbol of the new singularities, using the calculus of conormal distributions and paired Lagrangian distributions.
Starting with \cite{Kurylev2018, Kurylev2014a},
there are many works studying inverse problems for nonlinear hyperbolic equations, see
\cite{Barreto2021,Barreto2020, Chen2019,Chen2020,Hoop2019,Hoop2019a,Uhlig2020,Feizmohammadi2019,Hintz2020, Kurylev2014,Balehowsky2020,Lai2021,Lassas2017,Tzou2021,Uhlmann2020,Hintz2021,ultra21,Uhlmann2019}.
For an overview of the recent progress, see \cite{lassas2018inverse,Uhlmann2021}.
In particular, inverse boundary value problems for nonlinear hyperbolic equations are considered in \cite{Hoop2019, Uhlmann2019, Hintz2017, Hintz2020, Hintz2021, Uhlmann2021a, ultra21, UZ_acoustic}.
Compared to \cite{fu2022inverse}, we consider the recovery using the DN map, instead of using the map from the initial data to the Neumann boundary data.
In Section \ref{sec_gauge}, we show there is a gauge transformation for the DN map.
In \cite{Chen2020}, the recovery of a connection from the source-to-solution map is considered, using the broken light ray transform, for wave equations with a cubic nonlinear term.
This connection is contained in the lower order term as well,
while the nonlinearity is known.
In our case, to recover the lower order term (the one-form) and the nonlinearity at the same time,
we cannot expect to recover one of them first.
Our main idea is to combine the third-order linearization and the fourth-order linearization of the DN map.
In particular, we consider the asymptotic behavior of the fourth-order linearization for some special constructions of lightlike covectors, based on the analysis in \cite{UZ_acoustic}.
The plan of this paper is as follows.
In Section \ref{sec_gauge}, we derive the gauge invariance of the DN map.
In Section \ref{sec_prelim}, we present some preliminaries for Lorentzian geometry as well as microlocal analysis, and construct the parametrix for the wave operator.
By Proposition \ref{pp_thm1}, Theorem \ref{thm1} is a special case of Theorem \ref{thm} and therefore our goal is to prove Theorem \ref{thm} using nonlinear interaction of distorted plane waves.
In Section \ref{sec_threefour}, we recall some results for the interaction of three and four distorted planes waves in \cite{UZ_acoustic}.
Based on these results,
we recover the one-form and the nonlinear coefficients up to an error term in Section \ref{sec_recover_b}.
The recovery is based on special constructions of lightlike covectors at each $q\in \mathbb{W}$.
In Section \ref{sec_nonlinear}, we use the nonlinearity to show the error term is corresponding to a gauge transformation, with the assumption on the potential $h(x)$.
In the appendix, we establish the local well-posedness of the boundary value problem (\ref{eq_problem}) with small boundary data in Section \ref{Sec_well}, and then we determine the jets of the one-form and the potential on the boundary in Section \ref{subsec_boundary}.
The latter allows us to smoothly extend the one-form and the potential to a larger Lorentzian manifold without boundary, see Section \ref{subsec_extension}.
\subsection*{Acknowledgment}
The author would like to thank Gunther Uhlmann for numerous helpful discussions throughout this project, and to thank Katya Krupchyk for suggestions on some useful references.
The author is partially supported by a Simons Travel Grant.
\subsection{The recovery of the one-form and the nonlinearity}
For $k=1,2$,
suppose $b^{(k)} \in C^\infty(M;T^*M)$
are two one-forms
and $\beta^{(k)}_{m+1} \in C^\infty(M)$, $m \geq 1$ are nonlinear coefficients.
Suppose $p^{(k)}$ solve the boundary value problem (\ref{eq_problem})
with the one-form $b^{(k)}$ and the nonlinear terms $F^{(k)}$ given by
\[
\Fk(x, \pk,\partial_t \pk, \partial^2_t \pk) = \sum_{m=1}^{+\infty} \beta^{(k)}_{m+1}(x) \partial_t^2
{ ((p^{(k)})^{m+1})},
\quad k = 1, 2,
\]
and satisfy the assumption in Theorem \ref{thm}.
Suppose the two DN maps satisfy
\[
\Lambda_{b^{(1)},h^{(1)}, F^{(1)}}(f) = \Lambda_{b^{(2)},h^{(2)}, F^{(2)}}(f),
\]
for small boundary data $f$ supported in $(0,T) \times \partial \Omega$.
Now let $q \in \mathbb{W}$ be fixed.
Firstly,
we choose a lightlike vector $(x_1, \xi_1) \in L^{+}V$, sequences of lightlike vectors ${(x_{j,l}, \xi_{j,l})} \in L^{+}V$ for $j = 2,3$, a lightlike covector $\zeta \in L^*_qM$, and $(y, \eta) \in L^{*}_{\partial M, +}M$,
such that
\[
(x_{j,l}, \xi_{j,l}) \rightarrow (x_1, \xi_1)
\quad \text{as } l \rightarrow +\infty,
\]
and for each fixed $l$, the lightlike vectors $(x_1, \xi_1), (x_{j,l}, \xi_{j,l})$ and covectors $\zeta, (y, \eta)$
satisfy Claim \ref{cl_const_three1}.
Secondly, we construct the boundary source $f$ following the ideas in Section \ref{sub_distorted}, for each fixed $l$.
For convenience, we denote $(x_{j,l}, \xi_{j,l})$ by $(x_{j}, \xi_{j})$ in this part,
with $j = 2,3$.
Let $\Q^{(k)}$ be the parametrices to the linear problems with different one-forms $b^{(k)}$ and potentials $h^{(k)}$, for $k =1,2$.
Recall we extend $b^{(k)}, h^{(k)}$ smoothly to $\tilde{b}^{(k)}, \tilde{h}^{(k)}$ in ${M_{\text{e}}}$, see Section \ref{subsec_extension}.
Moreover, there exists a smooth function $\tilde{\varrho}$ on ${M_{\text{e}}}$ with $\tilde{\varrho}|_{\partial M} = 1$ such that for any $x \in V$ we have
\begin{align*}
\tilde{b}^{(2)} &= b^{(1)} - 2 \tilde{\varrho}^{-1}\diff \tilde{\varrho},\\
\tilde{h}^{(2)} &= h^{(1)} - \langle b^{(1)}, \tilde{\varrho}^{-1} \diff \tilde{\varrho} \rangle - \tilde{\varrho}^{-1} \square_g \tilde{\varrho}.
\end{align*}
Now we choose point sources $\tilde{f}_j^{(2)} = \tilde{\varrho} \tilde{f}_j^{(1)}$, which are singular near $(x_j, \xi_j)$.
Then there are distorted plane waves $u^{(k)}_j \in \mathcal{I}^{\mu}(\Sigma(x_j, \xi_j, s_0))$ satisfying
\[
(\square_g + \langle b^{(k)}(x), \nabla \rangle + h^{(k)}(x)) u_j^{(k)} = \tilde{f}_j^{(k)}.
\]
Note that we choose negative $\mu$ such that $u^{(k)}_j$ is at least continuous, since we would like to choose boundary sources in $C^6((0,T) \times \partial \Omega)$.
Then by continuity we claim that
\[
u^{(2)}_j|_{O_j} = \tilde{\varrho} u^{(1)}_j|_{O_j}= u^{(1)}_j|_{O_j},
\]
where $O_j \subset (0,T) \times \partial \Omega$ is a small open neighborhood of $\gamma_{x_j, \xi_j}(t_j^o)$ with $t_j^o$ defined in (\ref{def_bpep}).
Then following the same ideas of scattering control as in \cite[Proposition 3.2]{Hintz2021},
we can choose a boundary source $f_j$ and set $v^{(k)}_j$ be the solution to the boundary value problem (\ref{eq_v1}) with $f_j$ and $b^{(k)}, h^{(k)}$,
such that
\[
v^{(1)}_j = u^{(1)}_j \mod C^\infty(M), \quad v^{(2)}_j = u^{(2)}_j \mod C^\infty(M).
\]
With $v^{(1)}_j |_{\partial M} = v^{(2)}_j |_{\partial M} = f_j$, we have
\begin{align}\label{eq_psv1}
\sigma_{{p}}(v^{(1)}_j)(x^o_j, (\xi^o_j)^\sharp) = \sigma_{{p}}(v^{(2)}_j)(x^o_j, (\xi^o_j)^\sharp),
\end{align}
where we write $(x^o_j, (\xi^o_j)^\sharp) = (\gamma_{x_j, \xi_j}(t^o_j), (\dot{\gamma}_{x_j, \xi_j}(t^o_j))^\sharp)$
as the point where $\gamma_{x_j, \xi_j}$ enters $M$ for the first time.
Then by Proposition \ref{pp_uthree}, (\ref{eq_M3}), and (\ref{eq_m3}), we conclude that
\begin{align*}
{ \sigma_{p}}(\partial_{\epsilon_1}\partial_{\epsilon_2}\partial_{\epsilon_3} \Lambda_{b^{(1)},h^{(1)}, F^{(1)}} |_{\epsilon_1 = \epsilon_2 = \epsilon_3=0})(y_|, \eta_|) &= { \sigma_{p}}(\partial_{\epsilon_1}\partial_{\epsilon_2}\partial_{\epsilon_3} \Lambda_{b^{(2)},h^{(2)}, F^{(2)}} |_{\epsilon_1 = \epsilon_2 = \epsilon_3=0})(y_|, \eta_|)\\
\Rightarrow \quad
m_3^{(1)} (q, \zeta, \zeta^1) &= m_3^{(2)} (q, \zeta, \zeta^1).
\end{align*}
More explicitly, combining (\ref{eq_v1}) we have
\begin{align}\label{eq_u3}
& (2(\beta_2^{(1)})^2 + \beta_3^{(1)})
{{ \sigma_{p}}}(\Q^{(1)})(y, \eta, q, \zeta)
({{ \sigma_{p}}}(\Q^{(1)})(q, \zeta, x^o_1, (\xi^o_1)^\sharp))^3\\
=&
(2(\beta_2^{(2)})^2 + \beta_3^{(2)})
{{ \sigma_{p}}}(\Q^{(2)})(y, \eta, q, \zeta)
({{ \sigma_{p}}}(\Q^{(2)})(q, \zeta, x^o_1, (\xi^o_1)^\sharp))^3 . \nonumber
\end{align}
Thirdly, we choose sequences of lightlike vectors ${(\tilde{x}_{j,l}, \tilde{\xi}_{j,l})} \in V$ for $j = 2,3,4$
such that
\[
(\tilde{x}_{j,l}, \tilde{\xi}_{j,l}) \rightarrow (x_1, \xi_1)
\quad \text{as } l \rightarrow +\infty,
\]
and for each fixed $l$, the lightlike vectors $(x_1, \xi_1), (\tilde{x}_{j,l}, \tilde{\xi}_{j,l})$ and covectors $\zeta, (y, \eta)$
satisfy Claim \ref{cl_const_four1}.
Then by Proposition \ref{pp_ufour}, (\ref{eq_M4}), and (\ref{eq_m4}), we conclude that
\begin{align*}
{ \sigma_{p}}(\partial_{\epsilon_1}\partial_{\epsilon_2}\partial_{\epsilon_3}\partial_{\epsilon_4} \Lambda_{b^{(1)},h^{(1)}, F^{(1)}} |_{\epsilon_1 = \epsilon_2 = \epsilon_3=\epsilon_4 = 0})(y_|, \eta_|) &= { \sigma_{p}}(\partial_{\epsilon_1}\partial_{\epsilon_2}\partial_{\epsilon_3}\partial_{\epsilon_4} \Lambda_{b^{(2)},h^{(2)}, F^{(2)}} |_{\epsilon_1 = \epsilon_2 = \epsilon_3=\epsilon_4 = 0})(y_|, \eta_|)\\
\Rightarrow \quad
m_4^{(1)} (q, \zeta, \zeta^1) &= m_4^{(2)} (q, \zeta, \zeta^1),
\end{align*}
which implies
\begin{align}\label{eq_u4}
& (4(\beta_2^{(1)})^3 - 3 \beta_2^{(1)}\beta_3^{(1)})
{{ \sigma_{p}}}(\Q^{(1)})(y, \eta, q, \zeta)
({{ \sigma_{p}}}(\Q^{(1)})(q, \zeta, x^o_1, (\xi^o_1)^\sharp))^4\\
=&
(4(\beta_2^{(2)})^3 - 3 \beta_2^{(2)}\beta_3^{(2)})
{{ \sigma_{p}}}(\Q^{(2)})(y, \eta, q, \zeta)
({{ \sigma_{p}}}(\Q^{(2)})(q, \zeta, x^o_1, (\xi^o_1)^\sharp))^4 . \nonumber
\end{align}
Now we combine (\ref{eq_u3}) and (\ref{eq_u4}) to have
\begin{align}\label{eq_Q1Q2}
{{ \sigma_{p}}}(\Q^{(1)}_g)(q, \zeta, x^o_j, (\xi^o_j)^\sharp)
= \varrho(q) {{ \sigma_{p}}}(\Q^{(2)}_g)(q, \zeta, x^o_j, (\xi^o_j)^\sharp),
\end{align}
where we define
\begin{align*}
\varrho(x) \equiv \frac{(2(\beta_2^{(1)})^2 + \beta_3^{(1)})}{ (2(\beta_2^{(2)})^2 + \beta_3^{(2)})} \frac{(4(\beta_2^{(2)})^3 - 3 \beta_2^{(2)}\beta_3^{(2)})}{(4(\beta_2^{(1)})^3 - 3 \beta_2^{(1)}\beta_3^{(1)})},
\end{align*}
where we assume $2(\beta^{(k)}_2)^2 + \beta^{(k)}_3 \neq 0$ and
$4(\beta^{(k)}_2)^3 - 3 \beta^{(k)}_2\beta^{(k)}_3 \neq 0$ for $x \in \mathbb{W}$ and $k = 1,2$.
This implies that {$\varrho(x) \neq 0$} for any $x \in \mathbb{W}$.
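To see how (\ref{eq_Q1Q2}) follows, abbreviate $A_k = 2(\beta_2^{(k)})^2 + \beta_3^{(k)}$, $B_k = 4(\beta_2^{(k)})^3 - 3\beta_2^{(k)}\beta_3^{(k)}$, $R_k = { \sigma_{p}}(\Q^{(k)})(y, \eta, q, \zeta)$, and $Q_k = { \sigma_{p}}(\Q^{(k)})(q, \zeta, x^o_1, (\xi^o_1)^\sharp)$. Then (\ref{eq_u3}) and (\ref{eq_u4}) read
\[
A_1 R_1 Q_1^3 = A_2 R_2 Q_2^3, \qquad B_1 R_1 Q_1^4 = B_2 R_2 Q_2^4,
\]
and dividing the second identity by the first eliminates the factors $R_k$ and gives
\[
Q_1 = \frac{A_1 B_2}{A_2 B_1}\, Q_2 = \varrho(q)\, Q_2.
\]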
\begin{remark}
Recall in Theorem \ref{thm} we assume the quantity $2 (\beta^{(k)}_2)^2 + \beta^{(k)}_3$ does not vanish on any open set of $\mathbb{W}$.
If for fixed $x \in \mathbb{W}$, we have $4(\beta^{(k)}_2)^3 - 3 \beta^{(k)}_2\beta^{(k)}_3 = 0$, then in (\ref{eq_Cs}), the terms w.r.t. $1/s^3$ and $1/s^2$ will vanish.
The leading order terms are given by $1/s$ and instead of (\ref{eq_u4}), we have
\begin{align*}
& (40(\beta_2^{(1)})^3 - 9 \beta_2^{(1)}\beta_3^{(1)})
{{ \sigma_{p}}}(\Q^{(1)})(y, \eta, q, \zeta)
({{ \sigma_{p}}}(\Q^{(1)})(q, \zeta, x^o_1, (\xi^o_1)^\sharp))^4\\
=&
(40(\beta_2^{(2)})^3 - 9 \beta_2^{(2)}\beta_3^{(2)})
{{ \sigma_{p}}}(\Q^{(2)})(y, \eta, q, \zeta)
({{ \sigma_{p}}}(\Q^{(2)})(q, \zeta, x^o_1, (\xi^o_1)^\sharp))^4. \nonumber
\end{align*}
In this case, we define
\begin{align*}
\varrho(x) \equiv \frac{(2(\beta_2^{(1)})^2 + \beta_3^{(1)})}{ (2(\beta_2^{(2)})^2 + \beta_3^{(2)})} \frac{(40(\beta_2^{(2)})^3 -9 \beta_2^{(2)}\beta_3^{(2)})}{(40(\beta_2^{(1)})^3 - 9 \beta_2^{(1)}\beta_3^{(1)})},
\end{align*}
where $40(\beta^{(k)}_2)^3 - 9 \beta^{(k)}_2\beta^{(k)}_3 \neq 0$.
\end{remark}
To analyze (\ref{eq_Q1Q2}), recall Section \ref{subsec_Q}.
Let
$\lambda$ be the null characteristic starting from $(x_1, \xi_1^\sharp)$ with $(q, \zeta) = \lambda(s) = (x_1(s), \xi_1^\sharp(s))$.
Along $\lambda(s)$, the equation (\ref{eq_Q1Q2}) can be written as
\begin{align*}
{{ \sigma_{p}}}(\Q^{(1)}_g)(x_1(s), \xi^\sharp_1(s), x_1(s_o), \xi_1^\sharp(s_o))
=\varrho(x_1(s))
{{ \sigma_{p}}}(\Q^{(2)}_g)(x_1(s), \xi^\sharp_1(s), x_1(s_o), \xi_1^\sharp(s_o)),
\end{align*}
where we write $(x^o_1, (\xi^o_1)^\sharp) = (x_1(s_o), \xi_1^\sharp(s_o))$.
Differentiating w.r.t. $s$ on both sides, we have
\begin{align*}
&(c_\omega \circ x_1(s) - \frac{1}{2}\langle b^{(1)}(x_1(s)), \dot{x}_1(s) \rangle) {{ \sigma_{p}}}(\Q^{(1)}_g)(x_1(s), \xi_1^\sharp(s), x_1(s_o), \xi_1^\sharp(s_o))\\
= & (\langle \diff \varrho, \dot{x}_1(s) \rangle
+ \varrho(x_1(s))(c_\omega \circ x_1(s) - \frac{1}{2}\langle b^{(2)}(x_1(s)), \dot{x}_1(s) \rangle))
\,{{ \sigma_{p}}}(\Q^{(2)}_g)(x_1(s), \xi_1^\sharp(s), x_1(s_o), \xi_1^\sharp(s_o))
\end{align*}
by (\ref{eq_Qsym}).
This implies that
\begin{align*}
(c_\omega \circ x_1(s) - \frac{1}{2}\langle b^{(1)}(x_1(s)), \dot{x}_1(s) \rangle) \varrho(x_1(s))
= \langle \diff \varrho, \dot{x}_1(s) \rangle
+ \varrho(x_1(s))(c_\omega \circ x_1(s) - \frac{1}{2}\langle b^{(2)}(x_1(s)), \dot{x}_1(s) \rangle)
\end{align*}
and therefore we have
\begin{align*}
\langle b^{(2)}(x_1(s)) - b^{(1)}(x_1(s)), \dot{x}_1(s) \rangle = \langle 2\varrho^{-1} \diff \varrho, \dot{x}_1(s) \rangle.
\end{align*}
Note that $x_1(s)$ is a null geodesic on $({M_{\text{e}}}, {g_\mathrm{e}})$.
By perturbing $\dot{x}_1(s)$, we can choose linearly independent $\dot{x}_1(s)$ at $q$.
It follows that
\begin{align*}
b^{(2)} - b^{(1)} = 2\varrho^{-1} \diff \varrho,
\end{align*}
for any $q \in \mathbb{W}$.
Next, we would like to show that $\beta^{(2)}_{m+1} = \varrho^m \beta^{(1)}_{m+1}$.
Indeed, we plug in (\ref{eq_Q1Q2}) to (\ref{eq_M4}) and (\ref{eq_Cs}) to have
\begin{align*}
&\varrho^3 (C(\zeta^{1}, \zeta^{2}, \zeta^{3}, \zeta^{4}) (\beta^{(1)}_2)^3 + D(\zeta^{1}, \zeta^{2}, \zeta^{3}, \zeta^{4})\beta^{(1)}_2\beta^{(1)}_3 )\\
= &C(\zeta^{1}, \zeta^{2}, \zeta^{3}, \zeta^{4}) (\beta^{(2)}_2)^3 + D(\zeta^{1}, \zeta^{2}, \zeta^{3}, \zeta^{4})\beta^{(2)}_2\beta^{(2)}_3.
\end{align*}
Following the same analysis in \cite[Section 6]{UZ_acoustic}, we have
\[
\varrho \beta^{(1)}_2 = \beta^{(2)}_2, \quad \varrho^2 \beta^{(1)}_3 = \beta^{(2)}_3, \quad
\varrho^3 \beta^{(1)}_4 = \beta^{(2)}_4.
\]
Then following the same analysis in \cite[Section 8]{UZ_acoustic} by using higher order linearization, we can prove that
\[
\varrho^{m}\beta^{(1)}_{m+1} = \beta^{(2)}_{m+1}, \quad m \geq 4.
\]
\subsection{Construction for the fourth-order linearization}
In this subsection, we claim that
for any fixed point $q \in \mathbb{W}$,
one can find a set of lightlike vectors $\{(x_j, \xi_j)\}_{j=1}^4$ in $V$ and a lightlike covector $\zeta$ at $q$ which correspond to null geodesics intersecting regularly at $q$.
Similarly, the lightlike vectors $\{(x_j, \xi_j)\}_{j=1}^4$ correspond to four incoming null geodesics,
and the lightlike covector $\zeta$ at $q$ corresponds to the new singularities produced by the interaction of the four distorted plane waves.
When $s_0 >0$ is small enough,
the covector $\zeta$ can be chosen away from the singularities caused by the interaction of at most three waves.
Then $(q, \zeta)$ corresponds to an outgoing null geodesic, and we would like to find a lightlike vector $(y, \eta)$ in $V$ along this null geodesic before its first cut point.
The same claim and proof are used in \cite{UZ_acoustic}.
We emphasize that even though we use the same notations as before,
the choice of $(x_j, \xi_j)$ and $\hat{\zeta}^{j}$ for $j = 2,3,4$ should be totally different from those in Section \ref{subsec_three}.
\begin{claim}\label{cl_const_four1}
Suppose $q \in \mathbb{W}$ and $s_0>0$ is sufficiently small.
Then one can find \[
{ \{(x_j, \xi_j)\}_{j=1}^4} \subset L^+V, \quad
\zeta \in \Lambda_{q}\setminus(\Lambda^{(1)} \cup \Lambda^{(2)} \cup \Lambda^{(3)}),
\quad (y, \eta) \in L^{*}_{\partial M, +}M,
\]
such that
\begin{enumerate}[(a)]
\item $(x_j, \xi_j), j = 1,2,3,4$ are causally independent as in (\ref{assump_xj}) and the null geodesics starting from them intersect regularly at $q$ (see Definition \ref{def_inter}),
and thus
$\zeta$ is in the span of $(\dot{\gamma}_{x_j, \xi_j}(s))^\flat$ at $q$;
\item each $\gamma_{x_j, \xi_j}(\mathbb{R}_+)$ hits $\partial M$ exactly once and transversally before it passes $q$;
\item $(y, \eta) \in L^{*}_{\partial M, +}M$ lies in the bicharacteristic from $(q, \zeta)$ and additionally there are no cut points along $\gamma_{q, \zeta^\sharp}$ from $q$ to $y$.
\end{enumerate}
\end{claim}
\begin{proof}
First, we pick $\zeta$ and ${\hat{\zeta}^{1}}$ in $L^{*,+}_q M$ as in the proof of Claim \ref{cl_const_three1}.
Note that there exist $(x_1, \xi_1) \in L^{+}V$ and $(\hat{y}, \hat{\eta}) \in L^{*,+}V$ with
\[
(q, \hat{\zeta}^{1}) = (\gamma_{x_1, \xi_1}({s_q}), (\dot{\gamma}_{x_1, \xi_1}(s_q))^\flat), \quad
(\hat{y}, \hat{\eta}) = (\gamma_{q, \zeta^\sharp}(\hat{s}), (\dot{\gamma}_{q, \zeta^\sharp}(\hat{s}))^\flat),
\]
for some $ 0 <s_q< \rho(x_1, \xi_1)$ and $0 <\hat{s}< \rho(q, \zeta)$.
Next, by Lemma \ref{lm_perturb_zeta1}, one can find three more linearly independent covectors $\hat{\zeta}^{j}$ with $(x_j, \xi_j)$ for $j = 2,3,4$ at $q$ such that $(x_j, \xi_j)_{j=1}^4$ satisfy the condition (a).
Then (b) and (c) can be satisfied following the same idea as before.
\end{proof}
Moreover, according to Lemma \ref{lm_perturb_zeta1}, with $\zeta, \hat{\zeta}^{1}$ given, we have the freedom to choose $(x_j, \xi_j), j = 2,3,4$, as long as they come from sufficiently small perturbations of $\hat{\zeta}^{1}$.
The proof of \cite[Lemma 5]{UZ_acoustic} shows that
for fixed $\zeta, \hat{\zeta}^{1} \in L_q^{*,+} M$,
there exist a set of lightlike covectors $\hat{\zeta}^{2}, \hat{\zeta}^{3}, \hat{\zeta}^{4}$ near $\hat{\zeta}^{1}$, depending on a small parameter $\theta$, such that
$
\zeta =\sum_{j=1}^4 {\zeta^{j}} = \sum_{j=1}^4 \alpha_{j} \hat{\zeta}^{j},
$
for some constant $\alpha_{j}$.
More explicitly,
one can choose local coordinates $x = (x^0,x^1, x^2, x^3)$ at $q$ such that $g$ coincides with the { Minkowski} metric.
By rotating the coordinate system in the spatial variables, without loss of generality, we can assume
\[
\zeta = (-1, 0, \cos \varphi, \sin \varphi), \quad {\hat{\zeta}^{1}} = (-1, 1, 0, 0),
\]
where $\varphi \in [0, 2\pi)$. For $\theta \neq 0$ sufficiently small, we choose
\begin{align*}
\hat{\zeta}^{2} &= (-1, \cos \theta, \sin \theta \sin \varphi, -\sin \theta \cos \varphi), \\
\hat{\zeta}^{3} &= (-1, \cos \theta, -\sin \theta \sin \varphi, \sin \theta \cos \varphi), \\
\hat{\zeta}^{4} &=(-1, \cos \theta, \sin \theta \cos \varphi, \sin \theta \sin \varphi).
\end{align*}
The coefficients $\alpha_{j}$ can be computed and we have ${\zeta^{j}} = \alpha_{j} \hat{\zeta}^{j}$.
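As a quick symbolic sanity check of this construction (our own illustration, not part of the proof; it assumes the sympy package), one can verify that each $\hat{\zeta}^{j}$ is lightlike for the Minkowski metric and solve the $4 \times 4$ linear system $\zeta = \sum_{j=1}^4 \alpha_j \hat{\zeta}^{j}$ for the coefficients $\alpha_j$:
\begin{verbatim}
import sympy as sp

th, ph = sp.symbols('theta varphi')
eta = sp.diag(-1, 1, 1, 1)    # Minkowski co-metric in these coordinates
zeta = sp.Matrix([-1, 0, sp.cos(ph), sp.sin(ph)])
zh = [sp.Matrix([-1, 1, 0, 0]),
      sp.Matrix([-1, sp.cos(th),  sp.sin(th)*sp.sin(ph), -sp.sin(th)*sp.cos(ph)]),
      sp.Matrix([-1, sp.cos(th), -sp.sin(th)*sp.sin(ph),  sp.sin(th)*sp.cos(ph)]),
      sp.Matrix([-1, sp.cos(th),  sp.sin(th)*sp.cos(ph),  sp.sin(th)*sp.sin(ph)])]

# each covector is lightlike: <z, z>_{g*} = 0
assert all(sp.simplify((z.T * eta * z)[0]) == 0 for z in zh)

# solve zeta = sum_j alpha_j * zh[j] for the alpha_j
al = list(sp.symbols('alpha1:5'))
eqs = sum((c * z for c, z in zip(al, zh)), sp.zeros(4, 1)) - zeta
sol = sp.solve(list(eqs), al, dict=True)[0]
print([sp.simplify(sol[c]) for c in al])
\end{verbatim}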
Then the analysis in \cite[Lemma 5]{UZ_acoustic} shows that
\begin{align*}
C(\zeta^{1}, \zeta^{2}, \zeta^{3}, \zeta^{4})
& \equiv \sum_{(i,j,k,l) \in \Sigma(4)} (4\frac{(\zeta_0^{i} + \zeta_0^{j} + \zeta_0^{k})^2}{|{\zeta^{i}} + {\zeta^{j}} + {\zeta^{k}}|^2_{g^*}} + \frac{(\zeta_0^{i}+\zeta_0^{l})^2}{| {\zeta^{i}} + {\zeta^{l}}|^2_{g^*}})
\frac{(\zeta_0^{j} + \zeta_0^{k})^2}{|{\zeta^{j}} + {\zeta^{k}}|^2_{g^*}},\\
& = -\frac{2}{s^3} +\frac{14}{s^2} + \frac{10}{s} + \mathcal{O}(1),\\
D(\zeta^{1}, \zeta^{2}, \zeta^{3}, \zeta^{4})
& \equiv \sum_{(i,j,k,l) \in \Sigma(4)} (3 \frac{(\zeta_0^{k}+\zeta_0^{l})^2}{| {\zeta^{k}} + {\zeta^{l}}|^2_{g^*}} + 2\frac{(\zeta_0^{i} + \zeta_0^{j} + \zeta_0^{k})^2}{|{\zeta^{i}} + {\zeta^{j}} + {\zeta^{k}}|^2_{g^*}})\\
& = \frac{3}{2s^3} - \frac{21}{2s^2} - \frac{9}{4s} + \mathcal{O}(1),
\end{align*}
which implies that
\begin{align}\label{eq_Cs}
\mathcal{C}(\zeta^{1}, \zeta^{2}, \zeta^{3}, \zeta^{4})
&= C(\zeta^{1}, \zeta^{2}, \zeta^{3}, \zeta^{4}) \beta_2^3 + D(\zeta^{1}, \zeta^{2}, \zeta^{3}, \zeta^{4})\beta_2\beta_3 + \beta_4 \nonumber\\
&=-\frac{1}{2s^3}( 4\beta_2^3
- 3\beta_2\beta_3) + \frac{7}{2s^2}(4\beta_2^3
- 3\beta_2\beta_3) + \frac{1}{4s}(40\beta_2^3
- 9\beta_2\beta_3) + \mathcal{O}(1),
\end{align}
when $s = \sin \theta$ is sufficiently small.
For each fixed $\theta$, we can expect to recover the quantity
$M_4(q, \zeta^{1}, \zeta^{2}, \zeta^{3}, \zeta^{4})$, see (\ref{eq_M3}).
Now to distinguish these lightlike covectors (or lightlike vectors) from those in Section \ref{subsec_three},
we use the notations $\tilde{\zeta}^{j}$ and $\tilde{\xi}_j$ for $j = 2,3,4$ instead.
We compute
\begin{align*}
M_4(q, {\zeta^{1}}, {\tilde{\zeta}^{2}}, {\tilde{\zeta}^{3}}, {\tilde{\zeta}^{4}})
& = -\frac{1}{2s^3}(4 \beta_2^3 - 3 \beta_2 \beta_3 ){{ \sigma_{p}}}(Q_g)(y, \eta, q, \zeta)\prod_{j=1}^4{{ \sigma_{p}}}(Q_g)(q, \zeta, \tilde{x}_j, \tilde{\xi}^\sharp_j)
+ O(\frac{1}{s^2}).
\end{align*}
When $s$ goes to zero, the null geodesics $\gamma_{\tilde{x}_j, \tilde{\xi}_j}$ converge to $\gamma_{x_1, \xi_1}$.
By analyzing the asymptotic behavior of $M_4$ when $s \rightarrow 0$,
we can expect to recover the quantity
\begin{align}\label{eq_m4}
m_4(q, \zeta, \zeta^1)
= (-4 \beta_2^3 + 3\beta_2\beta_3)
{{ \sigma_{p}}}(Q_g)(y, \eta, q, \zeta)
({{ \sigma_{p}}}(Q_g)(q, \zeta, x_1, \xi^\sharp_1)\sigma_{{p}}(v_1)(x_1, \xi_1^\sharp))^4.
\end{align}
\section{Appendix}
\input{wellposedness}
\input{boundary}
\subsection{Energy estimates}\label{subsec_energy}
The well-posedness of nonlinear problem {(\ref{eq_problem})} for a small boundary source $f$
can be established following similar arguments as in \cite{Hintz2020},
see also
\cite{ultra21, Uhlmann2021a} and in particular \cite{UZ_acoustic}.
Compared to \cite{UZ_acoustic}, the difference is that we have a lower order term in the differential operator.
Recall that in \cite[Section 2]{UZ_acoustic}, one uses energy estimates for the linear problem from \cite[Theorem 3.1]{Dafermos1985} to construct a contraction map for the nonlinear problem.
To perform the same arguments, we need a slightly modified version of \cite[Theorem 3.1]{Dafermos1985}.
We briefly state the setting and modification in the following.
Recall $M = \mathbb{R} \times \Omega$, where $\Omega$ is a bounded set in $\mathbb{R}^3$ with smooth boundary, and we write $x = (t,x') = (x^0, x^1, x^2, x^3) \in M$.
In the following, we consider the case
when the leading term of the differential operator is given by $\partial^2_{t} +
\sum_{i,j = 1}^3 a_{ij}(x) \partial_{i} \partial_{j}$.
The case for a globally hyperbolic Lorentzian manifold can be considered in a similar way.
We first review the result in \cite[Theorem 3.1]{Dafermos1985} and then modify it to allow an arbitrary first-order term.
In \cite[Section 3]{Dafermos1985},
one considers the linear initial value problem
\begin{align*}
\partial^2_{t} u(t,x') + B(t) u(t,x') = f(t,x'), \quad \mbox{in } (0,T) \times \Omega,\\
u(0,x') = u^0, \quad \partial_t{u}(0,x') = u^1,
\end{align*}
where $B(t)$ is a linear differential operator w.r.t. $x'$ satisfying the assumptions (B1), (B2), and (B3) below. Here, instead of the original assumption (B1), we use the stronger assumption (B1') of \cite{Dafermos1985} and denote it by (B1).
This is enough for our model.
In addition,
let $H_k(\Omega) = W^{k,2}(\Omega)$ be the Sobolev space and
we choose a suitable subspace $V$ of $H_1(\Omega)$, dense in $H_0(\Omega)$.
We would like to find a solution $u$ in the space $X_k \equiv V \cap H_k(\Omega)$, to accommodate the boundary condition.
For convenience, we write $\|v(t)\|_{H_k} = \|v(t)\|_k$ for any $v(t) \in H_k(\Omega)$, and we denote by $\langle v, w \rangle$ the inner product of two functions in $L^2(\Omega)$.
\begin{itemize}
\item[(B1)] We assume $B(t) \in C^{m-1}([0, T]; \mathcal{L}_{2,m})$, where $\mathcal{L}(Z, Y)$ denotes the space of bounded linear operators from $Z$ to $Y$ and we define
\[
\mathcal{L}_{2,m} \equiv \bigcap_{j=-1}^{m-2} \mathcal{L}(H_{j+1}(\Omega), H_j(\Omega)).
\]
\item[(B2)] For each $t \in[0, T]$ and $k=0, \ldots, m-2$, the conditions $v \in X_k$ and $B(t) v \in H_k$ together imply that $v \in X_{k+2}$. Moreover, there is a constant $\mu>0$ such that
\begin{align*}
\|v\|_{k+2} \leq \mu\left(\|v\|_k+\|B(t) v\|_k\right) \quad \forall v \in X_{k+2}, \quad t \in[0, T], \quad k=0, \ldots, m-2.
\end{align*}
\item[(B3)] There are constants $\kappa, \lambda, \eta>0$ such that
\begin{align*}
\langle B(t) v, v\rangle+ \kappa\|v\|_0^2 \geq \lambda\|v\|_1^2 \quad \forall v \in V, \quad t \in[0, T],
\end{align*}
and
\begin{align*}
|b(t ; v, \omega)| \leq \eta\|v\|_1 \cdot\|\omega\|_0 \quad \forall v, \omega \in V, \quad t \in[0, T],
\end{align*}
where
\begin{align*}
\quad b(t ; v, \omega):=\langle B(t) v, \omega\rangle-\langle B(t) \omega, v\rangle \quad \forall v, \omega \in V, \quad t \in[0, T].
\end{align*}
\end{itemize}
In particular, for our model, suppose $u^0 = u^1 = 0$ and we impose the boundary condition $u|_{(0,T) \times \partial \Omega} = 0$ by choosing $V = W_0^{1,2}(\Omega)$.
Moreover, we suppose
\begin{align}\label{def_B}
B(t) u = \sum_{i,j = 1}^3 a_{ij}(t,x') \partial_{x_i} \partial_{x_j} u + \langle b(x), \nabla u \rangle + B_0(x) u \equiv B_2(t) u + B_1(t) u + B_0(x) u,
\end{align}
where the matrix $\{a_{ij}(t,x')\}$ is symmetric and positive definite with smooth entries,
$\nabla u$ denotes the gradient of $u$ w.r.t. ${x = (t,x')}$,
and the one-form $b(x)\in C^\infty(M; T^*M)$ with the potential $B_0(x) \in C^\infty(M)$.
We write $B_1(t) = b_0(x) \partial_t + \sum_{j=1}^3 b_j(x) \partial_j$, where $b_k(x) \in C^\infty(M)$ for $k = 0, 1, 2, 3$.
In the following, first, we would like to show a modified version of \cite[Theorem 3.1]{Dafermos1985} for $B(t)$ given by (\ref{def_B}), when $b_0(t,x') \equiv 0$, i.e.,
$B_1(t) = \sum_{j=1}^3 b_j(x) \partial_j$.
Then the case with $b_0(t,x')$ can be proved by considering an integrating factor $e^{\phi(t,x')}$, where $\phi(t,x') = \int_0^t b_0(s, x') \diff s$ is smooth over $M$.
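To make this reduction explicit, here is the schematic computation behind the integrating factor (in our normalization the exponent carries a factor $\frac{1}{2}$; only the time derivatives are displayed, since the spatial derivatives of the exponential factor merely modify $b_j$ and $B_0$, which remain within the admissible class). Writing $u = e^{-\phi/2} w$, we have
\begin{align*}
\partial^2_{t} u + b_0 \partial_t u
= e^{-\phi/2} \Big( \partial^2_{t} w + \big( b_0 - \partial_t \phi \big) \partial_t w
+ \big( \tfrac{1}{4} (\partial_t \phi)^2 - \tfrac{1}{2} \partial^2_t \phi - \tfrac{1}{2} b_0\, \partial_t \phi \big) w \Big),
\end{align*}
so the choice $\partial_t \phi = b_0$, i.e. $\phi(t,x') = \int_0^t b_0(s,x') \diff s$, removes the first-order time derivative at the cost of an additional smooth zeroth-order term.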
For $R>0$, we define $Z^m(R,T)$ as the set containing all functions $v$ such that
\[
v \in \bigcap_{k=0}^{m} W^{k, \infty}([0,T]; H_{m-k}(\Omega)),
\quad \|v\|^2_{Z^m} = \sup_{t \in [0, T]} \sum_{k=0}^m \|\partial_t^k v(t)\|^2_{H_{m-k}} \leq R^2.
\]
We abuse the notation $C$ to denote different constants that depend on $m, M, T$.
Recall \cite[Theorem 3.1]{Dafermos1985} shows that with $B(t)$ satisfying (B1), (B2), (B3),
there exists a unique solution
\[
u \in \bigcap_{k=0}^{m} C^{k}([0,T]; X_{m-k})
\]
with the estimate
\[
\|u\|^2_{Z^m} = \sup_{t \in [0, T]} \sum_{k=0}^m \|\partial_t^k u (t)\|^2_{{m-k}} \leq Ce^{KT}
(
\sup_{t \in [0, T]} \sum_{k=0}^{m-2} \|\partial_t^k f (t)\|^2_{{m-2-k}}
+
\int_0^T \|\partial_t^{m-1} f (t)\|_{H^0}^2 \diff t
),
\]
where $C$ and $K$ are constants depending on the constants in the estimates of (B2), (B3).
For our purpose, we would like to relax the second estimate
\begin{align*}
|b(t ; v, \omega)| \leq \eta\|v\|_1 \cdot\|\omega\|_0 \quad \forall v, \omega \in V, \quad t \in[0, T],
\end{align*}
in (B3) to allow an arbitrary first-order term in $B(t)$, see (\ref{def_B}).
First, we note that the principal part of $B(t)$, i.e.,
$B_2(t) = \sum_{i,j = 1}^3 a_{ij}(t,x') \partial_{x_i} \partial_{x_j}$
satisfies (B1), (B2), (B3).
Now with extra terms $B_1(t)$ and $B_0(t)$ as above,
the condition (B1) and (B2) still hold, since $B(t)$ is an elliptic operator.
For (B3), we have
\[
\langle B_2(t) v, v \rangle + \kappa \|v\|_0^2 \geq \lambda \|v\|_1^2, \quad \forall
v \in V, \ t \in [0,T],
\]
where $\lambda, \kappa > 0$ are constants.
Since $B_0(x)$ and $b_j(x), j = 1,2,3$, are smooth over $M$, there exist constants $c_0, c_1 > 0$ such that
\[
|\langle B_1(t) v, v \rangle| \leq c_1 \|v\|_1 \| v\|_0,
\quad
|\langle B_0(t) v, v \rangle| \leq c_0 \|v\|_0^2.
\]
Then, by Young's inequality $c_1 \|v\|_1 \|v\|_0 \leq \frac{\lambda}{2}\|v\|_1^2 + \frac{c_1^2}{2\lambda}\|v\|_0^2$, we have
\begin{align*}
\langle B(t) v, v \rangle + \kappa \|v\|_0^2
&\geq
\lambda \|v\|_1^2
- c_1 \|v\|_1 \| v\|_0 - c_0 \|v\|_0^2\\
&\geq
\frac{\lambda}{2} \|v\|_1^2 - (\frac{c_1^2}{2\lambda} + c_0) \|v\|_0^2,
\end{align*}
which implies that $B(t)$ satisfies the first estimate in (B3) with the new constants $\frac{\lambda}{2}$ and $\kappa + \frac{c_1^2}{2\lambda} + c_0$.
For the second assumption in (B3), if we write
\[
b(t; v, w) \equiv \langle B(t) v, w\rangle -
\langle B(t) w, v\rangle,
\]
it requires that
\begin{align}\label{eq_35}
|b(t; v, w)| \leq \eta \|v\|_{1} \|w\|_{0}, \quad \forall v, w \in V, \ t \in [0, T].
\end{align}
Let $b_j(t; v, w) \equiv \langle B_j(t) v, w\rangle -
\langle B_j(t) w, v\rangle$, for $j = 2, 1, 0$.
Note that $b_2(t; v, w)$ and $b_0(t; v, w)$ satisfy this estimate, since the matrix $\{a_{ij}(x)\}$ is symmetric and $B_0(x)$ acts by multiplication.
But for $j = 1$, we have
\[
|b_1(t; v, w)| = | \langle \sum_{j=1}^3 b_j(x) \partial_j v, w\rangle -
\langle\sum_{j=1}^3 b_j(x) \partial_j w, v\rangle|,
\]
which does not necessarily satisfy (\ref{eq_35}).
Thus, we rewrite $B(t)$ as two parts
$
B(t) = B_s(t) + B_1(t),
$
where $B_s(t) = B_2(t) + B_0(t)$.
If we check the proof of
\cite[Theorem 3.1]{Dafermos1985},
the assumption (\ref{eq_35}) is used in several places that we list below.
Firstly, in the proof of \cite[Lemma 3.1]{Dafermos1985},
one constructs a sequence of approximate solutions $\{u_n(t)\}_{n=1}^\infty$ to employ the method of Faedo-Galerkin.
The assumption (\ref{eq_35}) is used to estimate (3.22) there.
The goal is to show that the sequence $\{u_n(t)\}_{n=1}^\infty$ is bounded in $W^{m,2}([0, T]; H_0)$ and in $W^{m-1,2}([0, T]; V)$.
Note that (3.22) is derived from (3.20) by setting $\omega = 2 \partial_t^m u_n(t)$, i.e.,
\begin{equation*}
\begin{aligned}
&\langle \partial_t^{m+1} u_n(t), 2 \partial_t^m u_n(t)\rangle + \langle B(t) \partial_t^{m-1} u_n(t), 2\partial_t^m u_n(t) \rangle\\
=&- \sum_{k=1}^{m-1}
\binom {m-1}{k} \langle \partial_t^{k} B(t) \partial_t^{m-1-k} u_n(t), 2 \partial_t^m u_n(t) \rangle + \langle \partial_t^{m-1} f(t), 2 \partial_t^m u_n(t) \rangle.
\end{aligned}
\end{equation*}
With $B_1(t) = \sum_{j=1}^3 b_j(x) \partial_j$, we rewrite (3.22) as
\begin{equation*}
\begin{aligned}
&2 \langle \partial_t^{m+1} u_n(t), \partial_t^m u_n(t)\rangle
+ 2\langle B_s(t) \partial_t^{m-1} u_n(t), \partial_t^m u_n(t) \rangle
+ 2\langle B_1(t) \partial_t^{m-1} u_n(t), \partial_t^m u_n(t) \rangle\\
=&-2 \sum_{k=1}^{m-1}
\binom {m-1}{k} \langle \partial_t^{k} B(t) \partial_t^{m-1-k} u_n(t), \partial_t^m u_n(t) \rangle + 2\langle \partial_t^{m-1} f(t), \partial_t^m u_n(t) \rangle.
\end{aligned}
\end{equation*}
It follows that
\begin{equation}\label{eq_du}
\begin{aligned}
&\frac{\diff}{\diff t} (\| \partial_t^m u_n(t) \|_{0}^2)
+ \frac{\diff}{\diff t} (\langle B_s(t) \partial_t^{m-1} u_n(t), \partial_t^{m-1} u_n(t) \rangle)
\\
=&
-2\sum_{k=1}^{m-1}
\binom {m-1}{k} \langle \partial_t^{k} B(t) \partial_t^{m-1-k} u_n(t), \partial_t^m u_n(t) \rangle
- 2\langle B_1(t) \partial_t^{m-1} u_n(t), \partial_t^m u_n(t) \rangle\\
& \quad
-\langle B_s(t) \partial_t^{m-1} u_n(t), \partial_t^m u_n(t) \rangle
+ \langle B_s(t) \partial_t^{m} u_n(t), \partial_t^{m-1} u_n(t) \rangle\\
& \quad +\langle \partial_t B_s(t) \partial_t^{m-1} u_n(t), \partial_t^{m-1} u_n(t) \rangle + 2\langle \partial_t^{m-1} f(t), \partial_t^m u_n(t) \rangle.
\end{aligned}
\end{equation}
In addition, for $k = 1, \ldots, m-1$, we have
\begin{align}\label{eq_Bk}
&2\int_0^t \langle \partial_t^{k} B(s) \partial_t^{m-1-k} u_n(s), \partial_t^m u_n(s) \rangle
\diff s \\
= & \quad 2\langle \partial_t^{k} B(t) \partial_t^{m-1-k} u_n(t), \partial_t^{m-1} u_n(t) \rangle
- \langle \partial_t^{k} B(0) \partial_t^{m-1-k} u_n(0), 2 \partial_t^{m-1} u_n(0) \rangle
\nonumber \\
& \quad
- 2\int_0^t \langle \partial_t^{k+1} B(s) \partial_t^{m-1-k} u_n(s), \partial_t^{m-1} u_n(s) \rangle
\diff s
-2\int_0^t \langle \partial_t^{k} B(s) \partial_t^{m-k} u_n(s), \partial_t^{m-1} u_n(s) \rangle
\diff s. \nonumber
\end{align}
We plug (\ref{eq_Bk}) into (\ref{eq_du}) and integrate this equation w.r.t. $t$ to have
\begin{equation}\label{eq_un}
\begin{aligned}
&\| \partial_t^m u_n(t) \|_{0}^2
+ \langle B_s(t) \partial_t^{m-1} u_n(t), \partial_t^{m-1} u_n(t) \rangle
\\
=&\| \partial_t^m u_n(0) \|_{0}^2 +
\langle B_s(0) \partial_t^{m-1} u_n(0), \partial_t^{m-1} u_n(0) \rangle \\
& + \sum_{k=1}^{m-1}\binom {m-1}{k} \big(\langle \partial_t^{k} B(t) \partial_t^{m-1-k} u_n(t), 2 \partial_t^{m-1} u_n(t) \rangle
- \langle \partial_t^{k} B(0) \partial_t^{m-1-k} u_n(0), 2 \partial_t^{m-1} u_n(0) \rangle
\\
& +
\int_0^t \langle \partial_t^{k+1} B(s) \partial_t^{m-1-k} u_n(s)+ \partial_t^{k} B(s) \partial_t^{m-k} u_n(s), 2 \partial_t^{m-1} u_n(s) \rangle
\diff s \big)
\\
& + \int_0^t \langle \partial_t^{m-1} f(s), 2 \partial_t^m u_n(s) \rangle \diff s
- \int_0^t \langle B_1(s) \partial_t^{m-1} u_n(s), 2\partial_t^m u_n(s) \rangle \diff s \\
& - \int_0^t b_s(s; \partial_t^{m-1} u_n(s), \partial_t^m u_n(s)) \diff s
+ \int_0^t \langle \partial_t B_s(s) \partial_t^{m-1} u_n(s), \partial_t^{m-1} u_n(s) \rangle \diff s.
\end{aligned}
\end{equation}
Note that
\[
\langle B_s(t) \partial_t^{m-1} u_n(t), \partial_t^{m-1} u_n(t) \rangle
\geq
\lambda\| \partial_t^{m-1} u_n(t) \|_{1}^2
- \kappa \| \partial_t^{m-1} u_n(t) \|_{0}^2,
\]
for some constants $\lambda, \kappa > 0$.
On the other hand, we have
\begin{align}\label{eq_B1}
\| B_1(s) \partial_t^{m-1} u_n(s)\|_0^2 \leq C \|\partial_t^{m-1} u_n(s)\|_1^2,
\end{align}
and integrating by parts w.r.t. $x'$ we have
\begin{align*}
|\langle \partial_t^{k} B(s) \partial_t^{m-1-k} u_n(s), \partial_t^{m-1} u_n(s) \rangle|
\leq &
C (\|\partial_t^{m-1-k} u_n(s)\|_1
\|\partial_t^{m-1} u_n(s)\|_1 + \text{b.\,v.})\\
\leq &
C (\|\partial_t^{m-1-k} u_n(s)\|_1^2 +
\|\partial_t^{m-1} u_n(s)\|_1^2 + \text{b.\,v.}).
\end{align*}
This implies that
\begin{align*}
|\int_0^t \langle \partial_t^{k+1} B(s) \partial_t^{m-1-k} u_n(s), 2 \partial_t^{m-1} u_n(s) \rangle
\diff s|
&\leq C \int_0^t \| \partial_t^{m-1-k} u_n(s) \|_1^2
+ \|\partial_t^{m-1} u_n(s) \|_1^2
\diff s,\\
|\int_0^t \langle \partial_t^{k} B(s) \partial_t^{m-k} u_n(s), 2 \partial_t^{m-1} u_n(s) \rangle
\diff s|
&\leq C
\int_0^t \| \partial_t^{m-k} u_n(s) \|_1^2
+ \|\partial_t^{m-1} u_n(s) \|_1^2
\diff s.
\end{align*}
Moreover, we have
\[
\partial_t^j u_n(t)
= \partial_t^j u_n(0) + \int_0^t \partial_t^{j+1} u_n(s) \diff s,
\]
which implies for $j = m-1, \ldots, 0$ we have
\begin{align}\label{eq_jj1}
\|\partial_t^j u_n(t) \|_0^2
\leq C \big( \|\partial_t^j u_n(0) \|_0^2
+ \int_0^t \| \partial_t^{j+1} u_n(s) \|_0^2 \diff s \big).
\end{align}
Thus, equations (\ref{eq_un}) and (\ref{eq_Bk}) imply that
\begin{equation*}
\begin{aligned}
& \sum_{j=0}^{m} \| \partial_t^{j} u_n(t) \|_0^2
+ \sum_{j=0}^{m-1} \| \partial_t^{j} u_n(t) \|_1^2 \\
\leq & CN + K(\sum_{j=0}^{m} \int_0^t \| \partial_t^{j} u_n(s) \|_0^2 \diff s+ \sum_{j=0}^{m-1} \int_0^t \| \partial_t^{j} u_n(s) \|_1^2 \diff s),
\end{aligned}
\end{equation*}
where with zero initial condition we write
\begin{align*}
N &= \sum_{j=0}^{m} \| \partial_t^{j} u_n(0) \|_0^2
+ \sup_{t \in [0, T]} \sum_{k=0}^{m-2} \|\partial_t^k f (t)\|^2_{H^{m-2-k}}
+ \int_0^T \|\partial_t^{m-1} f (t)\|_{H^0}^2 \diff t\\
&= \sup_{t \in [0, T]} \sum_{k=0}^{m-2} \|\partial_t^k f (t)\|^2_{H^{m-2-k}}
+ \int_0^T \|\partial_t^{m-1} f (t)\|_{H^0}^2 \diff t.
\end{align*}
Thus, the sequence $\{u_n\}_{n=1}^\infty$ is bounded in the desired space and one can prove the existence of a weak solution by a standard argument.
Secondly, we can prove the estimate in (3.28) in the proof of \cite[Lemma 3.2]{Dafermos1985}, with an arbitrary smooth one-form.
Indeed, (3.28) is obtained in a similar way as (3.22).
This time, we have
\begin{equation*}
\begin{aligned}
&\langle \partial_t^{m+1} u_n(t), 2 \partial_t^m u_n(t)\rangle + \langle B_s(t) \partial_t^{m-1} u_n(t), 2\partial_t^m u_n(t) \rangle + \langle B_1(t) \partial_t^{m-1} u_n(t), 2\partial_t^m u_n(t) \rangle\\
=&-\sum_{k=1}^{m-1}
\binom {m-1}{k} \langle \partial_t^{k} B(t) \partial_t^{m-1-k} u_n(t), 2 \partial_t^m u_n(t) \rangle + \langle \partial_t^{m-1} f(t), 2 \partial_t^m u_n(t) \rangle.
\end{aligned}
\end{equation*}
We can rewrite (3.28) as
\begin{equation*}
\begin{aligned}
&\| \partial_t^m u_n(t) \|_{0}^2
+ \langle B_s(t) \partial_t^{m-1} u_n(t), \partial_t^{m-1} u_n(t) \rangle
\\
=& \| \partial_t^m u_n(0) \|_{0}^2
+\langle B_s(0) \partial_t^{m-1} u_n(0), \partial_t^{m-1} u_n(0) \rangle \\
& \quad + \sum_{k=1}^{m-1}\binom {m-1}{k} \int_0^t \langle \partial_t^{k} B(s) \partial_t^{m-1-k} u_n(s), 2 \partial_t^m u_n(s) \rangle \diff s
\\
& \quad + \int_0^t \langle \partial_t^{m-1} f(s), 2 \partial_t^m u_n(s) \rangle \diff s
- \int_0^t \langle B_1(s) \partial_t^{m-1} u_n(s), 2\partial_t^m u_n(s) \rangle \diff s \\
& \quad - \int_0^t (\langle B_s(s)\partial_t^{m-1} u_n(s), \partial_t^m u_n(s) \rangle - \langle B_s(s)\partial_t^m u_n(s), \partial_t^{m-1} u_n(s) \rangle) \diff s\\
&\quad + \int_0^t \langle \partial_t B_s(s) \partial_t^{m-1} u_n(s), \partial_t^{m-1} u_n(s) \rangle \diff s.
\end{aligned}
\end{equation*}
By (\ref{eq_B1}) and (\ref{eq_jj1}), this implies
\[
\| \partial_t^m u_n(t) \|^2_{0} + \| \partial_t^{m-1} u_n(t) \|^2_{1}
\leq CN + K \int_0^t \sum_{k=0}^m \| \partial_t^k u_n(s) \|^2_{m-k} \diff s, \quad \forall t \in [0, T],
\]
which proves (3.32) in \cite{Dafermos1985}.
Then we can follow the same analysis in the rest of the proof of \cite[Lemma 3.2]{Dafermos1985}.
This proves the desired result.
\subsection{Local well-posedness}\label{Sec_well}
Now let $T >0$ be fixed and let $m \geq 5$.
We consider the boundary value problem for the nonlinear equation
\begin{equation*}
\begin{aligned}
\partial_t^2 p - c^2(x) \Delta p + \langle b(x), \nabla p \rangle + h(x) p
- F(x, p,\partial_t p, \partial^2_t p) &= 0, & \ & \mbox{in } (0,T) \times \Omega,\\
p &= f, & \ &\mbox{on } (0,T) \times \partial \Omega,\\
p = {\partial_t p} &= 0, & \ & \mbox{on } \{t=0\},
\end{aligned}
\end{equation*}
where we assume $F(x, p, \partial_t p, \partial^2_t p) =\sum_{j=1}^{+\infty} \beta_{j+1}(x) \partial_t^2 (p^{j+1})$ with
$b(x) \in C^\infty(M; T^*M)$, $h(x) \in C^\infty(M)$, and $\beta_{j+1}(x) \in C^\infty(M)$ for $j \geq 1$.
Suppose $f \in C^{m+1} ([0,T] \times \partial \Omega)$ satisfies $\|f\|_{C^{m+1} ([0,T] \times \partial \Omega)} \leq \epsilon_0$, with a small positive number $\epsilon_0$ to be specified later.
Then there exists a function $u_f \in C^{m+1} ([0,T] \times \Omega)$ such that $ u_f|_{\partial M} = f$ and
\[
\|u_f\|_{C^{m+1} ([0,T] \times \Omega)} \leq\|f\|_{C^{m+1} ([0,T] \times \partial \Omega)} .
\]
Let $\tilde{p} = p -u_f $ and
we rewrite the nonlinear term as
\begin{align}\label{eq_F}
F(x, p, \partial_t p, \partial^2_t p)
& = \sum_{j = 1}^{+\infty} \beta_{j+1}(x) \partial_t^2(p^{j+1}) \\
& = (\sum_{j = 1}^{+
\infty} (j+1)\beta_{j+1}(x) p^{j}) \partial^2_{t} p
+ (\sum_{j = 1}^\infty (j+1)j \beta_{j+1}(x) p^{j-1}) \partial_t p \partial_t p \nonumber\\
& \equiv F_1(x,p)p \partial_{t}^2 p + F_2(x,p) (\partial_t p)^2. \nonumber
\end{align}
Note that the functions $F_1, F_2$
are smooth over $M \times \mathbb{R}$.
Then $\tilde{p}$ must solve the equation
\begin{align*}
&(1 - F_1(x,\tilde{p} + u_f)(\tilde{p} + u_f))\partial_t^2 \tilde{p} - c(x)^2 \Delta \tilde{p}
+ \langle b(x), \nabla \tilde{p} \rangle + h\tilde{p} \\
= &
-(\partial_t^2 - c(x)^2 \Delta + \langle b(x), \nabla \rangle + h)u_f
+ F_1(x,\tilde{p} + u_f)(\tilde{p} + u_f)\partial_{t}^2 u_f +
F_2(x,\tilde{p} + u_f) (\partial_t (\tilde{p} + u_f))^2.
\end{align*}
When $\tilde{p}$ and $u_f$ are small enough, the factor
$1 - F_1(x,\tilde{p} + u_f)(\tilde{p} + u_f)$ is smooth and nonzero.
We define
\begin{align*}
\kappa(x,p) = \frac{1}{1 - F_1(x,p)p}, \quad
\alpha(x, p) = \frac{c(x)^2}{1 - F_1(x,p)p},\\
q_1(x,p) = \frac{F_1(x,p)}{1 - F_1(x,p)p}, \quad
q_2(x,p) = \frac{F_2(x,p)}{1 - F_1(x,p)p},
\end{align*}
and write the operator as
\begin{align*}
P(x, p) = \partial_t^2 - \alpha(x,p) \Delta
+ \langle \kappa(x,p)b(x), \nabla \rangle + \kappa(x,p)h(x)
\end{align*}
with the nonlinear term
\begin{align*}
\widetilde{F} (x, \partial_t^2 u_f, \Delta u_f, p, \partial_t p)
=
-P(x, p)u_f
+ q_1(x,p)p\partial_{t}^2 u_f +
q_2(x,p) (\partial_t p)^2.
\end{align*}
Note that there exist $c_1, c_2, \epsilon > 0$ such that $c_1 \leq \alpha(x, p) \leq c_2$ when $\| p \|_{{Z^m}} \leq \epsilon$.
It follows that $\tilde{p}$ solves the system
\begin{equation}\label{hmNBC}
\begin{cases}
P(x, \tilde{p} + u_f)\tilde{p}
= \widetilde{F}(x, \partial_t^2 u_f, \Delta u_f, \tilde{p}+ u_f, \partial_t(\tilde{p}+u_f)),
&\mbox{on } M,\\
\tilde{p} = 0, &\mbox{on } \partial M,\\
\tilde{p} = 0, &\mbox{for } t <0.
\end{cases}
\end{equation}
Recall from Section \ref{subsec_energy} that, for $R>0$, $Z^m(R,T)$ denotes the set of all functions $v$ such that
\[
v \in \bigcap_{k=0}^{m} W^{k, \infty}([0,T]; H_{m-k}(\Omega)),
\quad \|v\|^2_{Z^m} = \sup_{t \in [0, T]} \sum_{k=0}^m \|\partial_t^k v(t)\|^2_{{m-k}} \leq R^2.
\]
As before, we abuse the notation $C$ to denote different constants that depend on $m, M, T$.
One can show the following claim by the Sobolev embedding theorem.
\begin{claim}[{\cite[Claim 3]{Uhlmann2021a}}]\label{normineq}
Suppose $u \in Z^m(R, T)$.
Then $\|u\|_{Z^{m-1}} \leq \|u\|_{Z^m}$ and $\nabla^j_g u \in Z^{m-1}(R, T)$,
$j = 1, \dots, 4$. Moreover, we have the following estimates.
\begin{enumerate}[(1)]
\item If $v \in Z^m(R', T)$, then $\|uv\|_{Z^m} \leq C \|u\|_{Z^m} \|v\|_{Z^m}$.
\item If $v \in {Z^{m-1}}(R', T)$, then $\|uv\|_{Z^{m-1}} \leq C \|u\|_{Z^m} \|v\|_{Z^{m-1}}$.
\item If $q(x,u) \in C^m(M \times \mathbb{C})$,
then $\|q(x, u)\|_{Z^m} \leq C \|q\|_{C^m(M \times \mathbb{C})} (\sum_{l=0}^{m} \|u\|^l_{Z^m})$.
\end{enumerate}
\end{claim}
For $v \in Z^m(\rho_0, T)$ with $\rho_0$ to be specified later, we consider the linearized problem
\begin{equation*}
\begin{cases}
P(x, v + u_f)\tilde{p}
= \widetilde{F}(x, \partial_t^2 u_f, \Delta u_f, v + u_f, \partial_t(v+u_f)),\\
\tilde{p} = 0, &\mbox{on } \partial M,\\
\tilde{p} = 0, &\mbox{for } t <0,
\end{cases}
\end{equation*}
and we define the solution operator $\mathcal{J}$ which maps $v$ to the solution $\tilde{p}$.
By Claim \ref{normineq} and (\ref{eq_F}),
we have
\begin{align*}
&\|
\widetilde{F}(x, \partial_t^2 u_f, \Delta u_f, v + u_f, \partial_t(v+u_f))
\|_{Z^{m-1}} \\
= &
\| -P(x, v+u_f) u_f
+ q_1(x,v+u_f)(v+u_f)\partial_{t}^2 u_f +
q_2(x,v+u_f) (\partial_t (v+u_f))^2
\|_{Z^{m-1}} \\
\leq & \| P(x, v+u_f) u_f \|_{C^{m-1}([0,T]\times \Omega)}
+ \|q_1(x,v + u_f)\|_{Z^m} \|v + u_f\|_{Z^m}
\| \partial_t^2 u_f \|_{C^{m-1}([0,T]\times \Omega)} \\
& \quad +
\|q_2(x,v + u_f)\|_{Z^m} \|v + u_f\|^2_{Z^m} \\
\leq & C(\epsilon_0 + (1 + (\rho_0 + \epsilon_0) + \ldots + (\rho_0 + \epsilon_0)^m) (\rho_0 + \epsilon_0)^2).
\end{align*}
According to our modified version of \cite[Theorem 3.1]{Dafermos1985} in Section \ref{subsec_energy},
the linearized problem has a unique solution
\[
\tilde{p} \in \bigcap_{k=0}^{m} C^{k}([0,T]; H_{m-k}(\Omega))
\]
such that
\[
\| \tilde{p}\|_{Z^m} \leq C(\epsilon_0 + (1 + (\rho_0 + \epsilon_0) + \ldots + (\rho_0 + \epsilon_0)^m) (\rho_0 + \epsilon_0)^2 )e^{KT},
\]
where $C, K$ are positive constants.
If we assume $\rho_0$ and $\epsilon_0$ are small enough, then the above inequality implies that
\[
\| \tilde{p}\|_{Z^m} \leq C(\epsilon_0 + (\rho_0 + \epsilon_0)^2 )e^{KT}.
\]
For any $\rho_0$ satisfying $\rho_0 < 1/({8C e^{KT}})$,
we can choose $\epsilon_0 = {\rho_0}/({8C e^{KT}}) $ such that
\begin{equation}\label{eps}
C(\epsilon_0 + (\rho_0 + \epsilon_0)^2 )e^{KT} < \rho_0.
\end{equation}
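Indeed, if $8 C e^{KT} \geq 1$, then $\epsilon_0 \leq \rho_0$, and (\ref{eps}) can be checked directly:
\[
C(\epsilon_0 + (\rho_0 + \epsilon_0)^2 )e^{KT}
\leq \frac{\rho_0}{8} + 4C e^{KT} \rho_0^2
< \frac{\rho_0}{8} + \frac{\rho_0}{2} < \rho_0.
\]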
In this case, we have $\mathcal{J}$ maps $Z^m(\rho_0, T)$ to itself.
In the following we show that $\mathcal{J}$ is a contraction map if $\rho_0$ is small enough.
It follows that the boundary value problem (\ref{hmNBC}) has a unique solution $\tilde{p} \in Z^m(\rho_0, T)$ as a fixed point of $\mathcal{J}$.
Indeed, for $\tilde{p}_j = \mathcal{J}(v_j)$ with $v_j \in Z^m(\rho_0, T)$, we have that $\tilde{p}_2 - \tilde{p}_1$ satisfies
\begin{align*}
&P(x, v_2 + u_f) (\tilde{p}_2 - \tilde{p}_1)\\
=&\,\widetilde{F}(x, \partial_t^2 u_f, \Delta u_f, v_2 + u_f, \partial_t(v_2 + u_f))
- \widetilde{F}(x, \partial_t^2 u_f, \Delta u_f, v_1 + u_f, \partial_t(v_1 + u_f))\\
&+ (P(x, v_1 + u_f) - P(x, v_2 + u_f) ) \tilde{p}_1 \\
=&\,(\alpha(x, v_2 + u_f) - \alpha(x, v_1 + u_f))\Delta(u_f - \tilde{p}_1)
+ \langle (\kappa(v_2 + u_f) - \kappa (v_1+ u_f))b(x), \nabla (u_f - \tilde{p}_1)\rangle\\
&+ (\kappa(v_2 + u_f) - \kappa (v_1+ u_f))h(x) (u_f - \tilde{p}_1)
+ \big(q_1(x,v_2 +u_f)(v_2 +u_f)
-q_1(x,v_1 +u_f)(v_1 + u_f)\big)\partial_t^2 u_f\\
&+ q_2(x,v_2 +u_f)(\partial_t(v_2 + u_f))^2 -q_2(x,v_1 +u_f)(\partial_t(v_1 + u_f))^2
\\
=&\,(\alpha(x, v_2 + u_f) - \alpha(x, v_1 + u_f))\Delta(u_f - \tilde{p}_1)
+ \langle (\kappa(v_2 + u_f) - \kappa (v_1+ u_f))b(x), \nabla (u_f - \tilde{p}_1)\rangle\\
&+ (\kappa(v_2 + u_f) - \kappa (v_1+ u_f))h(x) (u_f - \tilde{p}_1)
+(q_1(x,v_2 +u_f) -q_1(x,v_1 +u_f))(v_2 + u_f)\partial_t^2 u_f\\
&+ q_1(x,v_1 +u_f)(v_2 -v_1)\partial_t^2 u_f
+ (q_2(x,v_2 +u_f) -q_2(x,v_1 +u_f))(\partial_t(v_2 + u_f))^2\\
&+ q_2(x,v_1 +u_f)\,\partial_t(v_2 +v_1+ 2u_f)\,\partial_t(v_2 - v_1).
\end{align*}
We denote the right-hand side by $\mathcal{I}$ and, using Claim \ref{normineq} for each term above,
we have
\begin{align*}
\|\mathcal{I}\|_{Z^{m-1}}
&\leq C' \|v_2 - v_1\|_{Z^m} (\rho_0 + \epsilon_0),
\end{align*}
where $\rho_0, \epsilon_0$ are chosen to be small enough.
By \cite[Theorem 3.1]{Dafermos1985} and (\ref{eps}), one obtains
\begin{align*}
&\| \tilde{p}_2 - \tilde{p}_1 \|_{Z^m}
\leq CC' \|v_2 - v_1\|_{Z^m} (\rho_0 + \epsilon_0) e^{KT} < CC'{{ e^{KT} }} (1 + 1/(8Ce^{KT}))\rho_0 \|v_2 - v_1\|_{Z^m}.
\end{align*}
Thus, if we choose $\rho_0 \leq \frac{1}{CC'e^{KT}(1 + 1/(8Ce^{KT}))} $, then
\[
\| \mathcal{J}(v_2) - \mathcal{J}(v_1)\|_{Z^m} = \| \tilde{p}_2 - \tilde{p}_1 \|_{Z^m} < \|v_2 - v_1\|_{Z^m},
\]
which shows that $\mathcal{J}$ is a contraction.
This proves that there exists a unique solution $\tilde{p}$ to the problem (\ref{hmNBC}).
Furthermore, by \cite[Theorem 3.1]{Dafermos1985} this solution
satisfies the estimate $\|\tilde{p}\|_{Z^m} \leq 8C e^{KT} \epsilon_0$.
Therefore, we have proved the following proposition.
\begin{pp}
Let $f \in C^{m+1} ([0,T] \times \partial \Omega)$ with $m \geq 5$.
Suppose $f = \partial_t f = 0$ at $t=0$.
Then there exists small positive $\epsilon_0$ such that for any
$\|f\|_{C^{m+1} ([0,T] \times \partial \Omega)} \leq \epsilon_0$, we can find a unique solution
\[
p \in \bigcap_{k=0}^m C^k([0, T]; H_{m-k}(\Omega))
\]
to the boundary value problem (\ref{eq_problem})
with
$b(x) \in C^\infty(M; T^*M)$, $h(x) \in C^\infty(M)$, and $\beta_{j+1}(x) \in C^\infty(M)$ for $j \geq 1$.
Moreover, $p$ satisfies the estimate
\[
\|{p}\|_{Z^m} \leq C \|f\|_{C^{m+1}([0,T]\times \partial \Omega)}
\]
for some $C>0$ independent of $f$.
\end{pp}
|
{
"arxiv_id": "2302.14211",
"language": "en",
"timestamp": "2023-03-01T02:05:20",
"url": "https://arxiv.org/abs/2302.14211",
"yymm": "2302"
} | \section{\label{sec:intro}Introduction}
Since the beginning of quantum mechanics, potentials which admit analytical solutions
have played a key role in providing insight into phenomena at the quantum scale.
Right after them come the problems that are not analytically solvable,
which can only be accessed via approximation methods. One of the simplest non-analytically
solvable problems is the quartic oscillator including both quadratic and
quartic terms \cite{Lai,REID1970183,doi:10.1063/1.531962,Jing,Turbiner_delvalle,Turbiner_book}. In particular, for a negative quadratic term added to a positive quartic term, we obtain the
double well with a symmetric Mexican-hat form, which provides one of the simplest examples
of symmetry breaking \cite{Joger,DELABAERE1997180,meier}. The double well problem offers a good model to test
theories of tunneling and to explore asymptotic approximations \cite{fujimura,Joger,meier}.
The ground and lowest states are well established and can be obtained to any desired accuracy \cite{Fernandez2017}
by several approximation methods, even though some of them show faster
convergence than others (see the discussion in \cite{Okun,delvalle_comment}, for instance).
Less studied is the regime of highly excited states in the spectrum.
Owing to its simplicity, the quartic potential also offers a good chance
to explore the quantum-classical correspondence \cite{Plata_1992}. For example, in \cite{Novaes_2003} a correspondence between
the Husimi function and the classical trajectory for high quantum numbers was found and it was concluded that the zero point energy
is relevant when dealing with low energy states. Similar correspondences have been observed for many-body models between the trajectory of the semiclassical approximation and the Husimi functions \cite{Aguiar_1991,FURUYA1992313,NaderPRE2021,Arranz1,PhysRevA.88.043835}.
Apart from the Husimi functions and trajectories, classical correspondence can also be observed in the density of states \cite{PhysRevA.89.032101,STRANSKY201473,LUNAACOSTA2000192,Puebla}. However, this can be achieved only if highly excited states are computed to sufficient accuracy.
Comparing quantum and classical analyses is especially valuable if some signatures of chaotic behavior persist in the quantum analogue. The out-of-time-ordered correlators (OTOC), which quantify the degree of noncommutativity in
time between two operators, have been widely used as a common tool to investigate chaos in quantum systems \cite{Maldacena2016,Wang,Roberts2015,FAN2017707,Luitz,Herrera,Izrailev,Yan_2019}. In general, the exponential growth rate of the OTOC is related to positive Lyapunov exponents \cite{Chavez2019}, which is interpreted as high sensitivity to initial conditions, i.e., connected to the notion of chaos. However, the OTOC can also grow exponentially fast in one-degree-of-freedom quantum systems that are not globally chaotic but are critical \cite{Hummel,Chavez2019}. This is the case for the quartic double well potential, which exhibits periodic trajectories but contains a critical energy at the local maximum.
In this manuscript we test different approximation methods of quantum mechanics in order to choose the most suitable option for accessing highly excited states, and we compare them to a semiclassical method (EBK). Then we use the complete spectrum below the critical energy to observe the classical correspondence in the density of states when $\hbar\to0$. Finally, we identify signatures of the positive Lyapunov exponent in the spectrum.
\section{Generalities}
The Hamiltonian we are considering is the following
\begin{equation}
\label{Ham}
H=\frac{p^2}{2m}+a x^2+b x^4\,,
\end{equation}
where $p$ represents the momentum, $x$ the coordinate, and $m$ the mass which, for simplicity, will be set from now on to unity, $m=1$. In the quantum case, the variables $x$ and $p$ are promoted to operators $x\to\hat{x}$ and $p\to \hat{p}=-i\hbar\frac{d}{dx}\,$.
The potential is plotted in Fig. \ref{pot} for the particular constants $a=-10$ and $b=1$. It shows degenerate minima at $x=\pm\sqrt{\frac{-a}{2b}}=\pm\sqrt{5}$ and a local maximum at the origin $x=0$, which represents an unstable equilibrium point at the critical energy $E_c=0$.
\begin{figure}
\centerline{\includegraphics[width=0.6\textwidth]{Figures/FormPot.pdf}}
\caption{\label{pot} Double well potential in (\ref{Ham}) with constants $a=-10$, $b=1$.}
\end{figure}
From a classical point of view, we can obtain the period of the orbits by considering action-angle coordinates
$$
J=\oint p dx= 2\int_{x_1}^{x_2} p dx= 2\int_{x_1}^{x_2} \sqrt{2 } \sqrt{E-V(x)}dx,
$$
where $x_i$ ($i=1,2$) are the classical turning points. The frequency of the oscillation of the coordinates
is given by
$$\nu=\frac{1}{T}=\frac{\partial E}{\partial J},$$
and therefore the period for a particular value of the energy $E$ can be obtained by the following integral
\begin{equation}
\label{Tclasico}
T=\frac{\partial J}{\partial E}=\int_{x_1}^{x_2} \frac{\sqrt{2}}{\sqrt{E-V(x)}}dx\,.
\end{equation}
In Fig. \ref{periodo} the classical period $T$ is plotted as a function of the energy. The divergence corresponds to the critical energy $E_c=0$, associated with the unstable equilibrium point at $x=0$; physically, this implies that particles in the double well potential take an infinite time to reach the unstable equilibrium point.
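The period integral (\ref{Tclasico}) is straightforward to evaluate numerically. The following is a minimal sketch (our own illustration, assuming numpy and scipy are available); the substitution $x = x_1 + (x_2 - x_1)\sin^2\theta$ absorbs the inverse-square-root singularities at the turning points of the right-hand well, valid for $-25 < E < 0$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

a, b = -10.0, 1.0
V = lambda x: a*x**2 + b*x**4

def period(E):                       # one well, -25 < E < 0
    s = np.sqrt(25.0 + E)            # from E = a x^2 + b x^4
    x1, x2 = np.sqrt(5.0 - s), np.sqrt(5.0 + s)
    def f(th):                       # x = x1 + (x2 - x1) sin^2(th)
        x = x1 + (x2 - x1)*np.sin(th)**2
        return np.sqrt(2.0)/np.sqrt(E - V(x)) * (x2 - x1)*np.sin(2.0*th)
    return quad(f, 0.0, np.pi/2)[0]

for E in (-24.0, -10.0, -1.0, -0.01):
    print(E, period(E))              # T grows as E approaches E_c = 0
\end{verbatim}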
In quantum mechanics, since the potential grows to infinity for $x\rightarrow \pm\infty$, there are infinitely many bound states; however, the number of states with energy below the critical $E_c=0$ is finite. The Hamiltonian commutes with the parity operator, $\hat{P}\psi(x)=\psi(-x)$; consequently, the energy eigenfunctions are symmetric ($P=1$) or anti-symmetric ($P=-1$), $\hat{P}\psi_k^\pm(x)=\pm\psi_k^{\pm}(x)$.
Below the critical energy, the eigenstates organize in pairs ($P=\pm$) of quasi-degenerate levels. The probability of tunneling between the two wells diminishes together with the energy gap between these quasi-degenerate pairs \cite{Bhatta1996,Yuste1993}, and it is weakest for states at the bottom of the wells.
The energy gap between parity partners increases as their energy approaches the top of the double well, and so does the probability of tunneling. The energy gap between the lowest levels has been studied to a certain depth \cite{Zhou,Park} for different separations between the two potential minima. This type of solution to the Schr\"odinger equation leads to the concept of instantons \cite{SHURYAK1988621,doi:10.1063/1.526205,Van}.
Within the approach of quantum mechanics, the frequency, and therefore the period, of the quantum dynamics inside either well in a certain energy region is given by the energy difference of consecutive levels with the same parity as
\begin{equation}
E_{k+1}^{\pm}-E_{k}^{\pm} = \hbar \omega = \frac{2 \pi \hbar}{T}\,.
\end{equation}
By defining the energy density of states of a given parity as $\rho^{\pm}(\bar{E}) = 1/(E_{k+1}^\pm-E_{k}^{\pm})$, where $\bar{E}=(E_{k+1}^\pm+E_{k}^{\pm})/2$, the period as a function of the energy is given by
\begin{equation}
\label{Tcuantico}
T= 2\pi \hbar \rho^\pm(\bar{E})\,.
\end{equation}
The total density of states, taking into account the quasi double degeneracy of energy-level pairs with different parity is $\rho(\bar{E})=2\rho^\pm(\bar{E})$, therefore
\begin{equation}
\label{densityQ}
2 T= 2\pi \hbar \rho(\bar{E}),\ \ \ \ \ \ \ \text{(for $E<E_c=0$).}
\end{equation}
For states with energy above the double well, $E>E_c=0$, the double quasi-degeneracy is broken and the period and density of states are related as follows
\begin{equation}
T= 2\pi \hbar \rho(\bar{E}),\ \ \ \ \ \ \ \text{(for $E>E_c=0$),}
\end{equation}
where $\rho(\bar{E})$ is the total density of states given by
$\rho(\bar{E}) = 1/(E_{k+1}-E_{k})$ with $E_k$ the energy of levels irrespective of their parity.
\begin{figure}
\centerline{\includegraphics[width=0.7\textwidth]{Figures/per.pdf}}
\caption{\label{periodo} Classical period $T$ as a function of the energy $E$. In order to obtain a symmetric curve around the critical energy $E_c=0$, the period for trajectories with $E<0$ was multiplied by a factor~$2$. }
\end{figure}
Following Bohr's correspondence principle, the results from quantum mechanics, as $\hbar\to 0$, should reproduce those from classical mechanics, where discrete energy levels or tunneling do not exist. However, as $\hbar$ decreases, the number of bound states below the critical energy increases as $1/\hbar$, even though the potential remains fixed. In this case, large matrices are needed in order to obtain the spectrum below the critical energy. Accelerating the convergence of the energy levels is therefore a priority in order to access highly excited states close to the critical energy.
In the following section we briefly describe the approximation methods we used to obtain the spectrum from the ground state up to the critical energy $E_c=0$.
\section{Methods}
Within the framework of variational methods, the simplest choice
for building the trial function is to consider a linear superposition in a certain complete and orthogonal basis
$$\Psi(x)=\sum_{k=1}^Nc_k\psi_k(x)\,,$$
where $N$ is the size of the basis set and $\psi_k$ are the basis functions.
In the following subsections we briefly describe the
different bases.
\subsection{Sinc Method}
Within the Sinc method, the wavefunction is a linear superposition of sinc functions
\begin{equation}
\label{sinc}
\psi_k(\Omega,x)=\frac{\sin(\pi(x-k\Omega)/\Omega)}{\pi(x-k\Omega)/\Omega},
\end{equation}
where $\Omega$ is the spacing between the maxima of neighbouring sinc functions and $k$ is an
integer index which controls the location of such maxima and takes values from $-k_{max}$ to $k_{max}$.
The matrix elements of the Hamiltonian evaluated in the set of sinc functions are
\begin{eqnarray}
\label{matrixSinc}
V_{kk^\prime}&=&V(k\Omega)\delta_{kk^\prime} \nonumber \\
T_{kk}&=& \frac{\pi^2}{6\Omega^2} \nonumber \\
T_{k\neq k^\prime}&=& \frac{(-1)^{k^\prime-k}}{\Omega^2(k^\prime-k)^2}\,.
\end{eqnarray}
The spacing parameter $\Omega$ is treated as a variational parameter and its optimal value is obtained by solving
$\frac{d}{d\Omega} {\rm Tr}[\hat{H}]=0$. The size of the matrix representation is $N\times N$ where
$N=2k_{max}+1\,$ is the number of sinc functions. For a detailed description we refer to \cite{Amore_2006}.
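As an illustration of how (\ref{matrixSinc}) is used in practice, here is a minimal numerical sketch (our own illustration, assuming numpy; we set $\hbar = m = 1$ and use a fixed spacing $\Omega$ rather than the optimized one):
\begin{verbatim}
import numpy as np

a, b = -10.0, 1.0
kmax, Omega = 300, 0.05
k = np.arange(-kmax, kmax + 1)           # N = 2*kmax + 1 basis functions
x = k * Omega                            # mesh points

dk = np.subtract.outer(k, k)             # k - k'
with np.errstate(divide='ignore'):
    T = (-1.0)**dk / (Omega**2 * dk**2)  # off-diagonal kinetic elements
np.fill_diagonal(T, np.pi**2 / (6.0 * Omega**2))

H = T + np.diag(a * x**2 + b * x**4)     # the potential is diagonal
E = np.linalg.eigvalsh(H)
print(E[:6])                             # lowest quasi-degenerate doublets
\end{verbatim}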
\subsection{Lagrange Mesh Method}
In the LMM, the wave function is composed of a superposition of Lagrange functions which satisfy the Lagrange condition
$\psi_k(x_{k^\prime})=\lambda_k^{-1/2} \delta_{kk^\prime}\,,$ thus facilitating the integration of the matrix elements using the Gauss quadrature.
In particular for the Hermite mesh, the Lagrange functions are
$$\psi_k(x)=\frac{(-1)^{N-k}}{(2h_N)^{1/2}}\frac{H_N(x)}{(x-x_k)}e^{-x^2/2}\,,$$
where $H_N$ are the Hermite polynomials, $x_k$ the zeroes of $H_N$, $N$ the size of the mesh, and $h_N=2^NN!\sqrt{\pi}$.
The matrix elements for the potential and the kinetic terms are
\begin{eqnarray}
V_{kk^\prime}&=&V(x_k)\delta_{kk^\prime}\,,\nonumber\\
T_{kk}&=&\frac{1}{3}(2N+1-x_k^2)\,, \nonumber \\
T_{k\neq k^\prime}&=&(-1)^{k-k^\prime}\frac{2}{(x_k-x_k^\prime)^2}\,.
\end{eqnarray}
For details we refer to \cite{Amore_2006}. For numerical calculations one can use the Mathematica package~\cite{delValleLMM}.
\subsection{Hermite basis}
In this method the wave function is considered as a linear superposition
of eigenfunctions of the harmonic oscillator, i.e., Hermite polynomials weighted by a Gaussian function \cite{Verguilla}.
The simplest way to construct the matrix representation in this basis is to rewrite the Hamiltonian
(\ref{Ham}) in terms of the usual ladder operators $\hat{a}= \frac{1}{\sqrt{2\hbar}}(\hat{x}+i\hat{p})$ and $\hat{a}^\dagger=\frac{1}{\sqrt{2\hbar}}(\hat{x}-i\hat{p})$
acting on eigenstates of the harmonic oscillator $|n\rangle$. However, the convergence rate of the energy in the truncated basis can be accelerated by introducing a scaling $x^\prime\to \Omega x$ \cite{AMORE200587}, which in turn transforms the annihilation and creation operators as $$\hat{a}_\Omega= \frac{1}{\sqrt{2\hbar}}\left(\frac{\hat{x}}{\Omega}+i\Omega\hat{p}\right)$$ and $$\hat{a}^\dagger_\Omega=\frac{1}{\sqrt{2\hbar}}\left(\frac{\hat{x}}{\Omega}-i\Omega\hat{p}\right),$$ respectively. The scaling parameter is optimized by $\frac{d}{d \Omega}{\rm Tr} \hat{H}=0$.
In terms of scaled operators, the Hamiltonian reads
$$\hat{H}=-\frac{\hbar}{4\Omega^2}(\hat{a}^\dagger_\Omega-\hat{a}_\Omega)^2+b\left(\frac{\hbar\Omega^2}{2}\right)^2(\hat{a}^\dagger_\Omega+\hat{a}_\Omega)^4+\frac{a\hbar\Omega^2}{2}(\hat{a}^\dagger_\Omega+\hat{a}_\Omega)^2\,.$$
Thus the matrix representation of the Hamiltonian is banded, with non-zero elements
\begin{eqnarray}
\label{HNN}
H_{n,n}&=& \frac{b\Omega^4}{4}(6n^2 + 6n + 3)\hbar^2+\left(\frac{a\Omega^2}{2} + \frac{1}{4\Omega^2}\right)(2n + 1)\hbar\nonumber \\
H_{n-2,n}&=&\sqrt{n(n-1)}\left( \frac{b\Omega^4}{4} (4n - 2)\hbar^2 +\frac{a\Omega^2}{2 }\hbar -\frac{\hbar}{4\Omega^2}\right) \nonumber \\
H_{n-4,n}&=&\frac{b\Omega^4}{4 }\hbar^2\sqrt{n(n-1)(n-2)(n-3)}\,,\nonumber \\
\end{eqnarray}
and their corresponding transposed elements. The basis is truncated at $n_{max}$, and the size of the matrix is $N\times N$ with $N=n_{max}+1$.
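As a quick consistency check of (\ref{HNN}), the farthest off-diagonal element follows from a one-line ladder-operator computation: the only contribution of $(\hat{a}^\dagger_\Omega+\hat{a}_\Omega)^4$ connecting $|n\rangle$ to $|n-4\rangle$ is $\hat{a}_\Omega^4$, and
\[
\langle n-4 |\, \hat{a}_\Omega^4 \,| n \rangle = \sqrt{n(n-1)(n-2)(n-3)}\,,
\]
which, multiplied by the prefactor $b\left(\frac{\hbar\Omega^2}{2}\right)^2 = \frac{b\Omega^4}{4}\hbar^2$, reproduces $H_{n-4,n}$.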
\subsection{Einstein-Brillouin-Keller}
This method, named after Einstein, Brillouin, and Keller \cite{KELLER1958180} (EBK), belongs to the semiclassical methods, since it is an improvement on the Bohr-Sommerfeld quantization. The quantization rule is the following:
\begin{equation}
J = \oint p dx = 2\pi \hbar \left( n + \frac{\mu}{4} + \frac{d}{2} \right)\,,
\end{equation}
\\
where $J$ is the action variable, $n$ is a non-negative integer (the quantum number), and $\mu$ and $d$ are the Maslov indices:
$\mu$ corresponds to the number of turning points in the classical trajectory ($\mu=2$ in this case) and $d$ corresponds to the number of reflections with a hard wall (absent in this case, $d=0$).
Since EBK is a semiclassical method, it is expected to correctly reproduce the spectrum for large quantum numbers $n$ \cite{Hruska} and is therefore a good approximation for accessing highly excited states when $\hbar\to 0$.
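The quantization rule is also simple to implement numerically. The following is a minimal sketch (our own illustration, assuming numpy and scipy); we quantize within one well of the potential (\ref{Ham}) with $a=-10$, $b=1$, valid for $-25 < E < 0$, so each EBK level corresponds to a quasi-degenerate parity doublet:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

hbar, a, b = 0.1, -10.0, 1.0
V = lambda x: a*x**2 + b*x**4

def action(E):                       # J(E) over one well
    s = np.sqrt(25.0 + E)
    x1, x2 = np.sqrt(5.0 - s), np.sqrt(5.0 + s)
    f = lambda x: np.sqrt(2.0*max(E - V(x), 0.0))
    return 2.0*quad(f, x1, x2)[0]

def ebk(n):                          # solve J(E) = 2 pi hbar (n + 1/2)
    target = 2.0*np.pi*hbar*(n + 0.5)
    return brentq(lambda E: action(E) - target, -25.0 + 1e-10, -1e-10)

print([round(ebk(n), 6) for n in range(5)])
\end{verbatim}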
\section{\label{wf} Results}
We performed numerical diagonalizations of the Hamiltonian (\ref{Ham}) for different values of $\hbar$ in order to approach the classical limit and identify some signatures of classical instability in the spectrum of the quartic oscillator. Our results are organized in four subsections.
For simplicity we focus our attention on a
quartic potential with constant values $a=-10$ and $b=1$.
\subsection{Convergence}
\begin{figure*}
\begin{tabular}{cc}
\hline \\
$(\hbar=1,n=1)$ & $(\hbar=1,n=5)$\\
\includegraphics[width=8cm, height=4cm,angle=0]{Figures/EvsN1.pdf} &
\includegraphics[width=8cm, height=4cm,angle=0]{Figures/EvsN6.pdf} \\
$(\hbar=\frac{1}{10},n=20)$ & $(\hbar=\frac{1}{10},n=50)$\\
\includegraphics[width=8cm, height=4cm,angle=0]{Figures/EvsN2.pdf} &
\includegraphics[width=8cm, height=4cm,angle=0]{Figures/EvsN3.pdf} \\
$(\hbar=\frac{1}{50},n=100)$ & $(\hbar=\frac{1}{100},n=200)$\\
\includegraphics[width=8cm, height=4cm,angle=0]{Figures/EvsN4.pdf} &
\includegraphics[width=8cm, height=4cm,angle=0]{Figures/EvsN5.pdf} \\
\hline
\end{tabular}
\caption{\label{convergence}
Convergence of the energy $\Delta E$ as a function of the size of the basis $N$ for the following basis: Sinc (blue), LMM (yellow), Hermite (green) and Hermite with scaled coordinate (red).
}
\end{figure*}
We tested the convergence of the energy for different levels and different values of $\hbar$ as a function of the basis size $N$ for the following bases: (i) Sinc, (ii) LMM, (iii) Hermite functions, and (iv) Hermite functions with scaled coordinate. We measure the rate of convergence by computing the energy difference $\Delta E_n{(N)}=|E_n(N)-E^{ref}_n|$, where $n$ is the quantum number, $N$ the size of the basis, and $E^{ref}_n$ a reference energy. We use as reference the results obtained by the corresponding method with basis size $N=2000$.
Since the three methods involve numerical diagonalization, we use the routine \textit{diasym} of LAPACK and perform the diagonalization in \textit{Fortran 90}. Obtaining the matrix representation does not imply a major difference in computing time for any method. As for the optimization of the scaling parameter via $\frac{d}{d \Omega}{\rm Tr}[\hat{H}]=0$, it is necessary to solve a third-degree (Hermite) or sixth-degree (Sinc) equation in $\Omega$. This process does not add significant computational time, even for large basis sizes. However, the LMM requires obtaining the $N$ roots of the Hermite polynomials to build the mesh, thus increasing considerably the computing time for large values of $N$.
In Fig. \ref{convergence} we plot $\Delta E_n(N)$ as a function of the basis size for the four bases. The fastest convergence is always obtained with the Sinc basis, while the slowest is for LMM. Note that from $\hbar=\frac{1}{10}$ onward the convergence of the Lagrange-mesh basis is very slow. This is mainly because the mesh points must be redistributed by introducing a scaling parameter (see \cite{BAYE20151}) treated as a variational parameter. In LMM there is no closed expression for its optimal value; instead, a numerical minimization must be performed. Since the optimal value of the scaling parameter depends strongly on the basis size $N$ and the quantum number $n$, this analysis is impractical for highly excited states. On the other hand, the optimization of the scaling parameter $\Omega$, corresponding to the spacing between maxima within the Sinc basis and to the squeezing within the Hermite basis, accelerates the convergence via the closed expression $\frac{d}{d\Omega}{\rm Tr}\hat{H}=0$.
Finally, we compared the accuracy of the Sinc method to the semiclassical EBK method. It can be seen from Fig. \ref{SincvsEBK}
that EBK reproduces at least six correct decimal digits in the entire region below the critical energy $E_c=0$ for $\hbar=\frac{1}{2000}$.
\begin{figure}
\centerline{\includegraphics[width=0.6\textwidth]{Figures/PRESS.pdf}}
\caption{\label{SincvsEBK} Energy difference between the Sinc method and EBK as a function of the energy of state $n$ (obtained with the Sinc basis), for $\hbar=1/2000$.}
\end{figure}
\subsection{Quantum-classical correspondence}
As $\hbar\to0$, the energy difference between discrete states is expected to vanish; therefore, more bound states fit below the critical energy even though the potential remains fixed. The number of states below the critical energy can be counted using the EBK results and corroborated with the Sinc method. Table \ref{tablahbar} presents the number of bound states with energy below the critical one for different values of $\hbar$.
Whereas EBK has no limitation on the number of bound states, the quantum methods are limited by the size of the matrix to diagonalize.
The smallest value reached in this work is $\hbar=\frac{1}{2000}$, for which 18980 bound states lie below the critical energy.
In Fig. \ref{Correspondence} we plot the continuous classical period, Eq. (\ref{Tclasico}), along with the discrete quantum density, Eq. (\ref{densityQ}), for different values of $\hbar$. The discrete quantum points are distributed along the classical continuous period, and the emergence of the divergence at the critical energy becomes evident as $\hbar\to 0$. Below the critical energy, $E<E_c=0$, according to Fig. \ref{SincvsEBK} the difference between the energies obtained by diagonalization in the Sinc basis and by EBK is of order $10^{-6}$ energy units; hence in Fig. \ref{Correspondence} the EBK values (green dots) lie behind the results from the Sinc method (purple dots), and only a few of them can be appreciated in the region $E<0$.
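The comparison in Fig. \ref{Correspondence} can be reproduced schematically as follows, a Python sketch reusing $V(x)$ and the diagonalization from the previous sketches (the density of states is estimated from central differences of the spectrum, and only the case $E>0$ is shown for the classical period).
\begin{verbatim}
from scipy.integrate import quad

def period(E):          # classical period for E > 0 (orbit spans both wells)
    xt = np.sqrt((-a + np.sqrt(a * a + 4 * b * E)) / (2 * b))
    v = lambda x: 1.0 / np.sqrt(2.0 * (E - V(x)))
    return 2.0 * quad(v, -xt, xt)[0]   # integrable 1/sqrt endpoint singularities

# quantum side: 2 pi hbar rho(E_n), rho from central level-spacing differences
levels = np.linalg.eigvalsh(hamiltonian(1500, Omega))
density = 2.0 * np.pi * hbar * 2.0 / (levels[2:] - levels[:-2])
\end{verbatim}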
\begin{widetext}
\begin{center}
\begin{table}
\begin{center}
\begin{tabular}{c|ccccccccccccc}
\hline
$\hbar$ & 1 & & $\frac{1}{10}$ & & $\frac{1}{100}$ & &$\frac{1}{200}$ & & $\frac{1}{500}$ & & $\frac{1}{1000}$ & &$\frac{1}{2000}$ \\
No. states & 10 & & 94 & & 950 & & 1898 & & 4746 & & 9490 & & 18980 \\
\hline \end{tabular}
\caption{\label{tablahbar} Number of states below the critical energy $E_c$ for different values of $\hbar$.
}
\end{center}
\end{table}
\end{center}
\end{widetext}
\begin{figure*}
\begin{tabular}{ccc}
\hline \\
$\hbar=\frac{1}{10}$ & & $\hbar=\frac{1}{100}$ \\
\includegraphics[width=0.45\textwidth]{Figures/trifuerza10.pdf}
& & \includegraphics[width=0.45\textwidth]{Figures/trifuerza100.pdf} \\
$\hbar=\frac{1}{1000}$ & & $\hbar=\frac{1}{2000}$ \\
\includegraphics[width=0.45\textwidth]{Figures/trifuerza1000.pdf}
& & \includegraphics[width=0.45\textwidth]{Figures/trifuerza2000.pdf} \\
\hline
\end{tabular}
\caption{\label{Correspondence}
Dots depict the energy density of states, $2\pi\hbar\rho$, as a function of the energy for different values of $\hbar$. The solid red line is the classical period for $E>0$ and twice the period for $E<0$. Purple dots indicate the quantum density of states using energy levels from the Sinc method, and green dots using levels from EBK.
Below the critical energy, green dots lie behind the purple dots and only a few of them can be appreciated.
}
\end{figure*}
\subsection{Tunneling decay}
\begin{figure*}
\begin{tabular}{cc}
\hline \\
(a)& (b)\\
\includegraphics[width=8cm, height=5cm,angle=0]{Figures/plotGapE.pdf} &
\includegraphics[width=8cm, height=5cm,angle=0]{Figures/plotTE.pdf} \\
\multicolumn{2}{c}{(c)}\\
\multicolumn{2}{c}{\includegraphics[width=8cm, height=5cm,angle=0]{Figures/plotGapT.pdf}} \\
\hline
\end{tabular}
\caption{\label{GapT}
(a) Energy gap $\Delta E$ between pairs of quasi-degenerate energy levels of different parity and (b) transmission coefficient $\mathcal{T}$ as a function of the energy average $\bar{E}$ for different values of $\hbar$. In panel (c), the relation between $\mathcal{T}$ and $\Delta E$ is shown on a $\log-\log$ scale. The same slope for the different lines implies that these quantities are related as $\mathcal{T}\propto (\Delta E)^\alpha$, while the different intercepts lead to $\mathcal{T}\propto \hbar^{-\alpha}$, where $\alpha=5/2$.
}
\end{figure*}
One of the most intriguing properties of quantum theory is tunneling through the region of the double well that is classically forbidden. The energy gap between quasi-degenerate pairs of different parity is a consequence of the tunneling phenomenon, which appears in the energy region inside the double well. It is well known that tunneling is less likely to be observed if the wells are far apart \cite{Zhou}.
Here we keep the potential fixed, but the tunneling effect should vanish when approaching the classical limit $\hbar\to0$. Our highly accurate results, provided by the Sinc method, give us access to the energy gap between two consecutive energy levels of different parity. We can also use these highly accurate energies to estimate the transmission coefficient within the WKB approximation,
$$\mathcal{T}=\exp\left( -\frac{2\sqrt{2}}{\hbar}\int_{x_1}^{x_2} \sqrt{|E-V(x^\prime)|}\,dx^\prime\right)\,,$$
where $x_1$ and $x_2$ are the classical turning points.
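A direct numerical evaluation of this coefficient, under the same assumptions on $V(x)$ as in the previous sketches, reads:
\begin{verbatim}
def transmission(E):     # WKB tunneling through the central barrier, E < 0
    disc = np.sqrt(a * a + 4 * b * E)
    x1 = np.sqrt((-a - disc) / (2 * b))   # barrier region is |x| < x1
    S = quad(lambda x: np.sqrt(abs(E - V(x))), -x1, x1)[0]
    return np.exp(-2.0 * np.sqrt(2.0) / hbar * S)
\end{verbatim}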
In Fig. \ref{GapT} we plot the energy gap of parity-partner energy levels $\Delta E$ [panel (a)] and the transmission coefficient $\mathcal{T}$ [panel (b)] as functions of their energy average $\bar{E}$ for different values of $\hbar$. Both the energy gap and the transmission coefficient decrease exponentially as the energy becomes more negative and the levels go deeper into the double well. This can be understood because the energy barrier is narrower close to the local maximum at $x=0$, which corresponds to the critical energy $E=0$. As $\hbar$ decreases, the energy gap and the transmission coefficient remain detectable only for energies close to the critical energy $E_c=0$, indicating that the tunneling phenomenon can only be observed close to the critical energy when approaching the classical limit. Given the linear behavior of the transmission coefficient versus the energy gap of parity partners on the log-log scale of Fig. \ref{GapT}(c), we establish that the tunneling increases with the energy gap as ${\mathcal T}\propto(\Delta E)^\alpha$ and decays as ${\mathcal T}\propto\hbar^{-\alpha}$, where $\alpha=5/2$.
\subsection{Manifestation of the positive Lyapunov exponent}
\begin{figure}
\centerline{\includegraphics[width=0.6\textwidth]{Figures/ajuste.pdf}}
\caption{\label{PeriodvsLogE} Period $2\pi\hbar\rho(E)$ as a function of the logarithm of the energy $E$ for $\hbar=\frac{1}{2000}$. The red line corresponds to the linear fit and the points to the numerical data.}
\end{figure}
From a classical treatment of the Hamiltonian (\ref{Ham}) it can easily be shown that the positive Lyapunov exponent, which characterizes the critical behavior of the system at the stationary point $(x_s,p_s)=(0,0)$ $(\dot{x}=0,\dot{p}=0)$, is (see Appendix~\ref{LyapunovAp} and \cite{PilatowskyChavez})
\begin{equation*}
\label{Lyapunov}
\lambda=\sqrt{-2 a}=\sqrt{20}\approx 4.47214\,.
\end{equation*}
On the other hand, it can be shown that the density of states, close to the critical energy, behaves as
\begin{eqnarray}
\label{linearrelation}
2\pi\hbar\rho(E)&=&
-\frac{2}{\lambda} \log |E| - \frac{4}{\lambda} \log\left( \frac{\sqrt{b}}{-4 a} \right)\\
& =&-\frac{2}{\lambda} \log |E| - \frac{4}{\lambda} \log\left( \frac{1}{40} \right)\,,\nonumber
\end{eqnarray}
where $\lambda$ is the Lyapunov exponent. In Appendix \ref{FormulaLogaritmica} we discuss the derivation of (\ref{linearrelation}) in detail.
Thus, the relation between the density of states and the logarithm of the energy is linear and the Lyapunov exponent $\lambda$ governs the slope in agreement with the results from Ref.\cite{Richter2019}.
In Fig. \ref{PeriodvsLogE} the density of states, for energies below but very close to $E_c=0$, is plotted against the logarithm of the energy for the smallest value $\hbar=1/2000$. The data fit a linear function with negative slope $\beta=-0.4192$, close to the value $\beta=-\frac{2}{\lambda}= -0.447214$ implied by Eq. (\ref{linearrelation}) in terms of the Lyapunov exponent. This indicates that the positive Lyapunov exponent governs the behavior of the quantum density of states close to the critical energy. We expect the deviation of the slope to vanish for values of the Planck constant $\hbar$ smaller than those reached in this work.
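The fit itself is elementary; e.g., with the arrays from the earlier sketches one may write (illustrative only; the window boundaries are arbitrary):
\begin{verbatim}
centers = levels[1:-1]                      # energies matching the density array
sel = (centers > -1.0) & (centers < -1e-3)  # narrow window just below E_c = 0
beta, intercept = np.polyfit(np.log(np.abs(centers[sel])), density[sel], 1)
print(beta, -2.0 / np.sqrt(20.0))           # compare with -2/lambda = -0.447214
\end{verbatim}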
\section{Conclusions}
The Sinc method accelerates the rate of convergence of the energy levels, allowing us to explore highly excited states close to the critical energy when approaching the classical limit. The optimization of the spacing parameter $\Omega$ plays an essential role in increasing the rate of convergence. The highly accurate results of the Sinc method allow us to explore how the tunneling phenomenon decays when approaching the classical limit and how it can be detected only close to the critical energy.
On the other hand, the semiclassical EBK method provides at least six correct decimal digits in the entire energy region of interest in this work, thus facilitating the analysis of the density of states.
We observed that the highly excited states converge to the critical energy, forming a logarithmic divergence in the density of states. The quantum results approach the classical ones as $\hbar\to0$ and successfully reproduce the divergence of the classical period at the stationary point for the smallest value considered in this work, $\hbar=1/2000$.
Furthermore, we found a manifestation of the positive Lyapunov exponent in the density of states of the quartic oscillator. We therefore conclude that the Lyapunov exponent governs the logarithmically divergent behavior of the density of states for energies close to the critical one, $E_c=0$.
\section{Acknowledgments}
DJN thanks the kind hospitality of the Brown University Department of Chemistry. This project was partially supported by Fulbright COMEXUS Project NO. P000003405.
\section{Introduction}
Many core techniques for artificial intelligence (AI), especially machine learning (ML), are available today and can be invoked easily via open-source software \cite{gulli2017deep, pedregosa2011scikit} or cloud services \cite{elger2020ai, salvaris2018microsoft}. However, connecting them to software products, training on appropriate data, and driving business value remains challenging.
The McKinsey State of AI 2022 report \cite{mckinsey2022} notes \cite{vb2022} that industry AI/ML adoption more than doubled since 2017 and then leveled off, with common concerns about how to scale up the use of AI and speed up AI development. The report breaks down AI/ML adoption by capabilities and use cases, the latter led by Service Operations Optimization (24\%), Creation of New AI-based Products (20\%), Customer Service Analytics (19\%), Customer Segmentation (19\%), New AI-based Enhancement of Products (18\%), etc. The sum over all categories is roughly 180\%, since respondents could select multiple use cases.
A global survey on AI adoption by 2025 — available in the “CIO vision 2025: Bridging the gap between BI and AI” report by MIT Technology Review \cite{ciovision2025} — reveals several important trends. Among the 600 CIOs and other technology leaders polled, almost all companies represented use AI today (94\%+ in each of seven enterprise categories). Of those, an overwhelming majority (84\%) of companies are not AI-driven and remain in early stages of AI adoption. Perhaps such statistics can be extrapolated to individual large companies as well, where a handful of AI-driven applications are surrounded by numerous AI-light applications. When asked to identify the top priority of their enterprise data strategy for the next three years, 78\% of the respondents chose scaling AI and machine learning use cases to create business value. In particular, a wider adoption of AI for a variety of use cases is seen as mission-critical.
\noindent
{\bf Data-driven software platforms}. The McKinsey State of AI 2022 report notes that “most respondents indicate that hiring for each Al-related role has been difficult in the past year and hasn't become easier over time.” Rather than hire such engineers and scientists for each application, we advocate building data-driven software platforms that orchestrate components, services, and access interfaces to support a variety of external or platform-hosted products by implementing and automating workflows to perform application-specific tasks. In other words, a more efficient use of precious expertise is to increase the accessibility of applied AI by developing software platforms that lower the barrier to entry.
Another benefit of data-driven software platforms is illustrated by a common strategy \cite{molinaro2022privacy} of limiting retention of data and trained ML models to, say, 35 days, which improves data privacy but requires infrastructure for feature lineage tracking and automation for ML model retraining.
\noindent
{\bf ML platforms} \cite{Hermann2017Michelangelo, markov2022looper, soifer2019inference} often support and automate workflows that train ML models on data to perform prediction, estimation, ranking, selection, and other ML tasks. Such applications necessitate regular data collection and retraining of ML models \cite{wu2020deltagrad}, which provides strong motivation for platforms in practice. More frequent retraining is common when data trends change quickly. Trained models must be hosted for batch-mode or real-time inference. Different applications call for different ML model types, and a broad portfolio of auto-configurable components may bring strong results faster than hand-optimizing a small set of components. Choosing between various model types (tree-based, DNN, etc) and combining them encourages automation.
ML platform development is driven by the following tradeoffs:
\begin{enumerate}
\item Customer needs vs. technology availability,
\item Expert support, fine configuration, and optimization for high-end applications vs. automation to reduce cognitive load and engineering effort,
\item System-level development (driven by traditional SW engineering, design and architecture considerations) vs. ML-driven development (initiated and driven by data, model and metric considerations),
\item Reactive short-term efforts vs. proactive long-term plans,
\item Orchestrating numerous point tools into entire workflows.
\end{enumerate}
\noindent
{\bf End-to-end ML platforms} support workflows with a broader scope, including data collection and preparation as well as tracking and optimization of product-impact metrics to drive business value. While relatively new, several industry platforms \cite{markov2022looper, Hermann2017Michelangelo} drive products that are used by billions of users and operate on powerful data sources. Such platforms automate data collection and model retraining, and also support online causal evaluation for products they enable. A/B testing \cite{kohavi2017online, gilotte2018offline} with product metrics is a particularly big step towards evaluating end-decisions driven by ML models instead of only evaluating ML models in terms of closed-form loss function(s) on pre-saved data \cite{gomez2015netflix}, as is common in traditional ML platforms. End-decisions affect product metrics and usually cannot be handled by closed-form functions, and evaluating product impact of multiple alternative changes requires causal product experiments (otherwise, there is no guarantee that correlations captured by ML models have a significant and positive impact on product metrics). A/B testing captures a variety of phenomena \cite{fabijan2017benefits} that are difficult to model and predict, such as network effects.
End-to-end platforms are categorized as specialized and general-purpose \cite{markov2022looper}. Specialized platforms focus on certain application categories and use cases (e.g., image \cite{khan2021machine}, text \cite{chowdhary2020natural}, or speech analysis for harmful content \cite{nassif2019speech}), which allows them to tailor their interfaces to application concepts and product specifics. They may support a handful of high-value customers or a larger number of customers interested in such particular ML capabilities. General-purpose platforms offer broader capabilities and the flexibility of configuration, but often must justify the engineering investment via sufficiently broad adoption during platform development.
\noindent
{\bf Scaling the adoption of end-to-end ML platforms and driving the business value in applications} are the main challenges discussed in this work. Thus, we upgrade end-to-end platforms to {\em self-serve platforms} and describe this new concept in detail in Section \ref{sec:self-serve}. Achieving the {\em self-serve} quality requires pervasive use of AutoML techniques (Section \ref{sec:background}), platform integration, online testing, as well as balancing the five tradeoffs for ML platforms itemized above. We illustrate this approach for two high-volume commercial end-to-end ML platforms and outline future work necessary for an effective upgrade (Sections \ref{sec:details} and \ref{sec:conclusions}). Deployed for several years, today these platforms support millions of AI outputs per second: Looper\xspace is a general-purpose platform, while PEX\xspace is relatively more specialized. Recent platform improvements toward self-serve are discussed in Section \ref{sec:deployment} along with deployment experience.
\section{The landscape of ML automation}
\label{sec:background}
Most of the literature on applied ML focuses on what we call the Kaggle paradigm, where training data and loss functions are provided, and the goal is to train a model that minimizes the loss function when evaluated on hold-out data. To this end, the Kaggle Web site hosts competitions \cite{yang2018deep,puurula2014kaggle} for creating such ML models and determines the winners by loss-function values. Such competitions help evaluate new model architectures and, less often, new optimization algorithms, but do not capture implementation and runtime tradeoffs, where a slight technical win may come at the cost of hand-tuning and customized optimization, not to mention model size, inference latency, etc. To bridge the gap between Kaggle competitions and production deployments, the newer field of MLOps \cite{makinen2021needs} combines ML with DevOps (development operations \cite{leite2019survey}) and focuses on the deployment and maintenance of optimized ML models in applications. A key issue for industry ML deployments is the efficiency and overall capacity of inference \cite{soifer2019deep}. However, most of the published MLOps work \cite{kreuzberger2022machine} is limited to model-based metrics, such as the loss function(s), input and output distributions, etc. We find it convenient to view the Kaggle paradigm broadly as including such MLOps considerations.
\noindent
{\bf AutoML toolboxes}. Within the Kaggle paradigm, automation is needed to optimize hyperparameters \cite{yu2020hyper} for ML models and perform neural architecture search \cite{elsken2019neural} for deep learning models. These tasks can be accomplished using traditional optimization algorithms, novel applications of deep learning to optimization, and specialized methods to optimize deep-learning models (such as training supernets). For practical uses, AutoML techniques \cite{he2021automl} must be implemented using a consistent interface that supports interchangeability, composition, ML pipeline management, and live-product experimentation. AutoML toolboxes that offer such implementations can be illustrated by open-source libraries, such as:
\begin{itemize}
\item \href{https://github.com/microsoft/FLAML}{FLAML} (Microsoft, \cite{Wang_FLAML_A_Fast_2021}) -- a lightweight AutoML library that handles common tasks (model selection, neural architecture search, hyperparameter optimization, model compression);
\item \href{https://github.com/google/vizier}{Vizier} (Google, \cite{oss_vizier}) -- a uniform Python interface for black-box optimization intended for hyperparameter optimization that offers a variety of optimization algorithms;
\item \href{https://ax.dev/docs/why-ax.html}{Ax} (Meta, \cite{botorch}) -- a general ML tool for black-box optimization that allows users to explore large search spaces in a sample-efficient manner for services such as multi-objective neural architecture search, hyperparameter optimization, etc;
\item \href{https://automl.github.io/auto-sklearn/master/}{Auto-sklearn} \cite{feurer-neurips15a, feurer-arxiv20a} -- an automated machine learning toolkit and a drop-in replacement for a scikit-learn estimator;
\item \href{https://github.com/autogluon/autogluon}{AutoGluon} (Amazon, \cite{agtabular}) -- an AutoML library that manages a variety of ML models, with provisions for image, text and tabular data, as well as multimodal data;
\item \href{https://github.com/IBM/lale}{LaLe} (IBM, \cite{baudart_et_al_2021}) -- an AutoML library for semi-automated data science.
\end{itemize}
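To give a flavor of how such toolboxes are invoked, here is a brief sketch following FLAML's documented interface; the dataset and time budget are arbitrary illustrations.
\begin{verbatim}
from flaml import AutoML
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

automl = AutoML()
# searches over model families and hyperparameters within the time budget
automl.fit(X_train=X_tr, y_train=y_tr, task="classification", time_budget=60)
print(automl.best_estimator, accuracy_score(y_te, automl.predict(X_te)))
\end{verbatim}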
\noindent
{\bf A high-level snapshot of the field} can be seen in
\cite{hutter2019automated}
and, more broadly, in the \href{https://automl.cc/call-for-papers/}{Call for Papers} for the 1st International Conference on AutoML (AutoML-Conf 2022) that lists tracks for submitting papers from industry and academia. We give a condensed version:
\begin{itemize}
\item Neural Architecture Search (NAS, \cite{elsken2019neural}) and Hyperparameter Optimization (HPO, \cite{yu2020hyper})
\item Combined Algorithm Selection and Hyperparameter Optimization (CASH)
\item AutoML algorithms \cite{he2021automl}: Bayesian Optimization, Multi-Objective
\item AutoAI (incl. Algorithm Configuration and Selection, \cite{cao2022autoai}) and Meta-Learning \cite{vanschoren2018meta}
\item Automated Data Mining
\item Reproducibility
\item Automated Reinforcement Learning (AutoRL, \cite{parker2022automated})
\item Trustworthy AutoML with respect to fairness, robustness, uncertainty quantification and interpretability \cite{amirian2021two}.
\end{itemize}
\eat{
NAS, HPO, CASH and AutoML Algorithms are well-known lines of work and represent first-generation AutoML. More recently, they have been augmented with inference optimization, such as network quantization, and optimizations sensitive to hardware (GPUs, TPUs). Automatic selection of algorithms and configuration is helpful within the Kaggle paradigm to quickly hone in on promising ML models, but it is also required in ML workflow automation to circumvent the need for extensive ML background and avoid pitfalls. Meta-learning, defined as learning to learn, has largely remained an academic line of research. Automated Data mining includes automated exploratory analysis (as a stand-alone task) and also automated data pre-processing as part of ML workflows, such as outlier removal before model training.
Concerns about reproducibility originate in scientific literature where claimed results are sometimes difficult to reproduce and, in fact, believe. For industry ML applications, there is a surprising number of ML models that cannot be re-trained because some data, code or configurations are missing (or because partial system upgrades made training workflows incompatible). This serious problem can be solved by system-level automation, and is a prerequisite for regular automatic model retraining that ensures model freshness for nonstationary training data as is common in industry applications.
Reinforcement Learning (RL) learns how to map situations to actions/decisions towards maximizing a numerical reward signal. This approach differs from traditional decision-making systems, where an algorithm is often manually crafted to select a decision based on some ML output(s), such as scores computed by supervised learning models. Traditional systems can suffer from biases due to feedback loops caused by training on examples selected by the algorithm. This can be suboptimal as such systems try to make future improvements by only looking at a biased set of logs of interactions that miss a number of counterfactual decisions and their feedback. In end-to-end ML platforms, RL techniques stand as a more natural way of decision-making for optimizing product metrics, as they support optimization over a longer time period – in contrast to traditional ML algorithms which tend to recommend something for a single point in time, e.g., what will result in conversion now, and also they provide the ability to balance multiple objectives via a principal mathematical formula of rewards and punishments. RL systems are also especially adept at optimizing in environments where the impacts of a decision/action are not known immediately – a common case in real-world applications. However, the success of RL techniques is often highly sensitive to design choices in the training process, which may require tedious and error-prone manual tuning. This makes it challenging to use RL for new problems and especially in the context of end-to-end ML Platforms with broader adoption and scale. AutoRL has been primarily motivated by these difficulties in configuring and training RL models in practice, and involves not only standard applications of AutoML but also includes additional challenges unique to RL, that naturally produce a different set of methods (“Automated Reinforcement Learning (AutoRL): A Survey and Open Problems”).
The Trustworthy AutoML rubric combines several very different issues in AutoML. In practice, robustness includes configuration robustness (where choosing good configuration is relatively easy) and robustness to disruptions during real-time serving, such as missing or drifting data. Here both ad hoc system-level and deliberate ML methods are needed. Uncertainty quantification is a theoretically-deep field that enriches predictions and loss functions with additional outputs that quantify uncertainty of predictions and loss function values, e.g., to quantify robustness to noise and disruptions. In high-risk applications, less weight can be put on uncertain predictions. Interpretability in ML can refer to feature importance estimates, and even to entire models, as illustrated in the \ref{https://captum.ai/}{CAPTUM} package, but more importantly can refer to individual decisions driven by ML models, or the distillation of ML models to heuristic policies With respect to explainability, Prof. Ed Lee (Berkeley) \href{https://www.youtube.com/watch?v=Yv13-UPZNGE}{notes} two pitfalls: (1) “explanations” may be too long or too technical for most people to understand, (2) people are very good at creating plausible “backplanations'' that can justify mutually exclusive and even nonsensical decisions, and machines are likely good at this too. Combined, these two pitfalls favor model interpretability rather than explaining individual decisions. They also raise the bar for such work, especially in the context of AutoML rather than one-off human-curated explanations, as we discuss below in the context of specific end-to-end platforms. For more details, see \cite{zytek2022need}.
Fairness quantification, broadly speaking, tracks the impact of relevant sub-populations in training data and the impact on relevant sub-populations during inference. Academic literature distinguishes a number of different notions of fairness, and it is difficult to definitively establish fairness in practice. Several industry studies show that fairness can be improved considerably, and today some notions of fairness are required for certain applications. While much of the work on fairness has been separate from AutoML, fairness automation is required to make it practical. End-to-end ML platforms that incorporate fairness need to evaluate which notion of fairness is being applied, and provide adequate information to clients for assessment of such notions and their impact to end-users.
}
\noindent
{\bf AutoML capabilities in end-to-end ML platforms} are facilitated by AutoML tool-boxes (frameworks). However, in addition to the Kaggle paradigm, these capabilities also help manage the ML lifecycle from data collection to product impact tracking and are invoked as services. When integrated with an ML platform and usable with no ML coding, they form platform-level AutoML solutions. Priorities for AutoML services can be application-specific, and also emphasize integration in addition to component quality. Before we discuss specific end-to-end systems, we note that \cite{breck2017ml} illustrates the significance of automation in production systems. This paper presents “28 specific tests and monitoring needs, drawn from experience with a wide range of production ML systems to help quantify these issues and present an easy to follow road-map to improve production readiness and pay down ML technical debt.” Those tests require proper formalization of interfaces, explicit capture of metrics and requirements, reproducibility and automation of tests, full hyperparameter tuning, full automation of data pipelines, automatic canarying of candidate ML models before promotion or rollback, automatic model refresh and promotion, etc. This is why our overall strategy in this work focuses on pervasive use of AutoML techniques, platform integration, and online testing.
\section{Two End-to-end ML platforms}
\label{sec:twoplatforms}
We now briefly review two end-to-end ML platforms on which we later illustrate our proposed concept of a self-serve platform. Almost 100 product teams across the company currently leverage them to drive product-metric improvements. These platforms are particularly relevant because several of their architectural aspects bring them halfway to this concept.
\begin{itemize}
\item
Looper\xspace and PEX\xspace maintain full custody of data — from (i) automatic data collection to (ii) causal model evaluation by means of A/B testing with product metrics (no “clean” data is required). The context and APIs for automated data collection differ between the two platforms, but the results are similar: simplified maintenance and reduced engineering effort, with known data-collection pitfalls eliminated.
\item
Looper\xspace and PEX\xspace do not require customers to define or implement new model types. A variety of managed model types can be chosen from or auto-selected, and are then interfaced with the entire ML workflow without the need for customer-written ML code.
\item
Looper\xspace and PEX\xspace platforms are config-driven, maintain reproducible ML models, and regularly retrain them on fresh data to adapt to data drift. Retrained models are evaluated and automatically promoted upon positive results.
\end{itemize}
Each of the two platforms enjoys a distinct broad customer base among product teams and is proven to drive business value in numerous user- and system-facing applications. Together they produce 3-4M AI outputs per second for hundreds of product use cases. A detailed architectural description of the platforms and a discussion of applications are available in [removed for blind review], but in this section, we highlight key technical aspects, important use cases and how product impact is measured.
Looper\xspace and PEX\xspace specialize in tabular data, where features may be of different types and scales, might not correlate (like image pixels do), and might include individually-engineered features (like event counters). AI outputs of Looper\xspace and PEX\xspace often choose between several options, e.g., in binary decisions or when determining configuration options for various software products. In Looper\xspace, binary classification models are also trained for other applications, such as ranking, where scores produced by those models are sorted to order given items.
\noindent
{\bf Looper\xspace} is a general-purpose end-to-end ML platform that enables product teams to deploy fully managed real-time ML-based strategies to optimize impact metrics for a variety of product surfaces. It obtains training data through a predict-observe API that observes desired results shortly after the predictions and joins them for use when retraining models. Looper\xspace back-end supports a variety of ML tasks (including classification and regression, supervised and reinforcement learning, multimodal-multitasks and contextual bandits) and model types, but does not require that product engineers have prior ML experience or even understand the different model types. By lowering barriers to entry, Looper\xspace broadens the use of ML in Meta\xspace products and simplifies maintenance for ML-based applications.
The typical time spent to launch a model is reduced to one month from several months for traditional ML development cycles.
Looper\xspace performs multiple stages of optimization within the platform. In the first (offline) stage, loss functions are optimized during model training, and hyperparameters are optimized using offline Bayesian optimization. When a resulting model is deployed, the platform tracks and optimizes the product metrics. In the second (online) stage, product metrics can be improved further by methods unrelated to traditional loss functions, e.g., by tuning decision policies that postprocess the outputs of ML models.
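To illustrate the predict-observe pattern described above, consider the following purely hypothetical Python sketch; the function and field names are our own illustrations, not the actual Looper\xspace API.
\begin{verbatim}
import time, uuid

LOG = []                                # stand-in for the platform feature store

def predict(features, model):
    decision_id = str(uuid.uuid4())
    score = model(features)
    LOG.append({"id": decision_id, "ts": time.time(),
                "features": features, "score": score})
    return decision_id, score > 0.5     # decision consumed by the product

def observe(decision_id, label):
    # joined with the logged prediction; the (features, label) pair becomes
    # a training example at the next scheduled model retraining
    for row in LOG:
        if row["id"] == decision_id:
            row["label"] = label
\end{verbatim}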
\noindent
{\bf PEX\xspace}
addresses ML needs for in-situ live-product experimentation within multiple product surfaces. It is a specialized end-to-end ML-backed experimentation framework that enables product teams to leverage \href{https://egap.org/resource/10-things-to-know-about-heterogeneous-treatment-effects/}{Heterogeneous Treatment Effects} (HTE, \cite{kunzel2019metalearners}) to optimize end-user experience at the level of individual end-users. PEX\xspace predicts (based on user features) which variant treatment performs best for a given end-user by obtaining training data directly from the results of A/B tests (viewed as training labels) and can launch sequences of A/B tests to optimize hyperparameters. Traditional live-product experimentation relies on the Average Treatment Effect (ATE) to determine a single static decision that performs best on average to all users. But ATE often does not perform best for all users, meaning some users are receiving a worse experience than others. In contrast, PEX\xspace learns to predict product metrics for all the possible treatment assignments for a given end-user. Depending on the context and needs, PEX\xspace trains
\begin{itemize}
\item
An HTE meta-learner, which more accurately models the differences in immediate effects on the product metrics across treatments (techniques for this ML task are given in \href{https://arxiv.org/abs/1706.03461}{arXiv:1706.03461}; a minimal sketch follows after this list).
\item An RL model, adept at optimizing over a longer time horizon and capable of modeling delayed product metrics for multiple alternative treatments, as well as of choosing the most promising treatment. RL models are generally more suitable for use cases where end-users interact repeatedly with the product.
\end{itemize}
PEX\xspace then builds decision policies to provide each user with a personalized treatment. A decision policy either post-processes the outputs of an ML model (as in the case of HTE meta-learners) or is directly expressed by an ML model (as in the case of RL).
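As a concrete illustration of the first option, here is a minimal T-learner sketch (one of the standard HTE meta-learners) together with the induced decision policy; this is scikit-learn pseudocode, not PEX\xspace code.
\begin{verbatim}
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def fit_t_learner(X, treated, y):
    # separate outcome models for the treatment and control arms of an A/B test
    m1 = GradientBoostingRegressor().fit(X[treated], y[treated])
    m0 = GradientBoostingRegressor().fit(X[~treated], y[~treated])
    return lambda Xn: m1.predict(Xn) - m0.predict(Xn)   # predicted uplift

# decision policy: treat only the users with positive predicted uplift
# tau = fit_t_learner(X, treated, y);  assign = tau(X_users) > 0.0
\end{verbatim}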
PEX\xspace performs policy optimization in two phases: offline and online. First, PEX\xspace ensures the given use case is eligible for personalization, by running offline policy optimization with counterfactual evaluation that simulates responses that would be observed online. Assuming eligibility, the offline policy optimization (driven by black-box optimization methods) identifies the “best” (for a single metric) or several non-dominated (for multiple metrics) policies to be evaluated online. Such early pruning helps avoid online evaluation (A/B tests) of poor-performing policies (and thus avoid product-experience deterioration for some end-users). In the second phase, the candidate policies are evaluated online and each is optimized further using sequential live-product experimentation framework Adaptive Experimentation \cite{bakshy2018ae} which uses Bayesian Optimization to iteratively improve policy configurations.
PEX\xspace is tightly integrated with the company's live-product experimentation system, while providing a sufficient level of abstraction for an intuitive user experience. Prior ML knowledge is not necessary to successfully run PEX\xspace and launch new decision policies in production. Since 2020, there have been 40+ product launches from teams across Meta\xspace. PEX\xspace helps Meta\xspace engineers maximize product impact by personalizing experiences via ML, with moderate live-product experimentation effort.
\noindent
{\bf The value proposition} of end-to-end ML platforms like Looper\xspace and PEX\xspace is that shared engineering effort (new technologies, regular platform maintenance, and system upgrades) can help customers focus on applications. AutoML plays a significant role in supporting this value proposition by scaling configuration and optimization. Both platforms are integrated with company infrastructure and especially other related platforms.
\section{Self-serve ML platforms and tradeoffs}
\label{sec:self-serve}
End-to-end ML platforms have shown their worth by supporting numerous product use cases, improving product metrics and/or reducing maintenance costs. A brief review of industry end-to-end platforms can be found in [removed for blind review]. Further development of such platforms is driven by attempts to scale them to more applications and support hundreds of platform customers with minimal engineering support to individual customers. However, minimizing support to individual customers can only be achieved via engineering investments into platform development. This paradox is at the core of our exploration, and we develop guidelines for such investments. Specifically, we introduce the notion of a self-serve ML platform, supported by an integrated suite of AutoML techniques (prior work on AutoML focuses on specific subsets of the ML lifecycle). We then discuss this notion for the Looper\xspace and PEX\xspace platforms specifically to illustrate how it may be applied to other platforms.
\subsection{The notion of a self-serve platform}
\label{sec:notion}
We studied the entire lifecycle of ML development in end-to-end platforms and identified the steps that must be automated for a platform to qualify as self-serve. Some of the greatest challenges, and correspondingly the greatest opportunities, are associated with data handling and with product impact.
\subsubsection{Data handling}
Maintaining full custody of data (from API-driven data ingestion and collection of training data to inference and in-situ product evaluation) and automating related operations saves effort and avoids many human errors, inconsistencies, and pitfalls.
\subsubsection{Product impact evaluation and optimization}
Given data sources and preprocessing, as well as ML models and decision policies, we are interested in
\begin{itemize}
\item {\em Observational tasks}: evaluating, learning, and modeling the impact of AI outputs on product end-metrics,
\item {\em Interventional tasks}: optimizing prediction mechanisms (models, decision policies) to improve product metrics.
\end{itemize}
\noindent
{\bf Observational tasks} for product impact are less direct (compared to loss-function evaluation when training individual ML models) and require additional considerations:
\begin{itemize}
\item Longer-term product metrics implicitly aggregate the impact of AI outputs over time;
\item Product impact is causal in nature and depends on external factors (users, systems, environments) that often cannot be fully modeled; non-causal methods (such as traditional ML models based on correlations) are unable to estimate some causal effects;
\item When estimating causal effects with external factors, any one of multiple possible AI outputs can be evaluated, but typically no more than one --- a user would not walk through the same door twice, and a system will not perfectly replay a sequence of events to try different responses;
\item Live-product causal experiments (such as A/B tests) consume valuable resources and must use available data effectively; otherwise, they risk being underpowered and failing to produce statistically significant results. Before live-product experiments, clearly deficient candidate ML configurations are caught by offline evaluation using proxy metrics.
\end{itemize}
\noindent
{\bf Interventional tasks} require more effort than offline evaluation: experiment cycles may be long before enough data is collected to make a judgment, and there is always the risk of deploying harmful experiments. The direct approach to circumvent these issues is counterfactual evaluation, which estimates the outcomes of potential experiments without actual deployment and can drive offline optimization based on the effect of hypothetical stimuli, identifying good candidates for online evaluation.
Observational and interventional tasks require time and expertise, and carry risks. Hence, the management of live-product experiments and product impact optimization must be automated.
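One standard building block for the counterfactual evaluation mentioned above is inverse propensity scoring (IPS); a minimal sketch follows, with illustrative variable names.
\begin{verbatim}
import numpy as np

def ips_value(logged_actions, logged_rewards,
              logged_propensities, policy_actions):
    # estimated average reward the new policy would have collected on the logs
    match = (policy_actions == logged_actions).astype(float)
    return np.mean(match * logged_rewards / logged_propensities)
\end{verbatim}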
\subsubsection{Requirements for a self-serve platform}
Informally, a self-serve platform enables customers to deploy effective applications with little support. We expand on this intuition and list various aspects of platform development necessary to back it up. Appropriate metrics may be developed for individual platforms and contexts.
\noindent
{\bf Self-serve ML platforms} must satisfy the following requirements:
\begin{enumerate}
\item
{\bf low cognitive barrier} to entry and low requirements for ML experience for product engineers (in addition to “hiding” routine tasks behind automation, the UI should avoid unnecessary dependencies on ML concepts, data science concepts, as well as Looper\xspace and PEX\xspace architecture concepts; when unavoidable, these concepts can be explained in tooltips and/or via links to documentation).
\item
{\bf automated data collection} from applications and customization of subsequent data preprocessing (normalization, outlier removal, data imputation, down/up sampling, etc)
\item
{\bf AutoConf}: automated selection of an ML problem formulation, ML task (ranking, classification, etc), model type and default parameters, followed by workflow-automated traditional AutoML (parameter selection, network architecture search, etc),
\item
{\bf product-impact metrics} - tracking and automatic optimization of product-impact metrics, support for (i) counterfactual evaluation and (ii) online causal evaluation, such as A/B testing
\item
{\bf sufficient ML quality} with limited manual configuration and optimization effort
(comparisons are made to (a) custom AI solutions, (b) AutoML tools and services)
\item
{\bf full management of hosting} of data, models and other components, with modest resource utilization (some customer resources may not be accessible),
\item
{\bf adaptation to data drift (calibration and retraining)} to ensure model freshness,
\item
{\bf resilience and robustness to disruptions in data (delayed, missing) and system environment (resource limitations and
outages)} with minimal recurring customer-side maintenance effort,
\item {\bf customer-facing monitoring} and root-causing of customer errors
\item
{\bf scalable internal platform maintenance} and customer white-glove support.
\end{enumerate}
Attaining self-serve quality includes point-automation as in traditional AutoML, flow-automation which connects point optimizations, as well as whole-strategy (vertical) optimizations that co-optimize multiple point components. It would be naive to support self-serve by a custom-engineering service backend — instead, support for individual customers is either generalized or traded off for more scalable platform support (provided by platform development engineers). An apparent limitation is that the largest ML applications may require too much custom optimization and maintenance. On the other hand, a self-serve platform helps even seasoned ML engineers by automating boring and error-prone routines for less critical but numerous ML applications.
\noindent
{\bf Additional capabilities} for self-serve platforms depend on the client base and uses:
\begin{enumerate}
\item
open architecture: in addition to end-to-end ML support, the self-serve platform may offer its components and partial workflows individually (such as model training on client-provided data or using a client-provided model); such decomposability helps leverage ML platforms within client-designed infrastructure for high-value use cases.
\item
customizations to common ML tasks: ranking and selection — succinct high-level APIs that use relevant concepts, support for relevant model architectures, loss functions, regularizations, output constraints, and diagnostics, etc.
\item
reproducibility of models (i.e., all necessary code and data are available; where actual training data may be subject to retention policies, comparable data are available).
\item
meta-learning, including transfer learning - automatically choosing learning parameters, reusing and adapting trained models to new circumstances.
\item
interpretable ML models -- the ability to provide platform clients insight into model behavior without requiring an understanding of model internals.
\item
fairness in ML addresses various ways to evaluate fairness (proportionality, equal reach, opportunity, impact, etc) and ways to train ML models to improve these metrics.
\end{enumerate}
These capabilities broaden adoption in several ways. Open architecture facilitates reuse, evaluation, and improvement of individual platform components. Clients with ML experience benefit from open architecture more than entry-level clients, who may prefer end-to-end support. Customizations to common ML tasks help platform clients support certain applications. Reproducibility of models maintains a healthy SW and ML development cycle, helps keep models fresh, and supports recurrent retraining of ML models. Meta-learning, including transfer learning, seeks more effective models and faster training based on past experience. Interpretable ML models play an important role in personalized treatment to avoid inadvertent biases, receive legal approval, and give platform users a greater understanding of their client base. Fairness support is required in certain applications, but remains challenging when no clear guidelines are provided, given the various notions of fairness that exist.
\begin{table*}
\caption{
\label{tab:selfserve}
Applying the notion of self-serve to the Looper\xspace and PEX\xspace platforms.}
\begin{tabular}{|c|c|c|}
\hline
\sc 10 requirements & \sc Looper\xspace & \sc PEX\xspace \\
\hline
Low cognitive barrier to entry &
Broader scope of UI &
Specialized UI and data sources \\
\hline
Automated data collection &
Generic Looper\xspace APIs &
\parbox{5.5cm}{
Labels from A/B experiments; \\
API to experimentation system} \\
\hline
AutoConf &
\parbox{5.5cm}{
Selection of ML task, model type, features, default params;
Hyperparameter tuning;
Decision-policy tuning for binary classification;
Value-model tuning for multiclass classification and multimodel multitasks;
}
&
\parbox{5.5cm}{
Fixed ML task, selection of HTE meta-learner, RL, or derived heuristic policy, feature selection, offline/online policy optimization and param tuning }
\\
\hline
Product-impact metrics &
\multicolumn{2}{c|}{
Tracking and optimization are critical in most applications} \\
\hline
Sufficient ML quality &
\parbox{5.5cm}{
Last 1-2\% model quality viz. loss functions (SOTA) is not critical in many cases. Overall quality is most affected by the decision policies, and then by the ML model.} &
\parbox{5.5cm}{
Somewhat more important than for Looper\xspace. Greatly affected by label selection} \\
\hline
Full management of hosting &
\multicolumn{2}{l|}{
\parbox{11cm}{
Clients use their storage quota for data, pipelines, but management is fully automated. This includes automatic canarying and promotion of retrained models.
}} \\
\hline
Adaptation to data drift &
\multicolumn{2}{l|}{
\parbox{11cm}{
For nonstationary data: automatic model calibration, model retraining and promotion; decision policy re-tuning.
}} \\
\hline
Resilience, robustness to disruptions &
\multicolumn{2}{l|}{
\parbox{11cm}{
Data distribution monitoring, real-time alerts;
handling of missing data
}} \\
\hline
\parbox{4.5cm}{
Client-facing monitoring and\\ root-causing of client errors.}
&
\multicolumn{2}{l|}{
\parbox{11cm}{
Alerts on anomalies in data and AI outputs.
Help diagnose and address client and platform errors.
}}
\\
\hline
Scalable internal platform maintenance &
\multicolumn{2}{l|}{
\parbox{11cm}{
Maintenance load is due to: (i) company-wide system environment changes, (ii) client activity, (iii) use-case issues: quotas, missing data, etc.
}} \\
\hline
\multicolumn{3}{c}{}
\\
\hline
\sc 6 additional capabilities & \sc Looper\xspace & \sc PEX\xspace \\
\hline
Open architecture &
\parbox{5.5cm}{
Training models on given data/comparable to Google AutoML tables;
Logging as a separate service}
&
N/A
\\
\hline
Customizations to ML tasks &
\parbox{5.5cm}{
Ranking as a service, selection (top N out of a large predefined set), etc}
&
\parbox{5.5cm}{
The formal ML task is fixed, but outputs are used for a variety of applications (UI or value model optimizations, etc).
}\\
\hline
Reproducibility of models &
\multicolumn{2}{l|}{
\parbox{11cm}{
Automatic model refresh (i) to adapt to data drifts, \\
(ii) to meet data retention limits related to data laws.
}} \\
\hline
Meta- and transfer learning &
\parbox{5.5cm}{
Transfer-learning across related apps, where some training data can be reused}
&
\parbox{5.5cm}{
Meta-learners (T learner, X learner, etc)
}
\\
\hline
Interpretable ML &
\parbox{5.5cm}{
Feature importance analysis (mostly for model building);
Monotone constraint for model output w.r.t. feature values.
}
&
\parbox{5.5cm}{
In addition to feature importance analysis, automatic user segmentation analysis provided to understand meta-learners;
Automated heuristic policy distillation possible
}
\\
\hline
Fairness in ML &
\multicolumn{2}{l|}{
Pending guidance for individual applications
}
\\
\hline
\end{tabular}
\end{table*}
In terms of the overall impact of self-serve platforms, we notice several tradeoffs. In particular, scaling client adoption and lowering the barrier to adoption increases platform-level maintenance load. Another source of additional work is interoperability with other platforms and company-wide systems (revision control and build systems, basic ML environments, experimentation platforms, etc): serious upgrades or interface changes in partner systems require attention and urgent effort. Thus, scaling internal maintenance becomes important. Achieving self-serve quality and scaling adoption also requires additional platform development, advocacy to attract product teams, better tracking of client roadmaps and impact metrics, and some amount of white-glove support for high-impact clients. There are important tradeoffs among model configurability, model optimization, and eligibility for our platform, as well as operational decisions about the amount of engineering support offered to individual clients, etc. In particular, the most valuable application-specific models are supported by a large cadre of engineers that try to optimize and extend those models in parallel, and such work necessitates flexible configuration infrastructure that can be difficult to support in a general platform. When a model or a use case requires white-glove support from the platform team, the support effort may be better spent on platform improvements for multiple clients. In such cases, we try to generalize lessons from one use case.
While reaching self-serve quality is a worthy goal, its value is greatly increased upon attaining an economy of scale. For the latter, we give the following criterion: very few client-support and maintenance activities are done for individual clients.
\section{Extending ML Platforms toward Self-serve}
\label{sec:details}
The ten stated requirements for self-serve platforms and six additional capabilities apply to Looper\xspace and PEX\xspace in somewhat different ways, given that Looper\xspace is a general-purpose end-to-end platform while PEX\xspace is a more specialized platform.
Table \ref{tab:selfserve} contrasts relevant differences.
For example, automated data collection in Looper\xspace is implemented via the Looper\xspace APIs, which are used by many diverse applications and from several programming language environments. For PEX\xspace, despite a variety of specific product applications, training data for ML models is collected from A/B experiments via integration with Meta\xspace’s widely adopted online experimentation system. To ensure sufficient ML quality, Looper\xspace offers several types of ML models, including tree ensembles, deep learning, contextual bandits, reinforcement learning, etc, along with extensive feature selection.
The PEX\xspace platform is directly integrated into
the company-standard experimentation system to
reduce the cognitive barrier to entry. Additionally,
we leverage consistent UIs and APIs to provide a seamless transition from randomized experimentation to personalized experimentation. We also surface our client-facing monitoring and alerting via integrated UI.
\section{Improvements and deployment experience}
\label{sec:deployment}
Three implementation efforts brought the Looper\xspace and PEX\xspace platforms closer to {\em full self-serve}. Below we describe deployment experience, the impact on the usability of the platforms, and client survey results. Improving ML performance was not a primary objective of these efforts, but small ML performance improvements were observed in some cases as well.
\subsection{Product-metric optimization in Looper\xspace \& PEX\xspace}
\label{sec:pismo}
For applications deployed on our platforms, models are auto-retrained when new training data becomes available, but decision policies were not retuned automatically. A handful of clients retune decision policies manually, but most teams do not, due to limited engineering resources and the tedious effort required.
A survey of platform clients indicated high importance of unlocking regular improvement of product metrics by updating smart strategies (including ML models and decision policies) as data trends change. This can
\begin{enumerate}
\item Save significant manual effort to run experiments when optimizing smart strategies
\item Enable automatic configuration and launching of online experiments to reliably tune decision policies
\item Increase trust with clients by accurately tracking and optimizing product metrics via the platform
\end{enumerate}
To realize these possibilities, we now allow platform clients to specify the product decision space upfront: the metrics to optimize, the experimentation preference for each product decision (such as whether to prefetch or whether to send a notification), the relations among product decisions, etc. This information is then used to automatically generate decision policies or to launch offline analyses and efficient online experimentation to tune the smart strategies. After the smart strategies are tuned and set up, Looper\xspace and PEX\xspace can generate product decisions directly for the incoming traffic.
After enabling automatic decision policy tuning,
we’ve observed the following benefits:
\begin{itemize}
\item Unlock {\em repeated} product metrics improvement. We validated this workflow on several examples. We onboarded a use case that determines whether to prefetch certain content (such as stories, posts, or reels) to a given edge device (smartphone) during peak hours. We launched newly tuned decision policies which improved the team’s top-line product metrics for different device types. The overall success rate improved by 0.73\% with neutral computational cost.
\item
Save at least two weeks of manual experimentation effort per half (six-month period) for each loop/use case. So far, we have saved at least six weeks of total manual effort across three onboarded loops.
\item
Automate product-metric evaluation of decision-policy iterations; without this, the product-metric impact of decision-policy improvements cannot be justified systematically.
\end{itemize}
Automatic model tuning and refresh have been deployed for several months without disruptions.
These efforts brought platform automation to a new level ---
after training ML models, the clients no longer need to build and maintain the decision policies on their own. They can rely on the platform to automatically improve product decisions over time
with all necessary safeguards.
\subsection{Client-driven platform improvement for PEX\xspace}
\label{sec:selfservepex}
To evaluate the self-servability of the PEX\xspace platform, we conducted a User Research Study where 6 users from 6 different teams were interviewed about their experience using the platform. All users indicated a strongly favorable sentiment about the value of PEX\xspace with respect to product-metric improvement, and a strong interest in self-servability for ML platforms in general. At least some level of unfamiliarity with ML was ubiquitous across all users, and some findings pointed to the need for even further reduction of the cognitive barrier to entry. For example, feature selection was a major pain point for most clients interviewed, as they were unfamiliar with the concept. To address this, we created a base set of features that demonstrated high importance in many use cases, and we automatically include these features in all newly created experiments. Thus, additional features are recommended but not required for new use cases. Furthermore, the study highlighted the need for high-quality documentation and tool-tips to guide the user through the platform.
Recently, a product team evaluated several different personalization methods: manual creation and hand-tuning of models; a manual conversion of experimental data to a personalized-methods toolbox; and the self-serve PEX\xspace platform. The team evaluated the performance of these methods in the context of the necessary time investment and the improvement to product metrics of interest. PEX\xspace out-performed the toolbox in both time investment and product impact. In comparison to hand-tuned models, PEX\xspace performed on par with, or slightly worse than, them with respect to metric impact, but is substantially easier to use: hand-tuning takes approximately 6 months per use case, while the active engineering effort for using PEX\xspace is approximately 2--3 weeks per use case. Ultimately, due to the scale of use cases that can be explored via PEX\xspace and the relatively good performance for product metrics, the team chose to heavily invest in PEX\xspace for personalization needs.
\eat{
Usage of the PEX\xspace platform for personalization of end-user product experience is standard in product growth teams at Facebook. Typically in these types of use cases, the team is interested in finding an optimal tradeoff between end user engagement and a counter metric such as SMS cost or capacity resources. Identifying optimal trade-off points manually can be quite time intensive as there are many permutations of treatment assignments, and as discussed above hand-tuning models is also time consuming. PEX\xspace provides a simple, intuitive approach to ML based personalization via automatic data collection, monitoring and alerting, management of data pipelines and hosting, and automatic interpretable ML analysis.
}
We now review how a typical deployment experience for PEX\xspace clients supports self-serve. Clients discover the PEX\xspace platform, via pull or push marketing, and initialize their use case from the integrated UI flow that guides them through selection of treatment groups, product metrics \eat{(used as labels in our underlying models)}, and features. Most clients complete setup independently, but some request an initial consultation with the platform team to discuss the fit of their use case. After 1--2 weeks, the product engineer returns to the integrated UI to review the offline counterfactual policy evaluation, which provides product-metric impact analysis and automatic user segmentation analysis for enhanced interpretability. Again, most users are able to move independently from offline testing to online testing, but those with a weaker analytical background prefer to meet with the platform team to review the offline results. During product launch, many clients feel more reassured with a review from the platform team after they independently complete the launch flow and necessary code changes. The platform team provides support via weekly office hours and Q\&A fora, but also collaborates
with product teams on high-impact high-complexity applications and may implement additional platform functionalities on request.
\eat{
This is mutually beneficial as the product team receives high levels of direct support, and the platform team is able to empirically evaluate new methods in production.
}
\begin{figure*}[h]
\includegraphics[width=0.7\paperwidth]{self-serve-RL-6.png}
\vspace{-5mm}
\caption{
Our workflow for self-serve reinforcement learning (RL).
\label{fig:RL}
}
\end{figure*}
\subsection{Self-serve reinforcement learning}
\label{sec:rl}
Reinforcement Learning (RL) is used in applications driven by metrics that are {\em delayed} from actions they evaluate and/or are {\em cumulative} over many actions (long-term). Using Supervised Learning in such cases is problematic because adequate labels are lacking.
In such settings, deployed decision policies generally process the outputs of ML models (trained on observational tasks) and are configured manually via dedicated parameters.
There is also significant risk of unintentional {\em negative feedback loops} undermining long-term product impact. In contrast to {\em ad hoc} policies for traditional supervised learning,
RL parameterizes policies explicitly and optimizes them \cite{sutton2018reinforcement}. RL can also track feedback loops and sequential dependencies while optimizing long-term product impact. Traditionally, the RL agent interacts with the environment over the course of learning ({\em online learning}), but in industry applications this is often expensive and risky.
\noindent
\textbf{{\em Offline reinforcement learning}} \cite{levine2020offline} reduces these risks and is practical in many applications. Using static logged datasets as in supervised learning,
offline RL extracts better policies via {\em off-policy learning} by generating trajectories that may have never been observed in the dataset. This makes offline RL relevant to industry applications where various interactions of previously deployed policies and systems have been logged.
But higher software and mathematical complexity keeps offline RL challenging for production applications. The rare success stories often involve extensive hand-tuning and difficult maintenance.
Current efforts focus on training while mitigating the extrapolation errors seen in off-policy learning. Despite promising results in the RL literature, deploying self-serve industry applications
on ML platforms would require additional AutoML components and must address several remaining challenges:%
\begin{itemize}
\item
{\bf Quality datasets and the need for randomization.} Without environmental interaction during learning, the logged dataset
must offer good quality and sufficient coverage. In healthcare and autonomous driving applications, it is difficult to ensure coverage,
but recommendation systems and Web-based services are less sensitive
to randomized exploration, which can be used to improve coverage. At the same time, logged datasets may be limited to some audiences or by manually drafted rules based on domain insights. Therefore, a self-serve offline RL system may need to augment prior historical data or collect fresh data.
\item
{\bf Effective training and evaluation pipelines.} Compared to supervised learning, workflows for RL in product applications are more difficult to configure and maintain. Self-serve offline RL requires effective and configurable pipelines for model training, evaluation and tuning to product metrics.
\item
{\bf Reward shaping.} When using RL, designing the reward function is commonly the first step. Offline RL for product applications is particularly sensitive to reward functions that must proxy for product metrics. Defining faithful proxies is challenging in practice since product applications often entail several product metrics with different multiway tradeoffs. Although metric values are normally found in the dataset logged to train offline RL, combining them in a reward function requires ($i$) application insights and ($ii$) mapping insights to a numerical reward function. This step is difficult to automate in self-serve platforms (a minimal illustration follows this list).
\end{itemize}
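To make the reward-shaping challenge concrete, the simplest practical parameterization is a linear combination of logged product metrics. The sketch below is illustrative only; the metric names and weights are assumptions, not values from any deployed application.
\begin{verbatim}
# Minimal sketch: a linear reward over logged product metrics.
# Metric names and weights are illustrative assumptions.
def linear_reward(metrics, weights):
    """Combine product metrics into a scalar RL reward."""
    return sum(w * metrics[name] for name, w in weights.items())

# Example: trade engagement off against notification cost.
weights = {"engagement": 1.0, "notification_cost": -0.2}
step_metrics = {"engagement": 0.8, "notification_cost": 1.0}
reward = linear_reward(step_metrics, weights)  # 0.8 - 0.2 = 0.6
\end{verbatim}
Even in this simple form, choosing the weights encodes application insights, which is precisely the part that resists automation.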
\begin{table*}
\caption{\label{tab:future} Future efforts outlined.}
\begin{tabular}{|c|c|}
\hline
\sc Future effort categories &
\sc Targets \\
\hline
\hline
\parbox{7cm}{
{\bf Proactive improvement of platform adoption} (includes usability issues, but not ML technology; see below)
}
&
\parbox{9cm}{
Effective UIs, multiple APIs, back-end support, open architecture, documentation and tutorials, advocacy.
}
\\
\hline
\parbox{7cm}{
{\bf MLOps}
(stability, robustness, avoiding product metric regressions, moderate internal maintenance)
}&
\parbox{9cm}{
Reliability, resilience, effective maintenance, engineering excellence and beyond (including ML engineering excellence)
}\\
\hline
\parbox{7cm}{
{\bf Technology- and ML-driven development}
(high-value use cases,
new use scenarios for greater adoption, ML performance, effective resource usage)
}
&
\parbox{9cm}{
Support for large/high-value ML models. New individual components and services. Integration within the platform and with other platforms, product and platform metrics. Support for additional ML tasks, applications and use scenarios; more effective ML models and decision policies; whole-strategy optimizations.
}
\\
\hline
\end{tabular}
\end{table*}
Our self-serve offline RL workflow (Figure \ref{fig:RL}) is implemented on both of our platforms for clients without prior engineering expertise in RL. The level of automation we provide for RL is similar to that for supervised learning. A dozen preexisting and new product applications at Meta\xspace have onboarded to our RL workflow or started onboarding in the first month of availability. Application categories include Web-based services and
medium-scale recommendation systems.
Several product applications have already improved product metrics by 40--50\% over prior production policies based on supervised learning models.
\noindent
\textbf{The self-serve client journey} includes: (1) data logging, (2) data preprocessing for RL, (3) policy training and evaluation by product metrics rather than RL-centric metrics,
(4) reward shaping and hyperparameter tuning, (5) online deployment, and (6) recurrent training.
\noindent
\textbf{To onboard} a product application, an owner specifies product metrics to be optimized and prediction features for the product application. These steps take under one hour using the integrated platform UI. The next and final step is for the application owner to extend the software-defined RL API for use in their product code. The product application uses this API for data collection, training, and online deployment of any policy learned with offline RL. In detail, data collection starts with a randomized policy or some preexisting application-specific policy. Data are collected and preprocessed in the appropriate RL schema of tuples of one-step state transitions. After sufficient coverage of every action has been obtained in the collected data, offline RL training starts. Our system is integrated with ReAgent, an open-source RL platform that offers well-tested state-of-the-art RL algorithms. Trained policies are evaluated using {\em counterfactual analysis}, which estimates the impact on product metrics if the policy were deployed online. Training and evaluation are driven by the {\em reward function} specified by the application owner as a proxy of product metrics. Commonly, no such function is known up-front, and thus our {\em reward-tuning workflow} constructs one by exploring the space of linearly parameterized reward functions by means of Bayesian Multi-Objective Optimization. Linear combinations of product metrics with different preference weights are compared to identify non-dominated (Pareto optimal) learned policies. The same methodologies of Bayesian Multi-Objective Optimization and counterfactual evaluation are leveraged by our system for all learning-based tuning (e.g., of model architectures and hyperparameters). Candidates produced by reward tuning/training are reviewed by the owner, and based on how the different learned policies impact the product metrics, the most promising one is selected for online deployment. After that, our system continues to collect training data based on the deployed policy with a small allowance for exploration (vs. exploitation), optionally controllable by the owner based on the application requirements. Collected data enables recurrent training.
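The Pareto-selection step admits a short sketch. The candidate metric values below are invented for exposition; in practice they would be counterfactual estimates produced by the evaluation pipeline.
\begin{verbatim}
import numpy as np

# Sketch: non-dominated (Pareto) filtering of candidate policies,
# each evaluated counterfactually on two product metrics to maximize.
def pareto_front(points):
    """Boolean mask of non-dominated rows (maximization)."""
    mask = np.ones(len(points), dtype=bool)
    for i in range(len(points)):
        # Rows that are >= point i on all metrics and > on at least one.
        dominators = np.all(points >= points[i], axis=1) & \
                     np.any(points > points[i], axis=1)
        if np.any(dominators):
            mask[i] = False
    return mask

candidates = np.array([[0.9, 0.2], [0.7, 0.7],
                       [0.6, 0.5], [0.3, 0.9]])
print(candidates[pareto_front(candidates)])
# keeps [0.9, 0.2], [0.7, 0.7], and [0.3, 0.9]
\end{verbatim}
The surviving candidates are then presented to the application owner, who picks the preferred trade-off for online deployment.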
\noindent
\textbf{From a self-serve perspective}, our offline RL system is application-agnostic, while its owner-side invocation is RL-agnostic and requires no RL expertise.
Automation and tuning are driven by end-product metrics rather than by RL's internal metrics or proxy objectives, as is common in the literature \cite{paine2020hyperparameter, kumar2021workflow}. Owners of product applications can set up their use case and deploy an offline RL policy online in a few days, assuming sufficient application traffic.
The value of our self-serve RL workflow is in automation, as other RL solutions
often require one-off coding and tuning that are hard to generalize and maintain in the long term.
\section{Conclusions and Future Work}
\label{sec:conclusions}
Our long-term vision for building self-serve ML platforms is developed by examining client journeys, identifying tasks that require unnecessary manual work, and automating this work by improvements in UI or system back-end, or by implementing relevant AutoML techniques. Our experience with end-to-end ML platforms suggests that the notion of self-serve that we develop helps lower the barrier for adoption of ML in product applications, broaden this adoption, reduce the time to product impact, and reduce maintenance costs. Product engineers with limited ML experience directly benefit from self-serve platforms, but questions remain about power users who understand the underlying techniques and are often interested in finer control over individual tasks. This is why we advocate open-architecture end-to-end ML platforms that optionally automate selected parts of the overall workflow, e.g., any of data ingestion, training, serving, monitoring, etc. A power user interested in tweaking a particular task for high-value ML models and applications would rely on the automation and reproducibility of other tasks. When improving the performance of ML models, there is a risk of focusing too narrowly on the model’s loss function, losing sight of product impact, and overlooking powerful optimizations beyond the loss function. An end-to-end platform can guide clients appropriately and make necessary tools available to them. Moreover, in some applications, it is more important to build and deploy additional use cases than to super-optimize existing use cases.
With clients in mind, our work explains how to improve end-to-end ML platforms to help optimize key product metrics and ensure broader adoption by attaining the self-serve level. We foresee three categories of future efforts, as shown in Table \ref{tab:future}.
Each of these effort categories leverages AutoML in essential ways. Platform adoption can be proactively improved by automating typical points of failure for new clients and adding support for prototyping to decrease the cognitive effort needed to demonstrate the first success. Formalizing and automating MLOps workflows helps improve platform reliability and reduce maintenance costs for clients and platform owners. AutoML is also essential to enable more sophisticated ML models and reduce the configuration burden for clients of such models.
Self-serve platforms amplify the impact of end-to-end platforms and help accumulate “passive income” from use cases deployed on the platform over long periods of time. However, the broad impact of a given development effort can be difficult to estimate (beyond pilot clients) because it depends on the preferences of many clients and because several efforts often contribute to any one result. We distinguish the needs of specific clients from platform improvements initiated by the platform team with numerous clients in mind. Both types have been helpful in the past and will be pursued in the future. Another strategic dichotomy pits a handful of sophisticated high-value models against numerous simpler “tail models” that support various aspects of non-ML-driven applications. While open architectures and partial workflows should be helpful for power users, it makes sense to (i) consider client tiers by the support needed and (ii) prioritize clients by their likely impact on product applications.
\bibliographystyle{ACM-Reference-Format}
|
{
"arxiv_id": "2302.14188",
"language": "en",
"timestamp": "2023-03-01T02:04:25",
"url": "https://arxiv.org/abs/2302.14188",
"yymm": "2302"
} | \section{Introduction}
\label{section:Intro}
The proliferation of man-made objects in Earth's orbit has dramatically improved the capability of space-based services (internet, imaging, science missions, etc.). This has come at the cost of an increasingly congested environment that puts the safety and reliability of those same services at risk from interference and cascading space debris. In partial response to the threat posed by space debris and the complexity of on-orbit servicing missions, NASA has identified the task of autonomously inspecting resident space objects as a key enabling technology for next generation space missions \cite{Starek16}, specifically for autonomous rendezvous, proximity operations, and docking (ARPOD). Inspection is a necessary precursor to on-orbit servicing in an effort to identify possible damage, locate docking points, and plan an approach if the object is uncontrollable/non-cooperative (e.g., tumbling). Thus, the inspection problem typically requires comprehensive imaging of the target surface necessary for relative state estimation, pose estimation, and inertia estimation. This is then used as an input for a large variety of ARPOD missions.
The inspection task itself faces several challenges, including the need to plan rapidly in a nonlinear, high-dimensional, highly constrained, and uncertain environment. Using a team of satellites to perform the inspection has the potential to increase task efficiency with many existing solutions actively exploiting this; see \cite{Lei22,Bernhard20,Nakka21}. To handle additional complexity that multi-agent modeling and derivations introduce, most existing solutions must make strong simplifying assumptions.
For example, \cite{Bernhard20} and \cite{Nakka21} restrict their solutions to parking satellites on pre-computed elliptical natural motion trajectories around the target, which can increase both the time to complete an inspection and the number of inspecting agents required to get full sensor coverage of the target. In \cite{Phillips2022}, simplified Clohessy-Wiltshire equations and a Lyapunov-based closed loop control method were used to converge a multi-satellite spacecraft system to a formation around a chief agent. The articles \cite{oestreich21} and \cite{Dor18} focus on relative state and pose estimation from realistic images, and formulate the inspection task as a simultaneous localization and mapping (SLAM) problem. \cite{oestreich21} does so using a factor graph, and implements their solution on a hardware testbed, while \cite{Dor18} uses a novel technique they call ORB-SLAM. Neither imposes major assumptions on the target, but both assume a fixed trajectory for the inspecting agent rather than including optimization over data collection and/or fuel consumption. When trajectory optimization is explicitly considered, the problem of relative state estimation of the target is often simplified by assuming the target is cooperative with only a single inspecting agent, as in \cite{woffinden07}.
In \cite{Maestrini22}, trajectory optimization is combined with an unknown, non-cooperative target but again using only a single inspecting agent. Furthermore, this solution requires onboard trajectory sampling and simulation that quickly becomes computationally expensive with large potential for excessive power draw.
We seek to create a general solution framework that combines optimal pose and relative state estimation for an unknown/non-cooperative target while also optimizing inspecting agent trajectories to reduce fuel consumption or time to complete an inspection, in part through the use of multiple inspecting agents. We argue that tools in machine learning are particularly well-suited to handle the spatiotemporal complexity of inspection planning because they concentrate heavy computation upfront, during training, while leaving low computational demands at deployment. Through the use of deep reinforcement learning, we aim to strike a balance between tractability and simplifying assumptions; see \cite{dh93,googleFRL}. In this framework, diverse formulations of inspection criteria and tumbling dynamics can be accommodated through offline simulation, allowing the training of a policy that remains computationally feasible for online calculation. This approach has the distinct advantage of not requiring explicit consideration of the underlying nonlinear dynamics during controller design. The resulting trained policy may also robustly capture a host of diverse behaviors without needing to recompute optimal control inputs in real time. Reinforcement learning thus has the potential to overcome many of the challenges faced when executing complex maneuvers in space; see \cite{BL19,OLG21,RLattitude}.
In multi-agent reinforcement learning, however, these challenges are more pronounced and less work has been done in this area; for instance \cite{Lei22,WANG2011,taxirl,MAMP}.
This paper presents a multi-agent deep reinforcement learning (RL) solution to on-orbit satellite inspection within an extendable and flexible framework.
We focus specifically on the decentralized navigation of multiple agents to achieve a certain percentage of inspection coverage through image collection in a fuel- or time-optimal manner for a non-cooperative/tumbling target.
We do not consider the problem of pose or relative state estimation (although we argue that our RL solution can be extended to encompass this as well), but rather assume a priori knowledge used to generate a point cloud model of the target. Using this, we assume that agents have access to sensor visibility indices based on target pose, camera range, field of view (FOV), and agent relative positions. Successful inspection is defined through information retrieval, which is a function of agent and target motion, rather than the traversal of an a priori fixed set of ``inspection points'', as in our previous work \cite{Lei22}.
Hence, the time and energy required to successfully inspect the target is variable and well-suited to more complex information theoretic approaches.
We formalize the problem as a decentralized partially observable Markov decision process (DEC-POMDP).
Decentralization captures the need for each agent to execute its own navigation strategy without duplicating or conflicting with the work of the other agents.
The POMDP is necessary given the lack of perfect information sharing across agents.
We operate under the assumption that agents do not share reconstructed point clouds of the target, and therefore each agent only has partial information about the joint visibility ledger. The DEC-POMDP is solved to optimize two, potentially conflicting, objectives. The first objective is to achieve a combined point cloud reconstruction across all agents that guarantees a fixed proportion of surface coverage (e.g., view at least 90\% of all visible points in the cloud). The second is to minimize a performance metric, which could be time to complete inspection or total agent fuel consumption.
Given the complexity of DEC-POMDPs, we decompose the problem in a hierarchical manner. We fix a number of viewpoints centered around the target that are independent of the target's rotational motion. The agents are limited to traveling between these points, and only take images upon arrival at a viewpoint. Hence the problem may be decomposed into a high-level planner that tells the agent which viewpoint to travel to, and a low-level navigation controller that then carries the agent along a viewpoint transfer. Satellite point-to-point navigation is well understood, and model predictive control (MPC) has often been used for satellite applications; see \cite{petersen21, Morgan14, Foust20}. We utilize a basic velocity-based MPC controller for navigation that supports simple expressions for single-burn estimates of the controlled velocity. This allows us to focus on high-level planning requiring the anticipation of target motion and information retrieval from different viewpoints across highly variable mission time horizons. This is nontrivial due to both target dynamic mode and surface geometry. As such, we employ multi-agent deep RL, specifically the recurrent replay distributed deep Q network algorithm (R2D2, \cite{Kapturowski18}), to construct high-level decentralized policies for each agent.
We test the composed hierarchical solution in a simulated environment and find that the trained agents effectively balance exploration of their environment with minimization of their performance objective. Further, the agents learned to coordinate without explicit communication about target visibility, broadcasting only their own position and velocity. As compared to our previous solution \cite{Lei22} that requires visitation of a full set of inspection points tied to the body frame of the target, our current results demonstrate that navigation through the entire viewpoint space is unnecessary, and that dissociating the viewpoints from the target body frame enables efficient inspection of a tumbling target. Finally, the use of deep RL allows us to synthesize policies robust to target motion, in which the agents learn to cooperate and anticipate viewpoints that maximize information retrieval without having to explicitly construct and store a model of the target or of the other agents' behavior. Our approach further guarantees inspection coverage up to a user-specified percentage of target surface area while simultaneously optimizing for time or fuel consumption.
The rest of the paper is organized as follows. Section \ref{section:Background} summarizes the concepts necessary to understand our solution. Section \ref{section:Problem} outlines the problem formulation, including the environment setup and how we decompose the viewpoint selection planner from the navigation controller. Our methodology, including how we set up our problem for a reinforcement learning solution, is given in Section \ref{section:Methodology} followed by results in Section \ref{section:Results} and concluding remarks in Section \ref{section:Conclusions}.
\FloatBarrier
\section{Background}
\label{section:Background}
Here we briefly document key prerequisite knowledge needed for our analysis. This includes background on space dynamics in Section~\ref{subsec:spaceDynamics} and a brief summary of policy generation utilizing reinforcement learning as a tool to approximate solutions of DEC-POMDPs in Section~\ref{subsection:rl}.
\subsection{Space Dynamics:}\label{subsec:spaceDynamics}
\subsubsection{Reference Frames:} The earth-centered inertial frame (ECI) is a global inertial reference frame whose origin is at the center of mass of the Earth with a fixed orientation relative to the fixed stars. In the ECI frame the $z$ axis is nearly, but not exactly, aligned with the North Pole due to the precession of the Earth. The Hill frame (H) is a non-inertial frame fixed on an orbiting body such that the y-axis is tangent to the orbital path, the x-axis points radially away from the Earth, and the z-axis completes the orthonormal right-handed coordinate system; see Fig. \ref{fig:OrbitFrames}A. Body frames are fixed to each orbiting vehicle and are aligned with their respective principal inertial axes.
\begin{figure}[th!]
\centering
\includegraphics[width=.8\linewidth]{figures/CWHFrame2.eps}
\caption{A) Example Hill Frame unit axes are shown in ECI at five different time steps with respect to the orbital period T. B) The position of an agent can be represented as a relative position with respect to the Hill Frame.}
\label{fig:OrbitFrames}
\end{figure}
\subsubsection{Clohessy-Wiltshire Hill Dynamics:}
The orbital path of the inspection target is described using Keplerian mechanics defined through several variables including: mean motion, earth radius, J2 perturbation, and orbital radius; see \cite{FMark}. Assuming an orbital eccentricity of zero with no external torques or forces, the relative motion between an agent and the target can be described through Clohessy-Wiltshire-Hill (CWH) dynamics, see \cite{Clohessy60}; an example of target and agent orbit is shown in Fig.~\ref{fig:OrbitFrames}. This permits a linearized form of the CWH dynamics given by:
\begin{align}
\dot{\mathbf{x}}(t) =
\begin{bmatrix}
0 & 0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 0\\
0 & 0 & 0 & 0 & 0 & 1\\
3n^2 & 0 & 0 & 0 & 2n & 0\\
0 & 0 & 0 & -2n & 0 & 0 \\
0 & 0 & -n^2 & 0 & 0 & 0\\
\end{bmatrix}
\mathbf{x}(t)
+
\begin{bmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0 \\
\frac{1}{m} & 0 & 0 \\
0 & \frac{1}{m} & 0 \\
0 & 0 & \frac{1}{m} \\
\end{bmatrix}
\mathbf{u}(t)
\label{eq:CWH}
\end{align}
where $\mathbf{x}(t)$ is a state vector of Euclidean positions and velocities in $\mathbb{R}^6$, $n$ is the mean motion parameter of the target representing its orbital angular frequency, $m$ is the mass of the agent, and $\mathbf{u}(t)=[u_{x},u_{y},u_{z}]^{\top}\in \mathbb{R}^{3}$ are the forces produced by the agent. For simplicity, we assume $u$ is produced by thrusters on the satellite and no additional internal or external forces are present. For a more detailed description of the assumptions required to establish \eqref{eq:CWH}, see \cite{petersen2021} or \cite{wiesel1989}. A family of closed-form solutions, called Natural Motion Trajectories (NMTs), is derived from \eqref{eq:CWH} by setting $u(t) \equiv 0$, computing the Laplace transform, and solving for $x(t)$, $y(t)$, and $z(t)$ explicitly; see the derivation of eq. 82 in \cite{Irvin2007}.
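As a numerical sanity check on \eqref{eq:CWH}, the thrust-free dynamics are easy to integrate; the minimal sketch below uses the mean motion value adopted later in the paper and an initial state satisfying the classical closed-NMC condition $\dot{y}_0 = -2nx_0$.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: propagate the thrust-free linearized CWH dynamics.
n = 0.001027  # target mean motion [1/s], as specified later

def cwh(t, x):
    px, py, pz, vx, vy, vz = x
    return [vx, vy, vz,
            3 * n**2 * px + 2 * n * vy,   # radial
            -2 * n * vx,                  # along-track
            -n**2 * pz]                   # cross-track

# Closed NMC initial condition: ydot0 = -2 * n * x0.
x0 = [200.0, 0.0, 0.0, 0.0, -2 * n * 200.0, 0.0]
sol = solve_ivp(cwh, (0.0, 2 * np.pi / n), x0, max_step=10.0)
\end{verbatim}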
NMTs can be parametrized by starting position $[x_{0},y_{0},z_{0}]$, ending position $[x_{f},y_{f},z_{f}]$, and a feasible time-of-flight (TOF) $T>0$. Viewpoint transfers are determined by the fixed initial velocity needed by the agent to place them on the parameterized NMT connecting a starting and ending position. We later utilize this to approximate low-fuel impulsive burn commands to navigate between points in the Hill frame. Following eq. 109 in \cite{Irvin2007}, the required initial velocity is:
\begin{align}
\begin{bmatrix}
\dot{x}_o\\
\dot{y}_o\\
\dot{z}_o\\
\end{bmatrix}
=
\begin{bmatrix}
\frac{-4S+3nTC}{D} &\frac{2-2C}{D} & 0 & \frac{4S-3nT}{D} &\frac{-2+2C}{D} & 0\\
\frac{-14+6nTS+14C}{D} &\frac{-S}{D} & 0 & \frac{2-2C}{D} & \frac{S}{D} & 0\\
0 & 0 & \frac{C}{S} & 0 & 0 & \frac{1}{S}\\
\end{bmatrix}
\begin{bmatrix}
x_0\\
y_0\\
z_0\\
x_f\\
y_f\\
z_f\\
\end{bmatrix}
\label{eq:initialvelocity}
\end{align}
where $S = \sin(nT)$, $C = \cos(nT)$ and $D = 8-3nTS-8C$. Similarly, the final velocity upon arrival at $[x_{f},y_{f},z_{f}]$ is calculated in eq. 113 of \cite{Irvin2007} as:
\begin{align}
\begin{bmatrix}
\dot{x}_f\\
\dot{y}_f\\
\dot{z}_f\\
\end{bmatrix}
=
\begin{bmatrix}
\frac{-4S+3nTC}{D} &\frac{-2+2C}{D} & 0 & \frac{4S-3nT}{D} &\frac{2-2C}{D} & 0\\
\frac{2-2C}{D} &\frac{-S}{D} & 0 & \frac{-14+6nTS+14C}{D} & \frac{S}{D} & 0\\
0 & 0 & \frac{-1}{S} & 0 & 0 & \frac{C}{S}\\
\end{bmatrix}
\begin{bmatrix}
x_0\\
y_0\\
z_0\\
x_f\\
y_f\\
z_f\\
\end{bmatrix}.
\label{eq:finalvelocity}
\end{align}
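These transfer-velocity maps translate directly to code. The sketch below transcribes \eqref{eq:initialvelocity}; it assumes $\sin(nT)\neq 0$ so that the out-of-plane terms are well defined, and the example inputs are illustrative.
\begin{verbatim}
import numpy as np

# Sketch: single-burn initial velocity for an NMT transfer from
# r0 to rf with time of flight T and mean motion n.
def nmt_initial_velocity(r0, rf, T, n):
    S, C = np.sin(n * T), np.cos(n * T)   # assumes sin(nT) != 0
    D = 8.0 - 3.0 * n * T * S - 8.0 * C
    M = np.array([
        [(-4*S + 3*n*T*C)/D, (2 - 2*C)/D, 0,
         (4*S - 3*n*T)/D, (-2 + 2*C)/D, 0],
        [(-14 + 6*n*T*S + 14*C)/D, -S/D, 0, (2 - 2*C)/D, S/D, 0],
        [0, 0, C/S, 0, 0, 1/S],
    ])
    return M @ np.concatenate([r0, rf])

v0 = nmt_initial_velocity(np.array([200.0, 0.0, 0.0]),
                          np.array([0.0, 200.0, 0.0]),
                          T=1500.0, n=0.001027)
\end{verbatim}
The final-velocity map \eqref{eq:finalvelocity} follows the same pattern with its own coefficient matrix.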
\subsubsection{Torque-Free Rigid Body Dynamics:}\label{subsection:tumbling_dynamics}
The evolution of the attitude dynamics of the inspection target is modeled by Euler's rotational equations of motion for rigid bodies. Supposing the body frame (BF) of the inspection target aligns with the principal axes of inertia and assuming that there are no external applied torques, the equations of motion take the following form:
\begin{align}
I_{xx}\dot{\omega}_x &= (I_{yy} - I_{zz})\omega_y\omega_z\nonumber \\
I_{yy}\dot{\omega}_y &= (I_{zz} - I_{xx})\omega_x\omega_z\nonumber \\
I_{zz}\dot{\omega}_z &= (I_{xx}- I_{yy})\omega_x\omega_y
\label{eq:ERQ_MOI_NoTorque}
\end{align}
where $I = \text{Diag}(I_{xx}, I_{yy}, I_{zz}) \in \mathbb{R}^{3\times3}$ is the inertia matrix, and $\mathbf{\omega} = [\omega_x \ \omega_y \ \omega_z]^T \in \mathbb{R}^{3}$ is the angular velocity of the inspection target in the ECI frame. As in \cite{FMark}, we may then recover a quaternion representation $q^{BF}_{ECI}$ of the attitude dynamics from BF to ECI by solving \eqref{eq:ERQ_MOI_NoTorque}. This is given by:
\begin{align}
\dot{q}^{BF}_{ECI} =
\frac{1}{2}
q^{BF}_{ECI}
\otimes
\begin{bmatrix}
0 \\
\mathbf{\omega} \\
\end{bmatrix}
.
\label{eq:ERQ_att}
\end{align}
The resulting dynamics can induce a variety of different behaviors in target attitude evolution; a few examples are shown in Fig. \ref{fig:CombinedDynamics}. These capture the evolution of the target body frame relative to the Hill frame, calculated through a frame transformation applied to \eqref{eq:ERQ_att}. The difference between dynamic modes is a direct result of the intermediate axis theorem \cite{VanDamme,Leine2021} applied to various configurations of initial velocity in \eqref{eq:ERQ_MOI_NoTorque}. Within the Hill frame, ECI static represents a slow counterclockwise rotation; Hill static represents no rotation; Single-Axis represents a clockwise rotation; and the tumbling modes represent multi-axis rotations about the stable and unstable body-frame axes, respectively. Alternatively, ECI static can be considered a star-pointing mode and Hill static a Nadir-pointing mode.
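The dynamic modes in Fig.~\ref{fig:CombinedDynamics} are straightforward to reproduce numerically. The sketch below integrates \eqref{eq:ERQ_MOI_NoTorque} together with the quaternion kinematics \eqref{eq:ERQ_att} under a scalar-first convention; the inertia values follow the experiment configuration stated later in the paper, while the initial rate is an illustrative near-intermediate-axis spin.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: torque-free rigid-body rates plus quaternion kinematics.
Ixx, Iyy, Izz = 100.0, 50.0, 70.0

def rigid_body(t, s):
    q, w = s[:4], s[4:]          # scalar-first quaternion, body rates
    dw = [(Iyy - Izz) * w[1] * w[2] / Ixx,
          (Izz - Ixx) * w[0] * w[2] / Iyy,
          (Ixx - Iyy) * w[0] * w[1] / Izz]
    qw, qx, qy, qz = q           # dq = 0.5 * q (quaternion-mult) [0, w]
    dq = 0.5 * np.array([-qx*w[0] - qy*w[1] - qz*w[2],
                          qw*w[0] + qy*w[2] - qz*w[1],
                          qw*w[1] + qz*w[0] - qx*w[2],
                          qw*w[2] + qx*w[1] - qy*w[0]])
    return np.concatenate([dq, dw])

# Spin near the intermediate axis (Izz here) -> unstable tumble.
s0 = np.array([1.0, 0, 0, 0, 0.01, 0.01, 0.2])
sol = solve_ivp(rigid_body, (0.0, 600.0), s0, max_step=1.0)
# In practice, renormalize the quaternion periodically.
\end{verbatim}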
\begin{figure}[h!]
\centering
\includegraphics[width=.6\linewidth]{figures/CombinedDynamics.eps}
\caption{Examples of torque-free rigid body dynamic behaviors as seen in the Hill frame. Each example was generated over a different amount of time to exemplify distinctive qualitative features of each mode.}
\label{fig:CombinedDynamics}
\end{figure}
\subsection{Reinforcement Learning and DEC-POMDPs}\label{subsection:rl}
RL is a subset of machine learning techniques commonly used to solve sequential optimization problems. Heuristically this is done through many, repeated interactions with an environment that provides a reward or penalty to each interaction. Utilizing a pool of collected experience, estimation of the ``best" interactions and their corresponding ``value" can be iteratively improved upon. In well-posed problems, convergence to an analytically derived optimal control is guaranteed. For more complicated environments where analytical options are unavailable, RL methods have proven to be indispensable in planning operations; see for instance \cite{Lin92, Mnih13, Mnih15, fedus20}. The multiagent autonomous inspection problem frequently falls under this umbrella and we focus our motivation on a specific learning objective within RL, known as Q-learning. To do so, we briefly introduce the basis of our optimization problem in the form of a DEC-POMDP.
A DEC-POMDP may be formally represented as a tuple $(\mathbb{D},S,\mathbb{A},T,\mathbb{O},O,R)$ for which: $\mathbb{D} = \{1,\dots,m\}$ is a set of $m$ agents, $S$ is a set of model states, $\mathbb{A} = \prod_{i\in\mathbb{D}}\mathbb{A}_{i}$ is the set of joint actions, $T$ is a transition probability function, $\mathbb{O} = \prod_{i\in\mathbb{D}}\mathbb{O}_{i}$ is a set of joint observations, $O$ is the observation probability function, and $R:S\times\mathbb{A}\rightarrow\mathbb{R}$ is the joint immediate reward or cost function. Using agent-specific subscripts $i$, if $S$ and $R$ can be factored according to $S=\prod_{i\in\mathbb{D}}S_{i}$ and $R(s,a) = f(R_{1}(s_{1},a_{1}),\dots, R_{m}(s_{m},a_{m}))$ for monotone, continuous $f(\cdot)$, we say the DEC-POMDP is \textit{agent-factored}. Frequently used to cast optimal planning problems under partial information where $R(s,a) = \sum_{i\in\mathbb{D}}R_{i}$, this framework forms the basis of many reinforcement learning problems; see for example \cite{Mnih15, fedus20, Schaul15}. In these, agents seek to maximize the sum of discounted future rewards accrued through sequential interaction with the environment. The time-preference for reward acquisition is modeled by a fixed discount rate $\gamma\in(0,1)$. An agent's \emph{interaction} is modeled through a policy mapping function $\pi_{i}:O_{i}\mapsto \mathbb{A}_{i}$ (potentially stochastic); a joint policy can then be naturally expressed through $\pi = (\pi_{1},\dots,\pi_{m})$. Formally, the policy mapping functions are composed of two separate components: one maps $O_{i}\mapsto S_{i}$ through so-called belief-state estimation, and the second maps $S_{i}\mapsto \mathbb{A}_{i}$. Belief-state estimation is a crucial complication in many problems where the observation space is not rich enough to reveal the hidden system state. The RL methods that we implement work directly to estimate both steps simultaneously with two different parametrized neural networks, resulting in a single mapping from $O_{i}\mapsto \mathbb{A}_{i}$, as seen above. This is enabled by taking advantage of agent-$i$'s \textit{past experience} contained in the observation space. For the RL algorithm to know what actions are considered ``good" or ``bad", it needs a metric to determine the value of a policy in terms of the optimization problem itself; this is enabled through the estimation of a so-called \textit{Q-function}. Collecting past experience\footnote{There are multiple ways of defining sets of observations. These range from a linear sequence of fixed and finite cardinality to collections of observations selected through internal analysis of estimable quantities and functions. The algorithm we use forms this through an \textit{experience replay} buffer.} in a set of observations $\overline{o}_{i} = \{o_{i,k}\}_{k=1}^{K}$ we aim to estimate the \emph{observation-value function} $Q_{i}$ given by:
\begin{equation}\label{eqn:agentQ_ER}
Q_{i}^{\pi_{i}}(\overline{o}_{i},a_{i}):=\mathbb{E}^{\pi}\left[\sum_{k=0}^{\infty}\gamma^{k}R_{i}\left(S_{i,t+k+1}^{\pi},a_{i,t+k+1}\right)\big| \overline{O}_{i,t}=\overline{o}_{i},\,a_{i,t}=a_{i}\right],\,\forall i\in\mathbb{D}
\end{equation}
where $a_{i,s} =\pi_{i}(\overline{o}_{i,s}^{\pi}),\, \forall s\ge t\text{ and }a_{i,t} = a_{i}$ are the actions determined by each agent's policy $\pi_{i}$; $\overline{O}_{i}$ denotes the collection of all agent-$i$ observation sets of cardinality at most $K$ occurring before time $t\ge0$. The corresponding \emph{value} of $\pi_{i}$, denoted by $V_{i}^{\pi_{i}}$, is the average of $Q_{i}^{\pi_{i}}$ with respect to the measure induced by $\pi_{i}$, i.e., $V_{i}^{\pi_{i}}(\overline{o}_{i}):=\sum\limits_{a\in\mathbb{A}}\pi(a|\overline{o}_{i})Q_{i}^{\pi_{i}}(\overline{o}_{i},a)$. The reward that is received depends on the true value of the hidden state, which is what provides the basis for implicit hidden state estimation. Each agent then seeks to find an optimal policy satisfying:
\begin{equation}\label{eqn:agentPol}
\pi_{i}^{*} \in \argmax\limits_{\pi_{i}}V_{i}^{\pi_{i}}(\overline{o}_{i}),\text{ corresponding to some } Q_{i}^{*}.
\end{equation}
The objective of Q-learning is to learn the solution to \eqref{eqn:agentPol} through maximization of the $Q_{i}$ function in \eqref{eqn:agentQ_ER}, converging to the maximal $Q_{i}^*$; see \cite{Mnih13, Mnih15} for the first neural-network based approach. Due to the break in Markovianity when working in environments with partial information, many RL algorithms leverage a transformed version of \eqref{eqn:agentQ_ER} (directly including a model for belief-state transition probability) to ensure applicability of dynamic programming methods for stochastic gradient descent on $Q$-function updates; see \cite{Oliehoek16} for a brief summary. So-called model-free methods do not require this and in general do not require any models for the state transition, observation transition, or belief-state probability distribution. This is particularly advantageous when the belief-state probability is difficult to model and is commonly circumvented by utilizing a recurrent network to carry forward information through time; see for instance the implementations of IMPALA, PG, PPO, and R2D2 in \cite{RLlib}. This is typically balanced with off-policy methods where $Q_{i}^*$ can be directly estimated using a target policy instead of the action policy, a property that was key to establishing many early convergence results in RL. For a high-level review of this topic, see \cite{Sutton18}.
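For readers unfamiliar with Q-learning, the tabular special case below captures the essential off-policy update with an experience-replay buffer; it is a deliberately minimal stand-in for the recurrent, neural, distributed machinery of R2D2, not a description of it, and all sizes and rates are illustrative.
\begin{verbatim}
import random
from collections import deque
import numpy as np

# Minimal tabular sketch of off-policy Q-learning with replay.
n_obs, n_act, gamma, lr = 50, 20, 0.99, 0.1
Q = np.zeros((n_obs, n_act))
replay = deque(maxlen=10_000)   # stores (o, a, r, o_next) tuples

def act(o, eps=0.1):
    # Epsilon-greedy action policy.
    if random.random() < eps:
        return random.randrange(n_act)
    return int(np.argmax(Q[o]))

def train_step(batch_size=32):
    for o, a, r, o_next in random.sample(
            replay, min(batch_size, len(replay))):
        target = r + gamma * np.max(Q[o_next])  # off-policy backup
        Q[o, a] += lr * (target - Q[o, a])
\end{verbatim}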
\FloatBarrier
\section{Problem Formulation}
\label{section:Problem}
The autonomous inspection task considers the inspection of a target object in space via one or multiple inspector satellites equipped with sensors that perform surface area scans within a certain range of the target. Inspection completion is measured by cumulative agent sensor exposure to a user-defined proportion of the target surface, as measured through a visibility calculation on predefined points of interest (POIs) on the surface of the target. The inspection process is decomposed into three distinct components: 1) viewpoint planning (where to take images), 2) agent routing/path planning (which agents should move to which viewpoints), and 3) viewpoint transfers (point-to-point navigation). This takes place in a global (i.e., shared by all actors) environment consisting of feasible agent viewpoints, target POIs, as well as all necessary parameters to fix agent and target dynamics. Our model assumptions and definitions used to construct this are detailed below in Section~\ref{subsection:Env}. Viewpoint planning and vehicle routing are considered in a single RL-driven agent-factored DEC-POMDP, henceforth referred to as ``high-level" planning. We implement a simple analytical MPC controller built around the structure of high-level operations to address (3), henceforth referred to as ``low-level" planning.
\subsection{Environment}\label{subsection:Env}
The global environment on which the inspection task is carried out consists of the following key components: a viewpoint graph, inspecting agents, the inspection target, agent camera operation or point visibility, and initializing parameter specifications.
\subsubsection{Viewpoints:} We fix a set $\mathcal{V}$ of $|\mathcal{V}|=20$ viewpoints that remain static within the Hill frame. These represent feasible positions from which agents may take an image of the target. We position them pseudo-uniformly on the surface of a sphere of radius $200m$ via a projected Fibonacci lattice, see \cite{Hardin2016}. This viewpoint generation methodology easily allows us to change the number of viewpoints and provides a simple proxy for reasonable surface coverage of an interior target centered at the Hill origin. We assume that there are three inspecting agents giving $\mathbb{D}=\{1,2,3\}$ where each high-level planning action represents a viewpoint transfer. We then take the high-level agent-factored action space as the index set of $\mathcal{V}$, denoted by $\mathbb{A}_{i}=I_{\mathcal{V}}$. The set $\mathcal{V}$ is shown in the leftmost plot within Fig. \ref{fig:ExSnap} with labels indicating the action in $I_{\mathcal{V}}$ that would take an agent to the fixed corresponding viewpoint. A joint action is then a triplet of viewpoint indices corresponding to physical locations in the Hill frame to transfer to.
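A minimal construction of such a projected Fibonacci lattice is sketched below; the radius and count match the values stated above.
\begin{verbatim}
import numpy as np

# Sketch: pseudo-uniform viewpoints on a sphere of radius 200 m
# via a Fibonacci lattice.
def fibonacci_sphere(count=20, radius=200.0):
    i = np.arange(count)
    golden = (1 + 5 ** 0.5) / 2
    z = 1 - 2 * (i + 0.5) / count        # uniform in cos(polar angle)
    theta = 2 * np.pi * i / golden       # golden-angle longitudes
    r_xy = np.sqrt(1 - z ** 2)
    return radius * np.stack([r_xy * np.cos(theta),
                              r_xy * np.sin(theta), z], axis=1)

viewpoints = fibonacci_sphere()          # the set V, a 20 x 3 array
\end{verbatim}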
\begin{comment}
\begin{figure}[h!]
\centering
\includegraphics[width=.5\linewidth]{figures/SphereInspectionActionDecoder.eps}
\caption{Map of viewpoints with associated actions.}
\label{fig:Viewpoints}
\end{figure}
\end{comment}
\begin{figure}[h!]
\centering
\includegraphics[width=.9\linewidth]{figures/SphereInspectionHybrid2.eps}
\caption{Left: Map of actions to viewpoints. Right: An example of a joint agent action and the resulting collection of visible POIs.}
\label{fig:ExSnap}
\end{figure}
\subsubsection{Point Visibility:}
Upon arrival at a requested viewpoint transfer, each agent takes an ``image" of the target. We assume that each agent maintains a target-pointing attitude, so that its camera is pointing towards the Hill frame origin. Each image is captured using a basic FOV filter (set to $15^{\circ}$) applied to the set of all visible points given fixed target attitude and agent position. Point occlusion was handled through reflected spherical projections, as described in \cite{Katz07}, natively implemented by Open3D \cite{open3d}. An example of a joint snapshot by three agents of a unit sphere target is shown in the bottom right portion of Fig.~\ref{fig:ExSnap}. We tuned the diameter of our spherical projection used for hidden point removal to $208874.855$\,m.
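The per-image visibility computation can be sketched as follows using Open3D's hidden-point-removal routine. Treating the $15^{\circ}$ FOV as a half-angle about the boresight is our assumption here, and the tuned projection diameter from the text is passed as the routine's radius parameter.
\begin{verbatim}
import numpy as np
import open3d as o3d

# Sketch: occlusion filter (spherical projection of Katz et al.)
# followed by a basic field-of-view check for a target-pointing
# camera at cam_pos (a 3-vector in the Hill frame; target at origin).
# pcd is an open3d.geometry.PointCloud of target POIs.
def visible_indices(pcd, cam_pos, fov_deg=15.0,
                    proj_radius=208874.855):
    _, idx = pcd.hidden_point_removal(cam_pos, proj_radius)
    pts = np.asarray(pcd.points)[idx]
    boresight = -cam_pos / np.linalg.norm(cam_pos)
    rays = pts - cam_pos
    rays /= np.linalg.norm(rays, axis=1, keepdims=True)
    in_fov = rays @ boresight >= np.cos(np.radians(fov_deg))
    return np.asarray(idx)[in_fov]
\end{verbatim}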
\subsubsection{Target Geometry and Dynamics:} The target geometry used was a point cloud (.ply) representation of the Aura satellite (https://nasa3d.arc.nasa.gov). This contained a full 3-D deconstruction of the satellite including internal components. As these are not externally visible to our agents, we downsampled the cloud using Open3D, ensuring that 95 percent of the reduced set of points (9514 POIs) lie on a \textit{visible} target surface. Due to our attitude-pointing assumption, visibility for two dynamic rotation modes (static in H and static in ECI) is limited to a maximum of 93\% of the downsampled point cloud. In particular, the maximum inspection percentage that is achievable across all rotation modes is $<98\%$. This point cloud contained the final set of POIs used for our inspection, which we denote by $\mathcal{P}$. Inspection progress is tracked utilizing the index set of $\mathcal{P}$, denoted by $I_{\mathcal{P}}$. An example inspection of $\mathcal{P}$ is shown in Fig. \ref{fig:AuraInsEx}, represented by a sequence of joint agent actions. Our inspection threshold used during training was 85\% of $\mathcal{P}$, corresponding to $\ge 90\%$ of visible POIs.
\begin{comment}
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{figures/SphereInspection.eps}
\caption{Example inspection of a point cloud in the shape of a sphere. Step 1 shows an action for a single agent. The sphere that all potential viewpoints lie on is shown. The target to inspect lies at the center of the viewpoint sphere. Steps 2 through 4 show three actions taken by three agents and the accumulated target POIs seen. \kl{what is red vs blue??}}
\label{fig:CombinedDynamics}
\end{figure}
\end{comment}
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{figures/SatInspection.eps}
\caption{Example inspection of a point cloud in the shape of the Aura Satellite used for training. Step 1 shows an action for a single agent where the discretized sphere of all potential viewpoints is shown. The target to inspect lies at the center of the viewpoint sphere. Steps 2 through 4 show three actions taken by three agents and the corresponding accumulated POIs seen.}
\label{fig:AuraInsEx}
\end{figure}
For simplicity, we assume the target is in an orbit with an inclination and eccentricity of zero with no external disturbances. The Earth is specified as having a radius $r$ of $6,378.14$km with a gravitational parameter of $3.986004418\times10^{5}$ $\frac{km^{3}}{s^{2}}$. The target has an orbital radius $r_{t_{orbit}} = 7357$km. This yields a mean motion of the target equal to $n=0.001027\,\mathrm{s}^{-1}$ and an orbital period of 6117.99s. We assume the target has moments of inertia $I_{xx} =$ 100, $I_{yy} =$ 50, and $I_{zz} =$ 70 with an initial attitude $q^{BF}_{ECI} = [1 \ 0 \ 0 \ 0]$. Each dynamic mode tested is determined by the initial angular velocity in the target body frame, denoted by $\omega^{BF}$; these are illustrated in Fig. \ref{fig:CombinedDynamics}.
\subsubsection{Viewpoint Transfers: }
The fuel cost and transfer time associated with movement between viewpoints (along an NMT) are calculated according to a basic heuristic allowing for an analytical connection between costs for high-level planning and physically realizable trajectory generation during low-level navigation. This segregates high-level from low-level planning and greatly expedites training. It also helps prevent information leakage and reduces the difficulty of causal attribution during training; see Section~\ref{section:Methodology} below for a more detailed discussion. Viewpoint transfers are calculated through single-burst thrust targets needed to place an agent on an NMT connecting the viewpoints under a parametrized and predetermined TOF. We assume agents move between viewpoints at roughly the same rate needed to maintain a \emph{closed} NMT or natural motion circumnavigation (NMC). This is determined by the angle between viewpoints in the Hill frame. Parking actions are unique since the natural TOF for an NMC is simply one orbital period. This is overly restrictive since agents may only take an image at the end of a viewpoint transfer. As such, we choose a single-burst rule for parking equal to half the time needed to traverse to the nearest neighboring viewpoint. Fixing two viewpoints $v_{1},v_{2}\in\mathcal{V}$, the TOF is determined by:
\begin{equation}\label{eqn:LL_proxy}
\Delta T(v_{1},v_{2}) :=
\begin{cases}
\sqrt{r_{0}^{3} / \mu}\arccos\left(\frac{v_{1}\cdot v_{2}}{\|v_{1}\| \|v_{2}\|}\right), & v_{1}\neq v_{2}\\
\frac{1}{2}\sqrt{r_{0}^{3}/\mu}\min\limits_{\mathbf{x},\mathbf{y}\in\mathcal{V}}\arccos\left(\frac{\mathbf{x}\cdot \mathbf{y}}{\|\mathbf{x}\| \|\mathbf{y}\|}\right), & v_{1}=v_{2}
\end{cases}.
\end{equation}
The velocity needed to successfully transfer from $v_{1}$ to $v_{2}$ along an NMT with TOF $\Delta T(v_{1},v_{2})$ is given by $\mathbf{v}_{0}\left(v_{1},v_{2},\Delta T(v_{1},v_{2})\right)$ solving \eqref{eq:initialvelocity}. For a fixed initial velocity at $v_{1}$ given by $\mathbf{v}(v_{1})$ the instantaneous $\Delta V$ required for transfer is then
\begin{equation}\label{eqn:delV}
\Delta V(v_{1},v_{2}) := \|\mathbf{v}_{0}\left(v_{1},v_{2},\Delta T(v_{1},v_{2})\right) - \mathbf{v}(v_{1})\|_{2}.
\end{equation}
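Together, \eqref{eqn:LL_proxy} and \eqref{eqn:delV} reduce to a few lines of code. The sketch below reuses the nmt\_initial\_velocity helper from the sketch in Section~\ref{subsec:spaceDynamics}; the orbital radius, gravitational parameter, and precomputed minimum viewpoint separation angle are assumed inputs.
\begin{verbatim}
import numpy as np

# Sketch of the TOF heuristic and single-burn transfer cost.
def transfer_time(v1, v2, r_orbit, mu, min_angle):
    if np.allclose(v1, v2):              # parking action
        return 0.5 * np.sqrt(r_orbit**3 / mu) * min_angle
    cos_ang = np.dot(v1, v2) / (np.linalg.norm(v1)
                                * np.linalg.norm(v2))
    return np.sqrt(r_orbit**3 / mu) * np.arccos(
        np.clip(cos_ang, -1.0, 1.0))

def delta_v(v1, v2, vel_now, T, n):
    # Instantaneous burn to enter the NMT from the current velocity.
    v_needed = nmt_initial_velocity(v1, v2, T, n)
    return np.linalg.norm(v_needed - vel_now)
\end{verbatim}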
\subsection{Viewpoint Selection and Vehicle Routing}\label{subsection:viewController_prob}
The viewpoint selection controller or high-level planner is constructed to solve an agent-factored DEC-POMDP determining which viewpoints to visit and in what order. Rewards for the high-level planner are determined by point visibility upon arrival at a viewpoint and are penalized by the expected fuel cost. The joint state space $S:= \mathcal{A}\times\mathcal{T}\times \mathcal{I}$ is composed of three distinct parts: agent state $\mathcal{A}$, target state $\mathcal{T}$, and inspection state $\mathcal{I}$. The agent state is defined as the combined relative position and velocity of each agent in the Hill frame; $(\mathbf{x}_{i},\mathbf{v}_{i})\in \mathbb{R}^{3}\times\mathbb{R}^{3}$ for $i=1,2,3$ giving $\mathcal{A}:=\mathbb{R}^{18}$. Since the target's translational state defines the Hill-frame origin in ECI, the target's translational dynamics are constant with zero \textit{relative} position and velocity. Target state is then defined solely through attitude and angular velocity with respect to the Hill frame: $(\mathbf{q}_{H}^{BF},\mathbf{\omega}^{H})\in Q\times \mathbb{R}^{3}:=\mathcal{T}$. Lastly, inspection state is defined through a boolean vector $\mathbf{p} = (p_{1},\dots, p_{N})\in \mathcal{I}:=\{0,1\}^{N}$ where $N = |I_{\mathcal{P}}|$ and $p_{i} = 1$ if the point indexed by $i\in I_{\mathcal{P}}$ has been observed in $\mathcal{P}$ and is $0$ otherwise. Agent actions in $\mathbb{A}_{i}=I_{\mathcal{V}}$ represent a viewpoint transfer as described in Section~\ref{subsection:Env}. These are determined by a feedback control policy $\pi_{i}(\cdot)$ mapping experience or observations to a viewpoint choice. A joint environment step is driven by the simultaneous selection of three actions, one by each agent. More concretely, the $(k+1)^{\text{th}}$ joint environment step is related to agent-level traversal time $t_{i,k+1}$ by:
\begin{equation}\label{eqn:EnvCalTime}
t_{k+1} := \max\limits_{i=1,2,3}|t_{i,k+1}| = \max\limits_{i=1,2,3}\big|t_{k}+\Delta T\left(\mathbf{x}_{i}(k),\mathbf{x}_{i}(k+1)\right)\big|.
\end{equation}
An important consequence of this is the way in which hidden state information is updated on the target. All agents must make simultaneous planning decisions, even in the face of sequential rollout. Agent-level state is only updated upon successful viewpoint transfer and incorporates any updates to the inspection state that other agents may have triggered prior to arrival at the commanded viewpoint ($s_{i,t_{k+1}}:=s_{i,k+1}$); the joint update is simply: $s_{t_{k+1}}$. In this model, the state transition function is deterministic. From each state-action pair $(s_{i,k}=s, a_{i,k}=a)\in S_{i}\times \mathbb{A}_{i}$ the witnessing of state $s_{i,k+1} = s'$ is guaranteed with the draw $s'$ determined by the evolution of target dynamics over time $\Delta T(\mathbf{x}^{i},\mathbf{x})$ and forward propagated agent velocity calculated via \eqref{eq:finalvelocity}.
Agent observations on this space include: agent state, target state, a \emph{subset} of inspection state, and traversal time. For instance, fix the $(k+1)^{\text{st}}$ transition state for agent $i$ as $s_{i,k+1}$, as above. Upon arrival at viewpoint $\mathbf{x'}\in \mathcal{V}$ at time $t_{i,k+1}$, agent $i$ \textit{and only agent $i$} observes $\mathbf{p}_{vis}^{\mathbf{x}'}(t_{i,k+1})$ of the target POIs. This provides an update to the state and forms an observation $o_{i,k+1}$ given by:
\begin{align*}
s_{i,k+1} &= [\mathbf{x}_{\{1,2,3\}},\mathbf{v}_{\{1,2,3\}},\mathbf{q}_{H}^{BF}(t_{i,k+1}),\mathbf{\omega}^{H}(t_{i,k+1}),\mathbf{p}_{vis}^{\mathbf{x}'}(t_{i,k+1})\lor \mathbf{p}(t_{j,k+1})],\text{ for }t_{j,k+1}\in[t_{k},t_{i,k+1}]\\
o_{i,k+1} &= [\mathbf{x}_{\{1,2,3\}},\mathbf{v}_{\{1,2,3\}},\mathbf{q}_{H}^{BF}(t_{i,k+1}),\mathbf{\omega}^{H}(t_{i,k+1}),\mathbf{p}_{vis}^{\mathbf{x}'}(t_{i,k+1}),t_{i,k+1}].
\end{align*}
In the above, note that the other agents' positions and velocities are passed in by the last state update $s_{j,k}$ occurring immediately before $t_{i,k+1}$. This prevents agent foresight into future actions and also helps maintain the appropriate DEC-POMDP structure. Note that the observation probability function $O$ is determined in much the same way that $T$ for the state space was: deterministically, dependent on the evolution of the dynamics. The state encodes all information that has been previously retrieved by all agents within the inspection, whereas the observation simply encapsulates a single agent's ``picture" of the target. As each agent interacts with the global multiagent environment, translational and rotational movement are fully communicated, but the agent does not witness what others have actually observed of the target.
The immediate reward of a joint state-action pair $(s_{k},a_{k})$ is determined through the sum of agent rewards, each a scaled linear combination of exposure-based \textit{relative} information gain and the expected fuel cost of viewpoint traversal. This is defined by:
\begin{align}\label{eqn:reward}
r_{i}(s_{i,k}, a_{i,k}) &=
\begin{cases}
\alpha\frac{\|\mathbf{p}(t_{i,k+1})\land\neg \mathbf{p}(t_{i,k})\|_{1}}{|\mathcal{P}|-\|\mathbf{p}(t_{i,k})\|_{1}} + \beta \Delta V(\mathbf{x}_{k}^{i},\mathbf{x}_{k+1}^{i}) + r_{0}, & \text{ if }|\mathcal{P}|-\|\mathbf{p}(t_{i,k})\|_{1}>0\\
r_{0}, & \text{ if }|\mathcal{P}|-\|\mathbf{p}(t_{i,k})\|_{1}=0
\end{cases}\\
r(s_{k},a_{k}) &= \sum_{i=1}^{3}r_{i}(s_{i,k}, a_{i,k})\nonumber
\end{align}
where $\alpha,\beta,r_{0}$ are fixed real-valued constants and $\Delta V$ satisfies \eqref{eqn:delV}. In our problem, rewards are only accessible to agents while a sufficiently large number of POIs remains to inspect. This is encoded into the reward function through a random horizon $\tau$, defined as the first hitting time of a user-defined threshold on the inspection coverage ratio, call it $M\in(0,1]$. For the inspection state $\mathbf{p}(t_{k})$ at step $k$ and calendar time $t_{k}$, this is defined by
\begin{equation}\label{eqn:timeHorizon}
\tau:=\inf\left\{t\ge 0 \,:\, \frac{\|\mathbf{p}(t)\|_{1}}{|\mathcal{P}|}\ge M\right\}.
\end{equation}
Note that the law of this random variable is determined by the joint viewpoint selection policy $\pi = (\pi_{1},\pi_{2},\pi_{3})$, and it therefore becomes part of the optimization problem. The updated reward functions become:
\begin{equation}
R_{i}(s_{i,k}, a_{i,k}) = \mathbbm{1}_{\{t_{k}\le\tau\}}r_{i}(s_{i,k}, a_{i,k}),\text{ and } R(s_{k}, a_{k}) = \mathbbm{1}_{\{t_{k}\le\tau\}}r(s_{k}, a_{k})
\end{equation}
respectively. Putting these together, the viewpoint selection planner aims to solve:
\begin{problem}[High-Level Planning]\label{prob:VSP}
For the tuple $(\mathbb{D},S,\mathbb{A},T,\mathbb{O},O,R)$ defining an agent-factored transition and observation independent DEC-POMDP, the viewpoint selection planner aims to find a joint policy $\pi = (\pi_{1},\pi_{2},\pi_{3})$ satisfying \eqref{eqn:agentPol} for each $i=1,2,3$.
\end{problem}
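Returning to the reward specification, a minimal Python sketch of \eqref{eqn:reward} and the stopping condition \eqref{eqn:timeHorizon} follows; \texttt{delta\_v} is a placeholder for the fuel term $\Delta V$ of \eqref{eqn:delV} (its sign convention is inherited from there), and the default constants mirror Table~\ref{table:Problem hyperparameters}.
\begin{verbatim}
import numpy as np

def agent_reward(p_next, p_curr, delta_v, n_pois,
                 alpha=2.0, beta=1.0, r0=0.0):
    """Per-agent reward of Eq. (reward). p_next, p_curr are boolean POI
    vectors at t_{i,k+1} and t_{i,k}; delta_v is the fuel term of
    Eq. (delV), with its sign convention inherited from there."""
    remaining = n_pois - int(p_curr.sum())
    if remaining <= 0:                     # nothing left to inspect
        return r0
    newly_seen = np.logical_and(p_next, np.logical_not(p_curr)).sum()
    return alpha * newly_seen / remaining + beta * delta_v + r0

def horizon_reached(p, n_pois, M=0.85):
    """First-hitting condition of Eq. (timeHorizon): coverage >= M."""
    return p.sum() / n_pois >= M
\end{verbatim}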
The key assumptions we work under as they relate to Problem~\ref{prob:VSP} are:
\begin{enumerate}
\item Agents follow an NMT with TOF given by \eqref{eqn:LL_proxy} and approximate fuel cost \eqref{eqn:delV}.
\item During training, all agents must make simultaneous decisions. Action synchronization is used to ensure that a spatial correlation structure is not formed between agents during training. This is \emph{not} enforced during rollout of the trained policies.
\item There is no hard constraint preventing the possibility of collision or limiting fuel burn capacity. This was used to help minimize environment complexity but can be readily relaxed.
\item All agents are assumed to be Hill frame center pointing throughout training and camera shutter is linked with viewpoint arrival.
\end{enumerate}
\subsection{Navigation Control and Hierarchical Policy Rollout}\label{subsection:mpc_prob}
While the high-level planner assigns agents to viewpoints, a low-level controller is needed, in implementation, to actually move the agents to those viewpoints. At each simulated time step, the low-level controller is passed an observation tuple consisting of agent position, velocity, goal position, and expected traversal time. It solves for the thrust input $u(t)$ in \eqref{eq:CWH} required to achieve the target velocity needed to track the desired NMT. Notationally, we distinguish between the two time scales (high vs. low level) as follows: $\mathbf{x}_{t}$ denotes agent position in the Hill frame at time $t$; for a feasible time interval $(s-t)>0$, we denote by $\mathbf{v}_{0}(\mathbf{x}_{t},\mathbf{x}_{s})$ the \emph{initial velocity} required to traverse between $\mathbf{x}_{t}$ and $\mathbf{x}_{s}$ over $(s-t)$ TOF, whereas $\mathbf{v}_{f}(\mathbf{x}_{t},\mathbf{x}_{s})$ represents the \textit{final velocity}; see \eqref{eq:initialvelocity} and \eqref{eq:finalvelocity}, respectively. The optimization problem solved between high-level environment steps is formulated as:
\begin{problem}[Low-Level Planning] \label{prob: OptimizatonLLC}
Fix a starting time $t_{0}\ge0$, starting position $\mathbf{x}_{0}\in\mathcal{V}$, target traversal time $t_{f}>t_{0}$, and target viewpoint position $\mathbf{x}_{f}\in\mathcal{V}$. The initial agent positional state is given by $\mathbf{x}_{t_{0}}=\mathbf{x}_{0}$ and the desired agent control trajectory must satisfy $\mathbf{x}_{t_{f}}=\mathbf{x}_{f}$. For a fixed $n$-step discretization of the TOF $\Delta T:=t_{f}-t_{0}$ with uniform mesh width $\Delta t := \frac{\Delta T}{n}$, the low-level planner solves:
\begin{align*}
&\min\limits_{\Tilde{\mathbf{x}},\Tilde{\mathbf{u}}}\sum_{j=0}^{n-1}\|\Tilde{\mathbf{u}}_{t_{j}}\|_{2}\Delta t,\,\text{ such that: } \\
&\|\mathbf{v}_{0}(\Tilde{\mathbf{x}}_{t_{j+1}},\mathbf{x}_{f})-\mathbf{v}_{f}(\Tilde{\mathbf{x}}_{t_{j}},\Tilde{\mathbf{x}}_{t_{j+1}})\|_{2}=0,\quad \forall j\in\{0,\dots,n-1\}
\end{align*}
where $\Tilde{\mathbf{x}}_{t_{j}}$ is the agent's position at time $t_{j}$ under the historical thrust control policy $\Tilde{\mathbf{u}}_{t_{0}},\dots,\Tilde{\mathbf{u}}_{t_{j}}$ evolving according to the discretized form of \eqref{eq:CWH}.
\end{problem}
Since the equality constraint between steps is binding, the optimization procedure implicitly minimizes the slack variable $\Tilde{\mathbf{x}}$ through expected positional drift. This is what ensures consistency with the high-level rewards, which require agent position and TOF for information gain and velocity for fuel cost. For comparable approaches on \textit{trajectory}- (vs. velocity-) constrained optimization, see \cite{Morgan14, Foust20}.
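For intuition, the following Python sketch shows one temporally greedy SLSQP step of Problem~\ref{prob: OptimizatonLLC}, as solved in our Methodology via Scipy; \texttt{v0\_required}, \texttt{vf\_reached}, and \texttt{propagate\_cwh} are hypothetical stand-ins for $\mathbf{v}_{0}$, $\mathbf{v}_{f}$, and the discretized dynamics \eqref{eq:CWH}.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def solve_step(x_j, v_j, x_f, dt, v0_required, vf_reached, propagate_cwh):
    """One temporally greedy SLSQP step: choose thrust u_j minimizing
    ||u_j|| * dt subject to the velocity-matching equality constraint."""
    def fuel(u):
        return np.linalg.norm(u) * dt

    def velocity_match(u):
        # One discretized CWH step under candidate thrust u
        x_next, _ = propagate_cwh(x_j, v_j, u, dt)
        # Require v_0(x_{j+1}, x_f) = v_f(x_j, x_{j+1}) at the next node
        return v0_required(x_next, x_f) - vf_reached(x_j, x_next)

    res = minimize(fuel, x0=np.zeros(3), method="SLSQP",
                   constraints=[{"type": "eq", "fun": velocity_match}])
    return res.x   # thrust input for step j
\end{verbatim}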
A solution to Problem~\ref{prob: OptimizatonLLC} represents a specific trajectory for a low-level policy performing navigation control. Acting on the space of all feasible viewpoints, this yields a mapping $\pi^{ll}:\mathbb{R}^{9}\times\mathbb{R}_{+}\mapsto \mathbb{R}^{3}$, where the domain represents a low-level observation space consisting of: agent position, agent velocity, agent viewpoint command, and remaining allowable TOF; see \eqref{eqn:obs} below. This is merged with the trained high-level policies and used in tandem to perform the multi-agent inspection task. An important consequence of this is the change in environment time scale. Although high-level policies are trained making synchronous decisions according to the environment stepping in \eqref{eqn:EnvCalTime}, when rolling out in the hierarchical environment, regular calendar time is used and the artificial time synchronicity is relaxed. Here, there are two policies for each agent: $(\pi_{i}^{hl}(\overline{o}_{hl}),\pi_{i}^{ll}(o_{ll}))$. At every point in time the agent receives an observation $o(t) = [o_{hl}(t),o_{ll}(t)]$ from the hierarchical environment of the form:
\footnotesize
\begin{align}\label{eqn:obs}
o_{hl}(t)
&= [\mathbf{x}_{\{1,2,3\}}(t^{HL}),\mathbf{v}_{\{1,2,3\}}(t^{HL}),\mathbf{q}_{H}^{BF}(t),\mathbf{\omega}^{H}(t),\mathbf{p}_{vis}^{\mathbf{x}'}(t^{HL}),t^{HL}],\,\text{ if }\mathbf{x}(t_{+}^{HL})\in\mathcal{V}_{\epsilon}\text{ and } o(t_{-1})=o_{ll}(t_{-1})\nonumber\\
o_{ll}(t) &= [\mathbf{x}(t),\mathbf{v}(t),\mathbf{x}(t_{+}^{HL}),t_{+}^{HL}-t],\,\text{ if }\mathbf{x}(t_{+}^{HL})\not\in\mathcal{V}_{\epsilon} \text{ or } |t_{+}^{HL}-t|> \overline{t}.
\end{align}
\normalsize
In the above, $t^{HL}$ denotes the calendar time of the agent's current high-level action. $\mathbf{x}(t_{+}^{HL})\in\mathcal{V}$ denotes the prespecified \emph{goal} viewpoint determined by the high-level agent, whereas $t_{+}^{HL}$ denotes the planned time of arrival for the low-level navigator. The set $\mathcal{V}_{\epsilon}$ denotes a collection of $\epsilon$-neighborhoods around each element in $\mathcal{V}$; $\overline{o}$ denotes concatenated observations over historical actions stored within the replay buffer. The parameters $\epsilon\in\mathbb{R}_{+}$ and $\overline{t} \in\mathbb{R}_{+}$ represent success criteria imposed on the low-level navigator. Each low-level observation provides the MPC controller with the information needed to calculate the subsequent agent control input $u(t)$. Once $\|\mathbf{x}-\mathbf{x}(t_{+}^{HL})\| \le \epsilon$ and $|t_{+}^{HL}-t|\le \overline{t}$ are simultaneously satisfied, the next time step returns a high-level observation for the viewpoint selection planner, which maps to a new goal viewpoint through $\pi_{i}^{hl}(\overline{o}_{hl})$. This continues independently until the prespecified inspection threshold is reached or the environment times out.
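A minimal sketch of the controller-switching test implied by \eqref{eqn:obs} follows (names are illustrative; the thresholds default to the Table~\ref{table:Problem hyperparameters} values $\epsilon=.35$ m and $\overline{t}=50$ s).
\begin{verbatim}
import numpy as np

def active_level(x, t, goal_x, goal_t, eps=0.35, t_bar=50.0):
    """Which observation/policy level drives the agent at time t: the
    high-level planner fires only once both success criteria are met."""
    arrived = np.linalg.norm(x - goal_x) <= eps    # within V_eps
    on_time = abs(goal_t - t) <= t_bar             # within t-bar
    return "high" if (arrived and on_time) else "low"
\end{verbatim}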
\FloatBarrier
\section{Methodology}\label{section:Methodology}
\subsection{Recurrent Replay Distributed DQN and Environment Crafting}
We solved the high-level planning problem (Problem~\ref{prob:VSP}) utilizing Recurrent Replay Distributed DQN (R2D2), developed by \cite{Kapturowski18}. It is a model-free, off-policy, value-based reinforcement learning algorithm suitable for partially observable environments like the multi-agent inspection problem. The key considerations for this selection were the potential size of hidden state passed through the environment in the form of images or point clouds, the partial observability, and R2D2's proven success in information-limited environments. We chose to limit observations of inspection progress to include only current ``images'' of the target and not the collectively observed portion of POIs. In cases where progress is shared but images are not (due to bandwidth or size constraints), a lost or corrupted database would likely leave the agents without direction unless such dropouts are simulated explicitly during training; doing so requires an intentional and direct modeling effort that may not be reflective of real-world operation. In this sense, we aimed to provide observations that were as close as possible to those received in ``single-agent'' environments while still providing enough information to ensure the multi-agent collaboration remains beneficial to the inspection mission.
Algorithmically, R2D2 follows a training structure similar to deep Q-networks (DQN, \cite{Mnih15}), with modifications to handle recurrent neural networks. The key algorithmic components include: a prioritized distributed experience replay buffer, an $n$-step double Q-learning framework, and an application-specific recurrent deep neural network architecture trained through a combination of backpropagation-through-time and stochastic gradient descent. For more details, readers are encouraged to consult \cite{Kapturowski18,Mnih13,Mnih15, Van16,Schaul15, Yu19}. Our training environments were constructed as gym environments to conform with Ray/RLlib's multiagent training framework, which includes implementations of many of the most popular reinforcement learning algorithms, including R2D2; see \cite{RLlib}. Ray's native parameter tuner was then used to find the learning rate, priority exponent, and importance sampling exponent. Each parameter was searched on an interval $[.9p_{0},1.1p_{0}]$, where $p_{0}$ is the default value used in the standard R2D2 config. Batch size and replay buffer capacity were fixed experimentally to reduce run-time during training; the buffer capacity and max sequence length were determined heuristically as upper bounds for the most action-intensive inspection mode, the stable tumble. For each rotational dynamic mode, we trained policies with a common architecture. Observations are pre-processed in accordance with RLlib's default module and fed into a feed-forward fully connected network with two hidden layers of size 64 and tanh activation functions. The network is wrapped in an LSTM layer (see \cite{Hochreiter97, Yu19}) with a hidden state of size 64. For a replay sequence of observations and hidden states $\{o_{i,t+k}\}_{k=1}^{m},\{s_{i,t+k}\}_{k=1}^{m}$, a corresponding $Q$ update is based on the $Q$-value discrepancy given by:
\begin{equation}
\Delta Q_{i} = \frac{\|Q_{i}(\hat{s}_{i,t+k};\hat{\theta}) - Q_{i}(s_{i,t+k};\hat{\theta}) \|_{2}}{|\max\limits_{a,j}Q(\hat{s}_{i,t+j};\hat{\theta})_{a}|}
\end{equation}
where $\hat{s}$ represents the internal estimate of the hidden state and $\hat{\theta}$ are the parametrized weights for the neural network approximating $Q_{i}$. As motivated in \cite{Kapturowski18}, this helps measure the impact of representational drift and recurrent-state staleness. The temporal difference (TD) error in approximations of derived policy value represents the stepped difference between improvements toward the true optimal value $V^*$ in \eqref{eqn:agentPol}. This is used internally by R2D2 to weight the state-action-reward sequences in the experience replay buffer. We set the corresponding priority exponent to $p=.6$, lower than in \cite{Kapturowski18}, because the $n=1$ step update used for $Q$ reduces the averaging effect over long sequences of observations seen in \cite{Kapturowski18}. The choice of $n=1$ was motivated by considerations for training on tumbling modes, where state estimation becomes exceedingly difficult on long time horizons. This similarly influenced our choice of agent reward discounting, where we set $\gamma=.95$, below the default value. Finalized hyperparameter values are summarized in Table~\ref{table:Problem hyperparameters} below.
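For concreteness, a NumPy sketch of the discrepancy above follows (shapes and names are ours; this mirrors the representational-drift diagnostic of \cite{Kapturowski18}, not RLlib internals).
\begin{verbatim}
import numpy as np

def q_discrepancy(q_hat_seq, q_stored_seq):
    """Delta Q_i: L2 gap between Q-values under the re-estimated hidden
    state (q_hat_seq) and the stored, possibly stale one (q_stored_seq),
    normalized by the largest action value under the re-estimated state.
    Both inputs have shape (seq_len, n_actions)."""
    num = np.linalg.norm(q_hat_seq - q_stored_seq)
    den = abs(q_hat_seq.max())     # |max_{a,j} Q(s_hat_{t+j})_a|
    return num / den
\end{verbatim}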
\begin{table}[ht]
\centering
\begin{tabular}{|c|c|}
\hline
High-level Environment Parameters & Values\\
\hline
$\alpha$ & 2\\
$\beta$ & 1\\
$r_{0}$ & 0\\
$\gamma$ & .95\\
$M$ & .85\\
\hline
Hierarchical Environment Parameters & Values\\
\hline
$\epsilon$ & .35 m\\
$\overline{t}$ & 50 s\\
$\Delta t$ & 1s\\
\hline
R2D2 Hyperparameters & \\
\hline
Learning Rate & 5e-5 \\
Batch Size & 256\\
Replay Burn-In & 20\\
Replay Buffer Capacity & 100000\\
Policy Max Seq Length (excl. burn-in) & 20\\
Priority Exponent & .6\\
Minibatch size & 128 \\
Trained Iterations & 100000 \\
\hline
Network Parameters & \\
\hline
Type & Fully Connected, 2 layer, [64,64] \\
LSTM Cell Size & 64 \\
\hline
\end{tabular}
\caption{Hyperparameters and policy network for the multiagent inspection problem.}
\label{table:Problem hyperparameters}
\end{table}
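As a rough illustration of how the Table~\ref{table:Problem hyperparameters} values enter training, a hypothetical RLlib-style R2D2 configuration is sketched below; exact key names vary across Ray versions, so this should be read as indicative rather than verbatim.
\begin{verbatim}
# Hypothetical RLlib-style R2D2 configuration mirroring the table;
# key names are indicative only and differ across Ray versions.
r2d2_config = {
    "lr": 5e-5,
    "gamma": 0.95,
    "train_batch_size": 256,
    "replay_buffer_config": {
        "capacity": 100_000,
        "replay_burn_in": 20,
    },
    "model": {
        "fcnet_hiddens": [64, 64],     # two fully connected layers
        "fcnet_activation": "tanh",
        "use_lstm": True,
        "lstm_cell_size": 64,
        "max_seq_len": 20,             # excl. burn-in
    },
}
\end{verbatim}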
The choice of reward shape was initially motivated by a simple metric: a linear combination of information gain and fuel cost of the form $r(s,a)=\alpha\frac{\|\mathbf{p}(t_{i,k+1})\land\neg \mathbf{p}(t_{i,k})\|_{1}}{|\mathcal{P}|} + \beta \Delta V(\mathbf{x}_{k}^{i},\mathbf{x}_{k+1}^{i}) + r_{0}$. Unfortunately, this weighting of intertemporal choice disincentivized agent movement as the feasible set of newly visible POIs shrank during the inspection mission. During early inspection stages, agents were eager to search for new information; at later stages (when the set of unobserved POIs shrinks), each agent placed increased value on fuel-optimal choices, disregarding the remaining information retrieval criterion. To remedy this, we introduced the form in \eqref{eqn:reward}, which offers a \emph{temporally} consistent incentive to look for the remaining POIs. For differing tuning parameters $\alpha,\beta,r_{0}$, the reward histogram over each episode reflects actions that heavily bias exploration vs. exploitation. To address this, we set $\alpha,\beta,r_{0}$ so that the reward distribution has an approximate median at $r(s,a)=0$; these values are reported in Table~\ref{table:Problem hyperparameters}.
\subsection{Navigation Control and Distributed Training}
The full hierarchical inspection problem, consisting of both high-level planners and low-level point-to-point navigation controllers, would be computationally burdensome to train directly, due to the mixed temporal scales of the high- and low-level actions and the necessity of full forward simulations of the physical agent trajectories. Though instances of long-horizon multi-agent games have been successfully trained, these often have simple action spaces and consistent temporal scales between actions and environment evolution; see for instance \cite{Jaderberg19}. Instead, we trained the high-level planner independently on its own abbreviated training environment (Problem~\ref{prob:VSP}), then transferred the learned policies to the hierarchical environment to be deployed jointly with the low-level navigation controllers. The hierarchical inspection environment thus consists of three high-low level agent pairs. Steps were taken to ensure that the high-level and hierarchical environments have similar decision spaces for the viewpoint planner to act on. Broadly, the MPC controller for point-to-point navigation was tuned to ensure consistency with the implied trajectories in the high-level environment, and care was taken to train independent distributed viewpoint-planning policies so that they exhibit similar behaviours when deployed on the asynchronous independent agents in the hierarchical inspection environment.
We also needed to ensure that information leakage in the reward function was not a problem during hierarchical rollout. This informed the imposition of two strong assumptions on the low-level controller. We required that TOF \eqref{eqn:LL_proxy} and fuel cost be statistically similar to the high-level trained versions, with an aggregate error of $<$10\% of the high-level cost. This was enforced utilizing a constraint on target velocity, where agreement between controllers (see Problem~\ref{prob: OptimizatonLLC}) is guaranteed through positional and temporal agreement upon switching controllers via the parameters $\epsilon$ and $\overline{t}$. Since we are not using RL methods for policy estimation at the low level (as in \cite{Lei22}), we can partially ensure the low-level agents behave in a way that is correct-by-construction. Furthermore, this framework is consistent with the work established in \cite{Morgan14,Foust20} and can be directly analyzed for optimality within existing frameworks of guidance, navigation, and control. An MPC formulation of Problem~\ref{prob: OptimizatonLLC} is solved using sequential least squares quadratic programming (SLSQP) within Scipy's optimization framework. Here, we take a temporally greedy search strategy and iteratively optimize the $j^{\text{th}}$ step of Problem~\ref{prob: OptimizatonLLC} for each $j\in\{0,1,\dots,n-1\}$. The parameters $\epsilon$ and $\overline{t}$ are jointly set to ensure consistency (up to 10\% error) with the proxy values used during training of the high-level planner. Note that a single-burn controller is insufficient for this task due to accumulated spatial drift resulting from the one-step burn lag. Tuned parameter values are summarized in Table~\ref{table:Problem hyperparameters} above.
Another key modeling decision is the explicit characterization of the relationship between joint policies and agent-specific policies. Partially determined by the joint reward structuring, the RL algorithm may be trained on a single distributed policy where the joint Q-function is learned and each agent's derived $Q_{i}$ function is homogeneously determined. In this scenario, joint policies $\pi = (\pi_{1},\pi_{2},\dots,\pi_{m})$ take the form $\pi = (\pi^*,\pi^*,\dots,\pi^*)$; all agents make decisions according to the same policy mapping, albeit under fundamentally different sequences of observations. Heuristically, this represents training directly on the joint policy $\pi$ versus the agent-specific policies used to \emph{construct} $\pi$. A clear advantage of this approach is its model simplicity and homogeneity in policy behavior. In transfer learning, such policies may be less sensitive to perturbations in the underlying environment \cite{Torrey10}, such as training on a stable tumble and rolling out on a chaotic tumble. However, we found it necessary to train \emph{independent} policies due to the timing structure of our problem and the choice to treat the high level independently of the low level. Without additional information provided by the low level (specifically on the time-step scale), the trained policies do not contain enough variance within their sampling distributions to effectively explore the environment, even in the least temporally complicated case where the target is Nadir-pointing. Training independent policies helps delineate choices through agent-specific context and yields distributions that are statistically similar but map to differing sets of actions; see row 3 of Table~\ref{tab:table1} in Section~\ref{subsec:Hier}.
\FloatBarrier
\section{Results}
\label{section:Results}
We trained a different set of policies for each of the five sample dynamic modes presented in Figure~\ref{fig:CombinedDynamics}. These all share the environmental and hyperparameter configuration presented in Table~\ref{table:Problem hyperparameters}. The resulting multi-agent policies were capable of inspecting a moving target across all five dynamic modes. They exhibited behaviours that intuitively represent notions of cooperation and fuel minimization while completing their target coverage objective. The high-level policies were transferred to a hierarchical environment and similarly demonstrated success in completing the inspection problem cooperatively and with minimal fuel expenditure. A summary of the training process is shown in Figures \ref{fig:tboard_q} and \ref{fig:tboard_r}, and example trajectories and summary statistics for the hierarchical inspection problem are shown in Figures \ref{fig:insp_traj} and \ref{fig:rollout_traj} and Table~\ref{tab:table1}. Our results are presented in two sections: high-level training and hierarchical policy rollout. Section~\ref{subsec:Training} presents results of the high-level training to solve Problem~\ref{prob:VSP}, while Section~\ref{subsec:Hier} presents summary statistics and comparative performance for the full hierarchical rollout on each of the trained dynamic modes.
\subsection{High Level Training Results}\label{subsec:Training}
The high-level environment for different target rotation modes has to contend with two competing sources of uncertainty: the scale of the $Q_{i}$-maximizing action spaces and the relative difficulty of hidden-state estimation. There is less variance attributable to hidden-state changes for static or slowly tumbling targets. This allows a larger sampling of observations around clusters of hidden-state behavior, providing more robust implicit state estimation. In other words, we expect the trained high-level policies to more easily contextualize the environment through observation. That said, these dynamic modes also have very particular viewpoint configuration graphs that can lead to maximal information discrimination. Accordingly, we expect larger initial variability in estimates for the state-action policy $Q_{i}$ that reduces much more quickly than in the harder-to-characterize modes such as Stable Tumble and Chaotic Tumble. The assumptions present in Problem~\ref{prob:VSP}, together with the configuration in Table~\ref{table:Problem hyperparameters}, led to a sufficiently diverse space of rewards over observation-action pairs for R2D2 to effectively distinguish maximizers of each $Q_{i}$-function. The joint $Q$-function is then approximated through a particular learned tuple $(Q_{1}^{*},Q_{2}^{*},Q_{3}^{*})$. Note that, this being additive over agent rewards, the trained policies represent single points on a hyperplane of maximizing $Q_{i}$ functions. As such, the solution $\pi_{i}^{*} \in \argmax_{\pi_{i}}V_{i}^{\pi_{i}}(\overline{o}_{i})$ is not unique. The plot in Figure~\ref{fig:tboard_q} below shows extreme values for $Q_{i}$ during training of two distinctly different rotation modes, static in Hill frame and a single-axis tumble; for more detail see Section~\ref{subsec:Hier}.
\begin{figure}[h!]
\centering
\includegraphics[width=.75\linewidth]{figures/QPlot.eps}
\caption{Q function estimation.}
\label{fig:tboard_q}
\end{figure}
It is interesting to observe the variance induced by poor state estimation through imperfect observations and uncertainty in hidden-state estimation. The variance about the exponential moving average (EMA) for each of the dynamic modes in Figure~\ref{fig:tboard_q} indicates a range of feasible actions that may result in comparably large rewards. In particular, for those modes with well-defined maximizing actions in conjunction with easier state estimation, the corresponding measure of variance about the estimated Q function is lower; for instance, with the single-axis rotation. A larger variance is partially indicative of a large set of actions providing similar rewards, thereby creating state-action ambiguity within training. In these cases, the learning rate is correspondingly slower; for instance, in the tumbling modes in Figure~\ref{fig:tboard_r}\footnote{This is more directly evidenced in the temporal difference error and rate of change in mean $Q_{i}$ across modes. For the sake of clarity, we did not include this in Fig.~\ref{fig:tboard_q}, but all data is available by request.}. Additionally, the advantage of training three independent policies for each mode versus a single shared policy (as in \cite{Lei22}) is reflected in the heterogeneity of the learned $Q_{i}$. Each agent in the static Hill mode has a similar internal reconstruction of $Q_{i}$, indicating homogeneity in the choice to explore or exploit environmental information; see Table~\ref{tab:table1}. In contrast, taking advantage of learned geometry for the single-axis rotation, $Q_{2}$ under agent 2 has dominating peaks and a higher average than its peers. This indicates a preference to explore new views in search of information gain, whereas the other two indicate a willingness to exploit fuel-efficient view options; see Figure~\ref{fig:tboard_q}.
\begin{figure}[h!]
\centering
\includegraphics[width=.65\linewidth]{figures/TableBoard4.eps}
\caption{Policy reward and length distributions during training.}
\label{fig:tboard_r}
\end{figure}
These metrics are directly related to the observed mean episode length and episode reward (quoted for the joint policy), displayed in Figure~\ref{fig:tboard_r}. The mean episode length compiles the average number of high-level environment steps taken to reach inspection completion, while the mean reward is calculated on the instantaneous joint reward per episode step. These provide additional insight regarding the difficulty of learning movement behavior. In particular, both multi-axis tumbling modes exhibit a larger relative reward spread when compared to the static and single-axis modes. The rate of change in the reward mean is similarly slowed, with training on both tumbles taking the longest to hit a target of 1.5. The mean episode length is partially correlated with parking actions. The static Hill mode requires the fewest number of steps, primarily due to forced exploration for information retrieval purposes: parking, although relatively cheap, will not result in any new information. All dynamic modes learn that parking may be a beneficial action and split behavior to hedge risk against the uncertainty induced through hidden-state estimation, thereby increasing the required number of environment steps. This indicates a larger number of viewpoint transfers that leverage target rotation to move POIs into the agent's FOV over time.
\subsection{Hierarchical Policy Rollout}\label{subsec:Hier}
To test the hierarchical approach, we simulated the full environment as described in Section~\ref{subsection:mpc_prob}, using the high-level policies trained and tested above in combination with the MPC low-level controller. Although further policy evaluation is required to test for \textit{statistically significant} differences within agent-level inter-mode policies, a sample of $100$ inspection runs for each rotation mode (see Table~\ref{tab:table1}) suggests key differences in \emph{joint} strategy development. Our relatively simple high-level environment increases the likelihood that observed differences can be causally attributed to the change in the underlying target dynamic mode. High-level trajectory analysis and the stability of estimates for agent-specific policy mean and variance indicate diversity in learned strategy within each mode across the set of \emph{agents}. As an example, we provide a hierarchical multiagent inspection trajectory/rollout for two distinct dynamic modes in Figure~\ref{fig:insp_traj}.
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{figures/HierExamplePlotTraj2.png}
\caption{Agent trajectories for a hierarchical inspection mission under static Hill and single-axis target rotation modes.}
\label{fig:insp_traj}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{figures/HierExampleStats3.eps}
\caption{Inspection percentage, cumulative fuel cost, and viewpoint transfers plotted against simulation time for Static Hill and Single Axis.}
\label{fig:rollout_traj}
\end{figure}
\begin{table}[h!]
\begin{center}
\begin{tabular}{l|c|c|c|c|c}
\textbf{Mean Metrics} & \textbf{Static Hill} & \textbf{Static ECI} & \textbf{Single Axis} & \textbf{Stable Tumble} & \textbf{Chaotic Tumble}\\
\hline
Inspection \% & 83.28 & 83.06 & 83.63 & 83.03 & 83.03\\
Time (s) & 6625.06 & 4751.51 & 3641.69 & 5228.71 & 3910.14\\
Actions $\frac{\text{Unique}}{\text{Total}}$
& $\frac{3.79}{4.67}$,$\frac{4.08}{4.83}$,$\frac{3.96}{4.68}$
& $\frac{1.53}{13.45}$,$\frac{2.52}{4.47}$,$\frac{2.11}{8.35}$
& $\frac{1.16}{10.46}$,$\frac{2.42}{2.87}$,$\frac{2.34}{2.98}$
& $\frac{2.57}{6.08}$,$\frac{2.87}{6.10}$,$\frac{3.34}{5.09}$
& $\frac{2.05}{7.05}$,$\frac{2.56}{3.89}$,$\frac{2.40}{5.27}$\\
Accum. $\Delta$V (m/s) & 2.08 & 2.35 & 1.74 & 3.20 & 2.55\\
\end{tabular}
\end{center}
\caption{Mean hierarchical rollout metrics by dynamic mode over 100 runs. The actions metric is given as a tuple indicating the average proportion of unique to total actions taken during an inspection episode for each of the three agents. The policies were trained to inspect 85\% of $\mathcal{P}$; due to the asynchronous agent movement, we rolled out on an 83\% threshold for this table.}
\label{tab:table1}
\end{table}
\subsubsection{Static Hill: }
For this case, we fix the target rotation mode by taking the initial target angular velocity as $\omega^{BF} = [0, 0, n]$, where $n = 0.001027$ rad/s is the target's mean motion. This corresponds to a target that appears static in the Hill frame, implying that parking actions yield no new information from the target. As captured in Figure~\ref{fig:insp_traj}, agents inspecting this target chose to navigate to points along a trajectory providing maximally discriminative viewpoints of the target. Note that the geometric diversity is partially obfuscated by the nonconvex surface of the target. In particular, viewpoints that are relatively close in the Hill frame are still geometrically optimal due to POIs hidden by satellite antennae and communications arrays; see Figure~\ref{fig:AuraInsEx} for further intuition. Agent-specific policy distributions are shifted forms of one another, with differences in probability mass concentrated around specific regions of the viewpoint graph. The separation here further exemplifies the necessity of training policies for each agent versus reliance on a single, distributed policy as in \cite{Lei22}. The inspection took the longest of all the modes, primarily due to the transfer time imposed by \eqref{eqn:LL_proxy} and the suboptimality of vehicle parking. This is reflected in the homogeneity between agents when comparing the ratio of unique to total actions required for inspection; see the third row of Table~\ref{tab:table1}. A distinctive feature of this rotation mode is the lower number of average environment steps needed to complete inspection during rollout, indicating that agents are effectively learning to strategically explore the viewpoint graph based on single transfers versus strings of multiple transfers. The reduced variance in agent-policy mappings helps ensure that specific inspection trajectories have a higher likelihood of being fuel efficient relative to other dynamic modes; see for instance the cumulative $\Delta V$ in Figure~\ref{fig:rollout_traj}.
\subsubsection{Static ECI and Single-Axis: }
Both of these cases represent a single-axis rotation about the z-axis. Here $\omega^{BF} = [0, 0, 0]$ and $\omega^{BF} = [0, 0, .097]$, respectively, where the rate $.097$ corresponds to the number of $2\pi$ rotations performed about the z-axis per target orbital period, given by $.097/n\approx 94.45$ rotations. These two modes represent rotations with opposing direction and differing rotation rate. The exhibited dynamic behavior is interesting because it induces nontrivial state estimation and provides a strong intuition originating from geometrical arguments. Broadly, we expected agents to attempt movements in a plane whose normal vector \emph{nearly} coincides with the axis of rotation. It should not be exactly parallel, owing to the target's nonconvex geometry and initial orientation. The angular direction should be opposite to the target rotation direction, heuristically letting points fall into view of their own accord.
The resulting trained strategies are similar in nature. Depending on the initial locations of the agents, agent 1 learned to perpetually stay at the same point, providing a reasonable balance between information gain and fuel efficiency. More importantly, this increases the predictability of the inspection time horizon (recall that agents cannot explicitly share inspection progress), allowing agents 2 and 3 to effectively exploit fuel-efficient, information-discriminative views. Agent 2 moves for immediate information gain whereas agent 3 moves for immediate fuel conservation. This plays out at two different time scales for each rotation mode. For the single-axis case, one agent favors parking; the other two agents only have one or two feasible actions before the inspection finishes, due to the rotation rate. In this case, Agent 2 moves to cover the target caps (which Agent 1 cannot witness while parked) and Agent 3 seeks a fuel-efficient viewpoint to park on; see the far right panel in Figure~\ref{fig:rollout_traj}. The parking nature of Agent 3 is exemplified in the third row of Table~\ref{tab:table1}, where a much lower ratio of unique to total viewpoint actions can be seen when compared to single-axis. This is largely attributable to the rotation rate: less information rotates into view between parking actions, extending the time horizon. Referencing the fuel costs in Figure~\ref{fig:rollout_traj}, note that Agent 1 accumulates $\Delta V$ according to a step function consistent with the calculation made in \eqref{eqn:delV} and Problem~\ref{prob: OptimizatonLLC}. It is important to note that the near-zero fuel cost incurred by Agent 3 reflects accrual during the transition between viewpoint 10 and viewpoint 6, not a parking action. These modes broadly demonstrate the capability of \textit{indirect} multiagent coordination learned through action contextualization of peers, in the absence of action telegraphing or \emph{direct} coordination.
\subsubsection{Stable and Chaotic Tumble: }
These two modes represent multi-axis tumbles. Here, we fix the rotation mode by taking the initial target angular velocity as $\omega^{BF} = [0.0097, .097, 0]$ for a stable tumble (y-axis with x-axis perturbation) or $\omega^{BF} = [0.0097, 0, .097]$ for a chaotic tumble (z-axis with x-axis perturbation). The stable and, even more so, the chaotic tumble present an increasingly large solution space of feasible optimal viewpoint candidates. This makes exploration and exploitation equally enticing to each agent, similar to Static Hill. Unlike Static Hill, parking actions are now valuable due to the decreased likelihood of witnessing the same POIs from two different points in time and space. This makes hidden-state estimation increasingly difficult and creates strategies that have a similar propensity to maintain geometrically consistent spatial inspection patterns. The key differentiation between the single-axis rotations and these two modes lies in the complexity of the dynamic motion coupled with the number of efficacious actions. With an increased propensity to park (see the third row of Table~\ref{tab:table1}), the inspection time horizon is shorter than in Static Hill. Counterintuitively, this does not decrease the average cumulative $\Delta V$ required for inspection, because there is increased unpredictability in tail-end retrieval due to the difficult hidden-state estimation. As the inspection task nears its end, there is a large spike in fuel burned to intercept the remaining unseen POIs. Unlike Static Hill, advantageous \textit{future} positions on long time scales are not easily predictable, increasing the tendency of agents to weight fuel lower than information gain in the policy distribution. Moreover, the impact of surface geometry on inspection difficulty can also be seen in the differential between accumulated $\Delta V$ and inspection time in Table~\ref{tab:table1}. On average, the chaotic tumble took only 80\% of the fuel and 75\% of the time required by the stable tumble. This originates from the primary axis of rotation used in the stable tumble mode: rotation about the y-axis relegates priority viewpoint transfers to off-axis positions, thereby forcing fuel-expensive decisions.
\section{Discussion and Concluding Remarks}
\label{section:Conclusions}
In this paper, we considered a hierarchical approach for the decentralized planning of multi-agent inspection of a tumbling target. This consisted of two components: a viewpoint (high-level) planner trained using deep reinforcement learning, and a navigation planner handling point-to-point navigation between prespecified viewpoints. Operating under limited information, our trained multi-agent high-level policies successfully contextualize information within the global hierarchical environment and are correspondingly able to inspect over 90\% of non-convex tumbling targets (corresponding to 85\% of the actual POIs in $\mathcal{P}$), even in the absence of additional agent attitude control. Our assumption that viewpoint traversal coincides with the camera shutter led to artificially long inspection missions; this can be ameliorated by increasing viewpoint density and sampling from approximate neighborhoods utilizing action masking. This would substantially increase offline training time, but would have negligible effect on policy rollout and online decision making.
Our results show promise for the incorporation of machine-learning-based methods in autonomous inspection tasks. For viewpoint planning in the face of partial information and limited observability, our agents can quickly and efficiently contextualize environmental state with an action mapping in numerous dynamic scenarios. We tested this under five modes: static in Hill frame (Nadir pointing), static in ECI frame (star pointing), single-axis rotation, stable tumble, and chaotic tumble. Each of these helped demonstrate the diversity in inspection strategy witnessed as a result of the direct modeling of information-gain-based agent reward. We sampled all hierarchical policies in each mode to obtain estimates of key inspection statistics; these revealed large differences in learned strategy. Agents inspecting a (relatively) static target learn to move to distinct viewpoints that maximize information discrimination with minimal fuel cost. Contrasting this are the single-axis rotation and static ECI cases, where agents learn to park and take advantage of the rotational dynamics to observe new information with minimal fuel penalty. The variability in rolled-out policies for chaotic and stable tumbles, together with the higher cumulative $\Delta V$ compared with the simpler modes, indicates difficulty in finding unseen POIs near the end of the mission. This effect may be reduced through reward and hyperparameter tuning or through stronger assumptions on agent observations. The variety in learned behavior indicates that the potential for transfer learning between modes may be limited, although a wider sampling of dynamic modes may be used to extend the observation and state space within the DEC-POMDP formulation. In this context, competing solutions (such as \cite{Lei22}) that make assumptions on the validity of information retrieval do not provide any real guarantee of inspectability, even through complete graph traversal. Our formulation partially demonstrates that such paradigms may increase fuel cost unnecessarily by requiring visitation of all viewpoints regardless of inspection state. In comparison with dedicated computer-vision-based techniques such as ORB-SLAM, our framework directly incorporates \textit{path planning and optimization} on the basis of implicit belief state, versus assuming a fixed trajectory for path planning and navigation.
Future work aims to investigate the potential for model scalability through increasingly complicated modeling assumptions and technicalities: for instance, supporting uncertainty in target geometry and rotation dynamics estimation, torque-induced dynamical modes, safety constraints, attitude control, elliptical NMT viewpoint discretization, communications dropouts, variable camera shutter rate, and hybrid mutual information metrics. Furthermore, it would be prudent to extend the analysis to directly compare the effect of training a high-level policy alone versus training in the full hierarchical environment. Depending on model complexity and the density of sample observations, it may be necessary to train the hierarchical agents directly. This may help refine internal belief-state estimation in the face of increasing variance as a result of observation ambiguity. Lastly, a more robust and empirically motivated low-level navigation controller could be integrated during training.
\bibliographystyle{ieeetr}
|
{
"arxiv_id": "2302.14122",
"language": "en",
"timestamp": "2023-03-01T02:01:36",
"url": "https://arxiv.org/abs/2302.14122",
"yymm": "2302"
} | \section{Introduction}
There are many settings where it is helpful to elicit information
from a population in order to support a decision, and in particular
in eliciting beliefs about the probability of an uncertain event. For
example: \emph{Will expanding the port facility in Seattle lead to
an increase of 5M tons of freight/year in 2025?
Will a start-up achieve revenue growth
of more than 20\% in its target markets in 2023? Will a high speed
trainline, if built from Manchester to the center of London, meet
a target of 2M trips a year in 2027?}
In many ways, this may look like a well-studied problem.
This is a
setting of information elicitation with verification, in that there
is a downstream uncertain event whose future value can be verified
and then scored against, and solutions such as scoring rules and their
multi-user extensions to market-scoring rules and prediction markets
may come to mind~\citep{brier1950verification,GneitingTilmann2007SPSR,hanson2007logarithmic,wolfers2004prediction}.
And yet, there are a number of aspects of this problem that make it
non-standard and unsolved. In particular, the presence of a decision
that will be made in a way that depends on the aggregation of reports
brings about new challenges.
First, the outcome of the uncertain future event may only be observed
conditioned on the decision being made affirmatively. This means that
payments depend in turn on the decision and in turn on reports, and
this changes incentives. Elicitation together with decisions has been
studied before, notably in the context of \emph{decision markets}~\citep{ChenYilingDMwG}
and VCG-based scoring mechanisms~\citep{york2021eliciting}. But
to our knowledge, this has not been studied together with participants
who also have competing incentives. In the above examples, perhaps
a participant is a shipper who would personally benefit from the port
expansion.
Perhaps a participant is subject
to a bribe from the CEO of the start-up, or a bribe from a politician
in the North of England. This raises the following question:
\begin{quote}
\emph{Can robust elicitation mechanisms be designed when there is
a decision to be made and participants may face competing incentives?}
\end{quote}
Moreover, the coupling of a decision-contingent observation of the
outcome together with participants whose beliefs about the future
event can depend on the beliefs of others brings about novel challenges.
This introduces a subtle interaction, whereby a participant who conditions
on the payments that arise when a decision is made should also condition
on what this implies about others' reports (and thus beliefs), and
in turn what this should imply in regard to updates about their own
belief.\footnote{Although in a different context, this style of reasoning pattern is
likely familiar in the context of the ``winner's curse'' in common-value
auctions, wherein a bidder should reason about what inference they
would make in the event that their bid is the highest~\citep{krishna2009auction}.} In the above examples, perhaps a participant believes that others
who report beliefs about the impact on freight in Seattle are doing
this with their own independent data and analysis, and that they may
be more skilled in forecasting.
This raises the following
question:
\begin{quote}
\emph{Can elicitation mechanisms be designed when the observation of the uncertain future event is decision-contingent and participants have dependent beliefs about this event?}
\end{quote}
More formally, we consider in this paper a principal who wants to
elicit information from $n$ recommenders about a hidden binary variable,
$O$, and then use this information to take a binary decision, $A$.
We design mechanisms that make use of a \emph{payment function}, determining
the payments to each recommender, and a \emph{decision function},
choosing an action $A$ based on the elicited information. In the
simpler set-up, we assume the realized outcome is always observed.
This would be the case in the setting of forecasting revenue growth
at the start-up, or whether it will rain on a May weekend in London
(this information to be used to decide whether or not to throw a coronation
party). More generally, and as would be the case in the other motivating
examples, the realized outcome $O$ is only observed contingent on
an affirmative decision, and for this reason the payment function
can only depend on $O$ when action $A=1$.
\textbf{Contributions.} We characterize the space of truthful mechanisms
in the face of these difficulties of competing incentives, observing
the outcome conditionally on the decision taken, and belief dependencies
between recommenders. We offer a number of answers, both positive
and negative. We work in a model where each recommender can form a
belief about the other recommenders' beliefs, competing incentives,
and in turn the reports of others, and we can consider the worst-case
manipulation over recommender beliefs.
The style of negative result is to show that there can always be a
participant who will prefer to misreport their belief, to some degree,
when a decision rule is sensitive in even a small way to their input
and there is an \emph{intrinsic competing incentive}, i.e., some kind
of interest in the decision that does not rely on a bribe from an
interested party. %
We prove in \Cref{resultCostOfLyingUpper} that the maximum cost
of misreporting that can be imposed by a scoring rule scales quadratically
in the size of the misreport (i.e., the loss in payment scales with
${(r_{i}-q_{i})}^{2}$ uniformly across all beliefs). As a result,
no rule can do better than the Quadratic Scoring Rule (see \Cref{resultCostOfLyingLower}).
This leads to %
a fundamental tradeoff between the degree to which a rule can be manipulated
in the presence of competing incentives and its sensitivity to reports.
If recommenders can have a large influence on the decision %
they also have a larger incentive to misreport. However, if we design
a decision rule to be insensitive to reports, then the rule has no
utility. We show that this conflict cannot be avoided by any mechanism,
even if we are free to design any decision function and scoring rule.
In particular, we give a lower bound on the extent to which the decision
can be manipulated, this depending on the influence that recommenders
can exert and the budget that is available for payments \pcref{resultSingleRecLower}.
We also study the setting where the competing incentive is not intrinsic
but comes about from a \emph{rational briber}, who cares about trying
to promote a particular decision. This opens up new possibilities
for positive results. In particular, the briber will choose not to
bribe in equilibrium with the rational best response of recommenders
when the effect of the bribe on the decision is too small relative
to the cost of the bribe. Truthful mechanisms exist in this setting,
and we provide conditions on the sensitivity to agent reports and
the budget of the mechanism. The budget required for truthfulness
scales with the sum of squares of the sensitivity of the rule to reports
of agents \pcref{resultMultiRecSufficient}. %
As a result, the total amount of influence that recommenders can have
can grow with $\sqrt{n}$, where $n$ is the number of recommenders,
while maintaining truthfulness or fixed low manipulation.
We also address the additional challenge when the outcome is only
observed conditionally on the decision. For this, we propose a \emph{decoupling
construction} (\Cref{defn:alpha-decoupling}), akin to importance
sampling, that can be used to disentangle the payment and decision
rule. It can be combined with any mechanism that is truthful without
this censoring, allowing to avoid the conditional observation problem
while preserving expected payments to recommenders and maintaining
the truthfulness of the underlying mechanism (\Cref{resultDecoupling}).
\textbf{Outline.} \Cref{sec:problem-definition} defines the problem,
recommender utility, competing incentives, rational briber, and truthfulness.
\Cref{sec:lying-cost} provides upper and lower bounds to the cost
to the recommender of lying in the presence of a proper scoring rule.
\Cref{sec:competing-incentives-and-bribery} gives bounds on the
manipulability of any mechanism in the face of competing incentives,
gives an impossibility result for zero-collusion when recommenders
have intrinsic outside incentives, and a positive no-collusion result
in the case of rational bribers. \Cref{sec:conditional-observations-and-dependent-recommenders}
gives a category of mechanisms that cannot be truthful when recommender
beliefs are dependent, and proposes the decoupling construction to
handle this. Finally, \Cref{sec:conclusion} concludes.
\subsection{Related work}
\label{subsec:related-work}
We delineate between our work and the prior literature along the axes
of single vs.~multi-agent elicitation, whether agent beliefs are
independent or dependent, whether or not there is a decision to make
(perhaps with an outcome observed conditionally on this decision),
and whether or not agents have preferences on the decision (``competing
incentives'').
A first connection is with \emph{scoring rules}, which elicit subjective
information from a single agent about an uncertain future event and
align incentives with truthful reports~\citep{brier1950verification,winkler1994evaluating,GneitingTilmann2007SPSR}.
Unlike our setting, scoring rules are single agent and do not model
settings in which there is a decision to be made (including settings
with competing incentives). While \emph{Wagering mechanisms}~\citep{freeman2018axiomatic}
and \emph{prediction markets}~\citep{wolfers2004prediction} extend
this setting to multiple agents and support belief aggregation, they
do not model a setting in which there is a decision to be made (and
consequently, do not handle competing incentives). These mechanisms
allow for agents with dependent beliefs, at least implicitly through
the use of sequential elicitation, which allows an agent to incorporate
relevant information from earlier in making their own report. \citet{zhang2011task}
study the use of multi-agent scoring rules for the routing of prediction
tasks with dependent beliefs (agents have beliefs about each other's
prediction ability), but without a decision to make (and thus, without
competing incentives).
\citet{ChenYilingDMwG} study \emph{decision markets}, where there
is a principal who uses the aggregation of beliefs in a prediction
market to make a decision, this leading to a decision-contingent observation.
They prove that incentive alignment requires randomized decision rules
with full support in a setting with sequential elicitation. For this
reason, all of our truthful mechanisms also require full support on
the space of possible decisions. In contrast with our model, their
agents do not have competing incentives, and incentives are aligned
for the myopic beliefs of an agent at the time they make a report
(and thus, the subtle interaction between dependent beliefs and inference
in regard to beliefs of others in the event of a decision is unmodeled).
\citet{york2021eliciting} also study a model with multiple agents
and with a decision to make based on the aggregation of their reports.
In contrast to \citet{ChenYilingDMwG}, they consider agents who make
simultaneous reports about a future event, and achieve \emph{interim}
incentive alignment without appeal to a randomized decision rule by
appealing to uncertainty about reports of others. In contrast to our
setting, they do not handle dependent beliefs or participants with
competing incentives.
The \emph{peer prediction} literature~\citep[e.g.,][]{miller2005eliciting,36128,prelec2004bayesian,witkowski2012robust}
studies an elicitation setting with multiple agents and dependent
beliefs, but without a decision to make, and thus without competing
incentives. Another major difference is that the problem studied in
peer prediction is that of \emph{information elicitation without verification}---eliciting
information in the absence of an uncertain future event against which
to score. %
In the different setting of social choice, \citet{alon2011sum} study
a setting with a decision to make and competing incentives, considering
the problem of selecting a committee from a set of voters where each
voter would prefer to be selected. They suggest a mechanism that is
able to align incentives through the use of randomization to introduce
suitable independence between an agent's own report and whether or
not it is selected; see also~\citet{kurokawa2015impartial}. These
approaches are not applicable in the present setting. We also make
brief mention of \emph{transitive trust}, a setting that is again
distinct from that studied here but that does include participants
with dependent beliefs and competing incentives, in that participants
care about their own trust ranking. In particular,~\citet{hopcroft2007manipulation}
develop a variation on PageRank~\citep{page1999pagerank} that is
robust to misreports. Bribery has also been studied in voting systems~\citep{keller2018approximating,elkind2009swap,faliszewski2016control,ParkesDavidC.2017TVBT},
for example in regard to how many votes a briber must flip to change
a discrete decision.
In summary, this paper is the first we are aware of to develop truthful
mechanisms for the aggregation of beliefs where there is a decision
to make (and an outcome is observed contingent on this decision),
there are competing incentives, and where agents' beliefs may be dependent.
\section{Problem Definition}
\label{sec:problem-definition}
Before we define the problem in full generality, we start with an
example (inspired by the setting studied in \citet{york2021eliciting})
to illustrate the problems we will study. \begin{exmp}[Loan Allocation]\label{example}
A lender wants to decide whether to make a loan to a potential borrower
($A=1$) or not ($A=0$) and does not know if the borrower will return
the loan ($O=1$) or not ($O=0$). The lender wants to elicit information
regarding trustworthiness of the borrower from a group of $n$ recommenders.
Each recommender $i$ holds their own belief, i.e., their own estimate
$q_{i}\in{[0,1]}$ of the probability of the loan being returned $O=1$
if granted. Recommenders make reports, $r=(r_{1},..,r_{n})$ and the
lender determines the probability of allocating the loan using a (possibly
randomized) decision rule $\textsc{pact}$, i.e., $P(A=1)=\textsc{pact}(r)$.
To ensure truthful reporting, i.e. $r=q$, the lender pays each recommender
according to some payment rule $\textsc{pay}_{i}(r,O)$, that rewards
accurate reports. The payment can only depend on $O$ in the case
that the loan is made, and we need $\textsc{pay}_{i}(r,o)\ge0~\forall r,o$.
Further, the lender does not want the sum of payments to exceed a
budget, $\beta>0$. By way of example, the lender may decide whether
to allocate the loan using the decision rule,
\begin{equation}
\textsc{pact}(r)=\min(\max[L\cdot\left(\frac{1}{n}\sum_{i\in[n]}r_{i}-t\right),\varepsilon],1),
\end{equation}
which is parametrized by a minimum allocation probability, $\varepsilon\in{[0,1]}$,
a threshold $t\in{[0,1]}$, and a slope $L\ge0$. %
\end{exmp}
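To make the example concrete, the following sketch implements this decision rule; the parameter values (and the use of Python) are illustrative assumptions, not part of the mechanism's definition.

```python
import numpy as np

def pact(r, L=2.0, t=0.5, eps=0.05):
    """Decision rule from the loan example: a clipped linear function of the
    mean report, with minimum allocation probability eps, threshold t, and
    slope L. The default parameter values are illustrative."""
    return float(np.clip(L * (np.mean(r) - t), eps, 1.0))

# Three recommenders with mostly favorable reports:
print(pact([0.8, 0.7, 0.9]))  # 0.6 = clip(2 * (0.8 - 0.5), 0.05, 1)
```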
We will see that the magnitude of $L$ is one of the key tradeoffs
that principals must make when defending against recommender manipulation
due to outside incentives. The principal would like to fully use reported
information, ideally using a deterministic decision function with
$\varepsilon=0$ and $L\to\infty$. Not doing so could lead to concerns
around perceived fairness (``Why did he get a loan and I didn't,
even though my rating was higher?''). However, we will show that
the more decision influence we give to recommenders, the more incentive
they have to misreport, with deterministic decision functions always
being subject to some manipulation.
\subsection{Notation}
We use upper-case variables to denote random variables (RVs). We use
$P(E)$ to denote the probability of an event $E$ and $\mathbb{E}[Y|E]$
to denote a conditional expectation of the RV $Y$ given event $E$.
When clear in the context, we abbreviate the event $Z=z$ by simply
$z$; i.e., we write $P(Y=y)$ as $P(y)$ and $\mathbb{E}[Y|Z=z]$
as $\mathbb{E}[Y|z]$. The expectation always only applies to the
RVs inside the expectation; i.e., to the upper-case variables. For
instance, $\mathbb{E}[f(Y,z)]=\sum_{y\in\mathcal{Y}}f(y,z)P(y)$.
With a slight abuse of notation, for a continuous RV $Y$, we will
use $P(y)$ to denote the probability density function of $Y$. We
will use $\mathcal{PDF}(\mathcal{Y})$ to denote the set of all possible
probability density functions of random variables that take values
in $\mathcal{Y}\subseteq\mathbb{R}^{m}$, for some $m$.
Importantly, we use a subscript $i$, i.e., $P_{i}(Y=y)$ and $\mathbb{E}_{i}[Y]$,
when the distribution of an RV $Y$ is subjective and hence specific
to recommender $i$. Whenever there is no subscript, for example $P(Y=y)$
and $\mathbb{E}[Y]$, this means that the distribution of $Y$ is
objective, fully defined by the mechanism and known to everyone. Further,
for sequences $z=(z_{1},..,z_{n})$ we will use $z_{\neg i}$ to denote
$(z_{1},..,z_{i-1},z_{i+1},..,z_{n})$. With some abuse of notation,
whenever clear from the context, we will use $(y_{i},z_{\neg i})$
to denote $(z_{1},..,z_{i-1},y_{i},z_{i+1},..,z_{n})$. We will use
$[n]$ to denote the set $\{1,..,n\}$.
\subsection{Basic Set-up}
There is a principal, whose goal is to elicit from the recommenders
(also referred to as agents when this causes no confusion) information
about an unknown \emph{outcome}, $O\in\{0,1\}$, in order to take
an \emph{action} $A\in\{0,1\}$ of interest to the principal. There
are $n$ \emph{recommenders}, and possibly a \emph{briber} (unless
we mention the briber explicitly, we assume that there is no briber
present). Each recommender $i$ holds a private, subjective belief,
$Q_{i}\in{[0,1]}$, regarding the probability of $O=1$. The recommender
is asked to reveal $Q_{i}$ to an elicitation mechanism and submits
a report $R_{i}\in{[0,1]}$, which, if truthful, equals $Q_{i}$.
The recommender may also have a private incentive that yields utility
$C_{i}\ge0$ if $A=1$, and $0$ otherwise. This utility is distinct
from any utility they will receive as a result of the payments in
the mechanism, and may compete with the incentives for truthfulness.
We study two settings: (i) The recommender may have an \emph{intrinsic}
preference regarding the outcome. In this setting, we don't make any
assumption about the underlying process that generates $C$, other
than that it lies in some domain $\mathcal{C}$. (ii) In the second
setting, we assume that the competing incentive is a bribe offered by
a self-interested briber, paid conditional on the decision being $A=1$.
\subsection{Game and Mechanism}
To determine payments to recommenders and the action to be taken,
the principal makes use of a \emph{mechanism}, which is known to all
agents and defined as follows:
\begin{defn}[Mechanism]
\label{def:mechanism} A mechanism is defined by the tuple $\textsc{mech}=(\textsc{act},\textsc{pay}_{1:n},\textsc{rand})$,
with
\begin{enumerate}
\item $\textsc{rand}$ being a probability distribution on a domain $\mathcal{X}$,
\item $\textsc{act}:{[0,1]}^{n}\times\mathcal{X}\to\{0,1\}$ being the function
that produces a decision based on the reports, and
\item the payments $\textsc{pay}_{i}:{[0,1]}^{n}\times\{0,1\}\times\mathcal{X}\to\mathbb{R}_{+}$.
\end{enumerate}
For convenience, we also define
\[
\textsc{epay}_{i}(r,o):=\mathbb{E}_{X\sim\textsc{rand}}[\textsc{pay}_{i}(r,o,X)],\quad\textsc{pact}(r):=\mathbb{E}_{X\sim\textsc{rand}}[\textsc{act}(r,X)]
\]
and we call $\beta:=\max_{r,o}\sum_{i\in[n]}\textsc{epay}_{i}(r,o)$
the budget of the mechanism.
\end{defn}
We study two different settings, one where $O$ is always observed,
regardless of the action taken, and one in which the outcome $O$
is only observed in the case of decision $A=1$. Note that the second
case imposes constraints on the payment functions $\textsc{pay}_{i}$,
as they may only depend on $o$ whenever action $A=1$ is taken.
Either way, the rules of the mechanism are common knowledge to the
recommenders and the briber (if any). A mechanism induces the following
{\em reporting game}, which is the focus of our analysis:
\begin{enumerate}
\item In the first step, the competing incentives $C=(C_{1},..,C_{n})$
are observed by the respective recommenders (stemming either from
intrinsic preferences or from a briber). Note that $C_{i}$ is unknown
to the principal and all recommenders other than $i$.
\item Each recommender, $i\in[n]$, then submits their \emph{report}, $R_{i}\in{[0,1]}$
to the mechanism.
\item The mechanism then takes a decision $A\in\{0,1\}$ based on the recommenders'
reports, according to $A:=\textsc{act}(R,X)$, where $X\in\mathcal{X}$
is a random variable internal to the mechanism.
\item If a briber is present, the briber pays $C_{i}$ to each recommender
$i\in[n]$ when the desired action $A=1$ was taken by the mechanism.
\item Either outcome $O$ is unconditionally observed, or (depending on
the setting), outcome $O$ is only observed for decision $A=1$.
\item Each recommender $i\in[n]$ receives a payment, $\textsc{pay}_{i}(R,O,X)$,
from the mechanism, based on how accurate their prediction was, where
$X$ can be used for randomization.
\end{enumerate}
Since we would like the mechanism to be truthful, at least in the
absence of competing incentives, we will mostly focus on proper mechanisms,
defined as follows:
\begin{defn}[Proper Mechanism]
\label{def:proper-mechanism} We say that a mechanism $\textsc{mech}=(\textsc{act},\textsc{pay}_{1:n},\textsc{rand})$
is proper if for all recommenders $i$, $\textsc{epay}_{i}(r,o)$
is a strictly proper scoring rule with respect to $r_{i}$ (regardless
of others' reports $r_{\neg i}$).
\end{defn}
In the setting where the outcome $O$ is observed, regardless of the
decision taken, it is straightforward to ensure that a mechanism be
proper, so we will mostly focus on proper mechanisms. When
the outcome is only observed conditionally on the decision being $A=1$,
ensuring properness is not as straightforward; we will discuss this
problem in \Cref{sec:conditional-observations-and-dependent-recommenders}.
\subsection{Recommenders}
We define the recommender type such that it contains the information
required to model recommenders' behavior. This information consists
of recommenders' beliefs and their competing incentives.
To recommender $i$, the variables $O,Q_{\neg i},C_{\neg i}$, and
$R_{\neg i}$ are unknown, hence $i$ holds a private, subjective
belief $P_{i}(o,q_{\neg i},c_{\neg i},r_{\neg i}|\textsc{mech},c_{i})$
regarding these variables (given the $c_{i}$ and $\textsc{mech}$,
which are revealed to $i$; note that we don't view $q_{i}$ as an
observation, but instead as part of recommender $i$'s belief, hence
we don't explicitly condition on it). Each recommender's beliefs may
be different, but we assume that all of them respect the independences
implied by the Bayesnet in \Cref{fig:bayesnet}, which can be interpreted
as the causal process giving rise to the random variables. %
\begin{figure}
\centering{}\includegraphics[width=0.4\columnwidth]{bayser_net}\caption{The Bayesnet representing the dependences from recommender $i$'s
perspective. The only variables known to $i$ are its own competing
incentive $c_{i}$ and mechanism $\textsc{mech}$. The arrows here
can be interpreted as the causal direction: There is a hidden state
of the world $W$, which will give rise to the outcome $O$. Further,
other recommenders may have information about $W$, giving rise to
their estimates $Q_{\neg i}$. Further, they may have competing incentives
$C_{\neg i}$. Each recommender $j\protect\neq i$ will then decide
on a report $R_{j}$, based on their competing incentive $C_{j}$,
their estimate $Q_{j}$, and the mechanism $\textsc{mech}$. For simplicity,
we assume that recommender $i$ does not believe that their own competing
incentive $c_{i}$ contains information about others' competing incentives
$C_{\neg i}$. \label{fig:bayesnet}}
\end{figure}
Given the independences expressed in \Cref{fig:bayesnet}, recommender
$i$'s belief factorizes as follows:
\begin{align*}
P_{i}(o,q_{\neg i},c_{\neg i},r_{\neg i}|\textsc{mech},c_{i}) & =P_{i}(o)P_{i}(q_{\neg i}|o)P_{i}(c_{\neg i})P_{i}(r_{\neg i}|q_{\neg i},c_{\neg i},\textsc{mech}).
\end{align*}
Hence, a recommender's belief can be defined by defining each of these
distributions. This belief, along with the recommender's competing
incentive $c_{i}$ forms the recommender's type, leading to the following
definition:
\begin{defn}[Recommender Types and Profile]
\label{def:recommenders} A \emph{recommender profile} is a tuple
$(n,t)$, where $n\in\mathbb{N}_{\ge1}$ is the number of recommenders
and $t=(t_{1},..,t_{n})$ is the list of the recommenders' types.
Each \emph{type} $t_{i}$ is a tuple $t_{i}=\left(q_{i},f_{i},c_{i}\right)$,
where
\begin{enumerate}
\item $q_{i}\in[0,1]$ defines $i$'s belief $P_{i}(O=1)=q_{i}$, which
is the information we want to elicit,
\item $c_{i}\in\mathbb{R}$ is the competing incentive, i.e., the utility
$i$ gains if $A=1$, and
\item $f_{i}=(f_{i}^{q},f_{i}^{c},f_{i}^{r})\in\mathcal{D}:=\mathcal{D}^{q}\times\mathcal{D}^{c}\times\mathcal{D}^{r}$
defines $i$'s belief regarding the other recommenders' beliefs $Q_{\neg i}$,
incentives $C_{\neg i}$, and reports $R_{\neg i}$:
\begin{align*}
f_{i}^{q}\in\mathcal{D}^{q}:= & \mathcal{PDF}([0,1]^{n-1})\times\{0,1\} & & P_{i}(q_{\neg i}|o)=f_{i}^{q}(q_{\neg i}|o) \\
f_{i}^{c}\in\mathcal{D}^{c}:= & \mathcal{PDF}(\mathbb{R}^{n-1}) & & P_{i}(c_{\neg i})=f_{i}^{c}(c_{\neg i}) \\
f_{i}^{r}\in\mathcal{D}^{r}:= & \mathcal{PDF}([0,1]^{n-1}) & & P_{i}(r_{\neg i}|q_{\neg i},c_{\neg i},\textsc{mech})=f_{i}^{r}(r_{\neg i}|q_{\neg i},c_{\neg i},\textsc{mech}) \\
\times & [0,1]^{n-1}\times\mathbb{R}^{n-1}\times\mathcal{M}
\end{align*}
where $\mathcal{M}$ is the set of all mechanisms.
\end{enumerate}
\end{defn}
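As a reading aid, a recommender type can be represented by a minimal container such as the following; the field names are ours, and the belief components are modeled as callables returning density values.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class RecommenderType:
    """Container mirroring the type t_i = (q_i, f_i, c_i); names are ours."""
    q: float  # belief P_i(O = 1), the quantity we want to elicit
    c: float  # competing incentive: utility gained if A = 1
    f_q: Callable[[Sequence[float], int], float]  # density of Q_{-i} given o
    f_c: Callable[[Sequence[float]], float]       # density of C_{-i}
    f_r: Callable[..., float]  # density of R_{-i} given q_{-i}, c_{-i}, mech
```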
For a single recommender, we simplify the type as $(q,c)$, where
$q\in[0,1]$ is this recommender's belief and $c\in\mathbb{R}$ is
the competing incentive. Without competing incentive, the type is
just belief $q\in[0,1]$, and this reduces to the standard setting
in information elicitation. Similarly, if we only consider mechanisms
that make payments independently, i.e. $\textsc{epay}_{i}(r,o)=\textsc{epay}_{i}(r',o)~\forall r,r':r_{i}=r_{i}',o$,
and assume that there are no competing incentives (i.e., $c=0$),
then each recommender's utility is independent of others' reports.
In this case, the types simplify to $(q_{i})$, and we have
the standard setting of eliciting information from $n$ independent
recommenders.
\subsubsection{Utility}
Each recommender has the following utility:
\begin{defn}[Recommender Utility]
\label{def:recommenderUtil} Given a mechanism $\textsc{mech}=(\textsc{act},\textsc{pay}_{1:n},\textsc{rand})$,
recommender $i\in[n]$ with competing incentive $c_{i}$, has \emph{utility}
(for a realization $R_{\neg i}=r_{\neg i},O=o$, and in expectation
with respect to $X$) of
\begin{equation}
\textsc{util}_{i}\left(r,o,c_{i},\textsc{mech}\right):=\textsc{epay}_{i}\left(r,o\right)+c_{i}\cdot\textsc{pact}\left(r\right).\label{eq:util}
\end{equation}
Hence, the subjective expected utility of a recommender \pcref{def:recommenders}
of type $t_{i}$ is
\begin{align*}
\textsc{se-util}_{i}\left(r_{i},t_{i},\textsc{mech}\right) & :=\mathbb{E}_{R_{\neg i},O\sim(t_{i},\textsc{mech})}\left[\textsc{util}_{i}\left(r_{i}\big|R_{\neg i},O,c_{i},\textsc{mech}\right)\right],
\end{align*}
where, with slight abuse of notation, we used $R_{\neg i},O\sim(t_{i},\textsc{mech})$
to denote that $i$'s subjective distribution of $R_{\neg i},O$ is
fully specified by $t_{i},\textsc{mech}$ (see \Cref{def:recommenders}).
\end{defn}
Recommenders will submit the report that maximizes their subjective
expected utility:
\begin{align}
r_{i}^{*}\left(t_{i},\textsc{mech}\right) & :=\arg\max_{r_{i}}\textsc{se-util}_{i}\left(r_{i},t_{i},\textsc{mech}\right).\label{eq:optimal-report}
\end{align}
In the single-recommender setting, these definitions simplify as follows:
\begin{defn}[Single-Recommender Setting]
\label{def:single-recommender} In the case of a single recommender,
the type profile simply consists of $t=(q,c)$, with $q\in[0,1]$
and $c\in\mathbb{R}$, and the subjective expected utility \Cref{def:recommenderUtil}
simplifies to
\begin{align*}
\textsc{se-util}\left(r,t,\textsc{mech}\right) & =\mathbb{E}_{O\sim q}\left[\textsc{util}\left(r,O,c,\textsc{mech}\right)\right] \\
& =\mathbb{E}_{O\sim q}\left[\textsc{epay}\left(r,O\right)\right]+c\cdot\textsc{pact}\left(r\right)
\end{align*}
Defining $S(r,q):=\mathbb{E}_{O\sim q}\left[\textsc{epay}\left(r,O\right)\right]$,
for simplicity, we have
\begin{align}
\textsc{se-util}\left(r,t,\textsc{mech}\right) & =S(r,q)+c\cdot\textsc{pact}\left(r\right).\label{eq:se-util-single}
\end{align}
\end{defn}
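The following sketch grid-searches the optimal report of \Cref{eq:optimal-report} in this single-recommender setting, using the expected score of the $\beta$-quadratic scoring rule (defined formally in \Cref{def:quad-scoring} below) and a linear decision rule $\textsc{pact}(r)=L\cdot r$; all numeric values are illustrative.

```python
import numpy as np

def S(r, q, beta=1.0):
    # Expected beta-quadratic score (see Def. quad-scoring below).
    return beta * ((2 * r - r**2) * q + (1 - r**2) * (1 - q))

def optimal_report(q, c, L=0.5, beta=1.0):
    # Grid-search argmax_r S(r, q) + c * pact(r), with pact(r) = L * r.
    grid = np.linspace(0.0, 1.0, 10001)
    return grid[np.argmax(S(grid, q, beta) + c * L * grid)]

print(optimal_report(q=0.4, c=0.0))  # 0.4: truthful without a competing incentive
print(optimal_report(q=0.4, c=0.5))  # 0.525: report inflated by c*L/(2*beta)
```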
\subsubsection{Recommender Domains}
Throughout this article, we will consider different domains of recommenders,
defined as follows:
\begin{defn}[Recommender Domains]
\label{def:type-domain} We say that a recommender profile $\left(n,(q_{i},f_{i},c_{i})_{i\in[n]}\right)$
lies in a \emph{belief domain} $\mathcal{F}\subseteq\mathcal{D}$
(with $\mathcal{D}$ being the full belief domain, defined in \Cref{def:recommenders})
if $f_{i}\in\mathcal{F}~\forall i$. Further, we say that a recommender
profile lies in the \emph{incentive domain} $\mathcal{C}\subset\mathbb{R}^{n}$
if $(c_{1},..,c_{n})\in\mathcal{C}$. We call the tuple $\mathcal{T}=(\mathcal{F},\mathcal{C})$
the \emph{type domain}, and we say a recommender profile lies in the
type domain $\mathcal{T}$ if it lies both in the belief domain $\mathcal{F}$
and in the incentive domain $\mathcal{C}$.
\end{defn}
We will consider the following belief domains:
\begin{defn}[Belief domains]
\label{def:belief-domains} We will consider the following sets of
beliefs:
\begin{enumerate}
\item $\mathcal{F}_{all}:=\mathcal{D}$ is the set of all beliefs, with
$\mathcal{D}$ as defined in \pcref{def:recommenders},
\item $\mathcal{F}_{indep}:=\left\{ f\in\mathcal{D}:f^{q}(q_{\neg i}|0)=f^{q}(q_{\neg i}|1)\quad\forall q_{\neg i}\in[0,1]^{n-1}\right\} $,
i.e., each recommender $i$ believes that, given their own estimate
$q_{i}$, other recommenders' estimates $Q_{\neg i}$ do not carry
any additional information about $O$. This implies, through the independences
in \Cref{fig:bayesnet}, that $R_{\neg i}$ and $O$ are independent
as well, according to $i$'s subjective belief.
\end{enumerate}
\end{defn}
\subsection{Bribers}
We define a briber, who may attempt to manipulate the action by bribing
recommenders, as follows:
\begin{defn}[$d$-Rational Briber]
\label{def:briber} A \emph{$d$-rational briber} gains utility $d>0$
in the case of action $A=1$, and utility $0$ for action $A=0$.
The briber holds their own belief $\phi\in\Phi$, where $\Phi$ is
the set of all possible beliefs about the recommenders' beliefs, $Q_{1},..,Q_{n}$
and $F_{1},..,F_{n}$ \pcref{def:recommenders}, hence the briber's
\emph{type} is $(\phi,d)$.
The briber may offer a bribe to each recommender, which is only paid
if the desired action $A=1$ is taken. We model this as the briber
determining the competing incentives $c_{1},..,c_{n}\ge0$ in the
recommenders' types \pcref{def:recommenders}.
\end{defn}
The briber knows the mechanism, and that recommenders will maximize
their subjective expected utility, i.e. the briber knows that recommender
$i$ will pick their report according to $r_{i}^{*}\left((Q_{i},F_{i},c_{i}),\textsc{mech}\right)$
\Cref{eq:optimal-report}. However, $Q_{i}$ and $F_{i}$ are unknown
to the briber. Hence, given a mechanism $\textsc{mech}=(\textsc{act},\textsc{pay}_{1:n},\textsc{rand})$,
the briber's \emph{utility} (for a realization of recommenders' beliefs
$Q=q,F=f$, and in expectation with respect to $X$) is
\begin{align}
\textsc{util}_{b}(c,d,q,f,\textsc{mech}) & =\left(d-\sum_{i\in[n]}c_{i}\right)\cdot\nonumber \\
 & \quad\textsc{pact}\left(r_{1}^{*}\left((q_{1},f_{1},c_{1}),\textsc{mech}\right),..,r_{n}^{*}\left((q_{n},f_{n},c_{n}),\textsc{mech}\right)\right).\label{eq:briber-util}
\end{align}
Therefore, the subjective expected utility of the briber is,
\begin{align}
\textsc{se-util}_{b}(c,(d,\phi),\textsc{mech}) & =\mathbb{E}_{q,f\sim\phi}\left[\textsc{util}_{b}(c,d,q,f,\textsc{mech})\right],\label{eq:se-util-briber}
\end{align}
and they will hence pick the bribe
\begin{equation}
c^{*}\left((d,\phi),\textsc{mech}\right):=\arg\max_{c\ge0}\textsc{se-util}_{b}(c,(d,\phi),\textsc{mech}).\label{eq:optimal-bribe}
\end{equation}
Unless we state otherwise, we will assume that there is no briber
present.
\subsection{Incentive Compatibility}
We will study two properties of mechanisms. The first is truthfulness,
i.e., whether it is a dominant strategy for each recommender to reveal
their true $q_{i}$. Whenever this is not the case, we will study
to what extent the decision is affected by misreporting (i.e., the
manipulability of a mechanism).
\subsubsection{Truthfulness}
For a mechanism to be \emph{strictly truthful}, we require that all
recommenders' subjective expected utilities are strictly larger for
reporting their truthful beliefs than for any other report, regardless
of other recommenders' reports:
\begin{defn}[Strict truthfulness]
\label{def:truthfulness}
A mechanism $\textsc{mech}$ \pcref{def:mechanism} is \emph{strictly
truthful,} with respect to a type domain \pcref{def:type-domain}
$(\mathcal{F},\mathcal{C})$, iff for all $i\in[n]$ we have
\begin{align*}
\textsc{se-util}_{i}\left(q_{i},(q_{i},f_{i},c_{i}),\textsc{mech}\right) & >\textsc{se-util}_{i}\left(r_{i},(q_{i},f_{i},c_{i}),\textsc{mech}\right) \\
& \quad\quad\quad\forall r_{i}\neq q_{i}\in[0,1],f_{i}\in\mathcal{F},c\in\mathcal{C}.
\end{align*}
\end{defn}
In this case, truthful reporting is a dominant strategy of each recommender.
In the presence of a briber, two more notions will be useful:
\begin{defn}[Strict truthfulness with briber]
\label{def:truthfulness-briber}
A mechanism $\textsc{mech}$ \pcref{def:mechanism} is \emph{strictly
truthful in the presence of a $d$-rational briber} \pcref{def:briber}
with respect to a belief domain $\mathcal{F}$, iff the mechanism
is strictly truthful \pcref{def:truthfulness} with respect to the
type domain $\mathcal{T}=(\mathcal{F},\mathcal{C})$, where $\mathcal{C}$
is the set of competing incentives that could possibly be induced
by a briber type $\phi$ \pcref{eq:optimal-bribe}, i.e. $\mathcal{C}=\left\{ c^{*}\left((d,\phi),\textsc{mech}\right):\phi\in\Phi\right\} $.
\end{defn}
\begin{defn}[Bribe-freeness]
\label{def:bribe-freeness}
We say that mechanism $\textsc{mech}$ \pcref{def:mechanism} is
\emph{bribe-free} given a $d$-rational briber \pcref{def:briber}
if it is irrational for a briber of any type to bribe, i.e. a bribe
would reduce the subjective expected utility \Cref{eq:se-util-briber}:
\[
\textsc{se-util}_{b}(0,(d,\phi),\textsc{mech})>\textsc{se-util}_{b}(c,(d,\phi),\textsc{mech})\quad\forall\phi\in\Phi,c\in\mathbb{R}_{>0}^{n}
\]
which is equivalent to
\[
\textsc{util}_{b}(0,d,q,f,\textsc{mech})>\textsc{util}_{b}(c,d,q,f,\textsc{mech})\quad\forall q\in[0,1]^{n},f\in\mathcal{D}^{n},c\in\mathbb{R}_{>0}^{n}.
\]
\end{defn}
\subsubsection{Manipulability}
It is often impossible to achieve strict truthfulness in the presence
of competing incentives. Therefore, we also study the extent to which
misreports affect the decision:
\begin{defn}[Manipulability]
\label{def:manipulability}
We define the \emph{manipulability of a mechanism} $\textsc{mech}$
\pcref{def:mechanism}, with respect to a type domain \pcref{def:type-domain}
$\mathcal{T}=(\mathcal{F},\mathcal{C})$, as the worst-case (in terms
of types) manipulation:
\begin{align*}
& \textsc{manip}\left(n,\mathcal{F},\mathcal{C},\textsc{mech}\right):= \\
& \max_{q\in[0,1]^{n},f\in\mathcal{F}^{n},c\in\mathcal{C}}\left|\textsc{pact}\left(r_{1}^{*}\left((q_{1},f_{1},c_{1}),\textsc{mech}\right),..,r_{n}^{*}\left((q_{n},f_{n},c_{n}),\textsc{mech}\right)\right)-\textsc{pact}\left(q_{1},..,q_{n}\right)\right|.
\end{align*}
For a single recommender, we use the notation $\textsc{manip}\left(\mathcal{C},\textsc{mech}\right)$
instead of $\textsc{manip}\left(1,\emptyset,\mathcal{C},\textsc{mech}\right)$.
\end{defn}
We will characterize the assumptions on the mechanism and the recommender
types for which strict truthfulness or bounds on manipulability can
or cannot be achieved.
\subsection{Sensitivity to Reports}
Intuitively, if agents have a larger influence on the decision, then
their incentive for manipulation, in consideration of a competing
incentive, becomes larger. However, if the decision probability, $\textsc{pact}(r)$,
is completely insensitive to reports, then the mechanism has no utility.
To characterize the trade-off between budget, sensitivity to reports,
and truthfulness (or the amount of manipulation), we will use the
following definitions:
\begin{defn}[Sensitivity]
\label{def:sensitivity} For a single recommender, a decision function,
$\textsc{pact}(r)$, has \emph{sensitivity}, $\Delta$, if the recommender
can affect the decision probability by $\Delta$, i.e., $\Delta=\max_{r,r'\in{[0,1]}}\left|\textsc{pact}(r')-\textsc{pact}(r)\right|$.
For multiple recommenders, the sensitivity may be different with respect
to each recommender, and it may be a function of others' reports.
Hence, in the multi-recommender-setting, $\Delta=(\Delta_{1},..,\Delta_{n})$
represents a series of functions $\Delta_{i}:{[0,1]}^{n-1}\to[0,1]$:
\begin{equation}
\Delta_{i}(r_{\neg i})=\max_{r_{i},r_{i}'\in{[0,1]}}\left|\textsc{pact}(r_{i},r_{\neg i})-\textsc{pact}(r_{i}',r_{\neg i})\right|\quad\forall i\in[n],\forall r_{\neg i}\in{[0,1]}^{n-1}.
\end{equation}
\end{defn}
\begin{defn}[Max/Min-Uniform-Sensitivity]
\label{def:uniform-sensitivity} For a single recommender, we say
that a decision rule, $\textsc{pact}$, has \emph{max-uniform-sensitivity,
$L$, on an interval $a\subseteq[0,1]$}, if
\begin{align}
L & =\max_{r,r'\in a:r\neq r'}\frac{|\textsc{pact}(r')-\textsc{pact}(r)|}{|r'-r|},
\end{align}
and \emph{min-uniform-sensitivity} is defined as above, except with
a $\min$ instead of a $\max$.
For multiple recommenders, the sensitivity may be different with respect
to each recommender, and it may be a function of others' reports.
Hence, in the multi-recommender-setting, $L=(L_{1},..,L_{n})$, represents
a series of functions $L_{i}:{[0,1]}^{n-1}\to\mathbb{R}_{\ge0}$, and we have
a profile of intervals, $a=(a_{1},..,a_{n})$, with $a_{i}\subseteq[0,1]$.
We say that $\textsc{pact}$ has \emph{max-uniform-sensitivity $L$
on interval profile, $a$} if
\begin{align}
L_{i}(r_{\neg i}) & =\max_{r_{i},r_{i}'\in a_{i}:r_{i}\neq r_{i}'}\frac{|\textsc{pact}(r_{i}',r_{\neg i})-\textsc{pact}(r_{i},r_{\neg i})|}{|r_{i}'-r_{i}|}\quad\forall r_{\neg i}\in{[0,1]}^{n-1},
\end{align}
and \emph{min-uniform-sensitivity on interval profile $a$} is defined
as above, except with a $\min$ instead of a $\max$. Whenever we
refer to max/min-uniform sensitivity without specifying an interval,
we mean the interval is the full domain $a=[0,1]$.
\end{defn}
If $\textsc{pact}$ is linear on an interval $a$, then min-uniform-sensitivity
and max-uniform-sensitivity are identical on that interval. Further,
if $\textsc{pact}$ has min-uniform-sensitivity $L$ on an interval
$a$, then it has sensitivity of at least $L|a|$.
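To make these definitions concrete, the sketch below approximates the sensitivity $\Delta$ and the max-uniform-sensitivity on a grid, for the clipped-linear decision rule of \Cref{example} with illustrative parameters.

```python
import numpy as np

def sensitivity(pact, grid=np.linspace(0, 1, 1001)):
    # Delta = max over r, r' of |pact(r') - pact(r)| (single recommender).
    vals = pact(grid)
    return vals.max() - vals.min()

def max_uniform_sensitivity(pact, grid=np.linspace(0, 1, 1001)):
    # L = max |pact(r') - pact(r)| / |r' - r|, approximated on a grid.
    vals = pact(grid)
    return np.max(np.abs(np.diff(vals)) / np.diff(grid))

pact = lambda r: np.clip(2.0 * (r - 0.5), 0.05, 1.0)
print(sensitivity(pact))              # 0.95: from the floor 0.05 up to 1.0
print(max_uniform_sensitivity(pact))  # 2.0: the slope of the linear segment
```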
\section{Lying a Little is Cheap}
\label{sec:lying-cost}
In this section, we consider the single-recommender setting \pcref{def:single-recommender}
and we study the properties of the expected payment function $S$,
which determines the incentive for truthfulness, in isolation. In
later sections, we will tie these results to the setting where we
explicitly model agents' competing incentives in regard to the action
chosen.
In particular, we are interested in how much cost a recommender incurs
for deviations from truthfulness, and we make the following definition:
\begin{defn}[Cost of $\varepsilon$-Lying]
\label{def:cost-lying} For a single recommender, we define the \emph{cost
of $\varepsilon$-lying} of a scoring rule (payment function) with
expected payment $S$, for a given $\varepsilon\in(-1,1)$, as the
minimal (in terms of true belief) expected cost that is associated
with an $\varepsilon$ deviation, i.e.,
\begin{equation}
\textsc{cost}(\varepsilon):=\min_{q\in{[0,1]}:q+\varepsilon\in{[0,1]}}\left(S(q,q)-S(q+\varepsilon,q)\right).
\end{equation}
\end{defn}
Intuitively, this is the best case expected cost to a recommender,
for a given scoring rule, for changing their report by a small amount.
We will give upper bounds on the cost of $\varepsilon$-lying that
could possibly be achieved by any payment function with a fixed budget
$\beta$. Further, we will show that the quadratic scoring rule is
optimal in terms of cost of $\varepsilon$-lying.
\subsection{Upper Bound on the Cost of Lying}
\begin{restatable}[Upper bound on the cost of lying]{thm}{resultCostOfLyingUpper}\label{resultCostOfLyingUpper}
Any expected-payment function $S(r,q)\in[0,\beta]\ \forall r,q\in{[0,1]}$
has a cost of $\varepsilon$-lying \pcref{def:cost-lying} of at
most,
\begin{align*}
\textsc{cost}(\varepsilon) & \le\frac{\varepsilon^{2}}{1-|\varepsilon|}\beta,
\end{align*}
for any deviation $\varepsilon\in(-1,1)$. \end{restatable} This
result shows that the cost of lying scales quadratically in the size
of the misreport $\varepsilon$, meaning that small deviations are
cheap for any scoring rule.
\subsection{Lower Bound on the Cost of Lying}
We now show that the quadratic scoring rule is essentially optimal
in terms of the cost of $\varepsilon$-lying (\Cref{def:cost-lying}).
The following is the \emph{quadratic scoring rule}~\citep{brier1950verification},
normalized so that payments lie in $[0,\beta]$.
\begin{defn}[$\beta$-Quadratic Scoring Rule]
\label{def:quad-scoring} The payment made by the \emph{$\beta$-Quadratic
Scoring Rule} is,
\begin{equation}
\textsc{epay}(r,o)=\beta\left((2r-r^{2})o+(1-r^{2})(1-o)\right).
\end{equation}
This yields an expected payment, given a true belief $q$, of
\begin{equation}
S(r,q)=\beta\left((2r-r^{2})q+(1-r^{2})(1-q)\right).
\end{equation}
We have $\max_{r,o}[\textsc{epay}(r,o)]=\beta$, and $\min_{r,o}[\textsc{epay}(r,o)]=0$.
\end{defn}
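A quick numeric check of this definition, and of the cost of $\varepsilon$-lying it induces (the values below are illustrative):

```python
import numpy as np

beta = 1.0
S = lambda r, q: beta * ((2 * r - r**2) * q + (1 - r**2) * (1 - q))

# Properness: the expected score is maximized at the true belief.
grid = np.linspace(0, 1, 10001)
print(grid[np.argmax(S(grid, 0.3))])  # 0.3

# Cost of eps-lying: S(q, q) - S(q + eps, q) = eps^2 * beta for every q,
# matching the lower-bound lemma that follows.
q, eps = 0.3, 0.1
print(S(q, q) - S(q + eps, q))  # 0.01 = eps^2 * beta
```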
\begin{restatable}[Lower bound on the cost of lying]{lem}{resultCostOfLyingLower}\label{resultCostOfLyingLower}
The $\beta$-Quadratic Scoring Rule has a cost of $\varepsilon$-lying
\pcref{def:cost-lying} of
\begin{equation}
\textsc{cost}(\varepsilon)=\varepsilon^{2}\beta.
\end{equation}
\end{restatable} \begin{myproof} The result follows straightforwardly
by plugging the definition of the $\beta$-Quadratic Scoring Rule
into the definition of $\varepsilon$-lying \pcref{def:cost-lying}.
\end{myproof}
Comparing this result to \Cref{resultCostOfLyingUpper}, we see
that Quadratic Scoring is essentially optimal in terms of the cost
of $\varepsilon$-lying.
\section{Competing Incentives and Bribery}
\label{sec:competing-incentives-and-bribery}
We have seen that small deviations from truthfulness are cheap for
any payment scheme. However, there may still be hope for strict truthfulness
\pcref{def:truthfulness} in the setting where the competing incentives
(called $c$ in \Cref{def:recommenders}) stem from the utility
that recommenders gain from the mechanism picking $A=1$ through the
function $\textsc{pact}$. To see this, recall the definition of recommenders'
utilities \Cref{eq:util}:
\[
\textsc{util}_{i}\left(r_{i}\big|r_{\neg i},o,c_{i},\textsc{mech}\right):=\textsc{epay}_{i}\left(r,o\right)+c_{i}\cdot\textsc{pact}\left(r\right).
\]
For example, if we design $\textsc{act}$ (and hence $\textsc{pact}$)
to be independent of the reports $r$, then recommenders cannot manipulate
the action and have hence no incentive for misreporting. Although
this extreme mechanism would have no utility, one may hope to achieve
truthfulness, or at least a bound on the manipulation of the decision
\pcref{def:manipulability}, through the careful co-design of \textsc{act}
and \textsc{pay}, so that wherever $\textsc{pact}$ has a large sensitivity,
the incentives from $\textsc{epay}$ are strong enough to overcome
the temptation of manipulation.
In fact, we show that as soon as there is even one recommender \pcref{def:recommenders}
with a fixed competing incentive $c_{i}>0$, no mechanism can achieve
strict truthfulness \pcref{def:truthfulness} unless it completely
ignores all reports (i.e., it has sensitivity \pcref{def:sensitivity}
equal to $0$). We then quantify to what extent misreporting affects
the decision taken, i.e., we provide lower bounds on the manipulability
\pcref{def:manipulability} as a function of the sensitivity $\Delta$
\pcref{def:sensitivity}, the budget $\beta$ \pcref{def:mechanism},
and the competing incentives $(c_{1},..,c_{n})$. For the single-recommender
case, we again provide a matching upper bound, achieved by a mechanism
making payments according to the Quadratic Scoring rule.
On the other hand, if the competing incentives stem from a $d$-rational
briber \pcref{def:briber}, we give a positive result---it is possible
to attain truthfulness. We provide necessary and sufficient conditions
for truthfulness as a function of the sensitivity $\Delta$, the budget
$\beta$, and the incentive of the briber $d$.
At a high level, our results establish that the budget $\beta$ of
a mechanism \pcref{def:mechanism} must grow with the sum of squares
of the sensitivities, i.e., $\sum_{i}{\Delta_{i}}^{2}$, for the mechanism
to guarantee small manipulation or, whenever possible, truthfulness.
Of interest is that this allows the total influence of recommenders
on the decision (i.e., the sensitivity of the decision rule) to grow
with $\sqrt{n}$, while keeping the budget constant. Hence, with a
sufficient number of recommenders, we can attain a large aggregate
sensitivity, while maintaining truthfulness, or at least low manipulability.
\subsection{Single Recommender}
We first consider the single-recommender setting \pcref{def:single-recommender},
and then generalize the results to the multi-recommender setting.
We start by showing that no mechanism is strictly truthful when the
recommender has a fixed competing incentive, $c>0$. Thereafter, we
consider a setting where the incentive for misreporting is not intrinsic,
but comes from a $d$-rational briber, and we show that in this setting
truthfulness can be achieved.
\subsubsection{Recommender with an Intrinsic, Competing Incentive}
The following theorem gives a lower-bound on manipulability in the
presence of a recommender with an intrinsic (i.e., fixed) competing
incentive $c$: \begin{restatable}[Lower-bound on single-recommender
manipulability]{thm}{resultSingleRecLower}\label{resultSingleRecLower}
In the single-recommender setting \pcref{def:single-recommender},
any proper mechanism $\textsc{mech}$ \pcref{def:proper-mechanism}
with a budget $\beta>0$ and sensitivity $\Delta\geq0$ \pcref{def:sensitivity}
has manipulability \pcref{def:manipulability} of at least
\[
\textsc{manip}\left(\{c\},\textsc{mech}\right)\ge\frac{\Delta^{2}}{8\beta/c+2\Delta}.
\]
It follows that for any $c>0$, there is no strictly truthful mechanism,
unless it ignores the recommender's report, i.e., $\Delta=0$. Furthermore,
this result implies that to guarantee that $\textsc{manip}\left(\{c\},\textsc{mech}\right)\le\varepsilon$,
we need a budget of at least
\[
\beta=c\cdot\Delta^{2}\cdot\Omega\left(\frac{1}{\varepsilon}\right)\quad\text{as }\varepsilon\to0.
\]
\end{restatable}
This result has far-reaching implications. It states that as soon
as the recommender may have any (arbitrarily-small) competing incentive
$c>0$, there exists no proper mechanism that can guarantee that the
decision will not be manipulated by the recommender. Even if we co-design
the decision $\textsc{act}$ and payment $\textsc{pay}$ function
in possibly intricate ways, allowing for discontinuities and randomness,
it is impossible to guarantee that there will be no manipulation.
Before we discuss this lower bound quantitatively, we provide a complementary
upper bound on the achievable manipulability. \begin{restatable}[Upper-bound
on single-recommender manipulability]{thm}{resultSingleRecUpper}\label{resultSingleRecUpper}
In the single-recommender setting \pcref{def:single-recommender},
any proper mechanism $\textsc{mech}$ \pcref{def:proper-mechanism}
with max-uniform-sensitivity $L$ \pcref{def:uniform-sensitivity},
that makes payments according to the $\beta$-quadratic scoring rule
\pcref{def:quad-scoring}, has manipulability \pcref{def:manipulability}
of at most
\[
\textsc{manip}\left(\{c\},\textsc{mech}\right)\le\frac{L^{2}}{\beta/c}.
\]
This result implies that to guarantee that $\textsc{manip}\left(\{c\},\textsc{mech}\right)\le\varepsilon$,
we need a budget of at most
\[
\beta=c\cdot L^{2}\cdot\mathcal{O}\left(\frac{1}{\varepsilon}\right)\quad\text{as }\varepsilon\to0.
\]
\end{restatable}
To illustrate this result, suppose that we pick the decision function
$\textsc{pact}(r)=r\cdot L$, for some $0\le L\le1$, (along with
the $\beta$-quadratic payment rule). This mechanism has sensitivity
$\Delta=L$, and we can compare \Cref{resultSingleRecUpper} with
\Cref{resultSingleRecLower}. Together, they imply that the budget
$\beta$ required to guarantee a maximum manipulation $\varepsilon>0$,
while exhibiting sensitivity of at least $\Delta$, satisfies
\[
\beta=c\cdot\Delta^{2}\cdot\Theta\left(\frac{1}{\varepsilon}\right)\quad\text{as }\varepsilon\to0,
\]
where $c>0$ is the competing incentive. The budget must be proportional
to the competing incentive $c$, and inversely proportional to the
admissible manipulation $\varepsilon$. Interestingly, it scales quadratically
with the sensitivity. We will discuss the implications of these dependences
after discussing the setting where a briber is present.
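For concreteness, a numeric reading of the two bounds under illustrative values of $c$, $\Delta$, and $\beta$ (our numbers, not the paper's):

```python
# Lower and upper bounds on single-recommender manipulability for a mechanism
# with pact(r) = L * r and the beta-quadratic payment rule (so Delta = L).
c, Delta, beta = 1.0, 0.5, 10.0
lower = Delta**2 / (8 * beta / c + 2 * Delta)  # from the lower-bound theorem
upper = Delta**2 / (beta / c)                  # from the upper-bound theorem
print(lower, upper)  # ~0.0031 vs 0.025: both shrink as beta/c grows
```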
\subsubsection{Self-Interested, Rational Briber}
As we have seen in the previous section, whenever the competing incentive
of the recommender is non-zero ($c>0$) and the decision rule has
sensitivity $\Delta>0$, then no mechanism is truthful.
However, if the competing incentive comes from a briber, we shall
see that it is possible to design a mechanism in which it is not in
the interest of the briber \pcref{def:briber} to make a bribe. This
means that strict truthfulness \pcref{def:truthfulness-briber} can
be achieved and is equivalent to bribe-freeness \pcref{def:bribe-freeness}.
In the following, we characterize under what conditions truthfulness
can be achieved in this setting, by providing a necessary and then
a sufficient condition.
\begin{restatable}[Single Recommender with Briber: Necessary Condition
for Truthfulness]{thm}{resultSingleRecNecessary}\label{resultSingleRecNecessary}
In the single-recommender setting, for a proper mechanism $\textsc{mech}$
\pcref{def:proper-mechanism} to be strictly truthful in the presence
of a $d$-rational briber \pcref{def:truthfulness-briber}, it is
necessary that
\[
\Delta^{2}\leq8\frac{\beta}{d}\max_{r\in[0,1]}\textsc{pact}\left(r\right)
\]
with $\beta>0$ being the budget and $\Delta\geq0$ the sensitivity
\pcref{def:sensitivity} of the mechanism. \end{restatable}
We now turn to a positive result, showing the possibility of strict
truthfulness in the presence of competing incentives stemming from
a rational briber:
\begin{restatable}[Single Recommender with Briber: Sufficient Condition
for Truthfulness]{thm}{resultSingleRecSufficient}\label{resultSingleRecSufficient}
Consider the single-recommender setting with a $d$-rational briber
and a proper mechanism \pcref{def:proper-mechanism} with max-uniform-sensitivity,
$L$ \pcref{def:uniform-sensitivity}, and making payments according
to the $\beta$-Quadratic Scoring Rule \pcref{def:quad-scoring}.
In this case, a sufficient condition for strict truthfulness is
\[
L^{2}\le\frac{\beta}{d}\min_{r\in[0,1]}\textsc{pact}(r).
\]
\end{restatable}
As in the previous section, suppose that we pick the decision function
$\textsc{pact}(r)=r\cdot L$, for some $0\le L\le1$, (along with
the $\beta$-quadratic payment rule). This mechanism has sensitivity
$\Delta=L$; we can hence combine \Cref{resultSingleRecSufficient}
with \Cref{resultSingleRecNecessary}, and obtain
\begin{align}
\frac{1}{8\max_{r\in[0,1]}\textsc{pact}\left(r\right)}\cdot d\cdot\Delta^{2} & \le\beta\le\frac{1}{\min_{r\in[0,1]}\textsc{pact}(r)}\cdot d\cdot\Delta^{2}.
\end{align}
As in the case of a fixed, intrinsic incentive, the budget grows quadratically
with the sensitivity $\Delta$. As we shall see, in the multi-recommender
setting, this property will allow for truthful mechanisms with a low
budget, yet large aggregate sensitivity to recommenders' reports.
\subsection{Multiple Recommenders}
When eliciting information from multiple recommenders, several of
them may have competing incentives. We will consider the incentive
domain $\mathcal{C}(\bar{c})$ \pcref{def:type-domain} that contains
all incentive profiles such that the sum of incentives is bounded
by $\bar{c}$, i.e. $\mathcal{C}(\bar{c}):=\left\{ c\in\mathbb{R}_{\ge0}^{n}:\sum_{i\in[n]}c_{i}\le\bar{c}\right\} $.
\subsubsection{Recommenders with Intrinsic, Competing Incentives}
The lower bound on the manipulability from the single-recommender
case translates straightforwardly to the multi-recommender setting:
\begin{cor}[Lower-bound on multi-recommender manipulability]
\label{resultMultiRecLower} Any proper mechanism $\textsc{mech}$
\pcref{def:proper-mechanism} with budget $\beta>0$ and sensitivity
$\Delta$ \pcref{def:sensitivity}, has manipulability \pcref{def:manipulability}
(with respect to the full belief domain \pcref{def:type-domain}
$\mathcal{F}_{all}$) of at least
\begin{align*}
\textsc{manip}\left(n,\mathcal{F}_{all},\mathcal{C}(\bar{c}),\textsc{mech}\right) & \ge\max_{i\in[n],r_{\neg i}\in[0,1]^{n-1}}\left(\frac{\Delta_{i}^{2}(r_{\neg i})}{8\beta/\bar{c}+2\Delta_{i}(r_{\neg i})}\right).
\end{align*}
This implies that if there is any competing incentive $\bar{c}>0$,
there is no strictly truthful mechanism, unless it ignores all recommenders'
reports, i.e., $\Delta_{i}(r_{\neg i})=0~\forall i\in[n],r_{\neg i}\in[0,1]^{n-1}$.
\end{cor}
\begin{myproof} Consider recommender $i$. Suppose this is the only
recommender with a conflicting incentive, i.e., $c_{i}=\bar{c}$,
so that the others report truthfully, with $R_{\neg i}=q_{\neg i}$.
Further, suppose that recommender $i$ happens to reason correctly
about others' reports, i.e. $i$ knows $R_{\neg i}$. In the best-case
scenario, the mechanism happens to allocate the entire budget $\beta$
to recommender $i$. From \Cref{resultSingleRecLower}, we know
this recommender will manipulate the action probability by at least
\[
\frac{\Delta_{i}^{2}(q_{\neg i})}{8\beta/\bar{c}+2\Delta_{i}(q_{\neg i})}.
\]
Taking the worst-case recommender $i$ and beliefs $q_{\neg i}$,
we obtain the result.\end{myproof}
Hence, as in the single-recommender setting, it is impossible to design
a mechanism that guarantees strict truthfulness in the presence of
a fixed, intrinsic competing incentive.
\subsubsection{Self-Interested, Rational Briber}
We now turn to the setting where the competing incentives in this
multi-recommender setting come from a rational briber who attempts
to manipulate the decision. The necessary condition for a single
recommender translates straightforwardly to the multi-recommender
setting:
\begin{cor}[Multiple Recommenders with Briber: Necessary Condition for Truthfulness]
\label{resultMultiRecNecessary} For a proper mechanism $\textsc{mech}$
\pcref{def:proper-mechanism} to be strictly truthful in the presence
of a $d$-rational briber \pcref{def:truthfulness-briber}, it is
necessary that
\[
\Delta_{i}^{2}(r_{\neg i})\leq8\frac{\beta}{d}\max_{r_{i}\in[0,1]}\textsc{pact}\left(r_{i},r_{\neg i}\right)\quad\forall i\in[n],r_{\neg i}\in[0,1]^{n-1}
\]
with $\beta>0$ being the budget and $\Delta$ the sensitivity \pcref{def:sensitivity}
of the mechanism.
\end{cor}
\begin{myproof}Suppose the briber \pcref{def:briber} only targets
a single recommender. In the best case, the mechanism happens to allocate
the entire budget $\beta>0$ to this recommender. The result then
follows from \Cref{resultSingleRecNecessary} by taking the worst-case
in terms of whom the briber targets and the briber's beliefs $\phi$.
\end{myproof}
We now present a sufficient condition for truthfulness in the briber
setting, and we shall see in the following that this insight allows
for truthful mechanisms with large aggregate sensitivity.
\begin{restatable}[Multiple recommenders with briber: sufficient
condition for truthfulness]{thm}{resultMultiRecSufficient}\label{resultMultiRecSufficient}
Consider the multi-recommender setting with a $d$-rational briber
and a proper mechanism \pcref{def:proper-mechanism} with max-uniform-sensitivities
$L_{1},..,L_{n}$ (independently of others' reports) and budget $\beta$.
Suppose the mechanism makes payments to each recommender $i$ according
to the $\beta_{i}$-Quadratic Scoring Rule \pcref{def:quad-scoring},
with $\beta_{i}=\beta\frac{L_{i}^{2}}{\bar{L}}$ and $\bar{L}:=\sum_{j\in[n]}L_{j}^{2}$.
In this case, a sufficient condition for strict truthfulness is
\[
\sum_{j\in[n]}L_{j}^{2}\le\frac{\beta}{d}\min_{r\in[0,1]}\textsc{pact}(r).
\]
\end{restatable} Comparing this result to \Cref{resultSingleRecSufficient},
we see that we now have the \emph{sum of the squares} of the max-uniform-sensitivities
on the left-hand side. Hence, to maintain truthfulness when adding
more recommenders (supposing we give everyone an equal amount of influence,
i.e., equal sensitivity of the decision rule to each report),
each recommender's influence has to scale as $L_{i}=\frac{1}{\sqrt{n}}$,
and hence the sum of max-uniform-sensitivity constants can grow
with $\sqrt{n}$, i.e., the total influence of recommenders can grow
as more recommenders are added.
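The following sketch illustrates this scaling and the per-recommender budget split $\beta_{i}=\beta L_{i}^{2}/\bar{L}$ used in the theorem; the numbers are illustrative.

```python
import numpy as np

def budget_split(L, beta):
    # Per-recommender budgets beta_i = beta * L_i^2 / sum_j L_j^2.
    L = np.asarray(L, dtype=float)
    return beta * L**2 / np.sum(L**2)

# Equal influence L_i = 1/sqrt(n): the sufficient condition depends on
# sum_j L_j^2 = 1 (constant in n), while total influence sum_j L_j = sqrt(n).
for n in (4, 16, 64):
    L = np.full(n, 1 / np.sqrt(n))
    print(n, np.sum(L**2).round(6), np.sum(L).round(6))
```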
\section{Conditional Observations and Dependent Recommenders}
\label{sec:conditional-observations-and-dependent-recommenders}
It is often the case that we observe the outcome only conditionally
on the action taken. For instance, we only observe the quality of a
product if we decide to buy it, and we only observe whether a loan is
repaid if we decide to make it. Hence, in this section we address the challenge
that $O$ is only observed if $A=1$, which imposes the following
structure on the payment function.
\begin{defn}[Conditional Payment Function (CPF)]
\label{def:cond-pay} We say that a payment function is a \emph{conditional
payment function} (CPF) if it can be expressed as,
\begin{align*}
\textsc{pay}_{i}(r,o,x) & =\textsc{act}(r,x)\cdot\textsc{pay}_{i}^{1}(r,o,x)+(1-\textsc{act}(r,x))\textsc{pay}_{i}^{0}(r,x).
\end{align*}
\end{defn}
This decision-conditional outcome structure does not allow for $\textsc{pay}_{i}$
to be a standard scoring rule. \citet{ChenYilingDMwG} study this
setting in the context of decision markets and \citet{york2021eliciting}
in the context of a VCG-based scoring rule for information elicitation.
However, neither model is able to handle the presence of competing
incentives. Moreover, we show that these styles of mechanism are
not truthful for dependent recommenders, and we develop a novel mechanism
that addresses this problem.
Before we proceed to study this problem, we derive a condition for
truthfulness that is simpler than \Cref{def:truthfulness} but equivalent.
\subsection{Equivalent Conditions for Truthfulness}
The following result reformulates the condition for strict truthfulness
in a manner that will be convenient in this section.
\begin{restatable}[Strict truthfulness]{lem}{resultTruthfulness}\label{resultTruthfulness}
A mechanism $\textsc{mech}$ \pcref{def:mechanism} is \emph{strictly
truthful \pcref{def:truthfulness},} with respect to the type domain
$(\mathcal{F}_{all},\{\mathbf{0}\})$ \pcref{def:type-domain} of
recommenders with any belief and no competing incentives, iff for
all $i\in[n]$ we have
\[
\mathbb{E}_{O\sim q_{i}}\left[\textsc{epay}_{i}\left(q_{i}\big|r_{\neg i}^{O},O\right)\right]>\mathbb{E}_{O\sim q_{i}}\left[\textsc{epay}_{i}\left(r_{i}\big|r_{\neg i}^{O},O\right)\right]\quad\forall r_{i}\neq q_{i}\in[0,1],r_{\neg i}^{0},r_{\neg i}^{1}\in{[0,1]}^{n-1}.
\]
Further, a mechanism $\textsc{mech}$ \pcref{def:mechanism} is \emph{strictly
truthful \pcref{def:truthfulness},} with respect to the type domain
$(\mathcal{F}_{indep},\{\mathbf{0}\})$ \pcref{def:type-domain}
of recommenders with any belief and no competing incentives, iff for
all $i\in[n]$ we have
\[
\mathbb{E}_{O\sim q_{i}}\left[\textsc{epay}_{i}\left(q_{i}\big|r_{\neg i},O\right)\right]>\mathbb{E}_{O\sim q_{i}}\left[\textsc{epay}_{i}\left(r_{i}\big|r_{\neg i},O\right)\right]\quad\forall r_{i}\neq q_{i}\in[0,1],r_{\neg i}\in{[0,1]}^{n-1}.
\]
\end{restatable}
\subsection{Independent vs Dependent Recommenders}
We first present a slight generalization of the VCG-Scoring Mechanism
\citep{york2021eliciting} that is truthful for independent recommenders
and then show the difficulties that arise when recommenders are dependent.
\begin{defn}[Critical-Payment Mechanism (CPM)]
In the \emph{critical-payment mechanism}, the payment and decision
functions are deterministic, and $\textsc{act}(r)$ is strictly increasing
in each $r_{i}$ individually. We define the \emph{critical value},
at which the decision changes from 1 to 0, as
\[
\textsc{act}_{i}^{-1}(r_{\neg i}):=\min_{r_{i}:\textsc{act}(r)=1}r_{i}.
\]
Given this, the payment is defined as,
\[
\textsc{pay}_{i}(r,o)=\textsc{act}(r)\cdot o+(1-\textsc{act}(r))\textsc{act}_{i}^{-1}(r_{\neg i}).
\]
\end{defn}
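A minimal sketch of the CPM follows, with an illustrative threshold-on-the-mean decision rule; the helper names are ours.

```python
def cpm_pay(i, r, o, act, critical):
    # CPM payment: the realized outcome o when A = 1, and the critical value
    # act_i^{-1}(r_{-i}) when A = 0. `act` and `critical` are assumed callables.
    a = act(r)
    return a * o + (1 - a) * critical(r[:i] + r[i + 1:])

# Illustrative rule: act(r) = 1 iff the mean report is at least 0.5.
act = lambda r: int(sum(r) / len(r) >= 0.5)
critical = lambda r_mi: max(0.0, min(1.0, 0.5 * (len(r_mi) + 1) - sum(r_mi)))

print(cpm_pay(0, [0.9, 0.4, 0.5], o=1, act=act, critical=critical))  # 1 (A=1, repaid)
print(cpm_pay(0, [0.2, 0.4, 0.5], o=0, act=act, critical=critical))  # 0.6 (critical value)
```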
\begin{restatable}[Based on \citet{york2021eliciting}]{lem}{resultCPMIndep}\label{resultCPMIndep}
CPM is weakly truthful for independent recommenders in the absence
of competing incentives. \end{restatable} \begin{myproof} Recall
the condition for strict truthfulness for independent recommenders
\pcref{resultTruthfulness} which, for weak truthfulness takes the
form
\begin{equation}
\mathbb{E}_{q_{i}}\left[\textsc{pay}_{i}\left(q_{i},r_{\neg i},O\right)\right]\ge\mathbb{E}_{q_{i}}\left[\textsc{pay}_{i}\left(r,O\right)\right]\quad\forall r_{i}\neq q_{i}\in{[0,1]};r_{\neg i}\in{[0,1]}^{n-1}
\end{equation}
in the absence of competing interests, i.e.~with $c_{i}=0$. The
expected payment of CPM is
\begin{align}
\mathbb{E}_{q_{i}}\left[\textsc{pay}_{i}\left(r,O\right)\right] & =\textsc{act}(r)q_{i}+(1-\textsc{act}(r))\textsc{act}_{i}^{-1}(r_{\neg i}) \\
& =\begin{cases}
q_{i} & \text{if }r_{i}\ge\textsc{act}_{i}^{-1}(r_{\neg i}) \\
\textsc{act}_{i}^{-1}(r_{\neg i}) & \text{if }r_{i}<\textsc{act}_{i}^{-1}(r_{\neg i})
\end{cases}
\end{align}
and satisfies the weak truthfulness condition. Further, the optimal
report satisfies
\begin{align}
\textsc{act}(r_{i},r_{\neg i}) & =\begin{cases}
1 & \text{if }q_{i}>\textsc{act}_{i}^{-1}(r_{\neg i}) \\
0 & \text{if }q_{i}<\textsc{act}_{i}^{-1}(r_{\neg i})
\end{cases} \\
& =\textsc{act}(q_{i},r_{\neg i}).
\end{align}
\end{myproof} The following lemma shows, however, that the CPM mechanism
is not truthful for dependent recommenders and illustrates the following
intuition: \emph{Suppose recommender $i$ believes that recommender
$j$'s report is informative, i.e., $i$ assumes that $j$'s report
is larger when the hidden outcome is $O=1$ than when $O=0$ (i.e.,
$r_{j}^{1}>r_{j}^{0}$). Then, if recommender $i$ knew $j$'s report
$r_{j}$, they would adjust their own report $r_{i}$ towards $r_{j}$.
Since our mechanism can only evaluate recommenders' predictions in
the case $A=1$, the mere fact of scoring on $O$ must itself reveal
information about others' reports (assuming that the decision depends
in some way on others' reports).} \begin{restatable}{lem}{resultCPMDep}\label{resultCPMDep}
CPM is not weakly truthful for dependent recommenders. \end{restatable}
\begin{myproof} Recall the condition for strict truthfulness for
dependent recommenders \pcref{resultTruthfulness} which, for weak
truthfulness and in the absence of competing interests, i.e., $c_{i}=0$,
takes the form
\begin{equation}
\mathbb{E}_{q_{i}}\left[\textsc{pay}_{i}\left(q_{i},r_{\neg i}^{O},O\right)\right]\ge\mathbb{E}_{q_{i}}\left[\textsc{pay}_{i}\left(r_{i},r_{\neg i}^{O},O\right)\right]\quad\forall r_{i}\neq q_{i}\in{[0,1]};r_{\neg i}^{0},r_{\neg i}^{1}\in{[0,1]}^{n-1}.
\end{equation}
The expected payment of CPM is now
\begin{align*}
& \mathbb{E}_{q_{i}}\left[\textsc{pay}_{i}\left(r_{i},r_{\neg i}^{O},O\right)\right] \\
& =\mathbb{E}_{q_{i}}\left[\textsc{act}(r_{i},r_{\neg i}^{O})\cdot O+\left(1-\textsc{act}(r_{i},r_{\neg i}^{O})\right)\textsc{act}_{i}^{-1}(r_{\neg i}^{O})\right] \\
& =\textsc{act}(r_{i},r_{\neg i}^{1})\cdot q_{i}+\left(1-\textsc{act}(r_{i},r_{\neg i}^{1})\right)x\cdot q_{i}+\left(1-\textsc{act}(r_{i},r_{\neg i}^{0})\right)y\cdot(1-q_{i}),
\end{align*}
with $x:=\textsc{act}_{i}^{-1}(r_{\neg i}^{1})$ and $y:=\textsc{act}_{i}^{-1}(r_{\neg i}^{0})$
(which implies that $\textsc{act}(r_{i},r_{\neg i}^{1})=1$ iff $r_{i}\ge x$
and $\textsc{act}(r_{i},r_{\neg i}^{0})=1$ iff $r_{i}\ge y$). Suppose
that $x<y$, which is the case if $i$ believes that other recommenders'
beliefs are positively correlated with the outcome. We then have
\[
\mathbb{E}_{q_{i}}\left[\textsc{pay}_{i}\left(r_{i},r_{\neg i}^{O},O\right)\right]=\begin{cases}
x\cdot q_{i}+y(1-q_{i}) & \text{if }r_{i}<x \\
q_{i}+y(1-q_{i}) & \text{if }x\le r_{i}<y \\
q_{i} & \text{if }y\le r_{i}.
\end{cases}
\]
This implies that the recommender will always report $x\le r_{i}<y$,
regardless of their true belief, $q_{i}$. \end{myproof}
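A numeric instance of this proof, with illustrative values $x=0.3$, $y=0.7$, and $q_{i}=0.9$:

```python
# Expected CPM payment under dependent beliefs, in the proof's notation, with
# x = act_i^{-1}(r_{-i}^1) < y = act_i^{-1}(r_{-i}^0). Values are illustrative.
x, y, q = 0.3, 0.7, 0.9

def expected_pay(r):
    if r < x:
        return x * q + y * (1 - q)
    if r < y:
        return q + y * (1 - q)
    return q

for r in (0.1, 0.5, 0.9):
    print(r, expected_pay(r))  # 0.34, 0.97, 0.9
# Any report in [x, y) strictly beats reporting q = 0.9 itself, so the
# recommender misreports regardless of their true belief.
```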
\subsection{Truthful Mechanisms for Dependent Recommenders}
We now propose a method for decoupling payments and the action taken,
thereby preventing the leakage of information through the action.
This decoupling can be applied to make the payment functions of any
mechanism take the form of CPFs (\Cref{def:cond-pay}), and hence
admissible in the conditional-observation setting.
\begin{defn}[$\alpha$-decoupling]
\label{defn:alpha-decoupling} An \emph{$\alpha$-decoupling} takes
any mechanism, $(\textsc{act}',\textsc{pay}'_{1:n},\textsc{rand}')$,
as input and produces a modified mechanism, $(\textsc{act},\textsc{pay}_{1:n},\textsc{rand})$,
where the new payment functions $\textsc{pay}_{1:n}$ are CPFs. First,
we draw a Bernoulli RV, $X_{1}\sim\mathcal{B}(1-\alpha)$, and, if
the original mechanism is randomized, we also draw from its distribution
$X_{2}\sim\textsc{rand}'$. Let $X=(X_{1},X_{2})$. The new decision
and payment functions are defined as
\begin{align*}
\textsc{act}(r,x) & =(1-x_{1})+x_{1}\textsc{act}'(r,x_{2}), \\
\textsc{pay}_{i}(r,o,x) & =\frac{1}{\alpha}(1-x_{1})\textsc{pay}_{i}'(r,o,x_{2}).
\end{align*}
\end{defn}
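A sketch of the construction follows; for brevity we assume the base mechanism's own randomization, if any, is folded into $\textsc{act}'$ and $\textsc{pay}'$.

```python
import random

def alpha_decouple(act_prime, pay_prime, alpha):
    """Wrap a base mechanism (act', pay'_i) into its alpha-decoupled version."""
    def draw_x1():
        # X1 ~ Bernoulli(1 - alpha); X1 = 0 (prob. alpha) forces action A = 1.
        return int(random.random() < 1 - alpha)
    def act(r, x1):
        return (1 - x1) + x1 * act_prime(r)
    def pay_i(r, o, x1):
        # Nonzero only when x1 = 0, i.e., exactly when A = 1 was forced and O
        # is therefore observed -- a valid conditional payment function.
        return (1 - x1) * pay_prime(r, o) / alpha
    return draw_x1, act, pay_i
```

Because payments are made only on the forced branch, the act of being scored reveals nothing about other recommenders' reports through the action.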
\begin{restatable}[Decoupling maintains properties]{lem}{resultDecoupling}\label{resultDecoupling}
The mechanism $(\textsc{act},\textsc{pay}_{1:n},\textsc{rand})$ obtained
by applying $\alpha$-decoupling to mechanism $(\textsc{act}',\textsc{pay}'_{1:n},\textsc{rand}')$
satisfies $\textsc{epay}_{i}(r,o)=\textsc{epay}_{i}'(r,o)$ and $\textsc{pact}(r)=\alpha+(1-\alpha)\textsc{pact}'(r)$.
Further, if the original mechanism is strictly truthful, so is its
decoupled version. \end{restatable} \begin{myproof} We have
\begin{align*}
\textsc{epay}_{i}(r,o) & =\mathbb{E}\left[\textsc{pay}_{i}(r,o,X)\right] \\
& =\frac{1}{\alpha}\mathbb{E}\left[(1-X_{1})\textsc{pay}_{i}'(r,o,X_{2})\right] \\
\text{(since \ensuremath{X_{1},X_{2}} are indep)} & =\frac{1}{\alpha}\mathbb{E}\left[(1-X_{1})\right]\mathbb{E}\left[\textsc{pay}_{i}'(r,o,X_{2})\right] \\
& =\textsc{epay}_{i}'(r,o).
\end{align*}
and
\begin{align*}
\textsc{pact}(r) & =\mathbb{E}\left[\textsc{act}(r,X)\right] \\
& =\mathbb{E}\left[(1-X_{1})+X_{1}\textsc{act}'(r,X_{2})\right] \\
& =\alpha+(1-\alpha)\textsc{pact}'(r).
\end{align*}
Strict truthfulness of the decoupled mechanism follows because the
expected payments are identical, and the influence of recommenders
on the action probability is smaller, so incentives for misreporting
are smaller everywhere. \end{myproof} Hence, the results from the
previous section can easily be carried over to the conditional-observation
setting.
\section{Conclusion}
\label{sec:conclusion}
We have demonstrated that no decision scoring mechanism is completely
robust to intrinsic competing recommender incentives, and that the
best performance can be achieved through the use of the Quadratic
Scoring Rule. However, when a rational briber is the cause of the
competing incentives, we can get a no-manipulation result, as long
as the mechanism has sufficient budget relative to the maximum influence
that recommenders can have over the decision. For multiple recommenders,
we can allow the total recommender influence to grow as $\sqrt{n}$
while preserving the same incentives. We also show that dependent
recommender beliefs can cause an additional violation of strict truthfulness,
but that this can be resolved with a general decoupling construction.
To our knowledge, this problem of competing incentives in the context
of elicitation and decision making, also with dependent recommender
beliefs, has not been formally studied before and we believe it is
a rich area for future work. One interesting direction is to develop
optimal scoring rules for different types of allocation functions,
in particular aligning the magnitude of incentive with the magnitude
of decision impact of a recommender. Another interesting direction
is to allow recommenders to explicitly rate the quality of other recommenders,
and be rewarded for their posterior beliefs. This can open up improved
avenues for informative belief aggregation. %
\newpage{} \bibliographystyle{ACM-Reference-Format}
\section{Introduction} \label{sec:intro}
The smart grid is a critical cyber-physical system that plays a central role in modernizing the energy landscape. It stands to enhance energy security, facilitate the integration of renewable energy sources, and enable significant improvements in energy efficiency.
Enabling the smart grid is dependent on the development of reliable protection schemes to ensure its resilience against faults.
Smart grid protection is a highly challenging task, particularly at the distribution level in microgrids, due to factors such as bidirectional power flow, flexible topology, and the increasing utilization of renewable energy sources~\cite{brearley2017review, lin2019adaptive}. As microgrids transition between different topology configurations and between grid-connected mode with a high short-circuit current capacity and islanded mode with limited and variable capacity, it is vital for the protection schemes to adapt to changing grid conditions~\cite{hooshyar2017microgrid, barra2020survey}.
Adaptive protection is regarded as the most suitable approach to protecting microgrids \cite{brearley2017review, senarathna2019review}. It involves changes in the fault response of protection relays based on the system operating state and conditions \cite{senarathna2019review, khalid2021existing}.
This is commonly achieved by extending the functionality of existing traditional protection devices, such as inverse-time over-current relays, which is cost-effective given the absence of specialized commercial microgrid relays \cite{hooshyar2017microgrid} and the desire to easily integrate protection schemes into existing devices and substations with minimal capital expenditure \cite{abdulhadi2011adaptive}.
Supported by years of operation in traditional power systems, over-current relays offer an economical and effective solution to protect the power system against faults.
With the rise of microgrids, there is growing interest in extending the capabilities of over-current relays to enable adaptive protection \cite{bhattarai_adaptive_2015, wong2021selectivity, yousaf2021control}. This requires the development of methods to update over-current relay settings in response to changes in the grid, while maintaining both selectivity and sensitivity \cite{habib2017adaptive}. Selectivity refers to the relay's ability to precisely locate and classify faults to isolate the smallest possible faulty section of the power system, while sensitivity refers to the ability to promptly and accurately detect faults to quickly clear them.
Recent studies have explored the use of data-driven techniques, including machine learning algorithms, to leverage the growing amount of data generated in power systems in satisfying these protection requirements.
Data-driven techniques have been applied to detecting faults \cite{lin2016adaptive, lin2019adaptive}, identifying the location, type, and phase of faults \cite{james2017intelligent, abdelgayed2017new}, and adjusting relay settings \cite{tang2018data}.
Based on the findings of these studies, it appears that data-driven techniques can offer several benefits.
Firstly, data-driven methods can enhance the accuracy of protection decisions, improving fault detection and classification and reducing unnecessary tripping.
For example, Mishra \textit{et al.} \cite{mishra2015combined} conducted a study where they compared the accuracy of fault detection in protection schemes based on decision trees and random forests with traditional over-current relays. Their study showed over 40\% improvement in fault detection accuracy when leveraging the machine learning algorithms.
Secondly, data-driven techniques can offer a scalable solution to adapting relay settings, significantly reducing the need to manually identify system topology configurations and compute relay settings.
For example, in centralized communication-based implementations, data-driven techniques have the potential to replace traditional relays, monitoring the entire power system and directly controlling circuit breakers \cite{senarathna2019review}.
We survey some of the literature applying data-driven techniques to adaptive protection in Section \ref{sec:lit_review}.
Data-driven techniques, however, come with their own set of limitations and potential threats to protection reliability.
Black-box algorithms, such as neural networks and random forests, while often highly accurate, cannot be easily verified or validated, making them a significant risk for adoption in safety-critical applications such as protection.
Some other algorithms such as decision trees, while more interpretable, may suffer from instability: inconsistent decisions resulting from small changes in input data. This instability can also make it difficult to validate their decisions, thus undermining their feasibility for protection.
To successfully integrate data-driven techniques into adaptive protection, it is crucial to ensure that the interpretability of the protection decisions is preserved. This will enable protection engineers to verify the decisions while simultaneously reducing potential threats to reliability.
Additionally, adaptive protection schemes, which rely on real-time communication infrastructure such as peer-to-peer communication between relays or communication between a central protection management center and relays, may face potential threats to reliability due to network failure, latency, and cyberattacks \cite{gutierrez2020review, habib2017adaptive}. Although increased communication can lead to more accurate and informed protection decisions, the associated risks and threats must be carefully considered.
To acquire benefits of data-driven techniques and communication-based adaptive protection while reducing threats to reliability, we propose a novel data-driven adaptive protection scheme that emphasizes the interpretability of protection decisions for validation and minimizes reliance on communication.
Our proposed approach is based on Gaussian discriminant analysis and is designed to: (1) identify the system topology configurations that require a change in the protection relays' fault response, (2) specify necessary communication with relays, (3) locate faults with respect to relays for relay coordination, and (4) optimize relay settings for fast fault clearing. This approach is centralized with a central microgrid protection management center communicating with relays following infrequent major grid changes.
To demonstrate the effectiveness of our proposed strategy, we apply it to inverse-time protection relays.
\textbf{Contributions.}
\begin{itemize}
\item Our work presents a first-of-its-kind application of Gaussian discriminant analysis (GDA) to adaptive protection. Through our research, we discuss and demonstrate several benefits of GDA for adaptive protection, including improved data- and computational-efficiency, interpretability for ease of validation, adaptability to changing grid conditions, timely response, and reduced reliance on communication for protection.
To showcase the interpretability of our proposed strategy, we put emphasis on visualizing its protection decision-making process in this paper.
\item We extend the inverse-time protection formula to improve fault distinction by incorporating voltage drops associated with faults.
\item We formulate an optimization problem that enables the coordination and optimization of relays in a changing system environment.
\end{itemize}
We validate the strategy on the CIGRE medium voltage benchmark system.
The benchmark system permits many configurations to help validate the proposed adaptive protection method, including radial and mesh topology, and grid-connected and islanded modes.
\textbf{Organization. }The rest of the paper is structured as follows:
We survey related work in Section \ref{sec:lit_review}.
We present background material in Section \ref{sec:background}.
We present the problem statement discussing the goals of data-driven adaptive protection in Section \ref{sec:problem_formulation}.
We develop the protection strategy in Section \ref{sec:method}.
Next, we illustrate and discuss the results of the protection strategy using the benchmark system in Section \ref{sec:results}. The conclusion is in Section \ref{sec:conclusion}.
\section{Related Work} \label{sec:lit_review}
The literature reviews in \cite{gutierrez2020review, barra2020survey, khalid2021existing} provide different perspectives on adaptive protection. We will summarize the contents of these reviews that are relevant to our work. Our work proposes an intelligent centralized protection scheme with pre-set relays, which involves a central microgrid protection management center issuing instructions to the relays to switch their settings only after significant topology changes.
Next, we will survey some of the literature that applies data-driven techniques to adaptive protection to motivate our work.
In their analysis of literature on adaptive protection in the context of communication, Gutierrez-Rojas \textit{et al.} \cite{gutierrez2020review} categorize their examination according to communication technology (wireless or wired) and control approach (centralized or decentralized). Most of the adaptive protection research proposes centralized control approaches. In centralized control, a central control center is established to monitor the power system in real-time and send (either wired or wireless) instructions to relays to switch between relay settings or directly to circuit breakers to trip. For example, Oudalov \textit{et al.} \cite{oudalov2009adaptive} implemented a centralized control approach in which wired communication is established between the control center and advanced circuit breakers with integrated over-current relay units. In real-time, data is collected from the circuit breakers and relay settings are modified. Alternatively, communication can be limited to notifying relays of infrequent major grid topology changes, such as network reconfiguration, as in \cite{bhattarai_adaptive_2015}, with local measurements used to detect other changes, such as distributed energy resource status.
Barra \textit{et al.} \cite{barra2020survey} organize their review by the methods that rely on computational intelligence, such as machine learning, fuzzy logic, and optimization, and methods that do not.
Khalid \textit{et al.} \cite{khalid2021existing} categorize their review based on papers that propose methods for fault detection, relay setting adjustment to changes in network configuration, or relay optimization for coordination, and papers with methods that require and do not require pre-setting relays.
Recent studies have explored the use of data-driven techniques to make use of the growing streams of data in power systems for more accurate and reliable adaptive protection.
In these studies, machine learning algorithms are used to identify (or detect) the presence of a fault, classify the type of fault (phase-to-ground, phase-to-phase, etc.), and locate the fault in order to isolate it.
Typically, the data collection, pre-processing, and mining, as well as model training are performed offline.
For example, Lin \textit{et al.} in \cite{lin2016adaptive} trained two artificial neural networks (ANN) models to classify and locate faults using current measurements.
The fault localization ANN was replaced with a support vector machine (SVM) in \cite{lin2019adaptive}.
In \cite{lin2019adaptive}, the fault localization accuracy of the SVM model was consistently over 99\% (when testing the method using the IEEE 9 bus system). However, the classification accuracy of the ANN dropped to 77.5\% and 80.5\% on some lines where fault classification was challenging.
This highlights the challenge of accurately classifying faults in microgrids.
Similarly, Zayandehroodi \textit{et al.} in \cite{zayandehroodi2012novel} used combinations of radial-basis function ANNs to locate faults using current measurements.
Their results show that the first ANN was able to locate the fault within 0.01 km error. The second ANN predicts the faulty line based on the fault location.
They propose a backtracking algorithm to coordinate between relays.
Other studies like James \textit{et al.} \cite{james2017intelligent} and Tang \textit{et al.} \cite{tang2018data} trained algorithms on statistical features extracted from wavelet transforms of current measurements.
James \textit{et al.} used deep recurrent ANNs to classify and locate faults. Their method achieved fault detection and fault-type and fault-phase classification accuracies of over 97\%, and over 94\% accuracy in fault localization.
Tang \textit{et al.} used decision trees (DT) local to each relay to classify faults. Instead of using communication with a central controller, they proposed a peer-to-peer communication system between relay intelligent electronic devices (IED).
A novelty in Tang \textit{et al.} was the use of an ANN to adjust relay settings.
Although neural networks can be highly accurate and valuable for addressing the complexities of adaptive protection, they pose significant risks for adoption due to their black-box nature and difficulties in verification and validation.
Alternatively, the authors in \cite{abdelgayed2017new}
and \cite{mishra2015combined} applied interpretable machine learning algorithms to adaptive protection.
Abdelgayed \textit{et al.} \cite{abdelgayed2017new} applied a variety of algorithms to statistical features extracted from wavelet transforms of current measurements to classify fault types. The accuracies of the algorithms were as follows: k-nearest neighbors (kNN) with 95.63\%, DT with 90.4\%, SVM with 93.3\%, and na\"ive Bayes with 94.24\%.
Mishra \textit{et al.} used DTs to detect and classify faults with 97\% and 85\% accuracy, respectively.
Although their decisions are interpretable, DTs have a drawback of instability, meaning that small changes in the training data can result in large changes in the structure of the decision tree, leading to vastly different hypotheses and undermining their interpretability \cite{kenneth2007decision}.
The kNN algorithm is slow in test time as it involves calculating the distance of the test instance to all training data. This means that as the size and dimensionality of the training data increases, the computation time for making a prediction also increases. This can make kNN impractical for real-time applications with large datasets that require timeliness.
Na\"ive Bayes and GDA are both probabilistic data-driven techniques that are similar. GDA can learn more complex features and potentially achieve higher accuracy.
We note that probabilistic data-driven methods are generally stable, intuitive, interpretable, and computationally- and data-efficient \cite{ghahramani2015probabilistic, ghahramani2017pres}.
SVM and GDA are comparable; we discuss why we apply GDA over SVM in Section \ref{sec:results}.
\section{Background} \label{sec:background}
{We apply our adaptive protection method to the optimization and coordination of inverse-time protection relays.
We begin this section by describing the operation of inverse-time relays.
Next, we introduce GDA, which is the probabilistic data-driven approach that we use to distill fault data into protection decisions.
Lastly, we introduce Conditional Value-at-risk, which we use in parametrizing the fault data for relay optimization.}
\subsection{Inverse-time Protection} \label{sec:adaptiveprotection}
Over-current relays apply the following inverse-time equation \cite{benmouyal1999ieee} to compute their operation time, i.e., the delay before the relay trips:
\begin{equation} \label{eq:toc}
t_{op}(I_M; \text{TMS}, \eta, \lambda, L, I_S) = \text{TMS} \left( \frac{\lambda}{\left( \frac{I_M}{I_S} \right)^\eta - 1} + L \right)
\end{equation}
where $t_{op} > 0$ is the relay operation time, $I_M$ is the relay current measurement, and TMS and $I_S$ are the time-multiplier and pick-up current settings of the relay, respectively.
Directional over-current relays measure current in one direction, making them useful for protecting mesh and multi-source grids.
Equation (\ref{eq:toc}) defines a curve that is shaped by characteristic constants $\lambda$, $\eta$, and $L$.
Adaptive protection involves shifting the curve to cope with changing grid conditions \cite{che2014adaptive}.
Inverse-time over-current relays work on the principle that fault currents decrease with increasing distance to the fault location.
As such, shorter operating times are computed per (\ref{eq:toc}) for nearer faults. This facilitates relay coordination where relays closer to the fault location trip earlier to isolate the smallest possible section of the power system.
Coordinating relays based on fault currents alone is challenging in distribution systems, such as microgrids, where fault currents are limited; we therefore extend \eqref{eq:toc} to also consider the fault voltage.
\begin{align} \label{eq:tocV}
t_{op}&(I_M, V_M; \text{TMS}, \eta_i, \eta_v, \lambda_i, \lambda_v, L_i, L_v, I_S, V_S) = \\
&\text{TMS} \left( \frac{\lambda_i}{\left( \frac{I_M}{I_S} \right)^{\eta_i} - 1} + L_i \right) \left( \frac{\lambda_v}{\left( \frac{V_S}{V_M} \right)^{\eta_v} - 1} + L_v \right) \nonumber
\end{align}
where $V_M$ is the relay voltage measurement and $V_S$ is the pick-up voltage settings of the relay. Setting $\lambda_v=0$ and $L_v=1$ in \eqref{eq:tocV} yields \eqref{eq:toc}.
In \eqref{eq:tocV}, the trip time decreases with increasing current magnitude and with increasing voltage drop (i.e., decreasing measured voltage $V_M$): the protective device trips faster for larger fault currents and deeper voltage drops, and slower for smaller ones.
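For concreteness, a minimal Python sketch of the operating-time computation is given below. The constants in the example call are illustrative assumptions (0.14 and 0.02 are the familiar IEC standard-inverse values), not settings prescribed by our method.
\begin{verbatim}
# Sketch of the voltage-augmented operating time; variable names
# mirror the symbols in the equation above.
def t_op(I_M, V_M, TMS, eta_i, eta_v, lam_i, lam_v, L_i, L_v, I_S, V_S):
    # assumes a fault is seen, i.e., I_M > I_S and V_M < V_S
    current_term = lam_i / ((I_M / I_S) ** eta_i - 1.0) + L_i
    voltage_term = lam_v / ((V_S / V_M) ** eta_v - 1.0) + L_v
    return TMS * current_term * voltage_term

# Setting lam_v = 0 and L_v = 1 recovers the classical curve:
t = t_op(I_M=4.0, V_M=0.5, TMS=0.1, eta_i=0.02, eta_v=1.0,
         lam_i=0.14, lam_v=0.0, L_i=0.0, L_v=1.0, I_S=1.0, V_S=0.9)
\end{verbatim}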
\subsection{Gaussian Discriminant Analysis}
GDA is a probabilistic data classification algorithm that fits a Gaussian distribution to each class of data.
Let $y \in \{1,2,...,n\}$ represent the classes, and $n$ be the number of classes.
The probability of sampling an observation (or measurement) $\bm{x}$ given class $y = k$ is
\begin{equation} \label{eq:1}
P(\bm{x}|y=k) = \mathcal{N} (\bm{x}; \bm{\mu}_k, \bm{\Sigma}_k)
\end{equation}
where $\bm{\mu}_k$ and $\bm{\Sigma}_k$ are the mean vector and covariance matrix of the Gaussian distribution fitted to class $k$, respectively.
A GDA classifier {infers} the observation's class via Bayes' theorem:
\begin{equation} \label{eq:2}
P(y=k|\bm{x}) = \frac{P(\bm{x}|y=k) P(y=k)}{\sum_{i=1}^{n} P(\bm{x}|y=i) P(y=i)}
\end{equation}
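The classification rule translates directly into code; the following minimal sketch (assuming \texttt{scipy} for the multivariate Gaussian density) evaluates the posterior for a single observation.
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal

def gda_posterior(x, means, covs, priors):
    # P(y=k|x) via Bayes' theorem: likelihood times prior, normalized
    joint = np.array([multivariate_normal.pdf(x, mean=means[k],
                                              cov=covs[k]) * priors[k]
                      for k in range(len(priors))])
    return joint / joint.sum()
\end{verbatim}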
GDA can also be used for supervised dimensionality reduction to project the data to a reduced subspace, which maximizes the separation between classes.
Under the assumption that all classes share the same covariance matrix, $\bm{\Sigma}_y = \bm{\Sigma}$ for $y \in \{1,2,...,n\}$, this reduced subspace can be chosen to be linear, which aids in interpreting the results of GDA. The resulting algorithm is termed Linear Discriminant Analysis (LDA). Taking the log-posterior of (\ref{eq:2}) gives the following equation
\begin{equation}
\log P(y=k|\bm{x}) = (\bm{\Sigma}^{-1} \bm{\mu}_k)^\top \bm{x} + C
\end{equation}
where the coefficients in $\bm{\Sigma}^{-1} \bm{\mu}_k$ determine the importance of the observations' features in separating the classes ($C$ collects the remaining terms, which do not affect the feature weighting).
Features whose importance coefficients fall below a chosen threshold can be neglected to reduce the dimensionality of the problem.
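As a minimal sketch of this feature-ranking step, scikit-learn's LDA exposes the fitted coefficients $\bm{\Sigma}^{-1}\bm{\mu}_k$ as \texttt{coef\_}; the synthetic data and the 10\% cut-off below are placeholders.
\begin{verbatim}
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                  # placeholder features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # placeholder classes

lda = LinearDiscriminantAnalysis().fit(X, y)   # shared-covariance GDA
importance = np.abs(lda.coef_).max(axis=0)     # per-feature |coefficient|
keep = importance > 0.1 * importance.max()     # illustrative threshold
print(np.flatnonzero(keep))                    # retained feature indices
\end{verbatim}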
\subsection{Conditional Value-at-risk}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth, trim={0cm, 6.5cm, 0cm, 6.5cm}, clip]{figuresV2/cvar.pdf}
\caption{Illustrating the CVaR of a Gaussian distribution. The red area under the curve represents the $100\alpha \%$ tail of the Gaussian distribution. The CVaR$_{\alpha}$ value is the mean of this area and is represented by the vertical line.}
\label{fig:cvarIll}
\end{figure}
Conditional Value-at-risk (CVaR) \cite{rockafellar2002conditional} is a risk measure equal to the expected value of a random variable over its lowest $100\alpha \%$ of outcomes.
Let $X$ be the random variable and $q_\alpha$ the $100\alpha$-th percentile of its distribution; then the CVaR at confidence level $100\alpha \%$ is
\begin{equation}
\text{CVaR}_{\alpha} = E(X | X \leq q_\alpha)
\end{equation}
For a Gaussian distribution, CVaR has the following closed-form:
\begin{equation}
\text{CVaR}_{\alpha} = \mu - \sigma \frac{\phi(\Phi^{-1}(\alpha))}{\alpha}
\end{equation}
where $\mu$ and $\sigma$ are the distribution's mean and standard deviation, respectively; $\phi$ is the standard normal density function; and $\Phi^{-1}$ is the standard normal quantile function. CVaR$_{1}$ is equivalent to the mean of the random variable.
Fig. \ref{fig:cvarIll} demonstrates the area representing the $100\alpha \%$ tail of the normal distribution in red. CVaR$_{\alpha}$ is the mean of this area.
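The closed form is a one-liner in code; a minimal sketch using \texttt{scipy.stats.norm}:
\begin{verbatim}
from scipy.stats import norm

def cvar_gaussian(mu, sigma, alpha):
    # lower-tail CVaR of N(mu, sigma^2); for alpha = 1 this returns mu,
    # since phi(Phi^{-1}(1)) = 0
    return mu - sigma * norm.pdf(norm.ppf(alpha)) / alpha
\end{verbatim}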
\section{{Problem Statement}} \label{sec:problem_formulation}
{
Our data-driven adaptive protection method distills data into protection decisions that address the following goals: (1) fault identification and classification, (2) relay optimization and coordination, and (3) communication specification. This section discusses these goals.
}
{
When a power system experiences a short-circuit fault, the sudden reduction in impedance due to the introduction of the short-circuit causes a current rush.
Over-current relays must measure the increased current and isolate the fault to prevent significant consequences, including power system instability or equipment damage.
Primary relays, closest to the fault, must trip earliest to isolate the smallest possible section of the power system, with backup relays coordinated to trip in case the primary fails.
In practice, primary relays should operate following a short delay to allow temporary faults to clear, and backup relays should operate following a preset minimum coordination time (MCT) delay to coordinate with the downstream relay.
}
{
Hence, in relation to a relay, faults can be classified according to their distance from the relay.
A fault is primary (to a relay) if the relay is the closest device to the fault location.
The next (closest) relay(s) are backup to the primary relay.
A fault can have multiple primary relays, and a relay can have multiple backups, when relays are equidistant from the fault.
In this paper, we use the terms secondary and tertiary faults to classify faults that are progressively further from a relay, and \textit{`other'} to collectively classify tertiary and further faults. {Fault identification and classification} involve enabling relays to accurately identify the existence and class of a fault from local current and voltage measurements.
Identifying the class of the fault helps relays locate the fault, which enables their coordination.
{Relay optimization and coordination} involve computing relay settings to ensure rapid, correct, selective, and coordinated protection.
}
{
Variations in power system operating conditions can significantly change the currents measured by relays, which can cause relay misoperation.
Relays must adjust their operation in response to major grid changes, including the transition between grid-connected and islanded modes of operation, grid topology, and generator availability, which can significantly affect fault current.
This involves communicating (typically per the IEC61850 \cite{iec61850:2021_2021} protocol) these grid changes to the relays and directing them to switch their relay settings.
It is desirable to minimize the reliance on communication to reduce the infrastructure capital costs and system complexity and prevent risks associated with communication delays and cyberattacks. {Relay communication specification} involves specifying the contents of the information that will be communicated to the relays, and the communication channels required to enable reliable protection.
}
{
Finally, a comprehensive protection strategy must consider immeasurable, uncertain factors, including fault impedance, renewable energy intermittency, and fault distance from relays, which also influence the fault current.
}
The next section will introduce a GDA-based method that will address these goals.
\section{Proposed Adaptive Protection Method} \label{sec:method}
\textbf{Terminology.} We introduce the following {terminology} before presenting the method:
We use $P(X|Y;Z)$ to denote the probability of a random variable $X$ given $Y$, where $Z$ denotes the distribution parameters.
We denote directional relays with R$A$-$B$, where $A$ is the bus it is located at and $B$ is the bus it is directed to.
Each relay stores multiple relay setting groups. A setting group specifies the TMS, characteristic constants, and pick-up settings used by the relay to compute its operation time.
Each group is for a distinct relay mode of operation, which is dependent on the state of the system.
The superscript on variable $X^{(r)}$ associates it with a specific relay $r$ and the uppercase superscript on variable $X^{(Z)}$ additionally associates it with a specific relay mode $\bm{Z}$. The subscript $\psi$ denotes the fault class (e.g., primary fault).
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figuresV2/APscheme.pdf}
\caption{Flowchart of the proposed method.}
\label{fig:APscheme}
\end{figure*}
The proposed method consists of the steps shown in Fig. \ref{fig:APscheme}, which aim to address the goals outlined in the Problem Statement, Section \ref{sec:problem_formulation}.
The relay setting groups are computed via offline steps prior to deployment.
The offline steps consist of running Monte Carlo simulations to generate fault data representing the various fault scenarios that a protection relay may encounter following deployment.
The fault data consists of local current and voltage measurements that are locally accessible to the relay and discrete system states, including switch and generation states, that can be communicated to the relay from the central Microgrid Protection Management Center (MPMC) when needed.
Next, we apply the dimensionality reduction utility of GDA to identify the minimal communication requirements of the relay: we fit an LDA classifier to the fault data and extract the features of most importance to fault classification.
We use these features to specify the grid state (or topology) changes that the MPMC needs to communicate to the relay to enable adaptive protection.
A stage-2 LDA parametrizes the fault data into statistics that can be used in an optimization formulation to compute relay settings for relay selectivity and coordination.
The setting groups are stored in the memory of each relay IED.
Depending on the system state, the relay will operate in a certain mode with an associated setting group.
Post-deployment, the MPMC communicates with the relay to switch its setting group only if the relay needs to adapt in response to a system change.
In the case of a fault, the relay computes its operating time based on local measurements and its relay setting group.
The steps are explained in detail in the following subsections.
We focus this section on protection against 3-phase faults, as the strategy can be easily generalized to other fault types. We discuss applicability to different fault types in the Discussion in Section \ref{sec:results}.
\subsection{Offline Steps}
This subsection details the steps that are performed using offline simulation studies prior to deployment.
\textbf{Generate Fault Data.} Monte Carlo simulations {\cite{hammersley2013monte}} are performed to simulate various fault scenarios by leveraging probability distributions over factors that affect measured fault currents and voltages, including grid topology and generator availability and control, and uncertain factors such as fault impedance, fault distance from the relay, renewable energy intermittency, and fault class.
Power system studies, historical data, or experts' subjective judgments should define these distributions.
A fault datum for relay $r$, of the form,
\begin{equation*}
\{{\mathcal{X}}, \psi, I_F, V_F\}^{(r)}
\end{equation*}
consists of discrete measurements, ${\mathcal{X}}$, that can be infrequently communicated to relay $r$, including switch states (closed/open) and generator modes (in-service/out-of-service);
the fault current, $I_F$, and voltage, $V_F$, measured by the relay;
and the fault class, $\psi$.
In this work, we consider three classes for $\psi$ grouped in the set $\Psi = \{\text{primary}, \text{backup}, \text{other}\}$; but these classes can be extended.
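To make the data layout concrete, the sketch below assembles one such datum from a Monte Carlo draw. The scenario variables, their ranges, and \texttt{simulate\_fault} are placeholders standing in for the study-defined distributions and the short-circuit solver.
\begin{verbatim}
import random

CLASSES = ["primary", "backup", "other"]

def simulate_fault(relay, scenario):
    # placeholder for a short-circuit solver call (e.g., pandapower);
    # returns synthetic (I_F, V_F, psi) so the sketch runs end-to-end
    return (random.uniform(1.0, 10.0), random.uniform(0.1, 0.9),
            random.choice(CLASSES))

def sample_fault_datum(relay):
    scenario = {"S1": random.choice([0, 1]),      # switch state
                "PC2": random.choice([0, 1]),     # PCC state
                "R_f": random.uniform(0.0, 5.0)}  # fault impedance (ohm)
    I_F, V_F, psi = simulate_fault(relay, scenario)
    return {"X": scenario, "psi": psi, "I_F": I_F, "V_F": V_F}
\end{verbatim}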
\textbf{Specify Relay Communication.} In the proposed method, each relay is an agent making decisions based on its local current and voltage measurement and a subset of the measurements ${\mathcal{X}}$ communicated to the relay from the MPMC.
It is desirable to reduce this subset to minimize the reliance (in terms of communication frequency and bandwidth) on communication infrastructure for protection decisions.
To address this communication requirement, we use the feature importance, as determined by LDA, to specify which measurements need to be communicated to the relay to adapt its settings.
A (stage-1) LDA classifier fits Gaussian distributions to each class of $\psi$ over the space $({\mathcal{X}}, I_F, V_F)$, i.e.,
\begin{equation}
(I_F, V_F) | \psi \sim \mathcal{N}(\bm{\mu}^{(r)}_\psi, \bm{\Sigma}^{(r)}), \quad \psi \in \Psi
\end{equation}
after which the subset of ${\mathcal{X}}$ with significant coefficients in $\{ \left( \bm{\Sigma}^{(r)} \right)^{-1} \bm{\mu}^{(r)}_\psi\}^{(\psi \in \Psi)}$ is considered.
We denote the reduced set $\mathcal{X}^{(r)}_S \subseteq \mathcal{X}$.
Each of the elements in $\mathcal{X}^{(r)}_S$ is an equipment status whose change requires the relay to use a different relay setting group.
Hence, $\mathcal{X}^{(r)}_S$ defines the minimal set of distinct modes of operation of the relay.
Grid changes that are not in $\mathcal{X}^{(r)}_S$ would not significantly affect the protection decisions of relay $r$, and hence, are not communicated to the relay.
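Once $\mathcal{X}^{(r)}_S$ is fixed, the relay's modes are simply the value combinations of those discrete states; a short sketch, assuming the two states found for R12-13 in the case study of Section \ref{sec:results}:
\begin{verbatim}
from itertools import product

X_S = ["S1", "PC2"]                      # reduced set from the LDA step
modes = list(product([0, 1], repeat=len(X_S)))  # 2^|X_S| = 4 modes
setting_groups = {m: None for m in modes}       # filled by the optimizer
\end{verbatim}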
\textbf{Parametrize Fault Data.} Next, to parametrize the fault data into statistics that we can use for relay optimization and coordination, a (stage-2) LDA fits Gaussian distributions to each class of $\psi$ over $(I_F, V_F)$ for each grid topology identified in the reduced space $\mathcal{X}^{(r)}_S$.
For each relay mode $\bm{Z}$, the distributions fitted to the data are:
\begin{align}
P(I_F, V_F | \psi, \bm{Z}) &= \mathcal{N} \left( I_F, V_F; \bm{\mu}^{(Z)}_\psi, \bm{\Sigma}^{(Z)}_\psi \right), \\
& \psi \in \Psi, \bm{Z} \in \mathcal{X}^{(r)}_S \nonumber
\end{align}
where
\begin{equation*}
\bm{\mu}^{(Z)}_\psi = \begin{bmatrix}
{\mu}\{I_F\}^{(Z)}_\psi, {\mu}\{V_F\}^{(Z)}_\psi
\end{bmatrix}
\end{equation*}
\begin{equation*}
\bm{\Sigma}^{(Z)}_\psi = \begin{bmatrix}
{\sigma}^2\{I_F\}^{(Z)}_\psi & {\sigma}\{I_F,V_F\}^{(Z)}_\psi\\
{\sigma}\{I_F,V_F\}^{(Z)}_\psi & {\sigma}^2\{V_F\}^{(Z)}_\psi
\end{bmatrix}
\end{equation*}
denote the mode's fault statistics: ${\mu}\{I_F\}^{(Z)}_\psi$ and ${\mu}\{V_F\}^{(Z)}_\psi$ are the means of the fault current and voltage, respectively, and ${\sigma}^2\{I_F\}^{(Z)}_\psi$ and ${\sigma}^2\{V_F\}^{(Z)}_\psi$ are the variances of the fault current and voltage, respectively, in mode $\bm{Z}$.
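A minimal sketch of this parametrization: for each (mode, class) pair, the 2-D Gaussian is fitted by its sample mean and covariance. The records are assumed to follow the fault-datum layout above, with the mode already resolved from $\mathcal{X}^{(r)}_S$.
\begin{verbatim}
import numpy as np

def fit_mode_statistics(records):
    # records: dicts with keys "mode", "psi", "I_F", "V_F"
    stats = {}
    for key in {(r["mode"], r["psi"]) for r in records}:
        pts = np.array([[r["I_F"], r["V_F"]] for r in records
                        if (r["mode"], r["psi"]) == key])
        stats[key] = (pts.mean(axis=0), np.cov(pts.T))  # (mu, Sigma)
    return stats
\end{verbatim}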
\textbf{Compute Relay Settings.} Next, relay settings are optimized, using the above {fault statistics}, to achieve speed and selectivity of protection, and facilitate relay coordination.
We will describe the parameters of the optimization problem before formulating it.
To simplify relay optimization, we assume $L_i = L_v = 0$ in \eqref{eq:tocV} and introduce the variable $\zeta = \text{TMS} \times \lambda_i \lambda_v$ to derive the following equation for the relay operating time:
\begin{align} \label{eq:toc2}
t_{op}&(I_M, V_M; \zeta, \eta_i, \eta_v, I_S) = \\
& \frac{\zeta}{
\left( \left( \frac{I_M}{I_S} \right)^{\eta_i} - 1 \right)
\left( \left( \frac{V_S}{V_M} \right)^{\eta_v} - 1 \right)} \nonumber
\end{align}
The objective is to optimize the parameters $\zeta$, $\eta_i$, and $\eta_v$.
To include information about both the mean and variance of the current and voltage fault data in the optimization problem, we replace $I_M$ and $V_M$ in \eqref{eq:toc2} with
\begin{equation}
I_M = \text{CVaR}_{\alpha_i}(I_F) = \mu\{I_F\}^{(Z)}_\psi - \sigma\{I_F\}^{(Z)}_\psi \frac{\phi(\Phi^{-1}(\alpha_i))}{\alpha_i}
\end{equation}
\begin{equation}
V_M = \text{CVaR}_{\alpha_v}(V_F) = \mu\{V_F\}^{(Z)}_\psi - \sigma\{V_F\}^{(Z)}_\psi \frac{\phi(\Phi^{-1}(\alpha_v))}{\alpha_v}
\end{equation}
Using the CVaR enables more control over the parametrization of the fault statistics, as we will illustrate in Section \ref{sec:results}.
We use the following notation in the optimization problem formulation: $\mathcal{R}$ is the set of all relays in the system, $\mathcal{B}$ is the set of all buses in the system, $\mathcal{P}_B \subset \mathcal{R}$ is the set of all primary relays to bus $B$, and $\mathcal{S}_p \subset \mathcal{R}$ is the set of all backup relays to relay $p$. $\bm{\zeta}$ is the vector of all settings $\zeta^{(r)}, r \in \mathcal{R}$. $\eta_i, \eta_v$ are the same for all relays in the system for a specified topology.
{
Primary relays should operate following a short delay $\text{D}_{\text{min}}$, and backup relays should operate following an MCT delay.
}
For all grid topology configurations, we solve the following:
\begin{align} \label{eq:optimization_problem}
\min_{\bm{\zeta}, \eta_i, \eta_v} \quad & \sum_{B \in \mathcal{B}} \bigg{(} \sum_{p \in \mathcal{P}_B} \Big{(} t_{op} \big{(} \text{CVaR}^{(p)}_{\alpha_i}, \text{CVaR}^{(p)}_{\alpha_v}; \zeta^{(p)} \big{)} \nonumber\\
& + \sum_{b \in \mathcal{S}_p} t_{op} \big{(} \text{CVaR}^{(b)}_{\alpha_i}, \text{CVaR}^{(b)}_{\alpha_v}; \zeta^{(b)} \big{)} \Big{)} \bigg{)}
\end{align}
\begin{align}
& \text{subject to} \nonumber\\
& \eta_{i,\text{min}} \leq \eta_i \leq \eta_{i,\text{max}}\\
& \eta_{v,\text{min}} \leq \eta_v \leq \eta_{v,\text{max}}\\
& \zeta_{\text{min}} \leq \zeta^{(r)} \leq \zeta_{\text{max}}, \quad r \in \mathcal{R}\\
& t_{op} \big{(} \text{CVaR}^{(p)}_{\alpha_i}, \text{CVaR}^{(p)}_{\alpha_v}; \zeta^{(p)} \big{)} \geq \text{D}_{\text{min}}\\
& t_{op} \big{(} \text{CVaR}^{(b)}_{\alpha_i}, \text{CVaR}^{(b)}_{\alpha_v}; \zeta^{(b)} \big{)} - t_{op} \big{(} \text{CVaR}^{(p)}_{\alpha_i}, \text{CVaR}^{(p)}_{\alpha_v}; \zeta^{(p)} \big{)} \geq \text{MCT},\\
& \quad \quad \quad \quad B \in \mathcal{B}, p \in \mathcal{P}_B, b \in \mathcal{S}_p \nonumber
\end{align}
Next, relay settings $\{{\zeta}^{(r)}, \eta_i, \eta_v\}$ are stored in-memory on relay $r$ if the grid topology is identified in $\mathcal{X}^{(r)}_S$.
This optimization problem over $\{\bm{\zeta}, \eta_i, \eta_v\}$ is not convex. However, pre-setting $\eta_i, \eta_v$ yields a linear program over $\bm{\zeta}$. We optimize $\eta_i, \eta_v$ via gradient descent, re-running the linear program for different values of $\eta_i, \eta_v$. Although the joint problem is not convex, in practice we have found that starting from very small values, $\eta_i = \eta_v = 0.01$, yields good optimization results.
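A minimal sketch of the inner linear program for a single primary/backup pair: with $\eta_i, \eta_v$ fixed, $t_{op} = k_r \zeta^{(r)}$, where $k_r$ is the curve factor evaluated at the CVaR statistics; the numbers below are illustrative assumptions.
\begin{verbatim}
from scipy.optimize import linprog

k_p, k_b = 0.8, 0.5      # assumed CVaR-evaluated curve factors
D_MIN, MCT = 0.05, 0.15  # seconds

res = linprog(c=[k_p, k_b],              # minimize summed trip times
              A_ub=[[-k_p, 0.0],         # k_p*zeta_p >= D_MIN
                    [k_p, -k_b]],        # k_b*zeta_b - k_p*zeta_p >= MCT
              b_ub=[-D_MIN, -MCT],
              bounds=[(0.01, 10.0)] * 2) # zeta_min <= zeta <= zeta_max
zeta_p, zeta_b = res.x
\end{verbatim}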
The pickup settings $I_S, V_S$ are preset before optimization. $I_S$ should exceed the maximum full-load line currents, and $V_S$ should be less than bus voltages seen during normal operation.
We compute $I_S$ separately for each relay depending on its line's maximum load, and preset $V_S$ to $0.9$ pu for all relays.
\subsection{Online Operation}
During operation, each relay will operate in a specific relay mode with a relay setting group, based on the current state of the system. If the system state changes, and the relay's fault response needs to be switched accordingly, the MPMC will communicate with the relay to instruct it to switch to a different relay setting group.
The communication between the MPMC and the relay should ensure that MPMC messages are delivered successfully to the relay (e.g., via acknowledgments); otherwise, the message should be redelivered.
In the event of a fault, the relay detects the fault by checking whether its local current and voltage drop measurements exceed its pickup settings. Applying (\ref{eq:toc2}), the relay will calculate its operating time. If the fault is not cleared before the relay's operating time is reached, the relay will trip its circuit breaker. If the relay fails to trip, backup relays will have computed a longer operating time and will trip their circuit breakers following an MCT delay.
After tripping its circuit breaker, the relay will communicate to the MPMC the open status of its circuit breaker.
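The online logic can be summarized in the following sketch, which reuses the \texttt{t\_op} function sketched in Section \ref{sec:background}. \texttt{measure} and \texttt{trip\_breaker} are hypothetical IED interfaces; a production relay would evaluate the curve continuously rather than sleeping.
\begin{verbatim}
import time

def run_relay(group, measure, trip_breaker):
    while True:
        I_M, V_M = measure()
        # pickup check: over-current together with under-voltage
        if I_M > group["I_S"] and V_M < group["V_S"]:
            time.sleep(t_op(I_M, V_M, **group["settings"]))
            I_M, _ = measure()
            if I_M > group["I_S"]:       # fault not yet cleared
                trip_breaker()
                return
\end{verbatim}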
\section{Simulation Results, Validation and Discussion} \label{sec:results}
\begin{figure}[t]
\centering
\vspace{2mm}
\includegraphics[width=0.8\columnwidth]{figuresV2/cigre.pdf}
\caption{CIGRE medium voltage distribution network \cite{barsali2014benchmark, hooshyar2017microgrid}. It consists of two microgrids (MG). S denotes a switch, PC a point-of-common coupling, and DG a distributed generator. }
\label{fig:cigre_testbed}
\end{figure}
{
In this section, we will illustrate the proposed method and discuss its benefits.
}
The benchmark system we use to demonstrate the results is the CIGRE medium voltage distribution network, shown in Fig. \ref{fig:cigre_testbed}.
The system is composed of two radial microgrids (or feeders), but meshed topology can be obtained by closing switches S1, S2, and S3.
The system's rated voltage and frequency are 12.47 kV and 60 Hz, respectively.
For the purpose of studying fault current contributions from distributed generation, we add, as in \cite{hooshyar2017microgrid}, three DG units:
\begin{enumerate}
\item DG1 is a 4 MW rated full-scale VSC-interfaced DG. A 5 MVA, 4.16/12.47 kV, $X=0.07$ pu dYG transformer connects the DG to bus 6.
\item DG2 is composed of two 4.3 MVA synchronous generators. A 10 MVA, 4.16/12.47 kV, $X=0.07$ pu dYG transformer connects the DGs to bus 8.
\item DG3 is a 4.3 MVA synchronous generator. A 5 MVA, 4.16/12.47 kV, $X=0.07$ pu dYG transformer connects the DG to bus 12.
\end{enumerate}
The two microgrids can each operate in islanded mode, as DG2 and DG3 can supply the entire loads of their respective microgrids; they can also form an islanded mesogrid or operate in grid-connected mode.
Hence, the benchmark system permits many configurations to help validate the proposed adaptive protection method.
We generate short circuit data using Pandapower \cite{pandapower}.
Pandapower is an open-source software library for power system analysis and optimization in Python, which we use for the ease of integration with Python's data analysis and machine learning libraries.
Pandapower's short-circuit calculations are based on IEC60909 \cite{iec60909} and tested against DIgSILENT.
We also verify our results using a PSCAD time-domain fault simulation.
The probability distributions used to generate Monte Carlo simulations for faults with different grid topology and fault impedance are outlined in Table \ref{table:monte-carlo-vars} in the Appendix.
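A minimal example of the data-generation step is shown below; the CIGRE MV network ships with pandapower, though the DG placements of this paper and the Monte Carlo loop over topology and fault impedance are omitted here.
\begin{verbatim}
import pandapower.networks as pn
import pandapower.shortcircuit as sc

net = pn.create_cigre_network_mv(with_der="all")   # CIGRE MV benchmark
sc.calc_sc(net, fault="3ph", case="max", ip=True)  # IEC 60909 3-ph fault
print(net.res_bus_sc[["ikss_ka"]].head())          # initial SC currents
\end{verbatim}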
The studied cases focus on relay R12-13 and the protection of bus 8 for the insights that can be gained from their analysis.
The study of R12-13 validates the method for adjusting relay settings in response to variations in fault current resulting from transitioning from an isolated to a grid-connected mode of operation, as well as modifications in network structure from radial to mesh configuration.
The study of Bus 8 faults validates the method for coordinating relays in a mesh topology.
In optimizing the relay settings, we set $\text{D}_{\text{min}} = 3$ cycles, $\text{MCT} = 0.15$ seconds, and $\alpha_i = \alpha_v = 1$.
\subsection{Computing offline relay settings of R12-13}
\begin{figure}
\centering
\begin{tabular}{c c}
\includegraphics[width=0.45\textwidth]{figuresV2/kdeCurrents.png}
&
\includegraphics[width=0.45\textwidth]{figuresV2/kdeVoltages.png}
\\
(a) & (b)
\end{tabular}
\begin{tabular}{c}
\includegraphics[width=0.45\textwidth]{figuresV2/kdeCurrentsVoltages.png}
\\ (c)
\end{tabular}
\caption{Kernel density estimation plots of (a) current multiples, (b) voltage inverse multiples, and (c) current multiples and voltage inverse multiples measured by relay R12-13 during 3-phase faults.}
\label{fig:kdes}
\end{figure}
Fig. \ref{fig:kdes} illustrates kernel density estimations (KDE) of the fault current and voltage data of relay R12-13. The independent variables in Fig. \ref{fig:kdes} (a) and (b) represent the fault currents and voltages measured by R12-13 during the primary, secondary, and other classes of faults. Fig. \ref{fig:kdes} (c) combines the currents and voltages into a single KDE plot.
Note that the current (voltage) is expressed in terms of multiples (inverse multiples) of the current (voltage) pickup setting.
The significant overlap between the different fault classes hinders the ability to classify faults based on the fault current and/or voltage magnitude.
In order to address this issue, the stage-1 LDA separates the relay operation into modes that better distinguish the fault classes based on current and/or voltage measurement to enable a local measurements-based approach for protection.
Fig. \ref{fig:star} displays the feature importance coefficients as determined by the stage-1 LDA.
From Fig. \ref{fig:star}, it is observed that the features with the most significant importance coefficients are the voltage and current magnitudes, and the status of the downstream switch S1.
This is further confirmed in Fig. \ref{fig:GDAaccuracy} by evaluating the LDA's classification accuracy as the number of features it has access to is increased (in accordance with the order of feature importance in Fig. \ref{fig:star}).
The blue bars in Fig. \ref{fig:GDAaccuracy} indicate the accuracy when classifying faults into 3 classes (primary, secondary, or other), and the orange bars indicate the accuracy when classifying faults into 2 classes (primary/secondary or other).
The classifier's accuracy (in blue) reaches a plateau of $77\%$ after being provided with a small subset of the measurements comprising the voltage and current magnitudes, and the states of S1 and PC2.
As such, we compartmentalize R12-13's operation into $2^2 = 4$ modes of operation corresponding to the permutations of S1 and PC2 each being open or closed.
Hence, the MPMC needs only to communicate the status of S1 and PC2 to relay R12-13.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth, trim={0cm 2cm 17cm 2cm}, clip]{figuresV2/star1.png}
\caption{Feature importance coefficients of the GDA classifier trained on R12-13 fault data. The voltage (V) and current (I) are the only two local measurements. The rest are states of switches (S), points-of-common coupling (PC), or distributed generators (DG) that can be communicated to the relay from the MPMC.}
\label{fig:star}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{figuresV2/histogramAccuracy1.png}
\caption{GDA fault classification accuracy. The classes are primary (P), backup (B), and other (O).
Each bar represents the classification accuracy when the GDA is trained on the corresponding feature together with all features to its left; the right-most bar is thus the accuracy when trained on all features. The accuracy plateaus after the first 4 features.}
\label{fig:GDAaccuracy}
\end{figure}
In Fig. \ref{fig:r1213States} (a) to (d), we divide R12-13's operation into these 4 modes and plot the KDEs of the fault current and voltage data in each mode.
Fig. \ref{fig:r1213States} (a) and (b) correspond to the modes with S1 open; hence, R12-13 only measures primary and secondary faults.
Fig. \ref{fig:r1213States} (b) and (d) correspond to the modes with PC2 closed; hence, R12-13 experiences higher fault current magnitudes due to the increased contribution from upstream sources, including the main grid.
It can be observed that by compartmentalizing the relay operation into modes, the
overlap between the different fault classes is significantly reduced, facilitating the distinction of fault classes based on voltage and current magnitude.
The fault current (voltage) magnitude decreases (increases) as the distance from the fault increases.
This enables the use of inverse-time protection to coordinate relays.
The objective of the stage-2 LDA is to parametrize the fault data so that their distribution statistics can be utilized for relay coordination.
To the right of each KDE plot in Fig. \ref{fig:r1213States}, we plot the corresponding inverse-time operation curves against the fault current (measured as multiples of pickup current setting).
While the relay operation time is a function of both the fault current and voltage, we only plot the operation curves against the fault current for the purposes of demonstrating relay coordination, as it is hard to observe the coordination in a 3-dimensional plot.
We illustrate the full 3-dimensional inverse-time operation curve of R12-13 against both fault current and voltage (measured as inverse multiples of pickup voltage setting) for mode (c) in Fig. \ref{fig:topVI}.
\begin{figure}
\centering
\begin{tabular}{ >{\centering\arraybackslash} m{0.02cm} >{\centering\arraybackslash} m{6cm} >{\centering\arraybackslash} m{6cm}}
{\centering (a)} & \includegraphics[width=0.37\textwidth]{figuresV2/state0.png}
&
\includegraphics[width=0.37\textwidth]{figuresV2/TopState0.png}
\\
(b) & \includegraphics[width=0.37\textwidth]{figuresV2/state1.png}
&
\includegraphics[width=0.37\textwidth]{figuresV2/TopState1.png}
\\
(c) & \includegraphics[width=0.37\textwidth]{figuresV2/state2.png}
&
\includegraphics[width=0.37\textwidth]{figuresV2/TopState2.png}
\\
(d) & \includegraphics[width=0.37\textwidth]{figuresV2/state3.png}
&
\includegraphics[width=0.37\textwidth]{figuresV2/TopState3.png}
\end{tabular}
\caption{(left) Kernel density estimation plots of current multiples and voltage inverse multiples measured by relay R12-13 and (right) relay operating curves against current multiples for each of its four modes. Modes (a): S1, PC2 open, (b): S1 open, PC2 closed, (c): S1 closed, PC2 open, (d): S1, PC2 closed.}
\label{fig:r1213States}
\end{figure}
To demonstrate relay coordination:
\begin{itemize}
\item We plot vertical dashed lines in Fig. \ref{fig:r1213States} to illustrate the CVaR of primary fault currents measured by R13-14 (blue), i.e., $\text{CVaR}_{\alpha_i}(I_F)^{13-14}$.
\item Since the current multiples measured by the backup relay R12-13 might differ from those measured by R13-14 due to different $I_S$ settings or line branching, we shift the operation curve of R12-13 so that the vertical line also matches how R12-13 measures the fault current.
\item The voltage value of each of the operation curves of R13-14 and R12-13 corresponds to the voltage values each measures during the fault characterized by the vertical line.
\end{itemize}
The intersection of the relay curve with the dashed line marks the relay operation time; hence, the higher curve trips later.
Relay coordination can be easily evaluated by looking at the relay operation curves. R12-13 serves as a backup for R13-14, which can be observed from R12-13's curve being higher than R13-14's curve in all modes.
Fig. \ref{fig:r1213States} illustrates how the relays adjust to changes in the grid operation, including going from connected to islanded modes and switching between radial and meshed topologies.
The four relay setting groups for R12-13 and its downstream relay R13-14 are included in Table \ref{table:m2settings} in the Appendix. These setting groups must be stored in the relays' memories to enable them to adapt to changing conditions.
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{figuresV2/TopCurrentVoltage.png}
\caption{Relay operating curve of relay R12-13 against current multiples and voltage inverse multiples for mode (c): S1 closed, PC2 open.}
\label{fig:topVI}
\end{figure}
\textit{Remarks:} In modes (a) and (b) when S1 is open, the optimization results in a TMS of 0 for R13-14, which means it is sufficient to operate R13-14 as an instantaneous over-current relay. The increase in operation time for the relays when S1 is closed allows for coordination with relays further downstream.
In Fig. \ref{fig:r1213States} (c), an important insight is the distinctive `crescent' shape observed in the KDE plot. This shape indicates that voltage drops increase more significantly than fault current increases for nearer faults.
A factor contributing to this is the limited fault current of distributed generation.
As a result, incorporating voltage measurements enhances the ability to distinguish fault classes in this mode. This finding supports our decision to include voltage drops in the inverse-time relay operation.
Lastly, we discuss how the use of CVaR is beneficial in the relay optimization process.
Fig. \ref{fig:cvar} shows how different value pairs of CVaR for current and voltage can be used to better control how the data is parametrized before it is used in the optimization. Lowering the CVaR values allows for greater emphasis on the lower tails of the fault current and voltage distributions. This is useful because inverse-time protection is faster at clearing faults with high current and low voltage magnitudes (top-right corner of the figure). Lowering (raising) the CVaR values shifts the focus of the optimization and allows for faster (slower) clearing of faults.
\begin{figure}
\centering
\includegraphics[width=0.55\textwidth]{figuresV2/cvar.png}
\caption{Illustrating the benefits of CVaR in optimization. Each point represents $(\text{CVaR}_{\alpha_i}(I_F)^{12-13}, \text{CVaR}_{\alpha_v}(V_F)^{12-13})$. The values of $(\alpha_i, \alpha_v)$ are annotated.}
\label{fig:cvar}
\end{figure}
\subsection{Relay coordination at Bus 8 with topology change}
\begin{figure}
\centering
\begin{tabular}{c c}
\includegraphics[width=0.45\textwidth]{figuresV2/r38Case0.png}
&
\includegraphics[width=0.45\textwidth]{figuresV2/r38Case1.png}
\\
(a) & (b)\\
\includegraphics[width=0.45\textwidth]{figuresV2/r38Case2.png}
&
\includegraphics[width=0.45\textwidth]{figuresV2/r38Case3.png}
\\
(c) & (d)
\end{tabular}
\caption{
Relay operation curves against current multiples for relays (a) R14-8, R13-14, and R12-13, (b) R7-8 and R6-7, and (c) R3-8, R2-3, and R1-2 for a fault at bus 8 when S3 is open and S1, S2, PC1 and PC2 are closed.
(d) Relay operation curves against current multiples for relays R3-8, R2-3, and R1-2 for a fault at bus 8 when S1, S2, and S3 are open and PC1 is closed.}
\label{fig:r38States}
\end{figure}
\begin{figure}
\vspace{2mm}
\centering
\includegraphics[width=0.7\textwidth]{figuresV2/time-domain-pscad.png}
\caption{PSCAD simulation of the (a) instantaneous current measured by R1-2, (b) voltages, and (c) tripping signals of R3-8, R2-3, and R1-2 for a $1 \Omega$ fault at bus 8 with PC1 closed, and switches S1, S2, and S3 open.}
\label{fig:pscad}
\end{figure}
To validate the method for coordinating relays in a mesh topology, we consider faults at bus 8. When S1 and S2 are closed, a fault at bus 8 is isolated by three primary relays: R14-8, R7-8, and R3-8.
Each primary relay has one or more backup.
In Fig. \ref{fig:r38States}, we plot the relay operation curves of relays (a) R14-8, (b) R7-8, and (c) R3-8 and their backup relays when PC1, PC2, S1 and S2 are closed.
The dashed vertical lines show the CVaR of fault currents measured by the primary relays for faults at bus 8.
The figure demonstrates the successful isolation of the fault from all generation sources and relay coordination in case of relay failure.
If S1 and S2 are open, the topology of the microgrids becomes radial with R3-8 offering primary protection to bus 8 and R2-3 and R1-2 its backups.
The fault current is supplied by the main grid.
The relay operation curves of these relays in this case are shown in Fig. \ref{fig:r38States}(d).
PSCAD simulations validate the coordination results of this case study. In Fig. \ref{fig:pscad}, a $1 \Omega$ fault occurs at bus 8 at the 0.2 seconds mark. We suppress the operation of the primary (R3-8) and its backup (R2-3) relays, and let R1-2 isolate the fault. The operation times of the relays closely match the coordination curves in Fig. \ref{fig:r38States}(d).
The relay settings for these two case studies are listed in Table \ref{table:m1settings} in the Appendix.
\subsection{Discussion}
{
The proposed method has the following merits:}
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{figuresV2/classifiersAccuracy.png}
\caption{Comparing LDA classification accuracy with other algorithms.}
\label{fig:compareAccuracy}
\end{figure}
\subsubsection{Evaluating LDA accuracy}
The significant overlap of different fault classes observed in Fig. \ref{fig:kdes} shows that it is infeasible to classify a fault based only on local current and voltage measurements.
In Fig. \ref{fig:compareAccuracy}, we compare the classification accuracy of multiple machine learning algorithms in classifying the faults measured by R12-13 when limiting the training data to discrete grid states and local measurements -- to avoid the reliance on real-time communication.
On the left side of Fig. \ref{fig:compareAccuracy}, we include algorithms that yield interpretable results, including SVM, DT, LDA, and kNN.
On the right, we include algorithms that generally have lower learning bias and higher accuracy, but do not yield interpretable results, including random forest (RF) and multi-layer perceptrons (MLP).
Hyperparameter optimization for these algorithms is listed in Table \ref{table:hyperparameters} in the Appendix.
We observe that MLPs achieve the highest classification accuracy, as expected; however, the maximum accuracy is only $84\%$, highlighting the difficulty of the classification task.
Note that these algorithms have been shown to yield higher performance in centralized real-time communication-based protection strategies in the literature, when not constrained to local measurements and discrete system states. But this exposes the protection to reliability threats related to the communication infrastructure.
While LDA's accuracy is lower, the interpretability of the LDA results enabled us to augment it with an optimization-based approach that facilitates relay coordination, preventing miscoordination between relays and their backups.
The interpretability of LDA enabled easy visualization of the fault data statistics, protection decisions, and relay coordination.
Hence, the results of the method can be easily interpreted and validated by system operators and protection engineers, facilitating its use for adaptive protection and reducing associated implementation risks and reliability threats.
Despite its lower accuracy, we prefer LDA over DT due to its stability.
Further, we prefer LDA over SVM since LDA enables fitting Gaussian distributions to fault classes, which we use to reduce problem dimensionality and parametrize the fault data for optimization.
As the results show, Gaussian distributions fit the fault data well.
\subsubsection{Communication Network Delays and Cyberattacks}
The proposed protection strategy requires communication of the grid topology to the relays. Generally, this reliance on communication infrastructure can expose the protection relay operation to potential risks, such as network delays or cyberattacks. To address this issue, our method minimizes communication to the relays by communicating only a subset of the infrequent major changes to the grid topology. The necessary settings to operate the relays in real-time are stored locally in memory on the relays.
This provides an advantage over centralized protection strategies that require a constantly active communication channel with the relay.
Due to the infrequent communication, temporary network delays will not significantly impact the protection.
Further, the infrequent communication permits the use of encryption algorithms to authenticate and protect the communication to relays, making the proposed protection strategy more robust against potential cyberattacks and network delays.
\subsubsection{Achieving Protection Relay Characteristics}
The strategy extends the operation of inverse-time protection relays by adapting protection decisions to changing grid conditions and considering voltages for an improved distinction of faults in systems with limited fault currents.
Below, we evaluate the proposed strategy against the characteristics of effective protection strategies.
\begin{itemize}
\item Reliability:
The proposed protection strategy detects and clears faults in a timely and accurate manner.
The optimization problem minimizes the operation time of relays to expedite isolating faults.
To prevent false alarms or misoperations, the relays in the proposed strategy,
similar to existing inverse-time over-current relays,
do not trigger unless the fault current exceeds a preset pick-up current setting.
Uncontrolled variables, including fault impedance, inception angles, pre-fault power system conditions, and relay measurement errors, can be modeled into the Monte-Carlo simulations.
Further considerations, including in-rush currents due to switching of loads, distributed generation, filter capacitors, or transformers and transformer saturation, can be addressed as is currently done with over-current relays, or included as constraints in the optimization \eqref{eq:optimization_problem}.
\item Selectivity:
As illustrated in the KDE plots, the proposed protection strategy compartmentalizes relay operation into modes, in which it is more feasible for the relay to discriminate between different types of faults based on local measurements.
\item Coordination:
As demonstrated by relay curves in multiple case studies in the paper, the protection strategy is able to coordinate relays to isolate minimal sections of the system during faults.
Non-adjustable devices, such as fuses, should be coordinated with the relays, or further constraints can be considered in the optimization \eqref{eq:optimization_problem} to coordinate with any devices.
\item Adaptability:
Cases have demonstrated the ability of the relays to adapt to transitions from isolated to a grid-connected mode of operation with higher short-circuit capacities, as well as modifications to network topology from radial to mesh configuration.
\item Simplicity:
Modern IEDs used for protection can readily process the proposed protection strategy.
The strategy minimizes the reliance on the communication infrastructure and memory use of the IEDs.
In addition, the method yields interpretable decisions that can be easily validated by protection engineers.
\item Communication:
The strategy relies on a communication infrastructure between the MPMC and relays.
Commands to switch the relay setting group can be issued from the MPMC by means of the IEC 61850 protocol.
The method minimizes the communication bandwidth and frequency, only relying on infrequent updates due to changes in a grid topology.
\end{itemize}
\subsubsection{Applicability to Different Protection Relays and Fault Types}
The proposed method specifies the communication requirements and optimizes relay settings.
While we demonstrated our method on inverse-time protection relays, the method can be extended to other protection devices, such as distance relays, to adapt their operation to changing grid conditions.
We emphasize that the main contribution of the paper is the data-driven component: applying GDA to the adaptive protection of power systems.
We demonstrated the strategy for 3-phase faults, but it is applicable to other fault types:
in the same way that relay settings were computed for 3-phase faults in this paper, data can be generated and relay settings computed for each fault type.
These settings are stored in the relay IED memory.
During a fault, the relay uses its current and voltage measurements to distinguish the fault type, then applies the settings for that fault type and relay mode to compute an operation time and isolate the faulted phases.
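To make this relay-side logic concrete, the following is a minimal sketch under stated assumptions: the settings table, fault-type keys, and numeric values are illustrative placeholders, and the operation time uses the standard IEC inverse-time characteristic $t = \mathrm{TDS} \cdot A / ((I/I_{\text{pickup}})^p - 1)$.
\begin{verbatim}
# Sketch (Python) of the relay-side lookup described above. The settings
# table and its values are illustrative placeholders; the curve is the
# standard IEC inverse-time characteristic.
settings = {
    ("3ph", "islanded"): {"TDS": 0.5, "I_pickup": 400.0, "A": 0.14, "p": 0.02},
    ("3ph", "grid"):     {"TDS": 0.3, "I_pickup": 800.0, "A": 0.14, "p": 0.02},
}

def operation_time(i_fault, fault_type, mode):
    s = settings[(fault_type, mode)]
    m = i_fault / s["I_pickup"]
    if m <= 1.0:
        return float("inf")  # below pick-up current: do not trip
    return s["TDS"] * s["A"] / (m ** s["p"] - 1.0)
\end{verbatim}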
\section{Conclusion} \label{sec:conclusion}
In recent years, there has been a growing interest in utilizing machine learning for adaptive protection in power systems, leveraging the increasing amount of power system data. However, safety-critical applications such as power system protection require methods that are transparent, interpretable, and easily validated. Deep learning methods, although highly accurate, are difficult to validate -- presenting a significant risk to their utilization for power system protection.
To address this issue, we propose a novel adaptive protection strategy for inverse-time relays based on probabilistic machine learning, specifically Gaussian discriminant analysis. We apply the strategy to distill power system data to inverse-time relay settings, generating operation curves that can be used to verify and validate protection decisions.
We demonstrate the use of our strategy for fault detection and localization, relay coordination and optimization, and protection management by specifying minimal communication requirements for adapting relays.
To demonstrate the effectiveness of our method, we apply it to the protection of the CIGRE medium voltage grid as it switches between islanded and grid-connected modes of operation, as well as between radial and mesh topologies, in the presence of distributed generation.
\section{Analysis} \label{sec:analysis}
We use \textsc{Open\taskname}{} to analyze the limitations of our metrics, and we discuss more limitations and future work in Appendix \ref{app:limit}.
\noindent\textbf{Hypotheses about the corpora might not be appropriate predicates on individual samples.}
When comparing highly rated definitions from \url{UrbanDictionary.com} to others, our system generates the hypothesis that the former ``\textit{is more likely to include slang or colloquial terms.}'' This is a statement about a collection of text samples, but the validator requires the hypothesis $h$ to be a predicate on individual text samples $x$.
To address this problem, we use GPT-3 to automatically detect and remove comparatives from the hypotheses, e.g., rewriting the hypothesis above to ``\textit{include slang or colloquial terms}.''
However, some versions of this problem were harder to remove.
For example, when comparing reviews from American Airlines (AA) flights and Delta Airlines to understand which aspects of each airline are doing better/worse, the proposer generated the hypothesis ``\textit{mentions \textbf{American Airlines'} staff being unfriendly and unhelpful}''.
Interpreted literally, this hypothesis can only be true on the corpus of AA reviews, since it presupposes the review to be about AA. The correct predicate for use on individual samples should instead be ``\textit{mentions staff being unfriendly and unhelpful}'' (without the words ``\textit{American Airlines}'').
Therefore, future systems should explicitly convert corpus-level statements to their corresponding correct predicates, and the metrics should evaluate whether the validity of the predicates implies the corpus-level statements.
\noindent\textbf{Our metrics do not evaluate diversity.}
There are often multiple valid and meaningful discoveries, and our system should ideally generate all of them.
For example, when comparing low-rating and high-rating reviews to understand what stands out to customers, both ``\textit{mentions the hidden fees and poor customer service at the airport}'' and ``\textit{mentions the airline charging extra for carry-on items}'' could be valid discoveries.
However, our current system sometimes repeats a discovery using similar paraphrases, e.g., ``\textit{mentions the rude and unprofessional attitude of the staff}'' and ``\textit{mentions the staff being rude and unhelpful}''.
Future evaluation metrics can take diversity into account.
\noindent\textbf{Interpreting discoveries requires domain experts.}
We used Turkers' judgment when computing $T(h, x)$ to judge the validity of a discovery.
However, many discoveries require expert knowledge to interpret properly.
For example, it requires medical training to reliably judge whether a self-reported drug-use experience satisfies ``\textit{mentions psychedelics, such as LSD and shrooms.}''
\noindent\textbf{Correlation $\neq$ causation.}
Our metrics currently do not evaluate whether the discovery is causally related to how the corpus pair was generated.
For example, when comparing self-reported happy moments from females and males, even if the former corpus has more samples that ``\textit{mention children and family}'', it does not necessarily imply that family plays a more important role in inter-personal relations for females; an alternative hypothesis is that females might mention people in general more often than males, hence leading to the observation that they mention family more often.
Spurious correlations could also sneak into our validity evaluation: for example, if the Turkers implicitly associate female activities as family-related \cite{greenwald1995implicit}, we might falsely make this discovery due to evaluator biases.
Future metrics should also consider plausible alternative hypotheses to evaluate causality and control the potential biases from the human evaluators.
We should also treat the discovery from \textsc{D5}{} with caution to prevent automating and amplifying societal biases.
\section{Computing Turker Judgement $T(h, x)$} \label{app:turker}
\noindent\textbf{Scoring.} To estimate $T(h, x)$ with Turkers' ratings, where $h$ is a truth predicate of a text sample $x$, a Turker needs to read $h$ and $x$ and then choose among six options: ``Certainly Yes'', ``Likely Yes'', ``Neutral'', ``Likely No'', ``Certainly No'', and ``Confusing/Cannot be decided.''
For each $(h, x)$ pair, we collect responses from three Turkers.
To compute the average across them, we collect a list of scores using the following rule:
each ``Certainly Yes'' receives a score of 1.00, ``Likely Yes'' 0.75, ``Neutral'' 0.50, ``Likely No'' 0.25, and ``Certainly No'' 0.00, while ``Confusing/Cannot be decided.'' receives two scores of 0.50.
We then take the average over all the scores we collected from the Turkers for one $h$ and $x$.
``Confusing/Cannot be decided.'' receives two scores of 0.50 because we want such a response to drag the average rating towards neutral, with a larger effect than a single ``Neutral'' response.
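The rule above amounts to the following minimal sketch (the function name is ours):
\begin{verbatim}
# Sketch (Python) of the averaging rule above: each response maps to a
# score, and "Confusing/Cannot be decided." contributes two 0.50 scores.
SCORE = {"Certainly Yes": 1.00, "Likely Yes": 0.75, "Neutral": 0.50,
         "Likely No": 0.25, "Certainly No": 0.00}

def turker_estimate(responses):
    scores = []
    for r in responses:
        if r == "Confusing/Cannot be decided.":
            scores.extend([0.50, 0.50])
        else:
            scores.append(SCORE[r])
    return sum(scores) / len(scores)

# e.g., three responses ["Certainly Yes", "Likely Yes",
# "Confusing/Cannot be decided."] average to 2.75 / 4 = 0.6875
\end{verbatim}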
\noindent\textbf{Payment.}
We adjust the payment for each HIT task based on the number of words they need to read.
We pay them approximately \$0.001 per word; using the conservative estimate that adults read about 200 words per minute, this amounts to around \$12 per hour.
We spent in total around \$5K on this HIT task.
\noindent\textbf{Qualification.}
We only recruited Turkers who are located in the U.S. Additionally, we designed a qualification test with 8 questions; the questions are easy to answer for anyone who has read our instructions below, and we only accepted Turkers who made mistakes on at most one question.
\noindent\textbf{Annotation Instruction.}
We show our annotation instruction below.
We only show examples of choosing ``Certainly Yes'', ``Certainly No'', and ``Confusing'' to encourage the Turkers not to choose neutral ratings.
Additionally, we explicitly tried to address the halo effect -- where the text does not satisfy a predicate $h$ but satisfies a predicate $h'$ that is highly correlated with $h$.
For example, the text sample $x=$ ``\textit{Really love the flight!!}'' does not satisfy the predicate $h=$ ``\textit{mentions that the breakfast is good on the plane}'', even though it satisfies the highly correlated predicate $h'=$ ``\textit{likes the flight.}''
\subsection{Instructions}
Below are the same instructions we showed you during the qualification. Thanks for visiting this page and refreshing your memory about the instructions!
\textbf{Instruction}: In this task, you will check whether a TEXT satisfies a PROPERTY.
\textbf{Example 1}\\
\textbf{Property}: mentions a natural scene.\\
\textbf{Text}: I love the way the sun sets in the evening.
\begin{itemize}
\item A) Certainly Yes.
\item B) Likely Yes.
\item C) Neutral.
\item D) Likely No.
\item E) Certainly No.
\item F) Confusing/Cannot be decided.
\end{itemize}
\noindent\textbf{Answer.} A. A sunset is nature-related; if you feel a bit ambivalent, B is also acceptable.
\textbf{Example 2}\\
\textbf{Property}: writes in a 1st person perspective.\\
\textbf{Text}: Makima is cute.
\begin{itemize}
\item A) Certainly Yes.
\item B) Likely Yes.
\item C) Neutral.
\item D) Likely No.
\item E) Certainly No.
\item F) Confusing/Cannot be decided.
\end{itemize}
\noindent\textbf{Answer.} E. This text is undoubtedly written in the 3rd person perspective, so the answer is E.
\textbf{Example 3}\\
\textbf{Property}: is better than group B.\\
\textbf{Text}: I also need to buy a chair.
\begin{itemize}
\item A) Certainly Yes.
\item B) Likely Yes.
\item C) Neutral.
\item D) Likely No.
\item E) Certainly No.
\item F) Confusing/Cannot be decided.
\end{itemize}
\noindent\textbf{Answer.} F. It is unclear what the hypothesis means (e.g., what does group B refer to?), and it does not seem related to the text. So the answer is F.
\textbf{Example 4}\\
\textbf{Property}: mentions that the breakfast is good on the airline.\\
\textbf{Text}: The airline staff was really nice! Enjoyable flight.
\begin{itemize}
\item A) Certainly Yes.
\item B) Likely Yes.
\item C) Neutral.
\item D) Likely No.
\item E) Certainly No.
\item F) Confusing/Cannot be decided.
\end{itemize}
\noindent\textbf{Answer.} E. Although the text appreciates the flight experience, it DOES NOT mention the breakfast. So the answer is E.
\textbf{Example 5}\\
\textbf{Property}: appreciates the writing style of the author.\\
\textbf{Text}: The paper absolutely sucks because its underlying logic is wrong. However, the presentation of the paper is clear and the use of language is really impressive.
\begin{itemize}
\item A) Certainly Yes.
\item B) Likely Yes.
\item C) Neutral.
\item D) Likely No.
\item E) Certainly No.
\item F) Confusing/Cannot be decided.
\end{itemize}
\noindent\textbf{Answer.} A. Although the text dislikes the paper, it DOES like the writing style. So the answer is A.
\pagebreak
\section{Correlation Between Meaningfulness Metrics} \label{app:meaningful}
\begin{table*}[]
\centering
\begin{tabular}{lrrr|lrrr}
\toprule
with-goal & rel & sig & nov & no-goal & rel & sig & nov \\
\midrule
rel & 1.00 & 0.68 & 0.45 & rel & 1.00 & 0.85 & 0.71 \\
sig & 0.68 & 1.00 & 0.56 & sig & 0.85 & 1.00 & 0.80 \\
nov & 0.45 & 0.56 & 1.00 & nov & 0.71 & 0.80 & 1.00 \\
\bottomrule
\end{tabular}
\caption{For each hypothesis we take the average rating. Then we calculate the pairwise Spearman rank correlation between relevance (rel), significance (sig), and novelty (nov) on the subset of hypotheses proposed with the goal (\textbf{left}) and without the goal (\textbf{right}) in the prompt.}
\label{tab:meaningfulness-corr}
\end{table*}
We report the pair-wise correlation between the three meaningfulness metrics in Table \ref{tab:meaningfulness-corr}.
We find a substantial positive correlation between these metrics both when we include or exclude the goal from the prompt.
We also observe that significance and relevance have the highest correlation, while novelty and relevance have the least.
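For clarity, correlations of this form can be computed with a few lines of Python; the rating lists below are illustrative placeholders, not our data.
\begin{verbatim}
# Sketch (Python): pairwise Spearman rank correlation between the three
# meaningfulness metrics. The rating lists are illustrative placeholders.
from scipy.stats import spearmanr

rel = [0.9, 0.4, 0.7, 0.2]  # per-hypothesis average relevance ratings
sig = [0.8, 0.5, 0.6, 0.1]  # per-hypothesis average significance ratings
nov = [0.3, 0.2, 0.9, 0.4]  # per-hypothesis average novelty ratings

for name, (a, b) in [("rel-sig", (rel, sig)),
                     ("rel-nov", (rel, nov)),
                     ("sig-nov", (sig, nov))]:
    rho, _ = spearmanr(a, b)
    print(name, round(rho, 2))
\end{verbatim}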
\begin{table}[]
\centering
\begin{tabular}{lrrr}
\toprule
{} & aut3\_rel & aut3\_sig & aut3\_nov \\
\midrule
aut1\_rel & 0.60 & 0.49 & 0.43 \\
aut1\_sig & 0.52 & 0.50 & 0.48 \\
aut1\_nov & 0.16 & 0.21 & 0.37 \\
\bottomrule
\end{tabular}
\caption{The correlation between different metrics between author 1 and author 3.}
\label{tab:aut1-aut3}
\end{table}
\begin{table}[]
\centering
\begin{tabular}{lrrr}
\toprule
{} & aut2\_rel & aut2\_sig & aut2\_nov \\
\midrule
aut1\_rel & 0.61 & 0.51 & 0.45 \\
aut1\_sig & 0.53 & 0.54 & 0.49 \\
aut1\_nov & 0.16 & 0.19 & 0.34 \\
\bottomrule
\end{tabular}
\caption{The correlation between different metrics between author 1 and author 2.}
\label{tab:aut1-aut2}
\end{table}
\begin{table}[]
\centering
\begin{tabular}{lrrr}
\toprule
{} & aut2\_rel & aut2\_sig & aut2\_nov \\
\midrule
aut3\_rel & 0.91 & 0.74 & 0.69 \\
aut3\_sig & 0.68 & 0.88 & 0.67 \\
aut3\_nov & 0.60 & 0.64 & 0.80 \\
\bottomrule
\end{tabular}
\caption{The correlation between different metrics between author 2 and author 3.}
\label{tab:aut2-aut3}
\end{table}
We then investigate the correlation between each metric across different evaluators.
For example, what is the correlation between author\_1's rating of relevance and author\_2's rating of novelty?
If all authors interpret the metrics similarly, we should observe that author\_2's relevance rating is more correlated with author\_1's relevance rating than with author\_1's novelty rating.
We report the pair-wise correlation between the author's rating on individual metrics in Table \ref{tab:aut1-aut2}, Table \ref{tab:aut1-aut3}, and Table \ref{tab:aut2-aut3}.
We find that across all pairs of authors $X$ and $Y$, $X$'s relevance rating is most predictive of $Y$'s relevance rating, compared to $X$'s rating of novelty and significance.
The same property also holds for significance rating.
However, author\_1's relevance rating is more predictive of author\_2's and author\_3's novelty ratings than author\_1's own novelty rating is, which might imply that author\_1 operationalizes the evaluation of novelty differently from the other authors.
We conjecture this is because author\_1 has a different background from author\_2 and author\_3 -- author\_1 is a Ph.D. student who speaks English as a second language, while author\_2 and author\_3 are undergraduates and native speakers.
\section{Full Pipeline of the Proposer} \label{app:proposer}
We present the full details of how we generated the hypotheses with the language model.
The process roughly contains four stages: 1) obtaining representative samples for each corpus, 2) sampling hypotheses from GPT-3, 3) rewriting hypotheses, and 4) optionally plugging in example hypotheses.
\paragraph{Obtaining representative samples.}
This step is the same as \citet{zhong2022describing}, and we borrow the related text from that paper for the reader's convenience.
Since $\mathcal{D}^{\text{res}}_{A}$ and $\mathcal{D}^{\text{res}}_{B}$ might overlap significantly, random samples from $\mathcal{D}^{\text{res}}_{A}$ and $\mathcal{D}^{\text{res}}_{B}$ might not be representative and informative enough for GPT-3 to notice the differences between the two distributions.
Therefore, we choose samples that are representative of their differences.
To find those samples, we fine-tune RoBERTa-Large
\cite{liu2019roberta} to predict whether each sample comes from Corpus A or Corpus B and keep the top-p percentile samples with the highest confidence.
Next, we take samples from the top-$p$ percentile to prompt GPT-3.
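A minimal sketch of this selection step follows, assuming the fine-tuned classifier is available as a helper \texttt{clf\_prob\_A} that returns the probability a sample comes from Corpus A (the fine-tuning itself is elided):
\begin{verbatim}
# Sketch (Python): keep the top-p percentile of samples by classifier
# confidence. `clf_prob_A` is an assumed helper wrapping the fine-tuned
# RoBERTa model; fine-tuning is elided.
import numpy as np

def top_percentile(samples, clf_prob_A, p=5):
    probs = np.array([clf_prob_A(x) for x in samples])
    cutoff = np.percentile(probs, 100 - p)
    return [x for x, pr in zip(samples, probs) if pr >= cutoff]
\end{verbatim}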
\paragraph{Selecting samples to prompt GPT-3.}
We randomly select $S=$25 samples from the top-5 percentile from Corpus A and Corpus B to prompt GPT-3 to propose the hypotheses, using the template shown in Figure \ref{fig:prompts} left.
We require the length of the prompt to be at most 3,200 GPT-3 tokens (the max window size for GPT-3 text-davinci-003 is 4096) and gradually decrease the number of samples $S$ in the prompt until the prompt length is less than 3,200;
additionally, we truncate each text sample to at most 256 GPT-3 tokens.
Finally, to prevent GPT-3 from proposing hypotheses that reflect simple lexical correlations that can be detected with unigram models, e.g., ``\textit{uses the word ``hey'' more often.}'', we incrementally construct the subset of samples for Corpus A and Corpus B such that at any time of the construction, no single word can appear $0.25S$ times more often in one corpus than the other.
We repeat the same process for the top-20 and top-100 percentile until we obtain 60 hypotheses.
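A minimal sketch of this incremental construction follows; whitespace tokenization and paired candidate iteration are simplifying assumptions for illustration.
\begin{verbatim}
# Sketch (Python) of the word-balance constraint: reject a sample if
# adding it would let any single word appear more than 0.25*S times more
# often in one corpus's subset than in the other's. Tokenization by
# whitespace and paired iteration are simplifying assumptions.
from collections import Counter

def build_balanced_subsets(cands_A, cands_B, S=25):
    sub_A, sub_B = [], []
    cnt_A, cnt_B = Counter(), Counter()
    limit = 0.25 * S
    for x_a, x_b in zip(cands_A, cands_B):
        for x, sub, cnt, other in ((x_a, sub_A, cnt_A, cnt_B),
                                   (x_b, sub_B, cnt_B, cnt_A)):
            if len(sub) >= S:
                continue
            words = Counter(x.lower().split())
            if all(cnt[w] + c - other[w] <= limit
                   for w, c in words.items()):
                sub.append(x)
                cnt.update(words)
    return sub_A, sub_B
\end{verbatim}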
\paragraph{Rewriting hypotheses with GPT-3.}
As mentioned in Section \ref{sec:analysis}, the hypotheses generated by GPT-3 are frequently statements about the corpus, while the validator requires the hypothesis to be a predicate on individual text samples.
For example, when comparing definitions that people like from \url{UrbanDictionary.com} to other definitions, the hypothesis that the former ``\textit{is more likely to include slang or colloquial terms.}'' is a statement about a collection of text samples, rather than a predicate on an individual sample.
$T(h, x)$ is undefined in this case, since it does not make sense to check whether a single text sample is more likely to include slang.
Ideally, we want to detect these comparison statements and automatically remove the comparatives, e.g., rewrite it to ``\textit{includes slang or colloquial terms.}''.
To detect and remove the comparatives from the hypotheses, we tag the part of speech for each word in the hypotheses using the NLTK package \citep{bird2009natural} and check whether any tag is \texttt{JJR} or \texttt{RBR}.
If a hypothesis indeed contains these tags, we prompt GPT-3 to rewrite the hypothesis.
We show an example prompt in Figure \ref{fig:rm-cmp}.
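The detection step reduces to a short check; a minimal sketch (the function name is ours):
\begin{verbatim}
# Sketch (Python): flag a hypothesis for rewriting if NLTK tags any word
# as JJR (comparative adjective) or RBR (comparative adverb).
import nltk
# one-time downloads of tokenizer and tagger data:
# nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")

def needs_rewrite(hypothesis: str) -> bool:
    tags = nltk.pos_tag(nltk.word_tokenize(hypothesis))
    return any(tag in ("JJR", "RBR") for _, tag in tags)

# needs_rewrite("is more likely to include slang")    -> True
# needs_rewrite("includes slang or colloquial terms") -> False
\end{verbatim}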
\paragraph{Plugging in example hypotheses (optionally).}
We can also add a few problem-specific example hypotheses to the prompt to elicit more relevant hypotheses, and we do so by adding them to the ``formatting instruction'' part in the prompt used to propose hypotheses Figure \ref{fig:prompts}.
In \textsc{Open\taskname}{}, we provided example hypotheses for each problem to steer our system to generate more meaningful discoveries; we produced the example hypotheses by prompting GPT-3 to generate a few hypotheses and selecting the meaningful ones from them.
For the reported discoveries in Section \ref{sec:application}, we confirmed that they are unambiguously different from our provided hypotheses; otherwise, the system might have produced the discoveries by copying the provided hypotheses.
We did not use the example hypotheses in Section \ref{sec:more-meaningful} to test GPT-3's zero-shot understanding of the goal.
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{remove_comparison.png}
\caption{The prompt to remove comparatives from a hypothesis.}
\label{fig:rm-cmp}
\end{figure}
\section{Collecting Data to Fine-tune the Validator} \label{app:verifier}
Here we provide a high-level description of how the data was collected.
For each problem in \textsc{Open\taskname}{}, we used our proposer to produce a list of hypotheses.
We automatically judged each hypothesis on a subset of samples from the research split using GPT-3 text-davinci-002 \citep{ouyang2022training}, Flan-T5 \citep{chung2022scaling}, and a model trained with RLHF from \citet{bai2022training}.
We created the input distribution for training by combining and equally weighting the following $3 \times 2 = 6$ distributions: for each of GPT-3, Flan-T5, and the RLHF model, the subset of $(h, x)$ pairs for which that model considers Yes (or No) to be the most likely answer.
We then collected averaged Turker ratings for in total 3138 $(h, x)$ pairs and used them to fine-tune Flan-T5 to create the validator \citep{chung2022scaling}.
To test cross problem generalization capability of our \textsc{D5}{} system, whenever we applied our \textsc{D5}{} system to a problem in \textsc{Open\taskname}{} in Section \ref{sec:application}, we used a validator that is NOT fine-tuned on the $(h, x)$ pairs from this problem.
We achieved this by keeping track of which problem each $(h, x)$ pair comes from and split all the $(h,x)$ pairs into three folds based on the problems; whenever we applied our \textsc{D5}{} system to a problem, we used the validator trained on the two folds that do not contain this problem.
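A minimal sketch of this problem-level split follows; the data layout and names are illustrative.
\begin{verbatim}
# Sketch (Python): group (h, x) pairs by source problem, split problems
# into three folds, and train each validator on the two folds that
# exclude the problem it will be applied to. Data layout is illustrative.
from collections import defaultdict

def split_by_problem(pairs, n_folds=3):
    by_problem = defaultdict(list)
    for problem_id, h, x, rating in pairs:
        by_problem[problem_id].append((h, x, rating))
    problems = sorted(by_problem)
    folds = [problems[i::n_folds] for i in range(n_folds)]
    return by_problem, folds

# to validate a problem in fold k, fine-tune on the pairs from the
# problems in the other two folds
\end{verbatim}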
\begin{figure*}[h!]
\centering
\includegraphics[width=\linewidth]{corporadiffpipeline.pdf}
\caption{A sketch of the baseline method. The description can be seen in Section \ref{sec:method} and the actual prompts can be seen in Figure \ref{fig:prompts}. }
\label{fig:baseline}
\end{figure*}
\input{limitation.tex}
\section{Annotation Interface to Collect Human-Generated Hypotheses} \label{app:interface}
(This section describes an interesting research direction we did not have time to fully pursue.)
\noindent\textbf{Task.}
To fine-tune the language model to propose better hypotheses and perform validation more accurately, we also designed an interface to collect human annotations earlier in the project.
In this annotation task, the annotators see five text samples from each of the two corpora; they then write one or many natural language predicate(s) that describe how samples from the two groups are different and choose which text samples satisfy each predicate the annotator has written.
Since it is challenging for humans to identify systematic differences even between groups of five sentences, we made the task easier for them by
\begin{itemize}
\item choosing the representative samples from each corpus to form the two groups of samples, similar to the process in Section \ref{app:proposer}, and
\item highlighting subspans of the text samples that are informative for how the two corpora differ (a minimal sketch of this step follows the list). For example, if Corpus A is sports-related while Corpus B is entertainment-related, we hope to highlight sports-related words like ``basketball''.
To automatically identify the text spans to highlight,
we fine-tuned RoBERTa to classify whether a sample comes from Corpus A or Corpus B, used the SHAP library to calculate how much each text span influences the classifier's decision, and highlighted the text spans based on that influence.
\end{itemize}
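A minimal sketch of the highlighting step, assuming the fine-tuned classifier is wrapped in a Hugging Face pipeline; the model path and threshold are illustrative, and attribution array shapes can vary by model.
\begin{verbatim}
# Sketch (Python): highlight tokens whose SHAP attribution magnitude
# exceeds a threshold. Model path and threshold are illustrative
# assumptions; attribution array shapes can vary by model.
import numpy as np
import shap
from transformers import pipeline

clf = pipeline("text-classification",
               model="path/to/finetuned-roberta",  # assumed checkpoint
               top_k=None)
explainer = shap.Explainer(clf)

def highlight(texts, threshold=0.05):
    sv = explainer(texts)  # per-token attributions
    out = []
    for tokens, values in zip(sv.data, sv.values):
        weights = np.abs(np.asarray(values)).reshape(len(tokens), -1).sum(-1)
        out.append([t for t, w in zip(tokens, weights) if w > threshold])
    return out
\end{verbatim}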
\begin{figure*}[h!]
\centering
\includegraphics[width=\linewidth]{annotator_diagram.pdf}
\caption{A detailed screenshot of our annotation interface.}
\label{fig:survey}
\end{figure*}
A screenshot of the annotation interface can be seen in Figure \ref{fig:survey}.
\noindent\textbf{Preliminary Results.}
We performed initial experiments on text clusters formed on the wikitext-2 dataset \citep{wikitext}.
We asked the authors to write hypotheses for 30-50 samples and then compared the results with GPT-3 generated hypotheses.
We found that human annotators were able to write 2-4 valid hypotheses per pair of text groups, while GPT-3 text-davinci-003 was able to generate 4-6.
Out of the valid generated hypotheses, approximately a third were variations on another valid hypothesis.
Humans were able to write a hypothesis that GPT-3 could not generate in around a third of the cases, while GPT-3 generated a novel hypothesis that the humans had not thought of for nearly every pair of text corpora.
Given that GPT-3 is close to our authors' ability to write hypotheses, we estimated that we would not be able to fine-tune T5 to propose better hypotheses with human annotations, and hence gave up on this research direction.
\input{datasets.tex} \label{app:dataset-descriptions}
\section{Application} \label{sec:application}
Every time we build a better \textsc{D5}{} system in the future, we may use it to automatically generate useful discoveries on an existing aggregation of open problems like \textsc{Open\taskname}{} and send the discoveries to researchers who posed the problems.
In this section, we use our current system to automatically generate discoveries on \textsc{Open\taskname}{}.
\subsection{Automatically Generating Discoveries on \textsc{Open\taskname}{}}
We ran our system on all problems in \textsc{Open\taskname}, obtaining in total 3296 discoveries across 402 problems.
However, we do not have enough budget to validate every finding, since estimating $V$ is expensive (Section \ref{sec:soundness}).
Therefore, from these 3296 discoveries, we manually selected 21 discoveries that 1) the authors think are meaningful enough, 2) are representative of potential use cases, 3) do not require expert knowledge for Turkers to judge, and 4) are likely to achieve a small $p$-value with fewer than 200 samples from $\mathcal{D}^{\text{val}}_{A}$ and $\mathcal{D}^{\text{val}}_{B}$.
We then estimated their validity based on the procedure described in Section \ref{sec:soundness} by using fewer than 200 samples from the validation split and calculated the $p$-values.\footnote{We determined the number of samples s.t. $V'$ can achieve a $p$-value of $0.005$. Estimating $V$ for these discoveries costs $\sim$\$1500.}
Since we are testing multiple discoveries and each of them can be statistically significant merely due to chance, we keep the 15 discoveries whose $V$ is significantly non-zero with a $p$-value below 7\%, a threshold determined by the Benjamini-Hochberg procedure with a false discovery rate of 10\%.
In other words, fewer than 10\% of the discoveries below are false discoveries in expectation.
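For completeness, a minimal sketch of the Benjamini-Hochberg selection we applied (the function name is ours):
\begin{verbatim}
# Sketch (Python) of Benjamini-Hochberg: sort the m p-values, find the
# largest rank k with p_(k) <= (k/m) * FDR, and keep those discoveries.
def benjamini_hochberg(p_values, fdr=0.10):
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * fdr:
            k_max = rank
    return {order[r] for r in range(k_max)}  # indices of kept discoveries
\end{verbatim}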
\subsection{Example Discoveries on \textsc{Open\taskname}{}}
For each example discovery, we also report the estimated $V$ based on Turkers' ratings and the AUCROC score of using the discovery to discriminate samples from $\mathcal{D}^{\text{val}}_{A}$ and $\mathcal{D}^{\text{val}}_{B}$.
All italicized text in quotes in this section is a literal copy of what our system generated.
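The AUCROC numbers below can be computed as in the following sketch, assuming per-sample scores $T(h, x)$ on the validation split are available (the scoring itself is elided):
\begin{verbatim}
# Sketch (Python): AUCROC of a discovery h as a discriminator between the
# two validation corpora, given per-sample scores T(h, x).
from sklearn.metrics import roc_auc_score

def discovery_aucroc(scores_A, scores_B):
    y_true = [1] * len(scores_A) + [0] * len(scores_B)
    y_score = list(scores_A) + list(scores_B)
    return roc_auc_score(y_true, y_score)
\end{verbatim}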
\noindent\textbf{Comparing lyrics from different eras.}
Compared to lyrics from the 70s, those from the 80s more often ``\textit{references violence or aggression}'' ($V \approx 0.06$, AUCROC $\approx$ 0.58).
\noindent\textbf{Analyzing gender differences in self-reported happy moments.}
Compared to self-reported happy moments written by males, those by females ``\textit{mentions children or family}'' more often ($V \approx 0.08$, AUCROC $\approx$ 0.56).
\noindent\textbf{Analyzing errors in NLP systems.}
We considered the task of perspectrum classification \citep{CKYCR19}, which has the following instruction: ``given a perspective and a claim, classify whether the given perspective supports or undermines the claim. If the perspective could possibly convince someone with different view, it is supporting, otherwise it is undermining.''
We considered two few-shot learning systems: GPT-3 Instruct Curie \citep{ouyang2022training} and Tk-Instruct-11B \citep{wang2022super}.
We focused on the perspectives where the ground truth label is undermining, and compared the following two corpora: Corpus A -- the set of perspectives where Curie correctly classifies the input as undermining but Tk-11B is wrong, and Corpus B -- the set where Tk-11B is correct while Curie is wrong.
We found that Corpus B more often ``\textit{Uses language that is positive or uplifting}'' ($V \approx 0.12$, AUCROC $\approx$0.67).
One possible explanation is that Curie made many mistakes by misinterpreting undermining as a label for negative sentiment rather than a logical relation between the claim and the perspective.
For another example, we compared two natural language inference models, one trained on MNLI and the other trained on SNLI.
Then we tested them on MNLI and compared two corpora: Corpus A -- the set of inputs where the model trained with in-distribution data (MNLI) is wrong but the model trained with out-of-distribution data is correct, and Corpus B -- the reverse.
We found that the latter more often ``\textit{has an informal tone, such as slang or colloquial speech}'' ($V \approx 0.08$, AUC-ROC $\approx$0.62).
One possible explanation is that MNLI contains more different genres and hence more informal speeches, causing the former model to perform better on these examples.
\noindent\textbf{Understanding political stances and stereotypes in speeches.}
When comparing presidential speeches on immigrants from Obama to those from Trump, the former ``\textit{argues for a path forward to promote the fair and just treatment of immigrants}'' ($V \approx 0.16$, AUCROC $\approx$ 0.73), while the latter more frequently ``\textit{refers to illegal immigrants as criminals}'' ($V \approx 0.09$, AUCROC $\approx$ 0.62).
\noindent\textbf{Analyzing airline customer reviews.}
We compared the concerns in reviews of the airline Air Canada vs.~its subsidiary, Air Canada Rogue, which is considered a low-price wing of Air Canada.
The latter more often ``\textit{mentions lack of legroom}'' ($V \approx 0.16$, AUCROC $\approx$ 0.68).
\noindent\textbf{Identifying temporal differences in news headlines.}
We compared headlines published by ABC news across different years.
Compared to the year 2014, headlines from the year 2010 ``\textit{mention disasters and crimes, such as plane accidents and assaults}'' more often ($V \approx 0.03$, AUCROC $\approx$ 0.53). Compared to the year 2019, headlines from 2020 more often ``\textit{discusses coronavirus-related topics}'' ($V \approx 0.21$, AUCROC $\approx$ 0.65).
\noindent\textbf{Describing distribution shift.}
We compared the premises from the SNLI dataset and MNLI dataset, and the former ``\textit{involves physical activity, such as walking, playing, climbing, or biking}'' ($V \approx 0.13$, AUC-ROC $\approx$0.64).
One possible explanation is that SNLI is based on image captions.
\noindent\textbf{Comparing discussion topics between bots and human users.}
We compared the topical differences between tweets identified as written by bots vs. human users on Twitter, and our system finds that the bots more often ``\textit{contains keywords related to business, finance or trading}'' ($V \approx 0.08$, AUC-ROC $\approx$ 0.61).
One possible explanation is that bots are frequently used to generate finance-related scams.
\noindent\textbf{Describing text clusters.}
We present two example descriptions for text clusters.
One from Wikipedia: ``\textit{references pop culture, such as movies, books, and television shows}.'' ($V \approx 0.21$, AUC-ROC $\approx$ 0.73);
one from PoetryFoundation.com: ``\textit{uses vivid imagery and metaphors to convey a feeling}'' ($V \approx 0.09$, AUC-ROC $\approx$0.65).
We hope future works can collect more open problems, allowing \textsc{D5}{} systems to produce more impactful discoveries.
\section{\textsc{Open\taskname}} \label{sec:data}
We contribute a new meta-dataset, \textsc{Open\taskname}{}, which contains 675 open-ended \textsc{D5}{} problems.
We describe how the problems are represented, how we collected them, and their open-ended nature.
\begin{figure*}[h!]
\centering
\includegraphics[width=0.87\linewidth]{corporadifffig2.pdf}
\caption{
\textsc{Open\taskname}{} contains 675 problems, and we show some examples here by row. Appendix \ref{app:dataset-descriptions} includes the citations.
}
\label{fig:dataset-overview}
\end{figure*}
\noindent\textbf{Representation.} Each problem in \textsc{Open\taskname}{} is represented by 1) a corpus pair, Corpus A and Corpus B, with on average 17K samples, and 2) a description of the research goal.
In this task, the input is a research goal and a corpus pair, while the outputs are valid and meaningful discoveries expressed as natural language predicates (Figure \ref{fig:task-description}).
For example, Corpus A/B can be self-reported reactions after taking drug A/B, and the research goal is to understand the side effects of drug A;
one discovery can be that Corpus A has more samples that ``\textit{mentions feelings of paranoia}''.
We use 50\% of each corpus as the ``research'' split and 50\% as the ``validation'' split.
The system can only access the research split, while the validation split is reserved for the evaluators to validate the discovery.
A validation split prevents overfitting the discoveries to the given corpora and is analogous to the train-test split in machine learning.
\noindent\textbf{Collection Process.}
We collected 675 problems in total, ranging across business, social sciences, humanities, health, and machine learning; see Figure \ref{fig:dataset-overview} for a few examples.
To build \textsc{Open\taskname}, two of the authors performed an extensive literature review on problems that could potentially benefit from our system, e.g., reading survey papers \citep{nguyen2020we} and courses\footnote{\url{http://www1.cs.columbia.edu/~smara/teaching/S18/}} on computational social sciences, and skimming through the ACL proceedings from the past decade\footnote{\url{https://aclanthology.org}} and datasets from Kaggle\footnote{\url{https://www.kaggle.com}} that have an NLP tag; we then brainstormed the research goals, scraped/generated the corresponding corpora, and post-processed them over nine months, resulting in 675 problems.
Appendix \ref{app:dataset-descriptions} includes the complete list of citations for the datasets we aggregated.
\noindent\textbf{Open-Endedness.}
Since we hope to build systems that can tackle challenging research problems, we did not avoid cases where we do not know the ground truth answer.
On the contrary, we favored problems which we do not have an answer for.
This means that, for some problems, it might be infeasible to produce any meaningful discovery.
This is different from standard benchmarking practices, where humans can provide a reference solution to evaluate an AI system.
However, even though we do not know the ground truth, once a system produces a discovery, we can still evaluate it.
We present our evaluation metrics in the next section.
\section{Datasets} \label{sec:datasets}
\textbf{abc-headlines}. We collect headlines published by ABC news, an American news company from \citet{abc-headlines}.
ABC headlines are directly downloaded from Harvard Dataverse. The year is extracted from the publication date field. Samples are constructed from the headline text.
The data is downloadable from \url{https://doi.org/10.7910/DVN/SYBGZL} with license CC0 1.0.
\textbf{ad-transcripts}. We collect ad scripts from a variety of industries from \citet{ad-transcripts}. Ad transcripts are directly downloaded from Kaggle. The top eight industries by frequency are selected. Newlines are replaced with spaces.
The dataset is downloadable from \url{https://www.kaggle.com/datasets/kevinhartman0/advertisement-transcripts-from-various-industries} with license CC0 Public Domain.
\textbf{admin-statements}. We collect statements of administration policy from American presidents from \citet{admin-statements}. Administration statements are extracted from a collection hosted on GitHub. Extraneous symbols are removed and samples are split by paragraph.
The dataset is downloadable from \url{https://github.com/unitedstates/statements-of-administration-policy\#statements-of-administration-policy} and origin files have a Creative Commons Attribution 3.0 License.
\textbf{ai2-natural-instruction}. We collect a learning-from-instructions dataset released by the Allen Institute for AI from \citet{ai2-natural-instruction}. Natural instruction tasks are directly downloaded without modification. The dataset is released under an Apache-2.0 license.
\textbf{airline-reviews}. We collect reviews of airlines collected from the review website Skytrax. Airline reviews for airlines, airports, and seats are downloaded from a public GitHub repository. Names of aircraft, airlines, countries, and traveler types are standardized. Ratings of 1, 4, or 5 on a scale of 5, and 1, 5, 8, or 10 on a scale of 10 are kept. This dataset can be downloaded via \url{https://github.com/quankiquanki/skytrax-reviews-dataset}.
\textbf{aita}. We collect posts on the ``Am I The Asshole'' Subreddit, an online forum people ask others whether they were in the wrong from \citet{aita}. Posts from r/AmITheAsshole are downloaded from a praw scrape of Reddit. Topic areas are chosen based on common themes in posts and coarsely defined based on manual keywords. Each post can belong to multiple topic areas.
The dataset can be downloaded at \url{https://doi.org/10.5281/zenodo.3677563}.
\textbf{all-the-news}. We collect news articles collected from various outlets between 2015 and 2017 from \citet{all-the-news}. News articles are downloaded directly from the Components website. The titles are used as text samples. The dataset can be downloaded at \url{https://components.one/datasets/all-the-news-articles-dataset}.
\textbf{amazon-reviews}. We collect Amazon reviews collected from various product categories from \citet{amazon-reviews}. Amazon reviews are downloaded from a 2018 crawl of the website. The first 100,000 review texts are treated as the text sample.
The dataset can be downloaded at \url{https://nijianmo.github.io/amazon/index.html}.
\textbf{armenian-jobs}. We collect job postings in Armenia from \citet{armenian-jobs}. The Armenian job postings dataset is downloaded from a snapshot on GitHub. Different IT jobs are manually coded and time intervals are defined in order to balance sample availability.
The dataset can be downloaded at \url{https://www.kaggle.com/datasets/udacity/armenian-online-job-postings}.
\textbf{boolq}. We collect a reading comprehension dataset of yes/no questions from \citet{boolq}. Boolean questions are downloaded directly as is.
The dataset can be downloaded at \url{https://github.com/google-research-datasets/boolean-questions} with license CC-SA-3.0.
\textbf{clickbait-headlines}. We collect headlines across time from the Examiner, a clickbait news site from \citet{clickbait-headlines}. The Examiner headlines are directly downloaded from Kaggle. The year is extracted from the publication date field. Samples are constructed from the headline text.
The dataset can be downloaded at \url{https://www.kaggle.com/datasets/therohk/examine-the-examiner}, with license CC0: public domain.
\textbf{convincing-arguments}. We collect arguments on a variety of topics annotated for convincingness from \citet{convincing-arguments}. Annotated arguments are downloaded from the GitHub repository. Arguments are sorted by rank. The bottom 400 are treated as ``unconvincing'', the top 200 are treated as ``convincing'', and the next 200 are treated as ``somewhat convincing.''
The dataset can be downloaded at \url{https://github.com/UKPLab/acl2016-convincing-arguments}, with license CC-BY 4.0.
\textbf{craigslist-negotiations}. We collect dialogue from Craigslist negotiations, an online seller platform from \citet{craigslist-negotiations}. Craigslist negotiations are downloaded from Huggingface. Sequences which contained a ``quit'' intention or ``reject'' intention are categorized as failures; those which contained an ``accept'' intention are categorized as successes. The mid-price is defined as the mean price of the items sold. Within each category, the items are sorted by mid-price. The top half is treated as high-price and the bottom half is treated as low-price.
This dataset can be downloaded at \url{https://huggingface.co/datasets/Hellisotherpeople/DebateSum} with MIT license.
\textbf{debate}. We collect evidence compiled for American competitive policy debate, published online by debate camps from \citet{debate}. The train split is downloaded from Huggingface. For each sample, we use the abstract as the text. Arguments are categorized by type, debate camp of origin, and topic/specific argument. For topics, we use domain knowledge to list relevant keywords for each topic and include any sample with a file name that includes any keyword. A single sample can belong to multiple topics.
This dataset can be downloaded at \url{https://huggingface.co/datasets/Hellisotherpeople/DebateSum} with MIT license.
\textbf{dice-jobs}. We collect American technology job postings on dice.com from \citet{dice-jobs}. Job postings are downloaded from Kaggle. Posts from the six most popular companies are categorized by company. We remove miscellaneous characters and blank descriptions. We additionally apply our splitting procedure to reduce description length.
This dataset can be downloaded at \url{https://www.kaggle.com/datasets/PromptCloudHQ/us-technology-jobs-on-dicecom} under CC BY-SA 4.0.
\textbf{diplomacy-deception}. We collect dialogue from games of Diplomacy, which involves deception from \citet{diplomacy-deception}. Diplomacy dialogues are downloaded from GitHub (all splits). The data are ASCII encoded and newlines are removed. Each message and label is treated as a sample.
This dataset can be downloaded at \url{https://huggingface.co/datasets/diplomacy_detection} under unknown license.
\textbf{echr-decisions}. We collect facts of cases heard before the European Court of Human Rights from \citet{echr-decisions}. Decisions are downloaded from a public archive. A random sample of 500 decisions is selected from the files. The samples with any violated articles are categorized as ``violation,'' while the rest are categorized as ``no violation.''
This dataset can be downloaded at \url{https://paperswithcode.com/dataset/echr} under unknown license.
\textbf{essay-scoring}. We collect essays from students from \citet{essay-scoring}. Essays are downloaded from a GitHub repository. Only essays from set 5 are considered. Essays with a score of at least 3 are categorized as good essays, while essays with a score less than 3 are bad essays.
This dataset can be downloaded at \url{https://www.kaggle.com/c/asap-aes} under unknown license.
\textbf{fake-news}. We collect fake and legitimate news from \citet{fake-news}. Fake news articles are downloaded from the author's website. Full articles are treated as text snippets.
This dataset can be downloaded at \url{http://web.eecs.umich.edu/~mihalcea/downloads.html\#FakeNews} under CC-BY-4.0.
\textbf{fomc-speeches}. We collect Federal Open Market Committee (FOMC) speeches from 1996-2020, which describe Federal Reserve policy from \citet{fomc-speeches}. Fed speeches are downloaded from Kaggle. The macro indicator data are merged in on the year and month. Full speech text is split by paragraph and categorized by speaker, year, and macroeconomic indicator.
This dataset can be downloaded at \url{https://www.kaggle.com/datasets/natanm/federal-reserve-governors-speeches-1996-2020} under unknown license.
\textbf{genius-lyrics}. We collect lyrics collected from Genius.com before 2020 from \citet{genius-lyrics}. Genius lyrics are downloaded from Google Drive. The lyrics are merged with song metadata and treated as samples. We categorize lyrics by hand-selecting popular artists, common genres, time periods, and view counts (over 1M views is high, 500k-1M is medium).
This dataset can be downloaded at \url{https://www.cs.cornell.edu/~arb/data/genius-expertise/} under unknown license.
\textbf{happy-moments}. We collect self-reported happy moments and demographic characteristics from \citet{happy-moments}. The HappyDB dataset is downloaded from the official GitHub repository. Demographic data is cleaned and merged into happy moments. Happy moment descriptions are treated as samples and are categorized by type of happy moment, country of origin, and other demographic features.
This dataset can be downloaded at \url{https://github.com/megagonlabs/HappyDB} under unknown license.
\textbf{huff-post-headlines}. We collect headlines from the news outlet Huffington Post from \citet{huff-post-headlines} and \citet{huff-post-headlines2}. Huffington Post headlines are downloaded from Kaggle. The short description of each article is treated as a sample and tokenized at the sentence level.
This dataset can be downloaded at \url{https://rishabhmisra.github.io/publications/} under CC-BY-4.0.
\textbf{immigration-speeches}. We collect congressional and presidential speeches that mention immigration from 1880 to the present from \citet{immigration-speeches}. Immigration speeches are downloaded from the replication package. The speech text is preprocessed to remove extraneous spaces. We engineer features corresponding to time periods, well-known speakers, other significant time periods, the racial group under discussion, and the geographic area within the United States.
This dataset can be downloaded at \url{https://github.com/dallascard/us-immigration-speeches/releases}.
\textbf{kickstarter}. We collect names of startups on kickstarter.com from \citet{kickstarter}. We download a 2018 crawl from Kickstarter from Kaggle. The project name is treated as the text sample.
This dataset can be downloaded at \url{https://www.kaggle.com/datasets/kemical/kickstarter-projects?select=ks-projects-201612.csv} under CC BY-NC-SA 4.0.
\textbf{microedit-humor}. We collect funny sentences generated by making one-word edits to normal statements from \citet{microedit-humor}. The Microedit dataset is downloaded from the author's website. We make the relevant edit to each text sample and treat the edited text sample as the data point. We bin the mean annotator grade into 4 and denote each as unfunny, neutral, funny, and very funny, respectively.
This dataset can be downloaded at \url{https://paperswithcode.com/dataset/humicroedit}.
\textbf{mnli}. We collect a collection of sentence pairs annotated with textual entailment information from a range of genres from \citet{mnli}. The MNLI corpus is downloaded from the official website. We treat the premise and hypothesis as text samples.
This dataset can be downloaded from \url{https://cims.nyu.edu/~sbowman/multinli/}, most of which are under the OANC license.
\textbf{monster-jobs}. We collect American job postings on monster.com. Jobs on Monster.com are downloaded from Kaggle. Job descriptions are treated as samples and split at the paragraph and sentence level. We keep and categorize jobs from seventeen large cities.
This dataset can be downloaded from \url{https://www.kaggle.com/datasets/PromptCloudHQ/us-jobs-on-monstercom} under CC BY-SA 4.0.
\textbf{movie-tmdb}. We collect movie plot summaries from TMDB from \citet{movie-tmdb}. TMDB movie overviews are downloaded from Kaggle. We keep only English movies and bin popularity by deciles. The top decile is considered ``hits,'' the 70-80th percentiles are considered ``average,'' and the 30-40th percentiles are considered ``bad.''
This dataset can be downloaded from \url{https://www.kaggle.com/datasets/tmdb/tmdb-movie-metadata21}.
\textbf{movie-wiki}. We collect movie plot summaries collected from Wikipedia from \citet{movie-wiki}. Wikipedia movie summaries are downloaded from Kaggle.
This dataset can be downloaded from \url{https://www.kaggle.com/datasets/jrobischon/wikipedia-movie-plots} under CC BY-SA 4.0.
\textbf{news-popularity}. We collect news headlines posted on social media platforms from \citet{news-popularity}. Headlines are downloaded from a reproduction package. The headline and title text are cleaned, and the title is treated as the text sample. The 100 most positive and negative or popular and unpopular articles on each topic are used as distributions.
This dataset can be downloaded from \url{https://archive.ics.uci.edu/ml/datasets/News+Popularity+in+Multiple+Social+Media+Platforms}.
\textbf{nli-benchmarks}. We collect training examples from various natural language inference (NLI) datasets from \citet{nli-benchmarks}. NLI benchmarks are downloaded from a public collection on Google Drive. We examine the premise and hypothesis separately as samples.
This dataset can be downloaded from \url{https://github.com/alisawuffles/wanli}.
\textbf{npt-conferences}. We collect Non-Proliferation of Nuclear Weapons (NPT) conference transcripts from \citet{npt-conferences}. NPT conference notes are extracted from the accompanying replication package. Text is split by paragraph, and only paragraphs longer than 50 characters are preserved. Text is split into three time ranges: pre-2008, 2008-2012, and post-2012.
This dataset can be downloaded from \url{https://journals.sagepub.com/doi/full/10.1177/0022343320960523}.
\textbf{open-deception}. We collect arbitrary lies and truths from any domain generated by crowdworkers from \citet{open-deception}. Open domain lies are downloaded from the public dataset and lie texts are split into lies and truths.
This dataset can be downloaded from \url{https://web.eecs.umich.edu/~mihalcea/downloads.html#OpenDeception}.
\textbf{open-review}. We collect submissions to ICLR, a machine learning conference from 2018 to 2021. Open review abstracts are accessed via the openreview API. We query for abstracts from the 2018-2021 ICLR blind submissions. Abstracts are classified based on rating: $>=7$ (``great''), 5-6 (``good''), and $<=4$ (``bad'').
This dataset can be downloaded from \url{https://openreview.net/}.
\textbf{parenting-subreddits}. We collect posts from various parenting-related subreddits, which are text-based forums on the site Reddit from \citet{parenting-subreddits}. Posts from various subreddits are downloaded from the paper's GitHub repository. We clean the text and split the posts according to the topic(s) each post is tagged with.
This dataset can be downloaded from \url{https://github.com/SALT-NLP/Parenting_OnlineUsage}.
\textbf{poetry}. We collect poems from PoetryFoundation.com from \citet{poetry}. Poems are downloaded from a 2019 scrape of the PoetryFoundation website from Kaggle. The text is cleaned and split according to subject tags and authorship.
This dataset can be downloaded from \url{https://www.kaggle.com/datasets/tgdivy/poetry-foundation-poems} under the GNU Affero General Public License.
\textbf{political-ads}. We collect political ads observed by Facebook users from \citet{political-ads}. Ads are downloaded from the Ad Observer website, which maintains an aggregate of all collected ads. We extract targeting metadata from the targeting field and define splits according to age, gender, location, interests, time, and political lean.
This dataset can be downloaded from \url{https://adobserver.org/ad-database/}.
\textbf{qqp}. We collect questions from Quora.com from \citet{qqp}.
\textbf{rate-my-prof}. We collect reviews of lecturers from RateMyProfessor.com from \citet{rate-my-prof}. We download a sample of RateMyProfessor.com reviews from an online repo. We clean the text and guess the gender of the reviewed lecturer from the first name using the gender-guesser package. Due to data availability, we consider only male and female names. To improve the quality of the classification, we remove any posts which use pronouns from the opposing sex (e.g. ``him'').
This dataset can be downloaded from \url{https://data.mendeley.com/datasets/fvtfjyvw7d/2} under CC BY 4.0 .
\textbf{radiology-diagnosis}. We collect impressions and medical histories of radiology patients from \citet{radiology-diagnosis}. Radiology diagnoses are downloaded from a GitHub copy of the original task dataset. We parse the metadata to retrieve the diagnostic code, decision type, impression, and patient history. Referencing the associated ICD codes, we convert codes to colloquial diagnoses (e.g. 786.2 denotes cough). We treat the histories and impressions as samples and split them according to diagnosis and level of consensus.
\textbf{reddit-humor}. We collect jokes posted on the Reddit forum r/Jokes, a message board for sharing jokes from \citet{reddit-humor}. Jokes are downloaded from the dev and test splits of the dataset. We clean the text and split the dataset according to whether they are labeled as funny.
This dataset can be downloaded from \url{https://github.com/orionw/rJokesData} under Reddit License and Terms of Service, and users must follow the Reddit User Agreement and Privacy Policy, as well as remove any posts if asked to by the original user.
\textbf{reddit-stress}. We collect stress-related posts on Reddit from \citet{reddit-stress}. We split the post text based on which subreddit they are posted on (related to PTSD, anxiety, or stress generally).
Reddit posts are downloaded from \url{https://github.com/gillian850413/Insight_Stress_Analysis}, and we recommend following the Reddit User Agreement and Privacy Policy, as well as remove any posts if asked to by the original user.
\textbf{reuters-authorship}. We collect articles from various Reuters authors from \citet{reuters-authorship}. The articles are split according to the author. Reuters articles are downloaded from the UCI repository \url{https://archive.ics.uci.edu/ml/datasets/Reuter_50_50}.
\textbf{riddles}. We generated several riddles. The 3000 most common English words are manually copied from a website. Words with between 5 and 8 characters are kept. We create two popular riddles. First, we split words based on whether they have a duplicate character. We exclude any words with multiple ``doubles'' or more than 2 of any character. Second, we split words based on whether they have the letter T.
\textbf{scotus-cases}. We collect facts from cases heard by the Supreme Court of the United States (SCOTUS) from \citet{scotus-cases}. Supreme Court cases are downloaded from a GitHub repository. We identify state/federal parties by manually defining keywords. We split based on the winning party, the identity of each party, and the type of decision. We then define several time periods and relevant political eras and split decisions accordingly. Finally, we split according to the ruling's policy area and how it changes over time.
The dataset can be downloaded from \url{https://paperswithcode.com/paper/justice-a-benchmark-dataset-for-supreme-court} under CC-BY-SA.
\textbf{short-answer-scoring}. We collect short answers from students from \citet{short-answer-scoring}. Short answers are downloaded from a GitHub mirror of the dataset. We consider only responses to essay set 1. The two scores are averaged and binned into good ($>=$ 2.5), medium (1.5-2.5), and bad ($<$1.5).
The dataset can be downloaded from \url{https://www.kaggle.com/c/asap-sas}.
\textbf{snli}. We collect a collection of sentence pairs annotated with textual entailment information, derived from image captions, from \citet{snli}.
The dataset can be downloaded from \url{https://nlp.stanford.edu/projects/snli/} under CC BY-SA 4.0.
\textbf{squad-v2}. We collect reading comprehension questions crowdsourced from Wikipedia articles from \citet{squad-v2}.
The dataset can be downloaded from \url{https://rajpurkar.github.io/SQuAD-explorer/} under CC BY-SA 4.0.
\textbf{stock-news}. We collect top news headlines on Reddit, an online message board from \citet{stock-news}. Headlines are downloaded from a GitHub mirror. We clean the text and divide the samples based on whether the DOW rose or fell that day.
The dataset can be downloaded from \url{https://github.com/ShravanChintha/Stock-Market-prediction-using-daily-news-headlines} under Reddit License and Terms of Service, and users must follow the Reddit User Agreement and Privacy Policy, as well as remove any posts if asked to by the original user.
\textbf{suicide-notes}. We collect posts from r/SuicideWatch and r/depression, two forums on Reddit, from \citet{suicide-notes}. The post title and body are combined to form the text samples. Samples are split based on whether they were posted in a suicide-related Subreddit.
The dataset can be downloaded from GitHub: \url{https://github.com/hesamuel/goodbye_world},
under Reddit License and Terms of Service, and users must follow the Reddit User Agreement and Privacy Policy, as well as remove any posts if asked to by the original user.
\textbf{times-india-headlines}. We collect headlines from Times of India news from \citet{times-india-headlines}. Headlines are downloaded from a Dataverse mirror. We use the first 1000 headlines in each year as samples.
The dataset can be downloaded from \url{https://www.kaggle.com/datasets/therohk/india-headlines-news-dataset} under CC0 Public Domain.
\textbf{trial-deception}. We collect testimonies from witnesses in real trials from \citet{trial-deception}. Trial testimonies are downloaded from the author's website. The testimonies are divided based on whether they are considered truthful.
The dataset can be downloaded from \url{https://web.eecs.umich.edu/~mihalcea/downloads.html#RealLifeDeception}.
\textbf{un-debates}. We collect speeches from debates at the United Nations from \citet{un-debates}. Debate transcripts are downloaded from the Dataverse reproduction package. Samples are divided based on the country and year of the snippet. First, we isolate samples from Russia, China, and the United States and specify 3 time periods of interest. Next, we divide all samples by the decade. Finally, we create distributions for 19 countries of interest.
The dataset can be downloaded from \url{https://doi.org/10.7910/DVN/0TJX8Y} under CC0 1.0.
\textbf{unhealthy-conversations}. We collect expert-annotated unhealthy conversations from \citet{unhealthy-conversations}. Conversation transcripts are downloaded from the official GitHub repository. For each annotated attribute, we split the dataset based on whether that form of unhealthy conversation is present in the sample.
The dataset can be downloaded from \url{https://github.com/conversationai/unhealthy-conversations} under CC BY-NC-SA 4.0.
\textbf{urban-dictionary}. We collect definitions from UrbanDictionary.com, a crowdsourced English dictionary from \citet{urban-dictionary}. Urban Dictionary entries are downloaded from Kaggle. Definitions are split into groups representing the top 1, 5, and 10 percent of definitions ranked by both upvotes and downvotes; we sample 10,000 from each and create a control distribution by randomly sampling 10,000 definitions from all entries.
The dataset can be downloaded from \url{https://www.kaggle.com/therohk/urban-dictionary-words-dataset} under CC0 Public Domain.
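For the splits above, a minimal sketch follows; the file and column names are assumptions about the Kaggle dump, not verified identifiers:
\begin{verbatim}
# Sketch of the urban-dictionary splits; "up_votes"/"down_votes" and
# the filename are assumptions about the Kaggle dump.
import pandas as pd

df = pd.read_csv("urbandict-word-defs.csv")  # hypothetical filename
splits = {}
for col in ["up_votes", "down_votes"]:
    ranked = df.sort_values(col, ascending=False)
    for pct in [0.01, 0.05, 0.10]:
        top = ranked.head(int(len(ranked) * pct))
        # assumes each slice has at least 10,000 rows
        splits[f"top_{int(pct * 100)}pct_{col}"] = top.sample(10_000)
splits["control"] = df.sample(10_000)  # random control distribution
\end{verbatim}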
\textbf{wikitext}. We collect text snippets from Wikipedia from \citet{wikitext}. The Wikipedia snippets are loaded from HuggingFace. We remove any samples that are empty or start with '=' (which represent headings); samples are tokenized at the sentence level and used for clustering.
The dataset can be downloaded from \url{https://huggingface.co/datasets/wikitext} under CC BY-SA 3.0.
\textbf{yc-startups}. We collect descriptions of companies that were part of the Y Combinator startup incubator from \citet{yc-startups}. YCombinator company descriptions are downloaded from a 2022 scrape on GitHub. Only companies with long descriptions are preserved. Companies are split according to founder characteristics, year, ``top company'' designation, operating status, and location.
The dataset can be downloaded from \url{https://www.kaggle.com/datasets/benhamner/y-combinator-companies}.
\section{Introduction}
The processes of generating discoveries from large corpora are ad hoc and laborious.
For example, to compare the side effects of drug A and drug B, doctors might inspect two large corpora of patients' self-reported reactions after taking each drug;
based on ad hoc insights, they hypothesize that a report from a patient taking drug A more often ``\textit{mentions feelings of paranoia}'', and then validate this hypothesis by laboriously inspecting text samples from the two corpora.
Machine learning (ML) can potentially accelerate these discovery processes.
However, ML requires a unified evaluation metric and input-output space, while the evaluation of discoveries and the data that inform them vary across applications.
We need a homogeneous formulation to cast heterogeneous discovery processes as an ML task, which can be automated, benchmarked, learned, and analyzed.
We formalize one family of these processes as an ML task with unified metrics and input-output space: goal \textbf{d}riven \textbf{d}iscovery of \textbf{d}ifferences between text \textbf{d}istributions via language \textbf{d}escriptions (\textsc{D5}{}).
As shown in Figure \ref{fig:task-description}, the input to the \textsc{D5}{} task is a ``problem'' comprising a description of the research goal (understanding side effects) and a corpus pair (text samples from the distributions of self-reported reactions after taking each drug).
The output is a ``discovery'' represented as a natural language predicate (``\textit{mentions feelings of paranoia}'').
We evaluate a discovery using two categories of criteria (Section~\ref{sec:metrics}):
(1) validity: it should describe a true difference
\citep{zhong2022describing}; and
(2) meaningfulness: it needs to be driven by the research goal and thus relevant, novel, and significant \citep{mcgarry2005survey}.
To study the \textsc{D5}{} task, we curate \textsc{Open\taskname}{}, a meta-dataset that aggregates 675 {open}-ended \textsc{D5}{} problems ranging across business, social sciences, humanities, health, and machine learning (Section \ref{sec:data}, Figure \ref{fig:dataset-overview}), comprising 4.4 million text samples in total across problem corpora.
We collected these 675 problems by surveying papers that focus on text analysis (e.g.~\citet{nguyen2020we}), brainstorming research goals, scraping the corresponding corpora, and post-processing them over nine months.
Since we hope to build systems that can tackle challenging research problems, we included problems for which we do not currently know the answer, hoping \textsc{D5}{} systems can generate meaningful discoveries.
\begin{figure*}[t!]
\centering
\includegraphics[width=\linewidth]{corporadifffig1.pdf}
\caption{Each problem in \textsc{Open\taskname}{} contains 1) a corpus pair, which has $\sim$17K samples on average and is partitioned into two halves called ``research split'' and ``validation split'', and 2) a natural language description of the research goal, which also contains information about how the corpus pair was collected.
A \textsc{D5}{} system takes the goal and the research split as inputs and generates valid and meaningful discoveries in natural language as outputs.
The underlined texts in the research goal vary across problems, while the rest are templates.
}
\label{fig:task-description}
\end{figure*}
To tackle problems in \textsc{Open\taskname}{}, we built a \textsc{D5}{} system (Section \ref{sec:method}).
Our system first proposes hypotheses about how two corpora differ, given the goal and a subset of samples from the corpus pair. It then validates which hypotheses hold statistically more often on one corpus than the other, and finally outputs the valid ones as discoveries.
Our system closely mirrors that of \citet{zhong2022describing}, which also describes differences in text corpora, but which does not consider the research goal for either modeling or evaluation.
We used \textsc{Open\taskname}{} to evaluate our system and found that, compared to the baseline system from \citet{zhong2022describing}, incorporating the goal produces relevant hypotheses 31\% more often (21\% more often for novelty and 28\% for significance).
Besides quantitative evaluation, a repository of open problems like \textsc{Open\taskname}{} allows the following list of operations:
\noindent\textbf{Automate discoveries.}
Every time we build a better \textsc{D5}{} system, we can apply it to a repository of open problems and send the discoveries to researchers who posed them.
We show this paradigm is plausible by using our system to automatically produce useful discoveries on \textsc{Open\taskname}{} (Section \ref{sec:application}), including insights from commercial reviews, temporal and demographic differences in discussion topics, political stances and stereotypes in speeches, differences in lyric styles, and error patterns in NLP systems.
We anticipate future systems to produce more discoveries.
\noindent\textbf{Train better \textsc{D5}{} systems.}
Like other machine learning tasks, we can train a system once we have a dataset.
We describe a self-supervised learning algorithm that uses a repository of problems (without solutions) to train LMs to propose more valid hypotheses (Section \ref{sec:training}).
As a proof-of-concept, we show that it can make LMs better describe the differences between groups of text samples.
\noindent\textbf{Analyze the limitations of our evaluation.}
Using concrete examples from \textsc{Open\taskname}{}, we show that our current evaluation metrics do not encourage diverse findings, do not always produce causal conclusions, and cannot evaluate discoveries involving heavy expert knowledge (Section~\ref{sec:analysis}).
These analyses inform areas for future improvement.
To conclude, by collecting \textsc{Open\taskname}{}, we show that \textsc{D5}{} can be benchmarked, automated, analyzed, and learned, just like any other machine learning task.
Since the authors are not domain experts in most of the open problems we have collected, we hope future research can improve by gathering feedback from domain experts and a more authentic meta-dataset, potentially accelerating discoveries.\footnote{
We share the code at \url{https://github.com/ruiqi-zhong/D5}, and the dataset at \url{https://doi.org/10.5281/zenodo.7683302}. The license information is in Appendix \ref{app:dataset-descriptions}.}
\section{Limitations and Future Work} \label{app:limit}
We still face many challenges in building a broadly useful system.
We describe technical challenges that machine learning researchers can tackle in Appendix \ref{app:engineering-challenges} and organizational challenges that require domain experts in Appendix \ref{app:organizational-challenges}.
\subsection{Engineering Challenges} \label{app:engineering-challenges}
\noindent\textbf{Beyond truth predicates.}
Our work requires the discovery to be a truth predicate that maps a text sample to a truth value.
However, scientific discoveries can be arbitrary natural language expressions; extending to more flexible expressions requires a significant redesign of our system and evaluation framework.
Some more feasible near-term extensions include 1) allowing natural language expressions that map from text samples to real values, e.g., ``how polite the sentence is compared to other samples from the corpora'' or 2) using additional logical forms to combine individual truth predicates; e.g., learn a shallow and interpretable decision tree where each split point is a natural language predicate.
\noindent\textbf{Beyond corpus-level differences.}
Our work focuses on describing corpus-level differences and validates a discovery by comparing how often it is true on each corpus.
Future work can consider other ways to validate a discovery: for example, if each text sample is associated with a continuous target variable, we can validate whether a discovery is more likely to be true when the target variable is large.
\noindent\textbf{Clarifying a discovery.}
Some discoveries seem to have clear meanings on the surface, but they become ambiguous when we judge them on individual text samples.
For example, judging whether a text sample satisfies $h=$ ``\textit{mentions people}'' seems like an unambiguous task a priori; however, it is unclear whether $h$ is true on the sample $x=$ ``\textit{I woke up this morning.}'', since ``\textit{people}'' in $h$ is plural, while $x$ mentions only one person, ``\textit{I}''.
Future work can use a language model to automatically clarify the meaning of a hypothesis and make it more specific, e.g., rewrite $h$ as ``\textit{mentions one or more humans.}''
\noindent\textbf{Correlation $\neq$ causation.}
Like other tools that rely on correlations to analyze patterns in data (e.g., linear regression), our system cannot establish causal relations either.
For example, when comparing self-reported happy moments from females and males, even if the former corpus has more samples that ``\textit{mention children and family}'', it does not necessarily imply family plays a more important role in inter-personal relations for females; an alternative hypothesis is that females might mention any other people more often than males, hence leading to the observation that they mention family more often.
Future work can use language models to propose what control hypothesis to test.
\noindent\textbf{Decreasing the cost of validation.}
As alluded to in Section \ref{sec:soundness}, estimating $V$ is extremely expensive as it requires a lot of human labor.
Future work can consider an importance sampling procedure that uses $\hat{T}$ as a proposer to improve the sample efficiency of estimating $V$.
\noindent\textbf{Training a better proposer.}
We developed a self-supervised learning algorithm to propose more valid hypotheses.
However, it does not take into account the meaningfulness metric, and it is unclear how to manage its trade-offs with validity if they exist.
We look forward to future works that can train a better proposer with as little supervision as possible.
\noindent\textbf{Combining Meaningfulness and Validity Metrics.}
To simplify evaluation, we assumed meaningfulness to be independent of the magnitude of the validity $V$.
Such an assumption allows us to directly evaluate hypotheses that are not necessarily valid, but it is also limiting for evaluating the final discoveries: for example, the discovery that news from 2008 ``\textit{discuss economy}'' more often than news from 2007 would be far more significant if $V=0.99$ than if $V=0.0000001$.
Future works can propose better metrics that do not assume that validity and meaningfulness are independent.
\subsection{Organizational Challenges}\label{app:organizational-challenges}
As discussed in \citet{polanyi2000republic}, deciding what counts as a good research result requires implicit community norms rather than explicit deductive logic;
to produce truly important discoveries, our system therefore needs feedback from researchers who work in the domain of interest.
However, except for machine learning, the authors do not have research expertise in most of the domains listed in Figure \ref{fig:dataset-overview}.
We look forward to future contributions from other domains and list concrete directions below.
\noindent\textbf{What problems to solve?}
We generated the problems in \textsc{Open\taskname}{} by reading relevant papers and guessing what domain experts might care about.
However, our guesses can be inaccurate.
Future works can directly gather problems from domain experts to reflect the actual usage of our system.
\noindent\textbf{How to interpret a discovery?}
We asked Turkers for their judgments to compute $T(h, x)$.
However, many hypotheses require expert knowledge to interpret properly.
For example, only law experts can reliably judge whether a contract $x$ satisfies the predicate $h$ ``\textit{contains a license grant that is irrevocable}.''
Domain experts are needed to evaluate the validity of a discovery and supervise the validator.
\noindent\textbf{What discoveries are meaningful?}
We developed evaluation instructions to approximately evaluate which hypotheses are meaningful.
However, just as no one can become an outstanding peer reviewer simply by reading the review guideline, we do not consider it feasible to provide a gold evaluation simply by reading our instructions.
Whether a discovery is meaningful depends heavily on implicit community norms, and we hope domain experts can provide better evaluation and training signals for our system.
\section{Method} \label{sec:method}
\begin{figure*}[h!]
\centering
\includegraphics[width=0.90\linewidth]{corporadiffprompts.pdf}
\caption{All underlined content in the prompt differs across problems, while the other content in the prompt is templated.
\textbf{Left}: proposer prompt. The generated hypotheses are in blue. All content with colored background is excluded from the figure for brevity. For the baseline of not using the research goal, we removed the ``research goal'' block from the prompt.
\textbf{Right}: the validator prompt.}
\label{fig:prompts}
\end{figure*}
We describe our \textsc{D5}{} system, which maps from a corpus pair and a research goal to a set of natural language predicates.
Our system is inspired by a two-stage model of how humans discover patterns in data: creatively brainstorming hypotheses and then rigorously validating them on the data \citep{ludwig2022algorithmic}.
Analogously, we first 1) propose hypotheses conditioned on the research goal and a subset of samples from the corpus pair (Section \ref{sec:proposer}), and then 2) validate whether each hypothesis is more often true on one corpus than the other, outputting the valid ones as the final discoveries (Section \ref{sec:verifier}); see Appendix Figure \ref{fig:baseline} for a more illustrative overview.
Our system closely mirrors that of \citet{zhong2022describing}, except that we leverage the research goal to propose more meaningful hypotheses.
Using \textsc{Open\taskname}, we quantitatively show in Section \ref{sec:more-meaningful} that GPT-3 text-davinci-003 (abbreviated as GPT-3) can use the research goal to propose more meaningful hypotheses.
As indicated in Section \ref{sec:data}, our system only accesses the research split of each corpus, which we denote as $\mathcal{D}^{\text{res}}_{A}$/$\mathcal{D}^{\text{res}}_{B}$.
\subsection{Hypothesis Proposer} \label{sec:proposer}
We prompt GPT-3 \citep{ouyang2022training} to propose hypotheses.
We construct the prompt by concatenating a few random samples from $\mathcal{D}^{\text{res}}_{A}$ and $\mathcal{D}^{\text{res}}_{B}$, the research goal, and an instruction to output a list of hypotheses.
Different from \citet{zhong2022describing}, we include the research goal to elicit meaningful hypotheses.
We continue sampling hypotheses from GPT-3 until we obtain a set of 60 distinct hypotheses, which we call $H_{\text{init}}$.
See Figure \ref{fig:prompts} left for an example prompt, and Appendix \ref{app:proposer} for additional details.
\begin{table*}[h!]
\centering
\begin{tabular}{lcc|cc|cc}
\hline
{} & with-goal & no-goal & kappa & spearmanr & $p$ of avg & worst $p$ of ind\\
\hline
Relevance & 1.68 & 1.20 & 0.56 & 0.71 & 1 $\times$ 10$^{-10}$ & 1 $\times$ 10$^{-8}$\\
Novelty & 1.24 & 0.97 & 0.37 & 0.50 & 5 $\times$ 10$^{-6\phantom{0}}$ & 4 $\times$ 10$^{-2}$ \\
Significance & 1.56 & 1.05 & 0.46 & 0.64 & 2 $\times$ 10$^{-10}$ & 2 $\times$ 10$^{-7}$\\
\hline
\end{tabular}
\caption{\textbf{Left.} For each metric, we report the average rating on hypotheses generated with or without using the research goal, and find that the former performs better. \textbf{Middle.} The inter-annotator agreement rate averaged across pairs of author evaluators, measured by Kappa and Spearman rank coefficient; we find substantial correlations between evaluators across all these subjective metrics, with relevance $>$ significance $>$ novelty.
\textbf{Right.}
We compute the $p$-values for the null hypothesis that ``with-goal and no-goal result in the same performance''. The $p$ of avg column reports the $p$-values after we average the ratings from all evaluators, while the ``worst $p$ of ind'' column takes the max of all $p$-values based on ratings of individual evaluators.
Overall, the conclusions are statistically significant and they can be robustly reproduced across individual evaluators.
}
\label{tab:quantitative}
\end{table*}
\subsection{Hypothesis Validator} \label{sec:verifier}
Many hypotheses in $H_{\text{init}}$ are invalid: they are not more often true on $\mathcal{D}_{A}$ than on $\mathcal{D}_{B}$ (i.e., $V(h) \leq 0$).
To automatically filter them out, we use a language model $T'$ to simulate the Turkers' judgment $T$ and hence approximate the validity score $V$ of a hypothesis $h$ with $V'(h)$, where
\begin{equation}
V'(h) := \mathbb{E}_{x\sim \mathcal{D}^{\text{res}}_{A}}[T'(h, x)] - \mathbb{E}_{x\sim \mathcal{D}^{\text{res}}_{B}}[T'(h, x)].
\end{equation}
To compute $T'$, we ask FLAN-T5 \citep{chung2022scaling} whether $x$ satisfies $h$ with the prompt shown in Figure \ref{fig:prompts} right.
To better simulate Turker's judgment, we collected additional Turker annotations to fine-tune FLAN-T5 (see Appendix~\ref{app:verifier} for details).
We then perform a t-test to compare the mean value of $T'(h, x)$ on the research split of Corpus $A$ to that on Corpus $B$, rule out the hypotheses with $p$-value greater than 0.001, and output the remaining ones as a set of discoveries.
Finally, we obtain additional discoveries by repeating the same process but asking our system to propose and validate hypotheses about Corpus $B$ rather than Corpus $A$.
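To make this filtering step concrete, below is a minimal Python sketch; \texttt{t\_prime(h, x)} is an assumed wrapper around the fine-tuned validator that returns a score in $[0, 1]$:
\begin{verbatim}
# Sketch of the validation stage; t_prime(h, x) is an assumed wrapper
# around the fine-tuned FLAN-T5 validator returning T'(h, x) in [0, 1].
import numpy as np
from scipy import stats

def validate(hypotheses, corpus_a, corpus_b, t_prime, alpha=0.001):
    discoveries = []
    for h in hypotheses:
        scores_a = np.array([t_prime(h, x) for x in corpus_a])
        scores_b = np.array([t_prime(h, x) for x in corpus_b])
        v_prime = scores_a.mean() - scores_b.mean()
        # one-sided t-test: is h more often true on Corpus A?
        _, p = stats.ttest_ind(scores_a, scores_b, equal_var=False,
                               alternative="greater")
        if p <= alpha:
            discoveries.append((h, v_prime, p))
    return discoveries
\end{verbatim}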
\subsection{Goal Leads to More Meaningful Hypotheses} \label{sec:more-meaningful}
Compared to \citet{zhong2022describing}, we added the research goal to our prompt when generating hypotheses.
Does this improve the quality of the proposed hypotheses?
To investigate this, we sampled 100 problems from \textsc{Open\taskname}{} with distinct research goals and randomly sampled 2 hypotheses from GPT-3 with and without using the research goal (see Figure \ref{fig:prompts}), resulting in 400 hypotheses to evaluate.
Three authors then rated their meaningfulness based on the three metrics defined in Section \ref{sec:metrics}, while being blinded about which hypotheses were generated with the research goal.
The results are shown in Table \ref{tab:quantitative}.
We found that, when prompted with the research goal, GPT-3 on average proposes more relevant, novel, and significant hypotheses;
additionally, it proposes hypotheses with ratings higher than \circled{0} 31\%/21\%/28\% more often in terms of relevance/novelty/significance.
Since this is a subjective evaluation, the Kappa inter-annotator agreement is only moderate, ranging from 0.37 to 0.56.
However, we can still robustly conclude that the model can propose more meaningful hypotheses when conditioned on the goal: we calculate the $p$-values for the null hypothesis that with-goal and no-goal have equal performance, and we find $p$-values to be highly significant and robust across evaluators, for all three submetrics.
We provide additional analyses in Appendix \ref{app:meaningful}.\footnote{
The experiments in this paper were run at different iterations of the data collection process;
since they require intense manual effort and no automatic metric is available, it is expensive to rerun them on our final polished version.
The differences between iterations are mainly due to 1) noticing data sharing constraints due to licenses, 2) increasing diversity by including new problems or removing similar problems, or 3) improving the research goal description.
For reproducibility, we include the set of research goals for each experiment in our GitHub repo.}
\section{Evaluation} \label{sec:metrics}
For the research goal of comparing the side effects of drug A and drug B, how do we evaluate a system-generated discovery that Corpus A ``\textit{mentions feelings of paranoia}'' more often?
First, it needs to be \textbf{valid}, such that indeed more samples from Corpus A satisfy this predicate, which can be evaluated (approximately) objectively.
Second, it needs to be \textbf{meaningful} to the research goal of understanding side effects, which depends on the researcher's subjective judgment.
We define validity and meaningfulness below.
\subsection{Validity} \label{sec:soundness}
Similar to \citet{zhong2022describing}, we require an output discovery $h$ to be a truth predicate on a text sample.
For example, if $h$ = ``\textit{mentions family and children}'', then $h$ is true on the string $x_{1}$ = ``\textit{My daughter loves me.}'' and false on the string $x_{2}$ = ``\textit{I'm going to school}''.
Define $T(h, x) \in [0, 1]$ as ``the certainty that $h$ is true on $x$'', e.g., $T(h, x_{1}) \approx 1$ and $T(h, x_{2}) \approx 0$.
We approximate $T(h, x)$ by asking three Turkers how certain they are and averaging their responses (see Appendix \ref{app:turker} for more details).
Let $\mathcal{D}^{\text{val}}_{A}$ and $\mathcal{D}^{\text{val}}_{B}$ denote the validation set for Corpus A and B, respectively. Then we define the validity $V$ as
\begin{equation} \label{eq:soundness}
V(h) := \mathbb{E}_{x\sim \mathcal{D}^{\text{val}}_{A}}[T(h, x)] - \mathbb{E}_{x\sim \mathcal{D}^{\text{val}}_{B}}[T(h, x)].
\end{equation}
In practice, we do not have the budget to compute $V(h)$ since it requires asking humans to read the entire validation split just to evaluate one single discovery $h$;
therefore, we approximate this quantity by choosing a subset of samples from Corpus $A$ and Corpus $B$ to estimate $V$.
We compute a $p$-value for the null hypothesis that $V \leq 0$ by conducting a t-test to compare the mean of $T(h, x)$ on the subset from Corpus $A$ to that of Corpus $B$.
A discovery should ideally have a large $V$ value and a small $p$-value.
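For concreteness, below is a minimal sketch of this estimate; each row of \texttt{ratings\_a} is assumed to hold the three Turker certainty scores for one sampled text from Corpus $A$:
\begin{verbatim}
# Sketch of estimating V(h) and its p-value from Turker ratings;
# ratings_a[i] holds three certainty scores in [0, 1] for the i-th
# sampled text from Corpus A (likewise ratings_b for Corpus B).
import numpy as np
from scipy import stats

def estimate_validity(ratings_a, ratings_b):
    t_a = np.asarray(ratings_a).mean(axis=1)  # T(h, x) per sample
    t_b = np.asarray(ratings_b).mean(axis=1)
    v_hat = t_a.mean() - t_b.mean()
    # t-test for the null hypothesis V <= 0
    _, p = stats.ttest_ind(t_a, t_b, equal_var=False,
                           alternative="greater")
    return v_hat, p
\end{verbatim}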
\subsection{Meaningfulness} \label{sec:pragmatics}
Not every valid discovery is meaningful.
For example, if the goal is to understand the topical differences between news from 2008 (Corpus A) and news from 2007 (Corpus B), the discovery that Corpus A ``\textit{contains news from 2008}'' is completely valid by definition but meaningless, since it provides only trivial information and is irrelevant to the goal of understanding topical differences.
\citet{mcgarry2005survey} surveyed a list of desirable properties for discoveries, and we condensed them into three submetrics that rate how meaningful a discovery is with respect to the research goal: 1) relevance, 2) novelty, and 3) significance.
We evaluate these independently of validity and assume that the discovery is already valid.
For example, the discovery that ``something can travel faster than light'' is meaningful if true, even though it is highly implausible.
We rate each submetric with \circled{0}, \circled{1}, or \circled{2}, where higher is better.
The evaluation instructions are below.
\noindent\textbf{Relevance.}
How relevant the discovery is to the goal.
For example, suppose we were a student comparing essays rated as convincing vs. not convincing to figure out what writing style is convincing. Then:
\begin{itemize}
\item The discovery ``\textit{write in first person}'' is directly related to the writing style, so we rate it \circled{2}.
\item The discovery ``\textit{use the word `I'}'' is not exactly a writing style, but can still inform the relevant underlying principle of ``\textit{write in first person}'', so we rate it \circled{1}.
\item The discovery ``\textit{argue for abortion}'' does not tell us about the underlying writing style, so we rate it \circled{0}.
\end{itemize}
\noindent\textbf{Novelty.}
The difficulty of generating the discovery, e.g., could we have thought of the discovery in 5 minutes given the goal but without looking at the corpora?
For example, suppose we were an airline manager trying to find improvements to the flight experience, and we were comparing negative reviews vs.~positive reviews. Then:
\begin{itemize}
\item The discovery ``\textit{contain more negative language}'' is almost certain for negative reviews, so we rate it \circled{0}.
\item The discovery ``\textit{complain about the crew members}'' is not entirely novel, but is not tautologically true and hence requires confirmation, so we rate it \circled{1}.
\item The discovery ``\textit{mention a language barrier with the crew members}'' is specific and hard to think of without looking at the data, so we rate it \circled{2}.
\end{itemize}
Note that our evaluation is ``blinded to the samples'': we still consider a discovery novel as long as it is hard to think of before looking at the corpora, even if it might be easy to think of after looking at the corpora.
For example, the physical law that $F=ma$ is easy to observe if we have collected and plotted the data on acceleration, mass, and force; however, it might be difficult to think of before we see any such data, so we consider it novel.
\noindent\textbf{Significance.}
Given the research goal, how beneficial is it to learn the discovery for the first time?
For example, suppose we were an Amazon retailer trying to figure out what customers like and dislike about our product based on negative reviews and positive reviews. Then:
\begin{itemize}
\item The discovery ``\textit{accuses the team of pushing out a bad product}'' is not significant since it cannot direct the retailer to improve the product, so we rate it \circled{0}.
\item The discovery ``\textit{asks for a more durable product}'' gives some hints about how to improve the product, but isn't sufficiently helpful on its own, so we rate it \circled{1}.
\item The discovery ``\textit{says the wrench is missing}'' can lead to concrete actions for improvement, so we rate it \circled{2}.
\end{itemize}
To conclude, an ideal discovery would have a high $V$ value with a small $p$-value and achieve ratings of \circled{2} across all of relevance, novelty, and significance.
The latter three submetrics are inherently subjective; however, the next section shows that we can still use them to compare hypothesis proposers and draw robust conclusions.
\section{Related Work} \label{sec:related}
\noindent\textbf{Inductive Reasoning with NLP Models.}
Recent works show that language models are capable of inductive reasoning under restricted settings, discovering patterns from a set of text data points and describing them with language \citep{honovich2022instruction}.
\citet{zhou2022large} and \citet{ye2022guess} use this capability to improve zero/few-shot accuracy by inferring the most likely instruction using input-output example(s) of the target task.
\citet{zhong2022describing} and \citet{singh2022explaining} use this capability to discover patterns in datasets; we improve on them by building a meta-dataset of open-ended problems and requiring the discoveries to be meaningful.
ML models can also perform inductive reasoning in other modalities, such as vision.
For example, \citet{hernandez2021natural} describes visual features that activate a neuron;
\citet{zhu2022gsclip} describes distribution shifts between the training distribution and the test distribution for images;
and \citet{eyuboglu2022domino} describes errors made by vision models.
We hope future models can perform inductive reasoning in additional modalities, such as sound \citep{aghajanyan2023scaling} or physical senses \citep{thomason2016learning}.
\noindent\textbf{Automated Discovery.}
It is not new to automatically discover patterns by learning from empirical data.
To list a few classical methods, linear regression analyzes the effect of each real-valued feature by interpreting the learned weights \citep{draper1998applied};
n-gram models can extract discriminative phrases, thus yielding insights about corpus-level differences \citep{manning1999foundations};
small decision trees can extract interpretable if-then statements \citep{2015};
and an entity embedding model learned on existing relations between entities can predict unseen relations \citep{socher2013reasoning}.
In comparison, \textsc{D5}{} produces discoveries in the form of natural language predicates, which are flexible and interpretable; additionally, it is more directed at the research goal, while machine learning classifiers like linear regressions will pick up any discriminative features.
\noindent\textbf{Epistemology.} While the process of validating a hypothesis is well-formulated, it is much less well-understood how to automatically generate hypotheses and decide what discoveries are meaningful \citep{shapere1964structure, heckman2017abducting}.
Related works in this area have been sparse, among which \citet{mcgarry2005survey} sketches high-level principles for evaluating knowledge discoveries and \citet{ludwig2022algorithmic} proposes to crowd-source hypotheses from MTurk workers.
We concur with the perspective of \citet{polanyi2000republic} that the meaningfulness of a hypothesis cannot be explicitly verbalized with simple logic but depends on implicit community norms;
therefore, the process of proposing hypotheses should be learned from empirical data (e.g. pre-training) rather than deduced from a priori analysis of concepts \citep{quine1969naturalistic}.
We hope contributions from other domains can provide more empirical data on what discoveries are meaningful, hence guiding our system to produce more important discoveries.
\section{Conclusion}
We formalized the task of \textsc{D5}{}, which discovers corpus-level differences via language descriptions in a goal-driven way.
We defined its evaluation metrics -- validity, relevance, novelty, and significance -- and collected a meta-dataset, \textsc{Open\taskname}{}, to evaluate \textsc{D5}{} systems.
We presented 10 use cases of \textsc{D5}{}, proposed a self-supervised learning algorithm, and analyzed the limitations of the current evaluation metrics.
To conclude, like any other traditional machine learning task, \textsc{D5}{} can be automated, benchmarked, learned, and analyzed.
We hope future research can improve by gathering feedback from domain experts and a more authentic meta-dataset, potentially accelerating future discoveries.
\section*{Acknowledgement}
We thank Xiaochuang Han and Sam Bowman for their early discussions on this project.
We thank Cathy Chen, Erik Jones, Jessy Lin, Alex Pan, Chenglei Si, Xi Ye, and Tianyi Zhang for their helpful feedback on the paper draft.
We thank OpenAI and Anthropic for providing model access.
\section*{Individual Contributions}
Ruiqi Zhong proposed the \textsc{D5}{} task, drew the conceptual connection to naturalistic epistemology, and proposed to treat it as a standardized machine learning task by collecting a dataset; co-designed the evaluation metric; collected most of the machine learning problems in \textsc{Open\taskname}{}; conducted all the experiments; drafted the entire paper.
Peter Zhang collected all the non-machine learning problems and the text clustering problems in \textsc{Open\taskname}{}; co-designed the evaluation metrics, organized the hypotheses evaluation, and contributed directions for future work; left feedback on the paper draft.
Steve Li led the development of the annotation interface described in Appendix \ref{app:interface}; provided feedback on the evaluation metrics and participated in the evaluation; left feedback on the paper draft.
Jinwoo Ahn provided feedback on the annotation interface; provided feedback on the evaluation metrics and participated in the evaluation; left feedback on the paper draft.
Dan Klein left feedback on the title, abstract, and intro.
Jacob Steinhardt provided guidance throughout the project and left detailed feedback on the entire paper.
\section{Self-Supervised Learning} \label{sec:training}
Since the problems in \textsc{Open\taskname}{} are open-ended, there is room for our system to produce discoveries with higher validity scores.
Therefore, we design a self-supervised learning algorithm to improve a language model's ability to propose more valid hypotheses, using the principle that \textbf{it is easier to validate a discovery than to generate one}.
\noindent\textbf{Algorithm.}
Suppose we are given a set of problems for training and an initial language model $m_{\text{init}}$.
Our goal is to automatically generate a set of \textit{prompt}-\textit{completion} pairs to fine-tune $m_{\text{init}}$ so that it can propose hypotheses that are more valid.
To generate a \textit{prompt}, we randomly sample a problem and create a proposer prompt following the procedure in Section \ref{sec:proposer}.
To generate the desired \textit{completion} given a prompt, we sample multiple hypotheses from $m_{\text{init}}$, approximate their $V'$ score on the samples in the proposer prompt with the same language model $m_{\text{init}}$ (Section \ref{sec:verifier}), and select the highest scoring hypothesis.
Finally, we use the prompt-completion pairs to fine-tune $m_{\text{init}}$.
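Below is a minimal sketch of generating one such pair; \texttt{make\_prompt}, \texttt{propose}, and \texttt{score\_v\_prime} are assumed wrappers around $m_{\text{init}}$ and the procedures of Sections \ref{sec:proposer} and \ref{sec:verifier}:
\begin{verbatim}
# Sketch of generating one prompt-completion pair; make_prompt,
# propose, and score_v_prime are assumed wrappers around m_init
# and the proposer/validator procedures.
def make_training_pair(problem, m_init, n_candidates=8):
    prompt = make_prompt(problem)
    candidates = [propose(m_init, prompt)
                  for _ in range(n_candidates)]
    # score each candidate with the same model and keep the best one
    best = max(candidates,
               key=lambda h: score_v_prime(m_init, h, prompt))
    return prompt, best
\end{verbatim}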
However, since we cannot fine-tune instruction-tuned GPT-3, we can only experiment with Flan-T5 \citep{chung2022scaling}, an open-source instruction-tuned model that might only work well for easier ``mini-problems''.
Therefore, as a proof of concept, we test our algorithms for describing groups of four samples, where each group comes from a text cluster.
As an overly simplified example, we give the LM the prompt ``\textit{Group A: 1. dog 2. cat 3. pig 4. cow. Group B: 1. phone 2. laptop 3. desk 4. cup}'' as input, and the LM can output ``\textit{mentions an animal}'' as a hypothesis of how group A differs from group B.
\noindent\textbf{Data.}
We created 33 corpora by merging all corpora in \textsc{Open\taskname}{} with the same domain, and automatically generated 4503 text clusters using RoBERTa embeddings \citep{aharoni-goldberg-2020-unsupervised}.
We focused on clustering because it can automatically generate a large amount of semantically coherent groups of samples.
To create a pair of four samples, we randomly sampled a corpus, sampled two clusters within that corpus, and took four random samples from each cluster.
To test cross-corpus generalization, we reserved 28 of the 33 corpora to create mini-problems for evaluation, using the rest for training.
We used Flan-T5 \citep{chung2022scaling} as $m_{\text{init}}$ and sampled hypotheses with a temperature of 0.8.
For training, we sampled 30,000 mini-problems and selected the best of eight hypotheses generated by $m_{\text{init}}$ as the target completion;
for evaluation, we sampled 200 mini-problems to calculate $V$ with Turkers and 1500 mini-problems to calculate $V'$ automatically.
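A minimal sketch of this mini-problem construction follows; \texttt{clusters} is an assumed mapping from a corpus name to its list of text clusters:
\begin{verbatim}
# Sketch of sampling one mini-problem; `clusters` is an assumed dict
# mapping a corpus name to a list of text clusters (lists of strings).
import random

def sample_mini_problem(clusters, corpora):
    corpus = random.choice(corpora)
    cluster_a, cluster_b = random.sample(clusters[corpus], 2)
    group_a = random.sample(cluster_a, 4)
    group_b = random.sample(cluster_b, 4)
    return group_a, group_b
\end{verbatim}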
\noindent\textbf{Results.}
We evaluated randomly sampled hypotheses from the language model before and after self-supervised training.
The automated ``self-evaluation'' validity score $V'$ improves substantially from 0.22 to 0.37, and the ``true'' validity score $V$ according to Turker evaluation improves from 0.07 to 0.10, with a $p$-value of 0.02.
This result provides preliminary evidence that our algorithm (or similar variants) could be applied to a large set of problems to improve the validity of the hypotheses; we expect future validators to simulate human judgments better, hence decreasing the approximated gap of improvement between $V$ and $V'$. |
{
"arxiv_id": "2302.14136",
"language": "en",
"timestamp": "2023-03-01T02:02:17",
"url": "https://arxiv.org/abs/2302.14136",
"yymm": "2302"
} | \section{Introduction}
Multi-armed bandit (MAB) algorithms are a popular and well-established framework for sequential decision-making \cite{Robbins_MAB, Berry_bandit, nonstat_ex1, sutton_MAB, MAB_overview}. Because of their regret-minimizing properties, MABs are used across various applications such as optimal treatment allocation in clinical trials \cite{MAB_clinical_trials}, recommender systems improvements \cite{rec_MAB}, anomaly detection for networks \cite{anomaly_MAB}, and aiding online decision-making \cite{netflix_MAB}. For example, \citeauthor{MAB_clinical_trials} uses an adaptive bandit algorithm to sequentially assign treatments against skin cancer in an experiment on mice. Here, the authors care both about assigning the best possible treatment and estimating the relative effectiveness of the different treatments.
As the clinical trial example illustrates, real-world applications of MABs increasingly require inference (i.e., determining if one of the arms is significantly better than the alternatives). Unfortunately, inference is challenging because MAB algorithms collect data adaptively, breaking the standard independent and identically distributed ($iid$) assumption often invoked in the statistics literature. To make progress, researchers have focused on specific settings, such as linear bandits where the rewards follow a parametric linear model \citep{linear_ex3, linear_ex2, linear_ex1}, or have made strong untestable assumptions on both the rewards and the action space, such as stationarity and $iid$ rewards \citep{kelly_batched_bandits, hadad, ian_bandit}.
Broadly, there are two approaches for inference in MAB problems. The first is to perform inference at the \textit{end} of a pre-specified time by constructing a single confidence \textit{interval} (CI). The second is to perform inference \textit{during} the study as new data arrives by constructing a confidence \textit{sequence} --- a sequence of confidence intervals that are uniformly valid over time. Because the second approach performs inference on every arm at every time step, it allows us to stop collecting data once we detect that one of the arms is statistically significantly outperforming the others.
In our paper, we propose a generic and flexible framework for performing inference both during and at the end of the study. Our work leverages the design-based approach to causal inference, which has a long history in the statistics community dating back to Fisher and Neyman but has seen a resurgence in popularity, as it permits inference on the obtained sample while handling complicated settings such as interference in an assumption-light manner \cite{fisher:1935, neym:35, rubin_design1, Holland_designbased, rubin:imbens, peng_luke_design, guillaume_design, mypaper}. The design-based framework has several advantages in MAB settings. First, it lets us relax the stationarity and independence assumptions on the rewards because the design-based framework conditions on the (potential) outcomes, allowing our inferential results to hold for any general reward distribution.\footnote{For example, the mean reward of each arm may be time-varying and also dependent on past data.} Second, a central principle in any MAB algorithm is balancing exploration versus exploitation; however, this trade-off is inherently about analyzing how the agent has performed on the \textit{current} sample, \emph{e.g.}, how much the agent ``lost'' by picking arm A over arm B for the finite-sample data. Unlike the more common super-population approach that performs inference on the general population eligible to interact with the MAB algorithm \cite{hadad, ian_bandit}, the design-based framework directly performs inference for relevant finite-sample estimands. Lastly, while relaxing the stationarity and independence assumptions, the design-based framework only requires light and mostly testable assumptions. In particular, the main assumption we require is that the probability of picking an arm at any time is bounded away from zero and one. Since the agent/experimenter is in control of the data collection process, this can not only be verified but also controlled by using the appropriate adaptive scheme.
The main contribution of this paper is that we formally introduce the design-based approach for MAB, allowing us to build valid confidence intervals (Section~\ref{section:CI_results}) and confidence sequences (Section~\ref{section:CS_results}) for the mean reward of each arm and the difference between two arms' mean rewards. We define our setting and design-based estimand and estimators in Section~\ref{section:setting}. Then we extend our results to contextual MAB in Section~\ref{section:CMAB} and end with simulations in Section~\ref{section:sims}.
\section{Setup: Estimands and Estimation}
\label{section:setting}
We now introduce relevant notation, the design-based inference framework, the estimands of interest in the MAB framework, and our proposed estimators.
\subsection{Design-Based Causal Inference}
\label{subsection:DB_intro}
Suppose we observe $T$ samples of $\{W_t, Y_t\}_{t = 1}^T$, where $W_t \in \{0, \dots, K-1\}$ is one of the $K$ possible actions (or treatments) the agent takes at time $t$ and $Y_t$ is the observed reward (or outcome) at time $t = 1,\dots, T$. For readers more familiar with the MAB literature, the notation $(A_t, R_t)$ is often used for the action and reward, respectively. We purposefully use the alternative notation to emphasize the connection with causal inference. In Section~\ref{section:CMAB}, we further assume that at each point in time we also observe some covariates or contexts $X_t$, generalizing our analysis to contextual MAB \cite{CMAB_ref}.
Borrowing from the standard causal inference literature, we assume that for each $t$ there are $K$ potential outcomes $Y_t(0),\dots,Y_t(K-1)$, corresponding to the $K$ possible actions \cite{fisher:1935, neym:35, rubin:imbens,abadie2020sampling}. Note that we implicitly make the no-interference assumption on potential outcomes for simplicity and brevity of notation. To connect the potential outcomes to the observed outcome, we assume that $y_t = Y_t(w_t)$, where lower case $(w_t, y_t)$ denotes our observed treatment samples.
The design-based approach conditions on the full set of potential rewards $Y_t(0),\dots,Y_t(K-1)$, allowing the rewards to be arbitrarily dependent on past rewards and non-stationary. This setting generalizes the common assumptions invoked in MAB, where researchers assume $Y_t(W_t)$ is a function of only the current action and is generated from an $iid$ distribution. The relaxation of the assumptions is possible because the design-based approach shifts the modeling burden away from the unknown outcome to the known action distribution.
Specifically, we assume for each $t$ the action $W_t$ to be adapted based on the historical data,
\begin{equation}
p_{t \mid t-1}(w) := \Pr(W_t = w \mid \mathcal{F}_{t-1}),
\label{eq:adaptive_trt}
\end{equation}
where the filtration $\mathcal{F}_{t -1}$ contains the information set of the past $(W_{1:(t -1)}, Y_{1:(t-1)})$. In Section~\ref{section:CMAB}, we extend Equation~\eqref{eq:adaptive_trt} to allow the action to further depend on context variables. Lastly, we impose that $p_{t \mid t-1}(w)$ is bounded away from zero and one, often known as the probabilistic treatment assignment assumption in causal inference \cite{rubin:imbens}.
\begin{assumption}[Probabilistic Treatment Assignment]
\label{assumption:PTA}
For all times $t = 1, \dots, T$ and possible actions $w \in \{0, \dots, K-1\}$, we have that
$$0 < p_{t \mid t-1}(w) < 1.$$
\end{assumption}
Assumption~\ref{assumption:PTA} is fairly weak and holds as long as the proposed MAB algorithm does not converge to a zero/one probability event for any of the arms in finite time. Furthermore, we remark that many existing works require this regularity condition for inference \cite{hadad, kelly_lucas, mypaper}.
\subsection{Multi-arm Bandit Estimands}
\label{subsection:MAB_intro}
The primary estimand of interest in MAB algorithms is the mean reward of arm $w\in \{0,\dots,K-1\}$. In the typical MAB literature, the rewards of arm $w$ are drawn $iid$, so the mean reward can be characterized by $\mu_t(w) = \mu(w)$, which does not vary over time. As we do not impose any restriction on the reward means, we instead define our estimand as a finite-sample cumulative mean:
\begin{equation}
\label{eq:Q_func}
Q_t(w) := \frac{1}{t} \sum_{j = 1}^t Y_j(w).
\end{equation}
When constructing confidence intervals at the end of a fixed time $T$, we are interested in $Q_T(w)$. However, when constructing confidence sequences, we perform inference at every time $t$; hence our estimand is the time-varying $Q_t(w)$ at every time $t$.
To connect this to the commonly known MAB reward mean distribution, consider the following example where the agent is interested in the mean reward of arm $w = 1$.
\begin{example}
\label{example:Q_func}
Suppose $t = 3$, then Equation~\eqref{eq:Q_func} reduces to
$$Q_3(1) = \frac{1}{3} (Y_1(1) + Y_2(1) + Y_3(1)).$$
\end{example}
Notice that $Q_3(1)$ in Example~\ref{example:Q_func} is the finite-sample analogue of the more typical mean reward $E(Y_t(1))$\footnote{Here the expectation is taken with respect to the random reward.}. Because the design-based framework conditions on the full set of potential outcomes, all corresponding estimands have a finite-sample interpretation for quantities of interest.
In practice, we are more interested in comparing the mean reward between two or more arms,
\begin{equation}
\label{eq:tau}
\tau_t(w, w') := Q_t(w) - Q_t(w').
\end{equation}
For simplicity, we focus on pairwise differences between two arms, but our paper can easily be extended to estimate any linear combinations of the $K$ mean rewards. Our main result builds confidence intervals and sequences for both $Q_t(w)$ and $\tau_t(w, w')$.
\subsection{Estimation}
\label{subsection:estimates}
We now propose estimators for $Q_t(w)$ and, consequently, $\tau_t(w, w')$, which serve as the main building blocks for the theoretical results throughout the paper. We leverage the inverse propensity score estimator and its corresponding (conservative) variance estimator from \cite{timeseries, DB_paper}.
First, we introduce our unbiased estimator $\hat Q_t(w)$ along with an estimate of its variance
\begin{equation}
\label{eq:Q_mean_var}
\begin{split}
\hat Q_t(w) & := \frac{1}{t} \sum_{j = 1}^t \hat\tau_j(w) := \frac{1}{t} \sum_{j = 1}^t \frac{\mathbf{1}\{W_j = w \} Y_j}{p_{j \mid j-1}(w)} \\
S_t(w) & := \sum_{j = 1}^t \hat\sigma_j^2(w) \\
& := \sum_{j = 1}^t \frac{ \mathbf{1}\{W_j = w \} Y_j^2(1 - p_{j \mid j-1}(w))}{p_{j \mid j-1}(w)^2} .
\end{split}
\end{equation}
Similarly, we propose the following estimator of $\tau_t(w, w')$ and an estimate of the upper bound of its variance
\begin{equation}
\label{eq:tau_mean_var}
\begin{split}
\hat\tau_t(w, w') & := \hat Q_t(w) - \hat Q_t(w') = \frac{1}{t} \sum_{j = 1}^t \hat\tau_j(w) - \hat\tau_j(w') \\
S_t(w, w') & := \sum_{j = 1}^t \hat\sigma_j^2(w, w') \\
&:= \sum_{j = 1}^t \frac{ \mathbf{1}\{W_j = w \} Y_j^2}{p_{j \mid j -1}(w)^2} + \frac{\mathbf{1}\{W_j = w' \}Y_j^2}{p_{j \mid j -1}(w')^2}.
\end{split}
\end{equation}
The above estimators are conditionally unbiased for the respective estimands, which we formally state in the following lemma that is proven in Appendix~\ref{appendix:proof_lemma}. Here, and throughout the paper, the expectation is taken with respect to the random action $W_t$.
\begin{lemma}[Unbiased properties of estimators]
\label{lemma:moment_cond}
Under Assumption~\ref{assumption:PTA}, we have that
$$E(\hat Q_t(w) \mid \mathcal{F}_{t-1}) = Q_t(w)$$
$$E(\hat\tau_t(w, w') \mid \mathcal{F}_{t-1}) = \tau_t(w, w')$$
holds for every $t= 1, \dots, T$. Furthermore, we have that
$$\text{Var}(\hat\tau_t(w) \mid \mathcal{F}_{t-1}) = E(\hat \sigma_t^2(w) \mid \mathcal{F}_{t-1})$$
$$\text{Var}(\hat\tau_t(w) - \hat\tau_t(w') \mid \mathcal{F}_{t-1}) \leq E(\hat\sigma_t^2(w, w') \mid \mathcal{F}_{t-1})$$
holds for every $t = 1, \dots, T$.
\end{lemma}
We conclude this section with a few remarks about our framework. First, besides Assumption~\ref{assumption:PTA}, which can be verified and controlled by the agent, we place no restrictions on either the adaptive assignment process or the reward generation process. In particular, the reward distribution may be both non-$iid$ and non-stationary; hence, all our estimands carry the subscript $t$ to show that they may change over time. Despite this general framework, the design-based approach allows us to perform inference since we leverage the randomness in the actions. Second, we can only obtain an upper bound of the variance for $\hat\tau_t(w) - \hat\tau_t(w')$ because the actual variance contains the product $Y_t(w)Y_t(w')$, which is never observed.
Finally, Lemma~\ref{lemma:moment_cond} holds for any $t$ including $t = T$. Consequently, the estimates proposed in this section are relevant for constructing both confidence intervals and confidence sequences by leveraging the martingale convergence theory.
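To make the estimators concrete, below is a minimal Python sketch of Equations~\eqref{eq:Q_mean_var} and~\eqref{eq:tau_mean_var}; the array-based representation of the logged data is our own convention:
\begin{verbatim}
# Sketch of the IPW estimators; w, y are the observed actions and
# rewards, and p_arm[t] = p_{t|t-1}(arm) is the (logged) probability
# the algorithm assigned to `arm` at time t.
import numpy as np

def q_hat(w, y, p_arm, arm):
    picked = (w == arm)
    tau = picked * y / p_arm                    # per-step IPW terms
    s = picked * y**2 * (1 - p_arm) / p_arm**2  # per-step variances
    return tau.mean(), s.sum()                  # (Q-hat_t, S_t(arm))

def tau_hat(w, y, p_a, p_b, arm_a, arm_b):
    q_a, _ = q_hat(w, y, p_a, arm_a)
    q_b, _ = q_hat(w, y, p_b, arm_b)
    s = ((w == arm_a) * y**2 / p_a**2
         + (w == arm_b) * y**2 / p_b**2).sum()  # variance upper bound
    return q_a - q_b, s
\end{verbatim}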
\section{Design-based Confidence Intervals for Multi-arm Bandits}
\label{section:CI_results}
We now demonstrate how to construct asymptotically valid confidence intervals for $Q_t(w)$ and $\tau_t(w, w')$. Before providing our result, we require an additional assumption that restricts any realized reward to be bounded by an arbitrarily large constant.
\begin{assumption}[Bounded realized rewards]
$$|Y_t(W_t)| \leq M$$ for all $t$ and $W_t \in \{0, \dots, K - 1\}$, where $M \in \mathbb{R}$.
\label{assumption:boundedPO}
\end{assumption}
\noindent Note that $M$ can be arbitrarily large, so this assumption is mild. Such assumptions are commonly used to satisfy the necessary regularity conditions in design-based inference \cite{timeseries, peng_bounded}.
For example, even if $T$ potential outcomes were generated from a $N(0,1)$ distribution, which is unbounded, each of the $T$ realized rewards is still bounded.
\subsection{Design-based Confidence Intervals}
\label{subsection:CI}
Given these assumptions, we state the first result, which allows an agent to build confidence \textit{intervals} at the end of the study at time $t = T$ for both the reward mean function $Q_T(w)$ and the reward mean difference between two arms $\tau_T(w, w')$, under arbitrary reward distributions.
\begin{theorem}[Design-based CI for MAB]
\label{theorem:CI}
Suppose data $\{w_t, y_t\}_{t = 1}^T$ are observed for a fixed pre-specified $T$, where Assumptions~\ref{assumption:PTA}--\ref{assumption:boundedPO} are satisfied and $W_t$ adapts based on the past as shown in Equation~\eqref{eq:adaptive_trt}. Then, as $T \rightarrow \infty$,
$$\hat Q_T(w) \pm z_{1 - \alpha/2}\frac{\sqrt{\sum_{j = 1}^T \hat \sigma_j^2(w)}}{T}$$
forms an asymptotically valid $(1 - \alpha)$ confidence interval for $Q_T(w)$, where $z_{a}$ is the $a^\text{th}$-quantile of a standard normal distribution. Furthermore,
$$\hat\tau_T(w, w') \pm z_{1 - \alpha/2}\frac{\sqrt{\sum_{j = 1}^T \hat \sigma_j^2(w, w')}}{T}$$
forms an asymptotically valid $(1 - \alpha)$ confidence interval for $\tau_T(w, w')$.
\end{theorem}
The proof is provided in Appendix~\ref{appendix:proof_theom} and uses a central limit theorem for martingale difference sequences. The widths of the confidence intervals in Theorem~\ref{theorem:CI} decrease at rate approximately $1/\sqrt{T}$, similar to that of a $t$-test. For Theorem~\ref{theorem:CI} to hold, we only require bounded realized rewards and probabilistic treatment assignments. The first is a mild condition, while the second is within the agent's control. Otherwise, we assume nothing about the data-generating process of the reward distribution, and we allow the action $W_t$ to adapt based on the past.
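Putting the pieces together, a minimal sketch of the interval computation (reusing \texttt{q\_hat} from the sketch above):
\begin{verbatim}
# Design-based CI for Q_T(arm), continuing the earlier sketch.
import numpy as np
from scipy import stats

def design_based_ci(w, y, p_arm, arm, alpha=0.05):
    T = len(w)
    q, s_T = q_hat(w, y, p_arm, arm)
    half = stats.norm.ppf(1 - alpha / 2) * np.sqrt(s_T) / T
    return q - half, q + half
\end{verbatim}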
\section{Design-based Confidence Sequences for Multi-arm Bandits}
\label{section:CS_results}
We focus on the sequential nature of MAB and present a strategy to perform inference any time new data arrives by constructing valid confidence sequences. Confidence sequences are sequences of confidence intervals that are uniformly valid over time (often referred to as anytime-valid inference). Formally, a sequence of confidence intervals $\{V_t\}_{t=1}^T$ is a valid confidence sequence with type-1 error $\alpha$ for the target parameter $\mu_t$ if
\begin{equation}
\label{eq:validity}
\Pr(\forall t, \mu_t \in V_t) \geq 1 - \alpha
\end{equation}
holds for an arbitrary data-dependent stopping rule $T$, where $T$ is the final time of the experiment and can be determined in any data-dependent way \cite{howard_nonasymp}. In words, Equation~\eqref{eq:validity} states that our confidence sequence $V_t$ covers the desired, potentially time-varying estimand, \emph{e.g.}, the mean reward of an arm, uniformly over \textit{time} with probability $1 - \alpha$. This formally allows the analyst to stop the MAB as soon as they are satisfied with the inference, \emph{e.g.}, once the mean reward of an arm is statistically higher than a threshold.
In Section~\ref{subsection:exact_CS}, we provide non-asymptotic confidence sequences that have some practical issues. Consequently, we provide an improved asymptotic confidence sequence in Section~\ref{subsection:asymp_CS}.
\subsection{Design-Based Exact Confidence Sequence}
\label{subsection:exact_CS}
We begin by stating the exact closed-form confidence sequence in the following theorem that is proved in Appendix~\ref{appendix:closed_form}.
\begin{theorem}[Design-based Exact CS for MAB]
\label{theorem:nonasymp_CS}
Suppose data $\{w_t, y_t\}_{t = 1}^T$ are observed for an arbitrary data-dependent stopping time
$T$\footnote{With a slight abuse of notation, we also use $T$ (the final data collection time) as a stopping time. Furthermore, because $T$ is data-dependent, we require that it is formally a well-defined stopping time, \emph{i.e.}, a measurable function of the current and previous data (not of the future).}, where Assumptions~\ref{assumption:PTA}--\ref{assumption:boundedPO} hold. Let $m:= M/p_{min}$, where $p_{min} = \min\limits_{t, w} p_{t \mid t-1}(w)$. Then, $\hat Q_t(w) \pm C_t(S_t(w))$ forms a valid $(1-\alpha)$ confidence sequence for $Q_t(w)$, where $C_t(S_t) := $
\begin{equation*}
\begin{scriptsize}
\left[\frac{m(m+1)}{t} \log \Bigg( \frac{2}{\alpha} \Bigg) + \frac{S_t}{t}\left(\frac{m+1}{m}\log\Big(1 + \frac{1}{m} \Big) - \frac{1}{m} \right) \right]
\end{scriptsize}
\end{equation*}
Furthermore, $\hat \tau_t(w, w') \pm C_t(S_t(w, w'))$ forms a valid $(1-\alpha)$ confidence sequence for $\tau_t(w, w')$, where $S_t(w), S_t(w, w')$ are defined in Equation~\eqref{eq:Q_mean_var} and \eqref{eq:tau_mean_var}, respectively.
\end{theorem}
There are two practical limitations to Theorem~\ref{theorem:nonasymp_CS}. First, the confidence sequence scales with $m = M/p_{min}$, and $M$ may be unknown except in special cases such as binary rewards. Furthermore, in the spirit of a sequential test, the agent may desire to change $p_{t \mid t-1}(w)$ as more units enter. However, Theorem~\ref{theorem:nonasymp_CS} requires $p_{min}$ to be determined before the start of the algorithm, thus restricting the action probabilities to a pre-specified range.
Second, the confidence width is of order approximately $O(S_t/t)$, which likely only shrinks asymptotically to zero if the estimated variances grow sub-linearly or there are stronger assumptions on the potential outcomes. We can fix the second issue by leveraging a mixture distribution with the truncated gamma distribution to build another confidence sequence shown in Appendix~\ref{appendix:alternative} with order approximately $O(\sqrt{S_t \log(S_t)} /t)$. The price we pay for the additional performance gain is that the CS no longer has a closed-form solution and requires root-solving algorithms.
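To make the closed form above concrete, the following minimal Python sketch (our own illustration, not code from the paper) evaluates the boundary $C_t(S_t)$ of Theorem~\ref{theorem:nonasymp_CS}; the function name and example values are hypothetical.
\begin{verbatim}
import numpy as np

def exact_cs_halfwidth(t, S_t, M, p_min, alpha=0.05):
    """Closed-form boundary C_t(S_t) of the exact design-based CS.

    t     : number of observations so far
    S_t   : running sum of estimated conditional variances
    M     : bound on the absolute potential outcomes (1 for binary rewards)
    p_min : smallest action probability the agent will ever use
    """
    m = M / p_min
    return (m * (m + 1) / t * np.log(2 / alpha)
            + S_t / t * ((m + 1) / m * np.log(1 + 1 / m) - 1 / m))

# Example: binary rewards (M = 1) with uniform exploration over 4 arms.
print(exact_cs_halfwidth(t=500, S_t=120.0, M=1.0, p_min=0.25))
\end{verbatim}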
\subsection{Design-Based Asymptotic Confidence Sequence}
\label{subsection:asymp_CS}
We now improve the confidence sequence presented in Theorem~\ref{theorem:nonasymp_CS} by providing asymptotic confidence sequences. Informally, asymptotic confidence sequences are valid confidence sequences after a ``sufficiently large'' time. We further show the robustness of our confidence sequence through simulations in Section~\ref{section:sims}.
For completeness, we formally define an asymptotic confidence sequence first introduced in \cite{time_uniform}.
\begin{definition}[Asymptotic Confidence Sequences]
\label{def:asymp_CS}
We say that ($\hat\mu_t \pm V_t$) is a two-sided $(1-\alpha)$ asymptotic confidence sequence for a target parameter $\mu_t$ if there exists a non-asymptotic confidence sequence $(\hat\mu_t \pm \tilde{V}_t)$ for $\mu_t$ such that
\begin{equation}
\label{eq:asymp_CS_req}
\frac{\tilde{V}_t}{V_t} \xrightarrow{a.s.} 1.
\end{equation}
\end{definition}
To give some intuition for the above definition, ``couplings'' have been used in the literature on strong approximations \cite{approx1, approx2} to formally define asymptotic confidence \textit{intervals}. In this literature, an asymptotic confidence interval is defined through a ``coupled'' finite-sample valid confidence interval centered at the same statistic, such that the difference between the non-asymptotic and asymptotic confidence intervals is negligible in the limit. Equation~\eqref{eq:asymp_CS_req} captures the same notion, except with almost-sure convergence to satisfy the required time-\textit{uniform} guarantee.\footnote{This is formally proven in Appendix C.4 of \cite{time_uniform}.}
Before stating the theorem, we require an additional assumption that prevents the variance from vanishing in the limit.
\begin{assumption}[Non-vanishing Variance]
\label{assumption:adaptive_var_no_vanish}
Let $\sigma_t^2(w) := \text{Var}(\hat\tau_t(w) \mid \mathcal{F}_{t-1})$ and $\sigma_t^2(w, w')$ is defined similarly. Then we assume that both
$$\sum_{j = 1}^t \sigma_{j}^2(w) \rightarrow \infty, \quad \sum_{j = 1}^t \sigma_{j}^2(w, w') \rightarrow \infty$$
almost surely.
\end{assumption}
Assumption~\ref{assumption:adaptive_var_no_vanish} holds if $1/t \sum_{j = 1}^t \sigma_j^2(w) \xrightarrow{a.s.} \sigma_{*}^2$ or if $\sigma_1^2(w) = \sigma_2^2(w) = \dots = \sigma_T^2(w)$. Informally, Assumption~\ref{assumption:adaptive_var_no_vanish} is satisfied as long as the potential outcomes do not vanish as time grows.
\begin{theorem}[Design-based Asymptotic CS for MAB]
Suppose data $\{w_t, y_t\}_{t = 1}^T$ are observed for any arbitrary data-dependent stopping time $T$, where Assumptions~\ref{assumption:PTA}--\ref{assumption:adaptive_var_no_vanish} are satisfied and $W_t$ adapts based on the past as shown in Equation~\eqref{eq:adaptive_trt}. Then, $\hat Q_t(w) \pm D_t(S_t(w))$ forms a valid $(1-\alpha)$ asymptotic confidence sequence for $Q_t(w)$, where
$$D_t(S_t) := \frac{1}{t} \sqrt{\frac{S_t \eta^2 + 1}{\eta^2} \log \Bigg( \frac{S_t \eta^2 + 1}{\alpha^2}\Bigg) } $$
and $\eta > 0$ is any pre-specified constant. Furthermore, $\hat\tau_t(w, w') \pm D_t(S_t(w, w'))$ forms a valid $(1-\alpha)$ asymptotic confidence sequence for $\tau_t(w, w')$.
\label{theorem:CS}
\end{theorem}
The proof is also provided in Appendix~\ref{appendix:proof_theom}. The confidence sequences provided in Theorem~\ref{theorem:CS} can also serve as valid confidence \textit{intervals} at any time $t$. The width of the confidence sequences scales similarly to that in Theorem~\ref{theorem:CI}, except there is an extra log term to control the type-1 error uniformly at every time. Lastly, $\eta$ is typically chosen by the analyst to minimize the confidence width at a certain fixed time. Following the advice in \cite{DB_paper}, for all examples and simulations we choose $\eta$ so that the width is minimized at time 10. We do this because the confidence sequence width is largest at early times, and the choice of $\eta$ becomes insignificant as more data arrive. We additionally show a closed-form expression to calculate $\eta$ in Appendix~\ref{appendix:choosing_rho}. In practice, we recommend using the same $\eta$ we do throughout the paper, \emph{i.e.}, $\eta \approx 0.77$.
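As an illustration of this tuning step, the sketch below (ours, with hypothetical names) evaluates $D_t(S_t)$ from Theorem~\ref{theorem:CS} and numerically searches for the $\eta$ that minimizes the width at a target time, assuming for simplicity unit conditional variance per step so that $S_t \approx t$; the closed-form choice in Appendix~\ref{appendix:choosing_rho} replaces this numerical search.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def asymp_cs_halfwidth(t, S_t, eta, alpha=0.05):
    """Boundary D_t(S_t) of the asymptotic design-based CS."""
    return np.sqrt((S_t * eta**2 + 1) / eta**2
                   * np.log((S_t * eta**2 + 1) / alpha**2)) / t

# Choose eta to minimize the width at t = 10 (S_t ~ t is an assumption).
t_star = 10
res = minimize_scalar(lambda eta: asymp_cs_halfwidth(t_star, t_star, eta),
                      bounds=(1e-3, 10.0), method="bounded")
print(res.x, asymp_cs_halfwidth(t_star, t_star, res.x))
\end{verbatim}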
To illustrate Theorems~\ref{theorem:CI} and \ref{theorem:CS}, consider the following basic example.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=12cm]{"arxiv_plot.pdf"}
\caption{Four Arm Bandit (Example~\ref{example:bandit}). The left panel uses Theorem~\ref{theorem:CI} to construct confidence intervals for the four-arm means, where the red dot represents the truth. On the right panel, the red contours show the lower and upper confidence sequence using Theorem~\ref{theorem:CS}. The horizontal red dotted line represents the true mean difference of the rewards, and the black horizontal dotted line represents the zero (null) line. }
\label{fig:bandit_example}
\end{center}
\end{figure}
\begin{example}[Stationary and independent four-arm bandit]
\label{example:bandit}
Consider a four-arm bandit problem, where an agent pulls from arms $0, 1, 2, 3$. Suppose the rewards for arm $w$ are stationary and independently generated from $N(\mu_w, 1)$, where $\mu_0 = \mu_3 = 1, \mu_1 = 2, \mu_2 = 1.5$, so that arm 1 has the highest mean reward.
We use the following adaptive probabilities for the $t^\text{th}$ observation
\begin{equation}
\label{eq:adaptive_prob}
p_{t \mid t-1}(w) = \frac{\bar{Y}_{w, t-1}}{\sum_{w' = 0}^{K-1} \bar{Y}_{w', t-1}}, \quad t > 0.1\,T,
\end{equation}
where $\bar{Y}_{w, t-1}$ is the sample mean of arm $w$ using the samples obtained up to time $t -1$. In other words, the agent up-weights the arms that produce higher mean rewards. Lastly, the agent samples all arms uniformly at random during the first 10\% of the sample (exploration period), where $T = 1000$.
The left panel of Figure~\ref{fig:bandit_example} shows our proposed confidence intervals cover the true mean reward for all arms even under adaptively sampled data. Furthermore, the right panel shows our confidence sequence tightens to the desired truth and covers the true mean difference at all times. For this case, if the agent desired to find any arm that is better than the default arm 0, then this confidence sequence shows the agent can terminate the experiment as early as $t = 400$ (approximately the first time the red contours are statistically significantly above zero), saving further budget and sample.
\end{example}
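A minimal Python sketch of this example follows (our own illustration); we assume the standard inverse-propensity-weighted form for the running mean $\hat Q_t(w)$, since the estimator's exact definition appears earlier in the paper, and all variable names are ours. The small clipping step is our safeguard to keep the probabilities in Equation~\eqref{eq:adaptive_prob} well defined.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
K, T = 4, 1000
mu = np.array([1.0, 2.0, 1.5, 1.0])        # arm means; arm 1 is best

W, Y = np.zeros(T, dtype=int), np.zeros(T)
probs = np.zeros((T, K))
means, counts = np.zeros(K), np.zeros(K)

for t in range(T):
    if t < 0.1 * T:                        # exploration: uniform over arms
        p = np.full(K, 1.0 / K)
    else:                                  # Eq. (adaptive_prob): up-weight
        p = np.clip(means, 1e-3, None)     # clip keeps probabilities valid
        p = p / p.sum()
    probs[t] = p
    w = rng.choice(K, p=p)
    y = rng.normal(mu[w], 1.0)
    W[t], Y[t] = w, y
    counts[w] += 1
    means[w] += (y - means[w]) / counts[w]

# Inverse-propensity-weighted running means (assumed form of Q_t(w)).
for w in range(K):
    print(w, np.mean((W == w) * Y / probs[np.arange(T), w]))
\end{verbatim}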
\section{Contextual multi-arm bandits}
\label{section:CMAB}
We now extend our results to contextual MAB, where an agent has access to some ``context'' or covariates $X_t$ before deciding the $t^\text{th}$ action. Consequently, at the end of the $t^\text{th}$ round the full data are $\{X_t, W_t, Y_t\}_{t = 1}^T$, where $X_t$ may be multi-dimensional. Although we can ignore $X_t$ and only use $(W_t, Y_t)$ to construct valid confidence intervals and confidence sequences, we can leverage $X_t$ as a variance reduction technique.
To formalize this, we first denote
$$\hat Y_{t \mid t-1} := \hat f_{t}(X_{1:t}, Y_{1:(t-1)}, W_{1:(t-1)}),$$
where $\hat f_{t}$ denotes the prediction for the next observation $Y_t$ using all the past data and the current $X_t$. With a slight abuse of notation, we enlarge the filtration $\mathcal{F}_{t - 1}$ to further contain the information set of $X_{1:t}$. Since our results leverage martingale theory, $\hat Y_{t \mid t-1}$ is equivalent to a constant because we always condition on $\mathcal{F}_{t-1}$.
More formally, our estimand $\tau_t(w, w')$ in Equation~\eqref{eq:tau} can be re-written as $\tau_t(w, w') = $
\begin{equation}
\label{eq:proxy_outcome}
\begin{split}
& Q_t(w) - Q_t(w') \\
&= \frac{1}{t} \sum_{j = 1}^t \{Y_j(w) -\hat Y_{j \mid j-1} \} - \{Y_j(w') -\hat Y_{j \mid j-1} \}.
\end{split}
\end{equation}
Using the above formulation, we can define our new ``residualized'' outcome $\tilde{Y}_t := Y_t - \hat Y_{t \mid t-1}$ and use the same estimators in Section~\ref{subsection:estimates} and results in Theorems~\ref{theorem:CI} and \ref{theorem:CS} except replacing $Y_t$ with $\tilde{Y}_t$.
More formally, we rewrite Equation~\eqref{eq:tau_mean_var} as
\begin{equation}
\label{eq:tau_mean_var_proxy}
\begin{split}
\hat\tau_t(w, w')^X & := \frac{1}{t} \sum_{j = 1}^t \{\hat\tau_j(w)^X - \hat\tau_j(w')^X\} \\
S_t(w, w')^X &:= \sum_{j = 1}^t \hat\gamma_j^2(w, w') ,
\end{split}
\end{equation}
where $\hat\tau_t(w)^X, \hat\gamma_t^2(w, w')$ are defined similarly to $\hat\tau_t(w), \hat\sigma_t^2(w, w')$ except replacing $Y_t$ with $\tilde{Y}_t$.
The result of Lemma~\ref{lemma:moment_cond} directly extends to these new estimators because we condition on the filtration; thus $\hat Y_{t \mid t-1}$ is equivalent to a constant (hence it is crucial that the predictions are constructed based on the past data without using $W_t$). This allows the analyst to formally use $\hat Y_{t \mid t-1}$ to incorporate any machine learning algorithm or prior knowledge to reduce the variance. This reduction is proportional to how small $\{Y_t - \hat Y_{t \mid t-1} \}^2$ is, \emph{i.e.}, how well the analyst can use the prior data to predict the next response.
Furthermore, we can also allow $p_{t \mid t-1}(w)$, defined in Equation~\eqref{eq:adaptive_trt}, to further adapt based on the context variables. Note that we can only use $\hat Y_{t \mid t-1}$ for tackling $\tau_t(w, w')$, since $\hat Y_{t \mid t-1}$ does not cancel in Equation~\eqref{eq:proxy_outcome} when estimating $Q_t(w)$ alone.
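The sketch below (our illustration on a toy outcome model, with hypothetical names) shows the mechanics: the prediction for $Y_t$ is fit on strictly past data, so the residualized outcome $\tilde Y_t$ inherits the martingale structure, and its variance is smaller whenever the context is predictive.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
T, beta = 500, 1.0
X = rng.binomial(1, 0.5, size=T).astype(float)
Y = beta * X + rng.normal(0.0, 1.0, size=T)  # toy outcome with context signal

Y_tilde = np.zeros(T)
for t in range(T):
    if t < 10:
        y_hat = 0.0                    # too little history: predict zero
    else:
        # Least-squares fit of Y on X using past data only, so the
        # prediction is a constant given the filtration F_{t-1}.
        A = np.column_stack([np.ones(t), X[:t]])
        coef, *_ = np.linalg.lstsq(A, Y[:t], rcond=None)
        y_hat = coef[0] + coef[1] * X[t]
    Y_tilde[t] = Y[t] - y_hat

print(Y.var(), Y_tilde[10:].var())     # residualized variance is smaller
\end{verbatim}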
\subsection{Variance Reduction Technique}
We now extend Theorems~\ref{theorem:CI} and \ref{theorem:CS} for building valid confidence intervals and sequences for $\tau_t(w, w')$ using $\hat Y_{t \mid t-1}$ in the contextual MAB setting.
\begin{theorem}[Design-based CI and CS for contextual MAB]
\label{theorem:CI_CMAB}
Suppose data $\{x_t, w_t, y_t\}_{t = 1}^T$ are observed, where Assumptions~\ref{assumption:PTA}--\ref{assumption:boundedPO} are satisfied and $W_t$ adapts based on the past as shown in Equation~\eqref{eq:adaptive_trt}. Further suppose $\hat Y_{t \mid t-1}$ is bounded for all $t$. Then,
$$\hat\tau_T(w, w')^X \pm z_{1 - \alpha/2}\frac{\sqrt{\sum_{j = 1}^T \hat\gamma_j^2(w, w')}}{T}$$
forms an asymptotically valid $(1 - \alpha)$ confidence interval for $\tau_T(w, w')$ for a pre-specified $T$.
Denote $\tilde{\sigma}_t^2(w, w') := \text{Var}(\hat\tau_t(w, w')^X \mid \mathcal{F}_{t-1})$. If we further assume a non-vanishing variance for the new residualized outcome, \emph{i.e.}, $\sum_{j = 1}^t \tilde{\sigma}_j^2(w, w') \rightarrow \infty$ almost surely, then
$$\hat\tau_t(w, w')^X \pm D_t(S_t(w, w')^X)$$
forms a valid $(1-\alpha)$ asymptotic confidence sequence for $\tau_t(w, w')$ for any arbitrary data-dependent stopping rule $T$, where $\eta > 0$ is any pre-specified constant and $D_t(S_t)$ is defined in Theorem~\ref{theorem:CS}.
\end{theorem}
The proof is omitted because under the stated assumptions, the setting is identical to that of Theorems~\ref{theorem:CI} and \ref{theorem:CS} except we replace the response $Y_t$ with the new residualized response $\tilde{Y}_t$. We show through simulations in Section~\ref{subsection:sim_cont} that residualizing the outcome can lead to a substantial reduction in the variance.
\section{Simulations and Related Work}
\label{section:sims}
We now provide simulations with two goals. First, we show the empirical coverage of the proposed methods and compare the advantages and disadvantages of using confidence sequences over confidence intervals in a simple $iid$ binary reward setting. Next, we shift to a more complex setting with non-stationary continuous rewards and a context variable $X_t$. We demonstrate both the validity of and the gain from incorporating $X_t$ into our confidence interval and sequence. Finally, we end with a discussion of our results in the context of existing work.
\subsection{Independent and Identically Distributed Binary Rewards}
\label{subsection:sim_bin}
We begin our simulations in a simplistic setting where we have two arms $w = 0, 1$ with $iid$ binary rewards from $\text{Bern}(\mu_w)$, where $\mu_1 = 0.27, \mu_0 = 0.15$, \emph{i.e.}, an expected increase of 12 percentage points from choosing arm 1. Although this is a simplified setting, it represents typical adaptive A/B tests or MAB with a treatment and control group, \emph{e.g.}, learning whether a new product is better than the standard offering through bandits.
For the simulations in Sections~\ref{subsection:sim_bin}--\ref{subsection:sim_cont}, we build confidence intervals and confidence sequences for $\tau_t(1, 0)$, \emph{i.e.}, the mean reward difference from choosing arm 1 over arm 0, with $\alpha = 0.05$. For simplicity, we use the adaptive sampling procedure outlined in Equation~\eqref{eq:adaptive_prob}, where we use the first 10\% of samples for exploration and adapt based on the sample means of each arm. Additionally, we run 1000 Monte-Carlo experiments and report four statistics. We first report the coverage, \emph{i.e.}, the proportion of times the confidence interval covers $\tau_T(1, 0)$ for a fixed $T$. Then, for confidence sequences, we report the proportion of times our confidence sequence covers $\tau_t(1, 0)$ for all times $t > 10$. Following the advice in \cite{DB_paper}, we check only after an initial 10 samples because our method is asymptotic, and it is practically unlikely for the analyst to terminate the experiment after only 10 samples. Second, we report the average width at a pre-specified time $t = T$ for all methods. Third, we report the average stopping time for only the confidence sequences, where our stopping time is defined as the first time the confidence sequence does not cover zero. Accordingly, for all confidence sequences, we run the simulation for sufficiently large $t > T$. Lastly, we report the statistical power for the confidence interval.
Table~\ref{tab:sim_simple} shows the simulation results under the simple $iid$ setting for $T = 700$ samples. We find that all our proposed methods have the desired coverage. As expected, the width of the confidence sequences is wider than that of the confidence interval, but by less than a factor of two. However, on average, the asymptotic confidence sequence can detect an effect as early as $t = 580$ (approximately 80\% of the total sample size $T = 700$). Although the analyst would reject close to 92\% of the time with the proposed confidence interval by $t = 700$, the confidence sequences are attractive alternatives if the agent wishes to terminate the experiment as soon as a statistically significant effect is detected. Lastly, we find that the asymptotic confidence sequence has improved width and stopping time compared to the non-asymptotic confidence sequence while maintaining proper coverage even at early times, likely due to our conservative variance estimator.
\begin{table}[t]
\begin{center}
\begin{adjustbox}{max width=\textwidth,center}
\begin{tabular}{|l|c|c|c|c|}
\hline
Method & Coverage & Width & Stopping Time & Power \\
\hline
Asymp-CI & 95\% & 0.14 & NA & 92\% \\
\hline
Asymp-CS & 95\% & 0.23 & 580 & NA \\
\hline
Exact-CS & 99\% & 0.25 & 640 & NA\\
\hline
\end{tabular}
\end{adjustbox}
\caption{Binary $iid$ rewards simulation. The first row shows the performance of the asymptotic CI in Theorem~\ref{theorem:CI}, while the second and third rows show the performance of the asymptotic and exact CS using Theorem~\ref{theorem:CS} and Theorem~\ref{theorem:nonasymp_CS}, respectively, under $\alpha = 0.05$.}
\label{tab:sim_simple}
\end{center}
\end{table}
\subsection{Non-stationary Continuous Rewards}
\label{subsection:sim_cont}
We now change our reward distribution to a non-stationary continuous distribution with four arms ($K = 4$). We further add one binary context variable $X_t$ to illustrate Theorem~\ref{theorem:CI_CMAB}. More specifically, our data generating process is the following AR(1) linear model
\begin{equation}
\label{eq:sim_DGP}
\begin{split}
Y_t(0) &= \rho Y_{t-1}(0) + \beta X_t + \epsilon_{t},|\rho| \leq 1, \epsilon_{t} \overset{iid}{\sim} N(\mu_{t, 0}, 1) \\
Y_{t}(w) &= Y_{t}(0) + N(\mu_{t, w}, 1) \\
Y_{0}(0) &= 0, \quad X_t \sim \text{Bern}(0.5),
\end{split}
\end{equation}
where $\rho$ represents how the next potential outcome is lag-1 dependent on its immediate history. Furthermore, $\mu_{t, 1} - \mu_{t, 0}$ is our target parameter, \emph{i.e.}, the mean causal effect of being in arm 1 over arm 0. Although we can make $\mu_{t, w}$ time-varying, we fix $\mu_{t, 0} = 1, \mu_{t, 1} = 1.5, \mu_{t, 2} = 1.25, \mu_{t, 3} = 1$ for all $t$ for simplicity. Our reward distribution is non-stationary and no longer $iid$ due to $\rho$, which we set at $\rho = 0.1$. Finally, we let $\beta = 1$ and use a linear regression of $Y_{1:(t-1)}$ on $X_{1:(t-1)}$ to predict $Y_t$ to demonstrate Theorem~\ref{theorem:CI_CMAB}.
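For concreteness, the following Python snippet (ours; the function name is hypothetical) transcribes the data-generating process in Equation~\eqref{eq:sim_DGP}.
\begin{verbatim}
import numpy as np

def simulate_rewards(T, rho=0.1, beta=1.0, mus=(1.0, 1.5, 1.25, 1.0), seed=0):
    """Potential outcomes from the AR(1) model in Eq. (eq:sim_DGP)."""
    rng = np.random.default_rng(seed)
    K = len(mus)
    X = rng.binomial(1, 0.5, size=T).astype(float)
    Y = np.zeros((T, K))
    prev = 0.0                               # Y_0(0) = 0
    for t in range(T):
        prev = rho * prev + beta * X[t] + rng.normal(mus[0], 1.0)
        Y[t, 0] = prev                       # baseline arm-0 outcome
        for w in range(1, K):
            Y[t, w] = prev + rng.normal(mus[w], 1.0)
    return X, Y

X, Y = simulate_rewards(300)
\end{verbatim}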
\begin{table}[t]
\begin{center}
\begin{adjustbox}{max width=\textwidth,center}
\begin{tabular}{|l|c|c|c|c|}
\hline
Method & Coverage & Width & Stopping Time & Power \\
\hline
CI no X & 95\% & 1.84 & NA & 88\% \\
\hline
CS no X & 99\% & 3.68 & 340 & NA \\
\hline
CI with X & 95\% & 1.02 & NA & 98\%\\
\hline
CS with X & 98\% & 1.92 & 115 & NA \\
\hline
\end{tabular}
\end{adjustbox}
\caption{Continuous non-stationary rewards simulation. The first two rows show the performance of the asymptotic CI and CS in Theorem~\ref{theorem:CI} and Theorem~\ref{theorem:CS}, respectively. The last two rows show the corresponding CI and CS incorporating the context variable $X_t$ through Theorem~\ref{theorem:CI_CMAB}.}
\label{tab:sim_cont}
\end{center}
\end{table}
Table~\ref{tab:sim_cont} shows the simulation results under the non-stationary setting described in Equation~\eqref{eq:sim_DGP} for $T = 300$ samples. As expected, our methods have over-conservative coverage due to the conservative variance estimator. Nevertheless, incorporating $X$ using Theorem~\ref{theorem:CI_CMAB} successfully reduces the stopping time to roughly a third, approximately halves the width of the confidence sequence and interval, and increases the power substantially. Furthermore, we similarly find that the confidence sequence width is larger than that of the confidence interval but has the potential to end the MAB substantially earlier than $T = 300$.
\subsection{Comparison with Related Works}
\label{subsection:related_works}
We now extend the previous simulation to compare with existing work. Our confidence interval result is most closely related to \cite{hadad}, where the aforementioned paper takes a super-population approach to inference, \emph{i.e.}, the potential outcomes $Y_t(W_t)$ are generated $iid$ from a distribution satisfying technical conditions, \emph{e.g.}, bounded moments. Additionally, to the best of our knowledge, \cite{ian_bandit, bandit_CS} are the only existing works that build confidence sequences for MAB. However, the main results presented in those papers assume stationary rewards bounded in $[0, 1]$. Furthermore, they also impose many technical and often untestable assumptions on the rewards, which our work bypasses through the design-based approach.
Therefore, we only compare our method with \cite{hadad}. In particular, we use Theorem 2 in \cite{hadad} with unity weights since our adaptive allocation probabilities do not diverge or oscillate asymptotically. Since \cite{hadad} takes a super-population approach, the corresponding estimand is $E(Y(0))$, where the expectation is taken with respect to the stochastic potential outcomes.
We keep a simulation setting identical to that in Section~\ref{subsection:sim_cont} except that we set $\beta = 0$ to simplify the setting and vary $\rho$ on the $x$-axis to induce dependency and break the $iid$ stationarity assumption. We also focus on estimating $Q_T(0)$, \emph{i.e.}, the mean reward for arm 0. We remark that $E(Y(0)) = \mu_{t, 0}$ marginally, which is the target estimand for \cite{hadad}. Since $\rho$ induces dependency among the potential outcomes, we expect poor type-1 error control in \cite{hadad}, while our proposed confidence interval in Theorem~\ref{theorem:CI} remains valid regardless of the potential outcome distribution. Figure~\ref{fig:comparison} shows that the coverage of the design-based confidence interval (DBCS) remains at the desired 95\% level, while the confidence interval in \cite{hadad} quickly loses validity as $\rho$ grows. Therefore, our work extends existing work on inference in multi-arm bandits to general non-stationary MAB in an assumption-light manner through the design-based approach.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=7cm]{"ICML_comparison.pdf"}
\caption{The simulation study setting remains identical to that in Section~\ref{subsection:sim_cont} except that we vary $\rho$ to break stationarity and set $\beta = 0$. The red line shows the coverage of $Q_T(0)$ using Theorem~\ref{theorem:CI}. The blue line shows the coverage of $E(Y(0))$ using the method proposed in \cite{hadad}. Finally, the black dotted line shows the desired type-1 error at $\alpha = 0.05$.}
\label{fig:comparison}
\end{center}
\end{figure}
|
{
"arxiv_id": "2302.14145",
"language": "en",
"timestamp": "2023-03-01T02:02:30",
"url": "https://arxiv.org/abs/2302.14145",
"yymm": "2302"
} | \section{Introduction}
Rydberg atoms have gained a lot of attention in recent years due to their strong interactions over large distances \cite{gallagher2006rydberg}. This, paired with their long lifetimes in the order of milliseconds, creates a platform to explore quantum many-body physics of strongly interacting spin systems \cite{weimer2010rydberg,schauss2012observation,Bernien2017,Browaeys2020,Surace2020,scholl2021quantum,Leseleuc2019,Semeghini2021} and to implement key elements for quantum information processing \cite{PhysRevLett.85.2208,PhysRevLett.87.037901,gaetan2009observation,urban2009observation,saffman2010quantum}. Moreover,
optically driven Rydberg atoms (see Fig.~\ref{fig:intro}a) can be used to investigate many-body dynamics of spin systems in inherently dissipative environments \cite{PhysRevA.98.022109,PhysRevLett.112.013002,malossi2014full,urvoy2015strongly,letscher2017bistability}, as the laser excitation into high-lying Rydberg states is often accompanied by strong dephasing. Such dissipative dynamics include important phenomena such as an absorbing-state phase transition (see Fig.~\ref{fig:intro}b), one of the simplest classical non-equilibrium phase transitions displaying critical behavior and universality \cite{doi:10.1080/00018730050198152,henkel2008non}.
Absorbing-state phase transitions
are of general interest as they occur in many phenomena outside of physics such as population dynamics, epidemics, or the spreading of information in social media \cite{grassberger1983critical,kuhr2011range,bonachela2012patchiness,xie2021detecting}. Systems with this phase transition generically fall into the universality class of directed percolation (DP) \cite{doi:10.1080/00018730050198152}. The unambiguous experimental observation of DP
universal behavior is, however, challenging and has only been achieved in a few systems in recent years \cite{hinrichsen2000possible,rupp2003critical,takeuchi2007directed,takeuchi2009experimental,lemoult2016directed,kohl2016directed}.
More recently, experimental signatures of such a transition have been reported in optically driven Rydberg gases \cite{gutierrez2017experimental}.
In Rydberg systems,
dissipation can give rise to another important dynamical phenomenon: self-organized criticality (SOC) \cite{bak1988self, bak2013nature}, which is believed to be a cause for the abundance of scale-invariance in nature \cite{sornette1989self,rhodes1996power,malamud1998forest,hesse2014self}. An SOC system dynamically evolves to the critical point of a phase transition by itself due to dissipation, without the need for parameter fine-tuning (see Fig.~\ref{fig:intro}c). Since the dissipation is strongly reduced once the critical point is reached, further evolution into the absorbing phase happens on much longer time scales.
Recent experiments on Rydberg facilitation have shown some evidence of SOC through the use of ionization, or a decay into an auxiliary ``dead'' state, as a loss mechanism (see Fig.~\ref{fig:intro}a) \cite{helmrich2020}.
\begin{figure}[H]
\centering
\includegraphics[width=\columnwidth]{fig1.pdf}
\caption{(a): Level scheme for one atom with transition rates between ground $\ket{g}$, Rydberg $\ket{r}$, and inert $\ket{0}$ states. (b): Steady state Rydberg density depending on total density for ${b=0}$ from Monte-Carlo simulations. (c): Total density over time from Monte-Carlo simulations for ${b > 0}$ showing self-organization of system to critical density $n_\textrm{crit}$, if the initial density of atoms in states $\vert g\rangle$ and $\vert r\rangle$ is larger. (d): Schematic of facilitation shell (white), atoms (grey) in the red area are subject to Rydberg blockade and atoms in the blue area only weakly interact with the Rydberg atoms (red).
}
\label{fig:intro}
\end{figure}
However, the directed percolation transition is known to be susceptible to disorder \cite{PhysRevE.72.036126} and more recent experiments on Rydberg facilitation in a trapped ultra-cold gas of atoms gave indications for an emergent heterogeneity in the system \cite{natcom_Griffiths}. In such a heterogeneous system, the critical point of the absorbing-state phase transition is replaced by an intermediate extended Griffiths phase. Griffiths phases are characterized by generic scale invariance and the lack of universal behavior. This is in contrast to an absorbing state phase transition where scale invariance is only expected at the critical point.
As a result (e.g. in the Rydberg gas), one expects a power law decay in active density over time with continuously varying exponents depending on the driving strength \cite{PhysRevLett.23.17}.
In \cite{natcom_Griffiths}, it was experimentally shown that a Rydberg system in the facilitation regime produces signatures of such a Griffiths phase for short times compared to the lifetime of the Rydberg state. A power-law decay in Rydberg density over time was observed with the decay exponents varying with driving strength and a phenomenological susceptible-infected-susceptible (SIS) network model was put forward to describe the observations. The model included a fitting function for the node weights of the network depending on the excitation rate $\kappa$.
The interpretation is that, in the network model, heterogeneity originates from a velocity-selective excitation mechanism, where only atoms with relative velocities smaller than the Landau-Zener velocity $v_\textrm{LZ}(\kappa)$ can participate in facilitation dynamics. Above this velocity, all further excitations are exponentially suppressed.
In the present paper, we show indications for the
existence of a Griffiths phase in the Rydberg facilitation gas
both by experiments and by Monte-Carlo simulations of the microscopic dynamics.
In the experiment we continuously monitor the number of Rydberg excitations in a trapped ultra-cold gas of $^{87}$Rb atoms. We show that the size distribution of the Rydberg excitation number follows a power-law distribution, i.e. shows scale-free behavior, over an extended parameter regime, which is a key characteristic of a Griffiths phase.
In order to understand and to quantitatively describe the emergence of the Griffiths phase, we theoretically analyze two limiting cases: (i) a frozen gas and (ii) a gas with high temperature. While we recover a direct absorbing-state phase transition in the high-temperature limit with no signs of a velocity induced heterogeneity, we can identify a Griffiths phase in the \emph{frozen gas limit} as a result of the finite paths along which facilitated excitations can spread. We give a quantitative analysis of the factors contributing to the emergence of a Griffiths phase and provide an estimate for the characteristic exponents of the power-law decay of Rydberg activity in this phase.
Additionally, starting from a microscopic quantum model, we will expand upon the semi-classical Langevin approach derived in \cite{helmrich2020}
to include Rydberg blockade and, more importantly, the fragmentation of possible excitation paths at low temperatures. The macroscopic equation, which we are going to derive, accurately reproduces the results of Monte-Carlo simulations in both the limits of a frozen and a high-temperature gas.
The facilitation of Rydberg excitations in a gas of optically driven atoms can be microscopically described by a Lindblad master equation \cite{lindblad} for the density matrix $\hat{\rho}$, which takes the form
\begin{align}
\label{eq:master_equation}
\frac{\text{d}}{\text{d}t} \hat{\rho} = i [\hat{\rho}, \hat{\mathcal{H}}]
+ \sum_l \hat{L}_l \hat{\rho} \hat{L}_l^\dagger
- \frac{1}{2} \{ \hat{L}_l^\dagger \hat{L}_l, \hat{\rho} \}.
\end{align}
Here, the atom-light interaction Hamiltonian $\hat{\mathcal{H}}$ is given by
\begin{align}
\label{eq:hamiltonian}
\hat{\mathcal{H}} &= \sum_i
\Big[
\Omega (\hat{\sigma}^{gr}_i + \hat{\sigma}^{rg}_i)
+
\Big(
\sum_{j \neq i} \frac{c_6}{r_{ij}^6} \hat{\sigma}^{rr}_j - \Delta
\Big)
\hat{\sigma}^{rr}_i
\Big],
\end{align}
where $\hat\sigma_j^{\mu\nu} = \vert \mu\rangle_{jj}\langle \nu\vert$
is the transition operator between states $\vert \nu\rangle$ and $\vert \mu\rangle$ of the $j$th atom. The strength of the laser driving shifted from the ground-Rydberg resonance frequency
by the detuning $\Delta$ is described by the Rabi-frequency $\Omega$, and there is a van der Waals interaction
proportional to $c_6/r_{ij}^6$, with $r_{ij}=\vert \vec r_i-\vec r_j\vert$ being the distance between atoms $i$ and $j$.
Dissipative processes are taken into account by the Lindblad jump operators ${\hat{L}_1^{(i)} = \sqrt{(1-b)\gamma} \hat{\sigma}^{gr}_i}$, ${\hat{L}_2^{(i)} = \sqrt{b\gamma} \hat \sigma_i^{r0}}$ describing spontaneous decay of the Rydberg state into the ground state $\ket{g}$ and the inert state $\ket{0}$, with the branching parameter $b$. Finally, dephasing is described by ${\hat{L}_\perp^{(i)} = \sqrt{\gamma_\perp} \hat{\sigma}^{rr}_i}$.
The strong van der Waals interaction of a Rydberg atom shifts the energy levels of surrounding atoms significantly up to distances of multiple $\mu$m. When the atoms are resonantly coupled to a laser field, this blocks further excitations into Rydberg states for all atoms within a finite distance, a phenomenon known as Rydberg blockade \cite{lukin2001dipole}. On the other hand, if the laser excitation is strongly detuned, the excitation of isolated atoms is suppressed while atoms close to the facilitation distance $r_\text{f} \equiv \sqrt[6]{\frac{c_6}{\Delta}}$ are shifted into resonance (Fig.~\ref{fig:intro}d) and are excited with a greatly increased rate. This process, termed Rydberg facilitation, leads to a cascade of excitations quickly spreading through the system following a single (off-resonant) excitation \cite{PhysRevLett.98.023002, PhysRevLett.104.013001}. It is important to note that Rydberg blockade still occurs in this regime: the excitation of atoms at distances ${r < r_\text{f}}$ is greatly suppressed (red zone in Fig.~\ref{fig:intro}d).
\section{Experimental observation of scale-free behavior in a driven Rydberg gas}
To experimentally confirm signatures of a Griffiths phase, we investigate the excitation density in a trapped gas of $^{87}$Rb atoms. To this end, we prepare a sample containing \num{150e3} atoms at a temperature of \SI{1}{\micro \kelvin} in a crossed optical dipole trap.
The sample has a density on the order of $10^{12}/$\si{\centi \meter \cubed}.
From the $5\mathrm{S}_{1/2}$ ground state, a UV laser at \SI{297}{\nano \meter} continuously couples to the $40\mathrm{P}_{3/2}$ Rydberg state with a detuning of +\SI{40}{\mega \hertz} and a resonant Rabi frequency of $2\pi\times\SI{100}{\kilo \hertz}$.
The temperature of the gas corresponds to a most probable speed $\hat{v} = 0.7 \, r_\text{f} \gamma$ with the facilitation radius $r_\text{f} $ and decay rate $\gamma $.
Atoms in the $40\mathrm{P}_{3/2}$ state are ionized because of multiple intrinsic processes \cite{schlagmuller_ultracold_2016, niederprum_giant_2015}, which we use to continuously monitor the excitation number. To this end
we guide the resulting ions to a detector using a small electric field.
This yields a time-resolved signal proportional to the number of Rydberg excitations in the sample (Fig.~\ref{fig:experiment}a).
\begin{figure}[H]
\centering
\includegraphics[width=\columnwidth]{fig2.pdf}
\caption{(a): Ion signal per \SI{10}{\micro \second} time interval for a single experiment run. The ground state is off-resonantly coupled to the Rydberg state for \SI{100}{\milli \second}. During the measurement, the density continuously decreases because of the intrinsic ionization of Rydberg atoms. In the first few milliseconds, the system is in the active phase displaying continuously high activity. Afterwards, the dynamics is dominated by isolated avalanches. The colored areas indicate the time segments evaluated in (b). (b): Experimentally found distribution of ion counts for different sample densities averaged over \num{1000} experimental runs. We choose exemplary \SI{5}{\milli \second}-long time segments at \SI{15}{\milli \second} (orange), \SI{40}{\milli \second} (green) and \SI{85}{\milli \second} (violet) corresponding to three densities. The distributions show power-law behavior (fitted in red), albeit with distinct exponents (\num{-1.51}, \num{-1.79}, and \num{-2.03} respectively).}
\label{fig:experiment}
\end{figure}
At the beginning of the continuous laser exposure, which lasts \SI{100}{\milli \second}, there are no excitations in the sample. As soon as the first off-resonant excitation is created, activity spreads through the system via facilitation, placing it in the active phase. Due to the continuous atom loss caused by the ionization of excited atoms, the sample density decreases, reducing the effective driving strength. The sample thus approaches the phase transition.
We divide the ion signal in segments of \SI{5}{\milli\second} to account for the temporally varying effective driving.
For each of these segments, we analyze the ion count distribution in \SI{10}{\micro \second} bins and average over \num{1000} experimental runs.
After about \SI{10}{\milli \second} the average activity has dropped more than an order of magnitude compared to its maximum value, while in individual runs, it is dominated by avalanches. Therefore, we assume that at this time the sample is leaving the active phase.
Our measurement data shows persistent power-law behavior in the distribution of avalanche sizes over a wide range of densities (Fig.~\ref{fig:experiment}b). Power laws are a clear signature of scale invariance, which is expected only at the critical point of an absorbing-state phase transition or in a Griffiths phase characterizing a heterogeneous system.
The extracted exponent of the power-law distribution is not fixed but varies with the density, strongly indicating non-universal behavior as expected for a Griffiths phase.
\section{Microscopic Model of Rydberg facilitation}
\subsection{Rate equation limit and Monte-Carlo simulations}
In the limit of large dephasing, the dynamics of a many-body Rydberg gas are effectively governed by classical rate equations \cite{Levi_2016}. We therefore simulate the gas using classical Monte-Carlo simulations of rate equations derived from eq.~\eqref{eq:master_equation} in this limit. After adiabatic elimination of the coherences, eq.~\eqref{eq:master_equation} reduces to classical rate equations between ground, Rydberg, and inert states (see Fig.~\ref{fig:intro}a), with the stimulated rate $\Gamma_\text{f}(\Sigma)$ given as
\begin{align}
\Gamma_\text{f}(\Sigma) = \frac{2 \Omega^2 \gamma_\perp}{\gamma_\perp^2 + \Delta^2
\big(
\sum_{\substack{j \neq i \\ j \in \Sigma}} \frac{r_\text{f}^6}{r_{ij}^6} - 1
\big)^2},
\end{align}
where $\Sigma$ is the set of indices of Rydberg-excited atoms. If no other Rydberg atom exists in the gas, or if all Rydberg atoms are at distances much larger than $r_\text{f}$, $\Gamma_\text{f}(\Sigma)$ reduces to the off-resonant excitation rate of an isolated atom
\begin{align}
\tau = \frac{2 \Omega^2 \gamma_\perp}{\gamma_\perp^2 + \Delta^2}.
\end{align}
Each Rydberg atom spans a facilitation shell around it at the radius $r_\text{f}$ and with the width ${\delta r_\text{f}}$ (white disks in Fig.~\ref{fig:intro}d) given by
\begin{align}
\delta r_\text{f} = \frac{\gamma_\perp}{2 \Delta} r_\text{f}.
\end{align}
Inside this shell, the stimulated rate takes its maximal value ${\Gamma_\text{f} = \frac{2\Omega^2}{\gamma_\perp}}$, referred to as the facilitation rate. Relevant for
later mappings to epidemic models is this rate integrated over the volume $V_s$ of the facilitation shell given by
\begin{align}
\kappa = \Gamma_\text{f} \, V_s.
\end{align}
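The rates defined above are straightforward to evaluate; the short Python sketch below (our own, using the parameter values quoted in Fig.~\ref{fig:total_densities} and a thin-shell approximation $V_s \approx 4\pi r_\text{f}^2\,\delta r_\text{f}$, which is our assumption) collects them in one place.
\begin{verbatim}
import numpy as np

gamma = 1.0                 # Rydberg decay rate (unit of time^-1)
Omega = 20.0 * gamma        # Rabi frequency
Delta = 2000.0 * gamma      # laser detuning
gperp = 20.0 * gamma        # dephasing rate
r_f   = 1.0                 # facilitation radius (unit of length)

tau      = 2 * Omega**2 * gperp / (gperp**2 + Delta**2)  # off-resonant rate
Gamma_f  = 2 * Omega**2 / gperp                          # facilitation rate
delta_rf = gperp / (2 * Delta) * r_f                     # shell width
V_s      = 4 * np.pi * r_f**2 * delta_rf                 # thin-shell volume
kappa    = Gamma_f * V_s

def stimulated_rate(r):
    """Gamma_f(Sigma) for a single Rydberg neighbour at distance r."""
    detuning = Delta * ((r_f / r)**6 - 1.0)
    return 2 * Omega**2 * gperp / (gperp**2 + detuning**2)

print(tau, Gamma_f, kappa, stimulated_rate(r_f))
\end{verbatim}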
The relevant quantities of interest here are the coarse grained Rydberg density (in a small volume $\Delta V$)
\begin{align}
\label{eq:coarse_grain_rho}
\rho(\vec{r},t) = \frac{1}{\Delta V}\sum_{i:\vec{r}_i\in \Delta V} \langle \hat{\sigma}^{rr}_i\rangle,
\end{align}
and total density of ground and Rydberg atoms
\begin{align}
\label{eq:coarse_grain_n}
n(\vec{r},t) = \frac{1}{\Delta V}\sum_{i:\vec{r}_i\in \Delta V} \Bigl(\langle \hat{\sigma}^{rr}_i\rangle + \langle \hat{\sigma}^{gg}_i\rangle\Bigr).
\end{align}
With this, $n \kappa$ corresponds to the rate at which excitations spread through the cloud.
The gas is simulated in a cube with size $L^3$ and periodic boundary conditions, typically ${L=7 \, r_\text{f}}$. Atom positions are chosen randomly and velocities are sampled from the Maxwell-Boltzmann distribution with the temperature parameter $\hat{v}$, corresponding to the most probable atom velocity in the gas.
After choosing a fixed time step (${dt=1/400 \; \gamma}$), the time evolution of the system is given by a fixed-time-step Monte-Carlo (ftsMC) algorithm \cite{RUIZBARLETT20095740}. We choose a ftsMC algorithm as opposed to a kinetic Monte-Carlo algorithm \cite{Chotia_2008} because atomic motion, paired with long-range interactions, leads to quickly changing transition rates in the system.
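A schematic Python implementation of one ftsMC update follows (ours; names hypothetical). Each atom is checked against its current transition probability per step; the geometry enters only through the callable returning the stimulated rate $\Gamma_\text{f}(\Sigma)$ of atom $i$, and the motion update for finite temperature is omitted.
\begin{verbatim}
import numpy as np

def fts_step(state, rates, dt, rng):
    """One fixed-time-step Monte-Carlo update of the three-state model.

    state : array with entries 0 (ground |g>), 1 (Rydberg |r>), 2 (inert |0>)
    rates : dict with keys "gamma", "b" and a callable "gamma_f"(i) giving
            the stimulated rate of atom i for the current configuration
    """
    new = state.copy()
    for i in range(len(state)):
        u = rng.random()
        if state[i] == 0:                      # (facilitated) excitation
            if u < rates["gamma_f"](i) * dt:
                new[i] = 1
        elif state[i] == 1:                    # decay to |g> or to |0>
            if u < rates["gamma"] * dt:
                new[i] = 2 if rng.random() < rates["b"] else 0
    return new
\end{verbatim}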
In Ref.~\cite{helmrich2020} Langevin equations have been derived to macroscopically describe the density of Rydberg atoms $\rho$ and the total density
$n$ in the system. Ignoring diffusion terms, these equations take the form
\begin{subequations}
\begin{align}
\label{eq:langevin_rho}
\frac{\text{d}}{\text{d}t} \rho &=
-\kappa (2 \rho^2 - \rho n)
-\gamma \rho
-\tau (2\rho - n) + \xi,
\\
\label{eq:langevin_n}
n &= n_0 - b\gamma \int_0^{t} \text{d}t^\prime \: \rho(t^\prime),
\end{align}
\label{eq:langevin_diehl}
\end{subequations}
with the off-resonant excitation rate $\tau$, and a noise term $\xi$. The parameter $b$ characterizes the branching of the decay from the Rydberg state $\vert r\rangle$ back to the ground state $\vert g\rangle$ or into an inert (or dead) state $\vert 0\rangle$ (see Fig.~\ref{fig:intro}d) thus effectively removing atoms from the dynamics.
In the absence of decay into $\vert 0\rangle$, i.e. for ${b=0}$, and in the absence of an off-resonant excitation, i.e. $\tau =0$, the dynamics described by eqs.~\eqref{eq:langevin_diehl}, features an absorbing-state phase transition at the critical atom density
\begin{align}
\label{eq:n_crit}
n_\textrm{crit} = \frac{\gamma}{\kappa},
\end{align}
when the facilitation rate is fixed, or alternatively, at the critical facilitation rate $\kappa_\textrm{crit}= \gamma/n_0$ for fixed density.
Below the critical point, any initially existing excitations in the system will eventually decay and the steady state of the system is one where all atoms are in the ground state (absorbing phase).
Above the critical point, any arbitrarily small number of excitations initially present in the system will facilitate further excitations cascading through the system until a steady state with finite excitation density ${\rho(t \to \infty) > 0}$ is reached (active phase).
Off-resonant excitations, with the rate $\tau$, will seed an excitation cascade in the active phase; whereas, in the absorbing phase, they cause fluctuations in the excitation number. As a result, the true absorbing state ${\rho = 0}$ can only be approximately reached experimentally through a large separation of the off-resonant and facilitation time-scales, suppressing off-resonant excitations on the experimentally relevant facilitation time-scales.
Finally, the (slow) decay into a dead state $\vert 0\rangle$ with rate $b\gamma$ is responsible for the
self-organized approach to the critical point when starting in the active phase as indicated in Fig.~\ref{fig:intro}c. Starting at an initial density $n_0$ above the critical value $n_\textrm{crit}$, i.e. in the active phase, the large number of atoms in the Rydberg state cause a fast loss of atoms into the dead state. As a consequence, the total density of atoms $n$ effectively participating in the facilitation process, i.e. atoms in states $\vert g\rangle$ and $\vert r\rangle$ decreases quickly and approaches the critical value. This loss continues at the critical density and drives the system further into the absorbing state. However, this happens on a much slower time-scale, as fewer Rydberg excitations are present at the critical point.
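Dropping the noise term, eqs.~\eqref{eq:langevin_diehl} can be integrated directly; the sketch below (our own, with the parameters of Fig.~\ref{fig:total_densities} and the thin-shell $\kappa$ assumed above) reproduces the self-organization of $n$ towards $n_\textrm{crit} = \gamma/\kappa$.
\begin{verbatim}
import numpy as np

gamma, b = 1.0, 0.3
Omega, Delta, gperp, r_f = 20.0, 2000.0, 20.0, 1.0
tau   = 2 * Omega**2 * gperp / (gperp**2 + Delta**2)
kappa = (2 * Omega**2 / gperp) * 4 * np.pi * r_f**2 * gperp / (2 * Delta)

n, rho = 4.0, 1e-4          # n_0 = 4 r_f^-3, one seed excitation
dt = 1e-4
for step in range(int(40.0 / dt)):          # integrate up to t = 40/gamma
    drho = -kappa * (2 * rho**2 - rho * n) - gamma * rho - tau * (2 * rho - n)
    rho += drho * dt
    n   -= b * gamma * rho * dt             # d n / dt = -b gamma rho

print(n, gamma / kappa)     # n(t) approaches n_crit = gamma / kappa
\end{verbatim}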
\begin{figure}[H]
\centering
\includegraphics[width=\columnwidth]{fig3.pdf}
\caption{(a): Sum of Rydberg and ground state atom densities over time from Monte-Carlo simulations (blue dots) for ${\hat{v}=0 \; r_\text{f} \gamma}$ and ${\hat{v} = 100 \; r_\text{f} \gamma}$ compared with the prediction from the Langevin equations \eqref{eq:langevin_diehl} from \cite{helmrich2020} (red line). (b): Rydberg density for ${\hat{v} = 100 \; r_\text{f} \gamma}$. (c): Rydberg density for ${\hat{v} = 0 \; r_\text{f} \gamma}$. For all plots we use the parameters: ${n_0 = 4 \; r_\text{f}^{-3}}; \; {\Omega / \gamma = 20}; \; {\Delta / \gamma = 2000};$ ${\gamma_\perp / \gamma = 20}; \; {b=0.3}; \; {L / r_\text{f} = 7}$.
}
\label{fig:total_densities}
\end{figure}
In Fig.~\ref{fig:total_densities} we have plotted the time evolution of the total density $n$, initially ten times higher than the critical density $n_\textrm{crit}$, and the Rydberg density $\rho$ for a frozen gas as well as a high-temperature gas with otherwise identical conditions, obtained from Monte-Carlo simulations. Here, all atoms in the system are initially in the ground state until one atom is off-resonantly excited to the Rydberg state.
For comparison we also show the solution of the mean-field Langevin equations \eqref{eq:langevin_diehl}, which
capture the long-time SOC dynamics of the high-temperature gas, but fail to describe the frozen gas outside of very short times (see Figs.~\ref{fig:total_densities}b and c). The discrepancy in the peak values of $\rho$
can be attributed to Rydberg blockade, which truncates the maximum number of Rydberg excitations simultaneously present in the gas.
Qualitatively, the Rydberg density in the frozen gas displays time dynamics similar to that of the high-temperature gas, albeit with substantial quantitative differences in the long-time limit. We will show that in the low-temperature regime of the Rydberg gas the absorbing-state phase transition is replaced by an extended Griffiths phase, whose characteristic features become visible when off-resonant excitations and decay into state $\ket{0}$ are negligible.
It is important to note that for ${b > 0}$ the decay into $\ket{0}$ dominates the dynamics at times ${t > 1 / b\gamma}$. Thus in order to experimentally observe a Griffiths phase by monitoring the long-time dynamics with this system, ionization and loss of Rydberg atoms (manifested in the parameter $b$) must be reduced as much as possible.
\subsection{Effects of relative motion between atoms}
The rate equation approximation used above is valid as long as the population dynamics are slow compared to the dephasing rate. In a frozen gas, the relevant time scales
are solely determined by the internal dynamics of an individual atom for a given (fixed) configuration of Rydberg atoms in its vicinity. If, however, the gas has a finite temperature, a ground state atom can fly in and out of the facilitation volume $V_s$ of a Rydberg atom, which can amount to a fast sweep of the detuning of the ground state atom. Thus there is an additional time scale given by the crossing time $\sim \delta r_\text{f}/v$.
To analyze the effects of atomic motion on the facilitation process we consider the two-body problem of a ground state atom moving with velocity $v$ and impact parameter $d$ relative to a Rydberg atom (see inset of Fig.~\ref{fig:rate-eqs-vs-full}). For ${d > r_\text{f}}$ the ground state atom is not shifted into resonance and no facilitation occurs. For ${d \leq r_\text{f}}$ one has to distinguish two cases depending on the impact parameter: (i) for ${d < r_\text{f}}$, the ground state atom flies through the facilitation shell twice (blue case in Fig.~\ref{fig:rate-eqs-vs-full}), and (ii) for ${d \approx r_\text{f}}$, the ground state atom grazes the facilitation shell and is only briefly shifted into resonance (orange case in Fig.~\ref{fig:rate-eqs-vs-full}).
In case (i), within the rate-equation approximation, we find the excitation probability after a single pass of the ground state atom through the facilitation shell as
\begin{eqnarray}
p_\text{exc} &=& 1-\exp\left\{-\int_{t_i}^{t_f}\!\!\! \text{d}t\, \Gamma_\uparrow(t)\right\}\nonumber\\
&=& 1-\exp\left\{-2 \Omega^2 \int_{t_i}^{t_f}\!\!\! \text{d}t\, \frac{\gamma_\perp}{\Delta(t)^2+\gamma_\perp^2}\right\}.
\end{eqnarray}
Linearizing the time-dependent detuning $\Delta(t)$ for times $t_i< t < t_f$, while passing through the facilitation shell, we obtain $\Delta(t) \approx \dot\Delta \times (t-t_0)$, yielding
\begin{eqnarray}
p_\text{exc} &=& 1-\exp\left\{-\frac{2\Omega^2}{\dot\Delta}\int_{\Delta_i}^{\Delta_f}\!\! \! \! d\Delta \, \frac{\gamma_\perp}{\Delta^2+\gamma_\perp^2}\right\}\nonumber\\
& \approx& 1-\exp\left\{-2\pi \frac{\Omega^2}{\dot\Delta}\right\},
\label{eq:Landau-Zener}
\end{eqnarray}
where we have assumed that $\vert \Delta_{i,f}\vert =\vert\Delta(t_{i,f})\vert \gg \gamma_\perp$. This is exactly the expression given by the Landau-Zener formula.
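The two-body estimate can be checked numerically; the sketch below (our illustration, not the authors' simulation code) integrates the rate along a straight-line fly-by and compares it with the Landau-Zener estimate for the two shell crossings, with all parameter values assumed.
\begin{verbatim}
import numpy as np

Omega, gperp, Delta, r_f = 20.0, 20.0, 2000.0, 1.0  # units of gamma, r_f

def p_exc(v, d, n_steps=200001):
    """Rate-equation excitation probability for a straight fly-by."""
    t_max = 5.0 * r_f / v                    # window covering the shell
    t = np.linspace(-t_max, t_max, n_steps)
    r = np.sqrt(d**2 + (v * t)**2)
    detuning = Delta * ((r_f / r)**6 - 1.0)  # distance-dependent detuning
    rate = 2 * Omega**2 * gperp / (gperp**2 + detuning**2)
    return 1.0 - np.exp(-np.sum(rate) * (t[1] - t[0]))

def p_exc_LZ(v, d):
    """Landau-Zener estimate for the two crossings of the shell."""
    drdt = v * np.sqrt(r_f**2 - d**2) / r_f  # radial speed at r = r_f
    dDelta_dt = 6 * Delta / r_f * drdt       # sweep rate of the detuning
    return 1.0 - np.exp(-4 * np.pi * Omega**2 / dDelta_dt)

for v in (1.0, 10.0, 100.0):
    print(v, p_exc(v, 0.5 * r_f), p_exc_LZ(v, 0.5 * r_f))
\end{verbatim}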
\begin{figure}[H]
\centering
\includegraphics[width=0.9\columnwidth]{fig4.pdf}
\caption{Velocity-dependent excitation probability of a ground state atom flying past a Rydberg atom for the impact parameters ${d=0.5 r_\text{f}}$ (blue) and ${d = r_\text{f}}$ (orange) (see inset), calculated from Monte-Carlo simulations with fixed time step ${dt = 1/400 \, \gamma}$ (dots), full numeric density matrix simulation (solid lines), and the analytical Landau-Zener formula, eq.~\eqref{eq:Landau-Zener} (dashed lines). Full solution and rate-equation approximation only differ in the case of ``grazing incidence'' (${d=r_\text{f}}$).}
\label{fig:rate-eqs-vs-full}
\end{figure}
If $p_\text{exc}$ is small, the asymptotic excitation probability after two passages is just
${p_\text{exc} \approx 1-\exp\{-4\pi \Omega^2/\dot \Delta\}}$. From this discussion we expect the rate equations to accurately describe the facilitation
process even for large atom velocities as long as the impact parameter $d$ is different from $r_\text{f}\pm \delta r_\text{f}$. In case (ii), however,
i.e. for ``grazing incidence'', the Landau-Zener formula is no longer valid and there could be a difference between the rate-equation approximation and the solution of the full two-particle density matrix equations. This is indeed the case, as can be seen from Fig.~\ref{fig:rate-eqs-vs-full}, where we have plotted the asymptotic excitation probability of the ground state atom as a function of relative velocity and impact parameter from a simulation of the full density-matrix equations (solid lines), the analytic Landau-Zener formula (dashed lines), and a Monte-Carlo simulation of the rate equations in the large-dephasing limit with time step ${dt = 1/400 \, \gamma}$ (dots). One recognizes perfect agreement except for large relative velocities and impact parameters close to the facilitation radius ${d\approx r_\text{f}}$, where the rate equations predict up to an order of magnitude higher excitation probabilities than the full simulation. Since ${\delta r_\text{f}\ll r_\text{f}}$ the contribution of these ``grazing-incidence'' cases is negligibly small, allowing us to accurately describe high gas temperatures with a fixed-time-step Monte-Carlo algorithm.
In Ref.~\cite{natcom_Griffiths} it was argued that a Rydberg atom moving at an average velocity larger than the Landau-Zener velocity ${v_\textrm{LZ}= 2\pi^2 \Omega^2 r_\text{f} / (3\Delta)}$,
effectively decouples from the excitation cascade. While, as can be seen from Fig.~\ref{fig:rate-eqs-vs-full}, the excitation probability
above this velocity indeed quickly drops and scales as $1/v$, the number of ground state atoms seen by a moving Rydberg atom in a given time also increases linearly with its velocity $v$. This compensates for the former effect and thus does not lead to an emergent heterogeneity in phase space as argued in \cite{natcom_Griffiths}, as long as the Rydberg atom does not move out of the gas sample.
Considering the two limiting cases of a frozen gas and a high temperature gas we argue that the Griffiths phase, which originates from spatial inhomogeneity, disappears
when the atoms' average velocity is increased above a certain limit, resulting in a direct absorbing-state phase transition. A quantitative discussion of the crossover between a frozen system with an extended Griffiths phase and a high-temperature gas with a direct absorbing-state phase transition is beyond the scope of the present paper
and is subject to future work. Instead we will focus on the quantitative description of the
facilitation dynamics in a low-temperature or frozen gas.
\section{Network Structure of Facilitation Paths in a Frozen Gas}
The emergence of a Griffiths phase
results from facilitation events being constrained to a network structure. In the limit of a frozen gas, atoms have random but fixed positions. If we regard the system on the time scale of facilitated excitations, off-resonant excitations can be neglected. Therefore, the dynamics are described by the facilitated spreading of Rydberg excitations, which is only possible if atomic distances are approximately $r_\text{f}$. As a result, we can regard the structure of atom positions and the paths along which excitations can spread as a random graph, with the entries of its adjacency matrix given by
\begin{align}
a_{ij} =
\begin{cases}
1, \quad \textrm{for}\quad r_{ij} \in [r_\text{f} - \frac{\delta r_\text{f}}{2}, r_\text{f} + \frac{\delta r_\text{f}}{2}]
\\
0, \quad \text{else}
\end{cases}.
\end{align}
Obviously this matrix is symmetric, corresponding to an undirected network \cite{sayama2015introduction}.
Assuming a uniform distribution of atom positions in the gas, the probability that a randomly selected atom has $k$ atoms in its facilitation shell (see Fig.~\ref{fig:intro}d), meaning the atom is of degree $k$, is given by the Poissonian distribution
\begin{align}
\label{eq:poissonian}
P(k) = \frac{(n V_s)^k}{k!} \exp{(-n V_s)}.
\end{align}
As the degree distribution is Poissonian we can map this problem to a random Erdős–Rényi (ER) network \cite{erdHos1960evolution}. In contrast, the network structure of atoms trapped by an optical lattice or tweezer array would be given by a regular lattice network.
Of particular interest in random graph theory is the question whether a system percolates. In a percolating system, the probability $p$ that a bond between two randomly selected atoms exists is high enough that a path exists which runs through the entire system, i.e. there almost surely exists a single cluster with a size on the order of the system size. If, however, the connectivity is below a critical threshold for bond connectivity, $p < p_c$, the system is composed of many small, disconnected clusters \cite{erdHos1960evolution, LI20211}. For $p = p_c$ the percolation transition occurs. A 2D network with ${p=p_c}$ from Monte-Carlo sampling is illustrated in Fig.~\ref{fig:lcc}.
If $N$ is the number of atoms and $s_1(N)$ is the size of the largest connected cluster (LCC), then the system percolates if
${ \lim_{N\to \infty} {s_1(N)}/{N} > 0}$.
For an ER network the percolation transition occurs when the average network degree is $\langle k \rangle = 1$ \cite{LI20211, sayama2015introduction}. Using eq.~\eqref{eq:poissonian}, the density at which the percolation transition occurs is therefore
\begin{align}
\label{eq:n_perc}
n_\textrm{perc} = \frac{1}{V_s}.
\end{align}
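This threshold is easy to probe numerically. The sketch below (our own illustration) samples random atom positions at the density of eq.~\eqref{eq:n_perc}, links atoms whose minimum-image distance lies in the facilitation shell, and reports the largest connected cluster; it assumes the thin-shell volume $V_s = 4\pi r_\text{f}^2\,\delta r_\text{f}$ and uses the \texttt{networkx} library.
\begin{verbatim}
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
L, r_f = 7.0, 1.0
delta_rf = 20.0 / (2 * 2000.0) * r_f            # gperp/(2 Delta) r_f = 0.005
n_perc = 1.0 / (4 * np.pi * r_f**2 * delta_rf)  # percolation density 1/V_s
N = int(n_perc * L**3)
pos = rng.uniform(0.0, L, size=(N, 3))

G = nx.Graph()
G.add_nodes_from(range(N))
for i in range(N):
    dvec = np.abs(pos[i + 1:] - pos[i])
    dvec = np.minimum(dvec, L - dvec)           # periodic boundary conditions
    r = np.sqrt((dvec**2).sum(axis=1))
    hit = (r > r_f - delta_rf / 2) & (r < r_f + delta_rf / 2)
    G.add_edges_from((i, i + 1 + j) for j in np.flatnonzero(hit))

s1 = max(len(c) for c in nx.connected_components(G))
k_mean = np.mean([deg for _, deg in G.degree()])
print(N, s1, s1 / N, k_mean)                    # <k> should be close to 1
\end{verbatim}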
\begin{figure}[H]
\centering
\includegraphics[width=\columnwidth]{fig5.pdf}
\caption{Schematic frozen-gas atom positions (blue dots) for a 2D system with ${L=10 \; r_\text{f}}$. Network clusters connecting atoms with distances $r_{ij} \in [r_\text{f} - \frac{\delta r_\text{f}}{2}, r_\text{f} + \frac{\delta r_\text{f}}{2}]$ (grey dashed lines), and the largest connected cluster of these (red lines), are shown for ${n_0 = n_\text{perc}}$ (left). Size of the largest connected cluster (LCC) $s_1$ depending on the density from Monte-Carlo samples in a 3D cube with ${L=7 \; r_\text{f}}$ (right). The black dashed line corresponds to a power law with exponent ${\nu = 1}$. The inset shows the LCC divided by the number of atoms $N$ as a function of density.}
\label{fig:lcc}
\end{figure}
This density is a factor $\Gamma_\text{f} / \gamma$ larger than the critical density $n_\text{crit}$ of the absorbing-state phase transition given by eq.~\eqref{eq:n_crit}. We can verify that eq.~\eqref{eq:n_perc} corresponds to the correct percolation density by calculating the size of the LCC $s_1$ as a function of the gas density (Fig.~\ref{fig:lcc}). In the thermodynamic limit, ${s_1 / N = 0}$ for all densities ${n < n_\textrm{perc}}$. As numeric simulations are restricted to a finite system size, however, we instead consider the percolation transition to occur when $s_1$ grows faster than linearly with the density $n$ (the black dashed line in Fig.~\ref{fig:lcc} corresponds to linear growth).
Of relevance for the Griffiths phase is the size distribution of clusters in the network. Using Monte-Carlo simulations we can verify that the cluster sizes follow a geometric distribution ${P(s) \sim \text{e}^{-cs}}$ under the assumption that clusters are made of linear chains of $s$ atoms. This assumption holds true for small cluster sizes and an average network degree ${\langle k \rangle \ll 1}$.
\begin{figure}[H]
\centering
\includegraphics[width=0.9\linewidth]{fig6.pdf}
\caption{Network cluster size probability distribution from 3D Monte-Carlo samples (dots) and eq.~\eqref{eq:cluster_sizes} (dashed lines). From left to right for the densities ${n_0 / n_\text{perc} = [0.019, 0.063 , 0.125 , 0.188 , 0.250 , 0.313 ]}$.}
\label{fig:cluster_sizes}
\end{figure}
We can then approximate the decay constant $c$, with $p_0$ being the probability of an atom having the degree ${k = 0}$, as
\begin{subequations}
\begin{align}
P(s) &= p_0 (1 - p_0)^{s - 1}
\\
&=
\text{e}^{-n V_s}
(1 - \text{e}^{-n V_s})^{s - 1}
\\
&\propto \text{e}^{-cs},
\end{align}
\label{eq:cluster_sizes}
\end{subequations}
with ${c = - \text{ln}(1 - \text{e}^{-n V_s})}$. In Fig.~\ref{fig:cluster_sizes} a comparison between cluster sizes in Monte-Carlo simulations and eqs.~\eqref{eq:cluster_sizes} is shown. The agreement is very good for small densities as almost all clusters in the gas are composed of linear chains. As the density of the gas increases, the probability that at least one atom in the cluster has more than two connections, i.e. $k \geq 3$, increases. While the distribution remains exponential, the probability for larger clusters to exist in the system greatly increases compared to the prediction by eqs.~\eqref{eq:cluster_sizes}.
\section{Epidemic Dynamics on the Network}
It is known that Rydberg systems in the facilitation regime bear close similarities to epidemics \cite{PhysRevLett.119.140401, natcom_Griffiths}. In the following, we will systematically analyze the Rydberg facilitation dynamics on the random network formed by atoms within their respective facilitation shells. For this, we will map the dynamics to the Susceptible-Infected-Susceptible (SIS) epidemic model \cite{WEISS1971261, murray2002mathematical, bailey1975mathematical}. We will (i) disregard the decay of Rydberg atoms into the dead state $\ket{0}$ (parameter ${b=0}$, see Fig.~\ref{fig:intro}a), reducing the dynamics of each atom to a two-level system. Additionally, we will (ii) neglect off-resonant excitations by setting ${\tau=0}$, meaning excitations can only be created by means of facilitation. We will refer to simulations carried out with these two constraints as the SIS approximation.
One major difference between our Rydberg system in the SIS approximation and a classical SIS system is the Rydberg blockade: atoms excited to the Rydberg state not only facilitate the spread of excitations, they can also block the spreading in adjacent clusters. This will be analyzed more systematically later.
The network structure of clusters of atoms within facilitation distance of each other strongly depends on the temperature of the gas.
If the rms average relative velocity $\overline{v}$ is large, such that each excited Rydberg atom meets many ground state atoms during its
lifetime $\gamma^{-1}$, i.e. if in a 3D gas
\begin{eqnarray}
\overline{v} \gg \gamma \, n^{-1/3},
\end{eqnarray}
any network structure is effectively washed out and the system is homogeneous. Close to the critical point of the absorbing-state phase transition, the above condition is equivalent to $\overline{v} \gg \gamma \, r_\text{f}$.
If, on the other hand, the average velocity of atoms is very small, such that during
a facilitation time $\Gamma_\text{f}^{-1}$ they do not move out of the facilitation shell, i.e. if
\begin{equation}
\overline{v} \ll \Gamma_\text{f} \, \delta r_\text{f},
\end{equation}
the atoms form a finitely connected network. We will now discuss these two limits.
\subsection{High temperature limit}
In a high-temperature gas with rms average relative velocity $\overline{v} \gg \gamma n^{-1/3}$, we can map the system to the Susceptible-Infected-Susceptible (SIS) epidemic model \cite{WEISS1971261, murray2002mathematical, bailey1975mathematical}.
The SIS model is
characterized by the infection and recovery rates, $\lambda$ and $\mu$ respectively, which for the Rydberg gas read
\begin{subequations}
\begin{align}
\lambda &= n \kappa
\\
\mu &= \gamma.
\end{align}
\end{subequations}
The SIS model predicts an active/absorbing phase transition when
\begin{equation}
\label{eq:lambda_c_1}
\lambda_c^{(1)} = \mu
\end{equation}
where excitation spread equals spontaneous decay. This corresponds to the critical density \eqref{eq:n_crit} of the absorbing state phase transition discussed before.
In Fig.~\ref{fig:ryd_response}a, Monte-Carlo simulations of the Rydberg system with the SIS approximation and ${\rho(t=0) = n}$ are shown for the high-temperature gas for different values of $n$ and a fixed facilitation rate $\Gamma_\text{f}$. One recognizes that an active/absorbing-state phase transition occurs at $\lambda=\lambda_c^{(1)}$, with the Rydberg density either decaying exponentially at the rate $\mu$ (for $\lambda < \lambda_c^{(1)}$) or relaxing to a steady-state active density (for $\lambda >\lambda_c^{(1)}$). At the critical density (green curve in Fig.~\ref{fig:ryd_response}a) the system should decay as $\rho \sim t^{-1}$ \cite{munoz2010Griffiths}; however, this decay is truncated by an exponential decay due to the finite system size.
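For orientation, the textbook mean-field rate equation of the SIS model for the infected fraction $\rho$, $\dot{\rho} = \lambda\rho(1-\rho)-\mu\rho$, already displays the transition at ${\lambda_c^{(1)}=\mu}$. The following R sketch (simple Euler integration; all parameter values are assumed, and this is not the Monte-Carlo code used for Fig.~\ref{fig:ryd_response}) illustrates the regimes below, at, and above the critical point.
\begin{verbatim}
sis_meanfield <- function(lambda, mu = 1, rho0 = 0.5,
                          dt = 1e-3, Tmax = 20) {
  steps <- as.integer(Tmax / dt)
  rho <- numeric(steps); rho[1] <- rho0
  for (i in 2:steps) {
    r <- rho[i - 1]
    rho[i] <- r + dt * (lambda * r * (1 - r) - mu * r)
  }
  rho
}
for (lam in c(0.5, 1.0, 2.0)) {   # below, at, above lambda_c = mu
  cat(sprintf("lambda = %.1f : rho(Tmax) = %.4f\n",
              lam, tail(sis_meanfield(lam), 1)))
}
\end{verbatim}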
\begin{figure}[H]
\centering
\includegraphics[width=0.85\columnwidth]{fig7.pdf}
\caption{Decay of Rydberg density over time from Monte-Carlo simulations in the SIS approximation (${b=0}$, ${\tau=0}$) with an initial density ${\rho(t=0) = n}$ for the gas at high temperature (a) with ${\hat{v} = 100 \, r_\text{f} \gamma}$ and for the frozen gas (b) with ${\hat{v} = 0 \, r_\text{f} \gamma}$. The black dashed line in (a) is a power-law decay with ${\rho \sim t^{-1}}$, expected at the critical density. The colors show total system densities (increasing from left to right) with ${n=}$ 0.003, 0.03, 0.39, 1.98, 3.97, 5.95, 9.12, 11.90, 14.28, 15.91, 16.66 and 19.94. Figure (a) only shows the lowest 7 densities.
The critical density is $n_\textrm{crit}=0.39$. Between the percolation density $n_\textrm{perc}=15.91$ and $n_\textrm{crit}^{(2)}=16.6$ the curves are expected to show a stretched exponential decay (see Fig.~\ref{fig:phase-diagram}).
}
\label{fig:ryd_response}
\end{figure}
\subsection{Frozen gas limit}
In the limit of an effectively frozen gas, the atoms that can participate in the facilitation process form a network.
The dynamics of an SIS epidemic strongly depend on the structure of this underlying network. For example, in the case of a heterogeneous but \emph{scale-free} network, i.e. ${P(k) \sim k^{-\nu}}$, the absorbing phase can disappear altogether, leaving the system in an endemic phase regardless of the infection rate \cite{chatterjee2009contact, SciRep.6.22506, RevModPhys.87.925}.
For the case of a heterogeneous ER network, which describes the frozen gas of atoms, an active phase can only occur if the network is above the percolation threshold (i.e. ${\langle k \rangle > 1}$). However, for a finitely connected (but percolating) ER network, the threshold for the active phase is modified since activity occurs in localized regions and thus the effective infection rate is reduced. One finds \cite{munoz2010Griffiths}
\begin{equation}
\lambda_c^{(2)} = \mu \frac{\langle k\rangle}{\langle k\rangle -1},
\end{equation}
with $\langle k\rangle$ being the average degree of the network. For ${\langle k \rangle \to \infty}$ the threshold given by eq.~\eqref{eq:lambda_c_1} is recovered. For a fixed facilitation rate and facilitation volume this threshold can be expressed in terms of a critical density of atoms using $\langle k\rangle = n V_s$:
\begin{align}
\nonumber
n_\textrm{crit}^{(2)} &= \frac{1}{V_s} + \frac{\gamma}{\kappa}
\\
&\equiv n_\textrm{perc} + n_\textrm{crit}^{(1)}.
\end{align}
If the network is below the percolation threshold, the finite size of clusters truncates the spread of activity through the system. Therefore, the network cannot support an active phase, and instead, a Griffiths phase emerges above the critical infection rate $\lambda_c^{(1)}$ \cite{munoz2010Griffiths}. One of the most distinguishing characteristics of a Griffiths phase is the presence of rare regions with above average activity which lead to a slow, algebraic decay of excitations \cite{PhysRevLett.23.17}.
In the non-percolating network (i.e. ${\langle k \rangle < 1}$), decay dominates for ${\lambda \leq \mu}$, leading to very short times until all activity in a cluster disappears, as excitations cannot sustain themselves. If however ${\lambda > \lambda_c^{(1)}= \mu}$, the time until activity disappears in a cluster increases exponentially with the cluster size $s$ and is given by \cite{10.2307/3215641}
\begin{align}
\label{eq:cluster_lifetime}
\tau(s)
\propto
\sqrt{\frac{2 \pi}{s}}
\frac{\lambda}{(\lambda - 1)^2}
\text{exp}
\Big\{
s \Big(\text{ln}(\lambda) - 1 + \frac{1}{\lambda}\Big)
\Big\}.
\end{align}
In the following we will refer to $\tau(s)$ as the extinction time of activity in a cluster. As a result of the convolution of exponentially rare cluster sizes ${P(s) \sim \text{e}^{-cs}}$ and a cluster lifetime increasing exponentially with cluster size ${\tau(s) = \text{e}^{as}}$, the activity in the Griffiths phase decays with a power law:
\begin{align}
\label{eq:rho_integral}
\rho(t) = \int \text{d}s \: s P(s) \, \text{e}^{-t / \tau(s)}.
\end{align}
Using eqs.~\eqref{eq:cluster_sizes} and \eqref{eq:cluster_lifetime}, the integral in \eqref{eq:rho_integral} can be approximated with Laplace's method and results in an algebraic decay
\begin{align}
\rho \sim t^{-c/a},
\end{align}
with the coefficient $a$ given by \eqref{eq:cluster_lifetime} as ${a = \text{ln}(\lambda) - 1 + \frac{1}{\lambda}}$.
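The Laplace-method result can be checked numerically: evaluating \eqref{eq:rho_integral} as a discrete sum over cluster sizes, with $P(s)$ from \eqref{eq:cluster_sizes} and $\tau(s)$ from \eqref{eq:cluster_lifetime} (setting ${\mu=1}$), and fitting a log-log slope should approximately recover $-c/a$ up to logarithmic corrections. A short R sketch with assumed values of $\lambda$ and $nV_s$:
\begin{verbatim}
lambda <- 3.0; nVs <- 0.3           # assumed illustration values (mu = 1)
a  <- log(lambda) - 1 + 1/lambda    # growth coefficient of tau(s)
c0 <- -log(1 - exp(-nVs))           # decay coefficient of P(s)
p0 <- exp(-nVs)
s  <- 1:400
Ps <- p0 * (1 - p0)^(s - 1)
tau <- sqrt(2*pi/s) * lambda / (lambda - 1)^2 * exp(a * s)
tt  <- 10^seq(1, 8, length.out = 60)
rho <- sapply(tt, function(t) sum(s * Ps * exp(-t / tau)))
fit <- lm(log(rho) ~ log(tt))       # log-log slope ~ -c/a
cat("fitted exponent:", round(coef(fit)[2], 2),
    " prediction -c/a:", round(-c0/a, 2), "\n")
\end{verbatim}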
If the network is above the percolation threshold, i.e. if $\langle k\rangle \ge 1$, but the driving strength is
below the critical value for the active phase $\lambda_c^{(2)}$ the
decay of activity is expected to follow a stretched exponential.
A qualitative phase diagram of the facilitation dynamics in the frozen Rydberg gas is shown in Fig.~\ref{fig:phase-diagram}.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\columnwidth]{fig8.pdf}
\caption{Schematic phase diagram of Rydberg facilitation of a frozen gas with the percolation threshold given at ${\langle k \rangle = 1}$. Increasing the density $n$ of the gas, one moves along the red line, crossing from an absorbing into a Griffiths phase at $n_\textrm{crit}$, subsequently into a phase with stretched exponential decay at the percolation threshold $n_\textrm{perc}$, and eventually into the active phase at $n_\textrm{crit}^{(2)}$. Time has been rescaled such that ${\mu = 1}$.}
\label{fig:phase-diagram}
\end{figure}
Fig.~\ref{fig:ryd_response}b shows the results of Monte-Carlo simulations for a frozen gas in the SIS approximation for the same parameters and color code as in the high-temperature case of Fig.~\ref{fig:ryd_response}a. For ${n < n_\textrm{crit}}$ all initial excitations decay exponentially (curves 1 and 2 from left to right), corresponding to the absorbing phase. The behavior changes at and above the critical point but below the percolation threshold, ${n_\textrm{crit}\le n < n_\textrm{perc}}$ (curves 3-7). Here, the system is in an extended Griffiths phase with a power-law decay with varying exponents. Above the percolation threshold but below the threshold of the active phase, $n_\textrm{perc} \le n < n_\textrm{crit}^{(2)}$, the decay is expected to become a stretched exponential \cite{munoz2010Griffiths}, which, however, we cannot resolve in our simulations due to the very long time scale of this decay. Finally, for ${n\ge n_\textrm{crit}^{(2)}}$ the system enters the active phase where excitations simply decay to a steady state.
In the following, we give a quantitative estimate of the power-law decay exponent in the Griffiths phase based on the SIS model and compare it with the exponents obtained from the Monte-Carlo simulations. In contrast to the standard SIS model, a Rydberg system features Rydberg blockade and facilitated de-excitation, making it unclear whether analytic predictions from an SIS model are accurate. To check this, we compare the extinction time of activity in a linear excitation chain, given by eq.~\eqref{eq:cluster_lifetime} with the spreading rate ${\lambda = \Gamma_\text{f} V_s \cdot 1\, r_\text{f}^{-3}}$, i.e. assuming a density of one atom per $r_\text{f}^{3}$, with Monte-Carlo simulations of the SIS approximation in Fig.~\ref{fig:Griffiths_exponents}. For this, we simulate a 1D cluster of length $s$ where each atom is initially in the Rydberg state and measure the average time until all atoms have decayed. Here, we assume the above-mentioned SIS approximation (no decay to $\ket{0}$ and no off-resonant excitations). One recognizes that eq.~\eqref{eq:cluster_lifetime} gives a good approximation of the extinction time.
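The extinction-time measurement just described can be sketched as a continuous-time (Gillespie-type) simulation of SIS dynamics on a chain; the minimal R implementation below is our own illustration and not the simulation code used for Fig.~\ref{fig:Griffiths_exponents}.
\begin{verbatim}
sis_chain_extinction <- function(s, lambda, mu = 1) {
  x <- rep(1L, s)                    # 1 = excited, 0 = ground state
  t <- 0
  repeat {
    inf <- which(x == 1L)
    if (length(inf) == 0L) return(t) # all activity extinct
    press <- numeric(s)              # facilitation pressure per site
    for (i in inf) {
      if (i > 1) press[i - 1] <- press[i - 1] + lambda
      if (i < s) press[i + 1] <- press[i + 1] + lambda
    }
    press[x == 1L] <- 0              # excited sites cannot be re-excited
    targets <- which(press > 0)
    rates <- c(rep(mu, length(inf)), press[targets])
    t <- t + rexp(1, sum(rates))     # time to next event
    k <- sample.int(length(rates), 1, prob = rates)
    if (k <= length(inf)) x[inf[k]] <- 0L            # spontaneous decay
    else x[targets[k - length(inf)]] <- 1L           # facilitated excitation
  }
}
tau_hat <- sapply(2:8, function(s)
  mean(replicate(200, sis_chain_extinction(s, lambda = 2))))
print(round(tau_hat, 2))   # grows roughly exponentially with s
\end{verbatim}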
\begin{figure}[H]
\centering
\includegraphics[width=0.8\columnwidth]{fig9.pdf}
\caption{
Extinction times for activity in clusters in Monte-Carlo simulations of 1D lattice chains of length $s$ (blue dots) and prediction by \cite{10.2307/3215641} (red line).}
\label{fig:Griffiths_exponents}
\end{figure}
Using \eqref{eq:cluster_sizes} and \eqref{eq:cluster_lifetime} we can approximate the power-law exponent $\nu$ of the Griffiths-phase decay, ${\rho \sim t^{-\nu}}$, as a function of the density and the internal rates. We obtain
\begin{equation}
\label{eq:Griffiths_exponent}
\nu \equiv \frac{c}{a} = -\frac{\ln\left(1-e^{-n V_s}\right)}{\ln(\lambda) -1 +\lambda^{-1}},
\end{equation}
with $\lambda = 4\pi \Gamma_\text{f} \frac{\delta r_\text{f}}{r_\text{f}}$. The comparison with the exponents fitted from the power-law decay of the Rydberg density in Monte-Carlo simulations of the frozen gas under the SIS approximation (see Fig.~\ref{fig:ryd_response}) is shown in Fig.~\ref{fig:compare_ryd_blockade}b. This admittedly rough approximation of the Griffiths-phase decay exponents agrees qualitatively with the Monte-Carlo data.
A fundamental difference between Rydberg facilitation and classical SIS activity spreading is the Rydberg blockade. Considering the frozen-gas limit, two effects arise from Rydberg blockade: first, if an atom has two Rydberg atoms within the facilitation distance, i.e. the atom is in the middle of a cluster, it cannot be facilitated, as it receives twice the dipole shift and is pushed out of resonance again. If this atom decays or is initially in the ground state, it cannot be excited, resulting in a hole that splits the cluster \cite{letscher2017bistability}. Additionally, Rydberg atoms can block excitations from spreading through adjacent clusters. However, neither of these effects changes the actual structure of the network; instead they effectively retard the timescale on which excitations spread. For a quantitative comparison, we simulate the Rydberg gas and compare the decay of excitations in the SIS approximation with and without Rydberg blockade (Fig.~\ref{fig:compare_ryd_blockade}).
\begin{figure} [H]
\centering
\includegraphics[width=0.9\columnwidth]{fig10.pdf}
\caption{
a) Decay of Rydberg density over time from Monte-Carlo simulations in the SIS approximation for the frozen gas without Rydberg blockade (full lines) compared to the results from Fig.~\ref{fig:ryd_response}b (dashed lines). The colors show total system densities (increasing from left to right) with n = 0.003, 0.03, 0.39, 1.98,
3.97, 5.95, 9.12, 11.90, 14.28, 15.91, 16.66 and 19.94.
b)
Power-law decay exponent ${\nu = c/a}$ of the Rydberg density fitted from frozen-gas Monte-Carlo simulations in the SIS approximation from Fig.~\ref{fig:compare_ryd_blockade}a with Rydberg blockade (blue dots), without Rydberg blockade (orange dots), and from the analytical approximation given by eq.~\eqref{eq:Griffiths_exponent} (red line).}
\label{fig:compare_ryd_blockade}
\end{figure}
As blockade allows fewer Rydberg atoms to be present in the system, the steady-state Rydberg density of the active phase, and therefore the density at which the power-law decay of the Griffiths phase begins, is much lower. However, as seen in Fig.~\ref{fig:compare_ryd_blockade}, the exponents of the power-law decay in the Griffiths phase show no qualitative change depending on the presence of Rydberg blockade in the system.
\section{Modified Langevin Description of Epidemic Evolution}
In the following section, we develop an effective macroscopic theory of the Rydberg facilitation process, expanding the Langevin equation \eqref{eq:langevin_diehl} from Ref.~\cite{helmrich2020}, starting from the microscopic model. This new equation takes into account the network structure in the case of a frozen gas as well as Rydberg blockade.
To this end we start from the microscopic Heisenberg-Langevin equations describing the quantum many-body dynamics of Rydberg excitations for atoms
at given spatial positions:
\begin{align}
\frac{\text{d}}{\text{d}t} \hat{\sigma}^{rr}_i &=
-i\Omega (\hat{\sigma}^{rg}_i - \hat{\sigma}^{gr}_i) - \gamma \hat{\sigma}^{rr}_i + \hat{\xi}_1,
\\
\frac{\text{d}}{\text{d}t} \hat{\sigma}^{rg}_i &=
-i
\Big(
\Omega (\hat{\sigma}^{rr}_i - \hat{\sigma}^{gg}_i) - \hat{V}_i \hat{\sigma}^{rg}_i
\Big)
- \gamma_\perp\hat{\sigma}^{rg}_i + \hat{\xi}_2,
\end{align}
where $\hat{\sigma}^{\mu\nu}_j=\vert \mu\rangle_{jj}\langle \nu\vert$ is the transition operator between states $\vert \mu\rangle$ and $\vert \nu\rangle$ of the $j$th atom. We have accounted for losses and dephasing and added corresponding fluctuation operators $\hat{\xi}_1$ and $\hat{\xi}_2$, which vanish in the quantum mechanical average and whose properties can be obtained from the fluctuation-dissipation relation \cite{kubo1966fluctuation}.
The operator $\hat{V}_i$ describes the detuning of the $i$-th atom and depends on the states of all other atoms. It is given by
\begin{align}
\label{eq:potential_V_k}
\hat{V}_i = \Delta
\Big(
-1 + \sum_{j \neq i} \frac{r_\text{f}^6}{r_{ij}^6}
\hat{\sigma}^{rr}_j
\Big).
\end{align}
We note that the operator valued quantities are objects in Hilbert space
describing the quantum mechanical evolution, and are subject to the classical statistics of the (time dependent) random positions of the atoms.
The dynamics of the atom positions are treated classically, which is well justified in the high-dephasing limit, assumed
throughout the present paper.
Furthermore, the effect of $c_6$ forces acting on the center-of-mass motion of the atoms due to the distance dependence of $\hat{V}_i$ is disregarded in the present paper. These forces will be discussed in more detail elsewhere \cite{Daniel-2023b}, where we will show that under typical experimental conditions they can be accounted for by a change of the atoms' velocity distribution and, in the case of a trapped gas, by an additional loss channel.
Assuming high dephasing, ${\gamma_\perp \gg \Omega}$, the coherences $\hat{\sigma}^{rg}_i$ decay rapidly compared to the relevant many-body time scales and attain quasi-stationary values. Therefore, we adiabatically eliminate the coherences (${\frac{\text{d}}{\text{d}t} \hat{\sigma}^{rg}_i = 0}$) and arrive at
\begin{align}
\label{eq:dsrrdt-full}
\frac{\text{d}}{\text{d}t} \hat{\sigma}^{rr}_i &=-
\frac{2\Omega^2 {\gamma_\perp}}{{\gamma_\perp}^2 + \hat{V}_i^2}(\hat{\sigma}^{rr}_i - \hat{\sigma}^{gg}_i) - \gamma \hat{\sigma}^{rr}_i + \hat{\xi}.
\end{align}
In the following, we expand the leading fraction in eq.~\eqref{eq:dsrrdt-full} in a full basis of projection operators ${\Pi}_i(m)$ projecting onto $m$ Rydberg atoms in the facilitation \textit{sphere} of atom $i$. Each of these atoms has a relative distance to atom $i$ in the interval ${0 \leq r_{ij} \leq r_\text{f}}$ for all ${j \in \{1, \dots, m\}}$. Using this, the equation of motion takes the form
\begin{align}
\nonumber
\frac{\text{d}}{\text{d}t} \hat{\sigma}^{rr}_i &=
-{\Pi}_i(0) \frac{2\Omega^2{\gamma_\perp}}{{\gamma_\perp}^2 + \Delta^2} (\hat{\sigma}^{rr}_i - \hat{\sigma}^{gg}_i)
\\
\nonumber
&- {\Pi}_i(1)
\underbrace{
\frac{2\Omega^2 {\gamma_\perp}}{{\gamma_\perp}^2 + \Delta^2
\big(
(
\frac{r_\text{f}}{r_{1i}}
)^6 - 1
\big)^2
}
}_{(*)}
(\hat{\sigma}^{rr}_1 \hat{\sigma}^{rr}_i - \hat{\sigma}^{rr}_1 \hat{\sigma}^{gg}_i)
\\
\nonumber
&+ ...
\\
&- \gamma \hat{\sigma}^{rr}_i + \hat{\xi}.
\label{eq:ddt_srr_expanded}
\end{align}
All rates for more than one Rydberg atom in the facilitation sphere ${m > 1}$ are strongly suppressed due to blockade. As a result, we truncate the expansion at ${m=1}$.
The fraction $(*)$ in eq.~\eqref{eq:ddt_srr_expanded} is the distance dependent facilitation rate for an atom in the presence of one Rydberg atom. Here, we approximate this rate as being non-zero only if atom $i$ is in the facilitation \emph{shell} of the Rydberg atom ${r_{1i} \in S_\text{f} \equiv [r_\text{f} - \frac{\delta r_\text{f}}{2}, r_\text{f} + \frac{\delta r_\text{f}}{2}]}$. We express this as
\begin{align}
\hat{\sigma}^{rr}_1 \Pi_i(1) \to \hat{\sigma}^{rr}_1 \, \Xi_i(1) = \hat{\sigma}^{rr}_1 \times
\begin{cases}
1, \quad r_{1i} \in S_\text{f} \\
0, \quad \text{else}
\end{cases},
\end{align}
and obtain
\begin{align}
\nonumber
\frac{\text{d}}{\text{d}t} \hat{\sigma}^{rr}_i &=
-\tau\, {\Pi}_i(0) (\hat{\sigma}^{rr}_i - \hat{\sigma}^{gg}_i)
\\
\nonumber
&- \Gamma_\text{f} \, {\Xi}_i(1)
(\hat{\sigma}^{rr}_1 \hat{\sigma}^{rr}_i - \hat{\sigma}^{rr}_1 \hat{\sigma}^{gg}_i)
\\
&- \gamma \hat{\sigma}^{rr}_i + \hat{\xi},
\end{align}
with the maximal facilitation rate $\Gamma_\text{f} = \frac{2 \Omega^2}{{\gamma_\perp}}$ and the off-resonant excitation rate $\tau = \frac{2\Omega^2 \gamma_\perp}{{\gamma_\perp}^2 + \Delta^2}$.
Finally, we calculate the expectation value of the operator $\hat{\sigma}^{rr}_i$ with a double averaging over the quantum mechanical state and the ensemble of the many different atom positions in the gas. We will denote these
double averages as $\langle \langle \hat{\sigma}^{rr}_i \rangle \rangle$ and write
\begin{align}
\nonumber
\frac{\text{d}}{\text{d}t} \langle \langle \hat{\sigma}^{rr}_i \rangle \rangle &=
-\tau \langle \langle{\Pi}_i(0) (\hat{\sigma}^{rr}_i - \hat{\sigma}^{gg}_i) \rangle \rangle
\\
\nonumber
&- \Gamma_\text{f} \,
\langle \langle{\Xi}_i(1) \hat{\sigma}^{rr}_1
(\hat{\sigma}^{rr}_i - \hat{\sigma}^{gg}_i) \rangle \rangle
\\
&- \gamma
\langle \langle \hat{\sigma}^{rr}_i \rangle \rangle
\\
\nonumber
&\approx
-\tau \langle \langle {\Pi}_i(0) \rangle \rangle
\Bigl(\langle \langle \hat{\sigma}^{rr}_i \rangle \rangle - \langle \langle \hat{\sigma}^{gg}_i \rangle \rangle\Bigr)
\\
\nonumber
&- \Gamma_\text{f} \,
\langle \langle {\Xi}_i(1) \hat{\sigma}^{rr}_1 \rangle \rangle
\Bigl(\langle \langle \hat{\sigma}^{rr}_i \rangle \rangle - \langle \langle \hat{\sigma}^{gg}_i \rangle \rangle\Bigr)
\\
&- \gamma
\langle \langle \hat{\sigma}^{rr}_i \rangle \rangle.
\end{align}
Assuming a randomly distributed gas, we can approximate the probabilities $\langle \langle {\Pi}_i(m) \rangle \rangle$ as Poissonian with mean $\rho V_\textrm{f}$, i.e. $\langle \langle {\Pi}_i(m) \rangle \rangle = (\rho V_\textrm{f})^m \text{e}^{-\rho V_\textrm{f}}/m!$,
resulting in
\begin{align}
\langle \langle {\Pi}_i(0) \rangle \rangle
&=
\text{e}^{-\rho V_\textrm{f}}
\\
\langle \langle {\Xi}_i(1) \hat{\sigma}^{rr}_1 \rangle \rangle
&\equiv \frac{V_s}{V_f}
\langle \langle {\Pi}_i(1) \rangle \rangle
=
\rho V_\textrm{s} \, \text{e}^{-\rho V_\textrm{f}}.
\end{align}
The factor $V_s/V_f$ in the second line takes into account that only atoms in the facilitation shell contribute. We then perform the coarse-graining given by eqs.~\eqref{eq:coarse_grain_rho}~and~\eqref{eq:coarse_grain_n} and eventually arrive at the modified Langevin equation
\begin{align}
\label{eq:langevin_rho_full_high_temp}
\nonumber
\frac{\text{d}}{\text{d}t} \rho =
&-\kappa \text{e}^{-\rho V_\text{f}} \rho (2 \rho - n)
\\
&-\gamma \rho
-\tau (2\rho - n).
\end{align}
Furthermore, we have assumed here ${\text{e}^{-\rho V_\text{f}} \tau \approx \tau}$ as the off-resonant rate is only relevant when ${\rho V_\text{f} \ll 1}$.
Note that in the above derivation we have not taken into account the network structure of the atom distribution and thus eq.~\eqref{eq:langevin_rho_full_high_temp} is valid only in the high-temperature limit, where the gas can be considered in a continuum limit.
\begin{figure}[H]
\centering
\includegraphics[width=\columnwidth]{fig11.pdf}
\caption{Total densities $n$ (Fig.~a) and Rydberg densities $\rho$ (Figs.~b,c) over time from Langevin equation \eqref{eq:langevin_diehl} \cite{helmrich2020} (faint red, dashed), from improved high temperature Langevin equation \eqref{eq:langevin_rho_full_high_temp} (full red, solid), from improved frozen-gas Langevin equation \eqref{eq:langevin_rho_full}, and from Monte-Carlo simulations (blue dots) for a high temperature gas and frozen gas. }
\label{fig:modified-facilitation}
\end{figure}
For the frozen gas, the finite connectivity greatly reduces the facilitation rate. Taking into account that facilitation can only occur if the degree $k$ of the atom is nonzero, we modify the facilitation rate to
\begin{align}
\label{eq:fractured_gamma_fac}
\kappa \to \kappa \, P(k > 0).
\end{align}
For an ER network with average degree ${\langle k \rangle \ll 1}$, we can approximate ${P(k > 0) \approx \langle k \rangle}$. The resulting infection rate corresponds to the Kephart-White model \cite{kephart1992directed, chakrabarti2008epidemic}.
The full Langevin equation for the Rydberg density then reads
\begin{align}
\label{eq:langevin_rho_full}
\nonumber
\frac{\text{d}}{\text{d}t} \rho =
&-\kappa \text{e}^{-\rho V_\text{f}} P(k > 0) \rho (2 \rho - n)
\\
&-\gamma \rho
-\tau (2\rho - n),
\end{align}
with ${P(k>0) = 1 - \text{e}^{-n V_s}}$ for the frozen gas and ${P(k>0) = 1}$ at high temperatures.
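Omitting the noise term, eq.~\eqref{eq:langevin_rho_full} is an ordinary differential equation that can be integrated directly. The following R sketch (explicit Euler integration; all parameter values are assumed for illustration) contrasts the frozen-gas and high-temperature steady states, showing the reduced facilitation in the frozen gas.
\begin{verbatim}
rho_traj <- function(n, frozen, kappa = 1, gamma = 1, tau = 1e-4,
                     V_f = 1, V_s = 0.1, dt = 1e-3, Tmax = 50) {
  Pk <- if (frozen) 1 - exp(-n * V_s) else 1   # P(k > 0)
  steps <- as.integer(Tmax / dt)
  rho <- numeric(steps); rho[1] <- 1e-3 * n    # small seed density
  for (i in 2:steps) {
    r <- rho[i - 1]
    drho <- -kappa * exp(-r * V_f) * Pk * r * (2 * r - n) -
             gamma * r - tau * (2 * r - n)
    rho[i] <- r + dt * drho
  }
  rho
}
n <- 5
cat("steady state, high-T :", round(tail(rho_traj(n, FALSE), 1), 3), "\n")
cat("steady state, frozen :", round(tail(rho_traj(n, TRUE ), 1), 3), "\n")
\end{verbatim}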
In Fig.~\ref{fig:modified-facilitation} we compare the predictions of the modified Langevin equation~\eqref{eq:langevin_rho_full} with Monte-Carlo simulations, both in the frozen-gas limit and at high temperatures. One recognizes a much better agreement than with the original equation \eqref{eq:langevin_diehl}.
\section{Conclusion}
We studied the facilitation dynamics of Rydberg excitations in an ultra-cold gas of atoms. While in the homogeneous limit the system is expected to show a phase transition between an absorbing phase and an active phase, and -- in the presence of an additional loss channel from the Rydberg state -- self-organized criticality (SOC),
experiments with a gas of trapped $^{87}$Rb atoms at low temperatures show signs of scale invariant dynamics in an extended parameter regime. The latter is a clear signature of a Griffiths phase replacing the critical point of the absorbing-state phase transition. To understand and quantitatively describe the emergence of this Griffiths phase, we numerically
simulated the many-body Rydberg gas in the facilitation regime
through the use of Monte-Carlo simulations in the classical rate-equation approximation. We showed that the latter is well justified for the large dephasing characteristic of the experiment, even for a high-temperature gas. Since a Griffiths phase originates from heterogeneity in the system, we numerically and theoretically analyzed two limiting cases: (i) a high-temperature gas and
(ii) a frozen gas.
While in the high-temperature limit, a homogeneous
mean-field behavior is recovered, with a clear absorbing-state
phase transition and SOC, the facilitation dynamics in a low-temperature or frozen gas is governed by the presence of a network structure of atoms that can participate in the excitation spread.
Numerical simulations show characteristic power-law decay of Rydberg excitations in time if off-resonant excitations and atom losses are neglected.
We have shown that in the frozen gas the spread of excitations is constrained to a network resembling a random Erdős–Rényi (ER) graph.
Increasing the density of atoms, the ER network undergoes a percolation transition from a fragmented phase, in which the maximum size of connected clusters remains finite, to a phase where the size of the largest cluster scales with the size of the system.
A theoretical explanation of the Rydberg facilitation dynamics observed in Monte-Carlo simulations can then be given by mapping to a susceptible-infected-susceptible (SIS) epidemic model on such
an ER graph taking into account the effects of Rydberg blockade, which truncates the maximum Rydberg excitation density.
An active phase of self-sustained Rydberg excitations is only possible above the percolation threshold.
Below this threshold, an extended Griffiths phase emerges in the place of the (for homogeneous systems) expected absorbing-state phase transition. We showed that the modified SIS model quantitatively explains the observed power-law decay exponents as well as the overall dynamics of the Rydberg density.
Finally, we have expanded an existing macroscopic Langevin equation to include the effects of Rydberg blockade and the fragmentation of the network in the frozen gas limit. The modified Langevin equation
is in very good quantitative agreement with the microscopic Monte-Carlo simulations.
While the limits of a high-temperature and a frozen gas are well captured with our model, it does not yet allow the study of the crossover
between the two regimes. To this end, the Rydberg facilitation process needs to be mapped to a dynamical network, which is beyond the scope of the present work and will be the subject of future work.
Furthermore, in order to quantitatively understand the power-law exponents in the number distribution of Rydberg atoms in a given time interval observed in the experiment, it is necessary to extend the microscopic simulations to much larger system sizes matching those used in the experiments. To this end, different approaches, e.g. machine-learning algorithms, might be useful \cite{ohler2022towards}. Finally, the interplay between coherent quantum dynamics and dynamical network structures in
Rydberg facilitation under conditions where dephasing is much less dominant could give rise to very different dynamics \cite{mattioli2015classical,PhysRevLett.116.245701}. The latter requires, however, the development of new microscopic simulation techniques capable of incorporating quantum coherences in 3D Rydberg gases, at least in an approximate way \cite{PhysRevResearch.4.043136}.
\subsection*{Acknowledgement}
The authors thank Simon Ohler and Johannes Otterbach for fruitful discussions.
Financial support from the DFG through SFB TR 185, project number
277625399, is gratefully acknowledged.
\subsection*{Authors contributions}
DB and MF developed the theoretical models, DB performed all numerical simulations and developed the mapping to the ER network. MF, TN, and HO conceived the project. JB, PM, and DB performed the experiments guided by TN and HO. DB and MF wrote the initial version of the manuscript with support by JB. All authors discussed the results and contributed to the writing of the manuscript.
|
{
"arxiv_id": "2302.14230",
"language": "en",
"timestamp": "2023-03-01T02:06:08",
"url": "https://arxiv.org/abs/2302.14230",
"yymm": "2302"
} | \section{Introduction}
The power prior \citep{chen_2000} is a popular class of informative priors that allow the incorporation of historical data through a tempering of the likelihood.
It is constructed by raising the historical data likelihood to a power $a_0$, where $0 \le a_0 \le 1$.
The discounting parameter $a_0$ can be fixed or modelled as random.
When it is modelled as random and estimated jointly with other parameters of interest, the normalized power prior \citep{duan_2006} is recommended as it appropriately accounts for the normalizing constant necessary for forming the correct joint prior distribution \citep{neuens_2009}. Many extensions of the power prior and the normalized power prior have been developed.
\cite{Banbeta_2019} develop the dependent and robust dependent normalized power priors which allow dependent discounting parameters for multiple historical datasets.
When the historical data model contains only a subset of covariates currently of interest and the historical information may not be equally informative for all parameters in the current analysis, \cite{Boonstra_Barbaro_2020} propose an extension of the power prior that adaptively combines a prior based upon the historical information with a variance-reducing prior that shrinks parameter values toward zero.
The power prior and the normalized power prior have been shown to have several desirable properties.
\cite{ibrahim_2003} show that the power prior defines an optimal class of priors in the sense that it minimizes a convex combination of Kullback-Leibler (KL) divergences between a distribution based on no incorporation of historical data and a distribution based on completely pooling the historical and current data.
\cite{YE202229} prove that the normalized power prior minimizes the expected weighted KL divergence similar to the one in \cite{ibrahim_2003} with respect to the marginal distribution of the discounting parameter.
They also prove that if the prior on $a_0$ is non-decreasing and if the difference between the sufficient statistics of the historical and current data is negligible from a practical standpoint, the marginal posterior mode of $a_0$ is close to one.
\cite{Carvalho_Ibrahim_2021} show that the normalized power prior is always well-defined when the initial prior is proper, and that, viewed as a function of the discounting parameter, the normalizing constant is a smooth and strictly convex function.
\cite{Neelon_OMalley_2010} show through simulations that for large datasets, the normalized power prior may result in more downweighting of the historical data than desired.
\cite{Han_Ye_Wang_2022} point out that the normalizing constant might be infinite for $a_0$ values close to zero with conventionally used improper priors on $a_0$, in which case the optimal $a_0$ value might be lower than the suggested value. Despite the aforementioned research, there has not been any theoretical investigation of the asymptotic properties of the normalized power prior when the historical and current datasets are discrepant.
Many empirical Bayes-type approaches have been developed to adaptively determine the discounting parameter.
For example, Gravestock and Held \citep{Gravestock_Held_2017,Gravestock_Held_2019} propose to set $a_0$ to the value that maximizes the marginal likelihood.
\cite{Liu_2018} proposes choosing $a_0$ based on the p-value for testing the compatibility of the current and historical data.
\cite{Bennett_2021} propose using an equivalence probability weight and a weight based on tail area probabilities to assess the degree of agreement between the historical and current control data for cases with binary outcomes.
\cite{Pan_Yuan_Xia_2017} propose the calibrated power prior, where $a_0$ is defined as a function of a congruence measure between the historical and current data.
The function which links $a_0$ and the congruence measure is prespecified and calibrated through simulation. While these empirical Bayes approaches shed light on the choice of $a_0$, there has not been any fully Bayesian approach based on an optimal prior on $a_0$.
In this work, we first explore the asymptotic properties of the normalized power prior when the historical and current data are fully compatible (i.e., the sufficient statistics of the two datasets are equal) or incompatible (i.e., the sufficient statistics of the two datasets have some non-zero difference).
We prove that for generalized linear models (GLMs) utilizing a normalized power prior, the marginal posterior distribution of $a_0$ converges to a point mass at zero if there is any discrepancy between the historical and current data.
When the historical and current data are fully compatible, the asymptotic distribution of the marginal posterior of $a_0$ is derived for GLMs; we note that it does not concentrate around one.
Secondly, we propose a novel fully Bayesian approach to elicit the shape parameters of the beta prior on $a_0$ based on two optimality criteria, Kullback-Leibler (KL) divergence and mean squared error (MSE). For the first criterion, we propose as optimal the beta prior whose shape parameters result in a minimized weighted average of KL divergences between the marginal posterior for $a_0$ and user-specified target distributions based on hypothetical scenarios where there is no discrepancy and where there is a maximum tolerable discrepancy. This class of priors on $a_0$ based on the KL criterion is optimal in the sense that it is the best possible beta prior at balancing the dual objectives of encouraging borrowing when the historical and current data are compatible and limiting borrowing when they are in conflict. For the second criterion, we propose as optimal the beta prior whose shape parameters result in a minimized weighted average of the MSEs based on the posterior mean of the parameter of interest when its hypothetical true value is equal to its estimate using the historical data, or when it differs from its estimate by the maximum tolerable amount. We study the properties of the proposed approaches \textit{via} simulations for the \textit{i.i.d.} normal and Bernoulli cases as well as for the normal linear model.
Two real-world case studies of clinical trials with binary outcomes and covariates demonstrate the performance of the optimal priors compared to conventionally used priors on $a_0$, such as a uniform prior.
\section{Asymptotic Properties of the Normalized Power Prior}\label{sec2}
Let $D$ denote the current data and $D_0$ denote the historical data.
Let $\theta$ denote the model parameters and $L(\theta|D)$ denote a general likelihood function.
The power prior \citep{chen_2000} is formulated as
\begin{align*}
\pi(\theta|D_0, a_0) \propto L(\theta|D_0)^{a_0}\pi_0(\theta),
\end{align*}
where $0 \le a_0 \le 1$ is the discounting parameter which discounts the historical data likelihood, and $\pi_0(\theta)$ is the initial prior for $\theta$.
The discounting parameter $a_0$ can be fixed or modelled as random.
Modelling $a_0$ as random allows researchers to account for uncertainty when discounting historical data and to adaptively learn the appropriate level of borrowing.
\cite{duan_2006} propose the \emph{normalized power prior}, given by
\begin{align}\label{npp}
\pi(\theta, a_0|D_0) = \pi(\theta|D_0, a_0)\pi(a_0) = \frac{L(\theta|D_0)^{a_0}\pi_0(\theta)}{c(a_0)}\pi(a_0),
\end{align}
where $c(a_0)=\int L(\theta|D_0)^{a_0}\pi_0(\theta) d\theta$ is the normalizing constant.
The normalized power prior is thus composed of a conditional prior for $\theta$ given $a_0$ and a marginal prior for $a_0$.
Ideally, the posterior distribution of $a_0$ with the normalized power prior would asymptotically concentrate around zero when the historical and current data are in conflict, and around one when they are compatible.
In this section, we study the asymptotic properties of the normalized power prior for the exponential family of distributions as well as GLMs.
Specifically, we are interested in exploring the asymptotic behaviour of the posterior distribution of $a_0$ when the historical and current data are incompatible and when they are compatible, respectively.
\subsection{Exponential Family}\label{sec2:expfam}
First, we study the asymptotic properties of the normalized power prior for the exponential family of distributions.
The density of a random variable $Y$ in the one-parameter exponential family has the form
\begin{align}\label{expfam}
p(y|\theta) = q(y)\exp\left(y\theta-b(\theta)\right),
\end{align}
where $\theta$ is the canonical parameter and $q(\cdot)$ and $b(\cdot)$ are known functions.
Suppose $D=(y_1,\dots,y_n)$ is a sample of $n$ \textit{i.i.d.} observations from an exponential family distribution in the form of \eqref{expfam}.
The likelihood is then given by
\begin{align*}
L(\theta|D) = Q(D)\exp\left(\sum_{i=1}^ny_i\theta-nb(\theta)\right),
\end{align*}
where $Q(D) = \prod_{i=1}^{n}q(y_i)$.
Suppose $D_0=(y_{01},\dots,y_{0n_0})$ is a sample of $n_0$ \textit{i.i.d.} observations from the same exponential family.
The likelihood for the historical data raised to the power $a_0$ is
\begin{align*}
[L(\theta|D_0)]^{a_0} = Q(D_0)^{a_0}\exp\left(a_0\left[\sum_{i=1}^{n_0}y_{0i}\theta-n_0b(\theta)\right]\right),
\end{align*}
where $Q(D_0) = \prod_{i=1}^{n_0}q(y_{0i})$. Using the normalized power prior defined in \eqref{npp}, the joint posterior of $\theta$ and $a_0$ is given by
\begin{align*}
\pi(\theta,a_0|D,D_0) &\propto L(\theta|D)\pi(\theta,a_0|D_0) = L(\theta|D)\frac{L(\theta|D_0)^{a_0}\pi_0(\theta)}{c(a_0)}\pi(a_0).
\end{align*}
The marginal posterior of $a_0$ is given by
\begin{align}
\pi(a_0|D,D_0) = \int\pi(\theta,a_0|D,D_0)d\theta \propto \int L(\theta|D)\frac{L(\theta|D_0)^{a_0}\pi_0(\theta)}{c(a_0)}\pi(a_0)d\theta.\label{apost1}
\end{align}
With these calculations in place, the question now arises as to what prior should be given to $a_0$.
One commonly used class of priors on $a_0$ is the beta distribution \citep{chen_2000}.
Let $\alpha_0$ and $\beta_0$ denote the shape parameters of the beta distribution.
We first prove that the marginal posterior of $a_0$ \eqref{apost1} with $\pi(a_0)=\text{beta}(\alpha_0,\beta_0)$ converges to a point mass at zero for a fixed, non-zero discrepancy between $\bar{y}$ and $\bar{y}_0$.
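Before stating the result, we note that for \textit{i.i.d.} Bernoulli data with a conjugate beta initial prior on the success probability, all integrals above reduce to beta functions, so the marginal posterior \eqref{apost1} can be evaluated directly on a grid. The following R sketch (data values and hyperparameters are assumed purely for illustration) shows how the posterior mass of $a_0$ shifts with the discrepancy between $\bar{y}$ and $\bar{y}_0$:
\begin{verbatim}
## log marginal posterior of a0 (up to a constant); s, s0 = successes
log_post_a0 <- function(a0, s, n, s0, n0, alpha = 1, beta = 1,
                        eta1 = 1, eta2 = 1) {  # beta(eta1, eta2) prior on a0
  dbeta(a0, eta1, eta2, log = TRUE) +
    lbeta(s + a0 * s0 + alpha, (n - s) + a0 * (n0 - s0) + beta) -
    lbeta(a0 * s0 + alpha, a0 * (n0 - s0) + beta)  # subtract log c(a0)
}
a0 <- seq(0.001, 0.999, length.out = 500)
post_mean <- function(lp) {              # grid-based posterior mean
  w <- exp(lp - max(lp)); sum(a0 * w) / sum(w)
}
cat("E[a0], compatible data :",
    round(post_mean(log_post_a0(a0, s = 15, n = 30, s0 = 15, n0 = 30)), 3),
    "\nE[a0], conflicting data:",
    round(post_mean(log_post_a0(a0, s = 27, n = 30, s0 = 15, n0 = 30)), 3),
    "\n")
\end{verbatim}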
\newtheorem{thm}{Theorem}[section]
\newtheorem{lem}[thm]{Lemma}
\newtheorem{cor}{Corollary}[section]
\begin{thm}\label{nocov_th}
Suppose $y_1,\dots,y_n$ and $y_{01}, \dots, y_{0n_0}$ are independent observations from the same exponential family distribution \eqref{expfam}.
Suppose also that the difference in the estimates of the canonical parameter $\theta$ is fixed and equal to $\delta$, i.e., $|\dot{b}^{-1}(\bar{y})-\dot{b}^{-1}(\bar{y}_0)| = \delta$, and $\frac{n_0}{n}=r$, where $\delta > 0$ and $r > 0$ are constants, and $\dot{b}(\cdot)=\partial_{\theta}b(\cdot)$.
Then, the marginal posterior of $a_0$ using the normalized power prior \eqref{apost1} with a $\operatorname{beta}(\alpha_0, \beta_0)$ prior on $a_0$ converges to a point mass at $0$.
That is, $\lim\limits_{n\rightarrow \infty} \frac{\int_0^{\epsilon}\pi(a_0|D, D_0,\alpha_0,\beta_0)da_0}{\int_0^1 \pi(a_0|D, D_0,\alpha_0,\beta_0)da_0} = 1$ for any $\epsilon > 0$.
\end{thm}
\begin{proof}
See section \ref{nocov_proof}.
\end{proof}
Theorem \ref{nocov_th} asserts that the normalized power prior is sensitive to any discrepancy between the sufficient statistics in large samples, as the mass of the marginal distribution of $a_0$ will concentrate near zero as the sample size increases for any fixed difference $\delta$.
The natural question to then ask is whether Theorem~\ref{nocov_th} has a sort of converse in that the posterior should concentrate around one under compatibility.
We derive the asymptotic marginal posterior distribution of $a_0$ when $\bar{y}=\bar{y}_0$ and show that it does not converge to a point mass at one.
\begin{cor}\label{nocov_coro}
Suppose $y_1,\dots,y_n$ and $y_{01}, \dots, y_{0n_0}$ are independent observations from the same exponential family distribution \eqref{expfam}.
Suppose $\bar{y}=\bar{y}_0$ and $\frac{n_0}{n}=r$ where $r > 0$ is a constant.
The marginal posterior of $a_0$ using the normalized power prior, as specified in \eqref{apost1}, converges to
\begin{equation*}
\tilde{\pi}(a_0 | D, D_0) = \frac{\sqrt{\frac{ra_0}{ra_0+1}}\pi(a_0)}{\int_0^1 \sqrt{\frac{ra_0}{ra_0+1}}\pi(a_0) da_0}
\end{equation*}
as $n \rightarrow \infty$.
\end{cor}
\begin{proof}
See section \ref{nocov_coro_proof}.
\end{proof}
Corollary \ref{nocov_coro} shows that the normalized power prior fails to fully utilize the historical data when the means of the historical data and the current data are equal for a generic, non-degenerate prior on $a_0$.
However, if $\pi(a_0)$ is chosen to be concentrated near one, then the marginal posterior of $a_0$ may be concentrated near one.
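The asymptotic density of Corollary \ref{nocov_coro} is easy to examine numerically. For instance, with a uniform prior on $a_0$ and ${r=1}$ (both assumed for illustration), a short grid-based R computation shows that the posterior mean of $a_0$ stays well below one even under full compatibility:
\begin{verbatim}
r  <- 1                                  # ratio n0 / n
a0 <- seq(1e-4, 1, length.out = 1000)
dg <- a0[2] - a0[1]
dens <- sqrt(r * a0 / (r * a0 + 1))      # uniform pi(a0) assumed
dens <- dens / (sum(dens) * dg)          # normalize on the grid
cat("asymptotic E[a0] under full compatibility:",
    round(sum(a0 * dens) * dg, 3), "\n")
\end{verbatim}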
\subsection{Generalized Linear Models}
The ability to deal with non \textit{i.i.d.} data and incorporate covariates is crucial to the applicability of the normalized power prior; we thus now extend these results to generalized linear models (GLMs).
We first define the GLM with a canonical link and fixed dispersion parameter.
Let $y_i$ denote the response variable and $x_i$ denote a $p$-dimensional vector of covariates for subject $i=1, \dots, n$.
Let $\beta = (\beta_1, \dots, \beta_p)'$ be a $p$-dimensional vector of regression coefficients.
The GLM with a canonical link is given by
\begin{align}\label{glm}
p(y_i|x_i, \beta, \phi) = q(y_i, \phi)\exp\{\phi^{-1}[y_ix_i'\beta-b(x_i'\beta)]\}.
\end{align}
Without loss of generality, we assume $\phi=1$.
Let $D=\{(y_i,x_i), i=1,\dots,n\}\equiv(n,Y_{n\times 1},X_{n\times p})$ where $Y=(y_1,\dots,y_n)'$ and $X=(x_1,\dots,x_n)'$.
Assuming the $y_i$'s are (conditionally) independent, the likelihood is given by
\begin{align*}
L(\beta|D) = Q(Y)\exp\left(\sum_{i=1}^n y_i x_i'\beta-\sum_{i=1}^nb(x_i'\beta)\right),
\end{align*}
where $Q(Y) = \prod_{i=1}^{n}q(y_i, 1)$.
Let $\hat{\beta}$ denote the maximizer of the likelihood, obtained by solving $\partial_{\beta}\log L(\beta|D)=0$.
Let $D_0=\{(y_{0i},x_{0i}), i=1,\dots,n_0\}\equiv(n_0,Y_{0n_0\times 1},X_{0n_0\times p})$ where $Y_0=(y_{01},\dots,y_{0n_0})'$ and $X_0=(x_{01},\dots,x_{0n_0})'$.
Assuming the $y_{0i}$'s are (conditionally) independent, the historical data likelihood raised to the power $a_0$ is given by
\begin{align*}
[L(\beta|D_0)]^{a_0} = Q(Y_0)^{a_0}\exp\left(a_0\left[\sum_{i=1}^{n_0}y_{0i}x_{0i}'\beta-\sum_{i=1}^{n_0}b(x_{0i}'\beta)\right]\right),
\end{align*}
where $Q(Y_0) = \prod_{i=1}^{n_0}q(y_{0i}, 1)$.
Let $c^*(a_0)=\int L(\beta|D_0)^{a_0}\pi_0(\beta) d\beta$.
Using the normalized power prior defined in \eqref{npp}, the joint posterior of $\beta$ and $a_0$ is given by
\begin{align*}
\pi(\beta,a_0|D,D_0) &\propto L(\beta|D)\pi(\beta,a_0|D_0) = L(\beta|D)\frac{L(\beta|D_0)^{a_0}\pi_0(\beta)}{c^*(a_0)}\pi(a_0).
\end{align*}
Let $\hat{\beta}_0$ denote the posterior mode of $\beta$, obtained by solving $\partial_{\beta}\log \left[\frac{L(\beta|D_0)^{a_0}\pi_0(\beta)}{c^*(a_0)}\right]=0.$ The marginal posterior of $a_0$ is given by
\begin{align}
\pi(a_0|D,D_0) &= \int\pi(\beta,a_0|D,D_0)d\beta \propto \int L(\beta|D)\frac{L(\beta|D_0)^{a_0}\pi_0(\beta)}{c^*(a_0)}\pi(a_0)d\beta.\label{glmnpp}
\end{align}
Now we extend Theorem \ref{nocov_th} to GLMs.
\begin{thm}\label{glm_th}
Suppose $X$ is $n\times p$ of rank $p$ and $X_0$ is $n_0\times p$ of rank $p$. Suppose $\hat{\beta}-\hat{\beta}_0 = \delta$ where $\delta \neq 0$ is a constant vector, and $\frac{n_0}{n}=r$ where $r > 0$ is a constant scalar.
Assume $n\left[\frac{\partial^2\log [L(\beta|D)]}{\partial\beta_i\partial\beta_j}\right]^{-1}$ and $n_0a_0\left[\frac{\partial^2\log [L(\beta|D_0)^{a_0}\pi_0(\beta)]}{\partial\beta_i\partial\beta_j}\right]^{-1}$ do not depend on $n$ and $a_0$.
Then, the marginal posterior of $a_0$ using the normalized power prior \eqref{glmnpp} with a $\operatorname{beta}(\alpha_0, \beta_0)$ prior on $a_0$ converges to a point mass at zero.
That is, $\lim\limits_{n\rightarrow \infty} \frac{\int_0^{\epsilon}\pi(a_0|D, D_0, \alpha_0, \beta_0)da_0}{\int_0^1 \pi(a_0|D, D_0, \alpha_0, \beta_0)da_0} = 1$ for any $\epsilon > 0$.
\end{thm}
\begin{proof}
See section \ref{glm_proof}.
\end{proof}
Theorem \ref{glm_th} asserts that the normalized power prior is sensitive to discrepancies in the historical and current data in the presence of covariates.
The mass of the marginal distribution of $a_0$ will concentrate near zero as the sample size increases for any fixed discrepancy between the historical and current data, assuming $\frac{1}{n}X'X$ and $\frac{1}{n_0}X_0'X_0$ are fixed, i.e., $n\left[\frac{\partial^2\log [L(\beta|D)]}{\partial\beta_i\partial\beta_j}\right]^{-1}$ and $n_0a_0\left[\frac{\partial^2\log [L(\beta|D_0)^{a_0}\pi_0(\beta)]}{\partial\beta_i\partial\beta_j}\right]^{-1}$ do not depend on $n$ and $a_0$.
Next, we derive the asymptotic marginal posterior distribution of $a_0$ when the sufficient statistics and covariate (design) matrices of the historical and current data are equal.
\begin{cor}\label{glm_coro}
Suppose $X$ is $n\times p$ of rank $p$ and $X_0$ is $n_0\times p$ of rank $p$.
Let $Y=(y_1,\dots,y_n)'$ and $Y_0=(y_{01},\dots,y_{0n_0})'$. Consider the GLM in \eqref{glm}.
If $n=n_0$, $X=X_0$, and $X'Y = X_0'Y_0$, then the marginal posterior of $a_0$ using the normalized power prior, as specified in \eqref{glmnpp}, converges to
\begin{equation*}
\tilde{\pi}(a_0 | X, Y, X_0, Y_0) = \frac{\left(\frac{a_0}{a_0+1}\right)^{\frac{p}{2}}\pi(a_0)}{\int_0^1 \left(\frac{a_0}{a_0+1}\right)^{\frac{p}{2}}\pi(a_0) da_0},
\end{equation*}
as $n \rightarrow \infty$.
\end{cor}
\begin{proof}
See section \ref{glm_coro_proof}.
\end{proof}
Corollary \ref{glm_coro} states that the marginal posterior of $a_0$ using the normalized power prior does not converge to a point mass at one when the sufficient statistics and the covariates of the historical and current data are equal. We also observe that as $p$ approaches infinity, the marginal posterior of $a_0$ specified above converges to a point mass at one. The form of the asymptotic marginal posterior of $a_0$ suggests that the normalized power prior may be sensitive to overfitting when the historical and current datasets are compatible.
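As a quick numerical illustration of this limit (our own sketch, with a uniform prior on $a_0$ assumed), the posterior mean of $a_0$ under the asymptotic density $(a_0/(a_0+1))^{p/2}$ indeed approaches one as the number of covariates $p$ grows:
\begin{verbatim}
a0 <- seq(1e-4, 1, length.out = 1000)
dg <- a0[2] - a0[1]
for (p in c(1, 5, 20, 100)) {
  dens <- (a0 / (a0 + 1))^(p / 2)       # uniform pi(a0) assumed
  dens <- dens / (sum(dens) * dg)       # normalize on the grid
  cat(sprintf("p = %3d : E[a0] = %.3f\n", p, sum(a0 * dens) * dg))
}
\end{verbatim}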
In Theorem~\ref{glm_comp} we also relax the previous result by deriving the asymptotic marginal posterior distribution of $a_0$ assuming only that the sufficient statistics of the historical and current data are equal.
This means that the covariate matrices need not be equal so long as the sufficient statistics $X'Y$ and $X_0'Y_0$ are, increasing the applicability of the result.
\begin{thm}\label{glm_comp}
Suppose $X$ is $n\times p$ of rank $p$ and $X_0$ is $n_0\times p$ of rank $p$. Let $Y=(y_1,\dots,y_n)'$ and $Y_0=(y_{01},\dots,y_{0n_0})'$.
Consider the GLM in \eqref{glm}, where $\frac{n_0}{n}=r$ and $r > 0$ is a constant.
If $X'Y = X_0'Y_0$ and $X\neq X_0$, then the marginal posterior of $a_0$ using the normalized power prior, as specified in \eqref{glmnpp}, is asymptotically proportional to $$\pi(a_0)\cdot\frac{|\hat{\Sigma}_g|^{1/2}}{|\tilde{\Sigma}_k|^{1/2}}\exp\left\{-n[g_n(\hat{\beta})-k_n(\tilde{\beta})]\right\},$$
where the definitions of $g_n(\beta)$, $k_n(\beta)$ and $\frac{|\hat{\Sigma}_g|^{1/2}}{|\tilde{\Sigma}_k|^{1/2}}$ can be found in section \ref{glm_coro_proof} of the appendix.
\end{thm}
\begin{proof}
See section \ref{glmproof}.
\end{proof}
Corollary \ref{glm_coro} and Theorem \ref{glm_comp} show that, for GLMs, the marginal posterior of $a_0$ using the normalized power prior does not converge to a point mass at one when the sufficient statistics of the historical and current data are equal.
From Theorems \ref{nocov_th}-\ref{glm_comp}, we conclude that, asymptotically, the normalized power prior is sensitive to discrepancies between the historical and current data, but cannot fully utilize the historical information when there are no discrepancies.
\section{Optimal Beta Priors for $a_0$}
\subsection{Kullback-Leibler Divergence Criterion}\label{sec3}
In this section, we propose a prior based on minimizing the KL divergence of the marginal posterior of $a_0$ to two reference distributions. This prior is optimal in the sense that it is the best possible beta prior at balancing the dual objectives of encouraging borrowing when the historical and current data are compatible and limiting borrowing when they are in conflict.
Let $\bar{y}_0$ denote the mean of the historical data and $\bar{y}$ denote the mean of the hypothetical current data.
Let $\pi_1(a_0) \equiv \operatorname{beta}(c, 1)$ ($c \gg 1$ is fixed) and $\pi_2(a_0) \equiv \operatorname{beta}(1, c)$.
The distributions $\pi_1(a_0)$ and $\pi_2(a_0)$ represent two ideal scenarios, where $\pi_1(a_0)$ is concentrated near one and $\pi_2(a_0)$ is concentrated near zero.
The KL-based approach computes the hyperparameters ($\alpha_0$ and $\beta_0$) for the beta prior on $a_0$ that will minimize a convex combination of two KL divergences; one is the KL divergence between $\pi_1(a_0)$ and the marginal posterior of $a_0$ when $\bar{y}=\bar{y}_0$, while the other is the KL divergence between $\pi_2(a_0)$ and the marginal posterior of $a_0$ when there is a user-specified difference between $\bar{y}$ and $\bar{y}_0$.
Let $d=\bar{y} - \bar{y}_0$, representing the difference between the means of the hypothetical current data and the historical data.
Our approach is centered on a user-specified \textbf{maximum tolerable difference} (MTD), $d_{\textrm{MTD}}$. Let $\pi^*(a_0)$ denote the marginal posterior of $a_0$ when $d=0$. Let $\pi_{\textrm{MTD}}(a_0)$ denote the marginal posterior of $a_0$ when $d=d_{\textrm{MTD}}$. For $d=0$, we want $\pi^*(a_0)$ to resemble $\pi_1(a_0)$ and for $d = d_{\textrm{MTD}}$, we want $\pi_{\textrm{MTD}}(a_0)$ to resemble $\pi_2(a_0)$.
The distributions $\pi_1(a_0)$ and $\pi_2(a_0)$ have been chosen to correspond to cases with substantial and little borrowing, respectively.
Therefore, our objective is to solve for $\alpha_0>0$ and $\beta_0>0$ to minimize
\begin{equation*}
K(\alpha_0, \beta_0) = w KL(\pi^*(a_0), \pi_1(a_0)) + (1-w)KL(\pi_{\textrm{MTD}}(a_0), \pi_2(a_0)).
\end{equation*}
Here $0 < w < 1$ is a scalar weight, and $KL(p,q)$ for distributions $P$ and $Q$, with $P$ as the reference, is defined as
\begin{align*}
KL(p,q) &= \int \log\left(\frac{p(x)}{q(x)}\right)dP(x)= E_p[\log(p)] - E_p[\log(q)].
\end{align*}
The scalar $w$ weights the two competing objectives. For $w > 0.5$, the objective to encourage borrowing is given more weight, and for $w < 0.5$, the objective to limit borrowing is given more weight.
Below we demonstrate the simulation results using this method for the \textit{i.i.d.} normal case, the \textit{i.i.d.} Bernoulli case and the normal linear model.
We compare the marginal posterior of $a_0$ using the KL-based optimal prior with that using the uniform prior.
For all simulations in this section, we choose $w=0.5$ so that the two competing objectives are given equal weight.
We choose $c=10$ so that $\pi_1(a_0)$ and $\pi_2(a_0)$ represent cases with substantial and little borrowing, respectively.
\subsubsection{Normal \textit{i.i.d.} Case}
We assume $y_1,\dots,y_n$ and $y_{01},\dots,y_{0{n_0}}$ are \textit{i.i.d.} observations from N($\mu$, $\sigma^2$) where $\sigma^2=1$.
We choose $\bar{y}_0=1.5$ and $n=n_0=30$.
The objective function $K(\cdot, \cdot)$ is computed using numerical integration and optimization is performed using the \verb|optim()| function in (base) R \citep{r}.
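A minimal R sketch of this optimization is given below. It uses the closed-form marginal posterior of $a_0$ for the normal case under a flat initial prior on $\mu$, $\pi(a_0|D,D_0) \propto \pi(a_0)\sqrt{a_0 n_0/(n + a_0 n_0)}\exp\{-\tfrac{1}{2}\, n a_0 n_0 d^2/(n + a_0 n_0)\}$, and a log-parametrization of $(\alpha_0,\beta_0)$ to enforce positivity. Grid sizes and starting values are our assumptions, and the result need not match the reported optima exactly.
\begin{verbatim}
n <- 30; n0 <- 30; w <- 0.5; c_ref <- 10; d_MTD <- 1
a0g <- seq(1e-4, 1 - 1e-4, length.out = 400)  # integration grid
dg  <- a0g[2] - a0g[1]

post_a0 <- function(d, alpha0, beta0) {       # d = ybar - ybar0
  lp <- dbeta(a0g, alpha0, beta0, log = TRUE) +
        0.5 * log(a0g * n0 / (n + a0g * n0)) -
        0.5 * n * a0g * n0 * d^2 / (n + a0g * n0)
  p <- exp(lp - max(lp))
  pmax(p / (sum(p) * dg), 1e-300)             # normalized on the grid
}
kl <- function(p, q) sum(p * (log(p) - log(q))) * dg
K  <- function(par) {
  alpha0 <- exp(par[1]); beta0 <- exp(par[2]) # positivity via log scale
  w       * kl(post_a0(0,     alpha0, beta0), dbeta(a0g, c_ref, 1)) +
  (1 - w) * kl(post_a0(d_MTD, alpha0, beta0), dbeta(a0g, 1, c_ref))
}
opt <- optim(c(0, 0), K)                      # Nelder-Mead by default
cat("optimal (alpha0, beta0):", round(exp(opt$par), 2), "\n")
\end{verbatim}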
In Figure~\ref{sim_normal}, the first figure of each row plots the historical and current data likelihoods if the hypothetical degree of conflict is equal to $d_{\textrm{MTD}}$.
For each row of the figure below, the maximum tolerable difference $d_{\textrm{MTD}}$ is chosen to be $0.5$, $1$ and $1.5$, and the corresponding optimal prior is derived for each value of $d_{\textrm{MTD}}$.
For each optimal prior, we vary the observed sample mean, denoted by $\bar{y}_{\textrm{obs}}$, to evaluate the posterior based on the optimal prior for different observed current data.
We use $d_{\textrm{obs}}=\bar{y}_{\textrm{obs}}-\bar{y}_0$ to represent the difference between the means of the observed current data and the historical data.
For columns 2-4, $d_{\textrm{obs}}$ is chosen to be $0$, $1$ and $1.5$, respectively. Note that the values of $d_{\textrm{MTD}}$ and $d_{\textrm{obs}}$ are relative to the choices of $\sigma^2$, $n$ and $n_0$. For example, for larger $n$, $d_{\textrm{MTD}}$ would need to be decreased to produce a similar plot to Figure~\ref{sim_normal}.
\begin{figure}
\begin{center}
\includegraphics[width=16cm]{KL_normal_biom.pdf}
\end{center}
\caption{Simulation results for the normal \textit{i.i.d.} case, where $\sigma^2=1$, $\bar{y}_0=1.5$ and $n=n_0=30$.
The first figure of each row plots the historical (black solid line) and current (black dashed line) data likelihoods if the hypothetical degree of conflict is equal to $d_{\textrm{MTD}}$.
For each row of the figure, the maximum tolerable difference $d_{\textrm{MTD}}$ is chosen to be $0.5$, $1$ and $1.5$, and the corresponding optimal prior (pink dotted line) is derived for each value of $d_{\textrm{MTD}}$.
For each optimal prior, we vary $d_{\textrm{obs}}=\bar{y}_{\textrm{obs}}-\bar{y}_0$ to evaluate the performance of the optimal prior for different observed data.
For columns 2-4, $d_{\textrm{obs}}$ is chosen to be $0$, $1$ and $1.5$, respectively.
The black and blue curves correspond to $\pi_1(a_0)\equiv\operatorname{beta}(10, 1)$ and $\pi_2(a_0)\equiv\operatorname{beta}(1, 10)$, respectively.
The purple dashed line represents the marginal posterior of $a_0$ with the optimal prior for a given $d_{\textrm{obs}}$.
The grey dashed line plots the marginal posterior of $a_0$ with the uniform prior.}
\label{sim_normal}
\end{figure}
From columns 2-4, we observe that when $d_{\textrm{MTD}}=0.5$, very little conflict is tolerated, and the resulting optimal prior does not strongly encourage either borrowing substantially or borrowing little.
As $d_{\textrm{MTD}}$ becomes larger, larger conflict is allowed and the optimal prior shifts more towards $\pi_1(a_0)$.
We also observe that when $d_{\textrm{MTD}}=1$ (the optimal hyperparameters are $\alpha_0=1$ and $\beta_0=0.4$) and $d_{\textrm{MTD}}=1.5$ (the optimal hyperparameters are $\alpha_0=2.6$ and $\beta_0=0.5$), the marginal posterior of $a_0$ with the optimal prior more closely mimics the target distribution when $d_{\textrm{obs}}=0$, i.e., the observed current and historical data are fully compatible.
As $d_{\textrm{obs}}$ increases, the marginal posterior shifts toward zero.
This behaviour is highly desirable as it achieves both goals of encouraging borrowing when the datasets are compatible and limiting borrowing when they are incompatible.
We can compare the marginal posterior of $a_0$ using the optimal prior with that using a uniform prior in Figure \ref{sim_normal}.
We observe that while the marginal posterior on $a_0$ with the uniform prior is very responsive to conflict, it does not concentrate around one even when the datasets are fully compatible.
We conclude that when $d_{\textrm{MTD}}$ is chosen to be reasonably large, the optimal prior on $a_0$ achieves a marginal posterior that is close to the target distribution when the datasets are fully compatible, while remaining responsive to conflict in the data.
Table \ref{tab_normal} shows the posterior mean and variance of the mean parameter $\mu$ for various combinations of $d_{\textrm{MTD}}$ and $d_{\textrm{obs}}$ values corresponding to the scenarios in Figure \ref{sim_normal}. The posterior mean and variance of $\mu$ with a normalized power prior are computed using the R package \textit{BayesPPD} \citep{bayesppd}.
Again, $\bar{y}_0$ is fixed at $1.5$. Since $\bar{y}_{\textrm{obs}} \ge \bar{y}_0$, within each row, the posterior mean of $\mu$ is always smaller than $\bar{y}_{\textrm{obs}}$ due to the incorporation of $\bar{y}_0$.
We can also compare the results by column.
Note that for fixed $d_{\textrm{obs}}$ (or equivalently $\bar{y}_{\textrm{obs}}$), if more historical information is borrowed, the posterior mean of $\mu$ will be smaller.
When $d_{\textrm{obs}}=0$, the posterior mean stays constant while the variance decreases as $d_{\textrm{MTD}}$ increases.
If the maximum tolerable difference is large, more historical information is borrowed, leading to reduced variance. When $d_{\textrm{obs}}=0.5$, the posterior mean of $\mu$ decreases as more borrowing occurs when $d_{\textrm{MTD}}$ increases.
When $d_{\textrm{obs}}=1$ or $1.5$, the posterior mean of $\mu$ first increases and then decreases as $d_{\textrm{MTD}}$ increases.
This is the result of two competing phenomena: as $d_{\textrm{MTD}}$ increases, the optimal prior gravitates towards encouraging borrowing; however, since $d_{\textrm{obs}}$ is very large, the marginal posterior of $a_0$ moves toward zero even though the prior moves toward one.
In conclusion, we argue that the posterior estimates of $\mu$ with the optimal prior respond in a desirable fashion to changes in the data.
\begin{table}
\begin{center}
\caption{Posterior mean (variance) of $\mu$ for the normal \textit{i.i.d.} case}\label{tab_normal}
\begin{tabular}{lcccc}
\\
& $d_{\textrm{obs}}=0$ & $d_{\textrm{obs}}=0.5$ & $d_{\textrm{obs}}=1$ & $d_{\textrm{obs}}=1.5$\\[5pt]
\hline
$d_{\textrm{MTD}}=0.5$ & 1.5 (0.022) & 1.85 (0.026) & 2.32 (0.036) & 2.87 (0.036) \\
$d_{\textrm{MTD}}=1$ & 1.5 (0.019) & 1.82 (0.026) & 2.36 (0.042) & 2.93 (0.035)\\
$d_{\textrm{MTD}}=1.5$ & 1.5 (0.018) & 1.78 (0.020) & 2.19 (0.040) & 2.84 (0.039)\\
\end{tabular}
\end{center}
\end{table}
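The posterior summaries in Table \ref{tab_normal} can be reproduced (up to grid and Monte Carlo error) without sampling by averaging the conditional normal posterior of $\mu$ over the grid-based marginal posterior of $a_0$ from the sketch above. This is again a simplified stand-in for the \textit{BayesPPD} computation, valid under the same flat-initial-prior, known-variance assumptions.
\begin{verbatim}
# Posterior mean and variance of mu under the normalized power prior,
# obtained by averaging the conditional posterior
#   mu | a0, D, D0 ~ N((n*ybar + a0*n0*ybar0)/(n + a0*n0),
#                      sigma2/(n + a0*n0))
# over the grid-based marginal posterior of a0 computed above.
mu_posterior_moments <- function(ybar_obs, ybar0, sigma2, n, n0,
                                 alpha0, beta0) {
  post <- a0_marginal_posterior(ybar_obs, ybar0, sigma2, n, n0,
                                alpha0, beta0)
  w <- post$density / sum(post$density)       # grid weights
  m <- (n * ybar_obs + post$a0 * n0 * ybar0) / (n + post$a0 * n0)
  v <- sigma2 / (n + post$a0 * n0)
  mean_mu <- sum(w * m)
  var_mu  <- sum(w * (v + m^2)) - mean_mu^2   # law of total variance
  c(mean = mean_mu, variance = var_mu)
}

# Example: the d_MTD = 1, d_obs = 0.5 cell of the table above.
mu_posterior_moments(ybar_obs = 2, ybar0 = 1.5, sigma2 = 1,
                     n = 30, n0 = 30, alpha0 = 1, beta0 = 0.4)
\end{verbatim}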
\subsubsection{Bernoulli Model}
For the Bernoulli model, we assume $y_1,\dots,y_n$ and $y_{01},\dots,y_{0{n_0}}$ are \textit{i.i.d.} observations from a Bernoulli distribution with mean $\mu$.
Again, we choose $n=n_0=30$ and optimization is performed analogously to the normal case.
For each row of Figure \ref{sim_bern} below, the maximum tolerable difference $d_{\textrm{MTD}}$ is chosen to be $0.2$, $0.4$ and $0.6$, and the corresponding optimal prior is derived for each value of $d_{\textrm{MTD}}$.
For each optimal prior, we vary the observed $\bar{y}_{\textrm{obs}}$ to evaluate the performance of the optimal prior for different observed data.
For columns 2-4, $d_{\textrm{obs}}=\bar{y}_{\textrm{obs}}-\bar{y}_0$ is chosen to be $0$, $0.4$ and $0.6$, respectively.
Values of $\bar{y}_0$ and $\bar{y}_{\textrm{obs}}$ are chosen so that the variance stays constant for different values of $d_{\textrm{MTD}}$ or $d_{\textrm{obs}}$.
The optimal marginal prior and posterior of $a_0$ for Bernoulli data are similar to those of the normal model. We observe that when the datasets are perfectly compatible, i.e., $d_{\textrm{obs}}=0$, the marginal posterior of $a_0$ with the optimal prior concentrates around one when $d_{\textrm{MTD}}$ is relatively large.
When $d_{\textrm{obs}}$ increases to $0.4$ or $0.6$, the marginal posterior of $a_0$ concentrates around zero when $d_{\textrm{MTD}}$ is relatively large.
The optimal prior becomes increasingly concentrated near one as $d_{\textrm{MTD}}$ increases.
Compared to the marginal posterior with the uniform prior, the optimal prior on $a_0$ achieves a marginal posterior that closely mimics the target distribution when the datasets are fully compatible, while remaining responsive to conflict in the data.
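An analogous closed-form reduction is available for the Bernoulli case. The sketch below assumes a flat $\operatorname{beta}(1,1)$ initial prior on $\mu$ (an assumption of this illustration); the normalized power prior for $\mu$ given $a_0$ is then $\operatorname{beta}(a_0 s_0+1, a_0(n_0-s_0)+1)$, where $s_0$ is the number of historical successes, and the marginal posterior of $a_0$ involves only (log) beta functions.
\begin{verbatim}
# Marginal posterior of a0 for the Bernoulli case (sketch assuming a
# flat beta(1,1) initial prior on mu).  With s0 historical successes,
# mu | a0, D0 ~ beta(a0*s0 + 1, a0*(n0 - s0) + 1), and the current-data
# marginal likelihood given a0 is beta-binomial.
a0_marginal_posterior_bern <- function(s, n, s0, n0, alpha0, beta0,
                                       grid = seq(1e-4, 1, by = 1e-4)) {
  log_ml <- lbeta(s + grid * s0 + 1, n - s + grid * (n0 - s0) + 1) -
            lbeta(grid * s0 + 1, grid * (n0 - s0) + 1)
  log_dens <- dbeta(grid, alpha0, beta0, log = TRUE) + log_ml
  dens <- exp(log_dens - max(log_dens))   # stabilize before normalizing
  list(a0 = grid, density = dens / (sum(dens) * diff(grid[1:2])))
}
\end{verbatim}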
\begin{figure}
\begin{center}
\includegraphics[width=16cm]{KL_bern_biom.pdf}
\end{center}
\caption{Simulation results for the Bernoulli \textit{i.i.d.} case, where $n=n_0=30$. The first figure of each row plots the historical (black solid line) and current (black dashed line) data likelihoods if the hypothetical degree of conflict is equal to $d_{\textrm{MTD}}$.
For each row of the figure, the maximum tolerable difference $d_{\textrm{MTD}}$ is chosen to be $0.2$, $0.4$ and $0.6$, and the corresponding optimal prior (pink dotted line) is derived for each value of $d_{\textrm{MTD}}$.
For each optimal prior, we vary $d_{\textrm{obs}}=\bar{y}_{\textrm{obs}}-\bar{y}_0$ to evaluate the performance of the optimal prior for different observed data.
For columns 2-4, $d_{\textrm{obs}}$ is chosen to be $0$, $0.4$ and $0.6$, respectively.
The black and blue curves correspond to $\pi_1(a_0)\equiv\operatorname{beta}(10, 1)$ and $\pi_2(a_0)\equiv\operatorname{beta}(1, 10)$, respectively.
The purple dashed line represents the marginal posterior of $a_0$ with the optimal prior for a given $d_{\textrm{obs}}$.
The grey dashed line plots the marginal posterior of $a_0$ with the uniform prior.}
\label{sim_bern}
\end{figure}
\subsubsection{Normal Linear Model}
Suppose $y_{01},\dots,y_{0{n_0}}$ are independent observations from the historical data where $y_{0i} \sim N(\beta_0 + \beta_1x_{0i}, \sigma^2)$ for the $i$-th observation and $x_{0i}$ is a single covariate.
Also suppose $y_1,\dots,y_n$ are independent observations from the current data where $y_j \sim N(\beta_0 + \beta_1x_j + d_{\textrm{MTD}}, \sigma^2)$ for the $j$-th observation and $x_j$ is a single covariate.
We vary $d_{\textrm{MTD}}$ to represent different degrees of departure of the intercept of the simulated current data from the intercept of the historical data.
We choose $\beta_0=1.5$, $\beta_1=-1$, $\sigma^2=1$ and $n=n_0=30$. We choose $d_{\textrm{MTD}}=0.1, 0.5$, and $1$ and $d_{\textrm{obs}}=0, 0.5$, and $1$.
The objective function $K$ is computed using Monte Carlo integration and optimization is performed using the \verb|optim()| function in R.
Figure \ref{sim_nlm} shows the optimal prior and optimal posterior for $a_0$ as well as the posterior of $a_0$ with the uniform prior for various $d_{\textrm{MTD}}$ and $d_{\textrm{obs}}$ values.
We observe that when the datasets are perfectly compatible, i.e., $d_{\textrm{obs}}=0$, the marginal posterior of $a_0$ with the optimal prior concentrates around one when $d_{\textrm{MTD}}$ is relatively large.
When $d_{\textrm{obs}}$ increases to $1$, the marginal posterior of $a_0$ concentrates around zero.
The optimal prior becomes increasingly concentrated near one as $d_{\textrm{MTD}}$ increases.
Compared to the marginal posterior with the uniform prior, the optimal prior for $a_0$ achieves a marginal posterior that closely mimics the target distribution when the datasets are fully compatible, while remaining responsive to conflict in the data.
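The same one-dimensional reduction applies to the normal linear model. The hedged sketch below assumes known $\sigma^2$ and a flat initial prior on the regression coefficients (assumptions of this illustration, not necessarily of the full analysis); the conditional power prior for the coefficients is then Gaussian and the current-data marginal likelihood given $a_0$ is multivariate normal.
\begin{verbatim}
# Marginal posterior of a0 for the normal linear model (sketch assuming
# known sigma^2 and a flat initial prior on the coefficients).  Given
# a0, the power prior for the coefficients is
# N(betahat0, sigma2 * solve(a0 * t(X0) %*% X0)), so the marginal
# likelihood of the current responses is multivariate normal.
a0_marginal_posterior_lm <- function(y, X, y0, X0, sigma2, alpha0, beta0,
                                     grid = seq(0.01, 1, by = 0.01)) {
  XtX0  <- crossprod(X0)
  bhat0 <- solve(XtX0, crossprod(X0, y0))
  r <- y - drop(X %*% bhat0)
  log_ml <- sapply(grid, function(a0) {
    S <- sigma2 * (diag(length(y)) + X %*% solve(a0 * XtX0, t(X)))
    -0.5 * (as.numeric(determinant(S, logarithm = TRUE)$modulus) +
            drop(crossprod(r, solve(S, r))) +
            length(y) * log(2 * pi))
  })
  log_dens <- dbeta(grid, alpha0, beta0, log = TRUE) + log_ml
  dens <- exp(log_dens - max(log_dens))
  list(a0 = grid, density = dens / (sum(dens) * diff(grid[1:2])))
}
\end{verbatim}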
\begin{figure}
\begin{center}
\includegraphics[width=16cm]{KL_nlm_biom.pdf}
\end{center}
\caption{Simulation results for the normal linear model with one covariate where $\beta_0=1.5$, $\beta_1=-1$, $\sigma^2=1$ and $n=n_0=30$.
The first figure of each row shows the historical (black solid line) and current (black dashed line) data likelihoods as a function of the intercept if the hypothetical degree of conflict is equal to $d_{\textrm{MTD}}$.
For each row of the figure, the maximum tolerable difference $d_{\textrm{MTD}}$ is chosen to be $0.1$, $0.5$ and $1$, and the corresponding optimal prior (pink dotted line) is derived for each value of $d_{\textrm{MTD}}$.
For each optimal prior, we vary $d_{\textrm{obs}}$ to represent different degrees of departure of the intercept of the current data from the intercept of the historical data.
For columns 2-4, $d_{\textrm{obs}}$ is chosen to be $0$, $0.5$ and $1$, respectively.
The black and blue curves correspond to $\pi_1(a_0)\equiv\operatorname{beta}(10, 1)$ and $\pi_2(a_0)\equiv\operatorname{beta}(1, 10)$, respectively.
The purple dashed line represents the marginal posterior of $a_0$ with the optimal prior for a given $d_{\textrm{obs}}$.
The grey dashed line plots the marginal posterior of $a_0$ with the uniform prior.}
\label{sim_nlm}
\end{figure}
\subsection{Mean Squared Error Criterion}\label{sec4}
In this section, we derive the optimal prior for $a_0$ based on minimizing the MSE. This prior is optimal in the sense that it minimizes the weighted average of the MSEs of the posterior mean of the parameter of interest when its hypothetical true value is equal to its estimate using the historical data, or when it differs from its estimate by the maximum tolerable amount.
Suppose $y_1,\dots,y_n$ and $y_{01},\dots,y_{0{n_0}}$ are observations from a distribution with mean parameter $\mu$. Let $\mu^*$ denote the true value of $\mu$. Let $\bar{\mu}$ denote the posterior mean of $\mu$ using the normalized power prior.
Then, the MSE of $\bar{\mu}$ is
\begin{align*}
\text{MSE}(\mu^*)=&\int\left[\bar{\mu}(y) - \mu^*\right]^2p(y|\mu^*)dy.
\end{align*}
In the regression setting, $\mu$ is replaced by the regression coefficients $\beta$.
Let $\bar{y}_0$ denote the mean of the historical data.
We aim to find the hyperparameters, $\alpha_0$ and $\beta_0$, for the beta prior for $a_0$ that will minimize
\begin{equation*}
w\text{MSE}(\mu^*=\bar{y}_0)+(1-w)\text{MSE}(\mu^*=\bar{y}_0+d_{\textrm{MTD}}),
\end{equation*}
where $d_{\textrm{MTD}}$ is the maximum tolerable difference.
Again, we use $d_{\textrm{obs}}=\bar{y}_{\textrm{obs}}-\bar{y}_0$ to represent the difference between the means of the observed current data and the historical data.
\subsubsection{Normal \textit{i.i.d.} Case}\label{sec4.1}
We demonstrate the use of this criterion for the normal \textit{i.i.d.} case.
Suppose $y_1,\dots,y_n$ and $y_{01},\dots,y_{0{n_0}}$ are \textit{i.i.d.} observations from N($\mu$, $\sigma^2$) where $\sigma^2=1$ and $n=n_0=30$.
In this example, we fix $\mu^*$ and the mean of the hypothetical current data, $\bar{y}$, at $1.5$, and define $d_{\textrm{MTD}}=\bar{y}_0 -\bar{y}$ and $d_{\textrm{obs}}=\bar{y}_0 -\bar{y}_{\textrm{obs}}$.
The posterior mean of $\mu$ is computed using Monte Carlo integration and optimization is performed using a grid search.
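As an illustration of this procedure (deliberately simple and not optimized for speed), the weighted-MSE objective displayed above can be approximated by Monte Carlo and minimized by grid search, reusing the \texttt{mu\_posterior\_moments()} helper from the earlier sketch. The shift is applied as $\mu^*=\bar{y}_0+d_{\textrm{MTD}}$ following the displayed objective; by symmetry of the normal model, the direction of the shift is immaterial here.
\begin{verbatim}
# Illustrative grid search for the MSE criterion in the normal i.i.d.
# case, reusing mu_posterior_moments() from the earlier sketch.  Each
# MSE(mu*) is approximated by Monte Carlo over current sample means
# drawn with true mean mu*; nrep is kept small for illustration.
weighted_mse <- function(alpha0, beta0, ybar0, d_MTD, sigma2, n, n0,
                         w = 0.5, nrep = 200) {
  mse_at <- function(mu_star) {
    ybar_sim <- rnorm(nrep, mu_star, sqrt(sigma2 / n))
    est <- sapply(ybar_sim, function(yb)
      mu_posterior_moments(yb, ybar0, sigma2, n, n0,
                           alpha0, beta0)["mean"])
    mean((est - mu_star)^2)
  }
  w * mse_at(ybar0) + (1 - w) * mse_at(ybar0 + d_MTD)
}

hyper <- expand.grid(alpha0 = seq(0.5, 6, by = 0.5),
                     beta0  = seq(0.5, 6, by = 0.5))
hyper$obj <- mapply(weighted_mse, hyper$alpha0, hyper$beta0,
                    MoreArgs = list(ybar0 = 1.5, d_MTD = 1,
                                    sigma2 = 1, n = 30, n0 = 30))
hyper[which.min(hyper$obj), ]
\end{verbatim}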
The optimal prior, optimal posterior, and the posterior using the uniform prior for $a_0$ are plotted in Figure \ref{sim_mse}.
When $d_{\textrm{MTD}}=0.5$, the optimal prior is unimodal with mode around $0.3$.
When $d_{\textrm{MTD}}=1$, the optimal prior is concentrated near zero. When $d_{\textrm{MTD}}=1.5$, the optimal prior is U-shaped, favouring either strong or weak borrowing. When $d_{\textrm{MTD}}$ is small, the algorithm cannot distinguish between the two competing scenarios in the objective function, and the resulting optimal prior concentrates around $0.5$.
When $d_{\textrm{MTD}}$ is large, the optimal prior will favour the two scenarios equally.
For columns 2-4, $d_{\textrm{obs}}$ is chosen to be $0$, $1$ and $1.5$. The marginal posterior using the optimal prior concentrates more around zero as $d_{\textrm{obs}}$ increases for a given $d_{\textrm{MTD}}$.
Comparing Figures \ref{sim_mse} and \ref{sim_normal}, we observe that the optimal prior derived using the MSE criterion is more conservative than that derived using the KL criterion, in the sense that it tends to discourage borrowing.
\begin{figure}
\begin{center}
\includegraphics[width=16cm]{MSE_plot_biom.pdf}
\end{center}
\caption{Simulation results for the normal \textit{i.i.d.} case when minimizing a convex combination of MSEs when $n=n_0=30$.
The first figure of each row shows the historical (black solid line) and current (black dashed line) data likelihoods if the hypothetical degree of conflict is equal to $d_{\textrm{MTD}}$.
The mean of the hypothetical current data is fixed at $1.5$.
For each row of the figure, the maximum tolerable difference $d_{\textrm{MTD}}$ is chosen to be $0.5$, $1$ and $1.5$, and the corresponding optimal prior (pink dotted line) is derived for each value of $d_{\textrm{MTD}}$.
For each optimal prior, we vary $d_{\textrm{obs}}=\bar{y}_0-\bar{y}_{\textrm{obs}}$. For columns 2-4, $d_{\textrm{obs}}$ is chosen to be $0$, $1$ and $1.5$, respectively.
The purple dashed line represents the marginal posterior of $a_0$ with the optimal prior for a given $d_{\textrm{obs}}$.
The grey dashed line plots the marginal posterior of $a_0$ with the uniform prior.}
\label{sim_mse}
\end{figure}
Table \ref{tab_mse_1} shows the MSE for the optimal prior, $\operatorname{beta}(1,1)$ (uniform) and $\operatorname{beta}(2,2)$ as well as the percent reduction of MSE of the optimal prior compared to the uniform prior.
We can see that the percent reduction of MSE increases as $d_{\textrm{MTD}}$ increases.
Table \ref{tab_mse} displays the decomposition of MSE into bias squared and variance for the three choices of priors.
When $d_{\textrm{MTD}}=0.5$ or $1$, the prior discourages borrowing, which results in smaller bias and larger variance.
When $d_{\textrm{MTD}}=1.5$, the model can distinguish easily between the two contrasting objectives, leading to smaller bias and smaller variance.
Additional simulation results for varying choices of $n$ and $n_0$ are provided in section B of the appendix.
\begin{table}
\begin{center}
\caption{MSE for different prior choices and percent reduction of MSE of the optimal prior compared to the uniform prior}\label{tab_mse_1}
\begin{tabular}{lcccc}
\\
& Optimal Prior & Beta$(1,1)$ & Beta$(2,2)$ & Percent Reduction of MSE,\\
&&&& Optimal Prior vs. $\operatorname{beta}(1,1)$ \\[5pt]
\hline
$d_{\textrm{MTD}}=0.5$ & 0.054 & 0.057 & 0.057 & 5\% \\
$d_{\textrm{MTD}}=1$ & 0.063 & 0.069 & 0.079 & 9\% \\
$d_{\textrm{MTD}}=1.5$ & 0.052 & 0.059 & 0.067 & 12\% \\
\end{tabular}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{Bias and variance decomposition for different prior choices}\label{tab_mse}
\begin{tabular}{lccc}
& Optimal Prior & Beta$(1,1)$ & Beta$(2,2)$ \\
\hline
Bias$^2$\\[5pt]
$d_{\textrm{MTD}}=0.5$ & 0.011 & 0.015 & 0.018 \\
$d_{\textrm{MTD}}=1$ & 0.005 & 0.012 & 0.025 \\
$d_{\textrm{MTD}}=1.5$ & 0.003 & 0.006 & 0.015 \\[5pt]
Variance&&&\\[5pt]
$d_{\textrm{MTD}}=0.5$ & 0.043 & 0.042 & 0.039 \\
$d_{\textrm{MTD}}=1$ & 0.058 & 0.057 & 0.054 \\
$d_{\textrm{MTD}}=1.5$ & 0.049 & 0.053 & 0.052 \\
\end{tabular}
\end{center}
\end{table}
\section{Case Studies}\label{sec5}
We now illustrate the proposed methodologies by analysing two clinical trial case studies.
First, we study an important application in a pediatric trial where historical data on adults are available. This setting is of particular importance due to the difficulty of enrolling pediatric patients in clinical trials \citep{fda_guide}.
Then, we study a classical problem in the analysis of clinical trials: using information from a previous study.
This is illustrated with data on trials of interferon treatment for melanoma.
\subsection{Pediatric Lupus Trial}
Enrolling patients for pediatric trials is often difficult due to the small number of available patients, parental concern regarding safety and technical limitations \citep{psioda_ped}. For many pediatric trials, additional information must be incorporated for any possibility of establishing efficacy \citep{psioda_ped}. The use of Bayesian methods is natural for extrapolating adult data in pediatric trials through the use of informative priors, and is demonstrated in FDA guidance on complex innovative designs \citep{fda}.
Belimumab (Benlysta) is a biologic for the treatment of adults with active, autoantibody-positive systemic lupus erythematosus (SLE).
It was proposed that the indication for belimumab could be expanded to include the treatment of children \citep{psioda_ped}.
The clinical trial PLUTO \citep{ped_2020} was conducted to examine the effect of belimumab on children 5 to 17 years of age with active, seropositive SLE who are receiving standard therapy. The PLUTO study has a small sample size due to the rarity of childhood-onset SLE. Two previous phase 3 trials, BLISS-52 and BLISS-76 \citep{Furie_2011, Navarra_2011}, established the efficacy of belimumab plus standard therapy for adults. The FDA review of the PLUTO trial submission used data from the adult trials to inform the approval decision \citep{psioda_ped}. All three trials employ the same composite primary outcome, the SLE Responder Index (SRI-4).
We conduct a Bayesian analysis of the PLUTO study incorporating information from the adult studies, BLISS-52 and BLISS-76, using a normalized power prior.
We derive the optimal priors on $a_0$ based on the KL criterion and the MSE criterion.
Our parameter of interest is the treatment effect of Belimumab for children, denoted by $\beta$.
The total sample size of the pooled adult data (BLISS-52 and BLISS-76) is $n_0=1125$ and the treatment effect is $0.481$.
We choose $d_{\textrm{MTD}}=0.481$ to be the maximum tolerable difference.
The pediatric data has a sample size of $92$ and the estimated treatment effect is $0.371$.
We use the asymptotic normal approximation to the logistic regression model
with one covariate (the treatment indicator).
We choose $w=0.5$ and $n=100$ (sample size of the simulated current dataset).
For the KL criterion, the objective function $K$ is computed using Monte Carlo integration and optimization is performed using the \verb|optim()| function in R. For the MSE criterion, the posterior mean of $\beta$ is computed using the R package \textit{hdbayes} \citep{hdbayes} and optimization is performed using a grid search where the values of $\alpha_0$ and $\beta_0$ range from $0.5$ to $6$ with an increment of $0.5$.
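For readers wishing to reproduce this analysis approximately, the asymptotic normal approximation reduces the problem to the one-dimensional normal setting: the estimated log odds ratios are treated as normally distributed around the true treatment effect. The sketch below is our own simplified stand-in (not the \textit{hdbayes} implementation); the standard errors of the two estimates come from the fitted logistic models and are not reported here, so they are deliberately left symbolic rather than invented.
\begin{verbatim}
# Hedged sketch of the approximate analysis: under the asymptotic
# normal approximation, the estimated log odds ratios betahat (PLUTO)
# and betahat0 (pooled BLISS) are treated as normal around the true
# treatment effect, reducing the problem to the normal case above.
# The standard errors se and se0 come from the fitted logistic models
# and are not reported here, so they are left symbolic.
a0_marginal_posterior_asym <- function(betahat, betahat0, se, se0,
                                       alpha0, beta0,
                                       grid = seq(1e-4, 1, by = 1e-4)) {
  marg_lik <- dnorm(betahat, mean = betahat0,
                    sd = sqrt(se^2 + se0^2 / grid))
  dens <- dbeta(grid, alpha0, beta0) * marg_lik
  list(a0 = grid, density = dens / (sum(dens) * diff(grid[1:2])))
}
# e.g. a0_marginal_posterior_asym(0.371, 0.481, se, se0, 5.5, 5.5)
\end{verbatim}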
The optimal priors derived using the KL criterion and MSE criterion are displayed in Figure \ref{real_ped}.
The optimal prior derived using the KL criterion is $\operatorname{beta}(5.5, 5.5)$, which is a unimodal and symmetric distribution centred at $0.5$.
The optimal prior derived using the MSE criterion is $\operatorname{beta}(2, 5)$, which is a unimodal distribution with mode around $0.2$.
Table \ref{tab:ped_beta} provides the posterior mean, standard deviation and 95\% credible interval for $\beta$ using the optimal priors and several other beta priors for comparison.
We observe that while the posterior means remain the same, the posterior standard deviation is lowest with the optimal prior derived using the KL criterion.
In this case, the optimal prior using the KL criterion borrowed the most historical information out of the five prior choices being considered.
\begin{figure}
\begin{center}
\includegraphics[width=16cm]{ped_plots_biom.pdf}
\end{center}
\caption{After combining studies BLISS-52 and BLISS-76 for adults, the total sample size is $n_0=1125$ and log odds ratio for treatment vs. control group is $0.481$.
We choose $d_{\textrm{MTD}}=0.481$ to be the maximum tolerable difference.
The pediatric data has a sample size of $n=92$.
The actual observed log odds ratio is $0.371$.
The figure on the left displays the optimal prior (pink dotted line) and posterior (purple dashed line) derived using the KL criterion.
The figure on the right displays the optimal prior for $a_0$ and the posterior derived using the MSE criterion.
The posterior of $a_0$ using the uniform prior (grey dashed line) is also shown.}
\label{real_ped}
\end{figure}
\begin{table}
\caption{Pediatric lupus trial: posterior mean, standard deviation, and 95\% credible interval for $\beta$}
\begin{center}
\begin{tabular}{lccc}
& Mean & SD & 95\% Credible Interval \\[5pt]
\hline
$\operatorname{beta}(5.5, 5.5)^1$ & 0.47 & 0.16 & (0.15, 0.79) \\
$\operatorname{beta}(2, 5)^2$ & 0.47 & 0.21 & (0.04, 0.89) \\
$\operatorname{beta}(1, 1)$& 0.47 & 0.18 & (0.12, 0.83) \\
$\operatorname{beta}(2, 2)$ & 0.47 & 0.17 & (0.13, 0.79) \\
$\operatorname{beta}(0.5, 0.5)$ & 0.47 & 0.17 & (0.12, 0.81)
\end{tabular}
\end{center}
\label{tab:ped_beta}
$^1$ Optimal by the KL criterion\\
$^2$ Optimal by the MSE criterion
\end{table}
\subsection{Melanoma Trial}
Interferon Alpha-2b (IFN) is an adjuvant chemotherapy for deep primary or regionally metastatic melanoma.
IFN was used in two phase 3 randomized controlled clinical trials, E1684 and E1690 \citep{mel_1996}.
In this example, we choose overall survival (indicator for death) as the primary outcome.
We conduct a Bayesian analysis of the E1690 trial incorporating information from the E1684 trial, using a normalized power prior.
We include three covariates in the analysis, the treatment indicator, sex and the logarithm of age.
As before, we obtain the optimal priors for $a_0$ based on both the KL criterion and the MSE criterion.
Our parameter of interest is the treatment effect of IFN, denoted by $\beta$.
The total sample size of the E1684 trial is $n_0=285$ and the treatment effect is $-0.423$.
We choose $d_{\textrm{MTD}}=0.423$ to be the maximum tolerable difference.
The E1690 trial has a sample size of $427$ and the treatment effect is $0.098$.
We use the asymptotic normal approximation to the logistic regression model
with three covariates.
We choose $w=0.5$ and $n=400$ (the sample size of the simulated current dataset).
For the KL criterion, the objective function $K$ is computed using Monte Carlo integration and optimization is performed using the \verb|optim()| function in R.
For the MSE criterion, the posterior mean of $\beta$ is computed using the R package \textit{hdbayes} \citep{hdbayes} and optimization is performed using a grid search where the values of $\alpha_0$ and $\beta_0$ range from $0.5$ to $6$ with an increment of $0.5$.
The optimal priors derived using the KL criterion and MSE criterion are displayed in Figure \ref{real_mel}.
The optimal prior derived using the KL criterion is $\operatorname{beta}(0.7, 1.5)$, which has mass around zero.
For the MSE criterion, the optimal prior derived is $\operatorname{beta}(5.5, 3)$, which is unimodal with mode around $0.7$.
This is likely due to the fact that $d_{\textrm{MTD}}$ is small relative to the total sample size of $712$ -- see also simulations in section B of the appendix.
Because the observed difference is larger than $d_{\textrm{MTD}}$, the marginal posterior of $a_0$ has its mode around $0.4$, discouraging borrowing more strongly than the prior does.
Table \ref{tab:mel_beta} provides the posterior mean, standard deviation and 95\% credible interval for $\beta$ using the optimal priors and several other beta priors for comparison.
The posterior mean is the largest for the optimal prior derived by the KL criterion and the $\operatorname{beta}(0.5, 0.5)$ prior, indicating that the least historical information is borrowed.
The optimal prior derived using the MSE criterion borrows the most, resulting in the smallest posterior mean and smallest variance.
\begin{figure}
\begin{center}
\includegraphics[width=16cm]{mel_plots_biom.pdf}
\end{center}
\caption{The total sample size of the E1684 trial is $n_0=285$ and log odds ratio for treatment vs. control group is $-0.423$.
We choose $d_{\textrm{MTD}}=0.423$ to be the maximum tolerable difference. The E1690 trial has sample size $n=427$.
The observed log odds ratio is $0.098$.
The figure on the left displays the optimal prior (pink dotted line) and posterior (purple dashed line) derived using the KL criterion.
The figure on the right displays the optimal prior for $a_0$ and the posterior derived using the MSE criterion.
The posterior of $a_0$ using the uniform prior (grey dashed line) is also shown.}
\label{real_mel}
\end{figure}
\begin{table}
\caption{Melanoma trial: posterior mean, standard deviation and 95\% credible interval for $\beta$}
\begin{center}
\begin{tabular}{lccc}
& Mean & SD & 95\% Credible Interval \\[5pt]
\hline
$\operatorname{beta}(0.7, 1.5)^1$ & 0.05 & 0.19 & (-0.33, 0.43) \\
$\operatorname{beta}(5.5, 3)^2$ & -0.01 & 0.17 & (-0.35, 0.33) \\
$\operatorname{beta}(1, 1)$& 0.04 & 0.19 & (-0.32, 0.42) \\
$\operatorname{beta}(2, 2)$ & 0.03 & 0.18 & (-0.34, 0.40) \\
$\operatorname{beta}(0.5, 0.5)$ & 0.05 & 0.19 & (-0.33, 0.41)
\end{tabular}
\end{center}
\label{tab:mel_beta}
$^1$ Optimal by the KL criterion\\
$^2$ Optimal by the MSE criterion
\end{table}
\section{Discussion}
In this paper, we have explored the asymptotic properties of the normalized power prior when the historical and current data are compatible and when they are incompatible.
We have proposed two criteria based on which the optimal hyperparameters of the prior for $a_0$ can be derived.
While the exact values of the hyperparameters can be obtained using our objective functions, we suggest the following rules of thumb for estimating the optimal prior given different choices of the maximum tolerable difference.
When the KL criterion is used, a beta distribution centred around $0.5$, such as the $\operatorname{beta}(2, 2)$, is optimal for small values of the maximum tolerable difference (i.e., when the current and historical data likelihoods substantially overlap), while a beta distribution with mean close to $1$, such as the $\operatorname{beta}(2, 0.5)$, should be used for large values of the maximum tolerable difference.
When the MSE criterion is used, a beta distribution with mean less than $0.5$, such as the $\operatorname{beta}(3, 6)$, is optimal for small values of the maximum tolerable difference, while a beta distribution with modes at zero and one, such as the $\operatorname{beta}(0.5, 0.5)$, should be used for large values of the maximum tolerable difference.
The MSE criterion is more conservative than the KL criterion, in the sense that it tends to discourage borrowing.
Potential future work includes extending our method to survival and longitudinal outcomes, as well as accommodating dependent discounting parameters when multiple historical datasets are available.
|
{
"arxiv_id": "2302.14222",
"language": "en",
"timestamp": "2023-03-01T02:05:39",
"url": "https://arxiv.org/abs/2302.14222",
"yymm": "2302"
} | \section{Introduction}
In the past few years, multi-pass cells (MPC) have emerged as efficient tools for nonlinear spectral broadening and temporal manipulation of ultrashort laser pulses~\cite{MPCtheory_Hanna2017, MPC_Schulte2016}, mainly because of their high throughput~\cite{MPC_Weitenberg2017, MPC_gasFilled_Lavenu2018, MPC_gasFilled_Kaumanns2018, MPC_HighCompressionFactor_Balla2020, HighPower_FewCycle_Muller2021}, power scalability~\cite{MPC_highestPeakPower_Kaumanns2021, Spatio-spectral_Daher2020, MPC_KWpower_Grebing2020}, high compression factors~\cite{MPC_HighCompressionFactor_Balla2020}, spatio-spectral beam homogeneity~\cite{MPC_temporalQuality_Escoto2022}, and strong potential for nonlinear spatio-temporal pulse shaping applications~\cite{MPC_Weitenberg2017, MPC_Serrodyne_Balla2022}. The attractiveness of MPCs lies in the fact that extended nonlinear propagation occurs over a relatively small footprint and that large B-integrals can be accumulated in small increments for every pass with minimal spatio-temporal couplings, provided the input beam size is carefully matched to the cell eigenmode and the beam size on the end-mirrors remains fairly constant for every pass \cite{ModeMatching_Hanna2021}. MPCs also provide a large number of degrees of freedom in terms of choice of nonlinear medium as they are compatible with bulk material \cite{MPC_Schulte2016}, gases \cite{MPC_gassFilled_Ueffing2018, MPC_gasFilled_Lavenu2018, MPC_gasFilled_Kaumanns2018} or even hybrid geometries \cite{HybridMPC_Seidel2022}.
Recently, MPC-based post-compression was extended from Ytterbium(Yb)-based laser systems to Titanium:Sapphire (Ti:Sa) lasers, with compressibility down to the few-cycle regime in a single stage~\cite{SN3_MPC_Daniault2021, MPCwithTiSa_Rueda2021}. When used to reach ultra-high focused intensities in the frame of light-matter interaction experiments, most Ti:Sa laser systems rely on some form of nonlinear temporal filtering to suppress amplified spontaneous emission (ASE) and parasitic pulses surrounding the pulse peak. Such contrast enhancement techniques include saturable absorbers~\cite{SaturableAbsorbers_ITATANI1998, Saturableabsorber_Fourmaux2011}, optical parametric amplifiers~\cite{OPA_DUBIETIS1992, OPA_Yoshida2003, OPA_Dorrer2007}, second harmonic generation~\cite{SHG_Aoyama2001}, plasma mirrors~\cite{plasmaMirrors_Kapteyn1991, DoublePM_Levy2007}, spectrally filtered self-phase modulation (SPM)~\cite{FilteredSPM_Buldt2017}, nonlinear ellipse rotation (NER)~\cite{NERat1um_Tapie1992, NER_Sala1978, NERinHCF_Homoelle2002, NER_Kalashnikov2004, NER_LOA_Jullien2004, NER_Chun_Mei2008} and the widely used cross-polarized wave generation (XPW)~\cite{ContrastEnhancementXPW_Julien2005}.
In this article, we benchmark both NER and XPW techniques in an MPC architecture for simultaneous post-compression and spatio-temporal cleaning of mJ-energy 30\,fs pulses from a Ti:Sa laser, with compressibility down to few-cycle duration and record efficiency. We also rely on comprehensive (3+1)D numerical simulation of the nonlinear process, which accurately reproduces the measured data, to pinpoint the role played by the different experimental parameters and to find optimized, application-specific configurations for both techniques.
\section{Nonlinear ellipse rotation in multipass cells}
In NER, temporal cleaning is achieved thanks to the intensity-dependent rotation of the polarization ellipse of an ultrashort pulse undergoing nonlinear propagation \cite{NLO_Boyd2008}. Polarizing optics can be used to discriminate the high-intensity pulse peak that experiences nonlinear rotation against the unwanted, unrotated low-intensity ASE pedestal and the parasitic side pulses. NER was tested early on for contrast enhancement and spatial mode cleaning in air~\cite{NER_Kalashnikov2004, NER_LOA_Jullien2004} and gas-filled hollow fibers~\cite{NERinHCF_Homoelle2002}, the latter having been later shown to enable post-compression down to few-cycle duration with internal conversion efficiencies in the range of 40-55\%~\cite{NERinHCF_Khodakovskiy2019, NERinHCF_Smijesh2019}. Very recently, NER in a gas-filled MPC was first explored through simulations~\cite{NERinMPC_Pajer2021} and then experimentally with Yb lasers both in a gas~\cite{NERinMPC_Pfaff2022} and in a multi-plate arrangement~\cite{NER_Seidel2022}. In the following section, we describe how NER implemented in a gas-filled MPC can be used as a direct method for generating few-cycle pulses with high temporal fidelity using a Ti:Sa laser.
\subsection{Experimental setup for NER}
A schematic layout of the general experimental setup is shown in fig. \ref{fig:expt-setup}. A Ti:Sa chirped pulse amplifier system (Femtopower Pro-HE) delivers mJ-energy 30\,fs pulses at 1\,kHz repetition rate and features an acousto-optic programmable dispersion filter (Dazzler, Fastlite) that enables precise control over the output pulse spectral phase. A pair of focusing/diverging optics is used to fulfill the mode-matching conditions for stable MPC operation. A pair of folding mirrors mounted on a translation stage after the telescope is used to tune the position of the beam-waist inside the cell. The quasi-concentric MPC consists of two 50.8\,mm diameter enhanced-silver coated mirrors with a radius of curvature of 1.5\,m, separated by $\sim 3$\,m. A small off-axis plane injection mirror couples the beam into the cavity with each beam pass forming a circular pattern on the mirror. After the required number of round-trips, the beam is picked off by another small plane off-axis mirror located on the same side as the injection. The 50.8\,mm cavity mirror size ensures sufficient distance between the adjacent beam spots on the mirrors and minimizes beam steering losses. The number of passes in the cavity can be varied in even numbers by simply moving the pick-off mirror and can be set to a maximum of 18. The MPC can therefore support up to 54\,m of propagation folded within an effective 3\,m footprint.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{Figures/ExperimentalSetup.pdf}
\caption{Experimental setup layout for both NER- and XPW-MPC. Components marked in dark blue and green are used for performing NER and XPW, respectively. For clarity, only a few passes in the cell are represented. HWP: Half-Wave Plate, QWP: Quarter-Wave Plate}
\label{fig:expt-setup}
\end{figure}
The MPC is filled with Argon at a controlled pressure. Nonlinear refraction in Argon is taken into account in the simulations to find the appropriate mode-matching conditions and maintain an overall constant beam size on the cavity end-mirrors and at the center of the cavity, which also avoids damaging the optics and minimizes ionization~\cite{PropModelMPC_Hanna2020}. The beam waist is set to $450~\mu\mathrm{m}$, corresponding to a 2.9\,mm beam diameter on the end-mirrors. The number of passes is set to 12, corresponding to a total propagation length of 36\,m.
Elliptical polarization is achieved by inserting a broadband quarter-wave plate (QWP) into the collimated beam before entering the mode-matching telescope. A half-wave plate (HWP) is placed before it to tune the input polarization direction and hence the ellipticity $\epsilon_i$ inside the MPC from 0 (linear) to 1 (circular). A second crossed broadband QWP is used to retrieve linear polarization at the output of the MPC. A low-dispersion broadband thin film polarizer (TFP) (Femtolasers GmbH) with an extinction ratio of $5 \times 10^{-3}$ filters out the pulse peak rotated by NER at 90° with respect to the input polarization, while rejecting the low-intensity unrotated background. The temporally filtered pulses are then post-compressed using a set of chirped mirrors (up to $-450~\mathrm{fs^2}$, PC42, Ultrafast Innovations GmbH) and a pair of adjustable thin fused silica wedges. A range of diagnostic tools is used to characterize the NER-filtered pulses: an SHG-FROG (FemtoEasy) characterizes the pulses spectrally and temporally, a high dynamic range third-order cross-correlator (TUNDRA, Ultrafast Innovations GmbH) is used to detect the ASE pedestal and parasitic pulses over a time window of 200\,ps around the pulse peak, and an imaging spectrometer (MISS, FemtoEasy) is used to assess the output spatio-spectral beam quality.
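The mapping from wave-plate settings to ellipticity can be checked with elementary Jones calculus. The short sketch below, written in R for consistency with the other listings in this volume, assumes an ideal achromatic QWP with its fast axis along $x$ (an idealization of the broadband optics used here) and confirms that a linear input polarization at angle $\theta$, set by the HWP, yields $\epsilon_i=\tan\theta$.
\begin{verbatim}
# Jones-calculus check of the ellipticity control (sketch assuming an
# ideal achromatic QWP with its fast axis along x).  A linear input
# polarization at angle theta emerges from the QWP as
# (cos(theta), 1i*sin(theta)), i.e. with ellipticity tan(theta).
ellipticity <- function(theta) {
  E  <- c(cos(theta), 1i * sin(theta))   # field after the QWP
  S0 <- sum(Mod(E)^2)                    # Stokes parameters
  S3 <- 2 * Im(Conj(E[1]) * E[2])
  tan(0.5 * asin(S3 / S0))
}
ellipticity(atan(0.25))                  # HWP set for eps_i = 0.25
\end{verbatim}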
\subsection{Maximizing NER efficiency}
Internal NER efficiency is defined as the ratio between the power measured after and before the output polarizer. Maximum NER efficiency is therefore achieved when the polarization ellipse rotates by exactly 90°, such that transmission through the TFP is maximized. However, the NER angle is time-dependent along the time-varying pulse profile, which leads to a time-dependent transmission through the TFP. Moreover, as the pulses simultaneously experience spectral broadening and chirping through self-phase modulation (SPM), the NER angle also becomes wavelength-dependent, implying that the broadened spectrum is not uniformly transmitted through the polarizer. All these effects combined limit both the energy and the spectral throughput and thus drastically affect post-compression performance.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{Figures/NER_Simus.pdf}
\caption{Left: Simulated NER efficiency vs. B-integral for various pulse ellipticities. Right: Temporal profile of the NER pulse (grayed), transmitted pulse (solid blue) and rejected pulse (dashed blue) by the TFP; time-dependent NER angle (solid red), all simulated for $\epsilon_i = 0.25$ at maximum efficiency ($B\simeq 6\:$ rad).}
\label{fig:NER_Simus}
\end{figure}
Fig. \ref{fig:NER_Simus} (left panel) shows the evolution of internal NER efficiency versus B-integral for different ellipticities, obtained from numerical simulations for a plane-wave 1\,mJ, 30\,fs pulse propagating in Argon, including SPM, cross-phase modulation (XPM), self-steepening and gas dispersion. One clearly notices that lower ellipticities lead to a reduced spread of NER angles and, when combined with high B-integrals, yield higher throughput. However, our previous work on direct post-compression of 1\,mJ 30\,fs Ti:Sa pulses in the same MPC configuration showed that the B-integral cannot be pushed much beyond 6 rad, corresponding to an Ar gas pressure around 400\,mbar, where excessive self-steepening effects lead to poor pulse compressibility~\cite{SN3_MPC_Daniault2021}. In the case of NER, the ellipticity should therefore not be lower than 0.15 and the maximum NER efficiency should not exceed 75\%. Fig. \ref{fig:NER_Simus} (right panel) shows the simulated time-dependent NER angle during the rise and fall of the pulse along with the transmitted and rejected pulse profiles for $\epsilon_i = 0.25$ at maximum efficiency ($B\simeq 6\:$ rad). The maximum NER angle is above 90°, such that the transmission averaged over the whole pulse profile is maximized.
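The roll-off of the efficiency curves can be reproduced qualitatively with a much cruder model than the full simulation: a plane-wave toy calculation in which the rotation angle is simply taken proportional to the instantaneous intensity, $\theta(t)=\theta_{\textrm{pk}}\,I(t)/I_0$, and the crossed TFP transmits a power fraction $\sin^2\theta(t)$. This is an illustrative simplification of our own, not the (3+1)D code used for the figures.
\begin{verbatim}
# Toy plane-wave model of the time-dependent NER transmission (our own
# illustrative simplification, not the (3+1)D code): the rotation angle
# is taken proportional to the instantaneous intensity,
# theta(t) = theta_pk * I(t)/I0, and the crossed TFP transmits a power
# fraction sin(theta(t))^2.
ner_efficiency <- function(theta_pk_deg) {
  t     <- seq(-3, 3, length.out = 2001)  # time in units of the FWHM
  I     <- exp(-4 * log(2) * t^2)         # Gaussian intensity profile
  theta <- theta_pk_deg * pi / 180 * I
  sum(I * sin(theta)^2) / sum(I)          # intensity-weighted transmission
}
# Transmission keeps rising past a 90-degree peak rotation before
# rolling off, consistent with the behaviour described above.
sapply(c(60, 90, 110, 130), ner_efficiency)
\end{verbatim}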
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{Figures/SpectralEvol_V2dataset-scaled.pdf}
\caption{Top: evolution of the measured (left) and the simulated (right) transmitted NER spectrum with Ar gas pressure, for $\epsilon_i=0.25$. Bottom: variation of the measured spectral width and Fourier-transform-limited (FTL) duration (left) and corresponding experimental NER conversion efficiency with Ar pressure compared to simulated values (right).}
\label{fig:NERevolution}
\end{figure}
We performed the experiment using $\epsilon_i = 0.25$ and measured the internal NER efficiency along with the output pulse spectrum for increasing Ar pressures up to 420\,mbar, where pulse break-up starts. Fig. \ref{fig:NERevolution} compares the measured data with the results obtained from (3+1)D simulations now including temporal, radial, and polarization dimensions and using measured device losses and the experimental laser pulse spectrum as input. Simulations show excellent agreement with measurements. Experimentally, the spectral bandwidth increases quickly at first, then starts flattening out before increasing again around 420\,mbar. The effects of pulse break-up can be seen in simulations at higher pressures, with the sudden appearance of wings and deep modulations in the broadened spectrum. The agreement between experiment and simulations is particularly good for the internal NER efficiency versus pressure, which reaches a maximum of 58\% at an optimum Ar pressure of $\simeq 350$\,mbar and then rolls off because of the peak NER angle exceeding 90°. We can now tune the ellipticity to a lower value, such that the maximum NER efficiency occurs just before pulse break-up around 420\,mbar. This should lead to both a higher throughput and a broader spectrum, and therefore yield shorter compressed output pulses.
\subsection{Experimental results at optimum ellipticity}
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{Figures/NER-FROG450mbar_135deg_ALLopti.pdf}
\caption{Top: measured (left) and retrieved (right) SHG-FROG traces. Bottom: input and NER spectra (left) and the corresponding temporal profile (right) for $\epsilon_i = 0.18$.}
\label{fig:NER-FROG}
\end{figure}
By setting the Ar pressure to 420\,mbar and the pulse ellipticity to $\epsilon_i=0.18$, the internal NER efficiency increases from 58\% to 69\%. The output polarizer is rotated accordingly to preserve the extinction ratio and the output energy drops to 0.49\,mJ, yielding a global NER efficiency of 49\%, including losses in the MPC. In this configuration, we obtain nearly-FTL 5.8\,fs pulses (2.2 optical cycles at $\lambda=800\ $nm) as shown in fig. \ref{fig:NER-FROG}. The reconstructed 5.8\,fs pulse profile (solid blue curve) is very close to the Fourier-transform-limited profile (FTL, dotted black curve), exhibiting near-perfect compression with a very low-intensity pedestal structure, limited only by residual phase and spectral modulations introduced by the paired double-angle chirped mirror compressor. The compressor efficiency is measured to be 87\%, leading to an overall post-compression efficiency of 43\%.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\linewidth]{Figures/NER_contrast.pdf}
\caption{Temporal contrast enhancement between input and NER ($\epsilon_i=0.18$) pulses measured using a high dynamic range third-order cross-correlator.}
\label{fig:NER_contrast}
\end{figure}
Fig. \ref{fig:NER_contrast} compares the long-range temporal profiles of input and output pulses measured by the TUNDRA device. The contrast enhancement obtained via NER is at least 3 orders of magnitude, with ASE levels dropping down to $1:10^{-11}$ a few ps prior to the pulse peak, and is limited by the extinction ratio of the TFP. The pre-pulse seen at -7.5\,ps in the traces for both the input and NER pulses is an inherent artifact of the TUNDRA. The train of post-pulses visible in the NER trace originates from internal reflections within the TFP itself and is not observed when a Glan polarizer is used for extinction. However, the large thickness of a Glan polarizer leads to excessive dispersion and nonlinear effects distorting the output pulse and beam. Finally, the spectrally-resolved beam profile in the horizontal and vertical dimensions and the corresponding V-parameter measured for $\epsilon_i=0.18$ are shown in fig. \ref{fig:NER_spatio-specrtral}. In both dimensions, the output beam exhibits good spectral homogeneity, $>98$\% at FWHM and $>85$\% at $1/e^2$.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{Figures/NER-Spatio-Spectral.pdf}
\caption{(Top) spectrally-resolved beam profile in the horizontal (left) and vertical (right) dimensions and (bottom) output beam profile in arbitrary units in the horizontal (left) and vertical dimensions (right) along with their spectral homogeneity, measured for $\epsilon_i=0.18$. }
\label{fig:NER_spatio-specrtral}
\end{figure}
These results must be compared to direct post-compression experiments~\cite{SN3_MPC_Daniault2021}. Under the exact same experimental conditions, albeit with only 16 passes in the MPC, we measured 5.3\,fs post-compressed pulses with 66\% overall efficiency, chirped mirror compressor included. The 43\% overall transmission measured in the case of NER amply justifies its implementation in an MPC post-compressor as it enables the direct generation of high-contrast few-cycle pulses with moderately higher losses, little compromise on the output pulse duration and very low added complexity.
\section{Cross-polarized wave generation in multi-pass cells}
XPW generation is a well-established and perhaps the most widely used technique for temporal contrast enhancement in high-energy Ti:Sa lasers. It is a degenerate four-wave mixing process governed by the anisotropy of the real part of the third-order nonlinearity tensor $\chi^{(3)}$. Inside the nonlinear medium, a new orthogonally polarized wave is generated at the same central frequency as the incident wave. Conventionally, its implementation is quite straightforward: a linearly polarized laser pulse is focused into a crystal placed between crossed polarizers. Due to the cubic intensity dependence of the process in space and time, efficient conversion occurs only at high intensities, making it easy to filter out the cleaned XPW pulse from the lower-intensity ASE pedestal and parasitic pulses. The XPW conversion efficiency depends on the intensity of the pulse incident on the crystal, the thickness of the crystal, the input spatio-temporal pulse quality and its spectral phase~\cite{SpectralPhase_XPW_Jullien2007, SpectralPhase_XPW_Canova2008}. The incident intensity on the XPW crystal is limited by the threshold of white light generation in the crystal (e.g. $\sim 10^{12}~\mathrm{W/cm^2}$ for BaF$_2$ crystals). Using thicker crystals to achieve higher conversion leads to unwanted nonlinear third-order processes that tend to compete with XPW generation, making the XPW beam properties more sensitive to spatio-temporal couplings. The input intensity is also limited by damage due to self-focusing inside the crystal, which tends to reduce its lifetime. So far, the highest demonstrated global conversion efficiency has been limited to 10-20\% using a double thin-crystal configuration~\cite{EfficientXPW_Julien2006} and, for mJ energy pulses, some form of spatial filtering or shaping is needed to ensure a smoother or more homogeneous incident spatial beam profile on the crystals~\cite{MoreEfficientXPW_Julien2008, EfficientXPW_Ramirez2011}. In this work, we tested the implementation of XPW in the MPC, since the nonlinearity inside an MPC is acquired in small increments and spatially redistributed across the beam for every pass.
\subsection{Experimental setup for XPW}
The setup for testing XPW in the MPC is depicted in fig. \ref{fig:expt-setup}. Here, no QWPs are needed since the linear polarization direction of the XPW signal can simply be set by the HWP at the MPC input. The chamber is operated under vacuum and two anti-reflection-coated, 600\,µm thick, holographically-cut BaF$_2$ crystals aligned with the same orientation are placed symmetrically with respect to the center of the MPC. This configuration helps to mitigate spatial nonlinear effects and ensures spatio-spectral beam homogeneity. The distance of the crystals from the waist enables the tuning of the nonlinearity for every pass and is set to about 1\,m, placing the crystals approximately 50\,cm away from the end-mirrors. The chirped mirrors of the output compressor are changed to accommodate the narrower spectral bandwidth typically produced by the XPW process, and to introduce higher dispersion in order to compensate for the total amount of material traversed by the pulses in the BaF$_2$ plates. The input pulse parameters, output polarizer and pulse diagnostics all remain the same as for NER.
In the case of XPW, mode matching into the MPC becomes much more complicated. First, for a Gaussian input pulse in space and time, both spatial and temporal pulse profiles of the XPW wave are shortened by a factor of $\sqrt{3}$ because of the cubic nonlinearity. Therefore, both input and XPW beams do not share the same beam matching conditions and their respective caustics cannot be stabilized together. Moreover, the input pulse peak power is about $5\times 10^{3}$ times higher than the critical power for self-focusing in BaF$_2$, under our experimental conditions. Although the BaF$_2$ plates are thin enough to prevent catastrophic beam collapse, they have to be considered as thin nonlinear lenses that can disturb mode matching and overall propagation in the MPC. Moreover, the repeated energy transfer from the input pulse to the XPW wave leads to changes in the pulse peak power, such that the Kerr lensing experienced through the BaF$_2$ plates is different for both pulses and for every pass, especially for such high nonlinearities. Material dispersion and reflection losses from the plate surfaces further exacerbate this behavior.
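The $\sqrt{3}$ shortening quoted above follows directly from the cubic intensity dependence of the process for Gaussian profiles: writing $I(t)\propto e^{-4\ln 2\, t^2/\tau^2}$ for a pulse of FWHM $\tau$,
\begin{equation*}
I_{\textrm{XPW}}(t)\propto I^3(t) = e^{-12\ln 2\, t^2/\tau^2} = e^{-4\ln 2\, t^2/\left(\tau/\sqrt{3}\right)^2},
\end{equation*}
i.e., a Gaussian of FWHM $\tau/\sqrt{3}$; the same argument applied to the transverse Gaussian profile gives the $\sqrt{3}$ reduction of the beam size.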
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{Figures/XPW_Caustics.pdf}
\caption{Left: Beam caustic of the fundamental wave alone including Kerr Lensing in the BaF$_2$ plates (solid blue) compared to linear mode matching in the MPC (dashed red). Right: XPW and fundamental beam caustics.}
\label{fig:XPW_Caustics}
\end{figure}
Numerical simulations were performed to determine the best beam matching when including Kerr lensing from the BaF$_2$ plates. First, the simulation was run for the input fundamental wave alone, while inhibiting XPW generation. Fig. \ref{fig:XPW_Caustics} shows the beam caustic of the fundamental wave in the MPC, which is stable for the first few passes and then becomes strongly modulated, illustrating the impact of dispersion on the caustic stability. However, the beam size on the MPC end-mirrors is always larger than that of the linearly matched beam, which excludes any risk of optical damage by design. Excessive ionization at the beam waist, which can occur if the caustic is unstable, is not a concern here as the MPC is operated under vacuum. Second, XPW generation was enabled in the simulation by choosing the proper polarization that maximises efficiency, as detailed in the next section. The XPW caustic is plotted in Fig.~\ref{fig:XPW_Caustics}. It can be seen that the newly created XPW beam in the first BaF$_2$ plate pass is smaller than the fundamental. As expected, its caustic is highly modulated throughout the MPC, but here again the beam sizes on the mirrors are always larger than in the linear regime. The fundamental beam is more modulated in the presence of XPW generation due to the energy transfer, but the minimum beam size on the mirrors still remains close to that for linear beam matching, thus avoiding optical damage. These simulations show that Kerr lensing in the BaF$_2$ plates drastically disturbs beam propagation and caustic stability throughout the MPC, but also leads to larger beam sizes on the mirrors, such that the beam fluence systematically remains below the damage threshold of the end-mirrors.
\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{Figures/XPW_eff.pdf}
\caption{Internal (red) and overall (blue) XPW generation efficiency as a function of the number of passes in the MPC.}
\label{fig:XPW_efficiency}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{Figures/XPW_FROG-broad-offlineRet.pdf}
\caption{Top: measured (left) and retrieved (right) SHG-FROG traces. Bottom: input and XPW spectra (left) and the corresponding compressed XPW temporal profile (right), for FTL pulses injected into the MPC.}
\label{fig:FROG-XPW-FTLcase}
\end{figure}
\subsection{Experimental results of XPW generation}
Fig. \ref{fig:XPW_efficiency} shows the XPW efficiency simulated as a function of the number of passes in the MPC, including reflection losses introduced by the enhanced-silver-coated mirrors ($\sim 1\%$ per bounce) and the BaF$_2$ crystals ($\sim 3\%$ per MPC pass). The input polarization is set to 64.5° with respect to the optical axes of the BaF$_2$ crystals, which maximizes XPW conversion efficiency per pass~\cite{EnergyscalingXPW_Canova2009}. For 16 passes, the total internal XPW efficiency reaches up to 65\%, an unprecedented value which cannot be obtained for XPW in free space. First, the beam size on the crystals is well controlled thanks to the MPC geometry and enables good conversion efficiency per pass, as opposed to a free-space two-plate arrangement, where the beam constantly diverges. Second, the MPC geometry mitigates nonlinear spatial effects, enabling higher efficiencies without distorting the beam profile. However, such a high number of passes through the BaF$_2$ plates implies higher reflection losses, such that the overall efficiency, calculated as the output XPW power over the input power, rapidly drops. In practice, the number of passes should be limited to 10 to maximize the overall efficiency at about 35\%, which is still higher than the current state-of-the-art. The beam size is set to $330~\mu$m at the waist and 4\,mm on the end-mirrors in order to fulfill the stability conditions of the MPC. The input pulses are p-polarized and the output TFP is oriented to select the XPW pulses along the orthogonal polarization direction. The XPW pulses are then compressed outside the MPC using a set of chirped mirrors (total dispersion $\simeq -750~\mathrm{fs^2}$, HD58 from Ultrafast Innovations) and a pair of adjustable thin fused silica wedges. By compensating dispersion (entrance window, propagation in air) with the Dazzler in the laser chain, we can inject nearly-FTL 30\,fs pulses into the MPC, and obtain $295~\mu$J total XPW pulse energy, corresponding to 57\% internal and 28\% global conversion efficiency, respectively, while taking into account all the losses in the MPC.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{Figures/XPWbroad_beamProfileV2.pdf}
\caption{Top: measured spatio-spectral traces in horizontal (left) and vertical (right) dimensions; bottom: output beam profile in arbitrary units in horizontal (left) and vertical dimensions (right) along with their spectral homogeneity.}
\label{fig:homogeneity-XPW}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.8\linewidth]{Figures/XPW-contrast.pdf}
\caption{Temporal contrast enhancement between fundamental and XPW pulses.}
\label{fig:contrast-XPW}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{Figures/XPW-FROG-narrow.pdf}
\caption{Top: measured (left) and retrieved (right) SHG-FROG traces; bottom: input and XPW spectral properties (left) and corresponding output XPW temporal profile (right).}
\label{fig:FROG-XPW-ChirpedCase}
\end{figure}
The broadened XPW spectrum can be compressed down to 15\,fs as shown in fig. \ref{fig:FROG-XPW-FTLcase}. Fig. \ref{fig:homogeneity-XPW} shows the spectrally-resolved beam profile measured in both the vertical and horizontal dimensions with the imaging spectrometer, which exhibits a nearly-Gaussian profile in both cases. The homogeneity factor, as defined in \cite{MPC_Weitenberg2017}, is also shown for both dimensions in fig. \ref{fig:homogeneity-XPW}. The beam exhibits excellent spectral homogeneity above 99\% at the FWHM and above 95\% at 1/e$^2$ in both dimensions. This is a direct advantage of implementing XPW in an MPC, where incremental accumulation of B-integral mitigates spatio-temporal couplings and yields excellent output beam quality. Fig. \ref{fig:contrast-XPW} shows the long-range temporal intensity profiles measured with the TUNDRA for both input and XPW pulses. The contrast enhancement is at least 3 orders of magnitude and limited, as for NER, by the polarization extinction capability of the TFP. The pre-pulse at +7.5\,ps and the train of post-pulses are similar to those observed in the NER measurements.
\subsection{Maximizing XPW efficiency}
XPW generation has been shown to be accompanied by significant spatio-temporal reshaping due to interplay between XPM and SPM involving both fundamental and XPW pulses~\cite{MoreEfficientXPW_Julien2008,XWPspatialTemporalDynamics_Adams2010}. When an initial $-500~\mathrm{fs^2}$ spectral phase is applied to the input pulses with the Dazzler to globally compensate for the effects of dispersion inside the MPC, the XPW energy increases to $360~\mu$J, corresponding to 65\% internal and 34\% global conversion efficiencies, respectively. To the best of our knowledge, this is the highest conversion efficiency reported so far for XPW generation. However, this increase in conversion efficiency comes at the cost of lower spectral broadening and therefore slightly longer re-compressed pulses of 19\,fs, as shown in fig. \ref{fig:FROG-XPW-ChirpedCase}. This result is in good agreement with previous studies on the effect of residual chirp on the output spectral behaviour~\cite{FewCycle_XPW_Jullien2009}, where narrower albeit smoother output XPW spectra were observed for negatively chirped input pulses. Finally, spectral homogeneity and contrast enhancement factors similar to the FTL case were measured for negatively chirped input pulses. Overall, the smooth XPW spectrum together with the increased available XPW pulse energy could be particularly useful for efficient seeding of further laser amplification stages in a double chirped pulse amplifier architecture.
\section{Summary}
In conclusion, we demonstrate efficient spatio-temporal cleaning of mJ-energy 30\,fs pulses in an MPC using two different third-order nonlinear filtering techniques: XPW and NER. Comprehensive (3+1)D numerical simulations show excellent agreement with the measured data in both cases and enable us to carefully design the MPC architectures so as to obtain the highest output pulse fidelity. In both cases, a contrast enhancement $>10^3$ could be observed together with near-FTL post-compressed pulse durations.
To the best of our knowledge, this is the first time that XPW has been implemented inside an MPC, exhibiting several advantages over a more conventional free-space setup: (1) record high efficiencies (up to 65\% internal and 34\% global), (2) no need for spatial filtering, (3) excellent output beam quality and spectral homogeneity, and (4) relatively higher tolerance to input beam pointing fluctuations. More adapted surface coatings on the nonlinear crystals and cavity end-mirrors should help significantly increase the overall energy throughput and polarization optics with higher extinction ratios could easily increase the contrast enhancement factor ($>10^3$) by 2-3 orders of magnitude. This approach could therefore aid in designing efficient and compact devices for spatio-temporal pulse cleaning in high-peak-power laser systems.
To the best of our knowledge, this is also the highest total internal efficiency (up to 69\%) reported for NER for 30\,fs input pulses. Implementing NER in an MPC architecture with such pulses enables the direct generation of high-contrast few-cycle pulses ($< \mathrm{6}\:$ fs) with up to 43\% global efficiency, in a single post-compression stage. The inherent power scalability of MPCs makes this an attractive end-of-chain solution for producing high peak-power few-cycle pulses with high temporal contrast suited to ultra-high intensity laser-matter experiments. Shorter post-compressed pulse durations, down to the single-cycle regime, could in principle be reached using dispersion-engineered coatings targeting net-zero linear chirp to suppress the saturation of Kerr nonlinearities, as observed in~\cite{SN3_MPC_Daniault2021}, which should enable octave-spanning broadening with high throughput. For this, however, the limitations on pulse compressibility imposed by the residual phase profile of these coatings and ionization remain to be investigated.
\begin{backmatter}
\bmsection{Funding}
This work was supported by the Agence National de la Recherche (ANR-10-LABX-0039-PALM), the Horizon 2020 Framework program (Advanced Grant ExCoMet 694596), LASERLAB-EUROPE (871124) and the R\'egion Ile-de-France (SESAME 2012-ATTOLITE).
\bmsection{Disclosures} The authors declare no conflicts of interest regarding publication of this article.
\bmsection{Data Availability Statement} Data underlying the results presented in this article are not publicly available at this time but may be obtained from the authors upon reasonable request.
\end{backmatter}
\section{Introduction}
Character-level or byte-level models\footnote{We will hereafter refer to byte-level models as character-level models or simply character models.} have been a source of interest in Natural Language Processing (NLP) for many years, with the promise of a pipeline without the need for a subword tokenizer, and the ability to process and output text at a finer granularity. However, they have failed to become the dominant paradigm over subword-level models. The general consensus seems to be that character models perform similarly to subword models while requiring additional time and compute resources, as they operate on longer input sequences. There are some instances in which character models have been shown to outperform subword models, but these may be seen as niche (e.g. tasks that specifically require character information) or unrealistic (e.g. using data corrupted with synthetic noise) \cite{xue-etal-2022-byt5}.
\begin{figure}
\centering
\includegraphics[width=210pt]{pngs/attribution_example.svg.png}
\caption{ByT5's source vs. target attribution for the output translation: ``That is super-good!'' Peaks in source attribution at the beginning of each word indicate an internal word-level understanding.}
\label{fig:attr_example}
\end{figure}
There are, however, some settings in which the latest character models have not been tested. In this work we compare the effectiveness of state-of-the-art character and subword models on one of the most researched subfields of NLP, machine translation (MT). Despite the popularity of neural MT (NMT) overall, character models have not yet been thoroughly researched for this task. Current research on character models for NMT has used models trained from scratch \cite{libovicky2021don, edman2022subword}, and examining such models can only reliably assess performance on high-resource languages, where the beneficial effects of multilingual pretraining are less impactful.
Given the diversity of NMT settings, be it low or high-resource, similar or distant language pairs, or various writing scripts, there are several potential avenues for character models to stand out, either positively or negatively.
We fine-tune the character model ByT5 \cite{xue-etal-2022-byt5} and its subword counterpart mT5 \cite{xue-etal-2021-mt5} on a variety of languages and scenarios. Among our numerous findings, those that stand out are:
\begin{itemize}
\item Using a standard NMT training scheme, ByT5 generally performs better than mT5.
\item ByT5's edge over mT5 increases when resources are limited.
\item With a simple NMT instruction tuning setup, ByT5 can learn NMT faster than mT5, with less data.
\item Using many examples during instruction tuning causes ByT5 to lose cross-lingual performance in the zero-shot setting.
\item ByT5 performs particularly well for languages not seen in pretraining that are similar to a high-resource language.
\end{itemize}
Our findings support the idea that in several realistic circumstances, particularly low-resource scenarios, character models are a superior choice in terms of quality of translation over the widely-used subword models.
We further analyze ByT5's performance on translation, finding that its success can be attributed to its ability to implicitly segment sentences into words or subwords (exemplified by Figure \ref{fig:attr_example}).
We start with an overview of prior work in Section \ref{sect:rw}. This is followed by our methodology in Section \ref{sect:method}. Section \ref{sect:dt_res} shows our results for trained language pairs, while Section \ref{sect:zs_res} shows our cross-lingual results. In Section \ref{sect:analysis}, we analyze our results, looking at how ByT5 is translating and where it performs well. Section \ref{sect:time} shows the efficiency of character models (or lack thereof), and we conclude in Section \ref{sect:conclusion}.
\section{Related Work} \label{sect:rw}
Character-level models have long been of interest for use in machine translation, dating back to when statistical models were the dominant paradigm \cite{tiedemann-nakov-2013-analyzing}. At that time, character models were already competitive with word-level models, particularly in cases where training data was limited to 100 thousand sentence pairs or fewer. We note that, at the time, subword tokenizers such as BPE \cite{sennrich-etal-2016-neural} were not yet commonly used.
Character-level NMT models have also been researched, first using RNNs \cite{costa-jussa-fonollosa-2016-character, lee2017fully}, and more recently with Transformers \cite{libovicky2021don, edman2022subword}, all of which compare against the more widely-used subword-based models as baselines. The overall results have been less convincing, with either moderate or no improvement over subword models, all while requiring considerably more time for both training and inference.
While these works were less convincing for character models, it should be noted that they were mostly focused on translation for high-resource languages. In the context of low-resource MT, \citet{edman2022subword} showed that character-level models can perform better than subword models on the low-resource pair Xhosa--Zulu, while \citet{carrion-ponz-casacuberta-2022-effectiveness} showed that quasi-character models (subword models with a small vocabulary of size 350) perform better than subwords with a more standard vocabulary size of 32 thousand when data is limited for a number of European languages, finding consistent improvements across several domains.
However, all of the previous work with character models in MT has trained the MT models from scratch, as this is a long-standing practice in the field. This is not a particularly practical scenario for many languages, especially low-resource ones, where the paradigm of fine-tuning a pretrained, multilingual model has been shown to be effective \cite{ranathunga2023neural}.
The size of the models previously tested was also not conducive to testing the effect of cross-lingual transfer, as the models had less than 70 million parameters (based on Transformer-Base). In contrast, ByT5-small has 300 million, up to 13 billion for ByT5-XXL. With an order of magnitude difference in model size, there may be emergent properties of larger character or subword models when trained to translate.
There is evidence from other generative tasks indicating that this could be the case. ByT5 has been shown to consistently outperform its subword counterpart mT5 on 3 generative tasks: TweetQA, GemXSUM, and DROP \cite{xue-etal-2022-byt5}. However, these generative tasks are all in English, and all have a significantly shorter output sequence than their input sequence. Given these distinctions, it remains unclear whether the superior performance of ByT5 would extend to translation tasks, necessitating this work.
\section{Method} \label{sect:method}
As our experiments are aimed to provide a fair comparison of state-of-the-art character and subword models for translation, we first justify our model choice, followed by our training scheme, and lastly our choice of metric for evaluation.
\subsection{Models} \label{sect:method_models}
While there are other models that operate on the character level \cite{boukkouri2020characterbert, tay2021charformer}, we opt to compare ByT5 to its subword counterpart mT5. These models are, to our knowledge, the most comparable due to their similar training setup and parameter count.
We note that although the parameter counts of the mT5 and ByT5 models are similar, \citeauthor{xue-etal-2022-byt5} elect to increase the width (i.e. hidden dimension size) of the ByT5 models to compensate for the fact that they use far fewer parameters for embedding bytes. This is most noticeable in the \texttt{small} models, where 85\% of mT5's parameters are used for embeddings, compared to 0.3\% for ByT5 \cite{xue-etal-2022-byt5}. As this disparity lessens with increasing model size, we consider it a possible explaining factor if any disparity in our results also correlates negatively with model size. As such, the majority of our experiments use the \texttt{small}, \texttt{base}, and \texttt{large} model sizes of ByT5 and mT5.
\subsection{Training}
To train our translation models, we finetune mT5 and ByT5 models of sizes \texttt{small}, \texttt{base}, and \texttt{large} using the same prompt used in~\citet{raffel2020exploring}:
$$
\textit{Translate <S> to <T>: <src>}
$$
\noindent where \textit{<S>} is the source language, \textit{<T>} is the target language, and \textit{<src>} is the source text.
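For concreteness, a minimal sketch of this prompt construction is shown below, using the public Hugging Face checkpoint names; the generation settings are illustrative placeholders rather than our exact fine-tuning configuration.
\begin{verbatim}
# Minimal sketch of the prompt construction (assumes the public
# Hugging Face checkpoints; not our exact fine-tuning code).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

def build_prompt(src_lang, tgt_lang, src_text):
    # Template from Raffel et al. (2020): "Translate <S> to <T>: <src>"
    return f"Translate {src_lang} to {tgt_lang}: {src_text}"

tokenizer = AutoTokenizer.from_pretrained("google/byt5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/byt5-small")

inputs = tokenizer(build_prompt("German", "English",
                                "Das ist super-gut!"),
                   return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
\end{verbatim}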
We primarily use WMT's NewsCommentary v16\footnote{\url{https://data.statmt.org/news-commentary/v16/}} datasets for training. We consider 5 levels of ``resourcedness'' for training, using \{0.4, 2, 10, 50, 250\} thousand sentence pairs.
We also use the WMT14 German--English dataset to test higher-resource settings of 1.25 and 4.5 million sentence pairs (i.e. the entire dataset).\footnote{By default, we use NewsCommentary, any use of WMT14 is specified.} Our hyperparameter choices for training can be found in Appendix \ref{sect:app_hyperparam}. For development and testing, we use the FLoRes200 dataset \cite{costa2022no}.
As for training language pairs, we train on \{Arabic, German, Russian\}$\rightarrow$English and \{Portuguese, English\}$\rightarrow$Spanish.\footnote{We opt for mostly into-English experiments so that our results can be easily compared for any of the source languages used. Nevertheless, we also include English$\rightarrow$German results in Appendix \ref{sect:all_results}.} We choose these language pairs as they are all within NewsCommentary, guaranteeing a similar quality, and for the varying degrees of language similarity.
We additionally test the models' cross-lingual retention with the FLoRes200 dataset, whose wide variety of languages allow us to further isolate important language characteristics. To test the models' cross-lingual capabilities, we simply swap out \textit{<S>} and \textit{<src>} for a new language, keeping the target language the same. No further training is performed, making the setting zero-shot. We specifically target 3 factors:
\begin{enumerate}
\item Whether the model has seen the substituted language during \textbf{pretraining}.
\item Whether the substituted language is in the same \textbf{script} as the trained language.
\item Whether the substituted language is in the same language \textbf{family} as the trained language.
\end{enumerate}
These 3 factors are easily identifiable for any given language, allowing for a simple means of potentially assessing the efficacy of character and subword models in any specific linguistic context.
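As a concrete illustration, reusing the hypothetical \texttt{build\_prompt} helper from the earlier sketch, the zero-shot substitution amounts to changing only the source side of the prompt:
\begin{verbatim}
# Zero-shot substitution: keep the trained target language, swap only
# the source language tag and text (build_prompt as sketched earlier).
zs_prompt = build_prompt("Dutch", "English", "Dat is supergoed!")
\end{verbatim}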
\begin{figure}[!htp]
\centering
\includegraphics[width=210pt]{pngs/main_deu_eng.svg.png}
\caption{Performance of mT5 and ByT5 on German$\rightarrow$English.}
\label{fig:train_deu_eng}
\end{figure}
\begin{figure}[!htp]
\centering
\includegraphics[width=210pt]{pngs/main_ara_eng.svg.png}
\caption{Performance of mT5 and ByT5 on Arabic$\rightarrow$English.}
\label{fig:train_ara_eng}
\end{figure}
\begin{table}[!htp]\centering
\begin{tabular}{lrrr}\toprule
&250k &1.25M &4.5M \\\midrule
mT5-large &54.72 &58.38 &61.51 \\
ByT5-large &\textbf{56.83} &\textbf{59.78} &\textbf{62.73} \\
\bottomrule
\end{tabular}
\caption{Performance of mT5 and ByT5 trained on WMT14 German$\rightarrow$English, using chrF++.}
\label{tab:train_deueng_wmt}
\end{table}
\subsection{Evaluation}
We considered several metrics for evaluating these models, eventually opting for chrF++ \cite{popovic2017chrf++}, which is formulated as a combination of word-level and character-level F-scores.
There are several other potential metrics to choose from, and we cover in more detail the reasoning for why we did not select them in Appendix \ref{sect:app_metric}. We additionally provide the BLEU \cite{papineni2002bleu} scores for all experiments in Appendix \ref{sect:all_results}.
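A minimal scoring sketch using the sacrebleu implementation of chrF++ (one possible implementation; \texttt{word\_order=2} yields the ++ variant that adds word unigram and bigram F-scores):
\begin{verbatim}
# Sketch of chrF++ scoring with sacrebleu; word_order=2 adds the word
# unigram/bigram F-scores that distinguish chrF++ from plain chrF.
from sacrebleu.metrics import CHRF

chrf_pp = CHRF(word_order=2)
hyps = ["That is super-good!"]
refs = [["That is very good!"]]  # one reference stream per reference set
print(chrf_pp.corpus_score(hyps, refs).score)
\end{verbatim}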
\section{Direct Translation Results} \label{sect:dt_res}
Our direct translation results examine the performance of the models on the language pairs for which they are fine-tuned. We start by varying the amount of training data. As seen in several previous works noted in Section \ref{sect:rw}, character models appear to thrive in situations where training data is scarce. We confirm this with our experiments on \{Arabic, German\}$\rightarrow$English.
We continue by varying the distance between the source and target languages to find any differences in the models' abilities to handle increasing language distance. This is motivated by the assumption that character-level models are more capable of handling similar languages with only small changes required for accurate translation, such as changes in spelling, while largely preserving word order. We test this using a similar language pair, Portuguese$\rightarrow$Spanish compared to the more distant English$\rightarrow$Spanish, as well as German$\rightarrow$English compared to Arabic$\rightarrow$English.
\subsection{Amount of Training Data} \label{sect:res}
Varying the amount of training data reveals the largest difference in performance of character and subword models.
Figures \ref{fig:train_deu_eng} and \ref{fig:train_ara_eng} show that ByT5 outperforms mT5 in all resource levels.
However, the performance gap increases as resources are limited.
We see that model size also plays a role, with the \texttt{large} models having the largest performance gap of 8-10 points when only 400 sentence pairs are available.
This runs counter to our expectation that, since the architectural differences are largest between the \texttt{small} models (see Section \ref{sect:method_models}), we would see the largest performance difference between them.
We test higher-resource settings in Table \ref{tab:train_deueng_wmt}, which shows the performance of the \texttt{large} models on the German$\rightarrow$English WMT14 dataset.
Here, we see that while ByT5 maintains a lead, the performance of the models does appear to be converging near the full size of the dataset.
This indicates that eventually, with enough training data, the advantage of operating on the character-level diminishes.
However, for many language pairs, namely those with fewer than 4.5 million available sentence pairs, operating on the character level could still yield a tangible gain in translation performance.
\begin{figure}[!htp]
\centering
\includegraphics[width=210pt]{pngs/relative_diff_porspa.svg.png}
\caption{Performance of models on English$\rightarrow$Spanish, relative to Portuguese$\rightarrow$Spanish.}
\label{fig:poreng_dist}
\end{figure}
\begin{figure}[!htp]
\centering
\includegraphics[width=210pt]{pngs/relative_diff_deueng.svg.png}
\caption{Performance of models on Arabic$\rightarrow$English, relative to German$\rightarrow$English.}
\label{fig:deuara_dist}
\end{figure}
\begin{figure*}[!htp]
\centering
\includegraphics[width=210pt]{pngs/zs_deu_eng.svg.png}
\includegraphics[width=210pt]{pngs/zs_ara_eng.svg.png}
\caption{Average performance of German$\rightarrow$English (left) and Arabic$\rightarrow$English (right) models where German or Arabic is replaced with a broad selection of languages.}
\label{fig:zs_ara}
\end{figure*}
\subsection{Source--Target Language Distance} \label{subsect:lang_dist}
To examine the effect of language distance between the source and target, we divide the scores of our more distant language pairs (eng$\rightarrow$spa, ara$\rightarrow$eng) by the scores of our closer language pairs (por$\rightarrow$spa, deu$\rightarrow$eng).
This is shown in Figures \ref{fig:poreng_dist} and \ref{fig:deuara_dist}, respectively.
Here we can see that, in most cases, ByT5 appears more robust to language distance than mT5, achieving a greater percentage of its performance on the closer pair when the distance between source and target increases.
\section{Cross-lingual Results} \label{sect:zs_res}
Our cross-lingual results examine how well the models retain information from similar languages, or generalize to unseen languages, when trained on a specific language pair. This can have important ramifications on best practices for training a low-resource model, as cross-lingual transfer learning is a common technique for obtaining a good initialization for a language pair with few training sentences.
We first look at the general performance by selecting 10 languages from different language families with which we replace the source language.
This aims to provide an overview of the performance one can expect from character and subword models.
Second, we focus on performance based on whether the substituted languages have been seen in pretraining. This tests the ability of the models to generalize to novel languages, compared to their retention of similar, often higher-resourced languages.
Third, we investigate the importance of writing script. Historically in MT, differences in script have motivated the use of transliteration \cite{durrani-etal-2010-hindi, nakov-tiedemann-2012-combining}, however, as of yet, no work has examined the effect of script on neural character models. We compare the cross-lingual retention of Slavic languages written in Cyrillic and Latin using models trained on Russian$\rightarrow$English.
Lastly, we look at the effect of language distance. Rather than comparing the distance in source and target as examined in Section \ref{subsect:lang_dist}, here we look at the distance between the original and substituted source language. Specifically, we look at 4 branches in the Indo-European language family: Germanic (the original source language branch), Italic, Celtic, and Slavic.
\begin{figure*}[!htp]
\centering
\includegraphics[width=218pt]{pngs/por_spa_pret_chrf.svg.png}
\includegraphics[width=210pt]{pngs/deu_eng_pret_chrf.svg.png}
\caption{Average chrF++ performance of languages seen in pretraining versus unseen in pretraining for Portuguese$\rightarrow$Spanish (left) and German$\rightarrow$English (right) models. \texttt{S}, \texttt{B}, or \texttt{L} indicate the model size (\texttt{small}, \texttt{base}, or \texttt{large}), and the subscript indicates the amount of training data used.}
\label{fig:zs_pretrain}
\end{figure*}
\subsection{Overall}
Figure \ref{fig:zs_ara} shows
the average performance of every model on a selection of 10 languages from 10 of the most representative language families.\footnote{We provide details of this selection in Appendix \ref{app:select}.}
Overall, we see that ByT5 continues to outperform mT5 in lower-resource scenarios; however, its zero-shot performance decreases drastically in several cases above 10 thousand training examples.
mT5, in contrast, continues to perform well up to 250 thousand examples.\footnote{Upon further testing with our higher resource models, we do start to see this trend with mT5 as well.}
This phenomenon of ByT5 being both better at learning and generalizing from fewer examples but also losing generality faster is difficult to explain and requires further investigation.
However, the trend of requiring fewer examples has been seen in the growth of large language models, to the point where simply prompting without any fine-tuning can be effective \cite{bommasani2021opportunities}.
This implies that the character models are behaving as though they are larger models, and in the sense of training time, they are (see Section \ref{sect:time}).
The faster loss of generality is, to our knowledge, not a known attribute of larger language models, though it has likely not been tested either.
Seeing as this loss of generality appears to occur only when at least 50 thousand sentence pairs are used for training, the following sections only analyze results where 400-10,000 sentence pairs are used for training.
\subsection{Presence in Pretraining}
The presence of a language in pretraining could greatly affect the performance of a model. As we have seen in Section \ref{sect:res}, the amount of data used for fine-tuning can have a great effect, so naturally the amount of data in pretraining should show similar trends.
Here, we test Portuguese$\rightarrow$Spanish models on 3 related languages seen in pretraining (Catalan, French, and Italian) and 3 not seen (Asturian, Friulian, and Occitan).
We also test this with our German$\rightarrow$English models, again using 3 seen languages (Danish, Dutch, Swedish) and 3 unseen (Faroese, Limburgish, and Norwegian Nynorsk).
In Figure \ref{fig:zs_pretrain}, we see that for German$\rightarrow$English and Portuguese$\rightarrow$Spanish, ByT5 performs markedly better than mT5 on languages that are similar to the source but are not seen in pretraining, compared to similar languages seen in pretraining. The differences in the Portuguese$\rightarrow$Spanish model are larger than in the German$\rightarrow$English model. This could be due to the relative closeness of Italic languages compared to Germanic languages, which have a larger degree of diversity.\footnote{Using lang2vec's KNN features \cite{littell2017uriel}, the average similarity between Portuguese and the Italic languages (0.9127) is higher than German and the Germanic languages (0.8943).}
\subsection{Script}
\begin{figure}[!htp]
\centering
\includegraphics[width=210pt]{pngs/rus_eng_script_chrf.svg.png}
\caption{Average chrF++ performance of languages with the same script as the source (Cyrillic) or a different script (Latin).}
\label{fig:zs_script}
\end{figure}
With regard to the script of the languages used, we see only a modest effect.
Figure \ref{fig:zs_script} shows the performance of our Russian$\rightarrow$English model on other Slavic languages, some of which (Ukrainian, Bulgarian, and Serbian) use the Cyrillic alphabet like Russian, while others (Polish, Czech, Slovenian) use the Latin alphabet.
Here we see that script has a small but noticeable effect: ByT5 generally performs better on languages with the same script than on those with a different script. This is intuitive, as the embeddings used for Cyrillic are almost entirely different from those used for Latin, so it is more difficult for ByT5 to detect similar words when they are spelled with a different character set.
\subsection{Source--Source Language Distance}
\begin{figure}[!htp]
\centering
\includegraphics[width=210pt]{pngs/deu_eng_all_fams_chrf.svg.png}
\caption{Average chrF++ scores on different language families, compared to the same family (Germanic).}
\label{fig:zs_fam}
\end{figure}
Intuitively, it would make sense that a character model could capitalize on similar source languages, given that there are similar morphemes which differ only by a few characters. As such, we look at the zero-shot performance of a German$\rightarrow$English model for 4 different language families: Germanic, Italic, Celtic, and Slavic.
These 4 families all contain at least 3 languages written in Latin script and are part of the mC4 corpus (the corpus used for pretraining), making them ideal candidates to isolate the effect of language family.
As we can see in Figure \ref{fig:zs_fam}, language family does not appear to play a large role. ByT5 performs considerably better on Celtic, though this may be due more to how little Celtic data was seen during pretraining: only 5.8 billion tokens of the Celtic languages, compared to 201.8 billion tokens of Slavic and 741 billion tokens of Italic.
The slight underperformance of Slavic with the smaller model sizes could be due to Slavic languages being more distant from Germanic, Celtic, and Italic languages.
\section{Analysis} \label{sect:analysis}
Our analysis focuses on the reasoning for why ByT5 may perform better than mT5 for many cases in translation, as shown in the previous two sections.
We start by showing that character models can and do predominantly operate on the word level when translating. We then identify two aspects in which character models stand out: translation of cognates and translation of rare words.
\subsection{Translation Granularity}
Although \citet{xue-etal-2022-byt5} showed that ByT5 works better on a few generative tasks such as summarization, translation differs from these previously tested tasks in that the source and target sides carry roughly equivalent length and content. As such, one could argue that a subword model would be better suited for translation, given that it may be easier for the model to create an almost one-to-one mapping between source and target subwords. Meanwhile, there is rarely a one-to-one mapping between characters, save for some very closely related languages or dialects.
The one-to-one mapping of source to target subwords is easy to visualize with saliency \cite{simonyan2013deep}. Saliency is an attribution method using gradients collected at prediction time, which allows us to quantify the importance of each source token and previous target tokens in predicting the target token at each generation step. Thus, we can apply this attribution method to our character models to gauge the influence of the source and target context in the model's predictions. We use InSeq \cite{inseq} for our attribution analysis.
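A minimal sketch of such an attribution call is given below; it assumes the public InSeq API (method names may differ across library versions), and the checkpoint name is a stand-in for our fine-tuned models.
\begin{verbatim}
# Saliency attribution sketch with inseq (API may differ across
# versions; "google/byt5-small" stands in for a fine-tuned model).
import inseq

model = inseq.load_model("google/byt5-small", "saliency")
out = model.attribute(
    "Translate German to English: Das ist super-gut!"
)
out.show()  # per-byte source vs. target-prefix contributions
\end{verbatim}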
A similar analysis of token importance was conducted by~\citet{voita-etal-2021-analyzing} on subword MT models, showing the relation between exposure bias and the distribution of source and target contributions. In particular, hallucinatory behavior in MT models was shown to be connected to a language modeling regime in which neural models disregard source tokens, assigning more importance to the previously generated target prefix.
For large character models operating at a very low input granularity, it is reasonable to assume that word-level or morpheme-level co-occurrences could be implicitly learned throughout the training process.
We can verify this hypothesis by evaluating source-side and target-side contributions in relation to the generated characters.
From an input contribution perspective, this would result in higher contributions in the source text for characters marking the beginning of a new word in the translation, while all subsequent characters would then rely heavily on the previous ones in the same generated word.
In other words, if a character model is translating character-by-character, we would expect the attributions to the source side to be relatively uniform across characters. If it is instead translating word-by-word, we would expect the first character's attributions to draw more on the source side, and subsequent characters' attributions to draw more on the previous characters of the generated word.
Referring back to Figure \ref{fig:attr_example}, we see an example of the source-side versus target-side attributions, where the source-side attribution is elevated at the beginning of each word. This implies that the model is translating word-by-word, as it decides on the appropriate translation for a given word when it generates the first character, and subsequently relies more on the target-side to complete the word.
\begin{figure}
\centering
\includegraphics[width=210pt]{pngs/attr_pos_spa.svg.png}
\caption{Average source attributions for ByT5-Large, based on the position of each byte token within a word.}
\label{fig:attr_pos}
\end{figure}
In Figure \ref{fig:attr_pos}, we see the average source attribution for each character, given its position in the word (with 0 being the 1st position). The average source-side attribution declines while the average target-side attribution correspondingly increases, indicating an implicit word-by-word translation. This might explain why, in every scenario shown in Section \ref{sect:dt_res}, ByT5's performance is greater than or equal to mT5's: ByT5 is at the very least capable of mimicking word-by-word translation, while also exploiting character-by-character modeling whenever useful (e.g. for translating cognates, see Section~\ref{sec:cognates}).
We also see that the slope for English$\rightarrow$Spanish is steeper than for Portuguese$\rightarrow$Spanish, which may indicate that character models operate more on the character level when translating Portuguese into Spanish, given the linguistic similarity of that pair relative to English and Spanish. This implies that character models are capable of varying the granularity of their translation depending on the language pair.
\subsection{Performance on Cognates}\label{sec:cognates}
If character models primarily operate on the word-level for translation, why do we see such a large performance increase from character models, particularly when trained on small amounts of data?
One intuitive reason could be that they can operate on the character-level when desirable. Such would be the case for orthographically similar word pairs, or cognates, as their character-level similarity is innately useful and easily exploited by a character model.
To confirm that character models perform better at translating cognates, we align the source, reference, and hypothesis sentences with AWESoME (also known as awesome-align) \cite{dou-neubig-2021-word} in order to get a word-level accuracy. We define a cognate based on the inverse normalized-Levenshtein distance of the source and reference words, varying the threshold from 0 to 1.
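A minimal sketch of this cognate criterion is given below; the helper functions are our own illustrations, and the word alignment itself is assumed to come from AWESoME.
\begin{verbatim}
# Cognate criterion sketch: inverse normalized Levenshtein distance
# between an aligned source/reference word pair (helpers are ours).
def levenshtein(a, b):
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def similarity(src_word, ref_word):
    # 1.0 for identical words, 0.0 for completely different ones.
    denom = max(len(src_word), len(ref_word)) or 1
    return 1.0 - levenshtein(src_word, ref_word) / denom

print(similarity("universidade", "universidad"))  # ~0.92: a cognate
\end{verbatim}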
\begin{figure}[!htp]
\centering
\includegraphics[width=210pt]{pngs/cognate_acc_porspa.svg.png}
\caption{Difference in word-level accuracy for \texttt{large} models trained on Portuguese$\rightarrow$Spanish.}
\label{fig:cognate_acc}
\end{figure}
In Figure \ref{fig:cognate_acc}, we see the difference in accuracy of the \texttt{large} ByT5 and mT5 models trained for Portuguese$\rightarrow$Spanish. We see that as the similarity threshold increases (i.e. the words become more similar), the accuracy disparity also increases in favor of ByT5. Such is especially the case for the models trained on less data, indicating that character models can learn cognate translations not only more effectively, but also more quickly.
\subsection{Performance on Rare Words}
As we have seen in our cross-lingual results (Section \ref{sect:zs_res}), character models can substantially outperform subword models on similar languages not seen in pretraining. Additionally, across all of our results, a common theme appears to be that character models perform well in low-resource scenarios.
Ostensibly, it would follow that character models can correctly translate rare words more often than subword models, but is this indeed the case? To answer this, similar to our analysis of cognates (Section~\ref{sec:cognates}), we again measure word-level translation accuracy, this time binning based on the frequency of the words in the training set. We use the \texttt{large} models trained on 10 thousand sentence pairs of German$\rightarrow$English. Figure \ref{fig:word_freq} shows the results.
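A sketch of this binning procedure is shown below; \texttt{aligned\_pairs} is assumed to be the word-aligned (reference, hypothesis) pairs from the cognate analysis, and the bin edges are illustrative.
\begin{verbatim}
# Word-accuracy-by-frequency sketch (aligned_pairs assumed to come
# from awesome-align as in the cognate analysis; bins illustrative).
from collections import Counter

def accuracy_by_frequency(aligned_pairs, train_tokens,
                          bins=(0, 1, 10, 100, 1000)):
    freq = Counter(train_tokens)
    stats = {b: [0, 0] for b in bins}   # bin edge -> [correct, total]
    for ref_word, hyp_word in aligned_pairs:
        b = max(e for e in bins if freq[ref_word] >= e)
        stats[b][1] += 1
        stats[b][0] += int(ref_word == hyp_word)
    return {b: c / t for b, (c, t) in stats.items() if t}
\end{verbatim}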
\begin{figure}[!htp]
\centering
\includegraphics[width=210pt]{pngs/word_freq_10k.svg.png}
\caption{Difference in word-level accuracy by their frequency in the test set for \texttt{large} models trained on German$\rightarrow$English with 10 thousand sentence pairs.}
\label{fig:word_freq}
\end{figure}
Here, we see that ByT5 has a higher translation accuracy on words of fewer than 100 occurrences. The accuracy disparity between the trained source language (German) and the 3 Germanic languages seen in pretraining (Danish, Dutch, and Swedish) is minimal, showing that the impact of fine-tuning on a language has a relatively equal effect to similar languages seen in pretraining.
Meanwhile, for the languages unseen in pretraining (Faroese, Limburgish, and Norwegian Nynorsk), character models have higher accuracy across the board, though the gap is still more pronounced in the lower-frequency bins.
\section{Efficiency} \label{sect:time}
Although we have shown that the translation quality of character models is competitive or better than subword models, another important aspect that should not be ignored is the efficiency of character models. We report the training and inference times for our models in Table \ref{tab:times}, using a single Nvidia V100 (32GB) GPU.
\begin{table}[!htp]\centering
\begin{tabular}{lrrr}\toprule
& \multicolumn{2}{c}{Training} & Inference\\
Model & samples/s & epochs & samples/s \\\midrule
mT5-small &\textbf{2.5} &38.11 &\textbf{52.91} \\
ByT5-small &0.43 &\textbf{29.24} &8.90 \\
mT5-base &\textbf{1.15} &22.87 &\textbf{20.77} \\
ByT5-base &0.24 &\textbf{18.16} &3.96 \\
mT5-large &\textbf{0.48} &19.58 &\textbf{6.23} \\
ByT5-large &0.12 &\textbf{16.25} &1.17 \\
\bottomrule
\end{tabular}
\caption{The training and inference speeds for German$\rightarrow$English experiments. Epochs reported are using the models trained on 10 thousand pairs, using early stopping. Best result per model size and column shown in bold.}\label{tab:times}
\end{table}
Both the training and inference speeds in samples per second are considerably lower for the character models, which are 4-5 times slower in training and 5-6 times slower in inference.
The number of epochs needed is lower for the character models, but not by enough to counteract their slower per-sample speed during training.
The slower speed comes largely from the increase in sequence length. While we tried to balance the size of the batches such that each model sees the same amount of text per batch, achieving this required the character models to accumulate gradients for 4 times as many iterations as the subword models.
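The following sketch illustrates this balancing with placeholder values (our exact hyperparameters are listed in Appendix \ref{sect:app_hyperparam}):
\begin{verbatim}
# Illustrative batch balancing: byte sequences are ~4x longer than
# subword sequences, so the per-device batch shrinks and gradient
# accumulation grows by the same factor (values are placeholders).
from transformers import Seq2SeqTrainingArguments

subword_args = Seq2SeqTrainingArguments(
    output_dir="mt5-deu-eng",
    per_device_train_batch_size=8,
    gradient_accumulation_steps=4,
)
byte_args = Seq2SeqTrainingArguments(
    output_dir="byt5-deu-eng",
    per_device_train_batch_size=2,    # longer input sequences
    gradient_accumulation_steps=16,   # 4x as many iterations
)
\end{verbatim}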
Thus, if training or inference speed is a concern, subword models are likely the superior choice, particularly for high-resource languages. In the low-resource setting, there is a significant trade-off between accuracy and speed when choosing between character and subword models.
\section{Conclusion} \label{sect:conclusion}
Subword-level models have been the dominant paradigm for machine translation; however, this work has shown that character models can be competitive or superior in many circumstances. First, character models simply achieve better performance on the trained language pair. Second, they particularly excel when training data is scarce. Third, they have superior cross-lingual transferability, especially to languages unseen in training but similar to the source language.
We attribute this superior performance, as shown in our analyses, to a character model's ability to translate both word-by-word and character-by-character, choosing the appropriate granularity for the context. This results in better word-level accuracy on cognates and rare words.
The performance increase is however not without a trade-off: speed. The character models are at least 4 times slower in both training and inference, leading them to be sub-par for many real-world situations. Nevertheless, character models can still find use in less time-sensitive settings.
So are character-level translations worth the wait? Maybe.
\section*{Acknowledgements}
We thank Gabriele Sarti for the many fruitful discussions regarding attribution analysis and for the idea of investigating the models' performances on translating cognates. We also thank the Center for Information Technology of the University of Groningen for their support and for providing access to the Peregrine high performance computing cluster.
\section{Introduction}
Domination concepts in graphs are well studied, yet new variants are introduced frequently and interest in them is growing rapidly. We refer the reader to the fundamental books \cite{domination,2} and the surveys \cite{6,5} on domination in general. A set $D\subseteq V$ is a \emph{strong dominating set} of a simple graph $G=(V,E)$, if for every vertex $x\in \overline{D}=V\setminus D$ there is a vertex $y\in D$ with $xy\in E(G)$ and ${\rm deg}(x)\leq {\rm deg}(y)$. The \emph{strong domination number} $\gamma_{\rm st}(G)$ is defined as the minimum cardinality of a strong dominating set. A $\gamma_{\rm st}$-\emph{set} of $G$ is a strong dominating set of $G$ of minimum cardinality $\gamma_{\rm st}(G)$. If $D$ is a strong dominating set in a graph $G$, then we say that a vertex $u \in \overline{D}$ is \emph{strong dominated} by a vertex $v \in D$ if $uv\in E(G)$ and ${\rm deg}(u)\leq {\rm deg}(v)$.
In 1996, Sampathkumar and Pushpa Latha \cite{DM} introduced the strong domination number, and some upper bounds on this parameter were presented in \cite{DM2,DM}. Similarly, a set $D\subset V$ is a weak dominating set of $G$ if every vertex $v\in V\setminus D$ is
adjacent to a vertex $u\in D$ such that ${\rm deg}(v)\geq {\rm deg}(u)$ (see \cite{Boutrig}). The minimum cardinality of a weak dominating set of $G$ is denoted by $\gamma_{\rm w}(G)$. Boutrig and Chellali proved that for any graph $G$ of order $n\geq 3$, $\gamma_{\rm w}(G)+\frac{3}{\Delta+1}\gamma_{\rm st}(G)\leq n$. Alikhani, Ghanbari and Zaherifard \cite{sub} examined the effects on $\gamma_{\rm st}(G)$ when $G$ is modified by edge deletion, edge subdivision and edge contraction. They also studied the strong domination number of the $k$-subdivision of $G$.
Motivated by the enumeration of dominating sets of a graph and the domination polynomial (see e.g. \cite{euro}), the enumeration of strong dominating sets for certain graphs has been studied in \cite{JAS}.
The study of the strong domination number under graph operations is a natural and interesting subject, and the join and corona products have been treated in \cite{JAS}.
A domatic partition is a partition of the vertex set into dominating sets, in other words, a partition $\pi = \{V_1, V_2, \ldots, V_k \}$ of $V(G)$ such that every set $V_{i}$ is a dominating set in $G$.
Cockayne and Hedetniemi \cite{Cockayne} introduced the domatic number $d(G)$ of a graph as the maximum number $k$ of classes of such a partition. For more details on the domatic number we refer to e.g., \cite{11,12,13}.
Aram, Sheikholeslami and Volkmann \cite{TOC} showed that the total domatic number of a random $r$-regular graph is almost surely at most $r-1$, and that for $3$-regular random graphs the total domatic number is almost surely equal to $2$. They also gave a lower bound on the total domatic number of a graph in terms of its order, minimum degree and maximum degree.
Motivated by the definitions of the domatic number and the total domatic number, we focus on studying the strong domatic number of a graph.
A partition of $V(G)$, all of whose classes are strong dominating sets in $G$, is called a \textit{strong domatic partition} of $G$. The maximum number of classes of a strong domatic partition of $G$ is called the \textit{strong
domatic number} of $G$ and is denoted by $d_{\rm st} (G)$.
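For small graphs, this definition can be checked directly by exhaustive search. The following sketch (an illustrative helper of our own, written with networkx) enumerates all vertex partitions and reports the largest one whose classes are all strong dominating sets; it is exponential in the order of the graph and intended only for small instances such as those treated in Section 3.
\begin{verbatim}
# Brute-force sketch for d_st(G) on small graphs (our own helper,
# exponential in |V(G)|; for illustration only).
import networkx as nx

def is_strong_dominating(G, D):
    # Every vertex outside D needs a neighbor in D of degree >= its own.
    return all(any(G.degree[y] >= G.degree[x] for y in G[x] if y in D)
               for x in G if x not in D)

def partitions(rest, classes):
    # Enumerate all partitions of the vertex list `rest`.
    if not rest:
        yield classes
        return
    v, tail = rest[0], rest[1:]
    for i in range(len(classes)):
        yield from partitions(tail, classes[:i]
                              + [classes[i] | {v}] + classes[i+1:])
    yield from partitions(tail, classes + [{v}])

def strong_domatic_number(G):
    return max(len(P) for P in partitions(list(G), [])
               if all(is_strong_dominating(G, D) for D in P))

print(strong_domatic_number(nx.complete_graph(4)))  # 4 (= n for K_n)
print(strong_domatic_number(nx.cycle_graph(6)))     # 3 (n = 3k case)
\end{verbatim}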
In Section 2, we compute and study the strong domatic number for certain graphs and we present different sharp bounds on $d_{\rm st}(G)$.
In Section 3, we determine this parameter for all cubic graphs of order at most $10$.
\section{Results for certain graphs}
In this section, we study the strong domatic number for certain graphs. First we state and prove the following theorem for graphs $G$ with $\delta(G)=1$.
\begin{theorem}\label{thm:pendant}
If a graph $G$ has a pendant vertex, then $d_{\rm st} (G)=1$ or $d_{\rm st} (G)=2$.
\end{theorem}
\begin{proof}
Suppose that $u$ is a pendant vertex with $N(u)=\{v\}$, and let $P$ be a strong domatic partition of $G$. We claim that $|P|\leq 2$. Since ${\rm deg}(u)=1$, every strong dominating set $D$ of $G$ must contain $u$, or $v$, or both. If some class $D$ of $P$ contains both $u$ and $v$, then $P=\{D\}$ and hence $D=V(G)$: indeed, any other class $D'\in P$ would contain neither $u$ nor $v$, so no vertex of $D'$ could strong dominate $u$, a contradiction. Otherwise, every class of $P$ contains $u$ or $v$ but not both, so $P$ has at most two classes. Therefore we have the result.
\hfill $\square$\medskip
\end{proof}
The following result gives bounds for the strong domatic number based on the number of vertices with maximum degree.
\begin{theorem}\label{thm:max-degree}
Let $G$ be a graph with maximum degree $\Delta$ and $m$ be the number of vertices with degree $\Delta$. Then $1\leq d_{\rm st} (G)\leq m$.
\end{theorem}
\begin{proof}
Any vertex of degree $\Delta$ must either belong to a given strong dominating set or be strong dominated by an adjacent vertex of degree $\Delta$. Hence every class of a strong domatic partition contains at least one vertex of degree $\Delta$, so such a partition has at most $m$ classes, and we are done.
\hfill $\square$\medskip
\end{proof}
\begin{remark}\label{rem:star-complete}
Bounds in Theorem \ref{thm:max-degree} are tight. For the lower bound, it suffices to consider the star graph $K_{1,n}$: the center is the only vertex of maximum degree, and any class not containing the center cannot strong dominate it, so the only strong domatic partition is $\{V(K_{1,n})\}$ and $d_{\rm st} (K_{1,n})=1$. For the upper bound, it suffices to consider the complete graph $K_n$: every single vertex forms a strong dominating set, so $d_{\rm st} (K_{n})=n$, and we are done.
\end{remark}
We need the following result to obtain more results:
\begin{theorem}{\rm\cite{Cockayne}}\label{thm:domatic-min-deg}
For any graph $G$, $d(G)\leq \delta +1$, where $\delta$ is the minimum degree, and $d(G)$ is the domatic number of $G$.
\end{theorem}
Since all vertices of a regular graph have the same degree, every dominating set of a regular graph is also a strong dominating set. Therefore, by Theorem \ref{thm:domatic-min-deg} we have the following result.
\begin{corollary}\label{cor:strong-domatic-min-deg}
For any $k$-regular graph $G$, $d(G)=d_{\rm st}(G)$ and $d_{\rm st}(G)\leq k +1$.
\end{corollary}
\begin{figure}
\begin{center}
\psscalebox{0.5 0.5}
{
\begin{pspicture}(0,-7.215)(20.277115,-1.245)
\psline[linecolor=black, linewidth=0.04](1.7971154,-1.815)(1.7971154,-1.815)
\psdots[linecolor=black, dotsize=0.4](8.997115,-1.815)
\psdots[linecolor=black, dotsize=0.4](10.5971155,-1.815)
\psdots[linecolor=black, dotsize=0.4](9.797115,-4.215)
\psdots[linecolor=black, dotsize=0.4](8.997115,-6.615)
\psdots[linecolor=black, dotsize=0.4](10.5971155,-6.615)
\psline[linecolor=black, linewidth=0.08](8.997115,-1.815)(10.5971155,-1.815)(8.997115,-6.615)(10.5971155,-6.615)(8.997115,-1.815)(8.997115,-1.815)
\psdots[linecolor=black, dotsize=0.4](12.197115,-3.415)
\psdots[linecolor=black, dotsize=0.4](12.197115,-5.015)
\psdots[linecolor=black, dotsize=0.4](7.397115,-3.415)
\psdots[linecolor=black, dotsize=0.4](7.397115,-5.015)
\psline[linecolor=black, linewidth=0.08](12.197115,-5.015)(7.397115,-3.415)(7.397115,-5.015)(12.197115,-3.415)(12.197115,-5.015)(12.197115,-5.015)
\psdots[linecolor=black, dotsize=0.4](0.1971154,-3.415)
\psdots[linecolor=black, dotsize=0.4](0.1971154,-5.015)
\psdots[linecolor=black, dotsize=0.4](2.5971155,-4.215)
\psline[linecolor=black, linewidth=0.08](2.5971155,-4.215)(0.1971154,-3.415)(0.1971154,-5.015)(2.5971155,-4.215)(2.5971155,-4.215)
\psdots[linecolor=black, dotsize=0.4](3.3971155,-1.815)
\psdots[linecolor=black, dotsize=0.4](4.5971155,-2.615)
\psdots[linecolor=black, dotsize=0.4](3.3971155,-6.615)
\psdots[linecolor=black, dotsize=0.4](4.5971155,-5.815)
\psline[linecolor=black, linewidth=0.08](2.5971155,-4.215)(4.5971155,-5.815)(3.3971155,-6.615)(3.3971155,-6.615)
\psline[linecolor=black, linewidth=0.08](2.5971155,-4.215)(3.3971155,-6.615)(3.3971155,-6.615)
\psline[linecolor=black, linewidth=0.08](2.5971155,-4.215)(3.3971155,-1.815)(4.5971155,-2.615)(2.5971155,-4.215)(2.5971155,-4.215)
\psdots[linecolor=black, dotsize=0.4](15.397116,-2.615)
\psdots[linecolor=black, dotsize=0.4](16.597115,-1.815)
\psdots[linecolor=black, dotsize=0.4](17.397116,-4.215)
\psdots[linecolor=black, dotsize=0.4](15.397116,-5.815)
\psdots[linecolor=black, dotsize=0.4](16.597115,-6.615)
\psdots[linecolor=black, dotsize=0.4](19.397116,-5.815)
\psdots[linecolor=black, dotsize=0.4](18.197115,-6.615)
\psdots[linecolor=black, dotsize=0.4](14.997115,-3.415)
\psdots[linecolor=black, dotsize=0.4](14.997115,-5.015)
\psdots[linecolor=black, dotsize=0.4](18.197115,-1.815)
\psdots[linecolor=black, dotsize=0.4](19.397116,-2.615)
\psdots[linecolor=black, dotsize=0.1](18.997116,-3.815)
\psdots[linecolor=black, dotsize=0.1](18.997116,-4.215)
\psdots[linecolor=black, dotsize=0.1](18.997116,-4.615)
\rput[bl](17.697115,-4.295){$x$}
\rput[bl](16.137115,-1.595){$u_1$}
\rput[bl](14.857116,-2.375){$u_2$}
\psline[linecolor=black, linewidth=0.08](17.397116,-4.215)(19.397116,-2.615)(18.197115,-1.815)(17.397116,-4.215)(16.597115,-1.815)(15.397116,-2.615)(17.397116,-4.215)(14.997115,-3.415)(14.997115,-5.015)(17.397116,-4.215)(15.397116,-5.815)(16.597115,-6.615)(17.397116,-4.215)(18.197115,-6.615)(19.397116,-5.815)(17.397116,-4.215)(17.397116,-4.215)
\rput[bl](14.277116,-3.495){$u_3$}
\rput[bl](14.297115,-5.095){$u_4$}
\rput[bl](14.817116,-6.195){$u_5$}
\rput[bl](16.217115,-7.175){$u_6$}
\rput[bl](18.037115,-7.215){$u_7$}
\rput[bl](19.517115,-6.295){$u_8$}
\rput[bl](18.097115,-1.495){$u_{2n}$}
\rput[bl](19.337116,-2.315){$u_{2n-1}$}
\end{pspicture}
}
\end{center}
\caption{Friendship graphs $F_3$, $F_4$ and $F_n$, respectively.}\label{fig:friend}
\end{figure}
\begin{figure}
\begin{center}
\psscalebox{0.6 0.6}
{
\begin{pspicture}(0,-4.4)(17.953062,-1.605769)
\psdots[linecolor=black, dotsize=0.4](1.3971153,-3.0028846)
\psdots[linecolor=black, dotsize=0.4](2.5971153,-3.0028846)
\psdots[linecolor=black, dotsize=0.4](2.5971153,-1.8028846)
\psdots[linecolor=black, dotsize=0.4](1.3971153,-1.8028846)
\psdots[linecolor=black, dotsize=0.4](1.3971153,-4.2028847)
\psdots[linecolor=black, dotsize=0.4](0.19711533,-4.2028847)
\psdots[linecolor=black, dotsize=0.4](2.5971153,-4.2028847)
\psdots[linecolor=black, dotsize=0.4](3.7971153,-4.2028847)
\psline[linecolor=black, linewidth=0.08](2.5971153,-4.2028847)(3.7971153,-4.2028847)(2.5971153,-3.0028846)(1.3971153,-3.0028846)(2.5971153,-4.2028847)(1.3971153,-3.0028846)(1.3971153,-1.8028846)(2.5971153,-1.8028846)(2.5971153,-3.0028846)(1.3971153,-4.2028847)(0.19711533,-4.2028847)(1.3971153,-3.0028846)(1.3971153,-3.0028846)
\psline[linecolor=black, linewidth=0.08](5.7971153,-4.2028847)(6.997115,-4.2028847)(6.997115,-4.2028847)
\psdots[linecolor=black, dotsize=0.4](5.7971153,-4.2028847)
\psdots[linecolor=black, dotsize=0.4](6.997115,-4.2028847)
\psdots[linecolor=black, dotsize=0.4](6.997115,-3.0028846)
\psdots[linecolor=black, dotsize=0.4](8.197115,-3.0028846)
\psdots[linecolor=black, dotsize=0.4](8.197115,-4.2028847)
\psdots[linecolor=black, dotsize=0.4](9.397116,-4.2028847)
\psdots[linecolor=black, dotsize=0.4](8.197115,-1.8028846)
\psdots[linecolor=black, dotsize=0.4](9.397116,-1.8028846)
\psdots[linecolor=black, dotsize=0.4](6.997115,-1.8028846)
\psdots[linecolor=black, dotsize=0.4](5.7971153,-1.8028846)
\psline[linecolor=black, linewidth=0.08](5.7971153,-4.2028847)(6.997115,-3.0028846)(5.7971153,-1.8028846)(6.997115,-1.8028846)(8.197115,-3.0028846)(6.997115,-3.0028846)(8.197115,-1.8028846)(9.397116,-1.8028846)(8.197115,-3.0028846)(9.397116,-4.2028847)(8.197115,-4.2028847)(6.997115,-3.0028846)(8.197115,-3.0028846)(6.997115,-4.2028847)(6.997115,-4.2028847)
\psline[linecolor=black, linewidth=0.08](13.397116,-4.2028847)(14.5971155,-4.2028847)(14.5971155,-4.2028847)
\psdots[linecolor=black, dotsize=0.4](13.397116,-4.2028847)
\psdots[linecolor=black, dotsize=0.4](14.5971155,-4.2028847)
\psdots[linecolor=black, dotsize=0.4](14.5971155,-3.0028846)
\psdots[linecolor=black, dotsize=0.4](15.797115,-3.0028846)
\psdots[linecolor=black, dotsize=0.4](15.797115,-4.2028847)
\psdots[linecolor=black, dotsize=0.4](16.997116,-4.2028847)
\psdots[linecolor=black, dotsize=0.4](15.797115,-1.8028846)
\psdots[linecolor=black, dotsize=0.4](16.997116,-1.8028846)
\psdots[linecolor=black, dotsize=0.4](14.5971155,-1.8028846)
\psdots[linecolor=black, dotsize=0.4](13.397116,-1.8028846)
\psline[linecolor=black, linewidth=0.08](13.397116,-4.2028847)(14.5971155,-3.0028846)(13.397116,-1.8028846)(14.5971155,-1.8028846)(15.797115,-3.0028846)(14.5971155,-3.0028846)(15.797115,-1.8028846)(16.997116,-1.8028846)(15.797115,-3.0028846)(16.997116,-4.2028847)(15.797115,-4.2028847)(14.5971155,-3.0028846)(15.797115,-3.0028846)(14.5971155,-4.2028847)(14.5971155,-4.2028847)
\psdots[linecolor=black, dotsize=0.4](11.797115,-2.2028844)
\psdots[linecolor=black, dotsize=0.4](12.997115,-2.2028844)
\psdots[linecolor=black, dotsize=0.4](12.997115,-3.8028846)
\psdots[linecolor=black, dotsize=0.4](11.797115,-3.8028846)
\psline[linecolor=black, linewidth=0.08](11.797115,-2.2028844)(12.997115,-2.2028844)(11.797115,-2.2028844)(11.797115,-2.2028844)(11.797115,-2.2028844)
\psline[linecolor=black, linewidth=0.08](14.5971155,-3.0028846)(11.797115,-2.2028844)(11.797115,-2.2028844)
\psline[linecolor=black, linewidth=0.08](12.997115,-2.2028844)(15.797115,-3.0028846)(15.797115,-3.0028846)
\psline[linecolor=black, linewidth=0.08](14.5971155,-3.0028846)(11.797115,-3.8028846)(11.797115,-3.8028846)
\psline[linecolor=black, linewidth=0.08](11.797115,-3.8028846)(12.997115,-3.8028846)(15.797115,-3.0028846)(15.797115,-3.0028846)
\psdots[linecolor=black, dotsize=0.1](17.797115,-2.6028845)
\psdots[linecolor=black, dotsize=0.1](17.797115,-3.4028845)
\psdots[linecolor=black, dotsize=0.1](17.903782,-3.0028846)
\end{pspicture}
}
\end{center}
\caption{Book graph $B_3$, $B_4$ and $B_n$, respectively} \label{fig:book}
\end{figure}
The following result gives the strong domatic number of certain graphs:
\begin{proposition}
The following holds:
\begin{itemize}
\item[(i)]
For the path graph $P_n$, $n\geq 4$, we have $d_{\rm st}(P_n)=2$.
\item[(ii)]
For the cycle graph $C_n$,
\[
d_{\rm st}(C_n)=\left\{
\begin{array}{ll}
{\displaystyle
3}&
\quad\mbox{if $n=3k $,}\\[15pt]
{\displaystyle
2}&
\quad\mbox{otherwise.}
\end{array}
\right.
\]
\item[(iii)]
For the complete bipartite graph $K_{n,m}$,
\[
d_{\rm st}(K_{n,m})=\left\{
\begin{array}{ll}
{\displaystyle
1}&
\quad\mbox{if $n<m $,}\\[15pt]
{\displaystyle
n}&
\quad\mbox{if $n=m $.}
\end{array}
\right.
\]
\item[(iv)]
For the friendship graph $F_n$ (see Figure \ref{fig:friend}), $d_{\rm st}(F_n)=1$.
\item[(v)]
For the book graph $B_n$ (see Figure \ref{fig:book}), $d_{\rm st}(B_n)=2$.
\end{itemize}
\end{proposition}
\begin{figure}
\begin{center}
\psscalebox{0.70 0.70}
{
\begin{pspicture}(0,-3.8135576)(6.4171157,-3.0464423)
\psdots[linecolor=black, dotsize=0.1](3.3971155,-3.6164422)
\psdots[linecolor=black, dotsize=0.1](3.7971156,-3.6164422)
\psdots[linecolor=black, dotsize=0.1](4.1971154,-3.6164422)
\psline[linecolor=black, linewidth=0.08](4.5971155,-3.6164422)(6.1971154,-3.6164422)(6.1971154,-3.6164422)
\psline[linecolor=black, linewidth=0.08](2.9971154,-3.6164422)(0.19711548,-3.6164422)
\psdots[linecolor=black, dotsize=0.4](0.19711548,-3.6164422)
\psdots[linecolor=black, dotsize=0.4](1.3971155,-3.6164422)
\psdots[linecolor=black, dotsize=0.4](2.5971155,-3.6164422)
\psdots[linecolor=black, dotsize=0.4](4.9971156,-3.6164422)
\psdots[linecolor=black, dotsize=0.4](6.1971154,-3.6164422)
\rput[bl](0.037115477,-3.3164423){$v_1$}
\rput[bl](1.2771155,-3.3564422){$v_2$}
\rput[bl](2.4371154,-3.3564422){$v_3$}
\rput[bl](6.0171156,-3.2964423){$v_n$}
\rput[bl](4.8371153,-3.3564422){$v_{n-1}$}
\end{pspicture}
}
\end{center}
\caption{\label{path} The path graph with $V(P_n)=\{v_1,v_2,\ldots,v_n\}$.}
\end{figure}
\begin{proof}
\begin{itemize}
\item[(i)]
Suppose that $V(P_n)=\{v_1,v_2,\ldots,v_n\}$, and vertices are as in Figure \ref{path}. One can easily check that the set of vertices with even indices is a strong dominating set, and the set of vertices with odd indices is another strong dominating set. Therefore, by Theorem \ref{thm:pendant}, we have $d_{\rm st}(P_n)=2$.
\item[(ii)]
Suppose that $V(C_n)=\{v_1,v_2,\ldots,v_n\}$, and vertices are in a natural order. We consider the following cases:
\begin{itemize}
\item[(a)]
$n=3k$. Let
$$P=\Bigl\{ \{v_1,v_4,\ldots,v_{3k-2}\},\{v_2,v_5,\ldots,v_{3k-1}\},\{v_3,v_6,\ldots,v_{3k}\} \Bigr\}.$$
Clearly $P$ is a strong domatic partition of $C_{3k}$. By Corollary \ref{cor:strong-domatic-min-deg}, $d_{\rm st}(C_n)\leq 3$, and therefore we are done.
\item[(b)]
$n=3k+1$. Since $\gamma_{\rm st}(C_{n})=\gamma(C_n)=\lfloor \frac{n+2}{3} \rfloor$, we have $\gamma_{\rm st}(C_{3k+1})=k+1$. So every strong dominating set of $C_{3k+1}$ has at least $k+1$ vertices, and since $3(k+1)>3k+1$, we cannot have a strong domatic partition of $C_{3k+1}$ of size $3$.
\item[(c)]
$n=3k+2$. By a similar argument as in part (b), we have the result.
\end{itemize}
\item[(iii)]
Suppose that $V(K_{n,m})=\{v_1,v_2,\ldots,v_n,u_1,u_2,\ldots,u_m\}$, and for $i=1,2,\ldots,n$, $N(v_i)=\{u_1,u_2,\ldots,u_m\}$. We consider the following cases:
\begin{itemize}
\item[(a)]
$n<m$. Since ${\rm deg}(v_i)=m>n={\rm deg}(u_j)$ for all $i$ and $j$, no vertex can strong dominate $v_i$ for any $1\leq i\leq n$. Hence every class of a strong domatic partition must contain all of $v_1,\ldots,v_n$, which forces the partition to consist of the single set $V(K_{n,m})$. So $d_{\rm st}(K_{n,m})=1$.
\item[(b)]
$n=m$. Let
$$P=\Bigl\{ \{u_1,v_1\},\{u_2,v_2\},\ldots,\{u_n,v_n\} \Bigr\}.$$
Then $P$ is a strong domatic partition of $K_{n,n}$. Since a single vertex does not form a strong dominating set of $K_{n,n}$ (the other vertices on its own side are not dominated), no strong domatic partition of larger size exists. Hence $d_{\rm st}(K_{n,n})=n$, and we are done.
\end{itemize}
\item[(iv)]
It is an immediate consequence of Theorem \ref{thm:max-degree}.
\item[(v)]
Suppose that $u$ and $v$ are the vertices with maximum degree. Let $D_1=\{u\}\cup N(v)$ and $D_2=\{v\}\cup N(u)$. Clearly, $P=\{D_1,D_2\}$ is a strong domatic partition of $B_n$, and by Theorem \ref{thm:max-degree}, we have the result.
\hfill $\square$\medskip
\end{itemize}
\end{proof}
\begin{figure}
\begin{center}
\psscalebox{0.6 0.6}
{
\begin{pspicture}(0,-3.385)(8.861389,-0.955)
\psline[linecolor=black, linewidth=0.08](0.20138885,-1.585)(4.201389,-1.585)(3.8013887,-1.585)
\psline[linecolor=black, linewidth=0.08](5.8013887,-1.585)(7.4013886,-1.585)(8.601389,-1.585)(8.601389,-1.585)
\psline[linecolor=black, linewidth=0.08](0.20138885,-1.585)(0.20138885,-2.785)(0.20138885,-2.785)
\psline[linecolor=black, linewidth=0.08](1.4013889,-1.585)(1.4013889,-2.785)(1.4013889,-2.785)
\psline[linecolor=black, linewidth=0.08](2.601389,-1.585)(2.601389,-2.785)(2.601389,-2.785)
\psline[linecolor=black, linewidth=0.08](3.8013887,-1.585)(3.8013887,-2.785)(3.8013887,-2.785)
\psline[linecolor=black, linewidth=0.08](6.201389,-1.585)(6.201389,-2.785)(6.201389,-2.785)
\psline[linecolor=black, linewidth=0.08](7.4013886,-1.585)(7.4013886,-2.785)(7.4013886,-2.785)
\psline[linecolor=black, linewidth=0.08](8.601389,-1.585)(8.601389,-2.785)(8.601389,-2.785)
\psdots[linecolor=black, fillstyle=solid, dotstyle=o, dotsize=0.4, fillcolor=white](0.20138885,-1.585)
\psdots[linecolor=black, fillstyle=solid, dotstyle=o, dotsize=0.4, fillcolor=white](1.4013889,-1.585)
\psdots[linecolor=black, fillstyle=solid, dotstyle=o, dotsize=0.4, fillcolor=white](2.601389,-1.585)
\psdots[linecolor=black, fillstyle=solid, dotstyle=o, dotsize=0.4, fillcolor=white](3.8013887,-1.585)
\psdots[linecolor=black, fillstyle=solid, dotstyle=o, dotsize=0.4, fillcolor=white](6.201389,-1.585)
\psdots[linecolor=black, fillstyle=solid, dotstyle=o, dotsize=0.4, fillcolor=white](7.4013886,-1.585)
\psdots[linecolor=black, fillstyle=solid, dotstyle=o, dotsize=0.4, fillcolor=white](8.601389,-1.585)
\psdots[linecolor=black, fillstyle=solid, dotstyle=o, dotsize=0.4, fillcolor=white](0.20138885,-2.785)
\psdots[linecolor=black, fillstyle=solid, dotstyle=o, dotsize=0.4, fillcolor=white](1.4013889,-2.785)
\psdots[linecolor=black, fillstyle=solid, dotstyle=o, dotsize=0.4, fillcolor=white](2.601389,-2.785)
\psdots[linecolor=black, fillstyle=solid, dotstyle=o, dotsize=0.4, fillcolor=white](3.8013887,-2.785)
\psdots[linecolor=black, fillstyle=solid, dotstyle=o, dotsize=0.4, fillcolor=white](6.201389,-2.785)
\psdots[linecolor=black, fillstyle=solid, dotstyle=o, dotsize=0.4, fillcolor=white](7.4013886,-2.785)
\psdots[linecolor=black, fillstyle=solid, dotstyle=o, dotsize=0.4, fillcolor=white](8.601389,-2.785)
\psdots[linecolor=black, dotsize=0.1](4.601389,-1.585)
\psdots[linecolor=black, dotsize=0.1](5.001389,-1.585)
\psdots[linecolor=black, dotsize=0.1](5.4013886,-1.585)
\rput[bl](0.0013888549,-1.225){$v_1$}
\rput[bl](1.1813889,-1.245){$v_2$}
\rput[bl](2.4213889,-1.225){$v_3$}
\rput[bl](3.581389,-1.265){$v_4$}
\rput[bl](5.8013887,-1.225){$v_{n-2}$}
\rput[bl](7.041389,-1.245){$v_{n-1}$}
\rput[bl](8.461389,-1.265){$v_n$}
\rput[bl](0.061388854,-3.385){$u_1$}
\rput[bl](1.2613889,-3.385){$u_2$}
\rput[bl](2.4213889,-3.365){$u_3$}
\rput[bl](3.601389,-3.365){$u_4$}
\rput[bl](8.381389,-3.325){$u_n$}
\rput[bl](5.8813887,-3.345){$u_{n-2}$}
\rput[bl](7.081389,-3.365){$u_{n-1}$}
\end{pspicture}
}
\end{center}
\caption{$P_n \circ K_1$.} \label{fig:PnoK1}
\end{figure}
The corona product of two graphs $F$ and $H$, denoted by $F\circ H$, is defined as the graph obtained by taking one copy of $F$ and $|V(F)|$ copies of $H$ and joining the $i$-th vertex of $F$ to every vertex in the $i$-th copy of $H$.
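The corona is also easy to build programmatically. The following sketch (the helper name \texttt{corona} and the node labels are ours, chosen only for bookkeeping) constructs $F\circ H$ with \texttt{networkx}.
\begin{verbatim}
import networkx as nx

def corona(F, H):
    # Build F o H: one copy of F, |V(F)| copies of H, and edges joining
    # the i-th vertex of F to every vertex of the i-th copy of H.
    G = nx.relabel_nodes(F, {v: ("F", v) for v in F})
    for i, v in enumerate(F):
        Hi = nx.relabel_nodes(H, {u: ("H", i, u) for u in H})
        G = nx.union(G, Hi)
        G.add_edges_from((("F", v), ("H", i, u)) for u in H)
    return G
\end{verbatim}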
The following theorem gives the strong domatic number of the corona of a path or a cycle with $K_1$.
\begin{theorem}
The following holds:
\begin{itemize}
\item[(i)] For any $n\geq 2$,
$d_{\rm st}(P_n \circ K_1)=2$.
\item[(ii)] For any $n\geq 3$,
$d_{\rm st}(C_n \circ K_1)=2$.
\end{itemize}
\end{theorem}
\begin{proof}
\begin{itemize}
\item[(i)]
Consider the graph $P_n \circ K_1$, shown in Figure \ref{fig:PnoK1}. Let
$$P=\Bigl\{ \{v_1,u_2,v_3,u_4,\ldots,v_{2t+1},u_{2t+2},\ldots\},\{u_1,v_2,u_3,v_4,\ldots,u_{2t+1},v_{2t+2},\ldots\} \Bigr\}.$$
It is easy to check that $P$ is a strong domatic partition of $P_n \circ K_1$. Therefore, by Theorem \ref{thm:pendant}, we have the result.
\item[(ii)]
By a similar argument to Part (i), we have the result.
\hfill $\square$\medskip
\end{itemize}
\end{proof}
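As an illustration, part (i) can be confirmed for a small case with the sketches above; the two sets below are the parts from the proof, written in the bookkeeping labels of \texttt{corona}.
\begin{verbatim}
# Check part (i) for n = 4, using corona() and the checker above.
G = corona(nx.path_graph(4), nx.empty_graph(1))
A = {("F", 0), ("H", 1, 0), ("F", 2), ("H", 3, 0)}  # v1, u2, v3, u4
B = {("H", 0, 0), ("F", 1), ("H", 2, 0), ("F", 3)}  # u1, v2, u3, v4
print(is_strong_domatic_partition(G, [A, B]))  # expected: True
\end{verbatim}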
The following theorem gives bounds for the strong domatic number of the corona of two graphs.
\begin{theorem}\label{thm:corona}
Let $G$ and $H$ be two graphs. We have
$$1 \leq d_{\rm st}(G\circ H)\leq d_{\rm st}(G).$$
\end{theorem}
\begin{proof}
The partition consisting of the single set $V(G\circ H)$ is a strong domatic partition of $G\circ H$, so there is nothing to prove for the lower bound. Now we consider the upper bound. Suppose that $V(G)=\{v_1,v_2,\ldots,v_n\}$, and that for $i=1,2,\ldots,n$, the copy of $H$ corresponding to the vertex $v_i$ has vertex set $V(H_{v_i})=\{u_{i_1},u_{i_2},\ldots,u_{i_m}\}$. By the definition of $G\circ H$, it is clear that ${\rm deg}(u_{i_j})< {\rm deg}(v_i)$ for all $j=1,2,\ldots,m$. So no vertex of $V(H_{v_i})$ can strong dominate $v_i$, for $i=1,2,\ldots,n$. Therefore, in the best case, we can find $d_{\rm st}(G)$ sets forming a strong domatic partition of $G\circ H$, and we are done.
\hfill $\square$\medskip
\end{proof}
\begin{remark}
The bounds in Theorem \ref{thm:corona} are tight. For the lower bound, it suffices to consider $G=\overline{K_n}$ and $H=\overline{K_m}$. Then $G\circ H$ is the union of $n$ star graphs $K_{1,m}$, and as shown in Remark \ref{rem:star-complete}, we have $d_{\rm st}(G\circ H)=1$. For the upper bound, let $G=H=K_n$. As shown in Remark \ref{rem:star-complete}, $d_{\rm st}(G)=n$. Now we present a strong domatic partition of $G\circ H$ of size $n$. Suppose that $V(G)=\{v_1,v_2,\ldots,v_n\}$, and that the copy of $H=K_n$ corresponding to the vertex $v_i$ has vertex set $V(H_{v_i})=\{u_{i_1},u_{i_2},\ldots,u_{i_n}\}$, for $i=1,2,\ldots,n$. Let
$$A_i=\{v_i, u_{1_i}, u_{2_i}, u_{3_i}, \ldots,u_{n_i} \},$$
for $i=1,2,\ldots,n$. Then,
$$P=\{A_1,A_2,A_3,\ldots,A_n\}$$
is a strong domatic partition of $G\circ H=K_n\circ K_n$, and we have the result.
\end{remark}
\section{Computing $d_{\rm st}(G)$ for cubic graphs of order at most $10$}
The class of cubic graphs is especially interesting for mathematical applications, because for various important open problems in graph theory, cubic graphs are the smallest or simplest possible potential counterexamples. This motivates the study of the strong domatic number of the cubic graphs of order at most $10$.
Alikhani and Peng studied the domination polynomials (the generating functions for the numbers of dominating sets) of the cubic graphs of order $10$ in \cite{1}. As a consequence, they showed that the Petersen graph is determined uniquely by its domination polynomial. Ghanbari studied the Sombor characteristic polynomial and the Sombor energy of these graphs in \cite{Energy}, showing that the Petersen graph is not determined uniquely by its Sombor energy, but that it has the maximum Sombor energy among them.
First, we determine the strong domatic number of the cubic graphs of order $6$. There are exactly two cubic graphs of order $6$, denoted by $G_{1}$ and $G_{2}$ in Figure \ref{fig:Cubic6}.
\begin{figure}
\begin{center}
\psscalebox{0.7 0.7}
{
\begin{pspicture}(0,-4.575)(8.82,-0.085)
\psline[linecolor=black, linewidth=0.04](0.4,-1.355)(1.6,-0.555)(2.8,-1.355)(2.8,-2.555)(1.6,-3.355)(0.4,-2.555)(0.4,-1.355)(0.4,-1.355)
\psline[linecolor=black, linewidth=0.04](1.6,-0.555)(1.6,-3.355)
\psline[linecolor=black, linewidth=0.04](0.4,-1.355)(2.8,-1.355)
\psline[linecolor=black, linewidth=0.04](0.4,-2.555)(2.8,-2.555)
\rput[bl](1.52,-0.355){1}
\rput[bl](3.0,-1.475){2}
\rput[bl](2.98,-2.695){3}
\rput[bl](0.04,-2.735){5}
\rput[bl](0.0,-1.495){6}
\psdots[linecolor=black, dotsize=0.2](1.6,-0.555)
\psdots[linecolor=black, dotsize=0.2](0.4,-1.355)
\psdots[linecolor=black, dotsize=0.2](0.4,-2.555)
\psdots[linecolor=black, dotsize=0.2](2.8,-1.355)
\psdots[linecolor=black, dotsize=0.2](2.8,-2.555)
\psdots[linecolor=black, dotsize=0.2](1.6,-3.355)
\rput[bl](1.48,-3.855){4}
\psline[linecolor=black, linewidth=0.04](6.0,-1.355)(7.2,-0.555)(8.4,-1.355)(8.4,-2.555)(7.2,-3.355)(6.0,-2.555)(6.0,-1.355)(6.0,-1.355)
\psline[linecolor=black, linewidth=0.04](7.2,-0.555)(7.2,-3.355)
\rput[bl](7.12,-0.355){1}
\rput[bl](8.6,-1.475){2}
\rput[bl](8.58,-2.695){3}
\rput[bl](5.64,-2.735){5}
\rput[bl](5.6,-1.495){6}
\psdots[linecolor=black, dotsize=0.2](7.2,-0.555)
\psdots[linecolor=black, dotsize=0.2](6.0,-1.355)
\psdots[linecolor=black, dotsize=0.2](6.0,-2.555)
\psdots[linecolor=black, dotsize=0.2](8.4,-1.355)
\psdots[linecolor=black, dotsize=0.2](8.4,-2.555)
\psdots[linecolor=black, dotsize=0.2](7.2,-3.355)
\rput[bl](7.08,-3.855){4}
\psline[linecolor=black, linewidth=0.04](6.0,-1.355)(8.4,-2.555)(8.4,-2.555)
\psline[linecolor=black, linewidth=0.04](8.4,-1.355)(6.0,-2.555)(6.0,-2.555)
\rput[bl](1.38,-4.575){\large{$G_1$}}
\rput[bl](7.02,-4.555){\large{$G_2$}}
\end{pspicture}
}
\end{center}
\caption{\label{fig:Cubic6} Cubic graphs of order $6$.}
\end{figure}
\begin{theorem}\label{thm:cubic6}
The strong domatic number of the cubic graphs $G_{1}$ and $G_{2}$ (Figure \ref{fig:Cubic6}) of order $6$ is $3$.
\end{theorem}
\begin{proof}
It is clear that a single vertex cannot strong dominate all other vertices, so any strong dominating set of $G_1$ or $G_2$ has at least two vertices, and hence $d_{\rm st}(G_1)\leq 3$ and $d_{\rm st}(G_2)\leq 3$. We see that
$$P = \Bigl\{ \{1,4 \},\{2,3 \},\{5,6 \} \Bigr\}$$
is a strong domatic partition of both $G_1$ and $G_2$. Therefore we have the result.
\hfill $\square$\medskip
\end{proof}
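Theorem \ref{thm:cubic6} can also be confirmed mechanically. The edge lists below are our transcription of Figure \ref{fig:Cubic6} (under this reading, $G_1$ is the triangular prism and $G_2$ is $K_{3,3}$); if the transcription is off, the lists must be corrected against the figure.
\begin{verbatim}
# Edge lists read off Figure fig:Cubic6.
G1 = nx.Graph([(1,2),(2,3),(3,4),(4,5),(5,6),(6,1),
               (1,4),(2,6),(3,5)])   # triangular prism
G2 = nx.Graph([(1,2),(2,3),(3,4),(4,5),(5,6),(6,1),
               (1,4),(2,5),(3,6)])   # K_{3,3}
P = [{1,4},{2,3},{5,6}]
print(is_strong_domatic_partition(G1, P))  # expected: True
print(is_strong_domatic_partition(G2, P))  # expected: True
\end{verbatim}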
\medskip
Now, we compute the strong domatic number of the cubic graphs of order $8$. There are exactly $6$ cubic graphs of order $8$, denoted by $G_{1},G_{2},\ldots,G_{6}$ in Figure \ref{fig:Cubic8}. The following theorem gives their strong domatic numbers:
\begin{figure}
\begin{center}
\psscalebox{0.7 0.7}
{
\begin{pspicture}(0,-7.7901173)(15.509375,2.3501172)
\rput[bl](1.1,2.0901172){1}
\rput[bl](2.28,2.0701172){2}
\rput[bl](3.38,0.9701172){3}
\rput[bl](2.32,-1.4098828){5}
\rput[bl](1.08,-1.4098828){6}
\rput[bl](3.38,-0.24988282){4}
\rput[bl](7.58,-2.1298828){\large{$G_2$}}
\psdots[linecolor=black, dotsize=0.2](1.18,1.8701172)
\psdots[linecolor=black, dotsize=0.2](2.38,1.8701172)
\psdots[linecolor=black, dotsize=0.2](2.38,-0.9298828)
\psdots[linecolor=black, dotsize=0.2](1.18,-0.9298828)
\rput[bl](0.0,-0.2898828){7}
\rput[bl](0.02,0.9501172){8}
\rput[bl](7.1,2.0901172){1}
\rput[bl](8.28,2.0701172){2}
\rput[bl](9.38,0.9701172){3}
\rput[bl](8.32,-1.4098828){5}
\rput[bl](7.08,-1.4098828){6}
\rput[bl](9.38,-0.24988282){4}
\psdots[linecolor=black, dotsize=0.2](7.18,1.8701172)
\psdots[linecolor=black, dotsize=0.2](8.38,1.8701172)
\psdots[linecolor=black, dotsize=0.2](8.38,-0.9298828)
\psdots[linecolor=black, dotsize=0.2](7.18,-0.9298828)
\rput[bl](6.0,-0.2898828){7}
\rput[bl](6.02,0.9501172){8}
\rput[bl](13.9,-3.509883){1}
\rput[bl](14.08,-4.529883){2}
\rput[bl](12.66,-5.389883){3}
\rput[bl](14.14,-5.409883){5}
\rput[bl](14.08,-6.069883){6}
\rput[bl](15.08,-5.3498826){4}
\rput[bl](12.7,-6.989883){7}
\rput[bl](15.08,-7.009883){8}
\rput[bl](1.54,-2.1298828){\large{$G_1$}}
\rput[bl](13.58,-2.1298828){\large{$G_3$}}
\rput[bl](1.58,-7.7298827){\large{$G_4$}}
\rput[bl](7.58,-7.7298827){\large{$G_5$}}
\rput[bl](13.58,-7.7298827){\large{$G_6$}}
\psline[linecolor=black, linewidth=0.04](1.18,-3.7298827)(1.18,-3.7298827)
\psdots[linecolor=black, dotsize=0.2](12.78,-6.529883)
\psdots[linecolor=black, dotsize=0.2](15.18,-6.529883)
\psdots[linecolor=black, dotsize=0.2](13.98,-6.129883)
\psdots[linecolor=black, dotsize=0.2](13.98,-5.3298826)
\psline[linecolor=black, linewidth=0.04](13.98,-5.3298826)(12.78,-6.529883)(15.18,-6.529883)(13.98,-5.3298826)(13.98,-6.129883)(12.78,-6.529883)(12.78,-6.529883)
\psline[linecolor=black, linewidth=0.04](13.98,-6.129883)(15.18,-6.529883)(15.18,-6.529883)
\psdots[linecolor=black, dotsize=0.2](12.78,-4.929883)
\psdots[linecolor=black, dotsize=0.2](15.18,-4.929883)
\psdots[linecolor=black, dotsize=0.2](13.98,-4.529883)
\psdots[linecolor=black, dotsize=0.2](13.98,-3.7298827)
\psline[linecolor=black, linewidth=0.04](13.98,-3.7298827)(12.78,-4.929883)(15.18,-4.929883)(13.98,-3.7298827)(13.98,-4.529883)(12.78,-4.929883)(12.78,-4.929883)
\psline[linecolor=black, linewidth=0.04](13.98,-4.529883)(15.18,-4.929883)(15.18,-4.929883)
\psdots[linecolor=black, dotsize=0.2](3.18,1.0701172)
\psdots[linecolor=black, dotsize=0.2](3.18,-0.12988281)
\psdots[linecolor=black, dotsize=0.2](0.38,1.0701172)
\psdots[linecolor=black, dotsize=0.2](0.38,-0.12988281)
\psline[linecolor=black, linewidth=0.04](2.38,1.8701172)(3.18,1.0701172)(3.18,-0.12988281)(2.38,-0.9298828)(2.38,-0.9298828)
\psline[linecolor=black, linewidth=0.04](1.18,1.8701172)(0.38,1.0701172)(0.38,-0.12988281)(1.18,-0.9298828)(1.18,-0.9298828)
\psline[linecolor=black, linewidth=0.04](1.18,1.8701172)(0.38,-0.12988281)(0.38,-0.12988281)
\psline[linecolor=black, linewidth=0.04](0.38,1.0701172)(1.18,-0.9298828)(1.18,-0.9298828)
\psline[linecolor=black, linewidth=0.04](2.38,1.8701172)(3.18,-0.12988281)(3.18,-0.12988281)
\psline[linecolor=black, linewidth=0.04](3.18,1.0701172)(2.38,-0.9298828)(2.38,-0.9298828)
\psline[linecolor=black, linewidth=0.04](1.18,1.8701172)(2.38,1.8701172)(2.38,1.8701172)
\psline[linecolor=black, linewidth=0.04](1.18,-0.9298828)(2.38,-0.9298828)(2.38,-0.9298828)
\psdots[linecolor=black, dotsize=0.2](6.38,1.0701172)
\psdots[linecolor=black, dotsize=0.2](6.38,-0.12988281)
\psdots[linecolor=black, dotsize=0.2](9.18,1.0701172)
\psdots[linecolor=black, dotsize=0.2](9.18,-0.12988281)
\psline[linecolor=black, linewidth=0.04](7.18,1.8701172)(8.38,1.8701172)(9.18,1.0701172)(9.18,-0.12988281)(8.38,-0.9298828)(7.18,-0.9298828)(6.38,-0.12988281)(6.38,1.0701172)(7.18,1.8701172)(7.18,1.8701172)(7.18,1.8701172)
\rput[bl](7.1,-3.509883){1}
\rput[bl](8.28,-3.529883){2}
\rput[bl](9.38,-4.629883){3}
\rput[bl](8.32,-7.009883){5}
\rput[bl](7.08,-7.009883){6}
\rput[bl](9.38,-5.8498826){4}
\psdots[linecolor=black, dotsize=0.2](7.18,-3.7298827)
\psdots[linecolor=black, dotsize=0.2](8.38,-3.7298827)
\psdots[linecolor=black, dotsize=0.2](8.38,-6.529883)
\psdots[linecolor=black, dotsize=0.2](7.18,-6.529883)
\rput[bl](6.0,-5.889883){7}
\rput[bl](6.02,-4.649883){8}
\psdots[linecolor=black, dotsize=0.2](6.38,-4.529883)
\psdots[linecolor=black, dotsize=0.2](6.38,-5.7298827)
\psdots[linecolor=black, dotsize=0.2](9.18,-4.529883)
\psdots[linecolor=black, dotsize=0.2](9.18,-5.7298827)
\psline[linecolor=black, linewidth=0.04](7.18,-3.7298827)(8.38,-3.7298827)(9.18,-4.529883)(9.18,-5.7298827)(8.38,-6.529883)(7.18,-6.529883)(6.38,-5.7298827)(6.38,-4.529883)(7.18,-3.7298827)(7.18,-3.7298827)(7.18,-3.7298827)
\psline[linecolor=black, linewidth=0.04](7.18,1.8701172)(8.38,-0.9298828)(8.38,-0.9298828)
\psline[linecolor=black, linewidth=0.04](8.38,1.8701172)(6.38,-0.12988281)(6.38,-0.12988281)
\psline[linecolor=black, linewidth=0.04](9.18,1.0701172)(6.38,1.0701172)(6.38,1.0701172)
\psline[linecolor=black, linewidth=0.04](9.18,-0.12988281)(7.18,-0.9298828)(7.18,-0.9298828)
\rput[bl](13.1,2.0901172){1}
\rput[bl](14.28,2.0701172){2}
\rput[bl](15.38,0.9701172){3}
\rput[bl](14.32,-1.4098828){5}
\rput[bl](13.08,-1.4098828){6}
\rput[bl](15.38,-0.24988282){4}
\psdots[linecolor=black, dotsize=0.2](13.18,1.8701172)
\psdots[linecolor=black, dotsize=0.2](14.38,1.8701172)
\psdots[linecolor=black, dotsize=0.2](14.38,-0.9298828)
\psdots[linecolor=black, dotsize=0.2](13.18,-0.9298828)
\rput[bl](12.0,-0.2898828){7}
\rput[bl](12.02,0.9501172){8}
\psdots[linecolor=black, dotsize=0.2](12.38,1.0701172)
\psdots[linecolor=black, dotsize=0.2](12.38,-0.12988281)
\psdots[linecolor=black, dotsize=0.2](15.18,1.0701172)
\psdots[linecolor=black, dotsize=0.2](15.18,-0.12988281)
\psline[linecolor=black, linewidth=0.04](13.18,1.8701172)(14.38,1.8701172)(15.18,1.0701172)(15.18,-0.12988281)(14.38,-0.9298828)(13.18,-0.9298828)(12.38,-0.12988281)(12.38,1.0701172)(13.18,1.8701172)(13.18,1.8701172)(13.18,1.8701172)
\psline[linecolor=black, linewidth=0.04](13.18,1.8701172)(14.38,-0.9298828)(14.38,-0.9298828)
\psline[linecolor=black, linewidth=0.04](14.38,1.8701172)(13.18,-0.9298828)(13.18,-0.9298828)
\psline[linecolor=black, linewidth=0.04](15.18,1.0701172)(12.38,-0.12988281)(12.38,-0.12988281)
\psline[linecolor=black, linewidth=0.04](12.38,1.0701172)(15.18,-0.12988281)(15.18,-0.12988281)
\rput[bl](1.1,-3.509883){1}
\rput[bl](2.28,-3.529883){2}
\rput[bl](3.38,-4.629883){3}
\rput[bl](2.32,-7.009883){5}
\rput[bl](1.08,-7.009883){6}
\rput[bl](3.38,-5.8498826){4}
\psdots[linecolor=black, dotsize=0.2](1.18,-3.7298827)
\psdots[linecolor=black, dotsize=0.2](2.38,-3.7298827)
\psdots[linecolor=black, dotsize=0.2](2.38,-6.529883)
\psdots[linecolor=black, dotsize=0.2](1.18,-6.529883)
\rput[bl](0.0,-5.889883){7}
\rput[bl](0.02,-4.649883){8}
\psdots[linecolor=black, dotsize=0.2](0.38,-4.529883)
\psdots[linecolor=black, dotsize=0.2](0.38,-5.7298827)
\psdots[linecolor=black, dotsize=0.2](3.18,-4.529883)
\psdots[linecolor=black, dotsize=0.2](3.18,-5.7298827)
\psline[linecolor=black, linewidth=0.04](1.18,-3.7298827)(2.38,-3.7298827)(3.18,-4.529883)(3.18,-5.7298827)(2.38,-6.529883)(1.18,-6.529883)(0.38,-5.7298827)(0.38,-4.529883)(1.18,-3.7298827)(1.18,-3.7298827)(1.18,-3.7298827)
\psline[linecolor=black, linewidth=0.04](1.18,-3.7298827)(2.38,-6.529883)(2.38,-6.529883)
\psline[linecolor=black, linewidth=0.04](2.38,-3.7298827)(3.18,-5.7298827)(3.18,-5.7298827)
\psline[linecolor=black, linewidth=0.04](0.38,-4.529883)(1.18,-6.529883)(1.18,-6.529883)
\psline[linecolor=black, linewidth=0.04](0.38,-5.7298827)(3.18,-4.529883)
\psline[linecolor=black, linewidth=0.04](7.18,-3.7298827)(7.18,-6.529883)(7.18,-6.529883)
\psline[linecolor=black, linewidth=0.04](8.38,-3.7298827)(8.38,-6.529883)(8.38,-6.529883)
\psline[linecolor=black, linewidth=0.04](9.18,-4.529883)(6.38,-4.529883)
\psline[linecolor=black, linewidth=0.04](6.38,-5.7298827)(9.18,-5.7298827)
\end{pspicture}
}
\end{center}
\caption{\label{fig:Cubic8} Cubic graphs of order $8$.}
\end{figure}
\begin{theorem}\label{thm:cubic8}
For the cubic graphs $G_{1},G_{2},...,G_{6}$ of order $8$ (Figure \ref{fig:Cubic8}) we have:
\begin{enumerate}
\item[(i)]
$d_{\rm st}(G_1)=d_{\rm st}(G_5)=d_{\rm st}(G_6)=4.$
\item[(ii)]
$d_{\rm st}(G_2)=d_{\rm st}(G_3)=2.$
\item[(iii)]
$d_{\rm st}(G_4)=3.$
\end{enumerate}
\end{theorem}
\begin{proof}
\begin{enumerate}
\item[(i)]
By Theorem \ref{thm:max-degree}, for a cubic graph $G$ of order $8$ we have $d_{\rm st}(G)\leq 4$. Now we present strong domatic partitions of size $4$ for $G_1$, $G_5$ and $G_6$. Consider the following partitions:
\begin{align*}
P_{1} &= \Bigl\{ \{1,5\},\{2,6\},\{3,7\},\{4,8 \} \Bigr\}, &
P_{5} &= \Bigl\{ \{1,4\},\{2,7\},\{3,6\},\{5,8 \} \Bigr\},\\
P_{6} &= \Bigl\{ \{1,5\},\{2,6\},\{3,7\},\{4,8 \} \Bigr\}.
\end{align*}
Observe that $P_i$ is a strong domatic partition of $G_i$ for $i=1,5,6$, and so we have the result.
\item[(ii)]
Suppose that $D$ is a strong dominating set of $G_2$. We show that $|D|\geq 3$. If $D$ consists of two adjacent vertices, then at least one vertex is not strong dominated by them, so we consider the remaining cases. If $1\in D$, then it strong dominates $2,5,7$, and we need at least two vertices among $3,4,6,8$ to be in $D$. If $2\in D$, then it strong dominates $1,3,8$, and we need at least two vertices among $4,5,6,7$ to be in $D$. If $3\in D$, then it strong dominates $2,4,8$, and we need at least two vertices among $1,5,6,7$ to be in $D$. If $4\in D$, then it strong dominates $3,5,6$, and we need at least two vertices among $1,2,7,8$ to be in $D$. If $5\in D$, then it strong dominates $1,4,6$, and we need at least two vertices among $2,3,7,8$ to be in $D$. If $6\in D$, then it strong dominates $4,5,7$, and we need at least two vertices among $1,2,3,8$ to be in $D$. If $7\in D$, then it strong dominates $2,6,8$, and we need at least two vertices among $1,3,4,5$ to be in $D$. Finally, if $8\in D$, then it strong dominates $1,3,7$, and we need at least two vertices among $2,4,5,6$ to be in $D$. So $|D|\geq 3$. Now suppose that $P$ is a strong domatic partition of $G_2$ of the biggest size. By our argument, $|P|$ cannot be $3$ or $4$, because then some part would have at most two vertices. So $|P|\leq 2$. It is clear that
$$P_2 = \Bigl\{ \{1,3,5,7\},\{2,4,6,8\} \Bigr\}$$
is a strong domatic partition of $G_2$, and we are done. By a similar argument we have $d_{\rm st}(G_3)=2$.
\item[(iii)]
For $G_4$ it is possible to have strong dominating sets of size $2$, namely
$\{2,6\}$ and $\{4,8\}$. Now suppose that $D$ is a strong dominating set of $G_4$ with $1\in D$. By a similar argument to Part (ii), we conclude that $|D|\geq 3$. Now suppose that $P$ is a strong domatic partition of $G_4$ of the biggest size. By our argument, $|P|$ cannot be $4$, because then all four parts would need to be strong dominating sets of size $2$, and $\{2,6\}$ and $\{4,8\}$ are the only such sets. So $|P|\leq 3$. It is clear that
$$P_4 = \Bigl\{ \{2,6\},\{4,8\},\{1,3,5,7\} \Bigr\}$$
is a strong domatic partition of $G_4$, and we are done.
\hfill $\square$\medskip
\end{enumerate}
\end{proof}
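For graphs of this size, the strong domatic number can also be computed by exhaustive search. The following rough sketch (again assuming the checker above) is exponential and only feasible for very small graphs; the upward search is valid because merging two parts of a strong domatic partition again yields a strong dominating set.
\begin{verbatim}
from itertools import product

def has_strong_domatic_partition(G, k):
    # Try every assignment of vertices to k classes; feasible for n <= 10.
    nodes = list(G.nodes)
    for assignment in product(range(k), repeat=len(nodes)):
        parts = [set() for _ in range(k)]
        for v, c in zip(nodes, assignment):
            parts[c].add(v)
        if all(parts) and all(is_strong_dominating(G, p) for p in parts):
            return True
    return False

def strong_domatic_number(G):
    # Merging two classes preserves strong domination, so search upward.
    k = 1
    while has_strong_domatic_partition(G, k + 1):
        k += 1
    return k
\end{verbatim}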
One of the most famous cubic graphs is the Petersen graph, a symmetric non-planar $3$-regular graph of order $10$.
There are exactly twenty-one $3$-regular graphs of order $10$ \cite{1}.
Now, we study the strong domatic number of the cubic graphs of order $10$.
\begin{figure}
\begin{center}
\psscalebox{0.55 0.55}
{
\begin{pspicture}(0,-5.845)(9.06,2.865)
\psdots[linecolor=black, dotsize=0.4](4.58,2.355)
\psdots[linecolor=black, dotsize=0.4](4.58,0.355)
\psdots[linecolor=black, dotsize=0.4](2.58,-1.245)
\psdots[linecolor=black, dotsize=0.4](3.38,-3.645)
\psdots[linecolor=black, dotsize=0.4](5.78,-3.645)
\psdots[linecolor=black, dotsize=0.4](6.58,-1.245)
\psline[linecolor=black, linewidth=0.08](4.58,0.355)(3.38,-3.645)(6.58,-1.245)(2.58,-1.245)(5.78,-3.645)(4.58,0.355)(4.58,0.355)
\psdots[linecolor=black, dotsize=0.4](8.58,-0.445)
\psdots[linecolor=black, dotsize=0.4](0.58,-0.445)
\psdots[linecolor=black, dotsize=0.4](6.58,-5.245)
\psdots[linecolor=black, dotsize=0.4](2.58,-5.245)
\psline[linecolor=black, linewidth=0.08](4.58,2.355)(8.58,-0.445)(6.58,-5.245)(2.58,-5.245)(0.58,-0.445)(4.58,2.355)(4.58,0.355)(4.58,0.355)
\psline[linecolor=black, linewidth=0.08](6.58,-1.245)(8.58,-0.445)(8.58,-0.445)
\psline[linecolor=black, linewidth=0.08](5.78,-3.645)(6.58,-5.245)(6.58,-5.245)
\psline[linecolor=black, linewidth=0.08](3.38,-3.645)(2.58,-5.245)(2.58,-5.245)
\psline[linecolor=black, linewidth=0.08](2.58,-1.245)(0.58,-0.445)(0.58,-0.445)
\rput[bl](4.48,2.595){1}
\rput[bl](4.78,0.435){6}
\rput[bl](0.0,-0.425){2}
\rput[bl](2.2,-5.845){3}
\rput[bl](6.58,-5.825){4}
\rput[bl](8.84,-0.565){5}
\rput[bl](2.58,-0.845){7}
\rput[bl](2.96,-3.405){8}
\rput[bl](6.02,-3.445){9}
\rput[bl](6.44,-0.865){10}
\end{pspicture}
}
\end{center}
\caption{Petersen graph $P$. } \label{fig:petersen}
\end{figure}
First we state and prove the following theorem for the Petersen graph.
\begin{theorem}\label{thm:petersen}
For the Petersen graph, $d_{\rm st}(P)=2$.
\end{theorem}
\begin{proof}
Suppose that $S$ is a strong dominating set of $P$. Since each vertex in $S$ strong dominates at most $3$ other vertices, we need $|S|\geq 3$. Consider Figure \ref{fig:petersen}. Note that no subset of size three of $A=\{1,2,3,4,5\}$ or of $B=\{6,7,8,9,10\}$ is a strong dominating set of $P$, so we need at least one element of $A$ and at least one element of $B$. Now, we claim that no strong dominating set of size $3$ can be a part of a strong domatic partition of $P$ of size $3$. Consider the vertex $1\in A$. One can easily check that the only strong dominating sets of $P$ of size three which contain $1$ are the following:
\begin{align*}
S_{1} &= \{1,3,7\}, &
S_{2} &= \{1,4,10\}, &
S_{3} &= \{1,8,9\}.
\end{align*}
Since all elements of $S_1$ strong dominate $2$ and $N(2)=S_1$, the set $S_1$ cannot be a part of a strong domatic partition of size $3$: the two remaining parts would partition the other vertices, and the part not containing $2$ would contain no vertex of $N(2)$, so it could not strong dominate $2$. For the same reason, since $N(5)=S_2$ and $N(6)=S_3$, no strong domatic partition of $P$ of size $3$ can contain a part of size three that includes $1$; the part containing $1$ would have to be bigger. Since the Petersen graph is vertex-transitive, this argument holds for every vertex, so in a strong domatic partition of size $3$ each part would have at least four vertices, requiring at least $12$ vertices in total. Moreover, a partition of size $4$ or more would also require at least $12$ vertices, since every strong dominating set has at least $3$ elements. As $P$ has only $10$ vertices, $d_{\rm st}(P)\leq 2$. Clearly,
$\{A,B\}$ is a strong domatic partition of $P$, and therefore we have the result.
\hfill $\square$\medskip
\end{proof}
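Theorem \ref{thm:petersen} can be rechecked with the sketches above. Note that \texttt{networkx} labels the Petersen graph $0,\ldots,9$ rather than $1,\ldots,10$, with the outer $5$-cycle on $0,\ldots,4$ and the inner one on $5,\ldots,9$.
\begin{verbatim}
# Sanity check of the Petersen theorem with the helpers above.
P = nx.petersen_graph()
A, B = set(range(5)), set(range(5, 10))
print(is_strong_domatic_partition(P, [A, B]))  # expected: True
print(has_strong_domatic_partition(P, 3))      # expected: False
print(strong_domatic_number(P))                # expected: 2
\end{verbatim}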
\begin{figure}[!h]
\begin{center}
\psscalebox{0.69 0.69}
{
\begin{pspicture}(0,-10.145)(21.18,9.525)
\rput[bl](1.48,9.255){1}
\rput[bl](2.26,9.235){2}
\rput[bl](2.96,8.135){3}
\rput[bl](2.3,6.155){5}
\rput[bl](1.46,6.155){6}
\rput[bl](2.96,7.315){4}
\rput[bl](0.66,6.175){7}
\rput[bl](0.0,7.315){8}
\rput[bl](1.4,5.455){\large{$G_1$}}
\rput[bl](0.0,8.095){9}
\rput[bl](0.52,9.255){10}
\rput[bl](5.0,5.455){\large{$G_2$}}
\rput[bl](8.6,5.455){\large{$G_3$}}
\rput[bl](12.2,5.455){\large{$G_4$}}
\rput[bl](15.8,5.455){\large{$G_5$}}
\rput[bl](19.4,5.455){\large{$G_6$}}
\psdots[linecolor=black, dotsize=0.2](0.76,9.035)
\psdots[linecolor=black, dotsize=0.2](1.56,9.035)
\psdots[linecolor=black, dotsize=0.2](2.36,9.035)
\psdots[linecolor=black, dotsize=0.2](2.76,8.235)
\psdots[linecolor=black, dotsize=0.2](2.76,7.435)
\psdots[linecolor=black, dotsize=0.2](2.36,6.635)
\psdots[linecolor=black, dotsize=0.2](1.56,6.635)
\psdots[linecolor=black, dotsize=0.2](0.76,6.635)
\psdots[linecolor=black, dotsize=0.2](0.36,7.435)
\psdots[linecolor=black, dotsize=0.2](0.36,8.235)
\psline[linecolor=black, linewidth=0.04](0.76,9.035)(0.36,8.235)(0.36,7.435)(0.76,6.635)(0.76,6.635)
\psline[linecolor=black, linewidth=0.04](2.36,9.035)(2.76,8.235)(2.76,7.435)(2.36,6.635)(2.36,6.635)
\psline[linecolor=black, linewidth=0.04](0.76,9.035)(2.36,9.035)(2.36,9.035)
\psline[linecolor=black, linewidth=0.04](0.76,6.635)(2.36,6.635)(2.36,6.635)
\rput[bl](5.08,9.255){1}
\rput[bl](5.86,9.235){2}
\rput[bl](6.56,8.135){3}
\rput[bl](5.9,6.155){5}
\rput[bl](5.06,6.155){6}
\rput[bl](6.56,7.315){4}
\rput[bl](4.26,6.175){7}
\rput[bl](3.6,7.315){8}
\rput[bl](3.6,8.095){9}
\rput[bl](4.12,9.255){10}
\psdots[linecolor=black, dotsize=0.2](4.36,9.035)
\psdots[linecolor=black, dotsize=0.2](5.16,9.035)
\psdots[linecolor=black, dotsize=0.2](5.96,9.035)
\psdots[linecolor=black, dotsize=0.2](6.36,8.235)
\psdots[linecolor=black, dotsize=0.2](6.36,7.435)
\psdots[linecolor=black, dotsize=0.2](5.96,6.635)
\psdots[linecolor=black, dotsize=0.2](5.16,6.635)
\psdots[linecolor=black, dotsize=0.2](4.36,6.635)
\psdots[linecolor=black, dotsize=0.2](3.96,7.435)
\psdots[linecolor=black, dotsize=0.2](3.96,8.235)
\psline[linecolor=black, linewidth=0.04](4.36,9.035)(3.96,8.235)(3.96,7.435)(4.36,6.635)(4.36,6.635)
\psline[linecolor=black, linewidth=0.04](5.96,9.035)(6.36,8.235)(6.36,7.435)(5.96,6.635)(5.96,6.635)
\psline[linecolor=black, linewidth=0.04](4.36,9.035)(5.96,9.035)(5.96,9.035)
\psline[linecolor=black, linewidth=0.04](4.36,6.635)(5.96,6.635)(5.96,6.635)
\rput[bl](8.68,9.255){1}
\rput[bl](9.46,9.235){2}
\rput[bl](10.16,8.135){3}
\rput[bl](9.5,6.155){5}
\rput[bl](8.66,6.155){6}
\rput[bl](10.16,7.315){4}
\rput[bl](7.86,6.175){7}
\rput[bl](7.2,7.315){8}
\rput[bl](7.2,8.095){9}
\rput[bl](7.72,9.255){10}
\psdots[linecolor=black, dotsize=0.2](7.96,9.035)
\psdots[linecolor=black, dotsize=0.2](8.76,9.035)
\psdots[linecolor=black, dotsize=0.2](9.56,9.035)
\psdots[linecolor=black, dotsize=0.2](9.96,8.235)
\psdots[linecolor=black, dotsize=0.2](9.96,7.435)
\psdots[linecolor=black, dotsize=0.2](9.56,6.635)
\psdots[linecolor=black, dotsize=0.2](8.76,6.635)
\psdots[linecolor=black, dotsize=0.2](7.96,6.635)
\psdots[linecolor=black, dotsize=0.2](7.56,7.435)
\psdots[linecolor=black, dotsize=0.2](7.56,8.235)
\psline[linecolor=black, linewidth=0.04](7.96,9.035)(7.56,8.235)(7.56,7.435)(7.96,6.635)(7.96,6.635)
\psline[linecolor=black, linewidth=0.04](9.56,9.035)(9.96,8.235)(9.96,7.435)(9.56,6.635)(9.56,6.635)
\psline[linecolor=black, linewidth=0.04](7.96,9.035)(9.56,9.035)(9.56,9.035)
\psline[linecolor=black, linewidth=0.04](7.96,6.635)(9.56,6.635)(9.56,6.635)
\rput[bl](12.28,9.255){1}
\rput[bl](13.06,9.235){2}
\rput[bl](13.76,8.135){3}
\rput[bl](13.1,6.155){5}
\rput[bl](12.26,6.155){6}
\rput[bl](13.76,7.315){4}
\rput[bl](11.46,6.175){7}
\rput[bl](10.8,7.315){8}
\rput[bl](10.8,8.095){9}
\rput[bl](11.32,9.255){10}
\psdots[linecolor=black, dotsize=0.2](11.56,9.035)
\psdots[linecolor=black, dotsize=0.2](12.36,9.035)
\psdots[linecolor=black, dotsize=0.2](13.16,9.035)
\psdots[linecolor=black, dotsize=0.2](13.56,8.235)
\psdots[linecolor=black, dotsize=0.2](13.56,7.435)
\psdots[linecolor=black, dotsize=0.2](13.16,6.635)
\psdots[linecolor=black, dotsize=0.2](12.36,6.635)
\psdots[linecolor=black, dotsize=0.2](11.56,6.635)
\psdots[linecolor=black, dotsize=0.2](11.16,7.435)
\psdots[linecolor=black, dotsize=0.2](11.16,8.235)
\psline[linecolor=black, linewidth=0.04](11.56,9.035)(11.16,8.235)(11.16,7.435)(11.56,6.635)(11.56,6.635)
\psline[linecolor=black, linewidth=0.04](13.16,9.035)(13.56,8.235)(13.56,7.435)(13.16,6.635)(13.16,6.635)
\psline[linecolor=black, linewidth=0.04](11.56,9.035)(13.16,9.035)(13.16,9.035)
\psline[linecolor=black, linewidth=0.04](11.56,6.635)(13.16,6.635)(13.16,6.635)
\rput[bl](15.88,9.255){1}
\rput[bl](16.66,9.235){2}
\rput[bl](17.36,8.135){3}
\rput[bl](16.7,6.155){5}
\rput[bl](15.86,6.155){6}
\rput[bl](17.36,7.315){4}
\rput[bl](15.06,6.175){7}
\rput[bl](14.4,7.315){8}
\rput[bl](14.4,8.095){9}
\rput[bl](14.92,9.255){10}
\psdots[linecolor=black, dotsize=0.2](15.16,9.035)
\psdots[linecolor=black, dotsize=0.2](15.96,9.035)
\psdots[linecolor=black, dotsize=0.2](16.76,9.035)
\psdots[linecolor=black, dotsize=0.2](17.16,8.235)
\psdots[linecolor=black, dotsize=0.2](17.16,7.435)
\psdots[linecolor=black, dotsize=0.2](16.76,6.635)
\psdots[linecolor=black, dotsize=0.2](15.96,6.635)
\psdots[linecolor=black, dotsize=0.2](15.16,6.635)
\psdots[linecolor=black, dotsize=0.2](14.76,7.435)
\psdots[linecolor=black, dotsize=0.2](14.76,8.235)
\psline[linecolor=black, linewidth=0.04](15.16,9.035)(14.76,8.235)(14.76,7.435)(15.16,6.635)(15.16,6.635)
\psline[linecolor=black, linewidth=0.04](16.76,9.035)(17.16,8.235)(17.16,7.435)(16.76,6.635)(16.76,6.635)
\psline[linecolor=black, linewidth=0.04](15.16,9.035)(16.76,9.035)(16.76,9.035)
\psline[linecolor=black, linewidth=0.04](15.16,6.635)(16.76,6.635)(16.76,6.635)
\rput[bl](19.48,9.255){1}
\rput[bl](20.26,9.235){2}
\rput[bl](20.96,8.135){3}
\rput[bl](20.3,6.155){5}
\rput[bl](19.46,6.155){6}
\rput[bl](20.96,7.315){4}
\rput[bl](18.66,6.175){7}
\rput[bl](18.0,7.315){8}
\rput[bl](18.0,8.095){9}
\rput[bl](18.52,9.255){10}
\psdots[linecolor=black, dotsize=0.2](18.76,9.035)
\psdots[linecolor=black, dotsize=0.2](19.56,9.035)
\psdots[linecolor=black, dotsize=0.2](20.36,9.035)
\psdots[linecolor=black, dotsize=0.2](20.76,8.235)
\psdots[linecolor=black, dotsize=0.2](20.76,7.435)
\psdots[linecolor=black, dotsize=0.2](20.36,6.635)
\psdots[linecolor=black, dotsize=0.2](19.56,6.635)
\psdots[linecolor=black, dotsize=0.2](18.76,6.635)
\psdots[linecolor=black, dotsize=0.2](18.36,7.435)
\psdots[linecolor=black, dotsize=0.2](18.36,8.235)
\psline[linecolor=black, linewidth=0.04](18.76,9.035)(18.36,8.235)(18.36,7.435)(18.76,6.635)(18.76,6.635)
\psline[linecolor=black, linewidth=0.04](20.36,9.035)(20.76,8.235)(20.76,7.435)(20.36,6.635)(20.36,6.635)
\psline[linecolor=black, linewidth=0.04](18.76,9.035)(20.36,9.035)(20.36,9.035)
\psline[linecolor=black, linewidth=0.04](18.76,6.635)(20.36,6.635)(20.36,6.635)
\rput[bl](1.48,4.055){1}
\rput[bl](2.26,4.035){2}
\rput[bl](2.96,2.935){3}
\rput[bl](2.3,0.955){5}
\rput[bl](1.46,0.955){6}
\rput[bl](2.96,2.115){4}
\rput[bl](0.66,0.975){7}
\rput[bl](0.0,2.115){8}
\rput[bl](0.0,2.895){9}
\rput[bl](0.52,4.055){10}
\psdots[linecolor=black, dotsize=0.2](0.76,3.835)
\psdots[linecolor=black, dotsize=0.2](1.56,3.835)
\psdots[linecolor=black, dotsize=0.2](2.36,3.835)
\psdots[linecolor=black, dotsize=0.2](2.76,3.035)
\psdots[linecolor=black, dotsize=0.2](2.76,2.235)
\psdots[linecolor=black, dotsize=0.2](2.36,1.435)
\psdots[linecolor=black, dotsize=0.2](1.56,1.435)
\psdots[linecolor=black, dotsize=0.2](0.76,1.435)
\psdots[linecolor=black, dotsize=0.2](0.36,2.235)
\psdots[linecolor=black, dotsize=0.2](0.36,3.035)
\psline[linecolor=black, linewidth=0.04](0.76,3.835)(0.36,3.035)(0.36,2.235)(0.76,1.435)(0.76,1.435)
\psline[linecolor=black, linewidth=0.04](2.36,3.835)(2.76,3.035)(2.76,2.235)(2.36,1.435)(2.36,1.435)
\psline[linecolor=black, linewidth=0.04](0.76,3.835)(2.36,3.835)(2.36,3.835)
\psline[linecolor=black, linewidth=0.04](0.76,1.435)(2.36,1.435)(2.36,1.435)
\rput[bl](5.08,4.055){1}
\rput[bl](5.86,4.035){2}
\rput[bl](6.56,2.935){3}
\rput[bl](5.9,0.955){5}
\rput[bl](5.06,0.955){6}
\rput[bl](6.56,2.115){4}
\rput[bl](4.26,0.975){7}
\rput[bl](3.6,2.115){8}
\rput[bl](3.6,2.895){9}
\rput[bl](4.12,4.055){10}
\psdots[linecolor=black, dotsize=0.2](4.36,3.835)
\psdots[linecolor=black, dotsize=0.2](5.16,3.835)
\psdots[linecolor=black, dotsize=0.2](5.96,3.835)
\psdots[linecolor=black, dotsize=0.2](6.36,3.035)
\psdots[linecolor=black, dotsize=0.2](6.36,2.235)
\psdots[linecolor=black, dotsize=0.2](5.96,1.435)
\psdots[linecolor=black, dotsize=0.2](5.16,1.435)
\psdots[linecolor=black, dotsize=0.2](4.36,1.435)
\psdots[linecolor=black, dotsize=0.2](3.96,2.235)
\psdots[linecolor=black, dotsize=0.2](3.96,3.035)
\psline[linecolor=black, linewidth=0.04](4.36,3.835)(3.96,3.035)(3.96,2.235)(4.36,1.435)(4.36,1.435)
\psline[linecolor=black, linewidth=0.04](5.96,3.835)(6.36,3.035)(6.36,2.235)(5.96,1.435)(5.96,1.435)
\psline[linecolor=black, linewidth=0.04](4.36,3.835)(5.96,3.835)(5.96,3.835)
\psline[linecolor=black, linewidth=0.04](4.36,1.435)(5.96,1.435)(5.96,1.435)
\rput[bl](8.68,4.055){1}
\rput[bl](9.46,4.035){2}
\rput[bl](10.16,2.935){3}
\rput[bl](9.5,0.955){5}
\rput[bl](8.66,0.955){6}
\rput[bl](10.16,2.115){4}
\rput[bl](7.86,0.975){7}
\rput[bl](7.2,2.115){8}
\rput[bl](7.2,2.895){9}
\rput[bl](7.72,4.055){10}
\psdots[linecolor=black, dotsize=0.2](7.96,3.835)
\psdots[linecolor=black, dotsize=0.2](8.76,3.835)
\psdots[linecolor=black, dotsize=0.2](9.56,3.835)
\psdots[linecolor=black, dotsize=0.2](9.96,3.035)
\psdots[linecolor=black, dotsize=0.2](9.96,2.235)
\psdots[linecolor=black, dotsize=0.2](9.56,1.435)
\psdots[linecolor=black, dotsize=0.2](8.76,1.435)
\psdots[linecolor=black, dotsize=0.2](7.96,1.435)
\psdots[linecolor=black, dotsize=0.2](7.56,2.235)
\psdots[linecolor=black, dotsize=0.2](7.56,3.035)
\psline[linecolor=black, linewidth=0.04](7.96,3.835)(7.56,3.035)(7.56,2.235)(7.96,1.435)(7.96,1.435)
\psline[linecolor=black, linewidth=0.04](9.56,3.835)(9.96,3.035)(9.96,2.235)(9.56,1.435)(9.56,1.435)
\psline[linecolor=black, linewidth=0.04](7.96,3.835)(9.56,3.835)(9.56,3.835)
\psline[linecolor=black, linewidth=0.04](7.96,1.435)(9.56,1.435)(9.56,1.435)
\rput[bl](12.28,4.055){1}
\rput[bl](13.06,4.035){2}
\rput[bl](13.76,2.935){3}
\rput[bl](13.1,0.955){5}
\rput[bl](12.26,0.955){6}
\rput[bl](13.76,2.115){4}
\rput[bl](11.46,0.975){7}
\rput[bl](10.8,2.115){8}
\rput[bl](10.8,2.895){9}
\rput[bl](11.32,4.055){10}
\psdots[linecolor=black, dotsize=0.2](11.56,3.835)
\psdots[linecolor=black, dotsize=0.2](12.36,3.835)
\psdots[linecolor=black, dotsize=0.2](13.16,3.835)
\psdots[linecolor=black, dotsize=0.2](13.56,3.035)
\psdots[linecolor=black, dotsize=0.2](13.56,2.235)
\psdots[linecolor=black, dotsize=0.2](13.16,1.435)
\psdots[linecolor=black, dotsize=0.2](12.36,1.435)
\psdots[linecolor=black, dotsize=0.2](11.56,1.435)
\psdots[linecolor=black, dotsize=0.2](11.16,2.235)
\psdots[linecolor=black, dotsize=0.2](11.16,3.035)
\rput[bl](15.88,4.055){1}
\rput[bl](16.66,4.035){2}
\rput[bl](17.36,2.935){3}
\rput[bl](16.7,0.955){5}
\rput[bl](15.86,0.955){6}
\rput[bl](17.36,2.115){4}
\rput[bl](15.06,0.975){7}
\rput[bl](14.4,2.115){8}
\rput[bl](14.4,2.895){9}
\rput[bl](14.92,4.055){10}
\psdots[linecolor=black, dotsize=0.2](15.16,3.835)
\psdots[linecolor=black, dotsize=0.2](15.96,3.835)
\psdots[linecolor=black, dotsize=0.2](16.76,3.835)
\psdots[linecolor=black, dotsize=0.2](17.16,3.035)
\psdots[linecolor=black, dotsize=0.2](17.16,2.235)
\psdots[linecolor=black, dotsize=0.2](16.76,1.435)
\psdots[linecolor=black, dotsize=0.2](15.96,1.435)
\psdots[linecolor=black, dotsize=0.2](15.16,1.435)
\psdots[linecolor=black, dotsize=0.2](14.76,2.235)
\psdots[linecolor=black, dotsize=0.2](14.76,3.035)
\psline[linecolor=black, linewidth=0.04](15.16,3.835)(14.76,3.035)(14.76,2.235)(15.16,1.435)(15.16,1.435)
\psline[linecolor=black, linewidth=0.04](16.76,3.835)(17.16,3.035)(17.16,2.235)(16.76,1.435)(16.76,1.435)
\psline[linecolor=black, linewidth=0.04](15.16,3.835)(16.76,3.835)(16.76,3.835)
\psline[linecolor=black, linewidth=0.04](15.16,1.435)(16.76,1.435)(16.76,1.435)
\rput[bl](19.48,4.055){1}
\rput[bl](20.26,4.035){2}
\rput[bl](20.96,2.935){3}
\rput[bl](20.3,0.955){5}
\rput[bl](19.46,0.955){6}
\rput[bl](20.96,2.115){4}
\rput[bl](18.66,0.975){7}
\rput[bl](18.0,2.115){8}
\rput[bl](18.0,2.895){9}
\rput[bl](18.52,4.055){10}
\psdots[linecolor=black, dotsize=0.2](18.76,3.835)
\psdots[linecolor=black, dotsize=0.2](19.56,3.835)
\psdots[linecolor=black, dotsize=0.2](20.36,3.835)
\psdots[linecolor=black, dotsize=0.2](20.76,3.035)
\psdots[linecolor=black, dotsize=0.2](20.76,2.235)
\psdots[linecolor=black, dotsize=0.2](20.36,1.435)
\psdots[linecolor=black, dotsize=0.2](19.56,1.435)
\psdots[linecolor=black, dotsize=0.2](18.76,1.435)
\psdots[linecolor=black, dotsize=0.2](18.36,2.235)
\psdots[linecolor=black, dotsize=0.2](18.36,3.035)
\psline[linecolor=black, linewidth=0.04](18.76,3.835)(18.36,3.035)(18.36,2.235)(18.76,1.435)(18.76,1.435)
\psline[linecolor=black, linewidth=0.04](20.36,3.835)(20.76,3.035)(20.76,2.235)(20.36,1.435)(20.36,1.435)
\psline[linecolor=black, linewidth=0.04](18.76,3.835)(20.36,3.835)(20.36,3.835)
\psline[linecolor=black, linewidth=0.04](18.76,1.435)(20.36,1.435)(20.36,1.435)
\rput[bl](1.48,-1.145){1}
\rput[bl](2.26,-1.165){2}
\rput[bl](2.96,-2.265){3}
\rput[bl](2.3,-4.245){5}
\rput[bl](1.46,-4.245){6}
\rput[bl](2.96,-3.085){4}
\rput[bl](0.66,-4.225){7}
\rput[bl](0.0,-3.085){8}
\rput[bl](0.0,-2.305){9}
\rput[bl](0.52,-1.145){10}
\psdots[linecolor=black, dotsize=0.2](0.76,-1.365)
\psdots[linecolor=black, dotsize=0.2](1.56,-1.365)
\psdots[linecolor=black, dotsize=0.2](2.36,-1.365)
\psdots[linecolor=black, dotsize=0.2](2.76,-2.165)
\psdots[linecolor=black, dotsize=0.2](2.76,-2.965)
\psdots[linecolor=black, dotsize=0.2](2.36,-3.765)
\psdots[linecolor=black, dotsize=0.2](1.56,-3.765)
\psdots[linecolor=black, dotsize=0.2](0.76,-3.765)
\psdots[linecolor=black, dotsize=0.2](0.36,-2.965)
\psdots[linecolor=black, dotsize=0.2](0.36,-2.165)
\psline[linecolor=black, linewidth=0.04](0.76,-1.365)(0.36,-2.165)(0.36,-2.965)(0.76,-3.765)(0.76,-3.765)
\psline[linecolor=black, linewidth=0.04](2.36,-1.365)(2.76,-2.165)(2.76,-2.965)(2.36,-3.765)(2.36,-3.765)
\psline[linecolor=black, linewidth=0.04](0.76,-1.365)(2.36,-1.365)(2.36,-1.365)
\psline[linecolor=black, linewidth=0.04](0.76,-3.765)(2.36,-3.765)(2.36,-3.765)
\rput[bl](5.08,-1.145){1}
\rput[bl](5.86,-1.165){2}
\rput[bl](6.56,-2.265){3}
\rput[bl](5.9,-4.245){5}
\rput[bl](5.06,-4.245){6}
\rput[bl](6.56,-3.085){4}
\rput[bl](4.26,-4.225){7}
\rput[bl](3.6,-3.085){8}
\rput[bl](3.6,-2.305){9}
\rput[bl](4.12,-1.145){10}
\psdots[linecolor=black, dotsize=0.2](4.36,-1.365)
\psdots[linecolor=black, dotsize=0.2](5.16,-1.365)
\psdots[linecolor=black, dotsize=0.2](5.96,-1.365)
\psdots[linecolor=black, dotsize=0.2](6.36,-2.165)
\psdots[linecolor=black, dotsize=0.2](6.36,-2.965)
\psdots[linecolor=black, dotsize=0.2](5.96,-3.765)
\psdots[linecolor=black, dotsize=0.2](5.16,-3.765)
\psdots[linecolor=black, dotsize=0.2](4.36,-3.765)
\psdots[linecolor=black, dotsize=0.2](3.96,-2.965)
\psdots[linecolor=black, dotsize=0.2](3.96,-2.165)
\psline[linecolor=black, linewidth=0.04](4.36,-1.365)(3.96,-2.165)(3.96,-2.965)(4.36,-3.765)(4.36,-3.765)
\psline[linecolor=black, linewidth=0.04](5.96,-1.365)(6.36,-2.165)(6.36,-2.965)(5.96,-3.765)(5.96,-3.765)
\psline[linecolor=black, linewidth=0.04](4.36,-1.365)(5.96,-1.365)(5.96,-1.365)
\psline[linecolor=black, linewidth=0.04](4.36,-3.765)(5.96,-3.765)(5.96,-3.765)
\rput[bl](8.68,-1.145){1}
\rput[bl](9.46,-1.165){2}
\rput[bl](10.16,-2.265){3}
\rput[bl](9.5,-4.245){5}
\rput[bl](8.66,-4.245){6}
\rput[bl](10.16,-3.085){4}
\rput[bl](7.86,-4.225){7}
\rput[bl](7.2,-3.085){8}
\rput[bl](7.2,-2.305){9}
\rput[bl](7.72,-1.145){10}
\psdots[linecolor=black, dotsize=0.2](7.96,-1.365)
\psdots[linecolor=black, dotsize=0.2](8.76,-1.365)
\psdots[linecolor=black, dotsize=0.2](9.56,-1.365)
\psdots[linecolor=black, dotsize=0.2](9.96,-2.165)
\psdots[linecolor=black, dotsize=0.2](9.96,-2.965)
\psdots[linecolor=black, dotsize=0.2](9.56,-3.765)
\psdots[linecolor=black, dotsize=0.2](8.76,-3.765)
\psdots[linecolor=black, dotsize=0.2](7.96,-3.765)
\psdots[linecolor=black, dotsize=0.2](7.56,-2.965)
\psdots[linecolor=black, dotsize=0.2](7.56,-2.165)
\psline[linecolor=black, linewidth=0.04](7.96,-1.365)(7.56,-2.165)(7.56,-2.965)(7.96,-3.765)(7.96,-3.765)
\psline[linecolor=black, linewidth=0.04](9.56,-1.365)(9.96,-2.165)(9.96,-2.965)(9.56,-3.765)(9.56,-3.765)
\psline[linecolor=black, linewidth=0.04](7.96,-1.365)(9.56,-1.365)(9.56,-1.365)
\psline[linecolor=black, linewidth=0.04](7.96,-3.765)(9.56,-3.765)(9.56,-3.765)
\rput[bl](12.28,-1.145){1}
\rput[bl](13.06,-1.165){2}
\rput[bl](13.76,-2.265){3}
\rput[bl](13.1,-4.245){5}
\rput[bl](12.26,-4.245){6}
\rput[bl](13.76,-3.085){4}
\rput[bl](11.46,-4.225){7}
\rput[bl](10.8,-3.085){8}
\rput[bl](10.8,-2.305){9}
\rput[bl](11.32,-1.145){10}
\psdots[linecolor=black, dotsize=0.2](11.56,-1.365)
\psdots[linecolor=black, dotsize=0.2](12.36,-1.365)
\psdots[linecolor=black, dotsize=0.2](13.16,-1.365)
\psdots[linecolor=black, dotsize=0.2](13.56,-2.165)
\psdots[linecolor=black, dotsize=0.2](13.56,-2.965)
\psdots[linecolor=black, dotsize=0.2](13.16,-3.765)
\psdots[linecolor=black, dotsize=0.2](12.36,-3.765)
\psdots[linecolor=black, dotsize=0.2](11.56,-3.765)
\psdots[linecolor=black, dotsize=0.2](11.16,-2.965)
\psdots[linecolor=black, dotsize=0.2](11.16,-2.165)
\psline[linecolor=black, linewidth=0.04](11.56,-1.365)(11.16,-2.165)(11.16,-2.965)(11.56,-3.765)(11.56,-3.765)
\psline[linecolor=black, linewidth=0.04](13.16,-1.365)(13.56,-2.165)(13.56,-2.965)(13.16,-3.765)(13.16,-3.765)
\psline[linecolor=black, linewidth=0.04](11.56,-1.365)(13.16,-1.365)(13.16,-1.365)
\psline[linecolor=black, linewidth=0.04](11.56,-3.765)(13.16,-3.765)(13.16,-3.765)
\rput[bl](15.88,-1.145){1}
\rput[bl](16.66,-1.165){2}
\rput[bl](17.36,-2.265){3}
\rput[bl](16.7,-4.245){5}
\rput[bl](15.86,-4.245){6}
\rput[bl](17.36,-3.085){4}
\rput[bl](15.06,-4.225){7}
\rput[bl](14.4,-3.085){8}
\rput[bl](14.4,-2.305){9}
\rput[bl](14.92,-1.145){10}
\psdots[linecolor=black, dotsize=0.2](15.16,-1.365)
\psdots[linecolor=black, dotsize=0.2](15.96,-1.365)
\psdots[linecolor=black, dotsize=0.2](16.76,-1.365)
\psdots[linecolor=black, dotsize=0.2](17.16,-2.165)
\psdots[linecolor=black, dotsize=0.2](17.16,-2.965)
\psdots[linecolor=black, dotsize=0.2](16.76,-3.765)
\psdots[linecolor=black, dotsize=0.2](15.96,-3.765)
\psdots[linecolor=black, dotsize=0.2](15.16,-3.765)
\psdots[linecolor=black, dotsize=0.2](14.76,-2.965)
\psdots[linecolor=black, dotsize=0.2](14.76,-2.165)
\rput[bl](19.48,-1.145){1}
\rput[bl](20.26,-1.165){2}
\rput[bl](20.96,-2.265){3}
\rput[bl](20.3,-4.245){5}
\rput[bl](19.46,-4.245){6}
\rput[bl](20.96,-3.085){4}
\rput[bl](18.66,-4.225){7}
\rput[bl](18.0,-3.085){8}
\rput[bl](18.0,-2.305){9}
\rput[bl](18.52,-1.145){10}
\psdots[linecolor=black, dotsize=0.2](18.76,-1.365)
\psdots[linecolor=black, dotsize=0.2](19.56,-1.365)
\psdots[linecolor=black, dotsize=0.2](20.36,-1.365)
\psdots[linecolor=black, dotsize=0.2](20.76,-2.165)
\psdots[linecolor=black, dotsize=0.2](20.76,-2.965)
\psdots[linecolor=black, dotsize=0.2](20.36,-3.765)
\psdots[linecolor=black, dotsize=0.2](19.56,-3.765)
\psdots[linecolor=black, dotsize=0.2](18.76,-3.765)
\psdots[linecolor=black, dotsize=0.2](18.36,-2.965)
\psdots[linecolor=black, dotsize=0.2](18.36,-2.165)
\psline[linecolor=black, linewidth=0.04](18.76,-1.365)(18.36,-2.165)(18.36,-2.965)(18.76,-3.765)(18.76,-3.765)
\psline[linecolor=black, linewidth=0.04](20.36,-1.365)(20.76,-2.165)(20.76,-2.965)(20.36,-3.765)(20.36,-3.765)
\psline[linecolor=black, linewidth=0.04](18.76,-3.765)(20.36,-3.765)(20.36,-3.765)
\rput[bl](4.68,-6.345){1}
\rput[bl](5.46,-6.365){2}
\rput[bl](6.16,-7.465){3}
\rput[bl](5.5,-9.445){5}
\rput[bl](4.66,-9.445){6}
\rput[bl](6.16,-8.285){4}
\rput[bl](3.86,-9.425){7}
\rput[bl](3.2,-8.285){8}
\rput[bl](3.2,-7.505){9}
\rput[bl](3.72,-6.345){10}
\psdots[linecolor=black, dotsize=0.2](3.96,-6.565)
\psdots[linecolor=black, dotsize=0.2](4.76,-6.565)
\psdots[linecolor=black, dotsize=0.2](5.56,-6.565)
\psdots[linecolor=black, dotsize=0.2](5.96,-7.365)
\psdots[linecolor=black, dotsize=0.2](5.96,-8.165)
\psdots[linecolor=black, dotsize=0.2](5.56,-8.965)
\psdots[linecolor=black, dotsize=0.2](4.76,-8.965)
\psdots[linecolor=black, dotsize=0.2](3.96,-8.965)
\psdots[linecolor=black, dotsize=0.2](3.56,-8.165)
\psdots[linecolor=black, dotsize=0.2](3.56,-7.365)
\psline[linecolor=black, linewidth=0.04](3.96,-6.565)(3.56,-7.365)(3.56,-8.165)(3.96,-8.965)(3.96,-8.965)
\psline[linecolor=black, linewidth=0.04](5.56,-6.565)(5.96,-7.365)(5.96,-8.165)(5.56,-8.965)(5.56,-8.965)
\psline[linecolor=black, linewidth=0.04](3.96,-6.565)(5.56,-6.565)(5.56,-6.565)
\psline[linecolor=black, linewidth=0.04](3.96,-8.965)(5.56,-8.965)(5.56,-8.965)
\rput[bl](9.88,-6.345){1}
\rput[bl](10.66,-7.165){2}
\rput[bl](10.62,-8.625){3}
\rput[bl](8.8,-8.265){5}
\rput[bl](8.8,-7.505){6}
\psrotate(9.82, -9.425){-0.37204528}{\rput[bl](9.82,-9.425){4}}
\rput[bl](11.04,-7.165){7}
\rput[bl](11.84,-7.145){8}
\rput[bl](11.86,-8.625){9}
\rput[bl](11.02,-8.585){10}
\rput[bl](1.4,0.255){\large{$G_7$}}
\rput[bl](5.0,0.255){\large{$G_8$}}
\rput[bl](8.6,0.255){\large{$G_9$}}
\rput[bl](12.2,0.255){\large{$G_{10}$}}
\rput[bl](15.8,0.255){\large{$G_{11}$}}
\rput[bl](19.4,0.255){\large{$G_{12}$}}
\rput[bl](1.4,-4.945){\large{$G_{13}$}}
\rput[bl](5.0,-4.945){\large{$G_{14}$}}
\rput[bl](8.6,-4.945){\large{$G_{15}$}}
\rput[bl](12.2,-4.945){\large{$G_{16}$}}
\rput[bl](15.8,-4.945){\large{$G_{17}$}}
\rput[bl](19.4,-4.945){\large{$G_{18}$}}
\rput[bl](4.6,-10.145){\large{$G_{19}$}}
\rput[bl](10.2,-10.145){\large{$G_{20}$}}
\rput[bl](15.8,-10.145){\large{$G_{21}$}}
\psline[linecolor=black, linewidth=0.04](1.56,9.035)(1.56,6.635)(1.56,6.635)
\psline[linecolor=black, linewidth=0.04](2.36,9.035)(2.76,7.435)(2.76,7.435)
\psline[linecolor=black, linewidth=0.04](2.76,8.235)(2.36,6.635)(2.36,6.635)
\psline[linecolor=black, linewidth=0.04](0.76,9.035)(0.36,7.435)(0.36,7.435)
\psline[linecolor=black, linewidth=0.04](0.36,8.235)(0.76,6.635)(0.76,6.635)
\psline[linecolor=black, linewidth=0.04](5.16,9.035)(5.16,6.635)
\psline[linecolor=black, linewidth=0.04](5.96,9.035)(3.96,8.235)(3.96,8.235)
\psline[linecolor=black, linewidth=0.04](4.36,9.035)(3.96,7.435)(3.96,7.435)
\psline[linecolor=black, linewidth=0.04](6.36,8.235)(5.96,6.635)(5.96,6.635)
\psline[linecolor=black, linewidth=0.04](4.36,6.635)(6.36,7.435)(6.36,7.435)
\psbezier[linecolor=black, linewidth=0.04](7.96,9.035)(7.96,8.235)(9.56,8.235)(9.56,9.035)
\psline[linecolor=black, linewidth=0.04](7.56,8.235)(9.96,8.235)(9.96,8.235)
\psline[linecolor=black, linewidth=0.04](8.76,9.035)(9.96,7.435)(9.96,7.435)
\psline[linecolor=black, linewidth=0.04](8.76,6.635)(7.56,7.435)(7.56,7.435)
\psbezier[linecolor=black, linewidth=0.04](11.56,9.035)(11.56,8.235)(13.16,8.235)(13.16,9.035)
\psline[linecolor=black, linewidth=0.04](12.36,9.035)(13.56,8.235)(13.56,8.235)
\psline[linecolor=black, linewidth=0.04](13.56,7.435)(11.56,6.635)(11.56,6.635)
\psline[linecolor=black, linewidth=0.04](13.16,6.635)(11.16,7.435)(11.16,7.435)
\psline[linecolor=black, linewidth=0.04](12.36,6.635)(11.16,8.235)(11.16,8.235)
\psline[linecolor=black, linewidth=0.04](15.96,9.035)(17.16,8.235)(17.16,8.235)
\psbezier[linecolor=black, linewidth=0.04](15.16,9.035)(15.16,8.235)(16.76,8.235)(16.76,9.035)
\psline[linecolor=black, linewidth=0.04](17.16,7.435)(15.96,6.635)(15.96,6.635)
\psline[linecolor=black, linewidth=0.04](16.76,6.635)(14.76,7.435)(14.76,7.435)
\psline[linecolor=black, linewidth=0.04](15.16,6.635)(14.76,8.235)(14.76,8.235)
\psline[linecolor=black, linewidth=0.04](19.56,9.035)(19.56,6.635)(19.56,6.635)
\psline[linecolor=black, linewidth=0.04](20.36,9.035)(18.76,6.635)(18.76,6.635)
\psline[linecolor=black, linewidth=0.04](20.76,8.235)(18.36,7.435)(18.36,7.435)
\psline[linecolor=black, linewidth=0.04](20.76,7.435)(18.36,8.235)(18.36,8.235)
\psline[linecolor=black, linewidth=0.04](18.76,9.035)(20.36,6.635)(20.36,6.635)
\psline[linecolor=black, linewidth=0.04](1.56,3.835)(1.56,1.435)(1.56,1.435)
\psline[linecolor=black, linewidth=0.04](0.76,1.435)(2.36,3.835)(2.36,3.835)
\psline[linecolor=black, linewidth=0.04](2.36,1.435)(0.76,3.835)(0.76,3.835)
\psline[linecolor=black, linewidth=0.04](0.36,3.035)(2.76,3.035)(2.76,3.035)
\psline[linecolor=black, linewidth=0.04](0.36,2.235)(2.76,2.235)(2.76,2.235)
\psline[linecolor=black, linewidth=0.04](5.16,3.835)(5.16,1.435)(5.16,1.435)
\psline[linecolor=black, linewidth=0.04](5.96,3.835)(3.96,2.235)(3.96,2.235)
\psline[linecolor=black, linewidth=0.04](6.36,3.035)(4.36,1.435)(4.36,1.435)
\psline[linecolor=black, linewidth=0.04](6.36,2.235)(4.36,3.835)(4.36,3.835)
\psline[linecolor=black, linewidth=0.04](5.96,1.435)(3.96,3.035)(3.96,3.035)
\psline[linecolor=black, linewidth=0.04](8.76,3.835)(8.76,1.435)(8.76,1.435)
\psbezier[linecolor=black, linewidth=0.04](7.96,3.835)(7.96,3.035)(9.56,3.035)(9.56,3.835)
\psline[linecolor=black, linewidth=0.04](7.96,1.435)(9.96,3.035)(9.96,3.035)
\psline[linecolor=black, linewidth=0.04](9.96,2.235)(7.56,2.235)(7.56,2.235)
\psline[linecolor=black, linewidth=0.04](9.56,1.435)(7.56,3.035)(7.56,3.035)
\psline[linecolor=black, linewidth=0.04](12.36,3.835)(13.16,3.835)(13.16,3.835)
\psline[linecolor=black, linewidth=0.04](12.36,3.835)(12.36,1.435)(12.36,1.435)
\psline[linecolor=black, linewidth=0.04](12.36,3.835)(13.16,1.435)(13.16,1.435)
\psline[linecolor=black, linewidth=0.04](13.16,3.835)(13.56,3.035)(13.56,3.035)
\psline[linecolor=black, linewidth=0.04](13.16,3.835)(11.56,1.435)(11.56,1.435)
\psline[linecolor=black, linewidth=0.04](13.56,3.035)(13.56,2.235)(13.56,2.235)
\psline[linecolor=black, linewidth=0.04](13.56,3.035)(11.16,2.235)(11.16,2.235)
\psline[linecolor=black, linewidth=0.04](13.56,2.235)(13.16,1.435)(13.16,1.435)
\psline[linecolor=black, linewidth=0.04](13.56,2.235)(11.16,3.035)(11.16,3.035)
\psline[linecolor=black, linewidth=0.04](13.16,1.435)(11.56,3.835)(11.56,3.835)
\psline[linecolor=black, linewidth=0.04](12.36,1.435)(11.56,1.435)(11.56,1.435)
\psline[linecolor=black, linewidth=0.04](12.36,1.435)(11.56,3.835)(11.56,3.835)
\psline[linecolor=black, linewidth=0.04](11.56,1.435)(11.16,2.235)(11.16,2.235)
\psline[linecolor=black, linewidth=0.04](11.16,2.235)(11.16,3.035)(11.16,3.035)
\psline[linecolor=black, linewidth=0.04](11.16,3.035)(11.56,3.835)(11.56,3.835)
\psline[linecolor=black, linewidth=0.04](15.96,3.835)(15.96,1.435)(15.96,1.435)
\psbezier[linecolor=black, linewidth=0.04](15.16,3.835)(15.16,3.035)(16.76,3.035)(16.76,3.835)
\psline[linecolor=black, linewidth=0.04](17.16,3.035)(14.76,3.035)(14.76,3.035)
\psline[linecolor=black, linewidth=0.04](17.16,2.235)(14.76,2.235)(14.76,2.235)
\psbezier[linecolor=black, linewidth=0.04](7.96,6.635)(7.96,7.435)(9.56,7.435)(9.56,6.635)
\psbezier[linecolor=black, linewidth=0.04](15.16,1.435)(15.16,2.235)(16.76,2.235)(16.76,1.435)
\psline[linecolor=black, linewidth=0.04](19.56,3.835)(19.56,1.435)(19.56,1.435)
\psline[linecolor=black, linewidth=0.04](20.36,3.835)(20.76,2.235)(20.76,2.235)
\psline[linecolor=black, linewidth=0.04](20.76,3.035)(18.76,1.435)(18.76,1.435)
\psline[linecolor=black, linewidth=0.04](20.36,1.435)(18.36,3.035)(18.36,3.035)
\psline[linecolor=black, linewidth=0.04](18.76,3.835)(18.36,2.235)(18.36,2.235)
\psline[linecolor=black, linewidth=0.04](1.56,-1.365)(1.56,-3.765)(1.56,-3.765)
\psline[linecolor=black, linewidth=0.04](2.36,-3.765)(0.36,-2.965)(0.36,-2.965)
\psline[linecolor=black, linewidth=0.04](0.76,-3.765)(2.76,-2.965)(2.76,-2.965)
\psline[linecolor=black, linewidth=0.04](0.36,-2.165)(2.76,-2.165)(2.36,-2.165)
\psbezier[linecolor=black, linewidth=0.04](0.76,-1.365)(0.76,-2.165)(2.36,-2.165)(2.36,-1.365)
\psline[linecolor=black, linewidth=0.04](5.16,-1.365)(5.16,-3.765)(5.16,-3.765)
\psline[linecolor=black, linewidth=0.04](6.36,-2.165)(5.96,-3.765)(5.96,-3.765)
\psline[linecolor=black, linewidth=0.04](3.96,-2.165)(4.36,-3.765)(4.36,-3.765)
\psline[linecolor=black, linewidth=0.04](3.96,-2.965)(6.36,-2.965)(6.36,-2.965)
\psbezier[linecolor=black, linewidth=0.04](4.36,-1.365)(4.36,-2.165)(5.96,-2.165)(5.96,-1.365)
\psline[linecolor=black, linewidth=0.04](8.76,-1.365)(8.76,-3.765)(8.76,-3.765)
\psline[linecolor=black, linewidth=0.04](9.56,-1.365)(9.96,-2.965)(9.96,-2.965)
\psline[linecolor=black, linewidth=0.04](9.96,-2.165)(7.56,-2.165)(7.56,-2.165)
\psline[linecolor=black, linewidth=0.04](7.96,-1.365)(7.96,-3.765)(7.96,-3.765)
\psline[linecolor=black, linewidth=0.04](7.56,-2.965)(9.56,-3.765)(9.56,-3.765)
\psline[linecolor=black, linewidth=0.04](12.36,-1.365)(12.36,-3.765)(12.36,-3.765)
\psline[linecolor=black, linewidth=0.04](13.16,-1.365)(11.16,-2.165)(11.16,-2.165)
\psline[linecolor=black, linewidth=0.04](11.56,-1.365)(13.56,-2.165)(13.56,-2.165)
\psline[linecolor=black, linewidth=0.04](13.56,-2.965)(11.56,-3.765)(11.56,-3.765)
\psline[linecolor=black, linewidth=0.04](13.16,-3.765)(11.16,-2.965)(11.16,-2.965)
\psline[linecolor=black, linewidth=0.04](15.96,-1.365)(16.76,-1.365)(16.76,-1.365)
\psline[linecolor=black, linewidth=0.04](15.96,-1.365)(16.76,-3.765)(16.76,-3.765)
\psline[linecolor=black, linewidth=0.04](15.96,-1.365)(15.96,-3.765)(15.96,-3.765)
\psline[linecolor=black, linewidth=0.04](16.76,-1.365)(17.16,-2.165)(17.16,-2.165)
\psline[linecolor=black, linewidth=0.04](16.76,-1.365)(15.16,-3.765)(15.16,-3.765)
\psline[linecolor=black, linewidth=0.04](17.16,-2.165)(17.16,-2.965)(17.16,-2.965)
\psline[linecolor=black, linewidth=0.04](17.16,-2.165)(14.76,-2.965)(14.76,-2.965)
\psline[linecolor=black, linewidth=0.04](17.16,-2.965)(16.76,-3.765)(16.76,-3.765)
\psline[linecolor=black, linewidth=0.04](17.16,-2.965)(14.76,-2.165)(14.76,-2.165)
\psline[linecolor=black, linewidth=0.04](16.76,-3.765)(15.16,-1.365)(15.16,-1.365)
\psline[linecolor=black, linewidth=0.04](15.96,-3.765)(14.76,-2.965)(14.76,-2.965)
\psline[linecolor=black, linewidth=0.04](15.96,-3.765)(14.76,-2.165)(14.76,-2.165)
\psline[linecolor=black, linewidth=0.04](15.16,-3.765)(14.76,-2.165)(14.76,-2.165)
\psline[linecolor=black, linewidth=0.04](15.16,-3.765)(15.16,-1.365)(15.16,-1.365)
\psline[linecolor=black, linewidth=0.04](14.76,-2.965)(15.16,-1.365)(15.16,-1.365)
\psline[linecolor=black, linewidth=0.04](19.56,-1.365)(20.36,-1.365)(20.36,-1.365)
\psline[linecolor=black, linewidth=0.04](19.56,-1.365)(20.76,-2.165)(20.76,-2.165)
\psline[linecolor=black, linewidth=0.04](19.56,-1.365)(20.36,-3.765)(20.36,-3.765)
\psline[linecolor=black, linewidth=0.04](20.36,-1.365)(20.76,-2.965)(20.76,-2.965)
\psline[linecolor=black, linewidth=0.04](19.56,-3.765)(18.76,-1.365)(18.76,-1.365)
\psline[linecolor=black, linewidth=0.04](18.76,-1.365)(18.36,-2.965)(18.36,-2.965)
\psline[linecolor=black, linewidth=0.04](18.36,-2.165)(18.76,-3.765)(18.76,-3.765)
\psline[linecolor=black, linewidth=0.04](4.76,-6.565)(4.76,-8.965)(4.76,-8.965)
\psline[linecolor=black, linewidth=0.04](5.96,-7.365)(3.56,-8.165)(3.56,-8.165)
\psline[linecolor=black, linewidth=0.04](5.96,-8.165)(3.56,-7.365)(3.56,-7.365)
\psbezier[linecolor=black, linewidth=0.04](3.96,-6.565)(3.96,-7.365)(5.56,-7.365)(5.56,-6.565)
\psbezier[linecolor=black, linewidth=0.04](3.96,-8.965)(3.96,-8.165)(5.56,-8.165)(5.56,-8.965)
\psline[linecolor=black, linewidth=0.04](9.96,-8.965)(9.16,-8.165)(9.16,-7.365)(9.96,-6.565)(10.76,-7.365)(10.76,-8.165)(9.96,-8.965)(9.96,-8.965)
\psline[linecolor=black, linewidth=0.04](11.16,-8.165)(11.16,-7.365)(11.96,-7.365)(11.96,-8.165)(11.16,-8.165)(11.16,-8.165)
\psline[linecolor=black, linewidth=0.04](11.16,-7.365)(11.96,-8.165)(11.96,-8.165)
\psline[linecolor=black, linewidth=0.04](11.96,-7.365)(11.16,-8.165)(11.16,-8.165)
\psdots[linecolor=black, dotsize=0.2](9.96,-6.565)
\psdots[linecolor=black, dotsize=0.2](10.76,-7.365)
\psdots[linecolor=black, dotsize=0.2](10.76,-8.165)
\psdots[linecolor=black, dotsize=0.2](9.96,-8.965)
\psdots[linecolor=black, dotsize=0.2](9.16,-8.165)
\psdots[linecolor=black, dotsize=0.2](9.16,-7.365)
\psdots[linecolor=black, dotsize=0.2](11.16,-7.365)
\psdots[linecolor=black, dotsize=0.2](11.96,-7.365)
\psdots[linecolor=black, dotsize=0.2](11.96,-8.165)
\psdots[linecolor=black, dotsize=0.2](11.16,-8.165)
\rput[bl](15.48,-6.345){1}
\rput[bl](16.26,-7.165){2}
\rput[bl](16.22,-8.625){3}
\rput[bl](14.4,-8.265){5}
\rput[bl](14.4,-7.505){6}
\psrotate(15.42, -9.425){-0.37204528}{\rput[bl](15.42,-9.425){4}}
\rput[bl](16.64,-7.165){7}
\rput[bl](17.44,-7.145){8}
\rput[bl](17.46,-8.625){9}
\rput[bl](16.62,-8.585){10}
\psline[linecolor=black, linewidth=0.04](15.56,-8.965)(14.76,-8.165)(14.76,-7.365)(15.56,-6.565)(16.36,-7.365)(16.36,-8.165)(15.56,-8.965)(15.56,-8.965)
\psline[linecolor=black, linewidth=0.04](16.76,-8.165)(16.76,-7.365)(17.56,-7.365)(17.56,-8.165)(16.76,-8.165)(16.76,-8.165)
\psline[linecolor=black, linewidth=0.04](16.76,-7.365)(17.56,-8.165)(17.56,-8.165)
\psline[linecolor=black, linewidth=0.04](17.56,-7.365)(16.76,-8.165)(16.76,-8.165)
\psdots[linecolor=black, dotsize=0.2](15.56,-6.565)
\psdots[linecolor=black, dotsize=0.2](16.36,-7.365)
\psdots[linecolor=black, dotsize=0.2](16.36,-8.165)
\psdots[linecolor=black, dotsize=0.2](15.56,-8.965)
\psdots[linecolor=black, dotsize=0.2](14.76,-8.165)
\psdots[linecolor=black, dotsize=0.2](14.76,-7.365)
\psdots[linecolor=black, dotsize=0.2](16.76,-7.365)
\psdots[linecolor=black, dotsize=0.2](17.56,-7.365)
\psdots[linecolor=black, dotsize=0.2](17.56,-8.165)
\psdots[linecolor=black, dotsize=0.2](16.76,-8.165)
\psline[linecolor=black, linewidth=0.04](9.96,-6.565)(9.96,-8.965)(9.96,-8.965)
\psline[linecolor=black, linewidth=0.04](9.16,-7.365)(10.76,-7.365)(10.76,-7.365)
\psline[linecolor=black, linewidth=0.04](9.16,-8.165)(10.76,-8.165)(10.76,-8.165)
\psline[linecolor=black, linewidth=0.04](15.56,-6.565)(15.56,-8.965)(15.56,-8.965)
\psline[linecolor=black, linewidth=0.04](16.36,-8.165)(14.76,-7.365)(14.76,-7.365)
\psline[linecolor=black, linewidth=0.04](14.76,-8.165)(16.36,-7.365)(16.36,-7.365)
\end{pspicture}
}
\end{center}
\caption{Cubic graphs of order $10$.}\label{fig:cubic}
\end{figure}
In the following, we consider the cubic graphs of order $10$ shown in Figure \ref{fig:cubic}. Note that $G_{17}=P$.
\begin{theorem}\label{thm:cubic10}
If $G$ is a cubic graph of order $10$ which is not the Petersen graph, then $d_{\rm st}(G)=3$.
\end{theorem}
\begin{proof}
Consider Figure \ref{fig:cubic}. Suppose that $D$ is a strong dominating set of a cubic graph of order $10$. Since each vertex in $D$ strong dominates at most $3$ other vertices, we need $|D|\geq 3$. Hence a strong domatic partition has at most $\lfloor 10/3\rfloor = 3$ parts. Now, consider the following sets:
\begin{align*}
P_{1} &= \Bigl\{ \{1,3,9 \},\{2,6,8 \},\{4,5,7,10 \} \Bigr\}, &
P_{2} &= \Bigl\{ \{1,3,8 \},\{2,5,7,10 \},\{4,6,9 \} \Bigr\},\\
P_{3} &= \Bigl\{ \{1,3,6 \},\{2,5,9 \},\{4,7,8,10 \} \Bigr\}, &
P_{4} &= \Bigl\{ \{1,6,7 \},\{2,4,9 \},\{3,5,8,10 \} \Bigr\},\\
P_{5} &= \Bigl\{ \{1,4,9 \},\{2,6,7 \},\{3,5,8,10 \} \Bigr\}, &
P_{6} &= \Bigl\{ \{1,4,7\},\{2,5,8 \},\{3,6,9,10 \} \Bigr\},\\
P_{7} &= \Bigl\{ \{1,3,6,9 \},\{2,5,8 \},\{4,7,10 \} \Bigr\}, &
P_{8} &= \Bigl\{ \{1,4,8 \},\{2,5,7,10 \},\{3,6,9 \} \Bigr\},\\
P_{9} &= \Bigl\{ \{1,4,8,10 \},\{2,5,7 \},\{3,6,9 \} \Bigr\}, &
P_{10} &= \Bigl\{ \{1,8,9 \},\{2,5,7,10 \},\{3,4,6 \} \Bigr\},\\
P_{11} &= \Bigl\{ \{1,4,8 \},\{2,5,7,10 \},\{3,6,9 \} \Bigr\}, &
P_{12} &= \Bigl\{ \{1,3,9 \},\{2,5,7,10 \},\{4,6,8 \} \Bigr\},\\
P_{13} &= \Bigl\{ \{1,4,8 \},\{2,5,7,10 \},\{3,6,9 \} \Bigr\}, &
P_{14} &= \Bigl\{ \{1,4,8,10 \},\{2,5,7 \},\{3,6,9 \} \Bigr\},\\
P_{15} &= \Bigl\{ \{1,4,8 \},\{2,5,7,10 \},\{3,6,9 \} \Bigr\}, &
P_{16} &= \Bigl\{ \{1,4,8 \},\{2,5,7,10 \},\{3,6,9 \} \Bigr\},\\
P_{18} &= \Bigl\{ \{1,4,7,10 \},\{2,5,8 \},\{3,6,9 \} \Bigr\}, &
P_{19} &= \Bigl\{ \{1,4,8 \},\{2,5,7,10 \},\{3,6,9 \} \Bigr\},\\
P_{20} &= \Bigl\{ \{1,3,7 \},\{2,4,8 \},\{5,6,9,10 \} \Bigr\}, &
P_{21} &= \Bigl\{ \{1,4,7 \},\{2,5,8 \},\{3,6,9,10 \} \Bigr\}.
\end{align*}
One can check that $P_i$ is a strong domatic partition of $G_i$ for each $1\leq i\leq 21$ with $i\neq 17$, so every such $G_i$ admits a strong domatic partition of size $3$. Together with the upper bound above, this gives $d_{\rm st}(G_i)=3$.
\hfill $\square$\medskip
\end{proof}
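The verification that each $P_i$ is a strong domatic partition is mechanical and can be automated once the adjacency lists of the graphs $G_i$ are read off Figure \ref{fig:cubic}. The following Python sketch (illustrative only, not part of the original proof) checks the defining property; since all degrees in a cubic graph are equal, strong domination coincides with ordinary domination. The $K_4$ example at the end is purely hypothetical.
\begin{verbatim}
from itertools import chain

def is_dominating(adj, part):
    # Every vertex lies in `part` or is adjacent to a vertex of `part`.
    closed = set(part).union(*(adj[v] for v in part))
    return closed == set(adj)

def is_strong_domatic_partition(adj, parts):
    # The parts must partition the vertex set, and each part must dominate.
    vertices = sorted(chain.from_iterable(parts))
    return vertices == sorted(adj) and all(is_dominating(adj, p)
                                           for p in parts)

# Illustration on K_4 (hypothetical; not one of the graphs G_i):
k4 = {1: {2, 3, 4}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {1, 2, 3}}
print(is_strong_domatic_partition(k4, [{1, 2}, {3, 4}]))  # True
\end{verbatim}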
As an immediate result of Corollary \ref{cor:strong-domatic-min-deg}, and Theorems \ref{thm:petersen} and \ref{thm:cubic10}, we have the following:
\begin{corollary}
The domatic number and the strong domatic number of the Petersen graph are each unique among the cubic graphs of order $10$.
\end{corollary}
|
{
"arxiv_id": "2302.14218",
"language": "en",
"timestamp": "2023-03-01T02:05:30",
"url": "https://arxiv.org/abs/2302.14218",
"yymm": "2302"
} | \section{Introduction}
\label{s:intro}
Recent technological advances have allowed biomedical researchers to collect an extraordinary amount of genetic data from patients.
Studies that seek to characterize associations between these genetic factors and medical outcomes face numerous challenges.
Firstly, the standard maximum likelihood estimator (MLE) is not well-defined for classical statistical models such as linear and generalized linear models (GLMs) when the covariates are high dimensional, in particular, $p\gg n$.
Several variable selection and estimation procedures are available for sparse models with high dimensional covariates.
The commonly-used lasso estimator \citep{tibshirani1996regression} introduces a penalty that shrinks many coefficient estimates to exactly zero, thus performing simultaneous variable selection and coefficient estimation.
Similar approaches include the adaptive lasso \citep{zou2006adaptive}, the smoothly clipped absolute deviation (SCAD) estimator \citep{fan2001variable} and elastic net regularization \citep{zou2005regularization}.
Another common approach, which only performs variable selection, is sure independence screening \citep{fan2008sure,fan2010sure}.
Secondly, although penalized regression methods can produce coefficient estimates for GLMs, they typically have substantial biases and do not quantify the uncertainty around these estimates.
Thus they cannot be used to directly construct confidence intervals or to do hypothesis testing in the high dimensional setting.
More recent progress on statistical inference for high dimensional data includes debiased lasso estimators \citep{zhang2014confidence,van2014asymptotically}, MLE-based sample splitting approaches \citep{fei2021estimation,fei2019drawing}, and the decorrelated score test \citep{ning2017general}.
Other work on hypothesis testing without coefficient estimation includes the sample splitting approach for p-values by \citet{meinshausen2009p} and the test developed by \citet{zhu2018linear} for linear models without imposing sparsity assumptions on the regression parameters.
Lastly, we note that some post-model selection inference procedures, such as the simultaneous inference method of \cite{kuchibhotla2020valid} and the sample splitting and bootstrap method of \cite{rinaldo2019bootstrapping}, are applicable to high dimensional covariates, although these in particular only address linear models.
The goal of such methods is distinct from ours.
They are designed to be valid for arbitrary model selection procedures and under potential model misspecification.
This robustness comes at a cost of conservative inference results when a model selection procedure, such as the lasso estimator, can select a superset of the correct model with high probability, which is the setting that we consider below.
Our proposed method is based on sample splitting, a two-stage approach where, first, one part of the data is used to select the covariates to be included in the model, and then the other part is used to estimate the now lower-dimensional model ($p<n$ but $p$ may grow with $n$).
Similar methods have been recently developed for linear models \citep{fei2019drawing,wang2020debiased} and generalized linear models \citep{fei2021estimation} based on the MLE, as well as for Cox proportional hazards models \citep{zhang2022projection} based on the decorrelated score function.
Compared to \cite{fei2021estimation}, we use the lasso instead of sure independence screening for model selection, which allows for more mild theoretical assumptions, as well as the debiased lasso in place of the MLE, which can substantially improve the bias, variance, and confidence interval coverage of the resulting estimators, as shown in our simulations below.
Note that debiased lasso estimation would be equivalent to using the MLE in sample splitting approaches for linear models.
Our contributions are as follows.
For the model selection component, following \cite{huang2012estimation} we show that, under mild sufficient conditions, the lasso screens out a large portion of the noise variables and selects a model containing the true signal variables with high probability in the case of random design.
Existing work on the selected model size and estimation error of the lasso method for linear regression includes \citet{bickel}, \citet{zhang2008sparsity}, \citet{meinshausen2009lasso}, and \citet{zhao2006model}, under differing assumptions on the covariates.
Similar results were shown for generalized linear models by \citet{van2008high} and \cite{huang2012estimation}, only the latter of which addressed model selection consistency and the number of selected noise variables.
Their results imply that the lasso selected model size is of the same order as the number of nonzero regression coefficients, with high probability.
This is essential because if the selected model size is too large then its design matrix may not have full rank, even asymptotically, and thus the regression coefficients will still not be estimable via MLE or even the refined debiased lasso of \citet{xia2021debiased}.
For the lower-dimensional model estimation stage, while a naive approach would be to use the standard maximum likelihood estimator with the selected covariates, which is commonly used in sample splitting literature, we instead apply the recently developed refined debiased lasso approach of \citet{xia2021debiased}.
This method is better suited for models that contain a substantial number of noise variables, as is typically expected after variable selection.
This is a more desirable approach because, for example, the conditions under which the GLM lasso selects no noise variables at all are known to be quite stringent \citep{huang2012estimation}.
We illustrate the potentially large difference in performance through simulations, where the MLE exhibits a strong bias that inflates the size of estimated coefficients, and this bias increases with the true signal strength, in contrast to the approximately unbiased estimates from the debiased lasso.
Such tendencies are discussed further by \citet{xia2021debiased} in the lower-dimensional setting.
For a set of prespecified coefficients, since their estimators based on a single sample split suffer from a loss of efficiency due to only using part of the data for estimation, we further investigate the idea of averaging estimates across multiple sample splits.
\cite{fei2019drawing} proposed a multiple splitting procedure for linear models, where they required model selection consistency for their asymptotic theory.
\cite{wang2020debiased} discussed multiple splitting for a single treatment variable in linear models, under much milder assumptions.
Recently, \cite{fei2021estimation} proposed a multiple splitting procedure for generalized linear models based on the MLE under a partial orthogonality condition, which requires that the signal variables be independent of the noise variables.
For our proposed multiple splitting procedure, we apply the debiased lasso estimator instead of the MLE in the estimation stage, and show through simulations that our procedure results in approximately unbiased estimates and substantially reduced variability compared to the MLE-based multiple splitting that is often biased. For the theoretical development, we adapt the mild conditions of \cite{wang2020debiased} to GLMs and show the asymptotic normality of our proposed approach.
As evidenced by simulations, our multiple splitting estimator can produce confidence intervals with the nominal coverage.
\section{Methods}
\label{s:method}
We first provide brief overviews of lasso and debiased lasso methods, then introduce our proposed sample splitting debiased lasso methods for GLMs.
\subsection{Notation}
\label{s:notation}
For a positive integer $p$, we denote the size of any index set $S\subseteq\{1,\ldots,p\}$ by $|S|$.
For a $p$-dimensional vector $\boldsymbol{b}$ and a $p\times p$ dimensional matrix $\boldsymbol{B}$, let $\boldsymbol{b}_S$ denote the $|S|$-dimensional subvector with entries indexed by $S$, and similarly $\boldsymbol{B}_S$ denote the $|S|\times |S|$ submatrix with rows and columns indexed by $S$.
The $\ell_q$ norm is denoted as $\norm{\boldsymbol{b}}{q}$ for $q\geq 1$.
For positive sequences $a_n$ and $b_n$, we write $a_n=\bigO{b_n}$ if there exists a constant $c>0$ and $N>0$ such that $a_n/b_n<c$ for all $n>N$, we write $a_n=\littleO{b_n}$ if $a_n/b_n\rightarrow 0$ as $n\rightarrow\infty$, and we write $a_n\sim b_n$ if $a_n=\bigO{b_n}$ and $b_n=\bigO{a_n}$.
Let $(y_i, \boldsymbol{\widetilde{x}}_i)$, $i=1,\ldots,n$, be independent and identically distributed copies of $(y, \boldsymbol{\widetilde{x}})$, where $y$ is a scalar-valued response variable and $\boldsymbol{\widetilde{x}}$ is a $p$-dimensional vector of covariates.
We consider the high dimensional setting where $p$ can be much larger than $n$.
Let $\boldsymbol{x}_i = (1,\boldsymbol{\widetilde{x}}_i^T)^T$ and denote the $n\times (p+1)$ design matrix by $\boldsymbol{X}$, with $i$th row $\boldsymbol{x}_i^T$ and $j$th column $\boldsymbol{X}_j$.
We assume without loss of generality that $\boldsymbol{\widetilde{x}}$ has mean zero.
For any function $f(y,\boldsymbol{x})$, we define $P_nf = n^{-1}\sum_{i=1}^n f(y_i, \boldsymbol{x}_i)$.
We consider generalized linear models \citep{mccullagh2019generalized} with canonical link function and known dispersion parameter.
Denote
\[
\rho_{\boldsymbol{\beta}}(y, \boldsymbol{x})
= \rho(y, \boldsymbol{x}^T\boldsymbol{\beta})
= A(\boldsymbol{x}^T\boldsymbol{\beta}) - y\boldsymbol{x}^T\boldsymbol{\beta}
\]
for a known twice-differentiable function $A(\cdot)$, and its first and second derivatives with respect to $\boldsymbol{\beta}$ as $\dot{\rho}_{\boldsymbol{\beta}}$ and $\ddot{\rho}_{\boldsymbol{\beta}}$, respectively.
The negative log-likelihood for $\boldsymbol{\beta}$ is then $P_n\rho_{\boldsymbol{\beta}} = n^{-1}\sum_{i=1}^n \rho_{\boldsymbol{\beta}}(y_i, \boldsymbol{x}_i)$ with score function $P_n\dot{\rho}_{\boldsymbol{\beta}}=n^{-1}\sum_{i=1}^n \{A'(\boldsymbol{x}_i^T\boldsymbol{\beta}) - y_i \}\boldsymbol{x}_i $ and Hessian matrix $P_n\ddot{\rho}_{\boldsymbol{\beta}}(y_i, \boldsymbol{x}_i) = n^{-1}\sum_{i=1}^n A''(\boldsymbol{x}_i^T\boldsymbol{\beta})\boldsymbol{x}_i\boldsymbol{x}_i^T$.
Denote $\boldsymbol{\Sigma}_{\boldsymbol{\beta}} = E\{P_n\ddot{\rho}_{\boldsymbol{\beta}}(y_i, \boldsymbol{x}_i)\}$.
The $(p+1)$-dimensional unknown true coefficient vector $\boldsymbol{\beta}^0$ is assumed to be sparse, and we denote the index set of signal variables by $S_0 = \{j:\boldsymbol{\beta}_j^0\neq 0\}$, with $s_0=|S_0|$.
The quantities $p$, $\boldsymbol{\beta}^0$, $s_0$, and $S_0$ are allowed to change with $n$.
For practical applications, one would generally consider $s_0$ and $\boldsymbol{\beta}^0_{S_0}$ to be fixed so that the regression coefficients have the usual interpretation in terms of the conditional mean $E[y|\boldsymbol{x}]$ as $p$ grows with $n$. Letting $s_0$ grow with $n$ does not maintain the log odds ratio interpretation of $\boldsymbol{\beta}^0$, for example, in a logistic regression model.
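For example, in logistic regression $A(a)=\log(1+e^{a})$, so $A'(a)=e^{a}/(1+e^{a})$ and $A''(a)=A'(a)\{1-A'(a)\}$; the score and Hessian above then reduce to the familiar quantities $n^{-1}\sum_{i=1}^n \{A'(\boldsymbol{x}_i^T\boldsymbol{\beta}) - y_i\}\boldsymbol{x}_i$ and $n^{-1}\sum_{i=1}^n A'(\boldsymbol{x}_i^T\boldsymbol{\beta})\{1-A'(\boldsymbol{x}_i^T\boldsymbol{\beta})\}\boldsymbol{x}_i\boldsymbol{x}_i^T$.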
\subsection{The lasso estimator for generalized linear models}
\label{s:lasso}
The lasso estimator $\boldsymbol{\widetilde\beta}^\lambda$ for the GLM parameters $\boldsymbol{\beta}^0$ is, given a tuning parameter $\lambda>0$, a minimizer of the penalized negative log likelihood
\[
\boldsymbol{\widetilde\beta}^\lambda = \argmin_{\boldsymbol{\beta}} P_n\rho_{\boldsymbol{\beta}} + \lambda\norm{\boldsymbol{\beta}}{1}.
\]
In general there is no closed form solution for $\boldsymbol{\widetilde\beta}^\lambda$, but the objective function is convex and can be optimized efficiently with widely available software \citep{friedman2010regularization}.
The penalization shrinks the estimated coefficients towards zero, with many of their values set exactly to zero, resulting in a sparse estimator $\boldsymbol{\widetilde\beta}^\lambda$.
Model selection may then be performed by keeping only the covariates with nonzero estimated coefficients.
In practice, a small subset with a finite number of coefficients may be left unpenalized so that they are never excluded from the selected model.
In our simulations and data analysis, we do not penalize the intercept.
\subsection{The debiased lasso estimator}
\label{s:debiased}
\cite{zhang2014confidence} and \citet{van2014asymptotically} proposed debiased, also called desparsified, lasso estimators for linear models and GLMs, respectively.
This procedure may be used to obtain coefficient estimates and confidence intervals across the entire parameter vector $\boldsymbol{\beta}^0$ in high dimensional settings.
It does not rely on sample splitting or variable selection.
Instead, using the lasso estimator $\boldsymbol{\widetilde\beta}^\lambda$ as an initial value, it performs a single Newton iteration for minimizing the negative log likelihood $P_n\rho_{\boldsymbol{\beta}}$, using an approximate inverse for the Hessian matrix $P_n\ddot{\rho}_{\boldsymbol{\widetilde\beta}^\lambda}$ such as a nodewise lasso estimator.
The resulting desparsified lasso estimator is no longer sparse and is, under further conditions, asymptotically unbiased with a normal distribution.
A key assumption for accurately estimating the inverse of the Hessian matrix in high dimensional settings, however, is the sparsity assumption on $\boldsymbol{\Sigma}_{\boldsymbol{\beta}^0}^{-1} = [E\{A''(\boldsymbol{x}^T\boldsymbol{\beta}^0)\boldsymbol{x}\bIx^T\}]^{-1}$.
For generalized linear models, each entry of this matrix generally depends on all the signal variables due to the non-identity link function, so the sparsity assumption is difficult to interpret and often may not hold in practice.
Such issues were discussed further by \citet{xia2021debiased}, who proposed a refined debiased lasso method for lower dimensional GLMs with a diverging number of covariates.
In that setting and under some standard conditions, $\boldsymbol{\widehat\Sigma}_{\boldsymbol{\widetilde\beta}^\lambda} = P_n\ddot{\rho}_{\boldsymbol{\widetilde\beta}^\lambda}$ is directly invertible with probability going to one, and the resulting estimator
\begin{equation}\label{dlasso}
\boldsymbol{\widetilde\beta}^{DL} = \boldsymbol{\widetilde\beta}^\lambda - \boldsymbol{\widehat\Theta}_{\boldsymbol{\widetilde\beta}^\lambda} P_n \dot{\rho}_{\boldsymbol{\widetilde\beta}^\lambda}
\end{equation}
with estimated covariance matrix $\boldsymbol{\widehat\Theta}_{\boldsymbol{\widetilde\beta}^\lambda}/n = \boldsymbol{\widehat\Sigma}_{\boldsymbol{\widetilde\beta}^\lambda}^{-1}/n$
was shown to be asymptotically normal.
Additionally, it outperforms both the original desparsified lasso estimator and the MLE in several simulation settings of lower-dimensional GLMs with sparse regression coefficients.
It can also be computed using standard GLM software packages for the lasso and MLE.
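As a concrete illustration of (\ref{dlasso}), the following Python sketch (our own, not the authors' implementation, which uses a single iteration of R's glm function) computes the refined debiased lasso and its covariance estimate for the logistic case, assuming the Hessian at the initial lasso estimate is invertible.
\begin{verbatim}
import numpy as np

def refined_debiased_lasso_logistic(X, y, beta_lasso):
    # One Newton step from the lasso estimate, logistic loss:
    # beta_DL = beta_lasso - Hessian^{-1} score, cov = Hessian^{-1}/n.
    n = len(y)
    p = 1.0 / (1.0 + np.exp(-X @ beta_lasso))          # A'(x_i' beta)
    score = X.T @ (p - y) / n                          # P_n rho-dot
    hessian = (X * (p * (1 - p))[:, None]).T @ X / n   # P_n rho-ddot
    theta = np.linalg.inv(hessian)                     # assumed invertible
    return beta_lasso - theta @ score, theta / n
\end{verbatim}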
\subsection{The debiased lasso after single sample splitting}
\label{s:sample-splitting}
We propose a debiased lasso after sample splitting procedure that uses the lasso for model selection and the refined debiased lasso for estimation and inference.
For a fixed splitting proportion $q\in (0,1)$, we randomly split the sample $(y_i,\boldsymbol{x}_i)_{i=1}^n$ into two subsets: $\mathcal{D}_1$ containing $n_1= qn$ individuals to be used for variable selection, and $\mathcal{D}_2$ containing $n_2 = n-n_1$ individuals to be used for estimation.
For a pre-specified fixed set of coefficient indices $S$ that is allowed to be an empty set, do the following:
\begin{enumerate}
\item \label{ss-selection} using the subsample $\mathcal{D}_1$, fit a GLM lasso estimator $\boldsymbol{\widetilde\beta}^\lambda$ to obtain the selected model $\widehat{S} = S\cup \{j:\boldsymbol{\widetilde\beta}^\lambda_j\neq 0\}$;
\item\label{ss-estimation}
using the subsample $\mathcal{D}_2$, compute the debiased lasso estimator $\boldsymbol{\widetilde\beta}^{DL}_{\widehat{S}}$ for $\boldsymbol{\beta}^0_{\widehat{S}}$ based on a GLM with covariates $\widehat{S}$, and use the estimated covariance matrix of $\boldsymbol{\widetilde\beta}^{DL}_{\widehat{S}}$, $\boldsymbol{\widehat\Theta}_{\boldsymbol{\widetilde\beta}^\lambda,\widehat{S}}/n_2$, to construct $100(1-\alpha)\%$ confidence intervals for any contrast $\boldsymbol{a}_{\widehat{S}}^T\boldsymbol{\beta}^0_{\widehat{S}}$ based on a normal distribution.
\end{enumerate}
Because the two subsamples are independent, we can make asymptotically valid statistical inference on any covariate in $\widehat{S}$ as long as this set contains the true signal variables when $n$ is large enough.
In practice, covariates with small coefficients may not be selected in the first stage, so drawing conclusions about the statistical significance of variables omitted in step \ref{ss-selection} is not advised.
The large bias that can potentially occur when $S$ is a null set is illustrated through simulations in Section \ref{s:simulations}.
Furthermore, although the size of the fitted model $\widehat{S}$ must be low-dimensional, this does not prevent one from obtaining estimates for each individual coefficient.
For example, step \ref{ss-estimation} may be repeated for each individual coefficient, i.e. for $S=\{1\}, \{2\},\ldots, \{p\}$, along with an appropriate correction for multiple testing when computing p-values and confidence intervals.
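To fix ideas, here is a minimal Python sketch of steps \ref{ss-selection} and \ref{ss-estimation} for the logistic case, reusing the debiasing function from the sketch above. The use of scikit-learn, the fixed penalty strength, and the omission of the intercept are simplifying assumptions; in practice $\lambda$ is chosen by cross validation.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

def single_split_dlasso(X, y, S=(), q=0.5, C=0.1, seed=0):
    # Random split into D1 (selection) and D2 (estimation).
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    n1 = int(q * len(y))
    d1, d2 = idx[:n1], idx[n1:]
    lasso = LogisticRegression(penalty="l1", solver="saga",
                               C=C, max_iter=5000)
    # Step 1: lasso selection on D1 (C is roughly 1/lambda).
    lasso.fit(X[d1], y[d1])
    S_hat = sorted(set(S) | set(np.flatnonzero(lasso.coef_.ravel())))
    # Step 2: lasso initial value on D2, then one debiasing Newton step.
    lasso.fit(X[d2][:, S_hat], y[d2])
    beta_dl, cov = refined_debiased_lasso_logistic(
        X[d2][:, S_hat], y[d2], lasso.coef_.ravel())
    return S_hat, beta_dl, np.sqrt(np.diag(cov))
\end{verbatim}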
\subsection{The debiased lasso after multiple sample splitting}
\label{s:sample-splitting-m}
As previously noted, the single sample splitting estimator suffers from a loss of efficiency due to only using a fraction of the total sample to estimate $\boldsymbol{\beta}^0$.
The following multiple splitting procedure addresses this issue. For a pre-specified fixed set of coefficient indices $S$,
we now generate $B$ random splits of the sample $(y_i,\boldsymbol{x}_i)_{i=1}^n$, denoted $(\mathcal{D}_{1,b}, \mathcal{D}_{2,b})$ for $b=1,\ldots,B$, and do the following:
\begin{enumerate}
\item for $b=1,\ldots,B$,
\begin{enumerate}
\item using the subsample $\mathcal{D}_{1,b}$, fit a GLM lasso estimator $\boldsymbol{\widetilde\beta}^\lambda_b$ to obtain the selected model $\widehat{S}_b = S\cup \{j:\boldsymbol{\widetilde\beta}^\lambda_{b,j}\neq 0\}$;
\item\label{ms-estimation} using the subsample $\mathcal{D}_{2,b}$, compute the debiased lasso estimator $\boldsymbol{\widetilde\beta}^{DL}_{S,b}$ for $\boldsymbol{\beta}^0_{S}$ based on a GLM with covariates $\widehat{S}_b$;
\end{enumerate}
\item finally the multiple splitting estimators are $\boldsymbol{\widehat\beta}_S = B^{-1}\sum_{b=1}^B \boldsymbol{\widetilde\beta}^{DL}_{S,b}$.
\end{enumerate}
Note that all target coefficients to estimate must be pre-specified, in contrast to the single split procedure.
For exploratory analysis, where step \ref{ms-estimation} may be repeated for each individual coefficient, the multiple splitting estimator typically faces a substantially stricter significance threshold than single split inference on $\widehat{S}$, since it must correct for $p$ comparisons rather than $|\widehat{S}|<p$.
We apply the same variance estimator as \cite{fei2021estimation}, based on the nonparametric delta method with a bias correction for controlling the Monte Carlo error \citep{efron2014estimation,wager2014confidence}.
Letting $\boldsymbol{v}_b$ denote the $n$-vector of sampling indicators such that $\boldsymbol{v}_{b,i}\in \{0,1\}$ is equal to one if $i\in \mathcal{D}_{2,b}$, and is zero otherwise,
\[
\widehat{Var}\left (\boldsymbol{\widehat\beta}_j \right )
= \widehat{V}_j - \frac{n(n_2)}{B^2(n-n_2)}\sum_{b=1}^B \left (\boldsymbol{\widetilde\beta}^{DL}_{j,b} - \boldsymbol{\widehat\beta}_j\right )^2,
\]
\[
\widehat{V}_j = \frac{n(n-1)}{(n-n_2)^2}\sum_{i=1}^n \left \{\frac{1}{B}\sum_{b=1}^B \left (\boldsymbol{v}_{b,i} - \bar{\boldsymbol{v}}_i\right )(\boldsymbol{\widetilde\beta}^{DL}_{j,b} - \boldsymbol{\widehat\beta}_j) \right \}^2,
\]
where $\bar{\boldsymbol{v}}_i = B^{-1}\sum_{b=1}^B \boldsymbol{v}_{b,i}$, and $100(1-\alpha)\%$ confidence intervals may be constructed based on a normal approximation.
Similarly, the estimated covariance matrix is
\[
\widehat{Var}\left (\boldsymbol{\widehat\beta}_S \right )
= \frac{n(n-1)}{(n-n_2)^2}\sum_{i=1}^n \boldsymbol{\widehat{C}}_i\boldsymbol{\widehat{C}}_i^T
- \frac{n(n_2)}{B^2(n-n_2)}\sum_{b=1}^B \boldsymbol{\widehat{D}}_b\boldsymbol{\widehat{D}}_b^T
\]
where
\[
\boldsymbol{\widehat{C}}_i = \frac{1}{B}\sum_{b=1}^B \left (\boldsymbol{v}_{b,i} - \bar{\boldsymbol{v}}_i\right )\left (\boldsymbol{\widetilde\beta}^{DL}_{S,b} - \boldsymbol{\widehat\beta}_S\right ), \
\boldsymbol{\widehat{D}}_b = \boldsymbol{\widetilde\beta}^{DL}_{S,b} - \boldsymbol{\widehat\beta}_S.
\]
Both single and multiple sample splitting procedures require that the matrices $\boldsymbol{\widehat\Sigma}_{\boldsymbol{\widetilde\beta}^\lambda,\widehat{S}}$ computed from each estimation split are invertible.
One way to encourage this is to set a hard limit for the selected model size, which would set a lower bound for the parameter $\lambda$ in the lasso.
In our simulations and real data analysis we do not restrict $\lambda$, which is chosen by cross validation, and instead allow the glm function in R to follow its default behavior of automatically dropping covariates from the model when the design matrix is ill-conditioned.
As the number of random splits $B$ grows we typically have diminishing gains in efficiency relative to the increased computational cost.
Although our theoretical results consider the maximum possible value of $B={n\choose n_2}$, our simulation results indicate that $B=1,000$ is sufficient for many practical settings with up to several hundred individuals in the sample.
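The variance formulas above translate directly into code. The following Python sketch (ours, not the authors' implementation) computes the multiple splitting estimate and its variance for a single coefficient; beta_draws holds the $B$ single-split estimates and v the estimation-subsample indicators.
\begin{verbatim}
import numpy as np

def ms_variance(beta_draws, v, n2):
    # beta_draws: (B,) single-split estimates of one coefficient.
    # v: (B, n) 0/1 matrix with v[b, i] = 1 iff sample i is in D_{2,b}.
    B, n = v.shape
    beta_hat = beta_draws.mean()          # multiple splitting estimate
    resid = beta_draws - beta_hat
    v_bar = v.mean(axis=0)                # per-sample inclusion rate
    C = ((v - v_bar) * resid[:, None]).mean(axis=0)   # length-n vector
    V = n * (n - 1) / (n - n2) ** 2 * np.sum(C ** 2)
    bias = n * n2 / (B ** 2 * (n - n2)) * np.sum(resid ** 2)
    return beta_hat, V - bias             # Monte Carlo bias-corrected
\end{verbatim}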
\section{Theoretical Results}
\label{s:theory}
For the single split estimator, we make the following assumptions:
\begin{enumerate}
\item\label{cond-subg} The covariates $\boldsymbol{x}_i$ are sub-Gaussian random vectors and $\norm{\boldsymbol{X}}{\infty}\leq K$ almost surely for some constant $K>0$.
\item\label{cond-eig} $\boldsymbol{\Sigma}_{\boldsymbol{\beta}^0}$ and $E[\boldsymbol{X}^T\boldsymbol{X}/n]$ are positive definite with eigenvalues bounded from above and away from zero.
\item\label{cond-lip} The derivatives $\dot{\rho}(y,a) = \partial\rho(y,a)/\partial a$ and $\ddot{\rho}(y,a) = \partial^2\rho(y,a)/\partial a^2$ exist for all $(y, a)$ and there exists a $\delta>0$ and constant $c_{Lip}>0$ such that $\ddot{\rho}$ is locally Lipschitz
\[
\max_{a_0\in \{\boldsymbol{x}_i^T\boldsymbol{\beta}^0\}}
\sup_{\max\left (|a-a_0|, |\widehat{a} - a_0|\right )\leq\delta}
\sup_{y\in\mathcal{Y}}
\frac{|\ddot{\rho}(y,a) - \ddot{\rho}(y, \widehat{a})|}{|a-\widehat{a}|}\leq c_{Lip}.
\]
Also, there exist constants $K_1,K_2>0$ such that the derivatives are bounded:
\[
\max_{a_0\in \{\boldsymbol{x}_i^T\boldsymbol{\beta}^0\}}
\sup_{y\in\mathcal{Y}}
|\dot{\rho}(y,a_0)|\leq K_1,
\]
\[
\max_{a_0\in \{\boldsymbol{x}_i^T\boldsymbol{\beta}^0\}}
\sup_{y\in\mathcal{Y}}
|\ddot{\rho}(y,a_0)|\leq K_2.
\]
\item\label{cond-linpred} $\norm{\boldsymbol{X}\boldsymbol{\beta}^0}{\infty}$ is bounded above almost surely.
\item\label{cond-betamin} The sparsity $s_0$ satisfies $s_0\log(p)/\sqrt{n} = \littleO{1}$ and the regression parameters are large enough that $\min_{j\in S_0}|\boldsymbol{\beta}_j^0| \geq \bigO{s_0\sqrt{\log(p)/n}}$ as $n\rightarrow\infty$.
\end{enumerate}
Assumptions \ref{cond-subg}, \ref{cond-eig}, and \ref{cond-linpred} are typical in high dimensional literature.
Bounded covariates are common in practice, including dummy variables for categorical covariates, minor allele counts, and physical measurements.
Assumption \ref{cond-lip} is required in order to apply the results for the refined debiased lasso estimator of \cite{xia2021debiased}.
All but the boundedness condition on $\dot{\rho}$ are satisfied when the function $A$ is twice continuously differentiable, as is the case with commonly-used GLMs, due to assumption \ref{cond-linpred}.
The derivative $\dot{\rho}$ is also bounded if the response variable has bounded support, such as in logistic regression, and this condition can be relaxed to include other common GLMs by instead assuming sub-exponential tails, as in Lemma 3.4 of \cite{ning2017general}.
Assumption \ref{cond-betamin} guarantees that, with high probability, model selection based on the lasso does not miss any signal variables, so that the selected model is not misspecified.
This is a standard condition for sample splitting procedures that is not required by the desparsified lasso \citep{van2014asymptotically}.
For multiple splitting, we make the following additional assumptions for the set of target covariates $S$, where $\boldsymbol{a}$ is a $(p+1)$-vector such that $\norm{\boldsymbol{a}_S}{2}=1$ and $\norm{\boldsymbol{a}_{S^c}}{2}=0$.
Let $\mathcal{Z} = (y_i,\boldsymbol{x}_i)_{i=1}^n$ denote the entire data set.
\begin{enumerate}
\setcounter{enumi}{5}
\item\label{cond-ms1} For independent sampling indicator vectors $\boldsymbol{v}$ and $\boldsymbol{\widetilde{v}}$ with corresponding fitted models $\widehat{S}$ and $\widetilde{S}$, define
\[h_{i,n}
= \left [E\left (\boldsymbol{v}_i\boldsymbol{a}_{\widehat{S}}^T\boldsymbol{\Sigma}_{\boldsymbol{\beta}^0, \widehat{S}}^{-1}\boldsymbol{\widetilde{I}}_{\widehat{S}}\Big |\mathcal{Z} \right )
- E\left (\boldsymbol{v}_i\boldsymbol{a}_{\widetilde{S}}^T\boldsymbol{\Sigma}_{\boldsymbol{\beta}^0, \widetilde{S}}^{-1}\boldsymbol{\widetilde{I}}_{\widetilde{S}}\Big |\mathcal{Z} \right )\right ]\dot{\rho}_{\boldsymbol{\beta}^0}(y_i,\boldsymbol{x}_i) \]
where $\boldsymbol{\widetilde{I}}_S$ is the $|S|\times (p+1)$ matrix such that $\boldsymbol{\widetilde{I}}_S\boldsymbol{b} = \boldsymbol{b}_S$ for any $(p+1)$-vector $\boldsymbol{b}$, and the expectations are with respect to the sampling weights, conditional on the data.
We assume that $\sum_{i=1}^n h_{i,n}/\sqrt{n} =\littleO{1}$ and that $P_n\ddot{\rho}_{\boldsymbol{\widetilde\beta}^\lambda}$ is invertible when computed from any estimation subsample.
\item\label{cond-ms2} There exists a random $(p+1)$-vector $\boldsymbol{\eta}_n$ independent of the data such that $\norm{\boldsymbol{\eta}_n}{\infty}$ is bounded and
\[\norm{E\left (\boldsymbol{a}_{\widehat{S}}^T\boldsymbol{\Sigma}_{\boldsymbol{\beta}^0, \widehat{S}}^{-1}\boldsymbol{\widetilde{I}}_{\widehat{S}}\Big |\mathcal{Z} \right ) - \boldsymbol{\eta}_n^T}{1} = \op{1/\sqrt{\log(p)}}. \]
\end{enumerate}
Similar conditions were used and discussed further by \cite{wang2020debiased} in the context of linear models.
Since, for low-dimensional GLMs, the refined debiased lasso estimator is asymptotically equivalent to the MLE $\widetilde{\boldsymbol{\beta}}^{ML}$ in the sense that $\sqrt{n}(\boldsymbol{\widetilde\beta}^{DL}_j - \widetilde{\boldsymbol{\beta}}_j^{ML})=\op{1}$ for all $j$, our theoretical arguments also apply to multiple splitting with the MLE, using the lasso for model selection.
\cite{fei2021estimation} used a stronger partial orthogonality condition, where the signal variables are independent of the noise variables, to derive the asymptotic normality of their MLE-based multiple splitting estimator.
We further discuss the motivation behind Assumptions \ref{cond-ms1} and \ref{cond-ms2} and their relationship to the assumptions used by other work on sample splitting procedures in \ref{appendix:disc}, but provide a brief overview here.
Under Assumptions \ref{cond-subg}-\ref{cond-betamin} and the sparsity rates required in Theorem \ref{thm-ms} below, it can be shown that
\[
\sqrt{n}\boldsymbol{a}_S^T(\boldsymbol{\widehat\beta}_S - \boldsymbol{\beta}_S^0)
= \frac{(1-q)^{-1}}{\sqrt{n}}\sum_{i= 1}^n E\left [\boldsymbol{a}_{\widehat{S}}^T\boldsymbol{\Sigma}_{\boldsymbol{\beta}^0, \widehat{S}}^{-1}\boldsymbol{\widetilde{I}}_{\widehat{S}} \boldsymbol{v}_{i} | \mathcal{Z}\right ]\dot{\rho}_{\boldsymbol{\beta}^0}(y_i, \boldsymbol{x}_i) + \op{1}.
\]
Therefore, in order to prove asymptotic normality we need to control the randomness of the vector $\boldsymbol{a}_{\widehat{S}}^T\boldsymbol{\Sigma}_{\boldsymbol{\beta}^0, \widehat{S}}^{-1}\boldsymbol{\widetilde{I}}_{\widehat{S}}\boldsymbol{v}_{i}$ averaged across all sample splits that exclude the $i$th sample from model selection (i.e. $\boldsymbol{v}_i=1$).
The ideal case, when this average is deterministic, can be achieved under model selection consistency, so that $\widehat{S}$ is fixed at $S\cup S_0$ for each sample split, or if the set of covariates that are always included in the fitted model are independent of all remaining covariates, essentially giving $\boldsymbol{\Sigma}_{\boldsymbol{\beta}^0}^{-1}$ a block diagonal structure.
Either of these two conditions imply that our less restrictive assumptions hold, where we only require that the effect of always excluding a single sample from model selection is asymptotically negligible (Assumption \ref{cond-ms1}) and that the average converges in probability to a bounded random vector with moderate rate (Assumption \ref{cond-ms2}).
The variable screening properties and asymptotic normality of our estimators are presented in the following three theorems, with their proofs given in \ref{appendix:thm1} through \ref{appendix:thm3}, respectively.
Note that any variable selection procedure with the same screening properties as the lasso may be used in our sample splitting procedures.
\begin{theorem}\label{thm-lasso}
(Variable screening properties for the GLM lasso.)
For a choice of tuning parameter $\lambda\sim \sqrt{\log(p)/n}$ and under assumptions \ref{cond-subg}-\ref{cond-betamin}, the lasso estimator $\boldsymbol{\widetilde\beta}^\lambda$ and its selected model $\widehat{S} = \{j:\widetilde{\beta}_j^\lambda\neq 0\}$ satisfy
\[
P\left (\widehat{S}\supseteq S_0, |\widehat{S}|\leq ks_0 \right )\geq 1 - c_1/p^{c_2}
\]
for some positive constants $k, c_1, c_2$.
\end{theorem}
\begin{theorem}\label{thm-ss}
(Asymptotic normality for single sample splitting.) For the single split debiased lasso estimator $\boldsymbol{\widetilde\beta}^{DL}_{\widehat{S}}$ with both lasso tuning parameters of the order $\sqrt{\log(p)/n}$, $s_0\log(s_0)\sqrt{s_0/n} = \littleO{1}$, $|S|=\bigO{s_0}$, and under assumptions \ref{cond-subg}-\ref{cond-betamin},
$\boldsymbol{\widehat\Sigma}_{\boldsymbol{\widetilde\beta}^\lambda, \widehat{S}}$ is invertible with probability going to one, and for any $(p+1)$-vector $\boldsymbol{a}$ such that $\norm{\boldsymbol{a}}{2} = \norm{\boldsymbol{a}_{\widehat{S}}}{2}=1$, we have
\[
\frac{\sqrt{n_2}\boldsymbol{a}_{\widehat{S}}^T(\boldsymbol{\widetilde\beta}^{DL}_{\widehat{S}} - \boldsymbol{\beta}^0_{\widehat{S}})}{(\boldsymbol{a}_{\widehat{S}}^T\boldsymbol{\widehat\Theta}_{\boldsymbol{\widetilde\beta}^\lambda,\widehat{S}}\boldsymbol{a}_{\widehat{S}})^{1/2}}\rightarrow N(0,1)
\]
in distribution as $n\rightarrow\infty$.
\end{theorem}
\begin{theorem}\label{thm-ms}
(Asymptotic normality for multiple sample splitting.) For the multiple splitting debiased lasso estimator $\boldsymbol{\widehat\beta}_S$ with all lasso tuning parameters of the order $\sqrt{\log(p)/n}$, $s_0\log(s_0)\sqrt{s_0/n} = \littleO{1}$, $|S|=\bigO{s_0}$, and under assumptions \ref{cond-subg}-\ref{cond-ms2},
for any $(p+1)$-vector $\boldsymbol{a}$ such that $\norm{\boldsymbol{a}}{2} = \norm{\boldsymbol{a}_{S}}{2}=1$, we have
\[
\frac{\sqrt{n}\boldsymbol{a}_S^T(\boldsymbol{\widehat\beta}_S - \boldsymbol{\beta}^0_S)}{(\boldsymbol{\eta}_n^T\boldsymbol{\Sigma}_{\boldsymbol{\beta}^0}\boldsymbol{\eta}_n)^{1/2}}\rightarrow N(0,1)
\]
in distribution as $n\rightarrow\infty$.
\end{theorem}
\section{Simulations}
\label{s:simulations}
We apply the proposed estimating methods with sample splitting in several simulations settings with high dimensional covariates in order to assess the potential benefits of using the lasso for model selection and the debiased lasso in place of the MLE for making statistical inference, and of averaging estimates across multiple sample splits (MS) as opposed to using a single split (SS) estimator for a fixed set of coefficients.
Simulation results are all from logistic regression models, where the benefits of using the debiased lasso are particularly apparent.
This may be due to the numerical instability of the Hessian matrix of the negative log-likelihood $P_n\ddot{\rho}_{\boldsymbol{\beta}}(y_i, \boldsymbol{x}_i) = n^{-1}\sum_{i=1}^n \hat{p}(\boldsymbol{x}_i^T\boldsymbol{\beta})\left [1-\hat{p}(\boldsymbol{x}_i^T\boldsymbol{\beta})\right ]\boldsymbol{x}_i\boldsymbol{x}_i^T$, where $\hat{p}(\cdot)$ is the cdf of the standard logistic distribution.
Note $\hat{p}(\boldsymbol{x}_i^T\boldsymbol{\beta})$ will be close to zero or one for large coefficient values, which can result in a near-singular Hessian matrix.
Therefore the debiased lasso approach of performing a single-step maximization of the log-likelihood starting from an initial lasso estimator that is biased towards zero can help alleviate this issue.
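The conditioning issue is easy to reproduce. In the toy Python computation below (illustrative only), scaling up the coefficient vector pushes the fitted probabilities toward zero or one and inflates the condition number of the logistic Hessian.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
for scale in [0.5, 2.0, 8.0]:
    p = 1.0 / (1.0 + np.exp(-X @ (scale * np.ones(5))))
    W = p * (1.0 - p)                    # per-sample Hessian weights
    H = (X * W[:, None]).T @ X / len(X)
    print(scale, np.linalg.cond(H))      # condition number grows
\end{verbatim}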
In all simulations, $n=500$ samples are generated from a logistic regression model with $p=700$ covariates.
The covariates are generated from a $N(\boldsymbol{0}, \boldsymbol{\Sigma})$ distribution before being truncated at $\pm 3$.
The covariance matrix $\boldsymbol\Sigma$ has ones on the diagonal and either an AR(1) correlation structure $\boldsymbol\Sigma_{jk} = 0.5^{|j-k|}$ or a compound symmetry structure $\boldsymbol\Sigma_{jk} = 0.5$ $(j\neq k)$.
Further simulations that demonstrate the robustness of our procedures under higher covariate correlations are presented in \ref{appendix:sims}.
There are $s_0=6$ signal covariates, and their index set was randomly chosen.
The corresponding coefficient values are $\boldsymbol{\beta}^0_{S_0} = (-1.5, -1, -0.5, 0.5, 1, 1.5)^T$.
We consider all signal covariates together with two randomly chosen noise covariates. For each coefficient $\beta_j$, we assess the single and multiple splitting debiased lasso estimators fit on $\widehat{S}\cup\{j\}$, as well as the corresponding MLE-based estimators.
The splitting proportion is $q=0.5$.
We also provide results from the oracle MLE, fit on $S_0\cup \{j\}$, for reference, using either the entire sample or half of the sample.
Model selection is performed using lasso estimators with the values for $\lambda$ chosen by 10-fold cross validation.
The multiple splitting estimators use $B=1,000$ splits.
We use the R package glmnet for lasso estimation and the built-in glm function for MLE and debiased lasso estimation, using the control and start arguments to specify a single iteration starting at an initial lasso estimate for the latter.
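For concreteness, the data-generating process can be sketched as follows (our reconstruction; the intercept, taken to be zero here, and the exact truncation scheme are not stated in the text).
\begin{verbatim}
import numpy as np

def simulate(n=500, p=700, rho=0.5, structure="ar1", seed=0):
    rng = np.random.default_rng(seed)
    if structure == "ar1":
        idx = np.arange(p)
        Sigma = rho ** np.abs(np.subtract.outer(idx, idx))
    else:                                # compound symmetry
        Sigma = np.full((p, p), rho) + (1 - rho) * np.eye(p)
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    X = np.clip(X, -3.0, 3.0)            # truncation at +/- 3
    beta = np.zeros(p)
    S0 = rng.choice(p, size=6, replace=False)
    beta[S0] = [-1.5, -1.0, -0.5, 0.5, 1.0, 1.5]
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ beta))))
    return X, y, S0
\end{verbatim}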
The simulation results for the AR(1) correlation structure are summarized in Table \ref{table-ar1-5}.
The single split MLE exhibits a bias that greatly inflates the size of the estimated coefficients, and the magnitude of this bias increases with the signal strength.
In contrast, the single split debiased lasso estimator is approximately unbiased.
Averaging across multiple splits does not improve the bias of the MLE, which is also apparent to a lesser extent in the logistic regression simulations of \cite{fei2021estimation} summarized in their Table 3.
The MLE has substantially higher variability than the debiased lasso, even for the multiple splitting estimators.
The 95\% confidence interval coverage for noise variables is roughly the same across all considered methods.
For signal variables, however, the MLE has poor coverage that actually worsens after multiple splitting.
This issue appears to be particularly severe in logistic regression models and is mild in some other GLMs such as Poisson regression.
In contrast, the debiased lasso after multiple splitting performs well in achieving the nominal 95\% coverage for all considered coefficients.
All single split standard errors tend to be underestimated for signal variables, leading to slight undercoverage, while the multiple splitting standard errors are approximately unbiased.
Multiple splitting drastically lowers the variability of estimates from either the MLE or debiased lasso compared to a single split.
This produces a dramatic improvement in rejection rate for small coefficients.
Note that the rejection rates for the MLE estimators are inflated due to their bias, which partially offsets the wider confidence interval length.
In summary, for each pre-specified coefficient, the multiple splitting debiased lasso estimator provides the best performance in terms of bias, variability, and confidence interval coverage in this simulation setting, where the correlation between covariates decays rather quickly.
\begin{table}
\caption{Logistic regression simulation results for $n=500$, $p=700$, $s_0=6$, and AR(1) correlation structure with parameter 0.5. Selection results refer to the lasso in the single split estimator, where the average selected model size was 41. The fitted model for estimating each $\beta_j$ was $\widehat{S}\cup\{j\}$. Nominal confidence interval coverage probabilities are 0.95.} \label{table-ar1-5}
{\begin{tabular*}{\textwidth}{l l r r r r r r r r}
\hline & Estimator & $\beta_{489}$ & $\beta_{130}$ & $\beta_{680}$ & $\beta_{488}$ & $\beta_{476}$ & $\beta_{190}$ & $\beta_{510}$ & $\beta_{336}$\\ \hline
$\boldsymbol{\beta}^0_j$ & & -1.50 & -1.00 & -0.50 & 0.00 & 0.00 & 0.50 & 1.00 & 1.50\\
Selection Rate & & 1.00 & 1.00 & 0.64 & 0.13 & 0.07 & 0.69 & 1.00 & 1.00\\ \hline
Bias & Debiased SS & 0.01 & 0.06 & 0.00 & -0.04 & -0.01 & -0.03 & 0.00 & 0.00\\
& MLE SS & -0.52 & -0.30 & -0.24 & -0.06 & -0.01 & 0.18 & 0.37 & 0.54\\
& Debiased MS & 0.01 & 0.04 & 0.02 & -0.02 & 0.00 & -0.02 & -0.01 & 0.00\\
& MLE MS & -0.59 & -0.37 & -0.21 & -0.02 & -0.01 & 0.21 & 0.40 & 0.60\\
& Oracle $(n_2)$ & -0.09 & -0.05 & 0.00 & -0.01 & 0.00 & 0.04 & 0.07 & 0.09\\
& Oracle $(n)$ & -0.05 & -0.01 & -0.01 & -0.02 & 0.00 & 0.01 & 0.04 & 0.05\\ \hline
Coverage & Debiased SS & 0.84 & 0.89 & 0.92 & 0.94 & 0.95 & 0.88 & 0.87 & 0.89\\
& MLE SS & 0.73 & 0.86 & 0.84 & 0.90 & 0.92 & 0.86 & 0.77 & 0.69\\
& Debiased MS & 0.94 & 0.94 & 0.93 & 0.95 & 0.97 & 0.96 & 0.96 & 0.95\\
& MLE MS & 0.39 & 0.64 & 0.81 & 0.94 & 0.94 & 0.79 & 0.58 & 0.40\\
& Oracle $(n_2)$ & 0.95 & 0.96 & 0.94 & 0.95 & 0.96 & 0.94 & 0.96 & 0.97\\
& Oracle $(n)$ & 0.96 & 0.95 & 0.94 & 0.92 & 0.94 & 0.95 & 0.96 & 0.96\\ \hline
Rejection Rate & Debiased SS & 1.00 & 0.99 & 0.74 & 0.06 & 0.05 & 0.62 & 1.00 & 1.00\\
($H_0: \beta_j=0$) & MLE SS & 1.00 & 0.99 & 0.77 & 0.10 & 0.08 & 0.71 & 1.00 & 1.00\\
& Debiased MS & 1.00 & 1.00 & 0.92 & 0.05 & 0.03 & 0.94 & 1.00 & 1.00\\
& MLE MS & 1.00 & 1.00 & 0.94 & 0.06 & 0.06 & 0.95 & 1.00 & 1.00\\
& Oracle $(n_2)$ & 1.00 & 1.00 & 0.74 & 0.05 & 0.04 & 0.80 & 1.00 & 1.00\\
& Oracle $(n)$ & 1.00 & 1.00 & 0.97 & 0.09 & 0.06 & 0.98 & 1.00 & 1.00\\ \hline
Standard Error & Debiased SS & 0.22 & 0.20 & 0.19 & 0.21 & 0.18 & 0.19 & 0.20 & 0.22\\
& MLE SS & 0.37 & 0.30 & 0.28 & 0.29 & 0.25 & 0.27 & 0.31 & 0.37\\
& Debiased MS & 0.18 & 0.15 & 0.14 & 0.14 & 0.12 & 0.14 & 0.16 & 0.18\\
& MLE MS & 0.27 & 0.22 & 0.20 & 0.22 & 0.19 & 0.20 & 0.22 & 0.27\\
& Oracle $(n_2)$ & 0.26 & 0.22 & 0.20 & 0.22 & 0.19 & 0.20 & 0.22 & 0.26\\
& Oracle $(n)$ & 0.18 & 0.15 & 0.14 & 0.15 & 0.13 & 0.14 & 0.15 & 0.18\\ \hline
Empirical SD & Debiased SS & 0.30 & 0.23 & 0.21 & 0.22 & 0.18 & 0.24 & 0.26 & 0.29\\
& MLE SS & 0.58 & 0.41 & 0.36 & 0.38 & 0.31 & 0.40 & 0.48 & 0.56\\
& Debiased MS & 0.17 & 0.14 & 0.14 & 0.14 & 0.12 & 0.14 & 0.14 & 0.18\\
& MLE MS & 0.29 & 0.22 & 0.22 & 0.23 & 0.20 & 0.21 & 0.24 & 0.30\\
& Oracle $(n_2)$ & 0.28 & 0.22 & 0.20 & 0.22 & 0.19 & 0.20 & 0.22 & 0.26\\
& Oracle $(n)$ & 0.17 & 0.14 & 0.14 & 0.16 & 0.14 & 0.14 & 0.14 & 0.17\\ \hline
\end{tabular*}}
\bigskip
\end{table}
The simulation results for compound symmetry correlation structure are summarized in Table \ref{table-cs-5}.
The same trends concerning the bias and large variability of the MLE seen in Table \ref{table-ar1-5} are also present in this setting.
Again the debiased lasso estimators are approximately unbiased, and the multiple splitting debiased lasso estimator has approximate 95\% coverage for each coefficient.
Multiple splitting again greatly reduces the variability of single split estimators, resulting in narrower confidence intervals with more power for detecting small coefficient values.
\begin{table}
\caption{Logistic regression simulation results for $n=500$, $p=700$, $s_0=6$, and compound symmetry correlation structure with parameter 0.5. Selection results refer to the lasso in the single split estimator, where the average selected model size was 37. The fitted model for estimating each $\beta_j$ was $\widehat{S}\cup\{j\}$. Nominal confidence interval coverage probabilities are 0.95.} \label{table-cs-5}
{\begin{tabular*}{\textwidth}{l l r r r r r r r r}
\hline & Estimator & $\beta_{489}$ & $\beta_{130}$ & $\beta_{680}$ & $\beta_{488}$ & $\beta_{476}$ & $\beta_{190}$ & $\beta_{510}$ & $\beta_{336}$\\ \hline
$\boldsymbol{\beta}^0_j$ & & -1.50 & -1.00 & -0.50 & 0.00 & 0.00 & 0.50 & 1.00 & 1.50\\
Selection Rate & & 1.00 & 0.96 & 0.51 & 0.07 & 0.05 & 0.44 & 0.98 & 1.00\\ \hline
Bias & Debiased SS & 0.01 & 0.04 & 0.04 & -0.02 & 0.00 & 0.00 & -0.02 & -0.05\\
& MLE SS & -0.42 & -0.27 & -0.13 & -0.02 & -0.01 & 0.19 & 0.30 & 0.37\\
& Debiased MS & 0.02 & 0.03 & 0.03 & -0.01 & 0.00 & -0.02 & -0.02 & -0.02\\
& MLE MS & -0.39 & -0.26 & -0.14 & -0.01 & 0.00 & 0.15 & 0.26 & 0.39\\
& Oracle $(n_2)$ & -0.05 & -0.04 & -0.02 & 0.00 & 0.00 & 0.01 & 0.04 & 0.07\\
& Oracle $(n)$ & -0.03 & -0.02 & -0.02 & -0.01 & 0.00 & 0.02 & 0.02 & 0.03\\ \hline
Coverage & Debiased SS & 0.88 & 0.92 & 0.92 & 0.94 & 0.96 & 0.92 & 0.90 & 0.89\\
& MLE SS & 0.80 & 0.89 & 0.91 & 0.90 & 0.93 & 0.86 & 0.85 & 0.82\\
& Debiased MS & 0.94 & 0.96 & 0.94 & 0.96 & 0.96 & 0.94 & 0.94 & 0.96\\
& MLE MS & 0.68 & 0.84 & 0.90 & 0.96 & 0.94 & 0.92 & 0.81 & 0.76\\
& Oracle $(n_2)$ & 0.94 & 0.94 & 0.95 & 0.91 & 0.94 & 0.92 & 0.96 & 0.94\\
& Oracle $(n)$ & 0.96 & 0.98 & 0.96 & 0.92 & 0.92 & 0.96 & 0.96 & 0.98\\ \hline
Rejection Rate & Debiased SS & 1.00 & 0.99 & 0.49 & 0.06 & 0.04 & 0.57 & 0.98 & 0.99\\
($H_0: \beta_j=0$) & MLE SS & 1.00 & 1.00 & 0.56 & 0.10 & 0.07 & 0.63 & 0.98 & 0.99\\
& Debiased MS & 1.00 & 1.00 & 0.82 & 0.04 & 0.04 & 0.87 & 1.00 & 1.00\\
& MLE MS & 1.00 & 1.00 & 0.84 & 0.04 & 0.06 & 0.88 & 1.00 & 1.00\\
& Oracle $(n_2)$ & 1.00 & 1.00 & 0.66 & 0.09 & 0.06 & 0.66 & 1.00 & 1.00\\
& Oracle $(n)$ & 1.00 & 1.00 & 0.92 & 0.07 & 0.07 & 0.92 & 1.00 & 1.00\\ \hline
Standard Error & Debiased SS & 0.25 & 0.23 & 0.23 & 0.23 & 0.23 & 0.23 & 0.23 & 0.25\\
& MLE SS & 0.36 & 0.32 & 0.30 & 0.28 & 0.29 & 0.30 & 0.32 & 0.36\\
& Debiased MS & 0.19 & 0.17 & 0.16 & 0.15 & 0.15 & 0.16 & 0.17 & 0.19\\
& MLE MS & 0.27 & 0.23 & 0.21 & 0.21 & 0.21 & 0.21 & 0.23 & 0.27\\
& Oracle $(n_2)$ & 0.27 & 0.24 & 0.22 & 0.22 & 0.22 & 0.22 & 0.24 & 0.27\\
& Oracle $(n)$ & 0.18 & 0.17 & 0.15 & 0.15 & 0.15 & 0.15 & 0.17 & 0.18\\ \hline
Empirical SD & Debiased SS & 0.29 & 0.25 & 0.26 & 0.22 & 0.24 & 0.26 & 0.27 & 0.29\\
& MLE SS & 0.56 & 0.43 & 0.39 & 0.34 & 0.38 & 0.40 & 0.50 & 0.53\\
& Debiased MS & 0.18 & 0.16 & 0.16 & 0.15 & 0.15 & 0.16 & 0.17 & 0.18\\
& MLE MS & 0.28 & 0.24 & 0.22 & 0.22 & 0.22 & 0.21 & 0.24 & 0.27\\
& Oracle $(n_2)$ & 0.28 & 0.25 & 0.23 & 0.25 & 0.23 & 0.23 & 0.25 & 0.29\\
& Oracle $(n)$ & 0.18 & 0.15 & 0.16 & 0.16 & 0.16 & 0.15 & 0.16 & 0.17\\ \hline
\end{tabular*}}
\bigskip
\end{table}
Lastly, we assess the performance of the post-model selection procedure that does not pre-specify any covariate of interest.
This simulation setting is identical to that of Table \ref{table-ar1-5} with AR(1) correlation structure, but the estimates are now all based on a single model fit on only the selected covariates.
For covariates that are not selected, their coefficient estimate and standard error are both set to zero.
The oracle MLE results we present are estimated on $S_0$ instead of $S_0\cup \{j\}$.
\begin{table}
\caption{Logistic regression simulation results for $n=500$, $p=700$, $s_0=6$, and AR(1) correlation structure with parameter 0.5. Selection results refer to the lasso in the single split estimator, where the average selected model size was 41. The fitted model for estimating each $\beta_j$ was $\widehat{S}$. Nominal confidence interval coverage probabilities are 0.95.} \label{table-postselection-ar1-5}
{\begin{tabular*}{\textwidth}{l l r r r r r r r r}
\hline & Estimator & $\beta_{489}$ & $\beta_{130}$ & $\beta_{680}$ & $\beta_{488}$ & $\beta_{476}$ & $\beta_{190}$ & $\beta_{510}$ & $\beta_{336}$\\ \hline
$\boldsymbol{\beta}^0_j$ & & -1.50 & -1.00 & -0.50 & 0.00 & 0.00 & 0.50 & 1.00 & 1.50\\
Selection Rate & & 1.00 & 1.00 & 0.64 & 0.13 & 0.07 & 0.69 & 1.00 & 1.00\\ \hline
Bias & Debiased SS & 0.00 & 0.06 & 0.18 & -0.01 & 0.00 & -0.18 & 0.00 & 0.00\\
& MLE SS & -0.66 & -0.35 & 0.02 & -0.02 & 0.01 & -0.03 & 0.44 & 0.66\\
& Oracle $(n_2)$ & -0.09 & -0.05 & 0.00 & 0.00 & 0.00 & 0.04 & 0.07 & 0.09\\ \hline
Coverage & Debiased SS & 0.83 & 0.90 & 0.57 & 0.99 & 1.00 & 0.59 & 0.87 & 0.89\\
& MLE SS & 0.70 & 0.83 & 0.51 & 0.98 & 1.00 & 0.57 & 0.75 & 0.66\\
& Oracle $(n_2)$ & 0.95 & 0.96 & 0.94 & 1.00 & 1.00 & 0.94 & 0.96 & 0.97\\ \hline
Rejection Rate & Debiased SS & 1.00 & 0.99 & 0.47 & 0.01 & 0.00 & 0.42 & 1.00 & 1.00\\
($H_0: \beta_j=0$) & MLE SS & 1.00 & 0.99 & 0.49 & 0.02 & 0.00 & 0.45 & 1.00 & 1.00\\
& Oracle $(n_2)$ & 1.00 & 1.00 & 0.74 & 0.00 & 0.00 & 0.80 & 1.00 & 1.00\\ \hline
Standard Error & Debiased SS & 0.22 & 0.20 & 0.12 & 0.03 & 0.01 & 0.13 & 0.20 & 0.22\\
& MLE SS & 0.40 & 0.32 & 0.18 & 0.04 & 0.02 & 0.20 & 0.33 & 0.40\\
& Oracle $(n_2)$ & 0.26 & 0.22 & 0.20 & 0.00 & 0.00 & 0.20 & 0.22 & 0.26\\ \hline
Empirical SD & Debiased SS & 0.32 & 0.23 & 0.30 & 0.10 & 0.04 & 0.30 & 0.26 & 0.29\\
& MLE SS & 0.83 & 0.49 & 0.47 & 0.18 & 0.07 & 0.50 & 0.55 & 0.75\\
& Oracle $(n_2)$ & 0.28 & 0.22 & 0.20 & 0.00 & 0.00 & 0.20 & 0.22 & 0.26\\ \hline
\end{tabular*}}
\bigskip
\end{table}
The simulation results are summarized in Table \ref{table-postselection-ar1-5}.
For small coefficients, there is poor confidence interval coverage across all non-oracle estimators due to the randomness associated with their inclusion in each model.
For larger coefficients that are nearly always selected by the lasso, the performance resembles that of Table \ref{table-ar1-5}.
These results demonstrate the importance of only analyzing coefficients that are included in the fitted model.
For larger sample sizes, the lasso selection performance can improve substantially, leading to improved coverage and bias correction of the debiased lasso after model selection. See \ref{appendix:sims} for additional simulation results with a larger sample size and with higher correlations among covariates.
\section{Real Data Example: The Mid-South Tobacco Case-Control Study}
\label{s:analysis}
We apply the proposed method to a dataset of single nucleotide polymorphisms (SNPs) from a sample of African-American participants in the Mid-South Tobacco Case-Control study population \citep{jiang2019exome, xu2020prediction, han2022identification} to assess genetic risk factors for nicotine dependence.
The dataset is publicly available with GEO accession GSE148375.
It originally contained 242,901 SNPs measured across 3399 individuals.
We excluded SNPs that were insertions or deletions, had a call rate less than 95\%, departed from Hardy-Weinberg equilibrium ($p<10^{-6}$), or had a minor allele frequency of less than 0.01.
Subjects missing more than 1\% of the remaining SNPs were excluded, and missing values were then imputed using the observed allele frequencies for each SNP.
After data cleaning, the covariates consist of 32,557 SNPs as well as gender and age. The response variable is a binary indicator of smoking status (1=smoker), where 1607 of the 3317 participants are smokers.
Prior research has identified several genetic regions with SNPs that are associated with nicotine dependence, including 15q25.1 (CHRNA5/A3/B4), 8p11.21 (CHRNB3/A6), and 19q13.2 (CYP2A6/A7/B6, EGLN2, RAB4B, and NUMBL).
See, for example, \cite{yang2016converging} and the references therein.
We choose the target covariates to be the 16 measured SNPs that lie in these three regions, as well as the demographic variables.
The results from our multiple splitting debiased lasso estimator are presented in Table \ref{table-data-analysis}.
After adjusting for multiple testing using the Holm procedure \citep{holm1979simple}, none of the SNPs have a significant association with smoking status.
The SNP rs3733829 has the largest coefficient estimate, with an estimated 37\% increase in the odds of being a smoker and unadjusted 95\% confidence interval (9\%, 73\%), controlling for all other covariates.
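These quantities follow from exponentiating the estimate and its normal-theory interval: $e^{0.32}\approx 1.37$ and $e^{0.32\pm 1.96\times 0.12}\approx(1.09,\,1.73)$, matching Table \ref{table-data-analysis} up to rounding.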
The association of rs3733829 with increased cigarette consumption has been discussed by \cite{tobacco2010genome} and \cite{bloom2014variants}.
\begin{table}
\caption{Multiple splitting debiased lasso estimates for a logistic regression model of smoking status on standardized age, gender, and 32,557 SNPs, based on a case-control sample of 3317 individuals. There are 16 measured SNPs that lie in the regions of interest after data cleaning. P-values are adjusted for multiple testing using the Holm procedure and truncated at one, while confidence intervals are not adjusted.} \label{table-data-analysis}
{\begin{tabular*}{\textwidth}{l r r r r r r r r r r}
\hline
Covariate & Gene & $\hat\beta$ & SE & Holm P-value & Odds Ratio (95\% CI)\\
\hline
Intercept & & -0.01 & 0.18 & 1.00 & 0.99 (0.70, 1.41)\\
Age & & 0.03 & 0.03 & 1.00 & 1.03 (0.96, 1.09)\\
Male & & 0.45 & 0.07 & $1.76\times 10^{-10}$ & 1.58 (1.38, 1.79)\\
rs35327613 & CHRNB3 & -0.06 & 0.11 & 1.00 & 0.94 (0.76, 1.17)\\
rs61740655 & CHRNA5 & -0.10 & 0.13 & 1.00 & 0.90 (0.70, 1.17)\\
rs79109919 & CHRNA5 & -0.06 & 0.16 & 1.00 & 0.94 (0.68, 1.29)\\
rs16969968 & CHRNA5 & -0.09 & 0.11 & 1.00 & 0.91 (0.74, 1.13)\\
rs938682 & CHRNA3 & 0.10 & 0.10 & 1.00 & 1.11 (0.91, 1.35)\\
rs8042374 & CHRNA3 & -0.18 & 0.12 & 1.00 & 0.83 (0.66, 1.05)\\
rs61737502 & CHRNB4 & 0.04 & 0.10 & 1.00 & 1.04 (0.85, 1.28)\\
rs56218866 & CHRNB4 & -0.10 & 0.14 & 1.00 & 0.91 (0.68, 1.21)\\
rs950776 & CHRNB4 & -0.02 & 0.08 & 1.00 & 0.98 (0.83, 1.15)\\
rs12440298 & CHRNB4 & -0.03 & 0.06 & 1.00 & 0.97 (0.87, 1.09)\\
rs3865452 & COQ8B & -0.06 & 0.06 & 1.00 & 0.94 (0.84, 1.04)\\
rs3733829 & EGLN2 & 0.32 & 0.12 & 0.13 & 1.37 (1.09, 1.73)\\
rs75152309 & CYP2A7 & 0.07 & 0.09 & 1.00 & 1.07 (0.90, 1.28)\\
rs73032311 & CYP2A7 & -0.09 & 0.08 & 1.00 & 0.91 (0.78, 1.07)\\
rs28399499 & CYP2B6 & 0.00 & 0.10 & 1.00 & 1.00 (0.82, 1.22)\\
rs7260329 & CYP2B6 & -0.16 & 0.08 & 0.78 & 0.86 (0.73, 1.00)
\end{tabular*}}
\bigskip
\end{table}
\section*{Acknowledgements}
This work was supported in part by NIH Grants R01AG056764 and RF1AG075107, and by NSF Grant DMS-1915711.
\bibliographystyle{apalike}
|
{
"arxiv_id": "2302.14165",
"language": "en",
"timestamp": "2023-03-02T02:05:55",
"url": "https://arxiv.org/abs/2302.14165",
"yymm": "2302"
} | \section{Introduction}
As machine learning (ML) is increasingly used in high-stakes decision-making, such as lending~\cite{siddiqiCreditRiskScorecards2013}, hiring~\cite{liemPsychologyMeetsMachine2018}, and college admissions~\cite{watersGRADEMachineLearning2014}, there has been a call for greater transparency and increased opportunities for algorithmic recourse~\cite{wachterCounterfactualExplanationsOpening2017}.
Algorithmic recourse aims to help those impacted by ML systems learn about the decision rules used~\cite{selbstIntuitiveAppealExplainable2018}, and provide suggestions for \textit{actions} to change decision outcome in the future~\cite{ustunActionableRecourseLinear2019}.
This often involves generating counterfactual (CF) examples,
which suggest minimal changes in a few features that would have led to the desired decision outcome~\cite{wachterCounterfactualExplanationsOpening2017}, such as ``if you had decreased your requested loan amount by \$9k and changed your home ownership from renting to mortgage, your loan application would have been approved.''~(\autoref{fig:crown}\figpart{A})
For such approaches to be useful, it is necessary for the suggested actions to be \textit{actionable}---realistic actions that users can appreciate and follow in their real-life circumstances. In the example above, changing home ownership status would arguably not be an actionable suggestion for most loan applicants.
To provide actionable recourse, recent work proposes techniques such as generating concise CF examples~\cite{leGRACEGeneratingConcise2020}, creating a diverse set of CF examples~\cite{mothilalExplainingMachineLearning2020,russellEfficientSearchDiverse2019}, and grouping features into different actionability categories~\cite{karimiAlgorithmicRecourseCounterfactual2021}.
These approaches often rely on the underlying assumption that ML developers can measure and predict which CF examples are actionable for all users.
However, the actionability of recourse is ultimately subjective and varies from one user to another~\cite{vermaCounterfactualExplanationsMachine2020,barocasHiddenAssumptionsCounterfactual2020}, or even for a single user at different times~\cite{zahediUnderstandingUserPreferences2019,lombrozoExplanatoryPreferencesShape2016}.
Therefore, there is a pressing need to capture and integrate user preferences into algorithmic recourse~\cite{kirfelWhatIfHow2021,barocasHiddenAssumptionsCounterfactual2020}.
\textsc{\textsf{GAM Coach}}{} aims to take a user-centered approach~(\autoref{fig:crown}\figpart{B--C}) to fill this critical research gap.
In this work, we \textbf{contribute}: \looseness=-1
\aptLtoX[graphic=no,type=html]{
\begin{itemize}
\item \textbf{\textsc{\textsf{GAM Coach}}{}, the first interactive algorithmic recourse tool that empowers end users} to specify their recourse \textit{preferences}, such as difficulty and acceptable range for changing a feature, and iteratively \textit{fine-tune} actionable recourse plans~(\autoref{fig:teaser}).
With an exploratory interface design~\cite{shneidermanBridgingGapEthics2020}, our tool helps users understand the ML model behaviors by experimenting with hypothetical input values and inspecting their effects on the model outcomes.
Our tool advances over existing interactive ML tools~\cite{gomezViCEVisualCounterfactual2020,wexlerWhatIfToolInteractive2019}, overcoming unique design challenges identified from a literature review of recent algorithmic recourse work~(\autoref{sec:goal}, \autoref{sec:ui}).
\item \textbf{Novel adaptation of integer linear programming to generate CF examples.}
To operationalize interactive recourse, we ground our research in generalized additive models (GAMs)~\cite{nelderGeneralizedLinearModels1972,caruanaIntelligibleModelsHealthCare2015}, a popular class of models that performs competitively to other state-of-the-art models yet has a transparent and simple structure~\cite{wangPursuitInterpretableFair2020,changHowInterpretableTrustworthy2021, weldChallengeCraftingIntelligible2019,noriInterpretMLUnifiedFramework2019}.
GAMs enable end users to probe model behaviors with hypothetical inputs in real time directly in web browsers.
Adapting integer linear programming, we propose an efficient and flexible method to generate optimal CF examples for GAM-based classifiers and regressors with continuous and categorical features and pairwise feature interactions~\cite{louAccurateIntelligibleModels2013}~(\autoref{sec:method}).
\item \textbf{Design lessons distilled from a user study with log analysis.}
We conducted an online user study with 41 Amazon Mechanical Turk workers to evaluate \textsc{\textsf{GAM Coach}}{} and investigate how everyday users would use an interactive algorithmic recourse tool.
Through analyzing participants' interaction logs and subjective ratings in a hypothetical lending scenario, our study highlights that \textsc{\textsf{GAM Coach}}{} is usable and useful, and users prefer personalized recourse plans over generic plans.
We discuss the \textit{characteristics} of users' satisfactory recourse plans, \textit{approaches} users take to discover them, and \textit{design lessons} for future interactive recourse tools.
We also provide empirical evidence that with transparency, everyday users can discover and are often puzzled by counterintuitive patterns in ML models~(\autoref{sec:user}).
\item \textbf{An open-source, web-based implementation} that broadens people's access to developing and using interactive algorithmic recourse tools.
We implement our CF generation method in both Python and JavaScript, enabling future researchers to use it on diverse platforms.
We develop \textsc{\textsf{GAM Coach}}{} with modern web technologies such as WebAssembly, so that anyone can access our tool using their web browsers without the need for installation or a dedicated backend server.
We open-source\footnote{\textsc{\textsf{GAM Coach}}{} code: \link{https://github.com/poloclub/gam-coach}} our CF generation library and \textsc{\textsf{GAM Coach}}{} system with comprehensive documentation\footnote{\textsc{\textsf{GAM Coach}}{} documentation: \link{https://poloclub.github.io/gam-coach/docs}}~(\autoref{sec:ui:implement}).
For a demo video of \textsc{\textsf{GAM Coach}}{}, visit \link{https://youtu.be/ubacP34H9XE}.
\end{itemize}
}{
\begin{itemize}[topsep=5pt, itemsep=0mm, parsep=1mm, leftmargin=9pt]
\item \textbf{\textsc{\textsf{GAM Coach}}{}, the first interactive algorithmic recourse tool that empowers end users} to specify their recourse \textit{preferences}, such as difficulty and acceptable range for changing a feature, and iteratively \textit{fine-tune} actionable recourse plans~(\autoref{fig:teaser}).
With an exploratory interface design~\cite{shneidermanBridgingGapEthics2020}, our tool helps users understand the ML model behaviors by experimenting with hypothetical input values and inspecting their effects on the model outcomes.
Our tool advances over existing interactive ML tools~\cite{gomezViCEVisualCounterfactual2020,wexlerWhatIfToolInteractive2019}, overcoming unique design challenges identified from a literature review of recent algorithmic recourse work~(\autoref{sec:goal}, \autoref{sec:ui}).
\item \textbf{Novel adaptation of integer linear programming to generate CF examples.}
To operationalize interactive recourse, we ground our research in generalized additive models (GAMs)~\cite{nelderGeneralizedLinearModels1972,caruanaIntelligibleModelsHealthCare2015}, a popular class of models that performs competitively to other state-of-the-art models yet has a transparent and simple structure~\cite{wangPursuitInterpretableFair2020,changHowInterpretableTrustworthy2021, weldChallengeCraftingIntelligible2019,noriInterpretMLUnifiedFramework2019}.
GAMs enable end users to probe model behaviors with hypothetical inputs in real time directly in web browsers.
Adapting integer linear programming, we propose an efficient and flexible method to generate optimal CF examples for GAM-based classifiers and regressors with continuous and categorical features and pairwise feature interactions~\cite{louAccurateIntelligibleModels2013}~(\autoref{sec:method}).
\item \textbf{Design lessons distilled from a user study with log analysis.}
We conducted an online user study with 41 Amazon Mechanical Turk workers to evaluate \textsc{\textsf{GAM Coach}}{} and investigate how everyday users would use an interactive algorithmic recourse tool.
Through analyzing participants' interaction logs and subjective ratings in a hypothetical lending scenario, our study highlights that \textsc{\textsf{GAM Coach}}{} is usable and useful, and users prefer personalized recourse plans over generic plans.
We discuss the \textit{characteristics} of users' satisfactory recourse plans, \textit{approaches} users take to discover them, and \textit{design lessons} for future interactive recourse tools.
We also provide empirical evidence that with transparency, everyday users can discover and are often puzzled by counterintuitive patterns in ML models~(\autoref{sec:user}).
\item \textbf{An open-source, web-based implementation} that broadens people's access to developing and using interactive algorithmic recourse tools.
We implement our CF generation method in both Python and JavaScript, enabling future researchers to use it on diverse platforms.
We develop \textsc{\textsf{GAM Coach}}{} with modern web technologies such as WebAssembly, so that anyone can access our tool using their web browsers without the need for installation or a dedicated backend server.
We open-source\footnote{\textsc{\textsf{GAM Coach}}{} code: \link{https://github.com/poloclub/gam-coach}} our CF generation library and \textsc{\textsf{GAM Coach}}{} system with comprehensive documentation\footnote{\textsc{\textsf{GAM Coach}}{} documentation: \link{https://poloclub.github.io/gam-coach/docs}}~(\autoref{sec:ui:implement}).
For a demo video of \textsc{\textsf{GAM Coach}}{}, visit \link{https://youtu.be/ubacP34H9XE}.
\end{itemize}
}
\begin{figure}[tb]
\includegraphics[width=\linewidth]{figures/crown.pdf}
\caption[]{
\textbf{\textsc{\textsf{GAM Coach}}{} enables end users to iteratively fine-tune recourse plans.}
\textbf{(A)} If a user finds the initial generic plan less actionable,
\textbf{(B)} they can specify their recourse preferences through simple interactions.
\textbf{(C)} Our tool will then generate tailored plans that reflect the user's preferences.
}
\Description{
A flow chart of three components labeled A, B, and C.
Component A shows a generic plan with loan amount decreasing from 35k to 26k and home ownership changing from rent to mortgage.
Component B shows user preference that the acceptable range of loan amount is greater or equal to 30k and the difficulty of home ownership is hard to change.
Component C shows a tailored plan with loan amount decreasing from 35k to 30k and FICO score increasing from 672 to 717.
Component A points to component B.
Component B and C point to each other.
}
\label{fig:crown}
\end{figure}
\noindent To design and evaluate a prospective interface~\cite{shneidermanBridgingGapEthics2020} for interactive algorithmic recourse, we situate \textsc{\textsf{GAM Coach}}{} in loan application scenarios.
However, we caution that adapting \textsc{\textsf{GAM Coach}}{} for real lending settings would require further research with financial and legal experts as well as people who would be impacted by the system.
Our goal is for this work to serve as a foundation for the design of future user-centered recourse and interpretable ML tools.
%
\section{Related Work}
\subsection{Algorithmic Recourse}
\label{sec:related:recourse}
Algorithmic recourse aims to design techniques that provide those impacted by ML systems with actionable feedback about how to alter the outcome of ML models.
Following the approach popularized by~\citet{wachterCounterfactualExplanationsOpening2017}, researchers typically generate this actionable feedback by creating CF examples.
Here, a CF example represents a recourse plan that contains minimal changes to the original input but leads to a different model prediction~\cite{karimiSurveyAlgorithmicRecourse2021,ustunActionableRecourseLinear2019}.
For example, a bank that uses ML models to inform loan application decisions can provide a rejected loan applicant with a recourse plan that suggests the applicant increase their annual income by \$5k so that they can obtain a loan approval.
CF examples not only inform end users about the key features contributing to the decision, but also provide suggestions that end users can act on to obtain the desired outcome~\cite{ustunActionableRecourseLinear2019}.
Researchers have developed various methods to generate CF examples, such as casting it as an optimization problem~\cite[e.g.,][]{cuiOptimalActionExtraction2015, russellEfficientSearchDiverse2019, ustunActionableRecourseLinear2019, kanamoriDACEDistributionAwareCounterfactual2020, wachterCounterfactualExplanationsOpening2017, mohammadiScalingGuaranteesNearest2021},
searching through similar samples~\cite[e.g.,][]{goyalCounterfactualVisualExplanations2019, keaneGoodCounterfactualsWhere2020, delaneyInstancebasedCounterfactualExplanations2021, vanlooverenInterpretableCounterfactualExplanations2020, schleichGeCoQualityCounterfactual2021},
and developing generative models~\cite[e.g.,][]{kennyGeneratingPlausibleCounterfactual2021, dhurandharExplanationsBasedMissing2018, joshiRealisticIndividualRecourse2019, singlaExplanationProgressiveExaggeration2020}.
It is challenging to generate helpful CF examples in practice.
Besides making minimal changes, a helpful CF example should also be \textit{actionable} for the end user~\cite{ustunActionableRecourseLinear2019, keaneIfOnlyWe2021}.
To generate actionable recourse plans, recent research includes proposals to find concise CF examples~\cite{leGRACEGeneratingConcise2020}, consider causality~\cite{karimiAlgorithmicRecourseCounterfactual2021,mahajanPreservingCausalConstraints2020, karimiAlgorithmicRecourseImperfect2020}, present diverse plans~\cite{mothilalExplainingMachineLearning2020,russellEfficientSearchDiverse2019}, and assign features with different actionability scores~\cite{karimiAlgorithmicRecourseCounterfactual2021}.
However, the actionability of recourse is ultimately subjective and varies among end users~\cite{vermaCounterfactualExplanationsMachine2020,kirfelWhatIfHow2021,zahediUnderstandingUserPreferences2019,lombrozoExplanatoryPreferencesShape2016}.
To restore users' autonomy with CF examples, some researchers explore the potential of interactive tools.
For example, Prospector~\cite{krauseInteractingPredictionsVisual2016}, What-If Tool~\cite{wexlerWhatIfToolInteractive2019}, Polyjuice~\cite{wuPolyjuiceGeneratingCounterfactuals2021}, and AdViCE~\cite{gomezAdViCEAggregatedVisual2021} leverage interactive visualizations to help ML developers debug models with CF examples.
Context Sight~\cite{yuanContextSightModel2022} allows ML developers to analyze model errors by customizing the acceptable feature range and desired number of changes in CF examples.
CEB~\cite{myersRevealingNeuralNetwork2020} interactively presents CF examples to help non-experts understand neural networks.
In comparison, \textsc{\textsf{GAM Coach}}{} aims to empower \textit{end users} to discover \textit{actionable} strategies to alter undesirable ML decisions.
DECE~\cite{chengDECEDecisionExplorer2021} is a visual analytics tool designed to help ML developers and end users interpret neural network predictions with CF examples.
It allows users to customize CF examples by specifying acceptable feature ranges.
In comparison, while the interface for \textsc{\textsf{GAM Coach}}{} is model agnostic, the recourse generation technique it employs is tailored to GAMs, a different model family from the neural networks DECE targets; moreover, our tool focuses especially on end users without an ML background.
We evaluate \textsc{\textsf{GAM Coach}}{} through an observational log study with 41 crowdworkers, while DECE is evaluated through three expert interviews.
These evaluations provide complementary viewpoints and insights into how interactive recourse tools may be used in practice.
Possibly closest in spirit to our work is ViCE~\cite{gomezViCEVisualCounterfactual2020}, an interactive visualization tool that generates CF examples on end users' selected continuous features.
In contrast, \textsc{\textsf{GAM Coach}}{}---which supports both continuous and categorical features, as well as their pairwise interactions---allows end users to specify a much wider range of recourse preferences including feature difficulty, acceptable range, and the number of features to change. Our tool then generates \textit{optimal} and \textit{diverse} CF examples meeting specified preferences.
\subsection{Interactive Tools for Interpretable ML}
Besides CF explanations, researchers have developed interactive tools to help different ML stakeholders interpret ML models~\cite[e.g.,][]{wangTimberTrekExploringCurating2022, hohmanSUMMITScalingDeep2019,kahngActiVisVisualExploration2018,pezzottiDeepEyesProgressiveVisual2018}.
In particular, the simple structure and high performance of GAMs have attracted many researchers to use this model family to explore the role of interactivity in interpretable ML.
For example, Gamut~\cite{hohmanGamutDesignProbe2019} provides both global and local explanations by visualizing the shape functions in GAMs.
Similarly, TeleGam~\cite{hohmanTeleGamCombiningVisualization2019} helps users understand GAM predictions by combining both graphical and textual explanations.
GAM Changer~\cite{wangInterpretabilityThenWhat2022} lets users edit GAM model parameters through interactive visualization.
However, the target users of these tools are ML experts, such as ML researchers and model developers, or domain experts who need to vet and correct models before deployment.
In comparison, \textsc{\textsf{GAM Coach}}{} targets people who are impacted by ML models and who are less knowledgeable about ML and domain-specific concepts~\cite{sureshExpertiseRolesFramework2021}.
There is an increasing body of research in developing interactive systems to help \textit{non-experts} interact with ML models.
The main goal of these tools is to educate non-experts about the underlying mechanisms of ML models.
For example, Teachable Machine~\cite{carneyTeachableMachineApproachable2020} helps users learn about basic ML concepts through interactive demos.
Tensorflow Playground~\cite{smilkovDirectManipulationVisualizationDeep2017}, GAN Lab~\cite{kahngGANLabUnderstanding2019}, and CNN Explainer~\cite{wangCNNExplainerLearning2020} use interactive visualizations to help novices learn about the underlying mechanisms of neural networks, generative adversarial networks, and convolutional neural networks, respectively.
In contrast, instead of educating non-experts on the technical inner workings of ML models, we focus on helping non-experts who are impacted by ML models understand why a model makes a particular decision and what actions they can take to alter that decision.
%
\section{Design Goals}
\label{sec:goal}
Our goal is to design and develop an interactive, visual experimentation tool that respects end users' autonomy in algorithmic recourse, helping them discover and fine-tune recourse plans that reflect their preferences and needs.
We identify five main design goals of \textsc{\textsf{GAM Coach}}{} through synthesizing the trends and limitations of traditional algorithmic recourse systems~\cite[e.g.,][]{barocasHiddenAssumptionsCounterfactual2020,karimiSurveyAlgorithmicRecourse2021,keaneIfOnlyWe2021,mittelstadtExplainingExplanationsAI2019,shneidermanBridgingGapEthics2020, wachterCounterfactualExplanationsOpening2017, abdulTrendsTrajectoriesExplainable2018}.
\aptLtoX[graphic=no,type=html]{
\begin{itemize}
\leftskip-7pt
\item [\textbf{G1.}]
\textbf{Visual summary of diverse algorithmic recourse plans.}
To help end users find actionable recourse plans, researchers suggest presenting diverse CF options that users can pick from~\cite{mothilalExplainingMachineLearning2020,barocasHiddenAssumptionsCounterfactual2020}.
Thus, \textsc{\textsf{GAM Coach}}{} should efficiently generate diverse recourse plans~(\autoref{sec:method:ip}) and present a visual summary of each plan as well as display multiple plans at the same time~(\autoref{sec:ui:tab}).
This could help users compare different strategies and inform interactions to generate better recourse plans.
\item [\textbf{G2.}]
\textbf{Easy ways to specify recourse preferences.}
What makes a recourse plan actionable varies from one user to another---it is crucial for a recourse tool to enable users to specify a wide range of recourse preferences~\cite{barocasHiddenAssumptionsCounterfactual2020,mittelstadtExplainingExplanationsAI2019,kirfelWhatIfHow2021}.
Therefore, we would like to allow users to easily configure (1) the \textit{difficulty} of changing a feature, (2) the \textit{acceptable range} within which a feature can change, and (3) the \textit{maximum number of features} that a recourse plan can change~(\autoref{sec:ui:panel}), and \textsc{\textsf{GAM Coach}}{} should generate plans reflecting users' specified preferences~(\autoref{sec:method:customization}).
This interactive recourse design would empower users to iteratively customize recourse plans until they find satisfactory plans.
\item [\textbf{G3.}]
\textbf{Exploratory interface to experiment with hypothetical inputs.}
The goal of algorithmic recourse is not only to help users identify actions to alter unfavorable model decisions, but also to help them understand how a model makes decisions~\cite{wachterCounterfactualExplanationsOpening2017,karimiSurveyAlgorithmicRecourse2021}.
When explaining a model's decision-making, research shows that interfaces allowing users to probe an ML model with different inputs help users understand model behaviors and lead to greater satisfaction with the model~\cite{nourashrafeddinVisualApproachInteractive2018, chengExplainingDecisionMakingAlgorithms2019,shneidermanBridgingGapEthics2020,wexlerWhatIfToolInteractive2019}.
Therefore, we would like \textsc{\textsf{GAM Coach}}{} to enable users to experiment with different hypothetical inputs and inspect how these changes affect the model's decision~(\autoref{sec:ui:panel}).
\item [\textbf{G4.}]
\textbf{Clear communication and engagement.}
The target users of \textsc{\textsf{GAM Coach}}{} are everyday people who are usually less knowledgeable about ML and domain-specific concepts~\cite{sureshExpertiseRolesFramework2021}.
Our goal is to design and develop an interactive system that is easy to understand and engaging to use, requiring the tool to communicate and explain recourse plans and domain-specific information to end users~(\autoref{sec:ui:panel}, \autoref{sec:ui:bookmark}).
\item [\textbf{G5.}]
\textbf{Open-source and model-agnostic implementation.}
We aim to develop an interactive recourse tool that is easily accessible to users, with no installation required.
By using web browsers as the platform, users can directly access \textsc{\textsf{GAM Coach}}{} through their laptops or tablets.
Additionally, we aim to make our interface model-agnostic so that future researchers can use it with different ML models and recourse techniques.
Finally, we would like to open-source our implementation and provide documentation to support future design, research, and development of interactive algorithmic recourse~(\autoref{sec:ui:implement}).
\end{itemize}
}{
\begin{enumerate}[topsep=1mm, itemsep=0mm, parsep=1mm, leftmargin=18pt, label=\textbf{G\arabic*.}, ref=G\arabic*]
\item \label{item:g1}
\textbf{Visual summary of diverse algorithmic recourse plans.}
To help end users find actionable recourse plans, researchers suggest presenting diverse CF options that users can pick from~\cite{mothilalExplainingMachineLearning2020,barocasHiddenAssumptionsCounterfactual2020}.
Thus, \textsc{\textsf{GAM Coach}}{} should efficiently generate diverse recourse plans~(\autoref{sec:method:ip}) and present a visual summary of each plan as well as display multiple plans at the same time~(\autoref{sec:ui:tab}).
This could help users compare different strategies and inform interactions to generate better recourse plans.
\item \label{item:g2}
\textbf{Easy ways to specify recourse preferences.}
What makes a recourse plan actionable varies from one user to another---it is crucial for a recourse tool to enable users to specify a wide range of recourse preferences~\cite{barocasHiddenAssumptionsCounterfactual2020,mittelstadtExplainingExplanationsAI2019,kirfelWhatIfHow2021}.
Therefore, we would like to allow users to easily configure (1) the \textit{difficulty} of changing a feature, (2) the \textit{acceptable range} within which a feature can change, and (3) the \textit{maximum number of features} that a recourse plan can change~(\autoref{sec:ui:panel}), and \textsc{\textsf{GAM Coach}}{} should generate plans reflecting users' specified preferences~(\autoref{sec:method:customization}).
This interactive recourse design would empower users to iteratively customize recourse plans until they find satisfactory plans.
\item \label{item:g3}
\textbf{Exploratory interface to experiment with hypothetical inputs.}
The goal of algorithmic recourse is not only to help users identify actions to alter unfavorable model decisions, but also to help them understand how a model makes decisions~\cite{wachterCounterfactualExplanationsOpening2017,karimiSurveyAlgorithmicRecourse2021}.
When explaining a model's decision-making, research shows that interfaces allowing users to probe an ML model with different inputs help users understand model behaviors and lead to greater satisfaction with the model~\cite{nourashrafeddinVisualApproachInteractive2018, chengExplainingDecisionMakingAlgorithms2019,shneidermanBridgingGapEthics2020,wexlerWhatIfToolInteractive2019}.
Therefore, we would like \textsc{\textsf{GAM Coach}}{} to enable users to experiment with different hypothetical inputs and inspect how these changes affect the model's decision~(\autoref{sec:ui:panel}).
\item \label{item:g4}
\textbf{Clear communication and engagement.}
The target users of \textsc{\textsf{GAM Coach}}{} are everyday people who are usually less knowledgeable about ML and domain-specific concepts~\cite{sureshExpertiseRolesFramework2021}.
Our goal is to design and develop an interactive system that is easy to understand and engaging to use, requiring the tool to communicate and explain recourse plans and domain-specific information to end users~(\autoref{sec:ui:panel}, \autoref{sec:ui:bookmark}).
\item \label{item:g5}
\textbf{Open-source and model-agnostic implementation.}
We aim to develop an interactive recourse tool that is easily accessible to users, with no installation required.
By using web browsers as the platform, users can directly access \textsc{\textsf{GAM Coach}}{} through their laptops or tablets.
Additionally, we aim to make our interface model-agnostic so that future researchers can use it with different ML models and recourse techniques.
Finally, we would like to open-source our implementation and provide documentation to support future design, research, and development of interactive algorithmic recourse~(\autoref{sec:ui:implement}).
\end{enumerate}
}
%
\section{Techniques for Customizable Recourse Generation}
\label{sec:method}
Given our design goals~(\aptLtoX[graphic=no,type=html]{\textbf{G1}--\textbf{G5}}{\ref{item:g1}--\ref{item:g5}}), it is crucial for \textsc{\textsf{GAM Coach}}{} to generate customizable recourse plans interactively with a short response time.
Therefore, we base our design on GAMs, a family of ML models that perform competitively to state-of-the-art models yet have a transparent and simple structure---enabling end users to probe model behaviors in real time with hypothetical inputs.
In addition, with a novel adaptation of integer linear programming~(\autoref{sec:method:ip}), GAMs allow us to efficiently generate recourse plans that respect users' preferences and thus achieve our design goals~(\autoref{sec:method:customization}).
\subsection{Model Choice}
\label{sec:method:model}
To operationalize our design of interactive algorithmic recourse, we ground our research in GAMs~\cite{hastieGeneralizedAdditiveModels1999}. More specifically, we use a type of GAMs called \textit{Explainable Boosting Machines} (EBMs)~\cite{caruanaIntelligibleModelsHealthCare2015,noriInterpretMLUnifiedFramework2019}, which perform competitively to state-of-the-art black-box models yet have a transparent and simple structure~\cite{wangPursuitInterpretableFair2020,changHowInterpretableTrustworthy2021, weldChallengeCraftingIntelligible2019,noriInterpretMLUnifiedFramework2019}.
Compared to simple models like linear models or decision trees, EBMs achieve superior accuracy by learning complex relations between features through gradient-boosted trees~\cite{louAccurateIntelligibleModels2013}, making our design realistic to deploy.
Compared to complex models like neural networks, EBMs have similar performance on tabular data but a simpler structure; therefore, users can probe model behaviors in real time with hypothetical inputs~(\aptLtoX[graphic=no,type=html]{\textbf{G3}}{\ref{item:g3}}).
Given an \tcolor{myorange}{input} $\textcolor[HTML]{FA8231}{x \in \mathbb{R}^{k}}$ with \tcolor{myorange}{$k$ features}, the \tcolor{myred}{output} $\textcolor[HTML]{E03177}{y \in \mathbb{R}}$ of an EBM model can be written as:
\begin{align}
\begin{split}
\label{equation:gam}
\textcolor[HTML]{E03177}{y} &= \textcolor[HTML]{4B6584}{l \left( \textcolor[HTML]{E03177}{S_x} \right)} \\
\textcolor[HTML]{E03177}{S_x} &= \textcolor[HTML]{0E9888}{\beta_0} + \textcolor[HTML]{0080E5}{f_1 \left(\textcolor[HTML]{FA8231}{x_1}\right)} + \textcolor[HTML]{0080E5}{f_2 \left(\textcolor[HTML]{FA8231}{x_2}\right)} + \cdots + \textcolor[HTML]{0080E5}{f_k \left(\textcolor[HTML]{FA8231}{x_k}\right)} + \cdots + \textcolor[HTML]{0080E5}{f_{ij}(\textcolor[HTML]{FA8231}{x_i, x_j})}
\end{split}
\end{align}
\noindent Here, each \tcolor{myblue}{shape function} $\textcolor[HTML]{0080E5}{f_j}$ for single features $j \in \set{1, 2, \dots, k}$ or $\textcolor[HTML]{0080E5}{f_{ij}(\textcolor[HTML]{FA8231}{x_i, x_j})}$ for pairwise interactions between features~\cite{louAccurateIntelligibleModels2013}
is learned using \tcolor{myblue}{gradient-boosted trees}~\cite{louIntelligibleModelsClassification2012}.
$\textcolor[HTML]{E03177}{S_x}$ is the sum of all \tcolor{myblue}{shape function} outputs as well as \tcolor{mygreen}{the intercept constant} $\textcolor[HTML]{0E9888}{\beta_0}$.
The model converts $\textcolor[HTML]{E03177}{S_x}$ to the \tcolor{myred}{output} $\textcolor[HTML]{E03177}{y}$ through a \tcolor{gray}{link function} $\textcolor[HTML]{4B6584}{l}$ that is determined by the ML task.
For example, a \tcolor{gray}{sigmoid function} is used for binary classifications, and an \tcolor{gray}{identity function} for regressions.
What distinguishes EBMs from other GAMs is that the \tcolor{myblue}{shape function~$f_j$ or $f_{ij}$} is an ensemble of trees, mapping a \tcolor{myorange}{main effect feature value $x_j$} or a \tcolor{myorange}{pairwise interaction $(x_i, x_j)$} to a scalar \tcolor{myblue}{score}.
Before training, EBMs apply \textit{equal-frequency binning} to each continuous feature, where bins have different widths but contain the same number of training samples.
This discrete bucketing process is commonly used to speed up gradient-boosting tree methods with little cost in accuracy, such as in popular tree-based models LightGBM~\cite{keLightGBMHighlyEfficient2017} and XGBoost~\cite{chenXGBoostScalableTree2016}.
For categorical features, EBMs treat each discrete level as a bin.
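As a concrete illustration of the equal-frequency binning applied to continuous features, the short Python sketch below derives bin edges from training-data quantiles; it only mimics the idea and is not InterpretML's exact procedure.

\begin{verbatim}
import numpy as np

# Equal-frequency binning sketch: the edges are quantiles of the
# training values, so every bin holds roughly the same number of
# samples even when the distribution is skewed.
x_train = np.random.default_rng(0).lognormal(size=1000)
n_bins = 8
edges = np.quantile(x_train, np.linspace(0, 1, n_bins + 1))[1:-1]
bin_ids = np.searchsorted(edges, x_train)  # bin index per sample
\end{verbatim}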
Once an EBM model is trained, the learned parameters of each tree ensemble, which define the feature split points and the score in each region between split points, are transformed into a \textit{lookup histogram} (for univariate features) or a \textit{lookup table} (for pairwise interactions).
When predicting on a data point, the model first looks up corresponding scores for all feature values and interaction terms and then applies \autoref{equation:gam} to compute the output. \looseness=-1
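To make the lookup-based prediction concrete, the Python sketch below scores an input with a toy EBM-style classifier; the intercept, bin edges, and scores are illustrative values of our own, not a trained model or the InterpretML API.

\begin{verbatim}
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Toy lookup structures (made-up values, not a trained model).
intercept = -0.8                                     # beta_0
fico_edges = np.array([600.0, 660.0, 700.0, 740.0])  # interior edges
fico_scores = np.array([-0.9, -0.3, 0.2, 0.6, 1.1])  # score per bin
home_scores = {"rent": -0.4, "mortgage": 0.3, "own": 0.5}

def predict_proba(fico, home):
    # Continuous feature: locate its bin, then look up the score.
    s = intercept + fico_scores[np.searchsorted(fico_edges, fico)]
    # Categorical feature: each discrete level is its own bin.
    s += home_scores[home]
    # Pairwise interaction lookups (omitted here) would add to s too.
    return sigmoid(s)  # sigmoid link for binary classification

print(predict_proba(672, "rent"))      # ~0.27, below 0.5: rejection
print(predict_proba(717, "mortgage"))  # ~0.52, above 0.5: approval
\end{verbatim}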
\subsection{\mbox{CF Generation: Integer Linear Programming}}
\label{sec:method:ip}
A recourse plan is a CF example $c$ that makes minimal changes to the original input $x$ but leads to a different prediction.
Without loss of generality, we use binary classification as an example, with \tcolor{gray}{sigmoid function} $\textcolor[HTML]{4B6584}{\sigma(a) = \frac{1}{1 + e^{-a}}}$ as a \tcolor{gray}{link function}.
If $\textcolor[HTML]{4B6584}{\sigma\left(\textcolor[HTML]{E03177}{S_x}\right)} \geq 0.5$, or equivalently $\textcolor[HTML]{E03177}{S_x} \geq 0$, the model predicts the input $x$ as positive; otherwise, it predicts $x$ as negative.
To generate $c$, we can change $x$ so that the new score $\textcolor[HTML]{E03177}{S_c}$ has a different sign from $\textcolor[HTML]{E03177}{S_x}$.
Note that $\textcolor[HTML]{E03177}{S_x}$ is a sum of shape function scores, so the score change $\textcolor[HTML]{E03177}{S_c - S_x}$ is a linear function of which bins the CF example activates.
Thus, we can express this counterfactual constraint (i.e., flipping the sign of $\textcolor[HTML]{E03177}{S_x}$) as a linear constraint~(derivation in \autoref{sec:milp:cf}).
To enforce $c$ to only make minimal changes to $x$, we can minimize the distance between $c$ and $x$, which can also be expressed as a linear function~(\autoref{sec:milp:proximity}).
Since all constraints are linear, and there are a finite number of bins for each feature, we express the \textsc{\textsf{GAM Coach}}{} recourse generation as an \textit{integer linear program}:
\aptLtoX[graphic=no,type=html]{\renewcommand\theequation{2\alph{equation}}
\setcounter{equation}{0}\begin{eqnarray}
&& \min \phantom{.} \textnormal{distance} \\
&&\quad \textnormal{s.t.} \phantom{.} \textnormal{distance} = \sum_{i=1}^{k} \sum_{b\in{B_i}} d_{ib} \textcolor[HTML]{FA8231}{v_{ib}} \label{ilp:distance} \\
&&\qquad \textcolor[HTML]{E03177}{-S_x} \leq \sum_{i=1}^{k} \sum_{b\in{B_i}} g_{ib} \textcolor[HTML]{FA8231}{v_{ib}} + \sum_{\left(i, j\right) \in N} \sum_{b_1 \in B_i} \sum_{b_2 \in B_j} h_{ijb_1b_2} \textcolor[HTML]{0E9888}{z_{ijb_1b_2}} \label{ilp:cf} \\
&&\qquad \textcolor[HTML]{0E9888}{z_{ijb_1b_2}} = \textcolor[HTML]{FA8231}{v_{ib_1} v_{jb_2}}\quad \textnormal{for } \left(i, j\right) \in N, \enskip b_1 \in B_i, \enskip b_2 \in B_j \label{ilp:interaction}\\
&&\qquad \sum_{b\in{B_i}}^{} \textcolor[HTML]{FA8231}{v_{ib}} \leq 1\quad \textnormal{for } i = 1, \dots, k \label{ilp:one}\\
&&\qquad \textcolor[HTML]{FA8231}{v_{ib}} \in \left\{0, 1\right\}\quad \textnormal{for } i = 1, \dots, k, \enskip b \in B_i \label{ilp:binaryv}\\
&&\qquad \textcolor[HTML]{0E9888}{z_{ijb_1b_2}} \in \left\{0, 1\right\}\quad \textnormal{for } \left(i, j\right) \in N, \enskip b_1 \in B_i, \enskip b_2 \in B_j \label{ilp:binaryz}
\end{eqnarray}}{
\begin{subequations}
\newcommand{\setmuskip}[2]{#1=#2\relax}
\setmuskip{\medmuskip}{0.35mu}
\setmuskip{\thickmuskip}{0.35mu}
\setlength{\jot}{1pt}
\label{equation:ilp}
\begin{flalign}
\min \phantom{.} & \textnormal{distance} &\\[-4pt]
\textnormal{s.t.} \phantom{.} & \textnormal{distance} = \sum_{i=1}^{k} \sum_{b\in{B_i}} d_{ib} \textcolor[HTML]{FA8231}{v_{ib}} & \label{ilp:distance} \\[-4pt]
& \textcolor[HTML]{E03177}{-S_x} \leq \sum_{i=1}^{k} \sum_{b\in{B_i}} g_{ib} \textcolor[HTML]{FA8231}{v_{ib}} + \sum_{\left(i, j\right) \in N} \sum_{b_1 \in B_i} \sum_{b_2 \in B_j} h_{ijb_1b_2} \textcolor[HTML]{0E9888}{z_{ijb_1b_2}} & \label{ilp:cf} \\[0pt]
\normalsize
& \textcolor[HTML]{0E9888}{z_{ijb_1b_2}} = \textcolor[HTML]{FA8231}{v_{ib_1} v_{jb_2}} \hspace{5px} \textnormal{for } \left(i, j\right) \in N, \enskip b_1 \in B_i, \enskip b_2 \in B_j & \label{ilp:interaction}\\[0pt]
& \sum_{b\in{B_i}}^{} \textcolor[HTML]{FA8231}{v_{ib}} \leq 1 \hspace{22px} \textnormal{for } i = 1, \dots, k & \label{ilp:one}\\[0pt]
& \textcolor[HTML]{FA8231}{v_{ib}} \in \left\{0, 1\right\} \hspace{25px} \textnormal{for } i = 1, \dots, k, \enskip b \in B_i & \label{ilp:binaryv}\\[0pt]
& \textcolor[HTML]{0E9888}{z_{ijb_1b_2}} \in \left\{0, 1\right\} \hspace{12px} \textnormal{for } \left(i, j\right) \in N, \enskip b_1 \in B_i, \enskip b_2 \in B_j & \label{ilp:binaryz}
\end{flalign}
\end{subequations}}
\renewcommand\theequation{\arabic{equation}}
\setcounter{equation}{2}
\noindent We use an \tcolor{myorange}{indicator variable $\textcolor[HTML]{FA8231}{v_{ib}}$}~(\ref{ilp:binaryv}) to denote if a main effect bin is active:
if $\textcolor[HTML]{FA8231}{v_{ib}}=1$, we change the \tcolor{myorange}{feature value of $\textcolor[HTML]{FA8231}{x_i}$} to the closest value in its bin $b$.
All bin options of $\textcolor[HTML]{FA8231}{x_i}$ are included in a set $B_i$.
For each \tcolor{myorange}{feature $\textcolor[HTML]{FA8231}{x_i}$}, there can be at most one active bin~(\ref{ilp:one}); if there is no active bin, then we do not change the \tcolor{myorange}{value of $x_i$}.
We use an \tcolor{mygreen}{indicator variable $\textcolor[HTML]{0E9888}{z_{ijb_1b_2}}$}~(\ref{ilp:binaryz}) to denote if a \tcolor{mygreen}{pairwise interaction effect} is active---it is active if and only if bin $b_1$ of \tcolor{myorange}{$\textcolor[HTML]{FA8231}{x_i}$} and bin $b_2$ of \tcolor{myorange}{$\textcolor[HTML]{FA8231}{x_j}$} are both active~(\ref{ilp:interaction}).
The set $N$ includes all available \tcolor{mygreen}{interaction effect terms}.
Constraint~\ref{ilp:distance} determines the total distance cost for a potential CF example; it uses a set of pre-computed distance costs $d_{ib}$ of changing one \tcolor{myorange}{feature $x_i$} to the closest value in bin $b$.
Constraint~\ref{ilp:cf} ensures that any solution would flip the model prediction, by gaining enough total score from main effect scores~($g_{ib}$) and interaction effect scores~($h_{ijb_1b_2}$).
Constants $g_{ib}$ and $h_{ijb_1b_2}$ are pre-computed and adjusted for cases where a single active main effect bin results in changes in interaction terms (see~\autoref{sec:milp:ip} for details).
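The program above maps directly onto off-the-shelf solvers. The sketch below encodes a toy two-feature instance with the open-source PuLP interface; the solver choice and the tiny score and cost tables are our own illustrative assumptions, and interaction terms are omitted for brevity.

\begin{verbatim}
import pulp

# Toy instance: feature 0 has 3 candidate bins, feature 1 has 2.
# g[i][b]: score gained by moving feature i into bin b;
# d[i][b]: distance cost of that move; S_x: current (negative) score.
g = {0: [0.3, 0.8, 1.4], 1: [0.5, 0.9]}
d = {0: [1.0, 2.0, 4.0], 1: [1.5, 3.0]}
S_x = -1.0

prob = pulp.LpProblem("recourse", pulp.LpMinimize)
v = {(i, b): pulp.LpVariable(f"v_{i}_{b}", cat="Binary")
     for i in g for b in range(len(g[i]))}

# Objective: minimize the total distance cost of the plan.
prob += pulp.lpSum(d[i][b] * v[i, b] for (i, b) in v)
# Counterfactual constraint: the gained score must cancel S_x.
prob += pulp.lpSum(g[i][b] * v[i, b] for (i, b) in v) >= -S_x
# At most one active bin per feature.
for i in g:
    prob += pulp.lpSum(v[i, b] for b in range(len(g[i]))) <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
plan = [(i, b) for (i, b) in v if v[i, b].value() == 1]
print(plan)  # [(0, 1), (1, 0)]: cheapest prediction-flipping changes
\end{verbatim}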
\mypar{Novelty.}
Advancing existing work that uses integer linear programs for CF generation (on linear models~\cite{ustunActionableRecourseLinear2019} or on a linear approximation of neural networks~\cite{mohammadiScalingGuaranteesNearest2021}), our algorithm is the first that works on non-linear models without approximation.
Our algorithm is also the first and only CF method specifically designed for EBM models.
Without it, users would have to rely on model-agnostic techniques such as genetic algorithms~\cite{schleichGeCoQualityCounterfactual2021} and KD-trees~\cite{vanlooverenInterpretableCounterfactualExplanations2020} to generate CF examples.
These model-agnostic methods do not allow for customization.
Also, by quantitatively comparing our method with these two model-agnostic CF techniques on three datasets, we find CFs generated by our method are significantly \textit{closer} to the original input, \textit{sparser}, and encounter \textit{fewer failures} (see~\autoref{sec:practical:compare} and \autoref{tab:comparison} for details).
\aptLtoX[graphic=no,type=html]{}{
\phantomsection
}
\mypar{Generalizability.}\label{sec:method:ip:generalizability}
Our algorithm can easily be adapted for EBM regressors and multiclass classifiers.
For regression, we modify the left side and the inequality of constraint~\ref{ilp:cf} to bound the prediction value in the desired range~(see~\autoref{sec:practical:regression} for details).
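Concretely, for an identity link and a user-desired output range $[y_{\min}, y_{\max}]$, a plausible form of the modified constraint (ours for illustration; see \autoref{sec:practical:regression} for the exact formulation) is
\[
y_{\min} - \textcolor[HTML]{E03177}{S_x} \leq \sum_{i=1}^{k} \sum_{b\in{B_i}} g_{ib} \textcolor[HTML]{FA8231}{v_{ib}} + \sum_{\left(i, j\right) \in N} \sum_{b_1 \in B_i} \sum_{b_2 \in B_j} h_{ijb_1b_2} \textcolor[HTML]{0E9888}{z_{ijb_1b_2}} \leq y_{\max} - \textcolor[HTML]{E03177}{S_x}.
\]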
For multiclass classification, we can modify constraint~\ref{ilp:cf} to ensure that the desired class has the largest score (see~\autoref{sec:practical:multiclass} for details).
In addition to EBMs, one can also adapt our algorithm to generate CF examples for linear models~\cite{ustunActionableRecourseLinear2019}.
For other non-linear models (e.g., neural networks), one can first use a linear approximation~\cite{mohammadiScalingGuaranteesNearest2021} and then apply our algorithm, verifying suggested recourse plans with respect to the original model.
If the suggested recourse plan would not change the output of the original model, an alternative can be generated by solving the program again with the previous solution blocked.
\mypar{Scalability.}
Modern linear solvers can efficiently solve our integer linear programs.
The complexity of solving an integer linear program increases along two factors: the number of variables and the number of constraints.
In \aptLtoX[graphic=no,type=html]{Equation \ref{equation:ilp}}{\autoref{equation:ilp}}, all variables are binary---making the program easier to solve than a program with non-binary integer variables.
For any dataset, constraints~\ref{ilp:distance} and \ref{ilp:cf} each contribute a single constraint, and constraint~\ref{ilp:one} contributes one constraint per feature.
The number of constraints from \ref{ilp:interaction} grows with the number of interaction terms $|N|$ and the number of bins per feature $|B_i|$ on these interaction terms.
In practice, $|N|$ and $|B_i|$ are often bounded to ensure EBMs remain interpretable.
For example, by default the popular EBM library InterpretML~\cite{noriInterpretMLUnifiedFramework2019} bounds $|N| \leq 10$ and $|B_i| \leq 32$.
Therefore, in the worst-case scenario with 10 continuous-continuous interaction terms, there will be at most $10 \times 32 \times 32 = 10,240$ constraints from \ref{ilp:interaction}.
For instance, on the Communities and Crime dataset~\cite{redmondDatadrivenSoftwareTool2002} with 119 continuous features, 1 categorical feature, and 10 pairwise interaction terms, there are about 7.2k constraints and 3.6k variables in our program.
It only takes about 0.5--3.0 seconds to generate a recourse plan using Firefox Browser on a MacBook~(see~\autoref{sec:practical:speed} for details).\looseness=-1
\subsection{Recourse Customization}
\label{sec:method:customization}
With integer linear programming, we can generate recourse plans that reflect a wide range of user preferences~(\aptLtoX[graphic=no,type=html]{\textbf{G2}}{\ref{item:g2}}).
For example, to prioritize a feature that is \textit{easier for a user to change}, we can lower the distance cost $d_{ib}$ for that feature~(\autoref{sec:practical:distance}).
To enforce recourse plans to only change a feature in a user specified \textit{acceptable range}, we can remove out-of-range binary variables $\textcolor[HTML]{FA8231}{v_{ib}}$.
If a user requires the recourse plans to change \textit{at most $p$ features}, we can add a linear constraint $\sum_{i=1}^{k}\sum_{b\in{B_i}} \textcolor[HTML]{FA8231}{v_{ib}} \leq p$.
Finally, with modern linear solvers, we can efficiently generate diverse recourse plans~(\aptLtoX[graphic=no,type=html]{\textbf{G1}}{\ref{item:g1}}) by solving the program multiple times while blocking previous solutions~(see \autoref{sec:practical:regression}\figpart{--}\autoref{sec:practical:constraint} for details).
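To make these modifications concrete, the sketch below applies all three preference types to the toy PuLP program from \autoref{sec:method:ip}; the multiplier, the excluded bin, and $p$ are illustrative choices of our own.

\begin{verbatim}
import pulp

# Same toy tables as in the earlier sketch.
g = {0: [0.3, 0.8, 1.4], 1: [0.5, 0.9]}
d = {0: [1.0, 2.0, 4.0], 1: [1.5, 3.0]}
S_x = -1.0

# (1) Difficulty: scale a feature's distance costs by the user's
#     multiplier (here, "easy" halves the costs of feature 1).
d[1] = [0.5 * cost for cost in d[1]]

# (2) Acceptable range: omit variables for out-of-range bins
#     (here we pretend bin 2 of feature 0 is out of range).
allowed = [(i, b) for i in g for b in range(len(g[i]))
           if (i, b) != (0, 2)]

prob = pulp.LpProblem("custom_recourse", pulp.LpMinimize)
v = {(i, b): pulp.LpVariable(f"v_{i}_{b}", cat="Binary")
     for (i, b) in allowed}
prob += pulp.lpSum(d[i][b] * v[i, b] for (i, b) in v)
prob += pulp.lpSum(g[i][b] * v[i, b] for (i, b) in v) >= -S_x
for i in g:
    prob += pulp.lpSum(v[i, b] for b in range(len(g[i]))
                       if (i, b) in v) <= 1

# (3) Sparsity: change at most p = 2 features in total.
prob += pulp.lpSum(v.values()) <= 2

prob.solve(pulp.PULP_CBC_CMD(msg=False))
plan = [ib for ib in v if v[ib].value() == 1]

# (4) Diversity: a simple blocking cut forces the next solve to
#     drop at least one bin used by the previous plan.
prob += pulp.lpSum(v[ib] for ib in plan) <= len(plan) - 1
prob.solve(pulp.PULP_CBC_CMD(msg=False))
\end{verbatim}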
%
\section{User Interface}
\label{sec:ui}
Given the design goals~(\aptLtoX[graphic=no,type=html]{\textbf{G1}--\textbf{G5}}{\ref{item:g1}--\ref{item:g5}}) described in \autoref{sec:goal}, we present \textsc{\textsf{GAM Coach}}{}, an interactive tool that empowers end users to specify preferences and iteratively fine-tune recourse plans~(\autoref{fig:scenario}).
The interface tightly integrates three components: the \textit{Coach Menu}{} that provides overall controls and organizes multiple recourse plans as tabs~(\autoref{sec:ui:tab}), the \textit{Feature Panel}{} containing \textit{Feature Cards}{} that allow users to specify recourse preferences with simple interactions~(\autoref{sec:ui:panel}), and the \textit{Bookmarks}{} \textit{window} summarizing saved recourse plans~(\autoref{sec:ui:bookmark}).
To explain these views in this section, we use a loan application scenario with the LendingClub dataset~\cite{LendingClubOnline2018}, where a bank refers a rejected loan applicant to \textsc{\textsf{GAM Coach}}{} pre-loaded with the applicant's input data.
Our tool can be easily applied to GAMs trained on different datasets while providing a consistent user experience.
On \textsc{\textsf{GAM Coach}}{}'s \linkhref{https://poloclub.github.io/gam-coach/}{public demo page}, we present five additional examples with five datasets that are commonly used in algorithmic recourse literature:
Communities and Crime~\cite{redmondDatadrivenSoftwareTool2002} (also used in the second usage scenario in \autoref{sec:ui:scenario}),
Taiwan Credit~\cite{yehComparisonsDataMining2009}, German Credit~\cite{duaUCIMachineLearning2017}, Adult~\cite{kohaviScalingAccuracyNaivebayes1996}, and COMPAS~\cite{larsonHowWeAnalyzed2016}.
\looseness=-1
\subsection{Coach Menu}
\label{sec:ui:tab}
\begin{figure}[b]
\includegraphics[width=0.92\linewidth]{figures/score-tab.pdf}
\caption[]{
A bar chart visualizes the model's decision score for a recourse plan: the bar is marked with the user's original score (shorter vertical line on the left) and the threshold needed to obtain the desired decision (longer vertical line on the right).
}
\Description{
A screenshot of a small bar chart in the plan tab bar.
There is a green bar between the text labels ``Plan 6'' and ``loan approval''.
A cursor is on top of the midpoint of the green bar with a tooltip ``Score needed to obtain your desired decision (>= 0.5).''
}
\label{fig:score-tab}
\end{figure}
\setlength{\belowcaptionskip}{5pt}
\setlength{\abovecaptionskip}{12pt}
\begin{figure*}[tb]
\includegraphics[width=\textwidth]{figures/scenario.pdf}
\caption[]{
\textsc{\textsf{GAM Coach}}{} enables end users to inspect and customize recourse plans through simple interactions.
\textbf{(A)} \textit{Initial generic plans} are generated with the same configurations for all users.
\textbf{(B)} If users are not satisfied with the initial plans, they can \textit{specify recourse preferences} by configuring \textbf{(B1)} the \textit{difficulty} of changing a feature, \textbf{(B2)} the \textit{acceptable range} within which a feature can change, and \textbf{(B3)} the \textit{max number of features} that a recourse plan can alter.
\textbf{(C)} \textsc{\textsf{GAM Coach}}{} then generates \textit{personalized plans} that respect users' preferences.
Users can iteratively refine their preferences until a satisfactory plan is found.
}
\Description{
A flow chart of three screenshots labeled A, B, and C.
Screenshot A shows three collapsed feature cards under the title ``Generic Plan''.
Screenshot B shows two extended feature cards under the title ``Specify Preferences''.
The left is a categorical feature card when a user is specifying the difficulty.
The right is a continuous feature card when a user is changing its acceptable range.
Screenshot C shows two collapsed feature cards under the title ``Personalized Plan''.
Screenshot A points to screenshot B.
Screenshots B and C point to each other.
}
\label{fig:scenario}
\end{figure*}
\setlength{\belowcaptionskip}{0pt}
\setlength{\abovecaptionskip}{12pt}
The \textit{Coach Menu}{}~(\autoref{fig:teaser}\figpart{A}) is the primary control panel of \textsc{\textsf{GAM Coach}}{}.
Users can use the dropdown menu and input fields to specify desired decisions for classification and regression.
For each recourse plan generation iteration, the tool generates five diverse plans~(\aptLtoX[graphic=no,type=html]{\textbf{G1}}{\ref{item:g1}}) to help users achieve their goal, with each plan representing a CF example.
Users can access each plan by clicking the corresponding tab on the plan tab bar.
When a plan is selected, the \textit{Feature Panel}{} updates to show details about the plan, and the plan's corresponding tab extends to show the model's decision score~(\autoref{fig:score-tab}).
Users can click the \vcenteredhbox{\includegraphics[height=10pt]{figures/icon-bookmark}} button to open the \textit{Bookmarks}{} window and click the \vcenteredhbox{\includegraphics[height=10pt]{figures/icon-regenerate}} button to generate five new recourse plans that reflect the currently specified recourse preferences.
\subsection{Feature Panel}
\label{sec:ui:panel}
Each recourse plan has a unique \textit{Feature Panel}{}~(\autoref{fig:teaser}\figpart{B}) that visualizes plan details and allows users to provide preferences guiding the generation of new plans~(\aptLtoX[graphic=no,type=html]{\textbf{G2}}{\ref{item:g2}}).
A \textit{Feature Panel}{} consists of \textit{Feature Cards}{} where each card represents a data feature used in the model.
To help users easily navigate through different features, the panel groups \textit{Feature Cards}{} into three sections: (1) features that are changed in the plan, (2) features that are configured by the user, and (3) all other features.
To prevent overwhelming users with too much information~(\aptLtoX[graphic=no,type=html]{\textbf{G4}}{\ref{item:g4}}), all cards are collapsed by default---only displaying the feature name and feature values.
Users can hover over the feature name to see a tooltip explaining the definition of the feature~(\aptLtoX[graphic=no,type=html]{\textbf{G4}}{\ref{item:g4}}).
With a \textit{progressive disclosure} design~\cite{shneidermanEyesHaveIt1996, normanUserCenteredSystem1986}, details of a feature, such as the distribution of feature values, are only shown on demand after users click that \textit{Feature Card}{}.
Progressive disclosure also makes the \textsc{\textsf{GAM Coach}}{} interface scalable, as users can easily scroll and browse over hundreds of collapsed \textit{Feature Cards}{}.
Since EBMs process continuous and categorical features differently, we employ different card designs based on the feature type.
\setlength{\columnsep}{10pt}%
\setlength{\intextsep}{-2pt}%
\begin{wrapfigure}{R}{0.27\textwidth}
\vspace{0pt}
\centering
\includegraphics[width=0.27\textwidth]{figures/whatif.pdf}
\vspace{-25pt}
\caption[]{
Users can test hypothetical input values in real time.
}
%
\label{fig:whatif}
\Description{
A screenshot of the filled curve plot of FICO Score.
A user is changing the current feature value from 800 to 770.
The model output bar chart's width extends.
}
\end{wrapfigure}
\mypar{Continuous Feature Card.}
For continuous features, such as \vcenteredhbox{\includegraphics[height=9pt]{figures/feature-fico}}, the \textit{Feature Card}{}~(\autoref{fig:whatif}) uses a filled curved chart to visualize the distribution of feature values in the training set.
Users can drag the diamond-shaped thumb \vcenteredhbox{\includegraphics[height=10pt]{figures/icon-thumb2}} on a slider below the chart to experiment with hypothetical values.
During dragging, the decision score bar updates its width to reflect a new prediction score in real time.
Therefore, users can better understand the underlying decision-making process by probing the model with different inputs~(\aptLtoX[graphic=no,type=html]{\textbf{G3}}{\ref{item:g3}}).
Also, users can drag the orange thumbs \vcenteredhbox{\includegraphics[height=7pt]{figures/icon-thumb1}} \vcenteredhbox{\includegraphics[height=7pt]{figures/icon-thumb3}} to set the lower and upper bounds of acceptable feature changes.
For example, one user might only accept recourse plans that include \vcenteredhbox{\includegraphics[height=9pt]{figures/feature-amount}} at \$12k or higher~(\autoref{fig:scenario}\figpart{-B2}).
\mypar{Categorical Feature Card.}
For categorical features, such as \vcenteredhbox{\includegraphics[height=9pt]{figures/feature-home}}, users can inspect the value distribution with a horizontal bar chart~(\autoref{fig:scenario}\figpart{-B1}), where a longer bar represents more frequent options in the training data.
To specify acceptable ranges, users can click the bars to select or deselect acceptable options for new recourse plans.
Acceptable options are highlighted as \orangehl{orange}, whereas unacceptable options are colored as \grayhl{gray}.
Users can also click text labels next to the bars to experiment with hypothetical options and observe how they affect the model decision.
\setlength{\columnsep}{11pt}%
\setlength{\intextsep}{0pt}%
\begin{wrapfigure}{R}{0.18\textwidth}
\vspace{0pt}
\centering
\includegraphics[width=0.175\textwidth]{figures/diff-map.pdf}
\vspace{-8pt}
\caption[]{
Distance multipliers of difficulties.
}
\Description{
A table mapping difficulty icons to their distance multipliers.
Very easy maps to ``⨉ 0.1''.
Easy maps to ``⨉ 0.5''.
Neutral maps to ``⨉ 1''.
Hard maps to ``⨉ 2''.
Very hard maps to ``⨉ 10''.
Impossible maps to ``⨉ infinity''.
}
\label{fig:diff-map}
\end{wrapfigure}
\mypar{Specify Difficulty to Change a Feature.}
Besides selecting a feature's acceptable range, users can also specify how hard it would be for them to change a feature.
For example, it might be easier for some users to lower \vcenteredhbox{\includegraphics[height=9pt]{figures/feature-utilization}} than to change \vcenteredhbox{\includegraphics[height=9pt]{figures/feature-home}}.
To configure feature difficulties, users can click the smiley button on any \textit{Feature Card}{} and then select a suitable difficulty option on the pop-up window~(\autoref{fig:scenario}\figpart{-B1}).
Internally, \textsc{\textsf{GAM Coach}}{} multiplies the distance costs of all bins in that feature with a constant multiplier~(\autoref{fig:diff-map}).
If the user selects the ``impossible to change'' difficulty, the tool will remove all variables associated with this feature in the internal integer program~(\autoref{sec:method:customization}).
Therefore, when generating new recourse strategies, \textsc{\textsf{GAM Coach}}{} would prioritize features that are easier to change and would not consider features that are impossible to change.\looseness=-1
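In code, this mapping could be as simple as the hypothetical helper below, with multipliers as in \autoref{fig:diff-map}; the names are ours, not \textsc{\textsf{GAM Coach}}{}'s internal API.

\begin{verbatim}
MULTIPLIER = {"very easy": 0.1, "easy": 0.5, "neutral": 1.0,
              "hard": 2.0, "very hard": 10.0,
              "impossible": float("inf")}

def adjusted_costs(costs, difficulty):
    m = MULTIPLIER[difficulty]
    # "Impossible" features are dropped from the integer program
    # entirely rather than assigned an infinite cost.
    return None if m == float("inf") else [m * c for c in costs]
\end{verbatim}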
\setlength{\belowcaptionskip}{0pt}
\setlength{\abovecaptionskip}{5pt}
\begin{figure*}[tb]
\includegraphics[width=\textwidth]{figures/scenario-crime.pdf}
%
\caption[]{
\textsc{\textsf{GAM Coach}}{} allows end users to experiment with hypothetical input values and customize recourse plans.
\textbf{(A)} Our tool first shows \textit{generic plans} generated with default configurations.
\textbf{(B)} Users can explore how different input values affect the model's prediction in real time through simple interactions on the \textit{Feature Card}{}: for example, lowering the percentage of adults without a high school diploma increases the chance of getting a government grant.
\textbf{(C)} Users can then specify recourse preferences---such as feature \textit{difficulties} and \textit{acceptable ranges}---based on their circumstances and understanding of the model's prediction patterns.
\textbf{(D)} \textsc{\textsf{GAM Coach}}{} then generates more actionable recourse plans based on the user-specified preferences.
}
\Description{
A flow chart of four screenshots labeled A, B, C, and D.
Screenshot A shows two collapsed feature cards (``age percentage (>65)'' and ``employed percentage'') under the title ``Generic Plan''.
Screenshot B shows an extended feature card (``without high school rate'') under the title ``Explore what-ifs''.
The user is dragging the slider to explore a hypothetical value on this feature.
Screenshot C shows two collapsed feature cards under the title ``Specify Preferences''.
The user sets ``age percentage (>65)'' to ``new plans won't change it'' and ``without high school rate'' to ``easy to change''.
Screenshot D shows a collapsed feature of ``without high school rate'' under the title ``Personalized Plan''.
Screenshot A points to screenshot B.
Screenshots B and C point to each other.
Screenshots C and D point to each other.
}
\label{fig:scenario-crime}
\end{figure*}
\setlength{\belowcaptionskip}{0pt}
\setlength{\abovecaptionskip}{12pt}
\subsection{Bookmarks and Receipt}
\label{sec:ui:bookmark}
During the recourse iterations, users can save any suitable plans by clicking the star button \vcenteredhbox{\includegraphics[height=8pt]{figures/icon-star}} on the plan tab~(\autoref{fig:score-tab}).
Then, users can compare and update their saved plans in the \textit{Bookmarks}{} \textit{window}~(\autoref{fig:teaser}\figpart{C}).
Once users are satisfied with bookmarked plans, they can save a \textit{recourse receipt} as proof of the generated recourse plans.
\citet{wachterCounterfactualExplanationsOpening2017} first introduced the recourse receipt concept as a contract guaranteeing that a bank will approve a loan application if the applicant achieves all changes listed in the recourse plan.
\textsc{\textsf{GAM Coach}}{} is the first tool to realize this concept by creating a plaintext file that records the timestamp, a hash of EBM model weights, the user's original input, and details of bookmarked plans~(\aptLtoX[graphic=no,type=html]{\textbf{G4}}{\ref{item:g4}}).
In addition, we propose a novel security scheme that uses Pretty Good Privacy (PGP) to sign the receipt with the bank's private key~\cite{garfinkelPGPPrettyGood1995}.
With public-key cryptography, users can hold the bank accountable: they can prove the receipt's authenticity to third-party authorities with the bank's public key.
Likewise, the bank can verify a signed receipt's integrity during recourse redemption to reject counterfeit receipts.\looseness=-1
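As a sketch of this scheme, the snippet below signs and verifies a receipt with the python-gnupg bindings; the library choice, keyring path, and key ID are assumptions of ours rather than details specified by the implementation.

\begin{verbatim}
import gnupg  # python-gnupg bindings to GnuPG (an assumed choice)

gpg = gnupg.GPG(gnupghome="/path/to/bank/keyring")  # hypothetical

receipt = ("timestamp: 2022-09-01T12:00:00Z\n"
           "model_hash: <hash of EBM weights>\n"
           "input: <original feature values>\n"
           "plans: <bookmarked plans>\n")

# The bank signs the plaintext receipt with its private key.
signed = gpg.sign(receipt, keyid="BANK_KEY_ID", passphrase="...")

# Anyone holding the bank's public key can verify authenticity.
verified = gpg.verify(str(signed))
print(verified.valid)
\end{verbatim}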
\subsection{Usage Scenarios}
\label{sec:ui:scenario}
We present two hypothetical usage scenarios to illustrate how \textsc{\textsf{GAM Coach}}{} can potentially help everyday users identify actionable strategies to alter undesirable ML-generated decisions.
\mypar{Individual Loan Application.}
Eve is a rejected loan applicant, and she wants to identify ways to get a loan in the future.
In this hypothetical usage scenario, to inform loan decisions, the bank has trained an EBM model on past data (we use LendingClub~\cite{LendingClubOnline2018} to illustrate this scenario in \autoref{fig:scenario}).
Their dataset has 9 continuous features and 11 categorical features~(\autoref{fig:appendix-input}), and the outcome variable is binary---indicating whether a person can pay back the loan in time.
The bank gives Eve a \linkhref{https://poloclub.github.io/gam-coach/?dataset=lending}{link to \textsc{\textsf{GAM Coach}}{}} when informing her of the loan rejection decision.
After Eve opens \textsc{\textsf{GAM Coach}}{} in a web browser, the tool pre-loads Eve's input data and generates five recourse plans based on the default configurations.
Each plan lists a set of minimal changes in feature values that would lead to loan approval.
One plan suggests Eve lower the requested \vcenteredhbox{\includegraphics[height=9pt]{figures/feature-amount}} from \$15k to \$9k along with two other changes~(\autoref{fig:scenario}\figpart{A}).
Eve does not like this suggestion because she is unwilling to accept a \$6k reduction in the requested loan amount.
Therefore, she clicks the \vcenteredhbox{\includegraphics[height=9pt]{figures/feature-amount}} \textit{Feature Card}{} and drags the left thumb~\vcenteredhbox{\includegraphics[height=7pt]{figures/icon-thumb1}} to set the \textit{acceptable range} of \vcenteredhbox{\includegraphics[height=9pt]{figures/feature-amount}} to \$12k and above~(\autoref{fig:scenario}\figpart{-B2}).
After browsing all recourse plans in the \textit{Coach Menu}{}, Eve finds that none of the plans suggest changes to \vcenteredhbox{\includegraphics[height=9pt]{figures/feature-home}}.
Eve and her partner are actually moving to their newly purchased condo next month.
Therefore, Eve sets the \textit{acceptable range} of \vcenteredhbox{\includegraphics[height=9pt]{figures/feature-home}} to ``mortgage'' and changes its \textit{difficulty} to ``very easy''~\vcenteredhbox{\includegraphics[height=10pt]{figures/icon-very-easy}}~(\autoref{fig:scenario}\figpart{-B1}).
Eve also prefers plans that change fewer features, so she clicks the dropdown menu on the \textit{Feature Panel}{} to ask the tool to only generate plans that change at most two features~(\autoref{fig:scenario}\figpart{-B3}).
After Eve clicks the \vcenteredhbox{\includegraphics[height=10pt]{figures/icon-regenerate}} button, \textsc{\textsf{GAM Coach}}{} quickly generates five personalized plans that respect Eve's preferences.
Among these plans, Eve especially likes the one suggesting she lower the \vcenteredhbox{\includegraphics[height=9pt]{figures/feature-amount}} by about \$200 and change \vcenteredhbox{\includegraphics[height=9pt]{figures/feature-home}} to mortgage~(\autoref{fig:scenario}\figpart{C}).
Finally, Eve bookmarks this plan and downloads a recourse receipt that guarantees her a loan if all suggested terms are met.
Eve plans to apply for the loan again at the same bank next month.
\mypar{Government Grant Application.}
Hal is a county manager in the United States.
He has applied for a federal grant for his county.
Unfortunately, his application is rejected.
He wants to learn about the decision-making process and what actions he can take to succeed in future applications.
In this hypothetical usage scenario, to inform funding decisions, the federal government has trained an EBM model on past data (we use the Communities and Crime dataset~\cite{redmondDatadrivenSoftwareTool2002} to illustrate this scenario in \autoref{fig:scenario-crime}).
This dataset has 119 continuous features and 1 categorical feature describing the demographic and economic information of different counties in the United States, and is used to predict the risk of violent crime.
As part of a performance incentive funding program~\cite{verainstituteofjusticePerformanceIncentiveFunding2012},
the federal government provides more funding opportunities to counties with lower predicted crime risk~\cite{slackCounterfactualExplanationsCan2021}.
Before training the EBM model, the federal government has removed protected features (e.g., \vcenteredhbox{\includegraphics[height=9pt]{figures/feature-black-population}}) and features with many (more than half) missing values, resulting in a total of 94 continuous features and 1 categorical feature.
The federal government provides rejected counties with a \linkhref{https://poloclub.github.io/gam-coach/?dataset=crime}{link to \textsc{\textsf{GAM Coach}}{}} when informing them of the funding decisions.
Hal opens \textsc{\textsf{GAM Coach}}{} in his browser; the tool pre-loads the demographic and economic features of his county and quickly suggests five recourse plans that would lead to funding.
These generic plans are generated with the default configuration.
One plan~(\autoref{fig:scenario-crime}\figpart{A}) suggests Hal decrease \vcenteredhbox{\includegraphics[height=9pt]{figures/feature-age-65}} and increase \vcenteredhbox{\includegraphics[height=9pt]{figures/feature-employed}} in his county.
Hal likes the recommendation of increasing \vcenteredhbox{\includegraphics[height=9pt]{figures/feature-employed}} because a higher employment rate is also beneficial for the economy of his county.
However, Hal is puzzled by the suggestion of lowering \vcenteredhbox{\includegraphics[height=9pt]{figures/feature-age-65}}.
He is not sure why the age of the population should factor into funding decisions.
Besides, lowering the percentage of the elderly population is not actionable.
Therefore, Hal ``locks'' this feature by setting its \textit{difficulty} to ``impossible''~\vcenteredhbox{\includegraphics[height=9pt]{figures/icon-lock}}~(\autoref{fig:scenario-crime}\figpart{C}).
To gain a better understanding of how the funding decision is made, Hal expands several \textit{Feature Cards}{} and experiments with hypothetical feature values by dragging the blue thumbs \vcenteredhbox{\includegraphics[height=10pt]{figures/icon-thumb2}}; \textsc{\textsf{GAM Coach}}{} visualizes the model's prediction scores with these hypothetical inputs in real time~(\autoref{fig:scenario-crime}\figpart{B}).
Hal quickly finds that lowering \vcenteredhbox{\includegraphics[height=9pt]{figures/feature-highschool}} can increase his chance of getting a grant.
This is good news as Hal's county has just started a high school dropout prevention program aiming to lower the percentage of adults without a high school diploma to below 15\% in eight years.
Hal then sets this feature's \textit{difficulty} to ``easy to change''~\vcenteredhbox{\includegraphics[height=9pt]{figures/icon-easy}} and drags the orange thumbs~\vcenteredhbox{\includegraphics[height=7pt]{figures/icon-thumb1}} \vcenteredhbox{\includegraphics[height=7pt]{figures/icon-thumb3}} to set its \textit{acceptable range} to between 15\% and 22.5\%~(\autoref{fig:scenario-crime}\figpart{C}).
After Hal clicks the \vcenteredhbox{\includegraphics[height=10pt]{figures/icon-regenerate}} button, \textsc{\textsf{GAM Coach}}{} generates five new personalized plans in only 3 seconds despite there being almost 100 features.
Among these five plans, Hal likes the one that recommends decreasing \vcenteredhbox{\includegraphics[height=9pt]{figures/feature-highschool}} by 4.27\%~(\autoref{fig:scenario-crime}\figpart{D}).
Finally, Hal saves a recourse receipt, and he will apply for this grant again once the percentage of adults without a high school diploma in his county drops by 4.27\%.
\subsection{Open-source \& Generalizable Tool}
\label{sec:ui:implement}
\textsc{\textsf{GAM Coach}}{} is a web-based algorithmic recourse tool that users can access with any web browser on their laptops or tablets, no installation required~(\aptLtoX[graphic=no,type=html]{\textbf{G5}}{\ref{item:g5}}).
We use \textit{GLPK.js}~\cite{vaillantGlpkJs2021} to solve integer programs with WebAssembly, \textit{OpenPGP.js}~\cite{haseOpenPGPJsOpenPGP2014} to sign recourse receipts with PGP, and \textit{D3.js}~\cite{bostockDataDrivenDocuments2011} for visualizations.
Therefore, the entire system runs locally in users' browsers without dedicated backend servers.
We also provide an additional Python package\footnote{Python package: \link{https://poloclub.github.io/gam-coach/docs/gamcoach}} for developers to generate customizable recourse plans for EBM models without a graphical user interface.
With this Python package, developers and researchers can also easily extract model weights from any EBM model to build their own \textsc{\textsf{GAM Coach}}{}.
Finally, despite its name, \textsc{\textsf{GAM Coach}}{}'s interface is model-agnostic---it supports any ML models where (1) one can control the difficulty and acceptable range of changing a feature during CF generation, and (2) model inference is available.
With our open-source and generalizable implementation, detailed documentation, and examples on six datasets across a wide range of tasks and domains---LendingClub~\cite{LendingClubOnline2018}, Taiwan Credit~\cite{yehComparisonsDataMining2009}, German Credit~\cite{duaUCIMachineLearning2017}, Adult~\cite{kohaviScalingAccuracyNaivebayes1996}, COMPAS~\cite{larsonHowWeAnalyzed2016}, and Communities and Crime~\cite{redmondDatadrivenSoftwareTool2002}---future researchers can easily adapt our interface design to their models and datasets. %
\section{User Study}
\label{sec:user}
To evaluate \textsc{\textsf{GAM Coach}}{} and investigate how everyday users would use an interactive algorithmic recourse tool, we conducted an online user study with 41 United States-based crowdworkers.
For possible datasets to use in this user study, we compared five public datasets that are commonly used in the recourse literature:
LendingClub~\cite[e.g.,][]{mothilalExplainingMachineLearning2020,tsirtsisDecisionsCounterfactualExplanations2020}, Taiwan Credit~\cite[e.g.,][]{tsirtsisDecisionsCounterfactualExplanations2020,ustunActionableRecourseLinear2019,schleichGeCoQualityCounterfactual2021}, German Credit~\cite[e.g.,][]{mothilalExplainingMachineLearning2020,tsirtsisDecisionsCounterfactualExplanations2020,slackCounterfactualExplanationsCan2021}, Adult~\cite[e.g.,][]{karimiModelAgnosticCounterfactualExplanations2020,schleichGeCoQualityCounterfactual2021,mohammadiScalingGuaranteesNearest2021}, and COMPAS~\cite[e.g.,][]{mothilalExplainingMachineLearning2020,karimiModelAgnosticCounterfactualExplanations2020,rawalIndividualizedRecourseInterpretable2020}.
We decided to use LendingClub in our study for the following three reasons.
First, we chose a lending scenario as it is one scenario that many people, including crowdworkers, may encounter in real-life.
Second, there is no expert knowledge needed to understand the setting, making our tasks appropriate for crowdworkers.
Finally, our institute requires research participants to be United States-based: among the four datasets that can be used in a lending setting (LendingClub, Taiwan Credit, German Credit, and Adult), LendingClub is the only United States-based dataset collected from a real lending website.
In this user study, we aimed to answer the following three research questions:
\aptLtoX[graphic=no,type=html]{
\begin{itemize}
\item[\textbf{RQ1.}] What makes a satisfactory recourse plan for end users? (\autoref{sec:user:result1})
\item[\textbf{RQ2.}] How do end users discover their satisfactory recourse plans? (\autoref{sec:user:result2})
\item[\textbf{RQ3.}] How does interactivity play a role in providing algorithmic recourse? (\autoref{sec:user:result3})
\end{itemize}
}{
\begin{enumerate}[topsep=5pt, itemsep=0mm, parsep=1mm, leftmargin=24pt, label=\textbf{RQ\arabic*.}, ref=RQ\arabic*]
\item \label{item:q1} What makes a satisfactory recourse plan for end users? (\autoref{sec:user:result1})
\item \label{item:q2} How do end users discover their satisfactory recourse plans? (\autoref{sec:user:result2})
\item \label{item:q3} How does interactivity play a role in providing algorithmic recourse? (\autoref{sec:user:result3})
\end{enumerate}
}
\subsection{Participants}
We recruited 50 anonymous and voluntary United States-based participants from Amazon Mechanical Turk (MTurk), an online crowdsourcing platform.
We did not collect any personal information. Collected interaction logs and subjective ratings are stored in a secure location where only the authors have access.
The authors' Institutional Review Board (IRB) has approved the study.
The average of three self-reported task completion times on a worker-centered forum\footnote{TurkerView: \link{https://turkerview.com/}} is 32\nicefrac{1}{2} minutes.
We paid 41 participants \$6.50 per study and 9 participants who had not passed our quality control \$5.50.\footnote{Originally the task was posted with a base payment of \$3.50 and \$1 bonus for quality. However, when analyzing participants' responses, we realized that the task required more time than we originally expected, so we provided an additional \$2 bonus to all participants after the study to ensure appropriate compensation for their time. This brought the payment to \$6.50 for those who passed the quality control quiz and \$5.50 for those who did not.}
Recruited participants self-reported an average ML-familiarity score of 2.7 on a 5-point Likert scale, where 1 represents ``I have never heard of ML'' and 5 represents ``I have developed ML models.''
\subsection{Study Design}
To start, each participant signed a consent form and filled out a background questionnaire (e.g., familiarity with ML).
\setlength{\belowcaptionskip}{0pt}
\setlength{\abovecaptionskip}{5pt}
\begin{figure}[tb]
\includegraphics[width=0.8\linewidth]{figures/plan-review.pdf}
\caption[]{We asked user study participants to explain why they had chosen their satisfactory plans, and why they had not chosen two other random plans (not shown in figure).}
\Description{
A screenshot of a user study question.
The instruction reads ``Please review the following plans. Please emphasize why chosen plans are helpful for you, and why unchosen plans are less helpful.''
There is a box below the instruction labeled ``Plan 6 (chosen by you).''
Below the label, there is a recourse plan with the feature Home Ownership changed from Rent to Mortgage.
On the right of the plan, there is a question ``Why do you like this plan?''
There is a text field and rating drop-down menu below the question.
}
\label{fig:plan-review}
\end{figure}
\setlength{\belowcaptionskip}{0pt}
\setlength{\abovecaptionskip}{12pt}
\mypar{\textsc{\textsf{GAM Coach}}{} Tutorial and Short Quiz.}
We directed participants to a Google Survey form and \linkhref{https://poloclub.github.io/gam-coach/user-study/}{a website} containing \textsc{\textsf{GAM Coach}}{}, task instructions, and tutorial videos.
Our tool, loaded with an EBM binary classifier that predicts loan approval on the LendingClub dataset~\cite{LendingClubOnline2018}, also contains input values of 500 random test samples on which the model predicts loan rejection.
Participants were asked to watch a 3-minute tutorial video and complete eight multiple-choice quiz questions.
These questions are simple---asking what is shown in the tool after certain interactions.
All participants were asked to perform these interactions on the same data sample, so we had ``ground truth'' answers for the quiz questions.
We used the quiz as a ``gold standard'' question to detect fraudulent responses~\cite{olsonWaysKnowingHCI2014, kitturCrowdsourcingUserStudies2008}.
Although participants were told that they would need to answer all questions correctly to receive the base compensation, we paid all participants regardless of their answers.
However, in our analysis, we only included responses from participants who had correctly answered at least four questions.
\mypar{Free Exploration with an Imaginary Usage Scenario.}
After completing the tutorial and quiz, participants were asked to pretend to be a rejected loan applicant and freely use \textsc{\textsf{GAM Coach}}{} \textit{until finding at least one satisfactory recourse plan.}
These satisfactory recourse plans could be chosen from the first five generic plans that \textsc{\textsf{GAM Coach}}{} generates with a default configuration \textit{or} follow-up plans that are generated based on participants' configured preferences.
To help participants imagine the scenario, we asked them to change the input sample (one of 500 random samples) until they found one that they felt comfortable pretending to be.
Participants could also manually adjust the input values~(\autoref{fig:appendix-input} in the appendix).
After identifying and bookmarking their satisfactory plans, participants were asked to rate the importance of their configured preferences or briefly explain why no configuration was needed.
Then, participants were asked to explain why they had chosen their saved plans~(\autoref{fig:plan-review}) and why they had not chosen two other plans, which were randomly picked from the initial recourse plans.
To incentivize participants to write good-quality explanations~\cite{paolacciTurkUnderstandingMechanical2014, hoIncentivizingHighQuality2015}, we told participants that they could get a \$1 bonus reward if their explanations were well-justified.
Regardless of their responses, all participants who had correctly answered at least four quiz questions were rewarded with this bonus.\looseness=-1
\mypar{Interaction Logging and Survey.}
While participants were using \textsc{\textsf{GAM Coach}}{}, the tool logged all interactions, such as preference configuration, hypothetical value experiment, and recourse plan generation.
Each log event includes a timestamp and associated values.
After finishing the exploration task, participants were asked to click a button that uploads their interaction logs and recourse plan reviews as a JSON file to a secured Dropbox directory.
The filenames included a random number.
Participants were given this number as a verification code to report in the survey response and MTurk submission---we used this number to link a participant's MTurk ID with their log data and survey response.
Finally, participants were asked to complete the survey consisting of subjective ratings and open-ended comments regarding the tool.
As the EBM model used in the study is non-monotonic,
the tool can sometimes suggest counterintuitive changes~\cite{barocasHiddenAssumptionsCounterfactual2020}, such as lowering \vcenteredhbox{\includegraphics[height=9pt]{figures/feature-income}} for loan approval.
We asked participants to report counterintuitive recourse plans in the survey if they had seen any.
\subsection{Results}
Out of 50 recruited participants, 41 (P1--P41) correctly answered at least four ``quality-control'' questions.
In the following sections, we summarize our findings through analyzing these 41 participants' interaction logs, recourse plan reviews, and survey responses.
We denote the Wald Chi-Square statistical test score as $\chi^2$.
\subsubsection{RQ1: Characteristics of Satisfactory Recourse Plans}
\label{sec:user:result1}
During the exploration task, participants were asked to identify at least one recourse plan that they would be satisfied with if they were a rejected loan applicant using \textsc{\textsf{GAM Coach}}{}.
On average, each participant chose 1.54~\vcenteredhbox{\includegraphics[height=9pt]{figures/dist-plan-num}} satisfactory plans.
Participants preferred \textit{concise plans} that changed only a few features, with an average of 2.11~\vcenteredhbox{\includegraphics[height=9pt]{figures/dist-feature-num}} features per plan.
Chosen plans changed a diverse set of features, covering 13 of the 20 features.
The most popular features changed by chosen plans were \vcenteredhbox{\includegraphics[height=9pt]{figures/feature-amount}}~(26.3\%), \vcenteredhbox{\includegraphics[height=9pt]{figures/feature-fico}}~(18.8\%), and \vcenteredhbox{\includegraphics[height=9pt]{figures/feature-utilization}}~(11.3\%).
Features that were not changed by any chosen plans were mostly hard to change in real life, such as \vcenteredhbox{\includegraphics[height=9pt]{figures/feature-bankruptcy}} and \vcenteredhbox{\includegraphics[height=9pt]{figures/feature-employment}}.
\aptLtoX[graphic=no, type=html]{}{
\setul{0.4ex}{0.3ex}
\setulcolor{mygray}
}
\mypar{Reasons for Choosing Satisfactory Plans.}
Participants reported three main reasons for choosing plans: the plans were (1) controllable, (2) required only small changes or tolerable compromises, or (3) beneficial for life in general.
Most participants chose recourse plans that felt realistic and controllable.
For example, P30 wrote \myquote{I think it's very possible to reduce my \ul{credit utilization} in a short amount of time.}
In particular, participants preferred plans that only changed a few features and required a small amount of change.
Participants described these plans as ``\textit{simple and fast}'' (P5), ``\textit{straightforward}'' (P7), and ``\textit{easy to do}'' (P16).
Some participants chose plans because they could tolerate the compromises.
For example, P8 wrote \myquote{I'm fine with the lower \ul{loan amount}.}
Similarly, P11 reported \myquote{[The decreased] \ul{loan amount} is close to what I need.}
Interestingly, some participants favored plans that could benefit their lives in addition to helping them get loan approval.
For example, P14 wrote \myquote{[...] lower \ul{utilization} is good for me anyway from what I know, so this seems like the best plan.}
Similarly, P28 wrote \myquote{[this plan] in my opinion would guarantee greater monetary flexibility.} \looseness=-1
\mypar{Reasons for Not Choosing a Plan.}
Participants' explanations for not choosing a plan mostly complemented the reasons for choosing a plan.
Some participants also skipped plans because they were puzzled by counterintuitive suggestions, did not understand the suggestions, or just wanted to see more alternatives.
First, participants disliked unrealistic suggestions:
P2 explained \myquote{It tells me to increase my \ul{income}. My \ul{income} is fixed. I cannot just increase them at a whim.}
Similarly, P6 wrote \myquote{With inflation it might be harder to \ul{use less credit}.}
Participants also disliked plans requiring too many changes or a large amount of change.
For example, P30 wrote \myquote{The \ul{amount of loan} suggested to be reduced is too large. Assuming I'm applying for 9,800 for real, I wouldn't want to reduce the amount by more than 30\%.}
Interestingly, some participants skipped a plan because it suggested counterintuitive changes.
For example, P14 wrote \myquote{It seemed like a bug because why would asking for an extra 13 dollars [in \ul{loan amount}] result in a loan approval?}
Participants also skipped plans when they did not understand the suggestion: P9 wrote \myquote{I'm not exactly sure what \ul{credit utilization} is. I looked at the tooltip, but still wasn't sure.}
Finally, some participants skipped the initial plans because they just wanted to explore more alternatives: P22 explained \myquote{I wanted to check out a few more things before I made my decision.}\looseness=-1
\mypar{Design Lessons.}
By analyzing the characteristics of satisfactory recourse plans, our user study is the first study that provides empirical evidence to support several hypotheses from the recourse literature.
We find that participants preferred plans that suggested changes on actionable features~\cite{karimiSurveyAlgorithmicRecourse2021,kirfelWhatIfHow2021}, were concise and made small changes~\cite{leGRACEGeneratingConcise2020,wachterCounterfactualExplanationsOpening2017}, and could benefit participants beyond the recourse goal~\cite{barocasHiddenAssumptionsCounterfactual2020}.
Additionally, participants were likely to save multiple satisfactory plans from one recourse session, highlighting the importance of providing diverse recourse plans~\cite{mothilalExplainingMachineLearning2020}.
Our study also shows that with transparency, end users can identify and dislike counterintuitive recourse plans (see more discussion in \autoref{sec:user:result3}).
Therefore, future researchers and developers should help users identify concise and diverse plans that change actionable features and are beneficial overall.
Also, researchers and developers should carefully audit and improve their models to prevent a CF generation algorithm from generating counterintuitive plans.
Our findings also highlight that communicating recourse plans and providing a good user experience are as important as generating good recourse plans.
\subsubsection{RQ2: Path to Discover Satisfactory Recourse Plans}
\label{sec:user:result2}
In the exploration task, participants could freely choose their satisfactory recourse plans from the initial batch, where plans were generated with default configurations, or from follow-up batches, where plans reflected participants' specified preferences.
We find that participants were more likely to choose satisfactory plans that respect participants' preference configurations (33 participants out of 41) than the default plans (8 participants).
In addition, each recourse session had a median of 3~\vcenteredhbox{\includegraphics[height=9pt]{figures/dist-iteration-num}} plan iterations.
In other words, on average, a participant discovered satisfactory plans after seeing about 15 plans, where the last 10 plans were generated based on their preferences.
The average time to identify satisfactory plans was 8 minutes and 38 seconds.
\mypar{Preference configuration is helpful.}
In \textsc{\textsf{GAM Coach}}{}, users can specify the \textit{difficulty} and \textit{acceptable range} to change a feature and the \textit{max number of features} a plan can change.
We find all three preferences helped participants discover satisfactory plans.
Among 63 total satisfactory plans chosen by 41 participants, 49 plans~(77.78\%) reflected at least one difficulty configuration and 44 plans~(69.84\%) reflected at least one range configuration.
Also, 12 participants configured the max number of features---seven participants changed it to 1 and five changed it to 2 (default is 4).
\setlength{\belowcaptionskip}{0pt}
\setlength{\abovecaptionskip}{5pt}
\begin{figure}[tb]
\includegraphics[width=0.95\linewidth]{figures/difficulty.pdf}
\caption[]{
Difficulty configuration counts across frequent features, highlighting the
variability of participants' preferences.
}
\Description{
A horizontal stacked bar chart of the top 7 frequent difficulty configurations.
From the top to the bottom, the feature loan amount has been used in 19 satisfactory plans,
annual income in 12 plans, home ownership in 11 plans, credit history length in 10 plans,
revolving utilization in 10 plans, FICO score in 9 plans, and payment term in 6 plans.
}
\label{fig:result-preference}
\end{figure}
\setlength{\belowcaptionskip}{0pt}
\setlength{\abovecaptionskip}{12pt}
\mypar{Diverse Preference Configurations.}
By further analyzing participants' preferences associated with their chosen plans, we find (1) participants specified preferences on a wide range of features; (2) some features were more popular than others; (3) different participants set different preferences on a given feature.
Of the 20 features, at least one participant changed the difficulty of 16 features (80\%) and acceptable range of 13 features (65\%).
Among these configured features, participants were more likely to specify preferences on some than others [$\chi^2 = 54.37$, $p<0.001$ for the difficulty, $\chi^2 = 27.68$, $p=0.006$ for the acceptable range].
For example, 19 satisfactory plans reflected difficulty for \vcenteredhbox{\includegraphics[height=9pt]{figures/feature-amount}}, whereas only 1 plan reflected the difficulty for \vcenteredhbox{\includegraphics[height=9pt]{figures/feature-past}}.
Also, there was high variability in the preferences configured on popular features~(\autoref{fig:result-preference}).
For instance, 6 plans considered \vcenteredhbox{\includegraphics[height=9pt]{figures/feature-amount}} as ``very easy to change,'' while 9 plans deemed it ``impossible to change.''
Our findings confirm hypotheses that recourse preferences can be incorporated to identify satisfactory plans~\cite{barocasHiddenAssumptionsCounterfactual2020, weldChallengeCraftingIntelligible2019}, and these preferences are idiosyncratic~\cite{kirfelWhatIfHow2021,vermaCounterfactualExplanationsMachine2020}.
\mypar{Design Lessons.}
When designing recourse systems, it is useful to allow end users to specify a wide range of recourse preferences, such as difficulties to change a feature, acceptable feature ranges, and max number of features to change.
Additionally, there can be predictable patterns in users' recourse preferences---researchers can leverage these patterns to further improve user experiences.
For example, developers can use the log data of an interactive recourse tool to train a new ML model to predict users' preference configurations.
Then, for a new user, developers can predict their recourse preference and use it as the tool's default configuration.
\subsubsection{RQ3: Interactive Algorithmic Recourse}
\label{sec:user:result3}
How did participants use and perceive various \textit{interactions} throughout the exploration task?
Interestingly, 28\% of participants who configured difficulty preferences had also immediately altered the difficulty levels on the same features; most of them changed ``easy'' to ``very easy'' and ``hard'' to ``very hard.''
For acceptable ranges, this percentage was higher, at 88\%.
This suggests that participants may need a few iterations to learn how preference configuration works in \textsc{\textsf{GAM Coach}}{} and then fine-tune their configurations to generate better plans---highlighting the key role of iteration in interactive recourse.
Survey responses show that participants found both preference configuration and iteration helpful in finding good recourse plans~(\autoref{fig:usability}\figpart{B}).
For example, P30 commented \myquote{[I like] how easy it was to make changes to the priority of each thing. Showing that some things can be easy changes, or impossible to change, and making plans built around those.}
Similarly, P19 wrote \myquote{[I like] regenerating unlimited plans until I find a fit one.}
\setlength{\belowcaptionskip}{0pt}
\setlength{\abovecaptionskip}{8pt}
\begin{figure*}[tb]
\includegraphics[width=\textwidth]{figures/study-result.pdf}
\caption[]{
Average ratings and rating distributions from 41 participants on the usability and usefulness of \textsc{\textsf{GAM Coach}}{}.
\textbf{(A)} Participants thought \textsc{\textsf{GAM Coach}}{} was relatively easy and enjoyable to use, and the tool helped them identify actions to obtain a preferred ML decision.
\textbf{(B)} All interaction techniques, especially experimenting with hypothetical values, were rated favorably.
}
\Description{
Two horizontal bar charts of usability and usefulness average ratings.
For usability, ``easy to use'' has an average score of 5.02, ``easy to understand'' of 4.9,
``enjoyable to use'' of 5.07, ``help understand model'' of 5.17, ``help find ways to improve''
of 5.61, and ``I will apply loan here again'' of 5.34.
For usefulness, ``explore hypothetical value'' has an average score of 6.02, ``configure difficulty''
of 5.59, ``configure accept range'' of 5.7, ``iterative fine-tuning'' of 5.61, ``show multiple plans'' of 5.9, and ``download a receipt'' of 5.59.
}
\label{fig:usability}
\end{figure*}
\setlength{\belowcaptionskip}{0pt}
\setlength{\abovecaptionskip}{12pt}
\mypar{``What-if'' Questions.}
Besides configuring preferences, participants also engaged in other modes of interaction with \textsc{\textsf{GAM Coach}}{}.
For example, 32 out of 41 participants experimented with hypothetical feature values~(\autoref{sec:ui:panel}), even though it did not affect recourse generation and was not required in the task.
These participants explored a median of 3 unique features~\vcenteredhbox{\includegraphics[height=9pt]{figures/dist-alt-feature-count}} and a median of 5.5 hypothetical feature values~\vcenteredhbox{\includegraphics[height=9pt]{figures/dist-alt-feature}}.
These 32 participants asked what-if questions on a total of 99 features, and only 39~(39.4\%) of these features were from the presented recourse plans.
This suggests that participants were more interested in learning about the predictive effects of features that had not been changed by \textsc{\textsf{GAM Coach}}{}.
After exploring what-ifs on these 99 features, participants configured at least one preference (difficulty or acceptable range) on about half of them~(49 features, 49.5\%).
In comparison, these participants only configured preferences on 13.72\% of the features~(87 out of 634) on which they had not explored what-ifs or had explored what-ifs \textit{after} configuring preferences.
This shows that participants were more likely to customize features on which they had explored hypothetical values [$\chi^2 = 85.459$, $p<0.00001$].
Finally, 20 out of these 32 participants~(62.5\%) chose a satisfactory plan with a changed feature on which they had explored what-ifs.
This may suggest that participants preferred recourse plans that changed features on which they had explored what-ifs, but the result is not statistically significant [$\chi^2 = 2.0$, $p=0.1573$].
By analyzing survey responses, we also find that asking what-if questions was one of the participants' favorite features~(\autoref{fig:usability}\figpart{B}).
For example, P12 wrote \myquote{[I like] how it adjusts the plans in real time and gives you an answer if the loan will be approved.}
Throughout the task, participants also frequently used the tooltip annotations to inspect the decision score bar (median 8 times per participant) and check the meaning of different features (median 25 times)---highlighting the importance of clearly explaining visual representations and terminologies in interactive recourse tools. \looseness=-1
\mypar{Counterintuitive recourse plans.}
We asked participants to report any strange recourse plans that \textsc{\textsf{GAM Coach}}{} might occasionally suggest, such as lowering \vcenteredhbox{\includegraphics[height=9pt]{figures/feature-income}} for loan approval.
To our surprise, 7 out of 41 participants had encountered and reported these counterintuitive plans!
For example, P6 was confused that some plans suggested conflicting changes on the same feature: \myquote{One plan told me to increase the \ul{loan amount} by \$13 while another plan told me to decrease by \$1,613.}
Another interesting case was P39: \myquote{I don't understand how \ul{purpose} changes approval decision. Something like \ul{`mortgage'} I understand, but changing something and all of a sudden you can do a wedding but not home improvement? Like what?}
First, P39 found it counterintuitive that \textsc{\textsf{GAM Coach}}{} includes the categorical feature \vcenteredhbox{\includegraphics[height=9pt]{figures/feature-purpose}} as a changeable feature because they thought the model decision should be independent of the \vcenteredhbox{\includegraphics[height=9pt]{figures/feature-purpose}}.
Then, through experimenting with hypothetical values, P39 was baffled by the observation that two different purposes (wedding and home improvement) resulted in two distinct model decisions.
Some other participants also cited these strange patterns as reasons why they skipped some plans~(\autoref{sec:user:result1}).
This finding provides empirical evidence that with transparency, everyday users can discover potentially problematic behaviors in ML models.
\mypar{Design Lessons.}
Overall, interactivity helps users identify satisfactory recourse plans, and users appreciate being able to control recourse generation.
In addition, users like being able to ask what-if questions; experimenting with hypothetical feature values also helps them find satisfactory recourse plans.
However, it takes time and trial and error for users to understand how preference configurations affect recourse generation.
Therefore, future interactive recourse tools can improve user experience by focusing on improving learnability and reversibility.
Also, our study shows that interactivity and transparency could occasionally confuse users with counterintuitive recourse plans.
Therefore, future researchers and developers should carefully audit and improve their ML models before deploying interactive recourse tools.
\subsubsection{Usability}
\label{sec:user:usability}
Our survey included a series of 7-point Likert-scale questions regarding the usability of \textsc{\textsf{GAM Coach}}{}~(\autoref{fig:usability}\figpart{A}).
The results suggest that the tool is relatively easy to use~(average 5.02), easy to understand~(average 4.90), and enjoyable to use~(average 5.07).
However, some participants commented that the tool was not easy to learn at first and may be too complex for users with less knowledge about loans.
For example, P5 wrote \myquote{Without the tutorials, it would have taken me much longer to learn how to navigate the program, because it is not very intuitive at first.}
Similarly, P8 wrote \myquote{I am decent with finances, but I'd imagine that other people would have more difficulty [using the tool].}
Our participants were MTurk workers, whose demographics are similar to those of American internet users as a whole, though slightly younger and more educated~\cite{olsonWaysKnowingHCI2014, hitlinResearchCrowdsourcingAge2016}.
Therefore, \textsc{\textsf{GAM Coach}}{} might be overwhelming for real-life loan applicants who are less familiar with web technology or finance.
Participants also provided specific feedback for improvement, such as designing a better way to \textit{store} and \textit{compare} all generated plans.
Currently, users would lose unsaved plans when generating new plans, and users could only compare different recourse plans in the \textit{Bookmarks}{} \textit{window}~(\autoref{sec:ui:bookmark}).
We plan to continue improving the design of \textsc{\textsf{GAM Coach}}{} based on participants' feedback.
%
\section{Limitations}
We acknowledge limitations regarding our tool's generalizability, usage scenarios, and user study design.
\mypar{Generalizability of \textsc{\textsf{GAM Coach}}{}.}
To design and develop the first interactive algorithmic recourse tool that enables end users to fine-tune recourse plans with preferences, we ground our research in GAMs, a class of accurate and transparent ML models with simple structures.
This approach enables us to generate customizable CF examples efficiently.
However, not all CF generation algorithms allow users to specify the feature-level distance functions, acceptable ranges, and max number of features that a CF example can change.
Therefore, while the \textsc{\textsf{GAM Coach}}{} interface is model-agnostic, it does not directly support all existing ML models and CF generation methods.
Also, our novel CF generation algorithm is tailored to EBMs.
However, one can easily adapt our linear constraints to generate customizable CF examples for linear models~\cite{ustunActionableRecourseLinear2019}.
For more complex non-linear models (e.g., random forest, neural networks), one can apply our method to a linear approximation~\cite{mohammadiScalingGuaranteesNearest2021} of these models~(\autoref{sec:method:ip:generalizability}).
We also acknowledge that similar to most existing CF generation algorithms~\cite{keaneIfOnlyWe2021,barocasHiddenAssumptionsCounterfactual2020}, our algorithm assumes all features to be independent.
However, in practice, many features can be associated.
For example, changing \vcenteredhbox{\includegraphics[height=9pt]{figures/feature-utilization}} is likely to also affect a user's \vcenteredhbox{\includegraphics[height=9pt]{figures/feature-fico}}.
Future work can generalize our algorithm to dependent features by modeling their causal relationships~\cite{karimiAlgorithmicRecourseCounterfactual2021}.
\mypar{Hypothetical Usage Scenarios.}
We situate \textsc{\textsf{GAM Coach}}{} in lending and government funding settings~(\autoref{sec:ui:scenario}), the two most cited scenarios in the existing CF literature~\cite{karimiSurveyAlgorithmicRecourse2021,barocasHiddenAssumptionsCounterfactual2020}.
It is important to note that none of the authors have expertise in law, finance, or political science.
Therefore, adapting \textsc{\textsf{GAM Coach}}{} for use in real lending and government funding settings would require more research and engagement with experts in the legal and financial domains, as well as with people who would be impacted by the systems.
In addition, we use LendingClub~\cite{LendingClubOnline2018} and Communities and Crime~\cite{redmondDatadrivenSoftwareTool2002}, the two largest suitable datasets we have access to~(\autoref{sec:user}), to simulate two usage scenarios and design our user study.
These two datasets may differ in features and size from the data used in practice.
Therefore, before adapting \textsc{\textsf{GAM Coach}}{}, researchers and developers should thoroughly test our tool on their own datasets.
\mypar{Simulated Study Design.}
To study how end users would use interactive recourse tools, we recruited MTurk workers and asked them to pretend to be rejected loan applicants, and we logged and analyzed their interactions with \textsc{\textsf{GAM Coach}}{}.
We designed the task to encourage and help participants simulate the scenario (e.g., rewarding a bonus, and allowing participants to input data or choose from multiple random samples).
However, participants' usage patterns and reactions may not fully represent real-life loan applicants.
We chose to simulate a lending scenario because (1) crowdworkers may have encountered lending, (2) it does not require expert knowledge, and (3) we have access to a large and real US-based lending dataset.
We acknowledge that participants' usage patterns may not fully represent users in other domains.
Therefore, it would require further research with actual end users (e.g., loan applicants, county executives, and bail applicants) to study how \textsc{\textsf{GAM Coach}}{} can aid them in real-world settings.
In our study, we only collected participants' familiarity with ML.
As MTurk workers tend to be younger and more educated than average internet users~\cite{olsonWaysKnowingHCI2014, hitlinResearchCrowdsourcingAge2016}, future researchers can collect more self-reported demographic information (e.g., age, education, sex) to study if different user groups would use an interactive recourse tool differently.
\mypar{Observational Study Design.}
Our observational log study can provide a portrait of users' natural behaviors when interacting with interactive algorithmic recourse tools and scale to a large number of participants~\cite{dumaisUnderstandingUserBehavior2014}.
However, it lacks a control group.
As algorithmic recourse research and applications are still nascent, the community has not yet established a recommended workflow or system that we can use as a baseline in our study~(\autoref{sec:related:recourse}).
Our main goal is to study how \textit{recourse customizability} can help users discover useful recourse plans.
Therefore, to mitigate the lack of a control group, we offer participants the option to \textit{abstain from customizing recourse plans} to probe into the usefulness of recourse customizability.
In our analysis, we compare both (1) the numbers of participants who specified recourse preferences and who did not, and (2) the numbers of satisfactory plans generated with a default configuration and those generated with participant-configured preferences~(\autoref{sec:user:result2}).
Finally, with our open-source implementation~(\autoref{sec:ui:implement}), future researchers can use \textsc{\textsf{GAM Coach}}{} as a baseline system to evaluate their interactive recourse tools.
%
\section{Discussion and Future Work}
Reflecting on our end-to-end realization of interactive algorithmic recourse---from UI design to algorithm development and a user study---we distill lessons and provide a set of future directions for algorithmic recourse and ML interpretability.
\mypar{Too much transparency.}
\textsc{\textsf{GAM Coach}}{} uses a glass-box model, provides end users with complete control of recourse plan generation, and lets users ask ``what-if'' questions with any feature values.
One might argue that \textsc{\textsf{GAM Coach}}{} is too transparent and too much transparency makes the tool unfavorable, because (1) end users can use this tool for gaming the ML model~\cite{kleinbergHowClassifiersInduce2020, hardtStrategicClassification2016} and (2) this tool fails to protect the decision maker's model intellectual property~\cite{wachterCounterfactualExplanationsOpening2017}.
We acknowledge these concerns.
As recourse research and applications are still nascent, it is challenging to know how we can balance the benefits of transparency and human agency and the risk of revealing too much information about the ML model.
Our user study shows that, with transparency, end users can discover, and are often puzzled by, counterintuitive patterns in ML models.
We believe that if \textsc{\textsf{GAM Coach}}{} is adopted, it has the potential to incentivize decision makers to create better models in order to avoid both confusion and model exploitation.
As one of the fullest realizations of ML transparency, \textsc{\textsf{GAM Coach}}{} can be a research instrument that helps future researchers study the tension between \textit{decision makers} and \textit{decision subjects}, and identify the right amount of transparency to benefit both parties.
Then, to adopt \textsc{\textsf{GAM Coach}}{} in practice, ML developers can remove certain functionalities or impose recourse constraints accordingly.
For example, if a bank is offering \textsc{\textsf{GAM Coach}}{} and is worried about people gaming the system by changing certain features that do not actually improve their creditworthiness (e.g., opening more credit cards), they could insert their own optimization constraints that prevent these features from being modified.
\mypar{Transparent ML models for algorithmic recourse.}
Black-box ML models are popular across different domains.
To interpret these models, researchers have developed post-hoc techniques to identify feature importance~\cite[e.g.][]{ribeiroWhyShouldTrust2016,lundbergUnifiedApproachInterpreting2017} and generate CF examples~\cite[e.g.][]{leGRACEGeneratingConcise2020,mothilalExplainingMachineLearning2020}.
However, \citet{rudinStopExplainingBlack2019} argues that researchers and practitioners should use transparent ML models instead of black-box models in high-stake domains due to transparent models' high accuracy and explanation fidelity.
The design of \textsc{\textsf{GAM Coach}}{} is based on GAMs, a state-of-the-art transparent model~\cite{caruanaIntelligibleModelsHealthCare2015,wangPursuitInterpretableFair2020}.
Reflecting on our study, we would like to broaden the perspective on using transparent models.
We find that \textsc{\textsf{GAM Coach}}{} provides opportunities for everyday users to discover counterintuitive patterns in the ML model.
It implies that ML developers and researchers can also use \textsc{\textsf{GAM Coach}}{} as a penetration testing tool to detect potentially problematic behaviors in their models.
Note that both black-box and transparent learning methods would have learned these counterintuitive behaviors~\cite{caruanaIntelligibleModelsHealthCare2015}, but with a transparent model, developers can further \textit{vet} and \textit{fix} these behaviors.
As an example, an ML developer training a GAM can use \textsc{\textsf{GAM Coach}}{} to iteratively generate recourse plans for potential users (e.g., training data where the model gives unfavorable predictions).
If they identify strange suggestions, they can use existing interactive tools~\cite{noriInterpretMLUnifiedFramework2019,wangInterpretabilityThenWhat2022} to visualize the corresponding shape functions to pinpoint the root cause of these counterintuitive patterns, and then edit the shape function parameters to prevent them from appearing during recourse deployment.
Future research can leverage transparent models to distill guidelines to audit and fix models before recourse deployment.\looseness=-1
\mypar{Put users at the center.}
During the design and implementation of \textsc{\textsf{GAM Coach}}{}, we have encountered many challenges in transforming technically sound recourse plans into a seamless user experience.
As the end users of recourse tools are everyday people who are less familiar with ML and domain-specific concepts, one of our design goals is to help them understand necessary concepts and have a frictionless experience~(\aptLtoX[graphic=no,type=html]{\textbf{G4}}{\ref{item:g4}}).
\textsc{\textsf{GAM Coach}}{} aims to achieve this goal by following a progressive disclosure and details-on-demand design strategy~\cite{normanUserCenteredSystem1986,shneidermanEyesHaveIt1996} and presenting textual annotations to explain visual representations in the tool.
However, our user study suggests that a few users might still find it challenging to use \textsc{\textsf{GAM Coach}}{} at first~(\autoref{sec:user:usability}).
During our development process, we identify many edge cases that a recourse application would encounter in practice, such as features requiring integer values (e.g., \vcenteredhbox{\includegraphics[height=9pt]{figures/feature-fico}}), features using log transformations (e.g., \vcenteredhbox{\includegraphics[height=9pt]{figures/feature-income}}), or features less familiar to everyday users (e.g., \vcenteredhbox{\includegraphics[height=9pt]{figures/feature-utilization}}).
Our open-source implementation handles these edge cases, and we provide ML developers with simple APIs to add descriptions for domain-specific feature names in their own instances of \textsc{\textsf{GAM Coach}}{}.
However, these practical edge cases are rarely discussed or handled in the recourse research community, since (1) the field of algorithmic recourse is relatively nascent, and (2) the main evaluation criteria of recourse research are distance-based statistics instead of \textit{user experience}~\cite{keaneIfOnlyWe2021}.
Therefore, in addition to developing faster techniques to generate more actionable recourse plans, we hope future researchers engage with end users and incorporate user experience into their research agenda.
Besides interactive visualization, researchers can also explore alternative mediums to communicate and personalize ML recourse plans and model explanations, such as through a textual~\cite{ehsanRationalizationNeuralMachine2018} or multi-modal approach~\cite{hohmanTeleGamCombiningVisualization2019}.
%
\section{Conclusion}
As ML models are increasingly used to inform high-stakes decision-making throughout our everyday life, it is crucial to provide decision subjects ways to alter unfavorable model decisions.
In this work, we present \textsc{\textsf{GAM Coach}}{}, an interactive algorithmic recourse tool that empowers end users to specify their preferences and iteratively fine-tune recourse plans.
Our tool runs in web browsers and is open-source, broadening people's access to responsible ML technologies.
We discuss lessons learned from our realization of interactive algorithmic recourse and an online user study.
We hope our work will inspire future research and development of user-centered and interactive tools that help end users restore their human agency and eventually trust and enjoy ML technologies.
\begin{acks}
We thank Kaan Sancak for his support in piloting our user study.
We appreciate Harsha Nori, Paul Koch, Samuel Jenkins, and the InterpretML team for answering our questions about InterpretML.
We express our gratitude to our study participants for testing our tool and providing valuable feedback.
We are also grateful to our anonymous reviewers for their insightful comments and suggestions that have helped us refine our work.
This work was supported in part by a J.P. Morgan PhD Fellowship and gifts from Bosch and Cisco.
\end{acks}
\balance
\bibliographystyle{ACM-Reference-Format}
|
{
"arxiv_id": "2302.14135",
"language": "en",
"timestamp": "2023-03-01T02:02:12",
"url": "https://arxiv.org/abs/2302.14135",
"yymm": "2302"
} | \section{Introduction}
Let $T$ be a bounded operator on a Banach space ${\mathcal X}$. We study the
growth rate of $\|T^n\|$ when ${\mathcal X}=L^p(\mu)$, with $\mu$ $\sigma$-finite,
under the so-called Strong Kreiss condition. Let us describe the necessary background to state our main result.
\medskip
In \cite{Kreiss}, Kreiss introduced the following resolvent condition.
\begin{equation}\label{Kreiss}
\|R(\lambda,T)\|\le \frac{C}{|\lambda|-1} \quad |\lambda|>1\, ,
\end{equation}
where $R(\lambda,T)=(\lambda I-T)^{-1}$, and proved that, on a \emph{finite dimensional} Banach space, that condition is equivalent to power-boundedness of $T$, i.e. $\sup_{n\in {\mathbb N}}\|T^n\|<\infty$.
\medskip
Lubich and Nevanlinna \cite{LN} proved that \eqref{Kreiss} implies that $\|T^N\|=\mathcal{O}(N)$ and, by a result of Shields \cite{Shields}, this is optimal.
\medskip
Later, McCarthy \cite{McCarthy} considered the following strengthening of \eqref{Kreiss}
\begin{equation}\label{iter-Kreiss}
\|R(\lambda,T)^k\|\le \frac{C}{(|\lambda|-1)^k} \quad |\lambda|>1, \, k\in {\mathbb N}\, ,
\end{equation}
known as the strong Kreiss condition (or iterated Kreiss condition).
\medskip
Lubich and Nevanlinna \cite{LN} proved that \eqref{iter-Kreiss}
implies that $\|T^N\|=\mathcal{O}(\sqrt N)$ and they proved that this estimate is best possible for general Banach spaces.
\medskip
Nevanlinna \cite{Nevanlinna} proved that an operator $T$ satisfies the strong Kreiss condition if and only if there exists $L>0$ such that
\begin{equation}\label{strong-kreiss}
\|{\rm e}^{z T}\|\le L{\rm e}^{|z|} \quad \forall z\in {\mathbb C}\, .
\end{equation}
When ${\mathcal X}$ is a Hilbert space, Cohen et al. \cite{CCEL} proved that
\eqref{strong-kreiss} implies that $\|T^N\|=\mathcal{O}((\log N)^\kappa)$ and that this logarithmic control is best possible under \eqref{iter-Kreiss} in the Hilbert setting.
\medskip
They also obtained results in the Hilbert setting under
\eqref{Kreiss}, which were generalized to $L^p$ spaces (and more generally, Banach spaces with non trivial type and/or cotype)
by the second author \cite{Cuny}.
\medskip
In \cite{Cuny}, the situation of strongly Kreiss bounded operators
(i.e. operators satisfying \eqref{strong-kreiss}) on some $L^p$ space was left open and this is the purpose of the present work to study that situation. We obtain the following.
\begin{theorem}\label{main-theorem}
Let $T$ be a strongly Kreiss bounded operator on $L^p(\mu)$, $1<p<\infty$. There exist $C, \kappa>0$ such that for every
$N\in {\mathbb N}$, setting $\tau_p:=\left|\frac{1}{2}-\frac{1}{p}\right|$,
\begin{equation}\label{main-estimate}
\norm{T^N} \le C N^{\tau_p}\log^\kappa (N+1)\, .
\end{equation}
\end{theorem}
\medskip
\noindent {\bf Remark.} When $p=2$ we recover the optimal result from \cite{CCEL}.
\medskip
Our bound is essentially optimal, as follows from the next proposition, which is based on an example of Lubich and Nevanlinna \cite{LN}.
\begin{proposition}\label{example}
For every $1\le p\le \infty$, there exists a strongly Kreiss bounded operator on $\ell^p({\mathbb Z})$ and some constant $C_p\ge 1$ such that
for every $N\in {\mathbb N}$, $N^{|1/2-1/p|}/C_p \le \|T^N\|\le C_p N^{|1/2-1/p|}$.
\end{proposition}
Actually, we obtain a single operator acting simultaneously on all $\ell^p({\mathbb Z})$, $1\le p\le \infty$, with the above properties. If $S$
is the right shift, one may take $T=q(S)$ for some M\"obius transformation $q$ of the unit disk.
\medskip
Let us also mention (see Proposition \ref{positive}) that when $T$ is a \emph{positive} strongly Kreiss bounded operator on $L^p(\mu)$, it is possible to improve \eqref{main-estimate} when $p\in [1,4/5]$.
\medskip
\section{Auxiliary results}
Throughout the paper, we denote by ${\mathbb T}$ the set of complex numbers of modulus one and by $\lambda$ the Haar measure on ${\mathbb T}$.
\medskip
Given a bounded interval $I\subset {\mathbb Z}$, we define an operator $M_I$ by setting
$$
M_If:= \sum_{i\in I}c_i(f) \gamma^i \qquad \forall f\in L^1({\mathbb T}),
$$
where $f(\gamma)=\sum_{n\in {\mathbb Z}} c_n(f)\gamma^n$.
\medskip
Recall the definition of the weak $L^1$-norm on a measure space:
$$
\|h\|_{1,\infty,\mu}:=\sup_{t>0}t\,\mu(|h|\ge t)\, .
$$
\begin{proposition}\label{new-prop}
Let $1<p<\infty$. Let $p'=\min(2,p)$. There exists $D_p>0$ such that for every finite collection
$(I_\ell)_{1\le \ell\le L}$ of disjoint intervals of integers,
\begin{equation}\label{inequality1}
\Big\|\Big( \sum_{\ell=1}^L |M_{I_\ell}f|^2\Big)^{1/2}\Big\|_{L^p({\mathbb T})}
\le D_p L^{1/p'-1/2}\|f\|_{L^p({\mathbb T})}\, , \quad \forall f\in L^p({\mathbb T})\,.
\end{equation}
Furthermore, there exists $D_{1,\infty}>0$ such that for every finite collection
$(I_\ell)_{1\le \ell\le L}$ of (not necessarily disjoint) intervals of integers,
\begin{equation}\label{inequality-L1-1}
\Big\|\Big( \sum_{\ell=1}^L |M_{I_\ell}f|^2\Big)^{1/2}\Big\|_{L^{1,\infty}({\mathbb T})}
\le D_{1,\infty} L^{1/2}\|f\|_{L^1({\mathbb T})}\, , \quad \forall f\in L^1({\mathbb T})\, .
\end{equation}
\end{proposition}
\noindent {\bf Proof.} For $p\ge 2$, \eqref{inequality1} is the so-called Littlewood--Paley--Rubio de Francia inequality, see
\cite{Rubio}. Actually, Rubio de Francia proved the result on the real line. The result for the torus (with extensions) appears in Kislyakov-Parilov \cite{KP}.
\medskip
Inequality \eqref{inequality-L1-1} appears in the middle of page
6419 of \cite{KP}, see also Exercise 4.6.1 (a) page 337 of \cite{Grafakos} for a version on the real line.
\medskip
Then \eqref{inequality1} for $1<p<2$ follows by Marcinkiewicz interpolation, see Theorem 1.3.2 of \cite{Grafakos} page 31. \hfill $\square$
\medskip
\begin{corollary}\label{corollary}
Let $1< p<\infty$. Let $p''=\max(2,p)$. There exists $C_p>0$ such that for every finite collection
$(I_\ell)_{1\le \ell\le L}$ of disjoint and consecutive intervals of integers such that $\cup_{\ell =1}^LI_\ell =:I\subset [-N,N]$ for some $N\in {\mathbb N}$, and every $f\in L^p({\mathbb T})$,
\begin{align}\label{inequality3}
\|M_I f\|_{L^p({\mathbb T})} &
\le C_p L^{1/2-1/p''}\Big\|\Big( \sum_{\ell=1}^L |M_{I_\ell}f|^2\Big)^{1/2}\Big\|_{L^p({\mathbb T})} \nonumber \\
& \le C_p L^{1/2-1/p''}\Big(
\sum_{\ell=1}^L \|M_{I_\ell}f\|_{L^p({\mathbb T})}^{p'}\Big)^{1/p'}.
\end{align}
\end{corollary}
\noindent {\bf Proof.} Clearly, it is enough to assume that $f$ is a trigonometric polynomial with spectrum in $I$.
Let $g \in L^q({\mathbb T})$, with $q=p/(p-1)$. Notice that $q'=\min(p/(p-1),2)=p''/(p''-1)$ and that $1/q'-1/2=1/2-1/p''$. We have, by orthogonality,
the Cauchy--Schwarz inequality and H\"older's inequality,
\begin{align*}
\Big|\int_{\mathbb T} f\bar g \, d\lambda\Big| &=\Big|\int_{\mathbb T}\sum_{\ell=1}^L M_{I_\ell}f M_{-I_\ell}\bar g\, d\lambda\Big|\\
&\le \Big\|\Big(\sum_{\ell=1}^{L}|M_{I_\ell}f|^2\Big)^{1/2}\Big\|_{L^p({\mathbb T})}
\Big\|\Big(\sum_{\ell=1}^{L}|M_{-I_\ell}\bar g|^2 \Big)^{1/2}\Big\|_{L^q({\mathbb T})}\\
&\le C_p L^{1/2-1/p''}\Big\|\Big(\sum_{\ell=1}^{L}|M_{I_\ell}f|^2\Big)^{1/2}\Big\|_{L^p({\mathbb T})}\|\bar g\|_{L^q({\mathbb T})} \, ,
\end{align*}
where we used Proposition \ref{new-prop}.
Then \eqref{inequality3} follows by taking the supremum over $g\in L^q({\mathbb T})$ with $\|g\|_{L^q({\mathbb T})}=1$.
The last estimate follows by using that $x\mapsto x^{p/2}$ is subadditive when $p\le 2$ and Minkowski's inequality in
$L^{p/2}({\mathbb T})$ when $p\ge 2$. \hfill $\square$
\medskip
We deduce the following.
\medskip
\begin{corollary}\label{lemma}
Let $1 < p < \infty$. There exists $C_p>0$ such that for each $N\in {\mathbb N} $ and each $(x_1, \ldots, x_{N^2})\in
(L^p(\mu))^{N^2}$,
\begin{equation}\label{lemsum}
\Big\| \sum_{k=1}^{N^2} \gamma^kx_k\Big\|_{L^p({\mathbb T},L^p(\mu))}\, \leq C_p N^{1/2-1/p''}\left(\sum_{n=0}^{N-1}\Big\|\sum_{k=n^2+1}^{(n+1)^2}\gamma^kx_k\Big\|^{p'}_{L^p({\mathbb T},L^p(\mu))}\right)^{1/p'}.
\end{equation}
\end{corollary}
\begin{proof}
Let $N \in {\mathbb N}$ and $I_n := [n^2+1, (n+1)^2]$ for $n \in \{0,\cdots, N-1\}$. By Corollary \ref{corollary}, applied to $f=\sum_{k=1}^{N^2}a_k\gamma^k$ for any scalars $(a_1, \ldots, a_{N^2})$,
$$ \norm{\sum_{k=1}^{N^2} a_k\gamma^k}_{L^p({\mathbb T})} \le 2C_p N^{1/2-1/p''}\left(
\sum_{n=0}^{N-1} \norm{\sum_{k=n^2+1}^{(n+1)^2} a_k\gamma^k}_{L^p({\mathbb T})}^{p'}\right)^{1/p'}.
$$
If $1\le p \le 2$, \eqref{lemsum} follows Fubini's Theorem. If $p>2$ \eqref{lemsum} follows from Fubini's Theorem combined with Minkowski's inequality in $L^{p/2}(\mu)$.
\end{proof}
We conclude this section with a lemma that we will use several times in this paper.
\begin{lemma}\label{technical}
There exists $C>1$ such that for every $N\in {\mathbb N}$ and every
integer $ K\in [2 -2\sqrt N, 0]$,
\begin{gather}
\label{technical1}\frac{{\rm e}^N}{C \sqrt N}\le \frac{N^{N+K}}{(N+K)!}
\le \frac{C{\rm e}^N}{ \sqrt N}\\
\label{technical2}\sum_{N+2-2\sqrt N\le n\le N }\Big|
\Big(\sum_{n-\sqrt N\le k\le n} \frac{N^{k}}{k!}\Big)^{-1}-
\Big(\sum_{n+1-\sqrt N\le k\le n+1}\frac{N^k}{k!}\Big)^{-1}\Big|\le C{\rm e}^{-N}\, .
\end{gather}
In particular, the sequences $\Big({\rm e}^N\Big(\sum_{n-\sqrt N\le k\le n} \frac{N^{k}}{k!}\Big)^{-1}\Big)_{N+2-2\sqrt N\le n\le N}$ have variations that are bounded uniformly with respect to $N$.
\end{lemma}
\noindent {\bf Proof.} The upper bound of \eqref{technical1}
follows from Lemma 3.4 of \cite{CCEL} and the lower bound may be proved similarly. Then, \eqref{technical2} follows from the fact that, for $N+2-2\sqrt N\le n\le N$, writing $m:=[n+1-\sqrt N]$, we have
\begin{align*}
{\rm e}^{-N}\Big|\Big(\sum_{n-\sqrt N\le k \le n}\frac{N^{k}}{k!}\Big)^{-1}-\Big(\sum_{n+1-\sqrt N\le k \le n+1}\frac{N^{k}}{k!}\Big)^{-1}\Big| & \le {\rm e}^{-N}\frac{\frac{N^{m}}{
m!}+\frac{N^n}{n!}}{\Big(\sum_{n+1-\sqrt N\le k \le n}\frac{N^{k}}{k!}\Big)^2}\\
& \le \frac{\tilde C}{\sqrt N}\, ,
\end{align*}
where we used \eqref{technical1}.
\hfill $\square$
\medskip
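As a quick numerical illustration of \eqref{technical1} (a minimal Python sketch, not part of the argument; the values of $N$ are arbitrary), one can evaluate $\log\big(N^{N+K}/(N+K)!\big)$ stably through the log-Gamma function and compare it with $\log({\rm e}^N/\sqrt N)$:
\begin{verbatim}
import math

for N in (10, 100, 1000):
    lo = math.ceil(2 - 2 * math.sqrt(N))
    ratios = []
    for K in range(lo, 1):
        # log of N^{N+K}/(N+K)!, with lgamma(N+K+1) = log((N+K)!)
        log_term = (N + K) * math.log(N) - math.lgamma(N + K + 1)
        # divide by e^N/sqrt(N), in log form, then exponentiate
        ratios.append(math.exp(log_term - (N - 0.5 * math.log(N))))
    print(N, min(ratios), max(ratios))
\end{verbatim}
The printed ratios stay within a band $[1/C, C]$ independent of $N$, in accordance with \eqref{technical1}.
\medskip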
\section{Proof of Theorem \ref{main-theorem} and of Proposition
\ref{example}}
The proof of Theorem \ref{main-theorem} makes use of Fourier multipliers,
see pages 11 and 12 of \cite{Cuny} for a brief description of Fourier multipliers in UMD Banach spaces (in particular in $L^p$ spaces). Actually, we only make use of \emph{real valued} Fourier multipliers in our proofs and then use Fubini to obtain results for $L^p$-valued
Fourier multipliers.
\medskip
We shall use in the sequel the terminology of the Riesz theorem and the Stechkin theorem, already used in \cite{Cuny}. More precisely, let $X$ be a UMD space and $1<p<\infty$. We refer to the following as the \textit{Riesz theorem}: there exists $C_p>0$ such that for each interval $I \subset {\mathbb Z}$ we have
$$
\forall (c_i)_{i\in {\mathbb Z}}\subset X, \quad \norm{\sum_{i\in I}\gamma^ic_i}_{L^p({\mathbb T},X)} \le C_p\norm{\sum_{i\in {\mathbb Z}}\gamma^ic_i}_{L^p({\mathbb T},X)}.
$$
We refer to the following as the \textit{Stechkin theorem}: if $(a_n)_{n\in {\mathbb Z}}$ is a bounded monotone sequence of real numbers, then there exists $D_p>0$ such that
$$
\forall (c_n)_{n\in {\mathbb Z}}\subset X, \quad \norm{\sum_{n\in {\mathbb Z}}a_n\gamma^nc_n}_{L^p({\mathbb T},X)} \le D_p\norm{\sum_{n\in {\mathbb Z}}\gamma^nc_n}_{L^p({\mathbb T},X)}.
$$
\begin{lemma}\label{lemind1}
Let $T$ be a bounded operator on $L^p(\mu)$, $1< p<\infty$.
Assume that there exist $D>0$ and $ \alpha>0$ such that for every $N\in {\mathbb N}$ and every $x\in L^p(\mu)$,
$$
\norm{\sum_{N+2-2\sqrt N\le n\le N} \gamma^n T^n x}_{L^p({\mathbb T},L^p(\mu))} \le D N^\alpha \norm{x}_{L^p(\mu)}\, .
$$
Then, there exists $E_p>0$ independent of $D$ and $\alpha$ such
that, setting $\delta_p:= \frac12\left(\frac2{p'}-\frac1p\right)$, for every $N\in {\mathbb N}$ and every $x\in L^p(\mu)$, we have
$$
\norm{\sum_{1\le n \le N}\gamma^n T^n x}_{L^p({\mathbb T},L^p(\mu))}\le DE_p N^{\alpha +\delta_{p}}\norm{x}_{L^p(\mu)}
\, .
$$
\end{lemma}
\noindent {\bf Proof.} By Corollary \ref{lemma} and the assumption, for every $M\in {\mathbb N}$ and every $x\in L^p(\mu)$,
\begin{align*}
\norm{\sum_{1\le n \le M^2}\gamma^n T^n x}_{L^p({\mathbb T},L^p(\mu))}^{p'} & \le\big(C M^{1/2-1/p''}\big)^{p'}\left( \sum_{n=0}^{M-1}\norm{\sum_{k=n^2+1}^{(n+1)^2}\gamma^kT^kx}_{L^p({\mathbb T},L^p(\mu))}^{p'} \right) \\
& \le (CD)^{p'}M^{p'(1/2-1/p'')}\sum_{n=0}^{M-1}(n+1)^{2p'\alpha}\norm{x}_{L^p(\mu)}^{p'}\\
& \le (CD)^{p'} M^{p'(1/2-1/p'')+2p'\alpha+1}\norm{x}_{L^p(\mu)}^{p'}\,.
\end{align*}
Let $N\in {\mathbb N}$ and $M:= [\sqrt N]+1$. By the Riesz theorem, using that
$M^2\le 4N$, we infer that
$$
\Big\|\sum_{1\le n \le N}\gamma^n T^n x\Big\|_{L^p({\mathbb T},L^p(\mu))}\le C_p\Big\|\sum_{1\le n \le M^2}\gamma^n T^n x\Big\|_{L^p({\mathbb T},L^p(\mu))}\le C_pCD(4N)^{(1/2 -1/p'')/2 + \alpha+1/(2p')}\norm{x}_{L^p(\mu)} \, ,
$$
and the result follows. \hfill $\square$
\medskip
\begin{lemma}\label{lemind2}
Let $T$ be a strongly Kreiss bounded operator on $L^p(\mu)$, $1<p <\infty$. Assume that there exist a function $f : {\mathbb R}_+ \rightarrow (1,\infty)$ and $D>0$ such that for every $N\in {\mathbb N}$ and every $x\in L^p(\mu)$,
\begin{equation}\label{assumption}
\Big\|\sum_{ n=1}^ N \gamma^n T^n x\Big\|_{L^p({\mathbb T},L^p(\mu))} \le D f(N) \|x\|_{L^p(\mu)}\, .
\end{equation}
Then, there exists $E_p>0$ independent of $D$ and $f$ such
that for every $N\in {\mathbb N}$ and every $x\in L^p(\mu)$,
$$
\Big\|\sum_{N+2-2\sqrt N\le n \le N}\gamma^n T^n x\Big\|_{L^p({\mathbb T},L^p(\mu))}\le DE_p f(4\sqrt{N}) \|x\|_{L^p(\mu)} \, .
$$
\end{lemma}
\noindent {\bf Proof.} The proof is similar to the one of Lemma 4.7 of \cite{CCEL}.
Let $x \in L^{p}(\mu)$ and $M_{N,\gamma} = \sum_{n = 1}^{4\sqrt{N}} \gamma^nT^n$.
First, since $T$ is strongly Kreiss bounded, using assumption \eqref{assumption}, we have
\begin{equation}\label{exp1}
\norm{ e^{\gamma N T}M_{N,\gamma}x}_{L^p({\mathbb T},L^p(\mu))} \le Ce^N\norm{ M_{N,\gamma}x}_{L^p({\mathbb T},L^p(\mu))} \le CDe^Nf(4\sqrt{N})\norm{x}_{L^p(\mu)} .
\end{equation}
Furthermore, we have
\begin{equation*}
e^{\gamma NT}M_{N,\gamma}x = \sum_{1\le n \le 4\sqrt{N}}\gamma^n{T}^nx\sum_{0\le k \le n}\frac{N^k}{k!} + \sum_{ n \ge 4\sqrt{N}}\gamma^n{T}^nx\sum_{n-\sqrt{N}\le k \le n}\frac{N^k}{k!}.
\end{equation*}
For every $N$ large enough (such that $N+2-2\sqrt N\ge 4\sqrt N$), using the Riesz theorem, we have
\begin{align*}
\norm{ e^{\gamma N T}M_{N,\gamma}x}_{L^p({\mathbb T},L^p(\mu))} & \ge
\frac{1}{C_p} \norm{ \sum_{ N+2 -2 \sqrt{N} \le n \le N }\gamma^n{T}^nx\sum_{n-\sqrt{N}\le k \le n}\frac{N^k}{k!}}_{L^p({\mathbb T},L^p(\mu))} .
\end{align*}
By Lemma \ref{technical} (in particular the bounded variation property) and the Stechkin theorem, there exists $C'>0$ independent of $N$ such that
\begin{equation}\label{exp2}
\norm{ \sum_{ N+2 -2 \sqrt{N} \le n \le N }\gamma^n{T}^nx\sum_{n-\sqrt{N}\le k \le n}\frac{N^k}{k!}}_{L^p({\mathbb T},L^p(\mu))} \ge \frac{{\rm e}^{N}}{C'} \norm{ \sum_{ N+2 -2 \sqrt{N} \le n \le N }\gamma^n{T}^nx}_{L^p({\mathbb T},L^p(\mu))}.
\end{equation}
Combining \eqref{exp1} and \eqref{exp2}, we get the desired result.
\hfill $\square$
\medskip
Combining Lemmas \ref{lemind1} and \ref{lemind2}, we easily derive the following.
\medskip
\begin{corollary}\label{final-corollary}
Let $T$ be a strongly Kreiss bounded operator on $L^p(\mu)$, $1<p <\infty$. Assume that there exist $D>0$ and $ \alpha>0$ such that for every $N\in {\mathbb N}$ and every $x\in L^p(\mu)$,
$$
\norm{\sum_{1\le n\le N} \gamma^n T^n x}_{L^p({\mathbb T},L^p(\mu))} \le D N^\alpha \norm{x}_{L^p(\mu)}\, .
$$
Then, there exists $E_p>0$, independent of $D$ and $\alpha$, such that
for every $N\in {\mathbb N}$ and every $x\in L^p(\mu)$,
$$
\norm{\sum_{1\le n\le N} \gamma^n T^n x}_{L^p({\mathbb T},L^p(\mu))} \le DE_p N^{\alpha/2 +\delta_p} \norm{x}_{L^p(\mu)}\, .
$$
\end{corollary}
We are now ready to prove Theorem \ref{main-theorem}.
\begin{proof}[Proof of Theorem \ref{main-theorem}]
Since $T$ is strongly Kreiss bounded, it is known (see pages 1--2
of \cite{CCEL}) that there exists $C>0$ such that for every $N\in {\mathbb N}$ and every $x\in L^p(\mu)$,
$$
\|\sum_{n=1}^N \gamma^n T^n x\|_{L^p(\mu)} \le C N \|x\|_{L^p(\mu)} \quad \forall \gamma\in {\mathbb T}\, .
$$
Applying Corollary \ref{final-corollary} inductively, starting from the above estimate (so with $\alpha=1$), we see that for every $N\in {\mathbb N}$ and every $K \in {\mathbb N}_0=\{0,1,\ldots\}$,
$$
\norm{\sum_{1\le n\le N} \gamma^n T^n}_{L^p({\mathbb T},L^p(\mu))} \le C E_p^{K}N^{2\delta_p +(\alpha-2\delta_p)2^{-K}} .
$$
Without loss of generality, we may and do assume that $N\ge 3$. Let $K\ge 0$ be the integer such that $ 2^K \le \frac{\log N}{\log(\log N)} \le 2^{K+1}$. Then $K \le \log (\log N)/\log 2$, so that $E_p^{K} \le {\rm exp}(\log E_p \log (\log N)/\log 2) = (\log N)^{\log E_p/\log 2}$. Moreover, there exists $D>0$ such that $N^{(\alpha-2\delta_p)2^{-K}}\le {\rm e}^{D\log \log N} = (\log N)^D$.
Combining those estimates, we infer that, there exists
$C>0$ and $\kappa>0$ such that for every $N\in {\mathbb N}$,
\begin{equation}\label{intermediary-estimate}
\norm{\sum_{1\le n\le N} \gamma^n T^n}_{L^p({\mathbb T},L^p(\mu))}
\le C N^{2\delta_p}(\log (N+1))^\kappa\, .
\end{equation}
Using that $T^*$ is strongly Kreiss bounded on $L^q(\mu)$, $q=p/(p-1)$, we obtain a similar estimate for $T^*$.
Hence, applying once more Lemmas \ref{lemind1} and \ref{lemind2}, we see that there exist $C, \kappa>0$ (larger than the previous ones) such that
for every $x\in L^p(\mu)$ and $x^*\in L^q(\mu)$,
\begin{align*}
(1+2[\sqrt N]) |\langle x^*, T^{N+1}x\rangle| &=
\Big|\int_{\mathbb T} \Big\langle \sum_{1\le n\le 1+ 2[\sqrt N]}
\gamma^n T^{*n}x^*, \sum_{1\le m\le 1+ 2[\sqrt N]}\bar\gamma^m
T^{N+1-m} x\Big\rangle \, d\lambda \Big| \\
&\le \Big\|\sum_{1\le n\le 1+2[\sqrt N]} \gamma^n T^{*n} x^*\Big\|_{L^q({\mathbb T},L^q(\mu))} \Big\|\sum_{N-2[\sqrt N]\le n\le N} \gamma^n T^n x\Big\|_{L^p({\mathbb T},L^p(\mu))}\\
&\le CN^{\delta_p+\delta_q} (\log (N+1))^\kappa \|x\|_{L^p(\mu)}\|x^*\|_{L^q(\mu)}\\
&= C N^{1/\min(p,q)} (\log (N+1))^\kappa\|x\|_{L^p(\mu)}\|x^*\|_{L^q(\mu)}\, ,
\end{align*}
and the result follows by taking the supremum over $x$ and $x^*$ with
$\|x\|_{L^p(\mu)}=\|x^*\|_{L^q(\mu)}=1$.
\end{proof}
\medskip
Let us now prove Proposition \ref{example}. Let $0<a<1$ and define
$q_a(z):=\frac{z-a}{1-az}$. Let $S$ be the right shift, acting on $\ell^p({\mathbb Z})$, $1\le p\le \infty$. Set $T:=q_a(S)$.
\medskip
It has been proved in \cite{LN} that $T$ is strongly Kreiss bounded on $\ell^\infty({\mathbb Z})$. Their proof works equally well to show that $T$ is strongly Kreiss bounded on $\ell^p({\mathbb Z})$ for every $1\le p\le \infty$.
\medskip
Moreover, as noticed in \cite{LN}, it follows from the book by Brenner, Thom\'ee and Wahlbin \cite{BTW} that
$$N^{|1/2-1/p|}/C_p \le \|T^N\|\le C_p N^{|1/2-1/p|}\, ,$$
for some $C_p>0$.
\medskip
In \cite{LN}, the authors refer to Theorem 3.1, page 102, of \cite{BTW}, while that theorem is actually concerned with operators acting on
$L^p({\mathbb R},{\rm Leb})$. However, using techniques from \cite{BTW}, pages 19--29 (in particular the bounds (4.3) page 21), it is possible to adapt the proof of Theorem 3.1, page 102, to our setting.
\medskip
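As an aside, the growth $\|T^N\|\asymp N^{|1/2-1/p|}$ can be observed numerically for $p=1$. Here is a minimal sketch (our own illustration, with arbitrary parameters): we approximate the shift on ${\mathbb Z}$ by the cyclic shift on ${\mathbb Z}_n$ with $n$ large, so that $T^N$ is a circulant matrix whose $\ell^1\to\ell^1$ norm is the $\ell^1$ norm of its first column, computable by an inverse FFT of the symbol $q_a^N$.
\begin{verbatim}
import numpy as np

n, a = 1 << 14, 0.5
z = np.exp(2j * np.pi * np.arange(n) / n)     # spectrum of the cyclic shift
qa = (z - a) / (1 - a * z)                    # symbol of T = q_a(S); |qa| = 1
for N in (10, 100, 1000, 10000):
    col = np.fft.ifft(qa ** N)                # first column of the circulant T^N
    print(N, np.abs(col).sum() / np.sqrt(N))  # ratio ~ constant: growth N^{1/2}
\end{verbatim}
For $p=1$ one has $|1/2-1/p|=1/2$, and the printed ratios indeed stabilize (up to boundary effects of the cyclic truncation).
\medskip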
\section{Some particular strongly Kreiss bounded operators}
We consider here the case of \emph{positive} strongly Kreiss bounded operators on $L^p(\mu)$ (we do not require anymore $\mu$ to be $\sigma$-finite) or \emph{absolutely} strongly Kreiss bounded operators (on any Banach space). The basic idea used to obtain \eqref{arnold-coine} (in $L^p$-spaces) has already been used
by the first author \cite{AC}, where Kreiss bounded operators were considered.
\medskip
Let us begin with a general result.
\begin{proposition}\label{general}
Let $1\le p \le 2$ and let $T$ be a bounded operator on a Banach space $X$. Assume that there exist $C>0$ and $\alpha >0$ such that for each $N\in {\mathbb N}$,
\begin{equation}\label{general-assumption1}
\norm{T^N} \le CN^{\alpha}
\end{equation}
and for every $x \in X$
\begin{equation}\label{general-assumption2}
\sum_{N-2\sqrt{N} \le n \le N} \norm{T^nx}^p \le CN^{p/2}\norm{x}^p.
\end{equation}
Then, $\|T^N\|=\mathcal{O}(N^{1/q}(\log N)^\kappa)$, for some $\kappa>0$, where $q=p/(p-1)$.
\end{proposition}
\begin{proof}
We start with the following observation. For every $x\in X$ and
$x^*\in X^*$,
\begin{align}\label{duality1}
(1+2\sqrt N)|\langle x^*, T^{N+1}x\rangle |^p & =\sum_{1\le n\le 1+2\sqrt N}| \langle T^{*n}x^*, T^{N+1-n}x\rangle_{X^*,X}|^p\\ \nonumber
& \le \|x^*\|^p\max_{1\le n\le 1+2\sqrt N}\|T^{*n}\|^p
\sum_{N-2\sqrt N\le n\le N}\|T^{n}x\|^p\, .
\end{align}
Taking the supremum over $x,x^*$ of norm 1, using that $\|T^{*n}\|=\|T^n\|$ and assumption \eqref{general-assumption2}, we infer that,
for every $N\in {\mathbb N}$
\begin{align*}
\|T^{N}\| & \le N^{-1/(2p)}\max_{1\le n\le 3\sqrt N} \|T^n\|\sup_{\|x\| \le 1} \Big(\sum_{N-2\sqrt N\le n\le N}\|T^{n}x\|^p\Big)^{1/p} \\
& \le CN^{1/(2q)}\max_{1\le n\le 3\sqrt N}\|T^n\|.
\end{align*}
Combining the above estimate with \eqref{general-assumption1}, we get that
$$
\|T^N\|\le C^2 3^{\alpha} N^{\alpha/2 +1/(2q)}\, .
$$
Iterating the above, we get that for all integers $N,K\in {\mathbb N}$,
$$
\|T^N\|\le 3^{K \alpha} C^{K+1} N^{2^{-K}\alpha+(1-2^{-K})/q}\, ,
$$
and we conclude as in the proof of Theorem \ref{main-theorem}.
\end{proof}
\medskip
From Proposition \ref{general} we deduce a better bound than in Theorem \ref{main-theorem} for a strongly Kreiss bounded positive operator on $L^p(\mu)$ when $p\in [1,4/3)\cup (4,+\infty)$ (the cases where $p = 1$ and $p=\infty$ are discussed below).
\begin{proposition}\label{positive}
Let $T$ be a positive operator that is strongly Kreiss bounded on $L^p(\mu)$, $1\le p<\infty$. Then, $\|T^N\|=\mathcal{O}(N^{1/\bar p}(\log N)^\kappa)$, for some $\kappa>0$,
where $\bar p=\max(p,p/(p-1))$.
\end{proposition}
The proof is straightforward, using the following lemma and the fact that every strongly Kreiss bounded operator satisfies \eqref{general-assumption1} with $\alpha = 1/2$.
\begin{lemma}
Let $1\le p <\infty$. Any positive strongly Kreiss bounded operator $T$ on $L^p(\mu)$ satisfies \eqref{general-assumption2} for every $x \in L^p(\mu)$.
\end{lemma}
\begin{proof}
Using the fact that $\|\cdot \|_{\ell^p}\le \|\cdot \|_{\ell^1}$ and
Lemma \ref{technical}, we see that there exists $C>0$ such that for every $N\in {\mathbb N}$ and every $x\in L^p(\mu)$ with $x\ge 0$,
\begin{gather}\label{arnold-coine}
\Big( \sum_{n\ge 0}\frac{N^n T^n x}{n!}\Big)^p \ge \sum_{n\ge 0} \Big(\frac{N^n T^n x}{n!}\Big)^p \ge \frac{{\rm e}^{pN}}{C^p N^{p/2}} \sum_{N-2\sqrt N\le n\le N} (T^n x)^p\, .
\end{gather}
Integrating with respect to $\mu$ and using strong Kreiss boundedness, we infer that $T$ satisfies \eqref{general-assumption2} for every $x\in L^p(\mu)$ with $x\ge 0$.
By splitting a general $x$ into its positive and negative parts, \eqref{general-assumption2} remains true (with a larger constant) for every $x\in L^p(\mu)$.
\end{proof}
\medskip
We turn now to the special case of absolutely strongly Kreiss bounded operators. Let $T$ be a bounded operator on $X$. We say that $T$ is absolutely strongly Kreiss bounded if there exists $C>0$ such that
$$
\sum_{n=0}^{\infty}\frac{r^n}{n!}\norm{T^nx} \le Ce^r\norm{x}, \quad r>0,x\in X.
$$
Such an operator is clearly strongly Kreiss bounded and, using Lemma \ref{technical}, it satisfies \eqref{general-assumption2} with $p = 1$ for every $x \in X$. We can then apply Proposition \ref{general} to conclude that $\norm{T^N}$ admits a logarithmic bound.
\begin{proposition}\label{absstrong}
Let $T$ be an absolutely strongly Kreiss bounded operator on $X$. Then, there exists $\kappa >0$ such that
\begin{equation}\label{absstrongbound}
\norm{T^N} = \mathcal{O}((\log N)^{\kappa}).
\end{equation}
\end{proposition}
\begin{remark}
When $X =L^p(\mu)$, $1\le p <\infty$, the bound \eqref{absstrongbound} is sharp. Indeed, according to \cite[Remark 1, page 16]{CCEL}, for any $\kappa >0$ there exists an absolutely strongly Kreiss bounded operator $T$ on $L^p(\mu)$ such that $\norm{T^N} \asymp (\log N)^{\kappa}$.
\end{remark}
\medskip
We now discuss the case of positive strongly Kreiss bounded operators on (AL)- and (AM)-spaces. We refer to \cite{Meyer-N}, Section 2, for more details. A Banach lattice $X$ is an (AL)-space if the norm is additive on the positive cone of $X$, that is,
\begin{equation}\label{ALnorm}
\forall x,y \in X_+, \ \| x+y \| = \|x\| + \|y\|.
\end{equation}
A Banach lattice $X$ is an (AM)-space if the norm on $X$ satisfies
$$
\forall x,y \in X_+, \ \| \sup(x,y) \| = \sup(\|x\|,\|y\|).
$$
If $X$ is an (AM)-space, then $X^*$ is an (AL)-space.
It is known that an (AL)-space is isometrically isomorphic to some $L^1$ space, and that an (AM)-space with unit is isometrically isomorphic to some $C(K)$, where $K$ is a compact space. We are now ready to prove the following statement.
\begin{proposition}\label{ALAMest}
Let $T$ be a positive strongly Kreiss bounded operator on an (AL)-space or an (AM)-space. Then there exists $\kappa >0$ such that
\begin{equation}
\norm{T^N} = \mathcal{O}(\log(N)^{\kappa}).
\end{equation}
\end{proposition}
\noindent {\bf Remark.} By the proposition, since $L^\infty(\mu)$ is an (AM)-space, we see that Proposition
\ref{positive} remains true for $p=\infty$.
\begin{proof}
If $T$ is a positive strongly Kreiss bounded operator on an (AL)-space, then using \eqref{ALnorm} it is straightforward that $T$ is absolutely strongly Kreiss bounded, and we can then conclude with Proposition \ref{general}.
\medskip
If $T$ is a positive strongly Kreiss bounded operator on an (AM)-space, then $T^*$ is a positive strongly Kreiss bounded operator on an (AL)-space, and we conclude by the above, since $\norm{T^N} = \norm{(T^*)^N}$.
\end{proof}
\medskip
\begin{remark}
In view of this last section, one can ask whether the bound of Proposition \ref{positive} can be improved to a logarithmic bound. More precisely, for a positive strongly Kreiss bounded operator $T$ on $L^p(\mu)$ with $ 1<p<\infty$, does there exist $\kappa>0$ such that $\norm{T^N} = \mathcal{O}((\log N)^{\kappa})$?
\end{remark}
|
{
"arxiv_id": "2302.14167",
"language": "en",
"timestamp": "2023-03-01T02:03:28",
"url": "https://arxiv.org/abs/2302.14167",
"yymm": "2302"
} | \section{Introduction}
Waveguide quantum electrodynamics, focused on light-matter interactions in arrays of natural or artificial atoms that are coupled to the waveguide, is a rapidly developing field of quantum optics~\cite{Sheremet,Roy2017}. Multiple experimental platforms to study fundamental physics models and engineer atom-photon interactions have now emerged.
Practical applications such as detection, processing~\cite{Prasad2020} and generation~\cite{Kannan2023} of quantum photon states would ultimately require devices operating in the pulsed regime. Hence, there is a fundamental need to understand the time-dependent atom-photon interaction in this system.
In fact, a general scattering matrix consideration of the time-dependent photon scattering has already been performed in the first works in the field~\cite{Yudson1984,Yudson2008,Shen2007}. More recent studies have also emphasized the importance of the formation of bound photon states in the time-dependent photon transmission~\cite{Mahmoodian2020,Chen2020,Calajo2022}, complex correlated multi-photon states~\cite{Iversen2022} as well as have taken into account the time-entangled nature of the photon pulse~\cite{Kiilerich2020,Yang2022}. However, there is one interesting aspect of the time-dependent photon transmission through the arrays, that has so far not been analyzed in detail to the best of our knowledge. That is the role of different collective states of the array. In particular, it is now well known that spatially separated arrays have a complicated structure of collective single- and double-excited states~\cite{Molmer2019,Ke2019,Poshakinskiy2020,Poshakinskiy2021dimer}. These states can be distinguished by their spontaneous decay rate, which can be either enhanced (for superradiant states) or suppressed (for subradiant states) due to the interference between the photons emitted from different atoms. It is then natural to examine the signatures of these states in the photon time dynamics. We have recently analyzed the case of continuous wave excitation in detail in Refs.~\cite{PoshakinskiyBorrmann,NewKornovan}. Here we focus on the case of pulse excitation through the waveguide, see Fig.~\ref{fig:1}. We show that the measurement of the time-dependent joint detection probability of transmitted photons provides additional information about the double-excited collective modes that is not captured by the continuous excitation scheme.
We hope that this work will be useful both as a first step for potential future analysis of a multi-photon pulsed regime and as a helpful tool for analysis of ongoing and coming experiments~\cite{Brehm2022,LeJeannic2022}.
The rest of the manuscript is organized as follows. Section~\ref{sec:model} outlines our
model and the calculation approach. Next, we present the results for the two-photon wavefunction in Sec.~\ref{sec:two}. Our main results are summarized in Sec.~\ref{sec:summary}. Appendix~\ref{sec:Appendix} is reserved for auxiliary theoretical details.
\section{Model and calculation approach}\label{sec:model}
We consider a basic WQED setup with $N$ two-level qubits, periodically spaced near a waveguide and interacting via a waveguide mode.
\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{figure1.pdf}
\caption{Schematics of a two-photon pulse propagating through an array of qubits coupled to a waveguide. Here, $\omega_0$ is the resonant frequency of the qubits and $\gamma_{\text{1D}}$ is the spontaneous emission rate into the guided mode.
} \label{fig:1}
\end{figure}
The system is schematically shown in Fig.~\ref{fig:1} and can be described by the following effective Hamiltonian \cite{Caneva2015,Ke2019,Sheremet},
\begin{equation}
H=-\rmi \gamma_{\rm 1D}\sum_{n,m=1}^N\sigma_n^\dag \sigma^{\vphantom{\dag}}_m \e^{\rmi (\omega_0/c)|z_m-z_n|}\:.
\end{equation}
This Hamiltonian assumes the usual Markovian and rotating-wave approximations. The energy is counted from the atomic resonance $\hbar\omega_0$, $c$ is the speed of light, and $z_m$ are the qubit coordinates along the waveguide. For a periodic array, where $z_{m+1}-z_m=d$, the period can be conveniently characterized by a single dimensionless parameter, the phase
$\varphi=\omega_0d/c\equiv 2\pi d/\lambda_0$ gained by light between two neighbouring two-level atoms separated by the distance $d$. The raising operators $\sigma_m^\dag$ obey the usual spin-1/2 operator algebra:
$\sigma_m^2=0$, $\sigma_m^{\vphantom{\dag}}\sigma_m^\dag+\sigma_m^\dag \sigma^{\vphantom{\dag}}_m=1$, $[\sigma_m,\sigma_n]=0$ for $m \ne n$. The parameter $\gamma_{\rm 1D}$ is the radiative decay rate of a single atom into the waveguide. It makes the effective Hamiltonian non-Hermitian.
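For concreteness, the single-excitation spectrum discussed in Sec.~\ref{sec:two} (cf. Fig.~\ref{fig:3}) can be obtained by a direct diagonalization of the $N\times N$ matrix $H_{mn}$. A minimal numerical sketch (Python/NumPy; the parameter values below are just an example):
\begin{verbatim}
import numpy as np

N, phi, gamma = 4, 0.1, 1.0    # qubit number, phase, energies in gamma_1D
m = np.arange(N)
# H_mn = -i gamma_1D exp(i phi |m - n|), energy counted from omega_0
H = -1j * gamma * np.exp(1j * phi * np.abs(m[:, None] - m[None, :]))
for e in sorted(np.linalg.eigvals(H), key=lambda e: e.imag):
    # Re e: collective frequency shift; -Im e sets the radiative decay;
    # the brightest eigenvalue approaches -i*N*gamma as phi -> 0
    print(f"shift = {e.real:+.4f}, -Im = {-e.imag:.4f}")
\end{verbatim}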
We are interested in the scattering of a general two-photon state, characterized by a
time-dependent wave function $\psi_{t_1,t_2}^{\text{in}}$ from such a setup.
To this end we use the known general technique \cite{Laakso2014,Fang2014,Poshakinskiy2016,Ke2019} to calculate the two-photon scattering matrix in the frequency domain $S(\omega'_{1},\omega'_{2}\leftarrow\omega_{1},\omega_{2})$. We start with the Fourier transform of the input state:
\begin{equation}\label{eq:Fourier}
\psi_{\omega_1,\omega_2}^{\text{in}}=\iint dt_1dt_2e^{\mathrm{i}\omega_1t_1}e^{\mathrm{i}\omega_2t_2}\psi_{t_1,t_2}^{\text{in}}\:.
\end{equation}
The output state is then given by
\begin{equation}
\begin{gathered} \psi_{\omega_1',\omega_2'}^{\text{out}}=\frac12\iint \frac{d\omega_1d\omega_2}{(2\pi)^2} S(\omega'_{1},\omega'_{2}\leftarrow\omega_{1},\omega_{2})
\psi_{\omega_1,\omega_2}^{\text{in}}\:,
\end{gathered}
\end{equation}
and then we perform the inverse Fourier transform
\begin{equation}\label{eq:psit}
\psi_{t_1,t_2}^{\text{out}}=\iint \frac{d\omega_1'd\omega_2'}{(2\pi)^2}e^{-\mathrm{i}\omega_1't_1}e^{-\mathrm{i}\omega_2't_2}\psi_{\omega_1',\omega_2'}^{\text{out}}\:.
\end{equation}
The detailed derivation of the scattering matrix for an arbitrary number and positions of the qubits, mostly following Refs.~\cite{Fang2014,Ke2019}, can be found e.g. in Ref.~\cite{Sheremet}. Here we just recall the answer:
\begin{align} \label{eq:S} &S(\omega'_{1},\omega'_{2}\leftarrow\omega_{1},\omega_{2})=\\&(2\pi)^2t_{\omega_1}t_{\omega_2}\nonumber[\delta(\omega_1-\omega_1')\delta(\omega_2-\omega_2')+\delta(\omega_1-\omega_2')\delta(\omega_2-\omega_1')]\nonumber\\
&+2\gamma_{\text{1D}}^2\sum_{m,n=1}^Ns_m^-(\omega'_{1})s_m^-(\omega'_{2})[\Sigma^{-1}]_{mn}s_n^+(\omega_{1})s_n^+(\omega_{2})\nonumber\\
&\times 2\pi\delta(\omega_1+\omega_2-\omega_1'-\omega_2')\nonumber\:,
\end{align}
where
\begin{equation}\label{eq:Sigma}
\Sigma_{mn}(\varepsilon)=\int G_{mn}(\omega)G_{mn}(2\varepsilon-\omega)\frac{d\omega}{2\pi}
\end{equation}
is the self-energy matrix for double-excited states with $G_{ij}$ being the Green function for a single excitation of the array, given by the inverse of the following matrix:
\begin{multline}\label{eq:G}
[G^{-1}(\omega)]_{mn}\equiv \omega \delta_{mn}-H_{mn}\\
=(\omega-\omega_0)\delta_{mn}+\mathrm{i}\gamma_{\text{1D}}e^{\mathrm{i}(\omega_0/c)|z_m-z_n|}\:.
\end{multline}
The coefficients
\begin{equation}
s_m^{\pm}(\omega)=\sum_n G_{mn}e^{{\pm}\mathrm{i}(\omega_0/c)z_n}
\end{equation}
describe the coupling of the array with the incoming and outgoing plane waves.
The first term in the scattering matrix \eqref{eq:S} accounts for the independent photon transmission with the transmission coefficients given by
\begin{equation}
t_\omega=1-\rmi\gamma_{\rm 1D}\sum_{mn} G_{mn}e^{\mathrm{i}(\omega_0/c)(z_n-z_m)}\:.
\end{equation}
The second term in Eq.~\eqref{eq:S} accounts for the interaction between the photons induced by the array.
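For illustration, the single-photon transmission entering the first term of Eq.~\eqref{eq:S} is straightforward to evaluate numerically from the matrix inverse defining $G$. A short sketch (example parameters; a periodic array with $z_m=md$ is assumed):
\begin{verbatim}
import numpy as np

N, phi, gamma = 4, 0.1, 1.0
m = np.arange(N)
H = -1j * gamma * np.exp(1j * phi * np.abs(m[:, None] - m[None, :]))
phase = np.exp(1j * phi * (m[None, :] - m[:, None]))  # e^{i phi (n - m)}
for omega in (-2.0, 0.0, 2.0):    # detuning from omega_0 in units of gamma_1D
    G = np.linalg.inv(omega * np.eye(N) - H)   # single-photon Green function
    t = 1 - 1j * gamma * (G * phase).sum()     # transmission coefficient
    print(f"omega = {omega:+.1f}: |t|^2 = {abs(t)**2:.4f}")
\end{verbatim}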
For a general pulse shape, the integration over time and frequency can be performed only numerically and is rather tedious. However, it is greatly simplified for a short pulse when the input state $\psi_{t_1,t_2}^{\rm in}$ can be approximated by a product of two $\delta$-functions,
\begin{equation}
\psi_{t_1,t_2}^{\rm in}=\delta(t_1)\delta(t_2)\label{eq:psiin}\:.
\end{equation}
Physically, this means that the pulse duration is significantly shorter than the inverse decay rate of the fastest eigenmode of the system, which is on the order of $1/(N\gamma_{\rm 1D})$. From now on we restrict ourselves to this case. The calculation procedure is detailed in Appendix~\ref{sec:Appendix}.
\section{Transmitted pulse}\label{sec:two}
We start this section by analyzing in detail the wave function of the pulse transmitted through a subwavelength array of fixed length $N=4$. Next, in Sec.~\ref{sec:N}, we examine how the effective time it takes the system to scatter photons depends on the array length $N$.
\subsection{Single- and double-excited states in the transmitted pulse}\label{sec:wavefunction}
\begin{figure*}[t]
\includegraphics[width=0.9\textwidth]{figure2.pdf}
\caption{Incoherent part of the transmitted pulse for the incident delta-pulse in the time domain. Parameters of the system: $N=4$, $\varphi=0.1$. Times are normalized by $1/\gamma_{\rm 1D}$. (a) Schematics of the various types of photon states in the transmitted pulse. (b) Dark red solid curve: probability distribution to detect two photons at the same moment in time $t_1$. Black solid curve: probability distribution to detect one photon at the time $t_1$ while the second photon is detected at time $t_2=0.2$. Dotted curves have been calculated with all two-particle modes and only the superradiant single-particle mode; the red one for the times $t_1=t_2$, the gray one for the fixed $t_2=0.2$. (c) Schematics of the various types of photon states in the transmitted pulse. (d) Dependence of the probability of two-photon detection on the time difference $t_1-t_2$ for fixed $t_1+t_2=10$. The solid green curve has been calculated exactly, and the dashed orange curve includes only the superradiant single-excited state and all double-excited ones.} \label{fig:2}
\end{figure*}
Figure~\ref{fig:2} shows the incoherent part of the two-photon wavefunction given by Eq.~\eqref{PsiIncoh} calculated under the incidence of the two-photon
$\delta$-pulse Eq.~\eqref{eq:psiin}. The incoherent part has quite a complicated time dependence with several distinct time scales. The shortest time scale $t\sim 1/(N\gamma_{\rm 1D})$ corresponds to the superradiant state where the constructive interference enhances the emission rate. The longest time scale is on the order of
$N^3/(\varphi^2 \gamma_{\rm 1D})$~\cite{Molmer2019} and corresponds to the excitation of subradiant states.
In order to represent different timescales better, we show the wavefunction at large and short times in Fig.~\ref{fig:2}(a) and Fig.~\ref{fig:2}(c) separately.
For reference, we also present in Fig.~\ref{fig:3} the complex spectrum of the system eigenfrequencies, calculated for the same system parameters as in Fig.~\ref{fig:2}.
Orange dots correspond to the single-excited states. They have been obtained as eigenvalues of the effective Hamiltonian matrix $H_{mn}$, defined in Eq.~\eqref{eq:G}. The brightest state, with the largest imaginary part, corresponds to the superradiant state with the decay rate $\approx N\gamma_{\rm 1D}$. Three other dots correspond to the single-excited subradiant states.
Blue dots show the spectrum of double-excited states. It has been calculated following Ref.~\cite{Ke2019}. The eigenfrequencies were found by diagonalizing Eq.~\eqref{eq:Sh2}, given in the Appendix. The calculation demonstrates that there exists one superradiant mode, two subradiant ones and also three modes with decay rates on the order of $\gamma_{\rm 1D}$. As discussed in Ref.~\cite{Ke2019}, these three eigenstates could be understood as the ``twilight'' states, which are a product of the wavefunction with one photon being in the bright state and the other one being in the subradiant one.
Our calculation approach, outlined in detail in the Appendix~\ref{sec:Appendix}, allows us to evaluate the contributions from various single- and double-excited eigenstates into the total transmitted wavefunction $\psi(t_1,t_2)$ separately. Generally, the single-excited states manifest themselves in the dependence on $t_1$ and $t_2$, that is along the edges of the color map in Fig.~\ref{fig:2}(a).
The double-excited states correspond to the dependences on $t_1\pm t_2$, that is diagonal and antidiagonal in Fig.~\ref{fig:2}(a). Hence, the role of different contributions can be singled out by examining the cross sections in the corresponding directions, shown in Fig.~\ref{fig:2}(b,d). Our analysis of contributions of various super- and sub-radiant eigenstates and the directions, along which these contributions are manifested, is schematically summarized in Fig.~\ref{fig:2}(a). We will now discuss it in more detail.
Single- and double- excited subradiant states manifest themselves as the long-living tails in the wavefunction along the edges of the calculation domain in Fig.~\ref{fig:2}(a) and along its main diagonal, respectively. Black solid and dark red curves show the two corresponding cuts in Fig.~\ref{fig:2}(b). In order to distinguish between single- and double-excited subradiant states, we have performed calculations along the same cuts that neglect all single-excited subradiant states and include just a superradiant single-excited mode [dotted curves in Fig.~\ref{fig:2}(b)]. Such approximation well describes the initial fast decay of the wave functions for both curves and the tails along the main diagonal ($t_1=t_2$, red dotted curve). Thus, the tails along the main diagonal can be attributed to the double-excited subradiant states. On the other hand, this approximation significantly underestimates the values of the tails of the wavefunction for fixed $t_2=0.2$, as can be seen by the comparison of solid black and dotted gray curves in Fig.~\ref{fig:2}(b). This indicates that the tails in the solid black curve are due to the single-excited subradiant states.
The single-particle superradiant state manifests itself on the anti-diagonal in the time domain. It should then be measured as a function of the time difference between the two photons. This can be seen by comparing the solid green curve in Fig.~\ref{fig:2}(d), calculated accounting for all single-particle states, with the dashed orange one, which includes only the superradiant single-particle state. Such a single superradiant mode approximation correctly describes the shape of the central peak in the full calculation. We have also checked that, in order to correctly describe the amplitude of this sharp central feature, it is necessary to include all the double-excited states.
\begin{figure}[t]
\includegraphics[width=0.35\textwidth]{figure3.pdf}
\caption{Complex frequency spectrum for the following system parameters: number of qubits $N=4$, period of the system $\varphi=0.1$. Orange dots denote single photon states, blue dots correspond to the two-photon states.} \label{fig:3}
\end{figure}
\subsection{Duration of the transmitted pulse}\label{sec:N}
As the measure of the efficiency of the dark states' excitation, we introduce the quantity
\begin{equation} \label{TPDur}
T=\frac{\iint dt_1 dt_2|\psi^{\text{out}}_{t_1,t_2}|^2t_1}{\iint dt_1 dt_2|\psi^{\text{out}}_{t_1,t_2}|^2}
\end{equation}
in the same way as it was done in Ref.~\cite{Poshakinskiy2012}. Taking into account the bosonic statistics of photons, we keep only the time of one photon, $t_1$, under the integral (in general, one should look at the average times of both particles). The quantity $T$, by its definition, has the meaning of the duration of the transmitted pulse; the dark states' excitation efficiency therefore grows with $T$. Figure~\ref{fig:4} shows the dependence of the inverse duration of the transmitted pulse $1/T$ on the number of qubits $N$ and the period of the system $\varphi$. If all the qubits are located at one point ($\varphi=0$), a short propagating pulse excites the superradiant state. This case corresponds to the maximum values of $1/T$ for each $N$ in Fig.~\ref{fig:4}. If the qubits are spaced so that the phase $\varphi$ equals $\pi/2$, we observe the excitation of dark states. This spacing yields the minimum values of $1/T$ for each $N$ in Fig.~\ref{fig:4}.
The quantity $1/T$ increases with the number of qubits for a fixed period $\varphi$, since the decay rate of the superradiant state is equal to $N\gamma_{\text{1D}}$. The duration of the transmitted pulse does not increase with the number of qubits and, accordingly, with the total length of the system, because we use the Markovian approximation. This approximation implies an infinite speed of light, so the increase of the physical length of the system plays no role in the considered regime of parameters~\cite{Poshakinskiy2012}.
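In practice, once $\psi^{\rm out}_{t_1,t_2}$ has been tabulated on a grid, Eq.~\eqref{TPDur} reduces to a ratio of two-dimensional quadratures. A schematic sketch (the wavefunction below is a mere placeholder; in an actual calculation it would come from the scattering-matrix convolution of Sec.~\ref{sec:model}):
\begin{verbatim}
import numpy as np

t = np.linspace(0.0, 50.0, 1001)       # times in units of 1/gamma_1D
T1, T2 = np.meshgrid(t, t, indexing="ij")
psi = np.exp(-(T1 + T2)) * (1 + 0.1 * np.cos(T1 - T2))  # placeholder data
w, dt = np.abs(psi) ** 2, t[1] - t[0]
num = (w * T1).sum() * dt * dt         # Riemann sum for the numerator
den = w.sum() * dt * dt                # ... and for the normalization
print("pulse duration T =", num / den)
\end{verbatim}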
\begin{figure}[t]
\includegraphics[width=0.48\textwidth]{figure4.pdf}
\caption{Dependence of the inverse duration of the transmitted pulse $1/T$ on the period of the system $\varphi$ for fixed numbers of qubits $N$.
} \label{fig:4}
\end{figure}
\section{Summary}\label{sec:summary}
To summarize, we have developed a general analytical theory for the scattering of two-photon pulses from an array of two-level atoms, coupled to the waveguide.
The wavefunction of the scattered pulse has been obtained by a convolution of the known two-photon scattering matrix in the frequency domain with the Fourier transform of the incident pulse. In the case of a pulse duration being much shorter than the spontaneous emission lifetime, we were able to obtain a general analytical result for the scattered signal. This analytical expression, while being relatively cumbersome, considerably simplifies an interpretation of the scattered signal. Namely, it becomes possible to understand the role of the qualitatively different contributions corresponding to various single-excited and double-excited eigenstates of the arrays with different radiative lifetimes (subradiant and superradiant states).
We have also studied the dependence of the average time it takes the array to emit two photons under resonant excitation on the array length and period. The emission time generally becomes shorter for longer structures, which can be explained by the formation of superradiant single-excited photon states. The longest emission times correspond to structures with the anti-Bragg period, equal to a quarter of the wavelength of light at the atomic resonance frequency, $\lambda/4$. This is due to the suppression of the superradiant states for the anti-Bragg structures.
Our results indicate that the time-dependent spectroscopy of photon transmission can be an interesting complementary tool to the frequency-domain analysis. It would also be instructive to generalize the results to more complicated time dependences and entanglement structures of the input pulse. While this problem has already been analyzed in the literature~\cite{Yang2022,Calajo2022},
the general effect of the excitation spectrum of the array on the quantum pulse transmission is far from being completely understood. For example, it would be interesting to examine what happens with the quantum light transmission through Bragg structures with the period of $\lambda/2$~\cite{Poshakinskiy2012}, which can demonstrate strongly non-Markovian physics~\cite{PoshakinskiyBorrmann}.
|
{
"arxiv_id": "2302.14146",
"language": "en",
"timestamp": "2023-03-01T02:02:30",
"url": "https://arxiv.org/abs/2302.14146",
"yymm": "2302"
} | \section{Introduction}\label{sec:intro}
This paper examines {\em Logical Credal Networks}, a formalism
recently introduced by \citet{Qian2022} to combine
logical sentences, probabilities and independence relations.
The have proposed interesting ideas and evaluated the
formalism in practical scenarios with positive results.
The central element of a Logical Credal Network (LCN)
is a collection of constraints
over probabilities. Independence relations are then extracted mostly
from the logical content of those inequalities. This scheme
differs from previous proposals that extract independence relations
from explicitly
specified graphs~\cite{Andersen96,CozmanUAI2009,Rocha2005}.
Several probabilistic logics have also adopted explicit syntax
for independence relations even when graphs are not employed
\cite{Bacchus90,Doder2017,Halpern2003}.
While Logical Credal Networks have points in common with
existing formalisms, they do
have novel features that deserve attention.
For one thing, they resort to directed graphs that may
contain directed cycles. Also, they are endowed with a
sensible Markov condition that is distinct from previous ones.
Little is known about the consequences of these features,
and how they interact with the syntactic conventions that
turn logical formulas into edges in graphs.
In particular, it seems that no study has focused on
the consequences of Markov conditions on factorization results;
that is, how such conditions affect the factors that constitute
probability distributions.
In this paper we present a first investigation towards a deeper understanding
of Logical Credal Networks, looking at their specification, their
Markov conditions, and their factorization properties.
We introduce the notion of ``structure'' for a LCN.
We then show that the local Markov condition proposed by \citet{Qian2022}
collapses to the usual local Markov condition applied to chain graphs
when the structure has no directed cycles.
We analyze the behavior of the former Markov condition in the presence
of directed cycles, in particular examining factorization properties and
discussing the semantics of the resulting language.
To conclude, we propose a novel semantics for
LCNs and examine factorization results for their
associated probability distributions.
\section{Graphs and Markov Conditions}\label{section:Graphs}
In this section we present the necessary concepts related to
graphs and graph-theoretical probabilistic models (Bayesian
networks, Markov networks, and chain graphs). Definitions and
notation vary across the huge literature on these topics;
we rely here on three sources.
We use definitions by \citet{Qian2022} and by \citet{Spirtes95}
in their work on LCNs and on directed graphs respectively; we also
use standard results from the textbook by \citet{Cowell99}.
A {\em graph} is a triple $(\mathcal{V},\mathcal{E}_D,\mathcal{E}_U)$,
where $\mathcal{V}$ is a set of nodes,
and both $\mathcal{E}_D$ and $\mathcal{E}_U$ are sets of edges.
A node is always labeled with the name of a random variable; in fact,
we do not distinguish between a node and the corresponding random variable.
The elements of $\mathcal{E}_D$ are {\em directed} edges.
A directed edge is an ordered pair of distinct nodes, and is denoted
by $A \rightarrow B$.
The elements of $\mathcal{E}_U$ are {\em undirected} edges.
An undirected edge is a pair of distinct nodes, and is denoted by $A \sim B$;
note that nodes are not ordered in an undirected edge, so
there is no difference between $A \sim B$ and $B \sim A$.
Note that $\mathcal{E}_D$ and $\mathcal{E}_U$ are sets, so there are no multiple copies
of elements in them (for instance, there are no multiple undirected
edges between two nodes). Note also that there is no loop from a
node to itself.
If there is a directed edge from $A$ to $B$,
the edge is said to be {\em from} $A$ to $B$,
and then $A$ is a {\em parent} of $B$ and $B$ is a {\em child} of $A$.
The parents of $A$ are denoted by $\mbox{pa}(A)$.
If there are directed edges $A \rightarrow B$ and $B \rightarrow A$ between
$A$ and $B$, we say there is a {\em bi-directed} edge
between $A$ and $B$ and write $A \rightleftarrows B$.
If $A \sim B$, then both nodes are said to be {\em neighbors}.
The neighbors of $A$ are denoted by $\mbox{ne}(A)$.
The {\em boundary} of a node $A$, denoted by $\mbox{bd}(A)$,
is the set $\mbox{pa}(A) \cup \mbox{ne}(A)$.
The boundary of a set $\mathcal{B}$ of nodes is $\mbox{bd}(\mathcal{B}) = \cup_{A \in \mathcal{B}} \mbox{bd}(A) \backslash \mathcal{B}$.
If we have a set $\mathcal{B}$ of nodes such that, for all $A \in \mathcal{B}$, the boundary of $A$ is contained in $\mathcal{B}$, then $\mathcal{B}$ is an {\em ancestral set}.
A {\em path} from $A$ to $B$ is a sequence of edges, the first one between $A$ and some node $C_1$, then from $C_1$ to $C_2$ and so on, until an edge from $C_k$ to $B$, where all nodes other than $A$ and $B$
are distinct, and such that for each pair $(D_1,D_2)$ of consecutive
nodes in the path we have either $D_1 \rightarrow D_2$ or $D_1 \sim D_2$
but never $D_2 \rightarrow D_1$.
If $A$ and $B$ are identical, the path is a {\em cycle}.
If there is at least one directed edge in a path, the path is a {\em directed path}; if that path is a cycle, then it is a {\em directed cycle}.
If a path is not directed, then it is {\em undirected}
(hence all edges in the path are undirected ones).
A {\em directed}/{\em undirected} graph is a graph that only contains
directed/undirected edges.
A graph without directed cycles is a {\em chain graph}.
Figure~\ref{figure:GraphExamples} depicts a number of graphs.
\begin{figure}[t]
\begin{tikzpicture}[scale=0.9]
\node[draw,rectangle,rounded corners,fill=yellow] (a) at (1,2) {$A$};
\node[draw,rectangle,rounded corners,fill=yellow] (b) at (2,2) {$B$};
\node[draw,rectangle,rounded corners,fill=yellow] (c) at (1,1) {$C$};
\node[draw,rectangle,rounded corners,fill=yellow] (d) at (2,1) {$D$};
\draw[->,>=latex,thick] (a)--(b);
\draw[->,>=latex,thick] (c)--(d);
\draw[->,>=latex,thick] (b) -- (d);
\node at (1.5,0.3) {\small (a)};
\end{tikzpicture}
\hspace*{1mm}
\begin{tikzpicture}[scale=0.9]
\node[draw,rectangle,rounded corners,fill=yellow] (a) at (1,2) {$A$};
\node[draw,rectangle,rounded corners,fill=yellow] (b) at (2,2) {$B$};
\node[draw,rectangle,rounded corners,fill=yellow] (c) at (1,1) {$C$};
\node[draw,rectangle,rounded corners,fill=yellow] (d) at (2,1) {$D$};
\draw[->,>=latex,thick] (a) to[out=10,in=170] (b);
\draw[->,>=latex,thick] (b) to[out=-170,in=-10] (a);
\draw[->,>=latex,thick] (c) to[out=10,in=170] (d);
\draw[->,>=latex,thick] (d) to[out=-170,in=-10] (c);
\draw[->,>=latex,thick] (b) to[out=-80,in=80] (d);
\draw[->,>=latex,thick] (d) to[out=100,in=-100] (b);
\node at (1.5,0.3) {\small (b)};
\end{tikzpicture}
\hspace*{1mm}
\begin{tikzpicture}[scale=0.9]
\node[draw,rectangle,rounded corners,fill=yellow] (a) at (1,2) {$A$};
\node[draw,rectangle,rounded corners,fill=yellow] (b) at (2,2) {$B$};
\node[draw,rectangle,rounded corners,fill=yellow] (c) at (1,1) {$C$};
\node[draw,rectangle,rounded corners,fill=yellow] (d) at (2,1) {$D$};
\draw[thick] (a)--(b);
\draw[thick] (c)--(d);
\draw[thick] (b) -- (d);
\node at (1.5,0.3) {\small (c)};
\end{tikzpicture}
\hspace*{1mm}
\begin{tikzpicture}[scale=0.9]
\node[draw,rectangle,rounded corners,fill=yellow] (a) at (1,2) {$A$};
\node[draw,rectangle,rounded corners,fill=yellow] (b) at (2,2) {$B$};
\node[draw,rectangle,rounded corners,fill=yellow] (c) at (1,1) {$C$};
\node[draw,rectangle,rounded corners,fill=yellow] (d) at (2,1) {$D$};
\draw[->,>=latex,thick] (a)--(b);
\draw[->,>=latex,thick] (c)--(d);
\draw[->,>=latex,thick] (b) to[out=-80,in=80] (d);
\draw[->,>=latex,thick] (d) to[out=100,in=-100] (b);
\node at (1.5,0.3) {\small (d)};
\end{tikzpicture}
\hspace*{1mm}
\begin{tikzpicture}[scale=0.9]
\node[draw,rectangle,rounded corners,fill=yellow] (a) at (1,2) {$A$};
\node[draw,rectangle,rounded corners,fill=yellow] (b) at (2,2) {$B$};
\node[draw,rectangle,rounded corners,fill=yellow] (c) at (1,1) {$C$};
\node[draw,rectangle,rounded corners,fill=yellow] (d) at (2,1) {$D$};
\draw[->,>=latex,thick] (a)--(b);
\draw[->,>=latex,thick] (c)--(d);
\draw[thick] (b) -- (d);
\node at (1.5,0.3) {\small (e)};
\end{tikzpicture}
\vspace*{-5ex}
\caption{Graphs (directed/directed/undirected/directed/chain).
We have $\mbox{pa}(B)=\{A\}$ in Figures \ref{figure:GraphExamples}.a and \ref{figure:GraphExamples}.e, $\mbox{pa}(B)=\{A,D\}$ in Figures \ref{figure:GraphExamples}.b and \ref{figure:GraphExamples}.d,
and $\mbox{pa}(B)=\emptyset$ in Figure \ref{figure:GraphExamples}.c.}
\label{figure:GraphExamples}
\end{figure}
\begin{figure}[t]
\begin{tikzpicture}[scale=0.9]
\node[draw,rectangle,rounded corners,fill=yellow] (a) at (1,2) {$A$};
\node[draw,rectangle,rounded corners,fill=yellow] (b) at (2,2) {$B$};
\node[draw,rectangle,rounded corners,fill=yellow] (c) at (1,1) {$C$};
\node[draw,rectangle,rounded corners,fill=yellow] (d) at (2,1) {$D$};
\draw[thick] (a)--(b);
\draw[thick] (c)--(d);
\draw[thick] (b) -- (d);
\draw[thick] (c) -- (b);
\node at (1.5,0.3) {\small (a)};
\end{tikzpicture}
\hspace*{1mm}
\begin{tikzpicture}[scale=0.9]
\node[draw,rectangle,rounded corners,fill=yellow] (a) at (1,2) {$A$};
\node[draw,rectangle,rounded corners,fill=yellow] (b) at (2,2) {$B$};
\node[draw,rectangle,rounded corners,fill=yellow] (c) at (1,1) {$C$};
\node[draw,rectangle,rounded corners,fill=yellow] (d) at (2,1) {$D$};
\draw[thick] (a) -- (b);
\draw[thick] (c) -- (d);
\draw[thick] (b) -- (d);
\draw[thick] (a) -- (d);
\draw[thick] (c) -- (b);
\node at (1.5,0.3) {\small (b)};
\end{tikzpicture}
\hspace*{1mm}
\begin{tikzpicture}[scale=0.9]
\node[draw,rectangle,rounded corners,fill=yellow] (a) at (1,2) {$A$};
\node[draw,rectangle,rounded corners,fill=yellow] (b) at (2,2) {$B$};
\node[draw,rectangle,rounded corners,fill=yellow] (c) at (1,1) {$C$};
\node[draw,rectangle,rounded corners,fill=yellow] (d) at (2,1) {$D$};
\draw[thick] (a)--(b);
\draw[thick] (c)--(d);
\draw[thick] (b) -- (d);
\node at (1.5,0.3) {\small (c)};
\end{tikzpicture}
\hspace*{1mm}
\begin{tikzpicture}[scale=0.9]
\node[draw,rectangle,rounded corners,fill=yellow] (a) at (1,2) {$A$};
\node[draw,rectangle,rounded corners,fill=yellow] (b) at (2,2) {$B$};
\node[draw,rectangle,rounded corners,fill=yellow] (c) at (1,1) {$C$};
\node[draw,rectangle,rounded corners,fill=yellow] (d) at (2,1) {$D$};
\draw[thick] (a) -- (b);
\draw[thick] (c) -- (d);
\draw[thick] (b) -- (d);
\draw[thick] (a) -- (d);
\draw[thick] (c) -- (b);
\node at (1.5,0.3) {\small (d)};
\end{tikzpicture}
\hspace*{1mm}
\begin{tikzpicture}[scale=0.9]
\node[draw,rectangle,rounded corners,fill=yellow] (a) at (1,2) {$A$};
\node[draw,rectangle,rounded corners,fill=yellow] (b) at (2,2) {$B$};
\node[draw,rectangle,rounded corners,fill=yellow] (c) at (1,1) {$C$};
\node[draw,rectangle,rounded corners,fill=yellow] (d) at (2,1) {$D$};
\draw[thick] (a)--(b);
\draw[thick] (c)--(d);
\draw[thick] (b) -- (d);
\draw[thick] (a) -- (c);
\node at (1.5,0.3) {\small (e)};
\end{tikzpicture}
\vspace*{-5ex}
\caption{The moral graphs of the graphs in Figure~\ref{figure:GraphExamples}.}
\label{figure:MoralGraphs}
\end{figure}
If there is a directed path from $A$ to $B$, then
$A$ is an {\em ancestor} of $B$ and $B$ is a {\em descendant}
of $A$.
For instance, in Figure \ref{figure:GraphExamples}.a,
$A$ is the ancestor of $B$ and $D$ is the descendant of $B$;
in Figure \ref{figure:GraphExamples}.c, there are no ancestors
nor descendants of $B$;
in Figure \ref{figure:GraphExamples}.e,
$A$ is the ancestor of $B$, and there are no descendants of $B$.
As a digression, note that \citet{Cowell99} define ``ancestor''
and ``descendant'' somewhat differently, by asking that there is
a path from $A$ to $B$ but not from $B$ to $A$; this definition
is equivalent to the previous one for graphs without directed
cycles, but it is different otherwise (for instance,
in Figure \ref{figure:GraphExamples}.b the node $B$ has
descendants $\{A,C,D\}$ in the previous definition but no
descendants in the sense of \citet{Cowell99}).
We stick to our former definition, a popular one \cite{Koller2009}
that seems appropriate in the presence of directed cycles \cite{Spirtes95}.
We will need the following concepts:
\begin{itemize}
\item Suppose we take graph $\mathcal{G}$ and remove its directed edges to obtain
an auxiliary undirected graph $\mathcal{G}'$.
A set of nodes $\mathcal{B}$ is a
{\em chain multi-component} of $\mathcal{G}$ iff it is a maximal set such that
every pair of nodes in $\mathcal{B}$ is connected by a path
in $\mathcal{G}'$.
And $\mathcal{B}$ is a {\em chain component} iff it is {\em either}
a chain multi-component {\em or}
a single node that does not belong to any chain multi-component.
\item Suppose we take graph $\mathcal{G}$ and add undirected edges
between all pairs of nodes that have children in a common chain component
of $\mathcal{G}$ and that are not already joined in $\mathcal{G}$.
Suppose we then take the resulting graph and transform every directed edge
into an undirected edge (if $A \rightleftarrows B$, then both
transformed undirected edges collapse into $A \sim B$).
The final result is the {\em moral graph} of $\mathcal{G}$, denoted
by $\mathcal{G}^m$.
\item Suppose we take a graph $\mathcal{G}$ and a triple $(\mathcal{N}_1,\mathcal{N}_2,\mathcal{N}_3)$
of disjoint subsets of nodes,
and we build the moral graph of the smallest ancestral set containing the nodes in $\mathcal{N}_1 \cup \mathcal{N}_2 \cup \mathcal{N}_3$.
The resulting graph is denoted by
$\mathcal{G}^{ma}(\mathcal{N}_1,\mathcal{N}_2,\mathcal{N}_3)$.
\end{itemize}
Figure \ref{figure:MoralGraphs} depicts the moral graphs
that correspond respectively to the five graphs in
Figure \ref{figure:GraphExamples}.
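To make these constructions concrete, here is a small Python sketch (an illustrative encoding of our own, not taken from the cited sources) that computes the chain components and the moral graph of a graph given by its directed edge set $\mathcal{E}_D$ and undirected edge set $\mathcal{E}_U$; applied to the chain graph of Figure~\ref{figure:GraphExamples}.e, it returns the edges of Figure~\ref{figure:MoralGraphs}.e, including the added edge $A \sim C$.
\begin{verbatim}
def chain_components(nodes, U):
    # connected components of the undirected part; isolated nodes
    # become singleton chain components
    adj = {v: set() for v in nodes}
    for a, b in U:
        adj[a].add(b); adj[b].add(a)
    comps, seen = [], set()
    for v in nodes:
        if v in seen: continue
        stack, comp = [v], set()
        while stack:
            u = stack.pop()
            if u not in comp:
                comp.add(u); stack.extend(adj[u] - comp)
        seen |= comp; comps.append(comp)
    return comps

def moral_graph(nodes, D, U):
    # forget directions (frozensets collapse bi-directed edges) and
    # join all pairs of nodes with children in a common chain component
    edges = {frozenset(e) for e in U} | {frozenset(e) for e in D}
    for comp in chain_components(nodes, U):
        parents = {a for (a, b) in D if b in comp}
        edges |= {frozenset((a, b)) for a in parents
                  for b in parents if a != b}
    return edges

nodes = "ABCD"                        # the chain graph of Figure 1(e)
D, U = [("A", "B"), ("C", "D")], [("B", "D")]
print(sorted(tuple(sorted(e)) for e in moral_graph(nodes, D, U)))
\end{verbatim}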
There are several formalisms that employ graphs to represent
stochastic independence (and dependence) relations among the
random variables associated with nodes. In this paper we
focus only on discrete random variables, so the concept of
stochastic independence is quite simple:
random variables $X$ and $Y$ are (conditionally) independent
given random variables $Z$ iff
$\pr{X=x,Y=y|Z=z} = \pr{X=x|Z=z}\pr{Y=y|Z=z}$ for every possible
$x$ and $y$ and every $z$ such that $\pr{Z=z}>0$.
In case $Z$ is absent, we have independence of $X$ and $Y$ iff
$\pr{X=x,Y=y}=\pr{X=x}\pr{Y=y}$ for every possible $x$ and $y$.
A {\em Markov condition} explains how to extract independence relations
from a graph; there are many such conditions in the literature
\cite{Cowell99}.
Consider first an undirected graph $\mathcal{G}$ with set of nodes
$\mathcal{N}$.
The {\em local Markov condition}
states that a node $A$ is independent of all nodes
in $\mathcal{N}$ other than $A$ itself and $A$'s neighbors, $\mbox{ne}(A)$,
given $\mbox{ne}(A)$.
The {\em global Markov condition}
states that, given any triple $(\mathcal{N}_1,\mathcal{N}_2,\mathcal{N}_3)$
of disjoint subsets of $\mathcal{N}$, such that $\mathcal{N}_2$ separates
$\mathcal{N}_1$ and $\mathcal{N}_3$, then nodes $\mathcal{N}_1$ and
$\mathcal{N}_3$ are independent given nodes $\mathcal{N}_2$.\footnote{In
an undirected graph, a set of nodes separates two other sets iff,
by deleting the separating nodes, we have no connecting path between
a node in one set and a node in the other set.}
If a probability distribution over all random variables in $\mathcal{G}$
is everywhere larger than zero, then both conditions are equivalent
and they are equivalent to a {\em factorization property}:
for each configuration of variables $X=x$, where $X$ denotes the
random variables in $\mathcal{G}$, we have
$\pr{X=x} = \prod_{c \in \mathcal{C}} \phi_c(x_c)$,
where $\mathcal{C}$ is the set of cliques of $\mathcal{G}$,
each $\phi_c$ is a function over the random variables in clique
$c$, and $x_c$ is the projection of $x$ over the random variables
in clique $c$.\footnote{A clique is a maximal set of nodes such
that each pair of nodes in the set is joined.}
Now consider an acyclic
directed graph $\mathcal{G}$ with set of nodes $\mathcal{N}$.
The {\em local Markov condition}
states that a node $A$ is independent, given $A$'s parents $\mbox{pa}(A)$,
of all its non-descendants non-parents except $A$ itself.
The factorization produced by the local Markov condition is
\begin{equation}\label{equation:BayesNet}
\pr{X=x} = \prod_{N \in \mathcal{N}} \pr{N=x_N|\mbox{pa}(N)=x_{\mbox{pa}(N)}},
\end{equation}
where $x_N$ is the value of $N$ in $x$ and $x_{\mbox{pa}(N)}$ is the projection
of $x$ over the parents of $N$.
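For illustration, here is a toy numerical check of the factorization \eqref{equation:BayesNet} for the directed graph of Figure~\ref{figure:GraphExamples}.a; the conditional probability values are arbitrary, made-up numbers.
\begin{verbatim}
pA = {0: 0.6, 1: 0.4}                 # P(A)
pC = {0: 0.7, 1: 0.3}                 # P(C)
pB1 = {0: 0.1, 1: 0.8}                # P(B=1 | A=a)
pD1 = {(0, 0): 0.1, (0, 1): 0.5,      # P(D=1 | B=b, C=c)
       (1, 0): 0.4, (1, 1): 0.9}

def joint(a, b, c, d):
    # one factor per node: P(A) P(C) P(B|A) P(D|B,C)
    pb = pB1[a] if b else 1 - pB1[a]
    pd = pD1[(b, c)] if d else 1 - pD1[(b, c)]
    return pA[a] * pC[c] * pb * pd

total = sum(joint(a, b, c, d) for a in (0, 1) for b in (0, 1)
            for c in (0, 1) for d in (0, 1))
print(total)   # 1.0: the factors define a valid joint distribution
\end{verbatim}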
Finally, consider a chain graph $\mathcal{G}$ with set of nodes $\mathcal{N}$.
The {\em local Markov condition} states that a node $A$ is independent, given
its boundary nodes, of its
non-descendants non-boundary nodes except $A$ itself.
The {\em global Markov condition} for chain graphs is significantly more complicated.
It states that,
given any triple $(\mathcal{N}_1,\mathcal{N}_2,\mathcal{N}_3)$
of disjoint subsets of $\mathcal{N}$,
such that $\mathcal{N}_2$ separates
$\mathcal{N}_1$ and $\mathcal{N}_3$ in the graph $\mathcal{G}^{ma}(\mathcal{N}_1,\mathcal{N}_2,\mathcal{N}_3)$, then nodes $\mathcal{N}_1$ and
$\mathcal{N}_3$ are independent given nodes $\mathcal{N}_2$.
Again, if a probability distribution over all random variables in $\mathcal{G}$
is everywhere larger than zero, then both Markov conditions are equivalent
and they are equivalent to a {\em factorization property} as follows.
Take the chain components $T_1, \dots, T_n$ ordered so that nodes in $T_i$ can only be at the end of directed edges starting from chain components before $T_i$; this is always possible in a chain graph.
Then the factorization has the form
$\pr{X=x} = \prod_{i=1}^n \pr{\mathcal{N}_i=x_{\mathcal{N}_i}\mid\mbox{bd}(\mathcal{N}_i)=x_{\mbox{\scriptsize bd}(\mathcal{N}_i)} }$
where $\mathcal{N}_i$ is the set of nodes in the $i$th chain component;
$x_{\mathcal{N}_i}$ and $x_{\mbox{\scriptsize bd}(\mathcal{N}_i)}$
are respectively the projection of $x$
over $\mathcal{N}_i$ and
$\mbox{bd}(\mathcal{N}_i)$.
Moreover, each
factor in the product itself factorizes accordingly to an undirected graph that depends on the corresponding chain
component~\cite{Cowell99}.
More precisely, for each chain component $T_i$,
build an undirected graph consisting of the nodes in $\mathcal{N}_i$ and $\mbox{bd}(\mathcal{N}_i)$ with all edges between these nodes in $\mathcal{G}$ turned into undirected
edges in this new graph, and with new undirected edges connecting each pair of nodes in $\mbox{bd}(\mathcal{N}_i)$ that were not joined already; then each
$\pr{\mathcal{N}_i\mid\mbox{bd}(\mathcal{N}_i)}$ equals the ratio
$\phi_i(\mathcal{N}_i,\mbox{bd}(\mathcal{N}_i))/\phi_i(\mbox{bd}(\mathcal{N}_i))$ for a positive function $\phi_i$, where $\phi_i(\mbox{bd}(\mathcal{N}_i)) = \sum \phi_i(\mathcal{N}_i,\mbox{bd}(\mathcal{N}_i))$ with the sum extending over all configurations of $\mathcal{N}_i$.
\section{Logical Credal Networks}
A Logical Credal Network (LCN) consists of a set of propositions $\mathcal{N}$ and two sets of constraints $\mathcal{T}_U$ and $\mathcal{T}_D$.
The set $\mathcal{N}$ is finite with propositions $A_1, \dots, A_n$.
Each proposition $A_i$ is associated with a random variable $X_i$ that is an indicator variable: if $A_i$ holds in an interpretation of the propositions then $X_i=1$; otherwise, $X_i=0$.
From now on we simply use the same symbol for a proposition and its corresponding indicator random variable.
Each constraint in $\mathcal{T}_U$ and in $\mathcal{T}_D$ is of the form
\[
\alpha \leq \pr{\phi|\varphi} \leq \beta,
\]
where each $\phi$ and each $\varphi$ is a formula.
In this paper, formulas are propositional (with propositions in $\mathcal{N}$ and connectives such as negation, disjunction, conjunction).
The definition of LCNs by \citet{Qian2022} allows for relational structures and first-order formulas; however, their semantics is obtained by grounding on finite domains, so we can focus on a propositional language for our purposes here.
Note that $\varphi$ can be a tautology $\top$, in which case we can just write the ``unconditional'' probability $\pr{\phi}$.
One can obviously use simple variants of constraints, such as $\pr{\phi|\varphi} = \beta$ or $\pr{\phi|\varphi} \geq \alpha$ or $\pr{\phi} \leq \alpha$, whenever needed.
The semantics of a LCN is given by a translation from the LCN to a directed graph where each proposition/random variable is a node (we often refer to
them as {\em proposition-nodes}).
Each constraint is then processed as follows.
First, a node labeled with formula $\phi$ is added and, in case $\varphi$ is not $\top$, another node labeled with $\varphi$ is added (we often refer to them as
{\em formula-nodes}), with a directed edge from $\varphi$ to $\phi$.
Then an edge is added from each proposition in $\varphi$ to node $\varphi$ in case the latter is in the graph, and an edge is added from node $\phi$ to each proposition in $\phi$.\footnote{We note that the original presentation of LCNs is a bit different from what we just described, as there are no edges added for a constraint in $\mathcal{T}_D$ for which $\varphi$ is $\top$. But this makes no difference to the results and it simplifies the discussion a bit.}
Finally, in case the constraint is in $\mathcal{T}_U$, an edge is added from each proposition in $\phi$ to node $\phi$.
We do not distinguish between two logically equivalent formulas (the original proposal
by \citet{Qian2022} focused only on syntactic operations).
The graph just described is referred to as the
{\em dependency graph} of the LCN.
Note that, when a formula is just a single proposition, we do not need to
present it explicitly in the dependency graph; we can just connect
edges from and to the corresponding proposition-node.
As shown in the next example, in our drawings
formulas appear inside dashed rectangles.
\begin{example}\label{example:Smokers}
Consider the following LCN, based on the Smokers and Friends example
by \citet{Qian2022}. We have propositions
$C_i$, $F_i$, $S_i$ for $i\in\{1,2,3\}$.
All constraints belong to $\mathcal{T}_U$ (that is, $\mathcal{T}_D$ is empty),
with $i,j,k \in \{1,2,3\}$:
\[
\begin{array}{ll}
0.5 \leq \pr{F_i | F_j \wedge F_k} \leq 1, & i \neq j, i \neq k, j \neq k; \\
0 \leq \pr{S_i \vee S_j | F_i} \leq 0.2, & i \neq j; \\
0.03 \leq \pr{C_i|S_i} \leq 0.04; & \\
0 \leq \pr{C_i | \neg S_i} \leq 0.01.
\end{array}
\]
The structure of the LCN is depicted in Figure \ref{figure:SmokersExample}.
Note that there are several directed cycles in this dependency graph.
\end{example}
\begin{figure}[t]
\centering
\begin{tikzpicture}
\node[draw,rectangle,very thick,dashed] (f2f3) at (1,5) {$F_2 \wedge F_3$};
\node[draw,rectangle,very thick,dashed] (f1f3) at (3.5,5) {$F_1 \wedge F_3$};
\node[draw,rectangle,very thick,dashed] (f1f2) at (6,5) {$F_1 \wedge F_2$};
\node[draw,rectangle,rounded corners,fill=yellow] (f1) at (1,4) {$F_1$};
\node[draw,rectangle,rounded corners,fill=yellow] (f2) at (3.5,4) {$F_2$};
\node[draw,rectangle,rounded corners,fill=yellow] (f3) at (6,4) {$F_3$};
\node[draw,rectangle,very thick,dashed] (s1s2) at (1,3) {$S_1 \vee S_2$};
\node[draw,rectangle,very thick,dashed] (s2s3) at (3.5,3) {$S_2 \vee S_3$};
\node[draw,rectangle,very thick,dashed] (s1s3) at (6,3) {$S_1 \vee S_3$};
\node[draw,rectangle,rounded corners,fill=yellow] (s1) at (1,1) {$S_1$};
\node[draw,rectangle,rounded corners,fill=yellow] (s2) at (3.5,1) {$S_2$};
\node[draw,rectangle,rounded corners,fill=yellow] (s3) at (6,1) {$S_3$};
\node[draw,rectangle,very thick,dashed] (ns1) at (2,0.5) {$\neg S_1$};
\node[draw,rectangle,very thick,dashed] (ns2) at (4.5,0.5) {$\neg S_2$};
\node[draw,rectangle,very thick,dashed] (ns3) at (7,0.5) {$\neg S_3$};
\node[draw,rectangle,rounded corners,fill=yellow] (c1) at (1,0) {$C_1$};
\node[draw,rectangle,rounded corners,fill=yellow] (c2) at (3.5,0) {$C_2$};
\node[draw,rectangle,rounded corners,fill=yellow] (c3) at (6,0) {$C_3$};
\draw[->,>=latex,thick] (f2f3)--(f1);
\draw[->,>=latex,thick] (f1f3)--(f2);
\draw[->,>=latex,thick] (f1f2)--(f3);
\draw[->,>=latex,thick] (f1)--(f1f3);
\draw[->,>=latex,thick] (f2)--(f2f3);
\draw[->,>=latex,thick] (f2)--(f1f2);
\draw[->,>=latex,thick] (f3)--(f1f3);
\draw[->,>=latex,thick] (f3) to[out=-160,in=-40] (f2f3);
\draw[->,>=latex,thick] (f1) to[out=-20,in=-140] (f1f2);
\draw[->,>=latex,thick] (f1)--(s1s2);
\draw[->,>=latex,thick] (f2)--(s2s3);
\draw[->,>=latex,thick] (f3)--(s1s3);
\draw[->,>=latex,thick] (s1)--(c1);
\draw[->,>=latex,thick] (s2)--(c2);
\draw[->,>=latex,thick] (s3)--(c3);
\draw[->,>=latex,thick] (s1s2) to[out=-80,in=80] (s1);
\draw[->,>=latex,thick] (s1) to[out=100,in=-100] (s1s2);
\draw[->,>=latex,thick] (s2s3) to[out=-80,in=80] (s2);
\draw[->,>=latex,thick] (s2) to[out=100,in=-100] (s2s3);
\draw[->,>=latex,thick] (s1s3) to[out=-80,in=80] (s3);
\draw[->,>=latex,thick] (s3) to[out=100,in=-100] (s1s3);
\draw[->,>=latex,thick] (s1s2) to[out=-50,in=160] (s2);
\draw[->,>=latex,thick] (s2) -- (s1s2);
\draw[->,>=latex,thick] (s2s3) to[out=-50,in=160] (s3);
\draw[->,>=latex,thick] (s3) -- (s2s3);
\draw[->,>=latex,thick] (s1s3) to[out=-150,in=10] (s1);
\draw[->,>=latex,thick] (s1) -- (s1s3);
\draw[->,>=latex,thick] (s1) -- (ns1);
\draw[->,>=latex,thick] (s2) -- (ns2);
\draw[->,>=latex,thick] (s3) -- (ns3);
\draw[->,>=latex,thick] (ns1) -- (c1);
\draw[->,>=latex,thick] (ns2) -- (c2);
\draw[->,>=latex,thick] (ns3) -- (c3);
\end{tikzpicture}
\caption{The dependency graph of the LCN in Example~\ref{example:Smokers}.}
\label{figure:SmokersExample}
\end{figure}
\citet{Qian2022} then define:
\begin{definition}\label{definition:ParentLCN}
The {\em lcn-parents} of a proposition $A$, denoted by $\mbox{\rm lcn-pa}(A)$,
are the propositions such
that there exists a directed path in the dependency graph from
each of them to $A$ in which all intermediate nodes are formulas.
\end{definition}
\begin{definition}\label{definition:DescendantLCN}
The {\em lcn-descendants} of a proposition $A$, denoted by $\mbox{\rm lcn-de}(A)$,
are the propositions such
that there exists a directed path in the dependency graph from $A$ to each
of them in which no intermediate node is a parent of~$A$.
\end{definition}
The connections between these concepts and the definitions
of parent and descendant in Section~\ref{section:Graphs}
will be clear in the next section.
In any case, using these definitions \citet{Qian2022} proposed
a Markov condition:
\begin{definition}[LMC(LCN)]
\label{definition:LCNcondition}
A node $A$ is independent, given its lcn-parents,
of all nodes that are not $A$ itself nor lcn-descendants of $A$
nor lcn-parents of $A$.
\end{definition}
The Markov condition in Definition \ref{definition:LCNcondition} is:
\begin{equation}
\label{equation:LCNcondition}
A \; \rotatebox[origin=c]{90}{$\models$} \; \mathcal{N} \backslash \{ \{A\} \cup \mbox{lcn-de}(A) \cup \mbox{lcn-pa}(A)\} \mid \mbox{lcn-pa}(A),
\end{equation}
where we use $\rotatebox[origin=c]{90}{$\models$}$ here, and in the remainder of the paper, to
mean ``is independent of''.
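As a quick sanity check of this condition against Figure \ref{figure:SmokersExample}, take $A=C_1$: the only directed paths into $C_1$ whose intermediate nodes are all formula-nodes come from $S_1$ (directly and through $\neg S_1$), so $\mbox{lcn-pa}(C_1)=\{S_1\}$; and $C_1$ has no outgoing edges, so $\mbox{lcn-de}(C_1)=\emptyset$. Expression (\ref{equation:LCNcondition}) then states that
\[
C_1 \; \rotatebox[origin=c]{90}{$\models$} \; \{F_1,F_2,F_3,S_2,S_3,C_2,C_3\} \mid S_1 .
\]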
\citet{Qian2022} have derived inference algorithms (that is, they consider
the computation of conditional probabilities) that exploit such
independence relations, and they examine applications that demonstrate
the practical value of LCNs.
It seems that a bit more discussion about the meaning of this Markov condition,
as well as its properties and consequences, would be welcome.
To do so, we find it useful to introduce a
novel concept, namely, the {\em structure} of a LCN.
\begin{figure*}
\centering
\begin{tikzpicture}
\node[draw,rectangle,very thick,dashed] (f2f3) at (1,6) {$F_2 \wedge F_3$};
\node[draw,rectangle,very thick,dashed] (f1f3) at (3.5,6) {$F_1 \wedge F_3$};
\node[draw,rectangle,very thick,dashed] (f1f2) at (6,6) {$F_1 \wedge F_2$};
\node[draw,rectangle,rounded corners,fill=yellow] (f1) at (1,4.5) {$F_1$};
\node[draw,rectangle,rounded corners,fill=yellow] (f2) at (3.5,4.5) {$F_2$};
\node[draw,rectangle,rounded corners,fill=yellow] (f3) at (6,4.5) {$F_3$};
\node[draw,rectangle,very thick,dashed] (s1s2) at (1,3) {$S_1 \vee S_2$};
\node[draw,rectangle,very thick,dashed] (s2s3) at (3.5,3) {$S_2 \vee S_3$};
\node[draw,rectangle,very thick,dashed] (s1s3) at (6,3) {$S_1 \vee S_3$};
\node[draw,rectangle,rounded corners,fill=yellow] (s1) at (1,1) {$S_1$};
\node[draw,rectangle,rounded corners,fill=yellow] (s2) at (3.5,1) {$S_2$};
\node[draw,rectangle,rounded corners,fill=yellow] (s3) at (6,1) {$S_3$};
\node[draw,rectangle,very thick,dashed] (ns1) at (2,0.25) {$\neg S_1$};
\node[draw,rectangle,very thick,dashed] (ns2) at (4.5,0.25) {$\neg S_2$};
\node[draw,rectangle,very thick,dashed] (ns3) at (7,0.25) {$\neg S_3$};
\node[draw,rectangle,rounded corners,fill=yellow] (c1) at (1,-0.5) {$C_1$};
\node[draw,rectangle,rounded corners,fill=yellow] (c2) at (3.5,-0.5) {$C_2$};
\node[draw,rectangle,rounded corners,fill=yellow] (c3) at (6,-0.5) {$C_3$};
\draw[->,>=latex,thick,dotted] (f2f3)--(f1);
\draw[->,>=latex,thick,dotted] (f1f3)--(f2);
\draw[->,>=latex,thick,dotted] (f1f2)--(f3);
\draw[->,>=latex,thick,dotted] (f1)--(f1f3);
\draw[->,>=latex,thick,dotted] (f2)--(f2f3);
\draw[->,>=latex,thick,dotted] (f2)--(f1f2);
\draw[->,>=latex,thick,dotted] (f3)--(f1f3);
\draw[->,>=latex,thick,dotted] (f3) -- (f2f3);
\draw[->,>=latex,thick,dotted] (f1) -- (f1f2);
\draw[->,>=latex,thick,dotted] (f1)--(s1s2);
\draw[->,>=latex,thick,dotted] (f2)--(s2s3);
\draw[->,>=latex,thick,dotted] (f3)--(s1s3);
\draw[->,>=latex,thick] (s1)--(c1);
\draw[->,>=latex,thick] (s2)--(c2);
\draw[->,>=latex,thick] (s3)--(c3);
\draw[->,>=latex,thick,dotted] (s1s2) to[out=-80,in=80] (s1);
\draw[->,>=latex,thick,dotted] (s1) to[out=100,in=-100] (s1s2);
\draw[->,>=latex,thick,dotted] (s2s3) to[out=-80,in=80] (s2);
\draw[->,>=latex,thick,dotted] (s2) to[out=100,in=-100] (s2s3);
\draw[->,>=latex,thick,dotted] (s1s3) to[out=-80,in=80] (s3);
\draw[->,>=latex,thick,dotted] (s3) to[out=100,in=-100] (s1s3);
\draw[->,>=latex,thick,dotted] (s1s2) to[out=-50,in=160] (s2);
\draw[->,>=latex,thick,dotted] (s2) -- (s1s2);
\draw[->,>=latex,thick,dotted] (s2s3) to[out=-50,in=160] (s3);
\draw[->,>=latex,thick,dotted] (s3) -- (s2s3);
\draw[->,>=latex,thick,dotted] (s1s3) to[out=-150,in=10] (s1);
\draw[->,>=latex,thick,dotted] (s1) -- (s1s3);
\draw[thick,blue] (f1)--(f2);
\draw[thick,blue] (f2)--(f3);
\draw[thick,blue] (f1) to[out=-15,in=-165] (f3);
\draw[thick,blue] (s1)--(s2);
\draw[thick,blue] (s2)--(s3);
\draw[thick,blue] (s1) to[out=-15,in=-165] (s3);
\draw[->,>=latex,thick,blue] (f1) to[out=-140,in=120] (s1);
\draw[->,>=latex,thick,blue] (f1)--(s2);
\draw[->,>=latex,thick,blue] (f2) to[out=-140,in=120] (s2);
\draw[->,>=latex,thick,blue] (f2)--(s3);
\draw[->,>=latex,thick,blue] (f3) to[out=-40,in=60] (s3);
\draw[->,>=latex,thick,blue] (f3) to[out=-120,in=25] (s1);
\draw[->,>=latex,thick,dotted] (s1) -- (ns1);
\draw[->,>=latex,thick,dotted] (s2) -- (ns2);
\draw[->,>=latex,thick,dotted] (s3) -- (ns3);
\draw[->,>=latex,thick,dotted] (ns1) -- (c1);
\draw[->,>=latex,thick,dotted] (ns2) -- (c2);
\draw[->,>=latex,thick,dotted] (ns3) -- (c3);
\node at (0,2) {(a)};
\end{tikzpicture}
\hspace*{15mm}
\begin{tikzpicture}
\node[draw,rectangle,rounded corners,fill=yellow] (f1) at (1,2.5) {$F_1$};
\node[draw,rectangle,rounded corners,fill=yellow] (f2) at (3.5,2.5) {$F_2$};
\node[draw,rectangle,rounded corners,fill=yellow] (f3) at (6,2.5) {$F_3$};
\node[draw,rectangle,rounded corners,fill=yellow] (s1) at (1,1) {$S_1$};
\node[draw,rectangle,rounded corners,fill=yellow] (s2) at (3.5,1) {$S_2$};
\node[draw,rectangle,rounded corners,fill=yellow] (s3) at (6,1) {$S_3$};
\node[draw,rectangle,rounded corners,fill=yellow] (c1) at (1,0) {$C_1$};
\node[draw,rectangle,rounded corners,fill=yellow] (c2) at (3.5,0) {$C_2$};
\node[draw,rectangle,rounded corners,fill=yellow] (c3) at (6,0) {$C_3$};
\draw[->,>=latex,thick] (s1)--(c1);
\draw[->,>=latex,thick] (s2)--(c2);
\draw[->,>=latex,thick] (s3)--(c3);
\draw[thick,blue] (f1)--(f2);
\draw[thick,blue] (f2)--(f3);
\draw[thick,blue] (f1) to[out=-15,in=-165] (f3);
\draw[thick,blue] (s1)--(s2);
\draw[thick,blue] (s2)--(s3);
\draw[thick,blue] (s1) to[out=-15,in=-165] (s3);
\draw[->,>=latex,thick,blue] (f1) -- (s1);
\draw[->,>=latex,thick,blue] (f1)--(s2);
\draw[->,>=latex,thick,blue] (f2) -- (s2);
\draw[->,>=latex,thick,blue] (f2)--(s3);
\draw[->,>=latex,thick,blue] (f3) -- (s3);
\draw[->,>=latex,thick,blue] (f3) -- (s1);
\node at (0,0) {(b)};
\node[draw,rectangle,rounded corners,fill=yellow] (hf123) at (1.5,-2) {$F_{1,2,3}$};
\node[draw,rectangle,rounded corners,fill=yellow] (hs123) at (1.5,-3) {$S_{1,2,3}$};
\node[draw,rectangle,rounded corners,fill=yellow] (hc1) at (0.5,-4) {$C_1$};
\node[draw,rectangle,rounded corners,fill=yellow] (hc2) at (1.5,-4) {$C_2$};
\node[draw,rectangle,rounded corners,fill=yellow] (hc3) at (2.5,-4) {$C_3$};
\draw[->,>=latex,thick] (hf123)--(hs123);
\draw[->,>=latex,thick] (hs123)--(hc1);
\draw[->,>=latex,thick] (hs123)--(hc2);
\draw[->,>=latex,thick] (hs123)--(hc3);
\node at (0,-3) {(c)};
\node[draw,rectangle,rounded corners,fill=yellow] (hf12) at (4.2,-2) {$F_{1,2}$};
\node[draw,rectangle,rounded corners,fill=yellow] (hf23) at (5.8,-2) {$F_{2,3}$};
\node[draw,rectangle,rounded corners,fill=yellow] (hs123) at (5,-3) {$S_{1,2,3}$};
\node[draw,rectangle,rounded corners,fill=yellow] (hc1) at (4,-4) {$C_1$};
\node[draw,rectangle,rounded corners,fill=yellow] (hc2) at (5,-4) {$C_2$};
\node[draw,rectangle,rounded corners,fill=yellow] (hc3) at (6,-4) {$C_3$};
\draw[thick] (hf12)--(hf23);
\draw[->,>=latex,thick] (hf12)--(hs123);
\draw[->,>=latex,thick] (hf23)--(hs123);
\draw[->,>=latex,thick] (hs123)--(hc1);
\draw[->,>=latex,thick] (hs123)--(hc2);
\draw[->,>=latex,thick] (hs123)--(hc3);
\node at (3.5,-3) {(d)};
\end{tikzpicture}
\caption{(a) The dependency graph for the LCN in Example~\ref{example:Smokers},
together with the edges in the structure of the LCN. Edges in the structure
are solid (the ones added in the process are in blue); edges
in and out of formula-nodes are dotted.
(b) The structure of the LCN, by removing the formula-nodes and
associated edges.
(c) A directed acyclic graph with the chain components of
the chain graph that represents the structure.
(d) A variant discussed in Example \ref{example:Factorization}.}
\label{figure:DependencyStructure}
\end{figure*}
\section{The Structure of a LCN}
The dependency graph of a LCN is rather similar in spirit to
the {\em factor graph} of a Bayesian network \cite{Koller2009},
where both random variables and conditional probabilities are
explicitly represented. This is a convenient device when it comes
to message-passing inference algorithms, but perhaps it contains
too much information when one wishes to examine independence relations.
We introduce another graph to be extracted from the dependency
graph of a given LCN, which we
call the {\em structure} of the LCN, as follows:
\begin{enumerate}
\item For each formula-node
$\phi$ that appears as a
conditioned formula in a constraint in $\mathcal{T}_U$,
place an undirected edge between any two propositions
that appear in $\phi$.
\item For each pair of formula-nodes $\varphi$ and $\phi$
that appear in a constraint, add a directed edge from
each proposition in $\varphi$ to each proposition in $\phi$.
\item If, for some pair of proposition-nodes $A$ and $B$, there is
now a pair of edges $A \leftrightarrows B$, then replace both edges
by an undirected edge.
\item Finally, remove multiple edges
and remove the formula-nodes and all edges in and out of them.
\end{enumerate}
\begin{example}
Figures \ref{figure:DependencyStructure}.a and
\ref{figure:DependencyStructure}.b depict the
structure of the LCN in Example \ref{example:Smokers}.
\end{example}
We have:
\begin{lemma}\label{theorem:ParentBoundary}
The set of lcn-parents of a proposition $A$ in a LCN is identical
to the boundary of $A$ with respect to the structure of the LCN.
\end{lemma}
\begin{proof}
Consider a LCN with a dependency graph $\mathcal{D}$.
If $B$ is a lcn-parent of $A$ with respect to $\mathcal{D}$,
then $B \rightarrow A$ or
$B \rightarrow \phi \rightarrow A$
or
$B \rightarrow \varphi \rightarrow \phi \rightarrow A$
or
$B \leftrightarrows \phi \rightarrow A$
or
$B \rightarrow \phi \leftrightarrows A$
or
$B \leftrightarrows \phi \leftrightarrows A$
for formulas $\varphi$ and $\phi$;
hence $B$ appears either as a parent of $A$ or a neighbor
in the structure of the LCN.
Conversely, if $B$ is a parent or a neighbor of $A$
in the structure of the LCN, then one of the sequences of edges
already mentioned must be in $\mathcal{D}$, so $B$ is a
lcn-parent of $A$ in $\mathcal{D}$.
\end{proof}
The natural candidate for the concept of ``descendant'' in a structure, so as to mirror the concept of lcn-descendant, is as follows:
\begin{definition}\label{definition:QianDescendant}
If there is a directed path from $A$ to $B$ such that no intermediate
node is a boundary node of $A$, then $B$ is a {\em strict descendant} of $A$.
\end{definition}
Using the previous definitions, we can state a local Markov condition that works for any graph but that, when applied to the structure of a LCN, has the same effect
as the original Markov condition LMC(LCN)
(Definition \ref{definition:LCNcondition}) applied to the LCN:
\begin{definition}[LMC(C-STR)]
\label{definition:LCNconditionStructures}
A node $A$ is independent, given its boundary,
of all nodes that are not $A$ itself nor strict descendants of $A$
nor boundary nodes of $A$.
\end{definition}
In symbols,
\begin{equation}
\label{equation:ConditionStructure}
A \; \rotatebox[origin=c]{90}{$\models$} \; \mathcal{N} \backslash \{ \{A\} \cup \mbox{sde}(A) \cup \mbox{bd}(A)\} \mid
\mbox{bd}(A),
\end{equation}
where $\mbox{sde}(A)$ denotes the set of strict descendants of $A$.
We then have:
\begin{theorem}\label{theorem:EqualityMarkovConditions}
Given a LCN, the
Markov condition LMC(LCN) in Definition \ref{definition:LCNcondition}
is identical, with respect to the independence relations it imposes,
to the local Markov condition LMC(C-STR)
in Definition \ref{definition:LCNconditionStructures}
applied to the structure of the LCN.
\end{theorem}
\begin{proof}
To prove that Expressions (\ref{equation:LCNcondition}) and (\ref{equation:ConditionStructure})
are equivalent, we use the fact that $\mbox{lcn-pa}(A)$ and $\mbox{bd}(A)$ are identical
by Lemma \ref{theorem:ParentBoundary} and we prove (next) that
$\mathcal{N} \backslash \{ \{A\} \cup \mbox{lcn-de}(A) \cup \mbox{lcn-pa}(A)\}$ is equal to
$\mathcal{N} \backslash \{ \{A\} \cup \mbox{sde}(A) \cup \mbox{bd}(A)\}$.
Suppose then that
$B \in \mathcal{N} \backslash \{ \{A\} \cup \mbox{lcn-de}(A) \cup \mbox{lcn-pa}(A)\}$
and assume, to obtain a contradiction, that
$B \in (\mathcal{N} \backslash \{ \{A\} \cup \mbox{sde}(A) \cup \mbox{bd}(A)\})^c$,
where the superscript $c$ denotes complement with respect to $\mathcal{N}$.
So, our assumption is that $B \in \{A\} \cup \mbox{sde}(A) \cup \mbox{bd}(A)$,
and the latter union can be written as
$\{A\} \cup \mbox{bd}(A) \cup ( (\mbox{bd}(A))^c \cap \mbox{sde}(A) )$,
a union of disjoint sets. So one of the following must hold: \\
$\bullet$ We have $B=A$, a contradiction. \\
$\bullet$ We have $B \in \mbox{bd}(A)$. Then $B$ is either a parent or a neighbor, and in
both cases there must be an edge from $B$ to $A$ in the dependency graph and then
$B \in \mbox{lcn-pa}(A)$, a contradiction. \\
$\bullet$ We have $B \in (\mbox{bd}(A))^c \cap \mbox{sde}(A)$. Then there
must be a directed path from $A$ to $B$ (with intermediate nodes that are
not boundary nodes) in the structure, and so there must be a corresponding directed
path from $A$ to $B$ (with intermediate nodes that are not parents) in the
dependency graph; so $B \in \mbox{lcn-de}(A)$, a contradiction. \\
So, we always get a contradiction; hence if
$B \in \mathcal{N} \backslash \{ \{A\} \cup \mbox{lcn-de}(A) \cup \mbox{lcn-pa}(A)\}$
then
$B \in \mathcal{N} \backslash \{ \{A\} \cup \mbox{sde}(A) \cup \mbox{bd}(A)\}$.
Suppose now that we have a node $B$, distinct from $A$, such that
$B \in \mathcal{N} \backslash \{ \{A\} \cup \mbox{sde}(A) \cup \mbox{bd}(A)\}$
and assume, to obtain a contradiction, that
$B \in ( \mathcal{N} \backslash \{ \{A\} \cup \mbox{lcn-de}(A) \cup \mbox{lcn-pa}(A)\} )^c$.
The reasoning that follows is parallel to the one in the last paragraph, but this case
has a few additional twists to take care of.
So, our assumption is that
$B \in \{A\} \cup \mbox{lcn-de}(A) \cup \mbox{lcn-pa}(A)$, and the latter union
can be written as
$\{A\} \cup \mbox{lcn-pa}(A) \cup ( (\mbox{lcn-pa}(A))^c \cap \mbox{lcn-de}(A) )$,
again a union of disjoint sets. So one of the following must hold: \\
$\bullet$ We have $B=A$, a contradiction. \\
$\bullet$ We have $B \in \mbox{lcn-pa}(A)$. Then there is a directed edge from $B$ to $A$,
or a bi-directed edge between them, and $B \in \mbox{bd}(A)$, a contradiction. \\
$\bullet$ We have $B \in (\mbox{lcn-pa}(A))^c \cap \mbox{lcn-de}(A)$. Then there
must be a path from $A$ to $B$ in the dependency graph (with intermediate nodes that are
not parents), and by construction of the
structure there must be a path from $A$ to $B$
in the structure (with intermediate nodes that are
not boundary nodes). The latter path cannot be an undirected path;
otherwise, it would have passed through a parent of $A$ in the dependency graph,
contradicting the assumption that $B$ is a lcn-descendant. As there is a directed
path that cannot go through a parent in the structure, $B \in \mbox{sde}(A)$, a contradiction. \\
So, we always get a contradiction; hence if
$B \in \mathcal{N} \backslash \{ \{A\} \cup \mbox{sde}(A) \cup \mbox{bd}(A)\}$
then
$B \in \mathcal{N} \backslash \{ \{A\} \cup \mbox{lcn-de}(A) \cup \mbox{lcn-pa}(A)\}$.
Thus the latter two sets are identical and the proof is finished.
\end{proof}
\section{Chain Graphs and Factorization}\label{section:NoDirectedCycles}
If the structure of a LCN is a directed acyclic graph, the LMC(LCN)
is actually the usual local Markov condition for directed
acyclic graphs as applied to Bayesian or credal networks
\cite{CozmanAI2000,Maua2020IJARthirty}.
If instead all constraints in a LCN belong to $\mathcal{T}_U$, all
of them only referring to ``unconditional'' probabilities (that is,
$\varphi=\top$ in every constraint),
then the structure of the LCN is an undirected graph
endowed with the usual local Markov condition for undirected graphs.
These previous points can be generalized in a satisfying way
{\em whenever the structure contains no directed cycle}:
\begin{theorem}\label{theorem:EqualityChainLCN}
If the structure of a LCN is a chain graph, then the
Markov condition LMC(LCN) in Definition \ref{definition:LCNcondition}
is identical, with respect to the independence relations it imposes,
to the local Markov condition for chain graphs applied to the structure.
\end{theorem}
\begin{proof}
Combine Theorem \ref{theorem:EqualityMarkovConditions} and the next lemma.
\end{proof}
\begin{lemma}\label{theorem:DescendantsQian}
In a chain graph, the set of descendants and the set
of strict descendants of a node are identical.\footnote{Sets of descendants
and sets of strict descendants may actually differ in the presence of
directed cycles: for instance, in Figure \ref{figure:GraphExamples}.b, node $B$
has descendants $\{A,C,D\}$ and strict descendants $\{A,D\}$.}
\end{lemma}
\begin{proof}
If there is a directed path from $A$ that reaches
a parent of $A$, then $\mathcal{G}$ has a directed
cycle and is not a chain graph. Hence, the definition
of strict descendant is not affected by parents and
it collapses to the definition of descendant.
\end{proof}
The significance of the previous theorem is that, assuming
that all probabilities are positive, the local
Markov condition for a chain graph is equivalent both to the
global Markov condition and to the factorization property of
chain graphs. This is useful because it allows us to
break down the probability distribution over all random
variables in a LCN into smaller pieces
that, one hopes, require much less specification effort.
\begin{example}\label{example:Factorization}
Figure \ref{figure:DependencyStructure}.b
depicts a structure that is in fact a chain graph.
We can group variables to obtain chain components
$F_{1,2,3}$ and $S_{1,2,3}$ and
draw a directed acyclic graph with the chain
components, as in
Figure \ref{figure:DependencyStructure}.c.
The joint probability distribution factorizes, as in
Expression (\ref{equation:BayesNet}), into
\begin{eqnarray*}
\lefteqn{\pr{F_{1,2,3}=f, S_{1,2,3}=s, C_1\!=\!c_1, C_2\!=\!c_2, C_3\!=\!c_3} =} \\
& & \pr{F_{1,2,3}=f} \, \pr{S_{1,2,3}=s|F_{1,2,3}=f} \\
& & \times \, \pr{C_1=c_1|S_{1,2,3}=s} \, \pr{C_2=c_2|S_{1,2,3}=s} \, \pr{C_3=c_3|S_{1,2,3}=s},
\end{eqnarray*}
where $f$ is a configuration of the random variables in $F_{1,2,3}$, while
$s$ is a configuration of the random variables in $S_{1,2,3}$.
Because there are no independence relations ``inside'' the
chain components, this factorization is guaranteed even if
some probability values are equal to zero \cite{Cowell99}.
Suppose that the three constraints $0.5 \leq \pr{F_i|F_j \wedge F_k} \leq 1$
were replaced so that we had a similar structure where,
instead of a single chain component with $F_1$, $F_2$, $F_3$, we had
two chain components, one with $F_1$ and $F_2$, the other with $F_2$ and $F_3$.
The chain components might be organized as in Figure \ref{figure:DependencyStructure}.d.
If all probabilities are positive, that chain graph leads
to a factorization of the joint probability distribution similar
to the previous one, but now
$\pr{F_{1,2,3}=f_1f_2f_3}=
\mathbb{F}_1(F_{1,2}=f_1f_2) \mathbb{F}_2(F_{2,3}=f_2f_3)$,
where $\mathbb{F}_1$ and $\mathbb{F}_2$ are positive functions,
and the values of $F_1$, $F_2$ and $F_3$ are indicated by
$f_1$, $f_2$, $f_3$ respectively.
\end{example}
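To see such a factorization at work numerically, here is a small sketch (ours): it fills the conditional tables of the component graph in Figure \ref{figure:DependencyStructure}.c with random entries and checks that their product defines a proper joint distribution.
\begin{verbatim}
import itertools, random

random.seed(0)

def random_conditional(n_child_bits, n_parent_bits):
    # A random conditional table P(child | parent) over 0/1 vectors.
    table = {}
    for pa in itertools.product((0, 1), repeat=n_parent_bits):
        weights = [random.random() for _ in range(2 ** n_child_bits)]
        z = sum(weights)
        children = itertools.product((0, 1), repeat=n_child_bits)
        for ch, w in zip(children, weights):
            table[ch, pa] = w / z
    return table

# Components as in Figure (c): F_{1,2,3}, S_{1,2,3}, then C_1, C_2, C_3.
p_f = random_conditional(3, 0)
p_s = random_conditional(3, 3)
p_c = [random_conditional(1, 3) for _ in range(3)]

total = 0.0
for f in itertools.product((0, 1), repeat=3):
    for s in itertools.product((0, 1), repeat=3):
        for c in itertools.product((0, 1), repeat=3):
            p = p_f[f, ()] * p_s[s, f]
            for i in range(3):
                p *= p_c[i][(c[i],), s]
            total += p
print(total)  # 1.0 up to floating-point error
\end{verbatim}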
In the example above, the assumption that probabilities are positive
is important: when some probabilities are zero, there is no guarantee
that a factorization actually exists \cite{Moussouris74}.
This is unfortunate as a factorization leads to valuable computational
simplifications.
One strategy then is to guarantee that all configurations do
have positive probability, possibly by adding language directives
that bound probabilities from below. A language command might consist
of explicit bounds, say $0.001$, or even a direct guarantee of
positivity without an explicit bounding value.
This solution may be inconvenient if we do have some hard constraints
in the domain. For instance, we may impose that $A \vee B$ must hold (in which
case $\pr{\neg A \wedge \neg B}=0$).
However, it is still possible to obtain a factorization
if hard constraints are imposed. Say we have a formula, for instance
$A \vee B$, that must be satisfied. We treat it as a constraint
$1 \leq \pr{A \vee B} \leq 1$ in $\mathcal{T}_U$, thus guaranteeing that
there is a clique containing its propositions/random variables.
Then we remove the impossible configurations of these random variables
(in our running example, we remove $A = B = 0$), thus reducing the
number of possible configurations for the corresponding clique.
A factorization is obtained again in the reduced space of
configurations, provided the remaining configurations do have positive probabilities.
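As a minimal sketch of the pruning step, assuming hard constraints are handed to us as Boolean predicates over configurations (an encoding we choose here just for illustration), the reduced configuration space of a clique can be enumerated as follows; the factor $\phi$ is then specified only on the surviving configurations.
\begin{verbatim}
from itertools import product

def surviving_configs(props, hard_constraints):
    # Enumerate 0/1 configurations of props and keep those that
    # satisfy every hard constraint (each given as a predicate).
    for values in product((0, 1), repeat=len(props)):
        assignment = dict(zip(props, values))
        if all(c(assignment) for c in hard_constraints):
            yield assignment

# Hard constraint "A or B", as in the running example:
configs = list(surviving_configs(
    ["A", "B"], [lambda v: v["A"] == 1 or v["B"] == 1]))
# configs == [{'A': 0, 'B': 1}, {'A': 1, 'B': 0}, {'A': 1, 'B': 1}]
\end{verbatim}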
Finally, an entirely different strategy may be pursued:
adopt a stronger Markov condition
that guarantees factorization (and hence global independence relations)
in all circumstances.
\citet{Moussouris74} has identified one such condition, where a system
is {\em strongly Markovian} in case a Markov condition holds for the system
and suitable sub-systems. That (very!) strong condition forces zero
probabilities to be, in a sense, localized, so that probabilities
satisfy a nice factorization property;
alas, the condition cannot be guaranteed for all graphs, and its consequences
have not been explored in depth so far.
\section{Directed Cycles}
As noted already, existence of a factorization is a very desirable
property for any probabilistic formalism: not only it simplifies
calculations, but it also emphasizes modularity in modeling and ease
of understanding.
In the previous section we have shown that LCNs whose structure is
a chain graph do have, under a positivity assumption, a well-known
factorization property. In this section and the next one we examine how that result
might be extended when structures have directed cycles.
The LMC(LCN) is, of course, a local condition that can be applied
even in the presence of directed cycles. However, local Markov conditions
may not be very satisfactory in the presence of directed cycles, as
a simple yet key example suggests:
\begin{example}\label{example:LongCycle}
Take a LCN whose dependency graph is a long cycle
$A_1 \rightarrow A_2 \rightarrow \dots \rightarrow A_k \rightarrow A_1$,
for some large $k$. No $A_i$ has any non-descendant non-parent.
And no $A_i$ has any non-strict-descendant non-parent either.
The local Markov conditions we have contemplated do not
impose {\em any} independence relation.
\end{example}
Local conditions seem too weak when there are long cycles.
On the other hand, a global condition may work fine in those settings.
For instance, apply the GMC(C) to the graph in Example \ref{example:LongCycle};
the condition does impose non-trivial independence relations such as
$A_1 \rotatebox[origin=c]{90}{$\models$} A_3,\dots,A_{k-1}|A_2,A_k$ and
$A_2 \rotatebox[origin=c]{90}{$\models$} A_4,\dots,A_{k}|A_1,A_3$
(and more generally,
for any $A_i$ with $2<i<k-2$,
we have $A_i \rotatebox[origin=c]{90}{$\models$} A_1,\dots,A_{i-2},A_{i+2},\dots,A_k | A_{i-1},A_{i+1}$).
At this point it is mandatory to examine results by \citet{Spirtes95},
as he has studied local and global conditions for directed graphs,
obtaining factorization results even in the presence of directed cycles.
The local Markov condition adopted by \citet{Spirtes95} is just
the one adopted for directed graphs in Section \ref{section:Graphs}:
\begin{definition}[LMC(D)]\label{definition:SpirtesLMC}
A node $A$ is independent, given its parents,
of all nodes that are not $A$ itself nor descendants nor parents of $A$.
\end{definition}
The global Markov condition adopted by \citet{Spirtes95} is just
the one adopted for chain graphs in Section \ref{section:Graphs}:
\begin{definition}[GMC(C)]\label{definition:SpirtesGMC}
Given any triple $(\mathcal{N}_1,\mathcal{N}_2,\mathcal{N}_3)$
of disjoint subsets of $\mathcal{N}$,
if $\mathcal{N}_2$ separates
$\mathcal{N}_1$ and $\mathcal{N}_3$ in the graph
$\mathcal{G}^{ma}(\mathcal{N}_1,\mathcal{N}_2,\mathcal{N}_3)$,
then nodes $\mathcal{N}_1$
and $\mathcal{N}_3$ are independent given nodes $\mathcal{N}_2$.
\end{definition}
Spirtes shows that the LMC(D) is not equivalent to the GMC(C)
for directed graphs with directed cycles.
This observation can be adapted to our setting as follows:
\begin{example}\label{example:Spirtes}
Suppose we have a LCN whose dependency graph is depicted
in Figure \ref{figure:GraphExamples}.d.
For instance, we might have
$0.1 \leq \pr{X|Y} \leq 0.2$ whenever $Y \rightarrow X$ is an edge
in that figure.
Assume all configurations have positive probability.
The LMC(D) applied to this dependency graph yields only
$A \rotatebox[origin=c]{90}{$\models$} C$, $B \rotatebox[origin=c]{90}{$\models$} C|A,D$ and $A \rotatebox[origin=c]{90}{$\models$} D|B,C$.
However, if we apply the GMC(C) directly
to the dependency graph, we {\em do not} get the same independence
relations: then we only
obtain $A \rotatebox[origin=c]{90}{$\models$} C$ and $A \rotatebox[origin=c]{90}{$\models$} C|B,D$, perhaps a surprising
result (in this case, the graph $\mathcal{G}^{ma}(A,B,C,D)$ is
depicted in Figure \ref{figure:MoralGraphs}.d).
\end{example}
\citet[Lemma 3]{Spirtes95} has shown that, for a directed graph
that may have directed cycles, a positive probability distribution
over the random variables is a product of factors, one per random
variable, iff the distribution satisfies the GMC(C) for the graph.
Note that the GMC(C) is equivalent, {\em for
graphs without directed cycles}, under
a positivity assumption, to a local Markov condition:
\begin{definition}[LMC(C)]\label{definition:LMCchaingraphs}
A node $A$ is independent, given its boundary,
of all nodes that are not $A$ itself nor descendants nor boundary nodes of $A$.
\end{definition}
However, there is a difficulty in applying Spirtes' result to our setting.
\begin{example}\label{example:SpirtesContinued}
Consider Example \ref{example:Spirtes}.
The structure of the LCN is the chain graph in Figure \ref{figure:GraphExamples}.e,
and we know that the LMC(LCN) is equivalent to the LMC(C) and GMC(C) for
chain graphs. In fact, the LMC(LCN), the LMC(C), the LMC(C-STR), and the GMC(C)
also yield only
$A \rotatebox[origin=c]{90}{$\models$} C$, $B \rotatebox[origin=c]{90}{$\models$} C|A,D$ and $A \rotatebox[origin=c]{90}{$\models$} D|B,C$ when applied to the structure.
Clearly this is not the same set of independence relations imposed by
the GMC(C) applied to the dependency graph (as listed in Example \ref{example:Spirtes}).
There is a difference between undirected and bi-directed edges
when it comes to the GMC(C).
\end{example}
The message of this example is that we cannot impose the GMC(C)
on (a suitable version of) directed dependency graphs and hope
to keep the LMC(LCN) by \citet{Qian2022}.
If we want the factorization induced
by the GMC(C) on (a version of) dependency graphs, we must
somehow modify the original semantics for LCNs by \citet{Qian2022}.
It is worth summarizing the discussion so far.
First, it is well-known that the LMC(C) and the GMC(C) are equivalent,
under a positivity assumption, for chain graphs (both conditions
may differ in the presence of directed cycles).
Second, we know that the LMC(LCN) for dependency graphs is
equivalent to
the LMC(C-STR) with respect to the corresponding structures.
And if the structure is a chain graph, then the LMC(C-STR) and
the LMC(C) are equivalent when applied to the structure.
But for general dependency graphs any
local condition seems quite weak.
We might move to general dependency graphs by adapting
the GMC(C) to them, so as to look for a factorization result;
however, we saw that the result is not equivalent to
what we obtained by applying the GMC(C) to structures.
In the next section we examine alternative semantics
that are based on applying the GMC(C) to structures (possibly
with directed cycles).
Before we jump into that, it is worth noticing that
there are many other relevant results in the literature besides
the ones by Spirtes.
For instance, {\em dependency networks} \cite{Heckerman2000}
allow for directed cycles and do have a modular specification
scheme; they have only an approximate factorization, but that
may be enough in applications. Another proposal has been
advanced by \citet{Schmidt2009}, where directed cycles are
allowed and the adopted Markov condition looks only at the
{\em Markov blanket} of nodes; it does not seem that a factorization
has been proven for that proposal, but it is attractive in its simplicity.
There are also many kinds of graphs that have been contemplated to
handle causal loops and dynamic feedback systems
\cite{Baier2022,Hyttinen2012,Bongers2021}.
This is indeed
a huge literature, filled with independence conditions and
factorization properties, to which we cannot do justice in
the available space. It is necessary to examine whether
we can bring elements of those previous efforts into
LCNs. We leave a more detailed study for
the future.
\section{New Semantics for LCNs}
In this section we explore new semantics for LCNs by
applying the GMC(C) to structures.
This is motivated by the weakness of local conditions as discussed
in the previous section, and also on the fact that
a condition based on moralized graphs is the most obvious route
to factorization properties (as the Hammersley-Clifford theorem can then
be invoked under a positivity assumption~\cite{Moussouris74}).
Here is a (new) semantics: a LCN represents
the set of probability distributions over its nodes such that all
constraints in the LCN are satisfied, and each distribution satisfies
the GMC(C) with respect to the structure.
Note that the GMC(C) is equivalent to the LMC(LCN) when a structure
is a chain graph, but these conditions may differ in the presence
of directed cycles.
The path to a factorization result is then as follows. Take the structure
and, for each node $A$, build a set $\mathcal{C}_A$ with all nodes
that belong to directed cycles starting at $A$. If there is a directed
cycle in a set $\mathcal{C}_B$ such that $B$ is in $\mathcal{C}_A$, then
merge $\mathcal{C}_A$ and $\mathcal{C}_B$ into a set $\mathcal{C}_{A,B}$;
repeat this until
there are no more sets to merge (this must stop, in the worst case
with a single set containing all nodes). For each set, replace all
nodes in the set by a single ``super''-node, directing all edges
in and out of nodes in the set to this super-node. The resulting
graph has no directed cycles, so the GMC(C) applied to it
results in the usual factorization over chain components of
the resulting graph. Now each super-node is in fact a set of
nodes that can be subject to further factorization, even though
it is an open question whether a decomposition can be obtained
with factors that are directly related to graph properties.
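The merging step just described can be read as a strongly-connected-component computation: two distinct nodes lie on a common directed cycle exactly when each is reachable from the other, so repeatedly merging the sets $\mathcal{C}_A$ produces the non-trivial strongly connected components. A minimal sketch (ours, using the \texttt{networkx} library; undirected edges are left out here and would simply be carried over between the resulting super-nodes):
\begin{verbatim}
import networkx as nx

def collapse_directed_cycles(directed_edges):
    # Nodes on a common directed cycle form a strongly connected
    # component, so the repeated merging of the sets C_A is the
    # SCC condensation of the directed part of the graph.
    g = nx.DiGraph()
    g.add_edges_from(directed_edges)
    dag = nx.condensation(g)  # acyclic; dag.nodes[i]["members"] is a super-node
    return dag

# Directed cycle B -> C -> D -> B, plus A -> B and E -> D:
dag = collapse_directed_cycles(
    [("A", "B"), ("B", "C"), ("C", "D"), ("D", "B"), ("E", "D")])
print(sorted(sorted(dag.nodes[i]["members"]) for i in dag.nodes))
# [['A'], ['B', 'C', 'D'], ['E']]
\end{verbatim}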
To continue, we suggest that,
instead of using structures as mere secondary objects that help
us clarify the meaning of dependency graphs,
structures should be the primary tools in dealing with LCNs.
That is, we should translate every LCN to its structure (without
going through the dependency graph) and then apply appropriate
Markov conditions there. Given a LCN, we can build its structure
by taking every proposition as a node and then:
\begin{enumerate}
\item For each constraint $\alpha \leq \pr{\phi|\varphi} \leq \beta$ in
$\mathcal{T}_U$, add an undirected edge between each pair of
proposition-nodes in $\phi$.
\item For each constraint $\alpha \leq \pr{\phi|\varphi} \leq \beta$
add a directed edge from each proposition-node in
$\varphi$ to each proposition-node in $\phi$ (if $\varphi$ is $\top$,
there is no such edge to add).
\item Remove multiple identical edges.
\item For each pair of nodes $A$ and $B$,
if there is a bi-directed edge $A \leftrightarrows B$ between them,
replace the two edges by a single undirected edge $A \sim B$.
\end{enumerate}
For instance, the procedure above
goes directly from the LCN in Example \ref{example:Smokers}
to the structure in Figure \ref{figure:DependencyStructure}.b.
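A small sketch of this construction (ours, not from \citet{Qian2022}) may help; each constraint is abstracted here to the sets of propositions occurring in $\phi$ and in $\varphi$, plus a flag indicating membership in $\mathcal{T}_U$, and when an undirected and a parallel directed edge arise between the same pair we let the undirected edge subsume the directed one:
\begin{verbatim}
def lcn_structure(constraints):
    # Build the structure of a LCN directly from its constraints.
    # Each constraint is (phi_props, varphi_props, in_T_U): the
    # propositions in the conditioned formula phi, those in the
    # conditioning formula varphi (empty for a tautology), and a
    # flag telling whether the constraint belongs to T_U.
    undirected, directed = set(), set()
    for phi_props, varphi_props, in_T_U in constraints:
        if in_T_U:                        # step 1
            undirected |= {frozenset((a, b))
                           for a in phi_props for b in phi_props if a != b}
        for a in varphi_props:            # step 2
            for b in phi_props:
                directed.add((a, b))
    for a, b in list(directed):           # step 4: A <-> B becomes undirected
        if (b, a) in directed:
            directed -= {(a, b), (b, a)}
            undirected.add(frozenset((a, b)))
    # step 3 is implicit (sets hold no duplicates); we also drop a
    # directed edge that runs parallel to an undirected one.
    directed = {e for e in directed if frozenset(e) not in undirected}
    return undirected, directed

# Two constraints from Example 1 (both in T_U):
#   0.5 <= P(F_1 | F_2 ^ F_3) <= 1  and  0 <= P(S_1 v S_2 | F_1) <= 0.2
print(lcn_structure([({"F1"}, {"F2", "F3"}, True),
                     ({"S1", "S2"}, {"F1"}, True)]))
# ({frozenset({'S1','S2'})},
#  {('F2','F1'), ('F3','F1'), ('F1','S1'), ('F1','S2')})  -- up to ordering
\end{verbatim}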
When we think of structures this way, we may suspect
that bi-directed edges should be handled with more care,
so as to differentiate the symmetric connections that appear when a pair of
propositions occurs together in a formula $\phi$ from the mutual influences that arise when one
proposition is conditioned on the other and vice-versa.
An alternative semantics would be as follows. Take a LCN and build a
{\em mixed-structure} by going through the first two steps above.
That is, create a node per proposition that appears in the LCN;
then take each constraint in $\mathcal{T}_U$ and add undirected
edges between any two propositions in $\phi$, and finally take
each constraint and add a directed edge from each proposition that
appears in $\varphi$ to each proposition that appears in the
corresponding $\phi$.
Figure \ref{figure:BidirectedStructures} depicts the mixed-structure
for Example \ref{example:Smokers}.
Now adopt: a LCN represents
the set of probability distributions over its nodes such that all
constraints in the LCN are satisfied, and each distribution satisfies
the GMC(C) with respect to the mixed-structure.
\begin{figure}[t]
\centering
\begin{tikzpicture}
\node[draw,rectangle,rounded corners,fill=yellow] (f1) at (1,2.5) {$F_1$};
\node[draw,rectangle,rounded corners,fill=yellow] (f2) at (3.5,2.5) {$F_2$};
\node[draw,rectangle,rounded corners,fill=yellow] (f3) at (6,2.5) {$F_3$};
\node[draw,rectangle,rounded corners,fill=yellow] (s1) at (1,1) {$S_1$};
\node[draw,rectangle,rounded corners,fill=yellow] (s2) at (3.5,1) {$S_2$};
\node[draw,rectangle,rounded corners,fill=yellow] (s3) at (6,1) {$S_3$};
\node[draw,rectangle,rounded corners,fill=yellow] (c1) at (1,0) {$C_1$};
\node[draw,rectangle,rounded corners,fill=yellow] (c2) at (3.5,0) {$C_2$};
\node[draw,rectangle,rounded corners,fill=yellow] (c3) at (6,0) {$C_3$};
\draw[->,>=latex,thick] (s1)--(c1);
\draw[->,>=latex,thick] (s2)--(c2);
\draw[->,>=latex,thick] (s3)--(c3);
\draw[->,>=latex,thick,blue] (f1) -- (s1);
\draw[->,>=latex,thick,blue] (f1)--(s2);
\draw[->,>=latex,thick,blue] (f2) -- (s2);
\draw[->,>=latex,thick,blue] (f2)--(s3);
\draw[->,>=latex,thick,blue] (f3) -- (s3);
\draw[->,>=latex,thick,blue] (f3) -- (s1);
\draw[->,>=latex,thick,blue] (f1) to[out=10,in=170] (f2);
\draw[->,>=latex,thick,blue] (f2) to[out=-170,in=-10] (f1);
\draw[->,>=latex,thick,blue] (f2) to[out=10,in=170] (f3);
\draw[->,>=latex,thick,blue] (f3) to[out=-170,in=-10] (f2);
\draw[->,>=latex,thick,blue] (f1) to[out=15,in=165] (f3);
\draw[->,>=latex,thick,blue] (f3) to[out=-165,in=-15] (f1);
\draw[thick,blue] (s1) -- (s2);
\draw[thick,blue] (s2) -- (s3);
\draw[thick,blue] (s3) to[out=-165,in=-15] (s1);
\end{tikzpicture}
\caption{The mixed-structure for the LCN in Example \ref{example:Smokers}.}
\label{figure:BidirectedStructures}
\end{figure}
The next example emphasizes the differences between semantics.
\begin{example}\label{example:MixedStructure}
Suppose we have a LCN with constraints $\pr{B|A} = 0.2$,
$\pr{D|E} = 0.3$, $\pr{B \vee C} = 0.4$, $\pr{C \vee D} = 0.5$.
The structure and the mixed-structure of this LCN coincide; both are
depicted in Figure \ref{figure:MixedStructure}.a.
Consider another LCN with constraints
$\pr{B|A \wedge C} = 0.2$, $\pr{C|B \wedge D} = 0.3$,
and $\pr{D|C \wedge E} = 0.4$.
This second LCN has the same structure as the first one, but the mixed-structure
is depicted in Figure \ref{figure:MixedStructure}.b.
The GMC(C) produces quite different sets of independence relations when applied
to these distinct mixed-structures;
for instance, $A,B \rotatebox[origin=c]{90}{$\models$} D | C, E$ in the first LCN, but not necessarily in the second;
$A,B \rotatebox[origin=c]{90}{$\models$} E | C, D$ in the second LCN, but not necessarily in the first.
This seems appropriate as the LCNs convey quite distinct scenarios, one related to the
symmetry of logical constraints, the other related to the links induced by directed influences.
\end{example}
We hope to pursue a comparison between the theoretical and pragmatic aspects of these
semantics in future work.
\section{Conclusion}\label{section:Conclusion}
In this paper we visited
many Markov conditions that can be applied, if properly adapted, to
Logical Credal Networks \cite{Qian2022}. We reviewed existing concepts
and introduced the notion of structure of a LCN, showing that the
original local condition LMC(LCN) can be viewed as a local condition
on structures. We then showed that the LMC(LCN) is equivalent to a usual
local condition when
the structure is a chain graph, and this leads to a factorization result.
Moreover, we introduced a new semantics
based on structures and a global Markov condition --- a semantics
that agrees with the original one when the structure is a chain
graph but that offers a possible path to factorization properties.
There are many issues left for future work.
LCNs stress the connection between the syntactic form of constraints
and the semantic consequences of independence assumptions, a theme
that surfaces in many probabilistic logics. We must investigate more
carefully the alternatives when extracting independence relations from
constraints, in particular to
differentiate ways in which bi-directed edges are
created.
\begin{figure}[t]
\centering
\begin{tikzpicture}[scale=0.9]
\node[draw,rectangle,rounded corners,fill=yellow] (a) at (1,2) {$A$};
\node[draw,rectangle,rounded corners,fill=yellow] (b) at (2,2) {$B$};
\node[draw,rectangle,rounded corners,fill=yellow] (c) at (3,1.5) {$C$};
\node[draw,rectangle,rounded corners,fill=yellow] (e) at (1,1) {$E$};
\node[draw,rectangle,rounded corners,fill=yellow] (d) at (2,1) {$D$};
\draw[->,>=latex,thick] (a)--(b);
\draw[thick] (b)--(c);
\draw[thick] (c)--(d);
\draw[->,>=latex,thick] (e)--(d);
\node at (0.3,1.5) {\small (a)};
\end{tikzpicture}
\hspace*{1cm}
\begin{tikzpicture}[scale=0.9]
\node[draw,rectangle,rounded corners,fill=yellow] (a) at (1,2) {$A$};
\node[draw,rectangle,rounded corners,fill=yellow] (b) at (2,2) {$B$};
\node[draw,rectangle,rounded corners,fill=yellow] (c) at (3,1.5) {$C$};
\node[draw,rectangle,rounded corners,fill=yellow] (e) at (1,1) {$E$};
\node[draw,rectangle,rounded corners,fill=yellow] (d) at (2,1) {$D$};
\draw[->,>=latex,thick] (a)--(b);
\draw[->,>=latex,thick] (b) to[out=0,in=130] (c);
\draw[->,>=latex,thick] (c) to[out=160,in=-30] (b);
\draw[->,>=latex,thick] (c) to[out=-160,in=30] (d);
\draw[->,>=latex,thick] (d) to[out=0,in=-130] (c);
\draw[->,>=latex,thick] (e)--(d);
\node at (0.3,1.5) {\small (b)};
\end{tikzpicture}
\caption{Structures and mixed-structures in Example \ref{example:MixedStructure}.}
\label{figure:MixedStructure}
\end{figure}
We must also examine positivity assumptions. What is the best way to
guarantee a factorization? Should we have sweeping assumptions, or
should we require the user to explicitly impose a positivity
assumption? Should we allow for logical constraints that assign
probability zero to some configurations; if so, which kinds of
configurations, and how to make those constraints compatible with
factorization properties?
It is also important to study a large number of Markov conditions
that can be found in the literature, both the ones connected with chain
graphs and the ones connected with causal and feedback models, that we did not
deal with in this paper. We must verify which conditions lead
to factorization results, and which conditions are best
suited to capture the content of logical formulas, causal
influences, and feedback loops.
In a more applied perspective, we must investigate whether the ideas
behind LCNs can be used with practical specification
languages such as Probabilistic Answer Set Programming, and we must
test how various semantics for LCNs fare in realistic settings.
\section*{Acknowledgements}
This work was carried out at the Center for Artificial Intelligence (C4AI - USP/IBM/FAPESP), with support by the S\~ao Paulo Research Foundation (FAPESP grant 2019/07665-4) and by the IBM Corporation. The author was partially supported by CNPq grant 312180/2018-7. We acknowledge support by CAPES - Finance Code 001.
|
{
"arxiv_id": "2302.14137",
"language": "en",
"timestamp": "2023-03-01T02:02:18",
"url": "https://arxiv.org/abs/2302.14137",
"yymm": "2302"
} |
\section{Introduction}
\label{Introduction}
The large majority of the TeV $\gamma$-ray sources detected so far, mainly thanks to surveys like the High Energy Stereoscopic System (H.E.S.S.) Galactic Plane Survey (HGPS; \citealt{HGPS}), are located in the Galactic plane, and most of them remain unidentified. More generally, the origin of the observed $\gamma$-ray emission is often uncertain. Indeed, while the Galactic plane is the best place to look for TeV $\gamma$-ray sources, it is quite a complex region in itself: the proximity of Galactic plane sources leads to source confusion, and large-scale diffuse emission needs to be taken into account. However, the diffuse emission is poorly understood and not very well modeled, partially due to our lack of knowledge about the gas distribution and the distribution of unresolved sources. In addition, the magnetic field structures can be quite complex and difficult to assess. The Galactic plane is also a site of star formation, involving giant molecular clouds (GMCs) that give rise to different kinds of interactions, shocks, and propagation and diffusion processes~\citep{Molecular_cloud_SFR, CR_propagation, galactic_CR}. The modeling of complex regions and the detailed morphological and spectral analysis of individual sources are crucial for testing different scenarios and obtaining a better understanding of the origin of the observed $\gamma$-ray emission.
The very-high-energy (VHE; $E>100$ GeV) $\gamma$-ray emission of the sources 3HWC~J1928+178 and 3HWC~J1930+188, reported in the third High Altitude Water Cherenkov (HAWC) catalog~\citep{3HWC_catalog} at the Galactic coordinates (52\textdegree93,~0\textdegree20) and (54\textdegree03,~0\textdegree32) respectively, and the new source HAWC~J1932+192 located at (54\textdegree69,~0\textdegree20), are the focus of this paper. Because of their possible association with pulsars, a classical pulsar wind nebula (PWN) scenario is studied. However, a molecular cloud in the vicinity of 3HWC~J1928+178 makes it a perfect candidate for studying the possible interaction of charged particles with the components of the cloud. After presenting a multiwavelength picture of the region in section~\ref{Multi-wavelength observations of the region}, and an overview of the HAWC data in section~\ref{HAWC observations}, we present the multicomponent modeling of the region in section~\ref{Method} and the results of the fit using a maximum likelihood approach in section~\ref{Results}. Section~\ref{Origin} is dedicated to an assessment of different hypotheses regarding the origin of the $\gamma$-ray emission of 3HWC~J1928+178. In particular, a scenario involving Inverse Compton (IC) scattering is considered, as well as a possible interaction with a nearby molecular cloud.
The conclusion is drawn in Section~\ref{Conclusion}. \\
\newpage
\section{Multiwavelength picture of the region}
\label{Multi-wavelength observations of the region}
\textbf{3HWC~J1930+188} is associated with the $\gamma$-ray emission of the PWN in the supernova remnant SNR~G54.1+0.3, located at 6.2~kpc~\citep{PWNG54_CO}. Studies of the X-ray emission using \textit{XMM-Newton} and \textit{Suzaku} data have inferred that the SNR~G54.1+0.3 would be $\sim$2000~years old ~\citep{G54_age}. It was first detected with 6.8$\sigma$ significance by the Very Energetic Radiation Imaging Telescope Array System (VERITAS) in 2010, with a total observation time of 36.6~hr~\citep{VERITAS_J1930}, and identified as the point-like source VER~J1930+188. With 16 additional hours of observation in 2015-2016~\citep{Veritas_Fermi_2HWCsources}, there is now a total exposure time of 46 hr from VERITAS on this region. Figure~\ref{Veritas_Chandra_NuSTAR} shows the latest VERITAS excess map of the region, zooming in on each HAWC source~\citep{Veritas_Fermi_2HWCsources}. It was shown that the centroid of the HAWC detection agrees with the VERITAS centroid position. However, the spectral index of the simple power law found for the HAWC source, $-2.74 \pm 0.12_{\mbox{\scriptsize stat}}$, is softer than that measured by VERITAS, $-2.18 \pm 0.2_{\mbox{\scriptsize stat}}$. Moreover, the differential flux at 7 TeV measured by HAWC is $(9.8 \pm 1.5) \times 10^{-15}$~TeV$^{-1}$~cm$^{-2}$~s$^{-1}$ while the differential flux at 1 TeV measured by VERITAS is $(6.6 \pm 1.3) \times 10^{-13}$~TeV$^{-1}$~cm$^{-2}$~s$^{-1}$. Extrapolating the HAWC spectrum to the VERITAS energy range gives an integrated flux seven times larger than the VERITAS flux, although it is still within the 2$\sigma$ statistical uncertainties of the VERITAS measurement.
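That extrapolation is a pure power-law exercise; the following sketch (ours; the integration band of the published comparison is not restated here, so the bounds below, including the 30 TeV upper limit, are assumptions) shows how the integrated-flux ratio scales with the assumed band:
\begin{verbatim}
def pl_int_flux(n0, e0, index, e_lo, e_hi):
    # Integral of dN/dE = n0 * (E/e0)**index from e_lo to e_hi;
    # energies in TeV, n0 in TeV^-1 cm^-2 s^-1, index < -1.
    g = index + 1.0
    return n0 * e0 / g * ((e_hi / e0)**g - (e_lo / e0)**g)

# HAWC: 9.8e-15 at 7 TeV, index -2.74; VERITAS: 6.6e-13 at 1 TeV, -2.18.
for e_lo in (0.2, 0.3, 1.0):      # assumed lower bounds of the band (TeV)
    hawc = pl_int_flux(9.8e-15, 7.0, -2.74, e_lo, 30.0)
    veritas = pl_int_flux(6.6e-13, 1.0, -2.18, e_lo, 30.0)
    print(f"{e_lo:.1f} TeV threshold: flux ratio = {hawc / veritas:.1f}")
\end{verbatim}
The ratio grows from about 2 to 5 as the assumed threshold decreases over these choices; the factor of seven quoted above corresponds to the band actually used in the VERITAS analysis.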
The H.E.S.S. collaboration has also reported the detection of this source in the HGPS~\citep{HGPS} catalog and referenced it as a composite SNR, as it was not possible to distinguish the origin of the emission between the shell and the PWN.
At the center of the PWN, the pulsar PSR~J1930+1852 was discovered in 2002 by the Arecibo radio telescope, with a period of 136~ms ~\citep{Discovery_PSRJ1930}. With a derived spin-down power of $\dot{E}~=~1.2 \times 10^{37}$ erg~s$^{-1}$ and a characteristic age of $\sim$2900~yr~\citep{Discovery_PSRJ1930}, it is amongst the youngest and most energetic known pulsars.
Observations of the X-ray emission by the \textit{Chandra} X-ray observatory over 290.77 ks reveal the pulsar and the PWN~\citep{G54_Chandra_Spiter}. In addition, IR observations by the \textit{Spitzer} space telescope~\citep{G54_Chandra_Spiter} and the \textit{Herschel} space observatory~\citep{G54_Hershel} show a shell of gas and dust, debris from the supernova explosion.
The shell contains compact IR sources arranged in a ringlike structure. These may be young stellar objects, whose formation would have been triggered by the wind of the progenitor star~\citep{G54_IRshell}. They could also be ejecta dust heated by early-type stars belonging to the stellar cluster in which the star exploded~\citep{G54_Chandra_Spiter}. Both \textit{Chandra} X-ray and \textit{Spitzer} IR images are visible in the composite image in the left part of Figure \ref{Veritas_Chandra_NuSTAR}. A morphological association with a molecular cloud detected from CO observations has been suggested~\citep{PWNG54_CO}, but no evidence for interaction with this cloud was found~\citep{CO_G54}. A $^{12}$CO map (rotation emission line $J=1\rightarrow0$ at 115~GHz;~\citealt{COsurvey}) and a radio map from the GaLactic and Extragalactic All-sky MWA (GLEAM) survey~\citep{GLEAMsurvey} are shown in the right panel of Figure~\ref{MWL_1523d_CO_radio} where HAWC significance contours have been superimposed. This source will be referred to as J1930 hereafter. All the details relating to this source are summarized in Table~\ref{J1930_caracteristics} in the Appendix~\ref{Multi-wavelength information}. \\
\textbf{3HWC J1928+178} is located about one degree away from 3HWC~J1930+188. It was not detected by any Imaging Atmospheric Cherenkov Telescope (IACT), despite 46 and 36 hr of observations by VERITAS and H.E.S.S., respectively, until H.E.S.S. confirmed a detection with a significance above 5$\sigma$ using a new analysis method more appropriate for extended sources~\citep{HAWC-HESS_GP_ApJ}. It is detected by HAWC with more than 12$\sigma$ significance.
It is likely associated with the pulsar PSR~J1928+1746, located 0\textdegree03 away from the 3HWC source location, one of the pulsars discovered at radio wavelength in 2006 in a long-term pulsar survey of the Galactic plane using the Arecibo L-band Feed Array (ALFA;~\citealt{Discovery_PSRJ1928}). It is described as a young isolated pulsar with a period of 68.7~ms, a spin-down power of $\dot{E}$~=~$1.6\times10^{36}$~erg~s$^{-1}$ and a characteristic age of 82~kyr. The distance to it is estimated to be 4.3~kpc~\citep{ymw17}. No detections in X-ray by \textit{Chandra} or NuSTAR have been reported for this pulsar, as depicted by the bottom right-hand part of Figure \ref{Veritas_Chandra_NuSTAR}. However, the variable X-ray source CXO~J192812.0+174712 is found within the 3HWC source position uncertainties. The association with the 3HWC source has been studied by \citet{J1928_dark_accelerator} in the case of a binary system, although no variability has been seen at TeV energies. Finally, the unidentified Fermi source 4FGL~J1928.4+1801 is located 0\textdegree1 away from the 3HWC source.
This source will be referred to as J1928 hereafter. All the details relating to this source are summarized in Table~\ref{J1928_caracteristics} in the Appendix~\ref{Multi-wavelength information}.\\
\textbf{HAWC J1932+192} is spatially coincident with the pulsar PSR~J1932+1916, discovered by the \textit{Fermi}-LAT in 2013~\citep{Discovery_PSRJ1932} and classified as radio-quiet. It has a period of 208~ms, a spin-down power $\dot{E}$~=~$4.07\times10^{35}$~erg~s$^{-1}$ and a characteristic age of 35.4~kyr. It has also been observed in X-ray by \textit{Suzaku} and by the \textit{Swift} X-ray telescope, and an extended X-ray emission has been reported ~\citep{J1932_Xrays}. In that study, the emission was modeled with two Gaussians: a narrow one with a FWHM~$\leq0\arcmin5$ which could be associated with the pulsar, and a broad one with a FWHM of~$\sim$4$\arcmin$5, which could be interpreted as the PWN emission.
Using these observations, its distance is estimated as being between 2 and 6~kpc~\citep{J1932_Xrays}. This emission is located near the edge of the SNR~G54.4-0.3. It is clearly visible on the radio map, in the lower right-hand panel of Figure~\ref{MWL_1523d_CO_radio} in the shape of a circular feature with the pulsar on the edge.
Moreover, a CO structure was reported to be in morphological coincidence with the radio emission, with an evidence for the interaction of the SNR with the surrounding CO shell~\citep{G54.4_CO}.
For this SNR, the distance has been estimated as being 6.6~kpc~\citep{G54.4}. This source will be referred to as J1932 hereafter. All the details relating to this source are summarized in Table~\ref{J1932_caracteristics} in the Appendix~\ref{Multi-wavelength information}.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.9\linewidth]{Veritas_Chandra_NuSTAR.pdf}
\caption{Multiwavelength view of the region surrounding 3HWC J1928+178. The middle map is the VERITAS excess map of the region, adapted from~\citet{Veritas_Fermi_2HWCsources}. Superimposed are the locations and the 1$\sigma$ uncertainties on the locations of the HAWC sources (blue circles) and the Fermi 4FGL sources (green circles), as well as the locations of the pulsars (red crosses). The white contours are HAWC significance contours for 5$\sigma$, 6$\sigma$, 7$\sigma$, 8$\sigma$, 10$\sigma$ and 12$\sigma$ for 1523 days of data.
The top source, 3HWC~J1930+188, is detailed in the zoomed-in view on the left-hand side. The locations of the counterparts detected by VERITAS (yellow) and H.E.S.S. (pink) are represented. The extension of the radio emission is also shown (cyan). The dashed white box represents the size of the composite image at the top (3$\arcmin$ - 0\textdegree05). It depicts the X-ray emission of the pulsar (the bright white star) and the PWN detected by \textit{Chandra} (blue - NASA/CXC/SAO/T.Temim et al.), as well as the IR emission detected by \textit{Spitzer} (green is 8$\mu$m and red is 24$\mu$m - NASA/JPL-Caltech), revealing the dusty remains of a collapsed star. The bottom source 3HWC~J1928+178 is detailed in the zoomed-in image on the right-hand side. The dashed black box represents the NuSTAR background-subtracted map shown at the bottom (adapted from~\citet{J1928_dark_accelerator}). The bright source to the bottom right is CXO J192812.0+174712.
}
\label{Veritas_Chandra_NuSTAR}
\end{figure}
\section{HAWC observations}
\label{HAWC observations}
HAWC is an array of 300 water tanks covering an area of 22,000 m$^2$, each instrumented with four photomultiplier tubes.
The $\gamma$-ray-like events are classified with respect to the fraction of the array that was triggered.
They are assigned to one of the nine analysis bins, according to the definition in~\cite{HAWC_crab}, from analysis bin 1, gathering events triggering 7\% to 10\% of the array, to analysis bin 9, for events hitting 84\% to 100\% of the array. Low-energy events that trigger only a small fraction of the array are likely to be found in the low analysis bins, while the highest-energy events triggering most of the array will be found in the higher analysis bins. This analysis is restricted to bins 4 to 9, as a good compromise between reasonable performance at TeV energies and enough statistics. Indeed, the greater the fraction of the array that was hit, the more information is available and the lower the uncertainties on the reconstructed parameters. In particular, the $\gamma$/hadron separation improves with the increase in the analysis bins, reaching an efficiency of $1\times10^{-2}$ to $1\times10^{-3}$ for events in analysis bins 4 to 9.
The HAWC significance map of the region for 1523 days of data, produced with the reconstruction Pass 4, under the hypothesis of a point-like source and a spectral index of $-2.5$, is shown in Figure~\ref{MWL_1523d_CO_radio}.
Two sources, 3HWC~J1930+188 and 3HWC~J1928+178, are reported in the 3HWC~catalog~\citep{3HWC_catalog}.
\begin{figure}[ht!]
\centering
\includegraphics[width=1.0\linewidth]{MWL_1523d_CO_radio.pdf}
\caption{Left: X-ray, radio, IR and GeV $\gamma$-ray emission superimposed on the HAWC significance map for 1523 days, using analysis bins 4 to 9. Top right: velocity-integrated CO map~\citep{COsurvey}. Bottom right: 71-210 MHz radio map from the GLEAM survey~\citep{GLEAMsurvey}. Superimposed are the HAWC contours for 5$\sigma$, 6$\sigma$, 7$\sigma$, 8$\sigma$, 10$\sigma$ and 12$\sigma$. The locations of the HAWC sources are represented by the black dots. The positions of the pulsars PSR~J1930+1852, PSR~J1932+1916, and PSR~J1928+1746 are (292\textdegree63, 18\textdegree87), (293\textdegree08, 19\textdegree28) and (292\textdegree17, 17\textdegree77), respectively, according to the ATNF catalog~\citep{ATNFcatalog}, and are represented by the black crosses.
}
\label{MWL_1523d_CO_radio}
\end{figure}
\section{Method: the modeling of the region and fit of the HAWC data}
\label{Method}
Modeling this region is not trivial, because it requires disentangling the different sources of emission. An attempt to represent this complex region with several components is described here. For each component, the parameters are fitted simultaneously using the Multi-Mission Maximum Likelihood framework\footnote{The documentation is available at https://threeml.readthedocs.io/en/stable/ and the code at https://github.com/threeML/threeML} (3ML;~\citealt{3ML}) and the HAWC HAL plugin\footnote{The documentation and code are available at https://github.com/threeML/hawc\_hal}~\citep{ICRC_HAL}. This is based on a maximum likelihood approach, in which a model representing a particular region of the sky, here made of several components, is convolved with the instrument response and compared to the corresponding experimental data.
An initial model is defined for the region based on our current knowledge:
\begin{itemize}
\item VER J1930+188 and HESS J1930+188 are point-like sources associated with the PWN surrounding the pulsar PSR~J1930+1852. Hence, the source 3HWC~J1930+188 is defined as a point-like source initialized at the location of the pulsar (292\textdegree63, 18\textdegree87). This component will be used to model J1930.
\item The source 3HWC~J1928+178 is represented by a symmetric Gaussian with the initial location at the position of the pulsar PSR~J1928+1746 (292\textdegree18, 17\textdegree77) and an initial size of $\sigma$ = 0\textdegree1. This component will be used to model J1928.
\end{itemize}
These are the two components of the initial model, visible in panel (a) of Figure~\ref{summary_a}. There is no component for the galactic diffuse emission. The positions of the two components and the size of the extended component are left free. Their spectra are assumed to follow a simple power law with a free index initialized at $-2.5$ and a free differential flux at 10~TeV initialized at $1.0\times10^{-14}$~TeV$^{-1}$~cm$^{-2}$~s$^{-1}$. The fit is performed in an iterative process, starting by fitting the initial model to the data. For each component, a test statistic (TS) is computed, which compares the likelihood that a source is present against the hypothesis that there is no source but only background fluctuations:
\begin{equation}
\mbox{TS} = 2~\mbox{ln}\frac{\mathcal{L}(\mbox{source model})}{\mathcal{L}(\mbox{no source})}.
\end{equation}
If a remaining excess is found in the residual map, a point-like component with a power-law spectrum is added at the location of the excess, and the fit is performed again with the position and spectral parameters left free. This new component is kept if it significantly improves the fit, by at least $\Delta \mbox{TS} = 25$.
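To make the procedure concrete, the following minimal sketch shows how the initial two-component model could be set up and fitted with 3ML, astromodels, and the HAL plugin. It assumes the public APIs of these packages; the map-tree and response file names and the ROI radii are placeholders rather than the exact analysis configuration used here.
\begin{verbatim}
from threeML import Model, DataList, JointLikelihood
from astromodels import (PointSource, ExtendedSource,
                         Powerlaw, Gaussian_on_sphere)
from hawc_hal import HAL, HealpixConeROI
import astropy.units as u

# Region of interest roughly centered between the two sources.
roi = HealpixConeROI(data_radius=3.5, model_radius=4.5, ra=292.6, dec=18.3)
hawc = HAL("HAWC", "maptree.hd5", "response.hd5", roi)  # placeholder files
hawc.set_active_measurements(4, 9)                      # analysis bins 4 to 9

# Point-like component at PSR J1930+1852.
spec_1930 = Powerlaw()
spec_1930.K = 1.0e-14 / (u.TeV * u.cm ** 2 * u.s)  # differential flux at 10 TeV
spec_1930.piv = 10 * u.TeV
spec_1930.index = -2.5
j1930 = PointSource("J1930", ra=292.63, dec=18.87, spectral_shape=spec_1930)

# Extended component at PSR J1928+1746 (symmetric Gaussian, sigma = 0.1 deg).
spec_1928 = Powerlaw()
spec_1928.K = 1.0e-14 / (u.TeV * u.cm ** 2 * u.s)
spec_1928.piv = 10 * u.TeV
spec_1928.index = -2.5
shape_1928 = Gaussian_on_sphere(lon0=292.18, lat0=17.77, sigma=0.1)
j1928 = ExtendedSource("J1928", spatial_shape=shape_1928,
                       spectral_shape=spec_1928)

model = Model(j1930, j1928)
jl = JointLikelihood(model, DataList(hawc))
best_fit, like = jl.fit()
# The TS of a component is obtained by refitting without it and taking
# twice the difference in log-likelihood; a newly added component is
# kept only if it improves the fit by at least Delta TS = 25.
\end{verbatim}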
\section{Results}
\label{Results}
\subsection{Results of the fit}
Figure~\ref{summary_a}(a) shows the HAWC significance map of the region for a point-like source hypothesis and assuming a power-law spectrum with an index of $-2.5$, which are the standard parameters used to produce HAWC maps~\citep{3HWC_catalog}.
Superimposed in blue and green are the initial and fitted positions of the two components previously described in section~\ref{Method}. The width of the green circle represents the 1$\sigma$ uncertainty on the size of the Gaussian.
The fitted model is displayed in Figure~\ref{summary_a}(b) and the residual map and its distribution in Figure~\ref{summary_a}(c). The orange line is a fit to the distribution with a Gaussian function.
After this first iteration, excesses of $4\sigma$ and $6\sigma$ significance are found in the residual map near the pulsar PSR~J1932+1916 and at the location of J1928, respectively. To account for this, two components are added to the model at the locations of the excesses:
\begin{itemize}
\item A point-like source is initialized at ($\mbox{R. A.}=293^{\circ}07$, $\mbox{decl.}=19^{\circ}40$), near PSR~J1932+1916, with a simple power law as spectral model. This component will be simply called J1932.
\item An extended source is initialized at ($\mbox{R. A.}=292^{\circ}08$, $\mbox{decl.}=17^{\circ}79$), with initial size $\sigma$~=~0\textdegree1, with a simple power law as a spectral model. This is the new component for J1928.
\end{itemize}
The previous extended component from the initial model will now be called J1928-EXT; its initial position and size are set to the outputs of the first fit. The position, size, index, and flux normalization are again left free, and a fit is performed with the four components. The outputs of this second fit are summarized in Table~\ref{model_parameters}. The corresponding maps are displayed in Figure~\ref{summary_b}, using the same color code as in Figure~\ref{summary_a}. The spectra of the four components are shown in Figure~\ref{all_spec-PL}. The lower edges of the spectra are fixed to 1~TeV, the median energy of analysis bin 4 minus a 1$\sigma$ error. To determine the upper edges, individual fits are performed for each of the four components using a power law with an exponential cutoff for that component only, with the amplitude and cutoff energy as the only free parameters, the three other components being modeled by a simple power law with all parameters fixed. The cutoff energy is then fixed as well, so that only the flux normalization remains free, and is set to decreasing values until $\Delta\mbox{TS}=2$. In Table~\ref{model_parameters}, all of the fitted parameters are given with their statistical and systematic uncertainties. To estimate the systematic uncertainties, the same fit was performed again using different instrument response files.
The component representing J1928 is found to have a size of $\sigma~=~0^{\circ}18
\pm 0^{\circ}04_{\mbox{\scriptsize stat}}$ (39\% containment), while the other extended source, J1928-EXT, has a size of $\sigma~=~1^{\circ}43 \pm 0^{\circ}17_{\mbox{\scriptsize stat}}$.
The difference in TS between this model and the initial one is 45.
Given the different numbers of degrees of freedom of this model and the initial model, we can use the Akaike Information Criterion (AIC;~\citealt{AIC}), given by AIC = $2k - 2\mbox{ln}\mathcal{L}$, with $k$ being the number of free parameters and $\mathcal{L}$ being the maximum value of the likelihood function. This penalizes the model with the larger number of free parameters, so that the model with the fewest parameters is favored unless the extra parameters actually provide a substantially better fit. The best model is the one with the lowest AIC value. In this case, the four-component model is clearly preferred to the initial model, with $\Delta$AIC = 74.
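As a small generic illustration of the criterion (the numbers below are hypothetical and are not the values of this analysis):
\begin{verbatim}
def aic(k, log_like):
    # Akaike Information Criterion: AIC = 2k - 2 ln(L)
    return 2.0 * k - 2.0 * log_like

# Hypothetical log-likelihoods for a 2-component and a 4-component model;
# a positive Delta AIC (initial minus final) favors the richer model.
delta_aic = aic(8, -1000.0) - aic(16, -955.0)
print(delta_aic)  # 74.0
\end{verbatim}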
An excess of $\sim$3$\sigma$ significance remains at the top of the region of interest, visible in map (c) of Figure \ref{summary_b}. Adding a new component at its location improves the fit only by a $\Delta$TS of 10, which is not significant when considering the addition of another source with 4 degrees of freedom. The $\Delta$AIC is 12. Since there are no compelling counterparts to this excess at other wavelengths, the remaining excess may be the result of additional complexities that are not contained in our model, such as asymmetries in the spatial morphology or more complex spectral shapes, or it may simply be due to fluctuations. However, using an asymmetric Gaussian, even though it gives a similar likelihood value, leaves a clear 5$\sigma$ signal in the residual map at the location of J1928. Moreover, neither a power law with an exponential cutoff nor a log-parabola significantly improves the fit.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.93\linewidth]{summary_initial.pdf}
\caption{ The significance map (a) indicates the region of interest (ROI) of radius 3\textdegree5 (the white circle) and the two components for J1928 and J1930. The blue/green dot and circle show the initial/fitted position and size. The width of the green circle represents the 1$\sigma$ uncertainty on the size of the Gaussian.
Map (b) is the significance map of the model in the ROI.
Map (c) is the significance map of the residuals in the ROI and the significance distribution in the inner 2\textdegree \ radius region, with a Gaussian fit. The color scale holds for all maps.
}
\label{summary_a}
\end{figure}
\newpage
\begin{figure}[ht!]
\centering
\includegraphics[width=0.93\linewidth]{summary_final.pdf}
\caption{ The significance map (a) shows the region of interest (ROI) of radius 3\textdegree5 (the white circle) and the four components for J1928, J1930, and J1932, as well as the additional extended source J1928-EXT. The blue/green dots and circles show their initial/fitted positions and sizes. The width of the green circle represents the 1$\sigma$ uncertainty on the size of the Gaussian.
Map (b) is the significance map of the model in the ROI.
Map (c) is the significance map of the residuals in the ROI and the significance distribution in the inner 2\textdegree \ radius region, with a Gaussian fit. The color scale holds for all maps.
}
\label{summary_b}
\end{figure}
\newpage
\begin{table}[ht!]
\caption{ Input values and fitted values (subscripts $i$ and $f$, respectively) for each component of the best model representing the region of interest. Each value is followed by the statistical uncertainty and the systematic uncertainty. The fit is performed in two steps. The initial model has two components representing J1928 and J1930. A point-like component and an extended Gaussian component are added at the locations of significant excesses in the residual map. The flux normalization is given at 10~TeV in units of 10$^{-15}$~TeV$^{-1}$~cm$^{-2}$~s$^{-1}$. The spectral energy distributions are plotted in Figure~\ref{all_spec-PL}. The position of the pulsar PSR~J1930+1852 is (292\textdegree63, 18\textdegree87).
}
\centering
\begin{tabular}{|c|c|c|c|c|}
& J1930 & J1932 & J1928 & J1928-EXT \\
\cline{1-5}
Hypothesis & Point-like & Point-like & Extended & Extended \\
\cline{1-5}
pos${_i}$ & PSR J1930+1852 & (293.07, 19.40) & (292.08,17.79) & (292.20,18.18) \\
pos${_f}$ (ra \textdegree) & 292.53 {\tiny $\pm$ 0.05 $\pm$ 0.004} & 292.99 {\tiny $\pm$ 0.05 $\pm$ 0.002} & 292.15 {\tiny $\pm$ 0.04 $\pm$ 0.001} & 292.05 {\tiny $\pm$ 0.15 $\pm$ 0.05} \\
\qquad (dec \textdegree) & 18.84 {\tiny $\pm$ 0.05 $\pm$ 0.001} & 19.36 {\tiny $\pm$ 0.04 $\pm$ 0.001} & 17.90 {\tiny $\pm$ 0.04 $\pm$ 0.001} & 18.10 {\tiny $\pm$ 0.17 $\pm$ 0.05} \\
\cline{1-5}
size${_i}$ (\textdegree) & - & - & 0.10 & 0.9 \\
size${_f}$ (\textdegree) & - & - & 0.18 {\tiny $\pm$ 0.04 $\pm$ 0.003} & 1.43 {\tiny $\pm$ 0.17 $\pm$ 0.05} \\
\cline{1-5}
index${_i}$ & $-2.5$ & $-2.5$ & $-2.5$ & $-2.5$ \\
index${_f}$ & $-2.93$ {\tiny $\pm$ 0.20 $\pm$ 0.01} & $-2.46$ {\tiny $\pm$ 0.24 $\pm$ 0.01} & $-2.09$ {\tiny $\pm$ 0.16 $\pm$ 0.04} & $-2.60$ {\tiny $\pm$ 0.08 $\pm$ 0.01} \\
\cline{1-5}
flux$_{i}$ & 10.0 & 10.0 & 10.0 & 10.0 \\
\multirow{2}*{flux$_{f}$} & \multirow{2}*{2.46 $_{-0.47}^{+0.58}$ {\tiny $\pm$ 0.72} } & \multirow{2}*{1.95 $_{-0.49}^{+0.62}$ {\tiny $\pm$ 0.50}} & \multirow{2}*{4.23 $_{-1.10}^{+1.49}$ {\tiny $\pm$ 1.30}} & \multirow{2}*{40.34 $_{-4.11}^{+4.47}$ {\tiny $\pm$ 1.93}} \\
& & & & \\
\cline{1-5}
Energy range (TeV) & 1 -- 118 & 1 -- 43 & 1 -- 178 & 1 -- 10 \\
\cline{1-5}
\end{tabular}
\label{model_parameters}
\end{table}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.8\linewidth]{all_spec-PL.pdf}
\caption{ Spectral energy distribution of the four components of the best-fit model. The spectral parameters are given in Table~\ref{model_parameters}. The shaded bands are the 1$\sigma$ statistical uncertainties.
}
\label{all_spec-PL}
\end{figure}
\newpage
\subsection{Energy spectrum of 3HWC J1930+188}
The spectrum of 3HWC J1930+188 resulting from the fit of the four-component model described in the previous section is shown in green in Figure~\ref{hawc-veritas-hess_spectrum}. It is slightly softer than the one previously published by the HAWC Collaboration (shown in gray), which used the same amount of data but analysis bins 1 to 9 and a single point-like source hypothesis~\citep{3HWC_catalog}, whereas in the present analysis the source is part of a more complex model. The high number of free parameters fitted together is responsible for the larger uncertainties. At a few TeV, the spectrum from the fit presented here is in better agreement with the VERITAS spectrum~\citep{Veritas_Fermi_2HWCsources}, represented by the black dots, although the error bars are wider. The H.E.S.S. spectrum~\citep{HGPS} is also shown in magenta. The spectral parameters derived in the different works cited here are gathered in Table~\ref{J1930_spec_parameters}.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.5\linewidth]{J1930_spectrum_hawc-hess-veritas.pdf}
\caption{ Energy spectrum of 3HWC~J1930+188. The green spectrum is the result from the fit of a point-like component for 3HWC~J1930+188, as part of a model of the region assuming two point sources and two extended sources. The gray spectrum uses the same amount of data, the analysis bins 1 to 9, and a single point-like source hypothesis~\citep{3HWC_catalog}. The H.E.S.S. spectrum is depicted in magenta and is taken from the HGPS~\citep{HGPS}. The shaded areas represent the 1$\sigma$ statistical uncertainty. The black dots are derived by the VERITAS collaboration~\citep{Veritas_Fermi_2HWCsources}. All the spectral parameters for the spectra plotted here are gathered in Table~\ref{J1930_spec_parameters}.
}
\label{hawc-veritas-hess_spectrum}
\end{figure}
\begin{table}[ht!]
\caption{ Spectral parameters and their statistical uncertainties for the spectral energy distributions of 3HWC J1930+188 plotted in Figure~\ref{hawc-veritas-hess_spectrum}.
}
\centering
\begin{tabular}{|l|c|c|c|c|}
\hline
\multirow{2}*{Experiment (reference)} & Reference Energy & Flux at E$_0$ & \multirow{2}*{Index} & Integrated Flux $>$ 1~TeV \\
& E$_0$ (TeV) & ($10^{-15}$ TeV$^{-1}$ cm$^{-2}$ s$^{-1}$) & & (10$^{-12}$ cm$^{-2}$ s$^{-1}$) \\
\hline
HAWC (this work) & 10 & 2.46 $_{-0.47}^{+0.58}$ & $-2.93\pm0.20$ & $1.08 \pm 0.46$\\
HAWC \citep{3HWC_catalog} & 7 & $10.2\pm0.8$ & $-2.76\pm0.07$ & $1.25 \pm 0.16$ \\
VERITAS \citep{Veritas_Fermi_2HWCsources} & 1 & $660\pm130$ & $-2.18\pm0.20$ & $0.57 \pm 0.14$\\
H.E.S.S. \citep{HGPS} & 1 & $506\pm124$ & $-2.59\pm0.26$ & $0.32 \pm 0.09$\\
\hline
\end{tabular}
\label{J1930_spec_parameters}
\end{table}
\subsection{Characteristics of the new source HAWC J1932+192}
In this section, we discuss whether the $\gamma$-ray emission of the new TeV source candidate HAWC~J1932+192, potentially associated with the pulsar PSR~J1932+1916, could come from the acceleration of particles in its PWN. All characteristics of this system have been previously gathered in section~\ref{Multi-wavelength observations of the region}, as well as in the appendix~\ref{Multi-wavelength information}, Table~\ref{J1932_caracteristics}. The spectrum derived from 3ML under the point-like hypothesis is plotted in Figure~\ref{all_spec-PL}.
From the best fit, the differential flux at 10~TeV was found to be $(1.95 _{-0.49}^{+0.62})_{\mbox{\scriptsize stat}}\times10^{-15}$~TeV$^{-1}$~cm$^{-2}$~s$^{-1}$. With a spectral index equal to $-2.46\pm0.24$, the integrated energy flux between 1~TeV and 43~TeV is F$_{\gamma > 1 \mbox{\tiny TeV}}~=~(1.61\pm0.58)_{\mbox{\scriptsize stat}}\times10^{-12}$~erg~cm$^{-2}$~s$^{-1}$.
The $\gamma$-ray luminosity is given by:
\begin{equation}
L_{\gamma} = 4\pi D^2 F_{\gamma>1\mbox{\tiny TeV}}.
\label{luminosity}
\end{equation}
Under the assumption that the distance is either 3.5~kpc~\citep{G54.4_CO} or 6.6~kpc~\citep{G54.4}, we can calculate the $\gamma$-ray luminosity $L_{\gamma}$ and, given the pulsar's spin-down power $\dot{E} = 4.07\times10^{35}$ erg s$^{-1}$, the fraction of that power the pulsar has to spend to produce it. Using the first distance estimate, $\sim$0.6\% of the pulsar energy is needed to accelerate the electrons and positrons that produce the $\gamma$ rays via IC scattering on ambient photons. In the case of the larger distance, this percentage goes up to $\sim$2\%. For comparison, \cite{Geminga_fermi} found that about 1\% of the spin-down energy of the Geminga pulsar has to be converted into e$^{\pm}$ to be consistent with the $\gamma$-ray data from the Fermi-LAT and from HAWC.
This means that the PWN could in principle produce the observed $\gamma$-ray emission. Table~\ref{J1932_parameters} gathers the parameters calculated above.
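These numbers can be checked with a few lines of arithmetic; the following sketch uses only the flux, distances, and spin-down power quoted above.
\begin{verbatim}
import math

KPC_CM = 3.086e21        # 1 kpc in cm
F_gamma = 1.61e-12       # integrated energy flux, 1-43 TeV (erg cm^-2 s^-1)
E_dot = 4.07e35          # spin-down power of PSR J1932+1916 (erg s^-1)

for D_kpc in (3.5, 6.6):
    D = D_kpc * KPC_CM
    L_gamma = 4.0 * math.pi * D ** 2 * F_gamma   # equation for L_gamma
    print(f"D = {D_kpc} kpc: L = {L_gamma:.1e} erg/s, "
          f"{100.0 * L_gamma / E_dot:.1f}% of the spin-down power")
# -> ~2.4e33 erg/s (~0.6%) at 3.5 kpc and ~8.4e33 erg/s (~2%) at 6.6 kpc
\end{verbatim}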
\begin{table}[ht!]
\caption{Summary of the properties of the new source HAWC~J1932+192.}
\centering
\begin{tabular}{|c|c|c|}
\hline
Morphology Hypothesis & \multicolumn{2}{c|}{Point-like} \\
\hline
\multirow{2}*{Integrated energy flux F$_{\gamma > 1 \mbox{\tiny TeV}}$ (erg~cm$^{-2}$~s$^{-1}$)} & \multicolumn{2}{c|}{\multirow{2}*{($1.61\pm0.58)_{\mbox{\scriptsize stat}}\times10^{-12}$}} \\
&\multicolumn{2}{c|}{} \\
\hline
Distance $D$ (kpc) & $\sim3.5$ & 6.6 \\
\hline
\multirow{2}*{$\gamma$-ray luminosity $L_{\gamma}$ (erg~s$^{-1}$)} & \multirow{2}*{$\sim2.4\times10^{33}$} & \multirow{2}*{$\sim8.5\times10^{33}$}\\
& & \\
\hline
Fraction of the pulsar energy needed (\%) & $\sim 0.6$ & $\sim 2$ \\
\hline
\end{tabular}
\label{J1932_parameters}
\end{table}
\subsection{Morphology and energy spectrum of 3HWC J1928+178}
The best fit of the data gives a size of $\sigma=0^{\circ}18
\pm 0^{\circ}04_{\mbox{\scriptsize stat}}$ for 3HWC J1928+178, which represents 39\% containment, given our 2D Gaussian model. The corresponding 68\% containment radius is 0\textdegree27.
The flux at 10~TeV is $(4.23_{-1.10}^{+1.49})_{\mbox{\scriptsize stat}}\times10^{-15}$~TeV$^{-1}$~cm$^{-2}$~s$^{-1}$ and the spectral index is $-2.09\pm0.16$, as reported in Table \ref{model_parameters}.
The spectrum is plotted in red in Figure~\ref{J1928_spectrum} together with the one previously published by the HAWC collaboration, in gray, using the same amount of data and analysis bins 1 to 9, but for a single point-like source hypothesis~\citep{3HWC_catalog}. As previously mentioned, the high number of free parameters being fitted together is responsible for the larger uncertainties. Both HAWC spectra are compatible with the flux point from LHAASO at 100 TeV~\citep{LHAASO} within the uncertainties. The origin of the observed TeV $\gamma$-ray emission of 3HWC~J1928+178 is discussed in the next section. A classical PWN scenario is considered, as well as a possible association with a molecular cloud.
Note that the presence of the large extended component J1928-EXT of angular size $\sigma~=~1.43^{\circ} \pm 0.17^{\circ}_{\mbox{\scriptsize stat}}$ may account for a large-scale galactic diffuse emission component that is absent from the model, or may indicate the mismodeling of 3HWC J1928+178. In particular, J1928 and J1928-EXT may be part of the same object, if we consider a Geminga-like diffusion model. This hypothesis was considered in~\cite{3HWCJ1928_ICRC} and will not be treated here.\\
\section{Origin of the $\gamma$-ray emission of 3HWC J1928+178}
\label{Origin}
\subsection{IC scattering of the electrons from the PWN}
\paragraph{Gamma-ray emission}
From the fitted size of 3HWC J1928+178, $\sigma = 0.18$\textdegree, the diameter $d$ and volume $V$ can be calculated assuming a spherical geometry. All the properties derived hereafter are summarized in Table \ref{PWN_parameters}.
With the pulsar being located at a distance of $D = 4.3$~kpc, 39\% and 68\% of the emission are contained in regions of sizes $d \simeq$ 27~pc and 41~pc, respectively.
The integrated energy flux between 1~TeV and 178~TeV is $F_{\gamma>1\mbox{\tiny TeV}}~=~(3.45\pm1.22)_{\mbox{\scriptsize stat}}\times10^{-12}$~erg~cm$^{-2}$~s$^{-1}$ .
The $\gamma$-ray luminosity, given by equation~\ref{luminosity}, is $L_{\gamma} = 7.7\times10^{33}$ erg~s$^{-1}$.
The emission observed in PWNe at TeV energies is dominated by radiation processes involving electrons scattering on ambient photons: IC scattering. In the Thomson regime, the $\gamma$-ray spectral energy distribution due to electrons with energy $E_e$ peaks at
\begin{equation}
E_{\gamma} \simeq 33E_e^2k_BT \quad \mbox{TeV,}
\label{E_e_vs_E_g_thomson}
\end{equation}
where $E_{\gamma}$ and $E_e$ are in TeV, $k_B$ is the Boltzmann constant, T is the temperature of the photon field, and $k_BT$ is in eV~\citep{TeV_astronomy}.
Hence, $ E_e \simeq 11\sqrt{E_{\gamma}} ~\mbox{TeV}$
and a 1~TeV $\gamma$-ray photon is produced via IC scattering of an electron of energy $\sim$10~TeV on cosmic microwave background (CMB) photons.
The electron cooling time for IC scattering in the Thomson regime is given by
\begin{equation}
\tau_{\mbox{\tiny IC}} = \frac{E_e}{dE_e/dt} \simeq 3.1\times10^5\frac{1}{U_{\mbox{\tiny rad}}}\frac{1}{E_e} \quad \mbox{yr,}
\label{IC_cooling_time}
\end{equation}
where $E_e$ is in TeV and $U_{\mbox{\tiny rad}}$ is the radiation energy density in eV~cm$^{-3}$~\citep{TeV_astronomy}. For electrons of energy $E_e$~=~$10$~TeV scattering on CMB photons, $k_BT~=~2.35\times10^{-4}$~eV and $U_{\mbox{\tiny rad}}~=~0.26$~eV~cm$^{-3}$, so the electron cooling time is $\tau_{\mbox{\tiny CMB}} \simeq 120$~kyr.
For far-IR (FIR) photons, $k_BT~=~3\times10^{-4}$~eV and $U_{\mbox{\tiny rad}}~=~0.3$~eV~cm$^{-3}$, so $\tau_{\mbox{\tiny FIR}} \simeq 100$~kyr.
The total energy is the product of the $\gamma$-ray luminosity and the cooling time
$W = \tau_{\mbox{\tiny IC}}~L_{\gamma}$, equal to $W_{\mbox{\tiny CMB}} \simeq 2.9\times10^{46}$ erg, using $\tau_{\mbox{\tiny IC}} = \tau_{\mbox{\tiny CMB}}$. Finally, dividing by the volume, the energy density is simply $\epsilon_{\mbox{\tiny W}} = W/V$.
Assuming a spherical geometry and a diameter of 41~pc, the energy density is $\epsilon_{\mbox{\tiny IC}} \simeq 0.04$ eV~cm$^{-3}$. This is much smaller than the energy density of the interstellar medium (ISM) $\epsilon_{\mbox{\tiny ISM}} \simeq 1$~eV~cm$^{-3}$. Given the age of the pulsar of 82~kyr, this is consistent with an old PWN, where the electrons have started to cool and diffuse away from their source. Note that for electrons with $E_e>300$~TeV, the Klein-Nishina regime starts.
Adapting equations~\ref{E_e_vs_E_g_thomson} and \ref{IC_cooling_time} to the Klein-Nishina regime for CMB photons only gives 300~TeV electrons producing 230~TeV photons, with the cooling time becoming $\tau_{\mbox{\tiny CMB}} \simeq 30$~kyr. However, the total energy and the energy density are of the same order of magnitude as what was calculated in the Thomson regime.
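The Thomson-regime numbers above can be reproduced directly from equations~\ref{E_e_vs_E_g_thomson} and \ref{IC_cooling_time}; the short sketch below uses the CMB values quoted in the text.
\begin{verbatim}
import math

kB_T = 2.35e-4       # CMB photon energy k_B T, eV
U_rad = 0.26         # CMB energy density, eV cm^-3
L_gamma = 7.7e33     # fitted gamma-ray luminosity, erg s^-1
YR_S = 3.156e7       # seconds per year

E_e = math.sqrt(1.0 / (33.0 * kB_T))   # electron energy for E_gamma = 1 TeV
tau_cmb = 3.1e5 / (U_rad * 10.0)       # cooling time in yr for E_e ~ 10 TeV
W_cmb = tau_cmb * YR_S * L_gamma       # total electron energy, erg
print(f"E_e ~ {E_e:.0f} TeV, tau ~ {tau_cmb / 1e3:.0f} kyr, W ~ {W_cmb:.1e} erg")
# -> E_e ~ 11 TeV, tau ~ 119 kyr, W ~ 2.9e46 erg, as quoted above
\end{verbatim}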
\begin{figure}[ht!]
\centering
\includegraphics[width=0.5\linewidth]{J1928_spectrum.pdf}
\caption{ Energy spectrum of 3HWC~J1928+178 from the 3ML fit, assuming a 2D Gaussian (red), compared to that from~\citet{HAWC-HESS_GP_ApJ}, assuming a point-like source (gray). Both make use of the same data set, but different analysis bins. The shaded areas represent the 1$\sigma$ statistical uncertainties. The blue point is the flux point reported by LHAASO~\citep{LHAASO}.
}
\label{J1928_spectrum}
\end{figure}
\begin{table}[ht!]
\caption{Summary of the properties of the fitted source J1928 in the hypothesis where IC scattering on CMB photons is the dominant radiation process.}
\centering
\begin{tabular}{|c|c|}
\hline
\multirow{2}*{Angular size $\theta$ (\textdegree)} & 0.18 (39\%) \\
& 0.27 (68\%) \\
\hline
Distance $D$ (kpc) & 4.3 \\
\hline
Size (68\%) $d$ (pc) & $\sim41$ \\
\hline
\multirow{2}*{Volume $V$ (pc$^{3}$)} & \multirow{2}*{$\sim3.7\times10^{4}$} \\
& \\
\hline
\multirow{2}*{Integrated energy flux F$_{\gamma > 1 \mbox{\tiny TeV}}$ (erg~cm$^{-2}$~s$^{-1}$)} & \multirow{2}*{($3.45\pm1.22)\times10^{-12}$} \\
& \\
\hline
\multirow{2}*{$\gamma$-ray luminosity $L_{\gamma}$ (erg~s$^{-1}$)} & \multirow{2}*{$\sim7.7\times10^{33}$} \\
& \\
\hline
\multirow{2}*{Total energy $W_{\mbox{\tiny IC}}$ (erg)} & \multirow{2}*{$\sim2.9\times10^{46}$} \\
& \\
\hline
\multirow{2}*{Energy density $\epsilon_{\mbox{\tiny IC}}$ (eV~cm$^{-3}$)} & \multirow{2}*{$\sim$0.04} \\
& \\
\hline
\end{tabular}
\label{PWN_parameters}
\end{table}
\paragraph{Parent particle population}
The parent population of the electrons responsible for the observed $\gamma$-ray emission can be obtained using the \textit{naima}\footnote{The documentation and code for naima are available at: https://naima.readthedocs.io/en/latest and https://github.com/zblz/naima} python package~\citep{naima}. This provides models for nonthermal radiative emission from homogeneous distributions of relativistic particles. The contributions of nonthermal radiative processes, IC scattering in this case, can be computed given a shape for the particle energy distribution, and the model can be used to fit the observed nonthermal spectra through a Markov Chain Monte Carlo procedure. In the present case, the emission is assumed to be produced by electrons upscattering CMB photons, with a temperature $T = 2.72$~K and an energy density of 0.26~eV~cm$^{-3}$, and FIR photons, with a temperature $T = 20$~K and an energy density of 0.3~eV~cm$^{-3}$. Since the $\gamma$-ray spectrum of 3HWC~J1928+178 has been represented by a power law, the population of the electrons is also chosen to follow a simple power law. The fit is performed using this electron model and the $\gamma$-ray spectrum obtained from the 3ML best fit to the HAWC observations.
The best-fit energy distribution of the electrons has an amplitude at 70 TeV of $F_{70\mbox{\tiny TeV}}~=~(1.91~\pm~0.2)\times10^{41}$~erg$^{-1}$ and an index of $-2.55~\pm~0.1$. The total energy of the electrons above 1~TeV is $W_e~=~4.6^{+2.2}_{-1.2}\times10^{46}$~erg. Given that the spin-down power of the pulsar is $\dot{E}~=~1.6\times10^{36}$~erg~s$^{-1}$, assuming that it has been constant over the life of the pulsar, which is 82~kyr, gives a lower limit for the total energy released by the pulsar of $4.1\times10^{48}$~erg. Hence, an upper limit of $\sim$1\% can be set on the fraction of energy that the pulsar could have transferred to the electrons above 1 TeV. This is again compatible with previous estimations for the Geminga pulsar~\citep{Geminga_fermi}.
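A minimal naima sketch of this setup is shown below (the model evaluation only, not the full MCMC fit); it assumes naima's public model API and uses the best-fit values quoted above.
\begin{verbatim}
import numpy as np
import astropy.units as u
from naima.models import PowerLaw, InverseCompton

# Electron distribution: power law with the best-fit amplitude and index.
electrons = PowerLaw(1.91e41 / u.erg, 70 * u.TeV, 2.55)
ic = InverseCompton(
    electrons,
    seed_photon_fields=["CMB", ["FIR", 20 * u.K, 0.3 * u.eV / u.cm ** 3]],
)

energies = np.logspace(0, 2.25, 30) * u.TeV
sed = ic.sed(energies, distance=4.3 * u.kpc)  # IC SED at the pulsar distance
We = ic.compute_We(Eemin=1 * u.TeV)           # electron energy above 1 TeV
print(We.to(u.erg))                           # ~4.6e46 erg, as quoted above
\end{verbatim}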
\subsection{Association with a molecular cloud}
\paragraph{Hypotheses for this association}
Most of the interstellar gas in our Galaxy is molecular hydrogen H$_{2}$, contained in GMCs. These massive clouds of gas and dust have a typical size ranging from 50 to 200 pc and a mass between $10^4$ and $10^6$ solar masses. They are the sites of star formation. In addition, they are the source of most of the diffuse galactic $\gamma$-ray emission~\citep{EGRET_diffuse_emission}. The dominant processes by which cosmic rays interact with the ISM and produce $\gamma$ rays are high-energy electron bremsstrahlung, IC interactions with low-energy photons, and nucleon-nucleon interactions. For the latter, in particular, molecular clouds are favorable environments. Hence, it is interesting to compare the galactic gas distribution and the $\gamma$-ray emission detected by HAWC, in order to assess whether the components of the molecular cloud, mainly hydrogen, could be a target for relativistic protons, producing the observed $\gamma$~rays via pion decay~\citep{HAWC_GMC}.
\paragraph{CO as a tracer for molecular clouds}
H$_{2}$ is not easily observable, because this molecule has no electric dipole moment.
For this reason, it emits no radiation from either vibrational or rotational transitions. However, CO emits radiation through a rotational transition ($J=1\rightarrow0$) when excited by collisions with hydrogen molecules. Hence, CO emission is used to trace H$_{2}$ molecular clouds. The abundance of CO relative to H$_2$ is typically about $7.2\times10^{-5}$. Two isotopologues are mainly used: $^{12}$CO and $^{13}$CO. The main difference is that $^{12}$CO is optically thick, while $^{13}$CO is optically thin, the first one being on average $\sim$60 times more abundant than the second one~\citep{12CO_13CO_ratio}.
$^{13}$CO is both a good quantitative and qualitative tracer of molecular gas, being related to the column density of H$_2$. It can probe deep in the cloud without saturating, and it provides more accurate velocity and kinematic distances because of its narrower line. Therefore, $^{13}$CO is more suited to deriving the column density of the cloud, under the hypothesis of local thermodynamic equilibrium.
The emission line corresponding to the CO de-excitation gives the mean velocity of the CO molecules in the cloud, and the width of this line gives the velocity dispersion associated with the cloud. Under the virial equilibrium hypothesis, and assuming uniform density within the cloud, the width of the line scales linearly with the size of the cloud.
The $^{13}$CO (rotation emission line $J=1\rightarrow0$ at 110~GHz) data from the Galactic Ring Survey (GRS\footnote{GRS data available at: https://www.bu.edu/galacticring/new\_data.html}; \citealt{GRS}) were obtained using the SEQUOIA multipixel array on the Five College Radio Astronomy Observatory (FCRAO;~\citealt{FCRAO}) 14~m telescope located in New Salem, Massachusetts, between 1998 December and 2005 March. Three molecular clouds can be found at the location of the HAWC TeV emission. Figure \ref{HAWC_CO_velocity}(a) shows the HAWC significance map, where a region corresponding to the emission with significance $>5\sigma$ is defined. For this region, the brightness temperature averaged over the region is extracted as a function of velocity, as shown in Figure~\ref{HAWC_CO_velocity}(b).
Three maxima can be highlighted, at $\sim$4.5~km~s$^{-1}$, $\sim$22~km~s$^{-1}$ and $\sim$46~km~s$^{-1}$. The $^{13}$CO maps corresponding to each velocity are also displayed in Figure \ref{HAWC_CO_velocity}(c). The most intense one, at $\sim$22~km~s$^{-1}$, is further studied in the next paragraph.
\begin{figure}[ht!]
\centering
\includegraphics[width=1\linewidth]{HAWC_CO_velocity.pdf}
\caption{The HAWC significance map is shown in panel (a). The black circle shows the location and 1$\sigma$ uncertainty of the HAWC source~\citep{3HWC_catalog}. The black cross is the location of the pulsar PSR J1928+1746. The white box is the region with $>5\sigma$ $\gamma$-ray emission of 3HWC~J1928+178, over which the brightness temperature is averaged and plotted in panel (b) as a function of velocity. For the three peaks at $\sim$4.5~km~s$^{-1}$, $\sim$22~km~s$^{-1}$ and $\sim$46~km~s$^{-1}$, the $^{13}$CO maps are shown in panel (c).}
\label{HAWC_CO_velocity}
\end{figure}
\paragraph{Detailed study of the brightest cloud}
The cloud at $\sim$22~km~s$^{-1}$ has a very complicated and elongated shape. The study is restricted to the portion of the cloud within the $>5\sigma$ $\gamma$-ray emission of 3HWC~J1928+178, represented by the white box in Figures \ref{HAWC_CO_velocity} and \ref{HAWC_CO_velocity_22kms}. In this region, the cloud is decomposed into two parts, which could be interpreted as two clumps of the cloud. They are represented by the two smaller magenta and cyan boxes that are labeled ``1'' and ``2'' in Figure \ref{HAWC_CO_velocity_22kms}. The $^{13}$CO maps for the peak velocity and the velocity distribution are also shown on the right-hand side of the same figure. Some basic properties can now be derived, such as the column density, the mass, and the volume of these clumps, to estimate the total cosmic-ray energy and the energy density that would be needed to explain the observed $\gamma$-ray emission.
To do so, the clumps are assumed to have a spherical shape. The most probable distance\footnote{The distance is derived using http://bessel.vlbi-astrometry.org/bayesian with a prior P$_{far}$ = 0.1} for this cloud is $D$ = 4~kpc~\citep{cloud_distance}, which would be compatible with the distance of the pulsar. Their diameter $d$ and volume $V$ can be calculated from their angular size $\theta$. Clumps 1 and 2 have diameters of $\sim$12 and $\sim$18~pc, respectively. They are smaller than the source representing J1928, which was found to contain 68\% of the emission within $\sim$41~pc.
For each clump, the $^{13}$CO column density $N(^{13}$CO) is determined using the brightness temperature $T_{\mbox{\scriptsize mb}}$, in K, and the FWHM of the velocity distribution peak $\Delta v$, in km~s$^{-1}$, as explained in \citet{Simon_molecular_cloud}:
\begin{equation}
N(^{13}\mbox{CO}) = 8.75\times10^{14} T_{\mbox{\scriptsize mb}} \Delta v.
\end{equation}
The clump mass $M$, in the unit of solar masses, is given by
\begin{equation}
M = 3.05\times10^{-25} N(^{13}\mbox{CO})~\theta_x \theta_y D^2 \quad M_{\odot},
\end{equation}
where $\theta_x$ and $\theta_y$ are the half-axes of the clump in arcseconds and $D$ is the distance to the cloud in~pc, which is here assumed to be 4~kpc. With the mass and the volume, the particle density in the cloud, which is a potential target for cosmic rays, can be calculated using
\begin{equation}
n = \frac{M}{\mu m_{\mbox{\tiny H}}V},
\end{equation}
where $\mu m_{\mbox{\tiny H}}$ is the mean mass of an atom in the ISM, with $\mu\simeq1.4$ and $m_{\mbox{\tiny H}}$ being the mass of a hydrogen atom.
Moreover, using the best-fit value for the flux found with 3ML, the luminosity $L_{\gamma}$ above~1~TeV was calculated in the previous section using equation \ref{luminosity} as $L_{\gamma} \simeq 7.7\times10^{33}$~erg~s$^{-1}$.
Considering that, at TeV energies, the spectral energy distribution of the secondary $\gamma$ rays peaks at about one-tenth of the energy of the primary proton and does not vary significantly with energy~\citep{TeV_astronomy}, a 1~TeV photon can be produced by a 10~TeV proton.
The total energy of the cosmic rays above~10~TeV in the cloud is $W_{\mbox{\tiny p}} = \tau_{\mbox{\tiny p}} L_{\gamma}$, where $\tau_{\mbox{\tiny p}}$ is now the characteristic cooling time for relativistic protons. It is derived using the proton-proton interaction cross section $\sigma_{\mbox{\tiny pp}}$, the speed of light $c$ and the density $n$:
\begin{equation}
\tau_{\mbox{\tiny p}} = \frac{1}{f\sigma_{\mbox{\tiny pp}}cn}.
\end{equation}
In this relation, $f$ accounts for the fact that a proton loses about half of its energy per interaction, and that only a third of the produced pions are $\pi^0$. Hence, using typical values for the inelastic cross section $\sigma_{\mbox{\tiny pp}} \simeq 35$~mb for VHE protons \citep{TeV_astronomy} and $f = 1/6$, the resulting lifetime is $\tau_{\mbox{\tiny p}}~\simeq~1.8\times10^8~n^{-1}$~yr.
Finally, the energy density is simply the ratio of the total energy and the volume:
$ \epsilon_{\mbox{\tiny p}} = W_{\mbox{\tiny p}}/V$.
For the cloud considered here, the total energy of the cosmic rays above~10~TeV is $W_{\mbox{\tiny p}} = 7.9\times10^{47}$~erg and the energy density is $\epsilon_{\mbox{\tiny p}} \simeq 4.4$~eV~cm$^{-3}$.
The parameters calculated for each clump and for the total cloud are gathered in Table~\ref{CO_clumps_parameters}.
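The clump properties can be reproduced with the following sketch, which chains the equations above using the values quoted in the text; small differences with respect to the table come from rounding of the chained inputs.
\begin{verbatim}
import math

M_SUN = 1.989e33          # g
PC = 3.086e18             # cm
MU_MH = 1.4 * 1.674e-24   # mean ISM particle mass, g
D_pc = 4000.0

clumps = [dict(theta=0.172, Tmb=1.875, dv=1.50),   # clump 1
          dict(theta=0.252, Tmb=2.44, dv=2.56)]    # clump 2

M_tot, V_tot = 0.0, 0.0                            # g, cm^3
for c in clumps:
    N13 = 8.75e14 * c["Tmb"] * c["dv"]             # 13CO column density, cm^-2
    half_axis = 0.5 * c["theta"] * 3600.0          # half-axis, arcsec
    M = 3.05e-25 * N13 * half_axis ** 2 * D_pc ** 2    # solar masses
    d = math.radians(c["theta"]) * D_pc                # diameter, pc
    V = (4.0 / 3.0) * math.pi * (0.5 * d * PC) ** 3    # volume, cm^3
    M_tot += M * M_SUN
    V_tot += V
    print(f"N(13CO) = {N13:.2e} cm^-2, M = {M:.0f} Msun, d = {d:.1f} pc")

n = M_tot / (MU_MH * V_tot)                          # target density, cm^-3
tau_p = 1.0 / ((1.0 / 6.0) * 3.5e-26 * 3.0e10 * n)   # proton lifetime, s
W_p = tau_p * 7.7e33                                 # CR energy > 10 TeV, erg
print(f"n ~ {n:.0f} cm^-3, W_p ~ {W_p:.1e} erg")
# -> M ~ 1150 and ~5490 Msun, n ~ 51 cm^-3, W_p ~ 8.6e47 erg, close to
#    the tabulated values (50 cm^-3, 7.9e47 erg) given the rounding.
\end{verbatim}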
The farthest edge of clump 2 is 22~pc away from the pulsar. Considering a sphere of radius 22 pc centered on the pulsar, containing both clumps, its volume is 15 times the sum of the volumes of both clumps together. Since the total energy in the cloud is $W_{\mbox{\tiny p}} = 7.9\times10^{47}$~erg, the energy in the sphere centered on the pulsar should be $W_R = 1.2\times10^{49}$~erg.
The pulsar releases most of its energy at the beginning of its lifetime and steadily decreases its spin-down power afterward, as described by the following equation:
\begin{equation}
\dot{E} = \dot{E_0}\Big( 1 + \frac{t}{\tau_{\tiny 0}} \Big)^{-\frac{n+1}{n-1}},
\label{spin_down_luminosity}
\end{equation}
where $\dot{E}_0$ is the initial spin-down luminosity and $n$ the braking index. It has been argued that up to 20\% of a pulsar's energy could accelerate ions~\citep{20percent_limit}.
Assuming as a reasonable value that 10\% of the pulsar's energy has been used to accelerate protons, this means that the pulsar must have released $10 \times W_R = 1.2\times10^{50}$~erg. Considering that this is equal to the difference in rotational energy between now and when the pulsar was born gives:
$ \Delta \mbox{E} = 1.2\times10^{50} = I \times (\Omega^2_0 - \Omega^2) /2$,
with $I\simeq 1 \times 10^{45}~\mbox{g cm}^2$ for a typical pulsar.
The pulsar considered here has a period of $P\simeq70$~ms. This gives a birth period of $P_0 = 2\pi/\Omega_0\simeq10$~ms.
The maximum total energy that a pulsar with a birth period of 1~ms can release during its life is $\mbox{E}_{\mbox{\tiny ROT}} = 1\times10^{53}$~erg, for a pulsar with a typical mass of 1.4~$M_{\odot}$ and a typical radius of 10~km~\citep{pulsar_total_energy}. Our result is consistent with this upper limit. Moreover, integrating equation~\ref{spin_down_luminosity} from birth ($t=0$) until now ($t=T$), with the braking index $n=3$, and using the relation $\tau_0 = \tau_c - T$ between the characteristic age of the pulsar $\tau_c$, its true age $T$, and the initial spin-down timescale $\tau_0$, we can derive
\begin{equation}
\tau_0 = \frac{\dot{E}~\tau_c^2}{\Delta E + \dot{E} \tau_c }.
\end{equation}
With $\Delta E = 1.2\times10^{50}$~erg, $\dot{E}~=~1.6\times10^{36}$~erg~s$^{-1}$ and $\tau_c = 82600$~yr, the initial spin-down timescale is $\tau_0\simeq 2700$~yr. Hence, the pulsar's true age would be 79,800~yr. Finally, its spin-down power at birth would be $\dot{E}_0~=~1.4\times10^{39}$~erg~s$^{-1}$. For comparison, this is of the same order of magnitude as the spin-down power of the Crab pulsar, which makes this value plausible.
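These birth-parameter estimates follow from a few lines of arithmetic; the sketch below assumes the $n=3$ braking index used above.
\begin{verbatim}
import math

YR_S = 3.156e7
I = 1.0e45               # moment of inertia, g cm^2
dE = 1.2e50              # energy budget assumed above, erg
P = 0.0687               # current period, s
E_dot = 1.6e36           # current spin-down power, erg s^-1
tau_c = 82600.0 * YR_S   # characteristic age, s

omega = 2.0 * math.pi / P
omega0 = math.sqrt(2.0 * dE / I + omega ** 2)
P0 = 2.0 * math.pi / omega0                         # birth period
tau0 = E_dot * tau_c ** 2 / (dE + E_dot * tau_c)    # initial timescale
T = tau_c - tau0                                    # true age
E_dot0 = E_dot * (1.0 + T / tau0) ** 2              # (n+1)/(n-1) = 2 for n = 3
print(f"P0 ~ {P0 * 1e3:.0f} ms, tau0 ~ {tau0 / YR_S:.0f} yr, "
      f"age ~ {T / YR_S:.0f} yr, E_dot0 ~ {E_dot0:.1e} erg/s")
# -> P0 ~ 13 ms (quoted as ~10 ms), tau0 ~ 2800 yr, age ~ 79,800 yr,
#    E_dot0 ~ 1.4e39 erg/s
\end{verbatim}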
\begin{figure}[ht!]
\centering
\includegraphics[width=1\linewidth]{HAWC_CO_velocity_22kms.pdf}
\caption{Molecular clouds at 22 km~s$^{-1}$ located within the 5$\sigma$ $\gamma$-ray emission of 3HWC~J1928+178 (white box). The black circle shows the location and 1$\sigma$ uncertainty of the HAWC source~\citep{3HWC_catalog}. The black cross is the location of the pulsar. The magenta and cyan boxes correspond to the two clumps considered here. The two velocity maps corresponding to the two boxes are shown in the center, with the velocity dispersion on the right-hand side.}
\label{HAWC_CO_velocity_22kms}
\end{figure}
\begin{table}[ht!]
\centering
\caption{Summary of the properties of the CO cloud}
\begin{tabular}{c|c|c|}
& Clump 1 & Clump 2 \\
\hline
\multicolumn{1}{|c|}{Angular size $\theta$ (\textdegree)} & 0.172 & 0.252 \\
\hline
\multicolumn{1}{|c|}{Distance $D$ (pc)} & 4000 & 4000 \\
\hline
\multicolumn{1}{|c|}{Size $d$ (pc)} & 12.0 & 17.6 \\
\hline
\multicolumn{1}{|c|}{Volume $V$ (pc$^{3}$)} & 906 & 2851 \\
\hline
\multicolumn{1}{|c|}{Average brightness temperature $T_{mb}$ (K)} & 1.875 & 2.44 \\
\hline
\multicolumn{1}{|c|}{FWHM of the velocity distribution peak $\Delta v$ (km~s$^{-1}$)} & 1.5 & 2.56 \\
\hline
\multicolumn{1}{|c|}{\multirow{2}*{Column density $N(^{13}$CO) (cm$^{-2}$)}} & \multirow{2}*{$2.46\times10^{15}$} & \multirow{2}*{$5.47\times10^{15}$} \\
\multicolumn{1}{|c|}{} & & \\
\hline
\multicolumn{1}{|c|}{Mass $M$ ($M_{\odot}$)} & 1151 & 5482 \\
\hline
\hline
&\multicolumn{2}{c|}{Total cloud} \\
\hline
\multicolumn{1}{|c|}{Mass $M$ ($M_{\odot}$)} &\multicolumn{2}{c|}{6633 } \\
\hline
\multicolumn{1}{|c|}{Volume $V$ (pc$^{3}$)} &\multicolumn{2}{c|}{3757} \\
\hline
\multicolumn{1}{|c|}{Density $n$ (particles~cm$^{-3}$)} &\multicolumn{2}{c|}{50} \\
\hline
\multicolumn{1}{|c|}{\multirow{2}*{Total energy $W_{\mbox{\tiny p}}$ (erg)} } &\multicolumn{2}{c|}{\multirow{2}*{$7.9\times10^{47}$} } \\
\multicolumn{1}{|c|}{} &\multicolumn{2}{c|}{\multirow{2}*{ } } \\
\hline
\multicolumn{1}{|c|}{Energy density $\epsilon_{\mbox{\tiny p}}$ (eV~cm$^{-3}$)} &\multicolumn{2}{c|}{4.4} \\
\hline
\hline
\end{tabular}
\label{CO_clumps_parameters}
\end{table}
\paragraph{Conclusions for the molecular cloud association}
From the study performed in this section, we can draw conclusions regarding the different hypotheses:
\begin{itemize}
\item The components of the molecular cloud, mainly hydrogen, could be a target for relativistic protons from the pulsar PSR~J1928+1746 and its PWN, producing neutral pions whose decay emits the observed $\gamma$~rays. The energy radiated by the pulsar was found to be compatible with the energy needed to produce the observed $\gamma$-ray luminosity via proton-proton interaction.
\item
Adding up the two clumps gives a mass for the cloud of $\sim$6600~$M_{\odot}$, and a density of $\sim$50 particles per cm$^{3}$. However, the $\gamma$-ray emission around 1~TeV is dominated by IC scattering of electrons on the CMB for medium densities lower than $\sim$240 particles per cm$^{3}$~\citep{TeV_astronomy}. Hence, bremsstrahlung does not play a significant role here: the observed emission cannot be explained by the electrons and positrons from the PWN interacting with the atoms of the cloud via bremsstrahlung.
\item
The total energy in cosmic rays $>10$~TeV in the cloud derived from the observed $\gamma$-ray luminosity is calculated as $7.9~\times~10^{47}$~erg, leading to an energy density of $\sim$4.4~eV~cm$^{-3}$.
This is three orders of magnitude higher than the energy density of the sea of galactic cosmic rays above 10~TeV, which is $\sim$1$\times10^{-3}$~eV~cm$^{-3}$~\citep{CR_sea_spectrum}.
Hence, these cosmic rays cannot explain the TeV emission observed by HAWC via interaction with the cloud.
\item The remaining hypothesis is that a local accelerator, for example an SNR, as yet undetected, is producing the detected VHE $\gamma$-ray emission. It is commonly assumed that an SNR releases $\sim$10$^{51}$~erg of kinetic energy into the ISM, and that 10\% of it, that is $\sim$10$^{50}$ erg, is used for cosmic-ray acceleration. Assuming a cosmic-ray spectrum between 1~GeV and 1~PeV with an energy dependence E$^{-2}$, 33\% of the energy flux is found above 10~TeV, which makes $\sim$3.3$\times10^{49}$ erg.
The ratio of the volume around the SNR, which is uniformly filled with cosmic rays, and the volume of the cloud scales like the ratio of the energy contained in each volume:
\begin{equation}
\frac{D^3}{r^3} = \frac{3.3\times10^{49}}{7.9\times10^{47}} \simeq 40 \qquad \Rightarrow \qquad D = (40r^3)^{1/3},
\end{equation}
where $r=10$ pc is approximately the radius of the cloud and $D$ is the distance from the SNR to the far edge of the cloud.
Thus, an SNR located within a distance of $\sim$40~pc from the cloud would be able to account for the cosmic rays producing the observed TeV emission via interaction with the molecules of the cloud (a numerical sketch of this estimate follows this list).
\end{itemize}
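The last estimate can be sketched numerically as follows, using the budget and spectral assumptions stated in the last item.
\begin{verbatim}
import math

W_cr = 0.1 * 1.0e51        # 10% of the SNR kinetic energy into cosmic rays, erg
# For an E^-2 spectrum the energy per decade is constant:
frac = math.log(1e15 / 1e13) / math.log(1e15 / 1e9)  # fraction above 10 TeV
W_above = frac * W_cr                                # ~3.3e49 erg
W_cloud = 7.9e47           # CR energy required in the cloud, erg
r = 10.0                   # approximate cloud radius, pc
D = (W_above / W_cloud) ** (1.0 / 3.0) * r
print(f"fraction above 10 TeV = {frac:.2f}, D ~ {D:.0f} pc")
# -> 0.33 and D ~ 35 pc, consistent with the ~40 pc quoted above
\end{verbatim}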
\section{Conclusion}
\label{Conclusion}
This paper gives a detailed description and multiwavelength overview of this complex region of the Galactic plane at longitude 52\textdegree~ $<~\ell~<~55$\textdegree. Two sources, 3HWC~J1930+188 and 3HWC~J1928+178, have already been reported in the third HAWC catalog~\citep{3HWC_catalog}, and one source, HAWC~J1932+192, is detected for the first time at TeV energies. A multicomponent fit was presented using 3ML.
\begin{itemize}
\item 3HWC~J1930+188 is represented by a point-like source. Its spectrum is described by a simple power law with a flux at 10~TeV of $(2.46~(^{+0.58}_{-0.47})_{\mbox{\scriptsize stat}}\pm0.72_{\mbox{\scriptsize sys}})\times10^{-15}$~TeV$^{-1}$~cm$^{-2}$~ s$^{-1}$ and a spectral index of $-2.93\pm0.20_{\mbox{\scriptsize stat}}\pm0.01_{\mbox{\scriptsize sys}}$. The spectrum is in better agreement with the VERITAS spectrum than previous measurements~\citep{Veritas_Fermi_2HWCsources}.
\item HAWC~J1932+192 is represented by a point-like source. Its spectrum is described by a simple power law with a flux at 10 TeV of $(1.95~(^{+0.62}_{-0.49})_{\mbox{\scriptsize stat}}\pm0.50_{\mbox{\scriptsize sys}})\times10^{-15}$~TeV$^{-1}$~cm$^{-2}$~ s$^{-1}$ and a spectral index of $-2.46\pm0.24_{\mbox{\scriptsize stat}}\pm0.01_{\mbox{\scriptsize sys}}$.
The $\gamma$-ray emission is energetically consistent with a PWN scenario.
\item 3HWC~J1928+178 is represented by an extended source of angular size $\sigma~=~0.18^{\circ}\pm 0.04^{\circ}_{\mbox{\scriptsize stat}}$\ (39\% containment). It has a hard spectrum with an index of $-2.09\pm0.16_{\mbox{\scriptsize stat}}\pm0.04_{\mbox{\scriptsize sys}}$, which would explain why HAWC is more sensitive to this source than IACTs. Its flux at 10 TeV is $(4.23~(^{+1.49}_{-1.10})_{\mbox{\scriptsize stat}}\pm1.30_{\mbox{\scriptsize sys}})\times10^{-15}$~TeV$^{-1}$~cm$^{-2}$~ s$^{-1}$. We studied different hypotheses for the origin of the observed $\gamma$-ray emission and concluded that three scenarios would be possible:
\begin{enumerate}
\item e$^{\pm}$ from the PWN started to cool and diffuse away from it, producing $\gamma$ rays via IC scattering on ambient photons;
\item cosmic-ray protons produced by the pulsar interacted with a nearby molecular cloud and produced $\gamma$ rays via proton-proton interaction; and
\item there is another unknown accelerator, such as a nearby SNR, located within $\sim$ 40~pc. However, no hint of such an SNR has been observed at any wavelength.
\end{enumerate}
\end{itemize}
Regarding 3HWC~J1928+178, for now, the first scenario may still be considered the most probable one. Due to the age of the pulsar, the lack of X-ray emission, the extended emission observed by HAWC, and the low energy density compared to the ISM, 3HWC~J1928+178 is a candidate for the TeV halo family~\citep{3HWCJ1928_ICRC}. It may also be in a transitional phase between a classical PWN and a TeV halo, and may help us to understand the late evolution stage of a PWN. The second and third scenarios cannot be ruled out, and more complex morphological and spectral analyses will be needed to help distinguish between them. The possibility that the $\gamma$-ray emission comes from protons produced by the pulsar interacting with a molecular cloud makes it a particularly interesting case to be followed up. However, this hypothesis relies on the estimated distance of this molecular cloud being 4~kpc. The observed $\gamma$-ray emission may also come from a combination of the first two scenarios. The last option would require a nearby SNR, which has not yet been detected, making it less probable than the two other options.\\
One additional component is also needed to model the region: a large extended source of angular size $\sigma~=~1^{\circ}43
\pm 0^{\circ}17_{\mbox{\scriptsize stat}}$. This may indicate either the mismodeling of 3HWC~J1928+178 or the lack of a large-scale galactic diffuse emission component in the model. We checked the expected flux of the galactic diffuse emission underlying the three sources J1928, J1930, and J1932 by using four different models: the latest Fermi model for Pass 8 and source class events~\citep{4FGL}, the diffuse emission model developed to simulate the Galactic plane survey with the future Cherenkov Telescope Array (CTA)~\citep{CTAdifuseEmission}, an updated version of this model (private communication with Q. Remy), and a model of the galactic diffuse emission up to 100 TeV developed by~\citet{PeVdiffuseEmission}.
On average, the contribution from J1928-EXT to these sources is more than twice the average flux that is expected from the galactic diffuse emission.
This implies either that these models do not represent the diffuse emission well, or that J1928-EXT contains more than diffuse emission, some of which may be left over from J1928, for example. If PSR~J1928+1746 is responsible for this component, then the $\gamma$-ray luminosity would be $L_{\gamma} = 7.2 \times 10^{34}$~erg~s$^{-1}$. Assuming that these $\gamma$ rays are produced by IC scattering on CMB photons, the energy density would be less than $\epsilon_{\mbox{\tiny IC}} = 0.001$~eV~cm$^{-3}$. Another hypothesis is that the two extended components J1928 and J1928-EXT may be the same object, if we consider a diffusion model similar to that of Geminga~\citep{3HWCJ1928_ICRC}. This hypothesis would favor the TeV halo nature of 3HWC~J1928+178.
Deeper analysis will be required to determine whether it can be related to existing sources, whether it comes from other sources, or whether it comes from large-scale $\gamma$-ray galactic emission.\\
Going further will require better energy and angular resolutions: future analyses with energy estimators~\citep{HAWC_high_energy}, together with more data, would allow a better study of the energy dependence of the spectral and morphological parameters. Moreover, a better angular resolution would permit making profiles in different directions around the pulsar, along the cloud location and perpendicular to it, to see whether there is any asymmetry in the $\gamma$-ray emission.
\section*{Acknowledgments}
\small {We acknowledge support from: the US National Science Foundation (NSF); the US Department of Energy Office of High-Energy Physics; the Laboratory Directed Research and Development (LDRD) program of Los Alamos National Laboratory; Consejo Nacional de Ciencia y Tecnolog\'ia (CONACyT), M\'exico, grants Nos. 271051, 232656, 260378, 179588, 254964, 258865, 243290, 132197, A1-S-46288, and A1-S-22784, c\'atedras 873, 1563, 341, and 323, Red HAWC, M\'exico; DGAPA-UNAM grants Nos. IG101320, IN111716-3, IN111419, IA102019, IN110621, and IN110521; VIEP-BUAP; PIFI 2012, 2013 and PROFOCIE 2014, 2015; the University of Wisconsin Alumni Research Foundation; the Institute of Geophysics, Planetary Physics, and Signatures at Los Alamos National Laboratory; Polish Science Centre grant No. DEC-2017/27/B/ST9/02272; Coordinaci\'on de la Investigaci\'on Cient\'ifica de la Universidad Michoacana; Royal Society - Newton Advanced Fellowship 180385; Generalitat Valenciana, grant No. CIDEGENT/2018/034; the Program Management Unit for Human Resources \& Institutional Development, Research and Innovation, NXPO (grant No. B16F630069); Coordinaci\'on General Acad\'emica e Innovaci\'on (CGAI-UdeG), PRODEP-SEP UDG-CA-499; and the Institute of Cosmic Ray Research (ICRR), University of Tokyo. H.F. acknowledges support from NASA under award No. 80GSFC21M0002. We also acknowledge the significant contributions over many years of Stefan Westerhoff, Gaurang Yodh, and Arnulfo Zepeda Dominguez, all deceased members of the HAWC collaboration. Thanks to Scott Delay, Luciano D\'iaz, and Eduardo Murrieta for technical support.}
\vspace{5mm}
\facility{HAWC (https://www.hawc-observatory.org/)}
\software{naima~\citep{naima}, 3ML~\citep{3ML}}
\newpage
\section{Introduction}
} | \section{Introduction}
\label{sec:intro}
\begin{figure}[th]
\centering
\includegraphics[width=\linewidth]{images/architecture.png}
\vspace{0.5pt}
\caption{A diagram of our proposed method. We add a new end-to-end trainable branch to the network (proxy head $\mathcal{H}$) that projects highly dimensional vectors $\mathbf{x}_i$ into very compact representations $\mathbf{z}_i$; we use the latter to compute one proxy descriptor $\mathbf{c}_i$ for each place in the mini-batch. We detach each proxy from the computation graph and cache it into a memory bank $\Omega$. Then, at the beginning of each epoch, we construct an index upon $\Omega$, in which places are gathered together according to the similarity of their proxies. This index is used to sample mini-batches containing similar places, which yields highly informative pairs or triplets. We call this strategy Global Proxy-based Hard Mining (GPM).}
\label{fig:arch}
\end{figure}
Visual place recognition (VPR) consists of determining the location of a place depicted in a query image by comparing it to a database of previously visited places with known geo-references. This is of major importance for many robotics and computer vision tasks, such as autonomous driving~\cite{chowdhary2013gps, maddern20171}, SLAM~\cite{milford2012seqslam, engel2014lsd}, image geo-localization~\cite{baik2020domain, hausler2021patch, wang2022transvpr} and 3D reconstruction~\cite{cieslewski2016point, sattler2017large}.
Recently, advances in deep learning~\cite{menghani2021efficient} have made retrieval-based place recognition a preferable choice for efficient and large-scale localization. Current VPR techniques \cite{arandjelovic2016netvlad, liu2019stochastic, warburg2020mapillary, thoma2020soft, zhu2020regional, hausler2021patch, wang2022transvpr} use metric learning loss functions to train deep neural networks for VPR. These loss functions operate on the relationships between images in a mini-batch. As such, representations of images from the same place are brought closer and those from different places are distanced~\cite{musgrave2020metric}. For instance, in the most widely used architecture for VPR, NetVLAD~\cite{arandjelovic2016netvlad, liu2019stochastic, warburg2020mapillary, hausler2021patch, wang2022transvpr}, the network is trained using a triplet ranking loss function that operates on triplets, each of which consists of a query image, a positive image depicting the same place as the query, and a negative image depicting a different place. Moreover, the triplets need to be informative in order for the network to converge~\cite{hermans2017defense}, meaning that for each query, the negative must be hard for the network to distinguish from the positive. To do so, these techniques rely on offline hard negative mining, where every image representation generated by the network is kept in a memory bank (cache), to be used offline (out of the training loop) to find the hardest negatives for each training query. Although offline mining allows the network to converge~\cite{warburg2020mapillary}, it involves a large memory footprint and computational overhead.
Another approach for informative example mining is online hard negative mining (OHM)~\cite{hermans2017defense, wu2017sampling}, which consists of first forming mini-batches by randomly selecting a subset of places from the dataset and sampling images from each of them. Then, in a later stage of the forward pass, only the most informative triplets (or pairs) present in the mini-batch are selected and used to compute the loss. Nevertheless, randomly constructed mini-batches can generate a large number of triplets (or pairs), most of which may be uninformative~\cite{hermans2017defense}. Yet selecting informative samples is crucial to robust feature learning~\cite{musgrave2020metric}. The advantage of OHM is that there is no memory bank (cache) and no out-of-the-loop mining step. However, as training progresses and the network eventually learns robust representations, the fraction of informative triplets (or pairs) within the randomly sampled mini-batches becomes limited (i.e., the network becomes good at distinguishing hard negatives). Therefore, it is recommended to use very large batch sizes~\cite{hermans2017defense} to potentially increase the presence of hard examples at each iteration.
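For concreteness, the sketch below illustrates one common in-batch mining rule, the ``batch hard'' variant of~\cite{hermans2017defense}; the function name and exact rule are our illustrative choices, not necessarily those of the cited works:
\begin{verbatim}
import torch
import torch.nn.functional as F

def batch_hard_triplets(x, labels):
    # x: (B, d) descriptors; labels: (B,) place identities
    x = F.normalize(x, dim=1)
    dist = torch.cdist(x, x)                   # pairwise distances
    same = labels[:, None] == labels[None, :]  # same-place mask
    hardest_pos = dist.masked_fill(~same, float('-inf')).max(dim=1).values
    hardest_neg = dist.masked_fill(same, float('inf')).min(dim=1).values
    return hardest_pos, hardest_neg            # per-anchor hard distances
\end{verbatim}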
In this work, we propose a new globally informed mini-batch sampling technique which, instead of randomly sampling places at each iteration, uses a proxy index to construct mini-batches containing visually similar places. The main idea behind our technique is the following: instead of caching highly dimensional individual image descriptors to mine hard negatives, we propose to add an auxiliary branch that computes compact place-specific representations that we call proxies. Thus, each place in the dataset can be globally represented by one low-dimensional proxy that can be effectively cached during the training. This allows us to build an index in which places are gathered in the same mini-batch according to the similarity of their proxies. Our technique involves negligible computational and memory overhead, while drastically improving performance.
\section{Related Work}
\label{sec:related}
\subsection{Visual Place Recognition}\label{ssec:vpr}
Most state-of-the-art techniques in VPR~\cite{arandjelovic2016netvlad, liu2019stochastic, seymour2019semantically, warburg2020mapillary, kim2017learned, liu2020digging, hausler2021patch, wang2022transvpr} train the network with mini-batches of triplets of images. Such techniques employ offline hard negative mining to form informative triplets. This is done by storing in a memory cache all image representations generated during the training, and using $k$-NN to retrieve, for each training query, the hardest negatives among all references in the cache and form informative triplets (the hard negatives are the images that do not depict the same place as the query but are too close to it in the representation space). However, most SOTA methods generate highly dimensional representations during the training phase, for instance, techniques that rely on NetVLAD~\cite{arandjelovic2016netvlad} generate descriptors of size $d = 32768$. As a result, caching representations when training with large datasets such as Mapillary~SLS~\cite{warburg2020mapillary} or GSV-Cities~\cite{ali2022gsv} quickly becomes infeasible, because of both the computational overhead and the memory footprint of $k$-NN, which has a computational complexity of $\mathcal{O}(QRd)$ and a memory footprint of $\mathcal{O}(Rd)$~\cite{cunningham2021k}, where $R$ is the number of reference samples (cached representations), $d$ the dimensionality of each sample, and $Q$ is the number of queries to be searched.
In \cite{thoma2020soft, arandjelovic2016netvlad, liu2019stochastic} the representations of all the training examples of the Pitts250k dataset are cached. Then, after a fixed number of iterations, the training is paused and the cache is used to mine the hardest $10$ negatives for each training query (to form hard triplets). Importantly, the cache is recalculated every $250$ to $1000$ iterations. Warburg \emph{et al}\bmvaOneDot~\cite{warburg2020mapillary} trained NetVLAD on Mapillary-SLS, which is a dataset comprising $1.6$M images. Faced with the huge memory overhead, they used a subcaching strategy, where only a subset of the training images are cached, from which the hard negatives were periodically mined. Note that, if the NetVLAD representations of all images in the MSLS dataset~\cite{warburg2020mapillary} were cached, the memory cache would be $196$GB in size.
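Assuming standard $4$-byte float32 descriptors, this figure checks out: $1.6\times 10^{6}\ \text{images} \times 32768\ \text{dimensions} \times 4\ \text{bytes} \approx 2.1\times 10^{11}\ \text{bytes} \approx 196$\,GB.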
From the above, it is evident that the extra memory and computational cost of offline hard mining for VPR remains an issue to be addressed.
\subsection{Deep Metric Learning}\label{ssec:dml}
Place recognition networks are generally trained using ranking loss functions issued from deep metric learning~\cite{zhang2021visual}, such as triplet ranking loss~\cite{schroff2015facenet} and contrastive loss~\cite{thoma2020soft}. However, during the training, deep metric learning (DML) networks often generate very compact representations compared to VPR, ranging from $d =128$ to $d=512$~\cite{chen2021deep}. This makes any caching mechanism much less greedy and computationally inexpensive. Related to our work are DML approaches~\cite{ge2018deep, smirnov2018hard} that perform negative mining on class-level representations (a class could be regarded as the equivalent of a place in VPR), under the assumption that class-level similarity is a good approximation of the similarity between instances. Ge~\emph{et al}\bmvaOneDot~\cite{ge2018deep} developed a technique that constructs a hierarchical tree for the triplet loss function. The strategy behind their approach is to store class-level representations during the training, identify neighbouring classes and put them in the same mini-batch, resulting in more informative mini-batches that can be further exploited by online hard mining. Applying these techniques directly to train VPR networks would require caching highly dimensional image-level representations (e.g. $32$K for NetVLAD), which is not feasible when the training dataset contains thousands of different places.
\section{Methodology}
\label{sec:method}
As mentioned above, VPR techniques generate highly dimensional representations, making caching and hard mining with $k$-NN impractical for large-scale datasets. Since the complexity of $k$-NN is linearly dependent on the number of references $R$ that need to be cached and on their dimensionality $d$~\cite{cunningham2021k}, and since the only purpose of the caching mechanism is to help retrieve hard examples, we propose to project the highly dimensional pooling representations (e.g. the resulting NetVLAD representations) through a separate branch ($\mathcal{H}$ in figure~\ref{fig:arch}) that we call \textit{proxy head}. $\mathcal{H}$ is an end-to-end trainable module that learns place-specific compact vectors of significantly smaller dimension compared to the pooling module. During each epoch, we capture and cache the semantics of each place (instead of each image) with one compact vector, acting as its global proxy. Therefore, the number of proxies to be cached is one order of magnitude smaller than the number of images in the dataset (considering that a place is generally depicted by $8$ to $20$ images as in GSV-Cities~\cite{ali2022gsv}). Most importantly, we can choose $d'$, the dimensionality of the proxy head $\mathcal{H}$, to be several orders of magnitude smaller than $d$, the dimensionality of the pooling layer. This allows us to perform global hard mining based on the compact proxies, with negligible additional memory and computation cost, as we show in section~\ref{sec:exp} (i.e., using $k$-NN on the proxies is orders of magnitude more efficient).
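To make these savings concrete, with the default $d' = 128$ used in section~\ref{sec:exp}: replacing one $32768$-dimensional descriptor per image with one $128$-dimensional proxy per place reduces the dimensionality by a factor of $256$ and the number of cached vectors by roughly one order of magnitude, so both the $\mathcal{O}(QRd)$ search cost and the $\mathcal{O}(Rd)$ memory footprint shrink by more than three orders of magnitude.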
\subsection{Representation Learning for VPR}
Given a dataset of places $\mathcal{D} = \left\{P_1, P_2, ..., P_N\right\}$ where $P_i = \left( \left\{I_1^i, I_2^i, ..., I_{|P_i|}^i\right\}, y_i \right)$ is a set of images depicting the same place and sharing the same identity (or label) $y_i$. The goal is to learn a function $\mathit{f_{\mathbf{\theta}}}$ which is, in most cases, a deep neural network composed of a backbone network followed by a pooling layer (e.g., NetVLAD). The network $\mathit{f_{\mathbf{\theta}}}$ takes an input image $I_i$ and outputs a representation vector $\mathbf{x}_i \in \mathbb{R}^{d}$ such that the similarity of a pair of instances $\left(\mathbf{x}_i, \mathbf{x}_j\right)$ is higher if they represent the same place, and lower otherwise.
As the generated representation $\mathit{f_{\theta}}\left(I_i\right) = \mathbf{x}_i$ is highly dimensional (i.e., $d = 32$k for NetVLAD~\cite{arandjelovic2016netvlad}), we propose to project it further in a separate branch of the network, that we call \textit{proxy head} ($\mathcal{H}$), represented by a function $\mathit{h_{\mathbf{\psi}}} : \mathbb{R}^{d} \mapsto \mathbb{R}^{d'}$ that projects the outputs from the pooling layer to a smaller Euclidean space where $d' \ll d$, as illustrated in figure~\ref{fig:arch}. Formally, for each vector $\mathbf{x}_i$, the proxy head produces a compact projection $\mathbf{z}_i$ as follows:
\begin{equation}\label{eq1}
\mathbf{z}_i = \mathit{h_{\mathbf{\psi}}} \left( \mathit{f_{\theta}}\left(I_i\right) \right)
= \mathit{h_{\mathbf{\psi}}} \left( \mathbf{x}_i \right)
\end{equation}
In this work, $\mathcal{H}$ is a fully connected layer that projects $d$-dimensional inputs to $d'$-dimensional outputs, followed by $L2$ normalization. This gives us direct control over the proxy dimensionality $d'$. However, $\mathcal{H}$ could also be an MLP or a trainable module with a different architecture.
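As a minimal PyTorch sketch of this head (the class name and default sizes are illustrative assumptions, not the authors' released code):
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProxyHead(nn.Module):
    # Fully connected projection followed by L2 normalization (Eq. 1).
    def __init__(self, d=32768, d_proxy=128):
        super().__init__()
        self.fc = nn.Linear(d, d_proxy)

    def forward(self, x):
        # x: (batch, d) pooled descriptors -> (batch, d') unit-norm z_i
        return F.normalize(self.fc(x), p=2, dim=1)
\end{verbatim}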
We use backpropagation to jointly learn the parameters $\mathbf{\theta}$ and $\mathbf{\psi}$, using pair-based (or triplet-based) loss functions from the metric learning literature~\cite{musgrave2020metric} such as Contrastive loss~\cite{hadsell2006dimensionality}, Triplet loss~\cite{hermans2017defense} and Multi-Similarity loss~\cite{wang2019multi}. \textbf{Note}: since the proxy head is only used during the training phase (to mine hard samples) and discarded during evaluation and testing, we might not need to backpropagate the gradient from $\mathcal{H}$ back to the pooling layer. Quantitative experiments show that this does not affect performance.
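One plausible reading of this joint training, sketched under the assumption that the same metric loss is applied to both branches (names such as \texttt{backbone\_pool} and \texttt{metric\_loss} are placeholders):
\begin{verbatim}
x = backbone_pool(images)     # f_theta: (M*K, d) pooled descriptors
z = proxy_head(x.detach())    # stop gradients from H at the pooling layer
loss = metric_loss(x, labels) + metric_loss(z, labels)
loss.backward()               # updates theta via x, psi via z
\end{verbatim}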
\subsection{Global Proxy-based Hard Mining (GPM)}\label{ssec:gpm}
Traditionally, during the training phase, each mini-batch is formed by randomly sampling $M$ places from the dataset, then picking $K$ images from each one of them, thus resulting in a mini-batch $\mathcal{B}$ of size $M \times K$.
The goal of global hard mining is to populate each training mini-batch with $M$ similar places, which in turn yields hard pairs and triplets, potentially inducing a higher loss value and thereby learning robust and discriminative representations. For this purpose, we use the representations generated by the proxy head $\mathcal{H}$, and compute for each place $P_i \in \mathcal{B}$ a single compact descriptor $\mathbf{c}_i$ as follows:
\begin{equation}
\mathbf{c}_i = \frac{1}{|P_i|} \sum_{I \in P_i} \mathit{h_{\mathbf{\psi}}} \left( \mathit{f_{\theta}}\left(I\right) \right)
\end{equation}
where $\mathbf{c}_i$ corresponds to the average of the proxy representations of the images depicting $P_i$ in the mini-batch $\mathcal{B}$. During the training we regard $\mathbf{c}_i$ as a global descriptor (a proxy) of $P_i$ and cache it along with its identity $y_i$ into a memory bank $\Omega$. Then, at the end of each epoch, we use $k$-NN to build an index upon $\Omega$, in which places are gathered together according to the similarity of their proxies (similar places need to appear in the same mini-batch) as in Algorithm~\ref{algo_index}.
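A compact sketch of this caching step, with the memory bank $\Omega$ modeled as a plain Python dictionary (an implementation choice on our part):
\begin{verbatim}
def cache_proxies(z, labels, omega):
    # z: (M*K, d') batch projections; labels: (M*K,) place identities y_i
    for y in labels.unique():
        c = z[labels == y].mean(dim=0)    # average over the K images (Eq. 2)
        omega[int(y)] = c.detach().cpu()  # detach from the graph and cache
\end{verbatim}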
\begin{algorithm}
\SetKwInOut{Input}{input}
\SetKwInOut{Output}{output}
\Input{$\Omega$: the memory bank comprising proxies representing all places in the dataset \\ $M$: the number of places per mini-batch.}
\Output{$\mathcal{L}$: a list of tuples, where each tuple contains $M$ identities of places that need to be sampled in the same mini-batch.}
\BlankLine
\nl $\mathcal{S} \leftarrow k\text{-NN}(k=M)$ \Comment{Initialize a $k$-NN module $\mathcal{S}$ with $k$ equal to $M$ the number of places per mini-batch.}
\nl $\mathcal{S}\text{.add}(\Omega)$ \Comment{Add the contents of $\Omega$ to $\mathcal{S}$ as references.}
\While{$\mathcal{S} \neq \emptyset$}{
\nl Randomly pick a place $c_i$ from $\mathcal{S}$
\nl $\mathbf{T} \leftarrow \mathcal{S}\text{.search}(c_i)$ \Comment{Search $\mathcal{S}$ for the $M$-most similar places to $c_i$.}
\nl $\mathcal{L} \leftarrow \mathcal{L}\cup \mathbf{T}$ \Comment{Append the $M$ identities to $\mathcal{L}$.}
\nl $\mathcal{S} \leftarrow \mathcal{S} \setminus \mathbf{T}$ \Comment{Remove from $\mathcal{S}$ all places present in $\mathbf{T}$.}
}
\caption{Index based mini-batch sampling}\label{algo_index}
\end{algorithm}
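Algorithm~\ref{algo_index} admits, for instance, the following sketch built on scikit-learn's NearestNeighbors (faiss would be a drop-in alternative); refitting after every removal is the simplest, though not the fastest, way to honor the set-difference step, and the function name is our own:
\begin{verbatim}
import numpy as np
from sklearn.neighbors import NearestNeighbors

def build_batch_index(omega, M):
    ids = np.array(list(omega.keys()))
    proxies = np.stack([omega[i].numpy() for i in ids])
    batches = []
    while len(ids) >= M:
        knn = NearestNeighbors(n_neighbors=M).fit(proxies)
        q = np.random.randint(len(ids))              # random pivot place
        _, group = knn.kneighbors(proxies[q:q + 1])  # M most similar places
        batches.append(tuple(ids[group[0]]))
        keep = np.setdiff1d(np.arange(len(ids)), group[0])
        ids, proxies = ids[keep], proxies[keep]      # remove T from S
    return batches
\end{verbatim}
Called once at the end of each epoch, the returned list plays the role of $\mathcal{L}$ for the sampler of the following epoch.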
For the epoch that follows, the mini-batch sampler picks one tuple from $\mathcal{L}$ at each iteration, yielding $M$ similar places. We then pick $K$ images from each place, resulting in highly informative mini-batches of size $M \times K$. Qualitative results in section~\ref{ssec:qualitative} show the effectiveness of our approach in constructing informative mini-batches.
\vspace{5pt}
\noindent\textbf{Connection to proxy-based loss functions.} Deep metric learning techniques that employ the term `\textit{proxy}', such as \cite{kim2020proxy, yang2022hierarchical, yao2022pcl}, are fundamentally different from our approach in that they learn proxies at the loss level, and optimize on the similarity between the proxies and individual samples in the mini-batch. However, learning proxies at the loss level forces them to be of the same dimensionality as the individual samples (e.g., $32$K if used to train NetVLAD). In contrast, we learn compact proxies independently of the loss function, and use them only to construct informative mini-batches.
\section{Experiments}\label{sec:exp}
\textbf{Dataset and Metrics.} The GSV-Cities dataset~\cite{ali2022gsv} is used for training; it contains $65$k different places spread across numerous cities around the world, totalling $552$k images. For testing, we use the following $4$ benchmarks: Pitts250k-test~\cite{torii2013visual}, MSLS~\cite{warburg2020mapillary}, SPED~\cite{zaffar2021vpr} and Nordland~\cite{zaffar2021vpr}, which contain, respectively, $8$K, $750$, $607$ and $1622$ query images, and $83$k, $19$k, $607$ and $1622$ reference images. We follow the same evaluation metric as~\cite{arandjelovic2016netvlad, warburg2020mapillary, zaffar2021vpr}, where recall@K is reported.
\noindent\textbf{Default Settings.}
In all experiments, we use ResNet-50~\cite{he2016deep} as the backbone network, pretrained on ImageNet~\cite{krizhevsky2012imagenet} and cropped at the last residual block, coupled with NetVLAD~\cite{arandjelovic2016netvlad} as the pooling layer; we chose NetVLAD because it is the most widely used pooling technique with consistent SOTA performance. Stochastic gradient descent (SGD) is utilized for optimization, with momentum $0.95$ and weight decay $0.0001$. The initial learning rate of $0.05$ is multiplied by $0.3$ every $5$ epochs. We train for a maximum of $30$ epochs using images resized to $224\times 224$.
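Under a PyTorch setup (our assumption; \texttt{model} here stands for the backbone, pooling layer and proxy head together), these settings translate into:
\begin{verbatim}
import torch

optimizer = torch.optim.SGD(model.parameters(), lr=0.05,
                            momentum=0.95, weight_decay=0.0001)
# multiply the learning rate by 0.3 every 5 epochs
scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
                                            step_size=5, gamma=0.3)
\end{verbatim}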
Unless otherwise specified, we use mini-batches containing $M=60$ places, each of which is depicted by $K=4$ images ($240$ in total), and fix the output size of the proxy head $d'$ to $128$ when applicable.
\subsection{Effectiveness of GPM}
To demonstrate the effectiveness of our proposed method, we conduct ablation studies on $4$ different VPR benchmarks. We illustrate the effect of using our technique (GPM) alongside three different loss functions, namely Contrastive loss \cite{hadsell2006dimensionality}, Triplet loss \cite{hermans2017defense} and Multi-Similarity loss~\cite{wang2019multi}.
For each loss function, we conducted four test scenarios (one on each line) as shown in Table~\ref{tab:my-table}. First, we train the network with randomly constructed batches without OHM or GPM (baseline \#1). In the second scenario, we add GPM to the first baseline and show the effect of the globally informed sampling provided by our method. The results demonstrate that GPM alone can greatly improve the performance of all three loss functions. For example, the triplet loss improved recall@1 (in absolute value) by $4.3, 4.1, 3.6$ and $3.4$ points on Pitts250k, MSLS, SPED and Nordland respectively, while Multi-Similarity loss improved by $5.4, 8.5, 13.9$ and $8.6$ points.
In the third scenario (baseline \#2), online hard mining (OHM) is used during the training without GPM. This consists of selecting the most informative pairs or triplets from randomly sampled mini-batches. The results show that OHM can improve performance over baseline~\#1, which is consistent with the existing literature~\cite{hermans2017defense}.
For the last scenario, we used GPM combined with baseline \#2 (i.e., mini-batches are sampled using GPM and then further exploited by OHM); the results show that our technique (GPM) consistently outperforms the baseline. For instance, contrastive loss improved recall@1 (in percentage points) by $5.9$ on Pitts250k, $4.7$ on MSLS, $10.1$ on SPED and $16.8$ on Nordland. Note that the relative performance boost introduced by GPM on Nordland is more than $100\%$ for both contrastive and triplet loss. The best overall performance is achieved using Multi-Similarity loss, which boosted the recall@1 over baseline~\#2 by, respectively, $2.0, 4.6, 4.8$ and $9.4$ points on the four benchmarks. This ablation study highlights the effectiveness of GPM compared to randomly constructed mini-batches.
\begin{table}
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{l||cc|ccc|ccc|ccc|ccc}
\multirow{2}{*}{Loss function} & \multicolumn{2}{c|}{Hard mining} & \multicolumn{3}{c|}{Pitts250k-test} & \multicolumn{3}{c|}{MSLS-val} & \multicolumn{3}{c|}{SPED} & \multicolumn{3}{c}{Nordland} \\ \cline{2-15}
& OHM & GPM & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 \\ \hline\hline
\multirow{4}{*}{Triplet} & & & 77.0 & 90.0 & 93.6 & 67.7 & 79.2 & 82.4 & 53.7 & 69.5 & 75.8 & 8.4 & 16.3 & 20.6 \\
& & \ding{51} & 81.3 & 91.9 & 94.9 & 71.8 & 82.0 & 86.3 & 57.3 & 71.8 & 77.8 & 11.8 & 20.3 & 25.9 \\ \cline{2-15}
& \ding{51} & & 87.5 & 95.4 & 96.9 & 74.0 & 85.1 & 87.7 & 62.4 & 78.6 & 83.2 & 10.1 & 17.9 & 22.6 \\
& \ding{51} & \ding{51} & \textbf{90.0} & \textbf{96.4} & \textbf{97.6} & \textbf{77.6} & \textbf{88.0} & \textbf{90.4} & \textbf{71.3} & \textbf{83.7 } & \textbf{87.3} & \textbf{20.2 } & \textbf{33.2} & \textbf{38.8} \\ \hline\hline
\multirow{4}{*}{Contrastive} & & & 83.0 & 93.0 & 95.2 & 72.7 & 82.8 & 85.8 & 53.7 & 67.2 & 74.8 & 8.0 & 13.8 & 17.3 \\
& & \ding{51} & 88.8 & 95.2 & 96.8 & 79.0 & 85.8 & 88.5 & 67.7 & 79.2 & 83.4 & 20.8 & 33.9 & 41.5 \\ \cline{2-15}
& \ding{51} & & 84.5 & 94.0 & 95.9 & 74.6 & 84.7 & 87.8 & 63.4 & 76.9 & 82.5 & 14.6 & 25.2 & 31.2 \\
& \ding{51} & \ding{51} & \textbf{90.4} & \textbf{96.4} & \textbf{97.6} & \textbf{79.3} & \textbf{88.5} & \textbf{90.7} & \textbf{73.5} & \textbf{85.5} & \textbf{88.9} & \textbf{31.4} & \textbf{46.4} & \textbf{53.5} \\ \hline\hline
\multirow{4}{*}{Multi-Similarity} & & & 84.0 & 93.3 & 95.5 & 72.7 & 82.7 & 86.5 & 50.7 & 65.1 & 71.5 & 9.4 & 17.9 & 21.7 \\
& & \ding{51} & 89.4 & 96.0 & 97.3 & 81.2 & 89.1 & 90.9 & 64.6 & 76.4 & 80.6 & 18.0 & 30.1 & 36.0 \\ \cline{2-15}
& \ding{51} & & 89.5 & 96.3 & 97.6 & 77.4 & 87.2 & 90.1 & 74.6 & 86.8 & 89.9 & 29.1 & 43.3 & 50.2 \\
& \ding{51} & \ding{51} & \textbf{91.5} & \textbf{97.2} & \textbf{98.1 } & \textbf{82.0} & \textbf{90.4} & \textbf{91.4} & \textbf{79.4} & \textbf{90.6} & \textbf{93.2} & \textbf{38.5} & \textbf{53.9} & \textbf{60.7} \\ \hline
\end{tabular}
}
\vspace{6pt}
\caption{\small Ablation. We study the performance gain of three loss functions. For each loss, we train $4$ networks. $2$ of which are baselines (one with Online Hard Mining (OHM) and one without), and the other $2$ are to compare the performance gain introduced by our method (GPM).}
\label{tab:my-table}
\end{table}
These results are corroborated by the curves in Figure~\ref{fig:valid_triplets}, where we keep track of the fraction of informative pairs and triplets within the mini-batch. As training progresses, the network learns to identify most hard samples, making a large fraction of pairs and triplets in the mini-batch uninformative.
This is highlighted by the red-dotted curve in Figure~\ref{fig:valid_triplets} where the fraction of informative pairs and triplets rapidly decreases to less than $15\%$ after $15$K iterations. More importantly, when we use GPM, where mini-batches are constructed in such a way to incorporate highly informative pairs and triplets, the fraction of informative samples (blue line) stays at around $50\%$ even after $30$K iterations, which explains the performance boost in Table~\ref{tab:my-table}.
\begin{figure}[th]%
\centering
\subfigure[Triplet loss]{\label{fig:valid1}\includegraphics[width=0.32\textwidth]{images/valid_triplets.png}}
\subfigure[Contrastive loss]{\label{fig:valid2}\includegraphics[width=0.32\textwidth]{images/valid_pairs_.png}}
\subfigure[Multi-Similarity loss]{\label{fig:valid3}\includegraphics[width=0.32\textwidth]{images/valid_pairs_ms.png}}
\vspace{3pt}
\caption{\small Percentage of valid triplets/pairs per mini-batch during the training. Our technique (GPM) constructs highly informative mini-batches, which in turn keeps the number of valid pairs/triplets higher throughout the training phase.}
\label{fig:valid_triplets}
\end{figure}
\subsection{Mini-batch Size} The size of the mini-batch is a key factor in the performance of many pair and triplet based learning approaches. In this experiment, we investigate its impact by using Multi-Similarity loss with and without GPM on three benchmarks. Results are shown in Figure~\ref{fig:batch_size}, where we observe that the smaller the mini-batch size, the lower the performance.
Moreover, when comparing performance with and without GPM, the gap widens as the batch size decreases. This demonstrates that our method brings consistent performance improvements with a wide range of mini-batch sizes.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\linewidth]{images/batch_size_effect.png}
\vspace{4pt}
\caption{\small Impact of the mini-batch size when training with and without GPM. We report recall@1 on Pitts30k-test, MSLS and SPED respectively. The horizontal axis shows $M$ the number of places in the mini-batch. GPM is effective for a wide range of mini-batch sizes, with more impact when smaller mini-batches are used for training. This is of great importance when training hardware resources are limited.}
\label{fig:batch_size}
\end{figure}
\subsection{Memory and computational cost}
Since our method (GPM) requires adding a trainable branch to the network and a memory cache, we investigate the additional computation and memory cost by varying the dimensionality of the proxy head. For each configuration, we train the network for $20$ epochs and record the training time (including the time to build the index and construct mini-batches), the GPU memory required during the training, the size of the memory bank $\Omega$ (cache size) and the recall@1 performance on Pitts30k-test.
We first train a baseline model without GPM, and compare against it. Note that for the GPU memory and Cache size, we report the amount of extra memory that was needed compared to the baseline.
Table~\ref{tab:table2} shows that the baseline model takes $1.93$ hours to finish $20$ training epochs and achieves a recall@1 of $86.6\%$. Since the baseline does not use GPM, there is no extra cache memory (cache size $= 0$).
We then run multiple experiments with GPM, varying the dimensionality $d'$ of the proxy head (from $32$ to $1024$). The results show a significant increase in recall@1 performance ($86.6\% \rightarrow 89.4\%$) for a \textit{negligible} amount of extra GPU and cache memory. For example, by using a proxy of dimension $d'=128$ (as in the above experiments), we end up with $2$MB of extra GPU memory for training $\mathcal{H}$ and $32$MB for the memory cache, with \textit{practically} no extra training time. We also notice that a higher-dimensional proxy does not automatically translate to better performance (e.g. GPM with $d'=256$ yields better performance than $d'=1024$).
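These cache sizes are consistent with $4$-byte float32 storage: for the $65$k places of GSV-Cities, $65{,}000 \times 128 \times 4\ \text{bytes} \approx 33$\,MB, matching the $+0.032$\,GB entry in Table~\ref{tab:table2}, and $65{,}000 \times 32768 \times 4\ \text{bytes} \approx 8$\,GB for the proxy-free variant.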
In particular, we run another experiment (the rightmost column in Table~\ref{tab:table2}) where, instead of using a proxy head to generate proxies, we save the NetVLAD representations into the cache (we populate $\Omega$ with $32$k-dimensional vectors) and apply global hard mining on them. We end up with $8.0$GB of extra cache memory, more than double the training time and, most importantly, worse recall@1 performance ($88.7\%$ compared to $89.3\%$ when using a $256$-d proxy head). This can be explained by the fact that using the NetVLAD representations results in mining the most difficult pairs, which is known to impact performance if the dataset contains a certain amount of outliers~\cite{hermans2017defense}. This experiment shows that, even if memory and computation are not a concern, GPM is still a better choice for learning robust representations.
\begin{table}[th]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{l||c||cccccc||c}
& \begin{tabular}[c]{@{}c@{}}Baseline\\ (no GPM)\end{tabular} & \multicolumn{6}{c||}{\begin{tabular}[c]{@{}c@{}}Global Proxy-based Hard Mining\\ (GPM)\end{tabular}} & \begin{tabular}[c]{@{}c@{}}Global hard mining \\ without proxy\end{tabular} \\ \hline\hline
Dimensionality & 0 & 32 & 64 & 128 & 256 & 512 & 1024 & 32768 \\ \hline
Training time (hours) & 1.93 & 1.93 & 1.93 & 1.93 & 1.94 & 2.05 & 2.1 & 4.83 \\ \hline
GPU memory (GB) & 10.4 & +0.002 & +0.002 & +0.002 & +0.03 & +0.06 & +0.14 & +0.0 \\ \hline
Cache size (GB) & 0.0 & +0.008 & +0.016 & +0.032 & +0.064 & +0.128 & +0.256 & +8.0 \\ \hline
Recall@1 (\%) & 86.6 & 89.1 & 89 & 89.3 & 89.4 & 89 & 89.2 & 88.7 \\ \hline
\end{tabular}
}
\vspace{7pt}
\caption{\small Memory and computation cost for different dimensions of the proxy head, compared against the baseline (without GPM). We also compare against global mining without a proxy head, where the memory bank is filled with the highly dimensional NetVLAD representations.}
\label{tab:table2}
\end{table}
\subsection{Qualitative Results}\label{ssec:qualitative}
Our technique (GPM) relies on the similarity between proxies to form mini-batches comprising visually similar places. In this experiment, we used GPM to sample a mini-batch containing $6$ places ($M=6$) from a database of $65$k different places. Note that the probability of \emph{randomly} sampling $6$ similar places among $65$k is extremely low. We show in Figure~\ref{fig:batch1} a mini-batch of $6$ places sampled using GPM; we notice that all $6$ places are visually similar, containing similar textures and structures arranged in a similar manner. In Figures~\ref{fig:batch2}~and~\ref{fig:batch3} we visualize a subset of triplets and pairs mined using OHM on the same mini-batch sampled by GPM. Some triplets contain negatives that are visually extremely difficult to distinguish. This shows how using GPM can ensure, to a certain degree, the presence of visually similar places at each training iteration, increasing the likelihood of hard pairs and triplets, which in turn helps learn robust representations.
\begin{figure}[thb]%
\centering
\subfigure[A mini-batch sampled with GPM]{\label{fig:batch1}\includegraphics[width=0.385\textwidth]{images/a_batch.png}}
\hfill
\subfigure[Valid triplets]{\label{fig:batch2}\includegraphics[width=0.31\textwidth]{images/triplets.png}}
\hfill
\subfigure[Valid pairs]{\label{fig:batch3}\includegraphics[width=0.2\textwidth]{images/pos_neg_pairs.png}}
\vspace{3pt}
\caption{\small (a) An example of a mini-batch containing $6$ places sampled from a dataset of $65$k places using GPM. Each place is depicted by $4$ images (a row). This highlights the ability of our technique to construct mini-batches containing similar places, which in turn increases the presence of hard pairs and triplets. (b) A subset of hard triplets generated from the mini-batch, each row consists of a triplet with the blue as anchor, green as the positive and red as the hard negative. (c) A subset of positive (green) and negative (red) pairs. All triplets and pairs have been mined in an online fashion from the mini-batch sampled by GPM.}
\label{fig:a_batch}
\end{figure}
\section{Conclusion}
\label{conclusion}
In this paper, we proposed a novel technique that employs compact proxy descriptors to sample highly informative mini-batches at each training iteration with negligible additional memory and computational costs. To do so, we add an auxiliary branch to the baseline network that generates compact place-specific descriptors, which are used to compute one proxy for each place in the dataset. The compactness of these proxies allows us to efficiently build a global index that gathers places in the same mini-batch based on the similarity of their proxies.
Our method proved to be very effective in keeping the fraction of informative pairs and triplets at a high level during the entire training phase, resulting in substantial improvement in overall performance. Future work can focus on the architecture of the proxy head and on different ways of building the global index.
\vspace{5pt}
\noindent\textbf{Acknowledgement.} This work has been supported by The Fonds de Recherche du Québec Nature et technologies (FRQNT). We gratefully acknowledge the support of NVIDIA Corporation with the donation of a Quadro RTX 8000 GPU used for our experiments.
\section{Introduction}
\label{sec:intro}
\begin{figure}[th]
\centering
\includegraphics[width=\linewidth]{images/architecture.png}
\vspace{0.5pt}
\caption{A diagram of our proposed method. We add a new end-to-end trainable branch to the network (proxy head $\mathcal{H}$) that projects highly dimensional vectors $\mathbf{x}_i$ into very compact representations $\mathbf{z}_i$ ; we use the latter to compute one proxy descriptor $\mathbf{c}_i$ for each place in the mini-batch. We detach each proxy from the computation graph and cache it into a memory bank $\Omega$. Then, at the begining of each epoch, we construct an index upon $\Omega$, in which places are gathered together according to the similarity of their proxies. This index is used to sample mini-batches containing similar places, which yields highly informative pairs or triplets. We call this strategy Global Proxy-based Hard Mining (GPM).}
\label{fig:arch}
\end{figure}
Visual place recognition (VPR) consists of determining the location of a place depicted in a query image by comparing it to a database of previously visited places with known geo-references. This is of major importance for many robotics and computer vision tasks, such as autonomous driving~\cite{chowdhary2013gps, maddern20171}, SLAM~\cite{milford2012seqslam, engel2014lsd}, image geo-localization~\cite{baik2020domain, hausler2021patch, wang2022transvpr} and 3D reconstruction~\cite{cieslewski2016point, sattler2017large}.
Recently, advances in deep learning~\cite{menghani2021efficient} have made retrieval-based place recognition a preferable choice for efficient and large-scale localization. Current VPR techniques \cite{arandjelovic2016netvlad, liu2019stochastic, warburg2020mapillary, thoma2020soft, zhu2020regional, hausler2021patch, wang2022transvpr} use metric learning loss functions to train deep neural networks for VPR. These loss functions operate on the relationships between images in a mini-batch. As such, representations of images from the same place are brought closer and those from different places are distanced~\cite{musgrave2020metric}. For instance, in the most used architecture for VPR, NetVLAD~\cite{arandjelovic2016netvlad, liu2019stochastic, warburg2020mapillary, hausler2021patch, wang2022transvpr}, the network is trained using a triplet ranking loss function that operates on triplets, each of which consists of a query image, a positive image depicting the same place as the query, and a negative image depicting a different place. Moreover, the triples need to be informative in order for the network to converge~\cite{hermans2017defense}, meaning that for each query, the negative must be hard for the network to distinguish from the positive. To do so, these techniques rely on offline hard negative mining, where every image representation generated by the network is kept in a memory bank (cache), to be used offline (out of the training loop) to find the hardest negatives for each training query. Although offline mining allows the network to converge~\cite{warburg2020mapillary}, it involves a large memory footprint and computational overhead.
Another approach for informative example mining is online hard negative mining (OHM)~\cite{hermans2017defense, wu2017sampling}, which consists of first forming mini-batches, by randomly selecting a subset of places from the dataset and sampling images from each of them. Then, in a later stage of the forward pass, select only the most informative triples (or pairs) present in the mini-batch and use them to compute the loss. Nevertheless, randomly constructed mini-batches can generate a large number of triplets (or pairs), most of which may be uninformative~\cite{hermans2017defense}. Yet selecting informative samples is crucial to robust feature learning~\cite{musgrave2020metric}. The advantage of OHM is that there is no memory bank (cache) and no out-of-the-loop mining step. However, as training progresses and the network eventually learns robust representations, the fraction of informative triplets (or pairs) within the randomly sampled mini-batches becomes limited (i.e., the network becomes good at distinguishing hard negatives). Therefore, it's recommended to use very large batch sizes~\cite{hermans2017defense} to potentially increase the presence of hard examples at each iteration.
In this work, we propose a new globally informed mini-batch sampling technique, which instead of randomly sampling places at each iteration, it uses a proxy index to construct mini-batches containing visually similar places. The main idea behind our technique is the following: instead of caching highly dimensional individual image descriptors to mine hard negatives, we propose to add an auxiliary branch that computes compact place-specific representations that we call proxies. Thus, each place in the dataset can be globally represented by one low-dimensional proxy that can be effectively cached during the training. This allows us to build an index in which places are gathered in the same mini-batch according to the similarity of their proxies. Our technique involves negligible computational and memory overhead, while drastically improving performance.
\section{Related Work}
\label{sec:related}
\subsection{Visual Place Recognition}\label{ssec:vpr}
Most state-of-the-art techniques in VPR~\cite{arandjelovic2016netvlad, liu2019stochastic, seymour2019semantically, warburg2020mapillary, kim2017learned, liu2020digging, hausler2021patch, wang2022transvpr} train the network with mini-batches of triplets of images. Such techniques employ offline hard negative mining to form informative triplets. This is done by storing in a memory cache all image representations generated during the training, and using $k$-NN to retrieve, for each training query, the hardest negatives among all references in the cache and form informative triplets (the hard negatives are the images that do not depict the same place as the query but are too close to it in the representation space). However, most SOTA methods generate highly dimensional representations during the training phase, for instance, techniques that rely on NetVLAD~\cite{arandjelovic2016netvlad} generate descriptors of size $d = 32768$. As a result, caching representations when training with large datasets such as Mapillary~SLS~\cite{warburg2020mapillary} or GSV-Cities~\cite{ali2022gsv} quickly becomes infeasible, because of both the computational overhead and the memory footprint of $k$-NN, which has a computational complexity of $\mathcal{O}(QRd)$ and a memory footprint of $\mathcal{O}(Rd)$~\cite{cunningham2021k}, where $R$ is the number of reference samples (cached representations), $d$ the dimensionality of each sample, and $Q$ is the number of queries to be searched.
In \cite{thoma2020soft, arandjelovic2016netvlad, liu2019stochastic} the representations of all the training examples of Pitt250k dataset are cached. Then, after a fixed number of iterations, the training is paused and the cache is used to mine the hardest $10$ negatives for each training query (to form hard triplets). Importantly, the cache is recalculated every $250$ to $1000$ iterations. Warburg \emph{et al}\bmvaOneDot~\cite{warburg2020mapillary} trained NetVLAD on Mapillary-SLS, which is a dataset comprising $1.6$M images. Faced with the huge memory overhead, they used a subcaching strategy, where only a subset of the training images are cached, from which the hard negatives were periodically mined. Note that, if the NetVLAD representations of all images in MSLS dataset~\cite{warburg2020mapillary} were cached, the memory cache would be $196$GB in size.
From the above, it is evident that the extra memory and computational cost of offline hard mining for VPR remains an issue to be addressed.
\subsection{Deep Metric Learning}\label{ssec:dml}
Place recognition networks are generally trained using ranking loss functions issued from deep metric learning~\cite{zhang2021visual}, such as triplet ranking loss~\cite{schroff2015facenet} and contrastive loss~\cite{thoma2020soft}. However, during the training, deep metric learning (DML) networks often generate very compact representations compared to VPR, ranging from $d =128$ to $d=512$~\cite{chen2021deep}. This makes any caching mechanism much less greedy and computationally inexpensive. Related to our work are DML approaches~\cite{ge2018deep, smirnov2018hard} that perform negative mining on class-level representations (a class could be regarded as the equivalent of a place in VPR), under the assumption that class-level similarity is a good approximation of the similarity between instances. Smirnov~\emph{et al}\bmvaOneDot~\cite{ge2018deep} developed a technique that constructs a hierarchical tree for the triplet loss function. The strategy behind their approach is to store class-level representations during the training, identify neighbouring classes and put them in the same mini-batch, resulting in more informative mini-batches that can be further exploited by online hard mining. Applying these techniques directly to train VPR networks would require to cache highly dimensional image-level representations (e.g. $32$K for NetVLAD), which is not feasible when the training dataset contains thousands of different places.
\section{Methodology}
\label{sec:method}
As mentioned above, VPR techniques generate highly dimensional representations, making caching and hard mining with $k$-NN impractical for large-scale datasets. Knowing that the complexity of $k$-NN is linearly dependent on the number of references $Q$ that need to be cached and their dimensionality $d$~\cite{cunningham2021k}. And considering that the only purpose of the caching mechanism is to help retrieve hard examples. We propose to project the highly dimensional pooling representations (e.g. the resulting NetVLAD representations) into a separate branch ($\mathcal{H}$ in figure~\ref{fig:arch}) that we call \textit{proxy head}. $\mathcal{H}$ is an end-to-end trainable module that learns place-specific compact vectors of significantly smaller dimension compared to the pooling module. During each epoch, we capture and cache the semantics of each place (instead of each image) with one compact vector, acting as its global proxy. Therefore, the number of proxies to be cached is one order of magnitude smaller than the number of images in the dataset (considering that a place is generally depicted by $8$ to $20$ images as in GSV-Cities~\cite{ali2022gsv}). Most importantly, we can choose $d'$ the dimensionality of the proxy head $\mathcal{H}$ to be several orders of magnitude smaller than $d$ the dimensionality of the pooling layer. This allows to perform global hard mining based on the compact-proxies, with negligible additional memory and computation cost as we show in section~\ref{sec:exp} (i.e., using $k$-NN on the proxies is orders of magnitude more efficient).
\subsection{Representation Learning for VPR}
Given a dataset of places $\mathcal{D} = \left\{P_1, P_2, ..., P_N\right\}$ where $P_i = \left( \left\{I_1^i, I_2^i, ..., I_{|P_i|}^i\right\}, y_i \right)$ is a set of images depicting the same place and sharing the same identity (or label) $y_i$. The goal is to learn a function $\mathit{f_{\mathbf{\theta}}}$ which is, in most cases, a deep neural network composed of a backbone network followed by a pooling layer (e.g., NetVLAD). The network $\mathit{f_{\mathbf{\theta}}}$ takes an input image $I_i$ and outputs a representation vector $\mathbf{x}_i \in \mathbb{R}^{d}$ such that the similarity of a pair of instances $\left(\mathbf{x}_i, \mathbf{x}_j\right)$ is higher if they represent the same place, and lower otherwise.
As the generated representation $\mathit{f_{\theta}}\left(I_i\right) = \mathbf{x}_i$ is highly dimensional (i.e., $d = 32$k for NetVLAD~\cite{arandjelovic2016netvlad}), we propose to project it further in a separate branch of the network, that we call \textit{proxy head} ($\mathcal{H}$), represented by a function $\mathit{h_{\mathbf{\psi}}} : \mathbb{R}^{d} \mapsto \mathbb{R}^{d'}$ and projects the outputs from the pooling layer to a smaller Euclidean space where $d' << d$ as illustrated in figure~\ref{fig:arch}. Formally, for each vector $\mathbf{x}_i$, the proxy head produces a compact projection $\mathbf{z}_i$ as follow:
\begin{equation}\label{eq1}
\mathbf{z}_i = \mathit{h_{\mathbf{\psi}}} \left( \mathit{f_{\theta}}\left(I_i\right) \right)
= \mathit{h_{\mathbf{\psi}}} \left( \mathbf{x}_i \right)
\end{equation}
In this work, $\mathcal{H}$ is a fully connected layer that projects $d$-dimensional inputs to $d'$-dimensional outputs followed by $L2$ normalization. This gives us the control of the proxy dimensionality $d'$. However, $\mathcal{H}$ could also be an MLP or a trainable module of different architecture.
We use backpropagation to jointly learn the parameters $\mathbf{\theta}$ and $\mathbf{\psi}$, using pair based (or triplet based) loss functions from metric learning literature~\cite{musgrave2020metric} such as Contrastive loss~\cite{hadsell2006dimensionality}, Triplet loss~\cite{hermans2017defense} and Multi-Similarity loss~\cite{wang2019multi}. \textbf{Note}: since the proxy head is only used during the training phase (to mine hard samples) and discarded during evaluation and test, we might not need to backpropagate the gradient from $\mathcal{H}$ back to the pooling layer. Quantitative experiments show that this does not affect performance.
\subsection{Global Proxy-based Hard Mining (GPM)}\label{ssec:gpm}
Traditionally, during the training phase, each mini-batch is formed by randomly sampling $M$ places from the dataset, then picking $K$ images from each one of them, thus resulting in a mini-batch $\mathcal{B}$ of size $M \times K$.
The goal of global hard hard mining is to populate each training mini-batch with $M$ similar places, which in turn yields hard pairs and triplets, potentially inducing a higher loss value, thereby learning robust and discriminative representations. For this purpose, we use the representations generated by the proxy head $\mathcal{H}$, and compute for each place $P_i \in \mathcal{B}$, a single compact descriptor $\mathbf{c}_i$ as follows:
\begin{equation}
\mathbf{c}_i = \frac{1}{|P_i|} \sum_{I \in P_i} \mathit{h_{\mathbf{\psi}}} \left( \mathit{f_{\theta}}\left(I\right) \right)
\end{equation}
where $\mathbf{c}_i$ corresponds to the average of the proxy representations of the images depicting $P_i$ in the mini-batch $\mathcal{B}$. During the training we regard $\mathbf{c}_i$ as a global descriptor (a proxy) of $P_i$ and cache it along with its identity $y_i$ into a memory bank $\Omega$. Then, at the end of each epoch, we use $k$-NN to build an index upon $\Omega$, in which places are gathered together according to the similarity of their proxies (similar places need to appear in the same mini-batch) as in Algorithm~\ref{algo_index}.
\begin{algorithm}
\SetKwInOut{Input}{input}
\SetKwInOut{Output}{output}
\Input{$\Omega$: the memory bank comprising proxies representing all places in the dataset \\ $M$: the number of places per mini-batch.}
\Output{$\mathcal{L}$: a list of tuples, where each tuple contains $M$ identities of places that need to be sampled in the same mini-batch.}
\BlankLine
\nl $\mathcal{S} \leftarrow k\text{-NN}(k=M)$ \Comment{Initialize a $k$-NN module $\mathcal{S}$ with $k$ equal to $M$ the number of places per mini-batch.}
\nl $\mathcal{S}\text{.add}(\Omega)$ \Comment{Add the contents of $\Omega$ to $\mathcal{S}$ as references.}
\While{$\mathcal{S} \neq \emptyset$}{
\nl Randomly pick a place $c_i$ from $\mathcal{S}$
\nl $\mathbf{T} \leftarrow \mathcal{S}\text{.search}(c_i)$ \Comment{Search $\mathcal{S}$ for the $M$-most similar places to $c_i$.}
\nl $\mathcal{L} \leftarrow \mathcal{L}\cup \mathbf{T}$ \Comment{Append the $M$ identities to $\mathcal{L}$.}
\nl $\mathcal{S} \leftarrow \mathcal{S} \setminus \mathbf{T}$ \Comment{Remove from $\mathcal{S}$ all places present in $\mathbf{T}$.}
}
\caption{Index based mini-batch sampling}\label{algo_index}
\end{algorithm}
For the epoch that follows, the mini-batch sampler picks one tuple from $\mathcal{L}$ at each iteration, yielding in $M$ similar places. We then pick $K$ images from each place resulting in highly informative mini-batches of size $M \times K$. Qualitative results in section~\ref{ssec:qualitative} show the effectiveness of our approach in constructing informative mini-batches.
\vspace{5pt}
\noindent\textbf{Connection to proxy-based loss functions.} Deep metric learning techniques that employ the term ‘\textit{proxy}’, such as \cite{kim2020proxy, yang2022hierarchical, yao2022pcl}, are fundamentally different from our approach, in that, they learn proxies at the loss level, and optimize on the similarity between the proxies and individual samples in the mini-batch. However, learning proxies at the loss level forces them to be of the same dimensionality as the individual samples (e.g., $32$K if used to train NetVLAD). In contrast, we learn compact proxies independently of the loss function, and use them only to construct informative mini-batches.
\section{Experiments}\label{sec:exp}
\textbf{Dataset and Metrics.} GSV-Cities dataset~\cite{ali2022gsv} is used for training, it contains $65$k different places spread on numerous cities around the world, totalling $552$k images. For testing, we use the following $4$ benchmarks, Pitts250k-test~\cite{torii2013visual}, MSLS~\cite{warburg2020mapillary}, SPED~\cite{zaffar2021vpr} and Nordland~\cite{zaffar2021vpr} which contain, respectively, $8$K, $750$, $607$ and $1622$ query images, and $83$k, $19$k, $607$ and $1622$ reference images. We follow the same evaluation metric as~\cite{arandjelovic2016netvlad, warburg2020mapillary, zaffar2021vpr} where the recall@K is reported.
\noindent\textbf{Default Settings.}
In all experiments, we use ResNet-50\cite{he2016deep} as backbone network, pretrained on ImageNet~\cite{krizhevsky2012imagenet} and cropped at the last residual bloc; coupled with NetVLAD~\cite{arandjelovic2016netvlad} as a pooling layer, we chose NetVLAD because it's the most widely used pooling technique that showed consistent SOTA performance. Stochastic gradient descent (SGD) is utilized for optimization, with momentum $0.95$ and weight decay $0.0001$. The initial learning rate on $0.05$ is multiplied by $0.3$ after each $5$ epochs. We train for a maximum of $30$ epochs using images resized to $224\times 224$.
Unless otherwise specified, we use mini-batch containing $M=60$ places, each of which depicted by $K=4$ images ($240$ in total) and fix the output size of the proxy head $d'$ to $128$ when applicable.
\subsection{Effectiveness of GPM}
To demonstrate the effectiveness of out proposed method, we conduct ablation studies on $4$ different VPR benchmarks. We illustrate the effect of using our technique (GPM) alongside three different loss functions, namely, Contrastive loss \cite{hadsell2006dimensionality}, Triplet loss \cite{hermans2017defense} and Multi-Similarity loss~\cite{wang2019multi}.
For each loss function, we conducted four test scenarios (one on each line) as shown in Table~\ref{tab:my-table}. First, we train the network with randomly constructed batches without OHM or GPM (baseline \#1). In the second scenario, we add GPM to the first baseline and show the effect of globally informed sampling provided by our method. The results demonstrate that GPM alone can greatly improve performance of all three loss functions. For example, the triplet loss improved recall@1 (in absolute value) by $4.3, 4.1, 3.6$ and $3.4$ points on Pitts250k, MSLS, SPED and Nordland respectively, while Multi-Similarity loss improved by $5.4, 8.5, 13.9$ and $8.6$ points.
In the third scenario (baseline \#2), online hard mining (OHM) is used during the training without GPM. This consists of selecting the most informative pairs or triplets from randomly sampled mini-batches. The results show that OHM can improve performance over baseline~\#1, which is consistent with the existing literature~\cite{hermans2017defense}.
For the last scenario, we used GPM combined with baseline \#2 (i.e., mini-batches are sampled using GPM and then further exploited by OHM), results show that our technique (GPM) consistently outperform the baseline. For instance, contrastive loss improved recall@1 (in percentage points) by $5.9$ on Pitts250k, $4.7$ on MSLS, $10.1$ on SPED and $16.8$ on Nordland. Note that the relative performance boost introduced by GPM on Nordland is more than $100\%$ for both contrastive and triplet loss. The best overall performance is achieved using Multi-Similarity loss which boosted the recall@1 over baseline~\#2 by, respectively, $2.0, 4.6, 4.8$ and $9.4$ points on the four benchmarks. This ablation study highlights the effectiveness of GPM compared to randomly constructed mini-batches.
\begin{table}
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{l||cc|ccc|ccc|ccc|ccc}
\multirow{2}{*}{Loss function} & \multicolumn{2}{c|}{Hard mining} & \multicolumn{3}{c|}{Pitts250k-test} & \multicolumn{3}{c|}{MSLS-val} & \multicolumn{3}{c|}{SPED} & \multicolumn{3}{c}{Nordland} \\ \cline{2-15}
& OHM & GPM & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 \\ \hline\hline
\multirow{4}{*}{Triplet} & & & 77.0 & 90.0 & 93.6 & 67.7 & 79.2 & 82.4 & 53.7 & 69.5 & 75.8 & 8.4 & 16.3 & 20.6 \\
& & \ding{51} & 81.3 & 91.9 & 94.9 & 71.8 & 82.0 & 86.3 & 57.3 & 71.8 & 77.8 & 11.8 & 20.3 & 25.9 \\ \cline{2-15}
& \ding{51} & & 87.5 & 95.4 & 96.9 & 74.0 & 85.1 & 87.7 & 62.4 & 78.6 & 83.2 & 10.1 & 17.9 & 22.6 \\
& \ding{51} & \ding{51} & \textbf{90.0} & \textbf{96.4} & \textbf{97.6} & \textbf{77.6} & \textbf{88.0} & \textbf{90.4} & \textbf{71.3} & \textbf{83.7 } & \textbf{87.3} & \textbf{20.2 } & \textbf{33.2} & \textbf{38.8} \\ \hline\hline
\multirow{4}{*}{Contrastive} & & & 83.0 & 93.0 & 95.2 & 72.7 & 82.8 & 85.8 & 53.7 & 67.2 & 74.8 & 8.0 & 13.8 & 17.3 \\
& & \ding{51} & 88.8 & 95.2 & 96.8 & 79.0 & 85.8 & 88.5 & 67.7 & 79.2 & 83.4 & 20.8 & 33.9 & 41.5 \\ \cline{2-15}
& \ding{51} & & 84.5 & 94.0 & 95.9 & 74.6 & 84.7 & 87.8 & 63.4 & 76.9 & 82.5 & 14.6 & 25.2 & 31.2 \\
& \ding{51} & \ding{51} & \textbf{90.4} & \textbf{96.4} & \textbf{97.6} & \textbf{79.3} & \textbf{88.5} & \textbf{90.7} & \textbf{73.5} & \textbf{85.5} & \textbf{88.9} & \textbf{31.4} & \textbf{46.4} & \textbf{53.5} \\ \hline\hline
\multirow{4}{*}{Multi-Similarity} & & & 84.0 & 93.3 & 95.5 & 72.7 & 82.7 & 86.5 & 50.7 & 65.1 & 71.5 & 9.4 & 17.9 & 21.7 \\
& & \ding{51} & 89.4 & 96.0 & 97.3 & 81.2 & 89.1 & 90.9 & 64.6 & 76.4 & 80.6 & 18.0 & 30.1 & 36.0 \\ \cline{2-15}
& \ding{51} & & 89.5 & 96.3 & 97.6 & 77.4 & 87.2 & 90.1 & 74.6 & 86.8 & 89.9 & 29.1 & 43.3 & 50.2 \\
& \ding{51} & \ding{51} & \textbf{91.5} & \textbf{97.2} & \textbf{98.1 } & \textbf{82.0} & \textbf{90.4} & \textbf{91.4} & \textbf{79.4} & \textbf{90.6} & \textbf{93.2} & \textbf{38.5} & \textbf{53.9} & \textbf{60.7} \\ \hline
\end{tabular}
}
\vspace{6pt}
\caption{\small Ablation. We study the performance gain of three loss functions. For each loss, we train $4$ networks. $2$ of which are baselines (one with Online Hard Mining (OHM) and one without), and the other $2$ are to compare the performance gain introduced by our method (GPM).}
\label{tab:my-table}
\end{table}
These results make even more sense when we look at the curves on Figure~\ref{fig:valid_triplets} where we keep track of the fraction of informative pairs and triplets within the mini-batch. As training progresses, the network learns to identify most hard samples, making a large fraction of pairs and triplets in the mini-batch uninformative.
This is highlighted by the red-dotted curve in Figure~\ref{fig:valid_triplets} where the fraction of informative pairs and triplets rapidly decreases to less than $15\%$ after $15$K iterations. More importantly, when we use GPM, where mini-batches are constructed in such a way to incorporate highly informative pairs and triplets, the fraction of informative samples (blue line) stays at around $50\%$ even after $30$K iterations, which explains the performance boost in Table~\ref{tab:my-table}.
\begin{figure}[th]%
\centering
\subfigure[Triplet loss]{\label{fig:valid1}\includegraphics[width=0.32\textwidth]{images/valid_triplets.png}}
\subfigure[Contrastive loss]{\label{fig:valid2}\includegraphics[width=0.32\textwidth]{images/valid_pairs_.png}}
\subfigure[Multi-Similarity loss]{\label{fig:valid3}\includegraphics[width=0.32\textwidth]{images/valid_pairs_ms.png}}
\vspace{3pt}
\caption{\small Percentage of valid triplets/pairs per mini-batch during the training. Our technique (GPM) construct highly informative mini-batches, which in turn keeps the number of valid pairs/triplets higher during all the training phase.}
\label{fig:valid_triplets}
\end{figure}
\subsection{Mini-batch Size} The size of the mini-batch is a key factor in the performance of many pair and triplet based learning approaches. In this experiment, we investigate its impact by using Multi-Similarity loss with and without GPM on three benchmarks. Results are shown in Figure~\ref{fig:batch_size}, where we observe that the smaller the mini-batch size, the lower the performance.
Moreover, when comparing performance with and without GPM, the gap widens as the batch size decreases. This demonstrates that our method brings consistent performance improvements with a wide range of mini-batch sizes.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\linewidth]{images/batch_size_effect.png}
\vspace{4pt}
\caption{\small Impact of the mini-batch size when training with and without GPM. We report recall@1 on Pitts30k-test, MSLS and SPED respectively. The horizontal axis shows $M$ the number of places in the mini-batch. GPM is effective for a wide range of mini-batch sizes, with more impact when smaller mini-batches are used for training. This is of great importance when training hardware resources are limited.}
\label{fig:batch_size}
\end{figure}
\subsection{Memory and computational cost}
Since our method (GPM) requires to add a trainable branch to the network and a memory cache, we investigate the additional computation and memory cost by varying the dimensionality of the proxy head. For each configuration, we train the network for $20$ epochs and record the training time (including the time to build the index and construct mini-batches), the GPU memory required during the training, the size of the memory bank $\Omega$ (Cache size) and the recall@1 performance on Pitts30k-test.
We first train a baseline model without GPM, and compare against it. Note that for the GPU memory and Cache size, we report the amount of extra memory that was needed compared to the baseline.
Table~\ref{tab:table2} shows that the baseline model takes $1.93$ hours to finish $20$ training epochs and achieve a recall@1 of $86.6\%$. Since the baseline does not use GPM, there is no extra cache memory (cache size $= 0$).
We then run multiple experiments with GPM, by varying the dimensionality $d'$ of the proxy head (from $32$ to $1024$). The results show that there is a significant increase in recall@1 performance ($86.6\% \rightarrow 89.4\%$), and a \textit{negligible} amount of GPU and cache memory. For example, by using a proxy of dimension $d'=128$ (as in the above experiments), we end up with $2$MB of extra GPU memory for training $\mathcal{H}$ and $32$MB for the memory cache with \textit{practically} no extra training time. We also notice that proxy with higher dimensionality does not automatically translate to better performance (e.g. GPM with $d'=256$ yields better performance than $d'=1024$).
In particular, we run another experiment (the rightmost column in Table~\ref{tab:table2}) where, instead of using a proxy head to generate proxies, we save the NetVLAD representations into the cache (we populate $\Omega$ with $32$k-dimensional vectors) and apply global hard mining on them. We end up with $8.0$GB of extra cache memory, more than double the training time and, most importantly, worse recall@1 performance ($88.7\%$ compared to $89.4\%$ when using a $256$-d proxy head). This can be explained by the fact that using the NetVLAD representations results in mining the most difficult pairs, which is known to hurt performance if the dataset contains a certain amount of outliers~\cite{hermans2017defense}. This experiment shows that, even when memory and computation are not a concern, GPM is still a better choice for learning robust representations.
\begin{table}[th]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{l||c||cccccc||c}
& \begin{tabular}[c]{@{}c@{}}Baseline\\ (no GPM)\end{tabular} & \multicolumn{6}{c||}{\begin{tabular}[c]{@{}c@{}}Global Proxy-based Hard Mining\\ (GPM)\end{tabular}} & \begin{tabular}[c]{@{}c@{}}Global hard mining \\ without proxy\end{tabular} \\ \hline\hline
Dimensionality & 0 & 32 & 64 & 128 & 256 & 512 & 1024 & 32768 \\ \hline
Training time (hours) & 1.93 & 1.93 & 1.93 & 1.93 & 1.94 & 2.05 & 2.1 & 4.83 \\ \hline
GPU memory (GB) & 10.4 & +0.002 & +0.002 & +0.002 & +0.03 & +0.06 & +0.14 & +0.0 \\ \hline
Cache size (GB) & 0.0 & +0.008 & +0.016 & +0.032 & +0.064 & +0.128 & +0.256 & +8.0 \\ \hline
Recall@1 (\%) & 86.6 & 89.1 & 89.0 & 89.3 & 89.4 & 89.0 & 89.2 & 88.7 \\ \hline
\end{tabular}
}
\vspace{7pt}
\caption{\small Memory and computation cost for different dimensionalities of the proxy head, compared against the baseline (without GPM). We also compare against global mining without a proxy head, where the memory bank is filled with the high-dimensional NetVLAD representations.}
\label{tab:table2}
\end{table}
\subsection{Qualitative Results}\label{ssec:qualitative}
Our technique (GPM) relies on the similarity between proxies to form mini-batches comprising visually similar places. In this experiment, we use GPM to sample a mini-batch containing $6$ places ($M=6$) from a database of $65$k different places. Note that the probability of \emph{randomly} sampling $6$ similar places among $65$k is extremely low. We show in Figure~\ref{fig:batch1} a mini-batch of $6$ places sampled using GPM; all $6$ places are visually similar, containing similar textures and structures aligned in a similar manner. In Figures~\ref{fig:batch2}~and~\ref{fig:batch3} we visualize a subset of triplets and pairs mined using OHM on the same mini-batch sampled by GPM. Some triplets contain negatives that are visually extremely difficult to distinguish. This shows how GPM can ensure, to a certain degree, the presence of visually similar places at each training iteration, increasing the likelihood of hard pairs and triplets, which in turn helps learn robust representations.
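As an illustration of this sampling mechanism, the following sketch (hypothetical names; the actual implementation may instead query a prebuilt approximate nearest-neighbour index) forms a mini-batch of $M$ places by taking the nearest neighbours of a randomly chosen seed place in proxy space:
\begin{verbatim}
import numpy as np

def sample_minibatch(proxies, M=6, rng=None):
    """Form a mini-batch of M places: pick a random seed place,
    then take its M-1 nearest neighbours in proxy space.
    proxies: (num_places, d') array of L2-normalized proxies."""
    rng = rng or np.random.default_rng()
    seed = rng.integers(len(proxies))
    sims = proxies @ proxies[seed]        # cosine similarity to the seed
    return np.argsort(-sims)[:M]          # most similar first, seed included
\end{verbatim}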
\begin{figure}[thb]%
\centering
\subfigure[A mini-batch sampled with GPM]{\label{fig:batch1}\includegraphics[width=0.385\textwidth]{images/a_batch.png}}
\hfill
\subfigure[Valid triplets]{\label{fig:batch2}\includegraphics[width=0.31\textwidth]{images/triplets.png}}
\hfill
\subfigure[Valid pairs]{\label{fig:batch3}\includegraphics[width=0.2\textwidth]{images/pos_neg_pairs.png}}
\vspace{3pt}
\caption{\small (a) An example of a mini-batch containing $6$ places sampled from a dataset of $65$k places using GPM. Each place is depicted by $4$ images (a row). This highlights the ability of our technique to construct mini-batches containing similar places, which in turn increases the presence of hard pairs and triplets. (b) A subset of hard triplets generated from the mini-batch, each row consists of a triplet with the blue as anchor, green as the positive and red as the hard negative. (c) A subset of positive (green) and negative (red) pairs. All triplets and pairs have been mined in an online fashion from the mini-batch sampled by GPM.}
\label{fig:a_batch}
\end{figure}
\section{Conclusion}
\label{conclusion}
In this paper, we proposed a novel technique that employs compact proxy descriptors to sample highly informative mini-batches at each training iteration with negligible additional memory and computational costs. To do so, we add an auxiliary branch to the baseline network that generates compact place-specific descriptors, which are used to compute one proxy for each place in the dataset. The compactness of these proxies allows us to efficiently build a global index that gathers places into the same mini-batch based on the similarity of their proxies.
Our method proved to be very effective in keeping the fraction of informative pairs and triplets at a high level during the entire training phase, resulting in a substantial improvement in overall performance. Future work can focus on the architecture of the proxy head and on different ways of building the global index.
\vspace{5pt}
\noindent\textbf{Acknowledgement.} This work has been supported by The Fonds de Recherche du Québec Nature et technologies (FRQNT). We gratefully acknowledge the support of NVIDIA Corporation with the donation of a Quadro RTX 8000 GPU used for our experiments.
|
{
"arxiv_id": "2302.14224",
"language": "en",
"timestamp": "2023-03-01T02:05:40",
"url": "https://arxiv.org/abs/2302.14224",
"yymm": "2302"
} | \section{Introduction}
One of the most important scenarios of the next-generation wireless systems and standards (beyond 5G/6G) is extremely high mobility (e.g., high-speed railway systems, vehicle-to-infrastructure, and vehicle-to-vehicle).
It is well known that orthogonal frequency division multiplexing (OFDM), which has already been adopted in the 5G cellular systems, achieves near-optimal performance in time-invariant frequency selective channels.
However, due to the large Doppler frequency shifts in high-mobility scenarios, orthogonality between different subcarriers in the OFDM system is broken, leading to a drastic performance degradation \cite{b1}.
Therefore, many new waveforms have recently been investigated for fast time-varying channels.
Orthogonal chirp division multiplexing (OCDM) based on the discrete Fresnel transform (DFnT) outperforms OFDM in terms of bit error rate (BER) in multipath channels \cite{b2}, \cite{b3}.
Nevertheless, OCDM cannot achieve full diversity in general time-varying channels, since its diversity order depends on the delay-Doppler profile of the channel, whereas affine frequency division multiplexing (AFDM), based on the discrete affine Fourier transform (DAFT), can achieve full diversity.
Unlike the DFnT, the DAFT is a generalization of the discrete Fourier transform (DFT) and is characterized by two parameters that can be adapted to better cope with doubly selective channels \cite{b4,b5,b6,b01,b03,b02}.
In addition to the chirp-based multicarrier waveforms, the orthogonal time frequency space (OTFS) and orthogonal delay-Doppler division multiplexing (ODDM) were proposed in the delay-Doppler (DD) domain \cite{b7,b8,b11}.
Although OTFS shows superior performance to OFDM in time-varying channels \cite{b8}, \cite{b10}, unlike ODDM it does not have its own orthogonal transmit pulse in the DD domain.
Thus, ODDM outperforms OTFS in terms of out-of-band emission (OOBE) and BER by achieving perfect coupling between the modulated signal and the DD channels \cite{b11}.
Apart from designing waveforms in the conventional time-frequency (TF) domain and the DD domain, some modulation waveforms implemented in other domains have been proposed. Orthogonal time sequency multiplexing (OTSM) and orthogonal delay scale space (ODSS) multiplex information symbols in the delay-sequency domain and the delay-scale domain, respectively \cite{b12,b13,b14}. The underlying transforms of OTSM and ODSS are the Walsh-Hadamard transform (WHT) and the discrete Mellin transform (DMT), respectively. In particular, OTSM offers similar BER to OTFS but with lower complexity \cite{b13}. ODSS has better BER performance than OTFS and OFDM in wideband time-varying channels \cite{b14}.
To get pertinent candidate waveform suggestions for next-generation wireless communications, it is essential to compare the performance of various new waveforms more clearly.
First of all, we analyze the correlations between them from the perspective of their modulation domain and then obtain a unified framework by deeply discussing their system models.
After that, we focus on the analysis and comparison of the BER performance of each waveform.
The simulation results confirm our analysis of the intrinsic differences between these waveforms.
Finally, based on the preceding analyses and demonstrations, we further discuss the development prospects of the aforementioned waveforms and give candidate waveform suggestions.
\textit{Notations}: The following notations will be used in this paper: $a$, $\textbf{a}$, $\textbf{A}$ represent a scalar, vector, and matrix, respectively. $(\cdot)^T$ is the transpose operator, $(\cdot)^H$ is the Hermitian transpose operator, $\otimes$ is the Kronecker product, $ \operatorname{vec}(\boldsymbol{A}) $ is the column-wise vectorization of the matrix $ \boldsymbol{A} $, $ \operatorname{vec}_{M,N}^{-1}(\boldsymbol{r}) $ is the matrix formed by folding the vector $ \boldsymbol{r} $ into an $N \times M$ matrix column-wise, and $ \textbf{I} $ denotes the identity matrix.
\begin{figure*}[htbp]
\centerline{\includegraphics[width=1\textwidth]{fig2.pdf}}
\caption{Transformation relations between different domains along six dimensions (time, frequency, delay, Doppler, scale and sequency) with six transforms (DFT/IDFT, DMT/IDMT and WHT/IWHT).}
\label{fig2}
\end{figure*}
\section{Waveforms Overview and System Model}\label{s3}
In this section, we provide the overview and system model of the aforementioned waveforms including OFDM, OCDM, OTFS, AFDM, OTSM, ODDM, and ODSS.
We classify them according to their modulation domain, i.e., time-frequency, delay-Doppler, delay-sequency, and delay-scale domain.
As we know, the essence of waveform designs is to adopt some kind of mathematical transformation so that the information symbols can achieve orthogonality between different subchannels in the corresponding domain.
The transformation relations among different domains are shown in Fig. \ref{fig2}.
We can observe that there are three colors corresponding to three types among the seven domains, i.e., information-symbol domains, channel domains and transition domains.
Each domain can be transformed through special mathematical transforms.
For instance, OTFS multiplexes information symbols in the DD domain and transfers them from the DD domain to the TF domain via the inverse symplectic finite Fourier transform (ISFFT), i.e., a DFT along the delay axis and an inverse DFT (IDFT) along the Doppler axis.
Finally, the TF domain is converted to the delay-time domain via an IDFT along the frequency axis, and the signal is transmitted in the delay-time domain.
At the same time, we can see that the channel plays an important role in the above-mentioned transformation relations.
A doubly selective channel model with $P$ propagation paths can be written as
\begin{equation}\label{eq1}
h(\tau, \nu)=\sum_{i=1}^{P} h_{i} \delta\left(\tau-\tau_{i}\right) \delta\left(\nu-\nu_{i}\right),
\end{equation}
where $\delta(\cdot)$ is the Dirac delta function, $h_i$, $\nu_i$ and $\tau_i$ are the complex gain, Doppler shift, and the delay shift associated with the $ i $-th path, respectively. Let $\tau_{max}$ and $\nu_{max}$ denote the maximum delay spread and Doppler spread of the
doubly selective channel, respectively. Thus we have $0 \leq \tau_{i} \leq \tau_{\max }$ and $-\nu_{\max } \leq \nu_{i} \leq \nu_{\max }$.
The corresponding continuous time-varying channel impulse response function can be written as
\begin{equation}\label{eq2}
g(\tau, t)=\int_{\nu} h(\tau, \nu) \mathrm{e}^{j 2 \pi \nu(t-\tau)} d \nu.
\end{equation}
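A minimal numerical sketch of this channel model (our own illustration; integer sample delays and a circular structure, as obtained after CP removal, are assumed) builds the time-domain channel matrix $\textbf{H}$ used throughout this section:
\begin{verbatim}
import numpy as np

def time_domain_channel(N, gains, delays, dopplers, fs):
    """N x N time-domain matrix H of the doubly selective channel:
    gains h_i (complex), delays l_i (integer samples), Dopplers
    nu_i (Hz), fs the sampling rate; circular due to the CP."""
    H = np.zeros((N, N), dtype=complex)
    for h_i, l_i, nu_i in zip(gains, delays, dopplers):
        for n in range(N):
            # phase follows g(tau, t) = h exp(j 2 pi nu (t - tau))
            H[n, (n - l_i) % N] += h_i * np.exp(
                2j * np.pi * nu_i * (n - l_i) / fs)
    return H
\end{verbatim}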
In order to obtain the principles of waveform design adapted to doubly selective channels, we establish a unified framework based on the classified domains, which includes the transceiver and channel, as shown in Fig. \ref{fig1}.
From Fig. \ref{fig1}, we can easily distinguish the mathematical transforms used for modulation and demodulation of the seven waveforms.
Through this framework we can also assess each waveform's complexity and its compatibility with OFDM-based systems.
\subsection{Time-Frequency domain}
OFDM can be seen as a chirp-based modulation waveform with a linearly varying instantaneous frequency of zero. OCDM and AFDM are both based on chirp modulation waveforms. Therefore, we consider them as chirp-based waveforms and discuss them in time-frequency domain.
\subsubsection{OFDM}
In the case of OFDM, data symbols $\mathbf{X}$ can be written as
\begin{equation}
\mathbf{X}=[\mathbf{X}(0), \mathbf{X}(1), \ldots, \mathbf{X}(M-1)]^T,
\end{equation}
where $\mathbf{X}(m)=[X(m, 0), X(m, 1), \ldots, X(m, N-1)]^T $.
As shown in Fig. \ref{fig1}, $X(n, m)$ is the data symbol transmitted on the $m$-th subcarrier of the $n$-th OFDM symbol. The data symbols $\mathbf{X}=[X(0), X(1), \ldots, X(N-1)]^T$ transmitted on each OFDM symbol are fed into the IDFT block, and the resulting time-domain signal can be written as
\begin{equation}\label{eq7}
s[n]=\frac{1}{\sqrt{N}} \sum_{m=0}^{N-1} X[m] e^{j\frac{2 \pi}{N} n m},
\end{equation}
In matrix form, \eqref{eq7} can be rewritten as
\begin{equation}
\mathbf{s}=\mathbf{F}^H \mathbf{X},
\end{equation}
where $\mathbf{F}$ is the DFT matrix with entries $ e^{-j 2 \pi m n / N}/\sqrt{N} $.
After the addition of the cyclic prefix (CP), the signal is transmitted over the channel.
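As a minimal sketch of this transmitter (a unitary-IDFT convention is assumed; names are illustrative):
\begin{verbatim}
import numpy as np

def ofdm_modulate(X, cp_len):
    """s = F^H X (unitary IDFT), then prepend the cyclic prefix."""
    s = np.fft.ifft(X) * np.sqrt(len(X))
    return np.concatenate([s[-cp_len:], s])
\end{verbatim}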
\subsubsection{OCDM}
Let $ \boldsymbol{x} $ denote an $N\times 1$ vector of quadrature amplitude modulation (QAM) symbols. After the serial-to-parallel operation, an \textit{N}-point inverse DFnT (IDFnT) is performed to map $ \boldsymbol{x} $ to the time domain as
\begin{equation}\label{1}
s(n)=\frac{1}{\sqrt{N}} e^{j \frac{\pi}{4}} \sum_{m=0}^{N-1} x[m]\times\left\{\begin{array}{ll}
e^{-j \frac{\pi}{N}(n-m)^{2}} \quad N \equiv 0(\bmod 2) \\
e^{-j \frac{\pi}{N}\left(n-m-\frac{1}{2}\right)^{2}} N \equiv 1(\bmod 2).
\end{array}\right.
\end{equation}
The DFnT matrix can be decomposed using the equation $ \boldsymbol{\Phi}=\boldsymbol{\Theta_{2}}\boldsymbol{F}\boldsymbol{\Theta_{1}} $, where $ \boldsymbol{\Theta_{1}} $ and $ \boldsymbol{\Theta_{2}} $ are two diagonal matrices given by
\begin{equation}
\Theta_{1}(m)=e^{-j \frac{\pi}{4}} \times\left\{\begin{array}{ll}
e^{j \frac{\pi}{N} m^{2}} & N \equiv 0(\bmod 2) \\
e^{j \frac{\pi}{4 N}} e^{j \frac{\pi}{N}\left(m^{2}+m\right)} & N \equiv 1(\bmod 2)
\end{array}\right.
\end{equation}
and
\begin{equation}
\Theta_{2}(n)=\left\{\begin{array}{ll}
e^{j \frac{\pi}{N} n^{2}} & N \equiv 0(\bmod 2) \\
e^{j \frac{\pi}{N}\left(n^{2}-n\right)} & N \equiv 1(\bmod 2).
\end{array}\right.
\end{equation}
\eqref{1} can be rewritten in matrix form as
\begin{equation}
\mathbf{s}=\boldsymbol{\Theta_{1}}^{H} \mathbf{F}^{H} \boldsymbol{\Theta_{2}}^{H} \boldsymbol{x}=\boldsymbol{\Phi}^{H}\boldsymbol{x}.
\end{equation}
After transmission over the channel, discarding the CP and performing the \textit{N}-point DFnT, the received samples can be written in matrix form as
\begin{equation}
\boldsymbol{y}=\boldsymbol{\Theta_{2}} \mathbf{F} \boldsymbol{\Theta_{1}}\textbf{H}\boldsymbol{\Theta_{1}}^{H} \mathbf{F}^{H} \boldsymbol{\Theta_{2}}^{H}\boldsymbol{x}+\tilde{\boldsymbol{w}}=\textbf{H}_{\mathrm{eff}} \boldsymbol{x}+\tilde{\boldsymbol{w}},
\end{equation}
where $ \tilde{\boldsymbol{w}} \sim \mathcal{C N}\left(0, \sigma_{c}^{2} \textbf{I}\right) $ is an additive Gaussian noise vector and $ \textbf{H} $ is the matrix representation of the communications channel in the time domain.
As shown in Fig. \ref{fig1}, the DFnT can be implemented by DFT in three
steps:
\begin{itemize}
\item
multiplying the chirp phase $ \boldsymbol{\Theta_{1}} $,
\item
performing the DFT,
\item
multiplying the other chirp phase $ \boldsymbol{\Theta_{2}} $,
\end{itemize}
where $ \boldsymbol{\Theta_{1}} $ and $ \boldsymbol{\Theta_{2}} $ are diagonal matrices whose \textit{m}-th diagonal
entries are $ \boldsymbol{\Theta_{1}}(m) $ and $ \boldsymbol{\Theta_{2}}(m) $, respectively.
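The following sketch (our own illustration, restricted to the even-$N$ case of the expressions above) builds the DFnT matrix exactly as this chirp--DFT--chirp factorization:
\begin{verbatim}
import numpy as np

def dfnt_matrix(N):
    """Phi = Theta2 @ F @ Theta1 (even-N case of the text)."""
    n = np.arange(N)
    F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)
    Theta1 = np.diag(np.exp(-1j * np.pi / 4 + 1j * np.pi * n**2 / N))
    Theta2 = np.diag(np.exp(1j * np.pi * n**2 / N))
    return Theta2 @ F @ Theta1

# OCDM modulation of a symbol vector x: s = dfnt_matrix(N).conj().T @ x
\end{verbatim}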
\subsubsection{AFDM}\label{afdm}
Let $ \boldsymbol{x} $ denote an $N\times 1$ vector of QAM symbols. After the serial-to-parallel operation, an \textit{N}-point inverse DAFT (IDAFT) is performed to map $ \boldsymbol{x} $ to the time domain as
\begin{equation}\label{eq3}
s[n]=\frac{1}{\sqrt{N}} \sum_{m=0}^{N-1} x[m] e^{j 2 \pi\left(c_{1} n^{2}+\frac{1}{N} m n+c_{2} m^{2}\right)},
\end{equation}
where $c_1$ and $ c_2 $ are the AFDM parameters, and $ n =
0, \ldots , N-1 $. Then, a chirp-periodic prefix (CPP) of length $ L_{cp} $ is added, where $ L_{cp} $ is any integer greater than or equal to the maximum delay spread of the channel in samples. The prefix is
\begin{equation}
s[n]=s[N+n] e^{-j 2 \pi c_{1}\left(N^{2}+2 N n\right)}, \quad n=-L_{\mathrm{cp}}, \cdots,-1.
\end{equation}
We can notice that the CPP reduces to a CP whenever $ 2Nc_1 $ is an integer and \textit{N} is even.
\eqref{eq3} can be rewritten in matrix form as
\begin{equation}\label{eq4}
\mathbf{s}=\boldsymbol{\Lambda}_{c_{1}}^{H} \mathbf{F}^{H} \boldsymbol{\Lambda}_{c_{2}}^{H} \boldsymbol{x},
\end{equation}
where
\begin{equation}\label{eq5}
\boldsymbol{\Lambda}_{c_i}=\operatorname{diag}\left(e^{-j 2 \pi c_i n^{2}}, n=0,1, \ldots, N-1\right).
\end{equation}
After transmission over the channel, discarding the CPP and performing the \textit{N}-point DAFT, the received samples can be written in matrix form as
\begin{equation}
\boldsymbol{y}=\textbf{H}_{\mathrm{eff}} \boldsymbol{x}+\tilde{\boldsymbol{w}},
\end{equation}
where $ \tilde{\boldsymbol{w}} \sim \mathcal{C N}\left(0, \sigma_{c}^{2} \textbf{I}\right) $ is an additive Gaussian noise vector and $ \textbf{H}_{\mathrm{eff}}=\boldsymbol{\Lambda}_{c_{2}} \textbf{F} \boldsymbol{\Lambda}_{c_{1}} \textbf{H} \boldsymbol{\Lambda}_{c_{1}}^{\mathrm{H}} \textbf{F}^{\mathrm{H}} \boldsymbol{\Lambda}_{c_{2}}^{\mathrm{H}} $, $ \textbf{H} $ being the matrix representation of the communications channel in the time domain.
As shown in Fig. \ref{fig1}, the DAFT can be implemented by DFT in three steps:
\begin{itemize}
\item
multiplying the chirp phase $ \boldsymbol{\Lambda}_{c_{1}} $,
\item
performing the DFT,
\item
multiplying the other chirp phase $ \boldsymbol{\Lambda}_{c_{2}} $,
\end{itemize}
where $ \boldsymbol{\Lambda}_{c_{1}} $ and $ \boldsymbol{\Lambda}_{c_{2}} $ are diagonal matrices.
It is worth noting that the DFT and DFnT are two special cases of the DAFT.
When $ c_1 $ and $ c_2 $ are both zero or $ \frac{1}{2N} $, DAFT becomes DFT and DFnT, respectively.
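A minimal sketch of the DAFT matrix (our own illustration) makes the first special case explicit: setting $c_1=c_2=0$ reduces it to the DFT matrix:
\begin{verbatim}
import numpy as np

def daft_matrix(N, c1, c2):
    """A = Lambda_c2 @ F @ Lambda_c1; c1 = c2 = 0 recovers the DFT."""
    n = np.arange(N)
    F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)
    L1 = np.diag(np.exp(-2j * np.pi * c1 * n**2))
    L2 = np.diag(np.exp(-2j * np.pi * c2 * n**2))
    return L2 @ F @ L1

# AFDM modulation: s = daft_matrix(N, c1, c2).conj().T @ x
\end{verbatim}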
\begin{figure*}[htbp]
\centerline{\includegraphics[width=1\textwidth]{fig.pdf}}
\caption{Unified framework for seven waveforms, i.e., OFDM, OCDM, AFDM, OTFS, ODDM, OTSM and ODSS.}
\label{fig1}
\end{figure*}
\subsection{Delay-Doppler domain}
\subsubsection{OTFS}
The block diagram of OTFS modulation and demodulation is shown in Fig. \ref{fig1}. The information bits to be transmitted, after bit-to-symbol mapping, are multiplexed onto a discrete two-dimensional (2D) DD domain grid of size $\hat{N} \times \hat{M}$. The information symbols, $ x[k,l] $, are mapped from the discrete DD domain to the TF domain, $ X[n,m] $, by the ISFFT as follows:
\begin{equation}
X[n, m]=\frac{1}{\hat{N} \hat{M}} \sum_{l=0}^{\hat{M}-1} \sum_{k=0}^{\hat{N}-1} x[k, l] e^{j 2 \pi\left(\frac{n k}{\hat{N}}-\frac{m l}{\hat{M}}\right)},
\end{equation}
where $m \in\{0,1, \ldots, \hat{M}-1\}, n \in\{0,1, \ldots, \hat{N}-1\}$.
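A compact sketch of this transform (our own illustration, following the equation above with its $1/(\hat{N}\hat{M})$ normalization):
\begin{verbatim}
import numpy as np

def isfft(x):
    """ISFFT of the DD grid x[k, l] (shape Nhat x Mhat): IDFT along
    the Doppler axis k, DFT along the delay axis l."""
    Nhat, Mhat = x.shape
    # np.fft.ifft already divides by Nhat; fft along l is unnormalized
    return np.fft.ifft(np.fft.fft(x, axis=1), axis=0) / Mhat
\end{verbatim}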
The Heisenberg transform converts the 2D TF data, $X[n, m]$, into a 1D continuous time series, $s(t)$, given by
\begin{equation}\label{eq10}
s(t)=\sum_{m=0}^{\hat{M}-1} \sum_{n=0}^{\hat{N}-1} X[n, m] e^{j 2 \pi n \Delta f_{\rm OTFS}(t-m T)} g_{\mathrm{tx}}(t-m T),
\end{equation}
where $g_{\mathrm{tx}}(t)$ is the transmit pulse-shaping function, and $T$ and $ \Delta f_{\rm OTFS} $ represent the time- and frequency-domain sample intervals of the time-frequency plane.
\subsubsection{ODDM}
ODDM transmits symbols $ x[k,l] $ on a discrete 2D delay-Doppler domain grid of size $\hat{N} \times \hat{M}$, as shown in Fig. \ref{fig1}. The CP-free waveform of the DD plane multicarrier modulation is given by
\begin{equation}\label{eq8}
s(t)=\sum_{k=0}^{\hat{M}-1} \sum_{l=0}^{\hat{N}-1} x[k, l] \check{g}_{t x}\left(t-k \frac{T}{\hat{M}}\right) e^{j 2 \pi \frac{l}{\hat{N} T}\left(t-k \frac{T}{\hat{M}}\right)},
\end{equation}
where $ \check{g}_{t x}(t) $ is the transmit pulse. In particular, a DD plane orthogonal pulse $ \check{g}_{t x}(t) $ is adopted.
ODDM symbols are sampled at a rate of $ \frac{1}{T} $, and staggered at an interval of $ \frac{T}{\hat{M}} $. Just like OFDM, ODDM uses the IDFT to obtain the time-domain
discrete samples of the $ k $-th ODDM symbol as
\begin{equation}
s[k, \dot{l}]=\sum_{l=0}^{\hat{N}-1} x[k, l] e^{j 2 \pi \frac{\dot{l}l}{\hat{N}}}, \dot{l}=0, \ldots \hat{N}-1,
\end{equation}
where $ \dot{l} $ represents the index of the time-domain discrete samples spaced by $ T $.
To stagger the $ \hat{M} $ ODDM symbols at an interval of $ \frac{T}{\hat{M}} $, the aforementioned $ \hat{N} $ time-domain discrete
samples need to be upsampled by $ \hat{M} $ to obtain $ \hat{M}\hat{N} $ discrete
samples, given by
\begin{equation}
\begin{array}{r}
\mathbf{s}[k]=[\overbrace{0, \ldots, 0}^k, s[k, 0], \overbrace{0, \ldots, 0}^{\hat{M}-1}, s[k, 1], \overbrace{0, \ldots, 0}^{\hat{M}-1}, \\
\ldots, \overbrace{0, \ldots, 0}^{\hat{M}-1}, s[k, \hat{N}-1], \overbrace{0, \ldots, 0}^{\hat{M}-k-1}].
\end{array}
\end{equation}
After the parallel-to-serial (P/S) operation and pulse shaping, the time-domain signal $ s(t) $ is transmitted over the channel. The upsampling, P/S and pulse-shaping operations are collectively named staggered multitone (SMT) modulation, as shown in Fig. \ref{fig1}; a sketch of the staggering step is given below.
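The upsampling/staggering step can be sketched as follows (our own illustration; \texttt{S[k,:]} holds the $\hat{N}$ IDFT outputs of the $k$-th ODDM symbol):
\begin{verbatim}
import numpy as np

def oddm_stagger(S):
    """Upsample-and-stagger: symbol k is offset by k samples on the
    Mhat-times-upsampled grid, one sample every Mhat positions."""
    Mhat, Nhat = S.shape
    s = np.zeros(Mhat * Nhat, dtype=complex)
    for k in range(Mhat):
        s[k::Mhat] = S[k]
    return s
\end{verbatim}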
Let $ a(t) $ be a time-symmetric, real-valued square-root Nyquist pulse with $ \int_{-\infty}^{+\infty}|a(t)|^{2} d t= \frac{1}{\hat{N}}$.
Based on the pulse shaping method for ODDM modulation proposed in \cite{b11}, \eqref{eq8} can be rewritten as
\begin{equation}\label{eq9}
s(t)=\sum_{k=0}^{\hat{M}-1} \sum_{\dot{l}=0}^{\hat{N}-1} \sum_{l=0}^{\hat{N}-1} x[k, l] e^{j 2 \pi \frac{\dot{l}l}{\hat{N}}} a\left(t-k \frac{T}{\hat{M}}-\dot{l} T\right).
\end{equation}
The transmit pulse of ODDM $ u(t) $ can be denoted as \cite{b11}
\begin{equation}
u(t)=\sum_{\dot{l}=0}^{\hat{N}-1} a(t-\dot{l} T).
\end{equation}
Therefore, \eqref{eq9} can be rewritten as
\begin{equation}\label{eq11}
s(t)=\sum_{k=0}^{\hat{M}-1} \sum_{l=0}^{\hat{N}-1} x[k, l] u\left(t-k \frac{T}{\hat{M}}\right) e^{j 2 \pi \frac{l}{\hat{N} T}\left(t-k \frac{T}{\hat{M}}\right)}.
\end{equation}
Finally, considering the prepended CP, the definition of $u(t)$ can be extended to
\begin{equation}
u_{c p}(t)=\sum_{\dot{l}=-1}^{\hat{N}-1} a(t-\dot{l} T) .
\end{equation}
Then, the CP-included ODDM waveform becomes
\begin{equation}
s_{c p}(t)=\sum_{k=0}^{\hat{M}-1} \sum_{l=0}^{\hat{N}-1} x[k,l] u_{c p}\left(t-k \frac{T}{\hat{M}}\right) e^{j 2 \pi \frac{l}{\hat{N} T}\left(t-k \frac{T}{\hat{M}}\right)}.
\end{equation}
By comparing \eqref{eq10} with \eqref{eq11}, we can observe that the transmit pulse in OTFS modulation is a rectangular pulse, a special case of the pulse shaping in ODDM modulation.
\subsection{Delay-Sequency domain}
\textbf{\textit{OTSM: }}OTSM is a single-carrier modulation scheme that offers similar BER to OTFS. The information symbols are multiplexed in the delay-sequency domain using the WHT. Note that sequency is the number of zero-crossings per unit interval. Since the WHT requires only additions and subtractions and no multiplications, the OTSM modulation/demodulation complexity is significantly lower than that of OFDM and OTFS \cite{b12}, \cite{b13}.
Let $ \boldsymbol{x}, \boldsymbol{y} \in \mathbb{C}^{\hat{N} \hat{M} \times 1} $ be the transmitted and received information symbols. The modulation and demodulation system model of OTSM is shown in Fig. \ref{fig1}. At the transmitter, the information symbols $ \boldsymbol{x}=\left[\boldsymbol{x}_{0}^{\mathrm{T}}, \cdots, \boldsymbol{x}_{\hat{M}-1}^{\mathrm{T}}\right]^{\mathrm{T}} $ are split into vectors $ \boldsymbol{x}_{m} \in \mathbb{C}^{\hat{N} \times 1}, m=0, \ldots, \hat{M}-1 $. The symbol vectors
are arranged into a delay-sequency matrix $ \mathbf{X} \in \mathbb{C}^{\hat{M} \times \hat{N}} $
\begin{equation}
\mathbf{X}=\left[\boldsymbol{x}_{0}, \boldsymbol{x}_{1}, \ldots, \boldsymbol{x}_{\hat{M}-1}\right]^{\mathrm{T}},
\end{equation}
where the matrix column and row indices represent the delay and sequency indices of the delay-sequency grid, respectively. Then, an $ \hat{N} $-point inverse WHT (IWHT) is applied to each of these symbol vectors (rows) to transform them to the delay-time domain:
\begin{equation}
\tilde{\mathbf{X}}=\left[\tilde{\boldsymbol{x}}_{0}, \tilde{\boldsymbol{x}}_{1}, \ldots, \tilde{\boldsymbol{x}}_{\hat{M}-1}\right]^{\mathrm{T}}=\mathbf{X} \mathbf{W}_{\hat{N}},
\end{equation}
where $ \mathbf{W}_{\hat{N}} $ is the normalized $ \hat{N} $-point WHT matrix.
The matrix $ \tilde{\mathbf{X}} $ contains the delay-time samples which are column-wise vectorized to obtain the
time-domain samples $ \mathbf{s} \in \mathbb{C}^{\hat{N} \hat{M} \times 1} $ to be transmitted into the physical channel
\begin{equation}\label{2}
\mathbf{s}=\operatorname{vec}(\tilde{\mathbf{X}}).
\end{equation}
The transmitter operation \eqref{2} can be expressed in the
simple matrix form as
\begin{equation}\label{3}
\mathbf{s}=\mathbf{P} \left(\mathbf{I}_{\hat{M}} \otimes \mathbf{W}_{\hat{N}}\right) \boldsymbol{x},
\end{equation}
where $ \mathbf{P} $ is the row-column interleaver matrix.
The transmitter operation in \eqref{3} can be simplified as
\begin{equation}
\mathbf{s}=\left(\mathbf{W}_{\hat{N}} \otimes \mathbf{I}_{\hat{M}}\right) (\mathbf{P} \boldsymbol{x}).
\end{equation}
A CP of length $l_{\max }$ is added to the time-domain samples, which are pulse shaped, digital-to-analog converted and transmitted into the wireless channel as $s(t)$.
At the receiver, the received time-domain signal $ r(t) $ is processed via analog-to-digital conversion and CP removal, yielding time-domain vector $\mathbf{r} \in \mathbb{C}^{\hat{N} \hat{M} \times 1}$. The received time domain samples $\mathbf{r}$ are folded into the matrix $\tilde{\mathbf{Y}}$ column-wise as
\begin{equation}
\tilde{\mathbf{Y}}=\left[\tilde{\boldsymbol{y}}_0, \tilde{\boldsymbol{y}}_1, \ldots, \tilde{\boldsymbol{y}}_{\hat{M}-1}\right]^{\mathrm{T}}=\operatorname{vec}_{\hat{M}, \hat{N}}^{-1}(\mathbf{r}).
\end{equation}
The received delay-sequency information symbols are obtained by taking an $ \hat{N} $-point WHT of the rows of received delay-time matrix $ \tilde{\mathbf{Y}} $ as
\begin{equation}
\mathbf{Y}=\left[\boldsymbol{y}_0, \boldsymbol{y}_1, \ldots, \boldsymbol{y}_{\hat{M}-1}\right]^{\mathrm{T}}=\tilde{\mathbf{Y}} \mathbf{W}_{\hat{N}}.
\end{equation}
The receiver operation can be rewritten in matrix form as
\begin{equation}
\boldsymbol{y}=\left(\mathbf{I}_{\hat{M}} \otimes \mathbf{W}_{\hat{N}}\right) \left(\mathbf{P}^{\mathrm{T}} \mathbf{r}\right),
\end{equation}
where $\boldsymbol{y}=\left[\boldsymbol{y}_0^{\mathrm{T}}, \cdots, \boldsymbol{y}_{\hat{M}-1}^{\mathrm{T}}\right]^{\mathrm{T}}$.
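A minimal end-to-end sketch of the OTSM transmitter (our own illustration; $\hat{N}$ is assumed to be a power of two so that a Hadamard matrix exists):
\begin{verbatim}
import numpy as np
from scipy.linalg import hadamard

def otsm_modulate(x, Nhat, Mhat):
    """Fold x into the Mhat x Nhat delay-sequency matrix, apply the
    normalized Nhat-point WHT row-wise (the WHT is involutory, so it
    equals the IWHT), and vectorize column-wise."""
    W = hadamard(Nhat) / np.sqrt(Nhat)
    X = x.reshape(Mhat, Nhat)          # rows are the symbol vectors x_m
    return (X @ W).flatten(order='F')  # column-wise vec of X-tilde
\end{verbatim}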
\subsection{Delay-Scale domain}
\textbf{\textit{ODSS: }}The underlying Mellin transform of ODSS enjoys a scale-invariance property, i.e., the Mellin transform of the scaled signal $ \sqrt{a} x(a \alpha), a>0, \alpha>0 $, is the same as that of the original signal, $ x(\alpha) $, except for a phase shift. The Mellin transform of a signal $ x(\alpha), \alpha>0 $, is defined by
\begin{equation}
\mathcal{M}_{x}(\beta) \triangleq \int_{0}^{\infty} \frac{1}{\sqrt{\alpha}} x(\alpha) e^{j 2 \pi \beta \log (\alpha)} d \alpha,
\end{equation}
where $ \alpha $ is the scale variable, and $ \beta \in \mathbb{R} $ is the Mellin variable. The inverse Mellin transform is given by
\begin{equation}
x(\alpha) \triangleq \frac{1}{\sqrt{\alpha}} \int_{-\infty}^{\infty} \mathcal{M}_x(\beta) e^{-j 2 \pi \beta \log (\alpha)} d \beta, \alpha>0.
\end{equation}
The information bits are multiplexed onto a discrete 2D Mellin-Fourier domain of size $M_{\text {tot }}=\sum_{n=0}^{\hat{N}-1} M(n)$, where $ q $ is the ratio of the geometric sampling in the scale domain and $M(n)=\left\lfloor q^n\right\rfloor$. ODSS maps the data symbols, $\{x[k, l]: k=0,1, \ldots, \hat{N}-1,\; l=0,1, \ldots, M(k)-1\}$, in the
discrete Mellin-Fourier space to the 2D sequence, $X[n, m]$, in the delay-scale domain by taking an inverse discrete Mellin transform (IDMT) along the scale axis and a discrete Fourier transform along the delay axis, as follows:
\begin{equation}
X[n, m]=\frac{q^{-n / 2}}{\hat{N}} \sum_{k=0}^{\hat{N}-1} \frac{\sum_{l=0}^{M(k)-1} x[k, l] e^{j 2 \pi\left(\frac{m l}{M(k)}-\frac{n k}{\hat{N}}\right)}}{M(k)}.
\end{equation}
The ODSS modulator converts the 2D delay-scale domain data, $X[n, m]$, into a 1D continuous time series, $s(t)$, given by
\begin{equation}
s(t)=\sum_{n=0}^{\hat{N}-1} \sum_{m=0}^{M(n)-1} X[n, m] q^{n / 2} g_{\mathrm{tx}}\left(q^n\left(t-\frac{m}{q^n W}\right)\right),
\end{equation}
where $ g_{tx}(t) $ is the transmit pulse-shaping function and $W$ is the signal bandwidth.
\section{Simulation Results}\label{s4}
In this section, we provide some simulation results to assess the performance of the aforementioned waveforms. The complex gains $ h_i $ are generated as independent complex Gaussian random variables with zero mean and $ 1/P $ variance. BER values are obtained using $ 10^5 $ different channel realizations.
Fig. \ref{fig3} shows the simulated BER performance of these waveforms with the parameter settings $ f_c =4\,\rm GHz $, $ P=3 $, $ \Delta f=2\,\rm kHz $, $ \Delta f_{\rm OTFS,OTSM}=32\,\rm kHz $, $ \nu_{max}=4\,\rm kHz $ (corresponding to a speed of 1080 $ \rm km/h $), $ N=256 $, $ \hat{N}_{\rm OTFS,OTSM}=16 $, $ \hat{M}_{\rm OTFS,OTSM}=16 $, ensuring that the same resources are occupied by all the considered waveforms; 4QAM and minimum mean square error (MMSE) detection are used.
Firstly, we can observe that the performance of OFDM is the worst in the high-mobility scenarios.
This is mainly due to large Doppler frequency shifts and the loss of orthogonality among different subcarriers, resulting in inter-carrier interference.
Then, OCDM performs better than OFDM owing to its better path-separation capability.
OTFS outperforms OFDM and OCDM since OTFS operates in the DD coordinate system where all modulated symbols experience the same channel gain, and hence OTFS is able to extract the full channel diversity under a limited signal-to-noise ratio (SNR).
Moreover, we can see that AFDM and OTSM have performance identical to OTFS, because both maintain good subcarrier orthogonality in the domains in which they carry information symbols.
Due to limited space, here we only discuss ODDM and ODSS qualitatively, without giving simulation results.
As described in \cite{b11,b14}, ODDM outperforms OTFS by achieving perfect coupling between the modulated signal and the DD channel, and ODSS has better performance than OTFS in wideband time-varying channels.
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.6\textwidth]{fig3-eps-converted-to.pdf}}
\caption{BER performance of OFDM, OCDM, OTFS, AFDM, and OTSM in a three-path channel using MMSE detection.}
\label{fig3}
\vspace{-0.2em}
\end{figure}
In order to meet the demands of next-generation wireless communications in high-mobility scenarios, it is important to focus on the BER performance.
On the other hand, the waveforms selected for 6G need to have good compatibility with existing communication systems.
Thus, in the sequel, we will emphasize three waveforms (i.e., AFDM, OTFS, OTSM) based on their outstanding BER performance.
From Fig. \ref{fig1} and Fig. \ref{fig3}, we observe a trade-off in OTSM: it has good BER performance but poor compatibility.
It is known that OTFS and AFDM have good compatibility with the existing OFDM-based systems.
OTFS only needs to add a precoder (ISFFT) at the transmitter of the existing OFDM system.
AFDM needs to add two matrix-multiply operations ($ \boldsymbol{\Lambda}_{c_{1}} $ and $ \boldsymbol{\Lambda}_{c_{2}} $) before and after the DFT, respectively, which is the definition of DAFT.
In conclusion, with the best BER performance and good compatibility with OFDM, AFDM can satisfy the requirements of high-mobility scenarios in next-generation wireless communication systems.
Besides, in our previous research, we proved that AFDM is currently the only waveform mathematically proven to achieve full diversity with multiple antennas in doubly selective channels (without introducing additional precoding procedures), and that it has advantages over OTFS in terms of pilot overhead \cite{b01}, \cite{b03}.
Furthermore, as mentioned in Section \ref{afdm}, the modulation/demodulation complexity of AFDM is almost as low as that of OFDM.
In short, in terms of BER performance, compatibility and complexity, AFDM is the best choice for high-mobility communications in future wireless systems.
\section{Conclusion and Prospect}\label{s5}
The existing OFDM-based waveforms cannot provide the physical-layer foundation needed to serve the high-mobility scenarios of next-generation wireless communications.
The basic reason is that these waveforms are designed to match frequency-selective channels and do not account for the doubly selective characteristics of fast time-varying channels.
The waveforms reviewed in this paper are all designed for doubly selective channels, although in the different domains discussed above.
We establish a unified framework to provide a comprehensive performance comparison.
Based on the simulated BER results and the analysis of compatibility and complexity, we conclude by suggesting AFDM as the candidate waveform.
The recent research on AFDM modulation will greatly promote the development of physical layer wireless transmission technology.
This waveform is applicable to next-generation multiple access (NGMA) for future wireless communications, and can be widely used in integrated sensing and communications (ISAC) scenarios thanks to its high-precision time and frequency resolution.
|
{
"arxiv_id": "2302.14212",
"language": "en",
"timestamp": "2023-03-01T02:05:23",
"url": "https://arxiv.org/abs/2302.14212",
"yymm": "2302"
} | \section{Introduction}
The possibility of controlling light propagation is always desirable, since it expands the prospects for technological developments. In this context, a sophisticated class of localized waves (LWs) \cite{Localized-Waves08}\cite{Localized-Waves13}\cite{Zamboni-Rached2008} named Frozen Waves (FWs) \cite{Zamboni-Rached2004}\cite{Zamboni-Rached2005} - which are composed of suitable superpositions of Bessel beams - has been constantly studied and improved, providing impressive results in optics and acoustics and being experimentally verified \cite{Zamboni-Rached2006}\cite{Zamboni-Rached2009}\cite{Zamboni-Rached2010}\cite{Corato Zanarella}\cite{Vieira2012}\cite{Vieira2015}. More specifically, the FW method allows us to manage the longitudinal intensity pattern of the resulting beam through a suitable superposition of co-propagating Bessel beams.
In spite of the method being originally developed for homogeneous media, some advances have been accomplished for inhomogeneous cases \cite{Lourenco-Vittorino}, allowing the design of a paraxial FW-type beam that, when propagating through a multilayered nonabsorbing medium, is able to compensate the effects of inhomogeneity, providing the desired structured optical beam in the last medium.
In this paper, we extend the aforementioned methodology and propose an approximate analytical method to obtain a highly non-paraxial beam with azimuthal polarization that, at normal incidence on an absorbing stratified medium, will provide in the last semi-infinite absorbing layer a structured beam endowed with a predefined micrometer intensity pattern.
\section{A useful approximation for the transverse wave number of a Bessel beam in an absorbing medium}
First of all, before presenting our theoretical methodology, a brief clarification is necessary regarding an approximation adopted for the beam solutions we are going to work with.
Let us consider a linear, homogeneous absorbing medium, with complex refractive index $n=n_r+in_i$, and a scalar wave field with azimuthal symmetry $\Psi = \textrm{exp}(-i \omega t)\psi(\rho,z)$, where $\psi$ obeys the Helmholtz equation $\partial^2\psi/\partial\rho^2 + (1/\rho)\partial\psi/\partial\rho + \partial^2\psi/\partial z^2 + n^2k_0^2\psi=0$, where $k_0 = \omega/c$ and $c$ is the light speed.
The zero-order Bessel beam solution in this case is given by
\begin{equation}
\psi=J_0(h\rho)\exp(i\beta z)
\end{equation}
with
\begin{equation}
\beta^2+h^2=n^2k_0^2
\label{relacao_dispersao}
\end{equation}
where $h$ and $\beta$ are the transverse and longitudinal wavenumbers, respectively.
By writing the complex longitudinal wavenumber as $\beta=\beta_r+i\beta_i$, we have from Eq.(\ref{relacao_dispersao}) that
\begin{equation}
h=\sqrt{n^2k_0^2-\beta^2}=\sqrt{(n_r^2-n_i^2)k_0^2-(\beta_r^2-\beta_i^2)+i(2n_rn_ik_0^2-2\beta_r\beta_i)}
\label{h}
\end{equation}
Now, by demanding that $h$ is real, we must have in Eq.(\ref{h}) that $2n_rn_ik_0^2=2\beta_r\beta_i \rightarrow \beta_i=n_r n_ik_0^2/\beta_r$. If we assume $\beta_r=bn_rk_0$, with $-1\leq b\leq 1$, Eq.$(\ref{h})$ can be written as
\begin{equation}
h=\sqrt{n_r^2k_0^2-b^2n_r^2k_0^2+\left(\frac{1}{b^2}-1\right)n_i^2k_0^2}
\label{h(b)}
\end{equation}
The ratio between the magnitudes of the third and second terms of the radicand is $\ll1$ if
\begin{equation}
\frac{\left(\frac{1}{b^2}-1\right)n_i^2}{b^2n_r^2}\ll1 \,\,\, \Rightarrow \frac{b^2}{\sqrt{1-b^2}}\gg\frac{n_i}{n_r} \,\,\, \Rightarrow \frac{\beta_r^2}{\sqrt{n_r^2k_0^2-\beta_r^2}}\gg n_ik_0
\label{condicao}
\end{equation}
and, in such cases, we can write
\begin{equation}
h\approx\sqrt{n_r^2k_0^2-b^2n_r^2k_0^2}=\sqrt{n_r^2k_0^2-\beta_r^2}
\label{h_aproximado}
\end{equation}
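As a quick numerical check of this approximation (our own illustration; the parameter values are arbitrary), one can compare Eq.(\ref{h(b)}) with Eq.(\ref{h_aproximado}):
\begin{verbatim}
import numpy as np

def h_exact(n_r, n_i, b, k0=1.0):
    beta_r = b * n_r * k0
    beta_i = n_r * n_i * k0**2 / beta_r     # choice that makes h real
    return np.sqrt((n_r**2 - n_i**2) * k0**2 - beta_r**2 + beta_i**2)

def h_approx(n_r, n_i, b, k0=1.0):
    return np.sqrt((n_r * k0)**2 - (b * n_r * k0)**2)

# e.g. n_r=1.5, n_i=3e-3, b=0.97: relative error of order 1e-6
print(h_exact(1.5, 3e-3, 0.97) / h_approx(1.5, 3e-3, 0.97) - 1)
\end{verbatim}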
The transverse wavenumbers of the Bessel beams considered in this work are particularly important because, since we consider normal incidence on the interfaces of the stratified medium, they are conserved and thus determine the longitudinal wavenumbers of the Bessel beams in every layer of that medium.
In this context, the approximation (\ref{h_aproximado}) is very useful because, as we will see, it makes it possible for the structured optical beams obtained through our method to be described by analytical solutions of Maxwell's equations, without the need for numerical solutions/simulations. For this reason, in every layer of the stratified medium we are dealing with, we will verify that\footnote{Such a condition is satisfied in most cases where $n_i\ll n_r$.}
\begin{equation}
\frac{\beta_{rm}^2}{\sqrt{n_{rm}^2k_0^2-\beta_{rm}^2}}\gg n_{im}k_0
\label{condicao1}
\end{equation}
where $\beta_{rm}=Re(\sqrt{n_m^2\omega^2/c^2-h^2})$ is the longitudinal wavenumber of the Bessel beam in the $m$-th layer, whose refractive index, $n_m$, has real and imaginary parts $n_{rm}$ and $n_{im}$, respectively.
\section{The method}
Here, we present the method by considering a scalar field which, in the next section, will be used in obtaining the desired results for an azimuthally polarized beam.
Let us consider an absorbing stratified medium with M layers, the first and the last being semi-infinite. We wish to construct an incident scalar wave field such that in the last medium we obtain a non-paraxial microstructured scalar beam whose longitudinal intensity pattern can be chosen on demand.
In order to get this result, we first need to describe the characteristics of the desired beam, $\psi^T$, transmitted to the last medium of the stratified structure. To simplify the notation, we will use the subscript $1$ ($\beta_1$, $k_1$ and $n_1$) for the longitudinal wavenumber, wavenumber and refractive index in the first medium, respectively, while for the last medium we will use simply $\beta$, $k$ and $n$, with the subscripts $r$ and $i$ denoting real and imaginary parts when necessary.
Let us consider as an approximated\footnote{Here we will use the approximate form of the transverse wavenumber, given by eq.(\ref{h_aproximado})} solution to the scalar wave equation a continuous superposition of zero-order Bessel beams over the longitudinal wavenumber $\beta_r$:
\begin{equation}
\begin{array}{clr}
\psi^T(\rho,z) & =\int_{-k_r}^{k_r}S(\beta_r)J_0\left(\rho\sqrt{(n_r^2-n_i^2)k_0^2-\left[\beta_r^2-\left(\frac{n_r n_i k_0^2}{\beta_r}\right)^2\right]}\,\,\right)
\exp(i\beta_r z)\exp\left(- \frac{n_rn_ik_0^2}{\beta_r} z\right) \textrm{d}\beta_r \\
\\
& \approx \, \exp(-\bar{\beta}_i z)\int_{-k_r}^{k_r}S(\beta_r)J_0\left(\rho\sqrt{k_r^2-\beta_r^2}\,\right)
\exp(i\beta_rz)\,\textrm{d}\beta_r \,\, ,
\label{psi_t}
\end{array}
\end{equation}
where $k_0 = \omega/c$, $k_r=n_r k_0$, and $\beta_r$ is the real part of the complex longitudinal wavenumber in medium $M$, i.e., $\beta=\beta_r+i\beta_i$, whose imaginary part $\beta_{i}=k_0^2 n_{r}n_{i}/\beta_{r}$ is responsible for an exponential decay along the propagation direction. The approximation in the integral solution (\ref{psi_t}) was made by considering the approximate form of the transverse wavenumber, Eq.(\ref{h_aproximado}), and also by considering $\beta_{i} \approx \bar{\beta}_{i}=k_0^2\frac{n_{r}n_{i}}{Q}$, where the parameter $Q=an_r k_0$ $(0<a<1)$, as we will see later, corresponds to the value of $\beta_r$ at which the center of the spectrum $S(\beta_r)$ is located.
Let us also consider the spectrum as given by the following Fourier series
\begin{equation} S(\beta_r)=\sum_{m=-\infty}^{\infty} A_m \exp\left(\frac{i2m\pi}{K_r}\beta_r\right)
\label{sbr}
\end{equation}
with
\begin{equation} A_m= \frac{1}{K_r}F\left(-\frac{2m\pi}{K_r}\right)\,\, , \,\,\, K_r=2k_r
\label{am}
\end{equation}
and where $F(\cdot)$ is a function of our choice, as explained below.
Due to Eqs. $(\ref{sbr})$ and ($\ref{am}$), it is possible to show \cite{Zamboni-Rached2008,Zamboni-Rached17} that $|\psi^T(\rho=0,z)|^2\approx \exp(-2\bar{\beta}_{i}z)|F(z)|^2$. Now, we choose the so-called morphological function $F(z)$ as
\begin{equation}
F(z)=f(z)\exp({\bar{\beta}}_i z)\exp(iQz)
\label{morfologica}
\end{equation}
where the first factor, $f(z)$, provides the desired beam intensity profile, the second, $\exp({\bar{\beta}}_i z)$, endows the beam with resistance to attenuation, and the third, $\exp(iQz)$, shifts the spectrum so that $S(\beta_r)$ takes negligible values for $\beta_r<0$, guaranteeing only forward-propagating Bessel beams in the superposition ($\ref{psi_t}$). Actually, we will impose an even more restrictive condition on $S(\beta_r)$, namely
\begin{equation} S(\beta_r)\approx0 \,\,\, \textrm{for}\,\,\, \beta_r<\sqrt{k_r^2-k_1^2} \,, \label{cond_betar}\end{equation}
because $\beta_1=\sqrt{(k_1^2-k_r^2)+\beta_r^2}$ (the longitudinal wavenumber in the first medium) is imaginary for $\beta_r<\sqrt{k_r^2-k_1^2}$.
Finally, it is possible to show \cite{Zamboni-Rached2008,Zamboni-Rached17} that the integral solution $(\ref{psi_t})$ results in a discrete superposition of Mackinnon-type beams:
\begin{equation}
\psi^T(\rho,z)\approx\ \exp(-\bar{\beta}_i z)\sum_{m=-\infty}^{\infty}A_m\, sinc\sqrt{k_r^2\rho^2+\left(k_rz+m\pi\right)^2} \,,
\label{psi_t_mack}
\end{equation}
where $sinc(\cdot)$ is the sinc function.
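For completeness, a minimal numerical sketch (our own illustration) of how the coefficients $A_m$ and the truncated superposition (\ref{psi_t_mack}) can be evaluated; the truncation order is arbitrary:
\begin{verbatim}
import numpy as np

def fw_coefficients(F, k_r, m_max):
    """A_m = (1/K_r) F(-2 m pi / K_r), with K_r = 2 k_r."""
    K_r = 2.0 * k_r
    m = np.arange(-m_max, m_max + 1)
    return m, F(-2.0 * m * np.pi / K_r) / K_r

def psi_T(rho, z, k_r, beta_i_bar, m, A):
    """Truncated Mackinnon-type superposition at a point (rho, z)."""
    arg = np.sqrt(k_r**2 * rho**2 + (k_r * z + m * np.pi)**2)
    # np.sinc(t) = sin(pi t)/(pi t), hence the division by pi
    return np.exp(-beta_i_bar * z) * np.sum(A * np.sinc(arg / np.pi))
\end{verbatim}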
\emph{Calculating the incident beam:}
With the scalar FW beam characterized in the last medium as desired, Eq.(\ref{psi_t_mack}), we can obtain the necessary incident beam $\psi^I$ in the first (nonabsorbing) medium as
\begin{equation}
\psi^I(\rho,z)\approx\ \int_{0}^{k_r}\frac{S(\beta_r)}{\tau(\beta_1)}J_0(\rho\sqrt{k_r^2-\beta_r^2}\,)
e^{i\beta_1 z}d\beta_r \,
\label{psi_i}
\end{equation}
where the square root in the argument of the Bessel function is the transverse wavenumber, which is conserved due to the boundary conditions, and $\tau(\beta_1)$ is the effective transmission coefficient of the entire stratified structure, given as a function of $\beta_1$. A description of how to obtain such a transmission coefficient through a transfer-matrix method can be found in the appendix.
Since $\beta_1=\sqrt{(k_1^2-k_r^2)+\beta_r^2}$, there are two possible situations in the superposition given by Eq.$(\ref{psi_i})$: if $n_1>n_r$, $\beta_1$ is always real, but if $n_1<n_r$ then:
\begin{enumerate}
\item $\beta_1=i\sqrt{(k_r^2-k_1^2)-\beta_r^2}$~~~if~~ $\beta_r<\sqrt{k_r^2-k_1^2}$ or
\item $\beta_1=\sqrt{(k_1^2-k_r^2)+\beta_r^2}$~~~~if~~ $\beta_r>\sqrt{k_r^2-k_1^2}$
\end{enumerate}
Here, we restrict ourselves to the case where $n_1=1$ (vacuum), so the incident beam is:
\begin{equation}
\psi^I(\rho,z)\approx\ \int_{0}^{\sqrt{k_r^2-k_1^2}}\frac{S(\beta_r)}{\tau(\beta_1)}J_0(\rho\sqrt{k_r^2-\beta_r^2}\,)
e^{-\sqrt{(k_r^2-k_1^2)-\beta_r^2}z}d\beta_r
\\
+\int_{\sqrt{k_r^2-k_1^2}}^{k_r}\frac{S(\beta_r)}{\tau(\beta_1)}J_0(\rho\sqrt{k_r^2-\beta_r^2}\,)
e^{i\sqrt{(k_1^2-k_r^2)+\beta_r^2}z}d\beta_r
\label{psi_ifull}
\end{equation}
Due to condition (\ref{cond_betar}), the first integral in Eq.$(\ref{psi_ifull})$ is negligible compared to the second. So, keeping just the second integral in Eq.(\ref{psi_ifull}) and changing the integration variable from $\beta_r$ to $\beta_1$, using $\beta_r=\sqrt{(k_r^2-k_1^2)+\beta_1^2}$, we can write
\begin{equation}
\psi^I(\rho,z)\approx\int_{0}^{k_1}S'(\beta_1)J_0(\rho\sqrt{k_1^2-\beta_1^2}\,)
\exp(i\beta_1 z)d\beta_1
\end{equation}
where
\begin{equation}
S'(\beta_1)= \frac{S[\beta_r(\beta_1)]}{\tau(\beta_1)}\frac{\beta_1}{\sqrt{(k_r^2-k_1^2)+\beta_1^2}}
\label{s(b1)}
\end{equation}
and
\begin{equation} S[\beta_r(\beta_1)]=\sum_{m=-\infty}^{\infty} A_m \exp\left[\frac{i2m\pi}{K_r}\beta_r(\beta_1)\right]
\label{coef_s(b1)}\end{equation}
By using the Heaviside function,
\begin{equation}
H(\beta_1)=\left\{
\begin{array}{c}
\ 1~~~~~~~~if~~~\beta_1>0\\
\ 0~~~~~~~~if~~~\beta_1<0
\end{array}\right.
\label{quadrado}
\end{equation}
we can write
\begin{equation}
\psi^I(\rho,z)\approx\ \int_{-k_1}^{k_1}H(\beta_1)S'(\beta_1)J_0(\rho\sqrt{k_1^2-\beta_1^2}\,)
\exp(i\beta_1z)d\beta_1
\label{psi_i_integral}
\end{equation}
whose solution is
\begin{equation}
\psi^{I}(\rho,z)\approx\ 2k_1\sum_{g=-\infty}^{\infty}D_g sinc\sqrt{k_1^2\rho^2+\left(k_1z+g\pi\right)^2} \,,
\label{psi_i_mack}
\end{equation}
where we have used that
\begin{equation}
H(\beta_1)S'(\beta_{1})=\sum_{g=-\infty}^{\infty}D_g \exp\left(\frac{i2g\pi }{K_1}\beta_1\right) \,,
\label{sb1}
\end{equation}
with
\begin{equation}
D_g=\frac{1}{2k_1}\int_{-k_1}^{k_1}H(\beta_1)S'(\beta_{1})\exp\left(-\frac{i2g\pi }{K_1}\beta_1\right)d\beta_1
\label{dg}
\end{equation}
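The coefficients $D_g$ have no closed form in general; a simple quadrature sketch (our own illustration; the spectrum $S'(\beta_1)$ is assumed to be supplied as a callable) is:
\begin{verbatim}
import numpy as np

def incident_coefficients(S_prime, k_1, g_max, num=4001):
    """D_g by trapezoidal quadrature; the Heaviside factor restricts
    the integral to beta_1 in (0, k_1), and K_1 = 2 k_1."""
    beta1 = np.linspace(1e-9, k_1, num)
    K_1 = 2.0 * k_1
    g = np.arange(-g_max, g_max + 1)
    return np.array([np.trapz(S_prime(beta1)
                     * np.exp(-2j * gi * np.pi * beta1 / K_1), beta1) / K_1
                     for gi in g])
\end{verbatim}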
In summary, in order to obtain the desired wave field in the last medium, i.e., a scalar beam microstructured according to a morphological function $F(z)$ and whose analytic solution is given by Eq.(\ref{psi_t_mack}), the incident wave at the first interface of the stratified structure has to be given by the analytic solution Eq.(\ref{psi_i_mack}).
\section{The method applied to a non-paraxial and azimuthally polarized optical beam}
When dealing with vector beams, one must always keep in mind that TE and TM beams behave differently with respect to reflection and refraction.
In the case of a TE Bessel beam at normal incidence on a plane interface separating two dielectrics, the reflection and transmission coefficients are equal to those of a scalar Bessel beam (of the same cone angle) also at normal incidence. This fact will be used here.
The aim of this work is, given an absorbing stratified medium with M layers (the first and the last being semi-infinite), to obtain an incident optical beam such that in the last medium we have a non-paraxial, azimuthally polarized and microstructured beam, capable of assuming a longitudinal intensity pattern chosen on demand.
In this section, we will obtain such microstructured electromagnetic beam via Maxwell's equations, taking as a starting point the scalar solution presented in the previous section.
Although we do not present the calculation of the reflected beam (since we are interested in the transmitted wave), it will be shown in the figures since we think that such information contributes to a better understanding of the phenomena studied here.
Let us consider a stratified medium formed by $M$ layers with refractive indices $n_m$ $(m=1,2,\ldots,M)$ and with interfaces located at the positions $z=d_1,d_2,\ldots,d_{M-1}$, as depicted in Fig. \ref{meio_estrat}.
\begin{figure}[!htb]
\centering
\subfloat{
\includegraphics[height=3cm]{estratificado.jpg}
}
\caption{Schematic representation of the stratified medium}
\label{meio_estrat}
\end{figure}
An optical beam with azimuthal polarization and azimuthal symmetry, $\textbf{E}=E_{\phi}(\rho,z)\textrm{exp}(-i \omega t)\hat{\phi}$, must obey
\begin{equation} \frac{\partial^2 E_{\phi}}{\partial \rho^2} + \frac{1}{\rho}\frac{\partial E_{\phi}}{\partial \rho} - \frac{E_{\phi}}{\rho^2} + \frac{\partial^2 E_{\phi}}{\partial z^2} + n^2k_0^2E_{\phi}=0 \,\, , \label{eqephi}\end{equation}
whose simplest solution is a first-order Bessel-type beam, i.e., $E_{\phi}(\rho,z)=J_1(h\rho)\exp(i\beta z)$.
Here, we could follow a procedure similar to that of the scalar case presented in Section $3$ and write the transmitted beam (i.e., the beam in the last medium) as $\textbf{E}^T=E_{\phi}^T(\rho,z)\textrm{exp}(-i \omega t)\hat{\phi}$, with $E_{\phi}^T(\rho,z)$ a superposition similar to Eq.(\ref{psi_t}), only replacing the zero-order Bessel function $J_0(\cdot)$ with the first-order one, $J_1(\cdot)$. It turns out, however, that in this case there is no known analytic solution for that integral when $S(\beta_r)$ is given by Eq.(\ref{sbr}).
To overcome this issue, we adopt the following strategy: it is simple to verify that differentiating the solution $\psi =J_0(h \rho) \exp(i\beta z)$ of the Helmholtz equation (where $h = \sqrt{k^2 - \beta^2}$) with respect to $\rho$ gives $\partial \psi / \partial \rho = - h J_1(h \rho) \exp(i\beta z)$, which in turn is a solution of Eq. (\ref{eqephi}). Thus, if we differentiate the integral solution (\ref{psi_t}) of the Helmholtz equation with respect to $\rho$, we obtain an integral solution to the differential equation (\ref{eqephi}).
That being said, we write $E_{\phi}^T = \xi\, \partial \psi^T / \partial \rho$, with $\psi^T$ given by Eq.(\ref{psi_t}) and $\xi$ a normalization constant:
\begin{equation}
E_{\phi}^T(\rho,z) \approx\ \xi\exp(-\bar{\beta}_i z)\int_{-k_r}^{k_r}S''(\beta_r)J_1(\rho\sqrt{k_r^2-\beta_r^2})
\exp(i\beta_rz)d\beta_r \,\, ,
\label{EphiT int}
\end{equation}
with
\begin{equation} S''(\beta_r) = - \sqrt{k_r^2-\beta_r^2}\,S(\beta_r) \,\, , \label{S''}\end{equation}
and $S(\beta_r)$ given by Eqs.(\ref{sbr},\ref{am}).
Naturally, from Eq.(\ref{psi_t_mack}), the analytical solution to the integral solution Eq.(\ref{EphiT int}) will be given by:
\begin{equation}
E_\phi^{T}(\rho,z)\approx \xi e^{-\bar{\beta}_i z} \sum_{m=-\infty}^{\infty} A_m \, \frac{\partial}{\partial \rho}sinc\sqrt{k_r^2\rho^2+\left(k_r z+ \pi m\right)^2} \,\,, \label{EphiT}
\end{equation}
always keeping in mind that the coefficients $A_m$ are given by Eqs.(\ref{am},\ref{morfologica}).
The central concern now is whether the integral solution (\ref{EphiT int}) for the beam transmitted to the last semi-infinite layer still allows its longitudinal intensity pattern to be modeled at will within a micrometer-sized spatial region. This concern arises because the spectrum $S''(\beta_r)$ in the superposition (\ref{EphiT int}) differs from the spectrum $S(\beta_r)$, Eqs.(\ref{sbr},\ref{am}), which enables the spatial modeling of the scalar field $\psi^T$.
Fortunately, the factor $\sqrt{k_r^2-\beta_r^2}$ in Eq.(\ref{S''}) is not able, in general, to substantially modify the shape of $S''(\beta_r)$ compared to $S(\beta_r)$; actually, for the vast majority of cases of interest, both spectra are very similar, apart from a difference in amplitude. As a consequence, the longitudinal field pattern of $E_\phi^{T}(\rho,z)$ is dictated by the morphological function $F(z)$, as intended. In addition, the field is no longer concentrated on the axis $\rho=0$, but on a cylindrical surface of radius $\rho_{1} \approx 1.84 / \sqrt{k_r^2 - Q^2}$, where $1.84$ is the value of the argument that maximizes the Bessel function $J_1(\cdot)$.
In this way, we can say that solution (\ref{EphiT}) represents a microstructured beam in the semi-infinite layer M, behaving according to the morphological function $F(z)$ given by Eq.(\ref{morfologica}), more specifically $E_\phi^{T}(\rho=\rho_1,z)\approx f(z)\exp(i Q z)$ with $f(z)$ and $Q$, chosen at will.
Now, we proceed to calculate $E_\phi^{I}$, the beam incident on the first interface of the stratified medium, which ultimately gives rise to the transmitted field $E_\phi^{T}$.
Since the transmission and reflection coefficients of a TE Bessel-type beam at normal incidence on a plane interface are the same as those for a scalar Bessel beam, we can use the results of section $3$ and write for the incident beam:
\begin{equation}
E_\phi^{I}(\rho,z)\approx \xi \sum_{g=-\infty}^{\infty}D'_g\frac{\partial}{\partial \rho}sinc\sqrt{k_1^2\rho^2+\left[k_1 z+\pi g\right]^2} \,\,, \label{EphiI}
\end{equation}
where the coefficients $D'_g$ are given by
\begin{equation}
D'_g=\frac{1}{2k_1}\int_{-k_1}^{k_1}H(\beta_1)S'''(\beta_{1})\exp\left(-\frac{i2g\pi }{K_1}\beta_1\right)d\beta_1 \,\,,
\label{dg'}
\end{equation}
where
\begin{equation}
S'''(\beta_1)= \frac{S''[\beta_r(\beta_1)]}{\tau(\beta_1)}\frac{\beta_1}{\sqrt{(k_r^2-k_1^2)+\beta_1^2}} \,\,,
\label{s'''}
\end{equation}
with
\begin{equation} S''[\beta_r(\beta_1)]= - \sqrt{k_r^2-\beta_r^2(\beta_1)}\,\sum_{m=-\infty}^{\infty} A_m \exp\left[\frac{i2m\pi}{K_r}\beta_r(\beta_1)\right]
\label{s''(b1)}\end{equation}
and $\beta_r(\beta_1)=\sqrt{(k_r^2-k_1^2)+\beta_1^2}$. Notice that $\tau(\beta_1)$ is the effective transmission coefficient (of the stratified medium) given as a function of $\beta_1$; it can be achieved through a transfer-matrix method, as described in the appendix.
Using a condensed notation, we can write the incident/transmitted beam pair as:
\begin{equation}
E_\phi^{I\choose T}(\rho,z)\approx \xi{1\choose e^{-\bar{\beta}_i z}} \sum_{{g\choose m}=-\infty}^{\infty}{D'_g\choose A_m}\frac{\partial}{\partial \rho}sinc\sqrt{{k_1^2\choose k_r^2}\rho^2+\left[{k_1\choose k_r}z+\pi {g\choose m}\right]^2}
\end{equation}
The magnetic field $\mathbf{B} = B_{\rho}\hat{\rho} + B_{z}\hat{z}$, obtained from Faraday's law, can be written in the same notation as:
\begin{equation}
B_\rho^{I\choose T}(\rho,z)\approx \xi\,\frac{i}{\omega}{1\choose e^{-\bar{\beta}_i z}} \sum_{{g\choose m}=-\infty}^{\infty}{D'_g\choose A_m}\frac{\partial^2}{\partial z\partial \rho}sinc\sqrt{{k_1^2\choose k_r^2}\rho^2+\left[{k_1\choose k_r}z+\pi {g\choose m}\right]^2}
\end{equation}
and
\begin{equation}
B_z^{I\choose T}(\rho,z)\approx - \xi \,\frac{i}{\omega}{1\choose e^{-\bar{\beta}_i z}} \sum_{{g\choose m}=-\infty}^{\infty}{D'_g\choose A_m}\frac{1}{\rho}\frac{\partial}{\partial \rho}\left[\rho\frac{\partial}{\partial \rho}sinc\sqrt{{k_1^2\choose k_r^2}\rho^2+\left[{k_1\choose k_r}z+\pi {g\choose m}\right]^2} \right]
\end{equation}
In summary, for the azimuthally polarized beam, Eq.(\ref{EphiT}), microstructured according to the morphological function $F(z)$, be the one transmitted to the last medium of the stratified structure, it is required that in the first medium the incident beam be given by Eq.(\ref{EphiI}).
\emph{An example:}
Here we adopt $\lambda=532$nm (in vacuum).
Let us consider a simple stratified medium composed of three layers, whose refractive indices and interface locations are given in Table \ref{tabela2}.
\begin{table}
\centering
\caption{Refractive indices, thicknesses and interface positions of the three-layer stratified medium considered in the example.}
\begin{tabular}{|l|c|c|c|}
\hline
Layer $(m)$ & Refractive index ($n_m$) & Thickness ($\mu$m) & Interface at $z$ ($\mu$m) \\
\hline
1 & 1 & semi-infinite & $d_1=0$\\
2 & $1.3+0.32\times10^{-3}i$ & 20 & $d_2=20$\\
3 & $1.5+3\times10^{-3}i$ & semi-infinite & -\\
\hline
\end{tabular}
\label{tabela2}
\end{table}
In this example, the chosen morphological function is
\begin{equation}
F(z)=\exp\left[-\left(\frac{z-z_0}{Z}\right)^8\right]\cos\left(\frac{2\pi z}{\Lambda}\right)\exp\left(\bar{\beta}_iz\right)\exp(iQz)
\label{f2}
\end{equation}
with $z_0=56\mu$m, $Z=25\mu$m, $\Lambda = (5/6)Z$ and $Q = 0.97 k_r$.
The morphological function given by Eq.(\ref{f2}) means that we wish the azimuthally polarized beam transmitted to the third medium to possess a transverse radius (hollow beam) given approximately by $\rho_{1} \approx 1.84 / \sqrt{k_r^2 - Q^2} \approx 0.43\mu$m and a longitudinal intensity pattern given by an 8th-order supergaussian centered at $z=z_0=56\mu$m, with width approximately $2Z/2^{1/8} \approx 46\mu$m, modulated by a squared cosine of spatial period $\Lambda / 2 \approx 10.4\mu$m.
Having $F(z)$ in hand, the solution for the beam in the last medium (i.e., the transmitted beam $E_{\phi}^T$) is given by Eq.(\ref{EphiT}), with the coefficients $A_m$ given by Eqs.(\ref{am},\ref{morfologica}). The value of the normalization constant $\xi$ is chosen such that the maximum intensity of the transmitted beam (in arbitrary units) equals one.
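For concreteness, a minimal numerical sketch of this construction is given below (Python; our reading of the coefficient formula: the $A_m$ are computed as ordinary Fourier-series coefficients of $F$ over an assumed window $[0,L_z]$, and the window length, grid and index range are illustrative choices to be matched against Eqs.(\ref{am},\ref{morfologica})):
\begin{verbatim}
import numpy as np

# parameters quoted in the text (lengths in micrometers)
lam      = 0.532                  # vacuum wavelength
n_r, n_i = 1.5, 3e-3              # last medium
k_r      = 2*np.pi*n_r/lam
beta_i   = 2*np.pi*n_i/lam        # field attenuation constant (our reading)
z0, Z    = 56.0, 25.0
Lam      = (5.0/6.0)*Z
Q        = 0.97*k_r

def F(z):
    return (np.exp(-((z - z0)/Z)**8) * np.cos(2*np.pi*z/Lam)
            * np.exp(beta_i*z) * np.exp(1j*Q*z))

# Fourier coefficients of F over an assumed window [0, L_z]
L_z = 112.0
m   = np.arange(-75, 76)
z   = np.linspace(0.0, L_z, 8192, endpoint=False)
A_m = (F(z)[None, :] * np.exp(-2j*np.pi*np.outer(m, z)/L_z)).mean(axis=1)
\end{verbatim}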
Figure (2a) shows the intensity of the beam transmitted to the last medium, evidencing that the field is microstructured according to the desired shape. Figure (2b) reinforces this fact by comparing the longitudinal field intensity over the cylindrical surface of radius $\rho_{1} \approx 0.43\mu$m (red line) with the intensity demanded by the morphological function (black line).
It is interesting to note that the transmitted beam is not only resistant to the effects of diffraction, but is also resistant to attenuation (a consequence of the term $\exp\left(\bar{\beta}_iz\right)$ in the morphological function given by Eq.(\ref{f2})). Due to the absorption presented by the last medium, an ordinary optical beam would have a penetration depth given by $\delta=c/(2n_i\omega)\approx 14\mu$m, while the structured beam $E_{\phi}^T$ is able to propagate a distance $3.3$ times greater without suffering the effects of attenuation.
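As a quick sanity check of these figures (our arithmetic only, using $\delta = c/(2 n_i \omega) = \lambda/(4\pi n_i)$):
\begin{verbatim}
import numpy as np
lam, n_i = 532e-9, 3e-3
delta = lam / (4*np.pi*n_i)   # = c/(2 n_i omega) ~ 1.41e-5 m, i.e. ~14 um
print(delta)
print(46e-6 / delta)          # envelope width ~46 um, about 3.3 times delta
\end{verbatim}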
We now proceed to the incident beam $E_{\phi}^I$, given by the solution (\ref{EphiI}), where the coefficients $D'_g$ are numerically calculated from Eq.(\ref{dg'}). As already stated, this is the beam that must be generated in medium 1 so that the beam transmitted to the last medium is given by $E_{\phi}^T$.
Figure (2c) shows the intensity of the beam incident on the first plane interface and Fig.(2d) shows, in logarithmic scale, the ratio $|E_{\phi}^I|^2 / |E_{\phi}^T|^2_{max}$, where $|E_{\phi}^T|^2_{max}$ is the maximum intensity of the transmitted beam.
Although we have not provided the equations for calculating the field reflected by the first interface, Fig.(2e) shows the reflected beam intensity $|E_{\phi}^R|^2$, and Fig.(2f) shows, in logarithmic scale, the ratio $|E_{\phi}^R|^2 / |E_{\phi}^T|^2_{max}$.
\begin{figure}[h!]
\centering{
\subfloat[]{
\includegraphics[width=0.4\textwidth]{azimutal_Ephi_transmitido.jpg}}
\subfloat[]{
\includegraphics[width=0.4\textwidth]{azimutal_2d_transmitido.jpg}}
\\
\centering{
\subfloat[]{
\includegraphics[width=0.4\textwidth]{azimutal_3d_incidente.jpg}
}
\subfloat[]{
\includegraphics[width=0.4\textwidth]{azimutal_incidente_lg.jpg}
}}
\\
\centering{
\subfloat[]{
\includegraphics[width=0.4\textwidth]{azimutal_3d_refletido.jpg}
}
\subfloat[]{
\includegraphics[width=0.4\textwidth]{azimutal_refletido_lg.jpg}
}}
\caption{(a) The 3D intensity of the compensated transmitted azimuthal component $E_{\phi}$; (b) comparison between the longitudinal intensity of $E_{\phi}$ over the cylindrical surface $\rho=\rho_1$ (red/dotted line) and the desired pattern $|F(z)|^2$ (black/solid line); (c) compensated incident component $E_{\phi}$ and (d) this component in logarithmic scale; (e) reflected component $E_{\phi}$ in medium 1 and (f) this component in logarithmic scale.}
\end{figure}
\begin{figure}[!htb]
\centering
\subfloat{
\includegraphics[height=5cm]{azimutal_espectro_transmitido.jpg}
\label{(a)}}
\quad
\subfloat{
\includegraphics[height=5cm]{azimutal_espectro_incidente.jpg}
\label{(b)}}
\caption{Amplitude spectra of: a) the transmitted beam and b) the incident beam.}
\end{figure}
To conclude this section, we show in Figures (3a) and (3b) the amplitude spectra of the transmitted and incident beams, respectively.
\section{Conclusions}
In this paper, we propose an analytical method for obtaining a highly non-paraxial beam with azimuthal polarization that, at normal incidence on an absorbing stratified medium, will provide in the last semi-infinite absorbing layer an azimuthally polarized and structured beam endowed with a micrometer intensity pattern chosen at will. The method also provides the beam incident on the first interface of the stratified medium that, ultimately, gives rise to the desired transmitted beam.
We believe that the possibility of managing the properties of a highly non-paraxial beam under adverse conditions, such as multiple reflections in stratified structures and energy loss to the material media, may be of great importance in many different optical applications, such as trapping and micro-manipulation, remote sensing, thin films, medical devices, and medical therapies.
\section{Proofs for~\cref{sec:lower-bounds}}
\subsection{Proof of~\cref{thm:lb-adaptive-adv}}
\label{sec:apdx-thm-lb-adaptive-adv}
We build on the following property of the padded Tardos code, as used in fingerprinting lower bounds.
Given a matrix $X \in \{-1,+1\}^{(n+1) \times p}$, we say that $j \in [p]$ is a consensus column if the column is equal to the all-ones vector or its negation. Let $X_{(i)} \in \{-1,+1\}^{n \times p}$ denote the matrix that results from removing the $i$'th row of $X$. Moreover, we let $\bar X \in \R^p$ denote the sum of the rows of $X$, that is, $\bar X_j = \sum_{i=1}^{n+1} X_{ij}$. Finally, for $v \in \R^p$, let $\sign(v) \in \{-1,+1\}^p$ denote the vector of signs of the entries of $v$.
\begin{theorem}[{\citealp[Theorem 3.2]{TalwarThZh15}}]
\label{thm:lb-fb-matrix}
Let $p = 1000m^2$ and $n = m/\log m$ for sufficiently large $m$. There exists a matrix $X \in \{-1,+1\}^{(n+1) \times p}$ such that
\begin{itemize}
\item There are at least $0.999p$ consensus columns in $X_{(i)}$ for every $i \in [n+1]$.
\item Any algorithm $\A: \{-1,+1\}^{n \times p} \to \{-1,+1\}^p$ satisfying $\lzero{\A(X_{(i)}) - \mathsf{sign}(\bar X_{(i)})} \le 1/4$ for all $i\in[n+1]$ with probability at least $2/3$ is not $(1,n^{-1.1})$-DP.
\end{itemize}
\end{theorem}
Building on~\cref{thm:lb-fb-matrix}, we can now prove our main lower bound.
\newcommand{S_{\mathsf{cons}}}{S_{\mathsf{cons}}}
\begin{proof}[of \cref{thm:lb-adaptive-adv}]
First, we prove the lower bound for $\diffp \le 1/(\sqrt{T} \log T)$, that is, we prove the regret has to be linear in this case.
We will reduce the problem of private sign estimation to DP-OPE with adaptive adversaries and use the lower bound of~\cref{thm:lb-fb-matrix}. To this end, given an algorithm $\A$ for DP-OPE and an input $X \in \{-1,+1\}^{n \times p}$, we have the following procedure for estimating the signs of the columns of $X$. We design an online experts problem that has $d = 2p$ experts, where column $j \in [p]$ in $X$ has two corresponding experts $2j+1$ and $2j+2$ (corresponding to the sign of column $j$). We initialize the vector of signs $s_j = 0$ for all $1 \le j \le p$. We have $T=0.9p$ rounds, and at round $1 \le t \le T$ we select a user $i_t \in [n]$ (arbitrarily, while enforcing that each $i \in [n]$ appears at most $2T/n$ times) and play a loss function $\ell_t: [d] \to \{ 0,1 \}$ such that
\begin{equation*}
\ell_{t}(2j+1) =
\begin{cases}
1 & \text{if } s_j \neq 0 \\
\frac{X_{i_t,j}+1}{2} & \text{otherwise}
\end{cases}
\end{equation*}
We also set
\begin{equation*}
\ell_{t}(2j+2) =
\begin{cases}
1 & \text{if } s_j \neq 0 \\
\frac{-X_{i_t,j}+1}{2} & \text{otherwise}
\end{cases}
\end{equation*}
The idea of this loss function is that the $2j+1$ and $2j+2$ experts will represent the signs of the $j$'th column. If the sign of the $j$'th column is $+1$, then expert $2j+2$ will have better loss and hence should be picked by the algorithm. Moreover, whenever the algorithm has estimated the sign of the $j$'th column ($s_j \neq 0$), we set the loss to be $1$ for both experts $2j+1$ and $2j+2$, in order to force the online algorithm to estimate the sign of new columns.
Then, given the output of the algorithm $\A$ at time $t$, that is $x_t = \A(\ell_1,\dots,\ell_{t-1})$ we set $s_j = -1$ if $x_{t} = 2j+1$ and $s_j = 1$ if $x_{t} = 2j+2$ and otherwise we keep $s_j$ unchanged.
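Schematically, one round of this reduction can be rendered as follows (a sketch for intuition only, not part of the formal proof; it is 0-indexed, so experts $2j$ and $2j+1$ play the roles of $2j+1$ and $2j+2$ in the text):
\begin{verbatim}
import numpy as np

def round_loss(X, i_t, s):
    # X: the (n+1) x p fingerprinting matrix, i_t: the selected row,
    # s in {-1,0,+1}^p: the signs recovered so far.
    p = X.shape[1]
    loss = np.ones(2*p)                       # decided columns: loss 1
    und = (s == 0)
    loss[0::2][und] = (X[i_t, und] + 1) / 2   # expert for sign -1
    loss[1::2][und] = (-X[i_t, und] + 1) / 2  # expert for sign +1
    return loss

def update_signs(s, x_t):
    j, parity = divmod(x_t, 2)
    if s[j] == 0:
        s[j] = -1 if parity == 0 else +1
    return s
\end{verbatim}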
Moreover, there is an expert that achieves optimal loss, that is, for some $x\opt \in [d]$ we have
\begin{equation*}
\sum_{t=1}^T \ell_t(x\opt) = 0.
\end{equation*}
This follows since $X$ has at least $0.999p$ consensus columns, hence there is a zero-loss expert after $T=0.9p$ iterations. Now we show that if an algorithm $\A$ has small regret, then the vector $s$ estimates the sign of at least $0.8p$ columns. To this end, let $ j_t = \floor{(x_t-1)/2}$ denote the column corresponding to the expert picked by the algorithm at time $t$, $S = \{j_t : t \in[T] \}$, and $S_{\mathsf{cons}} = \{ j \in [p]: \text{column $j$ is a consensus column} \} $. Observe that the regret of the algorithm is
\begin{align*}
\sum_{t=1}^T \ell_t(x_t)
& = \sum_{j_t \in S} \indic{s_{j_t}=1} \ell_t(2j_t+2)
+ \indic{s_{j_t}=-1} \ell_t(2j_t+1)
\\
& = \frac{1}{2} \sum_{j_t \in S} \indic{s_{j_t}=1} (-X_{i_t,j_t}+1)
+ \indic{s_{j_t}=-1} (X_{i_t,j_t}+1) \\
& = \frac{1}{2} \sum_{j_t \in S} \indic{s_{j_t}=1, X_{i_t,j_t} = -1}
+ \indic{s_{j_t}=-1, X_{i_t,j_t} = 1} \\
& = \frac{1}{2} \sum_{j_t \in S} \indic{s_{j_t} \neq X_{i_t,j_t}} \\
& \ge -0.001 p + \frac{1}{2} \sum_{j_t \in S \cap S_{\mathsf{cons}}} \indic{s_{j_t} \neq X_{i_t,j_t}} \\
& \ge -0.001 p + \frac{1}{2} \sum_{j_t \in S \cap S_{\mathsf{cons}}} \indic{s_{j_t} \neq \sign(\bar X)_{j_t}}.
\end{align*}
Assume towards a contradiction that $\A$ is $(1/200\sqrt{T}\log(T),\delta)$-DP where $\delta \le 1/T^3$ and that the expected regret is at most $T/1000$. Markov's inequality implies that with probability at least $9/10$ the regret is at most $T/100$. Under this event we have
\begin{equation*}
\sum_{j_t \in S \cap S_{\mathsf{cons}}} \indic{s_{j_t} \neq \sign(\bar X)_{j_t}} \le 0.002 T.
\end{equation*}
Now note that we can assume that the online algorithm picks $x_t$ such that each $j_t$ appears at most once. Otherwise we can modify the algorithm to satisfy this property without increasing the regret: whenever the algorithm picks $x_t$ such that $j_t$ appeared before, the loss of this expert is $1$, hence we can instead pick at random another expert $x_t$ whose $j_t$ has not appeared. This implies that $|S| = T = 0.9p$ and hence
$|S \cap S_{\mathsf{cons}}| \ge 0.85p$. Therefore we have that $s_{j_t} = \sign(\bar X)_{j_t}$ for at least $0.8p$ columns from $S_{\mathsf{cons}}$ with probability $0.9$. To finish the proof, we need to argue about the final privacy guarantee of the sign vector $s$; we will prove that $s$ is $(1,T\delta)$-DP which will give a contradiction to~\cref{thm:lb-fb-matrix} and prove the claim. To this end, note that the algorithm $\A$ is $(1/200\sqrt{T}\log(T),\delta)$-DP. Moreover, recall that each row $i\in[n]$ appears at most $k \le 2T/n \le 2p/n \le 200 \sqrt{p} \log(p)$ times, hence group privacy implies the final output $s$ is $(k\diffp,k\delta)$-DP, that is, $(1,1/T^2)$-DP.
Now we proceed to prove the lower bound for larger values $\diffp \ge 1/(\sqrt{T} \log T)$. Note that if $\diffp \ge \log(T)/T^{1/4}$ then the non-private lower bound of $\sqrt{T \log d}$ is sufficient. Otherwise, consider an algorithm $\A$ that is $\diffp$-DP and consider an adversary that in the first $T_0 < T$ iterations behaves the same as the above where $\diffp = 1/(\sqrt{T_0} \log T_0)$. Then in the last $T - T_0$ iterations it sends $\ell_t(x)=0$ for all $x \in [d]$. The above lower bound implies that the algorithm has to pay regret $\Omega(T_0)= \Omega(1/(\diffp \log T_0)^2)$. The claim follows as $T_0 \le T$.
\end{proof}
\subsection{Proof for~\cref{thm:lb-adaptive-adv-pure}}
\label{sec:proof-lb-pure}
To prove a lower bound for pure DP, we use the following version of~\cref{thm:lb-fb-matrix} for this setting.
\begin{theorem}[{\citealp[Theorem A.1]{SteinkeUl17}}]
\label{thm:pure-sign-est}
Let $d = 1000n$ and $n$ sufficiently large. Let $\mc{X} = \{X \in \{-1,+1\}^{n \times d} :$ all the columns in $X$ are consensus columns$\} $. Let $\A: \{-1,+1\}^{n \times d} \to \{-1,+1\}^d$ be an algorithm such that for all $X \in \mc{X}$,
\begin{equation*}
\E[\lzero{\A(X) - \mathsf{sign}(X)}] \le 1/4.
\end{equation*}
Then $\A$ is not $1$-DP.
\end{theorem}
Using the bound of~\cref{thm:pure-sign-est} and following the same steps as in the proof of~\cref{thm:lb-adaptive-adv}, the lower bound of~\cref{thm:lb-adaptive-adv-pure} now follows.
\subsection{Proof of~\cref{thm:lb-large-peps}}
\label{sec:thm-lb-large-peps}
We use similar ideas to the one in the proof of~\cref{thm:lb-adaptive-adv} where we used a DP-OPE algorithm for sign estimation. Instead of designing two experts for each column, the idea here is to look at subsets of columns of size $k$ and design $2^k$ experts to represent the sign vector of these $k$ columns.
Given an input $X \in \{-1,+1\}^{n \times p}$ where we assume for simplicity that $p/k$ is an integer, we design an expert problem with $d = 2^k \binom{p}{k} $ experts.
Instead of representing the experts as integers $x \in [d]$, we use an equivalent representation where an expert is a pair $(S,v)$ where $S \subset [p]$ is a set of columns of size $k$ and $v \in \{-1,+1\}^k$ represents the signs that this expert assigns for columns in $S$. We initialize the vector of signs $s_j = 0$ for all $1 \le j \le p$.
Here we have $T=0.9p/k$ rounds, and at round $1 \le t \le T$ we select a user $i_t \in [n]$ (arbitrarily, while enforcing that each $i \in [n]$ appears at most $2T/n$ times) and play a loss function $\ell_t$ such that
\begin{equation*}
\ell_{t}(S,v) =
\begin{cases}
1 & \text{if } s_j \neq 0 \text{ for some } j \in S \\
0 & \text{otherwise if } \sign(\bar X_S) = v \\
1 & \text{otherwise}
\end{cases}
\end{equation*}
Now, given the output of the algorithm $\A$ at time $t$, that is $x_t = (S_t,v_t)$ we set $s_{S_t} = v_t$ (we assume without loss of generality that each $j \in [p]$ will appear in at most a single $S_t$. Otherwise, similarly to the proof of~\cref{thm:lb-adaptive-adv}, we can ensure this property while not increasing the regret).
Moreover, at the end of the game, there is a set $S \subset [p]$ of size $k$ that contains only consensus columns which were not estimated earlier ($S \cap S_t = \emptyset$ for all $t$). This follows from the fact that $X$ has at least $0.999p$ consensus columns, hence there are at least $0.05p \ge k$ consensus columns that do not appear in $S_1,\dots,S_T$, and therefore there is an expert $(S,v)$ such that
\begin{equation*}
\sum_{t=1}^T \ell_t(S,v) = 0.
\end{equation*}
Now we show that if an algorithm $\A$ has small regret, then the vector $s$ estimates the sign of at least $0.8p$ columns. Observe that the regret of the algorithm is
\begin{align*}
\sum_{t=1}^T \ell_t(x_t)
& = \sum_{t=1}^T \ell_t(S_t,v_t)
\\
& = \sum_{t=1}^T \indic{\sign(\bar X_{S_t}) \neq v_t}
\\
& = \sum_{t=1}^T \indic{\sign(\bar X_{S_t}) \neq s_{S_t}}.
\end{align*}
Assume towards a contradiction that $\A$ is $(\diffp,\delta)$-DP where $\diffp \le \frac{\sqrt{k/T}}{200\log(T)}$ and $\delta \le 1/T^3$ and that the expected regret is at most $T/1000$. Markov's inequality implies that with probability at least $9/10$ the regret is at most $T/100$.
Note that $|S| = kT = 0.9p$. Under this event we have
\begin{equation*}
\sum_{t=1}^T \indic{\sign(\bar X_{S_t}) \neq s_{S_t}} \le 0.002 T.
\end{equation*}
Hence we have that $\sign(\bar X_{S_t}) = s_{S_t} $ for at least $0.9T$ rounds. As each round has $k$ distinct columns, we have $s_j = \sign(\bar X_j)$ for at least $0.9kT \ge 0.8 p$. As there are at most $0.001p$ non-consensus columns, this means that $s_j = \sign(\bar X_j)$ for at least $0.75p$ consensus columns. Now we prove that $s$ is also $(1,1/T^2)$-DP which gives a contradiction to~\cref{thm:lb-fb-matrix}. To this end, note that the algorithm $\A$ is $(\diffp,\delta)$-DP where $\diffp \le \frac{\sqrt{k/T}}{200\log(T)} \le \frac{k/\sqrt{p}}{200 \rho \log(p)} $. Moreover, recall that each row $i\in[n]$ appears at most $k_i \le 2T/n \le 2 p /(nk) \le 200 \sqrt{p} \log(p)/k$ times, hence group privacy implies the final output $s$ is $(\max_{i} k_i \diffp,T \delta)$-DP, that is, $(1,1/T^2)$-DP.
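For intuition, the expert set used in this reduction can be enumerated as follows (a sketch; only feasible for small $p$ and $k$ since $d = 2^k \binom{p}{k}$):
\begin{verbatim}
from itertools import combinations, product

def experts(p, k):
    # Enumerate the d = 2^k * C(p, k) experts (S, v): S a k-subset of
    # the columns and v a sign pattern for the columns in S.
    for S in combinations(range(p), k):
        for v in product((-1, +1), repeat=k):
            yield S, v
\end{verbatim}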
\section{Concentration for sums of geometric variables}
In this section, we prove a concentration result for sums of geometric random variables, which allows us to upper bound the number of switches in the sparse-vector-based algorithm.
We say that $W$ is a geometric random variable with success probability $p$ if $\P(W=k) = (1-p)^{k-1}p$ for $k\in\{1,2,\dots\}$. To this end, we use the following Chernoff bound.
\begin{lemma}[\citealp{MitzenmacherUp05}, Ch.~4.2.1]
\label{lemma:chernoff}
Let $X = \sum_{i=1}^n X_i$ for $X_i \simiid \mathsf{Ber}(p)$.
Then for $\delta \in [0,1]$,
\begin{align*}
\P(X > (1+\delta)np ) \le e^{-np\delta^2 /3}
~~~ \mbox{and} ~~~
\P(X < (1-\delta)np ) \le e^{-np\delta^2 /2}.
\end{align*}
\end{lemma}
The following lemma demonstrates that the sum of geometric random variables concentrates around its mean with high probability.
\begin{lemma}
\label{lemma:geom-concentration}
Let $W_1,\dots,W_n$ be iid geometric random variables with success probability $p$. Let $W = \sum_{i=1}^n W_i$. Then for any $k \ge n$
\begin{equation*}
\P(W > 2k/p ) \le \exp{\left(-k/4\right)}.
\end{equation*}
\end{lemma}
\begin{proof}
Notice that $W$ is distributed according to the negative binomial distribution where we can think of $W$ as the number of Bernoulli trials until we get $n$ successes. More precisely, let $\{B_i\}$ for $i\ge1$ be Bernoulli random variables with probability $p$. Then the event $W>t$ has the same probability as $\sum_{i=1}^t B_i < n$. Thus we have that
\begin{equation*}
\P(W > t ) = \P\left(\sum_{i=1}^t B_i < n\right).
\end{equation*}
We can now use Chernoff inequality (\Cref{lemma:chernoff}) to get that for $t = 2n/p$
\begin{align*}
\P(\sum_{i=1}^t B_i < n)
\le \exp{(-tp/8)} = \exp{(-n/4)}.
\end{align*}
This proves that
\begin{equation*}
\P(W > 2n/p ) \le \exp{\left(-n/4\right)}.
\end{equation*}
The claim now follows by noticing that $\sum_{i=1}^n W_i \le \sum_{i=1}^k W_i $ for iid geometric random variables $W_i$ when $k \ge n$, thus $\P(\sum_{i=1}^n W_i > 2k/p) \le \P(\sum_{i=1}^k W_i > 2k/p) \le \exp{\left(-k/4\right)}$.
\end{proof}
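A quick Monte Carlo illustration of \cref{lemma:geom-concentration} (arbitrary parameters; not part of the proof):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, p, k, trials = 50, 0.1, 80, 20000      # requires k >= n
W = rng.geometric(p, size=(trials, n)).sum(axis=1)
print((W > 2*k/p).mean())                 # empirical tail probability
print(np.exp(-k/4))                       # the bound of the lemma
\end{verbatim}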
\section{Proofs for~\cref{sec:oco-imp}}
\subsection{Proof of~\cref{sec:thm-oco-imp}}
\label{sec:apdx-thm-oco-imp}
We assume without loss of generality that $L=1$ (otherwise divide the loss by $L$).
As $\mc{X}$ has diameter $D$, we can construct a cover $C = \{c_1,\dots,c_M\}$ of $\mc{X}$ such that $\min_{i \in [M]}\ltwo{x - c_i} \le \rho$ for all $x \in \mc{X}$ where $M \le 2^{d \log(4/\rho)}$~\citep[Lemma 7.6]{Duchi19}. Consider the following algorithm: run~\cref{alg:SD} where the experts are the elements of the cover $C$. \cref{cor:sd-appr} now implies that this algorithm has regret
\begin{equation*}
\E\left[ \sum_{t=1}^T \ell_t(x_t) - \min_{x \in C} \sum_{t=1}^T \ell_t(x) \right]
\le O \left( \sqrt{T \ln M } + \frac{T^{1/3} \log^{1/3}(1/\delta) \ln M}{\diffp} \right).
\end{equation*}
Since $\ell_t$ is $1$-Lipschitz, we now get
\begin{align*}
\E\left[ \sum_{t=1}^T \ell_t(x_t) - \min_{x \in \mc{X}} \sum_{t=1}^T \ell_t(x) \right]
& \le \E\left[ \sum_{t=1}^T \ell_t(x_t) - \min_{x \in C} \sum_{t=1}^T \ell_t(x) + \min_{x \in C} \sum_{t=1}^T \ell_t(x) - \min_{x \in \mc{X}} \sum_{t=1}^T \ell_t(x) \right] \\
& = O \left( \sqrt{T \ln M } + \frac{T^{1/3} \log^{1/3}(1/\delta) \ln M}{\diffp} + T\rho \right) \\
& = O \left( \sqrt{T d \log(1/\rho) } + \frac{T^{1/3} \log^{1/3}(1/\delta) d \log(1/\rho)}{\diffp} + T\rho \right) \\
& = O \left( \sqrt{T d \log(T) } + \frac{T^{1/3} d \log^{1/3}(1/\delta) \log(T)}{\diffp} \right),
\end{align*}
where the last inequality follows by setting $\rho = 1/T$.
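A sketch of the covering step (a uniform grid is one simple, if dimension-inefficient, way to realize a $\rho$-cover of the Euclidean ball; \cref{alg:SD} is then run with these points as experts):
\begin{verbatim}
import itertools
import numpy as np

def grid_cover(d, D=1.0, rho=0.1):
    # A rho-cover of {x in R^d : ||x||_2 <= D} by a uniform grid of
    # spacing rho/sqrt(d) (up to boundary effects); its cardinality is
    # 2^{O(d log(D/rho))}, matching the bound used in the proof.
    step = rho / np.sqrt(d)
    axis = np.arange(-D, D + step, step)
    return [np.array(c) for c in itertools.product(axis, repeat=d)
            if np.linalg.norm(c) <= D + rho]
\end{verbatim}
In the proof one takes $\rho = 1/T$; the explicit enumeration is of course only computationally feasible in very low dimension.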
\subsection{Proof of~\cref{cor:DP-OCO}}
\label{sec:apdx-cor-DP-OCO}
The algorithm $\A_{\mathsf{\ell_2}}$ is \ed-DP and has excess loss $\Delta_n = LD \cdot O(1/\sqrt{n} + \sqrt{d}/n\diffp)$. Thus, \cref{thm:ub-stoch-OCO} implies that
\begin{align*}
\E\left[ \sum_{t=1}^T \ell_t(x_t) - \min_{x \in \mc{X}} \sum_{t=1}^T \ell_t(x) \right]
& \le \sum_{i=1}^{\log T} 2^i \Delta_{2^i} \\
& \le O(LD) \sum_{i=1}^{\log T} \left( 2^{i/2} + \frac{\sqrt{d}}{\diffp} \right) \\
& \le LD \cdot O(\sqrt{T} + \sqrt{d} \log(T)/\diffp).
\end{align*}
\section{Proofs for~\cref{sec:upper-bounds-realizable}}
\section{Proofs for~\cref{sec:ub-obl-sd}}
\label{sec:apdx-ub-obl}
\subsection{Proof of~\cref{thm:ub-priv-DS}}
\label{sec:proof-thm-ub-DS}
We build on the following lemma.
\begin{lemma}
\label{lemma:DS-marg-dist}
Let $\hat P_t$ be the marginal distribution of $x_t$ of~\cref{alg:SD}. Then
\begin{equation*}
\norm{\hat P_t - P^t}_{TV} \le e^{-Tp/3}.
\end{equation*}
\end{lemma}
\begin{proof}
Let $k_t$ be the value of $k$ at iteration $t$. We roughly show that if $k_t < K$ then $P^t = \hat P_t$. As $P(k_t \ge K)$ is very small, this will prove the claim. Recall that $k_t = \sum_{i \le t} \indic{z_i = 0}$. Note that $P(z_t = 0) \le p + (1-p)\eta \le 2p$. Therefore, letting $y_t \sim \mathsf{Ber}(p + (1-p)\eta)$ we have
\begin{align*}
P(k_t > K)
& \le P(k_T > K) \\
& = P(\sum_{i=1}^T \indic{z_i = 0}> K) \\
& \le P(\sum_{i=1}^T y_i > K) \\
& \le e^{-Tp/3},
\end{align*}
where the last inequality follows from a Chernoff bound~(\cref{lemma:chernoff}).
Now we proceed to show that $\hat P_t$ and $P_t$ are close. To this end, we first define $Q_t$ to be the marginal distribution of $x_t$ in~\cref{alg:SD} when $K=T+1$ (that is, no limit on switching). We prove by induction that $Q_t = P^t$. The base case for $t=1$ is trivial. Assuming correctness for $t$, we have that for $x \in [d]$
\begin{align*}
Q_t(x)
& = p p_x^t + (1-p) \frac{w^t_{x}}{w^{t-1}_{x}} Q_{t-1}(x) + (1-p) p_x^t \sum_{x'=1}^d Q_{t-1}(x') (1 - \frac{w^t_{x'}}{w^{t-1}_{x'}}) \\
& = p p_x^t + (1-p) \frac{w^t_{x}}{w^{t-1}_{x}} \frac{w_x^{t-1}}{W^{t-1}} + (1-p) \frac{w^t_x}{W^t} \sum_{x'=1}^d \frac{w_{x'}^{t-1}}{W^{t-1}} \frac{w^{t-1}_{x'}-w^t_{x'}}{w^{t-1}_{x'}} \\
& = p p_x^t + (1-p) \left( \frac{w^t_{x}}{W^{t-1}} + \frac{w^t_x}{W^t} \frac{W^{t-1} - W^t}{W^{t-1}} \right) \\
& = p_x^t.
\end{align*}
Now consider $\hat P$. Let $Q_t^0$ and $Q_t^1$ be the conditional distributions of $Q_t$ given $k_t<K$ and $k_t \ge K$, respectively. Moreover, let $\hat P_t^0$ and $\hat P_t^1$ be the conditional distributions of $\hat P_t$ given $k_t<K$ and $k_t \ge K$, respectively. Note that $Q_t = P(k_t<K)\, Q_t^0 + P(k_t\ge K)\, Q_t^1$ and that $\hat P_t = P(k_t<K)\, \hat P_t^0 + P(k_t\ge K)\, \hat P_t^1$. Noting that $\hat P_t^0 = Q_t^0$, we have
\begin{align*}
\norm{\hat P_t - P^t}_{TV}
& = \norm{\hat P_t - Q_t}_{TV} \\
& = \norm{P(k_t<K)(\hat P_t^0 - Q_t^0) + P(k_t\ge K)(\hat P_t^1 - Q_t^1) }_{TV} \\
& \le P(k_t<K) \norm{\hat P_t^0 - Q_t^0}_{TV} + P(k_t\ge K) \norm{\hat P_t^1 - Q_t^1}_{TV} \\
& \le P(k_t\ge K).
\end{align*}
\end{proof}
\begin{proof}
First, we analyze the regret. \cref{lemma:DS-marg-dist} shows that $\hat P_t$, the marginal distribution of $x_t$, is within total-variation distance $e^{-Tp/3}$ of the marginal $P^t$ of the (non-private) shrinking dartboard algorithm; therefore Theorem 3 of \citet{GeulenVoWi10} shows that for $\eta \le 1/2$
\begin{align*}
\E_{x_t \sim \hat P_t}\left[ \sum_{t=1}^T \ell_t(x_t)\right]
& = \E_{x_t \sim P^t}\left[ \sum_{t=1}^T \ell_t(x_t)\right]
+ \E_{x_t \sim \hat P_t}\left[ \sum_{t=1}^T \ell_t(x_t)\right] - \E_{x_t \sim P^t}\left[ \sum_{t=1}^T \ell_t(x_t)\right] \\
& \le \E_{x_t \sim P^t}\left[ \sum_{t=1}^T \ell_t(x_t)\right] + 2 T \max_t \norm{\hat P_t - P^t}_{TV} \\
& \le (1+ \eta) \min_{x \in [d]} \sum_{t=1}^T \ell_t(x) + \frac{\ln d}{\eta} + 2 T e^{-Tp/3} \\
& \le \min_{x \in [d]} \sum_{t=1}^T \ell_t(x) + \eta T
+ \frac{\ln d}{\eta} + 2 T e^{-Tp/3}.
\end{align*}
Let us now analyze privacy. Assume we have two neighboring sequences that differ at time-step $t_1$.
Let $Z_t$ and $X_t$ denote the random variables for $z_t$ and $x_t$ in the algorithm when run for the first sequence and let $Y_t = 1 - Z_t$. Similarly, let $Z'_t$, $Y'_t$, and $X'_t$ denote the same for the neighboring sequence. We consider the pairs $W_t = (X_t,Z_{t+1})$ (where $X_0 = 0$) and prove that $W_t$ given $\{ W_\ell \}_{\ell=0}^{t-1}$ and $W'_t$ given $\{ W'_\ell \}_{\ell=0}^{t-1}$ are $\diffp_t$-indistinguishable where
\begin{equation*}
\diffp_t =
\begin{cases}
0 & \text{if } t < t_1 \\
\eta/p & \text{if } t = t_1 \\
\indic{\sum_{\ell=1}^{t-1} Y_\ell < K} 4 Y_t \eta &\text{if } t > t_1
\end{cases}
\end{equation*}
The result then follows from advanced composition~\citep{DworkRo14}: note that $Y_t \in \{0,1\}$ therefore we have that the final privacy parameter is
\begin{align*}
\diffp_f
& \le \frac{3}{2} \sum_{t=1}^T \diffp_t^2 + \sqrt{6 \sum_{t=1}^T \diffp_t^2 \log(1/\delta) } \\
& \le \frac{3}{2} (\frac{\eta^2}{p^2} + 16 K \eta^2) + \sqrt{6(\frac{\eta^2}{p^2} + 16K \eta^2)\log(1/\delta) } \\
& \le \frac{5\eta}{p} + 24 K \eta^2 + \eta \sqrt{100 K \log(1/\delta)} \\
& \le \frac{5\eta}{p} + 100 T p \eta^2 + 20 \eta \sqrt{ T p \log(1/\delta)}.
\end{align*}
Similarly, the result for $\delta=0$ follows from basic composition.
To finish the proof, consider the pair $W_t$ and $W'_t$. First, note that if $t<t_1$ then clearly $W_t$ and $W'_t$ are $0$-indistinguishable as they do not depend on $\ell_{t_1}$ or $\ell'_{t_1}$. For $t=t_1$, note that $X_{t_1}$ and $X'_{t_1}$ have the same distribution. Moreover, the definition of $Z_t$ implies that
$Z_t$ and $Z'_t$ are $\eta/p$-indistinguishable since
\begin{align*}
\frac{P(Z_t = 1)}{P(Z'_t = 1)}
& \le \frac{(1-p)}{(1-p)(1-\eta)} \\
& = \frac{1}{1 - \eta} \\
& = 1 + \frac{\eta}{1-\eta} \\
& \le 1 + 2 \eta \\
& \le e^{2\eta}.
\end{align*}
Moreover, since $\eta \le p \diffp $ we have
\begin{align*}
\frac{P(Z_t = 0)}{P(Z'_t = 0)}
& \le \frac{p + (1-p)\eta}{p} \\
& \le 1 + \frac{\eta}{p}
\le e^{\eta/p}.
\end{align*}
Now consider $t>t_1$. If $\sum_{\ell=1}^{t-1} Y_\ell \ge K$ or $Y_t = Y'_t = 0$ then $X_t = X_{t-1}$ and $X'_t = X'_{t-1}$ and thus $X_t$ and $X'_t$ are $0$-indistinguishable.
If $Y_t = Y'_t = 1$ then $X_t$ and $X'_t$ are $4\eta$-indistinguishable since $w_x^t/w_x^{'t} \le 1/(1-\eta) \le e^{2\eta}$, which implies that $P(x_t = x)/P(x'_t=x) \le e^{4\eta}$. Overall, $X_t$ and $X'_t$ are $4Y_t \eta$-indistinguishable. Moreover, since $t>t_1$, we have that $Z_{t+1}$ is a function of $X_t$ and $\ell_t$, and $Z'_{t+1}$ is a function of $X'_t$ and $\ell'_t=\ell_t$; hence by post-processing we get that $Z_{t+1}$ and $Z'_{t+1}$ are $4Y_t \eta$-indistinguishable. Overall, we have that $W_t$ and $W'_t$ are $\indic{\sum_{\ell=1}^{t-1} Y_\ell < K} 4 Y_t \eta$-indistinguishable.
\end{proof}
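For reference, a compact sketch of the lazy sampling scheme that the proof reasons about (our paraphrase of \cref{alg:SD}; the exact handling of the switching budget $K$ should be taken from the formal description of the algorithm):
\begin{verbatim}
import numpy as np

def private_shrinking_dartboard(losses, eta, p, K, rng):
    # losses: T x d array in [0,1] chosen by an oblivious adversary.
    T, d = losses.shape
    w = np.ones(d)                        # multiplicative weights
    x = rng.integers(d)                   # initial expert (uniform)
    switches, choices = 0, []
    for t in range(T):
        choices.append(x)                 # play x_t, then observe loss
        w_new = w * (1 - eta) ** losses[t]
        stay = (1 - p) * w_new[x] / w[x]  # z_t = 1: keep current expert
        if rng.random() >= stay and switches < K:
            x = rng.choice(d, p=w_new / w_new.sum())   # z_t = 0: resample
            switches += 1
        w = w_new
    return choices
\end{verbatim}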
\subsection{Proof of~\cref{cor:sd-appr}}
\label{sec:apdx-cor-sd-appr}
For these parameters, \cref{alg:SD} has privacy
\begin{equation*}
\diffp_0/4 + T p^3 \diffp_0^2/4 + \diffp_0 \sqrt{T p^3\log(1/\delta)} \le 2\diffp_0.
\end{equation*}
As $\diffp_0 \le \diffp/2$, this proves the claim about privacy.
Moreover, its regret is
\begin{align*}
\eta T
+ \frac{\ln d}{\eta} + 2 T e^{-Tp/3}
& \le T p \diffp_0/20 + 20 \ln d /(p\diffp_0) + 2 T e^{-Tp/3} \\
& \le \frac{T^{2/3} \diffp_0}{\log^{1/3}(1/\delta)} + \frac{20 T^{1/3} \log^{1/3}(1/\delta) \ln d}{\diffp_0} + 2 T e^{-Tp/3} \\
& \le \sqrt{T \ln d } + \frac{20 T^{1/3} \log^{1/3}(1/\delta) \ln d}{\diffp_0} + 2 T e^{-Tp/3} \\
& \le O \left(\sqrt{T \ln d } + \frac{ T^{1/3} \log^{1/3}(1/\delta) \ln d}{\diffp} \right),
\end{align*}
where the last inequality follows as $\diffp_0 = \min( \diffp/2,\frac{\log^{1/3}(1/\delta) \sqrt{\ln d}}{T^{1/6}})$.
\subsection{Proof of~\cref{cor:pure}}
\label{sec:apdx-cor-pure}
For these parameters, \cref{alg:SD} has privacy
\begin{equation*}
\diffp/20 + 16 Tp \eta
\le \diffp/10 + 16Tp^2 \diffp/20
\le \diffp.
\end{equation*}
Moreover, its regret is
\begin{align*}
\eta T
+ \frac{\ln d}{\eta} + 2 T e^{-Tp/3}
& \le T p \diffp/20 + 20 \ln d /(p\diffp) + 2 T e^{-Tp/3} \\
& \le \sqrt{T} + \frac{20 \sqrt{T} \ln d}{\diffp} + 2 T e^{-Tp/3},
\end{align*}
where the last inequality follows since $\diffp \le 1$.
\subsection{Proof of~\cref{cor:sd-batch}}
\label{sec:adpx-cor-sd-batch}
\iftoggle{arxiv}{}{
To prove~\cref{cor:sd-batch}, we first state the following result, which characterizes the performance of the private shrinking dartboard algorithm with batches. We prove this result in~\cref{sec:apdx-thm-ub-priv-DS-batch}.
\begin{theorem}
\label{thm:ub-priv-DS-batch}
Let $\ell_1,\dots,\ell_T \in [0,1]^d$ be chosen by an oblivious adversary. \cref{alg:SD} with batch size $1 \le B \le T$, $p < 1/2$, $\eta<1/2$, and $K = 2 T p/B $ has regret
\begin{equation*}
\E\left[ \sum_{t=1}^T \ell_t(x_t) - \min_{x \in [d]} \sum_{t=1}^T \ell_t(x) \right]
\le \eta T
+ \frac{B \ln d}{\eta} + 2 T e^{-Tp/3B}.
\end{equation*}
Moreover, for $\delta>0$, \cref{alg:SD} is $(\diffp,\delta)$-DP where
\begin{equation*}
\diffp =
\frac{5\eta}{Bp} + 100 T p \eta^2/B^3 + \frac{20\eta}{B} \sqrt{12 T p/B \log(1/\delta)}.
\end{equation*}
\end{theorem}
We are now ready to prove~\cref{cor:sd-batch}.
}
For these parameters, \cref{alg:SD} has privacy
\begin{equation*}
\frac{5\eta}{Bp} + \frac{100 T p \eta^2}{B^3} + \frac{20\eta}{B^{3/2}} \sqrt{ T p \log(1/\delta)}
\le \diffp/8 + \frac{T p^3 \diffp^2}{16B} + \frac{\diffp}{2} \sqrt{ T p^3 \log(1/\delta)/B}
\le \diffp.
\end{equation*}
Moreover, its regret is
\begin{align*}
\eta T
+ \frac{B\ln d}{\eta} + 2 T e^{-Tp/3B}
& \le T B p \diffp/40 + 40 \ln d /(p\diffp) + 2 T e^{-Tp/3B} \\
& \le \frac{T^{2/3} B^{4/3} \diffp}{\log^{1/3}(1/\delta)} + \frac{40T^{1/3} \log^{1/3}(1/\delta) \ln d}{B^{1/3}\diffp} + 2 T e^{-Tp/3B} \\
& \le O \left( \frac{T^{2/5} \log^{1/5}(1/\delta) \log^{4/5}(d) }{\diffp^{4/5}} + 2 T e^{-Tp/3B} \right),
\end{align*}
where the last inequality follows by choosing $B = \frac{\log^{2/5}(1/\delta) \log^{3/5}(d)}{T^{1/5} \diffp^{3/5}} $ (note that $B \ge 1$ for $\diffp \le \frac{\log^{2/3}(1/\delta) \log(d)}{T^{1/3}}$) and noticing that $Tp/B \ge \Omega(T^{2/5})$ for these parameters as we have a lower bound on $\diffp$.
\subsection{Proof of~\cref{thm:ub-priv-DS-batch}}
\label{sec:apdx-thm-ub-priv-DS-batch}
The same analysis as in~\cref{thm:ub-priv-DS} yields regret
\begin{equation*}
\E\left[ \sum_{t=1}^{\tilde T} \tilde \ell_t(\tilde x_t) - \min_{x \in [d]} \sum_{t=1}^{\tilde T} \tilde \ell_t(x) \right]
\le \eta \tilde T
+ \frac{\ln d}{\eta} + 2 \tilde T e^{-\tilde Tp/3}.
\end{equation*}
Setting $x_t = \tilde x_{\lceil t/B \rceil}$ and multiplying both sides by $B$, we have regret
\begin{equation*}
\E\left[ \sum_{t=1}^{ T} \ell_t(x_t) - \min_{x \in [d]} \sum_{t=1}^T \ell_t(x) \right]
\le \eta T
+ \frac{B \ln d}{\eta} + 2 T e^{-\tilde Tp/3}.
\end{equation*}
Let us now analyze privacy. The privacy analysis follows the same steps as in the proof of~\cref{thm:ub-priv-DS}, with two main differences. First, let $t_1$ be the time such that the batch $\tilde \ell_{t_1}$ contains the differing loss function, and write $\tilde \ell_{t} = \tilde \ell_{t}(x_{t-1})$ and $\tilde \ell'_{t} = \tilde \ell'_{t}(x_{t-1})$. Note that $|\tilde \ell_{t_1} - \tilde \ell'_{t_1}| \le 1/B$, thus we have that $Z_t$ and $Z'_t$ are $\eta/(Bp)$-indistinguishable since
\begin{align*}
\frac{P(Z'_t = 1)}{P(Z_t = 1)}
& \le \frac{(1-p)(1-\eta)^{\tilde \ell'_{t-1}}}{(1-p)(1-\eta)^{\tilde \ell_{t-1}}} \\
& \le (1-\eta)^{-|\tilde \ell_{t-1} - \tilde \ell'_{t-1}|} \\
& \le e^{2\eta/B}.
\end{align*}
Moreover, assuming w.l.o.g. that $\tilde \ell'_{t-1} \ge \tilde \ell_{t-1}$, we have
\begin{align*}
\frac{P(Z'_t = 0)}{P(Z_t = 0)}
& \le \frac{p + (1-p)(1 - (1-\eta)^{\tilde \ell'_{t-1}})}{p + (1-p)(1 - (1-\eta)^{\tilde \ell_{t-1}}) } \\
& \le 1 + \frac{(1-p)|1 - (1-\eta)^{\tilde \ell'_{t-1} - \tilde \ell_{t-1}}|}{p + (1-p)(1 - (1-\eta)^{\tilde \ell_{t-1}}) } \\
& \le 1 + \frac{|1 - (1-\eta)^{\tilde \ell'_{t-1} - \tilde \ell_{t-1}}|}{p} \\
& \le 1 + \frac{\eta |{\tilde \ell'_{t-1} - \tilde \ell_{t-1}}|}{p} \le e^{\eta/(Bp)}.
\end{align*}
The second difference in the privacy analysis is that the sensitivity of the score of the exponential mechanism is now $1/B$ hence $X_t$ and $X'_t$ are now $4\eta/B$-DP. This shows that $W_t$ given $\{ W_\ell \}_{\ell=0}^{t-1}$ and $W'_t$ given $\{ W'_\ell \}_{\ell=0}^{t-1}$ are $\diffp_t$-indistinguishable where
\begin{equation*}
\diffp_t =
\begin{cases}
0 & \text{if } t < t_1 \\
\eta/(Bp) & \text{if } t = t_1 \\
\indic{\sum_{\ell=1}^{t-1} Y_\ell < K} 4 Y_t \eta/B &\text{if } t > t_1
\end{cases}
\end{equation*}
The result then follows from advanced composition~\citep{DworkRo14}: the final privacy parameter is
\begin{align*}
\diffp_f
& \le \frac{3}{2} \sum_{t=1}^T \diffp_t^2 + \sqrt{6 \sum_{t=1}^T \diffp_t^2 \log(1/\delta) } \\
& \le \frac{3}{2} (\frac{\eta^2}{B^2p^2} + 16K \eta^2/B^2) + \sqrt{6(\frac{\eta^2}{B^2p^2} + 16K \eta^2/B^2)\log(1/\delta) } \\
& \le \frac{5\eta}{Bp} + 24 K \eta^2/B^2 + \frac{10\eta}{B} \sqrt{K \log(1/\delta)} \\
& \le \frac{5\eta}{Bp} + 100 T p \eta^2/B^3 + \frac{20\eta}{B} \sqrt{12 T p/B \log(1/\delta)}.
\end{align*}
\section{Proofs for~\cref{sec:ub-stoch}}
\label{sec:apdx-ub-stoch}
\subsection{Proof of~\cref{thm:ub-stoch-OCO}}
\label{sec:thm-ub-stoch-OCO}
The privacy claim is immediate as each sample $\ell_i$ is used only once in running a single \ed-DP algorithm. Now we prove the claim about utility. Consider time-step $t=2^i$ where we invoke a DP-SCO algorithm with $t/2 = 2^{i-1}$ samples.
Therefore, the guarantees of the algorithm imply that at each iteration $t$ in this phase,
\begin{equation*}
\E_{\ell \sim P} \left[\ell(x_t)\right] - \min_{x \in [d]} \E_{\ell \sim P}\left[ \ell(x) \right] \le O \left( \Delta_{2^{i-1}} \right) = O \left( \Delta_{2^i} \right),
\end{equation*}
where the last equality holds since halving the number of samples changes the rates $\Delta_n$ that we consider by at most a constant factor.
Therefore at phase $i$, that is $2^{i} \le t \le 2^{i+1}$, the total regret is at most
\begin{equation*}
\E\left[ \sum_{t=2^{i}}^{2^{i+1}}\ell_t(x_t) - \min_{x \in [d]} \sum_{t=2^{i}}^{2^{i+1}} \ell_t(x) \right]
\le O \left(2^i \Delta_{2^i} \right).
\end{equation*}
Summing over $i$ proves the claim.
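A sketch of this phase-doubling reduction (the DP-SCO solver is abstracted as a black box; names are ours):
\begin{verbatim}
def online_from_offline(T, dp_sco_solve, sample_stream, x0=None):
    # At t = 2^i, retrain via a fresh run of an (eps, delta)-DP-SCO
    # solver on the 2^{i-1} samples of the previous phase; each sample
    # is used by exactly one run, so privacy needs no composition.
    x, buffer, next_restart = x0, [], 1
    plays = []
    for t in range(1, T + 1):
        if t == next_restart:
            if buffer:
                x = dp_sco_solve(buffer)  # trained on t/2 samples
            buffer = []
            next_restart *= 2
        plays.append(x)                     # play x_t ...
        buffer.append(next(sample_stream))  # ... then observe l_t ~ P
    return plays
\end{verbatim}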
\subsection{Proof of~\cref{cor:DP-exp-stoch}}
\label{sec:apdx-cor-DP-exp-stoch}
The algorithm $\A_{\mathsf{\ell_1}}$ is $\diffp$-DP and has excess population loss $\Delta_n = O(\sqrt{\log(d)/n} + \log(d)/n\diffp)$~\cite[Theorem 6]{AsiFeKoTa21}. Thus, \cref{thm:ub-stoch-OCO} implies that
\begin{align*}
\E\left[ \sum_{t=1}^T \ell_t(x_t) - \min_{x \in [d]} \sum_{t=1}^T \ell_t(x) \right]
& \le \sum_{i=1}^{\log T} 2^i \Delta_{2^i} \\
& \le O \left( \sum_{i=1}^{\log T} \left( 2^{i/2} \sqrt{\log(d)} + \log(d)/\diffp \right) \right) \\
& \le O \left( \sqrt{T \log(d)} + \log(d) \log(T)/\diffp \right).
\end{align*}
\section{Conclusion}
In this work, we studied differentially private online learning problems in the realizable setting, and developed algorithms with improved rates compared to the non-realizable setting. However, several questions remain open in this domain. First, our near-optimal algorithms for DP-OPE obtain $\log^{1.5}(d)/\diffp$ regret, whereas the lower bound we have is $\Omega(\log(d)/\diffp)$; hence, perhaps there are better algorithms with tighter logarithmic factors than our sparse-vector-based algorithms. Additionally, for DP-OCO, our algorithms are optimal only in the low-dimensional setting, and polynomial gaps remain in the high-dimensional setting. Finally, optimal rates for both problems (DP-OPE and DP-OCO) are still unknown in the general non-realizable setting.
\section{Introduction}
\label{sec:intro}
We study the problem of differentially private online prediction from experts (DP-OPE), where the algorithm interacts with an adversary for $T$ rounds. In each round, the algorithm picks an expert $x_t \in [d]$ and the adversary chooses a loss function $\ell_t: [d] \to [0,1]$. The algorithm incurs loss $\ell_t(x_t)$ at round $t$, and the goal is to design algorithms that minimize the regret, that is, the cumulative loss compared to the best fixed expert in hindsight, defined as
\begin{equation*}
Reg_T = \sum_{t=1}^T \ell_t(x_t) - \min_{x^\star \in [d]} \sum_{t=1}^T \ell_t(x^\star).
\end{equation*}
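In code, for a realized loss matrix and play sequence, the regret is simple bookkeeping (a sketch):
\begin{verbatim}
import numpy as np

def regret(losses, plays):
    # losses: T x d array with entries in [0,1];
    # plays: length-T sequence of expert indices chosen by the algorithm.
    T = losses.shape[0]
    alg = losses[np.arange(T), plays].sum()
    best = losses.sum(axis=0).min()
    return alg - best
\end{verbatim}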
Online prediction from experts is an important problem in machine learning with numerous applications~\citep{AroraHaKa12}.
Without any privacy restrictions, the power of the adversary (oblivious adversary that picks the losses in advance versus adaptive adversary that picks the losses online in response to the algorithm) does not change the optimal rates for this problem~\citep{cesa2006prediction}. This has perhaps led prior work in private online experts to focus on the strongest notion of adaptive adversaries~\citep{JainKoTh12,SmithTh13,JainTh14,AgarwalSi17}. In this work, we study the problem of DP-OPE against oblivious adversaries as well and show that, somewhat surprisingly, the power of the adversary can significantly affect the optimal rates for this problem.
We consider three types of adversaries: the strongest, \emph{adaptive adversary}, can choose the loss sequence $(\ell_t)_{t=1}^T$ adversarially in an online manner, where the loss $\ell_t$ may depend arbitrarily on the choices made by the algorithm in previous time steps $(x_\ell)_{\ell=1}^{t-1}$. A slightly weaker notion is that of an \emph{oblivious adversary}, which chooses a sequence of loss functions in advance. The weakest adversary is the \emph{stochastic adversary} which chooses a distribution over loss functions and at each round samples a loss function i.i.d.~from this distribution.
In the classical non-private setting, all of these adversarial models are equivalent, in the sense that they all induce an optimal rate of order $\sqrt{T \log d}$~\citep{cesa2006prediction}. We study algorithms that are required to be differentially private, where we view the sequence of loss functions as the dataset and adjacent datasets differ in a single loss function. We note that all our algorithms are private with respect to adaptive adversaries (see~\cref{sec:pre} for precise definitions) and only the utility bounds assume a weaker adversary.
Under the constraint of \ed-differential privacy, the best existing results obtain regret of order
$$
\sqrt{T \log d} + \min\left( \frac{1}{\diffp} \sqrt{T \log(1/\delta) \log d}, \frac{1}{\diffp} \sqrt{d \log(1/\delta)} \log d \log^2 T \right)
$$
for adaptive adversaries~\citep{JainTh14,AgarwalSi17}. For pure $\diffp$-differential privacy, the best existing regret~\citep{JainTh14} is of order
$$\sqrt{T \log d} + \frac{d \log d \log^2 T}{\diffp}.$$
However, none of the existing results prove any lower bounds (besides the trivial non-private lower bound $\sqrt{T \log d}$) to certify the optimality of these rates; thus, it is currently unclear whether these rates are optimal for adaptive adversaries, let alone for oblivious and stochastic adversaries.
\subsection{Our contributions}
We study DP-OPE for different types of adversaries and develop new algorithms and lower bounds. More precisely, we obtain the following results.
\paragraph{Faster rates for oblivious adversaries (\cref{sec:ub-obl-sd}):}
We develop new algorithms for DP-OPE with oblivious adversaries based on a lazy version of the multiplicative weights algorithm. For pure $\diffp$-DP, our algorithms obtain regret $$\frac{\sqrt{T} \log d}{\diffp}.$$
This is the first algorithm to achieve sub-linear regret for pure DP in the high-dimensional regime $d \ge T$. For approximate \ed-DP, our algorithm achieves regret $$ \sqrt{T \log d } + \frac{T^{1/3} \log^{1/3}(1/\delta) \log d}{\diffp}.$$
This essentially demonstrates that the privacy cost for DP-OPE is negligible as long as $\diffp \gg T^{-1/6}$. In contrast, previous work has established privacy cost $\diffp^{-1} \sqrt{T \log d\log(1/\delta)}$ which is larger than the non-private cost even when $\diffp$ is constant and $\delta=1/T$. See~\cref{fig:table-appr,fig:table-pure} for more details.
\paragraph{Separation between adaptive and oblivious adversaries (\cref{sec:lower-bounds}):}
We prove the first lower bounds for DP-OPE with adaptive adversaries that are stronger than the non-private lower bounds. These bounds show that any private algorithm must suffer linear regret if $\diffp \le 1/\sqrt{T}$ for approximate DP and $\diffp \le 1/10$ for pure DP. As our algorithms for oblivious adversaries obtain sub-linear regret in this regime of privacy parameters, this demonstrates that the oblivious model is significantly weaker than the adaptive model in the private setting (see~\cref{fig:comp}). Moreover, these lower bounds show a separation between pure and approximate DP for DP-OPE with adaptive adversaries as the latter is necessary to obtain sub-linear regret.
\paragraph{Near-optimal rates for stochastic adversaries (\cref{sec:ub-stoch}):}
We design a general reduction that transforms any algorithm for private stochastic optimization in the offline setting into an algorithm for private online optimization with nearly the same rates (up to logarithmic factors). By building on algorithms for the offline setting~\citep{AsiFeKoTa21}, we obtain regret $O(\sqrt{T\log d} + \diffp^{-1}\log d \log T)$ for DP-OPE with stochastic adversaries. Moreover, using this reduction with general algorithms for differentially private stochastic convex optimization (DP-SCO)~\citep{FeldmanKoTa20}, we obtain near-optimal regret $O( \sqrt{T} + \diffp^{-1} \sqrt{d} \log T )$ for the problem of DP-OCO (online convex optimization) with stochastic adversaries, improving over the best existing result $\sqrt{T}d^{1/4}/\diffp $~\citep{KairouzMcSoShThXu21}.
\paragraph{Improved rates for DP-OCO (\cref{sec:oco-imp}):}
Building on our improvements for DP-OPE, we improve the existing rates for DP-OCO, where the algorithm picks $x_t \in \mc{X} = \{ x \in \R^d: \ltwo{x} \le 1\}$ and the adversary picks $\ell_t : \mc{X} \to \R$. Our rates improve over those of~\cite{KairouzMcSoShThXu21} in certain regimes.
\renewcommand{\arraystretch}{2}
\begin{table}[h]
\begin{center}
\begin{subtable}[h]{0.8\textwidth}
\begin{tabular}{| c | c | c |}
\hline
& \textbf{\darkblue{Prior work}} & \textbf{\darkblue{This work}}\\
\hline
{{\textbf{Stochastic}}} & \multirow{3}{*}{{$ \sqrt{T \log d} + \min\left( \frac{\sqrt{T \log d}}{\diffp}, \frac{\sqrt{d } \log d}{\diffp} \right)$} } & $ \sqrt{T\log d} + \frac{\log d }{\diffp} $ \\
\cline{1-1} \cline{3-3}
\textbf{{Oblivious}} & & $\sqrt{T \log d } + \frac{T^{1/3} \log d}{\diffp}$ \\
\cline{1-1} \cline{3-3}
{\textbf{Adaptive}} & & None \\
\hline
\end{tabular}
\caption{Approximate \ed-DP.}
\label{fig:table-appr}
\end{subtable}
\vspace{.5cm}
\begin{subtable}[h]{0.8\textwidth}
\hspace{2cm}
\begin{tabular}{| c | c | c |}
\hline
& \textbf{\darkblue{Prior work}} & \textbf{\darkblue{This work}}\\
\hline
{\textbf{Stochastic}} & \multirow{3}{*}{{$ \sqrt{T \log d} + \frac{d \log d}{\diffp}$} } & $ \sqrt{T\log d} + \frac{\log d }{\diffp} $ \\
\cline{1-1} \cline{3-3}
{\textbf{Oblivious}} & & $\frac{\sqrt{T} \log d}{\diffp}$ \\
\cline{1-1} \cline{3-3}
{\textbf{Adaptive}} & & None \\
\hline
\end{tabular}
\caption{Pure $\diffp$-DP.}
\label{fig:table-pure}
\end{subtable}
\caption{Upper bounds for DP-OPE with different types of adversaries. For readability, we omit logarithmic factors that depend on $T$ and $1/\delta$.}
\label{tab:temps}
\end{center}
\end{table}
\begin{figure}[htb]
\begin{subfigure}{0.5\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
width=7.0cm,
grid=major,
xmax=1,xmin=0,
ymin=0.4,ymax=1.1,
xlabel={$\alpha$},ylabel={$\beta$},
legend style={at={(0.7,0.46)},anchor=north},
tick label style={font=\small}
]
\addplot+[color=red,domain=0:1,thick,mark=none] {min(max(1/3 + x,1/2),1)};
\addlegendentry{\tiny DP-SD (Cor.~\ref{cor:sd-appr})}
\addplot+[color=blue,domain=1/3:1,thick,mark=none] {min(2/5 + 4*x/5,1)};
\addlegendentry{\tiny Batch DP-SD (Cor.~\ref{cor:sd-batch})}
\addplot+[color=black,domain=0:1,thick,mark=none] {min(max(1/2 + x,1/2),1)};
\addlegendentry{\tiny Prior work}
\addplot+[color=black,domain=0:1/2,thick,dashed,mark=none] {max(1/2,2*x)};
\addlegendentry{\tiny LB (adaptive adversary)}
\end{axis}
\end{tikzpicture}
\caption{Approximate \ed-DP}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
width=7.0cm,
grid=major,
xmax=1,xmin=0,
ymin=0.4,ymax=1.1,
xlabel={$\alpha$},
legend style={at={(0.7,0.4)},anchor=north},
tick label style={font=\small}
]
\addplot+[color=red,domain=0:1,thick,mark=none] {min(max(1/2 + x,1/2),1)};
\addlegendentry{\tiny DP-SD (Cor.~\ref{cor:sd-appr})}
\addplot+[color=black,domain=0:1,thick,mark=none] {1};
\addlegendentry{\tiny Prior work}
\addplot+[color=black,domain=0:1/2,thick,dashed,mark=none] {1};
\addlegendentry{\tiny LB (adaptive adversary)}
\end{axis}
\end{tikzpicture}
\caption{Pure $\diffp$-DP}
\end{subfigure}
\caption{Regret bounds for private prediction from experts against oblivious adversaries for the high-dimensional regime $d \ge T$.
We denote the privacy parameter $\diffp = T^{-\alpha}$ and regret $T^\beta$, and plot $\beta$ as a function of $\alpha$ (ignoring logarithmic factors). }\label{fig:comp}
\end{figure}
\subsection{Related work}
\citet{DworkNaPiRo10} were the first to study differentially private learning in the online setting and introduced the binary tree mechanism, which is an important building block of many private algorithms in the online setting. In our context of online prediction from experts, there have been several works that study this problem under the constraint of differential privacy~\citep{JainKoTh12,SmithTh13,JainTh14,AgarwalSi17}. The best existing algorithms depend on the dimensionality regime: in the high-dimensional setting, \citet{JainTh14} developed an algorithm based on follow-the-regularized-leader (FTRL) with entropy regularization that achieves regret $O(\diffp^{-1}\sqrt{T \log d \log(1/\delta)})$ for \ed-DP. For low-dimensional problems, \citet{AgarwalSi17} developed an improved algorithm that uses the binary tree mechanism to estimate the running sum of the gradients in the FTRL framework. Their algorithm achieves regret $O(\sqrt{T\log d} + \diffp^{-1} d \log d \log^2 T)$ for $\diffp$-DP. Moreover, extending their algorithm to \ed-DP results in regret $O(\sqrt{T\log d} + \diffp^{-1} \sqrt{d \log(1/\delta)} \log d \log^2 T)$.
A slightly related and somewhat easier problem is that of differentially private stochastic convex optimization (DP-SCO) which has been extensively investigated recently~\citep{BassilySmTh14,BassilyFeTaTh19,FeldmanKoTa20,AsiFeKoTa21,AsiDuFaJaTa21}. In DP-SCO, we are given $n$ samples from some distribution and we wish to minimize the excess population loss. In contrast to the online setting, here all $n$ samples are given to the algorithm and it is required to produce only one model; this makes the online version harder as the algorithm has to output a model at each time-step. However, we note that our reduction in~\cref{sec:ub-stoch} shows that DP-SCO is essentially as hard as DP-OCO with stochastic adversaries (up to logarithmic factors). For oblivious and adaptive adversaries, the online setting may be harder as it allows general loss functions that are not necessarily generated from a distribution.
Perhaps most closely related to our problem in the non-private setting is online learning with limited switching~\citep{KalaiVe05,GeulenVoWi10,AltschulerTa18,chen2020minimax,ShermanKo21}. In this setting, the algorithm aims to minimize the regret when having a budget of at most $S$ switches, that is, it can update its decision at most $S$ times. This constraint is (informally) somewhat related to privacy as the less you update the model, the less information is leaked about the data. The ideas developed in this literature turn out to be useful for our DP-OPE setting. Indeed, our algorithms in~\cref{sec:ub-obl-sd} build on ideas from~\cite{GeulenVoWi10} which developed a lazy version of multiplicative weights to limit the number of switches. Moreover,
similarly to our results, the hardness of online learning problems with limited switching can depend on the power of the adversary. For example, for OCO with limited switching, the optimal rate is $\sqrt{T} + T/S$ for oblivious adversaries and $\sqrt{T} + T/\sqrt{S}$ for adaptive adversaries~\citep{chen2020minimax,ShermanKo21}. Despite these similarities, our results do not establish a fundamental connection between privacy and switching constraints, and we leave this as an open question for future research.
\section{Lower bounds}
\label{sec:lower-bounds}
In this section, we prove new lower bounds for adaptive adversaries which show a separation from the non-adaptive case. In particular, we show that for $\diffp = 1/\sqrt{T}$, an adaptive adversary can force any \ed-DP algorithm to incur a linear regret $\Omega(T)$. Similarly, any $\diffp$-DP algorithm with $\diffp \le 1/10$ must incur linear regret against adaptive adversaries.
On the other hand, the results of~\cref{sec:upper-bounds} show that a sub-linear regret is possible for both privacy regimes with oblivious adversaries.
Our lower bound is based on fingerprinting lower bound constructions~\citep{BunUV18}, which are the basic technique for proving lower bounds in the offline setting. The following theorem summarizes our main lower bound for \ed-DP. We defer the proof to~\cref{sec:apdx-thm-lb-adaptive-adv}.
\begin{theorem}
\label{thm:lb-adaptive-adv}
Let $T$ be sufficiently large and $d \ge 2 T$.
Let $\diffp \le 1$ and $\delta \le 1/T^3$.
If $\A$ is \ed-DP then there is an adaptive adversary such that
\begin{equation*}
\E\left[\sum_{t=1}^T \ell_t(x_t) - \min_{x \in [d]} \sum_{t=1}^T \ell_t(x)\right]
\ge \Omega \left( \min \left(T, \frac{1}{(\diffp \log T)^2} \right) \right).
\end{equation*}
\end{theorem}
We also have the following lower bound for pure differential privacy. It shows that pure DP algorithms cannot learn against adaptive adversaries, that is, they must suffer linear regret for constant $\diffp$.
\begin{theorem}
\label{thm:lb-adaptive-adv-pure}
Let $\diffp \le 1/10$ and $d \ge 2T$.
If $\A$ is $\diffp$-DP then there is an adaptive adversary such that
\begin{equation*}
\E\left[\sum_{t=1}^T \ell_t(x_t) - \min_{x \in [d]} \sum_{t=1}^T \ell_t(x)\right]
\ge \Omega \left(T \right).
\end{equation*}
\end{theorem}
We can also extend the previous lower bounds to larger values of $\diffp$ (see proof in~\cref{sec:thm-lb-large-peps}).
\begin{theorem}
\label{thm:lb-large-peps}
Let $1 \le k \le O(p^{1-\rho})$ for $0 < \rho < 1$, $d = 2^k \binom{p}{k} = 2^{\Theta(k \log p)}$, $T = p/k$, and $\diffp \le \frac{\sqrt{k/T}}{200 \rho \log(T)}$ where $p$ is sufficiently large.
If $\A$ is \ed-DP with $\delta \le 1/T^3$, then there is an adaptive adversary such that
\begin{equation*}
\E\left[ \sum_{t=1}^T \ell_t(x_t) - \min_{x \in [d]} \sum_{t=1}^T \ell_t(x) \right]
\ge \Omega \left(\frac{\sqrt{T \log d}}{\diffp \log^{3/2} T } \right).
\end{equation*}
\end{theorem}
Finally, we note that for stochastic adversaries, existing lower bounds for DP-SCO immediately imply lower bounds for the online setting using online-to-batch transformations. As there is a lower bound of $\log(d)/T\diffp$ for private selection~\citep{SteinkeUll17b}, this implies a lower bound on the (normalized) excess loss for DP-SCO with linear functions in $\ell_1$-geometry (as one can reduce private selection to this problem; \citealp{AsiFeKoTa21}). This implies a lower bound of $\log(d)/\diffp$ for DP-OPE.
\subsection*{Acknowledgements}
This work has received support from the Israeli Science Foundation (ISF) grant no.~2549/19 and the Len Blavatnik and the Blavatnik Family foundation.
\printbibliography
\section{Implications for DP-OCO in $\ell_2$-geometry}
\label{sec:oco-imp}
In this section, we derive several implications of our techniques for \emph{differentially private online convex optimization} (DP-OCO) in $\ell_2$-geometry. In this setting, the algorithm chooses $x_t \in \mc{X}$ where $\mc{X} = \{x \in \R^d: \ltwo{x} \le D\}$ and the adversary responds with loss functions $\ell_t: \mc{X} \to \R$ that are convex and $L$-Lipschitz. Building on our techniques for DP-OPE, we propose new algorithms that improve over the best existing regret bounds for DP-OCO~\citep{KairouzMcSoShThXu21}, which achieve $d^{1/4} \sqrt{T/\diffp}$ for stochastic and adaptive adversaries. Our algorithms obtain (up to logarithmic factors) near-optimal regret $\sqrt{T} + \sqrt{d}/\diffp$ for stochastic adversaries, and $\sqrt{Td} + T^{1/3}d/\diffp$ for oblivious adversaries.
\subsection{Oblivious adversaries}
\label{sec:oco-obl}
Using our private shrinking dartboard algorithm, in this section we develop algorithms that improve the regret for oblivious adversaries. Our algorithms construct a covering of the parameter space $\mc{X}$ and then apply our private shrinking dartboard algorithm, where the experts are the elements of the cover. By optimizing the size of the cover to balance the approximation error against the error due to the number of experts, we obtain the following regret for DP-OCO in $\ell_2$-geometry.
We defer the proof to~\cref{sec:apdx-thm-oco-imp}.
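To make the covering step concrete, the following Python sketch (our illustration only; the proof merely requires an abstract $\rho$-net) builds a grid cover of the ball whose points then serve as the experts. The function name and the grid construction are assumptions for exposition.
\begin{verbatim}
# Grid-based rho-net of the Euclidean ball {x : ||x||_2 <= radius}.
# Spacing rho/sqrt(dim) per axis guarantees every point of the ball is
# within rho of some grid point; each kept point is one expert whose
# loss at round t is ell_t evaluated at that point.
import itertools
import numpy as np

def grid_cover(dim, radius, rho):
    step = rho / np.sqrt(dim)
    axis = np.arange(-radius, radius + step, step)
    pts = (np.array(p) for p in itertools.product(axis, repeat=dim))
    # keep a small margin so points of the ball near the boundary
    # still have a grid point within distance rho
    return [p for p in pts if np.linalg.norm(p) <= radius + rho]

# The cover has size exp(O(d log(D / rho))), which is the quantity that
# the choice of rho trades off against the T * L * rho approximation error.
\end{verbatim}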
\begin{theorem}
\label{sec:thm-oco-imp}
Let $\mc{X} = \{x \in \R^d: \ltwo{x} \le D\}$ and $\ell_1,\dots,\ell_T : \mc{X} \to \R$ be convex and $L$-Lipschitz functions chosen by an oblivious adversary.
There is an \ed-DP algorithm that has regret
\begin{equation*}
\E\left[ \sum_{t=1}^T \ell_t(x_t) - \min_{x \in \mc{X}} \sum_{t=1}^T \ell_t(x) \right]
\le LD \cdot O \left( \sqrt{T d \log T } + \frac{T^{1/3} d \log^{1/3}(1/\delta) \log T}{\diffp} \right)
.
\end{equation*}
\end{theorem}
In the high-privacy regime, this result can improve over the previous work~\citep{KairouzMcSoShThXu21} which has regret $\sqrt{T} d^{1/4}/\sqrt{\diffp}$. For example, if $d=1$ and $\diffp = T^{-1/4}$, then our regret is roughly $T^{7/12}$ while their regret is $T^{5/8}$.
\subsection{Stochastic adversaries}
\label{sec:oco-stoch}
For stochastic adversaries, we use the reduction in~\cref{sec:ub-stoch} (\cref{alg:stoch-adv}) with optimal algorithms for DP-SCO in $\ell_2$-geometry to obtain optimal regret bounds.
More precisely, we use an optimal \ed-DP-SCO algorithm from~\cite{FeldmanKoTa20}, which we call $\A_{\mathsf{\ell_2}}$. As this algorithm has excess loss $\Delta_n = LD \cdot O(1/\sqrt{n} + \sqrt{d}/n\diffp)$, the following result follows immediately from~\cref{thm:ub-stoch-OCO}. We defer the proof to~\cref{sec:apdx-cor-DP-OCO}.
We also note that we can obtain regret bounds for pure $\diffp$-DP using existing (pure) DP algorithms for DP-SCO~\citep{AsiLeDu21} with our reduction.
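The reduction itself is generic; the following hedged sketch illustrates one natural instantiation in the spirit of~\cref{alg:stoch-adv}, where an assumed black-box DP-SCO solver is retrained on disjoint batches of doubling size (so that privacy follows from parallel composition over the batches). The exact schedule of~\cref{alg:stoch-adv} may differ.
\begin{verbatim}
# Online-to-batch style sketch: play the latest offline solution, and
# retrain `dp_sco` (an assumed (eps, delta)-DP-SCO solver) on fresh,
# privacy-disjoint batches whose sizes double.
def online_from_batches(dp_sco, samples, x0):
    x, batch, next_size, plays = x0, [], 1, []
    for z in samples:            # i.i.d. data of a stochastic adversary
        plays.append(x)          # commit to x_t before seeing z_t
        batch.append(z)
        if len(batch) == next_size:
            x = dp_sco(batch)    # excess loss ~ Delta_{|batch|}
            batch, next_size = [], 2 * next_size
    return plays
\end{verbatim}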
\begin{corollary}[DP-OCO in $\ell_2$-geometry]
\label{cor:DP-OCO}
Let $\mc{X} = \{ x \in \R^d: \ltwo{x} \le D\}$
and $\ell_1,\dots,\ell_T : \mc{X} \to \R$ be convex and $L$-Lipschitz functions chosen by a stochastic adversary, $\ell_i \simiid P$.
Then
\cref{alg:stoch-adv} using $\A_{\mathsf{\ell_2}}$ is \ed-DP and has regret
\begin{equation*}
\E\left[ \sum_{t=1}^T \ell_t(x_t) - \min_{x \in \mc{X}} \sum_{t=1}^T \ell_t(x) \right]
\le LD \cdot O \left(\sqrt{T} + \frac{\sqrt{d} \log T}{\diffp} \right)
\end{equation*}
\end{corollary}
This regret is near-optimal up to logarithmic factors since we have the lower bound $\sqrt{T} + \sqrt{d}/\diffp$ for the offline version of this problem (DP-SCO in $\ell_2$-geometry) where all of the samples are given in advance~\citep{BassilySmTh14,BassilyFeTaTh19}.
\section{Problem setting and preliminaries}
\label{sec:pre}
{Online prediction from experts} (OPE) is an interactive $T$-round game between an online algorithm $\A$ and adversary $\mathsf{Adv}$. At round $t$, the algorithm $\A$ chooses an expert $x_t \in [d]$ and the adversary $\mathsf{Adv}$ picks a loss function $\ell_t \in \Fl = \{\ell \mid \ell : [d] \to [0,1]\}$ simultaneously. We let $\A_t(\ell_1,\dots,\ell_{t-1}) = x_t$ denote the mapping of algorithm $\A$ at round $t$. Similarly, we define $\mathsf{Adv}_t$ to be the mapping of the adversary at round $t$ (we provide more details on the input of $\mathsf{Adv}_t$ below depending on the type of the adversary).
The algorithm observes $\ell_t$ (after choosing $x_t$) and incurs loss $\ell_t(x_t)$. For a predefined number of rounds $T$, the regret of the algorithm $\A$ is defined as
\begin{equation*}
\reg_T(\A) = \sum_{t=1}^T \ell_t(x_t) - \min_{x^\star \in [d]} \sum_{t=1}^T \ell_t(x^\star).
\end{equation*}
We consider three types of adversary $\mathsf{Adv}$ for choosing the sequence of loss functions $\{\ell_t\}_{t=1}^T$. To help define privacy, we will diverge from the traditional presentation of these models in the online learning literature. The adversary will consist of a sequence of $T$ data points $z_1,\dots,z_T \in \domain$, and an algorithm $\ell$ that generates the sequence of losses.
For both stochastic and oblivious adversaries the loss function at step $t$ is generated based on data point $z_t$ alone; i.e. $\ell_t(\cdot) = \ell(\cdot; z_t)$, where for all $z \in \domain$, $\ell(\cdot; z)$ is an admissible loss function for the relevant setup. The two models differ in the choice of the sequence $z_1,\ldots,z_T$: for a stochastic adversary, the sequence of $z_i$'s is chosen i.i.d. from some distribution $P$ (chosen by the adversary). For an oblivious adversary, this sequence itself is adversarially chosen. In other words:
\begin{align*}
Reg_T^{\mbox{(stochastic)}}(\A) &= \sup_{\ell, P} \mathbb{E}_{z_1,\ldots,z_T \sim P^T} [Reg_T(\A) | \ell_t(\cdot) = \ell(\cdot; z_t)],\\
Reg_T^{\mbox{(oblivious)}}(\A) &= \sup_{\ell} \sup_{z_1,\ldots,z_T \in \domain^T} [Reg_T(\A) | \ell_t(\cdot) = \ell(\cdot; z_t)].
\end{align*}
In the case of an adaptive adversary, the loss at step $t$ can depend on the algorithm's choices in previous steps. Thus $\ell_t(\cdot) = \ell(\cdot; z_t, x_{1:t-1})$, where as before this loss function is constrained to be an admissible loss function for all possible values of the inputs $z_t, x_{1:t-1}$. The adaptive regret is then the worst-case regret over $z_1,\ldots,z_T$ and the mapping $\ell$:
\begin{align*}
Reg_T^{\mbox{(adaptive)}}(\A) &= \sup_{\ell} \sup_{z_1,\ldots,z_T \in \domain^T} [Reg_T(\A) | \ell_t(\cdot) = \ell(\cdot; z_t, x_{1:t-1})].
\end{align*}
Given data $z_{1:T} = (z_1,\dots,z_T) \in \domain^T$, we let $\A \circ \mathsf{Adv}(z_{1:T}) = x_{1:T}$ denote the output of the interaction between the algorithm $\A$ and adversary $\mathsf{Adv}$ given inputs $z_{1:T}$.
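Operationally, the interaction and the regret bookkeeping for an oblivious adversary amount to the following minimal rendering (losses fixed in advance; the algorithm is simply a map from the observed history to an expert).
\begin{verbatim}
# Reference implementation of the OPE protocol and regret computation
# for an oblivious adversary. `algorithm` maps (ell_1, ..., ell_{t-1})
# to an expert index in range(d).
def run_ope(algorithm, losses, d):
    history, total = [], 0.0
    for ell in losses:               # ell is a length-d list in [0, 1]
        x = algorithm(history)       # x_t = A_t(ell_1, ..., ell_{t-1})
        total += ell[x]              # pay ell_t(x_t)
        history.append(ell)          # ell_t is revealed after playing
    best = min(sum(ell[x] for ell in losses) for x in range(d))
    return total - best              # Reg_T(A)
\end{verbatim}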
Under this setting, the goal is to design private algorithms that minimize the appropriate notion of regret. To this end, we extend the standard definition of \ed-differential privacy~\citep{DworkMcNiSm06,DworkKeMcMiNa06} to the online setting.
Like most previous works, we study a stronger notion of privacy that holds even against adaptive adversaries.\footnote{\citet{JainRaSiSm21} recently formalized DP against adaptive adversaries for a different online learning problem. Their notion is equivalent to ours, but our presentation may be easier to work with.}
\begin{definition}[Adaptive DP]
\label{def:DP}
A randomized algorithm $\A$ is \ed-differentially private against adaptive adversaries (\ed-DP) if, for all sequences $\Ds=(z_1,\dots,z_T) \in \domain^T$ and $\Ds'=(z'_1,\dots,z'_T) \in \domain^T$ that differ in a single element, for any $\ell$ defining an adaptive adversary $\mathsf{Adv}$, and for all events $\cO$ in the output space of $\A \circ \mathsf{Adv}$, we have
\[
\Pr[\A \circ \mathsf{Adv}(\Ds)\in \cO] \leq e^{\eps} \Pr[\A \circ \mathsf{Adv}(\Ds')\in \cO] +\delta.
\]
\end{definition}
As remarked earlier, all our algorithms will be differentially private against adaptive adversaries, and the other adversary models are considered only from the point of view of utility. This is consistent with a long line of work on private learning algorithms, where privacy is proven for worst-case inputs while utility bounds often make distributional or other assumptions on the data.
\begin{comment}
\iftoggle{arxiv}{
\subsection{Background on differential privacy}
We recall the following preliminary results on differential privacy which we use throughout the paper.
\begin{lemma}
\label{lemma:adv-comp}
Let $T \in \N$ and $ \Fl = \{\ell \mid \ell : [d] \to [0,1]\}$ be the space of loss functions.
Let $\A_t : \Fl^t \times [d]^{t-1} \to [d]$ be $\diffp_t$-DP. Then for any $\delta>0$, their composition is $(\diffp,\delta)$-DP where
\begin{equation*}
\diffp = \frac{3}{2} \sum_{t=1}^T \diffp_t^2 + \sqrt{6 \sum_{t=1}^T \diffp_t^2 \log(1/\delta) }.
\end{equation*}
\end{lemma}
\newcommand{AboveThreshold}{AboveThreshold}
\newcommand{\init}{InitializeSparseVec}
\newcommand{\addq}{AddQuery}
\newcommand{\test}{TestAboThr}
\subsubsection{Sparse vector technique}
In this section, we recall the sparse-vector-technique~\cite{DworkRo14} which we use for the realizable setting in~\cref{sec:upper-bounds-realizable}. Given an input $\Ds = (z_1,\dots,z_n) \in \domain^n$, the algorithm takes a stream of queries $q_1,q_2,\dots,q_T$ in an online manner. We assume that each $q_i$ is $1$-sensitive, that is, $|q_i(\Ds) - q_i(\Ds') | \le 1$ for neighboring datasets $\Ds,\Ds' \in \domain^n$ that differ in a single element.
We have the following guarantee.
\begin{lemma}~\cite[theorem 3.24]{DworkRo14}
\label{lemma:svt}
Let $\Ds = (z_1,\dots,z_n) \in \domain^n$.
For a threshold $L$ and $\beta>0$, there is an $\diffp$-DP algorithm (AboveThreshold) that halts at time $k \in [T+1]$ such that for $\alpha = \frac{8(\log T + \log(2/\beta))}{\diffp}$ with probability at least $1-\beta$
\begin{itemize}
\item For all $t < k$, $q_i(\Ds) \le L + \alpha$
\item $q_k(\Ds) \ge L - \alpha$ or $k = T+1$
\end{itemize}
\end{lemma}
To facilitate the notation for using AboveThreshold~in our algorithms, we assume that it has the following components:
\begin{enumerate}
\item $\init(\diffp,L,\beta)$: initializes a new instance of AboveThreshold~with privacy parameter $\diffp$, threshold $L$, and probability parameter $\beta$. This returns an instance (data structure) $Q$ that supports the following two functions.
\item $Q.\addq(q)$: adds a new query $q:\domain^n \to \R$ to $Q$.
\item $Q.\test()$: tests if the last query that was added to $Q$ was above threshold. In that case, the algorithm stops and does not accept more queries.
\end{enumerate}
\newcommand{BinaryTree}{BinaryTree}
\subsubsection{The binary tree mechanism}
We also build on the binary tree mechanism~\cite{DworkNaPiRo10,ChanShSo11} which allows to privately estimate the running sum of a sequence of $T$ numbers $a_1,\dots,a_T \in [0,1]$.
\begin{lemma}~\cite[theorem 4.1]{DworkNaPiRo10}
\label{lemma:svt}
Let $\diffp \le 1$. There is an $\diffp$-DP algorithm (BinaryTree) that takes a stream of numbers $a_1,a_2,\dots,a_T$ and outputs $c_1,c_2,\dots,c_T$ such that for all $t \in [T]$ with probability at least $1-\beta$
\begin{equation*}
|c_t - \sum_{i=1}^t a_i| \le O \left( \frac{\mathsf{poly}(\log T/\beta)}{\diffp} \right).
\end{equation*}
\end{lemma}
\paragraph{Notation} For a positive integer $k \in \N$, we let $[k] = \{1,2,\dots,k\}$. Moreover, for $a_1,\dots,a_t$, we let $a_{1:t} = a_1,\dots,a_t$.
}{}
\end{comment}
\section{Lower bounds}
\label{sec:real-LB}
In this section, we prove lower bounds for private experts in the realizable setting which show that our upper bounds are nearly-optimal up to logarithmic factors. The lower bound demonstrates that a logarithmic dependence on $d$ is necessary even in the realizable setting. Note that for DP-OCO in the realizable setting, a lower bound of $d/T\diffp$ for pure DP follows from known lower bounds for DP-SCO in the interpolation regime~\cite{AsiChChDu22} using online-to-batch conversions~\cite{Hazan16}.
The following theorem states our lower bound for DP-OPE.
\begin{theorem}
\label{thm:lb-obl-experts}
Let $\diffp \le 1/10$ and $\delta \le \diffp/d$.
If $\A$ is $(\diffp,\delta)$-DP then there is an oblivious adversary such that $\min_{x\in[d]} \sum_{t=1}^T \ell_t(x) = 0$ and
\begin{equation*}
\E\left[\sum_{t=1}^T \ell_t(x_t) - \min_{x \in [d]} \sum_{t=1}^T \ell_t(x)\right]
\ge \Omega \left( \frac{\log(d)}{\diffp} \right).
\end{equation*}
\end{theorem}
\begin{proof}
Let $\ell^0(x)=0$ for all $x$ and for $j \in [d]$
let $\ell^j(x)$ be the function that has $\ell^j(x)=0$ for $x=j$ and otherwise $\ell^j(x)=1$. The oblivious adversary picks one of the following $d$ sequences uniformly at random: $\Ds^j = (\underbrace{\ell^0,\dots,\ell^0}_{T-k},\underbrace{\ell^j,\dots,\ell^j}_{k})$ where $k = \frac{\log d}{2 \diffp}$ and $j \in [d]$. Assume towards a contradiction that the algorithm obtains expected regret at most $\log(d)/(32\diffp)$. By Markov's inequality, there exist $d/2$ sequences on which the algorithm obtains expected regret at most $\log(d)/(16\diffp)$, where the expectation is only over the randomness of the algorithm. Assume without loss of generality that these sequences are $\Ds^1,\dots,\Ds^{d/2}$. Let $B_j$ be the set of output sequences that have low regret on $\Ds^j$, that is,
\begin{equation*}
B_j = \{ (x_1,\dots,x_T) \in [d]^T: \sum_{t=1}^T \ell^j(x_t) \le \log(d)/(8\diffp) \}.
\end{equation*}
Note that $B_j \cap B_{j'} = \emptyset$ since
if $x_{1:T} \in B_j$ then at least $3k/4 = 3\log(d)/(8\diffp)$ of the last $k$ outputs must be equal to $j$. Now Markov's inequality implies that
\begin{equation*}
\P(\A(\Ds^j) \in B_j) \ge 1/2.
\end{equation*}
Moreover, group privacy gives
\begin{align*}
\P(\A(\Ds^{j}) \in B_{j'})
& \ge e^{-k\diffp} \P(\A(\Ds^{j'}) \in B_{j'}) - k e^{-\diffp} \delta \\
& \ge \frac{1}{2 \sqrt{d}} - \frac{\log(d)}{2\diffp} \delta \\
& \ge \frac{1}{4 \sqrt{d}},
\end{align*}
where the last inequality follows since $\delta \le \diffp/d$.
Overall we get that
\begin{align*}
\frac{d/2-1}{4\sqrt{d}} \le
\P(\A(\Ds^{j}) \notin B_{j})
\le \frac{1}{2} ,
\end{align*}
which is a contradiction for $d \ge 32$.
\end{proof}
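As a quick numeric sanity check of this last step (our illustration): the packing bound $(d/2-1)/(4\sqrt{d})$ indeed exceeds the Markov bound $1/2$ once $d \ge 32$.
\begin{verbatim}
# Compare (d/2 - 1) / (4 sqrt(d)) against the Markov bound 1/2.
import math
for d in [16, 32, 64, 1024]:
    lhs = (d / 2 - 1) / (4 * math.sqrt(d))
    print(d, round(lhs, 3), lhs > 0.5)
# d=16 -> 0.437 (no contradiction yet); d=32 -> 0.663 > 1/2
\end{verbatim}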
\section{Potential-based algorithm for DP-experts}
\label{sec:alg-pot}
The algorithm in~\cref{alg:SVT-zero-loss} uses the sparse vector technique to evaluate the current arm. In this section, we present a more direct algorithm that uses the sparse vector technique on a potential function. This more direct approach allows us to improve the polylogarithmic terms, which results in significant improvements for DP-OCO as we show in the next section.
We describe the algorithm in a more abstract way, which will translate more easily to OCO. Let $\mu$ be the uniform measure over the space of experts; this will be the uniform distribution over the finite set of experts in the case of DP-OPE, and the uniform distribution over the unit ball in $\mathbb{R}^d$ in the case of DP-OCO. At a high level, the algorithm is similar to \cref{alg:SVT-zero-loss} in that it samples from the exponential mechanism, and keeps that sample until a certain condition is met. While the condition there dealt only with the chosen expert, we will instead test a global condition. For a parameter $\eta$, we define the potential $\phi_{\eta}(t) = \int \exp\big(-\eta \sum_{i=1}^t \ell_i(x)\big) \mu(x) \text{d}x$. The condition we check (using the sparse vector technique) is that $\log \phi_{\eta}(t)$ has decreased by at least $\alpha$ since the last resample. We formally describe the algorithm in \cref{alg:potential-experts}.
\begin{algorithm}[t]
\caption{Potential-based Algorithm for Experts}
\label{alg:potential-experts}
\begin{algorithmic}[1]
\REQUIRE Switching bound $K$, Parameters $\eta, \alpha$, failure probability $\beta$, per-phase privacy parameter $\diffp_0$, measure $\mu$ on set of experts.
\STATE Set $k=0$, $t^*(k)=0$ and sample current expert $x_0 \sim \mu$.
\FOR{$t=1$ to $T$\,}
\IF{$k < K$}
\STATE $Q = \init(\diffp_0,2\alpha,\beta/T)$
\WHILE{Q.\test() = False}
\STATE Set $ x_{t} = x_{t-1}$
\STATE Define a new query $q_t = \frac{1}{\eta}(\log \phi_{\eta}(t^*(k)) - \log \phi_{\eta}(t))$
\STATE Add new query $Q.\addq(q_t)$
\STATE Receive loss function $\ell_t: [d] \to [0,1]$
\STATE Pay cost $\ell_t(x_t)$
\STATE Update $t = t+1$
\ENDWHILE
\STATE Sample $x_t$ from the distribution:
\begin{equation*}
P(x_t = x) = e^{-\eta \sum_{i=1}^{t-1} \ell_i(x)} \mu(x) / \phi_{\eta}(t-1)
\end{equation*}
\STATE $k = k + 1$.
\STATE $t^*(k) = t-1$.
\ELSE
\STATE Set $ x_{t} = x_{t-1}$
\STATE Receive loss function $\ell_t: [d] \to [0,1]$
\STATE Pay cost $\ell_t(x_t)$
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
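For concreteness, the following Python sketch is a simplified finite-expert rendering of~\cref{alg:potential-experts}; the Laplace calibration inside the AboveThreshold subroutine is the textbook one, and the pay/resample ordering is slightly streamlined relative to the pseudocode above, so this is an illustration rather than a faithful transcription.
\begin{verbatim}
import numpy as np

def above_threshold(eps0, thresh):
    # Textbook AboveThreshold: one noisy threshold, fresh noise per query.
    noisy = thresh + np.random.laplace(scale=2.0 / eps0)
    return lambda q: q + np.random.laplace(scale=4.0 / eps0) >= noisy

def potential_experts(losses, eta, alpha, eps0, K):
    d = len(losses[0])
    cum = np.zeros(d)                           # cumulative losses
    log_phi = lambda: np.log(np.mean(np.exp(-eta * cum)))
    x, k = np.random.randint(d), 0              # x_0 ~ mu (uniform)
    anchor, test = log_phi(), above_threshold(eps0, 2 * alpha)
    total = 0.0
    for ell in losses:
        total += ell[x]                         # pay ell_t(x_t)
        cum += np.asarray(ell)
        q = (anchor - log_phi()) / eta          # the query q_t
        if k < K and test(q):                   # potential dropped enough
            w = np.exp(-eta * cum); w /= w.sum()
            x = np.random.choice(d, p=w)        # exponential mechanism
            anchor, test = log_phi(), above_threshold(eps0, 2 * alpha)
            k += 1
    return total
\end{verbatim}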
We first observe some simple properties of $\phi_{\eta}$.
\begin{lemma}
Suppose that the per-step losses $\ell_t(x)$ are in $[0,1]$. Then $\phi_{\eta}(t-1) \geq \phi_{\eta}(t) \geq e^{-\eta}\phi_{\eta}(t-1)$.
\end{lemma}
\begin{proof}
For any $x$, we have
\begin{align*}
\sum_{i=1}^{t-1} \ell_i(x) \leq \sum_{i=1}^{t} \ell_i(x) \leq \sum_{i=1}^{t-1} \ell_i(x) + 1.
\end{align*}
This implies that
\begin{align*}
e^{-\eta \sum_{i=1}^{t-1} \ell_i(x)} \geq e^{-\eta \sum_{i=1}^{t} \ell_i(x)} \geq e^{-\eta} e^{-\eta \sum_{i=1}^{t-1} \ell_i(x)}.
\end{align*}
The claim follows by integrating with respect to $\mu$.
\end{proof}
\begin{lemma}
\label{lem:svt_worked}
Suppose that $\alpha \geq \frac{8\eta (\log T + \log 2T/\beta)}{\diffp_0}$. Then during the run of the algorithm, $\log\phi_\eta(t^*(k)) -\log\phi_\eta(t) \leq 3\alpha$ always holds except with probability $\beta$. Moreover, $\log\phi_\eta(t^*(k)) - \log\phi_\eta(t^*(k+1)) \geq \alpha$ holds for all $k$ except with probability $\beta$. \end{lemma}
\begin{proof}
We use sparse vector with a threshold of $2\alpha$. The condition on $\alpha$, along with the first property of the sparse vector algorithm (\cref{lemma:svt}), implies that $\log\phi_\eta(t^*(k)) -\log\phi_\eta(t) \leq 3\alpha$ always holds. The second property of sparse vector implies the second claim.
\end{proof}
We next analyze the loss of the algorithm.
\begin{lemma}
Suppose that $\eta < \frac 1 2$ and losses are in the range $[0,1]$. For some $\gamma > 0$, let $L^*(\gamma)$ be such that $\Pr_{x \sim \mu}[\sum_{t=1}^T \ell_t(x) \leq L^*(\gamma)] \geq \gamma$. Let $K \geq \frac{\eta L^*(\gamma) + \log 1/\gamma}{\alpha}$. Then the expected loss of \cref{alg:potential-experts} satisfies
\begin{align*}
\E[\sum_{t=1}^T \ell_t(x_t)] \leq 2e^{3\alpha}(L^*(\gamma) + \frac{\log 1/\gamma}{\eta}) + 2\beta T.
\end{align*}
\end{lemma}
\begin{proof}
We first observe that by the assumption on sufficient probability mass on experts with loss at most $L^*(\gamma)$, it follows that $\phi_\eta(T) \geq \gamma e^{-\eta L^*(\gamma)}\phi_\eta(0)$. Let $E$ be the event that both the conditions on $\phi_\eta(t)$'s in ~\cref{lem:svt_worked} hold. Thus $\Pr[E] \geq 1-2\beta$. The second condition implies that if we reach the bound $K$ on the number of switches, then conditioned on $E$, $\phi_\eta(T) \leq e^{-K\alpha}\phi_\eta(0)$. Thus if $K \geq \frac{\eta L^*(\gamma) + \log 1/\gamma}{\alpha}$, then the switching bound is never reached under $E$.
Now note that if we sampled $x \sim e^{-\eta \sum_{i=1}^{t-1} \ell_i(x)} / \phi_{\eta}(t-1)$, then we would have
\begin{align*}
\E[\eta\ell_t(x)/2] &= \frac{1}{\phi_{\eta}(t-1)} \int \frac{\eta\ell_t(x)}{2} e^{-\eta \sum_{i=1}^{t-1} \ell_i(x)} \mu(x) \text{d}x\\
& \leq \frac{1}{\phi_{\eta}(t-1)} \int (1-e^{-\eta\ell_t(x)})e^{-\eta \sum_{i=1}^{t-1} \ell_i(x)} \mu(x) \text{d}x\\
& = \frac{1}{\phi_{\eta}(t-1)} \int (e^{-\eta \sum_{i=1}^{t-1} \ell_i(x)} - e^{-\eta \sum_{i=1}^{t} \ell_i(x)}) \mu(x) \text{d}x\\
&= \frac{\phi_{\eta}(t-1)- \phi_{\eta}(t)}{\phi_{\eta}(t-1)}\\
&\leq \log \frac{\phi_{\eta}(t-1)}{\phi_{\eta}(t)}
\end{align*}
Here the first inequality uses the fact that for $y \in [0,1]$, we have $y/2 < 1- e^{-y}$, and the second inequality uses $(1-r) \leq -\log r$ for $r \in [\frac{1}{2},1]$. The assumptions on $\eta$ and the losses imply that the relevant conditions hold.
Our algorithm, however, uses an outdated sample based on the distribution at $t^*(k)$ whenever $t$ is in phase $k$. Nevertheless, note that the cumulative losses increase with $t$, so that $e^{-\eta \sum_{i=1}^{t-1} \ell_i(x)}$ is smaller than $e^{-\eta \sum_{i=1}^{t^*(k)} \ell_i(x)}$.
We then write
\begin{align*}
\E[\eta\ell_t(x_t)/2 \cdot \mathbf{1}(E)] &= \frac{1}{\phi_{\eta}(t^*(k))} \int \mathbf{1}(E) \frac{\eta\ell_t(x)}{2} e^{-\eta \sum_{i=1}^{t^*(k)} \ell_i(x)} \mu(x) \text{d}x\\ &\leq \frac{\phi_{\eta}(t-1)}{\phi_{\eta}(t^*(k))}\cdot \frac{1}{\phi_{\eta}(t-1)}\int \frac{\eta\ell_t(x)}{2} e^{-\eta \sum_{i=1}^{t-1} \ell_i(x)} \mu(x) \text{d}x\\
&\leq e^{3\alpha}\log \frac{\phi_{\eta}(t-1)}{\phi_{\eta}(t)}.
\end{align*}
Rearranging and summing over the $t$ steps, we get
\begin{align*}
\E[\sum_{t=1}^T\ell_t(x_t)\cdot \mathbf{1}(E)]
&\leq \frac{2e^{3\alpha}}{\eta} \sum_{t=1}^T\log \frac{\phi_{\eta}(t-1)}{\phi_{\eta}(t)}\\
&\leq \frac{2e^{3\alpha}}{\eta} \log \frac{\phi_{\eta}(0)}{\phi_{\eta}(T)}.
\end{align*}
Since $\phi_\eta(T) \geq \gamma e^{-\eta L^*(\gamma)}\phi_\eta(0)$, we can upper bound this expression.
Furthermore, $\E[\sum_{t=1}^T\ell_t(x_t)\cdot \mathbf{1}(E^c)]$ is clearly bounded by $\Pr[E^c] \cdot T \leq 2\beta T$. The claim follows.
\end{proof}
This can yield pure or approximate DP. Indeed, each phase of the algorithm satisfies $(\eta + \diffp_0)$-DP. Composing over the at most $K$ phases, we get $K(\eta + \diffp_0)$-DP or $(2(\eta + \diffp_0)\sqrt{K \log 1/\delta}, \delta)$-DP.
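A small helper mirroring these two composition options (basic composition for pure DP; advanced composition, in the form quoted above, for approximate DP):
\begin{verbatim}
import math

def compose_basic(eps_phase, K):        # K(eta + eps_0)-DP
    return K * eps_phase

def compose_advanced(eps_phase, K, delta):
    # (2(eta + eps_0) sqrt(K log(1/delta)), delta)-DP
    return 2 * eps_phase * math.sqrt(K * math.log(1 / delta))
\end{verbatim}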
With this in mind, we can now plug in appropriate values of $\eta, \alpha, \diffp_0$ to derive the following result for pure $\diffp$-DP.
\begin{theorem}(pure DP)
Suppose that we run \cref{alg:potential-experts} with parameters $\alpha = 1$, $\beta = \frac{1}{T^2}$, $K=2 \log 1/\gamma$, $\diffp_0 = \diffp / K$, and $\eta =\diffp_0 / (56 \log T) = \diffp / (112 \log T \log 1/\gamma)$. If $L^*(\gamma) \leq \log^2 1/\gamma \log T / \diffp$, then this algorithm satisfies $\eps$-DP and incurs loss $O(L^* + \frac{\log^2 1/\gamma\log T}{\diffp})$. In particular, for DP-OPE with $d$ experts, assuming that $OPT \leq \log^2 d \log T / \diffp$, we get regret $O(\log^2 d \log T / \diffp)$ by setting $\gamma = \frac 1 d$.
\end{theorem}
Moreover, we have the following improved bound for \ed-DP.
\begin{theorem}(\ed-DP)
\label{thm:pot-appr}
Suppose that for $\eps < 1$, we run \cref{alg:potential-experts} with parameters $\alpha = 1$, $\beta = \frac{1}{T^2}$, $K=2 \log 1/\gamma$ , $\diffp_0 = \diffp / \sqrt{K\log 1/\delta}$, and $\eta =\diffp_0 / (56 \log T) = \diffp / (112 \log T \sqrt{\log 1/\gamma\log 1/\delta})$. If $L^*(\gamma) \leq \log^{1.5} 1/\gamma \log T \sqrt{\log 1/\delta}/ \diffp$, then this algorithm satisfies $(\eps,\delta)$-DP and incurs loss $O(L^* + \frac{\log^{1.5} 1/\gamma\log T\sqrt{\log 1/\delta}}{\diffp})$. In particular, for DP-OPE with $d$ experts, assuming that $OPT \leq \log^{1.5} d \log T\sqrt{\log 1/\delta} / \diffp$, we get regret $O(\log^{1.5} d \log T \sqrt{\log 1/\delta}/ \diffp)$ by setting $\gamma = \frac 1 d$.
\end{theorem}
For the setting of DP-OCO in the realizable case, we get that, by Lipschitzness, a small ball of radius $r$ around $x^*$ has loss $O(rLT)$. Setting $r=(LT)^{-1}$ then ensures that this value is at most $1$. This ball has measure $\gamma = (r/D)^{d}$. Scaling so that $D,L \leq 1$,
we get regret $O(d^2 \log^3 T / \diffp)$ for the case of $\eps$-DP and $O(d^{1.5}\log^{2.5} T \sqrt{\log 1/\delta}/ \diffp)$ for $(\eps,\delta)$-DP. Finally, we note that this algorithm only accesses the loss functions to sample from the exponential mechanism and to compute the potential $\phi_{\eta}(t)$. Both of these can be implemented in polynomial time when $\mu$ is uniform over the ball and the losses are convex, using standard techniques from log-concave sampling. Thus this algorithm can be run in polynomial time for DP-OCO.
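For concreteness, the measure computation behind these bounds reads
\begin{equation*}
\gamma = \Big(\frac{r}{D}\Big)^{d}, \qquad r = \frac{1}{LT} \quad\Longrightarrow\quad \log\frac{1}{\gamma} = d \log(DLT),
\end{equation*}
so that, with $D, L \le 1$, the pure-DP loss bound $O\big(L^* + \log^2(1/\gamma)\log T/\diffp\big)$ becomes $O(d^2 \log^3 T/\diffp)$.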
\section{Missing proofs for~\Cref{sec:dp-oco}}
\subsection{Proof of~\Cref{thm:DP-OCO}}
\label{sec:proof-dp-oco}
Let $x_1,\dots,x_T$ be the experts chosen by the algorithm.
First, \Cref{thm:ub-realizable-ada} implies that this algorithm obtains the following regret with respect to the best expert
\begin{equation*}
\sum_{t=1}^T \ell_t(x_t) - L^{\opt}_{\mathsf{experts}} \le (L^{\opt}_{\mathsf{experts}} + \frac{1}{\diffp}) \cdot O\left( \mathsf{poly} (d\log \frac{DT}{\rho\beta}) \right),
\end{equation*}
where $L^{\opt}_{\mathsf{experts}} = \min_{x \in \mc{X}^\rho_{\mathsf{experts}}} \sum_{t=1}^T \ell_t(x)$. Since $\ell_t$ is $L$-Lipschitz for each $t \in [T]$, we obtain that
\begin{equation*}
| L\opt - L^{\opt}_{\mathsf{experts}} |
= |\min_{x \in \mc{X}} \sum_{t=1}^T \ell_t(x) - \min_{x \in \mc{X}^\rho_{\mathsf{experts}}} \sum_{t=1}^T \ell_t(x) |
\le T L \rho.
\end{equation*}
Overall this gives
\begin{equation*}
\sum_{t=1}^T \ell_t(x_t) - L^{\opt}
\le (L^{\opt} + TL\rho + \frac{1}{\diffp}) \cdot O\left( \mathsf{poly} (d\log \frac{DT}{\rho\beta}) \right).
\end{equation*}
Setting $\rho = 1/(LT)$ proves the claim.
\begin{comment}
\subsection{Proof of~\Cref{thm:dp-oco-smooth}}
\label{proof:dp-oco-smooth}
The proof follows similar arguments to the proof of Theorem 5.1 in~\cite{KairouzMcSoShThXu21}. Let
\begin{equation*}
x_{t+1} = \argmin_{x \in \mc{X}} \sum_{i=1}^t \<\nabla \ell_i(x_i), x \> + \frac{\lambda}{2} \ltwo{x}^2 + \<b_t,x\>,
\end{equation*}
be the iteration of DP-FTRL where $b_t$ is the noise added by the binary tree mechanism. Moreover, let $\hat x_{t+1}$ be the non-private solution, that is,
\begin{equation*}
\hat x_{t+1} = \argmin_{x \in \mc{X}} \sum_{i=1}^t \<\nabla \ell_i(x_i), x \> + \frac{\lambda}{2} \ltwo{x}^2 .
\end{equation*}
Lemma C.2 in~\cite{KairouzMcSoShThXu21} states that $\ltwo{x_{t+1} - \hat x_{t+1}} \le \ltwo{b_t}/\lambda$. Therefore, we have
\begin{align*}
\sum_{t=1}^T \ell_t(x_t) - \ell_t(x\opt)
& \le \sum_{t=1}^T \< \nabla \ell_t(x_t), x_t - x\opt\> \\
& = \sum_{t=1}^T \< \nabla \ell_t(x_t), x_t - \hat x_t\> + \sum_{t=1}^T \< \nabla \ell_t(x_t), \hat x_t - x\opt\> \\
& \le \sum_{t=1}^T \ltwo{\nabla \ell_t(x_t)} \ltwo{x_t - \hat x_t} + \sum_{t=1}^T \< \nabla \ell_t(x_t), \hat x_t - x\opt\> \\
& \le \frac{1}{8 \beta} \sum_{t=1}^T \ltwo{\nabla \ell_t(x_t)}^2 + 4\beta \sum_{t=1}^T\ltwo{x_t - \hat x_t}^2 + \sum_{t=1}^T \< \nabla \ell_t(x_t), \hat x_t - x\opt\> \\
& \le \frac{1}{2} \sum_{t=1}^T \ell_t(x_t) + 4 \beta \sum_{t=1}^T \ltwo{b_t}^2/\lambda^2 + \sum_{t=1}^T \< \nabla \ell_t(x_t), \hat x_t - x\opt\>,
\end{align*}
where the second inequality follows from the Fenchel-Young inequality.
We can now upper bound the right term. Indeed, Theorem 5.2 in~\cite{Hazan16} imply that FTRL has
\begin{align*}
\sum_{t=1}^T \< \nabla \ell_t(x_t), \hat x_t - x\opt\> & \le \frac{2}{\lambda} \sum_{t=1}^T \ltwo{\nabla \ell_t(x_t)}^2 + \lambda D^2 \\
& \le \frac{8 \beta}{\lambda} \sum_{t=1}^T \ell_t( x_t) + \lambda D^2.
\end{align*}
Overall we now get
\begin{align*}
\sum_{t=1}^T \ell_t(x_t) - \ell_t(x\opt)
& \le \frac{1}{2} \sum_{t=1}^T \ell_t(x_t) + \frac{4\beta }{ \lambda^2} \sum_{t=1}^T \ltwo{b_t}^2 + \frac{8 \beta}{\lambda} \sum_{t=1}^T \ell_t( x_t) + \lambda D^2.
\end{align*}
The binary tree mechanism also guarantees that for all $t \in [T]$, $\E[\ltwo{b_t}^2] \le O \left( \frac{L^2 d \log(T) \log(1/\delta)}{\diffp^2} \right)$ (see Appendix B.1 in~\cite{KairouzMcSoShThXu21}).
Thus, taking expectation and setting $\lambda = 32 \beta + \left( \frac{\beta}{\diffp^2} (L/D)^2 T d \log(T) \log(1/\delta) \right)^{1/3}$, we have
\begin{align*}
\E \left[\sum_{t=1}^T \ell_t(x_t) - \ell_t(x\opt) \right]
& \le O \left(L\opt + \beta D^2 + \left( \beta D^2 (LD)^2 \frac{T d \log(T) \log(1/\delta)}{\diffp^2} \right)^{1/3} \right).
\end{align*}
\end{comment}
\section{Additional details for~\Cref{sec:upper-bounds-realizable}}
\subsection{A binary-tree based algorithm}
\label{sec:bt-experts}
In this section, we present another algorithm which achieves the optimal regret for settings with zero expert loss. Instead of using sparse-vector, this algorithm builds on the binary tree mechanism. The idea is to repeatedly select $O(\mathsf{poly}(\log(dT)))$ random good experts and apply the binary tree mechanism to calculate a private version of their aggregate losses. Whenever all of the chosen experts are detected to have non-zero loss, we choose a new set of good experts. Similarly to~\cref{alg:SVT-zero-loss}, each new phase reduces the number of good experts by a constant factor, since an oblivious adversary does not know the choices of the algorithm; hence there are only $O(\mathsf{poly}(\log(dT)))$ phases.
We provide a somewhat informal description of the algorithm in~\cref{alg:Bin-tree-zero-loss}. This algorithm also achieves regret $O(\mathsf{poly}(\log(dT))/\diffp)$ in the realizable case. We do not provide a proof as it is somewhat similar to that of~\cref{thm:ub-realizable}.
\begin{algorithm}
\caption{Binary-tree algorithm for zero loss experts (sketch)}
\label{alg:Bin-tree-zero-loss}
\begin{algorithmic}[1]
\STATE Set $k=0$ and $B= O(\mathsf{poly}(\log(dT)))$
\WHILE{$t \le T$\,}
\STATE Use the exponential mechanism with score function $s(x) = \sum_{i=1}^t \ell_i(x)$ to privately select a set $S_k$ of $B$ experts from $[d] \setminus \cup_{0 \le i \le k} S_i$
\STATE Apply binary tree for each expert $x \in S_k$ to get private aggregate estimates for $ \sum_{i=1}^t \ell_i(x)$ for every $t \in [T]$
\STATE Let $\hat c_{t,x}$ denote the output of the binary tree for expert $x \in S_k$ at time $t$
\WHILE{there exists $x \in S_k$ such that $\hat c_{t,x} \le O(\mathsf{poly}(\log(dT))/\diffp)$}
\STATE Receive $\ell_t : [d] \to [0,1]$
\STATE Choose $x_t \in S_k$ that minimizes $\hat c_{t,x}$
\STATE Pay error $\ell_t(x_t)$
\STATE $t = t+1$
\ENDWHILE
\STATE $k = k + 1$
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\subsection{Proof for~\Cref{thm:ub-realizable-ada}}
\label{sec:proof-ub-realizable-ada}
First we prove privacy. Note that $\bar L\opt$ can change at most $\log(T)$ times as $L\opt \le T$. Therefore, we have at most $\log(T)$ applications of~\Cref{alg:SVT-zero-loss}, each of which is $\diffp/(2\log(T))$-DP. Moreover, since we have at most $K$ applications of the exponential mechanism in~\Cref{alg:SVT-zero-loss}, we have at most $K \log(T)$ applications of the Laplace mechanism in~\Cref{alg:SVT-ada}, each of which is $\diffp/(4K\log(T))$-DP. Overall, privacy composition implies that the final algorithm is $\diffp$-DP.
Now we prove utility. \Cref{alg:SVT-ada} consists of at most $\log(T)$ applications of~\Cref{alg:SVT-zero-loss} with different values of $\bar L\opt$. We will show that each of these applications incurs low regret.
Consider an application of~\Cref{alg:SVT-zero-loss} with $\bar L\opt$. If $\bar L\opt \ge L\opt$, then~\Cref{thm:ub-realizable} implies that the regret is at most $(\bar L\opt + \frac{1}{\diffp}) \cdot \mathsf{poly} (\log \frac{Td}{\beta})$. Now consider the case where $\bar L\opt \le L\opt$. We will show that~\Cref{alg:SVT-ada} will double $\bar L\opt$ and that the regret of~\Cref{alg:SVT-zero-loss} up to that time-step is not too large.
Let $t_0$ be the largest time such that $\min_{x \in [d]} \sum_{t=1}^{t_0} \ell_t(x) \le \bar L\opt$. Note that up to time $t_0$, the best expert had loss at most $\bar L\opt$, hence the regret up to time $t_0$ is at most $(\bar L\opt + \frac{1}{\diffp}) \cdot \mathsf{poly} (\log \frac{Td}{\beta})$. Now let $t_1$ denote the next time-step at which~\Cref{alg:SVT-zero-loss} applies the exponential mechanism. Sparse-vector guarantees that in the range $[t_0,t_1]$ the algorithm suffers regret at most $\frac{1}{\diffp} \cdot \mathsf{poly} (\log \frac{Td}{\beta})$. Moreover, the guarantees of the Laplace mechanism imply that at this time-step, $\bar L_t \ge \bar L\opt - 5\log(T/\beta)/\diffp_0$ with probability $1-\beta$, hence~\Cref{alg:SVT-ada} will double $\bar L\opt$ and run a new application of~\Cref{alg:SVT-zero-loss}. Overall, an application of~\Cref{alg:SVT-zero-loss} with $\bar L\opt \le L\opt$ results in regret $(L\opt + \frac{1}{\diffp}) \cdot \mathsf{poly} (\log \frac{Td}{\beta})$ and doubles $\bar L\opt$. Finally, note that if $\bar L\opt \ge L\opt + 5\log(T/\beta)/\diffp_0$ then with probability $1-\beta$ the algorithm will not double the value of $\bar L\opt$. As each application of~\Cref{alg:SVT-zero-loss} has regret $(\bar L\opt + \frac{1}{\diffp}) \cdot \mathsf{poly} (\log \frac{Td}{\beta})$ and $\bar L\opt$ is bounded by $L\opt + 5\log(T/\beta)/\diffp_0$ with high probability, this proves the claim.
\section{Additional details for~\Cref{sec:alg-pot}}
\label{sec:apds-alg-pot}
We can obtain the following regret bounds for $\diffp$-DP using \cref{alg:potential-experts}.
\begin{theorem}(pure DP)
Suppose that we run \cref{alg:potential-experts} with parameters $\alpha = 1$, $\beta = \frac{1}{T^2}$, $K=2 \log 1/\gamma$, $\diffp_0 = \diffp / K$, and $\eta =\diffp_0 / (56 \log T) = \diffp / (112 \log T \log 1/\gamma)$. If $L^*(\gamma) \leq \log^2 1/\gamma \log T / \diffp$, then this algorithm satisfies $\eps$-DP and incurs loss $O(L^* + \frac{\log^2 1/\gamma\log T}{\diffp})$. In particular, for DP-OPE with $d$ experts, assuming that $OPT \leq \log^2 d \log T / \diffp$, we get regret $O(\log^2 d \log T / \diffp)$ by setting $\gamma = \frac 1 d$.
\end{theorem}
\section{Introduction}
We study the problem of private online optimization in the realizable setting where there is a zero-loss solution. In this problem, an online algorithm $\A$ interacts with an adversary over $T$ rounds. The adversary picks a (non-negative) loss function $\ell_t: \mc{X} \to \R$ at round $t$ and simultaneously the algorithm $\A$ picks a response $x_t$, suffering loss $\ell_t(x_t)$. The algorithm aims to minimize the regret, which is the loss compared to the best solution $x\opt \in \mc{X}$ in hindsight, while at the same time keeping the sequence of predictions $x_1,\ldots,x_T$ differentially private with respect to individual loss functions.
In this paper, we focus on two well-studied instances of this problem. In differentially private online prediction from experts (DP-OPE), we have $d$ experts $\mc{X} = [d]$ and the adversary chooses a loss function $\ell_t: [d] \to [0,1]$. Our second setting is differentially private online convex optimization (DP-OCO) where $\mc{X} \subset \R^d$ is a convex set with bounded diameter, and the adversary chooses convex and $L$-Lipschitz loss functions $\ell_t : \mc{X} \to \R^+$.
Several papers have recently studied DP-OPE and DP-OCO in the general non-realizable setting~\cite{JainKoTh12,SmithTh13,JainTh14,AgarwalSi17,KairouzMcSoShThXu21}. These papers have resulted in different algorithms with sub-linear regret for both problems. For DP-OPE, \citet{AgarwalSi17,JainTh14} developed private versions of follow-the-regularized-leader (FTRL) obtaining (normalized) regret $\min\big\{ {d}/{T\diffp}, {\sqrt{T \log d}}/{T\diffp} \big\}$.
More recently, \citet{AsiFeKoTa22} developed low-switching algorithms for DP-OPE with oblivious adversaries, obtaining normalized regret roughly $O(\sqrt{\log (d)/T} + \log d/T^{2/3} \eps)$.
Additionally, for the problem of DP-OCO, \citet{KairouzMcSoShThXu21} have recently proposed a DP-FTRL algorithm based on the binary tree mechanism which obtains (normalized) regret $ \big({\sqrt{d}}/{T\diffp} \big)^{1/2}$.
Despite this progress, the regret bounds of existing algorithms are still polynomially worse than existing lower bounds. Currently, the only existing lower bounds for oblivious adversaries are the trivial bounds from the non-online versions of the same problems: for DP-OPE, lower bounds for private selection~\cite{SteinkeUll17b} imply a (normalized) regret lower bound of $\Omega({\log(d)}/{T\diffp})$, while existing lower bounds for DP-SCO~\cite{FeldmanKoTa20} give a (normalized) regret lower bound of $\Omega({\sqrt{d}}/{T\diffp})$ for DP-OCO.
Practical optimization problems arising from over-parameterized models often lead to instances that additionally satisfy {\em realizability}, i.e., that the optimal loss is zero or close to zero. This motivates designing algorithms that can do better under this assumption. Realizability has been studied since the early days of learning theory and is ubiquitous in the non-private online optimization literature~\cite{SrebroSrTe10,Shalev12,Hazan16}.
It has proven useful for improving regret bounds in non-private OPE and OCO~\cite{Shalev12,SrebroSrTe10} and in the closely related problem of differentially private stochastic convex optimization (DP-SCO)~\cite{AsiChChDu22}. In this work we study DP-OPE and DP-OCO in the realizable setting and develop new algorithms that obtain near-optimal regret bounds in several settings.
\subsection{Contributions}
We propose new algorithms and lower bounds
for the problems of differentially private online prediction from experts (DP-OPE) and differentially private online convex optimization (DP-OCO) in the realizable setting. The following are our primary contributions:
\begin{itemize}
\item \textbf{Near-optimal algorithms for DP-OPE.}~~
We design new algorithms that obtain near-optimal regret $\wt O \left( \log^{1.5}(d)/\diffp \right)$ for DP-OPE with $d$ experts when there is a zero-loss expert.
The best existing algorithms for non-realizable DP-OPE obtain significantly worse regret bounds $\min\big\{{d}/{\diffp}, T^{1/3} \log d/{\diffp} \big\}$~\cite{AgarwalSi17,AsiFeKoTa22}, which have a polynomial dependence on either $T$ or the number of experts $d$.
Our algorithms build on sequential applications of the exponential mechanism to pick a good expert, and the sparse-vector-technique to identify when the current expert is no longer a good expert (with near-zero loss).
Crucially, an oblivious adversary cannot identify which expert the algorithm has picked, resulting in a small number of switches.
We deploy a potential-based proof strategy to show that this algorithm has a logarithmic number of switches.
We also show that a lower bound of $\Omega(\log d / \diffp)$ holds for any $\diffp$-DP algorithm even in the realizable case.
\item \textbf{Adaptive algorithms for DP-OPE with low-loss experts.}~~
We also develop an algorithm that adapts to the setting where there is an expert with low loss, that is, $L\opt = \min_{x \in [d]} \sum_{t=1}^T \ell_t(x)$. Our algorithms are adaptive to the value of $L\opt$ and obtain total regret of $L\opt \log d + \diffp^{-1} \log^{1.5} d$.
\item \textbf{Near-optimal regret for low-dimensional DP-OCO.}~~
Building on our algorithms for DP-OPE, we propose a new algorithm for DP-OCO that obtains regret $\wt O \left( d^{1.5}/\diffp \right)$. This is near-optimal for low-dimensional problems where $d=O(1)$ and improves over the best existing algorithm which obtains a normalized regret $ (\sqrt{d}/T\diffp)^{1/2}$~\cite{KairouzMcSoShThXu21}.
\item \textbf{Improved regret for smooth DP-OCO.}~~
When the loss function is smooth, we show that DP-FTRL~\cite{KairouzMcSoShThXu21} with certain parameters obtains an improved normalized regret of
$(\sqrt{d}/T\diffp)^{2/3}$ if there is a zero-loss expert.
\end{itemize}
\begin{table*}[t]
\begin{center}
\begin{tabular}{| Sc | Sc | Sc |}
\hline
& \textbf{\darkblue{Non-realizable}} & \makecell{\textbf{\darkblue{Realizable}}\\\textbf{(This work)}}\\
\hline
{{\textbf{DP-OPE}}} & $\displaystyle \min\left\{ \frac{\sqrt{d}}{T\diffp}, \sqrt{\frac{ \log d}{T}} + \frac{\log d}{T^{2/3}\eps} \right\}$~\footnotesize{\cite{AgarwalSi17,AsiFeKoTa22}} & $\displaystyle \frac{\log^{1.5} d}{T\diffp}$ \\
\cline{1-3}
\textbf{{DP-OCO}} & $\displaystyle \left(\frac{\sqrt{d}}{T\diffp} \right)^{1/2}$~\footnotesize{\cite{KairouzMcSoShThXu21} } & $\displaystyle \frac{d^{1.5}}{T\diffp}$ \\
\cline{1-3}
\textbf{{DP-OCO (smooth)}} & $\displaystyle \left(\frac{\sqrt{d}}{T\diffp} \right)^{1/2}$~\footnotesize{\cite{KairouzMcSoShThXu21}} & $\displaystyle \left(\frac{\sqrt{d}}{T\diffp} \right)^{2/3}$ \\
\hline
\end{tabular}
\end{center}
\caption{Comparison between (normalized) regret upper bounds for the realizable and non-realizable case for both DP-OPE and DP-OCO. For readability, we omit logarithmic factors in $T$ and $1/\delta$.}
\label{tab:temps}
\end{table*}
\subsection{Related work}
Several works have studied online optimization in the realizable setting, developing algorithms with better regret bounds~\cite{Shalev12,SrebroSrTe10}. For online prediction from experts, the weighted majority algorithm obtains a regret bound of $4\log{d}$ compared to $O(\sqrt{T\log d})$ in the non-realizable setting. Moreover, for online convex optimization, \citet{SrebroSrTe10} show that online mirror descent achieves regret $4\beta D^2 + 2\sqrt{\beta D^2 T L\opt}$ compared to $O(\sqrt{T})$ in the general case.
On the other hand, the private online optimization literature has mainly studied the general non-realizable case~\cite{JainKoTh12,SmithTh13,JainTh14,AgarwalSi17,KairouzMcSoShThXu21}.
For online prediction from experts, the best existing regret bounds for \ed-DP are $O(\diffp^{-1}\sqrt{T \log d \log(1/\delta)})$~\cite{JainTh14} and $O(\sqrt{T\log d} + \diffp^{-1} \sqrt{d \log(1/\delta)} \log d \log^2 T)$~\cite{AgarwalSi17}.
\citet{AsiFeKoTa22} show that these rates can be improved using a private version of the shrinking dartboard algorithm, obtaining regret roughly $O(\sqrt{T \log d} + T^{1/3} \log d/\eps)$.
For online convex optimization, \citet{KairouzMcSoShThXu21} developed a private follow-the-regularized-leader algorithm using the binary tree mechanism that obtains normalized regret bound $\wt O\big( {\sqrt{d}}/{T\diffp} \big)^{1/2}$.
The realizable setting has recently been studied in the different but related problem of differentially private stochastic convex optimization (DP-SCO)~\cite{AsiChChDu22}.
DP-SCO and DP-OCO are closely related as one can convert an OCO algorithm into an SCO algorithm using standard online-to-batch transformations~\cite{Hazan16}.
\citet{AsiChChDu22} study DP-SCO problems in the interpolation regime where there exists a minimizer that minimizes all loss functions, and propose algorithms that improve the regret over the general setting if the functions satisfy certain growth conditions.
\section{Preliminaries}
In online optimization, we have an interactive $T$-round game between an adversary and an online algorithm. In this paper, we focus on oblivious adversaries that choose in advance a sequence of loss functions $\ell_1,\dots,\ell_T$ where $\ell_t : \mc{X} \to \R$. Then, at round $t$, the adversary releases a loss function $\ell_t$ and simultaneously the algorithm plays a solution $x_t \in \mc{X}$. The algorithm then suffers loss $\ell_t(x_t)$ at this round. The regret of the online algorithm is
\begin{equation*}
\reg_T(\A) = \sum_{t=1}^T \ell_t(x_t) - \min_{x^\star \in \mc{X}} \sum_{t=1}^T \ell_t(x^\star).
\end{equation*}
For ease of notation, for an oblivious adversary that chooses a loss sequence $\Ds = (\ell_1,\dots,\ell_T) $, we let $\A(\Ds) = (x_1,\dots,x_T)$ denote the output of the interaction between the online algorithm and the adversary.
In this work, we are mainly interested in two instances of the above general online optimization problem:
\begin{itemize}
\item \textbf{Online prediction from experts (OPE).}~~
In this problem, we have a set of $d$ experts $\mc{X} = [d]$, and the adversary chooses loss functions $\ell_t : [d] \to [0,1]$.
\item\textbf{Online convex optimization (OCO).}~~
In OCO, we are optimizing over a convex set $\mc{X} \subseteq \R^d$ with bounded diameter $\diam(\mc{X}) \le D$,%
\footnote{The diameter of a set $\mc{X} \subseteq \R^d$ (in Euclidean geometry) is defined as $\diam(\mc{X}) = \sup_{x,y \in \mc{X}} \|x-y\|$.}
and the adversary chooses loss functions $\ell_t : \mc{X} \to \R$ that are convex and $L$-Lipschitz.
\end{itemize}
We are mainly interested in the so-called realizable setting. More precisely, we say that an OPE (or OCO) problem is \emph{realizable} if there exists a feasible solution $x\opt \in \mc{X}$ such that $L\opt = \sum_{t=1}^T \ell_t(x\opt) = 0$. We also extend some of our results to the near-realizable setting where $0 < L\opt \ll T$.
The main goal of this paper is to study both of these problems under the restriction of differential privacy.
\begin{definition}[Differential Privacy]
\label{def:DP}
A randomized algorithm $\A$ is \emph{\ed-differentially private} against oblivious adversaries (\ed-DP) if, for all sequences $\Ds=(\ell_1,\dots,\ell_T)$ and $\Ds'=(\ell'_1,\dots,\ell'_T)$ that differ in a single element, and for all events $\cO$ in the output space of $\A$, we have
\[
\Pr[\A(\Ds)\in \cO] \leq e^{\eps} \Pr[\A(\Ds')\in \cO] +\delta.
\]
\end{definition}
We note that our algorithms satisfy a stronger privacy guarantee against adaptive adversaries (see for example the privacy definition in~\cite{JainRaSiSm21}). However, we choose to focus solely on oblivious adversaries for ease of presentation and readability.
\subsection{Background on Differential Privacy}
\newcommand{AboveThreshold}{\ensuremath{\mathsf{AboveThreshold}}}
\newcommand{\init}{\ensuremath{\mathsf{InitializeSparseVec}}}
\newcommand{\addq}{\ensuremath{\mathsf{AddQuery}}}
\newcommand{\test}{\ensuremath{\mathsf{TestAboThr}}}
In our analysis, we require the following standard privacy composition result.
\begin{lemma}[Advanced composition~\citealp{DworkRo14}]
\label{lemma:advanced-comp}
If $\A_1,\dots,\A_k$ are randomized algorithms, each of which is $(\diffp,\delta)$-DP, then their composition $(\A_1(\Ds),\dots,\A_k(\Ds))$ is $(\sqrt{2k \log(1/\delta')} \diffp + k \diffp (e^\diffp - 1),\delta' + k \delta)$-DP.
\end{lemma}
In addition to basic facts about differential privacy such as composition and post-processing, our development uses two key techniques from the privacy literature: the Sparse-vector-technique and the binary tree mechanism, which we now describe.
\paragraph{Sparse vector technique.}
We recall the sparse-vector-technique~\cite{DworkRo14} which we use for the realizable setting in~\cref{sec:upper-bounds-realizable}. Given an input $\Ds = (z_1,\dots,z_n) \in \domain^n$, the algorithm takes a stream of queries $q_1,q_2,\dots,q_T$ in an online manner. We assume that each $q_i$ is $1$-sensitive, that is, $|q_i(\Ds) - q_i(\Ds') | \le 1$ for neighboring datasets $\Ds,\Ds' \in \domain^n$ that differ in a single element.
We have the following guarantee.
\begin{lemma}[\citealp{DworkRo14}, Theorem 3.24]
\label{lemma:svt}
Let $\Ds = (z_1,\dots,z_n) \in \domain^n$.
For a threshold $L$ and $\beta>0$, there is an $\diffp$-DP algorithm (AboveThreshold) that halts at time $k \in [T+1]$ such that for $\alpha = \frac{8(\log T + \log(2/\beta))}{\diffp}$ with probability at least $1-\beta$,
\begin{itemize}
\item For all $t < k$, $q_t(\Ds) \le L + \alpha$;
\item $q_k(\Ds) \ge L - \alpha$ or $k = T+1$.
\end{itemize}
\end{lemma}
To facilitate the notation for using AboveThreshold~in our algorithms, we assume that it has the following components:
\begin{enumerate}
\item $\init(\diffp,L,\beta)$: initializes a new instance of AboveThreshold~with privacy parameter $\diffp$, threshold $L$, and probability parameter $\beta$. This returns an instance (data structure) $Q$ that supports the following two functions.
\item $Q.\addq(q)$: adds a new query $q:\domain^n \to \R$ to $Q$.
\item $Q.\test()$: tests if the last query that was added to $Q$ was above threshold. In that case, the algorithm stops and does not accept more queries.
\end{enumerate}
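A minimal Python rendering of this interface is below (our illustration; the Laplace noise scales are the standard ones, and $\beta$ enters only the utility analysis, not the mechanism itself).
\begin{verbatim}
import numpy as np

class SparseVec:
    def __init__(self, eps, L, beta):   # InitializeSparseVec(eps, L, beta)
        self.eps = eps
        self.noisy_L = L + np.random.laplace(scale=2.0 / eps)
        self.halted, self.last = False, None
    def add_query(self, q_value):       # AddQuery: q evaluated on the data
        assert not self.halted
        self.last = q_value + np.random.laplace(scale=4.0 / self.eps)
    def test_above_thr(self):           # TestAboThr: halt on first crossing
        if self.last is not None and self.last >= self.noisy_L:
            self.halted = True
        return self.halted
\end{verbatim}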
\newcommand{BinaryTree}{BinaryTree}
\paragraph{The binary tree mechanism.}
We also build on the binary tree mechanism~\cite{DworkNaPiRo10,ChanShSo11} which allows to privately estimate the running sum of a sequence of $T$ numbers $a_1,\dots,a_T \in [0,1]$.
\begin{lemma}[\citealp{DworkNaPiRo10}, Theorem 4.1]
\label{lemma:bt}
Let $\diffp \le 1$. There is an $\diffp$-DP algorithm (BinaryTree) that takes a stream of numbers $a_1,a_2,\dots,a_T$ and outputs $c_1,c_2,\dots,c_T$ such that for all $t \in [T]$ with probability at least $1-\beta$,
\begin{equation*}
\Big| c_t - \sum_{i=1}^t a_i \Big|
\le
\frac{1}{\diffp} \cdot \mathsf{poly}(\log(\beta^{-1})\log{T}).
\end{equation*}
\end{lemma}
The same approach extends to the case when the $a_i$'s are vectors in $\mathbb{R}^d$ with $\|a_i\|_2 \leq 1$. In this case, the error vector $(c_t - \sum_{i=1}^t a_i)$ is distributed as $\mathcal{N}(0, d \cdot \mathsf{poly}(\log (T/\beta\delta))/\diffp^2 \cdot \mathbb{I})$ and the mechanism satisfies $(\diffp,\delta)$-DP.
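The following sketch implements the scalar binary tree mechanism (a standard construction; the Laplace scale of roughly $(\log_2 T)/\diffp$ reflects that each element touches at most that many tree nodes).
\begin{verbatim}
import numpy as np

def binary_tree_prefix_sums(stream, eps):
    # eps-DP running sums of a_1..a_T in [0,1]: every prefix [1, t] is
    # covered by O(log T) dyadic blocks; each block's sum is noised once.
    T = len(stream)
    levels = max(1, int(np.ceil(np.log2(T))) + 1)
    node_sum, node_noise, out = {}, {}, []
    for t, a in enumerate(stream, start=1):
        for lvl in range(levels):                # nodes containing item t
            node = (lvl, (t - 1) >> lvl)
            node_sum[node] = node_sum.get(node, 0.0) + a
        c, rem = 0.0, t                          # dyadic decomposition of [1, t]
        while rem > 0:
            lvl = (rem & -rem).bit_length() - 1  # lowest set bit
            node = (lvl, (rem - 1) >> lvl)
            if node not in node_noise:           # noise drawn once per node
                node_noise[node] = np.random.laplace(scale=levels / eps)
            c += node_sum[node] + node_noise[node]
            rem -= 1 << lvl
        out.append(c)
    return out
\end{verbatim}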
\paragraph{Additional notation.}
For a positive integer $k \in \N$, we let $[k] = \{1,2,\dots,k\}$. Moreover, for a sequence $a_1,\dots,a_t$, we use the shorthand $a_{1:t} = a_1,\dots,a_t$.
\section{Faster rates for DP-OCO}
\label{sec:dp-oco}
In this section we study differentially private online convex optimization (DP-OCO) and propose new algorithms with faster rates in the realizable setting. In~\Cref{sec:dp-oco-experts}, we develop an algorithm that reduces the OCO problem to an experts problem (by discretizing the space) and then uses our procedure for experts.
In~\Cref{sec:dp-oco-smooth}, we show that follow-the-regularized-leader (FTRL) using the binary tree mechanism results in faster rates in the realizable setting for smooth functions.
\subsection{Experts-based algorithm for DP-OCO}
\label{sec:dp-oco-experts}
The algorithm in this section essentially reduces the problem of DP-OCO to DP-OPE by discretizing the space $\mc{X} = \{ x \in \R^d: \ltwo{x} \le D \}$ into sufficiently many experts. In particular, we consider a $\rho$-net of the space $\mc{X}$, that is, a set $\mc{X}^\rho_{\mathsf{experts}} = \{x^1, \dots, x^M \} \subset \mc{X}$ such that for all $x \in \mc{X}$ there is $x^i \in \mc{X}^\rho_{\mathsf{experts}}$ with $\ltwo{x^i - x} \le \rho$. Such a set exists if $M \ge 2^{d \log(4D/\rho)}$ (\citealp{Duchi19}, Lemma 7.6). Given a loss function $\ell_t: \mc{X} \to \R$, we define the loss of expert $x^i$ to be $\ell_t(x^i)$. Then, we run~\Cref{alg:SVT-zero-loss} for the given DP-OPE problem. This algorithm has the following guarantees.
\begin{theorem}
\label{thm:DP-OCO}
Let $\mc{X} = \{ x \in \R^d: \ltwo{x} \le D\}$
and $\ell_1,\dots,\ell_T : \mc{X} \to \R$ be non-negative, convex and $L$-Lipschitz functions chosen by an oblivious adversary.
Then running~\Cref{alg:SVT-zero-loss} over $\mc{X}^\rho_{\mathsf{experts}}$ with $\rho = 1/(LT)$ is \ed-DP and with probability at least $1-O(\beta)$ has regret
\iftoggle{arxiv}{
\begin{equation*}
O\left( L^{\opt} d \log(LD/\beta) + \frac{d^{3/2} \log^{3/2}(LDT) \sqrt{\log(1/\delta)} + \log(T/\beta) d \log(LD/\beta)}{\diffp} \right).
\end{equation*}
}
{
\begin{align*}
& \E\left[ \sum_{t=1}^T \ell_t(x_t) - \min_{x \in \mc{X}} \sum_{t=1}^T \ell_t(x) \right] \\
& \le (L\opt + \frac{1}{\diffp}) d^{1.5} \cdot O\left( \mathsf{poly} (\log (DLT/\delta)) \right).
\end{align*}
}
\end{theorem}
\iftoggle{arxiv}{}{We defer the proof to~\Cref{sec:proof-dp-oco}.}
\iftoggle{arxiv}{
\begin{proof}
Let $x_1,\dots,x_T$ be the experts chosen by the algorithm.
First, \Cref{thm:ub-realizable} implies that this algorithm obtains the following regret with respect to the best expert
\begin{equation*}
\sum_{t=1}^T \ell_t(x_t) - L^{\opt}_{\mathsf{experts}} \le
O\left( L^{\opt}_{\mathsf{experts}} \log(M/\beta) + \frac{\log^2(M) + \log(T/\beta) \log(M/\beta)}{\diffp} \right)
\end{equation*}
where $L^{\opt}_{\mathsf{experts}} = \min_{x \in \mc{X}^\rho_{\mathsf{experts}}} \sum_{t=1}^T \ell_t(x)$. Since $\ell_t$ is $L$-Lipschitz for each $t \in [T]$, we obtain that
\begin{equation*}
| L\opt - L^{\opt}_{\mathsf{experts}} |
= |\min_{x \in \mc{X}} \sum_{t=1}^T \ell_t(x) - \min_{x \in \mc{X}^\rho_{\mathsf{experts}}} \sum_{t=1}^T \ell_t(x) |
\le T L \rho.
\end{equation*}
Overall this gives
\begin{equation*}
\sum_{t=1}^T \ell_t(x_t) - L^{\opt}
\le O\left( (L^{\opt} + TL\rho) \log(M/\beta) + \frac{\log^{3/2}(M) \sqrt{\log(1/\delta)} + \log(T/\beta) \log(M/\beta)}{\diffp} \right).
\end{equation*}
Setting $\rho = 1/(LT)$ proves the claim.
\end{proof}
}
These results demonstrate that existing algorithms, which achieve normalized regret roughly $(\ifrac{\sqrt{d}}{T \diffp})^{1/2}$, are not optimal for the realizable setting. Moreover, in the low-dimensional regime (constant $d$), the above bound is nearly-optimal up to logarithmic factors as we have a lower bound of $\sqrt{d}/T\diffp$ from the stochastic setting of this problem (see discussion in the introduction).
Finally, while the algorithm we presented in~\Cref{thm:DP-OCO} has exponential runtime due to discretizing the space, we note that applying~\Cref{alg:SVT-zero-loss} over the unit ball results in similar rates and polynomial runtime.
Recall that this algorithm only accesses the loss functions to sample from the exponential mechanism, and uses sparse-vector over the running loss. Both of these can be implemented in polynomial time---since the losses are convex---using standard techniques from log-concave sampling.
\subsection{Binary-tree based FTRL}
\label{sec:dp-oco-smooth}
In this section, we consider DP-OCO with smooth loss functions and show that DP-FTRL~\cite[Algorithm 1]{KairouzMcSoShThXu21} with modified parameters obtains improved normalized regret ${\beta D^2}/{T} + ({\sqrt{d}}/{T \diffp})^{2/3}$ in the realizable setting, compared to ${LD}/{\sqrt{T}} + ({\sqrt{d}}/{T \diffp} )^{1/2}$ in the non-realizable setting.
We present the details in~\Cref{alg:dp-ftrl}. Appendix B.1 in~\cite{KairouzMcSoShThXu21} has more detailed information about the implementation of the binary tree mechanism in DP-FTRL.
\begin{algorithm}
\caption{DP-FTRL~\cite{KairouzMcSoShThXu21}}
\label{alg:dp-ftrl}
\begin{algorithmic}[1]
\REQUIRE Regularization parameter $\lambda$
\STATE Set $x_0 \in \mc{X}$
\FOR{$t=1$ to $T$\,}
\STATE Use the binary tree mechanism to estimate the sum $\sum_{i=1}^{t-1} \nabla \ell_i(x_i)$; let $\bar g_{t-1}$ be the estimate
\STATE Apply follow-the-regularized-leader step
\begin{equation*}
x_{t} = \argmin_{x \in \mc{X}} \<\bar g_{t-1}, x \> + \frac{\lambda}{2} \ltwo{x}^2,
\end{equation*}
\STATE Receive loss function $\ell_t: \mc{X} \to \R$
\STATE Pay cost $\ell_t(x_t)$
\ENDFOR
\end{algorithmic}
\end{algorithm}
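A hedged Python sketch of this loop is below; the tree-based gradient aggregator is abstracted as a callback (an assumption for exposition), and the FTRL step over the ball reduces to a scaled projection.
\begin{verbatim}
import numpy as np

def dp_ftrl(grad_oracle, noisy_prefix, T, dim, lam, D):
    # grad_oracle(t, x): gradient of ell_t at x (observed after playing x).
    # noisy_prefix(g): private running sum of gradients (e.g., binary tree).
    x, xs = np.zeros(dim), []
    for t in range(T):
        xs.append(x.copy())
        g_bar = noisy_prefix(grad_oracle(t, x))
        x = -g_bar / lam                 # argmin <g, x> + (lam/2)||x||^2
        n = np.linalg.norm(x)
        if n > D:                        # ... subject to ||x|| <= D
            x *= D / n
    return xs
\end{verbatim}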
We have the following guarantees for DP-FTRL in the realizable and smooth setting.
\begin{theorem}
\label{thm:dp-oco-smooth}
Let $\mc{X} = \{ x \in \R^d: \ltwo{x} \le D\}$
and $\ell_1,\dots,\ell_T : \mc{X} \to \R$ be non-negative, convex, $L$-Lipschitz, and $\beta$-smooth functions chosen by an oblivious adversary. DP-FTRL with $\lambda = 32 \beta + \left( \frac{\beta}{\diffp^2} (L/D)^2 T d \log(T) \log(1/\delta) \right)^{1/3}$ is \ed-DP and generates
$x_1,\dots,x_T$ that has regret
\iftoggle{arxiv}{
\begin{align*}
\frac{1}{T} \E \left[ \sum_{t=1}^T \ell_t(x_t) - \ell_t(x\opt) \right]
& \le O \left( \frac{L\opt + \beta D^2}{T} + \left( LD \frac{ \sqrt{\beta D^2 d \log(T) \log(1/\delta)}}{T \diffp} \right)^{2/3} \right).
\end{align*}
}
{
\begin{align*}
& \frac{1}{T} \E \left[ \sum_{t=1}^T \ell_t(x_t) - \ell_t(x\opt) \right] \\
& \le O \left( \frac{L\opt + \beta D^2}{T} + \left( LD \frac{ \sqrt{\beta D^2 d \log(T) \log(1/\delta)}}{T \diffp} \right)^{2/3} \right).
\end{align*}
}
\end{theorem}
For the proof, we use the following property for smooth non-negative functions.
\begin{lemma}[\citealp{Nesterov04}]
Let $\ell: \mc{X} \to \R$ be a non-negative, $\beta$-smooth function. Then $\ltwo{\nabla \ell(x)}^2 \le 4 \beta \ell(x)$.
\end{lemma}
\begin{proof}
The proof follows similar arguments to the proof of Theorem 5.1 in~\cite{KairouzMcSoShThXu21}. Let
\begin{equation*}
x_{t+1} = \argmin_{x \in \mc{X}} \sum_{i=1}^t \<\nabla \ell_i(x_i), x \> + \frac{\lambda}{2} \ltwo{x}^2 + \<b_t,x\>,
\end{equation*}
be the iteration of DP-FTRL where $b_t$ is the noise added by the binary tree mechanism. Moreover, let $\hat x_{t+1}$ be the non-private solution, that is,
\begin{equation*}
\hat x_{t+1} = \argmin_{x \in \mc{X}} \sum_{i=1}^t \<\nabla \ell_i(x_i), x \> + \frac{\lambda}{2} \ltwo{x}^2 .
\end{equation*}
Lemma C.2 in~\cite{KairouzMcSoShThXu21} states that $\ltwo{x_{t+1} - \hat x_{t+1}} \le \ltwo{b_t}/\lambda$. Therefore, we have
\iftoggle{arxiv}{
\begin{align*}
\sum_{t=1}^T \ell_t(x_t) - \ell_t(x\opt)
& \le \sum_{t=1}^T \< \nabla \ell_t(x_t), x_t - x\opt\> \\
& = \sum_{t=1}^T \< \nabla \ell_t(x_t), x_t - \hat x_t\> + \sum_{t=1}^T \< \nabla \ell_t(x_t), \hat x_t - x\opt\> \\
& \le \sum_{t=1}^T \ltwo{\nabla \ell_t(x_t)} \ltwo{x_t - \hat x_t} + \sum_{t=1}^T \< \nabla \ell_t(x_t), \hat x_t - x\opt\> \\
& \le \frac{1}{8 \beta} \sum_{t=1}^T \ltwo{\nabla \ell_t(x_t)}^2 + 4\beta \sum_{t=1}^T\ltwo{x_t - \hat x_t}^2 + \sum_{t=1}^T \< \nabla \ell_t(x_t), \hat x_t - x\opt\> \\
& \le \frac{1}{2} \sum_{t=1}^T \ell_t(x_t) + 4 \beta \sum_{t=1}^T \ltwo{b_t}^2/\lambda^2 + \sum_{t=1}^T \< \nabla \ell_t(x_t), \hat x_t - x\opt\>,
\end{align*}
}
{
\begin{align*}
& \sum_{t=1}^T \ell_t(x_t) - \ell_t(x\opt) \\
& \le \sum_{t=1}^T \< \nabla \ell_t(x_t), x_t - x\opt\> \\
& = \sum_{t=1}^T \< \nabla \ell_t(x_t), x_t - \hat x_t\> + \sum_{t=1}^T \< \nabla \ell_t(x_t), \hat x_t - x\opt\> \\
& \le \sum_{t=1}^T \ltwo{\nabla \ell_t(x_t)} \ltwo{x_t - \hat x_t} + \sum_{t=1}^T \< \nabla \ell_t(x_t), \hat x_t - x\opt\> \\
& \le \frac{1}{8 \beta} \sum_{t=1}^T \ltwo{\nabla \ell_t(x_t)}^2 + 4\beta \sum_{t=1}^T\ltwo{x_t - \hat x_t}^2 + \sum_{t=1}^T \< \nabla \ell_t(x_t), \hat x_t - x\opt\> \\
& \le \frac{1}{2} \sum_{t=1}^T \ell_t(x_t) + 4 \beta \sum_{t=1}^T \ltwo{b_t}^2/\lambda^2 + \sum_{t=1}^T \< \nabla \ell_t(x_t), \hat x_t - x\opt\>,
\end{align*}
}
where the third inequality follows from the Fenchel-Young inequality, and the last inequality uses the smoothness lemma above together with Lemma C.2's bound on $\ltwo{x_t - \hat x_t}$.
We can now upper bound the rightmost term. Indeed, Theorem 5.2 in~\cite{Hazan16} implies that FTRL has
\begin{align*}
\sum_{t=1}^T \< \nabla \ell_t(x_t), \hat x_t - x\opt\> & \le \frac{2}{\lambda} \sum_{t=1}^T \ltwo{\nabla \ell_t(x_t)}^2 + \lambda D^2 \\
& \le \frac{8 \beta}{\lambda} \sum_{t=1}^T \ell_t( x_t) + \lambda D^2.
\end{align*}
Overall we now get
\iftoggle{arxiv}{
\begin{align*}
\sum_{t=1}^T \ell_t(x_t) - \ell_t(x\opt)
& \le \frac{1}{2} \sum_{t=1}^T \ell_t(x_t) + \frac{4\beta }{ \lambda^2} \sum_{t=1}^T \ltwo{b_t}^2
+ \frac{8 \beta}{\lambda} \sum_{t=1}^T \ell_t( x_t) + \lambda D^2.
\end{align*}
}
{
\begin{align*}
\sum_{t=1}^T \ell_t(x_t) - \ell_t(x\opt)
& \le \frac{1}{2} \sum_{t=1}^T \ell_t(x_t) + \frac{4\beta }{ \lambda^2} \sum_{t=1}^T \ltwo{b_t}^2 \\
& + \frac{8 \beta}{\lambda} \sum_{t=1}^T \ell_t( x_t) + \lambda D^2.
\end{align*}
}
The binary tree mechanism also guarantees that for all $t \in [T]$, $$\E[\ltwo{b_t}^2] \le O \left( \frac{L^2 d \log(T) \log(1/\delta)}{\diffp^2} \right)$$ (see Appendix B.1 in~\cite{KairouzMcSoShThXu21}).
Thus, taking expectation and setting the regularization parameter to $\lambda = 32 \beta + \big( \frac{\beta}{\diffp^2} (L/D)^2 T d \log(T) \log(1/\delta) \big)^{1/3}$, we have
\iftoggle{arxiv}{
\begin{align*}
\E \left[\sum_{t=1}^T \ell_t(x_t) - \ell_t(x\opt) \right]
& \le O \left(L\opt + \beta D^2 + \left( \beta D^2 (LD)^2 \frac{T d \log(T) \log(1/\delta)}{\diffp^2} \right)^{1/3} \right).
\end{align*}
}
{
\begin{align*}
& \E \left[\sum_{t=1}^T \ell_t(x_t) - \ell_t(x\opt) \right]
\le O (L\opt + \beta D^2 ) \\
& \quad + O\left( \left( \beta D^2 (LD)^2 \frac{T d \log(T) \log(1/\delta)}{\diffp^2} \right)^{1/3} \right).
\end{align*}
}
\end{proof}
\section{Near-optimal regret for online prediction from experts}
\label{sec:upper-bounds-realizable}
In this section, we consider the online prediction from experts problem in the near-realizable regime,
where the best expert achieves small loss $L\opt \ll T$.
Under this setting,
we develop a new private algorithm that achieves regret $\wt O(L\opt \log d + \log^{3/2}(d)/\diffp)$. For the realizable setting where $L\opt=0$, this algorithm obtains near-optimal regret $\wt O(\log^{3/2} (d)/\diffp)$.
The algorithm builds on the fact that an oblivious adversary cannot know which expert the algorithm picks. Therefore, if the algorithm picks a random good expert with loss smaller than $L\opt$, the adversary has to increase the loss for many experts before identifying the expert chosen by the algorithm. The algorithm will therefore proceed as follows:
at each round, privately check using the sparse-vector technique whether the previous expert is still a good expert (has loss near $L\opt$); if not, privately pick a new expert at random from the set of remaining good experts. The full details are in~\cref{alg:SVT-zero-loss}.
The following theorem summarizes the performance of~\cref{alg:SVT-zero-loss}.
\begin{theorem}
\label{thm:ub-realizable}
Let $\ell_1,\dots,\ell_T \in [0,1]^d$ be chosen by an oblivious adversary such that there is $x\opt \in [d]$ such that $\sum_{t=1}^T \ell_t(x\opt) \le L\opt$. Let $0 < \beta < 1/2$, $B = \log(2T^2/\beta)$, $K = 6\ceil{\log d} + 24 \log(1/\beta)$, and $L = L\opt + 4/\eta + \frac{8B}{\diffp}$. If $\eta = \diffp/2K $ then
\cref{alg:SVT-zero-loss} is $\diffp$-DP and with probability at least $1-O(\beta)$ has regret
\begin{equation*}
\sum_{t=1}^T \ell_t(x_t) \le O\left( L\opt \log(d/\beta) + \frac{\log^2(d) + \log(T/\beta) \log(d/\beta)}{\diffp} \right).
\end{equation*}
Further, if $\diffp \le \sqrt{\log T \log(1/\delta)}$ and $\eta = \diffp/4\sqrt{2K \log(1/\delta)}$ then
\cref{alg:SVT-zero-loss} is $(\diffp,\delta)$-DP and with probability at least $1-O(\beta)$ has regret
\begin{equation*}
\sum_{t=1}^T \ell_t(x_t) \le O\left( L\opt \log(d/\beta) + \frac{\log^{3/2}(d)\sqrt{\log(1/\delta)} + \log(T/\beta) \log(d/\beta)}{\diffp} \right).
\end{equation*}
\end{theorem}
While \cref{alg:SVT-zero-loss} requires the knowledge of $L\opt$, we also design an adaptive version that does not require $L\opt$ in the next section. Note that the algorithm obtains regret roughly $\log^{3/2} (d)/\diffp$ for the realizable setting where $L\opt = 0$.
\begin{algorithm}[t]
\caption{Sparse-Vector for zero loss experts }
\label{alg:SVT-zero-loss}
\begin{algorithmic}[1]
\REQUIRE Switching bound $K$, optimal loss $L\opt$, Sampling parameter $\eta$, Threshold parameter $L$, failure probability $\beta$, privacy parameters $(\diffp,\delta)$
\STATE Set $k=0$ and sample the current expert $x_0 \sim \mathsf{Unif}[d]$
\STATE Set $t = 1$ and $t_p = 0$
\WHILE{$t \le T$\,}
\STATE Set $ x_{t} = x_{t-1}$
\IF{$k < K$}
\STATE $Q = \init(\diffp/2,L,\beta/T)$
\WHILE{Q.\test() = False}
\STATE Set $ x_{t} = x_{t-1}$
\STATE Define a new query $q_t = \sum_{i=t_p}^{t-1} \ell_i(x_t)$
\STATE Add new query $Q.\addq(q_t)$
\STATE Receive loss function $\ell_t: [d] \to [0,1]$
\STATE Pay cost $\ell_t(x_t)$
\STATE Update $t = t+1$
\ENDWHILE
\STATE Sample $x_t$ from the exponential mechanism with scores $s_t(x) = \max \left(\sum_{i=1}^{t-1} \ell_i(x),L\opt \right)$ for $x \in [d]$:
\begin{equation*}
\P(x_t = x) \propto e^{-\eta s_t(x)/2 }
\end{equation*}
\STATE Set $k = k + 1$ and $t_p = t$
\ENDIF
\STATE Receive loss function $\ell_t: [d] \to [0,1]$
\STATE Pay cost $\ell_t(x_t)$
\STATE Update $t = t+1$
\ENDWHILE
\end{algorithmic}
\end{algorithm}
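\medskip\noindent The following Python sketch mirrors~\cref{alg:SVT-zero-loss} for losses fixed in advance by an oblivious adversary. The Laplace noise scales for AboveThreshold and the restart bookkeeping are illustrative choices, not a verbatim transcription of the pseudocode above.
\begin{verbatim}
import numpy as np

def svt_zero_loss(losses, K, L_opt, eta, L, eps, seed=0):
    # losses: T x d array in [0,1], fixed in advance; noise scales for
    # AboveThreshold (4/eps on the threshold, 8/eps per query) are the
    # standard calibration for an eps/2 budget and are illustrative.
    rng = np.random.default_rng(seed)
    T, d = losses.shape
    cum = np.zeros(d)                 # cum[x] = sum_{i < t} losses[i, x]
    x = int(rng.integers(d))          # initial expert, uniform
    k, total, since = 0, 0.0, 0.0     # switches, paid loss, loss since t_p
    thresh = L + rng.laplace(scale=4.0 / eps)
    for t in range(T):
        above = since + rng.laplace(scale=8.0 / eps) > thresh
        if above and k < K:
            # exponential mechanism on scores s_t(x) = max(cum[x], L_opt)
            logits = -eta * np.maximum(cum, L_opt) / 2.0
            prob = np.exp(logits - logits.max()); prob /= prob.sum()
            x = int(rng.choice(d, p=prob))
            k, since = k + 1, 0.0
            thresh = L + rng.laplace(scale=4.0 / eps)  # fresh instance
        total += float(losses[t, x])  # pay, then update running sums
        since += float(losses[t, x])
        cum += losses[t]
    return total
\end{verbatim}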
\begin{proof}
First, we prove the privacy guarantees of the algorithm using privacy composition results: there are $K$ applications of the exponential mechanism with privacy parameter $\eta$. Moreover, sparse-vector is applied over each user's data only once, hence the $K$ applications of sparse-vector are $\diffp/2$-DP. Overall, the algorithm is $(\diffp/2 + K\eta)$-DP and $(\diffp/2 + \sqrt{2K \log(1/\delta)} \eta + K \eta (e^\eta - 1),\delta)$-DP (using advanced composition; see~\Cref{lemma:advanced-comp}). Setting $\eta = \diffp/2K$ results in $\diffp$-DP and $\eta = O(\diffp/\sqrt{K \log(1/\delta)})$ results in \ed-DP.
We proceed to analyze utility. First, note that the guarantees of the sparse-vector algorithm (\Cref{lemma:svt}) imply that with probability at least $1-\beta$, for each time-step $t \in [T]$, if sparse-vector identifies an above-threshold query then the query value satisfies $q_t \ge \ubar \Delta \defeq L - \frac{8B}{\diffp} \ge 4/\eta$; otherwise, $q_t \le \bar \Delta \defeq L + \frac{8B}{\diffp}$. In the remainder of the proof, we condition on this event. The idea is to show that the algorithm makes at most a logarithmic number of switches, and that at each switch the algorithm pays roughly $1/\diffp$ regret.
\\
To this end, we define a potential at time $t \in [T]$:
\begin{equation*}
\phi_t = \sum_{x \in [d]} e^{-\eta L_t(x)/2} ,
\end{equation*}
where $L_t(x) = \max( \sum_{j=1}^{t-1} \ell_j(x), L\opt)$.
Note that $\phi_1 = d e^{-\eta L\opt/2}$ and $\phi_t \ge e^{-\eta L\opt/2}$ for all $t \in [T]$, as there is $x \in [d]$ such that $\sum_{i=1}^T \ell_i(x) \le L\opt$.
We split the iterates into $m = \ceil{\log d}$ rounds delimited by $t_0,t_1,\dots,t_m$, where $t_i$ is the largest $t\in[T]$ such that $\phi_{t} \ge \phi_1/2^{i}$.
Let $Z_i$ be the number of switches in $[t_i,t_{i+1}-1]$ (number of times the exponential mechanism is used to pick $x_t$).
The following key lemma shows that $Z_i$ cannot be too large.
\begin{lemma}
\label{lemma:prob-switch}
Fix $0 \le i \le m-1$. Then for any $1 \le k \le T$, it holds that
\begin{equation*}
\P(Z_i \ge k+1) \le (2/3)^k.
\end{equation*}
\end{lemma}
\begin{proof}
Let $t_i \le t \le t_{i+1}$ be a time-step where a switch happens (the exponential mechanism is used to pick $x_{t}$). Note that $\phi_{t_{i+1}} \ge \phi_{t}/2$. We prove that the probability that $x_t$ is switched between $t$ and $t_{i+1}$ is at most $2/3$. To this end, note that if $x_t = x$ is switched before $t_{i+1}$ then $\sum_{j=t}^{t_{i+1}} \ell_j(x) \ge \ubar \Delta$ as sparse-vector identifies it, and therefore $L_{t_{i+1}}(x) - L_t(x) \ge \ubar \Delta - L\opt \ge 4/\eta$.
Thus we have that
\begin{align*}
P(x_{t} \text{ is switched before $t_{i+1}$})
& \le \sum_{x \in [d]} P(x_t=x) \indic{L_{t_{i+1}}(x) - L_t(x) \ge 4/\eta} \\
& = \sum_{x \in [d]} \frac{e^{-\eta L_{t}(x)/2}}{\phi_{t}}\cdot \indic{L_{t_{i+1}}(x) - L_t(x) \ge 4/\eta } \\
& \le \sum_{x \in [d]} \frac{e^{-\eta L_{t}(x)/2}}{\phi_{t}}\cdot \frac{1 - e^{-\eta (L_{t_{i+1}}(x)- L_t(x))/2}}{1 - e^{-2} } \\
& \le 4/3 (1 - \phi_{t_{i+1}}/\phi_{t}) \\
& \le 2/3,
\end{align*}
where the second inequality follows from the fact that $\indic{a \ge b} \le \frac{1-e^{-\eta a/2}}{1 - e^{-\eta b/2}}$ for $a,b,\eta \ge 0$,
and the last inequality holds since $\phi_{t_{i+1}}/\phi_{t} \ge 1/2$.
This argument shows that after the first switch inside the range $[t_i,t_{i+1}]$, each additional switch happens with probability at most $2/3$. The claim follows.
\end{proof}
We now proceed with the proof. Let $Z = \sum_{i=0}^{m-1} Z_i$ be the total number of switches.
Note that $Z \le m + \sum_{i=0}^{m-1} \max(Z_i-1,0)$ and \Cref{lemma:prob-switch} implies $\max(Z_i-1,0)$ is upper bounded by a geometric random variable with success probability $1/3$. Therefore, using concentration of geometric random variables (\Cref{lemma:geom-concentration}), we get that
\begin{equation*}
P(Z \ge 6m + 24 \log(1/\beta) ) \le \beta .
\end{equation*}
Noting that $K \ge 6m + 24 \log(1/\beta)$, this shows that the algorithm does not reach the switching budget with probability $1-O(\beta)$. Thus, the guarantees of the sparse-vector algorithm imply that the algorithm pays regret at most $\bar \Delta$ for each switch, hence the total regret of the algorithm is at most $O(\bar \Delta (m + \log(1/\beta))) = O(\bar \Delta \log(d/\beta)) $. The claim follows as $\bar \Delta \le L\opt + 4/\eta + 16B/\diffp$.
\end{proof}
\begin{comment}
\newcommand{S_{\mathsf{good}}}{S_{\mathsf{good}}}
\newcommand{S_{\mathsf{bad}}}{S_{\mathsf{bad}}}
\newcommand{S_{\mathsf{chosen}}}{S_{\mathsf{chosen}}}
\begin{proof}
The algorithm consists of at most $K$ applications of AboveThreshold{} and the exponential mechanism. As AboveThreshold{} is $\diffp_0$-DP and the exponential mechanism is $\diffp_0$-DP as well (since $s_t(x)$ is $1$-sensitive function), the final mechanism is $2K\diffp_0$-DP.
Now we prove utility. Let $k$ be the number of switches that the algorithm makes, that is, the number of queries on which sparse-vector finds an above threshold query. Let $\alpha = \frac{B}{\diffp_0}$ and define a set of good experts $ S_{\mathsf{good}}^t= \{x \in[d] : \sum_{i=1}^{t-1} \ell_i(x) \le L\opt + \alpha \}$ and a set of bad experts $S_{\mathsf{bad}}^t = [d] \setminus S_{\mathsf{good}}^t$. First, note that the guarantees of AboveThreshold{} (\Cref{lemma:svt}) imply that with probability at least $1-\beta$, for all $t \in [T]$ where AboveThreshold{} halts (that is, a switch happens), we have that $\sum_{i=1}^{t-2} \ell_i(x_{t-2}) \le L\opt + 4 \alpha$ and $\sum_{i=1}^{t-1} \ell_i(x_{t-1}) \ge L\opt + 2 \alpha$. This means that $x_{t-1} \notin S_{\mathsf{good}}^t$ when AboveThreshold{} makes a switch.
Moreover,
the exponential mechanism picks $x_t$ such that $\P(x_t \notin S_{\mathsf{good}}^t) \le d e^{-\alpha \diffp_0} \le d e^{-B} \le \beta/T$. Thus, with probability at least $1-\beta$, for all $x_t$ picked by the exponential mechanism, we have $x_t \in S_{\mathsf{good}}^t$.
Thus, we now assume that these two events happen and proceed to finish the proof conditioning on that.
We will prove that if there exists an expert with loss $\sum_{t=1}^T \ell_t(x) = L\opt$, then $\P(k > K) < \beta$. Note that this will prove the claim as the regret of the algorithm in this case is at most $K (L\opt + 4\alpha)$ since AboveThreshold{} guarantees that the model at each switch has cumulative loss at most $L\opt + 4\alpha$. The proof will go as follows:
we show that the adversary has to shrink the size of $S_{\mathsf{good}}^t$ by a constant factor every $O(\mathsf{poly}(B))$ switches and therefore after $k = O(\mathsf{poly} (B))$ switches the set $S_{\mathsf{good}}^t$ has only one expert (which has small loss) which will be chosen by the exponential mechanism with high probability.
Recall that $s_t(x) = \max(\sum_{i=1}^{t-1} \ell_i(x),L\opt)$.
We write the set $S_{\mathsf{good}}^t$ as the union of the sets $S^t_j$ for $1 \le j \le B$ where
\begin{equation*}
S^t_j = \left\{ x\in[d]: \frac{j-1}{\diffp_0} \le s_t(x) - L\opt \le \frac{j}{\diffp_0} \right\}.
\end{equation*}
Crucially, each set $S^t_j$ contains experts that are almost indistinguishable by a private algorithm and therefore the exponential mechanism will assign similar probabilities for experts in the same set. Let $N^t_j = |S^t_j|$. We have the following lemma.
\begin{lemma}
\label{lemma:switching-shrinkage}
Assume the algorithms makes a switch at times $t_1 < t_2$. Then
there is $\hat j \in [B]$ such that with probability $\frac{1}{12 B}$ we have
\begin{equation*}
\sum_{j=1}^{\hat j} N^{t_2}_j \le \left(1-\frac{1}{4B}\right) \sum_{j=1}^{\hat j} N^{t_1}_j .
\end{equation*}
\end{lemma}
\noindent
Assuming correctness of~\cref{lemma:switching-shrinkage} for now, we proceed to prove that $k<K$ with high probability. Assume we have $k$ switches at time-steps $t_1,t_2,\dots,t_k$. Let $\hat j_i \in [B]$ be the corresponding $\hat j$ from~\cref{lemma:switching-shrinkage} at time-step $t_i$. Assume that $k > K/2 \ge 200B^2\log(d/\beta)$. Then there is a $\hat j \in [B]$ that appears at least $K/2B$ times. Noting that the sum $\sum_{j=1}^{\hat j} N^{t}_j$ cannot increase, \cref{lemma:switching-shrinkage} now implies that at time $t_k +1$ we have with probability at least $1-\beta$ (Chernoff inequality. See~\cref{lemma:chernoff})
\begin{align*}
\sum_{j=1}^{\hat j} N^{t_k}_j
&\le \max \left( \left(1-\frac{1}{4B}\right)^{K/48B} \sum_{j=1}^{\hat j} N^{1}_j,1 \right)
\\
&\le \max \left(e^{-K/200B^2} d,1 \right)
= 1.
\end{align*}
As we know that there is an expert with loss $\sum_{i=1}^{T} \ell_i(x) = L\opt$, this shows that $|S_{\mathsf{good}}^{t_k}| = 1$ and therefore sparse-vector will not do any more switches. This proves the claim.
\end{proof}
Now we proceed to prove~\cref{lemma:switching-shrinkage}. The following lemma shows that if the exponential mechanism chooses an expert from $S^t_j$ then the adversary has to eliminate roughly $N^t_j/2$ experts from $S^t_j$ (move them to $S_{\mathsf{bad}}$) in order to force a switch.
\begin{lemma}
\label{lemma:adv-eliminiation}
Assume the exponential mechanism chooses an expert $x_{t_1} \in S^{t_1}_j$ at time $t_1$. Let $t_2$ be the time of the next switch that the algorithm makes. Then with probability $1/6$ we have
\begin{equation*}
| S^{t_1}_j \cap S_{\mathsf{good}}^{t_2} | \le \frac{N^{t_1}_j}{2}.
\end{equation*}
\end{lemma}
\begin{proof}
We analyze a stronger adversary that knows the switching times and the set $S^{t_1}_j$ from which the expert was chosen. We show that even such an adversary has to eliminate a constant fraction of the experts in $S^{t_1}_j$ before enforcing the algorithm to make a switch. To this end,
note that for each $x_1,x_2 \in S^{t_1}_j$ we have
\begin{equation*}
e^{-1} \le \frac{\P(x_{t_1}=x_1)}{\P(x_{t_1}=x_2)} \le e.
\end{equation*}
Therefore we have that for all $x \in S^{t_1}_j$
\begin{equation*}
\frac{1}{e N^{t_1}_j} \le \P(x_{t_1}=x \mid x_{t_1} \in S^{t_1}_j) \le \frac{e}{ N^{t_1}_j}.
\end{equation*}
Note that for the adversary to enforce the algorithm to make a switch at time $t_2$, the guarantees of AboveThreshold~imply that the loss of $x_{t_1}$ must satisfy $\sum_{i=1}^{t_2-1} \ell_i(x_{t_1}) \ge L\opt + 2\alpha $ (and hence $x_{t_1} \notin S_{\mathsf{good}}^{t_2}$).
Let $\hat x_1, \hat x_2, \dots, \hat x_M$ be the sequence of experts that the adversary eliminates, that is, makes $\sum_{i=1}^t \ell(\hat x) \ge L\opt + 2 \alpha$.
Note that the adversary is oblivious: that means that it can depend on the distribution of the exponential mechanism though it cannot depend on the returned expert $x_{t_1}$. In other words, $x_{t_1}$ is independent from $\hat x_1,\hat x_2,\dots,\hat x_M$. The goal is to upper bound $\P(M < N^{t_1}_j/2 \mid x_{t_1} \in S^{t_1}_j)$ or equivalently lower bound $\P(M \ge N^{t_1}_j/2 \mid x_{t_1} \in S^{t_1}_j)$. We have
\begin{align*}
&\P(M \ge N^{t_1}_j/2 \mid x_{t_1} \in S^{t_1}_j)
\\
& \ge \P( x_{t_1} \notin \{\hat x_1,\dots, x_{\floor{N^{t_1}_j/2}} \} \mid x_{t_1} \in S^{t_1}_j ) \\
& = \P( x_{t_1} \in S^{t_1}_j \setminus \{\hat x_1,\dots,\hat x_{\floor{N^{t_1}_j/2}} \} \mid x_{t_1} \in S^{t_1}_j ) \\
& \ge \frac{1}{2e} \ge \frac{1}{6}.
\end{align*}
The claim follows.
\end{proof}
We are now ready to prove~\cref{lemma:switching-shrinkage}.
\begin{proof}[of \cref{lemma:switching-shrinkage}]
Let $p_j = \P(x_{t_1} \in S^{t_1}_j)$ be the probability that the exponential mechanism returns an expert in $S^{t_1}_j$. Noting that $\sum_{j=1}^B p_j = \P(x_{t_1} \in S_{\mathsf{good}}^{t_1}) \ge 1 - \beta \ge 1/2$,
we have that there is $j \in [B]$ such that $ p_j \ge 1/2B$. On the other hand, $s_t(x) \le s_t(x')$ for $x \in S^{t_1}_j$ and $x' \in S^{t_1}_{j'} $, which implies that $\P(x_{t_1} = x) \ge \P(x_{t_1} = x')$. This implies that
\begin{equation*}
\frac{1}{2 B} \le p_j \le \frac{N^{t_1}_j}{\sum_{\ell=1}^j N^{t_1}_\ell},
\end{equation*}
or stated differently
\begin{equation*}
\frac{\sum_{\ell=1}^j N^{t_1}_\ell}{2 B} \le {N^{t_1}_j}.
\end{equation*}
If the exponential mechanism picks an expert $x_{t_1} \in S_j^{t_1}$ (which happens with probability $p_t \ge 1/2B$), \cref{lemma:adv-eliminiation} now implies that at the next switch at time $t_2$ with probability $1/6$ the adversary will eliminate $N^{t_1}_j/2$ experts from $S^{t_1}_j$ which implies
\begin{align*}
\sum_{\ell=1}^j N^{t_2}_\ell
& \le \sum_{\ell=1}^{j} N^{t_1}_\ell - \frac{1}{2} N^{t_1}_j \\
& \le \sum_{\ell=1}^{j} N^{t_1}_\ell -\frac{1}{4B} \sum_{\ell=1}^{j} N^{t_1}_\ell \\
& = \sum_{\ell=1}^{j} N^{t_1}_\ell (1 - \frac{1}{4B}).
\end{align*}
This proves the claim.
\end{proof}
\end{comment}
\subsection{Adaptive algorithms for DP experts}
While~\cref{alg:SVT-zero-loss} achieves near-optimal loss for settings with low-loss experts, it requires the knowledge of the value of $L\opt$. As $L\opt$ is not always available in practice, our goal in this section is to develop an adaptive version of~\cref{alg:SVT-zero-loss} which obtains similar regret without requiring the knowledge of $L\opt$. Similarly to other online learning problems, we propose to use the doubling trick~\cite{KalaiVe05} to design our adaptive algorithms. We begin with an estimate $L\opt_1 = 1$ of $L\opt$. Then we apply~\cref{alg:SVT-zero-loss} using $L\opt = L\opt_1$ until the exponential mechanism picks an expert that contradicts the current estimate of $L\opt$, that is, $\sum_{i=1}^{t-1} \ell_i(x_t) \gg L\opt_1$. We use the Laplace mechanism to check this privately. Noting that this happens with small probability if $L\opt \le L\opt_1$, we conclude that our estimate of $L\opt$ was too small and set a new estimate $L\opt_2 = 2 L\opt_1$ and repeat the same steps. As $L\opt \le T$, this process will stop in at most $\log T$ phases, hence we can divide the privacy budget equally among phases while losing at most a factor of $\log T$. We present the full details in~\Cref{alg:SVT-ada}.
\begin{algorithm}[t]
\caption{Adaptive Sparse-Vector for low-loss experts }
\label{alg:SVT-ada}
\begin{algorithmic}[1]
\REQUIRE Failure probability $\beta$
\STATE Set $\diffp_0 = \diffp / 2\log T$
\STATE $K = \log d + 2 \log(T/\beta)$, $\eta = \diffp_0/2K$, $B = \log T + \log(2T/\beta)$
\STATE Set $\bar L\opt = 1$, $L = \bar L\opt + 4/\eta + \frac{8B}{\diffp_0}$
\WHILE{$t < T$}
\STATE Run~\Cref{alg:SVT-zero-loss} with parameters $K$, $\bar L\opt$, $\eta$, $L$, $\beta$, $\diffp_0$
\IF{\Cref{alg:SVT-zero-loss} applies the exponential mechanism (step 12)}
\STATE Calculate $\bar L_t = \sum_{i=1}^{t-1} \ell_i(x_t) + \zeta_t$ where $\zeta_t \sim \lap(K/\diffp_0)$
\IF{$\bar L_t > \bar L\opt - 5K\log(T/\beta)/\diffp_0 $}
\STATE Set $\bar L\opt = 2 \bar L\opt$
\STATE Go to step 4
\ENDIF
\ENDIF
\ENDWHILE
\end{algorithmic}
\end{algorithm}
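\medskip\noindent The doubling logic of~\Cref{alg:SVT-ada} can be isolated in a small generic wrapper; in our setting \texttt{run\_phase} would encapsulate one run of~\Cref{alg:SVT-zero-loss} together with the Laplace check, with privacy budget $\diffp/(2\log T)$ per phase. A minimal sketch:
\begin{verbatim}
def doubling_wrapper(run_phase, T):
    # Generic doubling trick (sketch). run_phase(L_bar, t0) plays rounds
    # from t0 under the guess L_bar and returns the first round at which
    # the noisy check contradicted the guess (or T if it never did).
    L_bar, t = 1.0, 0
    while t < T:
        t = run_phase(L_bar, t)
        L_bar *= 2.0   # guess was too small (or the stream ended)
    return L_bar
\end{verbatim}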
We have the following guarantees for the adaptive algorithm.
\iftoggle{arxiv}{}{
We defer the proof to~\Cref{sec:proof-ub-realizable-ada}.
}
\begin{theorem}
\label{thm:ub-realizable-ada}
Let $\ell_1,\dots,\ell_T \in [0,1]^d$ be chosen by an oblivious adversary such that there is $x\opt \in [d]$ such that $\sum_{t=1}^T \ell_t(x\opt) \le L\opt$. Let $0 < \beta < 1/2$. Then
\Cref{alg:SVT-ada} is $\diffp$-DP and with probability at least $1-O(\beta)$ has regret
\begin{equation*}
\sum_{t=1}^T \ell_t(x_t) \le O\left( L\opt \log(d/\beta) \log(T) + \frac{\log^2(d)\log(T) + \log(T/\beta) \log(d/\beta) \log(T)}{\diffp} \right).
\end{equation*}
\end{theorem}
\iftoggle{arxiv}{
\begin{proof}
First we prove privacy. Note that $\bar L\opt$ can change at most $\log(T)$ times as $L\opt \le T$. Therefore, we have at most $\log(T)$ applications of~\Cref{alg:SVT-zero-loss}. Each one of these is $\diffp/(2\log(T))$-DP. Moreover, since we have at most $K$ applications of the exponential mechanism in~\Cref{alg:SVT-zero-loss}, we have at most $K \log(T)$ applications of the Laplace mechanism in~\Cref{alg:SVT-ada}. Each of these is $\diffp/2K\log(T)$-DP. Overall, privacy composition implies that the final privacy is $\diffp$-DP.
Now we prove utility. \Cref{alg:SVT-ada} consists of at most $\log(T)$ applications of~\Cref{alg:SVT-zero-loss} with different values of $\bar L\opt$. We will show that each of these applications incurs low regret.
Consider an application of~\Cref{alg:SVT-zero-loss} with $\bar L\opt$. If $\bar L\opt \ge L\opt$, then~\Cref{thm:ub-realizable} implies that the regret is at most $$O\left( \bar L\opt \log(d/\beta) + \frac{\log^2(d) + \log(T/\beta) \log(d/\beta)}{\diffp_0} \right).$$
Now consider the case where $\bar L\opt \le L\opt$. We will show that~\Cref{alg:SVT-ada} will double $\bar L\opt$ and that the regret of~\Cref{alg:SVT-zero-loss} up to that time-step is not too large.
Let $t_0$ be the largest $t$ such that $\min_{x \in [d]} \sum_{i=1}^{t_0} \ell_i(x) \le \bar L\opt$. Note that up to time $t_0$, the best expert had loss at most $\bar L\opt$, hence the regret up to time $t_0$ is $$O\left( \bar L\opt \log(d/\beta) + \frac{\log^2(d) + \log(T/\beta) \log(d/\beta)}{\diffp_0} \right).$$
Now let $t_1$ denote the next time-step when~\Cref{alg:SVT-zero-loss} applies the exponential mechanism. Sparse-vector guarantees that in the range $[t_0,t_1]$ the algorithm suffers regret at most $O\left( \bar L\opt + \frac{\log(d) + \log(T/\beta) }{\diffp_0} \right)$. Moreover, the guarantees of the Laplace mechanism imply that at this time-step, $\bar L_t \ge \bar L\opt - 5K\log(T/\beta)/\diffp_0$ with probability $1-\beta$, hence~\Cref{alg:SVT-ada} will double $\bar L\opt$ and run a new application of~\Cref{alg:SVT-zero-loss}. Overall, an application of~\Cref{alg:SVT-zero-loss} with $\bar L\opt \le L\opt$ results in regret $(L\opt + \frac{1}{\diffp_0}) \cdot \mathsf{poly} (\log \frac{Td}{\beta})$ and doubles $\bar L\opt$. Finally, note that if $\bar L\opt \ge L\opt + 5\log(T/\beta)/\diffp_0$ then with probability $1-\beta$ the algorithm will not double the value of $\bar L\opt$. As each application of ~\Cref{alg:SVT-zero-loss} has regret $$O\left( \bar L\opt \log(d/\beta) + \frac{\log^2(d) + \log(T/\beta) \log(d/\beta)}{\diffp_0} \right),$$ and $\bar L\opt$ is bounded by $L\opt + 5\log(T/\beta)/\diffp_0$ with high probability, this proves the claim.
\end{proof}
}
\iftoggle{arxiv}{}{We also present a different binary-tree based mechanism for this problem with similar rates in~\Cref{sec:bt-experts}.}
\iftoggle{arxiv}{
\subsection{A binary-tree based algorithm}
In this section, we present another algorithm which achieves the optimal regret for settings with zero expert loss. Instead of using sparse-vector, this algorithm builds on the binary tree mechanism. The idea is to repeatedly select $O(\mathsf{poly}(\log(dT)))$ random good experts and apply the binary tree mechanism to calculate a private version of their aggregate losses. Whenever all of the chosen experts are detected to have non-zero loss, we choose a new set of good experts. Similarly to~\cref{alg:SVT-zero-loss}, we can show that each new phase reduces the number of good experts by a constant factor, as an oblivious adversary does not know the choices of the algorithm; hence there are only $O(\mathsf{poly}(\log(dT)))$ phases.
We provide a somewhat informal description of the algorithm in~\cref{alg:Bin-tree-zero-loss}. This algorithm also achieves regret $O(\mathsf{poly}(\log(dT))/\diffp)$ in the realizable case. We do not provide a proof as it is somewhat similar to that of~\cref{thm:ub-realizable}.
\begin{algorithm}
\caption{Binary-tree algorithm for zero loss experts (sketch)}
\label{alg:Bin-tree-zero-loss}
\begin{algorithmic}[1]
\STATE Set $k=0$ and $B= O(\mathsf{poly}(\log(dT)))$
\WHILE{$t \le T$\,}
\STATE Use the exponential mechanism with score function $s(x) = \sum_{i=1}^t \ell_i(x)$ to privately select a set $S_k$ of $B$ experts from $[d] \setminus \cup_{0 \le i \le k} S_i$
\STATE Apply binary tree for each expert $x \in S_k$ to get private aggregate estimates for $ \sum_{i=1}^t \ell_i(x)$ for every $t \in [T]$
\STATE Let $\hat c_{t,x}$ denote the output of the binary tree for expert $x \in S_k$ at time $t$
\WHILE{there exists $x \in S_k$ such that $\hat c_{t,x} \le O(\mathsf{poly}(\log(dT))/\diffp)$}
\STATE Receive $\ell_t : [d] \to [0,1]$
\STATE Choose $x_t \in S_k$ that minimizes $\hat c_{t,x}$
\STATE Pay error $\ell_t(x_t)$
\STATE $t = t+1$
\ENDWHILE
\STATE $k = k + 1$
\ENDWHILE
\end{algorithmic}
\end{algorithm}
}
\section{Private algorithms for prediction from experts}
\label{sec:upper-bounds}
We begin our algorithmic contribution by developing new algorithms for oblivious (\cref{sec:ub-obl-sd}) and stochastic adversaries (\cref{sec:ub-stoch}).
The main idea is to save the privacy budget by limiting the number of model updates. However, the way in which this is done varies significantly with the adversary: for stochastic adversaries, we allow for $\log(T)$ updates at fixed time-steps $t=2^i$, while for oblivious adversaries we employ a more adaptive time-step strategy for updates based on the underlying data.
\begin{comment}
The following theorem summarizes the guarantees of~\cref{alg:stoch-adv}. For simplicity, we only state the result for the private experts setting, however, we note that a more general statement holds for general convex functions.
\begin{theorem}
\label{thm:ub-stoch}
Let $\ell_1,\dots,\ell_T \in [0,1]^d$ be chosen by a stochastic adversary, $\ell_i \simiid P$. \cref{alg:stoch-adv} is \ed-DP and has regret
\begin{equation*}
\E_{\ell_t \sim P} \left[\ell_t(x_t) - \min_{x \in [d]} \ell_t(x) \right] \le O \left( \sqrt{\frac{\log(d)}{2^i}} + \frac{\log(d)}{2^i \diffp} \right).
\end{equation*}
Therefore at phase $i$, that is $2^{i} \le t \le 2^{i+1}$, the total regret is at most
\begin{equation*}
\E\left[ \sum_{t=2^{i}}^{2^{i+1}}\ell_t(x_t) - \min_{x \in [d]} \sum_{t=2^{i}}^{2^{i+1}} \ell_t(x) \right]
\le O \left(\sqrt{2^i \log(d)} + \frac{\log(d)}{ \diffp}\right).
\end{equation*}
\end{theorem}
\begin{proof}
The privacy claim is immediate as each sample $\ell_i$ is used only once in running a single \ed-DP algorithm. Now we prove the claim about utility. Consider time-step $t=2^i$ where we invoke a DP-SCO algorithm with $t/2 = 2^{i-1}$ samples. Therefore the guarantees of such algorithms~\cite[theorem 6]{AsiFeKoTa21} imply that at iteration $t$ we have
\begin{equation*}
\E_{\ell_t \sim P} \left[\ell_t(x_t) - \min_{x \in [d]} \ell_t(x) \right] \le O \left( \sqrt{\frac{\log(d)}{2^i}} + \frac{\log(d)}{2^i \diffp} \right).
\end{equation*}
Therefore at phase $i$, that is $2^{i} \le t \le 2^{i+1}$, the total regret is at most
\begin{equation*}
\E\left[ \sum_{t=2^{i}}^{2^{i+1}}\ell_t(x_t) - \min_{x \in [d]} \sum_{t=2^{i}}^{2^{i+1}} \ell_t(x) \right]
\le O \left(\sqrt{2^i \log(d)} + \frac{\log(d)}{ \diffp}\right).
\end{equation*}
As we have at most $\log(n)$ phases with $1 \le i \le \log(T)$, the total regret of~\cref{alg:stoch-adv} is at most $O(\sqrt{T\log (d)} + {\log(d) \log(T)}/{ \diffp})$ which proves the claim.
\end{proof}
\end{comment}
\begin{comment}
\subsection{Oblivious adversaries}
\label{sec:ub-obl}
In this section we consider the stronger model of oblivious adversaries and develop an algorithm with regret $O(\sqrt{T} + T^{1/3}/\diffp^{2/3})$ in this setting. Our algorithm builds on follow-the-lazy-leader where we identify update-steps using sparse-vector-technique. We provide full details in~\cref{alg:obl-adv}. \\
\hac{TODO: need to figure out proof details.}
\begin{algorithm}
\caption{Follow-the-lazy-leader with sparse-vector}
\label{alg:obl-adv}
\begin{algorithmic}[1]
\REQUIRE Noise parameter $\Delta > 0$, switching budget $K$
\STATE Set $x_0 = 1$
\STATE Set $\diffp_i = \diffp/\sqrt{K}$ and $k=0$
\STATE Sample $p_0 \sim \lap(\Delta)^d$
\FOR{$t=1$ to $T$\,}
\STATE $x_t = x_{t-1}$
\STATE Define queries $q_x(\ell_{1:t-1}) = p_0(x_t) - p_0(x) + \sum_{i=1}^{t-1} \ell_i(x_t) - \ell_i(x)$ for $x \in [d]$
\STATE Use Sparse-Vec-tech with $\diffp_i$ privacy to identify if $q_x(\ell_{1:t-1}) > -\Delta/10$ for $x \neq x_t$ \hac{formalize}
\IF{$k<K$ and Sparse-Vec identifies above thershold}
\STATE Sample new $p_0 \sim \lap(\Delta)^d$
\STATE $x_t = \argmin_{x \in [d]} p_0(x) + \sum_{i=1}^{t-1} \ell_i(x) $
\STATE $k = k + 1$
\ENDIF
\STATE Receive $\ell_t \in \R^d$
\STATE Pay cost $\ell_t(x_t)$
\ENDFOR
\end{algorithmic}
\end{algorithm}
The following theorem shows our improved upper bound for oblivious adversaries.
\begin{theorem}
\label{thm:ub-obl}
Let $\ell_1,\dots,\ell_T \in [0,1]^d$ be chosen by an oblivious adversary. \cref{alg:obl-adv} with $\Delta = T^{1/3}/\diffp^{2/3}$ is \ed-DP and has regret
\begin{equation*}
\E\left[ \sum_{t=1}^T \ell_t(x_t) - \min_{x \in [d]} \sum_{t=1}^T \ell_t(x) \right]
\le O \left(\sqrt{T\log(d)} + \frac{ T^{1/3} \log(d)}{\diffp^{2/3}} \right).
\end{equation*}
\end{theorem}
\begin{proof}
The privacy proof follows from the guarantees for sparse-vector and the report-noisy-max mechanisms. \hac{formalize more}
The utility proof has two main steps. In the first step, we analyze the same procedure while allowing $x_t$ to depend on $\ell_t$, that is, we analyze the regret of $\bar x_t = \argmin_{x \in [d]} p_0(x) + \sum_{i=1}^{t} \ell_i(x) $ and show the desired upper bound. In the second step, we show that the regret of the original algorithm is not different from the procedure of step $1$. Let us denote $M(v) = \argmin_{x \in [d]} v(x)$ for $v: [d] \to [0,1]$. The proof will follow directly from the following two lemmas.
\begin{lemma}
\label{lemma:step-one}
For $\diffp \le 1/T^{1/4}$, we have
\begin{equation*}
\E \left[ \sum_{t=1}^T \<\bar x_t,\ell_t\> - \min_{x\opt \in[d]} \sum_{t=1}^T \<x\opt,\ell_t\> \right]
\le O(\Delta).
\end{equation*}
\end{lemma}
\begin{lemma}
\label{lemma:step-two}
For $\diffp \le 1/T^{1/4}$, we have
\begin{equation*}
\E \left[\sum_{t=1}^T \<x_t - \bar x_t, \ell_t\> \right]
\le O(\sqrt{T}).
\end{equation*}
\end{lemma}
The choice of $\Delta = T^{1/3}/\diffp^{2/3}$ now implies the claim.
\end{proof}
Now we proceed to prove Lemmas~\ref{lemma:step-one} and~\ref{lemma:step-two}.
\begin{proof}(\cref{lemma:step-two})
We will show that each $x_t$ is $(1/\sqrt{T},\delta)$-DP. To this end, we can view the algorithm as a sequence of applications of the 1-AboveThreshold algorithm and then an application of the Laplace mechanism where each such phase---application of AboveThreshold and Lapalce---is $\diffp_i$-DP. As $x_t$ is a subset of the output of one phase, it is also $\diffp_i$-DP. \hac{formalize}
Recalling that $x_t = M(p_\ell + \sum_{i=1}^{t-1} \ell_i)$ and $\bar x_t = M(p_\ell + \sum_{i=1}^{t} \ell_i)$ \hac{need to say something about $k<K$ whp here}, this now implies that $x_t$ and $\bar x_t$ are $(\frac{1}{\sqrt{T}},\delta)$-indistinguishable. Therefore we have
\begin{align*}
\E \left[\sum_{t=1}^T \<x_t - \bar x_t, \ell_t\> \right]
& \le \sum_{t=1}^T \lone{ \E x_t - \E \bar x_t} \\
& \le \sum_{t=1}^T \lone{ \sum_{j=1}^d (P(x_t = e_j) - P(\bar x_t = e_j) ) e_j } \\
& \le \sum_{t=1}^T \sum_{j=1}^d \left|P(x_t = e_j) - \delta + P(\bar x_t = e_j)\right| + \delta T d \\
& \le \sum_{t=1}^T \sum_{j=1}^d \left| P(\bar x_t = e_j) (\frac{P( x_t = e_j) - \delta}{P(\bar x_t = e_j)} - 1) \right| + \delta T d\\
& \le \sum_{t=1}^T \sum_{j=1}^d \left| P(\bar x_t = e_j) (e^\diffp - 1) \right| + \delta T d \\
& \le T (e^\diffp - 1) + \delta T d
\le O(\sqrt{T}).
\end{align*}
\end{proof}
\begin{proof}(\cref{lemma:step-one})
Let $t_1<t_2<\dots <t_K$ denote the switching times. We can show that with high probability $K \le O((T \diffp)^{2/3}) \le \sqrt{T}$ as we assume without loss of generality that $\diffp \le 1/T^{1/4}$. We will upper bound the regret after conditioning on the switching times $(t_1,\dots,t_k)$. First, note that using Be-The-Leader lemma (Lemma 3.1 in \cite{KalaiVe05}) we have
\begin{align*}
\sum_{t=1}^T \<\bar x_t,\ell_t\>
& = \sum_{t=1}^T \<M(p_\ell + \sum_{i=1}^{t} \ell_i),\ell_t\> \\
& \le \sum_{t=1}^T \<M( \sum_{i=1}^{T} \ell_i),\ell_t\>
+ \sum_{t=1}^T \|p_{t+1} - p_t\|_{\infty},
\end{align*}
which implies the regret is upper bounded by
\begin{align*}
\sum_{t=1}^T \<\bar x_t,\ell_t\> - \min_{x\opt \in[d]} \sum_{t=1}^T \<x\opt,\ell_t\>
\le \sum_{t=1}^T \|p_{t+1} - p_t\|_{\infty},
\end{align*}
We will upper bound the expected regret by conditioning first on $(t_1,\dots,t_K)$. Note that we will have $p_0,p_1,\dots,p_{K+1}$ and the regret is upper bounded by
\begin{align*}
\E \left[ \sum_{t=1}^T \|p_{t+1} - p_t\|_{\infty} \mid t_1,t_2,\dots,t_K \right]
& \le \sum_{j=1}^K \E \left[ \|p_{j+1} - p_j\|_{\infty} \mid t_1,t_2,\dots,t_K \right] \\
& = \sum_{j=1}^K \E \left[ \|p_{j+1} - p_j\|_{\infty} \mid t_j,t_{j+1},t_{j+2} \right],
\end{align*}
where the equality follows since each $p_j$ is independent of $t_{j'}$ for $j' < j$ or $j' > j+1$ \hac{prove this?}.
We have two observations that allow to upper bound this quantity. First, without any conditioning, $p_{j+1}$ and $p_j$ have the same distribution. Secondly, the switching times $(t_j,t_{j+1},t_{j+2})$ are $O(1/\sqrt{T})$-DP. We can show that \hac{show this: something not quite right here..}
\begin{equation*}
\E \left[ \|p_{j+1} - p_j\|_{\infty} \mid t_j,t_{j+1},t_{j+2} \right]
\le O\left( \frac{\linf{p_j + p_{j+1}}}{\sqrt{T}} \right)
\le O\left( \frac{\Delta}{\sqrt{T}} \right).
\end{equation*}
Overall as $K \le O(\sqrt{T} \log(T))$ we get that
\begin{equation*}
\E \left[ \sum_{t=1}^T \|p_{t+1} - p_t\|_{\infty} \mid t_1,t_2,\dots,t_K \right]
\le O\left( K \frac{\Delta}{\sqrt{T}} \right)
\le O(\Delta).
\end{equation*}
The law of iterated expectation now gives the claim.
\end{proof}
\begin{lemma}
\label{lemma:limited-switches}
Let $\diffp \le 1/T^{1/4}$. For $\Delta \ge T^{1/3}/\diffp^{2/3}$ and $K = \sqrt{T} \log(T)$,
\begin{equation*}
P(k \ge K) \le 1/T^{10}.
\end{equation*}
\end{lemma}
\begin{proof}
\hac{complete}
Idea: we define a good switch and a bad switch. A good switch is one that has a sequence of size at least $\Delta$. We will show that each sequence is good with constant probability. This implies that the number of good and bad switches is roughly $T/\Delta = (T\diffp)^{2/3} \le \sqrt{T}$.
What is expected length of a sequence?
\end{proof}
Ideas for analysis:
\begin{itemize}
\item Important: we will analyze the algorithm for $\diffp \le 1/T^{1/4}$ as we can get regret $\sqrt{T}$ for such $\diffp$.
\item We will have $k = O((T \diffp)^{2/3}) \le O(\sqrt{T})$ switches whp.
\item Let $t_1<t_2<\dots<t_k$ denote the switching times with noise vectors $p_1,p_2,\dots,p_k \in \R^d$.
\item We can write in phase $\ell$ for $t_{\ell-1}<t<t_\ell$
\begin{equation*}
x_t = \argmin_{x \in [d]} p_\ell(x) + \sum_{i=1}^{t-1} \ell_i(x)
\end{equation*}
\item We will have two types of error in the regret:
\begin{enumerate}
\item Error from changing $p_i$: as we have at most $k=O(\sqrt{T})$ switches, this will result in error $k |p_{t_2} - p_{t_1}|$. We have to show that $|p_{t_2} - p_{t_1}| = O(1)$.
\item Error for difference between follow the leader and follow the previous leader. For this, we use the fact that each iterate $x_t$ is $O(1/\sqrt{T})$-DP for $\diffp \le 1/T^{1/4}$ which will imply an error of $O(1/\sqrt{T})$ for each round, that is, error $O(\sqrt{T})$ for the final regret.
\end{enumerate}
\end{itemize}
\end{comment}
\subsection{Oblivious adversaries using shrinking dartboard}
\label{sec:ub-obl-sd}
In this section, we present our main algorithms for DP-OPE with oblivious adversaries. We build on the shrinking dartboard algorithm~\citep{GeulenVoWi10} to develop a private algorithm that improves over the best existing results for both pure and approximate DP. For $\diffp$-DP, our algorithm obtains regret $\diffp^{-1} \sqrt{T} \log d$, which nearly matches the non-private regret for constant $\diffp$; this is the first pure-DP algorithm that achieves sub-linear regret for oblivious adversaries. For approximate DP, our algorithm obtains regret $\sqrt{T \log d } + \diffp^{-1} (T\log(1/\delta))^{1/3} \log d$; hence it incurs negligible privacy cost compared to the non-private regret in the high-dimensional regime. Previous algorithms obtain regret roughly $\diffp^{-1} \sqrt{T \log d \log(1/\delta)}$ for $d \ge T$, which is a factor of $\diffp^{-1} \sqrt{\log(1/\delta)}$ larger than the non-private regret.
Key to our improvements in this section (similarly to limited switching OPE; \citealp{GeulenVoWi10}) is the observation that the distribution of the multiplicative weights (MW) algorithm does not change significantly between consecutive time-steps. Thus, by using a correlated sampling procedure, we can preserve the same marginal distribution as MW while updating the model with small probability. As this limits the number of model updates, this will allow us to assign higher privacy budget for each model update, hence improving its utility.
Another obstacle that arises from this approach is that the switching probability (to preserve the same marginal distribution) depends on the underlying data $p_{\mathsf{switch}} = 1 - (1-\eta)^{\ell_{t-1}(x_{t-1})}$. This is clearly not private as the probability of switching is zero whenever $\ell_{t-1}(x_{t-1})=0$. To tackle this issue, we guarantee privacy by adding another switching step that forces the algorithm to switch with probability $p$ regardless of the underlying data.
\begin{algorithm}
\caption{Private Shrinking Dartboard}
\label{alg:SD}
\begin{algorithmic}[1]
\REQUIRE Step size $\eta > 0$, switching probability $p \in [0,1]$, switching budget $K$
\STATE Set $w^1_i = 1$, $p^1_i = \frac{1}{d}$ for all $i \in [d]$
\STATE Choose expert $x_1$ from the distribution $P^1 = (p^1_1,\dots,p^1_d)$
\STATE Set $k = 1$
\FOR{$t=2$ to $T$\,}
\STATE Set $w^t_i = w^{t-1}_i (1 - \eta)^{\ell_{t-1}(i)}$ for all $i \in [d]$
\STATE Set $p^t_i = \frac{w^t_i}{\sum_{i'=1}^d w^t_{i'}}$ for all $i \in [d]$
\STATE Sample $z_t \sim \mathsf{Ber}(1-p)$
\IF{$z_t=1$}
\STATE Sample $z_t \sim \mathsf{Ber}(w^t_{x_{t-1}}/w^{t-1}_{x_{t-1}})$
\ENDIF
\IF{$z_t = 1$}
\STATE $x_t = x_{t-1}$
\ELSIF{$k < K$}
\STATE $k = k + 1$
\STATE Sample $x_t$ from $P^t$
\ENDIF
\STATE Receive $\ell_t: [d] \to [0,1]$
\STATE Pay cost $\ell_t(x_t)$
\ENDFOR
\end{algorithmic}
\end{algorithm}
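\medskip\noindent A Python sketch of~\cref{alg:SD} is given below for losses fixed in advance. The two-coin structure (a forced-switch coin with bias $p$, then the correlated-sampling coin with bias $w^t_{x_{t-1}}/w^{t-1}_{x_{t-1}}$) is the essential mechanism; parameter calibration is as in the corollaries below.
\begin{verbatim}
import numpy as np

def private_shrinking_dartboard(losses, eta, p, K, seed=0):
    # losses: T x d array in [0,1], fixed in advance (oblivious adversary).
    rng = np.random.default_rng(seed)
    T, d = losses.shape
    w = np.ones(d)                       # multiplicative weights w^1
    x = int(rng.integers(d))             # x_1 ~ P^1 (uniform)
    k, total = 1, float(losses[0, x])
    for t in range(1, T):
        w_prev = w.copy()
        w = w * (1.0 - eta) ** losses[t - 1]        # w^t from ell_{t-1}
        stay = False
        if rng.random() < 1.0 - p:                  # forced-switch coin
            stay = rng.random() < w[x] / w_prev[x]  # correlated sampling
        if not stay and k < K:
            k += 1
            x = int(rng.choice(d, p=w / w.sum()))   # fresh draw from P^t
        total += float(losses[t, x])
    return total
\end{verbatim}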
The following theorem states the privacy guarantees and the upper bounds on the regret of~\cref{alg:SD}. We defer the proof to~\cref{sec:proof-thm-ub-DS}.
\begin{theorem}
\label{thm:ub-priv-DS}
Let $\ell_1,\dots,\ell_T \in [0,1]^d$ be chosen by an oblivious adversary. \cref{alg:SD} with $p < 1/2$, $\eta<1/2$, and $K = 4 T p $ has regret
\begin{equation*}
\E\left[ \sum_{t=1}^T \ell_t(x_t) - \min_{x \in [d]} \sum_{t=1}^T \ell_t(x) \right]
\le \eta T
+ \frac{\ln d}{\eta} + 2 T e^{-Tp/3}.
\end{equation*}
Moreover, \cref{alg:SD} is $(\diffp,\delta)$-DP where
\begin{equation*}
\diffp =
\begin{cases}
\frac{5\eta}{p} + 100 T p \eta^2 + 20 \eta \sqrt{ T p \log(1/\delta)} & \text{if } \delta>0 ;\\
\frac{\eta}{p} + 16 T p \eta & \text{if } \delta = 0 .
\end{cases}
\end{equation*}
\end{theorem}
\iftoggle{arxiv}{
Before proving~\cref{thm:ub-priv-DS}, we illustrate the implication of this result when optimizing the parameters in the algorithm. We have the following regret for approximate DP (see~\cref{sec:apdx-cor-sd-appr} for the proof).}
{
We illustrate the implication of this result when optimizing the parameters in the algorithm. We have the following regret for approximate DP (see~\cref{sec:apdx-cor-sd-appr} for the proof).
}
\begin{corollary}
\label{cor:sd-appr}
Let $\ell_1,\dots,\ell_T \in [0,1]^d$ be chosen by an oblivious adversary.
Let $\diffp \le 1$ and set $\diffp_0 = \min(\diffp/2, \log^{1/3}(1/\delta) T^{-1/6} \sqrt{\ln d})$ and $0 < \delta \le 1$ such that $T \ge \Omega(\log(1/\delta))$. Setting $p = 1/(T \log(1/\delta))^{1/3}$ and $\eta = p \diffp_0/20$, \cref{alg:SD} is \ed-DP and has regret
\begin{equation*}
\E\left[ \sum_{t=1}^T \ell_t(x_t) - \min_{x \in [d]} \sum_{t=1}^T \ell_t(x) \right]
\le O \left( \sqrt{T \ln d } + \frac{T^{1/3} \log^{1/3}(1/\delta) \ln d}{\diffp} \right).
\end{equation*}
\end{corollary}
Moreover, we have the following regret for pure DP (proof in~\cref{sec:apdx-cor-pure}).
\begin{corollary}
\label{cor:pure}
Let $\ell_1,\dots,\ell_T \in [0,1]^d$ be chosen by an oblivious adversary.
Let $\diffp \le 1$. Setting $p = 1/\sqrt{T}$ and $\eta = p \diffp/20$, \cref{alg:SD} is $\diffp$-DP and has regret
\begin{equation*}
\E\left[ \sum_{t=1}^T \ell_t(x_t) - \min_{x \in [d]} \sum_{t=1}^T \ell_t(x) \right]
\le O \left( \frac{\sqrt{T} \ln d}{\diffp} \right).
\end{equation*}
\end{corollary}
\subsubsection{Private batched shrinking dartboard}
For smaller values of $\diffp$, we present a batch version of~\cref{alg:SD} that groups losses in a batch of size $B$ and applies the same update.
For a batch size $B$, we define the grouped loss
\begin{equation*}
\tilde \ell_t = \frac{1}{B} \sum_{i=B(t-1)+1}^{Bt} \ell_i.
\end{equation*}
The batch version of~\cref{alg:SD} then runs~\cref{alg:SD} on the grouped loss $\tilde \ell_t $ for $\tilde T = \ceil{T/B}$ iterations.
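In code, forming the grouped losses is a one-liner; the following sketch drops a trailing partial batch for simplicity, whereas the definition above handles it through $\tilde T = \ceil{T/B}$.
\begin{verbatim}
import numpy as np

def grouped_losses(losses, B):
    # tilde_ell_t = average of the t-th batch of B consecutive losses;
    # a trailing partial batch is dropped here for simplicity.
    T, d = losses.shape
    T_tilde = T // B
    return losses[: T_tilde * B].reshape(T_tilde, B, d).mean(axis=1)
\end{verbatim}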
\iftoggle{arxiv}{%
The following theorem states the regret of this algorithm. We prove the theorem in~\cref{sec:apdx-thm-ub-priv-DS-batch}.
\begin{theorem}
\label{thm:ub-priv-DS-batch}
Let $\ell_1,\dots,\ell_T \in [0,1]^d$ be chosen by an oblivious adversary. \cref{alg:SD} with batch size $1 \le B \le T$, $p < 1/2$, $\eta<1/2$, and $K = 4 T p/B $ has regret
\begin{equation*}
\E\left[ \sum_{t=1}^T \ell_t(x_t) - \min_{x \in [d]} \sum_{t=1}^T \ell_t(x) \right]
\le \eta T
+ \frac{B \ln d}{\eta} + 2 T e^{-Tp/3B}.
\end{equation*}
Moreover, for $\delta>0$, \cref{alg:SD} is $(\diffp,\delta)$-DP where
\begin{equation*}
\diffp =
\frac{5\eta}{Bp} + 100 T p \eta^2/B^3 + \frac{20\eta}{B} \sqrt{12 T p/B \log(1/\delta)}
.
\end{equation*}
\end{theorem}
Optimizing the batch size $B$, we obtain the following regret. \cref{fig:comp} illustrates that this algorithm offers improvements over the original version (without batches) in the high-privacy regime. We defer the proof to~\cref{sec:adpx-cor-sd-batch}.
\begin{corollary}
\label{cor:sd-batch}
Let $\frac{\log^{2/3}(1/\delta) \log(d)}{T} \le \diffp \le \frac{\log^{2/3}(1/\delta) \log(d)}{T^{1/3}} $ and $\delta \le 1$. Setting $B = \frac{\log^{2/5}(1/\delta) \log^{3/5}(d)}{T^{1/5} \diffp^{3/5}} $, $p = (\frac{B}{T \log(1/\delta)})^{1/3}$ and $\eta = B p \diffp/40$, \cref{alg:SD} is \ed-DP and has regret
\begin{equation*}
\E\left[ \sum_{t=1}^T \ell_t(x_t) - \min_{x \in [d]} \sum_{t=1}^T \ell_t(x) \right]
\le O \left( \frac{T^{2/5} \log^{1/5}(1/\delta) \log^{4/5}(d) }{\diffp^{4/5}} + 2 T e^{-Tp/3B} \right).
\end{equation*}
\end{corollary}}
{
Following similar steps as in the previous section, we obtain the following regret bounds for this algorithm (proof in~\cref{sec:adpx-cor-sd-batch}). \cref{fig:comp} illustrates that this algorithm offers improvements over the original version (without batches) in the high-privacy regime.
\begin{theorem}
\label{cor:sd-batch}
Let $\frac{\log^{2/3}(1/\delta) \log(d)}{T} \le \diffp \le \frac{\log^{2/3}(1/\delta) \log(d)}{T^{1/3}} $ and $\delta \le 1$. Setting $B = \frac{\log^{2/5}(1/\delta) \log^{3/5}(d)}{T^{1/5} \diffp^{3/5}} $, $p = (\frac{B}{T \log(1/\delta)})^{1/3}$ and $\eta = B p \diffp/40$, \cref{alg:SD} is \ed-DP and has regret
\begin{equation*}
\E\left[ \sum_{t=1}^T \ell_t(x_t) - \min_{x \in [d]} \sum_{t=1}^T \ell_t(x) \right]
\le O \left( \frac{T^{2/5} \log^{1/5}(1/\delta) \log^{4/5}(d) }{\diffp^{4/5}} + 2 T e^{-Tp/3B} \right).
\end{equation*}
\end{theorem}
}
\begin{comment}
\subsubsection{Implications for DP-OCO over the sphere}
\label{sec:imp-oco}
Building on our upper bounds for private prediction from experts, in this section we develop algorithms that improve over the best existing bounds for DP-OCO in $\ell_2$-geometry. Our algorithms construct a covering of the parameter space then apply our private shrinking dartboard algorithm over this cover. By optimizing the size of the cover to balance the error from the approximation error and the error due to the dimensionality, we obtain the following regret for DP-OCO in $\ell_2$-geometry.
We defer the proof to~\cref{sec:apdx-thm-oco-imp}.
\begin{theorem}
\label{sec:thm-oco-imp}
Let $\mc{X} = \{x \in \R^d: \ltwo{x} \le D\}$ and $\ell_1,\dots,\ell_T : \mc{X} \to \R$ be convex and $L$-Lipschitz functions chosen by an oblivious adversary.
There is an
\ed-DP that has regret
\begin{equation*}
\E\left[ \sum_{t=1}^T \ell_t(x_t) - \min_{x \in \mc{X}} \sum_{t=1}^T \ell_t(x) \right]
\le LD \cdot O \left( \sqrt{T d \log T } + \frac{T^{1/3} d \log^{1/3}(1/\delta) \log T}{\diffp} \right)
\end{equation*}
\end{theorem}
In the high-privacy regime, this result can improve over the previous work~\cite{KairouzMcSoShThXu21} which has regret $\sqrt{T} d^{1/4}/\sqrt{\diffp}$. For example, if $d=1$ and $\diffp = T^{-1/4}$, then our regret is roughly $T^{7/12}$ while their regret is $T^{5/8}$. \hac{modify text here..}
\end{comment}
\subsection{Stochastic adversaries}
\label{sec:ub-stoch}
In this section, we consider stochastic adversaries and present a reduction from private online learning problems---including OPE and online convex optimization (OCO) in general---to (offline) differentially private stochastic convex optimization (DP-SCO)~\citep{BassilyFeTaTh19}. This reduction demonstrates that private online learning with stochastic adversaries is not harder than (offline) DP-SCO: any algorithm for DP-SCO can be transformed into an online algorithm for stochastic adversaries with nearly the same rate (up to logarithmic factors). Using existing algorithms for DP-SCO~\citep{AsiFeKoTa21}, this reduction then results in regret $\sqrt{T\log d} + \diffp^{-1} \log d \log T$ for private prediction from experts (\cref{cor:DP-exp-stoch}) and regret $\sqrt{T} + \diffp^{-1} \sqrt{d} \log T$ for general DP online convex optimization in $\ell_2$-geometry (\cref{cor:DP-OCO}).
Our reduction builds on algorithms for DP-SCO. In this problem, the loss functions are sampled i.i.d. from some distribution $\ell_i \simiid P$ where $\ell_i : \mc{X} \to \R$ and the goal is to minimize the population loss $L(x) = \E_{\ell \sim P}[\ell(x)]$ given $n$ samples $\ell_1,\dots,\ell_n \simiid P$. The performance of an algorithm $\A$ given $n$ samples is measured by its excess population loss, that is, $\Delta_n(\A) = \E[{L(\A(\ell_1,\dots,\ell_n)) - \inf_{x \in \mc{X}}L(x)}]$.
Given an algorithm $\A$ for DP-SCO,
we design an online algorithm that updates the model only a logarithmic number of times---during the time-steps $t=1,2,4,8,\dots,T$. For each such time-step $t$, we run $\A$ on the past $t/2$ samples to release the next model. As the loss functions are from the same distribution, the previous model generated by $\A$ should perform well for future loss functions.
We present the full details in~\cref{alg:stoch-adv}.
\begin{algorithm}
\caption{Limited Updates for Online Optimization with Stochastic Adversaries}
\label{alg:stoch-adv}
\begin{algorithmic}[1]
\REQUIRE Parameter space $\mc{X}$, DP-SCO algorithm $\A$
\STATE Set $x_0 \in \mc{X}$
\FOR{$t=1$ to $T$\,}
\IF{$t=2^\ell$ for some integer $\ell \ge 1$}
\STATE Run an optimal \ed-DP-SCO algorithm $\A$ over $\mc{X}$ with samples $\ell_{t/2},\dots,\ell_{t-1}$.
\STATE Let $x_t$ denote the output of the private algorithm
\ELSE
\STATE Let $x_t = x_{t-1}$
\ENDIF
\STATE Receive $\ell_t: \mc{X} \to \R$.
\STATE Pay cost $\ell_t(x_t)$
\ENDFOR
\end{algorithmic}
\end{algorithm}
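\medskip\noindent A Python sketch of~\cref{alg:stoch-adv} for the experts case follows; here \texttt{dpsco} stands for any \ed-DP-SCO routine that maps a batch of i.i.d. loss vectors to an expert, and is an assumption of the sketch rather than a specific algorithm. Since the training windows $\{\ell_{t/2},\dots,\ell_{t-1}\}$ for $t=2,4,8,\dots$ are disjoint, each sample is used in exactly one private call.
\begin{verbatim}
import numpy as np

def limited_updates(losses, dpsco, x0=0):
    # losses: T x d array of i.i.d. loss vectors; dpsco(samples) -> expert
    # index, any eps-DP SCO routine (assumed, not specified here).
    T, d = losses.shape
    x, total = x0, 0.0
    for t in range(1, T + 1):
        if t >= 2 and (t & (t - 1)) == 0:          # t = 2^l: refresh model
            x = dpsco(losses[t // 2 - 1 : t - 1])  # train on ell_{t/2..t-1}
        total += float(losses[t - 1, x])           # pay cost ell_t(x_t)
    return total
\end{verbatim}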
We have the following regret for~\cref{alg:stoch-adv}. We defer the proof to~\cref{sec:thm-ub-stoch-OCO}.
\begin{theorem}
\label{thm:ub-stoch-OCO}
Let $\ell_1,\dots,\ell_T : \mc{X} \to \R$ be convex functions chosen by a stochastic adversary, $\ell_i \simiid P$.
Let $\A$ be a \ed-DP algorithm for DP-SCO. Then
\cref{alg:stoch-adv} is \ed-DP and has regret
\begin{equation*}
\E\left[ \sum_{t=1}^T \ell_t(x_t) - \min_{x \in \mc{X}} \sum_{t=1}^T \ell_t(x) \right]
\le \sum_{i=1}^{\log T} 2^i \Delta_{2^i}(\A).
\end{equation*}
\end{theorem}
\newcommand{\A_{\mathsf{\ell_2}}}{\A_{\mathsf{\ell_2}}}
\newcommand{\A_{\mathsf{\ell_1}}}{\A_{\mathsf{\ell_1}}}
Now we present the implications of this reduction for DP-OPE with stochastic adversaries. We use
the algorithm for DP-SCO in $\ell_1$-geometry from~\cite{AsiFeKoTa21}. This algorithm, denoted $\A_{\mathsf{\ell_1}}$, has excess population loss $\Delta_n(\A_{\mathsf{\ell_1}}) = O(\sqrt{\log(d)/n} + \log(d)/n\diffp)$. We have the following result which we prove in~\cref{sec:apdx-cor-DP-exp-stoch}.
\begin{corollary}
\label{cor:DP-exp-stoch}
Let $\ell_1,\dots,\ell_T : [d] \to [0,1]$ be chosen by a stochastic adversary, $\ell_i \simiid P$.
Then
\cref{alg:stoch-adv} using $\A_{\mathsf{\ell_1}}$ is $\diffp$-DP and has regret
\begin{equation*}
\E\left[ \sum_{t=1}^T \ell_t(x_t) - \min_{x \in [d]} \sum_{t=1}^T \ell_t(x) \right]
\le O \left(\sqrt{T\log d} + \frac{\log d \log T}{\diffp} \right).
\end{equation*}
\end{corollary}
\section{Introduction} For a semisimple Lie group $G$, an irreducible representation $(\pi, V)$ of $G$, and a closed reductive subgroup $H\subset G$, the problem of decomposing the restriction of $\pi$ to $H$ has received attention ever since number theory, physics, and other branches of mathematics required a solution. In this paper, we are concerned with the important particular case of branching representations of the Discrete Series, i.e. those $\pi$ arising as closed irreducible subspaces of the left regular representation in $L^2(G)$, and breaking the symmetry by a reductive subgroup $H$. Here much work has been done. Notable is the paper of Gross-Wallach, \cite{GW}, and the work of Toshiyuki Kobayashi and his school. For further references on the subject, we refer to the overview work of Toshiyuki Kobayashi and references therein. To compute the decomposition of the restriction of $\pi$ to a symmetric subgroup (see~\ref{sec:gwmultip}), a duality Theorem for Discrete Series representations is shown in \cite{GW}. Their duality is based on the dual subgroup $G^d$ (this is
the dual subgroup which enters the duality introduced by Flensted-Jensen in his
study of discrete series for affine symmetric spaces \cite{FG}) and, roughly speaking, their formula looks like \begin{equation*} \dim Hom_H(\sigma, \pi_{\vert_H})=\dim Hom_{\widetilde K}(F_\sigma, \tilde\pi). \end{equation*} Here, $\pi$ is an irreducible square integrable representation of $G$, $\sigma$ is an irreducible representation of $H$, $F_\sigma$ is an irreducible representation of a maximal compact subgroup $\widetilde{K}$ of $G^d$, and $\tilde\pi$ is a finite sum of fundamental representations of $G^d$ attached to $\pi$. In \cite{os}, B. Speh and the first author noticed a different duality Theorem for restriction to a symmetric subgroup: let $H_0$ be the subgroup associated to $H$ and $L:=H_0\cap H$ a maximal compact subgroup of $H$. Then, \begin{equation*} \dim Hom_H(\sigma, \pi_{\vert_H})=\dim Hom_{L}(\sigma_0, \widetilde\Pi)\tag{$\ddag$}. \end{equation*} Here, $\pi$ is a certain irreducible representation of $G$, $\sigma$ is an irreducible representation of $H$, $\sigma_0$ is the lowest $L$-type of $\sigma$, and $\widetilde\Pi$ is a finite sum of irreducible representations of $H_0$ attached to $\pi$. The purpose of this paper is, for an $H$-admissible Discrete Series $\pi$ of $G$, to show such a formula and to provide an explicit isomorphism between the two vector spaces involved in the equality. This is embodied in Theorem~\ref{prop:Rwelldefgc}.
Theorem~\ref{prop:Rwelldefgc} reduces the branching law to two steps: (1) for the maximal compact subgroup $K$ of $G$ and the lowest $K$-type of $\pi$, branch the latter under $L$ (maximal compact in $H$ and also in $H_0$); (2) branch a Discrete Series of $H_0$ with respect to $L$, i.e. find its $L$-types with multiplicity. Both steps can be implemented in algorithms, such as those available in the computer program Atlas, http://atlas.math.umd.edu.
We would like to point out that T. Kobayashi, T. Kobayashi-Pevzner and Nakahama have shown a duality formula like $(\ddag)$ for holomorphic Discrete Series representations $\pi$. In order to achieve their result, they have shown an explicit isomorphism between the two vector spaces in the formula. Further, with respect to analyzing $res_H(\pi)$, Kobayashi-Oshima have shown a way to compute the irreducible components of $res_H(\pi)$ in the language of Zuckerman modules $A_\mathfrak q (\lambda)$ \cite{KO}\cite{KO2}.
As a consequence of the involved material, we obtain a necessary and sufficient condition for a symmetry breaking operator to be represented via normal derivatives. This is presented in Proposition~\ref{prop:symmenormal}.
Another consequence is Proposition~\ref{prop:gentau}. That is, for the closure of the linear span of the totality of $H_0$-translates (resp. $H$-translates) of the isotypic component associated to the lowest $K$-type of $\pi$, we exhibit its explicit decomposition as a finite sum of Discrete Series representations of $H_0$ (resp. $H$).
Our proof relies heavily on the fact that Discrete Series representations are realized in reproducing kernel Hilbert spaces. As a consequence, in Lemma~\ref{lem:injecrh}, we obtain a general result on the structure of the kernel of a certain restriction map. The proof also relies on the work of Hecht-Schmid \cite{HS}, and a result of Schmid in \cite{Sc}.
It follows from the work of Kobayashi-Oshima, or else from Tables 1, 2, 3, that whenever a Discrete Series for $G$ has admissible restriction to a symmetric subgroup, the infinitesimal character of the representation is dominant with respect to either a Borel-de Siebenthal system of positive roots or a system of positive roots having two noncompact simple roots, each of which has multiplicity one in the highest root. Under the $H$-admissibility hypothesis, the infinitesimal character of each irreducible component of $\widetilde{\Pi}$ in formula $(\ddag)$ has the same property as the infinitesimal character of $\pi$. Thus, for most $H$-admissible Discrete Series, to compute the right hand side of $(\ddag)$ we may appeal to the work of the first author and Wolf \cite{ow}. Their results let us compute the highest weight of each irreducible factor in the restriction of $\pi$ to $K_1(\Psi)$; next, we apply \cite[Theorem 5]{DV} for the general case.
We may speculate that a formula like $(\ddag)$ might be true for $\pi$ whose underlying Harish-Chandra module is equivalent to a unitarizable Zuckerman module. In this case, the definition of $\sigma_0$ would be the subspace spanned by the lowest $L$-type of $\sigma$ and $\widetilde \Pi$ would be a Zuckerman module attached to the lowest $K$-type of $\pi$.
The paper is organized as follows. In Section 2, we introduce notation and facts about Discrete Series representations. In Section 3,
we state the main Theorem and begin its proof; as a tool, we obtain information on the kernel of the restriction map.
In Section 4, we complete the proof of the main Theorem; as a by-product, we obtain information on the kernel of the restriction map under the admissibility hypothesis. We present examples and applications of the main Theorem in Section 5. This includes lists of multiplicity free restrictions of representations; many of the multiplicity free representations are non-holomorphic Discrete Series representations. We also deal with quaternionic and generalized quaternionic representations.
In Section 6, we analyze when symmetry breaking operators are represented by means of normal derivatives. Section 7 presents the list of $H$-admissible Discrete Series and related information.
\medskip
{\it Acknowledgements:}
The authors would like to thank T. Kobayashi for much insight and inspiration on the problems considered here. We also thank Michel Duflo, Birgit Speh, Yosihiki Oshima and Jan Frahm for conversations on the subject.
Part of the research in this paper was carried out within the online research community on Representation Theory and Noncommutative Geometry sponsored by the American Institute of Mathematics. Also, some of the results in this note were the subject of a talk in the ``Conference in honour of Prof. Toshiyuki Kobayashi'' celebrating his sixtieth birthday; the authors deeply thank the organizers for the opportunity to present and to participate in such a wonderful meeting via Zoom.
\smallskip
\section{Preliminaries and some notation}\label{sec:prelim}
Let $G$ be an arbitrary connected semisimple matrix Lie group. Henceforth, we fix a maximal compact subgroup $K$ of $G$ and a maximal torus $T$ of $K.$ Harish-Chandra showed that $G$ admits square integrable irreducible representations if and only if $T$ is a Cartan subgroup of $G.$ For this paper, we always assume $T$ is a Cartan subgroup of $G.$ Under these hypotheses, Harish-Chandra showed that the set of equivalence classes of irreducible square integrable representations is parameterized by a lattice in $i\mathfrak t^\star.$ In order to state our results we need to make this parametrization explicit and set up some notation. As usual, the Lie algebra of a Lie group is denoted by the corresponding lower case German letter. To lighten notation, the complexification of the Lie algebra of a Lie group is also denoted by the corresponding German letter without any subscript. $V^\star $ denotes the dual space to a vector space $V.$ Let $\theta$ be the Cartan involution which corresponds to the subgroup $K$; the associated Cartan decomposition is denoted by $\mathfrak g=\mathfrak k +\mathfrak p.$ Let $\Phi(\mathfrak g,\mathfrak t) $ denote the root system attached to the Cartan subalgebra $\mathfrak t.$ Hence, $\Phi(\mathfrak g,\mathfrak t)=\Phi_c \cup \Phi_n =\Phi_c(\mathfrak g, \mathfrak t) \cup \Phi_n (\mathfrak g, \mathfrak t)$ splits up as the union of the set of compact roots and the set of noncompact roots. From now on, we fix a system of positive roots $\Delta $ for $\Phi_c.$ For this paper, either the highest weight or the infinitesimal character of an irreducible representation of $K$ is dominant with respect to $\Delta.$ The Killing form gives rise to an inner product $(...,...)$ in $i\mathfrak t^\star.$ As usual, let $\rho=\rho_G $ denote half of the sum of the roots for some system of positive roots for $\Phi(\mathfrak g, \mathfrak t).$ \textit{A Harish-Chandra parameter} for $G$ is $\lambda \in i\mathfrak t^\star$ such that $(\lambda, \alpha)\not= 0 $ for every $\alpha \in \Phi(\mathfrak g,\mathfrak t),$ and such that $\lambda + \rho$ lifts to a character of $T.$ To each Harish-Chandra parameter $\lambda$, Harish-Chandra associates a unique irreducible square integrable representation $(\pi_\lambda^G , V_\lambda^G)$ of $G$ of infinitesimal character $\lambda.$ Moreover, he showed the map $\lambda \rightarrow (\pi_\lambda^G, V_\lambda^G)$ is a bijection from the set of Harish-Chandra parameters dominant with respect to $\Delta$ onto the set of equivalence classes of irreducible square integrable representations for $G$ (see \cite[Chap 6]{Wa1}). For short, we will refer to an irreducible square integrable representation as a Discrete Series representation.
Each Harish-Chandra parameter $\lambda$ gives rise to a system of positive roots \\
\phantom{xxxxxxxxxxxxxxx}$\Psi_\lambda =\Psi_{G,\lambda} =\{ \alpha \in \Phi(\mathfrak g, \mathfrak t) : (\lambda, \alpha) >0 \}.$ \\ From now on, we assume that Harish-Chandra parameters for $G$ are dominant with respect to $\Delta.$ Whence, $\Delta \subset \Psi_\lambda.$ We write $\rho_n^\lambda =\rho_n =\frac 12 \sum_{ \beta \in \Psi_\lambda \cap \Phi_n} \beta $, $(\Psi_\lambda)_n:=\Psi_\lambda \cap \Phi_n$.
We denote by $ (\tau ,W ):= (\pi_{\lambda +\rho_n}^K , V_{\lambda + \rho_n}^K) $ the lowest $K-$type of $\pi_\lambda :=\pi_\lambda^G.$ The highest weight of $(\pi_{\lambda +\rho_n}^K , V_{\lambda + \rho_n}^K)$ is $\lambda +\rho_n -\rho_c.$ We recall a Theorem of Vogan's thesis \cite{VoT}\cite{EW} which states that $(\tau,W)$ determines $(\pi_\lambda, V_\lambda^G)$ up to unitary equivalence. We recall that the set of square integrable sections of the vector bundle determined by the principal bundle $K\rightarrow G \rightarrow G/K$ and the representation $(\tau, W)$ of $K$ is isomorphic to the space \begin{eqnarray*}\lefteqn{L^2(G\times_\tau W) }\hspace{1.0cm} \\ & & := \{ f \in L^2(G) \otimes W : f(gk)=\tau(k)^{-1} f(g), g \in G, k \in K \}.
\end{eqnarray*}
Here, the action of $G$ is by left translation $L_x, x \in G.$ The inner product on $L^2(G)\otimes W$ is given by \begin{equation*}(f,g)_{V_\lambda} =\int_G (f(x),g(x))_W dx, \end{equation*} where $(...,...)_W$ is a $K-$invariant inner product on $W.$
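Before continuing, we illustrate the parametrization in the simplest case (a standard example, recorded only for orientation). Take $G=SL(2,\mathbb R)$, $K=T=SO(2)$. Here $\Phi(\mathfrak g, \mathfrak t)=\{\pm \alpha\}=\Phi_n$, $\Phi_c$ is empty, $\rho=\frac12 \alpha$, and the weight lattice of $T$ is $\frac12 \mathbb Z \alpha$ (the character $r_\theta \mapsto e^{ik\theta}$ of $SO(2)$ corresponds to $\frac k2 \alpha$). Hence, the Harish-Chandra parameters are exactly the elements $\lambda =\frac k2 \alpha$ with $0\not= k \in \mathbb Z$. For $k\geq 1$ we have $\Psi_\lambda=\{\alpha\}$, $\rho_n=\frac12 \alpha$, $\rho_c=0$, and the lowest $K-$type $(\tau,W)=(\pi_{\lambda +\rho_n}^K, V_{\lambda+\rho_n}^K)$ is the character $r_\theta \mapsto e^{i(k+1)\theta}$; the representation $\pi_\lambda^G$ is the holomorphic Discrete Series usually denoted $D_{k+1}^+$.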
Subsequently, $L_D $ (resp. $R_D$) denotes the left (resp. right) infinitesimal action on functions on $G$ of an element $D$ of the universal enveloping algebra $U(\mathfrak g)$ of the Lie algebra $\mathfrak g$. As usual, $\Omega_G$ denotes the Casimir operator for $\mathfrak g.$ Following Hotta-Parthasarathy \cite{ho}, Enright-Wallach \cite{EW}, Atiyah-Schmid \cite{AS}, we realize $V_\lambda :=V_\lambda^G $ as the space
\begin{eqnarray*}
\lefteqn{ H^2(G, \tau) =\{ f \in L^2(G) \otimes W : f(gk)=\tau(k)^{-1} f(g)} \hspace{3.0cm} \\ & & g\in G, k \in K, R_{\Omega_G} f= [(\lambda, \lambda) -(\rho, \rho)] f \}.
\end{eqnarray*}
We also recall that $R_{\Omega_G}=L_{\Omega_G} $ is an elliptic $G-$invariant operator on the vector bundle $W \rightarrow G \times_\tau W \rightarrow G/K$; hence $\,H^2(G,\tau)$ consists of smooth sections, and moreover the point evaluation $e_x$, defined by $ \,H^2(G,\tau) \ni f \mapsto f(x) \in W $, is continuous for each $x \in G$ (cf. \cite[Appendix A4]{OV2}). Therefore, the orthogonal projector $P_\lambda$ onto $\,H^2(G,\tau)$ is an integral map (integral operator) represented by a smooth {\it matrix kernel} or {\it reproducing kernel} \cite[Appendix A1, Appendix A4, Appendix A6]{OV2} \begin{equation} \label{eq:Klambda}K_\lambda : G\times G \rightarrow End_\mathbb C (W) \end{equation} which satisfies: $ K_\lambda (\cdot ,x)^\star w$ belongs to $\,H^2(G,\tau)$ for each $x \in G, w \in W$, and $$ (P_\lambda (f)(x), w)_W=\int_G (f(y), K_\lambda (y,x)^\star w)_W dy, \, f\in L^2(G\times_\tau W).$$
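We note, in passing, a direct consequence of the reproducing property (a one-line estimate, recorded here only for orientation): for $f \in H^2(G,\tau)$, $x\in G$ and $w \in W$, the Cauchy-Schwarz inequality gives $$ |(f(x),w)_W| =\Big| \int_G (f(y), K_\lambda (y,x)^\star w)_W dy \Big| \leq \| f\|_{L^2}\, \| K_\lambda (\cdot,x)^\star w\|_{L^2}, $$ so $L^2$-convergence in $H^2(G,\tau)$ forces pointwise convergence; this is the mechanism behind the continuity of the evaluation maps $e_x$ mentioned above.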
For a closed reductive subgroup $H$, after conjugation by an inner automorphism of $G$, we may and will assume $ L:=K\cap H $ is a maximal compact subgroup of $H$; that is, $H$ is $\theta-$stable. In this paper, for an irreducible square integrable representation $(\pi_\lambda, V_\lambda)$ of $G$, we would like to analyze its restriction to $H.$ In particular, we study the irreducible $H-$subrepresentations of $\pi_\lambda$. A known fact is that any irreducible $H-$subrepresentation of $V_\lambda$ is a square integrable representation for $H$; for a proof see \cite{GW}. Thus, owing to the result of Harish-Chandra on the existence of square integrable representations, from now on, we may and will assume {\it $H$ admits a compact Cartan subgroup}. After conjugation, we may assume $U :=H\cap T$ is a maximal torus in $L=H\cap K.$ From now on, we fix a square integrable representation $V_\mu^H\equiv H^2(H, \sigma) \subset L^2 (H \times_\sigma Z)$ of lowest $L-$type $(\pi_{\mu +\rho_n^\mu}^L, V_{\mu+\rho_n^\mu}^L)=:(\sigma, Z)$.
For a representation $M$ and an irreducible representation $N$, $M[N]$ denotes the isotypic
component of $N$; that is, $M[N]$ is the linear span of the irreducible subrepresentations of $M$ equivalent to $N$. If topology is involved, $M[N]$ is the closure of this linear span.
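For instance (a trivial example, only to fix the notation), if $M=N\oplus N \oplus N'$ with $N'$ irreducible and not equivalent to $N$, then $M[N]=N\oplus N$ and $M[N']=N'$.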
For an $H$-admissible representation $\pi$, $Spec_H(\pi)$ denotes the set of Harish-Chandra parameters of the irreducible $H$-subrepresentations of $\pi$.
\section{Duality Theorem, explicit isomorphism}
\subsection{Statement and proof of the duality result }
The unexplained notation is as in Section~\ref{sec:prelim}. Our hypotheses are that $(G,H=(G^\sigma)_0)$ is a symmetric pair and that $(\pi_\lambda, V_\lambda^G)$ is an $H$-admissible, square integrable irreducible representation of $G$. $K=G^\theta$ is a maximal compact subgroup of $G$, $H_0 :=(G^{\sigma \theta})_0$, and $K$ is chosen so that $L=H\cap K=H_0 \cap K$ is a maximal compact subgroup of both $H$ and $H_0$. By definition, $H_0$ is the {\it associated} subgroup to $H$.
In this section, under our hypothesis, for $V_\mu^H$ an irreducible factor of $res_H(\pi_\lambda)$,
we show an explicit isomorphism from the space \\ \phantom{xxxx} $Hom_H(V_\mu^H, V_\lambda^G)$ onto $Hom_L(V_{\mu +\rho_n^\mu}^L, \, \pi_\lambda(\mathcal U(\mathfrak h_0))V_\lambda^G[V_{\lambda+\rho_n}^K])$.
We also analyze the restriction map $r_0 :H^2(G,\tau)\rightarrow L^2(H_0 \times_\tau W)$.
In what follows, we present the necessary definitions and facts involved in the main statement.
\subsubsection{ } We consider
the linear subspace $\mathcal L_\lambda$ spanned by the lowest $L$-type subspace of each irreducible $H$-factor of $res_H((L,H^2(G,\tau)))$. That is, \begin{center} $ \mathcal L_\lambda$ is the linear span of $\cup_{\mu \in Spec_H(\pi_\lambda)} H^2(G,\tau)[V_\mu^H][V_{\mu +\rho_n^\mu}^L]$. \end{center} We recall that our hypothesis yields that the subspace of $L$-finite vectors in $V_\lambda^G$ is equal to the subspace of $K$-finite vectors \cite[Prop. 1.6 ]{Kob}. Whence, $\mathcal L_\lambda$ is a subspace of the space of $K$-finite vectors in $H^2(G,\tau)$.
\subsubsection{} We also need the subspace
\begin{equation*} \mathcal U(\mathfrak h_0)W:=L_{\mathcal U(\mathfrak h_0)}H^2(G,\tau)[V_{\lambda+\rho_n^\lambda}^K]\equiv\pi_\lambda(\mathcal U(\mathfrak h_0))(V_\lambda^G[V_{\lambda+\rho_n^\lambda}^K]). \end{equation*} We write $\mathrm{Cl}(\mathcal U(\mathfrak h_0)W)$ for the closure of $\mathcal U(\mathfrak h_0)W$. Hence, $\mathrm{Cl}(\mathcal U(\mathfrak h_0)W)$ is the closure of the left translates by the algebra $\mathcal U(\mathfrak h_0) $ of the subspace of $K$-finite vectors \begin{equation*} H^2(G,\tau)[V_{\lambda +\rho_n^\lambda}^K]= \{ K_\lambda (\cdot, e)^\star w: w \in W \}\equiv W. \end{equation*} Thus, $\mathcal U(\mathfrak h_0)W$ consists of analytic vectors for $\pi_\lambda$. Therefore, $\mathrm{Cl}(\mathcal U(\mathfrak h_0)W)$ is invariant under left translations by $H_0$. In Proposition~\ref{prop:gentau} we present the decomposition of $\mathcal U(\mathfrak h_0)W$ as a sum of irreducible representations for $H_0$.
We point out that
\begin{center} the $L$-module $\mathcal L_\lambda$ is equivalent to the underlying $L$-module in
$\mathcal U(\mathfrak h_0)W$. \end{center}
This has been proven in \cite[(4.5)]{Vaint}. For completeness we present a proof in Proposition~\ref{prop:gentau}.
\smallskip
Under the extra assumption that $res_{L}(\tau)$ is irreducible, $ \mathcal U(\mathfrak h_0)W $ is an irreducible $(\mathfrak h_0,L)$-module, and, in this case, the lowest $L$-type of $ \mathcal U(\mathfrak h_0)W $ is $(res_L(\tau),W)$. That is, $\mathcal U(\mathfrak h_0)W $ is equivalent to the underlying Harish-Chandra module of $ H^2(H_0,res_L(\tau))$. The Harish-Chandra parameter $\eta_0 \in i\mathfrak u^\star $ of $\mathrm{Cl}(\mathcal U(\mathfrak h_0)W)$ is computed in \ref{sub:paramhc}.
For scalar holomorphic Discrete Series, the classification of the symmetric pairs $(G,H)$ such that the equality $\mathcal U(\mathfrak h_0)W =\mathcal L_\lambda$ holds is:
\smallskip
$ (\mathfrak{su}(m,n), \mathfrak{su} (m,l) +\mathfrak{su} ( n-l)+\mathfrak u(1)),$ $(\mathfrak{so}(2m,2), \mathfrak u(m,1)),$ \\ $(\mathfrak{so}^\star (2n), \mathfrak u(1,n-1)),$ $(\mathfrak{so}^\star(2n), \mathfrak{so}(2) +\mathfrak{so}^\star(2n-2)),$ $(\mathfrak e_{6(-14)}, \mathfrak{so}(2,8)+\mathfrak{so}(2)).$ \cite[(4.6)]{Vaint}.
Thus, there exist scalar holomorphic Discrete Series with $\mathcal U(\mathfrak h_0)W \not=\mathcal L_\lambda$.
\subsubsection{} In what follows, we set some more notation. We fix a representative for $(\tau,W)$. We write
\smallskip
$(res_L(\tau), W)=\sum_{1\leq j \leq r} q_j (\sigma_j, Z_j)$, $q_j=\dim Hom_L(Z_j, res_L(W))$
and the decomposition into isotypic components
$$W=\oplus_{1\leq j \leq r} W[(\sigma_j, Z_j)]=\oplus_{1\leq j \leq r} W[\sigma_j].$$
From now on, we fix respective representatives for $(\sigma_j, Z_j)$ with $Z_j \subset W[(\sigma_j, Z_j)]$.
Henceforth, we denote by $$ \mathbf H^2(H_0, \tau):= \sum_{j } \,dim Hom_L(\tau, \sigma_j) \, H^2(H_0 , \sigma_j).$$ We think of the latter module as a linear subspace of $$ \sum_{j} L^2(H_0 \times_{\sigma_j} W[\sigma_j])_{H_0-disc} \equiv L^2(H_0 \times_\tau W)_{H_0-disc}.$$ Hence, $\mathbf H^2(H_0, \tau) \subset L^2(H_0 \times_\tau W)_{H_0-disc}$. We note that when $res_{L}(\tau)$ is irreducible, $\mathbf H^2(H_0, \tau)=H^2(H_0,res_L(\tau))$.
\subsubsection{} \label{sec:kernelprop} Owing to the fact that both spaces $H^2(H,\sigma)$ and $H^2(G, \tau)$ are reproducing kernel spaces, we represent each $T \in Hom_H(H^2(H,\sigma),H^2(G, \tau))$ by a kernel $K_T :H\times G \rightarrow Hom_\mathbb C(Z,W)$ such that $K_T(\cdot,x)^\star w \in H^2(H,\sigma)$ and $(T(g)(x),w)_W =\int_H (g(h),K_T(h,x)^\star w)_Z dh$. Here, $x \in G, w \in W, g\in H^2(H,\sigma)$. In \cite{OV2}, it is shown that $K_T$ is a smooth function, that $ K_T(h,\cdot)z=K_{T^\star}(\cdot,h)^\star z \in H^2(G,\tau)$, and that \begin{equation}\label{eq:kten} K_T(e,\cdot)z \in H^2(G,\tau)[V_\mu^H][V_{\mu+\rho_n^H}^L] \end{equation} is an $L$-finite vector in $ H^2(G,\tau)$.
\subsubsection{} Finally, we recall the restriction map \begin{equation*}\label{eq:r0} r_0 : H^2(G,\tau) \rightarrow L^2(H_0 \times_\tau W),\,\, r_0(f)(h_0)=f(h_0), h_0 \in H_0, \end{equation*} is $(L^2,L^2)$-continuous \cite{OV1}.
The main result of this section is the following.
\begin{thm}\label{prop:Rwelldefgc} We assume $(G,H)$ is a symmetric pair and $res_H(\pi_\lambda)$ is admissible. We fix an irreducible factor $V_\mu^H$ of $res_H(\pi_\lambda)$. Then, the following statements hold.\\
i) The map $r_0 : H^2(G,\tau) \rightarrow L^2(H_0 \times_\tau W)$ restricted to $\mathrm{Cl}(\mathcal U(\mathfrak h_0)W )$ yields an isomorphism from $\mathrm{Cl}(\mathcal U(\mathfrak h_0)W )$ onto $\mathbf H^2(H_0, \tau)$. \\ ii) For each fixed intertwining $L$-equivalence
\smallskip
\phantom{xxxxxx} $D: \mathcal L_\lambda[V_{\mu +\rho_n^\mu}^L]=H^2(G,\tau)[V_\mu^H][V_{\mu +\rho_n^\mu}^L] \rightarrow (\mathcal U(\mathfrak h_0)W)[V_{\mu +\rho_n^\mu}^L]$, \\ the map
\smallskip
\phantom{xxx} $r_0^D : Hom_H(H^2(H, \sigma),H^2(G, \tau))\rightarrow Hom_L(V_{\mu +\rho_n^\mu}^L, \mathbf H^2(H_0,\tau))$\\ defined by $$ T \stackrel{r_0^D}\longmapsto(V_{\mu +\rho_n^\mu}^L\ni z \mapsto r_0( D(K_T(e,\cdot)z)) \in \mathbf H^2(H_0,\tau))$$ is a linear isomorphism.
\end{thm}
\begin{rmk} When the natural inclusion $H/L \rightarrow G/K$ is a holomorphic map, T. Kobayashi, M. Pevzner and Y. Oshima in \cite{Kob3},\cite{KP2} have shown a similar dual multiplicity result, after replacing the underlying Harish-Chandra module in $\mathbf H^2(H_0,\tau)$ by its realization as a Verma module. Also, in the holomorphic setting, Jakobsen-Vergne in \cite{JV} have shown the isomorphism $H^2(G,\tau)\equiv \sum_{r\geq 0} H^2(H, \tau_{\vert_L} \otimes S^{(r)}((\mathfrak h_0 \cap \mathfrak{ p}^+))^\star)$. In the papers \cite{Na}, \cite{La}, we find applications of the result of Kobayashi to the problem of decomposing holomorphic Discrete Series.
H. Sekiguchi \cite{Se} has obtained similar results on branching laws for singular holomorphic representations.
\end{rmk}
\begin{rmk} The proof of Theorem~\ref{prop:Rwelldefgc} requires showing that the map $r_0^D$ is well defined, as well as several structural Lemmas. Once we verify that the map is well defined, we show injectivity in Corollary~\ref{prop:r0dinj}; Proposition~\ref{prop:equaldim} and linear algebra then give the surjectivity. In Proposition~\ref{prop:gentau} we show $i)$; in the same Proposition we give a proof of the existence of the map $D$ as well as of its bijectivity. Actually, this result has been shown in \cite{Vaint}; however, we sketch a proof in this note. The surjectivity also depends heavily on a result in \cite{Vaint}; for completeness we give a proof. We may say that our proof of Theorem 1 is rather long and intricate, involving both linear algebra for finding the multiplicities and an analysis of the kernels of the intertwining operators in question to set up the equivalence of the $H$-morphisms and the $L$-morphisms. The structure of the branching and the corresponding symmetry breaking is, however, very convenient to apply in concrete situations, and we give several illustrations.
We make explicit the inverse map to the bijection $r_0^D$ in subsection~\ref{sub:inverserd}.
\end{rmk}
\begin{rmk} \label{rmk:estruD} When $\mathcal L_\lambda =\mathcal U(\mathfrak h_0)W$ we may take $D$ equal to the identity map.
We believe that one choice of $D$ is the orthogonal projector onto $\mathrm{Cl}(\mathcal U(\mathfrak h_0)W)$ restricted to $\mathcal L_\lambda[V_{\mu +\rho_n^\mu}^L]$.
\end{rmk}
\begin{rmk} A mirror statement to Theorem~\ref{prop:Rwelldefgc} for symmetry breaking operators is as follows: $Hom_H(H^2(G,\tau),H^2(H,\sigma))$ is isomorphic to $Hom_L(Z,\mathbf H^2(H_0,\tau))$ via the map $S\mapsto (z\mapsto (H_0 \ni x \mapsto r_0^D(S^\star)(z)(x)=r_0(D(K_S(\cdot,x)^\star)(z))\in W))$.
\end{rmk}
\medskip
\subsubsection{} We verify that $r_0(D(K_T(e,\cdot)z))(\cdot)$ belongs to $L^2(H_0 \times_\tau W)_{H_0-disc}$.
\smallskip
Indeed, owing to our hypothesis, a result of \cite{DV} (see \cite[Proposition 2.4]{DGV}) implies $\pi_\lambda$ is $L$-admissible. Hence, \cite[Theorem 1.2]{Kob1} implies $\pi_\lambda$ is $H_0$-admissible. Also, \cite[Proposition 1.6]{Kob} shows that the subspace of $L$-finite vectors in $H^2(G,\tau)$ is equal to the subspace of $K$-finite vectors and that $res_{\mathcal U(\mathfrak h_0)}(H^2(G,\tau)_{K-fin})$ is an admissible, completely algebraically decomposable representation. Thus, the subspace $H^2(G,\tau)[W]\equiv W$ is contained in a finite sum of irreducible $\mathcal U(\mathfrak h_0)$-factors. Hence, $\mathcal U(\mathfrak h_0)W$ is a finite sum of irreducible $\mathcal U(\mathfrak h_0)$-factors. In \cite{GW}, we find a proof that the irreducible factors of $res_{H_0}(\pi_\lambda)$ are square integrable representations for $H_0$; whence, the equivariance and continuity of $r_0$ yield that $r_0(\mathrm{Cl}(\mathcal U(\mathfrak h_0)W))$ is contained in $L^2(H_0\times_{\tau} W)_{H_0-disc}$. Paragraph \ref{sec:kernelprop} shows $K_T(e, \cdot)z\in V_\lambda[V_\mu^H][V_{\mu+\rho_n^H}^L]$; hence, $D(K_T(e,\cdot)z)\in \mathcal U(\mathfrak h_0)W$, and the claim follows.
\subsubsection{} The map $Z \ni z \mapsto r_0(D(K_T(e,\cdot)z))(\cdot)\in L^2(H_0 \times_\tau W)$ is an $L$-map.
For this,
we recall the equalities $$K_T(hl,gk)=\tau(k^{-1})K_T(h,g)\sigma (l), k \in K, l\in L, g\in G, h\in H. $$ $$K_T(hh_1,hx)=K_T(h_1,x), h, h_1 \in H, x\in G . $$ Therefore, $K_T(e, hl_1)\sigma(l_2)z=\tau(l_1^{-1})K_T(l_2,h)z=\tau(l_1^{-1})K_T(e, l_2^{-1}h)z$ for $l_1, l_2 \in L, h\in H_0$ and we have shown the claim.
\medskip
We now have enough information to verify the injectivity claimed in $i)$, as well as the injectivity of the map $r_0^D$. For these, we show a fact valid for an arbitrary reductive pair $(G,H)$ and an arbitrary Discrete Series representation.
\subsection{ Kernel of the restriction map } In this paragraph we show a fact valid for any reductive pair $(G,H)$ and an arbitrary Discrete Series representation $\pi_\lambda$. The objects involved in this fact are the restriction map $r$ from $H^2(G,\tau)$ into $L^2(H\times_\tau W)$ and the subspace
\begin{equation}\label{eq:uhzerow} \mathcal U(\mathfrak h)W:=L_{\mathcal U(\mathfrak h)}H^2(G,\tau)[V_{\lambda+\rho_n^\lambda}^K]\equiv\pi_\lambda(\mathcal U(\mathfrak h))(V_\lambda^G[V_{\lambda+\rho_n^\lambda}^K]). \end{equation} We write $\mathrm{Cl}(\mathcal U(\mathfrak h)W)$ for the closure of $\mathcal U(\mathfrak h)W$. The subspace $\mathrm{Cl}(\mathcal U(\mathfrak h)W)$ is the closure of the left translates by the algebra $\mathcal U(\mathfrak h) $ of the subspace of $K$-finite vectors
\smallskip
\phantom{xxxxxxxxxxxxx} $ \{ K_\lambda (\cdot, e)^\star w: w \in W \}=H^2(G,\tau)[W]$. \\ Thus, $\mathcal U(\mathfrak h)W$ consists of analytic vectors for $\pi_\lambda$. Hence, $\mathrm{Cl}(\mathcal U(\mathfrak h)W)$ is invariant under left translations by $H$.
Therefore, the subspace \\ \phantom{xx} $ L_H(H^2(G,\tau)[W])=\{ K_\lambda (\cdot, h)^\star w=L_h(K_\lambda (\cdot,e)^\star w) : w \in W, h\in H\}$ \\ is contained in $\mathrm{Cl}(\mathcal U(\mathfrak h)W)$. Actually,
\phantom{xxxxxxx} $\mathrm{Cl}(L_H(H^2(G,\tau)[W]))=\mathrm{Cl}(\mathcal U(\mathfrak h)W)$.
The other inclusion follows from the facts that $\mathrm{Cl}(L_H(H^2(G,\tau)[W]))$ is invariant under left translation by $H$ and that $ \{ K_\lambda (\cdot, e)^\star w: w \in W \}$ is contained in the subspace of smooth vectors in $\mathrm{Cl}(L_H(H^2(G,\tau)[W]))$.
\smallskip
The result pointed out in the title of this paragraph is: \begin{lem} \label{lem:injecrh}Let $(G,H)$ be an arbitrary reductive pair and let $(\pi_\lambda, H^2(G,\tau))$ be an arbitrary Discrete Series representation. Then, $Ker(r)$ is equal to the orthogonal complement of $\mathrm{Cl}(\mathcal U(\mathfrak h)W)$.
\end{lem}
\begin{proof} Since, by \cite{OV1}, $r: H^2(G,\tau)\rightarrow L^2(H\times_\tau W)$ is a continuous map, $Ker(r)$ is a closed subspace of $H^2(G,\tau)$. Next, for $f\in H^2(G,\tau)$, the identity \\ \phantom{xxx} $(f(x),w)_W=\int_G(f(y), K_\lambda (y,x)^\star w)_W dy, \forall x\in G, \forall w\in W$ \\ holds. Thus, $r(f)=0$ if and only if $f$ is orthogonal to the subspace spanned by $\{ K_\lambda (\cdot, h)^\star w: w \in W, h\in H\}$.
Hence, $Ker(r)=(\mathrm{Cl}(L_H(H^2(G,\tau)[W])))^\perp$. Applying the considerations after the definition of $\mathrm{Cl}(\mathcal U(\mathfrak h)W)$, we obtain $Ker(r)^\perp = \mathrm{Cl}(\mathcal U(\mathfrak h)W)$.
Thus, $Ker(r)=(Ker(r)^\perp)^\perp =(\mathrm{Cl}(\mathcal U(\mathfrak h)W))^\perp$.
\end{proof}
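As a consistency check for Lemma~\ref{lem:injecrh} (an immediate special case, not needed in the sequel), take $H=G$. Then $r$ is the identity map, so $Ker(r)=\{0\}$; on the other hand, $\mathcal U(\mathfrak g)W$ is a nonzero invariant subspace of the irreducible $(\mathfrak g,K)$-module of $K$-finite vectors, whence $\mathrm{Cl}(\mathcal U(\mathfrak g)W)=H^2(G,\tau)$ and its orthogonal complement is $\{0\}$, as the Lemma predicts.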
\begin{cor} Any irreducible $H$-discrete factor $M$ of $\mathrm{Cl}(\mathcal U(\mathfrak h)W)$ contains an $L$-type of $res_L(\tau)$. That is, $M[res_L(\tau)]\not=\{0\}.$
\end{cor} The corollary follows from the facts that $r$ restricted to $\mathrm{Cl}(\mathcal U(\mathfrak h)W)$ is injective and that Frobenius reciprocity holds for $L^2(H\times_\tau W)$.
\subsection{The map $r_0^D$ is injective} As a consequence of the general fact shown in the previous subsection, we obtain the injectivity claimed in $i)$ and the injectivity of the map $r_0^D$.
\begin{cor}Let $(G,H)$ be a symmetric pair and $H_0=(G^{\sigma \theta})_0$. Then, the restriction map $r_0 :H^2(G,\tau)\rightarrow L^2(H_0\times_\tau W) $ restricted to the subspace $\mathrm{Cl}(\mathcal U(\mathfrak h_0)W)$ is injective.
\end{cor}
\begin{cor}\label{prop:r0dinj} Let $(G,H)$ be a symmetric pair, $H_0=(G^{\sigma \theta})_0$, and assume $res_H(\pi_\lambda)$ is $H$-admissible. Then, the map $r_0^D$ is injective. \end{cor} In fact, for $T\in Hom_H(H^2(H,\sigma), H^2(G,\tau))$, if $ r_0(D(K_T(e,\cdot)z))=0 \, \forall z \in Z$, then, since $ D(K_T(e,\cdot)z) \in \mathcal U(\mathfrak h_0)W$, the previous corollary implies $ D(K_T(e,x)z)=0 \, \forall z \in Z, \forall x \in G$. Since $ K_T(e,\cdot)z \in V_\lambda^G[V_\mu^H][V_{\mu+\rho_n^H}^L]$ and $D$ is injective, we obtain $ K_T(e,x)z=0 \, \forall z, \forall x$. Lastly, we recall the equality $K_T(h,x)=K_T(e,h^{-1}x)$. Whence, we have verified the corollary.
Before we show the surjectivity of the map $r_0^D$, we would like to comment on other works on the topic of this note.
\subsection{Previous work on duality formula and Harish-Chandra parameters} The setting for this subsection is: $(G,H)$ is a symmetric pair and $(\pi_\lambda , V_\lambda^G)$ is an irreducible square integrable representation of $G$ which is $H$-admissible. As before, we fix $K, L=H\cap K, T, U=H\cap T$.
The following Theorem has been shown in \cite{GW}; a different proof is in \cite{KO}.
\begin{thm}[Gross-Wallach, T. Kobayashi-Y. Oshima]\label{prop:GWKO} We assume $(G,H)$ is a symmetric pair and $\pi_\lambda^G$ is $H$-admissible. Then
\smallskip
a) $res_H(\pi_\lambda^G)$ is the Hilbert sum of {\bf inequivalent} square integrable representations for $H$, $\pi_{\mu_j}^H, j=1, 2, \dots$, with respective finite multiplicities $0<m_j<\infty$.
\smallskip
b) The Harish-Chandra parameters of the totality of discrete factors of $res_H(\pi_\lambda^G)$ belong to a ``unique'' Weyl Chamber in $i\mathfrak u^\star $.
\smallskip
That is, \phantom{xx} $V_\lambda^G =\oplus_{1\leq j<\infty} V_\lambda^G[V_{\mu_j}^H]\equiv \oplus_{ j} \,\, Hom_H(V_{\mu_j}^H, V_\lambda^G)\otimes V_{\mu_j}^H$,
\medskip
\phantom{xxxx} $dim Hom_H(V_{\mu_j}^H, V_\lambda^G)=m_j$, \phantom{xx} $\pi_{\mu_j}^H \neq \pi_{\mu_i}^H $ iff $i\not= j$, \\ and there exists a system of positive roots $ \Psi_{H,\lambda} \subset \Phi(\mathfrak h,\mathfrak u),$ such that for all $j$, $(\alpha,\mu_j)>0 $\,\, for all $\alpha \in \Psi_{H,\lambda} $.
\end{thm}
In \cite{Vaint}\cite{KO} (see Tables 1,2,3) we find the list of pairs $(\mathfrak g,\mathfrak h)$, as well as systems of positive roots $\Psi_G \subset \Phi(\mathfrak g, \mathfrak t), \Psi_{H,\lambda}\subset \Phi(\mathfrak h, \mathfrak u)$ such that,
- $ \lambda$ dominant with respect to $ \Psi_G $ implies $res_H(\pi_\lambda^G)$ is admissible.
- For all $\mu_j$ in $a)$ we have $(\mu_j, \Psi_{H,\lambda})>0$.
\smallskip
- When $U=T$, we have $\Psi_{H,\lambda}=\Psi_\lambda \cap \Phi(\mathfrak h,\mathfrak t).$
\smallskip Since $(G,H_0)$ is a symmetric pair, Theorem~\ref{prop:GWKO}, as well as the comments above, applies to $(G,H_0)$ and $\pi_\lambda$. Here, when $U=T$, $\Psi_{H_0, \lambda}=\Psi_\lambda \cap \Phi(\mathfrak h_0,\mathfrak t)$.
\medskip
From the tables in \cite{Vaint} it follows that each of the systems $\Psi_\lambda$, $\Psi_{H,\lambda}, \Psi_{H_0, \lambda}$ has at most two noncompact simple roots, and that the sum of the respective multiplicities of the noncompact simple roots in the highest root is less than or equal to two.
\subsubsection{Computing Harish-Chandra parameters from Theorem~\ref{prop:Rwelldefgc}}\label{sub:paramhc}
As usual, $\rho_n=\frac12 \sum_{\beta \in \Psi_\lambda \cap \Phi_n} \beta$, $ \rho_n^H=\frac12 \sum_{\beta \in \Psi_{H,\lambda} \cap \Phi_n} \beta $, $\rho_K= \frac12 \sum_{\alpha \in \Psi_\lambda \cap \Phi_c} \alpha$, \\ $\rho_L= \frac12 \sum_{\alpha \in \Psi_{H,\lambda} \cap \Phi_c} \alpha$. We write $res_L(\tau)=res_L(V_{\lambda+\rho_n}^K )=\oplus_{1\leq j\leq r} \, q_j\, \pi_{\nu_j}^L=\sum_j q_j \sigma_j$, with $\nu_j$ dominant with respect to $\Psi_{H,\lambda} \cap \Phi_c$.
We recall that $\nu_j$ is the infinitesimal character (Harish-Chandra parameter) of $\pi_{\nu_j}^L$. Then, the Harish-Chandra parameter of $H^2(H_0, \pi_{\nu_j}^L)$ is $\eta_j=\nu_j -\rho_n^{H_0}$.
According to \cite[Lemma 2.22]{Sc} (see Remark~\ref{ktypessch}), the infinitesimal character of an $L$-type of $H^2(H_0, \pi_{\nu_j}^L)$ is equal to $\nu_j +B=\eta_j+\rho_n^{H_0}+B $, where $B$ is a sum of roots in $\Psi_{H_0,\lambda}\cap \Phi_n$.
The isomorphism $r_0^D$ in Theorem~\ref{prop:Rwelldefgc} lets us conclude:
For each subrepresentation $V_{\mu_s}^H $ of $res_H(\pi_\lambda)$, $\mu_s +\rho_n^H$ is the Harish-Chandra parameter of an $L$-type of \\ \phantom{xxxxxxx} $\mathbf H^2(H_0,\tau)\equiv \oplus_j\, q_j \, H^2(H_0,\pi_{\nu_j}^L)\equiv \oplus_j \, \underbrace { V_{\eta_j}^{H_0}\oplus \cdots \oplus V_{\eta_j}^{H_0}}_{q_j}$, \\ and the multiplicity of $V_{\mu_s}^H$ is equal to the multiplicity of $V_{\mu_s +\rho_n^H}^L$ in $\mathbf H^2(H_0,\tau)$.
\subsubsection{Gross-Wallach multiplicity formula}\label{sec:gwmultip}
In what follows we describe the duality Theorem due to \cite{GW}. $(G,H)$ is a symmetric pair. For this paragraph, in order to avoid subindexes, we write $\mathfrak g=Lie(G), \mathfrak h=Lie(H) $, etc. We recall $\mathfrak h_0=\mathfrak g^{\sigma \theta}$. We have the decompositions $\mathfrak g=\mathfrak k+\mathfrak p=\mathfrak h+\mathfrak q=\mathfrak h_0 +\mathfrak p \cap \mathfrak h + \mathfrak q \cap \mathfrak k$. The {\it dual} real Lie algebra to $\mathfrak g$ is $\mathfrak g^d=\mathfrak h_0 +i(\mathfrak p \cap \mathfrak h + \mathfrak q \cap \mathfrak k)$; the algebra $\mathfrak g^d$ is a real form of $\mathfrak g_\mathbb C$. A maximal compactly embedded subalgebra of $\mathfrak g^d$ is $\widetilde{\mathfrak k}=\mathfrak h \cap \mathfrak k +i(\mathfrak h \cap \mathfrak p)$. Let $\pi_\lambda$ be an $H$-admissible Discrete Series for $G$. One of the main results of \cite{GW} attaches to $\pi_\lambda$ a finite sum of underlying Harish-Chandra modules of fundamental representations for $G^d$, $(\Gamma_{H\cap L}^{\widetilde K})^{p_0+q_0}(N(\Lambda))$, so that for each subrepresentation $V_\mu^H$ of $V_\lambda$ the multiplicity $m^{G,H}(\lambda, \mu)$ of $V_\mu^H$ is computed by means of Blattner's formula \cite{HS} applied to $(\Gamma_{H\cap L_1}^{\widetilde K})^{p_0+q_0}(N(\Lambda))$. In more detail, since $Lie(H)_\mathbb C =Lie(\widetilde K)_\mathbb C$ and the center of $H$ is equal to the center of $\widetilde K$, to the infinitesimal character $\mu$ and the central character $\chi$ of $V_\mu^H$ we may associate a finite dimensional irreducible representation $F_{\mu,\chi}$ of $\widetilde K$. Then, they show \begin{equation*} dim Hom_{\mathfrak h,H\cap K}(V_\mu^H, V_\lambda^G)=dim Hom_{\widetilde K}(F_{\mu,\chi}, (\Gamma_{H\cap L_1}^{\widetilde K})^{p_0+q_0}(N(\Lambda))), \end{equation*} and
\begin{equation*} m^{G,H}(\lambda,\mu) =(-1)^{\frac12 \dim ( H/H\cap L_1)} \sum_{i=1}^d \sum_{s\in W_{\widetilde K}} \epsilon(s)p(\Lambda_i +\rho_{\widetilde K} +ss_{ H\cap K } \mu). \end{equation*}
where $\tau=F^\Lambda=\sum_i M^{\Lambda_i}$ is written as a sum of irreducible $H\cap L_1$-modules and $p$ is the partition function associated to $\Phi(\mathfrak u_1 /\mathfrak u_1 \cap\mathfrak h_\mathbb C, \mathfrak u)$; here, $\mathfrak u_1$ is the nilpotent radical of a certain parabolic subalgebra $\mathfrak q=\mathfrak l_1 +\mathfrak u_1$ used to define the $A_\mathfrak q(\lambda)$-presentation of $\pi_\lambda$. {\it Explicit example V} presents the result of \cite{GW} for the pair $(SO(2m,2n), SO(2m,2n-1))$.
\subsubsection{Duflo-Vargas multiplicity formula, \cite{DV}} We keep notation and hypotheses as in the previous paragraph. Then, \begin{equation*} m^{G,H}(\lambda,\mu)=\pm \sum_{w \in W_K} \epsilon (w)p_{S_w^H}(\mu -q_\mathfrak u(w\lambda)).\end{equation*}
Here, $q_\mathfrak u :\mathfrak t^\star \rightarrow \mathfrak u^\star $ is the restriction map. $p_{S_w^H}$ is the partition function associated to the multiset \begin{equation*} S_w^H :=S_w^L \backslash \Phi(\mathfrak h/\mathfrak l,\mathfrak u), \,\mathrm{where,} \,\, S_w^L:=q_\mathfrak u(w(\Psi_\lambda)_n)\cup \Delta(\mathfrak k/\mathfrak l, \mathfrak u). \end{equation*}
We recall that, for a strict multiset $S$ of elements in a vector space $V$, the partition function attached to $S$ is, roughly speaking, the function that counts the number of ways of expressing each vector as a nonnegative integral linear combination of elements of $S$. For a precise definition see \cite{DV} or the proof of Lemma~\ref{lem:ddzhfinite}.
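For instance (a toy computation in a one dimensional space, only to fix ideas, and ignoring the overall shift by $\frac12 \sum_{\gamma \in S}\gamma$ built into the precise definition): for the strict multiset $S=\{\gamma, 2\gamma\}$ with $0\not= \gamma$, the number of ways of writing $n\gamma$, $n \in \mathbb Z_{\geq 0}$, as a nonnegative integral combination $a\gamma +b(2\gamma)$ is $\#\{(a,b)\in \mathbb Z_{\geq 0}^2 : a+2b=n\}=\lfloor n/2 \rfloor +1$, while every vector outside $\mathbb Z_{\geq 0}\gamma$ admits no such expression.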
\subsubsection{Harris-He-Olafsson multiplicity formula, \cite{HHO}}
Notation and hypotheses are as in the previous paragraphs. Let \begin{equation*} r_m : H^2(G,\tau)\rightarrow L^2(H \times_{S^m(Ad)\boxtimes \tau} (S^m(\mathfrak p \cap \mathfrak q)^\star \otimes W)) \end{equation*} be the normal derivative map defined in \cite{OV1}. Let $\Theta_{\pi_\mu^H}$ denote the Harish-Chandra character of $\pi_\mu^H$. For $f$ a tempered function in $H^2(G,\tau)$, they define $\phi_{\pi_\lambda, \pi_\mu^H, m}(f)=\Theta_{\pi_\mu^H} \star r_m(f)$. They show: \begin{equation*} m^{G,H}(\lambda, \mu)= \lim_{m\rightarrow \infty} \dim \phi_{\pi_\lambda, \pi_\mu^H, m}((H^2(G,\tau)\cap \mathcal C(G,\tau)) [V_{\mu +\rho_n^H}]). \end{equation*}
\subsection{Completion of the Proof of Theorem~\ref{prop:Rwelldefgc}, the map $r_0^D$ is surjective }
Item $i)$ in Theorem~\ref{prop:Rwelldefgc} is shown in Proposition~\ref{prop:gentau} $c)$. The existence of the map $D$ is shown in Proposition~\ref{prop:gentau} $e)$.
To show the surjectivity of $r_0^D$ we appeal to Theorem~\ref{prop:equaldim}, \cite[Theorem 1]{Vaint}, where we show that the initial space and the target space are equidimensional; linear algebra then concludes the proof of Theorem~\ref{prop:Rwelldefgc}. Thus, we conclude the proof of Theorem~\ref{prop:Rwelldefgc} as soon as we complete the proofs of Theorem~\ref{prop:equaldim} and Proposition~\ref{prop:gentau}.
\section{Duality Theorem, proof of dimension equality}\label{sec:equaldim}
The purpose of this section is to sketch a proof of the equality of dimensions in the duality formula presented in Theorem~\ref{prop:Rwelldefgc}, as well as to derive some consequences. Part of the notation has already been introduced in the previous section. Sometimes notation will be explained after it has been used. Unexplained notation is as in \cite{DV}, \cite{OV2}, \cite{Vaint}.
The setting is as follows: $(G,H)$ is a symmetric pair and $(\pi_\lambda, V_\lambda^G)=(L,H^2(G,\tau))$ is an $H$-admissible irreducible square integrable representation. Then, the Harish-Chandra parameter $\lambda$ gives rise to the system of positive roots $\Psi_\lambda $ in $\Phi(\mathfrak g,\mathfrak t)$, and, by means of $\Psi_\lambda$, a nontrivial normal connected subgroup $K_1(\Psi_\lambda)=K_1$ of $K$ is defined in \cite{DV}; it is shown there that the $H$-admissibility yields $K_1 \subset H$\footnote{This also follows from the tables in \cite{KO}}. Thus, $\mathfrak k=\mathfrak k_1 \oplus \mathfrak k_2$, $\mathfrak l = \mathfrak k_1 \oplus \mathfrak l \cap \mathfrak k_2$ (as ideals), $\mathfrak t=\mathfrak t \cap \mathfrak k_1 + \mathfrak t \cap \mathfrak k_2$, and $\mathfrak u:=\mathfrak t\cap \mathfrak l =\mathfrak u \cap \mathfrak k_1 + \mathfrak u \cap \mathfrak k_2$ is a Cartan subalgebra of $\mathfrak l$. Let $q_\mathfrak u$ denote the restriction map from $\mathfrak t^\star$ onto $\mathfrak u^\star$. Let $K_2$ denote the analytic subgroup corresponding to $\mathfrak k_2$. We recall $H_0:=(G^{\sigma \theta})_0$, $L =K\cap H =K\cap H_0$. We have $K=K_1 K_2$, $L=K_1 (K_2\cap L)$. We set $\Delta:=\Psi_\lambda \cap \Phi(\mathfrak k,\mathfrak t)$. Applying Theorem~\ref{prop:GWKO} to both $H$ and $H_0$ we obtain respective systems of positive roots $\Psi_{H,\lambda}$ in $\Phi(\mathfrak h, \mathfrak u)$ and $\Psi_{H_0,\lambda}$ in $\Phi(\mathfrak h_0, \mathfrak u)$. For a list of six-tuples $(G,H, \Psi_\lambda,$ $ \Psi_{H,\lambda}, \Psi_{H_0,\lambda}, K_1)$ we refer to \cite[Table 1, Table 2, Table 3]{Vaint}. Always, $ \Psi_{H,\lambda}\cap \Phi_c(\mathfrak l, \mathfrak u)= \Psi_{H_0,\lambda}\cap \Phi_c(\mathfrak l, \mathfrak u)$. As usual, either $\Phi_n (\mathfrak g, \mathfrak t)$ or $\Phi_n$ denotes the subset of noncompact roots in $\Phi(\mathfrak g,\mathfrak t)$, and $\rho_n^\lambda$ (resp. $\rho_n^H, \rho_n^{H_0} $) denotes one half of the sum of the elements in $\Psi_\lambda \cap \Phi_n (\mathfrak g, \mathfrak t)$ (resp. $\Phi_n \cap \Psi_{H,\lambda}, \Phi_n \cap \Psi_{H_0,\lambda} $). When $\mathfrak u=\mathfrak t$, $\rho_n^\lambda= \rho_n^H + \rho_n^{H_0} $. From now on, the infinitesimal character of an irreducible representation of $K$ (resp. $L$) is dominant with respect to $\Delta$ (resp. $\Psi_{H,\lambda} \cap \Phi(\mathfrak l, \mathfrak u)$). The lowest $K$-type $(\tau,W)$ of $\pi_\lambda$ decomposes as $\pi_{\lambda +\rho_n^\lambda}^K =\pi_{\Lambda_1}^{K_1}\boxtimes \pi_{\Lambda_2}^{K_2}$, with $\pi_{\Lambda_s}^{K_s}$ an irreducible representation of $K_s, s=1,2$. We express $\gamma=(\gamma_1, \gamma_2)\in \mathfrak t^\star=\mathfrak t_1^\star +\mathfrak t_2^\star $. Hence, \cite{HS}\cite{DHV}, $\Lambda_1 =\lambda_1 +(\rho_n^\lambda)_1 , \Lambda_2 =\lambda_2 +(\rho_n^\lambda)_2$. Sometimes $(\rho_n^\lambda)_2 \not=0$; this happens only for $\mathfrak{su}(m,n)$ and some particular systems $\Psi_\lambda$ (see the proof of Lemma~\ref{lem:equalm}). Harish-Chandra parameters for the irreducible factors of $res_H(\pi_\lambda)$ (resp. $res_{H_0}(\pi_\lambda)$) will always be dominant with respect to $\Psi_{H,\lambda} \cap \Phi (\mathfrak l, \mathfrak u)$ (resp. $\Psi_{H_0,\lambda} \cap \Phi (\mathfrak l, \mathfrak u)$).
For short, we write $\pi_{\Lambda_2} :=\pi_{\Lambda_2}^{K_2}$.
We write $$ res_{L\cap K_2}(\pi_{\Lambda_2})= res_{L\cap K_2}(\pi_{\Lambda_2}^{K_2})= \sum_{\nu_2 \in (\mathfrak u \cap \mathfrak k_2)^\star} \, m^{K_2, L\cap K_2}(\Lambda_2, \nu_2) \, \pi_{\nu_2}^{L\cap K_2},$$ as a sum of irreducible representations of $L\cap K_2$.\\
The set of $\nu_2$ so that $ m^{K_2, L\cap K_2}(\Lambda_2, \nu_2)\not= 0$ is denoted by $Spec_{L\cap K_2}(\pi_{\Lambda_2}^{K_2})$. Thus,
$$ res_{L}(\pi_{\Lambda_1}^{K_1} \boxtimes \pi_{\Lambda_2}^{K_2})= \sum_{\nu_2 \in Spec_{L\cap K_2}(\pi_{\Lambda_2}^{ K_2})} \, m^{K_2, L\cap K_2}(\Lambda_2, \nu_2) \, \pi_{\Lambda_1}^{ K_1}\boxtimes \pi_{\nu_2}^{L\cap K_2}, $$ as a sum of irreducible representations of $L$. Besides, for a Harish-Chandra parameter $\eta=(\eta_1,\eta_2)$ for $H_0$, we write
$$ res_L( \pi_{(\eta_1,\eta_2)}^{H_0})= \sum_{(\theta_1,\theta_2) \in Spec_L(\pi_{(\eta_1,\eta_2)}^{H_0}) } \, m^{H_0, L}( (\eta_1,\eta_2),(\theta_1,\theta_2) ) \, \pi_{(\theta_1,\theta_2)}^{L}.$$ The restriction of $\pi_\lambda$ to $H$ is expressed by (see Theorem~\ref{prop:GWKO})
$$ res_H(\pi_{\lambda})= res_H( \pi_{\lambda}^{G})= \sum_{\mu \in Spec_H(\pi_\lambda) } \, m^{G, H}(\lambda, \mu) \, \pi_{\mu}^{H}.$$
In the above formulas, the $m^{\cdot,\cdot }(\cdot,\cdot)$ are nonnegative integers and represent multiplicities; for $\nu_2 \in Spec_{L\cap K_2}(\pi_{\Lambda_2}^{K_2})$, $\nu_2$ is dominant with respect to $\Psi_{H,\lambda} \cap \Phi(\mathfrak k_2, \mathfrak u \cap \mathfrak k_2)$, and $(\Lambda_1,\nu_2)$ is $\Psi_{H_0,\lambda}$-dominant (see \cite{Vaint}); in the third formula, $(\eta_1, \eta_2)$ is dominant with respect to $\Psi_{H_0,\lambda} $ and $(\theta_1, \theta_2)$ is dominant with respect to $\Psi_{H_0,\lambda} \cap \Phi_c(\mathfrak h_0, \mathfrak u )$; in the fourth formula, $\mu$ is dominant with respect to $\Psi_{H,\lambda}$. Sometimes, for $\mu \in Spec_{H}(\pi_{\lambda}^{G})$, we
replace $\rho_n^\mu$ by $\rho_n^H$.
We make a change of notation:
\phantom{xxxxxxxxxxx}$\sigma_j=\pi_{\Lambda_1}^{K_1}\boxtimes \pi_{\nu_2}^{L\cap K_2}$ and $q_j= m^{K_2, L\cap K_2}(\Lambda_2, \nu_2)$.\\
Then, in order to show either the existence of the map $D$ or the surjectivity of the map $r_0^D$, we need to show:
\begin{thm} \label{prop:equaldimeigensrea} \begin{eqnarray*} \lefteqn{ m^{G, H}(\lambda, \mu)=\dim Hom_H(H^2(H,V_{\mu+\rho_n^H}^L), H^2(G,V_{\lambda +\rho_n^G}^K)) } \\ & & \hspace{1.0cm} \mbox{ }\mbox{ } = \sum_{\nu_2 \in Spec_{L\cap K_2}(\pi_{\Lambda_2}^{K_2}) } \, m^{K_2, L\cap K_2}(\Lambda_2, \nu_2) \\ & & \hspace{4.2cm}\times \dim Hom_L( V_{\mu +\rho_n^H}^L, H^2(H_0, \pi_{\Lambda_1}^{K_1}\boxtimes \pi_{ \nu_2}^{L\cap K_2})) .\end{eqnarray*}
\end{thm}
A complete proof of this result is in \cite{Vaint}. However, for the sake of completeness and clarity, we sketch a proof. We also present some consequences of the Theorem.
Next, we compute the infinitesimal character (Harish-Chandra parameter) of $ H^2(H_0, \pi_{\Lambda_1}^{K_1}\boxtimes \pi_{ \nu_2}^{L\cap K_2})$ and restate the previous Theorem. We have
$$ic( H^2(H_0, \pi_{\Lambda_1}^{K_1}\boxtimes \pi_{ \nu_2}^{L\cap K_2}) )=(\Lambda_1,\nu_2)-\rho_n^{H_0}=(\lambda_1 +\rho_n^{G}-\rho_n^{H_0},\nu_2)=(\lambda_1,\nu_2)+\rho_n^H.$$
This equality is obviously true when $(\rho_n^\lambda)_2=0$.
In what follows, we state Theorem~\ref{prop:equaldimeigensrea} in a form independent of the realization of the Discrete Series involved.
\begin{thm} \label{prop:equaldim} (Duality, dimension formula.) The hypothesis is that $(G,H)$ is a symmetric pair and that $\pi_\lambda$ is an $H$-admissible representation. Then, \begin{eqnarray*} \lefteqn{ m^{G, H}(\lambda, \mu)=\dim Hom_H(V_{\mu }^H, V_\lambda^G) } \hspace{1.0cm} \\ & & \mbox{ } \mbox{ } = \sum_{\nu_2 \in Spec_{L\cap K_2}(\pi_{\Lambda_2}^{K_2}) } \, m^{K_2, L\cap K_2}(\Lambda_2, \nu_2) \\ & & \hspace{4.0cm} \times \dim Hom_L( V_{\mu +\rho_n^H}^L, V_{(\lambda_1, \nu_2)+\rho_n^H}^{H_0}) .\end{eqnarray*} \end{thm}
After Lemma~\ref{lem:multshift} the formula simplifies to
$$ m^{G, H}(\lambda, \mu) = \sum_{\nu_2 \in Spec_{L\cap K_2}(\pi_{\lambda_2}^{K_2})} m^{K_2 , L\cap K_2}(\lambda_2, \nu_2) \dim Hom_L (V_\mu^L, V_{(\lambda_1, \nu_2)}^{H_0}).$$
The following diagram helps to understand the equalities in the Theorem and in the next three Lemmas.
\medskip
\xymatrix{
Spec_H( V_{(\lambda_1, \lambda_2)}^G ) \ar[r]^-{\mu \mapsto \mu} \ar[dr]_{\nu \mapsto \nu+\rho_n^H}
& \cup_{\nu_2 \in Spec_{L\cap K_2}(\pi_{\Lambda_2})} Spec_L(V_{(\lambda_1,\nu_2)}^{H_0}) \ar[d]^{\nu \mapsto \nu+\rho_n^H} \\
& Spec_L(\mathbf H^2(H_0,\tau))= \cup_{\nu_2 \in Spec_{L\cap K_2}(\pi_{\Lambda_2})} Spec_L(V_{(\lambda_1, \nu_2) +\rho_n^H}^{H_0}) }
\medskip
\smallskip
A consequence of Theorem~\ref{prop:equaldim}, Lemma~\ref{lem:multshift} and Lemma~\ref{lem:ddzhfinite} is:
\begin{cor}
\begin{multline*}Spec_H( \pi_\lambda) +\rho_n^H \\ =Spec_L(\mathbf H^2(H_0,\tau))= \cup_{\nu_2 \in Spec_{L\cap K_2}(\pi_{\Lambda_2})} Spec_L(V_{(\lambda_1, \nu_2) +\rho_n^H}^{H_0}). \end{multline*}
\phantom{xxxxxx} $Spec_H( \pi_\lambda) = \cup_{\nu_2 \in Spec_{L\cap K_2}(\pi_{\Lambda_2})} Spec_L(V_{(\lambda_1, \nu_2) }^{H_0}).$
\end{cor}
Theorem~\ref{prop:equaldim} follows after we verify the next two Lemmas.
\begin{lem}\label{lem:ddzhfinite} The hypothesis is that $(G,H)$ is a symmetric pair and that $\pi_\lambda$ is $H$-admissible. Then
\begin{eqnarray*} \lefteqn{ \dim Hom_H(V_\mu^H, V_\lambda^G) }\hspace{1.0cm} \\ & & = \sum_{\nu_2 \in Spec_{L\cap K_2}(\pi_{\lambda_2}^{K_2})} m^{K_2 , L\cap K_2}(\lambda_2, \nu_2) \dim Hom_L (V_\mu^L, V_{(\lambda_1, \nu_2)}^{H_0}). \end{eqnarray*}
\end{lem}
\begin{proof}[Proof of Lemma~\ref{lem:ddzhfinite}]
The hypothesis that $(G,H)$ is a symmetric pair and that $\pi_\lambda$ is $H$-admissible allows us to apply notation and facts from \cite{DV}, \cite{Vaint} as well as from \cite{H} \cite{DHV} \cite{GW} \cite{KO}. The proof is based on an idea in \cite{DHV} of piling up multiplicities by means of Dirac delta distributions. That is, let $\delta_\nu$ denote the Dirac delta distribution at $\nu \in i\mathfrak u^\star$. Under our hypothesis, the function $m^{G,H}(\lambda,\mu)$ has polynomial growth in $\mu$; whence, the series $\sum_\mu m^{G,H}(\lambda, \mu) \,\,\delta_\mu $ converges in the space of distributions on $i\mathfrak u^\star$. Since a Harish-Chandra parameter is regular, we may and will extend the function $m^{G,H}(\lambda, \cdot)$ to a $W_L$-skew symmetric function by the rule $m^{G,H}(\lambda, w\mu) =\epsilon (w) m^{G,H}(\lambda, \mu), w \in W_L$. Thus, the series $\sum_{\mu \in HC-param(H)} m^{G,H}(\lambda, \mu)\delta_\mu$ converges in the space of distributions on $i\mathfrak u^\star$. Next, for $0\not= \gamma \in i\mathfrak u^\star$ we consider the discrete Heaviside distribution $y_\gamma :=\sum_{n\geq 0}\delta_{\frac{\gamma}{2} +n\gamma}$, and for a strict, finite multiset $S=\{\gamma_1, \dots, \gamma_r\}$ of elements in $i\mathfrak u^\star$, we set \begin{equation*}y_S:= y_{\gamma_1}\star \cdots \star y_{\gamma_r}=\sum_{\mu \in i\mathfrak u^\star} p_S(\mu) \delta_\mu .\end{equation*} Here, $\star$ is the convolution product in the space of distributions on $i\mathfrak u^\star$, and $p_S$ is called the {\it partition function} attached to the set $S$. Then, in \cite{DV} the following equality is presented:
$$\sum_{\mu \in HC-param(H)} m^{G,H}(\lambda, \mu) \,\,\delta_\mu =\sum_{w \in W_K} \epsilon(w)\,\,\delta_{q_\mathfrak u(w\lambda)} \bigstar y_{S_w^H}.$$
Here, $W_S$ is the Weyl group of a compact connected Lie group $S$; for an $ad(\mathfrak u)$-invariant linear subspace $R$ of $\mathfrak g_\mathbb C$, $\Phi(R,\mathfrak u)$ denotes the multiset of elements in $\Phi(\mathfrak g,\mathfrak u)$ whose root space is contained in $R$; and $S_w^H=[q_\mathfrak u(w(\Psi_\lambda)_n)\cup \Delta(\mathfrak k/\mathfrak l,\mathfrak u)]\backslash \Phi(\mathfrak h/\mathfrak l,\mathfrak u)$.
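To see the mechanism in the simplest case (a sanity check only): for a single $0\not= \gamma$, one computes $y_\gamma \star y_\gamma =\sum_{m,n\geq 0} \delta_{\gamma +(m+n)\gamma}=\sum_{k\geq 0}(k+1)\,\delta_{(k+1)\gamma}$; the coefficient $k+1$ is precisely the number of ways of writing $k\gamma$ as $m\gamma +n\gamma$ with $m,n \geq 0$, in agreement with the description of $p_S$ recalled earlier.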
Since $K=K_1K_2$ and $W_K =W_{K_1}\times W_{K_2}$, we write $W_K \ni w=st, s\in W_{K_1}, t\in W_{K_2}$. We recall that the hypothesis yields $K_1\subset L$. It readily follows that $s \Phi(\mathfrak h/\mathfrak l,\mathfrak u)=\Phi(\mathfrak h/\mathfrak l,\mathfrak u)$, $ s\Delta(\mathfrak k/\mathfrak l,\mathfrak u)=\Delta(\mathfrak k/\mathfrak l,\mathfrak u)$, $t(\Psi_\lambda)_n=(\Psi_\lambda)_n$, $t\eta_1=\eta_1$ and $s\eta_2=\eta_2$ for $\eta_j \in \mathfrak k_j \cap \mathfrak u $, and $sq_\mathfrak u (\cdot)=q_\mathfrak u(s\cdot)$.
Hence,
$$S_w^H= s([q_\mathfrak u ( (\Psi_\lambda)_n) \cup \Delta(\mathfrak k/\mathfrak l,\mathfrak u)]\backslash \Phi(\mathfrak h/\mathfrak l,\mathfrak u))=s(\Psi_n^{H_0}) \cup \Delta(\mathfrak k/\mathfrak l,\mathfrak u).$$
Thus,
\begin{eqnarray*}\lefteqn{ \sum_{w \in W_K} \epsilon(w)\,\,\delta_{q_\mathfrak u(w\lambda)} \bigstar y_{S_w^H} =\sum_{s,t} \epsilon(st)\delta_{q_\mathfrak u(st\lambda)}\bigstar y_{s(\Psi_n^{H_0}) \cup \Delta(\mathfrak k/\mathfrak l,\mathfrak u)}} \\ & & \mbox{\phantom{xxxxx}} = \sum_{s,t} \epsilon(st)\delta_{q_\mathfrak u(s\bcancel{t}\lambda_1+\bcancel{s}t\lambda_2) }\bigstar y_{s(\Psi_n^{H_0})}\bigstar y_{ \Delta(\mathfrak k/\mathfrak l,\mathfrak u)} \\ & & \mbox{\phantom{xxxxxxxxxx}} = \sum_{s} \epsilon(s)\delta_{(s\lambda_1,0)} \bigstar y_{s(\Psi_n^{H_0})}\bigstar \sum_{t} \epsilon(t) \delta_{q_\mathfrak u(t\lambda_2) } \bigstar y_{ \Delta(\mathfrak k/\mathfrak l,\mathfrak u)}. \end{eqnarray*}
Following \cite{H}, we write the restriction of $\pi_{\lambda_2}^{K_2}$ to $L\cap K_2$ in the language of Dirac, Heaviside distributions in $i\mathfrak u^\star$, whence \begin{eqnarray*} \lefteqn{\sum_{t \in W_{K_2}} \epsilon(t) \delta_{ q_\mathfrak u (t \lambda_{ 2 }) } \bigstar y_{ \Delta(\mathfrak k_2 /(\mathfrak k_2 \cap \mathfrak l),\mathfrak u)}} \\ & & \mbox{\phantom{xxxxxx}} =\sum_{\nu_2 \in Spec_{L\cap K_2}(\pi_{\lambda_2}^{K_2})} m^{K_2 , L\cap K_2}(\lambda_2, \nu_2)\sum_{w_2 \in W_{K_2\cap L}} \epsilon(w_2) \delta_{(0,w_2 \nu_2)}. \end{eqnarray*} In the previous formula, we will apply $ \Delta(\mathfrak k_2 /(\mathfrak k_2 \cap \mathfrak l),\mathfrak u)= \Delta(\mathfrak k /\mathfrak l,\mathfrak u)$.\\
We also write in the same language the restriction to $L$ of a Discrete Series $\pi_{(\lambda_1, \nu_2)}^{H_0}$ of $H_0$. That is,
$$\sum_{\nu \in i\mathfrak u^\star} m^{H_0,L}((\lambda_1,\nu_2), \nu) \, \delta_\nu = \sum_{s\in W_{K_1},t\in W_{K_2\cap L}} \epsilon(st)\delta_{ st(\lambda_1, \nu_2) }\bigstar y_{st(\Psi_n^{H_0})}.$$
Putting together the previous equalities, we obtain
\begin{eqnarray*}\lefteqn{\sum_\mu m^{G,H}( \lambda, \mu ) \,\,\delta_\mu } \\ & =\sum_{\nu_2 \in Spec_{L\cap K_2}(\pi_{\lambda_2}^{K_2})} m^{K_2 , L\cap K_2}(\lambda_2, \nu_2) & \\ & \mbox{\phantom{xxxxxxxxxxccccccccccc}} \times \sum_{s\in W_{K_1},t\in W_{K_2\cap L}} \epsilon(st)\delta_{(st\lambda_1,st\nu_2) }\bigstar y_{st(\Psi_n^{H_0})} & \\ & \mbox{\phantom{xxxx}}
=\sum_\nu (\sum_{\nu_2 \in Spec_{L\cap K_2}(\pi_{\lambda_2}^{K_2})} m^{K_2 , L\cap K_2}(\lambda_2, \nu_2) m^{H_0,L}((\lambda_1,\nu_2),\nu)) \,\,\delta_\nu . & \end{eqnarray*}
Since the family $\{ \delta_\nu \}_{\nu \in i\mathfrak u^\star}$ is linearly independent,
we have shown Lemma~\ref{lem:ddzhfinite}.
\end{proof}
In order to conclude the proof of the dimension equality, we state and prove a translation-invariance property of multiplicities.
\begin{lem} \label{lem:multshift} For a dominant integral $\mu \in i\mathfrak u^\star$, it holds:
\smallskip \phantom{xxxxxxx} $ m^{H_0 ,L} ((\lambda_1, \nu_2)+\rho_n^H,\mu +\rho_n^H )= m^{H_0 ,L} ((\lambda_1, \nu_2) ,\mu) $.
\end{lem}
\begin{proof}
We recall that the hypothesis of Lemma~\ref{lem:multshift} is: $(G,H)$ is a symmetric pair and $\pi_\lambda$ is $H$-admissible. The proof of Lemma~\ref{lem:multshift} is an application of Blattner's multiplicity formula, of facts from \cite{HS}, and of observations from \cite[Tables 1, 2, 3]{Vaint}. In the next paragraphs we only consider systems $\Psi_\lambda$ such that $res_H(\pi_\lambda)$ is admissible. We check the following statements by means of a case-by-case analysis and the tables in \cite{GW} and \cite{Vaint}.
OBS0. Every quaternionic system of positive roots that we are dealing with satisfies the Borel de Siebenthal property, except for the algebra $\mathfrak{su}(2,2n)$ and the system $\Psi_1 $ (see \ref{obs:wuforsum}). Its Dynkin diagram is \xymatrix{
{\bullet} \ar@{-}[r] & {\circ} \ar@{-}[r]& {\circ} \ar@{-}[r]
& {\bullet} } .
Bullets represent noncompact simple roots, circles compact ones.
OBS1. The systems $\Psi_{H,\lambda}$ and $\Psi_{H_0,\lambda}$ always have the same compact simple roots.
OBS2. When $ \Psi_\lambda$ satisfies the Borel de Siebenthal property, it follows that both systems $\Psi_{H,\lambda}, \Psi_{H_0,\lambda}$ satisfy the Borel de Siebenthal property.
OBS3. $\Psi_\lambda$ satisfies the Borel de Siebenthal property except for two families of algebras: a) for the algebra $ \mathfrak{su}(m,n) $ and the systems $\Psi_a, a=1,\cdots, m-1$, $\tilde \Psi_b, b=1, \cdots, n-1$, the corresponding systems $\Psi_{H_0,\lambda}, \Psi_{H,\lambda}$ do not satisfy the Borel de Siebenthal property; they have two noncompact simple roots. b) For the algebra $\mathfrak{so}(2m,2)$, each system $\Psi_{\pm }$ does not satisfy the Borel de Siebenthal property; however, each associated system $ \Psi_{SO(2m,1),\lambda},\Psi_{H_0,\lambda} $ satisfies the Borel de Siebenthal property.
OBS4. For the pair $(\mathfrak{su}(2,2n),
\mathfrak{sp}(1,n))$, $\Psi_1$ does not satisfy the Borel de Siebenthal property. Here, $\Psi_{H,\lambda} =\Psi_{H_0,\lambda}$, and they have the Borel de Siebenthal property.
OBS5. Summing up: both systems $\Psi_{H,\lambda}, \Psi_{H_0,\lambda}$ satisfy the Borel de Siebenthal property, except for $(\mathfrak{su}(m,n), \mathfrak{su}(m,k)+ \mathfrak{su}(n-k)+\mathfrak u(1))$, $(\mathfrak{su}(m,n), \mathfrak{su}(k,n)+ \mathfrak{su}( m-k)+\mathfrak u(1))$ and the systems $\Psi_a, a=1,\cdots, m-1$, $\tilde \Psi_b, b=1, \cdots, n-1$.
\smallskip
To continue, we make Blattner's formula explicit in our setting, recall facts from \cite{HS}, and finish the proof of Lemma~\ref{lem:multshift} under the assumption that $\Psi_{H_0,\lambda}$ satisfies the Borel de Siebenthal property. Later on, we consider the other systems.
\medskip
Blattner's multiplicity formula applied to the $L$-type $V_{\mu +\rho_n^H}^L$ of $V_{(\lambda_1, \nu_2)+\rho_n^H }^{H_0}$ yields
\begin{eqnarray} \label{eqn:(A)}\lefteqn{ dim Hom_L(V_{\mu +\rho_n^H}^L, V_{(\lambda_1,\nu_2) +\rho_n^H}^{H_0}) } & & \\ & & =\sum_{s\in W_L} \epsilon (s) Q_0(s(\mu +\rho_n^H)-((\lambda_1,\nu_2) +\rho_n^H+\rho_n^{H_0})).\nonumber
\end{eqnarray}
Here, $Q_0$ is the partition function associated to the set $\Phi_n (\mathfrak h_0) \cap \Psi_{H_0,\lambda}$.
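To fix ideas, we record the simplest rank one instance (stated only for orientation): if $H_0=SL(2,\mathbb R)$ and $L=SO(2)$, then $W_L$ is trivial and $Q_0$ is the partition function of the single noncompact positive root $\alpha$; formula \ref{eqn:(A)} then says that the $L$-types of a Discrete Series of $H_0$ with Harish-Chandra parameter $\eta$ have parameters exactly $\eta +\rho_n^{H_0}+j\alpha$, $j \in \mathbb Z_{\geq 0}$, each with multiplicity one. This is the familiar ladder of $K$-types of a Discrete Series of $SL(2,\mathbb R)$, and it agrees with the description from \cite[Lemma 2.22]{Sc} recalled in \ref{sub:paramhc}.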
We recall a fact that allows us to simplify the above formula in our setting.
Fact 1: \cite[Statement 4.31]{HS}. For a system $\Psi_{H_0,\lambda}$ having the Borel de Siebenthal property, it is shown that if the summand attached to $s \in W_L$ in the above sum contributes nontrivially, then $ s$ belongs to the subgroup $W_U(\Psi_{H_0,\lambda})$ generated by the reflections about the compact simple roots in $\Psi_{H_0,\lambda}$.
From OBS1 we have
$W_U(\Psi_{H_0,\lambda})= W_U(\Psi_{H,\lambda})$. Owing to the fact that either $\Psi_{H_0,\lambda}$ or $\Psi_{H,\lambda}$ has the Borel de Siebenthal property, we may apply \cite[Lemma 3.3]{HS}, whence $W_U(\Psi_{H,\lambda})=\{ s \in W_L : s(\Psi_{H,\lambda}\cap \Phi_n(\mathfrak h,\mathfrak u)) =\Psi_{H,\lambda}\cap \Phi_n(\mathfrak h,\mathfrak u)\}$. Thus, for $s \in W_U(\Psi_{H_0,\lambda})$ we have $s\rho_n^H= \rho_n^H$. We apply the equality $s\rho_n^H= \rho_n^H$ in \ref{eqn:(A)} and obtain $$\dim Hom_L (V_{\mu+\rho_n^H}^L, V_{(\lambda_1, \nu_2)+\rho_n^H }^{H_0})= \sum_{s \in W_U(\Psi_{H_0,\lambda})} \epsilon(s) Q_0( s\mu -((\lambda_1, \nu_2 )+\rho^{H_0}_n)).$$ Blattner's formula and the previous observations give us that the right hand side of the above equality is
\phantom{cccccccccccccccc}$\dim Hom_L(V_{\mu}^L, V_{(\lambda_1, \nu_2)}^{H_0})= m^{H_0,L}((\lambda_1, \nu_2), \mu)$,\\ whence, we have shown Lemma~\ref{lem:multshift} when $\Psi_{H_0,\lambda}$ has the Borel de Siebenthal property.
\medskip
In order to complete the proof of Lemma~\ref{lem:multshift}, owing to OBS5, we are left to consider the pairs
$ (\mathfrak{su}(m,n), \mathfrak{su}(m,k)+ \mathfrak{su}(n-k)+\mathfrak u(1)) $ and $ (\mathfrak{su}(m,n), \mathfrak{su}(k,n)+ \mathfrak{su}(m-k)+\mathfrak u(1)) $ together with the systems $\Psi_a, a=1,\cdots, m-1$, $\tilde \Psi_b, b=1, \cdots, n-1$. The previous reasoning shows that we are left to extend Fact 1, \cite[Statement (4.31)]{HS}, to the pair $ (\mathfrak{su}(m,n), \mathfrak{su}(m,k)+ \mathfrak{su}(n-k)+\mathfrak u(1)) $ (resp. $ (\mathfrak{su}(m,n), \mathfrak{su}(k,n)+ \mathfrak{su}(m-k)+\mathfrak u(1)) $) and the systems $(\Psi_a)_{ a=1,\cdots, m-1}$ (resp. $(\tilde \Psi_b)_{ b=1, \cdots, n-1}$). In this setting we first verify:
\begin{rmk}\label{obs:wuforsum} If $w \in W_L$ and $Q_0(w\mu -(\lambda + \rho_n))\not= 0$, then $ w \in W_U(\Psi_{H_0,\lambda})$.
\end{rmk}
To show {\it Remark}~\ref{obs:wuforsum} we follow \cite{HS}. We fix as Cartan subalgebra $\mathfrak t$ of $\mathfrak {su}(m,n)$ the set of diagonal matrices in $\mathfrak {su}(m,n).$ For a certain orthogonal basis $\epsilon_1, \dots ,\epsilon_m, \delta_1, \dots, \delta_n$ of the dual vector space to the subspace of diagonal matrices in $\mathfrak{gl}(m+n, \mathbb C),$ we may, and will, choose $\Delta =\{ \epsilon_r - \epsilon_s, \delta_p -\delta_q, 1 \leq r < s \leq m, 1 \leq p < q \leq n \};$ the set of noncompact roots is $ \Phi_n= \{ \pm (\epsilon_r - \delta_q) \}.$ We recall that the positive root systems for $\Phi(\mathfrak g, \mathfrak t)$ containing $\Delta$ are in a bijective correspondence with the totality of lexicographic orders for the basis $\epsilon_1, \dots ,\epsilon_m, \delta_1, \dots, \delta_n$ which contain the ``suborder'' $\epsilon_1 > \dots > \epsilon_m, \,\, \delta_1 > \dots > \delta_n.$ The two holomorphic systems correspond to the orders $\epsilon_1 > \dots > \epsilon_m > \delta_1 > \dots > \delta_n ; \,\, \delta_1 > \dots > \delta_n >\epsilon_1 > \dots > \epsilon_m.$ We fix $1 \leq a \leq m-1 $ and let $ \Psi_a$ denote the set of positive roots associated to the order $\epsilon_1 > \dots > \epsilon_a >\delta_1 > \dots > \delta_n > \epsilon_{a+1} > \dots > \epsilon_m$. We fix $1 \leq b \leq n-1 $ and let $\tilde{\Psi}_b$ denote the set of positive roots associated to the order $\delta_1 > \dots > \delta_b >\epsilon_1 > \dots > \epsilon_m > \delta_{b+1} > \dots > \delta_n $. Here, $\mathfrak h=\mathfrak{su}(m,k)+\mathfrak{su}(n-k)+\mathfrak u(1)$ and $\mathfrak h_0=\mathfrak{su}(m,n-k)+\mathfrak{su}(k)+\mathfrak u(1)$. The root systems for $(\mathfrak h, \mathfrak t)$ and $(\mathfrak h_0, \mathfrak t)$ respectively are:
\begin{multline*}
\Phi(\mathfrak h, \mathfrak t)=\{ \pm (\epsilon_r -\epsilon_s), \pm(\delta_p -\delta_q), \pm (\epsilon_i -\delta_j),
1 \leq r < s \leq m, \\ 1 \leq p < q \leq k, \,\, or, \,\, k+1 \leq p < q \leq n, 1\leq i \leq m, 1 \leq j \leq k \}.
\end{multline*}
\begin{multline*} \Phi(\mathfrak h_0, \mathfrak t)=\{ \pm (\epsilon_r -\epsilon_s), \pm(\delta_p -\delta_q), \pm (\epsilon_i -\delta_j), 1 \leq r < s \leq m, \\ 1 \leq p < q \leq k\,\, or\,\, k+1 \leq p < q \leq n, 1 \leq i \leq m, k+1 \leq j \leq n \}.
\end{multline*}
The systems $\Psi_{H,\lambda}=\Psi_\lambda \cap \Phi(\mathfrak h,\mathfrak t)$ and $\Psi_{H_0,\lambda}=\Psi_\lambda \cap \Phi(\mathfrak h_0,\mathfrak t)$ which correspond to $\Psi_a$ are the systems associated to the respective lexicographic orders $$ \epsilon_1> \dots > \epsilon_a > \delta_1> \dots> \delta_k > \epsilon_{a+1} > \dots > \epsilon_m,\, \delta_1> \dots> \delta_n, $$ $$\epsilon_1> \dots > \epsilon_a > \delta_{k+1}> \dots> \delta_n > \epsilon_{a+1} > \dots > \epsilon_m, \, \delta_1> \dots> \delta_n .$$
For the time being we set $k=n$ and we show {\it Remark}~\ref{obs:wuforsum} for $\mathfrak{su}(m,n)$ and $\Psi_a$. $Q$ denotes the partition function for $\Psi_a \cap \Phi_n$.
The subroot system spanned by the compact simple roots in $\Psi_a$ is
$\Phi_U =\{ \epsilon_i -\epsilon_j, 1\leq i\leq a, 1\leq j\leq a \,\text{or}\, a+1\leq i\leq m, a+1\leq j\leq m \} \cup \{ \delta_i -\delta_j, 1\leq i\not= j\leq n \}.$
$\Psi_a \cap \Phi_c \backslash \Phi_U =\{ \epsilon_i -\epsilon_j, 1\leq i\leq a, a+1\leq j\leq m \} $.
$\Psi_a \cap \Phi_n =\{\epsilon_i -\delta_j, \delta_j -\epsilon_r, 1\leq i \leq a, a+1\leq r \leq m, 1\leq j \leq n \}$.
$2\rho_n^H= n(\epsilon_1 +\dots +\epsilon_a)- n(\epsilon_{a+1} +\dots +\epsilon_m)+((m-a)-a)(\delta_1 +\dots +\delta_n)$.
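To illustrate this formula, take $\mathfrak{su}(3,1)$ and $a=1$, so that $m=3$, $n=1$; then $\Psi_1 \cap \Phi_n=\{\epsilon_1-\delta_1,\ \delta_1-\epsilon_2,\ \delta_1-\epsilon_3\}$ and
$$2\rho_n^H=\epsilon_1-(\epsilon_2+\epsilon_3)+((3-1)-1)\delta_1=\epsilon_1-\epsilon_2-\epsilon_3+\delta_1,$$
which is indeed the sum of the three roots just listed.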
A finite sum of noncompact roots in $\Psi_a$ is equal to
$B= \sum_{1\leq j \leq a} A_j \epsilon_j -\sum_{a+1\leq i \leq m} B_i \epsilon_i +\sum_r C_r \delta_r $ with $A_j, B_i$ nonnegative integers.
Let $ w \in W_L $ be such that $Q(w\mu -(\lambda + \rho_n))\not= 0$. Hence, $\mu =w^{-1}(\lambda +\rho_n +B)$, with $B$ a sum of roots in $\Psi_a\cap \Phi_n$. Thus, $w^{-1}$ is the unique element in $W_L$ that takes $\lambda +\rho_n +B $ to the Weyl chamber determined by $\Psi_a \cap \Phi_c$.
Let $w_1 \in W_U(\Psi_a)$ be such that $ w_1(\lambda +\rho_n +B)$ is $ \Psi_a \cap \Phi_U $-dominant. Next we verify that $w_1(\lambda +\rho_n +B)$ is $\Psi_a \cap \Phi_c$-dominant. For this, we fix $\alpha \in \Psi_a \cap\Phi_c \backslash \Phi_U$ and check $(w_1(\lambda +\rho_n +B),\alpha)>0 $. We write $\alpha =\epsilon_i -\epsilon_j$ with $i\leq a < j$; since $ w_1 \in W_U(\Psi_a)$, $w_1^{-1}(\alpha)=\epsilon_r -\epsilon_s$, $r\leq a < s$, belongs to $\Psi_a$. Thus,
$(w_1(\lambda +\rho_n +B),\alpha)=( \lambda +\rho_n +B, w_1^{-1}\alpha)= ( \lambda , w_1^{-1}\alpha) +(\rho_n, w_1^{-1}\alpha) +(B, w_1^{-1}\alpha)= ( \lambda , w_1^{-1}\alpha) +\tfrac{n}{2}-(-\tfrac{n}{2}) +A_r +B_s $; the first summand is positive because $\lambda$ is $\Psi_a$-dominant, the second equals $n$, and $A_r, B_s$ are nonnegative. Therefore, $w^{-1}=w_1$ and we have shown {\it Remark}~\ref{obs:wuforsum}, whence we have concluded the proof of Lemma~\ref{lem:multshift}. \end{proof}
\begin{lem}\label{lem:equalm} We recall $\rho_n^G=\rho_n^\lambda$ and $\Lambda_2=\lambda_2 +(\rho_n^G)_2$. We claim: \\ \phantom{xxxxxxxxxxxxxxxxxxxxxxxxxxxxx} $m^{K_2,L\cap K_2}(\Lambda_2,\nu_2)=m^{K_2,L\cap K_2}(\lambda_2,\nu_2)$.
\end{lem}
In fact, when $\Psi_\lambda$ is holomorphic, $\rho_n^G$ lies in $\mathfrak z_\mathfrak k =\mathfrak k_1$, hence $(\rho_n^G)_2=0$. In \cite{Vaint} it is shown that when $K$ is semisimple $(\rho_n^G)_2=0$. Actually, this is so owing to the facts that the simple roots for $\Psi_\lambda \cap \Phi(\mathfrak k_2,\mathfrak t_2)$ are simple roots for $\Psi_\lambda$ and that $\rho_n^G$ is orthogonal to every compact simple root for $\Psi_\lambda$. For general $\mathfrak g$, the previous considerations together with the fact that $(\rho_n^G)_2$ is orthogonal to $\mathfrak k_1$ yield that $(\rho_n^G)_2$ belongs to the dual of the center of $\mathfrak l \cap \mathfrak k_2$. From Tables 1,2,3 we deduce that we are left to analyze $(\rho_n^G)_2$ for $\mathfrak{su}(m,n)$ and $\mathfrak{so}(m,2)$.
For $\mathfrak{so}(m,2)$ we follow the notation in \ref{tauisirred}; then $\mathfrak t_1=span(e_1,\dots,e_m)$, $\mathfrak t_2=span(\delta_1)$ and $\rho_n^{\Psi_{\pm m}}=c(e_1+\dots +e_m) \in \mathfrak t_1$. For $\mathfrak{su}(m,n)$ we follow the notation of Lemma~\ref{lem:multshift}. It readily follows that for $1\leq a<m$, $\rho_n^{\Psi_a}=\frac{n}{m}((m-a)(\epsilon_1+\dots +\epsilon_a) -a(\epsilon_{a+1}+\dots +\epsilon_m))+\frac{2a-m}{2m}(n(\epsilon_1+\dots +\epsilon_m)-m(\delta_1+\dots +\delta_n))$. The first summand is in $\mathfrak t\cap \mathfrak{su}(m)$, the second summand belongs to $\mathfrak z_\mathfrak k$; thus,
$(\rho_n^{\Psi_a})_2=0$ if and only if $2a=m$. Whence, for $(\mathfrak{su}(2,m), \mathfrak{sp}(1,m))$, we have $(\rho_n^{\Psi_1})_2=0$. For $(\mathfrak{su}(m,n), \mathfrak{su}(m,k)+ \mathfrak{su}( n-k)+\mathfrak z_\mathfrak l) $, $(\rho_n^{\Psi_a})_2$ always determines a character of the center of $\mathfrak k$. In this case, $\lambda_2=\Lambda_2$ except for $(\mathfrak{su}(m,n), \mathfrak{su}(m,k)+ \mathfrak{su}( n-k)+\mathfrak z_\mathfrak l), \Psi_a $ and $2a\not= m$; in the latter case $\pi_{\Lambda_2}^{K_2}$ is equal to $\pi_{\lambda_2}^{K_2}$ times a central character of $K$. Thus, the equality holds.
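To illustrate, for the pair $(\mathfrak{su}(2,n), \mathfrak{sp}(1,n))$ and $a=1$ the displayed expression specializes to
$$\rho_n^{\Psi_1}=\frac{n}{2}(\epsilon_1-\epsilon_2),$$
which lies in $i\mathfrak t_1^\star$, since here $\mathfrak k_1=\mathfrak{su}_2(\alpha_{max})$ with $\alpha_{max}=\epsilon_1-\epsilon_2$; hence $(\rho_n^{\Psi_1})_2=0$, as claimed.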
\subsubsection{Conclusion of the proof of Theorem~\ref{prop:equaldim}} We just put together Lemma~\ref{lem:multshift}, Lemma~\ref{lem:ddzhfinite} and Lemma~\ref{lem:equalm}; hence, we obtain the equalities we were searching for. This concludes the proof of Theorem~\ref{prop:equaldim}.
\subsubsection{Existence of $D$}
In what follows we show the existence of the isomorphism $D$ in Theorem~\ref{prop:equaldim} $i)$ and derive the decomposition into irreducible factors of the semisimple $\mathfrak h_0$-module $\mathcal U(\mathfrak h_0)W$. Along the way, we also consider some particular cases of Theorem~\ref{prop:equaldim}. Before we proceed, we comment on the structure of the representation $\tau$.
\subsubsection{Representations $\pi_\lambda$ so that $res_L(\tau)$ is irreducible} \label{tauisirred} Under our $H$-admissibility hypothesis on $\pi_\lambda$ we analyze the cases in which the representation $res_L(\tau)$ is irreducible. The next structure statements are verified in \cite{Vaint}. To begin with, we recall the decomposition $K=K_1Z_K K_2 $ (this is not a direct product; $Z_K$ denotes the connected center of $K$) and the direct product $K=K_1K_2$; we also recall that both $K_1$ and $K_2$ depend on $\Psi_\lambda$. When $\pi_\lambda$ is a holomorphic representation, $K_1=Z_K$ and $\mathfrak k_2=[\mathfrak k,\mathfrak k]$; when $Z_K$ is nontrivial and $\pi_\lambda$ is not a holomorphic representation, we have $Z_K \subset K_2$; for $\mathfrak g=\mathfrak{su}(m,n), \mathfrak h=\mathfrak{su}(m,k)+\mathfrak{su}(n-k)+\mathfrak{z}_L$, we have $\mathbf T\equiv Z_K \subset Z_L\equiv \mathbf T^2$. Here $Z_K\subset L$, and $\tau_{\vert_L}$ irreducible forces $\tau=\pi_{\Lambda_1}^{SU(m)}\boxtimes \pi_\chi^{Z_K}\boxtimes \pi_{\rho_{SU(n)}}^{SU(n)}$; for $\mathfrak g\ncong \mathfrak{su}(m,n)$ and $G/K$ a Hermitian symmetric space, we have to consider the next two examples.
For both cases we have $K_2 =Z_K (K_2)_{ss}$ and $Z_K \nsubseteq L$.
1) When $\mathfrak g=\mathfrak{so}(m,2), \mathfrak h=\mathfrak{so}(m,1)$ and $\Psi_\lambda =\Psi_{\pm m}$, then $\mathfrak k_1=\mathfrak{so}(m)$, $\mathfrak k_2=\mathfrak z_K$ and obviously $res_L(\tau)$ is always an irreducible representation. Here, $\pi_{\Lambda_2}^{K_2}$ is a one-dimensional representation.
\smallskip
2) When $\mathfrak g=\mathfrak{su}(2,n), \mathfrak h=\mathfrak{sp}(1,n)$, $\Psi_\lambda=\Psi_1$, then $\mathfrak k_1=\mathfrak{su}_2(\alpha_{max})$, $\mathfrak k_2=\mathfrak{sp}(n)+\mathfrak z_\mathfrak k$, $L=K_1 (L\cap (K_2)_{ss})$. Here, $\tau_{\vert_L}$ irreducible forces $\tau=\pi_{\Lambda_1}^{K_1}\boxtimes \pi_\chi^{Z_K} \boxtimes \pi_{\rho_{(K_2)_{ss}}}^{(K_2)_{ss}}$.
\smallskip
We would like to point out that for $ \mathfrak g=\mathfrak{so}(2m,2n), \mathfrak h=\mathfrak{so}(2m,2n-1), n>1,$ $\Psi_\lambda \cap \Phi_n =\{ \epsilon_i \pm \delta_j \},$ $\mathfrak k_1=\mathfrak{so}(2m)$, and if $\lambda$ is so that $\lambda +\rho_n^\lambda=ic(\tau) =(\sum c_i \epsilon_i, k(\delta_1 +\dots +\delta_{ n-1} \pm \delta_n) ) +\rho_K $, then $res_L(\tau)$ is irreducible and $\pi_{\Lambda_2}^{K_2}=\pi_{ k(\delta_1 +\dots +\delta_{ n-1} \pm \delta_n) +\rho_{K_2}}^{K_2}$ is not a one-dimensional representation for $k>0$. It follows from the classical branching laws that these are the unique $\tau$'s such that $res_L(\tau)$ is irreducible.
\smallskip
We believe that if
$res_L(\tau)$ is irreducible and $\mathfrak g \ncong \mathfrak{so}(2m,2n)$, then $\tau$ is the tensor product of an irreducible representation of $K_1$ and a one-dimensional representation of $K_2$. That is,
$\tau \equiv \pi_{\Lambda_1}^{K_1}\boxtimes \pi_{\rho_{K_2}}^{K_2}\otimes \pi_{\chi}^{Z_{K_2}}$.
\smallskip
In \ref{sub:existencep} we show that whenever a symmetric pair $(G,H)$ is such that some Discrete Series is $H$-admissible, then there exists an $H$-admissible Discrete Series whose lowest $K$-type restricted to $L$ is irreducible.
\subsubsection{Analysis of $\mathcal U(\mathfrak h_0)W$, $\mathcal L_\lambda$, existence of $D$, case $\tau_{\vert_L}$ is irreducible}\label{tauirred} As before, our hypothesis is that $(G,H)$ is a symmetric pair and $\pi_\lambda^G$ is $H$-admissible. For this paragraph we add the hypothesis that $\tau_{\vert_L}=res_L(\tau)$ is irreducible. We recall that $\mathcal U(\mathfrak h_0)W=L_{\mathcal U (\mathfrak h_0)} (H^2(G,\tau)[W])$, $\mathcal L_\lambda= \oplus_{\mu \in Spec_H(\pi_\lambda)} H^2(G,\tau)[V_\mu^H][V_{\mu +\rho_n^\mu}^L] $. We claim:
\smallskip
a) if an $H$-irreducible discrete factor of $V_\lambda$ contains a copy of $\tau_{\vert L}$, then $\tau_{\vert L}$ is the lowest $L$-type of such a factor.
b) the multiplicity of $res_L(\tau)$ in $H^2(G,\tau)$ is one.
c) $\mathrm{Cl}(\mathcal U(\mathfrak h_0)W)$ is equivalent to $H^2(H_0,\tau)$.
d) $\mathcal L_\lambda$ is equivalent to $H^2(H_0,\tau)_{L-fin}$.
e) $\mathcal L_\lambda$ is equivalent to $ \mathcal U(\mathfrak h_0)W$. Thus, $D$ exists.
\smallskip
We rely on:
\begin{rmk}\label{ktypessch} 1) Two Discrete Series are equivalent if and only if their respective lowest $L$-types are equivalent \cite{VoT}.
2) For any Discrete Series $\pi_\lambda$, the highest weight (resp. infinitesimal character) of any $K$-type is equal to the highest weight of the lowest $K$-type (resp. the infinitesimal character of the lowest $K$-type) plus a sum of noncompact roots in $\Psi_\lambda$ \cite[Lemma 2.22]{Sc}.
\end{rmk}
From now on, $ic(\phi)$ denotes the infinitesimal character (Harish-Chandra parameter) of the representation $\phi$.
Let $V_\mu^H$ be a discrete factor of $res_H(\pi_\lambda)$ such that $\tau_{\vert_L}$ is an $L$-type.
Then, Theorem~\ref{prop:equaldim} implies $V_{\mu+\rho_n^H}^L$ is an $L$-type for $H^2(H_0,\tau)$. Hence, after we apply {\it Remark}~\ref{ktypessch}, we obtain:
$\mu +\rho_n^H + B_1 =ic(\tau_{\vert_L})$ with $B_1$ a sum of roots in $\Psi_{H,\lambda}\cap \Phi_n$.
$\mu +\rho_n^H=ic(\tau_{\vert_L})+ B_0$ with $B_0$ a sum of roots in $\Psi_{H_0,\lambda}\cap \Phi_n$.
Thus, $B_0 +B_1 = 0$, whence $B_0=B_1=0$ and $\mu +\rho_n^H
=ic(\tau_{\vert_L})$; we have verified a).
Due to the $H$-admissibility hypothesis, we have that $\mathcal U(\mathfrak h)W$ is a finite sum of irreducible underlying modules of Discrete Series for $H$. Now, Corollary 1 to Lemma~\ref{lem:injecrh} yields that a copy of a $V_\mu^H$ contained in $\mathcal U(\mathfrak h)W$ contains a copy of $V_\lambda[W]$. Thus, a) implies $\tau_{\vert_L}$ is the lowest $L$-type of such a $V_\mu^H$. Hence, $H^2(H,\tau)$ is nonzero. Now, Theorem~\ref{prop:equaldim} together with the fact that the lowest $L$-type of a Discrete Series has multiplicity one yields that $\dim Hom_H(H^2(H,\tau), V_\lambda) =1$. Also, we obtain $\dim Hom_{H_0}(H^2(H_0,\tau), V_\lambda) =1$. Thus, whenever $\tau_{\vert L}$ occurs in $res_L(V_\lambda)$, we have that $\tau_{\vert_L}$ is realized in $V_\lambda[W]$. In other words, the isotypic component $V_\lambda[\tau_{\vert_L}]\subset V_\lambda[W]$. Hence, b) holds.
Owing to our hypothesis, we may write $U(\mathfrak h_0)W=N_1+\dots+N_k$, with each $N_j$ being the underlying Harish-Chandra module of an irreducible square integrable representation for $H_0$. Since Lemma~\ref{lem:injecrh} shows $r_0$ is injective on $\mathcal U(\mathfrak h_0)W$, we have that $r_0(\mathrm{Cl}(N_j))$ is a Discrete Series in $L^2(H_0 \times_{res_L(\tau)} W)$, whence Frobenius reciprocity implies $\tau_{\vert_L}$ is an $L$-type of $N_j$. Hence, b) and a) force $\mathcal U(\mathfrak h_0)W$ to be $\mathfrak h_0$-irreducible, and c) follows.
By definition, the subspace $\mathcal L_\lambda$ is the linear span of $V_\lambda[V_\mu^H][V_{\mu+\rho_n^H}^L]$ with $\mu \in Spec_H(\pi_\lambda)$. Since $\dim V_\lambda[V_\mu^H][V_{\mu+\rho_n^H}^L]$ $= \dim Hom_H(V_\mu^H, V_\lambda)$ \\ $=\dim Hom_L(V_{\mu +\rho_n^H}^L, H^2(H_0, \tau))=\dim \, H^2(H_0, \tau)[V_{\mu +\rho_n^H}^L]$, and both $L$-modules are isotypical, d) follows. Finally, e) follows from c) and d).
\medskip
Under the assumption that $\pi_{\Lambda_2}^{K_2}$ is the trivial representation, the formulae in Theorem~\ref{prop:equaldim} become:
\begin{eqnarray*}\lefteqn{\dim Hom_H( V_\mu^H, V_\lambda^G)= \dim Hom_L( V_{\mu+\rho_n^H}^L , V_{(\lambda_1,\rho_{K_2\cap L})+\rho_n^H }^{H_0})} & & \\ & & = \dim Hom_L( V_{\mu+\rho_n^H}^L , H^2(H_0,\tau))= \dim Hom_L( V_{\mu}^L , V_{(\lambda_1,\rho_{K_2\cap L})}^{H_0}),\end{eqnarray*}
\noindent
the infinitesimal character of $H^2(H_0, \tau)$ is $(\lambda_1 +\rho_n^\lambda, \rho_{K_2\cap L})-\rho_n^{H_0}=(\lambda_1, \rho_{K_2\cap
L})+\rho_n^H$.
Thus, $H^2(H_0, \tau)\equiv V_{(\lambda_1, \rho_{K_2\cap L})+\rho_n^{H}}^{H_0} $.
\subsubsection{Analysis of $\mathcal U(\mathfrak h_0)W$, $\mathcal L_\lambda$, existence of $D$, for general $(\tau, W)$}
We recall that by definition, $\mathcal L_\lambda= \oplus_{\mu \in Spec_H(\pi_\lambda)} H^2(G,\tau)[V_\mu^H][V_{\mu +\rho_n^\mu}^L] $, $\mathcal U(\mathfrak h_0)W=L_{\mathcal U(\mathfrak h_0)} (H^2(G,\tau)[W])$.
\begin{prop}\label{prop:gentau} The hypothesis is: $(G,H)$ is a symmetric pair and $\pi_\lambda$ an $H$-admissible square integrable representation of lowest $K$-type $(\tau,W)$. We write\\ \phantom{xxxxx} $res_L(\tau)=q_1 \sigma_1 +\cdots + q_r \sigma_r, $ with $(\sigma_j, Z_j)\in \widehat L, q_j >0$. Then,
a) if an $H$-irreducible discrete factor of $res_H(\pi_\lambda)$ contains a copy of $\sigma_j$, then $\sigma_j$ is the lowest $L$-type of such a factor.
b) the multiplicity of $\sigma_j $ in $res_L(H^2(G,\tau))$ is equal to $q_j$.
c) $r_0: \mathrm{Cl}(\mathcal U(\mathfrak h_0)W) \rightarrow \mathbf H^2(H_0,\tau)$
is an equivalence.
d) $\mathcal L_\lambda$ is $L$-equivalent to $\mathbf H^2(H_0,\tau)_{L-fin}$.
e) $\mathcal L_\lambda$ is $L$-equivalent to $ \mathcal U(\mathfrak h_0)W$. Whence, $D$ exists.
\end{prop}
\begin{proof} Let $V_\mu^H$ be a discrete factor of $res_H(\pi_\lambda)$ such that some irreducible factor of $\tau_{\vert_L}$ is an $L$-type.
Then, Theorem~\ref{prop:equaldim} implies $V_{\mu+\rho_n^H}^L$ is an $L$-type for $\mathbf H^2(H_0,\tau)=\oplus_j q_j H^2(H_0,\sigma_j)$. Say $V_{\mu+\rho_n^H}^L$ is a subrepresentation of $H^2(H_0,\sigma_i)$. We recall that $ic(\phi)$ denotes the infinitesimal character (Harish-Chandra parameter) of the representation $\phi$. Hence, after we apply {\it Remark}~\ref{ktypessch} we obtain:
$\mu +\rho_n^H + B_1 =ic(\sigma_j)$ with $B_1$ a sum of roots in $\Psi_{H,\lambda}\cap \Phi_n$.
$\mu +\rho_n^H=ic(\sigma_i)+ B_0$ with $B_0$ a sum of roots in $\Psi_{H_0,\lambda}\cap \Phi_n$.
Thus, $B_0 +B_1 = ic(\sigma_j)-ic(\sigma_i) $. Now, since $\mathfrak k=\mathfrak k_1 +\mathfrak k_2$, $\mathfrak k_1 \subset \mathfrak l$, $\tau=\pi_{\Lambda_1}^{K_1} \boxtimes \pi_{\Lambda_2}^{K_2}$, we may write $\sigma_s=\pi_{\Lambda_1}^{K_1}\boxtimes \phi_s, $ with $\phi_s \in \widehat {L\cap K_2}$, whence $ic(\sigma_j)-ic(\sigma_i)=ic(\phi_j)-ic(\phi_i)$. Since each $\phi_t$ is an irreducible factor of $res_{L\cap K_2}(\pi_{\Lambda_2}^{K_2})$, we have that $ic(\phi_j)-ic(\phi_i)$ is equal to the difference of two sums of roots in $\Phi (\mathfrak k_2, \mathfrak t \cap \mathfrak k_2)$. The hypothesis forces that the simple roots for $\Psi_\lambda \cap \Phi (\mathfrak k_2, \mathfrak t \cap \mathfrak k_2)$ are compact simple roots for $\Psi_\lambda$ (see \cite{DV}), whence $ic(\sigma_j)-ic(\sigma_i)$ is a linear combination of compact simple roots for $\Psi_\lambda$. On the other hand, $B_0 +B_1$ is a sum of noncompact roots in $\Psi_\lambda$. Now, $B_0 +B_1 $ cannot be a linear combination of compact simple roots unless $B_0=B_1=0$. Whence, $ic(\sigma_i)=ic(\sigma_j)$ and $Z_j \equiv V_{\sigma_j}^L$ is the lowest $L$-type of $V_\mu^H$; we have verified a).
\smallskip
Due to the $H$-admissibility hypothesis, we have that $\mathcal U(\mathfrak h)W$ is a finite sum of irreducible underlying Harish-Chandra modules of Discrete Series for $H$. Thus, a copy of a certain $V_\mu^H$ contained in $\mathcal U(\mathfrak h)W$ contains $W[\sigma_j]$. Whence, $\sigma_j$ is the lowest $L$-type of such a $V_\mu^H$. Whence, $H^2(H,\sigma_j)$ is nonzero and it is equivalent to a subrepresentation of $\mathrm{Cl}(\mathcal U(\mathfrak h)W)$.
We claim, for $i\not= j$, no $\sigma_j$ is a $L$-type of $\mathrm{Cl}(\mathcal U(\mathfrak h)W)[H^2(H,\sigma_i)]$.
Indeed, if $\sigma_j$ were a $L$-type in $\mathrm{Cl}(\mathcal U(\mathfrak h)W)[H^2(H,\sigma_i)]$, then, $\sigma_j$ would be a $L$-type of a Discrete Series of lowest $L$-type equal to $\sigma_i$, according to a) this forces $i=j$, a contradiction.
Now, we compute the multiplicity of $H^2(H,\sigma_j)$ in $H^2(G,\tau)$. For this, we apply Theorem~\ref{prop:equaldim}. Thus, $\dim Hom_H(V_\lambda, H^2(H,\sigma_j))=\sum_i q_i \dim Hom_L(\sigma_j, H^2(H,\sigma_i))=q_j$; indeed, by a), $\sigma_j$ occurs in $H^2(H,\sigma_i)$ only as its lowest $L$-type, that is, only for $i=j$, and then with multiplicity one.
In order to realize the isotypic component corresponding to $H^2(H,\sigma_j)$ we write $V_\lambda[W][\sigma_j]=R_1 +\cdots +R_{q_j}$, an explicit sum of irreducible $L$-modules. Then, owing to a), $L_{\mathcal U(\mathfrak h)}(R_r)$ contains a copy $N_r$ of $H^2(H,\sigma_j)$ and $R_r$ is the lowest $L$-type of $N_r$. Therefore, the multiplicity computation yields $H^2(G,\tau)[H^2(H,\sigma_j)]= N_1 +\cdots +N_{q_j} $. Hence, b) holds.
\noindent
A corollary of this computation is:
\phantom{xxxxxxxxxxxxx} $Hom_H(H^2(H,\sigma_j), (\mathrm{Cl}(\mathcal U(\mathfrak h)W))^\perp ) =\{0\}$.
Verification of c). After we recall Lemma~\ref{lem:injecrh}, we have that $r_0 :\mathrm{Cl}(\mathcal U(\mathfrak h_0)W)\rightarrow L^2(H_0\times_\tau W)$ is injective. We apply statement b), together with the computation used to show b), to the algebra $\mathfrak h:=\mathfrak h_0$; here we choose each of the $q_j$ subspaces $Z_j$ as a lowest $L$-type subspace of $W[Z_j]$. Thus, the image via $r_0$ of $\mathcal U(\mathfrak h_0)Z_j$ is a subspace of $L^2(H_0 \times \sigma_j)$. Since Atiyah-Schmid as well as Enright-Wallach \cite{EW} have shown that $H^2(H_0, \sigma_j)$ has multiplicity one in $L^2(H_0 \times \sigma_j)$, we obtain that the image of $r_0$ is equal to $\mathbf H^2(H_0,\tau)$.
The proofs of d) and e) are word for word as the ones in \ref{tauirred}. \end{proof}
\begin{cor} The multiplicity of $H^2(H,\sigma_j)$ in $res_H(H^2(G,\tau))$ is equal to \\ \phantom{xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx} $q_j=\dim Hom_L(\sigma_j, H^2(G,\tau))$.
\end{cor}
\begin{cor}For each $\sigma_j$, $\mathcal L_\lambda[Z_j]=\mathrm{Cl}(\mathcal U(\mathfrak h_0)W)[Z_j]=H^2(G,\tau)[W][Z_j]="W"[Z_j]$. Thus, we may fix $D=I_{"W"[Z_j]} : \mathcal L_\lambda[Z_j] \rightarrow \mathrm{Cl}(\mathcal U(\mathfrak h_0)W)[Z_j]$.
\end{cor}
\subsection{ Explicit inverse map to $r_0^D$}\label{sub:inverserd}
We consider three cases: $res_L(\tau)$ irreducible, $res_L(\tau)$ multiplicity free, and the general case. Formally, they are quite alike; however, for us it has been illuminating to consider the three cases separately. As a byproduct, we obtain information on the compositions $r^\star r, r_0^\star r_0$; a functional equation that the kernel of a holographic operator must satisfy; and the facts that for some particular discrete factor $H^2(H,\sigma)$ of $res_H(\pi_\lambda)$ the reproducing kernel for $H^2(G,\tau)$ is an extension of the reproducing kernel for $H^2(H,\sigma)$, and that the holographic operator from $H^2(H,\sigma)$ into $H^2(G,\tau)$ is just plain extension of functions.
\subsubsection{ Case $(\tau, W)$ restricted to $L$ is irreducible} In Tables 1,2,3, we show the list of the triples $(G,H,\pi_\lambda)$ such that $(G,H)$ is a symmetric pair and $\pi_\lambda$ is $H$-admissible. In \ref{sub:existencep} we show that if there exists $(G,H,\pi_\lambda)$ so that $\pi_\lambda$ is $H$-admissible, then there exists an $H$-admissible $\pi_{\lambda'}$ such that its lowest $K$-type restricted to $L$ is irreducible and $\lambda'$ is dominant with respect to $\Psi_\lambda$. We denote by $\eta_0$ the Harish-Chandra parameter of $ H^2(H_0,\tau)\equiv \mathrm{Cl}(\mathcal U(\mathfrak h_0)W) $.
We set $c=d(\pi_\lambda)\dim W /d(\pi_{\eta_0}^{H_0})$. Next, we show
\begin{prop} \label{prop:ktfromkto}We assume the setting as well as the hypothesis of Theorem~\ref{prop:Rwelldefgc}, and further that $(\tau, W)$ restricted to $L$ is irreducible. \\
Let $T_0 \in Hom_L(Z,H^2(H_0,\tau))$, then the kernel $K_T$ corresponding to $T:=(r_0^D)^{-1}(T_0) \in Hom_H(H^2(H,\sigma), H^2(G,\tau))$ is $$K_T(h,x)z= (D^{-1}[\int_{H_0} \frac{1}{c} K_\lambda (h_0,\cdot)(T_0(z) (h_0)) dh_0 ])(h^{-1}x).$$
\end{prop}
\begin{proof} We systematically apply Proposition~\ref{prop:gentau}. Under our assumptions, we have: $\mathbf H^2(H_0,\tau)$ is an irreducible representation and $\mathbf H^2(H_0,\tau)= H^2(H_0,\tau) $; \\ $\mathrm{Cl}(\mathcal U(\mathfrak h_0) ( H^2(G,\tau)[W]))$ is $H_0$-irreducible; and\\ $ \tilde r_0:= rest(r_0) :\mathrm{Cl}(\mathcal U(\mathfrak h_0)H^2(G,\tau)[W]) \rightarrow H^2(H_0, \tau)$ is an isomorphism. To follow, we notice that the inverse of $\tilde r_0$ is, up to a constant, equal to $r_0^\star$ restricted to $H^2(H_0, \tau)$. This is so because functional analysis yields the equalities $\mathrm{Cl}(Im(r_0^\star))=ker(r_0)^\perp=\mathrm{Cl}(\mathcal U(\mathfrak h_0)W)$, $Ker(r_0^\star)=Im(r_0)^\perp=H^2(H_0,\tau)^\perp$. Thus, Schur's lemma applied to the irreducible modules $H^2(H_0,\tau), \mathrm{Cl}(\mathcal U(\mathfrak h_0)W)$ implies there exist nonzero constants $b,d$ so that $ (\tilde r_0 r_0^\star)_{\vert_{H^2(H_0,\tau)}}=b I_{H^2(H_0,\tau)}$, $r_0^\star \tilde r_0 =d I_{\mathrm{Cl}(\mathcal U(\mathfrak h_0)W)}$. Whence, the inverse to $\tilde r_0$ follows. In \ref{sub:valuec}, we show $b=d =d(\pi_\lambda)\dim W/d(\pi_{\eta_0}^{H_0})=c$.
For $x\in G, f\in H^2(G,\tau)$, the identity $f(x)=\int_G K_\lambda(y,x)f(y) dy$ holds. Thus,
$r_0(f)(p)=f(p)=\int_G K_\lambda (y,p) f(y) dy, \, \text{for}\, p\in H_0, f \in H^2(G,\tau)$, and, we obtain
\medskip
$K_{r_0}(x,h_0)= K_\lambda (x,h_0), \,\,\, K_{r_0^*}(h_0,x)=K_{r_0}(x,h_0)^\star =K_\lambda (h_0,x)$.
\smallskip
Hence, for $g \in H^2(H_0, \tau)$ we have, \\ \phantom{xxx} $\tilde r_0^{-1}(g)(x)=\frac{1}{c}\int_{H_0} K_{r_0^\star}(h_0,x) g(h_0) dh_0 = \frac{1}{c}\int_{H_0} K_{\lambda}(h_0,x) g(h_0) dh_0$.
Therefore, for $T_0 \in Hom_L (Z, H^2(H_0,\tau))$, the kernel $K_T$ of the element $T $ in $Hom_H(H^2(H,\sigma),H^2(G,\tau))$ such that $r_0^D(T)=T_0$ satisfies, for $z\in Z$, $$ D^{-1}([\tilde r_0^{-1}(T_0(z)(\cdot))])(\cdot)=K_T(e,\cdot)z \in V_\lambda^G[H^2(H,\sigma)][Z]\subset H^2(G,\tau ).$$ More explicitly, after we recall $K_T(e,h^{-1}x)=K_T(h,x)$,
$$K_T(h,x)z= (D^{-1}[\int_{H_0}\frac{1}{c} K_\lambda (h_0,\cdot)(T_0(z)(h_0))dh_0 ])(h^{-1}x).$$ \end{proof}
\begin{cor}For any $T$ in $Hom_H(H^2(H,\sigma),H^2(G,\tau))$ we have
$$K_T(h,x)z= (D^{-1}[\int_{H_0} \frac{1}{c} K_\lambda (h_0,\cdot)(r_0(D(K_T(e,\cdot)z))(h_0))dh_0 ])(h^{-1}x).$$
\end{cor}
\begin{cor}When $D$ is the identity map, we obtain
\begin{eqnarray*}\lefteqn{K_T(h,x)z = \int_{H_0} \frac{1}{c}K_\lambda (h_0,h^{-1}x)(T_0(z)(h_0))dh_0} & & \\ & & \mbox{\phantom{ssssssssssss}} =\int_{H_0} \frac{1}{c}K_\lambda (hh_0,x)K_T(e,h_0)z dh_0.\end{eqnarray*} \end{cor}
The equality in the conclusion of Proposition~\ref{prop:ktfromkto} is equivalent to \\ \phantom{xxxxxx} $D(K_T(e,\cdot))(y)=\int_{H_0} \frac{1}{c} K_\lambda(h_0,y) D(K_T(e,\cdot))(h_0)dh_0, \ y\in G$. \\
Whence, we have derived a formula that lets us recover the kernel $K_T$ (resp. $D(K_T(e,\cdot))$) from the restriction of $K_T(e,\cdot)$ (resp. $D(K_T(e,\cdot))$) to $H_0$!
\begin{rmk}
We notice, \begin{equation} \label{eq:q} r_0^\star r_0(f)(y) = \int_{H_0} K_\lambda(h_0,y) f(h_0)dh_0 , \,\, f\in H^2(G,\tau),\,y\in G. \end{equation}
Since we are assuming $\tau_{\vert_L}$ is irreducible, we have that $\mathrm{Cl}(\mathcal U(\mathfrak h_0)W)$ is irreducible; hence, Lemma~\ref{lem:injecrh} lets us obtain that a scalar multiple of $r_0^\star r_0$ is the orthogonal projector onto the irreducible factor $\mathrm{Cl}(\mathcal U(\mathfrak h_0)W)$.
\noindent
Whence, the orthogonal projector onto $\mathrm{Cl}(\mathcal U(\mathfrak h_0)W)$
is given by $ \frac{d(\pi_{\eta_0}^{H_0})}{d(\pi_\lambda) \dim W}\,\, r_0^\star r_0$.
Thus, the kernel $K_{\lambda,\eta_0}$ of the orthogonal projector onto $\mathrm{Cl}(\mathcal U(\mathfrak h_0)W)$ is
\phantom{xxxxxx} $ K_{\lambda, \eta_0}(x,y):= \frac{d(\pi_{\eta_0}^{H_0})}{d(\pi_\lambda) \dim W} \int_{H_0}K_{\lambda}(p,y)K_{\lambda}(x,p) dp$.
Replacing $H_0$ by $H$, we obtain a similar result for the kernel of the orthogonal projector onto $\mathrm{Cl}(\mathcal U(\mathfrak h)W)$.
\end{rmk}
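As a consistency check, the projector above is formally idempotent: using the equality $(r_0 r_0^\star)_{\vert_{H^2(H_0,\tau)}}=c\,I$ shown in \ref{sub:valuec}, together with $\mathrm{Cl}(Im(r_0))=H^2(H_0,\tau)$, we compute
$$\Big(\frac{d(\pi_{\eta_0}^{H_0})}{d(\pi_\lambda) \dim W}\, r_0^\star r_0\Big)^2=\frac{1}{c^2}\, r_0^\star (r_0 r_0^\star)\, r_0=\frac{1}{c}\, r_0^\star r_0 .$$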
\smallskip
The equality $(r_0r_0^\star)_{\vert_{H^2(H_0,\tau)}} =cI_{H^2(H_0,\tau)}$ yields the first claim in:
\begin{prop}Assume $res_L(\tau) $ is irreducible. Then, \\
a) for every $g\in H^2(H_0, \tau_{\vert_L})$ (resp. $g\in H^2(H,\tau_{\vert_L})$), the function $r_0^\star(g)$ (resp. $r^\star(g)$) is an extension of a scalar multiple of $g$.\\ b) The kernel $K_\lambda^G$ is an extension of a scalar multiple of $K_{\tau_{\vert_L}}^H$.
\end{prop}
When we restrict holomorphic Discrete Series, this fact naturally happens; see \cite{Na}, \cite[Example 10.1]{OV2} and references therein.
\begin{proof} Let $r:H^2(G,\tau)\rightarrow L^2(H\times_\tau W)$ denote the restriction map. The duality between $H$ and $H_0$, and Theorem~\ref{prop:Rwelldefgc} applied to $H:=H_0$, imply $H^2(H,\tau)=r(\mathrm{Cl}(\mathcal U(\mathfrak h)W))$, as well as that there exists, up to a constant, a unique $T\in Hom_H (H^2(H,\tau),H^2(G,\tau))\equiv Hom_L(W,H^2(H,\tau))\equiv \mathbb C$. It follows from the proof of Proposition~\ref{prop:ktfromkto} that, up to a constant, $T=r^*$ restricted to $H^2(H,\tau)$. After we apply the equality $ T(K_\mu^H(\cdot, e)^\star z)(x)= K_T(e,x)z$ (see \cite{OV1}), we obtain \\ \phantom{xxxxxxxxx} $r^\star(K_\mu^H(\cdot,e)^*z)(y)=K_\lambda(y,e)^\star z$.\\ Also, Schur's lemma implies $rr^* $ restricted to $H^2(H,\tau)$ is a constant times the identity map. Thus, for $h\in H$, we have $rr^* (K_\mu^H(\cdot,e)^\star w)(h) =qK_\mu^H(h,e)^\star w$. For the value of $q$ see \ref{sub:valuec}. Putting these together, we obtain\\ \phantom{xxcccxxxx} $K_\lambda(h,e)^\star z=r(K_\lambda(\cdot,e)^\star z)(h)=qK_\mu^H(h,e)^\star z$.
Whence, for $h,h_1 \in H$ we have
$K_\lambda(h_1,h)^\star z=K_\lambda(h^{-1}h_1,e)^\star z=qK_\mu^H(h^{-1}h_1,e)^\star z=qK_\mu^H( h_1,h)^\star z$ as we have claimed. \end{proof}
By the same token, after we set $H:=H_0$ we obtain:
\smallskip
{\it For $res_L(\tau) $ irreducible, $(\sigma,Z)=(res_L(\tau),W)$, and $V_{\eta_0}^{H_0}= H^2(H_0,\sigma)$, the kernel $K_\lambda$ extends a scalar multiple of $K_{\eta_0}^{H_0}$}. Actually, $r_0(K_\lambda(\cdot,e)^\star w )=cK_{\eta_0}^{H_0}(\cdot,e)^\star w$.
\medskip
\begin{rmk} We would like to point out that the equality \\ \phantom{xxxxxxxxxxxxx} $r^\star(K_\mu^H(\cdot,e)^*(z))(y)=qK_\lambda(y,e)^\star z$\\ implies $res_H(\pi_\lambda)$ is $H$-algebraically discretely decomposable. Indeed, we apply a theorem shown by Kobayashi \cite[Lemma 1.5]{Kob}; the theorem says that when $(V_\lambda^G)_{K-fin}$ contains an irreducible $(\mathfrak h,L)$-submodule, then $V_\lambda$ is discretely decomposable. We know $K_\lambda(y,e)^\star z$ is a $K$-finite vector, and the equality implies $K_\lambda(y,e)^\star z$ is $\mathfrak z(\mathcal U(\mathfrak h))$-finite. Hence, owing to Harish-Chandra \cite[Corollary 3.4.7 and Theorem 4.2.1]{Wa1}, $H^2(G,\tau)_{K-fin}$ contains a nontrivial irreducible $(\mathfrak h,L)$-module and the fact shown by Kobayashi applies.
\end{rmk}
\subsubsection{Value of $b=d=c$ when $res_L(\tau)$ is irreducible}\label{sub:valuec} We show $b=d=d(\pi_\lambda)\dim W/d(\pi_{\eta_0}^{H_0})=c $. In fact, the constants $b,d$ satisfy $(r_0^\star r_0)_{\vert_{\mathcal U(\mathfrak h_0)W}} =d I_{\mathcal U(\mathfrak h_0)W}$, $ (r_0 r_0^\star)_{\vert_{H^2(H_0,\tau)}}=b I_{H^2(H_0,\tau)}$. Now, it readily follows that $b=d$. We evaluate $r_0^\star r_0$ at $K_\lambda (\cdot,e)^\star w $: for $h_1 \in H_0$, \\ $bK_\lambda (h_1,e)^\star w =r_0^* r_0(K_\lambda (\cdot,e)^\star w) (h_1) $
$= \int_{H_0} K_\lambda (h_0,h_1) K_\lambda (h_0,e)^\star w\, dh_0$ \\ \phantom{nnnnnnnnnnnnnnnnnnnnnnnnnnn} $=d(\pi_\lambda)^2 \int_{H_0} \Phi(h_1^{-1}h_0 ) \Phi(h_0)^\star w\, dh_0$. \\ Here, $\Phi$ is the spherical function attached to the lowest $K$-type of $\pi_\lambda$. Since we are assuming $res_L(\tau)$ is an irreducible representation, we have that $\mathcal U(\mathfrak h_0)W$ is an irreducible $(\mathfrak h_0, L)$-module and it is equivalent to the underlying Harish-Chandra module for $H^2(H_0,res_L(\tau))$. Thus, the restriction of $\Phi$ to $H_0$ is the spherical function attached to the lowest $L$-type of the irreducible square integrable representation $\mathrm{Cl}(\mathcal U(\mathfrak h_0)W)\equiv H^2(H_0,res_L(\tau)) $. We fix an orthonormal basis $\{w_i\}$ for $\mathcal U(\mathfrak h_0)W[W]$. We recall,
$\Phi(x)w=P_W \pi(x)P_W w=\sum_{1\leq i \leq \dim W} (\pi(x)w,w_i)_{L^2} w_i$,
$\Phi(x^{-1})=\Phi(x)^\star$.
For $h_1 \in H_0$ we compute; to justify the steps we appeal to the invariance of Haar measure and to the orthogonality relations for matrix coefficients of irreducible square integrable representations, and we recall that $d(\pi_{\eta_0}^{H_0})$ denotes the formal degree of $H^2(H_0,res_L(\tau)) $.
\begin{equation*}
\begin{split}
\int_{H_0} \Phi (h_1^{-1}h)\Phi(h)^\star w\, dh & = \sum_{i,j}\int_{H_0} (\pi(h_1^{-1}h)w_j ,w_i)_{L^2}\, (\pi(h^{-1})w,w_j)_{L^2}\, w_i \\ & = \sum_{i,j}\int_{H_0} (\pi(h)w_j ,h_1 w_i)_{L^2}\, \overline{(\pi(h)w_j,w)_{L^2}}\, w_i \\ & =\frac{1}{d(\pi_{\eta_0}^{H_0})} \sum_{i,j} (w_j,w_j)_{L^2}\, \overline { (h_1 w_i,w)_{L^2}}\, w_i \\ & =\frac{\dim W}{d(\pi_{\eta_0}^{H_0})}\sum_i (h_1^{-1} w,w_i)_{L^2}\, w_i \\ & = \frac{\dim W}{d(\pi_{\eta_0}^{H_0})}\, \Phi(h_1)^\star w . \end{split} \end{equation*}
Thus, \begin{equation*}
\begin{split} r_0^\star r_0 (K_\lambda (\cdot, e)^\star w) (h_1) & = d(\pi_\lambda)^2 \int_{H_0} \Phi (h_1^{-1}h)\Phi(h)^\star w\, dh \\ & = \frac{d(\pi_\lambda) \dim W}{d(\pi_{\eta_0}^{H_0})}\, K_\lambda (h_1,e)^\star w. \end{split} \end{equation*}
The functions $ K_\lambda (\cdot ,e)^\star w$ and $r_0^\star r_0 (K_\lambda (\cdot, e)^\star w)$ belong to $\mathrm{Cl}(\mathcal U(\mathfrak h_0)W)$; the injectivity of $r_0$ on $\mathrm{Cl}(\mathcal U(\mathfrak h_0)W)$ forces, for every $x \in G$,
\phantom{xxxxxxx} $r_0^\star r_0 (K_\lambda (\cdot, e)^\star w) (x)= \frac{d(\pi_\lambda) \dim W}{d(\pi_{\eta_0}^{H_0})}\, K_\lambda (x,e)^\star w$.
Hence, we have computed $b=d=c$.
\subsubsection{Analysis of $r_0^D$ for arbitrary $(\tau, W), (\sigma, Z)$ }
We recall the decomposition $W=\sum_{\nu_2 \in Spec_{L\cap K_2}(\pi_{\Lambda_2}^{K_2}) } W[ \pi_{\Lambda_1}^{ K_1}\boxtimes \pi_{\nu_2}^{L\cap K_2}]$.
A consequence of Proposition~\ref{prop:gentau} is that $r_0^\star$ maps $\mathbf H^2(H_0, W[\pi_{\Lambda_1}^{ K_1}\boxtimes \pi_{\nu_2}^{L\cap K_2}])$ into $\mathrm{Cl}(\mathcal U(\mathfrak h_0)W[\pi_{\Lambda_1}^{ K_1}\boxtimes \pi_{\nu_2}^{L\cap K_2}])$. In consequence, $r_0 r_0^\star$ restricted to $\mathbf H^2(H_0, W[\pi_{\Lambda_1}^{ K_1}\boxtimes \pi_{\nu_2}^{L\cap K_2}])$ is a bijective $H_0$-endomorphism $C_j$. Hence, the inverse map of $r_0$ restricted to $\mathrm{Cl}(\mathcal U(\mathfrak h_0)W[\pi_{\Lambda_1}^{ K_1}\boxtimes \pi_{\nu_2}^{L\cap K_2}]) $ is $r_0^\star C_j^{-1}$. Since $H^2(H_0, \pi_{\Lambda_1}^{ K_1}\boxtimes \pi_{\nu_2}^{L\cap K_2})$ has a unique lowest $L$-type, we conclude that $C_j$ is determined by an element of $Hom_L( \pi_{\Lambda_1}^{ K_1}\boxtimes \pi_{\nu_2}^{L\cap K_2},H^2(H_0,\pi_{\Lambda_1}^{ K_1}\boxtimes \pi_{\nu_2}^{L\cap K_2})[\pi_{\Lambda_1}^{ K_1}\boxtimes \pi_{\nu_2}^{L\cap K_2}])$. Since for $D \in \mathcal U(\mathfrak h_0), w \in W$ we have \ $C_j(L_D w)=L_D C_j(w)$, we obtain that $C_j$ is a zero order differential operator on the underlying Harish-Chandra module of $H^2(H_0,\pi_{\Lambda_1}^{ K_1}\boxtimes \pi_{\nu_2}^{L\cap K_2})$. Summing up, the inverse to $r_0 : \mathrm{Cl}(\mathcal U(\mathfrak h_0)W) \rightarrow \mathbf H^2(H_0, \tau)$ is the map $ r_0^\star (\oplus_j C_j^{-1})$. \\ For $T \in Hom_H(H^2(H,\sigma), H^2(G,\tau))$ and $T_0 \in Hom_L(Z, \mathbf H^2(H_0,\tau))$ so that $r_0^D(T)=T_0$ we obtain the equalities\\ \phantom{xxxxxx}$K_T(e,x)z= (D^{-1}[\int_{H_0} K_\lambda (h_0,\cdot)((\oplus_j C_j^{-1})T_0(z))(h_0)dh_0 ])(x).$
\begin{equation*}\begin{split} K_T(h,x)z & \\ & = (D^{-1}[\int_{H_0} K_\lambda (h_0,\cdot) \\ & \mbox{\phantom{xxxxxxxxx}} \times ((\oplus_j C_j^{-1})(r_0 (D (K_T(e,\cdot)z))(\cdot))(h_0)dh_0 ])(h^{-1}x). \end{split}
\end{equation*}
When $D$ is the identity the formula simplifies as the one in the second Corollary to Proposition~\ref{prop:ktfromkto}.
\subsubsection{Eigenvalues of $r_0^\star r_0$}\label{sub:eigenvalq} For the general case, we recall that $r_0^\star r_0$ intertwines the action of $H_0$. Moreover, Proposition~\ref{prop:gentau} and its Corollary give that for each $L$-isotypic component $Z_1 \subseteq W$ of $res_L(\tau)$, we have $\mathcal U(\mathfrak h_0)W[Z_1]=Z_1$. Thus, each isotypic component of $res_L((\mathcal U(\mathfrak h_0)W)[W])$ is invariant under $r_0^* r_0$; in consequence, $r_0^\star r_0$ leaves invariant the subspace
$"W"=H^2(G,\tau)[W]=\{K_\lambda(\cdot,e)^\star w, w \in W\}$. Since $Ker(r_0)=(\mathrm{Cl}(\mathcal U(\mathfrak h_0)W))^\perp$, we have that $r_0^\star r_0$ is determined by the values it takes on $"W"$.
Now, we assume $res_L(\tau)$ is a multiplicity free representation; we write $Z_1^\perp=Z_2\oplus \dots \oplus Z_q$, where the $Z_j$ are $L$-invariant and $L$-irreducible. Thus, Proposition~\ref{prop:gentau} implies $\mathrm{Cl}(\mathcal U(\mathfrak h_0)W)= \mathrm{Cl}(\mathcal U(\mathfrak h_0)Z_1) \oplus \cdots \oplus \mathrm{Cl}(\mathcal U(\mathfrak h_0)Z_q)$. This is an orthogonal decomposition, each summand is irreducible, and no irreducible factor is equivalent to another. For $1\leq i\leq q$, let $\eta_i$ denote the Harish-Chandra parameter of $ \mathrm{Cl}(\mathcal U(\mathfrak h_0)Z_i)$.
\begin{prop} \label{prop:eigenvalq} When $res_L(\tau)$ is a multiplicity free representation, the linear operator $r_0^\star r_0$ on $\mathrm{Cl}(\mathcal U(\mathfrak h_0)Z_i )$ is equal to $\frac{d(\pi_\lambda)\dim Z_i}{d(\pi_{\eta_i}^{H_0}) }$ times the identity map.
\end{prop}
\begin{proof} For the subspace $\mathrm{Cl}(\mathcal U(\mathfrak h_0)W)[W]$, we choose an $L^2(G)$-or\-tho\-nor\-mal basis $\{w_j\}_{1\leq j \leq \dim W}$ equal to the union of respective $L^2(G)$-or\-tho\-nor\-mal bases for the subspaces $\mathrm{Cl}(\mathcal U(\mathfrak h_0)Z_i)[Z_i]$. Next, we compute, freely making use of the notation in \ref{sub:valuec}. Owing to our multiplicity free hypothesis, we have that $r_0^\star r_0$ restricted to $ \mathrm{Cl}(\mathcal U(\mathfrak h_0)Z_i)$ is equal to a constant $d_i$ times the identity map. Hence, evaluating at $h_1=e$ as in \ref{sub:valuec}, for $ w\in \mathrm{Cl}(\mathcal U(\mathfrak h_0)Z_i)[Z_i]$ we have $d_i w=d(\pi_\lambda) \int_{H_0} \Phi(h_0) \Phi(h_0)^\star w\, dh_0$.
Now, $\Phi(h_0)=(a_{i j})=((\pi_\lambda(h_0)w_j, w_i)_{L^2(G)})$, whence the $p q$-coefficient of the product $\Phi(h_0)\Phi(h_0)^\star $ is equal to
$\sum_{1\leq j \leq \dim W} (\pi_\lambda(h_0)w_j, w_p)_{L^2(G)}\overline{(\pi_\lambda(h_0)w_j, w_q)}_{L^2(G)}.$
Let $I_i$ denote the set of indices $j$ so that $w_j \in Z_i$. Thus, $\{1,\dots,\dim W\}$ is equal to the disjoint union $\cup_{1\leq i \leq q } I_i$. A consequence of Proposition~\ref{prop:gentau} is the $L^2(G)$-orthogonality of the subspaces $\mathrm{Cl}(\mathcal U(\mathfrak h_0)Z_j)$; hence, for $t \in I_a, q \in I_d$ and $a\not= d$ we have $(\pi_\lambda(h_0)w_q, w_t)_{L^2(G)} =0$. Therefore, the previous observation and the disjointness of the sets $I_r$ yield that for $i\not= d, p \in I_i, q \in I_d $ each summand in
$\sum_{1\leq j \leq \dim W} \int_{H_0} (\pi_\lambda(h_0)w_j, w_p)_{L^2(G)}\overline{(\pi_\lambda(h_0)w_j, w_q)}_{L^2(G)}\, dh_0 $
is equal to zero.
For $p,q \in I_i$, we apply the previous computation and the orthogonality relations for the irreducible representation $\mathrm{Cl}(\mathcal U(\mathfrak h_0)Z_i)$. We obtain
$\sum_{1\leq j \leq \dim W} \int_{H_0} (\pi_\lambda(h_0)w_j, w_p)_{L^2(G)}\overline{(\pi_\lambda(h_0)w_j, w_q)}_{L^2(G)}\, dh_0$
$ = \sum_{j \in I_i} \int_{H_0} (\pi_\lambda(h_0)w_j, w_p)_{L^2(G)}\overline{(\pi_\lambda(h_0)w_j, w_q)}_{L^2(G)}\, dh_0$
$ =\sum_{j\in I_i} \frac{ 1}{d(\pi_{\eta_i}^{H_0})} (w_j,w_j)_{L^2(G)} (w_q,w_p)_{L^2(G)}= \frac{ \dim Z_i }{d(\pi_{\eta_i}^{H_0})}\, \delta_{p q} $.
Thus, we have shown Proposition~\ref{prop:eigenvalq}. \end{proof}
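We remark that when $res_L(\tau)$ is irreducible we have $q=1$, $Z_1=W$ and $\eta_1=\eta_0$, so Proposition~\ref{prop:eigenvalq} recovers the value $c=d(\pi_\lambda)\dim W/d(\pi_{\eta_0}^{H_0})$ computed in \ref{sub:valuec}.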
\begin{rmk} Even when $res_L(\tau)$ is not multiplicity free, the conclusion of Proposition~\ref{prop:eigenvalq} holds. In fact, let us denote the $L$-isotypic components of $res_L(\tau)$ again by $Z_i$. The proof goes as the one for Proposition~\ref{prop:eigenvalq} until we need to compute
$ \sum_{j \in I_i} \int_{H_0} (\pi_\lambda(h_0)w_j, w_p)_{L^2(G)}\overline{(\pi_\lambda(h_0)w_j, w_q)}_{L^2(G)}\, dh_0.$
For this, we decompose $"Z_i"=\sum_s Z_{i, s}$ as an $L^2(G)$-orthogonal sum of irreducible $L$-modules and we choose the orthonormal basis for $"Z_i"$ as a union of orthonormal bases for the $Z_{i,s}$. Then, we have the $L^2(G)$-orthogonal decomposition $\mathrm{Cl}(\mathcal U(\mathfrak h_0)Z_i)=\sum_s \mathrm{Cl}(\mathcal U(\mathfrak h_0)Z_{i,s})$. Then, the proof follows as in the case where $res_L(\tau)$ is multiplicity free.
\end{rmk}
\section{Examples} We present three types of examples. The first is: {\it Multiplicity free representations.} A simple consequence of the duality theorem is that one readily obtains examples of a symmetric pair $(G,H)$ and a square integrable representation $\pi_\lambda^G$ so that $res_H(\pi_\lambda)$ is $H$-admissible and the multiplicity of each irreducible factor is one. This is equivalent to determining when the representation $res_L(\mathbf H^2(H_0,\tau))$ is multiplicity free.
The second is: {\it Explicit examples}. Here, we compute the Harish-Chandra parameters of the irreducible factors of some $res_H(H^2(G,\tau))$. The third is: {\it Existence of representations whose lowest $K$-type restricted to $L$ is an irreducible representation.}
In order to present the examples we need information on certain families of representations.
\subsection{Multiplicity free representations} In this paragraph we generalize work of T. Kobayashi and his coworkers in the setting of Hermitian symmetric spaces and holomorphic Discrete Series.
\smallskip
Before we present the examples, we would like to comment.
\smallskip
a) Assume a Discrete Series $\pi_\lambda$ has admissible restriction to a subgroup $H$. Then, any Discrete Series $\pi_{\lambda^\prime}$ for $\lambda^\prime$ dominant with respect to $\Psi_\lambda$ is $H$-admissible \cite{Kob1}.
\smallskip
b) If $res_H(\pi_\lambda)$ is $H$-admissible and a multiplicity free representation, then the restriction to $L$ of the lowest $K$-type of $\pi_\lambda$ is multiplicity free. This follows from the duality theorem.
\smallskip
c) In the next paragraphs we will list families $\mathcal F$ of Harish-Chandra parameters of Discrete Series for $G$ so that each representation in the family has multiplicity free restriction to $H$. We find that it may happen that $\mathcal F$ is the whole set of Harish-Chandra parameters on a Weyl chamber or that $\mathcal F$ is a proper subset of a Weyl chamber. Information on $\mathcal F$ for holomorphic representations is in \cite{KO}, \cite{KO2}.
\smallskip
d) Every irreducible $(\mathfrak g, K)$-module for either $\mathfrak g\equiv \mathfrak{su}(n,1)$ or $\mathfrak g\equiv \mathfrak{so}(n,1)$, restricted to $K$, is a multiplicity free representation.
\subsubsection{Holomorphic representations} For $G$ so that $G/K$ is a Hermitian symmetric space, it has been shown by Harish-Chandra that $G$ admits Discrete Series representations with one dimensional lowest $K$-type. For this paragraph we further assume that the smooth imbedding $H/L \rightarrow G/K$ is holomorphic, equivalently that the center of $K$ is contained in $L$, and that $\pi_\lambda$ is a holomorphic representation. Under this hypothesis, it was shown by Kobayashi \cite{Kob3} that a holomorphic Discrete Series for $G$ has a multiplicity free restriction to the subgroup $H$ whenever it is a scalar holomorphic Discrete Series. Moreover, \cite[Theorem 8.8]{Kob3} computes the Harish-Chandra parameter of each irreducible factor. Also, from the work of Kobayashi and Nakahama we find a description of the restriction to $H$ of an arbitrary holomorphic Discrete Series representation. As a consequence, we find restrictions which are not multiplicity free.
In \cite{KO2} we find a complete list of the pairs $(\mathfrak g, \mathfrak h)$ so that $H/L \rightarrow G/K$ is a holomorphic embedding. From the list in \cite{Kob3}, the list below can be constructed.
\smallskip
Also, Theorem~\ref{prop:Rwelldefgc} lets us verify that the following pairs $(\mathfrak g,\mathfrak h)$ are such that $res_H(\pi_\lambda)$ is multiplicity free for any holomorphic $\pi_\lambda$. For each pair, we list the corresponding $\mathfrak h_0$.
$(\mathfrak{su}(m,n), \mathfrak{s}(\mathfrak{u}(m-1,n)+ \mathfrak u(1))) $, $\mathfrak h_0= \mathfrak{su}(1,n)+\mathfrak{su}(m-1)+ \mathfrak u(1) $.
$(\mathfrak{su}(m,n), \mathfrak{s}( \mathfrak{u}(m,n-1)+ \mathfrak u(1))) $, $\mathfrak h_0= \mathfrak{su}(n-1)+\mathfrak{su}(m,1)+ \mathfrak u(1) $.
$(\mathfrak{so}(2m,2),\mathfrak{u}(m,1))$, $\mathfrak h_0= \mathfrak{u}(m,1)$.
$(\mathfrak{so}^\star (2n),\mathfrak{so}^\star(2)+ \mathfrak{so}^\star(2n-2))$, $\mathfrak h_0 = \mathfrak{u}(1,n-1)$.
$(\mathfrak{sp}(n,\mathbb R ), \mathfrak{sp}( n-1,\mathbb R )+ \mathfrak{sp}( 1,\mathbb R )), \mathfrak h_0 =\mathfrak u(1,n-1)$.
$(\mathfrak{e}_{6(-14)}, \mathfrak{so}^\star(10)+\mathfrak{so}(2)), $ $\mathfrak h_0=\mathfrak{su}(5,1)+\mathfrak{sl}_2(\mathbb R )$ (Prasad).
The list is correct owing to the fact that any Discrete Series for $SU(n,1)$ restricted to $K$ is a multiplicity free representation.
\subsubsection{Quaternionic real forms, quaternionic representations} In \cite{GW}, the authors considered and classified quaternionic real forms, and they made a careful study of quaternionic representations. We now bring out the facts essential for us. From \cite{GW} we read that the list of Lie algebras of quaternionic groups is: $\mathfrak{su}(2,n)$, $\mathfrak{so}(4,n)$, $\mathfrak{sp}(1,n)$, $\mathfrak e_{6(2)}$, $\mathfrak e_{7(-5)}$, $\mathfrak e_{8(-24)}$, $\mathfrak f_{4(4)}$, $\mathfrak g_{2(2)}$. For each quaternionic real form $G$, there exists a system of positive roots $\Psi \subset \Phi (\mathfrak g,\mathfrak t)$ so that the maximal root $\alpha_{max}$ in $\Psi$ is compact, $\alpha_{max}$ is orthogonal to all compact simple roots, and $\alpha_{max}$ is not orthogonal to the noncompact simple roots. Hence, $\mathfrak k_1(\Psi)\equiv \mathfrak{su}_2(\alpha_{max}) $. The system $\Psi$ is not unique. We call such a system of positive roots {\it a quaternionic system}.
Let us recall that a {\it quaternionic representation} is a Discrete Series for a quaternionic real form $G$ such that its Harish-Chandra parameter is dominant with respect to a quaternionic system of positive roots and such that its lowest $K$-type is equivalent to an irreducible representation of $ K_1(\Psi)$ times the trivial representation of $K_2$.
A fact shown in \cite{GW} is: given a quaternionic system of positive roots, for all but finitely many representations $(\tau, W)$ equivalent to the tensor product of a nontrivial representation of $K_1(\Psi)$ times the trivial representation of $K_2$, it holds that $\tau$ is the lowest $K$-type of a unique quaternionic irreducible square integrable representation $H^2(G,\tau)$. We define a {\it generalized quaternionic representation} to be a Discrete Series representation $\pi_\lambda$ such that its Harish-Chandra parameter is dominant with respect to a quaternionic system of positive roots.
From Tables 1,2 we readily read off the pairs $(\mathfrak g,\mathfrak h)$ so that $\mathfrak g$ is a quaternionic Lie algebra, and hence we have a list of generalized quaternionic representations of $G$ with admissible restriction to $H$.
Let $(G,H)$ denote a symmetric pair so that a quaternionic representation $(\pi_\lambda, H^2(G,\tau))$ is $H$-admissible. Then, from \cite{Vaint} \cite{DV} \cite{DGV} we have: $ \mathfrak k_1(\Psi_\lambda) \equiv \mathfrak{su}_2(\alpha_{max}) \subset \mathfrak l $ and $\pi_\lambda$ is $L$-admissible. In consequence \cite{Kob}, $\pi_\lambda$ is $H_0$-admissible. By definition, for a quaternionic representation $\pi_\lambda$, we have that $\tau_{\vert_L}$ is irreducible; hence, $\mathbf{H}^2(H_0,\tau)$ is irreducible. Moreover, after checking in \cite{Vaint} or Tables 1,2 the list of systems $\Psi_{H_0,\lambda}$, it follows that $H^2(H_0, \tau)$ is again a quaternionic representation. Finally, in order to present a list of quaternionic representations with multiplicity free restriction to $H$, we recall that it follows from the duality theorem that $res_H(H^2(G,\tau))$ is multiplicity free if and only if $res_L(H^2(H_0,\tau))$ is a multiplicity free representation, and that in \cite[Page 88]{GW} it is shown that a quaternionic representation for $H_0$ is $L$-multiplicity free if and only if $\mathfrak h_0 =\mathfrak{sp}(n,1), n\geq 1 $.
\smallskip
To follow, we list pairs $(\mathfrak g,\mathfrak h)$ where multiplicity free restriction holds for all quaternionic representations.
$(\mathfrak{su}(2,2n), \mathfrak{sp}(1,n))$, $\mathfrak h_0=\mathfrak{sp}(1,n)$, $n\geq 1$.
$(\mathfrak{so}(4,n),\mathfrak{so}(4,n-1))$, $\mathfrak h_0 =\mathfrak{so}(4,1) +\mathfrak{so}(n-1)$ ($n$ even or odd).
$(\mathfrak{sp}(1, n),\mathfrak{sp}(1,k)+\mathfrak{sp}(n-k) ) $, $\mathfrak h_0=\mathfrak{sp}(1,n-k)+\mathfrak{sp}(k)$.
$(\mathfrak{f}_{4(4)}, \mathfrak{so}(5,4))$, $\mathfrak h_0 =\mathfrak{sp}(1,2)\oplus \mathfrak{su}(2)$.
$(\mathfrak e_{6(2)}, \mathfrak f_{4(4)})$, $\mathfrak h_0= \mathfrak{ sp}(3,1)$.
\smallskip
A special pair is:
$(\mathfrak{su}(2,2), \mathfrak{sp}(1,1)), $ $\mathfrak h_0=\mathfrak{sp}(1,1)$.
Here, multiplicity freeness holds for any $\pi_\lambda$ so that $\lambda$ is dominant with respect to a system of positive roots that defines a quaternionic structure on $G/K$. For details see \cite[Table 2]{Vaint} or {\it Explicit example II}.
\medskip
\subsubsection{More examples of multiplicity free restriction}
Next, we list pairs $(\mathfrak g,\mathfrak h)$ and systems of positive roots $\Psi \subset \Phi(\mathfrak g,\mathfrak t)$ so that $\pi_{\lambda^\prime}$ is $H$-admissible and multiplicity free for every element $ \lambda^\prime$ dominant with respect to $\Psi$. We follow either Tables 1,2,3 or \cite{KO2}. For each $(\mathfrak g, \mathfrak h)$ we list the corresponding $ \mathfrak h_0 $.
\medskip
$(\mathfrak{su}(m,n),\mathfrak{su}(m,n-1)+\mathfrak u(1))$, $\Psi_a, \tilde\Psi_b $, \\ \phantom{xxxxxxxxxxxxxxxxxxxx} $\mathfrak h_0 =\mathfrak{su}(m,1) +\mathfrak{su}(n-1)+\mathfrak u(1)$.
$(\mathfrak{so}(2m,2n+1),\mathfrak{so}(2m,2n))$, $\Psi_{\pm m}$, $\mathfrak h_0= \mathfrak{so}(2m,1)+\mathfrak{so}(2n)$.
$(\mathfrak{so}(2m,2),\mathfrak{so}(2m,1))$, $\Psi_{\pm m}$, $\mathfrak h_0= \mathfrak{so}(2m,1)$.
$(\mathfrak{so}(2m,2n),\mathfrak{so}(2m,2n-1)), n>1$, $\Psi_\pm$, $\mathfrak h_0 =\mathfrak{so}(2m,1) +\mathfrak{so}(2n-1)$.
\subsection{Explicit examples}
\subsubsection{Quaternionic representations for $Sp(1,b)$}\label{sub:Ltypesspd1} For further use we present an intrinsic description of the $Sp(1)\times Sp(b)$-types of a quaternionic representation for $Sp(1,b)$; a proof of the statements is in \cite{GW0}. The quaternionic representations for $Sp(1,b)$ are the representations of lowest $Sp(1)\times Sp(b)$-type $S^n(\mathbb C^{2})\boxtimes \mathbb C, n\geq 1$. We label the simple roots for the quaternionic system of positive roots $\Psi$ as in \cite{GW}, $\beta_1, \dots, \beta_{b+1}$; the long root is
$\beta_{b+1}$, $\beta_1$ is adjacent to just one simple root, and the maximal root $\beta_{max}$ is adjacent to $-\beta_1$. Let $\Lambda_1, \dots, \Lambda_{b+1}$ denote the associated fundamental weights. Thus, $\Lambda_1=\frac{\beta_{max}}{2}$.
Let $\tilde{\Lambda}_1, \dots, \tilde{\Lambda}_b$ denote the fundamental weights for $"\Psi \cap \Phi(\mathfrak{sp}(b))"$.
The irreducible $L=Sp(1)\times Sp(b)$-factors of
\phantom{xxx} $ H^2(Sp(1,b), \pi_{n\frac{\beta_{max}}{2}}^{Sp(1)}\boxtimes \pi_{\rho_{Sp(b)}}^{Sp(b)})= H^2(Sp(1,b), S^{n-1}(\mathbb C^2) \boxtimes \mathbb C)$
are
\hskip 0.7cm $\{S^{n-1+m}(\mathbb C^2)\boxtimes S^m(\mathbb C^{2b} ) \\ \phantom{xxxxxxxxxxxxxxxxxxxxxxxxxx} \equiv \pi_{(n+m)\frac{\beta_{max}}{2}}^{Sp(1)}\boxtimes \pi_{m\tilde{\Lambda}_1 +\rho_{Sp(b)}}^{Sp(b)} , \, m\geq 0 \}$.
The multiplicity of each $L$-type in $ H^2(Sp(1,b), S^{n-1}(\mathbb C^2) \boxtimes \mathbb C)$ is one.
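To illustrate the list above: the factor with $m=0$ is $S^{n-1}(\mathbb C^{2})\boxtimes \mathbb C \equiv \pi_{n\frac{\beta_{max}}{2}}^{Sp(1)}\boxtimes \pi_{\rho_{Sp(b)}}^{Sp(b)}$, that is, the lowest $Sp(1)\times Sp(b)$-type of $H^2(Sp(1,b), S^{n-1}(\mathbb C^2) \boxtimes \mathbb C)$ itself; each step $m \mapsto m+1$ raises the $Sp(1)$-parameter by $\frac{\beta_{max}}{2}$ and the $Sp(b)$-parameter by $\tilde{\Lambda}_1$ simultaneously, so the factors listed are pairwise inequivalent.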
\subsubsection{Explicit example I} \label{sub:expexam} We develop this example in detail. We restrict quaternionic representations for $Sp(1,d)$ to $Sp(1,k)\times Sp(d-k)$. For this, we need to review definitions and facts in \cite{GW0}\cite{KO2} \cite{Vaint}. The group $G:=Sp(1,d)$ is a subgroup of $GL(\mathbb C^{2+2d})$. A maximal compact subgroup of $Sp(1,d)$ is the usual immersion of $Sp(1)\times Sp(d)$. Actually, $Sp(1,d)$ is a quaternionic real form of $Sp(\mathbb C^{1+d})$. $Sp(1,d)$ has a compact Cartan subgroup $T$ and there exists an orthogonal basis $\delta, \epsilon_1, \dots, \epsilon_d$ for $i\mathfrak t^\star$ so that
$\Phi(\mathfrak{sp}(d+1,\mathbb C), \mathfrak t)=\{ \pm 2\delta, \pm 2\epsilon_1, \dots, \pm 2\epsilon_d, \pm (\epsilon_i \pm \epsilon_j), 1\leq i \not= j \leq d, \pm (\delta \pm \epsilon_s), 1\leq s \leq d \}$.
We fix $1 \leq k <d$. We consider the usual immersion of $H:=Sp(1,k)\times Sp(d-k)$ into $Sp(1,d)$.
Thus, $\Phi(\mathfrak h,\mathfrak t):= \{ \pm 2\delta, \pm 2\epsilon_1, \dots, \pm 2\epsilon_d, \pm (\epsilon_i \pm \epsilon_j), 1\leq i \not= j \leq k \,\text{or}\, k+1\leq i \not= j \leq d, \pm (\delta \pm \epsilon_s), 1\leq s \leq k \}$.
Then, $H_0$ is isomorphic to $ Sp(1,d-k) \times Sp(k) $. We have
$\Phi(\mathfrak h_0,\mathfrak t):= \{ \pm 2\delta, \pm 2\epsilon_{1}, \dots, \pm 2\epsilon_d, \pm (\epsilon_i \pm \epsilon_j), k+1\leq i \not= j \leq d \,\text{or}\, 1\leq i \not= j \leq k, \pm (\delta \pm \epsilon_s), k+1\leq s \leq d \}$.
From now on, we fix the quaternionic system of positive roots \\ $\Psi:= \{ 2\delta, 2\epsilon_1, \dots, 2\epsilon_d, (\epsilon_i \pm \epsilon_j), 1\leq i < j \leq d, (\delta \pm \epsilon_s), 1\leq s \leq d \}$. Here, $\alpha_{max}=2\delta$, $\rho_n^\Psi = d \delta $. The Harish-Chandra parameter $\lambda$ of a quaternionic representation $\pi_\lambda$ is dominant with respect to $\Psi$. Whence, $ \Psi_\lambda=\Psi$. The systems in Theorem~\ref{prop:GWKO} are $\Psi_{H,\lambda}=\Phi(\mathfrak h,\mathfrak t)\cap \Psi$, $\Psi_{H_0,\lambda}=\Phi(\mathfrak h_0,\mathfrak t)\cap \Psi$. Also, \cite{DV}, $\Phi (\mathfrak k_1:=\mathfrak k_1(\Psi), \mathfrak t_1:=\mathfrak t \cap \mathfrak k_1)=\{ \pm 2\delta \}$, $\Phi (\mathfrak k_2:=\mathfrak k_2(\Psi), \mathfrak t_2:=\mathfrak t \cap \mathfrak k_2)=\{ \pm 2\epsilon_1, \dots, \pm 2\epsilon_d, \pm (\epsilon_i \pm \epsilon_j), 1\leq i \not= j \leq d\}$. Thus, $K_1(\Psi)\equiv SU_2(2\delta)\equiv Sp(1)\subset H$, $K_2\equiv Sp(d)$. Hence, for a Harish-Chandra parameter $\lambda=(\lambda_1, \lambda_2), \lambda_j \in i\mathfrak t_j^\star $ dominant with respect to $\Psi$, the representation $\pi_\lambda$ is $H$-admissible.
The lowest $K$-type of a generalized quaternionic representation $\pi_\lambda$ is the representation $\tau= \pi_{\lambda +\rho_n^\lambda}^K= \pi_{\lambda_1 +d\delta}^{K_1}\boxtimes \pi_{\lambda_2}^{K_2}$. Since $\rho_{K_2}= d\epsilon_1 +(d-1)\epsilon_2 +\dots +\epsilon_d $, for $n\geq 2d+1$ the functional $\mathfrak t^\star \ni \lambda_n:= n \delta +\rho_{K_2} $ is a Harish-Chandra parameter dominant with respect to $\Psi$ and the lowest $K$-type $\tau_n$ of $\pi_{\lambda_n}$ is
$\pi_{(n+d)\delta}^{K_1}\boxtimes \pi_{\rho_{K_2}}^{K_2}$. That is, $\pi_{ \lambda_n +\rho_n^\lambda}^K$ is equal to an irreducible representation of $K_1\equiv Sp(1)=SU_2(2\delta)$ times the trivial representation of $K_2\equiv Sp(d)$. The family $(\pi_{\lambda_n})_n$ exhausts, up to equivalence, the set of quaternionic representations for $Sp(1,d)$. Now, $\mathbf{H}^2(H_0,\tau_n)$ is the irreducible representation of lowest $L$-type equal to the irreducible representation $\pi_{(n+d)\delta}^{K_1}$ of $K_1$ times the trivial representation of $K_2 \cap L$. Actually, $\mathbf{H}^2(H_0, \pi_{(n+d)\delta}^{K_1}\boxtimes \pi_{\rho_{K_2}}^{K_2})$ is a realization of the quaternionic representation $H^2(Sp(1,d-k),\pi_{(n+d)\delta}^{Sp(1)}\boxtimes \pi_{\rho_{Sp(d-k)}}^{Sp(d-k)})$ for $Sp(1,d-k)$ times the trivial representation of $Sp(k)$. In \cite[Proposition 6.3]{GW0} it is shown that the representation $H^2(Sp(1,d-k),\pi_{(n+d)\delta}^{Sp(1)}\boxtimes \pi_{\rho_{Sp(d-k)}}^{Sp(d-k)})$ restricted to $L$ is a multiplicity free representation, and the highest weights of the totality of $L$-irreducible factors are computed there. We now make such a computation explicit.
For this we recall \ref{sub:Ltypesspd1} and notice $b=d-k$; $\Lambda_1=\delta$, $\beta_{max}=2\delta$, $\tilde{\Lambda}_1=\epsilon_1$; as $Sp(1)$-module, $S^p(\mathbb C^2)\equiv \pi_{(p+1) \delta}^{SU(2\delta)}$; for $p\geq 1$, as $Sp(p)$-module $S^m(\mathbb C^{2p})\equiv \pi_{ m\epsilon_1+\rho_{Sp(p)}}^{Sp(p)}$. Then,
\noindent
the irreducible $L=Sp(1)\times Sp(d-k)\times Sp(k)$-factors of
\bigskip
$ \mathbf H^2(H_0, \pi_{(n+d)\delta }^{K_1}\boxtimes \pi_{\rho_{K_2}}^{K_2}) \equiv H^2(Sp(1, d-k), \pi_{(n+d)\delta }^{Sp(1)}\boxtimes \pi_{\rho_{Sp(d-k)}}^{Sp(d-k)})\boxtimes \mathbb C$.
\bigskip
\noindent
occur with multiplicity one and constitute the set of inequivalent representations
\bigskip
$\{S^{n+d-1+m}(\mathbb C^2)\boxtimes S^m(\mathbb C^{2(d-k)} )\boxtimes \mathbb C \\ \phantom{xxxxxxxxxxxxxxxxx} \equiv \pi_{(n+d+m)\delta}^{Sp(1)}\boxtimes \pi_{m\epsilon_1 +\rho_{Sp(d-k)}}^{Sp(d-k)} \boxtimes \pi_{\rho_{Sp(k)}}^{Sp(k)}, \, m\geq 0 \}$.
Here, $\rho_{Sp(d-k)}=(d-k)\epsilon_{k+1}+ (d-k-1)\epsilon_{k+2}+\dots +\epsilon_d $ and $\rho_{Sp(k)}=k\epsilon_1 + (k-1)\epsilon_2 +\dots +\epsilon_k$.
We compute $\Psi_{H,\lambda}= \{ 2\delta, 2\epsilon_1, \dots, 2\epsilon_d, (\epsilon_i \pm \epsilon_j), 1\leq i \not= j \leq k \,\text{or}\, k+1\leq i \not= j \leq d, (\delta \pm \epsilon_s), 1\leq s \leq k \}$.
$\rho_n^\mu=\rho_n^H=k\delta$.
Now, from Theorem~\ref{prop:Rwelldefgc} we have $Spec_H(\pi_\lambda) +\rho_n^H =Spec_L(\mathbf H^2(H_0,\tau))$, whence, we conclude:
\smallskip
The representation $res_{Sp(1,k)\times Sp(d-k)}(\pi_{\lambda_n}^{Sp(1,d)})$ is a multiplicity free representation and the totality of Harish-Chandra parameters of the $Sp(1,k)\times Sp(d-k)$-irreducible factors is the set
\bigskip
$\{ (n+d+m)\delta + m \epsilon_1+\rho_{Sp(k)}+\rho_{Sp(d-k)}-\rho_n^H = \\ \phantom{xxx}(n+d+m-k)\delta + m \epsilon_1 + (d-k)\epsilon_{k+1}+\dots +\epsilon_d + k\epsilon_1 +\dots +\epsilon_k , m\geq 0 \}$.
\bigskip
Whence, $res_{Sp(1,k)\times Sp(d-k)}(\pi_{\lambda_n}^{Sp(1,d)})$ is equivalent to the Hilbert sum
\bigskip
$\oplus_{m\geq 0} V_{(n+d+m-k) \delta + m \epsilon_1+\rho_{Sp(k)}+\rho_{Sp(d-k)}}^{Sp(1,k)\times Sp(d-k)}\\ \phantom{xxx}\equiv \oplus_{m\geq 0}H^2(Sp(1,k)\times Sp(d-k), \pi_{(n+d+m)\delta + m \epsilon_1+\rho_{Sp(k)}+\rho_{Sp(d-k)}}^{Sp(1)\times Sp(k)\times Sp(d-k)}) $.
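For instance, for $d=2$ and $k=1$ the previous Hilbert sum specializes to
$$res_{Sp(1,1)\times Sp(1)}(\pi_{\lambda_n}^{Sp(1,2)})\equiv \oplus_{m\geq 0}\, V_{(n+1+m)\delta +(m+1)\epsilon_1 +\epsilon_2}^{Sp(1,1)\times Sp(1)},$$
with one inequivalent factor for each $m\geq 0$.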
\medskip
An awkward point of our decomposition is that it does not provide an explicit description of the $H$-isotypic components of $res_H(V_\lambda^G)$.
\smallskip
\subsubsection{Explicit example II} We restrict from $Spin(2m,2), m \geq 2$, to $Spin(2m,1)$. We notice the isomorphism between the pair $(Spin(4,2), Spin(4,1))$ and the pair $(SU(2,2),Sp(1,1))$. In this setting $K=Spin(2m)\times Z_K$, $L=Spin(2m)$, $ Z_K\equiv \mathbb T$. Obviously, we may conclude that any irreducible representation of $K$ is irreducible when restricted to $L$. In this case $H_0 \equiv Spin(2m,1)$ (for $m=2$, $H_0 \equiv Sp(1,1)$) and $\mathbf H^2(H_0,\tau)$ is irreducible. Therefore, from the duality theorem together with the fact that any irreducible representation of $ Spin(2m,1) $ is $L$-multiplicity free, we obtain:
\smallskip
{\it Any $ Spin(2m,1) $-admissible representation $(\pi_\lambda^{Spin(2m,2)}, V_\lambda^{Spin(2m,2)})$ is multiplicity free.}
\smallskip
For $(Spin(2m,2), Spin(2m,1))$ in \cite[Table 2 ]{Vaint}, \cite{KO2} it is verified that any $\pi_\lambda$, with $\lambda$ dominant with respect to one of the systems $\Psi_{\pm m}$ (see proof of \ref{lem:equalm}) has admissible restriction to $Spin(2m,1)$ and no other $\pi_\lambda$ has admissible restriction to $Spin(2m,1)$.
\smallskip
In \cite[Table 2 ]{Vaint} \cite{Kob1} \cite{Kob} it is verified that any square integrable representation $\pi_\lambda$ with $\lambda$ dominant with respect to a quaternionic system for $SU(2,2)$, has admissible restriction to $Sp(1,1)$. As in \ref{sub:expexam}, we may compute the Harish-Chandra parameters for the irreducible components of $res_{Sp(1,1)}(\pi_\lambda^{SU(2,2)})$.
\subsubsection{Explicit example III} To follow, $G$ is so that its Lie algebra is
$\mathfrak{sp}(m, n)$, $ n\geq 2, m>1$. The aim of this example is twofold. One is to produce Discrete Series representations so that the lowest $K$-type restricted to $K_1(\Psi)$ is still irreducible and secondly to produce another multiplicity free examples. Here, $\mathfrak k= \mathfrak{sp}(m)+\mathfrak{sp}(n)$. We fix maximal torus $T \subset K$ and describe the root system as in \cite{Vaint}. For the system of positive roots $\Psi :=\{ \epsilon_i \pm \epsilon_j, i<j, \delta_r \pm \delta_s, r<s, \epsilon_a \pm \delta_b, 1\leq a, i,j \leq m, 1\leq b,r,s \leq n\}$,
we have $K_1(\Psi)=K_1 \equiv Sp(m), K_2 (\Psi)=K_2\equiv Sp(n)$. Obviously, there exists a system of positive roots $\tilde \Psi$ so that $K_1(\tilde \Psi) \equiv Sp(n), K_2 (\tilde \Psi)\equiv Sp(m)$. For any other system of positive roots in $\Phi(\mathfrak g,\mathfrak t)$, the associated subgroup $K_1$ is equal to $K$. It readily follows that $\lambda:=\sum_{1\leq j\leq m} a_j \epsilon_j +\rho_{K_2}$ is a $\Psi$-dominant Harish-Chandra parameter when the coefficients $a_j$ are all integers such that $a_1>\dots >a_m>>0$. Since $\rho_n^\lambda$ belongs to $span_\mathbb C \{\epsilon_1,\dots,\epsilon_m\}$, it follows that the lowest $K$-type of $\pi_\lambda$ is equivalent to an irreducible representation of $Sp(m)$ tensored with the trivial representation of $Sp(n)$. Next, we consider $\mathfrak h= \mathfrak{sp}(m,n-1)+\mathfrak{sp}(1)$ in the usual embedding. Here, $\mathfrak h_0\equiv \mathfrak{sp}(m,1)+\mathfrak{sp}(n-1)$. Whence, after we proceed as in {\it Explicit example I}, we may conclude that $res_{Sp(m,n-1)\times Sp(1)}(\pi_\lambda^{Sp(m,n)})$ is a multiplicity free representation, and we may compute the Harish-Chandra parameters of each $Sp(m,n-1)\times Sp(1)$-irreducible factor of $\pi_\lambda$.
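\smallskip
For instance (an illustrative specialization), for $m=n=2$ the parameters $\lambda = a_1\epsilon_1 + a_2\epsilon_2 + 2\delta_1+\delta_2$, with $a_1 > a_2 >>0$ integers, are $\Psi$-dominant Harish-Chandra parameters of the required kind, since $\rho_{K_2}=\rho_{Sp(2)}=2\delta_1+\delta_2$.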
\subsubsection{Explicit example IV}
\,\, $ (\mathfrak{e}_{6(2)}, \mathfrak f_{4(4)}). $ We fix a compact Cartan subgroup $T\subset K$ so that $U:=T\cap H$ is a compact Cartan subgroup of $L=K\cap H$. Then, there exists a quaternionic and Borel--de Siebenthal positive root system $\Psi_{BS}$ for $\Phi(\mathfrak e_6, \mathfrak t)$ so that, after we write the simple roots as in Bourbaki (see \cite{GW0}\cite{Vaint}), the compact simple roots are $\alpha_1, \alpha_3, \alpha_4, \alpha_5, \alpha_6$ (they determine the $A_5$ Dynkin sub-diagram) and $\alpha_2$ is noncompact. $\alpha_2$ is adjacent to $\alpha_{max}$ and to $\alpha_4$. In \cite{Vaint}, it is verified that $\Psi_{BS}$ is the unique system of positive roots such that $\mathfrak k_1( \Psi_{BS}) =\mathfrak{su}_2(\alpha_{max}).$
The automorphism $\sigma$ of $\mathfrak g$ acts on the simple roots as follows $$\sigma (\alpha_2)=\alpha_2, \,\, \sigma (\alpha_1)=\alpha_6, \,\, \sigma( \alpha_3 )=\alpha_5, \,\, \sigma (\alpha_4)=\alpha_4.$$ Hence, $\sigma(\Psi_{BS})=\Psi_{BS}.$ Let $h_2 \in i\mathfrak t $ be so that $\alpha_j(h_2)=\delta_{j 2} $ for $j=1,\dots, 6.$ Then, $h_2=\frac{2H_{\alpha_{m}}}{(\alpha_{m}, \alpha_{m})}$ and $\theta= Ad(exp(\pi i h_2)).$ A straightforward computation yields: $\mathfrak k \equiv \mathfrak{su}_2(\alpha_{max}) + \mathfrak{sp}(3)$, $\mathfrak l\equiv \mathfrak{su}_2(\alpha_{max}) + \mathfrak{sp}(1)+ \mathfrak{sp}(2)$; the fixed point subalgebra for $\theta \sigma$ is isomorphic to $\mathfrak{sp}(1,3)$. Thus, the pair $(\mathfrak e_{6(2)}, \mathfrak{sp}(1,3))$ is the associated pair to $ (\mathfrak e_{6(2)}, \mathfrak f_{4(4)}). $ Let $q_\mathfrak u$ denote the restriction map from $\mathfrak t^\star$ into $\mathfrak u^\star$. Then, for $\lambda$ dominant with respect to $\Psi_{BS}$, the simple roots for $\Psi_{H,\lambda}=\Psi_{\mathfrak f_{4(4)},\lambda}$, respectively $\Psi_{\mathfrak{sp}(1,3), \lambda},$ are:
\begin{center}
$\alpha_2, \,\, \alpha_4, \,\, q_\mathfrak u(\alpha_3)=q_\mathfrak u (\alpha_5), \,\, q_\mathfrak u(\alpha_1)=q_\mathfrak u (\alpha_6). $
$ \beta_1=q_\mathfrak u(\alpha_2 + \alpha_4 +\alpha_5)=q_\mathfrak u (\alpha_2 + \alpha_4 +\alpha_3), \,\, \beta_2=q_\mathfrak u(\alpha_1)=q_\mathfrak u (\alpha_6)$, \\ $\,\, \beta_3=q_\mathfrak u(\alpha_3)=q_\mathfrak u (\alpha_5), \,\, \beta_4= \alpha_4. $
\end{center}
The fundamental weight $\tilde{\Lambda}_1$ associated to $\beta_1$ is equal to $\frac{1}{2} \beta_{max}$. Hence, $ \tilde\Lambda_1=\beta_1+\beta_2+ \beta_3 +\frac{1}{2} \beta_4=\alpha_2+\frac{3}{2}\alpha_4 + \alpha_3+\alpha_5+\frac{1}{2}(\alpha_1+\alpha_6) $.
Thus, from the Duality Theorem, for the quaternionic representation
\smallskip
\phantom{xxxxxxxxxxxxxxxxxxxx}$H^2( E_{6(2)}, \pi_{n\frac{\alpha_{max}}{2}+\rho_{SU(6)}}^{SU_2(\alpha_{max})\times SU(6) })$
\medskip
\noindent
{\it the set of Harish-Chandra parameters of the irreducible $F_{4(4)}$-factors is equal to:
\medskip
$-\rho_n^H $ plus the set of infinitesimal characters of the $L\equiv SU_2(\alpha_{max}) \times Sp(3) $-irreducible factors for}
\phantom{xxxxxxxxx} $res_{SU_2(\alpha_{max}) \times Sp(3)} (H^2(Sp(1,3), \pi_{n \frac{\alpha_{max}}{2} + \rho_{Sp(3)}}^{SU_2(\alpha_{max}) \times Sp(3)}))$.
Here, $-\rho_n^H =-d_H \frac{\alpha_{max}}{2} $, $d_H=d_{\mathfrak f_{4(4)}}=7$ (see \cite{GW}).
\medskip
Therefore, from the computation in \ref{sub:Ltypesspd1}, we obtain:
\begin{equation*} res_{ F_{4(4)}}( \pi_{n\frac{\alpha_{max}}{2}+\rho_{SU(6)}}^{ E_{6(2)}}) =\oplus_{ m\geq 0 }\,\, V_{(n-7+m)\frac{\alpha_{max}}{2} +m\tilde\Lambda_1 +\rho_{Sp(3)}}^{F_{4(4)}}.
\end{equation*}
Here, $\rho_{Sp(3)}= 3\beta_2 + 5\beta_3 +3\beta_4=\frac{3}{2} (\alpha_5+\alpha_3)+\frac{5}{2} (\alpha_1+\alpha_6)+ 3\alpha_4$.
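\smallskip
For instance, reading off the summand for $m=0$ in the previous decomposition: it is the Discrete Series $V_{(n-7)\frac{\alpha_{max}}{2}+\rho_{Sp(3)}}^{F_{4(4)}}$, whose parameter, written in the simple roots of $\mathfrak e_{6(2)}$, equals $(n-7)\frac{\alpha_{max}}{2}+\frac{3}{2}(\alpha_3+\alpha_5)+\frac{5}{2}(\alpha_1+\alpha_6)+3\alpha_4$.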
\subsubsection{Comments on admissible restriction of quaternionic representations} As usual, $(G,H)$ is a symmetric pair and $(\pi_\lambda ,H^2(G,\tau))$ an $H$-admissible, non-holomorphic, square integrable representation. We further assume $G/K$ carries a quaternionic structure. Then, from Tables 1, 2, 3 it follows:
a) $ \lambda $ is dominant with respect to a quaternionic system of positive roots. That is, $\pi_\lambda$ is a generalized quaternionic representation.
b) $H/L$ has a quaternionic structure.
c) Each system $\Psi_{H,\lambda}$, $\Psi_{H_0,\lambda}$ is a quaternionic system.
d) The representation $\mathbf H^2(H_0,\tau)$ is a sum of generalized quaternionic representations.
e) When $\pi_\lambda$ is quaternionic, then the representation $\mathbf{H}^2(H_0, \tau)$ is equal to $H^2(H_0, res_L(\tau))$, hence, it is quaternionic. Moreover, in \cite{GW0}, the highest weight and the respective multiplicity of each of its $L$-irreducible factors are computed.
f) Thus, the duality Theorem~\ref{prop:Rwelldefgc} together with a)---e) allows us to compute the Harish-Chandra parameters of the irreducible $H$-factors for a quaternionic representation $\pi_\lambda$. Actually, the computation of the Harish-Chandra parameters is quite similar to the computation in {\it Explicit example I, Explicit example IV}.
\medskip
In what follows we consider particular quaternionic symmetric pairs. One pair is $(\mathfrak f_{4(4)}, \mathfrak{so}(5,4))$. Here, $\mathfrak h_0 \equiv \mathfrak{sp}(1,2)+ \mathfrak{su}(2)$. Thus, for any Harish-Chandra parameter $\lambda$ dominant with respect to the quaternionic system of positive roots, $\pi_\lambda$ restricted to $SO(5,4)$ is an admissible representation, and the Duality theorem lets us compute both the multiplicities and the Harish-Chandra parameters of the restriction. Moreover, since
quaternionic Discrete Series for $Sp(1,2)\times SU(2)$ are multiplicity free, \cite{GW0}, we have that quaternionic Discrete Series for $\mathfrak f_{4(4)}$, restricted to $SO(5,4)$, are multiplicity free. It seems that it can be deduced from the branching rules for the pair $(Sp(3),Sp(1)\times Sp(2))$ that, for a generalized quaternionic representation $\pi_\lambda$, $res_{SO(5,4)}(\pi_\lambda)$ is multiplicity free if and only if $\pi_\lambda$ is quaternionic.
\medskip
For the pair $(\mathfrak f_{4(4)}, \mathfrak{so}(5,4))$, if we attempt to deduce our decomposition result from the work of \cite{GW}, we have to consider the group of Lie algebra $\mathfrak g^d \equiv \mathfrak f_{4(-20)}$, whose maximal compactly embedded subalgebra is isomorphic to $\mathfrak{so}(9)$, a simple Lie algebra; hence no Discrete Series for $G^d$ has admissible restriction to $H_0$ (see \cite{KO}, \cite{DV}). Thus, it is not clear to us how to deduce our Duality result from the Duality Theorem in \cite{GW0}.
\medskip
For the pairs $(\mathfrak e_{6(-14)}, \mathfrak{su}(2,4)+\mathfrak{su}(2)), (\mathfrak e_{6(2)}, \mathfrak{so}(6,4)+\mathfrak{so}(2)),$ $ (\mathfrak e_{7(-5)}, \mathfrak{e}_{6(2)}+\mathfrak{so}(2))$, for each $G$, generalized quaternionic representations do exist and they are $H$-admissible. For these pairs, the respective $\mathfrak{h}_0$ are: $ \mathfrak{su}(2,4)+\mathfrak{su}(2), \mathfrak{su}(2,4)+\mathfrak{su}(2), \mathfrak{su}(6,2)$.
In these three cases, the Maple software developed by Silva--Vergne \cite{BV} allows one to compute the $L$-Harish-Chandra parameters and the respective multiplicities for each Discrete Series of $H_0\equiv SU(p,q)\times SU(r)$; hence, the duality formula yields the Harish-Chandra parameters for $res_H(\pi_\lambda)$ and their multiplicities.
\subsubsection{Explicit example V} The pair $(SO(2m,n), SO(2m,n-1))$. This pair is considered in \cite{GW}. We recall their result and sketch how to derive it from our duality Theorem. We only consider the case $\mathfrak g=\mathfrak{so}(2m,2n+1)$. Here, $\mathfrak k=\mathfrak{so}(2m) +\mathfrak{so}(2n+1)$, $\mathfrak h=\mathfrak{so}(2m,2n)$, $\mathfrak h_0=\mathfrak{so}(2m,1)+\mathfrak{so}(2n)$, $\mathfrak l=\mathfrak{so}(2m)+ \mathfrak{so}(2n)$. We fix a Cartan subalgebra $\mathfrak t \subset \mathfrak l \subset \mathfrak k$. Then, there exists an orthogonal basis
$\epsilon_1, \dots, \epsilon_m, \delta_1, \dots, \delta_n$ for $i\mathfrak t^\star $ so that
$\Delta = \{(\epsilon_i \pm \epsilon_j), 1 \leq i < j\leq m, (\delta_r \pm \delta_s), 1 \leq r < s \leq n \} \cup \{\delta_j\}_{1\leq j \leq n} .$ $$ \Phi_n = \{ \pm (\epsilon_r \pm \delta_s), r=1, \dots,m, s=1,\dots,n \} \cup \{ \pm \epsilon_j, j=1,\dots,m \}.$$ The systems of positive roots $\Psi_\lambda$ so that $\pi_\lambda^G$ is an admissible representation of $H$ are the systems $\Psi_{\pm}$ associated to the lexicographic orders $ \epsilon_1> \dots> \epsilon_m> \delta_1> \dots> \delta_n $, $ \epsilon_1> \dots>\epsilon_{m-1}> -\epsilon_m> \delta_1> \dots> \delta_{n-1} > -\delta_n. $
Here, for $m \geq 3$, $\mathfrak k_1(\Psi_{\pm})=\mathfrak{so}(2m).$ For $m=2, \mathfrak k_1(\Psi_\pm)=\mathfrak{su}_2(\epsilon_1 \pm \epsilon_2).$ Then,
$\Psi_{H, +}= \{(\epsilon_i \pm \epsilon_j), 1 \leq i < j\leq m, (\delta_r \pm \delta_s), 1 \leq r < s \leq n \}\cup \{ (\epsilon_r \pm \delta_s), r=1, \dots,m, s=1,\dots,n \}$,
$\Psi_{H_0, +}= \{(\epsilon_i \pm \epsilon_j), 1 \leq i < j\leq m, (\delta_r \pm \delta_s), 1 \leq r < s \leq n \}\cup \{ \epsilon_j, j=1,\dots,m \}.$
\smallskip
$\mathfrak g^d=\mathfrak{so}(2m+2n,1)$. Thus, from either our duality Theorem or from \cite{GW}, we infer that whenever $res_H(\pi_\lambda)$ is $H$-admissible, then, $res_H(\pi_\lambda)$ is a multiplicity free representation.
Whence, we are left to compute the Harish-Chandra parameters for $res_{SO(2m,2n)}(H^2(SO(2m,2n+1), \pi_{\Lambda_1}^{SO(2m)}\boxtimes \pi_{\Lambda_2}^{SO(2n+1)}))$. For this, according to the duality Theorem, we have to compute the infinitesimal characters of each irreducible factor of the underlying $L$-module in \\
$\mathbf H^2(H_0,\tau)= \sum_{\nu \in Spec_{SO(2n)}(\pi_{\Lambda_2}^{SO(2n+1)})} H^2(SO(2m,1), \pi_{\Lambda_1}^{SO(2m)})\boxtimes V_{\nu}^{SO(2n)}$. The branching rules for $ res_{SO(2m)}(H^2(SO(2m,1), \pi_{\Lambda_1}^{SO(2m)}))$ are found in \cite{Th} and other references, and the branching rule for $res_{SO(2n)}(\pi_{\Lambda_2}^{SO(2n+1)})$ can be found in \cite{Th}. From both computations, we deduce \cite[Proposition 3]{GW}: for $\lambda=\sum_{1\leq i \leq m} \lambda_i \epsilon_i +\sum_{1\leq j \leq n} \lambda_{m+j} \delta_j$, $V_\mu^H$ is an $H$-subrepresentation of $H^2(G,\tau)\equiv V_\lambda^{SO(2m,2n+1)}$ ($\mu=\sum_{1\leq i \leq m} \mu_i \epsilon_i +\sum_{1\leq j \leq n} \mu_{j+m} \delta_j$) if and only if \\ \phantom{xxxx}$\mu_1>\lambda_1> \dots> \mu_m> \lambda_m, \quad \lambda_{m+1}>\mu_{m+1}>\dots>\lambda_{m+n}>\vert \mu_{m+n}\vert.$
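\smallskip
For instance (an illustrative specialization of the above condition), for $m=2$, $n=1$, i.e. the pair $(SO(4,3), SO(4,2))$, with $\lambda=\lambda_1\epsilon_1+\lambda_2\epsilon_2+\lambda_3\delta_1$, the $H$-factors $V_\mu^H$, $\mu=\mu_1\epsilon_1+\mu_2\epsilon_2+\mu_3\delta_1$, occurring in $V_\lambda^{SO(4,3)}$ are exactly those satisfying $\mu_1>\lambda_1>\mu_2>\lambda_2$ and $\lambda_3>\vert \mu_3\vert$.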
\subsection{ Existence of Discrete Series whose lowest $K$-type restricted to $K_1(\Psi)$ is irreducible} Let $G$ be a semisimple Lie group that admits square integrable representations. This hypothesis allows us to fix a compact Cartan subgroup $T\subset K$ of $G$. In \cite{DV}, for each system of positive roots $\Psi \subset \Phi(\mathfrak g,\mathfrak t)$, a normal subgroup $K_1(\Psi)\subset K$ is defined so that for a symmetric pair $(G,H)$, with $H$ a $\theta$-invariant subgroup, it holds: for any Harish-Chandra parameter dominant with respect to $\Psi$, the representation $res_H(\pi_\lambda)$ is $H$-admissible if and only if $K_1(\Psi)$ is a subgroup of $H$. For a holomorphic system $\Psi$, $K_1(\Psi)$ is equal to the center of $K$; for a quaternionic system of positive roots, $K_1(\Psi) \equiv SU_2(\alpha_{max})$. Either for the holomorphic family or for quaternionic real forms we find that among the $H$-admissible Discrete Series for $G$, there are many examples of the following nature: the lowest $K$-type of $\pi_\lambda$ is equal to an irreducible representation of $K_1(\Psi)$ tensored with the trivial representation for $K_2$, \cite{GW}. In what follows, under the general setting at the beginning of this paragraph, we verify:
\subsubsection{}\label{sub:existencep}{\it For each system of positive roots $\Psi \subset \Phi(\mathfrak g,\mathfrak t)$, there exist Discrete Series with Harish-Chandra parameter dominant with respect to $\Psi$ and so that the lowest $K$-type is equal to an irreducible representation of $K_1(\Psi)$ tensored with the trivial representation for $K_2(\Psi)$.}
We may assume $K_1(\Psi)$ is a proper subgroup of $K$. When $K_1(\Psi)=Z_K$, Harish-Chandra showed there exists such a representation. For $G$ a quaternionic real form and $\Psi$ a quaternionic system of positive roots, $K_1(\Psi)=SU_2(\alpha_{max})$, and in \cite{GW} we find a proof of the statement. From the tables in \cite{DV}, \cite{Vaint}, we are left to consider the triples $(G, K, K_1(\Psi))$ whose respective Lie algebras form one of the triples
\smallskip
$(\mathfrak{su}( m, n ), \mathfrak{su}(m )+\mathfrak{su}(n)+\mathfrak u(1), \mathfrak{su}(m ) ), m>2$,
$(\mathfrak{sp}(m, n), \mathfrak{sp}(m)+\mathfrak{sp}(n) , \mathfrak{sp}(m ) )$,
$(\mathfrak{so}(2m, n), \mathfrak{so}(2m )+\mathfrak{so}(n ) , \mathfrak{so}(2m )) $.
\smallskip
In {\it Explicit example III} we already analyzed the second triple of the list. With the same proof it is verified that the statement holds for the third triple. For the first triple, we further assume $G=SU(p,q)$. Thus, $K$ is the product of two simply connected subgroups times a one dimensional torus $Z_K$; we notice $\rho_n^{\Psi_a}=\rho_\mathfrak g^\lambda-\rho_K$, hence $\rho_n^{\Psi_a}$ lifts to a character of $K$. Thus, as in {\it Explicit example III}, we obtain $\pi_\lambda$ with $\lambda$ dominant with respect to $\Psi_a$ so that its lowest $K=SU(p)SU(q)Z_K$-type is the tensor product of an irreducible representation for $SU(p)Z_K$ times the trivial representation for $SU(q)$. Since $\rho_n^{\Psi_a}$ lifts to a character of $K$, after some computation the claim follows.
\section{Symmetry breaking operators and normal derivatives } In this section, $(G,H)$ is a symmetric pair and $\pi_\lambda$ is a square integrable representation. Our aim is to generalize a result in \cite[Theorem 5.1]{Na}. In \cite{KP2}, symmetry breaking operators expressed by means of normal derivatives are considered; results are obtained there for holomorphic embeddings of rank one symmetric pairs. As before, $H_0=G^{\sigma \theta}$ is the dual subgroup. We recall that $\mathfrak h\cap \mathfrak p$ is orthogonal to $\mathfrak h_0\cap \mathfrak p$ and that $\mathfrak h\cap \mathfrak p\equiv T_{eL}(H/L)$, $\mathfrak h_0\cap \mathfrak p\equiv T_{eL}(H_0/L)$. Hence, for $X \in \mathfrak h_0\cap \mathfrak p$, more generally for $X \in \mathcal U(\mathfrak h_0)$, we say $L_X$ is a differential operator normal to $H/L$; for short, a {\it normal derivative}. Other ingredients necessary for the next Proposition are the subspaces $\mathcal L_\lambda$ and $\mathcal U(\mathfrak h_0)W$. The latter subspace is contained in the subspace of $K$-finite vectors, whereas the former subspace is believed, when $res_H(\pi_\lambda)$ is not discretely decomposable, to be disjoint from the subspace of $G$-smooth vectors. When $res_H(\pi_\lambda)$ is $H$-admissible, $\mathcal L_\lambda$ is contained in the subspace of $K$-finite vectors. However, it might not be equal to $\mathcal U(\mathfrak h_0)W$, as we have pointed out. The next Proposition and its converse deal with consequences of the equality $\mathcal L_\lambda = \mathcal U(\mathfrak h_0)W$.
\begin{prop}\label{prop:symmenormal} We assume $(G,H)$ is a symmetric pair. We also assume there exists an irreducible representation $(\sigma, Z)$ of $L$ so that $H^2(H,\sigma)$ is an irreducible factor of $H^2(G,\tau)$ and $ H^2(G,\tau)[H^2(H,\sigma)][Z]=\mathcal L_\lambda[Z]= \mathcal U(\mathfrak h_0)W[Z]=L_{\mathcal U(\mathfrak h_0)}(H^2(G,\tau)[W])[Z]$. Then, $res_H(\pi_\lambda)$ is $H$-admissible. Moreover, any symmetry breaking operator from $H^2(G,\tau)$ into $H^2(H,\sigma)$ is represented by a normal derivative differential operator.
\end{prop}
We show a converse to Proposition~\ref{prop:symmenormal} in \ref{sub:convprop}.
\begin{proof}To begin with, we recall that $H^2(G,\tau)[W]=\{ K_\lambda(\cdot,e)^\star w, w\in W\}$ is a subspace of $H^2(G,\tau)_{K-fin}$, whence $L_{\mathcal U(\mathfrak h_0)}(H^2(G,\tau)[W])[Z]$ is a subspace of $H^2(G,\tau)_{K-fin}$. Owing to our hypothesis, we then have that $\mathcal L_\lambda[Z]$ is a subspace of $H^2(G,\tau)_{K-fin}$. Next, we quote a
result
of Harish-Chandra: a $U(\mathfrak h)-$finitely generated, $\mathfrak z(U(\mathfrak h))-$finite, module has a finite composition
series. Thus, $ H^2(G,\tau)_{K-fin} $ contains an irreducible $(\mathfrak h,L)$-submodule; for a proof, cf. \cite[Corollary 3.4.7 and Theorem 4.2.1]{Wa1}. Now, in \cite[Lemma 1.5]{Kob1} we find a proof of: if a $(\mathfrak g,K)-$mod\-ule contains an irreducible $(\mathfrak h,L)-$submodule, then the $(\mathfrak g,K)-$module is $\mathfrak h-$al\-ge\-bra\-i\-cal\-ly
decomposable. Thus, $res_H(\pi_\lambda)$ is al\-ge\-bra\-i\-cal\-ly discretely decomposable. In \cite[Theorem 4.2]{Kob}, it is shown that under the hypothesis that $(G,H)$ is a symmetric pair, for Discrete Series, $\mathfrak h$-algebraically discrete decomposability is equivalent to $H$-admissibility, whence $res_H(\pi_\lambda)$ is $H$-admissible. Let $S: H^2(G,\tau)\rightarrow H^2(H,\sigma)=V_\mu^H$ be a continuous intertwining linear map. Then, we have shown in \ref{eq:kten}, for $z \in Z$, $K_S(\cdot,e)^\star z\in H^2(G,\tau)[V_\mu^H][Z]$. We fix an orthonormal basis $\{z_p \},\, p=1,\dots,dim Z$ for $Z$. The hypothesis \\ \phantom{ccccccccc} $H^2(G,\tau)[V_\mu^H][Z]=L_ {\mathcal U(\mathfrak h_0)}(H^2(G,\tau)[W])[Z]$\\ implies that for each $p$, there exist $D_p\in \mathcal U(\mathfrak h_0)$ and $w_p \in W$ so that $K_S(\cdot,e)^\star z_p=L_{D_p} K_\lambda(\cdot,e)^\star w_p$. Next, we fix $f_1\in H^2(G,\tau)^\infty, h\in H$ and set $f:=L_{h^{-1}}(f_1)$; then $f(e)=f_1(h)$. We have, \begin{equation}\label{eq:xxx}
\begin{split}
(S(f)(e),z_p)_Z & =\int_G(f(y),K_S(y,e)^\star z_p)_W dy \\
& =\int_G(L_{D_p^\star}f(y),K_\lambda(y,e)^\star w_p)_W dy \\
& =(L_{D_p^\star}f(e),w_p)_W \\ & =(R_{\check D_p^\star}f(e),w_p)_W
\end{split}
\end{equation}
Thus, for each $z\in Z$ and $f_1$ a smooth vector, we obtain \\ $(S(f_1)(h),z)_Z=\sum_p (S(f_1)(h),(z,z_p)_Z z_p)_Z=\sum_p ( R_{\check D_p^\star}f_1(h),w_p)_W (z_p,z)_Z $. As in \cite[Proof of Lemma 2]{OV2}, we conclude for any $f \in H^2(G,\tau)$ that \begin{equation} S(f)(h)= \sum_{1\leq p \leq dim Z} ( R_{\check D_p^\star}f(h),w_p)_W \,\, z_p \end{equation} Since $D_p \in \mathcal U(\mathfrak h_0)$, such an expression of $S(f)$ is a representation in terms of normal derivatives. \end{proof}
\subsubsection{Converse to Proposition~\ref{prop:symmenormal} }\label{sub:convprop} We want to show: if every element in $Hom_H(H^2(G,\tau), H^2(H,\sigma)) $ has an expression as a differential operator by means of ``normal derivatives'', then the equality $\mathcal L_\lambda[Z]= H^2(G,\tau)[H^2(H,\sigma)][Z]=\mathcal U(\mathfrak h_0)W[Z]$ holds.
In fact, the hypothesis $ S(f)(h)= \sum_{1\leq p \leq dim Z} ( R_{\check D_p^\star}f(h),w_p)_W \,\, z_p$, $D_p \in \mathcal U(\mathfrak h_0)$, yields $K_S(\cdot,e)^\star z=L_{D_z} K_\lambda (\cdot,e)^\star w_z$, $D_z \in \mathcal U(\mathfrak h_0)$, $w_z \in W$. The fact that $(\sigma, Z)$ has multiplicity one in $H^2(H,\sigma)$ gives \\ \phantom{xxxxxx} $dim Hom_H(H^2(G,\tau), H^2(H,\sigma))=dim H^2(G,\tau)[H^2(H,\sigma)][Z]$. \\ Hence, the functions\\ \phantom{xxxxxx} $\{K_S(\cdot,e)^\star z, z \in Z, S \in Hom_H(H^2(G,\tau), H^2(H,\sigma))\}$ \\ span $H^2(G,\tau)[H^2(H,\sigma)][Z]$. Therefore, $H^2(G,\tau)[H^2(H,\sigma)][Z]$ is contained in $\mathcal U(\mathfrak h_0)W[Z]=L_{\mathcal U(\mathfrak h_0)}H^2(G,\tau)[W][Z]$. Owing to Theorem~\ref{prop:Rwelldefgc}, both spaces have the same dimension, whence, the equality holds.
\medskip
The pairs for which Proposition~\ref{prop:symmenormal} holds for scalar holomorphic Discrete Series are
$ (\mathfrak{su}(m,n), \mathfrak{su} (m,l) +\mathfrak{su} ( n-l)+\mathfrak u(1)),$ $(\mathfrak{so}(2m,2), \mathfrak u(m,1)),$ $(\mathfrak{so}^\star (2n), \mathfrak u(1,n-1)),$ $(\mathfrak{so}^\star(2n), \mathfrak{so}(2) +\mathfrak{so}^\star(2n-2)),$ $(\mathfrak e_{6(-14)}, \mathfrak{so}(2,8)+\mathfrak{so}(2)).$ See \cite[(4.6)]{Vaint}.
\subsubsection{Comments on the interplay among the subspaces $\mathcal L_\lambda$, $\mathcal U(\mathfrak h_0)W$, $H^2(G,\tau)_{K-fin}$ and symmetry breaking operators} It readily follows that the subspace $\mathcal L_\lambda[Z]=V_\lambda^G[H^2(H, \sigma)][Z]$ is equal to the closure of the linear span of
$\mathcal K_{Sy}(G,H):=\{K_{S^*}(e,\cdot)z=K_{S}(\cdot,e)^\star z, z\in Z, S \in Hom_H(V_\lambda^G, V_\mu^H) \}$.
\smallskip
(1)\, $ H^2(G,\tau)_{K-fin} \cap \mathcal K_{Sy}(G,H) $
is equal to the linear span of elements in $\mathcal K_{Sy}(G,H) $ so that the corresponding symmetry breaking operator is represented by a differential operator. See \cite[Lemma 4.2]{OV2}.
\smallskip
(2) \, $ \mathcal U(\mathfrak h_0)W \cap \mathcal K_{Sy}(G,H) $
is equal to the linear span corresponding to the elements $ K_S $ in $\mathcal K_{Sy}(G,H) $ so that $S$ is represented by a normal derivative differential operator. This is shown in Proposition~\ref{prop:symmenormal} and its converse.
\smallskip
(3) The set of symmetry breaking operators represented by a differential operator is nonzero if and only if $res_H(\pi_\lambda)$ is $H$-admissible. See \cite[Theorem 4.3]{OV2} and the proof of Proposition~\ref{prop:symmenormal}.
\smallskip
(4) We believe that from Nakahama's thesis, it is possible to construct examples of $V_\lambda^G[H^2(H, \sigma)][Z]\cap \mathcal U(\mathfrak h_0)W[Z]\not= \{0\}$ so that the equality $V_\lambda^G[H^2(H, \sigma)][Z]= \mathcal U(\mathfrak h_0)W[Z]$ does not hold! That is, there are symmetry breaking operators represented by plain differential operators, some of which are not represented by normal derivative operators.
\subsubsection{A functional equation for symmetry breaking operators} Notation is as in Theorem~\ref{prop:Rwelldefgc}. We assume $(G,H)$ is a symmetric pair and $res_H(\pi_\lambda)$ is admissible. The objects involved in the equation are: $H_0=G^{\sigma \theta}$, $Z=V_{\mu+\rho_n^H}^L$ the lowest $L$-type for $V_\mu^H$, $\mathcal L_\lambda=\sum_{\mu} H^2(G,\tau)[V_\mu^H][V_{\mu+\rho_n^H}^L]$, $\mathcal U(\mathfrak h_0)W =L_{\mathcal U(\mathfrak h_0) }H^2(G,\tau)[W]$, the $L$-isomorphism $D: \mathcal L_\lambda [Z] \rightarrow \mathcal U(\mathfrak h_0)W[Z]$, an $H$-equivariant continuous linear map $S: H^2(G,\tau)\rightarrow H^2(H,\sigma)$, and the kernel $K_S :G\times H\rightarrow Hom_\mathbb C(W,Z)$ corresponding to $S$; \ref{eq:kten} implies $K_S(\cdot,e)^\star z \in \mathcal L_\lambda[Z]$. Finally, we recall that $K_\lambda :G\times G\rightarrow Hom_\mathbb C(W,W)$ is the kernel associated to the orthogonal projector onto $ H^2(G,\tau)$. Then,
\begin{prop}\label{prop:funcequasymmt} For $z\in Z$, $y\in G$ we have \begin{equation*} D(K_S(e,\cdot)^\star (z))(y)=\int_{H_0}K_\lambda(h_0,y) D(K_S(e,\cdot)^\star (z))(h_0) dh_0 \end{equation*}
\end{prop}
When $D$ is the identity map, the functional equation turns into
\begin{equation*} K_S(x,h) =\int_{H_0} K_S( h_0,e) K_\lambda(x,hh_0) dh_0 \end{equation*}
The functional equation follows from Proposition~\ref{prop:ktfromkto} applied to $T:=S^\star$. The second equation follows after we compute the adjoint of the first equation.
We note that, as in the case of holographic operators, a symmetry breaking operator can be recovered from its restriction to $H_0$.
We also note that \cite{Na} has shown a different functional equation for $K_S$ for scalar holomorphic Discrete Series and holomorphic embedding $H/L \rightarrow G/K$.
\section{Tables}
For an arbitrary symmetric pair $(G,H),$ whenever $\pi_\lambda^G$ is an admissible representation of $H,$ we define,
$$ K_1= \left\{ \begin{array}{ll} Z_K & \mbox{\, if $\Psi_\lambda$ holomorphic } \\ K_1(\Psi_\lambda) & \phantom{x} \mbox{otherwise} \end{array} \right. $$
In the next tables we present the 5-tuples so that: $(G, H)$ is a symmetric pair, $H_0$ is the associated group to $H,$ $\Psi_\lambda$ is a system of positive roots such that $\pi_\lambda^G$ is an admissible representation of $H,$ and $K_1 =Z_1( \Psi_\lambda) K_1(\Psi_\lambda).$ Actually, instead of writing Lie groups we write their respective Lie algebras. Each table is in part a reproduction of tables in \cite{KO}, \cite{Vaint}. The tables can also be computed by means of the techniques presented in \cite{DV}. Note that each table is ``symmetric'' when we replace $H$ by $H_0.$ As usual, $\alpha_m$ denotes the highest root in $\Psi_\lambda.$ Unexplained notation is as in \cite{Vaint}.
\rotatebox{90}{
\begin{tabular}{|c| c | c| c |c |} \hline $G$ & $H$ & $H_0$ & $\Psi_\lambda$ & $ K_1$ \\
\hline $\mathfrak{su}(m,n)$ & $\mathfrak{su}(m,k)\oplus \mathfrak{su}(n-k)\oplus \mathfrak{u}(1)$ & $\mathfrak{su}(m,n-k)\oplus \mathfrak{su}(k)\oplus \mathfrak{u}(1)$ & $\Psi_a$ & $ \mathfrak{su}(m)$ \\
\hline $\mathfrak{su}(m,n)$ & $\mathfrak{su}(k,n)\oplus \mathfrak{su}(m-k)\oplus \mathfrak{u}(1)$ & $\mathfrak{su}(m-k,n)\oplus \mathfrak{su}(k)\oplus \mathfrak{u}(1)$ & $\tilde{\Psi_b}$ & $ \mathfrak{su}(n)$ \\
\hline $\mathfrak{so}(2m,2n), m>2$ & $\mathfrak{so}(2m,2k)\oplus \mathfrak{so}(2n-2k)$ & $\mathfrak{so}(2m,2n-2k)\oplus \mathfrak{so}(2k)$ & $\Psi_{\pm}$ & $ \mathfrak{so}(2m)$ \\
\hline $\mathfrak{so}(4,2n) $ & $\mathfrak{so}(4,2k)\oplus \mathfrak{so}(2n-2k)$ & $\mathfrak{so}(4,2n-2k)\oplus \mathfrak{so}(2k)$ & $\Psi_{\pm}$ & $ \mathfrak{su}_2(\alpha_m)$ \\
\hline $\mathfrak{so}(2m,2n+1), m>2$ & $\mathfrak{so}(2m,k)\oplus \mathfrak{so}(2n+1-k)$ & $\mathfrak{so}(2m,2n+1-k)\oplus \mathfrak{so}(k)$ & $\Psi_{\pm}$ & $ \mathfrak{so}(2m)$ \\
\hline $\mathfrak{so}(4,2n+1) $ & $\mathfrak{so}(4,k)\oplus \mathfrak{so}(2n+1-k)$ & $\mathfrak{so}(4,2n+1-k)\oplus \mathfrak{so}(k)$ & $\Psi_{\pm}$ & $ \mathfrak{su}_2(\alpha_m)$ \\ \hline $\mathfrak{so}(4,2n), n>2$ & $\mathfrak{u}(2,n)_1$ & $w\mathfrak{u}(2,n)_1$ & $\Psi_{1\, -1}$ & $ \mathfrak{su}_2(\alpha_m)$ \\
\hline $\mathfrak{so}(4,2n), n>2$ & $\mathfrak{u}(2,n)_2$ & $w\mathfrak{u}(2,n)_2$ & $\Psi_{1 \, 1}$ & $ \mathfrak{su}_2(\alpha_m)$ \\
\hline $\mathfrak{so}(4,4)$ & $\mathfrak{u}(2,2)_{1\,1}$ & $w\mathfrak{u}(2,2)_{11}$ & $\Psi_{1\, -1}, \, w_{\epsilon,\delta}\Psi_{1 \, -1}$ & $ \mathfrak{su}_2(\alpha_m)$ \\
\hline
$\mathfrak{so}(4,4)$ & $\mathfrak{u}(2,2)_{12}$ & $w\mathfrak{u}(2,2)_{12}$ & $\Psi_{1\, -1}, \, w_{\epsilon,\delta}\Psi_{1\, 1}$ & $ \mathfrak{su}_2(\alpha_m)$ \\
\hline $\mathfrak{so}(4,4)$ & $\mathfrak{u}(2,2)_{21}$ & $w\mathfrak{u}(2,2)_{21}$ & $\Psi_{1\, 1}, \, w_{\epsilon,\delta}\Psi_{1\, -1}$ & $ \mathfrak{su}_2(\alpha_m)$ \\
\hline $\mathfrak{so}(4,4)$ & $\mathfrak{u}(2,2)_{22}$ & $w\mathfrak{u}(2,2)_{22}$ & $\Psi_{1\,1}, \, w_{\epsilon,\delta}\Psi_{1\,1}$ & $ \mathfrak{su}_2(\alpha_m)$ \\
\hline
$\mathfrak{sp}(m,n)$ & $\mathfrak{sp}(m,k)\oplus \mathfrak{sp}(n-k)$ & $\mathfrak{sp}(m,n-k)\oplus \mathfrak{sp}(k)$ & $\Psi_+$ & $ \mathfrak{sp}(m)$ \\
\hline $\mathfrak f_{4(4)}$ & $\mathfrak{sp}(1,2)\oplus \mathfrak{su}(2)$ & $\mathfrak{so}(5,4)$ & $\Psi_{BS}$ & $ \mathfrak{su}_2(\alpha_m)$ \\
\hline
\phantom{ } $\mathfrak{e}_{6(2)}$ & $\mathfrak{so}(6,4)\oplus \mathfrak{so}(2)$ & $\mathfrak{su}(4,2)\oplus \mathfrak{su}(2)$ & $\Psi_{BS}$ & $ \mathfrak{su}_2(\alpha_m)$ \\
\hline
$\mathfrak{e}_{7(-5)}$ & $ \mathfrak{so}(8,4)\oplus \mathfrak{su}(2)$ & $\mathfrak{so}(8,4)\oplus \mathfrak{su}(2)$ & $\Psi_{BS}$ & $ \mathfrak{su}_2(\alpha_m)$ \\
\hline
$\mathfrak{e}_{7(-5)}$ & $\mathfrak{su}(6,2)$ & $\mathfrak{e}_{6(2)}\oplus \mathfrak{so}(2)$ & $\Psi_{BS}$ & $ \mathfrak{su}_2(\alpha_m)$ \\
\hline
$\mathfrak{e}_{8(-24)}$ & $\mathfrak{so}(12,4)$ & $\mathfrak{e}_{7(-5)}\oplus \mathfrak{su}(2)$ & $\Psi_{BS}$ & $ \mathfrak{su}_2(\alpha_m)$ \\
\hline
\end{tabular} } \\
Table 1. Case $U=T$, $\Psi_\lambda$ nonholomorphic
\rotatebox{90}{ \begin{tabular}{|c| c | c| c |c |} \hline $G$ & $H$ & $H_0$ & $\Psi_\lambda$ & $ K_1$ \\
\hline $\mathfrak {su}(2,2n), \, n> 2$ & $\mathfrak {sp}(1,n)$ & $\mathfrak {sp}(1,n)$ & $\Psi_1$ & $ \mathfrak{su}_2(\alpha_m)$ \\
\hline $\mathfrak{su}(2,2)$ & \phantom{xx} $\mathfrak{sp}(1,1)$ & $\mathfrak{sp}(1,1)$ & $\Psi_1$ & $ \mathfrak{su}_2(\alpha_m)$ \\
\hline
$\mathfrak{su}(2,2)$ & $\mathfrak{sp}(1,1)$ & $\mathfrak{sp}(1,1)$ & $\tilde{\Psi}_1$ & $ \mathfrak{su}_2(\alpha_m)$ \\
\hline
$\mathfrak{so}(2m,2n), m>2$ & $ \mathfrak{so}(2m,2k+1) + \mathfrak{so}(2n-2k-1)$ & $\mathfrak{so}(2m,2n-2k-1)+ \mathfrak{so}(2k+1)$ & $\Psi_{\pm}$ & $ \mathfrak{so}(2m)$ \\
\hline
$\mathfrak{so}(4,2n), $ & $ \mathfrak{so}(4,2k+1) + \mathfrak{so}(2n-2k-1)$ & $\mathfrak{so}(4,2n-2k-1)+ \mathfrak{so}(2k+1)$ & $\Psi_{\pm}$ & $ \mathfrak{su}_2(\alpha_m)$ \\
\hline
$\mathfrak{so}(2m,2), m>2 $ & $ \mathfrak{so}(2m,1) $ & $\mathfrak{so}(2m,1)$ & $\Psi_{\pm}$ & $ \mathfrak{so}(2m)$ \\
\hline
$\mathfrak{so}(4,2), $ & $ \mathfrak{so}(4,1) $ & $\mathfrak{so}(4,1)$ & $\Psi_{\pm}$ & $ \mathfrak{su}_2(\alpha_m)$ \\
\hline
$\mathfrak{e}_{6(2)}$ & $\mathfrak{f}_{4(4)}$ & $\mathfrak{sp}(3,1)$ & $\Psi_{BS}$ & $ \mathfrak{su}_2(\alpha_m)$ \\
\hline
\end{tabular} } \\
Table 2. Case $U\not=T$, $\Psi_\lambda$ nonholomorphic
\begin{center}
\begin{tabular}{|c| c | c|}
\hline $G$ & $H \, $ (a) & $H_0 \, $ (b) \\
\hline $\mathfrak{su}(m,n), m\not= n$ & $\mathfrak{su}(k,l)+\mathfrak{su}(m-k,n-l)+ \mathfrak u(1)$ & $\mathfrak{su}(k,n-l)+\mathfrak{su}(m-k,l)+ \mathfrak u(1)$ \\
\hline $\mathfrak{su}(n,n)$ & $\mathfrak{su}(k,l)+ \mathfrak{su}(n-k,n-l)+\mathfrak u(1)$ &$\mathfrak{su}(k,n-l)+ \mathfrak{su}(n-k,l)+\mathfrak u(1)$ \\
\hline $\mathfrak{so}(2,2n)$ & $\mathfrak{so}(2,2k)+ \mathfrak{so}(2n-2k)$ & $\mathfrak{so}(2,2n-2k)+ \mathfrak{so}(2k)$ \\
\hline $\mathfrak{so}(2,2n)$ & $\mathfrak{u}(1,n)$ & $ \mathfrak{u}(1,n)$ \\
\hline $\mathfrak{so}(2,2n+1)$ & $\mathfrak{so}(2,k)+ \mathfrak{so}(2n+1-k)$ & $\mathfrak{so}(2,2n+1-k)+ \mathfrak{so}(k)$ \\
\hline $\mathfrak{so}^\star (2n)$ & $\mathfrak{u}(m,n-m)$ & $\mathfrak{so}^\star(2m)+ \mathfrak{so}^\star(2n-2m)$ \\
\hline $\mathfrak{sp}(n, \mathbb R)$ & $\mathfrak{u}(m,n-m)$ & $\mathfrak{sp}(m, \mathbb R)+ \mathfrak{sp}(n-m, \mathbb R)$ \\
\hline $\mathfrak e_{6(-14)}$ & $\mathfrak{so}(2,8)+\mathfrak{so}(2) $ & $\mathfrak{so}(2,8)+\mathfrak{so}(2)$ \\
\hline $\mathfrak e_{6(-14)}$ & $\mathfrak{su}(2,4)+\mathfrak{su}(2) $ & $\mathfrak{su}(2,4)+\mathfrak{su}(2)$ \\
\hline $\mathfrak e_{6(-14)}$ & $\mathfrak{so}^\star(10)+\mathfrak{so}(2) $ & $\mathfrak{su}(5,1)+\mathfrak{sl}(2, \mathbb R)$ \\
\hline $\mathfrak e_{7(-25)}$ & $\mathfrak{so}^\star(12)+\mathfrak{su}(2) $ & $\mathfrak{su}(6,2)$ \\
\hline $\mathfrak e_{7(-25)}$ & $\mathfrak{so}(2,10)+\mathfrak{sl}(2, \mathbb R) $ & $\mathfrak e_{6(-14)} + \mathfrak{so}(2)$ \\
\hline $\mathfrak{su}(n,n)$ & $\mathfrak{so}^\star(2n)$ &$\mathfrak{sp}(n,\mathbb R)$ \\
\hline $\mathfrak{so}(2,2n)$ & $\mathfrak{so}(2,2k+1)+ \mathfrak{so}(2n-2k-1)$ & $\mathfrak{so}(2,2n-2k-1)+ \mathfrak{so}(2k+1)$ \\
\hline
\end{tabular} \\
Table 3. $\pi_\lambda^G$ holomorphic Discrete Series. \\
The last two lines show the unique holomorphic pairs so that $U \not= T.$
\end{center}
\section{Partial list of symbols and definitions}
\noindent
- $(\tau ,W),$ $(\sigma, Z)$, $L^2(G \times_\tau W), L^2(H\times_\sigma Z)$ (cf. Section~\ref{sec:prelim}).\\
-$\,H^2(G,\tau)=V_\lambda=V_\lambda^G $, $H^2(H,\sigma)=V_\mu^H, \pi_\mu^H, \pi_\nu^K. $ (cf. Section~\ref{sec:prelim}).\\
-$\pi_\lambda =\pi_\lambda^G$, $d_\lambda=d(\pi_\lambda)$ dimension of $\pi_\lambda,$ $P_\lambda, P_\mu, K_\lambda, K_\mu, $ (cf. Section~\ref{sec:prelim}). \\
-$P_X$ orthogonal projector onto subspace $X$.\\
-$\Phi(x)=P_W \pi (x)P_W$ spherical function attached to the lowest $K$-type $W$ of $\pi_\lambda$.\\
-$K_\lambda(y,x)=d(\pi_\lambda)\Phi(x^{-1}y)$.\\
-$M_{K-fin} (resp. M^\infty) $ $K-$finite vectors in $M$ (resp. smooth vectors in $M$).\\
-$dg,dh$ Haar measures on $G$, $H$.\\
-A unitary representation is {\it square integrable}, equivalently a {\it Discrete Series} representation, (resp. {\it integrable}) if some nonzero matrix coefficient is square integrable (resp. integrable) with respect to Haar measure. \\
-$\Theta_{\pi_\mu^H}(...)$ Harish-Chandra character of the representation $\pi_\mu^H.$\\
-For a module $M$ and a simple submodule $N$, $M[N]$ denotes the {\it isotypic component} of $N$ in $M$. That is, $M[N]$ is the sum of all irreducible submodules isomorphic to $N.$ If topology is involved, we define $M[N]$ to be the closure of $M[N].$ \\
-$ M_{H-disc}$ is the closure of the linear subspace spanned by the totality of $H-$irreducible submodules. $ M_{disc}:= M_{G-disc}$\\
-A representation $M$ is $H-${\it discretely decomposable} if $ M_{H-disc} =M.$\\
-A representation is $H-${\it admissible} if it is $H-$discretely decomposable and each isotypic component is equal to a finite sum of $H-$irreducible representations.\\
-$U(\mathfrak g) $ (resp. $\mathfrak z(U(\mathfrak g))=\mathfrak z_\mathfrak g$) universal enveloping algebra of the Lie algebra $\mathfrak g$ (resp. center of the universal enveloping algebra).\\
-$\mathrm{Cl}(X)= $closure of the set $X$.\\
-$I_X$ identity function on set $X$.\\
-$\mathbb T$ one dimensional torus.\\
-$Z_S$ identity connected component of the center of the group $S$.\\
-$S^{(r)}(V)$ the $r^{th}$-symmetric power of the vector space $V$.
|
{
"arxiv_id": "2302.14151",
"language": "en",
"timestamp": "2023-03-01T02:02:41",
"url": "https://arxiv.org/abs/2302.14151",
"yymm": "2302"
} | \section{Introduction} \label{sec:introduction}
Bilinear constraints in conjunction with network models appear in various mixed-integer nonlinear programming (MINLP) applications, such as fixed-charge network flow problems \cite{rebennack:na:pa:2009}, and network models with complementarity constraints, such as transportation problems with conflicts \cite{ficker:sp:wo:2021}.
A common occurrence of bilinear terms in network models pertains to bilevel network problems after being reformulated as a single-level program through either using a dual formulation of the inner problem or incorporating optimality conditions inside the outer problem \cite{benayed:bl:1990,chiou:2005}.
These reformulation approaches are widely used in the network interdiction problems, where newly added bilinear terms are relaxed using a \textit{linearization} technique based on the McCormick bounds \cite{mccormick:1976} over a box domain; see \cite{smith:li:2008} for an exposition.
While these relaxations provide the convex hull of the bilinear constraint over a box domain \cite{alkhayyal:fa:1983}, they often lead to weak relaxations when the domain of the variables becomes more complicated, as in general polyhedra \cite{gupte:ah:ch:de:2013,davarnia:2016}.
It has been shown \cite{boland:de:ka:mo:ri:2017,luedtke:na:li:2012} that even when the number of bilinear terms in the underlying function increases, the McCormick bounds can be very poor compared to the ones obtained from the convex and concave envelopes of the function.
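\medskip
For concreteness, we recall the McCormick relaxation of a single bilinear term (standard material, stated here in the bound setting used later in the paper): for $z = xy$ with $0 \leq x \leq u$ and $0 \leq y \leq 1$, the linearization consists of the four inequalities
\[
z \geq 0, \qquad z \geq x + u y - u, \qquad z \leq u y, \qquad z \leq x,
\]
which describe the convex hull of $\left\{(x,y,z) \,\middle|\, z = xy,\, 0 \leq x \leq u,\, 0 \leq y \leq 1\right\}$ \cite{alkhayyal:fa:1983}.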
\medskip
There are various studies in the literature that develop convexification methods for multilinear functions as a generalization of bilinear forms, but the side constraints for the involved variables are often limited to variable bounds.
For instance, \cite{delpia:kh:2021} introduces a new class of valid inequalities, called running intersection inequalities, for the multilinear polytope described by a set of multilinear equations.
The authors in \cite{gupte:ka:ri:wa:2020} derive extended formulations for the convex hull of the graph of a bilinear function on the $n$-dimensional unit cube through identifying the facets of the Boolean Quadratic Polytope.
In \cite{fampa:lee:2021}, an efficient method to convexify bilinear functions through McCormick relaxations is proposed which takes advantage of the structural convexity in a symmetric quadratic form.
Other works in the literature consider a polyhedral, often triangular, subdivision of the domain to derive strong valid inequalities for a bilinear set; see \cite{tawarmalani:ri:xi:2012,sherali:al:1992,locatelli:sch:2014} for examples of such approaches.
Further, \cite{davarnia:ri:ta:2017} proposes a constructive procedure, referred to as \textit{extended cancel-and-relax} (EC\&R), to simultaneously convexify the graph of bilinear functions over a general polytope structure.
In this paper, we make use of the EC\&R\xspace procedure to derive convexification results for a bilinear set where the side constraints on variables are described by a network flow model as defined next.
\medskip
For $N := \{1,\dotsc,n\}$, $M := \{1,\dotsc,m\}$, $K := \{1,\dotsc,\kappa\}$, and $T := \{1, \dotsc, \tau\}$, we consider
\[
\mcl{S} = \left\{ (\vc{x};\vc{y};\vc{z}) \in \Xi \times \Delta_m \times {\mathbb R}^{\kappa} \, \middle|\,
\vc{y}^{\intercal} A^k \vc{x} = z_{k}, \, \, \, \forall k \in K
\right\},
\]
where $\Xi = \left\{ \vc{x} \in {\mathbb R}^n \, \middle| \, E\vc{x} \geq \vc{f}, \, \vc{0} \leq \vc{x} \leq \vc{u} \right\}$ is a primal network polytope, and $\Delta_m = \left\{ \vc{y} \in {\mathbb R}_+^{m} \, \middle| \, \vc{1}^{\intercal} \vc{y} \leq 1 \right\}$ is a simplex.
When variables $\vc{y}$ are binary, $\Delta_m$ represents a \textit{special ordered set of type I} (SOS1); see \cite{beale:fo:1976} for an exposition.
Such simplex structures appear in various applications and can be obtained by reformulating the underlying polytopes through extreme point decomposition; see \cite{davarnia:ri:ta:2017} for a detailed account.
In the above definition, $E \in {\mathbb R}^{\tau \times n}$, $\vc{f} \in {\mathbb R}^{\tau}$, $\vc{u} \in {\mathbb R}^n$, and $A^k \in {\mathbb R}^{m\times n}$ is a matrix with all elements equal to zero except one that is equal to one, i.e., if $A^k_{ji} = 1$ for some $(i,j) \in N \times M$, the bilinear constraint with index $k$ represents $y_jx_i = z_k$.
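\medskip
To fix ideas, consider the following toy instance (ours, for illustration only): let $n=2$, $m=1$, $\kappa=1$, and let $\Xi$ be described by the flow-balance pair $x_1 - x_2 \geq 0$ and $x_2 - x_1 \geq 0$ together with $\vc{0} \leq \vc{x} \leq \vc{u}$, i.e., a single transshipment node with inflow arc $1$ and outflow arc $2$. Taking $A^1_{11}=1$, the set $\mcl{S}$ imposes the single bilinear constraint $y_1x_1 = z_1$ over $\Xi \times \Delta_1 \times {\mathbb R}$. We return to this instance when illustrating the cut-generating procedure in Section~\ref{sec:ECR}.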
\medskip
The contributions of this paper are as follows.
We propose a systematic procedure to convexify $\mcl{S}$ and derive explicit inequalities in its description.
The resulting cutting planes are directly obtained in the original space of variables.
We show that facet-defining inequalities in the convex hull description can be explicitly derived by identifying special tree and forest structures in the underlying network, leading to an interpretable and efficient cut-generating oracle.
The inequalities obtained from our proposed algorithms can be added to the typical McCormick relaxations to strengthen the formulation and improve the bounds.
The presented methods consider a general network structure, which complement and generalize the results of \cite{davarnia:ri:ta:2017} that are obtained for network interdiction problems.
In particular, \cite{davarnia:ri:ta:2017} presents the convexification results for a special case of $\mcl{S}$ where $m = 1$, $\kappa = 1$ and $\Xi$ is a dual network polytope.
In this work, we extend these results by considering the cases where $M$ and $K$ can have multiple elements and $\Xi$ is a primal network polytope.
\medskip
The remainder of the paper is organized as follows.
We give a brief background on the EC\&R\xspace procedure as a basis of our analysis in Section~\ref{sec:ECR}.
In Section~\ref{sec:primal}, we obtain convexification results for bilinear terms defined over a network polytope.
Preliminary computational experiments are presented in Section~\ref{sec:computation} to show the effectiveness of the developed cut-generating frameworks.
We give concluding remarks in Section~\ref{sec:conclusion}.
\medskip
\textbf{Notation.}
Bold letters represent vectors.
For a given set $S \subseteq {\mathbb R}^n$, we denote by $\mathop{\rm conv}(S)$ its convex hull.
We use symbol $\pm$ to show both cases with $+$ and $-$.
For example, when we use $l^{\pm}$ in an expression, it means that expression holds for both cases $l^+$ and $l^-$.
\section{Extended Cancel-and-Relax} \label{sec:ECR}
In this section, we present the EC\&R\xspace procedure adapted for $\mcl{S}$.
The following theorem is at the core of this procedure, as shown in Theorem 2.7 of \cite{davarnia:ri:ta:2017}.
\begin{theorem}
A convex hull description of $\mcl{S}$ can be obtained by the linear constraints in $\Xi$ and $\Delta_m$ together with all class-$l^{\pm}$ EC\&R\xspace inequalities for all $l \in K$.
\Halmos
\end{theorem}
The procedure to generate a class-$l^{\pm}$ EC\&R\xspace inequality is as follows.
\begin{enumerate}
\item
We select $l \in K$ to be the index of a bilinear constraint used in the aggregation, which we refer to as the \textit{base} equality.
We also select a sign indicator $+$ or $-$ to indicate whether a weight $1$ or $-1$ is used for the base equality during aggregation.
\item
We select $\mcl{L}$ and $\bar{\mcl{L}}$ as disjoint subsets of $K \setminus \{l\}$.
Then, for each $k \in \mcl{L}$ (resp. $k \in \bar{\mcl{L}}$), we multiply $\vc{y}^{\intercal} A^k \vc{x} - z_{k} = 0$ by $\beta^+_k$ (resp. $-\beta^-_k$), where $\beta^+_k \geq 0$ (resp. $\beta^-_k \geq 0$).
\item
Defining $T$ as the index set of the non-bound constraints in $\Xi$, we select $\mcl{I}_1$, $\cdots$, $\mcl{I}_m$ and $\bar{\mcl{I}}$ as subsets of $T$ whose intersection is empty.
Then, for each $j \in M$ and for each $t \in \mcl{I}_j$ (resp. $t \in \bar{\mcl{I}}$), we multiply the constraint $E_{t.}\vc{x} \geq f_t$ by $\gamma^j_ty_j$ where $\gamma^j_t \geq 0$ (resp. by $\theta_t(1-\sum_{i \in M}y_i)$ where $\theta_t \geq 0$).
\item
We select $\mcl{J}$ and $\bar{\mcl{J}}$ as disjoint subsets of $N$.
Then, for each index $i \in \mcl{J}$, we multiply $x_i \geq 0$ by $\lambda_i(1-\sum_{j \in M}y_j)$ where $\lambda_i \geq 0$, and for each $i \in \bar{\mcl{J}}$, we multiply $u_i - x_i \geq 0$ by $\mu_i(1-\sum_{j \in M}y_j)$ where $\mu_i \geq 0$.
\end{enumerate}
The above sets are compactly represented as $\big[\mcl{L},\bar{\mcl{L}}\big| \mcl{I}_1,\dotsc,\mcl{I}_m,\bar{\mcl{I}} \big| \mcl{J},\bar{\mcl{J}}\big]$, which is called an \textit{EC\&R\xspace assignment}.
Each EC\&R\xspace assignment is identified by its class-$l^{\pm}$ where $l$ is the index of the base equality and $\pm$ is its sign indicator.
We next aggregate all aforementioned weighted constraints.
During the aggregation, we require that weights $\beta$, $\gamma$, $\theta$, $\lambda$ and $\mu$ be chosen in such a way that:
\begin{itemize}
\item[\textit{(C1)}]
at least $|\mcl{L}|+|\bar{\mcl{L}}|+\sum_{j \in M}|\mcl{I}_j|+|\bar{\mcl{I}}|+|\mcl{J}|+|\bar{\mcl{J}}|$ bilinear terms are canceled (i.e., their coefficients become zero), and
\item[\textit{(C2)}]
if $\mcl{L} \cup \bar{\mcl{L}} \cup_{j \in M}\mcl{I}_j \cup \bar{\mcl{I}} \cup \mcl{J} \cup \bar{\mcl{J}} \neq \emptyset$, for each constraint used in the aggregation (including the base equality), at least one bilinear term among all those created after multiplying that constraint with its corresponding weight is canceled.
\end{itemize}
The desired EC\&R\xspace inequality is then obtained by relaxing (i.e., replacing) the remaining bilinear terms $x_iy_j$ in the aggregated inequality using either $x_iy_j \geq 0$ or $u_iy_j - x_iy_j \geq 0$, depending on the sign of their coefficients.
The resulting linear inequality is referred to as a class-$l^{\pm}$ EC\&R\xspace inequality.
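\medskip
As a minimal worked illustration (using the toy instance from Section~\ref{sec:introduction} with constraints $x_1 - x_2 \geq 0$, $x_2 - x_1 \geq 0$, $\vc{0} \leq \vc{x} \leq \vc{u}$ and the single bilinear constraint $y_1x_1 = z_1$), consider the class-$1^{+}$ assignment that selects only the row $x_2 - x_1 \geq 0$ into $\mcl{I}_1$. Aggregating the base equality $y_1x_1 - z_1 = 0$ with weight $1$ and $y_1\left(x_2 - x_1\right) \geq 0$ with weight $1$ cancels the bilinear term $y_1x_1$, so conditions (C1) and (C2) hold, and the aggregated inequality is $y_1x_2 - z_1 \geq 0$. Relaxing the remaining term through $u_2y_1 - y_1x_2 \geq 0$ yields the EC\&R\xspace inequality $z_1 \leq u_2 y_1$, which is valid for $\mcl{S}$ and dominates the McCormick bound $z_1 \leq u_1 y_1$ whenever $u_2 < u_1$.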
\medskip
Next, we present a summary of the derivation of the EC\&R\xspace procedure which will be used in subsequent sections; we refer the reader to \cite{davarnia:ri:ta:2017} for a detailed account.
It can be shown that $\vc{y}$ components in the extreme points of $\mcl{S}$ are binary-valued.
As a result, the convex hull of $\mcl{S}$ can be obtained as a disjunctive union of a finite number of polytopes, each fixing $\vc{y}$ at an extreme point of the simplex $\Delta_m$.
A description for this convex hull can be obtained in a higher-dimensional space using the reformulation-linearization technique \cite{sherali:ad:dr:1998,sherali:al:1992} or disjunctive programming \cite{balas:1998} through addition of new variables, as shown below.
\begin{equation}
\left.
\begin{array}{lll}
& \pm A^k_{j.} \vc{w}^j \mp v^j_k \geq 0, &\forall (k, j) \in K \times M \\
& \mp \left(z_k - \sum_{j \in M} v^j_k\right) \geq 0, &\forall k \in K \\
& E\vc{w}^j \geq \vc{f} y_j, &\forall j \in M \\
& E\left(\vc{x}-\sum_{j \in M}\vc{w}^j\right) \geq \vc{f}\left(1-\sum_{j \in M}y_j\right), \\
& \vc{0} \leq \vc{w}^j \leq \vc{u} y_j, &\forall j \in M \\
& \vc{0} \leq \vc{x}-\sum_{j \in M}\vc{w}^j \leq \vc{u}\left(1-\sum_{j \in M}y_j\right).
\end{array}
\right. \label{eq_ef}
\end{equation}
\medskip
In the above description, variables $\vc{w}^j$ and $v^j_k$ can be viewed as $y_j \vc{x}$ and $y_j z_k$, respectively, and the equalities are formulated as pairs of inequalities of opposite directions.
Because the convex hull description in \eqref{eq_ef} contains additional variables, we can use polyhedral projection to obtain a convex hull description in the space of original variables.
\medskip
Define the dual variables associated with the constraints of \eqref{eq_ef} by $\vc{\alpha}^{j\pm}, \vc{\beta}^\pm \in {\mathbb R}^{\kappa}_+$, $\vc{\gamma}^{j}, \vc{\theta} \in {\mathbb R}^{\tau}_+$, and $\vc{\eta}^j, \vc{\rho}^j, \vc{\lambda}, \vc{\mu} \in {\mathbb R}^{n}_+$, respectively.
It follows from Proposition 2.3 of \cite{davarnia:ri:ta:2017} that the collection of inequalities
\begin{equation}
\sum_{i \in N} q_i(\vc{\pi}) x_i + \sum_{j \in M} r_j(\vc{\pi}) y_j + \sum_{k \in K} s_k(\vc{\pi}) z_k \ge t(\vc{\pi}), \label{eq:proj-facet}
\end{equation}
where
\begin{equation*}
\begin{array}{rll}
q_i(\vc{\pi}) &=& \sum_{t \in T}E_{ti} \theta_t + \lambda_i - \mu_i \nonumber \\
r_j(\vc{\pi}) &=& \sum_{t \in T}f_t \left(\theta_t - \gamma^j_t\right) + \sum_{i \in N}u_i\left(\rho^j_i - \mu_i\right) \nonumber \\
s_k(\vc{\pi}) &=& -\left(\beta^+_k - \beta^-_k\right) \nonumber\\
t(\vc{\pi}) &=& \sum_{t \in T}f_t \theta_t - \sum_{i \in N}u_i \mu_i, \nonumber
\end{array}
\end{equation*}
for all extreme rays $\vc{\pi} = \left(\vc{\beta}^+;\vc{\beta}^-;\{\vc{\gamma}^j\}_{j \in M};\vc{\theta};\{\vc{\eta}^j\}_{j \in M};\{\vc{\rho}^j\}_{j \in M};\vc{\lambda};\vc{\mu}\right)$ of the projection cone
\begin{multline*}
\mathcal{C} =
\left\{
\vc{\pi} \in {\mathbb R}_+^{2\kappa + (m+1)(\tau + 2n)} \, \middle| \, \sum_{k \in K}A^k_{ji}\left(\beta^+_k - \beta^-_k\right) + \sum_{t \in T} E_{ti} \left(\gamma^j_t - \theta_t\right) \right. \\
\left. + \eta^j_i - \rho^j_i - \lambda_i + \mu_i = 0, \, \, \, \forall (i,j) \in N \times M \right\}
\end{multline*}
contains all \textit{non-trivial} facet-defining inequalities in the convex hull description of $\mcl{S}$.
A non-trivial inequality is one that cannot be implied by the linear constraints in the description of $\Xi$ and $\Delta_m$.
It is easy to verify that an extreme ray of $\mcl{C}$ that has components $\beta^{\pm}_k = 0$ for all $k \in K$ leads to a trivial inequality.
Therefore, we may assume that $\beta_l^{\pm} = 1$ for some $l \in K$ in an extreme ray associated with a non-trivial facet-defining inequality of $\mathop{\rm conv}(\mcl{S})$ through proper scaling.
As a result, the search for the extreme rays of $\mcl{C}$ reduces to that of the extreme points of the restriction set $\mcl{C}^l$ for all $l \in K$, where
\begin{multline} \label{eq:Cl}
\mathcal{C}^{l} =
\left\{
\vc{\pi}^l \in {\mathbb R}^{2(\kappa-1) + (m+1)(\tau + 2n)}_{+} \, \middle|
\sum_{k \in K_l}A^k_{ji}\left(\beta^+_k - \beta^-_k\right) + \sum_{t \in T} E_{ti} \left(\gamma^j_t - \theta_t\right) \right. \\
\left. + \eta^j_i - \rho^j_i - \lambda_i + \mu_i = \pm A^l_{ji}, \, \forall (i,j) \in N \times M \right\},
\end{multline}
where $K_l = K \setminus \{l\}$, and $\vc{\pi}^l$ is defined similarly to $\vc{\pi}$, but without elements $\beta^+_l$ and $\beta^-_l$.
\medskip
The components in the dual vector $\vc{\pi}^l$ can be interpreted as the weights used in the EC\&R\xspace procedure as follows.
The fixing of the component $\beta^{\pm}_l$ is achieved in Step 1 of the EC\&R\xspace procedure by picking a base equality $l$ with either $+1$ or $-1$ weights.
The components $\vc{\beta}^{\pm}$ represent the weights of the bilinear constraints in $K_l$ as described in Step 2 of the EC\&R\xspace procedure.
The components $\vc{\gamma}^j$ (resp. $\vc{\theta}$) can be viewed as the weights for $y_j$ (resp. $1 - \sum_{j \in M} y_j$) when multiplied with the non-bound constraints in $\Xi$ as demonstrated in Step 3 of the EC\&R\xspace procedure.
Similarly, $\vc{\lambda}$ and $\vc{\mu}$ denote the weights for $1 - \sum_{j \in M} y_j$ when multiplied with the bound constraints in $\Xi$ in Step 4 of the EC\&R\xspace procedure.
Finally, the relaxation step in the EC\&R\xspace procedure will use the components $\vc{\eta}^j$ and $\vc{\rho}^j$ as the weights for $y_j$ when multiplied with the bound constraints in $\Xi$ to \textit{cancel} the remaining bilinear terms.
It can be shown that criteria (C1) and (C2) of the EC\&R\xspace procedure provide necessary conditions for the selected weight vector to be an extreme point of $\mcl{C}^l$.
The resulting EC\&R\xspace inequality is of the form \eqref{eq:proj-facet}, and the collection of all such inequalities contain all non-trivial facet-defining inequalities in $\mathop{\rm conv}(\mcl{S})$.
\begin{comment}
\begin{remark} \label{rem:squential ECR}
Even though the EC\&R\xspace procedure provides a systematic tool to obtain valid inequalities for $\mathop{\rm conv}(\mcl{S})$, it requires the selection of the constraints together with their weights to be used in the aggregation.
This task can be computationally cumbersome for general sets as it requires searching for weights that satisfy condition (C1) of the EC\&R\xspace.
One way to circumvent this difficulty is to search for a special class of EC\&R\xspace assignments that are obtained by a \textit{sequential EC\&R\xspace procedure}.
In this procedure, the constraints used in the aggregation are selected sequentially, starting from the base equality with weight $\pm 1$ depending on the class sign.
At each step, a new constraint that satisfies condition (C2) of EC\&R\xspace is selected to be added to the current aggregated inequality with a suitable weight so that (1) all the bilinear terms that have been canceled so far remain canceled, and (2) at least one new bilinear term becomes canceled.
This procedure builds an EC\&R\xspace assignment by adding the constraints to the assignment one at a time.
The steps of the sequential EC\&R\xspace procedure ensure the satisfaction of condition (C1) of the EC\&R\xspace by design.
The main advantage of this approach is that we can quickly find a weight for a newly selected constraint to cancel one new bilinear term in a time linear in the total number of bilinear terms, whereas finding the aggregation weights for the entire selection of constraints in the assignment has a cubic time complexity because of solving a linear system of equations.
\end{remark}
\end{comment}
\section{Network Polytopes} \label{sec:primal}
In this section, we study the set $\mcl{S}$ where $\Xi$ represents a network polytope.
In particular, the constraint set $E\vc{x} \geq \vc{f}$ is composed of the flow-balance constraints after being separated into two inequalities of opposite signs, i.e., $E$ is an augmented node-arc incidence matrix where each row is duplicated with a negative sign, and $\vc{f}$ represents the extended supply/demand vector.
In this description, $\vc{u}$ denotes the arc-capacity vector.
First, we show that the representation of the EC\&R\xspace assignment can be simplified for set $\mcl{S}$ because of the special structure of the bilinear constraints in its description.
\medskip
\begin{remark} \label{rem:reduce-bilinear}
In $\mcl{S}$, each bilinear constraint has a single bilinear term that does not appear in any other bilinear constraints.
In particular, for each $k \in K$, the set contains the bilinear constraint $y_jx_i - z_k = 0$ for some $(i,j) \in N \times M$ such that $A^k_{ji} = 1$.
As a result, we can skip Step 2 in the EC\&R\xspace procedure and merge it into the relaxation step, in which the bound constraints on variables are multiplied with $y_j$ to relax any remaining bilinear term in the aggregated inequality.
As such, we may remove the sets $\mcl{L}$ and $\bar{\mcl{L}}$ from the EC\&R\xspace assignment to reformat it as $\big[\mcl{I}_1,\dotsc,\mcl{I}_m,\bar{\mcl{I}} \big| \mcl{J},\bar{\mcl{J}}\big]$, and perform the aggregation on the constraints in this reduced assignment to satisfy conditions (C1) and (C2), which are adjusted accordingly by dropping $\mcl{L}$ and $\bar{\mcl{L}}$ in their statements.
In the resulting aggregated inequality, any remaining bilinear term of the form $y_jx_i$ can be relaxed using either $-y_jx_i + u_iy_j \geq 0$ corresponding to the dual weight $\rho^j_i$ or $-y_jx_i + z_k \geq 0$ with $k \in K$ such that $A^k_{ji} = 1$ corresponding to the dual weight $\beta^-_k$.
Similarly, we can relax $-y_jx_i$ using either $y_jx_i \geq 0$ corresponding to the dual weight $\eta^j_i$ or $y_jx_i - z_k \geq 0$ with $k \in K$ such that $A^k_{ji} = 1$ corresponding to the dual weight $\beta^+_k$.
\end{remark}
\subsection{The Case with $m = 1$.} \label{subsec:primal-single}
In this section, we consider the case where $m = 1$, whose corresponding bilinear set is denoted by $\mcl{S}^1$.
In this case, we can simplify notation by matching the indices of $\vc{z}$ and $\vc{x}$ variables such that $y_1x_k = z_k$ for all $k \in K = N$.
We next show that, to generate an EC\&R\xspace inequality for $\mcl{S}^1$, it is sufficient to use aggregation weight $1$ for all constraints used in the aggregation.
\begin{proposition} \label{prop:primal-weight}
Let $\bar{\vc{\pi}}^l$ be an extreme point of the projection cone $\mcl{C}^l$, for some $l \in K$, corresponding to a non-trivial facet-defining inequality of $\mathop{\rm conv}(\mcl{S}^1)$. Then, it can be scaled in such a way that all of its components are 0 or 1.
\end{proposition}
\proof{Proof.}
When $m=1$, we can write $\mcl{C}^l$ in \eqref{eq:Cl} as
\begin{multline*}
\mathcal{C}^{l} =
\left\{
\vc{\pi}^l \in {\mathbb R}^{2(\kappa-1) + 2(\tau + 2n)}_{+} \, \middle|
\sum_{t \in T} E_{ti} \left(\gamma^1_t - \theta_t\right) + \mu_i - \lambda_i \right. \\
\left. + \eta^1_i - \rho^1_i + \sum_{k \in K_l}A^k_{1i}\left(\beta^+_k - \beta^-_k\right) = \pm A^l_{1i}, \, \forall i \in N \right\}.
\end{multline*}
We can rearrange the columns of the coefficient matrix of the system defining $\mathcal{C}^{l}$ to obtain
\begin{equation}
\Bigl[
\begin{array}{c|c|c|c|c|c|c|c}
E^{^{\intercal}} \, & \, -E^{^{\intercal}} \, & \, I \, & \, -I \, & \, I \, & \, -I & \, \bar{I} \, & \, -\bar{I}
\end{array}
\Bigr]. \label{eq:ECR-matrix}
\end{equation}
In the above matrix, the rows correspond to bilinear terms $y_1x_i$ (i.e., $w^1_i$ in the disjunctive programming formulation \eqref{eq_ef}) for $i \in N$.
The first and second column blocks correspond to the weights of the non-bound constraints in $\Xi$ multiplied by $y_{1}$ and $1-y_{1}$, which are denoted by $\gamma^1_t$ and $\theta_t$, respectively, for all $t \in T$.
The third and fourth column blocks correspond to the weights of the lower and upper bound constraints on variables in $\Xi$ multiplied by $1-y_{1}$, which are captured by $\mu_i$ and $\lambda_i$, respectively, for all $i \in N$.
Similarly, the fifth and sixth column blocks correspond to the weights of the lower and upper bound constraints on variables in $\Xi$ multiplied by $y_{1}$, which are recorded by $\eta^1_i$ and $\rho^1_i$, respectively, for all $i \in N$.
In these columns, $I$ represents the identity matrix of appropriate size.
Lastly, the seventh and eighth column blocks correspond to the weights of the bilinear constraints in $\mcl{S}^1$, which are represented by $\beta^+_k$ and $\beta^-_k$, respectively, for all $k \in K_l$.
In particular, the element at column $k \in K_l$ and row $i \in N$ of $\bar{I}$ is equal to $1$ if $i=k$, and it is equal to zero otherwise.
Based on these column values, it can be easily verified that \eqref{eq:ECR-matrix} is totally unimodular (TU).
In $\mathcal{C}^{l}$, the right-hand-side vector is $\pm\vc{e}^{l} \in {\mathbb R}^{n}$, where $\vc{e}^{l}$ is the unit vector whose components are all zero except for that corresponding to row $l$ representing $y_1x_l$, which is equal to $1$.
Because $\bar{\vc{\pi}}^l$ is an extreme point of $\mathcal{C}^{l}$, it is associated with a basic feasible solution for its system of equations.
Let $B$ be the corresponding basis for \eqref{eq:ECR-matrix}.
It follows from Cramer's rule that all elements of $B^{-1}$ belong to $\{0,-1,1\}$ since \eqref{eq:ECR-matrix} is TU.
Therefore, the components of $\pm B^{-1}\vc{e}^{l}$ belong to $\{0,-1,1\}$.
We conclude that the components of basic feasible solutions to $\mathcal{C}^{l}$ are equal to $0$ or $1$ due to non-negativity of all variables in its description.
\Halmos
\endproof
\medskip
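Before proceeding, we note that the total unimodularity invoked in the proof can be verified directly on small instances.
The following brute-force sketch (illustrative only: it enumerates all square submatrices, which is practical just for toy instances of \eqref{eq:ECR-matrix}) checks that every square submatrix has determinant in $\{-1,0,1\}$.
\begin{verbatim}
# Brute-force total unimodularity test: every square submatrix must
# have determinant in {-1, 0, 1}. Exponential in the matrix size, so
# intended only for small illustrative instances of the matrix above.
from itertools import combinations
import numpy as np

def is_totally_unimodular(A, tol=1e-6):
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                d = np.linalg.det(A[np.ix_(rows, cols)])
                if all(abs(d - v) > tol for v in (-1.0, 0.0, 1.0)):
                    return False
    return True
\end{verbatim}
\medskip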
\begin{remark} \label{rem:1-simplex}
When $m = 1$, multiplying the bound constraints with $1-y_1$ in Step 3 of the EC\&R\xspace procedure produces two of the standard McCormick bounds.
As a result, we can skip Step 3 in the EC\&R\xspace procedure and merge it into the relaxation step, in which the other two McCormick bounds are used for relaxing the remaining bilinear terms.
Considering Remark~\ref{rem:reduce-bilinear}, any remaining bilinear term in the aggregated inequality can be \textit{relaxed} using either of the two McCormick lower bounds, either of the two McCormick upper bounds, or the $\pm z$ variable corresponding to that term, depending on its sign.
In this case, the characterization of EC\&R\xspace assignment can be reduced further to $\big[\mcl{I}_1,\bar{\mcl{I}}\big]$.
\end{remark}
\medskip
\begin{remark} \label{rem:1-separation}
As described in Remark~\ref{rem:1-simplex}, each remaining bilinear term in the aggregated inequality of the EC\&R\xspace procedure can be relaxed into three different linear terms.
While this can lead to an exponential growth in the number of linear EC\&R\xspace inequalities resulting from each EC\&R\xspace assignment, we can use an efficient separation procedure to find the most violated one among them as follows.
Assume that we aim to separate a given solution $(\bar{\vc{x}}; \bar{y}_1; \bar{\vc{z}})$ from $\mathop{\rm conv}(\mcl{S}^1)$ through the EC\&R\xspace inequalities obtained from the aggregated inequality $g(\bar{\vc{x}}; y_1; \bar{\vc{z}}) \geq 0$ associated with the EC\&R\xspace assignment $\big[\mcl{I}_1,\bar{\mcl{I}}\big]$.
For each bilinear term $y_1x_i$, we choose the relaxation option that provides the minimum value among $u_i\bar{y}_1$ obtained from using $y_1(u_i - x_i) \geq 0$, $\bar{x}_i$ obtained from using $(1-y_1)x_i \geq 0$, and $\bar{z}_i$ obtained from using $-y_1x_i + z_i \geq 0$.
Similarly, for each bilinear term $-y_1x_i$, we choose the relaxation option that provides the minimum value among $0$ obtained from using $y_1x_i \geq 0$, $u_i - \bar{x}_i - u_i\bar{y}_1$ obtained from using $(1-y_1)(u_i - x_i) \geq 0$, and $-\bar{z}_i$ obtained from using $y_1x_i - z_i \geq 0$.
This approach provides the most violated EC\&R\xspace inequality in time linear in the number of remaining bilinear terms in the aggregated inequality.
\end{remark}
\medskip
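For concreteness, the following minimal sketch implements the term-by-term choice described in Remark~\ref{rem:1-separation}; all identifiers are illustrative, and the aggregated inequality is assumed to be encoded by the index sets of its remaining bilinear terms together with the value \texttt{g0} of its linear part at the candidate point.
\begin{verbatim}
# Sketch of the separation rule in Remark 1-separation (illustrative
# names). pos_terms / neg_terms hold the indices i of the remaining
# bilinear terms +y_1*x_i / -y_1*x_i, u holds the upper bounds u_i.

def most_violated_ecr_value(pos_terms, neg_terms, u,
                            x_bar, y1_bar, z_bar, g0):
    val = g0
    for i in pos_terms:
        # relax +y_1*x_i via y_1*(u_i - x_i) >= 0, (1-y_1)*x_i >= 0,
        # or -y_1*x_i + z_i >= 0, keeping the cheapest substitute
        val += min(u[i] * y1_bar, x_bar[i], z_bar[i])
    for i in neg_terms:
        # relax -y_1*x_i via y_1*x_i >= 0, (1-y_1)*(u_i - x_i) >= 0,
        # or y_1*x_i - z_i >= 0
        val += min(0.0, u[i] - x_bar[i] - u[i] * y1_bar, -z_bar[i])
    return val
\end{verbatim}
A negative return value certifies a violated inequality, and the minimizing option chosen for each term identifies the relaxation to apply, so the overall effort is linear in the number of remaining bilinear terms.
\medskip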
Considering the relation between the extreme points of the projection cone $\mcl{C}^l$ for $l \in K$ and the aggregation weights in the EC\&R\xspace procedure, Proposition~\ref{prop:primal-weight} and Remark~\ref{rem:1-simplex} imply that generating class-$l^{\pm}$ EC\&R\xspace inequalities reduces to identifying the assignment $\big[\mcl{I}_1,\bar{\mcl{I}}\big]$ as the aggregation weights are readily determined.
In particular, the constraints in $\mcl{I}_1$ are multiplied with $y_1$, and those in $\bar{\mcl{I}}$ are multiplied with $1-y_1$.
We next show that, for set $\mcl{S}^1$, identifying all the EC\&R\xspace assignments that satisfy the EC\&R\xspace conditions (C1) and (C2) can be achieved by considering a special graphical structure in the underlying network.
\medskip
Given a network $\mr{G} = (\mr{V},\mr{A})$ with a node set $\mr{V}$ and arc set $\mr{A}$, assume that the index $k$ of variables $z_k$ in the description of $\mcl{S}^1$ refers to the arc whose flow variable $x_k$ appears in that bilinear constraint, i.e., $y_1x_k = z_k$ for $k \in \mr{A} = N = K$.
We define $t(k)$ and $h(k)$ to be the tail and head nodes of arc $k \in \mr{A}$, respectively.
Further, for any node $i \in \mr{V}$, we define $\delta^+(i)$ and $\delta^-(i)$ to be the set of outgoing and incoming arcs at that node, respectively.
We refer to the flow-balance inequality $\sum_{k \in \delta^+(i)} x_k - \sum_{k \in \delta^-(i)} x_k \geq f_i$ (resp. $-\sum_{k \in \delta^+(i)} x_k + \sum_{k \in \delta^-(i)} x_k \geq -f_i$) corresponding to node $i$ as the \textit{positive} (resp. \textit{negative}) flow-balance inequality, and refer to its index in the description of $\Xi$ by $i^+$ (resp. $i^-$) to be recorded in the EC\&R\xspace assignment.
For example, an EC\&R\xspace assignment $\big[\{i^+\},\{j^-\}\big]$ implies that, in the aggregation,
the positive flow-balance inequality corresponding to the node $i \in \mr{V}$ is multiplied with $y_1$, and the negative flow-balance inequality corresponding to the node $j \in \mr{V}$ is multiplied with $1-y_1$.
In the sequel, we denote the undirected variant of a subnetwork $\mr{P}$ of $\mr{G}$ by $\bar{\mr{P}}$, and conversely, we denote the directed variant of an undirected subnetwork $\bar{\mr{P}}$ of $\bar{\mr{G}}$ by $\mr{P}$ according to the arc directions in $\mr{G}$.
\begin{proposition} \label{prop:primal-tree}
Consider set $\mcl{S}^1$ with $\Xi$ that represents the network polytope corresponding to the network $\mr{G} = (\mr{V},\mr{A})$.
Let $\big[\mcl{I}_1,\bar{\mcl{I}}\big]$ be an EC\&R\xspace assignment for class-$l^{\pm}$, for some $l \in \mr{A}$, that leads to a non-trivial facet-defining inequality of $\mathop{\rm conv}(\mcl{S}^1)$.
Define $\widetilde{\mcl{I}} = \{i \in \mr{V} | i^{\pm} \in \mcl{I}_1 \cup \bar{\mcl{I}} \}$ to be the subset of nodes whose flow-balance inequalities are used in the aggregation.
Then, there exists a tree $\bar{\mr{T}}$ of $\bar{\mr{G}}$ composed of the nodes in $\widetilde{\mcl{I}}$ such that arc $l$ is incident to exactly one node of $\bar{\mr{T}}$.
\end{proposition}
\proof{Proof.}
First, we observe that, for each node $i \in \mr{V}$, the positive and negative flow-balance inequalities cannot both be selected for the aggregation: otherwise, the columns representing these two inequalities in the basis of the coefficient matrix \eqref{eq:ECR-matrix} associated with the extreme point of $\mcl{C}^l$ would be linearly dependent, contradicting the fact that the selected EC\&R\xspace assignment leads to a facet-defining inequality of $\mathop{\rm conv}(\mcl{S}^1)$; see the proof of Proposition~\ref{prop:primal-weight} for details.
As a result, considering that $\mcl{I}_1 \cap \bar{\mcl{I}} = \emptyset$ by the EC\&R\xspace requirement, at most one of the following possibilities can occur in the EC\&R\xspace assignment: $i^+ \in \mcl{I}_1$, $i^+ \in \bar{\mcl{I}}$, $i^- \in \mcl{I}_1$, and $i^- \in \bar{\mcl{I}}$.
Therefore, each node in $\widetilde{\mcl{I}}$ corresponds to a unique flow-balance constraint in the EC\&R\xspace assignment.
Next, we show that arc $l$ is incident to exactly one node of $\widetilde{\mcl{I}}$.
It follows from condition (C2) of the EC\&R\xspace procedure that the bilinear term $y_1x_l$ for arc $l$ in the base equality must be canceled during the aggregation.
The constraints of $\Xi$ that can produce the bilinear term $y_1x_l$ during the aggregation are the flow-balance constraint corresponding to the tail node $t(l)$ of arc $l$, and the flow-balance constraint corresponding to the head node $h(l)$ of arc $l$.
Since the aggregation weights for all the constraints in the EC\&R\xspace assignment are $1$ according to Proposition~\ref{prop:primal-weight}, and considering that each flow-balance constraint can appear at most once in the aggregation as noted above, the only possibility to cancel the term $y_1x_l$ is to pick exactly one of the above constraints in the EC\&R\xspace assignment.
As a result, exactly one of the head and the tail nodes of arc $l$ must be in $\widetilde{\mcl{I}}$.
Next, we show that there exists a tree $\bar{\mr{T}}$ of $\bar{\mr{G}}$ whose node set is $\widetilde{\mcl{I}}$.
Assume by contradiction that there is no such tree composed of the nodes in $\widetilde{\mcl{I}}$.
Therefore, $\widetilde{\mcl{I}}$ can be partitioned into two subsets $\widetilde{\mcl{I}}_1$ and $\widetilde{\mcl{I}}_2$, where the nodes in $\widetilde{\mcl{I}}_1$ are not adjacent to any nodes in $\widetilde{\mcl{I}}_2$.
It is clear that arc $l$ cannot be incident to the nodes in both $\widetilde{\mcl{I}}_1$ and $\widetilde{\mcl{I}}_2$, since otherwise $\widetilde{\mcl{I}}_1$ and $\widetilde{\mcl{I}}_2$ would have adjacent nodes.
Assume without loss of generality that arc $l$ is incident to a node in $\widetilde{\mcl{I}}_1$.
Since the given EC\&R\xspace assignment leads to a facet-defining inequality after applying the relaxation step, its aggregation weights correspond to an extreme point of $\mcl{C}^l$ as described in the proof of Proposition~\ref{prop:primal-weight}.
The resulting system of equations for the associated basic feasible solution can be written as
\begin{equation}
\left[
\def1.2{1.2}
\begin{array}{ccc|ccc|c}
\pm E_1 & \pm I_1 & \pm \bar{I}_1 \, & \, 0 & 0 & 0 \, & \, C_1 \\
\hline
0 & 0 & 0 \, & \pm E_2 & \pm I_2 & \pm \bar{I}_2 \, & \, C_2 \\
\hline
\hline
0 & 0 & 0 \, & 0 & 0 & 0 \, & \, C_3 \\
\end{array}
\right]
\left[
\begin{array}{c}
\vc{1} \\
\hline
\vc{1} \\
\hline
\vc{0}
\end{array}
\right]
=
\left[
\begin{array}{c}
\pm \vc{e}^{l} \\
\hline
\vc{0} \\
\hline
\hline
\vc{0}
\end{array}
\right], \label{eq:matrix-tree}
\end{equation}
where the columns and rows of the basis matrix have been suitably reordered.
In \eqref{eq:matrix-tree}, the first (resp. second) row block corresponds to bilinear terms $y_1x_i$ for arcs $i \in \mr{A}$ that are incident to the nodes in $\widetilde{\mcl{I}}_1$ (resp. $\widetilde{\mcl{I}}_2$), and the last row block corresponds to all the other bilinear terms that do not appear during aggregation.
The first (resp. fourth) column block denotes the transpose of the node-arc incidence matrix for nodes in $\widetilde{\mcl{I}}_1$ (resp. $\widetilde{\mcl{I}}_2$).
The second (resp. fifth) column block contains positive or negative multiples of columns of the identity matrix representing the weights used in the relaxation step of the EC\&R\xspace procedure corresponding to the McCormick bounds.
The third (resp. sixth) column block represents positive or negative multiples of the bilinear constraints in the description of $\mcl{S}^1$ used in the relaxation step corresponding to the arcs appearing in the first (resp. second) row blocks.
All these columns have weights equal to 1 according to Proposition~\ref{prop:primal-weight} as denoted in the first two row blocks of the solution vector multiplied with this matrix.
The last column block in the basis corresponds to the constraints that have weights $0$ in the basic feasible solution and are added to complete the basis.
Lastly, $\vc{e}^{l}$ is a unit vector whose elements are all zeros except that corresponding to $y_1x_l$, which is equal to $1$.
It is now easy to verify that the linear combination of the columns in the column blocks 4, 5 and 6 of the basis matrix with weights $1$ yields the zero vector.
This shows that these columns are linearly dependent, a contradiction.
\Halmos
\endproof
\medskip
Proposition~\ref{prop:primal-tree} implies that each non-trivial EC\&R\xspace inequality can be obtained as an aggregation of constraints corresponding to a special tree structure.
The next theorem provides the converse result that aggregating constraints associated with each special tree structure can produce EC\&R\xspace inequalities.
\begin{theorem} \label{thm:primal-tree-converse}
Consider set $\mcl{S}^1$ with $\Xi$ that represents the network polytope corresponding to the network $\mr{G} = (\mr{V},\mr{A})$.
Let $\bar{\mr{T}}$ be a tree in $\bar{\mr{G}}$ with the node set $\widetilde{\mcl{I}} \subseteq \mr{V}$, and let $l \in \mr{A}$ be an arc that is incident to exactly one node of $\bar{\mr{T}}$.
Then, for any partition $\widetilde{\mcl{I}}_1$ and $\widetilde{\mcl{I}}_2$ of $\widetilde{\mcl{I}}$ (i.e., $\widetilde{\mcl{I}}_1 \cap \widetilde{\mcl{I}}_2 = \emptyset$ and $\widetilde{\mcl{I}}_1 \cup \widetilde{\mcl{I}}_2 = \widetilde{\mcl{I}}$), we have that
\begin{itemize}
\item[(i)] if $h(l) \in \widetilde{\mcl{I}}$, then $\big[\{i^+\}_{i \in \widetilde{\mcl{I}}_1},\{i^-\}_{i \in \widetilde{\mcl{I}}_2}\big]$ is an EC\&R\xspace assignment for class-$l^{+}$,
\item[(ii)] if $h(l) \in \widetilde{\mcl{I}}$, then $\big[\{i^-\}_{i \in \widetilde{\mcl{I}}_1},\{i^+\}_{i \in \widetilde{\mcl{I}}_2}\big]$ is an EC\&R\xspace assignment for class-$l^{-}$,
\item[(iii)] if $t(l) \in \widetilde{\mcl{I}}$, then $\big[\{i^-\}_{i \in \widetilde{\mcl{I}}_1},\{i^+\}_{i \in \widetilde{\mcl{I}}_2}\big]$ is an EC\&R\xspace assignment for class-$l^{+}$,
\item[(iv)] if $t(l) \in \widetilde{\mcl{I}}$, then $\big[\{i^+\}_{i \in \widetilde{\mcl{I}}_1},\{i^-\}_{i \in \widetilde{\mcl{I}}_2}\big]$ is an EC\&R\xspace assignment for class-$l^{-}$.
\end{itemize}
\end{theorem}
\proof{Proof.}
We show the result for part (i), as the proof for parts (ii)--(iv) follows from similar arguments.
It suffices to show that the aggregation procedure performed on the constraints in the proposed assignment satisfies the EC\&R\xspace conditions (C1) and (C2).
For condition (C1), we need to show that at least $|\widetilde{\mcl{I}}_1| + |\widetilde{\mcl{I}}_2|$ bilinear terms are canceled during aggregation.
Let $R$ be the set of arcs in $\mr{T}$, which is the directed variant of $\bar{\mr{T}}$ obtained by replacing each edge with its corresponding arc in $\mr{G}$.
It is clear from the definition that $l \notin R$.
As a result, for each $r \in R$, the only constraints in the aggregation that contain $x_r$ are the flow-balance constraints corresponding to the head node $h(r)$ and tail node $t(r)$ of $r$ since both of these nodes are included in $\widetilde{\mcl{I}}$ as $r$ is an arc in $\mr{T}$.
There are four cases depending on which parts of the partition of $\widetilde{\mcl{I}}$ these head and tail nodes belong to.
For the first case, assume that $h(r) \in \widetilde{\mcl{I}}_1$ and $t(r) \in \widetilde{\mcl{I}}_1$.
It follows from the EC\&R\xspace assignment in case (i) that the positive flow-balance constraints $h(r)^+$ and $t(r)^+$ are used in the aggregation with weights $y_1$.
In particular, we have $y_1\big(\sum_{k \in \delta^+(h(r))} x_k - \sum_{k \in \delta^-(h(r))\setminus \{r\}} x_k - x_r \geq f_{(h(r))}\big)$ added with $y_1\big(\sum_{k \in \delta^+(t(r))\setminus \{r\}} x_k - \sum_{k \in \delta^-(t(r))} x_k + x_r \geq f_{(t(r))}\big)$, which results in the cancellation of $y_1x_r$.
For the second case, assume that $h(r) \in \widetilde{\mcl{I}}_1$ and $t(r) \in \widetilde{\mcl{I}}_2$.
It follows from the EC\&R\xspace assignment that the positive flow-balance constraint $h(r)^+$ and the negative flow-balance constraint $t(r)^-$ are used in the aggregation with weights $y_1$ and $(1-y_1)$, respectively.
In particular, we have $y_1\big(\sum_{k \in \delta^+(h(r))} x_k - \sum_{k \in \delta^-(h(r))\setminus \{r\}} x_k - x_r \geq f_{(h(r))}\big)$ added with $(1-y_1)\big(-\sum_{k \in \delta^+(t(r))\setminus \{r\}} x_k + \sum_{k \in \delta^-(t(r))} x_k - x_r \geq -f_{(t(r))}\big)$, which results in the cancellation of $y_1x_r$.
For the remaining two cases, we can use similar arguments by changing the inequality signs to show that the term $y_1x_r$ will be canceled during aggregation.
As a result, we obtain at least $|R|$ cancellations during the aggregation corresponding to the arcs of $\mr{T}$.
Since $\bar{\mr{T}}$ is a tree, we have that $|R| = |\widetilde{\mcl{I}}_1| + |\widetilde{\mcl{I}}_2| - 1$.
Finally, for arc $l$, it follows from the assumption of case (i) in the problem statement that $h(l) \in \widetilde{\mcl{I}}$ and $t(l) \notin \widetilde{\mcl{I}}$.
If $h(l) \in \widetilde{\mcl{I}}_1$, then according to the EC\&R\xspace procedure for class-$l^+$, we aggregate $y_1x_l - z_l = 0$ with $y_1\big(\sum_{k \in \delta^+(h(l))} x_k - \sum_{k \in \delta^-(h(l))\setminus \{l\}} x_k - x_l \geq f_{(h(l))}\big)$, which results in the cancellation of $y_1x_l$.
If $h(l) \in \widetilde{\mcl{I}}_2$, we aggregate $y_1x_l - z_l = 0$ with $(1-y_1)\big(-\sum_{k \in \delta^+(h(l))} x_k + \sum_{k \in \delta^-(h(l))\setminus \{l\}} x_k + x_l \geq -f_{(h(l))}\big)$, which results in the cancellation of $y_1x_l$.
As a result, in total we have at least $|R| + 1 = |\widetilde{\mcl{I}}_1| + |\widetilde{\mcl{I}}_2|$ cancellations during the aggregation of the constraints in the EC\&R\xspace assignment, showing the satisfaction of condition (C1) of the EC\&R\xspace procedure.
For condition (C2) of the EC\&R\xspace procedure, we need to show that for each constraint used in the aggregation, including the base equality, at least one bilinear term among those created after multiplication of that constraint with its corresponding weight is canceled.
There are two types of constraints to consider.
The first type is the flow-balance constraints in $\widetilde{\mcl{I}}_1$ and $\widetilde{\mcl{I}}_2$, which correspond to the nodes of $\bar{\mr{T}}$.
It follows from the previous discussion that for each node $i \in \widetilde{\mcl{I}}_1 \cup \widetilde{\mcl{I}}_2$, the bilinear term $y_1x_r$, where $r$ is an arc in $\mr{T}$ that is incident to $i$, i.e., $h(r) = i$ or $t(r) = i$, is canceled during aggregation.
This proves that at least one bilinear term is canceled in the inequality obtained after multiplying the corresponding flow-balance constraint at node $i$ with $y_1$ or $1-y_1$.
The second type of constraints used in the aggregation is the base equality $l$.
The proof follows from an argument similar to that given above where we showed that the bilinear term $y_1x_l$ that appears in the base constraint $y_1x_l - z_l = 0$ is canceled.
We conclude that condition (C2) of the EC\&R\xspace procedure is satisfied for all constraints used in the aggregation.
\Halmos
\endproof
\medskip
In view of Theorem~\ref{thm:primal-tree-converse}, note that for the most basic choice of the tree $\bar{\mr{T}}$, i.e., the empty tree, the resulting EC\&R\xspace inequalities recover the classical McCormick bounds.
Therefore, considering any nonempty tree structure can potentially improve the McCormick results by adding new valid inequalities for the bilinear set.
\medskip
Proposition~\ref{prop:primal-tree} and Theorem~\ref{thm:primal-tree-converse} suggest that the EC\&R\xspace assignments have a simple graphical interpretation for $\mcl{S}^1$, which can be used to generate all non-trivial EC\&R\xspace inequalities describing $\mathop{\rm conv}(\mcl{S}^1)$ without searching over all possible constraints and aggregation weights that satisfy the EC\&R\xspace conditions, as is common for general bilinear sets.
This attribute can significantly mitigate cut-generation effort when used systematically to produce cutting planes.
Such a systematic procedure can be designed by identifying trees of a given network and then following the result of Theorem~\ref{thm:primal-tree-converse} to obtain the corresponding EC\&R\xspace assignments.
We illustrate this method in the following example.
\medskip
\begin{example} \label{ex:primal-tree}
Consider set $\mcl{S}^1$ where $\Xi$ represents the network model corresponding to a \textit{spiked cycle} graph $\mr{G} = (\mr{V},\mr{A})$ shown in Figure~\ref{fig:primal-tree}.
We refer to each arc in this network as a pair $(i,j)$ of its tail node $i$ and its head node $j$, and denote its corresponding flow variable as $x_{i,j}$.
Assume that we are interested in finding EC\&R\xspace assignments for class-$(1,5)^{+}$.
According to Theorem~\ref{thm:primal-tree-converse}, we need to identify the trees that contain exactly one of the tail and head nodes of arc $(1,5)$.
For instance, we may select the tree $\bar{\mr{T}}$ composed of the nodes $\widetilde{\mcl{I}} = \{8,4,1,2,6\}$, which contains the tail node of arc $(1,5)$.
Consider the partition of $\widetilde{\mcl{I}}$ into $\widetilde{\mcl{I}}_1 = \{8,2\}$ and $\widetilde{\mcl{I}}_2 = \{4,1,6\}$.
Following case (iii) in Theorem~\ref{thm:primal-tree-converse}, we can obtain the EC\&R\xspace assignment $\big[\{8^-, 2^-\},\{4^+, 1^+, 6^+\}\big]$ for class-$(1,5)^{+}$.
As a result, we multiply the negative flow-balance constraints at nodes $8$ and $2$ with $y_1$, and we multiply the positive flow-balance constraints at nodes $4$, $1$, and $6$ with $1-y_1$, and aggregate them with the base bilinear equality corresponding to arc $(1,5)$ with weight 1 to obtain the aggregated inequality
\begin{multline*}
-z_{1,5} - y_1 x_{2,3} - y_1 x_{4,3} + (f_8 + f_2 +f_1 + f_4 + f_6)y_1\\
+ x_{1,5} -x_{2,1} + x_{4,3} - x_{8,4} + x_{6,2} - f_1 - f_4 - f_6 \geq 0
\end{multline*}
where $f_i$ denotes the supply/demand value at node $i$.
Following Remark~\ref{rem:1-simplex}, we may relax each remaining bilinear term $-y_1 x_{2,3}$ and $-y_1x_{4,3}$ into three possible linear expressions, leading to 9 total EC\&R\xspace inequalities.
If implemented inside a separation oracle, we can use Remark~\ref{rem:1-separation} to find the most violated inequality among these 9 efficiently.
\end{example}
\medskip
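To indicate how such a procedure might be organized in practice, the following sketch (all identifiers and the size cap are illustrative assumptions) grows connected node sets of $\bar{\mr{G}}$ that touch exactly one endpoint of the base arc $l$ and reads off the class-$l^{+}$ EC\&R\xspace assignments prescribed by Theorem~\ref{thm:primal-tree-converse}; the class-$l^{-}$ assignments are obtained by swapping the sign patterns.
\begin{verbatim}
# Sketch: generate class-l^+ EC&R assignments from trees touching
# exactly one endpoint of arc l (identifiers are illustrative).
from itertools import combinations

def ecr_assignments(nodes, arcs, l, max_tree_size=3):
    t_l, h_l = l
    adj = {v: set() for v in nodes}
    for (t, h) in arcs:
        adj[t].add(h); adj[h].add(t)
    for root, other in ((h_l, t_l), (t_l, h_l)):
        head_side = (root == h_l)
        seen = {frozenset([root])}
        stack = [frozenset([root])]
        while stack:  # grow connected node sets avoiding `other`
            tree = stack.pop()
            yield from assignments_from(tree, head_side)
            if len(tree) < max_tree_size:
                for v in tree:
                    for w in adj[v] - tree - {other}:
                        ext = tree | {w}
                        if ext not in seen:
                            seen.add(ext); stack.append(ext)

def assignments_from(tree, head_side):
    tree = sorted(tree)
    for r in range(len(tree) + 1):
        for part in combinations(tree, r):
            rest = [v for v in tree if v not in part]
            if head_side:  # case (i): [{i+}_I1, {i-}_I2]
                yield ([f"{v}+" for v in part],
                       [f"{v}-" for v in rest])
            else:          # case (iii): [{i-}_I1, {i+}_I2]
                yield ([f"{v}-" for v in part],
                       [f"{v}+" for v in rest])
\end{verbatim}
For instance, on the network of Figure~\ref{fig:primal-tree} with base arc $(1,5)$ and a size cap of at least $5$, the sketch recovers, among others, the assignment $\big[\{8^-, 2^-\},\{4^+, 1^+, 6^+\}\big]$ of Example~\ref{ex:primal-tree}.
\medskip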
\begin{figure}[!t]
\centering
\includegraphics[scale=3]{G1.png}
\caption{Graph $\mr{G}$ of Example~\ref{ex:primal-tree}}\label{fig:primal-tree}
\end{figure}
Consider a generalization of $\mcl{S}^1$ where the bilinear constraints may contain multiple bilinear terms:
\[
\widetilde{\mcl{S}}^1 = \left\{ (\vc{x};y;\vc{z}) \in \Xi \times \Delta_1 \times {\mathbb R}^{\kappa} \, \middle|\,
y_1 A^k \vc{x} = z_{k}, \, \, \, \forall k \in K
\right\},
\]
where $A^k$ is a matrix of appropriate size with potentially multiple nonzero elements.
For instance, $\widetilde{\mcl{S}}^1$ may include the constraint $2y_1x_i -5 y_1x_j = z_k$ for some $i, j \in N$.
In this case, the coefficient matrix \eqref{eq:ECR-matrix} of $\mcl{C}^l$ will be modified as follows after rearranging columns and rows.
\begin{equation}
\Bigl[
\begin{array}{c|c|c|c|c|c|c|c}
E^{^{\intercal}} \, & \, -E^{^{\intercal}} \, & \, I \, & \, -I \, & \, I \, & \, -I & \, \tilde{A} \, & \, -\tilde{A}
\end{array}
\Bigr]. \label{eq:ECR-matrix-3}
\end{equation}
In the above matrix, the row and column blocks are defined similarly to those of \eqref{eq:ECR-matrix} with a difference that the seventh and eighth column blocks correspond to the weights of the bilinear constraints $y_1A^k\vc{x} = z_k$, which are represented by $\beta^+_k$ and $\beta^-_k$ in the dual weight vector, respectively, for all $k \in K_l$.
In particular, the element at column $k \in K_l$ and row $i \in N$ of $\tilde{A}$ is equal to $A^k_{1i}$.
It is clear from the structure of \eqref{eq:ECR-matrix-3} that this matrix may lose the TU property when $A^k$ contains multiple nonzero elements for some $k \in K_l$.
In fact, this property may not hold even if $A^k_{1i} \in \{0,1,-1\}$ for all $k \in K_l$ and $i \in N$.
As a result, there is no guarantee that the aggregation weights for all the EC\&R\xspace inequalities corresponding to non-trivial facets of $\mathop{\rm conv}(\widetilde{\mcl{S}}^1)$ will be $1$.
While an explicit derivation of the convex hull description through identifying special network structures, such as those presented for $\mcl{S}^1$, may not be attainable for this problem in its original space of variables, we can use the following ancillary result to apply Theorem~\ref{thm:primal-tree-converse} and obtain a convex hull description for $\widetilde{\mcl{S}}^1$ in a higher-dimensional space.
\begin{proposition} \label{prop:convex-high}
Consider sets
\[
\mcl{S}^1 = \left\{ (\vc{x};y;\vc{w}) \in \Xi \times \Delta_1 \times {\mathbb R}^{n} \, \middle|\,
y_1x_i = w_{i}, \, \, \, \forall i \in N
\right\},
\]
and
\[
\mcl{D} = \left\{ (\vc{x};y;\vc{w};\vc{z}) \in \Xi \times \Delta_1 \times {\mathbb R}^{n} \times {\mathbb R}^{\kappa} \, \middle|\,
A^k\vc{w} = z_{k}, \, \, \, \forall k \in K
\right\}.
\]
Then,
\begin{equation}
\mathop{\rm conv}\left( (\mcl{S}^1 \times {\mathbb R}^{\kappa}) \cap \mcl{D} \right) = \left(\mathop{\rm conv}(\mcl{S}^1) \times {\mathbb R}^{\kappa}\right) \cap \mcl{D}. \label{eq:conv_high}
\end{equation}
\end{proposition}
\proof{Proof.}
We prove the result by showing both directions of inclusion for the equality.
The direct inclusion follows from the fact that the convex hull of the intersection of two sets is a subset of the intersection of the convex hulls of those sets, together with the observation that $\mcl{D}$ is a polyhedron and hence convex.
For the reverse inclusion, we need to show that $\mathop{\rm conv}\left( (\mcl{S}^1 \times {\mathbb R}^{\kappa}) \cap \mcl{D} \right) \supseteq \left(\mathop{\rm conv}(\mcl{S}^1) \times {\mathbb R}^{\kappa}\right) \cap \mcl{D}$.
Consider a point $\bar{\phi} = (\bar{\vc{x}};\bar{y};\bar{\vc{w}};\bar{\vc{z}}) \in \left(\mathop{\rm conv}\left( \mcl{S}^1 \right) \times {\mathbb R}^{\kappa}\right) \cap \mcl{D}$.
We show that $\bar{\phi} \in \mathop{\rm conv}\left( (\mcl{S}^1 \times {\mathbb R}^{\kappa}) \cap \mcl{D} \right)$.
It follows from the assumption that $\bar{z}_k = A^k\bar{\vc{w}}$ for all $k \in K$.
Further, there must exist a finite collection of points $\hat{\phi}^j = (\hat{\vc{x}}^j;\hat{y}^j;\hat{\vc{w}}^j) \in \mcl{S}^1$ for $j \in J$ such that $\bar{\vc{x}} = \sum_{j \in J}\omega_j\hat{\vc{x}}^j$, $\bar{y} = \sum_{j \in J}\omega_j\hat{y}^j$, and $\bar{\vc{w}} = \sum_{j \in J}\omega_j\hat{\vc{w}}^j$ for some non-negative weights $\omega_j$ such that $\sum_{j \in J}\omega_j = 1$.
Consider the set of points $\dot{\phi}^j = (\dot{\vc{x}}^j;\dot{y}^j;\dot{\vc{w}}^j;\dot{\vc{z}}^j)$ for $j \in J$ such that $\dot{\vc{x}}^j = \hat{\vc{x}}^j$, $\dot{y}^j = \hat{y}^j$, $\dot{\vc{w}}^j = \hat{\vc{w}}^j$, and $\dot{\vc{z}}^j_k = A^k\dot{\vc{w}}^j$ for all $k \in K$.
It is clear that $\dot{\phi}^j \in (\mcl{S}^1 \times {\mathbb R}^{\kappa}) \cap \mcl{D}$ for all $j \in J$ by definition of the components of these points.
It follows that $\bar{\phi} = \sum_{j \in J}\omega_j\dot{\phi}^j$, since $\bar{\vc{x}} = \sum_{j \in J}\omega_j\dot{\vc{x}}^j$, $\bar{y} = \sum_{j \in J}\omega_j\dot{y}^j$, and $\bar{\vc{w}} = \sum_{j \in J}\omega_j\dot{\vc{w}}^j$ by definition, and since $\bar{z}_k = A^k\bar{\vc{w}} = A^k(\sum_{j \in J}\omega_j\hat{\vc{w}}^j) = \sum_{j \in J}\omega_j A^k\hat{\vc{w}}^j = \sum_{j \in J}\omega_j A^k\dot{\vc{w}}^j = \sum_{j \in J}\omega_j \dot{\vc{z}}^j_k$ for all $k \in K$.
This proves that $\bar{\phi} \in \mathop{\rm conv}\left( (\mcl{S}^1 \times {\mathbb R}^{\kappa}) \cap \mcl{D} \right)$.
\Halmos
\endproof
\medskip
The result of Proposition~\ref{prop:convex-high} shows that we can obtain a convex hull description for $\widetilde{\mcl{S}}^1$ in a higher dimension, which is expressed on the left-hand-side of \eqref{eq:conv_high}, by finding the convex hull of $\mcl{S}^1$ through application of Theorem~\ref{thm:primal-tree-converse} and then intersecting it with the linear constraints in $\mcl{D}$ as indicated on the right-hand-side of \eqref{eq:conv_high}.
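For instance, for the constraint $2y_1x_i - 5y_1x_j = z_k$ mentioned earlier, the resulting description consists of the facet-defining inequalities of $\mathop{\rm conv}(\mcl{S}^1)$ stated in the $(\vc{x};y_1;\vc{w})$-space together with the linear equation $2w_i - 5w_j = z_k$.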
\subsection{The Case with $m > 1$.} \label{subsec:primal-multi}
In this section, we consider the general case where $m > 1$ in $\mcl{S}$.
The coefficient matrix of $\mcl{C}^l$ in \eqref{eq:Cl} can be written as follows after a suitable rearrangement of columns and rows.
\begin{gather} \label{eq:ECR-matrix-2}
\left[
\def1.2{1.5}
\begin{array}{c|c|c|c||c||c|c||cc|cc|c|cc||cc|cc|c|cc}
E^{^{\intercal}} \, & \vc{0} \, & \, \dotsc \, & \, \vc{0} \, & \, \, -E^{^{\intercal}} \, & I \, & \, -I \, & \, I \, & \, -I \, & \vc{0} \, & \, \vc{0} \, & \, \dotsc \, & \, \vc{0} \, & \, \vc{0} \, & \, \bar{I}^1 \, & \, -\bar{I}^1 \, & \, \vc{0} \, & \, \vc{0} \, & \, \dotsc \, & \, \vc{0} \, & \, \vc{0} \, \\
\hline
\, \vc{0} \, & E^{^{\intercal}} \, & \, \dotsc \, & \, \vc{0} \, & \, -E^{^{\intercal}} \, & \, I \, & \, -I \, & \vc{0} \, & \, \vc{0} \, & \, I \, & \, -I \, & \, \dotsc \, & \, \vc{0} \, & \, \vc{0} \, & \, \vc{0} \, & \, \vc{0} \, & \bar{I}^2 \, & \, -\bar{I}^2 \, & \, \dotsc \, & \, \vc{0} \, & \, \vc{0} \\
\hline
\, \vdots \, & \, \vdots \, & \, \ddots \, & \, \vdots \, & \, \vdots \, & \, \vdots \, & \vdots \, & \, \vdots \, & \, \vdots \, & \, \vdots \, & \, \ddots \, & \, \vdots \, & \, \vdots \, & \, \vdots \, & \, \vdots \, & \vdots \, & \, \vdots \, & \, \ddots \, & \, \vdots \, & \, \vdots \, \\
\hline
\, \vc{0} \, & \, \vc{0} \, & \, \dotsc \, & E^{^{\intercal}} \, & \, -E^{^{\intercal}} \, & \, I \, & \, -I \, & \vc{0} \, & \, \vc{0} \, & \, \vc{0} \, & \, \vc{0} \, & \, \dotsc \, & \, I \, & \, -I \, & \, \vc{0} \, & \, \vc{0} \, & \, \vc{0} \, & \, \vc{0} \, & \, \dotsc \, & \bar{I}^m \, & \, -\bar{I}^m \, \\
\end{array}
\right].
\end{gather}
In the above matrix, each row block $j \in M$ represents the bilinear terms $y_jx_i$ (i.e., $w^j_i$ in the disjunctive programming formulation \eqref{eq_ef}) for all $i \in N$.
The first $m$ column blocks correspond to the weights of the flow-balance constraints in $\Xi$ multiplied by $y_{j}$ for $j \in M$, which are denoted by $\gamma^j_t$ for all $t \in T$ in the dual vector.
The next column block represents the weights of the flow-balance constraints in $\Xi$ multiplied by $1-\sum_{j \in M}y_{j}$, which are denoted by $\theta_t$ for all $t \in T$ in the dual vector.
The next two column blocks indicate the lower and upper bound constraints on variables in $\Xi$ multiplied by $1-\sum_{j \in M}y_{j}$, which are recorded by $\lambda_i$ and $\mu_i$, respectively, for all $i \in N$.
The next $2m$ columns blocks correspond to the weights of the lower and upper bound constraints on variables in $\Xi$ multiplied by $y_{j}$ for $j \in M$, which are recorded by $\eta^j_i$ and $\rho^j_i$, respectively, for all $i \in N$.
The last $2m$ column blocks correspond to the weights of the positive and negative bilinear constraints in $\mcl{S}$, which are represented by $\beta^+_k$ and $\beta^-_k$ for all $k \in K_l$.
For instance, for constraint $y_jx_i - z_k \geq 0$ with $(i,j) \in N\times M$ and $k \in K_l$, the elements of column $k$ in $\bar{I}^j$ are all zero except the one in the row corresponding to the bilinear term $y_jx_i$ which is equal to one.
\medskip
It is clear from the structure of \eqref{eq:ECR-matrix-2} that this matrix does not have the TU property, implying that a result similar to that of Proposition~\ref{prop:primal-weight} does not necessarily hold.
Therefore, there is no guarantee that the aggregation weights for all the EC\&R\xspace assignments obtained from the extreme points of $\mcl{C}^l$ are $1$.
In fact, Example~\ref{ex:ECR-weight} shows that there exist EC\&R\xspace assignments whose aggregation weights map to extreme points of $\mcl{C}^l$ with components that are not all $0$ or $1$.
\medskip
\begin{example} \label{ex:ECR-weight}
Consider set $\mcl{S}$ where $\Xi$ describes the network polytope corresponding to network $\mr{G} = (\mr{V},\mr{A})$ in Figure~\ref{fig:primal-tree}, and $\Delta = \{(y_1,y_2) \in {\mathbb R}^2_+ \,|\, y_1 + y_2 \leq 1\}$.
Select class-$l^+$ corresponding to the base equality $y_1x_{8,4} - z_l = 0$.
Let $\big[\mcl{I}_1,\mcl{I}_2,\bar{\mcl{I}} \big| \mcl{J},\bar{\mcl{J}}\big]$ be an assignment for class-$l^+$ where $\mcl{I}_1 = \{4^+, 3^+\}$, $\mcl{I}_2 = \{2^-\}$, $\bar{\mcl{I}} = \{1^+\}$, $\mcl{J} = \{(4,1)\}$, and $\bar{\mcl{J}} = \{(2,3)\}$.
Next, we show that the above assignment is an EC\&R\xspace assignment for class-$l^+$ when considering the dual weight $1$ for all constraints except the bound constraint in $\mcl{J}$ which has a dual weight of $2$ in the aggregation.
Specifically, we aggregate the base constraint $y_1x_{8,4} - z_l = 0$ with weight $1$; the positive flow-balance constraint at node $4$, that is, $x_{4,1} + x_{4,3} - x_{8,4} \geq f_4$, with weight $y_1$; the positive flow-balance constraint at node $3$, that is, $x_{3,7} - x_{4,3} - x_{2,3} \geq f_3$, with weight $y_1$; the negative flow-balance constraint at node $2$, that is, $x_{6,2} - x_{2,1} - x_{2,3} \geq -f_2$, with weight $y_2$; the positive flow-balance constraint at node $1$, that is, $x_{1,5} - x_{4,1} - x_{2,1} \geq f_1$, with weight $1-y_1-y_2$; the lower bound constraint for arc $(4,1)$, that is, $x_{4,1} \geq 0$, with weight $2(1-y_1-y_2)$; and the upper bound constraint for arc $(2,3)$, that is, $u_{2,3} - x_{2,3} \geq 0$, with weight $1-y_1-y_2$.
During this aggregation, six bilinear terms will be canceled, satisfying condition (C1) of the EC\&R\xspace procedure.
Further, at least one bilinear term for each constraint involved in the aggregation is canceled, satisfying condition (C2) of the EC\&R\xspace procedure.
Therefore, the above assignment is a valid EC\&R\xspace assignment for class-$l^+$.
Next, we argue that the dual weight vector for this assignment corresponds to an extreme point of $\mcl{C}^l$ in \eqref{eq:Cl}.
For $\mcl{S}$ in this example, the coefficient matrix of $\mcl{C}^l$, as depicted in \eqref{eq:ECR-matrix-2}, has 16 rows corresponding to bilinear terms $y_jx_i$ for all $j = 1,2$ and $i \in \mr{A}$.
It is easy to verify that the columns corresponding to the six constraints in the above EC\&R\xspace assignment are linearly independent.
As a result, we can form a basis by adding to the above six columns 10 more linearly independent columns corresponding to the constraints used in the relaxation step for the bilinear terms remaining in the aggregated inequality together with the columns that complete the basis.
The resulting basis corresponds to a basic feasible solution where all variables (interpreted as the dual weights for constraints involved in the EC\&R\xspace procedure) are $0$ or $1$, except the one associated with the column representing the lower bound constraint for arc $(4,1)$ which has the dual weight equal to $2$.
Therefore, there exist extreme points of $\mcl{C}^l$ with components that are not all $0$ or $1$.
\end{example}
\medskip
According to the above observation, identifying the aggregation weights in the EC\&R\xspace procedure for $\mcl{S}$ is not as straightforward as it is for $\mcl{S}^1$; nevertheless, we next show that a generalization of the tree structure can still be detected for a given EC\&R\xspace assignment.
First, we give a few definitions that will be used to derive these results.
\medskip
\begin{definition} \label{def:primal-parallel-1}
Consider set $\mcl{S}$ where $\Xi$ describes the network polytope corresponding to network $\mr{G} = (\mr{V},\mr{A})$.
We define a \textit{parallel network} $\mr{G}^j = (\mr{V}^j,\mr{A}^j)$ for $j \in M$ to be a replica of $\mr{G}$ that represents the multiplication of flow variables $\vc{x}$ with $y_j$ during the aggregation procedure.
To simplify presentation, we use the same node and arc notation across all parallel networks.
For instance, for each node $v \in \mr{V}$ (resp. arc $a \in \mr{A}$), its replica $v$ belongs to $\mr{V}^j$ (resp. $a$ belongs to $\mr{A}^j$).
\end{definition}
\medskip
\begin{definition} \label{def:primal-parallel-3}
We say that a collection of subnetworks $\dot{\mr{G}}^k = (\dot{\mr{V}}^k,\dot{\mr{A}}^k)$, for $k=1,\dotsc,r$, of parallel networks $\{\mr{G}^j\}_{j \in M}$ are \textit{vertically connected through the connection nodes $C_v \subseteq \mr{V}$ and connection arcs $C_a \subseteq \mr{A}$} if there exists an ordering $s_1, s_2, \dotsc, s_r$ of indices $1,\dotsc,r$ such that for each $i=1, \dotsc, r-1$, there exists either (i) an arc $a \in C_a$ such that $a$ is incident to a node of $\dot{\mr{G}}^{s_{i+1}}$ and incident to a node of some subnetwork among $\dot{\mr{G}}^{s_{1}}, \dotsc, \dot{\mr{G}}^{s_{i}}$, or (ii) a set of nodes $v_1, \dotsc, v_p \in C_v$, each adjacent to the previous one, such that $v_1$ is adjacent to a node of $\dot{\mr{G}}^{s_{i+1}}$ and $v_{p}$ is adjacent to a node of some subnetwork among $\dot{\mr{G}}^{s_{1}}, \dotsc, \dot{\mr{G}}^{s_{i}}$.
In this definition, if node $v_1$ belongs to $\dot{\mr{G}}^{s_{i+1}}$ itself, it counts as being adjacent to a node of $\dot{\mr{G}}^{s_{i+1}}$.
\end{definition}
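One way to test the condition of Definition~\ref{def:primal-parallel-3} computationally is through reachability in an auxiliary graph whose vertices are the subnetworks and the connection nodes, since the required ordering exists precisely when all subnetworks lie in a single connected component of that graph.
The sketch below (with illustrative data structures, and assuming this equivalence) makes the test concrete.
\begin{verbatim}
# Sketch: test vertical connectivity via reachability in an
# auxiliary graph on subnetworks ('S', i) and connection nodes
# ('v', v). Identifiers are illustrative.

def vertically_connected(subnets, conn_nodes, conn_arcs, adj):
    # subnets: list of node sets; conn_nodes: set C_v;
    # conn_arcs: set of (tail, head) pairs C_a;
    # adj: undirected adjacency dict of the base network G
    if len(subnets) <= 1:
        return True
    aux = {('S', i): set() for i in range(len(subnets))}
    aux.update({('v', v): set() for v in conn_nodes})
    def link(a, b):
        aux[a].add(b); aux[b].add(a)
    for (t, h) in conn_arcs:   # criterion (i): connection arcs
        touched = [('S', i) for i, s in enumerate(subnets)
                   if t in s or h in s]
        for a in touched:
            for b in touched:
                if a != b:
                    link(a, b)
    for v in conn_nodes:       # criterion (ii): node chains
        for w in adj[v]:
            if w in conn_nodes:
                link(('v', v), ('v', w))
        for i, s in enumerate(subnets):
            if v in s or adj[v] & s:  # membership counts too
                link(('v', v), ('S', i))
    seen, stack = {('S', 0)}, [('S', 0)]
    while stack:               # depth-first search
        for nxt in aux[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt); stack.append(nxt)
    return all(('S', i) in seen for i in range(len(subnets)))
\end{verbatim}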
\begin{proposition} \label{prop:primal-forest}
Consider set $\mcl{S}$ with $\Xi$ that represents the network polytope corresponding to the network $\mr{G} = (\mr{V},\mr{A})$.
Let $\big[\mcl{I}_1,\dotsc,\mcl{I}_m,\bar{\mcl{I}} \big| \mcl{J},\bar{\mcl{J}}\big]$ be an EC\&R\xspace assignment
that leads to a non-trivial facet-defining inequality of $\mathop{\rm conv}(\mcl{S})$.
Assume that $\cup_{j \in M} \mcl{I}_j \neq \emptyset$.
For each $j \in M$, define $\widetilde{\mcl{I}}^j = \{i \in \mr{V} | i^{\pm} \in \mcl{I}_j \}$, $\widetilde{\mcl{I}} = \{i \in \mr{V} | i^{\pm} \in \bar{\mcl{I}} \}$, and $\widetilde{\mcl{J}} = \{i \in \mr{A} | i \in \mcl{J} \cup \bar{\mcl{J}} \}$.
Then, there exist forests $\bar{\mr{F}}^j$ in the parallel network $\mr{G}^j$ for $j \in M$, each composed of trees $\bar{\mr{T}}^j_k$ for $k \in \Gamma_j$, where $\Gamma_j$ is an index set, such that
\begin{itemize}
\item[(i)] the forest $\bar{\mr{F}}^j$ is composed of the nodes in $\widetilde{\mcl{I}}^j$ for all $j \in M$,
\item[(ii)] the collection of the trees $\bar{\mr{T}}^j_k$ for all $k \in \Gamma_j$ and all $j \in M$ are vertically connected through connection nodes $\widetilde{\mcl{I}}$ and connection arcs $\widetilde{\mcl{J}}$,
\item[(iii)] the collection of all nodes in $\bar{\mr{F}}^j$ for all $j \in M$ together with the connection nodes $\widetilde{\mcl{I}}$ forms a tree in $\mr{G}$, which has at least one node incident to each connection arc in $\widetilde{\mcl{J}}$.
\end{itemize}
\end{proposition}
\proof{Proof.}
We show the result by proving conditions (i)--(iii).
First, we may assume that the given EC\&R\xspace assignment corresponds to class-$l^{\pm}$ for some $l \in K$.
Since $\big[\mcl{I}_1,\dotsc,\mcl{I}_m,\bar{\mcl{I}} \big| \mcl{J},\bar{\mcl{J}}\big]$ leads to a non-trivial facet-defining inequality of $\mathop{\rm conv}(\mcl{S})$, its corresponding dual weights in the aggregation should represent an extreme point of $\mcl{C}^{l}$ defined in \eqref{eq:Cl}.
This extreme point is associated with a basis in the coefficient matrix \eqref{eq:ECR-matrix-2}.
In this basis, the subset of the column block that contains $E^{^{\intercal}}$ in the row block $j$ represents the flow-balance constraints (multiplied with $y_j$) for the nodes $i \in \widetilde{\mcl{I}}^j$, which can be viewed as the selected nodes in the parallel network $\mr{G}^j$ for $j \in M$.
Further, the rows in the row block $j$ represent the flow variables (multiplied with $y_j$) for each arc in $\mr{G}$, which can be viewed as an arc in the parallel network $\mr{G}^j$.
We may reorder the columns and rows of this basis corresponding to each parallel network $\mr{G}^j$ to obtain a block-diagonal form composed of diagonal blocks $E^{^{\intercal}}_{j,k}$ for $k$ in an index set $\Gamma_j$.
It follows from this structure that the nodes corresponding to the columns of $E^{^{\intercal}}_{j,k}$ in the parallel network $\mr{G}^j$ are connected via arcs of $\mr{G}^j$ represented by the matrix rows.
Therefore, these nodes can form a tree $\bar{\mr{T}}^j_k$ for $k \in \Gamma_j$, the collection of which represents a forest $\bar{\mr{F}}^j$ for all $j \in M$, satisfying condition (i) of the proposition statement.
\smallskip
For condition (ii), considering the aforementioned diagonal block structure and representing the subset of column blocks of \eqref{eq:ECR-matrix-2} containing $\bar{I}^j$ and $I$ in the basis by one block with $\pm$ sign (as only one of them can be selected in the basis), we can write the resulting system of equations for the associated basic feasible solution as follows
\begin{gather} \label{eq:ECR-matrix-4}
\left[
\def1.2{1.2}
\begin{array}{c|c|c|c||c||c||ccccc||ccccc||c}
\begin{array}{ccc}
E^{^{\intercal}}_{1,1} & 0 & 0\\
0 & \ddots & 0 \\
0 & 0 & E^{^{\intercal}}_{1,|\Gamma_1|}
\end{array} & \vc{0} \, & \, \dotsc \, & \, \vc{0} \, & \, \, -E^{^{\intercal}} \, & \pm I & \, \pm I \, & \vc{0} \, & \, \dotsc \, & \, \vc{0} \, & \, \vc{0} \, & \, \pm\bar{I}^1 & \, \vc{0} \, & \, \dotsc \, & \, \vc{0} \, & \, \vc{0} \, & \, C_1 \\
\hline
\, \vc{0} \, & \begin{array}{ccc}
E^{^{\intercal}}_{2,1} & 0 & 0\\
0 & \ddots & 0 \\
0 & 0 & E^{^{\intercal}}_{2,|\Gamma_2|}
\end{array} & \, \dotsc \, & \, \vc{0} \, & \, -E^{^{\intercal}} \, & \, \pm I & \vc{0} \, & \, \pm I \, & \, \dotsc \, & \, \vc{0} \, & \, \vc{0} \, & \, \vc{0} \, & \pm\bar{I}^2 & \, \dotsc \, & \, \vc{0} \, & \, \vc{0} \, & \, C_2 \\
\hline
\, \vdots \, & \, \vdots \, & \, \ddots \, & \, \vdots \, & \, \vdots \, & \vdots \, & \, \vdots \, & \, \vdots \, & \, \ddots \, & \, \vdots & \, \vdots & \, \vdots \, & \, \vdots \, & \, \ddots \, & \, \vdots \, & \, \vdots & \, \vdots \\
\hline
\, \vc{0} \, & \, \vc{0} \, & \, \dotsc \, & \begin{array}{ccc}
E^{^{\intercal}}_{m,1} & 0 & 0\\
0 & \ddots & 0 \\
0 & 0 & E^{^{\intercal}}_{m,|\Gamma_m|}
\end{array} & \, -E^{^{\intercal}} \, & \, \pm I \, & \vc{0} \, & \, \vc{0} \, & \, \dotsc \, & \, \pm I \, & \vc{0} \, & \vc{0} \, & \, \vc{0} \, & \, \dotsc \, & \pm\bar{I}^m & \, \vc{0} & \, C_m \\
\hline
\hline
\, \vc{0} \, & \, \vc{0} \, & \, \dotsc \, & \, \vc{0} \, & \, -E^{^{\intercal}} \, & \, \pm I \, & \vc{0} \, & \, \vc{0} \, & \, \dotsc \, & \vc{0} & \, \pm I \, & \vc{0} \, & \, \vc{0} \, & \, \dotsc \, & \, \vc{0} & \pm\bar{I}^{1,\dotsc,m} & \, C_{m+1} \\
\hline
\hline
\, \vc{0} \, & \, \vc{0} \, & \, \dotsc \, & \vc{0} & \, \vc{0} \, & \, \vc{0} \, & \vc{0} \, & \, \vc{0} \, & \, \dotsc \, & \, \vc{0} & \, \vc{0} \, & \vc{0} \, & \, \vc{0} \, & \, \dotsc \, & \vc{0} & \vc{0} & \, C_{m+2}
\end{array}
\right],
\end{gather}
where the last column block in the basis corresponds to the constraints that have weights $0$ in the basic feasible solution and are added to complete the basis, and where the last row block represents all bilinear terms that do not appear in any constraints during aggregation.
Further, the row block next to the last row corresponds to the bilinear terms that appear during aggregation but not in any of the selected flow-balance constraints; the matrix $\pm\bar{I}^{1,\dotsc,m}$ denotes the bilinear constraints in $\mcl{S}$ that contain these bilinear terms and could be used during the relaxation step.
In the above basis, the column block that contains $-E^{^{\intercal}}$ represents the flow-balance constraints (multiplied with $1-\sum_{j \in M}y_j$) corresponding to the nodes in $\widetilde{\mcl{I}}$.
Similarly, the column block that contains $\pm I$ in all rows represents the bound constraints on the flow variables (multiplied with $1-\sum_{j \in M}y_j$) corresponding to the arcs in $\widetilde{\mcl{J}}$.
We refer to the column group composed of the columns of $E^{^{\intercal}}_{j,k}$ for any $j \in M$ and $k \in \Gamma_j$ as the column group representing the nodes of the tree $\bar{\mr{T}}^j_k$.
It is clear from the diagonal structure of the submatrix containing $E^{^{\intercal}}_{j,k}$ that the column groups representing the nodes of $\bar{\mr{T}}^j_k$ are all \textit{arc disjoint}, i.e., there are no two columns from different groups that have a nonzero element in the same row.
We claim that there exists an ordering $(s_1,r_1), (s_2, r_2), \dotsc, (s_h, r_h)$ of the pairs $(j,k)$ for all $j \in M$ and $k \in \Gamma_j$ such that for each column group representing the nodes of $\bar{\mr{T}}^{s_i}_{r_i}$, for all $i = 2, \dotsc, h$, there exists either (i) a column from the column blocks corresponding to $\widetilde{\mcl{J}}$ that has a nonzero element in a row corresponding to a row of $E^{^{\intercal}}_{s_i,r_i}$ and a nonzero element in a row corresponding to a row of $E^{^{\intercal}}_{s_t,r_t}$ for some $t \in \{1, \dotsc, i-1\}$, or (ii) a sequence of columns in the column block corresponding to $\widetilde{\mcl{I}}$, each sharing a nonzero element in a common row with the previous one, such that the first column has a nonzero element in a row corresponding to a row of $E^{^{\intercal}}_{s_i,r_i}$ and the last column has a nonzero element in a row corresponding to a row of $E^{^{\intercal}}_{s_t,r_t}$ for some $t \in \{1, \dotsc, i-1\}$.
Assume by contradiction that no such ordering exists.
Therefore, we can partition the rows in the first $m+1$ row blocks of \eqref{eq:ECR-matrix-4} into two groups in such a way that each column among those of $E^{^{\intercal}}_{j,k}$ for all $j \in M$ and $k \in \Gamma_j$, as well as those corresponding to $\widetilde{\mcl{I}}$ and $\widetilde{\mcl{J}}$, has all of its nonzero elements in the rows of exactly one of these groups.
In this case, considering that the column blocks composed of $\pm I$ and those composed of $\pm\bar{I}^j$ have exactly one nonzero element in the basis, we can compactly rewrite the system of equations for the basic feasible solution as follows:
\begin{equation}
\left[
\def1.2{1.2}
\begin{array}{ccc|ccc|c}
\pm E_1 & \pm I_1 & \pm\bar{I}_1 \, & \, 0 & 0 & 0 \, & \, \bar{C}_1 \\
\hline
0 & 0 & 0 \, & \pm E_2 & \pm I_2 & \pm\bar{I}_2 \, & \, \bar{C}_2 \\
\hline
\hline
0 & 0 & 0 \, & \, 0 & 0 & 0 \, & \, C_{m+2} \\
\end{array}
\right]
\left[
\begin{array}{c}
\vc{+} \\
\hline
\vc{+} \\
\hline
\vc{0}
\end{array}
\right]
=
\left[
\begin{array}{c}
\pm \vc{e}^{l} \\
\hline
\vc{0} \\
\hline
\hline
\vc{0}
\end{array}
\right], \label{eq:matrix-forest}
\end{equation}
where the first and second row blocks respectively correspond to the first and second partitions discussed above.
In \eqref{eq:matrix-forest}, $\vc{e}^{l}$ is a unit vector whose elements are all zero except that corresponding to the row representing $y_{j'}x_{i'}$ for some $i',j'$ that satisfy $A^l_{j'i'} = 1$, which is equal to $1$.
We may assume without loss of generality that the row containing this nonzero element in $\vc{e}^l$ belongs to the first row block.
All these columns except the ones in the last column block have positive weights because the associated constraints are assumed to be used in the aggregation.
These weights are denoted by $\vc{+}$ in the first two row blocks of the vector multiplied with this matrix.
It is now easy to verify that the linear combination of the columns in the second column block of the basis matrix with positive weights yields the zero vector.
This shows that the columns are linearly dependent, a contradiction.
Now, consider the ordering $(s_1,r_1), (s_2, r_2), \dotsc, (s_h, r_h)$ described above.
It follows that for each $i = 2, \dotsc, h$, there exists either (i) a column from the column block corresponding to $\widetilde{\mcl{J}}$ that has a nonzero element in a row corresponding to a row of $E^{^{\intercal}}_{s_i,r_i}$ and a nonzero element in a row corresponding to a row of $E^{^{\intercal}}_{s_t,r_t}$ for some $t \in \{1, \dotsc, i-1\}$, or (ii) a sequence of columns in the column block corresponding to $\widetilde{\mcl{I}}$, each sharing a nonzero element in a common row with the previous one, such that the first column has a nonzero element in a row corresponding to a row of $E^{^{\intercal}}_{s_i,r_i}$ and the last column has a nonzero element in a row corresponding to a row of $E^{^{\intercal}}_{s_t,r_t}$ for some $t \in \{1, \dotsc, i-1\}$.
First, consider the case where (i) in the above either-or argument holds for a column $k \in \widetilde{\mcl{J}}$.
This column has nonzero elements in the rows representing arc $k$ in all parallel networks $\mr{G}^j$ for $j \in M$.
Matrix $E^{^{\intercal}}_{s_i,r_i}$ has a nonzero element in row $k$ if the tree $\bar{\mr{T}}^{s_i}_{r_i}$ has a node incident to arc $k$.
We conclude that $\bar{\mr{T}}^{s_i}_{r_i}$ and $\bar{\mr{T}}^{s_t}_{r_t}$ each have at least one node incident to $k$, satisfying criterion (i) in Definition~\ref{def:primal-parallel-3}.
Second, consider the case where (ii) in the above either-or argument holds for a sequence $k_1, \dotsc, k_p$ of the nodes corresponding to $\widetilde{\mcl{I}}$, where $p \leq |\widetilde{\mcl{I}}|$.
Any such column, say $k_1$, has nonzero elements in the rows representing the arcs that are incident to node $k_1$ in all parallel networks $\mr{G}^j$ for $j \in M$.
Therefore, since each column contains a nonzero element in a common row with the previous one, the nodes corresponding to these columns must be adjacent to one another in $\mr{G}$.
Further, since the column corresponding to $k_1$ has a nonzero element in a row corresponding to a row of $E^{^{\intercal}}_{s_i,r_i}$, we conclude that $k_1$ is adjacent to a node in $\bar{\mr{T}}^{s_i}_{r_i}$, which means that either $k_1$ belongs to this tree, or it is adjacent to a node of this tree.
A similar argument can be made about node $k_p$ and the tree $\bar{\mr{T}}^{s_t}_{r_t}$.
This satisfies criterion (ii) of Definition~\ref{def:primal-parallel-3}.
This proves condition (ii) of the proposition statement due to Definition~\ref{def:primal-parallel-3}.
\smallskip
For condition (iii), we show there exists a sequence $v_1, \dotsc, v_q$ of all the nodes in $\cup_{j \in M}\widetilde{\mcl{I}}^j \cup \widetilde{\mcl{I}}$, where $q = |\cup_{j \in M}\widetilde{\mcl{I}}^j \cup \widetilde{\mcl{I}}|$ and $v_1 \in \cup_{j \in M}\widetilde{\mcl{I}}^j$, such that every node $v_i$ is adjacent to at least one node in $v_1, \dotsc, v_{i-1}$ for every $i = 2, \dotsc, q$.
We may assume that $v_1$ is incident to arc $i'$ defined previously that is associated with the base equality $l$.
For other cases, where $i'$ is not incident to any nodes in $\cup_{j \in M}\widetilde{\mcl{I}}^j$, the argument will be similar with an adjustment of the partitions described below.
It follows from the previously proven conditions (i) and (ii) of the proposition statement, as well as Definition~\ref{def:primal-parallel-3}, that there exists a sequence $\bar{v}_1, \dotsc, \bar{v}_p$ of all the nodes in $\cup_{j \in M}\widetilde{\mcl{I}}^j \cup \widehat{\mcl{I}}$ for some $\widehat{\mcl{I}} \subseteq \widetilde{\mcl{I}}$ such that $\bar{v}_i$ is adjacent to at least one node in $\bar{v}_1, \dotsc, \bar{v}_{i-1}$ for every $i = 2, \dotsc, p$.
We claim that there exists a sequence $\hat{v}_1, \dotsc, \hat{v}_{q-p}$ of all the nodes in $\widetilde{\mcl{I}} \setminus \widehat{\mcl{I}}$ such that $\hat{v}_i$ is adjacent to at least one node in $\bar{v}_1, \dotsc, \bar{v}_{p}, \hat{v}_1, \dotsc, \hat{v}_{i-1}$ for every $i = 1, \dotsc, q-p$.
Assume by contradiction that no such sequence exists.
Then, we can use an argument similar to that of case (ii) above to partition the rows of \eqref{eq:ECR-matrix-4} in such a way that all columns corresponding to the nodes in $\bar{v}_1, \dotsc, \bar{v}_{p}, \hat{v}_1, \dotsc, \hat{v}_{t}$ have all their nonzero elements in the first partition, and the columns corresponding to all the remaining nodes $\hat{v}_{t+1}, \dotsc, \hat{v}_{q-p}$ have all their nonzero elements in the second partition.
Then, a similar argument to that following \eqref{eq:matrix-forest} will show that the columns in the second group are linearly dependent, a contradiction.
The case for the second part of the statement regarding the connection arcs in $\widetilde{\mcl{J}}$ can be shown similarly.
\Halmos
\endproof
\medskip
Although Proposition~\ref{prop:primal-forest} shows that an EC\&R\xspace assignment corresponds to a forest structure in the parallel networks created from the underlying network in $\mcl{S}$, the converse result---similar to the one presented for set $\mcl{S}^1$---does not hold here.
More specifically, a forest structure that satisfies the conditions of Proposition~\ref{prop:primal-forest} does not necessarily lead to a valid EC\&R\xspace assignment, and
even when it does, the calculation of the aggregation weights to satisfy the EC\&R\xspace conditions is not as straightforward; see Example~\ref{ex:forest} below.
\medskip
\begin{example} \label{ex:forest}
Consider set $\mcl{S}$ where $\Xi$ describes the network polytope corresponding to network $\mr{G} = (\mr{V},\mr{A})$ in Figure~\ref{fig:primal-tree}, and $\Delta = \{(y_1,y_2) \in {\mathbb R}^2_+ \,|\, y_1 + y_2 \leq 1\}$.
Select class-$l^+$ corresponding to the base equality $y_1x_{1,5} - z_l = 0$.
In the parallel network $\mr{G}^1$, we select the forest $\bar{\mr{F}}^1$ composed of the tree $\bar{\mr{T}}^1_1$ with the node set $\widetilde{\mcl{I}}^1 = \{1, 2\}$.
In the parallel network $\mr{G}^2$, we select the forest $\bar{\mr{F}}^2$ composed of the tree $\bar{\mr{T}}^2_1$ with the node set $\widetilde{\mcl{I}}^2 = \{2, 6\}$.
We select the connection node set $\widetilde{\mcl{I}} = \{6\}$, and the connection arc set $\widetilde{\mcl{J}} = \emptyset$.
It is easy to verify that these sets satisfy conditions (i)--(iii) of Proposition~\ref{prop:primal-forest}.
However, we cannot find aggregation weights for the flow-balance constraints corresponding to the nodes in the above sets that yield a cancellation of at least $5$ bilinear terms.
As a result, there is no EC\&R\xspace assignment that matches the considered forest structure.
\end{example}
\medskip
A common way to circumvent the above-mentioned difficulty in obtaining valid EC\&R\xspace assignments and their aggregation weights is to focus on a special class of EC\&R\xspace assignments with more specific attributes that strengthen the connection between an EC\&R\xspace assignment and its corresponding network structure.
An important example of such a class is the class of EC\&R\xspace assignments that are obtained through \textit{pairwise cancellation}.
In this procedure, each cancellation of bilinear terms is obtained by aggregating two constraints.
This convention also covers the bilinear terms that are removed during the relaxation step; that is, the constraint used to relax a remaining bilinear term counts as one of the two constraints in the preceding statement.
Following this procedure, the aggregation weight for each constraint can be determined successively as the constraint is added to the assignment to ensure the satisfaction of the EC\&R\xspace conditions.
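For illustration, consider a schematic choice of constraints (not tied to a particular instance): suppose the base equality $y_1x_{i'} - z_l = 0$ is used with weight $-1$ together with the positive flow-balance constraint at node $t(i')$ multiplied with $y_1$.
Their aggregation
\begin{equation*}
\big(z_l - y_1x_{i'}\big) + y_1\Big(\sum_{r \in \delta^+(t(i'))} x_r - \sum_{r \in \delta^-(t(i'))} x_r - f_{t(i')}\Big) \geq 0
\end{equation*}
cancels the bilinear term $y_1x_{i'}$ through exactly these two constraints, leaving $z_l + y_1\sum_{r \in \delta^+(t(i')) \setminus \{i'\}} x_r - y_1\sum_{r \in \delta^-(t(i'))} x_r - f_{t(i')}y_1 \geq 0$.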
The next result shows that the aggregation weights for all constraints used in the EC\&R\xspace assignments obtained through pairwise cancellation are $1$.
\begin{proposition} \label{prop:pairwise ECR}
Consider set $\mcl{S}$ where $\Xi$ describes the network polytope corresponding to network $\mr{G} = (\mr{V},\mr{A})$.
Let $\big[\mcl{I}_1,\dotsc,\mcl{I}_m,\bar{\mcl{I}} \big| \mcl{J},\bar{\mcl{J}}\big]$ be an EC\&R\xspace assignment for class-$l^{\pm}$ for some $l \in K$ that is obtained through pairwise cancellation.
Then, the aggregation weights for all constraints used in this assignment are $1$.
\end{proposition}
\proof{Proof.}
Let $\bar{\vc{\pi}}^l$ be the solution vector for the system of equations \eqref{eq:Cl} of $\mcl{C}^l$ corresponding to the aggregation weights of the given EC\&R\xspace assignment.
We may rewrite this system of equations as follows by rearranging its rows and columns.
\begin{equation}
\left[
\def1.2{1.5}
\begin{array}{c|c|c}
P_1 \, & \, 0 \, & \, C_1 \\
\hline
P_2 \, & \, \pm I \, & \, C_2 \\
\hline
0 \, & \, 0 \, & \, C_3 \\
\end{array}
\right]
\left[
\def1.2{1.5}
\begin{array}{c}
\bar{\vc{\pi}}^l_1 \\
\hline
\bar{\vc{\pi}}^l_2 \\
\hline
\vc{0}
\end{array}
\right]
=
\left[
\def1.2{1.5}
\begin{array}{c}
\pm \vc{e}^{l} \\
\hline
\vc{0} \\
\hline
\vc{0}
\end{array}
\right]. \label{eq:matrix-pairwise}
\end{equation}
In the coefficient matrix of \eqref{eq:matrix-pairwise}, the first row block represents the bilinear terms that are canceled during aggregation.
The second row block corresponds to the remaining bilinear terms in the aggregated inequality that are relaxed in the last step of the EC\&R\xspace procedure.
The last row block represents all the bilinear terms that are not involved in the aggregation procedure.
Further, in this matrix, the first column block corresponds to the constraints used in the aggregation, whose aggregation weights in the solution vector $\bar{\vc{\pi}}^l$ are denoted by $\bar{\vc{\pi}}^l_1$.
The second column block corresponds to the variable bound constraints in $\Xi$ as well as the bilinear constraints in $\mcl{S}$ used in the EC\&R\xspace procedure to relax the remaining bilinear terms in the aggregated inequality, whose weights in the solution vector $\bar{\vc{\pi}}^l$ are denoted by $\bar{\vc{\pi}}^l_2$.
The last column block represents all other constraints that are not used during the EC\&R\xspace procedure and their weights in the solution vector $\bar{\vc{\pi}}^l$ are zero.
Finally, $\vc{e}^{l}$ on the right-hand side of this system is a unit vector whose elements are all zero except the one corresponding to the row representing $y_{j'}x_{i'}$, for some $i',j'$ that satisfy $A^l_{j'i'} = 1$, which equals $1$.
It is clear that this row belongs to the first row block since according to the EC\&R\xspace condition (C2), the bilinear term in the base equality $l$ must be canceled during the aggregation procedure when the assignments are not empty.
It follows from the equation \eqref{eq:matrix-pairwise} that $P_1 \bar{\vc{\pi}}^l_1 = \pm \vc{e}^{l}$.
Next, we analyze the structure of $P_1$.
Note that all elements of $P_1$ belong to $\{0, -1, 1\}$ because it is a submatrix of \eqref{eq:ECR-matrix-2} that represents the coefficients of the constraints in $\mcl{S}$.
Considering that the columns of $P_1$ represent the constraints used in the aggregation except the base equality (as that constraint has been moved to the right-hand-side to form $\mcl{C}^l$), and that the rows of $P_1$ correspond to the canceled bilinear terms during aggregation, according to condition (C1) of EC\&R\xspace, we conclude that the number of rows of $P_1$ is no smaller than the number of columns of $P_1$.
Further, it follows from condition (C2) of EC\&R\xspace that each constraint used in the aggregation (after being multiplied with its corresponding weight) will have at least one bilinear term canceled, which implies that each column of $P_1$ has at least one nonzero element.
The assumption of pairwise cancellation for the given EC\&R\xspace assignment implies that each canceled bilinear term corresponding to the rows of $P_1$ is obtained through aggregation of exactly two constraints.
As a result, each row of $P_1$ must contain exactly two nonzero elements, except for the row corresponding to the bilinear term $y_{j'}x_{i'}$ that appears in the base equality $y_{j'}x_{i'} - z_l = 0$; that row has only one nonzero element because the weight of the base equality has been fixed at $\pm 1$ and its column has been moved to the right-hand-side of the equation, captured by $\pm \vc{e}^{l}$; see the derivation of \eqref{eq:Cl}.
Therefore, we may rearrange the rows and columns of the matrices in this equation to obtain the form:
\begin{equation}
\left[
\def1.2{1.2}
\begin{array}{c|c|c|c}
\pm 1 & 0 & \cdots & 0 \\
\hline
\{0, \pm 1\} & \pm 1 & \cdots & 0 \\
\hline
\vdots & \vdots & \ddots & \vdots \\
\hline
\{0, \pm 1\} & \{0, \pm 1\} & \cdots & \pm 1 \\
\hline
\{0, \pm 1\} & \{0, \pm 1\} & \cdots & \{0, \pm 1\} \\
\hline
\vdots & \vdots & \ddots & \vdots \\
\hline
\{0, \pm 1\} & \{0, \pm 1\} & \cdots & \{0, \pm 1\}
\end{array}
\right]
\bar{\bar{\vc{\pi}}}^l_1 = \pm \vc{e}^{1}, \label{eq:matrix-pairwise-2}
\end{equation}
where $\bar{\bar{\vc{\pi}}}^l_1$ is composed of the elements of $\bar{\vc{\pi}}^l_1$ that are rearranged to match the rearrangement of columns of $P_1$ in the above form, and where the first row corresponds to the bilinear term $y_{j'}x_{i'}$ so that the right-hand-side vector becomes $\pm \vc{e}^1$.
It follows from the above discussion about the structure of $P_1$ and equation \eqref{eq:matrix-pairwise-2} that all components of $\bar{\bar{\vc{\pi}}}^l_1$ must be equal to 1: the first row yields that the first component equals $1$, and forward substitution through the lower-triangular structure relates each subsequent component to a previously determined component with coefficients in $\{-1,1\}$, so nonnegativity of the aggregation weights forces each new component to equal $1$.
Finally, for the equations in the second row block of \eqref{eq:matrix-pairwise}, we have that $P_2 \bar{\vc{\pi}}^l_1 \pm I \bar{\vc{\pi}}^l_2 = \vc{0}$.
It follows from the pairwise cancellation assumption that each row of $P_2$ contains exactly one nonzero element, as each such row corresponds to a remaining bilinear term in the aggregated inequality.
Since all of the elements in $P_2$ belong to $\{0, -1, 1\}$, and all the components in $\bar{\vc{\pi}}^l_1$ are equal to 1, it must hold that $\bar{\vc{\pi}}^l_2 = \vc{1}$.
\Halmos
\endproof
\medskip
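To make the forward-substitution argument in the proof concrete, consider a minimal illustrative instance of \eqref{eq:matrix-pairwise-2} with three aggregated constraints:
\begin{equation*}
\left[
\begin{array}{rrr}
1 & 0 & 0 \\
-1 & 1 & 0 \\
0 & -1 & 1
\end{array}
\right]
\bar{\bar{\vc{\pi}}}^l_1
=
\left[
\begin{array}{c}
1 \\ 0 \\ 0
\end{array}
\right].
\end{equation*}
The first row gives the first component of $\bar{\bar{\vc{\pi}}}^l_1$ equal to $1$, the second row then forces the second component to equal the first, and the third row forces the third to equal the second; hence all aggregation weights equal $1$, consistent with Proposition~\ref{prop:pairwise ECR}.
\medskip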
\begin{remark} \label{rem:pairwise-primal}
For the case with $m=1$, as described in the proof of Proposition~\ref{prop:primal-tree}, there are two (resp. three) possible scenarios for constraints that could be used in the aggregation to cancel a bilinear term $y_1x_i$ for any $i \in N \setminus \{l\}$ (resp. for $i = l$).
Since the aggregation weights for all constraints are $1$ in this case (see Proposition~\ref{prop:primal-weight}), we conclude that each cancellation is obtained through aggregation of exactly two constraints.
Further, any remaining bilinear term in the aggregated inequality corresponds to an arc that is incident to exactly one node of the tree associated with the EC\&R\xspace assignment (see Proposition~\ref{prop:primal-tree}), which implies that each such bilinear term appears in exactly one constraint during aggregation.
As a result, \textit{all} EC\&R\xspace inequalities for the case with $m = 1$ can be obtained through pairwise cancellation.
\end{remark}
\medskip
Although the EC\&R\xspace inequalities obtained through pairwise cancellation do not necessarily produce a full convex hull description for $\mcl{S}$, the result of Proposition~\ref{prop:pairwise ECR} provides three important advantages: (i) it generalizes the convexification results for the case with $m = 1$ as described in Remark~\ref{rem:pairwise-primal}; (ii) it can produce inequalities stronger than those obtained by applying Theorem~\ref{thm:primal-tree-converse} to relaxations of $\mcl{S}$ that contain one $y$ variable at a time, because it considers all the $y$ variables in their original simplex set $\Delta_m$; and (iii) it enables us to derive explicit EC\&R\xspace inequalities cognizant of the underlying network structure without the need to search for the aggregation weights that satisfy the EC\&R\xspace conditions as will be shown in the sequel.
These advantages are corroborated by the computational experiments presented in Section~\ref{sec:computation}.
The next proposition shows that the pairwise cancellation property provides more information about the forest structure presented in Proposition~\ref{prop:primal-forest}.
\begin{proposition} \label{prop:primal-forest-pairwise}
Consider the setting of Proposition~\ref{prop:primal-forest}, and assume that the EC\&R\xspace assignment $\big[\mcl{I}_1,\dotsc,\mcl{I}_m,\bar{\mcl{I}} \big| \mcl{J},\bar{\mcl{J}}\big]$ has the pairwise cancellation property.
Further, let this assignment correspond to a class-$l^{\pm}$ for some $l \in K$ such that $A^l_{j'i'} = 1$ for some $(i',j') \in N \times M$.
Then, in addition to the outcome of Proposition~\ref{prop:primal-forest}, we have that
\begin{itemize}
\item[(i)] arc $i'$ is either in $\widetilde{\mcl{J}}$ or incident to exactly one node in $\widetilde{\mcl{I}} \cup \widetilde{\mcl{I}}^{j'}$, but not both,
\item[(ii)] each arc in $\widetilde{\mcl{J}}$ is incident to at most one node in $\widetilde{\mcl{I}} \cup \widetilde{\mcl{I}}^{j}$ for each $j \in M$,
\item[(iii)] each node in $\widetilde{\mcl{I}} \cap \widetilde{\mcl{I}}^j$, for $j \in M \setminus \{j'\}$ (resp. $j = j'$), is adjacent to no other node in that set and incident to no arc in $\widetilde{\mcl{J}}$ (resp. $\widetilde{\mcl{J}} \cup \{i'\}$).
\end{itemize}
\end{proposition}
\proof{Proof.}
For case (i), it follows from condition (C2) of the EC\&R\xspace procedure that the bilinear term $y_{j'}x_{i'}$ in the base equality must be canceled during aggregation.
Further, according to the pairwise cancellation property, there must be exactly one constraint in the aggregation in addition to the base equality that contains $y_{j'}x_{i'}$ after multiplication with its aggregation weight.
There are two possible scenarios.
The first possibility is that one of the bound constraints for $x_{i'}$ is used in the aggregation, which implies that arc $i'$ is a connection arc and belongs to $\widetilde{\mcl{J}}$.
The second possibility is that the flow-balance constraint at either node $t(i')$ or $h(i')$, but not both, is used in the aggregation, which implies that $i'$ is incident to exactly one node in $\widetilde{\mcl{I}} \cup \widetilde{\mcl{I}}^{j'}$.
\smallskip
For case (ii), consider an arc $i \in \widetilde{\mcl{J}}$.
Therefore, either of the bound constraints $x_i \geq 0$ or $u_i-x_i \geq 0$ is used in the aggregation with weight $1-\sum_{j \in M} y_j$.
It follows from the pairwise cancellation property that, for each $j \in M$, there can be at most one additional constraint in the aggregation that contains a term $y_j x_i$.
The only possibility for such a constraint is the flow-balance constraint at either node $t(i)$ or $h(i)$, but not both.
We conclude that $i$ is incident to at most one node in $\widetilde{\mcl{I}} \cup \widetilde{\mcl{I}}^{j}$ for each $j \in M$.
\smallskip
For case (iii), consider a node $i \in \widetilde{\mcl{I}} \cap \widetilde{\mcl{I}}^j$ for some $j \in M \setminus \{j'\}$.
Therefore, the aggregation contains the (positive or negative) flow-balance constraint at node $i$ multiplied with $1-\sum_{j \in M} y_j$ due to $i \in \widetilde{\mcl{I}}$, together with the (positive or negative) flow-balance constraint at node $i$ multiplied with $y_j$ due to $i \in \widetilde{\mcl{I}}^j$.
Therefore, the bilinear terms $y_j x_k$ for all $k \in \delta^+(i) \cup \delta^-(i)$ already appear in two constraints, which implies that they cannot appear in any other constraints during aggregation.
As a result, the bound constraints for each variable $x_k$ corresponding to arc $k$ cannot be included in $\widetilde{\mcl{J}}$.
Similarly, the flow-balance constraint at node $h(k)$ for any $k \in \delta^+(i)$ and at node $t(k)$ for any $k \in \delta^-(i)$ cannot be included in the aggregation, which implies that $i$ cannot be adjacent to any other nodes in $\widetilde{\mcl{I}} \cap \widetilde{\mcl{I}}^j$.
The proof for the case where $j = j'$ follows from a similar argument.
\Halmos
\endproof
\medskip
As noted earlier, an important consequence of the pairwise cancellation property is that it enables a converse statement to those of Propositions~\ref{prop:primal-forest} and~\ref{prop:primal-forest-pairwise}, which identifies EC\&R\xspace assignments based on a special forest structure in the underlying network.
\begin{theorem} \label{thm:primal-forest-converse}
Consider set $\mcl{S}$ with $\Xi$ that represents the network polytope corresponding to the network $\mr{G} = (\mr{V},\mr{A})$.
Let $\bar{\mr{F}}^j$, for each $j \in M$, be a forest in the parallel network $\mr{G}^j$, composed of trees $\bar{\mr{T}}^j_k$ for $k \in \Gamma_j$, where $\Gamma_j$ is an index set, that satisfies the conditions (i)--(iii) of Propositions~\ref{prop:primal-forest} and \ref{prop:primal-forest-pairwise} with the corresponding node sets $\widetilde{\mcl{I}}^j$, the connection node set $\widetilde{\mcl{I}}$, the connection arc set $\widetilde{\mcl{J}}$, and the class $l$.
Then, the assignment $\big[\mcl{I}_1,\dotsc,\mcl{I}_m,\bar{\mcl{I}} \big| \mcl{J},\bar{\mcl{J}}\big]$ obtained from Algorithm~\ref{alg:primal-ECR} is an EC\&R\xspace assignment for class-$l^{\pm}$.
\end{theorem}
\proof{Proof.}
First, we argue that conditions (i)--(iii) of Proposition~\ref{prop:primal-forest} imply that each member of the sets $\widetilde{\mcl{I}}$, $\widetilde{\mcl{J}}$, and $\widetilde{\mcl{I}}^j$ for $j \in M$ receives a label assignment through the steps of Algorithm~\ref{alg:primal-ECR}, i.e., the member is added to set $\mt{D}$ defined in that algorithm.
It follows from condition (i) of Proposition~\ref{prop:primal-forest} that once a member of the node subset in $\widetilde{\mcl{I}}^j$, for each $j \in M$, that represents a tree $\bar{\mr{T}}^j_k$ with $k \in \Gamma_j$ is added to $\mt{D}$, all the remaining nodes in $\bar{\mr{T}}^j_k$ are eventually added to $\mt{D}$ because of the loop in lines 11--13 in the algorithm, as all nodes of the tree are connected.
Condition (ii) of Proposition~\ref{prop:primal-forest} implies that all trees $\bar{\mr{T}}^j_k$ for $k \in \Gamma_j$ and $j \in M$ are connected through an appropriate sequence of the tree nodes, the connection nodes in $\widetilde{\mcl{I}}$, and the connection arcs in $\widetilde{\mcl{J}}$.
Consequently, the loops in lines 10--44 of the algorithm ensure that each member of these sets is visited following that sequence and becomes added to $\mt{D}$.
Further, condition (iii) of Proposition~\ref{prop:primal-forest} suggests that each member in the sets $\widetilde{\mcl{I}}$ and $\widetilde{\mcl{J}}$ is connected to the subgraph composed of the set of all tree nodes in $\widetilde{\mcl{I}}^j$ and their associated connection nodes and connection arcs.
As a result, there exists a sequence of adjacent nodes and arcs that lead to each member of $\widetilde{\mcl{I}}$ and $\widetilde{\mcl{J}}$, thereby getting added to $\mt{D}$.
\smallskip
Second, we show that each bilinear term created during the aggregation can appear in at most two constraints.
There are four cases.
In case 1, consider the bilinear term $y_{j'}x_{i'}$ that appears in the base equality $l$.
Condition (i) of Proposition~\ref{prop:primal-forest-pairwise} implies that this bilinear term can appear in exactly one other constraint, which could be either the bound constraint on variable $x_{i'}$ (which would be included in $\widetilde{\mcl{J}}$) or the flow-balance constraint at one of the incident nodes to $i'$ (which would be included in $\widetilde{\mcl{I}}^{j'} \cup \widetilde{\mcl{I}}$).
In case 2, consider a bilinear term $y_j x_i$, for some $j \in M$, that appears in the bound constraint on variable $x_i$ for any arc $i \in \widetilde{\mcl{J}}$.
Condition (ii) of Proposition~\ref{prop:primal-forest-pairwise} implies that this bilinear term can appear in at most one other constraint, which could be the flow-balance constraint at one of the incident nodes to $i$ (which would be included in $\widetilde{\mcl{I}}^{j} \cup \widetilde{\mcl{I}}$).
In case 3, consider a bilinear term $y_j x_i$, for some $j \in M$, that appears in the flow-balance constraint at an incident node of arc $i$ after being multiplied with both $y_j$ (i.e., the node being in $\widetilde{\mcl{I}}^j$) and $1-\sum_{j \in M}y_j$ (i.e., the node being in $\widetilde{\mcl{I}}$).
Condition (iii) of Proposition~\ref{prop:primal-forest-pairwise} implies that this bilinear term cannot appear in any other constraints during aggregation.
In case 4, consider a bilinear term $y_j x_i$, for some $j \in M$, that appears in the flow-balance constraint at an incident node of arc $i$ that is not in $\widetilde{\mcl{I}}^j \cap \widetilde{\mcl{I}}$.
It follows from condition (iii) of Proposition~\ref{prop:primal-forest} that this bilinear term can appear in at most one other constraint because of the tree structure of all the nodes in $\cup_{j \in M} \widetilde{\mcl{I}}^j \cup \widetilde{\mcl{I}}$.
\smallskip
Third, we discuss that, for any $k \in \mt{D}$ that has been newly added to this set, its label value has been determined through lines 10--44 of Algorithm~\ref{alg:primal-ECR} in such a way that, for a member $i \in \cup_{j \in M} \widetilde{\mcl{I}}^j \cup \widetilde{\mcl{I}} \cup \widetilde{\mcl{J}}$ that has been previously added to $\mt{D}$ and is adjacent or incident to $k$, the bilinear term that commonly appears in the weighted constraints corresponding to both $i$ and $k$ is canceled.
For instance, consider the case where $i \in \widetilde{\mcl{I}}$ (line 22 of the algorithm) and $k \in \widetilde{\mcl{I}}^j$ for some $j \in M$ is an adjacent node to $i$ (line 26 of the algorithm).
Assume that $\mt{l}(i) = +$, and that arc $a \in \mr{A}$ is such that $t(a) = i$ and $h(a) = k$.
It follows from line 27 of the algorithm that $\mt{l}(k) = -$.
Considering the assignment rule in lines 48 and 52 of the algorithm, we should aggregate the constraint $\sum_{r \in \delta^+(i) \setminus \{p\}} x_r - \sum_{r \in \delta^-(i)} x_r + x_p \geq f_i$ with weight $1 - \sum_{j \in M} y_j$, together with the constraint $-\sum_{r \in \delta^+(k)} x_r + \sum_{r \in \delta^-(k) \setminus \{p\}} x_r + x_p \geq - f_k$ with weight $y_j$, which results in the cancellation of the bilinear term $y_j x_p$.
A similar argument can be made for any other possible case in Algorithm~\ref{alg:primal-ECR}.
\smallskip
Combining all the results shown in the previous parts, i.e., (I) each member of the sets $\widetilde{\mcl{I}}$, $\widetilde{\mcl{J}}$, and $\widetilde{\mcl{I}}^j$ for $j \in M$ receives a label assignment and is added to $\mt{D}$; (II) each bilinear term created during the aggregation can appear in at most two constraints; and (III) for any $k \in \mt{D}$, its label value is determined in such a way that the bilinear term that is common between the weighted constraints corresponding to $k$ and a previously added member $i$ in $\mt{D}$ is canceled, we conclude that at least $|\widetilde{\mcl{I}}| + |\widetilde{\mcl{J}}|+\sum_{j \in M}|\widetilde{\mcl{I}}^j|$ bilinear terms will be canceled during aggregation in the desired assignment $\big[\mcl{I}_1,\dotsc,\mcl{I}_m,\bar{\mcl{I}} \big| \mcl{J},\bar{\mcl{J}}\big]$.
This satisfies the EC\&R\xspace condition (C1).
Finally, the above argument also implies that each flow-balance constraint at the nodes in $\cup_{j \in M} \widetilde{\mcl{I}}^j \cup \widetilde{\mcl{I}}$, and each variable bound constraint for the arcs in $\widetilde{\mcl{J}}$, will have at least one of their bilinear terms (after being multiplied with appropriate weights) canceled because each such node or arc will eventually be added to $\mt{D}$ when it receives a label for the desired cancellation.
This satisfies the EC\&R\xspace condition (C2).
We conclude that $\big[\mcl{I}_1,\dotsc,\mcl{I}_m,\bar{\mcl{I}} \big| \mcl{J},\bar{\mcl{J}}\big]$ is an EC\&R\xspace assignment.
\Halmos
\endproof
\begin{algorithm}
\scriptsize
\caption{Derive an EC\&R\xspace assignment associated with a forest structure}
\label{alg:primal-ECR}
\begin{algorithmic}[1]
\REQUIRE network $\mr{G} = (\mr{V},\mr{A})$, forest node sets $\widetilde{\mcl{I}}^j$ for $j \in M$, connection node set $\widetilde{\mcl{I}}$, connection arc set $\widetilde{\mcl{J}}$, class $l$ with $(i',j') \in N \times M$ such that $A^{l}_{j'i'} = 1$, and a class sign indicator $\pm$
\ENSURE the EC\&R\xspace assignment $\big[\mcl{I}_1,\dotsc,\mcl{I}_m,\bar{\mcl{I}} \big| \mcl{J},\bar{\mcl{J}}\big]$ for class-$l^{\pm}$
\STATE assign an empty label denoted by $\mt{l}(i)$ to each node $i \in \widetilde{\mcl{I}} \cup \bigcup_{j \in M}\widetilde{\mcl{I}}^j$ and each arc $i \in \widetilde{\mcl{J}} \cup \{i'\}$, and define set $\mt{D} = \emptyset$
\STATE set $\mathtt{l}(i') = $ the class sign indicator, and let $k \in \widetilde{\mcl{I}}^{j'} \cup \widetilde{\mcl{I}}$ either be an incident node to $i'$ or be the arc $i' \in \widetilde{\mcl{J}}$
\STATE \textbf{if} $\left(k = h(i') \in \widetilde{\mcl{I}}^{j'}\right)$ or $\left(k = t(i') \in \widetilde{\mcl{I}}\right)$ or $\left(k = i' \in \widetilde{\mcl{J}}\right)$ \textbf{then}
\STATE \quad set $\mt{l}(k) = \mt{l}(i')$, and add $k$ to $\mt{D}$
\STATE \textbf{else if} $\left(k = t(i') \in \widetilde{\mcl{I}}^{j'}\right)$ or $\left(k = h(i') \in \widetilde{\mcl{I}}\right)$ \textbf{then}
\STATE \quad set $\mt{l}(k) = \neg \mt{l}(i')$, and add $k$ to $\mt{D}$ ($\neg$ represents the negation symbol)
\STATE \textbf{end if}
\WHILE{$\mt{D} \neq \emptyset$}
\STATE \quad select $i \in \mt{D}$
\STATE \quad \textbf{if} $i \in \widetilde{\mcl{I}}^j$ for some $j \in M$ \textbf{then}
\STATE \quad \quad \textbf{for} each unlabeled node $k \in \widetilde{\mcl{I}}^j$ and $\bar{k} \in \widetilde{\mcl{I}}$ that is adjacent to $i$ \textbf{do}
\STATE \quad \quad \quad set $\mt{l}(k) = \mt{l}(i)$, set $\mt{l}(\bar{k}) = \neg \mt{l}(i)$, and add $k$ and $\bar{k}$ to $\mt{D}$
\STATE \quad \quad \textbf{end for}
\STATE \quad \quad \textbf{for} each unlabeled arc $k \in \widetilde{\mcl{J}}$ that is incident to $i$ \textbf{do}
\STATE \quad \quad \quad \textbf{if} $i = t(k)$ \textbf{then}
\STATE \quad \quad \quad \quad set $\mt{l}(k) = \mt{l}(i)$, and add $k$ to $\mt{D}$
\STATE \quad \quad \quad \textbf{else}
\STATE \quad \quad \quad \quad set $\mt{l}(k) = \neg \mt{l}(i)$, and add $k$ to $\mt{D}$
\STATE \quad \quad \quad \textbf{end if}
\STATE \quad \quad \textbf{end for}
\STATE \quad \textbf{end if}
\STATE \quad \textbf{if} $i \in \widetilde{\mcl{I}}$ \textbf{then}
\STATE \quad \quad \textbf{for} each unlabeled node $k \in \widetilde{\mcl{I}}^j$ for $j \in M$ such that $k = i$ \textbf{do}
\STATE \quad \quad \quad set $\mt{l}(k) = \mt{l}(i)$, and add $k$ to $\mt{D}$
\STATE \quad \quad \textbf{end for}
\STATE \quad \quad \textbf{for} each unlabeled node $k \in \widetilde{\mcl{I}}^j$ for $j \in M$, and $\bar{k} \in \widetilde{\mcl{I}}$ that is adjacent to $i$ \textbf{do}
\STATE \quad \quad \quad set $\mt{l}(k) = \neg \mt{l}(i)$, set $\mt{l}(\bar{k}) = \mt{l}(i)$, and add $k$ and $\bar{k}$ to $\mt{D}$
\STATE \quad \quad \textbf{end for}
\STATE \quad \quad \textbf{for} each unlabeled arc $k \in \widetilde{\mcl{J}}$ that is incident to $i$ \textbf{do}
\STATE \quad \quad \quad \textbf{if} $i = t(k)$ \textbf{then}
\STATE \quad \quad \quad \quad set $\mt{l}(k) = \neg \mt{l}(i)$, and add $k$ to $\mt{D}$
\STATE \quad \quad \quad \textbf{else}
\STATE \quad \quad \quad \quad set $\mt{l}(k) = \mt{l}(i)$, and add $k$ to $\mt{D}$
\STATE \quad \quad \quad \textbf{end if}
\STATE \quad \quad \textbf{end for}
\STATE \quad \textbf{end if}
\STATE \quad \textbf{if} $i \in \widetilde{\mcl{J}}$ \textbf{then}
\STATE \quad \quad \textbf{for} each unlabeled node $k = t(i) \in \widetilde{\mcl{I}}^j$ and $\bar{k} = h(i) \in \widetilde{\mcl{I}}^j$ for $j \in M$ \textbf{do}
\STATE \quad \quad \quad set $\mt{l}(k) = \mt{l}(i)$, set $\mt{l}(\bar{k}) = \neg \mt{l}(i)$, and add $k$ and $\bar{k}$ to $\mt{D}$
\STATE \quad \quad \textbf{end for}
\STATE \quad \quad \textbf{for} each unlabeled node $k = t(i) \in \widetilde{\mcl{I}}$ and $\bar{k} = h(i) \in \widetilde{\mcl{I}}$ \textbf{do}
\STATE \quad \quad \quad set $\mt{l}(k) = \neg \mt{l}(i)$, set $\mt{l}(\bar{k}) = \mt{l}(i)$, and add $k$ and $\bar{k}$ to $\mt{D}$
\STATE \quad \quad \textbf{end for}
\STATE \quad \textbf{end if}
\STATE \quad remove $i$ from $\mt{D}$
\ENDWHILE
\STATE \textbf{for} each $i \in \widetilde{\mcl{I}}^j$ for each $j \in M$ \textbf{do}
\STATE \quad add $i^{\mt{l}(i)}$ to $\mcl{I}_j$
\STATE \textbf{end for}
\STATE \textbf{for} each $i \in \widetilde{\mcl{I}}$ \textbf{do}
\STATE \quad add $i^{\mt{l}(i)}$ to $\bar{\mcl{I}}$
\STATE \textbf{end for}
\STATE \textbf{for} each $i \in \widetilde{\mcl{J}}$ \textbf{do}
\STATE \quad \textbf{if} $i^{\mt{l}(i)} = +$ \textbf{then}
\STATE \quad \quad add $i$ to $\mcl{J}$
\STATE \quad \textbf{else}
\STATE \quad \quad add $i$ to $\bar{\mcl{J}}$
\STATE \quad \textbf{end if}
\STATE \textbf{end for}
\end{algorithmic}
\end{algorithm}
\medskip
In view of Theorem~\ref{thm:primal-forest-converse}, once we identify a forest structure with the desired conditions, we can use the steps in Algorithm~\ref{alg:primal-ECR} to determine the sign label of each constraint in the corresponding EC\&R\xspace assignment by following a path that starts from the arc associated with the base equality and reaches the node or arc associated with that constraint.
We illustrate this approach in the following example.
\medskip
\begin{example} \label{ex:primal-forest}
Consider set $\mcl{S}$ with $m = 2$ and $\Xi$ that represents the primal network model corresponding to the graph $\mr{G} = (\mr{V},\mr{A})$ shown in Figure~\ref{fig:primal-tree}.
Similarly to Example~\ref{ex:primal-tree}, we refer to each arc in this network as a pair $(i,j)$ of its tail node $i$ and its head node $j$, and denote its corresponding flow variable as $x_{i,j}$.
Assume that we are interested in finding EC\&R\xspace assignments for class-$l^{+}$ where the base equality $l$ contains the bilinear term $y_1x_{1,5}$, i.e., $i' = (1,5)$ and $j' = 1$.
According to Theorem~\ref{thm:primal-forest-converse}, we need to identify a forest structure that satisfies the conditions (i)--(iii) of Propositions~\ref{prop:primal-forest} and \ref{prop:primal-forest-pairwise}.
In the parallel network $\mr{G}^1$, we select the forest $\bar{\mr{F}}^1$ composed of the tree $\bar{\mr{T}}^1_1$ with the node set $\{1, 2, 6\}$ and the tree $\bar{\mr{T}}^1_2$ with the node set $\{8\}$.
In the parallel network $\mr{G}^2$, we select the forest $\bar{\mr{F}}^2$ composed of the tree $\bar{\mr{T}}^2_1$ with the node set $\{1, 4\}$.
Therefore, we can form the set $\widetilde{\mcl{I}}^1 = \{1, 2, 6, 8\}$ and $\widetilde{\mcl{I}}^2 = \{1, 4\}$.
We select the connection node set $\widetilde{\mcl{I}} = \{3\}$, and the connection arc set $\widetilde{\mcl{J}} = \{(8,4)\}$.
It is easy to verify that these sets satisfy the conditions (i)--(iii) of Propositions~\ref{prop:primal-forest} and \ref{prop:primal-forest-pairwise}.
Next, we determine the label of each node and arc in the above sets through applying Algorithm~\ref{alg:primal-ECR}.
According to line 2 of this algorithm, we set $\mt{l}(1,5) = +$ in parallel network $\mr{G}^1$, and select $k = t(1,5) = 1 \in \widetilde{\mcl{I}}^1$.
It follows from line 5 of the algorithm that $\mt{l}(1) = -$ and $k$ is added to $\mt{D}$.
Following lines 10--13, we obtain for $\widetilde{\mcl{I}}^1$ that $\mt{l}(2) = \mt{l}(6) = -$, and for $\widetilde{\mcl{I}}$ that $\mt{l}(3) = +$.
Then, from lines 26--28 of Algorithm~\ref{alg:primal-ECR}, we deduce for $\widetilde{\mcl{I}}^2$ that $\mt{l}(4) = -$, and from lines 11--13 for $\widetilde{\mcl{I}}^2$, we obtain that $\mt{l}(1) = -$.
Lines 32--34 imply that $\mt{l}(8,4) = -$ for $\widetilde{\mcl{J}}$.
Lastly, we conclude from lines 38--40 that $\mt{l}(8) = -$ for $\widetilde{\mcl{I}}^1$.
As a result, following lines 47--59 of the algorithm, we obtain the EC\&R\xspace assignment $\big[\{1^-, 2^-, 6^-, 8^-\},\{1^-, 4^-\},\{3^+\} \big| \emptyset, \{(8,4)\}\big]$ for class-$l^+$.
Based on this assignment, we multiply the negative flow-balance constraints at nodes $1, 2, 6, 8$ with $y_1$, the negative flow-balance constraints at nodes $1, 4$ with $y_2$, the positive flow-balance constraint at node $3$ with $1-y_1-y_2$, and the upper bound constraint on variable $x_{8,4}$ with $1-y_1-y_2$, and we aggregate them with the base bilinear equality corresponding to arc $(1,5)$ with weight 1 to obtain the aggregated inequality
\begin{multline*}
-z_{1,5} - y_1 x_{4,5} - y_1 x_{3,7} + y_1 x_{4,3} - y_2 x_{1,5} + y_2 x_{4,5} + y_2 x_{2,3} - y_2 x_{3,7}\\
+ (f_1 + f_2 + f_3 + f_6 + f_8 - u_{8,4})y_1 + (f_1 + f_3 + f_4 - u_{8,4}) y_2 \\
+ x_{3,7} -x_{2,3} - x_{4,3} - x_{8,4} - f_3 + u_{8,4} \geq 0,
\end{multline*}
where $f_i$ denotes the supply/demand value at node $i$, and $u_{i,j}$ denotes the upper bound for variable $x_{i,j}$.
Following Remark~\ref{rem:reduce-bilinear}, we may relax each of the seven remaining bilinear terms into two possible linear expressions, leading to 128 total EC\&R\xspace inequalities.
If implemented inside a separation oracle, we can use Remark~\ref{rem:1-separation} to find the most violated among these 128 inequalities in linear time.
\end{example}
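\medskip
To complement the example, the following minimal Python sketch (our illustrative rendering, not the authors' implementation) captures the mechanical core of Algorithm~\ref{alg:primal-ECR}: a breadth-first propagation of sign labels from the first labeled member through the remaining forest members, where each step either copies or negates the label.
The copy/flip relation between adjacent or incident members, which the algorithm derives from the membership sets and tail/head incidences, is supplied here by the caller as an assumed input.
\begin{verbatim}
from collections import deque

def propagate_labels(edges, start, start_label):
    """edges: dict mapping a frozenset {a, b} of adjacent or incident
    members (tree nodes, connection nodes, connection arcs) to True if
    the sign label flips across that pair and False if it is copied;
    start: the member labeled first; start_label: '+' or '-'."""
    label = {start: start_label}
    queue = deque([start])
    while queue:  # mirrors the main while-loop of the algorithm
        i = queue.popleft()
        for pair, flips in edges.items():
            if i not in pair:
                continue
            (k,) = pair - {i}
            if k in label:  # member already labeled, skip
                continue
            label[k] = ('-' if label[i] == '+' else '+') if flips \
                       else label[i]
            queue.append(k)
    return label

# Tiny usage sketch: a chain of three members where the sign flips
# across the second pair only.
labels = propagate_labels(
    edges={frozenset({'n1', 'n2'}): False,   # copy the label
           frozenset({'n2', 'n3'}): True},   # flip the label
    start='n1', start_label='-')
# labels == {'n1': '-', 'n2': '-', 'n3': '+'}
\end{verbatim}
Under this abstraction, the final assembly of the assignment in the closing loops of the algorithm amounts to reading off the computed labels.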
\medskip
We conclude this section with a remark on the practical implementation of the proposed EC\&R\xspace inequalities.
While there is an efficient separation algorithm to find a separating EC\&R\xspace inequality among those created from a given EC\&R\xspace assignment as noted in Remark~\ref{rem:1-separation}, the choice of the class of an EC\&R\xspace assignment and its possible forest structure in the underlying network can lead to a large pool of candidates to consider during a branch-and-cut approach.
Note that each EC\&R\xspace inequality is obtained through an aggregation of the constraints of $\mcl{S}$ with proper weights.
In particular, given an EC\&R\xspace assignment $\big[\mcl{I}_1,\dotsc,\mcl{I}_m,\bar{\mcl{I}} \big| \mcl{J},\bar{\mcl{J}}\big]$ for class-$l^{\pm}$, we aggregate the base inequality of the form $f_l(\vc{x},\vc{y},\vc{z}) \geq 0$ with constraints of the general form $h(\vc{y})g(\vc{x}) \geq 0$, where $h(\vc{y})$ represents the aggregation weight that could be $y_j$ or $1-\sum_{j \in M} y_j$, and where $g(\vc{x}) \geq 0$ denotes a linear side constraint that could be the flow-balance or variable bound constraints.
In most branch-and-cut approaches, the starting relaxation of the problem contains all linear side constraints on $\vc{x}$ and $\vc{y}$.
It follows that an optimal solution $(\bar{\vc{x}};\bar{\vc{y}};\bar{\vc{z}})$ of such relaxation that is to be separated satisfies $h(\bar{\vc{y}})g(\bar{\vc{x}}) \geq 0$ for all valid choices of function $h(\vc{y})$ and constraint $g(\vc{x}) \geq 0$.
Therefore, for the resulting aggregated inequality to be violated at a point $(\bar{\vc{x}};\bar{\vc{y}};\bar{\vc{z}})$, the base inequality must be violated at that point, i.e., $f_l(\bar{\vc{x}},\bar{\vc{y}},\bar{\vc{z}}) < 0$.
This observation can be used to select the class and sign of the EC\&R\xspace assignment to be generated during separation process.
To this end, we may sort the values $\Psi_k = |\bar{y}_j\bar{x}_i - \bar{z}_k|$ for all $(i,j,k)$ such that $A^k_{ji} = 1$, and choose class $k$ as that associated with the largest $\Psi_k$, with the class sign $+$ if $\bar{y}_j\bar{x}_i - \bar{z}_k < 0$, and class sign $-$ otherwise.
This perspective can shed light on the observation that the EC\&R\xspace inequalities obtained from \textit{fewer} aggregations tend to be more effective in practice as noted in \cite{davarnia:ri:ta:2017} and also observed in our experiments in Section~\ref{sec:computation}.
Specifically, the addition of constraints $h(\vc{y})g(\vc{x}) \geq 0$ in the aggregation can increase the left-hand-side value in the aggregated inequality when $h(\bar{\vc{y}})g(\bar{\vc{x}}) > 0$, which could reduce the chances of obtaining a violated aggregated inequality.
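As an illustration, the following short Python sketch (assuming a simple dictionary-based data layout; the authors' implementation is not shown) renders this class-selection rule.
\begin{verbatim}
def select_class(triples, xbar, ybar, zbar):
    """triples: list of (i, j, k) with A^k_{ji} = 1; xbar, ybar, zbar:
    dictionaries holding the current relaxation values of x, y, z."""
    def psi(t):
        i, j, k = t
        return abs(ybar[j] * xbar[i] - zbar[k])
    i, j, k = max(triples, key=psi)  # triple with the largest Psi_k
    # class sign rule from the text: '+' if ybar_j*xbar_i - zbar_k < 0
    sign = '+' if ybar[j] * xbar[i] - zbar[k] < 0 else '-'
    return k, sign
\end{verbatim}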
\medskip
Another observation that can help in choosing the forest structures concerns the relaxation step in the EC\&R\xspace procedure.
As described in Remark~\ref{rem:reduce-bilinear}, each remaining bilinear term $y_jx_i$ can be relaxed using either the bound constraints or the bilinear constraints.
The former case is equivalent to aggregating the inequality with a constraint of the form $h(\vc{y})g(\vc{x}) \geq 0$ where $h(\vc{y}) = y_j$ and $g(\vc{x}) \in \{x_i \geq 0, u_i-x_i \geq 0\}$, for which the previous argument holds about achieving a violation.
For the latter case, on the other hand, we aggregate the inequality with a bilinear constraint of the form $\pm(y_jx_i - z_k) \geq 0$ for $i,j,k$ such that $A^k_{ji} = 1$, which can potentially lead to a violation depending on the value of $\Psi_k = |\bar{y}_j\bar{x}_i - \bar{z}_k|$.
As a result, we might choose forest structures that contain the nodes incident to arcs $i \in \mr{A}$ corresponding to the largest values of $|\bar{y}_j\bar{x}_i - \bar{z}_k|$.
In our computational experiments presented in Section~\ref{sec:computation}, we use the above-mentioned heuristics in our separation oracle to efficiently select the class of EC\&R\xspace assignments and their forest structures, which yields promising results.
\section{Computational Experiments} \label{sec:computation}
In this section, we present preliminary computational results to evaluate the impact of the EC\&R\xspace cutting planes generated through the results of Section~\ref{sec:primal}.
We study several basic network structures, from both dense and sparse classes, that can be used to obtain different relaxations for any network problem by isolating those structures in the underlying model.
These structures include bipartite, clique, and cycle, as shown in Figures~\ref{fig:network1}--\ref{fig:network3}, to represent different density levels for the underlying graph.
To form set $\mcl{S}$ with network model $\Xi$, for each structure, we generate supply/demand vectors in such a way that the underlying network problem is feasible.
In the following, we give details about the data generation.
For $\Delta$, we consider two scenarios for each structure, one for the case with $m = 1$, and the other for the case with $m=2$.
The former case is fundamental as it can always be used as a basic relaxation even when there are multiple $y$ variables in the model.
The latter case is of particular importance among instances of $\mcl{S}$ with multiple $y$ variables in the simplex $\Delta$, as it represents the pairwise conflict between variables.
Specifically, when two binary variables $y_i$ and $y_j$ cannot be positive at the same time, which can also be modeled as complementarity constraints of the form $y_i y_j = 0$ for each pair $i,j$ that forms an edge of a so-called \textit{conflict graph} defined on the $y$ variables, we may use such formulations with $m=2$.
As the base relaxation for set $\mcl{S}$, we consider its \textit{linearized} LP relaxation, where $y$ variables are continuous, and the bilinear constraints $y_jx_i - z_k = 0$, for all $k \in K$ and $(i,j) \in N \times M$ such that $A^k_{ji} = 1$, are replaced with the well-known McCormick bounds $z_k \geq 0$, $z_k \geq u_i y_j + x_i - u_i$, $z_k \leq x_i$, and $z_k \leq u_iy_j$.
We denote this set by $\mcl{S}_L$.
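For concreteness, the following small Python sketch (illustrative and solver-independent; it is not tied to any modeling API) states the four McCormick inequalities used above for a single product $z_k = y_jx_i$ with $x_i \in [0,u_i]$ and $y_j \in [0,1]$, and checks that the exact product satisfies them.
\begin{verbatim}
def mccormick_constraints(u_i):
    """Return the four McCormick envelope inequalities as predicates
    on a candidate point (x_i, y_j, z_k)."""
    return [
        lambda x, y, z: z >= 0,                  # z_k >= 0
        lambda x, y, z: z >= u_i * y + x - u_i,  # z_k >= u_i y_j + x_i - u_i
        lambda x, y, z: z <= x,                  # z_k <= x_i
        lambda x, y, z: z <= u_i * y,            # z_k <= u_i y_j
    ]

# Sanity check: the exact product satisfies all four envelopes.
cons = mccormick_constraints(u_i=10.0)
x, y = 4.0, 0.5
assert all(c(x, y, x * y) for c in cons)
\end{verbatim}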
To evaluate the impact of adding EC\&R\xspace cuts on strengthening $\mcl{S}_L$, we optimize a linear function in $x$ and $z$ variables over this set:
\begin{equation}
\max\{\sum_{i\in N} r_ix_i + \sum_{k\in K} c_kz_k | (\vc{x},\vc{y},\vc{z}) \in \mcl{S}_L\}, \label{eq:OP1}
\end{equation}
which provides an LP relaxation for the original problem
\begin{equation}
\max\{\sum_{i\in N} r_ix_i + \sum_{k\in K} c_kz_k | (\vc{x},\vc{y},\vc{z}) \in \mcl{S}\}. \label{eq:OP2}
\end{equation}
We use the special tree structure of Section~\ref{subsec:primal-single} for the case with $m=1$, and the special forest structure of Section~\ref{subsec:primal-multi} for the case with $m = 2$, to produce EC\&R\xspace cutting planes that can be added to $\mcl{S}_L$ to improve the dual bound in the optimization problem.
Our experiments show the effectiveness of the proposed EC\&R\xspace inequalities in improving the classical McCormick bounds.
While the EC\&R\xspace cuts that we obtain are valid both when the $y$ variables are binary and when they are continuous, we consider the binary case in these computational experiments so that we can obtain the optimal value of the original problem \eqref{eq:OP2} and use it to measure the bound improvement achieved from adding the EC\&R\xspace cuts to \eqref{eq:OP1}.
This comparison is possible because the McCormick formulation is an exact reformulation of the original problem when the $y$ variables are binary: if $y_j = 0$, the bounds $0 \leq z_k \leq u_iy_j$ force $z_k = 0 = y_jx_i$, and if $y_j = 1$, the bounds $z_k \leq x_i$ and $z_k \geq u_iy_j + x_i - u_i = x_i$ force $z_k = x_i = y_jx_i$.
For these experiments, the codes are written in Python 3.7.8, and the optimization problems are solved using CPLEX 20.1.0 at its default settings.
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.45\linewidth}
\centering
\includegraphics[scale=0.15]{P2}
\caption{Bipartite structure}
\label{fig:network1}
\end{subfigure}
\hspace{0.05\linewidth}
\begin{subfigure}[b]{0.45\linewidth}
\centering
\includegraphics[scale=0.15]{P3}
\caption{Clique structure}
\label{fig:network2}
\end{subfigure}
\begin{subfigure}[b]{0.45\linewidth}
\centering
\includegraphics[scale=0.15]{P4}
\caption{Cycle structure}
\label{fig:network3}
\end{subfigure}
\caption{Basic network structures to evaluate the effectiveness of EC\&R\xspace cuts}
\label{fig:network}
\end{figure}
\subsection{Bipartite Structure.}
In this section, we perform computational experiments on bipartite structures; see Figure~\ref{fig:network1}.
This structure represents a mid-level density for the underlying graph.
We consider three problem sizes, where the number of nodes in the underlying network is $10$, $30$, and $50$.
For each problem size, we create $10$ randomly generated instances for set $\mcl{S}$ with the following specifications for its underlying network $\mr{G} = (\mr{V}, \mr{A})$.
Because the network is bipartite, we place half of the nodes in the left partition and half in the right partition, and include a directed arc from each node in the left partition to each node in the right partition. For problems with sizes $10$, $30$, and $50$, each arc is assigned a capacity that is randomly generated from a discrete uniform distribution between $(0,100)$, $(0,100)$, and $(0,200)$, respectively.
The coefficients of the $x$ variables in the objective function are randomly generated from a discrete uniform distribution between $(0,20)$, $(0,20)$, and $(0,30)$, respectively, for problem sizes $10$, $30$, and $50$.
For all instances, the coefficient of $z$ variables in the objective function is randomly generated from a uniform interval $(-10,10)$, and the supply/demand at each node is selected randomly from $(-200,200)$ in such a way that the supply and demand are balanced.
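The following Python sketch is our schematic reading of this setup (the authors' generator is not shown; indexing one $z$ coefficient per arc and balancing by offsetting the last node are simplifying assumptions).
\begin{verbatim}
import random

def bipartite_instance(n_nodes=10, cap_hi=100, r_hi=20):
    """Generate one random bipartite instance per the stated ranges."""
    left = range(n_nodes // 2)
    right = range(n_nodes // 2, n_nodes)
    arcs = [(i, j) for i in left for j in right]        # complete bipartite
    cap = {a: random.randint(0, cap_hi) for a in arcs}  # arc capacities
    r = {a: random.randint(0, r_hi) for a in arcs}      # x-coefficients
    c = {a: random.uniform(-10, 10) for a in arcs}      # z-coefficients
    f = {v: random.randint(-200, 200) for v in range(n_nodes - 1)}
    f[n_nodes - 1] = -sum(f.values())  # offset last node to balance
    return arcs, cap, r, c, f
\end{verbatim}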
\medskip
As noted earlier, we consider two cases for the number $m$ of $y$ variables in the simplex.
Table~\ref{tab:bipartite-1} shows the results for the case with $m=1$.
The first column contains the problem size, and the second column shows the instance number.
The third column represents the optimal value of \eqref{eq:OP2} obtained by setting the $y$ variable as binary in \eqref{eq:OP1}.
The fourth column shows the optimal value of the linearized LP relaxation of \eqref{eq:OP2} given in \eqref{eq:OP1}.
The next two columns under ``Full EC\&R\xspace" show the result of adding all violated EC\&R\xspace inequalities obtained from tree structures according to Theorem~\ref{thm:primal-tree-converse} with up to two cancellations.
These cuts are added in rounds: after the LP relaxation is solved, we separate the current optimal solution and repeat until the improvement in the optimal value drops below 1\% for problems with sizes 10 and 30, and below 2\% for problems with size 50 due to its higher computational cost.
To find the most violated EC\&R\xspace inequalities produced from an EC\&R\xspace assignment, we use the technique in Remark~\ref{rem:1-separation}.
The column ``Gap" contains the gap improvement obtained by adding these EC\&R\xspace inequalities over the optimal value of \eqref{eq:OP1}.
The next column shows the total solution time to add these inequalities.
The column ``Gap" under ``Separation EC\&R\xspace" includes the result of adding the above EC\&R\xspace inequalities through a separation oracle according to that discussed at the end of Section~\ref{sec:primal}.
In particular, for a current optimal solution $(\bar{\vc{x}}; \bar{\vc{y}}; \bar{\vc{z}})$, we consider the EC\&R\xspace assignment class and sign associated with the 30 largest values for $\Psi_k = |\bar{y}_j\bar{x}_i - \bar{z}_k|$ for all $(i,j,k)$ such that $A^k_{ji} = 1$.
We add the resulting EC\&R\xspace inequalities in loops as discussed above.
The last column shows the solution time when using this separation method.
The last row for each problem size reports the average values over the 10 random instances.
The results in Table~\ref{tab:bipartite-1} have three important implications.
First, they show the effectiveness of our proposed EC\&R\xspace inequalities based on the tree structures in improving the gap closure and strengthening the classical McCormick relaxation.
Second, they confirm the general observation that EC\&R\xspace inequalities with fewer aggregations (up to three in these experiments) tend to be the most effective, as they account for over 90\% of the total gap closure for most instances.
Third, they demonstrate the effectiveness of the proposed separation method, which achieves similar gap improvement levels in much less time compared to the case without separation.
These observations show promise for an efficient implementation of the EC\&R\xspace technique to solve practical problems.
\medskip
\begin{table}[!ht]
\centering
\caption{Evaluating EC\&R\xspace cutting planes for the bipartite structure with $m = 1$}
\label{tab:bipartite-1}
\begin{tabular}{|c|c||c|c||c|c||c|c|}
\hline
Size & \# & Opt & LP & \multicolumn{2}{c||}{Full EC\&R\xspace} & \multicolumn{2}{c|}{Separation EC\&R\xspace} \\ \cline{5-8}
& & & & Gap (\%)& Time (s) & Gap (\%)& Time (s) \\ \hline
10 & 1 & 3501 & 3035.53 & 99 & 0.44 & 99 & 0.30 \\
& 2 & 1982.63 & 1828.85 & 99 & 0.27 & 99& 0.15 \\
& 3 & 2846 & 2376.23 & 91 & 0.84 & 90& 0.62 \\
& 4 & 3160.86 & 3056.15 & 99 & 0.29 & 99& 0.14 \\
& 5 & 5929 & 5825.98 & 99 & 0.24 & 99& 0.15 \\
& 6 & 4487.71 & 3743.12 & 74 & 0.89 & 74& 0.65 \\
& 7 & 2009.02 & 1858.60 & 99 & 0.33 & 99& 0.37 \\
& 8 & 4531.55 & 4215.66 & 91 & 0.87 & 83& 0.61 \\
& 9 & 3238.87 & 3015.88 & 99 & 0.45 & 99& 0.48 \\
& 10 & 2302 & 2119.27 & 98 & 0.49 & 91& 0.47 \\
\cline{2-8}
& avg. && & 95 & 0.51 & 93 & 0.40 \\
\hline
30 & 1 & 2219.82 &1976.29 & 99 & 22.85 & 99 & 3.02 \\
& 2 & 1619.88 & 1444.26 & 99 & 22.79 & 99 & 1.49 \\
& 3 & 3678.58 & 3236.11 & 99 & 23.49 & 99 & 2.95 \\
& 4 & 3010.56 & 2544.60 & 99 & 41.96 & 92 & 5.96 \\
& 5 & 1959.27 & 1777.84 & 99 & 22.68 & 80 & 3.11 \\
& 6 & 4315.31 & 3379.29 & 90 & 81.48 & 99 & 7.67 \\
& 7 & 854.26 & 709.06 & 99 & 22.70 & 85 & 1.58 \\
& 8 & 1604.93 & 1324.22 & 99 & 23.01 & 99 & 2.97 \\
& 9 & 309.39 & 105.43 & 99 & 21.95 & 99 & 2.76 \\
& 10 & 3224.44 & 2980.87 & 99 & 21.61 & 99 & 3.17 \\
\cline{2-8}
& avg. && & 98 & 30.45 & 95 & 3.47 \\
\hline
50 & 1 & 2378.15 & 2012.32 & 99 & 170.22 & 99 & 14.21 \\
& 2 & 1798.19 & 1544.38 & 99 & 179.22 & 99 & 9.23 \\
& 3 & 1100.05 & 758.45 & 99 & 179.77 & 96 & 14.86 \\
& 4 &-1320.24 & -2103.91 & 99 & 177.30 & 99 & 9.67 \\
& 5 &-2517.20 &-2941.66 & 99 & 185.33 & 99 & 9.53 \\
& 6 &-3396.62 &-3607.08 & 99 & 183.70 & 99 & 4.64 \\
& 7 & 515.27 & 7.30 & 99 & 188.80 & 99 & 9.81 \\
& 8 & 668.71 &-126.09 & 99 & 181.87 & 99 & 13.80 \\
& 9 & 442.04 & 44.26 & 99 & 185.06 & 99 & 4.59 \\
& 10 &-1987.83 &-2479.73 & 99 & 182.14 & 99 & 13.59 \\
\cline{2-8}
& avg. && &99& 181.34 &99& 10.39 \\
\hline
\end{tabular}
\end{table}
In Table~\ref{tab:bipartite-2}, we consider the case for $\mcl{S}$ where $m=2$.
The first and second columns contain the problem size and instance number, respectively.
The third and fourth columns show the optimal value of the original problem and its LP relaxation, respectively.
The next four columns under ``Tree EC\&R\xspace" show the result of adding EC\&R\xspace inequalities with up to three aggregations (two cancellations) obtained from the one-variable relaxations of set $\mcl{S}$ where only one $y$ variable is considered.
For this approach, we use the EC\&R\xspace results of Theorem~\ref{thm:primal-tree-converse} to identify the tree structures for each one-variable relaxation and add the resulting cutting planes for each relaxation separately through loops as previously described.
The subcolumns under the ``Full" and ``Separation" headers contain the gap closure and the solution time to add these inequalities without and with the aforementioned separation oracle, respectively.
For these instances, we consider the 60 largest values of $\Psi_k$ in the separation procedure.
The next four columns under ``Forest EC\&R\xspace" include the result of adding EC\&R\xspace inequalities with up to three aggregations obtained for $\mcl{S}$ where both $y$ variables are
considered in their original simplex.
To add EC\&R\xspace cutting planes, we consider the forest structures according to Theorem~\ref{thm:primal-forest-converse}.
The subcolumns under the ``Full" and ``Separation" headers contain the gap closure and the solution time to implement these cutting planes without and with the separation oracle, respectively.
It is evident from these results that the EC\&R\xspace cuts obtained from the forest structures that involve all $y$ variables outperform those obtained from the tree structures that consider the $y$ variables individually, showing the effectiveness of the class of EC\&R\xspace inequalities with the pairwise cancellation property.
Further, these results show the notable impact of using a separation oracle to produce the EC\&R\xspace inequalities on reducing the solution time, especially for larger problems, where the time savings amount to orders of magnitude.
\begin{table}[!ht]
\centering
\caption{Evaluating EC\&R\xspace cutting planes for the bipartite structure with $m = 2$}
\label{tab:bipartite-2}
\resizebox{1.00\textwidth}{!}{
\begin{tabular}{|c|c||c|c||c|c||c|c||c|c||c|c|}
\hline
Size & \# & Opt & LP & \multicolumn{4}{c||}{Tree EC\&R\xspace} & \multicolumn{4}{c|}{Forest EC\&R\xspace} \\ \cline{5-12}
& & & & \multicolumn{2}{c||}{Full} & \multicolumn{2}{c||}{Separation} & \multicolumn{2}{c||}{Full}& \multicolumn{2}{c|}{Separation}\\ \cline{5-12}
& & & & Gap & Time & Gap & Time & Gap & Time & Gap & Time \\ \hline
\hline
10 & 1 & 3820.43 & 3304.37 & 72 & 2.68 & 66 & 0.90 & 80 & 4.19 & 76 & 3.78 \\
& 2 & 4670 & 3722.12 & 94 & 2.74 & 90 & 1.26 & 99 & 4.06 & 96 & 2.85 \\
& 3 & 3046.69 & 2296.67 & 88 & 2.75 & 86 & 1.28 & 99 & 2.86 & 99 & 3.88 \\
& 4 & 2091.19 & 1617.27 & 99 & 1.95 & 99 & 1.02 & 99 & 1.50 & 99 & 1.89 \\
& 5 & 4744.34 & 3833.32 & 86 & 2.06 & 74 & 1.25 & 96 & 4.15 & 96 & 3.81 \\
& 6 & 2994.59 & 2247.07 & 93 & 2.69 & 93 & 1.29 & 99 & 5.42 & 98 & 4.61 \\
& 7 & 4790.34 & 4366.39 & 99 & 1.72 & 99 & 0.65 & 99 & 1.56 & 99 & 1.98 \\
& 8 & 6438.68 & 5781.10 & 99 & 1.58 & 99 & 0.77 & 99 & 1.50 & 99 & 1.91 \\
& 9 & 995.54 & 514.49 & 81 & 1.73 & 55 & 0.92 & 99 & 1.47 & 99 & 1.90 \\
& 10& 3127.59 & 2985.04 & 99 & 1.35 & 99 & 0.36 & 99 & 1.53 & 99 & 0.94 \\
\cline{2-12}
&avg. &&& 91 & 2.13 & 86 & 0.97 & 97 & 2.82 & 96 & 2.75 \\
\hline
30 & 1 & -243.66 &-3690.76 & 28 & 165.79 & 17 & 9.86 & 49 & 497.22 & 31 & 36.26 \\
& 2 & 771.06 &-2039.13 & 16 & 172.91 & 09 & 10.42 & 49 & 505.36 & 29 & 27.15 \\
& 3 &-1103.88 &-3888.50 & 40 & 175.02 & 25 & 10.43 & 65 & 498.75 & 39 & 27.14 \\
& 4 & 2397.13 &-1259.93 & 30 & 170.07 & 24 & 10.24 & 58 & 495.05 & 33 & 27.40 \\
& 5 & 35.46 &-2069.69 & 59 & 164.67 & 40 & 19.38 & 78 & 486.38 & 50 & 54.06 \\
& 6 &-535.73 &-3742.10 & 31 & 209.90 & 22 & 17.33 & 64 & 494.07 & 43 & 45.65 \\
& 7 & 510.26 &-2121.24 & 43 & 174.23 & 27 & 13.67 & 85 & 500.75 & 68 & 64.31 \\
& 8 & 161.23 &-2120.39 & 29 & 168.76 & 16 & 9.93 & 67 & 614.68 & 36 & 35.82 \\
& 9 & 2727.77 & -355.00 & 58 & 172.26 & 52 & 14.50 & 89 & 494.68 & 71 & 55.88 \\
&10 & 1615.33 & -510.79 & 49 & 209.28 & 36 & 13.17 & 76 & 497.71 & 47 & 45.85 \\
\cline{2-12}
& avg.&&& 41 & 178.29 & 27 & 12.89 & 68 & 508.46 & 45 & 41.95 \\
\hline
50 & 1 &-4103.47 &-9633.82 & 41 & 1499.04 & 24 & 37.56 & 64 & 4248.47 & 41 & 195.21 \\
& 2 & -942.02 &-6275.92 & 60 & 1110.56 & 47 & 47.72 & 83 & 3960.46 & 62 & 167.52 \\
& 3 &-3652.90 &-11173.54& 39 & 1085.44 & 26 & 28.84 & 60 & 3923.47 & 37 & 131.14 \\
& 4 & -940.47 &-5652.85 & 61 & 1006.40 & 48 & 56.31 & 80 & 3729.93 & 60 & 174.41 \\
& 5 & 2006.51 &-3895.38 & 45 & 1024.55 & 35 &48.22 & 65 & 3767.51 & 38 & 100.05 \\
& 6 &-3040.65 &-10676.33& 45 & 999.34 & 32 &37.38 & 62 & 2918.44 & 41 & 100.59 \\
& 7 &-2074.94 &-11708.06& 35 & 1029.83 & 28 &37.67 & 52 & 2955.59 & 39 & 125.89 \\
& 8 &-4621.66 &-9875.65 & 42 & 1046.62 & 29 &38.16 & 67 & 3872.93 & 38 & 125.75 \\
& 9 &-1622.87 &-7510.43 & 48 &1058.28 & 36 &28.94 & 72 & 3881.18 & 46 & 125.50 \\
&10 &-4396.41 &-11995.74& 24 &1010.64 & 17 &37.90 & 39 & 2965.72 & 26 & 161.27 \\
\cline{2-12}
&avg.&&& 44 & 1087.07& 32 &39.87& 64 & 3622.37 & 42 & 134.62 \\
\hline
\end{tabular}
}
\end{table}
\subsection{Clique Structure.}
In this section, we perform computational experiments on clique structures; see Figure~\ref{fig:network2}.
This structure represents a high-level density for the underlying graph.
We consider three problem sizes, where the number of nodes in the underlying network is $10$, $20$, and $30$.
For each problem size, we create $10$ randomly generated instances for set $\mcl{S}$ with the following specifications for its underlying network $\mr{G} = (\mr{V}, \mr{A})$.
For the clique networks, there is an arc between each pair of nodes.
We determine the direction of each arc through a random binary value.
Each arc is assigned a capacity that is randomly generated from a discrete uniform distribution between $(0,100)$.
The coefficient of the $z$ variables in the objective function is randomly generated from a uniform distribution between $(-10,10)$, to which we add a random number between $0$ and $0.1$ multiplied by the number of nodes in the network.
This second random element is added to widen the optimality gap for a meaningful comparison of the different approaches.
The coefficient for the $x$ variables in the objective function is randomly generated from a discrete uniform distribution between $(0,20)$, and the supply/demand at each node is selected randomly from $(-200,200)$ in such a way that supply and demand are balanced.
\begin{table}[!ht]
\centering
\caption{Evaluating EC\&R\xspace cutting planes for the clique structure with $m = 1$}
\label{tab:clique-1}
\begin{tabular}{|c|c||c|c||c|c||c|c|}
\hline
Size & \# & Opt & LP & \multicolumn{2}{c||}{Full EC\&R\xspace} & \multicolumn{2}{c|}{Separation EC\&R\xspace} \\ \cline{5-8}
& & & & Gap (\%)& Time (s) & Gap (\%)& Time (s) \\ \hline
10 & 1 & 6619.00 & 6291.72 & 99 & 2.91 & 78 & 0.30 \\
& 2 & 5027.37 & 4925.55 & 99 & 4.28 & 00 & 0.08 \\
& 3 & 6118.96 & 5957.02 & 99 & 1.57 & 99 & 0.23 \\
& 4 & 3494.89 & 3391.59 & 99 & 1.54 & 99 & 0.10 \\
& 5 & 2240.41 & 1843.81 & 99 & 1.59 & 99& 0.17 \\
& 6 & 1765.00 & 1603.52 & 99 & 1.52 & 81& 0.45 \\
& 7 & 4129.00 & 3939.31 & 99 & 1.51 & 99& 0.19 \\
& 8 & 5514.99 & 5128.99 & 99 & 1.59 & 88& 0.40 \\
& 9 & 8396.64 & 8260.95 & 99 & 1.51 & 99& 0.10 \\
& 10 & 4402.24 & 4156.90 & 99 & 1.48 & 92& 0.25 \\
\cline{2-8}
& avg. && & 99 & 1.95 & 83 & 0.23 \\
\hline
20 & 1 & 6176.00 & 6008.36 & 99 & 36.31 & 99 & 1.69 \\
& 2 & 8734.00 & 8530.56 & 99 & 34.06 & 99 & 1.66 \\
& 3 &10195.00 & 9492.98 & 86 & 99.73 & 73 & 6.84 \\
& 4 & 7020.00 & 6626.42 & 99 & 67.61 & 97 & 8.53 \\
& 5 & 7917.00 & 7339.43 & 99 & 97.31 & 95 & 12.27 \\
& 6 & 9154.00 & 8571.02 & 99 & 35.18 & 99 & 1.78 \\
& 7 & 9117.00 & 8849.31 & 99 & 35.13 & 99 & 1.79 \\
& 8 & 9804.00 & 9138.27 & 99 & 65.62 & 99 & 8.53 \\
& 9 & 5876.17 & 5463.65 & 99 & 66.50 & 99 & 3.36 \\
& 10 & 7605.72 & 6822.39 & 98 &132.54 & 91 & 6.88 \\
\cline{2-8}
& avg. && & 98 & 67.00 & 95 & 5.33 \\
\hline
30 & 1 & 7673.11 & 6899.80 & 99 & 360.89 & 88 & 66.99 \\
& 2 & 6376.51 & 5191.96 & 99 & 190.06 & 99 & 29.07 \\
& 3 & 11341.00 & 10650.40 & 99 & 166.24 & 63 & 28.61 \\
& 4 & 9045.00 & 7826.18 & 95 & 619.64 & 78 & 65.26 \\
& 5 & 6094.86 & 5289.02 & 95 & 636.07 & 87 & 39.74 \\
& 6 & 7641.00 & 7222.33 & 99 & 321.02 & 95 & 37.95 \\
& 7 &11040.00 &10413.28 & 99 & 163.22 & 99 & 35.56 \\
& 8 & 6862.00 & 6411.01 & 99 & 175.81 & 99 & 18.61 \\
& 9 & 7424.00 & 6295.28 & 94 & 786.84 & 85 & 73.03 \\
& 10 & 6102.74 & 4945.62 & 99 & 357.86 & 80 & 56.56 \\
\cline{2-8}
& avg. && & 98 & 377.77 & 87 & 45.14 \\
\hline
\end{tabular}
\end{table}
\begin{table}[!ht]
\centering
\caption{Evaluating EC\&R\xspace cutting planes for the clique structure with $m = 2$}
\label{tab:clique-2}
\resizebox{1.00\textwidth}{!}{
\begin{tabular}{|c|c||c|c||c|c||c|c||c|c||c|c|}
\hline
Size & \# & Opt & LP & \multicolumn{4}{c||}{Tree EC\&R\xspace} & \multicolumn{4}{c|}{Forest EC\&R\xspace} \\ \cline{5-12}
& & & & \multicolumn{2}{c||}{Full} & \multicolumn{2}{c||}{Separation} & \multicolumn{2}{c||}{Full}& \multicolumn{2}{c|}{Separation}\\ \cline{5-12}
& & & & Gap & Time & Gap & Time & Gap & Time & Gap & Time \\ \hline
\hline
10 & 1 & 5177.00 & 4620.61 & 99 & 3.25 & 99 & 1.40 & 99 & 9.52 & 99 & 4.41 \\
& 2 & 276.13 & -682.07 & 80 & 11.86& 76 & 5.78 & 91 & 25.73& 88 & 17.34\\
& 3 & 5444.39 & 5046.27 & 99 & 5.83 & 98 & 2.82 & 99 & 17.13& 99 & 8.96 \\
& 4 & 6699.48 & 5954.44 & 97 & 9.06 & 96 & 4.34 & 99 & 17.43& 97 & 13.56\\
& 5 & 4897.99 & 4244.31 & 99 & 6.11 & 99 & 4.40 & 99 & 9.52 & 99 & 4.41 \\
& 6 & 5690.00 & 5448.84 & 99 & 3.17 & 99 & 1.37 & 99 & 9.31 & 99 & 4.46 \\
& 7 & 3744.00 & 3081.42 & 99 & 5.53 & 99 & 2.73 & 99 & 9.12 & 99 & 8.65 \\
& 8 & 5840.00 & 5591.10 & 99 & 9.02 & 99 & 4.31 & 99 & 17.52& 99 & 8.46\\
& 9 & 6995.26 & 6758.82 & 99 & 3.23 & 99 & 1.44 & 99 & 9.55& 99 & 4.29 \\
& 10& 3576.71 & 3254.87 & 99 & 3.13 & 99 & 1.37 & 99 & 9.80& 99 & 4.11 \\
\cline{2-12}
&avg. &&& 97 & 6.02 & 96 & 3.00 & 98 & 13.46& 98 & 7.86 \\
\hline
20 & 1 & 7519.86 & 5941.60 & 80 & 247.21 & 68 & 16.70 & 99 & 567.28 & 86 & 93.92 \\
& 2 & 6940.38 & 4598.74 & 45 & 259.08 & 25 & 16.97 & 68 & 750.97 & 50 & 65.45 \\
& 3 & 6661.91 & 5093.44 & 63 & 261.60 & 41 & 10.15 & 99 & 382.93 & 86 & 47.06 \\
& 4 & 5529.50 & 3884.97 & 72 & 259.85 & 60 & 17.14 & 98 & 734.08 & 83 & 45.56 \\
& 5 & 7517.72 & 6460.06 & 76 & 191.48 & 56 & 14.08 & 99 & 523.53 & 76 & 45.62 \\
& 6 & 6691.37 & 5225.75 & 94 & 315.58 & 72 & 19.68 & 99 & 515.09 & 86 & 54.63 \\
& 7 & 9608.17 & 7674.57 & 73 & 192.52 & 58 & 13.29 & 95 & 525.21 & 79 & 45.56 \\
& 8 & 9808.00 & 7723.56 & 60 & 252.08 & 48 & 13.57 & 79 & 661.85 & 67 & 46.10 \\
& 9 & 5940.59 & 3654.28 & 55 & 198.98 & 37 & 16.23 & 80 & 679.97 & 62 & 5.14 \\
&10 & 7332.00 & 6126.00 & 99 & 266.47 & 93 & 13.47 & 99 & 352.75 & 94 & 27.71 \\
\cline{2-12}
& avg.&&& 71 & 236.64 & 56 & 15.13 & 92 & 569.37 & 77 & 52.68 \\
\hline
30 & 1 & 6912.00 & 5808.28 & 99 & 426.04 & 94 &84.48 & 99 & 1062.91 & 99 & 164.67 \\
& 2 &11003.69 & 9877.86 & 95 &1177.46 & 87 &86.41 & 96 & 2988.85 & 92 & 224.89 \\
& 3 & 6676.09 & 4151.90 & 84 &1815.14 & 64 &88.16 & 99 & 1919.25 & 93 & 280.02 \\
& 4 & 8375.87 & 5882.49 & 90 &1420.13 & 69 &107.86& 98 & 4084.95 & 80 & 282.41 \\
& 5 & 9854.24 & 7338.24 & 99 &644.18 & 98 &88.94 & 99 & 1125.23 & 99 & 167.50 \\
& 6 & 7613.34 & 5011.54 & 62 &1538.90 & 46 &111.18& 81 & 4004.75 & 64 & 367.75 \\
& 7 &11367.00 & 8704.43 & 81 &1266.64 & 65 &129.98& 91 & 5027.96 & 70 & 255.62 \\
& 8 & 8179.74 & 5631.34 & 73 &1277.91 & 57 &93.64 & 92 & 3662.67 & 76 & 333.25 \\
& 9 & 8350.00 & 4983.29 & 74 &1588.36 & 50 &90.22 & 91 & 3576.49 & 71 & 328.55\\
&10 &10606.00 & 7477.32 & 75 &1876.30 & 56 &103.98& 98 & 4662.70 & 82 & 360.30 \\
\cline{2-12}
&avg.&&& 83 &1303.11& 69 &98.48 & 94 & 3211.58 & 83 & 276.49 \\
\hline
\end{tabular}
}
\end{table}
The results for the clique structure are reported in Tables~\ref{tab:clique-1} and \ref{tab:clique-2} for the cases with $m=1$ and $m=2$, respectively.
The definition of columns in each table is similar to that of Tables~\ref{tab:bipartite-1} and \ref{tab:bipartite-2}.
For the clique structures, we observe similar patterns to those of the bipartite case in the gap improvement and solution time of the different approaches.
\begin{comment}
\subsection{Star Structure.}
In this section, we perform computational experiments on star structures as a sparse network model; see Figure~\ref{fig:network3}.
We consider three problem sizes, where the number of nodes in the underlying network is $31$, $61$, and $101$.
For example, for size $31$, there is one node at the center, and $30$ nodes connected to it.
For each problem size, we create $10$ randomly generated instances for set $\mcl{S}$ with the following specifications for its underlying network $\mr{G} = (\mr{V}, \mr{A})$.
For the star networks, there is an arc between each node of the network and the center node.
Half of these arcs will have the direction into the center node, and half will have the direction out of the center node.
The coefficient of the $z$ variables in the objective function is randomly generated from a uniform distribution between $(-18,22)$.
Similarly to the case for the clique structure, this interval is shifted to widen the optimality gap for a meaningful comparison of the different approaches.
The coefficient for $x$ variables in the objective function is randomly generated from a discrete uniform distribution between $(0,20)$.
Each node that has an outgoing arc connected to the center node is considered a supply node, whose supply is generated randomly from a discrete uniform distribution between $(0,150)$.
Each node that has an incoming arc connected to the center node is considered a demand node, whose demand is generated randomly from a discrete uniform distribution between $(0,100)$.
The center node is a hub node with zero supply/demand.
We note here that this model is not balanced, unlike the network structures studied in the previous sections, because a balanced model will have a single solution only.
As a result, we change the equality flow-balance constraints to inequalities to account for an unbalanced model.
We set the capacity of each arc equal to the supply/demand of the node it is adjacent to.
\begin{table}[!ht]
\centering
\caption{Evaluating EC\&R\xspace cutting planes for the star structure with $m = 1$}
\label{tab:star-1}
\begin{tabular}{|c|c||c|c||c|c||c|c|}
\hline
Size & \# & Opt & LP & \multicolumn{2}{c||}{Full EC\&R\xspace} & \multicolumn{2}{c|}{Separation EC\&R\xspace} \\ \cline{5-8}
& & & & Gap (\%)& Time (s) & Gap (\%)& Time (s) \\ \hline
31 & 1 & 13987.72 & 13011.58 & 99 & 2.31 & 99 & 0.08 \\
& 2 & 8814.00 & 8537.33 & 99 & 2.67 & 99 & 0.05 \\
& 3 & 14746.00 & 14384.96 & 99 & 2.52 & 99 & 0.04 \\
& 4 & 10711.93 & 10486.39 & 99 & 2.45 & 99 & 0.06 \\
& 5 & 6815.50 & 6342.41 & 99 & 2.49 & 99& 0.11 \\
& 6 & 10920.07 & 9589.44 & 99 & 2.48 & 99& 0.05 \\
& 7 & 9162.00 & 8958.50 & 99 & 2.67 & 99& 0.07 \\
& 8 & 9915.00 & 5128.99 & 99 & 2.53 & 99& 0.04 \\
& 9 & 15736.74 & 15390.30 & 99 & 1.51 & 99& 0.04 \\
& 10 & 9597.49 & 9037.50 & 99 & 1.48 & 92& 0.25 \\
\cline{2-8}
& avg. && & 99 & 2.52 & 99 & 0.06 \\
\hline
61 & 1 &28166.67 &27798.56 & 99 & 21.09 & 99 & 0.70 \\
& 2 &26038.68 &25448.46 & 99 & 20.10 & 99 & 0.72 \\
& 3 &24955.06 &24597.34 & 99 & 18.02 & 99 & 0.38 \\
& 4 &18437.22 &17683.77 & 99 & 18.31 & 99 & 0.70 \\
& 5 &26749.81 &26175.38 & 99 & 19.03 & 99 & 0.34 \\
& 6 &30736.00 &30512.41 & 99 & 18.72 & 99 & 0.38 \\
& 7 &27021.96 &25912.78 & 99 & 20.07 & 99 & 0.73 \\
& 8 &35728.75 &34858.62 & 99 & 17.84 & 99 & 0.68 \\
& 9 &32932.00 &32605.94 & 99 & 17.96 & 99 & 0.36 \\
& 10 &27209.69 &26928.82 & 99 & 18.92 & 99 & 0.80 \\
\cline{2-8}
& avg. && & 99 & 19.01 & 99 & 0.58 \\
\hline
101 & 1 & 33395.34 & 32833.66 & 99 & 93.21 & 99 & 3.39 \\
& 2 & 39813.00 & 39612.49 & 99 & 88.51 & 99 & 1.70 \\
& 3 & 43941.00 & 43556.39 & 99 & 84.34 & 99 & 1.71 \\
& 4 & 51040.68 & 50409.79 & 99 & 80.81 & 99 & 1.86 \\
& 5 & 42475.00 & 41867.57 & 99 & 79.27 & 99 & 1.70 \\
& 6 & 45239.38 & 42782.59 & 99 & 82.14 & 99 & 1.68 \\
& 7 & 33604.00 & 33298.75 & 99 & 80.79 & 99 & 3.32 \\
& 8 & 42334.00 & 41938.69 & 99 & 84.05 & 99 & 1.70 \\
& 9 & 38056.00 & 37124.38 & 99 & 86.57 & 99 & 6.51 \\
& 10 & 40634.00 & 40411.84 & 99 & 83.32 & 99 & 1.83 \\
\cline{2-8}
& avg. && & 99 & 84.30 & 99 & 2.54 \\
\hline
\end{tabular}
\end{table}
\begin{table}[!ht]
\centering
\caption{Evaluating EC\&R\xspace cutting planes for the star structure with $m = 2$}
\label{tab:star-2}
\resizebox{1.00\textwidth}{!}{
\begin{tabular}{|c|c||c|c||c|c||c|c||c|c||c|c|}
\hline
Size & \# & Opt & LP & \multicolumn{4}{c||}{Tree EC\&R\xspace} & \multicolumn{4}{c|}{Forest EC\&R\xspace} \\ \cline{5-12}
& & & & \multicolumn{2}{c||}{Full} & \multicolumn{2}{c||}{Separation} & \multicolumn{2}{c||}{Full}& \multicolumn{2}{c|}{Separation}\\ \cline{5-12}
& & & & Gap & Time & Gap & Time & Gap & Time & Gap & Time \\ \hline
\hline
31 & 1 & 5954.05 & 5597.23 & 24 & 3.89 & 24 & 0.37 & 99 & 8.31 & 25 & 1.58 \\
& 2 & 7274.95 & 6760.15 & 73 & 3.87 & 73 & 0.30 & 97 & 22.20& 83 & 1.57 \\
& 3 & 14222.00& 13987.42& 99 & 1.99 & 99 & 0.09 & 99 & 7.96 & 99 & 0.41 \\
& 4 & 15030.00& 14788.34& 99 & 2.00 & 99 & 0.11 & 99 & 8.16 & 99 & 0.46 \\
& 5 & 14376.69& 14116.10& 99 & 1.99 & 99 & 0.08 & 99 & 8.10 & 99 & 0.44 \\
& 6 & 12787.00& 11413.63& 99 & 1.97 & 99 & 0.09 & 99 & 8.29 & 99 & 0.49 \\
& 7 & 18676.00& 18430.77& 99 & 1.93 & 99 & 0.10 & 99 & 8.18 & 99 & 0.39 \\
& 8 & 13575.26& 12518.69& 71 & 3.80 & 71 & 0.27 & 99 & 8.52 & 86 & 1.21\\
& 9 & 8794.00 & 8394.10 & 99 & 1.93 & 99 & 0.20 & 99 & 8.39& 99 & 0.78 \\
& 10& 17206.00& 16892.27& 99 & 1.93 & 99 & 0.09 & 99 & 8.27& 99 & 0.44 \\
\cline{2-12}
&avg. &&& 86 & 2.53 & 86 & 0.17 & 99 & 9.64& 89 & 0.78 \\
\hline
61 & 1 & 11506.86 & 10818.00 & 00 & 14.53 & 00 & 0.74 & 11 & 188.21 & 11 & 12.34 \\
& 2 & 23037.00 & 21149.27 & 98 & 29.16 & 96 & 2.74 & 99 & 70.00 & 99 & 3.65 \\
& 3 & 23514.02 & 21189.56 & 79 & 29.31 & 75 & 2.77 & 99 & 70.07 & 91 & 12.85 \\
& 4 & 24727.42 & 24316.71 & 99 & 29.03 & 99 & 1.41 & 99 & 69.20 & 99 & 6.52 \\
& 5 & 21673.20 & 20020.22 & 99 & 15.61 & 99 & 2.80 & 99 & 68.47 & 99 & 13.00 \\
& 6 & 23588.00 & 22946.48 & 99 & 15.03 & 99 & 0.79 & 99 & 66.55 & 99 & 3.33 \\
& 7 & 25150.00 & 24940.65 & 99 & 14.92 & 99 & 0.71 & 99 & 65.20 & 99 & 3.32 \\
& 8 & 18171.81 & 17943.15 & 06 & 27.65 & 06 & 1.40 & 78 & 237.90& 29 & 20.96 \\
& 9 & 26629.00 & 24700.47 & 99 & 14.68 & 99 & 0.75 & 99 & 66.88 & 99 & 3.54 \\
&10 & 25784.93 & 24720.23 & 84 & 59.22 & 83 & 3.53 & 99 & 183.27& 85 & 15.57 \\
\cline{2-12}
& avg.&&& 76 & 24.91 & 75 & 1.76 & 88 & 108.58 & 81 & 9.51 \\
\hline
101& 1 &46589.89 & 44623.20 & 53 & 280.95 & 50 &12.97 & 99 & 321.88 & 99 & 30.15 \\
& 2 &35633.83 & 33233.90 & 29 & 145.62 & 29 &13.14 & 97 & 941.36 & 40 & 74.98 \\
& 3 &23970.87 & 23277.36 & 69 & 336.61 & 67 &26.31 & 99 & 343.01 & 69 & 131.32\\
& 4 &32991.28 & 30641.69 & 42 & 276.79 & 34 &13.19 & 99 & 945.28 & 88 & 59.00 \\
& 5 &39693.64 & 39255.76 & 99 & 72.45 & 99 & 9.82 & 99 & 356.83 & 99 & 16.77 \\
& 6 &39634.00 & 38678.27 & 99 & 72.00 & 99 & 3.65 & 99 & 356.89 & 99 & 16.35 \\
& 7 &33496.72 & 32500.26 & 11 &269.16 & 00 & 3.39 & 95 & 1428.29 & 56 & 101.05\\
& 8 & 8179.74 & 21823.48 & 19 &206.84 & 19 & 12.98 & 90 & 598.93 & 53 & 31.04 \\
& 9 &36653.80 & 33817.44 & 70 &1588.36& 55 & 17.70 & 99 & 589.58 & 63 & 74.96\\
&10 &40215.00 & 39390.27 & 99 &1876.30& 99 & 3.78 & 99 & 319.21 & 99 & 16.69 \\
\cline{2-12}
&avg.&&& 59 &208.62& 53 &11.69 & 98 & 620.13 & 77 & 55.23 \\
\hline
\end{tabular}
}
\end{table}
\end{comment}
\subsection{Cycle Structure.}
In this section, we perform computational experiments on cycle structures; see Figure~\ref{fig:network3}.
This structure represents a low-density underlying graph.
We consider three problem sizes, where the number of nodes in the underlying network is $50$, $100$, and $200$.
For each problem size, we create $10$ randomly generated instances for set $\mcl{S}$ with the following specifications for its underlying network $\mr{G} = (\mr{V}, \mr{A})$.
Each node in the cycle is either a supply or a demand node.
The adjacent nodes to a supply node are both demand nodes, and the adjacent nodes to a demand node are both supply nodes.
The direction of each arc in the cycle is from the supply node to the demand node incident to that arc.
The coefficient of the $z$ variables in the objective function is randomly generated from a uniform distribution between $(-18,22)$.
The coefficient for $x$ variables in the objective function is randomly generated from a discrete uniform distribution between $(0,20)$.
The supply for the supply nodes is generated randomly from a discrete uniform distribution between $(100,200)$.
The demand for the demand nodes is generated randomly from a discrete uniform distribution between $(0,100)$.
We note here that this model is not balanced, unlike the network structures studied in the previous sections, because a balanced model will have a single solution only.
As a result, we change the equality flow-balance constraints to inequalities to account for an unbalanced model.
We set the capacity of each arc equal to $150$ to ensure that the problem is feasible.
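For reproducibility, a minimal sketch of this instance generator is given below; the function name, the seed parameter, and the dictionary-based data structures are our own illustration rather than the experimental code, but the distributions match the description above.
\begin{verbatim}
import random

def generate_cycle_instance(n=50, seed=0):
    # Sketch of the cycle-instance generator described above.
    rng = random.Random(seed)
    # Even nodes are supply nodes, odd nodes are demand nodes.
    supply = {i: rng.randint(100, 200) for i in range(0, n, 2)}
    demand = {i: rng.randint(0, 100) for i in range(1, n, 2)}
    # Each arc of the cycle is directed from its supply node
    # to the adjacent demand node.
    arcs = []
    for i in range(n):
        j = (i + 1) % n
        arcs.append((i, j) if i % 2 == 0 else (j, i))
    capacity = {a: 150 for a in arcs}                 # uniform arc capacity
    cost_x = {a: rng.randint(0, 20) for a in arcs}    # objective coeff. of x
    cost_z = {a: rng.uniform(-18, 22) for a in arcs}  # objective coeff. of z
    return supply, demand, arcs, capacity, cost_x, cost_z
\end{verbatim}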
The computational results are given in Tables~\ref{tab:cycle-1} and \ref{tab:cycle-2} for the cases with $m=1$ and $m=2$, respectively.
The columns in these tables are defined similarly to the previous tables, with the difference that the separation columns are omitted because implementing the full EC\&R\xspace approaches is fast, and hence a separation oracle is not necessary.
This fast performance can be attributed to the sparsity of the underlying graph, which allows a complete set of EC\&R\xspace inequalities to be generated in a short amount of time since the aggregation options that obtain the desired cancellations are limited.
As a result, we can achieve high gap improvements for larger problem sizes, as evidenced in Tables~\ref{tab:cycle-1} and \ref{tab:cycle-2}.
\begin{table}[!t]
\centering
\caption{Evaluating EC\&R\xspace cutting planes for the cycle structure with $m = 1$}
\label{tab:cycle-1}
\begin{tabular}{|c|c||c|c||c|c|}
\hline
Size & \# & Opt & LP & \multicolumn{2}{c||}{Full EC\&R\xspace} \\
\cline{5-6}
& & & & Gap (\%)& Time (s) \\
\hline
50 & 1 & 2867.15 & -80.28 & 99 & 0.04 \\
& 2 & 5797.12 & 2704.93 & 99 & 0.01 \\
& 3 &-1552.82 &-2147.89 & 99 & 0.03 \\
& 4 & 5365.32 & 3288.67 & 99 & 0.02 \\
& 5 & 980.96 &-1389.79 & 99 & 0.01 \\
& 6 & -266.16 &-2592.61 & 99 & 0.03 \\
& 7 & 1250.60 &-354.21 & 99 & 0.02 \\
& 8 &-3119.87 &-4349.18 & 99 & 0.02 \\
& 9 & 733.80 &-118.27 & 99 & 0.01 \\
& 10 & 2210.15 & 243.57 & 99 & 0.03 \\
\cline{2-6}
& avg. && & 99 & 0.02 \\
\hline
100& 1 &-1783.35 &-3668.73 & 99 & 0.06 \\
& 2 &-2069.08 &-3695.66 & 99 & 0.03 \\
& 3 & 8678.79 & 1677.03 & 99 & 0.03 \\
& 4 &-4346.53 &-5407.01 & 99 & 0.05 \\
& 5 & 1243.70 &-2122.95 & 99 & 0.03 \\
& 6 &15120.43 & 4885.57 & 99 & 0.03 \\
& 7 & 1235.43 &-3377.43 & 99 & 0.03 \\
& 8 & 1256.49 &-1378.17 & 99 & 0.03 \\
& 9 &11766.20 & 4017.61 & 99 & 0.05 \\
& 10 & 3647.78 & -435.92 & 99 & 0.04 \\
\cline{2-6}
& avg. && & 99 & 0.04 \\
\hline
200& 1 & 366.47 & -7118.60 & 99 & 0.08 \\
& 2 & -7235.71 &-12323.90 & 99 & 0.12 \\
& 3 & 3544.59 & -5999.36 & 99 & 0.07 \\
& 4 & 10746.70 & 1156.33 & 99 & 0.07 \\
& 5 & 12870.76 & -188.56 & 99 & 0.09 \\
& 6 & 11889.94 & 3498.13 & 99 & 0.07 \\
& 7 & 3482.99 & -4362.33 & 99 & 0.08 \\
& 8 & -1885.01 &-10801.39 & 99 & 0.06 \\
& 9 & 22166.25 & 11341.87 & 99 & 0.08 \\
& 10 & -348.45 & -8025.35 & 99 & 0.09 \\
\cline{2-6}
& avg. && & 99 & 0.08 \\
\hline
\end{tabular}
\end{table}
\begin{table}[!ht]
\centering
\caption{Evaluating EC\&R\xspace cutting planes for the cycle structure with $m = 2$}
\label{tab:cycle-2}
\begin{tabular}{|c|c||c|c||c|c||c|c|}
\hline
Size & \# & Opt & LP & \multicolumn{2}{c||}{Full Tree EC\&R\xspace} & \multicolumn{2}{c|}{Full Forest EC\&R\xspace} \\
\cline{5-8}
 & & & & Gap (\%) & Time (s) & Gap (\%) & Time (s) \\
\hline
50 & 1 &-1590.18 &-8576.67 & 87 & 0.13 & 99 & 0.26 \\
& 2 &-4622.80 &-13593.67& 85 & 0.10 & 96 & 0.32 \\
& 3 &-7794.96 &-15165.05& 63 & 0.11 & 87 & 0.37 \\
& 4 &-1503.33 &-6513.07 & 89 & 0.16 & 99 & 0.27 \\
& 5 & 4433.70 &-6508.12 & 94 & 0.12 & 99 & 0.23 \\
& 6 & -284.46 &-7786.90 & 70 & 0.11 & 91 & 0.37 \\
& 7 & -1140.91&-14831.40& 60 & 0.09 & 82 & 0.32 \\
& 8 & 6625.82 & -1629.65& 93 & 0.09 & 99 & 0.23 \\
& 9 & -2116.06&-5866.54 & 99 & 0.03 & 99 & 0.17 \\
& 10& 88.65 &-8106.67 & 89 & 0.09 & 97 & 0.33 \\
\cline{2-8}
&avg. &&& 83 & 0.10 & 95 & 0.29 \\
\hline
100& 1 & 900.68 &-16607.11 & 74 & 0.22 & 90 & 0.71 \\
& 2 & 140.56 &-17583.33 & 92 & 0.22 & 99 & 0.31 \\
& 3 & -131.98 &-20668.01 & 73 & 0.26 & 87 & 0.66 \\
& 4 & -231.62 &-20736.33 & 80 & 0.34 & 95 & 0.63 \\
& 5 & -1709.59 &-18456.56 & 66 & 0.24 & 81 & 0.68 \\
& 6 & 6992.68 &-16783.92 & 74 & 0.17 & 85 & 0.66 \\
& 7 & 1326.26 &-20135.15 & 75 & 0.18 & 82 & 0.63 \\
& 8 & 90.11 &-19442.39 & 72 & 0.17 & 86 & 0.67 \\
& 9 & 7523.83 &-12600.35 & 88 & 0.27 & 99 & 0.55 \\
&10 & -2365.06 &-18051.56 & 77 & 0.19 & 96 & 0.72 \\
\cline{2-8}
& avg.&&& 77 & 0.23 & 90 & 0.62 \\
\hline
200& 1 &-4498.84 &-42056.21 & 88 & 0.42 & 95 &1.37 \\
& 2 &-2270.67 &-41823.19 & 77 & 0.40 & 86 &1.49 \\
& 3 &9853.81 &-38627.76 & 70 & 0.42 & 82 &1.48 \\
& 4 &10420.28 &-24979.47 & 74 & 0.45 & 88 &1.35 \\
& 5 &-10699.73 &-46329.12 & 77 & 0.43 & 94 & 1.34 \\
& 6 & 1474.74 &-34978.20 & 86 & 0.39 & 99 & 1.08 \\
& 7 & 2446.84 &-36567.37 & 78 & 0.44 & 95 & 1.37 \\
& 8 &-7058.24 &-40441.54 & 85 & 0.40 & 99 & 1.01 \\
& 9 &-14652.28&-48186.48 & 74 & 0.41 & 93 & 1.37 \\
&10 &9392.03 &-28443.66 & 81 & 0.42 & 94 & 1.51 \\
\cline{2-8}
&avg.&&& 79 & 0.42 & 93 & 1.34 \\
\hline
\end{tabular}
\end{table}
\section{Conclusion} \label{sec:conclusion}
We study a bipartite bilinear set, where the variables in one partition belong to a network flow model, and the variables in the other partition belong to a simplex.
We design a convexification technique based on aggregation of side constraints with appropriate weights, which produces an important class of facet-defining inequalities for the convex hull of the bilinear set; this class fully describes the convex hull for the special case where the simplex contains a single variable.
We show that each such inequality can be obtained by considering the constraints corresponding to the nodes of the underlying network that form a special tree or forest structure.
This property leads to an explicit derivation of strong inequalities through identifying special graphical structures in the network model.
These inequalities can be added to the classical McCormick relaxation to strengthen the relaxation and improve the dual bounds, as corroborated in the preliminary computational experiments conducted on various basic network structures.
\bibliographystyle{informs2014}
\section{Introduction} \label{sec:introduction}
Bilinear constraints in conjunction with network models appear in various mixed-integer and nonlinear programming (MINLP) applications, such as the fixed-charge network flow problems \cite{rebennack:na:pa:2009}, and network models with complementarity constraints, such as transportation problems with conflicts \cite{ficker:sp:wo:2021}.
A common occurrence of bilinear terms in network models pertains to bilevel network problems after being reformulated as a single-level program through either using a dual formulation of the inner problem or incorporating optimality conditions inside the outer problem \cite{benayed:bl:1990,chiou:2005}.
These reformulation approaches are widely used in the network interdiction problems, where newly added bilinear terms are relaxed using a \textit{linearization} technique based on the McCormick bounds \cite{mccormick:1976} over a box domain; see \cite{smith:li:2008} for an exposition.
While these relaxations provide the convex hull of the bilinear constraint over a box domain \cite{alkhayyal:fa:1983}, they often lead to weak relaxations when the variable domains become more complicated, as in general polyhedra \cite{gupte:ah:ch:de:2013,davarnia:2016}.
It has been shown \cite{boland:de:ka:mo:ri:2017,luedtke:na:li:2012} that even when the number of bilinear terms in the underlying function increases, the McCormick bounds can be very poor compared to the ones obtained from the convex and concave envelopes of the function.
\medskip
There are various studies in the literature that develop convexification methods for multilinear functions as a generalization of bilinear forms, but the side constraints for the involved variables are often limited to variable bounds.
For instance, \cite{delpia:kh:2021} introduces a new class of valid inequalities, called running intersection inequalities, for the multilinear polytope described by a set of multilinear equations.
The authors in \cite{gupte:ka:ri:wa:2020} derive extended formulations for the convex hull of the graph of a bilinear function on the $n$-dimensional unit cube through identifying the facets of the Boolean Quadratic Polytope.
In \cite{fampa:lee:2021}, an efficient method to convexify bilinear functions through McCormick relaxations is proposed which takes advantage of the structural convexity in a symmetric quadratic form.
Other works in the literature consider a polyhedral, often triangular, subdivision of the domain to derive strong valid inequalities for a bilinear set; see \cite{tawarmalani:ri:xi:2012,sherali:al:1992,locatelli:sch:2014} for examples of such approaches.
Further, \cite{davarnia:ri:ta:2017} proposes a constructive procedure, referred to as \textit{extended cancel-and-relax} (EC\&R), to simultaneously convexify the graph of bilinear functions over a general polytope structure.
In this paper, we make use of the EC\&R\xspace procedure to derive convexification results for a bilinear set where the side constraints on variables are described by a network flow model as defined next.
\medskip
For $N := \{1,\dotsc,n\}$, $M := \{1,\dotsc,m\}$, $K := \{1,\dotsc,\kappa\}$, and $T := \{1, \dotsc, \tau\}$, we consider
\[
\mcl{S} = \left\{ (\vc{x};\vc{y};\vc{z}) \in \Xi \times \Delta_m \times {\mathbb R}^{\kappa} \, \middle|\,
\vc{y}^{\intercal} A^k \vc{x} = z_{k}, \, \, \, \forall k \in K
\right\},
\]
where $\Xi = \left\{ \vc{x} \in {\mathbb R}^n \, \middle| \, E\vc{x} \geq \vc{f}, \, \vc{0} \leq \vc{x} \leq \vc{u} \right\}$ is a primal network polytope, and $\Delta_m = \left\{ \vc{y} \in {\mathbb R}_+^{m} \, \middle| \, \vc{1}^{\intercal} \vc{y} \leq 1 \right\}$ is a simplex.
When variables $\vc{y}$ are binary, $\Delta_m$ represents a \textit{special ordered set of type I} (SOS1); see \cite{beale:fo:1976} for an exposition.
Such simplex structures appear in various applications and can be obtained by reformulating the underlying polytopes through extreme point decomposition; see \cite{davarnia:ri:ta:2017} for a detailed account.
In the above definition, $E \in {\mathbb R}^{\tau \times n}$, $\vc{f} \in {\mathbb R}^{\tau}$, $\vc{u} \in {\mathbb R}^n$, and $A^k \in {\mathbb R}^{m\times n}$ is a matrix with all elements equal to zero except one that is equal to one, i.e., if $A^k_{ji} = 1$ for some $(i,j) \in N \times M$, the bilinear constraint with index $k$ represents $y_jx_i = z_k$.
\medskip
The contributions of this paper are as follows.
We propose a systematic procedure to convexify $\mcl{S}$ and derive explicit inequalities in its description.
The resulting cutting planes are directly obtained in the original space of variables.
We show that facet-defining inequalities in the convex hull description can be explicitly derived by identifying special tree and forest structures in the underlying network, leading to an interpretable and efficient cut-generating oracle.
The inequalities obtained from our proposed algorithms can be added to the typical McCormick relaxations to strengthen the formulation and improve the bounds.
The presented methods consider a general network structure, which complement and generalize the results of \cite{davarnia:ri:ta:2017} that are obtained for network interdiction problems.
In particular, \cite{davarnia:ri:ta:2017} presents the convexification results for a special case of $\mcl{S}$ where $m = 1$, $\kappa = 1$ and $\Xi$ is a dual network polytope.
In this work, we extend these results by considering the cases where $M$ and $K$ can have multiple elements and $\Xi$ is a primal network polytope.
\medskip
The remainder of the paper is organized as follows.
We give a brief background on the EC\&R\xspace procedure as a basis of our analysis in Section~\ref{sec:ECR}.
In Section~\ref{sec:primal}, we obtain convexification results for bilinear terms defined over a network polytope.
Preliminary computational experiments are presented in Section~\ref{sec:computation} to show the effectiveness of the developed cut-generating frameworks.
We give concluding remarks in Section~\ref{sec:conclusion}.
\medskip
\textbf{Notation.}
Bold letters represent vectors.
For a given set $S \subseteq {\mathbb R}^n$, we denote by $\mathop{\rm conv}(S)$ its convex hull.
We use the symbol $\pm$ to represent both the $+$ and $-$ cases.
For example, when we use $l^{\pm}$ in an expression, it means that the expression holds for both $l^+$ and $l^-$.
\section{Extended Cancel-and-Relax} \label{sec:ECR}
In this section, we present the EC\&R\xspace procedure adapted for $\mcl{S}$.
The following theorem is at the core of this procedure, as shown in Theorem 2.7 of \cite{davarnia:ri:ta:2017}.
\begin{theorem}
A convex hull description of $\mcl{S}$ can be obtained by the linear constraints in $\Xi$ and $\Delta_m$ together with all class-$l^{\pm}$ EC\&R\xspace inequalities for all $l \in K$.
\Halmos
\end{theorem}
The procedure to generate a class-$l^{\pm}$ EC\&R\xspace inequality is as follows.
\begin{enumerate}
\item
We select $l \in K$ to be the index of a bilinear constraint used in the aggregation, which we refer to as the \textit{base} equality.
We also select a sign indicator $+$ or $-$ to indicate whether a weight $1$ or $-1$ is used for the base equality during aggregation.
\item
We select $\mcl{L}$ and $\bar{\mcl{L}}$ as disjoint subsets of $K \setminus \{l\}$.
Then, for each $k \in \mcl{L}$ (resp. $k \in \bar{\mcl{L}}$), we multiply $\vc{y}^{\intercal} A^k \vc{x} - z_{k} = 0$ by $\beta^+_k$ (resp. $-\beta^-_k$), where $\beta^+_k \geq 0$ (resp. $\beta^-_k \geq 0$).
\item
Defining $T$ as the index set of the non-bound constraints in $\Xi$, we select $\mcl{I}_1$, $\cdots$, $\mcl{I}_m$ and $\bar{\mcl{I}}$ as subsets of $T$ whose intersection is empty.
Then, for each $j \in M$ and for each $t \in \mcl{I}_j$ (resp. $t \in \bar{\mcl{I}}$), we multiply the constraint $E_{t.}\vc{x} \geq f_t$ by $\gamma^j_ty_j$ where $\gamma^j_t \geq 0$ (resp. by $\theta_t(1-\sum_{i \in M}y_i)$ where $\theta_t \geq 0$).
\item
We select $\mcl{J}$ and $\bar{\mcl{J}}$ as disjoint subsets of $N$.
Then, for each index $i \in \mcl{J}$, we multiply $x_i \geq 0$ by $\lambda_i(1-\sum_{j \in M}y_j)$ where $\lambda_i \geq 0$, and for each $i \in \bar{\mcl{J}}$, we multiply $u_i - x_i \geq 0$ by $\mu_i(1-\sum_{j \in M}y_j)$ where $\mu_i \geq 0$.
\end{enumerate}
The above sets are compactly represented as $\big[\mcl{L},\bar{\mcl{L}}\big| \mcl{I}_1,\dotsc,\mcl{I}_m,\bar{\mcl{I}} \big| \mcl{J},\bar{\mcl{J}}\big]$, which is called an \textit{EC\&R\xspace assignment}.
Each EC\&R\xspace assignment is identified by its class-$l^{\pm}$ where $l$ is the index of the base equality and $\pm$ is its sign indicator.
We next aggregate all aforementioned weighted constraints.
During the aggregation, we require that weights $\beta$, $\gamma$, $\theta$, $\lambda$ and $\mu$ be chosen in such a way that:
\begin{itemize}
\item[\textit{(C1)}]
at least $|\mcl{L}|+|\bar{\mcl{L}}|+\sum_{j \in M}|\mcl{I}_j|+|\bar{\mcl{I}}|+|\mcl{J}|+|\bar{\mcl{J}}|$ bilinear terms are canceled (i.e., their coefficient becomes zero), and
\item[\textit{(C2)}]
if $\mcl{L} \cup \bar{\mcl{L}} \cup_{j \in M}\mcl{I}_j \cup \bar{\mcl{I}} \cup \mcl{J} \cup \bar{\mcl{J}} \neq \emptyset$, for each constraint used in the aggregation (including the base equality), at least one bilinear term among all those created after multiplying that constraint with its corresponding weight is canceled.
\end{itemize}
The desired EC\&R\xspace inequality is then obtained by relaxing (i.e., replacing) the remaining bilinear terms $x_iy_j$ in the aggregated inequality using either $x_iy_j \geq 0$ or $u_iy_j - x_iy_j \geq 0$, depending on the sign of their coefficients.
The resulting linear inequality is referred to as a class-$l^{\pm}$ EC\&R\xspace inequality.
\medskip
Next, we present a summary of the derivation of the EC\&R\xspace procedure which will be used in subsequent sections; we refer the reader to \cite{davarnia:ri:ta:2017} for a detailed account.
It can be shown that $\vc{y}$ components in the extreme points of $\mcl{S}$ are binary-valued.
As a result, the convex hull of $\mcl{S}$ can be obtained as a disjunctive union of a finite number of polytopes, each fixing $\vc{y}$ at an extreme point of the simplex $\Delta_m$.
A description for this convex hull can be obtained in a higher-dimensional space using the reformulation-linearization technique \cite{sherali:ad:dr:1998,sherali:al:1992} or disjunctive programming \cite{balas:1998} through addition of new variables, as shown below.
\begin{equation}
\left.
\begin{array}{lll}
& \pm A^k_{j.} \vc{w}^j \mp v^j_k \geq 0, &\forall (k, j) \in K \times M \\
& \mp \left(z_k - \sum_{j \in M} v^j_k\right) \geq 0, &\forall k \in K \\
& E\vc{w}^j \geq \vc{f} y_j, &\forall j \in M \\
& E\left(\vc{x}-\sum_{j \in M}\vc{w}^j\right) \geq \vc{f}\left(1-\sum_{j \in M}y_j\right), \\
& \vc{0} \leq \vc{w}^j \leq \vc{u} y_j, &\forall j \in M \\
& \vc{0} \leq \vc{x}-\sum_{j \in M}\vc{w}^j \leq \vc{u}\left(1-\sum_{j \in M}y_j\right).
\end{array}
\right. \label{eq_ef}
\end{equation}
\medskip
In the above description, variables $\vc{w}^j$ and $v^j_k$ can be viewed as $y_j \vc{x}$ and $y_j z_k$, respectively, and the equalities are formulated as pairs of inequalities of opposite directions.
Because the convex hull description in \eqref{eq_ef} contains additional variables, we can use polyhedral projection to obtain a convex hull description in the space of original variables.
\medskip
Define the dual variables associated with the constraints of \eqref{eq_ef} by $\vc{\alpha}^{j\pm}, \vc{\beta}^\pm \in {\mathbb R}^{\kappa}_+$, $\vc{\gamma}^{j}, \vc{\theta} \in {\mathbb R}^{\tau}_+$, and $\vc{\eta}^j, \vc{\rho}^j, \vc{\lambda}, \vc{\mu} \in {\mathbb R}^{n}_+$, respectively.
It follows from Proposition 2.3 of \cite{davarnia:ri:ta:2017} that the collection of inequalities
\begin{equation}
\sum_{i \in N} q_i(\vc{\pi}) x_i + \sum_{j \in M} r_j(\vc{\pi}) y_j + \sum_{k \in K} s_k(\vc{\pi}) z_k \ge t(\vc{\pi}), \label{eq:proj-facet}
\end{equation}
where
\begin{equation*}
\begin{array}{rll}
q_i(\vc{\pi}) &=& \sum_{t \in T}E_{ti} \theta_t + \lambda_i - \mu_i \\
r_j(\vc{\pi}) &=& \sum_{t \in T}f_t \left(\theta_t - \gamma^j_t\right) + \sum_{i \in N}u_i\left(\rho^j_i - \mu_i\right) \\
s_k(\vc{\pi}) &=& -\left(\beta^+_k - \beta^-_k\right) \\
t(\vc{\pi}) &=& \sum_{t \in T}f_t \theta_t - \sum_{i \in N}u_i\mu_i,
\end{array}
\end{equation*}
for all extreme rays $\vc{\pi} = \left(\vc{\beta}^+;\vc{\beta}^-;\{\vc{\gamma}^j\}_{j \in M};\vc{\theta};\{\vc{\eta}^j\}_{j \in M};\{\vc{\rho}^j\}_{j \in M};\vc{\lambda};\vc{\mu}\right)$ of the projection cone
\begin{multline*}
\mathcal{C} =
\left\{
\vc{\pi} \in {\mathbb R}_+^{2\kappa + (m+1)(\tau + 2n)} \, \middle| \, \sum_{k \in K}A^k_{ji}\left(\beta^+_k - \beta^-_k\right) + \sum_{t \in T} E_{ti} \left(\gamma^j_t - \theta_t\right) \right. \\
\left. + \eta^j_i - \rho^j_i - \lambda_i + \mu_i = 0, \, \, \, \forall (i,j) \in N \times M \right\}
\end{multline*}
contains all \textit{non-trivial} facet-defining inequalities in the convex hull description of $\mcl{S}$.
A non-trivial inequality is one that cannot be implied by the linear constraints in the description of $\Xi$ and $\Delta_m$.
It is easy to verify that an extreme ray of $\mcl{C}$ that has components $\beta^{\pm}_k = 0$ for all $k \in K$ leads to a trivial inequality.
Therefore, we may assume that $\beta_l^{\pm} = 1$ for some $l \in K$ in an extreme ray associated with a non-trivial facet-defining inequality of $\mathop{\rm conv}(\mcl{S})$ through proper scaling.
As a result, the search for the extreme rays of $\mcl{C}$ reduces to that of the extreme points of the restriction set $\mcl{C}^l$ for all $l \in K$, where
\begin{multline} \label{eq:Cl}
\mathcal{C}^{l} =
\left\{
\vc{\pi}^l \in {\mathbb R}^{2(\kappa-1) + (m+1)(\tau + 2n)}_{+} \, \middle|
\sum_{k \in K_l}A^k_{ji}\left(\beta^+_k - \beta^-_k\right) + \sum_{t \in T} E_{ti} \left(\gamma^j_t - \theta_t\right) \right. \\
\left. + \eta^j_i - \rho^j_i - \lambda_i + \mu_i = \pm A^l_{ji}, \, \forall (i,j) \in N \times M \right\},
\end{multline}
where $K_l = K \setminus \{l\}$, and $\vc{\pi}^l$ is defined similarly to $\vc{\pi}$, but without elements $\beta^+_l$ and $\beta^-_l$.
\medskip
The components in the dual vector $\vc{\pi}^l$ can be interpreted as the weights used in the EC\&R\xspace procedure as follows.
The fixing of the component $\beta^{\pm}_l$ is achieved in Step 1 of the EC\&R\xspace procedure by picking a base equality $l$ with either $+1$ or $-1$ weights.
The components $\vc{\beta}^{\pm}$ represent the weights of the bilinear constraints in $K_l$ as described in Step 2 of the EC\&R\xspace procedure.
The components $\vc{\gamma}^j$ (resp. $\vc{\theta}$) can be viewed as the weights for $y_j$ (resp. $1 - \sum_{j \in M} y_j$) when multiplied with the non-bound constraints in $\Xi$ as demonstrated in Step 3 of the EC\&R\xspace procedure.
Similarly, $\vc{\lambda}$ and $\vc{\mu}$ denote the weights for $1 - \sum_{j \in M} y_j$ when multiplied with the bound constraints in $\Xi$ in Step 4 of the EC\&R\xspace procedure.
Finally, the relaxation step in the EC\&R\xspace procedure will use the components $\vc{\eta}^j$ and $\vc{\rho}^j$ as the weights for $y_j$ when multiplied with the bound constraints in $\Xi$ to \textit{cancel} the remaining bilinear terms.
It can be shown that criteria (C1) and (C2) of the EC\&R\xspace procedure provide necessary conditions for the selected weight vector to be an extreme point of $\mcl{C}^l$.
The resulting EC\&R\xspace inequality is of the form \eqref{eq:proj-facet}, and the collection of all such inequalities contain all non-trivial facet-defining inequalities in $\mathop{\rm conv}(\mcl{S})$.
\begin{comment}
\begin{remark} \label{rem:squential ECR}
Even though the EC\&R\xspace procedure provides a systematic tool to obtain valid inequalities for $\mathop{\rm conv}(\mcl{S})$, it requires the selection of the constraints together with their weights to be used in the aggregation.
This task can be computationally cumbersome for general sets as it requires searching for weights that satisfy condition (C1) of the EC\&R\xspace.
One way to circumvent this difficulty is to search for a special class of EC\&R\xspace assignments that are obtained by a \textit{sequential EC\&R\xspace procedure}.
In this procedure, the constraints used in the aggregation are selected sequentially, starting from the base equality with weight $\pm 1$ depending on the class sign.
At each step, a new constraint that satisfies condition (C2) of EC\&R\xspace is selected to be added to the current aggregated inequality with a suitable weight so that (1) all the bilinear terms that have been canceled so far remain canceled, and (2) at least one new bilinear term becomes canceled.
This procedure builds an EC\&R\xspace assignment by adding the constraints to the assignment one at a time.
The steps of the sequential EC\&R\xspace procedure ensure the satisfaction of condition (C1) of the EC\&R\xspace by design.
The main advantage of this approach is that we can quickly find a weight for a newly selected constraint to cancel one new bilinear term in a time linear in the total number of bilinear terms, whereas finding the aggregation weights for the entire selection of constraints in the assignment has a cubic time complexity because of solving a linear system of equations.
\end{remark}
\end{comment}
\section{Network Polytopes} \label{sec:primal}
In this section, we study the set $\mcl{S}$ where $\Xi$ represents a network polytope.
In particular, the constraint set $E\vc{x} \geq \vc{f}$ is composed of the flow-balance constraints after being separated into two inequalities of opposite signs, i.e., $E$ is an augmented node-arc incidence matrix where each row is duplicated with a negative sign, and $\vc{f}$ represents the extended supply/demand vector.
In this description, $\vc{u}$ denotes the arc-capacity vector.
First, we show that the representation of the EC\&R\xspace assignment can be simplified for set $\mcl{S}$ because of the special structure of the bilinear constraints in its description.
\medskip
\begin{remark} \label{rem:reduce-bilinear}
In $\mcl{S}$, each bilinear constraint has a single bilinear term that does not appear in any other bilinear constraints.
In particular, for each $k \in K$, the set contains the bilinear constraint $y_jx_i - z_k = 0$ for some $(i,j) \in N \times M$ such that $A^k_{ji} = 1$.
As a result, we can skip Step 2 in the EC\&R\xspace procedure and merge it into the relaxation step, in which the bound constraints on variables are multiplied with $y_j$ to relax any remaining bilinear term in the aggregated inequality.
As such, we may remove the sets $\mcl{L}$ and $\bar{\mcl{L}}$ from the EC\&R\xspace assignment to reformat it as $\big[\mcl{I}_1,\dotsc,\mcl{I}_m,\bar{\mcl{I}} \big| \mcl{J},\bar{\mcl{J}}\big]$, and perform the aggregation on the constraints in this reduced assignment to satisfy conditions (C1) and (C2), which are adjusted accordingly by dropping $\mcl{L}$ and $\bar{\mcl{L}}$ in their statements.
In the resulting aggregated inequality, any remaining bilinear term of the form $y_jx_i$ can be relaxed using either $-y_jx_i + u_iy_j \geq 0$ corresponding to the dual weight $\rho^j_i$ or $-y_jx_i + z_k \geq 0$ with $k \in K$ such that $A^k_{ji} = 1$ corresponding to the dual weight $\beta^-_k$.
Similarly, we can relax $-y_jx_i$ using either $y_jx_i \geq 0$ corresponding to the dual weight $\eta^j_i$ or $y_jx_i - z_k \geq 0$ with $k \in K$ such that $A^k_{ji} = 1$ corresponding to the dual weight $\beta^+_k$.
\end{remark}
\subsection{The Case with $m = 1$.} \label{subsec:primal-single}
In this section, we consider the case where $m = 1$, whose corresponding bilinear set is denoted by $\mcl{S}^1$.
In this case, we can simplify notation by matching the indices of $\vc{z}$ and $\vc{x}$ variables such that $y_1x_k = z_k$ for all $k \in K = N$.
We next show that, to generate an EC\&R\xspace inequality for $\mcl{S}^1$, it is sufficient to use aggregation weight $1$ for all constraints used in the aggregation.
\begin{proposition} \label{prop:primal-weight}
Let $\bar{\vc{\pi}}^l$ be an extreme point of the projection cone $\mcl{C}^l$, for some $l \in K$, corresponding to a non-trivial facet-defining inequality of $\mathop{\rm conv}(\mcl{S}^1)$. Then, it can be scaled in such a way that all of its components are 0 or 1.
\end{proposition}
\proof{Proof.}
When $m=1$, we can write $\mcl{C}^l$ in \eqref{eq:Cl} as
\begin{multline*}
\mathcal{C}^{l} =
\left\{
\vc{\pi}^l \in {\mathbb R}^{2(\kappa-1) + 2(\tau + 2n)}_{+} \, \middle|
\sum_{t \in T} E_{ti} \left(\gamma^1_t - \theta_t\right) + \mu_i - \lambda_i \right. \\
\left. + \eta^1_i - \rho^1_i + \sum_{k \in K_l}A^k_{1i}\left(\beta^+_k - \beta^-_k\right) = \pm A^l_{1i}, \, \forall i \in N \right\}.
\end{multline*}
We can rearrange the columns of the coefficient matrix of the system defining $\mathcal{C}^{l}$ to obtain
\begin{equation}
\Bigl[
\begin{array}{c|c|c|c|c|c|c|c}
E^{^{\intercal}} \, & \, -E^{^{\intercal}} \, & \, I \, & \, -I \, & \, I \, & \, -I & \, \bar{I} \, & \, -\bar{I}
\end{array}
\Bigr]. \label{eq:ECR-matrix}
\end{equation}
In the above matrix, the rows correspond to bilinear terms $y_1x_i$ (i.e., $w^1_i$ in the disjunctive programming formulation \eqref{eq_ef}) for $i \in N$.
The first and second column blocks correspond to the weights of the non-bound constraints in $\Xi$ multiplied by $y_{1}$ and $1-y_{1}$, which are denoted by $\gamma^1_t$ and $\theta_t$, respectively, for all $t \in T$.
The third and fourth column blocks correspond to the weights of the upper and lower bound constraints on variables in $\Xi$ multiplied by $1-y_{1}$, which are captured by $\mu_i$ and $\lambda_i$, respectively, for all $i \in N$.
Similarly, the fifth and sixth column blocks correspond to the weights of the lower and upper bound constraints on variables in $\Xi$ multiplied by $y_{1}$, which are recorded by $\eta^1_i$ and $\rho^1_i$, respectively, for all $i \in N$.
In these columns, $I$ represents the identity matrix of appropriate size.
Lastly, the seventh and eighth column blocks correspond to the weights of the bilinear constraints in $\mcl{S}^1$, which are represented by $\beta^+_k$ and $\beta^-_k$, respectively, for all $k \in K_l$.
In particular, the element at column $k \in K_l$ and row $i \in N$ of $\bar{I}$ is equal to $1$ if $i=k$, and it is equal to zero otherwise.
Based on these column values, it can be easily verified that \eqref{eq:ECR-matrix} is totally unimodular (TU).
In $\mathcal{C}^{l}$, the right-hand-side vector is $\pm\vc{e}^{l} \in {\mathbb R}^{n}$, where $\vc{e}^{l}$ is the unit vector whose components are all zero except for that corresponding to row $l$ representing $y_1x_l$, which is equal to $1$.
Because $\bar{\vc{\pi}}^l$ is an extreme point of $\mathcal{C}^{l}$, it is associated with a basic feasible solution for its system of equations.
Let $B$ be the corresponding basis for \eqref{eq:ECR-matrix}.
It follows from Cramer's rule that all elements of $B^{-1}$ belong to $\{0,-1,1\}$ since \eqref{eq:ECR-matrix} is TU.
Therefore, the components of $\pm B^{-1}\vc{e}^{l}$ belong to $\{0,-1,1\}$.
We conclude that the components of basic feasible solutions to $\mathcal{C}^{l}$ are equal to $0$ or $1$ due to non-negativity of all variables in its description.
\Halmos
\endproof
\medskip
\begin{remark} \label{rem:1-simplex}
When $m = 1$, multiplying the bound constraints with $1-y_1$ in Step 4 of the EC\&R\xspace procedure produces two of the standard McCormick bounds.
As a result, we can skip Step 4 in the EC\&R\xspace procedure and merge it into the relaxation step, in which the other two McCormick bounds are used for relaxing the remaining bilinear terms.
Considering Remark~\ref{rem:reduce-bilinear}, any remaining bilinear term in the aggregated inequality can be \textit{relaxed} using one of the two McCormick lower bounds, one of the two McCormick upper bounds, or the $\pm z$ variable corresponding to that term, depending on the sign of its coefficient.
In this case, the characterization of EC\&R\xspace assignment can be reduced further to $\big[\mcl{I}_1,\bar{\mcl{I}}\big]$.
\end{remark}
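For concreteness, writing $w$ for the bilinear term $y_1x_i$ with bounds $0 \leq x_i \leq u_i$ and $0 \leq y_1 \leq 1$, the four bound products referenced in this remark expand to the standard McCormick inequalities:
\begin{align*}
y_1x_i \geq 0 \; &\Longleftrightarrow \; w \geq 0, & (1-y_1)\,x_i \geq 0 \; &\Longleftrightarrow \; w \leq x_i,\\
y_1(u_i - x_i) \geq 0 \; &\Longleftrightarrow \; w \leq u_iy_1, & (1-y_1)(u_i - x_i) \geq 0 \; &\Longleftrightarrow \; w \geq x_i + u_iy_1 - u_i.
\end{align*}
The products with $1-y_1$ in the right column are those produced in Step 4, while the products in the left column serve as the relaxation options for remaining bilinear terms with negative and positive coefficients, respectively.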
\medskip
\begin{remark} \label{rem:1-separation}
As described in Remark~\ref{rem:1-simplex}, each remaining bilinear term in the aggregated inequality of the EC\&R\xspace procedure can be relaxed into three different linear terms.
While this can lead to an exponential growth in the number of resulting linear EC\&R\xspace inequalities for each EC\&R\xspace assignment, we can use an efficient separation procedure to find the most violated inequality among the resulting EC\&R\xspace inequalities as follows.
Assume that we aim to separate a given solution $(\bar{\vc{x}}; \bar{y}_1; \bar{\vc{z}})$ from $\mathop{\rm conv}(\mcl{S}^1)$ through the EC\&R\xspace inequalities obtained from the aggregated inequality $g(\bar{\vc{x}}; y_1; \bar{\vc{z}}) \geq 0$ associated with the EC\&R\xspace assignment $\big[\mcl{I}_1,\bar{\mcl{I}}\big]$.
For each bilinear term $y_1x_i$, we choose the relaxation option that provides the minimum value among $u_i\bar{y}_1$ obtained from using $y_1(u_i - x_i) \geq 0$, $\bar{x}_i$ obtained from using $(1-y_1)x_i \geq 0$, and $\bar{z}_i$ obtained from using $-y_1x_i + z_i \geq 0$.
Similarly, for each bilinear term $-y_1x_i$, we choose the relaxation option that provides the minimum value among $0$ obtained from using $y_1x_i \geq 0$, $u_i - \bar{x}_i - u_i\bar{y}_1$ obtained from using $(1-y_1)(u_i - x_i) \geq 0$, and $-\bar{z}_i$ obtained from using $y_1x_i - z_i \geq 0$.
This approach provides the most violated EC\&R\xspace inequality in time linear in the number of remaining bilinear terms in the aggregated inequality.
\end{remark}
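A minimal sketch of this separation rule is given below; the function name is our own illustration, \texttt{terms} is assumed to map the index $i$ of each remaining bilinear term to the sign of its coefficient, and the function returns the total contribution of the chosen relaxations together with the choices made.
\begin{verbatim}
def relax_remaining_terms(x_bar, y_bar, z_bar, u, terms):
    # For each remaining bilinear term sign*y_1*x_i, pick the relaxation
    # option of minimum value at (x_bar, y_bar, z_bar), as in the remark.
    total, choice = 0.0, {}
    for i, sign in terms.items():
        if sign > 0:  # term +y_1*x_i
            options = {"u*y": u[i] * y_bar,  # from y_1(u_i - x_i) >= 0
                       "x": x_bar[i],        # from (1 - y_1) x_i >= 0
                       "z": z_bar[i]}        # from -y_1 x_i + z_i >= 0
        else:         # term -y_1*x_i
            options = {"0": 0.0,             # from y_1 x_i >= 0
                       "u-x-u*y": u[i] - x_bar[i] - u[i] * y_bar,
                       "-z": -z_bar[i]}      # from y_1 x_i - z_i >= 0
        best = min(options, key=options.get)
        choice[i] = best
        total += options[best]
    return total, choice
\end{verbatim}
Minimizing the value of each relaxation term minimizes the left-hand side of the resulting EC\&R\xspace inequality at the given point, and hence identifies the most violated inequality in a single linear pass over the remaining terms.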
\medskip
Considering the relation between the extreme points of the projection cone $\mcl{C}^l$ for $l \in K$ and the aggregation weights in the EC\&R\xspace procedure, Proposition~\ref{prop:primal-weight} and Remark~\ref{rem:1-simplex} imply that generating class-$l^{\pm}$ EC\&R\xspace inequalities reduces to identifying the assignment $\big[\mcl{I}_1,\bar{\mcl{I}}\big]$ as the aggregation weights are readily determined.
In particular, the constraints in $\mcl{I}_1$ are multiplied with $y_1$, and those in $\bar{\mcl{I}}$ are multiplied with $1-y_1$.
We next show that, for set $\mcl{S}^1$, identifying all the EC\&R\xspace assignments that satisfy the EC\&R\xspace conditions (C1) and (C2) can be achieved by considering a special graphical structure in the underlying network.
\medskip
Given a network $\mr{G} = (\mr{V},\mr{A})$ with a node set $\mr{V}$ and arc set $\mr{A}$, assume that the index $k$ of variables $z_k$ in the description of $\mcl{S}^1$ refers to the arc whose flow variable $x_k$ appears in that bilinear constraint, i.e., $y_1x_k = z_k$ for $k \in \mr{A} = N = K$.
We define $t(k)$ and $h(k)$ to be the tail and head nodes of arc $k \in \mr{A}$, respectively.
Further, for any node $i \in \mr{V}$, we define $\delta^+(i)$ and $\delta^-(i)$ to be the set of outgoing and incoming arcs at that node, respectively.
We refer to the flow-balance inequality $\sum_{k \in \delta^+(i)} x_k - \sum_{k \in \delta^-(i)} x_k \geq f_i$ (resp. $-\sum_{k \in \delta^+(i)} x_k + \sum_{k \in \delta^-(i)} x_k \geq -f_i$) corresponding to node $i$ as the \textit{positive} (resp. \textit{negative}) flow-balance inequality, and refer to its index in the description of $\Xi$ by $i^+$ (resp. $i^-$) to be recorded in the EC\&R\xspace assignment.
For example, an EC\&R\xspace assignment $\big[\{i^+\},\{j^-\}\big]$ implies that, in the aggregation,
the positive flow-balance inequality corresponding to the node $i \in \mr{V}$ is multiplied with $y_1$, and the negative flow-balance inequality corresponding to the node $j \in \mr{V}$ is multiplied with $1-y_1$.
In the sequel, we refer to the undirected variant of a subnetwork $\mr{P}$ of $\mr{G}$ by $\bar{\mr{P}}$, and conversely, we refer to the directed variant of an undirected subnetwork $\bar{\mr{P}}$ of $\bar{\mr{G}}$ by $\mr{P}$ according to the arc directions in $\mr{G}$.
\begin{proposition} \label{prop:primal-tree}
Consider set $\mcl{S}^1$ with $\Xi$ that represents the network polytope corresponding to the network $\mr{G} = (\mr{V},\mr{A})$.
Let $\big[\mcl{I}_1,\bar{\mcl{I}}\big]$ be an EC\&R\xspace assignment for class-$l^{\pm}$, for some $l \in \mr{A}$, that leads to a non-trivial facet-defining inequality of $\mathop{\rm conv}(\mcl{S}^1)$.
Define $\widetilde{\mcl{I}} = \{i \in \mr{V} | i^{\pm} \in \mcl{I}_1 \cup \bar{\mcl{I}} \}$ to be the subset of nodes whose flow-balance inequalities are used in the aggregation.
Then, there exists a tree $\bar{\mr{T}}$ of $\bar{\mr{G}}$ composed of the nodes in $\widetilde{\mcl{I}}$ such that arc $l$ is incident to exactly one node of $\bar{\mr{T}}$.
\end{proposition}
\proof{Proof.}
First, we observe that for each node $i \in \mr{V}$, both of its positive and negative flow-balance inequalities cannot be selected for the aggregation, since otherwise, the columns representing the positive and negative inequalities in the basis of the coefficient matrix \eqref{eq:ECR-matrix} associated with the extreme point of $\mcl{C}^l$ would be linearly dependent, which would be a contradiction to the fact that the selected EC\&R\xspace assignment leads to a facet-defining inequality of $\mathop{\rm conv}(\mcl{S}^1)$; see the proof of Proposition~\ref{prop:primal-weight} for details.
As a result, considering that $\mcl{I}_1 \cap \bar{\mcl{I}} = \emptyset$ by the EC\&R\xspace requirement, at most one of the following possibilities can occur in the EC\&R\xspace assignment: $i^+ \in \mcl{I}_1$, $i^+ \in \bar{\mcl{I}}$, $i^- \in \mcl{I}_1$, and $i^- \in \bar{\mcl{I}}$.
Therefore, each node in $\widetilde{\mcl{I}}$ corresponds to a unique flow-balance constraint in the EC\&R\xspace assignment.
Next, we show that arc $l$ is incident to exactly one node of $\widetilde{\mcl{I}}$.
It follows from condition (C2) of the EC\&R\xspace procedure that the bilinear term $y_1x_l$ for arc $l$ in the base equality must be canceled during the aggregation.
The constraints of $\Xi$ that can produce the bilinear term $y_1x_l$ during the aggregation are the flow-balance constraint corresponding to the tail node $t(l)$ of arc $l$, and the flow-balance constraint corresponding to the head node $h(l)$ of arc $l$.
Since the aggregation weights for all the constraints in the EC\&R\xspace assignment are $1$ according to Proposition~\ref{prop:primal-weight}, and considering that each flow-balance constraint can appear at most once in the aggregation as noted above, the only possibility to cancel the term $y_1x_l$ is to pick exactly one of the above constraints in the EC\&R\xspace assignment.
As a result, exactly one of the head and the tail nodes of arc $l$ must be in $\widetilde{\mcl{I}}$.
Next, we show that there exists a tree $\bar{\mr{T}}$ of $\bar{\mr{G}}$ whose node set is $\widetilde{\mcl{I}}$.
Assume by contradiction that there is no such tree composed of the nodes in $\widetilde{\mcl{I}}$.
Therefore, $\widetilde{\mcl{I}}$ can be partitioned into two subsets $\widetilde{\mcl{I}}_1$ and $\widetilde{\mcl{I}}_2$, where the nodes in $\widetilde{\mcl{I}}_1$ are not adjacent to any nodes in $\widetilde{\mcl{I}}_2$.
It is clear that arc $l$ cannot be incident to the nodes in both $\widetilde{\mcl{I}}_1$ and $\widetilde{\mcl{I}}_2$, since otherwise $\widetilde{\mcl{I}}_1$ and $\widetilde{\mcl{I}}_2$ would have adjacent nodes.
Assume without loss of generality that arc $l$ is incident to a node in $\widetilde{\mcl{I}}_1$.
Since the given EC\&R\xspace assignment leads to a facet-defining inequality after applying the relaxation step, its aggregation weights correspond to an extreme point of $\mcl{C}^l$ as described in the proof of Proposition~\ref{prop:primal-weight}.
The resulting system of equations for the associated basic feasible solution can be written as
\begin{equation}
\left[
\def1.2{1.2}
\begin{array}{ccc|ccc|c}
\pm E_1 & \pm I_1 & \pm \bar{I}_1 \, & \, 0 & 0 & 0 \, & \, C_1 \\
\hline
0 & 0 & 0 \, & \pm E_2 & \pm I_2 & \pm \bar{I}_2 \, & \, C_2 \\
\hline
\hline
0 & 0 & 0 \, & 0 & 0 & 0 \, & \, C_3 \\
\end{array}
\right]
\left[
\begin{array}{c}
\vc{1} \\
\hline
\vc{1} \\
\hline
\vc{0}
\end{array}
\right]
=
\left[
\begin{array}{c}
\pm \vc{e}^{l} \\
\hline
\vc{0} \\
\hline
\hline
\vc{0}
\end{array}
\right], \label{eq:matrix-tree}
\end{equation}
where the columns and rows of the basis matrix have been suitably reordered.
In \eqref{eq:matrix-tree}, the first (resp. second) row block corresponds to bilinear terms $y_1x_i$ for arcs $i \in \mr{A}$ that are incident to the nodes in $\widetilde{\mcl{I}}_1$ (resp. $\widetilde{\mcl{I}}_2$), and the last row block corresponds to all the other bilinear terms that do not appear during aggregation.
The first (resp. fourth) column block denotes the transpose of the node-arc incidence matrix for nodes in $\widetilde{\mcl{I}}_1$ (resp. $\widetilde{\mcl{I}}_2$).
The second (resp. fifth) column block contains positive or negative multiples of columns of the identity matrix representing the weights used in the relaxation step of the EC\&R\xspace procedure corresponding to the McCormick bounds.
The third (resp. sixth) column block represents positive or negative multiples of the bilinear constraints in the description of $\mcl{S}^1$ used in the relaxation step corresponding to the arcs appearing in the first (resp. second) row blocks.
All these columns have weights equal to 1 according to Proposition~\ref{prop:primal-weight} as denoted in the first two row blocks of the solution vector multiplied with this matrix.
The last column block in the basis corresponds to the constraints that have weights $0$ in the basic feasible solution and are added to complete the basis.
Lastly, $\vc{e}^{l}$ is a unit vector whose elements are all zeros except that corresponding to $y_1x_l$, which is equal to $1$.
It is now easy to verify that the linear combination of the columns in the column blocks 4, 5 and 6 of the basis matrix with weights $1$ yields the zero vector.
This shows that these columns are linearly dependent, a contradiction.
\Halmos
\endproof
\medskip
Proposition~\ref{prop:primal-tree} implies that each non-trivial EC\&R\xspace inequality can be obtained as an aggregation of constraints corresponding to a special tree structure.
The next theorem provides the converse result that aggregating constraints associated with each special tree structure can produce EC\&R\xspace inequalities.
\begin{theorem} \label{thm:primal-tree-converse}
Consider set $\mcl{S}^1$ with $\Xi$ that represents the network polytope corresponding to the network $\mr{G} = (\mr{V},\mr{A})$.
Let $\bar{\mr{T}}$ be a tree in $\bar{\mr{G}}$ with the node set $\widetilde{\mcl{I}} \subseteq \mr{V}$, and let $l \in \mr{A}$ be an arc that is incident to exactly one node of $\bar{\mr{T}}$.
Then, for any partition $\widetilde{\mcl{I}}_1$ and $\widetilde{\mcl{I}}_2$ of $\widetilde{\mcl{I}}$ (i.e., $\widetilde{\mcl{I}}_1 \cap \widetilde{\mcl{I}}_2 = \emptyset$ and $\widetilde{\mcl{I}}_1 \cup \widetilde{\mcl{I}}_2 = \widetilde{\mcl{I}}$), we have that
\begin{itemize}
\item[(i)] if $h(l) \in \widetilde{\mcl{I}}$, then $\big[\{i^+\}_{i \in \widetilde{\mcl{I}}_1},\{i^-\}_{i \in \widetilde{\mcl{I}}_2}\big]$ is an EC\&R\xspace assignment for class-$l^{+}$,
\item[(ii)] if $h(l) \in \widetilde{\mcl{I}}$, then $\big[\{i^-\}_{i \in \widetilde{\mcl{I}}_1},\{i^+\}_{i \in \widetilde{\mcl{I}}_2}\big]$ is an EC\&R\xspace assignment for class-$l^{-}$,
\item[(iii)] if $t(l) \in \widetilde{\mcl{I}}$, then $\big[\{i^-\}_{i \in \widetilde{\mcl{I}}_1},\{i^+\}_{i \in \widetilde{\mcl{I}}_2}\big]$ is an EC\&R\xspace assignment for class-$l^{+}$,
\item[(iv)] if $t(l) \in \widetilde{\mcl{I}}$, then $\big[\{i^+\}_{i \in \widetilde{\mcl{I}}_1},\{i^-\}_{i \in \widetilde{\mcl{I}}_2}\big]$ is an EC\&R\xspace assignment for class-$l^{-}$.
\end{itemize}
\end{theorem}
\proof{Proof.}
We show the result for part (i), as the proof for parts (ii)--(iv) follows from similar arguments.
It suffices to show that the aggregation procedure performed on the constraints in the proposed assignment satisfies the EC\&R\xspace conditions (C1) and (C2).
For condition (C1), we need to show that at least $|\widetilde{\mcl{I}}_1| + |\widetilde{\mcl{I}}_2|$ bilinear terms are canceled during aggregation.
Let $R$ be the set of arcs in $\mr{T}$, which is the directed variant of $\bar{\mr{T}}$ obtained by replacing each edge with its corresponding arc in $\mr{G}$.
It is clear from the definition that $l \notin R$.
As a result, for each $r \in R$, the only constraints in the aggregation that contain $x_r$ are the flow-balance constraints corresponding to the head node $h(r)$ and tail node $t(r)$ of $r$ since both of these nodes are included in $\widetilde{\mcl{I}}$ as $r$ is an arc in $\mr{T}$.
There are four cases for the partitions of $\widetilde{\mcl{I}}$ that these head and tail nodes can belong to.
For the first case, assume that $h(r) \in \widetilde{\mcl{I}}_1$ and $t(r) \in \widetilde{\mcl{I}}_1$.
It follows from the EC\&R\xspace assignment in case (i) that the positive flow-balance constraints $h(r)^+$ and $t(r)^+$ are used in the aggregation with weights $y_1$.
In particular, we have $y_1\big(\sum_{k \in \delta^+(h(r))} x_k - \sum_{k \in \delta^-(h(r))\setminus \{r\}} x_k - x_r \geq f_{(h(r))}\big)$ added with $y_1\big(\sum_{k \in \delta^+(t(r))\setminus \{r\}} x_k - \sum_{k \in \delta^-(t(r))} x_k + x_r \geq f_{(t(r))}\big)$, which results in the cancellation of $y_1x_r$.
For the second case, assume that $h(r) \in \widetilde{\mcl{I}}_1$ and $t(r) \in \widetilde{\mcl{I}}_2$.
It follows from the EC\&R\xspace assignment that the positive flow-balance constraint $h(r)^+$ and the negative flow-balance constraint $t(r)^-$ are used in the aggregation with weights $y_1$ and $(1-y_1)$, respectively.
In particular, we have $y_1\big(\sum_{k \in \delta^+(h(r))} x_k - \sum_{k \in \delta^-(h(r))\setminus \{r\}} x_k - x_r \geq f_{(h(r))}\big)$ added with $(1-y_1)\big(-\sum_{k \in \delta^+(t(r))\setminus \{r\}} x_k + \sum_{k \in \delta^-(t(r))} x_k - x_r \geq -f_{(t(r))}\big)$, which results in the cancellation of $y_1x_r$.
For the remaining two cases, we can use similar arguments by changing the inequality signs to show that the term $y_1x_r$ will be canceled during aggregation.
As a result, we obtain at least $|R|$ cancellations during the aggregation corresponding to the arcs of $\mr{T}$.
Since $\bar{\mr{T}}$ is a tree, we have that $|R| = |\widetilde{\mcl{I}}_1| + |\widetilde{\mcl{I}}_2| - 1$.
Finally, for arc $l$, it follows from the assumption of case (i) in the problem statement that $h(l) \in \widetilde{\mcl{I}}$ and $t(l) \notin \widetilde{\mcl{I}}$.
If $h(l) \in \widetilde{\mcl{I}}_1$, then according to the EC\&R\xspace procedure for class-$l^+$, we aggregate $y_1x_l - z_l = 0$ with $y_1\big(\sum_{k \in \delta^+(h(l))} x_k - \sum_{k \in \delta^-(h(l))\setminus \{l\}} x_k - x_l \geq f_{(h(l))}\big)$, which results in the cancellation of $y_1x_l$.
If $h(l) \in \widetilde{\mcl{I}}_2$, we aggregate $y_1x_l - z_l = 0$ with $(1-y_1)\big(-\sum_{k \in \delta^+(h(l))} x_k + \sum_{k \in \delta^-(h(l))\setminus \{l\}} x_k + x_l \geq -f_{(h(l))}\big)$, which results in the cancellation of $y_1x_l$.
As a result, in total we have at least $|R| + 1 = |\widetilde{\mcl{I}}_1| + |\widetilde{\mcl{I}}_2|$ cancellations during the aggregation of the constraints in the EC\&R\xspace assignment, showing the satisfaction of condition (C1) of the EC\&R\xspace procedure.
For condition (C2) of the EC\&R\xspace procedure, we need to show that for each constraint used in the aggregation, including the base equality, at least one bilinear term among those created after multiplication of that constraint with its corresponding weight is canceled.
There are two types of constraints to consider.
The first type is the flow-balance constraints in $\widetilde{\mcl{I}}_1$ and $\widetilde{\mcl{I}}_2$, which correspond to the nodes of $\bar{\mr{T}}$.
It follows from the previous discussion that for each node $i \in \widetilde{\mcl{I}}_1 \cup \widetilde{\mcl{I}}_2$, the bilinear term $y_1x_r$, where $r$ is an arc in $\mr{T}$ that is incident to $i$, i.e., $h(r) = i$ or $t(r) = i$, is canceled during aggregation.
This proves that at least one bilinear term is canceled in the inequality obtained after multiplying the corresponding flow-balance constraint at node $i$ with $y_1$ or $1-y_1$.
The second type of constraints used in the aggregation is the base equality $l$.
The proof follows from an argument similar to that given above where we showed that the bilinear term $y_1x_l$ that appears in the base constraint $y_1x_l - z_l = 0$ is canceled.
We conclude that condition (C2) of the EC\&R\xspace procedure is satisfied for all constraints used in the aggregation.
\Halmos
\endproof
\medskip
In view of Theorem~\ref{thm:primal-tree-converse}, note that for the most basic choice of the tree $\bar{\mr{T}}$, i.e., the empty set, the resulting EC\&R\xspace inequalities recover the classical McCormick bounds.
Therefore, considering any nonempty tree structure can potentially improve the McCormick results by adding new valid inequalities for the bilinear set.
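To make the recovery of the McCormick bounds explicit, the four empty assignments (two classes, each with the two relaxation options for the single remaining bilinear term $y_1x_l$) aggregate as
\begin{align*}
\text{class-}l^{+}\!:\quad (y_1x_l - z_l) + (u_ly_1 - y_1x_l) \geq 0 \;\; &\Longrightarrow \;\; z_l \leq u_ly_1,\\
(y_1x_l - z_l) + (x_l - y_1x_l) \geq 0 \;\; &\Longrightarrow \;\; z_l \leq x_l,\\
\text{class-}l^{-}\!:\quad (z_l - y_1x_l) + (y_1x_l) \geq 0 \;\; &\Longrightarrow \;\; z_l \geq 0,\\
(z_l - y_1x_l) + (y_1x_l + u_l - x_l - u_ly_1) \geq 0 \;\; &\Longrightarrow \;\; z_l \geq x_l + u_ly_1 - u_l,
\end{align*}
where the added nonnegative quantities are the McCormick relaxation options of Remark~\ref{rem:1-simplex}.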
\medskip
Proposition~\ref{prop:primal-tree} and Theorem~\ref{thm:primal-tree-converse} suggest that the EC\&R\xspace assignments have a simple graphical interpretation for $\mcl{S}^1$, which can be used to generate all non-trivial EC\&R\xspace inequalities to describe $\mathop{\rm conv}(\mcl{S}^1)$ without the need to search for all possible constraints and their aggregation weights that satisfy the EC\&R\xspace conditions as is common for general bilinear sets.
This attribute can significantly mitigate cut-generation efforts when used systematically to produce cutting planes.
Such a systematic procedure can be designed by identifying trees of a given network and then following the result of Theorem~\ref{thm:primal-tree-converse} to obtain the corresponding EC\&R\xspace assignments.
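As a rough illustration, the following Python sketch (our own illustrative code; the adjacency data and function names are assumptions, not part of the formal development) enumerates connected node subsets of a small network that contain exactly one endpoint of a base arc, and emits every bipartition of each subset as a candidate EC\&R\xspace assignment in the spirit of Theorem~\ref{thm:primal-tree-converse}.
\begin{verbatim}
from itertools import combinations, islice

def connected(nodes, adj):
    """Check that `nodes` induces a connected subgraph of the network."""
    nodes = set(nodes)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend(u for u in adj[v] if u in nodes)
    return seen == nodes

def candidate_assignments(adj, base_arc, max_size):
    """Yield bipartitions (I1, I2) of connected node subsets containing
    exactly one endpoint of `base_arc`."""
    t, h = base_arc
    for r in range(1, max_size + 1):
        for sub in combinations(adj, r):
            if (t in sub) == (h in sub):     # need exactly one endpoint
                continue
            if not connected(sub, adj):
                continue
            for k in range(len(sub) + 1):
                for I1 in combinations(sub, k):
                    yield I1, tuple(v for v in sub if v not in I1)

# hypothetical spiked-cycle adjacency in the spirit of Figure G1
adj = {1: {2, 4, 5}, 2: {1, 3, 6}, 3: {2, 4, 7}, 4: {1, 3, 8},
       5: {1}, 6: {2}, 7: {3}, 8: {4}}
for I1, I2 in islice(candidate_assignments(adj, (1, 5), 3), 5):
    print("I1 =", I1, " I2 =", I2)
\end{verbatim}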
We illustrate this method in the following example.
\medskip
\begin{example} \label{ex:primal-tree}
Consider set $\mcl{S}^1$ where $\Xi$ represents the network model corresponding to a \textit{spiked cycle} graph $\mr{G} = (\mr{V},\mr{A})$ shown in Figure~\ref{fig:primal-tree}.
We refer to each arc in this network as a pair $(i,j)$ of its tail node $i$ and its head node $j$, and denote its corresponding flow variable as $x_{i,j}$.
Assume that we are interested in finding EC\&R\xspace assignments for class-$(1,5)^{+}$.
According to Theorem~\ref{thm:primal-tree-converse}, we need to identify the trees that contain exactly one of the tail and head nodes of arc $(1,5)$.
For instance, we may select the tree $\bar{\mr{T}}$ composed of the nodes $\widetilde{\mcl{I}} = \{8,4,1,2,6\}$, which contains the tail node of arc $(1,5)$.
Consider the partitions $\widetilde{\mcl{I}}_1 = \{8,2\}$ and $\widetilde{\mcl{I}}_2 = \{4,1,6\}$.
Following case (iii) in Theorem~\ref{thm:primal-tree-converse}, we can obtain the EC\&R\xspace assignment $\big[\{8^-, 2^-\},\{4^+, 1^+, 6^+\}\big]$ for class-$(1,5)^{+}$.
As a result, we multiply the negative flow-balance constraints at nodes $8$ and $2$ with $y_1$, multiply the positive flow-balance constraints at nodes $4$, $1$, and $6$ with $1-y_1$, and aggregate them, together with the base bilinear equality corresponding to arc $(1,5)$ taken with weight $1$, to obtain the aggregated inequality
\begin{multline*}
-z_{1,5} - y_1 x_{2,3} - y_1 x_{4,3} + (f_8 + f_2 +f_1 + f_4 + f_6)y_1\\
+ x_{1,5} -x_{2,1} + x_{4,3} - x_{8,4} + x_{6,2} - f_1 - f_4 - f_6 \geq 0
\end{multline*}
where $f_i$ denotes the supply/demand value at node $i$.
Following Remark~\ref{rem:1-simplex}, we may relax each of the remaining bilinear terms $-y_1 x_{2,3}$ and $-y_1x_{4,3}$ into one of three possible linear expressions, leading to $9$ total EC\&R\xspace inequalities.
If implemented inside a separation oracle, we can use Remark~\ref{rem:1-separation} to efficiently find the most violated inequality among these $9$.
\end{example}
\medskip
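The correctness of this aggregation can be checked mechanically. The following script is an illustrative sketch (assuming the \texttt{sympy} package; the constraint data is transcribed from Example~\ref{ex:primal-tree} and the code is otherwise not part of the formal development) that expands the weighted sum and reproduces the displayed inequality.
\begin{verbatim}
import sympy as sp

y1, z15 = sp.symbols('y1 z_15')
arcs = [(1,5),(2,1),(2,3),(4,3),(4,1),(8,4),(6,2)]
x = {a: sp.Symbol(f'x_{a[0]}{a[1]}') for a in arcs}
f = {i: sp.Symbol(f'f_{i}') for i in (1, 2, 4, 6, 8)}

# "lhs - rhs" of each flow-balance inequality used in the example
pos = {4: x[(4,1)] + x[(4,3)] - x[(8,4)] - f[4],    # node 4, positive
       1: x[(1,5)] - x[(4,1)] - x[(2,1)] - f[1],    # node 1, positive
       6: x[(6,2)] - f[6]}                          # node 6, positive
neg = {8: -x[(8,4)] + f[8],                         # node 8, negative
       2: -x[(2,1)] - x[(2,3)] + x[(6,2)] + f[2]}   # node 2, negative

agg  = y1*x[(1,5)] - z15                        # base equality, weight 1
agg += sum(y1*neg[i] for i in (8, 2))           # I1 = {8, 2}, weight y1
agg += sum((1 - y1)*pos[i] for i in (4, 1, 6))  # I2 = {4,1,6}, weight 1-y1
print(sp.expand(agg), ">= 0")
# -> -z_15 - y1*x_23 - y1*x_43 + (f_1+f_2+f_4+f_6+f_8)*y1
#    + x_15 - x_21 + x_43 - x_84 + x_62 - f_1 - f_4 - f_6, as displayed
\end{verbatim}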
\begin{figure}[!t]
\centering
\includegraphics[scale=3]{G1.png}
\caption{Graph $\mr{G}$ of Example~\ref{ex:primal-tree}}\label{fig:primal-tree}
\end{figure}
Consider a generalization of $\mcl{S}^1$ where the bilinear constraints may contain multiple bilinear terms:
\[
\widetilde{\mcl{S}}^1 = \left\{ (\vc{x};y;\vc{z}) \in \Xi \times \Delta_1 \times {\mathbb R}^{\kappa} \, \middle|\,
y_1 A^k \vc{x} = z_{k}, \, \, \, \forall k \in K
\right\},
\]
where $A^k$ is a matrix of appropriate size with potentially multiple nonzero elements.
For instance, $\widetilde{\mcl{S}}^1$ may include the constraint $2y_1x_i -5 y_1x_j = z_k$ for some $i, j \in N$.
In this case, the coefficient matrix \eqref{eq:ECR-matrix} of $\mcl{C}^l$ will be modified as follows after rearranging columns and rows.
\begin{equation}
\Bigl[
\begin{array}{c|c|c|c|c|c|c|c}
E^{^{\intercal}} \, & \, -E^{^{\intercal}} \, & \, I \, & \, -I \, & \, I \, & \, -I & \, \tilde{A} \, & \, -\tilde{A}
\end{array}
\Bigr]. \label{eq:ECR-matrix-3}
\end{equation}
In the above matrix, the row and column blocks are defined similarly to those of \eqref{eq:ECR-matrix} with a difference that the seventh and eighth column blocks correspond to the weights of the bilinear constraints $y_1A^k\vc{x} = z_k$, which are represented by $\beta^+_k$ and $\beta^-_k$ in the dual weight vector, respectively, for all $k \in K_l$.
In particular, the element at column $k \in K_l$ and row $i \in N$ of $\tilde{A}$ is equal to $A^k_{1i}$.
It is clear from the structure of \eqref{eq:ECR-matrix-3} that this matrix may lose the TU property when $A^k$ contains multiple nonzero elements for some $k \in K_l$.
In fact, this property may not hold even if $A^k_{1i} \in \{0,1,-1\}$ for all $k \in K_l$ and $i \in N$.
As a result, there is no guarantee that the aggregation weights for all the EC\&R\xspace inequalities corresponding to non-trivial facets of $\mathop{\rm conv}(\widetilde{\mcl{S}}^1)$ will be $1$.
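For intuition, recall that total unimodularity requires every square submatrix to have determinant in $\{0,\pm 1\}$; a $2 \times 2$ submatrix with entries in $\{0,\pm 1\}$ can already violate this, as the following elementary snippet illustrates (assuming \texttt{numpy}; the matrix is our own illustrative choice).
\begin{verbatim}
# a 2x2 matrix with entries in {0,+-1} whose determinant is -2;
# any matrix containing it as a submatrix cannot be totally unimodular
import numpy as np

M = np.array([[1,  1],
              [1, -1]])
print(round(np.linalg.det(M)))   # -2
\end{verbatim}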
While an explicit derivation of the convex hull description through identifying special network structures, such as those presented for $\mcl{S}^1$, may not be attainable for this problem in its original space of variables, we can use the following ancillary result to apply Theorem~\ref{thm:primal-tree-converse} and obtain a convex hull description for $\widetilde{\mcl{S}}^1$ in a higher-dimensional space.
\begin{proposition} \label{prop:convex-high}
Consider sets
\[
\mcl{S}^1 = \left\{ (\vc{x};y;\vc{w}) \in \Xi \times \Delta_1 \times {\mathbb R}^{n} \, \middle|\,
y_1x_i = w_{i}, \, \, \, \forall i \in N
\right\},
\]
and
\[
\mcl{D} = \left\{ (\vc{x};y;\vc{w};\vc{z}) \in \Xi \times \Delta_1 \times {\mathbb R}^{n} \times {\mathbb R}^{\kappa} \, \middle|\,
A^k\vc{w} = z_{k}, \, \, \, \forall k \in K
\right\}.
\]
Then,
\begin{equation}
\mathop{\rm conv}\left( (\mcl{S}^1 \times {\mathbb R}^{\kappa}) \cap \mcl{D} \right) = \left(\mathop{\rm conv}(\mcl{S}^1) \times {\mathbb R}^{\kappa}\right) \cap \mcl{D}. \label{eq:conv_high}
\end{equation}
\end{proposition}
\proof{Proof.}
We prove the result by showing both directions of inclusion for the equality.
The direct inclusion follows from the fact that the convex hull of the intersection of two sets is contained in the intersection of their convex hulls, together with the observations that $\mcl{D}$ is a polyhedron, and hence convex, and that $\mathop{\rm conv}(\mcl{S}^1 \times {\mathbb R}^{\kappa}) = \mathop{\rm conv}(\mcl{S}^1) \times {\mathbb R}^{\kappa}$.
For the reverse inclusion, we need to show that $\mathop{\rm conv}\left( (\mcl{S}^1 \times {\mathbb R}^{\kappa}) \cap \mcl{D} \right) \supseteq \left(\mathop{\rm conv}(\mcl{S}^1) \times {\mathbb R}^{\kappa}\right) \cap \mcl{D}$.
Consider a point $\bar{\phi} = (\bar{\vc{x}};\bar{y};\bar{\vc{w}};\bar{\vc{z}}) \in \left(\mathop{\rm conv}\left( \mcl{S}^1 \right) \times {\mathbb R}^{\kappa}\right) \cap \mcl{D}$.
We show that $\bar{\phi} \in \mathop{\rm conv}\left( (\mcl{S}^1 \times {\mathbb R}^{\kappa}) \cap \mcl{D} \right)$.
It follows from the assumption that $\bar{z}_k = A^k\bar{\vc{w}}$ for all $k \in K$.
Further, there must exist a finite collection of points $\hat{\phi}^j = (\hat{\vc{x}}^j;\hat{y}^j;\hat{\vc{w}}^j) \in \mcl{S}^1$ for $j \in J$ such that $\bar{\vc{x}} = \sum_{j \in J}\omega_j\hat{\vc{x}}^j$, $\bar{y} = \sum_{j \in J}\omega_j\hat{y}^j$, and $\bar{\vc{w}} = \sum_{j \in J}\omega_j\hat{\vc{w}}^j$ for some non-negative weights $\omega_j$ such that $\sum_{j \in J}\omega_j = 1$.
Consider the set of points $\dot{\phi}^j = (\dot{\vc{x}}^j;\dot{y}^j;\dot{\vc{w}}^j;\dot{\vc{z}}^j)$ for $j \in J$ such that $\dot{\vc{x}}^j = \hat{\vc{x}}^j$, $\dot{y}^j = \hat{y}^j$, $\dot{\vc{w}}^j = \hat{\vc{w}}^j$, and $\dot{\vc{z}}^j_k = A^k\dot{\vc{w}}^j$ for all $k \in K$.
It is clear that $\dot{\phi}^j \in (\mcl{S}^1 \times {\mathbb R}^{\kappa}) \cap \mcl{D}$ for all $j \in J$ by definition of the components of these points.
It follows that $\bar{\phi} = \sum_{j \in J}\omega_j\dot{\phi}^j$, since $\bar{\vc{x}} = \sum_{j \in J}\omega_j\dot{\vc{x}}^j$, $\bar{y} = \sum_{j \in J}\omega_j\dot{y}^j$, and $\bar{\vc{w}} = \sum_{j \in J}\omega_j\dot{\vc{w}}^j$ by definition, and since $\bar{z}_k = A^k\bar{\vc{w}} = A^k(\sum_{j \in J}\omega_j\hat{\vc{w}}^j) = \sum_{j \in J}\omega_j A^k\hat{\vc{w}}^j = \sum_{j \in J}\omega_j A^k\dot{\vc{w}}^j = \sum_{j \in J}\omega_j \dot{\vc{z}}^j_k$ for all $k \in K$.
This proves that $\bar{\phi} \in \mathop{\rm conv}\left( (\mcl{S}^1 \times {\mathbb R}^{\kappa}) \cap \mcl{D} \right)$.
\Halmos
\endproof
\medskip
The result of Proposition~\ref{prop:convex-high} shows that we can obtain a convex hull description for $\widetilde{\mcl{S}}^1$ in a higher dimension, which is expressed on the left-hand-side of \eqref{eq:conv_high}, by finding the convex hull of $\mcl{S}^1$ through application of Theorem~\ref{thm:primal-tree-converse} and then intersecting it with the linear constraints in $\mcl{D}$ as indicated on the right-hand-side of \eqref{eq:conv_high}.
\subsection{The Case with $m > 1$.} \label{subsec:primal-multi}
In this section, we consider the general case where $m > 1$ in $\mcl{S}$.
The coefficient matrix of $\mcl{C}^l$ in \eqref{eq:Cl} can be written as follows after a suitable rearrangement of columns and rows.
\begin{gather} \label{eq:ECR-matrix-2}
\left[
\def1.2{1.5}
\begin{array}{c|c|c|c||c||c|c||cc|cc|c|cc||cc|cc|c|cc}
E^{^{\intercal}} \, & \vc{0} \, & \, \dotsc \, & \, \vc{0} \, & \, \, -E^{^{\intercal}} \, & I \, & \, -I \, & \, I \, & \, -I \, & \vc{0} \, & \, \vc{0} \, & \, \dotsc \, & \, \vc{0} \, & \, \vc{0} \, & \, \bar{I}^1 \, & \, -\bar{I}^1 \, & \, \vc{0} \, & \, \vc{0} \, & \, \dotsc \, & \, \vc{0} \, & \, \vc{0} \, \\
\hline
\, \vc{0} \, & E^{^{\intercal}} \, & \, \dotsc \, & \, \vc{0} \, & \, -E^{^{\intercal}} \, & \, I \, & \, -I \, & \vc{0} \, & \, \vc{0} \, & \, I \, & \, -I \, & \, \dotsc \, & \, \vc{0} \, & \, \vc{0} \, & \, \vc{0} \, & \, \vc{0} \, & \bar{I}^2 \, & \, -\bar{I}^2 \, & \, \dotsc \, & \, \vc{0} \, & \, \vc{0} \\
\hline
\, \vdots \, & \, \vdots \, & \, \ddots \, & \, \vdots \, & \, \vdots \, & \, \vdots \, & \vdots \, & \, \vdots \, & \, \vdots \, & \, \vdots \, & \, \ddots \, & \, \vdots \, & \, \vdots \, & \, \vdots \, & \, \vdots \, & \vdots \, & \, \vdots \, & \, \ddots \, & \, \vdots \, & \, \vdots \, \\
\hline
\, \vc{0} \, & \, \vc{0} \, & \, \dotsc \, & E^{^{\intercal}} \, & \, -E^{^{\intercal}} \, & \, I \, & \, -I \, & \vc{0} \, & \, \vc{0} \, & \, \vc{0} \, & \, \vc{0} \, & \, \dotsc \, & \, I \, & \, -I \, & \, \vc{0} \, & \, \vc{0} \, & \, \vc{0} \, & \, \vc{0} \, & \, \dotsc \, & \bar{I}^m \, & \, -\bar{I}^m \, \\
\end{array}
\right].
\end{gather}
In the above matrix, each row block $j \in M$ represents the bilinear terms $y_jx_i$ (i.e., $w^j_i$ in the disjunctive programming formulation \eqref{eq_ef}) for all $i \in N$.
The first $m$ column blocks correspond to the weights of the flow-balance constraints in $\Xi$ multiplied by $y_{j}$ for $j \in M$, which are denoted by $\gamma^j_t$ for all $t \in T$ in the dual vector.
The next column block represents the weights of the flow-balance constraints in $\Xi$ multiplied by $1-\sum_{j \in M}y_{j}$, which are denoted by $\theta_t$ for all $t \in T$ in the dual vector.
The next two column blocks indicate the lower and upper bound constraints on variables in $\Xi$ multiplied by $1-\sum_{j \in M}y_{j}$, which are recorded by $\lambda_i$ and $\mu_i$, respectively, for all $i \in N$.
The next $2m$ column blocks correspond to the weights of the lower and upper bound constraints on variables in $\Xi$ multiplied by $y_{j}$ for $j \in M$, which are recorded by $\eta^j_i$ and $\rho^j_i$, respectively, for all $i \in N$.
The last $2m$ column blocks correspond to the weights of the positive and negative bilinear constraints in $\mcl{S}$, which are represented by $\beta^+_k$ and $\beta^-_k$ for all $k \in K_l$.
For instance, for constraint $y_jx_i - z_k \geq 0$ with $(i,j) \in N\times M$ and $k \in K_l$, the elements of column $k$ in $\bar{I}^j$ are all zero except the one in the row corresponding to the bilinear term $y_jx_i$ which is equal to one.
\medskip
It is clear from the structure of \eqref{eq:ECR-matrix-2} that this matrix does not have the TU property, implying that a result similar to that of Proposition~\ref{prop:primal-weight} does not necessarily hold.
Therefore, there is no guarantee that the aggregation weights for all the EC\&R\xspace assignments obtained from the extreme points of $\mcl{C}^l$ are $1$.
In fact, Example~\ref{ex:ECR-weight} shows that there exist EC\&R\xspace assignments with aggregation weights that map to extreme points of $\mcl{C}^l$ with components that are not all $0$ or $1$.
\medskip
\begin{example} \label{ex:ECR-weight}
Consider set $\mcl{S}$ where $\Xi$ describes the network polytope corresponding to network $\mr{G} = (\mr{V},\mr{A})$ in Figure~\ref{fig:primal-tree}, and $\Delta = \{(y_1,y_2) \in {\mathbb R}^2_+ | y_1 + y_2 \leq 1\}$.
Select class-$l^+$ corresponding to the base equality $y_1x_{8,4} - z_l = 0$.
Let $\big[\mcl{I}_1,\mcl{I}_2,\bar{\mcl{I}} \big| \mcl{J},\bar{\mcl{J}}\big]$ be an assignment for class-$l^+$ where $\mcl{I}_1 = \{4^+, 3^+\}$, $\mcl{I}_2 = \{2^-\}$, $\bar{\mcl{I}} = \{1^+\}$, $\mcl{J} = \{(4,1)\}$, and $\bar{\mcl{J}} = \{(2,3)\}$.
Next, we show that the above assignment is an EC\&R\xspace assignment for class-$l^+$ when considering the dual weight $1$ for all constraints except the bound constraint in $\mcl{J}$ which has a dual weight of $2$ in the aggregation.
Specifically, we aggregate the base constraint $y_1x_{8,4} - z_l \geq 0$ with weight $1$, the positive flow-balance constraint at node $4$, i.e., $x_{4,1} + x_{4,3} - x_{8,4} \geq f_4$, with weight $y_1$, the positive flow-balance constraint at node $3$, i.e., $x_{3,7} - x_{4,3} - x_{2,3} \geq f_3$, with weight $y_1$, the negative flow-balance constraint at node $2$, i.e., $x_{6,2} - x_{2,1} - x_{2,3} \geq -f_2$, with weight $y_2$, the positive flow-balance constraint at node $1$, i.e., $x_{1,5} - x_{4,1} - x_{2,1} \geq f_1$, with weight $1-y_1-y_2$, the lower bound constraint for arc $(4,1)$, i.e., $x_{4,1} \geq 0$, with weight $2(1-y_1-y_2)$, and the upper bound constraint for arc $(2,3)$, i.e., $u_{2,3} - x_{2,3} \geq 0$, with weight $1-y_1-y_2$.
During this aggregation, six bilinear terms will be canceled, satisfying condition (C1) of the EC\&R\xspace procedure.
Further, at least one bilinear term for each constraint involved in the aggregation is canceled, satisfying condition (C2) of the EC\&R\xspace procedure.
Therefore, the above assignment is a valid EC\&R\xspace assignment for class-$l^+$.
Next, we argue that the dual weight vector for this assignment corresponds to an extreme point of $\mcl{C}^l$ in \eqref{eq:Cl}.
For $\mcl{S}$ in this example, the coefficient matrix of $\mcl{C}^l$, as depicted in \eqref{eq:ECR-matrix-2}, has 16 rows corresponding to bilinear terms $y_jx_i$ for all $j = 1,2$ and $i \in \mr{A}$.
It is easy to verify that the columns corresponding to the six constraints in the above EC\&R\xspace assignment are linearly independent.
As a result, we can extend these six columns to a basis by adding $10$ more linearly independent columns, namely those corresponding to the constraints used in the relaxation step for the bilinear terms remaining in the aggregated inequality, together with columns that complete the basis.
The resulting basis corresponds to a basic feasible solution where all variables (interpreted as the dual weights for constraints involved in the EC\&R\xspace procedure) are $0$ or $1$, except the one associated with the column representing the lower bound constraint for arc $(4,1)$ which has the dual weight equal to $2$.
Therefore, there exist extreme points of $\mcl{C}^l$ with components that are not all $0$ or $1$.
\end{example}
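\medskip
The aggregation above can be verified symbolically; the following sketch (assuming \texttt{sympy}, with the constraint data transcribed from Example~\ref{ex:ECR-weight}) confirms that the stated weights, including the weight $2(1-y_1-y_2)$ on the lower bound constraint for arc $(4,1)$, cancel exactly the six claimed bilinear terms.
\begin{verbatim}
import sympy as sp

y1, y2, zl = sp.symbols('y1 y2 z_l')
arcs = [(1,5),(2,1),(2,3),(4,3),(4,1),(8,4),(6,2),(3,7)]
x = {a: sp.Symbol(f'x_{a[0]}{a[1]}') for a in arcs}
f = {i: sp.Symbol(f'f_{i}') for i in (1, 2, 3, 4)}
u23 = sp.Symbol('u_23')
w = 1 - y1 - y2                                    # weight 1 - y1 - y2

agg  = y1*x[(8,4)] - zl                            # base equality, weight 1
agg += y1*(x[(4,1)] + x[(4,3)] - x[(8,4)] - f[4])  # node 4, positive, y1
agg += y1*(x[(3,7)] - x[(4,3)] - x[(2,3)] - f[3])  # node 3, positive, y1
agg += y2*(x[(6,2)] - x[(2,1)] - x[(2,3)] + f[2])  # node 2, negative, y2
agg += w*(x[(1,5)] - x[(4,1)] - x[(2,1)] - f[1])   # node 1, positive, w
agg += 2*w*x[(4,1)]                                # x_41 >= 0, weight 2w
agg += w*(u23 - x[(2,3)])                          # x_23 <= u_23, weight w
expr = sp.expand(agg)

canceled = [(y1, (8,4)), (y1, (4,3)), (y1, (2,3)),
            (y2, (2,3)), (y2, (2,1)), (y1, (4,1))]
assert all(expr.coeff(yv * x[a]) == 0 for yv, a in canceled)
print("six bilinear terms canceled as claimed")
\end{verbatim}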
\medskip
According to the above observation, even though identifying the aggregation weights of the EC\&R\xspace procedure for $\mcl{S}$ is not as straightforward as for $\mcl{S}^1$, we next show that a generalization of the tree structure can still be detected for a given EC\&R\xspace assignment.
First, we give a few definitions that will be used to derive these results.
\medskip
\begin{definition} \label{def:primal-parallel-1}
Consider set $\mcl{S}$ where $\Xi$ describes the network polytope corresponding to network $\mr{G} = (\mr{V},\mr{A})$.
We define a \textit{parallel network} $\mr{G}^j = (\mr{V}^j,\mr{A}^j)$ for $j \in M$ to be a replica of $\mr{G}$ that represents the multiplication of flow variables $\vc{x}$ with $y_j$ during the aggregation procedure.
To simplify presentation, we use the same node and arc notation across all parallel networks.
For instance, for each node $v \in \mr{V}$ (resp. arc $a \in \mr{A}$), its replica $v$ belongs to $\mr{V}^j$ (resp. $a$ belongs to $\mr{A}^j$).
\end{definition}
\medskip
\begin{definition} \label{def:primal-parallel-3}
We say that a collection of subnetworks $\dot{\mr{G}}^k = (\dot{\mr{V}}^k,\dot{\mr{A}}^k)$, for $k=1,\dotsc,r$ of parallel networks $\{\mr{G}^j\}_{j \in M}$ are \textit{vertically connected through the connection nodes $C_v \subseteq \mr{V}$ and connection arcs $C_a \subseteq \mr{A}$} if there exists an ordering $s_1, s_2, \dotsc, s_r$ of indices $1,\dotsc,r$ such that for each $i=1, \dotsc, r-1$, there exists either (i) an arc $a \in C_a$ such that $a$ is incident to a node of $\dot{\mr{G}}^{s_{i+1}}$ and it is incident to a node of some subnetwork among $\dot{\mr{G}}^{s_{1}}, \dotsc, \dot{\mr{G}}^{s_{i}}$, or (ii) a set of nodes $v_1, \dotsc, v_p \in C_v$, each adjacent to the previous one, such that $v_1$ is adjacent to a node of $\dot{\mr{G}}^{s_{i+1}}$ and $v_{p}$ is adjacent to a node of some subnetwork among $\dot{\mr{G}}^{s_{1}}, \dotsc, \dot{\mr{G}}^{s_{i}}$.
In this definition, if node $v_1$ is in $\dot{\mr{G}}^{s_{i+1}}$, it counts as an adjacent node to the connection node $v_1$.
\end{definition}
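\medskip
One possible computational reading of Definition~\ref{def:primal-parallel-3} is sketched below (our own illustrative code; the encoding of subnetworks, connection nodes, and connection arcs is an assumption). It builds an auxiliary graph that links two subnetworks whenever criterion (i) or (ii) permits a hand-off between them, and then tests connectivity of that graph.
\begin{verbatim}
def vertically_connected(subnets, conn_nodes, conn_arcs, adj, arcs):
    """subnets: list of node sets; conn_nodes: set of nodes;
    conn_arcs: set of arc ids; adj: node -> set of neighbors in G;
    arcs: arc id -> (tail, head)."""
    if len(subnets) <= 1:
        return True
    nbrs = {('sub', i): set() for i in range(len(subnets))}
    nbrs.update({('cn', v): set() for v in conn_nodes})
    def link(p, q):
        nbrs[p].add(q); nbrs[q].add(p)
    touches = lambda i, v: v in subnets[i] or bool(adj[v] & subnets[i])
    for a in conn_arcs:              # criterion (i): shared connection arc
        inc = [i for i, s in enumerate(subnets)
               if arcs[a][0] in s or arcs[a][1] in s]
        for i in inc:
            for j in inc:
                if i < j:
                    link(('sub', i), ('sub', j))
    for v in conn_nodes:             # criterion (ii): connection-node chains
        for i in range(len(subnets)):
            if touches(i, v):
                link(('cn', v), ('sub', i))
        for u in conn_nodes:
            if u != v and u in adj[v]:
                link(('cn', v), ('cn', u))
    seen, stack = set(), [('sub', 0)]
    while stack:                     # search over the auxiliary graph
        w = stack.pop()
        if w in seen:
            continue
        seen.add(w)
        stack.extend(nbrs[w] - seen)
    return all(('sub', i) in seen for i in range(len(subnets)))

adj = {1: {2, 4, 5}, 2: {1, 3, 6}, 3: {2, 4, 7}, 4: {1, 3, 8},
       5: {1}, 6: {2}, 7: {3}, 8: {4}}
print(vertically_connected([{1, 2}, {2, 6}], {6}, set(), adj, {}))  # True
\end{verbatim}
The usage line mirrors the node sets that appear later in Example~\ref{ex:forest}.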
\begin{proposition} \label{prop:primal-forest}
Consider set $\mcl{S}$ with $\Xi$ that represents the network polytope corresponding to the network $\mr{G} = (\mr{V},\mr{A})$.
Let $\big[\mcl{I}_1,\dotsc,\mcl{I}_m,\bar{\mcl{I}} \big| \mcl{J},\bar{\mcl{J}}\big]$ be an EC\&R\xspace assignment
that leads to a non-trivial facet-defining inequality of $\mathop{\rm conv}(\mcl{S})$.
Assume that $\cup_{j \in M} \mcl{I}_j \neq \emptyset$.
For each $j \in M$, define $\widetilde{\mcl{I}}^j = \{i \in \mr{V} | i^{\pm} \in \mcl{I}_j \}$, $\widetilde{\mcl{I}} = \{i \in \mr{V} | i^{\pm} \in \bar{\mcl{I}} \}$, and $\widetilde{\mcl{J}} = \{i \in \mr{A} | i \in \mcl{J} \cup \bar{\mcl{J}} \}$.
Then, there exist forests $\bar{\mr{F}}^j$ in the parallel network $\mr{G}^j$ for $j \in M$, each composed of trees $\bar{\mr{T}}^j_k$ for $k \in \Gamma_j$, where $\Gamma_j$ is an index set, such that
\begin{itemize}
\item[(i)] the forest $\bar{\mr{F}}^j$ is composed of the nodes in $\widetilde{\mcl{I}}^j$ for all $j \in M$,
\item[(ii)] the collection of the trees $\bar{\mr{T}}^j_k$ for all $k \in \Gamma_j$ and all $j \in M$ are vertically connected through connection nodes $\widetilde{\mcl{I}}$ and connection arcs $\widetilde{\mcl{J}}$,
\item[(iii)] the collection of all nodes in $\bar{\mr{F}}^j$ for all $j \in M$ together with the connection nodes $\widetilde{\mcl{I}}$ form a tree in $\mr{G}$, which has at least one incident node to each connection arc in $\widetilde{\mcl{J}}$.
\end{itemize}
\end{proposition}
\proof{Proof.}
We show the result by proving conditions (i)--(iii).
First, we may assume that the given EC\&R\xspace assignment corresponds to class-$l^{\pm}$ for some $l \in K$.
Since $\big[\mcl{I}_1,\dotsc,\mcl{I}_m,\bar{\mcl{I}} \big| \mcl{J},\bar{\mcl{J}}\big]$ leads to a non-trivial facet-defining inequality of $\mathop{\rm conv}(\mcl{S})$, its corresponding dual weights in the aggregation should represent an extreme point of $\mcl{C}^{l}$ defined in \eqref{eq:Cl}.
This extreme point is associated with a basis in the coefficient matrix \eqref{eq:ECR-matrix-2}.
In this basis, the subset of the column block that contains $E^{^{\intercal}}$ in the row block $j$ represents the flow-balance constraints (multiplied with $y_j$) for the nodes $i \in \widetilde{\mcl{I}}^j$, which can be viewed as the selected nodes in the parallel network $\mr{G}^j$ for $j \in M$.
Further, the rows in the row block $j$ represent the flow variables (multiplied with $y_j$) for each arc in $\mr{G}$, which can be viewed as an arc in the parallel network $\mr{G}^j$.
We may reorder the columns and rows of this basis corresponding to each parallel network $\mr{G}^j$ to obtain a block-diagonal form composed of diagonal blocks $E^{^{\intercal}}_{j,k}$ for $k$ in an index set $\Gamma_j$.
It follows from this structure that the nodes corresponding to the columns of $E^{^{\intercal}}_{j,k}$ in the parallel network $\mr{G}^j$ are connected via arcs of $\mr{G}^j$ represented by the matrix rows.
Therefore, these nodes can form a tree $\bar{\mr{T}}^j_k$ for $k \in \Gamma_j$, the collection of which represents a forest $\bar{\mr{F}}^j$ for all $j \in M$, satisfying condition (i) of the proposition statement.
\smallskip
For condition (ii), considering the aforementioned diagonal block structure and representing the subset of column blocks of \eqref{eq:ECR-matrix-2} containing $\bar{I}^j$ and $I$ in the basis by one block with $\pm$ sign (as only one of them can be selected in the basis), we can write the resulting system of equations for the associated basic feasible solution as follows
\begin{gather} \label{eq:ECR-matrix-4}
\left[
\def1.2{1.2}
\begin{array}{c|c|c|c||c||c||ccccc||ccccc||c}
\begin{array}{ccc}
E^{^{\intercal}}_{1,1} & 0 & 0\\
0 & \ddots & 0 \\
0 & 0 & E^{^{\intercal}}_{1,|\Gamma_1|}
\end{array} & \vc{0} \, & \, \dotsc \, & \, \vc{0} \, & \, \, -E^{^{\intercal}} \, & \pm I & \, \pm I \, & \vc{0} \, & \, \dotsc \, & \, \vc{0} \, & \, \vc{0} \, & \, \pm\bar{I}^1 & \, \vc{0} \, & \, \dotsc \, & \, \vc{0} \, & \, \vc{0} \, & \, C_1 \\
\hline
\, \vc{0} \, & \begin{array}{ccc}
E^{^{\intercal}}_{2,1} & 0 & 0\\
0 & \ddots & 0 \\
0 & 0 & E^{^{\intercal}}_{2,|\Gamma_2|}
\end{array} & \, \dotsc \, & \, \vc{0} \, & \, -E^{^{\intercal}} \, & \, \pm I & \vc{0} \, & \, \pm I \, & \, \dotsc \, & \, \vc{0} \, & \, \vc{0} \, & \, \vc{0} \, & \pm\bar{I}^2 & \, \dotsc \, & \, \vc{0} \, & \, \vc{0} \, & \, C_2 \\
\hline
\, \vdots \, & \, \vdots \, & \, \ddots \, & \, \vdots \, & \, \vdots \, & \vdots \, & \, \vdots \, & \, \vdots \, & \, \ddots \, & \, \vdots & \, \vdots & \, \vdots \, & \, \vdots \, & \, \ddots \, & \, \vdots \, & \, \vdots & \, \vdots \\
\hline
\, \vc{0} \, & \, \vc{0} \, & \, \dotsc \, & \begin{array}{ccc}
E^{^{\intercal}}_{m,1} & 0 & 0\\
0 & \ddots & 0 \\
0 & 0 & E^{^{\intercal}}_{m,|\Gamma_m|}
\end{array} & \, -E^{^{\intercal}} \, & \, \pm I \, & \vc{0} \, & \, \vc{0} \, & \, \dotsc \, & \, \pm I \, & \vc{0} \, & \vc{0} \, & \, \vc{0} \, & \, \dotsc \, & \pm\bar{I}^m & \, \vc{0} & \, C_m \\
\hline
\hline
\, \vc{0} \, & \, \vc{0} \, & \, \dotsc \, & \, \vc{0} \, & \, -E^{^{\intercal}} \, & \, \pm I \, & \vc{0} \, & \, \vc{0} \, & \, \dotsc \, & \vc{0} & \, \pm I \, & \vc{0} \, & \, \vc{0} \, & \, \dotsc \, & \, \vc{0} & \pm\bar{I}^{1,\dotsc,m} & \, C_{m+1} \\
\hline
\hline
\, \vc{0} \, & \, \vc{0} \, & \, \dotsc \, & \vc{0} & \, \vc{0} \, & \, \vc{0} \, & \vc{0} \, & \, \vc{0} \, & \, \dotsc \, & \, \vc{0} & \, \vc{0} \, & \vc{0} \, & \, \vc{0} \, & \, \dotsc \, & \vc{0} & \vc{0} & \, C_{m+2}
\end{array}
\right],
\end{gather}
where the last column block in the basis corresponds to the constraints that have weights $0$ in the basic feasible solution and are added to complete the basis, and where the last row block represents all bilinear terms that do not appear in any constraints during aggregation.
Further, the row block next to the last row corresponds to the bilinear terms that appear during aggregation but not in any of the selected flow-balance constraints; the matrix $\pm\bar{I}^{1,\dotsc,m}$ denotes the bilinear constraints in $\mcl{S}$ that contain these bilinear terms and could be used during the relaxation step.
In the above basis, the column block that contains $-E^{^{\intercal}}$ represents the flow-balance constraints (multiplied with $1-\sum_{j \in M}y_j$) corresponding to the nodes in $\widetilde{\mcl{I}}$.
Similarly, the column block that contains $\pm I$ in all rows represents the bound constraints on the flow variables (multiplied with $1-\sum_{j \in M}y_j$) corresponding to the arcs in $\widetilde{\mcl{J}}$.
We refer to the column group composed of the columns of $E^{^{\intercal}}_{j,k}$ for any $j \in M$ and $k \in \Gamma_j$ as the column group representing the nodes of the tree $\bar{\mr{T}}^j_k$.
It is clear from the diagonal structure of the submatrix containing $E^{^{\intercal}}_{j,k}$ that the column groups representing the nodes of $\bar{\mr{T}}^j_k$ are all \textit{arc disjoint}, i.e., there are no two columns from different groups that have a nonzero element in the same row.
We claim that there exists an ordering $(s_1,r_1), (s_2, r_2), \dotsc, (s_h, r_h)$ of the pairs $(j,k)$ for all $j \in M$ and $k \in \Gamma_j$ such that for each column group representing the nodes of $\bar{\mr{T}}^{s_i}_{r_i}$, for all $i = 2, \dotsc, h$, there exists either (i) a column from the column blocks corresponding to $\widetilde{\mcl{J}}$ that has a nonzero element in a row corresponding to a row of $E^{^{\intercal}}_{s_i,r_i}$ and a nonzero element in a row corresponding to a row of $E^{^{\intercal}}_{s_t,r_t}$ for some $t \in \{1, \dotsc, i-1\}$, or (ii) a sequence of columns in the column block corresponding to $\widetilde{\mcl{I}}$, each sharing a nonzero element in a common row with the previous one, such that the first column has a nonzero element in a row corresponding to a row of $E^{^{\intercal}}_{s_i,r_i}$ and the last column has a nonzero element in a row corresponding to a row of $E^{^{\intercal}}_{s_t,r_t}$ for some $t \in \{1, \dotsc, i-1\}$.
Assume by contradiction that no such ordering exists.
Therefore, we can partition the rows in the first $m+1$ row blocks of \eqref{eq:ECR-matrix-4} into two groups such that each column among those of $E^{^{\intercal}}_{j,k}$ for all $j \in M$ and $k \in \Gamma_j$, and those corresponding to $\widetilde{\mcl{I}}$ and $\widetilde{\mcl{J}}$, has all its nonzero elements within exactly one of the two groups.
In this case, considering that the column blocks composed of $\pm I$ and those composed of $\pm\bar{I}^j$ have exactly one nonzero element in the basis, we can compactly rewrite the system of equations for the basic feasible solution as follows:
\begin{equation}
\left[
\def1.2{1.2}
\begin{array}{ccc|ccc|c}
\pm E_1 & \pm I_1 & \pm\bar{I}_1 \, & \, 0 & 0 & 0 \, & \, \bar{C}_1 \\
\hline
0 & 0 & 0 \, & \pm E_2 & \pm I_2 & \pm\bar{I}_2 \, & \, \bar{C}_2 \\
\hline
\hline
0 & 0 & 0 \, & \, 0 & 0 & 0 \, & \, C_{m+2} \\
\end{array}
\right]
\left[
\begin{array}{c}
\vc{+} \\
\hline
\vc{+} \\
\hline
\vc{0}
\end{array}
\right]
=
\left[
\begin{array}{c}
\pm \vc{e}^{l} \\
\hline
\vc{0} \\
\hline
\hline
\vc{0}
\end{array}
\right], \label{eq:matrix-forest}
\end{equation}
where the first and second row blocks respectively correspond to the first and second partitions discussed above.
In \eqref{eq:matrix-forest}, $\vc{e}^{l}$ is a unit vector whose elements are all zero except that corresponding to the row representing $y_{j'}x_{i'}$ for some $i',j'$ that satisfy $A^l_{j'i'} = 1$, which is equal to $1$.
We may assume without loss of generality that the row containing this nonzero element in $\vc{e}^l$ belongs to the first row block.
All these columns except the ones in the last column block have positive weights because the associated constraints are assumed to be used in the aggregation.
These weights are denoted by $\vc{+}$ in the first two row blocks of the vector multiplied with this matrix.
It is now easy to verify that the linear combination of the columns in the second column block of the basis matrix with positive weights yields the zero vector.
This shows that the columns are linearly dependent, a contradiction.
Now, consider the ordering $(s_1,r_1), (s_2, r_2), \dotsc, (s_h, r_h)$ described above.
It follows that for each $i = 2, \dotsc, h$, there exists either (i) a column from the column block corresponding to $\widetilde{\mcl{J}}$ that has a nonzero element in a row corresponding to a row of $E^{^{\intercal}}_{s_i,r_i}$ and a nonzero element in a row corresponding to a row of $E^{^{\intercal}}_{s_t,r_t}$ for some $t \in \{1, \dotsc, i-1\}$, or (ii) a sequence of columns in the column block corresponding to $\widetilde{\mcl{I}}$, each sharing a nonzero element in a common row with the previous one, such that the first column has a nonzero element in a row corresponding to a row of $E^{^{\intercal}}_{s_i,r_i}$ and the last column has a nonzero element in a row corresponding to a row of $E^{^{\intercal}}_{s_t,r_t}$ for some $t \in \{1, \dotsc, i-1\}$.
First, consider the case where (i) in the above either-or argument holds for a column $k \in \widetilde{\mcl{J}}$.
This column has nonzero elements in the rows representing arc $k$ in all parallel networks $\mr{G}^j$ for $j \in M$.
Matrix $E^{^{\intercal}}_{s_i,r_i}$ has a nonzero element in row $k$ if the tree $\bar{\mr{T}}^{s_i}_{r_i}$ has a node incident to arc $k$.
Therefore, we conclude that $\bar{\mr{T}}^{s_i}_{r_i}$ and $\bar{\mr{T}}^{s_t}_{r_t}$ each have at least one node incident to $k$, satisfying criterion (i) in Definition~\ref{def:primal-parallel-3}.
Second, consider the case where (ii) in the above either-or argument holds for a sequence $k_1, \dotsc, k_p$ of the nodes corresponding to $\widetilde{\mcl{I}}$, where $p \leq |\widetilde{\mcl{I}}|$.
Any such column, say $k_1$, has nonzero elements in the rows representing the arcs that are incident to node $k_1$ in all parallel networks $\mr{G}^j$ for $j \in M$.
Therefore, since each column contains a nonzero element in a common row with the previous one, the nodes corresponding to these columns must be adjacent to one another in $\mr{G}$.
Further, since the column corresponding to $k_1$ has a nonzero element in a row corresponding to a row of $E^{^{\intercal}}_{s_i,r_i}$, we conclude that $k_1$ is adjacent to a node in $\bar{\mr{T}}^{s_i}_{r_i}$, which means that either $k_1$ belongs to this tree, or it is adjacent to a node of this tree.
A similar argument can be made about node $k_p$ and the tree $\bar{\mr{T}}^{s_t}_{r_t}$.
This satisfies criterion (ii) of Definition~\ref{def:primal-parallel-3}.
This proves condition (ii) of the proposition statement due to Definition~\ref{def:primal-parallel-3}.
\smallskip
For condition (iii), we show there exists a sequence $v_1, \dotsc, v_q$ of all the nodes in $\cup_{j \in M}\widetilde{\mcl{I}}^j \cup \widetilde{\mcl{I}}$, where $q = |\cup_{j \in M}\widetilde{\mcl{I}}^j \cup \widetilde{\mcl{I}}|$ and $v_1 \in \cup_{j \in M}\widetilde{\mcl{I}}^j$, such that every node $v_i$ is adjacent to at least one node in $v_1, \dotsc, v_{i-1}$ for every $i = 2, \dotsc, q$.
We may assume that $v_1$ is incident to arc $i'$ defined previously that is associated with the base equality $l$.
For other cases, where $i'$ is not incident to any nodes in $\cup_{j \in M}\widetilde{\mcl{I}}^j$, the argument will be similar with an adjustment of the partitions described below.
It follows from the previously proven conditions (i) and (ii) of the problem statement, as well as Definition~\ref{def:primal-parallel-3} that there exists a sequence $\bar{v}_1, \dotsc, \bar{v}_p$ of all the nodes in $\cup_{j \in M}\widetilde{\mcl{I}}^j \cup \widehat{\mcl{I}}$ for some $\widehat{\mcl{I}} \subseteq \widetilde{\mcl{I}}$ such that $\bar{v}_i$ is adjacent to at least one node in $\bar{v}_1, \dotsc, \bar{v}_{i-1}$ for every $i = 2, \dotsc, p$.
We claim that there exists a sequence $\hat{v}_1, \dotsc, \hat{v}_{q-p}$ of all the nodes in $\widetilde{\mcl{I}} \setminus \widehat{\mcl{I}}$ such that $\hat{v}_i$ is adjacent to at least one node in $\bar{v}_1, \dotsc, \bar{v}_{p}, \hat{v}_1, \dotsc, \hat{v}_{i-1}$ for every $i = 1, \dotsc, q-p$.
Assume by contradiction that no such sequence exists.
Then, we can use an argument similar to that of case (ii) above to partition the rows of \eqref{eq:ECR-matrix-4} in such a way that all columns corresponding to the nodes in $\bar{v}_1, \dotsc, \bar{v}_{p}, \hat{v}_1, \dotsc, \hat{v}_{t}$ have all their nonzero elements in the first partition, and the columns corresponding to all the remaining nodes $\hat{v}_{t+1}, \dotsc, \hat{v}_{q-p}$ have all their nonzero elements in the second partition.
Then, a similar argument to that following \eqref{eq:matrix-forest} will show that the columns in the second group are linearly dependent, a contradiction.
The case for the second part of the statement regarding the connection arcs in $\widetilde{\mcl{J}}$ can be shown similarly.
\Halmos
\endproof
\medskip
Although Proposition~\ref{prop:primal-forest} shows that an EC\&R\xspace assignment corresponds to a forest structure in the parallel networks created from the underlying network in $\mcl{S}$, the converse result---similar to the one presented for set $\mcl{S}^1$---does not hold here.
More specifically, a forest structure that satisfies the conditions of Proposition~\ref{prop:primal-forest} does not necessarily lead to a valid EC\&R\xspace assignment, and
even when it does, the calculation of the aggregation weights to satisfy the EC\&R\xspace conditions is not as straightforward; see Example~\ref{ex:forest} below.
\medskip
\begin{example} \label{ex:forest}
Consider set $\mcl{S}$ where $\Xi$ describes the network polytope corresponding to network $\mr{G} = (\mr{V},\mr{A})$ in Figure~\ref{fig:primal-tree}, and $\Delta = \{(y_1,y_2) \in {\mathbb R}^2_+ | y_1 + y_2 \leq 1\}$.
Select class-$l^+$ corresponding to the base equality $y_1x_{1,5} - z_l = 0$.
In the parallel network $\mr{G}^1$, we select the forest $\bar{\mr{F}}^1$ composed of the tree $\bar{\mr{T}}^1_1$ with the node set $\widetilde{\mcl{I}}^1 = \{1, 2\}$.
In the parallel network $\mr{G}^2$, we select the forest $\bar{\mr{F}}^2$ composed of the tree $\bar{\mr{T}}^2_1$ with the node set $\widetilde{\mcl{I}}^2 = \{2, 6\}$.
We select the connection node set $\widetilde{\mcl{I}} = \{6\}$, and the connection arc set $\widetilde{\mcl{J}} = \emptyset$.
It is easy to verify that these sets satisfy conditions (i)--(iii) of Proposition~\ref{prop:primal-forest}.
However, we cannot find aggregation weights for the flow-balance constraints corresponding to the nodes in the above sets that yield the cancellation of at least $5$ bilinear terms.
As a result, there is no EC\&R\xspace assignment that matches the considered forest structure.
\end{example}
\medskip
A common way to circumvent the above-mentioned difficulty in obtaining valid EC\&R\xspace assignments and their aggregation weights is to aim at a special class of EC\&R\xspace assignments with more specific attributes that can be used to strengthen the connection between an EC\&R\xspace assignment and its corresponding network structure.
An important example of such a class is the class of EC\&R\xspace assignments that are obtained through \textit{pairwise cancellation}.
In this procedure, each cancellation of bilinear terms is obtained by aggregating two constraints.
This definition includes the bilinear terms that are \textit{canceled} during the relaxation step, i.e., the constraint used to relax the remaining bilinear terms counts as one of the two constraints in the preceding statement.
Following this procedure, the aggregation weight for each constraint can be determined successively as the constraint is added to the assignment to ensure the satisfaction of the EC\&R\xspace conditions.
The next result shows that the aggregation weights for all constraints used in the EC\&R\xspace assignments obtained through pairwise cancellation are $1$.
\begin{proposition} \label{prop:pairwise ECR}
Consider set $\mcl{S}$ where $\Xi$ describes the network polytope corresponding to network $\mr{G} = (\mr{V},\mr{A})$.
Let $\big[\mcl{I}_1,\dotsc,\mcl{I}_m,\bar{\mcl{I}} \big| \mcl{J},\bar{\mcl{J}}\big]$ be an EC\&R\xspace assignment for class-$l^{\pm}$ for some $l \in K$ corresponding to pairwise cancellation.
Then, the aggregation weights for all constraints used in this assignment are $1$.
\end{proposition}
\proof{Proof.}
Let $\bar{\vc{\pi}}^l$ be the solution vector for the system of equations \eqref{eq:Cl} of $\mcl{C}^l$ corresponding to the aggregation weights of the given EC\&R\xspace assignment.
We may rewrite this system of equations as follows by rearranging its rows and columns.
\begin{equation}
\left[
\def1.2{1.5}
\begin{array}{c|c|c}
P_1 \, & \, 0 \, & \, C_1 \\
\hline
P_2 \, & \, \pm I \, & \, C_2 \\
\hline
0 \, & \, 0 \, & \, C_3 \\
\end{array}
\right]
\left[
\def1.2{1.5}
\begin{array}{c}
\bar{\vc{\pi}}^l_1 \\
\hline
\bar{\vc{\pi}}^l_2 \\
\hline
\vc{0}
\end{array}
\right]
=
\left[
\def1.2{1.5}
\begin{array}{c}
\pm \vc{e}^{l} \\
\hline
\vc{0} \\
\hline
\vc{0}
\end{array}
\right]. \label{eq:matrix-pairwise}
\end{equation}
In the coefficient matrix of \eqref{eq:matrix-pairwise}, the first row block represents the bilinear terms that are canceled during aggregation.
The second row block corresponds to the remaining bilinear terms in the aggregated inequality that are relaxed in the last step of the EC\&R\xspace procedure.
The last row block represents all the bilinear terms that are not involved in the aggregation procedure.
Further, in this matrix, the first column block corresponds to the constraints used in the aggregation, whose aggregation weights in the solution vector $\bar{\vc{\pi}}^l$ are denoted by $\bar{\vc{\pi}}^l_1$.
The second column block corresponds to the variable bound constraints in $\Xi$ as well as the bilinear constraints in $\mcl{S}$ used in the EC\&R\xspace procedure to relax the remaining bilinear terms in the aggregated inequality, whose weights in the solution vector $\bar{\vc{\pi}}^l$ are denoted by $\bar{\vc{\pi}}^l_2$.
The last column block represents all other constraints that are not used during the EC\&R\xspace procedure and their weights in the solution vector $\bar{\vc{\pi}}^l$ are zero.
Finally, $\vc{e}^{l}$ on the right-hand-side of this system is a unit vector whose elements are all zeros except that corresponding to the row representing $y_{j'}x_{i'}$ for some $i',j'$ that satisfy $A^l_{j'i'} = 1$, which is equal to $1$.
It is clear that this row belongs to the first row block since according to the EC\&R\xspace condition (C2), the bilinear term in the base equality $l$ must be canceled during the aggregation procedure when the assignments are not empty.
It follows from the equation \eqref{eq:matrix-pairwise} that $P_1 \bar{\vc{\pi}}^l_1 = \pm \vc{e}^{l}$.
Next, we analyze the structure of $P_1$.
Note that all elements of $P_1$ belong to $\{0, -1, 1\}$ because it is a submatrix of \eqref{eq:ECR-matrix-2} that represents the coefficients of the constraints in $\mcl{S}$.
Considering that the columns of $P_1$ represent the constraints used in the aggregation except the base equality (as that constraint has been moved to the right-hand-side to form $\mcl{C}^l$), and that the rows of $P_1$ correspond to the canceled bilinear terms during aggregation, according to condition (C1) of EC\&R\xspace, we conclude that the number of rows of $P_1$ is no smaller than the number of columns of $P_1$.
Further, it follows from condition (C2) of EC\&R\xspace that each constraint used in the aggregation (after being multiplied with its corresponding weight) will have at least one bilinear term canceled, which implies that each column of $P_1$ has at least one nonzero element.
The assumption of pairwise cancellation for the given EC\&R\xspace assignment implies that each canceled bilinear term corresponding to a row of $P_1$ is obtained through aggregation of exactly two constraints.
As a result, each row of $P_1$ must contain exactly two nonzero elements, except for the row corresponding to the bilinear term $y_{j'}x_{i'}$ that appears in the base equality $y_{j'}x_{i'} - z_l = 0$ which must have only one nonzero element because the weight of the base equality has been fixed at $\pm 1$ and its column has been moved to the right-hand-side of the equation captured by $\pm \vc{e}^{l}$; see the derivation of \eqref{eq:Cl}.
Therefore, we may rearrange the rows and columns of the matrices in this equation to obtain the form:
\begin{equation}
\left[
\def1.2{1.2}
\begin{array}{c|c|c|c}
\pm 1 & 0 & \cdots & 0 \\
\hline
\{0, \pm 1\} & \pm 1 & \cdots & 0 \\
\hline
\vdots & \vdots & \ddots & \vdots \\
\hline
\{0, \pm 1\} & \{0, \pm 1\} & \cdots & \pm 1 \\
\hline
\{0, \pm 1\} & \{0, \pm 1\} & \cdots & \{0, \pm 1\} \\
\hline
\vdots & \vdots & \ddots & \vdots \\
\hline
\{0, \pm 1\} & \{0, \pm 1\} & \cdots & \{0, \pm 1\}
\end{array}
\right]
\bar{\bar{\vc{\pi}}}^l_1 = \pm \vc{e}^{1}, \label{eq:matrix-pairwise-2}
\end{equation}
where $\bar{\bar{\vc{\pi}}}^l_1$ is composed of the elements of $\bar{\vc{\pi}}^l_1$ that are rearranged to match the rearrangement of columns of $P_1$ in the above form, and where the first row corresponds to the bilinear term $y_{j'}x_{i'}$ so that the right-hand-side vector becomes $\pm \vc{e}^1$.
It follows from the above discussion about the structure of $P_1$ and equation \eqref{eq:matrix-pairwise-2} that all components of $\bar{\bar{\vc{\pi}}}^l_1$ must be equal to 1 as they need to be nonnegative.
Finally, for the equations in the second row block of \eqref{eq:matrix-pairwise}, we have that $P_2 \bar{\vc{\pi}}^l_1 \pm I \bar{\vc{\pi}}^l_2 = \vc{0}$.
It follows from the pairwise cancellation assumption that each row of $P_2$ contains exactly one nonzero element, as it corresponds to a remaining bilinear term in the aggregated inequality.
Since all of the elements in $P_2$ belong to $\{0, -1, 1\}$, and all the components in $\bar{\vc{\pi}}^l_1$ are equal to 1, it must hold that $\bar{\vc{\pi}}^l_2 = \vc{1}$.
\Halmos
\endproof
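\medskip
The forward-substitution argument in this proof can be mimicked computationally. The sketch below (plain Python on hypothetical data exhibiting the triangular pattern of \eqref{eq:matrix-pairwise-2}) recovers the all-ones weight vector.
\begin{verbatim}
def pairwise_weights(rows, rhs):
    """Forward substitution on a system whose rows have entries in
    {0, +1, -1} and introduce at most one new column each, as in the
    rearranged matrix of the proof; returns the forced dual weights."""
    pi = {}
    for row, b in zip(rows, rhs):
        known = sum(c * pi[j] for j, c in row.items() if j in pi)
        new = [(j, c) for j, c in row.items() if j not in pi]
        if not new:
            continue              # fully determined row: consistency only
        j, c = new[0]
        pi[j] = (b - known) // c  # +-1 pivot, so the weight is integral
        assert pi[j] >= 0, "EC&R aggregation weights must be nonnegative"
    return pi

# unit-triangular pattern: two nonzeros per row below the first row
rows = [{0: 1}, {0: -1, 1: 1}, {1: 1, 2: -1}]
print(pairwise_weights(rows, rhs=[1, 0, 0]))   # -> {0: 1, 1: 1, 2: 1}
\end{verbatim}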
\medskip
\begin{remark} \label{rem:pairwise-primal}
For the case with $m=1$, as described in the proof of Proposition~\ref{prop:primal-tree}, there are two (resp. three) possible scenarios for constraints that could be used in the aggregation to cancel a bilinear term $y_1x_i$ for any $i \in N \setminus \{l\}$ (resp. for $i = l$).
Since the aggregation weights for all constraints are $1$ in this case (see Proposition~\ref{prop:primal-weight}), we conclude that each cancellation is obtained through aggregation of exactly two constraints.
Further, any remaining bilinear term in the aggregated inequality corresponds to an arc that is incident to exactly one node of the tree associated with the EC\&R\xspace assignment (see Proposition~\ref{prop:primal-tree}), which implies that each such bilinear term appears in exactly one constraint during aggregation.
As a result, \textit{all} EC\&R\xspace inequalities for the case with $m = 1$ can be obtained through pairwise cancellation.
\end{remark}
\medskip
Although the EC\&R\xspace inequalities obtained through pairwise cancellation do not necessarily produce a full convex hull description for $\mcl{S}$, the result of Proposition~\ref{prop:pairwise ECR} provides three important advantages: (i) it generalizes the convexification results for the case with $m = 1$ as described in Remark~\ref{rem:pairwise-primal}; (ii) it can produce inequalities stronger than those obtained by applying Theorem~\ref{thm:primal-tree-converse} to relaxations of $\mcl{S}$ that contain one $y$ variable at a time, because it considers all the $y$ variables in their original simplex set $\Delta_m$; and (iii) it enables us to derive explicit EC\&R\xspace inequalities cognizant of the underlying network structure without the need to search for the aggregation weights that satisfy the EC\&R\xspace conditions as will be shown in the sequel.
These advantages are corroborated by the computational experiments presented in Section~\ref{sec:computation}.
The next proposition shows that the pairwise cancellation property provides more information about the forest structure presented in Proposition~\ref{prop:primal-forest}.
\begin{proposition} \label{prop:primal-forest-pairwise}
Consider the setting of Proposition~\ref{prop:primal-forest}, and assume that the EC\&R\xspace assignment $\big[\mcl{I}_1,\dotsc,\mcl{I}_m,\bar{\mcl{I}} \big| \mcl{J},\bar{\mcl{J}}\big]$ has the pairwise cancellation property.
Further, let this assignment correspond to a class-$l^{\pm}$ for some $l \in K$ such that $A^l_{j'i'} = 1$ for some $(i',j') \in N \times M$.
Then, in addition to the outcome of Proposition~\ref{prop:primal-forest}, we have that
\begin{itemize}
\item[(i)] arc $i'$ is either in $\widetilde{\mcl{J}}$ or incident to exactly one node in $\widetilde{\mcl{I}} \cup \widetilde{\mcl{I}}^{j'}$, but not both,
\item[(ii)] each arc in $\widetilde{\mcl{J}}$ is incident to at most one node in $\widetilde{\mcl{I}} \cup \widetilde{\mcl{I}}^{j}$ for each $j \in M$,
\item[(iii)] each node in $\widetilde{\mcl{I}} \cap \widetilde{\mcl{I}}^j$, for $j \in M \setminus \{j'\}$ (resp. $j = j'$), is adjacent to no other node in that set and incident to no arc in $\widetilde{\mcl{J}}$ (resp. $\widetilde{\mcl{J}} \cup \{i'\}$).
\end{itemize}
\end{proposition}
\proof{Proof.}
For case (i), it follows from condition (C2) of the EC\&R\xspace procedure that the bilinear term $y_{j'}x_{i'}$ in the base equality must be canceled during aggregation.
Further, according to the pairwise cancellation property, there must be exactly one constraint in the aggregation in addition to the base equality that would contain $y_{j'}x_{i'}$ after multiplication with the corresponding dual weight.
There are two possible scenarios.
The first possibility is that the bound constraints for $x_{i'}$ are used in the aggregation, which implies that arc $i'$ is a connection arc and belongs to $\widetilde{\mcl{J}}$.
The second possibility is that the flow-balance constraint at either node $t(i')$ or $h(i')$, but not both, is used in the aggregation, which implies that $i'$ is incident to exactly one node in $\widetilde{\mcl{I}} \cup \widetilde{\mcl{I}}^{j'}$.
\smallskip
For case (ii), consider an arc $i \in \widetilde{\mcl{J}}$.
Therefore, one of the bound constraints $x_i \geq 0$ or $u_i-x_i \geq 0$ is used in the aggregation with weight $1-\sum_{j \in M} y_j$.
It follows from the pairwise cancellation property that, for each $j \in M$, there can be at most one additional constraint in the aggregation that contains a term $y_j x_i$.
The only possibility for such a constraint is the flow-balance constraint at either node $t(i)$ or $h(i)$, but not both.
We conclude that $i$ is incident to at most one node in $\widetilde{\mcl{I}} \cup \widetilde{\mcl{I}}^{j}$ for each $j \in M$.
\smallskip
For case (iii), consider a node $i \in \widetilde{\mcl{I}} \cap \widetilde{\mcl{I}}^j$ for some $j \in M \setminus \{j'\}$.
Therefore, the aggregation contains the (positive or negative) flow-balance constraint at node $i$ multiplied with $1-\sum_{j \in M} y_j$ due to $i \in \widetilde{\mcl{I}}$, together with the (positive or negative) flow-balance constraint at node $i$ multiplied with $y_j$ due to $i \in \widetilde{\mcl{I}}^j$.
Therefore, the bilinear terms $y_j x_k$ for all $k \in \delta^+(i) \cup \delta^-(i)$ already appear in two constraints, which implies that they cannot appear in any other constraints during aggregation.
As a result, arc $k$ cannot be included in $\widetilde{\mcl{J}}$, i.e., the bound constraints for the variable $x_k$ cannot be used in the aggregation.
Similarly, the flow-balance constraint at node $h(k)$ for any $k \in \delta^+(i)$ and at node $t(k)$ for any $k \in \delta^-(i)$ cannot be included in the aggregation, which implies that $i$ cannot be adjacent to any other nodes in $\widetilde{\mcl{I}} \cap \widetilde{\mcl{I}}^j$.
The proof for the case where $j = j'$ follows from a similar argument.
\Halmos
\endproof
\medskip
As noted earlier, an important consequence of the pairwise cancellation property is that it enables the converse statement to Propositions~\ref{prop:primal-forest} and~\ref{prop:primal-forest-pairwise}, which identifies EC\&R\xspace assignments based on a special forest structure in the underlying network.
\begin{theorem} \label{thm:primal-forest-converse}
Consider set $\mcl{S}$ with $\Xi$ that represents the network polytope corresponding to the network $\mr{G} = (\mr{V},\mr{A})$.
Let $\bar{\mr{F}}^j$, for each $j \in M$, be a forest in the parallel network $\mr{G}^j$, composed of trees $\bar{\mr{T}}^j_k$ for $k \in \Gamma_j$, where $\Gamma_j$ is an index set, that satisfies the conditions (i)--(iii) of Propositions~\ref{prop:primal-forest} and \ref{prop:primal-forest-pairwise} with the corresponding node sets $\widetilde{\mcl{I}}^j$, the connection node set $\widetilde{\mcl{I}}$, the connection arc set $\widetilde{\mcl{J}}$, and the class $l$.
Then, the assignment $\big[\mcl{I}_1,\dotsc,\mcl{I}_m,\bar{\mcl{I}} \big| \mcl{J},\bar{\mcl{J}}\big]$ obtained from Algorithm~\ref{alg:primal-ECR} is an EC\&R\xspace assignment for class-$l^{\pm}$.
\end{theorem}
\proof{Proof.}
First, we argue that conditions (i)--(iii) of Proposition~\ref{prop:primal-forest} imply that each member of the sets $\widetilde{\mcl{I}}$, $\widetilde{\mcl{J}}$, and $\widetilde{\mcl{I}}^j$ for $j \in M$ receives a label assignment through the steps of Algorithm~\ref{alg:primal-ECR}, i.e., the member is added to set $\mt{D}$ defined in that algorithm.
It follows from condition (i) of Proposition~\ref{prop:primal-forest} that once a member of the node subset in $\widetilde{\mcl{I}}^j$, for each $j \in M$, that represents a tree $\bar{\mr{T}}^j_k$ with $k \in \Gamma_j$ is added to $\mt{D}$, all the remaining nodes in $\bar{\mr{T}}^j_k$ are eventually added to $\mt{D}$ because of the loop in lines 11--13 in the algorithm, as all nodes of the tree are connected.
Condition (ii) of Proposition~\ref{prop:primal-forest} implies that all trees $\bar{\mr{T}}^j_k$ for $k \in \Gamma_j$ and $j \in M$ are connected through an appropriate sequence of the tree nodes, the connection nodes in $\widetilde{\mcl{I}}$, and the connection arcs in $\widetilde{\mcl{J}}$.
Consequently, the loops in lines 10--44 of the algorithm ensure that each member of these sets is visited following that sequence and is added to $\mt{D}$.
Further, condition (iii) of Proposition~\ref{prop:primal-forest} suggests that each member in the sets $\widetilde{\mcl{I}}$ and $\widetilde{\mcl{J}}$ is connected to the subgraph composed of the set of all tree nodes in $\widetilde{\mcl{I}}^j$ and their associated connection nodes and connection arcs.
As a result, there exists a sequence of adjacent nodes and arcs leading to each member of $\widetilde{\mcl{I}}$ and $\widetilde{\mcl{J}}$, and hence each such member is added to $\mt{D}$.
\smallskip
Second, we show that each bilinear term created during the aggregation can appear in at most two constraints.
There are four cases.
In case 1, consider the bilinear term $y_{j'}x_{i'}$ that appears in the base equality $l$.
Condition (i) of Proposition~\ref{prop:primal-forest-pairwise} implies that this bilinear term can appear in exactly one other constraint, which could be either the bound constraint on variable $x_{i'}$ (which would be included in $\widetilde{\mcl{J}}$) or the flow-balance constraint at one of the incident nodes to $i'$ (which would be included in $\widetilde{\mcl{I}}^{j'} \cup \widetilde{\mcl{I}}$).
In case 2, consider a bilinear term $y_j x_i$, for some $j \in M$, that appears in the bound constraint on variable $x_i$ for any arc $i \in \widetilde{\mcl{J}}$.
Condition (ii) of Proposition~\ref{prop:primal-forest-pairwise} implies that this bilinear term can appear in at most one other constraint, which could be the flow-balance constraint at one of the incident nodes to $i$ (which would be included in $\widetilde{\mcl{I}}^{j} \cup \widetilde{\mcl{I}}$).
In case 3, consider a bilinear term $y_j x_i$, for some $j \in M$, that appears in the flow-balance constraint at an incident node of arc $i$ after being multiplied with both $y_j$ (i.e., the node being in $\widetilde{\mcl{I}}^j$) and $1-\sum_{j \in M}y_j$ (i.e., the node being in $\widetilde{\mcl{I}}$).
Condition (iii) of Proposition~\ref{prop:primal-forest-pairwise} implies that this bilinear term cannot appear in any other constraints during aggregation.
In case 4, consider a bilinear term $y_j x_i$, for some $j \in M$, that appears in the flow-balance constraint at an incident node of arc $i$ that is not in $\widetilde{\mcl{I}}^j \cap \widetilde{\mcl{I}}$.
It follows from condition (iii) of Proposition~\ref{prop:primal-forest} that this bilinear term can appear in at most one other constraint because of the tree structure of all the nodes in $\cup_{j \in M} \widetilde{\mcl{I}}^j \cup \widetilde{\mcl{I}}$.
\smallskip
Third, we discuss that, for any $k$ newly added to $\mt{D}$, its label value has been determined through lines 10--44 of Algorithm~\ref{alg:primal-ECR} in such a way that, for a member $i \in \cup_{j \in M} \widetilde{\mcl{I}}^j \cup \widetilde{\mcl{I}} \cup \widetilde{\mcl{J}}$ that was previously added to $\mt{D}$ and is adjacent or incident to $k$, the bilinear term that commonly appears in the weighted constraints corresponding to both $i$ and $k$ is canceled.
For instance, consider the case where $i \in \widetilde{\mcl{I}}$ (line 22 of the algorithm) and $k \in \widetilde{\mcl{I}}^j$ for some $j \in M$ is an adjacent node to $i$ (line 26 of the algorithm).
Assume that $\mt{l}(i) = +$, and that arc $p \in \mr{A}$ is such that $t(p) = i$ and $h(p) = k$.
It follows from line 27 of the algorithm that $\mt{l}(k) = -$.
Considering the assignment rule in lines 48 and 52 of the algorithm, we should aggregate the constraint $\sum_{r \in \delta^+(i) \setminus \{p\}} x_r - \sum_{r \in \delta^-(i)} x_r + x_p \geq f_i$ with weight $1 - \sum_{j \in M} y_j$, together with the constraint $-\sum_{r \in \delta^+(k)} x_r + \sum_{r \in \delta^-(k) \setminus \{p\}} x_r + x_p \geq - f_k$ with weight $y_j$, which results in the cancellation of the bilinear term $y_j x_p$.
A similar argument can be made for any other possible case in Algorithm~\ref{alg:primal-ECR}.
\smallskip
Combining all the results shown in the previous parts, i.e., (I) each member of the sets $\widetilde{\mcl{I}}$, $\widetilde{\mcl{J}}$, and $\widetilde{\mcl{I}}^j$ for $j \in M$ receives a label assignment and is added to $\mt{D}$; (II) each bilinear term created during the aggregation can appear in at most two constraints; and (III) for any $k \in \mt{D}$, its label value is determined in such a way that the bilinear term that is common between the weighted constraints corresponding to $k$ and a previously added member $i$ of $\mt{D}$ is canceled, we conclude that at least $|\widetilde{\mcl{I}}| + |\widetilde{\mcl{J}}|+\sum_{j \in M}|\widetilde{\mcl{I}}^j|$ bilinear terms will be canceled during aggregation in the desired assignment $\big[\mcl{I}_1,\dotsc,\mcl{I}_m,\bar{\mcl{I}} \big| \mcl{J},\bar{\mcl{J}}\big]$.
This satisfies the EC\&R\xspace conditions (C1).
Finally, the above argument also implies that each flow-balance constraint at the nodes in $\cup_{j \in M} \widetilde{\mcl{I}}^j \cup \widetilde{\mcl{I}}$, and each variable bound constraint for the arcs in $\widetilde{\mcl{I}}$ will have at least one of their bilinear terms (after being multiplied with appropriate weights) canceled because each such node or arc will eventually be added to $\mt{D}$ when it receives a label for the desired cancellation.
This satisfies the EC\&R\xspace condition (C2).
We conclude that $\big[\mcl{I}_1,\dotsc,\mcl{I}_m,\bar{\mcl{I}} \big| \mcl{J},\bar{\mcl{J}}\big]$ is an EC\&R\xspace assignment.
\Halmos
\endproof
\begin{algorithm}
\scriptsize
\caption{Derive an EC\&R\xspace assignment associated with a forest structure}
\label{alg:primal-ECR}
\begin{algorithmic}[1]
\REQUIRE network $\mr{G} = (\mr{V},\mr{A})$, forest node sets $\widetilde{\mcl{I}}^j$ for $j \in M$, connection node set $\widetilde{\mcl{I}}$, connection arc set $\widetilde{\mcl{J}}$, class $l$ with $(i',j') \in N \times M$ such that $A^{l}_{j'i'} = 1$, and a class sign indicator $\pm$
\ENSURE the EC\&R\xspace assignment $\big[\mcl{I}_1,\dotsc,\mcl{I}_m,\bar{\mcl{I}} \big| \mcl{J},\bar{\mcl{J}}\big]$ for class-$l^{\pm}$
\STATE assign an empty label denoted by $\mt{l}(i)$ to each node $i \in \widetilde{\mcl{I}} \cup \big(\cup_{j \in M}\widetilde{\mcl{I}}^j\big)$ and each arc $i \in \widetilde{\mcl{J}} \cup \{i'\}$, and define set $\mt{D} = \emptyset$
\STATE set $\mt{l}(i') = $ the class sign indicator, and let $k \in \widetilde{\mcl{I}}^{j'} \cup \widetilde{\mcl{I}}$ either be an incident node to $i'$ or be the arc $i' \in \widetilde{\mcl{J}}$
\STATE \textbf{if} $\left(k = h(i') \in \widetilde{\mcl{I}}^{j'}\right)$ or $\left(k = t(i') \in \widetilde{\mcl{I}}\right)$ or $\left(k = i' \in \widetilde{\mcl{J}}\right)$ \textbf{then}
\STATE \quad set $\mt{l}(k) = \mt{l}(i')$, and add $k$ to $\mt{D}$
\STATE \textbf{else if} $\left(k = t(i') \in \widetilde{\mcl{I}}^{j'}\right)$ or $\left(k = h(i') \in \widetilde{\mcl{I}}\right)$ \textbf{then}
\STATE \quad set $\mt{l}(k) = \neg \mt{l}(i')$, and add $k$ to $\mt{D}$ ($\neg$ represents the negation symbol)
\STATE \textbf{end if}
\WHILE{$\mt{D} \neq \emptyset$}
\STATE \quad select $i \in \mt{D}$
\STATE \quad \textbf{if} $i \in \widetilde{\mcl{I}}^j$ for some $j \in M$ \textbf{then}
\STATE \quad \quad \textbf{for} each unlabeled node $k \in \widetilde{\mcl{I}}^j$ and $\bar{k} \in \widetilde{\mcl{I}}$ that is adjacent to $i$ \textbf{do}
\STATE \quad \quad \quad set $\mt{l}(k) = \mt{l}(i)$, set $\mt{l}(\bar{k}) = \neg \mt{l}(i)$, and add $k$ and $\bar{k}$ to $\mt{D}$
\STATE \quad \quad \textbf{end for}
\STATE \quad \quad \textbf{for} each unlabeled arc $k \in \widetilde{\mcl{J}}$ that is incident to $i$ \textbf{do}
\STATE \quad \quad \quad \textbf{if} $i = t(k)$ \textbf{then}
\STATE \quad \quad \quad \quad set $\mt{l}(k) = \mt{l}(i)$, and add $k$ to $\mt{D}$
\STATE \quad \quad \quad \textbf{else}
\STATE \quad \quad \quad \quad set $\mt{l}(k) = \neg \mt{l}(i)$, and add $k$ to $\mt{D}$
\STATE \quad \quad \quad \textbf{end if}
\STATE \quad \quad \textbf{end for}
\STATE \quad \textbf{end if}
\STATE \quad \textbf{if} $i \in \widetilde{\mcl{I}}$ \textbf{then}
\STATE \quad \quad \textbf{for} each unlabeled node $k \in \widetilde{\mcl{I}}^j$ for $j \in M$ such that $k = i$ \textbf{do}
\STATE \quad \quad \quad set $\mt{l}(k) = \mt{l}(i)$, and add $k$ to $\mt{D}$
\STATE \quad \quad \textbf{end for}
\STATE \quad \quad \textbf{for} each unlabeled node $k \in \widetilde{\mcl{I}}^j$ for $j \in M$, and $\bar{k} \in \widetilde{\mcl{I}}$ that is adjacent to $i$ \textbf{do}
\STATE \quad \quad \quad set $\mt{l}(k) = \neg \mt{l}(i)$, set $\mt{l}(\bar{k}) = \mt{l}(i)$, and add $k$ and $\bar{k}$ to $\mt{D}$
\STATE \quad \quad \textbf{end for}
\STATE \quad \quad \textbf{for} each unlabeled arc $k \in \widetilde{\mcl{J}}$ that is incident to $i$ \textbf{do}
\STATE \quad \quad \quad \textbf{if} $i = t(k)$ \textbf{then}
\STATE \quad \quad \quad \quad set $\mt{l}(k) = \neg \mt{l}(i)$, and add $k$ to $\mt{D}$
\STATE \quad \quad \quad \textbf{else}
\STATE \quad \quad \quad \quad set $\mt{l}(k) = \mt{l}(i)$, and add $k$ to $\mt{D}$
\STATE \quad \quad \quad \textbf{end if}
\STATE \quad \quad \textbf{end for}
\STATE \quad \textbf{end if}
\STATE \quad \textbf{if} $i \in \widetilde{\mcl{J}}$ \textbf{then}
\STATE \quad \quad \textbf{for} each unlabeled node $k = t(i) \in \widetilde{\mcl{I}}^j$ and $\bar{k} = h(i) \in \widetilde{\mcl{I}}^j$ for $j \in M$ \textbf{do}
\STATE \quad \quad \quad set $\mt{l}(k) = \mt{l}(i)$, set $\mt{l}(\bar{k}) = \neg \mt{l}(i)$, and add $k$ and $\bar{k}$ to $\mt{D}$
\STATE \quad \quad \textbf{end for}
\STATE \quad \quad \textbf{for} each unlabeled node $k = t(i) \in \widetilde{\mcl{I}}$ and $\bar{k} = h(i) \in \widetilde{\mcl{I}}$ \textbf{do}
\STATE \quad \quad \quad set $\mt{l}(k) = \neg \mt{l}(i)$, set $\mt{l}(\bar{k}) = \mt{l}(i)$, and add $k$ and $\bar{k}$ to $\mt{D}$
\STATE \quad \quad \textbf{end for}
\STATE \quad \textbf{end if}
\STATE \quad Remove $i$ from $\mt{D}$
\ENDWHILE
\STATE \textbf{for} each $i \in \widetilde{\mcl{I}}^j$ for each $j \in M$ \textbf{do}
\STATE \quad add $i^{\mt{l}(i)}$ to $\mcl{I}_j$
\STATE \textbf{end for}
\STATE \textbf{for} each $i \in \widetilde{\mcl{I}}$ \textbf{do}
\STATE \quad add $i^{\mt{l}(i)}$ to $\bar{\mcl{I}}$
\STATE \textbf{end for}
\STATE \textbf{for} each $i \in \widetilde{\mcl{J}}$ \textbf{do}
\STATE \quad \textbf{if} $i^{\mt{l}(i)} = +$ \textbf{then}
\STATE \quad \quad add $i$ to $\mcl{J}$
\STATE \quad \textbf{else}
\STATE \quad \quad add $i$ to $\bar{\mcl{J}}$
\STATE \quad \textbf{end if}
\STATE \textbf{end for}
\end{algorithmic}
\end{algorithm}
\medskip
In view of Theorem~\ref{thm:primal-forest-converse}, once we identify a forest structure with the desired conditions, we can use the steps in Algorithm~\ref{alg:primal-ECR} to determine the weight of each constraint in the corresponding EC\&R\xspace assignment by following a path that starts from the arc associated with the base equality and reaches the node or arc associated with that constraint.
We illustrate this approach in the following example.
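For readers who prefer code, the labeling pass of Algorithm~\ref{alg:primal-ECR} can be viewed as a breadth-first traversal of the forest structure. The following Python sketch illustrates this view; the adjacency map and the \texttt{flip} predicate are hypothetical placeholders abstracting the case analysis in lines 10--44 of the algorithm.
\begin{verbatim}
from collections import deque

def propagate_labels(adj, seed, seed_label, flip):
    """adj: maps each node/arc of the forest structure to its neighbors.
    seed: the element labeled first (the arc of the base equality).
    flip(u, v): True if the label is negated when moving from u to v,
    e.g., when crossing between a parallel-network tree and the
    connection set.  Returns a dict of +1/-1 labels."""
    labels = {seed: seed_label}
    queue = deque([seed])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in labels:              # unlabeled neighbor
                labels[v] = -labels[u] if flip(u, v) else labels[u]
                queue.append(v)
    return labels
\end{verbatim}
Because the structure is a forest, every element is reached exactly once, which mirrors the fact that each member of the underlying sets receives exactly one label.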
\medskip
\begin{example} \label{ex:primal-forest}
Consider set $\mcl{S}$ with $m = 2$ and $\Xi$ that represents the primal network model corresponding to the graph $\mr{G} = (\mr{V},\mr{A})$ shown in Figure~\ref{fig:primal-tree}.
Similarly to Example~\ref{ex:primal-tree}, we refer to each arc in this network as a pair $(i,j)$ of its tail node $i$ and its head node $j$, and denote its corresponding flow variable as $x_{i,j}$.
Assume that we are interested in finding EC\&R\xspace assignments for class-$l^{+}$ where the base equality $l$ contains the bilinear term $y_1x_{1,5}$, i.e., $i' = (1,5)$ and $j' = 1$.
According to Theorem~\ref{thm:primal-tree-converse}, we need to identify a forest structure that satisfies the conditions (i)--(iii) of Propositions~\ref{prop:primal-forest} and \ref{prop:primal-forest-pairwise}.
In the parallel network $\mr{G}^1$, we select the forest $\bar{\mr{F}}^1$ composed of the tree $\bar{\mr{T}}^1_1$ with the node set $\{1, 2, 6\}$ and the tree $\bar{\mr{T}}^1_2$ with the node set $\{8\}$.
In the parallel network $\mr{G}^2$, we select the forest $\bar{\mr{F}}^2$ composed of the tree $\bar{\mr{T}}^2_1$ with the node set $\{1, 4\}$.
Therefore, we can form the set $\widetilde{\mcl{I}}^1 = \{1, 2, 6, 8\}$ and $\widetilde{\mcl{I}}^2 = \{1, 4\}$.
We select the connection node set $\widetilde{\mcl{I}} = \{3\}$, and the connection arc set $\widetilde{\mcl{J}} = \{(8,4)\}$.
It is easy to verify that these sets satisfy the conditions (i)--(iii) of Propositions~\ref{prop:primal-forest} and \ref{prop:primal-forest-pairwise}.
Next, we determine the label of each node and arc in the above sets through applying Algorithm~\ref{alg:primal-ECR}.
According to line 2 of this algorithm, we set $\mt{l}(1,5) = +$ in parallel network $\mr{G}^1$, and select $k = t(1,5) = 1 \in \widetilde{\mcl{I}}^1$.
It follows from line 5 of the algorithm that $\mt{l}(1) = -$ and $k$ is added to $\mt{D}$.
Following lines 10--13, we obtain for $\widetilde{\mcl{I}}^1$ that $\mt{l}(2) = \mt{l}(6) = -$, and for $\widetilde{\mcl{I}}$ that $\mt{l}(3) = +$.
Then, from lines 26--28 of Algorithm~\ref{alg:primal-ECR}, we deduce for $\widetilde{\mcl{I}}^2$ that $\mt{l}(4) = -$, and from lines 11--13 for $\widetilde{\mcl{I}}^2$, we obtain that $\mt{l}(1) = -$.
Lines 32--34 imply that $\mt{l}(8,4) = -$ for $\widetilde{\mcl{J}}$.
Lastly, we conclude from lines 38--40 that $\mt{l}(8) = -$ for $\widetilde{\mcl{I}}^1$.
As a result, following lines 47--59 of the algorithm, we obtain the EC\&R\xspace assignment $\big[\{1^-, 2^-, 6^-, 8^-\},\{1^-, 4^-\},\{3^+\} \big| \emptyset, \{(8,4)\}\big]$ for class-$l^+$.
Based on this assignment, we multiply the negative flow-balance constraints at nodes $1, 2, 6, 8$ with $y_1$, the negative flow-balance constraints at nodes $1, 4$ with $y_2$, the positive flow-balance constraint at node $3$ with $1-y_1-y_2$, and the upper bound constraint on variable $x_{8,4}$ with $1-y_1-y_2$, and we aggregate them with the base bilinear equality corresponding to arc $(1,5)$ with weight 1 to obtain the aggregated inequality
\begin{multline*}
-z_{1,5} - y_1 x_{4,5} - y_1 x_{3,7} + y_1 x_{4,3} - y_2 x_{1,5} + y_2 x_{4,5} + y_2 x_{2,3} - y_2 x_{3,7}\\
+ (f_1 + f_2 + f_3 + f_6 + f_8 - u_{8,4})y_1 + (f_1 + f_3 + f_4 - u_{8,4}) y_2 \\
+ x_{3,7} -x_{2,3} - x_{4,3} - x_{8,4} - f_3 + u_{8,4} \geq 0,
\end{multline*}
where $f_i$ denotes the supply/demand value at node $i$, and $u_{i,j}$ denotes the upper bound for variable $x_{i,j}$.
Following Remark~\ref{rem:reduce-bilinear}, we may relax each of the seven remaining bilinear terms into two possible linear expressions, leading to 128 total EC\&R\xspace inequalities.
If implemented inside a separation oracle, we can use Remark~\ref{rem:1-separation} to find the most violated inequality among these 128 inequalities in linear time; see the sketch after this example.
\end{example}
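To make the linear-time search of Remark~\ref{rem:1-separation} concrete, the following Python sketch (with a hypothetical data layout) picks, independently for each remaining bilinear term, the relaxation option with the smaller value at the current point; this recovers the most violated of the $2^7 = 128$ inequalities while scanning only $7$ pairs.
\begin{verbatim}
def most_violated(base_lhs, term_options):
    """base_lhs: value of the linear part of the aggregated inequality
    at the current point.  term_options: for each remaining bilinear
    term, the pair of values of its two linear relaxations at that
    point.  Returns the smallest achievable LHS and the choices made;
    the resulting inequality is violated iff the LHS is negative."""
    lhs, choices = base_lhs, []
    for v1, v2 in term_options:
        if v1 <= v2:
            lhs, choices = lhs + v1, choices + [0]
        else:
            lhs, choices = lhs + v2, choices + [1]
    return lhs, choices
\end{verbatim}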
\medskip
We conclude this section with a remark on the practical implementation of the proposed EC\&R\xspace inequalities.
While there is an efficient separation algorithm to find a separating EC\&R\xspace inequality among those created from a given EC\&R\xspace assignment as noted in Remark~\ref{rem:1-separation}, the choice of the class of an EC\&R\xspace assignment and its possible forest structure in the underlying network can lead to a large pool of candidates to consider during a branch-and-cut approach.
Note that each EC\&R\xspace inequality is obtained through an aggregation of the constraints of $\mcl{S}$ with proper weights.
In particular, given an EC\&R\xspace assignment $\big[\mcl{I}_1,\dotsc,\mcl{I}_m,\bar{\mcl{I}} \big| \mcl{J},\bar{\mcl{J}}\big]$ for class-$l^{\pm}$, we aggregate the base inequality of the form $f_l(\vc{x},\vc{y},\vc{z}) \geq 0$ with constraints of the general form $h(\vc{y})g(\vc{x}) \geq 0$, where $h(\vc{y})$ represents the aggregation weight that could be $y_j$ or $1-\sum_{j \in M} y_j$, and where $g(\vc{x}) \geq 0$ denotes a linear side constraint that could be the flow-balance or variable bound constraints.
In most branch-and-cut approaches, the starting relaxation of the problem contains all linear side constraints on $\vc{x}$ and $\vc{y}$.
It follows that an optimal solution $(\bar{\vc{x}};\bar{\vc{y}};\bar{\vc{z}})$ of such a relaxation that is to be separated satisfies $h(\bar{\vc{y}})g(\bar{\vc{x}}) \geq 0$ for all valid choices of function $h(\vc{y})$ and constraint $g(\vc{x}) \geq 0$.
Therefore, for the resulting aggregated inequality to be violated at a point $(\bar{\vc{x}};\bar{\vc{y}};\bar{\vc{z}})$, the base inequality must be violated at that point, i.e., $f_l(\bar{\vc{x}},\bar{\vc{y}},\bar{\vc{z}}) < 0$.
This observation can be used to select the class and sign of the EC\&R\xspace assignment to be generated during the separation process.
To this end, we may sort the values $\Psi_k = |\bar{y}_j\bar{x}_i - \bar{z}_k|$ for all $(i,j,k)$ such that $A^k_{ji} = 1$, and choose class $k$ as the one associated with the largest $\Psi_k$, with class sign $+$ if $\bar{y}_j\bar{x}_i - \bar{z}_k < 0$, and class sign $-$ otherwise.
This perspective can shed light on the observation that the EC\&R\xspace inequalities obtained from \textit{fewer} aggregations tend to be more effective in practice as noted in \cite{davarnia:ri:ta:2017} and also observed in our experiments in Section~\ref{sec:computation}.
Specifically, the addition of constraints $h(\vc{y})g(\vc{x}) \geq 0$ in the aggregation can increase the left-hand-side value in the aggregated inequality when $h(\bar{\vc{y}})g(\bar{\vc{x}}) > 0$, which could reduce the chances of obtaining a violated aggregated inequality.
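As an illustration, the class/sign selection heuristic described above can be implemented in a few lines; the data layout below is hypothetical.
\begin{verbatim}
def select_class(terms, x_bar, y_bar, z_bar):
    """terms: list of triples (i, j, k) with A^k_{ji} = 1.  Returns the
    class with the largest McCormick violation and its sign."""
    i, j, k = max(terms, key=lambda t:
                  abs(y_bar[t[1]] * x_bar[t[0]] - z_bar[t[2]]))
    sign = '+' if y_bar[j] * x_bar[i] - z_bar[k] < 0 else '-'
    return k, sign
\end{verbatim}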
\medskip
Another observation that can be helpful for choosing the forest structures is considering the relaxation step in the EC\&R\xspace procedure.
As described in Remark~\ref{rem:reduce-bilinear}, each remaining bilinear term $y_jx_i$ can be relaxed using either the bound constraints or the bilinear constraints.
The former case is equivalent to aggregating the inequality with a constraint of the form $h(\vc{y})g(\vc{x}) \geq 0$ where $h(\vc{y}) = y_j$ and $g(\vc{x}) \in \{x_i \geq 0, u_i-x_i \geq 0\}$, for which the previous argument holds about achieving a violation.
For the latter case, on the other hand, we aggregate the inequality with a bilinear constraint of the form $\pm(y_jx_i - z_k) \geq 0$ for $i,j,k$ such that $A^k_{j,i} = 1$, which can potentially lead to a violation depending on the value of $\Psi_k = |\bar{y}_j\bar{x}_i - \bar{z}_k|$.
As a result, we might choose forest structures that contain the nodes incident to the arcs $i \in \mr{A}$ with the largest violations $|\bar{y}_j\bar{x}_i - \bar{z}_k|$.
In our computational experiments presented in Section~\ref{sec:computation}, we use the above-mentioned heuristics in our separation oracle to efficiently select the class of EC\&R\xspace assignments and their forest structures, an approach that shows promising results.
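A sketch of this seeding rule, under the same hypothetical data layout as before, is given below; the \texttt{top} parameter corresponds to the number of candidates we consider in the experiments of Section~\ref{sec:computation}.
\begin{verbatim}
def seed_nodes(terms, tail, head, x_bar, y_bar, z_bar, top=30):
    """Rank arcs by their McCormick violation and return the endpoint
    nodes of the top ones as seeds for the forest structures.  tail
    and head map each arc index to its end nodes; top is 30 or 60 in
    our experiments."""
    ranked = sorted(terms, key=lambda t:
                    abs(y_bar[t[1]] * x_bar[t[0]] - z_bar[t[2]]),
                    reverse=True)
    seeds = []
    for i, j, k in ranked[:top]:
        seeds.extend([tail[i], head[i]])
    return seeds
\end{verbatim}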
\section{Computational Experiments} \label{sec:computation}
In this section, we present preliminary computational results to evaluate the impact of the EC\&R\xspace cutting planes generated through the results of Section~\ref{sec:primal}.
We study several basic network structures, from both dense and sparse classes, that can be used to obtain different relaxations for general network problems by isolating these structures in the underlying model.
These structures include bipartite, clique, and cycle, as shown in Figures~\ref{fig:network1}--\ref{fig:network3}, to represent different density levels for the underlying graph.
To form set $\mcl{S}$ with network model $\Xi$, for each structure, we generate supply/demand vectors in such a way that the underlying network problem is feasible.
In the following, we give details about the data generation.
For $\Delta$, we consider two scenarios for each structure, one for the case with $m = 1$, and the other for the case with $m=2$.
The former case is fundamental, as it can always be used as a basic relaxation even when there are multiple $y$ variables in the model.
The latter case is of particular importance among instances of $\mcl{S}$ with multiple $y$ variables in the simplex $\Delta$, as it represents the pairwise conflict between variables.
Specifically, when two binary variables $y_i$ and $y_j$ cannot be positive at the same time, which can also be modeled as complementarity constraints of the form $y_i y_j = 0$ for each pair $i,j$ that forms an edge of a so-called \textit{conflict graph} defined on the $y$ variables, we may use such formulations with $m=2$.
As the base relaxation for set $\mcl{S}$, we consider its \textit{linearized} LP relaxation, where the $y$ variables are continuous, and the bilinear constraints $y_jx_i - z_k = 0$, for all $k \in K$ and $(i,j) \in N \times M$ such that $A^k_{ji} = 1$, are replaced with the well-known McCormick bounds $z_k \geq 0$, $z_k \geq u_i y_j + x_i - u_i$, $z_k \leq x_i$, and $z_k \leq u_iy_j$.
We denote this set by $\mcl{S}_L$.
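For intuition on the looseness of this relaxation, the following Python sketch evaluates the interval that the four McCormick inequalities allow for $z = yx$ at a given point.
\begin{verbatim}
def mccormick_bounds(x, y, u):
    """Bounds on z = y*x implied by z >= 0, z >= u*y + x - u,
    z <= x, and z <= u*y, for 0 <= x <= u and 0 <= y <= 1."""
    lower = max(0.0, u * y + x - u)
    upper = min(x, u * y)
    return lower, upper

# mccormick_bounds(3.0, 0.5, 10.0) returns (0.0, 3.0), while the true
# product is 1.5; this slack is what the EC&R cuts aim to close.
\end{verbatim}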
To evaluate the impact of adding EC\&R\xspace cuts on strengthening $\mcl{S}_L$, we optimize a linear function in $x$ and $z$ variables over this set:
\begin{equation}
\max\{\sum_{i\in N} r_ix_i + \sum_{k\in K} c_kz_k | (\vc{x},\vc{y},\vc{z}) \in \mcl{S}_L\}, \label{eq:OP1}
\end{equation}
which provides an LP relaxation for the original problem
\begin{equation}
\max\{\sum_{i\in N} r_ix_i + \sum_{k\in K} c_kz_k | (\vc{x},\vc{y},\vc{z}) \in \mcl{S}\}. \label{eq:OP2}
\end{equation}
We use the special tree structure of Section~\ref{subsec:primal-single} for the case with $m=1$, and the special forest structure of Section~\ref{subsec:primal-multi} for the case with $m = 2$, to produce EC\&R\xspace cutting planes that can be added to $\mcl{S}_L$ to improve the dual bound in the optimization problem.
Our experiments show the effectiveness of the proposed EC\&R\xspace inequalities in improving the classical McCormick bounds.
While the EC\&R\xspace cuts that we obtain are valid for both cases where the $y$ variables are binary and continuous, we consider the binary case in these computational experiments so that we can obtain the optimal value of the original problem \eqref{eq:OP2} and use it to compare the bound improvement achieved from adding the EC\&R\xspace cuts to \eqref{eq:OP1}.
This statement follows from the fact that the McCormick formulation is an exact reformulation of the original problem when $y$ variables are binary.
For these experiments, the code is written in Python 3.7.8, and the optimization problems are solved using CPLEX 20.1.0 at its default settings.
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.45\linewidth}
\centering
\includegraphics[scale=0.15]{P2}
\caption{Bipartite structure}
\label{fig:network1}
\end{subfigure}
\hspace{0.05\linewidth}
\begin{subfigure}[b]{0.45\linewidth}
\centering
\includegraphics[scale=0.15]{P3}
\caption{Clique structure}
\label{fig:network2}
\end{subfigure}
\begin{subfigure}[b]{0.45\linewidth}
\centering
\includegraphics[scale=0.15]{P4}
\caption{Cycle structure}
\label{fig:network3}
\end{subfigure}
\caption{Basic network structures to evaluate the effectiveness of EC\&R\xspace cuts}
\label{fig:network}
\end{figure}
\subsection{Bipartite Structure.}
In this section, we perform computational experiments on bipartite structures; see Figure~\ref{fig:network1}.
This structure represents a mid-level density for the underlying graph.
We consider three problem sizes, where the number of nodes in the underlying network is $10$, $30$, and $50$.
For each problem size, we create $10$ randomly generated instances for set $\mcl{S}$ with the following specifications for its underlying network $\mr{G} = (\mr{V}, \mr{A})$.
Because the network is bipartite, we place half of the nodes in the left partition and half in the right partition, with a directed arc from each node in the left partition to each node in the right partition. For problems with sizes $10$, $30$, and $50$, each arc is assigned a capacity that is randomly generated from a discrete uniform distribution on $(0,100)$, $(0,100)$, and $(0,200)$, respectively.
The coefficients of the $x$ variables in the objective function are randomly generated from discrete uniform distributions on $(0,20)$, $(0,20)$, and $(0,30)$, respectively, for problem sizes $10$, $30$, and $50$.
For all instances, the coefficient of each $z$ variable in the objective function is randomly generated from a uniform distribution on $(-10,10)$, and the supply/demand at each node is selected randomly from $(-200,200)$ in such a way that supplies and demands are balanced.
\medskip
As noted earlier, we consider two cases for the number $m$ of $y$ variables in the simplex.
Table~\ref{tab:bipartite-1} shows the results for the case with $m=1$.
The first column contains the problem size, and the second column shows the instance number.
The third column represents the optimal value of \eqref{eq:OP2} obtained by setting the $y$ variable as binary in \eqref{eq:OP1}.
The fourth column shows the optimal value of the linearized LP relaxation of \eqref{eq:OP2} given in \eqref{eq:OP1}.
The next two columns under ``Full EC\&R\xspace" show the result of adding all violated EC\&R\xspace inequalities obtained from tree structures according to Theorem~\ref{thm:primal-tree-converse} with up to two cancellations.
These cuts are added in rounds: after the LP relaxation is solved, we separate the current optimal solution and repeat until the improvement in the optimal value is less than 1\% for problems with sizes 10 and 30, and 2\% for problems with size 50 due to their higher computational cost.
To find the most violated EC\&R\xspace inequalities produced from an EC\&R\xspace assignment, we use the technique in Remark~\ref{rem:1-separation}.
The column ``Gap" contains the gap improvement obtained by adding these EC\&R\xspace inequalities over the optimal value of \eqref{eq:OP1}.
The next column shows the total solution time to add these inequalities.
The column ``Gap" under ``Separation EC\&R\xspace" includes the result of adding the above EC\&R\xspace inequalities through a separation oracle, following the approach discussed at the end of Section~\ref{sec:primal}.
In particular, for a current optimal solution $(\bar{\vc{x}}; \bar{\vc{y}}; \bar{\vc{z}})$, we consider the EC\&R\xspace assignment class and sign associated with the 30 largest values for $\Psi_k = |\bar{y}_j\bar{x}_i - \bar{z}_k|$ for all $(i,j,k)$ such that $A^k_{ji} = 1$.
We add the resulting EC\&R\xspace inequalities in loops as discussed above.
The last column shows the solution time when using this separation method.
The last row for each problem size reports the average values over the 10 random instances.
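The cut loop used in these experiments follows the standard pattern sketched below in Python; \texttt{solve\_lp}, \texttt{separate}, and \texttt{add\_cuts} are placeholders for the corresponding CPLEX calls.
\begin{verbatim}
def cut_loop(solve_lp, separate, add_cuts, tol=0.01):
    """Re-solve after each round of violated EC&R cuts until the bound
    improves by less than tol (1%; 2% for the size-50 instances)."""
    obj, sol = solve_lp()
    while True:
        cuts = separate(sol)          # violated EC&R inequalities
        if not cuts:
            return obj
        add_cuts(cuts)
        new_obj, sol = solve_lp()
        if abs(obj - new_obj) < tol * max(1.0, abs(obj)):
            return new_obj
        obj = new_obj
\end{verbatim}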
The results in Table~\ref{tab:bipartite-1} have three important implications.
First, they show the effectiveness of our proposed EC\&R\xspace inequalities based on the tree structures in improving the gap closure and strengthening the classical McCormick relaxation.
Second, they confirm the general observation that EC\&R\xspace inequalities with fewer aggregations (up to three in these experiments) tend to be the most effective, as they account for over 90\% of the total gap closure for most instances.
Third, they demonstrate the effectiveness of the proposed separation method, which achieves similar gap improvement levels in much less time compared to the case without separation.
These observations show promise for an efficient implementation of the EC\&R\xspace technique to solve practical problems.
\medskip
\begin{table}[!ht]
\centering
\caption{Evaluating EC\&R\xspace cutting planes for the bipartite structure with $m = 1$}
\label{tab:bipartite-1}
\begin{tabular}{|c|c||c|c||c|c||c|c|}
\hline
Size & \# & Opt & LP & \multicolumn{2}{c||}{Full EC\&R\xspace} & \multicolumn{2}{c|}{Separation EC\&R\xspace} \\ \cline{5-8}
& & & & Gap (\%)& Time (s) & Gap (\%)& Time (s) \\ \hline
10 & 1 & 3501 & 3035.53 & 99 & 0.44 & 99 & 0.30 \\
& 2 & 1982.63 & 1828.85 & 99 & 0.27 & 99& 0.15 \\
& 3 & 2846 & 2376.23 & 91 & 0.84 & 90& 0.62 \\
& 4 & 3160.86 & 3056.15 & 99 & 0.29 & 99& 0.14 \\
& 5 & 5929 & 5825.98 & 99 & 0.24 & 99& 0.15 \\
& 6 & 4487.71 & 3743.12 & 74 & 0.89 & 74& 0.65 \\
& 7 & 2009.02 & 1858.60 & 99 & 0.33 & 99& 0.37 \\
& 8 & 4531.55 & 4215.66 & 91 & 0.87 & 83& 0.61 \\
& 9 & 3238.87 & 3015.88 & 99 & 0.45 & 99& 0.48 \\
& 10 & 2302 & 2119.27 & 98 & 0.49 & 91& 0.47 \\
\cline{2-8}
& avg. && & 95 & 0.51 & 93 & 0.40 \\
\hline
30 & 1 & 2219.82 &1976.29 & 99 & 22.85 & 99 & 3.02 \\
& 2 & 1619.88 & 1444.26 & 99 & 22.79 & 99 & 1.49 \\
& 3 & 3678.58 & 3236.11 & 99 & 23.49 & 99 & 2.95 \\
& 4 & 3010.56 & 2544.60 & 99 & 41.96 & 92 & 5.96 \\
& 5 & 1959.27 & 1777.84 & 99 & 22.68 & 80 & 3.11 \\
& 6 & 4315.31 & 3379.29 & 90 & 81.48 & 99 & 7.67 \\
& 7 & 854.26 & 709.06 & 99 & 22.70 & 85 & 1.58 \\
& 8 & 1604.93 & 1324.22 & 99 & 23.01 & 99 & 2.97 \\
& 9 & 309.39 & 105.43 & 99 & 21.95 & 99 & 2.76 \\
& 10 & 3224.44 & 2980.87 & 99 & 21.61 & 99 & 3.17 \\
\cline{2-8}
& avg. && & 98 & 30.45 & 95 & 3.47 \\
\hline
50 & 1 & 2378.15 & 2012.32 & 99 & 170.22 & 99 & 14.21 \\
& 2 & 1798.19 & 1544.38 & 99 & 179.22 & 99 & 9.23 \\
& 3 & 1100.05 & 758.45 & 99 & 179.77 & 96 & 14.86 \\
& 4 &-1320.24 & -2103.91 & 99 & 177.30 & 99 & 9.67 \\
& 5 &-2517.20 &-2941.66 & 99 & 185.33 & 99 & 9.53 \\
& 6 &-3396.62 &-3607.08 & 99 & 183.70 & 99 & 4.64 \\
& 7 & 515.27 & 7.30 & 99 & 188.80 & 99 & 9.81 \\
& 8 & 668.71 &-126.09 & 99 & 181.87 & 99 & 13.80 \\
& 9 & 442.04 & 44.26 & 99 & 185.06 & 99 & 4.59 \\
& 10 &-1987.83 &-2479.73 & 99 & 182.14 & 99 & 13.59 \\
\cline{2-8}
& avg. && &99& 181.34 &99& 10.39 \\
\hline
\end{tabular}
\end{table}
In Table~\ref{tab:bipartite-2}, we consider the case for $\mcl{S}$ where $m=2$.
The first and second columns contain the problem size and instance number, respectively.
The third and fourth columns show the optimal value of the original problem and its LP relaxation, respectively.
The next four columns under ``Tree EC\&R\xspace" show the result of adding EC\&R\xspace inequalities with up to three aggregations (two cancellations) obtained from the one-variable relaxations of set $\mcl{S}$ where only one $y$ variable is considered.
For this approach, we use the EC\&R\xspace results of Theorem~\ref{thm:primal-tree-converse} to identify the tree structures for each one-variable relaxation and add the resulting cutting planes for each relaxation separately through loops as previously described.
The subcolumns under the ``Full" and ``Separation" headers contain the gap closure and the solution time to add these inequalities without and with the aforementioned separation oracle, respectively.
For these instances, we consider the 60 largest values for $\Psi_k$ in the separation procedure.
The next four columns under ``Forest EC\&R\xspace" include the result of adding EC\&R\xspace inequalities with up to three aggregations obtained for $\mcl{S}$ where both $y$ variables are
considered in their original simplex.
To add EC\&R\xspace cutting planes, we consider the forest structures according to Theorem~\ref{thm:primal-forest-converse}.
The subcolumns under the ``Full" and ``Separation" headers contain the gap closure and the solution time to implement these cutting planes without and with the separation oracle, respectively.
It is evident from these results that the EC\&R\xspace cuts obtained from the forest structures that involve all $y$ variables outperform those obtained from the tree structures that consider the $y$ variables individually, showing the effectiveness of the class of EC\&R\xspace inequalities with the pairwise cancellation property.
Further, these results show the notable impact of using a separation oracle to produce the EC\&R\xspace inequalities on reducing the solution time, especially for larger problems where the time savings are of orders of magnitude.
\begin{table}[!ht]
\centering
\caption{Evaluating EC\&R\xspace cutting planes for the bipartite structure with $m = 2$}
\label{tab:bipartite-2}
\resizebox{1.00\textwidth}{!}{
\begin{tabular}{|c|c||c|c||c|c||c|c||c|c||c|c|}
\hline
Size & \# & Opt & LP & \multicolumn{4}{c||}{Tree EC\&R\xspace} & \multicolumn{4}{c|}{Forest EC\&R\xspace} \\ \cline{5-12}
& & & & \multicolumn{2}{c||}{Full} & \multicolumn{2}{c||}{Separation} & \multicolumn{2}{c||}{Full}& \multicolumn{2}{c|}{Separation}\\ \cline{5-12}
& & & & Gap & Time & Gap & Time & Gap & Time & Gap & Time \\ \hline
\hline
10 & 1 & 3820.43 & 3304.37 & 72 & 2.68 & 66 & 0.90 & 80 & 4.19 & 76 & 3.78 \\
& 2 & 4670 & 3722.12 & 94 & 2.74 & 90 & 1.26 & 99 & 4.06 & 96 & 2.85 \\
& 3 & 3046.69 & 2296.67 & 88 & 2.75 & 86 & 1.28 & 99 & 2.86 & 99 & 3.88 \\
& 4 & 2091.19 & 1617.27 & 99 & 1.95 & 99 & 1.02 & 99 & 1.50 & 99 & 1.89 \\
& 5 & 4744.34 & 3833.32 & 86 & 2.06 & 74 & 1.25 & 96 & 4.15 & 96 & 3.81 \\
& 6 & 2994.59 & 2247.07 & 93 & 2.69 & 93 & 1.29 & 99 & 5.42 & 98 & 4.61 \\
& 7 & 4790.34 & 4366.39 & 99 & 1.72 & 99 & 0.65 & 99 & 1.56 & 99 & 1.98 \\
& 8 & 6438.68 & 5781.10 & 99 & 1.58 & 99 & 0.77 & 99 & 1.50 & 99 & 1.91 \\
& 9 & 995.54 & 514.49 & 81 & 1.73 & 55 & 0.92 & 99 & 1.47 & 99 & 1.90 \\
& 10& 3127.59 & 2985.04 & 99 & 1.35 & 99 & 0.36 & 99 & 1.53 & 99 & 0.94 \\
\cline{2-12}
&avg. &&& 91 & 2.13 & 86 & 0.97 & 97 & 2.82 & 96 & 2.75 \\
\hline
30 & 1 & -243.66 &-3690.76 & 28 & 165.79 & 17 & 9.86 & 49 & 497.22 & 31 & 36.26 \\
& 2 & 771.06 &-2039.13 & 16 & 172.91 & 09 & 10.42 & 49 & 505.36 & 29 & 27.15 \\
& 3 &-1103.88 &-3888.50 & 40 & 175.02 & 25 & 10.43 & 65 & 498.75 & 39 & 27.14 \\
& 4 & 2397.13 &-1259.93 & 30 & 170.07 & 24 & 10.24 & 58 & 495.05 & 33 & 27.40 \\
& 5 & 35.46 &-2069.69 & 59 & 164.67 & 40 & 19.38 & 78 & 486.38 & 50 & 54.06 \\
& 6 &-535.73 &-3742.10 & 31 & 209.90 & 22 & 17.33 & 64 & 494.07 & 43 & 45.65 \\
& 7 & 510.26 &-2121.24 & 43 & 174.23 & 27 & 13.67 & 85 & 500.75 & 68 & 64.31 \\
& 8 & 161.23 &-2120.39 & 29 & 168.76 & 16 & 9.93 & 67 & 614.68 & 36 & 35.82 \\
& 9 & 2727.77 & -355.00 & 58 & 172.26 & 52 & 14.50 & 89 & 494.68 & 71 & 55.88 \\
&10 & 1615.33 & -510.79 & 49 & 209.28 & 36 & 13.17 & 76 & 497.71 & 47 & 45.85 \\
\cline{2-12}
& avg.&&& 41 & 178.29 & 27 & 12.89 & 68 & 508.46 & 45 & 41.95 \\
\hline
50 & 1 &-4103.47 &-9633.82 & 41 & 1499.04 & 24 & 37.56 & 64 & 4248.47 & 41 & 195.21 \\
& 2 & -942.02 &-6275.92 & 60 & 1110.56 & 47 & 47.72 & 83 & 3960.46 & 62 & 167.52 \\
& 3 &-3652.90 &-11173.54& 39 & 1085.44 & 26 & 28.84 & 60 & 3923.47 & 37 & 131.14 \\
& 4 & -940.47 &-5652.85 & 61 & 1006.40 & 48 & 56.31 & 80 & 3729.93 & 60 & 174.41 \\
& 5 & 2006.51 &-3895.38 & 45 & 1024.55 & 35 &48.22 & 65 & 3767.51 & 38 & 100.05 \\
& 6 &-3040.65 &-10676.33& 45 & 999.34 & 32 &37.38 & 62 & 2918.44 & 41 & 100.59 \\
& 7 &-2074.94 &-11708.06& 35 & 1029.83 & 28 &37.67 & 52 & 2955.59 & 39 & 125.89 \\
& 8 &-4621.66 &-9875.65 & 42 & 1046.62 & 29 &38.16 & 67 & 3872.93 & 38 & 125.75 \\
& 9 &-1622.87 &-7510.43 & 48 &1058.28 & 36 &28.94 & 72 & 3881.18 & 46 & 125.50 \\
&10 &-4396.41 &-11995.74& 24 &1010.64 & 17 &37.90 & 39 & 2965.72 & 26 & 161.27 \\
\cline{2-12}
&avg.&&& 44 & 1087.07& 32 &39.87& 64 & 3622.37 & 42 & 134.62 \\
\hline
\end{tabular}
}
\end{table}
\subsection{Clique Structure.}
In this section, we perform computational experiments on clique structures; see Figure~\ref{fig:network2}.
This structure represents a high-level density for the underlying graph.
We consider three problem sizes, where the number of nodes in the underlying network is $10$, $20$, and $30$.
For each problem size, we create $10$ randomly generated instances for set $\mcl{S}$ with the following specifications for its underlying network $\mr{G} = (\mr{V}, \mr{A})$.
For the clique networks, there is an arc between each pair of nodes.
We determine the direction of each arc through a random binary value.
Each arc is assigned a capacity that is randomly generated from a discrete uniform distribution between $(0,100)$.
The coefficient of each $z$ variable in the objective function is randomly generated from a uniform distribution on $(-10,10)$, to which we add a random number between $0$ and $0.1$ multiplied by the number of nodes in the network.
This second random element is added to widen the optimality gap for a meaningful comparison of the different approaches.
The coefficient of each $x$ variable in the objective function is randomly generated from a discrete uniform distribution on $(0,20)$, and the supply/demand at each node is selected randomly from $(-200,200)$ in such a way that supplies and demands are balanced.
\begin{table}[!ht]
\centering
\caption{Evaluating EC\&R\xspace cutting planes for the clique structure with $m = 1$}
\label{tab:clique-1}
\begin{tabular}{|c|c||c|c||c|c||c|c|}
\hline
Size & \# & Opt & LP & \multicolumn{2}{c||}{Full EC\&R\xspace} & \multicolumn{2}{c|}{Separation EC\&R\xspace} \\ \cline{5-8}
& & & & Gap (\%)& Time (s) & Gap (\%)& Time (s) \\ \hline
10 & 1 & 6619.00 & 6291.72 & 99 & 2.91 & 78 & 0.30 \\
& 2 & 5027.37 & 4925.55 & 99 & 4.28 & 00 & 0.08 \\
& 3 & 6118.96 & 5957.02 & 99 & 1.57 & 99 & 0.23 \\
& 4 & 3494.89 & 3391.59 & 99 & 1.54 & 99 & 0.10 \\
& 5 & 2240.41 & 1843.81 & 99 & 1.59 & 99& 0.17 \\
& 6 & 1765.00 & 1603.52 & 99 & 1.52 & 81& 0.45 \\
& 7 & 4129.00 & 3939.31 & 99 & 1.51 & 99& 0.19 \\
& 8 & 5514.99 & 5128.99 & 99 & 1.59 & 88& 0.40 \\
& 9 & 8396.64 & 8260.95 & 99 & 1.51 & 99& 0.10 \\
& 10 & 4402.24 & 4156.90 & 99 & 1.48 & 92& 0.25 \\
\cline{2-8}
& avg. && & 99 & 1.95 & 83 & 0.23 \\
\hline
20 & 1 & 6176.00 & 6008.36 & 99 & 36.31 & 99 & 1.69 \\
& 2 & 8734.00 & 8530.56 & 99 & 34.06 & 99 & 1.66 \\
& 3 &10195.00 & 9492.98 & 86 & 99.73 & 73 & 6.84 \\
& 4 & 7020.00 & 6626.42 & 99 & 67.61 & 97 & 8.53 \\
& 5 & 7917.00 & 7339.43 & 99 & 97.31 & 95 & 12.27 \\
& 6 & 9154.00 & 8571.02 & 99 & 35.18 & 99 & 1.78 \\
& 7 & 9117.00 & 8849.31 & 99 & 35.13 & 99 & 1.79 \\
& 8 & 9804.00 & 9138.27 & 99 & 65.62 & 99 & 8.53 \\
& 9 & 5876.17 & 5463.65 & 99 & 66.50 & 99 & 3.36 \\
& 10 & 7605.72 & 6822.39 & 98 &132.54 & 91 & 6.88 \\
\cline{2-8}
& avg. && & 98 & 67.00 & 95 & 5.33 \\
\hline
30 & 1 & 7673.11 & 6899.80 & 99 & 360.89 & 88 & 66.99 \\
& 2 & 6376.51 & 5191.96 & 99 & 190.06 & 99 & 29.07 \\
& 3 & 11341.00 & 10650.40 & 99 & 166.24 & 63 & 28.61 \\
& 4 & 9045.00 & 7826.18 & 95 & 619.64 & 78 & 65.26 \\
& 5 & 6094.86 & 5289.02 & 95 & 636.07 & 87 & 39.74 \\
& 6 & 7641.00 & 7222.33 & 99 & 321.02 & 95 & 37.95 \\
& 7 &11040.00 &10413.28 & 99 & 163.22 & 99 & 35.56 \\
& 8 & 6862.00 & 6411.01 & 99 & 175.81 & 99 & 18.61 \\
& 9 & 7424.00 & 6295.28 & 94 & 786.84 & 85 & 73.03 \\
& 10 & 6102.74 & 4945.62 & 99 & 357.86 & 80 & 56.56 \\
\cline{2-8}
& avg. && & 98 & 377.77 & 87 & 45.14 \\
\hline
\end{tabular}
\end{table}
\begin{table}[!ht]
\centering
\caption{Evaluating EC\&R\xspace cutting planes for the clique structure with $m = 2$}
\label{tab:clique-2}
\resizebox{1.00\textwidth}{!}{
\begin{tabular}{|c|c||c|c||c|c||c|c||c|c||c|c|}
\hline
Size & \# & Opt & LP & \multicolumn{4}{c||}{Tree EC\&R\xspace} & \multicolumn{4}{c|}{Forest EC\&R\xspace} \\ \cline{5-12}
& & & & \multicolumn{2}{c||}{Full} & \multicolumn{2}{c||}{Separation} & \multicolumn{2}{c||}{Full}& \multicolumn{2}{c|}{Separation}\\ \cline{5-12}
& & & & Gap & Time & Gap & Time & Gap & Time & Gap & Time \\ \hline
\hline
10 & 1 & 5177.00 & 4620.61 & 99 & 3.25 & 99 & 1.40 & 99 & 9.52 & 99 & 4.41 \\
& 2 & 276.13 & -682.07 & 80 & 11.86& 76 & 5.78 & 91 & 25.73& 88 & 17.34\\
& 3 & 5444.39 & 5046.27 & 99 & 5.83 & 98 & 2.82 & 99 & 17.13& 99 & 8.96 \\
& 4 & 6699.48 & 5954.44 & 97 & 9.06 & 96 & 4.34 & 99 & 17.43& 97 & 13.56\\
& 5 & 4897.99 & 4244.31 & 99 & 6.11 & 99 & 4.40 & 99 & 9.52 & 99 & 4.41 \\
& 6 & 5690.00 & 5448.84 & 99 & 3.17 & 99 & 1.37 & 99 & 9.31 & 99 & 4.46 \\
& 7 & 3744.00 & 3081.42 & 99 & 5.53 & 99 & 2.73 & 99 & 9.12 & 99 & 8.65 \\
& 8 & 5840.00 & 5591.10 & 99 & 9.02 & 99 & 4.31 & 99 & 17.52& 99 & 8.46\\
& 9 & 6995.26 & 6758.82 & 99 & 3.23 & 99 & 1.44 & 99 & 9.55& 99 & 4.29 \\
& 10& 3576.71 & 3254.87 & 99 & 3.13 & 99 & 1.37 & 99 & 9.80& 99 & 4.11 \\
\cline{2-12}
&avg. &&& 97 & 6.02 & 96 & 3.00 & 98 & 13.46& 98 & 7.86 \\
\hline
20 & 1 & 7519.86 & 5941.60 & 80 & 247.21 & 68 & 16.70 & 99 & 567.28 & 86 & 93.92 \\
& 2 & 6940.38 & 4598.74 & 45 & 259.08 & 25 & 16.97 & 68 & 750.97 & 50 & 65.45 \\
& 3 & 6661.91 & 5093.44 & 63 & 261.60 & 41 & 10.15 & 99 & 382.93 & 86 & 47.06 \\
& 4 & 5529.50 & 3884.97 & 72 & 259.85 & 60 & 17.14 & 98 & 734.08 & 83 & 45.56 \\
& 5 & 7517.72 & 6460.06 & 76 & 191.48 & 56 & 14.08 & 99 & 523.53 & 76 & 45.62 \\
& 6 & 6691.37 & 5225.75 & 94 & 315.58 & 72 & 19.68 & 99 & 515.09 & 86 & 54.63 \\
& 7 & 9608.17 & 7674.57 & 73 & 192.52 & 58 & 13.29 & 95 & 525.21 & 79 & 45.56 \\
& 8 & 9808.00 & 7723.56 & 60 & 252.08 & 48 & 13.57 & 79 & 661.85 & 67 & 46.10 \\
& 9 & 5940.59 & 3654.28 & 55 & 198.98 & 37 & 16.23 & 80 & 679.97 & 62 & 5.14 \\
&10 & 7332.00 & 6126.00 & 99 & 266.47 & 93 & 13.47 & 99 & 352.75 & 94 & 27.71 \\
\cline{2-12}
& avg.&&& 71 & 236.64 & 56 & 15.13 & 92 & 569.37 & 77 & 52.68 \\
\hline
30 & 1 & 6912.00 & 5808.28 & 99 & 426.04 & 94 &84.48 & 99 & 1062.91 & 99 & 164.67 \\
& 2 &11003.69 & 9877.86 & 95 &1177.46 & 87 &86.41 & 96 & 2988.85 & 92 & 224.89 \\
& 3 & 6676.09 & 4151.90 & 84 &1815.14 & 64 &88.16 & 99 & 1919.25 & 93 & 280.02 \\
& 4 & 8375.87 & 5882.49 & 90 &1420.13 & 69 &107.86& 98 & 4084.95 & 80 & 282.41 \\
& 5 & 9854.24 & 7338.24 & 99 &644.18 & 98 &88.94 & 99 & 1125.23 & 99 & 167.50 \\
& 6 & 7613.34 & 5011.54 & 62 &1538.90 & 46 &111.18& 81 & 4004.75 & 64 & 367.75 \\
& 7 &11367.00 & 8704.43 & 81 &1266.64 & 65 &129.98& 91 & 5027.96 & 70 & 255.62 \\
& 8 & 8179.74 & 5631.34 & 73 &1277.91 & 57 &93.64 & 92 & 3662.67 & 76 & 333.25 \\
& 9 & 8350.00 & 4983.29 & 74 &1588.36 & 50 &90.22 & 91 & 3576.49 & 71 & 328.55\\
&10 &10606.00 & 7477.32 & 75 &1876.30 & 56 &103.98& 98 & 4662.70 & 82 & 360.30 \\
\cline{2-12}
&avg.&&& 83 &1303.11& 69 &98.48 & 94 & 3211.58 & 83 & 276.49 \\
\hline
\end{tabular}
}
\end{table}
The results for the clique structure are reported in Tables~\ref{tab:clique-1} and \ref{tab:clique-2} for the cases with $m=1$ and $m=2$, respectively.
The definition of columns in each table is similar to that of Tables~\ref{tab:bipartite-1} and \ref{tab:bipartite-2}.
For the clique structures, we observe similar patterns to those of the bipartite case in the gap improvement and solution time of the different approaches.
\begin{comment}
\subsection{Star Structure.}
In this section, we perform computational experiments on star structures as a sparse network model; see Figure~\ref{fig:network3}.
We consider three problem sizes, where the number of nodes in the underlying network is $31$, $61$, and $101$.
For example, for size $31$, there is one node at the center, and $30$ nodes connected to it.
For each problem size, we create $10$ randomly generated instances for set $\mcl{S}$ with the following specifications for its underlying network $\mr{G} = (\mr{V}, \mr{A})$.
For the star networks, there is an arc between each node of the network and the center node.
Half of these arcs will have the direction into the center node, and half will have the direction out of the center node.
The coefficient of the $z$ variables in the objective function is randomly generated from a uniform distribution between $(-18,22)$.
Similarly to the case for the clique structure, this interval is shifted to widen the optimality gap for a meaningful comparison of the different approaches.
The coefficient for $x$ variables in the objective function is randomly generated from a discrete uniform distribution between $(0,20)$.
Each node that has an outgoing arc connected to the center node is considered a supply node, whose supply is generated randomly from a discrete uniform distribution between $(0,150)$.
Each node that has an incoming arc connected to the center node is considered a demand node, whose demand is generated randomly from a discrete uniform distribution between $(0,100)$.
The center node is a hub node with zero supply/demand.
We note here that this model is not balanced, unlike the network structures studied in the previous sections, because a balanced model will have a single solution only.
As a result, we change the equality flow-balance constraints to inequalities to account for an unbalanced model.
We set the capacity of each arc equal to the supply/demand of the node it is adjacent to.
\begin{table}[!ht]
\centering
\caption{Evaluating EC\&R\xspace cutting planes for the star structure with $m = 1$}
\label{tab:star-1}
\begin{tabular}{|c|c||c|c||c|c||c|c|}
\hline
Size & \# & Opt & LP & \multicolumn{2}{c||}{Full EC\&R\xspace} & \multicolumn{2}{c|}{Separation EC\&R\xspace} \\ \cline{5-8}
& & & & Gap (\%)& Time (s) & Gap (\%)& Time (s) \\ \hline
31 & 1 & 13987.72 & 13011.58 & 99 & 2.31 & 99 & 0.08 \\
& 2 & 8814.00 & 8537.33 & 99 & 2.67 & 99 & 0.05 \\
& 3 & 14746.00 & 14384.96 & 99 & 2.52 & 99 & 0.04 \\
& 4 & 10711.93 & 10486.39 & 99 & 2.45 & 99 & 0.06 \\
& 5 & 6815.50 & 6342.41 & 99 & 2.49 & 99& 0.11 \\
& 6 & 10920.07 & 9589.44 & 99 & 2.48 & 99& 0.05 \\
& 7 & 9162.00 & 8958.50 & 99 & 2.67 & 99& 0.07 \\
& 8 & 9915.00 & 5128.99 & 99 & 2.53 & 99& 0.04 \\
& 9 & 15736.74 & 15390.30 & 99 & 1.51 & 99& 0.04 \\
& 10 & 9597.49 & 9037.50 & 99 & 1.48 & 92& 0.25 \\
\cline{2-8}
& avg. && & 99 & 2.52 & 99 & 0.06 \\
\hline
61 & 1 &28166.67 &27798.56 & 99 & 21.09 & 99 & 0.70 \\
& 2 &26038.68 &25448.46 & 99 & 20.10 & 99 & 0.72 \\
& 3 &24955.06 &24597.34 & 99 & 18.02 & 99 & 0.38 \\
& 4 &18437.22 &17683.77 & 99 & 18.31 & 99 & 0.70 \\
& 5 &26749.81 &26175.38 & 99 & 19.03 & 99 & 0.34 \\
& 6 &30736.00 &30512.41 & 99 & 18.72 & 99 & 0.38 \\
& 7 &27021.96 &25912.78 & 99 & 20.07 & 99 & 0.73 \\
& 8 &35728.75 &34858.62 & 99 & 17.84 & 99 & 0.68 \\
& 9 &32932.00 &32605.94 & 99 & 17.96 & 99 & 0.36 \\
& 10 &27209.69 &26928.82 & 99 & 18.92 & 99 & 0.80 \\
\cline{2-8}
& avg. && & 99 & 19.01 & 99 & 0.58 \\
\hline
101 & 1 & 33395.34 & 32833.66 & 99 & 93.21 & 99 & 3.39 \\
& 2 & 39813.00 & 39612.49 & 99 & 88.51 & 99 & 1.70 \\
& 3 & 43941.00 & 43556.39 & 99 & 84.34 & 99 & 1.71 \\
& 4 & 51040.68 & 50409.79 & 99 & 80.81 & 99 & 1.86 \\
& 5 & 42475.00 & 41867.57 & 99 & 79.27 & 99 & 1.70 \\
& 6 & 45239.38 & 42782.59 & 99 & 82.14 & 99 & 1.68 \\
& 7 & 33604.00 & 33298.75 & 99 & 80.79 & 99 & 3.32 \\
& 8 & 42334.00 & 41938.69 & 99 & 84.05 & 99 & 1.70 \\
& 9 & 38056.00 & 37124.38 & 99 & 86.57 & 99 & 6.51 \\
& 10 & 40634.00 & 40411.84 & 99 & 83.32 & 99 & 1.83 \\
\cline{2-8}
& avg. && & 99 & 84.30 & 99 & 2.54 \\
\hline
\end{tabular}
\end{table}
\begin{table}[!ht]
\centering
\caption{Evaluating EC\&R\xspace cutting planes for the star structure with $m = 2$}
\label{tab:star-2}
\resizebox{1.00\textwidth}{!}{
\begin{tabular}{|c|c||c|c||c|c||c|c||c|c||c|c|}
\hline
Size & \# & Opt & LP & \multicolumn{4}{c||}{Tree EC\&R\xspace} & \multicolumn{4}{c|}{Forest EC\&R\xspace} \\ \cline{5-12}
& & & & \multicolumn{2}{c||}{Full} & \multicolumn{2}{c||}{Separation} & \multicolumn{2}{c||}{Full}& \multicolumn{2}{c|}{Separation}\\ \cline{5-12}
& & & & Gap & Time & Gap & Time & Gap & Time & Gap & Time \\ \hline
\hline
31 & 1 & 5954.05 & 5597.23 & 24 & 3.89 & 24 & 0.37 & 99 & 8.31 & 25 & 1.58 \\
& 2 & 7274.95 & 6760.15 & 73 & 3.87 & 73 & 0.30 & 97 & 22.20& 83 & 1.57 \\
& 3 & 14222.00& 13987.42& 99 & 1.99 & 99 & 0.09 & 99 & 7.96 & 99 & 0.41 \\
& 4 & 15030.00& 14788.34& 99 & 2.00 & 99 & 0.11 & 99 & 8.16 & 99 & 0.46 \\
& 5 & 14376.69& 14116.10& 99 & 1.99 & 99 & 0.08 & 99 & 8.10 & 99 & 0.44 \\
& 6 & 12787.00& 11413.63& 99 & 1.97 & 99 & 0.09 & 99 & 8.29 & 99 & 0.49 \\
& 7 & 18676.00& 18430.77& 99 & 1.93 & 99 & 0.10 & 99 & 8.18 & 99 & 0.39 \\
& 8 & 13575.26& 12518.69& 71 & 3.80 & 71 & 0.27 & 99 & 8.52 & 86 & 1.21\\
& 9 & 8794.00 & 8394.10 & 99 & 1.93 & 99 & 0.20 & 99 & 8.39& 99 & 0.78 \\
& 10& 17206.00& 16892.27& 99 & 1.93 & 99 & 0.09 & 99 & 8.27& 99 & 0.44 \\
\cline{2-12}
&avg. &&& 86 & 2.53 & 86 & 0.17 & 99 & 9.64& 89 & 0.78 \\
\hline
61 & 1 & 11506.86 & 10818.00 & 00 & 14.53 & 00 & 0.74 & 11 & 188.21 & 11 & 12.34 \\
& 2 & 23037.00 & 21149.27 & 98 & 29.16 & 96 & 2.74 & 99 & 70.00 & 99 & 3.65 \\
& 3 & 23514.02 & 21189.56 & 79 & 29.31 & 75 & 2.77 & 99 & 70.07 & 91 & 12.85 \\
& 4 & 24727.42 & 24316.71 & 99 & 29.03 & 99 & 1.41 & 99 & 69.20 & 99 & 6.52 \\
& 5 & 21673.20 & 20020.22 & 99 & 15.61 & 99 & 2.80 & 99 & 68.47 & 99 & 13.00 \\
& 6 & 23588.00 & 22946.48 & 99 & 15.03 & 99 & 0.79 & 99 & 66.55 & 99 & 3.33 \\
& 7 & 25150.00 & 24940.65 & 99 & 14.92 & 99 & 0.71 & 99 & 65.20 & 99 & 3.32 \\
& 8 & 18171.81 & 17943.15 & 06 & 27.65 & 06 & 1.40 & 78 & 237.90& 29 & 20.96 \\
& 9 & 26629.00 & 24700.47 & 99 & 14.68 & 99 & 0.75 & 99 & 66.88 & 99 & 3.54 \\
&10 & 25784.93 & 24720.23 & 84 & 59.22 & 83 & 3.53 & 99 & 183.27& 85 & 15.57 \\
\cline{2-12}
& avg.&&& 76 & 24.91 & 75 & 1.76 & 88 & 108.58 & 81 & 9.51 \\
\hline
101& 1 &46589.89 & 44623.20 & 53 & 280.95 & 50 &12.97 & 99 & 321.88 & 99 & 30.15 \\
& 2 &35633.83 & 33233.90 & 29 & 145.62 & 29 &13.14 & 97 & 941.36 & 40 & 74.98 \\
& 3 &23970.87 & 23277.36 & 69 & 336.61 & 67 &26.31 & 99 & 343.01 & 69 & 131.32\\
& 4 &32991.28 & 30641.69 & 42 & 276.79 & 34 &13.19 & 99 & 945.28 & 88 & 59.00 \\
& 5 &39693.64 & 39255.76 & 99 & 72.45 & 99 & 9.82 & 99 & 356.83 & 99 & 16.77 \\
& 6 &39634.00 & 38678.27 & 99 & 72.00 & 99 & 3.65 & 99 & 356.89 & 99 & 16.35 \\
& 7 &33496.72 & 32500.26 & 11 &269.16 & 00 & 3.39 & 95 & 1428.29 & 56 & 101.05\\
& 8 & 8179.74 & 21823.48 & 19 &206.84 & 19 & 12.98 & 90 & 598.93 & 53 & 31.04 \\
& 9 &36653.80 & 33817.44 & 70 &1588.36& 55 & 17.70 & 99 & 589.58 & 63 & 74.96\\
&10 &40215.00 & 39390.27 & 99 &1876.30& 99 & 3.78 & 99 & 319.21 & 99 & 16.69 \\
\cline{2-12}
&avg.&&& 59 &208.62& 53 &11.69 & 98 & 620.13 & 77 & 55.23 \\
\hline
\end{tabular}
}
\end{table}
\end{comment}
\subsection{Cycle Structure.}
In this section, we perform computational experiments on cycle structures; see Figure~\ref{fig:network3}.
This structure represents a low-level density for the underlying graph.
We consider three problem sizes, where the number of nodes in the underlying network is $50$, $100$, and $200$.
For each problem size, we create $10$ randomly generated instances for set $\mcl{S}$ with the following specifications for its underlying network $\mr{G} = (\mr{V}, \mr{A})$.
Each node in the cycle is either a supply or a demand node.
The adjacent nodes to a supply node are both demand nodes, and the adjacent nodes to a demand node are both supply nodes.
The direction of each arc in the cycle is from the supply node to the demand node incident to that arc.
The coefficient of the $z$ variables in the objective function is randomly generated from a uniform distribution between $(-18,22)$.
The coefficient for $x$ variables in the objective function is randomly generated from a discrete uniform distribution between $(0,20)$.
The supply for the supply nodes is generated randomly from a discrete uniform distribution between $(100,200)$.
The demand for the demand nodes is generated randomly from a discrete uniform distribution between $(0,100)$.
We note here that this model is not balanced, unlike the network structures studied in the previous sections, because a balanced model would admit only a single feasible solution.
As a result, we change the equality flow-balance constraints to inequalities to account for the unbalanced model.
We set the capacity of each arc equal to $150$ to ensure that the problem is feasible.
The computational results are given in Tables~\ref{tab:cycle-1} and \ref{tab:cycle-2} for the cases with $m=1$ and $m=2$, respectively.
The columns in these tables are defined similarly to those of the previous tables, with the difference that the separation columns are omitted: implementing the full EC\&R\xspace approach is fast, hence a separation oracle is not necessary.
This fast performance can be attributed to the sparsity of the underlying graph, which allows a complete set of EC\&R\xspace inequalities to be generated in a short amount of time since the aggregation options to obtain the desired cancellations are limited.
As a result, we can achieve high gap improvements for larger size problems as evidenced in Tables~\ref{tab:cycle-1} and \ref{tab:cycle-2}.
\begin{table}[!t]
\centering
\caption{Evaluating EC\&R\xspace cutting planes for the cycle structure with $m = 1$}
\label{tab:cycle-1}
\begin{tabular}{|c|c||c|c||c|c|}
\hline
Size & \# & Opt & LP & \multicolumn{2}{c||}{Full EC\&R\xspace} \\
\cline{5-6}
& & & & Gap (\%)& Time (s) \\
\hline
50 & 1 & 2867.15 & -80.28 & 99 & 0.04 \\
& 2 & 5797.12 & 2704.93 & 99 & 0.01 \\
& 3 &-1552.82 &-2147.89 & 99 & 0.03 \\
& 4 & 5365.32 & 3288.67 & 99 & 0.02 \\
& 5 & 980.96 &-1389.79 & 99 & 0.01 \\
& 6 & -266.16 &-2592.61 & 99 & 0.03 \\
& 7 & 1250.60 &-354.21 & 99 & 0.02 \\
& 8 &-3119.87 &-4349.18 & 99 & 0.02 \\
& 9 & 733.80 &-118.27 & 99 & 0.01 \\
& 10 & 2210.15 & 243.57 & 99 & 0.03 \\
\cline{2-6}
& avg. && & 99 & 0.02 \\
\hline
100& 1 &-1783.35 &-3668.73 & 99 & 0.06 \\
& 2 &-2069.08 &-3695.66 & 99 & 0.03 \\
& 3 & 8678.79 & 1677.03 & 99 & 0.03 \\
& 4 &-4346.53 &-5407.01 & 99 & 0.05 \\
& 5 & 1243.70 &-2122.95 & 99 & 0.03 \\
& 6 &15120.43 & 4885.57 & 99 & 0.03 \\
& 7 & 1235.43 &-3377.43 & 99 & 0.03 \\
& 8 & 1256.49 &-1378.17 & 99 & 0.03 \\
& 9 &11766.20 & 4017.61 & 99 & 0.05 \\
& 10 & 3647.78 & -435.92 & 99 & 0.04 \\
\cline{2-6}
& avg. && & 99 & 0.04 \\
\hline
200& 1 & 366.47 & -7118.60 & 99 & 0.08 \\
& 2 & -7235.71 &-12323.90 & 99 & 0.12 \\
& 3 & 3544.59 & -5999.36 & 99 & 0.07 \\
& 4 & 10746.70 & 1156.33 & 99 & 0.07 \\
& 5 & 12870.76 & -188.56 & 99 & 0.09 \\
& 6 & 11889.94 & 3498.13 & 99 & 0.07 \\
& 7 & 3482.99 & -4362.33 & 99 & 0.08 \\
& 8 & -1885.01 &-10801.39 & 99 & 0.06 \\
& 9 & 22166.25 & 11341.87 & 99 & 0.08 \\
& 10 & -348.45 & -8025.35 & 99 & 0.09 \\
\cline{2-6}
& avg. && & 99 & 0.08 \\
\hline
\end{tabular}
\end{table}
\begin{table}[!ht]
\centering
\caption{Evaluating EC\&R\xspace cutting planes for the cycle structure with $m = 2$}
\label{tab:cycle-2}
\begin{tabular}{|c|c||c|c||c|c||c|c|}
\hline
Size & \# & Opt & LP & \multicolumn{2}{c||}{Full Tree EC\&R\xspace} & \multicolumn{2}{c|}{Full Forest EC\&R\xspace} \\
\cline{5-8}
& & & & Gap & Time & Gap & Time \\
\hline
50 & 1 &-1590.18 &-8576.67 & 87 & 0.13 & 99 & 0.26 \\
& 2 &-4622.80 &-13593.67& 85 & 0.10 & 96 & 0.32 \\
& 3 &-7794.96 &-15165.05& 63 & 0.11 & 87 & 0.37 \\
& 4 &-1503.33 &-6513.07 & 89 & 0.16 & 99 & 0.27 \\
& 5 & 4433.70 &-6508.12 & 94 & 0.12 & 99 & 0.23 \\
& 6 & -284.46 &-7786.90 & 70 & 0.11 & 91 & 0.37 \\
& 7 & -1140.91&-14831.40& 60 & 0.09 & 82 & 0.32 \\
& 8 & 6625.82 & -1629.65& 93 & 0.09 & 99 & 0.23 \\
& 9 & -2116.06&-5866.54 & 99 & 0.03 & 99 & 0.17 \\
& 10& 88.65 &-8106.67 & 89 & 0.09 & 97 & 0.33 \\
\cline{2-8}
&avg. &&& 83 & 0.10 & 95 & 0.29 \\
\hline
100& 1 & 900.68 &-16607.11 & 74 & 0.22 & 90 & 0.71 \\
& 2 & 140.56 &-17583.33 & 92 & 0.22 & 99 & 0.31 \\
& 3 & -131.98 &-20668.01 & 73 & 0.26 & 87 & 0.66 \\
& 4 & -231.62 &-20736.33 & 80 & 0.34 & 95 & 0.63 \\
& 5 & -1709.59 &-18456.56 & 66 & 0.24 & 81 & 0.68 \\
& 6 & 6992.68 &-16783.92 & 74 & 0.17 & 85 & 0.66 \\
& 7 & 1326.26 &-20135.15 & 75 & 0.18 & 82 & 0.63 \\
& 8 & 90.11 &-19442.39 & 72 & 0.17 & 86 & 0.67 \\
& 9 & 7523.83 &-12600.35 & 88 & 0.27 & 99 & 0.55 \\
&10 & -2365.06 &-18051.56 & 77 & 0.19 & 96 & 0.72 \\
\cline{2-8}
& avg.&&& 77 & 0.23 & 90 & 0.62 \\
\hline
200& 1 &-4498.84 &-42056.21 & 88 & 0.42 & 95 &1.37 \\
& 2 &-2270.67 &-41823.19 & 77 & 0.40 & 86 &1.49 \\
& 3 &9853.81 &-38627.76 & 70 & 0.42 & 82 &1.48 \\
& 4 &10420.28 &-24979.47 & 74 & 0.45 & 88 &1.35 \\
& 5 &-10699.73 &-46329.12 & 77 & 0.43 & 94 & 1.34 \\
& 6 & 1474.74 &-34978.20 & 86 & 0.39 & 99 & 1.08 \\
& 7 & 2446.84 &-36567.37 & 78 & 0.44 & 95 & 1.37 \\
& 8 &-7058.24 &-40441.54 & 85 & 0.40 & 99 & 1.01 \\
& 9 &-14652.28&-48186.48 & 74 & 0.41 & 93 & 1.37 \\
&10 &9392.03 &-28443.66 & 81 & 0.42 & 94 & 1.51 \\
\cline{2-8}
&avg.&&& 79 & 0.42 & 93 & 1.34 \\
\hline
\end{tabular}
\end{table}
\section{Conclusion} \label{sec:conclusion}
We study a bipartite bilinear set, where the variables in one partition belong to a network flow model, and the variables in the other partition belong to a simplex.
We design a convexification technique based on the aggregation of side constraints with appropriate weights, which produces an important class of facet-defining inequalities for the convex hull of the bilinear set; these inequalities fully describe the convex hull in the special case where the simplex contains a single variable.
We show that each such inequality can be obtained by considering the constraints corresponding to the nodes of the underlying network that form a special tree or forest structure.
This property leads to an explicit derivation of strong inequalities through identifying special graphical structures in the network model.
These inequalities can be added to the classical McCormick relaxation to strengthen the relaxation and improve the dual bounds, as corroborated in the preliminary computational experiments conducted on various basic network structures.
\bibliographystyle{informs2014}
\section{Qualitative examples of dense video captioning predictions}\label{sec:addquali}
In Figure~\ref{fig:qualitative}, we show qualitative results of dense event captioning by our Vid2Seq{} model.
Here, in Figures~\ref{fig:qualitative2} and~\ref{fig:qualitative3}, we show additional results on examples from the YouCook2 and ActivityNet Captions datasets.
These results show that Vid2Seq{} can predict meaningful dense captions and event boundaries in diverse scenarios, with or without transcribed speech input, \eg series of instructions in cooking recipes (Figure~\ref{fig:qualitative2}) or actions in human sports or leisure activities (first three examples in Figure~\ref{fig:qualitative3}).
The last example in Figure~\ref{fig:qualitative3} illustrates a failure case where the model hallucinates events that are not visually grounded such as `one man hats off to the camera`.
\section{Experimental setup}\label{sec:adddetails}
In this section, we complement the information provided in Section~\ref{sec:setup} about the datasets we use (Section~\ref{sec:adddatasets}). We also give additional implementation details (Section~\ref{sec:addimplem}).
\subsection{Datasets}\label{sec:adddatasets}
\noindent \textbf{YT-Temporal-1B}~\cite{zellers2022merlot} consists of 18.821M unlabeled narrated videos covering about 150 years of video content for pretraining.
Compared with HowTo100M~\cite{miech19howto100m}, this dataset was created to cover a wider range of domains and not only instructional videos.
\smallskip
\noindent \textbf{HowTo100M}~\cite{miech19howto100m} consists of 1.221M unlabeled narrated instructional videos covering about 15 years of video content for pretraining.
\smallskip
\noindent \textbf{YouCook2}~\cite{youcook2} has 1,790 untrimmed videos of cooking procedures.
On average, each video lasts 320s and is annotated with 7.7 temporally-localized imperative sentences.
The dataset is split into 1,333 videos for training and 457 videos for validation.
\smallskip
\noindent \textbf{ViTT}~\cite{huang2020multimodal} consists of 7,672 untrimmed instructional videos from the YouTube-8M dataset~\cite{abu2016youtube}.
Compared to YouCook2, ViTT was created to better reflect the distribution of instructional videos in the wild.
On average, each video lasts 250s and is annotated with 7.1 temporally-localized short tags.
The dataset is split into 5,476, 1,102 and 1,094 videos for training, validation and testing, respectively.
Videos in the validation and test sets are provided with multiple sets of dense event captioning annotations.
Following~\cite{huang2020multimodal}, we treat each set of annotations as a single example during evaluation and discard videos with more than 3 sets of annotations.
\smallskip
\noindent \textbf{ActivityNet-Captions}~\cite{krishna2017dense} contains 14,934 untrimmed videos of various human activities.
Different from YouCook2 and ViTT where most videos contain transcribed speech content, we find that 68\% of videos in ActivityNet Captions do not have transcribed narration.
On average, each video lasts 120s and is annotated with 3.7 temporally-localized sentences.
The dataset is split into 10,009 and 4,925 videos for training and validation, respectively.
Videos in the validation set are provided with two sets of dense video captioning annotations.
Following prior work~\cite{wang2021end}, we use both sets of annotations for evaluation, by computing the average of the scores over each set for SODA\_c and by using the standard evaluation tool~\cite{krishna2017dense} for all other dense event captioning metrics.
For video paragraph captioning, we follow~\cite{wang2021end} and report results on the 'val-ae' split that includes 2,460 videos~\cite{zhou2019grounded, lei2020mart}.
\smallskip
\noindent \textbf{MSR-VTT}~\cite{xu16msrvtt} consists of 10,000 open domain video clips.
The duration of each video clip is between 10 and 30 seconds. 20 natural language descriptions are manually annotated for each clip.
The dataset is split into 6,513, 497 and 2,990 videos for training, validation and testing, respectively.
\smallskip
\noindent \textbf{MSVD}~\cite{chen2011collecting} consists of 1,970 open domain video clips.
The duration of each video clip is between 10 and 30 seconds. Each video clip has roughly 40 manually annotated captions.
The dataset is split into 1,200, 100 and 670 videos for training, validation and testing, respectively.
\subsection{Implementation details}\label{sec:addimplem}
\paragraph{Architecture.}
The visual temporal transformer encoder $f^t$, the text encoder $g^t$ and the text decoder $h^t$ all have 12 layers, 12 heads, embedding dimension 768, and MLP hidden dimension of 2048.
The text encoder and decoder sequences are truncated or padded to $L=S=1000$ tokens during pretraining, and $S=1000$ and $L=256$ tokens during finetuning.
At inference, we use beam search decoding where we track the top 4 sequences and apply a length normalization of 0.6.
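For concreteness, a minimal sketch of the length-normalized scoring used to rank beam hypotheses is given below. Only the beam size (4) and normalization strength (0.6) are taken from the settings above; the GNMT-style penalty is an assumption for illustration, not necessarily the exact normalization in our implementation.
\begin{verbatim}
import numpy as np

def length_normalized_score(token_logprobs, alpha=0.6):
    # Divide the summed log-probability by a length penalty so that
    # longer captions are not unfairly penalized; alpha=0.6 matches
    # the normalization strength above (GNMT-style penalty assumed).
    lp = ((5.0 + len(token_logprobs)) / 6.0) ** alpha
    return float(np.sum(token_logprobs)) / lp

# Keep the 4 highest-scoring hypotheses, mirroring the beam size above.
beams = [np.log([0.9, 0.8]), np.log([0.7, 0.7, 0.9])]
best = sorted(beams, key=length_normalized_score, reverse=True)[:4]
\end{verbatim}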
\paragraph{Training.}
We use the Adam optimizer~\cite{kingma15adam} with $\beta=(0.9, 0.999)$ and no weight decay.
During pretraining, we use a learning rate of $1e^{-4}$, warming it up linearly (from 0) for the first 1000 iterations, and keeping it constant for the remaining iterations.
During finetuning, we use a learning rate of $3e^{-4}$, warming it up linearly (from 0) for the first 10\% of iterations, followed by a cosine decay (down to 0) for the remaining 90\%.
During finetuning, we use a batch size of 32 videos split on 16 TPU v4 chips.
We finetune for 40 epochs on YouCook2, 20 epochs on ActivityNet Captions and ViTT, 5 epochs on MSR-VTT and 10 epochs on MSVD.
We clip the maximum norm of the gradient to 0.1 during pretraining, and 1 during finetuning.
For data augmentation, we use random temporal cropping.
For regularization, we use label smoothing~\cite{szegedy2016rethinking} with value 0.1 and dropout~\cite{srivastava2014dropout} with probability 0.1.
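The finetuning learning-rate schedule described above can be summarized by the following plain-Python sketch; the function itself is illustrative and not the exact training code.
\begin{verbatim}
import math

def finetune_lr(step, total_steps, peak_lr=3e-4):
    # Linear warmup from 0 for the first 10% of steps,
    # then cosine decay to 0 over the remaining 90%.
    warmup_steps = max(1, int(0.1 * total_steps))
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * peak_lr * (1.0 + math.cos(math.pi * progress))
\end{verbatim}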
\section{Experiments}\label{sec:addexperiments}
In this section, we provide additional experiments that complement the results presented in Section~\ref{sec:experiments}.
We first show the importance of pretraining in our proposed few-shot setting in Section~\ref{sec:fewshot2}.
Then we provide additional ablation studies in the standard fully-supervised setting in Section~\ref{sec:ablation2}, where we ablate various factors including pretraining on long narrated videos, the time tokenization process and the number of time tokens, the sequence construction process, the temporal positional embeddings and the initialization of the language model.
\begin{table}[t]
\begin{center}
\setlength\tabcolsep{4pt}
\resizebox{1\linewidth}{!}{
\begin{tabular}{lcc|ccc|ccc|ccc}
\toprule
& \multirow{2}{*}{Data}
& \multirow{2}{*}{Pretrain}
& \multicolumn{3}{c|}{YouCook2}
& \multicolumn{3}{c|}{ViTT}
& \multicolumn{3}{c}{ActivityNet} \\
& & & \small{S} & \small{C} & \small{M}
& \small{S} & \small{C} & \small{M}
& \small{S} & \small{C} & \small{M} \\
\midrule
1. & 1\% & \xmark
& 0.0 & 0.0 & 0.0
& 0.0 & 0.0 & 0.0
& 0.0 & 0.0 & 0.1 \\
2. & 1\% & \cmark
& 2.4 & 10.1 & 3.3
& 2.0 & 7.4 & 1.9
& 2.2 & 6.2 & 3.2 \\
3. & 10\% & \xmark
& 0.1 & 0.0 & 0.2
& 3.3 & 0.4 & 3.3
& 3.4 & 11.9 & 4.6 \\
4. & 10\% & \cmark
& 3.8 & 18.4 & 5.2
& 10.7 & 28.6 & 6.0
& 4.3 & 20.0 & 6.1 \\
5. & 50\% & \xmark
& 1.8 & 8.5 & 2.4
& 6.5 & 18.7 & 3.9
& 4.6 & 13.1 & 6.3 \\
6. & 50\% & \cmark
& 6.2 & 32.1 & 7.6
& 12.5 & 38.8 & 7.8
& 5.4 & 27.5 & 7.8 \\
7. & 100\% & \xmark
& 4.0 & 18.0 & 4.6
& 7.9 & 21.2 & 6.2
& 5.4 & 18.8 & 7.1 \\
8. & 100\% & \cmark
& \textbf{7.9} & \textbf{47.1} & \textbf{9.3}
& \textbf{13.5} & \textbf{43.5} & \textbf{8.5}
& \textbf{5.8} & \textbf{30.1} & \textbf{8.5} \\
\bottomrule
\end{tabular}
}
\vspace{-0.2cm}
\caption{\small \textbf{Impact of our pretraining on few-shot dense event captioning}, by finetuning Vid2Seq{} using a small fraction of the downstream training dataset.}
\label{table:fewshot2}
\end{center}
\vspace{-0.3cm}
\end{table}
\subsection{Importance of pretraining in few-shot settings}\label{sec:fewshot2}
In Section~\ref{sec:ablation}, we show the benefits of our pretraining method in the fully-supervised setting, \ie when using 100\% of the downstream training dataset.
In Table~\ref{table:fewshot2}, we further show that our pretraining method is of considerable importance in the few-shot setting defined in Section~\ref{sec:fewshot}, \ie when using a smaller fraction of the downstream training dataset.
In particular, our pretraining method enables our Vid2Seq{} model to achieve non-zero performance when using only 1\% of the downstream training dataset (rows 1 and 2).
\begin{table}[t]
\centering
\vspace{-0pt}
\begin{center}
\setlength\tabcolsep{6pt}
\resizebox{.85\linewidth}{!}{
\begin{tabular}{cc|ccc|ccc}
\toprule
& \multirow{2}{*}{\makecell{\small{Max number} \\ \small{of narrations}}} & \multicolumn{3}{c|}{YouCook2} & \multicolumn{3}{c}{ActivityNet} \\ & & \small{S} & \small{C} & \small{F1} & \small{S} & \small{C} & \small{F1} \\
\midrule
1. & \textit{No pretraining}
& 4.0 & 18.0 & 18.1
& 5.4 & 18.8 & 49.2 \\
2. & 1
& 6.0 & 32.1 & 22.1
& 5.1 & 22.9 & 48.1 \\
3. & 10
& 6.5 & 34.6 & 23.6
& 5.4 & 27.1 & 50.3 \\
4. & $\infty$
& \textbf{7.9} & \textbf{47.1} & \textbf{27.3}
& \textbf{5.8} & \textbf{30.1} & \textbf{52.4} \\
\bottomrule
\end{tabular}}
\vspace{-0.2cm}
\caption{\small \textbf{Ablation showing the importance of pretraining on long narrated videos}, by varying the maximum number of narration sentences that a randomly cropped video can cover.
$\infty$ means the cropping is unrestricted and can sample arbitrarily long videos.}
\vspace{-0.3cm}
\label{table:long}
\end{center}
\end{table}
\begin{table}[t]
\begin{center}
\setlength\tabcolsep{6pt}
\resizebox{1\linewidth}{!}{
\begin{tabular}{lcc|ccc|ccc}
\toprule
& \multirow{2}{*}{Pretraining Data}
& \multirow{2}{*}{Model}
& \multicolumn{3}{c|}{YouCook2}
& \multicolumn{3}{c}{ActivityNet} \\
& & & \small{S} & \small{C} & \small{F1}
& \small{S} & \small{C} & \small{F1} \\
\midrule
1. & ImageNet & ViT-B/16
& 6.6 & 40.2 & 24.3
& 4.5 & 17.2 & 49.3 \\
2. & CLIP & ViT-B/16
& 7.7 & 46.3 & 26.5
& 5.6 & 28.4 & 51.7 \\
3. & CLIP & ViT-L/14
& \textbf{7.9} & \textbf{47.1} & \textbf{27.3}
& \textbf{5.8} & \textbf{30.1} & \textbf{52.4} \\
\bottomrule
\end{tabular}
}
\vspace{-0.2cm}
\caption{\small \textbf{Ablation on the pretraining data and model size of the visual backbone $f^s$.}}
\label{table:backbone}
\end{center}
\vspace{-0.3cm}
\end{table}
\subsection{Additional ablation studies}\label{sec:ablation2}
We here complement ablation studies reported in Section~\ref{sec:ablation}, using the same default settings, evaluation metrics and downstream datasets.
\paragraph{Pretraining on long narrated videos.}
In Table~\ref{table:pretraining}, we show the benefits of pretraining on untrimmed videos in comparison with the standard practice of pretraining on short, trimmed, video-speech segments~\cite{huang2020multimodal, luo2020univilm, seo2022end}.
In Table~\ref{table:long}, we further evaluate the importance of sampling long narrated videos during pretraining.
By default, at each training iteration, we randomly temporally crop each narrated video without constraints, resulting in a video that can span over hundreds of transcribed speech sentences.
We here evaluate a baseline that constrains this cropping process such that the cropped video only spans over a given maximum number of narration sentences.
Even with a maximum of 10 narration sentences, this baseline significantly underperforms our model trained in default settings where we sample longer untrimmed narrated videos (rows 1, 2 and 3).
This demonstrates that our model benefits from pretraining on long narrated videos.
\paragraph{Visual features.}
In Table~\ref{table:scale}, we show the benefits of scaling up the size of the pretraining dataset of narrated videos and the size of the language model.
In Table~\ref{table:backbone}, we further analyze the importance of the pretraining dataset and size of the visual backbone $f^s$.
We find that CLIP pretraining~\cite{radford2021learning} considerably improves over ImageNet pretraining~\cite{steiner2021train} with the same ViT-B/16 visual backbone model (row 2 vs 1).
Furthermore, scaling up the visual backbone size from ViT-B/16 to ViT-L/14 brings additional improvements (row 3 vs 2).
\begin{table}[t]
\begin{center}
\setlength\tabcolsep{6pt}
\resizebox{1\linewidth}{!}{
\begin{tabular}{lcc|ccc|ccc}
\toprule
& \multirow{2}{*}{Tokenization}
& \multirow{2}{*}{$N$}
& \multicolumn{3}{c|}{YouCook2}
& \multicolumn{3}{c}{ActivityNet} \\
& & & \small{S} & \small{C} & \small{F1}
& \small{S} & \small{C} & \small{F1} \\
\midrule
1. & Absolute & 20
& 0.3 & 0.2 & 0.9
& 3.2 & 23.0 & 23.1 \\
2. & Absolute & 100
& 3.5 & 25.7 & 12.0
& 4.8 & 25.5 & 41.5 \\
3. & Absolute & 500
& \textbf{7.9} & 39.8 & 24.3
& 5.4 & 28.1 & 48.6 \\
4. & Relative & 20
& 7.2 & 39.6 & 23.7
& 5.6 & 29.0 & 49.4 \\
5. & Relative & 100
& \textbf{7.9} & \textbf{47.1} & \textbf{27.3}
& \textbf{5.8} & \textbf{30.1} & 52.4 \\
6. & Relative & 500
& 7.2 & 40.0 & 25.0
& 5.7 & 28.6 & \textbf{52.5} \\
\bottomrule
\end{tabular}
}
\vspace{-0.2cm}
\caption{\small \textbf{Ablation on time tokenization (relative or absolute) and the number of time tokens $N$.}}
\label{table:time}
\end{center}
\vspace{-0.3cm}
\end{table}
\begin{table}[t]
\centering
\vspace{-0pt}
\begin{center}
\setlength\tabcolsep{4pt}
\resizebox{1.\linewidth}{!}{
\begin{tabular}{ccc|ccc|ccc}
\toprule
& \multirow{2}{*}{\makecell{\small{Dot symbol} \\ \small{between segments}}} & \multirow{2}{*}{\makecell{\small{Time tokens} \\ \small{Position}}} & \multicolumn{3}{c|}{YouCook2} & \multicolumn{3}{c}{ActivityNet} \\ & & & \small{S} & \small{C} & \small{F1} & \small{S} & \small{C} & \small{F1} \\
\midrule
1. & \xmark & \textit{After text}
& 7.9 & 48.3 & 26.7
& 5.6 & 29.8 & 51.1 \\
2. & \cmark & \textit{After text}
& \textbf{8.3} & \textbf{50.9} & 26.2
& 5.7 & \textbf{30.4} & 51.8 \\
3. & \xmark & \textit{Before text}
& 8.0 & 50.0 & \textbf{27.3}
& 5.6 & 28.2 & 50.7 \\
4. & \cmark & \textit{Before text}
& 7.9 & 47.1 & \textbf{27.3}
& \textbf{5.8} & 30.1 & \textbf{52.4} \\
\bottomrule
\end{tabular}}
\vspace{-0.2cm}
\caption{\small \textbf{Ablation on the sequence construction process.}}
\vspace{-0.3cm}
\label{table:sequence}
\end{center}
\end{table}
\paragraph{Time tokenization and number of time tokens.}
In Table~\ref{table:time}, we further ablate the time tokenization process presented in Section~\ref{sec:model}.
Our default time tokens represent relative timestamps in a video, as we quantize a video of duration $T$ into $N$ equally-spaced timestamps.
Another possibility is to use time tokens that represent absolute timestamps in the video, \ie the $k$-th token represents the $k$-th second in the video.
For both these variants, we vary the number of time tokens $N$.
For the relative time tokens, increasing $N$ makes the quantization more fine-grained but also spreads the data into more time tokens.
On the other hand, for the absolute time tokens, increasing $N$ increases the video duration that the time tokens can cover.
We find that the best dense video captioning results are obtained with the relative time tokens and $N=100$ time tokens (row 5).
\paragraph{Sequence construction.}
In Table~\ref{table:sequence}, we further ablate the sequence construction process presented in Section~\ref{sec:model}.
Our default sequence inserts the start and end time tokens of each segment before its corresponding text sentence.
Another possibility is to insert time tokens after each corresponding text sentence.
We find that both variants achieve similar results (rows 2 and 4), with the default sequence (row 4) resulting in slightly higher event localization performance (F1 Score) but slightly lower dense captioning results overall.
Furthermore, we observe that the dot symbols indicating the separation between different events have low importance (rows 1 and 2, rows 3 and 4).
\paragraph{Temporal positional embeddings.}
In Table~\ref{table:pretraining}, we show that time tokens in the speech sequence provide temporal information about the speech transcript to our model.
In Table~\ref{table:temporal}, we also evaluate the importance of the temporal positional embeddings which communicate temporal information from the visual stream to our model.
We find that these temporal embeddings are beneficial (row 2 vs 1).
\begin{table}[t]
\centering
\vspace{-0pt}
\begin{center}
\setlength\tabcolsep{6pt}
\resizebox{.85\linewidth}{!}{
\begin{tabular}{cc|ccc|ccc}
\toprule
& \multirow{2}{*}{\makecell{\small{Temporal} \\ \small{embeddings}}} & \multicolumn{3}{c|}{YouCook2} & \multicolumn{3}{c}{ActivityNet} \\ & & \small{S} & \small{C} & \small{F1} & \small{S} & \small{C} & \small{F1} \\
\midrule
1. & \xmark
& 6.8 & 42.0 & 24.9
& 5.3 & 27.0 & 50.6 \\
2. & \cmark
& \textbf{7.9} & \textbf{47.1} & \textbf{27.3}
& \textbf{5.8} & \textbf{30.1} & \textbf{52.4} \\
\bottomrule
\end{tabular}}
\vspace{-0.2cm}
\caption{\small \textbf{Ablation on the temporal positional embeddings.}}
\vspace{-0.3cm}
\label{table:temporal}
\end{center}
\end{table}
\begin{table}[t]
\centering
\vspace{-0pt}
\begin{center}
\setlength\tabcolsep{4pt}
\resizebox{1.\linewidth}{!}{
\begin{tabular}{ccc|ccc|ccc}
\toprule
& \multirow{2}{*}{\makecell{\small{Language Model} \\ \small{Initialization}}} & \multirow{2}{*}{\makecell{\small{Video-text} \\ \small{Pretraining}}} & \multicolumn{3}{c|}{YouCook2} & \multicolumn{3}{c}{ActivityNet} \\ & & & \small{S} & \small{C} & \small{F1} & \small{S} & \small{C} & \small{F1} \\
\midrule
1. & \xmark & \xmark
& 0.9 & 4.2 & 7.6
& 4.3 & 23.7 & 41.2 \\
2. & \cmark & \xmark
& 4.0 & 18.0 & 18.1
& 5.4 & 18.8 & 49.2 \\
3. & \xmark & \cmark
& \textbf{8.8} & \textbf{51.3} & \textbf{28.4}
& 5.7 & 28.7 & 51.2 \\
4. & \cmark & \cmark
& 7.9 & 47.1 & 27.3
& \textbf{5.8} & \textbf{30.1} & \textbf{52.4} \\
\bottomrule
\end{tabular}}
\vspace{-0.2cm}
\caption{\small \textbf{Ablation on language model initialization and pretraining.}}
\vspace{-0.3cm}
\label{table:initialization}
\end{center}
\end{table}
\paragraph{Language model initialization and pretraining.}
In Table~\ref{table:scale}, we show the benefits of using T5-Base instead of T5-Small.
In Table~\ref{table:initialization}, we further investigate the importance of initializing the language model from weights pretrained on Web text.
Without pretraining on narrated videos, we find that text-only initialization is helpful (rows 1 and 2).
Interestingly, after pretraining on narrated videos, we find that text-only initialization has little importance (rows 3 and 4), as it slightly improves the performance on ActivityNet Captions while resulting in a slight drop of performance on YouCook2.
We believe that this may be because of the domain gap between Web text and the imperative-style dense captions in YouCook2, which are more similar to transcribed speech in YT-Temporal-1B.
\section{Introduction}\label{sec:intro}
\input{intro.tex}
\section{Related Work}\label{sec:background}
\input{background.tex}
\section{Method}\label{sec:method}
\input{method.tex}
\vspace{-0.2cm}
\section{Experiments}\label{sec:experiments}
\input{results_arxiv.tex}
\vspace{-0.2cm}
\section{Conclusion}\label{sec:conclusion}
\input{conclusion.tex}
\input{ack.tex}
{\small
\bibliographystyle{ieee_fullname}
\subsection{Model}\label{sec:model}
We wish to design a model for dense video captioning that can capture relationships between events using visual and (transcribed) speech cues
in order to effectively localize and describe these events in untrimmed minutes-long videos.
To tackle this challenge, we cast dense video captioning as a sequence-to-sequence problem where the input and output sequences contain both the semantic information about the events, in the form of natural language descriptions, and the temporal localization of the events, in the form of timestamps.
In addition, to best leverage both the visual and the language signal, we develop an appropriate multi-modal encoder-decoder architecture.
As illustrated in Figure~\ref{fig:overview}, our architecture takes as input video frames $x=\{x_i\}_{i=1}^{F}$ together with the transcribed speech sequence $y=\{y_j\}_{j=1}^{S}$. The output of our model is an event sequence $z=\{z_k\}_{k=1}^{L}$, where each event contains both its textual description and timestamps corresponding to the temporal event locations in the video.
Below we explain the structure of the transcribed speech and event sequences constructed for our model as well as details of our model architecture.
\paragraph{Sequence construction.}\label{sec:sequence}
To model inter-event relationships in dense event captioning annotations (or the readily-available transcribed narration, see Section~\ref{sec:training}), we cast dense video captioning as predicting a single output sequence of tokens $z$.
This output event sequence is constructed by leveraging a text tokenizer augmented with special \emph{time tokens}.
Furthermore, we enable our architecture to jointly reason about the semantic and temporal information provided in the transcript of the input narration by constructing the input transcript sequence $y$ in a similar manner as the event sequence $z$.
Details are given next.
\noindent \textbf{\textit{Time tokenization.}}
We start from a text tokenizer with a vocabulary size $V$, and augment it with $N$ additional time tokens, resulting in a tokenizer with $V+N$ tokens.
The time tokens represent relative timestamps in a video, as we quantize a video of duration $T$ into $N$ equally-spaced timestamps.
In detail, we use the SentencePiece tokenizer~\cite{kudo2018sentencepiece} with vocabulary size $V=32,128$ and $N=100$.
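A minimal sketch of this relative time quantization is given below; the convention that time-token ids directly follow the $V$ text-token ids is an assumption for illustration.
\begin{verbatim}
def time_to_token(t, duration, n_bins=100, vocab_size=32128):
    # Quantize timestamp t in [0, duration] into one of n_bins
    # relative bins; time tokens are assumed to occupy ids
    # vocab_size .. vocab_size + n_bins - 1.
    b = min(n_bins - 1, int(n_bins * t / duration))
    return vocab_size + b

def token_to_time(token_id, duration, n_bins=100, vocab_size=32128):
    # Inverse map: bin-center timestamp of a time token.
    return (token_id - vocab_size + 0.5) * duration / n_bins
\end{verbatim}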
\noindent \textbf{\textit{Event sequence.}}
Our introduced tokenizer enables us to construct sequences that contain both video timestamps and text video descriptions.
We next explain how we construct the output event sequence $z$.
Note that videos have a variable number of events in standard dense video captioning datasets~\cite{huang2020multimodal, krishna2017dense, youcook2}.
Each event $k$ is characterized by a text segment, a start time and an end time.
We first construct for each event $k$ a sequence by concatenating its start time token $t_{start_k}$, its end time token $t_{end_k}$ and its text tokens $[z_{k_1}, ..., z_{k_{l_k}}]$.
Then we order all these sequences in increasing order of their start times and concatenate them.
In practice, each text segment ends with a dot symbol indicating the separation between different events.
Finally, the event sequence is obtained by prepending a BOS token and appending an EOS token to indicate the start and the end of the sequence, respectively, \ie $z=[BOS, t_{start_1}, t_{end_1}, z_{1_1}, ..., z_{1_{l_1}}, t_{start_2}, ..., EOS]$.
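The construction can be summarized by the following sketch, where word strings stand in for tokenized text and the special markers and id layout are illustrative assumptions.
\begin{verbatim}
def build_event_sequence(events, duration, n_bins=100, vocab_size=32128):
    # events: list of (start_sec, end_sec, caption) triples.
    # Output mixes integer time-token ids with word strings that
    # stand in for text tokens; <bos>/<eos> markers are assumed.
    def time_token(t):
        return vocab_size + min(n_bins - 1, int(n_bins * t / duration))

    seq = ["<bos>"]
    for start, end, caption in sorted(events, key=lambda e: e[0]):
        if not caption.endswith("."):
            caption += "."            # dot separates events
        seq += [time_token(start), time_token(end)] + caption.split()
    return seq + ["<eos>"]

# e.g. build_event_sequence([(12.0, 55.0, "Crack the eggs")], 320.0)
\end{verbatim}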
\footnotetext[1]{https://cloud.google.com/speech-to-text/docs/automatic-punctuation.}
\noindent \textbf{\textit{Transcribed speech sequence.}}
To enable the model to use both the transcribed speech and its corresponding timestamps, we convert the speech transcript into a speech sequence $y$ similarly as the input training dense event captions $z$.
This is done by segmenting the raw speech transcript into sentences with the Google Cloud API$^1$, and using each transcribed speech sentence with its corresponding timestamps analogously to an event in the previously explained process.
\paragraph{Architecture.}\label{sec:architecture}
We wish to design an architecture that can effectively model relationships between different events in untrimmed minutes-long videos.
To tackle this challenge, we propose a multi-modal encoder-decoder architecture, illustrated in Figure~\ref{fig:overview}, that
gradually refines and outputs the event sequence described above.
In detail, given an untrimmed minutes-long video, the visual encoder $f$ embeds its frames while the text encoder $g$ embeds transcribed speech and the corresponding timestamps.
Then a text decoder $h$ predicts event boundaries and text captions using the visual and transcribed speech embeddings.
The individual modules are described next.
\noindent \textit{\textbf{Visual encoder.}}
The visual encoder operates on a sequence of $F$ frames $x \in \mathbb{R}^{F \times H \times W \times C}$ where $H$, $W$ and $C$ are the height, width and the number of channels of each frame.
A visual backbone $f^s$ first encodes each frame separately and outputs frame embeddings $x^s = f^s(x) \in \mathbb{R}^{F \times d}$, where $d$ is the embedding dimension.
Then a transformer encoder~\cite{vaswani2017attention} $f^t$ models temporal interactions between the different frames, and outputs $F$ contextualized visual embeddings $x^t = f^t(x^s + x^p) \in \mathbb{R}^{F \times d}$, where $x^p \in \mathbb{R}^{F \times d}$ are learnt temporal positional embeddings, which communicate time information from visual inputs to the model.
\noindent In detail, the visual backbone is CLIP ViT-L/14~\cite{dosovitskiy2021an, radford2021learning} at resolution $224\times224$ pixels, pretrained to map images to text descriptions with a contrastive loss on Web-scraped image-text pairs.
We keep the backbone frozen for efficiency.
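For clarity, this two-stage visual encoding (a frozen per-frame backbone followed by a temporal transformer with learnt positional embeddings) can be sketched as follows; the callables and shapes are placeholders, not the actual implementation.
\begin{verbatim}
import numpy as np

def encode_video(frames, backbone, temporal_encoder, pos_embed):
    # frames: length-F sequence of images; backbone: frozen per-frame
    # encoder f^s; temporal_encoder: transformer f^t over frames;
    # pos_embed: (F, d) learnt temporal positional embeddings x^p.
    per_frame = np.stack([backbone(f) for f in frames])   # x^s: (F, d)
    return temporal_encoder(per_frame + pos_embed)        # x^t: (F, d)
\end{verbatim}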
\noindent \textit{\textbf{Text encoder.}}
The text encoder operates on a transcribed speech sequence of $S$ tokens $y \in \{1, ..., V+N\}^S$, where $V$ is the text vocabulary size, $N$ is the size of the vocabulary of time tokens and $S$ is the number of tokens in the transcribed speech sequence.
Note that the transcribed speech sequence includes time tokens to input the temporal information from the transcribed speech into the model.
An embedding layer $g^s \in \mathbb{R}^{(V+N) \times d}$ embeds each token independently and outputs semantic embeddings $y^s=g^s(y) \in \mathbb{R}^{S \times d}$.
Then a transformer encoder $g^t$ computes interactions in the transcribed speech sequence and outputs $S$ contextualized speech embeddings $y^t=g^t(y^s) \in \mathbb{R}^{S \times d}$.
\noindent \textit{\textbf{Text decoder.}}
The text decoder generates the event sequence $z$ by using the encoder embeddings, which are obtained by concatenating the visual and speech embeddings $x^t$ and $y^t$.
The text decoder is based on a causal transformer decoder $h^t$ that cross-attends to the encoder outputs, and at each autoregressive step $k$, self-attends to the previously generated tokens $\hat{z}^{t}_{<k}$ to output a contextualized representation $z^t_k=h^t(h^s(\hat{z}^{t}_{<k}), x^t, y^t) \in \mathbb{R}^{d}$ where $h^s \in \mathbb{R}^{(V+N) \times d}$ is the decoder token embedding layer.
Then a language modeling head $h^{l} \in \mathbb{R}^{d \times (V+N)}$ predicts a probability distribution over the joint vocabulary of text and time tokens in order to predict the next token in the event sequence, \ie $z^{l}_k = h^{l}(z^t_k) \in \mathbb{R}^{V + N}$.
\noindent \textit{\textbf{Text initialization.}}
We initialize the text encoder and the text decoder with T5-Base~\cite{raffel2020exploring} which has been pretrained on Web text corpora with a denoising loss.
Therefore their implementation and parameters also closely follow T5-Base, \eg they use relative positional embeddings and share their token embedding layer $g^s = h^s \in \mathbb{R}^{(V+N) \times d}$.
\subsection{Training}\label{sec:training}
In this Section, we describe how we leverage a large amount of unlabeled narrated videos to train the previously described dense event captioning model.
We first present the pretraining method used to effectively train Vid2Seq{} using cross-modal supervision in readily-available narrated videos in Section~\ref{sec:pretraining} and Figure~\ref{fig:pretraining}.
Then we explain how we finetune our architecture for various downstream tasks including dense event captioning in Section~\ref{sec:downstream}.
\begin{figure}[t]
\centering
\includegraphics[width=1.\linewidth]{figures/cvpr23_pretraining2.pdf}
\vspace{-0.7cm}
\caption{\small \textbf{Pretraining tasks}.
To train Vid2Seq{} on unlabeled narrated videos, we design two pretraining objectives.
\textbf{Top}: generative objective,
given visual inputs $x$ only, the task is to generate the transcribed speech sequence $y$.
\textbf{Bottom}: denoising objective,
given visual inputs $x$ and the corrupted
speech sequence $\tilde{y}$, the task is to generate the sequence of recovered
speech segments $\Bar{y}$.}
\label{fig:pretraining}
\vspace{-0.5cm}
\end{figure}
\vspace{-0.3cm}
\subsubsection{Pretraining on untrimmed narrated videos}\label{sec:pretraining}
We wish to leverage narrated videos for pretraining as they are easily available at scale~\cite{miech19howto100m, zellers2022merlot}.
However these videos do not contain dense event captioning annotations.
Therefore we use as supervisory signal the transcribed speech sentences and their corresponding timestamps.
As speech transcripts are not always visually grounded and often temporally misaligned~\cite{han2022temporal, ko2022video, miech20endtoend}, we note that they only provide \emph{weak} supervision.
Furthermore, speech transcripts drastically differ from dense event captioning annotations.
For instance, in the YT-Temporal-1B dataset~\cite{zellers2022merlot}, a video contains 120 speech sentences on average which is an order of magnitude more than the number of events in standard dense video captioning datasets~\cite{youcook2, huang2020multimodal, krishna2017dense}.
Our Vid2Seq{} model is particularly suitable for using such weak supervision, as it constructs the speech sequence in the same way as a manually annotated event sequence,
and jointly contextualizes the speech boundaries and semantic information at the level of potentially minutes-long videos (see Section~\ref{sec:model}) rather than at a shorter clip level,
enabling our model to learn long-term relationships between the different speech segments: our experiments show that pretraining on entire minutes-long videos is highly beneficial.
We next describe the two proposed training objectives, both of which are based on maximum likelihood.
Formally, given visual inputs $x$, encoder text sequence $y$ and a decoder target text sequence $z$, both objectives are based on minimizing the following loss:
\setlength{\abovedisplayskip}{0.pt}
\setlength{\belowdisplayskip}{0.pt}
\begin{equation}
{\mathcal{L}_\theta(x, y, z)} = - \frac{1}{\sum_{k=1}^{L-1}{w_k}}\sum_{k=1}^{L-1}{w_k \log\,p_\theta(z_{k+1} | x, y, z_{1:k})},
\label{eq:loss}
\end{equation}
where $L$ is the length of the decoder target sequence, $w_k$ is the weight for the $k$-th token in the sequence, which we set to $w_k=1$ $\forall k$ in practice, $\theta$ denotes the trainable parameters in the model and $p_\theta$ is the output probability distribution over the vocabulary of text and time tokens.
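As an illustration, this loss can be computed as follows for a single decoder sequence (a numpy sketch for clarity; the actual training code is in Jax).
\begin{verbatim}
import numpy as np

def sequence_nll(logits, targets, weights=None):
    # logits:  (L-1, V+N) scores predicting tokens z_2 .. z_L
    # targets: (L-1,)     gold token ids z_2 .. z_L
    # weights: (L-1,)     per-token weights w_k (all ones in practice)
    if weights is None:
        weights = np.ones(len(targets))
    shifted = logits - logits.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    token_nll = -log_probs[np.arange(len(targets)), targets]
    return np.sum(weights * token_nll) / np.sum(weights)
\end{verbatim}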
\noindent\textbf{Generative objective.}
This objective uses the transcribed speech as a (pseudo-)supervisory signal to teach the decoder to predict a sequence of events given visual inputs.
Given video frames $x$, which are fed to the encoder, the decoder has to predict the transcribed speech sequence $y$ (see Figure~\ref{fig:pretraining}), which serves as a proxy dense event captioning annotation.
Note that no text input is given to the encoder for this task as using transcribed speech both as input and target would lead the model to learn text-only shortcuts.
\noindent \textbf{Denoising objective.}
As no text input is given to the encoder for the generative proxy task, the generative objective only trains the visual encoder and the text decoder, but not the text encoder.
However, when our model is used for dense video captioning, the text encoder is of significant importance, as it encodes speech transcripts.
Hence we introduce a denoising objective that aims at jointly aligning the visual encoder, the text encoder and the text decoder.
Inspired by T5~\cite{raffel2020exploring} in the text domain, we randomly mask spans of (text and time) tokens in the transcribed speech sequence with a probability $P$ and an average span length $M$.
The encoder input is composed of the video frames $x$ together with the corrupted speech sequence $\tilde{y}$, which contains sentinel tokens that uniquely identify the masked spans.
The decoder then has to predict a sequence $\Bar{y}$ constructed with the corresponding masked spans for each sentinel token, based on visual inputs $x$ and speech context $\tilde{y}$ (see Figure~\ref{fig:pretraining}).
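The corruption step can be sketched as follows; the masking rate and span-length defaults below are illustrative assumptions, not the values of $P$ and $M$ used in our experiments.
\begin{verbatim}
import random

def corrupt_spans(tokens, p_mask=0.15, mean_span=3.0, seed=0):
    # Replace random spans with sentinel tokens (corrupted input)
    # and collect the masked spans behind their sentinels (target),
    # in the style of T5 span corruption.
    rng = random.Random(seed)
    corrupted, target, i, s = [], [], 0, 0
    while i < len(tokens):
        if rng.random() < p_mask / mean_span:   # start a masked span
            span = max(1, round(rng.expovariate(1.0 / mean_span)))
            corrupted.append(f"<extra_id_{s}>")
            target.append(f"<extra_id_{s}>")
            target.extend(tokens[i:i + span])
            i += span
            s += 1
        else:
            corrupted.append(tokens[i])
            i += 1
    return corrupted, target
\end{verbatim}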
\vspace{-0.3cm}
\subsubsection{Downstream task adaptation}\label{sec:downstream}
Our architecture and task formulation enable us to tackle dense video captioning with a generic language modeling training objective and inference procedure.
Note that as a by-product of our generic architecture, our model can also be used to generate paragraphs about entire videos by simply removing the time tokens from the output sequence,
and can also be easily adapted to video clip captioning with the same finetuning and inference recipe.
\noindent \textbf{Finetuning.}
To finetune our model for dense video captioning, we use a maximum likelihood objective based on the event sequence (see Equation~\ref{eq:loss}).
Given video frames $x$ and speech transcripts $y$, the decoder has to predict the event sequence $z$.
\noindent \textbf{Inference.}
The text decoder autoregressively generates the event sequence by sampling from the model likelihood.
In practice, we use beam search as we find that it improves the captioning quality compared with argmax sampling or nucleus sampling.
Finally, the event sequence is converted into a set of event predictions by simply reversing the sequence construction process.
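The inverse mapping from a generated token sequence to a set of event predictions can be sketched as follows; the id layout and the detokenization function are assumptions consistent with the sketches above.
\begin{verbatim}
def parse_event_sequence(token_ids, duration, n_bins=100,
                         vocab_size=32128, id_to_text=str):
    # Scan for [start, end, text...] groups; ids >= vocab_size are
    # time tokens, everything else is detokenized via id_to_text.
    to_sec = lambda t: (t - vocab_size + 0.5) * duration / n_bins
    events, i = [], 0
    while i < len(token_ids) - 1:
        if token_ids[i] >= vocab_size and token_ids[i + 1] >= vocab_size:
            start, end = to_sec(token_ids[i]), to_sec(token_ids[i + 1])
            i += 2
            words = []
            while i < len(token_ids) and token_ids[i] < vocab_size:
                words.append(id_to_text(token_ids[i]))
                i += 1
            events.append((start, end, " ".join(words)))
        else:
            i += 1   # skip BOS/EOS or malformed positions
    return events
\end{verbatim}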
\subsection{Experimental setup}\label{sec:setup}
\vspace{-0.1cm}
\noindent \textbf{Datasets.}
For pretraining, following prior work showing the benefits of pretraining on a diverse and large dataset~\cite{zellers2021merlot}, we use the \textbf{YT-Temporal-1B} dataset~\cite{zellers2022merlot}, which includes 18 million narrated videos collected from YouTube.
We evaluate Vid2Seq{} on three downstream dense video captioning datasets: YouCook2~\cite{youcook2}, ViTT~\cite{huang2020multimodal} and ActivityNet Captions~\cite{krishna2017dense}.
\textbf{YouCook2} has 2K untrimmed videos of cooking procedures.
On average, each video lasts 320s and is annotated with 7.7 temporally-localized sentences.
\textbf{ViTT} consists of 8K untrimmed instructional videos.
On average, each video lasts 250s and is annotated with 7.1 temporally-localized short tags.
\textbf{ActivityNet Captions} contains 20k untrimmed videos of various human activities.
On average, each video lasts 120s and is annotated with 3.7 temporally-localized sentences.
For video clip captioning, we use two standard benchmarks, \textbf{MSR-VTT}~\cite{xu16msrvtt} and \textbf{MSVD}~\cite{chen2011collecting}.
For all datasets, we follow the standard splits for training, validation and testing.
Note that we only use videos available on YouTube at the time of the work, resulting in 10 to 20\% fewer videos than in the original datasets.
\noindent \textbf{Implementation details.}
We extract video frames at 1FPS, and subsample or pad the sequence of frames to $F$ frames where we set $F=100$.
The text encoder and decoder sequences are truncated or padded to $L=S=1000$ tokens.
Our model has 314M trainable parameters.
We use the Adam optimizer~\cite{kingma15adam}.
We pretrain our model for 200,000 iterations with a batch size of 512 videos split on 64 TPU v4 chips, which lasts a day.
We sum both pretraining objectives with equal weighting to get our final pretraining loss.
Our Jax implementation is based on the Scenic library~\cite{dehghani2021scenic}.
More details are included in Appendix Section~\ref{sec:adddetails}.
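For reference, the frame subsampling and padding described above can be sketched as follows; uniform subsampling and zero-padding are assumptions about the exact scheme.
\begin{verbatim}
import numpy as np

def sample_frames(frames, num_frames=100):
    # frames: (T, H, W, C) array of frames extracted at 1 FPS.
    T = frames.shape[0]
    if T >= num_frames:
        idx = np.linspace(0, T - 1, num_frames).round().astype(int)
        return frames[idx]                     # uniform subsampling
    pad = np.zeros((num_frames - T,) + frames.shape[1:], frames.dtype)
    return np.concatenate([frames, pad], axis=0)   # zero-pad
\end{verbatim}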
\begin{table}[t]
\centering
\vspace{-0pt}
\begin{center}
\setlength\tabcolsep{6pt}
\resizebox{1.\linewidth}{!}{
\begin{tabular}{ccc|ccc|ccc}
\toprule
& \multicolumn{2}{c|}{Pretraining input} & \multicolumn{3}{c|}{YouCook2} & \multicolumn{3}{c}{ActivityNet} \\ & Untrimmed & Time tokens & \small{S} & \small{C} & \small{F1} & \small{S} & \small{C} & \small{F1} \\
\midrule
1. & \multicolumn{2}{c|}{\textit{No pretraining}}
& 4.0 & 18.0 & 18.1
& 5.4 & 18.8 & 49.2 \\
2. & \xmark & \xmark
& 5.5 & 27.8 & 20.5
& 5.5 & 26.5 & 52.1 \\
3. & \cmark & \xmark
& 6.7 & 35.0 & 23.3
& 5.6 & 27.4 & 52.2 \\
4. & \cmark & \cmark
& \textbf{7.9} & \textbf{47.1} & \textbf{27.3}
& \textbf{5.8} & \textbf{30.1} & \textbf{52.4} \\
\bottomrule
\end{tabular}}
\vspace{-0.3cm}
\caption{\small \textbf{Ablation showing the impact of using untrimmed videos and adding time tokens during pretraining.} When we use untrimmed video-speech inputs, time information from transcribed speech sentence boundaries is integrated via time tokens.}
\vspace{-0.9cm}
\label{table:pretraining}
\end{center}
\end{table}
\noindent \textbf{Evaluation metrics.}
For captioning, we use
CIDEr~\cite{vedantam2015cider} (C) and METEOR~\cite{banerjee2005meteor} (M).
For dense video captioning, we follow the commonly used evaluation tool~\cite{krishna2017dense} which calculates matched pairs between generated events and the ground truth across IoU thresholds of \{0.3, 0.5, 0.7, 0.9\}, and computes captioning metrics over the matched pairs.
However, these metrics do not take into account the story of the video.
Therefore we also use SODA\_c~\cite{fujita2020soda} (S) for an overall dense video captioning evaluation.
To further isolate the evaluation of event localization, we report the average precision and average recall across IoU thresholds of \{0.3, 0.5, 0.7, 0.9\} and their harmonic mean, the F1 Score.
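The localization part of this evaluation can be sketched as follows; this mirrors the logic of the metric but is not the official evaluation code.
\begin{verbatim}
import numpy as np

def iou_1d(a, b):
    # Temporal IoU between two (start, end) segments.
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def localization_f1(pred, gt, thresholds=(0.3, 0.5, 0.7, 0.9)):
    # Average precision and recall over IoU thresholds, then take
    # their harmonic mean (F1 Score).
    precisions, recalls = [], []
    for th in thresholds:
        hits_p = sum(any(iou_1d(p, g) >= th for g in gt) for p in pred)
        hits_r = sum(any(iou_1d(g, p) >= th for p in pred) for g in gt)
        precisions.append(hits_p / max(1, len(pred)))
        recalls.append(hits_r / max(1, len(gt)))
    P, R = np.mean(precisions), np.mean(recalls)
    return 2 * P * R / (P + R) if P + R > 0 else 0.0
\end{verbatim}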
\subsection{Ablation studies}\label{sec:ablation}
\vspace{-0.1cm}
The default Vid2Seq{} model predicts both text and time tokens, uses both visual frames and transcribed speech as input, builds on the T5-Base language model, and is pretrained on untrimmed videos from YT-Temporal-1B with both the generative and denoising losses.
Below we ablate the importance of each of these factors on the downstream dense video captioning performance by reporting results on YouCook2 and ActivityNet Captions validation sets.
\noindent \textbf{Pretraining on untrimmed narrated videos by exploiting transcribed speech sentence boundaries.}
In Table~\ref{table:pretraining}, we evaluate the effectiveness of our pretraining task formulation that uses untrimmed videos and integrates sentence boundaries of transcribed speech via time tokens.
In contrast, most video clip captioning pretraining methods~\cite{huang2020multimodal, luo2020univilm, seo2022end} use short, trimmed, video-speech segments for pretraining.
We adapt this strategy in our model and find that it indeed yields significant performance improvements over the baseline that uses no video-text pretraining (row 2 vs row 1).
However, larger improvements are obtained by using untrimmed video-speech inputs (row 3 vs row 2).
Moreover, using time tokens to integrate time information from transcribed speech drastically improves performance (row 4 vs row 3).
This shows the benefits of exploiting sentence boundaries of transcribed speech via time tokens and of using untrimmed videos during pretraining.
In Appendix Section~\ref{sec:ablation2}, we show additional ablations to quantify how the performance improves by pretraining on longer narrated videos that contain more speech sentences.
\begin{table}[t]
\centering
\vspace{-0pt}
\begin{center}
\setlength\tabcolsep{3pt}
\resizebox{1.\linewidth}{!}{
\begin{tabular}{ccc|cc|ccc|ccc}
\toprule
& \multicolumn{2}{c|}{Finetuning Input} & \multicolumn{2}{c|}{Pretraining loss} & \multicolumn{3}{c|}{YouCook2} & \multicolumn{3}{c}{ActivityNet} \\
& Visual & Speech & Generative & Denoising & \small{S} & \small{C} & \small{F1} & \small{S} & \small{C} & \small{F1} \\
\midrule
1. & \cmark & \xmark & \multicolumn{2}{c|}{\textit{No pretraining}}
& 3.0 & 15.6 & 15.4
& 5.4 & 14.2 & 46.5 \\
2. & \cmark & \cmark & \multicolumn{2}{c|}{\textit{No pretraining}}
& 4.0 & 18.0 & 18.1
& 5.4 & 18.8 & 49.2 \\
3. & \cmark & \xmark & \cmark & \xmark
& 5.7 & 25.3 & 23.5
& \textbf{5.9} & \textbf{30.2} & 51.8 \\
4. & \cmark & \cmark & \cmark & \xmark
& 2.5 & 10.3 & 15.9
& 4.8 & 17.0 & 48.8 \\
5. & \cmark & \cmark & \cmark & \cmark
& \textbf{7.9} & \textbf{47.1} & \textbf{27.3}
& 5.8 & 30.1 & \textbf{52.4} \\
\bottomrule
\end{tabular}}
\vspace{-0.3cm}
\caption{\small \textbf{Effect of input modalities and pretraining losses.}}
\vspace{-0.7cm}
\label{table:modalities}
\end{center}
\end{table}
\begin{table}[t]
\centering
\vspace{-0pt}
\begin{center}
\setlength\tabcolsep{3pt}
\resizebox{1.\linewidth}{!}{
\begin{tabular}{ccc|ccc|ccc}
\toprule
& \multirow{2}{*}{Captioning} & \multirow{2}{*}{Pretraining}
& \multicolumn{3}{c|}{YouCook2}
& \multicolumn{3}{c}{ActivityNet} \\
& & & \small{Recall} & \small{Precision} &
\small{F1} & \small{Recall} & \small{Precision} &
\small{F1} \\
\midrule
1. & \xmark & \xmark
& 17.8 & 19.4 & 17.7 & 47.3 & 57.9 & 52.0 \\
2. & \cmark & \xmark
& 17.2 & 20.6 & 18.1 & 42.5 & \textbf{64.1} & 49.2 \\
3. & \xmark & \cmark
& 25.7 & 21.4 & 22.8 & 52.5 & 53.0 & 51.1 \\
4. & \cmark & \cmark
& \textbf{27.9} & \textbf{27.8} & \textbf{27.3} & \textbf{52.7} & 53.9 & \textbf{52.4} \\
\bottomrule
\end{tabular}}
\vspace{-0.3cm}
\caption{\small \textbf{Effect of joint captioning and localization on the localization performance.}
The variant that does not caption corresponds to a localization-only variant that only predicts time tokens.}
\vspace{-0.7cm}
\label{table:inter}
\end{center}
\end{table}
\begin{table}[t]
\centering
\vspace{-0pt}
\begin{center}
\setlength\tabcolsep{4pt}
\resizebox{1.\linewidth}{!}{
\begin{tabular}{cc|cc|ccc|ccc}
\toprule
& \multirow{2}{*}{\makecell{\small{Language} \\ \small{Model}}} & \multicolumn{2}{c|}{Pretraining} & \multicolumn{3}{c|}{YouCook2} & \multicolumn{3}{c}{ActivityNet} \\
& & \# Videos & Dataset & \small{S} & \small{C} & \small{F1}
& \small{S} & \small{C} & \small{F1} \\
\midrule
1. & T5-Small & 15M & YTT
& 6.1 & 31.1 & 24.3
& 5.5 & 26.5 & 52.2 \\
2. & T5-Base & $\emptyset{}$ & $\emptyset{}$
& 4.0 & 18.0 & 18.1
& 5.4 & 18.8 & 49.2 \\
3. & T5-Base & 15K & YTT
& 6.3 & 35.0 & 24.4
& 5.1 & 24.4 & 49.9 \\
4. & T5-Base & 150K & YTT
& 7.3 & 40.1 & 26.7
& 5.4 & 27.2 & 51.3 \\
5. & T5-Base & 1.5M & YTT
& 7.8 & 45.5 & 26.8
& 5.6 & 28.7 & 52.2 \\
6. & T5-Base & 1M & HTM
& \textbf{8.3} & \textbf{48.3} & 26.6
& \textbf{5.8} & 28.8 & \textbf{53.1} \\
7. & T5-Base & 15M & YTT
& 7.9 & 47.1 & \textbf{27.3}
& \textbf{5.8} & \textbf{30.1} & 52.4 \\
\bottomrule
\end{tabular}}
\vspace{-0.3cm}
\caption{\small\textbf{Effect of language model size and pretraining data.} HTM: HowTo100M~\cite{miech19howto100m}, YTT: YT-Temporal-1B~\cite{zellers2022merlot}.}
\vspace{-0.9cm}
\label{table:scale}
\end{center}
\end{table}
\noindent \textbf{Input modalities and pretraining objectives.}
In Table~\ref{table:modalities}, we analyze the importance of input modalities and pretraining tasks on the downstream dense video captioning performance.
The model with visual inputs only (no transcribed speech as input) benefits significantly from pretraining with the generative objective (row 3 vs row 1).
This shows the effectiveness of using the transcribed speech as a proxy annotation for dense video captioning pretraining.
However, this model is pretrained with visual inputs only and its performance largely drops when it is finetuned with both visual and transcribed speech inputs (row 4 vs row 3).
With both modalities, adding the denoising loss strongly benefits our model (row 5 vs rows 4 and 2).
We conclude that the denoising objective benefits multi-modal reasoning.
\noindent \textbf{Effect of captioning on localization.}
In Table~\ref{table:inter}, we compare the event localization performance of our model with a localization-only variant that only predicts event boundaries.
We find that the model that jointly predicts event boundaries and captions localizes better and benefits more from pretraining than the localization-only baseline (row 4 vs row 3), which demonstrates the importance of contextualizing the noisy timestamps of the transcribed speech with the speech semantic content during pretraining.
\noindent \textbf{Model size and pretraining data.}
In Table~\ref{table:scale}, we show that the language model size has a great impact on performance, as the model with T5-Base outperforms its variant with T5-Small (row 7 vs row 1).
We also evaluate the importance of the size of the pretraining dataset of narrated videos by constructing subsets such that larger subsets include the smaller ones.
We find that scaling up the size of the pretraining dataset is beneficial, and that our pretraining method already yields important benefits when using only 150K narrated videos for pretraining (row 4).
We further show that our pretraining method generalizes well to the HowTo100M dataset~\cite{miech19howto100m}.
The model pretrained on HowTo100M (row 6) actually achieves best results on YouCook2, as these datasets are from a similar domain.
Finally, we ablate the importance of the size and pretraining of the visual backbone in Appendix Section~\ref{sec:ablation2}.
\begin{table}[t]
\begin{center}
\setlength\tabcolsep{4pt}
\resizebox{1.\linewidth}{!}{
\begin{tabular}{l|ccc|ccc|ccc}
\toprule
\multirow{3}{*}{Method}
& \multicolumn{3}{c|}{YouCook2 (val)}
& \multicolumn{3}{c|}{ViTT (test)}
& \multicolumn{3}{c}{ActivityNet (val)} \\
& \small{S} & \small{C} & \small{M}
& \small{S} & \small{C} & \small{M}
& \small{S} & \small{C} & \small{M} \\
\midrule
MT~\cite{zhou2018end}
& --- & 6.1 & 3.2
& --- & --- & ---
& --- & 9.3 & 5.0 \\
ECHR~\cite{wang2020event}
& --- & --- & 3.8
& --- & --- & ---
& 3.2 & 14.7 & 7.2 \\
PDVC~\cite{wang2021end}
& 4.4 & 22.7 & 4.7
& --- & --- & ---
& 5.4 & 29.0 & 8.0 \\
UEDVC~\cite{zhang2022unifying}
& --- & --- & ---
& --- & --- & ---
& 5.5 & --- & --- \\
E2ESG~\cite{zhu2022end}
& --- & 25.0* & 3.5
& --- & 25.0 & 8.1
& --- & --- & --- \\
Vid2Seq{} (Ours)
& \textbf{7.9} & \textbf{47.1} & \textbf{9.3}
& \textbf{13.5} & \textbf{43.5} & \textbf{8.5}
& \textbf{5.8} & \textbf{30.1} & \textbf{8.5} \\
\bottomrule
\end{tabular}
}
\vspace{-0.3cm}
\caption{\small Comparison to the state of the art for dense video captioning. * Results provided by the authors.}
\label{table:sotac}
\vspace{-0.7cm}
\end{center}
\end{table}
\begin{table}[t]
\begin{center}
\setlength\tabcolsep{5pt}
\resizebox{1.\linewidth}{!}{
\begin{tabular}{l|cc|cc|cc}
\toprule
\multirow{3}{*}{Method}
& \multicolumn{2}{c|}{YouCook2 (val)}
& \multicolumn{2}{c|}{ViTT (test)}
& \multicolumn{2}{c}{ActivityNet (val)} \\
& \small{Recall} & \small{Precision}
& \small{Recall} & \small{Precision}
& \small{Recall} & \small{Precision} \\
\midrule
PDVC~\cite{wang2021end}
& --- & ---
& --- & ---
& 55.4 & 58.1 \\
UEDVC~\cite{zhang2022unifying}
& --- & ---
& --- & ---
& \textbf{59.0} & \textbf{60.3} \\
E2ESG~\cite{zhu2022end}
& 20.7* & 20.6*
& 32.2* & 32.1*
& --- & --- \\
Vid2Seq{} (Ours)
& \textbf{27.9} & \textbf{27.8}
& \textbf{42.6} & \textbf{46.2}
& 52.7 & 53.9 \\
\bottomrule
\end{tabular}
}
\vspace{-0.3cm}
\caption{\small Comparison to the state of the art for event localization. * Results provided by the authors.}
\label{table:sotal}
\end{center}
\vspace{-0.9cm}
\end{table}
\subsection{Comparison to the state of the art}\label{sec:sota}
\noindent \textbf{Dense video captioning.}
In Table~\ref{table:sotac}, we compare our approach to state-of-the-art dense video captioning methods using cross-entropy training\footnote{We do not include methods directly optimizing the test metric~\cite{deng2021sketch, mun2019streamlined}.} on the YouCook2, ViTT and ActivityNet Captions datasets.
Vid2Seq{} sets new state of the art on all three datasets.
In particular, our method improves the SODA metric by 3.5 and 0.3 points on YouCook2 and ActivityNet Captions over PDVC~\cite{wang2021end} and UEDVC~\cite{zhang2022unifying}, respectively.
Our method also outperforms E2ESG~\cite{zhu2022end} which uses in-domain text-only pretraining on Wikihow.
These results demonstrate the strong dense event captioning ability of our pretrained Vid2Seq{} model.
\begin{figure*}[t]
\centering
\includegraphics[clip, trim=0mm 9cm 0mm 0mm, width=1\linewidth]{figures/cvpr23_qualitative2}
\vspace{-0.9cm}
\caption{\small Example of dense event captioning predictions of Vid2Seq{} on ActivityNet Captions validation set, compared with ground-truth.
}
\vspace{-0.6cm}
\label{fig:qualitative}
\end{figure*}
\noindent \textbf{Event localization.}
In Table~\ref{table:sotal}, we evaluate the event localization performance of our dense video captioning model in comparison with prior work.
On both YouCook2 and ViTT, Vid2Seq{} outperforms prior work~\cite{zhu2022end} tackling dense video captioning as a single sequence generation task.
However, our model underperforms compared to PDVC~\cite{wang2021end} and UEDVC~\cite{zhang2022unifying} on ActivityNet Captions.
We emphasize that our approach integrates less prior knowledge about temporal localization than both these approaches, which include task-specific components such as event counters~\cite{wang2021end} or separately train a model for the localization subtask~\cite{zhang2022unifying}.
\begin{table}[t]
\begin{center}
\setlength\tabcolsep{3pt}
\resizebox{1\linewidth}{!}{
\begin{tabular}{l|cc|cc}
\toprule
\multirow{2}{*}{Method}
& \multicolumn{2}{c|}{YouCook2 (val)}
& \multicolumn{2}{c}{ActivityNet (val-ae)} \\
& \small{C} & \small{M} & \small{C} & \small{M} \\
\midrule
\textit{With Ground-Truth Proposals} & & & \\
VTransformer~\cite{zhou2018end}
& 32.3 & 15.7 & 22.2 & 15.6 \\
Transformer-XL~\cite{dai2019transformer}
& 26.4 & 14.8 & 21.7 & 15.1 \\
MART~\cite{lei2020mart}
& 35.7 & 15.9 & 23.4 & 15.7 \\
GVDSup~\cite{zhou2019grounded}
& --- & --- & 22.9 & 16.4 \\
AdvInf~\cite{park2019adversarial}
& --- & --- & 21.0 & 16.6 \\
PDVC~\cite{wang2021end}
& --- & --- & 27.3 & 15.9 \\
\hline
\textit{With Learnt Proposals} & & & \\
MFT~\cite{xiong2018move}
& --- & --- & 19.1 & 14.7 \\
PDVC~\cite{wang2021end}
& --- & --- & 20.5 & 15.8 \\
Vid2Seq{} (Ours) &
\textbf{50.1} & \textbf{24.0} & \textbf{28.0} & \textbf{17.0} \\
\bottomrule
\end{tabular}
}
\end{center}
\vspace{-0.6cm}
\caption{\small Comparison to the SoTA for video paragraph captioning.}
\vspace{-0.4cm}
\label{table:sotapara}
\end{table}
\begin{table}[t]
\begin{center}
\setlength\tabcolsep{6pt}
\resizebox{0.75\linewidth}{!}{
\begin{tabular}{l|cc|cc}
\toprule
\multirow{2}{*}{Method}
& \multicolumn{2}{c|}{MSR-VTT (test)}
& \multicolumn{2}{c}{MSVD (test)} \\
& \small{C} & \small{M} & \small{C} & \small{M} \\
\midrule
ORG-TRL~\cite{zhang2020object} & 50.9 & 28.8 & 95.2 & 36.4 \\
SwinBERT~\cite{lin2022swinbert} & 53.8 & 29.9 & 120.6 & 41.3 \\
MV-GPT~\cite{seo2022end}
& 60.0 & ---$^*$ & --- & --- \\
Vid2Seq{} (Ours)
& \textbf{64.6} & \textbf{30.8} & \textbf{146.2} & \textbf{45.3} \\
\bottomrule
\end{tabular}
}
\end{center}
\vspace{-0.6cm}
\caption{\small Comparison to the SoTA for video clip captioning.
* Authors confirmed that they used an incorrect metric implementation.}
\vspace{-0.4cm}
\label{table:sotaclip}
\end{table}
\noindent \textbf{Video paragraph captioning.}
In Table~\ref{table:sotapara}, we compare our approach to state-of-the-art video paragraph captioning methods on the YouCook2 and ActivityNet Captions datasets.
Vid2Seq{} outperforms all prior methods on both datasets, including the ones using ground-truth event boundary proposals at inference time~\cite{dai2019transformer, lei2020mart, zhou2018end, zhou2019grounded, wang2021end, park2019adversarial}, showing strong video paragraph captioning ability.
\noindent \textbf{Video clip captioning.}
In Table~\ref{table:sotaclip}, we compare our approach to state-of-the-art video clip captioning methods on the MSR-VTT and MSVD datasets.
Vid2Seq{} improves over prior methods on both datasets.
We conclude that our pretrained Vid2Seq{} model generalizes well to the standard video clip captioning setting.
\begin{table}[t]
\begin{center}
\setlength\tabcolsep{6pt}
\resizebox{1\linewidth}{!}{
\begin{tabular}{lc|ccc|ccc|ccc}
\toprule
& \multirow{2}{*}{Data}
& \multicolumn{3}{c|}{YouCook2}
& \multicolumn{3}{c|}{ViTT}
& \multicolumn{3}{c}{ActivityNet} \\
& & \small{S} & \small{C} & \small{M}
& \small{S} & \small{C} & \small{M}
& \small{S} & \small{C} & \small{M} \\
\midrule
1. & 1\%
& 2.4 & 10.1 & 3.3
& 2.0 & 7.4 & 1.9
& 2.2 & 6.2 & 3.2 \\
2. & 10\%
& 3.8 & 18.4 & 5.2
& 10.7 & 28.6 & 6.0
& 4.3 & 20.0 & 6.1 \\
3. & 50\%
& 6.2 & 32.1 & 7.6
& 12.5 & 38.8 & 7.8
& 5.4 & 27.5 & 7.8 \\
4. & 100\%
& \textbf{7.9} & \textbf{47.1} & \textbf{9.3}
& \textbf{13.5} & \textbf{43.5} & \textbf{8.5}
& \textbf{5.8} & \textbf{30.1} & \textbf{8.5} \\
\bottomrule
\end{tabular}
}
\vspace{-0.3cm}
\caption{\small \textbf{Few-shot dense event captioning}, by finetuning Vid2Seq{} using a small fraction of the downstream training dataset.}
\label{table:fewshot}
\end{center}
\vspace{-1cm}
\end{table}
\subsection{Few-shot dense video captioning}\label{sec:fewshot}
To further evaluate the generalization capabilities of our pretrained Vid2Seq{} model, we propose a new few-shot dense video captioning setting where we finetune Vid2Seq{} using only a fraction of the downstream training dataset.
From Table~\ref{table:fewshot}, we observe substantial improvements when using 10\% of the training data compared to 1\% (row 2 vs 1).
In Appendix Section~\ref{sec:fewshot2} we further show that pretraining is essential in this few-shot setting.
\vspace{-0.1cm}
\subsection{Qualitative examples}\label{sec:qualitative}
In Figure~\ref{fig:qualitative}, we show an example of dense event captioning predictions from Vid2Seq{}.
This example shows that our model can predict meaningful event boundaries and captions, and that the predicted captions and boundaries differ considerably from the transcribed speech input (showing the importance of the visual tokens in the input).
More examples are provided in Appendix Section~\ref{sec:addquali}. |
{
"arxiv_id": "2302.14223",
"language": "en",
"timestamp": "2023-03-01T02:05:39",
"url": "https://arxiv.org/abs/2302.14223",
"yymm": "2302"
} | \section{Introduction}
Bayesian parameter estimation, which incorporates prior knowledge about unknown parameters, naturally arises when estimating signals in communication processes \cite{vTbook1}.
Quantum communication is a promising near-term communication technology
that can transmit information more securely and efficiently than classical protocols.
There have been many investigations of how classical and/or quantum information can be transmitted faithfully over a given noisy quantum channel; see for example \cite{gt07,wilde13,holevo19}.
Quantum Bayesian estimation is a key ingredient for efficiently decoding classical information encoded in quantum states. It has also attracted great attention in the fields of quantum sensing and quantum metrology \cite{jd15,dbgmb19,gsp21,nsp21}.
Quantum Bayesian estimation was initiated about forty years ago by Personick \cite{personick_thesis,personick71}. Due to recent advances in quantum estimation theory, the quantum Bayesian estimation problem has received renewed interest from the community.
Several quantum Bayesian bounds have been proposed for the Bayes risk; see, for example, \cite{personick_thesis,personick71,hlg70,tsang_zk,lt16,rd2020,demkowicz2020,tsang_gl}.
However, most of them do not capture the genuinely quantum nature of the problem,
since the known lower bounds are essentially direct translations of classical Bayesian bounds.
In particular, previously proposed lower bounds are derived by applying a Cauchy-Schwarz-type inequality with respect to a certain choice of inner product on an operator space.
Holevo initiated the investigation of nontrivial lower bounds for quantum estimation in the context of general statistical decision problems \cite{holevo_1973}.
He also analyzed lower bounds for the Bayes risk based on quantum Fisher information matrices \cite{holevo_bayes,holevo_monograph}.
In particular, he gave a thorough analysis of the Gaussian shift model in the Bayesian setting.
For the estimation of non-random parameters, the Holevo bound established a unique feature of quantum estimation theory \cite{holevo,nagaoka89}.
This is because it is expressed as a certain optimization problem without the use of any quantum Fisher information matrix.
Later, Nagaoka proposed a tighter bound \cite{nagaoka89} for two-parameter estimation.
This lower bound is based on a different statistical problem in which one aims at finding an approximate diagonalization of two noncommuting matrices \cite{nagaoka91}.
Nagaoka's result was later generalized by Hayashi to any finite number of noncommuting matrices \cite{hayashi99}.
In a recent paper \cite{lja2021}, the Nagaoka bound was generalized to the estimation of any number of non-random parameters; the resulting bound was named the Nagaoka-Hayashi bound.
In this paper, we attempt to make a further step toward developing genuinely quantum bounds for Bayesian parameter estimation, building on the Nagaoka-Hayashi bound.
In particular, we propose Bayesian versions of two lower bounds, the Holevo bound and the Nagaoka-Hayashi bound.
The unique nature of the proposed lower bounds is that they are expressed as optimization problems.
We show that our bound is tighter than the lower bound recently proposed by Rubio and Dunningham \cite{rd2020}.
This paper is organized as follows. Section 2 gives a brief summary of existing quantum Bayesian bounds.
In Sec.~3, we propose two new quantum Bayesian bounds, and we also show that the proposed lower bound is tighter than the generalized Personick bound.
In Sec.~4, we conclude and list some open problems.
Technical lemmas are given in the Appendix.
\section{Preliminaries}
Let ${\cal H}$ be a finite dimensional Hilbert space and denote by ${\cal S}({\cal H})$ the totality of density matrices on ${\cal H}$.
A quantum parametric model is a smooth family of density matrices on ${\cal H}$, $\{S_\theta\,|\,\theta\in\Theta\}$, parametrized by the $n$-dimensional parameter $\theta=(\theta_1,\theta_2,\ldots,\theta_n)$.
In the following, we consider a regular model; in particular, $S_\theta$ is full rank for all $\theta\in\Theta$, and the state is differentiable with respect to the parameter sufficiently many times.
A measurement is described by a set of positive semidefinite matrices $\Pi_x$, indexed by the measurement outcome label set ${\cal X}$, such that all elements add up to the identity $I$.
The set of operators corresponding to a quantum measurement is normally called a positive operator-valued measure (POVM), and it is defined by
\begin{equation}
\Pi=\{\Pi_x\};\ \forall x,\Pi_x\ge0,\ \sum_x\Pi_x=I,
\end{equation}
where the measurement outcomes are labeled by a real number $x\in{\cal X}\subset\mathbb{R}$ and $I$ is the identity operator on ${\cal H}$.
When the measurement outcomes are labeled by a continuous set,
the condition on the POVM elements is $\forall x,\Pi_x\ge0,\ \int_{\cal X} dx\,\Pi_x=I$.
The measurement outcome is described by a random variable $X$ that obeys the conditional probability distribution:
$p_\theta(x)=\opTr{S_\theta \Pi_x}$.
(In the following, $\opTr{\cdot}$ denotes the trace on ${\cal H}$.)
The expectation value for a random variable $X$ is
denoted by $E_\theta[X|\Pi]=\sum_x xp_\theta(x)$.
To infer the parameter value, we use an estimator that returns values in the set $\Theta$:
$\hat{\theta}=(\hat{\theta}_i):\ {\cal X}\to\Theta$.
The performance of the estimator is quantified by a loss function:
\begin{equation}
L:\ \Theta\times\Theta\to\{x\in\mathbb{R}|x\ge0\}\cup\{\infty\}.
\end{equation}
In this study, we adopt $L(\theta,\hat{\theta})=\sum_{i,j}(\hat{\theta}_i-\theta_i)\mathsf{W}_{ij}(\theta)(\hat{\theta}_j-\theta_j)$.
Here $\mathsf{W}(\theta)=[\mathsf{W}(\theta)_{ij}]$ is an $n\times n$ positive semidefinite matrix, called a weight (cost) matrix.
As a special case, $\mathsf{W}$ can be parameter-independent.
In the language of statistical decision theory, the pair $(\Pi,\hat{\theta})$ is called a quantum decision.
The main objective of parameter estimation about quantum states $\{S_\theta\}$ is to
find the best quantum decision $(\Pi,\hat{\theta})$ that minimizes the loss function.
As the measurement outcomes are random, we further need to specify a risk for this optimization.
\begin{definition}
The Bayes risk for a given prior probability distribution $p_{\mathrm prior}(\theta)$ on $\Theta$ is defined by
\begin{equation}\label{def:Brisk1}
\mathsf{R}[\Pi,\hat{\theta}]:=\int_\Theta d\theta p_{\mathrm prior}(\theta) E_\theta\big[L\big(\theta,\hat{\theta}(X)\big)\big|\Pi\big].
\end{equation}
\end{definition}
With this quantum Bayes risk, the objective is to find the best quantum decision that minimizes the risk, i.e. the minimization problem over $(\Pi,\hat{\theta})$:
\begin{equation}
\inf_{\Pi,\hat{\theta}}\mathsf{R}[\Pi,\hat{\theta}].
\end{equation}
Denoting the joint distribution by $p(\theta,x):=p_{\mathrm prior}(\theta) p_\theta(x)=\opTr{p_{\mathrm prior}(\theta) S_\theta\Pi_x}$,
the Bayes risk \eqref{def:Brisk1} is also written as
\begin{equation}\label{def:Brisk2}
\mathsf{R}[\Pi,\hat{\theta}]=E_p\big[L\big(\theta,\hat{\theta}(X)\big)\big],
\end{equation}
where $E_p[\cdot]$ denotes the expectation value with respect to the joint distribution $p(\theta,x)$.
In the following discussion, we will also express the Bayes risk in terms of the mean square error (MSE) matrix whose $jk$ component is defined by
\begin{equation}\label{def:Brisk3}
\mathsf{V}_{\theta,jk}[\Pi,\hat{\theta}]:=E_\theta \big[(\hat{\theta}_j(X)-\theta_j)(\hat{\theta}_k(X)-\theta_k)\big|\Pi\big].
\end{equation}
This then gives an alternative expression for the Bayes risk:
\begin{equation}\label{def:Brisk4}
\mathsf{R} [\Pi,\hat{\theta}]= \int_\Theta d\theta\,p_{\mathrm prior}(\theta)\sfTr{\mathsf{W}(\theta)\mathsf{V}_\theta[\Pi,\hat{\theta}]},
\end{equation}
where $\sfTr{\cdot}$ denotes the trace for matrices on the $n$-dimensional parameter space.
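To make the above definitions concrete, the following is a minimal numerical sketch (our illustration, not taken from any reference) that evaluates the Bayes risk \eqref{def:Brisk4} directly. The one-parameter qubit model, the discretized Gaussian prior, the projective POVM, and the estimator values are all hypothetical choices made for the example.
\begin{verbatim}
# Minimal sketch (illustrative): direct evaluation of the Bayes risk
# for a hypothetical one-parameter qubit model S_theta = (I+theta*sx)/2
# with a discretized Gaussian prior, a projective POVM, and a fixed
# estimator.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
state = lambda th: (I2 + th * sx) / 2      # S_theta, valid for |theta|<1

thetas = np.linspace(-0.8, 0.8, 161)        # parameter grid
prior = np.exp(-thetas**2 / (2 * 0.3**2))   # truncated Gaussian prior
prior /= prior.sum()

Pi = [(I2 + sx) / 2, (I2 - sx) / 2]         # projective measurement of sx
est = {0: 0.5, 1: -0.5}                     # hypothetical estimator values
W = 1.0                                     # scalar weight

risk = 0.0
for th, pw in zip(thetas, prior):
    S = state(th)
    for x, P in enumerate(Pi):
        p_x = np.real(np.trace(S @ P))      # p_theta(x) = Tr[S_theta Pi_x]
        risk += pw * p_x * W * (est[x] - th) ** 2
print("Bayes risk:", risk)
\end{verbatim}
Any lower bound derived below must lie beneath the value returned by such a direct evaluation, which makes this a convenient sanity check.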
\subsection{Quantum van Tree inequality}
The classical van Tree inequality is based on the covariance inequality \cite{vTbook1}.
This inequality is applicable to bound the Bayesian MSE matrix
\begin{equation}
\mathsf{V}_B[\Pi,\hat{\theta}]:=\int_\Theta d\theta\,p_{\mathrm prior}(\theta)\mathsf{V}_\theta[\Pi,\hat{\theta}].
\end{equation}
The resulting matrix inequality is
\begin{align}
\mathsf{V}_B[\Pi,\hat{\theta}]&\ge \left(J_B[\Pi]\right)^{-1}, \\
J_B[\Pi]&:=J(\pi)+\int_\Theta d\theta\,p_{\mathrm prior}(\theta) J_\theta[\Pi],\\
J_{ij}(\pi)&:=\int_\Theta d\theta\,p_{\mathrm prior}(\theta) \frac{\partial \ell_{\mathrm prior}}{\partial \theta_i}\frac{\partial \ell_{\mathrm prior}}{\partial \theta_j},
\end{align}
where $\ell_{\mathrm prior}(\theta):=\log p_{\mathrm prior}(\theta)$ and $J_\theta[\Pi]$ is the Fisher information matrix about the distribution $p_\theta(x)$.
The van Tree inequality can be generalized to include a parameter-dependent weight matrix.
This can be accomplished by use of the Gill-Levit bound \cite{gill_levit}.
A quantum version of the Gill-Levit bound was proposed recently \cite{tsang_gl}.
It was Personick who proposed a quantum version of the van Tree inequality for the Bayes risk \cite{personick_thesis,personick71}.
Without going into the details, we give his result.
It is known that the classical Fisher information matrix is bounded from above by appropriate quantum Fisher information matrices $J^Q_\theta$.
Using this fact, one can derive a quantum van Tree inequality:
\begin{align}\label{eq:QvTineq}
\mathsf{V}_B[\Pi,\hat{\theta}]&\ge (J^Q_B)^{-1},\\
J^Q_B&:=J(\pi)+\int_\Theta d\theta\,p_{\mathrm prior}(\theta) J^Q_\theta.\nonumber
\end{align}
With this inequality, one gets a lower bound for the Bayes risk \eqref{def:Brisk4} when the weight matrix $\mathsf{W}$ is parameter independent:
\begin{equation}
\mathsf{R}[\Pi,\hat{\theta}]\ge {\cal C}_\mathrm{vT}:=\sfTr{\mathsf{W} (J^Q_B)^{-1}}.
\end{equation}
A well-known example of a quantum Fisher information matrix is the symmetric logarithmic derivative (SLD) Fisher information matrix \cite{helstrom67}.
Originally, the lower bound \eqref{eq:QvTineq} was proven in Personick's thesis \cite[Sec.~2.2.2]{personick_thesis}, immediately after the pioneering work by Helstrom that formulated point estimation about quantum states \cite{helstrom67}.
In his thesis, he also derived quantum versions of the Bhattacharyya and Barankin bounds; however, these results seem to have come too early to be appreciated by the community at the time.
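To illustrate the quantum van Tree inequality with the SLD Fisher information, the following sketch (ours; the model and prior are assumptions for the example) evaluates ${\cal C}_\mathrm{vT}$ for a one-parameter qubit family. The SLD at each grid point is obtained by solving the Lyapunov-type equation $\partial_\theta S_\theta=\tfrac12(S_\theta L_\theta+L_\theta S_\theta)$ with a Sylvester solver.
\begin{verbatim}
# Sketch (ours): the quantum van Tree bound C_vT for the hypothetical
# one-parameter qubit family S_theta = (I + theta*sx)/2.
import numpy as np
from scipy.linalg import solve_sylvester

sx = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
state = lambda th: (I2 + th * sx) / 2

thetas = np.linspace(-0.8, 0.8, 161)
dth = thetas[1] - thetas[0]
prior = np.exp(-thetas**2 / (2 * 0.3**2))
prior /= prior.sum() * dth                 # normalized density on the grid

# prior Fisher information J(pi) of the (truncated) Gaussian density
logp = np.log(prior)
J_pi = np.sum(prior * np.gradient(logp, dth)**2) * dth

# prior-averaged SLD quantum Fisher information
J_Q = 0.0
for th, pw in zip(thetas, prior):
    S = state(th)
    dS = sx / 2                            # dS/dtheta for this model
    L = solve_sylvester(S, S, 2 * dS)      # solves S L + L S = 2 dS
    J_Q += pw * np.real(np.trace(S @ L @ L)) * dth

C_vT = 1.0 / (J_pi + J_Q)
print("quantum van Tree bound:", C_vT)
\end{verbatim}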
\subsection{Personick bound}
Personick also proposed a different method for deriving a lower bound for the Bayes risk in the same paper where he proposed the quantum van Tree inequality \cite{personick71}.
In the published paper, he considered one-parameter estimation and proved that this lower bound is tight. It is less well known that he also derived a lower bound for the general $n$-parameter estimation problem \cite{personick_thesis}.
Define the averaged state and the first moments by
\begin{align}
\bar{S}&=\int_\Theta d\theta\,p_{\mathrm prior}(\theta) S_\theta,\\
\bar{S}_j&=\int_\Theta d\theta\,p_{\mathrm prior}(\theta)\theta_j S_\theta,
\end{align}
where $\bar{\cdot}$ denotes the averaged operators with respect to the prior distribution.
Next, consider a set of Hermitian matrices $L_j$ satisfying the so-called Bayesian version of the SLD equation:
\begin{equation}
\bar{S}_j=\frac12(\bar{S}L_j+L_j\bar{S} ).
\end{equation}
For a positive definite averaged state $\bar{S}$, the Bayesian SLD $L_j$ is uniquely defined as the solution of this equation.
Then, the real symmetric matrix $K=[K_{jk}]$, the Bayesian SLD Fisher information matrix, is defined by
\begin{equation}
{K}_{jk}=\langle L_j,L_k\rangle_{\bar{S}}
=\frac12\opTr{\bar{S}(L_jL_k+L_kL_j)}.
\end{equation}
Here, $\langle X,Y\rangle_{\bar{S}}:=\opTr{\bar{S}(X^\dagger Y+YX^\dagger)}/2$
is the symmetrized inner product for linear operators $X,Y$ on ${\cal H}$ with respect to the state $\bar{S}$,
and $X^\dagger$ denotes the Hermitian conjugate of $X$.
For one-parameter estimation, Personick proved the following inequality:
\begin{equation}
\mathsf{V}_B[\Pi,\hat{\theta}]\ge \overline{\theta^2}-K,
\end{equation}
where $\overline{\theta^2}=\int d\theta \,p_{\mathrm prior}(\theta) \theta^2$.
Since the random parameter $\theta$ is scalar,
the second term is $K=\opTr{\bar{S} L_{1}^{2}}$.
(The original form of the second term is written as $\opTr{\bar{S}\bar{S}_1}$ in Personick's work \cite{personick71}.)
Almost half a century after the seminal work by Personick, Rubio and Dunningham generalized the Personick bound based on a different approach \cite{rd2020}.
They proved
\begin{equation}\label{def:RDbound}
\mathsf{V}_B[\Pi,\hat{\theta}] \ge \overline{\theta \theta^\intercal}-K,
\end{equation}
where the first term is a purely classical one, defined by
\begin{equation}
\Big[\overline{\theta \theta^\intercal}\Big]_{jk}=\int_\Theta d\theta\,p_{\mathrm prior}(\theta)\theta_j\theta_k.
\end{equation}
The second term is the quantum contribution, and it can be interpreted as a Bayesian version of the SLD Fisher information matrix.
Their bound, Eq.~\eqref{def:RDbound} with a parameter-independent weight matrix, then takes the form
\begin{equation}
{\cal C}_\mathrm{PRD}=\sfTr{\mathsf{W} (\overline{\theta \theta^\intercal}-{K})}.
\end{equation}
In the following, we call this lower bound the generalized Personick bound.
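Numerically, the generalized Personick bound only requires solving the Bayesian SLD equation, which is a Lyapunov-type matrix equation. The following sketch (ours; the two-parameter qubit model and the uniform prior are hypothetical choices) computes ${\cal C}_\mathrm{PRD}$:
\begin{verbatim}
# Sketch (ours): the generalized Personick bound C_PRD for a
# hypothetical two-parameter qubit family S = (I + t1*sx + t2*sy)/2.
import numpy as np
from scipy.linalg import solve_sylvester

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
state = lambda t: (I2 + t[0] * sx + t[1] * sy) / 2

# discrete uniform prior on a 2D grid inside the Bloch disk
grid = np.linspace(-0.5, 0.5, 41)
pts = [(a, b) for a in grid for b in grid if a * a + b * b < 0.49]
w = np.ones(len(pts)) / len(pts)

Sbar  = sum(pw * state(t) for pw, t in zip(w, pts))
Sbar1 = sum(pw * t[0] * state(t) for pw, t in zip(w, pts))
Sbar2 = sum(pw * t[1] * state(t) for pw, t in zip(w, pts))
M2 = np.zeros((2, 2))                     # classical moment matrix
for pw, t in zip(w, pts):
    M2 += pw * np.outer(t, t)

# Bayesian SLDs: Sbar L_j + L_j Sbar = 2 Sbar_j
L = [solve_sylvester(Sbar, Sbar, 2 * Sj) for Sj in (Sbar1, Sbar2)]
K = np.array([[np.real(np.trace(Sbar @ (L[j] @ L[k] + L[k] @ L[j]) / 2))
               for k in range(2)] for j in range(2)])

W = np.eye(2)                             # parameter-independent weight
C_PRD = np.trace(W @ (M2 - K))
print("generalized Personick bound:", C_PRD)
\end{verbatim}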
\section{Results}
In this section, we propose a new lower bound for the Bayes risk that is applicable to an arbitrary weight matrix. To state the main theorem, we first introduce some notation.
\subsection{Alternative expression for the Bayes risk}
Noting that the MSE matrix $\mathsf{V}$ consists of four terms,
\begin{multline}
\mathsf{V}_{\theta,jk}[\Pi,\hat{\theta}]=\sum_x \hat{\theta}_{j}(x)\hat{\theta}_{k}(x)\opTr{S_\theta\Pi_x}\\
\hspace{2cm}-\theta_j\sum_x \hat{\theta}_{k}(x)\opTr{S_\theta\Pi_x}\\
-\sum_x \hat{\theta}_{j}(x)\opTr{S_\theta\Pi_x}\theta_k
+\theta_j\theta_k,
\end{multline}
we introduce the following Hermitian matrices on ${\cal H}$:
\begin{align*}
\mathbb{L}_{jk}[\Pi,\hat{\theta}]&=\sum_x \hat{\theta}_{j}(x)\Pi_x\hat{\theta}_{k}(x)\quad(j,k=1,2,\dots,n),\\
X_j[\Pi,\hat{\theta}]&=\sum_{x}\hat{\theta}_{j}(x)\Pi_x \quad(j=1,2,\ldots,n).
\end{align*}
Importantly, these matrices are defined solely by the quantum decision $(\Pi,\hat{\theta})$, and hence
they are independent of the model parameter $\theta$.
In the following, we often omit the arguments $\Pi$ and $\hat{\theta}$ when they are clear from the context.
With these definitions, the MSE matrix is expressed as
\begin{equation*}
\mathsf{V}_{\theta,jk}[\Pi,\hat{\theta}]=\opTr{S_\theta(\mathbb{L}_{jk}-\theta_jX_k-X_j\theta_k+\theta_j\theta_k)}.
\end{equation*}
We next define a matrix and a vector on the extended Hilbert space \cite{lja2021}.
Consider the Hilbert space $\mathbb{H}:={\mathbb C}^n\otimes{\cal H}$, and define $ \mathbb{L}$ on $\mathbb{H}$ whose $jk$ component is given by $\mathbb{L}_{jk}$.
We also define a column vector $X$ whose $j$th component is $X_j$.
We denote the transpose of matrices and vectors with respect to ${\mathbb C}^n$ by $(\cdot)^\intercal$.
The fundamental inequality is stated in the following lemma, which is a variant of Holevo's lemma \cite{holevo}.
\begin{lemma}[\cite{lja2021}] \label{lem:holevo}
For all POVMs and estimators, $\mathbb{L}$ and $X$ obey the matrix inequality:
\begin{equation}
\mathbb{L}[\Pi,\hat{\theta}]\ge X[\Pi,\hat{\theta}] \left(X[\Pi,\hat{\theta}]\right)^\intercal .
\end{equation}
\end{lemma}
The weighted trace of the MSE matrix then takes the form:
\begin{align*}
&\sfTr{\mathsf{W}(\theta) \mathsf{V}_\theta}\\
&=\sum_{j,k}\mathsf{W}_{jk}(\theta)\opTr{S_\theta(\mathbb{L}_{jk}-\theta_jX_k-X_j\theta_k+\theta_j\theta_k)}\\
&=\bbTr{\mathbb{S}_{\mathsf{W}}\mathbb{L}}-\bbTr{\mathsf{S}_{\mathsf{W}} X^\intercal}-\bbTr{X {\mathsf{S}_{\mathsf{W}}}^\intercal}
+\mathsf{W}_\theta,
\end{align*}
where $\bbTr{\cdot}$ denotes the trace on the extended Hilbert space $\mathbb{H}$.
In this expression, we define
\begin{align}
\mathbb{S}_{\mathsf{W},jk}(\theta)&:=\mathsf{W}_{jk}(\theta)\otimes S_\theta ,\\
\mathsf{S}_{\mathsf{W},j}(\theta) &:=\sum_k\mathsf{W}_{jk}(\theta)\theta_k S_\theta,\\
\mathsf{W}_\theta&:=\sum_{j,k}\theta_j\mathsf{W}_{jk}(\theta)\theta_k.
\end{align}
$\mathbb{S}_{\mathsf{W}}:=[\mathbb{S}_{\mathsf{W},jk}]$ is an operator on the extended Hilbert space.
$\mathsf{S}_{\mathsf{W}}\in\mathbb{H}$ is a vector with Hermitian matrix elements.
These quantities are determined by the quantum statistical model and the weight matrix.
Combining the above expressions and integrating with respect to the prior,
we obtain the following alternative form of the Bayes risk:
\begin{lemma}\label{lem:Briskrep}
The Bayes risk is expressed as
\begin{equation}\label{eq:Brisk5}
\mathsf{R}[\Pi,\hat{\theta}]=\bbTr{\bar{\mathbb{S}}_{\mathsf{W}}\mathbb{L}}-\bbTr{\bar{\mathsf{S}}_{\mathsf{W}} X^\intercal}-\bbTr{X \bar{\mathsf{S}}_{\mathsf{W}}^\intercal}+ \overline{\mathsf{W}},
\end{equation}
where the barred quantities denote averages with respect to the prior:
\begin{align}
\bar{\mathbb{S}}_{\mathsf{W}}&:=\int_\Theta \,d\theta\,\mathbb{S}_{\mathsf{W}}p_{\mathrm prior}(\theta),\label{def:bayesS}\\
\bar{\mathsf{S}}_{\mathsf{W}}&:=\int_\Theta \,d\theta\,\mathsf{S}_{\mathsf{W}}p_{\mathrm prior}(\theta),\label{def:bayesT}\\
\overline{\mathsf{W}}&:=\int_\Theta \,d\theta\,\mathsf{W}_{\theta}p_{\mathrm prior}(\theta).
\end{align}
\end{lemma}
We emphasize that everything is exact so far.
We also remind ourselves that $\mathbb{L}$ and $X$ are functions of a POVM $\Pi$ and an estimator $\hat{\theta}$.
\subsection{New Bayesian bounds}
To derive a lower bound for the Bayes risk $\mathsf{R}[\Pi,\hat{\theta}]$, we follow the same line of logic used in Ref.~\cite{lja2021}.
This then gives the main result of the paper.
\begin{theorem}[Bayesian Nagaoka-Hayashi bound]~\\
\label{thm:BNHbound}
For any POVM $\Pi$ and estimator $\hat{\theta}$, the following inequality holds for the Bayes risk.
\begin{align}
&\mathsf{R}[\Pi,\hat{\theta}]\ge {\cal C}_\mathrm{NH} \nonumber\\
&{\cal C}_\mathrm{NH}:=\min_{\mathbb{L},X}\left\{ \bbTr{\bar{\mathbb{S}}_{\mathsf{W}}\mathbb{L}}
-\bbTr{\bar{\mathsf{S}}_{\mathsf{W}} X^\intercal}-\bbTr{X \bar{\mathsf{S}}_{\mathsf{W}}^\intercal}\right\}\nonumber\\
&\hspace{6cm} + \overline{\mathsf{W}}.
\label{def:BNHbound}
\end{align}
Here the optimization is subject to the constraints: for all $j,k$, $\mathbb{L}_{jk}=\mathbb{L}_{kj}$ and $\mathbb{L}_{jk}$ is Hermitian; each $X_j$ is Hermitian; and $\mathbb{L}\geq {X} X^\intercal$.
\end{theorem}
\begin{proof}
Let $\mathbb{L}_{*}$ and $X_{*}$ be the optimal quantities calculated from an optimal POVM $\Pi_{*}$ and an optimal estimator $\hat{\theta}_{*}$.
Then, the following chain of inequalities holds.
\begin{align*}
&\mathsf{R}[\Pi,\hat{\theta}]\ge\mathsf{R}[\Pi_{*},\hat{\theta}_{*}]\nonumber\\
&=\bbTr{\bar{\mathbb{S}}_{\mathsf{W}}\mathbb{L}_{*}}-\bbTr{\bar{\mathsf{S}}_{\mathsf{W}} X_{*}^\intercal}-\bbTr{X_{*} \bar{\mathsf{S}}_{\mathsf{W}}^\intercal}+ \overline{\mathsf{W}}\\
&\ge \min_{\mathbb{L}\ge X_{*}X_{*}^\intercal}\left\{ \bbTr{\bar{\mathbb{S}}_{\mathsf{W}}\mathbb{L}}
-\bbTr{\bar{\mathsf{S}}_{\mathsf{W}} X_{*}^\intercal}-\bbTr{X_{*} \bar{\mathsf{S}}_{\mathsf{W}}^\intercal}\right\}+ \overline{\mathsf{W}}\\
&\ge \min_{\mathbb{L},X}\left\{ \bbTr{\bar{\mathbb{S}}_{\mathsf{W}}\mathbb{L}}
-\bbTr{\bar{\mathsf{S}}_{\mathsf{W}} X^\intercal}-\bbTr{X \bar{\mathsf{S}}_{\mathsf{W}}^\intercal}\right\}+ \overline{\mathsf{W}}.
\end{align*}
The first inequality follows from the definition of the optimizer.
The second line is due to Lemma \ref{lem:Briskrep}.
To get the third line, we apply Lemma \ref{lem:holevo}.
In the last line, the optimization is subject to the constraints stated in the theorem, in particular $\mathbb{L}\geq {X} X^\intercal$.
\end{proof}
The main difference from the Nagaoka-Hayashi bound for the point estimation setting \cite{lja2021} is that
there is no local unbiasedness constraint.
The next result shows that the proposed Bayesian Nagaoka-Hayashi bound can be computed efficiently as a semidefinite programming (SDP) problem.
\begin{proposition}
The Bayesian Nagaoka-Hayashi bound can be computed by semidefinite programming.
\end{proposition}
\begin{proof}
To put the Bayesian Nagaoka-Hayashi bound in the SDP form, we write the first three terms of Eq.~\eqref{def:BNHbound} as
\begin{multline}
\bbTr{\bar{\mathbb{S}}_{\mathsf{W}}\mathbb{L}}-\bbTr{\bar{\mathsf{S}}_{\mathsf{W}} X^\intercal}-\bbTr{X \bar{\mathsf{S}}_{\mathsf{W}}^\intercal}\\
= \bbTr{\left(\begin{array}{cc}\bar{\mathbb{S}}_{\mathsf{W}} & -\bar{\mathsf{S}}_{\mathsf{W}} \\
-{\bar{\mathsf{S}}_{\mathsf{W}}}^\intercal & {\bar{\mathsf{S}}_{\mathsf{W}}}^\intercal (\bar{\mathbb{S}}_{\mathsf{W}})^{-1}\bar{\mathsf{S}}_{\mathsf{W}}\end{array}\right)
\left(\begin{array}{cc}\mathbb{L} & X \\ X^\intercal & 1\end{array}\right) }\\
-\bbTr{{\bar{\mathsf{S}}_{\mathsf{W}}}^\intercal (\bar{\mathbb{S}}_{\mathsf{W}})^{-1}\bar{\mathsf{S}}_{\mathsf{W}}}.
\end{multline}
Clearly, this is a semidefinite programming problem, since the constraint on the variable, $\mathbb{L}\ge {X} X^\intercal$, is equivalent to the positive semidefinite condition:
\[
\left(\begin{array}{cc}\mathbb{L} & X \\ X^\intercal & 1\end{array}\right)\ge0. \]
The other constraints on $\mathbb{L}$ and $X$ can also be expressed as trace conditions on the variable (see Ref.~\cite{lja2021}).
\end{proof}
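To make the SDP formulation concrete, the following sketch (ours) computes ${\cal C}_\mathrm{NH}$ with \texttt{cvxpy} for a parameter-independent weight, reusing the averaged quantities $\bar S$, $\bar S_1$, $\bar S_2$ and the moment matrix $M2$ from the previous snippet; the block structure of the variable directly mirrors the positive semidefinite condition above.
\begin{verbatim}
# Sketch (ours): the Bayesian Nagaoka-Hayashi bound as an SDP, for a
# parameter-independent weight W and the hypothetical qubit model from
# the previous snippet (n = d = 2; Sbar, Sbar1, Sbar2, M2 as above).
import numpy as np
import cvxpy as cp

d, n = 2, 2
W = np.eye(n)
SW = np.kron(W, Sbar)                      # Sbar_W = W (x) Sbar
SWj = [W[j, 0] * Sbar1 + W[j, 1] * Sbar2 for j in range(n)]
Wbar = np.trace(W @ M2)                    # averaged weight term

# One Hermitian PSD variable G = [[LL, X], [X^T, I]]; its blocks encode
# the theorem's constraints through the Schur complement.
G = cp.Variable((n * d + d, n * d + d), hermitian=True)
LL = G[:n * d, :n * d]
X = [G[j * d:(j + 1) * d, n * d:] for j in range(n)]
constraints = [G >> 0,
               G[n * d:, n * d:] == np.eye(d)]
constraints += [X[j] == X[j].H for j in range(n)]        # X_j Hermitian
constraints += [G[j * d:(j + 1) * d, k * d:(k + 1) * d] ==
                G[k * d:(k + 1) * d, j * d:(j + 1) * d]
                for j in range(n) for k in range(j + 1, n)]  # L_jk = L_kj

obj = cp.real(cp.trace(SW @ LL)) \
      - 2 * sum(cp.real(cp.trace(SWj[j] @ X[j])) for j in range(n)) \
      + Wbar
prob = cp.Problem(cp.Minimize(obj), constraints)
prob.solve(solver=cp.SCS)                  # any complex-SDP-capable solver
print("Bayesian Nagaoka-Hayashi bound:", prob.value)
\end{verbatim}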
When estimating two parameters, the Bayesian Nagaoka-Hayashi bound \eqref{def:BNHbound} reduces to an optimization problem over only the variable $X$.
We name this lower bound the Bayesian Nagaoka bound.
\begin{theorem}[Bayesian Nagaoka bound]
When the number of parameters is equal to two, the Bayesian Nagaoka-Hayashi bound \eqref{def:BNHbound} is written as follows.
\begin{multline}\label{def:BNbound}
{\cal C}_\mathrm{N}:=\min_{X=(X_{1},X_{2})}\big\{ \bbTr{\mathrm{sym}_{+}\left(\sqrt{\bar{\mathbb{S}}_{\mathsf{W}}}XX^{\intercal}\sqrt{\bar{\mathbb{S}}_{\mathsf{W}}}\right)}\\
\hspace{8mm}+\bbTrAbs{\mathrm{sym}_{-}\left(\sqrt{\bar{\mathbb{S}}_{\mathsf{W}}}XX^{\intercal}\sqrt{\bar{\mathbb{S}}_{\mathsf{W}}}\right)}\\
-\bbTr{\bar{\mathsf{S}}_{\mathsf{W}} X^\intercal}-\bbTr{X \bar{\mathsf{S}}_{\mathsf{W}}^\intercal}\big\}+ \overline{\mathsf{W}}.
\end{multline}
Here the optimization is subject to each $X_j$ being Hermitian.
\end{theorem}
In this theorem, $\mathrm{sym}_{\pm}(\mathbb{A}):= \frac12 (\mathbb{A}\pm \mathbb{A}^{\intercal})$ denotes the symmetrized (anti-symmetrized) matrix with respect to the first Hilbert space of the extended Hilbert space $\mathbb{H}={\mathbb C}^n\otimes{\cal H}$, i.e.,
for an operator $\mathbb{A}=[\mathbb{A}_{jk}]$ on $\mathbb{H}$, $[\mathrm{sym}_{\pm}(\mathbb{A})]_{jk}=(\mathbb{A}_{jk}\pm\mathbb{A}_{kj})/2$. $\bbTrAbs{\mathbb{A}}$ denotes the sum of the absolute values of the eigenvalues of $\mathbb{A}$.
\begin{proof}
This theorem is proven by using Lemma \ref{lem:h2} in Appendix.
\end{proof}
We next turn our attention to the Bayesian Holevo bound, which is in general lower than
the Bayesian Nagaoka-Hayashi bound.
\begin{theorem}[Bayesian Holevo bound]\label{thm:BHbound}
\begin{multline}\label{def:BHbound}
{\cal C}_\mathrm{NH}\ge {\cal C}_\mathrm{H}:=\min_{X:\mathrm{Hermitian}}\big\{ \sfTr{\mathrm{Re}\, \mathsf{Z}_{\mathsf{W}}[X]}\\
\hspace{20mm}+\sfTrAbs{\mathrm{Im}\, \mathsf{Z}_{\mathsf{W}}[X]}\\
-\opTr{\bar{\mathsf{S}}_{\mathsf{W}}^\intercal X}-\opTr{X^\intercal\bar{\mathsf{S}}_{\mathsf{W}}}\big\}+ \overline{\mathsf{W}},
\end{multline}
where $\mathsf{Z}_{\mathsf{W}}[X]$ is the $n\times n$ Hermitian matrix defined by
\[
\mathsf{Z}_{\mathsf{W}}[X]:= \opTr{\bar{\mathbb{S}}_{\mathsf{W}}XX^\intercal}.
\]
\end{theorem}
In the above expression, $\mathrm{Re}\, \mathsf{A}$ ($\mathrm{Im}\, \mathsf{A}$) denotes the component-wise real (imaginary) part of a matrix $\mathsf{A}\in{\mathbb C}^{n\times n}$,
and $\sfTrAbs{\mathsf{A}}$ denotes the sum of the absolute values for the eigenvalues of $\mathsf{A}$.
In the Bayesian Holevo bound, the optimization is subject only to each $X_j$ being Hermitian.
This is in contrast to point estimation under the local unbiasedness condition.
\begin{proof}
This theorem is due to Lemma \ref{lem:h3} in Appendix.
\end{proof}
\subsection{Parameter independent weight matrix}
To make a comparison to existing lower bounds in the literature,
we set the weight matrix to be parameter independent.
Then, quantities \eqref{def:bayesS} and \eqref{def:bayesT} reduce to
\begin{align}
\bar{\mathbb{S}}_{\mathsf{W}}&=\mathsf{W}\otimes \bar{S},\\
\bar{\mathsf{S}}_{\mathsf{W},j}&=\sum_k\mathsf{W}_{jk}\bar{S}_k,
\end{align}
where $\bar{S}=\int_\Theta \,d\theta\,p_{\mathrm prior}(\theta) S_\theta$
and $\bar{S}_j=\int_\Theta \,d\theta\,p_{\mathrm prior}(\theta) \theta_j S_\theta$ as before.
In this case, $\bar{\mathbb{S}}_{\mathsf{W}}$ exhibits a tensor product structure.
Then, we can apply Lemma \ref{lem:h1} in Appendix to simplify the Bayesian Holevo bound as follows.
\begin{corollary}
For a parameter independent weight matrix, the Bayesian Holevo bound is
\begin{multline}\label{def:BHbound2}
{\cal C}_\mathrm{H}:=\min_{X:\mathrm{Hermitian}}\big\{ \sfTr{\mathsf{W}\mathrm{Re}\, \mathsf{Z}_{\bar{S}}[X]}+\sfTrAbs{\mathsf{W}\mathrm{Im}\, \mathsf{Z}_{\bar{S}}[X]}\\
-\sfTr{\mathsf{W}\mathsf{H}_{\bar{S}}[X]}-\sfTr{\mathsf{W}\mathsf{H}_{\bar{S}}[X]^\intercal}\big\}+ \overline{\mathsf{W}},
\end{multline}
where $\mathsf{Z}_{\bar{S}}[X]$ and $\mathsf{H}_{\bar{S}}[X]$ are
$n\times n$ matrices defined by
$\mathsf{Z}_{\bar{S},jk}[X]:= \opTr{\bar{S}X_kX_j}$
and $\mathsf{H}_{\bar{S},jk}[X]:=\opTr{\bar{S}_jX_k}$, respectively.
\end{corollary}
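Because of the trace-absolute-value term, this minimization is not literally an SDP, but for small examples the objective can be minimized directly. The following sketch (ours; it reuses the hypothetical model of the earlier snippets) does so with a derivative-free optimizer; comparing the printed values gives a quick numerical check of ${\cal C}_\mathrm{NH}\ge{\cal C}_\mathrm{H}\ge{\cal C}_\mathrm{PRD}$.
\begin{verbatim}
# Sketch (ours): minimizing the Bayesian Holevo bound objective of the
# corollary over Hermitian X_1, X_2 for the hypothetical qubit model
# (Sbar, Sbar1, Sbar2, M2 as above).  Simple, not efficient.
import numpy as np
from scipy.optimize import minimize

paulis = [np.eye(2),
          np.array([[0, 1], [1, 0]]),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]])]

def herm(v):                  # 4 real numbers -> 2x2 Hermitian matrix
    return sum(a * P for a, P in zip(v, paulis)).astype(complex)

W = np.eye(2)
Wbar = np.trace(W @ M2)
Sb = [Sbar1, Sbar2]

def objective(v):
    X = [herm(v[:4]), herm(v[4:])]
    Z = np.array([[np.trace(Sbar @ X[k] @ X[j]) for k in range(2)]
                  for j in range(2)])               # Z_jk = Tr[Sbar X_k X_j]
    H = np.array([[np.real(np.trace(Sb[j] @ X[k])) for k in range(2)]
                  for j in range(2)])               # H_jk = Tr[Sbar_j X_k]
    absIm = np.sum(np.abs(np.linalg.eigvals(W @ Z.imag)))
    return (np.trace(W @ Z.real) + absIm
            - np.trace(W @ H) - np.trace(W @ H.T) + Wbar).real

res = minimize(objective, x0=np.zeros(8), method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-9, "fatol": 1e-12})
print("Bayesian Holevo bound (numerical):", res.fun)
\end{verbatim}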
When the number of parameters is two, the Bayesian Nagaoka bound takes the following explicit form
by applying Lemma \ref{lem:2para_indep} in Appendix.
\begin{corollary}
For a two-parameter estimation with a parameter independent weight matrix, the Bayesian Nagaoka bound is
expressed as
\begin{multline}\label{def:BNbound2}
{\cal C}_\mathrm{N}:=\min_{X:\mathrm{Hermitian}}\big\{ \sfTr{\mathsf{W}\mathrm{Re}\, \mathsf{Z}_{\bar{S}}[X]}\\
\hspace{2cm}+\sqrt{\Det{\mathsf{W}}}\,\opTrAbs{\bar{S}(X_{1}X_{2}-X_{2}X_{1})}\\
-\sfTr{\mathsf{W}\mathsf{H}_{\bar{S}}[X]}-\sfTr{\mathsf{W}\mathsf{H}_{\bar{S}}[X]^\intercal}\big\}+ \overline{\mathsf{W}}.
\end{multline}
\end{corollary}
\subsection{Relation to the generalized Personick bound}
We claim that the proposed Bayesian Nagaoka-Hayashi bound is tighter than
the generalized Personick bound. To show this, we have the following statement.
\begin{proposition}
${\cal C}_\mathrm{H}\ge{\cal C}_\mathrm{PRD}$, and hence, ${\cal C}_\mathrm{NH}\ge{\cal C}_\mathrm{PRD}$.
\end{proposition}
\begin{proof}
First, we take a $\theta$-independent weight matrix of rank one, $\mathsf{W}=\mathsf{c}\,\mathsf{c}^\intercal$ with
$\mathsf{c}\in\mathbb{R}^n$.
Next, we drop the second term in the minimization \eqref{def:BHbound} to
obtain a lower bound for ${\cal C}_\mathrm{H}$.
This gives the desired result:
\begin{align*}
{\cal C}_\mathrm{H}&\ge \min_{X} \big\{\sum_{jk}\mathsf{c}_j\mathsf{c}_k\big(\tfrac12\opTr{\bar{S}(X_jX_k+X_kX_j)}\nonumber\\
&\hspace{15mm}- \opTr{\bar{S}_jX_k}- \opTr{X_j\bar{S}_k}\big)\big\}+\mathsf{c}^\intercal\overline{\theta \theta^\intercal}\mathsf{c}\\
&= \min_{X} \{\langle X_\mathsf{c},X_\mathsf{c}\rangle_{\bar{S}}
-\langle X_\mathsf{c},L_\mathsf{c}\rangle_{\bar{S}}-\langle L_\mathsf{c},X_\mathsf{c}\rangle_{\bar{S}}
\}+\mathsf{c}^\intercal\overline{\theta \theta^\intercal}\mathsf{c}\\
&= \min_{X} \{\langle X_\mathsf{c}-L_\mathsf{c},X_\mathsf{c}-L_\mathsf{c}\rangle_{\bar{S}}\}
-\langle L_\mathsf{c},L_\mathsf{c}\rangle_{\bar{S}}+\mathsf{c}^\intercal\overline{\theta \theta^\intercal}\mathsf{c}\\
&=\mathsf{c}^\intercal\overline{\theta \theta^\intercal}\mathsf{c}-\langle L_\mathsf{c},L_\mathsf{c}\rangle_{\bar{S}}\\
&=\mathsf{c}^\intercal(\overline{\theta \theta^\intercal}-K)\mathsf{c},
\end{align*}
where we set $X_\mathsf{c}=\sum_j\mathsf{c}_jX_j$ and $L_\mathsf{c}=\sum_j\mathsf{c}_jL_j$.
Since this is true for any choice of $\mathsf{c}$, we obtain the relation ${\cal C}_\mathrm{H}\ge{\cal C}_\mathrm{PRD}$.
\end{proof}
\section{Conclusion and outlook}
In summary, we have proposed a new lower bound for the Bayes risk, called the Bayesian Nagaoka-Hayashi bound.
This bound can in turn be bounded from below by a Bayesian version of the Holevo bound.
We showed that our lower bounds are tighter than the existing lower bound proposed by Rubio and Dunningham.
The proposed Bayesian lower bounds are based on the idea developed in our previous publication \cite{lja2021}.
In this paper, we only derived the bounds, and hence there are many future directions.
First, the achievability of the newly proposed bounds needs to be analyzed, both in the asymptotic and in the non-asymptotic regime.
For example, Ref.~\cite{gill08} investigated the first-order asymptotics of the Bayes risk.
Second, relations to other Bayesian bounds need to be examined.
Third, an extension that includes random parameters in the presence of nuisance parameters will be important.
Non-random parameter estimation with nuisance parameters has been investigated recently \cite{syh2021,tad2020}; extensions of these formulations to the Bayesian setting will be presented in future work.
\section*{Acknowledgment}
This work was partly supported by JSPS KAKENHI Grant Numbers JP21K11749 and JP21K04919.
The author would like to thank Mr.~L. Conlon and Dr.~S.M. Assad for collaboration at the early stage of the project.
\subsection*{Acknowledgements}
MVdH was supported by the Simons Foundation under the MATH + X program, the National Science Foundation under grant DMS-1815143, and the corporate members of the Geo-Mathematical Imaging Group at Rice University.
JI was supported by the Academy of Finland (projects 332890 and 336254).
We thank Chunquan Yu for help with composing figure~\ref{fig:basic}.
\section{Introduction}
We establish spectral rigidity for spherically symmetric manifolds with boundary and interfaces determined by discontinuities in the metric. We study the recovery of a (radially symmetric Riemannian) metric or wave speed containing jump discontinuities along finitely many $C^\infty$ hypersurfaces. To our knowledge, it is the first such result pertaining to a manifold with boundary and a piecewise continuous metric.
Terrestrial planets in our solar system are approximately spherically symmetric. On the one hand, the deviation from such a symmetry becomes apparent only at high eigenfrequencies. On the other hand, our results provide a stable approximation upon truncating the spectrum of eigenfrequencies. Discontinuities arise largely due to phase transitions. Hence, their radial depths play an important role in determining the thermal structure and chemical composition of planets as well as the dynamics of their interiors \cite{SchubertPhaseTransition}. The question of spectral rigidity is behind the validity of PREM \cite{DZIEWONSKIPREM} which is still widely used as a reference in linearized tomography. More interestingly, in space exploration such as the current NASA's InSight mission to Mars \cite{lognonne:hal-02526740}, with a single data point, spectral data could provide the leading information about its interior; other missions are being proposed.
The results presented here extend our previous result \cite{HIKRigidity}, where we proved spectral rigidity for a smooth metric on a radial manifold. Allowing for certain discontinuities in the metric adds a new level of challenge for several reasons. First, the geodesics in such a manifold are reflected and transmitted when they hit an interface, creating a complex geometry for the analysis. In addition, we allow such geodesics to hit an interface at certain critical angles, where a scattered ray can intersect an interface tangentially or ``glide'' along an interface. We also recover the locations of the interfaces and do not assume that they are known.
We require the so-called Herglotz condition while allowing an unsigned curvature; that is,
curvature can be everywhere positive or it can change sign, and we
allow for conjugate points. Spherically symmetric manifolds with
boundary are models for planets, the preliminary reference Earth model
(PREM) being the prime example. Specifically, restricting to toroidal
modes, our spectral rigidity result determines the shear wave speed of
Earth's mantle in the rigidity sense.
The method of proof relies on a trace formula, relating the spectrum
of the manifold with boundary to its length spectrum, and the
injectivity of the periodic broken ray transform. Specifically, our
manifold is the Euclidean ball $M = \bar B(0,1)\subset \mathbb R^3$, with the metric $g(x) =
c^{-2}(\abs{x}) e(x)$, where $e$ is the standard Euclidean metric and
$c \colon (0,1]_r \to (0,\infty)$ is a function satisfying suitable
conditions, where $r = |x|$ is the radial coordinate.
We work in dimension three but our result on length spectral rigidity (Theorem \ref{thm:blspr-multilayer}) carries over to higher dimensions, and our methods to prove spectral rigidity (Theorem \ref{t: spectral rigiditiy}) may be generalized to higher dimensions.
We assume $c(r)$ has a jump discontinuity at a finite set of values $r = r_1, \dots, r_K$; that is, $\lim_{r \to r_i^-}c(r) \neq \lim_{r \to r_i^+}c(r)$ for each $i$.
Our assumption is the \emph{smooth Herglotz
condition}: $\Der{r}(r/c(r))>0$ is satisfied everywhere away from the discontinuities of $c$, but we note that $c$ is allowed to either increase or decrease across an interface.
We note that the natural extension of the Herglotz condition from smooth $c$ to our case, where $c$ has discontinuities, would be to view $c$ as a distribution and require $\Der{r}(r/c(r))>0$ in the distributional sense. If $c$ has a jump discontinuity at $r = r_i$, this distributional condition implies $\lim_{r \to r_i^-}c(r) > \lim_{r \to r_i^+}c(r)$. This would be too restrictive, since radial models of Earth (PREM) and Mars (T13) (see \cite{KhanMars}) satisfy the smooth Herglotz condition but not this stronger distributional Herglotz condition: the jump across the core-mantle boundary differs in sign from the jumps at the other interfaces. Hence, our smooth Herglotz condition is weaker, allowing the jump across an interface to have either sign.
We also allow trapped rays that never interact with the boundary. Such rays just correspond to small but nonzero boundary amplitudes of modes. The assumption $\Der{r}(r/c(r))>0$ when $c$ is smooth is the \emph{Herglotz condition} first discovered by Herglotz~\cite{H:kinematic} and
used by Wiechert and Zoeppritz~\cite{WZ:kinematic}.
By a maximal geodesic we mean a unit speed geodesic on the Riemannian
manifold $(M,g)$ with each endpoint at the boundary $\partial M$ or an interface. A broken ray or a billiard trajectory is a
concatenation of maximal geodesics satisfying the reflection condition
of geometrical optics at both inner and outer boundaries of $M$, and Snell's law for geometric optics at the interfaces. If the initial and final points
of a broken ray coincide at the boundary or an interface, we call it a periodic broken ray -- in
general, we would have to require the reflection condition at the
endpoints as well, but in the assumed spherical symmetry it is
automatic. We will describe later (Definition \ref{d: ccc}) what will
be called the \emph{countable conjugacy condition} which ensures
that up to rotation only countably many maximal geodesics have conjugate endpoints.
The length spectrum of a manifold $M$ with boundary is the set of
lengths of all periodic broken rays on $M$. If the
radial sound speed is $c$, we denote the length spectrum by $\operatorname{lsp}(c)$.
We will introduce in Definition \ref{simple closed ray} the notion of closed \emph{{basic}{} rays}, which are certain periodic rays that stay completely within a single layer. The set of lengths of such rays forms the {basic}{} length spectrum $\operatorname{blsp}(c)$.
We note that every broken ray is contained in a
unique two-dimensional plane in $\mathbb R^n$ due to symmetry
considerations. Therefore, it will suffice to consider the case $n=2$;
the results regarding geodesics and the length spectrum carry over to
higher dimensions. We denote the Neumann spectrum of the Laplace--Beltrami operator in
three dimensions, $\Delta_c = c^3 \nabla \cdot c^{-1} \nabla$, on $M$
by $\operatorname{spec}(c)$, where we impose Neumann-type boundary
conditions on both the inner and outer boundary.
The spectrum $\operatorname{spec}(c)$ is counted with multiplicity; it is not merely the set of eigenvalues.
Some earlier results in tensor tomography, the methods of which are
related to ours, may be found in
\cite{Anasov14,Beurling15,Sharaf97,UhlSharaf}. Let us now enumerate the various geometric assumptions we make in this manuscript for easy reference.
\subsection{Herglotz and other conditions}
They are as follows:
\begin{enumerate}
\item[(A1)] \label{A1}
``Periodic conjugacy condition.'' This is an analog of the clean intersection hypothesis used in \cite{Mel79,DG75,HIKRigidity};
(see Definition \ref{d: pcc}).
\item [(A2)]
``Principal amplitude injectivity condition.'' This is an analog of assuming \emph{simplicity} of the length spectrum (see Section \ref{s: geometric spreading injectivity condition}).
\item [(A3)]
``Countable conjugacy condition'' (Definition \ref{d: ccc}).
\item [(A4)]
Smooth Herglotz condition: $\frac{d}{dr} \frac{r}{c(r)} > 0$ away from the discontinuities.
\end{enumerate}
These assumptions allow us to prove that the singular support of the wave trace includes the {basic}{} length spectrum. Assumption (A1) is standard (it is normally referred to as the clean intersection hypothesis when $c$ is smooth) when calculating the trace singularity by a stationary phase method: it ensures that the critical manifolds are non-degenerate and that the phase function is Bott-Morse nondegenerate (see \cite{DG75, Mel79}). A ubiquitous issue in computing a trace formula is the possibility of cancellations in the wave trace between the contributions of two families of rays with the same length that are not time reversals of each other. One usually assumes ``simplicity'' of the length spectrum, so that any two rays with a given period are either rotations or time reversals of each other; since our trace formula computation is more explicit, we instead use the slightly weaker assumption (A2) to take care of this issue. Assumptions (A1), (A2), and (A4) are needed for the trace formula (Proposition \ref{prop:Trace Formula}), and all four assumptions are needed for spectral rigidity (Theorem \ref{t: spectral rigiditiy}), while only assumptions (A3) and (A4) are used to prove length spectral rigidity (Theorem \ref{thm:blspr-multilayer}). Below, we provide a chart for easy reference indicating which assumptions are needed for each theorem:
\begin{table}[h]
\begin{center}
\begin{tabular}{lcccc} & (A1) & (A2) & (A3) & (A4) \\ \cline{2-5}
\multicolumn{1}{l|}{Trace formula} & \multicolumn{1}{c|}{X} & \multicolumn{1}{c|}{X} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{X} \\ \cline{2-5}
\multicolumn{1}{l|}{Length spectral rigidity} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{X} & \multicolumn{1}{c|}{X} \\ \cline{2-5}
\multicolumn{1}{l|}{Spectral rigidity} & \multicolumn{1}{c|}{X} & \multicolumn{1}{c|}{X} & \multicolumn{1}{c|}{X} & \multicolumn{1}{c|}{X} \\ \cline{2-5}
\end{tabular}
\end{center}
\end{table}
\subsection{Main results}
\label{s: main results}
Here we present our main theorems, which follow a discussion of the notation we use for the geometry. Let $A(r', r'') = \bar B(0, r'') \setminus B(0, r') \subset \mathbb R^3$ be the closed annulus in Euclidean space, where $r''>r'$. Fix $K \in \mathbb N$ and let $r_k \in (0, 1)$ be such that $1=:r_0 > r_1 > \cdots > r_K$.
Assume $c(r)$ has jump discontinuities at each $r_k \in (0, 1)$. Let $\Gamma = \bigcup_k \{ r = r_k\}$ be the collection of interfaces together with $ \partial M$, and denote $\Gamma_k := \{ r = r_k\}$. We sometimes refer to the smooth annular regions $A(r_k, r_{k-1})$ as \emph{layers}. We view $M$ as a Riemannian manifold with (rough) metric $g = c^{-2} dx^2$.
\begin{definition}
Fix any $\varepsilon>0$ and $K\in\mathbb{N}$.
We say that a collection of functions $c_\tau\colon[0,1]\to(0,\infty)$ indexed by $\tau\in(-\varepsilon,\varepsilon)$ is an admissible family of profiles if the following hold:
\begin{itemize}
\item
There are radii $r_k\in(0,1)$ that depend $C^1$-smoothly on $\tau\in(-\varepsilon,\varepsilon)$ so that $1\eqqcolon r_0(\tau)>r_1(\tau)>\dots>r_K(\tau)>0$ for all $\tau\in(-\varepsilon,\varepsilon)$.
\item
For every $\tau\in(-\varepsilon,\varepsilon)$ the function $c_\tau$ is piecewise $C^{1,1}$ and satisfies the smooth Herglotz condition.
\item
The only singular points of each function $c_\tau$ are the radii $r_k(\tau)$ where it has a jump discontinuity.
\item
Within each annulus $A(r_k(\tau),r_{k-1}(\tau))$ the profile $c_\tau$ satisfies the countable conjugacy condition for all $\tau\in(-\varepsilon,\varepsilon)$.
\item
We assume that $(r,\tau)\mapsto c_\tau(r)$ is $C^1$ at all points where $r\notin\{r_1(\tau),\dots,r_K(\tau)\}$.
\end{itemize}
\end{definition}
Recall from the introduction that the length spectrum of a manifold $M$ with boundary is the set of
lengths of all periodic broken rays on $M$, and we denote the length spectrum by $\operatorname{lsp}(c)$. We will introduce in Definition \ref{simple closed ray} the notion of closed \emph{{basic}{} rays}, which are certain periodic rays that stay completely within a single layer. The set of lengths of such rays forms the {basic}{} length spectrum $\operatorname{blsp}(c)$.
Our main theorem provides the rigidity of the basic length spectrum in the presence of ``countable noise''.
Choosing the ``noise'' suitably gives corollaries for the full length spectrum.
Missing or spurious points in the length spectrum or some amount of degeneracy do not matter.
The ``noise'' can be of the same size as the data, and this will play a role in the case of multiple wave speeds.
\begin{theorem}
\label{thm:blspr-multilayer}
Fix any $\varepsilon>0$ and $K\in\mathbb{N}$, and let $c_\tau(r)$ be an admissible family of profiles with discontinuities at $r_k(\tau)$ for all $k=1,\dots,K$.
Let $\operatorname{blsp}(\tau)$ denote the basic length spectrum of
the ball $\bar B(0,1)$
with the velocity profile $c_\tau$.
Suppose $\operatorname{blsp}(\tau)$ is countable for all $\tau$.
Let $S(\tau)$ be any collection of countable subsets of $\mathbb R$ indexed by $\tau$.
If $\operatorname{blsp}(\tau)\cup S(\tau)=\operatorname{blsp}(0)\cup S(0)$ for all $\tau\in(-\varepsilon,\varepsilon)$, then $c_\tau=c_0$ and $r_k(\tau)=r_k(0)$ for all $\tau\in(-\varepsilon,\varepsilon)$ and $k=1,\dots,K$.
\end{theorem}
The theorem has two immediate corollaries.
The first one concerns the whole length spectrum, and the second one the length spectrum of two velocity profiles.
\begin{corollary}[Length spectral rigidity of a layered planet with moving interfaces]
\label{cor:lsp-rig}
Fix any $\varepsilon>0$ and $K\in\mathbb{N}$, and let $c_\tau(r)$ be an admissible family of profiles with discontinuities at $r_k(\tau)$ for all $k=1,\dots,K$.
Suppose that the length spectrum for each $c_\tau$ is countable in the ball $\bar B(0,1)$.
Let $\operatorname{lsp}(\tau)$ and $\operatorname{blsp}(\tau)$ denote the length spectrum and the basic length spectrum of
the ball $\bar B(0,1)$
with the velocity profile $c_\tau$.
Suppose either one of the following holds:
\begin{itemize}
\item
$\operatorname{lsp}(\tau)=\operatorname{lsp}(0)$ for all $\tau\in(-\varepsilon,\varepsilon)$.
\item
$\operatorname{blsp}(\tau)=\operatorname{blsp}(0)$ for all $\tau\in(-\varepsilon,\varepsilon)$.
\end{itemize}
Then $c_\tau=c_0$ and $r_k(\tau)=r_k(0)$ for all $\tau\in(-\varepsilon,\varepsilon)$ and $k=1,\dots,K$.
\end{corollary}
\begin{corollary}[Length spectral rigidity with two polarizations]
\label{cor:2-speeds}
Fix any $\varepsilon>0$ and $K\in\mathbb{N}$, and let $c^i_\tau(r)$ with both $i=1,2$ be an admissible family of profiles with discontinuities at $r_k(\tau)$ for all $k=1,\dots,K$.
Consider all periodic rays which are geodesics within each layer and satisfy the usual reflection or transmission conditions at interfaces, but which can change between the velocity profiles $c^1_\tau$ and $c^2_\tau$ at any reflection and transmission.
Suppose that the length spectrum of this whole family of geodesics, denoted by $\operatorname{lsp}(\tau)$, is countable in the
ball $\bar B(0,1)$.
If $\operatorname{lsp}(\tau)=\operatorname{lsp}(0)$ for all $\tau\in(-\varepsilon,\varepsilon)$, then $c^i_\tau=c^i_0$ for both $i=1,2$ and $r_k(\tau)=r_k(0)$ for all $\tau\in(-\varepsilon,\varepsilon)$ and $k=1,\dots,K$.
\end{corollary}
The ``noise'' set $S(\tau)$ of Theorem~\ref{thm:blspr-multilayer} plays an important role.
One metric is recovered at a time, and all rays that have one leg following the other metric or different legs in different layers are treated as noise.
The proofs of the corollaries are immediate:
\begin{itemize}
\item For Corollary~\ref{cor:lsp-rig}, simply let $S(\tau)=\operatorname{lsp}(\tau)$.
\item For Corollary~\ref{cor:2-speeds}, study the basic length spectra of the profiles $c^1_\tau$ and $c^2_\tau$ independently of each other and again let $S(\tau)=\operatorname{lsp}(\tau)$.
\end{itemize}
\begin{remark}
\label{rmk:variations}
Some variations of Theorem~\ref{thm:blspr-multilayer} and its corollaries hold true.
One can introduce an impermeable core and work with a finite number of layers that do not exhaust the ball.
One can choose to include or exclude rays with reflections from the lower boundary $r_K(\tau)$ and the results remain true for this smaller length spectrum, at least when $r_K$ is independent of $\tau$.
The proofs are immediate adaptations of the one we give.
\end{remark}
Recall that the Neumann spectrum of the Laplace–Beltrami operator is denoted $\operatorname{spec}(c)$, where we impose Neumann-type
boundary conditions (other boundary conditions can be allowed; cf. section \ref{sec: connect to LB}).
\begin{theorem}[Spectral rigidity with moving interfaces]
\label{t: spectral rigiditiy}
Fix any $\varepsilon>0$ and $K\in\mathbb{N}$, and let $c_\tau(r)$ be an admissible family of profiles with discontinuities at $r_k(\tau)$ for all $k=1,\dots,K$.
Suppose that the length spectrum for each $c_\tau$ is countable in the
ball $\bar B(0,1) \subset \mathbb R^3$.
Assume also that the length spectrum satisfies the principal amplitude injectivity condition and the periodic conjugacy condition.
Suppose
$\operatorname{spec}(\tau)=\operatorname{spec}(0)$ for all $\tau\in(-\varepsilon,\varepsilon)$.
Then $c_\tau=c_0$ and $r_k(\tau)=r_k(0)$ for all $\tau\in(-\varepsilon,\varepsilon)$ and $k=1,\dots,K$.
\end{theorem}
\begin{proof}
The spectrum determines the trace of the Green's function by Proposition~\ref{prop:Trace Formula}.
As $\operatorname{spec}(\tau)=\operatorname{spec}(0)$ for all $\tau$, the trace is independent of $\tau$ and so are its singularities.
The singularities are contained in the set $\operatorname{lsp}(\tau)$ by Proposition~\ref{prop:Trace Formula}.
We apply Theorem~\ref{thm:blspr-multilayer} to pass from length spectral information to geometric information.
We set $S(\tau)$ to be the singular support of the trace.
Every length of a {basic}{} periodic broken ray only appears once in the whole length spectrum by assumption, whence there is a singularity for every {basic}{} length.
Therefore $\operatorname{blsp}(\tau)\subset S(\tau)$.
Now Theorem~\ref{thm:blspr-multilayer} implies the claim.
\end{proof}
Planets are full balls, but Theorem~\ref{t: spectral rigiditiy} holds for an annulus as well.
Cf. Remark~\ref{rmk:variations}.
\begin{remark}[Implications for planets]
The theorem is stated for a scalar operator (the Laplace-Beltrami operator), but the proof extends to the radial elastic case and thus to round planets, by considering the toroidal modes associated with the shear wave speed and their corresponding eigenfrequencies.
The proof of the theorem uses a trace formula to recover the {basic}{} length spectrum from the spectrum and then employs the length spectral rigidity results.
See sections \ref{s: toroidal modes} and \ref{sec: connect to LB}, where we initially start the proof of the trace formula using toroidal modes and show why the argument is identical for the scalar Laplace-Beltrami operator. In that case, we work inside an annulus with an inner boundary representing the core-mantle boundary for more generality. By considering toroidal modes, the argument for proving a trace formula for spheroidal modes that involve two wave speeds becomes more transparent and is discussed in section \ref{s: spheroidal modes}. Hence, by considering the spectrum of the radial isotropic elastic operator with natural boundary conditions, our arguments may be generalized to recover both elastic wave speeds using Corollary \ref{cor:2-speeds}.
\end{remark}
\begin{remark}\label{rem: dimension}
We note that the dimension is irrelevant for the length spectral rigidity results; if the sound speed is fixed, the length spectrum is independent of dimension. For spectral rigidity, we assume dimension three to ease the computation of the trace formula since it allows us to compute the leading order asymptotics of the eigenfunctions explicitly.
\end{remark}
This paper is essentially divided into two parts. The first part proves length spectral rigidity. In the second part, we prove the trace formula in our setting and, as a corollary, the spectral rigidity theorem.
\subsection{Reasonableness of radial models}
\label{sec:reasonable-radial}
Spherically symmetric Earth models are widely used in geophysics and there are a number of results showing how well such models fit seismic data. The $P$ and $S$ wave speeds are denoted $c_P$ and $c_S$.
There are several important questions to address when using PREM to analyze seismic data.
\subsubsection*{
Question 1. What is the uncertainty in the best-fitting spherical average profile?}
The classic reference for this question is Lee and Johnson in \cite{Lee1984}. They suggest that the extremal bounds in the upper mantle are around 0.6 km/s (around 6 \%) for $c_P$ and 0.4 km/s for $c_S$ (around 7 \%). In the lower mantle, it is around 0.18 km/s (around 2 \%) for $c_P$, and 0.14 km/s (around 2 \%) for $c_S$. Note that the bounds increase in the lowermost mantle and especially in the crust.
\subsubsection*{
Question 2. What is the standard deviation of the residuals to the spherical average model, as a function of depth?}
In theory, residuals can be calculated as a function of depth for any global tomographic model. However, this information is not always presented. A good, thorough, recent example is the SP12RTS model \cite{koelemeijer2016a}. Their figure 9a shows that variations are smallest in the mid-mantle (standard deviations of around 0.1 \% for $c_P$, 0.2 \% for $c_S$) and increase towards the surface (to around 1.0 \% for both $c_P$ and $c_S$) and towards the CMB (to around 0.3 \% for $c_P$, and 0.5 \% for $c_S$).
\subsubsection*{
Question 3. What is the measurement uncertainty in the wave speed at a given point in a typical tomographic model?
}
Very few groups have given robust estimates of point-wise measurement uncertainties; the best study to date may be the Bayesian study by Burdick and Leki\'{c} \cite{BurdickLekic17}. They find a standard deviation in estimates of $dc_P/c_P$ of 0.25 \% (so, for example, the anomaly in California at 10 km depth might be 1.00 \% $\pm$ 0.25 \%). We are not aware of any similar estimates for $c_S$, but they would most likely be more uncertain.
\subsubsection*{Question 4. In a given region, what is the typical variation in the absolute wavespeed?
}
Near Earth's surface, there are huge lateral variations in wavespeed, for example between continental and oceanic regions (at a depth of 50 km, a mountain belt may have a $c_P$ of 6.1 km/s, while an ocean basin may have a $c_P$ of 8.1 km/s at the same radial coordinate, a variation of 25 \%). However, within a given region type (e.g. 'island arc' or 'mountain belt'), typical variations are around 0.3 km/s for $c_P$ (an authoritative reference is \cite{MooneyLaske98}; see their fig. 3b), which is about 5 \%. Variations in $c_S$ can be larger because $c_S$ is more strongly affected by fluids and temperature (partial melting and anelasticity). The reference given does not address $c_S$.
\section{Unraveling assumptions}
Let us give the relevant definitions and assumptions on the geometry of the problem.
Recalling from the previous section, fix $K \in \mathbb N$ and let $r_k \in (0, 1)$ be such that $1=:r_0 > r_1 > \cdots > r_K$.
Assume $c(r)$ has jump discontinuities at each $r_k \in (0, 1)$. Let $\Gamma = \bigcup_k \{ r = r_k\}$ be the collection of interfaces together with $ \partial M$, and denote $\Gamma_k := \{ r = r_k\}$. We view $M$ as a Riemannian manifold with (rough) metric $g = c^{-2} dx^2$.
We showed in \cite{HIKRigidity} that any rotation symmetric Riemannian manifold with the Herglotz condition is of this form. The same is true in the presence of jumps with essentially the same proof we used in the smooth setting.
\subsection{Geodesics in a spherically symmetric model with interfaces}
On the three-dimensional manifold $M$ the phase space of the unit speed geodesic flow has dimension $5$.
Due to rotation symmetry most of these dimensions are superfluous, and the dimension of the reduced phase space needed to represent all geodesics up to isometries of the manifold is only $2$.
The dimension of the ``reduced phase space'' is $2$ for any ambient dimension $2$ or higher.
Two natural coordinates in this space are the radius $r$ (Euclidean distance to the origin) and the angular momentum denoted as $p$.
Any geodesic is either radial or is contained in a unique plane through the origin, so it suffices to study geodesics in $2$-dimensional disks.
In dimension two, points on the disk can be described with polar coordinates $(r,\theta)$, and a geodesic $\gamma$ can be parameterized as $t \mapsto (r(t), \theta(t))$.
We then have the explicit formula $p = p_{\gamma} = c(r(t))^{-2}r(t)^2\theta'(t)$.
The angular momentum (often called the \emph{ray parameter} associated to $\gamma$) $p$ is conserved, even across discontinuities in the metric.
Therefore trajectories of the geodesic flow in the $(r,p)$-plane are horizontal lines.
Much of the geometry is conveniently encoded in the function $\rho(r)=r/c(r)$.
At a turning point (where $\dot r=0$) we have $\abs{p}=\rho(r)$, and elsewhere $\abs{p}<\rho(r)$.
Therefore the reduced phase space is the subgraph of the function $\rho\colon(0,1]\to(0,\infty)$.
The classical Herglotz condition states that $\rho'(r)>0$ for all $r$.
Three examples are given in figure~\ref{fig:profiles}.
\begin{figure}
\centering
\begin{overpic}[width=0.8\textwidth]{figure-1.pdf}
\put (0,73) {(a)}
\put (35,73) {(b)}
\put (70,73) {(c)}
\end{overpic}
\caption{Three different velocity profiles described in terms of the function $\rho(r)=r/c(r)$. Dashed vertical lines connect the plot with the manifold. The reduced phase space of the geodesic flow is the subgraph of the function $\rho$ and the trajectories are horizontal lines. The Herglotz condition implies that $\rho$ is increasing and thus all horizontal lines starting at the graph can be extended all the way to $r=1$ while staying under the graph. Therefore rays starting at any depth meet the surface. The classical Herglotz condition is satisfied in case (a) above. In case (b) an extended Herglotz condition is satisfied, where $\rho'>0$ in the sense of distributions. The jump at the interface ({\color{red}red}) has to be positive for this to hold. In case (c) the smooth segments satisfy the Herglotz condition but the jump is in the wrong direction. Therefore rays diving just below the corresponding interface ({\color{green}green}) are trapped by total internal reflection. Even in the presence of discontinuities the condition $\rho'>0$ implies that there is no trapping, and jumps in the wrong direction necessarily imply trapping. The Herglotz condition is a convexity condition on the phase space.}
\label{fig:profiles}
\end{figure}
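The phase-space picture is easy to probe numerically. The following sketch (ours; the piecewise-linear profile and interface radius are hypothetical) checks the smooth Herglotz condition on each layer and reads off the sign of the jump of $\rho$ at the interface, which distinguishes cases (b) and (c) of figure~\ref{fig:profiles}:
\begin{verbatim}
# Sketch (ours): checking the smooth Herglotz condition and the jump
# sign for a hypothetical piecewise profile c(r) with one interface.
import numpy as np

r1 = 0.55                                  # hypothetical interface radius
def c(r):                                  # piecewise-smooth wave speed
    return np.where(r > r1, 2.0 - 0.8 * r, 1.5 - 0.5 * r)

rho = lambda r: r / c(r)                   # rho(r) = r / c(r)

# smooth Herglotz condition: rho'(r) > 0 away from the interface
for a, b in [(1e-3, r1 - 1e-6), (r1 + 1e-6, 1.0)]:
    r = np.linspace(a, b, 2000)
    drho = np.gradient(rho(r), r)
    print(f"layer ({a:.3f},{b:.3f}): min rho' = {drho.min():.4f}")

# sign of the jump of rho across the interface (outward direction)
jump = rho(r1 + 1e-9) - rho(r1 - 1e-9)
print("jump of rho at interface:", jump)
\end{verbatim}
A negative jump of $\rho$ as $r$ increases signals trapping: rays with $\rho(r_1^+)<\abs{p}<\rho(r_1^-)$ undergo total internal reflection just below the interface.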
\begin{definition}
A (unit-speed) \emph{broken geodesic} or \emph{ray} in $(M, g)$ is a continuous, piecewise smooth path $\gamma: \mathbb R \supset I \to M$ such that each smooth piece is a unit-speed geodesic with respect to $g_c$ on $M\setminus \Gamma$, intersecting the interfaces $\Gamma$ at a discrete set of times $t_i \in I$. Furthermore, at each $t_i$, if the intersection is
transversal, then Snell's law for reflections and refraction of waves is satisfied. More precisely, a
broken geodesic (parameterized by a time variable) can be written as $\gamma : (t_0, t_1) \cup (t_1, t_2) \cup \cdots \cup
(t_{k-1}, t_k) \to M \setminus \Gamma$, which is a sequence of geodesics concatenated by reflections and refractions obeying Snell's law: for $i = 1, \dots, k-1,$
\[
\gamma(t_i) \in \Gamma,
\qquad \qquad (d\iota_\Gamma)^* (\gamma(t_i),\dot \gamma(t_i^-)) = (d\iota_\Gamma)^* (\gamma(t_i), \dot \gamma(t_i^+)),
\]
where $\iota_\Gamma: \Gamma \to M$ is the inclusion map and $\dot \gamma(t_i^\mp) = \lim_{t \to t_i^\mp} \dot\gamma(t)$. Each restriction
$\gamma\restriction_{(t_i,t_{i+1})}$ is a maximal smooth geodesic that we call a \emph{{leg}{}} of $\gamma$. For each $i$, note that $\gamma(t_i) \in \Gamma_{k_i}$ for some $k_i$. One can view $\gamma$ as a concatenation of all of its {leg}{}s. A {leg}{} $\gamma\restriction_{(t_i, t_{i+1})}$ is \emph{reflected} if the inner products of $\dot \gamma(t_i^+)$ and $\dot \gamma(t_i^-)$ with a normal vector to $\Gamma_{k_i}$ have opposite signs. If they have the same sign, it is a \emph{transmitted {leg}{}}. If $\dot \gamma(t_i^+)$ and $\dot \gamma(t_i^-)$ are equal, then $\gamma\restriction_{(t_{i-1},t_{i+1})}$ is a \emph{grazing {leg}{}} or ray; in this case, $\dot \gamma(t_i^\pm)$ is tangent to $\Gamma$. The only other situation is when $\dot \gamma(t_i^+)$ is tangent to $\Gamma$ while $\dot \gamma(t_i^-)$ is not (or vice versa); in this case $\gamma\restriction_{(t_i, t_{i+1})}$ is called a \emph{gliding ray} or {leg}{} because it travels along $\Gamma_{k_i}$. A ray with no gliding {leg}{}s will be called a non-gliding ray.
Our results will also extend to the elastic setting, which has two wave speeds $c_P$ and $c_S$ corresponding to pressure waves and shear waves. In this case, the definition of broken rays is identical except that each {leg}{} can either be a geodesic with the metric $g_{c_P}$ or $g_{c_S}$.
\end{definition}
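In the spherically symmetric setting, Snell's law is most transparent in terms of the conserved ray parameter $p=\rho(r)\sin i$, where $i$ is the angle between the ray and the radial direction. The following sketch (ours; all numerical values are hypothetical) computes the transmitted angle across an interface or detects total internal reflection:
\begin{verbatim}
# Sketch (ours): Snell's law at a spherical interface via the conserved
# ray parameter p = (r/c) sin(i).  Speeds c_minus, c_plus on the two
# sides of an interface at radius r_i are hypothetical.
import numpy as np

def transmit(i_inc, r_i, c_minus, c_plus):
    """Transmitted angle at r = r_i, or None for total reflection."""
    p = (r_i / c_minus) * np.sin(i_inc)   # conserved angular momentum
    s = p * c_plus / r_i                  # sine of transmitted angle
    if abs(s) > 1.0:
        return None                       # total internal reflection
    return np.arcsin(s)

print(transmit(np.deg2rad(30), 0.55, 1.2, 1.6))   # transmitted
print(transmit(np.deg2rad(75), 0.55, 1.2, 1.6))   # totally reflected
\end{verbatim}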
We follow the discussion and notation in \cite[Section 2.1]{HIKRigidity}. Assume for the moment that $n=2$, since due to spherical symmetry rays are confined to a disk, and equip the annulus $M=A(r_K,1)$ with polar coordinates $(r,\theta)$. Fix a broken geodesic $\gamma$ whose endpoints are both located at a particular interface $\Gamma_i$ for some $i \in \{0,\dots, K\}$.
We denote by $\alpha= \alpha(p)$ the epicentral distance between the two endpoints of $\gamma$, where $p = p_\gamma$ is the ray parameter associated to $\gamma$; it is the angular distance that $\gamma$ travels.
It may happen that $\alpha(p) > 2\pi$ if the geodesic winds around the origin several times.
Each {leg}{} can be parameterized as
\begin{equation}
t \mapsto (r(t),\theta(t))
\end{equation} over some maximal interval $I$ associated to the {leg}{}. Using both of the conserved quantities
$c(r(t))^{-2}[r'(t)^2 + r(t)^2\theta'(t)^2] = 1$ (unit speed) and $p = c(r(t))^{-2}r(t)^2\theta'(t)$ (angular momentum), we can compute $\alpha_\gamma$ explicitly following \cite[Equation (2.2)]{HIKRigidity}.
Let $R^*$ be the smallest radius that $\gamma$ passes through; there is a unique $k$ such that $r_{k} \leq R^* < r_{k-1}$. We refer to $R^*$ as the \emph{radius} of $\gamma$; it may coincide with an interface or a boundary. Next, $\gamma$ will have a certain number of {leg}{}s in each of the annular regions $A(r_k, r_{k-1}), A(r_{k-1}, r_{k-2}),\dots, A(r_1,r_0)$. Since $\gamma$ might stay within just a single (or more) annular region, there could be zero {leg}{}s in one or more of the annuli. By definition of $R^*$, $\gamma$ has no {leg}{}s in $A(r_K,r_k)$. We denote by $n_j$ half of the number of {leg}{}s of $\gamma$ in $A(r_j,r_{j-1})$.
Next we introduce the quantity
$\wkb^2:= c(r)^{-2}-r^{-2}p^2$.
Analogously to \cite{HIKRigidity}, the length of a broken geodesic with only transmitted {leg}{}s, starting at $r = r_0$ and ending at $r = 1$, is an integer multiple of the quantity
\begin{equation}\label{e: transmitted length L}
L(r_0,p) :=
\int_{r_0}^{1} \frac{1}{c(r')^2\wkb(r';p)} \operatorname{d} \! r'.
\end{equation}
If $r_0 = R^*$ is the radius of $\gamma$, then $R^*$ is a function of $p$ and we will write $L(p)$.
With this notation, and using the computation for the epicentral distance in \cite{HIKRigidity}, one can also find an explicit formula for $\alpha(p)$:
\begin{align}
\alpha(p)
&= \sum_{j=1}^{k-1} 2n_j\int^{r_{j-1}}_{r_{j}} \frac{p}{(r')^2\wkb(r',p)} \operatorname{d} \! r'
+2n_k \int^{r_{k-1}}_{R^*} \frac{p}{(r')^2\wkb(r',p)} \operatorname{d} \! r'.
\end{align}
\begin{definition}
Following Hron in \cite{HronCriteria}, those waves which travel from the source to the receiver along different paths but
with identical travel-times are kinematically equivalent and are called kinematic analogs. We will refer to two different rays connecting source and receiver with the same ray parameter and travel time as \emph{kinematic analogs.} The groups of kinematic analogs may be further divided into subgroups of waves whose amplitude curves are identical. The members of this subgroup of phases may be called dynamic analogs. A sufficient condition for kinematic equivalence of two different broken rays $\gamma_1$ and $\gamma_2$ is they must have an equal number of {leg}{}s in each layer along their paths. Since $\alpha(p_\gamma)$ just measures the epicentral distance between the endpoints, $\alpha(p_\gamma)$ will be the same for $\gamma$ and all of its kinematic analogs. We will say two non-gliding rays connecting source and receiver are \emph{dynamic analogs} if they have the same ray parameter, travel time, and inside each $A(r_k, r_{k-1})$, they have the same number of {leg}{}s that are reflections starting at $\Gamma_k$, transmission starting at $\Gamma_k$, reflections starting at $\Gamma_{k-1}$ and transmissions starting at $\Gamma_{k-1}$. This is a sufficient condition to ensure that the principal amplitudes of the corresponding waves are identical. See \cite{HronCriteria} for examples and figures of kinematic and dynamic analogs.
\end{definition}
For length spectral rigidity, we only require what we term \emph{{basic}} closed rays.
\begin{definition}[Basic rays]\label{simple closed ray}
A broken ray is called \emph{{basic}{}} if either it stays within a single layer and all of its {leg}{}s are reflections from a single interface (type 1), or it is a \emph{radial} ray contained in a single layer (type 2).
A \emph{radial} ray is defined to be a ray with zero epicentral distance.
It necessarily reflects off two interfaces and cannot be of type 1.
The first type of {basic}{} rays are analogous to the \emph{turning} rays in \cite{HIKRigidity} that formed $\operatorname{lsp}(c)$ in the notation there. A closed {basic}{} ray of the first type is periodic, stays within a single layer, and consists only of {leg}{}s reflected from a single interface.
We have illustrated {basic}{} and other periodic rays in Figure~\ref{fig:basic}.
The lengths of periodic {basic}{} rays will suffice to prove length spectral rigidity, so we define $\operatorname{blsp}(c)$ as the set of lengths of all periodic {basic}{} rays.
\end{definition}
\begin{figure}
\centering
\includegraphics[width=0.28\textwidth]{I_m2_n9.pdf}
\includegraphics[width=0.28\textwidth]{K_m5_n29.pdf}
\includegraphics[width=0.28\textwidth]{P_m5_n23.pdf} \\[0.25cm]
\includegraphics[width=0.28\textwidth]{PcP.pdf}
\includegraphics[width=0.28\textwidth]{PKPab.pdf}
\includegraphics[width=0.28\textwidth]{PKIKP.pdf} \\[0.25cm]
\includegraphics[width=0.28\textwidth]{SP.pdf}
\includegraphics[width=0.28\textwidth]{SKKS.pdf}
\includegraphics[width=0.28\textwidth]{PKJKP.pdf}
\caption{Some periodic rays in a radial planet with two interfaces (PREM). The top row illustrates examples of \textit{basic} rays (with different winding numbers), the middle row illustrates rays (left-to-right: PcP, PKPab, PKIKP) that are not basic and only probe the \textit{P} wave speed, and the bottom row also illustrates examples of non-basic rays (left-to-right: SP, SKKS, PKJKP) that probe both \textit{P} (in blue) and \textit{S} (in red) wave speeds. Acknowledgement: Chunquan Yu.}
\label{fig:basic}
\end{figure}
Computing the length and epicentral distance of basic rays is much simpler. Let $\gamma$ be a basic ray with radius $R^*$ and ray parameter $p$ that lies inside $A(r_{k-1},r_k)$. Then there is a unique $N(p) \in \mathbb{N}$ so that the length, denoted $T(p)$, of $\gamma$ is
\[
T(p) = 2N(p)L(p) = 2N(p)\int_{R^*}^{r_{k-1}} \frac{1}{c(r')^2\wkb(r';p)} \operatorname{d} \! r'
\]
and
\[
\alpha(p) = 2N(p)\int_{R^*}^{r_{k-1}} \frac{p}{(r')^2\wkb(r';p)} \operatorname{d} \! r'.
\]
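For a type-1 basic ray the closing condition is explicit: the ray is periodic precisely when $\alpha(p) = 2\pi m$ for a winding number $m \in \mathbb{N}$. Below is a minimal numerical sketch of locating such rays, continuing the toy profile from the earlier sketch; the bracket endpoints and the offset at the turning point are assumptions, and one should scan the residual for a sign change before root-finding.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

c = lambda r: 0.5 * (1.0 + r)
beta = lambda r, p: np.sqrt(c(r)**-2 - p**2 / r**2)

def Delta(p, r_top=1.0):
    # Angular advance of one dive: int_{R*}^{r_top} p / (r^2 beta) dr.
    # The tiny offset avoids rounding noise at the turning point, where
    # the singularity is integrable.
    Rstar = brentq(lambda r: r / c(r) - p, 1e-9, r_top)
    return quad(lambda r: p / (r**2 * beta(r, p)), Rstar + 1e-9, r_top)[0]

def basic_ray_parameter(m, N, bracket):
    # A type-1 basic ray with N reflections and winding number m closes
    # up when alpha(p) = 2 N Delta(p) = 2 pi m; the bracket must
    # straddle a sign change of the residual below.
    return brentq(lambda p: N * Delta(p) - np.pi * m, *bracket)
\end{verbatim}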
\begin{definition}\label{d: ccc}
Consider geodesics in an annulus $A(a,b)$ equipped with a $C^{1,1}$ sound speed $c\colon(a,b]\to(0,\infty)$.
We say that $c$ satisfies the \emph{countable conjugacy condition} if there are only countably many radii $r\in (a,b)$ so that the endpoints of the
corresponding maximal geodesic $\gamma(r)$ are conjugate along that geodesic.
\end{definition}
We will only need the countable conjugacy condition within each layer, so we do not need a definition in the presence of discontinuities.
We point out that ``countable'' includes the possibility that the set be empty or finite.
Definition~\ref{d: ccc} is the same as the one given in~\cite{HIKRigidity}.
We need an analog to the clean intersection hypothesis used in \cite{HIKRigidity,Mel79} to prove a trace formula that also makes sure that the phase function is Bott-Morse nondegenerate when applying a stationary phase argument.
\begin{definition}\label{d: pcc}
We say that the radial wave speed $c$ satisfies the \emph{periodic conjugacy condition} if for each periodic, nongliding ray with a ray parameter $p$, $ \partial_p \alpha(p) \neq 0$. This condition ensures that the phase function in the stationary phase argument for computing the trace formula is Bott-Morse nondegenerate.
\end{definition}
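For a basic ray this condition can be made concrete through the radial phase; the following short computation uses only the definitions above. With $\tau(p) = 2N\int_{R^*(p)}^{r_{k-1}} \wkb(r';p)\operatorname{d} \! r'$, differentiating under the integral sign (the boundary term vanishes because $\wkb(R^*(p);p)=0$) gives
\[
\partial_p \tau(p)
= -2N\int_{R^*(p)}^{r_{k-1}} \frac{p}{(r')^2\wkb(r';p)} \operatorname{d} \! r'
= -\alpha(p),
\]
so $\partial_p\alpha(p)\neq0$ is precisely the nondegeneracy $\partial_p^2\tau(p)\neq0$ of the phase at its stationary point, which is what the stationary phase argument in the proof of the trace formula requires.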
\subsection{Gliding rays as limits} \label{s: gliding as limits}
Consider a periodic broken ray $\gamma_0$ with a gliding leg of positive length.
We assume that gliding occurs at only one interface; this is ensured by the smooth Herglotz condition.
We may rearrange the legs of the periodic broken ray without changing its length or essential geometry so that there is only one gliding leg per period.
We will argue that there is a sequence of periodic non-gliding broken rays $\gamma_i$ so that $\gamma_i\to\gamma_0$.
This is very simple for any finite segment of a gliding broken ray; the subtlety lies in ensuring periodicity of the approximating rays. We will prove the following lemma.
\begin{lemma}\label{l: gliding} Let $\gamma_0$ be a periodic broken ray with a gliding leg of positive length as described above. Then there is a sequence $\{\gamma_i\}_{i=1}^\infty$ of periodic, non-gliding broken rays such that \[
\lim_{i \to \infty} \gamma_i = \gamma_0.
\]
\end{lemma}
\begin{proof}
Let $x$ and $y$ be the final and initial point, respectively, of the gliding leg of $\gamma_0$, and let $\theta_0$ be the angle between $\gamma_0$ and the interface.
We wish to find angles $\theta_i>\theta_0$ with the correct approximating and periodicity properties.
For any angle $\theta>\theta_0$, let $\kappa$ denote the angle between the interface and the leg of the refracted ray in the lower layer.
In the limiting case $\theta=\theta_0$ we have $\kappa_0=0$ and the ray $\gamma_0$ glides along the interface.
It follows from Snell's law and a calculation that
\begin{equation}
\label{eq:kappa-sim}
\kappa
=
a(\theta-\theta_0)^{1/2}
+
\mathcal O(\theta-\theta_0)
\end{equation}
for some constant $a>0$.
When $\theta$ is slightly above $\theta_0$ (equivalently, when $\kappa>0$ is small), we denote by $\phi(\theta)$ the opening angle of a single short diving leg under the interface.
A simple calculation shows that $\phi(\theta)$ is asymptotically comparable to $\kappa$, whence
\begin{equation}
\label{eq:phi-sim}
\phi(\theta)
=
b(\theta-\theta_0)^{1/2}
+
\mathcal O(\theta-\theta_0)
\end{equation}
for some constant $b>0$.
Let the angle between the points $y$ and $x$ be $\alpha_0>0$.
Starting from the point $x$ and following the broken ray near $\gamma_0$ with the initial angle $\theta\approx\theta_0$ we get a map $\theta\mapsto y(\theta)$.
This map is well defined in a neighborhood of $\theta_0$, as the relevant broken ray stays above the interface and total internal reflection is not an issue, and it is $C^1$.
Denote the angle between $y(\theta)$ and $x$ by $\alpha(\theta)$.
If $\alpha'(\theta_0)=0$, then the points $x$ and $y$ are conjugate along the non-gliding part of the broken ray $\gamma_0$.
This turns out not to be an issue: the coefficient $c$ introduced next only affects the next-to-leading order in the periodicity condition below.
Denoting $\alpha'(\theta_0)=c$, we have
\begin{equation}
\label{eq:alpha-sim}
\alpha(\theta)
-
\alpha_0
=
c
(\theta-\theta_0)
+
\mathcal O((\theta-\theta_0)^2)
\end{equation}
by a simple Taylor approximation.
We want to choose the angle $\theta>\theta_0$ so that an integer number of these short diving legs connects $y(\theta)$ to $x$.
The condition is $\alpha(\theta)/\phi(\theta)\in\mathbb{N}$.
Combining with equations~\eqref{eq:kappa-sim}, \eqref{eq:phi-sim}, and~\eqref{eq:alpha-sim}, we end up with the condition that
\begin{equation}
\label{eq:gliding-periodic}
\alpha_0 b^{-1} (\theta-\theta_0)^{-1/2}
+
\mathcal O((\theta-\theta_0)^{1/2})
\in\mathbb{N}.
\end{equation}
Here the error term depends continuously on $\theta$, so the left-hand side of equation~\eqref{eq:gliding-periodic} attains integer values infinitely many times as $\theta\to\theta_0+$.
This gives us a choice of directions $\theta_i$ starting at $x$, and thus a sequence of periodic broken rays $\gamma_i$ which converge to $\gamma_0$.
The convergence here is pointwise and locally uniform; it can be globally uniform only if the periods of the $\gamma_i$ stay constant, which need not be the case.
This concludes the argument that every periodic broken ray with a gliding leg of positive length can be approximated by periodic non-gliding rays.
\end{proof}
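To make the counting in the proof concrete: dropping the error term in \eqref{eq:gliding-periodic} and setting the left-hand side equal to $n\in\mathbb{N}$ gives $\theta_n = \theta_0 + (\alpha_0/(bn))^2$. A small Python sketch follows; the constants are hypothetical placeholders, not values from the text.
\begin{verbatim}
alpha0, b, theta0 = 0.3, 1.2, 0.7   # hypothetical constants from the proof

# Leading-order solution of alpha_0 b^{-1} (theta - theta_0)^{-1/2} = n:
def theta_n(n):
    return theta0 + (alpha0 / (b * n))**2

# The admissible directions accumulate at theta_0 from above, one for
# each sufficiently large integer n, mirroring gamma_i -> gamma_0.
print([theta_n(n) for n in (10, 20, 40)])
\end{verbatim}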
\subsection{Principal amplitude injectivity condition}\label{s: geometric spreading injectivity condition}
We also need an assumption similar to ``simplicity'' of the length spectrum modulo the group action in order to recover the length spectrum when it has multiple components. For a closed ray $\gamma$, denote by $[\gamma]$ the equivalence class consisting of all rotations and dynamic analogs of $\gamma$, together with its time reversal. We will see that $[\gamma]$ makes a particular contribution to the trace formula.
The principal contribution of $[\gamma]$ with ray parameter $p$ to the trace formula has the form (see \eqref{eq: trace coefficient})
\[
c(t-T(p)+i0)^{-k} i^{N(p)} n(p)Q(p)L(p)\abs{p^{-2} \partial_p \alpha}^{-1/2}
\]
where $c$ is independent of $\gamma$, $Q(p)$ is a product of reflection and transmission coefficients, and $T(p)$ is the length of $\gamma$. Theoretically, there may be another class $[\gamma']$ with an identical period whose principal contribution to the trace cancels with that of $[\gamma]$, thereby preventing recovery of $T$.
We say that the length spectrum satisfies the \emph{principal amplitude injectivity condition} if, for any two closed rays $\gamma_1$ and $\gamma_2$ with the same period and disjoint equivalence classes (so that they must have different ray parameters $p_1$ and $p_2$), we have
\[
n(p_1)Q(p_1)\abs{p_1^{-2} \partial_p \alpha(p_1)}^{-1/2}
\neq n(p_2)Q(p_2)\abs{p_2^{-2} \partial_p \alpha(p_2)}^{-1/2}.
\]
We assume that $\operatorname{lsp}(c)$ satisfies the principal amplitude injectivity condition in order to prove Theorem \ref{t: spectral rigiditiy}.
\subsection{Spherical symmetry}
\label{sec:symmetry}
In section~\ref{sec:reasonable-radial} we saw that spherical symmetry is a good approximation for the Earth.
This symmetry is of tremendous technical convenience.
The geodesic flow is integrable with simple conserved quantities (an orbital plane and an angular momentum) and many of our calculations can be done explicitly.
The geometry of periodic broken rays is poorly understood outside symmetric situations.
It is not clear whether there is a dense set of such rays on a general manifold with boundary, nor whether the periodic rays are stable under deformations of the geometry.
On general manifolds, small smooth perturbations of a smooth metric only have a second order effect on the direction of the geodesics.
However, small smooth deformations of an interface have a first order effect, and this increased sensitivity substantially complicates matters.
Radial deformations of radial models are better behaved in that the preserved symmetry and the correspondingly deformed conserved quantities keep the analysis tractable.
\section{Proofs: Length spectral rigidity}
\subsection{Auxiliary results}
We denote by $A(r_1,r_0)=\bar B(0,r_1)\setminus B(0,r_0)\subset\mathbb R^n$ the closed annulus in a Euclidean space.
\begin{lemma}
\label{lma:lspr-annulus}
Fix any $\varepsilon>0$ and $r_1\in(0,1)$, and any finite set $F\subset(0,1)$.
Let $r(\tau)\in(0,1)$ depend $C^1$-smoothly on $\tau$.
Let $c_\tau$ with $\tau\in(-\varepsilon,\varepsilon)$ be $C^{1,1}$ functions $[r_1,1]\to (0,\infty)$ satisfying the Herglotz condition and the countable conjugacy condition and depending $C^1$-smoothly on $\tau$.
If $\partial_\tau c_\tau(r)\restriction_{\tau=0}\neq0$ for some $r\in(r_1,1)$, then there is a family of periodic broken rays $\gamma_\tau$ with respect to $c_\tau$ so that
\begin{itemize}
\item
$\tau\mapsto\ell_\tau(\gamma_\tau)$ is $C^1$ on $(-\delta,\delta)$ for some $\delta\in(0,\varepsilon)$,
\item
$\partial_\tau\ell_\tau(\gamma_\tau)\restriction_{\tau=0}\neq0$,
and
\item
the depth (minimum of Euclidean distance to the origin) of $\gamma_0$ is not in $F$.
\end{itemize}
Here $\ell_\tau$ is the length functional corresponding to the velocity profile $c_\tau$.
\end{lemma}
While in our application we have $F=\emptyset$, we include this freedom in the lemma so that finitely many problematic depths can be avoided if needed.
We say that a broken ray is \emph{radial} if it is contained in a one-dimensional linear (not affine) subspace of $\mathbb R^n$.
\begin{lemma}
\label{lma:lspr-moving-interface}
Fix any $\varepsilon>0$.
Let $c_\tau\colon(0,1]\to(0,\infty)$ be a family of $C^{1,1}$ functions depending smoothly on $\tau\in(-\varepsilon,\varepsilon)$.
Let $r_1(\tau)\colon(-\varepsilon,\varepsilon)\to(0,1)$ be $C^1$.
Let $\ell_\tau$ be the length of the radial geodesic between $r=r_1(\tau)$ and $r=1$.
If $\partial_\tau c_\tau(r)\restriction_{\tau=0}=0$ for all $r\in(0,1]$, then
\[
\ell'(0)
=
c_0(r_1(0))^{-1}
r_1'(0).
\]
\end{lemma}
\subsection{Proof of Theorem~\ref{thm:blspr-multilayer}}
The idea of the proof is as follows:
We first show that $c_\tau$ is independent of $\tau$ within the first layer.
Then we show that the first interface is also independent of $\tau$.
After these steps we can ``peel off'' the top layer and repeat the argument for the second one.
Countability of the basic length spectrum provides sufficient decoupling between the layers and between the ``data'' $\operatorname{blsp}(\tau)$ and the ``noise'' $S(\tau)$.
We give most arguments at $\tau=0$ first for definiteness, but the exact value of the parameter is unimportant.
\begin{proof}[Proof of Theorem~\ref{thm:blspr-multilayer}]
Let us denote $f_\tau(r)=\partial_\tau c_\tau(r)$ and $\hat S(\tau)=\operatorname{blsp}(\tau)\cup S(\tau)$.
Take any $r\in(r_1(0),1)$.
If $f_0(r)\neq0$, then by Lemma~\ref{lma:lspr-annulus} there is a family of basic periodic broken rays $\gamma_\tau$ for which the length map $\tau\mapsto\ell(\gamma_\tau)$ is $C^1$ in a neighborhood of $\tau=0$ and the derivative at $\tau=0$ is non-zero.
As $\ell(\gamma_\tau)\in \hat S(\tau)$ and by assumption $\hat S(\tau)=\hat S(0)$ for all $\tau$, this implies that the set $\hat S(0)$ contains a neighborhood of $\ell(\gamma_0)$.
This contradicts the countability of $\hat S(0)$, and so $f_0(r)\neq0$ is impossible.
We conclude that $f_0(r)=0$ for all $r\in(r_1(0),1)$.
The same argument can be repeated at any value of the parameter $\tau$, leading us to conclude that $f_\tau(r)=0$ whenever $r\in(r_1(\tau),1)$.
If $r_1'(0)\neq0$, then by Lemma~\ref{lma:lspr-moving-interface} the radial broken rays (which are basic and periodic with period twice their length) form a family whose lengths vary differentiably in $\tau$ with a non-zero derivative at $\tau=0$.
This contradicts countability as above.
The same argument is valid for any value of $\tau$, so we conclude that $r_1'(\tau)=0$ for all $\tau\in(-\varepsilon,\varepsilon)$.
We have thus found that $r_1(\tau)=r_1(0)$ and $c_\tau(r)=c_0(r)$ for all $\tau\in(-\varepsilon,\varepsilon)$ and $r\in(r_1(0),1)$.
We may now turn our attention to the annulus $A(r_2(\tau),r_1(\tau))$, whose top interface is now fixed at $r=r_1(0)=r_1(\tau)$ for all $\tau$.
Repeating the same argument in this annulus shows that both the velocity profile in this annulus and the location of the second interface are independent of $\tau$.
Carrying on inductively, we exhaust all layers of the ball and find that the claim does indeed hold true.
\end{proof}
\subsection{Proofs of the lemmas}
Lemma~\ref{lma:lspr-annulus} is a small variation of the reasoning in \cite{HIKRigidity}, rewritten in a way that is useful in the presence of interfaces.
The proof is concise; the reader is invited to refer to \cite{HIKRigidity} for details.
\begin{proof}[Proof of Lemma~\ref{lma:lspr-annulus}]
Consider the velocity profile for any fixed $\tau$.
A maximal broken ray without reflections from the inner boundary component is determined uniquely up to rotation by its deepest point.
Let us denote the essentially unique geodesic of depth $r\in(0,1)$ by $\gamma_r^\tau$.
For depths in a subset $P^\tau\subset(r_1,1)$ the corresponding broken rays are periodic, and we denote the minimal period by $\ell(\tau,r)$.
A periodic broken ray with respect to $c_0$ is called stable if there is $\delta\in(0,\varepsilon)$ so that there is a family of paths $\gamma^\tau\colon\R\to A(1,r_1)$ which is $C^1$ in $\tau$ (and merely continuous at reflection points) and each $\gamma^\tau$ is a periodic broken ray with respect to $c_\tau$.
When such a family exists, let us denote the depth corresponding to the parameter $\tau\in(-\delta,\delta)$ by $r^\tau$.
Let us denote by $C^0\subset P^0\subset(r_1,1)$ the set of depths of stable periodic broken rays.
It was shown in \cite{HIKRigidity} that under the countable conjugacy condition and the Herglotz condition the set $C^0$ is dense in $[r_1,1]$.
Thus also $C^0\setminus F$ is dense.
Let us denote $f(r)=\partial_\tau c_\tau(r)\restriction_{\tau=0}$.
Suppose that $f(r)\neq0$ for some $r\in(r_1,1)$.
Due to the injectivity of generalized Abel transforms, the function
\[
h(r)
=
\int_r^1
f(s)
\left[
1-
\left(
\frac{rc(s)}{sc(r)}
\right)^2
\right]^{-1/2}
\frac{\mathrm{d} s}{c(s)}
\]
is also non-trivial.
As $h$ is continuous and $C^0$ is dense, there is $r'\in C^0\setminus F$ so that $h(r')\neq0$.
The length $\ell(\tau,r^\tau)$ of the family of periodic broken rays is differentiable in $\tau$ near $\tau=0$ because $r'\in C^0$ and
\[
\partial_\tau
\ell(\tau,r^\tau)\restriction_{\tau=0}
=
2nh(r'),
\]
where $n$ is the (constant) winding number of the minimal period of $\gamma^\tau$.
Therefore the claimed derivative is indeed non-zero.
\end{proof}
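The transform appearing in the proof is also easy to probe numerically. Below is a minimal sketch, again with the hypothetical profile $c(r)=(1+r)/2$ and a made-up perturbation $f$; by the Herglotz condition the ratio $rc(s)/(sc(r))$ stays below $1$ for $s>r$, so the integrand is singular only at the integrable endpoint $s=r$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

c = lambda r: 0.5 * (1.0 + r)          # toy Herglotz profile
f = lambda r: (r - 0.9) * np.exp(-r)   # hypothetical perturbation f(r)

def h(r):
    # Weighted Abel-type transform of f from the proof.
    integrand = lambda s: (f(s) / c(s)) / np.sqrt(
        1.0 - (r * c(s) / (s * c(r)))**2)
    return quad(integrand, r, 1.0)[0]

# If f is not identically zero then h is nontrivial by injectivity of
# the transform, so continuity yields a depth r' with h(r') != 0.
print(h(0.7))
\end{verbatim}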
The proof of Lemma~\ref{lma:lspr-moving-interface} is a straightforward calculation and the statement is geometrically intuitive, so we omit the proof.
The essential statement concerns simply the derivative of the length of a geodesic with respect to its endpoint.
\section{The Trace formula and its proof}
As in \cite{HIKRigidity}, we will prove a trace formula in order to recover part of the length spectrum, and then use the argument in the previous sections on length spectral rigidity in order to prove Theorem \ref{t: spectral rigiditiy}.
Although the main theorems as stated in subsection \ref{s: main results} refer to the scalar operator $\Delta_c$, for greater generality, we initially consider the toroidal modes corresponding to the isotropic elastic operator (see \cite{DahlTromp,HIKRigidity} for definitions). As in \cite{HIKRigidity}, the proof is identical when considering the scalar Laplace-Beltrami operator. This allows us to naturally consider and extend our results to spheroidal modes in section \ref{s: spheroidal modes} where two wave speeds are present.
First, we give the general setup and state the trace formula as Proposition \ref{prop:Trace Formula}, followed by its proof.
\subsection{Toroidal modes, eigenfrequencies, and trace formula}
\label{s: toroidal modes}
We now use spherical coordinates $(r,\theta,\psi)$. Toroidal modes are
precisely the eigenfunctions of the isotropic elastic operator that
are sensitive to only the shear wave speed. We forgo writing down the
full elastic equation, and merely write down these special
eigenfunctions connected to the shear wave speed. Analytically, these eigenfunctions admit a separation in
radial functions and real-valued spherical harmonics, that is,
\begin{equation}
u = {}_n\mathbf{D}_l Y^m_l ,
\end{equation}
where
\begin{equation}
\mathbf{D} = U(r)\ (- k^{-1})
[-\widehat{\theta} (\sin \theta)^{-1} \partial_{\psi}
+ \widehat{\psi} \partial_{\theta}] ,
\end{equation}
in which $k = \sqrt{l (l + 1)}$ and $U$ represents a radial function
(${}_n U_l$). In the further analysis, we ignore the curl (which
signifies a polarization); that is, we think of ${}_n\mathbf{D}_l$ as
multiplication by ${}_n U_l(-k^{-1})$. In the above, $Y^m_l$ are
spherical harmonics, defined by
\[
Y^m_l(\theta,\psi) = \left\{ \begin{array}{rcl}
\sqrt{2} X^{\abs{m}}_l(\theta) \cos(m \psi) & \text{if}
& -l \le m < 0 ,
\\
X^0_l(\theta) & \text{if} & m = 0 ,
\\
\sqrt{2} X^m_l(\theta) \sin(m \psi) & \text{if} & 0 < m \le l ,
\end{array}\right.
\]
where
\[
X^m_l(\theta) = (-)^m \sqrt{\frac{2l + 1}{4\pi}}
\sqrt{\frac{(l-m)!}{(l+m)!}} P^m_l(\cos\theta) ,
\]
in which
\[
P^m_l(\cos(\theta)) = (-)^m \frac{1}{2^l l!} (\sin\theta)^m
\left( \frac{1}{\sin\theta} \frac{\mathrm{d}}{\mathrm{d}\theta}
\right)^{l+m} (\sin\theta)^{2l} .
\]
The function, $U$ (a component of displacement), satisfies the
equation
\begin{equation}\label{eq: equation for U_2}
[-r^{-2} \partial_r\ r^2 \mu \partial_r
+ r^{-2} \partial_r\ \mu r - r^{-1} \mu \partial_r
+ r^{-2} (-1 + k^2) \mu] \, U - \omega^2 \rho U = 0 ,
\end{equation}
where $\mu = \mu(r)$ is a Lam\'{e} parameter and $\rho = \rho(r)$ is
the density, both of which are smooth, and $c = \sqrt{\mu/\rho}$. Also, $\omega = {}_n\omega_l$
denotes the associated eigenvalue. Here, $l$ is referred to as the
angular order and $m$ as the azimuthal order.
The traction is given by
\begin{equation}\label{eq: Neumann condition for U_2}
T(U) = \mathcal{N} U ,\qquad
\mathcal{N} = \mu \partial_r - r^{-1}\mu
\end{equation}
which vanishes at the boundaries (Neumann condition). The transmission conditions are that $U$ and $T(U)$ remain continuous across the interfaces.
If $r = b$ is an interface and $U_{\pm}$ represent two solutions on opposite sides of the interface, then in the high frequency limit as $\omega \to \infty$, the transmission conditions will amount to
\begin{align}\label{eq: trans conditions for U}
U_+\restriction_{r = b} &= U_-\restriction_{r = b} \\
\mu_+ \partial_rU_+\restriction_{r = b} &= \mu_- \partial_rU_-\restriction_{r = b}
\end{align}
for the principal terms in the WKB expansion of the solution.
The radial
equations do not depend on $m$ and, hence, every eigenfrequency is
degenerate with an associated $(2l + 1)$-dimensional eigenspace
spanned by
\[
\{ Y^{-l}_l,\ldots,Y^l_l \} .
\]
Following \cite{ZhaoModeSum}, let $d$ indicate the overtone number $n$ and the angular degree $l$. The radial eigenfunction $U_d(r)$ is independent of the order $m$. We define the inner product of the eigenfunctions:
\begin{equation}\label{I_d inner product}
{}_nI_l = I_d := \int_{R}^{1} \abs{U_d(r)}^2 \rho(r) \operatorname{d} \! r.
\end{equation}
We use spherical coordinates $(r_0,\theta_0,\psi_0)$ for the location,
$x_0$, of a source, and introduce the shorthand notation
$({}_n\mathbf{D}_l)_0$ for the operator expressed in coordinates
$(r_0,\theta_0,\psi_0)$. We now write the (toroidal contributions to
the) fundamental solution as a normal mode summation
\begin{equation}\label{normal-mode-summation}
G(x,x_0,t) = \operatorname{Re}\
\sum_{l=0}^{\infty} \sum_{n=0}^{\infty}\
{}_n\mathbf{D}_l ({}_n\mathbf{D}_l)_0\
\sum_{m=-l}^l Y^m_l(\theta,\psi) Y^m_l(\theta_0,\psi_0)\
\frac{e^{i {}_n\omega_l t}}{i ({}_n\omega_l) ({}_n I_l)} .
\end{equation}
On the diagonal, $(r,\theta,\psi) = (r_0,\theta_0,\psi_0)$ and, hence,
$\Theta = 0$.
Here $\Theta$ is the angular epicentral distance.
We observe the following reductions in the evaluation of
the trace of~\eqref{normal-mode-summation}:
\begin{itemize}
\item
We will not normalize $U(r)$.
Meanwhile, the spherical harmonic terms satisfy
\begin{equation} \label{eq:Ylmnorm}
\sum_{m=-l}^l \iint
Y^m_l(\theta,\psi)^2 \sin \theta \operatorname{d} \!\theta \operatorname{d} \!\psi
= 2l + 1
\end{equation}
(counting the degeneracies of eigenfrequencies; a numerical check of \eqref{eq:Ylmnorm} follows this list).
\item
If we were to include the curl in our analysis (generating vector
spherical harmonics), taking the trace of the matrix on the diagonal
yields
\begin{equation} \label{eq:GYlmnorm}
\sum_{m=-l}^l \iint
(-k^{-2})
\abs{[-\widehat{\theta} (\sin \theta)^{-1} \partial_{\psi}
+ \widehat{\psi} \partial_{\theta}]
Y^m_l(\theta,\psi)}^2 \sin \theta \operatorname{d} \!\theta \operatorname{d} \!\psi
= 2l + 1 .
\end{equation}
\end{itemize}
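As a quick numerical sanity check of \eqref{eq:Ylmnorm}, one can use the addition theorem $\sum_{m=-l}^l \abs{Y^m_l}^2 = (2l+1)/(4\pi)$, which holds pointwise and is insensitive to whether real or complex harmonics are used. A minimal sketch with SciPy's (complex) spherical harmonics:
\begin{verbatim}
import numpy as np
from scipy.special import sph_harm

# sph_harm(m, l, azimuth, colatitude); the sum below is independent of
# the evaluation point, so integrating over the sphere gives 2l + 1.
l, colat, azim = 3, 0.9, 0.4
s = sum(abs(sph_harm(m, l, azim, colat))**2 for m in range(-l, l + 1))
print(4.0 * np.pi * s)   # ~ 2l + 1 = 7
\end{verbatim}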
From the reductions above, we obtain
\begin{equation}
\int_M
G(x,x,t) \, \rho(x) \operatorname{d} \! x
= \sum_{l=0}^{\infty} \sum_{n=0}^{\infty}\
(2l + 1)
\operatorname{Re} \left\{
\frac{e^{i {}_n\omega_l t}}{
i ({}_n\omega_l) }\right\}
\end{equation}
or
\begin{equation} \label{eq:TrptG}
\operatorname{Tr}( \partial_t G)(t) = \int_M
\partial_t G(x,x,t) \, \rho(x) \operatorname{d} \! x
= \sum_{l=0}^{\infty} \sum_{n=0}^{\infty}\
(2l + 1)
\operatorname{Re} \left\{
e^{i {}_n\omega_l t} \right\} .
\end{equation}
Let us also denote $\Sigma = \text{singsupp}( \operatorname{Tr}( \partial_t G))\subset \mathbb{R}_t.$
\subsection{Connection between toroidal eigenfrequencies, spectrum of the Laplace--Beltrami operator, and the Schr\"{o}dinger equation} \label{sec: connect to LB}
We repeat the discussion in \cite{HIKRigidity} to relate the spectrum of a scalar Laplacian, the eigenvalues
associated to the vector valued toroidal modes, and the trace
distribution $\sum_{l=0}^{\infty} \sum_{n=0}^{\infty}\ (2l+1)
\cos(t{}_n\omega_l)$.
We note that \eqref{eq: equation for U_2} and \eqref{eq: Neumann
condition for U_2} for $U$ ensure that $v = U Y^m_l$ satisfies
\begin{equation} \label{eq: scalar P}
P v \coloneqq \rho^{-1}
(-\nabla \cdot \mu \nabla + P_0) v = \omega^2 v ,\qquad
\mathcal{N} v = 0 \text{ on } \partial M
\end{equation}
where $P_0 = r^{-1}( \partial_r\mu)$ is a $0$th order operator, $\omega^2$
is a particular eigenvalue, and $\mathcal N$ is as in \eqref{eq: Neumann condition for U_2}. Hence $U Y^m_l$ are scalar eigenfunctions
for the self-adjoint (with respect to the measure $\rho\operatorname{d} \! x$) scalar
operator $P$ with Neumann boundary conditions (on both boundaries)
expressed in terms of $\mathcal{N}$.
The above argument shows that we may view the toroidal spectrum
$\{{}_n\omega^2_l\}_{n,l}$ as also the collection of eigenvalues
$\lambda$ for the boundary problem on scalar functions \eqref{eq:
scalar P}. Thus \eqref{eq:TrptG} can be written in the form
\begin{equation}\label{eq: toroidal trace equal laplace trace}
\operatorname{Tr} \, ( \partial_t G)
= \sum_{\lambda \in \operatorname{spec}(P)} \cos(t \sqrt{\lambda}) ,
\end{equation}
where the last sum is taken with multiplicities for the
eigenvalues. (While $G$ is a vector valued distribution, the
asymptotic trace formula we obtain is for $\operatorname{Tr} ( \partial_t G)$, which is equal
to $\sum_{\lambda \in \operatorname{spec}(P)} \cos(t \sqrt{\lambda})$ by the
normalizations we have chosen.) Up to principal symbols, $P$ coincides
with $\Delta_c = c^3 \nabla \cdot c^{-1} \nabla$ upon identifying
$c^2$ with $\rho^{-1} \mu$. This means that the length spectra of $P$
and $\Delta_c$ will be the same even though they have differing subprincipal symbols and spectra. Thus the trace formula, which will
appear to have a unified form, connects two different spectra to a
common length spectrum, and the proof is identical for both.
We will prove a trace formula using a WKB expansion of
eigenfunctions. To this end, it is convenient to establish a
connection with the Schr\"{o}dinger equation. Indeed, we present an
asymptotic transformation finding this connection. In boundary normal
coordinates $(r,\theta)$ (which are spherical coordinates in dimension
three by treating $\theta$ as coordinates on the $2$-sphere),
\begin{equation}
P = \rho^{-1} (-r^{-2} \partial_r r^2 \mu \partial_r
- \mu r^{-2} \Delta_\theta + P_0) ,
\end{equation}
where $\Delta_\theta$ is the Laplacian on the $2$-sphere.
Let us now simplify the PDE \eqref{eq: scalar P} for $v$.
Let $Y(\theta)$ be an eigenfunction of $\Delta_\theta$ with eigenvalue
$-k^2$ as before and $V = V(r):= \mu^{1/2}r U$ a radial function with $U$ as in \eqref{eq: scalar P}. Then after a straightforward calculation, as a leading order term in a WKB expansion, $V(r)$ must satisfy
\begin{equation}\label{eq:VSchro}
\partial_r^2 V + \omega^2 \wkb^2 V = 0 ,\quad
\partial_r V = 0\ \text{ on }\ \partial M ,
\end{equation}
with transmission conditions for $V$ to leading order
\begin{align}\label{eq: trans conditions for V}
\mu_+^{-1/2} V_+\restriction_{r = b} &= \mu^{-1/2}_-V_-\restriction_{r = b} \\
\mu_+^{1/2} \partial_rV_+\restriction_{r = b} &= \mu_-^{1/2} \partial_rV_-\restriction_{r = b},
\end{align}
where $\wkb^2 = \rho(r) \mu(r)^{-1} - \omega^{-2}r^{-2}k^2$ and $\{ r = b\}$ is an interface. The equation has
two linearly independent solutions, and the WKB asymptotic solution to
this PDE with Neumann boundary conditions will give us precisely the
leading order asymptotics for the trace formula, which is all that is
needed.
For the boundary condition, we note that we would end up with the same
partial differential equation with different boundary conditions for
$V$ in the previous section if we had used the boundary condition
$ \partial_r u = 0 \text{ on } \partial M$. Indeed, one would merely choose
$\mathcal{N}u = \mu \partial_r u$ instead without the $0$th order
term. However, the boundary condition for $V$ would be of the form
\begin{equation}
\partial_r V = K(r) V\quad\ \text{ on }\ \partial M
\end{equation}
with $K$ signifying a smooth radial function. Nevertheless, the
leading order (in $\omega$) asymptotic behavior for $V$ stays the same
despite the $K$ term as clearly seen in the calculation of Appendix \ref{a: Generalized Debye}. Thus, our analysis applies with no
change using the standard Neumann boundary conditions. This should
come as no surprise since in~\cite{Mel79}, the $0$th order term in the
Neumann condition played no role in the leading asymptotic analysis of
their trace formula. Only if one desires the lower-order terms in the
trace formula would it play a role.
In addition, we could also consider a Dirichlet boundary condition, in which case $V = 0$ on $ \partial M$ as well. This would slightly modify the Debye expansion in Appendix \ref{a: Generalized Debye} by constant factors. Nevertheless, the same argument holds to obtain the trace formula and recover the length spectrum. More general boundary conditions, such as Robin boundary conditions, may be considered as well. However, since we only need to look at the principal term in the high frequency asymptotics, this would just reduce to the Neumann boundary case. Thus, our arguments work with all these boundary conditions, and we choose Neumann boundary conditions only because they have a natural interpretation from geophysics.
An interesting feature of the trace formula in this setup is that a broken ray $\gamma$ can have legs that glide along the interface. This happens when a reflected ray hits an interface at a critical angle, leading to a transmitted leg that glides along the interface. Technically, such a ray is \emph{not} a broken geodesic of the metric $g$, but it is a limit of periodic broken geodesics as shown in section \ref{s: gliding as limits} and makes a contribution to the singular support of the trace as an accumulation point.
Since the length spectral rigidity theorems only require the basic length spectrum,
the main goal is to determine the leading contribution to the trace of basic rays without gliding legs.
\begin{proposition}\label{prop:Trace Formula}(Non-gliding case)
Suppose the radial wave speed $c$ satisfies the extended Herglotz condition and the periodic conjugacy condition (Definition~\ref{d: pcc}).
Suppose $T = T(p_\gamma) \in \operatorname{lsp}(c)$ corresponds to a periodic ray $\gamma$ with ray parameter $p_\gamma$ such that no periodic ray with a gliding leg has period $T$.
Then there exists a neighborhood of $T$ in which the leading order singularity of $(\operatorname{Tr}( \partial_t
G))(t)$ near $T(p_\gamma)$ is the real part of
\begin{equation}\label{eq: trace coefficient}
\sum_{[\gamma]} (t-T(p_\gamma)+ i 0)^{-5/2} \left(\frac{1}{2\pi i}\right)^{3/2}
i^{N(p_\gamma)}n(p_\gamma)Q(p_\gamma)\abs{p^{-2}_\gamma \partial_p\alpha_\gamma(p_\gamma)}^{-1/2}
L(p_\gamma)c \abs{SO(3)} ,
\end{equation}
where
\begin{enumerate}
\item[$\bullet$] the sum is taken over all equivalence classes $[\gamma]$ with period $T(p_\gamma)$ and ray parameter $p_\gamma = p_{[\gamma]}$.
\item[$\bullet$] $N(p_\gamma)$ is the Keller-Maslov-Arnold-H\"{o}rmander (KMAH) index associated to
$\gamma$;
\item[$\bullet$] $c$ is independent of $[\gamma]$;
\item[$\bullet$] $\abs{SO(3)}$ is the volume of the compact Lie group
$SO(3)$ under the Haar measure.
\item[$\bullet$]$Q(p_\gamma)$ is a product of reflection and transmission coefficients of the corresponding broken ray.
\item[$\bullet$] $n(p_\gamma) \in \mathbb N$ is a combinatorial constant counting the number of dynamic analogs of $\gamma$.
\end{enumerate}
Moreover, if the principal amplitude injectivity condition holds, the distribution $(\operatorname{Tr} \, ( \partial_t G))(t) =
\sum_{n,l}(2l+1)\cos(t{}_n\omega_l)$ is singular at the lengths of periodic basic rays.
\end{proposition}
\begin{remark}
Our proof will show that one may obtain the leading order contribution of $\gamma^l$, which is $\gamma$ traversed $l$ times, from the above expression for $\gamma$. The contribution from $[\gamma^l]$ will be
\begin{equation}
(t-lT(p_\gamma)+ i 0)^{-5/2} \left(\frac{1}{2\pi i}\right)^{3/2}
i^{l N(p_\gamma)}n^l(p_\gamma)Q^l(p_\gamma)\abs{p^{-2}_\gamma l \partial_p\alpha_\gamma(p_\gamma)}^{-1/2}
L(p_\gamma) c_d \abs{SO(3)}
\end{equation}
\end{remark}
\begin{remark}
Note the above trace formula is almost identical to that of \cite{HIKRigidity} except for the $Q(p_\gamma)$ term. This is natural since a wave corresponding to a periodic broken bicharacteristic in this nonsmooth case will have a principal symbol containing transmission and reflection coefficients while the rest of the principal symbol remains the same. The KMAH index also differs slightly from the smooth case when a turning ray grazes an interface.
\end{remark}
\begin{remark}
Similar to remark 2.5 in \cite{HIKRigidity}, our trace formula holds in an annulus where the boundary is not geodesically convex unlike the case in \cite{Mel79}. Hence, there could be periodic \emph{grazing rays} at the inner boundary of the annulus or rays that graze an interface. As described in \cite{TaylorGrazing}, grazing rays are bicharacteristics that intersect the boundary of a layer tangentially, have exactly second order contact with the boundary, and remain in $\bar M$. This is another reason our proof is via a careful study of the asymptotics of the eigenfunctions rather than the parametrix construction appearing in \cite{Mel79}, where the presence of a periodic grazing ray would make the analysis significantly more technical (cf. \cite{TaylorGrazing,MelroseGliding}). The spherical symmetry essentially allows us to construct a global parametrix (to leading order) to obtain the leading order contribution of a periodic grazing ray to the trace, which would be more challenging in a general setting (see Appendix \ref{a: Generalized Debye} and \ref{a: grazing} for the analysis and \cite{bennett1982poisson} for a similar computation). The leading order contribution of the grazing ray has the same form as in the above proposition, but the lower order contributions will not have this ``classical'' form since stationary phase cannot be applied to such terms, and will instead involve Airy functions as in \cite{bennett1982poisson} and \cite[Appendix B]{HIKRigidity}.
Nevertheless, we note that for the main theorems, we do not need to recover the travel time of a periodic grazing ray if one exists. Travel times of sufficiently many non-grazing basic rays suffice. Our methods also produce a precise trace formula where periodic orbits are no longer simple as in \cite{Mel79}, but come in higher dimensional families (see \cite{GuillLieGroups,Creagh91,Creagh92,Gornet} for related formulas albeit in different settings).
\end{remark}
We showed in section \ref{s: gliding as limits} that a ray with a gliding leg is a limit of broken non-gliding rays, and we can also describe its contribution to the singular support to leading order. Let $\gamma$ be a periodic broken ray that has travel time $T$ and contains a gliding leg (see \cite[Figure 4.1]{CervenyHeadWaves} for a diagram of such a ray in the piecewise constant wavespeed setting). By Lemma \ref{l: gliding}, there is a sequence of non-degenerate closed broken rays $\gamma_n$ with travel times $T_n$ such that $T_n \nearrow T$ and $\gamma_n$ converges to $\gamma$. We will state our trace formula near gliding rays in the same form as \cite[Theorem (42)]{Bennett1982}.
Let $a_n = a_{n,[\gamma_n]}$ denote the coefficient in \eqref{eq: trace coefficient} in front of $(t-T_n+i0)^{-5/2}$ corresponding to the ray $\gamma_n$. We assume that there are no periodic broken rays with travel time $T$ besides $\gamma$ and its image under the group action. For any real number $s$, let us introduce the notation
\[
H^{s-}_{loc} = \{ f: f \in H^t_{loc}(\mathbb R) \text{ for all } t<s\}.
\]
We will prove the following proposition.
\begin{proposition}\label{t: gliding ray trace}
Let $T$ be as above, and let $J$ be a small enough interval containing $T$ such that $\operatorname{lsp}(c) \cap J = \{T_n\}_{n=1}^\infty \cup \{T\}$.
Then
\[
\operatorname{Tr} \, ( \partial_t G)(t)\restriction_J = \Re \sum_{n=1}^\infty a_n(t-T_n+ i 0)^{-5/2} + R(t),
\]
where $R(t)$ is a distribution that lies in the Sobolev space $H^{-2+\delta}$ for some $\delta > 0$.
\end{proposition}
Note that this is a genuine error estimate even though we do not have a sharp result on which Sobolev space contains $R(t)$ since the sum in the formula above lies in $H^{-2-}_{loc}$. Proposition \ref{t: gliding ray trace} is not needed for spectral rigidity and will be proved in appendix \ref{a: proof of gliding}.
Also, implicit in the above proposition is that away from the singularities, the infinite sum converges. It is not clear which Sobolev space $R(t)$ belongs to since we only compute the principal term in the trace (which appears as the sum in the above proposition) using stationary phase, and we show that the remainder is in a weaker Sobolev space even though we cannot use stationary phase for it.
In fact, it is not even clear whether a term of the form $(t-T+i0)^{-\epsilon}$ appears in $R(t)$. Denote $Z(t) = \text{Tr}( \partial_t G)(t)$. Then for small enough $\epsilon > 0$,
$(T-\epsilon, T) \cap \operatorname{lsp}(c) = \{T_n\}_{n=1}^\infty$
while $(T, T+\epsilon) \cap \operatorname{lsp}(c) = \emptyset$. Thus $\text{Re}\, Z(t)$ is $C^\infty$ for $t \in (T,T+\epsilon)$, and it becomes an interesting question what the asymptotic behavior of $Z(t)$ is as $t \to T$ from the right. This is subtle, and
Colin de Verdi\`{e}re (see \cite{ColinClusterpoints,ColinGliding}) showed how, in certain examples simpler than those considered here, $Z(t)$ is actually $C^\infty$ on $[T, T+\epsilon)$ for some $\epsilon$.
Thus the trace is actually smooth from the right up to and including $T$ (it is obviously not smooth from the left). \v{C}erven\'{y} points out in \cite{CervenyHeadWaves} that the contribution of the singularity precisely at $T$ cannot be investigated with ray theory in this setting, and the precise nature of this singularity remains an open question. However, this singularity is not present in our computations of the principal term in the WKB expansion, which is how we know it can only appear in a lower order term, if it is there at all.
The trace formula allows us to recover the basic length spectrum from the spectrum, and then apply the theorems on length spectral rigidity to prove Theorem \ref{thm:blspr-multilayer}.
\subsection{Proof of the trace formula} \label{s: proof of trace formula}
We need several preliminary computations before proving proposition \ref{prop:Trace Formula}. The key to the trace formula is the Debye expansion that will give geometric meaning to the leading order amplitudes of the radial eigenfunctions. A key step will be a derivation for an alternative way of expressing $I_d$ in \eqref{I_d inner product}.
\subsubsection{A key formula for the Green's function}
As pointed out in \cite{ZhaoModeSum}, the inner product $I_d$ can be expressed in terms of the derivatives of a quantity involving the radial eigenfunctions $U_d(r)$ as well as their radial derivatives with respect to frequency $\omega$. We repeat the argument here to show that it holds even when the PDE parameters have discontinuities.
The key is obtaining a special formula for $\langle U_n, U_n \rangle$ shown in \cite{Singh69}. We recall the ordinary differential equation \eqref{eq: equation for U_2} for the radial constituent of the eigenfunction:
\begin{equation}\label{e: ode for U}
\partial^2_rU + \left(\frac{2}{r} + \mu^{-1} \partial_r \mu\right) \partial_r U
+ \left[\omega^2\frac{\rho}{\mu}-\frac{ \partial_r \mu}{r\mu} - \frac{k^2}{r^2}\right]U = 0.
\end{equation}
Here $U = U_k = U_l$ denotes the above solution for general $\omega$, while $U_n$ is a solution for $\omega_n = {}_n\omega_l$ such that $T(U_n) = \mu( \partial_r -r^{-1})U_n = 0$ at $r = 1$ and $r = R$.
It will be convenient to write
\begin{equation}\label{e: ode for U_n}
\partial^2_rU_n + \left(\frac{2}{r} + \mu^{-1} \partial_r \mu\right) \partial_r U_n
+ \left[\omega_n^2\frac{\rho}{\mu}-\frac{ \partial_r \mu}{r\mu} - \frac{k^2}{r^2}\right]U_n = 0.
\end{equation}
Multiply \eqref{e: ode for U} by $U_n$ and \eqref{e: ode for U_n} by $U$ and subtract the two equations to get
\[
U_n \partial^2_r U - U \partial^2_rU_n
+ \left(\frac{2}{r} + \mu^{-1} \partial_r \mu\right)(U_n \partial_r U - U \partial_r U_n)
+ \frac{\rho}{\mu}(\omega^2 - \omega_n^2)U U_n = 0,
\]
which may be simplified, by first multiplying the above equation by $r^2\mu$ and then noting that $\mu (U_n \partial_r U - U \partial_r U_n) = U_n T(U) - U T(U_n)$ (the zeroth-order parts of the tractions cancel), to
\[
\frac{d}{dr} \left[ r^2 (U_n T - U T_n)\right]
= \rho r^2(\omega_n^2 - \omega^2)U_n U.
\]
We integrate over $(R, 1)$ to obtain
\[
\frac{\left[ r^2 (U_n T - U T_n)\right]_{r=R}^{1}}{\omega_n^2-\omega^2}
= \int_{R}^{1} r'^2 \rho(r') U(r')U_n(r') \operatorname{d} \! r'.
\]
Above, we use that $U,U_n,T,T_n$ are continuous across the interface to apply the fundamental theorem of calculus.
Let us suppose $\omega$ is not an eigenfrequency and then take the limit as $\omega \to \omega_n$. Let
\begin{equation}\label{e: introducing D}
D := \left[ r^2 (U_n T - U T_n)\right]_{r=R}^{1} =
\left[ r^2 U_n T \right]_{r=R}^{1}
\end{equation}
using the Neumann conditions. Note that the solutions to $D = 0$ are precisely the eigenfrequencies ${}_n \omega_l$ determined by the Neumann boundary conditions. A key fact is that even for such general solutions, we can enforce the inner boundary condition $T(U)\restriction_{r = R} = 0$ to leading order while still keeping $\omega$ generic. This simplifies the computations so that
\begin{equation}\label{e: D only at outer boundary}
D = \left[ r^2 U_n T \right]_{r=1}.
\end{equation}
Then by L'Hospital's rule using the limit $\omega \to \omega_n$, we obtain
\[
\int_{R}^{1} r'^2 \rho(r') U_n(r')^2 \operatorname{d} \! r'
= -\frac{( \partial_\omega D)_{\omega_n}}{2\omega_n}.
\]
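The mechanism behind this identity is easy to verify on a toy model that mimics the trick of enforcing the inner boundary condition while keeping $\omega$ generic. The following sketch takes $V'' + \omega^2 V = 0$ on $[0,1]$ with ``traction'' $T(V) = \partial_r V$, the inner condition $V'(0) = 0$ imposed on the general solution, and Neumann eigenfrequencies $\omega_n = n\pi$; it is a simplified stand-in without the weights $r^2\mu$ and $\rho$, chosen only to illustrate the mechanism.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

n = 4
wn = n * np.pi                      # eigenfrequency of V'' + w^2 V = 0
Vn = lambda r: np.cos(wn * r)       # eigenfunction with V'(0) = V'(1) = 0

# D(w) = V_n(1) * T(V(.; w))(1) with V(r; w) = cos(w r) the general
# solution obeying only the inner condition; D vanishes at w = k*pi.
D = lambda w: Vn(1.0) * (-w * np.sin(w))

h = 1e-6
dD = (D(wn + h) - D(wn - h)) / (2.0 * h)     # numerical (d_w D)(w_n)
inner = quad(lambda r: Vn(r)**2, 0.0, 1.0)[0]
print(inner, -dD / (2.0 * wn))               # both ~ 0.5
\end{verbatim}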
Next we recall
\[
G(x,x_0,t) = \frac{1}{2\pi}\
\sum_{l=0}^{\infty} \sum_{n=0}^{\infty}\
(l + \tfrac{1}{2})\ \frac{\sin({}_n\omega_l t)}{{}_n\omega_l I_l}\ \underbrace{
{}_n\mathbf{D}_l ({}_n\mathbf{D}_l)_0}_{\eqqcolon {}_n H_l}\
P_l(\cos \Theta),
\]
where $I_d = I_{n,l}$ is equal to $l(l+1) \int_{r=R}^{1} \rho r^2 U_n^2 \operatorname{d} \! r$.
What we have shown is that
\begin{equation}\label{e: I_l in terms of D}
I_l = -\frac{l(l+1)}{2{}_n\omega_l} \left( \frac{ \partial D}{ \partial \omega}\right)_{{}_n \omega_l}
\end{equation}
so the Green's function becomes
\[
G(x,x_0,t) = -\frac{1}{\pi}\
\sum_{l=0}^{\infty} \sum_{n=0}^{\infty}\
\frac{l + \frac{1}{2}}{l(l+1)}\ \frac{\sin({}_n\omega_l t)}{\left( \tfrac{ \partial D}{ \partial \omega}\right)_{{}_n \omega_l}}\
{}_n\mathbf{D}_l ({}_n\mathbf{D}_l)_0\
P_l(\cos \Theta).
\]
Next, observe that ${}_n \omega_l$ are exactly the zeros of $D$ so we can replace the sum over $n$ by a complex line integral over $\omega$.
First use $\text{Re} \tfrac{e^{i\omega t}}{i} = \sin (\omega t)$. Then for fixed $l$, we compute as in \cite{ZhaoModeSum}
\begin{equation}
\sum_{n=0}^{\infty}\
\frac{\sin({}_n\omega_l t)}{\left( \tfrac{ \partial D}{ \partial \omega}\right)_{{}_n \omega_l}}\
{}_n\mathbf{D}_l ({}_n\mathbf{D}_l)_0
= -\frac{1}{2\pi}\text{Re}\int_{-\infty}^\infty D^{-1} \mathbf{D}_l (\mathbf{D}_l)_0 e^{-i\omega t}\operatorname{d} \! \omega
\end{equation}
where the residue at $\omega = {}_n \omega_l$ of the integrand is calculated via
\[
\lim_{\omega \to {}_n \omega_l} \frac{\omega-{}_n\omega_l}{D} \mathbf{D}_l (\mathbf{D}_l)_0 e^{-i\omega t}
\]
and one uses L'Hospital's rule to get the desired formula. As in \cite{ZhaoModeSum}, the lack of a prefix $n$ on $U_l(r)$ and $U_l(r')$ indicates that these are general solutions which \emph{do not necessarily} satisfy the free-surface boundary conditions although \emph{we are enforcing the inner boundary condition.}
\begin{remark}
We note that \cite{HIKRigidity} also used residue theory to compute the infinite sum over $n$. However, the argument would not readily apply here since ${}_n\omega_l$ is more complicated in our case, so we employ a trick to circumvent using the equations involving ${}_n \omega_l$, which cannot be solved explicitly.
\end{remark}
Thus, we have managed to write $G$ as the inverse Fourier transform in $\omega$ of $D^{-1} \mathbf{D}_l (\mathbf{D}_l)_0$. Taking the Fourier transform, we obtain
\begin{equation}\label{e: hat G with general efunctions}
\hat G(x,x_0,\omega) = \frac{1}{2\pi}\sum_{l=0}^\infty
\frac{l + \tfrac{1}{2}}{l(l+1)} D^{-1} \mathbf{D}_l (\mathbf{D}_l)_0 P_l(\cos \Theta).
\end{equation}
This corresponds to the residue computation in \cite{HIKRigidity} for the infinite series over $n$.
\subsubsection{Poisson's formula for the Green's function}
We abuse notation and denote
\[
H(k) = k^{-2}U_l(r)U_l(r')
\]
in the formula for $G$ so as not to treat the curl operations at first. This causes no risk of confusion since we will specify the exact moment we apply the curl operators. Note that $U_l$ does not necessarily satisfy the Neumann boundary conditions.
\begin{proof}[Proof of proposition \ref{prop:Trace Formula}]
By the identical argument in \cite[Appendix A]{HIKRigidity}, we use \emph{Poisson's formula} to rewrite $\hat G(x,x_0,\omega)$ in a different form.
\begin{multline} \label{eq:PSS-poss}
\hat{G}(x,x_0,\omega) =
\\ \frac{1}{2\pi}\
\sum_{s=1}^{\infty}\ (-)^s
\int_0^{\infty} \left[
\ D^{-1}\ H(k) \right]
P_{k - 1/2}(\cos \Theta)
\{ e^{-2 i s k \pi} + e^{2 i s k \pi} \} \, k \operatorname{d} \! k
\\
+ \frac{1}{2\pi}\ \int_0^{\infty} \left[
D^{-1}\ H(k) \right]
P_{k - 1/2}(\cos \Theta) \, k \operatorname{d} \! k .
\end{multline}
Note that $H(k)$ has the general eigenfunctions that do not necessarily satisfy Neumann boundary conditions.
We substitute $k = \omega p$ so $k \operatorname{d} \! k = p^{-1} \operatorname{d} \! p$ and the above expression becomes (see \cite[Appendix A]{HIKRigidity} for details)
\begin{multline}\label{e: G with p integral}
\hat{G}(x,x_0,\omega) = \frac{1}{2\pi}\
\left[ \mstrut{0.6cm} \right.
\sum_{s = 1,3,5,\ldots} (-)^{(s-1)/2}
\\
\int_0^{\infty} \left[
D^{-1}\ H(\omega p) \right]
Q_{\omega p - 1/2}^{(1)}(\cos \Theta)
\{ e^{-i (s-1) \omega p \pi} - e^{i (s+1) \omega p \pi} \}
\, p^{-1} \operatorname{d} \! p
\\
+ \sum_{s = 2,4,\ldots} (-)^{s/2}
\int_0^{\infty} \left[
D^{-1}\ H(\omega p) \right]
Q_{\omega p - 1/2}^{(2)}(\cos \Theta)
\{ e^{-i s \omega p \pi} - e^{i (s-2) \omega p \pi} \}
\, p^{-1} \operatorname{d} \! p \left. \mstrut{0.6cm} \right],
\end{multline}
where $Q^{(j)}_k (\cos \Theta)$ are certain travelling wave Legendre functions described in \cite[Appendix A]{HIKRigidity}.
To obtain the leading order asymptotics of the above formula, we will eventually employ the method of steepest descent. Before doing so, we will obtain an expression for $U_k(r)$ that has a geometric meaning, representing all the multiple scattering of a single ray interacting not just with the boundary but with the interfaces as well. In Appendix \ref{a: Generalized Debye}, we obtain a Debye series expansion of the $D^{-1} H(\omega p)$ term in the above sum.
After a lengthy computation in Appendix \ref{a: Generalized Debye}, we write down the updated formula for a single term in the sum over $s$ in the Green's function from \eqref{eq: wave propagator form N interface in app} when $r$ and $r_0$ are in the same layer,
\begin{multline}\label{eq: wave propagator form N interface}
\simeq \frac{1}{4\pi} (-)^{(s-1)/2}
(r r_0 c^{(+)}(r) c^{(+)}(r_0))^{-1}
(2\pi \rho^{(+)}(r) \rho^{(+)}(r_0) \sin \Theta)^{-1/2}
\hspace*{3.0cm}
\\
\int (\wkb_+(r;p) \wkb_+(r_0;p))^{-1/2}
\sum_{M \in \mathbb{Z}_{\geq 0}^{4(N-4)}}n_M(p) \cdot
\\
\sum_{i=1}^{4}
\exp[-i \omega (\tau_{M,i}(r,r_0;p) + p \Theta + (s-1) p \pi)]Q_{M,i}(p)
\\
\exp[i (\pi/4) (2 N_{M,i} - 1)] (\omega p)^{-3/2} \operatorname{d} \! p,
\end{multline}
where the formula is nearly identical to that of \cite{HIKRigidity} with several key differences that encode (using the multiindex $M$) the amplitude and the broken ray path consisting of reflecting and transmitting legs. Each component of $M$ indicates the number of reflected or transmitted legs of the ray in a particular layer.
First, $Q_{M,i}(p)$ is the leading amplitude of the wave, which is a product of reflection and transmission coefficients corresponding to the legs of a ray connecting two points at $r$ and $r_0$ with epicentral distance $\Theta$ (see \eqref{e: Q_M term} and \eqref{e: defining Q_M,i}), and $n_M$ is a combinatorial coefficient counting the dynamic analogs of this ray.
Here, $\tau_{M,i}(r,r_0;p)$ is the radial component of the travel time of a broken ray with ray parameter $p$ that connects two points at $r$ and $r_0$. It is the sum of the radial travel times of each of the reflected and transmitted legs of the ray (see \eqref{e: Phi_m radial travel time} and \eqref{tau_M formula N interface}). Hence, $\tau_{M,i}$ and $Q_{M,i}$ encode the phase and amplitude (with all the reflections and transmissions) of the wave associated to a particular ray. The index $i=1, \dots, 4$ corresponds to different ray paths with zero or one reflections connecting the source and receiver located at the radii $r$ and $r_0$, analogous to \cite{ZhaoModeSum}; once we take the trace and apply the method of steepest descent, only the terms with $i =1,4$ contribute to the leading order asymptotics. Moreover, when taking the trace, the terms with $i=1$ and $i=4$ are identical, so we will drop the subscript $i$.
Also, $N_{M,i}= N_{M,i}(p)$ is the KMAH index which is piecewise constant depending on the value of $p$ and is also affected by a ray grazing an interface.
\subsubsection*{Method of steepest descent} As in \cite[Section 3.2]{HIKRigidity}, we carry out the method of steepest descent in the integration over $p$. At this point, the argument is identical so we will be brief. Considering \eqref{eq: wave propagator form N interface}, we interchange the order of summation and integration, and invoke the
method of steepest descent in the variable $p$. Also notice that the path of integration is beneath the real axis, while taking $\omega > 0$. We carry out the analysis for a single
term, $s=1$. For $s=2,4,\ldots$ we have to add $s p \pi$ to $\tau_{M,i}$,
and for $s=3,5,\ldots$ we have to add $(s-1) p \pi$ to $\tau_{M,i}$,
in the analysis below.
Considering
\[
\phi_{M,i,s=1}=\phi_{M,i}(p) = \phi_{M,i}(r,r_0,\Theta,p):= \tau_{M,i}(r,r_0;p) + p \Theta
\]
as the phase function (for $s=1$) and $\omega$ as a large parameter,
we find (one or more) saddle points for each $i$, where
\[
\partial_p \tau_{M,i}(r,r_0,p)\restriction_{p = p_k} = -\Theta .
\]
Later, we will consider the diagonal, setting $r_0 = r$ and $\Theta =
0$. We label the saddle points by $k$ for each $M,i$ (and $s$). We note
that $r, r_0$ and $\Theta$ determine the possible values for $p$
(given $M$, $i$ and $s$), which correspond to the rays connecting the
receiver point with the source point (allowing conjugate points). Hence, there can be multiple saddle points for a fixed $M,i,s,r,r_0,\Theta$. For
$s=1$, the rays have not completed an orbit. With $s = 3$ we begin to
include multiple orbits.
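For later reference, we recall the contribution of a nondegenerate saddle point $p_k$ (where $\partial_p\phi_{M,i}(p_k)=0$ and $\partial^2_p\phi_{M,i}(p_k)\neq0$) in our conventions: for an integral of the above form,
\[
\int a(p) \exp[-i\omega \phi_{M,i}(p)] \operatorname{d} \! p
\simeq
a(p_k)\exp[-i\omega \phi_{M,i}(p_k)]
\left(\frac{2\pi}{\omega\abs{\partial^2_p\phi_{M,i}(p_k)}}\right)^{1/2}
\exp[-i(\pi/4)\operatorname{sgn}\partial^2_p\phi_{M,i}(p_k)],
\]
which is the source of the factors $\abs{\partial^2_p\tau_{M,i}}^{-1/2}$ and of the $\operatorname{sgn}\partial^2_p\tau_{M,i}$ contribution to the phase index below (note that $\partial^2_p\phi_{M,i}=\partial^2_p\tau_{M,i}$ since the remaining terms in the phase are linear in $p$).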
We then apply the method of steepest descent to \eqref{eq: wave propagator form N interface} with a contour deformation as in \cite[Section 3.2]{HIKRigidity} and we obtain
\begin{comment}
\begin{equation} \label{eq:abws}
\begin{aligned}
&\frac{1}{4\pi} (-)^{(s-1)/2}
(r r_0 c(r) c(r_0))^{-1}
(2\pi \rho(r) \rho(r_0) \sin \Theta)^{-1/2}
\hspace*{3.0cm}
\\
&\qquad \qquad \int (\wkb(r;p) \wkb(r_0;p))^{-1/2}
\sum_{M \in \mathbb{Z}_{\geq 0}^{4(n-4)}}n_M \cdot
\\ & \qquad \qquad \qquad \sum_{i=1}^{4}
\exp[-i \omega (\tau_{M,i}(r,r_0;p) + p \Theta + (s-1) p \pi)]
\\
&\qquad \qquad \qquad \qquad
\exp[i (\pi/4) (2 N_{M,i} - 1)] (\omega p)^{-3/2} Q_{M,i}(p)\operatorname{d} \! p
\\[0.25cm]
&\hspace{1in}\simeq \frac{1}{4\pi} (-)^{(s-1)/2}
(r r_0 c(r) c(r_0))^{-1}
(2\pi \rho(r) \rho(r_0) \sin \Theta)^{-1/2}
\\[0.2cm]
&\qquad \sum_{M \in \mathbb{Z}_{\geq 0}^{4(n-4)}}n_M\sum_{i=1}^{4} \sum_k
\left[ \omega^{-2}p^{-3/2} (\wkb(r;.) \wkb(r_0;.))^{-1/2}
\abs{\partial_p^2 \tau_{M,i}(r,r_0;.)}^{-1/2} \right.
\\& \hspace{1 in}\qquad \qquad \qquad Q_{M,i}(p)\Big]_{p = p_k}
\hspace{0in}
\exp[-i \omega T_{Mik} + i \tilde N_{Mik} (\pi/2)] ,
\end{aligned}
\end{equation}
\end{comment}
\begin{multline}
\simeq -\frac{2\pi}{(2\pi i)^{3/2}} (-)^{(s-1)/2}
(r r_0 c(r) c(r_0))^{-1}
(\rho(r) \rho(r_0))^{-1/2}
\\[0.2cm]
\sum_{M \in \mathbb{Z}_{\geq 0}^{4(N-4)}} n_M\sum_{i=1}^4 \sum_k
\left[ p (\wkb(r;.) \wkb(r_0;.))^{-1/2}
\abs{\partial_p^2 \tau_{M,i}(r,r_0;.)}^{-1/2}Q_{M,i}(p) \right]_{p =
p_k}
\\
\frac{1}{2\pi}\int_0^{\infty}i\omega^{3/2} \exp[-i \omega(
T_{Mik}-t) + i \tilde N_{Mik} (\pi/2)] \operatorname{d} \!\omega,
\end{multline}
as $\Theta \to 0$,
where
\begin{equation}
\begin{aligned}
T_{Mik} &= T_{s;Mik}(r,r_0,\Theta)
= \tau_{M,i}(r,r_0;p_k) + p_k \Delta_s ,\\
\tilde N_{Mik} &= N_{M,i} - \tfrac{1}{2} (1 -
\operatorname{sgn} \partial_p^2 \tau_{M,i} \restriction_{p = p_k}) ,
\end{aligned}
\end{equation}
in which
\begin{equation}
\Delta_s =
\begin{cases}
\Theta + (s-1) \pi & \text{if $s$ is odd} \\
-\Theta + s \pi & \text{if $s$ is even.}
\end{cases}
\end{equation}
The $\tilde N_{Mik}$ contribute to the KMAH indices, while the $T_{Mik}$ represent
geodesic lengths or travel times. The orientation of the contour
(after deformation) in the neighborhood of $p_k$ is determined by
$\operatorname{sgn} \partial_p^2 \tau_{M,i} \restriction_{p = p_k}$. Aside from the geometric spreading factor, the leading order amplitude is $Q_{M,i}(p)$, which is just a product of reflection and transmission coefficients corresponding to the legs of the associated ray; terms involving the curvature of the interfaces do not appear in the leading order term and only make an appearance in the subsequent term, which is not needed for the theorem. We note that
\begin{itemize}
\item
$\tilde N_{Mik} = \tilde N_{s;Mik}(r,r_0,\Theta)$ for multi-orbit waves
($s=3,4,\ldots$) includes polar phase shifts produced by any angular
passages through $\Theta = 0$ or $\Theta = \pi$ as well;
\item
if $r$ lies in a caustic, the asymptotic analysis needs to be adapted
in the usual way.
\end{itemize}
\begin{comment}
Next, as in \cite[Section 3.2]{HIKRigidity}, we consider the forward rays with their time reversal to deal with the $(\sin \Theta)^{-1/2}$ term as $\Theta \to 0$. We also take an inner product to deal with the curl terms in the definition of toroidal modes. After taking the inverse Fourier transform, we have \cite[Section 3.2]{HIKRigidity}
\end{comment}
Next, we take the trace of $ \partial_t G$ by restricting to $(r=r_0,\Theta =
0)$ and integrating. The phase function on the diagonal is $T_{Mik} =
\tau_{M,i}(r,r,p_k)+\pi(s-1)p_k$ and we apply stationary phase in the
variables $r,\theta,\psi$ with large parameter $\omega$.
This is a standard computation exactly as done in \cite[Section 3.2]{HIKRigidity}.
\begin{comment}
Since one has
$ \partial_p T_{M,i}(r,r,p) = 0$ at $p=p_k$, the critical points occur precisely
when
\[
\partial_r T_{M,i}(r,r,p) + \partial_{r_0}T_{M,i}(r,r,p) = 0, \qquad \partial_p T_{M,i}(r,r,p) = 0.
\]
The first condition forces $T_{M,i}(r,r,p)$ to be
independent of $r$. Also, we showed that for geodesics with turning
points, $U = O(\omega^{-\infty})$ when $r < \rturn$ where $\rturn$ is the radius of the corresponding ray. Finally, using
the inverse Fourier transform,
\[
\int_0^{\infty} \exp[i\omega(t-T)]\omega^{3/2} \operatorname{d} \!\omega
= c_d(t-T+\ii0)^{-5/2} ,\ \text{ with $c_d$ a constant.}
\]
Setting $R_{ik}= \{(r,\theta,\psi); r \in [\tilde{R}, 1], \;
\partial_rT_{M,i}(r,r,p_k) + \partial_{r_0}T_{M,i}(r,r,p_k)=0\}$ where $\tilde{R} =
R^*$ or $R$ depending on $p_k$,
$T_{Mik}$ remains constant over $R_{ik}$ and this holds only for $i = 1$ or $4$ by \eqref{tau_M formula N interface} in the leading order asymptotics. For $i=2,3$, $T_{M, i}(r,r,p_k)$ has no critical point over $(R^*, 1)$ so a stationary phase computation would only have boundary terms at $r = R^*$ and $r = 1$. However, these are lower order and at the boundaries, $T_{M, i}(r, r, p_k)$ for $i=2, 3$ is equal to $T_{\tilde M, \tilde i}$ for some $\tilde M$ and $\tilde i = 1, 4$. Hence, such boundary terms do not affect our computation.
When $r = r_0$, all terms with index $M$ and $i=2$ or $i=3$ are equal by \eqref{e: defining Q_M,i} and \eqref{tau_M formula N interface}, and can be combined. Thus, we may remove the index $i$.
We find that modulo terms of lower order in $\omega$, the
trace microlocally corresponds with
\begin{multline*}
\operatorname{Re}\sum_s \sum_{M \in \mathbb{Z}_{\geq 0}^{4(N-4)}} \sum_k\ \\
\left(\frac{1}{2\pi i}\right)^{3/2}(t-T_{s;Mk}+\ii0)^{-5/2}
i^{M_{k}}n_{M}(p_k)Q_{M}(p_k)c_d \\
\cdot \frac{1}{2} \int_{R_{k}} A_{s;Mk}(r,r,0) \rho r^2 \sin
\theta \operatorname{d} \! r
\operatorname{d} \!\theta \operatorname{d} \!\psi
\end{multline*}
and we use that here,
\[
T_{s;Mk} = T_{s;Mk}(r,r;p_k) = \tau_{M}(r,r;p_k)
+ \left\{\begin{array}{rcl}
p_k (s-1) \pi & \text{if $s$} & \text{odd} \\
p_k s \pi & \text{if $s$} & \text{even}
\end{array}\right.
\]
is independent of $r$ on the critical set.
We note that $p_k$ exists only for $\abs{M}$, and $s$, sufficiently large,
which reflects the geometrical quantization.
From this expression, it is clear that the singular support of the
trace consists of the travel times of periodic geodesics.
We further simplify the above formula, that is, the integral involving
$A_{s;Mk}$. First, since $T_{Mk}$ is independent of $r$, then so is
$\tau_{M}(r,r;p) = \tau_{M}(p)$.
\end{comment}
\begin{comment}
Thus, we may pull $ \partial^2_p\tau_{M,i}$ out of
the integral involving $A_{s;Mik}$ precisely because we are integrating
over a closed orbit:
\begin{align*}
&\int_{R_{ik}} A_{s;ik}(r,r,0) \rho r^2 \sin \theta \operatorname{d} \! r
\operatorname{d} \!\theta \operatorname{d} \!\psi
\\
&\qquad \qquad =(-)^{(s-1)/2}\abs{p_k^{-2} \partial^2_p\tau_{M,i}(p_k)}^{-1/2}\int_{R_{ik}}
\frac{1}{c^2\beta(r,p_k)} 2\sin \theta
\operatorname{d} \! r \operatorname{d} \!\theta \operatorname{d} \!\psi.
\end{align*}
We recall that the travel time $T$ for a piece of a geodesic from
$r_0$ to $r$ is
\[
T = \int_{r_0}^r \frac{dr'}{c^2\beta(r',p_k)} \operatorname{d} \! r'.
\]
Hence, denoting $T_{ik}^\sharp$ as the two way travel time of a transmitted
geodesic from $r=1$ to $r=R^*$, we obtain
\[
\int_{R_{ik}} \frac{1}{c^2\beta(r,p_k)} 2\sin \theta
\operatorname{d} \! r \operatorname{d} \!\theta \operatorname{d} \!\psi = T_{ik}^\sharp \int_0^{2\pi}\int_0^{\pi}
2\sin \theta
\operatorname{d} \!\theta \operatorname{d} \!\psi
= T_{ik}^\sharp \abs{SO(3)},
\]
where $\abs{SO(3)}$ is the volume of $SO(3)$ under a Haar measure.
Substituting these calculations,
\end{comment}
Following the computation in \cite{HIKRigidity}, we obtain
the leading order term in the trace formula as
\begin{align} \label{eq: principal term spheric symm}
&\operatorname{Re}\sum_s\sum_{M \in \mathbb{Z}_{\geq 0}^{4(N-4)}} \sum_k\
\left(\frac{1}{2\pi i}\right)^{3/2}(t-T_{s;Mk}+\ii0)^{-5/2}
i^{M_{k}+s-1}\\
&\qquad \qquad \qquad \qquad \cdot c Q_{M}(p_k) L_{k}\abs{p_k^{-2} \partial^2_p\tau_{M}(p_k)}^{-1/2}
\frac{1}{2\pi}\abs{SO(3)},
\end{align}
where $L_{k}$ is the travel time of a ray with only transmitted legs from $r=1$ to $r=R^*$ (see \eqref{e: transmitted length L}).
Note that the critical set becomes $\Theta_{M,k} = \partial_p \tau_{M}(p_k)$ so
$ \partial^2_p \tau_{M}(p_k)= \partial_p\alpha_{M,k}$ when restricting to the diagonal. Also, we use that here,
\[
T_{s;Mk} = T_{s;Mk}(r,r;p_k) = \tau_{M}(r,r;p_k)
+ \left\{\begin{array}{rcl}
p_k (s-1) \pi & \text{if $s$} & \text{odd} \\
p_k s \pi & \text{if $s$} & \text{even}
\end{array}\right.
\]
is independent of $r$ on the critical set.
We note that $p_k$ exists only for $\abs{M}$, and $s$, sufficiently large,
which reflects the geometrical quantization.
\end{proof}
\subsection*{Harmonics of the principal ray}
From the argument above, if $\gamma$ is a periodic orbit with period $T_{s,Mik}$ for some indices $s,M,i,k$ described above, the principal symbol of the contribution of $\gamma$ to the trace is as above. We can immediately write down the leading order contribution of $\gamma^l$, which is $\gamma$ travelled $l$ times. The travel time becomes $lT_{s,Mik}$. Then
$Q_{M,i}(p_k)$ becomes
$Q_{M,i}(p_k)^l$, $M_{ik}$ becomes $lM_{ik}$, and $p_k^{-2} \partial_p \alpha_{M,ik} $ becomes $lp_k^{-2} \partial_p \alpha_{M,ik}$.
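Concretely, making these substitutions in \eqref{eq: principal term spheric symm} (with the remaining indices kept fixed), the leading order contribution of $\gamma^l$ to the trace reads, schematically,
\begin{multline*}
\operatorname{Re}\left(\frac{1}{2\pi i}\right)^{3/2}(t-lT_{s;Mk}+\ii0)^{-5/2}
i^{lM_{k}+s-1}\\
\cdot\, c\, Q_{M}(p_k)^l\, L_{k}\abs{l\,p_k^{-2} \partial^2_p\tau_{M}(p_k)}^{-1/2}
\frac{1}{2\pi}\abs{SO(3)};
\end{multline*}
in particular, the $l$-th harmonic carries the extra factors $Q_M(p_k)^l$ and $l^{-1/2}$ relative to the principal ray.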
\subsection{Spheroidal modes}\label{s: spheroidal modes}
The above trace formula and theorems \ref{t: spectral rigiditiy} and \ref{thm:blspr-multilayer} essentially deal with a scalar wave equation with a single wavespeed. The analysis for toroidal modes reduced to a scalar wave equation with an associated Laplace-Beltrami operator. However, our methods can also treat the isotropic elastic setting to include spheroidal modes (with the PDE described in \cite[Chapter 8]{DahlTromp}), where two wavespeeds ($c_P$ and $c_S$) are present corresponding to the $P$-waves and the $S$-waves, and there is a spectrum associated to the elliptic, isotropic elastic operator. In the elastic setting, each leg of a broken geodesic will be a geodesic for either the metric $c_P^{-2}dx^2$ or $c_S^{-2}dx^2$, so there is an associated length spectrum as well that includes \emph{mode-converted} legs. Thus, theorem \ref{t: spectral rigiditiy} can be extended to the case of the elastic operator by using corollary \ref{cor:2-speeds} if the length spectrum (or a dense subset) can be recovered by a trace formula from the spectrum.
The theorem would take the form
\begin{theorem}[Elastic spectral rigidity with moving interfaces]
\label{t: elastic spectral rigiditiy}
Fix any $\varepsilon>0$ and $K\in\mathbb{N}$, and let $c_{P,\tau}(r)$ and $c_{S,\tau}(r)$ be an admissible family of profiles with discontinuities at $r_k(\tau)$ for all $k=1,\dots,K$.
Suppose that the length spectrum for each $c_{P/S,\tau}$ is countable in the
ball $\bar B(0,1) \subset \mathbb R^3$.
Assume also that the length spectrum satisfies the principal amplitude injectivity condition and the periodic conjugacy condition.
Suppose
$\operatorname{spec}(\tau)=\operatorname{spec}(0)$ for all $\tau\in(-\varepsilon,\varepsilon)$.
Then $c_{P,\tau}=c_{P,0}$, $c_{S,\tau}=c_{S,0}$ and $r_k(\tau)=r_k(0)$ for all $\tau\in(-\varepsilon,\varepsilon)$ and $k=1,\dots,K$.
\end{theorem}
Thus, all we need is to extend proposition \ref{prop:Trace Formula} to the elastic case and then apply corollary \ref{cor:2-speeds}. Since the calculation is similar to, but a more tedious version of, the case considered here, we will only provide an outline of the proof.
\begin{enumerate}
\item
The Green's function associated to just the spheroidal modes can be computed analogously as in \cite[Equation (31)]{ZhaoModeSum}.
\item One can then obtain (vector-valued) WKB solutions to approximate spheroidal modes, which are eigenfunctions of the static, elastic operator as in \cite[Appendix A]{ZhaoModeSum} and \cite[Chapter 8]{DahlTromp}.
\item
We can use the methods presented here (with the method of steepest descent for the asymptotic analysis) to then determine the leading order asymptotics of the sum of eigenfunctions to obtain a corresponding trace formula. The scattering coefficients will be determined by the elastic transmission condition, with an associated Debye expansion as done in appendix \ref{a: Generalized Debye}. Afterward, the stationary phase analysis
will lead to the same form as \eqref{eq: trace coefficient} but the reflection and transmission coefficients appearing in $Q(p_\gamma)$ will be different to account for mode conversions. Also, $\alpha(p_\gamma)$ will be modified with the appropriate wave speed appearing in each constituent of the linear combination of epicentral distances that correspond to an associated $P$ or $S$ leg of $\gamma$.
\item
The computation in \cite{ZhaoModeSum} does not treat glancing nor grazing rays, but their formulas can be modified with the methods presented here to account for such rays as well. The $n(p_\gamma)$ appearing in \eqref{eq: trace coefficient} will again count the number of ``dynamic analogs'' associated to $\gamma$ as described in \cite{HronCriteria} for the spheroidal case; that paper also has several figures of broken geodesics in the spheroidal case.
\end{enumerate}
Under an analog to the principal amplitude injectivity condition for spheroidal modes, one can recover the basic length spectrum for each of the two wave speeds. One then uses Corollary~\ref{cor:2-speeds} to recover both wave speeds.
\section{Declarations}
\subsection*{Funding}
MVdH was supported by the Simons Foundation under the MATH + X program, the National Science Foundation under grant DMS-1815143, and the corporate members of the Geo-Mathematical Imaging Group at Rice University.
JI was supported by the Academy of Finland (projects 332890 and 336254).
\subsection*{Conflict of interest/Competing interests}
{\bf Financial interests:} The authors declare they have no financial interests.
\\
\noindent {\bf Non-financial interests:} The authors declare they have no non-financial interests.
\subsection*{Availability of data and material} Not applicable
\subsection*{Code availability} Not applicable
\section{Length spectral rigidity proof}
\subsection{Statements}
We first set up some background:
\begin{itemize}
\item
Let $A(r_1,r_0)=\bar B(0,r_1)\setminus B(0,r_0)\subset\R^n$ be the closed annulus in a Euclidean space.
\item
The length spectrum comes with reflections included.
\item
We might want to mean ``primitive length spectrum'' where no multiples are included.
\end{itemize}
\joonas{It is enough to have non-degeneracy in the part of the length spectrum we actually use. We only need to know that the singularities are there for the rays that stay within a single layer (and a single mode if we look at the elastic system). For the rest we only need them not to mess up our good data; it does not matter whether the singularities are even there. We have no need to prove that any mode conversion singularities are actually present!}
The first result is an extension of~\cite[Theorem 1.1]{SRHIK} to the case of a moving interface.
\begin{proposition}[Length spectral rigidity of an annulus with one moving interface]
\label{prop:lspr-annulus}
Fix any $\varepsilon>0$.
Let $r(\tau)\in(0,1)$ depend $C^1$-smoothly on $\tau$.
Let $c_\tau$ with $\tau\in(-\varepsilon,\varepsilon)$ be $C^{1,1}$ functions $(0,1]\to (0,\infty)$ satisfying the Herglotz condition and the countable conjugacy condition and depending $C^1$-smoothly on $\tau$.
Let $\operatorname{lsp}(\tau)$ denote the length spectrum of the annulus $A(1,r(\tau))$ with the velocity profile $c_\tau$.
If $\operatorname{lsp}(\tau)=\operatorname{lsp}(0)$ for all $\tau\in(-\varepsilon,\varepsilon)$, then $c_\tau=c_0$ and $r(\tau)=r(0)$ for all $\tau\in(-\varepsilon,\varepsilon)$.
\end{proposition}
This is used to prove a multi-layered version of the result, which is our main result on the length spectrum.
\begin{proof}[Proof of theorem~\ref{thm:lspr-multilayer}]
The proof is a slight generalization of that of proposition~\ref{prop:lspr-annulus}.
The proposition is first applied to...
\joonas{Continue here! Details of variation (and other small details that are unimportant for the main argument) in appendix?}
\joonas{Use an intermediate lemma if it clarifies presentation?}
\end{proof}
\joonas{Full planet as the main result. (In general should work down to the depth where Herglotz fails. In the Earth SKS satisfies Herglotz at CMB. Would we need Herglotz globally for all modes?) Different variants might fit better within the theorem with assumptions pulled into a definition.}
\joonas{Shape of harmonics, determining parameters, higher order primitive, higher order symbols might be worse but with clean jumps we don't need them, Poincaré map has only one dimension and eigenvalue}
\subsection{Proof sketch}
\begin{enumerate}
\item
We only use rays that stay in a single layer first.
\item
In each layer $\rho'>0$ or $\rho'<0$ throughout, and at interfaces $\rho$ is non-smooth or $\rho'=0$ with $\rho''>0$.
We call the end with highest $\rho$ the ``top'' of the layer.
(With the traditional Herglotz condition this is the top, but we allow some layers to flip.)
Rays can bounce off the top of the layer.
It does not matter if they are also allowed to transmit as long as they can reflect.
\item
In each layer there are rays of all depths joining two points at the top.
They behave smoothly.
Their periodic extension closes up when their opening angle is a rational multiple of $\pi$.
Such points are dense.
(Mild conjugate point assumption needed here.)
\item
These periodic broken rays staying in the same layer are mostly smooth when $s$ varies.
There is no uniform interval in the varying parameter; it gets shorter when the winding number grows.
The point is that for most rays there is a smooth family of rays indexed by $s$ so that each ray $\gamma_s$ is a periodic broken ray for $c_s(r)$.
With mild conjugacy assumptions these kinds of rays are common enough.
\item
With the arguments of the previous paper, differentiation in $s$ and inverting an Abel transform shows that $\Der{s}c_s(r)=0$ in that layer, assuming that the top stays put.
If the top moves, then it becomes $\Der{s}c_s(r+h_s)=0$, where $h_s$ is the shift of the top as a smooth function of $s$.
This means in the linearized picture that we know what the speed in the layer is w.r.t. its top but we do not quite know where the layer is.
There seems to be no control as to whether the bottom is moving; only the reflecting top is relevant.
\item
This leaves a single shift parameter for each interface, a much smaller inverse problem than the original.
There is the consistency condition that if the same interface is the top of two layers (Herglotz switches direction at it), then the two movements must coincide.
\item
Assuming a certain no-shadow property and same orientation of all layers (top on top), we should be able to use rays passing through interfaces to find the movement of interfaces.
The condition is that from each layer there must be a diving ray that bounces off an interface above its own top.
(Maybe we want an open set of them?)
The topmost top is the boundary of the manifold and stays put.
Then we can take rays from the next layer and see how they bounce off the surface after one refraction.
(The condition is that these refractions must be permitted.)
The only new calculation needed is the derivative of the length of the diving wave that goes through interfaces.
The contributions within the layers are already known and accounted for, so the only thing we are interested in is the new term picked up at the interface.
Without this condition, at least naive examples suggest that uniqueness might fail if interfaces can move.
\end{enumerate}
\begin{remark}
The last two steps simplify greatly by using purely radial rays.
Starting from the top and iterating inwards, we have a single radial ray with one endpoint moving and the other fixed.
\end{remark}
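To spell out the radial computation suggested in the remark: a purely radial ray bouncing between the interface $r=z(s)$ and the fixed surface $r=1$ has two-way travel time
\[
T(s)
=
2\int_{z(s)}^1 \frac{\mathrm{d} r}{c_s(r)},
\]
so once the first step has given $\partial_s c_s=0$ in the layer, differentiating in $s$ yields $T'(s)=-2z'(s)/c(z(s))$, and $T'=0$ forces $z'=0$ directly.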
\section{Moving interfaces}
\joonas{Notes for typing up the proof.}
\subsection{Basic quantities are still valid}
The general theory of moving interfaces seems to be quite inconvenient, but the structure of our specific setting makes it much more tractable.
Let us consider geometry only in 2D; the geometric results are independent of dimension.
In polar coordinates $(r,\theta)$ the metric is
$
g
=
c^{-2}(r)[\mathrm{d} r^2+r^2\mathrm{d}\theta^2]
$
and the constants of motion are the energy
$
E
=
c^{-2}(r)
[\dot r^2+r^2\dot\theta^2]
$
and the angular momentum
$
L
=
c^{-2}(r)r^2\dot\theta
$.
That the energy is $E\equiv1$ just expresses the unit speed parametrization.
Suppose there is a jump discontinuity in the wave speed $c(r)$.
When a geodesic traverses such an interface (or reflects off it), two quantities are conserved:
the speed and the covelocity parallel to the interface.
(More precisely, $\dot\gamma^\flat|_{T_x\Sigma}$ is conserved at $x\in\Sigma\subset M$.)
As the jump is in the radial direction, these conserved quantities are exactly $E$ and $L$.
This is not a surprise though: If a jump respects the symmetry, the conserved quantities should be conserved across the interface.
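In particular, Snell's law across the interface is an immediate consequence: if $\alpha$ denotes the angle between the ray and the radial direction, then unit speed gives $L=\rho(r)\sin\alpha$, so the continuity of $L$ at an interface of radius $r$ forces
\[
\frac{\sin\alpha_-}{c_-(r)}
=
\frac{\sin\alpha_+}{c_+(r)},
\]
where $c_\pm(r)$ and $\alpha_\pm$ are the wave speeds and angles on the two sides.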
Because the conservation laws are independent of jumps, the formulas for travel time and opening angle as a function of depth remain valid:
\begin{equation}
T(r_0)
=
2
\int_{r_0}^1
\rho(r)
\left[
1-\left(\frac{\rho(r_0)}{\rho(r)}\right)^2
\right]^{-1/2}
r^{-1}\mathrm{d} r
\end{equation}
and
\begin{equation}
\alpha(r_0)
=
2
\int_{r_0}^1
\frac{\rho(r_0)}{\rho(r)}
\left[
1-\left(\frac{\rho(r_0)}{\rho(r)}\right)^2
\right]^{-1/2}
r^{-1}\mathrm{d} r
.
\end{equation}
As before, $\rho(r)=r/c(r)$.
\subsection{Radial shifts}
Take any $h\in\R$ of small enough absolute value.
We define a shift by $h$ so that $\tau_hc(r)=\frac{r}{r+h}c(r+h)$.
Then $\tau_h\rho(r)=\rho(r+h)$ and the shift by $h$ is an isometry from the annulus $r\in(a,b)$ to $r\in(a+h,b+h)$.
(The shift nature would be more apparent if we used $(0,1]\times S^1$ instead of the punctured disc.)
Suppose we have a family of wave speeds $c_s(r)$ with jumps or other singularities at $r_s^k$ depending smoothly on $s$.
We index the jump points so that $1=r_s^1>r_s^2>\dots>r_s^K>0$.
At first we will change $c_s(r)$ to $\tau_{r_0^k-r_s^k}c_s(r)$ on $(r_s^{k-1},r_s^k)$.
The new modified profile $\hat c_s(r)$ will have its interfaces fixed at $r_0^k$ for all $s$.
The relative movements of the layers can cause there to be a gap in the definition of $\hat c_s(r)$ or it can be doubly defined.
In the latter case the lowest layer takes precedence.
Neither of these is an issue, as we peel things from above and we only want to show that a derivative vanishes, so in fact infinitesimal movements of the interfaces do not mess things up.
All derivatives of lengths of periodic orbits will be exactly the same for $c_s(r)$ and $\hat c_s(r)$ as long as they stay in a single layer.
All other orbits will be ignored at first.
\subsection{The two steps of length spectral rigidity}
Using the tools of the previous paper we get $\partial_s\hat c_s(r)=0$.
The bulk of the result is in this first step.
This reduces the problem to fixing the moving interfaces.
It suffices to fix the interfaces in a two-layer model, as then we may always fix the next interface using the previous one.
The top interface is the boundary of the manifold, and that stays put and makes a good starting point for iteration.
It remains to analyze how a moving interface affects the length of a ray going through the interface and whether such (periodic) rays are stable under perturbations.
Consider a single moving interface $r_s^1$ and depths $z$ below the interface.
Travel time is given as
\begin{equation}
T(z,s)
=
2
\int_z^1
\rho(r)
\left[
1-\left(\frac{\rho(z)}{\rho(r)}\right)^2
\right]^{-1/2}
r^{-1}\mathrm{d} r.
\end{equation}
Let us rescale depth so that it is measured with respect to the interface instead of the surface:
\begin{equation}
\hat T(z,s)
=
T(z-r_s^1+r_0^1,s).
\end{equation}
This way the travel time from the tip up to the interface is independent of $s$.
\subsection{Bouncing rays}
It turns out that no rays through an interface need to be analyzed at all!
I would reformulate the argument as follows:
We start in the uppermost layer.
Diving rays give wave speed variation down to the interface.
Bouncing rays then give the movement of the interface.
Only then one looks at the next layer, whose upper boundary is now fixed, and runs the same argument.
Conjugate points can be a globally messy concept when going across interfaces.
We only need to assume densely many stable diving waves in each layer and globally countable length spectrum.
Consider a layer extending from $r=z$ to $r=1$ with a smooth sound speed $c\colon(z-\varepsilon,1]\to(0,\infty)$.
How $c$ is extended to $(z-\varepsilon,z)$ is irrelevant as long as it is smooth, and this extension may also depend on the parameter $s$ which we will introduce as long as $c|_{[z,1]}$ stays put.
We will replace $r_0$ as a parameter for the geodesics by $P$ (to be read as capital Rho) which plays the role of $\rho(r_0)$ and varies in $(0,\rho(z))$.
The travel time and opening angle of a V-shaped broken ray segment reflecting once off the inner boundary are
\begin{equation}
T(P,z)
=
2\int_z^1
\rho(r)
\left[
1-\left(\frac{P}{\rho(r)}\right)^2
\right]^{-1/2}
r^{-1}\mathrm{d} r
\end{equation}
and
\begin{equation}
\alpha(P,z)
=
2\int_z^1
P\rho(r)^{-1}
\left[
1-\left(\frac{P}{\rho(r)}\right)^2
\right]^{-1/2}
r^{-1}\mathrm{d} r
=
2\int_z^1
\rho(r)^{-1}
\left[
P^{-2}-\rho(r)^{-2}
\right]^{-1/2}
r^{-1}\mathrm{d} r.
\end{equation}
(In the limit $P\to0$ we get the expected limits for the purely radial bouncing ray.)
Evidently
$\partial_1T>0$,
$\partial_1\alpha>0$,
$\partial_2T<0$, and
$\partial_2\alpha<0$,
and one can easily find expressions for these derivatives.
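For instance, differentiating under the integral sign in $P$ and at the lower endpoint in $z$,
\begin{align*}
\partial_1T
&=
2P\int_z^1
\rho(r)^{-1}
\left[
1-\left(\frac{P}{\rho(r)}\right)^2
\right]^{-3/2}
r^{-1}\mathrm{d} r,
&
\partial_1\alpha
&=
2\int_z^1
\rho(r)^{-1}
\left[
1-\left(\frac{P}{\rho(r)}\right)^2
\right]^{-3/2}
r^{-1}\mathrm{d} r,
\\
\partial_2T
&=
-2z^{-1}\rho(z)
\left[
1-\left(\frac{P}{\rho(z)}\right)^2
\right]^{-1/2},
&
\partial_2\alpha
&=
-2z^{-1}\frac{P}{\rho(z)}
\left[
1-\left(\frac{P}{\rho(z)}\right)^2
\right]^{-1/2}.
\end{align*}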
A straightforward computation with these expressions then gives
\begin{equation}
\partial_2\alpha\partial_1T
-
\partial_1\alpha\partial_2T
=
4
z^{-1}\rho(z)
\left[
1-\left(\frac{P}{\rho(z)}\right)^2
\right]^{1/2}
\int_z^1
\rho(r)^{-1}
\left[
1-\left(\frac{P}{\rho(r)}\right)^2
\right]^{-3/2}
r^{-1}\mathrm{d} r
>
0
\end{equation}
which will be useful in a moment.
\joonas{This means that a Jacobian is invertible so that $(T,\alpha)$ are as valid coordinates as $(P,z)$, at least locally.}
Now suppose the interface is at $z=z(s)$ and fix any target angle $\alpha_0=\frac{m}{n}\pi$ with coprime positive integers $n$ and $m$.
Copies and rotations of this one V-shaped segment make up a periodic broken ray which winds around the origin $m$ times and hits the interface $n$ times.
It follows from the implicit function theorem that there is a smooth function $P(s)$ (with $s$ varying in a small enough neighborhood of zero) so that $\alpha(P(s),z(s))=\alpha_0$ for all $s$ and the derivative satisfies $P'=-(\partial_1\alpha)^{-1}\partial_2\alpha z'$.
This $P(s)$ is the parameter describing the (leg of the) broken ray which keeps the same winding and reflection number as the interface moves.
We can then differentiate the length of this broken ray.
As there are $n$ of these V-shaped segments, the total travel time is $T(s)=nT(P(s),z(s))$. By the chain rule and the formula $P'=-(\partial_1\alpha)^{-1}\partial_2\alpha z'$ above, its derivative is
\begin{equation}
T'(s)
=
n\left[\partial_1T\,P'(s)+\partial_2T\,z'(s)\right]
=
-n(\partial_1\alpha)^{-1}
[
\partial_2\alpha\partial_1T
-
\partial_1\alpha\partial_2T
]
z'(s).
\end{equation}
Therefore $T'=0$ implies $z'=0$!
\section{Old notes}
\subsection{Assumptions}
Denote $\rho(r)=r/c(r)$.
The classical Herglotz condition is that $\rho'>0$ everywhere smoothly.
A weaker version is that $\rho$ is piecewise smooth and the condition holds in the sense of distributions or signed measures.
(This means no trapping with total internal reflections.)
The length spectral bit might work in a little more generality.
The key assumptions for length spectral rigidity to hold are:
\begin{itemize}
\item
The Herglotz condition holds in the same direction in every layer: either $\rho'(r)>0$ or $\rho'(r)<0$.
Different layers can have different signs.
\item
The length spectrum is countable.
Accumulation points should not matter, especially if the accumulation set is discrete (probably multiples of circumferences of interfaces).
Otherwise the interesting numbers (which are a subset of the full length spectrum) that are useful and behave well under parameter change are lost in the noise.
\item
There should not be smooth trapping.
If $\rho'(r)=0$, then $\rho''(r)>0$.
That is, if $\rho$ has a local maximum, it must be non-smooth.
The proof essentially peels a layer starting from the end with highest $\rho$ with short diving waves.
These short waves need to reflect back into the same layer, or else the data from the next layer will interfere.
It is not clear whether this actually prohibits uniqueness, but it is a serious obstacle.
\item
Mild conjugacy conditions are needed.
Hopefully within each layer only.
There should not be a need for many conditions intertwining the layers.
\item
We need to assume something on how the interfaces behave as $s$ varies.
If they stay put, life becomes easier.
At least they should behave smoothly and not change type or split.
The general topological type of the velocity profile should be known a priori, at least in theory.
\item
No-shadow condition.
Implied by Herglotz.
\end{itemize}
\subsection{Graphical understanding of velocity profiles}
We observe that the angular momentum of a geodesic is a constant of motion and satisfies $L\leq\rho$ at all points, and equality holds iff $\dot r=0$ (at turning points).
The figures below illustrate some phenomena related to the velocity profile.
We are still missing a picture of the no-shadow condition.
\begin{figure}[h]
\includegraphics[width=10cm]{layer2.png}
\caption{The classical Herglotz condition. The function $\rho$ is strictly increasing. If a ray starts at some depth, it will curve in the direction of increasing $\rho$ (outwards) until it meets the boundary.}
\end{figure}
\begin{figure}[h]
\includegraphics[width=10cm]{layer3.png}
\caption{A profile with a jump at blue radius. The two layers satisfy the Herglotz condition but the jump point does not; the jump is in the wrong direction. This means that if you start a ray normal to the radius at a depth just below the interface, the ray cannot continue past the interface. The condition $L\leq\rho$ forbids passing through an interval of radii and thus the ray is trapped with total internal reflection. It can only reflect back. If we start deeper, the horizontal line stays below the graph of $\rho$ and the ray can transmit to the upper layer. Some of it will also reflect. Both reflection and refraction are allowed for the ray starting deeper, only reflection for the shallower one.}
\end{figure}
\begin{figure}[h]
\includegraphics[width=10cm]{layer4.png}
\caption{A profile with two jumps. In the middle layer Herglotz is reversed and a ray starting near the innermost interface will keep bouncing against it, unable to pass through. If we start a little higher, the ray is still allowed to bounce (always allowed at a discontinuity) but it is always allowed to transmit too. If the starting point is chosen close to the center, the ray passes through all layers.}
\end{figure}
\begin{figure}[h]
\includegraphics[width=10cm]{layer6.png}
\caption{Smooth profile divided in three layers by three critical points of $\rho$. Rays near the local maximum of $\rho$ keep oscillating around that maximum. They are unable to reflect as the geometry is smooth, and the two layers cannot be decoupled in the length spectrum. The smooth local maximum of $\rho$ is thus an issue, as it causes a two-sided smooth trap. The smooth local minimum at the other interface is irrelevant. Curves starting almost tangential to it curve away in both directions.}
\end{figure}
\joonas{I have a sketch for a situation where the length spectrum is dense in an open set, and I think the open set will be large due to harmonic multiples. Herglotz prohibits it, so it could justify our assumptions.}
\section{Individual environments}
\begin{remark}
\joonas{Do we want to do something with this or just throw away? I'm inclined to throw away. VITALY: Me too since this seems more relevant for the trace formula and that's not our focus}
\textcolor{red}{TO BE WRITTEN AND THOUGHT:}
We need to discuss degeneracy, as we may allow some.
It was pointed out in \cite[Remark 2.4]{SRHIK} that it is enough that the primitive length spectrum does not degenerate.
That should be enough here too, but we need to check what happens with a discrete but countable number of accumulation points (or gliding ray lengths).
Perhaps we should define a ``gliding length spectrum'' that contains everything with glides and then require this to be disjoint from the primitive length spectrum.
It sounds plausible to actually use the gliding spectrum as data as well, but it seems to be unnecessary with the current approach.
\end{remark}
\end{document}
|
{
"arxiv_id": "2302.14134",
"language": "en",
"timestamp": "2023-03-01T02:02:12",
"url": "https://arxiv.org/abs/2302.14134",
"yymm": "2302"
} | \section{Introduction}
\label{sec:Introduction}
Correlated electron materials are often characterized by competing or correlated degrees of freedom whose interplay can give rise to remarkable physical properties and symmetry-broken states. This competition involves spin, orbital, charge and lattice degrees of freedom which may be active at comparable energy scales.~\cite{PhysRevLett.120.166401} One way to disentangle competing or cooperative effects is via fine-tuned laser pulse excitation of correlated systems, which allows one to reveal characteristic timescales, coupling constants and collective modes,\cite{Smallwood2016} and in some cases hidden nonthermal states.~\cite{PhysRevLett.127.227401}
Up to hundreds of femtoseconds after an impulsive excitation, the order parameter involved in a dynamical phase transition exhibits distinctly nonthermal scaling relations and fluctuations~\cite{Tsuji2013,Maklar2021,delaTorre2022} and the electronic band structure can be strongly modified.~\cite{PhysRevX.12.011013,PhysRevB.102.035109,doi:10.1126/sciadv.abd9275}
These effects are expected to be particularly prominent in low-dimensional systems, where nonlocal correlations govern the physics close to phase instabilities and crossovers.~\cite{Rohringer2018,PhysRevLett.123.097601,kauch_pitons_2019,https://doi.org/10.48550/arxiv.2112.15323,PhysRevB.104.245127_non_eq_piton_simard}
To capture the effect of nonlocal correlations, single- and two-particle correlation functions need to be calculated consistently, and this is challenging for several reasons. There is a lack of out-of-equilibrium methods that incorporate both local and nonlocal correlations and give access to the strongly correlated regime. Dynamical Mean Field Theory (DMFT) only captures local correlations,~\cite{RevModPhys.86.779_non_eq_review} the nonlocal components of GW+DMFT only charge fluctuations,\cite{Biermann2003,Ayral2013,PhysRevB.100.235117} the phenomenological time-dependent Ginzburg-Landau (tdGL) approach only considers low-order microscopic electronic fluctuations~\cite{PhysRevB.8.3423} and time-dependent Density Functional Theory (tdDFT) cannot properly describe strong correlation effects and does not capture the scattering processes which are relevant for thermalization at long times.~\cite{PhysRevLett.52.997}
The development of reliable, yet computationally efficient, numerical methods is crucial if we want to simulate nonthermal phenomena, including symmetry-broken states, up to experimentally relevant times of the order of picoseconds. Such methods would allow one to study accurately the destruction of thermal states and the formation of nonthermal phases triggered by impulsive excitations,\cite{Strand2015,Bauer2015,Stahl2021} and possibly shed light on the mechanisms which underlie photoinduced metastable states, such as the superconducting-like states observed in K$_3$C$_{60}$~\cite{Budden2021} and $\kappa$-organic compounds.~\cite{Buzzi2020} They would also allow one to address fundamental questions such as the effect of long- and short-range correlations in the formation and (de)stabilization of prethermal and hidden states,\cite{Berges2004,Moeckel2008,Eckstein2009,Werner2020} and to study the role of order parameter fluctuations in nonthermal phase transitions beyond tdGL.
The challenge is to devise nonequilibrium numerical many-body methods for treating nonlocal correlations that are, on the one hand, computationally tractable and, on the other hand, accurate enough to capture the relevant physics. A promising method, which has recently been extended to the nonequilibrium domain,~\cite{https://doi.org/10.48550/arxiv.2205.13813} is the so-called Two-Particle Self-Consistent approach (TPSC).~\cite{Vilk_1997} TPSC correctly reproduces the pseudogap in models for cuprates~\cite{Vilk_1996} and the growth of antiferromagnetic (AFM) correlations as the renormalized classical regime -- where the AFM correlation length exceeds the de Broglie wave length -- is approached.~\cite{Vilk_1997} %
It can also deal with superconducting phases,~\cite{PhysRevB.68.174502_tpsc_superconductivity} two-particle vertex corrections~\cite{Bergeron_2011_optical_cond} and multi-orbital systems.\cite{https://doi.org/10.1002/andp.202000399} TPSC has been used in conjunction with Density Functional Theory (DFT) in equilibrium to calculate the renormalization of the bands of iron pnictides and chalcogenides.~\cite{PhysRevB.102.035109} The main drawback of TPSC is that it does not fully capture strong local correlations, so that the method does not give access to the renormalized classical regime or Mott physics. To better account for strong local correlations while at the same time keeping track of the nonlocal correlations, a combination of DMFT and nonequilibrium TPSC is proposed in this paper, which resembles in spirit the recently developed equilibrium approaches of Refs.~\onlinecite{https://doi.org/10.48550/arxiv.2211.01919} and \onlinecite{https://doi.org/10.48550/arxiv.2211.01400}, and is applied to the single-band Hubbard model in the context of hopping and interaction quenches. We will in particular study the time-dependent spin and charge correlation functions and the pseudogap phase in the weak-to-intermediate coupling regime.
The paper is structured as follows: In Sec.~\ref{subsec:Hubbard_model}, we present the Hamiltonian of the model and the methods used to solve it. More specifically, the nonequilibrium DMFT, nonequilibrium TPSC and nonequilibrium DMFT+TPSC are presented in Secs.~\ref{subsec:DMFT}, \ref{subsec:TPSC_variants} and \ref{ch:dmft_tpsc}, respectively. The results are shown and discussed in Sec.~\ref{sec:results}. We give our conclusions in Sec.~\ref{sec:conclusion}.
\section{Model and methods}
\label{sec:Models_and_methods}
\subsection{Hubbard model}
\label{subsec:Hubbard_model}
We consider a single-band Hubbard model with time-dependent hopping parameters
\begin{align}
\label{eq:Hubbard_model_intro}
\hat{\mathcal{H}}(t)=&-\sum_{ij,\sigma}t^{\text{hop}}_{ij}(t)\left(\hat{c}_{i,\sigma}^{\dagger}\hat{c}_{j,\sigma}+\text{H.c.}\right)+U\sum_i\hat{n}_{i,\uparrow}\hat{n}_{i,\downarrow}\nonumber\\
&-\mu \sum_i (\hat{n}_{i,\uparrow} + \hat{n}_{i,\downarrow}).
\end{align}
Here, $t^{\text{hop}}_{ij}$ denotes the hopping amplitudes between sites $j$ and $i$, $\sigma \in \{\uparrow,\downarrow\}$ the spin, $\hat{c}^{(\dagger)}_{i,\sigma}$ are annihilation (creation) operators for site $i$, while $\hat{n}_{i\sigma}=\hat{c}^{\dagger}_{i,\sigma}\hat{c}_{i,\sigma}$ is the number operator, $U$ the local Hubbard repulsion and $\mu$ the chemical potential. We will consider ramps from a 2D square lattice to a 3D cubic lattice (and vice-versa) with in-plane nearest-neighbor hoppings $t^{\text{hop}}$, and use $t^{\text{hop}}$ as the unit of energy ($\hbar/t^{\text{hop}}$ as the unit of time). The ramps are implemented for the $z$-axis hopping, so that the corresponding time-dependent bare electronic dispersion reads
\begin{align}
\label{eq:dispersion_relation}
\epsilon_{\mathbf{k}}(t) = -2t^{\text{hop}}\left(\cos{k_x}+\cos{k_y}\right) - 2t_{z}^{\text{hop}}(t)\cos{k_z},
\end{align}
where $-\pi \le k_x,k_y,k_z\le \pi$ defines the Brillouin zone. This implies that the bare bandwidth $W$ of the Hubbard model~\eqref{eq:Hubbard_model_intro} changes from $8t^{\text{hop}}$ (2D) to $12t^{\text{hop}}$ (3D) and vice-versa. Note that we have set the fundamental constants like $\hbar$, $k_B$ and the electric charge $e$, as well as the lattice spacing $a$, to unity.
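For orientation, the instantaneous bare bandwidth along the ramp is simply
\begin{align}
W(t) = 8t^{\text{hop}} + 4t_z^{\text{hop}}(t),
\end{align}
since the dispersion \eqref{eq:dispersion_relation} ranges over $[-4t^{\text{hop}}-2t_z^{\text{hop}}(t),\, 4t^{\text{hop}}+2t_z^{\text{hop}}(t)]$; it interpolates between the 2D value $8t^{\text{hop}}$ for $t_z^{\text{hop}}=0$ and the 3D value $12t^{\text{hop}}$ for $t_z^{\text{hop}}=t^{\text{hop}}$.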
\subsection{Nonequilibrium DMFT}
\label{subsec:DMFT}
\subsubsection{General formalism}
Nonequilibrium DMFT is an implementation of the DMFT equations on the Kadanoff-Baym contour $\mathcal{C}$.~\cite{RevModPhys.86.779_non_eq_review,PhysRevLett.97.266408_freericks_non_eq} In DMFT, the lattice model is self-consistently mapped onto a single-site Anderson impurity model, where upon convergence the time-dependent hybridization function captures the effects of the lattice environment.~\cite{Georges_1996} The action of the nonequilibrium Anderson impurity problem is
\begin{align}
\label{eq:DMFT:DMFT_action}
\mathcal{S}[\Delta] =& -\int_{\mathcal{C}}\mathrm{d}z \ \hat{\mathcal{H}}_{\text{loc}}(z) \nonumber\\
& - \int_{\mathcal{C}}\mathrm{d}z \int_{\mathcal{C}} \mathrm{d}z^{\prime} \sum_\sigma \hat{c}^{\dagger}_\sigma(z)\Delta^{\sigma}(z,z^{\prime})\hat{c}_\sigma(z^{\prime}),
\end{align}
where $ \hat{\mathcal{H}}_{\text{loc}}$ is the same local term as in the lattice model, $\hat{c}^{(\dagger)}_\sigma$ annihilates (creates) an electron with spin $\sigma$ on the impurity and $z\in \mathcal{C}$. The hybridization function is denoted by $\Delta^\sigma(z,z^{\prime})$, and the integrals span over the entire Kadanoff-Baym contour $\mathcal{C}$.
With the nonequilibrium action \eqref{eq:DMFT:DMFT_action}, one can define the nonequilibrium impurity Green's function
\begin{align}
\label{eq:DMFT:Greens_function}
\mathcal{G}_{\text{imp}}^{\sigma}(z,z^{\prime}) = -i\text{Tr}\left[\mathcal{T}_{\mathcal{C}}e^{i\mathcal{S}[\Delta]}\hat{c}_{\sigma}(z)\hat{c}^{\dagger}_\sigma(z^{\prime})\right]/\mathcal{Z}[\Delta],
\end{align}
where $\mathcal{T}_{\mathcal{C}}$ is the time-ordering operator defined on the Kadanoff-Baym contour and $\mathcal{Z}[\Delta] = \text{Tr}\left[\mathcal{T}_{\mathcal{C}}e^{i\mathcal{S}[\Delta]}\right]$ is the partition function. The operator $\mathcal{T}_{\mathcal{C}}$ orders strings of operators according to the contour $\mathcal{C}$, which includes the forward branch $\mathcal{C}_1$, the backward branch $\mathcal{C}_2$ and the imaginary time branch ($\mathcal{C}$: $\mathcal{C}_1 \prec \mathcal{C}_2 \prec \mathcal{C}_3$). The impurity Green's function $\mathcal{G}^{\sigma}_{\text{imp}}$ will be computed using the third-order iterated perturbation theory (IPT) method, adapted to the nonequilibrium formalism (see Sec.~\ref{subsubsec:IPT}). Compared to second-order IPT, the additional third-order diagrams in the impurity solver allow one to push $U/W$ to larger values and to dope away from half-filling.~\citep{tsuji_nonequilibrium_2013}
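For later reference, the physical real-time components of contour functions such as Eq.~\eqref{eq:DMFT:Greens_function} are obtained by placing $z$ and $z^{\prime}$ on the appropriate branches of $\mathcal{C}$; in the standard Kadanoff-Baym conventions (suppressing the spin index),
\begin{align}
\mathcal{G}^{<}(t,t^{\prime}) &= i\big\langle \hat{c}^{\dagger}(t^{\prime})\hat{c}(t)\big\rangle, \quad
\mathcal{G}^{>}(t,t^{\prime}) = -i\big\langle \hat{c}(t)\hat{c}^{\dagger}(t^{\prime})\big\rangle,\nonumber\\
\mathcal{G}^{R}(t,t^{\prime}) &= \theta(t-t^{\prime})\big[\mathcal{G}^{>}(t,t^{\prime})-\mathcal{G}^{<}(t,t^{\prime})\big].
\end{align}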
\subsubsection{Paramagnetic self-consistency}
\label{subsubsec:PM}
In nonequilibrium DMFT, the lattice self-energy is assumed to be local and identified with the impurity self-energy, $\Sigma^\sigma_{ij}(z,z^{\prime})=\Sigma_{\text{imp}}^\sigma(z,z^{\prime})\delta_{ij}$, which is an approximation in systems with finite coordination number.~\cite{Mueller_1989,Georges_1996} Moreover, to attain the self-consistency condition, the impurity Green's function
$\mathcal{G}^{\sigma}_{\text{imp}}(z,z^{\prime})$ must be identical to the local lattice Green's function $\mathcal{G}^{\sigma}_\text{loc}(z,z^{\prime})$.
This self-consistency condition determines the hybridization function $\Delta^\sigma(z,z')$ appearing in the impurity action~\eqref{eq:DMFT:DMFT_action}, which plays the role of a dynamical mean field.
In impurity solvers based on weak-coupling perturbation theory, it is more convenient to work with the so-called Weiss Green's function $\mathcal{G}^{\sigma}_{0}$, which is related to the hybridization function via the Dyson equation
\begin{align}
\label{eq:PM:Weiss_Green_hyb}
\left[i\partial_z+\mu\right]\mathcal{G}^{\sigma}_0(z,z^{\prime}) - \Delta^{\sigma}(z,\bar{z})\mathcal{G}^{\sigma}_{0}(\bar{z},z^{\prime})=\delta^{\mathcal{C}}(z,z^{\prime}),
\end{align}
and which contains the same information.
Here, $\delta^{\mathcal{C}}(z,z')$ represents the delta function on the Kadanoff-Baym contour. The convolution along the contour $\mathcal{C}$ will sometimes be denoted by the operator ``$\ast$''. Contour-time arguments $z$ featuring an over-bar are integrated over $\mathcal{C}$.
The impurity Dyson equation for the interacting problem connects the impurity Green's function $\mathcal{G}^{\sigma}_{\text{imp}}$, the impurity self-energy $\Sigma_{\text{imp}}^{\sigma}$ and the Weiss Green's function $\mathcal{G}_{0}^{\sigma}$ as follows:
\begin{align}
\label{eq:PM:Dyson_eq}
&\mathcal{G}_{\text{imp}}^{\sigma}(z,z^{\prime}) = \mathcal{G}_{0}^{\sigma}(z,z^{\prime}) + \mathcal{G}_{0}^{\sigma}(z,\bar{z})\Sigma_{\text{imp}}^{\sigma}(\bar{z},\bar{z}^{\prime})\mathcal{G}_{\text{imp}}^{\sigma}(\bar{z}^{\prime},z^{\prime}).
\end{align}
As pointed out for example in Ref.~\onlinecite{PhysRevB.104.245127_non_eq_piton_simard}, the formulation of the impurity solver in terms of the Weiss Green's functions, $\Sigma_{\text{imp}}=\Sigma_{\text{imp}}[\mathcal{G}_0]$, violates the energy conservation principle (in the absence of an external field) because the self-energy is not expressed in terms of the interacting Green's functions. However, it turns out that combining DMFT with TPSC improves the energy conservation such that one can perform meaningful simulations up to longer times. Hence, it is not necessary to resort to an impurity solver which expresses the self-energy in terms of the interacting impurity Green's function, $\Sigma_{\text{imp}}=\Sigma_{\text{imp}}[\mathcal{G}_{\text{imp}}]$, which can lead to poor results already for the short time dynamics \cite{Eckstein2010ipt} and which does not correctly reproduce the energy scale $\omega\sim W$ of the onset of the asymptotic behavior (high-frequency and atomic limits) of the Hubbard model self-energy~\cite{Vilk_1997} (see Sec.~\ref{subsubsec:IPT}).
The lattice Green's function $\mathcal{G}_{\mathbf{k}}^{\sigma}$ is related to the impurity self-energy via the lattice Dyson equation
\begin{align}
\label{eq:PM:projected_green_function_impurity}
&\left[i\partial_z+\mu-\epsilon(\mathbf{k})-\Sigma^{\delta,\sigma}_{\text{imp}}(z)\right]\mathcal{G}_{\mathbf{k}}^{\sigma}(z,z^{\prime})\notag\\
&\hspace{1.0cm}- \Sigma_{\text{imp}}^{\sigma}(z,\bar{z})\mathcal{G}_{\mathbf{k}}^{\sigma}(\bar{z},z^{\prime})=\delta^{\mathcal{C}}(z,z^{\prime}),
\end{align}
where $\epsilon(\mathbf{k})$ is the bare electronic dispersion written in Eq.~\eqref{eq:dispersion_relation}, $\mu$ is the impurity chemical potential and $\Sigma^{\delta}_{\text{imp}}$ represents the time-local impurity self-energy diagrams, denoted by $\Sigma_{H}$ in Sec.~\ref{subsubsec:IPT}.
Owing to the DMFT self-consistency condition, the impurity Dyson equation \eqref{eq:PM:Dyson_eq} can be rewritten as a Volterra integral equation where the impurity Green's function $\mathcal{G}_{\text{imp}}^{\sigma}$ is replaced by the $\mathbf{k}$-averaged lattice Green's function $\mathcal{G}^{\sigma}_\text{loc}$:
\begin{align}
\label{eq:PM:impurity_self_energy}
\mathcal{G}_0^{\sigma}(z,\bar{z})\left[\delta^{\mathcal{C}}(\bar{z},z^{\prime}) + F^{\sigma}(\bar{z},z^{\prime})\right] = \mathcal{G}^{\sigma}_\text{loc}(z,z^{\prime}),
\end{align} where $F^{\sigma}(z,z^{\prime})\equiv \Sigma_{\text{imp}}^{\sigma}(z,\bar{z}^{\prime})\mathcal{G}^{\sigma}_\text{loc}(\bar{z}^{\prime},z^{\prime})$. Equations \eqref{eq:PM:projected_green_function_impurity} and \eqref{eq:PM:impurity_self_energy}, along with the diagrammatic expression for the impurity self-energy, form a closed set of equations determining $\mathcal{G}^{\sigma}_0$.~\cite{Eckstein2010ipt,tsuji_nonequilibrium_2013} The weak-coupling impurity self-energy $\Sigma_{\text{imp}}^{\sigma}$ enters Eq.~\eqref{eq:PM:projected_green_function_impurity} and the impurity Dyson equation~\eqref{eq:PM:Dyson_eq}, and the DMFT equations are iterated until $\mathcal{G}^{\sigma}_0$ has converged. To solve the Dyson equations \eqref{eq:PM:Dyson_eq}, \eqref{eq:PM:projected_green_function_impurity} and the Volterra integral equation \eqref{eq:PM:impurity_self_energy}, we use the NESSi package.~\cite{Nessi} For the paramagnetic solutions considered in this study, all quantities are independent of the spin projection, \textit{i.e.} we have that $\Sigma_{\text{imp}}^{\sigma}=\Sigma_{\text{imp}}^{-\sigma}$ and the same holds for $\mathcal{G}_{\text{imp}}^{\sigma}$ and $\Delta^{\sigma}$.
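Schematically, one iteration of this self-consistency loop proceeds as follows:
\begin{enumerate}
\item start from a guess for the Weiss Green's function $\mathcal{G}_0^{\sigma}$;
\item evaluate the impurity self-energy $\Sigma_{\text{imp}}^{\sigma}[\mathcal{G}_0]$ from the diagrams of Sec.~\ref{subsubsec:IPT};
\item solve the lattice Dyson equation~\eqref{eq:PM:projected_green_function_impurity} for each $\mathbf{k}$ and average over the Brillouin zone to obtain $\mathcal{G}^{\sigma}_{\text{loc}}$;
\item update $\mathcal{G}_0^{\sigma}$ from the Volterra equation~\eqref{eq:PM:impurity_self_energy} and repeat from step 2 until $\mathcal{G}_0^{\sigma}$ is converged.
\end{enumerate}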
\subsubsection{Impurity solver}
\label{subsubsec:IPT}
Since we work in the weak coupling regime ($U \lesssim W/2$), we use a weak-coupling impurity solver based on an expansion of the self-energy up to $3^{\text{rd}}$ order in the interaction $U$.~\cite{tsuji_nonequilibrium_2013} This approach is a generalization of the second-order iterated perturbation theory (IPT) for the Anderson impurity model.~\cite{kajueter_new_1996,arsenault_benchmark_2012} In the ``bare IPT'' formalism, the self-energy $\Sigma_{\text{imp}}[\mathcal{G}_0]$ is approximated as a functional of the Weiss Green's function defined in Eq.~\eqref{eq:PM:Weiss_Green_hyb}. Alternatively, one can define a ``bold IPT,'' where $\mathcal{G}_0^{\sigma}$ in the self-energy diagrams is replaced by the dressed impurity Green's function $\mathcal{G}^{\sigma}_{\text{imp}}$ obtained from Eq.~\eqref{eq:PM:Dyson_eq}. This replacement has a detrimental effect on the short-time dynamics, but it yields a conserving approximation, which means that the total energy after a perturbation is conserved under the time evolution.~\citep{Eckstein2010ipt} In this paper, we will use the ``bare IPT'' formalism within the nonequilibrium DMFT+TPSC scheme introduced in Sec.~\ref{ch:dmft_tpsc}, since it turns out that this scheme conserves the energy to a very good approximation in the considered parameter range.
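A natural way to quantify this is to monitor the total energy, i.e. the time-dependent expectation value of the Hamiltonian~\eqref{eq:Hubbard_model_intro} (dropping the constant chemical potential term at fixed particle number),
\begin{align}
E(t) = \frac{1}{N}\sum_{\mathbf{k},\sigma}\epsilon_{\mathbf{k}}(t)\langle \hat{n}_{\mathbf{k},\sigma}\rangle(t) + \frac{U}{N}\sum_i\langle \hat{n}_{i,\uparrow}\hat{n}_{i,\downarrow}\rangle(t),
\end{align}
with $N$ the number of lattice sites, which must remain constant after the ramp in the absence of external driving.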
By making use of Hedin's equations,~\cite{Hedin1965} one can generate systematically, order by order, the Feynman diagrams that characterize single- and two-particle correlation functions. This, however,
becomes impractical for
high expansion orders in the interaction $U$, since one would have to deal with a large set of diagrams. We thus only consider diagrams up to the third order. In the case of the Hubbard model, the Fock interaction term vanishes and this leads (in addition to the first-order Hartree diagram) to two self-energy diagrams of order $\mathcal{O}(U^2)$ and eight diagrams of order $\mathcal{O}(U^3)$. These leading diagrams are derived in detail in Appendix~\ref{appendice:ch:weak_coupling_self_expansion}. In this section, we present the formulas for the different contributions and their diagrammatic representations. Note that at half-filling, we choose $\mu=U/2$, so that the Hartree terms vanish in the paramagnetic state. However, the Hartree diagrams and those containing Hartree insertions do not vanish if the system is doped away from half-filling.\cite{tsuji_nonequilibrium_2013}
\paragraph{$2^{\text{nd}}$-order IPT}
\label{ch:2nd_order_IPT}
To second order, the Hartree contribution $\Sigma_H^{(2)}$ reads
\begin{align}
\label{eq:nonequilibrium_quantum_many_body_physics:IPT:Hartree_2nd_order}
&\Sigma_{H,\sigma}^{(2)}(z,z^{\prime}) = (-i)^2U(z)\mathcal{G}_0^{-\sigma}(z,\bar{z})U(\bar{z})\mathcal{G}_0^{\sigma}(\bar{z},\bar{z}^{+})\notag\\
&\times\mathcal{G}_{0}^{-\sigma}(\bar{z},z^+)\delta^{\mathcal{C}}(z,z^{\prime}).
\end{align}
The diagram representing Eq.~\eqref{eq:nonequilibrium_quantum_many_body_physics:IPT:Hartree_2nd_order} is shown in Fig.~\ref{fig:2nd_Hartree_diagram} and is a combination of two Hartree diagrams.
\begin{figure}[t!]
\centering
\includegraphics[scale=1.2]{self_energy_diagram_Hartree_2nd-crop.pdf}
\caption{$2^{\text{nd}}$-order Hartree self-energy diagram. The fermionic propagators represent the Weiss Green's functions $\mathcal{G}^0$.}
\label{fig:2nd_Hartree_diagram}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[scale=1.2]{self_energy_diagram_2nd-crop.pdf}
\caption{$2^{\text{nd}}$-order self-energy diagram.}
\label{fig:2nd_order_self}
\end{figure}
The term $\Sigma_{H,\sigma}^{(2)}$ is necessary to spontaneously break the SU(2) spin symmetry within DMFT, since it confers different effective chemical potentials to the different spin projections.~\cite{PhysRevB.103.104415_Simard_pi_ton}
The remaining second-order diagram comprises one particle-hole bubble diagram, as depicted in Fig.~\ref{fig:2nd_order_self}, and reads
\begin{align}
\label{eq:nonequilibrium_quantum_many_body_physics:IPT:second_order_bubble}
\Sigma^{(2)}_{\sigma}(z,z^{\prime}) = U(z)\mathcal{G}_0^{\sigma}(z,z^{\prime})U(z^{\prime})\mathcal{G}_0^{-\sigma}(z^{\prime},z^+)\mathcal{G}^{-\sigma}_0(z,{z^{\prime}}^+).
\end{align}
The self-energy~\eqref{eq:nonequilibrium_quantum_many_body_physics:IPT:second_order_bubble}, expressed as a functional of the Weiss Green's function, $\Sigma^{(2)}[\mathcal{G}_0]$, captures the Mott transition and crossover because Eq.~\eqref{eq:nonequilibrium_quantum_many_body_physics:IPT:second_order_bubble} correctly reproduces the large-$U$ limit of the Hubbard model~\eqref{eq:Hubbard_model_intro},\footnote{See Appendix~\ref{appendice:ch:weak_coupling_self_expansion} and the discussion below for more details.} which coincides with the atomic limit, at half-filling. On the other hand, the self-energy expressed in terms of the boldified Green's function, $\Sigma^{(2)}[\mathcal{G}_{\text{imp}}]$, does not allow one to describe the Mott transition. This is due to the fact that, even though the perturbation theory expressed in terms of the interacting Green's functions leads to the correct asymptotics at half-filling, these asymptotics do not set in at $\omega\sim W$, but rather at $\omega\gg U$, which is too high and contradicts the Pauli exclusion principle.~\cite{Vilk_1997} In order to carry out the perturbation theory using the dressed Green's functions, one would need to consider the frequency-dependent $G$-skeletonic two-particle vertex corrections to get physically sound results. In the weak-coupling regime $U\lesssim W/2$, both schemes however lead to similar results for short times.~\cite{PhysRevB.104.245127_non_eq_piton_simard}
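For orientation, recall the standard equilibrium benchmark: at half-filling with $\mu=U/2$, the atomic-limit self-energy beyond the Hartree shift is exactly
\begin{align}
\Sigma_{\text{at}}(i\omega_n) = \frac{U^2}{4\,i\omega_n},
\end{align}
and the bare second-order bubble~\eqref{eq:nonequilibrium_quantum_many_body_physics:IPT:second_order_bubble} reproduces this $U^2/(4i\omega_n)$ behavior in its high-frequency asymptotics, which is the reason why IPT interpolates correctly between the weak-coupling and atomic limits at half-filling.~\cite{kajueter_new_1996}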
\paragraph{$3^{\text{rd}}$-order solver}
\label{ch:3rd_order_IPT}
We next describe the $3^{\text{rd}}$-order self-energy diagrams. There are three diagrams contributing to the time-local component of the self-energy. The first one is obtained by attaching a Hartree diagram to the top propagator of the $2^{\text{nd}}$-order diagram \eqref{eq:nonequilibrium_quantum_many_body_physics:IPT:Hartree_2nd_order}. This produces the diagram shown in the top left corner of Fig.~\ref{fig:H_3_diagram}, which corresponds to the expression
\begin{align}
\label{eq:nonequilibrium_quantum_many_body_physics:IPT:Hartree_3a}
&\Sigma^{3a}_{H,\sigma}(z,z^{\prime}) = (-i)^3U(z)\mathcal{G}_0^{-\sigma}(z,\bar{z})U(\bar{z})\mathcal{G}_0^{\sigma}(\bar{z},\bar{z}^{\prime})U(\bar{z}^{\prime})\notag\\
&\times\mathcal{G}_{0}^{-\sigma}(\bar{z}^{\prime},\bar{z}^{\prime +})\mathcal{G}_0^{\sigma}(\bar{z}^{\prime},\bar{z}^{+})\mathcal{G}_0^{-\sigma}(\bar{z},z^+)\delta^{\mathcal{C}}(z,z^{\prime}).
\end{align}
The second time-local $3^{\text{rd}}$-order self-energy diagram stems from two Hartree self-energy corrections to the first-order Hartree term. This gives the diagram shown in the top right corner of Fig.~\ref{fig:H_3_diagram}, namely
\begin{figure}[t!]
\begin{minipage}{.5\columnwidth}
\centering
\includegraphics[scale=1.0]{self_energy_diagram_Hartree_3a-crop.pdf}
\end{minipage}\hfill
\begin{minipage}{.5\columnwidth}
\centering
\includegraphics[scale=1.0]{self_energy_diagram_Hartree_3b-crop.pdf}%
\end{minipage}%
\par
\begin{minipage}{\columnwidth}
\centering
\includegraphics[scale=1.0]{self_energy_diagram_Hartree_3c-crop.pdf}%
\end{minipage}%
\caption{$3^{\text{rd}}$-order diagrams $\Sigma_H^{3a}$ (top left corner), $\Sigma_H^{3b}$ (top right corner) and $\Sigma_H^{3c}$ (bottom).}
\label{fig:H_3_diagram}
\end{figure}
\begin{align}
\label{eq:nonequilibrium_quantum_many_body_physics:IPT:Hartree_3b}
&\Sigma^{3b}_{H,\sigma}(z,z^{\prime}) = (-i)^3U(z)\mathcal{G}_0^{-\sigma}(z,\bar{z})U(\bar{z})\mathcal{G}_0^{\sigma}(\bar{z},\bar{z}^{+})\mathcal{G}_{0}^{-\sigma}(\bar{z},\bar{z}^{\prime})\notag\\
&\times U(\bar{z}^{\prime})\mathcal{G}_0^{\sigma}(\bar{z}^{\prime},\bar{z}^{\prime +})\mathcal{G}_0^{-\sigma}(\bar{z}^{\prime},z^+)\delta^{\mathcal{C}}(z,z^{\prime}).
\end{align}
The third diagram comes from the insertion of the bare $2^{\text{nd}}$-order self-energy diagram \eqref{eq:nonequilibrium_quantum_many_body_physics:IPT:second_order_bubble} into the first-order Hartree propagator, giving the bottom diagram of Fig.~\ref{fig:H_3_diagram},
\begin{align}
\label{eq:nonequilibrium_quantum_many_body_physics:IPT:Hartree_3c}
&\Sigma^{3c}_{H,\sigma}(z,z^{\prime}) = -iU(z)\mathcal{G}_0^{-\sigma}(z,\bar{z})U(\bar{z})\mathcal{G}_0^{\sigma}(\bar{z},\bar{z}^{\prime})U(\bar{z}^{\prime})\notag\\
&\times\mathcal{G}_{0}^{-\sigma}(\bar{z}^{\prime},\bar{z}^{+})\mathcal{G}_0^{-\sigma}(\bar{z},\bar{z}^{\prime})\mathcal{G}_0^{\sigma}(\bar{z}^{\prime},z^+)\delta^{\mathcal{C}}(z,z^{\prime}).
\end{align}
The set of diagrams corresponding to Eqs.~\eqref{eq:nonequilibrium_quantum_many_body_physics:IPT:Hartree_3a}, \eqref{eq:nonequilibrium_quantum_many_body_physics:IPT:Hartree_3b} and \eqref{eq:nonequilibrium_quantum_many_body_physics:IPT:Hartree_3c} represents a $3^{\text{rd}}$-order shift of the chemical potential.
Another category of diagrams originates from the consideration of the second-order self-energy diagram~\eqref{eq:nonequilibrium_quantum_many_body_physics:IPT:second_order_bubble} in the vertex function $\Gamma\equiv-\frac{\delta\Sigma}{\delta\mathcal{G}}$ discussed in detail in Sec.~\ref{subsec:TPSC_variants}. This gives three distinct vertex terms, of which two lead to a nonzero contribution.\footnote{To obtain those diagrams, the lowest-order diagram (particle-hole bubble in $\mathcal{G}_0^{\sigma}$) in the Bethe-Salpeter equation~\eqref{eq:bethe_Salpeter_equation} is used in Eq.~\eqref{eq:self_longitudinal_derivation}.} The first of those diagrams reads
\begin{align}
\label{eq:nonequilibrium_quantum_many_body_physics:IPT:3a_diagram}
&\Sigma_{\sigma}^{3a}(z,z^{\prime}) = iU(z)U(z^{\prime})\mathcal{G}_0^{-\sigma}(z,z^{\prime})\mathcal{G}_0^{\sigma}(z,\bar{z})\mathcal{G}_0^{\sigma}(\bar{z},z^{\prime})U(\bar{z})\notag\\
&\times\mathcal{G}_0^{-\sigma}(z^{\prime},\bar{z})\mathcal{G}_0^{-\sigma}(\bar{z},z^{+}),
\end{align}
and the second diagram of this category reads
\begin{align}
\label{eq:nonequilibrium_quantum_many_body_physics:IPT:3b_diagram}
&\Sigma_{\sigma}^{3b}(z,z^{\prime}) = iU(z)U(z^{\prime})\mathcal{G}_0^{-\sigma}(z^{\prime},z^+)\mathcal{G}_0^{-\sigma}(z,\bar{z}^{+})\mathcal{G}_0^{\sigma}(z,\bar{z})\notag\\
&\times U(\bar{z})\mathcal{G}_0^{\sigma}(\bar{z},z^{\prime})\mathcal{G}_0^{-\sigma}(\bar{z},{z^{\prime}}^{+}).
\end{align}
The diagram representing Eq.~\eqref{eq:nonequilibrium_quantum_many_body_physics:IPT:3a_diagram} is shown on the left of Fig.~\ref{fig:3_diagram} and the one representing Eq.~\eqref{eq:nonequilibrium_quantum_many_body_physics:IPT:3b_diagram} is shown on the right of Fig.~\ref{fig:3_diagram}.
\begin{figure}[t!]
\centering
\begin{minipage}{0.5\columnwidth}
\centering
\includegraphics[scale=1.2]{self_energy_diagram_3a-crop.pdf}
\end{minipage}%
\begin{minipage}{0.5\columnwidth}
\centering
\includegraphics[scale=1.2]{self_energy_diagram_3b-crop.pdf}
\end{minipage}
\caption{$3^{\text{rd}}$-order diagrams $\Sigma^{3a}$ (left) and $\Sigma^{3b}$ (right).}
\label{fig:3_diagram}
\end{figure}
The next (and last) series of $3^{\text{rd}}$-order Feynman diagrams comes from the insertion of Hartree-type self-energy corrections into the Green's functions of the second-order self-energy \eqref{eq:nonequilibrium_quantum_many_body_physics:IPT:second_order_bubble}. The first such diagram (top left of Fig.~\ref{fig:3_diagram_2}) reads
\begin{align}
\label{eq:nonequilibrium_quantum_many_body_physics:IPT:3c_diagram}
&\Sigma_{\sigma}^{3c}(z,z^{\prime}) = -iU(z)U(z^{\prime})\mathcal{G}_0^{\sigma}(z,z^{\prime})\mathcal{G}_0^{-\sigma}(z^{\prime},z^{+})\mathcal{G}_0^{-\sigma}(z,\bar{z})\notag\\
&\times U(\bar{z})\mathcal{G}_0^{\sigma}(\bar{z},\bar{z}^{+})\mathcal{G}_0^{-\sigma}(\bar{z},{z^{\prime}}).
\end{align}
For the second diagram (top right of Fig.~\ref{fig:3_diagram_2}) we obtain
\begin{align}
\label{eq:nonequilibrium_quantum_many_body_physics:IPT:3d_diagram}
&\Sigma_{\sigma}^{3d}(z,z^{\prime}) = -iU(z)U(z^{\prime})\mathcal{G}_0^{\sigma}(z,z^{\prime})\mathcal{G}_0^{-\sigma}(z^{\prime},\bar{z})U(\bar{z})\mathcal{G}_0^{\sigma}(\bar{z},\bar{z}^{+})\notag\\
&\times\mathcal{G}_0^{-\sigma}(\bar{z},z^{+})\mathcal{G}_0^{-\sigma}(z,{z^{\prime}})
\end{align}
and the third diagram (bottom of Fig.~\ref{fig:3_diagram_2}) is
\begin{align}
\label{eq:nonequilibrium_quantum_many_body_physics:IPT:3e_diagram}
&\Sigma_{\sigma}^{3e}(z,z^{\prime}) = -iU(z)U(z^{\prime})\mathcal{G}_0^{\sigma}(z,\bar{z})U(\bar{z})\mathcal{G}_0^{-\sigma}(\bar{z},\bar{z}^{+})\mathcal{G}_0^{\sigma}(\bar{z},z^{\prime})\notag\\
&\times\mathcal{G}_0^{-\sigma}(z^{\prime},z^{+})\mathcal{G}_0^{-\sigma}(z,{z^{\prime}}^+).
\end{align}
\begin{figure}[h!]
\begin{minipage}{.5\columnwidth}
\includegraphics[scale=1.2]{self_energy_diagram_3c-crop.pdf}
\end{minipage}\hfill
\begin{minipage}{.5\columnwidth}
\includegraphics[scale=1.2]{self_energy_diagram_3d-crop.pdf}%
\end{minipage}%
\par
\begin{minipage}{\columnwidth}
\centering
\includegraphics[scale=1.2]{self_energy_diagram_3e-crop.pdf}%
\end{minipage}%
\caption{$3^{\text{rd}}$-order diagrams $\Sigma^{3c}$ (top left corner), $\Sigma^{3d}$ (top right corner) and $\Sigma^{3e}$ (bottom).}
\label{fig:3_diagram_2}
\end{figure}
As shown in Ref.~\onlinecite{tsuji_nonequilibrium_2013}, the addition of the third-order self-energy diagrams allows one to access higher values of $U/W$ (compared to second-order IPT) and to dope the system with electrons or holes away from half-filling. The inclusion of these extra self-energy diagrams does not, however, improve the IPT impurity solver in the strong-coupling regime ($U>W$), which is why we restrict the current study to the weak-to-intermediate correlation regime. Moreover, the fact that the TPSC self-energy introduced below and the second-order IPT self-energy \eqref{eq:nonequilibrium_quantum_many_body_physics:IPT:second_order_bubble} share the same asymptotics when $U \rightarrow 0$ makes it natural to combine these two diagrammatic approaches.~\cite{doi:10.1143/JPSJ.69.3912}
\subsection{Nonequilibrium TPSC and TPSC+GG}
\label{subsec:TPSC_variants}
\subsubsection{General formalism}
\label{ch:general_formalism}
In this section, we derive in detail the nonequilibrium Two-Particle Self-Consistent approach and a variant proposed in Ref.~\onlinecite{https://doi.org/10.48550/arxiv.2205.13813}, namely TPSC+GG. The formalism and the steps in the derivation follow Refs.~\onlinecite{https://doi.org/10.48550/arxiv.2205.13813} and \onlinecite{Vilk_1997}. We first briefly introduce the nonequilibrium generating functional formalism.~\cite{PhysRev.115.1342} The nonequilibrium Green's function can be used to express arbitrary order correlation functions between particles on the Kadanoff-Baym (KB) contour and these can be generated by the functional
\begin{align}
\label{eq:generating_corr_func_out_equilibrium}
&\mathcal{Z}[\phi] = \text{Tr}\biggl[\mathcal{T}_{\mathcal{C}} e^{-i\int_{\mathcal{C}} \mathrm{d}z \ \htcal{H}(z)}\underbrace{e^{-i \hat{c}^{\dagger}_{\bar{\alpha}}(\bar{z}_1)\phi_{\bar{\alpha}\bar{\beta}}(\bar{z}_1,\bar{z}_2)\hat{c}_{\bar{\beta}}(\bar{z}_2)}}_{\equiv S[\phi]} \biggr],
\end{align} where $\mathcal{C}$ stands for the KB contour and $\htcal{H}$ is the Hubbard Hamiltonian~\eqref{eq:Hubbard_model_intro} whose equations of motion we want to derive. $\mathcal{T}_{\mathcal{C}}$ is the time-ordering operator on $\mathcal{C}$ and $\phi$ is a source field defined on the contour. The Greek indices represent arbitrary degrees of freedom, such as lattice site or spin, and $S[\phi]$ is a functional of the source field $\phi$. Just like for the contour-time arguments, the bar over the indices means that they are summed over. The trace in Eq.~\eqref{eq:generating_corr_func_out_equilibrium} runs over the eigenstates of the Fock space. According to Eq.~\eqref{eq:generating_corr_func_out_equilibrium}, the contour Green's function reads
\begin{align}
\label{eq:contour_G_phi}
\mathcal{G}^{\phi}_{\epsilon\zeta}(z_1,z_2) = -\frac{\delta\ln{\mathcal{Z}[\phi]}}{\delta\phi_{\zeta\epsilon}(z_2,z_1)} = -i\langle\mathcal{T}_{\mathcal{C}} \hat{c}_{\epsilon}(z_1)\hat{c}^{\dagger}_{\zeta}(z_2)\rangle_{\phi}.
\end{align}
In Eq.~\eqref{eq:contour_G_phi}, the grand-canonical ensemble average is
\begin{align}
\label{eq:def_ensemble_average}
\langle \cdots \rangle_{\phi} = \frac{1}{\mathcal{Z}[\phi]}\sum_{i}\bra{\Psi_i}e^{-i\int_{\mathcal{C}}\mathrm{d}\bar{z}\htcal{H}(\bar{z})} S[\phi]\cdots\ket{\Psi_i},
\end{align} where $\{\ket{\Psi_i}\}$ is a complete set of states in Fock space. Using Eq.~\eqref{eq:contour_G_phi}, we perform a second functional derivative
\begin{align}
\label{eq:second_der_phi_partition_function_1}
&\frac{\delta \mathcal{G}^{\phi}_{\epsilon\zeta}(z_1,z_2)}{\delta\phi_{\gamma\delta}(z_4,z_3)} = \mathcal{G}^{\phi}_{\delta\gamma}(z_3,z_4)\mathcal{G}^{\phi}_{\epsilon\zeta}(z_1,z_2)\notag\\
&\hspace{1.0cm} - \langle\hat{c}_{\gamma}^{\dagger}(z_4)\hat{c}_{\delta}(z_3)\hat{c}_{\epsilon}(z_1)\hat{c}^{\dagger}_{\zeta}(z_2) \rangle_{\phi},
\end{align} which, defining the two-particle correlation function $\chi\equiv -i\frac{\delta\mathcal{G}}{\delta\phi}$ (\textit{cf.} Eq.~(12.18) in Ref.~\onlinecite{stefanucci_van_leeuwen_2013}), leads to
\begin{align}
\label{eq:second_der_phi_partition_function}
&\chi^{\phi}_{\epsilon\zeta,\gamma\delta}(z_1,z_2;z_4,z_3) = i\langle\mathcal{T}_{\mathcal{C}}\hat{c}^{\dagger}_{\gamma}(z_4)\hat{c}_{\delta}(z_3)\hat{c}_{\epsilon}(z_1)\hat{c}^{\dagger}_{\zeta}(z_2) \rangle_{\phi}\notag\\
&\hspace{1.0cm} - i\mathcal{G}^{\phi}_{\delta\gamma}(z_3,z_4)\mathcal{G}^{\phi}_{\epsilon\zeta}(z_1,z_2).
\end{align}\noindent
Note that Eq.~\eqref{eq:second_der_phi_partition_function_1} corresponds to Eq.~(15.11) in Ref.~\onlinecite{stefanucci_van_leeuwen_2013}. Another important result originates from the ``closure relation''
\begin{align}
\label{eq:closure_relation}
\frac{\delta \left(\mathcal{G}^{\phi}_{\epsilon\bar{\alpha}}(z_1,\bar{z}_3)\mathcal{G}^{\phi}_{\bar{\alpha}\eta}(\bar{z}_3,z_2)^{-1}\right) }{\delta \phi_{\gamma\delta}(z_4,z_3)} = 0.
\end{align}
Equation~\eqref{eq:closure_relation} gives
\begin{align}
\label{eq:closure_relation_expanded}
\frac{\delta\mathcal{G}^{\phi}_{\epsilon\zeta}(z_1,z_2)}{\delta\phi_{\gamma\delta}(z_4,z_3)} = -\mathcal{G}^{\phi}_{\epsilon\bar{\alpha}}(z_1,\bar{z}_3)\frac{\delta\mathcal{G}^{\phi}_{\bar{\alpha}\bar{\eta}}(\bar{z}_3,\bar{z}_5)^{-1}}{\delta\phi_{\gamma\delta}(z_4,z_3)}\mathcal{G}^{\phi}_{\bar{\eta}\zeta}(\bar{z}_5,z_2),
\end{align}
and the modified Dyson equation with the source field reads
\begin{align}
\label{eq:modified_Dyson_equation}
&\mathcal{G}^{\phi}_{\alpha\eta}(z_3,z_5)^{-1}\notag\\
&= {\mathcal{G}^0_{\alpha\eta}(z_3,z_5)}^{-1} - \phi_{\alpha\eta}(z_3,z_5) - \Sigma^{\phi}_{\alpha\eta}(z_3,z_5).
\end{align}
Equation~\eqref{eq:modified_Dyson_equation} appears naturally when deriving the equations of motion of Eq.~\eqref{eq:contour_G_phi}, as will be shown later on.
In this section, $\mathcal{G}_0^{\sigma}$ denotes the noninteracting lattice Green's function.
Note that all the two-time objects introduced hitherto can be expressed in a $3\times 3$ matrix form, as described in Ref.~\onlinecite{RevModPhys.86.779_non_eq_review}. Inserting Eq.~\eqref{eq:modified_Dyson_equation} into Eq.~\eqref{eq:closure_relation_expanded}, we get
\begin{align}
\label{eq:closure_relation_expanded_2}
&-i\frac{\delta\mathcal{G}^{\phi}_{\epsilon\zeta}(z_1,z_2)}{\delta\phi_{\gamma\delta}(z_4,z_3)} = -i\mathcal{G}^{\phi}_{\epsilon\gamma}(z_1,z_4)\mathcal{G}^{\phi}_{\delta\zeta}(z_3,z_2)\notag\\
& -i \mathcal{G}^{\phi}_{\epsilon\bar{\alpha}}(z_1,\bar{z}_3)\frac{\delta\Sigma^{\phi}_{\bar{\alpha}\bar{\eta}}(\bar{z}_3,\bar{z}_5)}{\delta\mathcal{G}^{\phi}_{\bar{\theta}\bar{\omega}}(\bar{z}_6,\bar{z}_7)}\frac{\delta\mathcal{G}^{\phi}_{\bar{\theta}\bar{\omega}}(\bar{z}_6,\bar{z}_7)}{\delta\phi_{\gamma\delta}(z_4,z_3)}\mathcal{G}^{\phi}_{\bar{\eta}\zeta}(\bar{z}_5,z_2),
\end{align}
where we used the chain rule for the self-energy $\Sigma[\mathcal{G}]$. Defining the two-particle irreducible G-skeletonic vertex function $\Gamma \equiv -\frac{\delta\Sigma}{\delta\mathcal{G}}$ (\textit{cf.} Eq.~(12.34) in Ref.~\onlinecite{stefanucci_van_leeuwen_2013}), we get the Bethe-Salpeter equation (\textit{cf.} Eq.~(12.17) in Ref.~\onlinecite{stefanucci_van_leeuwen_2013})
\begin{align}
\label{eq:bethe_Salpeter_equation}
&\chi^{\phi}_{\epsilon\zeta,\gamma\delta}(z_1,z_2; z_4,z_3) = -i\mathcal{G}^{\phi}_{\epsilon\gamma}(z_1,z_4)\mathcal{G}^{\phi}_{\delta\zeta}(z_3,z_2) \notag\\
&\quad -\mathcal{G}^{\phi}_{\epsilon\bar{\alpha}}(z_1,\bar{z}_3)\Gamma^{\phi}_{\bar{\alpha}\bar{\eta},\bar{\theta}\bar{\omega}}(\bar{z}_3,\bar{z}_5;\bar{z}_6,\bar{z}_7)\chi^{\phi}_{\bar{\theta}\bar{\omega},\gamma\delta}(\bar{z}_6,\bar{z}_7;z_4,z_3)\notag\\
&\quad \times\mathcal{G}^{\phi}_{\bar{\eta}\zeta}(\bar{z}_5,z_2).
\end{align}
We then finally note that Eqs.~\eqref{eq:second_der_phi_partition_function} and \eqref{eq:bethe_Salpeter_equation} can be combined to give
\begin{align}
\label{eq:combination_bethe_Salpeter_two_particle_corr}
&i\langle\mathcal{T}_{\mathcal{C}}\hat{c}^{\dagger}_{\gamma}(z_4)\hat{c}_{\delta}(z_3)\hat{c}_{\epsilon}(z_1)\hat{c}^{\dagger}_{\zeta}(z_2) \rangle_{\phi}\notag\\
& = i\mathcal{G}^{\phi}_{\delta\gamma}(z_3,z_4)\mathcal{G}^{\phi}_{\epsilon\zeta}(z_1,z_2)-i\mathcal{G}^{\phi}_{\epsilon\gamma}(z_1,z_4)\mathcal{G}^{\phi}_{\delta\zeta}(z_3,z_2) \notag\\
&\quad -\mathcal{G}^{\phi}_{\epsilon\bar{\alpha}}(z_1,\bar{z}_3)\Gamma^{\phi}_{\bar{\alpha}\bar{\eta},\bar{\theta}\bar{\omega}}(\bar{z}_3,\bar{z}_5;\bar{z}_6,\bar{z}_7)\chi^{\phi}_{\bar{\theta}\bar{\omega},\gamma\delta}(\bar{z}_6,\bar{z}_7;z_4,z_3)\notag\\
&\quad \times\mathcal{G}^{\phi}_{\bar{\eta}\zeta}(\bar{z}_5,z_2).
\end{align}
Equation~\eqref{eq:combination_bethe_Salpeter_two_particle_corr} allows us to determine the equations of motion of the Hubbard model~\eqref{eq:Hubbard_model_intro} and to calculate the TPSC self-energy.
\subsubsection{Equations of motion}
\label{ch:eq_of_motion}
To properly account for the different degrees of freedom defining the Hubbard model, the Greek indices in Eq.~\eqref{eq:contour_G_phi} will be replaced by tuples consisting of a lattice-site index, represented by Latin letters, and a spin index $\sigma$.
To obtain the equations of motion, we differentiate the contour one-body Green's function \eqref{eq:contour_G_phi}:
\begin{align}
\label{eq:differentiation_greens_function_for_equation_motion}
&i\partial_{z_1}\mathcal{G}^{\phi}_{lm,\sigma}(z_1,z_2) = \partial_{z_1}\langle\mathcal{T}_{\mathcal{C}} \hat{c}_{l,\sigma}(z_1)\hat{c}_{m,\sigma}^{\dagger}(z_2)\rangle_{\phi}\notag\\
&= \delta^{\mathcal{C}}(z_1,z_2)\langle\{\hat{c}_{l,\sigma},\hat{c}_{m,\sigma}^{\dagger}\} \rangle_{\phi} + \left<\mathcal{T}_{\mathcal{C}}\partial_{z_1}S[\phi]\hat{c}_{l,\sigma}(z_1)\hat{c}_{m,\sigma}^{\dagger}(z_2)\right>_{\phi}\notag\\
&\hspace{4mm}+i\big\langle \mathcal{T}_{\mathcal{C}} [\htcal{H},\hat{c}_{l,\sigma}](z_1)\hat{c}_{m,\sigma}^{\dagger}(z_2)\big\rangle_{\phi},
\end{align} where the chemical potential term is absorbed into the Hamiltonian $\htcal{H}\to \htcal{H}-\mu\hat{N}$ since we work in the grand-canonical ensemble.~\footnote{$\hat{N}$ is the total number operator and $\mu$ is the chemical potential.} The first term of the expansion \eqref{eq:differentiation_greens_function_for_equation_motion} yields the identity matrix. The second term has to be dealt with carefully because the differentiation involves the source-field term $S[\phi]$:
\begin{align}
\label{eq:eq_motion_source_field_differentiation}
&\left<\mathcal{T}_{\mathcal{C}}\partial_{z_1}S[\phi]\hat{c}_{l,\sigma}(z_1)\hat{c}^{\dagger}_{m,\sigma}(z_2)\right>_{\phi}\notag\\
&=i\phi_{\bar{a}\bar{b},\bar{\sigma}^{\prime}\bar{\sigma}^{\prime\prime}}(z_1,\bar{z}_4)\notag\\
&\hspace{0.4cm}\times\left<\mathcal{T}_{\mathcal{C}}\left[\hat{c}^{\dagger}_{\bar{a},\bar{\sigma}^{\prime}}(z_1)\hat{c}_{\bar{b},\bar{\sigma}^{\prime\prime}}(\bar{z}_4),\hat{c}_{l,\sigma}(z_1)\right]\hat{c}_{m,\sigma}^{\dagger}(z_2)\right>_{\phi}\notag\\
&=\phi_{l\bar{b},\sigma\bar{\sigma}^{\prime\prime}}(z_1,\bar{z}_4)\mathcal{G}^{\phi}_{\bar{b}m,\bar{\sigma}^{\prime\prime}\sigma}(\bar{z}_4,z_2).
\end{align} Here, we used the fact that, in the exponential representing the time-evolution operators, $\partial_x\int^{x^{\prime}}_{x}\mathrm{d}x^{\prime\prime}f^{\prime}(x^{\prime\prime}) = -f^{\prime}(x)$, as well as the relation $[AB,C]=A\{B,C\}-\{A,C\}B=A[B,C]+[A,C]B$. The annihilation operator in the exponential anticommutes with $\hat{c}_{l,\sigma}(z_1)$, which is taken care of by the contour-ordering operator. There is no global sign associated with moving $S[\phi]$ around within the thermal average, since it consists of an even number of annihilation and creation operators.
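Both operator identities used above hold for arbitrary operators and can be checked numerically with random matrices; the following snippet is a pure consistency test and not part of the derivation.
\begin{verbatim}
import numpy as np

# Check [AB, C] = A{B,C} - {A,C}B = A[B,C] + [A,C]B for random matrices.
rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((4, 4)) for _ in range(3))

comm = lambda X, Y: X @ Y - Y @ X   # commutator
anti = lambda X, Y: X @ Y + Y @ X   # anticommutator

lhs = comm(A @ B, C)
assert np.allclose(lhs, A @ anti(B, C) - anti(A, C) @ B)
assert np.allclose(lhs, A @ comm(B, C) + comm(A, C) @ B)
\end{verbatim}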
Finally, after evaluating the commutator in Eq.~\eqref{eq:differentiation_greens_function_for_equation_motion} (last term) using the Hamiltonian~\eqref{eq:Hubbard_model_intro}, the equations of motion become
\begin{align}
\label{eq:eq_motion_including_cummutator_developed}
&i\partial_{z_1}\mathcal{G}^{\phi}_{lm,\sigma}(z_1,z_2) + t_{l\bar{b}}^{\text{hop}}(z_1)\mathcal{G}^{\phi}_{\bar{b}m,\sigma}(z_1,z_2)\notag\\
&-\phi_{l\bar{b},\sigma\bar{\sigma}^{\prime\prime}}(z_1,\bar{z}_4)\mathcal{G}^{\phi}_{\bar{b}m,\bar{\sigma}^{\prime\prime}\sigma}(\bar{z}_4,z_2) = \delta^{\mathcal{C}}(z_1,z_2)\delta_{lm} \notag\\
&- iU(z_1)\left<\mathcal{T}_{\mathcal{C}}\hat{n}_{l,-\sigma}(z_1)\hat{c}_{l,\sigma}(z_1)\hat{c}^{\dagger}_{m,\sigma}(z_2)\right>_{\phi}.
\end{align}
Note that the adjoint equation can be obtained in a similar fashion by acting from the right with the conjugate operator $-i\overleftarrow{\partial}_{z_2}$ on the single-particle Green's function. In Eq.~\eqref{eq:eq_motion_including_cummutator_developed} one can recognize the modified Dyson equation \eqref{eq:modified_Dyson_equation}. Indeed, we have
\begin{align*}
&\left[{\mathcal{G}^0_{l\bar{b},\sigma\sigma^{\prime\prime}}(z_1,\bar{z}_2)}^{-1}-\phi_{l\bar{b},\sigma\bar{\sigma}^{\prime\prime}}(z_1,\bar{z}_2)\right]\mathcal{G}^{\phi}_{\bar{b}m,\bar{\sigma}^{\prime\prime}\sigma}(\bar{z}_2,z_2)\notag\\
&=\delta^{\mathcal{C}}(z_1,z_2)\delta_{lm}+\Sigma^{\phi}_{l\bar{b},\sigma\bar{\sigma}^{\prime\prime}}(z_1,\bar{z}_2)\mathcal{G}^{\phi}_{\bar{b}m,\bar{\sigma}^{\prime\prime}\sigma}(\bar{z}_2,z_2),
\end{align*}
such that the four-point correlation function is related to the self-energy and Green's function via
\begin{align}
\label{eq:four_point_self_G}
&\Sigma^{\phi}_{l\bar{b},\sigma\bar{\sigma}^{\prime\prime}}(z_1,\bar{z}_2)\mathcal{G}^{\phi}_{\bar{b}m,\bar{\sigma}^{\prime\prime}\sigma}(\bar{z}_2,z_2)\notag\\
&\hspace{0.7cm}=-iU(z_1)\left<\mathcal{T}_{\mathcal{C}}\hat{n}_{l,-\sigma}(z_1)\hat{c}_{l,\sigma}(z_1)\hat{c}^{\dagger}_{m,\sigma}(z_2)\right>_{\phi}.
\end{align}
Equation~\eqref{eq:four_point_self_G} provides an expression for the self-energy of the model Hamiltonian we are interested in. Once the desired correlation functions have been generated, the physical results are obtained by setting the source field $\phi$ to zero. We will show below that the very same four-point correlation function can be calculated in both the longitudinal and transversal channels, \textit{i.e.} by deriving Eqs.~\eqref{eq:second_der_phi_partition_function} and \eqref{eq:bethe_Salpeter_equation} with a source field that does not induce a spin flip ($\phi_{\sigma,\sigma}$) and with one that does ($\phi_{\sigma,-\sigma}$), respectively. The two expressions of the self-energy will then be averaged to restore the crossing symmetry, giving the self-energy approximation of the theory, $\Sigma^{\text{TPSC},(1)}$.
\subsubsection{Longitudinal expression of the self-energy}
\label{ch:longitudinal_self}
To get the second-level longitudinal self-energy, we need to use Eq.~\eqref{eq:combination_bethe_Salpeter_two_particle_corr} and perform the following substitutions for the indices: $\gamma\to (l,-\sigma)$, $\delta\to (l,-\sigma)$, $\epsilon\to (l,\sigma)$ and $\zeta\to (m,\sigma)$. At the same time, for the contour-time variables, we have to make the following substitutions: $z_4\to z_1^{++}$, $z_3\to z_1^+$, $z_2\to z_2$ and $z_1\to z_1$. Then, inserting the resulting four-point correlation function into Eq.~\eqref{eq:four_point_self_G}, we end up with the relation
\begin{align}
\label{eq:self_longitudinal_derivation}
&\Sigma^{\phi,\text{long.}}_{l\bar{b},\sigma\bar{\sigma}^{\prime\prime}}(z_1,\bar{z}_2)\mathcal{G}^{\phi}_{\bar{b}m,\bar{\sigma}^{\prime\prime}\sigma}(\bar{z}_2,z_2)\notag\\
&=-iU(z_1)\mathcal{G}^{\phi}_{ll,-\sigma-\sigma}(z_1^+,z_1^{++})\mathcal{G}^{\phi}_{lm,\sigma\sigma}(z_1,z_2)\notag\\
&+ iU(z_1)\mathcal{G}^{\phi}_{ll,\sigma-\sigma}(z_1,z_1^{++})\mathcal{G}^{\phi}_{lm,-\sigma\sigma}(z_1^+,z_2)\notag\\
&+U(z_1)\mathcal{G}^{\phi}_{(l,\sigma),\bar{\alpha}}(z_1,\bar{z}_3)\Gamma^{\phi}_{\bar{\alpha}\bar{\eta}\bar{\theta}\bar{\omega}}(\bar{z}_3,\bar{z}_5;\bar{z}_6,\bar{z}_7)\notag\\
&\quad\times\chi^{\phi}_{\bar{\theta}\bar{\omega}(l,-\sigma)(l,-\sigma)}(\bar{z}_6,\bar{z}_7;z_1^{++},z_1^+)\mathcal{G}^{\phi}_{\bar{\eta}(m,\sigma)}(\bar{z}_5,z_2),
\end{align}
where $z^{++}$ is placed infinitesimally later than $z^{+}$ along $\mathcal{C}$. The second term of Eq.~\eqref{eq:self_longitudinal_derivation} vanishes for the Hubbard model when the source field is spin diagonal (longitudinal channel), since then $\mathcal{G}_{\sigma-\sigma}=0$. The longitudinal component of the self-energy can then be straightforwardly isolated by multiplying by $\mathcal{G}_{\sigma}^{-1}$ from the right:
\begin{align}
\label{eq:self_longitudinal_isolated_from_derivation}
&\Sigma^{\phi,\text{long.}}_{lm,\sigma}(z_1,z_2) = -iU(z_1)\mathcal{G}^{\phi}_{l,-\sigma}(z_1^{+},z_1^{++})\delta^{\mathcal{C}}(z_1,z_2)\delta_{l,m}+\notag\\
&U(z_1)\mathcal{G}^{\phi}_{(l,\sigma)\bar{\alpha}}(z_1,\bar{z}_3)\Gamma^{\phi}_{\bar{\alpha}(m,\sigma)\bar{\theta}\bar{\omega}}(\bar{z}_3,z_2;\bar{z}_6,\bar{z}_7)\notag\\
&\hspace{0.2cm}\times\chi^{\phi}_{\bar{\theta}\bar{\omega}(l,-\sigma)}(\bar{z}_6,\bar{z}_7;z_1).
\end{align} In Eq.~\eqref{eq:self_longitudinal_isolated_from_derivation}, for the sake of conciseness, we have used an unambiguous notation compressing tuples of repeated indices denoting the same degree of freedom, \textit{i.e.} $\chi_{jsll,\sigma\sigma-\sigma-\sigma}(z_6,z_7;z_1^{++},z_1^+)\to\chi_{jsl,\sigma-\sigma}(z_6,z_7;z_1)$. Furthermore, by expanding the implicitly summed quantities in Eq.~\eqref{eq:self_longitudinal_isolated_from_derivation}, we obtain
\begin{align}
\label{eq:self_longitudinal_isolated_from_derivation_spin_choice}
&\Sigma^{\text{long.}}_{lm,\sigma}(z_1,z_2) = -iU(z_1)\mathcal{G}_{l,-\sigma}(z_1,z_1^+)\delta^{\mathcal{C}}(z_1,z_2)\delta_{l,m}+\notag\\
&U(z_1)\mathcal{G}_{l\bar{i},\sigma}(z_1,\bar{z}_3)\biggl[\Gamma_{\bar{i}m\bar{j}\bar{s},\sigma\sigma}(\bar{z}_3,z_2;\bar{z}_6,\bar{z}_7)\chi_{\bar{j}\bar{s}l,\sigma-\sigma}(\bar{z}_6,\bar{z}_7;z_1)\notag\\
&+\Gamma_{\bar{i}m\bar{j}\bar{s},\sigma-\sigma}(\bar{z}_3,z_2;\bar{z}_6,\bar{z}_7)\chi_{\bar{j}\bar{s}l,-\sigma-\sigma}(\bar{z}_6,\bar{z}_7;z_1)\biggr].
\end{align} Let us now define the charge ($\chi^{\text{ch}}$) and spin ($\chi^{\text{sp}}$) susceptibilities, along with the two corresponding G-skeletonic irreducible vertices, namely the charge ($\Gamma^{\text{ch}}$) and spin ($\Gamma^{\text{sp}}$) vertices. Using spin-rotational symmetry, the spin and charge susceptibilities are defined as
\begin{align}
\label{eq:charge_spin_susceptibility_TPSC_definition}
&\chi_{ij}^{\text{ch}/\text{sp}}(z_1,z_1^+;z_2^+,z_2)\notag\\
&=-2i\left(\frac{\delta\mathcal{G}^{\phi}_{i,\uparrow}(z_1,z_1^+)}{\delta\phi_{j,\uparrow}(z_2^+,z_2)} \pm \frac{\delta\mathcal{G}^{\phi}_{i,\uparrow}(z_1,z_1^+)}{\delta\phi_{j,\downarrow}(z_2^+,z_2)}\right)\bigg\rvert_{\phi\to 0}.
\end{align}
The factor of $2$ comes from tracing over the spin degrees of freedom and the upper (lower) sign corresponds to the charge (spin) susceptibility. We then expand the functional derivatives using Eq.~\eqref{eq:closure_relation_expanded_2}:
\begin{align}
\label{eq:charge_spin_susceptibility_TPSC_definition_developed}
&\chi_{ij}^{\text{ch}/\text{sp}}(z_1;z_2)=-2i\mathcal{G}_{ij,\uparrow}(z_1,z_2^+)\mathcal{G}_{ji,\uparrow}(z_2,z_1^+) - 2\mathcal{G}_{i\bar{l},\uparrow}(z_1,\bar{z}_3)\notag\\
&\times\bigl[\Gamma_{\bar{l}\bar{m}\bar{n}\bar{s},\uparrow\bar{\sigma}^{\prime}\bar{\sigma}^{\prime\prime}}(\bar{z}_3,\bar{z}_5;\bar{z}_6,\bar{z}_7)\chi_{\bar{n}\bar{s}j,\bar{\sigma}^{\prime}\bar{\sigma}^{\prime\prime}\uparrow}(\bar{z}_6,\bar{z}_7;z_2)\notag\\
&\pm\Gamma_{\bar{l}\bar{m}\bar{n}\bar{s},\uparrow\bar{\sigma}^{\prime}\bar{\sigma}^{\prime\prime}}(\bar{z}_3,\bar{z}_5;\bar{z}_6,\bar{z}_7)\chi_{\bar{n}\bar{s}j,\bar{\sigma}^{\prime}\bar{\sigma}^{\prime\prime}\downarrow}(\bar{z}_6,\bar{z}_7;z_2)\bigr]\notag\\
&\hspace{0.0cm}\times\mathcal{G}_{\bar{m}i,\uparrow}(\bar{z}_5,z_1^+).
\end{align}
The summed-over spin indices $\sigma^{\prime}$ and $\sigma^{\prime\prime}$ must take the same value in order to yield a nonzero result,\footnote{The Hubbard model conserves spin as well, not just particle number.} \textit{i.e.} $\chi_{\sigma^{\prime}\sigma^{\prime\prime}\sigma}=0 \ \forall \ \sigma^{\prime}\neq \sigma^{\prime\prime}$; this allows us to conveniently collapse those two spin labels into one. In Eq.~\eqref{eq:charge_spin_susceptibility_TPSC_definition}, only the functional derivative with all spin projections equal gives a nonzero Hartree term, hence we get only one bubble term in Eq.~\eqref{eq:charge_spin_susceptibility_TPSC_definition_developed}. Using $\Gamma^{\text{ch}/\text{sp}}\equiv \Gamma_{\uparrow\downarrow}\pm\Gamma_{\uparrow\uparrow}$ and $\chi^0\equiv -2i\mathcal{G}_{\sigma}\mathcal{G}_{\sigma}$, the spin and charge susceptibilities in the paramagnetic state read
\begin{align}
\label{eq:charge_spin_susceptibility_TPSC_definition_developed_2}
&\chi_{ij}^{\text{ch}/\text{sp}}(z_1;z_2)\notag\\
&=-2i\mathcal{G}_{ij,\uparrow}(z_1,z_2^+)\mathcal{G}_{ji,\uparrow}(z_2,z_1^+) \mp 2\mathcal{G}_{i\bar{l},\uparrow}(z_1,\bar{z}_3)\notag\\
&\times\bigl[\pm\Gamma_{\bar{l}\bar{m}\bar{n}\bar{s},\uparrow\uparrow}(\bar{z}_3,\bar{z}_5;\bar{z}_6,\bar{z}_7)+\Gamma_{\bar{l}\bar{m}\bar{n}\bar{s},\uparrow\downarrow}(\bar{z}_3,\bar{z}_5;\bar{z}_6,\bar{z}_7)\bigr]\notag\\
&\times\bigl[\chi_{\bar{n}\bar{s}j,\uparrow\uparrow}(\bar{z}_6,\bar{z}_7;z_2)\pm\chi_{\bar{n}\bar{s}j,\downarrow\uparrow}(\bar{z}_6,\bar{z}_7;z_2)\bigr]\mathcal{G}_{\bar{m}i,\uparrow}(\bar{z}_5,z_1^+)\notag\\
&=\chi^0_{ij}(z_1,z_2) \mp\frac{i}{2}\chi^0_{i\bar{l}\bar{m}}(z_1;\bar{z}_3,\bar{z}_5)\Gamma_{\bar{l}\bar{m}\bar{n}\bar{s}}^{\text{ch}/\text{sp}}(\bar{z}_3,\bar{z}_5;\bar{z}_6,\bar{z}_7)\notag\\
&\hspace{0.4cm}\times\chi_{\bar{n}\bar{s}j}^{\text{ch}/\text{sp}}(\bar{z}_6,\bar{z}_7;z_2).
\end{align}
In Eq.~\eqref{eq:charge_spin_susceptibility_TPSC_definition_developed_2}, the spin rotational invariance allowed us to factorize $\chi$ and $\Gamma$ into their spin and charge components. Now, if we write out $\Gamma^{\text{ch}}\chi^{\text{ch}}+\Gamma^{\text{sp}}\chi^{\text{sp}}$, we symbolically get
\begin{align}
\label{eq:gamma_chi_combination_equivalence}
&\Gamma^{\text{ch}}\chi^{\text{ch}} + \Gamma^{\text{sp}}\chi^{\text{sp}} = 2\left[\Gamma_{\uparrow\uparrow}+\Gamma_{\uparrow\downarrow}\right]\left[\chi_{\uparrow\uparrow}+\chi_{\uparrow\downarrow}\right]\notag\\
&+2\left[\Gamma_{\uparrow\downarrow}-\Gamma_{\uparrow\uparrow}\right]\left[\chi_{\uparrow\uparrow}-\chi_{\uparrow\downarrow}\right]\notag\\
&=4\Gamma_{\uparrow\uparrow}\chi_{\uparrow\downarrow} + 4\Gamma_{\uparrow\downarrow}\chi_{\uparrow\uparrow}.
\end{align}
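Since the decomposition \eqref{eq:gamma_chi_combination_equivalence} is used repeatedly below, it is worth noting that it follows from pure algebra, given $\Gamma^{\text{ch/sp}}=\Gamma_{\uparrow\downarrow}\pm\Gamma_{\uparrow\uparrow}$ and $\chi^{\text{ch/sp}}=2(\chi_{\uparrow\uparrow}\pm\chi_{\uparrow\downarrow})$; a short symbolic check (a consistency test only) reads:
\begin{verbatim}
import sympy as sp

# Symbolic check of Eq. (gamma_chi_combination_equivalence).
Guu, Gud, Xuu, Xud = sp.symbols('Gamma_uu Gamma_ud chi_uu chi_ud')

Gch, Gsp = Gud + Guu, Gud - Guu          # Gamma^{ch/sp} = Gamma_ud +/- Gamma_uu
Xch, Xsp = 2*(Xuu + Xud), 2*(Xuu - Xud)  # chi^{ch/sp}  = 2(chi_uu +/- chi_ud)

assert sp.expand(Gch*Xch + Gsp*Xsp - 4*Guu*Xud - 4*Gud*Xuu) == 0
\end{verbatim}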
The result \eqref{eq:gamma_chi_combination_equivalence} can be substituted into the longitudinal expression for the self-energy \eqref{eq:self_longitudinal_isolated_from_derivation_spin_choice}. Doing so, the physical longitudinal self-energy can be expressed as ($\phi\to 0$)
\begin{align}
\label{eq:longitudinal_self_energy_reexpressed}
&\Sigma^{\text{long.}}_{lm,\uparrow}(z_1,z_2) = U(z_1)n_{l,\downarrow}(z_1)\delta^{\mathcal{C}}(z_1,z_2)\delta_{l,m}\notag\\
&+\frac{U(z_1)}{4}\mathcal{G}_{l\bar{i},\uparrow}(z_1,\bar{z}_3)\biggl[\Gamma^{\text{ch}}_{\bar{i}m\bar{j}\bar{s}}(\bar{z}_3,z_2;\bar{z}_6,\bar{z}_7)\chi^{\text{ch}}_{\bar{j}\bar{s}l}(\bar{z}_6,\bar{z}_7;z_1)\notag\\
&+\Gamma^{\text{sp}}_{\bar{i}m\bar{j}\bar{s}}(\bar{z}_3,z_2;\bar{z}_6,\bar{z}_7)\chi^{\text{sp}}_{\bar{j}\bar{s}l}(\bar{z}_6,\bar{z}_7;z_1)\biggr].
\end{align}
If we replace the irreducible vertices in Eq.~\eqref{eq:longitudinal_self_energy_reexpressed} with local ones, namely
\begin{align}
\label{eq:irr_vertex_expression}
&\Gamma^{{\text{ch}/\text{sp}}}_{imjs}(z_3,z_2;z_6,z_7)\to \Gamma_m^{{\text{ch}/\text{sp}}}(z_2)\delta^{\mathcal{C}}(z_2,z_6)\notag\\
&\quad\times\delta^{\mathcal{C}}(z_2^+,z_7)\delta^{\mathcal{C}}(z_2,z_3)\delta_{m,j}\delta_{m,s}\delta_{m,i},
\end{align}
we get \cite{Bergeron_2011_optical_cond,Vilk_1997}
\begin{align}
\label{eq:self_energy_approx_TPSC_locality_logitudinal}
&\Sigma^{\text{long.}}_{lm,\sigma}(z_1,z_2)\notag\\
&= U(z_1)n_{l,-\sigma}(z_1)\delta^{\mathcal{C}}(z_1,z_2)\delta_{l,m}+\frac{U(z_1)}{4}\mathcal{G}_{lm,\sigma}(z_1,z_2)\notag\\
&\quad\times\left[\Gamma_m^{\text{ch}}(z_2)\chi^{\text{ch}}_{ml}(z_2,z_1) + \Gamma_m^{\text{sp}}(z_2)\chi^{\text{sp}}_{ml}(z_2,z_1)\right].
\end{align}
\subsubsection{Transversal expression of the self-energy}
\label{ch:transversal_self}
The four-point correlation function appearing in Eq.~\eqref{eq:four_point_self_G} can also be obtained by employing a transversal source field.~\cite{senechal_bourbonnais_tremblay_2004} To see this, we return to Eq.~\eqref{eq:combination_bethe_Salpeter_two_particle_corr}, which expresses the four-point correlation function in terms of the self-energy and Green's function. We first notice that, working out Eq.~\eqref{eq:second_der_phi_partition_function} with a spin off-diagonal source field $\phi_{\sigma-\sigma}$, we have
\begin{align}
\label{eq:second_der_phi_partition_function_transversal_field}
&\chi^{\phi,\sigma-\sigma\sigma-\sigma}_{abdc}(z_1,z_2;z_4,z_3)\notag\\
&= i\langle\mathcal{T}_{\mathcal{C}}\hat{c}^{\dagger}_{d,\sigma}(z_4)\hat{c}_{c,-\sigma}(z_3)\hat{c}_{a,\sigma}(z_1)\hat{c}^{\dagger}_{b,-\sigma}(z_2) \rangle_{\phi}\notag\\
&\quad - i\mathcal{G}^{\phi}_{cd,-\sigma\sigma}(z_3,z_4)\mathcal{G}^{\phi}_{ab,\sigma-\sigma}(z_1,z_2),
\end{align}
where we have rendered the notation more compact by turning the spin subscripts into superscripts. Furthermore, to get Eq.~\eqref{eq:second_der_phi_partition_function_transversal_field}, we performed the following substitutions in Eq.~\eqref{eq:second_der_phi_partition_function}: $\epsilon\to(a,\sigma)$, $\zeta\to(b,-\sigma)$, $\gamma\to(d,\sigma)$ and $\delta\to(c,-\sigma)$. In the transversal particle-hole channel, another expression of the form $\chi^{\phi,-\sigma\sigma\sigma-\sigma}_{abdc}(z_1,z_2;z_4,z_3)$ is produced, but since it includes a four-point correlation function of the form $i\langle\mathcal{T}_{\mathcal{C}}\hat{c}^{\dagger}_{d,\sigma}(z_4)\hat{c}_{c,-\sigma}(z_3)\hat{c}_{a,-\sigma}(z_1)\hat{c}^{\dagger}_{b,\sigma}(z_2) \rangle_{\phi}$, it vanishes in the Hubbard model due to spin conservation. To match the four-point correlation function appearing in Eq.~\eqref{eq:four_point_self_G}, we finally need to perform the variable substitutions $(a,z_1)\to (l,z_1^+)$, $(b,z_2)\to (m,z_2)$, $(c,z_3)\to (l,z_1)$ and $(d,z_4)\to (l,z_1^{++})$. Doing so, the last term of Eq.~\eqref{eq:second_der_phi_partition_function_transversal_field} vanishes when the source field is turned off. Making the same variable substitutions in Eq.~\eqref{eq:bethe_Salpeter_equation} as above in Eq.~\eqref{eq:second_der_phi_partition_function}, we obtain
\begin{align}
\label{eq:bethe_Salpeter_equation_transversal_field}
&\chi^{\phi,\sigma-\sigma\sigma-\sigma}_{lml}(z_1^+,z_2; z_1^{++},z_1) = -i\mathcal{G}^{\phi}_{l,\sigma}(z_1^+,z_1^{++})\mathcal{G}^{\phi}_{lm,-\sigma}(z_1,z_2) \notag\\
&-\mathcal{G}^{\phi}_{l\bar{a},\sigma}(z_1^+,\bar{z}_3)\Gamma^{\phi,\sigma-\sigma\bar{\sigma}^{\prime}\bar{\sigma}^{\prime\prime}}_{\bar{a}\bar{b}\bar{c}\bar{d}}(\bar{z}_3,\bar{z}_5;\bar{z}_6,\bar{z}_7)\notag\\
&\hspace{0.7cm}\times\chi^{\phi,\bar{\sigma}^{\prime}\bar{\sigma}^{\prime\prime}\sigma-\sigma}_{\bar{c}\bar{d}l}(\bar{z}_6,\bar{z}_7;z_1^{++},z_1)\mathcal{G}^{\phi}_{\bar{b}m,-\sigma}(\bar{z}_5,z_2).
\end{align}
In Eq.~\eqref{eq:bethe_Salpeter_equation_transversal_field}, we used the spin selection rule forbidding antiparallel spins in Green's functions once $\phi\to0$. Next, we insert the result \eqref{eq:bethe_Salpeter_equation_transversal_field} into Eq.~\eqref{eq:second_der_phi_partition_function_transversal_field} to isolate the four-point correlation function and then multiply by $U(z_1)$ to recover something similar to Eq.~\eqref{eq:combination_bethe_Salpeter_two_particle_corr}, but now for the transversal channel. This yields an expression for the TPSC self-energy in the transversal channel
\begin{align}
\label{eq:self_transversal_derivation}
&\Sigma^{\phi,\text{trans.}}_{l\bar{b},-\sigma\bar{\sigma}^{\prime}}(z_1,\bar{z}_2)\mathcal{G}^{\phi}_{\bar{b}m,\bar{\sigma}^{\prime}-\sigma}(\bar{z}_2,z_2)\notag\\
&=iU(z_1)\mathcal{G}^{\phi}_{l,-\sigma\sigma}(z_1,z_1^+)\mathcal{G}^{\phi}_{lm,\sigma-\sigma}(z_1,z_2)\notag\\
&- iU(z_1)\mathcal{G}^{\phi}_{l,\sigma}(z_1,z_1^{+})\mathcal{G}^{\phi}_{lm,-\sigma}(z_1,z_2) \notag\\
&-U(z_1)\mathcal{G}^{\phi}_{l\bar{a},\sigma}(z_1,\bar{z}_3)\Gamma^{\phi,\sigma-\sigma\bar{\sigma}^{\prime}\bar{\sigma}^{\prime\prime}}_{\bar{a}\bar{b}\bar{c}\bar{d}}(\bar{z}_3,\bar{z}_5;\bar{z}_6,\bar{z}_7)\notag\\
&\hspace{1.0cm}\times\chi^{\phi,\bar{\sigma}^{\prime}\bar{\sigma}^{\prime\prime}\sigma-\sigma}_{\bar{c}\bar{d}l}(\bar{z}_6,\bar{z}_7;z_1)\mathcal{G}^{\phi}_{\bar{b}m,-\sigma}(\bar{z}_5,z_2).
\end{align} From Eq.~\eqref{eq:self_transversal_derivation}, the physical transversal component of the second-level TPSC self-energy reads
\begin{align}
\label{eq:self_transversal_isolated_from_derivation}
&\Sigma^{\text{trans.}}_{lm,-\sigma}(z_1,z_2) = U(z_1)n_{l,\sigma}(z_1)\delta^{\mathcal{C}}(z_1,z_2)\delta_{l,m}\notag\\
&-U(z_1)\mathcal{G}_{l\bar{a},\sigma}(z_1,\bar{z}_3)\Gamma^{\sigma-\sigma\sigma-\sigma}_{\bar{a}m\bar{c}\bar{d}}(\bar{z}_3,z_2;\bar{z}_6,\bar{z}_7)\notag\\
&\hspace{0.7cm}\times\chi^{\sigma-\sigma\sigma-\sigma}_{\bar{c}\bar{d}l}(\bar{z}_6,\bar{z}_7;z_1),
\end{align}
since $\chi^{\sigma\sigma-\sigma\sigma}=\chi^{\sigma-\sigma-\sigma\sigma}=0$.
We now take a closer look at the different components entering Eq.~\eqref{eq:self_transversal_isolated_from_derivation}, namely $\chi$ and $\Gamma$. To start with, we assume that the vertex appearing in Eq.~\eqref{eq:self_transversal_isolated_from_derivation} is fully local, as done in Sec.~\ref{ch:longitudinal_self} for the longitudinal component,
\begin{align}
\label{eq:off_diagonal_self_energy_functional_derivative}
&\Gamma^{\sigma-\sigma\sigma-\sigma}_{amcd}(z_3,z_2;z_6,z_7) = \Gamma^{\sigma-\sigma\sigma-\sigma}_{m}(z_2)\delta^{\mathcal{C}}(z_2,z_3)\notag\\
&\hspace{0.5cm}\times\delta^{\mathcal{C}}(z_2,z_6)\delta^{\mathcal{C}}(z_2^+,z_7)\delta_{m,a}\delta_{m,c}\delta_{m,d}.
\end{align} Next, we work out an expression for $\chi^{\sigma-\sigma\sigma-\sigma}$, using Eq.~\eqref{eq:second_der_phi_partition_function},
\begin{align}
\label{eq:chi_transverse_4_point_corr_expression}
&\chi^{\sigma-\sigma\sigma-\sigma}_{cdl}(z_6,z_7;z_1^{+},z_1)\notag\\
&= -i\left<\mathcal{T}_{\mathcal{C}} \hat{c}^{\dagger}_{l,\sigma}(z_1^{+})\hat{c}_{l,-\sigma}(z_1)\hat{c}_{d,-\sigma}^{\dagger}(z_7)\hat{c}_{c,\sigma}(z_6)\right>.
\end{align} Since it follows from Eq.~\eqref{eq:off_diagonal_self_energy_functional_derivative} that $z_7\to z_6^+$ and $d\to c$ in Eq.~\eqref{eq:chi_transverse_4_point_corr_expression}, we obtain
\begin{align}
\label{eq:spin_create_spin_annihilate_op}
&\chi^{\sigma-\sigma\sigma-\sigma}_{cl}(z_6,z_6^+;z_1^{+},z_1)= -i\langle\mathcal{T}_{\mathcal{C}} \hat{S}_{c,+}(z_6)\hat{S}_{l,-}(z_1)\rangle\notag\\
&= \chi_{cl;+-}(z_6,z_1),
\end{align}
where $\hat{S}_{c,+/-}\equiv \frac{1}{2}\left(\hat{S}_{c,x}\pm i\hat{S}_{c,y}\right)$, such that Eq.~\eqref{eq:spin_create_spin_annihilate_op} can be expressed as
\begin{align}
\label{eq:spin_transverse_susceptibility_indentities}
&\chi_{cl;+-}(z_6,z_1)=-\frac{i}{4}\langle\mathcal{T}_{\mathcal{C}}\hat{S}_{c,x}(z_6)\hat{S}_{l,x}(z_1)\rangle\notag\\
&-\frac{i}{4}\langle\mathcal{T}_{\mathcal{C}}\hat{S}_{c,y}(z_6)\hat{S}_{l,y}(z_1)\rangle=-\frac{i}{2}\langle\mathcal{T}_{\mathcal{C}}\hat{S}_{c,z}(z_6)\hat{S}_{l,z}(z_1)\rangle.
\end{align} Hence, from Eqs.~\eqref{eq:spin_transverse_susceptibility_indentities} and \eqref{eq:off_diagonal_self_energy_functional_derivative}, the transversal component \eqref{eq:self_transversal_isolated_from_derivation} becomes~\cite{senechal_bourbonnais_tremblay_2004}
\begin{align}
\label{eq:self_energy_approx_TPSC_locality_transversal}
&\Sigma^{\text{trans.}}_{lm,\sigma}(z_1,z_2) = U(z_1)n_{l,-\sigma}(z_1)\delta^{\mathcal{C}}(z_1,z_2)\delta_{l,m}\notag\\
&-\frac{U(z_1)}{2}\mathcal{G}_{lm,-\sigma}(z_1,z_2)\Gamma^{\sigma-\sigma\sigma-\sigma}_{m}(z_2)\chi^{\text{sp}}_{ml}(z_2;z_1).
\end{align}
The spin off-diagonal irreducible vertex $\Gamma^{\sigma-\sigma\sigma-\sigma}$ appearing in Eq.~\eqref{eq:self_energy_approx_TPSC_locality_transversal} will be specified in Sec.~\ref{ch:TPSC_ansatz} using the first-level TPSC approximations.
\subsubsection{TPSC ansatz}
\label{ch:TPSC_ansatz}
To calculate the single- and two-particle correlation functions, TPSC employs an ansatz for the Luttinger-Ward functional $\Phi$ that approximates the local irreducible vertices in the particle-hole channel (transversal and longitudinal with respect to some generating field), namely the charge $\Gamma_{\text{ch}}$ and spin $\Gamma_{\text{sp}}$. The starting point is the following Luttinger-Ward functional~\cite{Vilk_1997}
\begin{align}
\label{eq:TPSC_Luttinger_Ward_functional}
&\Phi[\mathcal{G}] = \frac{1}{2}\int_{\mathcal{C}}\mathrm{d}z \ \sum_{\sigma}\mathcal{G}_{\sigma}(z,z^+)\Gamma_{\sigma\sigma}(z)\mathcal{G}_{\sigma}(z,z^+)\notag\\
&+\frac{1}{2}\int_{\mathcal{C}}\mathrm{d}z \ \sum_{\sigma}\mathcal{G}_{\sigma}(z,z^+)\Gamma_{\sigma-\sigma}(z)\mathcal{G}_{-\sigma}(z,z^+),
\end{align} where the quantities are defined on the Kadanoff-Baym contour, with arguments $z\in \mathcal{C}$. The integral can be decomposed into contour components according to the Langreth rules. From Eq.~\eqref{eq:TPSC_Luttinger_Ward_functional}, both the self-energy and the G-skeletonic irreducible vertices can be obtained. The first-level TPSC self-energy $\Sigma^{(0)}$ reads
\begin{align}
\label{eq:TPSC_self_energy}
\Sigma^{(0)}_{\sigma}(z_2,z_3) = \frac{\delta\Phi[\mathcal{G}]}{\delta\mathcal{G}_{\sigma}(z_3,z_2)},
\end{align}
which yields
\begin{align}
\label{eq:TPSC_self_energy_expanded}
\Sigma^{(0)}_{\sigma}(z_2,z_3) =& \,\, \Gamma_{\sigma\sigma}(z_3)\mathcal{G}_{\sigma}(z_3,z_3^+)\delta^{\mathcal{C}}(z_3^+,z_2)\notag\\
&+ \Gamma_{\sigma-\sigma}(z_3)\mathcal{G}_{-\sigma}(z_3,z_3^+)\delta^{\mathcal{C}}(z_3^+,z_2),
\end{align}
where the rotational spin symmetry $\Gamma_{\sigma-\sigma} = \Gamma_{-\sigma\sigma}$ was used. Since the $\Gamma$ factors are scalars, the first-level self-energy \eqref{eq:TPSC_self_energy_expanded} can be absorbed into a shift of the chemical potential $\mu_0$ when defining the lattice Green's function at the first level of approximation:
\begin{align}
\label{eq:first_level_approx_G}
&\left(i\partial_z + \mu_0 - \Sigma_{\sigma}^{(0)}(z)\delta^{\mathcal{C}}(z,z^{\prime}) - \epsilon(\mathbf{k},z)\right)\mathcal{G}^{(0)}_{\mathbf{k},\sigma}(z,z^{\prime})\notag\\
&= \delta^{\mathcal{C}}(z,z^{\prime}).
\end{align} In essence, the Green's function at the first level of approximation
is noninteracting.
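In equilibrium, Eq.~\eqref{eq:first_level_approx_G} therefore amounts to a noninteracting lattice Green's function with a shifted chemical potential. A minimal Matsubara-axis sketch, with an illustrative one-dimensional tight-binding dispersion (all parameters are assumptions), reads:
\begin{verbatim}
import numpy as np

# First-level TPSC Green's function: noninteracting form with a shifted
# chemical potential mu0 (equilibrium sketch; dispersion is illustrative).
beta, mu0, nk, nw = 5.0, 0.0, 64, 256
k = 2.0*np.pi*np.arange(nk)/nk
eps_k = -2.0*np.cos(k)                        # 1D tight-binding dispersion
wn = (2*np.arange(nw) + 1)*np.pi/beta         # fermionic Matsubara frequencies

G0 = 1.0/(1j*wn[:, None] + mu0 - eps_k[None, :])   # G^{(0)}_k(i w_n)
\end{verbatim}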
Let us now contrast Eq.~\eqref{eq:TPSC_self_energy_expanded} with the full expression for the Hubbard self-energy \eqref{eq:four_point_self_G}. TPSC at the first level of approximation corresponds to a Hartree-Fock-like factorization of Eq.~\eqref{eq:four_point_self_G}:
\begin{align}
\label{eq:first_approx_TPSC_two_particle}
&\Sigma^{\phi,(0)}_{l\bar{b},\sigma\bar{\sigma}^{\prime}}(z_1,\bar{z}_2)\mathcal{G}^{\phi,(0)}_{\bar{b}m,\bar{\sigma}^{\prime}\sigma}(\bar{z}_2,z_2)\notag\\
&\simeq A^{\phi}_{l,\sigma}(z_1)\biggl(\mathcal{G}^{\phi,(0)}_{l,-\sigma}(z_1,z_1^{+})\mathcal{G}^{\phi,(0)}_{lm,\sigma}(z_1,z_2)\notag\\
&\hspace{2.5cm}- \mathcal{G}^{\phi,(0)}_{l,\sigma-\sigma}(z_1,z_1^+)\mathcal{G}^{\phi,(0)}_{lm,-\sigma\sigma}(z_1,z_2)\biggr),
\end{align}
where the kernel $A^{\phi}$ appearing in Eq.~\eqref{eq:first_approx_TPSC_two_particle} is defined as
\begin{align}
\label{eq:first_approx_TPSC_kernel_A}
A^{\phi}_{l,\sigma}(z_1)\equiv -iU(z_1)\frac{\left<\mathcal{T}_{\mathcal{C}}\hat{n}_{l,-\sigma}(z_1)\hat{n}_{l,\sigma}(z_1)\right>_{\phi}}{\left<\hat{n}_{l,-\sigma}(z_1)\right>_{\phi}\left<\hat{n}_{l,\sigma}(z_1)\right>_{\phi}}.
\end{align}
The factorization \eqref{eq:first_approx_TPSC_two_particle} becomes exact in the local case where $z_2\to z_1^{+}$ and $m\to l$; one can indeed recover Eq.~\eqref{eq:four_point_self_G} there, given the definition of the local kernel $A^{\phi}_{l,\sigma}$ in Eq.~\eqref{eq:first_approx_TPSC_kernel_A}. In Eq.~\eqref{eq:first_approx_TPSC_two_particle}, the source field is complete, \textit{i.e.} it contains both the diagonal (longitudinal) and off-diagonal (transversal) spin components. The first (second) term of Eq.~\eqref{eq:first_approx_TPSC_two_particle} results from the factorization of the longitudinal (transversal) four-point correlation function. From Eq.~\eqref{eq:first_approx_TPSC_two_particle}, because the transversal contribution vanishes when multiplying from the right by ${\mathcal{G}^{\phi}_{\sigma}}^{-1}$, the first-level longitudinal self-energy approximation reads
\begin{align}
\label{eq:first_approx_TPSC_two_particle_self}
\Sigma^{\phi,(0)}_{lm,\sigma}(z_1,z_2)&=A^{\phi}_{l,\sigma}(z_1)\mathcal{G}^{\phi}_{l,-\sigma}(z_1,z_1^+)\delta^{\mathcal{C}}(z_1,z_2)\delta_{l,m}\notag\\
&=iA^{\phi}_{l,\sigma}(z_1)n_{l,-\sigma}(z_1)\delta^{\mathcal{C}}(z_1,z_2)\delta_{l,m},
\end{align} such that
\begin{align}
\label{eq:functional_derivative_sigma_first_approx}
&\frac{\delta\Sigma_{lm,\sigma}^{\phi,(0)}(z_1,z_2)}{\delta\mathcal{G}^{\phi,(0)}_{ij,-\sigma}(z_4,z_3)} = A^{\phi}_{l,\sigma}(z_1)\delta^{\mathcal{C}}(z_1,z_4)\delta^{\mathcal{C}}(z_1^+,z_3)\delta^{\mathcal{C}}(z_1,z_2)\delta_{l,i}\notag\\
&\times\delta_{l,j}\delta_{l,m}+\frac{\delta A^{\phi}_{l,\sigma}(z_1)}{\delta\mathcal{G}^{\phi}_{ij,-\sigma}(z_4,z_3)}n_{l,-\sigma}(z_1)\delta^{\mathcal{C}}(z_1,z_2)\delta_{l,m},
\end{align}
and
\begin{align}
\label{eq:functional_derivative_sigma_first_approx_2}
&\frac{\delta\Sigma_{lm,\sigma}^{\phi,(0)}(z_1,z_2)}{\delta\mathcal{G}^{\phi,(0)}_{ij,\sigma}(z_4,z_3)} = \frac{\delta A^{\phi}_{l,\sigma}(z_1)}{\delta\mathcal{G}^{\phi}_{ij,\sigma}(z_4,z_3)}n_{l,-\sigma}(z_1)\delta^{\mathcal{C}}(z_1,z_2)\delta_{l,m}.
\end{align} We have that $A^{\phi}_{l,\sigma}(z_1)=A^{\phi}_{l,-\sigma}(z_1)$. Now, since the irreducible vertex in the spin channel reads
\begin{align}
\label{eq:spin_irreducible_vertex_def}
&\Gamma^{\text{sp}}_{lmij}(z_1,z_2;z_4,z_3) \equiv \frac{\delta\Sigma^{\phi,(0)}_{\sigma}}{\delta\mathcal{G}^{\phi,(0)}_{-\sigma}}\biggr\rvert_{\phi\to 0} - \frac{\delta\Sigma^{\phi,(0)}_{\sigma}}{\delta\mathcal{G}^{\phi,(0)}_{\sigma}}\bigg\rvert_{\phi\to 0}\notag\\
&= A_{l,\sigma}(z_1)\delta^{\mathcal{C}}(z_1,z_4)\delta^{\mathcal{C}}(z_1^+,z_3)\delta^{\mathcal{C}}(z_1,z_2)\delta_{l,i}\delta_{l,j}\delta_{l,m},
\end{align} we can establish the following equivalence (within the TPSC approximation) between the local irreducible spin vertex and the double occupancy,
\begin{align}
\label{eq:equivalence_spin_irr_vertex_double_occupancy}
&\Gamma^{\text{sp}}_{lmij}(z_1,z_2;z_4,z_3)\notag\\
&= -iU(z_1)\frac{\left<\mathcal{T}_{\mathcal{C}}\hat{n}_{l,-\sigma}(z_1)\hat{n}_{l,\sigma}(z_1)\right>}{\left<\hat{n}_{l,-\sigma}(z_1)\right>\left<\hat{n}_{l,\sigma}(z_1)\right>}\notag\\
&\quad\times\delta^{\mathcal{C}}(z_1,z_4)\delta^{\mathcal{C}}(z_1^+,z_3)\delta^{\mathcal{C}}(z_1,z_2)\delta_{l,i}\delta_{l,j}\delta_{l,m}.
\end{align}
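In practice, Eq.~\eqref{eq:equivalence_spin_irr_vertex_double_occupancy} reduces the local spin vertex to a simple function of the double occupancy and the densities; an illustrative helper (our naming, not from a library) is:
\begin{verbatim}
def gamma_sp(U, D, n_up, n_dn):
    """TPSC spin vertex, Eq. (equivalence_spin_irr_vertex_double_occupancy):
    Gamma^sp(z) = U(z) <n_up n_dn> / (<n_up><n_dn>), local and time-diagonal."""
    return U * D / (n_up * n_dn)
\end{verbatim}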
The charge irreducible vertex is approximated in the same fashion as Eq.~\eqref{eq:spin_irreducible_vertex_def},
\begin{align}
\label{eq:charge_irreducible_vertex_def}
&\Gamma^{\text{ch}}_{lmij}(z_1,z_2;z_4,z_3) \equiv \frac{\delta\Sigma^{\phi,(0)}_{\sigma}}{\delta\mathcal{G}^{\phi,(0)}_{-\sigma}}\biggr\rvert_{\phi\to 0} + \frac{\delta\Sigma^{\phi,(0)}_{\sigma}}{\delta\mathcal{G}^{\phi,(0)}_{\sigma}}\bigg\rvert_{\phi\to 0}\notag\\
& = \Gamma_l^{\text{ch}}(z_1)\delta^{\mathcal{C}}(z_1,z_4)\delta^{\mathcal{C}}(z_1^+,z_3)\delta^{\mathcal{C}}(z_1,z_2)\delta_{l,i}\delta_{l,j}\delta_{l,m},
\end{align} where $\Gamma^{\text{ch}}$ has a different analytical expression from $\Gamma^{\text{sp}}$ and can be calculated from our knowledge of $\Gamma^{\text{sp}}$ using two-particle local sum-rules and Eq.~\eqref{eq:equivalence_spin_irr_vertex_double_occupancy}.
We next work out a useful expression for the vertex $\Gamma^{\sigma-\sigma\sigma-\sigma}$ appearing in Eq.~\eqref{eq:self_energy_approx_TPSC_locality_transversal}. To derive it, we need to calculate
\begin{align*}
\Gamma^{\sigma-\sigma\sigma-\sigma}_{lmij}(z_1,z_2;z_4,z_3)=\frac{\delta\Sigma^{\phi,(0)}_{lm,\sigma-\sigma}(z_1,z_2)}{\delta\mathcal{G}^{\phi,(0)}_{ij,\sigma-\sigma}(z_4,z_3)}\bigg\rvert_{\phi\to 0},
\end{align*}
where $\Sigma^{\phi,(0)}_{\sigma-\sigma}$ can be extracted from Eq.~\eqref{eq:first_approx_TPSC_two_particle},
\begin{align}
\label{eq:transversal_channel_self_energy_get_vertex}
\Sigma^{\phi,(0)}_{lb,\sigma-\sigma}(z_1,z_2)=&\,\,iU(z_1)\frac{\left<\mathcal{T}_{\mathcal{C}}\hat{n}_{l,-\sigma}(z_1)\hat{n}_{l,\sigma}(z_1)\right>_{\phi}}{\left<\hat{n}_{l,-\sigma}(z_1)\right>_{\phi}\left<\hat{n}_{l,\sigma}(z_1)\right>_{\phi}}\notag\\
&\hspace{0cm}\times\mathcal{G}^{\phi,(0)}_{l,\sigma-\sigma}(z_1,z_1^+)\delta^{\mathcal{C}}(z_1,z_2)\delta_{lm}.
\end{align}
Hence, we obtain
\begin{align}
\label{eq:equivalence_transversal_irr_vertex_double_occupancy}
&\Gamma^{\sigma-\sigma\sigma-\sigma}_{lmij}(z_1,z_2;z_4,z_3) = iU(z_1)\frac{\left<\mathcal{T}_{\mathcal{C}}\hat{n}_{l,-\sigma}(z_1)\hat{n}_{l,\sigma}(z_1)\right>}{\left<\hat{n}_{l,-\sigma}(z_1)\right>\left<\hat{n}_{l,\sigma}(z_1)\right>}\notag\\
&\hspace{0.5cm}\times\delta^{\mathcal{C}}(z_1,z_4)\delta^{\mathcal{C}}(z_1^+,z_3)\delta^{\mathcal{C}}(z_1,z_2)\delta_{l,i}\delta_{l,j}\delta_{l,m}\notag\\
&=-\Gamma^{\text{sp}}_{lmij}(z_1,z_2;z_4,z_3).
\end{align}
Equation~\eqref{eq:equivalence_transversal_irr_vertex_double_occupancy} is inserted into Eq.~\eqref{eq:self_energy_approx_TPSC_locality_transversal} to replace $\Gamma^{\sigma-\sigma\sigma-\sigma}$. Gathering all the results stemming from the TPSC ansatz, we can express the total self-energy for the second-level approximation, which is an average of the longitudinal \eqref{eq:self_energy_approx_TPSC_locality_logitudinal} and the transversal \eqref{eq:self_energy_approx_TPSC_locality_transversal} components:
\begin{align}
\label{eq:self_energy_approx_TPSC_locality_total}
&\Sigma^{\text{TPSC},(1)}_{lm,\sigma}(z_1,z_2)\notag\\
&= U(z_1)n_{l,-\sigma}(z_1)\delta^{\mathcal{C}}(z_1,z_2)\delta_{l,m}+\frac{U(z_1)}{8}\mathcal{G}^{(0)}_{lm,\sigma}(z_1,z_2)\notag\\
&\quad\times\biggl[\Gamma_m^{\text{ch}}(z_2)\chi^{\text{ch}}_{ml}(z_2,z_1) + 3\Gamma_m^{\text{sp}}(z_2)\chi^{\text{sp}}_{ml}(z_2,z_1)\biggr].
\end{align}
The Fourier transform of Eq.~\eqref{eq:self_energy_approx_TPSC_locality_total} yields~\cite{Bergeron_2011_optical_cond}
\begin{align}
\label{eq:fft_total_self_energy}
&\int\mathrm{d}^D(\mathbf{r}_l-\mathbf{r}_m) \ e^{-i\mathbf{k}\cdot(\mathbf{r}_l-\mathbf{r}_m)}\Sigma^{\text{TPSC},(1)}_{lm,\sigma}(z_1,z_2)\notag\\
&= \Sigma^{\text{TPSC},(1)}_{\mathbf{k},\sigma}(z_1,z_2)\notag\\
&=U(z_1)n_{-\sigma}(z_1)\delta^{\mathcal{C}}(z_1,z_2)
+\frac{U(z_1)}{8}\int\mathrm{d}^D\mathbf{q} \ \mathcal{G}^{(0)}_{\mathbf{k}+\mathbf{q},\sigma}(z_1,z_2)\notag\\
&\quad\times\biggl[\Gamma^{\text{ch}}(z_2)\chi^{\text{ch}}_{\mathbf{q}}(z_2,z_1) + 3\Gamma^{\text{sp}}(z_2)\chi^{\text{sp}}_{\mathbf{q}}(z_2,z_1)\biggr].
\end{align}
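The momentum integral in Eq.~\eqref{eq:fft_total_self_energy} is a convolution over the Brillouin zone and is therefore conveniently evaluated with fast Fourier transforms. The following static, single-frequency toy sketch illustrates only this convolution step; all input arrays are placeholders for the actual two-time contour objects.
\begin{verbatim}
import numpy as np

# Toy momentum convolution for Eq. (fft_total_self_energy):
#   Sigma(k) ~ U n/2 + (U/8) (1/N) sum_q [Gch Xch(q) + 3 Gsp Xsp(q)] G0(k+q).
# Static single-frequency placeholders stand in for the full contour objects.
nk, U, n = 64, 2.0, 1.0
k = 2.0*np.pi*np.arange(nk)/nk
G0 = 1.0/(1j*np.pi/5.0 + 2.0*np.cos(k))   # placeholder G0 at one frequency
Xsp = 0.5 + 0.2*np.cos(k)                 # placeholder static susceptibilities
Xch = 0.3 + 0.1*np.cos(k)
Gsp, Gch = 1.5, 2.5                       # placeholder local vertices

V = (U/8.0)*(Gch*Xch + 3.0*Gsp*Xsp)       # effective "potential" V(q), real here
# (1/N) sum_q V(q) G0(k+q) is a cross-correlation, evaluated with FFTs:
Sigma = U*n/2.0 + np.fft.ifft(np.conj(np.fft.fft(V))*np.fft.fft(G0))/nk
\end{verbatim}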
The susceptibilities $\chi^{\text{ch/sp}}$ are functionals of $\mathcal{G}^{(0)}$ defined in Eq.~\eqref{eq:first_level_approx_G}. The steps leading from the first-level approximation to the self-energy, $\Sigma^{(0)}$ (Eq.~\eqref{eq:TPSC_self_energy_expanded}), to the second-level approximation, $\Sigma^{(1)}$ (Eq.~\eqref{eq:fft_total_self_energy}), do not result in an approximation that is conserving in the Kadanoff-Baym sense, as was already pointed out in Ref.~\onlinecite{https://doi.org/10.1002/andp.202000399}. Nevertheless, in practice, the second-level approximation conserves energy rather well after a perturbation, for a large range of bare interactions $U$ and dopings $n$. Moreover, the fact that the second-level approximation to the TPSC self-energy \eqref{eq:fft_total_self_energy} reduces to the second-order lattice IPT self-energy~\cite{doi:10.1143/JPSJ.69.3912} in the limit of small $U$ makes it natural to combine TPSC with a DMFT scheme based on a weak-coupling impurity solver.
This nonequilibrium DMFT+TPSC scheme is explained in Sec.~\ref{ch:dmft_tpsc}.
\subsubsection{Algorithm}
\label{sec:tpsc_algorithm}
Our implementation of nonequilibrium TPSC contains the following steps: we first compute the noninteracting Green's function $\mathcal{G}^0$, which allows us to calculate the noninteracting two-particle Green's function $\chi^0\equiv -2i\mathcal{G}^0\mathcal{G}^0$, and make an initial guess for the double occupancy $D(z)=\langle\hat{n}_{\uparrow}(z)\hat{n}_{\downarrow}(z)\rangle$. Then, we simultaneously solve for $\chi^{\text{sp}}$ and $\Gamma^{\text{sp}}$ using the local spin two-particle sum-rule
\begin{align}
\label{eq:fluctuation_dissipation_two_particle}
&i\int\frac{\mathrm{d}^Dq}{\left(2\pi\right)^D} \ \chi^{\text{sp/ch}}_{\mathbf{q}}(z,z^{+})\notag\\
&\quad =n(z)+2(-1)^l\left<\hat{n}_{-\sigma}(z)\hat{n}_{\sigma}(z)\right> - (1-l)n(z)^2,
\end{align}
where $n=\left<\hat{n}_{\uparrow}+\hat{n}_{\downarrow}\right>$ is the particle density, $l=0$ for charge (ch) and $l=1$ for spin (sp). This is done using a multidimensional root-finding method for a nonlinear system of equations at each time step. Alternatively, as shown in the flow chart of Fig.~\ref{fig:KB_sp_quantities}, the spin quantities could be solved self-consistently until $D(z)$ converges. However, we make use of the multidimensional root-finding method due to its higher efficiency.
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\columnwidth]{flow_chart_spin-crop.pdf}
\caption{Flow chart describing the self-consistent determination of $D(z)$, $\chi^{\text{sp}}$ and $\Gamma^{\text{sp}}$ (alternative method). In the actual simulations, we modify the BSE as in Eq.~\eqref{eq:bethe_Salpeter_eq_approximated} and use the multidimensional root-finding method.
}
\label{fig:KB_sp_quantities}
\end{figure}
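A static equilibrium caricature of this spin-channel step, using a scalar root finder in place of the multidimensional one and a placeholder bubble $\chi^0(q)$, may help fix ideas; signs follow the equilibrium convention $\chi^{\text{sp}}=\chi^0/(1-\Gamma^{\text{sp}}\chi^0/2)$, and all numbers are assumptions.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# Static caricature of the spin-channel step: solve the local spin sum rule
# together with the TPSC ansatz for Gamma_sp (equilibrium sign convention).
U, n, nq = 2.0, 1.0, 64
q = 2.0*np.pi*np.arange(nq)/nq
chi0 = 0.5 + 0.2*np.cos(q)   # placeholder bubble; mean equals n - n^2/2 here

def spin_residual(D):
    Gsp = U*D/((n/2.0)*(n/2.0))          # Gamma_sp = U <n_up n_dn>/(<n_up><n_dn>)
    chi_sp = chi0/(1.0 - 0.5*Gsp*chi0)   # dressed spin susceptibility
    return chi_sp.mean() - (n - 2.0*D)   # local spin sum rule (l = 1)

D = brentq(spin_residual, 1e-8, n*n/4.0 - 1e-8)   # double occupancy
Gamma_sp = U*D/((n/2.0)*(n/2.0))
\end{verbatim}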
\noindent
The next step is to solve for the charge quantities $\chi^{\text{ch}}$ and $\Gamma^{\text{ch}}$. Again, a multidimensional root-finding method for a nonlinear system of equations is used at each time step. The two equations that must be solved simultaneously are displayed in Fig.~\ref{fig:KB_ch_quantities}; they involve the charge two-particle sum-rule~\eqref{eq:fluctuation_dissipation_two_particle}.
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\columnwidth]{flow_chart_charge-crop.pdf}
\caption{Flow chart describing the self-consistent determination of $\chi^{\text{ch}}$ and $\Gamma^{\text{ch}}$. In the actual simulations, we modify the BSE as in Eq.~\eqref{eq:bethe_Salpeter_eq_approximated}.}
\label{fig:KB_ch_quantities}
\end{figure}
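Continuing the static sketch above, with $D$ fixed the charge vertex follows from the charge sum-rule alone, again in the equilibrium sign convention $\chi^{\text{ch}}=\chi^0/(1+\Gamma^{\text{ch}}\chi^0/2)$:
\begin{verbatim}
# Continuation of the spin-channel sketch: with D fixed, Gamma_ch follows
# from the charge sum rule (l = 0) alone.
def charge_residual(Gch):
    chi_ch = chi0/(1.0 + 0.5*Gch*chi0)   # note the opposite sign in this channel
    return chi_ch.mean() - (n + 2.0*D - n*n)

Gamma_ch = brentq(charge_residual, 0.0, 100.0)
\end{verbatim}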
\noindent
In order to satisfy the local sum-rules~\eqref{eq:fluctuation_dissipation_two_particle} out of equilibrium, we introduce an additional approximation, resulting in a modified (approximated) form of the Bethe-Salpeter equations written in the green panels of the flow charts of Figs.~\ref{fig:KB_sp_quantities} and \ref{fig:KB_ch_quantities}. The approximated form is
\begin{align}
\label{eq:bethe_Salpeter_eq_approximated}
&\chi_{\mathbf{q}}^{\text{sp/ch}}(z,z^{\prime}) = \chi^0_{\mathbf{q}}(z,z^{\prime})\notag\\
&\quad + (-1)^{l+1}\frac{i}{2} \Gamma^{\text{sp}/\text{ch}}(z)\chi_{\mathbf{q}}^0(z,\bar{z})\chi_{\mathbf{q}}^{\text{sp/ch}}(\bar{z},z^{\prime}),
\end{align}
where, once again, $l=0$ for charge (ch) and $l=1$ for spin (sp). The reason for the approximation \eqref{eq:bethe_Salpeter_eq_approximated} is explained in Appendix~\ref{appendice:ch:noneq_approx_TPSC_vertices}. As far as TPSC is concerned, the algorithm terminates once all the quantities in each channel have been determined and the self-energy
\begin{align}
\label{eq:tpsc_self_energy_alpha}
&\Sigma^{\text{TPSC},(1)}_{\mathbf{k},\sigma}[\alpha](z_1,z_2)\notag\\
&= U(z_1)n_{-\sigma}(z_1)\delta^{\mathcal{C}}(z_1,z_2) + \frac{U(z_1)}{8}\int\frac{\mathrm{d}^Dq}{(2\pi)^D}\alpha(z_2)\notag\\
&\!\!\times\!\!\biggl[3\Gamma^{\text{sp}}(z_2)\chi^{\text{sp}}_{\mathbf{q}}(z_2,z_1) + \Gamma^{\text{ch}}(z_2)\chi^{\text{ch}}_{\mathbf{q}}(z_2,z_1)\biggr]\mathcal{G}^{(0)}_{\mathbf{k}+\mathbf{q},\sigma}(z_1,z_2)
\end{align} has been computed. In Eq.~\eqref{eq:tpsc_self_energy_alpha}, the one-time variable $\alpha$ has been introduced into Eq.~\eqref{eq:fft_total_self_energy} in order to satisfy the sum-rule involving the double occupancy appearing in Eqs.~\eqref{eq:first_approx_TPSC_two_particle} and \eqref{eq:first_approx_TPSC_kernel_A}, which is needed when solving for the spin quantities (see Fig.~\ref{fig:KB_sp_quantities}),
\begin{align}
\label{eq:double_occupancy_sum_rule_alpha}
&\frac{-i}{2}\int\frac{\mathrm{d}^Dk}{(2\pi)^D} \ \left[\Sigma_{\mathbf{k},\bar{\sigma}}^{\text{TPSC},(1)}[\alpha](z_1,\bar{z})\mathcal{G}^{(1)}_{\mathbf{k},\bar{\sigma}}[\Sigma^{\text{TPSC},(1)}](\bar{z},z_1^{+})\right]\notag\\
& \quad = U(z_1)\langle \hat n_{-\sigma}(z_1)\hat n_{\sigma}(z_1) \rangle.
\end{align}
This extra renormalization of the irreducible vertices is necessary in order to get physically sound results after parameter quenches in the Hubbard model~\eqref{eq:Hubbard_model_intro}.
The variant TPSC+GG reintroduces the Green's function $\mathcal{G}^{(1)}[\Sigma^{\text{TPSC},(1)}]$ computed with Eq.~\eqref{eq:tpsc_self_energy_alpha} into the noninteracting bubble $\chi^0$ and repeats the subroutines described in Figures~\ref{fig:KB_sp_quantities} and \ref{fig:KB_ch_quantities} until overall convergence.
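Schematically, the TPSC+GG outer loop can be written as a higher-order routine in which the steps of Figs.~\ref{fig:KB_sp_quantities} and \ref{fig:KB_ch_quantities} enter as callables; the function below is a structural sketch with hypothetical step routines passed in as arguments, not the actual implementation.
\begin{verbatim}
def tpsc_gg(G0, U, bubble, solve_spin, solve_charge, self_energy, dyson,
            tol=1e-8, max_iter=100):
    """Structural sketch of the TPSC+GG outer loop; the step routines
    (bubble, solve_spin, ...) are hypothetical callables."""
    G, D_old = G0, None
    for _ in range(max_iter):
        chi0 = bubble(G)                          # chi0 = -2i G G
        D, Gsp, chi_sp = solve_spin(chi0, U)      # spin sum rule + ansatz
        Gch, chi_ch = solve_charge(chi0, U, D)    # charge sum rule
        Sigma = self_energy(G, U, Gsp, chi_sp, Gch, chi_ch)
        G = dyson(G0, Sigma)                      # G^(1) = [G0^{-1} - Sigma]^{-1}
        if D_old is not None and abs(D - D_old) < tol:
            break                                 # converged on double occupancy
        D_old = D
    return G, Sigma
\end{verbatim}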
\subsection{Nonequilibrium DMFT+TPSC}
\label{ch:dmft_tpsc}
\subsubsection{General remarks}
Similar in spirit to established schemes like GW+DMFT\cite{Biermann2003,Ayral2013} or FLEX+DMFT,\cite{Huang2015,Kitatani2015} the combination of DMFT (introduced in Sec.~\ref{subsec:DMFT}) and TPSC (introduced in Sec.~\ref{subsec:TPSC_variants}) can be accomplished by replacing the local TPSC self-energy component with the DMFT one in a self-consistent manner in order to better capture the local correlations. The resulting self-energy reads $\Sigma^{\text{DMFT+TPSC}}_{ij}=\Sigma^{\text{imp}}\delta_{ij}+\Sigma^{\text{TPSC},(1)}(1-\delta_{ij})$, with $i,j$ lattice site indices, and thus incorporates the effects of local and nonlocal correlations on the spin and charge degrees of freedom. These correlations feed back into the DMFT calculations within a self-consistency loop. In the following subsection, we describe the algorithmic procedure that defines nonequilibrium DMFT+TPSC. The full scheme is illustrated as a flow chart in Fig.~\ref{fig:DMFT_flow_chart}.
\subsubsection{Algorithm}
To start the DMFT+TPSC procedure, one must guess an initial Weiss Green's function \eqref{eq:PM:Weiss_Green_hyb} (e.g.~local Green's function of the noninteracting lattice) that enters the impurity solver described in Sec.~\ref{subsubsec:IPT}. The impurity solver computes an impurity self-energy, denoted by $\Sigma_{\text{imp}}[\mathcal{G}_0]$ in this section, that renormalizes and broadens the energy spectrum of the impurity electrons. Then, the impurity double occupancy
\begin{align}
\label{eq:impurity_double_occupancy}
D^{\text{imp}}(z) =& \,\frac{-i}{2U(z)}\text{Tr}\left[\Sigma^{\text{imp}}_{\sigma}(z,\bar{z})\mathcal{G}^{\text{imp}}_{\sigma}(\bar{z},z)\right]^<\notag\\
&+ \frac{1}{4}\sum_{\sigma}n_{\sigma}(z)n_{-\sigma}(z),
\end{align}
is used instead of that extracted from the ansatz~\eqref{eq:equivalence_spin_irr_vertex_double_occupancy}, which is employed in TPSC and TPSC+GG. $D^{\text{imp}}$ determines both the spin and charge irreducible vertices according to Figs.~\ref{fig:KB_sp_quantities} and \ref{fig:KB_ch_quantities}, respectively, making use of the respective local sum-rules~\eqref{eq:fluctuation_dissipation_two_particle}. This time, the susceptibilities defined through the Bethe-Salpeter equation~\eqref{eq:charge_spin_susceptibility_TPSC_definition_developed_2} are slightly different, in that the ``bare'' two-particle Green's function $\chi^0$ is defined as
\begin{align}
\label{eq:noninteracting_two_particle_Gfunc}
\chi^0_{\mathbf{q}}(z,z^{\prime}) = -2i\int\frac{\mathrm{d}^Dk}{(2\pi)^D} \ \mathcal{G}_{\mathbf{k}}(z,z^{\prime})\mathcal{G}_{\mathbf{k}+\mathbf{q}}(z,z^{\prime}),
\end{align}
where the lattice Green's function $\mathcal{G}_{\mathbf{k}}$ is obtained from Eq.~\eqref{eq:PM:projected_green_function_impurity} and contains the local impurity self-energy. Then, the momentum-dependent TPSC self-energy can be calculated using Eq.~\eqref{eq:fft_total_self_energy} (with $\mathcal{G}^{(0)}$ replaced by $\mathcal{G}$). We finally replace the local self-energy component of $\Sigma^{\text{TPSC},(1)}_{\mathbf{k}}$,
\begin{align}
\Sigma^{\text{TPSC},(1)}_{\text{loc},\sigma}(z,z^{\prime})\equiv\frac{1}{N_k}\sum_{\mathbf{k}}\Sigma_{\mathbf{k},\sigma}^{\text{TPSC},(1)}(z,z^{\prime}),
\end{align}
by the impurity self-energy $\Sigma^{\text{imp}}_{\sigma}$. The DMFT+TPSC self-energy with improved local correlations thus reads
\begin{align}
\label{eq:sigma_DMFT_TPSC}
&\Sigma^{(1)}_{\mathbf{k},\sigma}(z,z^{\prime})\notag\\
&\equiv\Sigma_{\mathbf{k},\sigma}^{\text{TPSC},(1)}(z,z^{\prime})-\Sigma^{\text{TPSC},(1)}_{\text{loc},\sigma}(z,z^{\prime})+\Sigma^{\text{imp}}_{\sigma}(z,z^{\prime}),
\end{align}
and the improved lattice Green's function $\mathcal{G}^{\text{lat},(1)}_{\mathbf{k}}$ with $\Sigma^{(1)}_{\mathbf{k}}$~from Eq.~\eqref{eq:sigma_DMFT_TPSC} is defined
as the solution of the Dyson equation
\begin{align}
\label{eq:lattice_G_definition_improved}
&\left[i\partial_z + \mu - \epsilon(\mathbf{k})-\Sigma_{\text{imp}}^{\delta,\sigma}(z)\right]\mathcal{G}^{\text{lat},(1)}_{\mathbf{k},\sigma}(z,z^{\prime})\notag\\
&\hspace{0.7cm}-\Sigma^{(1)}_{\mathbf{k},\sigma}(z,\bar{z})\mathcal{G}^{\text{lat},(1)}_{\mathbf{k},\sigma}(\bar{z},z^{\prime}) = \delta^{\mathcal{C}}(z,z^{\prime}).
\end{align}
Once the improved lattice Green's function \eqref{eq:lattice_G_definition_improved} is known, the lattice average $\mathcal{G}_{\text{loc}}^{\sigma}(z,z^{\prime})\equiv\frac{1}{N_k}\sum_{\mathbf{k}}\mathcal{G}^{\text{lat},(1)}_{\mathbf{k},\sigma}(z,z^{\prime})$ is calculated and identified with the impurity Green's function. Finally, by solving the Volterra equation \eqref{eq:PM:impurity_self_energy}, the Weiss Green's function can be updated and reinserted into the impurity solver. The whole process is repeated until the scheme converges.
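Schematically, one iteration of this loop can be organized as in the following Python sketch. All solver components are passed in as user-supplied callables with hypothetical names; on the Kadanoff-Baym contour these would act on two-time objects, a complication we suppress here for readability.
\begin{verbatim}
import numpy as np

def dmft_tpsc_iteration(G_weiss, impurity_solver,
                        tpsc_self_energy,
                        lattice_dyson, weiss_update):
    # 1) impurity solver:
    #    Weiss field -> impurity self-energy
    Sigma_imp, G_imp = impurity_solver(G_weiss)
    # 2) momentum-dependent TPSC self-energy built
    #    from the lattice G containing Sigma_imp
    Sigma_tpsc_k = tpsc_self_energy(Sigma_imp)
    # 3) replace the local part by Sigma_imp,
    #    cf. Eq. (sigma_DMFT_TPSC); momentum is
    #    assumed to be the leading array axis
    Sigma_k = (Sigma_tpsc_k
               - Sigma_tpsc_k.mean(axis=0)
               + Sigma_imp)
    # 4) lattice Dyson equation, then k-average
    G_lat_k = lattice_dyson(Sigma_k)
    G_loc = G_lat_k.mean(axis=0)
    # 5) impurity (Volterra) Dyson equation:
    #    update the Weiss field
    return (weiss_update(G_loc, Sigma_imp),
            Sigma_k, G_lat_k)

def dmft_tpsc_loop(G_weiss, solvers, mix=0.5,
                   tol=1e-8, max_iter=200):
    G_loc_prev = None
    for _ in range(max_iter):
        G_weiss_new, Sigma_k, G_lat_k = \
            dmft_tpsc_iteration(G_weiss, *solvers)
        G_weiss = mix*G_weiss_new + (1 - mix)*G_weiss
        G_loc = G_lat_k.mean(axis=0)
        if (G_loc_prev is not None and
                np.max(np.abs(G_loc - G_loc_prev))
                < tol):
            return Sigma_k, G_lat_k
        G_loc_prev = G_loc
    raise RuntimeError("DMFT+TPSC did not converge")
\end{verbatim}
The linear mixing of the Weiss field is used here purely for illustration; any standard convergence accelerator could be substituted.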
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\columnwidth]{DMFT_TPSC_flow-crop.pdf}
\caption{Flow chart describing the self-consistent DMFT+TPSC procedure. In the actual calculations, the Bethe-Salpeter equations inside the yellow panel are approximated by Eq.~\eqref{eq:bethe_Salpeter_eq_approximated}.
}
\label{fig:DMFT_flow_chart}
\end{figure}
Apart from looking at the energy conservation during the time propagation of the (undriven) DMFT+TPSC solution, the comparison between the DMFT double occupancy $D^{\text{imp}}$~\eqref{eq:impurity_double_occupancy} and the one extracted from the lattice quantities
\begin{align}
\label{eq:tr_sk_Gk_TPSC}
D^{\text{TPSC},(1)}(z) =& \, \frac{-i}{2U(z)} \text{Tr}\left[\Sigma^{(1)}_{\mathbf{k},\sigma}(z,\bar{z})\mathcal{G}^{\text{lat},(1)}_{\mathbf{k},\sigma}(\bar{z},z)\right]^< \notag\\
&\hspace{0.0cm}+ \frac{1}{4}\sum_{\sigma}n_{\sigma}(z)n_{-\sigma}(z),
\end{align}
with $\Sigma^{(1)}_{\mathbf{k}}$ defined in Eq.~\eqref{eq:sigma_DMFT_TPSC} and $\mathcal{G}^{\text{lat},(1)}_{\mathbf{k}}$ defined in Eq.~\eqref{eq:lattice_G_definition_improved}, turns out to be a good consistency check for the method. If the difference between $D^{\text{imp}}(z)$ and $D^{\text{TPSC},(1)}(z)$ becomes too large, the results become unreliable. Note that in our single-band model, Eq.~\eqref{eq:tr_sk_Gk_TPSC} can be obtained by Fourier-transforming Eq.~\eqref{eq:four_point_self_G}.
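In practice, this check amounts to monitoring the relative deviation between the two double occupancies on the time grid, as in the following sketch (the threshold is illustrative only):
\begin{verbatim}
import numpy as np

def check_double_occupancy(D_imp, D_tpsc,
                           rel_tol=0.03):
    # relative deviation |D_tpsc - D_imp| / D_imp
    # on a grid of contour times
    D_imp = np.asarray(D_imp)
    rel_dev = (np.abs(np.asarray(D_tpsc) - D_imp)
               / np.abs(D_imp))
    return rel_dev, bool(rel_dev.max() < rel_tol)
\end{verbatim}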
Similarly to TPSC and TPSC+GG, which employ the sum-rule \eqref{eq:double_occupancy_sum_rule_alpha} to obtain a consistent result for the double occupation, DMFT+TPSC can be modified by enforcing that the impurity double occupancy $D^{\text{imp}}$~\eqref{eq:impurity_double_occupancy} be equal to that computed from the lattice quantities obtained from TPSC~\eqref{eq:tr_sk_Gk_TPSC}:
\begin{align}
\label{eq:double_occupancy_sum_rule_alpha_DMFT_TPSC}
&\text{Tr}\left[\Sigma^{(1)}_{\mathbf{k},\sigma}[\alpha](z,\bar{z})\mathcal{G}^{\text{lat},(1)}_{\mathbf{k},\sigma}(\bar{z},z)\right]^<\notag\\
&\hspace{2.0cm} \equiv \text{Tr}\left[\Sigma^{\text{imp}}_{\sigma}(z,\bar{z})\mathcal{G}^{\text{imp}}_{\sigma}(\bar{z},z)\right]^<,
\end{align}
with
\begin{align}
\label{eq:sigma_DMFT_TPSC_alpha}
&\Sigma^{(1)}_{\mathbf{k},\sigma}[\alpha](z,z^{\prime})\notag\\
&=\Sigma_{\mathbf{k},\sigma}^{\text{TPSC},(1)}[\alpha](z,z^{\prime})-\Sigma^{\text{TPSC},(1)}_{\text{loc},\sigma}[\alpha](z,z^{\prime})+\Sigma^{\text{imp}}_{\sigma}(z,z^{\prime}),
\end{align}
or, alternatively,
\begin{align}
\label{eq:sigma_DMFT_TPSC_alpha_2}
&\Sigma^{(1)}_{\mathbf{k},\sigma}[\alpha](z,z^{\prime})\notag\\
&=\Sigma_{\mathbf{k},\sigma}^{\text{TPSC},(1)}(z,z^{\prime})-\alpha(z)\Sigma^{\text{TPSC},(1)}_{\text{loc},\sigma}(z,z^{\prime})+\Sigma^{\text{imp}}_{\sigma}(z,z^{\prime}),
\end{align}
where $\alpha$, in the case of Eq.~\eqref{eq:sigma_DMFT_TPSC_alpha}, serves a purpose similar to that in Eq.~\eqref{eq:double_occupancy_sum_rule_alpha}, in that it further renormalizes the irreducible vertices in Eq.~\eqref{eq:tpsc_self_energy_alpha} so as to fulfil Eq.~\eqref{eq:double_occupancy_sum_rule_alpha_DMFT_TPSC}. In Eq.~\eqref{eq:sigma_DMFT_TPSC_alpha_2}, the parameter $\alpha$ can be seen as a time-dependent correction to the hybridization function appearing in the DMFT self-consistency (Eq.~\eqref{eq:DMFT:DMFT_action}). These \textit{modified} DMFT+TPSC methods are coined DMFT+TPSC$\alpha$. It turns out, however, that neither the lattice self-energy \eqref{eq:sigma_DMFT_TPSC_alpha} nor the one defined in Eq.~\eqref{eq:sigma_DMFT_TPSC_alpha_2} leads to a stable nonequilibrium evolution. Thus, DMFT+TPSC$\alpha$ will only be discussed in equilibrium set-ups, making use of Eq.~\eqref{eq:sigma_DMFT_TPSC_alpha}.
\subsection{Summary of the different schemes}
In order to clarify the similarities and differences between the methods considered in this paper, we summarize the key characteristics of the methods in Table~\ref{table:nonequilibrium_methods_laid_out}. Moreover, the graph in Fig.~\ref{fig:graph_second_level_first_level} illustrates the connection between the first- and second-level approximations.
\begin{table}[h!]
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{||c|c|c|c||}
\hline
Method & Self-consistent & $D$ consistency & $\Sigma^{(1)}_{\mathbf{k}}$\\
\hline
OG TPSC & $\text{\sffamily X}$ & $\text{\sffamily X}$ & Eq.~\eqref{eq:fft_total_self_energy} \\
\hline
TPSC & $\text{\sffamily X}$ & $\checkmark$ & Eqs.~\eqref{eq:tpsc_self_energy_alpha} \& \eqref{eq:double_occupancy_sum_rule_alpha} \\
\hline
TPSC+GG & $\checkmark$ & $\checkmark$ & Eqs.~\eqref{eq:tpsc_self_energy_alpha} \& \eqref{eq:double_occupancy_sum_rule_alpha} \\
\hline
DMFT+TPSC & $\checkmark$ & $\text{\sffamily X}$ & Eq.~\eqref{eq:sigma_DMFT_TPSC} \\
\hline
DMFT+TPSC$\alpha$ & $\checkmark$ & $\checkmark$ & Eqs.~\eqref{eq:sigma_DMFT_TPSC_alpha} \& \eqref{eq:double_occupancy_sum_rule_alpha_DMFT_TPSC} \\[1ex]
\hline
\end{tabular}
}
\caption{Properties of the TPSC variants considered in this study. Checkmarks ($\checkmark$) indicate that a method is endowed with the corresponding characteristic, while the x-marks ($\text{\sffamily X}$) mean the opposite. In the last column, we list the equations defining the lattice self-energy.}
\label{table:nonequilibrium_methods_laid_out}
\end{table}
The column of Table~\ref{table:nonequilibrium_methods_laid_out} titled ``Self-consistent'' specifies which methods are self-consistent, \textit{i.e.} feed back the interacting lattice Green's functions into a self-consistency loop until convergence. The methods without this characteristic compute the self-energy and related quantities in a ``one-shot'' fashion. The column titled ``$D$ consistency'' indicates which methods make use of a parameter $\alpha$ to enforce consistency between the double occupancies obtained from local and lattice quantities. For example, in the case of TPSC and TPSC+GG, the sum-rule~\eqref{eq:double_occupancy_sum_rule_alpha} ensures that the double occupancy obtained within the first-level approximation from Eq.~\eqref{eq:equivalence_spin_irr_vertex_double_occupancy} is equal to that calculated from the second-level quantities $\Sigma^{(1)}_{\mathbf{k}}$ and $\mathcal{G}^{(1)}$. Indeed, in a fully
consistent
scheme, the double occupancy appearing in Eq.~\eqref{eq:equivalence_spin_irr_vertex_double_occupancy}, which is extracted from the first-level approximation self-energy $\Sigma^{(0)}_{\mathbf{k}}$~\eqref{eq:transversal_channel_self_energy_get_vertex}, should be equal to that obtained from the second-level single-particle quantities $\Sigma^{(1)}_{\mathbf{k}}$ and $\mathcal{G}^{(1)}_{\mathbf{k}}$ (Eq.~\eqref{eq:double_occupancy_sum_rule_alpha}). Finally, the last column of Table~\ref{table:nonequilibrium_methods_laid_out} refers to the second-level self-energies featuring in each method, together with the extra sum-rule they need to satisfy if the method is ``$D$-consistent''.
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}[
level 1/.style={sibling distance=5.0cm, level distance=4.5cm, align=center},
level 2/.style={sibling distance=5.0cm, level distance=4.5cm, align=center},
edge from parent/.style={very thick,draw=blue!50!black!90,
shorten >=5pt, shorten <=5pt,->},
edge from parent path={(\tikzparentnode.south) -- (\tikzchildnode.north)},
kant/.style={text width=2cm, text centered, sloped},
every node/.style={text ragged, inner sep=.5mm, align=center},
punkt/.style={rectangle, rounded corners, shade, top color=white,
bottom color=blue!70!black!30, draw=blue!50!black!60, very
thick },
punkt2/.style={rectangle, rounded corners, shade, top color=white,
bottom color=green!70!black!30, draw=green!50!black!60, very
thick }
]
\begin{scope}
\node[punkt] [rectangle split, rectangle split, rectangle split parts=2, text ragged] (A) at (0,0) {
\textbf{First-level self-energy}
\nodepart{second}
$\Sigma^{(0)}$~(Eq.~\eqref{eq:TPSC_self_energy_expanded})
};
\node[punkt] [rectangle split, rectangle split, rectangle split parts=2,
text ragged] (B) at (2.2cm,-3cm) {
\textbf{First-level vertices}
\nodepart{second}
$\Gamma^{\text{sp}}$~(Eq.~\eqref{eq:equivalence_spin_irr_vertex_double_occupancy}), $\Gamma^{\text{ch}}$~(Eq.~\eqref{eq:charge_irreducible_vertex_def})
};
\node[punkt] [rectangle split, rectangle split, rectangle split parts=2,
text ragged] (C) at (-2.2cm,-3cm) {
\textbf{First-level propagator}
\nodepart{second}
$\mathcal{G}^{(0)}$~(Eq.~\eqref{eq:first_level_approx_G})
};
\node[punkt2] [rectangle split, rectangle split, rectangle split parts=3] (D) at (0,-6cm) {
\textbf{Second-level self-energy}
\nodepart{second}
$\text{OG TPSC} \ \Sigma^{(1)}\colon \text{Eq.~\eqref{eq:fft_total_self_energy}}$
\nodepart{third}
$\text{TPSC and TPSC+GG} \ \Sigma^{(1)}\colon \text{Eqs.~\eqref{eq:tpsc_self_energy_alpha} and \eqref{eq:double_occupancy_sum_rule_alpha}}$
};
\node[punkt2] [rectangle split, rectangle split, rectangle split parts=2] (E) at (2.2cm,-9cm) {
\textbf{Second-level vertices}
\nodepart{second}
$\Gamma^{(1),\text{sp/ch}}\equiv -\frac{\delta\Sigma_{\sigma}^{(1)}}{\delta\mathcal{G}_{-\sigma}^{(1)}}\pm \frac{\delta\Sigma_{\sigma}^{(1)}}{\delta\mathcal{G}_{\sigma}^{(1)}}$
};
\node[punkt2] [rectangle split, rectangle split, rectangle split parts=2] (F) at (-2.2cm,-9cm) {
\textbf{Second-level propagator}
\nodepart{second}
$\mathcal{G}^{(1)}$~(Eq.~\eqref{eq:lattice_G_definition_improved})
};
\end{scope}
\begin{scope}[>={Stealth[black]}]
\path [->,draw=black,very thick] (A) edge[text width=1.5cm, text centered, anchor=south, sloped] node {\small Ansatz} (B);
\path [->,draw=black,very thick] (A) edge[text width=1.5cm, text centered, anchor=south, sloped] node {} (C);
\path [->,draw=black,very thick] (C) edge node {} (D);
\path [->,draw=black,very thick] (B) edge node {} (D);
\path [->,draw=red,very thick] (D) edge[text width=2cm, text centered, sloped] node {} (E);
\path [->,draw=black,very thick] (D) edge[text width=1.5cm, text centered, anchor=south, sloped] node {} (F);
\end{scope}
\end{tikzpicture}
\end{center}
\caption{Flow graph showing the connections between the two levels of TPSC, namely the first- (blue boxes) and second-level (green boxes) approximations. The red line shows that second-level irreducible vertices could in principle be obtained from the second-level self-energy $\Sigma^{(1)}$. The $\alpha$ renormalization of the vertices introduced via Eq.~\eqref{eq:double_occupancy_sum_rule_alpha} modifies the irreducible vertices such that the two levels of the approximation become consistent.
}
\label{fig:graph_second_level_first_level}
\end{figure}
\section{Results}
\label{sec:results}
\subsection{General remarks}
We first test TPSC, TPSC+GG and DMFT+TPSC as introduced in Sec.~\ref{ch:dmft_tpsc} by studying equilibrium lattice models and comparing some results with data published in the literature.\cite{PhysRevX.11.011058} In Sec.~\ref{sec:results:equilibrium}, we benchmark our results against Diagrammatic Monte Carlo (DiagMC)\cite{PhysRevLett.81.2514,VanHoucke2010} and compare our implementations with TPSC in its original formulation, coined from now on ``OG TPSC''.\cite{tpsc_1997} Then, TPSC, TPSC+GG and DMFT+TPSC are used to compute various equilibrium properties of the cubic lattice Hubbard model. In Sec.~\ref{sec:results:Nonequilibrium}, we present the nonequilibrium applications. We simulate ramps in one of the hopping terms to induce a dimensional crossover from a square to a cubic lattice and analyze the corresponding spin and charge dynamics.
\subsection{Equilibrium}
\label{sec:results:equilibrium}
\subsubsection{Benchmarks against DiagMC}
To understand how well the different methods capture nonlocal correlations, we first focus on the 2D square lattice Hubbard model. The first Matsubara frequencies of the self-energy at the antinode, $\Sigma^{(1)}(\mathbf{k}=(0,\pi);i\omega_n)$, are plotted for $U=2$ in Fig.~\ref{fig:self_energy_antinode_matsubara} for the original TPSC formulation (OG TPSC), TPSC, TPSC+GG, DMFT+TPSC, DMFT+TPSC$\alpha$ and DiagMC. The TPSC and TPSC+GG schemes used here were introduced in Ref.~\onlinecite{https://doi.org/10.48550/arxiv.2205.13813}, while OG TPSC corresponds to the variant introduced in Ref.~\onlinecite{tpsc_1997}. The DiagMC results are taken from Ref.~\onlinecite{PhysRevX.11.011058}. The top subplot shows results for $T=0.33$ ($\beta=3$) and the bottom subplot for $T=0.1$ ($\beta=10$). As a reminder, we note that OG TPSC does not ensure consistency in the double occupancy between the first- and second-level TPSC approximations, \textit{i.e.} no $\alpha$ parameter is used. Comparing the results of Fig.~\ref{fig:self_energy_antinode_matsubara} with the ``TPSC'' panel in Fig.~10 of Ref.~\onlinecite{PhysRevX.11.011058}, which in our notation corresponds to OG TPSC, one notices that TPSC+GG (green curves) improves the self-energy substantially, so that it almost overlaps with the numerically exact DiagMC result (black curves). DMFT+TPSC (orange curves) and DMFT+TPSC$\alpha$ also show good agreement at $T=0.33$ with TPSC+GG and DiagMC. In the DMFT+TPSC schemes, the antinodal self-energy follows very closely that of TPSC+GG and DiagMC, except for the lowest Matsubara frequency, which reveals a too metallic behavior in this weak-coupling regime. The TPSC self-energy, on the other hand, is systematically too large (red curves). This result is rescaled, with respect to the OG TPSC result (cyan curves), by the introduction of the parameter $\alpha$ (see Eq.~\eqref{eq:tpsc_self_energy_alpha}), which worsens the agreement with DiagMC. However, since TPSC+GG also uses the parameter $\alpha$ and agrees very well with DiagMC, the lack of self-consistency appears to be the main problem. At the lower temperature $T=0.1$, shown in the bottom panel of Fig.~\ref{fig:self_energy_antinode_matsubara}, TPSC+GG is clearly the most accurate of the TPSC variants, again lying remarkably close to the exact DiagMC result. While DMFT+TPSC and DMFT+TPSC$\alpha$ underestimate the antinodal self-energy, they qualitatively follow the trend of the TPSC+GG and DiagMC results, whereas both TPSC and OG TPSC bend in the opposite direction at lower Matsubara frequencies and hence overestimate the pseudogap tendency.
Furthermore, the DMFT+TPSC schemes and TPSC+GG allow one to access lower temperatures by alleviating the convergence problems that limit the applicability of TPSC and OG TPSC in the vicinity of $T_x$ (the crossover temperature to the renormalized classical regime). It is also worth mentioning that the non-self-consistent TPSC+DMFT scheme introduced in Ref.~\onlinecite{https://doi.org/10.48550/arxiv.2211.01919} matches the DiagMC data well, although less accurately than TPSC+GG.
\begin{figure}[t]
\begin{minipage}[h]{\columnwidth}
\begin{center}
\includegraphics[width=\columnwidth]{matsubara_comparison_self_U_2.00_beta_3.00-crop.pdf}
\end{center}
\end{minipage}
\hfill
\vspace{0.1 cm}
\begin{minipage}[h]{\columnwidth}
\begin{center}
\includegraphics[width=\columnwidth]{matsubara_comparison_self_U_2.00_beta_10.00-crop.pdf}
\end{center}
\end{minipage}
\caption{Imaginary part of the Matsubara self-energy at the antinode ($\mathbf{k}=(0,\pi)$) for the half-filled Hubbard model at $U=2$. Results for $T=0.33$ (top subplot) and $T=0.1$ (bottom subplot) are shown for the various methods indicated in the legend.
This figure can be compared with the ``TPSC'' panel in Fig.~10 of Ref.~\onlinecite{PhysRevX.11.011058}.}
\label{fig:self_energy_antinode_matsubara}
\end{figure}
As we will see in Sec.~\ref{sec:results:Nonequilibrium}, even though TPSC+GG looks most convincing in the benchmark of Fig.~\ref{fig:self_energy_antinode_matsubara}, this is not anymore the case out of equilibrium when evaluating local quantities such as the impurity double occupancy~\eqref{eq:impurity_double_occupancy},
although we lack exact benchmarks in this case.
\subsubsection{Spin and charge vertices}
TPSC gives access to self-consistently computed spin and charge vertices, which exhibit a distinct $U$ dependence. In 3D, the separation between the charge and spin vertices, renormalized by the bandwidth $W$, grows somewhat faster with $U/W$ than in 2D, as shown in Fig.~\ref{fig:usp_uch_vs_u_n_0p5}.\cite{https://doi.org/10.48550/arxiv.2205.13813} The distinction between $\Gamma^{\text{ch}}$ and $\Gamma^{\text{sp}}$ is more pronounced in TPSC than in TPSC+GG, in both dimensions considered. Corresponding results without the rescaling of the vertices and of the interaction by $W$ can be found in Ref.~\onlinecite{https://doi.org/10.48550/arxiv.2205.13813}.
\begin{figure}[t]
\centering
\includegraphics[width=1.0\columnwidth]{usp_uch_vs_u-crop.pdf}
\caption{Bandwidth-renormalized spin and charge irreducible vertices as a function of normalized bare interaction for the nearest-neighbor square (2D) and cubic (3D) lattices for TPSC (bold lines) and TPSC+GG (dashed lines). The dimensionless temperature is $T/W=0.05$ and we consider half-filled systems. These data are taken from Ref.~\onlinecite{https://doi.org/10.48550/arxiv.2205.13813}.
}
\label{fig:usp_uch_vs_u_n_0p5}
\end{figure}
In Fig.~\ref{fig:Uch_Usp_vs_beta_3D_tpsc_GG}, the temperature dependence of the vertices calculated with TPSC+GG for various interaction strengths is plotted for the cubic lattice, while in Fig.~\ref{fig:Uch_Usp_vs_beta_3D_tpsc}, the TPSC results are shown for the same model parameters. These plots illustrate how the effective charge and spin interactions evolve when the renormalized classical regime is approached in the two methods. The vertical dotted lines in Fig.~\ref{fig:Uch_Usp_vs_beta_3D_tpsc} indicate the temperatures where $\Gamma^{\text{sp}}$ bends down; these temperatures will later be linked to a sharp up-turn in the static spin susceptibility.\footnote{These vertical lines roughly mark the temperatures below which the temperature dependence of the spin vertex is no longer linear, \textit{i.e.}, starts deviating from a straight line that fits the neighboring points at higher $T$.} There is no significant $T$ dependence of the spin and charge vertices in TPSC+GG at intermediate temperatures. In TPSC+GG only a hint of an up-turn in $\Gamma^{\text{ch}}$ can be resolved near $T_x$, due to a convergence slowdown at low temperatures, while in the case of TPSC a much more pronounced up-turn can be observed. The sharp downturn of $\Gamma^{\text{sp}}$ close to the renormalized classical regime (Fig.~\ref{fig:Uch_Usp_vs_beta_3D_tpsc}) is due to the suppression of the double occupancy extracted from the ansatz~\eqref{eq:equivalence_spin_irr_vertex_double_occupancy}.
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{Uchsp_vs_beta_Us_3D_tpscGG-crop.pdf}
\caption{$\Gamma^{\text{ch}}$ (top panel) and $\Gamma^{\text{sp}}$ (bottom panel) as a function of $T/W$ for $U=2,3,4,5$ in the half-filled 3D Hubbard model, calculated with TPSC+GG. The values of the vertices are normalized by $U$ for presentation reasons.
}
\label{fig:Uch_Usp_vs_beta_3D_tpsc_GG}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{Uchsp_vs_beta_Us_3D_tpsc-crop.pdf}
\caption{$\Gamma^{\text{ch}}$ (top panel) and $\Gamma^{\text{sp}}$ (bottom panel) as a function of $T/W$ for $U=2,3,4,5$ in the half-filled 3D Hubbard model, calculated with TPSC. The values of the vertices are normalized by $U$.
}
\label{fig:Uch_Usp_vs_beta_3D_tpsc}
\end{figure}
The local two-particle irreducible spin and charge vertices can also be computed within DMFT+TPSC. Throughout this work, the weak-coupling impurity solver introduced in Sec.~\ref{subsubsec:IPT} is used to treat the local impurity interactions. At half-filling, the second-order IPT self-energy is used, unless mentioned otherwise, in which case the self-energy diagrams up to the third-order are considered. In Fig.~\ref{fig:usp_uch_vs_u_n_0p5_dmft_tpsc}, the irreducible vertices are plotted as a function of the normalized bare interaction parameter $U/W$ at normalized temperature $T/W=0.05$. These can be compared with the TPSC and TPSC+GG results for 2D and 3D in Fig.~\ref{fig:usp_uch_vs_u_n_0p5}, which are very similar. $\Gamma^{\text{ch}}$ and $\Gamma^{\text{sp}}$ drift apart with increasing $U/W$, and as mentioned before this is more pronounced in 3D than in 2D. In DMFT+TPSC, both $\Gamma^{\text{ch}}$ and $\Gamma^{\text{sp}}$ have larger values than in TPSC or TPSC+GG at a given $U/W$. Because the IPT impurity solver is reliable only in the weak-coupling regime, the range of interactions shown is limited to $U/W=0.5$.
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{usp_uch_vs_u_dmft_tpsc-crop.pdf}
\caption{Dimensionless spin and charge irreducible vertices as a function of normalized bare interaction for the square and cubic lattices, calculated with DMFT+TPSC. The dimensionless temperature is $T/W=0.05$ and the systems are half-filled.
}
\label{fig:usp_uch_vs_u_n_0p5_dmft_tpsc}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{Uchsp_vs_beta_Us_3D_dmft_tpsc-crop.pdf}
\caption{$\Gamma^{\text{ch}}$ (top panel) and $\Gamma^{\text{sp}}$ (bottom panel) as a function of $T$ for $U=2,3,4,5$ in the 3D half-filled nearest-neighbor Hubbard model. The values of the vertices are normalized by $U$ and were obtained using DMFT+TPSC. At lower temperatures for $U=5$, the DMFT solution could not be converged.
}
\label{fig:Uch_Usp_vs_beta_3D_dmft_tpsc}
\end{figure}
The DMFT+TPSC irreducible vertices $\Gamma^{\text{ch}}$ (top panel) and $\Gamma^{\text{sp}}$ (bottom panel) are plotted in Fig.~\ref{fig:Uch_Usp_vs_beta_3D_dmft_tpsc} as a function of temperature for the half-filled 3D Hubbard model with $U=\{2,3,4,5\}$. For a better comparison with the TPSC+GG and TPSC results, we use here the same $y$ axis range as in Figs.~\ref{fig:Uch_Usp_vs_beta_3D_tpsc_GG} and \ref{fig:Uch_Usp_vs_beta_3D_tpsc}. Again, the vertical lines in Fig.~\ref{fig:Uch_Usp_vs_beta_3D_dmft_tpsc} indicate the temperatures where $\Gamma^{\text{sp}}$ bends down, and these will be related to an upturn in the static spin susceptibility. Contrary to the TPSC+GG and TPSC temperature dependence of $\Gamma^{\text{ch}}$, the charge vertex gets significantly reduced as temperature is lowered, but it starts from higher values at high $T$. On the other hand, $\Gamma^{\text{sp}}$ almost saturates at lower temperatures in DMFT+TPSC, and then sharply drops in the renormalized classical regime near $T_x$. In contrast to TPSC, the rapid decrease of the spin irreducible vertex $\Gamma^{\text{sp}}$ (concomitant with a drop in the double occupancy) in DMFT+TPSC does not coincide with a shooting up of $\Gamma^{\text{ch}}$ (compare Figs.~\ref{fig:Uch_Usp_vs_beta_3D_tpsc} and \ref{fig:Uch_Usp_vs_beta_3D_dmft_tpsc}).
\subsubsection{Spin susceptibility}
In Fig.~\ref{fig:static_sus_vs_T_TPSC}, the static spin susceptibility at half-filling is plotted for both TPSC and TPSC+GG in 2D (bottom subplot) and 3D (top subplot). It illustrates the growth of the static spin correlations as the temperature is lowered. The up-turn in $\chi^{\text{sp}}(\tau=0,\mathbf{k}_{\pi})$ marks the temperature crossover $T_x$ to the renormalized classical regime. Increasing the interaction $U$ displaces the up-turn to higher temperatures in both TPSC and TPSC+GG. However, in TPSC+GG, for the same interaction value, the estimated crossover temperature $T_x$ is consistently lower than that extracted from the TPSC static susceptibility. In 3D, the shooting-up of the static spin susceptibility at $\mathbf{k}_{\pi}$ at low temperature in TPSC coincides with the up-turn of $\Gamma^{\text{ch}}$, \textit{cf.} Figs.~\ref{fig:Uch_Usp_vs_beta_3D_tpsc} and \ref{fig:static_sus_vs_T_TPSC} (top subplot), as becomes clear from the vertical lines, which are at the same temperatures in both figures.
\begin{figure}[t]
\begin{minipage}[h]{1.0\columnwidth}
\begin{center}
\includegraphics[width=1.0\columnwidth]{static_sus_vs_beta_Us_3D-crop.pdf}
\end{center}
\end{minipage}
\hfill
\vspace{0.1cm}
\begin{minipage}[h]{1.0\columnwidth}
\begin{center}
\includegraphics[width=1.0\linewidth]{static_sus_vs_beta_Us_2D-crop.pdf}
\end{center}
\end{minipage}
\caption{Static spin susceptibility of the 3D (top subplot) and 2D (bottom subplot) models at momentum $\mathbf{k}_{\pi}$ as a function of temperature for the interactions $U=2,3,4,5$ and half-filling. Results are shown for TPSC (bold lines) and TPSC+GG (dashed lines).
The vertical lines coincide with those in Fig.~\ref{fig:Uch_Usp_vs_beta_3D_tpsc}.
}
\label{fig:static_sus_vs_T_TPSC}
\end{figure}
In the bottom subplot of Fig.~\ref{fig:static_sus_vs_T_TPSC}, the static spin susceptibility $\chi^{\text{sp}}(\tau=0,\mathbf{k}_{\pi})$ is plotted for the 2D model. For equal interaction strengths $U$ (without normalizing $U$ by $W$), the up-turns in the static susceptibility happen at slightly lower temperatures when increasing the dimension, except at $U\geq 4$. As a consequence, a larger temperature range is accessible in 3D compared to 2D at weak coupling, since $T_x$ is lowered in 3D. In 3D, the TPSC+GG results for the static susceptibility are qualitatively more similar to TPSC than is the case in 2D, where only the beginning of the up-turn is numerically accessible.\cite{https://doi.org/10.48550/arxiv.2205.13813} This might be an indication that TPSC is more accurate in 3D.
To demonstrate that DMFT+TPSC still captures the growth of the AFM correlations with decreasing temperature at various interactions, the 2D and 3D static spin susceptibilities are plotted for DMFT+TPSC in Fig.~\ref{fig:static_spin_sus_fermi_surface_dmft_tpsc_3D}. These results can be compared directly to Fig.~\ref{fig:static_sus_vs_T_TPSC} for TPSC and TPSC+GG. It is obvious that the same qualitative behavior of the static spin response is observed also in the presence of the DMFT correction: with increasing interaction strength, the up-turn in the static spin susceptibility is shifted to higher temperatures. Furthermore, the relative change in the $T$ value of the up-turns increases as $U$ is decreased.
(Remember that since TPSC and its variants make use of the spin rotational symmetry in the derivation, these methods can only describe the growth of spin correlations, but not the spontaneous symmetry-breaking.)
Similarly to TPSC+GG, the up-turns at fixed $U$ in DMFT+TPSC occur at lower temperatures when compared to TPSC.
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{static_sus_vs_beta_Us_dmft_tpsc_3D-crop.pdf}
\caption{Static spin susceptibility of the half-filled 2D and 3D model at momentum $\mathbf{k}_{\pi}$ as a function of temperature for interactions $U=1,2,3,4$ (2D) and $U=2,3,4,5$ (3D), obtained with DMFT+TPSC. The vertical lines coincide with those in Fig.~\ref{fig:Uch_Usp_vs_beta_3D_dmft_tpsc}.
}
\label{fig:static_spin_sus_fermi_surface_dmft_tpsc_3D}
\end{figure}
A different way of quantifying the growth of the spin correlations is to plot the antiferromagnetic correlation length $\xi_{\text{sp}}$ as a function of inverse temperature. In Fig.~\ref{fig:corr_len_comparison_eq}, $\xi_{\text{sp}}$ is shown for the half-filled 2D square lattice Hubbard model at constant interaction $U=2$. Several methods are compared against each other, namely OG TPSC, TPSC+GG, DMFT+TPSC, D$\Gamma$A,\cite{PhysRevB.75.045118_dynamical_vertex_approx} DiagMC,\cite{PhysRevLett.81.2514,Kozik_2010} TRILEX~\cite{PhysRevB.92.115109,PhysRevB.93.235124} and the Parquet Approximation (PA).\cite{doi:10.1063/1.1704062,doi:10.1063/1.1704064} The correlation length $\xi_{\text{sp}}$ is extracted from the Ornstein-Zernike fit of the momentum-dependent static spin susceptibility $\chi^{\text{sp}}_{\mathbf{q}-\mathbf{Q}}(iq_{n}=0)$ in the vicinity of the AFM scattering wave vector $\mathbf{Q}$:
\begin{align*}
\chi^{\text{sp}}_{\mathbf{q}-\mathbf{Q}}(iq_{n}=0) \approx \frac{A}{(\mathbf{q}-\mathbf{Q})^{2} + \xi_{\text{sp}}^{-2}},
\end{align*}
where $\mathbf{Q}=\mathbf{k}_{\pi}$ ($\mathbf{k}_{\pi}=(\pi,\pi)$ in 2D) at half-filling and $A$ is a weight of order $1$. It is clear from Fig.~\ref{fig:corr_len_comparison_eq} that the original formulation of TPSC (OG TPSC) overestimates the growth of spin correlations as the temperature is decreased, \textit{i.e.}, $T_{x}$ is much higher than the values estimated by the other, more accurate methods. The latter predict similar correlation lengths in the temperature range up to $\beta\simeq 12$. In particular, both TPSC+GG and DMFT+TPSC follow very closely the $\xi_{\text{sp}}$ results obtained from TRILEX, PA and D$\Gamma$A. Thus, TPSC+GG and DMFT+TPSC both correct the overestimation of the spin correlations of OG TPSC, and this is reflected also in the antinodal self-energy at the Fermi surface, where TPSC+GG and DMFT+TPSC agree quite well with DiagMC, especially in the case of TPSC+GG (Fig.~\ref{fig:self_energy_antinode_matsubara}).
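A possible implementation of this fit is sketched below with a standard nonlinear least-squares routine; the helper names and initial guesses are illustrative and not those of our code.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def ornstein_zernike(dq2, A, xi):
    # chi_sp(q, iq_n=0) ~ A/(|q-Q|^2 + xi^-2),
    # with dq2 = |q - Q|^2
    return A / (dq2 + xi**-2)

def fit_spin_correlation_length(q_offsets, chi_sp):
    # q_offsets: (N, d) array of momenta q - Q in
    # the fit window; chi_sp: corresponding (real)
    # values of the static spin susceptibility
    dq2 = np.sum(np.asarray(q_offsets)**2, axis=-1)
    (A, xi), _ = curve_fit(ornstein_zernike, dq2,
                           chi_sp, p0=(1.0, 1.0))
    return A, xi
\end{verbatim}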
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\linewidth]{corr_length_vs_beta_Us_2D-crop.pdf}
\caption{$\xi_{{\text{sp}}}$ as a function of $\beta=1/T$ for $U=2$ in the half-filled 2D Hubbard model. The $y$-axis uses a logarithmic scale. The methods compared are ``OG TPSC'' (green circles, called TPSC in Refs.~\onlinecite{PhysRevX.11.011058,https://doi.org/10.48550/arxiv.2211.01919}), TPSC+GG (orange diamonds), DMFT+TPSC (cyan crosses), D$\Gamma$A (blue circles), DiagMC (black triangles), TRILEX (red circles) and PA (green triangles). The data calculated using TRILEX, DiagMC, OG TPSC, D$\Gamma$A and PA were taken from Ref.~\onlinecite{PhysRevX.11.011058}. The $3^{\text{rd}}$-order IPT impurity solver is used in DMFT+TPSC (see Sec.~\ref{ch:3rd_order_IPT}).}
\label{fig:corr_len_comparison_eq}
\end{figure}
\subsubsection{Double occupancy}
In DMFT+TPSC, there are local Green's functions and self-energies of the auxiliary Anderson impurity model, \textit{i.e.} $\mathcal{G}^{\text{imp}}$, $\Sigma^{\text{imp}}$, and corresponding functions defined on the lattice, \textit{i.e.} $\mathcal{G}^{\text{TPSC}}$, $\Sigma^{\text{TPSC}}$. With these quantities, we can calculate a double occupancy for the impurity $D^{\text{imp}}$ via Eq.~\eqref{eq:impurity_double_occupancy} and a double occupancy on the lattice $D^{\text{TPSC}}$ via Eq.~\eqref{eq:tr_sk_Gk_TPSC}.
In Fig.~\ref{fig:Ds_dmft_tpsc_3D} we plot both estimates for the 3D model. The lower the temperature and the larger the interaction, the larger the deviation between $D^{\text{imp}}$ and $D^{\text{TPSC}}$ becomes. The largest deviation for each interaction is displayed in the figure as an absolute relative percentage with respect to $D^{\text{imp}}$. Overall, the deviations are quite small (below $3\%$). The deviations are larger in the 2D model, but the same qualitative trend in $U$ and $T$ is observed (not shown). At higher temperatures the double occupancies bend upwards, since they approach $D=0.25$ as $T\to\infty$ at half-filling.
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\linewidth]{Ds_dmft_tpsc_3D-crop.pdf}
\caption{Double occupancies $D^{\text{imp}}$ (Eq.~\eqref{eq:impurity_double_occupancy}) and $D^{\text{TPSC}}$ (Eq.~\eqref{eq:tr_sk_Gk_TPSC}) as a function of temperature for several interactions $U$ in the half-filled 3D Hubbard model. The annotated percentages denote the largest absolute variation relative to $D^{\text{imp}}$.}
\label{fig:Ds_dmft_tpsc_3D}
\end{figure}
\subsection{Nonequilibrium}
\label{sec:results:Nonequilibrium}
\subsubsection{General remarks}
We now switch to the real-time dynamics of perturbed correlated lattice systems, as described by the different TPSC variants. In Fig.~\ref{fig:self_energy_antinode_matsubara}, it was shown by comparing to DiagMC that the equilibrium self-energy at the antinodal point of the Fermi surface calculated with TPSC+GG and DMFT+TPSC was improved substantially, compared to TPSC, especially at higher temperatures. One might thus naively expect that these two methods also provide the best description of the nonequilibrium dynamics. However, as shown below, the incorporation of the DMFT local self-energy has substantial effects on the time evolution and cures some anomalies of our (approximate) TPSC+GG implementation.
\subsubsection{Interaction ramps}
We first investigate the double occupancy following an interaction ramp from $U=0\to 1$ in the 2D Hubbard model at half-filling, which is the most challenging filling for TPSC.\cite{tpsc_1997,https://doi.org/10.48550/arxiv.2211.01919} Besides the various TPSC-based methods, we consider second-order lattice perturbation theory, $\Sigma^{(2)}$,\cite{Tsuji2014} which employs the self-energy
\begin{align*}
&\Sigma^{(2)}_{\mathbf{k},\sigma}(z_1,z_2) = U(z_1)U(z_2)\int\frac{\mathrm{d}^Dq\mathrm{d}^Dk^{\prime}}{(2\pi)^{2D}}\notag\\
&\hspace{0.0cm}\times\mathcal{G}^0_{\mathbf{k}+\mathbf{q},\sigma}(z_1,z_2)\mathcal{G}^0_{\mathbf{k}^{\prime}+\mathbf{q},-\sigma}(z_2,z_1^+)\mathcal{G}_{\mathbf{k}^{\prime},-\sigma}^0(z_1,{z_2}^+)
\end{align*}
in the lattice Dyson equation~\eqref{eq:lattice_G_definition_improved}. This scheme should provide useful reference data in the weak-coupling regime $U\ll W$. OG TPSC refers to the original formulation of TPSC that utilizes the self-energy $\Sigma_{\mathbf{k}}\to \Sigma^{\text{TPSC},(1)}_{\mathbf{k}}$~\eqref{eq:fft_total_self_energy}. In the case of TPSC+GG, the self-energy $\Sigma_{\mathbf{k}}$ used is laid out in Eq.~\eqref{eq:tpsc_self_energy_alpha}. DMFT employs the third-order IPT as impurity solver~(see Sec.~\ref{subsubsec:IPT}), so that the local self-energy becomes $\Sigma^{(3)}_{\text{imp}}$, while DMFT+TPSC uses the momentum-dependent $\Sigma_{\mathbf{k}}$ defined in Eq.~\eqref{eq:sigma_DMFT_TPSC}. We remind the reader that OG TPSC does not enforce the sum-rule~\eqref{eq:double_occupancy_sum_rule_alpha}, \textit{i.e.} it does not include the time-dependent parameter $\alpha$ that forces the double occupancy calculated from the TPSC ansatz~\eqref{eq:equivalence_spin_irr_vertex_double_occupancy} to be the same as that computed from the trace over lattice TPSC quantities~\eqref{eq:double_occupancy_sum_rule_alpha}.
In this paper, the interaction ramp $\Delta U$ is described by the error function
\begin{align}
\label{eq:ramp_profile_function_erf}
\Delta U(t) = \pm\left(\frac{U_{\text{f}}-U_{\text{i}}}{2}\right)\text{erf}(\gamma t + \delta) + \left(\frac{U_{\text{f}}+U_{\text{i}}}{2}\right),
\end{align}
where $U_{\text{i}}$ corresponds to the initial interaction value and $U_{\text{f}}$ to the final one, $\gamma$ controls the steepness of the inflection of the curve and $\delta$ its position on the time axis. A global minus sign appears in Eq.~\eqref{eq:ramp_profile_function_erf} in the case of a down ramp ($U_{\text{f}}<U_{\text{i}}$). The same form is also used for the lattice hopping ramps ($U\to t^{\text{hop}}_z$).
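For concreteness, the ramp profile can be implemented as in the following sketch, where the \texttt{sign} argument plays the role of the global $\pm$ in Eq.~\eqref{eq:ramp_profile_function_erf}:
\begin{verbatim}
import numpy as np
from scipy.special import erf

def ramp_profile(t, U_i, U_f, gamma, delta,
                 sign=+1.0):
    # error-function ramp; sign = -1 reproduces
    # the global minus used for down-ramps.
    # Example from the text: gamma = 3.5,
    # delta = 2.45 for the U = 0 -> 1 ramp.
    t = np.asarray(t, dtype=float)
    return (sign*0.5*(U_f - U_i)*erf(gamma*t + delta)
            + 0.5*(U_f + U_i))
\end{verbatim}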
Figure~\ref{fig:DMFT_TPSC_tr_sigmak_Gk_0_1} shows the double occupancy calculated from the lattice quantities (Eq.~\eqref{eq:tr_sk_Gk_TPSC}) for an interaction ramp with parameters $\gamma=3.5$ and $\delta=2.45$ in Eq.~\eqref{eq:ramp_profile_function_erf}. The double occupancies $D$ computed by DMFT+IPT and $\Sigma^{(2)}$ follow each other quite closely, both featuring a dip at the end of the interaction ramp, followed by fast thermalization. OG TPSC, with the approximate solution \eqref{eq:bethe_Salpeter_eq_approximated} of the BSE, however, predicts a qualitatively different transient behavior of this local quantity: it yields an (unphysical) increase of the double occupancy at the beginning of the interaction ramp and no dip at the end of the ramp. Furthermore, the thermalized value of the double occupancy is lower than the value predicted by the other methods. DMFT+TPSC agrees rather well at all times with the results from DMFT+IPT and $\Sigma^{(2)}$.
\begin{figure}[t]
\includegraphics[width=\linewidth]{benchmark_TPSC_trSG_Umin_0.00_Umax_1.00-crop.pdf}
\caption{Double occupancy of the 2D Hubbard model calculated from the lattice quantities, Eq.~\eqref{eq:tr_sk_Gk_TPSC}, for $\Sigma^{(2)}$, DMFT, OG TPSC, DMFT+TPSC and TPSC+GG. The interaction is ramped from $U=0$ to $U=1$ in the time interval indicated by the grey shading and the initial temperature is $T=0.2$.
}
\label{fig:DMFT_TPSC_tr_sigmak_Gk_0_1}
\end{figure}
One way to correct the transient anomalies of OG TPSC is to resort to the sum-rule \eqref{eq:double_occupancy_sum_rule_alpha} and employ the TPSC second-level approximation~\eqref{eq:tpsc_self_energy_alpha}, \textit{i.e.} switch to TPSC (or TPSC+GG if there is self-consistency). In these schemes, the double occupancy does not show a transient increase at the start of the up-ramp and there is no ambiguity in the definition of the double occupancy, since $D$ obtained from the ansatz is equal to $D$ calculated from the lattice quantities by construction (Eq.~\eqref{eq:double_occupancy_sum_rule_alpha}). The effect of this correction is illustrated in Fig.~\ref{fig:DMFT_TPSC_Dloc_0_1} along with the same result for $\Sigma^{(2)}$ as in Fig.~\ref{fig:DMFT_TPSC_tr_sigmak_Gk_0_1}. While the unphysical increase in the double occupation no longer appears, there is no minimum at the end of the ramp and -- most prominently -- a time-shift in the response to the interaction ramp appears, compared to the other methods. Some of these discrepancies may be related to the fact that we approximately solve the Bethe-Salpeter equations by using Eq.~\eqref{eq:bethe_Salpeter_eq_approximated}.
Note that for DMFT+IPT and DMFT+TPSC the double occupancies illustrated in Fig.~\ref{fig:DMFT_TPSC_Dloc_0_1} are obtained from the impurity quantities using \eqref{eq:impurity_double_occupancy}. In the case of DMFT+IPT this gives the same result as in Fig.~\ref{fig:DMFT_TPSC_tr_sigmak_Gk_0_1}, while there is a small difference for DMFT+TPSC, which employs a momentum-dependent self-energy. However, the difference between the DMFT+TPSC data of Figs.~\ref{fig:DMFT_TPSC_tr_sigmak_Gk_0_1} and \ref{fig:DMFT_TPSC_Dloc_0_1} is only about $1$\%.
\begin{figure}[t]
\includegraphics[width=\linewidth]{benchmark_TPSC_Dloc_Umin_0.00_Umax_1.00-crop.pdf}
\caption{Double occupancies calculated using the impurity quantities~\eqref{eq:impurity_double_occupancy} in the cases of DMFT+IPT and DMFT+TPSC. In the case of TPSC and TPSC+GG, the double occupancy taken from Eq.~\eqref{eq:equivalence_spin_irr_vertex_double_occupancy} is shown. The result for $\Sigma^{(2)}$ as well as the parameters are the same as in Fig.~\ref{fig:DMFT_TPSC_tr_sigmak_Gk_0_1}.
}
\label{fig:DMFT_TPSC_Dloc_0_1}
\end{figure}
We next consider an interaction ramp from $U=1$ to $U=3$ with the ramp profile corresponding to the parameters $\gamma=1.5$ and $\delta=0.675$ in Eq.~\eqref{eq:ramp_profile_function_erf}. The initial temperature is $T=0.33$, the model is still the half-filled 2D Hubbard model, and we focus on the results from DMFT+TPSC. In Fig.~\ref{fig:DMFT_TPSC_local_qties_U_1_3}, the local irreducible vertices $\Gamma^{\text{ch}}$ (top panel) and $\Gamma^{\text{sp}}$ (second panel from top), the impurity double occupancy $D^{\text{imp}}$ (Eq.~\eqref{eq:impurity_double_occupancy}, third panel from top) and the lattice double occupancy $D^{\text{TPSC}}$ (Eq.~\eqref{eq:tr_sk_Gk_TPSC}, bottom panel) are displayed over a time window of $\Delta t = 8$. After the ramp, $\Gamma^{\text{ch}}$ thermalizes to $6.10$ and $\Gamma^{\text{sp}}$ to $2.05$ in DMFT+TPSC (dashed lines). These values are close to those obtained with TPSC+GG for the same ramp (solid lines), which are $\Gamma^{\text{ch}}\simeq 6.01$ and $\Gamma^{\text{sp}}\simeq 2.05$. The same holds for the local double occupancies, which are calculated from Eq.~\eqref{eq:equivalence_spin_irr_vertex_double_occupancy} in TPSC+GG and from Eq.~\eqref{eq:impurity_double_occupancy} in DMFT+TPSC: for TPSC+GG, the double occupancy reaches $D=0.172$, while the value is $D^{\text{imp}}=0.177$ for DMFT+TPSC (green curves). The thermalized value of the lattice double occupancy $D^{\text{TPSC}}$ (Eq.~\eqref{eq:tr_sk_Gk_TPSC}) is $0.174$ (orange curve), which is quite close to that of TPSC+GG. The double occupancies $D^{\text{TPSC}}$ and $D^{\text{imp}}$ overlap almost perfectly. Moreover, given that the interaction ramp used in Fig.~\ref{fig:DMFT_TPSC_local_qties_U_1_3} is slower than that used in Figs.~\ref{fig:DMFT_TPSC_tr_sigmak_Gk_0_1} and \ref{fig:DMFT_TPSC_Dloc_0_1}, no transient dips in the double occupancies are observed near the end of the ramp. Notice that the response of the charge vertex $\Gamma^{\text{ch}}$ to the ramp (top panel of Fig.~\ref{fig:DMFT_TPSC_local_qties_U_1_3}) is delayed compared to that of the spin vertex $\Gamma^{\text{sp}}$ (second panel from the top of Fig.~\ref{fig:DMFT_TPSC_local_qties_U_1_3}), as was previously reported in the case of TPSC and TPSC+GG,\cite{https://doi.org/10.48550/arxiv.2205.13813} which, in contrast to DMFT+TPSC, make use of the ansatz \eqref{eq:equivalence_spin_irr_vertex_double_occupancy} to connect $D$ and $\Gamma^{\text{sp}}$.
\begin{figure}[t]
\includegraphics[width=\linewidth]{Locals_dmft_tpsc_and_tpsc_GG-crop.pdf}
\caption{Local DMFT+TPSC (dashed lines) and TPSC+GG (solid lines) quantities in the 2D Hubbard model for the ramp from $U=1$ to $U=3$ at initial temperature $T=0.33$. The charge irreducible vertex (top panel), spin irreducible vertex (second panel from top), $D^{\text{imp}}$ (third panel from top) and $D^{\text{TPSC}}$ (bottom panel) are plotted for a time window of $\Delta t=8$.
}
\label{fig:DMFT_TPSC_local_qties_U_1_3}
\end{figure}
A drawback of the DMFT+TPSC implementation which does not enforce the equivalence of $D^{\text{TPSC}}$ (Eq.~\eqref{eq:tr_sk_Gk_TPSC}) and $D^{\text{imp}}$ (Eq.~\eqref{eq:impurity_double_occupancy}) is that there is no unambiguous way to determine the potential energy and hence the thermalized temperature from the total energy after the ramp.
In the following analysis, we calculate the total energy from the lattice quantities $\Sigma_{\mathbf{k}}$ (Eq.~\eqref{eq:sigma_DMFT_TPSC}) and $\mathcal{G}_{\mathbf{k}}^{\text{lat}}$ (Eq.~\eqref{eq:lattice_G_definition_improved}). Then
the kinetic energy of the system is $E_\text{k}(t)=\frac{-i}{N_{\mathbf{k}}}\sum_{\mathbf{k}}\epsilon_{\mathbf{k}}\mathcal{G}^{<}_{\mathbf{k}}(t,t)$, while the potential energy is $E_{\text{p}}(t)=\frac{-i}{N_k}\sum_{\mathbf{k}}\int_{\mathcal{C}}\mathrm{d}z \left[\Sigma_{\mathbf{k}}(t,z)\mathcal{G}_{\mathbf{k}}(z,t)\right]^<$, which gives the total energy of the lattice electrons $E_{\text{tot}}(t)=E_{\text{k}}(t)+E_{\text{p}}(t)$. A temperature of $T_{\text{therm}}\simeq 0.32$ is obtained for the $U=1\to 3$ ramp used in Fig.~\ref{fig:DMFT_TPSC_local_qties_U_1_3}. In Fig.~\ref{fig:DMFT_TPSC_energy_plane_Us}, the corresponding total energy in the post-ramp state is marked by a red cross in the energy plane and compared to results calculated in equilibrium (colored dots). The green cross shows the DMFT+TPSC total lattice energy after the interaction ramp $U=0\to 1$ presented in Fig.~\ref{fig:DMFT_TPSC_tr_sigmak_Gk_0_1}. One can notice that the red cross is quite far from the thermal reference points for $U=3$, corresponding to the post-ramp value of the interaction, meaning that the state after the ramp is not a thermalized state (even though there seems to be little evolution in physical observables). This is surprising, since a trapping in nonthermal states is generically expected for weak interactions, but not in the intermediate coupling regime.\cite{Moeckel2008,Eckstein2009}
From Fig.~\ref{fig:DMFT_TPSC_energy_plane_Us}, different effective temperatures could be defined based on the potential energy $E_{\text{p}}$ or the kinetic energy $E_{\text{k}}$. The temperature extracted from $E_{\text{p}}$ is $T_{\text{therm}}(E_{\text{p}})\simeq 0.93$, whereas that extracted from $E_{\text{k}}$ is $T_{\text{therm}}(E_{\text{k}})\simeq 0.28$.
This unexpected trapping in a nonthermal state may be related to the fact that $U=3$ is close to the regime where the weak-coupling impurity solver breaks down.\cite{Tsuji_2013b} At the weaker post-ramp interaction $U=1$, the energy is almost compatible with a thermalized state, since the green cross practically falls on the $U=1$ line of thermalized states. Here, the discrepancy to the thermalized value may indeed be the result of slow thermalization.
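In practice, such effective temperatures can be assigned by inverting an equilibrium reference curve, e.g.~$E_{\text{p}}(T)$ or $E_{\text{k}}(T)$ computed at the post-ramp parameters. A minimal sketch, assuming the reference curve is monotonic on the sampled grid:
\begin{verbatim}
import numpy as np

def effective_temperature(E_obs, T_grid, E_eq_grid):
    # invert the equilibrium curve E_eq(T) by
    # interpolation; T_grid and E_eq_grid are the
    # equilibrium reference data (e.g. the colored
    # dots of the energy-plane figures) and E_obs
    # the post-ramp value of E_tot, E_p or E_k
    E_eq = np.asarray(E_eq_grid)
    order = np.argsort(E_eq)
    return float(np.interp(E_obs, E_eq[order],
                           np.asarray(T_grid)[order]))
\end{verbatim}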
For comparison, we show in Fig.~\ref{fig:TPSC_GG_energy_plane_Us} the same type of analysis as in Fig.~\ref{fig:DMFT_TPSC_energy_plane_Us}, but for TPSC+GG. This time, the red (green) cross corresponds to the TPSC+GG post-ramp state for the ramp shown in Fig.~\ref{fig:DMFT_TPSC_local_qties_U_1_3} (Fig.~\ref{fig:DMFT_TPSC_tr_sigmak_Gk_0_1}). This figure clearly demonstrates that within TPSC+GG the system approximately thermalizes after an interaction ramp, even at $U=3$. The problem of trapping or unexpectedly slow thermalization at intermediate $U$ is thus much reduced in TPSC+GG, compared to DMFT+TPSC with the bare IPT impurity solver.
One way to address the issue of non-unique double occupations and potential energies is to introduce a parameter $\alpha$ that enforces the equivalence between the impurity $D^{\text{imp}}$~\eqref{eq:impurity_double_occupancy} and the lattice $D^{\text{TPSC}}$~\eqref{eq:tr_sk_Gk_TPSC}, as indicated in Eq.~\eqref{eq:double_occupancy_sum_rule_alpha_DMFT_TPSC}. This extra sum-rule promotes DMFT+TPSC to DMFT+TPSC$\alpha$. This scheme, however, only works well in equilibrium, as already mentioned, and it does not solve problems originating from the bare IPT solver.
\begin{figure}[t]
\includegraphics[width=\linewidth]{Epot_vs_Ekin_Us_dmft_tpsc_2D-crop.pdf}
\caption{Color plot illustrating the relation between the potential energy $E_{\text{p}}$ ($x$-axis), the kinetic energy $E_{\text{k}}$ ($y$-axis) and the corresponding equilibrium temperature for DMFT+TPSC and $U=1,2,3,4$ (see annotations). The 2D square lattice Hubbard model is used. The red (green) cross marks the post-ramp state, obtained from the interaction ramp shown in Fig.~\ref{fig:DMFT_TPSC_local_qties_U_1_3} (Fig.~\ref{fig:DMFT_TPSC_tr_sigmak_Gk_0_1}).
}
\label{fig:DMFT_TPSC_energy_plane_Us}
\end{figure}
\begin{figure}[t]
\includegraphics[width=\linewidth]{Epot_vs_Ekin_Us_tpsc_GG_2D-crop.pdf}
\caption{Color plot analogous to Fig.~\ref{fig:DMFT_TPSC_energy_plane_Us}, but for TPSC+GG. The red (green) cross marks the post-ramp state obtained from the interaction ramp shown in Fig.~\ref{fig:DMFT_TPSC_local_qties_U_1_3} (Fig.~\ref{fig:DMFT_TPSC_tr_sigmak_Gk_0_1}).
}
\label{fig:TPSC_GG_energy_plane_Us}
\end{figure}
\subsubsection{Dimensional crossover}
We next consider lattice hopping ramps to test the performance of TPSC, TPSC+GG and DMFT+TPSC in dimensions $\ge 2$. In these ramps, we switch on the hopping $t^{\text{hop}}_z$ in the direction perpendicular to the plane, and thus induce a transition from the 2D Hubbard model ($t^{\text{hop}}_z=0$) to the 3D model ($t^{\text{hop}}_z=1$). Figure~\ref{fig:TPSC_local_qties_tp_0_1} shows TPSC (solid lines) and TPSC+GG (dashed lines) results for such a ramp at constant interaction $U=2.5$ and initial temperature $T=0.2$. As the dimension is increased, $\Gamma^{\text{ch}}$ decreases while the double occupation increases. This makes sense, since the bandwidth $W$ increases from $8t^{\text{hop}}$ (square lattice) to $12t^{\text{hop}}$ (cubic lattice) and hence the correlation strength is reduced. On the other hand, the spin irreducible vertex $\Gamma^{\text{sp}}$ varies in the opposite direction (see second panel from the top), since $D$ increases and $\Gamma^{\text{sp}}$ and $D$ are related via the ansatz~\eqref{eq:equivalence_spin_irr_vertex_double_occupancy}. As a result, the spin and charge vertices become more similar, which is the expected behavior when $U/W$ decreases. The parameter $\alpha$, which enforces consistency between the different evaluations of the double occupancy, relaxes slowly, since it is strongly affected by the $\mathbf{k}$-dependent thermalization of the (convolved) single-particle quantities. Overall, TPSC shows larger variations of the quantities and faster thermalization than TPSC+GG.
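The dimensional crossover enters only through the single-particle dispersion. A minimal sketch, assuming the standard nearest-neighbor tight-binding form of Eq.~\eqref{eq:dispersion_relation}:
\begin{verbatim}
import numpy as np

def dispersion(kx, ky, kz, t_hop=1.0, t_hop_z=0.0):
    # t_hop_z = 0: square lattice (W = 8 t_hop);
    # t_hop_z = t_hop: cubic lattice (W = 12 t_hop).
    # The precise form is assumed to match
    # Eq. (dispersion_relation).
    return (-2.0*t_hop*(np.cos(kx) + np.cos(ky))
            - 2.0*t_hop_z*np.cos(kz))
\end{verbatim}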
\begin{figure}[t]
\includegraphics[width=\linewidth]{Locals_tpsc_and_tpsc_GG_3D-crop.pdf}
\caption{Local TPSC (solid lines) and TPSC+GG (dashed lines) quantities in a dimensional ramp from a square lattice to a cubic lattice corresponding to a ramp from $t^{\text{hop}}_z=0$ to $t^{\text{hop}}_z=1$ in the dispersion relation \eqref{eq:dispersion_relation}. The initial temperature is $T=0.2$ and the constant interaction is $U=2.5$. The charge irreducible vertex (top panel), spin irreducible vertex (second panel from top), $D$ (third panel from top) and $\alpha$ (bottom panel) are plotted for a time window of $\Delta t=7$.}
\label{fig:TPSC_local_qties_tp_0_1}
\end{figure}
By construction, nonequilibrium TPSC and its variants rely to a much larger extent on the conservation of the potential energy $E_{\text{p}}$ than on the kinetic energy $E_{\text{k}}$, because the local irreducible vertices are strongly dependent on the double occupancy $D$ (see for instance Eqs.~\eqref{eq:equivalence_spin_irr_vertex_double_occupancy}, \eqref{eq:fluctuation_dissipation_two_particle} or \eqref{eq:double_occupancy_sum_rule_alpha}). When the total energy drifts after the ramp, which happens for too large and/or too fast ramps, especially for TPSC following a lattice hopping ramp as depicted in Fig.~\ref{fig:TPSC_local_qties_tp_0_1}, this drift is mainly caused by $E_{\text{k}}$. Therefore, as long as $E_{\text{p}}$ is stable after the ramps, which is the case in most situations, the TPSC quantities such as $\Gamma^{\text{sp/ch}}$ and $D$ will stabilize at some value. One particularly useful observation is that even if $E_{\text{k}}$ drifts, thermalized temperatures can be assigned within TPSC frameworks by matching the post-ramp values of the local quantities ($\Gamma^{\text{sp/ch}}$ and $D$) with those calculated at equilibrium for the same post-ramp parameters: the $T_{\text{therm}}$ values thereby extracted for each local quantity are almost exactly the same,\footnote{This is however not the case in DMFT+TPSC.} \textit{i.e.} $T_{\text{therm}}(\Gamma^{\text{ch}})=T_{\text{therm}}(\Gamma^{\text{sp}})=T_{\text{therm}}(D)$. Since $E_{\text{k}}$ is calculated with $\mathcal{G}_{\mathbf{k}}$, the only meaningful kinetic energy is that of the lattice. When calculating the thermalized temperature of the system after the $t^{\text{hop}}_{z}$ ramp in Fig.~\ref{fig:TPSC_local_qties_tp_0_1}, one finds that the variation from the initial temperature ($T=0.2$) is negligible in TPSC. Hence, the thermalized values of the local quantities depicted in Fig.~\ref{fig:TPSC_local_qties_tp_0_1} are those, at equilibrium, of a cubic lattice at $U=2.5$ and $T_{\text{therm}}\simeq0.204$. On the other hand, the thermalized temperature calculated from TPSC+GG would be much higher, namely $T_{\text{therm}}\simeq1.06$. Note that the system also heats up much more in TPSC+GG than in TPSC when ramping the interaction.\cite{https://doi.org/10.48550/arxiv.2205.13813} The thermalized temperature after a lattice hopping ramp is computed in the same way as explained for the $U$-ramps (Eq.~\eqref{eq:ramp_profile_function_erf}), with the exception that the equilibrium reference results are calculated with the post-ramp $t^{\text{hop}}_{z}$ ($U$ is fixed).
\begin{figure}[h!]
\includegraphics[width=1\linewidth]{Locals_dmft_tpsc_3D-crop.pdf}
\caption{Local DMFT+TPSC quantities in the dimensional ramp from $t^{\text{hop}}_z=0$ to $t^{\text{hop}}_z=1$ in the single-band nearest-neighbor Hubbard model for $U=2.5$ at initial temperature $T=0.2$. The charge irreducible vertex (top panel), spin irreducible vertex (second panel from top), $D^{\text{imp}}$ (third panel from top) and $D^{\text{TPSC}}$ (bottom panel) are plotted for a time window of $\Delta t=8$.}
\label{fig:up_quench_loc_dmft_tpsc_tp_0_1_U_2p5}
\end{figure}
The results analogous to Fig.~\ref{fig:TPSC_local_qties_tp_0_1}, but for DMFT+TPSC, are shown in Fig.~\ref{fig:up_quench_loc_dmft_tpsc_tp_0_1_U_2p5}.
The overall trend follows that of Fig.~\ref{fig:TPSC_local_qties_tp_0_1}, in that $\Gamma^{\text{ch}}$ is reduced and $\Gamma^{\text{sp}}$ increased as the dimensionality is increased from 2D to 3D. Also the double occupancy $D^\text{imp}$ increases, although significantly less than what is observed in TPSC (Fig.~\ref{fig:TPSC_local_qties_tp_0_1}), while $D^\text{TPSC}$ even shows a transient reduction. The main qualitative difference for this particular set-up, however, is that the DMFT+TPSC results exhibit prominent humps -- one located at $t\simeq 0.7$ and the other at $t\simeq 1.7$ -- in all the local quantities in Fig.~\ref{fig:up_quench_loc_dmft_tpsc_tp_0_1_U_2p5}, and that the approach to the thermalized state is slower. The lattice hopping ramp stops around the time of the second hump. The minima in the charge vertex correlate with maxima in $\Gamma^{\text{sp}}$ as well as in the double occupancies.
\subsubsection{Momentum-resolved spectra}
Next, the time evolution of the spin and charge susceptibilities is illustrated in Fig.~\ref{fig:lesser_spin_and_charge_sus_time_evolution_2D_3D_TPSC} for the dimensional ramp simulated with TPSC. In this figure, we show the spectra at momentum $\mathbf{k}_{\pi}=(\pi,\pi,\pi)$. The lesser component of the spin susceptibility (top subplot) shows that the peak at $\omega\simeq 0$ melts when going from 2D to 3D, which we attribute to the lower $T_x$ in the 3D system. Since the bandwidth increases, the energy range of the spin and charge excitations also increases. The bottom subplot shows the result for the lesser component of the charge susceptibility. The peak of the charge excitation spectrum is shifted up in energy when going from 2D to 3D and is reduced in height. Furthermore, the peak is broadened in 3D because of the larger bandwidth.
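The bandwidth argument can be made explicit with a small sketch of the time-dependent dispersion during the dimensional ramp. The generic erf shape below mimics the ramp profile of Eq.~\eqref{eq:ramp_profile_function_erf}, but the constants $t_0$ and $\tau$ are illustrative choices rather than the values used in our simulations.
\begin{verbatim}
# Illustrative time-dependent dispersion for the 2D-to-3D hopping
# ramp. The erf shape follows the generic form of the ramps in this
# work; t0 and tau are placeholder parameters.
import numpy as np
from scipy.special import erf

def tz_ramp(t, tz_i=0.0, tz_f=1.0, t0=1.2, tau=0.5):
    """Smooth ramp of the perpendicular hopping from tz_i to tz_f."""
    return tz_i + (tz_f - tz_i) * 0.5 * (1.0 + erf((t - t0) / tau))

def dispersion(kx, ky, kz, t, thop=1.0):
    """Nearest-neighbor dispersion with time-dependent z-hopping."""
    return (-2.0 * thop * (np.cos(kx) + np.cos(ky))
            - 2.0 * tz_ramp(t) * np.cos(kz))

# The bandwidth grows from 8*thop in 2D to 8*thop + 4*tz_f in 3D,
# consistent with the broadening of the excitation spectra noted
# above.
\end{verbatim}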
\begin{figure}[h!]
\begin{minipage}[h]{1.0\linewidth}
\begin{center}
\includegraphics[width=1\linewidth]{run_TPSC_Chisp-crop.pdf}
\end{center}
\end{minipage}
\vspace{0.1 cm}
\begin{minipage}[h]{1.0\linewidth}
\begin{center}
\includegraphics[width=1\linewidth]{run_TPSC_Chich-crop.pdf}
\end{center}
\end{minipage}
\caption{Imaginary parts of the \textit{lesser} component of the spin (top subplot) and charge (bottom subplot) susceptibilities at momentum $\mathbf{k}_{\pi}$, computed with TPSC. The initial temperature is $T=0.2$ and the interaction is $U=2.5$. The inset shows the profile of the perpendicular hopping ramp $t^{\text{hop}}_z$, with the vertical bars representing the times at which the spectra are calculated. The time window for the Fourier transformation is $\Delta t=2.5$.
}
\label{fig:lesser_spin_and_charge_sus_time_evolution_2D_3D_TPSC}
\end{figure}
The $\mathbf{k}$-dependent spectral evolution of the spin and charge susceptibilities obtained with TPSC is displayed in Fig.~\ref{fig:k_resolved_retarded_charge_2_3_D_tpsc},
along the momentum path indicated in the inset ($k_z=\pi$). We plot the change in the spectra during the ramp, defined as $\Delta Q(t_f,t_i;\omega)\equiv Q(t_f;\omega)-Q(t_i;\omega)$. The top panels show the results for the charge susceptibility ($Q=\chi^{\text{ch}}$), while the bottom panels show those for the spin susceptibility ($Q=\chi^{\text{sp}}$). On the left-hand side, the difference $\Delta\chi(t_f,t_i;\omega)$ is plotted for $t_i=0$ and $t_f=1.3$, whereas on the right-hand side $t_i=1.3$ and $t_f=2.4$. The vertical bars in the inset indicate the time snapshots $t_i$ and $t_f$ relative to the ramp profile. One striking feature is the qualitative difference between the left and right panels: much of the change happens in the first half of the ramp, while only small changes occur in the second half. This can be partly explained by the fact that these spectra are computed using a forward Fourier transform defined as
\begin{align}
\label{eq:forward_FFT}
Q^{R,<}(\omega,t^{\prime})=\int_{t^{\prime}}^{t^{\prime}+\Delta t}\mathrm{d}t \ e^{i\omega(t-t^{\prime})} \ Q^{R,<}(t,t^{\prime}),
\end{align}
using a time window $\Delta t$ that is larger than the duration of the ramp; these transforms thus take into account the state after the ramp, even at early times $t^{\prime}$. The two-time quantity $Q$ in Eq.~\eqref{eq:forward_FFT} represents the Green's function or the spin/charge susceptibility. Since the relative weight of the ripples appearing at $\left|\omega\right|\gtrsim 15$ varies strongly with the time window $\Delta t$ used in the forward Fourier transform, we believe that these are artifacts of the Fourier transformation. These ripples, however, appear only in the TPSC simulations. In the case of the charge susceptibility, the excitations are redistributed to larger absolute energies. The same is true for the spin excitation spectra, which in addition exhibit a strong decrease at $\mathbf{k}_{\pi}$, consistent with the top panel of Fig.~\ref{fig:lesser_spin_and_charge_sus_time_evolution_2D_3D_TPSC}.
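A minimal sketch of this windowed forward transform and of the difference spectra $\Delta Q(t_f,t_i;\omega)$ is given below, assuming $Q$ is available as a complex two-time array sampled on an equidistant grid; the array layout and names are illustrative.
\begin{verbatim}
# Sketch of the windowed forward Fourier transform defined above,
# plus the difference spectra used in the figures. Q2t is assumed to
# be a complex two-time array Q[i, j] = Q(t_i, t_j) with time step
# dt; layout and names are illustrative.
import numpy as np
from scipy.integrate import trapezoid

def forward_ft(Q2t, tp_idx, dt, omegas, window=2.5):
    """Integrate Q(t, t') over t in [t', t' + window] at fixed t'."""
    n_win = int(round(window / dt))
    t_rel = dt * np.arange(n_win + 1)         # relative time t - t'
    column = Q2t[tp_idx:tp_idx + n_win + 1, tp_idx]
    return np.array([trapezoid(np.exp(1j * w * t_rel) * column, dx=dt)
                     for w in omegas])

def difference_spectrum(Q2t, ti_idx, tf_idx, dt, omegas, window=2.5):
    """Delta Q(tf, ti; omega) = Q(tf; omega) - Q(ti; omega)."""
    return (forward_ft(Q2t, tf_idx, dt, omegas, window)
            - forward_ft(Q2t, ti_idx, dt, omegas, window))

# Varying `window` changes the relative weight of the high-frequency
# ripples, which is the check used to identify them as transform
# artifacts.
\end{verbatim}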
\begin{figure}[h!]
\includegraphics[width=\linewidth]{les_imag_Umin_2.50_Umax_2.50_tpmin_0.00_tpmax_1.00_3D_up_t1_0.00_t2_1.30_tp_-crop.pdf}
\caption{
Top (Bottom) panels: Difference spectra of the lesser component of the charge (spin) susceptibility after the perpendicular lattice hopping ramp shown in the inset. The black triangle in the inset illustrates the path in reciprocal space -- within the $k_z=\pi$ plane -- along which the spectra are displayed. The times $t_i$ and $t_f$ used in the calculation of the difference spectra are indicated in each panel. The time window used in the Fourier transformation is $\Delta t=2.5$. Each row of panels uses the same color scale. The method used here is TPSC.
}
\label{fig:k_resolved_retarded_charge_2_3_D_tpsc}
\end{figure}
\begin{figure}[h!]
\includegraphics[width=1\linewidth]{les_imag_Umin_2.50_Umax_2.50_tpmin_0.00_tpmax_1.00_3D_up_t1_0.00_t2_1.30_tp_dmft_tpsc-crop.pdf}
\caption{Top (Bottom) panels: Difference spectra of the DMFT+TPSC lesser component of the charge (spin) susceptibility after the perpendicular lattice hopping ramp from $t^{\text{hop}}_z=0$ to $t^{\text{hop}}_z=1$ shown in the inset. The time window employed in the Fourier transformation is $\Delta t=2.5$. Each row of panels uses the same color scale. The initial temperature is $T=0.2$.}
\label{fig:up_quench_time_diff_lesser_susceptibilities_dmft_tpsc_tp_0_1_U_2p5}
\end{figure}
The corresponding data obtained with DMFT+TPSC are shown in Fig.~\ref{fig:up_quench_time_diff_lesser_susceptibilities_dmft_tpsc_tp_0_1_U_2p5} (for the evolution of the local quantities, see Fig.~\ref{fig:up_quench_loc_dmft_tpsc_tp_0_1_U_2p5}). The results obtained from TPSC+GG are quantitatively almost the same (not shown). The time differences $\Delta\chi^{\text{ch/sp},<}(t_f,t_i;\mathbf{k})$ of the \textit{lesser} charge susceptibility (top panels) and spin susceptibility (bottom panels) are shown for $t_i=0$ and $t_f=1.3$ in the left panels and for $t_i=1.3$ and $t_f=2.4$ in the right panels. Similar to the TPSC results in Fig.~\ref{fig:k_resolved_retarded_charge_2_3_D_tpsc}, the dominant changes occur during the first time interval. The DMFT+TPSC results display fewer oscillations in the spectra than the TPSC ones, especially for the charge susceptibility. As in the case of TPSC (Fig.~\ref{fig:k_resolved_retarded_charge_2_3_D_tpsc}), the spin-spin correlations in the vicinity of $\mathbf{k}_{\pi}$ are substantially reduced when going from 2D to 3D, since at fixed $U$ the crossover temperature $T_x$ is reduced (\textit{cf.}~Fig.~\ref{fig:static_spin_sus_fermi_surface_dmft_tpsc_3D}) and the system heats up. In Appendix~\ref{appendice:ch:interaction_quench_comparisons}, we show comparisons between TPSC, TPSC+GG and DMFT+TPSC results for a $U$-ramp from $U=1$ to $U=3$ in the half-filled square-lattice Hubbard model.
\section{Conclusions}
\label{sec:conclusion}
We have detailed the nonequilibrium formulation of TPSC and its variants on the Kadanoff-Baym contour. We also introduced nonequilibrium DMFT+TPSC, which uses the TPSC self-energy to incorporate nonlocal electronic correlations into the DMFT framework in a self-consistent manner or, equivalently, replaces the local component of the TPSC self-energy by its DMFT counterpart. Focusing on the weak-to-intermediate correlation regime, we employed $2^{\text{nd}}$-order or $3^{\text{rd}}$-order IPT to solve the DMFT impurity problem. In equilibrium, our self-consistent version of DMFT+TPSC gives results similar to the non-self-consistent scheme recently introduced in Ref.~\onlinecite{https://doi.org/10.48550/arxiv.2211.01919}.
We have extensively tested the different TPSC variants and provided benchmarks against more sophisticated methods to assess their accuracy. For the 2D Hubbard model, it was demonstrated that the momentum-dependent self-energies of TPSC+GG and DMFT+TPSC match the DiagMC results very well, especially in the case of TPSC+GG. Moreover, it was shown that the growth of the antiferromagnetic correlation length with decreasing temperature is significantly improved in TPSC+GG and DMFT+TPSC compared to OG TPSC, which overestimates the spin correlations.
TPSC and its variants were then tested in nonequilibrium settings, by applying interaction ramps and lattice hopping ramps designed to switch between 2D and 3D lattices. While we lack exact benchmark results in this case, comparisons to established approximate schemes such as $\Sigma^{(2)}$ or DMFT provide useful insights. It turns out that the transient dynamics of the double occupancy is substantially improved in both DMFT+TPSC and TPSC+GG compared to OG TPSC, which produces seemingly unphysical features in the time evolution. DMFT+TPSC yields double occupancies very close to those of DMFT, which shows that for this local quantity the feedback from the nonlocal components of the self-energy has only minor effects. More generally, we found that in the weak-to-intermediate correlation regime, TPSC+GG and DMFT+TPSC lead to very similar results, both for momentum-resolved two-particle and single-particle spectral functions and for time-dependent local two-body quantities.
A conceptual problem of the DMFT+TPSC approach lies in the fact that the double occupancy measured in the impurity problem can deviate from the one estimated from the lattice quantities, thereby creating an ambiguity in the definition of the potential energy. Calculating all the energy contributions from the lattice Green's functions and self-energies, we found that the state after a ramp to intermediate interactions (e.g., $U=3$ in the 2D Hubbard model) is not consistent with a thermalized state, even though the post-ramp evolution of physical observables is almost constant. Since a thermalization bottleneck is not expected at intermediate couplings, this points to a breakdown of the formalism, which may be related to the aforementioned ambiguity in the calculation of the potential energy, the non-conserving nature of the formalism, or the perturbative impurity solver, which becomes unreliable at intermediate $U$. The mismatch between the post-ramp observables and the expected thermalized values is much reduced within TPSC+GG, where it might (at weak coupling) originate from slow thermalization. An attempt to enforce consistency between the impurity and lattice double occupancies within a DMFT+TPSC$\alpha$ scheme resulted in an algorithm that suffers from unstable time propagation on the real-time axis.
In future studies, different avenues to overcome the issues with the effective temperature at intermediate coupling will be investigated. For instance, the spin and charge irreducible vertices could be extracted in the same fashion as discussed in Ref.~\onlinecite{kusunose_influence_2006}, \textit{i.e.} directly from the impurity self-energy, bypassing the two-particle sum rules. More accurate impurity solvers should be employed within DMFT+TPSC to access the intermediate and strong coupling regimes. Furthermore, the consequences of the approximate solution of the BSE (Eq.~\eqref{eq:bethe_Salpeter_eq_approximated}) need to be investigated. For this purpose, nonequilibrium setups in which this approximation can be circumvented, such as nonequilibrium steady-state solutions, are of particular interest.
While this study presented the current status in the development of TPSC-based nonequilibrium methods, and revealed a number of challenges and inconsistencies, it also demonstrated the potential of the TPSC and DMFT+TPSC approaches as promising and computationally efficient methods for accessing the nonequilibrium dynamics of correlated lattice systems. In particular, these approaches enable calculations with self-consistently renormalized spin and charge vertices and full momentum resolution.
\begin{acknowledgments}
The calculations have been performed on the Beo05 cluster at the University of Fribourg. OS and PW acknowledge support from ERC Consolidator Grant No.~724103.
\end{acknowledgments}
\pagebreak